All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello Team, we are trying to integrate a SQL database using the Splunk DB Connect add-on and we are getting the error below. Is MS SQL 2012 compatible with the following DB Connect and Splunk versions? Splunk DB Connect version: 3.5.1 Build: 4 Splunk Enterprise: 8.1.7.2 DB version: Microsoft SQL Server 2012 ERROR: The driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption. Error: "Certificates do not conform to algorithm constraints". ClientConnectionId:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
Hi all, I want to get the syslog events of my VMware ESXi hosts (free hypervisor) into my Splunk Enterprise (free edition). I set up the ESXi hosts and installed the "Add-on for VMware ESXi Logs" (Splunk_TA_esxilogs 4.2.1). When I search for the IP address of a host, I only see events with the sourcetype "vmware:esxlog:Rhttpproxy", even though I'm not filtering the search on this sourcetype. And these events aren't the same as what I see in the syslog file on the ESXi hosts. When searching only for "vmware" I see more sourcetypes, but again, I don't see all events. The sourcetype "syslog" is bound to my Sophos UTM firewall. I want to get the smartd events of the ESXi hosts to see whether my SATA drives are OK. There are events in the syslog file on the ESXi host, but I don't see them in Splunk. Any ideas how to see the events of the syslog file of the ESXi hosts in Splunk? Thank you and kind regards.
Hello, I can't find the authorize.conf file and I don't know why. I followed the link "How to deploy the Splunk App for Windows Infrastructure - Splunk Documentation". I also had these two messages, and I have read several articles without finding how to fix them: Received event for unconfigured/disabled/deleted index=perfmon with source="source::PerfmonMk:Network_Interface" host="host::SRV-DC-02" sourcetype="sourcetype::PerfmonMk:Network_Interface". So far received events from 1 missing index(es). Eventtype 'wineventlog_security' does not exist or is disabled.
Dear Splunkers, we are trying to build a baseline of login events, using this example (the search is at the end of the post). The problem we are facing is that no outlier events are detected. We are using the CERT Insider Threat Dataset r4.2. No matter how much we change the number of standard deviations, it never classifies an event as an outlier. Maybe it doesn't work because there are multiple logins per user per day. How could we change it so that it only uses the first login event per user per day? Does anyone have an idea what we can try? Thank you in advance. activity=Logon | eventstats avg("_time") AS avg stdev("_time") AS stdev | eval lowerBound=(avg-stdev*exact(2)), upperBound=(avg+stdev*exact(2)) | eval isOutlier=if('_time' < lowerBound OR '_time' > upperBound, 1, 0) | table _time isOutlier
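The logic of the search above can be sketched outside Splunk. The Python below (illustrative only, with made-up users and timestamps) keeps the first login per user per day and applies the same avg ± 2·stdev bounds as the SPL. It also illustrates one reason small baselines yield no outliers: with only five distinct points, no value can mathematically fall more than two sample standard deviations from the mean.

```python
import statistics
from datetime import datetime, timezone

# Hypothetical login events: (user, epoch seconds). Names and times are invented.
events = [
    ("alice", 1645686000), ("alice", 1645689600),  # two logins on the same day
    ("bob",   1645686300),
    ("alice", 1645772400),
    ("bob",   1645858800),
    ("eve",   1645600000),
]

# Keep only the first login per user per day (the deduplication the poster asks about).
first_per_day = {}
for user, ts in sorted(events, key=lambda e: e[1]):
    day = datetime.fromtimestamp(ts, tz=timezone.utc).date()
    first_per_day.setdefault((user, day), ts)

times = list(first_per_day.values())
avg = statistics.mean(times)
stdev = statistics.stdev(times)

# Same bounds as the SPL: avg +/- 2 * stdev.
lower, upper = avg - 2 * stdev, avg + 2 * stdev
outliers = [t for t in times if t < lower or t > upper]
```

In SPL, one hedged way to get "first login per user per day" before the eventstats would be along the lines of `| bin _time span=1d as day | stats min(_time) as _time by user, day`; treat that as a starting point rather than a tested answer.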
Hi, on one of the old UFs, the fishbucket has occupied the complete disk space and the service has stopped. Will deleting the fishbucket cause the forwarder to resend all the old data that has already been indexed?
Hello, we are on Splunk 6.5.1 (same version for the forwarder; unfortunately we can't upgrade at the moment). We installed the forwarder on a Windows machine, and we configured deploymentclient.conf to talk to the deployment server, like this: [target-broker:deploymentServer] targetUri = deployment.ourdomain.ext:80 In the forwarder logs, we see this error showing up: 02-24-2022 12:19:54.474 +0100 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected The communication with deployment.ourdomain.ext seems to be working (telnet works; the DNS is translating calls to port 80 to port 8089 of the deployment server). Why is the forwarder giving that error? We restarted it many times, but with no result. Thanks
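For reference, a minimal deploymentclient.conf sketch, assuming the client can reach the deployment server's management port (8089 by default) directly; the hostname is the poster's own:

```ini
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[target-broker:deploymentServer]
# targetUri normally points at the deployment server's management port.
targetUri = deployment.ourdomain.ext:8089
```

The `err=not_connected` retries suggest the handshake on the management port is not completing; whether the port-80-to-8089 translation in front of it preserves that connection is worth verifying independently of DNS.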
name uuid sysfs size dm-st paths failures action path_faults vend prod rev
mpatha 360002ac000000000000010e30001c751 dm-1 120G active 4 0 0 3PARdata VV 3315
mpathb 360002ac000000000000010fb0001c751 dm-0 240G active 4 0 0 3PARdata VV 3315
The above is my multiline event in table format. I need to extract the following values (mpath, uuid):
mpatha 360002ac000000000000010e30001c751
mpathb 360002ac000000000000010fb0001c751
Please help me; I'm new to this. Thank you so much.
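The two wanted values are the first two whitespace-separated tokens of each mpath line, so a multiline regex can pull them out. A minimal Python sketch (the sample text is the poster's output):

```python
import re

raw = """name uuid sysfs size dm-st paths failures action path_faults vend prod rev
mpatha 360002ac000000000000010e30001c751 dm-1 120G active 4 0 0 3PARdata VV 3315
mpathb 360002ac000000000000010fb0001c751 dm-0 240G active 4 0 0 3PARdata VV 3315"""

# Capture the alias (mpath...) and the WWID token that follows it on each line;
# the header line starts with "name" and is therefore skipped.
pairs = re.findall(r"^(mpath\w+)\s+(\S+)", raw, flags=re.MULTILINE)
```

The same idea in SPL would be along the lines of `| rex max_match=0 "(?m)^(?<mpath>mpath\w+)\s+(?<uuid>\S+)"`, which extracts every alias/uuid pair as multivalue fields (a hedged suggestion, not tested against the poster's events).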
Is it possible to restrict a specific role from searching index=*, but still allow it access to a specific dashboard?
2022-02-23 11:02:55 Cust=A1,txn_num=12,txn_status=success
2022-02-23 11:02:55 Cust=A1,txn_num=7,txn_status=failed
2022-02-23 11:02:55 Cust=A1,txn_num=18,txn_status=awaiting
2022-02-23 11:04:55 Cust=A2,txn_num=13,txn_status=success
2022-02-23 11:04:55 Cust=A2,txn_num=18,txn_status=failed
2022-02-23 11:04:55 Cust=A2,txn_num=26,txn_status=awaiting
2022-02-23 11:06:55 Cust=A3,txn_num=12,txn_status=success
2022-02-23 11:06:55 Cust=A3,txn_num=7,txn_status=failed
2022-02-23 11:06:55 Cust=A3,txn_num=18,txn_status=awaiting
2022-02-23 11:15:55 Cust=A4,txn_num=13,txn_status=success
2022-02-23 11:15:55 Cust=A4,txn_num=18,txn_status=failed
2022-02-23 11:15:55 Cust=A4,txn_num=26,txn_status=awaiting
2022-02-23 11:25:55 Cust=A5,txn_num=12,txn_status=success
2022-02-23 11:25:55 Cust=A5,txn_num=7,txn_status=failed
2022-02-23 11:25:55 Cust=A5,txn_num=18,txn_status=awaiting
2022-02-23 11:30:55 Cust=A6,txn_num=13,txn_status=success
2022-02-23 11:30:55 Cust=A6,txn_num=18,txn_status=failed
2022-02-23 11:30:55 Cust=A6,txn_num=18,txn_status=awaiting
What I'm trying to achieve is to create a bar chart: for each cust I need the status for the timeframe. I would then divide the custs with trellis, but for each cust I can't get the bar height based on txn_num, split by status. Something like below:
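The shape the chart needs is one row per cust with one column per txn_status, where the value is txn_num. A minimal Python sketch of that pivot, using a few of the sample events:

```python
import re
from collections import defaultdict

lines = [
    "2022-02-23 11:02:55 Cust=A1,txn_num=12,txn_status=success",
    "2022-02-23 11:02:55 Cust=A1,txn_num=7,txn_status=failed",
    "2022-02-23 11:02:55 Cust=A1,txn_num=18,txn_status=awaiting",
    "2022-02-23 11:04:55 Cust=A2,txn_num=13,txn_status=success",
]

# Pivot: one row per customer, one bar per status, bar height = txn_num.
chart = defaultdict(dict)
for line in lines:
    m = re.search(r"Cust=(\w+),txn_num=(\d+),txn_status=(\w+)", line)
    cust, num, status = m.group(1), int(m.group(2)), m.group(3)
    chart[cust][status] = num
```

In SPL, something along the lines of `... | chart max(txn_num) over Cust by txn_status` should produce the same table, which a bar chart with trellis split by Cust can then render (a hedged suggestion, not tested against the poster's data).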
Hi all, I'm using the syndication component (latest version) to fetch data from multiple feeds: https://www.cloudflarestatus.com/history.atom https://cloud.ibm.com/status/api/notifications/feed.rss https://status.aws.amazon.com/rss/all.rss https://status.cloud.google.com/feed.atom https://ocistatus.oraclecloud.com/history.rss After adding these entries, the events have started to repeat every time each feed is processed (every 5 minutes); that is, it re-indexes the entire set of events every 5 minutes for each feed. The option to only take new events into account is enabled. When I set up one feed, for example the Google feed with 3 events, then after 5 minutes, if I run: index=gcc_extension_1 source = syndication://google_gcc_ext | stats count values(host) values(source) values(sourcetype) values(index) by _raw | WHERE count>0 there are 6 results. Note that it is not the entire _raw that is repeated, since _indextime is different each time the feed is processed. I've been researching and doing all kinds of tests for a long time, but I don't know what the problem could be. If anyone could help me out a bit with this, I'd really appreciate it. Aside from screenshots, I can provide configuration as needed. Thank you very much in advance.
Hi everyone, I created a dashboard with the choroplethSVG function. However, for my use case I cannot use the upload function for SVGs; I have to use the link function. I tried to use a link to Google Drive to insert the SVG data, but the result is an error message: `TypeError: Failed to fetch`. I often use links referring to PNG data in Drive and it always works. I realized that a link to a PNG leads to opening the PNG, while a link referring to an SVG in Drive opens a new tab and downloads the data instead of displaying it. Is there any way to make use of this function, or does Splunk not yet support the link function for SVGs?
I am registering an app in Azure AD to use the Microsoft 365 App for Splunk. When I registered the app, I added the Office 365 Management API to the API permissions. However, the permissions that are actually displayed are different from the ones shown in the steps described at the reference URL. Is it possible for Splunk to get logs from Microsoft 365 without "ActivityReports" and "ThreatIntelligence" in the permissions? Reference URL: https://www.splunk.com/ja_jp/blog/it/set-up-guide-microsoft365-vol1.html
Hi, I'm new to Splunk and I was trying to compare values in the same field and group them accordingly. The events had different client transaction ids, pp_account_numbers, and corrids, so I had to remove those before comparing and grouping. I used | stats ... by and it didn't get me the results: there were results that looked the same but were not grouped together. Below is my query. I also went on to remove spaces so that it would group better, but that didn't work either. (index=pp_cal_live_logs_failure_services OR index=pp_cal_live_logs_success_sampling OR index=pp_cal_live_logs_allowlist)(machineColo="*") source IN ("riskexternalgateway") | eval corrId=corr_id | fields "corrId" , "calName" , "calMessage" | where (match(calName,"Monitor_Vendor_Service_Call") AND match(calMessage,"usecase_name=US_CIPACHFunding&VReq[a-z]*")) | eval calMessage= replace(calMessage, " ", "") | eval calMessage = replace(calMessage, "<client_transaction_id>.*</client_transaction_id>" ," ") | eval calMessage = replace(calMessage, "<pp_account_number>.*</pp_account_number>" ," ") | eval calMessage = replace(calMessage, "corr_id_=.*" ,"") | stats by calMessage
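The grouping only works if every per-event identifier is stripped before comparison. A minimal Python sketch of the normalize-then-group idea (the messages are invented stand-ins for the poster's calMessage values):

```python
import re
from collections import Counter

# Illustrative calMessage values; real ones come from the poster's indexes.
messages = [
    "usecase <client_transaction_id>111</client_transaction_id> err=42 ",
    "usecase <client_transaction_id>222</client_transaction_id> err=42",
    "usecase <client_transaction_id>333</client_transaction_id> err=7",
]

def normalize(msg):
    # Strip the per-event identifier first, then remove spaces, so that
    # messages differing only in that field collapse into one group.
    msg = re.sub(r"<client_transaction_id>.*?</client_transaction_id>", "", msg)
    return msg.replace(" ", "")

groups = Counter(normalize(m) for m in messages)
```

One thing worth checking in the original SPL: `.*` is greedy, so a pattern like `<client_transaction_id>.*</client_transaction_id>` can swallow everything between the first opening tag and the last closing tag if a message contains that tag more than once; a non-greedy `.*?` avoids that.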
I need to extract fields from this JSON:
{
  "AAA": {
    "modified_files": [
      "\"b/C:\\\\/HEAD\"",
      "\"b/C:\\\\/dev\"",
      "\"b/C:\\\\HEAD\""
    ]
  },
  "BBB": {
    "modified_files": [
      "\"b/C:\\\\/HEAD\"",
      "\"b/C:\\\\/dev\"",
      "\"b/C:\\\\HEAD\""
    ]
  }
}
Expected output: AAA and BBB are the application names, e.g. Application: AAA. Thanks in advance.
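If the event is valid JSON, the application names are simply the top-level keys. A minimal Python sketch (with shortened file lists for readability):

```python
import json

# Shortened stand-in for the poster's event; the structure is what matters.
raw = '{"AAA": {"modified_files": ["a"]}, "BBB": {"modified_files": ["b"]}}'

data = json.loads(raw)
# The application names are the top-level object keys.
applications = sorted(data.keys())
```

In Splunk itself, `| spath` extracts the nested paths, and one hedged way to get just the key names is a rex over _raw such as `max_match=0 "\"(?<Application>\w+)\":\s*\{"`; treat that as a starting point rather than a tested answer.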
Hi all, how do we know whether the typing queue is blocked or not? Is it from the internal logs? From the backend of the server, is it possible to find the queue blocks?
Hi all, I have this URL/API endpoint: http://xml.app.com/pay/ent/auth/service/getId. I want to extract getId for the index that has the field name 'end_points' and create a table for that field that only displays the text 'getId' rather than the entire URL. How do I do this using regex in Splunk? I tried something like this: rex "^http(s)?:\W+\w+\.\w+\.com\W\w+\W\w+\W\w+\W\w+\W(?<end_points>)" | table end_points I started learning Splunk only a few days ago, so I'm new to this. Any help would be appreciated. Thanks.
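One reason the attempt above returns nothing useful is that `(?<end_points>)` has no pattern inside the capture group, so it matches an empty string. Grabbing everything after the last slash is simpler; a minimal Python sketch:

```python
import re

url = "http://xml.app.com/pay/ent/auth/service/getId"

# Capture everything after the final slash; the named group mirrors the SPL field.
m = re.search(r"/(?P<end_points>[^/]+)$", url)
end_points = m.group("end_points")
```

The equivalent SPL should be something like `| rex field=_raw "\/(?<end_points>[^\/]+)$"` (hedged; adjust `field=` to wherever the URL actually lives).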
I am trying to set up the Splunk Add-on for Microsoft Office 365 by referring to the following URL. I'm trying to set up the "Service Status" and "Service Message" inputs, but they don't appear in the menu. If anyone knows the reason, please let me know. Reference URL: https://www.splunk.com/ja_jp/blog/it/set-up-guide-microsoft365-vol2.html
As the title says, I have a list of subnets and I would like to create a search which shows traffic (using Palo logs) passing through those subnets, while still showing the subnets that had no traffic. I am using the query below, but it doesn't return results for the subnets with 0 traffic. If anyone can help with a better version of this query, with a lookup and possibly using data models, that would be great. index=palo sourcetype=pan:log | eval stan=case(cidrmatch("10.0.0.0/24",src),"stanA", cidrmatch("10.0.1.0/24",src),"stanB", cidrmatch("10.0.2.0/24",src),"stanC") | stats count by stan
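Zero-traffic subnets disappear because stats only counts events that exist; a subnet with no events never produces a row. The fix is to seed every subnet with zero, which can be sketched in Python (the IPs are invented):

```python
import ipaddress
from collections import Counter

# The subnet-to-name mapping acts like a lookup table; the names are the poster's.
subnets = {
    "10.0.0.0/24": "stanA",
    "10.0.1.0/24": "stanB",
    "10.0.2.0/24": "stanC",
}

# Illustrative source IPs seen in the firewall logs.
src_ips = ["10.0.0.5", "10.0.0.9", "10.0.2.1"]

# Start every subnet at zero so silent subnets still appear in the output.
counts = Counter({name: 0 for name in subnets.values()})
for ip in src_ips:
    addr = ipaddress.ip_address(ip)
    for cidr, name in subnets.items():
        if addr in ipaddress.ip_network(cidr):
            counts[name] += 1
            break
```

In SPL, a common pattern is to append the full subnet list back in, e.g. `... | stats count by stan | append [| inputlookup subnet_lookup | eval count=0] | stats max(count) as count by stan` (hedged, assuming a lookup file that lists every stan).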
I am using the Splunk Add-on for AWS with a generic S3 data input. I am unable to get the data into Splunk. I get this in splunk_ta_aws_generic_s3_<data_input_name>.log: phase="fetch_key" | message="Failed to get object." key="filename.json" Any help would be appreciated.
I know this may be backward, but do we have the ability to create an alert if data ingestion fails, so I can know about it ahead of time?