All Topics


In my attached picture, these many events should be combined into one event per ID instead of being split up as they are. How can I break those events by ID? The ID is on the first line, like this: "DRCProvision-[1663729240506]"
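A minimal sketch, assuming each event begins with a line matching DRCProvision-[<epoch>] and that this props.conf lands on the parsing tier (indexer or heavy forwarder); the sourcetype name is a placeholder:

    # props.conf -- start a new event before each DRCProvision-[...] marker
    [drc_provision]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=DRCProvision-\[\d+\])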
Hi, I have set up an alert and, under Actions, added 'Add to Triggered Alerts'. I would like to use an API to retrieve the actual results of a specific triggered alert (for example, the results of the alert triggered at 17:43). I am using alerts/fired_alerts/<alert_name>, but it only gives me the trigger history. Is it possible to retrieve the actual results, preferably in JSON?
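A sketch of one approach, assuming each triggered run corresponds to a dispatched search job whose sid can be read from the alert's job history; host, credentials, and the <sid> placeholder are assumptions:

    # list recent runs of the alert, including their search job sids
    curl -k -u admin:changeme "https://localhost:8089/services/saved/searches/<alert_name>/history?output_mode=json"

    # fetch the actual results of one triggered run
    curl -k -u admin:changeme "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"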
Hello Team, I am running the query below to get the stats, but the store numbers come back in lexicographic rather than numeric ("serial") order. Can you help me with the query?

    index=ABC env="XYZ" StoreNumber="*" | sort by StoreNumber | stats count by StoreNumber, country, Application

    Store Number  country  count
    1             US       22
    100           US       7
    100           US       9
    100           US       2
    1000          US       13
    1000          US       10
    1002          US       9
    1002          US       32
    1018          US       22
    1018          US       1
    104           US       3
    104           US       6
    1055          US       9
    1055          US       28
    1081          US       39
    1081          US       38
    1086          US       1
    1086          US       6
    1086          US       1
    109           US       1
    109           US       2
    1094          US       3
    1094          US       9
    11            US       3
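A minimal sketch of the usual fix: sort after the stats with a numeric cast, so StoreNumber is compared as a number rather than a string (sort 0 removes the default row limit):

    index=ABC env="XYZ" StoreNumber="*"
    | stats count by StoreNumber, country, Application
    | sort 0 num(StoreNumber)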
Hi, after upgrading a Splunk 9 Universal Forwarder from Splunk 8.2.4, it reports the error "[app key value store migration collection data is not available]" and Splunk does not start. Why is the UF looking for the KV store at all? Any suggestions to resolve it?
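One commonly suggested workaround, on the assumption that a UF genuinely has no use for the KV store, is to disable it outright before starting the forwarder:

    # $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
    [kvstore]
    disabled = true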
Hi all, I'm trying to create a "Fallback escalation rate" for a chatbot. This rate would be calculated from users who hit the fallback intent and then ask for an agent at any time after that, within a given session. For context, when a user says something, we use an intent classifier to try to match it to an intent. If we can't match the user input to an intent, it hits our fallback intent. And if a user asks for an agent, it hits the followup_live_agent intent. Each session contains multiple events, and each event represents one intent. Today, we calculate "Escalation rate" by counting the sessions with at least one "followup_live_agent" intent. Here's the search query I created for that:

    index=conversation sourcetype=cui-orchestration-log botId=123456
    | eval AgentRequests=if(match(intent, "followup_live_agent"), 1, 0)
    | stats sum(AgentRequests) as Totaled by sessionId
    | eval Cohort=case(Totaled=0, "Cooperated", Totaled>=1, "Escalated")
    | stats count by Cohort
    | eventstats sum(count) as Total
    | eval Agent_Request_Rate = round(count*100/Total,2)."%"
    | fields - Total
    | where Cohort="Escalated"

I need to know how to calculate the same thing, but only counting agent requests that come after the fallback intent is hit. I figure I need to retain the timestamp and do some calculation using that. I'm not even sure how to get started on this, so if anyone could point me in the right direction, that would be really helpful.
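A sketch of one way to order events within a session and flag only agent requests that follow a fallback, assuming the fallback intent's value is literally "fallback" (swap in the real intent name):

    index=conversation sourcetype=cui-orchestration-log botId=123456
    | sort 0 sessionId _time
    | streamstats count(eval(intent="fallback")) as FallbacksSoFar by sessionId
    | eval AfterFallbackEscalation=if(FallbacksSoFar>0 AND intent="followup_live_agent", 1, 0)
    | stats max(AfterFallbackEscalation) as Escalated, max(eval(if(intent="fallback", 1, 0))) as HitFallback by sessionId
    | where HitFallback=1
    | stats sum(Escalated) as EscalatedSessions, count as FallbackSessions
    | eval Fallback_Escalation_Rate = round(EscalatedSessions*100/FallbackSessions, 2)."%"

The streamstats running count makes "after the fallback" concrete: any event in a session where FallbacksSoFar is already above zero happened after the first fallback hit.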
Hello, I have installed the DB Connect add-on. After restarting and logging into the app, it keeps loading indefinitely until the error message appears. I have gone through all the threads related to this error message, but none of them have helped me solve the problem.

    root@myhost:/usr# java -version
    openjdk version "1.8.0_242"
    OpenJDK Runtime Environment (build 1.8.0_242-8u242-b08-1~deb9u1-b08)
    OpenJDK 64-Bit Server VM (build 25.242-b08, mixed mode)

    index=_internal sourcetype=dbx*
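A sketch for narrowing down the failure, filtering that same internal search to errors first:

    index=_internal sourcetype=dbx* (ERROR OR FATAL)
    | table _time, sourcetype, _raw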
Hello All, Can someone help me with the steps to upgrade Splunk Universal Forwarder on Linux machines? Appreciate your help.   Thanks,
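A minimal sketch, assuming a tarball install under /opt/splunkforwarder; the package file name is a placeholder for the version you download:

    # stop the forwarder, unpack the new version over the old one, restart
    /opt/splunkforwarder/bin/splunk stop
    tar -xzf splunkforwarder-<version>-Linux-x86_64.tgz -C /opt
    /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes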
Let's say we have an alert with a few fields, like:

    | search <INSERT_RANDOM_BASE_QUERY>
    | table src_ip, _time, dest_ip
    | rename _time as "Time", src_ip as "Source IP", dest_ip as "Destination IP"

And we want to suppress on "Source IP" and "Destination IP" being the same. Should our suppress fields look like:

    alert.suppress.fields = "Source IP","Destination IP"

Or:

    alert.suppress.fields = Source IP,Destination IP

?
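A sketch of one way to sidestep the quoting question entirely: keep the alert's result fields space-free (do the rename only where the results are displayed) and suppress on the original names in savedsearches.conf:

    # suppress on field names that need no quoting
    alert.suppress = 1
    alert.suppress.fields = src_ip,dest_ip
    alert.suppress.period = 24h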
Long story short, I had been indexing my own data for years and recently started forwarding upstream to another cluster. I don't need to index on my network anymore and just want my indexer to serve as a heavy forwarder so I don't have to reconfigure 600+ endpoints. Is this feasible, or will I break lots of things? Thanks!
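A minimal sketch of the outputs.conf for that role change, assuming the upstream indexers listen on 9997; host names are placeholders. Setting indexAndForward to false stops writing to the local indexes:

    [tcpout]
    defaultGroup = upstream
    indexAndForward = false

    [tcpout:upstream]
    server = upstream-idx1.example.com:9997, upstream-idx2.example.com:9997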
Does anyone have steps for troubleshooting parse-time or index-time issues? The use case is a sourcetype override, and sending things to nullQueue to filter. The reason for asking is that I didn't see anything obvious in the internal logs or in my searches. Any tips would help. Thanks in advance.
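Two sketches of starting points: verify which props/transforms settings actually apply with btool, and scan splunkd's internal log for parse-time complaints; <your_sourcetype> is a placeholder:

    $SPLUNK_HOME/bin/splunk btool props list <your_sourcetype> --debug

    index=_internal source=*splunkd.log* (ERROR OR WARN)
    | stats count by component, log_level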
I'm not sure what I've done. I'm getting an error when trying to use the Webtools curl add-on that I'm not getting from Postman or PowerShell:

    "https://<myhost>.splunkcloud.com:8089/services/server/introspection/kvstore/serverstatus"

    HTTPSConnectionPool(host='<myhost>.splunkcloud.com', port=8089): Max retries exceeded with url:
    /services/server/introspection/kvstore/serverstatus (Caused by
    NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7fb7520b0990>: Failed to
    establish a new connection: [Errno 110] Connection timed out'))

Are any internal settings within Splunk Web likely to have that effect?
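A sketch for isolating the layer at fault: since Errno 110 is a network-level timeout rather than an HTTP error, running the same request from the search head's own shell shows whether that host can reach port 8089 at all, independent of any Splunk Web setting:

    curl -k -u <user> "https://<myhost>.splunkcloud.com:8089/services/server/introspection/kvstore/serverstatus"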
Does anyone know if it's possible to rename an HEC token, or do you have to create a new one and update the token everywhere? It seems like renaming should be an option under Edit, but I'm not seeing anything. Thanks,
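A sketch, assuming Splunk Enterprise, where HEC tokens live in inputs.conf on the HEC node: the display name is just the stanza name, while the token GUID is its own key, so renaming the stanza should keep the existing token value (Splunk Cloud, where inputs.conf isn't hand-editable, is a different story):

    # before:
    [http://old_name]
    token = <existing-guid>

    # after (same token, new display name):
    [http://new_name]
    token = <existing-guid>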
Similar to some other existing community posts, I am having issues sending POST requests to the https://.../services/collector/event endpoint of my Splunk Enterprise server running on AWS, after following the Splunk guides on creating and using a self-signed SSL certificate. Using -k in curl to skip verification works, but including --cacert myselfsignedca does not. I've gone further and even added relevant x509 extensions like SANs, with no success. The result from curl:

    ...
    * successfully set certificate verify locations:
    *   CAfile: ./splunkCA.pem
        CApath: /etc/ssl/certs
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    * TLSv1.3 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (OUT), TLS alert, Server hello (2):
    * SSL certificate problem: self signed certificate in certificate chain
    * stopped the pause stream!
    * Closing connection 0
    curl: (60) SSL certificate problem: self signed certificate in certificate chain
    More details here: https://curl.haxx.se/docs/sslcerts.html
    curl failed to verify the legitimacy of the server and therefore could not
    establish a secure connection to it. To learn more about this situation and
    how to fix it, please visit the web page mentioned above.
    ...

Any help is appreciated!
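A sketch of the usual cause: "self signed certificate in certificate chain" means the file passed to --cacert doesn't contain the root that actually signed the chain HEC is serving. Assuming a self-signed CA, HEC should present the full chain and --cacert should point at the root rather than the leaf; file names below are placeholders:

    # build a full-chain PEM for HEC to present
    cat server.pem rootCA.pem > hec_chain.pem

    # inputs.conf on the HEC instance
    [http]
    enableSSL = 1
    serverCert = /opt/splunk/etc/auth/mycerts/hec_chain.pem

    # then verify against the root CA, not the leaf
    curl --cacert rootCA.pem "https://<host>:8088/services/collector/event" ...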
Hey Splunkers! When an error occurs during the integration process, will it be recorded in the _internal index? Are data onboarding / data parsing errors recorded in the _internal index? If so, a logical SPL query to troubleshoot those errors would be welcome. What kinds of integration errors are recorded in the _internal index?
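A sketch of a starting search: splunkd's parse-time complaints (line breaking, timestamp recognition, event aggregation) do land in _internal; the components named here are the usual suspects, not an exhaustive list:

    index=_internal source=*splunkd.log* log_level IN (WARN, ERROR)
        component IN (AggregatorMiningProcessor, LineBreakingProcessor, DateParserVerbose)
    | stats count by component, log_level, host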
According to my tests, the Authorization header should not have a space between the colon and the Splunk keyword. It should be "Authorization:Splunk ###-####..." and not "Authorization:  Splunk ###-####..." (https://docs.splunk.com/Documentation/Splunk/9.0.1/Data/FormateventsforHTTPEventCollector). In other words, this works:

    curl -k https://prd-p.splunkcloud.com:8088/services/collector -H "Authorization:Splunk ###-######" -d "{\"sourcetype\":\"_json\",\"index\": \"job1\",\"event\": {\"a\": \"value1\", \"b\": [\"value1\", \"value1\"]}}"

Whereas this does not:

    curl -k https://prd-p.splunkcloud.com:8088/services/collector -H "Authorization: Splunk ###-######-b680-72c7bd33f9bb" -d "{\"sourcetype\":\"_json\",\"index\": \"job1\",\"event\": {\"a\": \"value1\", \"b\": [\"value1\", \"value1\"]}}"
I want to create a subsearch based on a field from the parent search. I want to show only rows from cor_inbox_entry whose fullBodID includes keys.OrderID (keys.OrderID is a substring of fullBodID).

Example fullBodID: infor-nid:infor:111:APRD00908_2022-09-06T12:01:26Z:?ProductionOrder&verb=Process&event=10545
Example keys.OrderID: APRD00908

    index=elbit_im sourcetype=cor_inbox_entry
    | spath input=C_XML output=bod path=ConfirmBOD.DataArea.BOD
    | xpath outfield=fullBodID field=bod "//NameValue[@name='MessageId']"
    | appendpipe
        [ search "metadata.Composite"=ReportOPMes2LN
        | search fullBodID = "*".keys.OrderID."*" ]
    | table _time, fullBodID

Any idea?
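A sketch of one alternative, with two assumptions flagged: that the order ID sits between the last ':' and the '_' in fullBodID (the rex below encodes that), and that the keys.OrderID values come from the ReportOPMes2LN events. The subsearch returns OrderID=<value> pairs that then filter the parent search:

    index=elbit_im sourcetype=cor_inbox_entry
    | spath input=C_XML output=bod path=ConfirmBOD.DataArea.BOD
    | xpath outfield=fullBodID field=bod "//NameValue[@name='MessageId']"
    | rex field=fullBodID ":(?<OrderID>[^:_]+)_"
    | search
        [ search index=elbit_im "metadata.Composite"=ReportOPMes2LN
        | rename keys.OrderID as OrderID
        | dedup OrderID
        | fields OrderID ]
    | table _time, fullBodID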
Hi, the initial situation is the following: I have an all-in-one instance that simultaneously takes on the role of the DS, and a UF that sends its data to the AiO. The required stanzas were distributed as a separate app, in addition to the Linux TA, via the DS. Scripted inputs from the TA, like "vmstat.sh" or "netstat.sh", can be browsed on the AiO and work so far. In the next step I wanted to activate the "cpu_metric.sh" stanza and proceeded like this:

1. I created a metric index on the AiO, called "linux_metrics".
2. I configured the inputs.conf under "deployment-apps" on the AiO and enabled the stanza. This config was pushed to the UF, or rather pulled by the UF. Config:

    [script:///opt/splunkforwarder/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh]
    interval = 30
    disabled = 0
    index = linux_metrics

Unfortunately, no data arrived in my metric index. "For fun" I tried the same procedure with other metric stanzas, which then immediately passed their data to the dedicated indexer. Standard solutions, like installing sysstat, have already been tried. Maybe one of you can think of something else. Thanks in advance.
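A sketch of where to look next: scripted inputs log their execution errors under the ExecProcessor component, so searching _internal for the script's name should show whether cpu_metric.sh runs on the UF and what it prints on stderr:

    index=_internal source=*splunkd.log* component=ExecProcessor cpu_metric.sh
    | table _time, host, _raw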
Hi All, I am eager to know which would be the preferred monitoring tool, and which is relatively cost-effective: Splunk or Microsoft Sentinel. If there are any comparison documents or supporting documents, kindly share the links.
Hi everybody, I need your assistance, in case you have encountered this problem: I want to mask a particular field before it is processed by the Splunk indexers. I need to mask this field because the data will be transferred via a universal forwarder (an externally installed agent). Regards, Amira
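A minimal sketch, assuming the sensitive value looks like ssn=<digits>; the sourcetype name and regex are placeholders. Note the assumption that parsing happens on an indexer or heavy forwarder: a UF forwards raw data without applying SEDCMD unless force_local_processing is enabled for the sourcetype:

    # props.conf on the parsing tier
    [my_sourcetype]
    SEDCMD-mask_ssn = s/ssn=\d{3}-\d{2}-\d{4}/ssn=XXX-XX-XXXX/g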
Hello everyone, I am trying to create a REST API add-on with Splunk to download a .csv file from a Confluence page and push the information to Splunk. I would like to have a table with the content of the .csv file, but so far I have only been able to get one event of 130 lines in the index and sourcetype defined by the add-on. It would help if I could split at the end of every line. I've tried to define the sourcetype in props.conf with a line breaker and some other regex params, and at this point I don't know how to move on. Data sample:

    1   UserID,Profile,Role1,Role2,Licenses,Programs,Domain,FieldX
    2   user139,Profile description,Role 1,Role 2,"lic1,lic2,lic3,lic4,lic5,lic6",protram,domain,extra_field
    .
    .
    130 user139,Profile description,Role 1,Role 2,"lic1,lic2,lic3,lic4,lic5,lic6",protram,domain,extra_field

I've tried to reproduce the content as best I could, including spaces. The content between the quotes all belongs to the "Licenses" header.
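A minimal sketch of a props.conf for this, assuming the sourcetype name matches the one the add-on writes and that the data still passes through the parsing pipeline unstructured; LINE_BREAKER splits on newlines, and INDEXED_EXTRACTIONS reads the header row so the quoted commas inside Licenses stay intact:

    [confluence_csv]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)
    INDEXED_EXTRACTIONS = csv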