All Posts


I'm not following you... It's a cluster of indexers (4) and a single management node. ...and I'm failing to see how setting the volume size affects the missing queries in the management dashboard. But to answer the questions: of course I did. I have multiple volumes, per best practice.
Hi @bowesmana I ran your example. When the lookup is used in the search, the "behindfirewall" field contains both 1 and 0. So I can use an if condition: if behindfirewall contains 1, then the hostname is behind the firewall, correct? Thanks for your help. (firewall.csv after the lookup)
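A minimal SPL sketch of that condition (the lookup and field names are taken from the thread; the exact column layout of firewall.csv is an assumption):

```
| inputlookup firewall.csv
| eval status=if(behindfirewall==1, "behind firewall", "not behind firewall")
```

The same eval can be appended after the lookup command in the main search to label each host.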
Hi @bowesmana
so you want all values of student-X-Y to be included for each combination of student-X-Y? >> yes, like it is in the expected result
In that case, you don't need the match statement, so what is the issue? >> I figured out after I posted this that I don't need the match statement, but I am curious whether it can also be done using a match statement. So, in this case it won't work using a match statement, correct? Thanks for your help.
Hello, I have Splunk installed on 3 servers (search head, indexer, HF) on Windows Server. I upgraded from 8.2.x to 9.2.1. On the search head and indexer everything is working, including the KV store (it was upgraded to wiredTiger before the upgrade). BUT on the HF the KV store is failing. In the mongod log file I can see:

CONTROL [main] Failed global initialization: InvalidSSLConfiguration: Could not read private key attached to the selected certificate, ensure it exists and check the private key permissions

splunk show kvstore-status --verbose shows:

This member:
backupRestoreStatus : Ready
disabled : 0
featureCompatibilityVersion : An error occurred during the last operation ('getParameter', domain: '15', code: '13053'): No suitable servers found: `serverSelectionTimeoutMS` expired: [Failed to connect to target host: 127.0.0.1:8191]
guid : xxxxxxxxxxxxxxxxxxxx
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger

I tried to delete the server.pem file and also ran splunk clean kvstore --local, but I still get the same error. Commenting out "sslPassword" under the [sslConfig] stanza in server.conf didn't help. The pfx file was added to the Windows certificate store, but I'm not sure it was done the right way. I would be happy for any help.
Hello, has anyone worked with traces (generated with OpenTelemetry) from an application in Splunk Enterprise? I am ingesting this information with OpenTelemetry and would like to make use of it by following the traces. Is there any useful add-on for visualizing this data? Thanks and cheers, Jar
Thanks much for the reply, it works now!
Hi Team, please help me with the steps to enable boot-start of the Splunk forwarder on Oracle Linux 6.x. Splunk forwarder version: 9.0.8. Splunk version: 9.0.5. Regards, Shabana
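For reference, the usual sequence on a SysV-init system such as Oracle Linux 6 is a sketch like the following (the install path /opt/splunkforwarder and the service account splunkfwd are assumptions; adjust to your environment):

```shell
# Stop the forwarder before changing boot-start settings
/opt/splunkforwarder/bin/splunk stop

# Create the /etc/init.d/splunk init script so the forwarder
# starts at boot, running as the given non-root user
/opt/splunkforwarder/bin/splunk enable boot-start -user splunkfwd

# Start the forwarder again
/opt/splunkforwarder/bin/splunk start
```

Run these as root, since enable boot-start writes to /etc/init.d.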
How do I update test_MID_IP.csv with the output IP, so that next time it runs with the updated list?

index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP AS IP OUTPUT test_IP
| eval match=if('IP'==test_IP, "yes", "no")
| search match=no
| stats count by IP
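One way to write the new addresses back is to append them with outputlookup (a sketch based on the search above; it assumes test_IP is the only column in test_MID_IP.csv):

```
index=abc IP!="10.*" [| inputlookup ip_tracking.csv | rename test_DATA AS MID | format ]
| lookup test_MID_IP.csv test_IP AS IP OUTPUT test_IP
| eval match=if(IP==test_IP, "yes", "no")
| search match=no
| stats count by IP
| rename IP AS test_IP
| fields test_IP
| outputlookup append=true test_MID_IP.csv
```

With append=true the existing rows are kept and the new IPs are added, so the next run matches against the updated list.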
Hi, I guess the question I still need answered is: how can I apply a time restriction to the START event, but not the END event? Cheers, David
The transforms that set sourcetypes have a bug. The regex uses a capture group that is not used in the FORMAT statement. When this is the case, Splunk does not return a match on the regex. To get this to work it is necessary to change the regex to a non-capturing group, e.g. for:

[auditdclasses2]
REGEX = type\=(ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::linux:audit:ocsf:finding

the REGEX must be changed to:

REGEX = type\=(?:ANOM_|USER_AVC|AVC|CRYPTO_REPLAY_USER|RESP)

Then it works. The same applies to the other stanzas, auditdclasses1 through 6.
Hi, thank you so much for the suggestion. Is it possible to achieve this with a Splunk search? Due to access limitations, it is expected to be a simple alert configuration. Please share any suggestions for a Splunk query; that would greatly help!
Even after removing the escape characters, I am still getting an error, now "Error in 'EvalCommand': The expression is malformed." Updated query:

strftime($field$ - (strptime(strftime($field$,"%Y-%m-%dT%H:%M:%SZ"),"%Y-%m-%dT%H:%M:%SZ") - strptime(strftime($field$,"%Y-%m-%dT%H:%M:%S"),"%Y-%m-%dT%H:%M:%S")),"$format$")

Also, in the "validation expression" field while creating the macro, I wrote iseval=1.
Hi, how can I collect server logs without installing the Splunk Universal Forwarder? The team that owns the server is not willing to install a UF. Please let me know whether there is any other way to collect the data, and how. Thanks, Karthi
I can successfully send data to Splunk Cloud through the HEC endpoint via a curl command. However, when attempting to send events from Splunk Observability to Splunk Cloud using the Generic Webhook method, it doesn't seem to work.
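For comparison, the kind of curl call that works against HEC looks like this (a sketch; `<stack>` and `<hec-token>` are placeholders for the real Splunk Cloud stack name and HEC token):

```shell
# Send a single test event to the Splunk Cloud HTTP Event Collector.
# -k skips certificate verification; drop it once certs are in order.
curl -k "https://http-inputs-<stack>.splunkcloud.com/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "test event", "sourcetype": "manual"}'
```

Comparing the webhook's outgoing request (URL path, Authorization header, JSON body) against this working call is usually the quickest way to spot what the Generic Webhook is sending differently.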
Hi All, from a Splunk article I know it supports hosting with Docker / Portainer. I would like to check whether Splunk Enterprise officially supports hosting in Kubernetes.
Does Splunk DBConnect support gMSA accounts? If so, when configuring the Splunk Identity, do I leave the password field empty?
For CIM compliance I am trying to fill the action field from some logs using a case statement. This works in search but not as a calculated field; I see some others had similar issues, but there has not been an answer on here. I am on Cloud so cannot directly change the .conf files, but calculated fields have been working fine so far. Simple case statements that do not involve multivalue fields with objects (e.g. category instead of entities{}.remediationStatus) work as expected in calculated fields. The events have a setup similar to this:

{"entities": [{"name": "somename"}, {"name": "other naem", "remediationStatus": "Prevented"}]}

Search (works):

eval action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")

Calculated field (doesn't work):

action=case('entities{}.remediationStatus'=="Prevented", "blocked", 'entities{}.deliveryAction'=="Blocked", "blocked", 'entities{}.deliveryAction'=="DeliveredAsSpam", "blocked", true(), "allowed")
In the GUI, under Data > Data Availability, click the green Base Line Search button; that will generate the lookup. You can then go back to Data Availability and it should display results.
I believe you don't have to escape the double quotes. Check the examples in the docs: https://docs.splunk.com/Documentation/Splunk/9.2.1/admin/macrosconf
Could this contribute to the slow performance when searching for knowledge objects in our deployment? We have over 2000 user directories in $SPLUNK_HOME/etc/users on our SHs, representing every user who has ever existed since we started with Splunk. When we open Settings -> Searches, Reports, and Alerts it can take over a minute to find a search.