All Posts


Thanks, let me give that a go in the overall solution; it looks very promising.
Here is a screenshot of the source type:  
That's a Splunk app that's no longer supported and has been archived.  It was never supported on Splunk Cloud. You can try uploading it as a custom app, but Splunk may recognize it and reject it.  If that happens then you'll need to rename the app, making sure to change the directory name and update app.conf.  Even then there's no guarantee it will pass vetting.
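If renaming does become necessary, the pieces that usually need to stay in sync are the app directory name and the id/label in app.conf. A minimal sketch (the new name is purely illustrative):

# $SPLUNK_HOME/etc/apps/my_renamed_app/default/app.conf
[package]
id = my_renamed_app

[ui]
label = My Renamed App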
I feed data to Splunk using the HTTP Event Collector. Sample event:

{
  "event": {
    "event_id": "58512040",
    "event_name": "Access Granted",
    ...
    "event_local_time_with_offset": "2025-07-09T14:46:28+00:00"
  },
  "sourcetype": "BBL_splunk_pacs"
}

I set up the sourcetype BBL_splunk_pacs (see screenshot below). When I search for the events, I see two issues:
1. _time is not parsed correctly from event_local_time_with_offset.
2. Most of the time, seemingly at random, all event fields are duplicated; sometimes they are not.

Any idea what I may be doing wrong? Thank you.
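For reference, a common way to control _time with the HEC /services/collector/event endpoint is to send an explicit epoch time in the envelope, since index-time timestamp extraction generally does not apply to events sent to that endpoint. A sketch of the payload (the epoch value shown corresponds to the sample timestamp above; all other values are illustrative):

{
  "time": 1752072388,
  "event": {
    "event_id": "58512040",
    "event_name": "Access Granted",
    "event_local_time_with_offset": "2025-07-09T14:46:28+00:00"
  },
  "sourcetype": "BBL_splunk_pacs"
}

As for the duplicated fields, one frequent cause is extracting the JSON twice, for example indexed extractions on the sourcetype combined with search-time KV_MODE = json, so it may be worth checking the sourcetype configuration for that.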
Thanks @ITWhisperer 
I am trying to extract multiple metrics at once using a SignalFlow query, but I am not sure if this is supported or just undocumented.

One metric works fine:
| sim flow query=" data('k8s.hpa.current_replicas', filter............"

Wildcard matching works fine too:
| sim flow query=" data('k8s.hpa*', filter............"

But I have not been able to extract multiple named metrics (not wildcarded). Something like this does not work:
| sim flow query=" data('k8s.hpa.current_replicas k8s.hpa.max_replicas', filter............"

Any ideas on how to get this to work?
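One approach that may be worth trying is to publish each metric as its own stream inside a single SignalFlow program, rather than listing both names in one data() call. This is only a sketch: the filter dimension ('kubernetes_cluster') and its value are placeholders, and whether | sim flow returns every published stream may depend on the add-on version:

| sim flow query="
    current = data('k8s.hpa.current_replicas', filter=filter('kubernetes_cluster', 'my-cluster')).publish(label='current_replicas')
    maximum = data('k8s.hpa.max_replicas', filter=filter('kubernetes_cluster', 'my-cluster')).publish(label='max_replicas')
"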
| makeresults count=3
| streamstats count
| eval _time=_time + count*3600
| eval asset_names="Asset1 Asset2 Asset3"
| makemv delim=" " asset_names
| eval asset_names=mvindex(asset_names, count-1)
Hi, I'm trying to transfer this app from Splunk Enterprise to Splunk Cloud. Is there a way to do that?
Hello guys,

Here is the current situation:

Below is what I'd like to achieve:

I've tried the following with no success:

Can anyone help me achieve my goal? Thanks in advance.
Use spath and mvexpand:

| spath path=audit_details.audit.details.detail{}
| mvexpand audit_details.audit.details.detail{}
| spath input=audit_details.audit.details.detail{}
| fields - audit_details.audit.details.detail{}*

This would give:

app_name   audit_details.audit.name   audit_details.audit.responseContentLength   messageId   ordinal   time
my_app     app_name                   -1                                          -4          0         1752065281146
my_app     app_name                   -1                                          7103        1         1752065281146
my_app     app_name                   -1                                          7101        2         1752065281146

Here is an emulation for you to play with and compare with real data:

| makeresults
| fields - _time
| eval _raw = "{ \"app_name\": \"my_app\", \"audit_details\": { \"audit\": { \"responseContentLength\": \"-1\", \"name\": \"app_name\", \"details\": { \"detail\": [{ \"messageId\": \"-4\", \"time\": \"1752065281146\", \"ordinal\": \"0\" }, { \"messageId\": \"7103\", \"time\": \"1752065281146\", \"ordinal\": \"1\" }, { \"messageId\": \"7101\", \"time\": \"1752065281146\", \"ordinal\": \"2\" } ] } } } }"
| spath
``` data emulation above ```
@gcusello It turns out there was different casing between the sourcetype name in the lookup file and in Splunk. That was causing the duplicates in the search you gave me. The final version of the query that worked exactly as desired was this (once I made the sourcetype match exactly in the lookup file and in Splunk):

index=sw tag=MemberServers sourcetype="windows PFirewall Log"
| stats count BY sourcetype host
| eval host=lower(host)
| append [ | inputlookup MemberServers.csv | eval count=0 | fields sourcetype host count]
| stats sum(count) AS total BY sourcetype host
| where total=0

Thank you for your help!
I have an event that looks as follows:

{
  "app_name": "my_app",
  "audit_details": {
    "audit": {
      "responseContentLength": "-1",
      "name": "app_name",
      "details": {
        "detail": [{
            "messageId": "-4",
            "time": "1752065281146",
            "ordinal": "0"
          }, {
            "messageId": "7103",
            "time": "1752065281146",
            "ordinal": "1"
          }, {
            "messageId": "7101",
            "time": "1752065281146",
            "ordinal": "2"
          }
        ]
      }
    }
  }
}

I want to create a table with a row for each detail record that includes the messageId, time, and ordinal, but also a messageIdDescription that is retrieved from a lookup, similar to:

lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription

The Table_MessageId lookup has three columns: message_Id, definition, audit_Level. Any pointers are appreciated.
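Building on the spath/mvexpand answer above, a rough sketch of the full pipeline including the lookup might look like this (assuming the lookup definition is named Table_MessageId as described; the final table command is just one way to lay out the columns):

| spath path=audit_details.audit.details.detail{}
| mvexpand audit_details.audit.details.detail{}
| spath input=audit_details.audit.details.detail{}
| fields - audit_details.audit.details.detail{}*
| lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription
| table messageId messageIdDescription time ordinal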
Hey there! Any chance you would still be able to help with this?
Hi, I tried to use the Next Steps of the correlation search: Ping - NSLOOKUP - Risk Analysis. I was able to find the result of the Risk Analysis in the Risk Analysis dashboard, but when I try to use ping/nslookup, I get no output. How can I find the result of the ping command?
Hi @mfleitma, is it acceptable to replace the empty field with another character, e.g. "-"? In this case, you can use the fillnull command. Ciao. Giuseppe
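For example, a minimal sketch (the field name my_field is just a placeholder for whichever field can be empty):

| fillnull value="-" my_field

Without a field list, fillnull fills every empty field in the results (with 0 by default, or with whatever value= specifies).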
Hi, I have a variety of CSV lookup tables and have to add a field to each of these tables. The CSV files are used by scheduled searches, so I need their contents AND the field names.

table1.csv:
index,sourcetype
index1,st1

table2.csv:
sourcetype,source
st1,source1

table3.csv:
field1,field2
(no rows)

For this, I use the following SPL (one search per table):

| inputlookup table1.csv | table index,sourcetype,comment1 | outputlookup table1.csv
| inputlookup table2.csv | table sourcetype,source,comment2 | outputlookup table2.csv
| inputlookup table3.csv | table field1,field2,comment3 | outputlookup table3.csv

For table1 and table2 this works, but for table3 the outputlookup creates an empty table and the field names are missing. Is there a search that can extend both empty and filled lookups?

Thank you.
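One workaround that is sometimes suggested (a sketch only, not tested against your environment) is to append a single placeholder row so that the fields exist even when the lookup is empty, at the cost of writing one blank row into the file:

| inputlookup table3.csv
| append [| makeresults | eval field1="", field2="", comment3="" | fields field1 field2 comment3]
| table field1, field2, comment3
| outputlookup table3.csv

If the lookup is not empty, this also adds a blank row each run, so it may need a follow-up condition to drop that row afterwards.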
Hi @gurunagasimha

Lookup files on forwarders are not automatically forwarded to indexers or search heads. To make lookup data available across your distributed environment, you need to get it to your Search Head (Cluster) somehow. There are a number of ways you could do this:

1) On your HF, run a scheduled search using | inputlookup to load the contents of the lookup and then the | collect command to write the contents to an index; on your SH/SHC, create a scheduled search to load the indexed data and use | outputlookup to write it to a lookup. A sketch of this option is shown below.
2) Use a custom REST API script to copy the KV Store lookup from your HF to your SH/SHC.
3) Use the KV Store Tools Redux app (https://splunkbase.splunk.com/app/5328) to upload from the HF to the SHC.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
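As a rough sketch of option 1 (the lookup name, index name, key field, and column names are placeholders; the summary index would need to exist):

On the heavy forwarder, a scheduled search:

| inputlookup my_lookup
| eval _time=now()
| collect index=lookup_sync ``` lookup and index names are placeholders ```

On the search head / search head cluster, a scheduled search running after the one above:

index=lookup_sync earliest=-24h
| table key_field, field1, field2 ``` placeholder column names from the lookup ```
| dedup key_field
| outputlookup my_lookup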
The lookup format is KV Store. We are ingesting the data through scripts and storing it in lookups on the Splunk forwarder (a heavy forwarder). Is there any other way to automatically sync the lookups?
Hi! Is it possible to restore deleted Mobile Apps in User Experience monitoring in AppDynamics?
I tried this curl command and got this output:

curl -k https://<splunkcloudlink>:8088/services/collector/event -H "Authorization: Splunk <hec token>" -d "{\"event\": \"hello from the other side\"}"

Output:
{"text":"Success","code":0}

What should I see next?
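If it helps, the usual next step is to search for the event in the index that the HEC token is configured to write to (the index name below is just an assumption; use whatever default index the token targets):

index=main "hello from the other side"

The event should show up there shortly after ingestion if everything is working end to end.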