All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello guys, here is the current situation. Below is what I'd like to achieve. I've tried the following, with no success. Can anyone help me achieve my goal? Thanks in advance.
I have an event that looks as follows:

{
    "app_name": "my_app",
    "audit_details": {
        "audit": {
            "responseContentLength": "-1",
            "name": "app_name",
            "details": {
                "detail": [
                    { "messageId": "-4", "time": "1752065281146", "ordinal": "0" },
                    { "messageId": "7103", "time": "1752065281146", "ordinal": "1" },
                    { "messageId": "7101", "time": "1752065281146", "ordinal": "2" }
                ]
            }
        }
    }
}

I want to create a table with a row for each detail record that includes the messageId, time, and ordinal, but also a messageIdDescription retrieved from a lookup, similar to the following:

lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription

The Table_MessageId lookup has three columns: message_Id, definition, and audit_Level. Any pointers are appreciated.
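A minimal sketch of one way to do this, assuming the JSON sits in _raw and the lookup is already defined (the index and sourcetype below are placeholders): pull the detail array out with spath, split it with mvexpand, re-extract the fields from each element, then apply the lookup:

index=my_index sourcetype=my_json
| spath path=audit_details.audit.details.detail{} output=detail
| mvexpand detail
| spath input=detail
| lookup Table_MessageId message_Id as messageId OUTPUT definition as messageIdDescription
| table messageId time ordinal messageIdDescription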
Hi, I tried to use the Next Steps of the correlation search: Ping - NSLOOKUP - Risk Analysis. I was able to find the result of the Risk Analysis in the Risk Analysis dashboard, but when I try to use ping/nslookup, I get no output. How can I find the result of the ping command?
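A hedged pointer rather than a confirmed answer: adaptive response action executions are normally logged to the cim_modactions index, so a search along these lines may show whether the ping/nslookup actions actually ran and what they reported (the sourcetype filter is an assumption; drop it and start broad if it returns nothing):

index=cim_modactions sourcetype="modular_alerts:ping"
| table _time sourcetype _raw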
Hi, I have a variety of CSV lookup tables and have to add a field to each of them. The CSV files are used by scheduled searches, so I need to keep their contents AND the field names.

table1.csv:
index,sourcetype
index1,st1

table2.csv:
sourcetype,source
st1,source1

table3.csv:
field1,field2
(no rows)

For this, I use the following SPL:

| inputlookup table1.csv | table index,sourcetype,comment1 | outputlookup table1.csv
| inputlookup table2.csv | table sourcetype,source,comment2 | outputlookup table2.csv
| inputlookup table3.csv | table field1,field2,comment3 | outputlookup table3.csv

For table1 and table2 this works, but for table3 outputlookup creates an empty table and the field names are missing. Is there a search that can extend both empty and populated lookups?

Thank you.
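One hedged workaround, assuming a temporary marker row in the file is acceptable: append a dummy row before writing so that the header is always emitted, then exclude the marker wherever the lookup is read. This uses only the field names from the question:

| inputlookup table3.csv
| append [| makeresults | eval field1="__placeholder__", field2="", comment3="" | fields field1 field2 comment3]
| table field1, field2, comment3
| outputlookup table3.csv

The downside is that the marker row stays in the CSV, so consuming searches then need a where clause such as field1!="__placeholder__" to ignore it.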
Hi! Is it possible to restore deleted Mobile Apps in User Experience Monitoring of AppDynamics?
We are storing data in a Splunk lookup file on one of the forwarders. In our distributed Splunk architecture, this lookup data is not forwarded to the indexers or the search head, and is therefore not available for search or enrichment. How can we sync or transfer this lookup data from the forwarder to the search head (or indexers) so that it can be used across the distributed environment?
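One common pattern, sketched with placeholder names throughout (index, sourcetype, path, and field names are assumptions): index the CSV from the forwarder as ordinary data, then rebuild the lookup on the search head with a scheduled search.

inputs.conf on the forwarder:

[monitor:///opt/data/my_lookup.csv]
index = lookup_staging
sourcetype = my_lookup_csv

props.conf on the forwarder, so the header row becomes fields:

[my_lookup_csv]
INDEXED_EXTRACTIONS = csv

Scheduled search on the search head:

index=lookup_staging sourcetype=my_lookup_csv earliest=-24h
| dedup field1
| table field1 field2
| outputlookup my_lookup.csv

The earliest/dedup handling is the fiddly part: you need to make sure only the latest snapshot of the file ends up in the rebuilt lookup.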
Hello. I am setting up Splunk Enterprise servers: License Manager, Heavy Forwarder, Cluster Manager, Indexer, Search Head Cluster Deployer, Search Head, and Deployment Server. I want to know how the Splunk servers communicate with one another. For example, the License Manager and the Heavy Forwarder communicate on TCP port 8086. Which manual documents these things? Thank you.
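For orientation, these are the commonly cited default ports in a Splunk deployment; all of them are configurable, so treat this as a hedged checklist and confirm against the networking/port topics in the Splunk Enterprise Admin manual and the Inherit a Splunk Enterprise Deployment manual:

8089/TCP - management/REST port (license management, deployment server, distributed search)
9997/TCP - indexer receiving port for forwarded data (by convention; set by the admin)
8000/TCP - Splunk Web
8088/TCP - HTTP Event Collector
8191/TCP - KV store
index/search head cluster replication ports - no fixed default; whatever is set in server.conf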
I have created a pipeline for filtering data coming into sourcetype=fortigate_traffic. I would like to add a further exclusion to the data coming into this sourcetype. How can this be done? Nested, or some other method? For example, the first pipeline is:

NOT (dstport IN ("53") AND dstip IN ("10.5.5.5"))

I need to add another exclusion:

NOT (dstport IN ("80","443") AND app IN (xyz,fgh,dhjkl,.....))

Has anyone done anything similar to this? Please guide. Thanks.
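A hedged SPL2 sketch of what the combined pipeline could look like in Edge Processor / Ingest Processor (field names are taken from the question, the app values are the question's placeholders, and the exact syntax should be checked against your processor version). Two where clauses apply both exclusions in sequence:

$pipeline = | from $source
| where NOT (dstport IN ("53") AND dstip IN ("10.5.5.5"))
| where NOT (dstport IN ("80", "443") AND app IN ("xyz", "fgh", "dhjkl"))
| into $destination;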
I have a dashboard that shows statistics about user events. One field returns dynamic URLs, and I want to display the image from that URL. Alternatively, it could be a hyperlink that, when clicked, opens the image in another browser tab. I have tested this in both Dashboard Studio and Dashboard Classic. Thank you.
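For the hyperlink variant in a Classic (Simple XML) dashboard, a hedged sketch: a table drilldown that opens the URL field in a new tab. image_url and the search are placeholders for your own field and query:

<table>
  <search>
    <query>index=my_index | table user image_url</query>
  </search>
  <drilldown>
    <link target="_blank">$row.image_url|n$</link>
  </drilldown>
</table>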
I am using the Splunk Add-on for Microsoft Cloud Services to retrieve Event Hub data in Splunk Cloud, but I encountered the following error in the internal log:

2025-07-09 02:16:40,345 level=ERROR pid=1248398 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=mscs_azure_event_hub.py:run:925 | datainput="Azure_Event_hub" start_time=1752027388 | message="Error occurred while connecting to eventhub: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known Error condition: ErrorCondition.ClientError Error Description: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known

The credentials should not be the issue, as I use the same credentials in FortiSIEM and successfully retrieve data from the Event Hub. Could anyone help identify the cause of the issue and suggest how to resolve it?
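For what it's worth, "[Errno -2] Name or service not known" is the Linux errno for a failed DNS lookup, which points at the Event Hub hostname not being resolvable from the node running the input, rather than at the credentials. A quick check from a host with comparable network access (the namespace is a placeholder):

nslookup <your-namespace>.servicebus.windows.net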
Hi all, I'm collecting iLO logs in Splunk and have set up configurations on a Heavy Forwarder (HF). Logs are correctly indexed in the ilo index with sourcetypes ilo_log and ilo_error, but I'm facing an issue with search results. When I run index=ilo | stats count by sourcetype, it correctly shows the count for ilo_log and ilo_error. Also, index=ilo | spath | table _raw sourcetype confirms logs are indexed with the correct sourcetype. However, when I search directly with index=ilo sourcetype=ilo_log, index=ilo sourcetype=ilo_error, or even index=ilo ilo_log, I get zero results. Strangely, sourcetype!=ilo_error returns all ilo_log events, and the same for ilo_error.

props.conf:

[source::udp:5000]
TRANSFORMS-set_sourcetype = set_ilo_error
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)

transforms.conf:

[set_ilo_error]
REGEX = (failure|failed)
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::ilo_error
WRITE_META = true
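One hedged diagnostic step: compare the search-time values with what is actually in the index metadata, since a sourcetype rewritten at index time can make stats results and direct sourcetype= searches disagree. tstats reads only the indexed metadata:

| tstats count where index=ilo by sourcetype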
When collecting Linux logs with a Universal Forwarder, we pick up a lot of unnecessary audit log events from cron jobs that gather server status and other information, in particular the I/O from one small script. To build grep-based logic on a Heavy Forwarder, there would have to be a long list of very particular "grep" strings so as not to lose ALL grep attempts. In a similar manner, commands like 'uname' and 'id' are even harder to filter out. The logic needed to reliably filter out only the I/O generated by the script would be: find events with comm="script-name", get the pid value from that initial event, and drop all events for the next, say, 10 seconds whose ppid matches that pid. To complicate things, there is no control over the logs/files on the endpoints, only over what the Universal Forwarder and then the Heavy Forwarder can do before the log is indexed. Is there any way to accomplish this kind of adaptive filtering/stateful cross-event logic in transit, under these conditions? Is this something that may be possible using the new and shiny Splunk Edge Processor once it is generally available?
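For contrast only, the pid/ppid correlation is straightforward at search time, after indexing; a hedged sketch with placeholder index/sourcetype names, ignoring the 10-second window:

index=linux sourcetype=auditd NOT
    [ search index=linux sourcetype=auditd comm="script-name"
      | fields pid
      | rename pid AS ppid ]

The subsearch returns the script's pids as ppid=value conditions, and the outer search excludes everything matching them; doing the same statefully in transit is the hard part of the question.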
Hello, We have a search head cluster and an ITSI instance. How do we replicate the tags.conf files from various apps on the SHC to ITSI? These are needed for running the various module searches and other ITSI macros. Did someone create an app to handle this? Thanks and God bless, Genesius
Hello community, I have a question that has been floating around here for quite some time; although I've seen quite a few conversations and tips, I have not found a "single definitive source of truth".

Some time ago, when bumping Splunk from v8 to v9, we started noticing IOWait alerts from the health monitor. I've checked resource usage on our indexers (which are generating the alerts), and the cause of the alerts appears to be spikes in resource usage: 3 out of x indexers spike within 10 minutes, which triggers an alert. Most of the time these alerts seem wound really tight and somewhat overblown; on the other hand, they should be there for a reason, and I am not sure whether tuning the alert levels is the right way to go.

I have gone through the following threads:
https://community.splunk.com/t5/Monitoring-Splunk/Why-is-IOWait-red-after-upgrade/m-p/600262#M8968
https://community.splunk.com/t5/Deployment-Architecture/IOWAIT-alert/m-p/666536#M27634
https://community.splunk.com/t5/Splunk-Enterprise/Why-am-I-receiving-this-error-message-IOWait-Resource-usage/m-p/578077#M10932
https://community.splunk.com/t5/Splunk-Search/Configure-a-Vsphere-VM-for-Splunk/td-p/409840
https://community.splunk.com/t5/Monitoring-Splunk/Running-Splunk-on-a-VM-CPU-contention/m-p/107582

Some recommend either ignoring the alerts or adjusting thresholds. Continuously ignoring seems like a slippery slope to desensitization, and continuous monitoring adds to the risk of alert fatigue. Others recommend ensuring adequate resources to solve the core issue, which seems logical, though I am unsure how.

I am left with two questions:
1) What concrete actions could be taken to minimize the chance of these alerts/issues in a deployment based on VMware Linux servers? In other words, what can/should I forward to the server group that they can work with, check, and confirm in order to minimize the chance of these alerts?
2) What recommendations, if any, exist regarding modifying the default thresholds? I could set thresholds high enough to not alert on "normal activity"; is this the recommended adjustment, or are there concrete recommended modifications?
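On question 2: the health monitor thresholds live in health.conf (also editable under Settings > Health report manager). A hedged sketch of what raising the IOWait thresholds could look like; the indicator name below is taken from a recent default health.conf and should be verified against your version before use:

[feature:iowait]
indicator:avg_cpu__max_perc_last_3m:yellow = 15
indicator:avg_cpu__max_perc_last_3m:red = 30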
Hey there, I'm trying to create a custom/filtered list of lookups to simplify edits by end users pulling reports. I've dug through the docs and can't find anything, although perhaps I'm missing it in my searches.

What I was hoping to do is add a custom HREF link in the menu to a filtered lookup list, but it doesn't appear that the lookup list accepts parameters (or I haven't found the right ones). For example: https://splunk-srvr/en-US/app/lookup_editor/lookup_edit?namespace=user_reports_app&type=csv

If this isn't possible, the other option I thought of was a dashboard section with a filtered list of the appropriate lookups, similar to the Lookup App overview page, but this appears to be built directly into the app using JavaScript and not something easily replicated. Have I missed something completely obvious, or is this even possible? Thanks!
Hi, we use iPads in our production area to display Splunk dashboards. The dashboards are Classic ones with enhanced JS/CSS functionality but standard dashboard searches. The issue is that sometimes the searches are not run. When we inspect the console/network within Safari's dev settings, the request is sent, but after 50 ms an error occurs and no response is received. If we try again, the search mostly runs as expected. On Windows devices these problems never occurred. Our network department says there are no network issues. Does anybody have a similar problem? Thanks!
Has anyone figured out how to successfully join the three new _DS indexes into a meaningful report? I would like to create a report that shows when a UF/HF phoned home and what actions it may have performed.
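A hedged starting point, assuming the three indexes are _dsphonehome, _dsclient, and _dsappevent; the field names below are guesses and should be checked against your actual events before building the report:

(index=_dsphonehome OR index=_dsclient OR index=_dsappevent)
| stats latest(_time) AS last_seen values(index) AS sources BY hostname
| convert ctime(last_seen)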
Hi guys, I'm trying to customize an app I created. For the dashboards, I placed the CSS file in appserver/static and linked it in the dashboard using stylesheet="my.css". How does it work for the app's own CSS? Where should I put the CSS file? Do I also need to reference it in any .conf file? Thanks for your attention.
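One hedged pointer: for Classic dashboards, a stylesheet named dashboard.css placed in appserver/static is by convention loaded automatically on every dashboard in the app, with no .conf reference required (verify against your Splunk version):

$SPLUNK_HOME/etc/apps/my_app/appserver/static/dashboard.css

The same goes for dashboard.js on the JavaScript side; a restart or a /_bump in the browser may be needed before changes to static assets show up.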
Guys, I have Splunk Cloud. I created an HTTP Event Collector token, and in Prisma I gave the URL /service/collector, but logs are not showing up in Splunk. My questions:

- Should I add a port number after my HTTP URL?
- After the URL, is it /service/collector or /service/collector/events?
- What else should I check? When I tested, Prisma said the test passed.
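A hedged way to test the token outside Prisma, assuming a standard Splunk Cloud stack: the documented HEC endpoint path is /services/collector/event, and Splunk Cloud HEC typically listens on port 443 at the http-inputs- prefixed hostname. <stack> and <token> are placeholders:

curl "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
  -H "Authorization: Splunk <token>" \
  -d '{"event": "hello from curl", "sourcetype": "manual_test"}'

If this returns {"text":"Success","code":0}, the token and URL are good and the problem is on the Prisma side.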
Unable to update and save detections after upgrading to Splunk ES version 8.1.0. It says Detection ID is missing.