All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello. I am building out Splunk Enterprise servers: a License Manager, Heavy Forwarder, Cluster Manager, Indexer, Search Head Cluster Deployer, Search Head, and Deployment Server. I want to know which ports each Splunk server uses to communicate with the others, e.g., License Manager to Heavy Forwarder over TCP port 8086. Which manual documents these things? Thank you.
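For reference, a minimal sketch of the common defaults, assuming a standard installation (every one of these is configurable, so verify against your own server.conf, web.conf, inputs.conf, and outputs.conf):

    8089/tcp  splunkd management (REST): license, deployment server, and cluster manager traffic
    9997/tcp  forwarder-to-indexer data (a conventional choice, set in outputs.conf and [splunktcp] in inputs.conf)
    8000/tcp  Splunk Web
    8088/tcp  HTTP Event Collector
    8191/tcp  KV store (search heads)
    9887/tcp  indexer cluster replication (a common convention chosen by the admin, not a fixed default)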
I have created a pipeline for filtering data coming into sourcetype=fortigate_traffic. I would like to add a further exclusion to the data coming into this sourcetype. How can this be done? Nested, or some other method? The first pipeline is: where NOT (dstport IN ("53") AND dstip IN ("10.5.5.5")). I need to add another exclusion: NOT (dstport IN ("80","443") AND app IN (xyz,fgh,dhjkl,.....)). Has anyone done anything similar to this? Please guide. Thanks.
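For what it's worth, a minimal sketch of combining both exclusions in a single Edge Processor SPL2 pipeline; the $source/$destination names follow the pipeline template, the app values are only the ones named above (the rest elided), and the exact predicate syntax should be checked against your SPL2 version:

    $pipeline = | from $source
        | where not (dstport in ("53") and dstip in ("10.5.5.5"))
        | where not (dstport in ("80", "443") and app in ("xyz", "fgh", "dhjkl"))
        | into $destination;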
I have a dashboard showing statistics about user events. One field returns dynamic URLs, and I want to show the image from that URL. Alternatively, it could be a hyperlink that, when clicked, opens the image in another browser tab. I have tested on both Dashboard Studio and Dashboard Classic. Thank you.
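For the hyperlink variant in Classic (Simple XML), a hedged sketch, assuming the URL lives in a field named image_url (the index and field names are placeholders): a table drilldown that opens the clicked row's URL in a new tab:

    <table>
      <search>
        <query>index=user_events | table user image_url</query>
      </search>
      <drilldown>
        <link target="_blank">$row.image_url|n$</link>
      </drilldown>
    </table>

The |n$ filter keeps the token value from being URL-encoded inside the link.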
I am using the Splunk Add-on for Microsoft Cloud Services to retrieve Event Hub data in Splunk Cloud, but I encountered the following error in the internal log:

2025-07-09 02:16:40,345 level=ERROR pid=1248398 tid=MainThread logger=modular_inputs.mscs_azure_event_hub pos=mscs_azure_event_hub.py:run:925 | datainput="Azure_Event_hub" start_time=1752027388 | message="Error occurred while connecting to eventhub: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known Error condition: ErrorCondition.ClientError Error Description: Failed to authenticate the connection due to exception: [Errno -2] Name or service not known

The credentials should not be the issue, as I am using the same credentials in FortiSIEM and successfully get the data from the Event Hub there. Could anyone help identify the cause of the issue and suggest how to resolve it?
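One hedged observation: [Errno -2] Name or service not known is a DNS resolution failure rather than a credential failure, so a first check is whether the Event Hub namespace FQDN configured in the input actually resolves; from any host with comparable network egress (the hostname below is a placeholder):

    nslookup <your-namespace>.servicebus.windows.net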
Hi all, I'm collecting iLO logs in Splunk and have set up the configuration on a Heavy Forwarder (HF). Logs are correctly indexed in the ilo index with sourcetypes ilo_log and ilo_error, but I'm facing an issue with search results. When I run index=ilo | stats count by sourcetype, it correctly shows the count for ilo_log and ilo_error. Also, index=ilo | spath | table _raw sourcetype confirms logs are indexed with the correct sourcetype. However, when I search directly with index=ilo sourcetype=ilo_log, index=ilo sourcetype=ilo_error, or even index=ilo ilo_log, I get zero results. Strangely, sourcetype!=ilo_error returns all ilo_log events, and the same holds for ilo_error.

props.conf:

    [source::udp:5000]
    TRANSFORMS-set_sourcetype = set_ilo_error
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)

transforms.conf:

    [set_ilo_error]
    REGEX = (failure|failed)
    DEST_KEY = MetaData:Sourcetype
    FORMAT = sourcetype::ilo_error
    WRITE_META = true
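One hedged way to narrow this down is to compare the indexed sourcetype metadata with the search-time value, since the :: syntax queries the indexed field directly and bypasses any search-time renaming:

    index=ilo sourcetype::ilo_log

    | metadata type=sourcetypes index=ilo

If the :: form matches while sourcetype=ilo_log does not, something at search time (for example, a rename setting in another props.conf stanza) is remapping the sourcetype name.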
When collecting Linux logs using a Universal Forwarder, we pick up a lot of unnecessary audit log entries from cron jobs collecting server status and other information, in particular the I/O from one little script. To build grep-based filtering logic on a Heavy Forwarder, there would have to be a long list of very particular "grep" strings so as not to lose ALL grep attempts. In a similar manner, commands like 'uname' and 'id' are even harder to filter out. The logic needed to reliably filter out only the I/O generated by the script would be: find events with comm="script-name", take the pid value from that initial event, and drop all events for the next, say, 10 seconds whose ppid matches that pid. To make things more complicated, there is no control over the logs/files on the endpoints; only what the Universal Forwarder, and then the Heavy Forwarder, can do before the log is indexed. Is there any way to accomplish this kind of adaptive filtering/stateful cross-event logic in transit and under these conditions? Is this something that may be possible using the new and shiny Splunk Edge Processor once it is generally available?
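For contrast, a sketch of the stateless per-event filtering that props/transforms on the HF can express today (the sourcetype stanza and regex are assumptions to adapt); the pid/ppid state tracking described above is beyond what these index-time transforms can do, since each event is evaluated in isolation:

props.conf:

    [linux:audit]
    TRANSFORMS-drop_script_io = drop_script_io

transforms.conf:

    [drop_script_io]
    REGEX = comm="script-name"
    DEST_KEY = queue
    FORMAT = nullQueue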
Hello, We have a search head cluster and an ITSI instance. How do we replicate the tags.conf files from various apps on the SHC to ITSI? These are needed for running the various module searches, and other ITSI macros. Did someone create an app to handle this? Thanks and God bless, Genesius
Hello community, I have a question that has been floating around here for quite some time, and though I've seen quite a few conversations and tips, I have not found a single definitive source of truth. At some point, when bumping Splunk from v8 to v9, we started noticing IOWait alerts from the health monitor. I've checked resource usage on our indexers (which are generating the alerts), and the cause of the alerts seems to be spikes in resource usage: 3 out of x indexers spike within 10 minutes, which triggers an alert. Most of the time these alerts seem wound really tight and somewhat overblown; on the other hand, they exist for a reason, and I am not sure whether tuning the alert levels is the right way to go. I have gone through the following threads:

https://community.splunk.com/t5/Monitoring-Splunk/Why-is-IOWait-red-after-upgrade/m-p/600262#M8968
https://community.splunk.com/t5/Deployment-Architecture/IOWAIT-alert/m-p/666536#M27634
https://community.splunk.com/t5/Splunk-Enterprise/Why-am-I-receiving-this-error-message-IOWait-Resource-usage/m-p/578077#M10932
https://community.splunk.com/t5/Splunk-Search/Configure-a-Vsphere-VM-for-Splunk/td-p/409840
https://community.splunk.com/t5/Monitoring-Splunk/Running-Splunk-on-a-VM-CPU-contention/m-p/107582

Some recommendations are to either ignore the alerts or adjust thresholds. Continuously ignoring seems like a slippery slope to desensitization, and continuous alerting adds to the risk of alert fatigue. Others recommend ensuring adequate resources to solve the core issue, which seems logical, though I am unsure how. I am left with two questions:

1) What concrete actions could be taken to minimize the chance of these alerts/issues in a deployment based on VMware Linux servers? In other words, what can/should I forward to the server group that they can work with, check, and confirm in order to minimize the chance of these alerts?

2) What recommendations, if any, exist regarding modifying the default thresholds? I could set thresholds high enough to not alert on "normal activity"; is this the recommended adjustment, or are there concrete recommended modifications?
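On question 2, a hedged sketch of where the thresholds live, assuming the stanza and indicator names match your version's $SPLUNK_HOME/etc/system/default/health.conf (the same thresholds are also editable in Splunk Web under the health report settings; copy the stanza to etc/system/local before editing, and treat the values below as illustrative, not recommendations):

health.conf:

    [feature:iowait]
    indicator:avg_cpu__max_perc_last_3m:yellow = 15
    indicator:avg_cpu__max_perc_last_3m:red = 25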
Hey there, I'm trying to create a custom/filtered list of lookups to simplify edits by end users pulling reports. I've dug through the docs and can't find anything, although perhaps I'm missing it in my searches. What I was hoping to do is add a custom HREF link in the menu to a filtered lookup list, but it doesn't appear that the lookup list accepts parameters (or I haven't found the right ones). For example: https://splunk-srvr/en-US/app/lookup_editor/lookup_edit?namespace=user_reports_app&type=csv. If this isn't possible, the other option I had thought of was a dashboard section with a filtered list of the appropriate lookups, similar to the Lookup App overview page, but that appears to be built directly into the app using JavaScript and not something easily replicated. Have I missed something completely obvious, or is this even possible? Thanks!
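As a hedged sketch of the dashboard fallback, assuming the lookup files live in an app named user_reports_app: a table driven by Splunk's lookup-table-files REST endpoint can approximate a filtered overview page:

    | rest /servicesNS/-/-/data/lookup-table-files
    | search eai:acl.app="user_reports_app"
    | table title eai:acl.app eai:acl.owner

A row drilldown could then link into the Lookup Editor's edit page for the clicked lookup; whether lookup_edit honors URL parameters such as lookup= and type= is version-dependent, so verify against the links the Lookup Editor app itself generates.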
Hi, we use iPads in our production area to display Splunk dashboards. The dashboards are classic ones with enhanced JS/CSS functionality but standard dashboard searches. Our issue is that sometimes the searches do not run. When we inspect the console/network in Safari's dev settings, the request is sent, but after 50 ms an error occurs and no response is received. If we try again, the search mostly runs as expected. These problems have never occurred on Windows devices. Our network department says there are no network issues. Has anybody had a similar problem? Thanks!
Has anyone figured out how to successfully join the three new _DS indexes into a meaningful report? I would like to create a report that shows me when a UF/HF phoned home and what actions it may have performed.
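A hedged starting point, assuming the deployment server writes to the _dsphonehome, _dsclient, and _dsappevent indexes (field names vary by version, so inspect a few raw events first; the coalesce below is a guess at a usable client key):

    index=_dsphonehome OR index=_dsclient OR index=_dsappevent
    | eval client=coalesce('data.hostname', hostname, host)
    | stats latest(_time) as last_seen values(index) as sources by client
    | convert ctime(last_seen)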
Hi guys, I'm trying to customize an app I created. For the dashboards, I placed the CSS file in appserver/static and linked it in each dashboard using stylesheet="my.css". How does it work for the app's own CSS? Where should I put the CSS file? Do I also need to reference it in any .conf file? Thanks for your attention.
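For what it's worth, a hedged sketch of the Simple XML convention as I understand it: a file named dashboard.css (or dashboard.js) in appserver/static is applied automatically to every Classic dashboard in the app, with no .conf reference required (a restart or asset-cache bump may be needed before changes show up):

    myapp/
        appserver/
            static/
                dashboard.css   <- auto-applied to all Simple XML dashboards in myapp
                my.css          <- applied only where stylesheet="my.css" is set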
Guys, I have Splunk Cloud. I created an HTTP Event Collector token, and in Prisma I gave the URL path /service/collector, but logs are not showing up in Splunk. My questions: Should I add a port number after my HTTP URL? After the URL, is it /service/collector or /service/collector/events? What else should I check, given that when I tested, Prisma said the test passed?
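As a hedged sanity check, assuming a standard Splunk Cloud stack: the HEC base URL is usually https://http-inputs-<stack>.splunkcloud.com on port 443, and the event endpoint path is /services/collector/event (note the plural "services"). A quick test from any host, with placeholders in angle brackets:

    curl "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
        -H "Authorization: Splunk <your-hec-token>" \
        -d '{"event": "hec smoke test"}'

A {"text":"Success","code":0} response confirms the URL and token; after that, check the index and sourcetype the token writes to.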
Unable to update and save detections after upgrading to Splunk ES version 8.1.0. It says Detection ID is missing.   
Hi, can anybody help with this problem, please? An old Splunk instance is running on Windows Server 2016 and should be upgraded to the newest version on new hardware with Windows Server 2022.

1. How do we do it?
2. How do we migrate all the data?
3. How do we use the existing license?

Sorry, my mistake: the old version is 7.1.2, not 4.
Hi at all, I have an issue with Data Model accelerations: the run time of each acceleration is too high to use the DMs in my correlation searches, more than 2,000 seconds per run. I have six IDXs with 24 CPUs (only partially used: less than 50%) and storage with 1500 IOPS, so the infrastructure shouldn't be the issue. Six indexers should be sufficient to index and search 1 TB/day of data, so this shouldn't be the issue. I have around 1 TB/day of data distributed across more than 30 indexes, and I listed these indexes in the CIM macro, so this shouldn't be the issue. Where could I look for the problem? For now I'm trying some parameters: I enabled "Poll Buckets For Data To Summarize" and disabled "Automatic Rebuilds". Is there something else in the DM structure that could be critical? Thank you for your help. Ciao. Giuseppe
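For completeness, a hedged sketch of the acceleration settings usually worth reviewing in datamodels.conf on the search head (the stanza name and values are illustrative, not recommendations; check the spec file for your version before applying):

    [Network_Traffic]
    acceleration = 1
    # keep the summary range no larger than the correlation searches actually need
    acceleration.earliest_time = -7d
    # cap how long a single summarization search may run
    acceleration.max_time = 3600
    # limit concurrent summarization searches for this model
    acceleration.max_concurrent = 2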
I have set up an episode review that captures alerts and generates episodes. Now I want to know if I can add comments to an episode based on conditions; for example, splunk-system-user should check whether the status becomes pending and add a comment: "The details for this are - (fieldvalue)". For example, if I have a field named "Version", I want the system to add a comment like: "The details for this are: 1.2.3". I tried adding this in the action rules, but when I check the comments, the field value is not filled in as intended (screenshot omitted). Please let me know if you know of any way to include a field value in the comments. Thanks in advance.
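If the built-in action rule won't expand field tokens, one heavily hedged alternative is to post the comment yourself via ITSI's event management REST interface; I believe it exposes a notable_event_comment endpoint, but the path, payload, and auth below are assumptions to verify against the ITSI REST API reference for your version:

    curl -k -u <admin>:<password> \
        https://localhost:8089/servicesNS/nobody/SA-ITOA/event_management_interface/notable_event_comment \
        -H "Content-Type: application/json" \
        -d '{"event_id": "<episode-or-event-id>", "comment": "The details for this are: 1.2.3"}'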
_raw data:

    [fw4_deny] [ip-address] start_time="1998-07-07 11:21:09" end_time="1998-07-07 11:21:09" machine_name=test_chall_1 fw_rule_id=11290 fw_rule_name=auto_ruleId_1290 nat_rule_id=0 nat_rule_name= src_ip=1xx.1xx.0.x user_id=- src_port=63185 dst_ip=192.168.0.2 dst_port=16992 protocol=6 app_name=- app_protocol=- app_category=- app_saas=no input_interface=eth212 bytes_forward=70 bytes_backward=0 packets_total=1 bytes_total=70 flag_record=S terminate_reason=Denied by Deny Rule is_ssl=no is_sslvpn=no host=- src_country=X2 dst_country=X2

    [resource_cnt] [10.10.10.10] time="1998-07-07 11:24:50" machine_name=test_boby_1 cpu_usage=7.0 mem_usage=19.8 disk_usage=5.6 cpu_count=32, cpu_per_usage=3.0-2.9-2.0-2.0-2.0-2.0-0.0-0.0-23.0-7.9-7.0-6.9-19.4-19.0-8.0-7.0-1.0-1.0-16.0-1.0-2.0-2.0-1.0-2.0-24.8-9.0-16.2-8.0-9.0-9.9-5.0-8.1

my props.conf:

    [secui:fw]
    DATETIME_CONFIG =
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    SEDCMD-duration = s/duration=\d+\s//
    SEDCMD-fragment_info = s/fragment_info=\S*\s//
    SEDCMD-ingres_if = s/ingres_if=\S*\s//
    SEDCMD-input = s/input\sinterface/interface/
    SEDCMD-packets_backward = s/packets_backward=\S*\s//
    SEDCMD-packets_forward = s/packets_forward=\S*\s//
    SEDCMD-pre = s/^[^\[]+//
    SEDCMD-terminate_reason = s/\sterminate_reason=-//
    SEDCMD-user_auth = s/user_auth=\S*\s//
    SEDCMD-userid = s/user_id=\S*\s//
    TRANSFORMS-secui_nullq = secui_nullq
    TRANSFORMS-stchg7 = secui_resource
    TRANSFORMS-stchg8 = secui_session
    category = Custom
    description = test
    disabled = false
    pulldown_type = true

Fields I want to exclude:

    fw_rule_name, app_saas, nat_rule_name, is_ssl, user_id, is_sslvpn, app_name, host, app_protocol, src_country, app_category, dst_country

I want to keep the fields listed above from being extracted at index time. Currently, these fields are extracted automatically and clutter the fields of interest when searching. Is there a way to do this?
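Since the stanza above already strips key=value pairs with SEDCMD, a hedged sketch of the same approach extended to the listed fields; each rule deletes the pair from _raw before indexing, so it can never be extracted later. Test the regexes against sample events (a pair at end of line may need \s changed to \s?, similar to the terminate_reason rule above), and note that host= here is a literal token inside the event text, not Splunk's host metadata. user_id is already handled by SEDCMD-userid.

    [secui:fw]
    SEDCMD-fw_rule_name = s/fw_rule_name=\S*\s//
    SEDCMD-app_saas = s/app_saas=\S*\s//
    SEDCMD-nat_rule_name = s/nat_rule_name=\S*\s//
    SEDCMD-is_ssl = s/is_ssl=\S*\s//
    SEDCMD-is_sslvpn = s/is_sslvpn=\S*\s//
    SEDCMD-app_name = s/app_name=\S*\s//
    SEDCMD-app_protocol = s/app_protocol=\S*\s//
    SEDCMD-app_category = s/app_category=\S*\s//
    SEDCMD-host_literal = s/host=\S*\s//
    SEDCMD-src_country = s/src_country=\S*\s//
    SEDCMD-dst_country = s/dst_country=\S*\s?//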
Hello, I have Database Connect set up and it's working fine, but I can't wrap my head around how the alert action works. The alert action "Output results to databases" has no parameters; what am I missing? I have a DB table "test_table" with columns col1 and col2, and I want to set up | makeresults | eval col1 = "test", col2 = "result" as an alert that pushes the results into test_table. I would expect the alert action to at least need to know which DB output to use? Any help appreciated. Kind regards, Andre
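For comparison, a hedged sketch of the search-command route, assuming DB Connect's dbxoutput command and a DB Output named my_test_output that you create in the DB Connect UI and map to test_table (the output name is a placeholder):

    | makeresults
    | eval col1 = "test", col2 = "result"
    | fields col1 col2
    | dbxoutput output="my_test_output"

Saving this search as the alert sidesteps the parameterless alert action, since the output binding lives in the search itself.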
Hi, sometimes there are 3 new data sets at once and I need them in separate JSON files, but they overwrite each other. I can find no way to add a UUID to the file name /results_%H%M%S.json.