All Topics

I'm on the server / infrastructure team at my organization. There is a dedicated Splunk team, and they want to replace some RHEL 7 Splunk servers with RHEL 8. RHEL 8 is already near the end of its lifecycle, and I'd rather provide them with RHEL 9, which is now our standard build. The fact that they still use RHEL 7 servers gives you some sense of how long it takes them to move their application to a new(ish) OS. They are insistent that we deploy RHEL 8 servers so they are "all the same." I want to encourage them to move forward and have a platform that will be fully supported for several years to come. Is having some servers on RHEL 8 and some on RHEL 9 for a period of time an actual problem? They use Splunk version 9.1.2. I found this document: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/Systemrequirements It lists support for x86_64 kernels 4.x (RHEL 8) and 5.x (RHEL 9). It doesn't elaborate any further. I know that for various reasons we'd want to eventually have all servers on the same OS version; I'm just wondering if having RHEL 8 and RHEL 9 coexist for a limited period presents an actual problem. I'd appreciate your thoughts. Daniel
I was looking at my organization's SVC utilization by the hour and I noticed a component under the label "process_type" with the value "Search launcher" which is consuming a huge portion of the SVCs. What exactly is this search launcher? How do I dig into what's running under the hood? Any tips on how to approach reducing the SVC consumed by this?
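One hedged way to see which searches sit behind that kind of consumption is to break down completed searches from the audit log. This is only a sketch; total_run_time, savedsearch_name, and user are standard _audit fields, but they do not map one-to-one onto the SVC report:

index=_audit action=search info=completed earliest=-24h
| stats count sum(total_run_time) as total_run_time by user, savedsearch_name
| sort - total_run_time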
When I launch an application, the name of the application no longer appears at the top right.
I have a multivalue field called weeksum that contains the following values:

2024:47
2024:48
2024:49
2024:50
2024:51
2024:52
2025:01
2025:02
2025:03

In this case, from the first to the last week, there are no missing weeks. I would like to create a field that identifies whether there are any missing weeks in the sequence. For example, if week 2024:51 is missing, the field should indicate that there is a gap in the sequence. Please note that the weeksum multivalue field already consists of pre-converted values, so converting them back to epoch (using something like | eval week = strftime(_time, "%Y:%U")) does not work.
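One possible approach (a sketch only) is to split each value into year and week, sort them, and compare consecutive entries, treating a year rollover from week 52/53 to week 01 as contiguous. The makeresults line just simulates the data; with many real events, keep an event identifier before mvexpand so you can group by it afterwards:

| makeresults
| eval weeksum=split("2024:47 2024:48 2024:49 2024:50 2024:52 2025:01", " ")
| mvexpand weeksum
| eval year=tonumber(mvindex(split(weeksum, ":"), 0)), week=tonumber(mvindex(split(weeksum, ":"), 1))
| sort 0 year week
| streamstats current=f last(year) as prev_year, last(week) as prev_week
| eval gap=case(isnull(prev_year), 0,
    year=prev_year AND week=prev_week+1, 0,
    year=prev_year+1 AND week=1 AND prev_week>=52, 0,
    true(), 1)
| eventstats max(gap) as has_gap

has_gap=1 indicates at least one missing week in the sequence.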
We are facing a log indexing issue with the log paths mentioned below. Previously, with the same inputs.conf configuration, logs were being ingested without issues, but suddenly it stopped sending logs. Each log file contains logs for a single day, but Splunk reports that it has already read these logs and skips them.

Below is the inputs.conf configuration:

[monitor://C:\Ticker\out\]
whitelist = .*_Mcast2Msg\\logs\\.*log$
index = rtd
disabled = false
followTail = 0
ignoreOlderThan = 3d
recursive = true
sourcetype = rtd_mcast
crcSalt = <SOURCE>

Source paths:

C:\Ticker\out\Equiduct_Mcast2Msg\logs\EquiductTest-01-21-25.log
C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-16-25.log
C:\Ticker\out\JSE_Mcast2Msg\logs\JSEtst-01-17-25.log
C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-14-25.log

_internal logs:

01-21-2025 14:48:20.745 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=105 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-21-25.log'.
01-21-2025 14:48:13.586 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=171 for file='C:\Ticker\out\Equiduct_Mcast2Msg\logs\Equiduct-Limit-1-01-20-25.log'.
01-21-2025 14:48:06.332 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-21-25.log'.
01-21-2025 14:47:57.650 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=66 for file='C:\Ticker\out\Istanbul_Mcast2Msg\logs\Istanbul-01-20-25.log'.
01-21-2025 14:47:51.466 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-20-25.log'.
01-21-2025 14:47:45.271 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=65 for file='C:\Ticker\out\JSE_Mcast2Msg\logs\JSE-01-21-25.log'.
01-21-2025 14:47:39.644 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-21-25.log'.
01-21-2025 14:47:35.855 +0000 INFO WatchedFile [708 tailreader0] - Will begin reading at offset=114 for file='C:\Ticker\out\Warsaw_Mcast2Msg\logs\Warsaw-01-20-25.log'.
01-21-2025 14:47:35.660 +0000 INFO TailingProcessor [6536 MainTailingThread] - Adding watch on path: C:\Ticker\out.
01-21-2025 14:47:35.659 +0000 INFO TailingProcessor [6536 MainTailingThread] - Parsing configuration stanza: monitor://C:\Ticker\out\.

Issue details:
1) When we update the very first line of a log file, only the updated first line is ingested by Splunk, and the rest of the content is skipped.
2) We have deleted the fishbucket, but the issue persists.
3) Even after reinstalling the Splunk forwarder (version 8.2.12), the problem continues.
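One setting sometimes suggested for this kind of symptom (daily files whose first lines look alike, so the initial file-identity CRC matches a file Splunk thinks it has already read) is to raise initCrcLength so more of the file head is hashed. This is only a sketch of that idea, not a confirmed fix for the issue above, and it mainly affects how new files are identified:

[monitor://C:\Ticker\out\]
whitelist = .*_Mcast2Msg\\logs\\.*log$
index = rtd
sourcetype = rtd_mcast
recursive = true
ignoreOlderThan = 3d
crcSalt = <SOURCE>
# default is 256 bytes; raising it makes the file-identity CRC cover more of the header
initCrcLength = 1024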
Hi,

To send data from Deep Security to Splunk over TLS, I think a private key, certificate, and certificate chain need to be created on the Splunk side. Please explain the procedure for creating the private key, certificate, and certificate chain. And what settings do I need to set up to receive the TCP data in Splunk?
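A rough sketch of what the certificate side can look like, assuming a self-signed private CA is acceptable; all file names, subjects, ports, index, and sourcetype below are placeholders, and your PKI team may issue the certificates differently. The openssl steps create the private key, the certificate, and the combined chain file; the inputs.conf stanza then points a TLS-enabled TCP input at that chain:

# 1) Private CA (placeholder names)
openssl genrsa -out myCAPrivateKey.key 2048
openssl req -new -x509 -key myCAPrivateKey.key -out myCACertificate.pem -days 1095 -subj "/CN=MyPrivateCA"

# 2) Splunk server private key, CSR, and CA-signed certificate
openssl genrsa -out mySplunkServer.key 2048
openssl req -new -key mySplunkServer.key -out mySplunkServer.csr -subj "/CN=splunk-receiver.example.com"
openssl x509 -req -in mySplunkServer.csr -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out mySplunkServer.pem -days 1095

# 3) Certificate chain: server cert, then server key, then CA cert
cat mySplunkServer.pem mySplunkServer.key myCACertificate.pem > mySplunkServerChain.pem

Then, on the receiving Splunk instance, an inputs.conf sketch (port and paths are examples):

[tcp-ssl://6514]
index = deepsecurity
sourcetype = deepsecurity

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/mySplunkServerChain.pem
sslPassword = <key passphrase, only if the key is encrypted>
requireClientCert = false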
I am trying to build a "total traffic vs. attack traffic" Splunk query to keep in a dashboard panel. We have a field called attack_type which contains all the attacks, and these are dynamic (new ones arrive daily). For the last 24 hours, we have 1000 total events and 400 attack_type events. How can I show this in a single dashboard panel?

I tried to write this query:

index=* *jupiter* | stats count as "Total Traffic" count(eval(attack_type="*")) as "Attack Traffic"

but I am getting this error:

Error in 'stats' command: The eval expression for dynamic field 'attack_type=*' is invalid. Error='The expression is malformed. An unexpected character is reached at '*'.'.

Please help me with this.
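The error comes from the wildcard inside eval: attack_type="*" is not valid eval syntax. A hedged rewrite, assuming "attack traffic" simply means any event where attack_type is present:

index=* *jupiter*
| stats count as "Total Traffic", count(eval(isnotnull(attack_type))) as "Attack Traffic"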
I am working on Splunk Enterprise Security.

| savedsearch "Traffic - Total Count"

is working fine and giving me the desired output in search, but when I call it in the dashboard source code it does not show any result.

{
    "type": "ds.savedSearch",
    "options": {
        "query": "'| savedsearch \"Traffic - Total Count\"'",
        "ref": "Traffic - Total Count"
    },
    "meta": {
        "name": "Traffic - Total Count"
    }
}

Do I need to do any configuration to get output on this dashboard?
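In Dashboard Studio, a ds.savedSearch data source is normally defined with just "ref" (and optionally "app") pointing at the report name, with no "query" option, and the report has to be shared so the dashboard can read it. A sketch of what that might look like; the data source key and the app name are assumptions:

"dataSources": {
    "ds_trafficTotal": {
        "type": "ds.savedSearch",
        "options": {
            "ref": "Traffic - Total Count",
            "app": "SplunkEnterpriseSecuritySuite"
        },
        "name": "Traffic - Total Count"
    }
}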
Hi,

By default, if no timestamp exists in an event, Splunk falls back to the timestamp of the previous event. On one hand, I do want Splunk to do this, but on the other hand I don't want Splunk to treat it as a "Timestamp Parsing Issue" in the Data Quality dashboard. Is there any way to explicitly tell Splunk to do this? I just don't want Splunk to treat it as an error.

Thanks
Hi,

When using email templates, how do I capture the currently configured threshold value and the value that was observed? With the template below, I couldn't get the threshold value and the actual observed value.

Health Rule Violation: ${latestEvent.healthRule.name}
What is impacted: $impacted
Summary: ${latestEvent.summaryMessage}
Event Time: ${latestEvent.eventTime}
Threshold Value: ${latestEvent.threshold}
Actual Observed Value: ${latestEvent.observedValue}

Output:
We have a big application, containing several smaller applications, whose data comes into Splunk. Currently we map FQDNs to index names for the other applications, but this big application wants a single index for all of its FQDNs, and they want to differentiate their application data based on sourcetype. As of now we have only one sourcetype, which receives data from all the other applications.

Example: there is a Fruits application, and within it there are apple, orange, and pineapple applications. They want a single index for the Fruits application and want to differentiate by using sourcetype=apple, sourcetype=orange, and so on. For the remaining applications we simply map FQDN to index name in transforms.conf by using lookups and INGEST_EVAL. I can map all Fruits application FQDNs to a single index, but then all the logs will be mixed (apple, orange, and so on). How can we differentiate them by using sourcetype? Where and how do I need to write the logic?
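One possible way (a sketch, with placeholder host patterns and stanza names) is an index-time sourcetype override on the heavy forwarder or indexers, keyed off the host/FQDN. props.conf, applied to the existing sourcetype the Fruits data arrives with:

[your_existing_sourcetype]
TRANSFORMS-set_fruit_sourcetype = set_sourcetype_apple, set_sourcetype_orange

transforms.conf:

[set_sourcetype_apple]
SOURCE_KEY = MetaData:Host
REGEX = ^host::apple\S*\.example\.com$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::apple

[set_sourcetype_orange]
SOURCE_KEY = MetaData:Host
REGEX = ^host::orange\S*\.example\.com$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::orange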
I want to know which index Microsoft Defender logs are being stored in. I know some important fields that exist in Microsoft Defender, and now I want to find out whether they are being stored or not.
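A quick, hedged way to hunt for it is to list indexes and sourcetypes whose names look Defender-related, and then confirm with a raw search on the field names or values you already know; the sourcetype wildcard below is just a guess at typical add-on naming, and the quoted keyword is a placeholder to replace:

| tstats count where index=* sourcetype=*defender* by index, sourcetype

index=* earliest=-4h "<known Defender field name or value>"
| stats count by index, sourcetype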
Getting the following static errors in a Splunk SOAR PR review from the bot.

1)

{
    "minimal_data_paths": {
        "description": "Checks to make sure each action includes the minimal required data paths",
        "message": "One or more actions are missing a required data path",
        "success": false,
        "verbose": [
            "Minimal data paths: summary.total_objects_successful, action_result.status, action_result.message, summary.total_objects",
            " action one is missing one or more required data path",
            " action two is missing one or more required data path",
            " action three is missing one or more required data path"
        ]
    }
},

I have provided all the data paths in the output array in the <App Name>.json file. Is there any other place where I have to provide the data paths?

2)

{
    "repo_name_has_expected_app_id": {
        "description": "Validates that the app ID in the app repo's JSON file matches the recorded app ID for the app",
        "message": "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json",
        "success": false,
        "verbose": [
            "Could not find an app id for <App Name>. Please add the app id for <App Name> to data/repo_name_to_appid.json"
        ]
    }
}

How do we resolve this issue? Did I miss any file?
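For the first error, the validator looks at each action's "output" list in the app JSON, and the four paths it names generally need to appear there for every flagged action. A sketch of that block for one action (whether your existing entries already match these exact strings is worth double-checking); the second error refers to data/repo_name_to_appid.json, which appears to live outside the app package itself:

"output": [
    {"data_path": "action_result.status", "data_type": "string"},
    {"data_path": "action_result.message", "data_type": "string"},
    {"data_path": "summary.total_objects", "data_type": "numeric"},
    {"data_path": "summary.total_objects_successful", "data_type": "numeric"}
]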
Calculating metrics: I need to count the number of sensors that are created and monitored for each host. I have the index and sourcetype. I created about 7 different dashboards with multiple hosts on each dashboard, and I need a count of the number of sensors being monitored by each host.

index=idx_sensors sourcetype=sensorlog | stats count by host

The query above gives me all the hostnames that are being monitored, but the count gives me all the events. I just need the number of sensors per host.
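stats count by host counts events, not sensors. If the events carry a field identifying the sensor (sensor_name below is an assumed field name), a distinct count per host is probably closer to what's needed:

index=idx_sensors sourcetype=sensorlog
| stats dc(sensor_name) as sensor_count by host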
Hi everyone,

I'm running a query in Splunk using the dbxquery command and received the following error:

Error in 'script': Getinfo probe failed for external search command 'dbxquery'.

When I check Apps -> Manage Apps -> Splunk DB Connect, I see the version is 2.4.0. Please help me identify the cause and how to fix this error. Thank you!
Hi,

Can anyone please help with creating a regex to extract the first 12 words (words with characters/letters only) from the beginning of the field? Sharing a few samples with the required output:

1) 00012243asdsfgh - No recommendations from System A. Message - ERROR: System A | No Matching Recommendations
Required output: No recommendations from System A. Message - ERROR: System A | No Matching Recommendations

2) 001b135c-5348-4arf-b3vbv344v - Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1; ::: Other details - 001sss-445-4f45-b3ad-gsdfg34 - Incorrect page and placement found: Channel1;
Required output: Validation Exception reason - Empty/Invalid Page_Placement Value ::: Input received - Channel1;

3) 00assew-34df-34de-d34k-sf34546d :: Invalid requestTimestamp : 2025-01-21T21:36:21.224Z
Required output: Invalid requestTimestamp

4) 01hg34hgh44hghg4 - Exception while calling System A - null
Required output: Exception while calling System A - null
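A hedged starting point that strips the leading identifier and its separator and keeps the remainder; the field name "message" is an assumption, and samples 2 and 3 would still need extra logic to cut off the trailing details and the timestamp:

| rex field=message "^\s*\S+\s*(?:-|:+)\s*(?<clean_msg>.+)$"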
Hello,

I have a question about the SH deployer and search heads. We have three search heads within a cluster, and at some point the deployer connection got disconnected; now I am trying to reconnect it. Let me know what needs to be done. Is it just that we need to match the password of all search heads with the deployer?

Configurations I currently see:

On the search heads (1/2/3), /opt/splunk/etc/system/local/server.conf:

[shclustering]
conf_deploy_fetch_url = https://XXXXXX:8089
disabled = 0
mgmt_uri = https://XXXXXXX:8089
replication_factor = 2
shcluster_label = shcluster1
id = 1F81D83B
manual_detention = off

On the deployer, /opt/splunk/etc/system/local/server.conf:

[shclustering]
shcluster_label = shcluster1
pass4SymmKey = XXXXXXX

Thanks in advance for your help!
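Reconnecting members to a deployer generally comes down to two things: the [shclustering] pass4SymmKey must be the same on the deployer and on every member, and conf_deploy_fetch_url on each member must point at the deployer. A sketch with placeholders throughout; the plain-text key is hashed after restart. On each search head, either edit server.conf and restart:

[shclustering]
conf_deploy_fetch_url = https://<deployer-host>:8089
pass4SymmKey = <same key as configured on the deployer>

or use the CLI:

splunk edit shcluster-config -conf_deploy_fetch_url https://<deployer-host>:8089 -secret <same key as the deployer> -auth admin:<password>
splunk restart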
We have a lookup that has all kinds of domain (DNS) information in it, with about 60 fields like create date, ASN, name server IP, and MX IP, many of which are usually populated. But there are several fields which have no data - 10 to 20 on any given search (assuming that they are null). The empty fields are likely to vary on each search. In other words, some domains will have an MX record and some will not, but if they are in this lookup, they will always have a create date. I am presenting this data on a domain lookup dashboard, using "| transpose" so that you have a table with the field name and value on the dashboard. I would like to show only a field and a value where there is returned data, and filter out (not show) a field which is null. Is there a way to do this?
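One sketch of how to drop the empty rows after transpose: with a single result, transpose produces a "column" field and a "row 1" field, so you can filter on the latter (the rename is optional):

<your base search>
| transpose
| rename column AS field, "row 1" AS value
| where isnotnull(value) AND value!=""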
Hello,

If you have app configuration that is specific to an HF (for example, after configuring it using the HF web GUI for a specific site), is it still recommended to use the deployment server, given that this requires syncing/copying the HF app/local conf back to the deployment server's etc/deployment-apps/app/local to avoid any deletion when the deployment server reloads or the app is updated from the DS? I guess using the DS is good for centralizing the same configurations across HFs?

https://docs.splunk.com/Documentation/Splunk/9.3.0/Updating/Createdeploymentapps
"The only way to allow an instance to continue managing its own copy of such an app is to disable the instance's deployment client functionality. If an instance is no longer a client of a deployment server, the deployment server will no longer manage its apps."

Thanks.
Stupid form editor adds extra CRs. I'm having trouble getting this search to work as desired. I've tried these two methods and can't get them to work:

eventtype="x" Name="x"
| fields Name, host
| dedup host
| stats count by host
| appendpipe [stats count | where count=0 | eval host="Specify your text here"]

and using the fillnull command.

Here is my search:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106 (field2="20005") OR (field2="20006") OR (field2="20007") OR (field2="666")
| stats count(field2) by field2, field3
| sort count(field2)

In this case the value field2="666" does not exist in the results. Here are the results I get:

field2    field3                    count(field2)
20005     This is field3 value 1    2
20006     This is field3 value 2    6
20007     This is field3 value 3    13

To summarize, I want to search for all the values of field2 and return the counts for each field2 value even if the field2 value is not found in the search; so count(field2) for field2=666 would be 0. As follows:

field2    field3                    count(field2)
666       <empty string>            0
20005     This is field3 value 1    2
20006     This is field3 value 2    6
20007     This is field3 value 3    13

This is a simplified example. The actual use case is that I want to search one data set and return all the field2 values, and then search for those values in the first data set. The actual search I'm running looks like this:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106 | fields field2 | dedup field2 | return 1000 field2]
| stats count(field2) by field2, field3
| sort count(field2)

I want to find all the field2 values when field1=20250106 and then find the counts of those values in the field1!=20250106 events (even when the count of some field2 values is 0 in the results).
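One common pattern (a sketch built on the search above) is to append a zero-count row for every expected field2 value from the field1=20250106 data set and then re-aggregate, so values with no matching events still come out with count 0:

index=idx1 host=host1 OR host=host2 source=*filename*.txt field1!=20250106
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106 | fields field2 | dedup field2 | return 1000 field2]
| stats count as count, values(field3) as field3 by field2
| append
    [search index=idx1 host=host1 OR host=host2 source=*filename*.txt field1=20250106
     | stats count by field2
     | eval count=0]
| stats sum(count) as count, values(field3) as field3 by field2
| sort count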