All Posts


Using the Splunk App for SOAR, I am creating events in SOAR from a dashboard in Splunk. I'm facing an issue where repeating the same form submission in the dashboard results in multiple artifacts being created in the one event rather than a new event being created for each submission. Events in Splunk are held for 30 days, so a time-sensitive request may have been submitted and run 30 days ago, but if it's requested again within those 30 days it won't generate a new event and run the playbook. I could probably add a unique ID to the form submissions, which would result in a new container being made (as the artifact values wouldn't be identical), but I was wondering if there's an option in the app or in SOAR to always generate a new container? Thanks
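A minimal sketch of the unique-ID workaround described above, assuming the dashboard search supplies the fields that become the SOAR artifact (the field name submission_id is a placeholder, not something defined by the app):

    ... your dashboard search ...
    | eval submission_id=sha256(tostring(now()) . "-" . tostring(random()))

Because submission_id differs on every run, the artifact values are never identical, so a new container should be created instead of the submission being appended to the existing event.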
Hi Team, what would be the best way to send the logs of the apps and add-ons installed on our on-prem HF and SH to the cloud environment?
We have implemented dual ingestion for the syslog servers. We can see the logs in Splunk Cloud, but a few log files are missing: on-prem has all the data, while in the cloud we are missing some files that have a large count of logs. Please help me understand how we can get the data from those log files into Splunk Cloud.
Splunk Cloud logs of the syslog server
Syslog server logs on-prem
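One place to check what happened to a specific missing file is splunkd.log on the host doing the dual forwarding. A sketch of such a search, with the file path as a placeholder:

    index=_internal sourcetype=splunkd
        (component=TailReader OR component=TailingProcessor OR component=WatchedFile)
        "/var/log/remote/your_missing_file.log"

Messages from these components generally show whether the file was picked up, skipped, or stopped partway through.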
Hello,
Splunk Web lets you make changes to an app, but by default it saves all of those changes in the app's local directory. Is there a possibility to have the changes saved directly to the default directory instead?

Some more information about the background of the question: for my Splunk instances, config management is done using GitLab. All config files in the apps are pushed to the corresponding Splunk instances in the default directory. When I clone an app to my Dev Splunk instance and make changes, these are saved in the corresponding local directory. Before I can push the changes to my Prod Splunk instance via GitLab, I have to manually copy the changes from the local config files to the default config files. This step is quite tedious as soon as more than a single config file is involved.

Have any of you had the same problem, and can you give me a tip as to whether this is technically possible in Splunk?

Best regards
Lukas
Hi @Abass42
Are you able to see from splunkd.log which of the outputs are connecting, and any error messages around connections? Have a look for "TcpOutputProc" and see if there are any events which give us any clues.
Regarding the trimming of data - this should be something you can do using Splunk props/transforms - let me know if you want some assistance with this.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
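On the trimming point, a minimal props.conf sketch of the kind of change that can be made at parse time - the sourcetype name and the pattern are placeholders, not taken from this thread:

    # props.conf on the indexer or heavy forwarder (parsing tier)
    [your_sourcetype]
    # Strip everything from ", debug_payload=" to the end of each event
    SEDCMD-trim_debug = s/, debug_payload=.*$//
    # Optionally cap very long events at 512 characters
    TRUNCATE = 512

For heavier rewrites, or for dropping whole subsets of events to the nullQueue, transforms.conf would be the place to do it.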
Hi @krishnaunni
Given that you are limited to RHEL 7.9, I would recommend moving to Splunk 9.2.x (9.2.5), which is supported until Jan 31, 2026.
RHEL 7.9 is supported up to Splunk Enterprise 9.2.x - specifically, it runs kernel 3.x, which is supported up to 9.2.x but is marked as deprecated there ("Splunk supports this platform and architecture, but might remove support in a future release"). Kernel 3.x is listed as removed in the 9.3.x release notes: https://docs.splunk.com/Documentation/Splunk/9.3.0/ReleaseNotes/Deprecatedfeatures#:~:text=in%20this%20version.-,Removed%20operating%20systems%20in%20version%209.3,-The%20following%20table
Regarding your mention of HF/DS - these are actually the same installation package. Splunk Enterprise is the installation, and the configuration applied to it determines whether it is an HF / DS / Search Head (SH) etc., with the exception of the Universal Forwarder (UF), which is a smaller package with fewer features available (such as the Python environment).
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @marksheinbaum
It is likely that "jobName=(W6) Power Quality Read - MT - IR Meters Pascal" is being extracted as only "jobName=(W6)" because the extraction breaks on the space, since the value is not enclosed in quotes.
You could create an eval field to extract the full jobName - the following is an example using the rex command:

    | makeresults
    | eval _raw="runID=79004968, jobID=72212875, jobName=(W6) Power Quality Read - MT - IR Meters Pascal, jobType=Meter Read Job, status=Failure"
    | rex field=_raw "jobName=(?<fullJobName>[^,]+)"

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
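If the same extraction should be available without the inline rex, a search-time field extraction can also be defined in props.conf - a sketch, with a placeholder sourcetype name:

    # props.conf - the sourcetype name is a placeholder
    [your_meter_sourcetype]
    # Capture everything after "jobName=" up to the next comma
    EXTRACT-fullJobName = jobName=(?<fullJobName>[^,]+)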
@krishnaunni
What is the supported version of Splunk Enterprise for RHEL 7.9?
For a list of supported operating systems, see https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements
NOTE: Splunk doesn't care what flavor of Linux you use. As long as the kernel is a supported version, you'll be fine.
Does Splunk Enterprise include Heavy Forwarders (HF) and Deployment Servers (DS) by default, or do these components need to be installed separately?
Splunk Enterprise (the full package) includes all Splunk components except the Universal Forwarder. Please find the package details below:
Splunk Enterprise: https://www.splunk.com/en_us/download/splunk-enterprise.html
Splunk Universal Forwarder: https://www.splunk.com/en_us/download/universal-forwarder.html
Splunk Enterprise is a full-featured platform that includes the capabilities of both Heavy Forwarders and Deployment Servers within its installation. These aren't separate packages you need to install - they're roles you configure within a Splunk Enterprise instance.
@livehybrid Okay, what role capabilities should a user have in order to share a dashboard within an app, given that their role already has write permissions within the app?
1. Your copied event contents are inconsistent - sometimes you have key=value, sometimes key = value (with spaces).
2. We don't know how your extractions are defined. Default automatic K/V extraction would probably stop at the first space in all cases. If you have custom regex-based extractions, you have to check your regexes.
There are only two installers - the Universal Forwarder and the "full" Splunk Enterprise package. DS, HF, indexer and so on are just server roles which are configured on the "full" installation.
https://docs.splunk.com/Documentation/Splunk/latest/Installation/Systemrequirements
https://docs.splunk.com/Documentation/Splunk/latest/Installation/HowtoupgradeSplunk
Dear Team,
I am currently running Splunk Enterprise version 9.1.0.1 on a RHEL 7.9 system. I would like to clarify the following:
1. What is the supported version of Splunk Enterprise for RHEL 7.9?
2. Does Splunk Enterprise include Heavy Forwarders (HF) and Deployment Servers (DS) by default, or do these components need to be installed separately?
3. Given that I currently have Splunk 9.1.0.1 installed on RHEL 7.9, what would be the recommended version of Splunk Enterprise moving forward?
I appreciate your assistance and look forward to your response.
I have events like the following. The field jobName contains "(W6) Power Quality Read - MT - IR Meters Pascal", delimited by a comma. Splunk is representing the field jobName as containing only "(W6)", truncating the remainder of the value. I don't believe it is terminating because of the ") " in the value. Please advise if you have a suggestion.
04/08/2025 17:35:33 runID = 79004968, jobID=72212875, jobName=(W6) Power Quality Read - MT - IR Meters Pascal, jobType=Meter Read Job,status = Failure,started = Tue Apr 08 09:35:13 GMT 2025,finished = Tue Apr 08 10:48:29 GMT 2025,elapsed = 1h 13m 16s ,Process_Index_=0,Write_Index_=0,device_count=625997,imu_device_count=0,devices_in_nicnac=0,members_success=625879,members_failed=118,members_timed_out=0,members_retry_complete=518,devices_not_in_cache=0,nicnac_sent_callback=3144189,nicnac_complete_callback=625879,nicnac_failed_callback=0,nicnac_timeout_callback=518,unresolved_devices=791,process_batch=12555,process_1x1=0,name_resolver_elapsed=384249,process_elapsed_ms=1145247,jdbc_local_elapsed_ms=0,jdbc_net_elapsed_ms=1036711,load_device_elapsed_ms=18697
What you are asking does not make much sense - the sparkline is associated with the status (which is why it appears in the same column as the other stats related to each status). What is it that you are hoping to represent by making this a column?
So the title is pretty self-explanatory. I have been approached and asked to trim logs. I had initially installed and tested Cribl, and, fast forward to now, I am still doing a bit of testing. I am now also using Datadog, a tool that we already have installed and are already paying for; they approached me with a similar proposition and use cases.

I am having issues sending data to both destinations and having it appear in both - I have only been able to get one tool to work at any given time. I guess my question is: can a forwarder that forwards all of the data it takes in send all of that data to two different destinations to be parsed and passed back in?

Currently, this is what I have:
1 indexer
1 forwarder
1 search head

I have some data being monitored locally on the forwarder, and it sends it over to the indexer. Any data the forwarder sends needs to be sent to and modified by Cribl and Datadog. I have this as my outputs.conf currently:
When I have the configuration like this, through the forwarder I get:
I guess it's too much to send.

For Datadog, we are sending data from the forwarder to the Datadog agent (installed on the indexer) over Splunk TCP, which then uses the HEC endpoint as the destination.

I am trying to understand the indexing queues on the forwarder. Currently, I am sending data to the indexer, and it is searchable through the search head, but I do not see any indexing or anything happening on the forwarder. How do I read and understand exactly what is happening, and where do I need to investigate to see what is happening to the data as I send it out? Any docs or assistance is greatly appreciated.

Thank you
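For reference, a dual-destination outputs.conf on the forwarder generally follows the pattern sketched below - the group names, hosts, and ports are placeholders, not values from this post:

    # outputs.conf on the forwarder - sketch only
    [tcpout]
    # Listing two target groups clones every event to both destinations
    defaultGroup = cribl_group, datadog_group

    [tcpout:cribl_group]
    server = cribl-host.example.com:9997

    [tcpout:datadog_group]
    server = indexer-host.example.com:9514

To see what the forwarder itself is doing with its output queues, the queue metrics in metrics.log can be charted - for example (the host name is a placeholder):

    index=_internal host=your_forwarder source=*metrics.log* group=queue
    | timechart avg(current_size) by name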
Thank you.... yes, that worked and will give me something to work with.   Much appreciated!
Hi @madhav_dholakia
When you run the same loadjob in the SPL search bar at the same time as in the dashboard, do you get the same (old) results?
Does the scheduled search job you're loading have the earliest/latest fixed in the SPL or in the search config? What is this set to?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
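For a quick side-by-side check in the search bar, loadjob can reference the scheduled search directly - a sketch, with placeholder user/app/search names:

    | loadjob savedsearch="your_user:your_app:your_scheduled_search"

If this returns the same stale results as the dashboard, the problem lies with the scheduled job itself rather than with the dashboard.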
Are we able to create a saved filter in Mission Control that can be shared across users? Just like in incident review in ES? 
Hi @666Meow
Looking at the `investigation_get_current_status` macro - this is expecting a JSON string for the status field. However, in my tests the mcincidents command returns a number (in my case "1") for the field "status". I found the following mappings for the numeric values, which should help:
0 = Unassigned
1 = New
2 = In Progress
3 = Pending
4 = Resolved
5 = Closed
You may also find that |`mc_incidents` gives you some of the info you need - this has a "status_name" field for the status already.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
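A minimal sketch of applying that mapping with eval/case, assuming the numeric field really is called status as in the tests above:

    | mcincidents
    | eval status_name=case(tonumber(status)==0, "Unassigned",
                            tonumber(status)==1, "New",
                            tonumber(status)==2, "In Progress",
                            tonumber(status)==3, "Pending",
                            tonumber(status)==4, "Resolved",
                            tonumber(status)==5, "Closed")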
Thanks @ITWhisperer
I've used addtotals. Just one more thing left now: how can I transform the last row, Trend, to become the last column?