All Posts



@elend  Splunk bucket transitions from Hot → Warm → Cold → Frozen are controlled by multiple parameters: maxHotBuckets, maxDataSize (or homePath.maxDataSizeMB), maxHotSpanSecs, and maxWarmDBCount. Simply reducing maxWarmDBCount may not trigger a bucket roll if other thresholds, such as time or size, haven't been met. For instance, warm buckets will remain as-is if they haven't exceeded the defined size or time limits.
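As a sketch, all of those parameters live in a per-index stanza in indexes.conf. The index name and values below are illustrative only, not recommendations:

```
[my_index]
maxHotBuckets = 3                # max concurrent hot buckets
maxDataSize = auto               # size at which a hot bucket rolls to warm
maxHotSpanSecs = 86400           # max time span covered by a hot bucket
maxWarmDBCount = 300             # warm buckets kept before rolling to cold
homePath.maxDataSizeMB = 500000  # size cap on hot+warm (home) storage
```

A warm bucket rolls to cold when maxWarmDBCount or homePath.maxDataSizeMB is exceeded, so lowering one parameter alone may not be enough if the other constraint is not yet hit.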
I want to ask about my lab clustered indexer. The primary storage on my indexer is at maximum capacity. Last time, I simply reduced maxWarmDBCount based on the number of existing DBs, and after 1-2 days the buckets rolled to secondary storage. This time I applied the same modification, but nothing has rolled for a week. I have cross-checked that the number of DBs in the index is still above the limit I set, and I have also verified that the config was distributed. Has anyone faced this scenario?  #splunk
@Fa1  This seems to be a known issue with 9.4.0. Please check the workaround here: https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable Original post: https://community.splunk.com/t5/Splunk-Enterprise/Forwarder-Management-UI-error-on-new-install-9-4-0/m-p/710504 Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
@Salvador_Dalí  The "run a script" alert action is officially deprecated. A better approach is a custom alert action app:
1. Create the app with bin, default, and metadata folders, e.g. $SPLUNK_HOME/etc/apps/custom_alert_action/bin/
2. Put your script.bat inside the bin/ folder.
3. Inside default/, create alert_actions.conf with a [run_script] stanza setting is_custom = 1, label = Run Script, description = Executes a script, and script = script.bat
4. Also in default/, create app.conf with [install] state = enabled and [ui] is_visible = true
5. Restart Splunk. After restarting, your "Run Script" alert action will show up in the alert UI.
https://help.splunk.com/en/splunk-enterprise/alert-and-respond/alerting-manual/9.4/configure-alert-actions/run-a-script-alert-action Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
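The two conf files described above can be sketched as follows. This is a minimal example; the app name custom_alert_action and the stanza name run_script are illustrative choices, not fixed names:

```
# $SPLUNK_HOME/etc/apps/custom_alert_action/default/alert_actions.conf
[run_script]
is_custom = 1
label = Run Script
description = Executes a script
script = script.bat

# $SPLUNK_HOME/etc/apps/custom_alert_action/default/app.conf
[install]
state = enabled

[ui]
is_visible = true
```

The script referenced by the stanza is looked up in the app's bin/ directory, so script.bat must sit in $SPLUNK_HOME/etc/apps/custom_alert_action/bin/.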
Hello @livehybrid , thank you for your support! I've set the following environment variables: SPLUNK_START_ARGS: --accept-license TZ: Europe/Berlin SPLUNK_PASSWORD: XXXXXXX I run Splunk on a Kubernetes (k3s) cluster, so there are many variables managed by k3s. I've uploaded the output of a failed start to https://bloms.de/download/splunk-failed-start.txt   Thank you Dieter
I don't understand why the legacy 'run a script' alert action has been deprecated. The official guidelines for creating a 'Custom Alert Action' are too complicated to follow. I tried to find a guide via Google, but there are too many conflicting methods, and I consistently failed to implement them. I just want a simple, straightforward guide to creating a 'Custom Alert Action' that runs a batch file (script.bat) or a PowerShell script file (script.ps1) when the alert is triggered. Or just a 'custom alert action' that does exactly the same thing as the deprecated 'run a script' alert action (just type the batch file name and that's it).   Environment: Splunk Enterprise 9.1 (Windows)
Dears, Hope you are doing well, I would like to request your assistance regarding an issue we've encountered after upgrading Splunk Enterprise from version 9.1.5 to 9.4.0. Since the upgrade, the Forwarder Management (Deployment Server) functionality is no longer working as expected. Despite multiple troubleshooting attempts, the issue persists. I have attached a screenshot showing the specific error encountered. I would greatly appreciate your guidance or recommendations to help resolve this matter. Please let me know if any additional logs or configuration details are needed.     Thank you in advance for your support.
Hi @tech_g706  Do you have custom SSL certs on your server? Could you share the output of the following command, which might help us dig further? Thanks $SPLUNK_HOME/bin/splunk cmd btool server list --debug kvstore  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @bigchungusfan55  Have you created the actual collections.conf collection stanza as well as creating the lookup definition? It sounds like either the name in the lookup definition (which is where you match the name you use after outputlookup/inputlookup/lookup) is incorrect, or the collection itself does not exist. Please can you review this and let us know?  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
I haven't found a fix, but this is how I've been working around it: In the detection search, make sure to call addinfo. Then you can still use info_min_time/info_max_time to filter; you just have to do the filtering yourself. Examples: index=StuffYouWant starttimeu=$info_min_time$ endtimeu=$info_max_time$ | ...   | from datamodel:"Authentication"."Failed_Authentication" | search _time>$info_min_time$ _time<$info_max_time$ ...
Hi @dbloms  What env variables and/or configs are you passing through to this container?  Thanks Will
Hi @pc1  On your host with the inputs configured, do you see anything in $SPLUNK_HOME/var/log/splunk/splunkd.log relating to this input not running? Or is there a log file in $SPLUNK_HOME/var/log/splunk/ relating to the app? What does it output when the modular input tries to run?  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
If this is an accurate representation of your data, I agree with @PickleRick that it is bad. You have 3 open braces and 2 close braces. Go back to your developers and ask them to redevelop the application producing these logs so that they are in a more reasonable format to process. If this is not an accurate representation of your data, please provide something which accurately represents the data you are dealing with, so we have a chance at suggesting something which might help you.
Ok. This is bad. This is ugly. If all your events look like this, you have a completely unnecessary header which just wastes space (and your license), and then an escaped payload which you have to unescape before you can do anything reasonable with it. Get rid of that header and ingest your messages as well-formed JSON, and your life will be much, much easier. In this form it's hard to do anything about extracting fields in the first place, since it's "kinda structured" data, so you can't just handle it with regexes. You could try to unescape it with a simple substitution, but be aware that depending on your data you might hit unexpected strings which will not unescape properly. Once you have the unescaped JSON, you can parse it with spath, but it will definitely not be a very fast solution.
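A rough sketch of that unescape-then-parse approach in SPL, assuming the escaped JSON sits in a field named MESSAGE_PAYLOAD (the field name and base search are illustrative, and the backslash escaping may need tweaking for your data):

```
<your_search>
| eval payload=replace(MESSAGE_PAYLOAD, "\\\\\"", "\"")
| spath input=payload
```

This is the naive substitution warned about above: it blindly turns every \" into ", so payloads that legitimately contain escaped quotes inside string values will be mangled.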
If you want to modify the displayed time so that whenever you search for the event you're shown the current time, you have to do it at search time. <your_search> | eval _time=now() The question is why you would do that. Time is one of the main and most important pieces of metadata about an event. And it has nothing to do with DATETIME_CONFIG - that setting only works during event ingestion. It modifies what timestamp will be assigned to the event. But each event gets its own timestamp when it's indexed, and you can't modify the indexed timestamp. You can only "cheat" during searching by overwriting the value as I've shown above.
Did you put <collection> in a collections.conf file, distribute it to all SHs, and restart Splunk?  Make sure the collections.conf file defines each field you want to use.
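A minimal sketch of the two pieces involved: the collections.conf stanza declaring the KV store collection, and the transforms.conf lookup definition pointing at it. The names my_collection, my_lookup, and the fields are illustrative only:

```
# collections.conf - declares the KV store collection and its fields
[my_collection]
field.username = string
field.count = number

# transforms.conf - the lookup definition used with inputlookup/outputlookup/lookup
[my_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, username, count
```

The name used in searches is the lookup definition name (my_lookup here), not the collection name, and both files must be distributed to every search head.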
"MESSAGE_PAYLOAD": "{\"applicationIdentifier\": \"a7654718-435f-4765-a324-d2b6d682b964\", \"timestamp\": \"2025-07-22 13:24:29 001\", \"information\": {\"someDetails\": [{\"sourceName\": \"NONE\"}]}
I am using the Cisco Security Cloud integration to try to import my Duo logs into Splunk Enterprise (on-prem). Following a plethora of directions, including the Duo Splunk Connector guide, I still cannot get it to work. No data comes through and it stays in a "Not Connected" status.  So far, I have verified that: - The Admin API token has the correct permissions - The integration is configured with the correct Admin API info (secret key, integration key, API hostname, etc.) - I am using the newest version of the app: Cisco Security Cloud    Does anyone have any tips for troubleshooting this issue? I cannot find any logs or anything to get a more detailed error than "Not Connected" when I am pretty sure it should be working.
That worked. Thank you so much. For other people who need help with this situation, a summary: My environment: a standalone Splunk Enterprise instance; an on-prem Exchange Server 2019 in the mailbox role, with a universal forwarder installed on the Exchange server. Actions to get Exchange logs: On the Splunk Enterprise instance: deploy the Splunk Add-on for Microsoft Exchange indexes (to easily manage indexes); deploy the TA-Exchange-Mailbox add-on at /opt/splunk/etc/apps/TA-Exchange-Mailbox; restart the Splunk service. On the Exchange server: deploy the TA-Exchange-Mailbox add-on at C:\Program Files\SplunkUniversalForwarder\etc\apps; restart the forwarder.