All Posts



@livehybrid  Thanks for the response. Yes, some servers have custom certificates, and it is those servers we are having the issue with. If I change back to the default local certificate, verification works:

root@test02:/opt/splunk/bin# ./splunk cmd openssl verify -verbose -x509_strict -CAfile /opt/splunk/etc/auth/cacert.pem.default /opt/splunk/etc/auth/server.pem_old
/opt/splunk/etc/auth/server.pem_old: OK

With the custom certificate it fails:

root@test02:/opt/splunk/bin# ./splunk cmd openssl verify -verbose -x509_strict -CAfile /opt/splunk/etc/auth/cacert.pem /opt/splunk/etc/auth/server.pem
error 20 at 0 depth lookup: unable to get local issuer certificate

./splunk cmd btool server list --debug kvstore
/opt/splunk/etc/system/default/server.conf [kvstore]
/opt/splunk/etc/system/default/server.conf clientConnectionPoolSize = 500
/opt/splunk/etc/system/default/server.conf clientConnectionTimeout = 10
/opt/splunk/etc/system/default/server.conf clientSocketTimeout = 300
/opt/splunk/etc/system/default/server.conf dbCursorOperationTimeout = 300
/opt/splunk/etc/system/default/server.conf dbPath = $SPLUNK_DB/kvstore
/opt/splunk/etc/system/default/server.conf defaultKVStoreType = local
/opt/splunk/etc/system/default/server.conf delayShutdownOnBackupRestoreInProgress = false
/opt/splunk/etc/system/default/server.conf disabled = false
/opt/splunk/etc/system/default/server.conf initAttempts = 300
/opt/splunk/etc/system/default/server.conf initialSyncMaxFetcherRestarts = 0
/opt/splunk/etc/system/default/server.conf kvstoreUpgradeCheckInterval = 5
/opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupDelay = 60
/opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupEnabled = true
/opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupRetries = 2
/opt/splunk/etc/system/default/server.conf minSnapshotHistoryWindow = 5
/opt/splunk/etc/system/default/server.conf oplogSize = 1000
/opt/splunk/etc/system/default/server.conf percRAMForCache = 15
/opt/splunk/etc/system/default/server.conf port = 8191
/opt/splunk/etc/system/default/server.conf replicaset = splunkrs
/opt/splunk/etc/system/default/server.conf replicationWriteTimeout = 1800
/opt/splunk/etc/system/default/server.conf shutdownTimeout = 100
/opt/splunk/etc/system/default/server.conf sslVerifyServerCert = false
/opt/splunk/etc/system/default/server.conf sslVerifyServerName = false
/opt/splunk/etc/system/default/server.conf storageEngine = wiredTiger
/opt/splunk/etc/system/default/server.conf storageEngineMigration = false
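As an aside for anyone trying to eyeball a flattened btool dump like the one above: each line is "<conf file path> <key> = <value>", so a few lines of Python can tabulate it. This is just a sketch using a trimmed sample of the output above:

```python
# Parse btool --debug output: each line is "<conf file path> <key> = <value>".
# Sample trimmed from the dump above.
btool_output = """\
/opt/splunk/etc/system/default/server.conf port = 8191
/opt/splunk/etc/system/default/server.conf replicaset = splunkrs
/opt/splunk/etc/system/default/server.conf storageEngine = wiredTiger
"""

settings = {}
for line in btool_output.splitlines():
    path, _, kv = line.partition(" ")      # strip the conf-file column
    key, _, value = kv.partition(" = ")    # split "key = value"
    settings[key] = value

print(settings)
# → {'port': '8191', 'replicaset': 'splunkrs', 'storageEngine': 'wiredTiger'}
```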
It didn't work.  The alert is triggered but the batch still didn't run. 
I know you accepted an answer in "Is there a way to group the data by time and another field?" There is a simpler, perhaps more intuitive approach.

index=my_app sourcetype=my_logs:hec (source=my_Logger) msgsource="*" msgtype="*MyClient*" host=*
    [ inputlookup My_Application_Mapping.csv | search Client="SomeBank" | table appl ]
| eval total_milliseconds = 1000 * (strptime("00:" . elapsed, "%T.%N") - relative_time(now(), "-0d@d"))
| eval timebucket = case(total_milliseconds <= 1000, "TXN_1000", total_milliseconds <= 2000, "1sec-2sec", total_milliseconds <= 5000, "2sec-5sec", true(), "5sec+")
| rename msgsource as API
| bucket _time span=1d
| eventstats avg(total_milliseconds) as AvgDur by _time API
| stats count by AvgDur _time API timebucket
| tojson output_field=api_time _time API AvgDur
| chart values(count) over api_time by timebucket
| addtotals
| spath input=api_time
| rename time as _time
| fields - api_time

Main ideas:
* Organizing time buckets with the case function is easier to maintain.
* Use the chart command to perform the transpose over one composite field (api_time).
* Use tojson to pack all the information needed after the transpose.

In addition, using strptime to calculate total_milliseconds is more maintainable. (It would be even simpler if Splunk didn't have a bug near epoch zero.)
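For what it's worth, the case() thresholds in that search are easy to sanity-check outside SPL. A quick Python mirror of the same boundaries (purely illustrative; same bucket labels as the search):

```python
def time_bucket(total_milliseconds: float) -> str:
    """Mirror of the SPL case() above: first matching threshold wins."""
    if total_milliseconds <= 1000:
        return "TXN_1000"
    if total_milliseconds <= 2000:
        return "1sec-2sec"
    if total_milliseconds <= 5000:
        return "2sec-5sec"
    return "5sec+"

print([time_bucket(ms) for ms in (800, 1500, 4999, 12000)])
# → ['TXN_1000', '1sec-2sec', '2sec-5sec', '5sec+']
```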
@elend check this https://community.splunk.com/t5/Getting-Data-In/Reducing-maxWarmDBCount-below-current-warm-bucket-count/m-p/86115 
Hi @Fa1 

Are you able to either post your serverclass.conf, or double-check it to ensure there are no syntax errors within it? You could also try running btool to check it:

$SPLUNK_HOME/bin/splunk cmd btool serverclass list --debug

If this doesn't highlight any issues then it would be worth investigating a known issue at https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable which looks to be caused by a bad /etc/hosts file. The resolution is to edit the /etc/hosts file and use the correct format and entries, ensuring it starts:

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

Did this answer help you? If so, please consider:
* Adding karma to show it was useful
* Marking it as the solution if it resolved your issue
* Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
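If you want to sanity-check the /etc/hosts format before restarting, something along these lines can help. This is only a sketch: the regex checks just the two loopback lines the KB article calls out, against an inline sample rather than the live file:

```python
import re

# The two entries the KB article expects at the top of /etc/hosts.
sample = """\
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
"""

# Loopback address, whitespace, "localhost", then optional aliases.
pattern = re.compile(r"^(127\.0\.0\.1|::1)\s+localhost(\s+\S+)*$")

ok = all(pattern.match(line) for line in sample.splitlines())
print(ok)  # → True
```

To check a real host, replace the sample with the first two lines of /etc/hosts.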
Yes, sure. Also, based on HOW TO: Reduce the Amount of Hot and Warm Buckets. | Splunk, I already meet the conditions: the changes trigger a bucket roll whenever one of the listed thresholds is reached. In my case one of those triggers has already been reached, but the data (DB) still has not moved.
Additionally, I also checked and reduced the max size for one sample index, so I changed the values of homePath.maxDataSizeMB and maxWarmDBCount. I did this last week, but when I check the total DB size of the index data, there is still no data moved.
Hi @Salvador_Dalí 

To create a simple custom alert action that runs a batch file (script.bat) or PowerShell script (script.ps1) on Windows in Splunk Enterprise 9.x, you'll need to build a basic Splunk app with a custom modular alert. This replaces the deprecated "run a script" action.

1. Create a new app directory on your Splunk server: navigate to $SPLUNK_HOME/etc/apps/ and create a new folder, e.g., myorg_custom_action.

2. Create default/alert_actions.conf with:

[my_script_action]
is_custom = 1
label = Run My Script
description = Runs a batch or PowerShell script
payload_format = json

3. Create default/app.conf with basic app metadata:

[ui]
# Hide from the app list because this isn't a UI based app
is_visible = 0

4. Create bin/my_script_action.py (the Python script that executes your batch/PS script). Use this template to get you started:

import sys
import json
import subprocess

# Read payload from stdin
payload = json.loads(sys.stdin.read())

# Define your script path (absolute path on the Splunk server)
script_path = "C:\\path\\to\\your\\script.bat"  # Or .ps1 for PowerShell

# Run the script (use powershell.exe for .ps1)
if script_path.endswith('.ps1'):
    subprocess.call(['powershell.exe', '-File', script_path])
else:
    subprocess.call([script_path])

sys.exit(0)

If you want to pass alert data to the script, modify the Python to write the payload to a file or pass it as arguments, then adjust your batch/PS script accordingly.

5. Restart Splunk ($SPLUNK_HOME/bin/splunk restart). The action "Run My Script" will appear in alert configuration under "Add Actions".

6. Test: create a test alert, add your custom action, and trigger it to verify the script runs.

This is a minimal setup; I would recommend extending it for error handling or parameters as required. Custom alert actions are modular apps that allow flexible scripting. The Python handler example reads the alert payload and executes your external script using subprocess.
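To illustrate the "pass alert data as arguments" variant mentioned in that template, here is a minimal sketch. The payload literal is a hypothetical, trimmed stand-in for the JSON Splunk writes to stdin, and the script path is the same placeholder as in the template:

```python
import json

# Hypothetical, trimmed stand-in for the JSON payload Splunk sends on stdin.
raw = '{"search_name": "My Alert", "result": {"host": "web01", "count": "42"}}'
payload = json.loads(raw)

# Placeholder path from the template above.
script_path = "C:\\path\\to\\your\\script.bat"

# Build extra arguments for subprocess.call([script_path, *extra_args]).
extra_args = [payload.get("search_name", ""),
              payload.get("result", {}).get("host", "")]

print(extra_args)  # → ['My Alert', 'web01']
```

Your batch script would then read these as %1 and %2 (or $args[0] / $args[1] in PowerShell).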
This works on Windows but ensure the Splunk service account has permissions to run the scripts.
Hi @elend 

The issue is likely that reducing maxWarmDBCount alone doesn't force immediate bucket rolling. Splunk only moves buckets from warm to cold during natural bucket transitions, not retroactively for existing buckets that exceed the new limit. There's also some good info at https://docs.splunk.com/Documentation/Splunk/latest/Indexer/HowSplunkstoresindexes#How_buckets_roll_through_their_stages which might help describe your situation, and also https://splunk.my.site.com/customer/s/article/HOW-TO-Reduce-the-Amount-of-Hot-and-Warm-Buckets
Didn't work. The alert is triggered but the batch didn't run.
I will answer my own question:
* SPL2 currently still uses the KV store, and the file-sync implementation is neither complete nor fully functional as of today.
* One needs to apply an Enterprise license for SPL2 to be enabled.
@elend  Splunk bucket transitions from Hot → Warm → Cold → Frozen are controlled by multiple parameters: maxHotBuckets, maxDataSize or homePath.maxDataSizeMB, maxHotSpanSecs, and maxWarmDBCount. Simply reducing maxWarmDBCount may not trigger a bucket roll if other thresholds, such as time or size, haven't been met. For instance, warm buckets will remain as-is if they haven't exceeded the defined size or time limits.
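As a concrete illustration, these settings sit together in an indexes.conf stanza roughly like this. This is only a sketch: the index name and values are made up, not recommendations.

```
# Sketch only: index name and values are illustrative.
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
# number of hot buckets that can be open at once
maxHotBuckets = 3
# size at which a hot bucket rolls to warm
maxDataSize = auto
# warm buckets kept; exceeding this rolls the oldest warm bucket to cold
maxWarmDBCount = 50
# cap on hot+warm storage; exceeding it also rolls warm buckets to cold
homePath.maxDataSizeMB = 100000
```

Note that lowering maxWarmDBCount or homePath.maxDataSizeMB only takes effect as buckets naturally roll; it does not immediately move existing warm buckets.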
I want to ask something about my lab clustered indexer. I have reached maximum primary storage capacity on my indexer. Last time I just reduced maxWarmDBCount based on the existing DBs created, and after 1-2 days the data rolled to secondary storage. This time I applied the same modification again, but nothing has rolled for a week. I have cross-checked the DBs created in the index and the count is still above the limit I set; I have also checked that the config has been distributed. Has anyone faced this scenario? #splunk
@Fa1  Seems like this is a known issue with 9.4.0. Please check below for the workaround:

https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable

Original post: https://community.splunk.com/t5/Splunk-Enterprise/Forwarder-Management-UI-error-on-new-install-9-4-0/m-p/710504

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@Salvador_Dalí  The "run a script" alert action is officially deprecated. You can instead try a custom alert action app:

1. Create a custom alert action app with bin, default, and metadata folders, e.g. $SPLUNK_HOME/etc/apps/custom_alert_action/bin/

2. Put your script.bat inside the bin/ folder.

3. Inside default/, create alert_actions.conf:

[run_script]
is_custom = 1
label = Run Script
description = Executes a script
script = script.bat

4. Also in default/, create app.conf:

[install]
state = enabled

[ui]
is_visible = true

5. Restart Splunk. After restarting, your alert action "Run Script" will show up in the alert UI.

https://help.splunk.com/en/splunk-enterprise/alert-and-respond/alerting-manual/9.4/configure-alert-actions/run-a-script-alert-action
Hello @livehybrid, thank you for your support! I've set the following environment variables:

SPLUNK_START_ARGS: --accept-license
TZ: Europe/Berlin
SPLUNK_PASSWORD: XXXXXXX

I run Splunk on a Kubernetes (k3s) cluster, so there are many variables managed by k3s. I've uploaded the output of a failed start to https://bloms.de/download/splunk-failed-start.txt

Thank you
Dieter
I don't understand why the legacy 'run a script' alert action has been deprecated. The official guidelines for creating a 'Custom Alert Action' are too complicated to follow. I attempted to find a guide via Google, but there are too many conflicting methods, and I consistently failed to implement them. I just want a simple and straightforward guide to create a 'Custom Alert Action' that runs a batch file (script.bat) or a PowerShell script file (script.ps1) when the alert is triggered. Or just a 'custom alert action' that does exactly the same thing as the deprecated 'run a script' alert action (just type the batch file name and that's it).

Environment: Splunk Enterprise 9.1 (Windows)
Dears, Hope you are doing well, I would like to request your assistance regarding an issue we've encountered after upgrading Splunk Enterprise from version 9.1.5 to 9.4.0. Since the upgrade, the Forwarder Management (Deployment Server) functionality is no longer working as expected. Despite multiple troubleshooting attempts, the issue persists. I have attached a screenshot showing the specific error encountered. I would greatly appreciate your guidance or recommendations to help resolve this matter. Please let me know if any additional logs or configuration details are needed.     Thank you in advance for your support.
Hi @tech_g706 

Do you have custom SSL Certs on your server? Please can you confirm the output of the following, which might help us dig down. Thanks

$SPLUNK_HOME/bin/splunk cmd btool server list --debug kvstore