All Posts


Hi @Raja_Selvaraj

Can you confirm which server(s) you have put DATETIME_CONFIG = CURRENT on, and what type of instance it is (Universal Forwarder / Heavy Forwarder / Indexer)? This needs to be on the first full Splunk instance (HF or Indexer) that the data hits, as this is where the data is parsed.
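For reference, a minimal props.conf sketch to apply on that first full instance (the sourcetype name is a placeholder):

    [your_sourcetype]
    DATETIME_CONFIG = CURRENT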
Hi @Splunkie

Do you want this to affect the raw data (i.e. when it is indexed), or do you want the original string to remain in the data while also having a field without the suffix?

You could do the following at search time:

    | rex field=Username_Field mode=sed "s/ sophos_event_input$//"

(See https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Rex)

Alternatively you could use a replace function:

    | eval cleaned_Username=replace(Username_Field, " sophos_event_input", "")

You could also make this an automatic calculated field so that you don't need to include it in your SPL (see the sketch after this post).

If you want the suffix removed from the _raw event at index time, then you need to deploy a props.conf file within a custom app to your HF or Indexers (whichever the data lands on first) with something like this:

    # props.conf
    [yourSourcetype]
    SEDCMD-removeSophosSuffix = s/ sophos_event_input//g
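A minimal calculated-field sketch for props.conf on the search head; the sourcetype and field names are assumptions carried over from the examples above:

    # props.conf (search time)
    [yourSourcetype]
    EVAL-cleaned_Username = replace(Username_Field, " sophos_event_input", "")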
Hi @Splunkie,

do you want to do this at index time (recording the modified events) or at search time (only in visualization)?

If at search time, you can use a regex in your searches like the following:

    | rex mode=sed "s/sophos_event_input/ /g"

If at index time, you should put this in props.conf:

    [<your_sourcetype>]
    SEDCMD-remove_sophos = s/sophos_event_input/ /g

This conf file must be located on the first full Splunk instance that the data passes through, in other words on the first Heavy Forwarder (if present) or otherwise on the Indexers.

Ciao.
Giuseppe
I am trying to clean up a field which has a suffix of sophos_event_input after the username. Example:

Username_Field
Joe-Smith, Adams sophos_event_input
Jane-Doe, Smith sophos_event_input

I would like the Username field to contain only the user's name. Example:

Username_Field
Joe-Smith, Adams
Jane-Doe, Smith

Basically I want to get rid of the sophos_event_input suffix. How would I go about this?
Your "table" must come with a sequence or the whole problem is unsolvable.  The sequence may come in the form of a _time field, or a special field such as sequence_number, or in the form of sheer ord... See more...
Your "table" must come with a sequence or the whole problem is unsolvable.  The sequence may come in the form of a _time field, or a special field such as sequence_number, or in the form of sheer order of the table. The whole point is, you can use transaction command to get what you need if your table Has a _time field and Is in reverse time order. | transaction "change #" "user ID" startswith="Mod_type=OLD" endswith="Mod_type=NEW" If, for any reason, your "table" doesn't come with a _time field, you can always make sure it is in reverse time order, and make up a _time field.  You can also use stats to do the same.  The bottom line is: join is seldom the answer. Here is a data emulation for you to play with and compare with real data. | makeresults format=csv data="Mod_type, user ID,Email,change #,Active NEW,123,Me@hotmail.com,152,Yes OLD,123,Me@hotmail.com,152,No" | eval _time = now()
Yeah, I didn't set it up in the default folder, so it should match that condition. Additionally, this is a distributed indexer setup (3 instances).
@livehybrid

Thanks for the response. Yes, some servers have custom certificates, and those are the servers where we are having the issue. If I try the default local certificate instead, it verifies fine:

    root@test02:/opt/splunk/bin# ./splunk cmd openssl verify -verbose -x509_strict -CAfile /opt/splunk/etc/auth/cacert.pem.default /opt/splunk/etc/auth/server.pem_old
    /opt/splunk/etc/auth/server.pem_old: OK

    root@test02:/opt/splunk/bin# ./splunk cmd openssl verify -verbose -x509_strict -CAfile /opt/splunk/etc/auth/cacert.pem /opt/splunk/etc/auth/server.pem
    error 20 at 0 depth lookup: unable to get local issuer certificate

    ./splunk cmd btool server list --debug kvstore
    /opt/splunk/etc/system/default/server.conf [kvstore]
    /opt/splunk/etc/system/default/server.conf clientConnectionPoolSize = 500
    /opt/splunk/etc/system/default/server.conf clientConnectionTimeout = 10
    /opt/splunk/etc/system/default/server.conf clientSocketTimeout = 300
    /opt/splunk/etc/system/default/server.conf dbCursorOperationTimeout = 300
    /opt/splunk/etc/system/default/server.conf dbPath = $SPLUNK_DB/kvstore
    /opt/splunk/etc/system/default/server.conf defaultKVStoreType = local
    /opt/splunk/etc/system/default/server.conf delayShutdownOnBackupRestoreInProgress = false
    /opt/splunk/etc/system/default/server.conf disabled = false
    /opt/splunk/etc/system/default/server.conf initAttempts = 300
    /opt/splunk/etc/system/default/server.conf initialSyncMaxFetcherRestarts = 0
    /opt/splunk/etc/system/default/server.conf kvstoreUpgradeCheckInterval = 5
    /opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupDelay = 60
    /opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupEnabled = true
    /opt/splunk/etc/system/default/server.conf kvstoreUpgradeOnStartupRetries = 2
    /opt/splunk/etc/system/default/server.conf minSnapshotHistoryWindow = 5
    /opt/splunk/etc/system/default/server.conf oplogSize = 1000
    /opt/splunk/etc/system/default/server.conf percRAMForCache = 15
    /opt/splunk/etc/system/default/server.conf port = 8191
    /opt/splunk/etc/system/default/server.conf replicaset = splunkrs
    /opt/splunk/etc/system/default/server.conf replicationWriteTimeout = 1800
    /opt/splunk/etc/system/default/server.conf shutdownTimeout = 100
    /opt/splunk/etc/system/default/server.conf sslVerifyServerCert = false
    /opt/splunk/etc/system/default/server.conf sslVerifyServerName = false
    /opt/splunk/etc/system/default/server.conf storageEngine = wiredTiger
    /opt/splunk/etc/system/default/server.conf storageEngineMigration = false
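Error 20 means the CA in cacert.pem does not match the issuer of server.pem. One quick way to compare the two, a sketch using the same openssl wrapper and the paths from the output above:

    ./splunk cmd openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -subject -issuer
    ./splunk cmd openssl x509 -in /opt/splunk/etc/auth/cacert.pem -noout -subject

If the issuer of server.pem does not match the subject of cacert.pem, the verify step will keep failing regardless of the kvstore settings.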
It didn't work. The alert is triggered, but the batch still didn't run.
I know you accepted an answer in "Is there a way to group the data by time and another field?" There is a simpler, perhaps more intuitive approach:

    index=my_app sourcetype=my_logs:hec (source=my_Logger) msgsource="*" msgtype="*MyClient*" host=*
        [ inputlookup My_Application_Mapping.csv | search Client="SomeBank" | table appl ]
    | eval total_milliseconds = 1000 * (strptime("00:" . elapsed, "%T.%N") - relative_time(now(), "-0d@d"))
    | eval timebucket = case(total_milliseconds <= 1000, "TXN_1000", total_milliseconds <= 2000, "1sec-2sec", total_milliseconds <= 5000, "2sec-5sec", true(), "5sec+")
    | rename msgsource as API
    | bucket _time span=1d
    | eventstats avg(total_milliseconds) as AvgDur by _time API
    | stats count by AvgDur _time API timebucket
    | tojson output_field=api_time _time API AvgDur
    | chart values(count) over api_time by timebucket
    | addtotals
    | spath input=api_time
    | rename time as _time
    | fields - api_time

Main ideas:

Organizing time buckets with the case function is easier to maintain.
Use the chart command to perform the transpose over one composite field (api_time).
Use tojson to pack all the information needed after the transpose.

In addition, using strptime to calculate total_milliseconds is more maintainable. (It would be even simpler if Splunk didn't have a bug near epoch zero.)
@elend check this https://community.splunk.com/t5/Getting-Data-In/Reducing-maxWarmDBCount-below-current-warm-bucket-count/m-p/86115 
Hi @Fa1

Are you able to either post your serverclass.conf, or double-check it to ensure there are no syntax errors within it? You could also try running btool to check it:

    $SPLUNK_HOME/bin/splunk cmd btool serverclass list --debug

If this doesn't highlight any issues, then it would be worth investigating a known issue at https://splunk.my.site.com/customer/s/article/After-upgrading-Splunk-from-v9-2-to-v9-4-the-Forwarder-Manager-Web-UI-is-unavailable which looks to be caused by a bad /etc/hosts file. The resolution is to edit the /etc/hosts file and use the correct format and entries, ensuring it starts with:

    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
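For comparison, a minimal serverclass.conf sketch that btool should accept without complaint; the class, whitelist, and app names are placeholders:

    [serverClass:my_class]
    whitelist.0 = *

    [serverClass:my_class:app:my_app]
    restartSplunkd = true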
Yes, sure. Also, based on "HOW TO: Reduce the Amount of Hot and Warm Buckets" on the Splunk support site, I already meet the conditions: the bucket roll should trigger whenever one of the listed thresholds is reached. In my case one of the triggers has already been reached, but the data (DB) still has not moved.
Additionally, I also checked and reduced the max size for one sample index, changing the values of homePath.maxDataSizeMB and maxWarmDBCount. I did this last week, but when I check the total DB count and the size of the index data, there is still no data moved.
Hi @Salvador_Dalí

To create a simple custom alert action that runs a batch file (script.bat) or PowerShell script (script.ps1) on Windows in Splunk Enterprise 9.x, you'll need to build a basic Splunk app with a custom modular alert. This replaces the deprecated "run a script" action.

Create a new app directory on your Splunk server: navigate to $SPLUNK_HOME/etc/apps/ and create a new folder, e.g. myorg_custom_action.

Create default/alert_actions.conf with:

    [my_script_action]
    is_custom = 1
    label = Run My Script
    description = Runs a batch or PowerShell script
    payload_format = json

Create default/app.conf with basic app metadata:

    [ui]
    is_visible = 0
    # Hide from the app list because this isn't a UI-based app
    # ... etc.

Create bin/my_script_action.py (the Python script that executes your batch/PS script). Use this template to get you started:

    import sys
    import json
    import subprocess

    # Read the alert payload from stdin
    payload = json.loads(sys.stdin.read())

    # Define your script path (absolute path on the Splunk server)
    script_path = "C:\\path\\to\\your\\script.bat"  # or a .ps1 for PowerShell

    # Run the script (use powershell.exe for .ps1)
    if script_path.endswith('.ps1'):
        subprocess.call(['powershell.exe', '-File', script_path])
    else:
        subprocess.call([script_path])

    sys.exit(0)

If you want to pass alert data to the script, modify the Python to write the payload to a file or pass it as arguments, then adjust your batch/PS script accordingly (see the sketch after this post).

Restart Splunk ($SPLUNK_HOME/bin/splunk restart). The action "Run My Script" will then appear in the alert configuration under "Add Actions".

Test: create a test alert, add your custom action, and trigger it to verify the script runs.

This is a minimal setup; I would recommend extending it for error handling or parameters as required. Custom alert actions are modular apps that allow flexible scripting. The Python handler example reads the alert payload and executes your external script using subprocess. This works on Windows, but ensure the Splunk service account has permission to run the scripts.
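As an illustration of passing alert data through, a hedged Python sketch that extends the template above; the payload keys shown follow the JSON payload format named in alert_actions.conf, but treat the exact keys and the argument handling on the script side as assumptions to verify:

    # Pull fields out of the alert payload (keys are assumptions; adjust to your needs)
    search_name = payload.get('search_name', 'unknown')
    results_file = payload.get('results_file', '')

    # Pass them as positional arguments to the PowerShell script
    subprocess.call(['powershell.exe', '-File', script_path, search_name, results_file])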
Hi @elend

The issue is likely that reducing maxWarmDBCount alone doesn't force immediate bucket rolling. Splunk only moves buckets from warm to cold during natural bucket transitions, not retroactively for existing buckets that exceed the new limit.

There's also some good info at https://docs.splunk.com/Documentation/Splunk/latest/Indexer/HowSplunkstoresindexes#How_buckets_roll_through_their_stages which might help describe your situation, and also https://splunk.my.site.com/customer/s/article/HOW-TO-Reduce-the-Amount-of-Hot-and-Warm-Buckets
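To see where things stand right now, a quick SPL sketch using dbinspect (the index name is a placeholder):

    | dbinspect index=your_index
    | stats count by state

This shows how many buckets are currently hot, warm, and cold, which you can compare against your maxWarmDBCount setting.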
Didn't work. The alert is triggered but the batch didn't run.
I will answer my own question:
* SPL2 currently still uses the KV store, and the file-sync implementation is neither complete nor fully functional as of today.
* One needs to apply an Enterprise license for SPL2 to be enabled.
@elend

Splunk bucket transitions from Hot → Warm → Cold → Frozen are controlled by multiple parameters:

maxHotBuckets
maxDataSize or homePath.maxDataSizeMB
maxHotSpanSecs
maxWarmDBCount

Simply reducing maxWarmDBCount may not trigger a bucket roll if other thresholds, such as time or size, haven't been met. For instance, warm buckets will remain as-is if they haven't exceeded the defined size or time limits. A minimal indexes.conf sketch follows this list.
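For context, a minimal indexes.conf sketch showing where these parameters live; the index name and values are placeholders, not recommendations:

    [your_index]
    maxHotBuckets = 3
    maxDataSize = auto
    maxHotSpanSecs = 7776000
    maxWarmDBCount = 300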
I want to ask something about my lab clustered indexer. I hit maximum primary storage capacity on my indexer. Last time, I just reduced maxWarmDBCount based on the existing DBs created, and after 1-2 days the buckets rolled to secondary storage. But this time I applied the same modification again, and nothing has rolled for a week. I have already cross-checked the DBs created on the index, and the count is still above the limit I set; I also checked that the config has been distributed. Has anyone faced this scenario? #splunk