All Posts


Thanks for the links, I'm going to read them and check the logs for output errors.
OK, this explains the connections refused when one pipeline queue gets blocked. Thanks. Now I have to understand why my pipeline queues are getting blocked.
Hi @kakawun!  Whilst this issue is from a while ago, it may help other users. Just wanted to let you know this issue is now resolved in 9.3.0 and later releases! If any reference to this fix is needed with support, you can quote SPL-251796. Thanks!
See this example dashboard - it uses a <change> block on the input to update the token:

<form version="1.1" theme="light">
  <label>Backslash escaped input</label>
  <fieldset submitButton="false">
    <input type="text" token="Get_Process_Path" searchWhenChanged="true">
      <label>Enter Path</label>
      <prefix>process_path="*</prefix>
      <suffix>*"</suffix>
      <change>
        <eval token="escaped_path">replace($Get_Process_Path$, "\\\\", "\\\\")</eval>
      </change>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>Token created from the user's input is <b style="color:blue">[$Get_Process_Path$]</b> and the updated search token applied is <b style="color:red">[$escaped_path$]</b></html>
      <table>
        <search>
          <query>index=_audit $escaped_path$</query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
        </search>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
@livehybrid  Thank you so much for the feedback. As to your question "Although I'm confused as to why you couldn't do this? index=xxxxxx | eval HDate=strftime(_time,"%Y-%m-%d") | search NOT [ | inputlookup Date_Test.csv | fields HDate ] | stats count | where count>0" - would this also help capture the case of 0 events? The goal is to have the alert trigger on anything except 1 event, so !=1. It needs to alert if there are 0 events found OR more than 1 event. Either way, I have a scenario where there were 0 events BUT it was a mute date on my lookup table and it still fired an alert. It's either that, or because it was a mute date there might have been 1 event that got changed to 0 events, still causing the alert to fire. Let me know if you need more clarification and I can post what I have set up.
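A minimal sketch of one way to get the !=1 behaviour while also suppressing mute dates - this reuses the names from the thread (index xxxxxx, lookup Date_Test.csv with an HDate column) and is an assumption about how the mute list is meant to work, not a confirmed fix: count the events first, then check whether today is a mute date and only alert when it is not.

index=xxxxxx
| stats count
``` look up today's date against the mute list; muted stays null when today is not listed ```
| eval HDate=strftime(now(), "%Y-%m-%d")
| lookup Date_Test.csv HDate OUTPUT HDate AS muted
``` fire on 0 events or more than 1, but never on a mute date ```
| where count!=1 AND isnull(muted)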
The OS in your first result is "Microsoft Windows 11 Enterprise", whereas your OperatingSystems field in your OS_Outdated.csv lookup does not appear to have "Microsoft" in the name, so naturally it will not match. You will either have to make your OperatingSystems field a wildcarded lookup or massage your data so the two fields contain similar data. You also have a small issue with your use of fillnull - you specify a field name "outdated" which is lower case, whereas your field from the lookup is Outdated (capital O). You can try this search

index=endpoint_defender source="AdvancedHunting-DeviceInfo"
| rex field=DeviceName "(?<DeviceName>\w{3}-\w{1,})."
| eval DeviceName=upper(DeviceName)
| lookup snow_os.csv DeviceName output OS BuildNumber Version
``` Remove the word Microsoft and any following spaces ```
| eval OperatingSystems=replace(OS, "Microsoft\s*", "")
``` Now use this modified field as the lookup field ```
| lookup OS_Outdated.csv OperatingSystems BuildNumber Version OUTPUT Outdated
| fillnull value=false Outdated
| table DeviceName OS BuildNumber Version Outdated
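If you prefer the wildcarded-lookup route instead, here is a sketch of what the lookup definition could look like in transforms.conf - the stanza name os_outdated_lookup is hypothetical, and the OperatingSystems values in the CSV would need wildcards such as *Windows 11 Enterprise*:

# transforms.conf - hypothetical definition name
[os_outdated_lookup]
filename = OS_Outdated.csv
match_type = WILDCARD(OperatingSystems)

You would then reference the definition name (| lookup os_outdated_lookup ...) rather than the CSV filename in the search.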
You could use INGEST_EVAL with @kiran_panchavat's example. Put it in transforms.conf like

INGEST_EVAL = _time := if(match(date, "\\d{4}-\\d{2}-\\d{2} \\d{1,2}:\\d{2}:\\d{2} [APMapm]{2}"), strptime(date, "%Y-%m-%d %I:%M:%S %p"), strptime(date, "%Y-%m-%d"))
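A fuller sketch of how that could be wired up - the stanza and sourcetype names here are assumptions, and this presumes the date field is available at ingest time (for example via INDEXED_EXTRACTIONS = json); otherwise the value would have to be parsed out of _raw first:

# transforms.conf - stanza name is hypothetical
[set_time_from_date]
INGEST_EVAL = _time := if(match(date, "\\d{4}-\\d{2}-\\d{2} \\d{1,2}:\\d{2}:\\d{2} [APMapm]{2}"), strptime(date, "%Y-%m-%d %I:%M:%S %p"), strptime(date, "%Y-%m-%d"))

# props.conf - sourcetype name assumed
[test_json]
INDEXED_EXTRACTIONS = json
TRANSFORMS-settime = set_time_from_date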
Anything on mongodb.log? Normally you shouldn't do any "fixing" activities before you know what is broken and why! Those fixes could make the situation even worse!
@rafiq_rehman  Most probably there is no captain, or the KVStore port is blocked on this new member. The KVStore cannot reach "Ready" unless a captain is elected and cluster coordination is healthy. If there's no captain or communication is broken, KVStore remains in "starting". Also, the KVStore default port 8191 might be blocked; if this port is blocked or there are network issues, KVStore cannot synchronize and will not start. Regards, Prewin  Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
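A quick way to check both conditions from the new member - these are standard Splunk CLI commands; the path and credentials here are placeholders:

# confirm a captain is elected and the new member is listed
$SPLUNK_HOME/bin/splunk show shcluster-status -auth admin:changeme

# confirm the KVStore state and replication status
$SPLUNK_HOME/bin/splunk show kvstore-status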
@cherrypick  Splunk cannot natively parse multiple timestamp formats for the same field at index time - it only allows a single TIME_FORMAT per sourcetype. If you can preprocess or route events differently, you can assign different sourcetypes based on the date format:

# props.conf
[test_json]
TRANSFORMS-set_sourcetype = set_sourcetype_datetime, set_sourcetype_dateonly

[test_json_datetime]
TIME_PREFIX = "date":\s*"
TIME_FORMAT = %Y-%m-%d %I:%M:%S %p

[test_json_dateonly]
TIME_PREFIX = "date":\s*"
TIME_FORMAT = %Y-%m-%d

# transforms.conf
[set_sourcetype_datetime]
REGEX = "date":\s*"\d{4}-\d{2}-\d{2} \d{1,2}:\d{2}:\d{2} [AP]M"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::test_json_datetime

[set_sourcetype_dateonly]
REGEX = "date":\s*"\d{4}-\d{2}-\d{2}"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::test_json_dateonly

Regards, Prewin  Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a kudos. Thanks!
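After ingesting a few samples, a quick check that the routing worked (the index name here is an assumption):

index=main sourcetype=test_json*
| stats count by sourcetype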
Again, this does not work because it filters events at search time rather than via the shared time picker. Let's say I have an event with date 2025-04-01 but I ingest it so that _time is 2025-04-02 (so date and _time are mismatched); if I use a timechart command to filter alerts over 2025-04-01, this event will not appear on the timechart because it is first filtered on _time. Even if I specify timechart by date, this event will not appear. My core issue is how to ensure the _time and date fields are the same in the index (NOT at search time) when ingesting data with mismatched formats.
@cherrypick  Then you can try this

props.conf

[json_splunk]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = "date":\s*"
TIME_FORMAT = %Y-%m-%d %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 60
TRANSFORMS-normalize = fix_date_field, fix_time_hour

transforms.conf

[fix_date_field]
REGEX = ("date":\s*")(\d{4}-\d{2}-\d{2}|\d{2}-\d{2}-\d{2})(")
FORMAT = $1$2 12:00:00 AM$3
DEST_KEY = _raw

[fix_time_hour]
REGEX = ("date":\s*".*?\s)(\d{1})(:\d{2}:\d{2}\s(?:AM|PM))
FORMAT = $10$2$3
DEST_KEY = _raw

Sample events which I tried:

{"date": "2025-05-23 9:35:35 PM", "event": "Login"}
{"date": "2025-05-23", "event": "Logout"}
{"date": "2025-05-24 10:15:00 AM", "event": "Login"}
{"date": "2025-05-24", "event": "Logout"}
{"date": "2025-05-25 11:45:00 AM", "event": "Update"}
{"date": "2025-05-25", "event": "Login"}
{"date": "2025-05-26 12:00:00 PM", "event": "Logout"}
{"date": "2025-05-26", "event": "Update"}
{"date": "2025-05-27 1:30:00 PM", "event": "Login"}
{"date": "2025-05-27", "event": "Logout"}
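A hedged way to verify the result after ingestion (index name is an assumption; the sourcetype is the one defined above): compare the extracted _time against the raw date.

index=main sourcetype=json_splunk
| eval extracted_time=strftime(_time, "%Y-%m-%d %I:%M:%S %p")
| table _time extracted_time _raw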
I need this to be done at ingest and not during search. The reason is that Splunk first filters on _time, so not having the correct _time values will filter out results that shouldn't be filtered.
@cherrypick  SPL with dummy data using makeresults:

| makeresults count=10
| streamstats count as id
| eval raw_json=case(
    id=1, "{\"date\": \"2025-05-23 9:35:35 PM\", \"event\": \"Login\"}",
    id=2, "{\"date\": \"2025-05-23\", \"event\": \"Logout\"}",
    id=3, "{\"date\": \"2025-05-24 10:15:00 AM\", \"event\": \"Login\"}",
    id=4, "{\"date\": \"2025-05-24\", \"event\": \"Logout\"}",
    id=5, "{\"date\": \"2025-05-25 11:45:00 AM\", \"event\": \"Update\"}",
    id=6, "{\"date\": \"2025-05-25\", \"event\": \"Login\"}",
    id=7, "{\"date\": \"2025-05-26 12:00:00 PM\", \"event\": \"Logout\"}",
    id=8, "{\"date\": \"2025-05-26\", \"event\": \"Update\"}",
    id=9, "{\"date\": \"2025-05-27 1:30:00 PM\", \"event\": \"Login\"}",
    id=10, "{\"date\": \"2025-05-27\", \"event\": \"Logout\"}"
)
| spath input=raw_json
| eval parsed_time = if(match(date, "\\d{4}-\\d{2}-\\d{2} \\d{1,2}:\\d{2}:\\d{2} [APMapm]{2}"), strptime(date, "%Y-%m-%d %I:%M:%S %p"), strptime(date, "%Y-%m-%d"))
| eval _time = parsed_time
| table _time, date, event

And a second variant for 24-hour timestamps:

| makeresults count=4
| streamstats count AS row
| eval _raw=case(
    row=1, "{\"date\":\"2025-05-23 21:35:35\"}",
    row=2, "{\"date\":\"2025-05-22\"}",
    row=3, "{\"date\":\"2025-05-21 15:20:00\"}",
    row=4, "{\"date\":\"2025-05-20\"}"
)
| spath
| eval _time=if(match(date, "\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), strptime(date, "%Y-%m-%d %H:%M:%S"), strptime(date, "%Y-%m-%d"))
| table date _time
@livehybrid  You're absolutely right that the public documentation (including the Restore indexed data from a self-storage location guide) outlines the DDSS process in detail, and it is technically possible for customers to manage this independently, especially those with in-house Splunk expertise. 
I have a JSON file which contains a "date" field. The date field in my data can be either of format %Y-%m-%d %H:%M:%S (e.g. 2025-05-23 9:35:35 PM) or %Y-%m-%d (e.g. 2025-05-23). The only way to ingest this JSON is via manual ingestion. When setting the _time field on ingest, using the timestamp format %Y-%m-%d %H:%M:%S will fail and default to the wrong _time value for date fields with format %Y-%m-%d. However, setting the timestamp format to %Y-%m-%d won't capture the HMS part. Is there a way to coalesce these so that it checks whether HMS is present and, if so, applies the %Y-%m-%d %H:%M:%S format? Or is there a workaround so that at least the _time value is accurate on ingestion?
See these events in splunkd:

05-22-2025 21:07:58.608 -0400 ERROR KVStoreAdminHandler [1848035 TcpChannelThread] - An error occurred.
05-22-2025 21:07:36.668 -0400 ERROR KVStoreIntrospection [1848033 TcpChannelThread] - failed to get introspection data
05-22-2025 21:07:19.587 -0400 WARN KVStoreConfigurationProvider [1845927 MainThread] - Action scheduled, but event loop is not ready yet

Tried cleaning up kvstore by running "splunk clean kvstore --local --answer-yes" but that didn't change anything; status is still stuck in starting.
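A hedged starting point for digging further - this is a standard _internal search, and component and log_level are standard fields on splunkd events:

index=_internal sourcetype=splunkd component=KVStore* (log_level=ERROR OR log_level=WARN)
| stats count by component log_level
| sort - count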
No it’s not normal. Usually it should be ready quite quickly. Anything in your internal logs?
You definitely should read what Harendra said!
I followed these steps to add a new box to an existing SHC. Everything looks fine on the SHC side, but the kvstore status has been 'status : starting' ever since, and it's been over an hour. Is this normal, or did I miss something?