All Posts


Hi @cpaulraj, I’m a Community Moderator in the Splunk Community. This question was posted 7 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Has anyone figured out how to disable this behavior? We would like the PowerShell script to run only at the scheduled time, not every time the UF is started.
Hi @KSV, I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
I used this. Thank you!

SELECT * FROM sys.fn_get_audit_file('/tmp/SQLAudit/*',default,default) WHERE event_time > ? ORDER BY event_time ASC

Sample data in Splunk, indexed with the current configuration (the site won't let me post the SQL query result in a readable format):

2024-11-11 20:58:14.339, event_time="2024-11-11 15:58:14.3397210", sequence_number="1", action_id="DR ", succeeded="1", is_column_permission="0", session_id="53", server_principal_id="1", database_principal_id="1", target_server_principal_id="0", target_database_principal_id="0", object_id="6", class_type="DB", session_server_principal_name="sa", server_principal_name="sa", database_principal_name="dbo", server_instance_name="u22", database_name="testdb114", object_name="testdb114", statement="drop database testdb114", file_name="/tmp/SQLAudit/MSSQL_Server_Audit_5C4ED78A-BFBD-4C6C-8793-F98B88C55293_0_133757544438840000.sqlaudit", audit_file_offset="20992", user_defined_event_id="0", audit_schema_version="1", transaction_id="852605", client_ip="127.0.0.1", application_name="SQLCMD", duration_milliseconds="0", response_rows="0", affected_rows="0", connection_id="EB46CB4B-CF55-48EA-B497-99D4A04D41FF", host_name="u22", client_tls_version="771", client_tls_version_name="1.2", database_transaction_id="0", ledger_start_sequence_number="0", is_local_secondary_replica="0
@bowesmana, I won't be able to share the query, but I tried a few different ways. 1) Created a data model and tried combining using append and union as well, but it doesn't work when running over a large data set of nearly 70k records in a 15-minute window; when I run the same query for an individual id it shows no mismatch, but over the large dataset the data from query 1 isn't loaded. 2) Created lookup files for each query, and each file has the data, but when they are combined using append or union the data shows as not existing in query 1. Please suggest how we can proceed further.
Hi @drogo, if you use the INDEXED_EXTRACTIONS=JSON option for the sourcetype you're using for this data, you have all the fields extracted. If you don't see this field, you can use a regex to extract it: | rex "\d*\s\[(?<message>[^\]]+)" that you can test at https://regex101.com/r/QcGAwT/1 Ciao. Giuseppe
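Outside Splunk, the same pattern can be sanity-checked with Python's re module (a quick sketch; Python spells named groups (?P<name>...), and the sample line is taken from the question's raw data):

```python
import re

# Same pattern as the rex above, with Python's named-group syntax
pattern = re.compile(r"\d*\s\[(?P<message>[^\]]+)")

raw = '[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866'
match = pattern.search(raw)
print(match.group("message"))  # ERR
```

Note the character class [^\]]+ stops at the closing bracket, so this captures only the "ERR" tag; extend the pattern if you also need the text that follows it.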
This is a bit vague - Do you want to search for events that have ERR in them? Do you want to extract what comes after "[ERR]" in the message field? Do you already have these JSON fields extracted?
Team, I am a bit new to Splunk and need help pulling the ERR message from the sample raw data below.

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866" - unable to connect to endpoint , "service": "hello world"}

Thanks!
Assuming there is only one event per TransNum which has a message field and that TransNum is the correlating field, try something like this | rex "TransNum:\s(?<TransNum>\S+)" | rex "\"message\":\"(?<message>[^\"]+)" | eventstats values(message) as message by TransNum | where message="Not available"
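To reason about what the eventstats step is doing, here is the same correlation sketched in Python (sample lines simplified from the thread; the two regexes mirror the rex extractions above):

```python
import re
from collections import defaultdict

# Simplified sample events, modeled on the thread's log lines
events = [
    'MESSAGE=response status for TransNum: 629f2ad - 400 | Response - {"code":"0001","message":"Not available"}',
    'MESSAGE=Request for TransNum: 629f2ad - {"refno":"629f2ad","syncOnly":true}',
]

# Collect message values per TransNum (what eventstats values(message) by TransNum does)
messages = defaultdict(set)
for ev in events:
    tn = re.search(r"TransNum:\s(\S+)", ev)
    msg = re.search(r'"message":"([^"]+)', ev)
    if tn and msg:
        messages[tn.group(1)].add(msg.group(1))

# Keep TransNums whose messages include "Not available" (the `where` filter)
not_available = {tn for tn, msgs in messages.items() if "Not available" in msgs}
print(not_available)  # {'629f2ad'}
```

The key idea is the same in both: the message value found on one event is spread across every event sharing the same TransNum, so the request event can be selected even though it has no message field of its own.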
Hello all, this thread was very helpful to me and I showed the picked time period in the dashboard panel description. I used the progress tag:

<eval token="a1_jobEarliest">strptime($job.earliestTime$,"%Y-%m-%d_%H:%M:%S")</eval> <eval token="a1_jobLatest">strptime($job.latestTime$,"%Y-%m-%d_%H:%M:%S")</eval> <set token="a1_jobEarliest">$job.earliestTime$</set> <set token="a1_jobLatest">$job.latestTime$</set>

However I still get formatting details that I don't need (underlined in blue are the milliseconds).
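For what it's worth, the round trip the eval tokens perform can be sketched in Python (the timestamp here is just a hypothetical example): re-formatting the parsed time through a format string with no subsecond field is what drops the milliseconds.

```python
from datetime import datetime

# Hypothetical job timestamp carrying subsecond precision
raw = "2024-11-13 16:56:14.339"
dt = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")

# Re-format without a subsecond field: the milliseconds disappear
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # 2024-11-13 16:56:14
```

In the dashboard, the analogous fix is to display a token built from strftime(strptime(...), "...") rather than setting the token to the raw $job.earliestTime$ value.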
Hi @yuanliu, thanks for the tips. I tried running the modified query and got the results for ORDERS which are NOT AVAILABLE (the result of the first search). My requirement is to match ORDERS which are NOT AVAILABLE with ORDERS in the second log, and to display ORDERS and UNIQUEID. Sharing the data here:

INFO [pool-9-thread-3] CLASS_NAME=Q, METHOD=, MESSAGE=response status for TransNum: 629f2ad - 400 | Response - {"code":0001,"message":"Not available","messages":[],"additionalTxnFields":[]}
INFO [pool-9-thread-7] CLASS_NAME=Q, METHOD=, MESSAGE=Request for TransNum: 629f2ad - {"address":{"billToThis":true,"country":"","email":"******************","firstname":"FN","lastname":"LN","postcode":"0","salutation":null,"telephone":"+999999999999"},"deliveryMode":"","payments":[{"amount":10,"code":"BFD"}],"products":[{"currency":356,"price":600,"qty":2,"uniqueid":"QSTRUJIK"}],"refno":"629f2ad","syncOnly":true}
Your by clause also includes dv_priority, which is why you are getting multiple results for an incident. Try something like this: index=snow "INC783" | search dv_state="In Progress" OR dv_state="New" OR dv_state="On Hold" | stats max(_time) as Time latest(dv_state) as State latest(dv_priority) as Priority by number | fieldformat Time=strftime(Time,"%Y-%m-%d %H:%M:%S") | table number,Time, Priority, State
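The underlying reduction is easy to see outside SPL: grouping by dv_priority splits the incident across groups, while grouping only by number keeps one row per incident. A minimal Python sketch, using the two rows from the question:

```python
# Rows as (number, time, priority, state), taken from the question's output
rows = [
    ("INC783", "2024-11-13 16:56:14", "1 - Critical", "In Progress"),
    ("INC783", "2024-11-13 17:00:03", "3 - Moderate", "On Hold"),
]

# stats latest(...) by number: keep only the most recent row per incident number
# (string comparison works here because the timestamps are fixed-width)
latest = {}
for number, time, priority, state in rows:
    if number not in latest or time > latest[number][0]:
        latest[number] = (time, priority, state)

print(latest["INC783"])  # ('2024-11-13 17:00:03', '3 - Moderate', 'On Hold')
```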
Not every chart type supports zoom/selections. You may need to represent your data in a different way, e.g. column chart, which does support zoom/selections.
From looking at what you have posted, it appears that there may be a space between the "-" and the start of the number which is not present in the regex. This is why we ask for event data and SPL code to be shared in code blocks, so these things can be more easily spotted. Assuming this is the case, then use the regex as I showed (not as you have apparently used).
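A minimal illustration of the point in Python (the exact field layout is an assumption, since the original event wasn't shared in a code block): making the whitespace optional with \s? lets one pattern match both variants.

```python
import re

# Optional whitespace between the "-" and the number
pattern = re.compile(r"-\s?(?P<num>\d+)")

for sample in ["code=-123", "code=- 123"]:
    m = pattern.search(sample)
    print(m.group("num"))  # 123 in both cases
```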
+1 on that - don't use join unless there is absolutely no other way (or you have a very small dataset). Not only is it relatively slow and resource-hungry, it also has pretty serious limitations, and you can get wrong or incomplete results without knowing it.
Hi Team, I have a Splunk query that I am testing for a ServiceNow data extract.

index=snow "INC783" | search dv_state="In Progress" OR dv_state="New" OR dv_state="On Hold" | stats max(_time) as Time latest(dv_state) as State by number, dv_priority | fieldformat Time=strftime(Time,"%Y-%m-%d %H:%M:%S") | table number,Time, dv_priority, State

The challenge with this code is that the output lists all the states for the particular incident, even when I tried to filter for only the latest and max time:

number Time dv_priority State
INC783 2024-11-13 16:56:14 1 - Critical In Progress
INC783 2024-11-13 17:00:03 3 - Moderate On Hold

The data must only show the latest one, which should be the row with "On Hold". I have tried multiple methods, but they keep failing and showing all rows. How can I achieve this?

Thanks, Jerin V
Hi @bowesmana & @ITWhisperer,

Thanks for your reply!

I have tried using selection but am facing an error; even with this warning shown, it is not working: "Invalid child="selection" is not allowed in node="viz"" <row> <panel> <title>status</title> <viz type="timeline_app.timeline"> <search> <query>index=$siteid$ sourcetype=logs* CAT IN ("TAT") _raw=*** (NOT CODE=* OR CODE IN ("T11")) | head 100000 | eval Eventts_date=substr(Eventts,1,10) | eval Eventts_time=substr(Eventts,12,8) | eval Eventts_new=Eventts_date." ".Eventts_time | eval _timee=strptime(Eventts_new,"%Y-%m-%d %H:%M:%S.%6N") | fillnull value="N/A" ............................. | eval displayname="Operational".displayname | table _time displayname FIELD_01 duration | append [ search index=$siteid$ sourcetype=FSC* CAT IN ("ST") _raw=*** (NOT CODE=* OR CODE IN ("Ad13")) | head 100000 | eval Eventts_date=substr(Eventts,1,10) | eval Eventts_time=substr(Eventts,12,8) | eval Eventts_new=Eventts_date." ".Eventts_time | eval _timee=strptime(Eventts_new,"%Y-%m-%d %H:%M:%S.%6N") ..............................
| table _time displayname FIELD_01 duration ] </query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> <sampleRatio>1</sampleRatio> </search> <option name="drilldown">none</option> <option name="height">460</option> <option name="refresh.display">progressbar</option> <option name="timeline_app.timeline.axisTimeFormat">SECONDS</option> <option name="timeline_app.timeline.colorMode">categorical</option> <option name="timeline_app.timeline.maxColor">#DA5C5C</option> <option name="timeline_app.timeline.minColor">#FFE8E8</option> <option name="timeline_app.timeline.numOfBins">6</option> <option name="timeline_app.timeline.tooltipTimeFormat">SECONDS</option> <option name="timeline_app.timeline.useColors">1</option> <option name="trellis.enabled">0</option> <option name="trellis.scales.shared">1</option> <option name="trellis.size">medium</option> <selection> <set token="selection.earliest">$start$</set> <set token="selection.latest">$end$</set> <set token="start.count">$start.count$</set> <set token="end.count">$end.count$</set> </selection> <drilldown><link target="_blank">search?q= <query>index=$siteid$ sourcetype=FSC* CAT IN ("TAT") _raw=*** (NOT CODE=* OR MARKCODE IN ("TZ11")) | head 100000 | where _time &gt;= $selection.earliest$ AND _time ?&lt;= $selection.latest$ | eval Eventts_date=substr(Eventts,1,10) | eval Eventts_time=substr(Eventts,12,8) | eval Eventts_new=Eventts_date." ".Eventts_time | eval _timee=strptime(Eventts_new,"%Y-%m-%d %H:%M:%S.%6N") .................. | table _time displayname FIELD_01 duration | append [ search index=$siteid$ sourcetype=FSC* CAT IN ("ST") _raw=*** (NOT CODE=* OR CODE IN ("Ak03")) | head 100000 | eval Eventts_date=substr(Eventts,1,10) | eval Eventts_time=substr(Eventts,12,8) | eval Eventts_new=Eventts_date." ".Eventts_time | eval _timee=strptime(Eventts_new,"%Y-%m-%d %H:%M:%S.%6N") ............................................ 
| eval displayname="Maintenance".displayname | table _time displayname FIELD_01 duration ] </query></link></drilldown> </viz> </panel> </row>
Yes, I hit the same issue when I tried. The deployment only goes through when the operator flag is disabled.
First, key-value pairs (field=value) are usually auto-extracted when KV_MODE is set to auto in props.conf (see "Configure automatic key-value field extraction" in the Splunk Documentation). If it is set to none, please set your field extraction under Settings --> Fields --> Field extractions; that's the right place for it.
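As a sketch, a minimal props.conf stanza for this looks like the following (the sourcetype name is a placeholder for whatever sourcetype the Stage=8 events actually use):

```
[my_custom_sourcetype]
KV_MODE = auto
```

With KV_MODE = auto, search-time extraction should pick up Stage=8 as a field named Stage with value 8 without any further configuration.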
Sample logs look like this: adshdsfkdlfpofgsk message hdksodb Stage=8 gjhjyeomhf hjhdgy …   I deployed the configuration in the cloud instance from Settings > Source types.