All Posts

Thank you @bowesmana. With the Time Selector set to Year to date, and without using the earliest command,

| timechart span=1mon count

returns 2024 results as expected. But with the following, I end up with a timeline of 2024 while the data claims it's 2023. It is definitely 2024 data, yet it is labeled as 2023.

| timechart span=1mon count
| timewrap 1y series=exact time_format=%Y
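If the series really is just mislabeled, one workaround is to rename the column after timewrap. This is only a sketch, and it assumes the mislabeled column is literally named 2023:

| timechart span=1mon count
| timewrap 1y series=exact time_format=%Y
| rename "2023" as "2024"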
Not sure if this will help but here's a couple things I noticed.  In your original question, you have the word "Log" capitalized but in the syntax it is not.  Could that be why it's not working?  I also noticed that in your question the words "INFO" and "WARNING" are all capitalized but "Error" is not but you have it as "ERROR" in the syntax.   I often have spelling mistakes in my code that I don't catch right away so thought I'd offer that up as a suggestion.  Good luck!
Real-time searches lock up CPUs and should probably be avoided. You should ask yourself (and your users): how urgently do you need an alert? What is the maximum tolerable time between the event occurring and user y being sent an email? How quickly does y need to be able to react? Are they sitting waiting for the email to come in? How quickly does the notification get stale? Basically, buy yourself as much time as possible and then schedule your searches based on this; otherwise you will end up burning resources frequently checking for events that don't happen very often.
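As an illustration, an account-lockout alert rarely needs to fire within seconds; a scheduled search over a short sliding window covers the same ground at a fraction of the cost. A minimal savedsearches.conf sketch (the stanza name, index, and recipient are hypothetical):

[Account Lockout Alert]
search = index=wineventlog EventCode=4740 | stats count by user
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -6m@m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = analyst@example.com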
Thank you for sharing sample data. This reveals additional weaknesses in the approach. ORDERS seems to be the ID that comes after TransNum, which the original regex does not extract at all. The sample data also contradicts your original index search, but that is more for you to fine-tune. Part of the event is structured JSON. This should be treated as a structure, not as literal strings; extraction using regex is unstable. Based on your sample events (which suggest that the source is exactly the same, therefore a subsearch is really a bad approach), this would be a much better strategy:

index=source (("status for" "Not available") OR "Request for")
| rex "TransNum: (?<ORDERS>\S+) .*?(?<JSON>{.+})"
| spath input=JSON path=products{}
| mvexpand products{}
| spath input=products{}
| stats values(uniqueid) as uniqueid by ORDERS

(Note the index search is purely based on sample data. You may need to tune it to actually include the correct events.) Your sample data will give you

ORDERS   uniqueid
629f2ad  QSTRUJIK

Here is an emulation of your data. Play with it, compare with real data, and refine your search strategy:

| makeresults
| fields - _*
| eval data = mvappend("INFO [pool-9-thread-3] CLASS_NAME=Q, METHOD=, MESSAGE=response status for TransNum: 629f2ad - 400 | Response - {\"code\":0001,\"message\":\"Not available\",\"messages\":[],\"additionalTxnFields\":[]}",
    "INFO [pool-9-thread-7] CLASS_NAME=Q, METHOD=, MESSAGE=Request for TransNum: 629f2ad - {\"address\":{\"billToThis\":true,\"country\":\"\",\"email\":\"******************\",\"firstname\":\"FN\",\"lastname\":\"LN\",\"postcode\":\"0\",\"salutation\":null,\"telephone\":\"+999999999999\"},\"deliveryMode\":\"\",\"payments\":[{\"amount\":10,\"code\":\"BFD\"}],\"products\":[{\"currency\":356,\"price\":600,\"qty\":2,\"uniqueid\":\"QSTRUJIK\"}],\"refno\":\"629f2ad\",\"syncOnly\":true}")
| mvexpand data
| rename data as _raw
| extract
``` the above emulates index=source (("status for" "Not available") OR "Request for") ```
Hi, this error seems to occur with older versions of Helm. Can you please confirm your version of Helm and see if it's possible to update to a current version?
@uagraw01 that is Splunk's default user role, and it is recommended as best practice. It works with rest_properties_get; if you remove that, you will have different issues, so I do not recommend it. You also have capabilities there which are not needed, like Data inputs, Tokens, and Server Settings; these should be handled by an admin. Stick to the typical Splunk user role's native capabilities. If this helps, please Upvote.
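For reference, a custom role that inherits the stock user role and keeps rest_properties_get could be declared like this in authorize.conf (the role name is hypothetical):

[role_limited_user]
importRoles = user
rest_properties_get = enabled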
I am looking to build an alert that sends an email if someone locks themselves out of their Windows account after so many attempts. I built a search, but when using real time I was bombarded with emails every few minutes, which doesn't seem right. I would like the alert to be set up so that if user x types the password in wrong 3 times and AD locks the account, an email is sent to y's email address. Real-time alerting seems to be what I need, but it bombards me way too much.
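For what it's worth, Windows records an account lockout as Security event 4740, so a (non-real-time) search along these lines should find them; the index and sourcetype below are assumptions that depend on your inputs:

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740
| stats count latest(_time) as last_lockout by user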
Thank you for your reply. We hear from support that the interval attribute can be used in [script] but not in [powershell].
Check the scripted inputs for those with interval=-1.  That tells Splunk to run the script at startup.
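In inputs.conf that looks something like this (the script path is hypothetical):

[script://./bin/run_once.ps1]
interval = -1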
Has anyone figured out how to run PowerShell only at the scheduled time? In addition to the scheduled time, it runs every time the forwarder is restarted.
Hi @cpaulraj, I’m a Community Moderator in the Splunk Community. This question was posted 7 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Has anyone figured out how to disable this behavior? We would like the PowerShell script to run only at the scheduled time, not every time the UF is started.
Hi @KSV, I’m a Community Moderator in the Splunk Community. This question was posted 4 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
I used this. Thank you!

SELECT * FROM sys.fn_get_audit_file('/tmp/SQLAudit/*',default,default) WHERE event_time > ? ORDER BY event_time ASC

Here is sample data, now indexed in Splunk and current. The site won't allow me to post the SQL query result in a readable format.

2024-11-11 20:58:14.339, event_time="2024-11-11 15:58:14.3397210", sequence_number="1", action_id="DR ", succeeded="1", is_column_permission="0", session_id="53", server_principal_id="1", database_principal_id="1", target_server_principal_id="0", target_database_principal_id="0", object_id="6", class_type="DB", session_server_principal_name="sa", server_principal_name="sa", database_principal_name="dbo", server_instance_name="u22", database_name="testdb114", object_name="testdb114", statement="drop database testdb114", file_name="/tmp/SQLAudit/MSSQL_Server_Audit_5C4ED78A-BFBD-4C6C-8793-F98B88C55293_0_133757544438840000.sqlaudit", audit_file_offset="20992", user_defined_event_id="0", audit_schema_version="1", transaction_id="852605", client_ip="127.0.0.1", application_name="SQLCMD", duration_milliseconds="0", response_rows="0", affected_rows="0", connection_id="EB46CB4B-CF55-48EA-B497-99D4A04D41FF", host_name="u22", client_tls_version="771", client_tls_version_name="1.2", database_transaction_id="0", ledger_start_sequence_number="0", is_local_secondary_replica="0
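Once the rows are indexed, a quick sanity check along these lines confirms the fields came through (the index name mssql_audit is hypothetical):

index=mssql_audit action_id=DR*
| table _time server_principal_name database_name statement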
@bowesmana, I won't be able to share the query, but I tried a few different ways:

1) Created a data model and tried combining using append and union as well, but it's not working when running for a large data set which contains nearly 70k records for a 15-minute time period. When I run the same query for an individual id, it shows no mismatch; but in the large dataset, the data from query 1 won't be loaded.

2) Created lookup files for each query, and each file has the data, but when they are combined using append or union, the data shows as not existing in query 1.

So please suggest how we can proceed further.
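For what it's worth, append and union run the second search as a subsearch, which is silently truncated at its result limit (50,000 rows by default), so a ~70k-record dataset would lose rows exactly as described. A generic pattern that avoids subsearches is to pull both datasets in one base search and correlate with stats; the index and field names here are hypothetical:

(index=idx_a OR index=idx_b) id=*
| stats dc(index) as sources values(status) as status by id
| where sources < 2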
Hi @drogo, if you use the INDEXED_EXTRACTIONS=JSON option for the sourcetype you're using for those data, you have all the fields extracted. If you don't see this field, you can use a regex to extract it:

| rex "\d*\s\[(?<message>[^\]]+)"

which you can test at https://regex101.com/r/QcGAwT/1

Ciao. Giuseppe
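In props.conf (applied where the data is parsed, e.g. on the forwarder for structured inputs), that option is a one-line setting; the sourcetype name below is hypothetical:

[my_json_sourcetype]
INDEXED_EXTRACTIONS = JSON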
This is a bit vague - do you want to search for events that have ERR in them? Do you want to extract what comes after "[ERR]" in the message field? Do you already have these JSON fields extracted?
Team, I am a bit new to Splunk and need help to pull the ERR message from the sample raw data below.

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866" - unable to connect to endpoint , "service": "hello world"}

Thanks!
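Assuming the event parses as JSON so that message is (or can be) extracted, a sketch like this would pull everything after the [ERR] tag; the field name err_detail is made up:

| spath message
| rex field=message "\[ERR\]\s+(?<err_detail>.+)"

Note the sample above is not quite valid JSON (there is an unescaped quote after pid:96866), so spath may fail on the real events; in that case, run the rex against _raw instead.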
Assuming there is only one event per TransNum which has a message field, and that TransNum is the correlating field, try something like this

| rex "TransNum:\s(?<TransNum>\S+)"
| rex "\"message\":\"(?<message>[^\"]+)"
| eventstats values(message) as message by TransNum
| where message="Not available"
Hello all, this thread was very helpful to me and I described the picked time period in the dashboard panel description. I used the progress tag:

<eval token="a1_jobEarliest">strptime($job.earliestTime$,"%Y-%m-%d_%H:%M:%S")</eval>
<eval token="a1_jobLatest">strptime($job.latestTime$,"%Y-%m-%d_%H:%M:%S")</eval>
<set token="a1_jobEarliest">$job.earliestTime$</set>
<set token="a1_jobLatest">$job.latestTime$</set>

However, I still get formatting details that I don't need (the parts underlined in blue in my screenshot are the milliseconds).
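One way to drop the subseconds is to parse the job token and re-format it with strftime instead of using the raw value. A sketch, assuming $job.earliestTime$ comes back in ISO format with milliseconds and a timezone offset:

<eval token="a1_jobEarliest">strftime(strptime($job.earliestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z"), "%Y-%m-%d %H:%M:%S")</eval>
<eval token="a1_jobLatest">strftime(strptime($job.latestTime$, "%Y-%m-%dT%H:%M:%S.%3N%z"), "%Y-%m-%d %H:%M:%S")</eval>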