All Posts

You're right @ITWhisperer, I can't change the time from what was used in the base search, which brings me to my second question: how can I add a drilldown to the same panel with a different timestamp? I want to expand the bar chart for a particular time into a drilldown containing more detailed information for that selected time frame.
@Sandivsu - Not sure if you can do that with props and transforms, but here is a solution you can apply at the search-query level:

index=<your-index> .....
| rex field=_raw "\s\w+\[\w+\]:\s(?<json_content>\{.*\})"
| spath input=json_content

I hope this helps!!! Kindly upvote if it does!!!
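For anyone who wants to sanity-check that rex pattern outside Splunk, here is a rough Python equivalent of what the rex and spath pair does (the sample log line and field values are made up for illustration):

```python
import json
import re

# Hypothetical syslog-style line with a JSON payload after "daemon[pid]:"
line = 'Jan 01 00:00:00 host app[1234]: {"user": "alice", "action": "login"}'

# Same idea as: | rex field=_raw "\s\w+\[\w+\]:\s(?<json_content>\{.*\})"
match = re.search(r'\s\w+\[\w+\]:\s(?P<json_content>\{.*\})', line)

# Same idea as: | spath input=json_content
event = json.loads(match.group('json_content'))
print(event['user'])  # -> alice
```

Note that Python uses `(?P<name>...)` where Splunk's rex accepts `(?<name>...)`; the regex body is otherwise the same.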
Hi @Mrig342, This is a tested solution in my lab environment. Can you please check whether the double quotes in your search are the correct characters? Sometimes they get replaced when copying from the browser.
Hi @Ismail_BSA, Splunk cannot convert or read these binary files. Maybe you can install SQL Server on Server C, import these audit files into that SQL Server, and query them with DB Connect.
Hi @scelikok, Thank you for the query, but it's not working for me. It's giving this error: Error in 'EvalCommand': The expression is malformed. Expected ). Can you please help modify the query? Thank you..!!
@bhall_2 - I haven't heard of one. There is only the Splunk Universal Forwarder (the Splunk agent on the host).
@avikc100 - You can add custom CSS to your Simple XML dashboard to achieve this.

Dashboard XML source code:

<form>
  <label>Fixed Column Sticky</label>
  <row depends="$tkn_never_show$">
    <panel>
      <html>
        <style>
          #myTable table td:nth-child(1) {
            position: fixed !important;
          }
          #myTable table th:nth-child(1) {
            position: fixed !important;
          }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <table id="myTable">
        <search>
          ....

If position: fixed doesn't work, you can try position: sticky instead.

I hope this helps!! If it does, kindly upvote!!!
I was browsing for a solution to this issue and eventually stumbled upon one myself. Try whether this works for you:

search sourcetype=type1 field1='$arg1$'
| rename field2 as query
| fields query
| eval newField=query

Single quotes return the value of the field in an eval expression.
Hi @asabatini, You can reorder or modify raw data using transforms; you need to capture parts of the messages and reorder them like $1$3$2, etc. Please see the documentation below:
https://docs.splunk.com/Documentation/Splunk/9.0.3/Data/Anonymizedata#Configure_the_transforms.conf_file
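As a rough illustration of the $1$3$2 idea outside Splunk, here is a Python sketch using re.sub, where backreferences are written \1\3\2 instead of Splunk's $1$3$2 syntax (the message format is made up):

```python
import re

# Made-up message of the form "user action target";
# reorder it into "user target action" by swapping capture groups 2 and 3
msg = "alice deleted report.pdf"
reordered = re.sub(r'(\w+) (\w+) (\S+)', r'\1 \3 \2', msg)
print(reordered)  # -> alice report.pdf deleted
```

In transforms.conf the same swap would be expressed in the FORMAT/REGEX pair with $1 $3 $2, as described in the linked documentation.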
@rickymckenzie10 - To simplify your understanding of warm and cold buckets and the related parameters (only applicable when you are not using volumes):

Warm buckets -> buckets in the /db path
Cold buckets -> buckets in the /colddb path
Frozen buckets -> deleted/archived data

Warm-to-cold bucket movement -> happens when the maxWarmDBCount bucket count is reached.
Cold-to-frozen bucket movement (deletion or archiving by age) -> happens when all events in a bucket are older than frozenTimePeriodInSecs.

I hope this helps you understand the parameters better. Kindly upvote if it does!!!
Hi @richa, Since you asked to alert on data sources that have stopped for more than 24 hours, it will not show yesterday's logs. You can change the delay parameter according to your needs; 86400 seconds is equivalent to 24 hours.
I have tried the option to set the tag, but the problem is that by the time the first artifact sets the tag, the second one has already completed the decision block, and hence the playbook repeats.
A quick win would be tagging the container. You can edit your playbook to check whether the container's tag is XYZ, which it will not be on the first run (for the first artifact). Once you call your action to create an incident, change the container's tag to XYZ; that way, even if the next artifact triggers the playbook, the tag will already be XYZ, and the create-incident action will not be called because the condition is no longer satisfied. When creating artifacts manually (via REST, for example), you can force the parameter "run_automation" to false, preventing that new artifact from triggering the playbook execution. In your case, though, the data is coming from the export app, so maybe you can find some setting there to change this behavior (honestly, I don't recall one, but maybe you can find something interesting).
Your props.conf entry is not matching the stanza name in transforms.conf. Not sure if that was a typo... Speaking of typos, you don't need that first pipe (or the eval keyword) in the INGEST_EVAL. Try this instead (I changed the regex a bit):

props.conf:

[your_sourcetype]
TRANSFORMS-set_time = set_time_from_file_path

transforms.conf:

[set_time_from_file_path]
INGEST_EVAL = _time = strptime(replace(source, ".*/ute-(\\d{4}-\\d{2}-\\d{2}[a-z]+)/([^/]+/[^/]+).*","\\1"), "%Y-%m-%d_%H-%M-%S")
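The extract-then-strptime pattern is easy to test outside Splunk before committing the config. Here is a Python sketch of the same idea with a made-up source path and a simplified regex of my own (not the exact regex above, since that depends on your real directory layout):

```python
import re
from datetime import datetime

# Hypothetical monitored file path that embeds the event timestamp
source = "/data/ute-2023-01-15abc/host1/app_2023-01-15_10-30-00.log"

# Step 1: pull the timestamp string out of the path (like Splunk's replace())
stamp = re.sub(r'.*_(\d{4}-\d{2}-\d{2}_\d{2}-\d{2}-\d{2})\.log$', r'\1', source)

# Step 2: parse it with the same format string as the INGEST_EVAL strptime
event_time = datetime.strptime(stamp, "%Y-%m-%d_%H-%M-%S")
print(event_time)  # -> 2023-01-15 10:30:00
```

If the format string and the captured text don't line up exactly, strptime fails, which is the most common reason an INGEST_EVAL like this silently leaves _time unset.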
Hey Ricky, AFAIK maxWarmDBCount doesn't affect the rollover of data to frozen (though it can be storage hungry, so be careful with it); that is what frozenTimePeriodInSecs does instead. In your case, if I understood correctly, the frozen time has already passed but your data did not roll over, and that may be either because your cluster manager is too busy at the moment (and you are experiencing a delay in this processing) OR because it is waiting for the buckets to hit a size threshold. Check the bucket replication status as well; it may indicate whether there is a problem there... Are you using the maxTotalDataSizeMB key by any chance? Try adding that as well to see if you get any different behavior.
Looks like you don't have nested JSON events in there, so have you tried just breaking on the } and { characters? Try this (LINE_BREAKER needs a capturing group; the text matched by the group is discarded):

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s+)\{
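The breaking logic itself (splitting between a closing } and the next opening {) can be sanity-checked in Python. re.split consumes the separator, so this sketch restores the braces it ate; the sample events are made up:

```python
import re

# Two made-up JSON events concatenated in one raw chunk
raw = '{"id": 1, "status": "ok"}\n{"id": 2, "status": "ok"}'

# Split between } and { on whitespace, like the LINE_BREAKER above,
# then restore the brace that the split consumed on each side
events = [e if e.startswith('{') else '{' + e for e in re.split(r'\}\s+\{', raw)]
events = [e if e.endswith('}') else e + '}' for e in events]
print(len(events))  # -> 2
```

Splunk keeps the braces automatically because only the capturing group (the whitespace) is discarded, which is why the group goes around \s+ and not the braces.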
Hello, I have some issues with parsing events; a few sample events are given below:

{"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:10:15", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A021", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:09:11", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A021", "accountId":"Adm02", "accessKey":"26dsaa", "time":"2023-12-03T09:09:08", "statusCode":"active"}
{\"eventVer\":\"2.56", "userId":"B001", "accountId":"Test04", "accessKey":"21fsda", "time":"2023-12-03T09:09:04", "statusCode":"active"}
{\"eventVer\":\"2.56", "userId":"B009", "accountId":"Adm01", "accessKey":"21assaa", "time":"2023-12-03T09:09:01", "statusCode":"active"}
{"eventVer":"2.56", "userId":"B023", "accountId":"Adm01", "accessKey":"30tsaa", "time":"2023-12-03T09:08:55", "statusCode":"active"}
{"eventVer":"2.56", "userId":"A025", "accountId":"Adm01", "accessKey":"21asaa", "time":"2023-12-03T09:08:51", "statusCode":"active"}
{"eventVer":"2.56", "userId":"C015", "accountId":"Dev01", "accessKey":"41scab", "time":"2023-12-03T09:08:48", "statusCode":"active"}

The intended event breaking point is the start of each event (the {"eventVer":" prefix), and I used LINE_BREAKER=([\r\n]*)\{"eventVer":" in my props.conf file, but it is not breaking all events as expected. Any recommendations will be highly appreciated. Thank you.
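One thing worth checking: two of the sample events start with {\" (an escaped quote) rather than {", so a breaker that expects the literal prefix {"eventVer":" will skip them. A quick Python check of the two regex variants (the lenient pattern with optional backslashes is my suggestion, not something from the thread):

```python
import re

# Two of the sample event shapes: a normal one and one with escaped quotes
samples = [
    '{"eventVer":"2.56", "userId":"A021"}',
    '{\\"eventVer\\":\\"2.56", "userId":"B001"}',
]

# The original breaker expects a literal {"eventVer":" prefix
strict = re.compile(r'\{"eventVer":"')

# Allowing an optional backslash before each quote matches both variants
lenient = re.compile(r'\{\\?"eventVer\\?":\\?"')

print([bool(strict.match(s)) for s in samples])   # -> [True, False]
print([bool(lenient.match(s)) for s in samples])  # -> [True, True]
```

If the escaped-quote events are real in the raw feed, the same optional-backslash idea can be carried into the LINE_BREAKER regex.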
An even longer answer: How can the search head know who is viewing and which time zone each user prefers, if not from user preferences? At the end of the day, this is not a technical question but a design question. As you stated, you have a global workforce, implying that you cannot force everyone to accept US Eastern time. Is this correct? If it is, you need to ask yourself:

1. What is the reason you cannot allow those special users to set their preference?
2. If there is a good reason for 1, will a dashboard selector be acceptable?

One way or another, you need to give your global workforce a method to tell the search head their preference. After a user makes a selection, then yes, there is a way to display a specific time zone.
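Once the user's preference is known, by whatever mechanism, rendering the same instant in that zone is mechanical. A Python sketch of the idea (the zone names are just examples of what a dashboard selector might offer):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

epoch = 1700000000  # some event time, as UTC epoch seconds

# Render the same instant in whatever zone the user selected
for tz in ("US/Eastern", "Asia/Tokyo", "Europe/London"):
    local = datetime.fromtimestamp(epoch, tz=ZoneInfo(tz))
    print(tz, local.strftime("%Y-%m-%d %H:%M:%S %Z"))
```

The point of the design discussion above is deciding where that zone name comes from (user preference vs. dashboard token), not how to apply it.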
Currently, each of my indexes is set to its own specific frozenTimePeriodInSecs, but I am noticing that they are not rolling over to cold when the frozenTimePeriodInSecs value is reached: Data Age (keeps growing) vs. Frozen Age (stays at what is set in frozenTimePeriodInSecs).

maxWarmDBCount is set to:

maxWarmDBCount = 4294967295

Does this have an effect? If the value is changed, would data roll to cold?
Let me clarify the requirement. You want to modify the saved search so it can handle curly brackets that users may accidentally enter when invoking it. If this is correct, you can do something like:
Let me clarify the requirement.  You want to modify the saved search so it can handle curly brackets that users may accidentally enter when invoking it.  If this correct, you can do something like   index=foo | ... some stuff | search [makeresults format=csv data="search $INPUT_SessionId$" | eval search = replace(search, "{|}", "") | format] | ... more stuff   (Note trim(someField, "{}") will not work in your use case because "{" does not appear in the beginning of $INPUT_SessionId$.)