All Posts

Hello, I am collecting data via the AWS add-on and have found that my timestamp recognition isn't working properly. I have a single AWS input using the [aws:s3:csv] sourcetype, which then uses transforms to update the sourcetype based on the file name the data comes from.

Config snippets:

props.conf

[aws:s3:csv]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
TRUNCATE = 20000
TRANSFORMS-awss3 = sourcetypechange:awss3-object_rolemap_audit,sourcetypechange:awss3-authz-audit-logs

[awss3:object_rolemap_audit]
TIME_FORMAT = %d %b %Y %H:%M:%S
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
BREAK_ONLY_BEFORE_DATE = true
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

[awss3:authz_audit]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3Q
#TZ = GMT
FIELD_DELIMITER = ,
HEADER_FIELD_DELIMITER = ,
FIELD_QUOTE = "
INDEXED_EXTRACTIONS = CSV
HEADER_FIELD_LINE_NUMBER = 1

transforms.conf

[sourcetypechange:awss3-object_rolemap_audit]
SOURCE_KEY = MetaData:Source
REGEX = .*?object_rolemap_audit.csv
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::awss3:object_rolemap_audit

[sourcetypechange:awss3-authz-audit-logs]
SOURCE_KEY = MetaData:Source
REGEX = .*?authz-audit.csv
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::awss3:authz_audit

From what I can see, the timestamp is being assigned at index time even though I set timestamp recognition for each sourcetype. I believe the timestamping happens on the initial pass into Splunk, before the transforms are applied.

How can I set timestamping via the initial sourcetype when there are multiple formats depending on the file, since the timestamp recognition settings are not being honored post-transform? Thanks for the help.
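One direction that might help, if that is indeed the cause: a minimal props.conf sketch, assuming timestamp extraction is driven by the props of the original sourcetype ([aws:s3:csv]) rather than the rewritten ones. The app path and the datetime_custom.xml file below are hypothetical; that file would need to describe both of your timestamp formats.

  # props.conf on the parsing tier (heavy forwarder/indexer) -- sketch only
  [aws:s3:csv]
  # Timestamp hints have to live on the sourcetype the data arrives with,
  # because extraction happens before the sourcetype-rewriting transforms run.
  # Hypothetical custom datetime spec covering both formats:
  DATETIME_CONFIG = /etc/apps/my_aws_app/local/datetime_custom.xml
  MAX_TIMESTAMP_LOOKAHEAD = 40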
Hi, practically speaking, you must have the admin role to share knowledge objects globally / to all apps. r. Ismo
I tried to change the permission to the "All apps" option, but I don't see that option. Is there any other way to make my macro available to all apps?
My mistake, it should be max(_time). I've fixed it in the other reply.
Here are three lines of the file to illustrate what I'm going for:

Line from file: URI : https://URL.net/token
Desired field: token

Line from file: URI : https://URL.net/rest/v1/check
Desired field: rest/v1/check

Line from file: URI : https://URL.net/service_name/3.0.0/accounts/bah
Desired field: service_name

I have successfully extracted the 3rd example using this:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)\/\d+\."

That does not match the other two, though, so no field is extracted for them. Is there a way to say "if it doesn't match that regex, capture to the end of the line"? I've tried this, but then the 3rd example also captures everything to the end of the line:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)(\/\d+\.|\n)"
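One possible approach, as a sketch not tested against your data (the fallback field name URI_rest is made up here): run two rex extractions and coalesce them, so the stricter versioned pattern wins when it matches and the fallback captures to the end of the line otherwise.

  | rex field=_raw "URI\s*:\s*https?://[^/]+/(?<URI_ABR>[^/]+)/\d+\."
  | rex field=_raw "URI\s*:\s*https?://[^/]+/(?<URI_rest>.+)$"
  | eval URI_ABR=coalesce(URI_ABR, URI_rest)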
How can I get a list of Splunk Cloud index restores and the time each restore took to complete?
Thanks, that worked. I had gone down the addcoltotals path and was letting that column get in my way. Simple solution, thanks!
Yes, the lookup column names are index and count.
Hello, is there a way to add a control to a dashboard (in Dashboard Studio), for example a dropdown, to enable/disable a certain alert? Thanks!
What are the field names in your lookup? I assumed that your list of indexes was in a field called index.
Your suggestion worked!! Thank you so much for your help
The join is not working.
Hi, I have a similar situation to yours. I want to find users who run resource-intensive searches. Could you share the search strings you used for this? Thanks
Start with your lookup as the base, then join on the search data. Also, use tstats for something like this:

| inputlookup index_list
| join type=left index
    [| tstats max(_time) as latestTime where index=* by index
     | eval latestTime=strftime(latestTime,"%x %X")]
| where isnull(latestTime)
I have the actual list of indexes in a lookup file. I ran the query below to get the list of indexes with their latest ingestion time. How can I find whether any index listed in the lookup is not returned by this query?

index=index*
| stats latest(_time) as latestTime by index
| eval latestTime=strftime(latestTime,"%x %X")

Can you please help?
Hi, the documentation I found details the upgrade of a two-site cluster in "site-by-site" fashion, which is solid as a rock. We normally go that way, but without taking down all of one site's peers at once; instead we update them one by one. There is also a description of a rolling upgrade, in which I did not find any mention of multi-site clusters. I tried a combination of both by doing a rolling upgrade of one site and then the other, which at the end of the day did not speed things up very much; I still had to wait in the middle for the cluster to recover and become green again.

Did I miss a description of the rolling upgrade of a multi-site indexer cluster? What would be the benefit? And what is the difference anyway between going into maintenance mode and a rolling upgrade?

Thanks in advance
Volkmar
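For comparison, a rough sketch of the maintenance-mode style of peer-by-peer upgrade as I understand it (treat this as an outline, not the documented procedure for your version; verify the exact steps in the docs):

  splunk enable maintenance-mode     # on the cluster manager: suppress bucket fixup
  splunk stop                        # on one peer at a time; upgrade the binaries, then
  splunk start                       # bring the peer back and let it rejoin
  splunk show cluster-status         # on the cluster manager: confirm all peers are Up
  splunk disable maintenance-mode    # on the cluster manager, once the peers are done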
How do I onboard CloudWatch data to Splunk using HEC?
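As a point of reference for the HEC side only, a minimal sketch of sending a test event to the HEC event endpoint; the host, token, index, and sourcetype values are placeholders, and getting CloudWatch to deliver events to HEC (for example via Kinesis Data Firehose or a Lambda function) is a separate step:

  curl -k https://splunk.example.com:8088/services/collector/event \
    -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
    -d '{"event": {"message": "test cloudwatch event"}, "sourcetype": "aws:cloudwatch", "index": "aws"}'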
I am unable to create a data collector on my Node.js application. I came across this in the docs: "For the Node.js agent, you can create a method data collector only using the addSnapshotData() Node.js API, not the Controller UI as described here. See Node.js Agent API Reference." I have two questions:

1. How do I determine the value and key to use?
2. Where do I add addSnapshotData()?
For JSON data, use the spath command. References:

https://community.splunk.com/t5/Splunk-Search/How-to-parse-my-JSON-data-with-spath-and-table-the-data/m-p/250462
https://kinneygroup.com/blog/splunk-spath-command/
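For instance, a minimal spath sketch; the path message.failedRecords{} is an assumption based on the field names mentioned in the question and would need to match the actual JSON structure:

  | spath path=message.failedRecords{} output=failedRecords
  | mvexpand failedRecords
  | spath input=failedRecords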
How do I extract the fields that come under message and failedRecords?