Here are three lines of the file to illustrate what I'm going for:

Line from file                                                  Desired field
URI : https://URL.net/token                                     token
URI : https://URL.net/rest/v1/check                             rest/v1/check
URI : https://URL.net/service_name/3.0.0/accounts/bah           service_name

I have successfully extracted the 3rd example using this:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)\/\d+\."

That does not match the other two, though, so no field is extracted. Is there a way to say: if it doesn't match that regex, then capture until the end of the line? I've tried this, but then the 3rd example also captures everything to the end of the line:

rex field=_raw "URI.+\:\shttp.+\.(net|com)\/(?<URI_ABR>.+)(\/\d+\.|\n)"
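For what it's worth, the usual fix for this pattern is to make the capture non-greedy (.+?) and anchor the alternation on either /<digits>. or end of line ($). A quick sketch in Python's re module (close enough to the PCRE that rex uses) against the three sample lines above:

```python
import re

# Lazy capture (.+?) stops at the FIRST place the alternation can match:
# either "/<digits>." (the version segment) or end of line.
pattern = re.compile(
    r"URI.+?:\shttps?.+?\.(?:net|com)/(?P<URI_ABR>.+?)(?:/\d+\.|$)"
)

lines = [
    "URI : https://URL.net/token",
    "URI : https://URL.net/rest/v1/check",
    "URI : https://URL.net/service_name/3.0.0/accounts/bah",
]

for line in lines:
    print(pattern.search(line).group("URI_ABR"))
# prints: token, rest/v1/check, service_name
```

In rex that would be something like `rex field=_raw "URI.+?\:\shttp.+?\.(net|com)\/(?<URI_ABR>.+?)(\/\d+\.|$)"` — untested against your actual events, so treat it as a starting point (multiline events may also need the (?m) flag for $ to behave).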
Hi, I have a similar situation to yours. I want to find users who perform searches that are resource intensive. Could you share the search strings you used to perform your task? Thanks
Start with your lookup as the base, then join on the search data. Also, use tstats for something like this:

| inputlookup index_list
| join type=left index
[|tstats max(_time) as latestTime where index=* by index
| eval latestTime=strftime(latestTime,"%x %X")]
| where isnull(latestTime)
I have the actual list of indexes in a lookup file. I ran the query below to find the list of indexes with the latest ingestion time. How do I find whether there is any index that is listed in the lookup but not returned by this query?

index=index*
| stats latest(_time) as latestTime by index
| eval latestTime=strftime(latestTime,"%x %X")

Can you please help?
Hi, the documentation I found describes the update of a two-site cluster in "site-by-site" fashion, which is solid as a rock. We normally go that way, though without taking down all of one site's peers at once; instead we update them one by one. There is also a description of a rolling update, but I did not find any mention of multi-site clusters there. I tried a combination of both by doing a rolling update of one site and then the other, which at the end of the day did not speed things up very much: I still had to wait in the middle for the cluster to recover and become green again.

Did I miss a description of a rolling update for a multi-site indexer cluster? What would be the benefit? And what is the difference, anyway, between going into maintenance mode and a rolling update?

Thanks in advance
Volkmar
I am unable to create a data collector on my Node.js application. I came across this doc: "For the Node.js agent, you can create a method data collector only using the addSnapshotData() Node.js API, not the Controller UI as described here. See Node.js Agent API Reference." I have 2 questions:
1. How do I determine the key and value to use?
2. Where do I add addSnapshotData()?
For JSON data, use the spath command. Reference:
https://community.splunk.com/t5/Splunk-Search/How-to-parse-my-JSON-data-with-spath-and-table-the-data/m-p/250462
https://kinneygroup.com/blog/splunk-spath-command/
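For example, a minimal sketch (the path server.port here is hypothetical — substitute the keys from your own JSON):

```
... | spath input=_raw path=server.port output=server_port
    | table server_port
```

With no path argument, spath will auto-extract all fields from the JSON instead.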
You won't find event 4662 because they're blacklisted. The blacklist prevents events with that code from being ingested and indexed, therefore, they cannot be searched. Removing the blacklist will allow new 4662 events to be indexed, but will not do anything for the older events that happened while the blacklist was in effect.
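For reference, such a blacklist typically lives in inputs.conf on the forwarder; a sketch assuming the Security event log channel:

```
[WinEventLog://Security]
disabled = 0
# Remove or comment out this line to let 4662 events through again,
# then restart the forwarder.
blacklist1 = EventCode="4662"
```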
Are you saying that you eliminate DNS logging from eStreamer only, or that you don't log it at all (in the firewall)? I'm trying to find a way to keep logging it in Firepower but not pull it into Splunk via eStreamer.
I should probably add that we're very early on in our Intune configuration and deployment, so there isn't a huge amount going on yet, but I've tried generating some test data in the Event Hub itself.