Hi, I have a similar situation as yours. I want to find users who perform searches that are resource intensive. Could you share the search strings you used to perform your task? Thanks
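For comparison, the kind of search I had in mind runs against the _audit index, which records completed searches with their runtime (field names below are from the audit events; adapt as needed):

```
index=_audit action=search info=completed
| stats sum(total_run_time) as totalRunTime count as searchCount by user
| sort - totalRunTime
```

This ranks users by total search runtime; filtering on a time range first keeps the result set manageable.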
Start with your lookup as the base, then join on the search data. Also, use tstats for something like this:

| inputlookup index_list
| join type=left index
    [| tstats max(_time) as latestTime where index=* by index
    | eval latestTime=strftime(latestTime,"%x %X")]
| where isnull(latestTime)
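If join runs into its subsearch limits in a large environment, an equivalent pattern without join is sometimes used (a sketch, assuming the lookup has an index column):

```
| tstats max(_time) as latestTime where index=* by index
| append [| inputlookup index_list | eval fromLookup=1]
| stats max(latestTime) as latestTime max(fromLookup) as fromLookup by index
| where fromLookup=1 AND isnull(latestTime)
```

Indexes that appear only in the lookup end up with a null latestTime, which is exactly the set you are after.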
I have the actual list of indexes in a lookup file. I ran the query below to find the latest ingestion time per index. How do I find whether any index listed in the lookup is missing from the query's results?

index=index* | stats latest(_time) as latestTime by index | eval latestTime=strftime(latestTime,"%x %X")

Can you please help?
Hi, the documentation I found describes updating a two-site cluster site by site, which is rock solid. We normally go that way, yet without taking down all of one site's peers at once, instead updating them one by one. There is also a description of a rolling update, but I did not find any mention of multi-site clusters there. I tried a combination of both by rolling-updating one site and then the other, which in the end did not speed things up very much; I still had to wait in the middle for the cluster to recover and become green again.

Did I miss a description of the rolling update of a multi-site indexer cluster? What would be the benefit? And what is the difference anyway between going into maintenance mode and doing a rolling update?

Thanks in advance, Volkmar
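For context, the per-site flow I tried followed the rolling-upgrade CLI roughly like this (a sketch from memory; verify the exact commands against your Splunk version's documentation):

```
# On the cluster manager, before touching any peers:
splunk upgrade-init cluster-peers

# On each peer in the site, one at a time:
splunk offline
# ... upgrade the Splunk binaries on that peer, then ...
splunk start

# On the cluster manager, once all peers are upgraded:
splunk upgrade-finalize cluster-peers
```

Between sites I waited for the cluster to report all data searchable again before continuing.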
I am unable to create a data collector on my Node.js application. I came across this doc: "For the Node.js agent, you can create a method data collector only using the addSnapshotData() Node.js API, not the Controller UI as described here. See Node.js Agent API Reference." I have two questions: how do I determine the key and value to use, and where do I add addSnapshotData()?
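In case it helps anyone with the same question, a minimal sketch of how addSnapshotData() is typically wired in, assuming the standard agent bootstrap and an Express app (the controller settings are placeholders, and getTransaction() is my reading of the agent API reference; the key and value are arbitrary strings you choose):

```
// The agent must be the first require in the app's entry point.
var appd = require("appdynamics");
appd.profile({
  controllerHostName: "controller.example.com", // placeholder
  controllerPort: 8090,
  accountName: "customer1",
  accountAccessKey: "your-access-key",
  applicationName: "MyApp",
  tierName: "web",
  nodeName: "web-1"
});

var express = require("express");
var app = express();

// Inside a request handler, attach custom data to the transaction snapshot:
app.get("/checkout", function (req, res) {
  var txn = appd.getTransaction(req);
  if (txn) {
    // key: the label you want to see on the snapshot in the Controller UI
    // value: the string value to record for this transaction
    txn.addSnapshotData("cartSize", String(req.query.items || 0));
  }
  res.send("ok");
});
```

So the key is simply the name you want the data to appear under in the snapshot, and the value is whatever per-transaction string you want captured; the call goes inside the request handling code for the business transaction you care about.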
For JSON data, use the spath command. References: https://community.splunk.com/t5/Splunk-Search/How-to-parse-my-JSON-data-with-spath-and-table-the-data/m-p/250462 https://kinneygroup.com/blog/splunk-spath-command/
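A minimal self-contained illustration (the JSON structure and field names are made up for the example):

```
| makeresults
| eval _raw="{\"user\": {\"name\": \"alice\", \"role\": \"admin\"}}"
| spath path=user.name output=userName
| spath path=user.role output=userRole
| table userName userRole
```

Each spath call extracts one path from the JSON into a named field, which you can then table, filter, or stats on like any other field.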
You won't find event 4662 because they're blacklisted. The blacklist prevents events with that code from being ingested and indexed, therefore, they cannot be searched. Removing the blacklist will allow new 4662 events to be indexed, but will not do anything for the older events that happened while the blacklist was in effect.
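For reference, the blacklist in question typically lives in inputs.conf on the forwarder and looks something like this (a sketch; the exact stanza and blacklist line depend on your configuration):

```
[WinEventLog://Security]
disabled = 0
# Remove or comment out this line to start indexing new 4662 events:
blacklist1 = EventCode="4662"
```

After removing the line and restarting the forwarder, new 4662 events will flow in, but the gap from the blacklisted period remains.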
Are you saying that you eliminate DNS logging from eStreamer only or that you don't log it at all (in the firewall)? I'm trying to find a way to keep logging it in Firepower but not pulling it into Splunk via eStreamer.
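One way to keep the logging intact in Firepower but drop the DNS events on the Splunk side is index-time filtering with props/transforms on the indexer or heavy forwarder (a sketch; the sourcetype and regex are assumptions you would adapt to your data):

```
# props.conf
[cisco:estreamer:data]
TRANSFORMS-dropdns = drop_dns_events

# transforms.conf
[drop_dns_events]
REGEX = DNS
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are routed to nullQueue and never indexed, so they cost no license, while Firepower itself keeps its full logs.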
I should probably add that we're very early on in our Intune configuration and deployment, so there isn't a huge amount going on yet, but I've tried generating some test data in the Event Hub itself.
Hi, I've searched the forums and found one thread about getting Intune data into Splunk, which set me on a path, hopefully, to getting the data in. I'm working through the guidance here: Splunk Add-on for Microsoft Cloud Services - Splunk Documentation.

I have requested and received an Event Hub from the team that looks after our Azure infrastructure. I've created the Azure app registration, set the API permissions and the access control on the Event Hub, and having configured the Splunk add-on side as well, I can see it is successfully signing in. I'm not seeing any data come into the index, though.

The one thing I'm unclear on, and haven't been able to find a definitive answer to, is whether I need an Azure storage account in order to store the data. My reading of the Event Hub configuration options suggested it was capable of some form of retention to allow streaming elsewhere (e.g. setting the retention time), but perhaps I am misinterpreting that. Has anyone successfully got this working, and if so, are you using a storage account with it?