Hello
I would like to be able to detect
- When a device has stopped sending logs to Splunk within a given timeframe
- When a new device has started sending logs
What I'm thinking of doing is running a search every hour to populate a lookup CSV with entries like the following:
Hostname, DeviceIP, SourceType, Index, Event First Seen, Event Last Seen
I'm afraid I've used other SIEMs but am a bit new to Splunk.
I would then query this table of data to alert when a device has not sent data or when a new device is seen.
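Something like this is what I had in mind, though it is only a rough sketch: device_status.csv is a placeholder lookup name I made up, and it would need to exist before the first run (e.g. created once with an initial outputlookup). The idea is to schedule it hourly over the last hour of data:
index=*
| stats min(_time) as firstSeen max(_time) as lastSeen by host, sourcetype, index
| inputlookup append=true device_status.csv
| stats min(firstSeen) as firstSeen max(lastSeen) as lastSeen by host, sourcetype, index
| outputlookup device_status.csv
A second scheduled search would then read device_status.csv and alert on hosts whose lastSeen is older than a threshold, or whose firstSeen falls within the last run. I haven't worked out the DeviceIP column yet, since that depends on which field carries the IP in our data.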
What would be the best way to achieve this?
Many thanks for your help.
Hi @davidwaugh
Try using the metadata command:
| metadata type=hosts index=_internal
| eval status=case(lastTime<(now()-(86400*3)), "missing", firstTime>(now()-(86400*3)), "new", 1=1, "normal")
| where status!="normal"
This will show you devices which have not sent data in the last 3 days, or have recently (within 3 days) started sending data.
Run the search over all time.
Note: my example above uses the _internal index. If your retention on internal data is not very long, you can use index=* to look at your data indexes instead.
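If you schedule this as an alert you may also want readable timestamps, since metadata returns firstTime and lastTime as epoch values. Purely optional, but you could append something like this to the end of the search (firstSeen and lastSeen are just display names I've chosen):
| eval firstSeen=strftime(firstTime, "%Y-%m-%d %H:%M:%S"), lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| table host status firstSeen lastSeen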
Excellent solution, thanks for sharing it @nickhills