Hello again, community!
Today I was notified that every Friday morning, at a particular time, a large number of new sessions are registered in the firewall log, apparently caused somehow by Splunk.
The question passed down to me was: why? So I played around with the metrics log (input/output, etc.), though I cannot see any correlated increase or decrease in the numbers around that time.
What I ended up with were variations of
index=_internal source=*metrics.log group=tcpin_connections OR group=tcpout_connections | timechart count by host useother=false
My question: is this a reasonable approach?
Otherwise, what would be a better search for counting newly established connections between members of the Splunk infrastructure, to figure out whether any component is establishing a higher number of new connections?
All the best
The words "apparently" and "somehow" aren't much to work with. I'd go back to the reporter for more detail. Find out what makes them think the connections are caused by Splunk. Are the connections *to* Splunk, *from* Splunk, or something else?
I hope you're Splunking your firewall logs. Then you'd be able to search the timeframe in question to see just what is happening - how many connections, which sources (address and port), and which destinations (address and port).
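If the firewall logs are indexed, a search along these lines could break the Friday burst down by endpoint. This is just a sketch — the index name "firewall" and the CIM-style field names src_ip, src_port, dest_ip, and dest_port are assumptions, so adjust them to your environment:
index=firewall | stats count by src_ip, src_port, dest_ip, dest_port | sort - count
Run it over just the Friday-morning window and the top rows should tell you which hosts and ports are responsible for the spike.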
Your query is a reasonable one for finding TCP connections to or from Splunk instances. I'm not sure the metrics log covers all possible Splunk connections, though. For example, a burst of API calls wouldn't show up there. Other connections to consider include a burst of alerts, scheduled dashboard deliveries, and forwarders phoning home (there may be more).
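One quick check for a Friday-morning burst of scheduled activity is the scheduler log in _internal (assuming you can search _internal on the search head; narrow the time range to the window in question):
index=_internal sourcetype=scheduler | timechart span=1m count by app useother=false
If a pile of scheduled searches or alerts fires at that time, it will show up as a spike here even though it never appears in the tcpin/tcpout metrics.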
"Not much to work with" pretty much summarizes it all nicely; let's just say "it's complicated".
Well, until there is more to go on, my interpretation is that there is not much more that can be done.
I'll see if I can get some additional information regarding the connections; otherwise I suppose there is not much else to do.
Thank you very much for the feedback!