Microsoft has a really slick new event-routing solution, EventFlow, that we are using in our Service Fabric distributed application:
It supports a myriad of inputs (EventSource / ETW and Microsoft.Extensions.Logging in our case) and outputs (stdout, Elasticsearch, Application Insights, OMS). You can define routes between the two, including filters and other pattern matching. It is really powerful, and it runs really fast since in our case all of the input sources are in-process. It has an open API for adding your own custom / third-party input and output providers. In fact, every single supported provider is implemented using this mechanism.
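To give a sense of what that routing looks like: EventFlow is driven by a JSON config (eventFlowConfig.json). A rough sketch, where the provider name and instrumentation key are placeholders and the exact schema should be checked against the EventFlow samples:

```json
{
  "inputs": [
    { "type": "EventSource", "sources": [ { "providerName": "MyCompany-MyService" } ] },
    { "type": "Microsoft.Extensions.Logging" }
  ],
  "filters": [
    { "type": "drop", "include": "Level == Verbose" }
  ],
  "outputs": [
    { "type": "StdOutput" },
    { "type": "ApplicationInsights", "instrumentationKey": "00000000-0000-0000-0000-000000000000" }
  ]
}
```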
In short, we need an EventFlow Output Provider for Splunk.
We want to be able to get all of these events indexed into Splunk so we can do analysis / queries. We have a Splunk cluster, load-balanced indexers / search heads, the whole nine yards, as we already use Splunk to index our large game battle analytics files. We wish to leverage that investment without having to use OMS or Application Insights. Using PerfView on every node is too cumbersome. We need something centralized.
Under certain conditions the amount of traces running through the system can be very heavy. Writing to the local EventLog is not an option, as it adds disk load on our servers that we wish to avoid. Again, since this is ETW data, it is already being sourced in-process. So in an ideal world, we would asynchronously send it directly to the Splunk indexer via the Splunk Web API. It is self-defeating to enable EventLog logging so that the Splunk indexer indirectly gets the data through the Agent, which gets it through the Event Log. Blech!
In short: you guys need to implement your own EventFlow Output Provider for Splunk. Is this something that is already underway?
Of course, there's nothing preventing us from doing this ourselves. MS has an open API for the providers, and you have an open API for sending events into Splunk. So this is totally doable. We just won't bother doing it if this is something you are planning on providing soon...
Or, is there another route to get this in-process ETW data into Splunk without it having to touch disk?
It's been two weeks since the last bump. Any Splunk folks have an opinion here?
I can provide our own implementation, but it seems this is something a lot of people are going to want, especially since the Service Fabric SDK is now open source, just as EventFlow is.
MS provided their own Elasticsearch implementation, but not Splunk...
I'm not an expert in EventFlow, so I'm not sure how it captures its input.
What I would do in the generic case is collect all the data (e.g. all Windows Event Logs from various systems) into Splunk using various methods (e.g. Universal Forwarders), and then output it to the third party (EventFlow).
You can output data as:
- TCP out => send cooked or uncooked data in real time. Any Splunk component can do this job, including a UF.
- Syslog output => forward syslog data to a third party, but this needs a HF or indexer (this is mostly what Elasticsearch setups do).
In both cases you have the flexibility to send a subset of the data or apply other filters.
Please see this link for more details and examples:
Thanks for the reply.
EventFlow can capture any input for which you author a provider. Microsoft provides many; the ones we're interested in are ETW / EventSources. Service Fabric and its hosted services can generate a mountain of diagnostic data in the form of events and traces.
Thus, using the Splunk Universal Forwarder as it scrapes the EventLog is not an option, as mentioned in the original post.
EventFlow is highly extensible, and the right solution is to build an output provider that takes the in-process ETW / EventSource data arriving through EventFlow (which is essentially a router) and forwards it on to the Splunk indexer directly, just as the Universal Forwarder does, but without ever writing these events to disk.
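For illustration only (a real EventFlow output provider is a .NET class), the pattern such a provider follows is buffer-in-memory, flush-in-batches, so the hot path never blocks on the network or touches disk. A minimal sketch in Python, where `send_batch` stands in for whatever actually ships events to Splunk:

```python
import queue
import threading

class SplunkOutputSketch:
    """Illustrative only: the buffer-and-flush pattern an output
    provider would follow (real EventFlow providers are .NET classes)."""

    def __init__(self, send_batch, max_batch=100):
        self._send_batch = send_batch      # callable that ships a list of events
        self._queue = queue.Queue()        # in-memory buffer; nothing hits disk
        self._max_batch = max_batch
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def handle_event(self, event: dict) -> None:
        # Called in-process by the router on the hot path; just enqueue.
        self._queue.put(event)

    def _run(self) -> None:
        # Background flusher: block for the first event, then drain
        # whatever else is queued (up to max_batch) and ship it.
        while True:
            batch = [self._queue.get()]
            while len(batch) < self._max_batch:
                try:
                    batch.append(self._queue.get_nowait())
                except queue.Empty:
                    break
            self._send_batch(batch)
```

The key design point is that the caller's thread only ever pays for an in-memory enqueue; all network I/O happens on the background worker.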
So, assuming Splunk isn't going to do this, I'll do it and open source it. I've been talking with the MS EventFlow folks and they are open to adding it into the base product. Woot.
But given the possibility of high volumes, what is the preferred mechanism for sending these events to the indexer? REST API? TCP Socket?
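For what it's worth, on the REST side the relevant surface would be Splunk's HTTP Event Collector (HEC), which accepts JSON-wrapped events over HTTPS with a token in the Authorization header. A minimal sketch, where the host and token are placeholders I've made up:

```python
import json
import urllib.request

HEC_URL = "https://splunk.example.com:8088/services/collector/event"  # assumed host
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"                    # placeholder token

def hec_payload(event: dict, source: str = "eventflow", sourcetype: str = "_json") -> bytes:
    """Wrap one event in the envelope the HTTP Event Collector expects."""
    return json.dumps({
        "event": event,
        "source": source,
        "sourcetype": sourcetype,
    }).encode("utf-8")

def send_to_hec(event: dict) -> None:
    req = urllib.request.Request(
        HEC_URL,
        data=hec_payload(event),
        headers={"Authorization": "Splunk " + HEC_TOKEN,
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises on a non-2xx response
```

HEC also accepts multiple envelopes concatenated in one POST body, which matters at the volumes described here.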
Any other considerations?
Ah, OK. So EventFlow is essentially a collector and pusher of data.
Would you log to a file? If yes, that's the best option, and Splunk can collect it.
Otherwise, send as TCP, and Splunk indexers / HFs / UFs can collect it. Try to send to a non-standard port so the data can be segregated easily (e.g. port 10520).
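A sketch of what the raw TCP path looks like from the sender's side, assuming newline-delimited JSON (so a line-breaking sourcetype on the Splunk TCP input splits events cleanly) and a made-up indexer host, with 10520 as suggested above:

```python
import json
import socket

INDEXER_HOST = "splunk-indexer.example.com"  # assumed hostname
INDEXER_PORT = 10520                         # the non-standard port suggested above

def serialize(events) -> bytes:
    """Newline-delimited JSON: one event per line, so the Splunk TCP
    input can line-break the stream into individual events."""
    return "".join(json.dumps(e) + "\n" for e in events).encode("utf-8")

def send_tcp(events) -> None:
    # Open a connection, ship the batch, close. A real sender would
    # keep the connection alive and reconnect on failure.
    with socket.create_connection((INDEXER_HOST, INDEXER_PORT), timeout=5) as sock:
        sock.sendall(serialize(events))
```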
Logging to a file is not an option.
So yes we would like to stream events over TCP.
We are currently deciding the best way to do this:
- directly to an indexer
- through a heavy forwarder
- through a local universal forwarder
Is there any documentation which compares and contrasts these three approaches and cites the advantages / disadvantages of each?