Getting Data In

Choosing an approach for sending metrics to Splunk

sri420
New Member

Hi,

I have just started exploring Splunk.

My requirement is to capture metrics from a set of microservices running in our environment.

I see Splunk provides multiple options for this, but I am not clear which one is the right one to use.

  1. I see a REST API exposed by Splunk, with endpoints for both the collectd and StatsD line protocols.
  2. There is also the Universal Forwarder, which needs to be installed on each host machine and forwards data to the Splunk indexer. I feel installing a forwarder on all machines might be a constraint, though it might give better performance. I am also not sure how we can send metrics to a Universal Forwarder. Is there a REST API for that?
  3. There is also an option to use TCP instead of HTTP. I am not sure how to send over TCP, or whether TCP would give better performance than HTTP.

Can someone please guide me on the right approach for the right scenario?


esix_splunk
Splunk Employee

Well, first I'll comment on #3...

TCP is a transport protocol, and HTTP is an application protocol that runs on top of that transport; HTTP/S goes over TCP. This is how the OSI model works (look up the seven layers of the OSI model and you'll get a better feel for what this means).

Now the meat of your question..

Typically microservices provide some sort of logging-out facility. Syslog is common, but some form of JSON out over HTTP is more prevalent. This is where the Splunk HTTP Event Collector (HEC) comes into play. We have the capability to listen over HTTP/S, collect events in raw or JSON format, and index them, in the same manner we would with a UF installed and reading a file.
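To make that concrete, here is a minimal sketch of a microservice pushing a metric to HEC using only the Python standard library. The URL, token, index, and metric names are placeholders you would replace with your own; it assumes the multi-metric HEC payload shape, where `event` is the literal string `"metric"` and measurements go in `fields` as `metric_name:<name>` keys (this format requires the token to point at a metrics index).

```python
import json
import urllib.request

# Placeholder values -- substitute your own HEC endpoint and token.
HEC_URL = "https://splunk.example.com:8088/services/collector"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_metric_event(metric_name, value, dimensions=None, timestamp=None):
    """Build one HEC metrics payload.

    For metrics, "event" must be the literal string "metric", and the
    measurement goes into "fields" under a "metric_name:<name>" key.
    Any extra key/value pairs in "fields" become dimensions.
    """
    fields = {"metric_name:" + metric_name: value}
    fields.update(dimensions or {})
    payload = {"event": "metric", "fields": fields}
    if timestamp is not None:
        payload["time"] = timestamp  # epoch seconds; omit to use index time
    return payload

def send_to_hec(payload):
    """POST a single payload to the HEC collector endpoint."""
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": "Splunk " + HEC_TOKEN,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

A service would call something like `send_to_hec(build_metric_event("cpu.util", 42.0, {"host": "web-01"}))` on each reporting interval; since it's plain HTTP/S, any language's HTTP client works the same way.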

I would look at this as the option for a microservice-based world. As you mention, installing a UF requires some type of application footprint (memory / CPU / storage), whereas a logging endpoint is typically just a configuration setting. Awesome stuff!
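For illustration, enabling HEC on the Splunk side is roughly a config stanza like the following; the stanza name, token value, and index name here are placeholders, and in practice the token is usually generated through Splunk Web rather than typed by hand.

```
# $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf

[http]
disabled = 0

[http://microservice_metrics]
token = <token generated in Splunk Web>
index = metrics_idx
```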

The REST API is another option. I don't have a lot of thoughts on it, except that customers at large scale are pushing TBs+ of data per day over HEC or UF, not REST... REST would also entail writing some custom apps via shell or an SDK. While doable, HEC is much better suited here.
