TrackMe App: Adding Host to DataSources

chrisboy68
Contributor

Hi,

 

Looking at TrackMe to monitor inputs on our Heavy Forwarders. Looking at the UI, the Data Source Tracking would give me all I need IF it has a host listed.

My scenario: we have over 10 Heavy Forwarders pushing multiple sourcetypes and multiple indexes to our Indexers. When one "data_name" is in error, I would like to know which Heavy Forwarder to look at to troubleshoot further. It would also be great if I could just sort by host on the main page, or maybe use tags that carry the host name? I couldn't see a way to meet my requirements.

 

Any suggestions?

 

Thank you,


Chris


guilmxm
SplunkTrust

Hi @chrisboy68 !

Sorry for the late reply on this one, and thanks for the mention. I am not actively monitoring the Splunk Community; we used to receive automatic notifications from Splunk Answers, but that no longer happens, which I find a bit of a loss with the new site.

So, to answer your question, there are multiple ways to tackle this, very easily in fact and built-in within the UI.

The first answer is using Elastic Sources:
https://trackme.readthedocs.io/en/latest/userguide.html#elastic-sources

You can basically create a virtual data source (standard data sources represent the index + sourcetype) which matches your need, for instance mydata_source:context1, which underneath is a tstats search against index + sourcetype + host.
Then repeat the process as many times as you need.
As long as you are dealing with tstats searches, you can easily add these into the common "bucket" as shared Elastic Sources (which means they are handled via a single scheduled tracker that generates the SPL dynamically).
For very large data sources, you could create dedicated trackers via the same UI.
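
For illustration, a per-HF elastic source of type tstats essentially boils down to a constraint like the following (the index, sourcetype and host values here are just placeholders to show the shape of the underlying search, not the exact SPL TrackMe generates):

| tstats max(_time) as lastTime, count where index=network sourcetype=pan:traffic host=my-hf-01 by index, sourcetype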

The second answer can be in the data host monitoring as well. There are two modes available (see the configuration UI): in the standard mode we monitor all sourcetypes on a per-host basis and only start alerting when none of them meet the monitoring rules (meaning none of them are still coming into Splunk).

In the second mode (data host global alerting policy), TrackMe monitors sourcetypes individually per host, which simply means that the host turns red if any of the sourcetypes monitored for that host does not meet the monitoring conditions and rules.
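
Conceptually, this tracking rests on the latest event time per host and sourcetype, something along these lines (a simplified sketch, not the exact search TrackMe runs):

| tstats max(_time) as lastTime, count where index=* by host, sourcetype
| eval lag_sec=now()-lastTime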

So, you have at least two built-in ways to address your needs: Elastic Sources, which can be whatever you need, and data host monitoring with the proper level of configuration (including what you include / exclude).

Let me know if my explanation isn't clear enough 😉

Guilhem



SamHTexas
Builder

I installed this app (TrackMe) on my Cluster Master, where most of my apps are located. The app generated a few errors that I could not resolve. I am assuming it needs to be initially configured. Please show me how to set up / configure TrackMe after the initial install. Thanks a million.


guilmxm
SplunkTrust

Hi @SamHTexas 

Please have a careful look at the following documentation tutorials:

- https://trackme.readthedocs.io/en/latest/configuration.html

- https://trackme.readthedocs.io/en/latest/userguide.html#your-first-steps-with-trackme

You have as well my talk from .conf:

TrackMe .conf 2021 video: https://conf.splunk.com/files/2021/recordings/TRU1548B.mp4

TrackMe .conf 2021 slides: https://conf.splunk.com/files/2021/slides/TRU1548B.pdf

When you say:
"on my Cluster Master where most of my apps are located" I guess perhaps you meant the monitoring console node?

The cluster master is not the right candidate, and you should not deploy any third-party application there other than base configuration apps. A proper candidate for TrackMe is either a dedicated search head, a search head cluster, or the monitoring console host.

Make sure you satisfy the app dependencies too:

https://trackme.readthedocs.io/en/latest/deployment.html#dependencies

Guilhem



chrisboy68
Contributor

Thank you. I'm still a bit confused about how to set up Elastic Sources. For example, given this tstats:

| tstats count summariesonly=t where index!=_* AND sourcetype=*aws* AND host=ip* groupby host, index, sourcetype

I want my Elastic Sources to be "host:index:sourcetype". As mentioned, I have over 1k inputs across all my HFs, so I'm looking at using tstats; is it possible to put one tstats query in the constraint to achieve what I'm looking for?

Thank you,


Chris


guilmxm
SplunkTrust

@chrisboy68 

Ok, so let's keep it simple: an Elastic Source is a virtual entity representing whatever combination you choose.

If you carefully read the doc, you will see different examples to illustrate this, but let's say in your context, you could have:

a. entity "one"

type: tstats
search constraint: index=* sourcetype=*aws* host=<my HF1>

b. entity "one"

type: tstats
search constraint: index=* sourcetype=*aws* host=<my HF2>

Entity a represents its slice of the data as a whole, just like entity b, so as long as data is coming in for entity "a" and the monitoring rules are respected, the entity is green.
And so forth.

You are the master of your constraints and you can choose what to do.

For example, since the HF is the collector, the "host" metadata is likely not the best fit for the job, because it might be overridden (in the data, the host is often not the host that collects but the underlying technical host, etc.).

So a good approach is often to make your HF create an indexed field which represents the collection layer, the pipeline if you like, which is then what you would add to the constraint.
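
As a rough sketch of that approach on the HF, assuming a hypothetical indexed field named collector (stanza names, sourcetype and values below are illustrative only):

# transforms.conf on the HF: create an indexed field identifying this collection layer
[set_collector]
INGEST_EVAL = collector="hf-01.example.com"

# props.conf on the HF: attach the transform to the sourcetype(s) you want to tag
[aws:cloudtrail]
TRANSFORMS-set_collector = set_collector

# fields.conf on the search tier: declare the field as indexed so collector::<value> works
[collector]
INDEXED = true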

With this approach, you could have an elastic source that does

index=* sourcetype=*aws* collector::<my first collector>

and entity b would be:

index=* sourcetype=*aws* collector::<my second collector>

As explained here:

https://trackme.readthedocs.io/en/latest/userguide.html#elastic-source-example-2-custom-indexed-fiel...
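
As a quick sanity check once such an indexed field exists (assuming the collector field from the sketch above), you can confirm it is populated with:

| tstats count where index=* collector=* by collector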


The Cribl LogStream integration, for example, relies on the excellent schema Cribl has, which automatically creates an indexed field (cribl_pipe) that TrackMe then relies on; look at the following:

https://trackme.readthedocs.io/en/latest/cribl_integration.html

(to give you some more understanding)

Does this help?


chrisboy68
Contributor

Ok, let me give it a shot. Thanks

Chris


bowesmana
SplunkTrust

It sounds like you'd be able to use the Data Hosts tracking for that.

https://trackme.readthedocs.io/en/latest/configuration.html#trackme-data-hosts-define-what-works-for...

It gives you two modes, either by host or by sourcetype, for determining any issues for a given host.

It sounds like this is what you need to monitor the HFs.

 

chrisboy68
Contributor

@guilmxm  any ideas?
