I'm looking at TrackMe to monitor inputs on our Heavy Forwarders. In the UI, Data Source Tracking would give me everything I need IF it listed a host.
My scenario is, we have over 10 Heavy Forwarders pushing multiple sourcetypes with multiple indexes to our Indexers. When one "data_name" is in error, I would like to know which Heavy Forwarder to look at for further troubleshooting. It would also be great if I could sort by host on the main page, or maybe use tags that are host names. I couldn't see a way to meet my requirements.
Hi @chrisboy68 !
Sorry for the late reply on this one, and thanks for the mention. I am not actively monitoring the Splunk community; we used to receive automatic notifications from Splunk Answers, but that no longer happens, which I find is something missing from the new site.
So, to answer your question: there are multiple ways to tackle this, very easily actually, and built into the UI.
The first answer is using Elastic Sources:
You basically can create a virtual data source (standard data sources represent the index + sourcetype) that matches your need, for instance mydata_source:context1, which underneath is a tstats search against index + sourcetype + host.
Then repeat the process as many times as you need.
As long as you are dealing with tstats searches, you can easily add these into the common "bucket" as shared Elastic Sources (which means they are handled via a single scheduled search generating the SPL dynamically).
For very large data sources, you could create dedicated trackers via the same UI.
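To make this concrete, a per-HF virtual data source could rest on a tstats constraint of this shape (the index, sourcetype, and host values below are hypothetical, and TrackMe generates the final SPL for you, so treat this as a sketch of the underlying search only):

| tstats count, max(_time) as lastTime where index=firewall sourcetype=pan:traffic host=my-hf-01 by host, index, sourcetype

With one entity per Heavy Forwarder, you can then see at a glance which HF stopped sending.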
The second answer can be data host monitoring: there are two modes available (see the configuration UI). In the standard mode, we monitor all sourcetypes on a per-host basis and start alerting only when none of them meet the monitoring rules (meaning none of them are still coming into Splunk).
In the second mode (data host global alerting policy), TrackMe monitors sourcetypes individually per host, which means, to keep it simple, that the host turns red if any of the sourcetypes monitored for that host fails to meet the monitoring conditions and rules.
So, you have at least two built-in ways to address your needs: Elastic Sources, which can be whatever you need, and data host monitoring with the proper level of configuration (including what you include/exclude).
Let me know if my explanation isn't clear enough 😉
I installed this app (TrackMe) on my Cluster Master, where most of my apps are located. The app generated a few errors that I could not resolve. I am assuming that it needs to be initially configured. Please show me how to set up / configure TrackMe after the initial install. Thanks a million.
Please have a careful look at the following documentation tutorials:
You have as well my talk from .conf:
TrackMe .conf 2021 video: https://conf.splunk.com/files/2021/recordings/TRU1548B.mp4
TrackMe .conf 2021 slides: https://conf.splunk.com/files/2021/slides/TRU1548B.pdf
When you say:
"on my Cluster Master where most of my apps are located" I guess perhaps you meant the monitoring console node?
The cluster master is not the right candidate; you should not deploy any third-party application there other than base configuration apps. A proper candidate for TrackMe is a dedicated search head, a search head cluster, or the monitoring console host.
Make sure you satisfy the app dependencies too:
Thank you. Still a bit confused about how to set up elastic sources. For example, given this tstats search:
| tstats count summariesonly=t where index!=_* AND sourcetype=*aws* AND host=ip* groupby host, index, sourcetype
I want my elastic sources to be "host:index:sourcetype". As mentioned, I have over 1k inputs across all my HFs, so I'm looking at using tstats: is it possible to put a single tstats query in the constraint to achieve what I'm looking for?
Ok, so let's keep it simple: an Elastic Source is a virtual entity built from whatever combination of constraints you choose.
If you carefully read the doc, you will see different examples to illustrate this, but let's say in your context, you could have:
a. entity "one"
search constraint: index=* sourcetype=*aws* host=<my HF1>
b. entity "two"
search constraint: index=* sourcetype=*aws* host=<my HF2>
Entity "a" represents its data as a whole, as does "b"; so as long as there is data coming into entity "a" and the monitoring rules are respected, the entity is green.
And so forth.
You are the master of your constraints and you can choose what to do.
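Before creating the entities, a quick tstats can enumerate the candidate hosts so you know how many constraints you will need (the index and sourcetype filters below reuse the hypothetical values from the example above; adapt them to your data):

| tstats count where index=* sourcetype=*aws* by host
| sort - count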
For example, since the HF is the collector, the "host" metadata is likely not the best suited for the job, because it might be overridden (in the data, the host is not the host that collects but the underlying technical host, etc.).
So often, a good approach is to have your HF create an indexed field that represents the collection layer (the pipeline, if you like), which is then what you would add to your constraints.
With this approach, entity "a" could have the constraint:
index=* sourcetype=*aws* collector::<my first collector>
and entity "b" would have:
index=* sourcetype=*aws* collector::<my second collector>
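As a sketch of one way to create such an indexed field on the HF (the stanza name, field name, sourcetype, and host value below are all hypothetical; adapt them to your environment), an INGEST_EVAL transform can write it at ingest time:

# transforms.conf on the Heavy Forwarder
[add_collector_meta]
INGEST_EVAL = collector:="my-hf-01"

# props.conf on the Heavy Forwarder
[my:sourcetype]
TRANSFORMS-collector = add_collector_meta

The field is then searchable with the collector::my-hf-01 syntax; declaring it in fields.conf (INDEXED=true) on the search tier additionally allows the collector=my-hf-01 syntax.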
As explained here:
The Cribl LogStream integration, for example, relies on the excellent schema Cribl provides: it automatically creates an indexed field (cribl_pipe) which TrackMe then relies on; look at the following:
(to give you some more understanding)
Does this help?
Ok, let me give it a shot. Thanks
It sounds like you'd be able to use the Data Hosts tracking for that.
It gives you two modes, either per host or per sourcetype, for determining issues for a given host.
It sounds like this is what you need to monitor the HFs.