Splunk ITSI

Splunk IT Service Intelligence: How to get non-generic KPIs (pre-configured KPI source searches) to work?

Jarohnimo
Builder

So when setting up a new service in Splunk IT Service Intelligence, it allows you to select a generic KPI or choose from a list of pre-defined KPIs provided by Splunk (thank you very much). I'm new to the technology, so I'm still figuring it all out, but I feel there may be missing pieces of instruction on how to properly configure ITSI (when using the non-generic KPIs).

Each time I try to use a non-generic KPI, for example under Operating System: Network Utilization I select Network Utilization: Total Packets/second (in/out), it doesn't return any results. At the beginning of the setup it lets you click "Generated Search", which shows all the servers in your farm, but it also shows tags at the top (that's when I gasped and began to have hope!... so now I'm here trying to confirm the right way to do this).

I've been following Splunk's guide here: http://docs.splunk.com/Documentation/ITSI/2.5.0/Configure/HowtocreateKPIsearches to try and configure this properly, but it doesn't explain how to get these pre-defined KPIs returning results.

My suspicion is that you have to manually tag each server to match all the predefined tags associated with the KPI, so that when you use those KPIs they return results. Is this the case? If so, can someone please provide a link from Splunk that explains this process in detail? I'm trying to do things by the book, as ITSI is not the easiest product to set up and we need it to be accurate!

Thanks!

One reason I didn't move forward with manually creating the tags and adding my servers is that I'm unsure whether ITSI was supposed to create those tags when it was installed, so that I would only need to go in and plug in the names of my servers.
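
For what it's worth, the kind of quick check I'm assuming would show whether any events even carry the expected tags is a plain search like this (the oshost tag name is just taken from what the generated search shows, not something I've confirmed in the docs):

    tag=oshost earliest=-60m
    | stats count by index, sourcetype, host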

JosephSwann
Explorer

Thank you!

What's funny is I put in several support tickets and worked directly with our Splunk reps and their support engineer, and I lie to you not, not one of them knew how to do this.
Now, I know ITSI isn't as popular or as well accepted as Enterprise Security, but what has happened is that all the real support has bled toward their money maker, and us ITSI folks are left with a much smaller support group and knowledge base.

There is a huge knowledge gap, at Splunk and in general, between the Splunk ninja masters and the people who copy stuff off Splunkbase to survive each and every day. This seems like something that should have been promoted along with ITSI.

Why they don't have these apps available as part of the configuration bundle, like they did with the universal forwarder install asking if you want to install the Windows module during setup (they're giving you options, and people like options!), where you could select a checkbox for SQL, VMware, etc. if you're monitoring those types of devices or logs, is beyond me. It's stupid to ship a product in a fashion where it won't work without a great deal of other apps, knowledge, and configuration. Installing this in a search head cluster and indexer cluster is no smaller effort, seeing how you have to identify all your TAs and apps across your enterprise, adding space and configuration load.

This has been my toughest challenge yet, but I'm thankful.

bandit
Motivator

Looks like you need the add-ons below installed on the same search head as ITSI for the Linux/Unix and Windows metrics.

The tag=oshost search started working as soon as I installed them.

Splunk Add-on for Microsoft Windows
https://splunkbase.splunk.com/app/742/
Splunk_TA_windows\default\tags.conf

Splunk Add-on for Unix and Linux
https://splunkbase.splunk.com/app/833/
Splunk_TA_nix\default\tags.conf

    [eventtype=cpu]
    os = enabled
    resource = enabled
    report = enabled
    unix = enabled
    cpu = enabled
    avail = enabled
    performance = enabled
    oshost = enabled

    [eventtype=cpu_anomalous]
    anomalous = enabled

    [eventtype=df]
    df = enabled
    host = enabled
    check = enabled
    success = enabled
    storage = enabled
    performance = enabled
    oshost = enabled

    [eventtype=iostat]
    report = enabled
    resource = enabled
    iostat = enabled
    performance = enabled
    cpu = enabled
    storage = enabled
    success = enabled
    oshost = enabled

    [eventtype=lsof]
    report = enabled
    lsof = enabled
    resource = enabled
    file = enabled
    success = enabled

    [eventtype=netstat]
    report = enabled
    netstat = enabled
    os = enabled
    cpu = enabled
    success = enabled

    [eventtype=ps]
    report = enabled
    process = enabled
    os = enabled
    success = enabled
    ps = enabled
    performance = enabled
    oshost = enabled

    [eventtype=top]
    process = enabled
    report = enabled
    top = enabled
    os = enabled
    success = enabled

    [eventtype=time]
    report = enabled
    os = enabled
    success = enabled
    time = enabled

    [eventtype=vmstat]
    report = enabled
    vmstat = enabled
    resource = enabled
    success = enabled
    cpu = enabled
    memory = enabled
    performance = enabled
    oshost = enabled

    [eventtype=bandwidth]
    network = enabled
    resource = enabled
    success = enabled
    performance = enabled
    oshost = enabled

    [eventtype=hardware]
    inventory = enabled
    oshost = enabled
    cpu = enabled
    memory = enabled
    # For ESS:
    os = enabled
    avail = enabled
    unix = enabled
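
Once the add-ons are in place, a quick sanity check that the tags.conf entries are actually visible on the ITSI search head might be something like this (assuming a *nix install and that $SPLUNK_HOME points at your Splunk home):

    $SPLUNK_HOME/bin/splunk btool tags list --debug | grep -i oshost

If oshost shows up there, the tag=oshost searches behind the pre-defined KPIs should start returning results once the data is flowing.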

bandit
Motivator

Splunk, what's the point of having generic/sample templates if they don't work? The docs should at least recommend some basic add-ons to make the CPU or memory KPIs work.


thejeffreystone
Path Finder

My understanding is that to use those prebuilt KPIs you need to have those logs go through the Splunk-built TAs, which make sure everything is CIM compliant. For example, if you want your Apache web logs to be tagged correctly, then you should be using the Splunk Add-on for Apache to process those logs.

Just had a conversation with my SE about that this week.
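
A rough way to see whether a TA's tagging is actually taking effect, assuming for example the default vmstat sourcetype from the Splunk Add-on for Unix and Linux, is to look at which eventtypes and tags get attached at search time:

    sourcetype=vmstat earliest=-60m
    | stats count values(eventtype) AS eventtypes values(tag) AS tags by host

If the tags column comes back empty, the search head isn't seeing the TA's eventtypes/tags, which would explain the empty KPIs.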


Jarohnimo
Builder

It still feels like I'm piecing it together, because I am. I wish there was published information on how to use these, because right now I'm guessing.

I don't use heavy forwarders, only universal forwarders, and to me this seems way more complicated than it needs to be. I feel we should be able to tag our servers (while setting up entities) so that ITSI knows which one is an app server, which is SQL, which is a VM host, etc.


thejeffreystone
Path Finder

Yeah. It has to do with Splunk wanting ITSI to use CIM-compliant data, and using the TAs to ensure it is.


Jarohnimo
Builder

I've seen the TA folders but didn't pay them much mind, as I never had to do anything with them. My setup has been pretty much all basic. How does one have the logs go through the Splunk-built TAs? And were you able to locate any documentation that explains all of this?

Before I read your response, I thought it was easy enough to go to Settings > Tags > List by tag name and manually create the tags there, e.g. oshost=*

Or can that be used as a workaround if the other way is a pain?


thejeffreystone
Path Finder

Most of the TAs are just field extractions and tags. If you are using a heavy forwarder in your setup, just install the TAs on the heavy forwarder; as long as your applications are using the default source paths or log names/formats, the heavy forwarder will process them. You will probably also need the TAs on your search head so it is aware of the eventtypes, if there are any, plus the props and transforms if you are not using a heavy forwarder.

You could do it the manual way, but I would dissect the TA and see everything it is doing. I suspect you will need to create more than just tags; the TA will have a props.conf and transforms.conf that tell you what it would do, and then you could manually recreate it.

I am by no means the expert in this, but I'm going through the same thing. My problem is that everyone has heavily customized the app logs, so the TAs won't help and we are having to build custom KPIs.
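
If you do go the manual route, a minimal sketch of what recreating one piece might look like is an eventtype plus its tags on the search head; the sourcetype and tag names below are illustrative only, so check the TA's own eventtypes.conf and tags.conf for the real values:

    # eventtypes.conf (illustrative only)
    [cpu]
    search = sourcetype=cpu

    # tags.conf (illustrative only)
    [eventtype=cpu]
    oshost = enabled
    performance = enabled
    cpu = enabled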

