Getting Data In

Hostname from GUID

grijhwani
Motivator

Occasionally I generate a report of our throughput per indexer over a period of time with the following search:

index=_internal source=*license_usage.log "type=Usage"
| convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate
| eval MB=b/1024/1024
| chart eval(round(sum(MB),0)) over i by ISODate limit=0
| addcoltotals labelfield=i

This produces a table of usage with the indexers tabulated by their GUID. Ideally I would like it to display their hostnames instead. The question is: other than manually creating a lookup table, is there any good, solid way of associating the GUID back to a hostname to tabulate against?

1 Solution

lguinn2
Legend

Second attempt

index=_internal source=*license_usage.log type=Usage
| convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate 
| eval MB=round(b/1024/1024,0) 
| rename i as guid
| chart sum(MB) over guid by ISODate limit=0 
| addcoltotals labelfield=guid
| join guid type=outer
    [ search index=_internal sourcetype=splunkd component=LMConfig serverName=* guid=* earliest=-60d
      | dedup guid | table guid serverName ]
| table serverName guid *

Notice how this changes the earlier version: now you will always get all of the license info. You will also get the serverName if the server has restarted in the last 60 days, regardless of the time range of the outer search.

Finally, you could set up a search, based on the subsearch here, that runs periodically and generates a CSV file of the guid and serverName. Then you could set up a lookup based on that CSV. You would not need to manually edit the CSV, but you would have to create the lookup.
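As a sketch of that idea, the subsearch could be saved as a scheduled search that writes the mapping to a CSV with outputlookup (the file name guid_to_host.csv is just an illustration, not something defined anywhere in this thread):

```
index=_internal sourcetype=splunkd component=LMConfig serverName=* guid=* earliest=-60d
| dedup guid
| table guid serverName
| outputlookup guid_to_host.csv
```

The main report could then use `| lookup guid_to_host.csv guid OUTPUT serverName` instead of the join, and the CSV would refresh itself on the search's schedule.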


Jason
Motivator

Provided the indexers in question are also search peers of this particular search head, you can run this REST-based query from any search head, not just the license master:

(chart changed to timechart for ease of use)

index=_internal source=*license_usage.log "type=Usage" 
| eval MB=b/1024/1024 
| join type=outer i
  [
    rest /services/search/distributed/peers 
    | rename guid as i 
    | rename peerName as indexer
    | table i indexer
  ]
| timechart sum(MB) by indexer


grijhwani
Motivator

Oh, I like that.


yannK
Splunk Employee

Here is a way to resolve the GUID using a REST endpoint call (this works only on the license master, of course):


index=_internal source=*license_usage.log* type=Usage
| convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate
| eval MB=round(b/1024/1024,0)
| rename i as guid
| chart sum(MB) over guid by ISODate limit=0
| addcoltotals labelfield=guid
| join type=outer guid
[| rest /services/licenser/slaves | dedup title | table title splunk_server | rename title AS guid ]


grijhwani
Motivator

I understand. But since the indexers don't get restarted, and the _internal index is only retained for 30 days, the static table is actually the better option; otherwise hosts would disappear. What you have shown me are new data sources I didn't know about, which is good, but they don't persist long enough to be reliable. It's the same problem I was having with deployment clients yesterday, only over a longer timescale.


lguinn2
Legend

Well, if you generate the CSV from a search, you could have that search run periodically to update the table. That would save you from having to manage the table manually.
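Concretely, assuming such a scheduled search has produced a CSV (the name guid_to_host.csv here is hypothetical), the original report could resolve hostnames with a lookup instead of a join:

```
index=_internal source=*license_usage.log type=Usage
| convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate
| eval MB=round(b/1024/1024,0)
| rename i as guid
| lookup guid_to_host.csv guid OUTPUT serverName
| chart sum(MB) over serverName by ISODate limit=0
| addcoltotals labelfield=serverName
```

Because the CSV accumulates mappings over successive runs, the report no longer depends on a restart event falling inside the search's own time range.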

grijhwani
Motivator

Thanks. I can see the difference, although it presumes that all of the indexers have been restarted within the given timeframe. I guess the manual table is the best option.


lguinn2
Legend

Try this

index=_internal sourcetype=splunkd component=LMConfig serverName=* guid=* 
| dedup guid 
| table guid serverName
| join guid 
    [ search index=_internal source=*license_usage.log type=Usage
    | convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate 
    | eval MB=round(b/1024/1024,0) 
    | rename i as guid
    | chart sum(MB) over guid by ISODate limit=0 
    | addcoltotals ]

grijhwani
Motivator

You've definitely given me the right direction.

Curiously, this only seems to give partial results. Although the indexers on a remote site are clearly reporting their license usage, their instances don't appear in the initial search. In fact, when run over the last 30 days, the first search only returns results for about half of our total Splunk deployment.

It also seems very sensitive to the time range: nothing at all appears for the last 7 days.

It appears that these events are only logged when a Splunk instance is restarted. Any further suggestions?


alterdego
Path Finder

Why not just use h instead of i in the search?

index=_internal source=*license_usage.log "type=Usage"
| convert timeformat="%Y-%m-%d (%a)" ctime(_time) as ISODate
| eval MB=b/1024/1024
| chart eval(round(sum(MB),0)) over h by ISODate limit=0
| addcoltotals labelfield=h
| rename h as host


grijhwani
Motivator

That doesn't produce valid results: the "h" in question is the source host, not the indexer.
