Splunk Search

How to make search efficient with spath and regex

Path Finder

How can I make this search efficient?

earliest=-1m source="/var/log/aws/opsworks/opsworks-agent.statistics.log"  host="*prod*" Reported statistics data
| dedup host
| rex field=_raw "Reported statistics data: (?<json>.*)\N"
| fields json, host
| spath input=json
| rename stats.memory.free as memFree, stats.memory.total as memtotal
| eval memFreePer=memFree/memtotal*100
| table host, memFreePer, stats.cpu.idle
1 Solution

Esteemed Legend

There is no greater efficiency to be had other than to explicitly specify an index; here is that, along with some other clarifying adjustments:

index="YouShouldAlwaysSpecifyAnIndex" AND source="/var/log/aws/opsworks/opsworks-agent.statistics.log" AND host="prod" AND Reported AND statistics AND data
| dedup host
| rex "Reported statistics data: (?<json>.*)\N"
| fields json host
| spath input=json
| rename stats.memory.free AS memFree, stats.memory.total AS memtotal
| eval memFreePer = 100 * memFree / memtotal
| table host, memFreePer, stats.cpu.idle


Path Finder

Thanks Jpolvino. Unfortunately, Splunk is administered by another group, so I'll have to raise a request for it.


Builder

Is it possible for stats.memory.free and stats.memory.total to be extracted as fields as the log is ingested? This would save you search-time overhead, at the cost of some disk space and ingest overhead.
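For illustration, an index-time extraction along these lines might look like the following sketch. The sourcetype name and regexes are hypothetical: they assume the two counters appear as "free": and "total": keys in the JSON payload, so confirm against your actual events before using them.

```
# props.conf — apply index-time transforms to this sourcetype
# (sourcetype name is hypothetical)
[opsworks:statistics]
TRANSFORMS-memstats = opsworks_mem_free, opsworks_mem_total

# transforms.conf — WRITE_META = true makes these indexed fields;
# FORMAT uses the field::value syntax required for indexed fields
[opsworks_mem_free]
REGEX = "free":\s*(\d+)
FORMAT = mem_free::$1
WRITE_META = true

[opsworks_mem_total]
REGEX = "total":\s*(\d+)
FORMAT = mem_total::$1
WRITE_META = true

# fields.conf — declare the fields as indexed so searches
# use them efficiently
[mem_free]
INDEXED = true

[mem_total]
INDEXED = true
```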


Path Finder

Thanks woodcock. Every one of our logs has index=main, which is why I chose to omit it. Apart from that, is the query all right?


Esteemed Legend

Yes, efficiency-wise there is nothing to do, but there are some best practices that provide additional clarity, as shown in my answer. Is ALL of your data in index=main, or just the data you need for this purpose? If the former, you should REALLY fix that. In any case, never run a search without specifying index= somewhere.


Path Finder

Yes, every application log goes to index=main. I have asked for a change, and hopefully it will happen soon.
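For reference, once a dedicated index exists, routing this log to it is usually a one-line change in the input stanza. A sketch, assuming the agent log is monitored directly (the index and sourcetype names here are hypothetical):

```
# inputs.conf sketch — the "opsworks" index must first be created
# by the Splunk admins (e.g. in indexes.conf)
[monitor:///var/log/aws/opsworks/opsworks-agent.statistics.log]
index = opsworks
sourcetype = opsworks:statistics
```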

Appreciate your response.


SplunkTrust

What makes you think it's inefficient? What does the Job Inspector say?
Have you tried replacing Reported statistics data with the quoted phrase "Reported statistics data:" in the base search?
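Applied to the original search, that suggestion might look like the sketch below (index=main is taken from the discussion above; adjust to your environment). Quoting makes Splunk match the exact phrase rather than three independent keywords. Note the trailing \N in the original rex is dropped here: \N matches one more non-newline character, which would strip the last character from the captured JSON.

```
earliest=-1m index=main source="/var/log/aws/opsworks/opsworks-agent.statistics.log"
    host="*prod*" "Reported statistics data:"
| dedup host
| rex "Reported statistics data: (?<json>.*)"
| fields json host
| spath input=json
| rename stats.memory.free AS memFree, stats.memory.total AS memtotal
| eval memFreePer = 100 * memFree / memtotal
| table host, memFreePer, stats.cpu.idle
```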

---
If this reply helps you, an upvote would be appreciated.

Path Finder

It's not too bad. I was just wondering if there was a better way to do this, since the search uses a regex, spath, and an eval.
