Splunk Cloud Platform

I need to optimize the query below.

Praz_123
Communicator

It is using too much memory and resources. What should I add or remove in the following query?


index=_internal source="*/var/log/splunk/health.log" node_path="splunkd"
| eval component=case(
match(host, "sh"), "Search Head",
match(host, "ix"), "Indexer",
match(host, "hf"), "Heavy Forwarder",
match(host, "if"), "Intermediate Forwarder",
match(host, "uf"), "Universal Forwarder",
match(host, "ds"), "Deployment Server",
match(host, "cm"), "Cluster Master",
match(host, "dp"), "Deployer",
match(host, "mc"), "Monitoring Console",
true(), null()
)
| stats latest(color) as status by host, component
| eval "RAG Status"=status
| rename host as "Host", component as "Check Items"
| table Date, "Check Items", Host, "RAG Status", Comments
| sort "Check Items", Host
| append
[ search index=_internal source="*license_usage.log" earliest=@d latest=now
| stats latest(b) AS b by slave, pool
| eval DailyGB=round(b/1024/1024/1024, 2)
| stats sum(DailyGB) AS Total_License_Usage_GB
| eval "RAG Status"=case(Total_License_Usage_GB > 7000, "red", Total_License_Usage_GB > 6000, "yellow", Total_License_Usage_GB <= 6000, "green")
| eval Host="lm"
| eval "Check Items"="License Master"
| eval Date=strftime(now(), "%d %B'%y, %I.%M %p %Z") ]
| eval email_time = strftime(now(),"%d/%m/%Y %H:%M:%S")
| table "Check Items", Host, "RAG Status", Total_License_Usage_GB


richgalloway
SplunkTrust

Instead of using append, split the query into two separate searches. Each will run faster on its own.
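
For example, the license-usage part currently wrapped in the append subsearch can run on its own as a second search. This is only a sketch that reuses the fields, thresholds, and time format from your original query:

index=_internal source="*license_usage.log" earliest=@d latest=now
| stats latest(b) AS b by slave, pool
| eval DailyGB=round(b/1024/1024/1024, 2)
| stats sum(DailyGB) AS Total_License_Usage_GB
| eval "RAG Status"=case(Total_License_Usage_GB > 7000, "red", Total_License_Usage_GB > 6000, "yellow", Total_License_Usage_GB <= 6000, "green")
| eval Host="lm"
| eval "Check Items"="License Master"
| eval Date=strftime(now(), "%d %B'%y, %I.%M %p %Z")
| table "Check Items", Host, "RAG Status", Total_License_Usage_GB, Date

The health-status search is then just the first half of your query, with the append clause and everything after it removed.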

The first table command is probably showing empty Date and Comments columns. That's because those fields don't exist after the stats command.
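
If you want those columns populated, create the fields after the stats command or carry them through it. For example (a sketch; the strftime format string is copied from your license subsearch, and Comments is just created empty here on the assumption it is filled in manually):

| stats latest(color) as status latest(_time) as last_time by host, component
| eval Date=strftime(last_time, "%d %B'%y, %I.%M %p %Z")
| eval Comments=""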

---
If this reply helps you, Karma would be appreciated.