Monitoring Splunk

How to import CPU load from Check Point logs?

k909
Engager

Hi all,
we're trying to import archived logs from Check Point (Gaia).
One of the archives includes many hardware parameters.
Each file name carries a Unix timestamp (e.g. cpu_load.1480888800), 866 files each day.
cat cpu_load.1480888800

Average:     CPU   %user   %nice    %sys %iowait    %irq   %soft  %steal   %idle    intr/s
Average:     all    0.54    0.00    0.47    0.08    0.02    1.13    0.00   97.75  18997.50
Average:       0    4.00    0.00    2.90    3.00    0.10    3.20    0.00   86.80   1060.80
Average:       1    0.10    0.00    0.40    0.00    0.00    0.80    0.00   98.80      0.00
Average:      12    0.30    0.00    0.00    0.00    0.10    0.50    0.00   99.20    637.00
Average:      39    0.10    0.00    0.60    0.00    0.10    1.10    0.00   98.10   1267.90

How do I import the CPU load for each core?

0 Karma

mnatkin_splunk
Splunk Employee

Part of the beauty of managing the GAiA OS is that the vast majority of it is directly derived from Red Hat EL (depending on the version of GAiA, predominantly RHEL3 or RHEL5).

The output you're looking to ingest is from "mpstat", a standard *NIX utility.

The *NIX TA (available at https://splunkbase.splunk.com/app/833/ ) does the necessary work to make sense of this input type without reinventing the wheel. The sourcetype will be "cpu".
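
For example, a minimal inputs.conf stanza for picking the files up with that sourcetype might look like this (the monitor path and index are assumptions; point it at wherever you stage the cpu_load.* archives):

# Sketch only -- /opt/checkpoint_archive and index=os are placeholders for your environment.
[monitor:///opt/checkpoint_archive/cpu_load.*]
sourcetype = cpu
index = os
disabled = false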

0 Karma

richgalloway
SplunkTrust

Have you tried importing the file as a CSV with the field delimiter set to space?
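
For example, a props.conf sketch along these lines (the sourcetype name and field names are placeholders; the "Add Data" preview can generate something similar for you):

# Sketch only -- checkpoint_cpu_load and the field names are placeholders.
[checkpoint_cpu_load]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = space
FIELD_NAMES = label,cpu,pct_user,pct_nice,pct_sys,pct_iowait,pct_irq,pct_soft,pct_steal,pct_idle,intr_per_sec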

---
If this reply helps you, Karma would be appreciated.

k909
Engager

Hi,
Thanks for the answer, but a space delimiter doesn't help.
I tried changing props.conf:
[cpu_61k_csv]
FIELDS = Average,CPU,user,nice,sys,iowait,irq,soft,steal,idle,intr/s
DELIMS = " "

But the spacing isn't a single consistent delimiter; the amount of whitespace between columns depends on the values in each row.
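
(A search-time regex extraction is one way around the variable-width columns; the index, sourcetype, and field names in this sketch are only placeholders:)

index=os sourcetype=cpu_61k_csv "Average:"
| rex "^Average:\s+(?<cpu>\S+)\s+(?<pct_user>[\d.]+)\s+(?<pct_nice>[\d.]+)\s+(?<pct_sys>[\d.]+)\s+(?<pct_iowait>[\d.]+)\s+(?<pct_irq>[\d.]+)\s+(?<pct_soft>[\d.]+)\s+(?<pct_steal>[\d.]+)\s+(?<pct_idle>[\d.]+)\s+(?<intr_per_sec>[\d.]+)"
| where isnotnull(pct_user) AND cpu!="all"
| table _time cpu pct_user pct_sys pct_iowait pct_idle intr_per_sec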

0 Karma

richgalloway
SplunkTrust

From your edited question it appears the delimiter is tab rather than space. Try using \t as the DELIMS value.
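
Note that DELIMS and FIELDS live in transforms.conf, with props.conf referencing the transform, so the pair would look roughly like this (keeping your cpu_61k_csv sourcetype; the transform name extract_cpu_load is a placeholder, and intr/s is renamed to keep the field name simple):

# transforms.conf -- sketch only; "extract_cpu_load" is a placeholder name.
[extract_cpu_load]
DELIMS = "\t"
FIELDS = Average,CPU,user,nice,sys,iowait,irq,soft,steal,idle,intr_per_sec

# props.conf
[cpu_61k_csv]
REPORT-cpu_load = extract_cpu_load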

---
If this reply helps you, Karma would be appreciated.
0 Karma