All Apps and Add-ons

HDFS capacity N/A

jackshine
Explorer

I installed Splunk to monitor and analyze Hadoop jobs. After I installed Splunk core and the Splunk Hadoop app on the JobTracker, and the forwarder and TA on the other nodes, the HDFS capacity and slot capacity show N/A in the Utilization section. Does anyone have an idea of possible causes?

Thank you

1 Solution

pierre4splunk
Splunk Employee

Have you enabled collection for Hadoop Metrics?

Each Hadoop daemon exposes rich runtime metrics that are useful both for monitoring and for ad-hoc exploration of cluster activity, job performance, and historical workload. The HDFS capacity and slot capacity gauges in the Utilization section depend on NameNode and JobTracker metrics, respectively.

The simplest way to collect Hadoop metrics is to:

  1. configure your Hadoop daemon(s) to dump Hadoop metrics to a named log file
  2. configure your Splunk forwarders to monitor the resulting output files

For more info about #1:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SplunkTAforHadoopOps#Hadoop_metrics
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourClouderaplatform
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourHortonworksplatform
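
As a concrete illustration of step 1, a metrics configuration along these lines directs the NameNode (dfs) and JobTracker (mapred) metrics contexts to local files using the file-based metrics context; the paths, period, and file names here are only examples, so follow the docs above for the exact settings for your distribution and Hadoop version:

  # hadoop-metrics.properties (illustrative values; adjust paths and period for your cluster)
  dfs.class=org.apache.hadoop.metrics.file.FileContext
  dfs.period=10
  dfs.fileName=/var/log/hadoop/metrics/dfs_metrics.log

  mapred.class=org.apache.hadoop.metrics.file.FileContext
  mapred.period=10
  mapred.fileName=/var/log/hadoop/metrics/mapred_metrics.log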

For #2, refer to the example inputs.conf stanzas for metrics:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SampleHadoopinputs.conffiles
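
For step 2, the forwarder side is a monitor stanza pointed at the metrics output directory. A minimal sketch follows; the path must match whatever you chose above, and the sourcetype and index values are placeholders, so take the actual values from the sample inputs.conf files linked above so the app's searches pick up the data:

  # inputs.conf on the forwarder (sourcetype/index are placeholders; see sample files above)
  [monitor:///var/log/hadoop/metrics/]
  sourcetype = hadoop_metrics
  index = hadoopmon_metrics
  disabled = false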

For more about Hadoop Metrics: http://blog.cloudera.com/blog/2009/03/hadoop-metrics/


jackshine
Explorer

The slots gauge is also working now. It turned out that it showed N/A because the 500 MB limit had already been exceeded yesterday.
Thank you


jackshine
Explorer

You are right, I forgot to modify the Hadoop metrics configuration file and inputs.conf. After I did that, the HDFS capacity shows up. However, the slot capacity still shows N/A.
