All Apps and Add-ons

HDFS capacity N/A

jackshine
Explorer

I installed Splunk to monitor and analyze Hadoop jobs. After I installed Splunk core and the Splunk Hadoop app on the JobTracker, and a forwarder plus the TA on the other nodes, the HDFS capacity and slots capacity show N/A in the Utilization section. Does anyone have an idea of possible causes?

Thank you

1 Solution

pierre4splunk
Splunk Employee

Have you enabled collection for Hadoop Metrics?

Each Hadoop daemon exposes rich runtime metrics that are useful both for monitoring and for ad-hoc exploration of cluster activity, job performance, and historical workload. The HDFS capacity and slot capacity utilization gauges depend on NameNode and JobTracker metrics, respectively.

The simplest way to collect Hadoop metrics is to:

  1. configure your Hadoop daemon(s) to dump Hadoop metrics to a named log file
  2. configure your Splunk forwarders to monitor the resulting output files

For more info about #1:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SplunkTAforHadoopOps#Hadoop_metrics
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourClouderaplatform
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourHortonworksplatform
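
To illustrate #1, here is a minimal hadoop-metrics.properties sketch for the Hadoop 1.x metrics system. The file paths and the 10-second period are assumptions, so check the docs above for what your distribution expects:

    # hadoop-metrics.properties -- dump dfs (NameNode) and mapred (JobTracker)
    # metrics to local files via FileContext
    # (paths and the 10-second period are examples, not required values)
    dfs.class=org.apache.hadoop.metrics.file.FileContext
    dfs.period=10
    dfs.fileName=/var/log/hadoop/metrics/dfs_metrics.log
    mapred.class=org.apache.hadoop.metrics.file.FileContext
    mapred.period=10
    mapred.fileName=/var/log/hadoop/metrics/mapred_metrics.log

Restart the NameNode and JobTracker after editing the file so the daemons pick up the change.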

For #2, refer to the example inputs.conf stanzas for metrics:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SampleHadoopinputs.conffiles
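
For example, monitor stanzas along these lines (the paths, sourcetype, and index here are placeholders; use the values from the sample files linked above):

    # inputs.conf on each forwarder: monitor the metrics dump files
    # sourcetype/index below are placeholders, not the app's required values
    [monitor:///var/log/hadoop/metrics/dfs_metrics.log]
    sourcetype = hadoop_metrics
    index = hadoopops

    [monitor:///var/log/hadoop/metrics/mapred_metrics.log]
    sourcetype = hadoop_metrics
    index = hadoopops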

For more about Hadoop Metrics: http://blog.cloudera.com/blog/2009/03/hadoop-metrics/


jackshine
Explorer

The slots capacity is also working now. It turned out that the N/A showed because the 500 MB license limit had already been exceeded yesterday.
Thank you


jackshine
Explorer

You are right, I forgot to modify the Hadoop metrics configuration file and inputs.conf. After I did that, the HDFS capacity shows up. However, the slot capacity still shows N/A.
