
HDFS capacity N/A

jackshine
Explorer

I installed Splunk to monitor and analyze Hadoop jobs. After installing Splunk core and the Splunk Hadoop app on the JobTracker, and a forwarder with the TA on the other nodes, the HDFS capacity and slot capacity show N/A in the Utilization section. Does anyone have an idea of possible causes?

Thank you

1 Solution

pierre4splunk
Splunk Employee

Have you enabled collection for Hadoop Metrics?

Each Hadoop daemon exposes rich runtime metrics that are useful both for monitoring and for ad-hoc exploration of cluster activity, job performance, and historical workload. The HDFS capacity and slot capacity gauges in the Utilization section depend on NameNode and JobTracker metrics, respectively.

The simplest way to collect Hadoop metrics is to:

  1. configure your Hadoop daemon(s) to dump Hadoop metrics to a named log file (see the sketch after this list)
  2. configure your Splunk forwarders to monitor the resulting output files
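
For example, with the older metrics v1 framework, the Hadoop side of step 1 is a few lines in hadoop-metrics.properties. This is a minimal sketch, assuming metrics v1 and placeholder file paths; newer distributions use hadoop-metrics2.properties with a different sink syntax, so check the platform docs linked below:

  # hadoop-metrics.properties -- dump each metrics context to a file every 60s.
  # File paths are placeholders; pick per-daemon locations your forwarder can read.
  dfs.class=org.apache.hadoop.metrics.file.FileContext
  dfs.period=60
  dfs.fileName=/var/log/hadoop/metrics/dfs_metrics.log

  mapred.class=org.apache.hadoop.metrics.file.FileContext
  mapred.period=60
  mapred.fileName=/var/log/hadoop/metrics/mapred_metrics.log

The dfs context is emitted by the NameNode and the mapred context by the JobTracker, which is why these two files feed the HDFS and slot capacity gauges.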

For more info about #1:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SplunkTAforHadoopOps#Hadoop_metrics
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourClouderaplatform
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SetupyourHortonworksplatform

For #2, refer to the example inputs.conf stanzas for metrics:
http://docs.splunk.com/Documentation/HadoopOps/latest/HadoopOps/SampleHadoopinputs.conffiles
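
As a rough sketch of step 2 (the sourcetype and index values here are assumptions; match them to the sample files linked above), the forwarder-side stanza could look like:

  # inputs.conf on each forwarder -- monitor the metrics dump files from step 1.
  # sourcetype and index are placeholders; use the values from the sample
  # inputs.conf files linked above so the app's searches find the data.
  [monitor:///var/log/hadoop/metrics/*.log]
  sourcetype = hadoop_metrics
  index = hadoopmon_metrics
  disabled = 0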

For more about Hadoop Metrics: http://blog.cloudera.com/blog/2009/03/hadoop-metrics/



jackshine
Explorer

The slots gauge is also working now. It turned out that it showed N/A because the 500 MB limit had already been exceeded yesterday.
Thank you


jackshine
Explorer

You are right, I forgot to modify the hadoop-metrics configuration file and inputs.conf. After I did that, the HDFS capacity shows up. However, the slot capacity still shows N/A.
