All Apps and Add-ons

Entities not displayed in Splunk App for Infrastructure.

Path Finder

I have set up a Universal Forwarder (UF) from the script on Machine 2, but the UF is not added on Splunk Enterprise (Machine 1).
I then manually added the deployment server; in that case the UF does appear on Splunk Enterprise, but the entity is not displayed in Splunk App for Infrastructure, even after waiting more than 5 minutes.

I followed the link below to install SAI on Splunk Enterprise:
https://docs.splunk.com/Documentation/InfraApp/2.0.1/Install/Install

1 Solution

Splunk Employee

Does splunkd.log on the UF say anything about whether the data is successfully being sent to Machine 1?
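As a sketch of that check (assuming the UF install path seen in the logs below, /data/splunkforwarder), grepping for TcpOutputProc lines shows whether the forwarder actually connected:

```shell
# Sketch of the splunkd.log check on the UF (Machine 2).
# Assumption: UF installed under /data/splunkforwarder, as in the logs below.
LOG=/data/splunkforwarder/var/log/splunk/splunkd.log
# A healthy forwarder logs an INFO TcpOutputProc "Connected to idx=..." line;
# blocked or failed sends show up as WARN/ERROR TcpOutputProc lines instead.
sample='01-07-2020 05:35:13.024 -0500 INFO TcpOutputProc - Connected to idx=192.168.1.15:9997, pset=0, reuse=0.'
echo "$sample" | grep -c 'Connected to idx'
# On the real machine you would run:
# grep TcpOutputProc "$LOG" | tail -20
```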


Path Finder

In the UF logs on Machine 2 I'm getting a message that it's connected to Machine 1, but when I visit the Forwarder Management tab, it's not displayed there.

For reference, here are the last few lines of the UF logs after starting the UF:

01-07-2020 05:35:13.024 -0500 INFO TcpOutputProc - Connected to idx=192.168.1.15:9997, pset=0, reuse=0.
01-07-2020 05:35:13.029 -0500 INFO WatchedFile - Will begin reading at offset=13776943 for file='/data/splunkforwarder/var/log/splunk/metrics.log'.
01-07-2020 05:35:13.032 -0500 INFO WatchedFile - Will begin reading at offset=978 for file='/data/splunkforwarder/var/log/splunk/conf.log'.
01-07-2020 05:35:42.667 -0500 INFO ScheduledViewsReaper - Scheduled views reaper run complete. Reaped count=0 scheduled views


Splunk Employee

Is the data arriving at Machine 1? If you run either of these searches, do you see data?

index=_internal host=${Machine 2}

| mcatalog values(metric_name) WHERE host=${Machine 2} AND index=em_metrics

Path Finder

Yes, I am getting logs at Machine 1, but not the metrics.

I am getting output from this search:

index=_internal host=${Machine 2}

but no output from this one:

| mcatalog values(metric_name) WHERE host=${Machine 2} AND index=em_metrics


Splunk Employee

Check whether collectd is installed and running on the monitored Machine 2:

apt-cache policy collectd
ps -ef | grep collectd

Did you get any errors when you ran the script from the "Add Data" page?

New Member

Hello @dagarwal_splunk, I'm having the same issue. Is there an alternative for this command?

apt-cache policy collectd

since I can't install it on my UF host. It says "No package apt-cache available."

Regards,
Rockie
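That message suggests an RPM-based distro, where apt-cache does not exist. Assuming CentOS/RHEL or similar, the usual equivalent would be something like the sketch below (the process check works the same everywhere):

```shell
# Assumption: RPM-based distro (CentOS/RHEL), where apt-cache is not available.
if command -v rpm >/dev/null 2>&1; then
    rpm -q collectd || echo "collectd not installed"
else
    echo "rpm not available here"
fi
# The process check works on any distro; the [c] trick keeps the
# grep command itself out of the results:
ps -ef | grep '[c]ollectd' || echo "no collectd process running"
```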


Path Finder

Yes, collectd is installed and running; I verified with the two commands above.

Yes, I'm getting this error in collectd.log:

[2020-01-09 08:31:40] [error] processmon plugin: Error reading /proc/12820/stat
[2020-01-09 08:31:40] [notice] read-function of plugin `processmon' failed. Will suspend it for 120.000 seconds.


Splunk Employee

Ignore that error; it just means the process died while it was being monitored.

To debug, let's try some steps:

  1. Machine 2: Do you see any recurring errors like "curl_easy_perform failed" in collectd.log?
  2. Machine 1: Check that all the HEC tokens are enabled: Settings -> Data Inputs -> HTTP Event Collector.
  3. Machine 1: Check the Global Settings on the same page as step 2. Verify that "Enable SSL" is checked and note down the port number.
  4. Machine 1: Verify that the HEC token you are using has "em_metrics" as its default index.
  5. Machine 2: Check the /etc/collectd/collectd.conf file. Verify that the HEC token, server, and port number in the write_splunk stanza are correct.
  6. If it's still not solved, try sending fake data from Machine 2 to Machine 1 using curl and see if you get a success response. Here is the curl command to run on Machine 2:

curl -k https://Machine1:8088/services/collector -H "Authorization: Splunk hec_token_here" -d '{"time": 1486683865.000,"event":"metric","source":"disk","host":"host_99","fields":{"region":"us-west-1","datacenter":"us-west-1a","rack":"63","os":"Ubuntu16.10","arch":"x64","team":"LON","service":"6","service_version":"0","service_environment":"test","path":"/dev/sda1","fstype":"ext3","_value":1099511627776,"metric_name":"total"}}'

https://docs.splunk.com/Documentation/Splunk/7.3.3/Metrics/GetMetricsInOther#Example_of_sending_metr...
Update the token, port, and server in the command.
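For the collectd.conf check, a write_splunk stanza typically looks like the sketch below. This is an assumed example, not taken from this thread's config; the server and port match the values discussed above, and the token is a placeholder:

```
<Plugin write_splunk>
    server "192.168.1.15"
    port "8088"
    token "hec_token_here"
    ssl true
    verifyssl false
</Plugin>
```

The token, server, and port here must match the HEC settings on Machine 1.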

Path Finder
  1. Got the below error in collectd.log when I searched for "curl_easy_perform failed":

[error] write splunk plugin: curl_easy_perform failed to connect to 192.168.1.15:8088 with status 7: Couldn't connect to server

  2. Yes, I have already enabled all the HEC tokens.
  3. In the Global Settings, SSL is already enabled and the port number is 8088 (the default).
  4. The HEC token I am using has "em_metrics" as its default index.
  5. The HEC token, server IP, and port number in /etc/collectd/collectd.conf are correct.
  6. The given curl command returns the output below:

{"text":"Server is busy","code":9,"invalid-event-number":0}
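On the "status 7" error above: that is libcurl's CURLE_COULDNT_CONNECT, meaning the TCP connection to 192.168.1.15:8088 never opened (firewall, wrong port, or HEC not listening), as opposed to an HTTP-level response like "Server is busy". A sketch that reproduces exit code 7 locally, assuming nothing listens on port 1:

```shell
# libcurl status 7 = CURLE_COULDNT_CONNECT: the TCP connect itself failed.
# Reproduce against a closed local port (assumption: nothing listens on 127.0.0.1:1):
curl -s --connect-timeout 2 http://127.0.0.1:1/
echo "curl exit code: $?"
# The real probe from Machine 2 would be something like:
# curl -sk https://192.168.1.15:8088/services/collector/health
```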


Path Finder

Yeah, it's working now. I unchecked the "Use Deployment Server" option in Global Settings.
Thanks, dagarwal

Splunk Employee

The "Add Data" script installs both collectd (metrics) and the UF (logs) on a Linux machine.
Also check /etc/collectd/collectd.log for any errors.
For Machine 2, which Linux distro (e.g., CentOS, Ubuntu) and which version are you on?

Path Finder

I am using Ubuntu 18.04 LTS on both Machine 1 and Machine 2.

The file /etc/collectd/collectd.log doesn't exist on Machine 2 (the client machine).


Splunk Employee

The specific location of the collectd.log may vary by distro, but the information should be in the collectd.log on Machine 2.

Path Finder

I reinstalled it from the script and get the below error in collectd.log:

[2020-01-09 08:31:40] [error] processmon plugin: Error reading /proc/12820/stat
[2020-01-09 08:31:40] [notice] read-function of plugin `processmon' failed. Will suspend it for 120.000 seconds.
