
Newly added monitor only ingests some of the data

Abass42
Communicator

So the other day I was asked to ingest some data for Jenkins, and Splunk seems to have ingested only some of that data.

I have these monitor stanzas installed on both the Production and Development remote instances:

[monitor:///var/lib/jenkins/jobs.../log]
recursive = true
index = azure
sourcetype = jenkins
disabled = 0


[monitor:///var/lib/jenkins/jobs]
index = jenkins
sourcetype = jenkins
disabled = 0
recursive = true


#[monitor:///var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14]
#index = testing
#sourcetype = jenkins
#recursive = true
#disabled = 0

Pretty much, I have most of the data ingested, but for whatever reason I can't find any data for /var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14, or other random paths that we spot check. For that bottom commented-out input, I specified the entire path and even added a salt so we could re-ingest it. It's commented out right now, but I have tried different iterations for that specific path.

[screenshot: Abass42_0-1722446868911.png]

It has ingested, and continues to ingest, everything under /var/lib/jenkins/jobs, but I do not see some of the data.

Based on this input, should I be doing something else? Could it be an issue with having the same sourcetype as the data that is funneled to the azure index? Is the syntax incorrect? I want to ingest EVERYTHING, including files within subdirectories, into Splunk. That's why I used recursive, but is that not enough?

Thanks for any help. 


PickleRick
SplunkTrust

When debugging monitor inputs, it's very useful to look at the output of

splunk list monitor

and

splunk list inputstatus
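
A minimal sketch of running those on the box that actually has the inputs defined, assuming a default Universal Forwarder install path of /opt/splunkforwarder (adjust to your environment):

cd /opt/splunkforwarder/bin
./splunk list monitor
./splunk list inputstatus

inputstatus reports, per watched file, how far the tailing processor has read it, which usually gives a hint as to whether a given build log was picked up at all.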

 


Abass42
Communicator

For a pickle, that was a very fast response. 

But running those commands looks like it only outputs the internal logs, i.e. everything monitored under /export/opt/splunk. It doesn't really show anything other than those directories.

[screenshot: Abass42_0-1722448176039.png]


PickleRick
SplunkTrust

Something _has_ to read those files that you have already ingested, so it's kinda unbelievable that you only have this directory monitored.

Are you running this on the machine which has the inputs defined? (Of course, if the inputs are ingested by a remote forwarder, you need to run those commands on the forwarder.)
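
You can also check what the forwarder is logging about those paths in the _internal index. A sketch of such a search (the tailing-related component names can vary a bit between versions):

index=_internal sourcetype=splunkd (component=TailReader OR component=TailingProcessor OR component=WatchedFile) web-pipeline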


Abass42
Communicator

That makes sense. I was able to find some errors in the Splunk _internal index.

[screenshot: Abass42_0-1722448932897.png]

Do I just need to salt every file? How would I re-ingest those, and why are those files not being ingested when the other ones are?

PickleRick
SplunkTrust

It's hard to say without knowing the actual files, but generally crcSalt is rarely needed. Usually, when the files have relatively long common beginning parts, it's better to increase the size of the header used for the CRC calculation.
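
As a sketch of what that could look like for the existing stanza (the initCrcLength value here is purely illustrative; the default is 256 bytes):

[monitor:///var/lib/jenkins/jobs]
index = jenkins
sourcetype = jenkins
disabled = 0
recursive = true
# hash a longer initial chunk of each file, so build logs that share the
# same first 256 bytes are still treated as distinct files
initCrcLength = 1024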
