All Posts

That makes sense. I was able to find some errors in the Splunk _internal index. Do I just need to salt every file? How would I re-ingest those? Or why are those not ingesting when the other ones are?
Hi Community, I need to calculate the difference between two timestamps printed in the log4j logs of a Java application from 3 different searches; the timestamp is printed in the log after the "system time" keyword.

Logs for search-1:
2024-07-18 06:11:23.438 INFO [ traceid=8d8f1bad8549e6ac6d1c864cbcb1f706 spanid=cdb1bb734ab9eedc ] com.filler.filler.filler.MessageLoggerVisitor [TLOG4-Thread-1-7] Jul 18,2024 06:11:23 GMT|91032|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.visitor.MessageLoggerVisitor|-|PRD01032 - Processor (Ingress Processor tlog-node4) processed message with system time 1721283083437 batch id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 correlation-id (f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001) and body (

Logs for search-2:
DFM01081 - Batch having id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 on processor-name Egress Processor, transaction status commited by consumer

Logs for search-3:
2024-07-18 06:11:23.487 INFO [ traceid= spanid= ] com.filler.filler.filler.message.processor.RestPublisherProcessor [PRD-1] Jul 18,2024 06:11:23 GMT|91051|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.processor.RestPublisherProcessor|-|PRD01051 - Message with correlation-id f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001 successfully published at system time 1721283083487 to MCD

I am using the query below to calculate the time difference. I need to filter out the correlation ids in search-1 not matching the batch ids from search-2, and calculate the system-time difference for the matching correlation ids between search-1 and search-2 which also match search-3. The query below gives an empty systime_mcd; I need help getting this through:

sourcetype=log4j
| rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (?<batch_id>.+) correlation-id \((?<corrid>.+)\) and body"
| rex "DFM01081 \- Batch having id (?<batch_id_passed>.+) on processor-name Egress Processor\, transaction status commited by consumer"
| rex "com\.filler\.filler\.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
| stats first(systime_batch) as systime_batch values(systime_mcd) as systime_mcd values(corrid) as corrid by batch_id_passed
| mvexpand corrid
| eval diff = (systime_mcd - systime_batch)

@ITWhisperer can you please look into this as well; this is an extension of what you already helped with. Thanks in advance.
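A minimal sketch of one way to stitch this together, assuming each correlation id and batch id occurs once per transaction; the rex patterns here are simplified (they drop the long pipe-delimited prefixes) and the committed batch ids from search-2 are applied as a subsearch filter:

sourcetype=log4j "system time"
| rex "processed message with system time (?<systime_batch>\d+) batch id (?<batch_id_passed>\S+) correlation-id \((?<corrid>[^)]+)\)"
| rex "correlation-id (?<corrid_mcd>\S+) successfully published at system time (?<systime_mcd>\d+) to MCD"
| eval corrid=coalesce(corrid, corrid_mcd)
| stats first(systime_batch) as systime_batch first(systime_mcd) as systime_mcd first(batch_id_passed) as batch_id_passed by corrid
| search
    [ search sourcetype=log4j "transaction status commited by consumer"
      | rex "Batch having id (?<batch_id_passed>\S+) on processor-name Egress Processor"
      | fields batch_id_passed ]
| eval diff_ms = tonumber(systime_mcd) - tonumber(systime_batch)

The subsearch renders as an OR filter on batch_id_passed, so only correlation ids whose batch was committed survive, and diff_ms is the millisecond gap between ingress processing and publication to MCD. Note that in the search-3 sample the correlation-id is not wrapped in parentheses, so a rex that requires \( ... \) there will never populate systime_mcd.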
Something _has_ to read those files that you have already ingested, so it's kinda unbelievable that you only have this directory monitored. Are you running this on the machine which has the inputs defined? (Of course, if the inputs are ingested by a remote forwarder, you need to run those commands on the forwarder.)
Hi @gcusello, thank you for your valuable feedback; indeed I'm not an architect, but thanks for the pointers, I will go through them. I have one more question: if we are going with a new environment in Splunk, is it ideally best to go with the latest version?
For a pickle, that was a very fast response. But running those commands looks like it outputs the internal logs - all of the logs monitored at /export/opt/splunk. It doesn't really show anything other than those directories.
From that perspective, that makes so much sense. I've gotten what I wanted. Thanks @PickleRick and @richgalloway
When debugging monitor inputs it's very useful to look at the output of splunk list monitor and splunk list inputstatus.
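For reference, this is how they are typically run (default install path assumed; run them on the forwarder if that is where the inputs are defined):

$SPLUNK_HOME/bin/splunk list monitor      # every file/directory the monitor stanzas currently match
$SPLUNK_HOME/bin/splunk list inputstatus  # per-file reader status and read position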
As I wrote before - EXTRACT and REPORT are run at search time. TRANSFORMS (including INGEST_EVAL) are run at index time. You don't have search-time stuff at index time, so you don't have your "year" field when you're trying to run INGEST_EVAL.
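As a hedged sketch of the index-time alternative (stanza and field names here are assumptions), the year can be re-derived inside the eval itself, for example from the leading YYMMDD timestamp of the raw event, instead of referencing a search-time field:

# transforms.conf
[add_fyear]
# substr() works on _raw at index time; assumes the event starts with YYMMDD_HHMMSS
INGEST_EVAL = fyear="20".substr(_raw, 1, 2)

# props.conf
[wsjtx]
TRANSFORMS-fyear = add_fyear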
Apart from some specific use cases, this is impossible. First, ask yourself what you mean by "unused knowledge object". Let's assume you have an automatic lookup which translates code 0, 1, 2, or 3 to the values "critical/serious/moderate/benign". It's "used" only by users looking at it when browsing through the events. Do you consider such a KO used or not? You can use some techniques to find explicitly requested KOs in searches, but again only in some cases. In some (especially if parts of the searches are dynamically generated by means of aliases or map), you can't know before running the search what it will use.
So the other day, I was asked to ingest some data for Jenkins, and Splunk seems to have only ingested some of that data. I have this monitor installed on both the Production and Development remote instances:

[monitor:///var/lib/jenkins/jobs.../log]
recursive = true
index = azure
sourcetype = jenkins
disabled = 0

[monitor:///var/lib/jenkins/jobs]
index = jenkins
sourcetype = jenkins
disabled = 0
recursive = true

#[monitor:///var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14]
#index = testing
#sourcetype = jenkins
#recursive = true
#disabled = 0

Pretty much, I have most of the data ingested, but for whatever reason, I can't find any data for /var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14, or other random paths that we spot check. For that bottom commented-out input, I specify the entire path and I even added a salt so we could re-ingest it. It's commented out right now, but I have tried different iterations for that specific path.

It has and continues to ingest everything under /var/lib/jenkins/jobs, but I do not see some of the data. Based on this input, should I be doing something else? Could it be an issue with having the same sourcetype as the data that is funneled to the azure index? Is the syntax incorrect? I want to ingest EVERYTHING, including files within subdirectories, into Splunk. That's why I used recursive, but is that not enough?

Thanks for any help.
From the EXTRACT in props.conf:

EXTRACT-wsjtx = (?<year>\d{2})(?<month>\d{2})(?<day>\d{2})_(?<hour>\d{2})(?<min>\d{2})(?<sec>\d{2})\s+(?<freqMhz>\d+\.\d+)\s+(?<action>\w+)\s+(?<mode>\w+)\s+(?<rxDB>\d+|-\d+)\s+(?<timeOffset>-\d+\.\d+|\d+\.\d+)\s+(?<freqOffSet>\d+)\s+(?<remainder>.+)
OK. From the start. Your INGEST_EVAL looks like this:

INGEST_EVAL = fyear="20" . year

Right? Where does the "year" field come from?
I removed the $ signs from the field (I copied it from the web UI). I also used this as a guide, but still no go: https://docs.splunk.com/Documentation/Splunk/9.2.2/Data/IngestEval
OK. Cloud can be different here. My way works in an on-prem environment.
The first step in debugging such stuff is to run two commands: splunk list monitor and splunk list inputstatus. But as far as I remember, Splunk has problems with monitor inputs overlapping the same directories. You could instead just monitor the whole directory with a whitelist of all four types of files and then dynamically rewrite the sourcetype on ingest depending on the file path included in the source field - see the sketch below. But yes, it can cause issues with multiple significantly different sourcetypes (especially if they differ in timestamp format/placement).
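A minimal sketch of that approach, with hypothetical whitelist patterns and sourcetype names; the transform rewrites the sourcetype at parse time based on the source path:

# inputs.conf - one monitor; the whitelist is matched against the full path
[monitor:///var/lib/jenkins/jobs]
whitelist = (\.(log|xml|json)$|/log$)
index = jenkins
sourcetype = jenkins_generic

# props.conf
[jenkins_generic]
TRANSFORMS-set_st = set_jenkins_build_log

# transforms.conf - REGEX runs against the source metadata, not the event text
[set_jenkins_build_log]
SOURCE_KEY = MetaData:Source
REGEX = /builds/\d+/log$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::jenkins_build_log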
This is not a trivial task, since Splunk does not record when each KO is used. Some are easy to determine - scheduled searches, reports, and alerts, for example. You should be able to use the audit log to find uses of dashboards and unscheduled saved searches. Others, like macros, aliases, and tags, will be more challenging. It will require parsing every executed search (find them in _audit) and identifying the KOs in each, as in the sketch below. That will produce a list of *used* KOs. From that, you can derive a list of unused objects.
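As a starting point, a sketch for one KO type - macros, whose invocations are backtick-quoted in the search string (the rex and the REST comparison are assumptions about your setup):

index=_audit action=search info=granted search=*
| rex field=search max_match=20 "`(?<macro>[\w:]+)[`(]"
| mvexpand macro
| stats count dc(user) as users latest(_time) as last_used by macro

Comparing that list against the macros returned by | rest /servicesNS/-/-/configs/conf-macros leaves the candidates that were never invoked.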
@yuanliu So this section of the props.conf spec:

MAX_EVENTS = <integer>
* The maximum number of input lines to add to any event.
* Splunk software breaks after it reads the specified number of lines.
* Default: 256

takes precedence over the LINE_BREAKER?
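For illustration, a sketch of how the two settings typically interact (sourcetype name hypothetical): LINE_BREAKER splits the stream first, and MAX_EVENTS only caps the later line-merging stage, so it is consulted only when SHOULD_LINEMERGE is true.

# props.conf
[my_sourcetype]
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
SHOULD_LINEMERGE = false
# with SHOULD_LINEMERGE = false, each LINE_BREAKER segment is one event and MAX_EVENTS never applies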
@PickleRick we have a Cloud deployment and I see only two panels in ingest; I want data per SC4S host, not per Splunk server.
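If the built-in panels don't slice it that way, a common workaround is charting license usage by the event's host field; a sketch, assuming your SC4S events carry the originating device in h and that license_usage.log in _internal is searchable in your Cloud stack:

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as bytes by h
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB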
I had a similar problem and the answer is in line breaking. See "Why are REST API receivers/simple breaks input unexpectedly" in Getting Data In.
Version 9.2.2 was released on July 1, 2024. That means it's supported "fully" for 24 months from the release date and for 36 additional months at P3 level. https://www.splunk.com/en_us/legal/splunk-software-support-policy.html
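Worked out from those numbers: full support for 9.2.2 runs to July 1, 2026, and P3-level support for a further 36 months, to July 1, 2029 (assuming the clock starts on the release date, as the policy page describes).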