All Posts

So will v9.2.2 of Splunk Universal Forwarder be updated to close future vulnerabilities between now and the end of Windows Server 2016 extended maintenance?  If so, how will Windows Server 2016 clients be notified of the alternate stream updates?
Wait a second. You're talking about a UF? And those props are where? On the UF or on the idx/HF? Do you use EVENT_BREAKER?
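For reference, EVENT_BREAKER is configured in props.conf on the universal forwarder itself. A minimal sketch, assuming a sourcetype whose events start with an ISO-style date (the sourcetype name and regex here are illustrative, not from this thread):

# props.conf on the UF - tells the UF where it may split the stream between events
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
# break before a line starting with a date like 2024-07-18;
# the first capture group marks where the split happens
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}

Without this, a UF forwards raw chunks and leaves event breaking entirely to the indexer or heavy forwarder.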
1. Please don't call out specific people. It's rude. If you demand someone's help, you typically pay for consulting services. Here people help in their own spare time out of good will.
2. When you post samples and SPL excerpts, please format them properly - in a code block or preformatted paragraphs (and use line breaking for SPL).
3. Did you verify that, before you do the stats, the fields you're aggregating are properly extracted?
4. stats values() can produce multivalued fields - trying to treat them as simple integers won't work.
5. As you're extracting fields from textual content, you might need to call tonumber() on them to get an integer which you can use to calculate a difference (see the sketch below).
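To illustrate point 5, a minimal SPL excerpt, assuming extracted fields systime_batch and systime_mcd that hold epoch-millisecond strings (field names borrowed from the question below):

| eval systime_batch = tonumber(systime_batch), systime_mcd = tonumber(systime_mcd)
| eval diff = systime_mcd - systime_batch

If either field is multivalued (for example, produced by stats values()), pick a single value first, e.g. with mvindex(systime_mcd, 0), before doing arithmetic on it.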
It's hard to say without knowing the actual files. But generally crcSalt is rarely used. Usually - when the files have relatively long common beginnings - it's better to increase the size of the header used for the CRC calculation.
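In inputs.conf that header size is controlled by initCrcLength. A minimal sketch, reusing the Jenkins monitor path from elsewhere in this thread:

[monitor:///var/lib/jenkins/jobs]
# default is 256 bytes; raise it if files share a long common prefix
initCrcLength = 2048

This makes Splunk hash a longer initial chunk of each file, so files that only start to differ after the first few hundred bytes are still recognized as distinct.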
It doesn't take precedence. It just limits how many lines are allowed in each event. Splunk has a good reason to use 256 as the default. I just wish they had named the property with better clarity :-) You mentioned that you had 1083 lines. Raise MAX_EVENTS to 2000 for this sourcetype and you should be good. (You made a very astute observation about the line count in your events from the very beginning. I wish I had had that insight so I wouldn't have been stuck for years.)
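Concretely, that's one line in props.conf on the instance doing the parsing (the sourcetype name here is a placeholder):

[my_multiline_sourcetype]
# despite the name, MAX_EVENTS is the maximum number of *lines* per event
MAX_EVENTS = 2000

When the limit is exceeded, the remaining lines get indexed as a separate event, which is usually how this problem first shows up.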
That makes sense. I was able to find some errors in the Splunk _internal index. Do I just need to salt every file? How would I re-ingest those, or why are those not ingesting, but the other ones are?
Hi Community, I need to calculate the difference between two timestamps printed in the log4j logs of a Java application from 3 different searches; the timestamp is printed in the log after the "system time" keyword.

Logs for search-1:

2024-07-18 06:11:23.438 INFO [ traceid=8d8f1bad8549e6ac6d1c864cbcb1f706 spanid=cdb1bb734ab9eedc ] com.filler.filler.filler.MessageLoggerVisitor [TLOG4-Thread-1-7] Jul 18,2024 06:11:23 GMT|91032|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.visitor.MessageLoggerVisitor|-|PRD01032 - Processor (Ingress Processor tlog-node4) processed message with system time 1721283083437 batch id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 correlation-id (f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001) and body (

Logs for search-2:

DFM01081 - Batch having id d6e50727-ffe7-4db3-83a9-351e59148be2-23-0001 on processor-name Egress Processor, transaction status commited by consumer

Logs for search-3:

2024-07-18 06:11:23.487 INFO [ traceid= spanid= ] com.filler.filler.filler.message.processor.RestPublisherProcessor [PRD-1] Jul 18,2024 06:11:23 GMT|91051|PRD|SYSTEM|test-01.Autodeploy-profiles-msgdeliver|10.12.163.65|-|-|-|-|com.filler.filler.filler.message.processor.RestPublisherProcessor|-|PRD01051 - Message with correlation-id f00d9f9e-7534-4190-99ad-ffeea14859e5-23-0001 successfully published at system time 1721283083487 to MCD

I am using the query below to calculate the time difference. I need to filter out the correlation ids in search-1 not matching the batch ids from search-2, and calculate the system-time difference for the matching correlation ids between search-1 and search-2 which also match with search-3. The query below gives an empty systime_mcd; need help in getting this through.

sourcetype=log4j
| rex "91032\|PRD\|SYSTEM\|test\-01\.Autodeploy\-profiles\-msgdeliver\|10\.12\.163\.65\|\-\|\-\|\-\|\-\|com\.filler\.filler\.filler\.message\.visitor\.MessageLoggerVisitor\|\-\|PRD01032 \- Processor (.*?) processed message with system time (?<systime_batch>.+) batch id (?<batch_id>.+) correlation-id \((?<corrid>.+)\) and body"
| rex "DFM01081 \- Batch having id (?<batch_id_passed>.+) on processor-name Egress Processor\, transaction status commited by consumer"
| rex "com\.filler\.filler\.filler\.message\.processor\.RestPublisherProcessor\|\-\|PRD01051 \- Message with correlation\-id \((?<corrid>.+)\) successfully published at system time (?<systime_mcd>.+) to MCD"
| stats first(systime_batch) as systime_batch values(systime_mcd) as systime_mcd values(corrid) as corrid by batch_id_passed
| mvexpand corrid
| eval diff = (systime_mcd - systime_batch)

@ITWhisperer can you please look into this as well, this is an extension of what you already helped with. Thanks in advance
Something _has_ to read those files that you have already ingested, so it's kinda unbelievable that you only have this directory monitored. Are you running this on the machine which has the inputs defined? (Of course, if the inputs are ingested by a remote forwarder, you need to run those commands on the forwarder.)
Hi @gcusello, Thank you for your valuable feedback; indeed I'm not an architect, but thanks for the pointers - I will go through them. I have one more question: if we are going with a new environment in Splunk, is it ideally best to go with the latest version?
For a pickle, that was a very fast response. But running those commands looks like it outputs the internal logs - all of the logs monitored at /export/opt/splunk. It doesn't really show anything other than those directories.
From that perspective, that makes so much sense. I've gotten what I wanted. Thanks @PickleRick and @richgalloway
When debugging monitor inputs it's very useful to look at the output of splunk list monitor and splunk list inputstatus.
As I wrote before - EXTRACT and REPORT are run at search time. TRANSFORMS (including INGEST_EVAL) is run at index time. You don't have search-time stuff at index time. So you don't have your "year" field when you're trying to run INGEST_EVAL.
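A minimal sketch of the distinction, assuming the wsjtx sourcetype from this thread, where (per the EXTRACT shown earlier) the two-digit year is the first two characters of the event:

# transforms.conf - INGEST_EVAL runs at index time and sees only _raw and
# index-time metadata, never search-time fields like the EXTRACT-ed "year"
[make_fyear]
INGEST_EVAL = fyear="20".substr(_raw,1,2)

# props.conf - attach the index-time transform to the sourcetype
[wsjtx]
TRANSFORMS-fyear = make_fyear

So anything an INGEST_EVAL needs has to be derived directly from _raw (or index-time metadata), not from an EXTRACT or REPORT.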
Apart from some specific use cases this is impossible. First ask yourself what you mean by "unused knowledge object". Let's assume you have an automatic lookup which translates code 0, 1, 2 or 3 to the values "critical/serious/moderate/benign". It's "used" only by users looking at it when browsing through the events. Do you consider such a KO used or not?

You can use some techniques to find explicitly requested KOs in searches, but also only in some cases. In some (especially if parts of the searches are dynamically generated by means of aliases or map) you can't know before running the search what it will use.
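One such technique - a rough heuristic, not an exhaustive audit - is to grep the search audit trail for a KO's name (my_lookup is a placeholder here):

index=_audit action=search info=granted search=*my_lookup*
| stats count by user, savedsearch_name

This only catches KOs whose names appear literally in the search string, which is exactly the limitation described above.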
So the other day I was asked to ingest some data for Jenkins, and Splunk has seemed to only ingest some of that data. I have this monitor installed on both the Production and Development remote instances:

[monitor:///var/lib/jenkins/jobs.../log]
recursive = true
index = azure
sourcetype = jenkins
disabled = 0

[monitor:///var/lib/jenkins/jobs]
index = jenkins
sourcetype = jenkins
disabled = 0
recursive = true

#[monitor:///var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14]
#index = testing
#sourcetype = jenkins
#recursive = true
#disabled = 0

Pretty much, I have most of the data ingested, but for whatever reason I can't find any data for /var/lib/jenkins/jobs/web-pipeline/branches/develop/builds/14, or other random paths that we spot check. For that bottom commented-out input, I specify the entire path and I even added a salt so we could re-ingest it. It's commented out right now, but I have tried different iterations for that specific path.

It has and continues to ingest everything under /var/lib/jenkins/jobs, but I do not see some of the data. Based on this input, should I be doing something else? Could it be an issue with having the same sourcetype as the data that is funneled to the azure index? Is the syntax incorrect? I want to ingest EVERYTHING, including files within subdirectories, into Splunk. That's why I used recursive, but is that not enough?

Thanks for any help.
From the EXTRACT in props.conf:

EXTRACT-wsjtx = (?<year>\d{2})(?<month>\d{2})(?<day>\d{2})_(?<hour>\d{2})(?<min>\d{2})(?<sec>\d{2})\s+(?<freqMhz>\d+\.\d+)\s+(?<action>\w+)\s+(?<mode>\w+)\s+(?<rxDB>\d+|-\d+)\s+(?<timeOffset>-\d+\.\d+|\d+\.\d+)\s+(?<freqOffSet>\d+)\s+(?<remainder>.+)
OK. From the start. Your INGEST_EVAL looks like this:

INGEST_EVAL = fyear="20" . year

Right? Where does the "year" field come from?
I removed the $ signs from the field (I copied from the web UI). I also used this as a guide but still no go.   https://docs.splunk.com/Documentation/Splunk/9.2.2/Data/IngestEval
OK. Cloud can be different here. My way works in an on-prem environment.
First step in debugging such stuff is to run two commands: splunk list monitor and splunk list inputstatus. But as far as I remember, Splunk has problems with monitor inputs overlapping the same directories. You could instead just monitor the whole directory with a whitelist of all four types of files and then dynamically rewrite the sourcetype on ingest depending on the file path included in the source field. But yes, it can cause issues with multiple significantly different sourcetypes (especially if they differ in timestamp format/placement).
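A minimal sketch of that source-based sourcetype rewrite, assuming a catch-all sourcetype jenkins_raw and a hypothetical path pattern for build logs (all names here are illustrative):

# inputs.conf - one monitor for the whole tree
[monitor:///var/lib/jenkins/jobs]
recursive = true
index = jenkins
sourcetype = jenkins_raw

# props.conf - attach the rewrite to the catch-all sourcetype
[jenkins_raw]
TRANSFORMS-set_st = set_jenkins_build_st

# transforms.conf - rewrite the sourcetype at index time based on the path
[set_jenkins_build_st]
SOURCE_KEY = MetaData:Source
REGEX = /builds/\d+/log$
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::jenkins_build_log

The caveat above still applies: every file funneled through jenkins_raw is parsed with that sourcetype's timestamp and line-breaking settings, which is where significantly different formats can bite.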