All Posts

Hi @AL3Z, are they Windows events? If yes, you can blacklist them; if not, you cannot blacklist them in inputs.conf. Then you have to check whether the regex I shared is correct or too broad; for this reason I asked you to also share the events you want to keep. Ciao. Giuseppe
Hi @Diab.Awada, I just got a hold of this info. The ingestion pipeline only supports trace ingestion and then derives the big 3 metrics (ART, CPM, and EPM) from the ingested traces.
Hi @gcusello, I want to exclude these events by blacklisting them in inputs.conf so that they stop being ingested into Splunk.
@ITWhisperer Below are screenshots in which you can see that from the 6th of November we are receiving 3 sources, whereas before that there was only one source.
Makeresults changed in version 9, allowing you to specify format and data. If you have a prior version, you need to set up the dummy data in a different way.

| makeresults
| eval _raw="Status
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED"
| multikv forceheader=1
| table Status
| head 5
| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status
summary is the default index for summaries, but you can collect to different indexes. I can't tell from your screenshot whether these are for the same index or not. Perhaps you should collect additional information about these sources, e.g. exactly when they updated, what other fields are in the summary events, etc.
Sure, thank you. I am trying to reach out to the add-on creator and trying a few things here. I will update here if I come up with something.
As @ITWhisperer said, you cannot use str*time functions to convert those correctly. Here is another example that converts the durations to seconds, calculates the avg and sum, and then converts the results back to durations. It does not handle durations greater than 23:59:59.

| makeresults
| eval duration="01:00:01,00:15:00,10:10:10,05:04:03"
| eval duration = split(duration,",")
| mvexpand duration
``` above creates test data ```
| eval d1 = split(duration,":"), d=tonumber(mvindex(d1,2)) + 60 * tonumber(mvindex(d1,1)) + 3600 * tonumber(mvindex(d1,0))
| stats sum(d) as tD1 avg(d) as aD1
| eval sum_duH = floor(tD1/3600), sum_duM = floor((tD1%3600) / 60), sum_duS = floor(tD1 % 3600 % 60)
| eval avg_duH = floor(aD1/3600), avg_duM = floor((aD1%3600) / 60), avg_duS = floor(aD1 % 3600 % 60)
| eval avg_D = printf("%02d:%02d:%02d", avg_duH, avg_duM, avg_duS)
| eval sum_D = printf("%02d:%02d:%02d", sum_duH, sum_duM, sum_duS)
| table avg_D sum_D

r. Ismo
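The arithmetic in the SPL above can be checked outside Splunk. Here is a minimal Python sketch of the same idea, using the sample durations from the query (the function names are ours, not part of SPL):

```python
def to_seconds(duration):
    """Convert an HH:MM:SS string to total seconds."""
    h, m, s = (int(part) for part in duration.split(":"))
    return h * 3600 + m * 60 + s

def to_hms(total):
    """Convert whole seconds back to an HH:MM:SS string (no day handling)."""
    return "%02d:%02d:%02d" % (total // 3600, (total % 3600) // 60, total % 60)

durations = ["01:00:01", "00:15:00", "10:10:10", "05:04:03"]
seconds = [to_seconds(d) for d in durations]

total = sum(seconds)                 # like stats sum(d)
average = total // len(seconds)      # like stats avg(d), floored as in the SPL

print(to_hms(total), to_hms(average))  # 16:29:14 04:07:18
```

Like the SPL, this breaks for sums or averages of 24 hours or more, since the hours field is not capped.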
Your solution worked. Thank you so much for your help
Hi! We use Splunk Stream 7.3.0. When an event in a log is longer than 1,000,000 characters, Splunk truncates it. The event is in JSON format. What settings should be applied in Splunk Stream so that Splunk parses the data correctly? Thanks!
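For reference, the usual knob for event truncation on the Splunk parsing tier is TRUNCATE in props.conf (default 10000 bytes); whether this is sufficient for a Splunk Stream deployment should be verified against the Stream documentation. A sketch, with a placeholder sourcetype name:

```ini
# props.conf on the indexer or heavy forwarder.
# TRUNCATE caps the bytes kept per event; 0 disables truncation.
# [stream:your_sourcetype] is a placeholder for the actual Stream sourcetype.
[stream:your_sourcetype]
TRUNCATE = 0
```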
Hello, we have to import a CSV file that always contains the same number of columns (and corresponding values), but the system that generates it sometimes changes the order of the header columns, like this:

File01.csv
field01,field02,field03

File02.csv
field03,field01,field02

Is there any way to ingest the file without using this set-up in props.conf?

INDEXED_EXTRACTIONS=csv

The reason is that with INDEXED_EXTRACTIONS Splunk adds those fields to the .tsidx files, and we would like to avoid that.

Thanks a lot, Edoardo
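To illustrate why header order is recoverable at all: as long as each file carries its own header row, a header-aware reader can key every value by column name, so the physical order stops mattering. A minimal Python sketch with hypothetical sample data mirroring the post:

```python
import csv
import io

# Two CSV files with the same columns in different header order
# (hypothetical sample values; the field names mirror the post).
file01 = "field01,field02,field03\na,b,c\n"
file02 = "field03,field01,field02\nc,a,b\n"

def read_rows(text):
    # DictReader keys each value by its header name, so the
    # physical column order in the file no longer matters.
    return list(csv.DictReader(io.StringIO(text)))

rows01 = read_rows(file01)
rows02 = read_rows(file02)
print(rows01[0]["field01"], rows02[0]["field01"])  # a a
```

This is the same property INDEXED_EXTRACTIONS relies on; the question of doing it at search time instead (e.g. with search-time field extraction) is what the thread is about.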
Thanks @ITWhisperer, it worked. I used | eval NewField=trim(OldField) to remove the whitespace.
strptime() and strftime() are for handling epoch date/times, which is why you are getting strange numbers. You might be better off doing something like this:

| rex field=DURATION "(?<hours>\d+):(?<minutes>\d+):(?<seconds>\d+)"
| eval DURATION=((hours*60)+minutes)*60+seconds
| stats sum(DURATION) as event_duration by NAME
| eventstats sum(event_duration) as total_time
| eval percentage_time=(event_duration/total_time)*100
| eval event_duration1=tostring(event_duration,"duration")
| eval total_time1=tostring(total_time,"duration")
| eval av_time_hrs=(event_duration/total_time)

Having said that, I am not sure what the final calculation is supposed to be showing.
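The per-NAME percentage logic in that SPL can be sketched in a few lines of Python (the event data and names below are hypothetical, invented for illustration):

```python
import re
from collections import defaultdict

# Hypothetical (NAME, DURATION) events mirroring the SPL above.
events = [("jobA", "00:30:00"), ("jobA", "00:30:00"), ("jobB", "01:00:00")]

duration_re = re.compile(r"(?P<hours>\d+):(?P<minutes>\d+):(?P<seconds>\d+)")

per_name = defaultdict(int)
for name, duration in events:
    m = duration_re.match(duration)
    secs = (int(m["hours"]) * 60 + int(m["minutes"])) * 60 + int(m["seconds"])
    per_name[name] += secs           # stats sum(DURATION) by NAME

total = sum(per_name.values())       # eventstats sum(event_duration)
percentage = {n: d / total * 100 for n, d in per_name.items()}
print(percentage)  # {'jobA': 50.0, 'jobB': 50.0}
```

The two jobs each account for one hour out of two, hence 50% each.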
Thank you - on what version of Splunk does your suggestion work, please? When I ran the query as-is, before modifying it for my data, I got no results at all for any value of head. We are on 8.2.11.2.
@ITWhisperer Let me understand correctly: if more than one source is generating the summary, does that mean more than one summary index? Or does it mean multiple "/var/spool*" source files were generated in the same time frame?
Unfortunately not, no.
I notice the regexes are using double quotes ("), but event uses single quotes (').  That will prevent a match.
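The quoting mismatch is easy to demonstrate outside Splunk. A short Python sketch (the sample event line is hypothetical, modeled on the thread):

```python
import re

# Hypothetical event header using single quotes, as in the thread.
event = "<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>"

# A pattern written with double quotes will not match the single-quoted text,
# while the single-quoted variant will.
double_quoted = re.compile(r'xmlns="http://[^"]+"')
single_quoted = re.compile(r"xmlns='http://[^']+'")

print(double_quoted.search(event))  # None -> no match
print(bool(single_quoted.search(event)))  # True
```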
Hi @AL3Z, let me understand: do you want to filter your logs to send these events to nullqueue, or do you want to delete part of these events? In the first case, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.2/Forwarding/Routeandfilterdatad using this regex:

\<Event xmlns\=\'http:\/\/schemas\.microsoft\.com\/win\/\d+\/\d+\/events\/event\'>

If you can also share the events to keep, I could be more sure about the regex. Ciao. Giuseppe
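As a quick sanity check, the proposed regex can be tested outside Splunk before deploying it. A Python sketch (the sample event header is hypothetical, since the actual events were not shared in the thread):

```python
import re

# Giuseppe's proposed filter regex, copied from the post as written.
pattern = re.compile(
    r"\<Event xmlns\=\'http:\/\/schemas\.microsoft\.com\/win\/\d+\/\d+\/events\/event\'>"
)

# Hypothetical sample event header (Windows event XML opening tag).
event = "<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>"

print(bool(pattern.search(event)))  # True
```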
Hi there, were you able to resolve this issue? If yes, please post your workaround, as I am also facing the same issue.
It is certainly worth looking into.