I had this same problem, so I thought I would share. In my case I was dealing with a clustered environment. I looked at splunkd.log and saw a bunch of messages like "network unreachable" and "could not connect to peer". It turned out splunkd was down. I restarted splunkd on the cluster manager (CM) and it reconnected. Hope that helps someone in the future.
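For anyone triaging the same symptom, here is a minimal sketch of a search that surfaces those connectivity errors from the internal index (the message text to match is an assumption; adjust it to whatever your splunkd.log actually shows):

index=_internal sourcetype=splunkd log_level=ERROR ("unreachable" OR "could not connect")
| stats count by host, component
| sort - count

If splunkd itself is down on a node, its events simply stop arriving in _internal, so a sudden gap from one host is itself a useful signal.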
@gcusello, yes, they're Windows events...
The JS is implemented through an external file, colorFormat.js. I have overwritten that file, but some of my users are still getting the old file run instead of the new one, which in my mind doesn't make sense: how would Splunk still have that code, since I have overwritten it?
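An aside for readers hitting the same symptom (a hedged note, not confirmed as this poster's root cause): Splunk Web caches static assets such as dashboard JS, so an old colorFormat.js is typically served from a cache rather than re-read from disk. Assuming the default Splunk Web port, visiting the cache-bust endpoint on the search head forces clients to fetch fresh copies of static files:

http://<splunk-web-host>:8000/en-US/_bump

Clearing the affected users' browser caches or restarting Splunk Web has a similar effect.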
11/06/2023 23:57:02 +1100, info_min_time=1699189200.000, info_max_time=1700571600.000, info_search_time=1700625838.094, foo=3, Mixed=0, CaseQty=64, OrderId=52128969634, TrayQty=35, Location="DEP/AutoDep03", Dimension=2, TrayError=3, OrientationError=1, ProtrusionError=0, CaseTypeId=6210, WidthError=2, reporttype=DepTrayCaseQty, OffCentreError=0, HeightError=0, LengthError=0, PalletLayers=4
OrderId = 52128969634
host = MSRDC-BPI
source = D:\Splunk\var\spool\splunk\d0d3783e41cf130c_events.stash_new
sourcetype = stash
=====================================================================
11/06/2023 23:57:02 +1100, search_name="File Collector: DepTrayCaseQty", search_now=1699279200.000, info_min_time=1699189200.000, info_max_time=1699275600.000, info_search_time=1699279202.226, foo=2, Mixed=0, CaseQty=29, OrderId=52128969634, TrayQty=17, Location="DEP/AutoDep03", Dimension=2, TrayError=2, OrientationError=0, ProtrusionError=0, CaseTypeId=6210, WidthError=2, reporttype=DepTrayCaseQty, OffCentreError=0, HeightError=0, LengthError=0, PalletLayers=4
OrderId = 52128969634
host = MSRDC-BPI
source = File Collector: DepTrayCaseQty
sourcetype = stash
=====================================================================
11/06/2023 23:57:02 +1100, info_min_time=1699189200.000, info_max_time=1700398800.000, info_search_time=1700618994.511, foo=3, Mixed=0, CaseQty=64, OrderId=52128969634, TrayQty=35, Location="DEP/AutoDep03", Dimension=2, TrayError=3, OrientationError=1, ProtrusionError=0, CaseTypeId=6210, WidthError=2, reporttype=DepTrayCaseQty, OffCentreError=0, HeightError=0, LengthError=0, PalletLayers=4
OrderId = 52128969634
host = MSRDC-BPI
source = D:\Splunk\var\spool\splunk\adb0f8d721bf93e3_events.stash_new
sourcetype = stash
Rather than pasting pictures, please paste 3 "duplicated" raw events into a code block </>
Hi @AL3Z, are they Windows events? If yes, you can blacklist them; if not, you cannot blacklist them in inputs.conf. Then you have to check whether the regex I shared is correct or too broad; for this reason I asked you to also share the events that should not be discarded. Ciao. Giuseppe
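For readers who want the mechanics Giuseppe is describing, here is a minimal sketch of a Windows event log blacklist in inputs.conf on the forwarder (the channel and EventCodes are placeholders, not taken from this thread; restart the forwarder after the change):

[WinEventLog://Security]
disabled = 0
# blacklist1..blacklist9 each take key="regex" pairs that are ANDed together;
# an event matching any blacklistN line is dropped before ingestion.
blacklist1 = EventCode="4662"
blacklist2 = EventCode="4688" Message="splunkd\.exe"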
Hi @Diab.Awada, I just got hold of this info. The ingestion pipeline only supports trace ingestion, and it then derives the big three metrics (ART, CPM, and EPM) from the ingested traces.
Hi @gcusello, I want to exclude these events by blacklisting them in inputs.conf so that they stop being ingested into Splunk.
@ITWhisperer Below is a screenshot in which you can see that from the 6th of November we are receiving 3 sources; before that, there was only one source.
Makeresults changed in version 9, allowing you to specify format and data. If you have a prior version, you need to set up the dummy data in a different way.

| makeresults
| eval _raw="Status
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED"
| multikv forceheader=1
| table Status
| head 5
| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status
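For completeness, a sketch of the version-9 form mentioned above (assuming Splunk 9.0+), which builds the same dummy data directly:

| makeresults format=csv data="Status
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED"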
summary is the default index for summaries, but you can collect to different indexes. I can't tell from your screenshot whether these are for the same index or not. Perhaps you should collect additional information about these sources, e.g. exactly when they were updated, what other fields are in the summary events, etc.
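As a starting point, a minimal sketch for profiling what is landing in the summary index (the index name and time range are assumptions):

index=summary earliest=-30d
| stats count, min(_time) as first_seen, max(_time) as last_seen by source, sourcetype
| convert ctime(first_seen) ctime(last_seen)

This shows exactly when each source first and last wrote summary events, which should make it clearer whether the three sources overlap.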
Sure, thank you. I am trying to reach out to the add-on creator and trying a few things here. Will update here if I come up with something.
As @ITWhisperer said, you cannot use str*time functions to convert those correctly. Here is another example that converts them correctly, calculates avg and sum, and then converts those back to durations. Note that this does not handle durations greater than 23:59:59.

| makeresults
| eval duration="01:00:01,00:15:00,10:10:10,05:04:03"
| eval duration = split(duration,",")
| mvexpand duration
``` above creates test data ```
| eval d1 = split(duration,":"), d = tonumber(mvindex(d1,2)) + 60 * tonumber(mvindex(d1,1)) + 3600 * tonumber(mvindex(d1,0))
| stats sum(d) as tD1 avg(d) as aD1
| eval sum_duH = floor(tD1/3600), sum_duM = floor((tD1%3600) / 60), sum_duS = floor(tD1 % 3600 % 60)
| eval avg_duH = floor(aD1/3600), avg_duM = floor((aD1%3600) / 60), avg_duS = floor(aD1 % 3600 % 60)
| eval avg_D = printf("%02d:%02d:%02d", avg_duH, avg_duM, avg_duS)
| eval sum_D = printf("%02d:%02d:%02d", sum_duH, sum_duM, sum_duS)
| table avg_D sum_D

r. Ismo
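If totals beyond 23:59:59 do matter, a hedged alternative for the final formatting step is Splunk's built-in duration rendering, which carries a day component (e.g. 1+02:03:04) instead of overflowing:

| eval sum_D = tostring(tD1, "duration"), avg_D = tostring(aD1, "duration")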
Your solution worked. Thank you so much for your help.
Hi! We use Splunk Stream 7.3.0. When an event longer than 1,000,000 characters arrives in a log, Splunk truncates it. The event is in JSON format. Please tell me what settings should be applied in Splunk Stream so that Splunk parses the data correctly. Thanks!
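A hedged pointer for readers, not confirmed as the fix for Splunk Stream specifically: if the cut happens at index-time parsing, the usual knob is TRUNCATE in props.conf on the parsing tier (the sourcetype name below is a placeholder):

[my_stream_json]
# Default is 10000 bytes; raise it above the largest expected event,
# or set 0 to disable truncation entirely (use with care).
TRUNCATE = 1500000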
Hello, we have to import a csv file that always contains the same number of columns (and corresponding values), but the system that generates it sometimes changes the order of the header columns, like this:

File01.csv
field01,field02,field03

File02.csv
field03,field01,field02

Is there any way to ingest the file without using this set-up in props.conf?

INDEXED_EXTRACTIONS=csv

The reason is that with INDEXED_EXTRACTIONS Splunk adds those fields to the .tsidx files, and we would like to avoid that.

Thanks a lot,
Edoardo
Thanks @ITWhisperer, it worked. I used | eval NewField=trim(OldField) to remove the whitespace.
strptime() and strftime() are for handling epoch date/times, which is why you are getting strange numbers. You might be better off doing something like this:

| rex field=DURATION "(?<hours>\d+):(?<minutes>\d+):(?<seconds>\d+)"
| eval DURATION=((hours*60)+minutes)*60+seconds
| stats sum(DURATION) as event_duration by NAME
| eventstats sum(event_duration) as total_time
| eval percentage_time=(event_duration/total_time)*100
| eval event_duration1=tostring(event_duration,"duration")
| eval total_time1=tostring(total_time,"duration")
| eval av_time_hrs=(event_duration/total_time)

Having said that, I am not sure what the final calculation is supposed to be showing.
Thank you - on what version of Splunk does your suggestion work, please? Before modifying mine, I ran your query and got no results at all for any value of head. We are on 8.2.11.2.
@ITWhisperer Let me check I understand correctly: does more than one source generating events mean more than one summary index? And what does it mean that multiple "/var/spool*" source files are generated in the same time frame?