All Posts

Unfortunately not, no.
I notice the regexes are using double quotes ("), but the event uses single quotes ('). That will prevent a match.
Hi @AL3Z, let me understand: do you want to filter your logs to send these events to the nullQueue, or do you want to delete part of these events? In the first case, follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.1.2/Forwarding/Routeandfilterdatad using this regex:

\<Event xmlns\=\'http:\/\/schemas\.microsoft\.com\/win\/\d+\/\d+\/events\/event\'>

If you can also share the events you want to keep, I could be more confident about the regex. Ciao. Giuseppe
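For reference, routing matching events to the nullQueue is done with a props.conf/transforms.conf pair on the indexers or a heavy forwarder. A minimal sketch, assuming the events arrive under a sourcetype named XmlWinEventLog (adjust to your actual sourcetype; the regex is the one above with the unnecessary escapes removed):

# props.conf
[XmlWinEventLog]
TRANSFORMS-null = drop_xml_events

# transforms.conf
[drop_xml_events]
REGEX = <Event xmlns='http://schemas\.microsoft\.com/win/\d+/\d+/events/event'>
DEST_KEY = queue
FORMAT = nullQueue

Note that REGEX only needs to match somewhere in the raw event, so it doesn't have to cover the whole XML document.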
Hi there, were you able to resolve this issue? If yes, please post your workaround, as I am also facing the same issue.
It is certainly worth looking into.
Hi Splunkers, I would like to calculate the duration of an event as a percentage of the day. I have data in a database that is being extracted; one of the fields is duration: DURATION="01:00:00". As this is already in human-readable format, I thought I would convert it to epoch to sum it, and I got the returned value 06:00:00. So far so good, or so I thought, but looking at the percentages things were not quite right. So I included the epoch in the results and it showed me this: 20412540000 (Wed Nov 06 2616 06:00:00 GMT+0000)

| eval DURATION=strptime(DURATION,"%H:%M:%S")
| stats sum(DURATION) as event_duration by NAME
| eventstats sum(event_duration) as total_time
| eval percentage_time=(event_duration/total_time)*100
| eval event_duration1=strftime(event_duration,"%H:%M:%S")
| eval total_time1=strftime(total_time,"%H:%M:%S")
| eval av_time_hrs=(event_duration1/total_time1)

Based on the data, is it possible to get a percentage?
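One issue above is that strptime(DURATION,"%H:%M:%S") anchors the parsed time to today's date, so the search ends up summing full epoch timestamps rather than durations, which is where the year-2616 value comes from. A minimal sketch of an alternative, converting HH:MM:SS into plain seconds first (the field names DURATION and NAME are taken from the question):

| eval parts=split(DURATION,":")
| eval dur_secs=tonumber(mvindex(parts,0))*3600 + tonumber(mvindex(parts,1))*60 + tonumber(mvindex(parts,2))
| stats sum(dur_secs) as event_duration by NAME
| eval percentage_of_day=round(event_duration/86400*100,2)

With durations held as seconds, a percentage of the day is just seconds divided by 86400.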
Hi, is there someone here who can create an XML regular expression for these events to prevent them from being ingested into Splunk?

1. Sample Event:

<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{XXXXX}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><xxx>0</Opcode><Keywords>xxxxx</Keywords><TimeCreated SystemTime='2023-11-27'/><EventRecordID>151284011</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='8768'/><Channel>Security</Channel><Computer>XXX.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>xxx\SYSTEM</Data><Data Name='SubjectUserName'>XXX$</Data><Data Name='SubjectDomainName'>EC</Data><Data Name='SubjectLogonId'>xxx</Data><Data Name='NewProcessId'>0x3878</Data><Data Name='NewProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\Patch\tools\TaniumExecWrapper.exe</Data><Data Name='TokenElevationType'>%%xxxx</Data><Data Name='ProcessId'>xxxx</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>NULL SID</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>xxx</Data><Data Name='ParentProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe</Data><Data Name='MandatoryLabel'>Mandatory Label\System Mandatory Level</Data></EventData></Event>

THANKS
@ITWhisperer Can I check those details in the _audit index?
When did the report change? What search does the current report use? What search did the report use prior to 6th November?
Numeric values are right-aligned; string values are left-aligned. It looks like some of your zeros may include whitespace, making them strings and therefore left-aligned.
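If you want the column treated consistently as numbers, one hedged option is to normalize the field in SPL before the table renders; "count" here is a hypothetical column name, replace it with yours:

| eval count=tonumber(trim(count))

trim() strips the stray whitespace and tonumber() makes the value numeric, so all cells in the column align the same way.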
@ITWhisperer No, only one report is triggering the events.
| makeresults format=csv data="Status
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED
FILE_DELIVERED
FILE_NOT_DELIVERED
FILE_NOT_DELIVERED"
| head 5
| eval {Status}=Status
| fields - Status
| stats values(*) as *
| eval Status=coalesce(FILE_DELIVERED, FILE_NOT_DELIVERED)
| fields Status
Yes, I think changing the retention to 60 days, or maybe even longer, is the best solution for this. Let's hope they manage to fix the "Past 60 day" dashboard in the future too, for convenience.
Hi Splunk Enthusiasts, I have created a table using Splunk Dashboard Studio. One of its columns contains results like 0 and some other numbers. 0 is displayed on the left, whereas all other values are aligned right. Can you please help me make them all align left? TIA!! PFB screenshot for your reference.
Hi, basically it's like @gcusello said. You should manage your internet access with firewalls etc., not with Splunk. Anyhow, there are some conf files where you could manage e.g. sending statistics, telemetry, checking app/Splunk versions, etc. But those, too, should be restricted/denied by your FW, not by Splunk itself. r. Ismo
I suppose (I haven't seen this particular add-on) it might contain search-time settings as well. Add-ons often need to be installed on several tiers at the same time, since they may contain search-time extractions, which take effect on the search head tier, as well as index-time settings (like sourcetype definitions for timestamp extraction and event breaking), which take effect on the indexer tier or HF.
If you want to retroactively find periods of downtime, you need to compare the output of the uptime.sh script and check when the uptime value dropped instead of increasing. You could also check periodically for a lack of events from the host to find possible outages, but remember that this might be caused by something completely different from a reboot of the server, for example a UF crash. While Splunk can do some form of monitoring based on the logs it receives, it's not a proactive monitoring solution like Zabbix or Nagios.
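As a rough sketch of the retroactive approach, assuming the uptime.sh output is indexed with a numeric field called uptime_secs and a host field (the index, sourcetype, and field names are all assumptions to adapt to your data):

index=os sourcetype=uptime
| sort 0 _time
| streamstats current=f last(uptime_secs) as prev_uptime by host
| where uptime_secs < prev_uptime

Each surviving event marks a point where the reported uptime dropped, i.e. a likely reboot shortly before that event's timestamp.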
OK. Don't use the _json sourcetype. It's there so that in a poorly configured environment data is at least partially processed correctly, but it shouldn't be used in a production scenario. You should define your own sourcetype. As you're probably not using indexed extractions (and you generally shouldn't use them), you need to set proper timestamp extraction settings in your config, along with the other settings from the so-called "great 8": https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types

Finding the latest/oldest event (or any other ordered first/last event) can be done, for example, with the head or tail command (optionally sorting the data first; remember that by default Splunk returns events in reverse chronological order, newest first, so sorting might not always be necessary).
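As a hedged illustration, a custom JSON sourcetype covering the usual "great 8" settings might look like this in props.conf; the stanza name and the timestamp field ("timestamp", ISO 8601 with milliseconds and timezone) are assumptions to adapt to your actual data:

[my_custom_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

With timestamps parsed correctly, appending | head 1 to a search returns the latest event and | tail 1 the oldest, given Splunk's default reverse chronological order.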
Brilliant. My requirement is that the output should contain the FILE_DELIVERED status for head 4 and head 5 as well, since we received a FILE_DELIVERED status at head 3. In other words, as soon as we see FILE_DELIVERED, subsequent runs should always include the FILE_DELIVERED line ONLY (they should NOT include FILE_NOT_DELIVERED from the previous or current run), so the alert won't be missed. The output should continue stating FILE_NOT_DELIVERED ONLY when no occurrence of FILE_DELIVERED is found.
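A minimal sketch of that rule on top of the earlier search, using mvfind to test whether FILE_DELIVERED occurred at all in the window (the Status field name follows the example above):

| stats values(Status) as statuses
| eval Status=if(isnotnull(mvfind(statuses, "FILE_DELIVERED")), "FILE_DELIVERED", "FILE_NOT_DELIVERED")
| fields Status

If any run in the window produced FILE_DELIVERED, only that value is emitted; otherwise the result stays FILE_NOT_DELIVERED.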
I assume by that you mean there are two extra reports adding to the summary index? So, what else in your environment changed (which may have impacted the summary index)?