Hi, can someone here help me create a regular expression for these XML events, to prevent them from being ingested into Splunk? 1. Sample Event: <Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{XXXXX}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>xxxxx</Keywords><TimeCreated SystemTime='2023-11-27'/><EventRecordID>151284011</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='8768'/><Channel>Security</Channel><Computer>XXX.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>xxx\SYSTEM</Data><Data Name='SubjectUserName'>XXX$</Data><Data Name='SubjectDomainName'>EC</Data><Data Name='SubjectLogonId'>xxx</Data><Data Name='NewProcessId'>0x3878</Data><Data Name='NewProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\Patch\tools\TaniumExecWrapper.exe</Data><Data Name='TokenElevationType'>%%xxxx</Data><Data Name='ProcessId'>xxxx</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>NULL SID</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>xxx</Data><Data Name='ParentProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe</Data><Data Name='MandatoryLabel'>Mandatory Label\System Mandatory Level</Data></EventData></Event> Thanks!
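A common way to drop events like this before indexing is a nullQueue transform on the heavy forwarder or indexer. A minimal sketch, assuming the events arrive with the XmlWinEventLog sourcetype (the stanza names and the regex are assumptions to adapt to your environment):

```
# props.conf (sourcetype name is an assumption - check what yours actually is)
[XmlWinEventLog]
TRANSFORMS-drop_tanium = drop_tanium_4688

# transforms.conf
[drop_tanium_4688]
REGEX = <EventID>4688</EventID>.*(?:TaniumExecWrapper|TaniumClient)\.exe
DEST_KEY = queue
FORMAT = nullQueue
```

Note that discarded events still count against license usage only if they reach the indexing pipeline on a non-forwarding tier; filtering as early as possible (e.g. on the HF) is generally preferred.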
Numeric values are right-aligned, string values are left-aligned. It looks like some of your zeros may include whitespace, making them strings and therefore left-aligned.
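If stray whitespace is the cause, you can usually coerce the field back to a number in SPL before it reaches the table (the field name `count` here is a placeholder for your actual column):

```
| eval count = tonumber(trim(count))
```

`trim` strips the surrounding whitespace and `tonumber` makes the value numeric again, so all rows in the column should then align the same way.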
Yes, I think changing the retention to 60 days, or maybe even longer, is the best solution for this. Let's hope they manage to fix the "Past 60 days" dashboard in the future too, for convenience.
Hi Splunk Enthusiasts, I have created a table using Splunk Dashboard Studio. One of its columns contains results like 0 and some other numbers. The 0 is displayed on the left, whereas all other values are aligned right. Can you please help me make them all align left? TIA !! PFB screenshot for your reference.
Hi, basically it's like @gcusello said. You should manage your internet access with firewalls etc., not with Splunk. Anyhow, there are some conf files where you could manage e.g. sending statistics, telemetry, checking app/Splunk versions etc. But even those should be restricted/denied by your FW, not by Splunk itself. r. Ismo
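As one example, the app/version update check can be turned off in web.conf. This is a sketch for on-prem Splunk Enterprise; verify the setting name against the docs for your version:

```
# web.conf - disable the periodic version/app update checker
[settings]
updateCheckerBaseURL = 0
```

Telemetry/instrumentation has its own opt-out in the product settings, but as noted above, the firewall should still be the actual enforcement point.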
I suppose (I haven't seen this particular Add-On) it might contain search-time settings as well. Often add-ons should be installed on several tiers at the same time, since they might contain search-time extractions, which are effective at the SH tier, as well as index-time settings (like sourcetype definitions for timestamp extraction and event breaking), which are effective at the indexer tier or HF.
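To illustrate the split, a single props.conf stanza can mix both kinds of settings (everything here is a placeholder, not taken from the Add-On in question):

```
# props.conf - illustrative only
[my_custom_sourcetype]
# index-time settings: effective on the indexer or HF
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# search-time settings: effective on the search head
EXTRACT-status = status=(?<status>\w+)
```

This is why deploying the same add-on to both the indexing tier and the search heads is often the safe default.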
If you want to retroactively find periods of downtime, you need to compare the output of the uptime.sh script and check when the uptime value dropped instead of increasing. You could also check periodically for a lack of events from the host to find possible outages, but remember that this might be caused by something completely different than a reboot of the server - for example a UF crash. While Splunk can do some form of monitoring based on the logs it gets, it's not a proactive monitoring solution in the vein of Zabbix or Nagios.
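A sketch of the uptime-drop comparison, assuming the script produces a numeric `uptime` field in seconds (index, sourcetype, and field names are placeholders):

```
index=os sourcetype=uptime
| sort 0 _time
| streamstats current=f last(uptime) as prev_uptime by host
| where uptime < prev_uptime
```

Each remaining event marks a point where the counter reset, i.e. a probable reboot of that host.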
OK. Don't use the _json sourcetype. It's there so that in a poorly configured environment data is at least partially processed correctly, but in a production scenario it shouldn't be used. You should define your own sourcetype. As you're probably not using indexed extractions (and you generally shouldn't use them), you need to set proper timestamp extraction settings in your config, along with the other settings from the so-called "great 8". https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types Finding the latest/oldest event (or any other ordered-first/last event) can be done, for example, with the head or tail command (optionally sorting the data first; remember that by default Splunk returns events in reverse chronological order - newest first - so sorting might not always be necessary).
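A minimal custom-sourcetype sketch along the "great 8" lines; the stanza name and the timestamp format are assumptions you'd adapt to your actual JSON:

```
# props.conf - placeholder values, adjust TIME_PREFIX/TIME_FORMAT to your data
[my_app_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
KV_MODE = json
```

For the first/last event part: with the default reverse-chronological order, `| head 1` gives the newest event, while `| sort 0 _time | head 1` gives the oldest.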
Brilliant. My requirement is that the output should contain the FILE_DELIVERED status for head 4 and head 5 as well, since we received a FILE_DELIVERED status for head 3. In other words, as soon as we see FILE_DELIVERED, the subsequent runs should always include the FILE_DELIVERED line ONLY (and should NOT include FILE_NOT_DELIVERED from the previous or current run), so the alert won't be missed. The output should keep stating FILE_NOT_DELIVERED ONLY when no occurrence of FILE_DELIVERED was found.
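One way to express that rule in SPL, assuming `file` and `status` fields exist (both names are placeholders for your actual fields):

```
| stats count(eval(status=="FILE_DELIVERED")) as delivered by file
| eval status = if(delivered > 0, "FILE_DELIVERED", "FILE_NOT_DELIVERED")
| table file status
```

Once any FILE_DELIVERED exists within the search window for a given file, only that line is emitted; otherwise the output falls back to FILE_NOT_DELIVERED.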
I assume by that you mean there are two extra reports adding to the summary index? So, what else in your environment changed (which may have impacted the summary index)?
I don't know Zscaler logs, but this task can be tricky. The general approach seems relatively straightforward: search, group by URL and user with the stats command, and check whether you have both the vendor_signature and a successful request for the same URL/user pair. It might not be that easy to execute, though, since proxies are usually quite talkative, so over a longer timeframe the amount of returned data could overwhelm your SH.
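A sketch of that grouping; the index, the `action` field, and its value are assumptions about the Zscaler sourcetype that you'd need to verify against your actual data:

```
index=zscaler earliest=-24h
| stats count(eval(isnotnull(vendor_signature))) as flagged
        count(eval(action=="Allowed")) as allowed
        by user url
| where flagged > 0 AND allowed > 0
```

Keeping the time range short (and narrowing with indexed fields in the base search) is the main defence against the data-volume problem mentioned above.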
Yep. As @isoutamo said - only one LDAP strategy is effective at any given moment for a particular user. So different strategies are meant to be used when you have separate sources of authentication and authorization (when, for example, your company has two separate divisions, each with its own AD environment). You don't want to configure separate strategies just to authenticate different groups from the same authentication source. That should be done by managing group (and thus associated role) membership.
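For the legitimate two-forest case, multiple strategies are listed together; a sketch (stanza names are placeholders, and each listed strategy needs its own stanza with host/bind settings):

```
# authentication.conf - two strategies for two separate AD environments
[authentication]
authType = LDAP
authSettings = division_a_AD, division_b_AD
```

Splunk tries the strategies in the listed order, which is why overlapping user populations across strategies cause confusing results.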
Hi. There could be several reasons why indexers have high CPU%. It's really hard to make a correct guess without seeing what is happening via the MC (Monitoring Console). I assume you have the MC in place? If you do, then use it; if you don't, it's time to set it up now. Under the MC there are several places where you can narrow down which part the issue is in: ingestion (which pipeline, disk I/O, etc.) and searching (indexer vs. SH side). You also need a good understanding of what kind of environment you have before making guesses about where the issue could be. The best option would be to ask a Splunk partner/specialist or Splunk Professional Services to look at your environment. r. Ismo
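For a quick look without the MC, the introspection data can be charted directly; this is a sketch, and the `data.*` field names are assumptions to check against your _introspection events:

```
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval cpu_pct = 'data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart avg(cpu_pct) by host
```

If the spikes line up with search load rather than ingestion, the scheduled-search activity in _audit is usually the next place to look.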