All Posts


Don't worry. Everyone starts somewhere. Glad I could help.
I don't know Zscaler logs, but this task can be tricky. The general approach seems relatively straightforward (search, group by URL and user with the stats command, and check whether you have both the vendor_signature and a successful request for the same URL/user pair), but it might not be that easy to execute: proxies are usually quite talkative, so over a longer timeframe the amount of returned data could overwhelm your search head.
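A minimal sketch of that stats approach (the index, sourcetype, and field names url, user, vendor_signature, action="allowed" are assumptions here; adjust them to your Zscaler data):

    index=zscaler sourcetype=zscalernss-web
    | stats count(eval(isnotnull(vendor_signature))) as sig_hits count(eval(action="allowed")) as allowed by url user
    | where sig_hits > 0 AND allowed > 0

Keep the timeframe short at first, for the volume reasons above.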
@ITWhisperer The number of events has tripled, and duplicate data has been ingested into Splunk.
"from November 6th onwards" begs the question, what changed in your environment on 6th November?
Yep. As @isoutamo said, only one LDAP strategy is effective at any given moment for a particular user. Different strategies are meant to be used when you have separate sources of authentication and authorization (for example, when your company has two separate divisions, each with its own AD environment). You don't want to configure separate strategies just to authenticate different groups from the same authentication source; that should be done by managing group (and thus associated role) membership.
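For illustration, a minimal sketch of that group-to-role mapping (the strategy name corpLDAP and the AD group names are hypothetical):

    # authentication.conf - map AD groups from a single strategy to Splunk roles
    [roleMap_corpLDAP]
    admin = Splunk-Admins
    user = Splunk-Users;Splunk-ReadOnly

This keeps one strategy per authentication source and drives access purely through group membership.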
Hi
There could be several reasons why indexers show high CPU%. It's really hard to make a correct guess without seeing what is happening via the Monitoring Console (MC). I assume you have MC in place? If you have, use it; if you haven't, it's time to set it up now. Under MC there are several places where you can look to narrow down which part the issue is in:
- ingesting: which pipeline, disk I/O, etc.
- searching: indexer vs. SH side
You also need a good understanding of what kind of environment you have before guessing where the issue could be. The best option would be to ask a Splunk partner/specialist or Splunk Professional Services to look at your environment.
r. Ismo
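If you want a quick look outside the MC dashboards, here is a sketch against the introspection data the MC itself uses (assuming _introspection is collected on the indexers; the host value is a placeholder):

    index=_introspection sourcetype=splunk_resource_usage component=PerProcess host=<your_indexer>
    | timechart span=5m avg(data.pct_cpu) by data.process_type

This can at least hint whether ingestion (splunkd) or search processes dominate the CPU.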
Hi
Probably you should take this eLearning: https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/app/me/learningeventdetail;spf-url=common%2Fledetail%2Fcours000000000003373%3Freturnurl%3Dcatalog%252Fsearch%253FselectedTab%253DLEARNINGEVENT%2526filter%253D%25257B%252522LEARNINGEVENTTYPEFACET%252522%253A%25257B%252522label%252522%253Anull%252C%252522values%252522%253A%252520%25255B%25257B%252522facetValueId%252522%253A%2525221%252522%252C%252522facetValueLabel%252522%253Anull%25257D%25255D%25257D%25257D
r. Ismo
There are essentially 3 states: 1) FILE_DELIVERED not present, 2) FILE_DELIVERED is the last status, and 3) FILE_DELIVERED present but not last. Taking an example of 5 events with the middle event being FILE_DELIVERED, and using head to change the situation, you can see that the search only returns results for the cases head 1 to head 3, with head 4 and head 5 having no results.
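For reference, one way to classify those three states in SPL (a sketch; the status and file fields are placeholders for whatever your events actually contain):

    ... | stats latest(status) as last_status count(eval(status="FILE_DELIVERED")) as delivered by file
    | eval state=case(delivered=0, "NOT_PRESENT", last_status="FILE_DELIVERED", "DELIVERED_LAST", true(), "DELIVERED_NOT_LAST")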
@isoutamo For now, a simple restart of splunkd has fixed my issue.
Hi
If those have any events, you could also try this:

    | metadata type=sourcetypes index=*

Then use the field lastTime, which tells you when the last event was sent for that sourcetype. Unfortunately this doesn't say from which index the event came, but you could probably add some other SPL after it. There are also several apps to help you with this issue. See these links; there are a lot of options for finding hosts or sources that stop submitting events:
- Meta Woot! https://splunkbase.splunk.com/app/2949/
- TrackMe https://splunkbase.splunk.com/app/4621/
- Broken Hosts App for Splunk https://splunkbase.splunk.com/app/3247/
- Alerts for Splunk Admins ("ForwarderLevel" alerts) https://splunkbase.splunk.com/app/3796/
- Monitoring Console https://docs.splunk.com/Documentation/Splunk/latest/DMC/Configureforwardermonitoring
- Deployment Server https://docs.splunk.com/Documentation/DepMon/latest/DeployDepMon/Troubleshootyourdeployment#Forwarder_warnings
Some helpful posts:
- https://lantern.splunk.com/hc/en-us/articles/360048503294-Hosts-logging-data-in-a-certain-timeframe
- https://www.duanewaddle.com/proving-a-negative/
r. Ismo
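For example, to make lastTime readable and flag sourcetypes that have gone quiet for more than a day (the one-day threshold is arbitrary):

    | metadata type=sourcetypes index=*
    | eval last_seen=strftime(lastTime, "%F %T")
    | where lastTime < relative_time(now(), "-1d")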
There are at least two major things you must remember:
- LDAP strategies are applied in ASCII order of their names.
- If an account/group is found in a strategy but doesn't have any access to resources (mappings), it is treated as having no access; Splunk doesn't try to find it in any other strategies.
The second one is something that could hit you quite easily and without any real warning.
Hello, I continue to receive the FILE_NOT_DELIVERED message, despite the output containing the FILE_DELIVERED status.
Requirement, for example: we expect the file to be delivered around 05:15 AM. Any alert before that should continue with FILE_NOT_DELIVERED, and once the SPL finds the file to be delivered, any alert after that should just output FILE_DELIVERED, removing/suppressing FILE_NOT_DELIVERED in the subsequent runs until 07:00 AM. If the file was NOT delivered, the alert should continue stating FILE_NOT_DELIVERED until 07:00 AM. By following this approach I believe we can ensure the status of the file delivery (regardless of the status) cannot be missed. I am not sure if the SPL you posted in your previous post will satisfy this requirement, but I will certainly check.
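A sketch of one way to express that rule, assuming the alert runs periodically between 05:15 and 07:00 and a hypothetical status field (this is not the SPL from the earlier post):

    ... | stats count(eval(status="FILE_DELIVERED")) as delivered
    | eval result=if(delivered > 0, "FILE_DELIVERED", "FILE_NOT_DELIVERED")
    | fields result

Once a FILE_DELIVERED event exists anywhere in the search window, every later run reports FILE_DELIVERED; otherwise the runs keep reporting FILE_NOT_DELIVERED until 07:00.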
In the below screenshot, we can see that from November 6th onwards there are three sources generated in Splunk, where there should be only one: "File Collector: DepTrayCaseQty". Splunk created two other, unnecessary sources, "D:\Splunk\var\spool\splunk\adb0f8d721bf93e3_events.stash_new" and "D:\Splunk\var\spool\splunk\d0d3783e41cf130c_events.stash_new", and because of these two extra sources, unwanted duplicate events were also generated. Please guide us on how to fix this issue. My assumption: is the collect command not working correctly? How can we prevent both of those sources from being ingested into Splunk?
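For what it's worth, collect lets you pin the source explicitly, which is one common way to avoid the auto-generated *_events.stash_new source names (a sketch; the index name is a placeholder):

    ... | collect index=my_summary source="File Collector: DepTrayCaseQty"

It is also worth checking whether a second input is monitoring D:\Splunk\var\spool\splunk and re-ingesting the spool files.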
What result did it produce, and why does it not meet your requirement?
Hello @gcusello ,
Thanks for your response. I manually uploaded my sample logs; the problem is that when I provide the regex for the event breaker, it breaks on every line, making each line an event. I've uploaded a screenshot of how the data is shown; you can see the attached file. You can also use the sample logs below:
LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023
@ID............ 202403.16
@ID............ 202403.16
PROTOCOL.ID.... 202403.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... DC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... QUIRY - AC.REPORT
@ID............ 202303.16
@ID............ 202303.16
PROTOCOL.ID.... 202303.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... AC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... ENQUIRY - AC.REPORT
By using this regex "^@ID[\s\S]*?REMARK.*$" I'm getting the correct result on regex101.
Hello there, unfortunately the previous input does not provide the desired result, unless I am lost - which could very well be. I have explained the requirement again in my previous post. Would it be possible to check my previous post and suggest, please?
Hi @mukhan1,
First of all, parsing is done on indexers or (when present) on heavy forwarders, never on universal forwarders; the only exception is INDEXED_EXTRACTIONS (e.g. csv), where parsing starts on the UF. Anyway, check whether an add-on already exists for the technology you want to parse. Then my hint is to take a sample of your logs and manually upload it into your Splunk (using the Add Data feature) so you can use the guided sourcetype creation.
Ciao.
Giuseppe
Hi @gcusello ,
I need your help to decode the XML regex. We have C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe in NewProcessName and ParentProcessName. I tried these regexes; the events are not excluded even after placing both combinations:
blacklist4 = $XmlRegex=%<Provider[^>]+Name="Microsoft-Windows-Security-Auditing"% $XmlRegex=%<EventID>4688<\/EventID>% $XmlRegex=%<Data Name="NewProcessName">(C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe)<\/Data>%
blacklist5 = $XmlRegex=%<Provider[^>]+Name="Microsoft-Windows-Security-Auditing"% $XmlRegex=%<EventID>4688<\/EventID>% $XmlRegex=%<Data Name="ParentProcessName">(C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe)<\/Data>%
Sample event:
<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'><System><Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/><EventID>4688</EventID><Version>2</Version><Level>0</Level><Task>13312</Task><Opcode>0</Opcode><Keywords>0x8020000000000000</Keywords><TimeCreated SystemTime='2023-11-27T11:53:55.027269400Z'/><EventRecordID>151278170</EventRecordID><Correlation/><Execution ProcessID='4' ThreadID='10472'/><Channel>Security</Channel><Computer>xyz.com</Computer><Security/></System><EventData><Data Name='SubjectUserSid'>NT AUTHORITY\SYSTEM</Data><Data Name='SubjectUserName'>Admin$</Data><Data Name='SubjectDomainName'>EC</Data><Data Name='SubjectLogonId'>0x3e7</Data><Data Name='NewProcessId'>0x36ec</Data><Data Name='NewProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe</Data><Data Name='TokenElevationType'>%%1936</Data><Data Name='ProcessId'>0x2888</Data><Data Name='CommandLine'></Data><Data Name='TargetUserSid'>NULL SID</Data><Data Name='TargetUserName'>-</Data><Data Name='TargetDomainName'>-</Data><Data Name='TargetLogonId'>0x0</Data><Data Name='ParentProcessName'>C:\Program Files (x86)\Tanium\Tanium Client\TaniumClient.exe</Data><Data Name='MandatoryLabel'>Mandatory Label\System Mandatory Level</Data></EventData></Event>
Thanks.
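One thing worth checking, as a hedged guess: in the sample event the XML attribute values use single quotes (Name='NewProcessName'), while the blacklist regexes expect double quotes, so they may never match. A sketch of the NewProcessName stanza with the quoting aligned to the sample (verify against your rendered events):

    blacklist4 = $XmlRegex=%<EventID>4688</EventID>% $XmlRegex=%<Data Name='NewProcessName'>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe</Data>%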
Hello Team,
I was trying to parse my data by updating props.conf. I created this file in C:\Program Files\SplunkUniversalForwarder\etc\system\local\props.conf:
[t24protocollog]
DATETIME_CONFIG = CURRENT
SHOULD_LINEMERGE = false
LINE_BREAKER = ^@ID[\s\S]*?REMARK.*$
NO_BINARY_CHECK = true
disabled = false
With the above configuration, the regex separates each line into its own event instead of defining the start and end of an event. I checked this regex on regex101.com and it gives me the proper result there, but not in Splunk. Attaching a screenshot of how my logs show in the Splunk GUI. Below is the content of the log file:
--------------------------------------
LIST F.PROTOCOL @ID PROTOCOL.ID PROCESS.DATE TIME.MSECS K.USER APPLICATION LEVEL.FUNCTION ID REMARK PAGE 1 11:34:02 23 NOV 2023
@ID............ 202403.16
@ID............ 202403.16
PROTOCOL.ID.... 202403.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... DC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... QUIRY - AC.REPORT
@ID............ 202303.16
@ID............ 202303.16
PROTOCOL.ID.... 202303.16
PROCESS.DATE... 20230926
TIME.MSECS..... 11:15:23:649
K.USER......... INPUTTER
APPLICATION.... AC.ENTRY
LEVEL.FUNCTION. 1
ID.............
REMARK......... ENQUIRY - AC.REPORT
Kindly let me know if I'm doing anything wrong; I need to parse the log file.
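A hedged sketch of one possible fix: LINE_BREAKER is not an event-matching regex like on regex101; Splunk breaks events at the regex's first capturing group, which should capture the newlines between events. Also, since this data comes from a Universal Forwarder, line-breaking props belong on the indexer or heavy forwarder, not on the UF (see the parsing note earlier in this thread). Assuming each record ends with its REMARK line and the next starts with @ID:

    # props.conf on the indexer / heavy forwarder - a sketch, not a verified config
    [t24protocollog]
    SHOULD_LINEMERGE = false
    # break after each REMARK line; the captured newlines mark the boundary before the next @ID
    LINE_BREAKER = REMARK[^\r\n]*([\r\n]+)@ID
    DATETIME_CONFIG = CURRENT
    NO_BINARY_CHECK = true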
Greetings Team,
I am trying to implement the Java Agent on the webMethods Integration Server, following the steps here. The document referred to does not explain how to pass the argument into runtime.sh or server.sh. Has anyone done this yet? If so, please let me know which argument needs to be added to get this working effectively on a Windows 2019 server.
Regards, Shashwat
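In case it helps, a heavily hedged sketch of how a -javaagent flag is often injected for Integration Server on Windows; the file paths, property index, and agent location below are all assumptions, not taken from any webMethods or AppDynamics document:

    # custom_wrapper.conf (when IS runs as a Windows service via the Tanuki wrapper);
    # pick an unused wrapper.java.additional index - 1234 is hypothetical
    wrapper.java.additional.1234=-javaagent:C:\AppDynamics\javaagent\javaagent.jar

When starting IS from a script instead, the equivalent is appending the same -javaagent:... option to the JVM options variable that bin\server.bat (or server.sh) passes to Java; the exact variable name varies by IS version.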