All Posts

I narrowed the issue down to an add-on and then updated to the latest version. This fixed the problem. Thanks for your help @gcusello and @PickleRick
Hi, I was wondering if there is a way I could blacklist the following event based on the event code and the Account Name under the Subject field. I want to blacklist events with code 4663 and a Subject Account Name of COMPUTER8-55$. What would the regex for that look like?

05/10/2024 01:05:35 PM
LogName=Sec
EventCode=4670
EventType=0
ComputerName=myComputer.net
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=10000000
Keywords=Audit Success
TaskCategory=Authorization Policy Change
OpCode=Info
Message=Permissions on an object were changed.

Subject:
  Security ID: S-0-20-35
  Account Name: COMPUTER8-55$
  Account Domain: myDomain
  Logon ID: 0x3E7

Object:
  Object Server: Security
  Object Type: Token
  Object Name: -
  Handle ID: 0x1718

Process:
  Process ID: 0x35c
  Process Name: C:\Windows\System32\svchost.exe
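A minimal sketch of what such a blacklist might look like in inputs.conf on the forwarder, assuming the standard WinEventLog Security input (the stanza name and the regex here are untested assumptions for your environment, not a verified config):

[WinEventLog://Security]
# Assumption: drop 4663 events whose Message names this machine account (regex untested)
blacklist1 = EventCode="4663" Message="Account Name:\s+COMPUTER8-55\$"

The key=regex form matches each regex against the named field of the event, so the Message regex only needs to match the Account Name line; note the literal $ in the account name has to be escaped.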
Hello Mikeydee, I have exactly the same issue in our Splunk environment. Do you have a solution for this yet? Regards, Tobias
Please understand that alerts *never* expire. They will continue to run until you disable or delete them. What *does* expire are the alert *results*. That is the data found by the query that ran to trigger (or not) the alert. That data is kept on the search head and is subject to disk space limits based on the role of the user running the alert. Without such limits, the search head risks running out of space to store more search results. IMO, there's very little need to preserve alert results beyond the standard 2p (twice the search's scheduled period). Perhaps 24 or 72 hours, but not 100 days.
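If you want to see what the lingering artifacts are actually costing you, a quick sketch against the jobs endpoint (the label value is a placeholder for your alert's name):

| rest /services/search/jobs
| search label="My Alert Name"
| table sid label ttl diskUsage

ttl shows how long each artifact will still be kept, and diskUsage what it currently occupies on the search head.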
That endpoint returns information about all saved searches in all apps.  See the REST API Reference Manual for an explanation of the data returned. Note that reports and alerts are both saved searches.  Reports are distinguished by the attribute alert_type=always, but there may be other indicators.
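For instance, a rough way to pull just the report-like entries from SPL (treat alert_type and alert.track as heuristics to compare against the UI, not a guaranteed rule):

| rest /servicesNS/-/-/saved/searches count=0
| where alert_type="always" AND 'alert.track'!="1"
| table title eai:acl.app is_scheduled

Comparing that list against the Reports page should reveal which attribute the UI is actually filtering on.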
Hi @Real_captain, sorry for the previous message: I forgot the search command! Anyway, please try this:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time file
| append
    [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | stats sum(eventcount) AS eventcount BY _time
    | eval file="count after PIDZJEA"
    | table file eventcount _time ]
| chart sum(eventcount) AS eventcount OVER _time BY file

Ciao.
Giuseppe
Hi @gcusello
I have corrected the search query, but the results are as below. Is it possible to have the records for each date on the same line?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append
    [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | chart sum(eventcount) AS "count after PIDZJEA" BY _time ]
Hi @Pallavi.Lohar, it looks like the community was not able to jump in and help. Did you happen to find a solution yourself that you could share here? If you still need help, you can contact AppDynamics Support: How do I submit a Support ticket? An FAQ
Hi @gcusello, I am not able to use the append command as you suggested. I am facing the error below:
Hello @Temuulen0303,
Thanks for taking the time to reply to my post! I checked, and it is only applicable to the "notable events by urgency" search; for the saved searches there is no option to choose the time range.
Also, for some reason, when I link the time picker to "notable events by urgency" and select a custom time, it does not apply. I checked the source code of that search, and the earliest and latest times in the query do come from the time picker I added.
@PickleRick Yes, rolling files every 15 minutes could produce hundreds of files, but my tests were executed with a very small number of files (10-20), and even with these Splunk doesn't monitor the newly created ones. I will check the commands you wrote and hope to find out what the problem is.
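In case it helps while you test: the monitor input's own view of each file can be dumped on the forwarder with the standard CLI (no extra flags needed):

$SPLUNK_HOME/bin/splunk list inputstatus

The output lists each monitored file with its read position and parsing status, which usually points at why new files are being skipped.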
Hi @Real_captain,
the only way is the append command with another transaction, but you'll have a very slow search:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append
    [ index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | chart sum(eventcount) AS "count after PIDZJEA" BY _time ]

Ciao.
Giuseppe
I have alerts configured to expire after 100 days and scheduled to execute the search query every 10 minutes. I can see the alert search jobs under "| rest /services/search/jobs" and they are using disk space. I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered-alert retention period?
Hi @ITWhisperer,
Yeah, I also feel the same. But if I take stats values of every data by the Identity, I am not able to get the desired results as I explained. Is there a better way? At the end I should have Identity, data, and status in a table as I described. I am finding it very hard to work out a logic for this.
Thanks! After some digging, we now see that the bug is probably caused by a notable event being too big. The error message is "events are not displayed in the search results because _raw fields exceed the limit". It seems this one oversized event broke the "Incident Review - Main" search, which also caused other incidents to fail to load. We are deleting the event and fixing the correlation search now, adding a fail-safe to avoid creating such big notable events in the future. Hopefully this fixes the issue!
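For anyone after the same kind of fail-safe, a sketch of the idea at the end of the correlation search, assuming a hypothetical oversized field called big_field (the name is made up for illustration):

| eval big_field=substr(big_field, 1, 10000)

Truncating the offending field before the notable is created keeps the notable event's _raw under the display limit.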
You probably need to use mvexpand on combi_fields, then split or parse it into separate fields, and use stats/eventstats to find the highest number (which number are you talking about?) for each "data" within each identity, taking the "status" from that event. Having said that, you might be better off going back a step or two, i.e. before the stats values(*) as * and whatever commands you used to combine the fields in the first place, as it seems you have just made it harder for yourself.
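A rough sketch of the first approach, assuming the fields are called identity and combi_fields and that the first number is the one that decides (the rex pattern is a guess from your sample, adjust to your real delimiter):

| mvexpand combi_fields
| rex field=combi_fields "^(?<data>.+?)\s*-\s*(?<num1>\d+)\s*-\s*(?<num2>\d+)\s*-\s*(?<status>\w+)"
| eval num1=tonumber(num1)
| eventstats max(num1) AS max_num BY identity data
| where num1=max_num
| table identity data status

mvexpand gives one row per combi_fields value, rex splits each row back into its parts, and eventstats/where keep only the row with the highest num1 per identity/data pair.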
So, does the search work without the lookup?
Hi everyone,
I have created a multivalued field, called combi_fields, from some of the other fields. I am showing those multivalued fields with | stats values(*) as * by identity, so now I have a table with Identity and combi_fields. Within combi_fields I want to check, for a given Identity, whether the first piece of data is the same across all the multivalued entries. For example:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
abcdefg - 113 - 110 - Passed - folder1 - folder2

In the above example the first piece of data is the same in every entry. If it is the same, I have to take the entry with the greatest number and give its status as output, like:

ABC abcdefg Passed

There might also be different data in the first place, like below:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
xyzabc - 113 - 110 - Passed - folder1 - folder2
xyzabc - 201 - 219 - Passed - folder1 - folder2

Here it should show:

ABC abcdefg Passed
ABC xyzabc Passed

How can I do this? How can I compare values within a field?
Hi, thanks for the update. But we cannot use the query without endswith, because without endswith it will give all the events of the day that were created after the PIDZJEA event.
1. Is it possible to use both startswith and endswith and still get the records of the current day?
2. Also, is it possible to get, for every day, the count of events which are generated after the PIDZJEA (endswith) event on the same day?

Expected result:

Current query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file

Result:
Hi all,
I'm trying to get all the saved searches in Splunk across all apps. Could someone explain what the endpoint servicesNS/-/-/saved/searches is and what data it returns?
For reference, I've tried to use that endpoint and match it against saved searches only (reports), not returning any alerts. But the data returned has a lot more than expected, as the number in the "Reports" tab under "All apps" is a lot smaller than the number returned from the REST call.
Any help or a link to docs would be appreciated.
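In case it helps, the endpoint is easy to poke at directly to see the raw payload, e.g. (host and credentials are placeholders):

curl -k -u admin:changeme "https://localhost:8089/servicesNS/-/-/saved/searches?output_mode=json&count=0"

count=0 removes the default paging limit on REST listings, which by itself can explain a mismatch with what a UI tab shows.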