All Posts


What other indicators would there be that distinguish it as a report only?   And also, how do you know that "alert_type=always" is an attribute that singles out reports? I can't find this info anywhere.
My question is what else should I put on there.
Hi @SplunkNinja, good for you, see you next time! Let us know if we can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
Time: 4/27/24 5:30:37.182 AM
Event:
{ "Client":"ClientA", "Msgtype":"WebService", "Priority":2, "Interactionid":"1DD6AA27-6517-4D62-84C1-C58CA124516C", "Seq":15831, "Threadid":23, "message":"TimeMarker: MyClient: Result=Success Time=0000.05s Message=No payments found. (RetrievePaymentsXY - ID1:123131 ID2:Site|12313 ID3:05/14/2024-07/12/2024 1|12313", "Userid":"Unknown" }

I just want to make sure that I state it right: when I run the following query, I already get output, so the JSON and the fields are all correct (my JSON just got mangled when I massaged it for this post, please ignore that):

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping Client OUTPUT ClientID ClientName Region
| chart count over ClientName by apiName

where `chart count over` is at the end. But when I move the `lookup` statement after `chart`, I don't get any data back. If I remove the `lookup`, the query won't work, as `ClientName` is stored in the lookup mapping file.
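A minimal sketch of why the ordering matters, assuming the field and lookup names from the post: `chart count over ClientName by apiName` keeps only ClientName plus one column per apiName value, so a lookup placed after it has no Client field left to key on. If the lookup has to come after the chart, chart over the raw Client field instead and then enrich the result rows (the column reordering via `table ClientName *` is one option):

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| chart count over Client by apiName
| lookup My_Client_Mapping Client OUTPUT ClientName
| table ClientName *
| fields - Client

This keeps Client in the chart output so the lookup can still join, then swaps the display column to ClientName.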
Thanks for your reply. In the end, the solution was just to disable the Splunk Light Forwarder via the CLI:
./splunk disable app SplunkLightForwarder
After this change I restarted the Splunk service and it worked fine again.
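For reference, the full sequence as a sketch, assuming the default $SPLUNK_HOME layout on the instance in question:

cd $SPLUNK_HOME/bin
./splunk disable app SplunkLightForwarder
./splunk restart

As in the post above, the change only takes effect after the restart.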
I narrowed the issue down to an add-on and then updated it to the latest version.  This fixed the problem.  Thanks for your help @gcusello and @PickleRick 
Hi, I was wondering if there was a way I could blacklist the following event based on the event code and the account name under the Subject field. So I want to blacklist events of code 4663 with a Subject account name of COMPUTER8-55$. What would the regex for that look like?

05/10/2024 01:05:35 PM
LogName=Sec
EventCode=4670
EventType=0
ComputerName=myComputer.net
SourceName=Microsoft Windows security auditing.
Type=Information
RecordNumber=10000000
Keywords=Audit Success
TaskCategory=Authorization Policy Change
OpCode=Info
Message=Permissions on an object were changed.

Subject:
Security ID: S-0-20-35
Account Name: COMPUTER8-55$
Account Domain: myDomain
Logon ID: 0x3E7

Object:
Object Server: Security
Object Type: Token
Object Name: -
Handle ID: 0x1718

Process:
Process ID: 0x35c
Process Name: C:\Windows\System32\svchost.exe
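One way to do this is a key=regex blacklist on the Windows event log input; a sketch, assuming the events arrive through a [WinEventLog://Security] stanza on the forwarder (the regex is illustrative and matches any "Account Name" occurrence in the message, so tighten it if the match must be limited to the Subject block):

[WinEventLog://Security]
blacklist1 = EventCode="4663" Message="Account Name:\s+COMPUTER8-55\$"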
Hello Mikeydee,    i have exactly the same issue/problem in our splunk environment. Do you have a solution for this yet?    Regards,  Tobias
Please understand that alerts *never* expire.  They will continue to run until you disable or delete them. What *does* expire are the alert *results*.  That is the data found by the query that ran to trigger (or not) the alert.  That data is kept on the search head and is subject to disk space limits based on the role of the user running the alert.  Without such limits, the SH risks running out of space for storing more search results. IMO, there's very little need to preserve alert results beyond the default 24 hours.  Perhaps 24 or 72 hours, but not 100 days.
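For concreteness, a sketch of the two settings involved; the stanza names and values here are illustrative, not from the thread:

# savedsearches.conf (search head) - how long triggered alert results are kept
[My scheduled alert]
alert.expires = 24h

# authorize.conf - per-role cap, in MB, on disk used by search artifacts
[role_power]
srchDiskQuota = 500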
That endpoint returns information about all saved searches in all apps.  See the REST API Reference Manual for an explanation of the data returned. Note that reports and alerts are both saved searches.  Reports are distinguished by the attribute alert_type=always, but there may be other indicators.
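A minimal sketch of how to inspect that attribute across apps with the rest search command (filter further as needed):

| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app alert_type is_scheduled actions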
Hi @Real_captain, sorry for the previous message: I forgot the search command! Anyway, please try this:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time file
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time
| eval file="count after PIDZJEA"
| table file eventcount _time ]
| chart sum(eventcount) AS eventcount OVER _time BY file

Ciao. Giuseppe
Hi @gcusello  I have corrected the search query but the results are like below. Is it possible to have the records for a date on the same line?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS "count after PIDZJEA" BY _time ]
Hi @Pallavi.Lohar, Looks like the community was not able to jump in and help. Did you happen to find a solution yourself you could share here? If you're still needing help, you can contact AppDynamics Support. How do I submit a Support ticket? An FAQ 
Hi @gcusello  I am not able to use the append command as you suggested. I am facing the below error:
Hello @Temuulen0303,  Thanks for taking the time to reply to my post! I checked; it is only applicable to the "notable events by urgency" search. As for the saved searches, there is no option to choose the time range.   Also, when I link the time range with "notable events by urgency" and select a custom time, it does not apply for some reason... I checked the source code of that search, and the query for the earliest and latest time does take it from the time picker that I added. 
@PickleRick Yes, rolling files every 15 minutes could produce hundreds of files, but my tests were executed with a very small number of files (10 - 20), and even with these files Splunk doesn't monitor the newly created ones. I will check the commands you wrote and hope to find out what the problem is.
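If it helps, a sketch of one way to see which files the tailing processor has actually picked up (this may well overlap with the commands already suggested; run it on the Splunk instance doing the monitoring):

cd $SPLUNK_HOME/bin
./splunk list inputstatus

The output lists each file under the monitor stanzas together with its read status, which usually shows whether the newly rolled files were never noticed or were skipped as already read.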
Hi @Real_captain, the only way is the append command, with another transaction, but you'll have a very slow search:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append [ index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS "count after PIDZJEA" BY _time ]

Ciao. Giuseppe
I have alerts configured to expire after 100 days and scheduled to execute the search query every 10 minutes. I can see the alert search job is available under "| rest /services/search/jobs" and is using disk space. I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered alert retention period?
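A sketch of how to see the two numbers side by side for that alert's jobs; the label value is illustrative, substitute the alert's name:

| rest /services/search/jobs
| search label="my_alert_name"
| table label diskUsage ttl runDuration updated

diskUsage is the size of the job's dispatch artifact and ttl is how much longer it will be kept, so together they show how quickly a 100-day retention eats into the role's disk quota.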
Hi @ITWhisperer, Yeah, I feel the same. But if I take stats values of each data field by the Identity, I am not able to get the desired results, as I explained. Is there a better way? In the end I should have Identity, data, and status in a table, as I described. I am finding it very hard to work out the logic for this.
Thanks! We see now, after some digging, that the bug is probably caused by a notable event being too big. The error message is "events are not displayed in the search results because _raw fields exceed the limit". It seems this one oversized event has caused bugs in the "Incident Review - Main" search, which also caused other incidents to fail to load. We are deleting the event and fixing the correlation search now, to add a fail-safe that avoids creating such big notable events in the future. Hope this fixes the issue!
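One possible shape for that fail-safe, as a sketch appended to the correlation search before the notable is created; the field choice and the 10000-character cutoff are illustrative, not from the thread:

| eval _raw=if(len(_raw) > 10000, substr(_raw, 1, 10000) . " ...[truncated]", _raw)

Anything over the cutoff is trimmed so the resulting notable stays well under whatever limit Incident Review was hitting.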