All Posts

That endpoint returns information about all saved searches in all apps.  See the REST API Reference Manual for an explanation of the data returned. Note that reports and alerts are both saved searches.  Reports are distinguished by the attribute alert_type=always, but there may be other indicators.
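For example, a minimal sketch that lists only reports via the rest search command, assuming alert_type="always" is the distinguishing attribute as described above (the fields in the table are just common ones to inspect):

| rest /servicesNS/-/-/saved/searches count=0
| search alert_type="always"
| table title eai:acl.app is_scheduled alert_type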
Hi @Real_captain, sorry for the previous message: I forgot the search command! Anyway, please try this:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| stats sum(eventcount) AS eventcount BY _time file
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | stats sum(eventcount) AS eventcount BY _time
    | eval file="count after PIDZJEA"
    | table file eventcount _time ]
| chart sum(eventcount) AS eventcount OVER _time BY file

Ciao.
Giuseppe
Hi @gcusello, I have corrected the search query, but the results come out as below. Is it possible to have the records for the same date on one line?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append [ search index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | chart sum(eventcount) AS "count after PIDZJEA" BY _time ]
Hi @Pallavi.Lohar, it looks like the community was not able to jump in and help. Did you happen to find a solution yourself that you could share here? If you still need help, you can contact AppDynamics Support: How do I submit a Support ticket? An FAQ
Hi @gcusello, I am not able to use the append command as you suggested. I am facing the error below:
Hello @Temuulen0303, thanks for taking the time to reply to my post! I checked, and it is only applicable to the "notable events by urgency" search; for the saved searches there is no option to choose the time range. Also, when I linked the time range to "notable events by urgency" and selected a custom time, it did not apply for some reason. I checked the source code of that search, and the query for the earliest and latest time does take them from the time picker I added.
@PickleRick Yes, rolling files every 15 minutes could produce hundreds of files, but my tests were executed with a very small number of files (10-20), and even with these files Splunk doesn't monitor the newly created ones. I will check the commands you wrote and hope to find the problem.
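For reference, the tailing processor's view of monitored files can also be inspected over REST; a minimal sketch (the inputstatus endpoint is the standard one for this, but the exact columns returned vary by version):

| rest /services/admin/inputstatus/TailingProcessor:FileStatus splunk_server=local
| transpose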
Hi @Real_captain, the only way is the append command, with another transaction, but you'll have a very slow search:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
| append [ index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
    | rex field=TEXT "NIDF=(?<file>[^\\s]+)"
    | transaction startswith="PIDZJEA" keeporphans=True
    | bin span=1d _time
    | chart sum(eventcount) AS "count after PIDZJEA" BY _time ]

Ciao.
Giuseppe
I have alerts configured to expire after 100 days and scheduled to run their search query every 10 minutes. I can see the alert search jobs under "| rest /services/search/jobs", and they are using disk space. I could not find anything about this in the logs. Could someone help me understand the relationship between disk quota utilization and the triggered alert retention period?
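To see how much disk each retained job is holding and when it will expire, a minimal sketch along these lines can help (diskUsage and ttl are standard job properties; the label filter is an assumption, so replace it with your alert's name):

| rest /services/search/jobs count=0
| search label="your alert name"
| table sid label diskUsage ttl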
Hi @ITWhisperer, yeah, I feel the same. But if I take stats values of every field by the Identity, I am not able to get the desired results as I explained. Is there a better way? In the end I should have Identity, data, and status in a table as I described. I am finding it very hard to work out the logic for this.
Thanks! We see now, after some digging, that the bug is probably caused by a notable event being too big. The error message is "events are not displayed in the search results because _raw fields exceed the limit". It seems this one oversized event has caused bugs in the "Incident Review - Main" search, which also caused other incidents to fail to load. We are deleting the event and fixing the correlation search now, to add a fail-safe that avoids creating such big notable events in the future. Hope this fixes the issue!
You probably need to use mvexpand on the combi_fields, then split or parse it into separate fields, and use stats/eventstats to find the highest number (which number are you talking about?) for each "data" within each identity, and take the "status" from that event; a sketch of this approach follows below. Having said that, you might be better off going back a step or two, i.e. before the stats values(*) as * and whatever commands you used to combine the fields in the first place, as it seems you have just made it harder for yourself.
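For illustration, a minimal sketch of the mvexpand approach, assuming the " - " delimiter and field layout from the example and that "the highest number" means the first number (the rex pattern and field names are assumptions):

| mvexpand combi_fields
| rex field=combi_fields "^(?<data>\S+)\s*-\s*(?<num1>\d+)\s*-\s*(?<num2>\d+)\s*-\s*(?<status>\w+)"
| eval num1=tonumber(num1)
| eventstats max(num1) AS max_num1 BY identity data
| where num1 == max_num1
| stats latest(status) AS status BY identity data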
So, does the search work without the lookup?
Hi Everyone, I have created a multivalued field, called combi_fields, from some other fields. I am showing those multivalued fields with | stats values(*) as * by identity. Now I have a table with Identity and combi_fields. In combi_fields I want to check, for a given Identity, whether the first piece of data is the same across all the multivalued entries. For example:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
abcdefg - 113 - 110 - Passed - folder1 - folder2

In the above example the first piece of data is the same in every entry. If it is the same, I have to take the greatest number and give its status as output, like: ABC abcdefg Passed. There might be different data in the first place, like below:

Identity: ABC
combi_fields:
abcdefg - 231 - 217 - Passed - folder1 - folder2
abcdefg - 441 - 456 - Passed - folder1 - folder2
xyzabc - 113 - 110 - Passed - folder1 - folder2
xyzabc - 201 - 219 - Passed - folder1 - folder2

Here I should show:
ABC abcdefg Passed
ABC xyzabc Passed

How can I do this? How can I compare values within a field?
Hi, thanks for the update. But we cannot use the query without endswith, because without endswith it will give all the events of the day that were created after the PIDZJEA event.
1. Is it possible to use both startswith and endswith and get the records of the current day?
2. Also, is it possible to get the count of events which are generated after PIDZJEA (endswith) on the same day, for every day?

Expected result:

Current query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file

Result:
Hi all, I'm trying to get all the saved searches that are in all apps in Splunk. Could someone explain to me what the endpoint servicesNS/-/-/saved/searches is and what data it returns? For reference, I've tried to use that endpoint and match it with saved searches only (reports), not returning any alerts. But the data returned has a lot more than expected, as the number in the "reports" tab under "all apps" is a lot smaller than the number returned from the REST call. Any help or a link to docs would be appreciated.
On top of that, your user might simply be restricted from using such commands, and your dashboards may not run if they are powered by risky commands. https://docs.splunk.com/Documentation/Splunk/latest/Security/SPLsafeguards
Ugh. This looks almost like a JSON structure. Unfortunately, your keys and values are not enclosed in quotes, so it is not a valid JSON object. If it were a JSON object, you wouldn't have to worry about regexes, because Splunk can parse JSON, and it's best to let it do so instead of trying to fiddle with regexes to handle structured data. EDIT: OK, earlier you showed some representation of your event and it did include the quotes. So which is it?
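If the raw event does turn out to be valid JSON, a minimal sketch of letting Splunk parse it (key1 and key2 are hypothetical placeholders for whatever fields your event actually contains):

| spath input=_raw
| table key1 key2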
Why is this add-on not supported anymore? Is there any other alternative for OT/ICS data?
Also, the first business question: how do you know that you need to use SmartStore? Not that I'm saying that you don't, but what's the rationale for this particular requirement?