All Posts

You probably need to use mvexpand on combi_fields, then split or parse it into separate fields, and use stats/eventstats to find the highest number (which of the numbers are you referring to?) for each "data" value within each identity, then take the "status" from that event. Having said that, you might be better off going back a step or two, i.e. before the stats values(*) as * and whatever commands you used to combine the fields in the first place, as it seems you have just made it harder for yourself.
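A rough sketch of that approach, assuming the entries really are delimited by " - " and the second value is the number you want to maximize (adjust the split() delimiter and mvindex positions to your actual data):

```
| stats values(combi_fields) as combi_fields by identity
| mvexpand combi_fields
| eval parts=split(combi_fields," - ")
| eval data=mvindex(parts,0), num=tonumber(mvindex(parts,1)), status=mvindex(parts,3)
| eventstats max(num) as max_num by identity, data
| where num=max_num
| table identity, data, status
```

eventstats keeps every row while attaching the per-(identity, data) maximum, so the where clause can keep just the event carrying the greatest number for each data value.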
So, does the search work without the lookup?
Hi Everyone, I have created a multivalued field called combi_fields from some of my other fields. I am displaying it with | stats values(*) as * by identity, so now I have a table with Identity and combi_fields. Within combi_fields I want to check whether the first data element is the same across all the multivalued entries for a given Identity. For example:

Identity    combi_fields
ABC         abcdefg - 231 - 217 - Passed - folder1 - folder2
            abcdefg - 441 - 456 - Passed - folder1 - folder2
            abcdefg - 113 - 110 - Passed - folder1 - folder2

In the above example the first element is the same in every entry. When it is the same, I have to take the entry with the greatest number and give its status as output, like:

ABC abcdefg Passed

There might also be different values in the first position, like below:

ABC         abcdefg - 231 - 217 - Passed - folder1 - folder2
            abcdefg - 441 - 456 - Passed - folder1 - folder2
            xyzabc - 113 - 110 - Passed - folder1 - folder2
            xyzabc - 201 - 219 - Passed - folder1 - folder2

Here it should show:

ABC abcdefg Passed
ABC xyzabc Passed

How can I do this? How can I compare values within a field?
Hi, thanks for the update. But we cannot use the query without endswith, because without endswith it will return all the events of the day that were created after the event PIDZJEA.

1. Is it possible to use both startswith and endswith and get the records of the current day?
2. Also, is it possible to get, for every day, the count of events that are generated after the PIDZJEA (endswith) on the same day?

Current query:

index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
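For question 1, one way to keep both startswith and endswith while restricting the search to the current day is with time modifiers on the base search (a sketch only; earliest=@d snaps to midnight today):

```
index=events_prod_cdp_penalty_esa source="SYSLOG" earliest=@d latest=now
    (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA) OR PIDZJEA OR IDJO20P)
| rex field=TEXT "NIDF=(?<file>[^\s]+)"
| transaction startswith="IDJO20P" endswith="PIDZJEA" keeporphans=True
| bin span=1d _time
| chart sum(eventcount) AS eventcount OVER _time BY file
```

The time bounds filter events before transaction runs, so complete IDJO20P→PIDZJEA pairs within today are still grouped as before.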
Hi all, I'm trying to get all the saved searches in Splunk across all apps. Could someone explain what the endpoint servicesNS/-/-/saved/searches is and what data it returns?

For reference, I've tried to use that endpoint and match it against saved searches only (reports), not returning any alerts. But the data returned has a lot more than expected: the number in the "Reports" tab under "All apps" is a lot smaller than the number returned from the REST call.

Any help or link to docs would be appreciated.
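For illustration, one commonly used heuristic for "report, not alert" is that reports have alert_type set to "always" (the Reports tab also applies its own UI-side filtering, such as hiding objects in apps you cannot see, which may explain part of the count difference):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search alert_type="always"
| table title eai:acl.app eai:acl.owner is_scheduled
```

This is a sketch; compare its results against the Reports tab and tighten the filter to your environment.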
On top of that, your user might simply be restricted from using such commands, and your dashboards may not run if powered by risky commands. https://docs.splunk.com/Documentation/Splunk/latest/Security/SPLsafeguards
Ugh. This looks almost like a JSON structure. Unfortunately your keys and values are not enclosed in quotes, so it is not a valid JSON object. If it were a JSON object you wouldn't have to worry about regexes, because Splunk can parse JSON, and it's best to let it do so instead of trying to fiddle with regexes to handle structured data. EDIT: OK, earlier you showed a representation of your event and it did include the quotes. So which is it?
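If the event really is valid JSON (as in the earlier sample with quotes), a sketch of letting Splunk parse it at search time instead of using regexes (the field names here are made up for illustration):

```
| spath input=_raw
| table user.name, action, status
```

spath extracts every key into a field, including nested keys in dotted notation, so no extraction pattern has to be maintained by hand.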
Why this addon is not supported anymore? Is there any other alternative for OT/ICS data?  
Also the first business question - how do you know that you need to use Smartstore? Not that I'm saying that you don't but what's the rationale for this particular requirement?
This is an error resulting from the Python code trying to do something it's not supposed to. In this case it's trying to serialize to JSON an object which is not serializable (not all classes can be serialized). Why does it happen? We don't know; you should look in your logs for an indication of where this exception is triggered.
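A minimal illustration of that error class and one common workaround (this is generic Python, not code from the app in question):

```python
import json
import datetime

# A dict holding a value json.dumps does not know how to serialize
event = {"name": "job1", "started": datetime.datetime(2024, 5, 14, 10, 0)}

try:
    json.dumps(event)
except TypeError as err:
    # e.g. "Object of type datetime is not JSON serializable"
    print(err)

# One common workaround: give json.dumps a fallback converter
print(json.dumps(event, default=str))
```

In an app you don't own, though, the fix belongs upstream; the traceback in the logs should point at which object triggered it.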
Yes, you are doing it right. After adding the time picker you can click on the panel's edit icon, then:
1) select Edit on your query
2) go to "Time Range"
3) click on Input and select your time picker token
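In the dashboard's Simple XML source, the result looks roughly like this (the token name tp and the query are just examples; yours is whatever you named the time picker):

```
<input type="time" token="tp">
  <label>Time range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>
<search>
  <query>index=_internal | timechart count</query>
  <earliest>$tp.earliest$</earliest>
  <latest>$tp.latest$</latest>
</search>
```

The $tp.earliest$/$tp.latest$ tokens are what the UI wires up for you when you pick the input in step 3.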
splunk list monitor and splunk list inputstatus are your friends here.

Also, crcSalt = <SOURCE> is a setting often used by newcomers to Splunk, but in reality it's rarely needed (usually raising initCrcLength suffices). alwaysOpenFile is most typically not needed; leave it at the default unless you're doing some weird stuff on Windows.

My suspicion would be that since you have many files (almost a hundred for each day), you're running out of file descriptors.
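As a sketch, a monitor stanza along those lines (the path, index, and sourcetype are illustrative, not from your config):

```
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp
# Raise this if files start with long identical headers,
# instead of reaching for crcSalt = <SOURCE>
initCrcLength = 1024
```

To test the file-descriptor theory on Linux, compare the number of monitored files against the "Max open files" line in /proc/<splunkd pid>/limits.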
Or - if the permissions look right - a SELinux mislabeling issue.
This looks like filesystem permissions. The Splunk paths are normally based on the splunk account's user permissions, for example:

sudo chown -R splunk:splunk <YOUR DATA PATH>

Find out what account Splunk was running under.
You could use Prometheus Metrics for Splunk | Splunkbase if you want to avoid using the OTel Collector.
Do you want to embed a dashboard in an external site? If yes, check out Embedded Dashboards For Splunk (EDFS) | Splunkbase
Hello Team, I followed the steps mentioned on the page below for migration to Splunk Enterprise version 9.2.1: Upgrade to version 9.2 on UNIX - Splunk Documentation

I receive the error below on running the start command. Due to this error, I am unable to complete the migration on the Splunk indexer machine.

Warning: cannot create "/data/splunk/index_data"
Creating: /data/splunk/index_data
ERROR while running renew-certs migration.
Hi @gcusello No, the new file has a different name (the name is the time when it was generated), and the content of the files is not the same. I tried different options of crcSalt but nothing happened. I also checked the logs in $SPLUNK_FORWARDER/var/log/splunk/metrics.log but there are no logs about the new files.
Yes, and here is an example:

/Users/yotov/app/.logs/
  1/
    2024-05-14/
      10_00_00.log
      10_15_00.log    (every 15 minutes a new file is created)
      15_00_00.log
  2/
    2024-05-14/
      10_00_00.log
      10_15_00.log
      ....

About alwaysOpenFile - no, I tried with and without it, but nothing happens.
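For a layout like that, a recursive monitor with a whitelist is the usual shape of the input (a sketch; index and sourcetype are placeholders):

```
[monitor:///Users/yotov/app/.logs]
recursive = true
whitelist = \.log$
index = main
sourcetype = app_logs
```

Monitoring the top directory rather than a dated subpath means newly created day directories and 15-minute files are picked up without config changes.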
Reference to this: https://github.com/elastic/elasticsearch/issues/57018#issuecomment-1501986185 and adding -Djava.io.tmpdir surely helped in the case of another customer I was working with as well.
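For reference, that flag is passed as a JVM argument to whichever Java process is involved; the path below is only an example and must be writable (and executable, per the linked issue) by that process:

```
java -Djava.io.tmpdir=/opt/java-tmp -jar app.jar
```

Where the argument goes (a jvm.options file, a service wrapper, an app's Java settings page) depends on how that Java process is launched in your deployment.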