Your description is not quite clear. First you mention "ALLLOWED1" : "NONE", but then it suddenly turns out to be \"ALLOWEDFIELD\": \"NONE\". Make up your mind. Additionally, do you have your fields extracted, or do you have to pull the data dynamically from raw events?
How did you come up with the second search? Is that the same as the first one just with one additional condition? What does your data look like?
Hi @LS1 , did you try clicking on the value in Interesting Fields to add it to the search? That way you can see the exact syntax to add to your main search. Ciao. Giuseppe
It is a bit difficult to figure out what might be going on without some sample data. Please post some anonymised raw (unformatted) events in a code block using the </> format button above so we can see what you are dealing with.
This looks like it might be JSON data? If so, please post some sample data (anonymised appropriately) in raw format in a code block using the </> option, to preserve the formatting of your event.
Try something like this:

| untable _time msgsource count
| eval group=mvindex(split(msgsource,": "),0)
| eval msgsource=mvindex(split(msgsource,": "),1)
| eval _time=_time.":".msgsource
| xyseries _time group count
| eval msgsource=mvindex(split(_time,":"),1)
| eval _time=mvindex(split(_time,":"),0)
| table _time msgsource total *

The trick is to split the combined "group: msgsource" column headers apart, temporarily pack msgsource into _time so that xyseries can pivot on group alone, then split _time back out into the two columns.
I was able to write a query that groups by api (msgsource) to show the response times, but I am trying to see if I can extract the result in a different format. Here is the query I used:

query
| rex field=_raw "Time=(?<NewTime>\d{4}\.\d+)"
| eval TimeMilliseconds=(NewTime*1000)
| timechart span=1d count as total, count(eval(TimeMilliseconds<=1000)) as "<1sec", count(eval(TimeMilliseconds>1000 AND TimeMilliseconds<=2000)) as "1sec-2sec", count(eval(TimeMilliseconds>2000 AND TimeMilliseconds<=5000)) as "2sec-5sec", count(eval(TimeMilliseconds>48000)) as "48sec+" by msgsource

Here is the output that I get today:

_time       total: retrieveApi   total: createApi   <1sec: retrieveApi   <1sec: createApi   1sec-2sec: retrieveApi   1sec-2sec: createApi   2sec-5sec: retrieveApi   2sec-5sec: createApi
2025-07-13  1234                 200                1200                 198                34                       1                      0                        1
2025-07-14  1000                 335                990                  330                8                        5                      2                        0

This is what I would like to see, the results grouped by both `_time` and `msgsource`:

_time       msgsource    total   <1sec   1sec-2sec   2sec-5sec
2025-07-13  retrieveApi  1234    1200    34          0
2025-07-13  createApi    200     198     1           1
2025-07-14  retrieveApi  1000    990     8           2
2025-07-14  createApi    335     330     5           0
I want to search for "NONE" values that are not in the 3 allowed enum fields. I need to ignore "NONE" if it is in an allowed enum field. For example, if "ALLLOWED1" : "NONE" is in the event but there is no "NONE" other than that, I do not count it. If "ALLOWED2": "NONE" and "not-allowed": "NONE" are in the same record, I need this record. The format in my record is \"ALLOWEDFIELD\": \"NONE\". I am not sure how I should deal with " and \ in the string for the query.
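Not an answer from the thread, but a rough sketch of one way to approach the escaping, assuming the allowed fields are named ALLLOWED1, ALLOWED2, and ALLOWED3 (placeholder names) and the fields are not already extracted:

index=your_index "NONE"
| rex max_match=0 "\\\\\"(?<none_field>[^\\\\\"]+)\\\\\": \\\\\"NONE\\\\\""
| eval disallowed=mvfilter(none_field!="ALLLOWED1" AND none_field!="ALLOWED2" AND none_field!="ALLOWED3")
| where isnotnull(disallowed)

The double layer of escaping is the awkward part: inside an SPL quoted string, \\\\ becomes the regex \\ (one literal backslash) and \" becomes a literal quote, so \\\\\" matches the two raw characters \" in the event. mvfilter then keeps only the extracted field names outside the allowed list, and isnotnull keeps events where at least one such field had "NONE".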
Hello, maybe I don't have the vocabulary to find the answer when Googling. I only submit this question after many attempts to find the answer on my own. I am trying to figure out why neither "started" nor "blocked" will show events when I add them to my search criteria, as shown in the images. The "success" action returns events found in the same Interesting Fields category ("action"). When using the search index=security action="*", the event listings include what's been "blocked" (and what's been "started"). I can then add a search on "failed" password and the correct number of events display. All of the report options (Top values, Events with this field, etc.) display the proper count for "blocked". I have tried other interesting fields with greater values, wondering if there was some kind of limit set somewhere, but they work. I'm sure it's simple but I cannot figure it out. Please advise. Thanks, LS
That's the idea.
1. On each indexer you move the data from the old location to the new one, leaving a symlink behind.
2. You update the path to the index in indexes.conf so that it points to the new location.
3. You remove the symlinks since they're not needed anymore.
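A rough shell sketch of those steps on a single indexer, with placeholder paths (and assuming Splunk is stopped while the data moves):

# 1. Move the index data and leave a symlink at the old path
mv /data/old/myindex /data/new/myindex
ln -s /data/new/myindex /data/old/myindex

# 2. Update homePath/coldPath/thawedPath for the index in indexes.conf
#    to point at /data/new/myindex, then start Splunk and verify searches.

# 3. Once everything works from the new path, remove the symlink only
rm /data/old/myindex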
The trick was to do the silent install documented in https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/install-the-universal-forwarder/install-a-windows-universal-forwarder#id_97c49283_f5a8_4748_9e3e_87ca9b57633d__Install_a_Windows_universal_forwarder_from_the_command_line but create $SPLUNK_HOME\etc\system\local\user-seed.conf and $SPLUNK_HOME\etc\system\local\deploymentclient.conf before running the install.
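For reference, a minimal sketch of those two files with placeholder values (user-seed.conf is only read on first start, before any user exists):

# $SPLUNK_HOME\etc\system\local\user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your-initial-admin-password>

# $SPLUNK_HOME\etc\system\local\deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089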
In the very last rm command, aren't you just removing the symbolic link you created a couple of steps above? You already moved the directory to 'Old'. 
I might have a solution now by using this statement: NOT match(_raw,"splunk.test@test.co.uk")
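One caveat worth noting: match() takes a regular expression, so the unescaped dots will also match any character (e.g. splunkXtest@test.co.uk). A slightly stricter version of the same idea, purely as a sketch:

NOT match(_raw, "splunk\.test@test\.co\.uk")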
Hi @livehybrid,
Here is the eval which works in the search:
| eval match=if(RecipientAddress="splunk.test@vwfs.co.uk",1,0)
| search match=1
Hi @vishalduttauk
Can you share the eval you created which works in the search, and I can check this against Ingest Actions?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am ingesting data from the Splunk Add-on for O365. I want to use the Eval Expression filter within an ingest action to filter which email addresses we ingest data from. Sampling the data is easy, but the next bit isn't: I want to drop events where the RecipientAddress is not splunk.test@test.co.uk. Creating an | eval within a search is simple, but creating something that works for a filter using an eval expression, which drops events, is where I am struggling. Our Exchange/Entra team are having problems limiting the online mailboxes for the Splunk application, which is why I am looking at this workaround. Ignore the application that's tagged, as we are using Enterprise 9.3.4. Can you help?
Hi @Haleb
I've had pretty much this exact use case with a previous customer who was enriching Enterprise Security rules with a lookup of data pulled in via one of the AWS apps.
I found that the best way to tackle this is to ensure that you have a scheduled search to populate/update your CSV/KVStore lookup that runs BEFORE your alerts. E.g. if you run your alerts hourly, then configure them so that they run at something like 5 minutes past the hour, and have the lookup-updating search run just before that, e.g. at 3 minutes past the hour.
By itself this doesn't *entirely* remove your issue, because if an EC2 instance was created at 4 minutes past the hour, the data won't have been in the logs when the lookup updated at 3 minutes past, but it will be in the alert at 5 minutes past. Also, with things like CloudTrail there can be quite a bit of lag (as you may know!), therefore you may wish to configure your alert to look back with something like earliest=-70m latest=-10m.
A combination of these approaches should cover the time gap between the lookup updating and your alert firing, whilst maintaining the capability to fire alerts regularly and in a timely manner. A sketch of the scheduling follows below. I hope that makes sense!
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
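A hypothetical savedsearches.conf sketch of that offset scheduling; the stanza names, index, and search bodies are placeholders, not from the thread:

# Lookup generator: runs at 3 minutes past each hour
[Update SG-to-asset lookup]
enableSched = 1
cron_schedule = 3 * * * *
search = index=aws_config sourcetype="aws:config" | stats ... | outputlookup sg_to_assets.csv

# Alert: runs at 5 minutes past, looking back -70m to -10m to allow for CloudTrail lag
[Alert - new SG port opened]
enableSched = 1
cron_schedule = 5 * * * *
dispatch.earliest_time = -70m
dispatch.latest_time = -10m
search = index=aws_cloudtrail eventName=AuthorizeSecurityGroupIngress | lookup sg_to_assets.csv security_group_id OUTPUT attached_instance | ...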
Hi Splunk Community,
I'm looking for guidance on how to properly manage and organize lookup files to ensure they are always up-to-date, especially in the context of alerting. I've run into situations where an alert is triggered, but the related lookup file hasn't been updated yet, resulting in missing or incomplete context at the time of the alert.
What are the best practices for ensuring that lookup files are refreshed frequently and reliably? Should I be using scheduled saved searches, external scripts, KV store lookups, or another mechanism to guarantee the most recent data is available for correlation in real-time or near-real-time? Any advice or example workflows would be greatly appreciated.
Use case for context: I'm working with AWS CloudTrail data to detect when new ports are opened in Security Groups. When such an event is detected, I want to enrich it with additional context, for example, which EC2 instance the Security Group is attached to. This context is available from AWS Config and ingested into a separate Splunk index. I'm currently generating a lookup to map Security Group IDs to related assets, but sometimes the alert triggers before this lookup is updated with the latest AWS Config data.
Thanks in advance!
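Not from the thread, but a rough SPL sketch of the two halves of this pattern; every index, sourcetype, and field name here is assumed for illustration:

Lookup generator (scheduled a few minutes before the alert):
index=aws_config sourcetype="aws:config" resourceType="AWS::EC2::SecurityGroup"
| stats latest(attached_instance) AS attached_instance BY security_group_id
| outputlookup sg_to_assets.csv

Alert search, enriched at trigger time (with a lag allowance as suggested above):
index=aws_cloudtrail eventName=AuthorizeSecurityGroupIngress earliest=-70m latest=-10m
| lookup sg_to_assets.csv security_group_id OUTPUT attached_instance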
Hi
As you have individual servers, you could use this method: https://community.splunk.com/t5/Installation/How-to-migrate-indexes-to-new-indexer-instance/m-p/528064/highlight/true
When you have several indexers, you should consider migrating to a clustered environment. You should read this: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance
r. Ismo
@peterow  Great to see that your issue has been resolved!