All Posts

I want to search the "NONE" not in 3 allowed enum value. I need to ignore the "NONE" if it is in the allowed enum. For example, if the "ALLLOWED1" : "NONE" is in the event,  but no "NONE" other than ... See more...
I want to search the "NONE" not in 3 allowed enum value. I need to ignore the "NONE" if it is in the allowed enum. For example, if the "ALLLOWED1" : "NONE" is in the event,  but no "NONE" other than that, I do not count it. If "ALLOWED2": "NONE" and "not-allowed": "NONE" in same record, I need this record.  format in my record. \"ALLOWEDFIELD\": \"NONE\" I am not sure how should I deal with " and \ in the string for the query.
Hello, maybe I don't have the vocabulary to find the answer when Googling. I only submit this question after many attempts to find the answer on my own. I am trying to figure out why neither "started" nor "blocked" will show events when I add them to my search criteria, as shown in the images. The "success" action returns events found in the same "Interesting Fields" category ("action"). When using the search index=security action="*", the event listings include what's been "blocked" (and what's been "started"). I can then add a search on "failed" password and the correct number of events displays. All of the "report" options (Top values, Events with this field, etc.) display the proper count for "Blocked". I have tried other "Interesting Fields" with greater values, wondering if there was some kind of limit set somewhere, but they work. I'm sure it's simple but I cannot figure it out. Please advise. Thanks, LS
That's the idea.
1. On each indexer you move the data from the old location to the new one, leaving a symlink behind.
2. You update the path to the index in indexes.conf so that it points to the new location.
3. You remove the symlinks since they're not needed anymore.
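As a rough sketch of those steps on one indexer (the index name and both paths below are made up, and Splunk should be stopped while the data is moved):

# 1. Move the data and leave a symlink at the old location
mv /opt/splunk/var/lib/splunk/myindex /new_volume/splunk/myindex
ln -s /new_volume/splunk/myindex /opt/splunk/var/lib/splunk/myindex

# 2. Update indexes.conf so the index points at the new location, e.g.
#    [myindex]
#    homePath   = /new_volume/splunk/myindex/db
#    coldPath   = /new_volume/splunk/myindex/colddb
#    thawedPath = /new_volume/splunk/myindex/thaweddb

# 3. Once Splunk restarts cleanly and reads from the new path, drop the symlink
rm /opt/splunk/var/lib/splunk/myindex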
The trick was to do the silent install documented in https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/universal-forwarder-manual/9.4/install-the-universal-forwarder/install-a-windows-universal-forwarder#id_97c49283_f5a8_4748_9e3e_87ca9b57633d__Install_a_Windows_universal_forwarder_from_the_command_line but create $SPLUNK_HOME\etc\system\local\user-seed.conf and $SPLUNK_HOME\etc\system\local\deploymentclient.conf before running the install.
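For reference, those two files look roughly like this (the admin credentials and deployment server address are placeholders, not values from the original post):

# $SPLUNK_HOME\etc\system\local\user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = <your-admin-password>

# $SPLUNK_HOME\etc\system\local\deploymentclient.conf
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089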
In the very last rm command, aren't you just removing the symbolic link you created a couple of steps above? You already moved the directory to 'Old'. 
I might have a solution now by using this statement: NOT match(_raw,"splunk.test@test.co.uk")
Hi @livehybrid, Here is the eval which works on the search | eval match=if(RecipientAddress="splunk.test@vwfs.co.uk",1,0) | search match=1
Hi @vishalduttauk Can you share the eval you created that works in the search, and I can check this against Ingest Actions?
I am ingesting data from the Splunk Add-on for O365. I want to use the Eval Expression filter within an ingest action to filter which email addresses we ingest data from. Sampling the data is easy, but the next bit isn't. I want to drop events where the RecipientAddress is not splunk.test@test.co.uk. Creating an | eval within a search is simple, but creating something that works for a filter using an eval expression, which drops events, is where I am struggling. Our Exchange/Entra team are having problems limiting the online mailboxes for the Splunk application, which is why I am looking at this workaround. Ignore the application that's tagged, as we are using Enterprise 9.3.4. Can you help?
Hi @Haleb I've had pretty much this exact use case with a previous customer who was enriching Enterprise Security rules with a lookup of data pulled in via one of the AWS apps. I found that the best way to tackle this is to ensure that you have a scheduled search to populate/update your CSV/KV Store lookup that runs BEFORE your alerts, e.g. if you run your alerts hourly, configure them so that they run at something like 5 minutes past the hour, and have the lookup-updating search run just before that, e.g. at 3 minutes past the hour. By itself this doesn't *entirely* remove your issue, because if an EC2 instance was created at 4 minutes past the hour then the data won't have been in the logs when the lookup updated at 3 minutes past, but it will be in the alert at 5 minutes past. Also, with things like CloudTrail there can be quite a bit of lag (as you may know!), therefore you may wish to configure your alert to look back over something like earliest=-70m latest=-10m. A combination of these approaches should cover the time gap between the lookup updating and your alert firing, whilst maintaining the ability to fire alerts regularly and in a timely manner. I hope that makes sense!
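To make that concrete, a sketch of the scheduled lookup-populating search might look like the following (the index, sourcetype, and field names are invented here and will differ in your AWS Config data):

index=aws_config sourcetype="aws:config"
| stats latest(resourceId) AS instance_id BY security_group_id
| outputlookup sg_to_instance.csv

Schedule that at, say, 3 minutes past the hour, then have the alert run at 5 past with earliest=-70m latest=-10m and enrich its results via | lookup sg_to_instance.csv security_group_id OUTPUT instance_id.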
Hi Splunk Community, I'm looking for guidance on how to properly manage and organize lookup files to ensure they are always up-to-date, especially in the context of alerting. I’ve run into situations where an alert is triggered, but the related lookup file hasn't been updated yet, resulting in missing or incomplete context at the time of the alert. What are the best practices for ensuring that lookup files are refreshed frequently and reliably? Should I be using scheduled saved searches, external scripts, KV store lookups, or another mechanism to guarantee the most recent data is available for correlation in real-time or near-real-time? Any advice or example workflows would be greatly appreciated. Use case for context: I’m working with AWS CloudTrail data to detect when new ports are opened in Security Groups. When such an event is detected, I want to enrich it with additional context — for example, which EC2 instance the Security Group is attached to. This context is available from AWS Config and ingested into a separate Splunk index. I’m currently generating a lookup to map Security Group IDs to related assets, but sometimes the alert triggers before this lookup is updated with the latest AWS Config data. Thanks in advance!
Hi, as you have individual servers, you could use this method: https://community.splunk.com/t5/Installation/How-to-migrate-indexes-to-new-indexer-instance/m-p/528064/highlight/true When you have several indexers you should consider migrating to a clustered environment. You should read this: https://docs.splunk.com/Documentation/SVA/current/Architectures/TopologyGuidance r. Ismo
@peterow  Great to see that your issue has been resolved!
Hi @azer271 In Splunk Cloud, bucket management is abstracted and handled by Splunk, so users do not directly interact with hot/warm/cold distinctions. Instead, storage usage is based on the raw ingested volume. DDAS (Dynamic Data Active Searchable) is the equivalent of your hot/warm/cold buckets from Splunk Enterprise, and you can configure how long data remains in this active, fast-searchable storage. DDAA (Dynamic Data Active Archive) is an additional license cost and is essentially a bit like frozen bucket storage - there is a mechanism in Splunk Cloud to restore this data (which isn't instant), and restored data remains searchable for a period of time before being removed again (and retained in DDAA). This can be cost effective but also tricky to manage; if you need to search the data you will need to know what timeframe you need the data from when you restore it. Another storage type is Dynamic Data: Self-Storage (DDSS), which is the equivalent of frozen bucket storage within your own S3 buckets. This isn't restorable back into Splunk Cloud, so if you ever needed to restore it you'd need to do this on your own on-premise Splunk Enterprise instance to thaw it out and make it searchable again. The Cloud Monitoring Console makes it easy to see your DDAS/DDAA usage. If DDAS/DDAA exceeds 100% then you may be liable for overage costs. It usually doesn't impact performance because of the elastic nature of the backend storage, but over-consuming and then searching more data than scoped can slow things down. For more info it's worth checking out https://www.splunk.com/en_us/blog/platform/dynamic-data-data-retention-options-in-splunk-cloud.html
Hi @siv How about:
| makeresults
| eval Field1="value1", Field2="value1 value2 value3"
| eval Field2=split(Field2," ")
| foreach Field2 mode=multivalue [| eval filtered=mvappend(filtered, if(<<ITEM>>!=Field1, <<ITEM>>, null()))]
| eval Field2=filtered
| fields - filtered
Hi @laura The main developer for the app is Daniel Knights at Hyperion 3 - their email is daniel.knights@hyperion3.com.au so it's worth reaching out directly, perhaps cc in contact@hyperion3.com.au too, or use their contact form at https://www.hyperion3.com.au/
Hi @ws For rebuilding and migrating your Splunk Enterprise setup to a new site while preserving existing data and configurations, either of your mentioned paths would work. This assumes a compatible OS/architecture between the old and new servers. Personally I'd probably use the same version as your existing deployment for the new site and upgrade once complete; that way you're doing a migration rather than a transformation, which is less risky. It also means that there won't be any unknown config changes when copying the contents of $SPLUNK_HOME. You may want to look at using something like rsync for copying the $SPLUNK_DB paths over from the old servers to the new ones, which might take some time depending on your data retention size/configuration. You could move the bulk of this first and then copy the config. If you're able to keep the same hostnames and switch the DNS over, or retain the same IPs, then this will obviously reduce a lot of additional work; otherwise you will need to go through various servers to update things like deploymentclient.conf for clients connecting to the DS, etc.
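As an illustration of the rsync approach (hostnames and paths below are examples only, assuming the default $SPLUNK_DB location):

# Bulk copy while the old indexer is still running
rsync -a olduser@old-indexer:/opt/splunk/var/lib/splunk/ /opt/splunk/var/lib/splunk/
# Final incremental pass with Splunk stopped on the old indexer, then start the new one
rsync -a --delete olduser@old-indexer:/opt/splunk/var/lib/splunk/ /opt/splunk/var/lib/splunk/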
Update: I removed the old expired license (that was incorrectly detected as a PRODUCTION license when it is Non-Production): /opt/splunk/bin/splunk remove license <license_hash> Then I went back to Splunk Web: Log in to Splunk Web as an admin. Navigate to Settings > Licensing. Add the new license. The new license was successfully added. Thanks both @kiran_panchavat and @PrewinThomas for your help.
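For anyone doing this without the UI, the add step can also be done from the CLI (the file path here is just a placeholder):

/opt/splunk/bin/splunk add licenses /tmp/new_enterprise_license.lic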
Hi, I have seen that there are many recommendations to rebuild and migrate with the existing data and configuration. It is a bit confusing for me as a new Splunk user, so I would appreciate some guidance. The following are the instances:
1x Search Head
3x Indexer
3x Heavy Forwarder
1x License server
1x Deployment server
Current version: 9.3.2
Assuming the hostname/IP could be the same or different for the rebuild, what is the best way to perform the rebuild and migration with the existing data and configuration?
Same hostname/IP:
- Copy the entire contents of the $SPLUNK_HOME directory from the old server to the new server
- Install all instances for the new Splunk components on the new server
Different hostname/IP:
- Copy the entire contents of the $SPLUNK_HOME directory from the old server to the new server
- Install all instances for the new Splunk components on the new server
- Update the individual .conf files of instances if using new hostnames
- Update individual instances to point to their respective instance roles
And could I install a newer version of Splunk without going to 9.3.2 when rebuilding and migrating? For testing purposes, I'll be trying it on one AIO instance for the rebuilding/migration due to space limitations.
Please check this out,  https://splunk.my.site.com/customer/s/article/Can-Splunk-Ingest-Windows-etl-Formated-Files