All Posts

When you refresh, can you see the token in the URL as form.Slot1_TailNum=true? When you interact with the dashboard, Splunk preserves the token values in the URL so they keep their values when the page is simply refreshed. You can reload the page with a clean state by removing all the token values after the question mark (?) in your dashboard URL.
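For illustration, here is a hypothetical dashboard URL carrying its form tokens, and the same URL stripped after the question mark to force a clean reload (the host and dashboard name are made up):

https://splunk.example.com/en-US/app/search/flight_dashboard?form.Slot1_TailNum=true&form.time_tok.earliest=-24h
https://splunk.example.com/en-US/app/search/flight_dashboard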
The requirements are inconsistent. Sometimes everything after the second : is the service name; other times the service name follows the first _. How is a computer to decide which method to use?
Adding to @ITWhisperer 's answer - ideally you should have the main time field for a given event parsed into the _time field at ingestion, so that Splunk can search your data efficiently and "organize it" time-wise.
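As a minimal props.conf sketch of ingest-time timestamp parsing - the sourcetype name and timestamp layout below are assumptions, so adjust TIME_PREFIX and TIME_FORMAT to match your actual events:

[my:custom:sourcetype]
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 30

With that in place, _time is set from the event's own timestamp at index time, and time-range searches, bin, and timechart all behave as expected.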
Splunk Enterprise Security is the official product for handling threat intelligence feeds and other security functions. Enterprise Security can be run on-prem as it is a Splunk app (albeit a large one). Unless you have a compelling reason not to use Enterprise Security, it is the best way to go. If Enterprise Security is not an option, then you could build your own threat intelligence feed functionality into Splunk Enterprise, but this would take a lot of work. You could pull the threat data into a KV store, then use searches to perform lookups against that KV store, though these searches can become quite complicated when you are matching certain types of intel (e.g. IP, domain) against the various other fields that can contain matching values.
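As a rough sketch of that KV-store approach - the index, lookup definition, and field names below are all made up, so substitute your own:

index=proxy sourcetype=web:traffic
| lookup threat_intel_lookup ioc AS dest_ip OUTPUT threat_description
| where isnotnull(threat_description)

This assumes a lookup definition named threat_intel_lookup backed by a KV store collection, with an ioc field holding the indicator and a threat_description field describing it. Matching other intel types (domains, hashes) against other event fields would each need their own lookup pass, which is where the complexity creeps in.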
OK. First things first. As @bowesmana already pointed out - you're using dedup. This is a very tricky command. It leaves you with just the first event having a unique value of a given field (or set of fields). You lose all information from the other fields if they differ across events. A simple run-anywhere example:

| makeresults count=10
| eval constant="b"
| eval random=random()
| eventstats values(random) as all_values
| dedup constant

Here we explicitly created a summarized field called all_values so that we can see at the end which events were generated, but once the dedup command runs we're left with just one result holding just one value in the random field. That's one thing that definitely affects your results.

Another thing is that multivalued fields can be a bit unintuitive sometimes. Another run-anywhere example:

| makeresults count=10
| eval a=split("a,b,c,d",",")
| stats count count(a)

Since each of the generated result rows has a multivalued field a with four separate values, count, which returns just the number of results, will return 10, but count(a), which counts the values of the a field, will return 40 because there are 40 separate values in that field.
We didn't end up going this route since there are fairly long stretches of time where the check would be running unnecessarily, and it wouldn't have the immediate effect that is necessary for incident response (our main use case for this question). We did, however, keep another piece of @phanTom wisdom in mind: "There are many ways to do things in SOAR, just depends how janky you want to get!" We ended up creating a new subplaybook to go at the beginning of those playbooks likely to be affected by the missing fields:

- Check if an artifact with the required fields exists.
- If it does, continue with the rest of the parent as normal.
- If not, do the following:
  - Use the Phantom app to add an artifact with the required fields.
  - Create a custom code block which uses multiple API operations to:
    - Get the subplaybook's own ID. This is necessary since that number changes any time you edit the PB.
    - Get the audit data from the container.
    - Pull the JSON of the currently running subplaybook to get its parent's name and ID:
      runs_resp_json['misc']['parent_playbook_run']['parent_playbook_name']
      runs_resp_json['misc']['parent_playbook_run']['parent_playbook_run_id']
    - Use parent_playbook_name to call a new instance of the parent playbook. SOAR treats a PB called via API as independent, while those called via block are children. This step is needed so the parent playbook interacts with the newly created artifact normally (we ran into problems referencing an artifact created within the same PB run).
    - Use parent_playbook_run_id to cancel the original run, since it's likely to hit the problem mentioned above. This must come at the very end, since cancelling the parent cancels any children, including the playbook running this code.
According to the Details page for this app on Splunkbase (https://splunkbase.splunk.com/app/1352), the app reads from the sourcetype cisco:ios. Make sure that when you configure the input for the labdata file, you set the sourcetype to cisco:ios. Once this is done, the dashboards should work.
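For example, a minimal inputs.conf monitor stanza - the file path and index below are assumptions, so point it at your actual labdata file:

[monitor:///opt/lab/labdata.log]
sourcetype = cisco:ios
index = main
disabled = false

The same sourcetype can also be set in the UI when you add the data input.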
That sounds frustrating. Sometimes it takes time to receive a reply email. I've had these response emails arrive the same day; other times it takes a few days.
I'm looking to turn off the INFO messages in the server.log file for my on-prem controller. Any pointers to the file that lets me set the different logging levels would be very much appreciated.
Hi @KendallW @bowesmana , Can I share the raw data that goes with the search? Please let me know.
In order to bin the event time, you need to keep it as a number (after parsing with strptime). You can format it as a string later, or use fieldformat for display purposes:

index=test1 sourcetype=test2
| eval Event_Time=strptime(SUBMIT_TIME,"%Y%m%d%H%M%S")
| table Event_Time
``` This next line is redundant since you only have Event_Time to the nearest second anyway ```
| bin Event_Time span=1s
| sort 0 Event_Time
| fieldformat Event_Time=strftime(Event_Time, "%m/%d/%y %H:%M:%S")
Ideally you'd be able to chunk the JSON log event into smaller subunits, but this depends on what your JSON log event looks like. If your JSON log events are over 10k characters long, they may be getting truncated. If this is the case, you can override the truncation by putting the following setting in a props.conf file on the indexing machines:

[<yoursourcetype>]
TRUNCATE = <some number above the size of your JSON logs, or 0 for no truncation>

If your broken JSON logs in Splunk are less than 10k characters long, then it could be that Splunk is splitting the logs part-way through the JSON object, so you would need to set LINE_BREAKER so that it properly splits on whole JSON objects.
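As a hedged sketch of that second case, assuming pretty-printed JSON objects written back-to-back so that each event ends with } and the next begins with { - your data may need a different break pattern:

[<yoursourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s+)\{
TRUNCATE = 0

The first capture group (the whitespace between objects) is discarded as the event boundary, so the previous event keeps its closing } and the next event keeps its opening {.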
Can someone assist with this request, please? Thank you.
Since you already have applicationName=" as your prefix, this line

index=mulesoft environment=$env$ applicationName=$BankApp$ InterfaceName=$interface$

will expand to

index=mulesoft environment=$env$ applicationName=applicationName="*" InterfaceName=InterfaceName="*"

Either remove applicationName= from your prefix or from your search:

index=mulesoft environment=$env$ $BankApp$ $interface$
Hi, I'm currently ingesting CSV files into Splunk. One of the fields records the actual event timestamp in the format YYYYmmddHHMMSS (e.g. 20240418142025). I need to format this field's value in a way that Splunk will understand the data (e.g. date, hour, minutes, seconds, etc.). Once this formatting is complete, I need to sort these timestamps/events for each second (e.g. bucket span=1s Event_Time). Note that here Event_Time is the formatted data from the original event timestamp field. So far, I've tried this:

index=test1 sourcetype=test2
| eval Event_Time=strftime(strptime(SUBMIT_TIME,"%Y%m%d%H%M%S"), "%m/%d/%y %H:%M:%S")
| table Event_Time

The above command gives me decent output such as 04/18/24 14:20:25. But when I try to group values of Event_Time using "bucket span=1s Event_Time", it does not do anything. Note that "bucket span=1s _time" works, as I'm using Splunk's default time field. I'd appreciate any help to make this formatting work for post-processing Event_Time. Thank you in advance.
I am struggling to find a post that answers my question because the naming for Splunk Enterprise and Enterprise Security is so similar, and I am only seeing results for ES. I want to find a way to add threat intelligence feeds into my Splunk Enterprise environment so my organization can eventually move off of the other SIEM we have been using in tandem with Splunk. Is this possible with Splunk Enterprise? I know ES has the capability, but we are strictly on-prem at the moment and I do not see us moving to it anytime soon. Any suggestions? Has anyone set these up on-prem?
@richgalloway : Sorry, I did not get which rule you are mentioning. Could you please be more clear on this?

434531263412:us-west-2:lambda_functions -> lambda_functions
434531263412:us-west-2:nat_gateways -> gateways
434531263412:us-west-2:application_load_balancers -> load_balancers

Yes, this is the requirement. In the above, the left-hand values come from the source field, and the right-hand values are the service names I want to extract from that field value.
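For what it's worth, the "everything after the last colon" rule alone is a one-line rex - a sketch assuming the field is literally named source:

| rex field=source ":(?<service_name>[^:]+)$"

This captures lambda_functions from 434531263412:us-west-2:lambda_functions, but it would return nat_gateways rather than gateways, so the underscore cases would still need a separate rule or lookup - which is exactly the inconsistency pointed out above.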
Any luck with support? I tried the outputs.conf solution in this thread, but it doesn't seem to have worked. Pre-upgrade from 9.0.x to 9.2.1 I had 300-ish clients in my DS; right now only 14 are showing up.

Thanks,
Dave
OK. Time to dig into the gory details of Splunk licensing. When you have an enforcing license (either a trial, a dev license, or a "full" license not big enough to be non-enforcing), each day you exceed your daily ingestion allowance generates a warning. If you exceed a given number of warnings during a given time period (with a trial version it's 5 warnings in a 30-day rolling window; with a "full" Splunk Enterprise license it's 45 warnings in 60 days), your environment will go into "violation mode". Most importantly, it will stop allowing you to search any data other than the internal indexes. And the tricky part is that even if you add a new/bigger/whatever license at this point, it will not automatically "unlock" your environment. You need to either wait for the violations to clear (for some license types) or request a special unlock license from the Splunk sales team. So, tl;dr: if you let your Splunk run out of license, it's not as easy as "I add my freshly bought license" and it starts working again.
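As an aside, you can watch how close you are to your daily allowance with a search over the internal license usage log - a standard sketch, to be run on the license manager:

index=_internal source=*license_usage.log* type=RolloverSummary
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) as daily_GB

RolloverSummary events record each day's total indexed bytes in the b field, so daily_GB can be compared against the licensed quota before warnings start accumulating.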
Yes, but it's still showing the same error:

Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the left hand side: applicationName=APPLICATION_NAME.

This is the query which I am using:

index=mulesoft environment=$env$ applicationName=$BankApp$ InterfaceName=$interface$ (priority="ERROR" OR priority="WARN")
| stats values(*) as * by correlationId
| rename content.InterfaceName as InterfaceName content.FileList{} as FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR",priority="WARN","WARN",priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| where FileList!=" "