
All Posts

@ITWhisperer Thank you so much, it really saved me a lot of time.
| eval Management=if(Applications="OCC", "Out", "In")
Hi @selvam_sekar, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma points are appreciated by all the contributors.
I would like to add a column called Management to my table. The Management value is not part of the event data; it is something I would like to assign based on the value of Applications. Any help would be appreciated.

Management   Applications
In           IIT
In           ALP
In           MAL
In           HST
Out          OCC
In           ALY
In           GSS
In           HHS
In           ISD
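For reference, besides the eval/if answer above, the same mapping could also live in a lookup, which scales better if the list grows. A minimal sketch, assuming a CSV lookup file named app_management.csv with columns Applications and Management (both the file name and the base search are placeholders):

your_base_search
| lookup app_management.csv Applications OUTPUT Management
| table Applications Management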
Hi @Bisho-Fouad  Here's an example search to solve your question:

host=<your host> ``` and whatever else you need to filter your data ```
| eval bytes = length(_raw) ``` generally 1 character = 1 byte ```
| stats sum(bytes) AS bytes BY source ``` this gives the size of each log, assuming the source is the name of the log file ```
| eval kilobytes = bytes/1024
| eventstats sum(kilobytes) AS total_kb

Hope that helps
Thanks @yuanliu. I explained why I couldn't use path directly: it contains actual parameter values. For example, for the route /orders/{orderID}, the path could be: /orders/123456, /orders/213123, /orders/435534. I want to analyze, for example, the count of failed requests, or percentiles of call duration, on this particular API route /orders/{orderID}. Of course I could modify my service code to print the route pattern in the log, but that is another matter; I would need to deploy new code to the production environment.
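One way to do this without touching the service code is to normalize the path at search time. A minimal sketch using replace(), assuming the fields are named path, status and duration (adjust the field names and the failure condition to your data):

| eval route=replace(path, "^/orders/\d+$", "/orders/{orderID}")
| stats count AS total count(eval(status>=500)) AS failed perc95(duration) AS p95_duration BY route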
Hi @PickleRick  Not sure if I explained my requirements the right way. I would like to expand the multiple values present in each row into separate rows containing only the "Frustated"-related details, like the expected output below:

Expected output
URL          Duration  Type  Status
www.cde.com  88647     Load  Frustated
www.fge.com  6265      Load  Frustated
www.abc.com  500       Load  Frustated
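A common pattern for this is mvzip + mvexpand, which keeps the parallel multivalue fields aligned while expanding. A rough sketch, assuming URL, Duration, Type and Status are parallel multivalue fields in each row (the "##" delimiter is arbitrary):

| eval zipped=mvzip(mvzip(mvzip(URL, Duration, "##"), Type, "##"), Status, "##")
| mvexpand zipped
| eval zipped=split(zipped, "##")
| eval URL=mvindex(zipped, 0), Duration=mvindex(zipped, 1), Type=mvindex(zipped, 2), Status=mvindex(zipped, 3)
| where Status="Frustated"
| table URL Duration Type Status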
Hey there, I kindly need support on how to determine the received log SIZE for a specific host. Preferably done through the GUI. Hint: working in a distributed environment with its own License Master instance. Thanks in advance.
Can anyone help on this, please?
Can anyone help on this?
What do you mean by "console width limit"? If an event is split into two separate ones, it's either because it's split before it reaches Splunk or it hits the LINE_BREAKER for the given sourcetype. If the event were too long, it'd simply get truncated, not split. And no, you can't join two separate events in Splunk - each event is processed as a separate entity (in fact, in a distributed environment each of those events could end up on a different indexer).
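For reference, line breaking and truncation are controlled per sourcetype in props.conf on the first full Splunk instance that parses the data. A minimal sketch (the sourcetype name and values are placeholders to adapt):

[your:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000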
When you refresh, can you see the token in the URL as form.Slot1_TailNum=true? When you interact with the dashboard, it preserves the token values in the URL so they keep their values when the page is simply refreshed. You can reload the page completely by removing all the token values after the question mark (?) in your dashboard URL.
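As a concrete (hypothetical) example, a refreshed URL carrying the token might look like the first line below; stripping everything from the question mark onward reloads the dashboard with its defaults:

https://mysplunk:8000/en-US/app/search/my_dashboard?form.Slot1_TailNum=true
https://mysplunk:8000/en-US/app/search/my_dashboard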
The requirements are inconsistent.  Sometimes everything after the second : is the service name; other times the service name follows the first _.  How is a computer to decide which method to use?
Adding to @ITWhisperer's answer - ideally you should have your main time field for a given event parsed at ingestion into the _time field, so that Splunk can effectively search your data and "organize it" timewise.
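Timestamp parsing is also configured per sourcetype in props.conf. A rough sketch, assuming events carry an ISO-8601 timestamp after a timestamp= prefix (adjust the prefix and format string to your actual events):

[your:sourcetype]
TIME_PREFIX = timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 32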
Splunk Enterprise Security is the official product to handle threat intelligence feeds and other security functions. Enterprise Security can be run on-prem as it is a Splunk app (albeit a large one). Unless you have a compelling reason not to use Enterprise Security, it is the best way to go. If Enterprise Security is not an option, then you could build your own threat intelligence feed functionality into Splunk Enterprise, but this would take a lot of work. You could pull the threat data into a KV store, then use searches to perform lookups against that KV store. Though these searches can become quite complicated when you are matching certain types of intel (e.g. IP, domain) against various other fields that can contain matching values. 
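A rough sketch of the lookup side of that approach, assuming a KV-store-backed lookup named threat_intel with an ioc field (both names, and the indexes searched, are made up for illustration):

index=proxy OR index=firewall
| lookup threat_intel ioc AS dest_ip OUTPUT threat_source
| where isnotnull(threat_source)
| table _time src_ip dest_ip threat_source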
OK. First things first. As @bowesmana already pointed out, you're using dedup. This is a very tricky command. It leaves you with just the first event having a unique value of a given field (or set of fields). You lose all information about data in all other fields if they differ across multiple events. A simple run-anywhere example:

| makeresults count=10
| eval constant="b"
| eval random=random()
| eventstats values(random) as all_values
| dedup constant

Here we explicitly created a summarized field called all_values so that we can see at the end which events were generated, but once the dedup command runs we're left with just one result holding just one value in the random field. That's one thing that definitely affects your results.

Another thing is that multivalue fields can be a bit unintuitive sometimes. Another run-anywhere example:

| makeresults count=10
| eval a=split("a,b,c,d",",")
| stats count count(a)

Since each of the generated result rows has a multivalue field a with four separate values, count (which returns just the count of results) will return 10, but count(a) (which counts values in the a field) will return 40, because there are 40 separate values across that field.
We didn't end up going this route, since there are fairly long stretches of time where the check would be running unnecessarily, and it wouldn't have the immediate effect that is necessary for incident response (our main use case for this question). We did, however, keep another piece of @phanTom wisdom in mind: "There are many ways to do things in SOAR, just depends how janky you want to get!"

We ended up creating a new subplaybook to go at the beginning of those playbooks likely to be affected by the missing fields (a sketch of the custom code block follows after this list):

- Check if an artifact with the required fields exists.
- If it does, continue with the rest of the parent as normal.
- If not, do the following:
  - Use the Phantom app to add an artifact with the required fields.
  - Create a custom code block which uses multiple API operations to:
    - Get the subplaybook's own ID. This is necessary since that number changes any time you edit the PB.
    - Get the audit data from the container.
    - Pull the JSON of the currently running subplaybook to get its parent's name and ID:
      runs_resp_json['misc']['parent_playbook_run']['parent_playbook_name']
      runs_resp_json['misc']['parent_playbook_run']['parent_playbook_run_id']
    - Use parent_playbook_name to call a new instance of the parent playbook. SOAR treats a PB called via the API as independent, while those called via a block are children. This step is needed so the parent playbook interacts with the newly created artifact normally (we ran into problems referencing an artifact created within the same PB run).
    - Use parent_playbook_run_id to cancel the original run, since it's likely to hit the problem mentioned above. This must come at the very end, since cancelling the parent cancels any children, including the playbook running this code.
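A very rough Python sketch of that custom code block, calling the SOAR REST API directly. The base URL, token, IDs and the exact shape of the misc field are assumptions drawn from the steps above, not a tested implementation:

import json
import requests

BASE = "https://soar.example.com"       # placeholder SOAR instance
HEADERS = {"ph-auth-token": "<token>"}  # placeholder automation-user token
current_run_id = 123                    # in a real block, taken from the running playbook
container_id = 456                      # the container this playbook is acting on

# 1. Look up the current playbook run to find its parent's name and run ID
run = requests.get(f"{BASE}/rest/playbook_run/{current_run_id}",
                   headers=HEADERS, verify=False).json()
misc = run["misc"]
if isinstance(misc, str):  # misc may arrive as a JSON string
    misc = json.loads(misc)
parent_name = misc["parent_playbook_run"]["parent_playbook_name"]
parent_run_id = misc["parent_playbook_run"]["parent_playbook_run_id"]

# 2. Re-launch the parent by name; runs started via the API are independent,
#    so the new run sees the artifact added earlier as a normal artifact
requests.post(f"{BASE}/rest/playbook_run", headers=HEADERS, verify=False,
              data=json.dumps({"container_id": container_id,
                               "playbook_id": parent_name,
                               "scope": "new", "run": True}))

# 3. Cancel the original parent run last, since cancelling it also cancels
#    its children, including the playbook running this code
requests.post(f"{BASE}/rest/playbook_run/{parent_run_id}", headers=HEADERS,
              verify=False, data=json.dumps({"cancel": True}))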
According to the documentation on the Details page of the app's Splunkbase listing (https://splunkbase.splunk.com/app/1352), this app reads from the sourcetype cisco:ios. Make sure that when you configure the input for the labdata file, you set the sourcetype to cisco:ios. Once this is done, the dashboards should work.
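For example, a file monitor input with the sourcetype set explicitly might look like this in inputs.conf (the path to the labdata file is a placeholder):

[monitor:///opt/data/labdata]
sourcetype = cisco:ios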
That sounds frustrating. Sometimes it takes time to receive a reply email. I've had these response emails arrive the same day; other times they take a few days.
I'm looking to turn off the INFO messages in the server.log file for my on-prem controller. Any help finding the file that will allow me to set the different logging levels would be very much appreciated.