All Posts


Do you mean something like this?

index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
    [ search index=poc container_name=app horizontalId=orange
      | stats count by trace_id
      | table trace_id ]
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency
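For context on how this works: the subsearch in brackets runs first, and its trace_id column is expanded into the outer search as a filter, roughly like this (the ids below are hypothetical placeholders):

( trace_id="abc123" OR trace_id="def456" )

So the outer search keeps only events whose trace_id appeared in the horizontalId=orange results, and the rex/stats then compute the P90 latency over just those events.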
Hi @vikas_gopal, I answered a similar question a couple years ago. The answer should still be relevant but may require adjustments for the most recent version of ES. https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-create-a-Dashboard-that-will-show-SLA-for-alerts-received/m-p/594780/highlight/true#M10776
Hi @tscroggins, Thank you for the script, it's very helpful and saves me a lot of time.
Specify the sourcetype in the oneshot command and have a props.conf with the following parameters set. The TIME_* parameters will take care of your timestamp issue. Make sure to restart the splunkd service after adding the props.conf.

[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
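For reference, a minimal filled-in sketch of such a stanza, assuming events that start with a timestamp like 2024-01-31 15:23:58.961 (the values are illustrative, not prescriptive):

[sourcetypename]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
TRUNCATE = 10000
NO_BINARY_CHECK = true

And the oneshot side, with the sourcetype specified explicitly:

$SPLUNK_HOME/bin/splunk add oneshot /path/to/file.log -sourcetype sourcetypename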
We need to remember that a Universal Forwarder does not do index-time parsing; on the other hand, a full Splunk Enterprise instance such as a Heavy Forwarder, Search Head, Deployment Server, Deployer, etc. does the index-time parsing. Indexers expect that data coming via another Splunk Enterprise instance arrives already cooked.
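As a simplified sketch of the pipeline:

UF (forwards uncooked data, no index-time parsing)
  -> HF or other full Splunk Enterprise instance (index-time parsing: line breaking, timestamping)
    -> Indexer (receives cooked data and writes it to the index)

This is why index-time props.conf settings must live on the first full Splunk Enterprise instance the data passes through.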
Correct.  We have to make sure that when changing the configs via a CLI editor, we restart the splunkd service for the configs to take effect.
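For example (assuming a default install path):

$SPLUNK_HOME/bin/splunk restart

and, to verify which settings are actually in effect for a stanza:

$SPLUNK_HOME/bin/splunk btool props list sourcetypename --debug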
We need to keep in mind that KV_MODE applies to search time only, and field extractions are best done at search time. Therefore, if you have the following parameters set at index time, you should be good.

props.conf

[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
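If you then want explicit fields at search time, a minimal sketch would be to disable automatic KV and define the extraction yourself in props.conf on the search head (the field name and regex here are purely illustrative):

[sourcetypename]
KV_MODE = none
EXTRACT-user = user=(?<user>\S+)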
Thanks for the quick response. "Link" means to take the trace_ids from the first query and feed them into the second query. E.g. take the trace ids output from the first query and add them to the second query for the P90 search latency total.

The first query returns trace_ids; the output looks like this:

2024-... 15:23:58.961 INFO c.....impl....r#58 - Response from a....: ... [service.name=<service-name>=qa,trace_id=2b......,span_id=cs.....,trace_flags=01]

P90 Latency query:

index=<> container-name=<> Exec... Search Query Service
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency

If I want to combine the output of query 1 via trace ids, how can I do that so that query 2 returns the latency value?
You may also need to perform other sanity checks on username and password before passing them to encodeURIComponent. Check MDN etc. for examples.
Hi @yallami, At a glance, do username and password contain special or unsafe characters? You may need e.g.:

{
    username: encodeURIComponent(username),
    password: encodeURIComponent(password),
    output_mode: "json"
}
Hi @bowesmana, @ITWhisperer,

OK, this method works fine; I'll explain what I did. First I created a multivalue field from the sex and 'S_N mm' fields:

| eval value=mvappend(sex,'S_N mm')

After this I created the condition directly in the dashboard's XML code:

<format type="color" field="value">
  <colorPalette type="expression">case(mvindex(value, 1) &gt; "79" AND mvindex(value, 0) == "male","#00FF00",mvindex(value, 1) &gt; "74" AND mvindex(value, 0) == "female","#00FF00")</colorPalette>
</format>
Hi @vr2312, Thank you for your response; it was completely correct.
As @bowesmana said (as do the articles he referenced, and many others on this subject), the calculation that determines the colour should be done in SPL, so try modifying your search accordingly.
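For example, a minimal sketch of moving the calculation into SPL (field names taken from the thread above; the fallback colour is an assumption):

| eval colour=case(tonumber('S_N mm') > 79 AND sex=="male", "#00FF00", tonumber('S_N mm') > 74 AND sex=="female", "#00FF00", true(), "#FFFFFF")

The dashboard formatting can then key off the precomputed colour field rather than re-deriving the whole condition inside the colorPalette expression.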
What does "link" mean in this context? The second query doesn't return any trace ids. Please clarify what you are trying to do (in non=SPL terms, provide some sample events, and a representation of y... See more...
What does "link" mean in this context? The second query doesn't return any trace ids. Please clarify what you are trying to do (in non=SPL terms, provide some sample events, and a representation of your expected output.
We don't know your data and we don't know your config, but my guess would be that your data is not properly onboarded: you don't have a proper configuration for this type of source, so Splunk tries by default to extract key-value pairs with its own built-in mechanics, which ends up as you can see. FireEye can be painful to set up. Try to avoid CEF altogether; it's not very nice to parse.
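If automatic KV extraction is the culprit, a minimal sketch of turning it off and extracting the field explicitly might look like this (the sourcetype name and regex are illustrative, not FireEye-specific guidance):

props.conf:

[your:fireeye:sourcetype]
KV_MODE = none
EXTRACT-cs4 = cs4=(?<cs4>.+?)(?=\s+\w+=|$)

This captures everything after cs4= up to the next key= token (or the end of the event), so a value like FIREEYE test is no longer cut at the first space.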
As @ITWhisperer said, show us your raw events and what you have tried so far, because maybe your idea was OK but applied in the wrong place.
Hi @splunkguy, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
1. Are you sure you're using the Webhook Inputs app, or did you just configure a HEC input?
2. Whatever that ngrok is - since you said that Splunk is listening on localhost - is it running on the same machine?
3. Did you verify whether ngrok is actually connecting to your Splunk instance and sending data?
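As a quick sanity check for point 3, you can send a test event straight to HEC and see whether it lands in Splunk (the token and values are placeholders):

curl -k https://localhost:8088/services/collector/event \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "webhook test", "sourcetype": "webhook"}'

If that works but the ngrok-forwarded events don't, the problem is likely between ngrok and HEC (URL path, port, or a missing Authorization header), not in Splunk itself.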
I have installed free Splunk Enterprise on my local system, and it can be accessed via localhost:8000. I have also configured the webhook receiver on this instance to run on port 8088 via the HTTP Event Collector settings. I tried ngrok to expose localhost:8000 and localhost:8088 and used the public URL as a webhook listening server, but Splunk is not receiving any events. I can see my ngrok server being hit with the events, but it seems like it's not able to forward them over to Splunk. What am I doing wrong here? What's the right way to expose my localhost Splunk instance to start receiving these webhook events? Thank you in advance for your help!
Hello Members, I have data coming from a HF and indexed on the indexer, and I can search it. The problem is in the details of the event. For example, an event contains cs4=FIREEYE test, but when I look at the details of this event I see cs4=FIREEYE only; the rest of the string is truncated. Why?