All Posts

Hi, I have a requirement to perform an end-to-end search for troubleshooting in a dashboard. I am using multiple tokens inside the dashboard. Some tokens have a condition to be set or unset depending upon null values. However, if any of the tokens are not null, then I should concatenate the tokens and pass the combined token to the other sub-searches. Note: there is always at least one token which is not null. I tried, but the other panels always say 'search is waiting for input'. Below is a sample snippet from the XML dashboard.

<search><query>index=foo</query></search>
<drilldown>
  <eval "combined">$token1$. .$token2$. .$token3$. .$token4$. $token5$</eval>
  <set "combined_token">$combined$</set>
</drilldown>
<panel>
  <search><query>index=abc $combined_token$</query></search>
</panel>
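One possible direction, as an untested sketch using the token names from the snippet above: in Simple XML, `<eval>` and `<set>` take a `token` attribute rather than a bare quoted name, and a panel shows "search is waiting for input" as long as any token it references is unset, so giving every token a default value up front avoids the stall.

```xml
<!-- sketch only: give each token a default so the concatenation and the
     downstream panel search never wait on an unset token -->
<init>
  <set token="token1"></set>
  <set token="token2"></set>
</init>

<drilldown>
  <!-- |s quotes each token value as a string inside the eval expression -->
  <eval token="combined">$token1|s$ . " " . $token2|s$</eval>
  <set token="combined_token">$combined$</set>
</drilldown>
```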
It's like this:

<149>Jul 23 18:54:24 fireeye.mps.test cef[5159]: CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host cs1Label=Host Agent Cert Hash cs1=fwvqcmXUHVcbm4AFK01cim dst=192.168.1.172 dmac=00-00-5e-00-53-00 dhost=test-host1 dntdom=test deviceCustomDate1Label=Agent Last Audit deviceCustomDate1=Jul 23 2019 16:54:22 UTC cs2Label=FireEye Agent Version cs2=29.7.0 cs5Label=Target GMT Offset cs5=+PT2H cs6Label=Target OS cs6=Windows 10 Pro 17134 externalId=17688554 start=Jul 23 2019 16:53:18 UTC categoryOutcome=/Success categorySignificance=/Compromise categoryBehavior=/Found cs7Label=Resolution cs7=ALERT cs8Label=Alert Types cs8=exc act=Detection IOC Hit msg=Host test-host1 IOC compromise alert categoryTupleDescription=A Detection IOC found a compromise indication. cs4Label=IOC Name cs4=SVCHOST SUSPICIOUS PARENT PROCESS

This is the raw data, but when I try to expand the details of the event I see it's truncated. I will provide you with the config I did inside the HF this morning. Thx
My custom default.xml isn't displayed at all in 9.3.0. I know it's being read, because I changed the default view, and that's the view that loads when the app loads. ???

EDIT: I had named my test app localized_menu, and for reasons unknown to me right now, this broke menu customization. Variations on this app name worked fine:

lauz_localized_menu
leaz_localized_menu
localzzz
localized_nav
localized_xxx
localizedxxx
mlocalized_menu

This appears to be a bug specific to apps named localized_menu or similar enough permutations, but I don't have a personal support account. For now, though, I can test localization in a sanely named localized_nav app.
Do you mean something like this?

index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
    [ search index=poc container_name=app horizontalId=orange
      | stats count by trace_id
      | table trace_id ]
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency
Hi @vikas_gopal, I answered a similar question a couple years ago. The answer should still be relevant but may require adjustments for the most recent version of ES. https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-create-a-Dashboard-that-will-show-SLA-for-alerts-received/m-p/594780/highlight/true#M10776
Hi @tscroggins,

Thank you for the script; it's very helpful and saves me a lot of time.
Specify the sourcetype with the oneshot command and have a props.conf with the following parameters set. The TIME_* parameters will take care of your timestamp issue. Make sure to restart the splunkd service after adding the props.conf.

[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
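As a filled-in sketch of those settings, assuming single-line events that begin with a timestamp like "2024-01-01 12:00:00.000"; the sourcetype name, regex, and values here are illustrative and must be adapted to the actual event format:

```
# props.conf -- illustrative values only
[my_custom_sourcetype]
SHOULD_LINEMERGE = false
# break before each line that starts with a date, so SHOULD_LINEMERGE can stay false
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
TRUNCATE = 10000
NO_BINARY_CHECK = true
```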
We need to remember that a Universal Forwarder does not do index-time parsing; on the other hand, a Splunk Enterprise server such as a Heavy Forwarder, Search Head, Deployment Server, Deployer, etc. does the index-time parsing. Indexers expect that data arriving via another Splunk Enterprise server is already cooked.
Correct. We have to remember that after changing the configs via a CLI editor, we need to restart the splunkd service for the configs to take effect.
We need to keep in mind that KV_MODE applies to search time only, and field extractions are best done at search time. Therefore, if you have the following parameters set at index time you should be good.

props.conf

[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
Thanks for the quick response. "Link" means to combine the trace_ids of the first query and feed them into the second query, e.g. take the trace ids output from the first query and add them to the second query for the P90 search latency total. The first query returns trace_ids; the outputs look like this:

2024-... 15:23:58.961 INFO c.....impl....r#58 - Response from a....: ... [service.name=<service-name>=qa,trace_id=2b......,span_id=cs.....,trace_flags=01]

P90 Latency query:

index=<> container-name=<> Exec... Search Query Service
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency

If I want to combine the output of query 1 via trace ids, how can I do that so that query 2 returns the latency value?
You may also need to perform other sanity checks on username and password before passing them to encodeURIComponent. Check MDN etc. for examples.
Hi @yallami, At a glance, do username and password contain special or unsafe characters? You may need e.g.:   { username: encodeURIComponent(username), password: encodeURIComponent(password), output_mode: "json" }  
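For example, a runnable sketch with made-up credentials showing why the encoding matters; the field names mirror the snippet above and are not tied to any particular API:

```javascript
// Hypothetical credentials containing characters that are unsafe in a
// form-encoded request body.
const username = "user@example.com";
const password = "p&ss=word#1";

// Without encoding, '&', '=', and '#' would be parsed as body/URL syntax
// instead of being treated as part of the credential values.
const params = {
  username: encodeURIComponent(username),
  password: encodeURIComponent(password),
  output_mode: "json",
};

const body = Object.entries(params)
  .map(([k, v]) => `${k}=${v}`)
  .join("&");

console.log(body);
// username=user%40example.com&password=p%26ss%3Dword%231&output_mode=json
```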
Hi @bowesmana, @ITWhisperer,

OK, this method works fine; I'll explain what I did. First I created a multivalue field from the sex and 'S_N mm' fields:

| eval value=mvappend(sex,'S_N mm')

After this I created the condition directly in the dashboard's XML code:

<format type="color" field="value">
  <colorPalette type="expression">case(mvindex(value, 1) &gt;"79" AND mvindex(value, 0) == "male","#00FF00",mvindex(value, 1) &gt;"74" AND mvindex(value, 0) == "female","#00FF00")</colorPalette>
</format>
hi @vr2312  Thank you for your response; it was completely correct.  
As @bowesmana said (and so do the articles he referenced, and many others on this subject), your calculation to determine the colour should be done in SPL, so try modifying your search accordingly.
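A sketch of what that could look like, using the sex and 'S_N mm' field names from this thread; the result field name and the fallback colour are illustrative, and comparing against numbers rather than quoted strings avoids a lexicographic string comparison:

```
| eval colour=case(sex=="male" AND 'S_N mm' > 79, "#00FF00",
                   sex=="female" AND 'S_N mm' > 74, "#00FF00",
                   true(), "#FF0000")
```

The table's colour formatting can then key off the colour field directly instead of evaluating the whole expression inside the XML.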
What does "link" mean in this context? The second query doesn't return any trace ids. Please clarify what you are trying to do (in non-SPL terms), provide some sample events, and a representation of your expected output.
We don't know your data, and we don't know your config. But my guess would be that your data is not properly onboarded: you don't have a proper configuration for this type of source, so Splunk tries by default to extract key-value pairs with its own built-in mechanics, which ends up as you can see. FireEye can be painful to set up. Try to avoid CEF altogether; it's not very nice to parse.
As @ITWhisperer said, show us your raw events and what you have tried so far, because maybe your idea was OK but applied in the wrong place.
Hi @splunkguy,

good for you, see you next time!

Ciao and happy splunking

Giuseppe

P.S.: Karma Points are appreciated by all the contributors