All Posts

Hello members, I'm facing problems parsing event details in Splunk. I have forwarded the events from a HF to the indexers, and they are now searchable, but I'm having issues with field extractions and event details because values are truncated. For example, given this sample event:

CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host

the categoryDeviceType value is truncated in field extraction, so it displays only "Forensic" and the rest of the string is cut off. Can anyone please help with this? My props.conf is:

[trellix]
category = Custom
pulldown_type = 1
TIME_FORMAT = ^<\d+>
EVAL-_time = strftime(_time, "%Y %b %d %H:%M:%S")
TIME_PREFIX = %b %d %H:%M:%S
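Two things may be worth checking here. TRUNCATE is the usual index-time lever when whole events are cut short, and note that TIME_FORMAT takes a strptime format while TIME_PREFIX takes a regex (they appear swapped in the stanza in the question). A minimal sketch, with illustrative values (the limits and line-breaking settings are assumptions, not taken from the question):

```ini
# props.conf on the HF/indexer -- a sketch, values are illustrative
[trellix]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 20000                # raise if whole events are being cut short
TIME_PREFIX = ^<\d+>            # regex: skip the syslog priority
TIME_FORMAT = %b %d %H:%M:%S    # strptime format of the header timestamp
MAX_TIMESTAMP_LOOKAHEAD = 16
```

If only individual field values stop at the first space (e.g. categoryDeviceType showing just "Forensic"), that is a search-time extraction issue rather than truncation, since automatic key-value extraction breaks on spaces.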
That doesn't always work. I can't seem to find a good solution for this type of problem either. I can't convert this timestamp for subtraction purposes, for example (see how the t3 column is empty?):
Is there another option for parsing, such as editing props.conf, since I don't want to add a new app? Is it possible to handle this type of event just by editing props.conf? My props.conf:

[trellix]
category = Custom
pulldown_type = 1
TIME_FORMAT = ^<\d+>
EVAL-_time = strftime(_time, "%Y %b %d %H:%M:%S")
TIME_PREFIX = %b %d %H:%M:%S
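You don't need to package a new app for this; a props.conf placed under system/local is picked up as well. A sketch of where the settings could live (the paths are standard, but the split of settings between tiers is the usual convention, not something stated in the question):

```ini
# On the Heavy Forwarder, where index-time parsing happens:
#   $SPLUNK_HOME/etc/system/local/props.conf
[trellix]
TRUNCATE = 20000
TIME_PREFIX = ^<\d+>
TIME_FORMAT = %b %d %H:%M:%S

# On the search head, for search-time settings such as EVAL-* and KV_MODE:
#   $SPLUNK_HOME/etc/system/local/props.conf
[trellix]
KV_MODE = auto
```

Restart splunkd after editing either file.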
Hi @jmartens, Very sorry for the inconvenience. Engineering is aware of this (reference: SPL-258019). They have come up with a fix that excludes the two internal users, 'nobody' and 'splunk-system', from the warning message. The fix will most likely be included in the next 9.1.x version after 9.1.6 and the next 9.2.x version after 9.2.3, respectively.
Your description is insufficient for others to help. You need to provide some mock data scenarios and the corresponding desired outcomes so we can understand what "multiple" tokens for each column means. If you mean whether clicking different cells in the same row can set different tokens, the answer is no. You can set multiple tokens with one click, but each row can only produce one set of values.
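To illustrate the "multiple tokens from one click" point, a minimal Simple XML sketch (the search and token names are made up for the example):

```xml
<table>
  <search><query>index=_internal | stats count by sourcetype, host</query></search>
  <drilldown>
    <!-- one click sets several tokens, all taken from the single clicked row -->
    <set token="tok_sourcetype">$row.sourcetype$</set>
    <set token="tok_host">$row.host$</set>
    <set token="tok_count">$row.count$</set>
  </drilldown>
</table>
```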
A simple test in a barebones app worked for me. After working through the app name bug (see above), I customized default.xml, and the menu is rendered in both en-US and it-IT locales. My browser locale remains US English, however, so I'm probably not sufficiently emulating your environment.
All tokens in a search must have a non-null value before the search will run. Try setting the null tokens to empty strings ("") before the search.
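One way to set those defaults in Simple XML is an init block at the top of the dashboard, so every token starts as an empty string (the token names here are placeholders):

```xml
<init>
  <!-- empty-string defaults keep dependent searches from waiting for input -->
  <set token="token1"></set>
  <set token="token2"></set>
</init>
```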
Yup, classic FireEye CEF. There was an add-on for FireEye on Splunkbase, but it's already archived (the last version was released 7 years ago, so no wonder): https://splunkbase.splunk.com/app/1904 As far as I remember, it also had some issues with proper parsing. If you want to use CEF, you might try this add-on: https://splunkbase.splunk.com/app/487 but I wouldn't count on it being CIM-compliant.
Hi, I have a requirement to perform an end-to-end search for troubleshooting in a dashboard. I am using multiple tokens inside the dashboard. Some tokens have a condition to be set or unset depending on null values. However, if any of the tokens is not null, I should concatenate the tokens and pass the combined token to the other subsearches. Note: there is always at least one token that is not null. I tried, but the other panels always say 'Search is waiting for input'. Below is a sample snippet from the XML dashboard:

<search><query>index=foo</query></search>
<drilldown>
  <eval "combined">$token1$. .$token2$. .$token3$. .$token4$. $token5$</eval>
  <set "combined_token">$combined$</set>
</drilldown>
<panel>
  <search><query>index=abc $combined_token$</query></search>
</panel>
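For what it's worth, <eval> and <set> take a token attribute (e.g. <eval token="combined">), and since token substitution is plain text, the concatenation can be done with a single <set>. A sketch using the token names from the snippet, assuming each token defaults to an empty string elsewhere so none is left unset:

```xml
<drilldown>
  <!-- assumes token1..token5 default to "" so the dependent search can run -->
  <set token="combined_token">$token1$ $token2$ $token3$ $token4$ $token5$</set>
</drilldown>
```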
It's like this:

<149>Jul 23 18:54:24 fireeye.mps.test cef[5159]: CEF:0|fireeye|HX|4.8.0|IOC Hit Found|IOC Hit Found|10|rt=Jul 23 2019 16:54:24 UTC dvchost=fireeye.mps.test categoryDeviceGroup=/IDS categoryDeviceType=Forensic Investigation categoryObject=/Host cs1Label=Host Agent Cert Hash cs1=fwvqcmXUHVcbm4AFK01cim dst=192.168.1.172 dmac=00-00-5e-00-53-00 dhost=test-host1 dntdom=test deviceCustomDate1Label=Agent Last Audit deviceCustomDate1=Jul 23 2019 16:54:22 UTC cs2Label=FireEye Agent Version cs2=29.7.0 cs5Label=Target GMT Offset cs5=+PT2H cs6Label=Target OS cs6=Windows 10 Pro 17134 externalId=17688554 start=Jul 23 2019 16:53:18 UTC categoryOutcome=/Success categorySignificance=/Compromise categoryBehavior=/Found cs7Label=Resolution cs7=ALERT cs8Label=Alert Types cs8=exc act=Detection IOC Hit msg=Host test-host1 IOC compromise alert categoryTupleDescription=A Detection IOC found a compromise indication. cs4Label=IOC Name cs4=SVCHOST SUSPICIOUS PARENT PROCESS

This is the raw data, but when I try to expand the details of the event I see it's truncated. I will provide you with the config I did inside the HF this morning. Thx
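Since automatic key-value extraction stops at the first space, multi-word values in this event (e.g. "Forensic Investigation") need an explicit extraction. A search-time sketch for one of the keys in the sample; the index name is a placeholder, and the lookahead stops at the next key= pair:

```
index=your_index sourcetype=trellix
| rex "categoryDeviceType=(?<categoryDeviceType>.+?)(?=\s+\w+=|\s*$)"
| table categoryDeviceType
```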
My custom default.xml isn't displayed at all in 9.3.0. I know it's being read, because I changed the default view, and that's the view that loads when the app loads. ???

EDIT: I had named my test app localized_menu, and for reasons unknown to me right now, this broke menu customization. Variations on this app name worked fine:

lauz_localized_menu
leaz_localized_menu
localzzz
localized_nav
localized_xxx
localizedxxx
mlocalized_menu

This appears to be a bug specific to apps named localized_menu or similar enough permutations, but I don't have a personal support account. For now, though, I can test localization in a sanely named localized_nav app.
Do you mean something like this?

index=poc container_name=app ExecutionTimeAspect Elastic Vertical Search Query Service
    [ search index=poc container_name=app horizontalId=orange
      | stats count by trace_id
      | table trace_id ]
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency
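A variant of the same idea, sketched under the assumption that the index and field names from the question are correct: the return command builds the OR'd trace_id filter explicitly (the 100 is an arbitrary cap on returned rows):

```
index=poc container_name=app ExecutionTimeAspect
    [ search index=poc container_name=app horizontalId=orange
      | stats count by trace_id
      | return 100 trace_id ]
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency
```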
Hi @vikas_gopal, I answered a similar question a couple years ago. The answer should still be relevant but may require adjustments for the most recent version of ES. https://community.splunk.com/t5/Splunk-Enterprise-Security/How-to-create-a-Dashboard-that-will-show-SLA-for-alerts-received/m-p/594780/highlight/true#M10776
Hi @tscroggins, Thank you for the script; it's very helpful and saved me a lot of time.
Specify the sourcetype in the oneshot command and have a props.conf with the following parameters set. The TIME parameters will take care of your timestamp issue. Make sure to restart the splunkd service after adding the props.conf.

[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
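For concreteness, a filled-in sketch for a hypothetical one-event-per-line source whose events start with a YYYY-mm-dd HH:MM:SS timestamp (every value here is illustrative, not taken from the thread):

```ini
[sourcetypename]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TRUNCATE = 10000
NO_BINARY_CHECK = true
```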
We need to remember that the Universal Forwarder does not do index-time parsing; on the other hand, a Splunk Enterprise server such as a Heavy Forwarder, Search Head, Deployment Server, Deployer, etc. does do index-time parsing. Indexers expect that data arriving via another Splunk Enterprise server is already cooked.
Correct. We have to remember that when changing configs via a CLI editor, we need to restart the splunkd service for the configs to take effect.
We need to keep in mind that KV_MODE applies to search time only, and field extractions are best done at search time. Therefore, at index time, if you have the following parameters set you should be good.

props.conf
[sourcetypename]
LINE_BREAKER
TIME_PREFIX
MAX_TIMESTAMP_LOOKAHEAD
TIME_FORMAT
TRUNCATE
SHOULD_LINEMERGE = false
# LINE_BREAKER should be properly set so you can keep SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
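To make the index-time/search-time split concrete, a sketch of the search-time side that would pair with the index-time stanza above. The KV_MODE value and the EXTRACT regex are illustrative assumptions, not from the thread:

```ini
# props.conf on the search head (search time)
[sourcetypename]
KV_MODE = auto
# explicit extraction for a multi-word value (hypothetical example);
# the lookahead stops at the next key= pair
EXTRACT-devtype = categoryDeviceType=(?<categoryDeviceType>.+?)(?=\s+\w+=|\s*$)
```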
Thanks for the quick response. Link means combining the trace_ids of the first query and feeding them into the second query, e.g., taking the trace ids output by the first query and adding them to the second query for the P90 search latency total. The first query returns trace_ids; the output looks like this:

2024-... 15:23:58.961 INFO c.....impl....r#58 - Response from a....: ... [service.name=<service-name>=qa,trace_id=2b......,span_id=cs.....,trace_flags=01]

P90 latency query:

index=<> container-name=<> Exec... Search Query Service
| rex field=_raw "execution time is[ ]+(?<latency>\d+)[ ]+ms"
| stats p90(latency) as Latency

If I want to combine the output of query 1 via trace ids, how can I do that so that query 2 returns the latency value?
You may also need to perform other sanity checks on the username and password before passing them to encodeURIComponent. Check MDN etc. for examples.
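A sketch of the kind of sanity checks meant here, in JavaScript since encodeURIComponent is a JS global; the function name and validation policy are made up for the example:

```javascript
// Hypothetical helper: validate inputs before URL-encoding them.
function buildCredentialQuery(username, password) {
  // Reject non-string inputs up front (illustrative policy, not a standard).
  if (typeof username !== "string" || typeof password !== "string") {
    throw new TypeError("username and password must be strings");
  }
  // Reject empty values so we never emit username=&password=.
  if (username.length === 0 || password.length === 0) {
    throw new RangeError("username and password must be non-empty");
  }
  // encodeURIComponent escapes characters that would break the query string,
  // e.g. spaces, '&', '=' and '#'.
  return "username=" + encodeURIComponent(username) +
         "&password=" + encodeURIComponent(password);
}
```

For example, buildCredentialQuery("a b", "p&q") encodes the space and ampersand so the resulting pair parses back cleanly.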