All Posts


@efheem Thanks for posting this! Did this setup "just work" for you? With your configs, I see the files downloading in the logs, but it never finishes the first run, stating "The downloaded taxii intelligence has a size that exceeds the configured max_size and will be discarded." I've tried increasing the max to 500 MB in the lab, but I still encounter the same problem.
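(Side note for readers hitting the same error: I have not verified where this max_size knob lives, but assuming it sits on the threat intel input stanza and is expressed in bytes — both guesses to check against the spec file for your ES version — raising it would look roughly like:

[threatlist://my_taxii_feed]
max_size = 524288000

If the unit turns out to be megabytes instead, 500 would be the value to try.)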
Hello @isoutamo Thanks a lot for your feedback. I need to study httpout because the Splunk nodes communicate through the customer's network, with firewalls, so it's easier to open proxy-compatible traffic than, for example, tcp/9997. So, is there any possibility to use the indexer load balancer, ack, and maxQueueSize functions with httpout? I saw that httpout is a relatively new functionality, since 8.x; maybe these features are on the roadmap? Thanks Jonas
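For context on httpout itself, a minimal outputs.conf sketch could look like the lines below. The [httpout] stanza with httpEventCollectorToken and uri is the documented shape; the token and host here are placeholders, and whether acknowledgement, load balancing, and queue-size tuning match what tcpout offers is exactly what to verify in the docs for your version:

[httpout]
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
uri = https://hec-proxy.example.com:8088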
It ain't that bad https://www.aplura.com/assets/pdf/hec_pipelines.pdf
Hello,

Transaction    Last 5min Vol    Last 10min Vol    Last 15min Vol    Timeouts    Errors
A
B
C
My three cents on the general approach to such tasks. Since "last 15 minutes" and "last 10 minutes" can be expressed in terms of 5-minute periods, you can simply either use a timechart with 5-minute bins or manually bin time into 5-minute buckets and do stats over the 5-minute periods. And then, when you have those 5-minute stats, you can aggregate the last two or last three of them to get the summarized "last 10 minutes" and "last 15 minutes" values. It's often useful to check whether a problem containing several "parallel" computations can be transformed into a single, maybe a bit more detailed, calculation plus some form of aggregation after that; see the sketch below.
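To illustrate the idea, a rough sketch; the index, sourcetype, and status field names are made up for the example, not taken from the thread:

index=myindex sourcetype=transactions earliest=-15m
| bin _time span=5m
| stats count as vol_5m count(eval(status=="timeout")) as timeouts count(eval(status=="error")) as errors by _time transaction
| streamstats window=2 sum(vol_5m) as vol_10m by transaction
| streamstats window=3 sum(vol_5m) as vol_15m by transaction

The last row per transaction then carries the current 5-, 10-, and 15-minute volumes side by side.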
Can you give some sample events and how you would like to present results?
Thank you. Is there a way to combine this in stats instead of chart, as I need to extract a few other fields as part of the stats?
Small tweak to the regex (removing two space characters from the second-to-last line):

| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"value1\",\"key2\":\"value2\"}}"
| spath
| spath input=_raw path=details output=hold
| rex field=hold "\"(?<kvs>[^\"]*\"*[^\"]*\"*[^\"]*\"*)\"" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\":\"(?<value>[^\"]*)" max_match=0
| table orderNum key value orderLocation

If the value can be an escaped JSON string, then you indeed need to be craftier with the regex. E.g.:

| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"{\\\"jsonvalue\\\":\\\"jsonvaluevalue\\\",\\\"jsonvalue2\\\":\\\"jsonvaluevalue2\\\"}\",\"key2\":\"value2\"}}"
| spath
| spath input=_raw path=details output=hold
| rex field=hold "(?<kvs>\"[^\"]*\":\"{?[^}]*}?\")" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\":\"(?<value>{?[^{}]*}?)\"" max_match=0
| table orderNum key value orderLocation
I have replaced match with like, but it searches from one host only. As I mentioned, in QA I have 3 hosts and in Prod I have 3 hosts. I used dedup label to avoid duplicates in the drop-down list, but the search result contains events from only one host, not from all 3 hosts, when I select QA or PROD. Please advise.

<input type="dropdown" token="envtoken">
  <label>env</label>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <prefix>(host=</prefix>
  <suffix>)</suffix>
  <search>
    <query>index=aaa (source="/var/log/testd.log") | stats count by host | eval label=case(like(host, "%tv00.test"), "Test", like(host, "%qv00.qa"), "QA", like(host, "%pv00.prod"), "Prod") | dedup label</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>
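One thing worth checking (a sketch, not a confirmed fix): dedup label keeps only one host per label, while fieldForValue is host, so the token can only ever carry a single host's value. The populating search could instead collect all hosts per label and build the OR expression itself, with the input's prefix/suffix reduced to plain parentheses:

index=aaa (source="/var/log/testd.log")
| stats count by host
| eval label=case(like(host, "%tv00.test"), "Test", like(host, "%qv00.qa"), "QA", like(host, "%pv00.prod"), "Prod")
| stats values(host) as hosts by label
| eval host="host=\"".mvjoin(hosts,"\" OR host=\"")."\""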
How did you disable the workload management? Via web, CLI, or REST? Which host system is producing these messages (e.g. various indexers? a search head?)? Can you then go to those hosts and disable workload management in the CLI?

Ref: https://docs.splunk.com/Documentation/Splunk/9.2.1/Workloads/Enableworkloadmanagement
Hello, I am trying to troubleshoot sendemail.py since, after an upgrade to Red Hat 9, our Splunk stopped sending emails. I understand the command to use the Splunk Python interpreter in the CLI is:

splunk cmd python /opt/splunk/etc/apps/search/bin/sendemail.py

However, how do I combine the above with the _internal search results below so I can see what the interpreter would provide as feedback (such as errors)?

_raw results of a sendemail:

subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", encoded_subject="old: : $: server-prod - AlertLog_Check - 4 Log(s) ", results_link="https://MyWebsite:8080/app/search/@go?sid=scheduler__nobody__search__RMD50fd7c7e5334fc616_at_1712993040_1213", recipients="['sysadmin@MyWebsite.com']", server="localhost"

Any examples would be greatly appreciated. Thanks, A totally blind Splunker with a mission
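If the goal is just to surface sendemail's errors, one alternative (a sketch; the subject and message are placeholders, while the recipient and server come from the log line above) is to drive the same code path through the sendemail search command from the search bar, then watch index=_internal sendemail for the resulting errors:

| makeresults
| sendemail to="sysadmin@MyWebsite.com" server="localhost" subject="sendemail test" message="testing after the RHEL 9 upgrade"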
Unfortunately this message does not give much information other than that the start_phantom.sh process failed with status code 2. Could you try running the install script with the "-v" (verbose) flag?
Strange... though this looks like an issue with your Teams client, given that you can reply to the message with a different mobile client.
@Jasmine - Use like instead of the match function.

| eval label=case(like(host, "%tv00.test.net"), "Test", like(host, "%qv00.qa.net"), "QA", like(host, "%pv00.prod.net"), "Prod")

I hope this helps!!!
macOS requires that all software be signed and notarized by the developer. It seems you're getting issues because macOS is not trusting the installed Splunk binary to load the libbz libraries. There may be a way to whitelist Splunk from the code signing requirements of macOS, or perhaps you could try installing a different version in the hopes that the new version will be code-signed and accepted by macOS.
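If you want to confirm the code-signing theory, macOS has built-in tools for it; the install path below is an assumption, so adjust it to wherever Splunk actually lives on your machine:

codesign -dv --verbose=2 /Applications/Splunk/bin/splunk    # show the binary's signature details
spctl --assess --verbose /Applications/Splunk/bin/splunk    # ask Gatekeeper for its verdict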
Are there any interesting messages in the error log?

index=_internal TA-purestorage-unified log_level=ERROR

I tried installing this add-on on a test machine with Splunk Enterprise 9.2.0 and could load the configuration page without error. Which version of Splunk Enterprise are you using?
@PickleRick - You must be right, and I know it's complicated with the HEC endpoint as to what will execute or not, so I would avoid it altogether and filter early, directly at the source, when using HEC.
@dc17 - You need to give the full path, like:

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
checkpointInterval = 5
current_only = 0
disabled = 0
start_from = oldest
index = sysmon
sourcetype = WinEventLog:Sysmon

In my case, I can see a folder called Microsoft > Windows > Sysmon, in which I can see the Operational log. You need to give the full path instead of just MyCustomLog; you can find it in Event Viewer.

I hope this helps!!!
  <input type="dropdown" token="envtoken"> <label>env</label> <fieldForLabel>label</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> index=aaa (source="/v... See more...
  <input type="dropdown" token="envtoken"> <label>env</label> <fieldForLabel>label</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query> index=aaa (source="/var/log/testd.log") |stats count by host | eval label=case(match(host, ".*tv*."), "Test", match(host, ".*qv*."), "QA", match(host, ".*pv*."), "Prod")| dedup label</query> <earliest>-15m</earliest> <latest>now</latest> </search> </input>   dropdownlist binding with TEST, QA and PROD In QA and prod have 3 host. If i select QA from dropdown list , will the search includes from all the three hosts? could you plase confirm
Hi What issue are you trying to solve with this change? I think it's usually better to use S2S between Splunk nodes than the HTTP version. r. Ismo