All Posts

If you really do need to test from a shell, enable debug logging as previously noted, and then from the shell (Bash in this example) on the Splunk host, run:

# this script assumes your management port is 8089
$SPLUNK_HOME/bin/splunk login
$SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/etc/apps/search/bin/sendemail.py 'to="test@example.com" subject="Test Message"' << EOF
authString:$(echo -n $(cat ~/.splunk/authToken_splunk_8089))
sessionKey:$(echo -n $(sed -re 's/.*<sessionkey>(.*)<\/sessionkey>.*/\1/' ~/.splunk/authToken_splunk_8089))
owner:$(echo -n $(sed -re 's/.*<username>(.*)<\/username>.*/\1/' ~/.splunk/authToken_splunk_8089))
namespace:search
sid:

_time,_raw
"1713023419","This is the first event/row."
"1713023420","This is the second event/row."
EOF
$SPLUNK_HOME/bin/splunk logout

Note that the empty line between sid: and _time is mandatory. The empty line indicates to Intersplunk that CSV formatted search results follow. The setting:value entries before the empty line represent the Intersplunk header. sendemail.py makes several Splunk REST API calls and requires a session key and app context to work correctly. The splunk login command will create a new session and cache your username, session key, etc. in ~/.splunk/authToken_splunk_8089. The splunk logout command will invalidate the session and remove ~/.splunk/authToken_splunk_8089.
Just to nitpick a little. You can set up a cluster without redundancy. It's not an HA cluster, but it has its uses (one advantage of such a setup is the ability to rebalance buckets when you add a new peer). But yes, if you set up a cluster with RF>=2, every bucket should have at least one additional copy somewhere in the cluster.
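For reference, bucket rebalancing is started from the cluster manager CLI, roughly like this (a sketch; check the rebalance cluster-data options for your Splunk version):

$SPLUNK_HOME/bin/splunk rebalance cluster-data -action start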
The answer depends on your definition of "HA". If you only care that your data has some place to go, then having (at least) 2 indexers qualifies. OTOH, if it's the data itself that must be HA, then unclustered indexers are not the answer, because loss of an indexer means loss of the data stored on that indexer. SmartStore helps by putting warm buckets in off-box storage, but hot buckets remain on the indexer, unprotected. In an indexer cluster, each bucket is replicated to at least one other indexer, so the loss of an indexer does not result in data loss. Yes, an indexer cluster requires a cluster manager, but that instance can be shared with the Monitoring Console/License Manager instance.
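If it helps to picture the minimal two-indexer cluster, here is a rough server.conf sketch of the stanzas involved; the host name, port, and pass4SymmKey are placeholders, and attribute names vary by version (older releases use mode = master and master_uri), so treat it as an illustration rather than a drop-in config:

# server.conf on the cluster manager (placeholder values)
[clustering]
mode = manager
replication_factor = 2
search_factor = 2
pass4SymmKey = changeme

# server.conf on each indexer peer
[replication_port://9887]

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
pass4SymmKey = changeme

With replication_factor = 2 across two peers, every bucket has a copy on both indexers, which is what keeps the data available when one of them goes down.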
Hello, I am not getting emails for the alerts I have set up in Splunk. Can anyone please help?
Hi @gcusello
We tried the query you provided with the eval command, but it is not working. The output is:

RampdataSet Initial message received with below details Total
WAC 10 Letter published correctley to ATM subject
WAX 30 Letter published correctley to DMM subject
WAM 22 Letter rejected due to: DOUBLE_KEY
STC 33 Letter rejected due to: UNVALID_LOG
STX 66 Letter rejected due to: UNVALID_DATA_APP

We tried addtotals as well, please see the output:

RampdataSet Initial message received with below details Total
WAC 20
WAX 165
WAM 184
STC 150
STX 222
OTP 70
TTC 15
TAN 21
Can this be done or is the official Splunk guidance to utilize an index cluster? Curious if there's any current (potentially) possible method to achieve high-availability with only 2 indexers? My reading on index clusters has me thinking one needs at a minimum 3 licensed Splunk instances. At least, that's what I got from Splunk's documentation. You need one master, and at least 2 dedicated indexer peers. Where the search head goes in all of that and how that would be supported, I have no clue. I'm sure everyone can think of a very green reason as to why one would want to be able to just have a pair of indexers serve high availability without being forced into an index cluster kind of deployment. I can see older posts where apparently this used to be supported but my understanding now is that the only Splunk supported high-availability deployment is via index clusters. Can anyone confirm?
Hi @alfredoh14,
The best way to test sendemail.py is using a search:

| sendemail to="test@example.com" subject="Test Message"

The script reads configuration from alert_actions.conf using the Splunk REST API, and testing the script from the command line isn't straightforward. (See the sendEmail function in sendemail.py.)
Messages will be logged to $SPLUNK_HOME/var/log/splunk/python.log. Errors will also be logged to search.log and displayed in Splunk Web. The default log level is specified in $SPLUNK_HOME/etc/log.cfg:

[python]
splunk = INFO

You can change the log level in $SPLUNK_HOME/etc/log-local.cfg:

[python]
splunk = DEBUG

Restart Splunk after modifying log-local.cfg. Other Splunk Python scripts will produce verbose debug output, so I recommend returning the log level to INFO when you're finished debugging.
You can search python.log directly from Splunk:

index=_internal source=*python.log* sendemail

I also recommend opening a support case. If you find a compatibility issue between sendemail.py and a specific RHEL 9 configuration in the latest maintenance release of a supported version of Splunk, either sendemail.py can be fixed or Splunk documentation can be updated.
Thwarted by high cardinality! You can adjust the similarity threshold of the cluster command with the t option:

| cluster showcount=t t=0.5

or change how the cluster command determines similarity with the match option:

| cluster showcount=t match=termset ``` unordered terms ```

If you find a frequently occurring event unrelated to your original question and want a bit of help, you'll get the best answer by starting a new question. Everyone here loves solving problems!
No, you cannot get the index name from a log file. The index is specified when the data is onboarded as part of the inputs.conf settings. At search time, data is fetched from one or more indexes.  Getting the index from a log file would mean going to an index to get a log file to get the name of an index.  Doesn't make much sense. What problem are you trying to solve?
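If the underlying question is really "which index ends up holding the events from this log file?", one way to answer it is to search by source and group by index. The source path here is just an illustration, and index=* over a wide time range can be expensive, so narrow the time range as needed:

index=* source="/var/log/myapp.log" earliest=-24h
| stats count by index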
The host is the same across all the environments. I am facing an issue when I bind the same value to the dropdown list; it says "Duplicate values causing conflict". But I need a dropdown list with TEST/QA/PROD (label) sharing the same host value. How can I achieve this?
Disclaimer - I haven't used httpout much, so I might be mistaken here. But httpout is not a HEC output (although it needs a HEC input and a valid HEC token; it's complicated). It's the S2S protocol embedded in HTTP transport. It is indeed a fairly recent invention, mostly aimed at situations like yours, where it's easier (politically, not technically) to allow outgoing HTTP traffic (even if it's only pseudo-HTTP) than some unknown protocol. Having said that, I'd expect most of the functionality that normally works with tcpout (like useACK) to work. I'd test it first in a lab before pushing to prod anyway.
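For anyone following along, a minimal outputs.conf sketch of an httpout group on the forwarding node looks roughly like this; the URI and token below are placeholders, and you should confirm the exact settings against the outputs.conf spec for your Splunk version:

# outputs.conf on the forwarding node (placeholder values)
[httpout]
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
uri = https://idx.example.com:8088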
@efheem Thanks for posting this! Did this setup "just work" for you? With your configs, I see the files downloading in the logs, but it never finishes the first run, stating "The downloaded taxii intelligence has a size that exceeds the configured max_size and will be discarded." I've tried increasing the max to 500MB in the lab, but still encounter the same problem.
Hello @isoutamo
Thanks a lot for your feedback. I need to study httpout because the Splunk nodes communicate through the customer network, with firewalls, so it's easier to open proxy-compatible traffic than tcp/9997, for example. So, is there any possibility to use the indexer load-balancing, ack, and maxQueueSize functions with httpout? I saw that httpout is a relatively new functionality, since 8.x; maybe these functions are on the roadmap?
Thanks Jonas
It ain't that bad https://www.aplura.com/assets/pdf/hec_pipelines.pdf
Hello,

Transaction | Last 5min Vol | Last 10min Vol | Last 15min Vol | Timeouts | Errors
A           |               |                |                |          |
B           |               |                |                |          |
C           |               |                |                |          |
My three cents on the general approach to such tasks. Since "last 15 minutes" and "last 10 minutes" can be expressed in terms of 5-minute periods, you can simply either use a timechart with 5-minute bins, or manually bin _time into 5-minute buckets and do stats over the 5-minute periods. And then, when you have those 5-minute stats, you can aggregate the last two or last three of them to get summarized "last 10 minutes" and "last 15 minutes" values. It's often useful to check whether a problem containing several "parallel" computations can be transformed into a single, maybe a bit more detailed, calculation with some form of aggregation after that.
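To make that concrete, here is a rough SPL sketch assuming a field named Transaction, a 15-minute search window, and a simple event count per transaction (the index name and the plain count are placeholders you would replace with your own volume/timeout/error logic, and the exact bin alignment to "now" may need tuning):

index=my_index earliest=-15m
| bin _time span=5m
| stats count as vol by Transaction _time
| eventstats max(_time) as latest_bin
| eval last5=if(_time=latest_bin, vol, 0)
| eval last10=if(_time>=relative_time(latest_bin, "-5m"), vol, 0)
| stats sum(last5) as "Last 5min Vol", sum(last10) as "Last 10min Vol", sum(vol) as "Last 15min Vol" by Transaction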
Can you give some sample events and how you would like to present results?
Thank you. Is there a way to combine this in stats instead of chart, as I need to extract a few other fields as part of the stats?
Small tweak to the regex: (removing two space characters from the second-to-last line)

| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"value1\",\"key2\":\"value2\"}}"
| spath
| spath input=_raw path=details output=hold
| rex field=hold "\"(?<kvs>[^\"]*\"*[^\"]*\"*[^\"]*\"*)\"" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\":\"(?<value>[^\"]*)" max_match=0
| table orderNum key value orderLocation

If the value can be an escaped JSON string, then indeed you need to be more crafty with the regex. E.g.:

| makeresults
| eval _raw = "{\"orderNum\":\"1234\",\"orderLocation\":\"demoLoc\",\"details\":{\"key1\":\"{\\\"jsonvalue\\\":\\\"jsonvaluevalue\\\",\\\"jsonvalue2\\\":\\\"jsonvaluevalue2\\\"}\",\"key2\":\"value2\"}}"
| spath
| spath input=_raw path=details output=hold
| rex field=hold "(?<kvs>\"[^\"]*\":\"{?[^}]*}?\")" max_match=0
| stats values(*) as * by kvs
| rex field=kvs "(?<key>[^\"]*)\":\"(?<value>{?[^{}]*}?)\"" max_match=0
| table orderNum key value orderLocation
I have replaced it with like(), but it searches from one host only. As I mentioned, in QA I have 3 hosts and in Prod I have 3 hosts. I used dedup label to avoid duplicates in the dropdown list, but the search result contains events from only one host, not from all 3 hosts, if I select QA or PROD. Please advise.

<input type="dropdown" token="envtoken">
  <label>env</label>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <prefix>(host=</prefix>
  <suffix>)</suffix>
  <search>
    <query>index=aaa (source="/var/log/testd.log") | stats count by host | eval label=case(like(host, "%tv00.test"), "Test", like(host, "%qv00.qa"), "QA", like(host, "%pv00.prod"), "Prod") | dedup label</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>