All Posts

Thank you for your reply. The email settings are fine, and in the internal logs I don't see any error related to "sendemail". Could you please suggest what else could be done?

Hi @shakti, did you configure the email relay in [Settings > Server Settings > Email Settings]? Did you open all the routes between the Search Head and the email host? You can troubleshoot the connection by searching _internal for the word "Sendmail". Ciao. Giuseppe
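
A starting point for that troubleshooting search (just a sketch; adjust the terms and time range to whatever actually appears in your environment) is something like:

index=_internal (source=*python.log* OR sourcetype=splunkd) (sendemail OR sendmail) earliest=-24h
| sort - _time

Errors from the sendemail alert action are normally written to python.log, so that source is usually the most informative place to look.
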
Hi @vishwa, if you run your search, do you get the table you shared? If yes, the eval I hinted at sums the values of the columns into the Total value. You could also use the addtotals command, which sums all the values in each row:

index=app-index source=application.logs
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex field=_raw "(?<Message>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over RampdataSet by Message
| addtotals

But also in this case the question is: does your search extract the value for each column? Ciao. Giuseppe

addtotals is the wrong tool for what you wanted. (It is really bad to use a screenshot to illustrate text output. Always use text unless you are illustrating a visual effect.) addcoltotals is what you need:

| addcoltotals labelfield=MOP
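
Applied to the chart from earlier in this thread, it would look roughly like the sketch below; the labelfield and label values are only illustrative, so substitute your own field name and label text:

... | chart count over RampdataSet by Message
| addcoltotals labelfield=RampdataSet label=Total

Unlike addtotals, which adds a per-row Total column, addcoltotals appends a summary row containing the per-column totals.
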
Hello, we upgraded our Red Hat 7 to 9 this past Monday, and Splunk stopped sending emails. We were inexperienced and unprepared for this, so we upgraded our Splunk Enterprise from 9.1 to 9.1.3 to see if this would fix it. It did not. Then we upgraded to 9.2; that did not fix it either.

I started adding debug mode to everything and found that Splunk would send the emails to postfix, and the postfix logs would state the emails were sent. However, after looking at it closer, I noticed the from field of the Splunk sendemail-generated emails looked like splunk@prod, not splunk@prod.mydomain.com (as it did before we upgraded to Red Hat 9). When we use mailx, the from field is constructed correctly, such as splunk@prod.domain.com. Extra Python debugging does not show the from field, only the user and the domain: from': 'splunk', 'hostname': 'prod.mydomain.com'.

My stanza in /opt/splunk/etc/system/local/alert_action.conf:

[email]
hostname = prod.mydomain.com

Does anyone know how to fix this? Is there a setting in Splunk that would make sure the email from field is constructed correctly? It is funny that if you add an incorrect "to" address Splunk whines, but if Splunk creates an incorrect from address in sendemail, it is fine and just sends it to postfix and lets it handle it, lol. dandy

Hi @alfredoh14, By default, the From: address is splunk. Are you using default Splunk email settings and a local instance of postfix? Because no domain is specified, postfix likely appends the host name to the user name, i.e. splunk@prod, before forwarding the message to either an upstream relay or the recipient's mail server. You can set the From: address in Splunk Web from Settings > Server settings > Email settings in the "Send emails as" setting. This will update $SPLUNK_HOME/etc/system/local/alert_actions.conf. For example, using no-reply@mydomain.com:

[email]
from = no-reply@mydomain.com
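
If you want to confirm the settings the search head is actually applying rather than reading the file, one option, assuming your role is allowed to query the configuration REST endpoints, is a quick search like the following; from, hostname, and mailserver are standard [email] settings in alert_actions.conf:

| rest /services/configs/conf-alert_actions splunk_server=local
| search title=email
| fields from hostname mailserver
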
Hi @Satyapv, here's another alternative. We'll use internal splunkd components to simulate a field named Transaction. To see event counts over [-300,0], [-600,0], and [-900,0] seconds:

index=_internal sourcetype=splunkd component=* earliest=-15m latest=now
| rename component as Transaction
| addinfo ``` assumes a valid latest value ```
| stats count(eval(_time>=info_max_time-300)) as "Last 5min Vol"
        count(eval(_time>=info_max_time-600)) as "Last 10min Vol"
        count as "Last 15min Vol"
        by Transaction

To see event counts over [-300,0], [-600,-300), and [-900,-600) seconds:

index=_internal sourcetype=splunkd component=* earliest=-15m latest=now
| rename component as Transaction
| addinfo ``` assumes a valid latest value ```
| stats count(eval(_time>=info_max_time-300)) as "Last 5min Vol"
        count(eval(_time>=info_max_time-600 AND _time<info_max_time-300)) as "Last 10min Vol"
        count(eval(_time<info_max_time-600)) as "Last 15min Vol"
        by Transaction

You can adjust earliest and latest as needed, but note that the last count will always be inclusive of earliest, i.e. the last 15 minutes for -15m. You can adjust the count aggregates to disallow counting events more than 900 seconds (15 minutes) prior to latest:

count(eval(_time>=info_max_time-900)) as "Last 15min Vol"

or

count(eval(_time>=info_max_time-900 AND _time<info_max_time-600)) as "Last 15min Vol"

That was awesome help. It really just solidified the assumption I had, which was that Splunk was sending the emails to the postfix server and they were getting dropped by it. My next step was to add debugging on postfix, and I found out that the from field was splunk@prod, not splunk@prod.mydomain.com. The server name (hostname) on Linux is prod, but when I access the Splunk Web UI, it is https://prod.mydomain.com:8080/en-US.

When I added the Python debugging, it only showed that the user sending it was splunk and the domain was (based on the debug logs of python sendemail): from': 'splunk', 'hostname': 'prod.mydomain.com'. This did not show me that the sendemail.py command would construct the from field as only splunk@prod (I had to look at the postfix logs for that info).

My stanza in /opt/splunk/etc/system/local/alert_action.conf:

[email]
hostname = prod.mydomain.com

Again, we can send emails from the CLI using mailx, but Splunk cannot using sendemail.py because it is not constructing the from field correctly, so although postfix sends it, the SMTP server which receives it drops it. So, do you have any idea where I have to set a setting so that the from field is constructed correctly by the sendemail.py script?

Note you need to place source=OMITTED host="SERVER1" OR host="SERVER2" in parentheses; alternatively, use the IN operator. Finding the difference should not be that complicated.

index=OMITTED source=OMITTED host IN ("SERVER1", "SERVER2")
| stats max(Value) as Value by host
| stats max(Value) as max_of_two min(Value) as min_of_two
| where max_of_two / min_of_two > 0.75

However, your OP says you want timechart. That's why @richgalloway includes _time in the groupby in that first stats. But you can substitute the first stats with timechart to simplify this, then use the same technique in every row to find percent deviation.

index=OMITTED source=OMITTED host IN ("SERVER1", "SERVER2")
| timechart span=1d max(Value) as Value by host
| eventstats max(Value) as max_of_two min(Value) as min_of_two
| where max_of_two / min_of_two > 0.75
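
For example, if you would rather see the gap as a percentage than filter on a ratio, you could append something along these lines (the field name pct_deviation and the 25% threshold are just illustrative):

| eval pct_deviation = round((max_of_two - min_of_two) / max_of_two * 100, 2)
| where pct_deviation > 25

A gap above 25% here is the same condition as min_of_two / max_of_two dropping below 0.75.
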
In addition to possible additions such as "Timeout Errors", your case requires an additional consideration. Using the case function (or the timechart command) will count each 5-minute interval separately, in disagreement with the semantics of "last 10min vol" and "last 15min vol". These terms are cumulative: any event in the "last 5min" must also be counted in the "last 10min" and "last 15min". Here is a semantic implementation; mvappend satisfies both considerations.

| foreach 5min 10min 15min
    [eval header = mvappend(header, if(_time - relative_time(now(), "-<<FIELD>>") > 0, "Last <<FIELD>> Vol", null()))]
| eval header = mvappend(header,
    if(log_level == "ERROR", "Timeout Errors", null()), ``` This is error emulation. Use real condition(s) ```
    if(someother > 0, "Some other count", null()))
| chart count OVER Transaction BY header
| table Transaction "Last 5min Vol" "Last 10min Vol" "Last 15min Vol" "Timeout Errors"

Note the last mvappend evaluation is an emulation. Use your real condition. Here is the data emulation I use to test the above code; one emulated error condition gives non-zero output.

index=_internal earliest=-15m
| rename sourcetype AS Transaction
``` data emulation above; some events have log_level "ERROR" ```

This emulation gives counts like the following:

Transaction              Last 5min Vol   Last 10min Vol   Last 15min Vol   Timeout Errors
dbx_health_metrics       1370            2055             2740             0
dbx_server               0               0                4                0
splunk_python            10              15               20               0
splunk_search_messages   4               2                2                2
splunkd                  4736            7390             9833             2779
splunkd_access           388             600              787              0
splunkd_ui_access        244             134              148              0

If you really do need to test from a shell, enable debug logging as previously noted, and then from the shell (Bash in this example) on the Splunk host, run:

# this script assumes your management port is 8089
$SPLUNK_HOME/bin/splunk login
$SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/etc/apps/search/bin/sendemail.py 'to="test@example.com" subject="Test Message"' << EOF
authString:$(echo -n $(cat ~/.splunk/authToken_splunk_8089))
sessionKey:$(echo -n $(sed -re 's/.*<sessionkey>(.*)<\/sessionkey>.*/\1/' ~/.splunk/authToken_splunk_8089))
owner:$(echo -n $(sed -re 's/.*<username>(.*)<\/username>.*/\1/' ~/.splunk/authToken_splunk_8089))
namespace:search
sid:

_time,_raw
"1713023419","This is the first event/row."
"1713023420","This is the second event/row."
EOF
$SPLUNK_HOME/bin/splunk logout

Note that the empty line between sid: and _time is mandatory. The empty line indicates to Intersplunk that CSV-formatted search results follow. The setting:value entries before the empty line represent the Intersplunk header. sendemail.py makes several Splunk REST API calls and requires a session key and app context to work correctly. The splunk login command will create a new session and cache your username, session key, etc. in ~/.splunk/authToken_splunk_8089. The splunk logout command will invalidate the session and remove ~/.splunk/authToken_splunk_8089.

Just to nitpick a little: you can set up a cluster without redundancy. It's not an HA cluster, but it has its uses (one advantage of such a setup is the ability to rebalance buckets when you add a new peer). But yes, if you set up a cluster with RF>=2, every bucket should have at least one additional copy somewhere in the cluster.

The answer depends on your definition of "HA". If you only care that your data has some place to go, then having (at least) 2 indexers qualifies. OTOH, if it's the data itself that must be HA, then unclustered indexers are not the answer. That's because loss of an indexer means loss of the data stored on that indexer. SmartStore helps by putting warm buckets in off-box storage, but hot buckets remain on the indexer unprotected. In an indexer cluster, each bucket is replicated to at least one other indexer, so the loss of an indexer does not result in data loss. Yes, an indexer cluster requires a cluster manager, but that instance can be shared with the Monitoring Console/License Manager instance.

Hello, I am not getting emails for the alerts I have set up in Splunk. Can anyone please help?

Hi @gcusello, we tried the query you provided with the eval command, but it is not working. The output is:

RampdataSet   Initial message received with below details   Total
WAC   10
Letter published correctley to ATM subject
WAX   30
Letter published correctley to DMM subject
WAM   22
Letter rejected due to: DOUBLE_KEY
STC   33
Letter rejected due to: UNVALID_LOG
STX   66
Letter rejected due to: UNVALID_DATA_APP

We tried addtotals as well; please see the output:

RampdataSet   Initial message received with below details   Total
WAC   20
WAX   165
WAM   184
STC   150
STX   222
OTP   70
TTC   15
TAN   21

Can this be done, or is the official Splunk guidance to utilize an indexer cluster? Curious if there's any currently (potentially) possible method to achieve high availability with only 2 indexers. My reading on indexer clusters has me thinking one needs, at a minimum, 3 licensed Splunk instances. At least, that's what I got from Splunk's documentation: you need one master, and at least 2 dedicated indexer peers. Where the search head goes in all of that and how that would be supported, I have no clue. I'm sure everyone can think of a very green reason as to why one would want to be able to just have a pair of indexers serve high availability without being forced into an indexer cluster kind of deployment. I can see older posts where apparently this used to be supported, but my understanding now is that the only Splunk-supported high-availability deployment is via indexer clusters. Can anyone confirm?

Hi @alfredoh14, The best way to test sendemail.py is using a search:

| sendemail to="test@example.com" subject="Test Message"

The script reads configuration from alert_actions.conf using the Splunk REST API, and testing the script from the command line isn't straightforward. (See the sendEmail function in sendemail.py.) Messages will be logged to $SPLUNK_HOME/var/log/splunk/python.log. Errors will also be logged to search.log and displayed in Splunk Web. The default log level is specified in $SPLUNK_HOME/etc/log.cfg:

[python]
splunk = INFO

You can change the log level in $SPLUNK_HOME/etc/log-local.cfg:

[python]
splunk = DEBUG

Restart Splunk after modifying log-local.cfg. Other Splunk Python scripts will produce verbose debug output. I recommend returning the log level to INFO when you're finished debugging. You can search python.log directly from Splunk:

index=_internal source=*python.log* sendemail

I also recommend opening a support case. If you find a compatibility issue between sendemail.py and a specific RHEL 9 configuration in the latest maintenance release of a supported version of Splunk, either sendemail.py can be fixed or the Splunk documentation can be updated.

Thwarted by high cardinality! You can adjust the similarity threshold of the cluster command with the t option:

| cluster showcount=t t=0.5

or change how the cluster command determines similarity with the match option:

| cluster showcount=t match=termset ``` unordered terms ```

If you find a frequently occurring event unrelated to your original question and want a bit of help, you'll get the best answer by starting a new question. Everyone here loves solving problems!
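
To see which patterns dominate once events are clustered, one illustrative follow-on (the index and sourcetype here are placeholders) is to sort by the cluster_count field that showcount=t generates:

index=your_index sourcetype=your_sourcetype
| cluster showcount=t t=0.5
| sort - cluster_count
| head 20
| table cluster_count _raw
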
No, you cannot get the index name from a log file. The index is specified when the data is onboarded as part of the inputs.conf settings. At search time, data is fetched from one or more indexes.  Getting the index from a log file would mean going to an index to get a log file to get the name of an index.  Doesn't make much sense. What problem are you trying to solve?
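
If the underlying goal is to find out which index holds the events from a given log file, you can answer that at search time, since index is always available as a search-time field; for instance (the source path is just a placeholder):

index=* source="/var/log/your_app.log" earliest=-24h
| stats count by index source
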
The host is the same across all the environments. I am facing an issue when I bind the same value to the dropdown list: it says "Duplicate values causing conflict". But I need a dropdown list with TEST/QA/PROD as labels and the same host value. How can I achieve this?