All Posts

I'm looking for a solution that will let me monitor whether emails have stopped being received, not to troubleshoot a specific issue.
Can you show the current inputs.conf and props.conf stanzas for this CSV file? And an example (modified) of the first two lines (header + real masked events) from that file?
Hello. I have a search head configured with assets and identities from the current AD domain. I have 5 more AD domains without trust and on different networks. In each domain/network I have a HF sending data to the indexers. How can I set those domains up to send asset and identity information to my search head? Thank you. Splunk Enterprise Security
You can try to send an email and then check for those events in _internal. First send an email, e.g.

index=_internal sourcetype=splunkd | head 1 | sendemail to="your.email@your.domain" subject="testing"

After that you should get at least this event in _internal:

index=_internal sourcetype=splunk_python sendemail source="/opt/splunk/var/log/splunk/python.log"

Of course, this requires that the previous command worked without issues. It also requires access to the _internal logs, and possibly some additional capabilities to send email.
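If the goal is ongoing monitoring rather than a one-off test, one option could be a scheduled search over those same sendemail events that alerts when nothing has been sent for a while. A minimal sketch, assuming your alerts normally generate at least one email per hour (adjust the time window to your case):

index=_internal sourcetype=splunk_python sendemail source="/opt/splunk/var/log/splunk/python.log"
| stats count ``` count emails sent in the search time range ```
| where count=0 ``` returns a row (and so triggers the alert) only when nothing was sent ```

Run it over the last 60 minutes on a schedule and trigger a non-email alert action when it returns a result, since email itself may be the broken part.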
Your first replace effectively reduces the string to 8 characters, but the subsequent replaces expect 12 characters, so they fail. Also, using map is tricky at the best of times; perhaps you could try something like this:

index=main sourcetype=syslog
    [| makeresults
    | eval input_mac="48a4.93b9.xxxx"
    | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
    | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)", "\1:\2:\3:\4:")
    | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)", "\1-\2-\3-\4-")
    | eval mac_dot=replace(mac_clean, "(....)(....)", "\1.\2.")
    | eval query=mvappend(mac_colon, mac_hyphen, mac_dot)
    | mvexpand query
    | table query]
| table _time host _raw
No worries. It's just that some assumptions which are obvious to the person writing the question might not be clear at all to the readers. Again, there are several ways of approaching this problem, but we need to know what you want to do. Splunk can carry over the value from the previous result row, but as for the additional logic, it has to know what to do with it. Compare this:

| makeresults count=10
| streamstats count
| eval field{count}=count
| table field*

With this:

| makeresults count=10
| streamstats count
| eval field{count}=count
| table field*
| streamstats current=f last(*) as *

The values are carried over. But if there is additional logic which should be applied to them, that's another story.
As @PickleRick says, your problem is (still) not well described. However, based on the limited information, you could try something like this:

``` Find latest daily version for each item ```
| timechart span=1d latest(version) as version by item useother=f limit=0
``` Filldown to cover missing intervening days (if any exist) ```
| filldown
``` Fill null to cover days before first report (if any exist) ```
| fillnull value=0
``` Convert to table ```
| untable _time item version
``` Count versions by day ```
| timechart span=1d count by version useother=f limit=0

Here is a simulated version using gentimes to generate some dummy data (which hopefully represents your data closely enough to be valuable):

| gentimes start=-3 increment=1h
| rename starttime as _time
| table _time
| eval item=(random()%4).".".(random()%4)
| eval update=random()%2
| streamstats sum(update) as version by item global=f
``` Find latest daily version for each item ```
| timechart span=1d latest(version) as version by item useother=f limit=0
``` Filldown to cover missing intervening days (if any exist) ```
| filldown
``` Fill null to cover days before first report (if any exist) ```
| fillnull value=0
``` Convert to table ```
| untable _time item version
``` Count versions by day ```
| timechart span=1d count by version useother=f limit=0

Notice that with the simulated data, all the rows add up to 16, which represents the 16 possible item names used in the simulation. Also, note that the counts move towards the bottom right as the versions of the items go up over time.
Without concrete examples, I can only guess what might work, but you could try using appendpipe. For example:

<your search to determine whether an alert should be raised>
| appendpipe
    [| eval alert_raised=time() ``` Create a field to show when the alert was raised ```
    ``` Reduce fields to only those required (including alert_raised) ```
    | table severity, expiration, ss_name, alert_raised
    ``` Output fields to lookup ```
    | outputlookup alerts_raised.csv append=true
    ``` Remove appended events ```
    | where isnull(alert_raised)]
First of all, many thanks for your support, and sorry if I'm wasting your time. Theoretically, there should only be a maximum of 3 versions. In practice, however, I have seen more versions at the same time. I had originally hoped that Splunk had native support for LOCF (last observation carried forward), but I have not found it yet. In my research so far I have only discovered complex cross-join solutions. Perhaps this is the wrong use case for this requirement and I need to proceed differently:

- daily recording of the version for all items. Disadvantage: over 100,000 log entries are recorded daily
- a cron job that records the missing versions for all items. Disadvantage: here too, over 100,000 log entries are recorded daily
- usage of a different tool for this use case, e.g. InfluxDB, which seems to have native LOCF support. Disadvantage: several tools must be supported
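For reference, the filldown / streamstats last() pattern suggested earlier in this thread seems to be the closest thing to built-in LOCF. A minimal, self-contained sketch of the idea using dummy data only:

| makeresults count=5 ``` generate 5 dummy rows ```
| streamstats count
| eval version=if(count=2 OR count=4, null(), count) ``` simulate missing observations ```
| filldown version ``` LOCF: carry the last non-null value forward ```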
No, I'm trying to do something different. Every time my alert is triggered, I want to output some fields (like severity, expiration, ss_name...) to a KV Store lookup. Then I want to see the lookup on a dashboard: I'm doing this because I'm trying to create an app where I can manage alerts (like Alert Manager). Of course I could just create a dashboard where I table all the events from the alert, but then I'm not sure I would be able to modify the table.
Hello. This search returns zero results, but a manual "OR" search shows results. I cannot find the reason (neither can ChatGPT). The end goal is a query where I can input a MAC address in any format in one place, but automatically search for all of the formats shown. Any guidance would be appreciated. By the way, this is a local Splunk installation. (Please ignore the "xxxx".)

| makeresults
| eval input_mac="48a4.93b9.xxxx"
| eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
| eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
| eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
| eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
| fields mac_clean mac_colon mac_hyphen mac_dot
| eval search_string="\"" . mac_clean . "\" OR \"" . mac_colon . "\" OR \"" . mac_hyphen . "\" OR \"" . mac_dot . "\""
| table search_string
| map search="search index=main sourcetype=syslog ($search_string$) | table _time host _raw"
Hi @AleCanzo
You can use outputlookup (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/outputlookup) in your query to output the fields in your results to a KV Store, just the same as a CSV lookup. Is this what you're looking to achieve?
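For example, assuming you have already created a KV Store collection and a lookup definition pointing at it (the name alert_events_kv below is just a placeholder), the alert search could end with something like:

<your alert search>
| table severity, expiration, ss_name
| outputlookup append=true alert_events_kv ``` append=true adds records on each trigger instead of overwriting ```

The dashboard can then read the collection back with | inputlookup alert_events_kv.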
Hi, this is my first interaction with the Splunk Community, so please be patient. I'm trying to output some fields from an alert to a KV Store lookup. I'm using the Lookup Editor app and a KV Store app, but I'm probably missing some theory. Thanks!
I had tried your method before. Apparently I screwed up the syntax; the | lookup system_info.csv System as System_Name line was failing.
Thanks for the input. In fact, your first solution is what I ended up doing, and that one works. The second solution does not work: the query doesn't have the list of all systems when it calculates the missing ones.
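(One way to get the complete list of systems might be to drive the search from the lookup itself rather than from the events. A rough sketch, where the index and field names are only assumptions based on the snippet above:

| inputlookup system_info.csv
| fields System
| search NOT
    [ search index=main earliest=-24h ``` assumed index and time range ```
    | stats count by System_Name
    | rename System_Name as System
    | fields System ]

This would return the systems that appear in system_info.csv but have sent no events in the chosen time range.)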
Hi @sarit_s6
SMTP logs aren't directly available in your Splunk Cloud environment; however, if you log a support ticket, Splunk Support can check the PostMark mail server logs for bounced emails. This could help confirm:
a) whether the alert actually fired correctly from Splunk
b) whether the email was accepted by the mail relay
c) whether the relay had any issue sending on to the final destination
At a previous customer we had a number of issues with the customer's email server detecting some of the Splunk Cloud alerts as spam and silently bouncing them. You can contact Support via https://www.splunk.com/support
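For point a), you can usually check from your own side whether the alert fired and the email action ran by searching the scheduler logs in _internal. A rough sketch (replace the saved search name with your own; exact field names can vary slightly between versions):

index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| table _time savedsearch_name status result_count alert_actions

If status shows success and alert_actions includes email, the problem is more likely downstream of Splunk.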
We know for sure that Splunk had an issue with sending emails during this time, so it is definitely on Splunk's end.
If there are no errors on Splunk's end, then your email provider should be contacted to find out why the messages were not delivered. It's possible the messages were treated as spam or there was another problem that prevented delivery.
Hello. I'm trying to monitor SMTP failures in my Splunk Cloud environment. I know for sure that on a certain date we had a problem and did not receive any emails, but when I run this query:

index=_internal sendemail source="/opt/splunk/var/log/splunk/python.log"

I don't see any errors. How can I achieve my goal? Thanks
Thanks for your reply. I will try changing the interval to 600 seconds first.