All Posts


As always, there are two questions.
1. Will it run? Probably. I've worked with 9.0 Splunk servers fed by UFs going as far back as 6.6.x.
2. Is it a good idea? It depends on the circumstances. As the others have already said, if you have no other choice, you run what you have. But it's usually better to upgrade (unless some critical bug affects your particular use case). If for no other reason, 9.0 introduced configuration tracking, so you can see what changed and when.
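To see those change events, a minimal sketch (the _configtracker index is where 9.0+ records configuration changes by default; the data.* field names are as seen on typical deployments, so verify them on yours):

index=_configtracker sourcetype=splunkd_configuration_change
| spath
| table _time data.path data.action

This lists which .conf file changed, what kind of change it was, and when.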
Hi @livehybrid
Unfortunately not, it's a button (CSS + JavaScript).

<button type="button" class="web-ui-component__button jjui-11138fz" style="display: inline-block;">
  <div data-analytics-name="resource-tile" data-testid="sampleApp_Dev" tabindex="-1" class="jjui-vkvk0d ell0llb0">
.............

I can successfully simulate the click of the button using a "click" CSS selector

div[data-testid="sampleApp_Dev"]

but I do not have direct access to the JS. I've traced it in Chrome, but it has so many nested calls that it's challenging to find anything useful.
Thank you! That gave me the proper direction to go! My final, validated version is...

index=main sourcetype=syslog
    [ | makeresults
      | eval input_mac="48a4.93b9.xxxx"
      | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
      | where len(mac_clean)=12
      | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
      | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
      | eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
      | eval query=mvappend(mac_clean, mac_colon, mac_hyphen, mac_dot)
      | mvexpand query
      | where isnotnull(query)
      | fields query
      | format ]
| table _raw
Have you tried my examples? If you can send email and you have access to those internal logs, then there is at least one log line. If you cannot see it, then you don't have access to those logs.

2025-06-11 18:39:08,616 +0300 INFO sendemail:275 - Sending email. sid=1749656347.70143, subject="testing", encoded_subject="testing", results_link="None", recipients="['your.email@your.domain']", server="localhost"

How are you sure that the issue is with Splunk? Do you have some logs which show that e.g. an alert fired and tried to send email via sendemail? For that reason I suggest you first check that sending email works, and only after that start looking at why your alerts are not sending it. Quite often the reason turns out to be that the alert never fired.
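If the underlying goal is to notice when emails stop going out altogether, a minimal sketch built on that same log line (assuming the splunk_python sourcetype shown in the earlier example; adjust the filters to what your deployment actually logs):

index=_internal sourcetype=splunk_python sendemail "Sending email"
| timechart span=1h count

A sustained drop to zero during hours when alerts normally fire is a reasonable signal that sending has stopped.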
As already said, technically you can use quite an old UF with a new Splunk IHF/server version. BUT you must understand that there are several improvements, and also many security issues fixed, in newer UF versions. Of course, if you have some ancient OS versions then you cannot upgrade the UF on those, but then you should also consider updating those OSes too.
Are you sure that e.g. some of your source nodes are not HFs instead of UFs? This applies to the collecting node as well, not only to nodes between the UF and the indexers! Basically your configuration seems to be OK. Of course, you could make those regexes a little more efficient than they currently are, but that's another story. As others have already said, the most obvious reason is that you have an HF somewhere before your indexer (Splunk Enterprise node). You can check this on the source node and on all other nodes (check those from outputs.conf):

$SPLUNK_HOME/bin/splunk version

On a UF this should show something like

Splunk Universal Forwarder 9.4.0 (build 6b4ebe426ca6)

and on an HF/indexer

Splunk 9.4.1 (build e3bdab203ac8)

Just check which path it is running from and replace SPLUNK_HOME with it.
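If you cannot easily log in to every node, another option is to check the forwarder type from the indexer side. A sketch using the connection metrics that forwarders report into _internal (the fwdType and version field names are as seen on typical deployments, so verify them on your version):

index=_internal sourcetype=splunkd group=tcpin_connections
| stats latest(fwdType) as fwdType latest(version) as version by hostname

fwdType is usually "uf" for a universal forwarder and "full" for a heavy forwarder.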
I'm getting the logs; all the logs are at level INFO. I know for sure that Splunk had an issue with sending emails at a specific time, but I cannot see any logs in _internal.
I'm looking for a solution that will let me monitor whether emails have stopped being received, not for troubleshooting a specific issue.
Can you show the current inputs.conf and props.conf stanzas for this CSV file? And an example (modified) of the first 2 lines (header + real masked events) from that file?
Hello, I have a search head configured with assets and identities from the current AD domain. I have 5 more AD domains without trust and on different networks. In each domain/network I have an HF sending data to the indexers. How can I set those domains to send asset and identity information to my search head? Thank you. Splunk Enterprise Security
You can try to send an email and then check those events from _internal. First send an email, e.g.

index=_internal sourcetype=splunkd
| head 1
| sendemail to="your.email@your.domain" subject="testing"

After that you should get at least this event from _internal:

index=_internal sourcetype=splunk_python sendemail source="/opt/splunk/var/log/splunk/python.log"

Of course, this requires that the previous command worked without issues. It also needs access to the _internal logs, and there may be some capabilities required to send email.
Your first replace effectively reduces the string to 8 characters, and the subsequent replaces are expecting 12 characters, so the replaces fail. Also, using map is tricky at the best of times; perhaps you could try something like this:

index=main sourcetype=syslog
    [| makeresults
     | eval input_mac="48a4.93b9.xxxx"
     | eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
     | eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)", "\1:\2:\3:\4:")
     | eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)", "\1-\2-\3-\4-")
     | eval mac_dot=replace(mac_clean, "(....)(....)", "\1.\2.")
     | eval query=mvappend(mac_colon, mac_hyphen, mac_dot)
     | mvexpand query
     | table query]
| table _time host _raw
No worries. It's just that some assumptions which are obvious to the one writing the question might not be clear at all to the readers. Again, there are some ways of approaching this problem, but we need to know what you want to do. Splunk can carry over the value from the previous result row, but as for the additional logic, it has to know what to do with it. Compare this:

| makeresults count=10
| streamstats count
| eval field{count}=count
| table field*

With this:

| makeresults count=10
| streamstats count
| eval field{count}=count
| table field*
| streamstats current=f last(*) as *

The values are carried over. But if there is additional logic which should be applied to them, that's another story.
As @PickleRick says, your problem is (still) not well described. However, based on the limited information, you could try something like this:

``` Find latest daily version for each item ```
| timechart span=1d latest(version) as version by item useother=f limit=0
``` Filldown to cover missing intervening days (if any exist) ```
| filldown
``` Fill null to cover days before first report (if any exist) ```
| fillnull value=0
``` Convert to table ```
| untable _time item version
``` Count versions by day ```
| timechart span=1d count by version useother=f limit=0

Here is a simulated version using gentimes to generate some dummy data (which hopefully represents your data closely enough to be valuable):

| gentimes start=-3 increment=1h
| rename starttime as _time
| table _time
| eval item=(random()%4).".".(random()%4)
| eval update=random()%2
| streamstats sum(update) as version by item global=f
``` Find latest daily version for each item ```
| timechart span=1d latest(version) as version by item useother=f limit=0
``` Filldown to cover missing intervening days (if any exist) ```
| filldown
``` Fill null to cover days before first report (if any exist) ```
| fillnull value=0
``` Convert to table ```
| untable _time item version
``` Count versions by day ```
| timechart span=1d count by version useother=f limit=0

Notice that with the simulated data, all the rows add up to 16, which represents the 16 possible item names used in the simulation. Also, note that the counts move towards the bottom right as the versions of the items go up over time.
Without concrete examples, I can only guess what might work, but you could try using appendpipe. For example:

<your search to determine whether an alert should be raised>
| appendpipe
    [| eval alert_raised=time() ``` Create a field to show when the alert was raised ```
    ``` Reduce fields to only those required (including alert_raised) ```
    | table severity, expiration, ss_name, alert_raised
    ``` Output fields to lookup ```
    | outputlookup alerts_raised.csv append=true
    ``` Remove appended events ```
    | where isnull(alert_raised)]
First of all, many thanks for your support, and sorry if I'm wasting your time. Theoretically, there should only be a maximum of 3 versions. In practice, however, I have seen more versions at the same time. I had originally hoped that Splunk had native support for the LOCF topic, but I have not yet found any. In my research so far I have not come across it and have only discovered complex cross-join solutions. Perhaps this is the wrong use case for this requirement and I need to proceed differently:

- daily recording of the version for all items. Disadvantage: over 100,000 log entries are recorded daily
- a cron job that records the missing versions for all items. Disadvantage: here too, over 100,000 log entries are recorded daily
- usage of a different tool for this use case, e.g. InfluxDB. Native LOCF support seems to exist there. Disadvantage: several tools must be supported
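For completeness, the nearest thing to native LOCF in SPL appears to be the filldown pattern from the earlier answer. A minimal sketch, assuming one column per item after a timechart:

| timechart span=1d latest(version) as version by item
| filldown

filldown carries the last observed value forward across the empty days, which is the LOCF behaviour, without a cross join or a daily re-recording job.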
No, I'm trying to do something different. Every time my alert is triggered, I want to output some fields (like severity, expiration, ss_name...) to a KV Store lookup. Then I want to see the lookup on a dashboard: I'm doing this because I'm trying to create an app where I can manage alerts (like Alert Manager). Of course, I can just create a dashboard where I table all the events from the alert, but then I'm not sure I'm going to be able to modify the table.
Hello. This search returns zero results, but a manual "OR" search shows results. I cannot find the reason (neither can ChatGPT). The end result would be a query where I can input any format of MAC address in one section, but automatically search for all the formats shown. Any guidance would be appreciated. BTW, this is a local Splunk installation. (Please ignore the "xxxx".)

| makeresults
| eval input_mac="48a4.93b9.xxxx"
| eval mac_clean=lower(replace(input_mac, "[^0-9A-Fa-f]", ""))
| eval mac_colon=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1:\2:\3:\4:\5:\6")
| eval mac_hyphen=replace(mac_clean, "(..)(..)(..)(..)(..)(..)", "\1-\2-\3-\4-\5-\6")
| eval mac_dot=replace(mac_clean, "(....)(....)(....)", "\1.\2.\3")
| fields mac_clean mac_colon mac_hyphen mac_dot
| eval search_string="\"" . mac_clean . "\" OR \"" . mac_colon . "\" OR \"" . mac_hyphen . "\" OR \"" . mac_dot . "\""
| table search_string
| map search="search index=main sourcetype=syslog ($search_string$) | table _time host _raw"
Hi @AleCanzo
You can use outputlookup (https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/outputlookup) in your query to output the fields in your results to a KV Store, just the same as a CSV lookup. Is this what you're looking to achieve?
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
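To make that concrete, here is a minimal sketch; the collection name alerts_raised and the lookup name alerts_raised_lookup are made up for illustration (you can create the equivalent through the Lookup Editor app instead of editing .conf files):

# collections.conf
[alerts_raised]
field.severity = string
field.expiration = string
field.ss_name = string

# transforms.conf
[alerts_raised_lookup]
external_type = kvstore
collection = alerts_raised
fields_list = _key, severity, expiration, ss_name

Then end the alert's search with:

| table severity, expiration, ss_name
| outputlookup alerts_raised_lookup append=true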
Hi, this is my first interaction with the Splunk Community, so please be patient. I'm trying to output some fields from an alert to a KV Store lookup. I'm using the Lookup Editor app and a KVS app, but I'm probably missing some theory. Thanks!