All Posts


You must put those configurations on the first full Splunk Enterprise instance on the path from the source to the indexers. If you have a separate HF which is the endpoint for those UDP feeds, then the configurations must be there. If that is a UF and you are sending events through an IHF, then add those configurations there. And if there are no HFs before the indexers, then add those configurations on all indexers. And as said, it's better to use a real syslog server to terminate syslog feeds and use e.g. a UF to collect events from files, or use SC4S for that.
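As a rough sketch, the input configuration on whichever instance terminates the feed might look like this (the port, sourcetype and index below are illustrative placeholders, not from the original post):

```ini
# inputs.conf on the HF (or on each indexer, if there is no HF in between)
[udp://514]
sourcetype = syslog
index = network
# use the sending host's IP as the host field
connection_host = ip
```

With SC4S or a dedicated syslog server, this UDP stanza is not needed on Splunk at all; the syslog server writes to files (picked up by a UF) or forwards over HEC.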
The best instruction for real-time alerts is: never, ever use them! They usually generate issues inside and outside of Splunk, e.g. in email systems, when there is any mistake in the configuration. Instead of real-time alerts you should use scheduled alerts. Just select a suitable schedule based on the individual alert. When you are creating them, check whether there is regularly some latency when indexing events, and if so, adjust earliest and latest based on that. For sending emails, you can add the needed configuration to the base Splunk email settings or add alert actions to do it. Personally I prefer to add a link to the alert into its body, and never add real data into it. From time to time there could be some static or similar content, but never send real events outside of Splunk. More instructions can be found in Community/Answers and also in the Alerting Manual.
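A sketch of such a scheduled alert in savedsearches.conf, with the earliest/latest window pushed back to allow for indexing latency (the search name, cron schedule, five-minute lag and email address are illustrative assumptions):

```ini
# savedsearches.conf: runs every 5 minutes over a window that ends
# 5 minutes in the past, so late-arriving events are still counted
[My scheduled alert]
enableSched = 1
cron_schedule = */5 * * * *
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = ops@example.com
```

Note the snapped (`@m`) boundaries, so consecutive runs cover adjacent, non-overlapping windows.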
Hello, I am creating an alert and want to make sure that the scheduled or real-time setup sends an email out once the query finds a match. What is the best configuration for an alert to send an email as soon as the criteria of the query match? Thank you!
Here are instructions on how to back up and restore the deployer: https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/BackuprestoreSHC I think you could just replicate the old deployer to the new one and change its name, IP and GUID. I'm not sure if there is something in the bundles or state files that needs to change, etc.? Then change the replication URL on all the SHC nodes you are moving to this new cluster to point to it. After that, bring those nodes up one by one and wait until each one is up and there are no errors before starting the next. Before you start this, take offline backups (including the KV store) and stop all the nodes belonging to the SHC which you are moving away from the original deployer. If there is no need to keep those nodes in an SHC anymore, then just remove them from the SHC and use them as individual SHs. Note: I haven't tried this myself, so you take the risk yourself!
Could you explain a bit more what you mean by @gcusello wrote: "The only way is, if it's wrong, to modify the timestamp format to take the second one and not the one added by the syslog receiver."? Is the approach having

EXTRACT-extracted_time = \b(?P<extracted_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
EVAL-_time = strptime(extracted_time, "%Y-%m-%dT%H:%M:%S.%6N%z")

in props.conf right?
That doesn't help. _time still represents current time, not time in the event.  
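For what it's worth, _time is assigned at index time, so search-time EXTRACT/EVAL settings in props.conf cannot change it for events as they are indexed. The usual fix is index-time timestamp recognition on the parsing tier (HF or indexer); the sourcetype name and the TIME_PREFIX regex below are assumptions about what the events look like, so adjust them to the actual data:

```ini
# props.conf on the instance that first parses the data
[my_syslog]
# regex matching everything up to (but not including) the timestamp
# you want Splunk to use, i.e. skipping past the timestamp that the
# syslog receiver prepends
TIME_PREFIX = ^\S+\s+
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

This only affects newly indexed events; already-indexed events keep the _time they were given.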
By default this is configured to look only at the SPLUNK_HOME mount point. I don't know if there is a way to add additional mount points there. If you need to monitor other mount points and other Linux statistics, I think you should use e.g. the *nix TA to collect logs and metrics: https://splunkbase.splunk.com/app/833
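If I remember right, the *nix TA collects filesystem usage with a scripted input; a minimal sketch of enabling it in the TA's local/inputs.conf (the interval and index are illustrative, and the exact stanza should be checked against the TA's default/inputs.conf):

```ini
# Splunk_TA_nix/local/inputs.conf on the host to monitor
[script://./bin/df.sh]
interval = 300
sourcetype = df
index = os
disabled = 0
```

That gives you per-mount-point usage for everything df sees, not just SPLUNK_HOME.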
Ahh.. thanks, this was killing me. I was also having trouble with the eval statement checking an array value (it kept erroring out), but it seems like spath was the key there as well. This ended up working for me:

index=someindex
| spath output=sentSubject "Item.Subject"
| spath output=receivedSubject "AffectedItems{}.Subject"
| eval subject = if(isnull(sentSubject), receivedSubject, sentSubject)
| table UserId, subject, Operation, _time
There is no requirement for a specific client certificate, as HEC is designed to be "open" from this point of view. The real "authentication" is done with the given token. That way there is no need to share any TLS client certificates with the source side.
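A minimal sketch of how that looks on the receiving side (the stanza name and token value are placeholders):

```ini
# inputs.conf on the HEC receiver
[http]
disabled = 0
enableSSL = 1

[http://my_hec_input]
token = 11111111-2222-3333-4444-555555555555
index = main
disabled = 0
```

The client then authenticates by sending the token in the request header (Authorization: Splunk <token>); TLS is only used server-side to encrypt the connection, with no client certificate involved.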
Introspection seems to give me the data.mount_point only for "/" and not for the other file systems that I can see via the Linux "df -kh" command. How come?
Hi, use spath: https://docs.splunk.com/Documentation/Splunk/9.4.0/SearchReference/Spath To see why it happens, add an eval with just | eval subject2=Item.Subject ... | table ..., subject2 (subject2 will be null). I have a Splunk index in JSON that has the keys SRV and CONTENT_LENGTH. If I do

index=someindex
| eval CONTENT_TYPE=if(isnull(SRV.CONTENT_TYPE),"true","false")
| table SRV.CONTENT_TYPE, CONTENT_TYPE

I get the same problem as you do. But like below, I don't:

index=someindex
| spath output=qwe "SRV.CONTENT_TYPE"
| eval CONTENT_TYPE=if(isnull(qwe),"true","false")
| table SRV.CONTENT_TYPE, CONTENT_TYPE
This resolution worked with minor changes. Many thanks for your help!

| chart count OVER transaction_id BY source
Hello, trying to figure out why this eval statement testing for a null value always evaluates to "true", even when the field does contain data: Here is what the data looks like in the results:    
Hi everyone, I got an error when opening Splunk Security Essentials. It says: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." When I check it in my browser console, it says: GET http://my.ipaddress:8000/en-US/splunkd/__raw/servicesNS/zake/system/storage/collections/data/RecentlyViewedKO?limit=1&query=%7B%22%24and%22%3A%5B%7B%22type%22%3A1%7D%2C%7B%22id%22%3A%22home%22%7D%2C%7B%22app%22%3A%22Splunk_Security_Essentials%22%7D%5D%7D 503 (Service Unavailable) Since I know a 503 code is an error from the server, is there any way to check where that server is down? I mean, I checked in StatusGator and all is okay. Any solution?
Hi @ITWhisperer, I am getting events which contain "slot" messages. Events:

{"priority":6,"sequence":4704,"sec":695048,"usec":639227,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 added
{"priority":6,"sequence":4698,"sec":695037,"usec":497286,"msg":"hv_netvsc 54243fd-13dc-6043-bddd-13dc6045bddd eth0: VF slot 1 removed

Query used:

index="index1"
| search "slot"
| rex field=msg "(?<action>added|removed)"
| eval added_time=if(action="added",strftime(_time, "%H:%M:%S"),null())
| eval removed_time=if(action="removed",strftime(_time, "%H:%M:%S"),null())
| sort 0 _time
| streamstats max(added_time) as added_time latest(removed_time) as removed_time by host, slot
| eval added_epoch=strptime(added_time, "%H:%M:%S")
| eval removed_epoch=strptime(removed_time, "%H:%M:%S")
| eval downtime=if(isnotnull(added_epoch) AND isnotnull(removed_epoch), removed_epoch - added_epoch, 0)

Here I tried converting the time to hour:min:sec and later into epoch to get the difference in seconds, but it's not working and downtime always shows 0.
I recently had an error message pop up when synchronizing from our on-prem AD servers to Entra about an account issue. I found that the account in question had all the attributes correct except for the userPrincipalName. In the UPN, instead of having username@mydomain.com, it was changed to "\"@mydomain.com. I am trying to figure out who or which account made that change, in Splunk Cloud. I have searched for Event ID 4738 and it shows the UPN with the "\", but it doesn't tell me who made the change. I am also looking in the Windows TA add-on to see if I can find any more info in there.
Thank you @isoutamo, I changed the global setting to HTTPS and it works perfectly fine. I just don't understand how it works, doesn't the sender need the public key? how does it work?
Hi @onthakur, use the chart command instead of stats: <your_search> | chart count OVER source BY transaction_id Ciao. Giuseppe
It's a bit long, I hope I will not bore you. I made a Splunk graph with two lines; I need to see the values compared to the average of the last 10 days. So:

One line is the percentage within a time period, let's say today, 28 Jan, 14:20 --> 14:25. The second line is the average percentage over the same time period but for the last 10 days, 18-27 Jan, 14:20 --> 14:25.

What I can tell by looking at this graph is stuff like: "Today at 14:20 we had x% more/less than the last 10-day average, but at 14:21 we had x% more/less", etc. It's important to always have time snapped to the start of the minute (so if "now" is 17:31:23 then the last minute is 17:30:00.000 --> 17:30:59.999). To make the search for this graph, I am using earliest= and latest= like this:

index=logs earliest=-5m@m latest=-1m@m
| ....
| append [search index=logs ( (earliest=-24h-5m@m AND latest=-24h-1m@m) OR (earliest=-48h-5m@m AND latest=-48h-1m@m) OR ... ) | ... ]
| ...

The search itself works OK, but my problem is when I try to make a dashboard for it. The dashboard needs to contain a time input with a token I named "thetime". Usually, you make the dashboard search use this time input by selecting "Shared Time picker (thetime)". This is not possible for my search, so I somehow need to specify $thetime.earliest$ / $thetime.latest$ in the search query. But I cannot just simply do something straightforward like:

index=logs earliest=$thetime.earliest$ latest=$thetime.latest$-24h@m | ...

Depending on what I select in the time picker, I can end up with messages like: Invalid value "now-24h" for time term 'latest'. I know about | addinfo (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Addinfo) but it's impossible to use "info_max_time" in the first part of the searches, only after the pipe addinfo. And even if it did somehow work, there would still be the issue of the required minute snap to 00 --> 59 seconds.
My approach was to use the <init> part of the dashboard XML to calculate all the needed earliest/latest values. Currently I am dealing only with relative ranges; I will deal with exact dates (between) later. So in my dashboard XML I have this:

<form version="1.1" theme="light">
  <init>
    <eval token="RSTART">strftime(relative_time(now(), $thetime.earliest$),"%Y-%m-%d %H:%M:00")</eval>
    <eval token="REND">strftime(relative_time(now(), $thetime.latest$),"%Y-%m-%d %H:%M:00")</eval>
  </init>
  ...
  <query>index=logs | eval RRSTART="$RSTART$", RREND="$REND$" | table _time, RRSTART, RREND</query>
  ...
</form>

The following part drives me crazy. Assuming now is 17:55:02, I access the Splunk dashboard at this link: https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME When I first load the page, I see the time picker and a submit button; there are no results shown until I press submit. I select "Relative", earliest 1 hours ago, "No snap-to", latest now, apply, and submit. The browser URL changes to https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME?form.thetime.earliest=-1h&form.thetime.latest=now and the results I get are:

RRSTART 2025-01-28 17:55:00, RREND 2025-01-28 17:55:00 (same values, bad)

At this point, I just click the refresh button of the browser, and I get:

RRSTART 2025-01-28 16:55:00, RREND 2025-01-28 17:55:00 (correct values)

So basically, if I always click submit and then reload, I get the correct values. From what I understand from https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/tokens#Set_tokens_on_page_load this should not happen.

As for my questions: can anyone tell me if I am doing something wrong with <init>? Maybe it cannot be used this way with dashboard tokens? Or maybe there is another way to do this without using <init>? Thank you for taking the time to read. Using Splunk Enterprise Version: 9.1.0.2
@gcusello Firewalld is enabled and I have all the respective ports open as well:

firewall-cmd --zone=public --permanent --add-port 8000/tcp
firewall-cmd --reload

I have worked with Splunk Support and Red Hat Support; they have verified my configuration and still didn't figure it out. So the only thing it could be is a hardening configuration from CIS Level 1. Thank you buddy for your polite comments.