All Posts



Thanks for the suggestion PickleRick, I've also submitted a false positive report at https://www.clamav.net/reports/fp.
We are currently using Splunk Enterprise on-premises, and the client has expressed plans to migrate to Splunk Cloud. In addition, they have clearly stated a need to work specifically with Synthetic Monitoring and Real User Monitoring (RUM). While it appears they intend to adopt Splunk Cloud as the primary observability platform, I would like to confirm whether their strategy involves solely utilizing Splunk Cloud or whether they intend to integrate AWS or Azure cloud platforms as part of the observability or hosting architecture. Could you please provide guidance on whether the migration includes leveraging Splunk Cloud hosted on a public cloud provider (e.g., AWS or Azure), or whether there is a broader hybrid/cloud-native observability strategy in play?
We are doing a dry run of a Splunk 9.0.2 upgrade to 9.2.4, and when running splunk show kvstore-status we just get status: starting. How do we get this started? Bear in mind that we will be running this in prod in the near future.

/opt/splunk/bin/splunk show kvstore-status
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.

This member:
  backupRestoreStatus : Ready
  disabled : 0
  guid : 9AEF8531-6F71-46C8-AC9F-F4EEE7FFE8DB
  port : 7511
  standalone : 0
  status : starting
  storageEngine : wiredTiger
/opt/splunk/bin/splunk show kvstore-status
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: admin
Password:

This member:
  backupRestoreStatus : Ready
  disabled : 0
  guid : 9AEF8531-6F71-46C8-AC9F-F4EEE7FFE8DB
  port : 7511
  standalone : 0
  status : starting
  storageEngine : wiredTiger
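While waiting for the KV store to come out of "starting", it may help to look at the underlying logs. A sketch of where I'd look, assuming a default /opt/splunk install path:

```
# KV store runs an embedded mongod; its startup errors land here
tail -50 /opt/splunk/var/log/splunk/mongod.log

# splunkd.log also records KVStore lifecycle messages
grep -i kvstore /opt/splunk/var/log/splunk/splunkd.log | tail -20
```

If mongod.log shows a storage-engine or version mismatch during the upgrade, that's usually the reason the status never moves past "starting".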
I think I was going too far down that particular rabbit hole. I was planning to combine audit.log and linux secure into one sourcetype, but finally realized there's no good reason for doing that when I can simply search both sourcetypes.
That was a great catch! But that was just a typo on my part. All of this is happening on an air-gapped system, so I'm having to hand-jam all of this over.
Hi livehybrid, thanks for your answer. I finally found the solution, and it was easier than expected. There's a note in the SC4S guide saying: "When configuring a fortigate fortios device for TCP syslog, port 601 or an RFC6587 custom port must be used. UDP syslog should use the default port of 514."

I had read this and tried it at first, but it did not seem to work, basically because I was using test events sent via netcat without specifying any newline character. I tried again, simply adding "\r\n" at the end of the event blob and setting this environment variable in the SC4S env file:

SC4S_LISTEN_RFC6587_PORT=601

It works perfectly now; I also tried it in the production environment with live events. Regards
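For anyone hitting the same symptom, the fix boils down to terminating each TCP syslog event with a newline sequence so SC4S's non-transparent RFC6587 framing can split the stream. A minimal Python sketch of a test sender (host and message are placeholders for your own SC4S instance):

```python
import socket

def frame_event(msg: str) -> bytes:
    # Non-transparent RFC6587 framing: each event must end with a
    # line terminator, otherwise the receiver keeps buffering and
    # the event never shows up in Splunk.
    return msg.encode("utf-8") + b"\r\n"

def send_event(host: str, port: int, msg: str) -> None:
    # Port 601 matches SC4S_LISTEN_RFC6587_PORT=601 from the env file.
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(frame_event(msg))
```

This is the programmatic equivalent of what netcat was missing: the raw event blob was being sent with no trailing "\r\n", so SC4S never saw a complete frame.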
"With 9 indexers and an RF of 3, each data bucket will be replicated across 3 of the 9 indexers. This means that any given bucket will have 3 copies, ensuring redundancy and high availability."

This is not entirely true.

1. "Ensuring" redundancy and HA depends on the organization's risk acceptance level.

2. More importantly, "each data bucket will be replicated across 3 indexers" suggests that the whole bucket data is replicated, which is not true, as you pointed out earlier when mentioning RF and SF.
Hi @TheJagoff

If you do not want your main index buckets to be archived when frozen, then you should set both coldToFrozenDir and coldToFrozenScript to blank values (which is the default unless specified elsewhere):

coldToFrozenDir =
coldToFrozenScript =

By not setting coldToFrozenDir and not having a coldToFrozenScript, you effectively cause the data to be deleted when frozen.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
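For reference, a minimal sketch of what the stanza might look like in indexes.conf (the stanza name assumes the "main" index from the question; adjust to your index):

```
[main]
# Leaving both settings blank (the default) means frozen buckets
# are deleted rather than archived.
coldToFrozenDir =
coldToFrozenScript =
```

Setting them explicitly blank like this is mainly useful when a non-blank value is inherited from another configuration layer and needs to be overridden.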
Thanks so much for the info! The app I was working in was Mission Control, going along with the video, but your suspicion about the ES version is probably spot on. The video is two years old, and I just upgraded to the latest version recently. This was the video: https://youtu.be/xhfb5Cc11Tg?t=177

I understand what you mean about creating the event from the analyst queue. I'm just confused about how to add more searched events when performing manual searches. There is an Events tab within the investigation. From what I'm seeing, you would go to the Search tab, and if you find anything else of interest you should be able to add it to the investigation you created. Right now the only way I can populate additional searches into this tab is by using the add events macro, which works fine, but it can cause accidental additions if my SPL catches other entries that I don't want added to the investigation. A better way would be to let me find the event myself in search and tell Splunk to add it. Hope that makes sense?

I did training on the previous version of Splunk for investigations, and this newer Mission Control is totally different and appears to lack some of the functionality of the older version. Or perhaps I'm just missing something in the investigation workflow in this version of Splunk ES. I see Response is probably the primary tab to work in, but it feels lacking at the moment, probably because everything is still at the defaults.
I have a coldToFrozenScript that controls all of the indexes at an installation. I want the data in the "main" index to simply be deleted when it's time to be frozen. My question is: if I set coldToFrozenDir for the "main" stanza in indexes.conf to a blank value, will it delete the buckets?

coldToFrozenDir =

Thank you.
Hello @livehybrid . It works. Thanks a lot
Yes, this is exactly what I expected. Thank you for confirming the way it works.
There is most probably a better way to achieve your goal. Try to describe the logic behind what you're trying to do. Anyway,

| dedup A
| table A

is usually _not_ the way to go. You'd rather want to do

| stats values(A) as A
| mvexpand A
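To see the difference on synthetic data, here's a self-contained run-anywhere sketch (the makeresults input values are made up for illustration):

```
| makeresults
| eval A=split("x,y,x,z,y", ",")
| mvexpand A
| stats values(A) as A
| mvexpand A
```

The stats values() step collapses everything to the distinct values in one pass, and the final mvexpand turns them back into one row per value, without dedup having to scan every raw event.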
Thanks a lot @ITWhisperer, but it seems like my account was deactivated. Could you help me restore it? Thanks
Try this splunkcommunity.slack.com
Hi @isoutamo, is the Slack channel still alive? The link you provided is not working anymore. Thanks!
I am monitoring some of the information from the TrackMe App, and I noticed that for the trackme_host_monitoring lookup (i.e. the data host monitoring utils of the app), all the hosts I can see have a data_last_time_seen value later than 04/03. Today is 05/12. If I use the metadata command, I can see hosts that have not sent logs since before 04/03, e.g. 03/31. Which config/macro of the app is likely the cause?
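For comparison, a metadata search along these lines should list hosts by their last event time (the index pattern is a placeholder; scope it to the indexes TrackMe monitors):

```
| metadata type=hosts index=*
| eval lastSeen=strftime(lastTime, "%Y-%m-%d %H:%M:%S")
| sort lastTime
| table host lastSeen
```

Hosts sorting to the top here (e.g. last seen 03/31) but showing a newer data_last_time_seen in the lookup would point at the app's tracking scope or retention settings rather than the data itself.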
Updated response - you said 2d would be OK - essentially, the subsearch needs to be fewer than 50,000 events, so if 15d matches that requirement, then use 15d otherwise use a smaller amount (like 2d as you suggested).
Hi @koshyk

Are you currently using the rules from ESCU without modification at all (e.g. just enabling the search)?

If you make changes to an ESCU rule/search then the changes will be applied to ./local/savedsearches.conf on your Splunk deployment. These changes will not be overwritten by future updates to the published ESCU app; however, note that this can cut both ways, as upstream changes made to resolve issues might not take effect. Only the modified keys are overridden in savedsearches.conf, so if you modify the actual search, then future changes to that search from ESCU will not be applied.

A lot of users opt to clone the ESCU rules and apply their organisation name as a prefix to the rules; this means they can always compare between the current ESCU rule and their custom copy. There is also an app on Splunkbase (ESCU Companion App) which looks like a good way to monitor changes between cloned rules and the current ESCU definitions.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
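To illustrate the key-level merge (the stanza and search text below are illustrative, not from a real ESCU release):

```
# ESCU app: default/savedsearches.conf (shipped, replaced on upgrade)
[ESCU - Example Detection - Rule]
search = | tstats count from datamodel=Endpoint by Processes.process
cron_schedule = 0 * * * *

# Your deployment: local/savedsearches.conf (survives upgrades)
[ESCU - Example Detection - Rule]
search = | tstats count from datamodel=Endpoint by Processes.process | where count > 5
```

Because `search` exists in local, any future upstream fix to `search` in default is masked, while a key you never touched (like `cron_schedule` here) still follows upstream updates.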