I think I was going too far down that particular rabbit hole. I was planning to combine audit.log and Linux secure into one sourcetype, but I finally realized there's no good reason to do that when I can just call on both sourcetypes.
That was a great catch! But it was just a typo on my part. All of this is happening on an air-gapped system, so I'm having to hand-jam all of it over.
Hi livehybrid, thanks for your answer. I finally found the solution to this, and it was easier than expected. There's a note in the SC4S guide saying: "When configuring a fortigate fortios device for TCP syslog, port 601 or an RFC6587 custom port must be used. UDP syslog should use the default port of 514." I read this and tried it at first, but it did not seem to work, essentially because I was sending test events via netcat without specifying any newline character. I tried it again, simply adding a "\r\n" at the end of the event blob and setting this environment variable in the SC4S env file: SC4S_LISTEN_RFC6587_PORT=601. It works perfectly now; I also tried it in the production environment with live events. Regards
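For anyone landing here later, a minimal sketch of the two pieces involved (the env file path and the test host are illustrative; adjust for your SC4S install):

```
# SC4S env file (path varies by install, e.g. /opt/sc4s/env_file)
SC4S_LISTEN_RFC6587_PORT=601

# Test from another host; the trailing \r\n is what lets the RFC6587
# framing parser delimit the event:
#   printf '<134>test event\r\n' | nc <sc4s-host> 601
```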
"With 9 indexers and an RF of 3, each data bucket will be replicated across 3 of the 9 indexers. This means that any given bucket will have 3 copies, ensuring redundancy and high availability." This is not entirely true:
1. "Ensuring" redundancy and HA depends on the organization's risk acceptance level.
2. More importantly, "each data bucket will be replicated across 3 indexers" suggests that the whole bucket's data will be replicated, which is not true, as you pointed out earlier when mentioning RF and SF.
Hi @TheJagoff  If you do not want your main index buckets to be archived when frozen, you should set both coldToFrozenDir and coldToFrozenScript to blank values (which is the default unless specified elsewhere):

coldToFrozenDir =
coldToFrozenScript =

By not setting coldToFrozenDir and not having a coldToFrozenScript, you effectively cause the data to be deleted when frozen.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
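For reference, a minimal indexes.conf sketch of the stanza described above (the blank values are shown explicitly; where this file lives depends on how you manage your index settings):

```
[main]
# No archive destination and no archive script:
# frozen buckets are simply deleted.
coldToFrozenDir =
coldToFrozenScript =
# Note: retention settings such as frozenTimePeriodInSecs still control
# *when* buckets freeze; only the archiving behaviour changes here.
```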
Thanks so much for the info! The app I was working in was Mission Control, following along with the video, but your suspicion about the ES version is probably spot on. The video is two years old, and I just upgraded to the latest version recently. This was the video: https://youtu.be/xhfb5Cc11Tg?t=177

I understand what you mean about creating the event from the analyst queue. I'm just confused about how to add more searched events when performing manual searches. There is an Events tab within the investigation. As I understand it, you would go to the Search tab and, if you find anything else of interest, you should be able to add it to the investigation you created. Right now the only way I can populate additional searches into this tab is by using the add-events macro, which works fine, but this can cause accidental additions if my SPL catches other entries in my search that I don't want added to the investigation. It seems like a better way would be to let me manually add the event by finding the search myself and telling Splunk to add it. Hope that makes sense?

I did training on the previous version of Splunk for investigations; this newer Mission Control is totally different and appears to lack some of the functionality of the older version. Or perhaps I'm just missing something in terms of the workflow for investigations in this version of Splunk ES. I see Response is probably the primary tab to work in, but it feels lacking at the moment, probably because everything is still at defaults.
I have a coldToFrozenScript that controls all of the indexes at an installation. I want the data in the "main" index to simply be deleted when it's time to be frozen. My question is: if I set coldToFrozenDir for the "main" stanza in indexes.conf to a blank value (coldToFrozenDir = ), will it delete the buckets? Thank you.
Hello @livehybrid . It works. Thanks a lot
Yes, this is exactly what I expected. Thank you for confirming the way it works.
There is most probably a better way to achieve your goal. Try to describe the logic behind what you're trying to do. Anyway,

| dedup A
| table A

is usually _not_ the way to go. You'd rather want to do

| stats values(A) as A
| mvexpand A
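As a quick illustration, here is a self-contained SPL sketch with made-up values (the makeresults/eval lines just fabricate sample events); the stats/mvexpand form collapses to the distinct values and then turns them back into rows:

```
| makeresults
| eval A=split("x,y,x,z", ",")
| mvexpand A
| stats values(A) as A
| mvexpand A
```

Unlike dedup, stats is a distributable transforming command, so it behaves predictably across indexers and is not sensitive to event ordering.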
Thanks a lot @ITWhisperer, but it seems like my account was deactivated. Could you help me restore it? Thanks
Try this: splunkcommunity.slack.com
Hi @isoutamo, is the Slack channel still alive? The link you provided is not working anymore. Thanks!
I am monitoring some of the information from the TrackMe app, and I noticed that in the trackme_host_monitoring lookup (AKA the data host monitoring utils of the app), all the hosts I can see have a data_last_time_seen value later than 04/03. Today is 05/12. If I use the metadata command, I can see hosts that have not sent logs since before 04/03, like 03/31. So which config/macro of the app is the probable cause?
Updated response: you said 2d would be OK. Essentially, the subsearch needs to return fewer than 50,000 events, so if 15d meets that requirement, use 15d; otherwise use a smaller window (like the 2d you suggested).
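One way to sanity-check a candidate window before relying on it is to run the subsearch body on its own and count the results (this sketch reuses the field names from the search elsewhere in this thread; it is not the exact limit check Splunk applies):

```
(index=xxxx) orgName=xxx sourcetype=CASE(SourceA) earliest=-15d uniqueIdentifier="Class.ClassName.MethodName*"
| stats count AS events dc(SourceASqlId) AS distinct_ids
```

If events is anywhere near the subsearch cap, shrink the earliest window until it is comfortably under.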
Hi @koshyk  Are you currently using the rules from ESCU without any modification (e.g. just enabling the search)?

If you make changes to an ESCU rule/search, the changes will be written to ./local/savedsearches.conf on your Splunk deployment. These changes will not be overwritten by future updates to the published ESCU app; however, note that this can have the opposite effect, as upstream changes made to resolve issues might not take effect. Only the modified keys are overridden in savedsearches.conf, so if you modify the actual search, future changes to that search from ESCU will not be applied.

A lot of users opt to clone the ESCU rules and apply their organisation name as a prefix; this means they can always compare their custom rule against the current ESCU rule. There is also an app on Splunkbase (ESCU Companion App) which looks like a good way to monitor changes between cloned rules and the current ESCU definitions.
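To illustrate the key-level override behaviour described above, a local override only pins the keys you changed (the path, stanza name, and search value below are illustrative; in Splunk Cloud this file layering is managed for you):

```
# $SPLUNK_HOME/etc/apps/DA-ESS-ContentUpdate/local/savedsearches.conf
[ESCU - Example Detection - Rule]
# Only this modified key is pinned by the local layer; every other key
# still follows the defaults shipped by future ESCU updates.
search = <your modified search>
```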
Hi @bmer  The subsearch is missing the "search" prefix; try this (adjusted to -2d as required):

(index=xxxx) orgName=xxx sourcetype=CASE(SourceB) earliest=-2d
    [ search (index=xxxx) orgName=xxx sourcetype=CASE(SourceA) earliest=-2d uniqueIdentifier="Class.ClassName.MethodName*"
    | dedup SourceASqlId
    | rename SourceASqlId as SourceBSqlId
    | table SourceBSqlId ]
| table SourceBSqlText
Hi folks, the scenario is as below:
- We have Enterprise Security (ESS) in Splunk Cloud, with ESCU (content updates) as part of it.
- If we enable an ESCU detection, it works all good.
- We need to modify the ESCU detection slightly, with a standard field and also the name of the search, to fit existing organisation policy.
- The uuid remains the same.
What will happen when the next ESCU update comes? Will it overwrite the custom changes? What is the ESCU update actually looking for: the search name or the search id (uuid)?
@livehybrid Also, I could not see any of the manually created apps in the manager-apps folder. Is this the correct path to search to get the list of apps on the CM?
@ITWhisperer I received an error saying "Error in 'SearchParser': Missing a search command before '('. Error at position '90' of search query 'search (index=xxxx) CASE(SourceA) source..." Also, is there any reason why the outer search is 15d whereas the subsearch is set to 2d? Is it for optimisation?