All Posts



Saved searches are scheduled across the whole search head cluster (with some additional conditions, like a cluster member being in detention). That's what a search head cluster is for. Also, limiting searches to just one SH would inevitably lead to delayed/skipped searches; it wouldn't solve the performance issue. Even if you have multiple indexers holding the same bucket, the indexers holding primary copies respond with results from those primaries - that's by design and is what lets you distribute the search. And even if you could get results from just one indexer, there would be no guarantee that you'd get all events from a given time range, because with sf=rf=3 and 4 indexers you'd still probably hit (actually would _not_ hit) some buckets which are not present on that chosen indexer. So your idea is not a very good one. You can use site affinity to force search heads to use only one site. But again - especially if you already have performance problems - that's counterproductive. And from experience, it's often not the _number_ of searches but _how_ they're written.
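For reference, site affinity for a search head in a multisite cluster is set in server.conf. A minimal sketch (site name, manager host, and key are placeholders, not values from this thread; `manager_uri` is the newer name for `master_uri`):

```ini
# server.conf on the search head (illustrative values only)
[general]
site = site1          # prefer indexers in site1 when searching
                      # site = site0 disables site affinity entirely

[clustering]
mode = searchhead
multisite = true
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <cluster_secret>
```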
Thank you @Mitesh_Gajjar. Unfortunately, https://splunkbase.splunk.com/app/7404 gives this very useful information: "No information provided. Reach out to the developer to learn more." The link to the Cisco website is for a different app altogether, so I'm not much further along. https://splunkbase.splunk.com/app/5558 is also a different app. Thanks for your efforts, however!
Hello,

A client has gone mad with how many saved searches they require and won't get rid of them. This is hammering read/write I/O on the indexers to the point that they can't cope: they remove themselves from the cluster and then re-add, causing even more resource strain. Adding more indexers isn't an option. The current setup is a 3-VM multisite search head cluster and a 4-VM multisite indexer cluster.

As they only require rf=3 and sf=3, I am wondering if there is a way to run all saved searches on only one SH and one indexer, so that the load doesn't affect the other three indexers?
Yes. Dashboard Studio doesn't allow as much customization as classic dashboards - no custom visualizations, no custom code. But even in classic you'd have to implement it all yourself (but hey, that's how half of Enterprise Security is written ;-))
So there is no way I could do it in Dashboard Studio... but I can do it in Classic because I can edit the JavaScript? Thank you so much for the idea!
There is no out-of-the-box component that gives you this functionality. With classic dashboards you can add your own javascript code that will take care of storing the "notes" into kvstore so that it... See more...
There is no out-of-the-box component that gives you this functionality. With classic dashboards you can add your own JavaScript code that takes care of storing the "notes" in the KV store so that they can be shared and updated, but it is something you'd need to write yourself. And there are several possible issues with that (including, of course, permissions management, concurrent access, and so on). So it's not that straightforward.
So, they want to add notes and have them live for others to view. On the dashboard I will have tables and a notes section, and they want to add a note to the table/section in the dashboard view. Is that possible? I could maybe make a lookup or something, but I think I did not communicate what I wanted.
This worked beautifully, thank you!
Understood! I appreciate your answers.  I will keep this post unresolved for now and test it.
I'll give it a shot.  Thank you for your help!
Hi @alferone, I don't think so, if you have a fixed frequency of data updates. I prefer a summary index, for the reasons I listed in my earlier answer. Ciao. Giuseppe
No. As far as I know it's neither officially supported nor (well) documented. And until not so long ago you couldn't install both components from an RPM or DEB package because they were installed in the same place (/opt/splunk). More recent versions install in separate directories (/opt/splunk vs. /opt/splunkforwarder), so it might be possible to install both from packages (I haven't tried this myself, though, so I'd strongly advise testing in a lab first).
Hi @jpillai, OK, my hint is to evaluate the effort of changing the searches that use these indexes. Anyway, let us know if we can help you more, or, please, accept an answer for the other people of the Community. Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated by all the contributors.
I was considering that initially. Would the search be a little taxing, pulling all of the different tools and times together?
Hi @alferone, why don't you use a summary index? This way you're sure to have the last updated version, you also have the previous versions, and you don't have any limitation on the number of entries. Ciao. Giuseppe
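A minimal sketch of the summary-index approach (the index, tool, and field names here are invented for illustration, not taken from the thread). One scheduled search per tool records the latest sighting of each asset:

```spl
index=toolA_logs
| stats max(_time) AS last_seen BY asset
| eval tool="toolA"
| collect index=asset_summary
```

A reporting search then reads the summary across all tools:

```spl
index=asset_summary
| stats max(last_seen) AS last_seen BY asset tool
```

Since `collect` appends rather than overwrites, the summary index keeps the full history of sightings, which is the "previous versions" benefit mentioned above.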
The reason I need to "copy" the logs instead of splitting them is that there are some searches that use index1 to search the same data, so we still need to support that dependency. As the log volume being copied is relatively small, the additional license usage is not a major issue. It looks like a scheduled search with collect is what fits our requirement.
Hi, yes, I know the setup might look a bit overengineered, but it best fits our needs, as we need to "logically" separate the ES data from other Splunk use cases. Anyway, I wasn't aware that I could run a Universal Forwarder together with another Splunk Enterprise component. Is this supported, or is it at least officially documented somewhere?
To be fully honest, this setup seems a bit overcomplicated. I've seen setups with a single indexer cluster and multiple SHCs performing different tasks connecting to it, but multiple separate environments with events still sent between them... that's a bit weird. But hey, it's your environment. Actually, since you want to do some strange stuff with OS-level logs, this might be that one unique use case where it makes sense to install a UF alongside a normal Splunk Enterprise installation. That might be the easiest and least confusing solution.
Hello all, I have a requirement to list all of our assets and show the last time they appeared in the logs of many different tools. I wanted to use the KV store for this. We would run a search against each tool's logs and then update its "last seen" time in the KV store for the particular asset. I've attempted this a few ways, but I can't seem to get it working. I have the KV store collection built with one column of last_seen times for one tool, but I am lost on how to update last_seen times for other tools for existing entries in the KV store. Any guidance would be appreciated. Thank you!
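One common pattern for updating a single tool's column without losing the others is to read the collection back, merge in fresh results, and write the whole thing out again. A sketch, assuming a KV store lookup definition named `asset_tracker` keyed by `asset`, with one last-seen field per tool (all index, lookup, and field names here are invented for illustration):

```spl
| inputlookup asset_tracker
| append
    [ search index=toolB_logs earliest=-24h
      | stats max(_time) AS toolB_last_seen BY asset ]
| stats max(toolA_last_seen) AS toolA_last_seen
        max(toolB_last_seen) AS toolB_last_seen
        BY asset
| outputlookup asset_tracker
```

Because `stats max()` ignores null values, assets that didn't appear in the new toolB search keep their previously stored `toolB_last_seen`, and the existing `toolA_last_seen` column passes through untouched. One such search, scheduled per tool, keeps every column current.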