All Posts


Hello Splunkers, good day! I'm getting this error consistently. I'm a bit confused: does this mean it's the estimated KV store size? The max_size_per_result_mb value is currently set to the default (50 MB).  KVStorageProvider [123456789 TcpChannelThread] - Result size too large, max_size_per_result_mb=52428800, Consider applying a skip and/or limit   Thanks!
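For reference, 52428800 bytes works out to exactly 50 MB, so the number in the log looks like the configured limit expressed in bytes rather than an estimate of your KV store size. If you genuinely need larger single results, the limit lives in limits.conf on the search head; a minimal sketch, assuming you raise it via a local override (the value 100 is just an example):

  # $SPLUNK_HOME/etc/system/local/limits.conf (or a custom app's local/limits.conf)
  [kvstore]
  # Maximum size, in MB, of a single KV store query result; the default is 50
  max_size_per_result_mb = 100

Alternatively, as the message itself suggests, you can page through the collection with the skip and limit query parameters on the /storage/collections/data/<collection> REST endpoint instead of raising the limit.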
Hello,   We use Splunk Enterprise 9.3.2 with LDAP integration. We granted an AD group 90 capabilities in ITSI to cover the above analyst role so they can create correlation searches, episodes and policies, but not delete them. These particular users are getting this error:   Does anyone know why access gets blocked?
This is in the Search app window, not a created dashboard. But I accept it would be the same behaviour.
This is happening on the search page, not even in a created dashboard.
To be fully honest, I have no idea what's going on if this is indeed the only thing that's happening. Is this the output in the search app window? Or is it an output of some dashboard panel powered by the search you've provided? Anyway, I'd start with checking if just listing raw events causes the same issue. If it does, add more and more commands one by one to see when the issue appears.  
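To make that incremental approach concrete, here is a rough sketch (the index, sourcetype and fields are placeholders, not taken from the thread) - run each variant on its own and stop at the first one that misbehaves:

  index=your_index sourcetype=your_sourcetype
  index=your_index sourcetype=your_sourcetype | stats count by host
  index=your_index sourcetype=your_sourcetype | stats count by host | where count > 100

Whichever command you added last before the behaviour changes is the one to investigate.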
OK. There are some theoretical aspects, but there is also the practical one - I'm not aware of any way of "splitting" the SHC as such. You can bootstrap a clean SHC from the same deployer, but as @isoutamo said, that's probably not the best choice since you will want to keep those SHCs different after all. So when we were doing this for one of our customers we did:
1. Spin up a new environment with a completely clean deployer and clean SHs.
2. Copy the selected apps from the old deployer to the new deployer.
3. Copy the modified app state from one of the SHCs and merge it with the apps on the new deployer (this one might not apply to you if your users don't have permissions to modify the apps on the SHC and your admins do not do it either).
4. While modifying the built-in apps is in itself not the best practice, sometimes people do make those changes. We migrated changes from the built-in apps (like apps/search/local) into custom apps (e.g. search_migrated), as sketched below.
5. Migrating users and their content, if you want to do that, might be problematic. We didn't bother.
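A rough sketch of step 4, assuming a typical /opt/splunk install and using the illustrative app name search_migrated from the post (adjust hosts and paths to your environment):

  # On the new deployer: create a custom app to receive the migrated settings
  mkdir -p /opt/splunk/etc/shcluster/apps/search_migrated/local
  # Pull the local overrides of the built-in search app from an old SHC member
  scp old-sh-member:/opt/splunk/etc/apps/search/local/*.conf \
      /opt/splunk/etc/shcluster/apps/search_migrated/local/
  # Review the copied files, add an app.conf, then push the bundle, e.g.
  # splunk apply shcluster-bundle -target https://<new-shc-member>:8089 -auth admin:<password>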
Hi @liangliang  You can separate some of the SHC away, but I would recommend using a new SH deployer for them in order to effectively manage the cluster, as "The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on." (See docs here.) Be sure to clear the raft and re-bootstrap the cluster (see https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Handleraftissues#Fix_the_entire_cluster) to re-establish captaincy; a command-level sketch follows below. There is some other good background info at https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCarchitecture which might also help.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
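A minimal sketch of the clean-raft and re-bootstrap sequence from the Handle raft issues doc (host names and credentials are placeholders):

  # On every cluster member whose raft state needs resetting:
  splunk stop
  splunk clean raft
  splunk start
  # Then, on the member you want to act as the initial captain:
  splunk bootstrap shcluster-captain \
      -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
      -auth admin:<password>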
Yes, it is working for me for other data sources. This one in particular is creating the problem. So I just wanted to know, what should I check?
No, it is not coded. Operators are there.
Thanks for your reply @isoutamo  The only change I can think of is that we replaced RHEL8 with RHEL9 recently. 
Hi Ricklepick. Thanks for your info. We have a lot of data in our network. We lost logs from this device at approx 05:15 in the morning, local time. At that time there isn't a lot of traffic on our network. We had not experienced any lack of connectivity in the period when we were missing these logs from this device. If it was the load balancer, then we should be missing logs from more than one device. Our syslog sources are sending logs through the NetScaler to the syslog servers. The syslog servers then send the syslogs to the Splunk index cluster, which sends them to the heavy forwarders. Brgds DD
Thank you for your reply @livehybrid  1) Yes, they are unique.  2) Yes, I thought about that, but could only find it on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers; I renamed it on both indexers. I'll check again tomorrow to see if it made any difference.
Thanks for your answer, I will try this in a test environment
If your two new SHCs have the same content then you can theoretically use one deployer to manage them both, but I think that you want to keep the content of those SHCs different? I haven't tried this, but maybe it works if you can separate those nodes physically at the network level? Do this at your own risk! I expect that this is not a supported way to do it!
Split those members into two groups and keep the deployer in the bigger group, where the majority of nodes is. This group should automatically recover from the loss of the other members. If not, do the normal steps for removing members and sync the SHC and KV store (a command sketch follows below).
For the second group you must replicate the current deployer to it. The docs have instructions on how to replace/recover a deployer. Then you will probably need to trigger captain election manually to get the other SHC up and running. I'm not sure if you can change those deployers to new names or not; if not, then you will probably get some issues later on!
I think that a better way is to just create an additional SHC and deployer and then migrate the needed apps and users from the old one to the new. This is the official and supported way. In any case, you must take an offline backup of the KV store and the nodes before starting the migration, and you should definitely try it in a test environment first!
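For the member-removal, resync and backup steps mentioned above, a minimal sketch of the relevant CLI calls (the host name and archive name are placeholders, and -auth may be required depending on your setup):

  # Remove a departed member, run from any remaining cluster member:
  splunk remove shcluster-member -mgmt_uri https://departed-sh.example.com:8089
  # Resync a member whose replicated configuration has drifted from the captain:
  splunk resync shcluster-replicated-config
  # Take a KV store backup before starting; the archive lands under var/lib/splunk/kvstorebackup:
  splunk backup kvstore -archiveName pre_split_backup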
@kiran_panchavat  Thanks for your answer. We want to split a big search head cluster, not a multisite cluster.
@liangliang  Migration from a standalone search head to an SHC
Here is the document that discusses how to migrate from a standalone search head to a search head cluster: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Migratefromstandalonesearchheads
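At the command level, standing up the new cluster described in that doc boils down to initialising each new member and bootstrapping a captain; a minimal sketch (URIs, ports, secret and label are placeholders - follow the doc for the full migration of settings):

  # On each new cluster member:
  splunk init shcluster-config -auth admin:<password> \
      -mgmt_uri https://sh1.example.com:8089 \
      -replication_port 9200 \
      -replication_factor 3 \
      -conf_deploy_fetch_url https://deployer.example.com:8089 \
      -secret <shcluster_key> \
      -shcluster_label shcluster1
  splunk restart
  # Then, on the member chosen as the first captain:
  splunk bootstrap shcluster-captain \
      -servers_list "https://sh1.example.com:8089,https://sh2.example.com:8089,https://sh3.example.com:8089" \
      -auth admin:<password>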
@mpk_24 You're welcome. If this resolves your issue, please consider accepting the solution, as it may be helpful for others as well.
@kiran_panchavat Thank you so much for your insights and the assistance extended. 
@PickleRick Thank you so much for your valuable insights. 
@liangliang  You can deploy search head cluster members across multiple physical sites. You can also integrate cluster members into a multisite indexer cluster. However, search head clusters do not have site awareness. https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/DeploymultisiteSHC  https://community.splunk.com/t5/Deployment-Architecture/How-multisite-SH-clusters-work/m-p/594465