@whitefang1726 The error does not indicate the total size of the KV Store. Instead, it means the data returned by a specific query is too large (exceeds 50 MB). Your query is likely retrieving too many records or large documents from the KV Store, exceeding the 50 MB limit per result set.
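As a rough illustration only (my_kvstore_lookup is a placeholder for whatever lookup definition points at your collection), you can page through the collection with inputlookup's start and max options instead of pulling everything back in one query:

First page:
| inputlookup max=50000 start=0 my_kvstore_lookup

Next page:
| inputlookup max=50000 start=50000 my_kvstore_lookup

Each call then only has to stay under the per-result limit; adjust max to whatever keeps a single page below 50 MB for your document sizes.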
@whitefang1726 In the Splunk KV store, max_size_per_result_mb controls the maximum size of a result set (in MB) that can be returned from a single query to a collection. The default value is 50 MB, but you can increase it if you need to retrieve larger results from the KV store. See https://docs.splunk.com/Documentation/Splunk/latest/admin/Limitsconf

max_size_per_result_mb = <unsigned integer>
* The maximum size, in megabytes (MB), of the result that will be
  returned for a single query to a collection.
* Default: 50
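If you do decide to raise the limit, a minimal sketch of the change (the value and file location here are examples only; pick a size that fits your environment, and note that limits.conf changes generally need a restart to take effect):

# $SPLUNK_HOME/etc/system/local/limits.conf (or an appropriate app's local/ directory)
[kvstore]
# example value only - size the cap to your actual result sets
max_size_per_result_mb = 100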
Hi @a1bg503461

Please can you share the capabilities listed when the user runs:

|rest /services/authentication/current-context

If they are unable to run this then they are missing the rest_properties_get capability.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
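If it helps, here is a quick sketch for checking that capability directly (it assumes the user can at least run the rest command; splunk_server=local keeps the call on the local search head):

| rest /services/authentication/current-context splunk_server=local
| fields username roles capabilities
| mvexpand capabilities
| search capabilities="rest_properties_get"

If that search comes back empty (or fails outright), the capability is missing from their roles.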
Hello Splunkers, Good Day!

I'm getting this error consistently. Out of confusion, does this mean it's the estimated KV store size? The max_size_per_result_mb value is currently set to the default (50 MB).

KVStorageProvider [123456789 TcpChannelThread] - Result size too large, max_size_per_result_mb=52428800, Consider applying a skip and/or limit

Thanks!
Hello,

We use Splunk Enterprise 9.3.2 and LDAP integration. We granted an AD group 90 capabilities in ITSI to cover the above analyst role, so they can create correlation searches, episodes and policies but not delete them.

These particular users are getting an error. Does anyone know why access gets blocked?
To be fully honest, I have no idea what's going on if this is indeed the only thing that's happening. Is this the output in the search app window? Or is it an output of some dashboard panel powered by the search you've provided? Anyway, I'd start with checking if just listing raw events causes the same issue. If it does, add more and more commands one by one to see when the issue appears.
OK. There are some theoretical aspects but there is also the practical one - I'm not aware of any way of "splitting" the SHC as such, and while you can bootstrap a clean SHC from the same deployer, as @isoutamo said, it's probably not the best choice since you will want to keep those SHCs different after all.

So when we were doing this for one of our customers we did:

1. Spin up a new environment with a completely clean deployer and clean SHs.
2. Copy out the selected apps from the old deployer to the new deployer.
3. Copy out the modified apps' state from one of the SHCs and merge it with the apps on the new deployer (this one might not apply to you if your users don't have permissions to modify the apps on the SHC and your admins do not do it either).
4. While modifying the built-in apps is in itself not the best practice, sometimes people do make those changes. We migrated changes from the built-in apps (like apps/search/local) into custom apps (i.e. search_migrated).
5. Migrating users and their content, if you want to do that, might be problematic. We didn't bother.
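For steps 2-4 the mechanics were roughly as below (hostnames, paths and the app name are placeholders, and this assumes the default management port on the new cluster members):

Copy an app from the old deployer to the new one:
scp -r old-deployer:/opt/splunk/etc/shcluster/apps/my_app new-deployer:/opt/splunk/etc/shcluster/apps/

Then, on the new deployer, push the bundle to the new cluster:
splunk apply shcluster-bundle -target https://new-sh1:8089 -auth admin:changeme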
Hi @liangliang

You can separate some of the SHC away, but I would recommend using a new SH deployer for them in order to effectively manage the cluster, as "The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on." (See docs here).

Be sure to clear the raft and re-bootstrap the cluster (see https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Handleraftissues#Fix_the_entire_cluster) to configure the captaincy. There is some other good background info at https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCarchitecture which might also help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
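For reference, the raft reset and re-bootstrap from that doc typically looks something like this (member URIs and credentials below are placeholders):

On every member of the new cluster:
splunk stop
splunk clean raft
splunk start

Then, on the member you want to start as captain:
splunk bootstrap shcluster-captain -servers_list "https://sh1:8089,https://sh2:8089,https://sh3:8089" -auth admin:changeme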
Hi Ricklepick. Thanks for your info.

We have a lot of data in our network. We lost logs from this device at approx 05:15 in the morning, local time. At that time, there isn't a lot of traffic on our network. We had not experienced any lack of connectivity in the period when we were missing these logs from this device. If it was the load balancer, then we should be missing logs from more than one device.

Our syslog sources send logs through the NetScaler to the syslog servers. The syslog servers then send the syslogs to the Splunk indexer cluster, which sends them to the heavy forwarders.

Brgds DD
Thank you for your reply @livehybrid

1) Yes, they are unique.
2) Yes, I thought about that, but I could find it only on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers, so I renamed it on both indexers.

I'll check again tomorrow to see if it made any difference.
If your two new SHCs have the same content then you can theoretically use one deployer to manage both of them, but I think you want to keep the content of those SHCs different?

I haven't tried this, but maybe it works if you can separate those nodes physically at the network level? Do this at your own risk! I expect that this is not a supported way to do it! Split the members into two groups and keep the deployer in the bigger group, where the majority of nodes is. This group should automatically recover from the loss of the other members. If not, do the normal steps for removing members and sync the SHC and KV store. For the second group you must replicate the current deployer to it. In the docs there are instructions on how to replace/recover a deployer. Then you probably need to do a manual captain election to get the other SHC up and running. I'm not sure if you can change those deployers to new names or not. If not, then you will probably get some issues later on!

I think the better way is to just create an additional SHC and deployer and then migrate the needed apps and users from the old one to the new one. This is the official and supported way. Anyhow, you must do an offline backup of the KV store and the nodes before starting the migration, and you should definitely try it in a test environment first!
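For the KV store backup and member-removal parts, the commands are roughly these (the archive name and member URI are only examples):

KV store backup, run on a cluster member:
splunk backup kvstore -archiveName kvstore_pre_split

Removing a member from the original cluster:
splunk remove shcluster-member                              (run on the member itself)
splunk remove shcluster-member -mgmt_uri https://shX:8089   (run from the captain)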
@liangliang Migration from a standalone search head to an SHC

Here is the document that discusses how to migrate from a standalone search head to a Search Head Cluster: https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/Migratefromstandalonesearchheads