OK. A stupid question since I don't know ITSI. But ES has this nasty role configurator in the web UI and you cannot just add capabilities to a role using the standard Splunk role settings screen; you have to do it in ES and let the ES "modular input" that manages capabilities do its magic. Doesn't ITSI have an equivalent of that? We had similar errors when trying to manage ES capabilities directly instead of via ES internal mechanisms.
We have a lab Splunk deployment with the following specification:
- 3 indexers in an indexer cluster
- 1 SH for normal searches
- 1 SH with ITSI installed
- 1 SH with Enterprise Security installed
- 1 server that acts as the cluster manager for the indexers and as the license manager
We have NFR licenses (Enterprise, ITSI) installed on the license manager and all the other servers are configured as license peers. With the above setup the problem is that the ITSI license doesn't work, and we only get IT Essentials Work. When the ITSI license is installed directly on the ITSI server, ITSI works correctly (but then the other licenses don't apply, because those are installed on the license manager). We installed the required applications (SA-ITSI-Licensechecker and SA-UserAccess) on the license manager as per the official documentation. Did anyone encounter a similar problem and, if so, what was the solution?
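For reference, a license peer is normally pointed at the license manager in server.conf. A minimal sketch, assuming a 9.x instance (the hostname is a placeholder; on older versions the setting is named master_uri rather than manager_uri):

# server.conf on each license peer
[license]
manager_uri = https://license-manager.example.com:8089

A restart is required after the change; if all peers already show up under the license manager in the web UI, the peering itself is working.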
He has them but there is still an error. Is there anything in the conf files: accelerate_search bulk_import_service_or_entity change_own_password configure_mltk_container configure_perms co...
OK. This is very, very strange. I've had logs with < and > signs many times over my years of Splunk experience and never noticed such behaviour. It is possible that you're triggering some obscure bug, so it's important to narrow down its scope (as I wrote earlier, try to pinpoint the exact moment when this issue appears - whether it's the transaction command, the table command after transaction, or maybe it happens with the table command on its own as well). And it's most probably support-case material.
@whitefang1726 The error does not indicate the total size of the KV Store. Instead, it means the data returned by a specific query is too large (exceeds 50 MB). Your query is likely retrieving too many records or large documents from the KV Store, exceeding the 50 MB limit per result set.
@whitefang1726 In the Splunk KV store, max_size_per_result_mb controls the maximum size of a result set (in MB) that can be returned from a single query to a collection. The default value is 50 MB, but it can be increased if you need to retrieve larger results from the KV store. https://docs.splunk.com/Documentation/Splunk/latest/admin/Limitsconf

max_size_per_result_mb = <unsigned integer>
* The maximum size, in megabytes (MB), of the result that will be returned for a single query to a collection.
* Default: 50
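If you do decide to raise it, a minimal sketch of the override (the stanza lives in limits.conf; 100 is an arbitrary example value, and the instance needs a restart afterwards):

# $SPLUNK_HOME/etc/system/local/limits.conf (or an app's local directory)
[kvstore]
max_size_per_result_mb = 100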
Hi @a1bg503461 Please can you share the capabilities listed when the user runs:

| rest /services/authentication/current-context

If they are unable to run this then they are missing the rest_properties_get capability.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
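Following up on the rest query above: a variant that lists the capabilities one per row can make the output easier to compare against the expected list. A sketch, assuming the field names that endpoint returns (username, roles, capabilities):

| rest /services/authentication/current-context splunk_server=local
| fields username roles capabilities
| mvexpand capabilities
| table username capabilities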
Hello Splunkers, Good day! I'm getting this error consistently. Out of confusion: does this mean it's the estimated KV store size? The max_size_per_result_mb value is currently set to the default (50 MB).

KVStorageProvider [123456789 TcpChannelThread] - Result size too large, max_size_per_result_mb=52428800, Consider applying a skip and/or limit

Thanks!
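For what it's worth, the 52428800 in the log is just the 50 MB limit expressed in bytes, and the "skip and/or limit" hint means paging through the collection instead of pulling it in one query. A hypothetical sketch against the KV store REST endpoint (app name, collection name, and credentials are placeholders):

# fetch 10,000 records per request instead of the whole collection
curl -k -u admin:changeme \
  "https://localhost:8089/servicesNS/nobody/myapp/storage/collections/data/mycollection?limit=10000&skip=0"
# repeat with skip=10000, skip=20000, ... until an empty result comes back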
Hello, We use Splunk Enterprise 9.3.2 and LDAP integration. We granted an AD group 90 capabilities in ITSI to cover the analyst role, so they can create correlation searches, episodes, and policies but not delete them. These particular users are getting this error: Does anyone know why access gets blocked?
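For comparison, ITSI capabilities are usually granted by inheriting the roles ITSI ships (such as itoa_analyst) rather than by assigning dozens of capabilities individually, which is a common source of access errors. A hypothetical authorize.conf sketch (the role name is a placeholder):

# authorize.conf - map the AD group to a role that inherits ITSI's analyst role
[role_itsi_analysts]
importRoles = itoa_analyst;user
# deliberately not importing itoa_admin, which carries the delete capabilities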
To be fully honest, I have no idea what's going on if this is indeed the only thing that's happening. Is this the output in the search app window? Or is it the output of some dashboard panel powered by the search you've provided? Anyway, I'd start with checking whether just listing raw events causes the same issue. If it doesn't, add commands one by one, as in the sketch below, to see when the issue appears.
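A sketch of that narrowing-down process (index, sourcetype, and field names are placeholders):

Step 1 - raw events only:
index=myindex sourcetype=mysourcetype

Step 2 - add the table command:
index=myindex sourcetype=mysourcetype | table _raw

Step 3 - add the transaction command:
index=myindex sourcetype=mysourcetype | transaction session_id

Step 4 - the full pipeline:
index=myindex sourcetype=mysourcetype | transaction session_id | table session_id field1 field2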
OK. There are some theoretical aspects but there is also the practical one - I'm not aware of any way of "splitting" an SHC as such, and while you can bootstrap a clean SHC from the same deployer, as @isoutamo said, it's probably not the best choice since you will want to keep those SHCs different after all. So when we were doing this for one of our customers we did the following (a rough sketch of steps 2-4 follows below):
1. Spin up a new environment with a completely clean deployer and clean SHs.
2. Copy the selected apps from the old deployer to the new deployer.
3. Copy the modified apps' state from one of the SHCs and merge it with the apps on the new deployer (this one might not apply to you if your users don't have permissions to modify the apps on the SHC and your admins don't do it either).
4. While modifying the built-in apps is in itself not the best practice, sometimes people do make those changes. We migrated changes from the built-in apps (like apps/search/local) into custom apps (e.g. search_migrated).
5. Migrating users and their content, if you want to do that, might be problematic. We didn't bother.
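A rough shell sketch of steps 2-4, assuming default install paths (hostnames, app names, and credentials are placeholders):

# step 2: copy a selected app from the old deployer to the new one
rsync -av old-deployer:/opt/splunk/etc/shcluster/apps/my_app/ \
  new-deployer:/opt/splunk/etc/shcluster/apps/my_app/

# step 4: move built-in app changes into a custom app on the new deployer
mkdir -p /opt/splunk/etc/shcluster/apps/search_migrated/local
cp sh1-backup/etc/apps/search/local/*.conf \
  /opt/splunk/etc/shcluster/apps/search_migrated/local/

# push the merged bundle from the new deployer to the new SHC
/opt/splunk/bin/splunk apply shcluster-bundle \
  -target https://new-sh1:8089 -auth admin:changeme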
Hi @liangliang You can separate some of the SHC members away, but I would recommend using a new SH deployer for them in order to effectively manage the cluster, as "The deployer sends the same configuration bundle to all cluster members that it services. Therefore, if you have multiple search head clusters, you can use the same deployer for all the clusters only if the clusters employ exactly the same configurations, apps, and so on." (See docs here). Be sure to clear the raft and re-bootstrap the cluster (see https://docs.splunk.com/Documentation/Splunk/9.4.1/DistSearch/Handleraftissues#Fix_the_entire_cluster) to configure the captaincy. There is some other good background info at https://docs.splunk.com/Documentation/Splunk/latest/DistSearch/SHCarchitecture which might also help.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
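Regarding the raft reset mentioned above, the linked procedure boils down to commands along these lines (member URIs and credentials are placeholders; see the doc for the authoritative steps):

# on every member: stop, clear the raft state, start
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk clean raft
/opt/splunk/bin/splunk start

# then, on one member, bootstrap a new captain
/opt/splunk/bin/splunk bootstrap shcluster-captain \
  -servers_list "https://sh1:8089,https://sh2:8089,https://sh3:8089" \
  -auth admin:changeme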
Hi Ricklepick. Thanks for your info. We have a lot of data in our network. We lost logs from this device at approx. 05:15 in the morning, local time. At that time there isn't a lot of traffic on our network. We had not experienced any lack of connectivity in the period when we were missing these logs from this device. If it was the load balancer, then we should be missing logs from more than one device. Our syslog sources are sending logs through the NetScaler to the syslog servers. The syslog servers then send the syslogs to the Splunk indexer cluster, which sends them to the heavy forwarders. Brgds DD
Thank you for your reply @livehybrid
1) Yes, they are unique.
2) Yes, I thought about that, but I could find it only on one indexer. I had not touched the indexers for 3-4 days, and today the conflicting bucket appeared on 2 indexers, so I renamed it on both. I'll check again tomorrow to see if it made any difference.