All Posts

@hv64  Please review this older solution for reference https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-DB-Connect-connection-to-Hana/m-p/311647 
Hello, I have a search that takes 5 minutes to complete when looking at only the last 24 hours. If possible, could someone help me figure out how to improve it? I need to dedup by SessionId and combine 3 fields into a single field.

source="mobilepro-test" | dedup Session.SessionId | strcat UserInfo.UserId " " Location.Site " " Session.StartTime label | table Session.SessionId, label

It looks like the dedup is causing the slowness, but I have no idea how to improve that. Thanks for any help on this one, Tom
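For what it's worth, a pattern that often helps here (a sketch only, built from the field names in your search): dedup runs largely on the search head, while stats can aggregate on the indexers, so replacing dedup with stats usually scales much better. first() approximates dedup's keep-the-first-matching-event behaviour:

source="mobilepro-test"
| fields Session.SessionId, UserInfo.UserId, Location.Site, Session.StartTime
| stats first(UserInfo.UserId) as UserId, first(Location.Site) as Site, first(Session.StartTime) as StartTime by Session.SessionId
| eval label=UserId." ".Site." ".StartTime
| table Session.SessionId, label

The early fields command also cuts the amount of data pulled off disk, which tends to help on its own.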
Hi, we want to connect Splunk to an SAP HANA database. Does anyone have an idea how? Do we use ngdbc.jar and put that driver in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers? Regards.
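Not an authoritative answer, but as a sketch: yes, ngdbc.jar goes into $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers, and if HANA doesn't appear among the available connection types you can declare it in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/local/db_connection_types.conf. The JDBC class and URL scheme below come from SAP's driver; the stanza name, serviceClass, and port (3<instance>15) are assumptions to verify against your DB Connect version:

[hana]
displayName = SAP HANA
serviceClass = com.splunk.dbx2.DefaultDBX2JDBC
jdbcDriverClass = com.sap.db.jdbc.Driver
jdbcUrlFormat = jdbc:sap://<host>:<port>
port = 30015

A restart of Splunk (or a reload of DB Connect) should then make the type selectable when creating a connection.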
Hi @vtamas Just to check, how many licenses do you have listed on the license page of your License Manager? As far as I know you should have 3: the Core/Enterprise license, the ITSI entitlement license, and the ITSI "internal" license (not to be confused with the other ITSI license; this one allows ITSI sourcetypes to be ingested without impacting your main core license). Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
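If it's easier than clicking through the UI, a search like this run on the License Manager should list what's installed (a sketch; the rest endpoint is standard, but the exact field names can vary slightly between versions):

| rest /services/licenser/licenses
| table label, type, status, quota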
@Mahendra_Penuma  I’m not familiar with the process for generating a diag file for the Edge Processor, but you may want to refer to this resource to see if it helps https://docs.splunk.com/Documentation/SplunkCloud/9.3.2408/EdgeProcessor/Troubleshooting#Generate_a_diagnostic_report_for_an_Edge_Processor_instance 
Need assistance creating a diag file on Splunk Edge Processor.
OK, a stupid question since I don't know ITSI. But ES has this nasty role configurator in the web UI, and you cannot just add capabilities to a role using the standard Splunk role settings screen; you have to do it in ES and let the ES "modular input" that manages capabilities do its magic. Doesn't ITSI have an equivalent of that? We had similar errors when trying to manage ES capabilities directly instead of via ES's internal mechanisms.
We have a lab Splunk deployment with the following specification:
3 indexers in an indexer cluster
1 SH for normal searches
1 SH with ITSI installed
1 SH with Enterprise Security installed
1 server that acts as the cluster manager for the indexers and as the license manager
We have NFR licenses (Enterprise, ITSI) installed on the license manager, and all the other servers are configured as license peers. With this setup the problem is that the ITSI license doesn't work, and we only get IT Essentials Work. When the ITSI license is installed directly on the ITSI server, ITSI works correctly (but the other licenses don't apply in that case, because they are installed on the license manager). We installed the required applications (SA-ITSI-Licensechecker and SA-UserAccess) on the license manager as per the official documentation. Did anyone encounter a similar problem, and if so, what was the solution?
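For completeness, the peer side of that setup is just a pointer at the license manager in server.conf, roughly like this (the hostname is a placeholder, and on older releases the setting is master_uri rather than manager_uri):

[license]
manager_uri = https://license-manager.example.com:8089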
He has them, but there is still an error. Is there anything in the conf files? These are the capabilities: accelerate_search bulk_import_service_or_entity change_own_password configure_mltk_container configure_perms control_mltk_container delete_drift_detection_results delete_itsi_correlation_search delete_itsi_custom_threshold_windows delete_itsi_data_integration delete_itsi_deep_dive delete_itsi_deep_dive_context delete_itsi_drift_detection_template delete_itsi_event_management_export delete_itsi_event_management_state delete_itsi_glass_table delete_itsi_homeview delete_itsi_kpi_at_info delete_itsi_kpi_base_search delete_itsi_kpi_entity_threshold delete_itsi_kpi_state_cache delete_itsi_kpi_threshold_template delete_itsi_notable_aggregation_policy delete_itsi_notable_event_email_template delete_itsi_refresh_queue_job delete_itsi_sandbox_service delete_itsi_service delete_itsi_temporary_kpi delete_maintenance_calendar delete_module_interface delete_notable_event edit_log_alert_event edit_own_objects edit_search_schedule_window edit_sourcetypes edit_statsd_transforms edit_token_http embed_report entities_at_configurations_get execute-notable_event_action execute_notable_event_action export_results_is_visible get_drift_detection_kpis get_drift_detection_results get_metadata get_typeahead input_file interact_with_itsi_correlation_search interact_with_itsi_deep_dive interact_with_itsi_deep_dive_context interact_with_itsi_event_management_state interact_with_itsi_glass_table interact_with_itsi_homeview interact_with_itsi_notable_aggregation_policy kpis_at_configurations_get list_accelerate_search list_all_objects list_health list_inputs list_metrics_catalog list_mltk_container list_search_head_clustering list_settings list_storage_passwords list_tokens_own metric_alerts output_file pattern_detect read-notable_event read-notable_event_action read_itsi_backup_restore read_itsi_base_service_template read_itsi_correlation_search read_itsi_custom_threshold_windows read_itsi_data_integration read_itsi_deep_dive read_itsi_deep_dive_context read_itsi_drift_detection_template read_itsi_entity_discovery_searches read_itsi_entity_management_policies read_itsi_event_management_export read_itsi_event_management_state read_itsi_glass_table read_itsi_homeview read_itsi_kpi_at_info read_itsi_kpi_base_search read_itsi_kpi_entity_threshold read_itsi_kpi_state_cache read_itsi_kpi_threshold_template read_itsi_notable_aggregation_policy read_itsi_notable_event_email_template read_itsi_refresh_queue_job read_itsi_sandbox read_itsi_sandbox_service read_itsi_sandbox_sync_log read_itsi_service read_itsi_team read_itsi_temporary_kpi read_maintenance_calendar read_metric_ad read_module_interface read_notable_event read_notable_event_action request_remote_tok rest_access_server_endpoints rest_apps_view rest_properties_get rest_properties_set rtsearch run_collect run_custom_command run_dump run_mcollect run_msearch run_sendalert schedule_rtsearch schedule_search search search_process_config_refresh upload_lookup_files upload_onnx_model_file write-notable_event write_itsi_correlation_search write_itsi_custom_threshold_windows write_itsi_data_integration write_itsi_deep_dive write_itsi_deep_dive_context write_itsi_drift_detection_template write_itsi_event_management_export write_itsi_event_management_state write_itsi_glass_table write_itsi_homeview write_itsi_kpi_at_info write_itsi_kpi_base_search write_itsi_kpi_entity_threshold write_itsi_kpi_state_cache write_itsi_kpi_threshold_template write_itsi_notable_aggregation_policy write_itsi_notable_event_email_template write_itsi_refresh_queue_job write_itsi_sandbox write_itsi_sandbox_service write_itsi_sandbox_sync_log write_itsi_service write_itsi_temporary_kpi write_maintenance_calendar write_metric_ad write_module_interface write_notable_event
OK. This is very, very strange. I've had logs with < and > signs many times over my years of Splunk experience and never noticed such behaviour. It is possible that you're triggering some obscure bug, so it's important to narrow down its scope (as I wrote earlier, try to pinpoint the exact moment when this issue appears: whether it's the transaction command, the table command after transaction, or whether it happens with the table command without transaction as well). And it's most probably support case material.
Anyhow, I strongly recommend using that last option, as @PickleRick also presented!
@whitefang1726  The error does not indicate the total size of the KV Store. Instead, it means the data returned by a specific query is too large (exceeds 50 MB). Your query is likely retrieving too many records or large documents from the KV Store, exceeding the 50 MB limit per result set.
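If the collection is being read through the REST API, the usual workaround is exactly what the message suggests: page through the data with the limit and skip query parameters instead of pulling everything at once. A sketch with placeholder app, collection, and credentials:

curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/myapp/storage/collections/data/mycollection?limit=10000&skip=0"
curl -k -u admin:changeme "https://localhost:8089/servicesNS/nobody/myapp/storage/collections/data/mycollection?limit=10000&skip=10000"

Each call stays under the per-result cap, and you keep incrementing skip until the response comes back empty.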
@whitefang1726 In the Splunk KV store, max_size_per_result_mb controls the maximum size of a result set (in MB) that can be returned from a single query to a collection. The default is 50 MB, but it's recommended to increase it if you need to retrieve larger results from the KV store.

https://docs.splunk.com/Documentation/Splunk/latest/admin/Limitsconf

max_size_per_result_mb = <unsigned integer>
* The maximum size, in megabytes (MB), of the result that will be returned for a single query to a collection.
* Default: 50
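If you do decide to raise it, the setting lives in the [kvstore] stanza of limits.conf on the search tier, for example (the value 100 is only an illustration; a restart is required afterwards):

[kvstore]
max_size_per_result_mb = 100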
Hi @a1bg503461 Please can you share the capabilities listed when the user runs:

|rest /services/authentication/current-context

If they are unable to run this, then they are missing the rest_properties_get capability. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hello Splunkers, good day! I'm getting this error consistently. Out of confusion, does this mean it's the estimated KV store size? The max_size_per_result_mb value is currently set to the default (50 MB).

KVStorageProvider [123456789 TcpChannelThread] - Result size too large, max_size_per_result_mb=52428800, Consider applying a skip and/or limit

Thanks!
Hello, we use Splunk Enterprise 9.3.2 with LDAP integration. We granted an AD group 90 capabilities in ITSI to cover the analyst role above, so they can create correlation searches, episodes, and policies but not delete them. These particular users are getting an error. Does anyone know why access gets blocked?
This is in the search app window, not a created dashboard. But I'd expect it would be the same behaviour.
This is happening on the search page; I haven't even created a dashboard.
To be fully honest, I have no idea what's going on if this is indeed the only thing that's happening. Is this the output in the search app window? Or is it the output of some dashboard panel powered by the search you've provided? Anyway, I'd start by checking whether just listing the raw events causes the same issue. If it doesn't, add commands back one by one to see when the issue appears.
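For example, a progression like this (index, sourcetype, and field names are placeholders, since the original search isn't shown here):

index=myindex sourcetype=mysourcetype
index=myindex sourcetype=mysourcetype | transaction SessionId
index=myindex sourcetype=mysourcetype | transaction SessionId | table _time, field1, field2

Run each variant separately; the first one that shows the mangled output tells you which command introduces the problem.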
OK. There are some theoretical aspects, but there is also the practical one: I'm not aware of any way of "splitting" an SHC as such, and while you can bootstrap a clean SHC from the same deployer, as @isoutamo said, that's probably not the best choice since you will want to keep those SHCs different after all. So when we were doing this for one of our customers, we did the following:
1. Spin up a new environment with a completely clean deployer and clean SHs.
2. Copy selected apps from the old deployer to the new deployer.
3. Copy the modified app state from one of the SHC members and merge it with the apps on the new deployer (this might not apply to you if your users don't have permissions to modify the apps on the SHC and your admins don't do it either).
4. While modifying the built-in apps is in itself not best practice, sometimes people do make those changes. We migrated changes from the built-in apps (like apps/search/local) into custom apps (e.g. search_migrated).
5. Migrating users and their content, if you want to do that, might be problematic. We didn't bother.
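As a rough sketch of steps 1 and 2 (hostnames and paths are placeholders; adjust to your layout):

# On the old deployer: copy the selected apps to the new deployer
rsync -av /opt/splunk/etc/shcluster/apps/my_app/ new-deployer:/opt/splunk/etc/shcluster/apps/my_app/

# On the new deployer: push the bundle to one member of the new SHC
splunk apply shcluster-bundle -target https://new-sh1.example.com:8089 -auth admin:<password>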