
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

We would like to patch the OS and would like to know what dependencies Splunk has on the RHEL 8 OS. Thanks.
I want to do a field extraction for my sourcetype under the Fields -> Calculated Fields section, but I'm confused about how to draft the if condition to achieve the following logic. Some events contain only the userid field; for those, check that it is not null/empty and use userid as the user field, otherwise fill in "unknown". Other events contain both the userid and cmdid fields; if the event has both, cmdid is the real user field. So in both cases the logic should first check which of the two fields exists and then derive the user accordingly.
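A minimal eval expression for that calculated field, as a sketch (it assumes the derived field should be called user and treats an empty string the same as a missing field; adjust the field names to your data):

if(isnotnull(cmdid) AND cmdid!="", cmdid, if(isnotnull(userid) AND userid!="", userid, "unknown"))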
Hi, while running the command "mvn clean install" I am getting this error (screenshot attached) while configuring the process monitoring extension. I have configured all the dependencies that are required (Java version 8 & Apache Maven 3.8.6). Kindly help me with this. Thank you in advance.
Hi folks, I am trying to understand the triggering option for the existing anomaly models available in UEBA. I tried to clone an existing model to understand the trigger option, but not all of the models show up on the clone page. Could you please let me know whether there is any way to identify the trigger option for an existing model?
Right now everyone must be facing the same issue regarding the removal of basic authentication by Microsoft. We are using the Splunk Add-on for Microsoft Office 365 to ingest service status, service messages, and management activity logs from the Office 365 Management API. Due to the removal of basic authentication, we may not be able to use this add-on, so we are searching for another way to ingest logs from O365. Does anyone have any idea how to ingest these logs into Splunk other than with this add-on? Please kindly help us with this issue. Thanks in advance!
Hi, are there any Splunk apps that provide dashboards for AWS WAF & IPS? We didn't find any in the Splunk App for AWS Security Dashboards.
While configuring an IAM role in the Splunk AWS add-on I am getting this error: In handler 'splunk_ta_aws_iam_roles': bad character (52) in reply size
I am creating a new file in the /var/log directory, but when I search for events I get zero results. How do I get Splunk to pick up the file so I can view it in the UI?
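A minimal sketch of a monitor stanza for this, assuming the file sits on a host where you control inputs.conf (the file name, index, and sourcetype below are placeholders, and the splunk user needs read access to the file); a restart or reload is needed after the change:

inputs.conf
[monitor:///var/log/my_new_file.log]
index = main
sourcetype = my_sourcetype
disabled = 0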
Dear forum, I'm trying to test my "Delegation" panel from the logbinder app, but without success. I have results in the event viewer file, but in the dashboard it appears as "no results found" instead of looking like the panel on the official site: https://www.logbinder.com/Content/Solutions/splunkapp1.jpg. Everything else works fine. How can I simulate a test in my AD to have results in this "Delegation" panel? The panel search is:

'filter_dc_winseclog_events' EventCode=5136 AttributeLDAPDisplayName=nTSecurityDescriptor
| transaction maxspan=5s Correlation_ID
| eval ObjectClass=if(ObjectClass="organizationalUnit" OR ObjectClass="group" OR ObjectClass="user" OR ObjectClass="computer" OR ObjectClass="domainDNS" OR ObjectClass="groupPolicyContainer", ObjectClass, "other")
| rename ObjectClass as "Object Type"
| rename DirectoryServiceName as Domain
| timechart count by "Object Type"

Thanks, Paulo
Hello everyone, I have a table like this:

_time               value1   value2
30/12/2021 06:30    12.1     25.2
30/12/2021 06:00    12.1     25.2
30/12/2021 05:30    11.2     26.4
30/12/2021 05:00    11.2     26.4
30/12/2021 04:30    12.1     24.5
30/12/2021 04:00    10.6     29.5
30/12/2021 03:30    10.6     29.5
30/12/2021 03:00    10.6     35.2

I want to select the distinct values of value1 and get the corresponding _time and value2. When I do:

| stats values(*) as * by value1

it returns only value1 and value2, not _time. But I do want to see the _time. Do you have any solution please? Thanks, Julia
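A small SPL sketch, assuming one row per distinct value1 with its most recent occurrence is what is wanted (swap latest for earliest or values depending on which occurrence you need; _time starts with an underscore, so the * wildcard in stats does not pick it up):

| stats latest(_time) as _time latest(value2) as value2 by value1
| convert ctime(_time)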
Hello, I need your help. Can you tell me how to install the Splunk SDK for Java with Eclipse? I appreciate it a lot.
I tried with: https://prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event and also with: https://http-inputs.prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event. In both cases the connection fails. Do I need to enable anything else on the Splunk Cloud server side? Thanks.
Hello, I have a csv file that has 209,946 rows of events, as shown. After a query to apply some conditions:

| inputlookup VCCS_VIB.csv
| eval TIME = strptime(Time,"%H:%M %d/%m/%Y")
| where TIME>=1656090000 AND TIME<=1659286800
| stats count by TYPE NAME CMND CARDNUM

The intent is to find events between 25/6 and 31/7 and filter out duplicate rows that match NAME, CMND and CARDNUM. The query above shows 207,460 events (note that all events are within the time constraint). When I sort the count column, it shows only two duplicate rows, so the final number of rows should have been 209,946 - 2 = 209,944, not 207,460. There are over two thousand events missing somewhere. Could anyone show me where they went?
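One thing worth checking with a small sketch (field names taken from the post): stats count by TYPE NAME CMND CARDNUM silently drops any row where one of the grouping fields is null, so such rows never reach the counts. This counts how many rows inside the time window are missing one of those fields:

| inputlookup VCCS_VIB.csv
| eval TIME = strptime(Time,"%H:%M %d/%m/%Y")
| where TIME>=1656090000 AND TIME<=1659286800
| where isnull(TYPE) OR isnull(NAME) OR isnull(CMND) OR isnull(CARDNUM)
| stats count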
What is the role capability required to view all the indexes in splunk cloud settings? We have below capabilities in place accelerate_datamodel accelerate_search acs_conf admin_all_objects apps_backup apps_restore change_authentication change_own_password cloud_internal customer_cases delete_by_keyword delete_messages dispatch_rest_to_indexers dmc_deploy_apps dmc_deploy_token_http dmc_manage_topology edit_authentication_extensions edit_auto_ui_updates edit_bookmarks_mc edit_cmd edit_deployment_client edit_deployment_server edit_dist_peer edit_encryption_key_provider edit_field_filter edit_forwarders edit_global_banner edit_health edit_health_subset edit_httpauths edit_indexer_cluster edit_indexerdiscovery edit_ingest_rulesets edit_input_defaults edit_ip_allow_list edit_kvstore edit_local_apps edit_log_alert_event edit_manager_xml edit_metric_schema edit_metrics_rollup edit_modinput_journald edit_monitor edit_own_objects edit_restmap edit_roles edit_roles_grantable edit_scripted edit_search_concurrency_all edit_search_concurrency_scheduled edit_search_head_clustering edit_search_schedule_priority edit_search_schedule_window edit_search_scheduler edit_search_server edit_server edit_server_crl edit_sourcetypes edit_splunktcp edit_splunktcp_ssl edit_splunktcp_token edit_statsd_transforms edit_tcp edit_tcp_stream edit_telemetry_settings edit_token_http edit_tokens_all edit_tokens_own edit_tokens_settings edit_udp edit_upload_and_index edit_user edit_view_html edit_watchdog edit_web_features edit_web_settings edit_webhook_allow_list edit_workload_policy edit_workload_pools edit_workload_rules embed_report export_results_is_visible fsh_manage fsh_search get_diag get_metadata get_typeahead indexes_edit indexes_list_all input_file install_apps license_edit license_read license_tab license_view_warnings list_accelerate_search list_all_objects list_cascading_plans list_deployment_client list_deployment_server list_dist_peer list_forwarders list_health list_health_subset list_httpauths list_indexer_cluster list_indexerdiscovery list_ingest_rulesets list_inputs list_introspection list_metrics_catalog list_pipeline_sets list_remote_input_queue list_remote_output_queue list_search_head_clustering list_search_scheduler list_settings list_storage_passwords list_token_http list_tokens_all list_tokens_own list_tokens_scs list_workload_policy list_workload_pools list_workload_rules merge_buckets metric_alerts never_expire never_lockout output_file pattern_detect phantom_read phantom_write read_internal_libraries_settings refresh_application_licenses request_pstacks request_remote_tok rest_access_server_endpoints rest_apps_management rest_apps_view rest_properties_get rest_properties_set restart_reason restart_splunkd rtsearch run_collect run_commands_ignoring_field_filter run_custom_command run_debug_commands run_dump run_mcollect run_msearch run_noah_command run_sendalert run_walklex schedule_rtsearch schedule_search search search_process_config_refresh select_workload_pools upload_lookup_files upload_mmdb_files use_file_operator use_remote_proxy web_debug
Hi, I'm trying to replace credit card numbers (16 digits) in a csv file with xxxx. When I input the text below, the full event is masked and I only see xxxx in the search:

test1,test2, 0123456789123456

When I input any credit card number with fewer than 16 digits, I can see the full event in the search:

test3,test4,1234

Please find the configuration files below.

props.conf
[ccdata]
TRANSFORMS-anonymize = masking

transforms.conf
[masking]
REGEX = \d{16}
FORMAT = xxxx
DEST_KEY = _raw
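A hedged sketch of one way to mask only the card number instead of the whole event: with DEST_KEY = _raw, FORMAT replaces the entire raw event, so the surrounding text has to be captured and put back. Stanza and sourcetype names are taken from the post:

transforms.conf
[masking]
REGEX = (.*?)\d{16}(.*)
FORMAT = $1xxxx$2
DEST_KEY = _raw

An alternative is a SEDCMD under the [ccdata] stanza in props.conf, e.g. SEDCMD-mask-cc = s/\d{16}/xxxx/g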
Creating a dashboard to log any new firewall rule that has been committed to Panorama. How do I go about this? Any assistance will be greatly appreciated. Thanks.
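A rough SPL starting point, assuming the Palo Alto Networks Add-on is parsing your Panorama syslog and that configuration audit events arrive with a sourcetype such as pan:config (the index, sourcetype, and field names here are assumptions to adjust for your environment):

index=pan_logs sourcetype=pan:config
| table _time admin client command result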
Hi, I have a use case where a user has been removed from LDAP, but when we check the users via Settings, we see the user still exists. Ideally the user should also be removed from Splunk automatically.
Hello, I have a log file that admins write to when they start or stop their server maintenance. It is then used to silence email alerts so admins do not get email alerts while they are doing server maintenance. When an admin starts server maintenance they write "start of maintenance..." into a specific log file (the source). When the admin stops server maintenance they write "end of maintenance..." to that same file. However, since the email alerts reset themselves a period (4 hours) after Splunk reads the "start of maintenance..." entry, some admins will "forget" to write the "end of maintenance..." entry to this file. Task: I need a "start of maintenance..." and a corresponding "end of maintenance..." entry. If I only have a "start of maintenance...", then I must use SPL to insert an event that has "end of maintenance..." and whose _time (or another time-related field) is the time of the "start of maintenance..." + 4 hours. So for example, if the "start of maintenance..." _time is 2022/08/05 16:00:00, then I must create an event with a _time (or a time field) of 2022/08/05 20:00:00. If there is a corresponding "end of maintenance..." within 4 hours of a "start of maintenance...", then I should do nothing. My ultimate goal is to create a dashboard with results filtered by the "start of maintenance..." _time and the "end of maintenance..." _time, but in order to do this I first have to make sure I have both "start of maintenance..." and "end of maintenance..." _time values.
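A rough SPL sketch of one way to synthesize the missing end time at search time rather than inserting real events (the index, source, and the host grouping are assumptions; 14400 seconds = 4 hours):

index=your_index source=your_maintenance_log ("start of maintenance" OR "end of maintenance")
| transaction host startswith="start of maintenance" endswith="end of maintenance" keepevicted=true maxspan=4h
| eval maint_start=_time
| eval maint_end=if(closed_txn=1, _time + duration, _time + 14400)
| table host maint_start maint_end

Transactions that never saw an "end of maintenance..." event get closed_txn=0, so those fall back to start + 4 hours.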
Hi, my search is giving the output below:

Month   FieldA   FieldB
Jan     285      1410
Feb     247      1934
Mar     215      2197

Can we create new columns FieldA% and FieldB%? Below is an example:

Month   FieldA   FieldB   FieldA%   FieldB%
Jan     285      1410     20%       80%
Feb     247      1934     22%       78%
Mar     215      2197     15%       85%

Eventually I will only be using Month and FieldA% in a column chart.
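A minimal sketch appended to the existing search, assuming each percentage is that field's share of the row total:

| eval total = FieldA + FieldB
| eval "FieldA%" = round(FieldA / total * 100) . "%"
| eval "FieldB%" = round(FieldB / total * 100) . "%"
| table Month "FieldA%"

For the column chart it may be better to keep the percentage as a plain number (drop the . "%" concatenation) so the y-axis stays numeric.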
Long post, newish to Splunk, search strings are still a foreign language to me. I am tasked with incorporating Azure Gov into Splunk. Splunk support recommended a particular app for Microsoft cloud services. The app is easy enough to configure, but I'm having issues creating an index for the app and ingesting into Splunk. We have the master node/deployment server, 8 indexers, 5 search heads, and 2 heavy forwarders. How do I create an index in an indexer cluster? I ask because the directions seem easy enough, however there are some hiccups. When I look at our indexes listed in Splunk Web, it does not match what is shown in the indexes.conf files, which is in itself an issue. These are the locations where I have found indexes.conf:

$SPLUNK_HOME/var/lib/splunk - lists all my indexes and their .dat files
$SPLUNK_HOME/etc/system/default/ - the default files
$SPLUNK_HOME/etc/system/local/ - has a listing of almost 80 indexes, but not all that are in the web portal search head; it is missing some of the sensitive indexes with naming conventions for systems like our txs and usr, e.g. txs_systemlog, usr-firewall, etc.

I went to our master node and the location $SPLUNK_HOME/etc/master-apps/_cluster/local/ to look at what the indexes.conf file says there... but it's not present. Yet we obviously have indexes across our cluster. So here are the issues:
1 - This prevents me from creating the needed index "usr-azure", as I do not know where to put it.
2 - Why are some indexes, like the sensitive ones, not listed in the conf files but are listed in /var/lib/splunk/?
3 - Why is my master node web showing 48 indexes, yet my indexers separately show 99 indexes?

Additionally, another issue. I know we need to use the CLI and edit the indexes.conf file for an indexer cluster, but I tried to do it via the web on indexer1, Settings > Indexes (under Data), and I can click the New Index button. All is good, but when I get to the App selection, it only lists all the apps, whereas all the existing indexes show TWC_all_indexes.
Q4 - How do I get the app setting "TWC_all_indexes" for the new index I am creating? I assume it has something to do with the index clustering and a setting on the master node, but I don't even see that option in the indexes.conf file.
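For issue 1, a hedged sketch of where the cluster-managed definition usually goes (the exact app under master-apps is a site convention, and the settings below are assumptions to adjust): on the cluster master, in $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf or in an app of your own under master-apps, then pushed to the peers with "splunk apply cluster-bundle".

[usr-azure]
homePath   = $SPLUNK_DB/usr-azure/db
coldPath   = $SPLUNK_DB/usr-azure/colddb
thawedPath = $SPLUNK_DB/usr-azure/thaweddb
repFactor  = auto

The "TWC_all_indexes" app shown for the existing indexes is likely the app their definitions were deployed in via the cluster bundle, which would also explain why they appear on the indexers but not under etc/system/local on the search head.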