All Topics

Hello everyone, I have a table like this:

_time              value1  value2
30/12/2021 06:30   12.1    25.2
30/12/2021 06:00   12.1    25.2
30/12/2021 05:30   11.2    26.4
30/12/2021 05:00   11.2    26.4
30/12/2021 04:30   12.1    24.5
30/12/2021 04:00   10.6    29.5
30/12/2021 03:30   10.6    29.5
30/12/2021 03:00   10.6    35.2

I want to select the distinct values of value1 and get the corresponding _time and value2. When I do |stats values(*) as * by value1, it returns only value1 and value2, not _time. But I do want to see _time. Do you have any solution, please? Thanks, Julia
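A minimal sketch of one way to keep the time, assuming you want the most recent occurrence of each distinct value1 (values(*) as * drops _time because wildcards skip internal fields, so it has to be named explicitly):

    | stats latest(_time) as _time, latest(value2) as value2 by value1
    | fieldformat _time=strftime(_time, "%d/%m/%Y %H:%M")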
Hello, I need your help: how can I install the Splunk SDK for Java with Eclipse? I appreciate it a lot.
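A minimal sketch, assuming the splunk-sdk-java jar (from dev.splunk.com or the SDK's GitHub releases) has been added to the Eclipse project's Java Build Path (Project > Properties > Java Build Path > Libraries); host and credentials below are placeholders:

    import com.splunk.Service;
    import com.splunk.ServiceArgs;

    public class SplunkConnectTest {
        public static void main(String[] args) {
            // Placeholder connection details -- adjust to your instance.
            ServiceArgs loginArgs = new ServiceArgs();
            loginArgs.setHost("localhost");
            loginArgs.setPort(8089);          // management port (not 8000 or 8088)
            loginArgs.setUsername("admin");
            loginArgs.setPassword("changeme");
            Service service = Service.connect(loginArgs);
            System.out.println("Connected to Splunk " + service.getInfo().getVersion());
        }
    }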
I tried with https://prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event and also with https://http-inputs.prd-p-xxxxxx.splunkcloud.com:8088/services/collector/event. In both cases the connection fails. Do I need to enable anything else on the Splunk Cloud server side? Thanks.
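A hedged check, assuming a typical Splunk Cloud stack: the HEC endpoint usually takes the form https://http-inputs-<stack>.splunkcloud.com on port 443 (a hyphen after http-inputs rather than a dot, and not port 8088), HEC has to be enabled on the stack, and the token goes in the Authorization header. A curl sketch with a placeholder token:

    curl "https://http-inputs-prd-p-xxxxxx.splunkcloud.com:443/services/collector/event" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"event": "test event"}'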
Hello, I have a CSV file that has 209,946 rows of events. I run a query to apply some conditions:

|inputlookup VCCS_VIB.csv
|eval TIME = strptime(Time,"%H:%M %d/%m/%Y")
|where TIME>=1656090000 AND TIME<=1659286800
|stats count by TYPE NAME CMND CARDNUM

The intent is to find events between 25/6 and 31/7 and filter out duplicate rows that match NAME, CMND and CARDNUM. The query above returns 207,460 events (note that all events are within the time constraint). When I sort the count column, it shows there are only two duplicate rows, so the final number of rows should have been 209,946 - 2 = 209,944, not 207,460. There are over two thousand events missing somewhere. Could anyone show me where they went?
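A likely explanation, worth verifying: stats ... by silently drops every row in which any of the by fields is null, so rows with an empty TYPE, NAME, CMND, or CARDNUM never reach the output, which could easily account for the missing ~2,500. A sketch that keeps them (the "(empty)" placeholder value is arbitrary):

    |inputlookup VCCS_VIB.csv
    |eval TIME = strptime(Time,"%H:%M %d/%m/%Y")
    |where TIME>=1656090000 AND TIME<=1659286800
    |fillnull value="(empty)" TYPE NAME CMND CARDNUM
    |stats count by TYPE NAME CMND CARDNUM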
What is the role capability required to view all the indexes in splunk cloud settings? We have below capabilities in place accelerate_datamodel accelerate_search acs_conf admin_all_objects apps_backup apps_restore change_authentication change_own_password cloud_internal customer_cases delete_by_keyword delete_messages dispatch_rest_to_indexers dmc_deploy_apps dmc_deploy_token_http dmc_manage_topology edit_authentication_extensions edit_auto_ui_updates edit_bookmarks_mc edit_cmd edit_deployment_client edit_deployment_server edit_dist_peer edit_encryption_key_provider edit_field_filter edit_forwarders edit_global_banner edit_health edit_health_subset edit_httpauths edit_indexer_cluster edit_indexerdiscovery edit_ingest_rulesets edit_input_defaults edit_ip_allow_list edit_kvstore edit_local_apps edit_log_alert_event edit_manager_xml edit_metric_schema edit_metrics_rollup edit_modinput_journald edit_monitor edit_own_objects edit_restmap edit_roles edit_roles_grantable edit_scripted edit_search_concurrency_all edit_search_concurrency_scheduled edit_search_head_clustering edit_search_schedule_priority edit_search_schedule_window edit_search_scheduler edit_search_server edit_server edit_server_crl edit_sourcetypes edit_splunktcp edit_splunktcp_ssl edit_splunktcp_token edit_statsd_transforms edit_tcp edit_tcp_stream edit_telemetry_settings edit_token_http edit_tokens_all edit_tokens_own edit_tokens_settings edit_udp edit_upload_and_index edit_user edit_view_html edit_watchdog edit_web_features edit_web_settings edit_webhook_allow_list edit_workload_policy edit_workload_pools edit_workload_rules embed_report export_results_is_visible fsh_manage fsh_search get_diag get_metadata get_typeahead indexes_edit indexes_list_all input_file install_apps license_edit license_read license_tab license_view_warnings list_accelerate_search list_all_objects list_cascading_plans list_deployment_client list_deployment_server list_dist_peer list_forwarders list_health list_health_subset list_httpauths list_indexer_cluster list_indexerdiscovery list_ingest_rulesets list_inputs list_introspection list_metrics_catalog list_pipeline_sets list_remote_input_queue list_remote_output_queue list_search_head_clustering list_search_scheduler list_settings list_storage_passwords list_token_http list_tokens_all list_tokens_own list_tokens_scs list_workload_policy list_workload_pools list_workload_rules merge_buckets metric_alerts never_expire never_lockout output_file pattern_detect phantom_read phantom_write read_internal_libraries_settings refresh_application_licenses request_pstacks request_remote_tok rest_access_server_endpoints rest_apps_management rest_apps_view rest_properties_get rest_properties_set restart_reason restart_splunkd rtsearch run_collect run_commands_ignoring_field_filter run_custom_command run_debug_commands run_dump run_mcollect run_msearch run_noah_command run_sendalert run_walklex schedule_rtsearch schedule_search search search_process_config_refresh select_workload_pools upload_lookup_files upload_mmdb_files use_file_operator use_remote_proxy web_debug
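For reference, the capability usually tied to listing all indexes is indexes_list_all, which already appears in the list above, so the gap may be elsewhere, e.g. the role's allowed/searchable index lists rather than a missing capability. A hedged authorize.conf sketch (the role name is a placeholder):

    [role_index_viewer]
    indexes_list_all = enabled
    srchIndexesAllowed = *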
Hi, I'm trying to replace credit card numbers (16 digits) in a CSV file with xxxx. When I input the text below, the full event is masked and I only see xxxx in search:

test1,test2,0123456789123456

When I input any credit card number with fewer than 16 digits, I can see the full event in search:

test3,test4,1234

Please find the configuration files below.

props.conf
[ccdata]
TRANSFORMS-anonymize = masking

transforms.conf
[masking]
REGEX = \d{16}
FORMAT = xxxx
DEST_KEY = _raw
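A probable cause and a hedged fix: with DEST_KEY = _raw, FORMAT replaces the entire event, so a bare xxxx wipes everything; capturing the surrounding text and splicing it back preserves the rest of the event. (With fewer than 16 digits the regex simply never matches, which is why those events pass through untouched.) A transforms.conf sketch:

    [masking]
    REGEX = (.*?)\d{16}(.*)
    FORMAT = $1xxxxxxxxxxxxxxxx$2
    DEST_KEY = _raw

A one-line SEDCMD in props.conf (SEDCMD-mask = s/\d{16}/xxxxxxxxxxxxxxxx/g) is a common alternative.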
I am creating a dashboard to log any new firewall rule that has been committed to Panorama. How do I go about this? Any assistance will be greatly appreciated. Thanks.
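A starting sketch, assuming the Palo Alto Networks Add-on is ingesting Panorama configuration logs as sourcetype pan:config; the index and field names below are placeholders to adapt to your environment:

    index=pan_logs sourcetype="pan:config" "commit"
    | table _time, host, admin, result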
Hi, I have a use case where a user has been removed from LDAP, but when we check users via Settings, we see that the user still exists. Ideally the user should also be removed from Splunk automatically.
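I'm not aware of a built-in auto-removal for this; one hedged way to at least audit the accounts Splunk still knows about on the search head is the REST users endpoint:

    | rest /services/authentication/users splunk_server=local
    | table title, roles, email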
Hello, I have a log file that admins write to when they start or stop their server maintenance. This is then used to silence email alerts so admins do not get email alerts while they are doing server maintenance. When an admin starts server maintenance, they write "start of maintenance..." into a specific log file (the source). When the admin stops server maintenance, they write "end of maintenance..." to that same file. However, since the email alerts reset themselves after a period (4 hours) after Splunk reads the "start of maintenance...", some admins will forget to write the "end of maintenance..." to this file.

Task: I need to have a "start of maintenance..." and a corresponding "end of maintenance..." entry. If I only have a "start of maintenance...", then I must use SPL to insert an event that has "end of maintenance..." and whose _time (or another time-related field) is the _time of the "start of maintenance..." plus 4 hours. For example, if the "start of maintenance..." _time is 2022/08/05 16:00:00, then I must create an event with a _time (or a time field) of 2022/08/05 20:00:00. If there is a corresponding "end of maintenance..." within 4 hours of a "start of maintenance...", then I should do nothing.

My ultimate goal is to create a dashboard with results filtered by the "start of maintenance..." _time and the "end of maintenance..." _time, but to do this I first have to make sure I have both "start of maintenance..." and "end of maintenance..." _time values.
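A sketch of one approach, assuming the markers can be paired up with the transaction command: unclosed (evicted) transactions get a synthetic end of start + 4 hours:

    source="/var/log/bizapps_maintenance.log" ("start of maintenance" OR "end of maintenance")
    | transaction startswith="start of maintenance" endswith="end of maintenance" maxspan=4h keepevicted=true
    | eval maint_start=_time, maint_end=if(closed_txn=1, _time + duration, _time + 14400)

closed_txn and duration are fields the transaction command emits; 14400 is 4 hours in seconds.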
Hi, my search is giving the output below:

Month  FieldA  FieldB
Jan    285     1410
Feb    247     1934
Mar    215     2197

Can we create new columns FieldA% and FieldB%? Below is an example:

Month  FieldA  FieldB  FieldA%  FieldB%
Jan    285     1410    20%      80%
Feb    247     1934    22%      78%
Mar    215     2197    15%      85%

Eventually I will only be using Month and FieldA% in a column chart.
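A minimal sketch, assuming each percentage is that field's share of the row total:

    | eval total='FieldA' + 'FieldB'
    | eval "FieldA%"=round('FieldA'/total*100)."%", "FieldB%"=round('FieldB'/total*100)."%"
    | fields - total

For the column chart, keep FieldA% numeric (skip the ."%" concatenation), since a string with a percent sign won't plot.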
Long post; I'm newish to Splunk and search strings are still a foreign language to me. I am tasked with incorporating Azure Gov into Splunk. Splunk support recommended a particular app for Microsoft cloud services. The app is easy enough to configure, but I am having issues creating an index for the app and ingesting into Splunk. We have the master node/deployment server, 8 indexers, 5 search heads, and 2 heavy forwarders. How do I create an index in an indexer cluster?

I ask because the directions seem easy enough, yet there are some hiccups. When I look at our indexes listed in Splunk Web, it does not match what is shown in the indexes.conf files, which is in itself an issue. These are the locations where I have found indexes.conf:

$SPLUNK_HOME/var/lib/splunk - lists all my indexes and their .dat files
$SPLUNK_HOME/etc/system/default/ - the default files
$SPLUNK_HOME/etc/system/local/ - has a listing of almost 80 indexes, but not all that are in the web portal search head; it is missing some of the sensitive indexes with naming conventions for systems like our txs and usr, e.g. txs_systemlog, usr-firewall, etc.

I went to our master node and the location $SPLUNK_HOME/etc/master-apps/_cluster/local/ to look at what the indexes.conf file says there, but it's not present. Yet we obviously have indexes across our cluster. So here are the issues:

1 - This prevents me from creating the needed index "usr-azure", as I do not know where to put it.
2 - Why are some indexes, like the sensitive ones, not listed in the conf files but present in /var/lib/splunk/?
3 - Why does my master node's web UI show 48 indexes, yet my indexers separately show 99 indexes?

Additionally, another issue: I know we need to use the CLI and edit the indexes.conf file for an indexer cluster, but I tried to do it via the web UI on indexer1 (Settings > Indexes, under Data), and I can click the New Index button. All is good, but when I get to the App selection, it only lists all the apps, whereas all the existing indexes show TWC_all_indexes.

Q4 - How do I get that "TWC_all_indexes" app setting for the new index I am creating? I assume it has something to do with the index clustering and a setting on the master node. But I don't even see that option in the indexes.conf file.
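On the cluster piece (items 1 and Q4), the usual pattern, worth verifying against your deployment: define clustered indexes on the master under $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf (creating the file if it's absent, which would explain why you didn't find it) and push it with splunk apply cluster-bundle; the Settings > Indexes UI on an individual indexer does not manage clustered indexes. A sketch:

    [usr-azure]
    homePath   = $SPLUNK_DB/usr-azure/db
    coldPath   = $SPLUNK_DB/usr-azure/colddb
    thawedPath = $SPLUNK_DB/usr-azure/thaweddb
    repFactor  = auto

repFactor = auto is what makes the index replicate across the cluster peers.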
Splunk noob here. How do I search for Windows Server versions (2008, 2012, etc.)? Can this be done?
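One hedged option, assuming the Splunk Add-on for Windows with host monitoring enabled; the input and field names vary by add-on version, so treat these as placeholders:

    index=* sourcetype=WinHostMon "OperatingSystem"
    | stats latest(OS) as OS by host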
I am fairly new to Splunk, but I come from a background of SQL databases and I may still be trying to do things in a "relational" way. Having said that, I have two data sources: one represents test results (a list of test results) and one represents test suites (metadata for a set of tests, such as the number of tests and a minimum required number of passing tests). I want to compute the ratio of tests that passed and compare that with a passing threshold ratio. To do this I join test results with the test summary data like this:

index=test_results
| where (!isnull(test_result))
| join type=inner left=L right=R where L.test_summary_id=R.test_summary_id [search index=test_summaries]
| stats values(L.project_short) AS project, count(eval(L.test_result=='PASS')) as tests_passing, values(R.number_of_tests) as number_of_tests, values(R.passing_threshold) as pass_threshold by L.sw_release_id

The expression count(eval(L.test_result=='PASS')) as tests_passing always evaluates to 0, but I expect it to be the number of tests with the value "PASS" for that sw_release_id. In other searches where I am not joining two tables, I can compute the tests_passing value correctly. Is there something about a join that prevents me from doing evaluations? Should I not use a join? Thanks.
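A likely fix, worth trying before restructuring the search: after the join the field is literally named L.test_result, and inside eval a field name containing a dot must be single-quoted, while string literals take double quotes; as written, 'PASS' is read as a (nonexistent) field reference. A sketch:

    | stats values(L.project_short) AS project,
            count(eval('L.test_result'=="PASS")) as tests_passing,
            values(R.number_of_tests) as number_of_tests,
            values(R.passing_threshold) as pass_threshold
            by L.sw_release_id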
Hello, this is the first time I have posted here, but I have learned a lot from this website just by using Google search.

Situation: At work, server admins asked if I could "silence" Splunk email alerts when they were doing maintenance, so that they do not get emails about errors during server maintenance. I was able to do this by creating a maintenance.log in the /var/log/ folder that Splunk keeps track of. If the admins write "start of maintenance...", then any alert that monitors these logs will stop sending emails. When the admins write "end of maintenance...", Splunk knows it can start sending emails again, since the maintenance period is completed. This was useful for silencing Apache access log alerts that occurred during maintenance, meaning the admins did not get alerts for what the Apache access log wrote between the _time of "start of maintenance..." and the _time of "end of maintenance...".

Task: I have to show search results in a dashboard that do not contain any results reported during a maintenance period. This means that any search results between the _time of "start of maintenance..." and the _time of "end of maintenance..." should not be included. Moreover, maintenance may happen several times, for example twice in one day; or, when searching a time period of, say, one month, there may be 3 "start of maintenance..." and 3 corresponding "end of maintenance..." entries.

Action: I have written SPL that will get all the results:

earliest=-1d (host="Server-web" source="/var/log/httpd24/error_log") OR (host="Server-Web" index=bizapps source=/var/log/bizapps_maintenance.log)

I am not sure if SPL can pull this off, but I am confident someone can help me out. If you need more info, let me know.
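A sketch of one approach, assuming both sources can be merged on a single timeline: tag the maintenance markers, carry the most recent state forward with streamstats, and drop web-log events that fall inside a window:

    (host="Server-web" source="/var/log/httpd24/error_log")
    OR (host="Server-Web" index=bizapps source="/var/log/bizapps_maintenance.log")
    | sort 0 _time
    | eval maint=case(match(_raw, "start of maintenance"), 1, match(_raw, "end of maintenance"), 0)
    | streamstats last(maint) as in_maintenance
    | where source="/var/log/httpd24/error_log" AND coalesce(in_maintenance, 0)=0

This handles any number of start/end pairs in the range, since the state simply flips as each marker is passed.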
How do I schedule an alert to run every 5 minutes between the hours of 9:30 and 16:00 Eastern Time Monday-Friday?
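A single cron expression can't express 9:30-16:00 directly. One hedged pattern is to schedule the alert every 5 minutes across the whole span and discard out-of-window runs inside the search (addinfo exposes the scheduled time); note that the scheduler evaluates cron in the alert owner's configured timezone, so that user should be set to Eastern:

    cron schedule: */5 9-16 * * 1-5

    ... | addinfo
        | eval hhmm=tonumber(strftime(info_max_time, "%H%M"))
        | where hhmm>=930 AND hhmm<=1600

Alternatively, if your Splunk version accepts step-over-range cron syntax, 30-55/5 9 * * 1-5, */5 10-15 * * 1-5, and 0 16 * * 1-5 cover the window exactly, at the cost of three cloned alerts.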
I'm looking for a way to extract a value from the middle of a string. The value I want is after the first underscore and before the dash. Example: GET_tres_main.aspx_detail_showall-0 (the target value is tres_main.aspx_detail_showall).
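A minimal rex sketch, assuming the value always sits between the first underscore and the last dash (extracted is a placeholder field name):

    | rex field=_raw "^[^_]*_(?<extracted>.*)-[^-]*$"

For GET_tres_main.aspx_detail_showall-0 this yields tres_main.aspx_detail_showall.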
Hi all, I need to get the value "Windows 7" from the string below. I used something like OS[\n]+([^\n]+), but then it captures from "Value" up to "Windows 7". Could someone please help me capture only "Windows 7"?

DeviceProperties: [
  {
    Name: OS
    Value: Windows 7
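A hedged rex sketch, anchoring on the Name: OS pair so only the value on the following line is captured (\s also matches the newline; os_name is a placeholder field name):

    | rex "Name:\s+OS\s+Value:\s+(?<os_name>[^\n]+)"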
Say I'm just trying to find whether anything in Splunk contains the number "12345678". Is there a way to write a simple search query to find that? Or, if I'm looking for a specific user, is there a way to write a query like "jsmith@gmail.com"? Essentially, I'm looking for anything associated with that username or anything associated with the number above.
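Yes, a bare quoted term works as a search on its own; Splunk returns any event containing it across the indexes your role can search. A sketch:

    index=* "12345678"
    index=* "jsmith@gmail.com"

Narrowing the index and the time range keeps these from being painfully slow.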
I am trying to run a search where I want my data to be more than 12 months old. However, when I run this search, it brings up data between 2 days old and 12 months old. Anyone got any ideas on where I am going wrong?

| inputlookup append=T access_tracker where lastTime_user>=1659602543.000000
| stats min(firstTime) as firstTime, values(second2lastTime) as second2lastTime, values(lastTime) as lastTime_vals, max(lastTime) as lastTime by user
| eval "second2lastTime"=mvdedup(mvappend('second2lastTime',NULL,'lastTime_vals')),
       "second2lastTime"=if(mvcount('lastTime')=1 AND mvcount('second2lastTime')>1 AND 'second2lastTime'='lastTime', split(ltrim(replace("|".mvjoin('second2lastTime',"|"),"\|".'lastTime',""),"|"), "|"), 'second2lastTime'),
       "second2lastTime"=max('second2lastTime'),
       inactiveDays=round((lastTime-second2lastTime)/86400,2),
       _time=lastTime
| search inactiveDays>=12mo
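Two hedged guesses from the SPL above: inactiveDays is a plain number of days, so comparing it to the relative-time token 12mo won't behave as intended (compare to a number instead), and the lastTime_user>= filter keeps entries seen since that epoch, i.e. the recent ones. A sketch of the time handling:

    | eval cutoff=relative_time(now(), "-12mon@d")
    | where lastTime <= cutoff AND inactiveDays >= 365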
Hello, I have complex JSON events ingested as *.log files. I have been unable to extract fields from these files/events. Any help on how to extract key-value pairs from these events would be highly appreciated. One sample event is given below (pasted as-is, so it appears truncated in places). Thank you so much.

2022-07-15 12:44:03 - {
    "type" : "TEST",
    "r/o" : false,
    "booting" : false,
    "version" : "6.2.7.TS",
    "user" : "DS",
    "domainUUID" : null,
    "access" : "NATIVE",
    "remote-address" : "localhost",
    "success" : true,
    "ops" : [{
        "address" : [
            { "subsystem" : "datasources" },
            { "data-source" : "mode_tp" }
        ],
        "address" : [
            { "cservice" : "management" },
            { "access" : "identity" }
        ],
        "DSdomain" : "TESTDomain"
    },
    {
        "address" : [
            { "cservice" : "management" },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "finit" },
                { "bucket" : "TEST" },
                { "clocal" : "passivation" },
                { "store" : "file" }
            ],
            "passivation" : true,
            "purge" : false
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "finit" },
                { "bucket" : "TEST" }
            ],
            "module" : "dshibernate"
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "finit" },
                { "bucket" : "hibernate" },
                { "clocal" : "entity" }
            ]
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "finit" },
                { "bucket" : "hibernate" },
                { "clocal" : "entity" },
                { "component" : "transaction" }
            ],
            "model" : "DSTEST"
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "infit" },
                { "bucket" : "hibernate" },
                { "clocal" : "entity" },
                { "memory" : "object" }
            ],
            "size" : 210000
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "DS" },
                { "workplace" : "default" },
                { "running-spin" : "default" }
            ],
            "Test-threads" : 45,
            "queue-length" : 60,
            "max-threads" : 70,
            "keepalive-time" : { "time" : 20, "unit" : "SECONDS" }
        },
        {
            "operation" : "add",
            "address" : [
                { "subsystem" : "DS" },
                { "workplace" : "default" },
                { "long-running-threads" : "default" }
            ],
            "Test-threads" : 45,
            "queue-length" : 70,
            "max-threads" : 70,
            "keepalive-time" : { "time" : 20, "unit" : "SECONDS" }
        },
    }]
}
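A sketch of one search-time approach, assuming each event is the timestamp prefix followed by a single JSON object: strip the prefix with rex, then let spath do the extraction. (spath needs well-formed JSON, so if events really are truncated like the sample above, they will need fixing upstream first.)

    ... | rex field=_raw "(?s)^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2} - (?<json>\{.+\})"
        | spath input=json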