All Topics

I'm new to Splunk and have a test environment containing a search head cluster with three Splunk 9.0.1 instances: one deployer and two search heads. In case it matters, the deployer also has the indexer cluster manager role. This is a fresh install without any specific changes.

Output of splunk show shcluster-status --verbose:

Captain:
    decommission_search_jobs_wait_secs : 180
    dynamic_captain : 1
    elected_captain : Tue Jan 24 17:57:01 2023
    id : 17B17CF3-57A4-4F34-A943-835219C2DA41
    initialized_flag : 1
    kvstore_maintenance_status : disabled
    label : spl-sh02
    max_failures_to_keep_majority : 0
    mgmt_uri : https://spl-sh02.domain.com:8089
    min_peers_joined_flag : 1
    rolling_restart : restart
    rolling_restart_flag : 0
    rolling_upgrade_flag : 0
    service_ready_flag : 1
    stable_captain : 1

Cluster Manager(s):
    https://spl-ms01.domain.com:8089 splunk_version: 9.0.0.1

Members:
    spl-sh02
        kvstore_status : ready
        label : spl-sh02
        manual_detention : off
        mgmt_uri : https://domain.com:8089
        mgmt_uri_alias : https://172.28.56.104:8089
        out_of_sync_node : 0
        preferred_captain : 1
        restart_required : 0
        splunk_version : 9.0.0.1
        status : Up
    spl-sh01
        kvstore_status : ready
        label : spl-sh01
        last_conf_replication : Wed Jan 25 10:52:26 2023
        manual_detention : off
        mgmt_uri : https://spl-sh01.domain.com:8089
        mgmt_uri_alias : https://172.28.56.100:8089
        out_of_sync_node : 0
        preferred_captain : 1
        restart_required : 0
        splunk_version : 9.0.0.1
        status : Up

When I try to execute "apply shcluster-bundle" on the deployer, I see this error:

Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details. Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Error in pre-deploy check, uri=https://spl-sh02.domain.com:8089/services/shcluster/captain/kvstore-upgrade/status, status=401, error=No error

How can I solve this problem?
Hello Splunkers, I get the following error on my Splunk HF, which is listening for incoming data from an F5 network appliance.

01-25-2023 08:06:56.794 +0000 ERROR TcpInputProc [2612981 FwdDataReceiverThread] - Error encountered for connection from src=<internal_ip_f5>:59697. Read Timeout Timed out after 600 seconds.

I am wondering what the number after the F5 IP is. I specified a unique port for forwarding data between the F5 and the HF, so I do not understand why I see numbers like 59697 (and many others). More generally, I do not know how to troubleshoot this. Thanks for your help, GaetanVP
I have the following Splunk query:

(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| dedup message.tracers.ke-correlation-id{}
| search "message.statusCode"<400
| rename "message.jsonObject.payments{}.orderStatus.status" AS "ORDER_STATUS"
| top limit=50 "ORDER_STATUS"

which gives the below output:

ORDER_STATUS            count   percent
---------------------------------------
PAYMENT_ACCEPTED        500     70
PAYMENT_PENDING         100     20
PAYMENT_UNDER_REVIEW    90      2
PAYMENT_REDIRECTION     40      1.32
PAYMENT_NOT_ATTEMPTED   10      3.11

I want to display another item in the dashboard which should be the sum of the counts of the following order statuses: PAYMENT_ACCEPTED + PAYMENT_PENDING + PAYMENT_UNDER_REVIEW + PAYMENT_REDIRECTION, i.e. 500 + 100 + 90 + 40 = 730. Below is my query:

(index=index_1 OR index=federated:index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*"
| search "message.tracers.ke-channel{}"="*"
| search "message.jsonObject.payments{}.products{}.type"=GROCERY
| search "message.statusCode"<400
| dedup message.jsonObject.id
| search ("message.jsonObject.payments{}.orderStatus.status"="PAYMENT_ACCEPTED" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_PENDING" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_UNDER_REVIEW" OR "message.jsonObject.payments{}.orderStatus.status"="PAYMENT_REDIRECTION")
| stats count(message.jsonObject.id)

But the sum of the count using the above query is always more than the actual total count. Appreciate it if someone can let me know where I am going wrong.
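One thing worth noticing: the two queries above do not deduplicate on the same field (the first uses message.tracers.ke-correlation-id{}, the second uses message.jsonObject.id) and they search different indexes (index_2 vs federated:index_2), so their counts are not directly comparable. A sketch that keeps the first query's exact pipeline as the reference and only changes the final aggregation (field and index names taken from the question; verify they match your data):

```
(index=index_1 OR index=index_2) sourcetype=openshift_logs openshift_namespace="my_ns" openshift_cluster="*"
| spath "message.url"
| search "message.url"="/dummy/url/v1*" "message.tracers.ke-channel{}"="*" "message.jsonObject.payments{}.products{}.type"=GROCERY
| dedup message.tracers.ke-correlation-id{}
| search "message.statusCode"<400
| rename "message.jsonObject.payments{}.orderStatus.status" AS ORDER_STATUS
| search ORDER_STATUS IN (PAYMENT_ACCEPTED, PAYMENT_PENDING, PAYMENT_UNDER_REVIEW, PAYMENT_REDIRECTION)
| stats count AS total
```

Because every filter and the dedup field are identical to the first query, this total should equal the sum of the four per-status counts from the top output.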
Hi community. Some searches have:

index="my_index"
index=my_index

I want to extract a new field named user_index but cannot figure out the regex capture group that may or may not contain quotes around the index name.
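A minimal rex sketch for this: make the opening quote optional with \"? and exclude both whitespace and quotes from the capture group, so the same pattern matches the quoted and unquoted forms. Here the search text is assumed to be in _raw; swap in the relevant field if the search string lives elsewhere (e.g. the search field of audit events):

```
... | rex field=_raw "index=\"?(?<user_index>[^\s\"]+)\"?"
```

The capture stops at the first whitespace or quote, so index="my_index" and index=my_index both yield user_index=my_index.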
Hi Splunkers, We are already onboarding Windows event logs to Splunk, and now we also want to onboard Windows Key Management Service logs. Does anyone know how to onboard this type of log into Splunk? Thanks in advance.
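A minimal inputs.conf sketch for a universal forwarder, assuming the KMS events are written to the "Key Management Service" channel under Applications and Services Logs (verify the exact channel name in Event Viewer on the host) and that the target index already exists:

```
[WinEventLog://Key Management Service]
disabled = 0
index = windows_kms
start_from = oldest
```

The index name windows_kms is an assumption for illustration; the stanza follows the same WinEventLog pattern used for Application, System, and Security channels.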
I have a dataset with incident numbers and their associated Jurisdiction. It is possible that an incident will be listed in multiple jurisdictions. I don't want to dedup(incident_number) globally. I need to count by jurisdiction, but the dedup or distinct count needs to be within each Jurisdiction. Any suggestions?
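A distinct count split by Jurisdiction does exactly this: each incident number is counted once per jurisdiction it appears in, without any global dedup. A sketch (index name is a placeholder):

```
index=my_index
| stats dc(incident_number) AS incidents BY Jurisdiction
```

Because dc() is computed per BY group, an incident listed in three jurisdictions contributes once to each of the three rows.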
Anyone have a search for Mean Time to Triage for a specific urgency (high or critical)? I'm having no luck trying to manipulate the built-in MTTT panel from the SOC operations dashboard to insert a specific urgency.
We have a use case where we need an alert emailed if a user (under the field User) does not have an event of Activity="logged on" within the past 90 days within a specific sourcetype.

We have tried:

index=index sourcetype=sourcetype Activity="logged on"
| chart count over Activity by User limit=0

But we can't seem to be able to filter to only a count of 0 over the past 90 days. Any ideas or leads as to what would get us in the right direction?
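A search can only count events that exist, so "count of 0" requires a reference list of expected users to compare against. A sketch assuming a hypothetical lookup expected_users.csv with a User column listing everyone who should have logged on:

```
index=index sourcetype=sourcetype Activity="logged on" earliest=-90d
| stats count BY User
| append [| inputlookup expected_users.csv | eval count=0]
| stats sum(count) AS logons BY User
| where logons=0
```

Users with real logon events get a positive sum; users present only in the lookup stay at 0 and survive the final where, which is the set to alert on.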
Hello Experts. Configuring the inputs.conf file, I am trying to send data from the same Windows log to multiple indexes for separate dashboards. I think some sort of precedence is blocking some of the data. Here is what I was trying to accomplish. Is there a better way to get where I'm trying to go?

[WinEventLog://Application]
disabled = 0
index = WINDOWS
start_from = oldest

[WinEventLog://System]
disabled = 0
index = WINDOWS
start_from = oldest

[WinEventLog://Security]
disabled = 0
index = WINDOWS
start_from = oldest

######## Separate to send USB bus traffic ##########

[WinEventLog://Security]
disabled = 0
index = USB
start_from = oldest
whitelist = 1234,4321,5467, etc

[WinEventLog:/Microsoft-Windows-DriverFrameworks-UserMode/Operational]
disabled = 0
index = USB
start_from = oldest
interval = 1000,1001,1002,1003
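One likely culprit in the config above: two stanzas with the same name ([WinEventLog://Security]) do not coexist, they merge, and a single input stanza can only write to one index. If having the USB-related event codes land only in the USB index (rather than in both indexes) is acceptable, a common alternative is index routing with props/transforms on the indexers or heavy forwarder. A hedged sketch, assuming the routed events carry EventCode=... in _raw and that the event codes and index name match your environment:

```
# props.conf
[source::WinEventLog:Security]
TRANSFORMS-usb = route_usb_events

# transforms.conf
[route_usb_events]
REGEX = EventCode=(1234|4321|5467)
DEST_KEY = _MetaData:Index
FORMAT = USB
```

Security events matching the listed event codes are rerouted to the USB index at parse time; everything else keeps the index set in inputs.conf. Truly duplicating events into two indexes would instead need event cloning (e.g. CLONE_SOURCETYPE), which doubles license usage.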
Given web access log data with the following fields: _time, http_status, src_ip, dest_ip. After a brute-force attack on a login page, where http_status of 200=success and 401=failure, how can I display the number of failures, plus earliest(_time) and latest(_time), by src_ip? I've tried using streamstats like below, but do not get what I'm looking for:

index=myIndex AND status=*
| table _time status src_ip dest_ip
| sort + _time
| streamstats reset_on_change=true count earliest(_time) AS ET latest(_time) AS LT by status
| convert ctime(ET) ctime(LT)
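If per-source totals (rather than streamstats running counts) are what's wanted, plain stats with an eval-filtered count may be enough. A sketch assuming the field is named http_status as described at the top of the question:

```
index=myIndex http_status=*
| stats count(eval(http_status=401)) AS failures earliest(_time) AS ET latest(_time) AS LT BY src_ip
| sort - failures
| convert ctime(ET) ctime(LT)
```

count(eval(...)) counts only the failure events, while earliest/latest bracket all activity from each src_ip, which makes the attack window visible per source.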
Can someone help with a query? I have 2 indexes, abc and bcz. From the abc index I want to show stats for field1 where field2 from index abc matches field3 of index bcz, and bcz index field5="value".

What I tried, which is not working:

index=abc
| stats count by field1
| join type=inner field2 [search index=bcz | rename field3 as field2 | where field5="employee_name"]
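One reason the join has nothing to match on: after | stats count by field1, only field1 and count survive, so field2 no longer exists in the outer results. A common alternative is to use the subsearch as a filter before aggregating. A sketch using the field names from the question:

```
index=abc [ search index=bcz field5="value" | rename field3 AS field2 | fields field2 ]
| stats count BY field1
```

The subsearch returns the matching field2 values, which Splunk expands into a filter on the outer search, so only abc events whose field2 appears in bcz (with field5="value") are counted. Note the usual subsearch result limits apply if bcz returns very many distinct values.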
We need to configure which dashboards a user is allowed to navigate through Splunk Mobile, but this user is shown all the dashboards he has access to instead of the admin-selected ones. According to the documentation (https://docs.splunk.com/Documentation/SecureGateway/3.3.0/Admin/AppSelection), admins can choose which apps to show dashboards from in the Connected Experiences mobile apps. This configuration is global: dashboards from the apps you choose show for all devices registered to the Splunk platform instance with this configuration. This is not what we are experiencing; the admin selects a few dashboards from a single app to show (just these ones, the rest hidden), but the user still sees all the dashboards he is allowed through RBAC.
< query > ... | stats count by return_code fetches me the below output. I have to create an alert where the sum of any return_code values other than 100 and 200 should not cross 20% of the overall value. Example: from the above image, I will add the counts of return_codes other than 100 and 200, which comes to 226. The count of 100 and 200 is 2924, so the percentage works out to around 7.17%. How do I achieve this via query?
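The arithmetic in the example (226 out of 3150 total ≈ 7.17%) can be reproduced by adding an overall total with eventstats and zeroing out the "good" codes before summing. A sketch on top of the existing stats output:

```
<query> ...
| stats count BY return_code
| eventstats sum(count) AS total
| eval error_count = if(return_code IN (100, 200), 0, count)
| stats sum(error_count) AS errors, max(total) AS total
| eval error_pct = round(errors / total * 100, 2)
| where error_pct > 20
```

The final where keeps a result row only when the threshold is breached, so an alert configured to trigger on "number of results > 0" fires exactly when the non-100/200 share crosses 20%.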
Hi, I have an ITSI issue. It was working correctly, and suddenly I have N/A, no events and no data in my ITSI module. I have checked splunkd.log and all the ITSI logs and conf files without success. Could you give me a hint? Regards
In the below search I am looking for rules hit by count, but how or where would I add a NOT or !, if I wanted to know which rules have not been hit?

index=pan_logs
| fields _time, rule
| stats count by rule
| sort -count
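A NOT alone can't surface rules that were never hit, because there are no events for them to match; the search needs a reference list of all rules to compare against. A sketch assuming a hypothetical lookup pan_rules.csv with a rule column holding every configured rule name:

```
index=pan_logs
| stats count BY rule
| append [| inputlookup pan_rules.csv | eval count=0]
| stats sum(count) AS hits BY rule
| where hits=0
```

Rules with traffic accumulate a positive sum; rules present only in the lookup stay at 0, and the where keeps exactly the never-hit set.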
Hi community, I've just performed an upgrade on my infrastructure (distributed environment) from Splunk 8.2.3 to Splunk 9.0.3. All the instances seem to work fine; however, I have problems applying the search head cluster bundle. I use this command to upgrade Splunk Enterprise Security:

$SPLUNK_HOME/bin/splunk apply shcluster-bundle -preserve-lookups true -target https://instance1:8089

But it doesn't work and I receive this message:

Error while deploying apps to first member, aborting apps deployment to all members: Error while updating app=SplunkEnterpriseSecuritySuite on target=https://instance1:8089: Error in JSON response: Unexpected EOF

Do you have any idea of what could be the problem?

Thank you, Marta
Hi All, Can anyone help me with a Splunk command to find out how much disk space is utilized by the UF, and what is using that much space? Regards, PNV
I display licensing in a dashboard using the licensing search for Previous 60 Days split by index. This shows a line with my license, all days, and stacked indexes. It works on classic dashboards, but in the new Dashboard Studio it doesn't work correctly, and I don't see column settings for interval, etc., in Studio. Is there another way to display this using Studio? Or am I missing some settings?
| eval TotalApps=if(match('Total',"NTB"),"1","0")
| eval In-Progress=if('Total'="NTB" AND isnull('APPL_SUB-DATE'),"1","0")
| eval Submitted=if('Total'="NTB" AND isnotnull('APPL_SUB-DATE'),"1","0")
| eval My-InfoUsed=if('Total'="NTB" AND isnotnull('APPL_SUB-DATE') AND isnotnull('MY-INF0-CONCUR-FLAG'),"1","0")
| stats sum(TotalApps) as "Total Apps" sum(In-Progress) as "In Progress" sum(Submitted) as "Apps Submitted" sum(My-InfoUsed) as "My InfoUsed" by Mon-Year
| transpose column_name="Category"

I am getting results as:

Category          row 1
Mon-Year          Jan-2023
Total Apps        06
In Progress       06
Apps Submitted    0
My InfoUsed       0

But the requirement is:

Mon-Year    Category          Total
Jan-2023    Total Apps        06
            In Progress       06
            Apps Submitted    0
            My InfoUsed       0
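For this shape, untable may be a better fit than transpose: it pivots a stats result into exactly three columns (the x-field, a label column, and a value column). A sketch that keeps the original evals and stats and only swaps the final command:

```
| stats sum(TotalApps) as "Total Apps" sum(In-Progress) as "In Progress" sum(Submitted) as "Apps Submitted" sum(My-InfoUsed) as "My InfoUsed" by Mon-Year
| untable Mon-Year Category Total
```

Each Mon-Year row from stats becomes one output row per category, with the category name in Category and its sum in Total, matching the required layout.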
Hello, which method is best: using TIME_PREFIX = timestamp":" or TIMESTAMP_FIELDS = @timestamp? https://docs.splunk.com/Documentation/Splunk/8.2.2/Data/Configuretimestamprecognition#Examples does not mention TIMESTAMP_FIELDS. We are using this parameter for another JSON source and it works fine too.

Examples:

UF side: etc/deployment-apps/_server_app_LBA_ZZZ_LX/local/props.conf

[ZZZ_metrics_json]
TIMESTAMP_FIELDS = start (useless in my opinion, as it should only apply on the indexer side?)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N.%z (useless in my opinion, as it should only apply on the indexer side?)
INDEXED_EXTRACTIONS = json

etc/deployment-apps/_server_app_LBA_MIC_SUP/local/props.conf

[VVV:sup:json]
INDEXED_EXTRACTIONS = json

IDXC side: [siem@s301lbasplmgt2 ~]$ cat /OPT/siem/splunk/etc/master-apps/APP_PROPS/local/props.conf

[ZZZ_metrics_json]
TIMESTAMP_FIELDS = start
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N.%z

etc/master-apps/XXX_VVV_PROPS/default/props.conf

[VVV:sup:json]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N.%z
TIME_PREFIX = timestamp":"
MAX_TIMESTAMP_LOOKAHEAD = 50

SHC side: etc/shcluster/apps/XXX_VVV_PROPS/default/props.conf

[VVV:sup:json]
KV_MODE = none

etc/shcluster/apps/APP_YYY_parser_json/default/props.conf

[ZZZ_metrics_json]
KV_MODE = none

Thanks for your help.