All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I'm trying to write a Splunk search to detect unusual email-sending behavior. Here is the SPL query:

| tstats summariesonly=true fillnull_value="N/D" dc(All_Email.internal_message_id) as total_emails from datamodel=Email where (All_Email.action="quarantined" OR All_Email.action="delivered") AND NOT [| `email_whitelist_generic`] by All_Email.src_user, All_Email.subject, All_Email.action
| `drop_dm_object_name("All_Email")`
| eventstats sum(eval(if(action="quarantined", count, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", count, 0))) as delivered_count_peruser by src_user, subject
| where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0

I want to count only the quarantined and delivered emails and then filter them against some thresholds, but it seems that the eventstats command is not working as expected. I already used this logic for authentication searches and it works fine. Any help?
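One likely cause, judging from the query alone: the eval inside eventstats sums a field named count, but the tstats stage only produced total_emails, so both per-user sums come out null. A minimal sketch of a corrected tail, assuming the rest of the pipeline stays exactly as posted:

```spl
| eventstats sum(eval(if(action="quarantined", total_emails, 0))) as quarantined_count_peruser,
             sum(eval(if(action="delivered", total_emails, 0))) as delivered_count_peruser
             by src_user, subject
| where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0
```

If a raw event count per row is actually wanted, the alternative is to add `count` to the tstats aggregation list (e.g. `count as count`) so the field the eventstats references really exists.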
Can I remove the button which is just below the (-) button?
When creating an "on poll" action in the App Wizard, I always get an error: "Action type: Select a valid choice. ingest is not one of the available choices." Does anyone know a way to avoid this?
Following the documentation https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform I have:

- Created a trial account on the Splunk Cloud Platform
- Generated a HEC token
- Sent telemetry data to the Splunk Cloud Platform using an OpenTelemetry collector with the Splunk HEC exporter:

splunk_hec:
  token: "<hec-token>"
  endpoint: https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event
  source: "otel"
  sourcetype: "otel"
  splunk_app_name: "ThousandEyes OpenTelemetry"
  tls:
    insecure: false

I see the following error in my `otel-collector`:

Post "https://splunkcloud.com:8088/services/collector/event": tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match splunkcloud.com

The endpoint `https://prd-p-e7xnh.splunkcloud.com:8088` seems to have an invalid certificate. It was signed by a self-signed CA and does not include a subject name for the endpoint.

openssl s_client -showcerts -connect prd-p-e7xnh.splunkcloud.com:8088
CONNECTED(00000005)
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify error:num=19:self-signed certificate in certificate chain
verify return:1
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify return:1
depth=0 CN = SplunkServerDefaultCert, O = SplunkUser
verify return:1
---
Certificate chain
 0 s:CN = SplunkServerDefaultCert, O = SplunkUser
   i:C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: May 28 17:34:47 2024 GMT; NotAfter: May 28 17:34:47 2027 GMT

We confirmed that for the paid version, using port 443, Splunk is using a valid CA certificate:

echo -n | openssl s_client -connect prd-p-e7xnh.splunkcloud.com:443 | openssl x509 -text -noout
Warning: Reading certificate from stdin since no -in or -new option is given
depth=2 C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root G2
verify return:1
depth=1 C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
verify return:1
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 02:ac:04:07:e1:b9:47:0f:a1:83:02:a7:45:99:a4:5f
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
        Validity
            Not Before: May 28 00:00:00 2024 GMT
            Not After : May 27 23:59:59 2025 GMT
        Subject: C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                74:85:80:C0:66:C7:DF:37:DE:CF:BD:29:37:AA:03:1D:BE:ED:CD:17
            X509v3 Subject Key Identifier:
                35:18:36:ED:18:F5:18:A6:89:90:28:E0:12:AB:14:47:18:37:61:F9
            X509v3 Subject Alternative Name:
                DNS:*.prd-p-e7xnh.splunkcloud.com, DNS:prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.pvt.prd-p-e7xnh.splunkcloud.com, DNS:pvt.prd-p-e7xnh.splunkcloud.com

Could you use the same certificate for both the Trial and Paid versions? Why are you using a different one? Could you please help us? This is blocking us when using trial accounts. Thank you in advance.
Hi Team, we have some reports in a shared path; how can we bring them into Splunk?
Hello. This is my first time posting, so please excuse any mistakes. I believe $SPLUNK_HOME/etc/passwd contains the fields below, and I would like to know what goes into <?1> and <?2>:

:<login user name>:<password>:<?1>:<name shown in Splunk Web>:<role>:<email address>:<?2>:<if "require password change on first login" is set, this holds the value [force_change_pass]>:<a number (unclear what it represents; uid?)>

Also, please let me know if any part of my understanding is wrong.
Hello! I was able to connect fine to the controller using the provided `agent-info.xml` (after downloading the custom-config AppServerAgent-1.8-24.4.1.35880). Now, what I'm trying to do is reproduce the same configuration, but based solely on environment variables, because I'll be deploying my app with a "blank" AppServerAgent-1.8-24.4.1.35880 with no custom config. Unfortunately, I can't make it work.

When it works fine using the provided `agent-info.xml`, I can see in the logs:

[AD Agent init] 30 May 2024 17:32:07,965 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration attempted: Application Name [anthony-app] Component Name [anthony-tier] Node Name [anthony-node]
[AD Agent init] 30 May 2024 17:32:18,898 INFO ConfigurationChannel - Auto agent registration SUCCEEDED!

When it fails, using environment variables, I get:

[AD Agent init] 30 May 2024 17:29:22,398 INFO XMLConfigManager - Full certificate chain validation performed using default certificate file
[AD Agent init] 30 May 2024 17:29:22,600 ERROR ConfigurationChannel - HTTP Request failed: HTTP/1.1 401 Unauthorized
[AD Agent init] 30 May 2024 17:29:22,600 INFO ConfigurationChannel - Resetting AuthState for Proxy: [state:UNCHALLENGED;] and Target [state:FAILURE;]
[AD Agent init] 30 May 2024 17:29:22,601 WARN ConfigurationChannel - Could not connect to the controller/invalid response from controller, cannot get initialization information, controller host [xxx.saas.appdynamics.com], port[443], exception [null]
[AD Agent init] 30 May 2024 17:29:22,601 ERROR ConfigurationChannel - Exception: NULL

Here are all the environment variables I've set:

APPDYNAMICS_CONTROLLER_HOST_NAME=xxx.saas.appdynamics.com
APPDYNAMICS_AGENT_TIER_NAME=anthony-tier
APPDYNAMICS_AGENT_APPLICATION_NAME=anthony-app
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=password-copy-pasted-from-config
APPDYNAMICS_AGENT_NODE_NAME=anthony-node
APPDYNAMICS_CONTROLLER_PORT=443
APPDYNAMICS_CONTROLLER_SSL_ENABLED=true

One thing that troubles me in the logs, though, is that with environment variables I always get messages like these:

[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,333 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller host information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,335 WARN XMLConfigManager - XML Controller Info Resolver found invalid controller port information [] in controller-info.xml; Please specify a valid value if it is not already set in system properties.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - XML Agent Account Info Resolver did not find account name. Using default account name [customer1]
[AD Agent init] 30 May 2024 17:29:18,339 WARN XMLConfigManager - XML Agent Account Info Resolver did not find account access key.
[AD Agent init] 30 May 2024 17:29:18,339 INFO XMLConfigManager - Configuration Channel is using ControllerInfo:: host:[xxx.saas.appdynamics.com] port:[443] sslEnabled:[true] keystoreFile:[DEFAULT:cacerts.jks] use-encrypted-credentials:[false] secureCredentialStoreFileName:[] secureCredentialStorePassword:[] secureCredentialStoreFormat:[] use-ssl-client-auth:[false] asymmetricKeysStoreFilename:[] asymmetricKeysStorePassword:[] asymmetricKeyPassword:[] asymmetricKeyAlias:[] validation:[UNSPECIFIED]

Although I believe all the environment variables were properly loaded:

[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_HOST_NAME] for controller host name [xxx.saas.appdynamics.com]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_PORT] for controller port [443]
[AD Agent init] 30 May 2024 17:29:18,295 INFO XMLConfigManager - Default Controller Info Resolver found env variable [APPDYNAMICS_CONTROLLER_SSL_ENABLED] for controller ssl enabled [true]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Default Agent Account Info Resolver found env variable [APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY] for account access key [****]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [anthony-app]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [anthony-tier]
[AD Agent init] 30 May 2024 17:29:18,303 INFO XMLConfigManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [anthony-node]

Anyway: is configuration provided solely by environment variables supposed to work? Thank you!
I would like to visualize using the Single Value visualization with a Trellis layout, and sort the panels by the value of the latest field in the BY clause. I can follow the timechart with a table and order the rows manually, but I would like something more automatic. Is there a way of specifying a field projection order, via some sort of sort, that can be used with timechart? I can't seem to find anything and may need to rely on something outside the box. Please advise, Tim. Here is my SPL, and the resulting visualization below:

| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
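Trellis orders panels lexicographically by the split-field value, so one common workaround is to rank the groups by their latest value and bake the rank into the group name before the timechart. A sketch under that assumption, on a reasonably recent Splunk (the rank prefix is purely cosmetic and the index/metric names are taken from the query above):

```spl
| mstats latest(_value) as value WHERE index="my_metrics" AND metric_name="my.app.metric.count" BY group span=15m
| eventstats latest(value) as last_val by group
| sort 0 - last_val group
| streamstats dc(group) as rank
| eval group=printf("%02d %s", rank, group)
| timechart span=15m usenull=false useother=false partial=false sum(value) AS count BY group WHERE max in top6
```

Each panel title then starts with its rank (e.g. `01 groupA`), which forces the trellis order to follow the latest value rather than the alphabet.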
I am in Vulnerability Management and a novice Splunk user. I want to create a query to quickly determine whether we possess any assets that could be affected when a critical CVE is released. For example, if Cisco releases a CVE that affects the Cisco Adaptive Security Appliance (ASA), I want to be able to run a query and quickly determine whether we have any of the affected assets in our environment.
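A sketch of what such a search typically looks like, assuming vulnerability scanner results are already indexed somewhere. The index name `vuln_scans`, the field names `cve`, `dest`, and `signature`, and the CVE value are all hypothetical here; substitute whatever your scanner add-on actually writes:

```spl
index=vuln_scans cve="CVE-2024-20353"
| stats latest(_time) as last_seen values(signature) as findings by dest
| convert ctime(last_seen)
```

If you only have an asset or software inventory rather than scan results, the same idea works by searching on the product string instead, e.g. `index=asset_inventory product="*Adaptive Security Appliance*"`.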
Good afternoon, I'm running a super basic test to validate that I can send a POST to our Ansible Tower listener. Search:

| makeresults
| eval header="{\"content-type\":\"application/json\"}"
| eval data="{\"Message\": \"This is a test\"}"
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true

The same payload in Postman works 100%. What I have noticed is that it's converting the double quotes to single quotes:

curl_data_payload: {'monitor_state': 'unmonitored'}

When I test this single-quoted payload in Postman, it also fails with the same "invalid JSON payload" message. Has anyone had this issue, or knows how to address it? I have no hair left to rip out.
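If the `curl` custom command here comes from one of the community add-ons, the single quotes suggest it is rebuilding the payload from a Python dict representation. One thing worth trying (an assumption, not a confirmed fix for that add-on): build the fields with `json_object()` (available in Splunk 8.1+) so they are guaranteed to contain valid JSON before the command sees them:

```spl
| makeresults
| eval header=json_object("content-type", "application/json")
| eval data=json_object("Message", "This is a test")
| eval id="http://Somehost:someport/"
| curl method=post urifield=id headerfield=header datafield=data debug=true
```

If the single quotes still appear in `curl_data_payload`, the conversion is happening inside the command's own Python code rather than in the search pipeline.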
I'm using Splunk Enterprise 9.2.0. We are able to update our app, which runs on a VM in vSphere, using a .tar.gz file from GitLab: [our_app_name].tar.gz. It works just fine on that VM, but when I try to update the one we have running in a cloud environment in AWS, I get the following error after I upload the .tar.gz file:

There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and [our_app_name]

I would appreciate any advice on what the fix might be, or how I should start troubleshooting. Thank you.
Hi guys, I have several topics on the table.

1) I would like to know if you have any advice, process, or even a document defining the principles for creating a sourcetype naming convention. We have faced the classic issue where some of our sourcetypes had the same name, and we would like to find ways to avoid that from now on.

2) Is it pertinent to add predefined/learned sourcetypes with a naming convention based on the format? (Then we could solve point 1 with a naming convention like app_format, for example.) How do you technically add new predefined sourcetypes, and how do you handle both the management of sourcetypes (point 1) and the management of predefined sourcetypes?

3) How do you share knowledge objects, props, and transforms between two Search Head clusters, and how do you implement a continuously synced mechanism that keeps these objects in sync between both clusters? Do we have to use Ansible deployments to perform the changes on both clusters, or is there a Splunk way to achieve this synchronization more easily inside Splunk (via a script using the REST API, the command line, configuration, etc.)?
I want to merge the cells in column S.No and share the output with the requestor. The only ask is that Splunk should take all the values separated by different colours and send three different emails. For example, with S.No values 1 through 12 in the table, I should send emails for S.No 1, 4, and 9.
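Splunk can't read cell colours out of a spreadsheet, so the grouping has to live in the data itself. A common pattern is one scheduled search per recipient group that filters its rows and mails them with the `sendemail` command. A sketch, assuming the column arrives as a field named `S_No` in a lookup called `my_report.csv` and a recipient address (all names hypothetical):

```spl
| inputlookup my_report.csv
| search S_No IN (1, 4, 9)
| sendemail to="requestor@example.com" subject="Report - group 1" sendresults=true inline=true
```

Two more copies of this search, with the other S_No lists and subjects, produce the three separate emails; alternatively, `map` can drive all three sends from one search if the groups are stored in a field.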
Hi Splunk Community,

I need help writing a Splunk query that joins two different indexes, using any Splunk command that satisfies the logic noted below.

Two separate ID fields in index 1 must match two of index 2's three ID fields, in any permutation. Combine an index 1 record with an index 2 record into a single record when any of these matching conditions is satisfied:

(ID_1_A=ID_2_A AND ID_1_B=ID_2_B) OR
(ID_1_A=ID_2_A AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_B AND ID_1_B=ID_2_C) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_A) OR
(ID_1_A=ID_2_C AND ID_1_B=ID_2_B)

Sample data:

Index 1:
| ID_1_A | ID_1_B |
|--------|--------|
| 123    | 345    |
| 345    | 123    |

Index 2:
| ID_2_A | ID_2_B | ID_2_C |
|--------|--------|--------|
| 123    | 345    | 999    |
| 123    | 999    | 345    |
| 345    | 123    | 999    |
| 999    | 123    | 345    |
| 345    | 999    | 123    |

Any help would be greatly appreciated. Thanks.
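One way to express the any-two-of-three matching without six OR'd join conditions (a sketch; the index names `index1`/`index2` and the ID field names are copied from the question): expand each index 2 record into all six ordered ID pairs, then join on a single composite key:

```spl
index=index1
| eval key=ID_1_A.",".ID_1_B
| join type=inner key
    [ search index=index2
      | eval key=mvappend(ID_2_A.",".ID_2_B, ID_2_A.",".ID_2_C,
                          ID_2_B.",".ID_2_A, ID_2_B.",".ID_2_C,
                          ID_2_C.",".ID_2_A, ID_2_C.",".ID_2_B)
      | mvexpand key ]
| fields - key
```

Bear in mind join's subsearch limits (50,000 rows by default); for large indexes the same idea can be done without join, by appending both searches and doing `stats values(*) as * by key`.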
Hi, we are collecting logs directly through UF and HEC in the indexer cluster. All inputs are defined on the Cluster Manager and the bundle is then applied to the indexers. Currently we are sending the data to the Splunk indexers as well as to another output group. Following is the config of outputs.conf under peer-apps:

[tcpout]
indexAndForward=true

[tcpout:send2othergroup]
server=.....
sslPassword=.....
sendCookedData=true

This config currently sends the same data to both outputs: indexing locally and then forwarding to the other group. Is there a way to keep some indexes indexed locally and have others sent only to the other group? I tried using props and transforms with _TCP_ROUTING, but it is not working at all. Thanks in advance!
Hello, I'm using the transaction command to compute average duration and identify uncompleted transactions. Assuming only the events within the selected timerange are taken into account by default, transactions which start within the search's timerange but end after it are counted as uncompleted. How can I extend the search beyond the range for the uncompleted transactions?

StartSearchtime > StartTransaction > EndTransaction > EndSearchTime = OK
StartSearchtime > StartTransaction > EndSearchTime = true KO (the EndTransaction never happened)
StartSearchtime > StartTransaction > EndSearchTime > EndTransaction = false KO (the EndTransaction exists but can only be found after the selected timerange)

Extending the EndSearchTime is not the solution, as the service runs 24/7; new transactions started within the extended slot would then end up with potential EndTransactions outside the new range. Thanks for your help. Flo
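A common workaround is to widen only the search window, then keep just the transactions that started inside the window you actually care about: transaction stamps each result with the _time of its first event and sets closed_txn=0 for incomplete ones. A sketch with hypothetical names (index `myapp`, field `session_id`, START/END markers) and a one-hour maximum transaction duration:

```spl
index=myapp earliest=-25h@h latest=now
| transaction session_id startswith="START" endswith="END" maxspan=1h
| where _time >= relative_time(now(), "-24h@h")
| stats avg(duration) as avg_duration, count(eval(closed_txn=0)) as incomplete, count as total
```

For a window that ends in the past, extend `latest` by the maximum expected duration instead; the same `_time` filter still keeps only transactions that began inside the original range, so the "false KO" cases get their real end events without pulling in new transactions from the extension.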
Currently, for asset correlation with IPs, we have Infoblox, but that only works when we are on company premises and the IP assigned to the asset is part of the company network. When someone works from home and the asset's IP changes because of their personal internet connection, that IP does not get added to the asset lookup, as it's not part of the Infoblox flow. I was thinking of maybe using Zscaler to add IP details for the asset, but if anyone has a successful way of mitigating this, it would be helpful.
Is there a way to run a search for all correlation searches and see their response actions? I want to see which correlation searches create notable events and which ones do not; for example, which ones only increase a risk score. I had hoped to use /services/alerts/correlationsearches, but it doesn't appear that endpoint exists anymore.
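In recent Enterprise Security versions, correlation searches are ordinary saved searches flagged with `action.correlationsearch.enabled`, so the `| rest` command can list them along with their response actions. The exact attribute names can vary by ES version, so treat this as a starting sketch rather than the definitive field list:

```spl
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title, actions, action.notable, action.risk, action.correlationsearch.label
```

Here `actions` is the comma-separated list of enabled response actions per search; rows where it contains `notable` create notable events, while risk-only searches show `risk` without `notable`.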
Hi All, I am trying to reduce the width of the 2nd and 3rd columns of a table, since some of the cells contain long sentences and occupy too much space. I tried following an example like the one below:

<row>
<panel>
<html depends="$alwaysHideCSSPanel$">
<style>
#tableColumWidth table thead tr th:nth-child(2),
#tableColumWidth table thead tr th:nth-child(3){
width: 10% !important;
overflow-wrap: anywhere !important;
}
</style>
</html>
<table id="tableColumWidth">

But I am not able to change the width using this. Are any corrections needed in the above HTML?