All Posts


This isn't JSON, as JSON uses double quotes, not single quotes. Please post an accurate representation of the field you want to extract the data from. Having said that, you should look at the JSON functions new to 9.x, as these would probably be the basis of a solution.
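A minimal sketch of that approach, assuming the single-quoted params example from the question further down this page (the field names field1/field2 come from that example): convert the single quotes to double quotes with replace(), then extract with spath. Since the array is said to always hold one object, the {} wildcard path returns that object's value:

| eval params_json=replace(params, "'", "\"")
| spath input=params_json path="{}.field1" output=field1
| spath input=params_json path="{}.field2" output=field2

This is a sketch, not a definitive fix: it assumes the values themselves contain no literal single quotes, otherwise the blanket replace() would corrupt them.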
I re-implemented your solutions and found #2 sorted by name. Your solution #3 does indeed sort by value. There is a limitation of 9 or fewer fields/columns due to lexical sorting, and the fields now have an additional ##_ prepended. The limitation of 9 or fewer is significant if you watch a couple of dozen items and rank them. I will accept the answer. I am thinking there is a simpler subsearch to drive the | table projection of the columns, and I will continue to look in that direction. For now, I will probably save it as a macro. Thank you.
Hi @bowesmana  My actual requirement is that if a field has empty values then I don't want to show it in the table. For some correlationIDs we don't have an ImpconID, so I used the above query to filter out the empty values. Now I want to filter the null values from the field. PFA
Hi, my Splunk search results in two fields, Time and Event. Inside the Event field there are multiple searchable fields, one of which is a JSON array as a string, like this: params="[{'field1':'value1','field2':'value2','field3':'value3'}]" The above JSON array always has one JSON object, as in the example. I need to extract values for given fields from this JSON object; how can I do that? I figured spath is the way to do this, but none of the solutions I found so far worked, maybe because all the examples were operating on JSON as a string only, while in my case it is inside Event as Splunk shows it in search. Can you help?
Your solution is right: jdbc:sqlserver://IP:Port;databaseName=dbname;selectMethod=cursor;encrypt=false;trustServerCertificate=true That resolved my issue, thanks a lot.
Did you understand my comment about the difference between null and empty? Please confirm that these are null values you are talking about rather than empty values, and provide some evidence that you actually have null values. Without that it's impossible to know what is going on.
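One way to gather that evidence, sketched here against the ImpconID field mentioned earlier in this thread (swap in your own field name), is to classify each event explicitly and count the classes:

| eval value_state=case(isnull(ImpconID), "null", ImpconID=="", "empty", true(), "present")
| stats count by value_state

A nonzero "null" count would confirm genuinely missing values, while an "empty" count points at zero-length strings, which need a different filter.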
https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system should help you.
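For reference, a hedged sketch of the props/transforms routing pattern that page describes; the stanza names, the regex, and the destination address below are placeholders to adapt:

# props.conf
[your_sourcetype]
TRANSFORMS-route_subset = route_to_third_party

# transforms.conf
[route_to_third_party]
REGEX = pattern_to_match
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party_group

# outputs.conf
[syslog:third_party_group]
server = 10.1.1.1:514

Events whose raw text matches the regex get the _SYSLOG_ROUTING key set and are replicated to the named syslog output group.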
I tested it, at least for the /raw endpoint.
There was a bug with HTTP inputs where this didn't work earlier even though it should have. Nice that it has been fixed and that it now works with HTTP inputs too.
Hi @bowesmana  The null field values are still appearing.
Thanks, that was helpful!
Thank you so much for the responses @bowesmana @ITWhisperer, and a special thanks to @yuanliu. I really apologize for posting the requirement in an unclear manner; I was extremely fatigued yet desperately needed to find the solution. Honestly, I wasn't confident that I would receive a response so quickly and so precisely. I sincerely appreciate the community; individuals like you make this a wonderful forum for discussion. To be part of this community is an honor.
You can use the below Splunk search to check locked-out accounts:

sourcetype="wineventlog" EventCode=4740 OR EventCode=644
| eval src_nt_host=if(isnull(src_nt_host),host,src_nt_host)
| stats latest(_time) AS time latest(src_nt_host) AS host BY dest_nt_domain user
| eval ltime=strftime(time,"%c")
| table ltime, dest_nt_domain, user, host
| rename ltime AS "Lockout Time", dest_nt_domain AS Domain, user AS "Account Locked Out", host AS "Workstation"
Hello everyone. We have more than 300 hosts sending syslog messages to the indexer cluster, which runs on Windows Server. All settings across the indexer cluster that relate to syslog ingestion look like this:

[udp://port_number]
connection_host = dns
index = index_name
sourcetype = sourcetype_name

So I expected to see no IP addresses in the host field when I ran searches, and I created an alert to be notified when any message has an IP in the host field. But a couple of hosts have this problem. I know that PTR records are required for this setting, and we checked that the record exists: when I run "dnslookup *host_ip* *dns_server_ip*" I see that everything is OK. I also cleared the DNS cache across the indexer cluster, but I still see this problem. Does Splunk have some internal logs that can help me identify where the problem is? Or is my only option to capture a network traffic dump with the DNS queries?
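On the internal-logs question: as a generic starting point (a sketch, not a guaranteed match for DNS-resolution messages), splunkd's own logs from the indexers can be searched for warnings and errors around the ingestion time of an affected event:

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)

Narrowing by the affected indexer's host field and the relevant time window, and then scanning the component values that appear, is one way to spot resolution-related messages if Splunk logged any.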
Hi @Siddharthnegi , I don't think that's possible, but why do you want to remove it? It restores the default visualization you configured. Ciao. Giuseppe
Point 1. This will help with sourcetype naming conventions. (People still use my_app_sourcetype, meaning they use underscores, which isn't a problem, but there is a recommended naming convention, see the link below.) https://docs.splunk.com/Documentation/AddOns/released/Overview/Sourcetypes

Point 2. As there is a plethora of data sources, many are common, but at some point you will have a custom log source and will need to create your own sourcetype. These are some of the pretrained ones: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Listofpretrainedsourcetypes Many common sourcetypes come out of the box with the Splunk TAs, so this is your starting point; these should be used and you do not need to change them, as they categorise the data, which is important for parsing. For any custom data sources, you need to analyse your log source, check the format, timestamp, and other settings, and use props.conf to set the sourcetype following naming convention standards; this makes admin life easier. See the link below: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types

Point 3. Syncing two different SH clusters means you have one deployer for each; that's the Splunk setup. So you need some kind of repo like git where your KOs / apps are located, and keep that under code control. You can then use Ansible to deploy the apps to the deployers for them to push the apps. You could also use the Linux rsync command to keep a repo in sync with the deployers. So you should have a strategy for this type of app management based on the tools you use.
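For illustration of Point 2, a hedged props.conf sketch for a custom sourcetype; the stanza name and timestamp format here are hypothetical and must match your actual log source:

# props.conf (deployed in your TA)
[vendor:product:log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 32
TRUNCATE = 10000

Setting an explicit line breaker and timestamp format avoids the automatic-detection overhead and the mis-parsed events it can produce.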
Hello, I'm trying to write a Splunk search for detecting unusual behavior in email sending. Here is the SPL query:

| tstats summariesonly=true fillnull_value="N/D" dc(All_Email.internal_message_id) as total_emails from datamodel=Email where (All_Email.action="quarantined" OR All_Email.action="delivered") AND NOT [| `email_whitelist_generic`] by All_Email.src_user, All_Email.subject, All_Email.action
| `drop_dm_object_name("All_Email")`
| eventstats sum(eval(if(action="quarantined", count, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", count, 0))) as delivered_count_peruser by src_user, subject
| where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0

I want to count the number of quarantined emails and the delivered ones only, and then filter them against some thresholds, but it seems that the eventstats command is not working as expected. I already used this logic for authentication searches and it worked fine. Any help?
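One possible cause, offered as an assumption rather than a confirmed diagnosis: the eventstats sums a field named count, but the tstats above only produces total_emails, so count is null and both sums stay empty. A sketch of the same query with count also emitted by tstats:

| tstats summariesonly=true fillnull_value="N/D" count dc(All_Email.internal_message_id) as total_emails from datamodel=Email where (All_Email.action="quarantined" OR All_Email.action="delivered") AND NOT [| `email_whitelist_generic`] by All_Email.src_user, All_Email.subject, All_Email.action
| `drop_dm_object_name("All_Email")`
| eventstats sum(eval(if(action="quarantined", count, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", count, 0))) as delivered_count_peruser by src_user, subject
| where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0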
Can I remove the button which is just below the (-) button?
When I create an "on poll" action in the App Wizard, I always get an error: "Action type: Select a valid choice. ingest is not one of the available choices." Does anyone know a way to avoid this?
Following the documentation https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform I have:

Created a trial account in Splunk Cloud Platform
Generated a HEC token
Sent telemetry data to Splunk Cloud Platform using an OpenTelemetry collector with the Splunk HEC exporter

splunk_hec:
  token: "<hec-token>"
  endpoint: https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event
  source: "otel"
  sourcetype: "otel"
  splunk_app_name: "ThousandEyes OpenTelemetry"
  tls:
    insecure: false

I see the following error in my otel-collector:

Post "https://splunkcloud.com:8088/services/collector/event": tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match splunkcloud.com

The endpoint https://prd-p-e7xnh.splunkcloud.com:8088 seems to have an invalid certificate. It was signed by a self-signed CA and does not include a subject name for the endpoint.

openssl s_client -showcerts -connect prd-p-e7xnh.splunkcloud.com:8088
CONNECTED(00000005)
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify error:num=19:self-signed certificate in certificate chain
verify return:1
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify return:1
depth=0 CN = SplunkServerDefaultCert, O = SplunkUser
verify return:1
---
Certificate chain
 0 s:CN = SplunkServerDefaultCert, O = SplunkUser
   i:C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: May 28 17:34:47 2024 GMT; NotAfter: May 28 17:34:47 2027 GMT

We confirmed that for the paid version, using port 443, Splunk is using a valid CA certificate:

echo -n | openssl s_client -connect prd-p-e7xnh.splunkcloud.com:443 | openssl x509 -text -noout
Warning: Reading certificate from stdin since no -in or -new option is given
depth=2 C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root G2
verify return:1
depth=1 C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
verify return:1
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 02:ac:04:07:e1:b9:47:0f:a1:83:02:a7:45:99:a4:5f
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
        Validity
            Not Before: May 28 00:00:00 2024 GMT
            Not After : May 27 23:59:59 2025 GMT
        Subject: C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                74:85:80:C0:66:C7:DF:37:DE:CF:BD:29:37:AA:03:1D:BE:ED:CD:17
            X509v3 Subject Key Identifier:
                35:18:36:ED:18:F5:18:A6:89:90:28:E0:12:AB:14:47:18:37:61:F9
            X509v3 Subject Alternative Name:
                DNS:*.prd-p-e7xnh.splunkcloud.com, DNS:prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.pvt.prd-p-e7xnh.splunkcloud.com, DNS:pvt.prd-p-e7xnh.splunkcloud.com

Could you use the same certificate for both the Trial and Paid versions? Why are you using a different one? Could you please help us? This is blocking us when using trial accounts. Thank you in advance.
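If the trial stack really does serve the self-signed SplunkServerDefaultCert on port 8088, one hedged workaround for testing only (never production) is the collector's standard TLS option insecure_skip_verify in the exporter config:

splunk_hec:
  token: "<hec-token>"
  endpoint: "https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event"
  tls:
    # Testing only: disables certificate verification entirely,
    # which also removes protection against man-in-the-middle attacks.
    insecure_skip_verify: true

This sidesteps the name-mismatch error at the cost of all certificate validation; the underlying certificate question for trial stacks is still one for Splunk support.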