I'm very new to metrics data in Splunk. I have a question: what is plugin_instance, and how can I get its values? I'm trying to get results for the query below but end up with no results.  | mstats avg("processes.actions.ps_cputime.syst") prestats=true WHERE `github_collectd` host="*" span=10s BY plugin_instance
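plugin_instance is a dimension that collectd attaches to the metrics it sends (for the processes plugin it is typically the process name). A quick way to see which values actually exist for a metric is mcatalog; a sketch, assuming the `github_collectd` macro just restricts the search to your metrics index:

```
| mcatalog values(plugin_instance) AS plugin_instances
    WHERE `github_collectd` AND metric_name="processes.actions.ps_cputime.syst"
```

If this returns nothing, the plugin_instance dimension probably does not exist on that metric, which would also explain why the mstats BY clause produces no results. Note also that prestats=true output is meant to feed a later transforming command such as timechart; run the mstats without prestats=true to see rows directly.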
Hi Community team,  I have a complex query to gather the data below, but a new request came up: I was asked to add the product category totals by category to the report email subject. With $result.productcat1$ and $result.productcat2$ I could approach that, but the way I'm calculating the totals I'm not getting the expected numbers, because I'm appending the columns from a subquery and transposing the values with xyseries. Could you please suggest how I can sum(SalesTotal) by productcat1 and productcat2 into a new field while keeping the same output I have now? E.g., something like: if ProducCategory="productcat1" then productcat1=productcat1+SalesTotal, else productcat2=productcat2+SalesTotal ``` But print the original output ```  Consider productcat1 and productcat2 fixed values.

ENV   ProducCategory  ProductName  SalesCondition  SalesTotal  productcat1  productcat2
prod  productcat1     productR     blabla          9           152          160
prod  productcat1     productj     blabla          8
prod  productcat1     productc     blabla          33
prod  productcat2     productx     blabla          77
prod  productcat2     productpp    blabla          89
prod  productcat2     productRr    blabla          11
prod  productcat1     productRs    blabla          6
prod  productcat1     productRd    blabla          43
prod  productcat1     productRq    blabla          55

Thanks in advance.
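One possible sketch: compute the per-category totals with eventstats before transposing, so every row carries the totals without disturbing the columns already built (field names taken from the question; where exactly this lands in the full query depends on the append/xyseries layout):

```
... existing search, before xyseries ...
| eventstats sum(SalesTotal) AS CategoryTotal by ProducCategory
| eval productcat1=if(ProducCategory="productcat1", CategoryTotal, null()),
       productcat2=if(ProducCategory="productcat2", CategoryTotal, null())
```

Since eventstats adds fields without changing the rows, the original output survives, and $result.productcat1$ / $result.productcat2$ read from the first result row. To blank the repeated totals on later rows as in the sample table, a final `| streamstats count AS row | eval productcat1=if(row=1, productcat1, null()), productcat2=if(row=1, productcat2, null()) | fields - row` is one option.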
Is there a TA for HPE 3PAR data? I have the logs ingested and would like to use an existing TA to normalize the data, but I haven't found one in Splunkbase or elsewhere online.
When using the Splunk Logging Driver for Docker, you can leverage SPLUNK_LOGGING_DRIVER_BUFFER_MAX to set the maximum number of messages held in buffer for retries. The default is 10 * 1000 but can anyone confirm the maximum value that can be set?
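For reference, the advanced logging-driver options are read from the Docker daemon's environment, not from the container. A sketch for a systemd-based host (the drop-in path and the 100000 value are examples, not recommendations):

```
# /etc/systemd/system/docker.service.d/splunk-logging.conf
[Service]
Environment="SPLUNK_LOGGING_DRIVER_BUFFER_MAX=100000"
```

Then `systemctl daemon-reload && systemctl restart docker`. As far as the documented behavior goes, the value is parsed as an integer and no hard maximum appears to be published, so the practical ceiling is daemon memory rather than a fixed number; treat any specific limit as unconfirmed.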
Hello All,  I have searched high and low to try to discover why the KV store process will not start. This system was upgraded from Splunk 8.0 to 8.2, and finally to 9.2.1. I have looked in mongod.log and splunkd.log, but do not really see anything that helps resolve the issue. Is SSL required for this? Is there a way to set a correct SSL config, or disable it, in server.conf? Would the failure of the KV store process affect IOWAIT? I am running on Oracle Linux 7.9. I am open to any suggestions. Thanks, ewholz
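One common cause after multi-hop upgrades is an expired or mismatched default server certificate, since the KV store (mongod) uses the certificate referenced by [sslConfig] in server.conf. A couple of hedged checks, assuming default paths:

```
# Check when the default server cert expires; an expired cert can prevent mongod startup
openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem

# Confirm which cert and KV store settings are actually in effect
$SPLUNK_HOME/bin/splunk btool server list sslConfig --debug
$SPLUNK_HOME/bin/splunk btool server list kvstore --debug
```

As far as I know, SSL is effectively required for the KV store in recent versions, so disabling it is not a supported path; regenerating an expired server.pem (or pointing sslConfig at a valid certificate) and restarting is the usual fix. A stopped KV store would not normally drive IOWAIT by itself, though a crash-looping mongod writing logs could; that is speculation without the logs.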
Hey all, wondering if anyone has solved this problem before. I'm looking at the potential for taking a Splunk Cloud alert and using it to connect to Ansible Automation Platform (AAP) to launch a template. I have looked into webhooks; however, AAP seems to be configured to allow only GitHub and GitLab webhooks on templates, and when attempting to POST to the API endpoint to launch the template, the request would sit there and eventually time out.  Wondering if anyone has explored this space before and has any suggestions on how to get this connection working.
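For what it's worth, the GitHub/GitLab restriction applies to AAP's built-in webhook receivers; the generic REST launch endpoint should still work with a token. A hedged sketch (hostname and template ID are placeholders; this is the standard AWX/AAP v2 API shape):

```
curl -s -X POST \
  -H "Authorization: Bearer $AAP_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"extra_vars": {"alert_name": "splunk_cloud_alert"}}' \
  "https://aap.example.com/api/v2/job_templates/42/launch/"
```

A POST that sits until timeout usually points at the network path rather than AAP itself: Splunk Cloud's alert actions must be able to reach the AAP controller, so the controller generally needs to be internet-reachable (or fronted by something that is) and to allow Splunk Cloud's egress IPs.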
#machinelearning Hello, I am using dist=auto in my DensityFunction and I am getting negative Beta results. I feel like this is wrong, but keep me honest; I would like to understand how the Beta distribution is fitted, and why the mean is negative when I am using a 0 to 100% success rate. With other distributions I am happy (e.g. Gaussian KDE and Normal). |fit DensityFunction MyModelSuccessRate by "HourOfDay,Object" into MyModel2 dist="auto" Thanks,  Joseph
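A possible explanation to check, hedged on how generalized Beta fitting usually works: if the fitter is allowed to shift and scale the Beta distribution, the fitted location parameter can be negative, which makes the reported mean negative even for 0-100 data. One sketch is to normalize the rate into [0,1] before fitting:

```
| eval rate=MyModelSuccessRate/100
| fit DensityFunction rate by "HourOfDay,Object" into MyModel2 dist="auto"
```

If the negative mean persists, comparing against an explicit dist="beta" fit on the normalized field would at least isolate whether dist=auto's model selection is the issue.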
We have a query where we are getting the count by site: index=test-index | stats count by host site. When we run this query on the search head cluster, we get this output:

site       host
undefined  appdtz
undefined  appstd
undefined  apprtg
undefined  appthf

When we run the same query on the deployer, we get the output correctly, with site populated:

site   host
sitea  appdtz
sitea  appstd
siteb  apprtg
siteb  appthf

How do we fix this issue in the SH cluster?
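A hedged way to narrow this down: site is probably produced by a search-time extraction or automatic lookup that exists on the deployer but was never pushed to the SHC members. Comparing the effective configuration on both tiers with btool usually shows the gap:

```
# Run on the deployer AND on a search head cluster member, then diff the output
$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i site
$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i site
```

If the app carrying the extraction or lookup only exists under the deployer's shcluster/apps directory, pushing it with `splunk apply shcluster-bundle` should get the members to pick it up.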
Hi all.  I'm trying to understand how to map my diagnostic-setting AAD data coming in from an mscs:azure:eventhub sourcetype to CIM.  I notice the official docs for the TA mention that this sourcetype isn't mapped to CIM, while azure:monitor:aad is.  I'm attempting to leverage Enterprise Security to build searches off some UserRiskEvents data coming in, and would like to be able to reference data models. So, is there any way I can take my existing data and transform it to match what's mapped to CIM? I envision, like other TAs, that this can filter down to unique sourcetypes upon ingestion, while the inputs on the IDM are set to a parent sourcetype; I can't confirm whether that's true or not.
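Yes, in principle: an index-time transform can rewrite the sourcetype for matching events so they land as azure:monitor:aad and inherit its CIM mappings. A sketch (the REGEX is a placeholder for whatever reliably identifies the AAD risk events; this must run where parsing happens, e.g. a heavy forwarder or the indexing tier, and only affects newly ingested data):

```
# props.conf
[mscs:azure:eventhub]
TRANSFORMS-route_aad = set_aad_sourcetype

# transforms.conf
[set_aad_sourcetype]
REGEX = riskEventType
SOURCE_KEY = _raw
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::azure:monitor:aad
```

Whether the azure:monitor:aad field extractions then line up with event-hub-wrapped payloads is worth verifying against a sample before relying on the data models.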
May 2024 Edition Hayyy Splunk Education Enthusiasts and the Eternally Curious!  We’re back with a Special Edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education – this month with a focus on all-things .conf24. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go  Training You Gotta Take All the bootcamps | If you know, you know Longer days, warmer temps, and dining alfresco. Add Splunk University to the mix and it’ll be your summer to remember. Splunk University offers you an in-person opportunity to enhance your Splunk skills through 3-, 2-, and 1-day bootcamps, hands-on activities, and networking opportunities. So, register today for bootcamps galore between June 9-11, 2024 in Las Vegas, Nevada.  Gotta get summer vibes | Register for Splunk University bootcamps Last Minute Learning  | Grab a seat at the eleventh hour Whether you’re a procrastinator or on that Sigma grindset, we’ve got you covered. Last Minute Learning from Splunk offers you the option to get more learning courses under your belt faster, or make up for the courses you’ve been putting off. Each week, we share a list of the upcoming instructor-led classes that still have seats available. Just register with your Splunk.com account and use your company training units or a credit card to purchase.  Gotta get training in | Last minute instructor-led courses Things You Needa Know New badge | “I survived summer in Vegas”  Just kidding. The badge isn’t called “I survived summer in Vegas.” It’s called the Splunk Certified Cybersecurity Defense Engineer (CDE) and it’s awarded for passing the new certification exam only available onsite at .conf24 in Las Vegas between June 11-14, 2024. 
This is a beta exam, so registration is free. But, there’s a catch (errr, a prerequisite): You gotta have the Splunk Certified Cybersecurity Defense Analyst (CDA) certification – read more below. How good will it feel to head home with a new hoodie, loads of swag, and a new certification?  Needa new certification | Get your CDE at .conf24  Bring your prerequisite | Necessary for earning that coveted certification  Hey, look up! Did you catch the article about the Splunk Certified Cybersecurity Defense Engineer (CDE) certification? It’s an awesome opportunity for you to add a new certification to your resume for free in Las Vegas at .conf24.  But, to sit for this exam, you’ll need to take the prerequisite – the Splunk Certified Cybersecurity Defense Analyst (CDA) exam before you go. Even though it’s Vegas, we don’t recommend gambling on this. Get prepped for the test.  Needa know about the prereq | Take the CDA exam before Vegas Places You’ll Wanna Go The Spotlight Theater  | Certification bragging encouraged  Now that you know all about the Splunk Certification options available onsite at .conf24, consider celebrating with us. This year at .conf24, we will be celebrating the hard work and dedication of everyone who has earned a Splunk Certification during our “Bragging Rights Spotlight” celebration and networking event – Wednesday, June 12 from 5-6 p.m. at the Spotlight Theater in the source=*pavilion. Enjoy appetizers and beverages, music, photo ops, and more.    Wanna go celebrate | Brag about your certifications  Education Station | Visit us on the show floor at .conf24 The show floor at .conf24 is called the source=*pavilion and it’s the place to learn about Splunk, grab some fun swag, and make some new friends. Visit us at the Education Station where you can get rooted in education. Explore your Splunk learning options in a relaxed and engaging environment, then capture the experience with a fun photo. 
Before you go, visit our Knowledge Tree, “leave” us a wish, then take a gift.  Wanna go to the show | Education Station makes it fun   Find Your Way | Learning Bits and Breadcrumbs Go Stream It  | The Latest Course Releases (Some with Non-English Captions!) Go Deep Dive | Register for Security and Observability Tech Talks  Go to STEP | Get Upskilled Go Discuss Stuff | Join the Community Go Social | LinkedIn for News Go Index It | Subscribe to our Newsletter   Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.  Answer to Index This: .conf – also known as the annual Splunk user conference  
Hello, I've a couple of detailed dashboards, all indicating the health status of my systems. Instead of opening each detailed dashboard and looking at every graph, I would like to have one "Overview Dashboard" with traffic-light-style indication.  If an error would show in a detailed dashboard, I would like the traffic light on the overview dashboard to turn red, with a drilldown link to the detailed dashboard where the error was found.  Any good ideas how one would build something like that? I have one solution, but it seems complicated: I would leverage scheduled searches which write into different lookups, and the overview dashboard could read from those lookups and search for error codes.
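The scheduled-searches-into-lookups idea is actually the common pattern for this. A sketch of how an overview panel might consume such lookups (lookup and field names are invented for illustration):

```
| inputlookup health_system_a.csv
| append [| inputlookup health_system_b.csv]
| stats sum(error_count) AS errors by system
| eval status=if(errors > 0, "red", "green")
```

Rendered as a table (with color formatting on status) or as single-value panels, each panel's drilldown can link to the matching detailed dashboard, e.g. something like /app/<your_app>/<detail_dashboard>?form.system=$row.system$. Scheduled searches keep the overview cheap, since it only reads lookups instead of re-running every health search on load.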
Splunk Lantern is a Splunk customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk. This month we’re focusing on some great new articles that have been written by Splunk’s Authorized Learning Partners (ALPs). We’re also looking for your use case ideas to help Lantern expand its use case library, and as usual, we’re sharing the full list of articles published over the past month. Read on to find out more. Conquer New Data Sources with Splunk ALPs We’re excited to share some great new articles that have been brought to us by Splunk’s Authorized Learning Partners. ALPs are organizations that provide Splunk courses and education services, with localized training available around the world. ALP instructors are highly experienced Splunk experts, so we’re thrilled to publish these new ALP-written articles that all Splunk users can benefit from. Here are two new data descriptors and associated use cases that have been written this month by our ALPs.   CyberArk If you’re working with the CyberArk Identity Security Platform or using the CyberArk EPM for your endpoints, our new CyberArk data descriptor page shows you how to ingest data from these data sources. We’ve also published Validating endpoint privilege security with CyberArk EPM, which walks you through all the dashboards you can access for this platform within Splunk by using the CyberArk EPM App.    MOVEit MOVEit is a managed file transfer software product produced by Progress Software. MOVEit encrypts files and uses file transfer protocols such as FTP(S) or SFTP to transfer data, as well as provides automation services, analytics, and failover options.  
MOVEit Automation helps you automate tasks like pushing and pulling files to/from any FTP server based on events or schedule, manipulating/transforming file content, or managing files for transfer, storage or deletion. The use case Reporting on MOVEit automation activities shows you how you can access reporting dashboards for your MOVEit Automation instance. MOVEit Transfer provides easy and secure file transfer exchanges that keep your organization secure and compliant. You can use the use case Reporting on MOVEit transfer activities to set up reporting on this MOVEit product.   Calling all ALPs! If you’re an ALP who’s interested in writing for Lantern, we’d love to have you on board! Check out our Information Deck, FAQs and fill in the form to submit a content idea to us.   Help Us Expand Lantern's Use Case Library! Did you know that Lantern’s articles are completely crowdsourced from Splunkers, ALPs and partners? We’re lucky to have such a huge community of Splunk experts who write our articles, but we’re always looking to expand our library with the help of innovative ideas from our readers. What is a Lantern use case? It's a detailed, step-by-step guide on how to use Splunk software for achieving specific business outcomes. Some examples of our current use cases include: ​​Splunk platform Security: Detecting a ransomware attack Splunk platform IT Modernization: Managing Azure cloud infrastructure Splunk SOAR: Detecting unusual GCP service account usage Infrastructure Monitoring: Monitoring Kubernetes pods  Have you ever looked for a specific use case on Lantern and haven’t found it? Or maybe you’re looking to get more value out of a particular data source, and seeking guidance to help you do that. If so, we're inviting you to contribute your ideas for use cases in security, observability, or industry-specific applications. 
Your input will directly influence the development of future Lantern articles, and your proposed use case could be crafted by a Splunk expert to benefit the entire Splunk community. As a token of our appreciation, we're offering exclusive Lantern merch to the first 50 people who submit an idea and come see us at .conf! Submit your ideas through our online form or in-person at the kiosk. Don’t miss out - start thinking about your unique use case ideas today! Even if you can’t attend .conf, we’re eager to hear your suggestions. Help us enhance our library by sharing your ideas now!   This Month’s New Articles Here are all of the other articles that are new on Lantern, published over the month of May: Combining multiple detector conditions into a single detector Combining multiple compound detector conditions into a single detector Recovering from an incident using SOAR We hope you’ve found this update helpful. Thanks for reading! Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
Hi, my Splunk search results in two fields, Time and Event. Inside the Event field there are multiple searchable fields, one of which is a JSON array as a string, like this: params="[{'field1':'value1','field2':'value2','field3':'value3'}]" The JSON array always has exactly one JSON object, as in the example. I need to extract values for given fields from this JSON object; how can I do that? I figured spath is the way to do this, but none of the solutions I found so far worked, maybe because all the examples were operating on JSON as a standalone string, while in my case it is inside Event as Splunk shows it in search. Can you help?
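A sketch that usually works for this shape of data: pull the params string out of Event with rex, convert the single quotes to double quotes so it becomes valid JSON, then point spath at it (field names are taken from the example; the quote swap assumes the values never contain apostrophes):

```
| rex field=Event "params=\"(?<params>\[[^\]]+\])\""
| eval params=replace(params, "'", "\"")
| spath input=params path="{0}.field1" output=field1
| spath input=params path="{0}.field2" output=field2
```

The {0} addresses the first (and only) object in the array. spath on its own fails here because params is a quoted string embedded in Event, not part of the event's own JSON structure.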
I am looking to gain certification as a "Splunk Core Certified Advanced Power User".  My access to paid education is limited.  My trial license has expired and I am now using the free license.  I am learning by imitating examples that I find in YouTube videos.  Many YT videos are for an older version of Splunk and no longer work, or don't work the same way, in the current version.  I'm looking for someone I can turn to when I encounter such problems who can help me resolve them.
Hello to everyone. We have more than 300 hosts sending syslog messages to the indexer cluster. The cluster runs on Windows Server. All settings across the indexer cluster that relate to syslog ingestion look like this:

[udp://port_number]
connection_host = dns
index = index_name
sourcetype = sourcetype_name

So I expected to see no IP addresses in the host field when I ran searches. I created an alert to be aware when a message has an IP in the host field, and a couple of hosts have this problem. I know that PTR records are required for this setting, but we checked that the records exist: when I run "nslookup <host_ip> <dns_server_ip>" I see that everything is OK. I also cleared the DNS cache across the indexer cluster, but I still see the problem. Does Splunk have some internal logs that can help me identify where the problem is? Or is my only option to capture a network traffic dump with the DNS queries?
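splunkd's own logs are searchable, so before reaching for packet captures it may be worth checking index=_internal on the indexers for the offending addresses; a sketch (the IP is a placeholder):

```
index=_internal sourcetype=splunkd "10.20.30.40"
```

Any warnings splunkd emits around UDP input or hostname resolution for that source should surface there. Two other things worth hedging on: splunkd resolves and caches hostnames itself, so restarting the indexers after fixing DNS can matter; and connection_host=dns uses the indexer's OS resolver configuration, which on Windows may behave differently from nslookup when you point nslookup directly at a specific DNS server.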
Hello, I'm trying to write a Splunk search for detecting unusual behavior in email sending; here is the SPL query: | tstats summariesonly=true fillnull_value="N/D" dc(All_Email.internal_message_id) as total_emails from datamodel=Email where (All_Email.action="quarantined" OR All_Email.action="delivered") AND NOT [| `email_whitelist_generic`] by All_Email.src_user, All_Email.subject, All_Email.action | `drop_dm_object_name("All_Email")` | eventstats sum(eval(if(action="quarantined", count, 0))) as quarantined_count_peruser, sum(eval(if(action="delivered", count, 0))) as delivered_count_peruser by src_user, subject | where total_emails>50 AND quarantined_count_peruser>10 AND delivered_count_peruser>0 I want to count only the quarantined and the delivered emails and then filter them against some thresholds, but it seems that the eventstats command is not working as expected. I already used this logic for authentication searches and it works fine. Any help?
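One likely culprit: the eventstats evals reference a field called count, but the tstats only produces total_emails, so both sums always come out as 0 and the where clause filters everything. A hedged sketch of the fix, producing an explicit count and summing the per-action totals:

```
| tstats summariesonly=true fillnull_value="N/D"
    dc(All_Email.internal_message_id) AS total_emails, count AS count
    from datamodel=Email
    where (All_Email.action="quarantined" OR All_Email.action="delivered")
        AND NOT [| `email_whitelist_generic`]
    by All_Email.src_user, All_Email.subject, All_Email.action
| `drop_dm_object_name("All_Email")`
| eventstats sum(eval(if(action="quarantined", total_emails, 0))) AS quarantined_count_peruser,
             sum(eval(if(action="delivered", total_emails, 0))) AS delivered_count_peruser,
             sum(total_emails) AS emails_peruser
             by src_user, subject
| where emails_peruser > 50 AND quarantined_count_peruser > 10 AND delivered_count_peruser > 0
```

Note also that the original `where total_emails > 50` tested a single row's distinct count rather than a per-user total: because action is in the by clause, each sender is split across rows.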
Can I remove the button which is just below the (-) button?
When I create an "on poll" action in the App Wizard, I always get an error: "Action type: Select a valid choice. ingest is not one of the available choices." Does anyone know a way to avoid this?
Following the documentation https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector#Send_data_to_HTTP_Event_Collector_on_Splunk_Cloud_Platform I have: Created a trial account in Splunk Cloud Platform; Generated a HEC token; Sent telemetry data to Splunk Cloud Platform using an OpenTelemetry Collector with the Splunk HEC exporter:

```
splunk_hec:
  token: "<hec-token>"
  endpoint: https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event
  source: "otel"
  sourcetype: "otel"
  splunk_app_name: "ThousandEyes OpenTelemetry"
  tls:
    insecure: false
```

I see the following error in my `otel-collector`:

```
Post "https://splunkcloud.com:8088/services/collector/event": tls: failed to verify certificate: x509: certificate is not valid for any names, but wanted to match splunkcloud.com
```

The endpoint `https://prd-p-e7xnh.splunkcloud.com:8088` seems to have an invalid certificate: it was signed by a self-signed CA and does not include a subject name for the endpoint.

```
openssl s_client -showcerts -connect prd-p-e7xnh.splunkcloud.com:8088
CONNECTED(00000005)
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify error:num=19:self-signed certificate in certificate chain
verify return:1
depth=1 C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
verify return:1
depth=0 CN = SplunkServerDefaultCert, O = SplunkUser
verify return:1
---
Certificate chain
 0 s:CN = SplunkServerDefaultCert, O = SplunkUser
   i:C = US, ST = CA, L = San Francisco, O = Splunk, CN = SplunkCommonCA, emailAddress = support@splunk.com
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: May 28 17:34:47 2024 GMT; NotAfter: May 28 17:34:47 2027 GMT
```

We confirmed that for the paid version, using port 443, Splunk is using a valid CA certificate:

```
echo -n | openssl s_client -connect prd-p-e7xnh.splunkcloud.com:443 | openssl x509 -text -noout
Warning: Reading certificate from stdin since no -in or -new option is given
depth=2 C=US, O=DigiCert Inc, OU=www.digicert.com, CN=DigiCert Global Root G2
verify return:1
depth=1 C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
verify return:1
depth=0 C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
verify return:1
DONE
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number:
            02:ac:04:07:e1:b9:47:0f:a1:83:02:a7:45:99:a4:5f
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: C=US, O=DigiCert Inc, CN=DigiCert Global G2 TLS RSA SHA256 2020 CA1
        Validity
            Not Before: May 28 00:00:00 2024 GMT
            Not After : May 27 23:59:59 2025 GMT
        Subject: C=US, ST=California, L=San Francisco, O=Splunk Inc., CN=*.prd-p-e7xnh.splunkcloud.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Exponent: 65537 (0x10001)
        X509v3 extensions:
            X509v3 Authority Key Identifier:
                74:85:80:C0:66:C7:DF:37:DE:CF:BD:29:37:AA:03:1D:BE:ED:CD:17
            X509v3 Subject Key Identifier:
                35:18:36:ED:18:F5:18:A6:89:90:28:E0:12:AB:14:47:18:37:61:F9
            X509v3 Subject Alternative Name:
                DNS:*.prd-p-e7xnh.splunkcloud.com, DNS:prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-prd-p-e7xnh.splunkcloud.com, DNS:akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:*.akamai-inputs-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-ack-prd-p-e7xnh.splunkcloud.com, DNS:http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.http-inputs-firehose-prd-p-e7xnh.splunkcloud.com, DNS:*.pvt.prd-p-e7xnh.splunkcloud.com, DNS:pvt.prd-p-e7xnh.splunkcloud.com
```

Could you use the same certificate for both the trial and paid versions? Why are you using a different one? Could you please help us; this is blocking us when using trial accounts. Thank you in advance.
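Trial stacks do ship the self-signed SplunkServerDefaultCert on 8088, so until that changes the usual workarounds are either to skip verification in the exporter's TLS settings (acceptable for a trial, not for production) or to try the http-inputs hostname that appears in the paid certificate's SAN list. A sketch of the first option, using the collector's standard TLS knob:

```
splunk_hec:
  token: "<hec-token>"
  endpoint: "https://prd-p-e7xnh.splunkcloud.com:8088/services/collector/event"
  source: "otel"
  sourcetype: "otel"
  tls:
    # trial-only workaround: accept the self-signed SplunkServerDefaultCert
    insecure_skip_verify: true
```

insecure_skip_verify is the standard OpenTelemetry Collector TLS client setting. Whether https://http-inputs-prd-p-e7xnh.splunkcloud.com:443/services/collector/event works on a trial stack is worth testing, since on paid stacks that hostname carries the DigiCert certificate.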
Hi Team, We have some reports sitting in a shared path; how do we bring them into Splunk?
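Assuming the reports are files on a network share, the usual pattern is a universal forwarder on a host that mounts the share, with a monitor input; a sketch (path, index, and sourcetype are placeholders):

```
# inputs.conf on the forwarder
[monitor:///mnt/shared/reports]
index = reports
sourcetype = shared_reports
disabled = 0
```

If the reports are binary files (e.g. Excel or PDF), monitoring alone will not parse them; they would need exporting to CSV or text first, or a scripted input that converts them before ingestion.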