All Topics

Watch Now. Join us in this session to learn all about AppDynamics, its key capabilities and advantages, how it fits within the Splunk portfolio, and our new integrations for a full-stack observability experience. In particular, our AppDynamics speakers will cover:
An overview of AppDynamics and the use cases it is best suited for
How AppDynamics and Splunk are connected today and will be tomorrow
A demo of AppDynamics, Log Observer Connect for AppDynamics, and the new integrated experience
Join us in this session and learn how Splunk can help you build a standardized observability practice. From implementing an observability-as-code service to role-based access controls (RBAC), Token Management, Metrics Pipeline Management, and OpenTelemetry, learn how to create an observability platform to optimize your metrics usage and costs while managing workloads efficiently. Find out how to:
Design a self-serve observability platform
Take full advantage of OpenTelemetry (OTel) to increase velocity and reduce technical debt
Foster a culture of observability in your organization
Automate data instrumentation with OpenTelemetry (OTel)
Safeguard your data with advanced RBAC capabilities
Measure and manage tenant usage and costs efficiently while scaling, using access tokens, Metrics Pipeline Management, and Archived Metrics
Automate monitoring safely with code-level observability
Watch the full Tech Talk here:
Can anyone please share the steps to migrate the old data to a new server while upgrading Splunk to version 9.3? I have checked the Splunk documentation but did not understand it properly. Could anyone please help with this? The present Splunk version is 8.2.0.
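A minimal sketch of the usual approach for a standalone instance, assuming default data paths; the host name newhost and the use of rsync are placeholders, not a prescribed procedure:

    # On the old server: stop Splunk so buckets are not written during the copy
    $SPLUNK_HOME/bin/splunk stop
    # Copy configuration and indexed data to the new server (default paths assumed)
    rsync -a $SPLUNK_HOME/etc/ newhost:$SPLUNK_HOME/etc/
    rsync -a $SPLUNK_HOME/var/lib/splunk/ newhost:$SPLUNK_HOME/var/lib/splunk/
    # On the new server: install the same 8.2.0 version first, start it and confirm
    # the old data is searchable, then upgrade that installation to 9.x
    $SPLUNK_HOME/bin/splunk start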
I am getting the following error while configuring LDAP on my Splunk instances (tried it on the Splunk deployment server, indexers, and HFs); I get the same error everywhere. Can someone help me understand what is going wrong?

"Encountered the following error while trying to save: Splunkd daemon is not responding: ('Error connecting to /servicesNS/admin/config_explorer/authentication/providers/LDAP: The read operation timed out',)"

I tried increasing these attributes in authentication.conf, but still no luck:
network_timeout = 1200
sizelimit = 10000
timelimit = 1500

web.conf:
[settings]
enableSplunkWebSSL = true
splunkdConnectionTimeout = 1201
In a Splunk dashboard I need these four panels:
Total request count for security token / priority token, filtered by partner name
Duplicate request count, filtered by partner name and customer ID (to check whether the current expiration time for both tokens is appropriate)
Priority token usage, filtered by partner name
Response time analysis for security token / priority token

How do I create/add panels for these 4 options?
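A minimal SPL sketch for the first panel (total requests by token type, filtered by partner); the index token_logs and the fields token_type and partner_name are assumptions to illustrate the shape, and the other panels follow the same pattern with a different stats clause:

    index=token_logs partner_name="$partner$" token_type IN ("security", "priority")
    | stats count AS total_requests BY token_type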
Hello everyone, I have the Splunk query below, which displays the output described underneath it.

(index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
| search "* Did not observe any item or terminal signal within*"
| spath "paymentStatusResponse.orderCode"
| eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
| stats values(clusters) as cluster, values(host) as hostname, count(host) as count, values(correlation-id{}) as corr_id, values(paymentStatusResponse.orderCode) as order_code

From the above query, we have 2 loggers:
In the PaymentErrorHandler logger, I get the message containing "Did not observe any item or terminal signal within".
In the EmsPaymentStatusClientImpl logger, I get the JSON response object containing the "paymentStatusResponse.orderCode" value.
In both loggers, we have correlation-id{} as a common element. I want to output a table containing cluster, hostname, count, corr_id, and order_code, but the order code is always empty. Please help.
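A hedged sketch of one way to line the two loggers up: as written, the "| search" filter also drops the orderCode events, so this variant flags the error events instead and groups both loggers by the shared correlation id (whether correlation-id{} is single-valued per request is an assumption):

    (index=index_1 OR index=index_2) (kubernetes_namespace="kube_ns" OR openshift_namespace="ose_ns") (logger="PaymentErrorHandler" OR logger="PaymentStatusClientImpl")
    | spath "paymentStatusResponse.orderCode"
    | eval clusters=coalesce(openshift_cluster, kubernetes_cluster)
    | eval corr_id=mvindex('correlation-id{}', 0)
    | eval has_error=if(match(_raw, "Did not observe any item or terminal signal within"), 1, 0)
    | stats max(has_error) AS has_error, values(clusters) AS cluster, values(host) AS hostname, count AS count, values(paymentStatusResponse.orderCode) AS order_code BY corr_id
    | where has_error=1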
I have this query, which is not mapped to the ink name:

| rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
| sort - time
| table time ink_type

which produces a result table. I want the result to show only the latest log date; in this case it would only show the top 3 rows, and when new logs come in, it should show only those new logs.
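A minimal sketch of one way to keep only the most recent log date, assuming the rex above and that time is formatted as YYYY-MM-DD HH:MM:SS: derive the date, find the latest date with eventstats, and filter on it.

    | rex "(?<time>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*Ink Type '(?<ink_type>[^']+)'"
    | eval log_date=substr(time, 1, 10)
    | eventstats max(log_date) AS latest_date
    | where log_date=latest_date
    | sort - time
    | table time ink_type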
Hello everyone, could you please help me edit this app for FMC logs?
Hi, we recently upgraded the heavy forwarders (HFs) of our Splunk Enterprise deployment. After the upgrade, the universal forwarders stopped sending data (e.g. Linux logs) to the HFs over HTTP, and the logs are not searchable on the search head. We upgraded from v9.1.2 to 9.3.0. We also tried 9.3.1, which did not make any difference; logs are still not being sent. v9.2.3 works without issues. I checked the logs on a UF on v9.3.x and can see:

ERROR S2SOverHttpOutputProcessor [8340 parsing] - HTTP 503 Service Unavailable

However, I cannot figure out what causes the issue. Telnet from UF to HF works, and telnet from HF to the indexers also works. The tokens on the deployment server and the UFs are the same. Please advise.
Hello, we are in the process of fully migrating our Splunk Enterprise deployment to the Azure Cloud and will no longer be using Splunk Enterprise on-premises. Specifically, I have a question about moving the search head and all its associated components to the cloud without causing disruptions. While we found a work instruction on the Splunk website, it wasn't clear enough to follow, and we're concerned about minimizing downtime during the migration. Could anyone provide step-by-step guidance or best practices for migrating a Splunk search head and its components to the Azure Cloud, ensuring no service interruptions during the transition? Your help would be greatly appreciated!
Hi All, I'm having trouble getting conditional formatting to work for a column chart in Dashboard Studio. I want something pretty simple: I want the column "ImpactLevel" to be colored red if the value is less than 50, orange if the value is between 50 and 80, and yellow if the value is more than 80. ImpactLevel is the only series on the y2 axis of the column chart. Here is the JSON for my chart:

"type": "splunk.column",
"options": {
    "y": "> primary | frameBySeriesNames('_lower','_predicted','_upper','avg','max','min','volume','ImpactLevel')",
    "y2": "> primary | frameBySeriesNames('ImpactLevel')",
    "y2AxisMax": 100,
    "overlayFields": ["volume"],
    "legendDisplay": "bottom",
    "seriesColorsByField": {
        "ImpactLevel": [
            { "value": "#dc4e41", "to": 50 },
            { "value": "#f1813f", "from": 50, "to": 80 },
            { "value": "#f8be44", "from": 80 }
        ]
    }
},
"dataSources": { "primary": "ds_9sBnwPWM_ds_stihSmPw" },
"title": "HP+ Claims E2E",
"showProgressBar": true,
"eventHandlers": [
    {
        "type": "drilldown.linkToDashboard",
        "options": {
            "app": "sre",
            "dashboard": "noc_priority_dashboard_regclaimdrilldown",
            "newTab": true,
            "tokens": [
                { "token": "time.latest", "value": "$time.latest$" },
                { "token": "time.earliest", "value": "$time.earliest$" },
                { "token": "span", "value": "$span$" }
            ]
        }
    }
],
"showLastUpdated": false,
"context": {}
We have logs that are written to /var/log and /var/log/audit. We need to keep these for 365 days and want to make sure we are following best practices; is there a set of configuration settings we can use for this? Ultimately, we want to ensure we have log retention and that /var/log is not a cluttered mess. Thank you!
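A minimal sketch, assuming the goal is to index these files with a universal forwarder and keep the indexed copy for 365 days; the index name linux_os is a placeholder, and retention of the original files on disk is handled by logrotate rather than Splunk. Monitoring /var/log recursively also picks up /var/log/audit.

    inputs.conf (on the forwarder):
    [monitor:///var/log]
    index = linux_os
    disabled = 0

    indexes.conf (on the indexers):
    [linux_os]
    homePath = $SPLUNK_DB/linux_os/db
    coldPath = $SPLUNK_DB/linux_os/colddb
    thawedPath = $SPLUNK_DB/linux_os/thaweddb
    # 365 days = 31,536,000 seconds before buckets roll to frozen
    frozenTimePeriodInSecs = 31536000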
My office has deployed around 120 devices that they have now requested Splunk be added to. We have been unsuccessful in getting the CLI commands to work for a successful install. The GUI version works, but that would mean I have to reach out and touch each machine directly to set it up. Is there a way to script what the GUI installer does so that we can deploy this remotely?
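If these are Windows machines, a hedged sketch of an unattended universal forwarder install that can be pushed via GPO, SCCM, or a remote shell; the MSI file name, deployment server host, and password below are placeholders:

    msiexec.exe /i splunkforwarder-9.x.x-x64-release.msi AGREETOLICENSE=Yes ^
        DEPLOYMENT_SERVER="deploymentserver.example.com:8089" ^
        SPLUNKPASSWORD="ChangeMe123!" /quiet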
I am trying to make a search that will fire only when an admin makes a change to their own account. I want to know if a-johndoe gives multiple permissions to a-johndoe, and NOT if a-johndoe gives permissions to a-janedoe. Would I use an IF statement for this? Thank you.
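Rather than an if(), one common pattern is to filter where the acting account equals the target account. A minimal sketch, assuming Windows Security auditing where EventCode 4738 is "A user account was changed" and where src_user is the actor and user is the account being changed (both field names depend on your add-on and are assumptions):

    index=wineventlog EventCode=4738
    | where src_user=user
    | stats count BY user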
Hey, I am looking for the third-party notices for the Splunk Add-on for Palo Alto Networks 1.0.0. Unfortunately, I cannot find them in the documentation, since the corresponding section in Release Notes - Splunk Add-on for Palo Alto Networks is empty. Can anyone help me out with this and provide the third-party notice information? Best regards! Matthias
Hello, I am trying to monitor the path below from a host that has the UF installed:

C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log

I have inserted the stanza below, but I have not received any logs:

[monitor://C:\Program Files (x86)\dir1\log\name_CRT_[DDMMYYYY].log]
sourcetype = mylog:auditlog
disabled = 0
index = test

Any help please?
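The [DDMMYYYY] part does not expand into actual dates, so a date-stamped file name never matches that stanza. A hedged sketch of one way to handle it: monitor the directory and whitelist the file name pattern (the regex assumes the date stamp is always 8 digits):

    [monitor://C:\Program Files (x86)\dir1\log]
    whitelist = name_CRT_\d{8}\.log$
    sourcetype = mylog:auditlog
    index = test
    disabled = 0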
We are using Splunk forwarder v9.0.3. We would like the Splunk forwarder to reject the TLS server certificate if the path length basic constraint check fails. We generated the TLS server certificate with pathlen set to 0 in the root CA, and the chain is root CA -> intermediate CA -> server certificate. Since the root CA pathlen is 0, no intermediate CA should be allowed, but the forwarder accepts the chain root CA -> intermediate CA -> server certificate. Is this a known limitation, or does it require a configuration change to enforce basic constraint validation on path length? Please advise. Below are our outputs.conf contents.

[tcpout-server://host:port]
clientCert = /<..>/clientCert.pem
sslPassword = <..>
sslRootCAPath = /<..>/ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
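As a cross-check outside Splunk, OpenSSL's own chain verification enforces the pathlen basic constraint, so comparing its verdict against the forwarder's behaviour shows whether the certificates themselves are at fault; a sketch, with file names as placeholders:

    # Verify the server certificate against the root CA, supplying the
    # intermediate as an untrusted chain cert; a pathlen=0 violation should
    # make this fail with "path length constraint exceeded"
    openssl verify -CAfile rootCA.pem -untrusted intermediateCA.pem serverCert.pem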
Hi, the F5 team is sending logs to our Splunk syslog server as comma-separated values. After onboarding we see that some field values (string values) are truncated. Example: from F5, Violation_details=xxxxxxxxxxxxx (say 50 words); after onboarding to Splunk, Violation_details=xxxxx (truncated). What might be the issue here? Our flow is syslog server -> UF -> indexer.
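One common cause is the per-event TRUNCATE limit in props.conf (default 10000 bytes) or line breaking on the indexer. A hedged sketch of raising the limit for the F5 sourcetype; the sourcetype name f5:asm:syslog is an assumption, use whatever your data actually carries:

    props.conf (on the indexers / first full instance in the path):
    [f5:asm:syslog]
    TRUNCATE = 50000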
Hi, I have a huge set of data with different emails in it, and I want to set up email alerts for a few parameters. The issue is that I am unable to group the events by email and send an email alert with a CSV attachment of the results. Example: abc@email has around 80 events in the table; I want to send only one alert to abc with all 80 events in it as a CSV attachment. There are around 85+ emails in my data, and they have to be grouped using only one SPL search, which should be used in an alert. Note: don't suggest $result.field$ or stats to group; those are not useful for me. Thank you.
We are using Splunk forwarder v9.0.3. One of the X509 validations we would like to perform against the TLS server certificate coming from the Splunk indexer is ExtendedKeyUsage (EKU) validation for server authentication. We generated the TLS server certificate without the ExtendedKeyUsage extension to test this use case. However, the Splunk forwarder still accepts the TLS server certificate. Ideally, it should only accept the certificate when ExtendedKeyUsage is set to server authentication. Is this a known limitation, or does it require a configuration change to perform this EKU validation? Please advise. Below are our outputs.conf contents.

[tcpout-server://host:port]
clientCert = /<..>/clientCert.pem
sslPassword = <..>
sslRootCAPath = /<..>/ca.pem
sslVerifyServerCert = true
sslVerifyServerName = true
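For comparison, OpenSSL can be asked to apply its server purpose check during verification with -purpose sslserver, which shows how a strict validator treats the same chain; file names below are placeholders. Note that this check rejects a leaf whose EKU is present but excludes serverAuth, while a leaf with no EKU extension at all is generally still accepted, since EKU only restricts usage when present.

    openssl verify -purpose sslserver -CAfile rootCA.pem -untrusted intermediateCA.pem serverCert.pem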