It’s been a tremendous year of innovation at Splunk, and we’re excited to introduce new features to help DevOps, IT Operations, and software development teams build, troubleshoot, and innovate faster. Our customers are undergoing large-scale initiatives around IT modernization, cloud migration, and application modernization. With more data, dependencies, and changes in production environments, there are more failure scenarios than teams can ever anticipate, and more to monitor and react to. Splunk Observability is the only solution providing: end-to-end visibility across hybrid landscapes; AIOps capabilities for full correlation of logs, metrics, and traces to predict and prevent problems before they impact customers; and AI-directed troubleshooting that leverages a unified entity model analyzing 100% of unsampled telemetry data to pinpoint the issues that impact services and customers the most. Let’s take a spin through the new innovations, with a bit of context for each.

End-to-end Visibility Across Your Hybrid Cloud

It’s hard to measure how infrastructure health and application performance impact your digital customer experience when you have to switch between tools or different monitoring experiences. With observability integrated into the broader Splunk platform, only Splunk instruments your entire tech landscape, from packaged applications running on-prem (like order processing or fulfillment systems from third-party vendors) to cloud-native web applications, with no sampling, so you can get end-to-end visibility and rapidly correlate issues that are impacting multiple parts of your software stack.

New innovations include:

- Splunk Log Observer Connect is now generally available. This capability of Splunk Log Observer connects your logs to metrics and traces, helping you understand how logs relate to your infrastructure and applications as you investigate performance problems. More context across metrics, traces, and log data helps you quickly scope issues in production environments, isolate latency, and identify granular details to understand the root cause, faster. (Figure: Log Observer Connect joins log, metric, and trace data in context to scope and isolate performance problems across your infrastructure and applications.)

- Another capability expanding your ability to connect granular log data with metrics is the preview of Logs in Observability Dashboards. You can now combine logs and metrics in your Observability dashboards to scope an issue’s severity and reach, and quickly drill down with more granularity as you investigate possible causes.

- With the preview of auto-instrumentation of Java applications via the OpenTelemetry Collector, you can start streaming traces and monitoring your distributed applications with Splunk APM in minutes. This feature reduces the time to get data into Splunk Observability Cloud, providing immediate value with no configuration required for instrumentation or data collection agents (see the sketch after this list).

- The general availability of Infrastructure Navigator 2.0, from Splunk Infrastructure Monitoring, provides immediate full-stack visibility across hosts, containers, databases, and services spanning hybrid cloud environments. Engineers can quickly diagnose health and performance problems across their tech stack with easy-to-use functionality and friendly pivot sidebars that intelligently guide users to performance problems in complex infrastructure environments.
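Not part of the announcement itself, but for readers curious what zero-code Java instrumentation looks like in practice, here is a minimal sketch using the Splunk distribution of the OpenTelemetry Java agent; the download URL, service name, realm, and token are illustrative assumptions:

    # Fetch the Splunk OpenTelemetry Java agent (URL is an assumption; check the official docs).
    curl -L -o splunk-otel-javaagent.jar \
      https://github.com/signalfx/splunk-otel-java/releases/latest/download/splunk-otel-javaagent.jar

    # Attach the agent at JVM startup; no application code changes are needed.
    export OTEL_SERVICE_NAME=checkout-service      # how the service appears in APM (assumed name)
    export SPLUNK_ACCESS_TOKEN=<your-access-token> # placeholder
    export SPLUNK_REALM=us0                        # placeholder realm
    java -javaagent:./splunk-otel-javaagent.jar -jar my-application.jar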
Predict and Prevent Problems Before Your Customers Notice

It's impossible to anticipate unknowns when you are solely dependent on alerts to know what's changing. AIOps is baked into Splunk Observability, making it possible to predict and prevent problems before they turn into customer-impacting incidents.

New innovations include:

- We’ve integrated our Synthetic Monitoring capabilities into Splunk Observability in preview form, helping you proactively test and monitor the uptime and performance of your critical APIs, services, and customer experience in a single user interface. Synthetic Monitoring within Splunk Observability enables you to detect and resolve issues before customers are impacted, with seamless context across both your client-side and backend performance. (Figure: Best-in-class synthetic monitoring in Splunk Observability provides full-page performance breakdowns, with filmstrips and video playback to visualize customer experience.)

- For IT Service Intelligence (ITSI), we’ve listened to your feedback and are adding three capabilities voted as your top priorities on ideas.splunk.com. First, the general availability of the Splunk Observability Content Pack, Version 2, includes everything from high-level Executive Glass Tables for quick summaries to quick navigation out of ITSI and IT Essentials Work into Splunk Observability for further investigation: end-user experience with Splunk RUM, application performance with Splunk APM, and infrastructure health with Splunk Infrastructure Monitoring. Second, the preview of Custom Threshold Windows helps identify when expected abnormal behavior may arise, reducing alert fatigue and preparing you for upcoming changes in your KPIs and services. Third, our ServiceNow Content Pack is now generally available. This capability brings in key data from your ServiceNow instances, such as events, change requests, incidents, and business applications, making them all easily visible and available.

- For engineers looking to quickly understand and troubleshoot their infrastructure, AutoDetect with Customization is generally available. This capability provides intuitive alert integrations and workflows that offer a consolidated view of infrastructure alerts, plus real-time streaming analytics to instantly detect critical patterns and anomalies.

AI-directed Troubleshooting to Know Where to Look

It’s hard to find the root cause of problems when you have to manually sift through dashboards to piece together answers and find a problematic needle in a stack of needles. Only Splunk provides a directed troubleshooting experience that includes business context and tells you where to look when investigating a problem, for more rapid MTTR.

New innovations include:

- We’re announcing the preview of Splunk Incident Intelligence on Splunk Observability Cloud to help IT and DevOps teams prevent unplanned downtime with full-stack, full-context alerting. This new solution reduces alert noise with out-of-the-box correlation for Splunk and third-party alerts, so incident responders can improve their mean time to acknowledge issues. Incident Intelligence automates the entire incident response workflow, from scheduling to post-incident reviews, and integrates with Slack, Microsoft Teams, and ServiceNow to improve collaboration, knowledge sharing, and mean time to resolve incidents. (Figure: Splunk Incident Intelligence helps incident response teams prevent unplanned downtime and reduce their mean time to acknowledge and resolve issues impacting critical services.)
- We’re extending Splunk APM’s AlwaysOn Profiling capabilities to continuously monitor your CPU and memory. Previews are available for CPU profiling of Node.js and .NET applications, and memory profiling of Java applications. Engineers can now continuously monitor code-level performance to find service bottlenecks in Node.js and .NET applications, and understand how code impacts memory usage in Java applications. (Figure: Splunk APM’s AlwaysOn Memory Profiling helps pinpoint code responsible for high memory consumption.)

Try Splunk for Observability Today

We encourage you to continue your Splunk journey and try our Observability capabilities. Whether you’re a current Splunk user looking to expand best-in-class security or logging capabilities, or want to unify your IT and engineering teams in a single platform for your IT modernization, cloud migration, or application modernization initiatives, Splunk Observability helps you solve problems faster as you scale. Try Splunk Observability today in our free trial!

— Mat Ball, Sr. Product Marketing Manager - DevOps Solutions
Hi, I have two fields: target (server1, server2, …) and status (ok, nokey). How can I show these fields on a timechart (I mean an overlay chart), or a stacked bar chart showing the count of status by target? Any ideas? Thanks
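A minimal sketch of one way to do this, assuming the fields are literally named target and status (the index and sourcetype are placeholders): concatenate the two fields into a single series so timechart can split on the combination:

    index=myindex sourcetype=mydata
    | eval series=target.":".status
    | timechart span=1h count by series

Rendered as a column chart with stacking enabled, this gives a stacked bar of status counts per target over time; a plain | timechart count by target gives the per-target overlay on its own.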
Hello, I am trying to establish connectivity between AWS Kinesis Firehose and a Splunk HF using version 6.0.0 of the Splunk Add-on for AWS, and I am having trouble configuring the CA-signed certificates. I am following this documentation, and since my HF is within an AWS private cloud I am following the section that has the prerequisite for "the HEC endpoint to be terminated with a valid CA-signed SSL certificate". I have a valid CA-signed SSL certificate but I am unsure where I need to install it. So far I have updated server/local/web.conf with the certificates so that the web UI is secure. Do I need to make any additional adjustments on the HF concerning the certificates? For example, do I need to update inputs.conf in any way to secure HTTP communication? Any help is greatly appreciated! Thank you and best regards, Andrew
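A hedged sketch of the inputs.conf side, in case it helps: web.conf only secures Splunk Web, while HEC is served by splunkd and terminated according to the [http] stanza in inputs.conf. Something along these lines (paths, token, and index name are assumptions, not from the post) is typically where the CA-signed certificate goes:

    # inputs.conf on the HF (e.g., in an app's local directory)
    [http]
    disabled = 0
    enableSSL = 1
    port = 8088
    serverCert = /opt/splunk/etc/auth/mycerts/hec_cert_and_chain.pem
    sslPassword = <private key password, if the key is encrypted>

    [http://firehose_token]
    token = <your-generated-token>
    index = aws_firehose

The serverCert PEM usually needs to contain the server certificate, any intermediates, and the private key so Firehose can validate the full chain.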
Hello! I have learned so much from this community over the years, but there is one query I am trying to write that I cannot figure out. I have a number of logs, each containing four fields, and each of those fields has a unique set of a few values. I am trying to count each unique value and put the results in a three-column table with the field name, value, and count. I know I could hard-code a category/field name for all the values, but as these values change over time I would rather not do that if possible.

Log examples:

    key exchange algo: dh-group-exchange-sha256, public key algo: ssh-dss, cipher algo: aes128-cbc, mac algo: sha256
    key exchange algo: ecdh-sha2-nistp256, public key algo: ssh-rsa, cipher algo: aes256-ctr, mac algo: sha256

Desired result:

    field        cipher                      count
    keyExchange  dh-group-exchange-sha256    ##
    keyExchange  ecdh-sha2-nistp256          ##
    publicKey    ssh-dss                     ##
    publicKey    ssh-rsa                     ##
    etc.

Is there a way to do this besides hard-coding a field for each cipher? For reference, here is how I am pulling the two-column list of cipher | count without the field name:

    base search
    | eval cipher=keyExchange.";".publicKey
    | makemv delim=";" cipher
    | stats count by cipher

This also works for two columns but appears to be a bit slower:

    | eval cipher = mvappend(keyExchange,publicKey)
    | mvexpand cipher
    | stats count by cipher

Thanks!
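One hedged sketch that avoids hard-coding the values (the four field names still have to be listed once, which seems unavoidable; cipherAlgo and macAlgo are assumed names for the other two extractions): tag each value with its field name, expand, then split the pair back into two columns:

    base search
    | foreach keyExchange publicKey cipherAlgo macAlgo
        [ eval pairs=mvappend(pairs, "<<FIELD>>=".'<<FIELD>>') ]
    | mvexpand pairs
    | rex field=pairs "^(?<field>[^=]+)=(?<cipher>.+)$"
    | stats count by field cipher

mvappend skips the null initial value of pairs (and any event where a field is missing), so no seeding is needed, and the rex recovers the field name and value as the first two columns of the desired table.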
Hello. We are running Enterprise 8.2.6 (Windows Server). We use a product called Fastvue Syslog Server on another Windows Server as a central syslog server. Fastvue Syslog writes the syslogs into folders such as:

    D:\Logs\Syslog\Logs\switch\x.x.x.x\x.x.x.x-YYYY-MM-DD.log
    D:\Logs\Syslog\Logs\esx\x.x.x.x\x.x.x.x-YYYY-MM-DD.log

(where x.x.x.x is the syslog client IP address)

The syslog server has the Splunk Universal Forwarder installed and is configured to forward Windows Event Logs. The inputs.conf file has the following added in addition to the event logs:

    [monitor://D:\Logs\Syslog\Logs\switch\*]
    sourcetype = syslog-switch
    disabled = false

    [monitor://D:\Logs\Syslog\Logs\esx\*]
    sourcetype = syslog-esx
    disabled = false

On the Splunk indexer, we can see event logs from the Windows Server, but we are not seeing any syslog messages from the logged files. Am I missing something? Thanks in advance.
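A hedged suggestion in sketch form: a trailing \* in a monitor path only matches one directory level, and these .log files sit one level deeper, inside the per-client x.x.x.x folders. Pointing the monitor at the parent directory (no wildcard) lets the default recursive behavior descend into subdirectories; the whitelist is an assumption to restrict matching to the .log files:

    [monitor://D:\Logs\Syslog\Logs\switch]
    sourcetype = syslog-switch
    whitelist = \.log$
    disabled = false

    [monitor://D:\Logs\Syslog\Logs\esx]
    sourcetype = syslog-esx
    whitelist = \.log$
    disabled = false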
Hello all. I have checked the URLs in user experience (pages & AJAX requests); there are a lot of URLs that have no requests (0 requests), and we delete them manually. So I thought of an enhancement: I want to know whether we can automate deleting the URLs that have 0 requests. 1) Why do we have URLs with 0 requests? 2) Can we automate the delete activity? If yes, what improvement would automating this step bring to the tool? 3) What are the consequences of this step? Thanks in advance, Omneya
Hi, I'm trying to generate a report with the following information:

- Total bandwidth for each user
- List of the top 3 URLs (by bandwidth usage) for each user
- Bandwidth for each URL

Thank you!
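A minimal sketch of one approach, assuming a proxy/web index with user, url, and bytes fields (all three names, plus the index and sourcetype, are assumptions):

    index=proxy sourcetype=web_traffic
    | stats sum(bytes) as url_bytes by user, url
    | eventstats sum(url_bytes) as total_user_bytes by user
    | sort 0 user, -url_bytes
    | streamstats count as rank by user
    | where rank <= 3
    | table user, total_user_bytes, url, url_bytes

The eventstats runs before the top-3 filter, so total_user_bytes reflects each user's traffic across all URLs, not just the three that survive the where clause.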
Looking to brush off the cobwebs of my Splunk use and wanted to find a simple query of server activity/traffic for a server on our domain.  If anyone has a basic query they use on a regular basis to see traffic on their servers, I'd appreciate if you could share it, once I get the basic syntax, I can take it from there.
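Not from the thread, but as a generic starting point, a sketch assuming data is already being forwarded for the host (the host name is a placeholder):

    index=* host="myserver01" earliest=-24h
    | timechart span=1h count by sourcetype

This shows event volume per sourcetype for the server over the last day; for actual network traffic, the usual variant is to search a firewall or netflow index and chart sum(bytes) instead.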
WATCH NOW

Admin Configuration Service (ACS) is a RESTful API that equips Splunk Cloud admins with a steady stream of capabilities (e.g., private app management, HEC tokens) that will remove friction and provide full control over their environments. ACS is already enabled in your environment! It’s ready to use out of the box. Tune in to learn about the following ACS capabilities in greater detail:

- Getting started and setting up the API (we use Postman)
- Index creation and management
- Adding a new IP allow list
- HEC management
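As a taste of what the session walks through, here is a hedged sketch of an ACS call that creates an index; the stack name, token, and field values are placeholders, and the endpoint shape follows the ACS documentation as best understood:

    # Create an event index via the ACS API (stack name and token are placeholders).
    curl -X POST "https://admin.splunk.com/my-stack/adminconfig/v2/indexes" \
      -H "Authorization: Bearer $ACS_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"name": "web_prod", "datatype": "event", "searchableDays": 90}'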
Has anyone created a data visualization add-on or app for stock analysis? I have searched Splunkbase extensively. I want to display open/high/low/close data for stock tracking using a candlestick view, but can't really find an existing visualization that is able to display stock data that way. Any thoughts or suggestions? Thank you.
I was going through the tutorial to build "your first app" on the Splunk Development site here, and I could not get the API call to create an index.

Running on a Windows 10 development box (trial license). Splunk Enterprise Version: 8.2.6, Build: a6fe1ee8894b.

The command below fails and I am not sure why. I can use one of the other two options (CLI or Web UI) to create the index, but wanted to know why the REST API option failed.

    C:\apps\splunk\bin>curl -k -u "user":"password" https://localhost:8089/servicesNS/admin/search/data/indexes -d name="devtutorial"
    <?xml version="1.0" encoding="UTF-8"?>
    <response>
      <messages>
        <msg type="ERROR">Action forbidden.</msg>
      </messages>
    </response>

Thank you.
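One hedged thing to try (not from the tutorial): "Action forbidden." from that endpoint is often a namespace or capability issue rather than bad credentials. Check that the account's role has the indexes_edit capability, and try the global endpoint instead of the admin/search namespace:

    curl -k -u "user":"password" https://localhost:8089/services/data/indexes -d name="devtutorial"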
Hello Splunkers, after my own unsuccessful research, I thought you might have the answer. I'm wondering if there is a way to make the thruput limit variable. My search peer may have too large an amount of data to index at a time due to a network issue, and I would like to spread out the indexing during the night, for example. So, is there a way to set a throughput ([thruput]) limit while my server is busiest and unset this limit when it is less used? Thanks in advance for your time and your answer! Regards, Antoine
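For reference, the knob itself is static: it lives in limits.conf, so making it time-varying would mean swapping the value on a schedule (for example with a cron/Task Scheduler job or a deployment server push, followed by a restart, since limits.conf changes generally require one). The value below is an illustrative assumption:

    # $SPLUNK_HOME/etc/system/local/limits.conf
    # Cap indexing throughput during busy hours (KB/s; 0 means unlimited).
    [thruput]
    maxKBps = 512

A nightly job could rewrite maxKBps = 0 (or remove the stanza) and restart, then restore the cap in the morning.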
Hi everyone, I want to combine the two commands below into a single one. I have tried a comma but it's not working. How do I do it?

    | eval comments=if(Action="create","something has been created",'comments')
    | eval comments=if(Action="delete","something has been deleted",'comments')

Thanks.
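A minimal sketch of the usual answer: case() evaluates condition/value pairs in order, so both branches plus the fall-through fit in one eval:

    | eval comments=case(Action="create", "something has been created",
                         Action="delete", "something has been deleted",
                         true(), 'comments')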
Hey everyone, I hope you're having a great day! I have configured a custom field extraction in the Splunk Search app for my sourcetype, but I don't have the option to share it with other users, unlike on another Splunk instance where I have the Power role (with the Power role, I can share it no problem). I don't want to assign myself the Power role since it's broad and wouldn't follow the principle of least privilege. Which permission would I need to assign myself in order to be able to share my field extraction with other users?
Hi, I'm wondering if there isn't an issue with the correlation search that comes with Splunk ES, "Threat activity detected". My problem comes from the fact that when it's triggered, I get at least 2 other alerts concerning the "24h threshold risk score" (RBA).

I have taken the original correlation search (at least I think it is):

    | from datamodel:"Threat_Intelligence"."Threat_Activity"
    | dedup threat_match_field,threat_match_value
    | `get_event_id`
    | table _raw,event_id,source,src,dest,src_user,user,threat*,weight
    | rename weight as record_weight
    | `per_panel_filter("ppf_threat_activity","threat_match_field,threat_match_value")`
    | `get_threat_attribution(threat_key)`
    | rename source_* as threat_source_*,description as threat_description
    | fields - *time
    | eval risk_score=case(isnum(record_weight), record_weight, isnum(weight) AND weight=1, 60, isnum(weight), weight, 1=1, null()),
           risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),
           risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),
           risk_network=if(threat_match_field IN("http_user_agent", "url") OR threat_match_field LIKE "certificate_%",null(),threat_match_value),
           risk_host=if(threat_match_field IN("file_name", "process", "service") OR threat_match_field LIKE "registry_%",null(),threat_match_value),
           risk_other=if(threat_match_field IN("query", "answer", "src", "dest", "src_user", "user", "file_hash", "http_user_agent", "url", "file_name", "process", "service") OR threat_match_field LIKE "certificate_%" OR threat_match_field LIKE "registry_%",null(),threat_match_value)

I noticed that the mechanism selecting which risk category applies changes after the first line.

1. risk_system

    risk_system=if(threat_match_field IN("query", "answer"),threat_match_value,null()),

If I translate: if the threat_match_field is "query" or "answer", then the risk category is system and risk_system="IOC that matched". In this case, this is a domain or URL (because it's a DNS query or answer). --> THIS LINE IS GOOD

2. risk_hash

    risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

But in the case of a hash, if I translate: if the threat_match_field is "file_hash", then the risk category is NOT hash and risk_hash=null. --> THIS LINE IS WRONG

And it is the same for all the other categories: network, host, other. So in my opinion, the values in the if statements were reversed:

    risk_hash=if(threat_match_field IN("file_hash"),null(),threat_match_value),

should be

    risk_hash=if(threat_match_field IN("file_hash"),threat_match_value,null()),

Is it me? My instance? Or what? Thanks in advance, Xavier
Hi community, I have 2 different lists with fields as follows:

    list A - ip_address, source, account_id
    list B - ip_address, source, account_id, field4, field5

I want to compare both lists to accomplish list(B) - list(A), i.e. retain only the list(B) entries whose ip_address value is not present in list(A), while also returning the values of field4 and field5.

Example:

    list A
    ip_address    source  account_id
    10.0.0.1      A       1000
    192.168.0.1   A       1001

    list B
    ip_address    source  account_id  field4  field5
    10.0.0.2      B       999         xxx     yyyy
    192.168.0.1   B       1001        xxy     yyyx

    Result
    ip_address    source  account_id  field4  field5
    10.0.0.2      B       999         xxx     yyyy

I have tried the following:

    index=seceng source="listB"
    | eval source="B"
    | fields ip_address source account_id field4 field5
    | append
        [ | inputlookup listA
          | eval source="A"
          | fields ip_address source account_id ]
    | stats values(source) as source, count by ip_address account_id field4 field5
    | where count == 1 AND source == "B"

The issue with this query is that since field4 and field5 are unique attributes of list(B) only, the stats query will only return list(B) entries. It works when field4 and field5 are removed from the stats query, but they are the attributes that I want to include in the result. Can anyone suggest how the expected result can be accomplished? Really appreciate it, and thanks in advance!
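A hedged sketch of one fix: group only by the comparison key (ip_address) and carry the other fields along with values(), so list(A) and list(B) rows for the same IP fall into the same group; renaming the marker field also avoids clobbering Splunk's default source field:

    index=seceng source="listB"
    | eval list="B"
    | fields ip_address account_id field4 field5 list
    | append
        [ | inputlookup listA
          | eval list="A"
          | fields ip_address account_id list ]
    | stats values(list) as list, values(account_id) as account_id,
            values(field4) as field4, values(field5) as field5 by ip_address
    | where mvcount(list)=1 AND list="B"

mvcount(list)=1 keeps only IPs that appeared in exactly one list, and list="B" narrows that to the list(B)-only entries, with field4 and field5 intact.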
Hello Splunkers, I need your help finding a solution for the following issue. I have a log file as a source that I'm indexing as metrics.

Sample event:

    2022/06/15 10:15:22 Total: 1G Used: 65332K Free: 960.2M

I'm able to index the values in a metric index, but I would like to convert everything to the same unit before doing so. I tried with eval but it doesn't work.

props.conf:

    DATETIME_CONFIG =
    LINE_BREAKER = ([\r\n]+)
    NO_BINARY_CHECK = true
    category = Custom
    pulldown_type = 1
    TRANSFORMS-extract_test = fields_extract_test
    EVAL-Total = Total*100
    METRIC-SCHEMA-TRANSFORMS = metric-schema:extract_metrics_test

transforms.conf:

    [fields_extract_test]
    REGEX = .*Total: (.*?)([A-Z]) Used: (.*?)([A-Z]) Free: (.*?)([A-Z])
    FORMAT = Total::$1 Total_Unit::$2 Used::$3 Used_Unit::$4 Free::$5 Free_Unit::$6
    WRITE_META = true

    [metric-schema:extract_metrics_test]
    METRIC-SCHEMA-MEASURES = _ALLNUMS_
    METRIC-SCHEMA-WHITELIST-DIMS = Total,Total_Unit,Used,Used_Unit,Free,Free_Unit

How can I do this? Thanks in advance.
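A hedged observation plus sketch: EVAL- in props.conf is a search-time setting, so it never changes what the metrics pipeline indexes. An index-time INGEST_EVAL transform, chained after fields_extract_test, can rewrite the numbers before the metric schema runs. The stanza below is an illustrative assumption that normalizes everything to kilobytes using the extracted *_Unit fields:

    # transforms.conf
    [normalize_units_test]
    INGEST_EVAL = Total:=case(Total_Unit=="G", Total*1048576, Total_Unit=="M", Total*1024, true(), Total), Used:=case(Used_Unit=="G", Used*1048576, Used_Unit=="M", Used*1024, true(), Used), Free:=case(Free_Unit=="G", Free*1048576, Free_Unit=="M", Free*1024, true(), Free)

    # props.conf -- chain it after the extraction transform
    TRANSFORMS-extract_test = fields_extract_test, normalize_units_test

The := operator overwrites each previously extracted indexed field with its converted value.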
I have a panel which shows the usage of a dashboard in the GMT timezone. Is it possible to show the same data in different timezones (PST, EST, IST, etc.) as different lines in the same chart? Below is the query which shows the count in GMT:

    index="_internal" user!="-" sourcetype=splunkd_ui_access "GET" "sample"
    | rex field=uri "\/app\/(?<App_Value>\w+)\/(?<dashboard>[^?\/]+)"
    | search App_Value="sample" dashboard="daily_health"
    | timechart count

How can we modify this query to show different timezones in a single chart?
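A hedged sketch of one approach: _time is stored in epoch (UTC) and rendered in the searching user's timezone, so plotting "lines per timezone" means charting shifted copies of each event. The offsets below are fixed (PST = UTC-8, EST = UTC-5, IST = UTC+5:30) and ignore daylight saving, which is the main caveat of this trick:

    index="_internal" user!="-" sourcetype=splunkd_ui_access "GET" "sample"
    | rex field=uri "\/app\/(?<App_Value>\w+)\/(?<dashboard>[^?\/]+)"
    | search App_Value="sample" dashboard="daily_health"
    | eval copies=mvappend("GMT,0", "PST,-28800", "EST,-18000", "IST,19800")
    | mvexpand copies
    | rex field=copies "^(?<tz>[^,]+),(?<offset>-?\d+)$"
    | eval _time=_time+tonumber(offset)
    | timechart span=1h count by tz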
We have platform events in Salesforce that get published, so from Splunk we need to subscribe to those events. How can this be done in Splunk? Please suggest.
If a cloud application like ServiceNow or Salesforce is integrated with central authentication like Azure AD for authenticating users, how can I identify user authentication logs for these specific apps in the Azure AD logs? I am looking at logs using this query:

    index=o365 sourcetype=o365:management:activity
    | stats count by vendor_product

but most of these vendor products are Microsoft-based. I don't see any other cloud apps here. Would somebody be able to help me with this, please?
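A hedged pointer: sign-ins to federated apps usually appear in Azure AD's sign-in logs rather than in the O365 management activity feed. If Azure AD sign-in data is onboarded (for example via the Splunk Add-on for Microsoft Azure, where it commonly lands as sourcetype azure:aad:signin), the appDisplayName field identifies the target application; the index name below is an assumption:

    index=azure_ad sourcetype="azure:aad:signin"
    | search appDisplayName IN ("ServiceNow", "Salesforce")
    | stats count by appDisplayName, userPrincipalName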