We are excited to introduce the enhanced Time Picker functionality, designed to empower developers and SREs with efficient observability workflows. With new capabilities that make investigating and working across different product areas easier than ever, Time Picker helps you get to insights faster across Logs, Application Performance Monitoring (APM), Real User Monitoring (RUM), Infrastructure Monitoring (IM), and Synthetics.

Seamless Transition between Product Areas
Imagine starting your work in the Log Observer interface, pinpointing anomalies with precision. When it's time to dive into APM, you can seamlessly carry your selected time range with you. Time Picker's most recent selection persists across all product areas, so your carefully chosen timeframe follows you as you switch between products in the Observability suite, eliminating the need for redundant configuration.

Intuitive UI and Enhanced Functionality
Time Picker's improved user interface steps up the game. Not only does it resolve past bugs, it also introduces advanced features such as time range pasting and type-ahead behavior. Consider this: you have just joined a coworker in investigating an alert, and they send you the time range in which an outage happened. With the enhanced time range pasting feature, you can instantly apply that time frame without typing the value manually, minimizing clicks and accelerating your workflow.

Enhanced Timestamp Format for Convenience and Efficiency
Time Picker also accepts a wide range of formats that you can paste or type into the component, while standardizing the output format across product areas. In addition, Time Picker takes your preferred time zone (or browser time zone) into account: if you paste a time with an offset or time zone designation, it automatically converts it to the time zone you have set. It is now easier than ever to work with colleagues across different time zones, and the chance of encountering bugs is significantly reduced. Returning to the earlier scenario: you are working on an alert, and a colleague in the Eastern Time zone shares the time range in which the outage happened, with an EST suffix. Because you work in PST and your Observability suite is set to PST, pasting the time range converts it to your time zone and displays the data correctly. No need to adjust it manually! As Gwen Stefani once said: What are you waiting for? Try out the enhanced Time Picker today!
I have two strings that need to be searched in Splunk. They live in different indexes with different sourcetypes: one string is "published sourcing plan" and the other is "published transfer order". I need to find the "published transfer order" log in Splunk; if it is not there within 5 minutes of the "published sourcing plan" log arriving, I need to count that occurrence or retrieve details such as salesorderid from the "published sourcing plan" log. How do I prepare this search query in Splunk? In case no "published transfer order" log is available in Splunk at all, I also need to capture that.
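One possible approach (a rough sketch only; the index, sourcetype, and field names below are placeholders, and it assumes both log types carry a salesorderid field): search both strings together, tag each event, and compare the two timestamps per sales order.

(index=idx_sourcing sourcetype=st_sourcing "published sourcing plan")
OR (index=idx_transfer sourcetype=st_transfer "published transfer order")
| eval event_type=if(searchmatch("published transfer order"), "transfer", "sourcing")
| stats min(eval(if(event_type="sourcing", _time, null()))) as sourcing_time
        min(eval(if(event_type="transfer", _time, null()))) as transfer_time
        by salesorderid
| where isnull(transfer_time) OR transfer_time - sourcing_time > 300
| table salesorderid sourcing_time transfer_time

This returns the sales orders whose transfer-order log is missing entirely or arrived more than 300 seconds (5 minutes) after the sourcing-plan log; a final | stats count would give the total.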
Hello, I would like to properly parse rspamd logs that look like this (2 lines sample):   2023-11-12 16:06:22 #28191(rspamd_proxy) <8eca26>; proxy; rspamd_task_write_log: action="no action", digest="107a69c58d90a38bb0214546cbe78b52", dns_req="79", filename="undef", forced_action="undef", ip="A.B.C.D", is_spam="F", len="98123", subject="foobar", head_from=""dude" <info@example.com>", head_to=""other" <other@example.net>", head_date="Sun, 12 Nov 2023 07:08:57 +0000", head_ua="nil", mid="<B3.B6.48980.82A70556@gg.mta2vrest.cc.prd.sparkpost>", qid="0E4231619A", scores="-0.71/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-3.00){100.00%;},RBL_MAILSPIKE_VERYBAD(1.50){A.B.C.D:from;},RWL_AMI_LASTHOP(-1.00){A.B.C.D:from;},URI_COUNT_ODD(1.00){105;},FORGED_SENDER(0.30){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},MANY_INVISIBLE_PARTS(0.30){4;},ZERO_FONT(0.20){2;},BAD_REP_POLICIES(0.10){},MIME_GOOD(-0.10){multipart/alternative;text/plain;},HAS_LIST_UNSUB(-0.01){},ARC_NA(0.00){},ASN(0.00){asn:23528, ipnet:A.B.C.0/20, country:US;},DKIM_TRACE(0.00){example.com:+;},DMARC_POLICY_ALLOW(0.00){example.com;none;},FROM_HAS_DN(0.00){},FROM_NEQ_ENVFROM(0.00){info@example.com;bounce-1699772934217.160136898825325270136859@example.com;},HAS_REPLYTO(0.00){support@example.com;},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_ONE(0.00){1;},RCVD_COUNT_ZERO(0.00){0;},REDIRECTOR_URL(0.00){twitter.com;},REPLYTO_DOM_NEQ_FROM_DOM(0.00){},R_DKIM_ALLOW(0.00){example.com:s=scph0618;},R_SPF_ALLOW(0.00){+exists:A.B.C.D._spf.sparkpostmail.com;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="1605.197ms", user="undef" 2023-11-12 16:02:04 #28191(rspamd_proxy) <4a3599>; proxy; rspamd_task_write_log: action="no action", digest="5151f8aa4eaebc5877c7308fed4ea21e", dns_req="19", filename="undef", forced_action="undef", ip="E.F.G.H", is_spam="F", len="109529", subject="Re: barfoo", head_from="other me <other@example.net>", head_to="someone <someone@exmaple.fr>", head_date="Sun, 12 Nov 2023 16:02:03 +0100", head_ua="Apple Mail (2.3731.700.6)", mid="<3425840B-B955-4647-AB4D-163FC54BE820@example.net>", qid="163A215DB3", scores="-4.09/15.00", settings_id="undef", symbols_scores_params="BAYES_HAM(-2.99){99.99%;},ARC_ALLOW(-1.00){example.net:s=openarc-20230616:i=1;},MIME_GOOD(-0.10){multipart/mixed;text/plain;},APPLE_MAILER_COMMON(0.00){},ASN(0.00){asn:12322, ipnet:E.F.0.0/11, country:FR;},FREEMAIL_CC(0.00){example.com;},FREEMAIL_ENVRCPT(0.00){example.fr;example.com;},FREEMAIL_TO(0.00){example.fr;},FROM_EQ_ENVFROM(0.00){},FROM_HAS_DN(0.00){},MID_RHS_MATCH_FROM(0.00){},MIME_TRACE(0.00){0:+;1:+;2:~;},RCPT_COUNT_TWO(0.00){2;},RCVD_COUNT_ZERO(0.00){0;},TO_DN_ALL(0.00){},TO_MATCH_ENVRCPT_ALL(0.00){}", time_real="428.021ms", user="me"     The field I need to split is symbols_scores_params. I’ve used this:   sourcetype=rspamd user=* | makemv tokenizer="([^,]+),?" symbols_scores_params | mvexpand symbols_scores_params | rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]+)}" | eval {name}_score=score, {name}_options=options     It works great, proper fields are created (eg. BAYES_HAM_score, BAYES_HAM_options, etc.), but a single event is turned into a pack of 17 to 35 events. Is there a way to dedup those events and to keep every new fields extracted from symbols_scores_params ?
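One way to collapse the expanded rows back into one row per original event is to finish with a stats aggregation keyed on a per-message identifier. This is a sketch that assumes digest is unique per message (mid or qid would work the same way) and relies on the values(*) as * wildcard idiom:

sourcetype=rspamd user=*
| makemv tokenizer="([^,]+),?" symbols_scores_params
| mvexpand symbols_scores_params
| rex field=symbols_scores_params "(?<name>[A-Z0-9_]+)\((?<score>-?[.0-9]+)\){(?<options>[^{}]*)}"
| eval {name}_score=score, {name}_options=options
| fields - symbols_scores_params name score options
| stats values(*) as * by digest

Dropping the intermediate name/score/options fields before the stats keeps the output to one row per digest with all of the generated *_score and *_options fields intact.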
How can I get default fields like host, app, and service after using eval? After using eval, I am not able to fetch any default fields. Please advise.
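For what it's worth, eval by itself does not remove fields such as host, source, or sourcetype; they usually disappear because a later transforming command (stats, chart, or a table with a limited field list) keeps only the fields it is told about. A minimal sketch with placeholder index and field names:

index=your_index
| eval duration_s=round(duration/1000, 2)
| table _time host source sourcetype duration_s

If you aggregate, carry the default fields through the by clause, for example | stats count by host sourcetype.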
I have a query to find the response code and count versus time (in 1-minute intervals):

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count by _time responseCode

The result shows the response code and count for each minute. But I only need the minutes that contain a 403 response code (along with the other response codes in that minute), and I want to skip the minutes that don't have a 403. Suppose that during time1 there are only events with response code 200; I don't need that in my result. But during time2 there are events with response codes 200 and 403, so I need those in the result as time, response code, count.
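One possible way to do this (a sketch built on the query above) is to keep the per-minute stats rows and use eventstats to flag the minutes that contain a 403:

index=sample_index path=*/sample_path*
| bucket _time span=1m
| stats count by _time responseCode
| eventstats count(eval(responseCode="403")) as has_403 by _time
| where has_403 > 0
| fields - has_403

eventstats adds the per-minute 403 flag without collapsing the rows, so the where clause can drop every minute that never saw a 403 while keeping all response codes for the minutes that did.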
Hi, we have a two-site, six-indexer cluster (three indexers per site) and we are upgrading the CPUs; each site will be offline for about 3 hours. Do I need to do anything in Splunk beforehand, or should I run ./splunk offline on the three indexers at a time before they are shut down? Thanks
Hi there: I have the following query:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| eval r2_posted_timestamp=posted_timestamp
| table event_id process_id msg_timestamp r2_posted_timestamp lag_in_seconds

This query can return multiple events with the same event_id and process_id but different posted_timestamp values. I need to return only the event with the earliest/oldest posted_timestamp (one of the fields in the event). How can I change the query above to accomplish this? Thanks!
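One possible way (a sketch built on the query above, assuming posted_timestamp parses cleanly with strptime) is to sort by the parsed time and dedup on the ID pair so only the earliest event per event_id/process_id survives:

source=accountCalc type=acct.change msg="consumed" event_id="*" process_id="*" posted_timestamp="*" msg_timestamp="*"
| eval e1_t=strptime(posted_timestamp, "%FT%T")
| eval e2_t=strptime(msg_timestamp, "%FT%T")
| eval lag_in_seconds=e1_t-e2_t
| sort 0 +e1_t
| dedup event_id process_id
| table event_id process_id msg_timestamp posted_timestamp lag_in_seconds

Alternatively, | stats min(e1_t) as earliest_posted by event_id process_id works if you only need the earliest timestamp itself.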
Hello Splunkers, I have an issue with UF file monitoring where the input is not being monitored and no events are forwarded to Splunk. I do not have access to the server to run btool.

[monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*]
sourcetype = banana
_meta = Appid::APP-1234 DataClassification::Unclassified
index = test
disabled = 0
crcSalt = <SOURCE>
ignoreOlderThan = 7d

The host(s) are sending _internal logs to Splunk. Here is what I see in splunkd.log (no errors); I also tried a wildcard (*) at the end of the monitor stanza after the /output directory, but it didn't work:

TailingProcessor [MainTailingThread] - Parsing configuration stanza: monitor:///opt/BA/forceAutomation/workuser.ABD/output/event_circuit.ABD.*

Actual log file:
-rw-r--r-- 1 automat autouser 6184 Oct 8 00:00 event_circuit.ABD.11082023
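Without btool access, the forwarder's own _internal logs are usually the quickest way to see what the tailing processor thinks of that file, for example whether it was skipped by ignoreOlderThan (which could matter here if the file's modification time is more than 7 days old). A diagnostic sketch, with the forwarder host name as a placeholder:

index=_internal sourcetype=splunkd host=<your_forwarder_host>
    (component=TailingProcessor OR component=TailReader OR component=WatchedFile) "event_circuit"
| table _time host log_level component _raw

Messages there about the file being ignored, unreadable, or matching an already-seen checksum usually point directly at the setting that needs adjusting.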
Good day everyone. Has anyone here had experience extracting values from JSON? I have _raw events in JSON format, and I am trying to build a table that shows, in a single row, the values of the object whose array contains the most entries. Let me explain with an example. This is the JSON that comes in each event:

{
  "investigationStatus": "New",
  "status": 1,
  "priorityScore": 38,
  "workbenchName": "PSEXEC Execution By Process",
  "workbenchId": "WB-18286-20231106-00005",
  "severity": "low",
  "caseId": null,
  "detail": {
    "schemaVersion": "1.14",
    "alertProvider": "SAE",
    "description": "PSEXEC execution to start remote process",
    "impactScope": [
      { "entityValue": { "name": "SERVER01" }, "relatedIndicators": [ 2 ] },
      { "entityValue": { "name": "SERVER02" }, "relatedIndicators": [ 2, 3 ] },
      { "entityValue": { "name": "SERVER03" }, "relatedIndicators": [ 1, 2, 3, 4 ] },
      { "entityValue": { "name": "SERVER04" }, "relatedIndicators": [ 1 ] }
    ]
  }
}

And this is the table I'm trying to get:

workbenchId                WB-18286-20231106-00005
workbenchName              PSEXEC Execution By Process
severity                   low
name_host                  SERVER03

As you can see, the row contains the top-level values of the JSON plus the host name SERVER03, because that entry has the largest number of values in its "relatedIndicators" array (1 through 4); the other servers have fewer values in the array. Any idea how I could achieve this? I tried with json_extract but didn't succeed.
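One possible approach (a sketch with placeholder index/sourcetype names, assuming the top-level keys such as workbenchId are already extracted from the JSON): expand impactScope, count each relatedIndicators array with mvcount, and keep the entry with the largest count per workbench.

index=your_index sourcetype=your_sourcetype
| spath output=impact path=detail.impactScope{}
| mvexpand impact
| spath input=impact output=name_host path=entityValue.name
| spath input=impact output=indicators path=relatedIndicators{}
| eval indicator_count=mvcount(indicators)
| sort 0 -indicator_count
| dedup workbenchId
| table workbenchId workbenchName severity name_host

dedup keeps the first row it sees per workbenchId, which after the descending sort is the entity with the most related indicators.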
Hi, I have deployed a search head cluster with 3 members and one deployer. The Splunk documentation recommends running a third-party hardware or software load balancer in front of the clustered search heads. Does Splunk recommend any particular load balancer as the most compatible?
Hi, I am not able to log in to any of the servers (CM, SH, and more). When I enter the username and password, it shows "Login Failed". What could be the reason, and how can I troubleshoot this from the backend?
Hi all, a customer asked me if it's possible to show an alias instead of the hostname in the Monitoring Console dashboards. I know that it's easy to do this in normal Splunk searches, but what about the Monitoring Console dashboards (e.g. Summary, Overview or Instances)? Ciao. Giuseppe
Can I run the AppDynamics PHP agent on an Alpine Docker image?
I can see logs from a Cisco ASA firewall in Splunk, and we are getting logs when a connection closes. They include the total data sent in bytes.

Nov 1 12:19:48 ASA-FW-01 : %ASA-6-302014: Teardown TCP connection 4043630532 for INSIDE-339:192.168.42.10/37308 to OUTSIDE-340:192.168.36.26/8080 duration 0:00:00 bytes 6398 TCP FINs from INSIDE-VLAN339

I am unable to see bytes as a valid field, so I tried to create a new field extraction for it:

^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)

But when I use it in a search, it fails:

index=asa_* src_ip = "192.168.42.10" | rex field=_raw DATA=0 "^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)"

OBJECTIVE: Calculate server throughput for flows using Cisco ASA logs, so I can view the network throughput for those flows in Splunk.
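A possible extraction for that teardown message (a sketch only, anchored on the literal "bytes" keyword rather than counting colons) plus a rough per-minute throughput calculation:

index=asa_* "%ASA-6-302014" src_ip="192.168.42.10"
| rex "duration\s+\S+\s+bytes\s+(?<bytes>\d+)"
| timechart span=1m sum(bytes) as total_bytes
| eval throughput_Bps=round(total_bytes/60, 2)

One likely reason the original search errors is that rex does not accept a DATA=0 argument, and the \w+ in the original pattern captures the word after the byte count rather than the number itself.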
Hi there: I have the following makeresults query:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| eval time_1="2023-11-09T21:33:05Z"
| eval time_2="2023-11-09T21:40:05Z"

I want to create three different events where the values of time_1 and time_2 are different for each event. How can I do that? Thanks!
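One simple pattern (a sketch; the second and third timestamps are just example values) is to number the rows with streamstats and then assign per-row values with case():

| makeresults count=3
| streamstats count as row
| eval source="abc", msg="consumed"
| eval time_1=case(row=1, "2023-11-09T21:33:05Z", row=2, "2023-11-10T08:15:00Z", row=3, "2023-11-11T12:00:30Z")
| eval time_2=case(row=1, "2023-11-09T21:40:05Z", row=2, "2023-11-10T08:25:00Z", row=3, "2023-11-11T12:10:30Z")
| fields - row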
Hello, how do I filter out a row when certain fields are all empty, but keep the row if at least one of those fields has a value? I appreciate your help, thank you.

I want to filter out a row if the vuln, score, and company fields are all empty/NULL (all three fields are empty in rows 2 and 6 in the table below). If vuln OR company has a value (is NOT empty), do not filter the row:
Row 4: vuln=empty, company=company D (NOT empty)
Row 9: vuln=vuln9 (NOT empty), company=empty

If I use the search below, it also filters out rows where only some of the fields are empty (rows 4 and 9):

index=testindex vuln=* AND score=* AND company=*

Current data:
no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
2   1.1.1.2
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
6   1.1.1.6
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J

Expected result (rows 2 and 6 filtered out, all others kept):
no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J
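A possible filter (a sketch assuming the three fields exist on each row but may be null or empty strings): normalize empties and keep a row only when at least one of the three fields has a value.

index=testindex
| fillnull value="" vuln score company
| where vuln!="" OR score!="" OR company!=""

Rows 2 and 6 fail all three tests and are dropped, while rows 4 and 9 survive because company or vuln is populated.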
How long does it take for the testing company to get back to me so I can take the certification test?
Looking for help removing outliers (values greater than the 90th-percentile response). For example:

Response Time
1 second
2 seconds
3 seconds
4 seconds
5 seconds
6 seconds
7 seconds
8 seconds
9 seconds
10 seconds

The 90th percentile of the above values is 9 seconds. I want to remove the outlier (10 seconds) and get the average response for the remaining values. My expected average response after removing the outlier is 5 seconds.

My query is:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=User_Action_Name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval User_Action_Name=substr(User_Action_Name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_Response" by Proper_User_Action
| stats count(pp_user_action_response) As "Total_Calls", perc90(pp_user_action_response) AS "Perc90_Response" by User_Action_Name Avg_Response
| eval Perc90_Response=round(Perc90_Response,0)/1000
| eval Avg_Response=round(Avg_Response,0)/1000
| table Proper_User_Action, Total_Calls, Perc90_Response
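One way to drop the above-90th-percentile responses before averaging (a sketch that restructures the pipeline slightly: it expands userActions with mvexpand instead of stats count by user_actions, computes the per-action 90th percentile with eventstats, filters, then averages):

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| mvexpand user_actions
| spath input=user_actions output=pp_user_action_application path=application
| where pp_user_action_application="******"
| spath input=user_actions output=User_Action_Name path=name
| spath input=user_actions output=pp_user_action_response path=visuallyCompleteTime
| eval User_Action_Name=substr(User_Action_Name,0,150)
| eventstats perc90(pp_user_action_response) as Perc90_Response by User_Action_Name
| where pp_user_action_response <= Perc90_Response
| stats count as Total_Calls avg(pp_user_action_response) as Avg_Response by User_Action_Name
| eval Avg_Response=round(Avg_Response,0)/1000
| table User_Action_Name Total_Calls Avg_Response

Whether "greater than the 90th percentile" should be a strict or inclusive cutoff is a judgment call; adjust the where clause accordingly.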
NOVEMBER 2023

Enhance Security Visibility and Simplify Investigations for Faster Threat Response with Splunk Enterprise Security
In the face of an ever-increasing volume of cyberattacks, and a limited security workforce to combat those attacks, a best-in-class SIEM can enhance security visibility and simplify investigations for faster threat response. Splunk Enterprise Security delivers enhanced security visibility with Splunk Enterprise Security 7.2, and helps SOCs simplify security investigations with risk-based alerting and Splunk Enterprise Security's unified workflow experience, Mission Control. Read the latest blog across Splunk Security, Observability, and Platform innovations to learn more about how Splunk Enterprise Security is changing the game for SOCs around the world.

Security Content from the Splunk Threat Research Team
The Splunk Threat Research Team has had two releases of security content in the last month, which provide 22 new detections, 6 new analytic stories and 3 updated analytic stories. Read the Product News & Announcements post to learn more and check out the latest blogs to help you stay ahead of threats:
More Than Just a RAT: Unveiling NjRAT's MBR Wiping Capabilities
Detect WS_FTP Server Exploitation with Splunk Attack Range

Introducing Splunk Add-On for Splunk Attack Analyzer and Splunk App for Splunk Attack Analyzer
Following the announcement of Splunk Attack Analyzer at .conf23, we are excited to announce the launch of the Splunk Add-on for Splunk Attack Analyzer and Splunk App for Splunk Attack Analyzer. These apps work together to ingest data from Splunk Attack Analyzer into the Splunk platform and provide out of the box dashboards to give security leaders insight into solution submission trends, patterns in threat volume trends, and phish kit and malware family trends. Learn more in the blog.

The Latest from SURGe
The SURGe security research team has updated their macro-level ATT&CK trending insights for 2023. Check out the latest podcasts on The Security Detail:
Episode 8: The Technology Sector with Sean Heide
Episode 9: Education with Brett Callow
Episode 10: Aviation with Richard Waine
Explore the latest Coffee Talk with SURGe interviews featuring:
Michael Rodriguez, Principal Strategic Consultant for Google Public Sector
Patrick Gray, host of the Risky Biz podcast
Sherrod DeGrippo, Director of Threat Intelligence Strategy at Microsoft
Jamie Williams, MITRE ATT&CK for Enterprise Lead and Principal Adversary Emulation Engineer

Infosec Multicloud App for Splunk
The Infosec App for Splunk is designed to address the most common security use cases, including continuous monitoring and security investigations. The new Infosec Multicloud App for Splunk is designed by our field team to help customers that have a cloud environment. In addition to views of security posture across cloud providers, the app includes a billing dashboard for a high level overview of costs spread across your various cloud providers. Read the blog to learn more details and the steps needed to install and configure the Infosec Multicloud App for Splunk.

The Great Resilience Quest continues at full momentum
The Great Resilience Quest continues to welcome challengers until the end of January 2024. This gamified adventure teaches you how to implement key Splunk use cases on the path to digital resilience. Conquer each level by completing bite-sized learning activities and quizzes. With amazing prizes still up for grabs, every moment counts. Join the quest today!
Platform Updates

Build Digital Resilience Through Expanded Access to Decentralized Data
In his recent blog, Tom Casey, SVP Products & Technology for Splunk, discusses several recent Splunk Platform innovations enabling customers to build digital resilience through expanded access to decentralized data, enabling better understanding of customer-facing issues regardless of whether the data sits in Splunk or cost-effective Amazon S3 storage, and facilitating compliance with data sovereignty requirements.

Build Scalable Security While Moving to Cloud
Now available as an on-demand webinar, hear from Clayton Homes on how to build scalable security while moving to the cloud successfully and efficiently with Splunk. By deploying Splunk Enterprise Security, a data-centric modern security information and event management (SIEM) solution in the cloud, Clayton Homes was able to detect and respond to threats quickly while gaining end-to-end visibility across their IT environment with Splunk Cloud Platform (SaaS solution).

Model Assisted Threat Hunting Powered by PEAK & Splunk AI
Accelerate threat hunting with Splunk AI as a catalyst. Join us to learn how to leverage the PEAK threat hunting framework and Splunk AI to find malware dictionary-DGA domains. Learn the basics of the PEAK threat hunting framework developed by Splunk's SURGe security research team, understand the power Splunk AI can bring to your threat hunts, and see how to create automated detections from your successful hunts.

Splunk App for Data Science and Deep Learning - What's New in Version 5.1.1
In the ever-evolving world of data science, keeping your tools and software up to date is essential. This ensures that you have access to the latest features, security updates and bug fixes. The team behind our data science app has been hard at work to bring you the most robust and secure version yet. Explore our recent blog to dive into what's new in the recently released Splunk App for Data Science and Deep Learning (DSDL) version 5.1.1, available on Splunkbase.

Machine Learning in General, Trade Settlement in Particular
The recent T+1 compliance directive, which mandates that all USA trades starting in May 2024 be settled in at most one day, is the driving force behind wanting to provide resilience to the trade settlement process. Explore this hands-on blog on using Splunk Machine Learning Toolkit to predict whether a trade settlement in the financial services industry will fail to be completed.

Tech Talks, Office Hours and Lantern

Tech Talks
Advance Your App Development with the Visual Studio Code Extension
Register Now and join us on Wednesday, November 15, 2023. See the latest on the Visual Studio Code Extension for Splunk SOAR and how you can make developing apps a breeze. ICYMI: What's New in Splunk SOAR 6.2? Watch the Replay

Streaming Lookups with Splunk Edge Processor
Register Now and join us on Thursday, November 16, 2023 to learn how best to leverage lookups to optimize costs and maintain data fidelity, explore use cases for this capability that drive business outcomes, and review other ways to optimize your data management strategy using Edge Processor.

Community Office Hours
Join our upcoming Community Office Hour sessions, where you can ask questions and get guidance.
Security: SOAR - Wed, Nov 29 (Register here)
Splunk Search - Wed, Dec 13 (Register here)

Splunk Lantern
In this month's blog we're highlighting some great new updates to our Getting Started Guide for Enterprise Security (ES) that provide you with easy ways to get going on this powerful platform, as well as new data articles for MS Teams. As usual, we're also sharing the rest of the new articles we've published this month. Read on to see what's new.

Education Corner
A Steady Drumbeat of New and Updated Splunk Training
Can you hear it? That's the sound of new Splunk Education courses dropping on a regular basis! You can always search the Splunk Training and Enablement Platform (STEP) for courses that align with your observability learning journey, or check out our October Release Announcements. And don't forget to check in with your Org Manager if you're looking to enroll in paid training using your company's Training Units. Get curious about what's possible with Splunk.
NOVEMBER 2023

OpenTelemetry Insights Page Now Provides Visibility to Your GCP and Azure Hosts
Splunk Observability customers with GCP and Azure integrations now have access to the OpenTelemetry insights view within the UI. The OpenTelemetry insights page, accessed via the Data Management module, gives you a complete view of your host inventory, including a full list of instances, the deployment status of the OpenTelemetry Collector for each instance, and the version. This view is already available for AWS EC2 hosts.

Discover Our New and Improved Time Picker!
250,000 - that's the number of times the Time Picker component within Observability Cloud is clicked each month. A critical feature in the investigation journey, Time Picker allows engineers to easily and quickly determine the time frame of their analyses. For this reason, we're launching a new version that not only fixes bugs but also includes new functionality such as type-ahead behavior and additional timestamp formats that enhance developer experience and accelerate workflows. Find out more here!

Integrating REST Endpoints with Splunk On-Call
Available now in the Observability Use Case Explorer! A brand new use case for Splunk On-Call with a focus on sending customized alerts and incident details from your proprietary and open-source monitoring tools into the Splunk On-Call timeline. Read all about it here!

NOW AVAILABLE: Unified Identity Enhancements
Get more control over which Splunk Cloud users can access Observability Cloud! We're introducing a new custom "o11y_access" role for admins to restrict who can become an Observability Cloud user and enjoy the Unified Identity/SSO capability. Check out our updated docs for more details.

ICYMI: Check Out the Latest Observability Blog Posts
Announcing the General Availability of Splunk RUM Session Replay
Why Does Observability Need OTEL?

The Great Resilience Quest continues at full momentum
The Great Resilience Quest continues to welcome challengers until the end of January 2024. This gamified adventure teaches you how to implement key Splunk use cases on the path to digital resilience. Conquer each level by completing bite-sized learning activities and quizzes. With amazing prizes still up for grabs, every moment counts. Join the quest today!

Platform Updates

Build Digital Resilience Through Expanded Access to Decentralized Data
In his recent blog, Tom Casey, SVP Products & Technology for Splunk, discusses several recent Splunk Platform innovations enabling customers to build digital resilience through expanded access to decentralized data, enabling better understanding of customer-facing issues regardless of whether the data sits in Splunk or cost-effective Amazon S3 storage, and facilitating compliance with data sovereignty requirements.

Explore the New Log Analytics for IT Troubleshooting Splunk Use Case Page
Splunk Observability Log Analytics for IT Troubleshooting allows customers to get comprehensive visibility, at scale, with Splunk Platform. Accelerate innovation and IT troubleshooting in complex hybrid environments. Explore the use case here.

Splunk Observability – The Latest Innovations To Perfect UX
Splunk helps you prioritize the right issues and make faster and better decisions through proactive and smarter alerting, richer data, and simpler workflows. Join this webinar for a first look at new features that can help you quickly resolve customer-facing issues to deliver great user experiences (UX), featuring product demonstrations of Session Replay, Edge Processor, OpenTelemetry, and Federated Search for Amazon S3.

Splunk App for Data Science and Deep Learning - What's New in Version 5.1.1
In the ever-evolving world of data science, keeping your tools and software up to date is essential. This ensures that you have access to the latest features, security updates and bug fixes. The team behind our data science app has been hard at work to bring you the most robust and secure version yet. Explore our recent blog to dive into what's new in the recently released Splunk App for Data Science and Deep Learning (DSDL) version 5.1.1, available on Splunkbase.

Machine Learning in General, Trade Settlement in Particular
The recent T+1 compliance directive, which mandates that all USA trades starting in May 2024 be settled in at most one day, is the driving force behind wanting to provide resilience to the trade settlement process. Explore this hands-on blog on using Splunk Machine Learning Toolkit to predict whether a trade settlement in the financial services industry will fail to be completed.

Tech Talks, Office Hours and Lantern

Tech Talks
OpenTelemetry: What's Next. Logs, Profiles, and More
Register Now and join us on Tuesday, November 14, 2023. You'll learn about OpenTelemetry's new logging functionality, including its two logging paths, the benefits of each, real-world production examples and so much more! ICYMI: Starting With Observability: OpenTelemetry Best Practices. Watch the Replay

Community Office Hours
Join our upcoming Community Office Hour sessions, where you can ask questions and get guidance.
Security: SOAR - Wed, Nov 29 (Register here)
Splunk Search - Wed, Dec 13 (Register here)

Splunk Lantern
In this month's blog we're highlighting everything that's new on Lantern this month, with new data articles for MS Teams as well as brand new use cases, product tips and data descriptors. Read on to see what's new.

Education Corner
A Steady Drumbeat of New and Updated Splunk Training
Can you hear it? That's the sound of new Splunk Education courses dropping on a regular basis! You can always search the Splunk Training and Enablement Platform (STEP) for courses that align with your observability learning journey, or check out our October Release Announcements. And don't forget to check in with your Org Manager if you're looking to enroll in paid training using your company's Training Units. Get curious about what's possible with Splunk.

Hola! Say Hello to Our Translated Content
It's a big world out there, with 8 billion people and about 7,000 languages spoken. Splunk Education is determined to get closer to as many of these people as possible by publishing training and certification in more diverse languages. We are pleased to share that we now offer free, self-paced eLearning courses with Spanish captions. Watch for more translated content and captions coming soon. Mucho gusto!

Talk with Us about Splunk!
The Splunk product design team wants to learn about how you use our products. If you're interested in contributing, please fill out this quick questionnaire so we can reach out to you. This may take the form of a survey, an email to schedule an interview session, or some other type of research invitation. We look forward to hearing from you!

Until Next Time, Happy Splunking