All Topics
I understand that maxTotalDataSizeMB takes precedence over frozenTimePeriodInSecs. What happens if frozenTimePeriodInSecs is defined and maxTotalDataSizeMB is not? The Splunk docs don't cover this specific circumstance, and I haven't been able to find anything else about it. I have a requirement to keep all department security logs for 5 years regardless of how big the indexes get; they need to be deleted at 5.1 years. My predecessor set it up so that frozenTimePeriodInSecs = (5.1 years in seconds) and maxTotalDataSizeMB = 1000000000 (roughly 1000 TB) so that size would not affect retention, but now nothing will delete and we're retaining logs from 8 years ago. If I comment out maxTotalDataSizeMB, will frozenTimePeriodInSecs take precedence, or will the default maxTotalDataSizeMB setting take over? My indexes are roughly 7 TB, so the 500 GB default would wipe out a bunch of stuff I need to keep. In my lab environment, I commented out maxTotalDataSizeMB and set frozenTimePeriodInSecs to 6 months, but I still have logs from 2 years ago. Unfortunately, my lab environment doesn't have enough archived data to test the default cutoff. Thanks!
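For reference, a minimal indexes.conf sketch of the intent described above: time-based retention with a size cap high enough that it never triggers first. The stanza name and cap value are hypothetical; note that if maxTotalDataSizeMB is omitted entirely, the default of 500000 MB (~500 GB) applies rather than "unlimited".

[dept_security]
# 5.1 years = 5.1 * 365 * 86400 = 160,833,600 seconds; a bucket rolls to frozen
# (deleted, unless coldToFrozenDir/coldToFrozenScript is set) once its newest event is older than this
frozenTimePeriodInSecs = 160833600
# keep an explicit cap well above expected index size so time, not size, drives freezing;
# commenting this out falls back to the 500000 MB default, not to "no limit"
maxTotalDataSizeMB = 20000000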
I have a question. We have a standalone Splunk instance in AWS running version 7.2.3 and are looking to upgrade it to 9.3.0. I see that to get to that version I will have to do about 4 upgrades. Also, since our current instance is running on Red Hat 6.4, I would have to upgrade that as well to be able to run the current version. What I am curious about is this: AWS has a Splunk 9.3.0 AMI with BYOL. Would it be possible to migrate the data over to the new instance along with the configuration settings? This is used as a customer lab, so we only have about a dozen universal forwarders pointing to this server. There are no alerts running on it and only 3 dashboards. The Splunk home is stored on a separate volume from the OS, so I could detach it from the old instance and attach it to the new one, or snapshot it and use the snapshot on the new one. Any suggestions for this? Thanks.
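If the volume route turns out to be viable, the snapshot approach might look roughly like the sketch below. Be aware that Splunk's documented upgrade path from 7.2.x goes through intermediate versions, so attaching 7.2.3 data directly to a 9.3.0 instance should be tested on a copy first; all IDs here are placeholders.

# on the old instance, stop Splunk cleanly before snapshotting
/opt/splunk/bin/splunk stop
# snapshot the $SPLUNK_HOME volume, create a volume from it, and attach it to the new AMI instance
aws ec2 create-snapshot --volume-id vol-0123EXAMPLE --description "splunk-home pre-migration"
aws ec2 create-volume --snapshot-id snap-0456EXAMPLE --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0789EXAMPLE --instance-id i-0abcEXAMPLE --device /dev/xvdf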
Hello Splunk ES experts, I want to make a query which will produce MTTD, by analyzing the time difference between when a raw log event is ingested (and meets the condition of a correlation search) and when a notable event is generated by that correlation search. I have tried the search below, but it does not give me the results I am expecting because it is not calculating the time difference for those notables which are in New status; it works fine for any other status. Can someone please help me with this? Maybe it is simple to achieve and I am making it complex.

index=notable
| eval orig_epoch=if(NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time')
| eval event_epoch_standardized=orig_epoch, diff_seconds='_time'-'event_epoch_standardized'
| fields + _time, search_name, diff_seconds
| stats count as notable_count, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name
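One possibility, assuming the New-status notables carry an orig_time in a format the strptime() pattern doesn't match (or no orig_time at all), is to make the conversion defensive and keep only rows where a timestamp could actually be recovered. A sketch, not a verified fix:

index=notable
| eval orig_epoch=case(isnum(orig_time), 'orig_time', isnotnull(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"))
| eval diff_seconds='_time'-'orig_epoch'
| where isnotnull(diff_seconds)
| stats count as notable_count, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name

Comparing notable_count per search_name against a plain count of all notables should show whether the New-status events are being dropped at the timestamp-conversion step.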
Watch On-Demand

Join Splunk's Growth Engineering team in their third Tech Talk as they discuss their adoption of Splunk Synthetic Monitoring to gain visibility across website pages, track core web vitals, and evaluate API performance across both traditional monolithic and multi-cloud environments. In this session, we will:

Learn the key features necessary for effective website and page monitoring.
Assess performance across complex, multi-layered cloud application stacks.
Simulate end-user experiences to detect, alert on, and prioritize issues.
Identify and address issues in critical paths.

See Splunk Synthetic Monitoring in action, showcasing:

200% improvement in page load times.
99.999% site reliability.
50% boost in engineering team efficiency.
Enhanced search engine performance: +231% CTR, +59% impressions, and +136% clicks.
We updated the Sysmon add-on from 3.x to 4.0.1 (latest) on a search head cluster. Afterwards, we're getting errors saying that both the node we're on and the indexers can't load a lookup (Could not load lookup=LOOKUP-record_type).
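A quick way to see whether the lookup definition the error refers to still exists anywhere (for example, left behind under the old app version, or missing from the bundle pushed to the indexers); the grep pattern is just a guess based on the error text:

$SPLUNK_HOME/bin/splunk btool transforms list --debug | grep -i record_type
$SPLUNK_HOME/bin/splunk btool props list --debug | grep -i record_type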
Hi, I need to do observability on different web applications on Windows workstations. For example, I need to measure response time or error codes of the web app. Is it possible to collect these metrics in Splunk? How? With Splunk APM? Website monitoring? Another question: how do I collect events from the Windows Event Viewer? Thanks.
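For the second question, Windows Event Log collection is typically done with a universal forwarder on the workstation. A minimal inputs.conf sketch; the index name here is an assumption and should match an index that actually exists in your environment:

[WinEventLog://Application]
disabled = 0
index = wineventlog

[WinEventLog://System]
disabled = 0
index = wineventlog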
Let's say I have the following SPL query. Ignore the regexes, they're not important for the example:

index=abc
| rex field=MESSAGE "aaa(?<FIELD1>bbb)"
| rex field=MESSAGE "ccc(?<FIELD2>ddd)"
| stats count by FIELD1, FIELD2

Right now, the query doesn't return a result unless both fields match, but I still want to return a result if only one field matches. I just want to return an empty string in the field that doesn't match. Is there a way to do this? Thanks!
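One common approach: stats ... by drops rows where any group-by field is null, so filling the missing fields with an empty string before the stats should keep the partial matches. A minimal sketch:

index=abc
| rex field=MESSAGE "aaa(?<FIELD1>bbb)"
| rex field=MESSAGE "ccc(?<FIELD2>ddd)"
| fillnull value="" FIELD1 FIELD2
| stats count by FIELD1, FIELD2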
We wanted to design a simplified and actionable navigation menu for new and returning community members alike. Check out some of the updates we've made!

"Join the Community" is a new link in our nav bar to make it easier for new members to get started and jump into all that community goodness.
"Splunk Answers" has been updated to "Find Answers" and is populated with the most-visited boards, covering major Splunk products and features.
"News & Education" is where you'll find product announcements, the Splunk Community Blog, and all things Splunk Education.
"Events" highlights Tech Talks, Office Hours, and User Groups -- all excellent places to connect with Splunk experts around the world, both live and virtually.
"Apps & Add-ons" brings discussion boards for Splunk Developers under one roof.
And finally, "Resources" is where you'll find links out to our sister websites to continue your learning journeys.

Happy exploring, Splunk Community!
Splunk Lantern is a Splunk customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk.

This month, we're sharing all the details on an interesting new article on how to instrument LLMs with Splunk, a bunch of new Kubernetes articles, and a new Getting Started Guide for Splunk Asset and Risk Intelligence. We've also published lots of brand-new use case, product tip, and data articles that we'll share at the end of this blog. Read on to find out more.

Boost LLM observability with Splunk

Many organizations have started to integrate LLM platforms like ChatGPT into their workflows, leveraging generative AI capabilities to improve productivity for their employees and customers. But how can LLM applications be made observable? In our new article Instrumenting LLM applications with OpenLLMetry and Splunk you'll find a step-by-step guide that demonstrates how OpenTelemetry can be used to view LLM data in Splunk Observability Cloud. If you like this article, you might also be interested to see another ChatGPT article we published recently, Monitoring applications using OpenAI API and GPT models with OpenTelemetry and Splunk APM.

Mastering Kubernetes and Splunk

Some of the most popular articles on Splunk Lantern cover how best to integrate Kubernetes with the Splunk platform, so we're happy to share a number of new articles on this topic that we've published throughout August.

Detecting and resolving issues in a Kubernetes environment shows you how to implement a scalable observability solution that provides an overview of Kubernetes architecture, highlighting real-time issues and allowing you to act fast and mitigate impact.
Enabling access between Kubernetes indexer clusters and external search heads teaches you how to use the Splunk Operator for Kubernetes to ensure continued communication between Splunk indexer clusters running on Kubernetes and search heads that are external to the Kubernetes environment.
Improving hardware utilization by moving indexers into Kubernetes explains how Kubernetes and the Splunk Operator for Kubernetes can improve utilization of hardware by running multiple indexers (or K8s pods) on each bare metal server.
Using Kubernetes Horizontal Pod Autoscaling demonstrates how you can use autoscaling to increase the capacity of your Kubernetes environment to match application resource demands with minimal manual intervention.
Finally, Understanding how to use the Splunk Operator for Kubernetes introduces you to how you can use the Splunk Operator for Kubernetes to simplify getting Splunk indexer clusters, search head clusters, and standalone instances running within Kubernetes.

What other Kubernetes-related articles would you like to see us tackle next? Let us know in the comments below!

Getting Started with Splunk Asset and Risk Intelligence

If you struggle with asset discovery, risk management, or maintaining compliance, our new Getting Started Guide on Splunk Asset and Risk Intelligence (ARI) can help you learn how to use this powerful new product to streamline these processes with ease.
Splunk ARI provides a comprehensive, continuously updated asset inventory by leveraging rich data from the Splunk platform to accurately discover and monitor all assets and identities - including endpoints, servers, users, cloud resources, and OT/IoT devices. It enhances your investigative processes by reducing the time spent pivoting between systems, offering accurate asset and identity context that speeds up investigations and identifies compliance gaps to reduce risk exposure. Like all of our Security Getting Started Guides, this new guide is split into easy-to-navigate steps that walk you through how to prepare for, install, and use ARI. Check out the guide today, and please let us know your feedback in the comments!

This Month's New Articles

Here's everything else we've published over the month:

Using file system destinations with file system as a buffer
Improving Splunk platform searches with the foreach command
Using scheduled export in Dashboard Studio
Benchmarking filesystem performance on Linux-based indexers
Deleting data from an index
Managing time ranges in your searches
Monitoring security events with Enterprise Security and Microsoft Copilot for Security
Improving Splunk platform searches with bitwise operators
Using Federated Search for Amazon S3 (FS-S3) with Edge Processor
Configuring file system destinations with ingest actions
Installing an existing certificate on a new Splunk Enterprise installation
Renewing a certificate on a new Splunk Enterprise installation
Using caution when cascading service health scores upwards
Improving Smart Mode usage in ITSI
Pushing alerts to the Splunk platform and ITSI

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
What's up Splunk Community! Welcome to the inaugural edition of the Observability Round-Up, a monthly series in which we will spotlight the latest and greatest content that our crack team of experts has thoughtfully crafted for you. Whether you were channeling your inner Simone Biles, vacationing on a yacht in the Greek isles (if not in reality, at least in your mind!), or planting your face in front of the A/C for hours on end, we're here to catch you up on what you may have missed.

New team, fresh content - meet the Developer Evangelists

One amazing Splunk "develop"ment this summer (yuk, yuk, yuk) was the creation of a brand new team of Developer enthusiasts - Moss Normand, @CaitlinHalla, and Mike Simon, led by our Evangelist-in-Chief, @gleffler. They have already begun pumping out a ton of great "how-to" material, and there are two pages that you'll want to bookmark immediately:

"Splunk Observability for Engineers" YouTube Playlist: All of the team's new video content will live here, and Moss has already added 7 videos over the past month.
Caitlin's Community Blog Posts: You can find the full collection of Caitlin's work here, in which she goes deep on a new topic every week.

We also want to shine a light on some of their best work thus far to give you a taste of what you can look forward to!

Kubernetes Monitoring and Troubleshooting: Learn about common failures in a Kubernetes (K8s) environment, how to detect and resolve issues in a K8s environment (using AutoDetect Detectors), or watch a video walkthrough of both topics.
Integrating Kubernetes and Observability Cloud: See how much easier it is to do this with Helm! Check out our blog post and companion video on our happy path to get started with K8s and Splunk Observability Cloud.
Kubernetes Horizontal Pod Autoscaling: Ever wondered how to scale a K8s deployment? One way is the horizontal pod autoscaler. Learn about how this works and why you'd use it in our blog post and video - and learn how to monitor autoscaling in Splunk Observability Cloud, of course!

August Tech Talks and Office Hours

For those who may not know, we host two types of recurring virtual events for customers: Tech Talks, which are technical deep dives on a particular topic, and Community Office Hours, which are "Ask the Expert" type sessions to address specific customer questions. You can always find them on the handy, dandy "Events" section of the navigation bar right above us! Here are the recap materials from the past month.

Community Office Hours
Splunk Observability Cloud + Splunk Platform Integrations: Recording (starts at 10:15 mark) | Slide Deck
Optimize Your Cloud Monitoring: Recording | Slide Deck

Tech Talks
Optimize Cloud Monitoring
Troubleshooting the Otel Collector

.conf24 Sessions Now Available to All On-Demand!

Lastly, we are pleased to report that all of the breakout sessions from .conf24 are now available online for everyone! All you need to do to access the whole catalog is to log in to (or create) your account. From there, you can filter by year, topic, skill level, and more to find the sessions that you are most interested in. If you want to sharpen your expertise with our products, there is NO greater collection of content to help you do so. We had over 40 breakout sessions in this year's Observability track, and here are three of the most popular and well-received to get you started:

OBS1405C - Splunk IT Service Intelligence (ITSI) - The Latest and Greatest! (Splunker-led)
OBS1599B - Observability: Going from Out of the Box to Out of the Gate (SplunkTrust-led!)
OBS1875C - Adopting OpenTelemetry at Yahoo: The Good, The Bad, and the Ugly (Customer-led)

If you have any content requests, we are always happy to hear them! Drop me a line at avirani@splunk.com. That's it for now! Signing off until next month.

Arif
I wrote a custom alert with a bash script that sends values from an SPL query to TheHive. The script creates a case in TheHive, but with empty fields.

alert_actions.conf:

[alert_to_thehive]
is_custom = 1
disabled = 0
label = Alert to TheHive
description = Custom alert action to send alerts to TheHive
icon_path = alert_icon.png
payload_format = json
ttl = 10
# Command to execute
alert.execute.cmd = alert_to_thehive.sh
# Arguments passed to the script
alert.execute.cmd.arg.1 = $result.Image$
alert.execute.cmd.arg.2 = $result.CommandLine$
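Worth checking: $result.<field>$ tokens expand from the first row of the search results, so they arrive empty if Image and CommandLine aren't present in that row (for example, if the search doesn't end with something like | table Image CommandLine ...). A hedged way to see what the script actually receives is to log its arguments:

#!/bin/bash
# debug sketch for alert_to_thehive.sh: dump the expanded token arguments to a temp log
IMAGE="$1"
CMDLINE="$2"
echo "$(date) image='${IMAGE}' commandline='${CMDLINE}'" >> "${SPLUNK_HOME:-/opt/splunk}/var/log/splunk/alert_to_thehive_debug.log"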
(index=hcp_system OR index=hcp_logging) namespace=$env_dd$
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>[^,]+),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
| eval IID=if("$interface_dd$"!="", "$interface_dd$", IID),
    STEP=if("$step_dd$"!="", "$step_dd$", STEP),
    PKEY=if(isnull("$record_id$") OR "$record_id$"="", PKEY, "*" . "$record_id$" . "*"),
    STATE=if("$state_dd$"!="", "$state_dd$", STATE),
    MSG0=if(isnull("$message_1$") OR "$message_1$"="", MSG0, "*" . "$message_1$" . "*"),
    PROPS=if(isnull("$properties$") OR "$properties$"="", PROPS, "*" . "$properties$" . "*")
| search (IID=* OR isnull(IID)) (STEP=* OR isnull(STEP)) (PKEY=* OR isnull(PKEY)) (STATE=* OR isnull(STATE)) (MSG0=* OR isnull(MSG0)) (PROPS=* OR isnull(PROPS))
| table IID STEP PKEY STATE MSG0 PROPS

How do I make the table show the values selected in the dropdowns (DD), and, when a text-field (TF) input (PKEY, MSG0, and PROPS in my case) is empty, show what the rex PKEY:\s*(?P<PKEY>[^,]+) extracts? The current behavior is the following. Inputs:

DD IID: SF
DD STEP: RECEIVE_FROM_KAFKA
DD STATE: IN_PROGRESS
TF PKEY, MSG0, and PROPS are empty

Msg1: "#HLS# IID:SF, STEP:RECEIVE_FROM_KAFKA, PKEY:456, STATE:IN_PROGRESS, MSG0:Success, PROPS:YES #HLE#"
Msg2: "#HLS# IID:SAP, STEP:SEND_TO_KAFKA, PKEY:52345345, STATE:IN_PROGRESS, MSG0:MOO, PROPS:FOO #HLE#"

Extracted table:

STEP                | PKEY     | STATE       | MSG0 | PROPS
RECEIVE_FROM_KAFKA  | 52345345 | IN_PROGRESS | MOO  | YES

To summarize: the result mixes column values from different messages when the text-field inputs are empty. How can I make it extract all messages with this log pattern and then filter them based on the DD or text fields?
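One possible cause: the eval overwrites the extracted fields with the token values on every event, so rows from different messages end up blended. A sketch of an alternative that filters instead of overwriting; it assumes the dropdown tokens are given a default value of * and the text-field tokens default to an empty string (so "*$record_id$*" degrades to "**", which matches everything):

(index=hcp_system OR index=hcp_logging) namespace=$env_dd$
| rex "#HLS#\s*IID:\s*(?P<IID>[^,]+),\s*STEP:\s*(?P<STEP>[^,]+),\s*PKEY:\s*(?P<PKEY>[^,]+),\s*STATE:\s*(?P<STATE>[^,]+),\s*MSG0:\s*(?P<MSG0>[^,]+),\s*PROPS:\s*(?P<PROPS>[^#]+)\s*#HLE#"
| search IID="$interface_dd$" STEP="$step_dd$" STATE="$state_dd$" PKEY="*$record_id$*" MSG0="*$message_1$*" PROPS="*$properties$*"
| table IID STEP PKEY STATE MSG0 PROPS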
Hi all, I don't know if anyone else has found this issue: using version 9.3.0 for the first time, I tried to customize an app menu bar. I found that if I use the app with my language (it-IT), the customization doesn't take effect; if instead I run it with the default English interface (en-US), it works correctly. Ciao. Giuseppe
Hi, I've created some scheduled Splunk reports with inline tables in the email body. We're sending these reports to a Slack channel via email, but the URLs appear as plain text in Slack, while they are hyperlinked in Gmail. Is there a workaround to ensure the URLs are clickable in Slack? Also, how do I enable hyperlinks for URLs in a report (not a dashboard)? @ITWhisperer @gcusello @PickleRick
Hello all, I'm implementing some routing at the moment in order to forward a subset of data to a third-party syslog system. However, I'm running into issues with the Windows logs. They look like this at syslog-ng:

Dec 29 07:47:18 12/29/2014 02:47:17 AM
Dec 29 07:47:18 LogName=Security
Dec 29 07:47:18 SourceName=Microsoft Windows security auditing.
Dec 29 07:47:18 EventCode=4689
Dec 29 07:47:18 EventType=0

I believe this is because of the \r\n in the Windows events, caused by the non-XML rendering. How can I get the Splunk heavy forwarder to treat each Windows event as one line and then send it through? Architecture = UF - HF - Third Party System/Splunk Cloud. Thanks
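One technique that can work here is stripping the line breaks on the heavy forwarder at parse time, so each event leaves the syslog output as a single line. A hedged props.conf sketch for the HF, assuming the events arrive with the standard WinEventLog sourcetype (adjust the stanza to whatever sourcetype you actually see):

[WinEventLog]
# collapse CR/LF inside each event into single spaces before it reaches the syslog output
SEDCMD-flatten_newlines = s/[\r\n]+/ /g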
I am currently working on creating an alert for a possible MFA fatigue attack from our Entra ID sign-in logs. The logic would be to find sign-in events where a user received x number of MFA requests within a given timeframe and denied them all, and then, on the 5th one for example, approved the MFA request, for our SOC to investigate. I have some of the logic for this written out below, but I am struggling to figure out how to add the last piece: an approved MFA request after the x number of denied MFA attempts by the same user. Has anyone had any luck creating this, and if so, how did you go about it? Any help is greatly appreciated. Thank you!

index=cloud_entraid category=SignInLogs operationName="Sign-in activity" properties.status.errorCode=500121 properties.status.additionalDetails="MFA denied; user declined the authentication"
| rename properties.* as *
| bucket span=10m _time
| stats count min(_time) as firstTime max(_time) as lastTime by user, status.additionalDetails, appDisplayName, user_agent
| where count > 4
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
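A possible direction, sketched under the assumption that a successful sign-in shows up with errorCode=0 in the same SignInLogs data (verify that against your events): pull in both denied and approved MFA outcomes, then use streamstats to count recent denials per user and flag an approval that follows several of them:

index=cloud_entraid category=SignInLogs operationName="Sign-in activity"
| rename properties.* as *
| eval mfa_outcome=case('status.errorCode'=500121 AND 'status.additionalDetails'="MFA denied; user declined the authentication", "denied", 'status.errorCode'=0, "approved")
| where isnotnull(mfa_outcome)
| sort 0 _time
| streamstats time_window=10m count(eval(mfa_outcome="denied")) as recent_denials by user
| where mfa_outcome="approved" AND recent_denials>=4
| table _time user appDisplayName user_agent recent_denials

The errorCode=0 test for a successful sign-in is an assumption, and the threshold and window should be tuned to your environment.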
My colleagues and I have not been able to access the Splunk Support Portal for days; we receive a 404 error. We have tried different links:

https://splunk.my.site.com/customer/s/
https://splunk.my.site.com/partner/s/

But none of them are working. This means we cannot access Entitlements or open and manage Cases. Is anyone having the same problem?
Hi Team, we are trying to install the Auto Update MaxMind Database app into our Splunk instance: https://splunkbase.splunk.com/app/5482 (this is the Splunk app). We have the account ID and the license key. While testing this by running the command | maxminddbupdate, we got the error below:

HTTPSConnectionPool(host='download.maxmind.com', port=443): Max retries exceeded with url: /geoip/databases/GeoLite2-City/download?suffix=tar.gz (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1106)')))
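CERTIFICATE_VERIFY_FAILED from a Python-based app often means the certificate presented to it isn't in the trust bundle the app uses, commonly because a TLS-inspecting proxy sits in the path. One hedged way to check, from the Splunk server, which issuer is actually presented:

openssl s_client -connect download.maxmind.com:443 -servername download.maxmind.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject

If the issuer is your proxy's CA rather than a public one, adding that CA certificate to the bundle the app uses (or to the OS trust store) is the usual fix.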
Hi everyone, good afternoon. We recently renamed our add-on. After renaming, we are facing the issues below:

After upgrading, we see two add-ons, one with the old name and one with the new name, but ideally after upgrading only the latest add-on should be there.
Inputs of the old add-on are not migrating to the new add-on. We replicated the app ID of the old add-on in the new add-on, but it did not work.

If anyone has faced this issue, please suggest how to resolve the problem. Thanks,
Hi there, I'm currently developing a React app and have almost finished the development. Now I need to package it as a Splunk app, but I'm stuck on the packaging process. Is there a tool similar to Splunk AppInspect that can fully inspect the React app I've created? Any documentation or blog posts on this would be really helpful. Thanks!
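AppInspect itself can be run against the packaged app once the React build output is placed inside a standard Splunk app directory (typically under appserver/static). A hedged sketch of the packaging-and-validation loop, where the app folder name my_react_app and the paths are placeholders:

# build the React bundle and copy it into the app's static directory
npm run build && cp -r build/* my_react_app/appserver/static/
# package the app directory and validate it with the AppInspect CLI
tar -czf my_react_app.tar.gz my_react_app
pip install splunk-appinspect
splunk-appinspect inspect my_react_app.tar.gz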