Hi, I am trying to install the API gateway extension. For this I have installed the machine agent independently on a server with SIM enabled. The server does not have an app agent. Then I cloned and extracted the API gateway extension from GitHub into /machineagent/monitors. After extraction I couldn't find the yml file. I have installed Java 8 on the server. The machine agent version is 24.9. Please let me know where this is wrong and whether anything additional needs to be done. Regards, Fadil
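A minimal sketch of what to check, assuming the extension follows the usual AppDynamics machine-agent extension layout, where config.yml ships in the built release artifact rather than in a raw clone of the GitHub source tree (the artifact name below is hypothetical):

# Deploy the built release zip, not a `git clone` of the source:
cd /machineagent/monitors
unzip ApiGatewayMonitor-<version>.zip   # hypothetical artifact name from the repo's Releases page
ls ApiGatewayMonitor/                   # a config.yml and monitor.xml are expected here

If only the source tree is available, such extensions typically need to be built first (for Java-based extensions, often with mvn clean install) to produce that zip.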
Hi guys. I've configured the Splunk_TA_nix plug-in running on a Linux server, and it is providing data for a metric-based index in Splunk Enterprise v9.2.1. I've configured the most basic (Classic) dashboard with just a dropdown and a search based on this index. The dropdown never populates, so my question is whether dropdown searches can be based on metric indexes. My search works in Search & Reporting:

| mstats min(df_metric.*) WHERE (host=myhost) span=1h index="linux_os_metric" BY MountedOn
| stats values(MountedOn) as MountedOn
| sort MountedOn
| table MountedOn

It says "populating" and does not return an error, but the dropdown is greyed out and not selectable. I was hoping it would present a list of mounted filesystems. Thanks in advance if anyone can solve this.
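A minimal Simple XML sketch of a dropdown fed by an mstats search, assuming a concrete metric name (df_metric.Size here is an assumption) and the index/host from the post; dropdown inputs can be populated by any search that returns one row per value, which is why mvexpand is added after stats values():

<input type="dropdown" token="mount_tok" searchWhenChanged="true">
  <label>Mounted Filesystem</label>
  <fieldForLabel>MountedOn</fieldForLabel>
  <fieldForValue>MountedOn</fieldForValue>
  <search>
    <query>| mstats min(df_metric.Size) WHERE index="linux_os_metric" host=myhost span=1h BY MountedOn
| stats values(MountedOn) as MountedOn
| mvexpand MountedOn
| sort MountedOn</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
</input>

One common reason a dropdown stays greyed out is that stats values() returns a single multivalue row, or that the input's search has no explicit time range; the mvexpand and earliest/latest above address both.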
Hello Community! In 2024, simply having an observability practice is a given. In this era of observability, a high-functioning team will set leaders apart from their peers. Leading observability practices don't fix issues by putting hundreds of people into a virtual room, or frantically messaging in a temporary Slack channel to find root causes. Because leaders embed observability into their development practices early, a feature launch is a quiet non-event. An incident ends calmly, with swift resolution and proactive steps to prevent similar problems in the future, thanks to higher alert accuracy and close collaboration across teams. They recognize that observability isn't something you have; it's something you do.

Our newly released annual report, the State of Observability 2024: Charting a Course to Success, surveys 1,850 IT ops and engineering professionals across 10 countries and 16 industries. We find that when teams don't need to worry about their systems failing, they can focus on both resilience and innovation. They develop more high-quality code (and ship it faster), innovate more, and lean on technologies like AI, platform engineering, and OpenTelemetry to boost efficiency and wrangle their telemetry data. All of this leads straight to the bottom line, where leaders deliver more productivity and value than their peers, achieving a 2.67x annual return on their observability solutions.

Leading observability practices gain a competitive edge

As customer expectations and data complexity increase, leading observability practices are creating a competitive advantage. Our research shows that leaders consistently outperform their peers in several areas.

Leaders resolve issues quickly to dampen the impact of downtime, which costs Global 2000 companies $400 billion annually. They're 2.3x more likely than beginners to measure their MTTR in minutes or hours, versus days, weeks, or even months. Leaders likely achieve this impressive speed because they don't waste time chasing inaccurate alerts; a full 80% of their alerts are indicative of a real incident. They share tools and workflows with security teams for better context into issues, and 73% say they've improved MTTR thanks to this practice.

Developers at leading organizations are more productive, likely because they aren't spending their time putting out fires or fixing mistakes. Leaders push their code on demand 2.6x more often than beginners, enabling their teams to release more services and products faster to delight customers. They do this thoughtfully, with a 22% higher change success rate than their peers.

All of this success leads to more value from their observability solutions; 92% say their observability solution cuts down on application development time, enabling them to bring products to market faster. And most importantly, leaders report an annual return on observability that's 2.67x their spend.

Platform engineering drives the developer experience

Leaders' software development and deployment success is in part due to their adoption of platform engineering, a discipline that frees up software engineers from managing toolchains so they can dedicate time to what they do best: pushing new, revenue-generating products to market. Leaders adopt platform engineering more heavily, forging a path as the practice gains traction throughout the industry, with 73% of all respondents saying they've implemented platform engineering.

Organizations with platform engineering teams are seeing the payoffs, with 55% citing their top achievement as increasing IT operations efficiency. Standardization is where platform engineering really shines, as 90% agree that these teams' efforts to standardize operations have been successful. Platform teams are particularly successful in driving security and compliance standards that are instrumental in achieving high-demand certifications like FedRAMP. What's more, 58% of leaders say their development teams view platform engineering as a competitive differentiator.

Exploring the benefits of AI, OpenTelemetry

Platform engineering isn't the only trend that's here to stay; OpenTelemetry is emerging as the new industry standard for collecting observability data as flexibility and control become essential, with well over half (53%) of all respondents embracing it. OpenTelemetry enables organizations to skip vendor lock-in and proprietary agents, aligning with nearly three-quarters (73%) of respondents who say its main benefit is access to a broader ecosystem of technologies.

Traditional AI and ML continue to be staples of observability, with 97% of respondents using these capabilities to enhance observability operations. Specifically, 56% use AI and ML to correlate events and prioritize alerts. But generative AI remains relatively uncharted waters. Although 84% say they've explored these features within observability platforms, a mere 13% have actually adopted them.

Read the full report for more findings on trends like OpenTelemetry, AI, and platform engineering, and for recommendations from Splunk experts on how to build a leading observability practice.
Improvement request for the New Content Available pop-up.

==Current Pop-Up==
New Content Available
Content Update version: 4.42.0
The Content Update app has new content available for download. Update the app to see new detections/searches to keep up to date. This update will only take a few minutes.

==Suggested Change==
New Enterprise Security Content Update App Version Available
Content Update version: 4.42.0
App link - https://splunkbase.splunk.com/app/3449
The Enterprise Security Content Update app has new detections/searches available for download. New updates and analytics are described in the Release Notes available on docs.splunk.com.
I have a Splunk search that returns two columns, SESSION and URI. How can I show the sequence of URIs visited by each SESSION as columns, with a separate row for each SESSION? Thanks!
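A minimal SPL sketch, assuming the events carry _time to order each session's URIs, and with base_search standing in for the actual search (both assumptions):

base_search
| sort 0 SESSION _time
| streamstats count as step by SESSION
| eval step="uri_".step
| xyseries SESSION step URI

xyseries pivots the rows so each SESSION becomes one row with columns uri_1, uri_2, ... in visit order.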
Hi, I have a use case in which I need to assess the storage difference of an index. For example, I have an index which holds around 100.15 GB of data, with the searchable retention set to 1095 days. Now, if I reduce the searchable retention to, let's say, 365 days, what would be the approximate storage utilization of the index? I need to output these results in tabular form on a dashboard. Please assist me with this. Thank you in advance.
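A minimal sketch using dbinspect to project the on-disk size if only the newest 365 days of buckets were kept; this is approximate because whole buckets age out, not individual events (the index name is a placeholder):

| dbinspect index=my_index
| eval cutoff=relative_time(now(), "-365d")
| eval keptMB=if(endEpoch >= cutoff, sizeOnDiskMB, 0)
| stats sum(sizeOnDiskMB) as currentMB, sum(keptMB) as projectedMB
| eval savedMB=round(currentMB - projectedMB, 2)

The result is already tabular, so it can be dropped onto a dashboard panel as-is.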
Hello, I am reaching out to inquire whether Splunk SOAR currently supports Red Hat Enterprise Linux 9 (RHEL9). We are considering an upgrade to our infrastructure and want to ensure compatibility with Splunk SOAR. Thank you!
I have a 3-node search head cluster and distributed indexers. We are getting the below error when running any type of search; please suggest ways to avoid it.

Error: (indexers)..........of 41 peers omitted] Could not load lookup=LOOKUP-connect_glpi
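A minimal check, assuming the lookup lives in an app on the search heads and should reach the peers inside the replicated knowledge bundle (bundle and app names below are placeholders):

# On an indexer, look for the lookup file in the most recent knowledge bundle:
ls $SPLUNK_HOME/var/run/searchpeers/<latest-bundle>/apps/<app>/lookups/ | grep -i glpi

If the file is missing there, the usual suspects are bundle replication denylists (the distsearch.conf replicationBlacklist stanza) or a lookup file large enough to be excluded from the bundle.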
How can we check whether the data coming into Splunk is creating problems for the cluster manager (CM), making it unstable, leading the peers to reach more than 2k+, and leaving RF and SF red?
Hello, I have a deployment server and deploy an app on a universal forwarder, like I usually do (create an app folder -> create a local folder -> write inputs.conf -> set up the app and server class on the DS, tick enable/disable app, tick restart Splunkd). But after making sure of the log path and the permissions of the log file (664), I don't see the log forwarded. I only manage the Splunk deployment, not the server that hosts the universal forwarder, so I asked the system team to check it for me. After some time, they got back to me and said there was no change to the inputs.conf file. They had to manually restart Splunk on the universal forwarder, and after that I finally saw the log ingested. So I want to know if there is an app, or a way, to check whether the app or the inputs.conf was changed according to my config on the DS; I can't ask the system team to check it for me all the time. Thank you.
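A minimal sketch, assuming the forwarder ships its own _internal logs to the indexers; the component names are what deployment clients typically write to splunkd.log, but treat them as assumptions and adjust after checking a few raw events:

index=_internal sourcetype=splunkd host=<uf_hostname>
    (component=DeployedApplication OR component=DC:* OR component=ApplicationManager)
| table _time host component _raw
| sort - _time

Install, update, and restart activity for deployed apps should show up here, which avoids asking the system team to log in to the box.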
Hello, We are experiencing an issue with the SOCRadar Threat Feed app in our Splunk cluster. The app is configured to download threat feeds every 4 hours; however, each feed pull results in duplicate events being downloaded and indexed. We need assistance in configuring the app to prevent this duplication and to ensure events are deduplicated before they are written to the indexers.
At Splunk Education, we are committed to providing a robust learning experience for all users, regardless of skill level or learning preference. Whether you're just starting your journey with Splunk or sharpening advanced skills, our broad range of educational resources ensures you're prepared for every step.

Our Portfolio

We offer Free eLearning to kickstart your learning, eLearning with Labs for hands-on practice, instructor-led courses for interactive, expert guidance, and Splunk Certifications to validate your expertise. For quick tips and insights, explore our Splunk YouTube How-Tos and Splunk Lantern, where you'll find up-to-date guidance and best practices that reflect the latest in Splunk's capabilities.

New Courses Available

Every month, we release new courses designed to empower learners with the tools and knowledge they need to stay ahead in the evolving tech landscape. Whether you prefer self-paced eLearning or the structure of live instruction, there's a course to fit your style. This month, we are excited to announce three new courses to help you advance your Splunk skills:

SOC Essentials: Investigating with Splunk – eLearning with labs
SOC Essentials: Investigating with Splunk – Free eLearning
Administering Splunk Observability Cloud – Free eLearning

These courses provide targeted insights into security operations and observability, essential for anyone looking to enhance their data-driven capabilities. Explore them today to stay ahead in your field! All courses are available through the Splunk Course Catalog, accessible via our banner or directly on our platform.

Expanding Global Learning Access

As part of our commitment to accessibility and inclusion, we continue to translate eLearning courses into multiple languages and add non-English captions. This effort ensures that learners worldwide can grow their Splunk expertise in their preferred language, supporting our vision of an inclusive educational ecosystem.

Each month presents new opportunities to expand your knowledge, boost your career, and enhance your contributions to enterprise resilience. Stay updated with the latest courses and continue your journey toward Splunk mastery – your next big career move could be just a course away. See you next month!

- Callie Skokos on behalf of the Splunk Education Crew
Hey guys, so I was basically trying to set up Splunk to work with the terminal (bad idea). I ended up moving directories using the CLI and boom! It doesn't work anymore, and I have no way to undo the change via the terminal. I tried deleting and redownloading from Splunk, but it doesn't work. Please tell me someone has an answer or a way to reset the directories for the version I once had. I had so much data and apps to practice with. P.S. Even if there isn't a way to get my old version back, I still would like to know why it's not working when I try to redownload a new instance.
I am setting up a monitor on the log file for my Dolphin GameCube emulator. Dolphin and Splunk Enterprise are both running locally on my machine (Windows 11). Splunk is ingesting multiple lines per event, and my hope is to get each line to ingest as a separate event. I have tried all kinds of different props.conf configurations including SHOULD_LINEMERGE, LINE_BREAKER, BREAK_ONLY_BEFORE, etc. I'll paste a sample of the log file below. In this example, Splunk is ingesting lines 1 & 2 as an event, and then 3 & 4 as an event. When I turn on more verbose logging, it will lump even more lines into an event, sometimes 10+.

21:23:310 Common\FileUtil.cpp:796 I[COMMON]: CreateSysDirectoryPath: Setting to C:\Users\whjar\mnt\file-system\opt\dolphin\dolphin-2409-x64\Dolphin-x64/Sys/
21:23:323 DolphinQt\Translation.cpp:155 W[COMMON]: Error reading MO file 'C:\Users\whjar\mnt\file-system\opt\dolphin\dolphin-2409-x64\Dolphin-x64/Languages/en_US.mo'
21:24:906 UICommon\AutoUpdate.cpp:212 I[COMMON]: Auto-update JSON response: {"status": "up-to-date"}
21:24:906 UICommon\AutoUpdate.cpp:227 I[COMMON]: Auto-update status: we are up to date.
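A minimal props.conf sketch for the parsing instance (the local Splunk Enterprise install here), assuming a sourcetype named dolphin_log and that every line begins with a timestamp like 21:23:310; both are assumptions:

[dolphin_log]
SHOULD_LINEMERGE = false
# Break the stream before each leading timestamp such as "21:23:310"
LINE_BREAKER = ([\r\n]+)(?=\d{1,2}:\d{2}:\d{3}\s)

LINE_BREAKER only affects newly indexed data and needs a restart to take effect; timestamp extraction may still need its own TIME_FORMAT, which is left out here because the 21:23:310 format is ambiguous.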
Need help passing a token value from a Single Value panel, using (| stats count) in conjunction with the (| rex field=_raw) command, to a stats table panel. I created a dashboard showing various "winevent" logs for user accounts (created, enabled, disabled, deleted, etc.). The current search for my various Single Value panels using the stats command is below (for this example, I used Windows event code 4720 to count "User Account Created" events on the network) and extracted the EventCode.

Acct Enable:
index="wineventlog" EventCode=4720
| dedup user
| rex field=_raw "(?m)EventCode=(?<eventcode>[\S]*)"
| stats count

The output gives me a Single Value count of Windows event codes equal to 4720, ignoring duplicate user records. I am now trying to capture the extracted "eventcode" in a token using a drilldown for each respective count panel. I have set up the token as (Set $token_eventcode$ = $click.value$) in my drilldown editor. Using that token, I want to display the respective records in a second query panel as a table, as seen below:

Acct Enable:
index="wineventlog" EventCode=$token_eventcode$
| table _time, user, src_user, EventCodeDescription

As I am still learning the rex command, I am having problems capturing the EventCode from the _raw logs, setting it to the $token_eventcode$ token in the Single Value count query, and passing that value through the token to the table while maintaining the stats count value. Any assistance will be greatly appreciated.
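A minimal Simple XML sketch of one way through this, noting that for a Single Value fed by | stats count, $click.value$ carries the count itself rather than the EventCode; since the panel's search is already pinned to one event code, the token can simply be set to that literal in the drilldown (the panel structure below is an assumption, the token name is from the post):

<single>
  <search>
    <query>index="wineventlog" EventCode=4720 | dedup user | stats count</query>
  </search>
  <drilldown>
    <set token="token_eventcode">4720</set>
  </drilldown>
</single>

If the literal feels too hard-coded, the search's <done> handler can set the token from $result.eventcode$ instead, provided the query keeps eventcode in the output (for example | stats count by eventcode).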
I am trying to route my Windows security logs to another specified index, but the event has to meet certain criteria: EventCode has to be 4688, and the Token Elevation Type must equal %%1936, %%1938, TokenElevationTypeDefault, or TokenElevationTypeLimited. So far I have written these regular expressions:

1. REGEX = ((?s).*EventCode=4688*.)((?si).*(%%1936|TokenElevationTypeDefault|TokenElevationTypeLimited)*.)
2. REGEX = EventCode=4688.*TokenElevationType=(%%1936|%%1938|TokenElevationTypeDefault|TokenElevationTypeLimited)

When using 1, all EventCode 4688 events come to the specified index when I only wanted %%1936 and %%1938; I wanted to leave the %%1937 token in its original index. When using 2, no data at all comes to the index, even though it seems to be a much simpler regex. What am I missing to ensure 4688 is properly filtered using transforms and props?
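A minimal props/transforms sketch; the likely gap in regex 2 is that without (?s) the .* cannot cross the newlines of a multiline 4688 event, and the event text may not literally contain TokenElevationType= (the sourcetype stanza and target index are assumptions):

# props.conf (parsing tier)
[WinEventLog:Security]
TRANSFORMS-route_4688 = route_4688_elevated

# transforms.conf
[route_4688_elevated]
# (?s) lets . span newlines; .*? keeps the match tight within the event
REGEX = (?s)EventCode=4688.*?(%%1936|%%1938|TokenElevationTypeDefault|TokenElevationTypeLimited)
DEST_KEY = _MetaData:Index
FORMAT = your_target_index

Since %%1937 is not in the alternation, those events keep routing to the original index.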
What replaces Splunk TV?
Previously created war room templates fail to load, and attempting to recreate them gives errors. I've tried as both SAML and local user accounts, both with admin rights.
I am planning on upgrading our Splunk infrastructure, which requires our Splunk indexers to go offline for a few minutes. I am using SmartStore for Splunk indexing. Before I start the upgrade and take down our indexers, I want to roll all the data that is in hot buckets over to SmartStore and then start the upgrade. What is the best way to do this?
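A minimal sketch using the roll-hot-buckets REST endpoint from the CLI, assuming shell access on each indexer (index name and credentials are placeholders); once hot buckets roll to warm, SmartStore uploads them to the remote store:

# Repeat per index on each indexer before shutting it down:
splunk _internal call /data/indexes/<index_name>/roll-hot-buckets -method POST -auth admin:<password>

On clustered indexers, following this with splunk offline gives the peer an orderly shutdown before the upgrade.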
Hi, I need help fetching a field based on another field's condition. I have a lookup table as below:

NAME     HOST   STATE
abc-a-0  host1  master
abc-a-1  host2  local
abc-a-2  host3  local
abc-b-0  host4  local
abc-b-1  host4  local
abc-b-2  host4  local

I want to retrieve the abc-a-* NAMEs based on the STATE being master. The master STATE is dynamic; sometimes it will be in the abc-b-* group instead. Example:

NAME     HOST   STATE
abc-a-0  host1  local
abc-a-1  host2  local
abc-a-2  host3  local
abc-b-0  host4  local
abc-b-1  host5  master
abc-b-2  host6  local

The problem is:
1. Identify the current master STATE, whether it is in an abc-a-* or abc-b-* NAME.
2. Then fetch the 3 NAMEs of that group, whether it is abc-a-* or abc-b-*.
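A minimal SPL sketch, assuming the lookup file is named cluster_state.csv (a placeholder) and that the group is the NAME minus its trailing -<digit> suffix:

| inputlookup cluster_state.csv
| eval group=replace(NAME, "-\d+$", "")
| eventstats sum(eval(if(STATE="master", 1, 0))) as masters by group
| where masters > 0
| fields NAME HOST STATE

The eventstats flags every row of the group that contains the master, so the three NAMEs of whichever group (abc-a-* or abc-b-*) currently holds it come back together.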