1. Use the commands below to list all the containers/pods for a particular synthetic job with <scheduleIdValue>:

For Docker-based PSA:

docker ps --filter "label=scheduleId=<scheduleIdValue>"

For Kubernetes-based PSA:

kubectl get pods -l scheduleId=<scheduleIdValue>

NOTE: <scheduleIdValue> in the above commands can be copied from the Synthetic Jobs page under the EUM App on your AppD Controller UI. Please refer to the attached screenshot.

2. To debug a particular synthetic job session instead of all the sessions (pods/containers) for that synthetic job, use the label "measurementId=<measurementIdValue>" or both labels together, as below:

For Docker-based PSA:

docker container ls --filter "label=scheduleId=<scheduleIdValue>" --filter "label=measurementId=<measurementIdValue>"

For Kubernetes-based PSA:

kubectl get pods -l scheduleId=<scheduleIdValue>,measurementId=<measurementIdValue>

NOTE: <measurementIdValue> in the above commands can be copied from that particular synthetic job session's script output, or from the session's deep-link URL, which will be in the format <scheduleIdValue~measurementIdValue>, i.e. scheduleIdValue and measurementIdValue separated with a tilde ("~").
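Once the container or pod for a given session has been identified, a common next step is to pull its logs. The snippet below is a minimal sketch, assuming the same labels as above; <scheduleIdValue> and <measurementIdValue> remain placeholders to be filled in from the Controller UI.

# Docker-based PSA: tail the logs of the container matching a schedule/measurement.
CONTAINER_ID=$(docker ps -q --filter "label=scheduleId=<scheduleIdValue>" --filter "label=measurementId=<measurementIdValue>")
docker logs --tail 200 "$CONTAINER_ID"

# Kubernetes-based PSA: same idea using a label selector.
kubectl logs -l scheduleId=<scheduleIdValue>,measurementId=<measurementIdValue> --tail=200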
Introduction

When it comes to an Event Service deployment, it is crucial to ensure that there is sufficient RAM available on the OS and that the heap settings are configured correctly on each of the nodes in the cluster. This article will help you understand how to determine how much RAM the Event Service processes require, how to change the heap settings appropriately, and how to troubleshoot memory-related issues.

Table of contents
- Determine how much RAM is needed for the environment
- Configure the HEAP memory appropriately depending on your environment
- Tune the Operating System for Production Cluster Nodes
- POV/Testing environments
- Issues with Event Service Performance, and how to troubleshoot them

Determine how much RAM is needed for the environment

As per the official AppDynamics recommendations for heap space allocation, AppDynamics recommends allocating half of the available RAM to the Events Service process, with a minimum of 7 GB and up to 31 GB. Heap memory configuration refers to the settings that determine the amount of memory allocated to the heap in a Java Virtual Machine (JVM). The heap is the runtime data area from which memory for all class instances and arrays is allocated. By carefully configuring these parameters, you can optimize the performance of Java applications and reduce the likelihood of memory-related issues.

Keep in mind that if you have an environment with plenty of Analytics data reporting to your Event Service, the minimum requirement of 7 GB might not be enough. The requirement is based on the amount of load from both Transaction Analytics and Log Analytics events, and depends highly on the use case and the amount of load in your cluster.

The optimal setup for a production environment is to have:
- At least 62 GB of RAM on the system/host machine,
- At least three nodes.
Everything below that should be used only for POV and testing purposes.

To check the exact requirements based on the License Units consumed in your environment, please refer to the sizing below (license units per machine type and node count):

i2.2xlarge (61 GB RAM, 8 vCPU, 1600 GB SSD)
- Transaction Analytics license units: 20 (1 node), 37 (3 nodes), 44 (5 nodes), 63 (7 nodes)
- Log Analytics license units: 7 (1 node), 10 (3 nodes), 17 (5 nodes), 19 (7 nodes)

i2.4xlarge (122 GB RAM, 16 vCPU, 3200 GB SSD)
- Transaction Analytics license units: 22 (1 node), 41 (3 nodes), 84 (5 nodes), 113 (7 nodes)
- Log Analytics license units: 16 (1 node), 19 (3 nodes), 32 (5 nodes), 44 (7 nodes)

i2.8xlarge (244 GB RAM, 32 vCPU, 6400 GB SSD)
- Transaction Analytics license units: 53 (1 node), 94 (3 nodes), 120 (5 nodes)
- Log Analytics license units: 39 (1 node), 116 (3 nodes), 270 (5 nodes)

Reference: https://docs.appdynamics.com/appd/onprem/23.x/23.11/en/events-service-deployment/events-service-requirements#id-.EventsServiceRequirementsv23.9-EventsServiceNodeSizingBasedonLicenseUnits

Configure the HEAP memory appropriately depending on your environment

Once you have provisioned the appropriate amount of RAM on the system/host machine, it is essential to configure the heap memory settings inside the .properties file of the Event Service appropriately. The heap memory needs to be adjusted on all of the nodes in your cluster. The properties file can be found in the following path:

appdynamics/platform/product/events-service/processor/conf/events-service-api-store.properties

Inside events-service-api-store.properties we have four settings responsible for HEAP memory allocation:

ad.jvm.heap.min=2048m
ad.jvm.heap.max=2048m
ad.es.jvm.heap.min=4096m
ad.es.jvm.heap.max=4096m

The overall recommendation: assign half of the available RAM to the Events Service process. That half should then be split between the ad.jvm.heap and ad.es.jvm.heap settings in a 1:3 proportion.
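To reduce the chance of arithmetic mistakes when applying this split, you can script the calculation. Below is a minimal sketch (not an official AppDynamics tool) that takes the total system RAM in GB and prints the four property values; it follows the same convention as the worked example that comes next (1 GB treated as 1000 MB).

#!/usr/bin/env bash
# heap_split.sh - print recommended Event Service heap settings for a given amount of RAM.
# Usage: ./heap_split.sh 62
# Follows the recommendation above: half of total RAM, split 1:3 between
# ad.jvm.heap and ad.es.jvm.heap (1 GB counted as 1000 MB, matching the example below).
TOTAL_GB="$1"
HALF_MB=$(( TOTAL_GB * 1000 / 2 ))   # half of the available RAM, in MB
API_MB=$(( HALF_MB / 4 ))            # one part   -> ad.jvm.heap
ES_MB=$(( API_MB * 3 ))              # three parts -> ad.es.jvm.heap
echo "ad.jvm.heap.min=${API_MB}m"
echo "ad.jvm.heap.max=${API_MB}m"
echo "ad.es.jvm.heap.min=${ES_MB}m"
echo "ad.es.jvm.heap.max=${ES_MB}m"

The worked example below goes through the same calculation by hand.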
For example: if you have 62 GB of RAM on the system/host machine, first divide it by 2.

62 / 2 = 31 GB of RAM

This result then needs to be split between the ad.jvm.heap and ad.es.jvm.heap settings in a 1:3 proportion. In this particular example you need to distribute 31 GB of RAM in a 1:3 proportion, so the easiest way to proceed is to divide 31 GB by 4. The result, 7.75 GB (7750m), is assigned to ad.jvm.heap. Next, multiply this value by 3 (7750 * 3 = 23250) and assign the result to ad.es.jvm.heap. The heap memory configuration for 62 GB of RAM on the system/host machine should look as follows:

ad.jvm.heap.min=7750m
ad.jvm.heap.max=7750m
ad.es.jvm.heap.min=23250m
ad.es.jvm.heap.max=23250m

After making any changes inside the .properties file, it is extremely important to restart your Event Service for those changes to be applied.

Tune the Operating System for Production Cluster Nodes

A crucial step that is often skipped during the Event Service deployment process is tuning the Operating System for production cluster nodes. Before installing the Events Service cluster, you need to perform a few manual changes, as described below. These are particularly relevant for production Events Service deployments. On each node in the cluster, make these configuration changes:

1. Using a text editor, open /etc/sysctl.conf and add the following:

vm.max_map_count=262144

2. Raise the open file descriptor limit in /etc/security/limits.conf, as follows:

<username_running_eventsservice> soft nofile 96000
<username_running_eventsservice> hard nofile 96000
<username_running_eventsservice> soft memlock unlimited
<username_running_eventsservice> hard memlock unlimited

3. Disable swap memory by running the command below, and remove swap mount points by removing or commenting out the lines in /etc/fstab that contain the word swap.

swapoff -a

Reference: https://docs.appdynamics.com/appd/onprem/24.x/latest/en/events-service-deployment/install-the-events-service-on-linux#id-.InstalltheEventsServiceonLinuxv24.4-TunetheOperatingSystemforProductionClusterNodes

POV/Testing environments

For a POV/Testing environment, there is no need to have three nodes and 62 GB of RAM available on the node. To configure the Event Service as a single-node deployment, please refer to the following article: https://community.appdynamics.com/t5/Share-a-tip/Run-Event-Service-as-a-single-node-cluster/td-p/58121

When it comes to the RAM requirements on the system/host machine for a POV/Testing environment, it highly depends on the amount of load from both Transaction Analytics and Log Analytics events you are planning to send to this environment. It is still recommended to have around 20 to 32 GB of RAM on the system/host machine, even for testing environments. Anything below that might not be sufficient. Keep in mind that, depending on the amount of load on the Event Service cluster, this recommendation might not be enough even for a POV/Testing environment. If you run into any performance issues, please refer to steps 1 through 5 in the "Issues with Event Service Performance" section below to determine how much RAM your Elasticsearch processes are consuming.
Issues with Event Service Performance, and how to troubleshoot them

If your Event Service is experiencing performance issues, or it crashes by itself after running successfully for a couple of hours or days, you might have run into a memory consumption issue. Such situations are almost always related to insufficient resources available on your OS or an insufficient heap memory configuration on your Event Service. To check on this issue and understand how much RAM your Elasticsearch process is consuming, you can use the following troubleshooting steps.

1. First, start your Event Service normally (if it is not already running), and make sure it is running. To check that from the CLI, you can use the following methods:
- Check the output of ps -ef | grep java. You should see two processes running on your OS - one is the Event Service process, the other is the Elasticsearch process.
- Check the contents of the /processor directory - you should see the events-service-api-store.id and elasticsearch.id files inside.
- Check whether the processes are listening on the appropriate ports. You can check that using netstat -tulpn.

Keep in mind that if your RAM is not sufficient, or the heap memory configuration is too low, your Event Service might not start at all in the first place. In such a case, please revisit the sections above on how to determine how much RAM is needed for the environment and how to configure the HEAP memory appropriately depending on your environment. Ensure that you have an appropriate amount of RAM on your OS and that the heap memory configuration is appropriate.

2. Check and note down the process ID associated with the Elasticsearch process. If your Elasticsearch is running on port 9200 (the default setting), you can use the output below for reference:

[appdynamics@krk-vnap-461-146-12 processor]$ netstat -anp |grep 9200 |grep LIST
(Not all processes could be identified, non-owned process info will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::9200 :::* LISTEN 495838/java
[appdynamics@krk-vnap-461-146-12 processor]$

In the above environment, the Elasticsearch process ID is 495838. Make sure you note down this process ID.

3. Check the memory consumption of the process. On Linux systems, use the following command to check how much RAM the Elasticsearch process is utilizing at a given moment in time:

echo 0 $(awk '/Rss/ {print "+", $2}' /proc/<process-id-noted-in-step-n.2>/smaps) | bc

The above command returns the memory consumption in KB. Compare this value against the heap memory configuration and the RAM available on your OS. If the memory used is at the edge of the heap limit or the available RAM, it indicates that the process requires additional RAM to run smoothly.

4. If the Operating System terminates your Elasticsearch process by itself, there is a high possibility it was terminated due to an Out of Memory condition. You can check the system logs to understand why it happened in the first place. Once you notice your Event Service crashed, check and confirm whether the Elasticsearch process got terminated:

[appdynamics@krk-vnap-461-146-12 processor]$ netstat -anp |grep 9200 |grep LIST
[appdynamics@krk-vnap-461-146-12 processor]$

As you can see from the above output, the Elasticsearch process is no longer running.

5. Check the system logs to understand why the Elasticsearch process was terminated by your operating system.
To do that, you can use the dmesg | grep <process-id-noted-in-step-n.2> command. Please refer to the output below for reference:

[appdynamics@krk-vnap-461-146-12 processor]$ dmesg | grep 495838
[1362539.578533] [ 495838] 1010 495838 6399752 4925562 40865792 0 0 java
[1362539.578544] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1010.slice/session-978.scope,task=java,pid=495838,uid=1010
[1362539.578758] Out of memory: Killed process 495838 (java) total-vm:25599008kB, anon-rss:19702248kB, file-rss:0kB, shmem-rss:0kB, UID:1010 pgtables:39908kB oom_score_adj:0
[appdynamics@krk-vnap-461-146-12 processor]$

The above output indicates that the AppDynamics Event Service running on the system was terminated due to an "Out of Memory" (OOM) condition. The oom-kill message indicates that the system's Out of Memory (OOM) Killer was triggered. This happens when the system runs out of available memory or when the process's heap memory configuration is not sufficient. In such a case, the system starts terminating processes to free up resources. From the above snippet, we can conclude that the process was using approximately 19,702,248 kB (about 19.7 GB) of anonymous resident set size (anon-rss), which is the actual memory the process was consuming. Due to insufficient heap memory configuration, the process was terminated.

If you face the above scenarios, it is clear that your Event Service does not have sufficient RAM available to run smoothly. If you encounter such issues, please revisit the sections above on how to determine how much RAM is needed for the environment, how to tune the Operating System for production cluster nodes, and how to configure the HEAP memory appropriately depending on your environment. If you encounter any further issues with your Event Service component, please consider raising a Support ticket for further help.
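To make the checks above repeatable, steps 2, 3, and 5 can be combined into a small script. This is a minimal sketch, not an official AppDynamics utility; it assumes Elasticsearch listens on the default port 9200 and that you run it as a user that can see the process details (otherwise run it as root).

#!/usr/bin/env bash
# Find the Elasticsearch PID from the listening port, report its resident memory,
# and check the kernel log for OOM kills of java processes.
ES_PID=$(netstat -anp 2>/dev/null | grep ':9200' | grep LISTEN | awk '{print $NF}' | cut -d/ -f1 | head -n1)

if [ -z "$ES_PID" ] || [ ! -d "/proc/$ES_PID" ]; then
  echo "Elasticsearch process not found - it may have been terminated."
else
  RSS_KB=$(awk '/Rss/ {sum += $2} END {print sum}' "/proc/$ES_PID/smaps")
  echo "Elasticsearch PID: $ES_PID, resident memory: ${RSS_KB} kB"
fi

# Look for Out of Memory kills of java processes in the kernel log.
dmesg | grep -i "out of memory" | grep -i java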
I have completed the Splunk SPLK-3001 course. Understanding the subject is one thing and prepping for the exam is another. Are there any exam practice tests available?
I want to troubleshoot an issue with our syslog servers' logs being sent to the last chance index, but I'm realizing I don't understand the syntax for configuring the conf files on the syslog servers to do so. Where can I gain fundamental knowledge of syslog-ng and how to configure it to send logs into Splunk?
February 2025 Edition Hayyy Splunk Education Enthusiasts and the Eternally Curious!  We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go Training You Gotta Take Instructor-led Training | Training tales and testimonials It’s time to get curious with Splunk Classroom Chronicles. This new series introduces you to our top-notch instructors and course developers, and highlights stories and feedback from our learners. We’re spilling the tea about how to keep up in today’s fast-paced work environment with continuous professional development. Splunk Education offers engaging, interactive training to keep you one step ahead of the bad guys. From hands-on labs to expert-led sessions, grab a virtual seat and become part of our tales and testimonials. Gotta meet the instructors | Read the tales Splunk Enterprise Data Administration | You can get data in Some things are just worth the investment – Italian coffee, quality running shoes, new tires. The same holds true for Splunk Education courses, which sometimes require a solid investment in time. With the Splunk Enterprise System Administration course under your belt, you’re now ready for the 18-hour Splunk Enterprise Data Administration course, which is designed for administrators who are responsible for getting data into Splunk indexers. Structured into 15 modules, this course provides the fundamental knowledge of Splunk forwarders and methods to get remote data into Splunk indexers. It covers installation, configuration, management, monitoring, and troubleshooting of Splunk forwarders and Splunk Deployment Server components. Gotta invest the time | You’re worth 18 hours Things You Needa Know Answers to all the questions | Self-serve with FAQs If you don’t use self-checkout, are you even living in the modern AI era? Kidding. But seriously, self-service can be a real time-saver if you know how to maneuver it. At Splunk Education, we try to make our processes simple, but there are times you’ll have questions. Your peers have helped curate the most popular and frequently-asked questions – all available online and easy to follow. Bar code scanning skills are not required. Needa get help | FAQs are your guide Splunk Certified users earn 40% more | Hack your career If you want to level up and capture that new promotion, getting Splunk Certified is the ultimate cheat code. One of our many certifications, the Splunk Certified Cybersecurity Defense Analyst badge proves you’ve got the skills to analyze threats, hunt risks, and defend like a pro. Whether you’re aiming for a SOC analyst role or leveling up your cyber career, this certification helps you stand out. Become the boss you’re always trying to beat.  Needa earn more | Get Splunk Certified Places You’ll Wanna Go To class next week | We’ve got space for you Whether you’re a procrastinator or on that Sigma grindset, we’ve got you covered. Last Minute Learning from Splunk offers you the option to get more learning courses under your belt faster, or make up for the courses you’ve been putting off. Each week, we share a list of the upcoming instructor-led classes that still have seats available. 
Just register with your Splunk.com account and use your company training units or a credit card to purchase.  Go right now | Last Minute Instructor-led Courses To a bright spot | Splunk Lantern TFW you see the light. Splunk Lantern is your go-to hub for expert tips, use cases, and smarter ways to manage Splunk. Whether you need Getting Started Guides, Product Tips, or insights into data sources and types, Lantern is the place – with new content every month. This month, read expert articles on Splunk Platform Health Checks, OpenTelemetry, Edge Processor integration, compliance in regulated industries, workload management, and advanced data analysis. Smartness cannot thrive in darkness, so go find the light.  Go to Lantern | Simplified advice on the blog Find Your Way | Learning Bits and Breadcrumbs Go Chat | Join our Community User Group Slack Channel Go Stream It  | The Latest Course Releases (Some with Non-English Captions!) Go Last Minute | Seats Still Available for ILT Go to STEP | Get Upskilled Go Discuss Stuff | Join the Community Go Social | LinkedIn for News Go Index It | Subscribe to our Newsletter Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.    Answer to Index This: 6,457 The last digit is moved to the front to make the next number.
The SAP C_BCSSS_2502 certification is a crucial milestone for professionals aiming to position SAP Sustainability Solutions as part of the SAP Business Suite. As sustainability becomes a key focus for enterprises worldwide, earning this certification demonstrates your ability to integrate SAP’s sustainability solutions into business operations effectively. This guide will walk you through a structured study plan, essential topics, exam strategies, and the best resources to ace your C_BCSSS_2502 certification with confidence. Whether you are just starting your preparation or refining your final steps, this comprehensive guide will ensure your success. Understanding the C_BCSSS_2502 Exam Exam Overview The C_BCSSS_2502 Positioning SAP Sustainability Solutions as Part of SAP Business Suite exam evaluates candidates on their understanding of SAP’s sustainability framework and how it aligns with business processes. This certification is designed for consultants, project managers, and sustainability-focused professionals who want to leverage SAP solutions for environmental and social governance. Exam Code: C_BCSSS_2502 Duration: 60 minutes Number of Questions: ~30 Passing Score: ~70% For the official exam syllabus, refer to SAP's certification page. C_BCSSS_2502 Exam Preparation Strategy 1. Understand the Exam Syllabus The C_BCSSS_2502 exam covers several key topics, including: Sustainability Solutions Positioning: Related course code: Sustainability Solutions Positioning For an in-depth breakdown of the syllabus, visit ERPPREP’s detailed guide. 2. Utilize C_BCSSS_2502 Practice Tests One of the most effective ways to prepare is by taking C_BCSSS_2502 practice tests to simulate the exam environment. These tests help you: Identify knowledge gaps. Improve time management. Get familiar with the exam format. Start practicing today with ERPPREP’s online practice exams to boost your confidence. 3. Leverage SAP Learning Resources SAP provides official learning materials that align with the exam content. Key resources include: SAP Learning Hub – Access to e-learning courses and hands-on practice. SAP Community & Forums – Engage with experts and peers to clarify doubts. SAP Training Courses – Instructor-led courses to strengthen your conceptual understanding. Best Practices for C_BCSSS_2502 Exam Success Time Management Tips Create a study plan – Allocate time for each syllabus topic. Focus on weak areas – Spend extra time on topics where you score lower in practice tests. Mock Exams – Take full-length mock exams under timed conditions. Exam Day Strategy Read questions carefully – Watch for tricky wording in multiple-choice questions. Eliminate wrong answers – Narrow your options to improve your chances. Manage time wisely – Don’t get stuck on one question; move on and return later if needed. FAQs on C_BCSSS_2502 Certification 1. Who should take the C_BCSSS_2502 exam? This exam is ideal for professionals involved in SAP sustainability solutions, including consultants, project managers, and IT professionals working on ESG compliance. 2. What is the best way to prepare for the C_BCSSS_2502 exam? Use official SAP materials, take practice tests, and engage with SAP forums and study groups. 3. Are there prerequisites for this exam? No formal prerequisites, but a background in SAP Business Suite and sustainability solutions is recommended. 4. Where can I find the latest C_BCSSS_2502 study materials? You can access official SAP learning resources on SAP Learning Hub  and practice tests on ERPPREP .
Created a Splunk account, but unable to find the Splunk ID.
I want to do the Splunk Certified Core User certification. Currently, I've registered for a learning path in Splunk itself. I selected only the free course in each topic. I didn't select any topic in 'Leveraging Lookups and Subsearches' or 'Search Optimization' as they had only paid options. Is all of this fine for completing the certification? I'm thinking of doing the same for Power User and Cloud Admin. Is this approach fine?
Whether it’s hummus, a ham sandwich, or a human, almost everything in this world has an expiration date. And, Splunk Education Training Units are no exception – always expiring one year from the date of purchase. Don’t let these slip by without using them to gain access to our valuable instructor-led training or eLearning with labs. Get the details here.  Work with your org manager to use those training units If you’re a Splunk user at one of our customers, make sure you connect with the person in your company who manages the Splunk training units. Let them know that you would love to take some of those already paid-for training units off their hands so they don’t expire before they get used. Oh, if you’re just a curious learner itching to get more Splunk courses under your belt and don’t officially use Splunk at your org, you can purchase training and register for class using a credit card. No problem! Enroll in some classes today – Here’s what’s popular Here are just a few of the more popular Instructor-led Splunk Education courses to choose from: Intro to Splunk Using Fields Visualizations Working with Time Statistical Processing Comparing Values Result Modification And, remember that training units can also be used for eLearning with labs if you prefer more self-paced learning.  See you in class – even at the last minute In the next two weeks, there are still seats available via our Last Minute Learning program. You can quickly get into some popular virtual instructor-led training courses that have low enrollment – available for purchase via training units or credit card. This is a great opportunity to take those training units that might be expiring soon! If your company has purchased a contract of TUs, please consult with your Org Manager to enroll in these paid classes.  At Splunk Education, we get excited about helping our learners excel. So, go get those training units and we’ll see you in class!   -- Callie Skokos on behalf of the Splunk Education Crew Some fine print Training units expire at 12:01 AM Eastern Time on the expiration date. Registrations paid by training units must be placed before 12:01 AM Eastern Time. All instructor-led classes and dedicated classes paid by training units must start the day before the training units expire.
How to Deploy the Database Agent via Smart Agent (Using OpenJDK 17)

Step-by-Step Guide:

1. Download the Smart Agent
Go to AppDynamics Downloads and download the Smart Agent ZIP file.

2. Extract & Install the Smart Agent
Unzip the downloaded file into C:\appdynamics\appdsmartagent (or your preferred directory). Open an Administrator command prompt and run:
appdsmartagent-service.bat
This creates and starts the AppDynamics SmartAgent service on Windows.

3. Confirm the Service is Running
Open services.msc and verify the AppDynamics SmartAgent service is running.

4. Configure the Smart Agent
In C:\appdynamics\appdsmartagent, edit the config.ini file with your Controller details, such as:
ControllerURL = https://ces-controller.saas.appdynamics.com
FMServicePort = 443
AgentType = <agent_type>
AccountAccessKey = <your_access_key>
AccountName = <your_account_name>
EnableSSL = true

5. Restart the Smart Agent service for the changes to take effect.

6. Validate the Smart Agent in the Controller
In the Controller UI, go to Agents → Manage Agents → Smart Agents and look for your host name. Once it appears, you can install a Database Agent on the same machine directly from the UI.

7. Attach a Database Agent to the Smart Agent
If the Database Agent isn't associated with a Smart Agent, you won't see a Smart Agent ID in the UI. Install or attach the DB Agent using the UI on the same machine where the Smart Agent is running.

8. Use the Latest OpenJDK (Example: JDK 17)
If you need OpenJDK 17 on your Linux machine, for example, run:
sudo apt update
sudo apt install -y openjdk-17-jdk
The Database Agent works fine with JDK 17 (tested on a local machine, ip-172-31-9-203).

9. Upgrade or Downgrade the Database Agent
If your Database Agent is attached to a Smart Agent, you can easily upgrade or downgrade it from the Controller UI.

10. Successfully created a MySQL Collector with the latest JDK 17 and the latest Database Agent version 25.1.0.

Troubleshooting & Tips

Smart Agent Not Found? Ensure the Smart Agent is installed and running before installing a Database Agent via the UI.

Verifying the Smart Agent ID: Check Agent Management → Smart Agents in the Controller to see if the ID is displayed. If it's missing, the Database Agent won't be able to attach to it.

JDK Compatibility: Whether you're running on Windows, Linux, or another OS, ensure you have OpenJDK 17 (or your desired version) installed. The Database Agent is compatible with modern JDK versions, but always verify in AppDynamics' compatibility matrix.
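As a quick check on Linux before attaching the Database Agent (steps 7-8 above), it can save time to confirm the JDK version and that the Smart Agent configuration points at the right Controller. This is a minimal sketch; the config.ini path below is an assumption, so adjust it to wherever your Smart Agent is actually installed.

# Confirm the installed JDK (the walkthrough above used OpenJDK 17).
java -version

# Sanity-check the Smart Agent configuration values.
# The path below is assumed - change it to your Smart Agent install directory.
grep -E "ControllerURL|AccountName|EnableSSL" /opt/appdynamics/appdsmartagent/config.ini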
At Splunk Education, we’re dedicated to providing top-tier learning experiences that cater to every skill level and learning style. Whether you’re a beginner dipping your toes into Splunk for the first time or a seasoned professional looking to refine advanced skills, our wide variety of educational resources ensures that you’re always prepared for the next step in your journey. Our Diverse Course Offerings We’ve got you covered with a variety of learning options! Start strong with Free eLearning, dive deep with eLearning with Labs for hands-on practice, or benefit from Instructor-led Courses that provide interactive, expert-led sessions. Plus, don’t forget to validate your skills with Splunk Certifications. For those quick tips and insights, check out our Splunk YouTube How-Tos and Splunk Lantern, where you can access the latest guidance and best practices. Fresh Courses Just Released We’re thrilled to announce the latest additions to our course catalog! Every month, we release fresh content to keep you ahead of the curve. Whether you prefer the flexibility of self-paced eLearning or the structure of live, instructor-led courses, we have something for everyone. This month, we’re unveiling a brand-new instructor-led course, a cutting-edge eLearning with Labs course, and a new free eLearning course designed to elevate your Splunk skills. These courses provide essential insights into areas like security operations and observability, crucial for anyone looking to enhance their data-driven capabilities. Explore them now.  Global Learning, Now More Accessible In our commitment to inclusivity and accessibility, we continue to expand our language offerings and add non-English captions to our eLearning content. This ensures that learners around the world can enhance their Splunk expertise in their preferred language, furthering our vision of a globally inclusive educational ecosystem. Every month brings new learning opportunities to expand your knowledge, boost your career, and strengthen enterprise resilience. Stay on top of the latest course offerings and take the next step toward Splunk mastery – your next career breakthrough could be just one course away. We look forward to seeing you next month! — Callie Skokos on behalf of the Splunk Education Crew
Are there any practice tests for the Splunk Cloud Certified Admin exam?
Machine Agent HTTP Listener - How to Enable the HTTP Listener on a Windows Machine Agent

You can send metrics to the Machine Agent using its HTTP listener, by making HTTP calls to the Agent. The HTTP listener is not enabled by default. To activate the HTTP listener, set the metric.http.listener system property as described below and restart the Machine Agent.

Step-by-step guide:

There are two ways to enable this property: via the command line with Java arguments, or by adding the parameters to the VMOPTIONS file if you're running the Machine Agent as a service.

Option 1: Running the Machine Agent as a service

1. Go to the Machine Agent installation directory, for example:
C:\Program Files\AppDynamics\Machine Agent\bin

2. Add the properties below to the VMOPTIONS file, which is located in the Machine Agent\bin folder.

metric.http.listener: Required. Set to true.
metric.http.listener.port: Optional. Set to the port to be used, defaults to 8293.
metric.http.listener.host: Optional. This describes which interface to accept requests on.

You can set them as follows:

-Dmetric.http.listener=true -Dmetric.http.listener.port=8293 -Dmetric.http.listener.host=0.0.0.0

Sending a custom metric to the AppDynamics Machine Agent running locally (on port 8293) using the built-in HTTP listener:

C:\Users\Administrator>curl -v "http://localhost:8293/machineagent/metrics?name=Custom%20Metrics%7CTest%7CMy%20Metric&value=42&type=average"
* Host localhost:8293 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8293...
* Connected to localhost (::1) port 8293
> GET /machineagent/metrics?name=Custom%20Metrics%7CTest%7CMy%20Metric&value=42&type=average HTTP/1.1
> Host: localhost:8293
> User-Agent: curl/8.9.1
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 204 No Content
< Date: Wed, 12 Feb 2025 05:01:08 GMT
< Content-Type: application/xml
< Server: Jetty(9.4.56.v20240826)
<
* Connection #0 to host localhost left intact

Option 2: Running the Machine Agent via the command line

Run the command below, using the path where the Machine Agent is installed. In this example the Machine Agent is installed under the C:\Program Files\AppDynamics directory, so change the path to match your installation. Add the HTTP listener properties at runtime:

<machine_agent_home>/bin/machine-agent -Dmetric.http.listener=true -Dmetric.http.listener.port=<port_number> -Dmetric.http.listener.host=0.0.0.0

For example:

C:\Program Files\AppDynamics\Machine Agent\jre\bin\Java -Xmx256m -Dlog4j.configuration=file:C:\Program Files\AppDynamics\Machine Agent\conf/logging/log4j.xml -Dmetric.http.listener=true -Dmetric.http.listener.port=8293 -Dmetric.http.listener.host=0.0.0.0 -jar C:\Program Files\AppDynamics\Machine Agent\machineagent.jar &

Note: Ensure that you place the options/parameters before the JAR name in your start-up command.

Validation Check

Once the property has been added, you can see the details in the log files, and you can also see the custom metric in the Metric Browser UI. Check the AppDynamics Controller UI under the Metric Browser for the sample metrics "My Metric" and "You Metric".
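Beyond a one-off curl, the listener can also be used for simple periodic reporting. The loop below is a minimal sketch, assuming the default port 8293 and the same URL format as the curl example above; the metric name and the random value are illustrative placeholders, so substitute a real measurement.

#!/usr/bin/env bash
# Report a custom metric once a minute through the Machine Agent HTTP listener.
LISTENER="http://localhost:8293/machineagent/metrics"
METRIC="Custom%20Metrics%7CTest%7CQueue%20Depth"   # URL-encoded metric path (placeholder)

while true; do
  VALUE=$(( RANDOM % 100 ))   # placeholder - replace with a real measurement
  curl -s -o /dev/null "${LISTENER}?name=${METRIC}&value=${VALUE}&type=average"
  sleep 60
done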
Can someone help me view the previous/backdated Build Your Splunk Skills Wednesdays sessions? The latest one I subscribed to is marked for March, which would be a bit late for me to learn the tool. Thanks in advance.
Guide for Index Rollovers in the AppDynamics Event Service

For Event Service deployments, an index will roll over either when the average shard size breaches a threshold or when the age of the index exceeds the data retention period for the account/event type.

With the newest releases of the Event Service, this is also quite a common scenario after a recent environment migration. After such migrations, indexes sometimes don't roll over automatically. This issue occurs if data older than the destination Event Service's retention period was migrated. In this particular case, the migrated data is beyond the retention period, so the new Event Service doesn't include it in the roll-over process. In those cases, you can follow the instructions below to roll it over.

The format of the curl request for index roll-over is a bit different for ES8 when compared to legacy ES2.

Before rolling over each index, please make sure that your cluster is in green status. Before and after rolling over an index, execute the following curl and ensure that the cluster is in "green" status, with no shards in unassigned status:

curl -s 'http://localhost:9200/_cat/health?v'

The template for the index roll-over curl command is as follows:

curl -XPOST http://{host}:{port}/v1/admin/cluster/{cluster}/index/{index}/rollover -H"Authorization: Basic {key}" -H"Content-Type: application/json" -H"Accept: application/json" -d '{"numberOfShards": "2"}'

You need to fill in the following values manually in the above curl command:
- {host} - replace with the hostname; if you are running from the Event Service CLI, this can be set to "localhost",
- {port} - the port the Event Service is bound to. By default 9080; refer to the .properties file using "grep ad.dw.http.port events-service-api-store.properties",
- {cluster} - the cluster name; this can be obtained from the .properties file using "grep ad.es.cluster.name events-service-api-store.properties",
- {index} - the index to roll over; the list of indexes can be checked using the following curl: http://localhost:9200/_cat/indices?v
- {key} - Base64-encoded ad.accountmanager.key.ops from the events-service-api-store.properties file. On the Linux CLI, to get this value you can, for example, use: "echo -n {ad.accountmanager.key.ops} | base64"
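As an illustration, here is what the filled-in sequence might look like, including the Base64-encoding step. This is a sketch only: the cluster name and index name are placeholders, and the grep/cut assumes the property is stored as a simple key=value line in events-service-api-store.properties.

# 1. Base64-encode the ops key from the properties file (assumes key=value format).
KEY=$(grep ad.accountmanager.key.ops events-service-api-store.properties | cut -d= -f2 | tr -d ' ')
KEY_B64=$(echo -n "$KEY" | base64)

# 2. Confirm the cluster is green before (and after) rolling over.
curl -s 'http://localhost:9200/_cat/health?v'

# 3. Roll over a single index (replace the placeholders with real values).
curl -XPOST "http://localhost:9080/v1/admin/cluster/<cluster_name>/index/<index_name>/rollover" \
  -H "Authorization: Basic $KEY_B64" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -d '{"numberOfShards": "2"}'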
Hi everyone, Does anyone know where to find reliable study materials for the Splunk Cloud Certified Admin exam? It seems quite difficult to find good resources. Any recommendations would be greatly appreciated! Thanks in advance!
Welcome to the "Splunk Classroom Chronicles" series, created to help curious, career-minded learners get acquainted with Splunk Education and our instructor-led classes – and hear what other students are saying about their learning experiences. In today's dynamic workplace, ongoing professional development is more important than ever, and Splunk is at the forefront of facilitating this growth through comprehensive, interactive online training sessions. Our courses are designed not only to enhance technical skills but also to enrich the overall learning experience. From engaging instructors to hands-on labs, each class is tailored to ensure that participants gain practical knowledge and real-world expertise. Episode 2 | Let’s Meet More Instructors Join us for Episode 2 of our series as we explore the tales and testimonials from those who've experienced Splunk Education instructor-led training first-hand. You’ll meet our course instructors and developers – those who are dedicated to making your learning experience interesting, engaging, and valuable. Our Splunk course developers work to develop the quality curriculum and lab experiences, which are then handed off to our instructors. The end result, we hope, is happy learners with constructive feedback to share about our instructor-led courses. Administering Splunk Enterprise Security Course Administering Splunk Enterprise Security Course is a 13.5-hour course that prepares architects and systems administrators to install and configure Splunk Enterprise Security (ES). It covers ES event processing and normalization, deployment requirements, technology add-ons, dashboard dependencies, data models, managing risk, and customizing threat intelligence. Chris Amidei is one of the course instructors and Nicole Bichon was the course developer. Here’s what one student had to say about Chris Amidei: “I not only learned the material I expected, but a few things I wasn't expecting that will help with my Splunk journey. Thank you!” Enroll Today You can enroll in this course and meet Chris Amidei here on the STEP Learning Platform. SOC Essentials: Investigating and Threat Hunting Course In the SOC Essentials: Investigating and Threat Hunting Course you will learn and practice how to conduct investigations using Splunk Enterprise Security features, including Risk Based Alerting, through best practices shared by our security champions. You will also practice some common tasks using Splunk SOAR, and learn about the PEAK Threat Hunting framework and apply its basic concepts in hypothesis-driven threat-hunting. This course is part of a learning path that can help learners prepare for the role of a SOC Analyst and the Splunk Certified Cybersecurity Defense Analyst exam. Rick Rice is one of the course instructors and Daniela Herrera was the course developer. Here’s what one student had to say about Rick Rice: “Rick Rice is an informative and energetic instructor, which is great for an online training course. Best class I've taken all year! Rick is a great instructor, and encouraged attendee questions and engagement.” Enroll Today You can enroll in this course and meet Rick Rice here on the STEP Learning Platform. Implementing Splunk IT Service Intelligence 4.15 Implementing Splunk IT Service Intelligence 4.15 is an 18-hour course designed for administrator users who will implement Splunk IT Service Intelligence for analysts to use. The first day includes content from Using Splunk IT Service Intelligence.
The course covers IT Service Intelligence analyst user training, designing, implementing services and searches, defining and adding entities, and more. Gary Swanson is one of the course instructors and Nate Pomeroy was the course developer.  Here’s what one student had to say about Gary Swanson “The instructor, Gary, did an excellent job leading the course. ITSI is, in my opinion, one of the most complex areas of the Splunk portfolio, with many moving parts and interdependent concepts. Gary did a fantastic job breaking everything down in a structured and engaging way, making it much easier to grasp. One of the highlights for me was his use of a car analogy to explain ITSI services. It was both clear and memorable, and I’m definitely going to borrow that when explaining ITSI to customers in the future.” Enroll Today  You can enroll in this course and meet Gary Swanson here on the STEP Learning Platform.  Splunk Enterprise Data Administration Splunk Enterprise Data Administration is an 18-hour course designed for administrators who are responsible for getting data into Splunk Indexers. The course provides the fundamental knowledge of Splunk forwarders and methods to get remote data into Splunk indexers. It covers installation, configuration, management, monitoring, and troubleshooting of Splunk forwarders and Splunk Deployment Server components. Pete Green is one of the course instructors and Kevin Stewart was the course developer.  Here’s what one student had to say about Pete Green “Pete was brilliant, giving time to assist in troubleshooting when I was stuck and making the 3 days an enjoyable experience!” Enroll Today  You can enroll in this course and meet Pete Green here on the STEP Learning Platform.  Resources and Reminders If we’ve piqued your interest in the value of Splunk Education and you’d like to increase your Splunk knowledge or get started on your journey, here are some useful resources: Course Registration: Ready to take the next step? Register for these or any of our courses here.  Splunk Education: Visit the official Splunk Education website to explore more courses and certification details. Splunk Lantern: Get field-tested guidance on use cases and best practices using Splunk Lantern. Community Insights: Join the Splunk Community to connect with other users and get insights into best practices and troubleshooting. Splunk Certification: Validate your Splunk proficiency with any of our Splunk Certifications. Whether you're a new administrator or a seasoned Splunk veteran, our courses are designed to empower you with the knowledge and skills needed to excel in your role. Stay curious, keep learning, and we look forward to seeing you in one of our upcoming classes!
Hello, I completed the Splunk admin certification in 2019. I know it's expired, but I'm not able to see the certificates anywhere. Also, could someone please tell me where to view active and inactive certificates (not as a partner)?
January 2025 Edition Hayyy Splunk Education Enthusiasts and the Eternally Curious! Happy New Year! We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go Training You Gotta Take Instructor-led Training | Making the grade just got easier A grade can tell you what you scored on a test, but it can’t measure your passion, effort, or potential. Starting January 1, 2025, you can attend Splunk Instructor-Led Training feeling more confident in your skills. That’s because we’re moving from lab grading to a focus on participation and knowledge comprehension. So, breathe a sigh of relief knowing that we’re hoping to foster a more simplified learning experience that aligns with industry best practices and fosters more engagement in the classroom. You get an A for effort! Gotta get to class | Now with less lab grade anxiety Splunk Enterprise System Administration | Go big or go home Considered one of the most challenging track and field events to master, learning to pole vault can take years of dedicated practice. Good thing you’re trying to master Splunk and not the pole vault, right? Instead of years, you’ll need to invest just 12 hours if you’re a system administrator who is responsible for managing the Splunk Enterprise environment. The course provides the fundamental knowledge of the Splunk license manager, indexers, and search heads. It covers configuration, management, and monitoring of core Splunk Enterprise components. To become an expert, well, that requires a bit more time. Gotta invest the time | Get the fundamentals in 12 hours Things You Needa Know Careers are boosting | New report validates your efforts Don’t you love it when you find out you’re on the right track in life? Well, get ready to feel that validation again thanks to the 2024 Splunk Career Impact Report. The report highlights how building skills and earning Splunk Certification lays the groundwork for success. Among a slew of other data, the report shows that 51% of surveyed Splunk users improved skills with Splunk Education (up 8% YoY). And for you Splunk Certified gurus, we learned that you earn an average of 14% more, have 46% more confidence in job security, and are 2.3x more likely to get that big raise. WTG! (We are so proud of your dedication.) Needa know the impact | Read the full report Just the facts | Modern day Picasso The old adage is that “a picture paints a thousand words,” but in today’s fast-paced world, the infographic is the new oil painting. An infographic breaks down the facts and figures so you don’t have to. If you didn’t have a chance to read through the entire Splunk Career Impact Report showing that proficiency in using Splunk offers a competitive edge for users and customers (hint, hint: see story above) – we got you. We’re laying out just the stats and metrics behind the survey in the new 2024 Career Impact Report Infographic. Picture your future here. Needa know the numbers | Infographic shows your potential Places You’ll Wanna Go To the Splunk Community | Pedro is our video guest Do you feel all the feels when someone inspiring shares their experiences?
Oh yeah, then you’ll want to meet Splunk Community member, Pedro Borges, on our Smartness video series. Splunk users like Pedro are improving their careers and optimizing Splunk in their organization by tapping into Splunk Education resources and the vast Splunk Community. Want to follow in Pedro’s footsteps? Explore the same Splunk Education resources and Community tools that helped him succeed.  Go get the feels | Sneak a peek into Pedro’s world  The Customer Success Center | Splunk Lantern Splunk Education provides many modalities and channels for learning Splunk. One really cool opportunity is Splunk Lantern, which is a customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. If you’re not sure how Splunk Lantern can help you optimize Splunk in your organization, the team curates information and shares advice every month via Community blog updates. It’s time to lighten up with Lantern. Go to Lantern | Simplified advice on the blog Find Your Way | Learning Bits and Breadcrumbs Go Chat | Join our Community User Group Slack Channel Go Stream It  | The Latest Course Releases (Some with Non-English Captions!) Go Last Minute | Seats Still Available for ILT Go to STEP | Get Upskilled Go Discuss Stuff | Join the Community Go Social | LinkedIn for News Go Index It | Subscribe to our Newsletter Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.  Answer to Index This: Infinity  
Here's how to configure the Docker Compose deployment of the OpenTelemetry Community Demo application to send telemetry data to Splunk AppDynamics. Since its launch in May 2019, the OpenTelemetry (OTel) Project has become one of the most popular open-source projects after Kubernetes®. For newcomers to the observability domain, OpenTelemetry™ provides a standard way to collect telemetry data (metrics, logs, and traces) from software applications and send it to one or more backends to analyze application performance. The backends can be open source (Jaeger, Zipkin, etc.), commercial (such as Splunk AppDynamics, Splunk Observability), or both. To enable faster adoption and showcase instrumentation best practices, the OTel community has built a demo application, the OpenTelemetry Community Demo. In this blog, I'll show how to configure the OpenTelemetry demo to send trace data to Splunk AppDynamics for further analysis. Backstory: Splunk AppDynamics and the OpenTelemetry demo Splunk AppDynamics provides full stack observability of hybrid and on-prem applications and their impact on business performance. In addition to the proprietary ingestion format, AppDynamics also supports OpenTelemetry trace ingestion from various language agents (Java, dotnet, python, golang, etc.), giving customers more options in how they want to ingest telemetry data. The OpenTelemetry Community Demo is a simulated version of an eCommerce store selling astronomy equipment. The app consists of 14+ microservices communicating with each other via HTTP or gRPC. The microservices are built using a variety of programming languages (Java, JavaScript, C#, etc.) and instrumented using OpenTelemetry (auto, manual, or both). The diagram below shows the data flow and programming languages used. (Image credit: OpenTelemetry Demo contributors.) In addition to the microservices shown here, the demo app also comes with supporting components such as the OpenTelemetry Collector, Grafana, Prometheus, and Jaeger to export and visualize traces, metrics, and so on. The OpenTelemetry Collector is highly configurable. Once exporters for various backends are defined and enabled in the service pipeline, the Collector can be set up to send telemetry data to multiple backends simultaneously. The diagram below shows the OTel demo with supporting components, as well as a dotted line to Splunk AppDynamics, which we will configure in the next section. Sending OpenTelemetry trace data to Splunk AppDynamics Using the steps described in the OpenTelemetry demo Docker deployment documentation, deploy the demo app on your local machine. Confirm it's working by going to http://localhost:8080/ and completing an item checkout workflow. Contact your Splunk AppDynamics account representative to set up an AppDynamics account for your company. The account will have a URL format similar to https://<your-company/account-name>.saas.appdynamics.com and will be the central location where you'll see all telemetry data from your applications. Generate an API key by going to your AppDynamics URL > Otel > Get Started > Access Key. Go to the Processors, Exporters, and Service Configuration sections and note down the values of the keys below. We will use them in the next section: appdynamics.controller.host, appdynamics.controller.account. As described below, update the file src/otel-collector/otelcol-config-extras.yml in your cloned repo. Alternatively, copy the GitHub gist. Make sure there are no YAML validation errors by opening this file in an IDE with YAML support (VSCode, etc.).
The contents of this file get merged with src/otelcollector/otelcol-config.yml at runtime to create the consolidated OpenTelemetry Collector configuration.

processors:
  resource:
    attributes:
      - key: appdynamics.controller.account
        action: upsert
        value: "from AppD account url > Otel > Configuration > Processor section"
      - key: appdynamics.controller.host
        action: upsert
        value: "from AppD account url > Otel > Configuration > Processor section"
      - key: appdynamics.controller.port
        action: upsert
        value: 443
      - key: service.namespace
        action: insert
        value: otel-demo-local-mac
  batch:
    timeout: 30s
    send_batch_size: 90

exporters:
  otlphttp/appdynamics:
    endpoint: "from AppD account url > Otel > Configuration > Exporter section"
    headers: {"x-api-key": "from AppD account url > Otel > Configuration > API Key"}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [transform, resource, batch]
      exporters: [otlp, spanmetrics, otlphttp/appdynamics, debug]

Stop the Docker containers (make stop) and then start them (make start). Wait a few minutes and confirm that you can access the OpenTelemetry demo app UI at http://localhost:8080/. Next, log in to your Splunk AppDynamics URL. You'll then see a service flow map that shows the various microservices and the interactions between them. An observability platform should be able to detect an increase in error rates of the microservices it's monitoring. Fortunately, the OpenTelemetry demo has an error injection capability via feature flags to test this functionality. Go to the feature flag UI at http://localhost:8080/feature/ and enable the productCatalogFailure feature flag. This will cause the product catalog service to return an error for a specific product ID and respond correctly to all other product IDs. Note the increase in error rate on the home page. To view error details, click on Troubleshoot > Errors > Error Transactions > Details. AppDynamics accurately captures the error reason as "Product Catalog Feature Flag Enabled". AppDynamics provides health rules and alerts functionality to respond quickly to such situations. OpenTelemetry and Splunk AppDynamics The OpenTelemetry Community Demo application is a valuable and safe tool for learning about OpenTelemetry and instrumentation best practices. In this blog, we showed how to configure the demo app to send telemetry data to Splunk AppDynamics. We also explored some key Splunk AppDynamics features, such as the flow map and APM metrics, and observed an increase in error rates via a fault-injection scenario. Interested in trying this workflow to learn more about OpenTelemetry and Splunk AppDynamics? Go over the Splunk AppDynamics OpenTelemetry documentation and sign up for a free trial of Splunk AppDynamics. Then, clone and deploy the opentelemetry-demo repo and send its telemetry data to Splunk AppDynamics to gain valuable insights.
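One practical tip while following the steps above: if spans don't show up in the Controller, checking the Collector logs is usually the quickest way to see whether the AppDynamics exporter is sending data or failing. The commands below are a minimal sketch, assuming the demo's default Docker Compose service name (otel-collector) and that the frontend is reachable on localhost:8080; adjust the names if your deployment differs.

# Generate a little traffic first so there are spans to export.
curl -s -o /dev/null http://localhost:8080/

# Tail the Collector logs and look for the AppDynamics exporter and any export errors.
docker compose logs --tail=200 otel-collector | grep -iE "appdynamics|error"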