Step-by-Step Guide to Verifying SSL Certificates Using AppDynamics Cluster Agent

In this article, you will learn how to check if there are issues with the SSL Certificate for the On-Premises Controller using the Cluster Agent. Below are the steps you can independently verify in the cluster.

Next steps:
1. Check if a Secret containing the certificate named "custom-ssl.pem" is created.
2. Check if the Cluster Agent configuration file contains a parameter with the name of the Secret containing the certificate:
   - "cluster-agent.yaml" -> Kubernetes CLI
   - "values.yaml" -> Helm Chart
3. Check if the Cluster Agent pod description contains the expected settings:
   kubectl describe pod <cluster_agent_pod_name> -n appdynamics
   - Environment variable "APPDYNAMICS_CUSTOM_SSL_SECRET: ssl-cert"
   - Volume mount
   - Volume "agent-ssl-cert"
4. Check for the presence of "/opt/appdynamics/ssl/custom-ssl.pem" in the Cluster Agent container:
   kubectl exec -it <cluster_agent_pod_name> -n appdynamics -- /bin/sh

I hope this article was helpful.
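As an optional extra check from outside the cluster, you can also confirm that the certificate bundle actually validates the Controller's TLS endpoint. Below is a minimal Python sketch; the controller hostname and port are placeholders you would replace with your own values, and the PEM path is the location referenced in step 4 above.

```python
import datetime
import socket
import ssl

CONTROLLER_HOST = "controller.example.com"  # placeholder - your on-premises Controller host
CONTROLLER_PORT = 443                       # placeholder - your Controller HTTPS port
CA_BUNDLE = "/opt/appdynamics/ssl/custom-ssl.pem"  # path mounted into the Cluster Agent container

# Build a context that only trusts the custom PEM bundle, then attempt a TLS handshake.
context = ssl.create_default_context(cafile=CA_BUNDLE)
with socket.create_connection((CONTROLLER_HOST, CONTROLLER_PORT), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=CONTROLLER_HOST) as tls:
        cert = tls.getpeercert()
        expires = datetime.datetime.utcfromtimestamp(ssl.cert_time_to_seconds(cert["notAfter"]))
        print("Handshake OK - chain validated against custom-ssl.pem")
        print("Subject:", dict(item[0] for item in cert["subject"]))
        print("Expires:", expires)
```

If the handshake fails with a certificate verification error, the PEM content (rather than how it is mounted) is the likely problem.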
Guide to Monitoring URLs with Authentication Using Splunk AppDynamics and Python

Monitoring URLs is an important part of your full-stack monitoring. Splunk AppDynamics lets you monitor URLs with different authentication methods. In this article, we will create a simple URL protected by a username and password. Afterwards, we will monitor it using the AppDynamics Machine Agent.

Create a Simple API with Python (Flask)

Install Flask:
pip install flask

Create the API: Save the following Python code to a file, e.g., basic_auth_api.py:

from flask import Flask, request, jsonify
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
auth = HTTPBasicAuth()

# Dummy users for authentication
users = {
    "user1": "password123",
    "user2": "securepassword",
}

@auth.get_password
def get_pw(username):
    return users.get(username)

@app.route('/api/data', methods=['GET'])
@auth.login_required
def get_data():
    return jsonify({"message": f"Hello, {auth.username()}! Here is your data."})

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Run the API: Start the server by running:
python basic_auth_api.py

Test the API: Use curl to access the API:
curl -u user1:password123 http://127.0.0.1:5000/api/data

You should see a response like this:
{ "message": "Hello, user1! Here is your data." }

Install the Machine Agent

You can install the Machine Agent as recommended here.

Set Up the URL Monitoring Extension

Clone the GitHub repo:
git clone https://github.com/Appdynamics/url-monitoring-extension.git
cd url-monitoring-extension

Download and install Apache Maven configured with Java 8 to build the extension artifact from source. You can check the Java version used by Maven with mvn -v or mvn --version. If your Maven is using some other Java version, please download Java 8 for your platform and set the JAVA_HOME parameter before starting Maven.

Run the following in the url-monitoring-extension directory:
mvn clean install

Go into the target directory and copy UrlMonitor-2.2.1.zip, then unzip the contents inside the <MA-Home>/monitors/ folder:
cd target/
mv UrlMonitor-2.2.1.zip /opt/appdynamics/machine-agent/monitors
unzip UrlMonitor-2.2.1.zip

This will create a UrlMonitor directory inside the monitors folder.

Monitor the URL

Inside the UrlMonitor folder, edit the config.yml file. Under sites, I have added:

sites:
  - name: AppDynamics
    url: http://127.0.0.1:5000/api/data
    username: user1
    password: password123
    authType: BASIC

Change:
metricPrefix: "Custom Metrics|URL Monitor|"

Now, all you need to do is start your Machine Agent again. Afterward, you can see this URL monitor in your AppDynamics Controller.
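Before pointing the extension at your endpoint, it can help to confirm that the URL and credentials behave as expected from the Machine Agent host. The sketch below assumes the Flask API above is running locally and that the requests library is installed (pip install requests); it is only a sanity check, not part of the extension itself.

```python
import requests
from requests.auth import HTTPBasicAuth

URL = "http://127.0.0.1:5000/api/data"

# Authenticated request - this is what the URL Monitor will effectively perform.
resp = requests.get(URL, auth=HTTPBasicAuth("user1", "password123"), timeout=10)
print("With auth   :", resp.status_code, resp.json())

# Unauthenticated request - should be rejected (401) by the basic-auth protected API.
print("Without auth:", requests.get(URL, timeout=10).status_code)
```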
At Splunk Education, we are committed to providing a robust learning experience for all users, regardless of skill level or learning preference. Whether you're just starting your journey with Splunk or sharpening advanced skills, our broad range of educational resources ensures you're prepared for every step.

Our Portfolio
We offer Free eLearning to kickstart your learning, eLearning with Labs for hands-on practice, Instructor-led courses for interactive, expert guidance, and Splunk Certifications to validate your expertise. For quick tips and insights, explore our Splunk YouTube How-Tos and Splunk Lantern, where you'll find up-to-date guidance and best practices that reflect the latest in Splunk's capabilities.

New Courses Available
Every month, we release new courses designed to empower learners with the tools and knowledge they need to stay ahead in the evolving tech landscape. Whether you prefer self-paced eLearning or the structure of live instruction, there's a course to fit your style. This month, we are excited to announce a new instructor-led course, a new eLearning with Labs course, and a new free eLearning course to help you advance your Splunk skills.

These courses provide targeted insights into security operations and observability, essential for anyone looking to enhance their data-driven capabilities. Explore them today to stay ahead in your field! All courses are available through the Splunk Course Catalog, accessible via our banner or directly on our platform.

Expanding Global Learning Access
As part of our commitment to accessibility and inclusion, we continue to translate eLearning courses into multiple languages and add non-English captions. This effort ensures that learners worldwide can grow their Splunk expertise in their preferred language, supporting our vision of an inclusive educational ecosystem.

Each month presents new opportunities to expand your knowledge, boost your career, and enhance your contributions to enterprise resilience. Stay updated with the latest courses and continue your journey toward Splunk mastery – your next big career move could be just a course away.

See you next month!
- Callie Skokos on behalf of the Splunk Education Crew
A Step-by-Step Guide to Running the Standalone On-Premise Controller as a Service in a Linux Environment

When you run a standalone on-premise controller manually, you can follow the steps described in the documentation below:
https://docs.appdynamics.com/appd/onprem/24.x/latest/en/controller-deployment/administer-the-controller/start-or-stop-the-controller

However, there might be situations where you need to run the standalone on-premise controller as a service in a Linux environment. If so, you can follow the steps below.

1. Change the user to root:
   sudo -i
2. Install the library below (optional):
   apt install libxml2-utils -y
   OR
   yum install libxml2 -y
3. Move to the directory below:
   cd /opt/appdynamics/platform/product/controller/controller-ha
4. Set up the controller DB password and validate it:
   ./set_mysql_password_file.sh -p <controller-db-password>
   Output results:
   Checking if db credential is valid...
5. Move to the directory below:
   cd /opt/appdynamics/platform/product/controller/controller-ha/init
6. Run the script below:
   ./install-init.sh -s
   Output results:
   update-rc.d will be used for installing init
   installed /etc/sudoers.d/appdynamics
   installing /etc/init.d/appdcontroller-db
   installing /etc/default/appdcontroller-db
   installing /etc/init.d/appdcontroller
   installing /etc/default/appdcontroller
7. Run the commands below to enable the newly created services:
   systemctl enable appdcontroller
   systemctl enable appdcontroller-db
   systemctl restart appdcontroller
   systemctl restart appdcontroller-db
   systemctl status appdcontroller
   systemctl status appdcontroller-db

Additionally, you might create your own unit file with the start/stop commands to run the standalone on-premise controller as a service in a Linux environment without using our script; a sketch of such a unit file is shown below.
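For reference, here is a minimal sketch of what such a unit file (for example /etc/systemd/system/appd-controller.service) might look like. The ExecStart/ExecStop commands and paths are placeholders only; use the start/stop commands from the documentation linked above for your controller version and installation directory, and run systemctl daemon-reload after creating the file.

```
[Unit]
Description=AppDynamics standalone on-premise Controller (custom unit, example only)
After=network.target

[Service]
Type=forking
User=root
# Placeholders - replace with the documented start/stop commands for your installation
ExecStart=/opt/appdynamics/platform/product/controller/bin/controller.sh start
ExecStop=/opt/appdynamics/platform/product/controller/bin/controller.sh stop
TimeoutStartSec=900
TimeoutStopSec=600

[Install]
WantedBy=multi-user.target
```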
We're excited to announce that AppDynamics is transitioning our Support case handling system to Cisco Support Case Manager (SCM), enhancing your support experience with a standardized approach across all Cisco products. This migration is scheduled to take place on June 14th. As the transition date approaches, you will notice banners appearing in both the AppDynamics Admin Portal and on our help website (www.appdynamics.com/support). These banner notifications will keep you informed about the change and notify you once the transition has been completed.  Access 1-Year Historical AppDynamics Support Case Data On October 3, 2024, 1-year historical AppDynamics support case data will be accessible through Cisco Support Case Manager (SCM). This update will allow users to view all closed cases from Zendesk, dating between June 14, 2023, and June 14, 2024, directly in SCM. Please be aware that access to certain cases may be restricted to the individual who originally opened the case. We apologize for any inconvenience this may cause and want to assure you that Cisco is actively working to address these limitations. Temporary Work-Around  On June 14, AppDynamics transitioned to Cisco Support Case Manager (SCM) for case creation and management. Since the migration, we have become aware that some customers are experiencing difficulties accessing SCM to create/view cases. We sincerely apologize for any inconvenience this may have caused and want to assure you that Cisco is working diligently to resolve these issues as quickly as possible.   As a temporary workaround, beginning Saturday, August 17th, users who have encountered errors when attempting to open cases will be able to bypass these errors and proceed with case creation. Please note that for cases created using this workaround, only the user who initiates the case will have access to view it in SCM. If you need to share the visibility of these cases with others in your organization, please ensure that they are included in the CC list when creating the case. Please note that visibility is restricted to email communications only for data privacy and security.  If you continue to experience issues with SCM, or if you have any other concerns, please do not hesitate to contact us at appd-support@cisco.com for further assistance.  What does this mean for you?  AppDynamics will notify you once your profile and support cases have been successfully migrated, allowing you to seamlessly access your support cases in SCM. Until the migration is complete, you will continue to have access to your cases through the current AppDynamics case-handling tool. Access to the new SCM platform requires that your profile is migrated to the "Cisco User Identity," a process that will be automatically handled for you. For more information on the "Cisco User Identity" changes, please refer to the communication sent via email and published on the AppDynamics Community located here.   Key points to remember:  You will still be able to open cases from the portal and website, although the interface will undergo a visual update You will need a Cisco.com account to access SCM  Your open cases and up to 1-year of closed cases will be seamlessly migrated to the new system Additional Resources How do I open a case with AppDynamics Support? How do I manage my support cases?
Table of contents Search for a case Updating a case Upload an attachment to a case How can I request to close a case? Is there an easy way to manage my cases? Do you have a bot or assistant to help manage cases?  Case Satisfaction Additional Resources Search for a case  To view an open or migrated case in SCM, navigate to the “Create and Manage Support cases”- view. There you type in the Case ID number (either new or old case ID) in the Search-field  and press enter (Figure 1) Figure 1 Updating a case   Go to the SCM start page, where under “Cases”, you pick “My Cases” (Figure 2) and select the case that needs updating. Here you edit your case and make sure to save the changes before exiting.  Figure 2 Upload an attachment to a case    If you need to upload and attach a file to a case, you can do so when opening a “new case”, or by going to an “existing case”. When opening a new case, you’re prompted to upload an attachment when the case has been submitted. For an existing case, navigate to the “My Cases” view as seen in Figure 6. In the right corner press the “Add File” (Figure 3) button, upload the file and save.  Figure 3 How can I request to close a case? You can close a case yourself in two different ways:  1. Manually Go to 'CASE SUMMARY' Edit Describe how the case was resolved (optional) Case status updates to "Close Pending / Customer Requested Closure" 2. With the Support Assistant From the Support Assistant type 'close the case (insert case number)' The Support Team will close the case How to reopen a closed case and validity You can reopen a closed case in two different ways: 1. Manually From Support Case Manager Check closed cases Apply filters Select the case Click reopen on the top right corner 2. With the Support Assistant  From the Support Assistant type 'reopen the case (insert case number)' A case can be reopened only for two weeks after the close date If a case is outside the two-week window, it is recommended to open a new case Case Satisfaction  After migrating to Cisco SCM, at case closure, you will be provided an industry standard 10-point scale and asked to choose a value to reflect satisfaction on the support of the case. (Figure 4) Figure 4 Is there an easy way to manage my cases? Do you have a bot or assistant to help manage cases?  Yes, we have a Support Assistant bot! In the bot's own words: Hello! I can help you get case, bug, RMA details and connect with Cisco TAC. Simply enter the case number as shown in the examples below and get the latest case summary. 612345678 - Cisco TAC case 00123456 - Duo support case S CS 0001234 - ThousandEyes support case 1234567 - Umbrella support case You can converse with me in English language or use commands. Currently, I can't open new cases or answer technical questions. 
• my cases
• what is the status of (case number or bug number or rma number or bems number)

You can ask me to perform the following tasks:
• connect with engineer (case number)
• create a virtual space (case number)
• create an internal space
• request an update for (case number)
• update the case (case number)
• add participant (email address)
• raise severity (case number)
• requeue (case number)
• escalate (case number)
• close the case (case number)
• reopen the case (case number)
• update case summary (case number)
• show tac dm schedule
• show cap dm schedule

You can mark a case as a favorite and get automatic notifications when the case summary (Problem Description, Current Status, and Action Plan) gets updated:
• favorite (case number)
• list favorites
• status favorites

You can ask me to connect to support teams:
• connect to duo

I can help you manage cases that are opened from Cisco.com Support Case Manager. Currently, I can't open new cases or answer technical questions. Type "/list commands" to get a list of command requests and find details of supported features using the documentation and demo videos.

Additional Resources
How do I open a case with AppDynamics Support?
AppDynamics Support migration to Cisco CSM
Hi All, As per the exam blueprint for "SPLK-3001: Splunk Enterprise Security Certified Admin" it says that there is a prerequisite of "Splunk Core Certified Power User". However, while booking the exam, I am able to see the booking option directly for SPLK-3001. Can I safely book the SPLK-3001 exam then? Anything I am missing here?
Remember Splunk Community member, Pedro Borges? If you tuned into Episode 2 of our Smartness interview series, you know just how inspiring his journey with Splunk has been! Now, we’re excited to share a new companion video that dives even deeper into his story.   Pedro shares how Splunk Education helped him transform his career and optimize Splunk for his organization. From leveraging top-tier training to tapping into the incredible Splunk Community, Pedro’s story is proof that the right resources can make a world of difference. Ready to follow Pedro’s lead? Take your Splunk skills to the next level by exploring the tools that made a difference for Pedro: Splunk Lantern: Your guide to real-world use cases and solutions. Splunk Docs: The ultimate knowledge base for everything Splunk. Splunk Education: Courses to help you master Splunk. Splunk Community: Join discussions and connect with Splunk enthusiasts. Splunk Certifications: Showcase your expertise and grow your career. We hope Pedro's story helps to inspire your next steps with Splunk Education and nurturing your growth mindset! -Callie Skokos on behalf of the Splunk Education Crew
This is a new version of the licensing model that consumes licenses based on vCPUs. It is available for both On-Premises and SaaS. Utilization is 1 license unit per CPU core. It does not matter how many agents are running on a server, how many applications/containers these agents are monitoring, how much data these agents are collecting/reporting, or how many transactions these agents are creating. The licenses will be consumed based on the number of CPUs available on the server/host.

The Basics

What are the minimum versions required for the controller and APM/database/server agents to properly count vCPUs?
These are the required AppD agent versions needed to make the customer fully IBL compliant:
- Controller: v21.2+ (for the database agent to default to 4 vCPU instead of 12 vCPU, v23.8+ (cSaaS) / v23.7+ (on-prem))
- Machine Agent: 20.12+
- .NET Agent: 20.12+
- DB Agent: minimum 21.2.0 (recommended latest 21.4.0); for MySQL/PostgreSQL RDS database IBL support, the minimum is 22.6.0
For accurate license counting, the machine agent needs to be deployed, or hardware monitoring needs to be enabled in the case of database monitoring. The machine agent version should be greater than 20.12. The machine agent will calculate the number of CPUs available on the monitored server/host.

How do I migrate from Agent Based Licensing to Infrastructure Based Licensing?
Migration from Agent Based Licensing to Infrastructure Based Licensing is handled by licensing-help@appdynamics.com.
On conversion, all license rules are maxed out to the account value by default, keeping app/server scope restrictions as is.
For example: LicenseRuleA to Z with 2 APM units each and accountLevelApm=100 units will, on conversion, be set to LicenseRuleA-Z with 400 units "each" and accountLevelHBL=400 units. 400 is just a random number here; the final conversion is made by the sales team.

What is the definition of vCPU and how do I verify if it's correct?
In the case of a physical machine, the number of logical cores or processors is considered to be the vCPU count.
For planning purposes, you can use the following table to find the CPU core count in case the Machine Agent is not available/running:
- Bare metal servers: Logical CPU Cores = # of processors (Windows: Task Manager, PowerShell; Linux: see commands below)
- Linux Virtual Machines: Logical CPU Cores (accounting for hyperthreading)
- Cloud Providers: Logical CPU Cores = vCPU (AWS: EC2 AWS Instances; Azure: Azure VMs; GCP: Standard Machine Types)

To check the logical CPU count directly on a host:
- Windows: Task Manager, System Information, or wmic
- Linux: nproc or lscpu
- Mac OS: sysctl -a | grep machdep.cpu.*_count OR sysctl -n hw.logicalcpu

What are packages?
Each agent that consumes a license will be part of a single package. Packages will be provisioned at the account level and distributed within license rules (limited packages are supported by license rules). Packages fall under ENTERPRISE, PREMIUM, and INFRASTRUCTURE.

Package: SAP Enterprise (SAP_ENTERPRISE)
What it offers: Monitor all your SAP servers, network, and SAP apps and get Business Insights on them using AppDynamics agents.
Agent list (as seen on the Connected Agents page): APM Any Language: agent-type=sap-agent; Network Visibility: agent-type=netviz; plus everything under AppDynamics Infrastructure Monitoring

Package: Enterprise (ENTERPRISE)
What it offers: Monitor all your servers, network, databases, and apps and get Business Insights on them using AppDynamics agents.
Agent list: Transaction Analytics: agent-type=transaction-analytics; plus everything under AppDynamics Premium

Package: Premium (PREMIUM)
What it offers: Monitor all your servers, network, databases, and apps using AppDynamics agents.
Agent list: APM Any Language: agent-type=apm, java, dot-net, native-sdk, nodejs, php, python, golang-sdk, wmb-agent, native-web-server; Network Visibility: agent-type=netviz; Database Visibility: agent-type=db_agent, db_collector; plus everything under AppDynamics Infrastructure Monitoring

Package: AppDynamics Infrastructure Monitoring (INFRA)
What it offers: Monitor all your servers using AppDynamics agents.
Agent list: Server Visibility: agent-type=sim-machine-agent; Machine Agent: agent-type=machine-agent; Cluster Agent: agent-type=cluster-agent; .NET Machine Agent: agent-type=dot-net-machine-agent

For other packages, please check https://docs.appdynamics.com/appd/23.x/latest/en/appdynamics-licensing/license-entitlements-and-restrictions

Do I need individual packages for my account?
Only Transaction Analytics needs ENTERPRISE as a mandate. If you do not have the ENTERPRISE package, the Transaction Analytics agent cannot report even if licenses are available at the PREMIUM package.
All INFRA agents can report against PREMIUM or ENTERPRISE packages. All PREMIUM agents can report against the ENTERPRISE package.

What happens when my package-level consumption is full? (Redirection)
Valid only if account-level limits are not maxed out:
- If agents report against the INFRA package and the INFRA license pool is full but PREMIUM is free, new license consumption will be redirected to PREMIUM.
- If agents report against the PREMIUM package and the PREMIUM license pool is full but ENTERPRISE is free, new license consumption will be redirected to ENTERPRISE.
- For Transaction Analytics agents, if ENTERPRISE is full, the controller cannot switch back to unconsumed PREMIUM or unconsumed INFRA.
This swapping takes place at the license rule level.
Valid only if account-level limits are maxed out:
- The above redirection will not take place if account-level limits are maxed out, even if a few license rule units are unconsumed.

Can I force agents to report against a package?
Yes, you can manage restrictions via license rules -> Server Scope / Application Scope. You will have to provide only ENTERPRISE, or only PREMIUM, within a single license rule. Otherwise, default redirection is respected. One agent can report against one package only.

Which packages are supported under license rules?
As of the 23.12.x controller version, only Premium, Enterprise, Enterprise SAP, and Infrastructure Monitoring packages are supported under license rules. The consumption and redirection work the same as account-level switching.

What happens if I do not have a machine agent to report vCPUs, or I do not have hardware monitoring enabled?
In case the machine agent is not running or has been deleted from the server, or the agents are unable to find the number of CPUs, the license units will be calculated based on the fallback mechanism:
- APM Agent: 4 CPU ~ 4 license units by default
- DB Collector: 4 or 12 CPU ~ 4 or 12 license units by default

Why is my vCPU reported incorrectly?
An inaccurate vCPU count does not mean AppD is consuming the wrong licenses. It means users are not providing AppDynamics with ways to calculate licenses properly.
Most common reasons:
1. Machine Agent not installed
   a. Any agent goes into fallback mode if there is no machine agent.
   b. A database agent in fallback mode consumes the default 4 or 12 vCPUs, even if the host has 1 vCPU.
   c. An APM agent in fallback mode consumes the default 4 vCPUs, even if the host has 1 vCPU.
2. Managed database (AWS, GCP, Azure) + machine agent mismatch
   a. A machine agent cannot be installed on managed/cloud database services.
   b. Hardware metrics should be enabled, which reports the vCPU count. If it is not enabled, the default 4 or 12 licenses are consumed as fallback.
3. UniqueHostId mismatch
   a. There is a mismatch in host mapping, so each uniqueHostId will be considered a different host even if they reside on the same physical machine.
   b. Consider an example: 2 Java agents + 1 machine agent on the same machine. With the mismatch, the 3 agents will show up as individual rows in the host table and each would consume a 4 vCPU license = 4*3 = 12 vCPU toward the total license consumption, but the expectation is a total of 4, not 12.

Can the two licensing models (agent based and host based) co-exist on the same license?
No. A given license can only be on one of the two models.

If Infrastructure Based Licensing (IBL) is enabled for a customer, can it be reverted to the legacy Agent Based Licensing (ABL) model later?
No, we cannot revert.

Can you share a couple of scenarios?
- A 4 vCPU host is running 1 SIM agent, 3 app agents, and 1 NetViz agent ~ final consumption is a total 4 vCPU license.
- A 4 vCPU host is running 1 machine agent, 3 app agents, and 1 NetViz agent ~ final consumption is a total 4 vCPU license.
- If the machine agent is not updated/not reporting after the initial vCPUs were reported: there is a timeout after which APM agents default to the fallback mechanism.
- If the machine adds vCPUs (vertical scale) while keeping the same hostname: on scaling up or down, accurate vCPUs are reported to the controller within 10 minutes. A temporary spike/dip in license usage is expected if an agent restarts within 5 minutes.

Database agent scenarios:
- If there are DB agents, or DB and machine agents, on a host (identified by unique host ID), then the license units used ("vCPU") will be capped at 4 (4 or less, if the MA reports fewer vCPUs, for example).
- If there is any other agent type than DB/MA (e.g. an app agent), the capping does not happen and license units used are calculated as usual.
- In case of fallback, it is 4 LUs for all DBs on the host + 4 LUs per any other agent reporting.
- In the non-fallback case (licensing knows the vCPUs), the reported vCPU count is used (if both DB and MA report vCPUs, licensing trusts the MA more).
Examples: 2 vCPU DB + 3 vCPU DB = 3; similarly, 2 vCPU DB + 8 vCPU machine agent = 4 max; 5 vCPU DB + MA = 4 max; 100 vCPU DB + MA = 4 max; 2 vCPU DB only = 4; 100 vCPU DB = 4.
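If you just want to sanity-check the logical CPU count referenced in the table above, a short script is enough. The Python sketch below is only a planning aid; the Machine Agent (or enabled hardware monitoring) is what actually reports vCPUs to the Controller for licensing.

```python
import os
import platform

# os.cpu_count() returns the number of logical CPUs (i.e. it accounts for hyperthreading),
# which is the value the vCPU-based licensing model is concerned with.
logical_cpus = os.cpu_count()

print(f"Host: {platform.node()} ({platform.system()})")
print(f"Logical CPUs (vCPU count for planning): {logical_cpus}")
```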
Step-by-Step Guide to Migrating AppDynamics Analytics Data to Harmonized Schemas Reason to migrate: This migration pertains exclusively to the analytics data captured and stored on the AppDynamics controller, which has a maximum capacity of 20 schemas. To overcome this limitation, we have reengineered the approach to schema utilization, enabling additional capacity for customers to define their own schemas. Starting with agent version 24.11, this updated approach (called Harmonized) is the sole available option for new installations. However, for customers who are updating to this version, this document will walk them through the process as they should plan to migrate ASAP. Please note that any metrics migrated will begin reporting from the new location, and all historical analytics data associated with those metrics will be lost. Clean out old data Log into each SAP system (as these schemas are shared) Disable analytics via t-code /DVD/APPD_CUST, then enter Edit Mode Uncheck the analytics events API box Save changes Click the status button or run t-code /DVD/APPD_STATUS Click event service Then click Custom Analytics schema As stated before, the max number of schemas a controller can have is 20. Here is a list of the ones that are used going forward related to SAP: sap_log_data sap_workload_data sap_analytics_data_1 sap_idoc_data sap_biq_documents sap_hana_data (if a HANA DB is used) sap_bw_data (if it is a BW system) sap_pi_data (if it is a PI system) sap_custom_data_1 (custom) For the complete list use this Link, which also shows how they will be mapped Given that standard schemas will be created, it is important to ensure sufficient capacity for their inclusion. This decision will be evaluated on a case-by-case basis, but there are essentially two strategies to consider. Option 1:  Start fresh by removing all existing schemas. The necessary schemas will be recreated upon restart. Please note that this approach will result in the loss of all historical data stored on the controller, not just the analytics data that is relocated. In t-code /DVD/APPD_STATUS Click the Debug Mode Check both boxes and select the desired time duration Delete the ones you want or all of them for a fresh start Once all changes are made click the Debug Mode button again to exit debug mode (or wait for the duration to expire) Option 2: Start by just removing the schemas that are marked with the status “Not Used”. Do this by clicking the trash icon on the same row Then confirm the deletion (by clicking 'No' button) Flipping the switch: After deleting all unused schemas and verifying you have enough room go back to the t-code /DVD/APPD_CUST Enter change mode In the Analytics events API settings area, set Version to "Harmonized" and check the box for “Analytics events API is active” Verify Schemas: Once all the changes are made and running for a few hours, your end result could look like this. You can see which systems are using the different schemas by clicking the corresponding Used button Adjusting Dashboards to new schemas Any dashboard that has analytical data may have been impacted. You will need to go into each data field and modify the query like the following Replace the old (legacy) schema name with the new (harmonized) schema name in the FROM part of the query string. Add an extra WHERE condition AND sapSchema = <old schema name>. 
Example query change:
Legacy query:
SELECT * FROM idocs_details WHERE SID = "ED2"
Migrated query:
SELECT * FROM sap_idoc_data WHERE SID = "ED2" AND sapSchema = "idocs_details"

Additional Resources
Troubleshooting
We’re excited to share an update to our instructor-led training program that enhances the learning experience for Splunk learners. Starting January 1, 2025, the completion criteria for many of our Instructor-led courses will shift from lab grading to a focus on participation and knowledge comprehension. This change simplifies the learning process, aligns with industry best practices, and fosters a more engaging environment for learners. For those new to Splunk’s instructor-led training, this update will feel seamless, as it reflects the standard structure of our courses moving forward. Updated completion criteria Class Attendance: Learners must attend all scheduled class sessions. Knowledge Check Quiz: A short, open-note quiz will assess understanding. Learners must achieve an 80% passing score and will have up to 10 attempts. These quizzes are designed to support learning and are not certification exams. Lab Engagement (Optional): Labs remain an integral part of the training experience but are no longer mandatory for course credit. The rationale behind the new completion criteria  By eliminating lab grading, we aim to: Simplify the training process for learners and instructors. Minimize administrative hurdles. Focus on active participation and comprehension during sessions. If you have initial questions, we encourage you to review the FAQ for more details. Thank you for being part of the Splunk learning journey! -Callie Skokos on behalf of the Splunk Education Crew
Contents:
What is the App Agent vs Coordinator?
App Agent status vs Machine Agent status
Why is my app agent status 0% on IIS applications?
What are the options for having 100% app agent status?
What if I cannot modify IIS settings?

What is the App Agent vs Coordinator?
The AppDynamics.Agent.Coordinator handles the orchestration of when to inject the app agent's DLLs into an application, as well as collecting machine metrics (CPU, memory, performance counters, etc.). The Coordinator does not monitor any application on the server, as this is the responsibility of the app agent.

In an environment where the profiler environment variables are defined, any .NET runtime at startup will check whether the application should be profiled and which profiler to inject. As part of the installation process, the MSI package will create the necessary profiler environment variables.
https://learn.microsoft.com/en-us/dotnet/framework/unmanaged-api/profiling/setting-up-a-profiling-environment

Profiler environment variables:
COR_PROFILER - Full framework profiler to be injected into the application
COR_ENABLE_PROFILING - Boolean value on whether or not full framework profiling is enabled
COR_PROFILER_PATH - Path to where the full framework profiler resides
CORECLR_PROFILER - .NET Core profiler to be injected into the application
CORECLR_ENABLE_PROFILING - Boolean value on whether or not .NET Core profiling is enabled
CORECLR_PROFILER_PATH - Path to where the .NET Core profiler resides
(A quick way to confirm these variables are visible to a process is sketched at the end of this article.)

If the .NET application is full framework, it will write a message to the Event Viewer's Application logs.
Sample of a successful instrumentation:
.NET Runtime version 4.0.30319.0 - The profiler was loaded successfully. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 110060. Message ID: [0x2507].
When the application does not match an application to be monitored in the config.xml of the Coordinator, it will not inject the agent DLLs:
.NET Runtime version 4.0.30319.0 - The profiler has requested that the CLR instance not load the profiler into this process. Profiler CLSID: 'AppDynamics.AgentProfiler'. Process ID (decimal): 111500. Message ID: [0x2516].
Both messages are at the Information level. Neither message is a cause for alarm; they are informational only.

App Agent status vs Machine Agent status
The AppDynamics.Agent.Coordinator reports to the controller, and one of the metrics it reports is [Availability]. This metric represents the Machine Agent status on the Controller's Tiers & Nodes page.
The App Agent status reflects the app agent that is injected into your application. If your application is not running, then neither is the app agent. This leads us to the next point regarding IIS applications.

Why is my app agent status 0% on IIS applications?
The app agent is injected into your application and shares the application's lifecycle. For IIS, this means the app agent's DLLs are injected into the w3wp process at .NET startup. This can only happen at the startup of the process.
However, app pools are managed by IIS, and the default settings do the following:
- App pools are not started by default. Traffic must be sent to the application first.
- App pools that have not received any traffic for 20 minutes will be terminated.
As mentioned earlier, the app agent shares the application's lifecycle, so you can see how these default settings might affect the app agent status that is displayed on the controller. Two possible scenarios with the default IIS settings can cause the app agent status to show 0%.
- The app pool was killed by IIS because there was no activity on the application. On the controller, you will see a downward trend in the app agent status during periods of idle activity.
- The server was restarted and no traffic is currently being sent to the application. Therefore, no w3wp process has been started, so the controller shows 0% app agent status.

What are the options for having 100% app agent status?
Three settings must be changed to ensure that the app pool is running and remains running regardless of traffic or server restarts:
- Idle Timeout: https://learn.microsoft.com/en-us/previous-versions/iis/6.0-sdk/ms525537(v=vs.90)
- Start Mode: https://learn.microsoft.com/en-us/iis/configuration/system.applicationhost/applicationpools/applicationpooldefaults/#:~:text=is%201000.-,startMode,-Optional%20enum%20value
- IIS Application Initialization (requires IIS 8.0): https://learn.microsoft.com/en-us/iis/get-started/whats-new-in-iis-8/iis-80-application-initialization
The Idle Timeout property is responsible for terminating an app pool that has not received traffic after some time (the default is 20 minutes). Setting this property to 0 will prevent IIS from terminating the app pool regardless of how long the app pool is idle.
Set Start Mode to AlwaysRunning instead of the default value of OnDemand.
IIS Application Initialization requires IIS 8.0. When the server starts, IIS will invoke a fake request to the specified page to start the app pool. Follow the instructions listed in the link above for the detailed steps.

What if I cannot modify IIS settings?
You can modify the config.xml to monitor the performance counter "Current Application Pool State", which is part of the APP_POOL_WAS category, for your particular app pool, and create a health rule that triggers in the event that the app pool is in a stopped state.
"Current Application Pool State" possible values: Starting, Started, Stopping, Stopped, Unknown
However, you need to be aware of the following:
- An app pool can be assigned to multiple sites and applications. There is no way to get a granular scope to a single application unless each IIS application/site uses a unique app pool.
- There are really only three observable states for "Current Application Pool State" - Started, Stopped, and Unknown. The in-between states are too quick to capture and report on.
- Note the difference between an app pool and a worker process. Having an app pool in a started state does not mean your application and, by extension, the agent is running.
- In addition, an app pool in the started state does not mean your application is able to start. For example, .NET runtime errors at startup can prevent the application from starting even though the app pool is started.
I strongly recommend modifying the IIS settings to get a true app agent status and then relying on the "Current Application Pool State" performance counter, but this option is available if your circumstances prevent modification of the IIS settings and the limitations above are not a concern.
With the caveats out of the way, let's discuss how to make this change.
Config.xml:
<machine-agent>
  <perf-counters>
    <perf-counter cat="APP_POOL_WAS" name="Current Application Pool State" instance="MY_APP_POOL_NAME" />
  </perf-counters>
</machine-agent>
Then create a new health rule to trigger if the app pool state is not in a Started state.
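Finally, as referenced in the profiler environment variables section above, it can be useful to confirm that those variables are actually visible in the context your application runs under. The sketch below is just one hypothetical way to do that (any scripting language works); run it under the same user/session as the .NET process you expect to be instrumented.

```python
import os

# The six profiler variables listed earlier in this article.
PROFILER_VARS = [
    "COR_PROFILER", "COR_ENABLE_PROFILING", "COR_PROFILER_PATH",
    "CORECLR_PROFILER", "CORECLR_ENABLE_PROFILING", "CORECLR_PROFILER_PATH",
]

for name in PROFILER_VARS:
    value = os.environ.get(name)
    print(f"{name} = {value if value is not None else '<not set>'}")
```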
We’ve been buzzing with excitement about the recent validation of Splunk Education! The 2024 Splunk Career Impact Report reveals how mastering Splunk gives users and customers a serious competitive advantage. Have you checked it out yet?

While a picture paints a thousand words, an infographic backs it up with data and insights. No time to dive into the full report? No problem! Explore the key stats and survey results in the 2024 Career Impact Survey Infographic. (Get a quick preview below!)

All of us in Splunk Education are dedicated to empowering our learners and are always seeking new ways to support your growth and success. Congratulations to all of you who are on your career-boosting journey with Splunk. Cheers to a new year filled with opportunities!

-- Callie Skokos, on behalf of the entire Splunk Education Crew
I would like to know the duration of the voucher for taking the Splunk Power User exam, as I am unable to find the expiration date anywhere. Thank you, kind regards.  
December 2024 Edition Hayyy Splunk Education Enthusiasts and the Eternally Curious!  We’re back with another edition of indexEducation. Oh, but this month we’ve got a fun holiday edition. It’s our way of wrapping up the year and sharing our thanks to you for being the best community of users and learners on the planet. Until next year, we leave you with this Splunky rendition of an old holiday classic. The 12 Days of Splunk-mas   On the first day of Splunk-mas, my true love gave to me   ~ A Catalog of Splunk Classes for Free ~ Learn anywhere, anytime – for free   **************************   On the second day of Splunk-mas, my true love gave to me  ~ Two Ways to Learn it ~ Instructor-led and self-paced classes **************************   On the third day of Splunk-mas, my true love gave to me ~ Three Class Champions ~  Get to know our course instructors **************************   On the fourth day of Splunk-mas, my true love gave to me ~ Four Smartness Stories ~ Read interviews with inspiring Splunk users **************************   On the fifth day of Splunk-mas, my true love gave to me ~ Five Golden Badges ~ Validate your expertise with Splunk Certification badges **************************   On the sixth day of Splunk-mas, my true love gave to me ~ Six Ways It’s Proven~ Discover how proficiency in Splunk has career benefits **************************   On the seventh day of Splunk-mas, my true love gave to me ~ Seven Experts Sharing ~   Discover use cases, product tips, and expert guidance on Splunk Lantern **************************   On the eighth day of Splunk-mas, my true love gave to me: ~ Eight Labs a-Launching ~   Enroll in instructor-led and self-paced courses with hands-on labs **************************   On the ninth day of Splunk-mas, my true love gave to me ~ Nine Sophomores SOC’ing ~      Splunk Academic Alliance is preparing the next generation through university training **************************   On the tenth day of Splunk-mas, my true love gave to me ~ Ten ALPs a Teaching ~  Authorized Learning Partners (ALPs) across the globe provide localized learning **************************   On the eleventh day of Splunk-mas, my true love gave to me ~  Eleven Courses Releasing ~ Enroll in a new course today  **************************   On the twelfth day of Splunk-mas, my true love gave to me ~ Twelve Hands-a-Keying ~ Attend Splunk .conf25 to get hands-on-keyboard learning **************************   Thanks for sharing a few minutes of your day with us and this special holiday edition of the indexEducation newsletter. See you next year!   Answer to Index This: A Splunky rendition of a traditional holiday classic.
We love our Splunk Community and want you to feel inspired by all your hard work! Eric Fusilero, our VP of Global Education, just dropped a great blog post that showcases the power of habits, learning, and community. Drawing from personal experiences – cue swimming progress – Eric connects the dots between building strong habits and achieving career success. He shines a spotlight on the 2024 Splunk Career Impact Report, which is packed with insights from nearly 500 Splunk users across the globe. TLDR: Splunk learners who invest in certifications and skill-building are absolutely thriving. What’s in the Report? Career Wins Galore! The 2024 Splunk Career Impact Report highlights how our community is crushing it—whether it’s earning 14% higher pay on average or snagging double the promotions compared to last year. Eric’s blog breaks it down, showing how Splunk Certifications are more than just badges—they’re game-changers. Certified users, especially early in their careers, are seeing massive salary bumps, with younger professionals earning up to 52% more than their non-certified peers.   Why You Should Read Eric’s Blog Right Now This isn’t just about stats; it’s about celebrating you. Eric highlights how the Splunk community’s feedback helped shape this report, and he shares why continuous learning is the secret sauce to staying ahead in tech. From hands-on labs to our buzzing online forums, Splunk Education offers something for everyone looking to level up. So, what are you waiting for? Dive into Eric’s blog and see how building great habits with Splunk can take your career to the next level! Read Eric’s blog here Check out the full 2024 Splunk Career Impact Report Happy learning! -Callie Skokos on behalf of the entire Splunk Education Crew
What does this error mean?
We usually observe the log message below in the application startup logs when the agent is unable to connect to the controller to retrieve the nodeName (in the case of using reuse.nodeName).

Started AppDynamics Java Agent Successfully. [Thread-0] Tue Apr 02 09:46:04 UTC 2019[INFO]: JavaAgent - Started AppDynamics Java Agent Successfully.
2019-04-02 09:46:09,545 ERROR Recursive call to appender Buffer
2019-04-02 09:46:09,547 ERROR Recursive call to appender Buffer

Next steps:
Could you please check if any logs are generated under the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and share them if available?
If there are no logs, please add the configuration line below under the instrumentationRules applied to the problematic pod:

customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test

If you are using Cluster Agent version >= 23.11.0, to force re-instrumentation you need to use an additional parameter in the default auto-instrumentation properties: enableForceReInstrumentation: true

apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  # cluster agent properties
  # ...
  # required to enable auto-instrumentation
  instrumentationMethod: Env
  # default auto-instrumentation properties
  # may be overridden in an instrumentationRule
  containerAppCorrelationMethod: proxy
  nsToInstrumentRegex: default
  defaultAppName: ""
  enableForceReInstrumentation: true # ADDED
  # ...
  # one or more instrumentationRules
  instrumentationRules:
    - namespaceRegex: default
      customAgentConfig: -Dappdynamics.agent.reuse.nodeName=false -Dappdynamics.agent.nodeName=test # ADDED
      imageInfo:
        image: "docker.io/appdynamics/java-agent:24.8.1"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always

Afterward, please apply the changes and wait for the cluster agent to implement the new instrumentation. Then, collect the agent logs from the /opt/appdynamics-java/ver.xxx.xx/logs/ directory and attach them to the ticket.

How do you collect logs from a Kubernetes pod?
1. Enter the container and pack the agent logs into a tar file:
   kubectl exec -it <pod_name> -- bash
   cd /opt/appdynamics-java/ver24.x.x.x/logs/
   tar -cvf /java-agent-logs.tar test
2. Copy the created tar file:
   kubectl cp <some-namespace>/<some-pod>:/java-agent-logs.tar ./java-agent-logs.tar

I hope this article was helpful.
Łukasz Kociuba
A Step-by-Step Guide to Setting Up and Monitoring Redis with AppDynamics on Ubuntu EC2

Monitoring your Redis instance is essential for ensuring optimal performance and identifying potential bottlenecks in real time. In this guide, we'll walk through the process of setting up Redis on an Ubuntu EC2 instance and configuring the Splunk AppDynamics Redis Monitoring Extension to capture key metrics.

Step 1: Setting up Redis on Ubuntu
Prerequisites:
- An AWS account with an EC2 instance running Ubuntu.
- SSH access to your EC2 instance.
Installing Redis:
Update package lists and install Redis:
sudo apt-get update
sudo apt-get install redis-server
Verify the installation:
redis-server --version
Ensure Redis is running:
sudo systemctl status redis

Step 2: Installing the AppDynamics Machine Agent
Download the Machine Agent: Visit AppDynamics and download the Machine Agent for your environment.
Install the Machine Agent: Follow the installation steps provided in the AppDynamics Machine Agent documentation: https://docs.appdynamics.com/appd/24.x/24.11/en/infrastructure-visibility/machine-agent/install-the-machine-agent
Verify the installation: Start the Machine Agent and confirm it connects to your AppDynamics Controller.

Step 3: Configuring the AppDynamics Redis Monitoring Extension
Clone the Redis Monitoring Extension repository:
git clone https://github.com/Appdynamics/redis-monitoring-extension.git
cd redis-monitoring-extension
Build the extension:
sudo apt-get install openjdk-8-jdk maven
mvn clean install
Locate the .zip file in the target folder and extract it:
unzip target/RedisMonitor-*.zip -d <MachineAgent_Dir>/monitors/
Edit the configuration file: Navigate to the extracted folder and edit config.yml:
metricPrefix: "Custom Metrics|Redis"
#Add your list of Redis servers here.
servers:
  - name: "localhost"
    host: "localhost"
    port: "6379"
    password: ""
    #encryptedPassword: ""
    useSSL: false
Restart the Machine Agent:
<MachineAgent_Dir>/bin/machine-agent

Step 4: Verifying Metrics in AppDynamics
1. Log in to your AppDynamics Controller.
2. Navigate to the Metric Browser.
3. Look for metrics under the path: Custom Metrics|Redis
4. Verify that metrics like used_memory, connected_clients, and keyspace_hits are visible.

Conclusion
By combining the power of Redis with the advanced monitoring capabilities of AppDynamics, you can ensure your application remains scalable and responsive under varying workloads. Whether you're troubleshooting an issue or optimizing performance, this setup gives you full visibility into your Redis instance. If you found this guide helpful, please share and connect with me for more DevOps insights!
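As an optional sanity check before Step 4, you can confirm from the Machine Agent host that Redis is reachable and exposes the same INFO fields the extension reports. The sketch below assumes the default local Redis from Step 1 and the redis-py client (pip install redis).

```python
import redis

# Connect to the local Redis instance configured in config.yml above.
r = redis.Redis(host="localhost", port=6379)
print("PING ->", r.ping())

info = r.info()
# A few of the fields that should later appear under Custom Metrics|Redis
for field in ("used_memory", "connected_clients", "keyspace_hits"):
    print(field, "=", info.get(field))
```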
Comprehensive Guide to RabbitMQ Setup, Integration with Python, and Monitoring with AppDynamics Introduction RabbitMQ is a powerful open-source message broker that supports a variety of messaging protocols, including AMQP. It allows developers to build robust, scalable, and asynchronous messaging systems. However, to ensure optimal performance, monitoring RabbitMQ metrics is crucial. This tutorial walks you through setting up RabbitMQ, integrating it with a Python application, and monitoring its metrics using AppDynamics. Step 1: Setting Up RabbitMQ 1.1 Install RabbitMQ via Docker To quickly get RabbitMQ up and running, use the official RabbitMQ Docker image with the management plugin enabled. Run the following command to start RabbitMQ: docker run -d --hostname my-rabbit --name rabbitmq \ -e RABBITMQ_DEFAULT_USER=guest \ -e RABBITMQ_DEFAULT_PASS=guest \ -p 5672:5672 -p 15672:15672 \ rabbitmq:management Management Console: Accessible at http://localhost:15672 . Default Credentials: Username: guest Password: guest 1.2 Verify the Setup Once the container is running, verify the RabbitMQ server by accessing the Management Console in your browser. Alternatively, test the API endpoint: curl -u guest:guest http://localhost:15672/api/overview This should return RabbitMQ metrics in JSON format. Step 2: Writing a Simple RabbitMQ Producer and Consumer in Python 2.1 Install Required Library Install the pika library for Python, which is used to interact with RabbitMQ: pip install pika 2.2 Create the Producer Script ( send.py ) This script connects to RabbitMQ, declares a queue, and sends a message. import pika # Connect to RabbitMQ connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) channel = connection.channel() # Declare a queue channel.queue_declare(queue='hello') # Publish a message channel.basic_publish(exchange='', routing_key='hello', body='Hello RabbitMQ!') print(" [x] Sent 'Hello RabbitMQ!'") connection.close() 2.3 Create the Consumer Script ( receive.py ) This script connects to RabbitMQ, consumes messages from the queue, and prints them. import pika # Connect to RabbitMQ connection = pika.BlockingConnection(pika.ConnectionParameters('localhost')) channel = connection.channel() # Declare a queue channel.queue_declare(queue='hello') # Define a callback to process messages def callback(ch, method, properties, body): print(f" [x] Received {body}") channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True) print(' [*] Waiting for messages. To exit press CTRL+C') channel.start_consuming() 2.4 Test the Application a. Run the consumer in one terminal: python3 receive.py b. Send a message from another terminal: python3 send.py c. Observe the message output in the consumer terminal. [x] Sent 'Hello RabbitMQ!' [x] Received b'Hello RabbitMQ!' Step 3: Monitoring RabbitMQ with AppDynamics 3.1 Configure RabbitMQ Management Plugin Ensure that the RabbitMQ Management Plugin is enabled (default in the Docker image). It exposes an HTTP API that provides metrics. 3.2 Create a Custom Monitoring Script Use a shell script to fetch RabbitMQ metrics and send them to the AppDynamics Machine Agent. script.sh #!/bin/bash # RabbitMQ Management API credentials USERNAME="guest" PASSWORD="guest" URL="http://localhost:15672/api/overview" # Fetch metrics from RabbitMQ Management API RESPONSE=$(curl -s -u $USERNAME:$PASSWORD $URL) if [[ $? 
-ne 0 || -z "$RESPONSE" ]]; then echo "Error: Unable to fetch RabbitMQ metrics" exit 1 fi MESSAGES=$(echo "$RESPONSE" | jq '.queue_totals.messages // 0') MESSAGES_READY=$(echo "$RESPONSE" | jq '.queue_totals.messages_ready // 0') DELIVER_GET=$(echo "$RESPONSE" | jq '.message_stats.deliver_get // 0') echo "name=Custom Metrics|RabbitMQ|Total Messages, value=$MESSAGES" echo "name=Custom Metrics|RabbitMQ|Messages Ready, value=$MESSAGES_READY" echo "name=Custom Metrics|RabbitMQ|Deliver Get, value=$DELIVER_GET" 3.3 Integrate with AppDynamics Machine Agent Place the Script: Copy the script.sh script to the Machine Agent monitors directory: cp script.sh <MachineAgent_Dir>/monitors/RabbitMQMonitor/ 2. Create monitor.xml : Create a monitor.xml file to configure the Machine Agent: <monitor> <name>RabbitMQ</name> <type>managed</type> <enabled>true</enabled> <enable-override os-type="linux">true</enable-override> <description>RabbitMQ </description> <monitor-configuration> </monitor-configuration> <monitor-run-task> <execution-style>periodic</execution-style> <name>Run</name> <type>executable</type> <task-arguments> </task-arguments> <executable-task> <type>file</type> <file>script.sh</file> </executable-task> </monitor-run-task> </monitor> 3. Restart the Machine Agent: Restart the agent to apply the changes: cd <MachineAgent_Dir>/bin ./machine-agent & Step 4: Viewing Metrics in AppDynamics Log in to your AppDynamics Controller. Navigate to Servers > Custom Metrics. Look for metrics under: Custom Metrics|RabbitMQ You should see metrics like: Total Messages Messages Ready Deliver Get
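If you prefer Python over shell for the collection step, the same metrics can be pulled from the management API with the requests library (pip install requests). This is only an alternative sketch of the script above, using the default guest/guest credentials from Step 1; the Machine Agent only needs the name=..., value=... lines on stdout.

```python
import requests

USERNAME, PASSWORD = "guest", "guest"
URL = "http://localhost:15672/api/overview"

# Fetch the same overview document the shell script queries with curl.
data = requests.get(URL, auth=(USERNAME, PASSWORD), timeout=10).json()

messages = data.get("queue_totals", {}).get("messages", 0)
messages_ready = data.get("queue_totals", {}).get("messages_ready", 0)
deliver_get = data.get("message_stats", {}).get("deliver_get", 0)

print(f"name=Custom Metrics|RabbitMQ|Total Messages, value={messages}")
print(f"name=Custom Metrics|RabbitMQ|Messages Ready, value={messages_ready}")
print(f"name=Custom Metrics|RabbitMQ|Deliver Get, value={deliver_get}")
```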
Hello Splunkers, After completing a few splunk courses, working on a sandbox, when and how did you all get your first break? (Assuming the person has an IT background though not specifically in Splunk) A Splunk certification is next on the agenda. Your insights are welcomed.