All Topics

Learn Splunk


I am looking for study materials for the Splunk Enterprise Security Certified Admin exam (SPLK-3001). Can anyone share resource links? Much appreciated.
Building a production-ready monitoring solution for GCP Pub/Sub that provides comprehensive visibility into your messaging infrastructure.

The Challenge

Google Cloud Pub/Sub is the backbone of many modern distributed systems, handling millions of messages daily. But here’s the problem: how do you monitor something you can’t see? Most organizations struggle with:

- Limited visibility into topic and subscription health
- Reactive monitoring — finding out about issues after they impact users
- Fragmented metrics across different tools and dashboards
- Complex setup requiring deep GCP and monitoring expertise

What if I told you there’s a way to get comprehensive Pub/Sub monitoring integrated directly into your existing AppDynamics dashboard with minimal setup?

The Solution: End-to-End Automation

In this guide, we’ll build a complete monitoring solution that:

- Automatically creates GCP Pub/Sub topics and subscriptions
- Generates realistic test data for immediate monitoring
- Collects 50+ metrics covering every aspect of your Pub/Sub infrastructure
- Integrates seamlessly with the AppDynamics Machine Agent
- Supports multiple platforms (AWS, GCP, Azure, on-premises)
- Includes cleanup scripts for easy environment management

Architecture Overview

Here’s what we’re building: a metrics script uses the gcloud CLI, authenticated with a JSON service-account key, to read topic and subscription state from GCP Pub/Sub; the collected metrics are handed to the AppDynamics Machine Agent, which reports them to the Controller and its dashboards.

```
┌───────────────────┐      ┌───────────────────┐      ┌───────────────────┐
│    GCP Pub/Sub    │      │  Metrics Script   │      │    AppDynamics    │
│ - Topics          │◄─────│ - gcloud CLI      │─────►│ - Machine Agent   │
│ - Subscriptions   │      │ - Metrics         │      │ - Controller      │
│                   │      │   Collector       │      │ - Dashboard       │
│                   │      │ - JSON Auth       │      │                   │
└───────────────────┘      └───────────────────┘      └───────────────────┘
```

Prerequisites

Before we start, ensure you have:

- A GCP project with billing enabled
- The AppDynamics Machine Agent installed and running
- A Linux system (we support Amazon Linux 2/2023, Ubuntu, RHEL/CentOS, Rocky Linux)
- Basic command-line knowledge

Don’t worry if you’re missing some pieces — we’ll guide you through everything!

Part 1: Setting Up the Foundation

1.1 Clone the Repository

First, let’s get our toolkit:

```bash
git clone https://github.com/Abhimanyu9988/gcp-pubsub-appdynamics.git
cd gcp-pubsub-appdynamics
chmod +x *.sh
```

1.2 Install Prerequisites

Our smart installer detects your operating system and installs everything needed:

```bash
sudo ./ec2-pre-req.sh
```

What this does:

- Detects your OS (Amazon Linux 2/2023, Ubuntu, RHEL, etc.)
- Installs the Google Cloud SDK with the proper Python version
- Installs required tools (jq, curl, etc.)
- Sets up proper permissions and paths
- Validates that everything works

Sample output:

```
==============================================
Multi-Distribution GCP Pub/Sub Prerequisites
==============================================
[INFO] Detected OS: Amazon Linux 2
[SUCCESS] System packages updated
[SUCCESS] Python 3.9 installed
[SUCCESS] Google Cloud SDK installed successfully
[SUCCESS] All prerequisites installed successfully!
```
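Before running the installer, you can sanity-check which prerequisites are already present on the host. Here is a minimal sketch of such a check, assuming only a POSIX shell; the function name and the tool list are illustrative, not part of the repository's scripts:

```shell
#!/bin/sh
# Sketch of a prerequisite check similar in spirit to what ec2-pre-req.sh
# validates. The function name and tool list are illustrative assumptions.

check_tools() {
  missing=0
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "[SUCCESS] Found: $tool"
    else
      echo "[ERROR] Missing: $tool"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

# Example: verify the tools the metrics script depends on.
check_tools sh ls   # in practice: gcloud jq curl
```

The return code counts missing tools, so the check can gate the rest of an install script with a plain `if check_tools gcloud jq curl; then …`.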
Part 2: GCP Service Account Setup

2.1 Create Service Account (One-time Setup)

On your local machine where you have gcloud configured:

```bash
./create_service_account.sh
```

What this creates:

- A service account with minimal required permissions
- A JSON key file for authentication
- IAM roles: pubsub.viewer, monitoring.viewer, serviceusage.serviceUsageViewer

Sample output:

```
============================================
GCP Service Account Creation for Pub/Sub Monitoring
============================================
[SUCCESS] Using project: my-gcp-project
[SUCCESS] Created service account: pubsub-monitor@my-project.iam.gserviceaccount.com
[SUCCESS] Assigned: roles/pubsub.viewer
[SUCCESS] Service account key created: pubsub-monitor-service-account.json

Transfer to AWS Linux 2:
scp -i your-key.pem pubsub-monitor-service-account.json ec2-user@your-instance-ip:~/
```

2.2 Transfer Credentials

Copy the service account key to your monitoring server:

```bash
# On your local machine
scp -i your-aws-key.pem pubsub-monitor-service-account.json ec2-user@your-server:~/

# On your monitoring server
sudo mkdir -p /opt/appdynamics
sudo mv ~/pubsub-monitor-service-account.json /opt/appdynamics/
sudo chmod 600 /opt/appdynamics/pubsub-monitor-service-account.json
```

Part 3: Creating Pub/Sub Resources

3.1 Set Up Your Environment

```bash
export GCP_PROJECT_ID="your-actual-project-id"
```

3.2 Create Topics and Subscriptions

```bash
./pubsub_create.sh
```

What this does:

- Creates the Pub/Sub topic: appdynamics-monitoring-topic
- Creates the subscription: appdynamics-monitoring-subscription
- Publishes 100 sample messages with realistic data
- Generates custom metrics for immediate monitoring
- Optional: runs a continuous simulation workload

Sample output:

```
==================================
GCP Pub/Sub Creation Script
==================================
[SUCCESS] Created topic: appdynamics-monitoring-topic
[SUCCESS] Created subscription: appdynamics-monitoring-subscription
[INFO] Publishing 100 messages to appdynamics-monitoring-topic
[SUCCESS] Publishing completed:
  Total messages: 100
  Published successfully: 100
  Failed to publish: 0
  Messages per second: 23
```

3.3 Verify Resources

```bash
./pubsub_info.sh status
```

This shows you exactly what was created and provides troubleshooting information.

Part 4: Configure Metrics Collection

4.1 Edit the Main Script

Update the configuration in script.sh:

```bash
vi script.sh

# Update these values:
PROJECT_ID="your-actual-project-id"
SERVICE_ACCOUNT_KEY_FILE="/opt/appdynamics/pubsub-monitor-service-account.json"

# Optional: Add more topics/subscriptions
TOPIC_NAMES="topic1,topic2,topic3"
SUBSCRIPTION_NAMES="sub1,sub2,sub3"
```

4.2 Test Metrics Collection

```bash
./script.sh
```

Expected output (50+ metrics):

```
name=Custom Metrics|PubSub|Health|Collection Success, value=1
name=Custom Metrics|PubSub|Topic|appdynamics-monitoring-topic|Status, value=1
name=Custom Metrics|PubSub|Subscription|appdynamics-monitoring-subscription|Ack Deadline, value=60
name=Custom Metrics|PubSub|API|PubSub Enabled, value=1
name=Custom Metrics|PubSub|Project|Total Topics, value=5
```

Part 5: AppDynamics Integration

5.1 Install the Machine Agent Extension

```bash
# Create extension directory
sudo mkdir -p /opt/appdynamics/machine-agent/monitors/PubSubMonitor

# Copy files
sudo cp script.sh /opt/appdynamics/machine-agent/monitors/PubSubMonitor/
sudo cp monitor.xml /opt/appdynamics/machine-agent/monitors/PubSubMonitor/

# Set permissions
sudo chown -R appdynamics:appdynamics /opt/appdynamics/machine-agent/monitors/PubSubMonitor
sudo chmod +x /opt/appdynamics/machine-agent/monitors/PubSubMonitor/script.sh
```

5.2 Restart the Machine Agent

```bash
sudo systemctl restart appdynamics-machine-agent

# Verify it's running
sudo systemctl status appdynamics-machine-agent
```

5.3 Check Logs

```bash
# Check Machine Agent logs
tail -f /opt/appdynamics/machine-agent/logs/machine-agent.log

# Check for our extension
grep -i pubsub /opt/appdynamics/machine-agent/logs/machine-agent.log
```

Part 6: The Metrics Deep Dive

Our solution collects 50+ comprehensive metrics across these categories:

Topic Metrics
- Status & Health: topic accessibility, configuration validation
- Subscription Management: count and health of attached subscriptions
- Security: IAM policy accessibility
- Configuration: message retention settings

Subscription Metrics
- Operational Health: status, acknowledgment deadlines
- Configuration Analysis: push vs. pull detection, retry policies
- Advanced Features: dead letter queues, message filtering
- Performance: message retention, delivery settings

API & Service Health
- Service Availability: Pub/Sub and Monitoring API status
- Operational Capabilities: list topics/subscriptions permissions
- Health Scoring: overall API accessibility

Project-Level Insights
- Resource Inventory: total topics and subscriptions across the project
- Configuration Validation: setup verification
- Custom Metrics Integration: deployment script metrics

Part 7: Viewing in AppDynamics

7.1 Navigate to Metrics

- Log into your AppDynamics Controller
- Go to Servers → Machine Agents → Your Server
- Navigate to Your Server → Custom Metrics
- Look for Custom Metrics | PubSub

7.2 Create Dashboards

Create custom dashboards with widgets for:

- Health Overview: collection success rate, GCP connectivity status, API health score
- Topic Performance: topic status by name, subscription count distribution, success rate trends
- Subscription Analytics: ack deadline distribution, push vs. pull breakdown, dead letter queue usage
- Project Insights: total resource counts, configuration compliance, custom metric trends

Part 8: Production Deployment

8.1 Security Best Practices

```bash
# Secure service account keys
sudo chmod 600 /opt/appdynamics/*.json
sudo chown appdynamics:appdynamics /opt/appdynamics/*.json

# Enable audit logging
gcloud logging sinks create pubsub-monitoring-sink \
  bigquery.googleapis.com/projects/YOUR_PROJECT/datasets/audit_logs \
  --log-filter='protoPayload.serviceName="pubsub.googleapis.com"'
```

8.2 Monitoring Multiple Environments

```bash
# Development environment
export GCP_PROJECT_ID="dev-project"
export TOPIC_NAMES="dev-orders,dev-inventory"

# Production environment
export GCP_PROJECT_ID="prod-project"
export TOPIC_NAMES="orders,inventory,notifications,analytics"
```

8.3 Alerting Setup

Create AppDynamics policies for:

- Topic/subscription availability < 100%
- Collection errors > 0
- API health score < 1
- Custom metric age > 5 minutes

Part 9: Maintenance & Operations

9.1 Resource Management

```bash
# Check current resources
./pubsub_info.sh status

# View available metrics
./pubsub_info.sh metrics

# Generate more test data
RUN_SIMULATION=true ./pubsub_create.sh

# Clean up everything
./pubsub_destroy.sh
```

9.2 Service Account Rotation

```bash
# Create new service account
./create_service_account.sh

# Test with new credentials
./script.sh

# Delete old service account
./delete_service_account.sh
```

9.3 Troubleshooting

Common issues and solutions:

Authentication errors:

```bash
# Verify service account file
jq empty /opt/appdynamics/pubsub-monitor-service-account.json

# Test authentication
gcloud auth activate-service-account --key-file=/opt/appdynamics/pubsub-monitor-service-account.json
```

Permission denied:

```bash
# Check IAM roles
gcloud projects get-iam-policy YOUR_PROJECT_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:your-service-account@project.iam.gserviceaccount.com"
```

No metrics in AppDynamics:

```bash
# Test script manually
cd /opt/appdynamics/machine-agent/monitors/PubSubMonitor
./script.sh

# Check Machine Agent logs
tail -f /opt/appdynamics/machine-agent/logs/machine-agent.log
```

Advanced Use Cases

Multi-Project Monitoring

Monitor Pub/Sub across multiple GCP projects by deploying separate instances with different service accounts:

```bash
# Project 1 - Development
PROJECT_ID="dev-project" SERVICE_ACCOUNT_KEY_FILE="/opt/appdynamics/dev-sa.json" ./script.sh

# Project 2 - Production
PROJECT_ID="prod-project" SERVICE_ACCOUNT_KEY_FILE="/opt/appdynamics/prod-sa.json" ./script.sh
```

Custom Metrics Integration

Extend the solution to include your application-specific metrics:

```bash
# Add custom metrics to /tmp/pubsub_custom_metrics.log
echo "[$(date -u +"%Y-%m-%dT%H:%M:%S.%3NZ")] CUSTOM_METRIC order_processing_rate 150" >> /tmp/pubsub_custom_metrics.log
```

CI/CD Integration

Include in your deployment pipelines:

```yaml
# GitHub Actions example
- name: Setup Pub/Sub Monitoring
  run: |
    ./pubsub_create.sh
    ./script.sh

    # Verify metrics collection
    if grep -q "Collection Success, value=1" <(./script.sh); then
      echo "Monitoring setup successful"
    else
      echo "Monitoring setup failed"
      exit 1
    fi
```

Performance & Scaling

Our solution is designed for production scale:

- Fast execution: collects 50+ metrics in under 10 seconds
- Concurrent safe: multiple instances can run simultaneously
- Scalable: handles hundreds of topics/subscriptions
- Lightweight: minimal memory and CPU footprint
- Resilient: continues partial collection even if some resources fail

Performance metrics:

- Average collection time: 5–7 seconds
- Memory usage: < 50 MB during collection
- Network requests: optimized with bulk operations
- Error recovery: continues on partial failures

What's Next?

You now have a production-ready Pub/Sub monitoring solution that provides:

- Complete visibility: 50+ metrics covering every aspect
- Automated setup: one-click deployment and configuration
- Integration ready: native AppDynamics dashboard integration
- Production tested: handles real-world scale and complexity
- Maintenance friendly: easy updates, rotation, and cleanup

Take It Further

- Create custom dashboards with business-specific KPIs
- Set up intelligent alerting for proactive issue detection
- Integrate with CI/CD for automated environment monitoring
- Add business metrics specific to your use cases
- Scale to multiple projects and regions

Resources & Links

- Full documentation: GitHub repository
- Issues & support: GitHub Issues
- GCP Pub/Sub docs: official documentation
- AppDynamics extensions: Machine Agent extensions

Conclusion

Monitoring GCP Pub/Sub doesn't have to be complicated.
With the right automation and tooling, you can have comprehensive visibility into your messaging infrastructure in under 30 minutes. The solution we've built together provides enterprise-grade monitoring with minimal operational overhead. Your teams can now:

- Detect issues proactively, before they impact users
- Understand performance patterns across your messaging layer
- Make data-driven decisions about scaling and optimization
- Maintain high availability with real-time health monitoring

Have questions or want to share your success story? Drop a comment below or reach out on GitHub! If this guide helped you, please give the GitHub repository a star and share it with your team!
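As a footnote on the mechanics: the Machine Agent ingests the custom metrics shown in Part 4 as plain `name=<metric path>, value=<n>` lines printed to stdout. A minimal sketch of a helper that formats such lines follows; the helper name and the sample metric paths are illustrative assumptions, only the line format matches the article's output:

```shell
#!/bin/sh
# Minimal sketch: format AppDynamics-style custom-metric lines.
# Helper name and sample metric paths are illustrative; only the
# "name=<path>, value=<n>" shape follows the output shown in Part 4.

emit_metric() {
  # $1 = metric path under "Custom Metrics|PubSub", $2 = integer value
  echo "name=Custom Metrics|PubSub|$1, value=$2"
}

emit_metric "Health|Collection Success" 1
emit_metric "Topic|appdynamics-monitoring-topic|Status" 1
```

Keeping the formatting in one place like this makes it easy to extend a collection script with new metric paths without repeating the delimiter syntax.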
I have completed the Splunk Enterprise Certified Admin course on Udemy and have done a practice test of 220 questions. Is that enough to sit the exam? I would much appreciate any help with mock test materials and study resources. Please advise which third-party study resources are good enough to pass the exam.
Looking to sharpen your observability skills so you can better understand how to collect and analyze data from logs, metrics, traces, and events? Splunk University (Sept 6–8) is the place to be, and as luck would have it, there are still seats available in our four Splunk Observability tracks! Whether you're keen to learn OpenTelemetry, need training on monitoring microservices and user experience with Splunk Observability Cloud, or want to use IT Service Intelligence (ITSI) as an analyst or admin, we’ve got the Splunk University hands-on tracks to help you succeed. Enroll now in one of our 2- or 3-day courses so you can begin to drive faster resolution, streamline investigations, and unlock real-time insights across your environment.

Track schedule

- 2-day track: Sep 7 – Sep 8
- 3-day track: Sep 6 – Sep 8

Course names

- Full Observability Pipeline with Splunk OpenTelemetry Collector (2-day)
- Splunk IT Service Intelligence (2-day)
- Splunk IT Service Intelligence Admin (3-day)
- Monitoring Applications Using Splunk Observability Cloud (3-day)

Register here for Splunk University
Hi All, Today, after reviewing the STEP profile, I noticed that the Troubleshooting Splunk Enterprise training has been marked as "unsuccessful". Could you please advise on how we can update the status to "successful", or let us know if there is a process to reschedule or reattempt the training? Looking forward to your guidance.
July 2025 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious! We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go

Training You Gotta Take

SOCin’ it to you | Simulated threats, real skills

This isn’t your grandma’s sock hop; it’s our new, modern SOC Ops course. At Splunk University in Boston, our new one-day course Enhancing SOC Operations with Attack Simulations is designed to push seasoned defenders out of their swivel chairs and into the (virtual) trenches. You’ll run open-source attack simulations in a Splunk-powered lab, uncover threats, and flex your detection and response skills like a cybersecurity jitsu master in a Mission Impossible film. But there’s a catch: to be part of this adventure, you must have experience with Splunk Enterprise Security and SOAR.

Gotta SOC it to threats | Register now for this Splunk University course

Real-time is the right time | Take the Splunk Observability Cloud course

Hellooooooo, is anybody out there? We’re calling all IT pros, DevOps dynamos, and anyone who’s ever screamed “Why is the system down?!” into the void. We’ve got a Splunk course you didn’t know you needed. Splunk Observability Cloud: Real-Time Metrics in Splunk Cloud is here to transform your monitoring game from reactive to proactive. Think: faster root cause analysis, slicker performance insights, and fewer mystery outages at 2 a.m. If “observability” was your 2024 word of the year, make “proficiency” your 2025 flex.
Gotta see the signals | Take the Observability Cloud course

Things You Needa Know

Dashboards, detections and Dunkin’ | Welcome to Splunk University

Pencils down, laptops open! It's time to talk about Splunk University at .conf25 in Boston, aka the ultimate pre-game before the main event. If you're still on the fence, allow us to give you a nudge using some YouTube rizz: 5 Reasons to Attend Splunk University. Before you hit the conference floor, sharpen your skills in the classroom. It’s where learning meets lobstah rolls and dashboards meet Dunkin’.

Needa get schooled | Watch the video to see all 5 reasons

Brains, bytes, and Boston | Learn from the best at .conf25

When you think of Boston, you might picture colonial charm, top-tier universities, or waving a Fenway foam finger. But this September, you can add interactive, hands-on Splunk learning to the nostalgia because .conf25 is coming to Boston. Whether you’re tracking AWS data exfiltration, decoding detections in ES 8.0, throwing machine learning at risk-based alerting, or building no-code SOAR playbooks, these sessions are designed to prepare you for what’s next. Come early for Splunk University, stay for the workshops, and leave with real, practical knowledge – oh, and don’t forget about the iconic Splunk swag.

Needa skill-up on Splunk | Add workshops to your schedule

Data, defined | A user-friendly refresh from Lantern

If your data feels like a messy string of holiday lights that only your sweet Nana has the patience to untangle, then your Nana just joined the ranks of Lantern’s Data Descriptors. This month, the Lantern team completed a full-on makeover of their Data Descriptor pages, including new page categorizations that make it easier to find use cases and best-practice tips for the data you care about most. And with expert-vetted categorizations, you’ll finally find exactly what you need – turning data chaos into clarity. Nana would be so darn proud of us.

Needa define your data?
Explore the updated Data Descriptors on Lantern

Places You’ll Wanna Go

The Splunk classroom | Training tales and testimonials

We’re spilling the tea! If you’re curious about what the experience looks like, then check out Splunk Classroom Chronicles. This new series introduces you to our top-notch instructors and course developers, and highlights stories and feedback from our learners. With today’s fast-paced work environment, continuous professional development is key, and Splunk Education offers engaging, interactive training to keep you one step ahead of the bad guys. From hands-on labs to expert-led sessions, grab a virtual seat and meet these legends in the virtual classroom – or better yet, meet them in person at Splunk University.

Go to the head of the class | Read the tales

Certifications testing center | Re-up or rank up at .conf25

USDA Prime. Organic. Cage-free. We love a good certification. It signals to the world that a thing is legit. The same holds true for Splunk Certifications. Whether you're renewing your current creds or working toward a new badge, .conf25 is your chance to get certified the quick, easy, and budget-friendly way. All exams are just $25. You can register in advance or do a walk-up in Boston if you have a valid .conf25 badge. With Splunk Certification, you’re not just showing up, you’re standing out.

PS: We’re celebrating our stand-outs with a special Bragging Rights Spotlight networking event on September 9. The .conf25 schedule has the fun details.

Go get Splunk Certified | Register for .conf25 in Boston

Find Your Way | Learning Bits and Breadcrumbs

- Go to Lantern | New Data Type articles
- Go Chat | Join our Community User Group Slack Channel
- Go Stream It | The Latest Course Releases (Some with Non-English Captions!)
- Go Last Minute | Seats Still Available for ILT
- Go to STEP | Get Upskilled
- Go Discuss Stuff | Join the Community
- Go Social | LinkedIn for News
- Go Index It | Subscribe to our Newsletter

Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: 22 + (2 ÷ 2) = 23.
Hi, I just created a Splunk account and subscribed to the Splunk Cloud free trial. I've already activated the account, but I did not receive any link or credentials for the free trial. Is there any guide, or a way to trigger the link generation? Thanks
How do I get a paper certificate, aside from the badge you receive after passing? All I have is the score report and exam history.
I am pleased to share that I have successfully passed the Splunk Enterprise Certified Architect exam. This certification validates a strong understanding of best practices in deploying, managing, and troubleshooting complex Splunk Enterprise environments. The exam was certainly challenging, requiring both hands-on experience and a deep understanding of Splunk architecture, distributed deployments, data ingestion strategies, and system resiliency. Preparing for it significantly enhanced my technical knowledge and broadened my perspective on enterprise-level log management and monitoring solutions. One of the most helpful resources during my preparation was the practice test by P2PExams, which closely aligned with the actual exam topics and helped me assess and strengthen my readiness effectively. Now that this milestone is complete, my next goal is to apply these skills in large-scale implementations and continue growing in the areas of observability, security analytics, and cloud-based SIEM solutions. I’m also planning to explore Splunk Cloud Architect capabilities and stay aligned with emerging trends in data engineering and operational intelligence.
Summer in the Northern hemisphere is in full swing, and is often a time to travel and explore. If your summer break involves learning and growing, let Splunk Education be your tour guide. Whether you're dipping your toes into the world of observability or revisiting security best practices, this month’s new training offerings are your passport to up-skilling.

Here’s what’s on the map for your learning journey this month:

Implementing Splunk SmartStore for Indexer Clusters
Are you a Splunk Enterprise Administrator? This guided eLearning with labs lets you explore SmartStore architecture and configuration, perfect for learners looking to configure and optimize SmartStore for their environment. (Enroll)

Introduction to Implementing Splunk SmartStore
Start your SmartStore journey here with this free eLearning primer, designed for users new to the platform’s capabilities. (Enroll)

Splunk Observability Cloud Realtime Metrics in Splunk Cloud
Track and visualize your metrics like a pro. This hands-on lab experience teaches you how to monitor systems with speed and clarity. (Enroll)

Developing SOAR Playbooks for Splunk Enterprise Security
Advance your automation know-how with this guided lab course, designed to help you learn to configure Splunk Enterprise Security to use Splunk SOAR playbooks, including how to build, test, and troubleshoot Splunk Enterprise Security playbooks. (Enroll)

Learning on the fly with YouTube

- SOAR 6.4.1 Playbook Feature Update YouTube video (Watch)
- Developing SOAR Playbooks for Splunk Enterprise Security YouTube video (Watch)

So, where will your learning take you next? Decide on your destination, bring your curiosity, and let Splunk Education guide your growth. Explore the full Splunk Course Catalog and stay tuned for more destination drops each month.

— Callie Skokos on behalf of the Splunk Education Crew
Hi everyone, I’m scheduled to take a Splunk certification exam through Pearson VUE, and I have a quick question about how the name appears on the final certificate. My Pearson VUE account has my correct legal name, matching my government-issued ID. However, my Splunk.com account displays my last name incorrectly (due to an autocomplete feature I had enabled during registration for my Splunk account). Before I proceed, I’d like to clarify:

- Which name is actually printed on the certification: the name from Pearson VUE, or the name from my Splunk account profile?
- If it’s based on the Splunk account, what’s the correct process to update the name before or after the exam?
- Has anyone had a mismatch between the two and successfully resolved it?

I’ve contacted certification@splunk.com just in case, but I’d really appreciate input from others who have been through the process. Thanks in advance for your help! Athanasios
Splunk University is expanding its instructor-led learning portfolio with dedicated Security tracks at .conf25 in Boston (Sept 6–8, 2025). These new role‑based tracks are designed to support foundational to advanced SOC capabilities, ranging from threat detection and response to automated orchestration and architectural design.

A quick look at the tracks

The SOC Analyst track equips early‑career analysts with practical labs in threat investigation, detection, hunting, and response, featuring refreshed essentials and two brand-new courses. SOC Engineers will dive deeper into administering Splunk Enterprise Security and learn to integrate open‑source attack simulation tools to validate detection and response workflows. For those focused on automation, the Security Automation track combines Splunk SOAR administration and playbook development with hands-on integration of attack simulation tools. Security Architects will tackle deployment design, security data architecture, attack simulation integration, and an immersive look at SOC roles and challenges.

3-, 2-, and 1-day tracks

These offerings range from our one‑day course, "Enhancing SOC Operations with Attack Simulations," aimed at experienced professionals, to intensive three‑day tracks. Attendees can use Training Units with .conf25 passes (240 TUs for the three‑day, 180 TUs for the two‑day, and 100 TUs for the one‑day session).

Splunk University’s Security tracks are designed to empower professionals at every stage – from supporting new security analysts to enabling engineers, developers, and architects – with the intention to prepare you to build a future‑ready SOC. We’re meeting learners where they are and hope to propel them to where they want to be. Please note that some of these courses require prior training and/or experience with Splunk security products (Enterprise Security and SOAR).
Heading to Boston this September for .conf25? Get a jumpstart by arriving a few days early for Splunk University (September 6–8, 2025). This immersive learning experience is your chance to dive deep into real-world scenarios, boost your technical prowess, and emerge ready to drive data excellence in your organization.

Learn with Intention: A Course for Every Role

Choose a path that aligns with your goals. Whether you’re seeking to become a power user, data guardian, or analytics expert, there’s a track for you:

Three‑Day Tracks:
- Power User: from basic search to advanced data modeling, perfect for beginners to intermediates.
- Enterprise Administrator: install, configure, and manage Splunk on-prem environments.
- Mastering Data Management: optimize ingestion pipelines with advanced techniques.
- Advanced Enterprise Admin: troubleshoot and cluster, ideal for admins ready to level up.
- Analytics & Data Science: leverage the Machine Learning Toolkit for deeper insights.
- SOC Analyst, Engineer, Automation & Architect: tailored to every stage of security operations.
- Observability Cloud Monitoring: empower DevOps and SRE with Splunk Observability Cloud skills.

Two‑Day Tracks:
- Focused on specialized topics like Splunk Cloud Admin, Dashboard Studio, UI app development, ITSI foundations, and full Observability pipeline implementation.

One‑Day Intensive:
- Enhancing SOC Operations with Attack Simulations: a high-impact bootcamp for security professionals with existing SPL or security experience.

What You’ll Walk Away With

Attendees gain hands-on experience, real-world strategies, and immediate takeaways: dashboards you can deploy, pipelines you can optimize, and security practices you can apply from day one. You’ll not only expand your skill set, but build confidence to lead initiatives that empower your organization to proactively leverage data. Oh, and in true Splunky style, you’ll also walk away with some fun swag!

Ready to Level Up? Register Today!
Don’t miss this chance to boost your impact before .conf25 even starts. Splunk University seats are limited—reserve your spot now for 3-Day, 2-Day, or 1-Day tracks and bring home actionable knowledge, peer connections, and a fresh perspective. It’s go time—see you in Boston!    Register now!
I’m preparing for the Splunk Enterprise Certified Admin (SPLK-1005) exam and would appreciate any advice:
- Best study resources (besides official docs)?
- Key topics to focus on for the exam?
- Hands-on lab recommendations?
- Any tricky areas to watch out for?
Thanks in advance!
This guide explains how to enable debug-level logging in the Jetty-based AppDynamics Controller. This is especially helpful when troubleshooting issues across key components like Business Transactions, Metrics, Events, and more. It is intended for system administrators managing self-hosted AppDynamics environments who need granular log details without restarting services.

Prerequisites
- Admin or root access to the Controller host machine
- Familiarity with XML configuration files
- Basic understanding of Java logging levels (e.g., INFO, DEBUG, TRACE)

Step-by-Step Instructions

Step 1: Log in to the Controller Server
SSH into the host where the AppDynamics Controller is installed:

ssh user@<controller-host>

Step 2: Locate the logback.xml File
Navigate to the logging configuration directory:

cd <controller_home>/appserver/jetty/resources/

The file you need to edit is logback.xml. This file controls logging levels for all Controller components using the Logback framework.

Step 3: Modify the Log Level by Component
Open the file in a text editor:

vi logback.xml

Find or add the logger entry for the desired component. For example, to enable DEBUG logs for Hibernate:

<logger name="org.hibernate" level="DEBUG"/>

To enable debug logging for a common Controller component (e.g., Business Transactions or Metrics), you might add:

<logger name="com.singularity.BT" level="DEBUG"/>
<logger name="com.singularity.metrics" level="DEBUG"/>

Save and close the file.

Step 4: Wait for the Changes to Take Effect
You do not need to restart the Controller. Log level changes made in logback.xml take effect automatically within a few minutes.
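Putting Step 3 in context, a minimal logback.xml fragment with the example loggers might look like the sketch below. This is illustrative only — your Controller's actual file contains many more appenders and loggers, which should be left in place; only add or adjust the specific <logger> entries you need.

```xml
<configuration>
  <!-- ... existing appenders and loggers remain untouched ... -->

  <!-- Temporary DEBUG loggers added for troubleshooting -->
  <logger name="com.singularity.BT" level="DEBUG"/>
  <logger name="com.singularity.metrics" level="DEBUG"/>

  <!-- Set these back to INFO (or delete them) once troubleshooting is done -->
</configuration>
```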
Troubleshooting

Logs Not Updating with New Verbosity
Solution:
- Ensure your XML syntax is valid and the logger entries are enclosed within the <configuration> tags
- Double-check that the logger name is spelled correctly
- Wait a few minutes—Logback applies changes dynamically

Excessive Log File Growth
Solution:
- Return the level to INFO or WARN after troubleshooting
- Enable log rotation via configuration or scheduled scripts

Additional Notes / Best Practices
- Use targeted component loggers to avoid noise. Some common loggers include:
  - com.singularity.BT for Business Transactions
  - com.singularity.metrics for Metrics
  - com.singularity.snapshots for Snapshots
  - com.singularity.orchestration for Orchestration
- Avoid leaving DEBUG or TRACE logging on for long periods in production environments

Conclusion
By editing the logback.xml file, you can enable debug logging in the Jetty-based AppDynamics Controller without restarting services. This allows you to collect detailed diagnostic data during issues while maintaining flexibility and control.

FAQ
Q: Do I need to restart the Controller after editing logback.xml?
A: No, changes are picked up automatically within a few minutes.

Q: Where are the debug logs stored?
A: Logs are written to the standard Controller log directory, typically <controller_home>/logs/server.log

Q: Can I enable debug mode for just one module?
A: Yes, by specifying the exact logger (e.g., com.singularity.metrics) in logback.xml.
Your Next Big Security Credential: No Prerequisites Needed

We know you’ve got the skills, and now, earning the Splunk Certified Cybersecurity Defense Engineer certification is simpler than ever. Splunk Education has removed all prerequisites for this advanced certification, opening the door for more professionals to showcase their expertise. This certification validates your ability to leverage Splunk Enterprise Security and SOAR to enhance workflows, create custom detections, and design powerful automations. Whether you’re optimizing your organization’s security operations or advancing your career, this certification is a game-changer for cybersecurity pros.

Get Certified at .conf25 for Just $25!

Heading to .conf25 in Boston? Don’t miss your chance to take (or retake) the Splunk Certified Cybersecurity Defense Engineer exam for just $25—a fraction of the usual cost! Walk-up testing will be available, but slots fill up quickly, so secure your spot early to avoid missing out. Whether you’re a first-time test taker or looking to recertify, there’s no better time to validate your skills. Plus, you’ll be part of an amazing community of Splunk enthusiasts and experts—talk about leveling up your Splunk journey!

Learn more about the certification here. Take exams at .conf25
There will be times when multiple teams in your organization want to monitor and instrument their applications, and want those applications to report to different AppDynamics backends (Controllers). The latest Cluster Agent releases give you that option. The setup below has been tested on the Cluster Agent 25.5 release.

First, create 2 applications:

1. tomcat-sample.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app
  labels:
    app: tomcat-app-java-apps
  namespace: java-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app
  template:
    metadata:
      labels:
        app: tomcat-app
    spec:
      containers:
        - name: tomcat-app
          #image: docker.io/abhimanyubajaj98/tomcat-sample:latest
          image: docker.io/abhimanyubajaj98/tomcat-sample
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-Xmx512m"
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service
  labels:
    app: tomcat-app
  namespace: java-apps
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app

2. tomcat-sample-ces.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-app-1
  labels:
    app: tomcat-app-java-apps-1
  namespace: one-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app-1
  template:
    metadata:
      labels:
        app: tomcat-app-1
    spec:
      containers:
        - name: tomcat-app-1
          #image: docker.io/abhimanyubajaj98/tomcat-sample:latest
          image: docker.io/abhimanyubajaj98/tomcat-sample
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-Xmx512m"
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-app-service-1
  labels:
    app: tomcat-app-1
  namespace: one-apps
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: tomcat-app-1

To deploy (note that kubectl create ns accepts one namespace name per invocation):

1. kubectl create ns java-apps && kubectl create ns one-apps
2. kubectl create -f tomcat-sample.yaml
3. kubectl create -f tomcat-sample-ces.yaml

Once deployed:

root@ip-172-31-12-116:/opt/appdynamics/java-apps# kubectl -n java-apps get pods
NAME                          READY   STATUS    RESTARTS   AGE
tomcat-app-76df69dc7b-f74wv   1/1     Running   0          71m
root@ip-172-31-12-116:/opt/appdynamics/java-apps# kubectl -n one-apps get pods
NAME                            READY   STATUS    RESTARTS   AGE
tomcat-app-1-6b86c4b444-h65sj   1/1     Running   0          20m

Auto-instrument

For this, I will deploy one Cluster Agent with Helm charts and the other with the normal kubectl command line. First up is creating the secrets. Remember, the Cluster Agents will be deployed in 2 namespaces, so we will need to create 2 secrets in different namespaces.

Our 1st namespace: appdynamics
Our 2nd namespace: appdynamics-1

Let's create the secrets:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>
kubectl -n appdynamics-1 create secret generic cluster-agent-secret --from-literal=controller-key=<access-key>

First Cluster Agent deployment in the appdynamics namespace

My Helm values:

root@ip-172-31-12-116:/opt/appdynamics/cluster-agent/cluster-agent-alpine-arm64-bundled-distribution/helm-charts# cat values.yaml
installClusterAgent: true
installInfraViz: false
installSplunkOtelCollector: false
imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: 25.5.0-1126
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  operatorTag: 25.5.0-1107
  imagePullPolicy: Always
  machineAgentImage: docker.io/appdynamics/machine-agent
  machineAgentTag: latest
  machineAgentWinImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentWinTag: win-latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest
controllerInfo:
  url: https://controllerces.saas.appdynamics.com:443
  account: controllerces
  username: null
  password: null
  accessKey: null
  globalAccount: controllerxxx_xxxxxx3
  customSSLCert: null
  keyStorePasswordSecret: ''
  keyStoreFileSecret: ''
  authenticateProxy: false
  proxyUrl: null
  proxyUser: null
  proxyPassword: null
createServiceAccount: true
clusterAgent:
  containerProperties:
    containerBatchSize: 5
    containerParallelRequestLimit: 1
    containerRegistrationInterval: 120
  logProperties:
    logFileSizeMb: 5
    logFileBackups: 3
    logLevel: DEBUG
  metricProperties:
    metricsSyncInterval: 30
    metricUploadRetryCount: 2
    metricUploadRetryIntervalMilliSeconds: 5
    podMetricCollectionMaxGoRoutines: 3
    podMetricCollectionRequestTimeoutSeconds: 5
  nsToMonitorRegex: .*
  appName: two-ca
instrumentationConfig:
  enabled: true
  containerAppCorrelationMethod: proxy
  instrumentationMethod: Env
  numberOfTaskWorkers: 5
  appNameStrategy: label
  nsToInstrumentRegex: java-apps|nodejs-apps|dotnet-apps
  instrumentationRules:
    - namespaceRegex: java-apps
      language: java
      appNameLabel: app
      runAsUser: 999
      runAsGroup: 999
      imageInfo:
        image: docker.io/appdynamics/java-agent:latest
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always

Install the chart from the bundled helm-charts directory:

helm install cluster-agent . -f values.yaml -n appdynamics

root@ip-172-31-12-116:/opt/appdynamics/cluster-agent/cluster-agent-alpine-arm64-bundled-distribution/helm-charts# kubectl -n appdynamics get pods
NAME                                    READY   STATUS    RESTARTS   AGE
appdynamics-operator-769dcf8f4b-kjm2z   1/1     Running   0          77m
two-ca-appdynamics-54bc66f55c-5qzfk     1/1     Running   0          77m

Second Cluster Agent deployment in the appdynamics-1 namespace

Before we deploy the second Cluster Agent, we need to edit our cluster-agent-operator.yaml file. Make sure that every place where you have

namespace: appdynamics

gets changed to

namespace: appdynamics-1

I have uploaded a cluster-agent-operator.txt file that has this configuration.
You can use it with the 25.5 version of the Cluster Agent by changing the extension to .yaml. Then run:

kubectl create -f cluster-agent-operator.yaml

My cluster-agent.yaml:

root@ip-172-31-12-116:/opt/appdynamics/cluster-agent/cluster-agent-alpine-arm64-bundled-distribution# cat cluster-agent.yaml
apiVersion: cluster.appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics-1
spec:
  appName: "cluster-2"
  controllerUrl: "https://ces-controller.saas.appdynamics.com:443"
  account: "ces-controller"
  # docker image info
  image: "docker.io/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitorRegex: appdynamics
  resources:
    limits:
      cpu: 90m
      memory: "200Mi"
    requests:
      cpu: 90m
      memory: "100Mi"
  instrumentationMethod: Env
  nsToInstrumentRegex: one-apps
  appNameStrategy: label
  instrumentationRules:
    - namespaceRegex: one-apps
      language: java
      appNameLabel: app
      runAsUser: 999
      runAsGroup: 999
      imageInfo:
        image: docker.io/appdynamics/java-agent:latest
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always
  ### Uncomment the following line if you need a pull secret
  #imagePullSecret: "<your-docker-pull-secret-name>"
  ### Uncomment the following lines if you need to enable profiling
  #pprofEnabled: true
  #pprofPort: 9991

kubectl create -f cluster-agent.yaml

Once done:

root@ip-172-31-12-116:/opt/appdynamics/cluster-agent/cluster-agent-alpine-arm64-bundled-distribution# kubectl -n appdynamics-1 get pods
NAME                                    READY   STATUS    RESTARTS   AGE
appdynamics-operator-7677764d7c-52nx9   1/1     Running   0          51m
k8s-cluster-agent-6dbdb6b4dc-dkdgs      1/1     Running   0          30m

Now, the one-apps namespace should be instrumented by the second Cluster Agent (the one running in the appdynamics-1 namespace), and java-apps should be instrumented by the first Cluster Agent (the one running in the appdynamics namespace). And that is exactly what happened:

root@ip-172-31-12-116:/opt/appdynamics/cluster-agent/cluster-agent-alpine-arm64-bundled-distribution# kubectl -n java-apps describe
pod Name: tomcat-app-76df69dc7b-f74wv Namespace: java-apps Priority: 0 Service Account: default Node: ip-172-31-12-116/172.31.12.116 Start Time: Mon, 30 Jun 2025 19:07:11 +0000 Labels: app=tomcat-app pod-template-hash=76df69dc7b Annotations: APPD_DEPLOYMENT_NAME: tomcat-app APPD_INSTRUMENTED_CONTAINERS: tomcat-app APPD_POD_INSTRUMENTATION_STATE: Successful APPD_tomcat-app_APPNAME: tomcat-app-java-apps APPD_tomcat-app_NODEID: 1588804 APPD_tomcat-app_NODENAME: tomcat-app--12 APPD_tomcat-app_TIERID: 30171 APPD_tomcat-app_TIERNAME: tomcat-app cni.projectcalico.org/podIP: 10.244.116.136/32 cni.projectcalico.org/podIPs: 10.244.116.136/32 Status: Running IP: 10.244.116.136 IPs: IP: 10.244.116.136 Controlled By: ReplicaSet/tomcat-app-76df69dc7b Init Containers: appd-agent-attach-java: Container ID: containerd://2785c3422edd754d96270af5e970312791079bb6dc9840acd3ab4af84e5985a0 Image: docker.io/appdynamics/java-agent:latest Image ID: docker.io/appdynamics/java-agent@sha256:d237aeb95a7b77d6e3e5b2c868e03cd22077e24424b58fa2380e6de340305e35 Port: <none> Host Port: <none> Command: /bin/sh -c cp -r /opt/appdynamics/. 
/opt/appdynamics-java && chown -R 999:999 /opt/appdynamics-java ; ls -la /opt/appdynamics-java State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 30 Jun 2025 19:07:15 +0000 Finished: Mon, 30 Jun 2025 19:07:20 +0000 Ready: True Restart Count: 0 Limits: cpu: 20m memory: 75M Requests: cpu: 10m memory: 50M Environment: <none> Mounts: /opt/appdynamics-java from appd-agent-repo-java (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-95g58 (ro) Containers: tomcat-app: Container ID: containerd://5a27c1bb0d2f539f8288fc65a99cebabeed69d10d34de85c50d0f1389c4e5701 Image: docker.io/abhimanyubajaj98/tomcat-sample Image ID: docker.io/abhimanyubajaj98/tomcat-sample@sha256:19558c877ec1fca506c3b7a6696a7a3a8914e855df3af93778323fb909b2fdff Port: 8080/TCP Host Port: 0/TCP State: Running Started: Mon, 30 Jun 2025 19:07:20 +0000 Ready: True Restart Count: 0 Environment: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY: <set to the key 'controller-key' in secret 'cluster-agent-secret'> Optional: false JAVA_TOOL_OPTIONS: -Xmx512m -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY) -Dappdynamics.socket.collection.bci.enable=true -Dappdynamics.jvm.shutdown.mark.node.as.historical=false -Dappdynamics.agent.reuse.nodeName=true -javaagent:/opt/appdynamics-java/javaagent.jar APPDYNAMICS_CONTROLLER_HOST_NAME: controllerces.saas.appdynamics.com APPDYNAMICS_CONTROLLER_PORT: 443 APPDYNAMICS_AGENT_TIER_NAME: tomcat-app APPDYNAMICS_POD_NAMESPACE: java-apps APPDYNAMICS_CONTAINER_NAME: tomcat-app APPDYNAMICS_CONTROLLER_SSL_ENABLED: true APPDYNAMICS_AGENT_ACCOUNT_NAME: controllerces APPDYNAMICS_AGENT_APPLICATION_NAME: tomcat-app-java-apps APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX: tomcat-app APPDYNAMICS_CONTAINERINFO_FETCH_SERVICE: cluster-metadata-service.appdynamics:9090 APPDYNAMICS_NETVIZ_AGENT_HOST: (v1:status.hostIP) APPDYNAMICS_NETVIZ_AGENT_PORT: 3892 Mounts: /opt/appdynamics-java from appd-agent-repo-java (rw) 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-95g58 (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready True ContainersReady True PodScheduled True Volumes: appd-agent-repo-java: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kube-api-access-95g58: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: <none>   Name: tomcat-app-1-6b86c4b444-h65sj Namespace: one-apps Priority: 0 Service Account: default Node: ip-172-31-12-116/172.31.12.116 Start Time: Mon, 30 Jun 2025 19:58:04 +0000 Labels: app=tomcat-app-1 pod-template-hash=6b86c4b444 Annotations: APPD_DEPLOYMENT_NAME: tomcat-app-1 APPD_INSTRUMENTED_CONTAINERS: tomcat-app-1 APPD_POD_INSTRUMENTATION_STATE: Successful APPD_tomcat-app-1_APPNAME: tomcat-app-java-apps-1 APPD_tomcat-app-1_NODEID: 1588843 APPD_tomcat-app-1_NODENAME: tomcat-app-1--1 APPD_tomcat-app-1_TIERID: 31497 APPD_tomcat-app-1_TIERNAME: tomcat-app-1 cni.projectcalico.org/podIP: 10.244.116.150/32 cni.projectcalico.org/podIPs: 10.244.116.150/32 Status: Running IP: 10.244.116.150 IPs: IP: 10.244.116.150 Controlled By: ReplicaSet/tomcat-app-1-6b86c4b444 Init Containers: appd-agent-attach-java: Container ID: containerd://172d5f35af34468b82a4bd22a030495de959a6a71ae79bed17cae8d8d203adff Image: docker.io/appdynamics/java-agent:latest Image ID: docker.io/appdynamics/java-agent@sha256:d237aeb95a7b77d6e3e5b2c868e03cd22077e24424b58fa2380e6de340305e35 Port: <none> Host Port: <none> Command: /bin/sh -c cp -r /opt/appdynamics/. 
/opt/appdynamics-java && chown -R 999:999 /opt/appdynamics-java ; ls -la /opt/appdynamics-java State: Terminated Reason: Completed Exit Code: 0 Started: Mon, 30 Jun 2025 19:58:05 +0000 Finished: Mon, 30 Jun 2025 19:58:10 +0000 Ready: True Restart Count: 0 Limits: cpu: 20m memory: 75M Requests: cpu: 10m memory: 50M Environment: <none> Mounts: /opt/appdynamics-java from appd-agent-repo-java (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59w98 (ro) Containers: tomcat-app-1: Container ID: containerd://3234455f370b9af9c2e906ce0a0a183cf9812c53fd2fec3897a4a364c592031c Image: docker.io/abhimanyubajaj98/tomcat-sample Image ID: docker.io/abhimanyubajaj98/tomcat-sample@sha256:19558c877ec1fca506c3b7a6696a7a3a8914e855df3af93778323fb909b2fdff Port: 8080/TCP Host Port: 0/TCP State: Running Started: Mon, 30 Jun 2025 19:58:11 +0000 Ready: True Restart Count: 0 Environment: APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY: <set to the key 'controller-key' in secret 'cluster-agent-secret'> Optional: false JAVA_TOOL_OPTIONS: -Xmx512m -Dappdynamics.agent.accountAccessKey=$(APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY) -Dappdynamics.socket.collection.bci.enable=true -Dappdynamics.jvm.shutdown.mark.node.as.historical=false -Dappdynamics.agent.reuse.nodeName=true -javaagent:/opt/appdynamics-java/javaagent.jar APPDYNAMICS_CONTROLLER_HOST_NAME: ces-controller.saas.appdynamics.com APPDYNAMICS_AGENT_ACCOUNT_NAME: ces-controller APPDYNAMICS_AGENT_APPLICATION_NAME: tomcat-app-java-apps-1 APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX: tomcat-app-1 APPDYNAMICS_CONTAINERINFO_FETCH_SERVICE: cluster-metadata-service.appdynamics-1:9090 APPDYNAMICS_CONTROLLER_PORT: 443 APPDYNAMICS_CONTROLLER_SSL_ENABLED: true APPDYNAMICS_AGENT_TIER_NAME: tomcat-app-1 APPDYNAMICS_POD_NAMESPACE: one-apps APPDYNAMICS_CONTAINER_NAME: tomcat-app-1 APPDYNAMICS_NETVIZ_AGENT_HOST: (v1:status.hostIP) APPDYNAMICS_NETVIZ_AGENT_PORT: 3892 Mounts: /opt/appdynamics-java from appd-agent-repo-java (rw) 
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59w98 (ro) Conditions: Type Status PodReadyToStartContainers True Initialized True Ready True ContainersReady True PodScheduled True Volumes: appd-agent-repo-java: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> kube-api-access-59w98: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32m default-scheduler Successfully assigned one-apps/tomcat-app-1-6b86c4b444-h65sj to ip-172-31-12-116 Normal Pulling 32m kubelet Pulling image "docker.io/appdynamics/java-agent:latest" Normal Pulled 32m kubelet Successfully pulled image "docker.io/appdynamics/java-agent:latest" in 198ms (198ms including waiting). Image size: 84709497 bytes. Normal Created 32m kubelet Created container: appd-agent-attach-java Normal Started 32m kubelet Started container appd-agent-attach-java Normal Pulling 32m kubelet Pulling image "docker.io/abhimanyubajaj98/tomcat-sample" Normal Pulled 32m kubelet Successfully pulled image "docker.io/abhimanyubajaj98/tomcat-sample" in 236ms (236ms including waiting). Image size: 227949061 bytes. Normal Created 32m kubelet Created container: tomcat-app-1 Normal Started 32m kubelet Started container tomcat-app-1   Voila! This is exactly what we wanted.
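The routing above is driven by the two nsToInstrumentRegex values: each Cluster Agent only instruments namespaces matching its own regex. A quick sketch of how such regexes partition namespaces — this is purely illustrative (the helper function and its fullmatch semantics are my assumption, not the Cluster Agent's actual implementation):

```python
import re

# Regexes taken from the two Cluster Agent configs above
first_agent = re.compile(r"java-apps|nodejs-apps|dotnet-apps")  # agent in appdynamics
second_agent = re.compile(r"one-apps")                          # agent in appdynamics-1

def owning_agent(namespace: str) -> str:
    """Hypothetical helper: decide which Cluster Agent would instrument a namespace."""
    if first_agent.fullmatch(namespace):
        return "first (appdynamics)"
    if second_agent.fullmatch(namespace):
        return "second (appdynamics-1)"
    return "none"

print(owning_agent("java-apps"))    # -> first (appdynamics)
print(owning_agent("one-apps"))     # -> second (appdynamics-1)
print(owning_agent("kube-system"))  # -> none
```

The key design point is that the two regexes must not overlap; if a namespace matched both, both agents would try to instrument it.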
Where can I get a tenant ID for the cloud instance?
June 2025 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious!

We’re back with this month’s edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification program, and self-paced training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability: Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go

Training You Gotta Take

Building Splunk UI Apps | Did UI see what we did there?
We developed a new course to help you bring your Splunk apps to life with sleek, dynamic interfaces. The Building Splunk UI Apps course is your latest opportunity to master the Splunk UI (SUI) toolkit—a powerful library of React components designed to mirror the native Splunk experience. Tailored for application developers, this hands-on course dives into building responsive UIs, leveraging the Splunk REST API for data management, visualizing SPL search results, and packaging your apps for deployment. Like they say, “There’s an app for that!” And Splunk Education offers training for the ‘app for that!’
Gotta get more knowledge | Master the SUI toolkit

Lantern Getting Started Guide | AI-driven insights with Splunk software
Five years ago, amid a world of face masks, Zoom school, and makeshift home offices, Splunk Lantern was launched as a beacon of clarity in uncertain times. Lantern set out to deliver trusted, self-service guidance powered by the real-world expertise of technical Splunkers. Since then, it’s grown into a go-to resource, helping hundreds of thousands of users with practical how-tos, insider tips, and best practices. Today, we’re celebrating that journey—and shining a light on our newest release, Getting Started with Splunk Artificial Intelligence. This is a prescriptive guide designed to help users harness AI/ML capabilities in Splunk software with confidence. Cheers to five more years of lighting the way—and integrating AI to make them brighter.
Gotta get the guide | It's all about AI

Things You Needa Know

Splunk Certification | No prereqs, no problem
We know you’re smart! So, starting June 30, you won’t need to prove it to take one of our most popular Splunk certifications. The Splunk Certified Cybersecurity Defense Engineer (CDE) certification will no longer require prerequisites—making it easier than ever to level up your security credentials. This advanced certification validates your ability to use Splunk Enterprise Security and SOAR to optimize workflows, craft detections, and build powerful automations. And with all exams just $25 at .conf25, there’s never been a better (or more budget-friendly) time to certify or recertify. Walk-up testing is available, but spots fill fast—so register early and show your skills in Boston!
Needa get your CDE certification | No prereqs necessary

A taste of Splunk | More than 80 courses
What’s more fun than those tiny spoons and free samples at the ice cream shop? How about a taste of Splunk analytics? There are still some tasty things in life that can bring you joy and cost you nothing. Take, for example, over 80 free Splunk training courses. Our self-paced training courses are accessible online—covering a wide range of topics, from basic Splunk functionalities to advanced security operations. By making these resources freely available, Splunk seeks to empower aspiring cybersecurity professionals to develop in-demand skills and enhance their career prospects in a rapidly evolving field. Our samples will get you hooked—no tiny spoons required.
Needa get started | Free eLearning

Places You’ll Wanna Go

Splunk University in Beantown | Where learning meets lobstah rolls
Chowder, championships, and cutting-edge tech—Boston has it all. And this year, Splunk University is going to be part of its modern history between September 6–8. Splunk Education is rolling into town for the ultimate pre-game before .conf25, offering hands-on labs, expert-led workshops, and a front-row seat to learning with our top instructors. Whether you’re walking the Freedom Trail or walking into your next certification exam, you’ll be surrounded by fellow Splunk enthusiasts sharpening their skills in one of America’s oldest cities. And if you need more reasons to join us, check out Eric Fusilero’s blog to see why he thinks Splunk University and all-things Splunk Education at .conf25 may inspire you even more than the athletes rowing on the Charles River.
Gotta go to Boston | The new campus for Splunk University

Splunk Fundamentals on YouTube | Scroll less, learn more
Caught in a doom scroll? Scroll with purpose and head over to our new Splunk eLearning Fundamentals YouTube playlist! We’ve dropped 12 fresh, bite-sized promo videos designed to introduce novice learners to our foundational Splunk eLearning courses. These quick-hit “how-to” shorts are designed to get you curious about what’s possible. Jump in to see what’s available and get hooked on more positive and productive content—your nervous system will thank you.
Gotta change the vibe | Head to YouTube shorts

Find Your Way | Learning Bits and Breadcrumbs
Go Chat | Join our Community User Group Slack Channel
Go Stream It | The Latest Course Releases (Some with Non-English Captions!)
Go Last Minute | Seats Still Available for ILT
Go to STEP | Get Upskilled
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Index It | Subscribe to our Newsletter

Thanks for sharing a few minutes of your day with us—whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: “Nice belt.”
The AppDynamics Smart Agent offers a powerful way to automate agent installation, updates, and management. While it supports default artifact repositories, you can also host your own agent binaries and use the Custom HTTP URL option to install agents—great for enterprise customization and version control. In this article, we’ll walk through how to host agents in Amazon S3 and use the Smart Agent to install them via a custom HTTP URL.

Prerequisites

Before we start, ensure:
- You have access to an Amazon S3 bucket
- The Smart Agent is installed on your host
- The agent ZIP file (e.g., Machine Agent, Java Agent) is available locally

Step 1: Upload the Agent to S3

Use the AWS CLI to upload your agent ZIP file to S3:

aws s3 cp machineagent-bundle-64bit-linux-25.4.0.4712.zip s3://your-bucket-name/

Replace your-bucket-name with your actual S3 bucket.

Step 2: Make the Object Publicly Accessible

To allow the Smart Agent to download the file, it must be publicly reachable. Allow public access via an ACL:

aws s3api put-object-acl \
  --bucket your-bucket-name \
  --key machineagent-bundle-64bit-linux-25.4.0.4712.zip \
  --acl public-read

Note: this only works if the bucket permits ACLs and Block Public Access is disabled; buckets created with the current S3 defaults have ACLs disabled, in which case you would grant read access with a bucket policy instead.

Step 3: Get the Public HTTP URL

Public S3 objects are accessible at:

https://<bucket-name>.s3.<region>.amazonaws.com/<object-key>

Example:

https://appd-agent-smart.s3.us-west-2.amazonaws.com/machineagent-bundle-64bit-linux-25.4.0.4712.zip

Step 4: Use the URL in Smart Agent

1. Go to AppDynamics Controller → Smart Agent UI
2. Select Install Agent
3. Choose the agent type (e.g., Machine Agent)
4. Under Custom HTTP URL, paste the S3 link

Result

The Smart Agent will now download the ZIP from your public S3 URL and install the agent on the target host.
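The URL pattern in Step 3 is mechanical, so it can be assembled from the bucket, region, and object key. A small sketch (the helper function is my own, purely for illustration):

```python
def s3_public_url(bucket: str, region: str, key: str) -> str:
    """Build the virtual-hosted-style public URL for an S3 object."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

# Reproduces the example URL from Step 3
print(s3_public_url("appd-agent-smart", "us-west-2",
                    "machineagent-bundle-64bit-linux-25.4.0.4712.zip"))
# -> https://appd-agent-smart.s3.us-west-2.amazonaws.com/machineagent-bundle-64bit-linux-25.4.0.4712.zip
```

Before pasting the URL into the Smart Agent UI, it is worth fetching it once from a browser or with any HTTP client to confirm the object really is publicly readable (a misconfigured ACL or bucket policy returns 403).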