All Topics

Hi all, We have a specific AD group for each application: we create an index for that app and restrict access to that index to the app's AD group (covering all users of that app). Generally we are given an FQDN/hostname, which we map to the particular index. In this way we have accumulated numerous AD groups and indexes, but our client wants fewer AD groups because maintaining so many is difficult. So here is my question: is there any way to reduce the number of AD groups by restricting access by sourcetype rather than by index? In other words, can one index hold multiple applications, with each application restricted by its sourcetype? If yes, please help me with the approach?
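One way to sketch what is being asked (a shared index with per-role sourcetype restrictions) is a role-based search filter in authorize.conf. This is a minimal sketch with hypothetical role, index, and sourcetype names; note that srchFilter applies to every search a role runs, and filters from multiple roles are combined, so the interaction needs careful testing:

```
# authorize.conf -- role names, index, and sourcetypes are hypothetical
[role_app_a_users]
importRoles = user
srchIndexesAllowed = shared_apps
srchFilter = sourcetype=app_a

[role_app_b_users]
importRoles = user
srchIndexesAllowed = shared_apps
srchFilter = sourcetype=app_b
```

Both roles search the same shared_apps index, but each role's results are filtered to its own sourcetype, so one index can serve multiple applications.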
Hello friends! Long time gawker, first time poster. I wanted to share my recent journey on backing up and restoring Splunk user search history, for users that decided to migrate their search history to the KV store using the feature mentioned in the release notes. As with all backups/restores, please make sure you test. Hope this helps someone else. Thanks to all that helped test and validate (and listened to me vent) along the way! Please feel free to share your experiences if you use this feature, or if I may have missed something. I'll throw the code up shortly as well.

https://docs.splunk.com/Documentation/Splunk/9.1.6/ReleaseNotes/MeetSplunk
"Preserve search history across search heads: Search history is lost when users switch between various nodes in a search head cluster. This feature utilizes KV store to keep search history replicated across nodes. See search_history_storage_mode in limits.conf in the Admin Manual for information on using this functionality."

### Backup KV store - pick your flavor of backing up (REST API, Splunk CLI, or a Splunk app like "KV Store Tools Redux")
# To back up just SearchHistory
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz -appName system -collectionName SearchHistory

# To back up the entire KV store (most likely a good idea)
/opt/splunk/bin/splunk backup kvstore -archiveName `hostname`-SearchHistory_`date +%s`.tar.gz

### Restore archive
# Change directory to the location of the archive backup
cd /opt/splunk/var/lib/splunk/kvstorebackup
# Locate the archive to restore
ls -lst
# List archive files (optional, but helpful to see what's inside and how the archive will extract, to ensure you don't overwrite expected files)
tar ztvf SearchHistory_1731206815.tar.gz
-rw------- splunk/splunk 197500 2024-11-10 02:46 system/SearchHistory/SearchHistory0.json
# Extract the archive, or selected files
tar zxvf SearchHistory_1731206815.tar.gz system/SearchHistory/SearchHistory0.json

### Parse the archive to prep the restore
# Change directory to where the archive was extracted
cd /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory
# Create/copy the splunk_parse_search_history_kvstore_backup_per_user.py script (which parses archives in a directory to /tmp or someplace else) and run it on the archive(s)
./splunk_parse_search_history_kvstore_backup_per_user.py /opt/splunk/var/lib/splunk/kvstorebackup/system/SearchHistory/SearchHistory0.json
# List the files created
ls -ls SearchHistory0*
 96 -rw-rw-r-- 1 splunk splunk  95858 Nov 14 23:12 SearchHistory0_admin.json
108 -rw-rw-r-- 1 splunk splunk 108106 Nov 14 23:12 SearchHistory0_nobody.json

### Restore the archives needed
# NOTE: To prevent SearchHistory leaking between users, you MUST restore to the corresponding user context.
# Either loop/iterate through the restored files or do them one at a time, calling the corresponding REST API:
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory/batch_save -H "Content-Type: application/json" -d @SearchHistory0_<user>.json

### Validate the restore
# Validate that the SearchHistory KV store was restored properly for the user by calling the REST API, and/or by logging into Splunk as the user, navigating to "Search & Reporting", and selecting "Search History":
curl -k -u admin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory

#### NOTE: There are default limits in the KV store that you need to account for if your files are large!
If you run into problems, review your splunkd.log and/or the KV Store dashboards within the MC (Search --> KV Store).

# /opt/splunk/bin/splunk btool limits list --debug kvstore
/opt/splunk/etc/system/default/limits.conf  [kvstore]
/opt/splunk/etc/system/default/limits.conf  max_accelerations_per_collection = 10
/opt/splunk/etc/system/default/limits.conf  max_documents_per_batch_save = 50000
/opt/splunk/etc/system/default/limits.conf  max_fields_per_acceleration = 10
/opt/splunk/etc/system/default/limits.conf  max_mem_usage_mb = 200
/opt/splunk/etc/system/default/limits.conf  max_queries_per_batch = 1000
/opt/splunk/etc/system/default/limits.conf  max_rows_in_memory_per_dump = 200
/opt/splunk/etc/system/default/limits.conf  max_rows_per_query = 50000
/opt/splunk/etc/system/default/limits.conf  max_size_per_batch_result_mb = 100
/opt/splunk/etc/system/default/limits.conf  max_size_per_batch_save_mb = 50
/opt/splunk/etc/system/default/limits.conf  max_size_per_result_mb = 50
/opt/splunk/etc/system/default/limits.conf  max_threads_per_outputlookup = 1

### Troubleshooting
# To delete the entire SearchHistory KV store (because maybe you inadvertently restored everything to an incorrect user, testing, or due to other shenanigans)
/opt/splunk/bin/splunk clean kvstore -app system -collection SearchHistory

# To delete a user-specific context in the SearchHistory KV store (because see above)
curl -k -u admin:splunk@dmin https://localhost:8089/servicesNS/<user>/system/storage/collections/data/SearchHistory -X DELETE

### Additional Notes
It was noted that restoring for a user that has not logged in yet may report messages similar to "Action forbidden". To remedy this, you might be able to create a local user and then restore again.
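The splunk_parse_search_history_kvstore_backup_per_user.py script referenced above hasn't been posted yet, so here is a hypothetical sketch of what such a per-user splitter might look like. It assumes the backup file is a JSON array of KV store documents and that each document carries a "_user" field identifying its owner; verify both assumptions against your own archive before relying on it:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of splunk_parse_search_history_kvstore_backup_per_user.py.

Assumes the backup is a JSON array of KV store documents, each with a
"_user" field identifying its owner -- check your own archive first.
"""
import json
import os
import sys
from collections import defaultdict

def split_by_user(path):
    with open(path) as f:
        docs = json.load(f)          # entire backup as one JSON array
    per_user = defaultdict(list)
    for doc in docs:
        per_user[doc.get("_user", "nobody")].append(doc)
    base, ext = os.path.splitext(path)
    for user, user_docs in per_user.items():
        out = f"{base}_{user}{ext}"  # e.g. SearchHistory0_admin.json
        with open(out, "w") as f:
            json.dump(user_docs, f)  # batch_save accepts a JSON array
        print(f"wrote {len(user_docs)} documents to {out}")

if __name__ == "__main__":
    split_by_user(sys.argv[1])
```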
Hi, I am deploying Splunk Enterprise and will eventually be forwarding Check Point Firewall logs using Check Point's Log Exporter. Check Point provides the option to select "Syslog" or "Splunk" as the log format (there are some other formats as well); I will choose "Splunk". I need to know how to configure Splunk Enterprise to receive encrypted traffic from Check Point if I use TLS at the Check Point to send encrypted traffic to Splunk. Can someone enlighten me on this please?   Thanks!
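For reference, receiving TLS-encrypted TCP traffic in Splunk Enterprise is typically done with a tcp-ssl input plus an [SSL] stanza in inputs.conf. A minimal sketch, assuming a raw TCP-over-TLS input on port 6514 and a server certificate already in place (the port, index, sourcetype, and paths here are placeholders, not Check Point specifics):

```
# inputs.conf on the receiving Splunk instance
[tcp-ssl:6514]
index = checkpoint
sourcetype = cp_log

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/server.pem
sslPassword = <certificate_password>
requireClientCert = false
```

Log Exporter would then be pointed at this host and port with TLS enabled on the Check Point side; restart Splunk after making the change.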
My effective daily volume was set to 3072 MB from three licenses of 1024 MB each. These were about to expire, so we added another three 1024 MB licenses. But when the original three licenses expired, the effective daily volume dropped from 6144 MB to 1024 MB instead of 3072 MB. Does anyone know why it is not counting the three licenses correctly? Even after removing the three expired ones it is limited to 1024 MB only, and restarting Splunk gives the same result.
Hi everyone, I'm trying to personalize the "Configuration" tab of my app generated by Add-on Builder. By default, when we add an account, we enter the Account Name / Username / Password. Firstly, I would simply like to change the labels for Username and Password to Client ID and Client Secret (and secondly, add a Tenant ID field). I achieved this by editing the file at $SPLUNK_HOME/etc/apps/<my_app>/appserver/static/js/build/globalConfig.json and then incrementing the version number in the app properties (as shown in this post: https://community.splunk.com/t5/Getting-Data-In/Splunk-Add-on-Builder-Global-Account-settings/m-p/570565). However, when I make new modifications elsewhere in my app, the globalConfig.json file is reset to its default values. Do you know how to make these changes persist? Splunk version: 9.2.1. Add-on Builder version: 4.3.0. Thanks
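For context, the kind of edit being described looks roughly like the fragment below inside the generated globalConfig.json: an account entity whose underlying field stays "username" while the displayed label changes. The exact schema depends on the Add-on Builder / UCC version, so treat this as illustrative only:

```json
{
    "type": "text",
    "label": "Client ID",
    "field": "username",
    "help": "Enter the Client ID",
    "required": true
}
```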
Hello AppDynamics Community!  SAN FRANCISCO – November 12, 2024 – Splunk, the cybersecurity and observability leader, today announced innovations across its expanded observability portfolio to empower organizations to build a leading observability practice. These product advancements provide ITOps and engineering teams with more options to unify visibility across their entire IT environment to drive faster detection and investigation, harness control over data and costs and improve their digital resilience.   Many ITOps and engineering teams struggle with scattered visibility across their tech stack and lack insights into the performance issues impacting their business. Observability with the right business context is key to overcoming this. Now with these observability innovations, Splunk customers have access to smarter insights and automation for guided troubleshooting, deeper visibility into Kubernetes environments and a holistic user experience. Through Splunk’s observability portfolio, organizations have richer insights to monitor critical issues, such as disrupted user transactions and supply chain outages, and take quick and accurate corrective actions to enhance digital resilience. “To help overcome scattered visibility and blind spots of today’s hybrid and modern environments, injecting the right business and user context is key,” said Patrick Lin, SVP and GM of Observability at Splunk, a Cisco Company. “Splunk excels at this — our observability portfolio, supercharged by AppDynamics, provides customers essential business context to see how a problem originated and what, or who, is being impacted. We also provide breadth of coverage so customers can standardize their observability practice on one solution.” Read the full Press Release on AppDynamics.com
The latest enhancements across the Splunk Observability portfolio deliver greater flexibility, better data and cost controls, cross-portfolio integrations, and more intuitive workflows to streamline troubleshooting across any environment and help ITOps and engineering teams strengthen their observability practice to build digital resilience.

New in Splunk Observability this month:
- Metrics Usage Analytics for Splunk Observability Cloud
- Kubernetes proactive troubleshooting enhancements
- UI changes to improve onboarding
- User Interface (UI) alignment between Splunk AppDynamics and Splunk Observability Cloud
- Global Search updates
- Archived metrics visibility in charts and detectors
- Synthetic Monitoring performance KPI enhancements
- APM Tag Spotlight enhancements

Learn more about each of these enhancements:

Metrics Usage Analytics (MUA) - US0/US1 realms
Metrics Usage Analytics (MUA) is a new self-serve reporting interface available within Metrics Pipeline Management that provides visibility into how many Metric Time Series (MTS) are being generated, used, and stored across your system. With MUA, you can unlock detailed information about data usage so you can optimize your telemetry volume and costs.

Kubernetes Proactive Troubleshooting Improvements
These latest improvements to the Kubernetes navigator in Splunk Infrastructure Monitoring give more options for greater optimization and improve usability for troubleshooting. The new features include:
- Tabular and heatmap troubleshooting views
- Additional entity coverage for K8s entities
- Hierarchy browser in instance views
- Data links in tables / metadata / sidebar

UI Changes to Improve Onboarding
We've released a series of UI changes to improve the onboarding experience, including:
- Adding a tab layout to the Data Management page
- Replacing the OTel Collector banner with a Guided Onboarding banner
- Updating the listed integrations and adding a call-to-action for hosts/clusters without the OTel Collector installed in the Infrastructure Inventory

User Interface (UI) Alignment
We've released a series of UI changes in Observability Cloud to align the customer experience across AppDynamics and Splunk Observability Cloud. This is the first step toward building a more cohesive Splunk Observability Cloud and AppDynamics-powered observability experience.

Global Search Updates
The latest updates to the search experience in Splunk Observability Cloud, accessible via the search feature in the top navigation bar, improve search coverage across RUM, Synthetics, and Log Observer and enhance the loading of incoming results.

Archived Metrics Visibility in Charts & Detectors
With this update, users building charts and detectors will know whether metric data has been archived using Archived Metrics.

Synthetic Monitoring Performance KPI Enhancements
We've enhanced the Run Results view, making metrics visualization clearer to help streamline data analysis and improve ease of use for real-time troubleshooting. Enhancements include:
- Custom / absolute time: Dynamic resolution in Observability dashboards automatically adjusts data views based on zoom levels and data density, providing granular detail when zoomed in and summarized data when zoomed out. There is no need to manually set frequencies based on time period to adjust the view.
- Historic runs: The Run History page offers up to 90 days of historical data across all three test types, capturing artifacts like waterfalls and API responses for comprehensive analysis.
- Play/pause run data: Pausing run data at a specific time frame lets you freeze the display, making it easier to analyze and troubleshoot issues without the chart updating continuously due to live data changes.

APM Tag Spotlight Enhancements
Users can now rearrange cards in Splunk APM Tag Spotlight. With this capability, you can customize the Tag Spotlight layout based on the span attributes that are most important to you for faster troubleshooting. Layouts are customizable per user.

Plus Enhancements in Splunk Platform & Splunk AppDynamics
- Centralized role and user management: This release allows Splunk admins to manage users and roles across Splunk Cloud and Splunk Observability Cloud through a single UI in Splunk Cloud.
- Splunk Observability Cloud metrics in Splunk Cloud: Splunk Cloud users can bring real-time monitoring metric series (IM, some of APM, RUM, Synthetics) into Splunk Dashboard Studio for more context on their events in their usual charting interface.
- Log Observer Connect enhancements - access control list: Splunk Platform admins can now restrict access to specified users (at a service-account level) to provide more granular control over who can see logs in Splunk Observability Cloud.
Register here! This thread is for the Community Office Hours session on Security: Ask Me Anything on Wed, Jan 15, 2025 at 1pm PT / 4pm ET.

This is your opportunity to ask questions related to your specific Splunk Security needs. In our first broad security topic session, our experts are ready to answer all your questions.

What can I ask in this AMA?
- How to better get started with Splunk Security? What are the essential steps?
- What are the latest innovations in ES, SOAR, Mission Control, SAA, and so on?
- What are the best practices for implementing security use cases, like incident management, RBA, automation, and so on?
- What is the best approach to building a unified workflow with ES, SOAR, and other security products?
- What third-party integrations does Splunk Security have, and how do you best configure these tools?
- How do industry frameworks such as ATT&CK and the Cyber Kill Chain work within ES and SOAR?
- Anything else you'd like to learn!

Please submit your questions at registration. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
I am confused about why I only get IDs from my Salesforce ingest. For example, I am not getting Username, Profile Name, Dashboard Name, Report Name, etc. I am getting the User ID, Profile ID, Dashboard ID, and so forth, which makes searches really difficult. How am I supposed to correlate an ID to readable, relevant information, where a User ID equates to a Username (Davey Jones)? Help please.
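One common pattern is to build a lookup from the ingested User object and enrich the ID-only events with it. A sketch with assumed index, sourcetype, and field names from the Salesforce add-on; adjust to what your ingest actually produces. First, a scheduled search to maintain the lookup:

```
index=sfdc sourcetype="sfdc:user"
| stats latest(Name) AS Username BY Id
| outputlookup sfdc_user_lookup.csv
```

Then enrich searches that only carry IDs:

```
index=sfdc sourcetype="sfdc:loginhistory"
| lookup sfdc_user_lookup.csv Id AS UserId OUTPUT Username
```

The same approach extends to Profile, Dashboard, and Report objects, one lookup per object type.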
At their best, detectors and the alerts they trigger notify teams when applications aren't performing as expected and help quickly resolve errors to keep things up and running. At their worst, they cause an overwhelming amount of noise that fatigues our engineering teams, distracts from real application issues, and decreases the reliability of our products. In this post, we'll look at ways we can skew toward the former to keep our detectors helpful, our engineers unpaged by unnecessary alerts, our applications performant, and our users happy.

What are detectors?
Detectors set conditions that determine when to alert or notify engineering teams of application issues. These detectors monitor signals and trigger alerts when the signals meet specified rules. These rules are defined either by engineering teams or automatically by third-party observability platforms, like Splunk Observability Cloud's AutoDetect alerts and detectors.

It can be tempting to create detectors around every possible static threshold that might signal that our system is in an undesirable state – CPU above a certain percentage, disk space low, throughput high, etc. However, creating too many detectors and setting off too many alerts doesn't always lead to actionable solutions and causes alert fatigue. Creating the right detectors can help minimize false positives or negatives, reduce alert noise, and improve troubleshooting while reducing downtime and fatigue.

How to create good detectors
It's important to thoughtfully and intentionally create detectors to keep them as meaningful and helpful as possible. Detectors and the alerts they trigger should be:
- Signals of an actual emergency, or a signal that there's about to be an emergency
- Actionable and based on symptoms, not causes
- Well-documented
- Built using Observability as Code
- Iterated upon

Let's dig into each of these best practices.

Detectors should signal an actual emergency or impending emergency
The Google SRE book defines alerting as, "Something is broken, and somebody needs to fix it right now! Or, something might break soon, so somebody should look soon." Preferably, our system is self-healing and can autoscale, load balance, and retry to minimize human intervention. If these things can't happen, and our users aren't able to access our site, are experiencing slow load times, or are losing data, then our teams should be paged because there's a real emergency that requires human intervention.

If the alert being fired doesn't meet these criteria, it should perhaps be a chart living in a dashboard or a factor contributing to the service's health score instead of an alert that could potentially wake engineers up in the middle of the night. For example, high throughput could lead to slower response times and a degraded user experience, but it might not (e.g. when CPU spikes precede auto-scaling events). Instead of creating a detector around high throughput itself, alerting on throughput thresholds and metrics surrounding latency or error rates would indicate an official or looming emergency.

At the end of the day, your application exists to serve a purpose. If that purpose isn't being served or is in danger of not being served, alerts need to fire. If your infrastructure is running extremely hot, but the application is working, you can probably wait on alerting until business hours to look at additional capacity. But if users are being impacted, that's a real emergency.

Detectors should be actionable
Human intervention is expensive.
It interrupts work on deliverables, distracts from other potential incidents, and can unnecessarily stress out and exhaust the humans involved – either by literally waking them from sleep or by fatigue from too many alerts. Therefore, when detectors and the alerts they trigger are created, they should define actionable states.

If a detector sets a static threshold on something like CPU and creates an alerting rule that triggers an alert when CPU spikes above 80%, the human responding to that page will go into an observability platform, see the CPU spike, and may not have much more information right off the bat. Are error rates spiking? Are response times slow? Are customers impacted? Is CPU spiking because of garbage collection or backup processing? Is any action necessary? Static thresholds like these don't provide much context around end-user impact or troubleshooting. Instead, creating detectors and alerts around the symptoms, or the things that impact user experience (think Service Level Objectives), means that there's something wrong, there's a real emergency, and a human needs to step in and take action.

Some examples of actionable symptoms include:
- HTTP response codes like 500s or 404s
- Slow response time
- Users aren't seeing expected content
- Private content is exposed to unauthorized users

Here's a good example of a Splunk Observability Cloud detector that alerts when the error rate for the API service is above 5% over a 5-minute time period; alerts would be triggered where the red triangles are shown.

Another example of a seemingly good detector is one that alerts when disk utilization percentage for a specific host goes above 80% for a specified amount of time. However, this specific host initiates a data dump to S3 every 48 hours at 4 am EST, and this data dump causes a period of increased disk utilization. We can see in the alert settings that this alert is predicted to trigger in 48 hours when our S3 dump will occur, and we definitely don't want to page the engineering team that manages this host at 4 am. We could adjust the signal resolution to a longer time period if needed, or we could create a muting rule that suppresses notifications for this specific condition. The downside to this is that we could potentially miss a real incident, so we might also want to create a second detector around disk utilization percentage with a slightly higher threshold (that wouldn't alert with our data dump spike) to monitor utilization during this time period.

Detectors should be well-documented
Responding to an alert triggered by a detector should not require institutional knowledge and pulling multiple humans in to resolve the incident. There are times when this will happen, but detectors should be well-documented, with runbooks for quick and easy guidance on troubleshooting steps.

Notifying the right people and setting the right escalation policies should also be considered in the configuration of detectors and alerts. For example, it doesn't make sense to page the entire engineering team for every incident; it does make sense to page the code owners for the service experiencing issues. It doesn't make sense to page leadership teams if a handful of customers are experiencing latency; it does make sense to page leadership teams if an entire product is offline.

Detectors should be built using Observability as Code
Speaking of documentation… creating detectors and alerts as code provides a source-controlled documentation trail of why, when, and by whom the detector was created.
This means that any new members joining the team will have background knowledge on the detectors and alerts that they will be on-call for. It also creates an audit trail when fine-tuning and adjusting thresholds, rules, notification channels, etc.

Using Observability as Code also makes it easier to standardize detectors and alerts between services, environments, and teams. This standardization means that all members of all teams have at least a baseline knowledge of the detectors and alerts and can potentially have greater context and ability to jump in and help during an incident.

Here's an example of creating a Splunk Observability Cloud detector using Terraform (see the sketch after this post). Check out our previous post to learn more about deploying observability configuration via Terraform.

Detectors should be iterated upon
Detectors and alerts should work for our teams, not against them, and they should constantly evolve with our services to improve the reliability of our systems. To keep detectors and alerts actionable and relevant, it's important to update, fine-tune, or even delete them when they're no longer effective or helpful. If alerts are frequently firing but require no action, this is a sign to adjust thresholds or re-evaluate the need for the detector. If the actions taken by the incident response team could be easily automated, this is a sign to adjust or remove the detector. If anyone on your team uses the word "ignore" when describing a detector or alert, this is a sign to adjust or remove the detector. If team members are unable to troubleshoot an incident effectively based on the information they receive from a detector or alert, this is a sign to iterate on the detector.

Alerting detectors should never be the norm. If a detector is alerting, action should be required. Having a board full of "business as usual" alerting detectors quickly leads to false positives, false negatives, and ultimately hurts customer experience.

Wrap up
Detectors and alerts are critical to keeping our systems up and running, but too many detectors can work against that goal. Good detectors:
- Signal real emergencies
- Indicate an actual issue with user experience and provide our on-call teams with actionable insight
- Are well-documented
- Are built using source control
- Are continuously fine-tuned

These things help keep our detectors and alerts useful, our engineering teams focused, and our users happy.

Interested in creating some good detectors in Splunk Observability Cloud? Try out a free 14-day trial!

Resources
- Google SRE book: Monitoring Distributed Systems
- How to Create Great Alerts
- Introduction to alerts and detectors in Splunk Observability Cloud
- Introduction to the Splunk Terraform Provider: Create a Detector in Splunk Observability Cloud
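The Terraform example referenced in the post above was an embedded image, so here is a minimal sketch of a detector via the signalfx Terraform provider, reusing the error-rate example from earlier in the post. The metric and dimension names (spans.count, sf_service, sf_error) are assumptions to be checked against your own data:

```hcl
# Sketch: a detector that fires when the API service error rate exceeds 5% for 5 minutes
resource "signalfx_detector" "api_error_rate" {
  name        = "API service error rate"
  description = "Error rate above 5% for 5 minutes"

  # SignalFlow program text; filter dimensions are assumptions
  program_text = <<-EOF
    errors = data('spans.count', filter=filter('sf_service', 'api') and filter('sf_error', 'true')).sum()
    total  = data('spans.count', filter=filter('sf_service', 'api')).sum()
    detect(when((errors / total) * 100 > 5, lasting='5m')).publish('API error rate > 5%')
  EOF

  rule {
    detect_label = "API error rate > 5%"
    severity     = "Critical"
  }
}
```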
I am looking to build an alert that sends an email if someone locks themselves out of their Windows account after so many attempts. I built a search, but when using real-time alerting I was bombarded with emails every few minutes, which doesn't seem right. I would like the alert to be set up such that if user X types the password wrong three times and AD locks the account, an email is sent to Y's email address. Real-time alerting seems to be what I would need, but it bombards me way too much.
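A scheduled alert is usually a better fit than real-time here. A minimal sketch, assuming Windows Security events land in an index named wineventlog (EventCode 4740 is the account-lockout event AD writes once it locks the account, so you don't need to count the failed attempts yourself). Schedule it every 5 minutes over the last 5 minutes, set the trigger to "per result", and throttle on the user field so each lockout emails only once:

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4740
| stats count AS lockouts latest(_time) AS last_lockout BY user
| convert ctime(last_lockout)
```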
Has anyone figured out how to make a PowerShell input run only at its scheduled time? In addition to the scheduled time, it runs every time the forwarder is restarted.
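For reference, a scheduled PowerShell input stanza looks like the sketch below (stanza name, script path, and cron schedule are placeholders). Whether a cron-style schedule, as opposed to an interval, still fires on forwarder restart in your version is worth testing:

```
# inputs.conf
[powershell://NightlyCollect]
script = . "$SplunkHome\etc\apps\my_app\bin\collect.ps1"
schedule = 0 2 * * *
sourcetype = my:powershell
```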
Team, I am a bit new to Splunk and need help pulling the ERR message from the sample raw data below.

{"hosting_environment": "nonp", "application_environment": "nonp", "message": "[20621] 2024/11/14 12:39:46.899958 [ERR] 10.25.1.2:30080 - pid:96866" - unable to connect to endpoint , "service": "hello world"}

Thanks!
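A minimal rex sketch for pulling the text after the [ERR] marker (the index and sourcetype are placeholders; tighten the pattern to your actual event boundaries):

```
index=your_index sourcetype=your_sourcetype "[ERR]"
| rex field=_raw "\[ERR\]\s+(?<err_msg>.*)"
| table _time err_msg
```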
Hi Team, I have a Splunk query that I am testing for a ServiceNow data extract.

index=snow "INC783"
| search dv_state="In Progress" OR dv_state="New" OR dv_state="On Hold"
| stats max(_time) as Time latest(dv_state) as State by number, dv_priority
| fieldformat Time=strftime(Time,"%Y-%m-%d %H:%M:%S")
| table number, Time, dv_priority, State

The challenge is that the output lists all the states for the particular incident, even though I tried to filter for only the latest/max time:

number   Time                 dv_priority   State
INC783   2024-11-13 16:56:14  1 - Critical  In Progress
INC783   2024-11-13 17:00:03  3 - Moderate  On Hold

The data must only show the latest row, which should be the "On Hold" one. I have tried multiple methods, but they all fail and show everything. How can I achieve this?

Thanks
Jerin V
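The likely culprit is the BY clause: grouping by both number and dv_priority produces one row per priority per incident, so an incident whose priority changed appears twice. One sketch of a fix is to group by number only and pull dv_priority with latest(), alongside the latest state:

```
index=snow "INC783" dv_state IN ("New", "In Progress", "On Hold")
| stats max(_time) AS Time latest(dv_state) AS State latest(dv_priority) AS dv_priority BY number
| fieldformat Time=strftime(Time, "%Y-%m-%d %H:%M:%S")
| table number Time dv_priority State
```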
Hi, I have deployed AppServerAgent-1.8-24.9.0.36347 with the following steps, but the agent is reporting errors and is not getting loaded.

ENV APPDYNAMICS_CONTROLLER_HOST_NAME="abc"
ENV APPDYNAMICS_CONTROLLER_PORT="443"
ENV APPDYNAMICS_AGENT_ACCOUNT_NAME="abc"
ENV APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY="your-access-key"
ENV APPDYNAMICS_AGENT_APPLICATION_NAME="-----------"
ENV APPDYNAMICS_AGENT_TIER_NAME="-----------"
ENV APPDYNAMICS_AGENT_NODE_NAME="your-node-name"

The agent is not getting loaded and gives an error: Error:
Is it because the application name is not getting resolved, or do I need to make some other configuration?

2024-11-14 13:02:18 OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended
2024-11-14 13:02:18 Java 9+ detected, booting with Java9Util enabled.
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:18 Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using ephemeral node setting [false]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Full Agent Registration Info Resolver using node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:19 Install Directory resolved to[/opt/appdynamics]
2024-11-14 13:02:19 getBootstrapResource not available on ClassLoader
2024-11-14 13:02:20 Class with name [com.ibm.lang.management.internal.ExtendedOperatingSystemMXBeanImpl] is not available in classpath, so will ignore export access.
2024-11-14 13:02:20 Class with name [jdk.internal.util.ReferencedKeySet] is not available in classpath, so will ignore export access.
2024-11-14 13:02:20 Failed to locate module [jdk.jcmd]
2024-11-14 13:02:20 Failed to locate module [jdk.attach]
2024-11-14 13:02:23 [AD Agent init] Thu Nov 14 07:32:23 GMT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - UUIDPool size is 10
2024-11-14 13:02:25 Agent conf directory set to [/opt/appdynamics/ver24.9.0.36347/conf]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: JavaAgent - Agent conf directory set to [/opt/appdynamics/ver24.9.0.36347/conf]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver is running
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_APPLICATION_NAME] for application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_TIER_NAME] for tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:25 [AD Agent init] Thu Nov 14 07:32:25 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver found env variable [APPDYNAMICS_AGENT_NODE_NAME] for node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using selfService [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using ephemeral node setting [false]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using application name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using tier name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Full Agent Registration Info Resolver using node name [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[DEBUG]: AgentInstallManager - Full Agent Registration Info Resolver finished running
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent runtime directory set to [/opt/appdynamics/ver24.9.0.36347]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent node directory set to [measurement-api-service-637966829491.us-east1.run.app]
2024-11-14 13:02:26 Agent runtime conf directory set to /opt/appdynamics/ver24.9.0.36347/conf
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: AgentInstallManager - Agent runtime conf directory set to /opt/appdynamics/ver24.9.0.36347/conf
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - JDK Compatibility: 1.8+
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - Using Java Agent Version [Server Agent #24.9.0.36347 v24.9.0 GA compatible with 4.4.1.0 r8d55fc625d557c4c19f7cf97088ecde0a2e43e82 release/24.9.0]
2024-11-14 13:02:26 [AD Agent init] Thu Nov 14 07:32:26 GMT 2024[INFO]: JavaAgent - Running IBM Java Agent [No]

Thanks & Regards
Rupinder Kaur
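The pasted error text is truncated, so the actual failure isn't visible here, but two things commonly checked in container deployments like this are that the JVM is actually launched with the agent and that TLS is enabled when the controller port is 443. A hedged Dockerfile fragment (the agent path is an assumption):

```dockerfile
# Ensure the JVM loads the agent; JAVA_TOOL_OPTIONS is honored by most JVMs
ENV JAVA_TOOL_OPTIONS="-javaagent:/opt/appdynamics/javaagent.jar"
# Controller port 443 normally implies TLS on the agent side
ENV APPDYNAMICS_CONTROLLER_SSL_ENABLED="true"
```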
Hello experts, I'm a Splunk newbie. I am using the Jira Service Desk simple AddOn to raise Splunk alerts as Jira tickets, and I've confirmed that the alerts successfully create tickets. However, sometimes for the same alert, one ticket receives the customfield value correctly while another ticket gets no value at all. In Splunk I can confirm that the value exists, but it is not carried into the ticket. No matter how much I searched, I couldn't find the reason. If the Jira and Splunk field mappings were incorrect, I shouldn't be able to get the value in any ticket, yet it's only missing from specific tickets. What could the problem be? Example here: the Client field always has a value, but as shown in the images, the customer value does not exist on the error ticket.
[Error Ticket]
[Normal Ticket]
hi guys, After adding the [clustering] stanza and [replication_port://9887] in the indexer cluster, I'm getting the error below:

ERROR - Waiting for the cluster manager to come up... (retrying every second)

The service is running, but it gets stuck at "Waiting for web server at http://<ip>:8000 to be available." Later I got this warning: "WARNING: web interface does not seem to be available!" How do I fix this issue? Ports 8000 and 9887 are already open, and I've set the same pass4SymmKey on all 3 servers in the indexer cluster.
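For comparison, a typical peer-node configuration looks like the sketch below (placeholders throughout; older versions use master_uri and mode = slave instead of manager_uri and mode = peer). The pass4SymmKey under [clustering] must match the manager's, and the manager must be up and reachable on its management port (8089 by default, which also needs to be open between the nodes) before peers will finish starting:

```
# server.conf on each peer
[replication_port://9887]

[clustering]
manager_uri = https://<manager-ip>:8089
mode = peer
pass4SymmKey = <same key as on the manager>
```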
Our add-on requires dateparser, and dateparser in turn requires the regex library. I added the libraries to my add-on (built with Add-on Builder 4.3); however, it fails on Splunk 9.2 with:

File "/opt/splunk/etc/apps/.../regex/_regex_core.py", line 21, in <module>
import regex._regex as _regex
ModuleNotFoundError: No module named 'regex._regex'

If I try on Splunk 9.3 it works fine. I know the Python version changed from 3.7 to 3.9 in Splunk 9.3, but regex version 2024.4.16 seems to support Python 3.7. I would appreciate any insight on how to solve this issue.
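The missing module (regex._regex) is the library's compiled C extension, so this looks like a wheel built for one Python version being loaded by another (9.2 ships Python 3.7, 9.3 ships 3.9). One hedged approach is to vendor a wheel that matches each runtime, for example with pip download, and unpack each into a version-specific lib directory. This assumes a cp37 wheel actually exists for the pinned version; if it doesn't, an older regex release that still publishes cp37 wheels may be needed:

```
# Fetch a binary wheel built for CPython 3.7 / manylinux, to unpack into the add-on's lib dir
pip download regex==2024.4.16 --only-binary=:all: \
    --python-version 3.7 --platform manylinux2014_x86_64 \
    -d ./wheels_py37
```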
Hi everyone, I’m working with Splunk IT Service Intelligence (ITSI) and want to automate the creation of maintenance windows using a scheduled search in SPL. Ideally, I’d like to use the rest command within SPL to define a maintenance window, assign specific entities and services to it, and have it run on a schedule. Is it possible to set up maintenance windows with entities and services directly from SPL? If anyone has sample SPL code or guidance on setting up automated maintenance windows, it would be very helpful! Thanks in advance!
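One caveat worth noting: the SPL rest command only issues GET requests, so creating objects generally needs a POST from outside SPL (curl in a cron job, a script, or a custom search command). As a hedged sketch against ITSI's maintenance services interface (the endpoint shape and payload fields should be verified against the ITSI REST API reference for your version, and the keys and epoch times below are placeholders):

```
curl -k -u admin https://localhost:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar \
  -H "Content-Type: application/json" \
  -d '{"title": "Patching window",
       "start_time": 1736946000,
       "end_time": 1736953200,
       "objects": [{"_key": "<service_key>", "object_type": "service"},
                   {"_key": "<entity_key>", "object_type": "entity"}],
       "comment": "created by automation"}'
```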
I am on Splunk 8.2.12. I am trying to get a distinct count of incidents that have happened in each month, year to date, and I'd like to compare that to the year prior. I feel like this should be pretty easy, but my results aren't showing the current year in comparison to the previous year. This shows the current year's data (2024):

(earliest=-1@y@y AND latest=now())
| eval date_month=strftime(_time, "%mon")
| eval date_year = strftime(_time, "%Y")
| timechart span=1mon dc(RMI_MastIncNumb) as "# of Incidents"

When I add | timewrap 1year series=exact time_format=%Y it ends up just showing me 2023.
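Two things stand out in the search as posted: "%mon" is not a strftime specifier ("%b" gives the abbreviated month name), and earliest=-1@y@y looks like a typo for -1y@y (start of last year). One sketch that sidesteps timewrap entirely by charting month against year, using the field names from the original search:

```
earliest=-1y@y latest=now()
| eval year = strftime(_time, "%Y")
| eval month = strftime(_time, "%m-%b")
| chart dc(RMI_MastIncNumb) AS "# of Incidents" over month by year
```

This yields one row per month with a column per year (e.g. 2023 and 2024 side by side), which makes the year-over-year comparison direct.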