All Topics

Dear Community members, it would be helpful if someone could assist me with relevant documentation on deploying Splunk Enterprise Security on AWS containers versus an AWS EC2 instance, so I can compare which is the appropriate model that Splunk will support for any future issues and for upgrades.
https://docs.splunk.com/Documentation/ES/7.3.2/Install/InstallEnterpriseSecurity I installed Splunk Enterprise Security to verify operation, following the manual above, and completed the initial configuration. When I open Incident Review from the app's menu bar, an error message appears saying "An error occurred while loading some filters." When I open Investigations, an error message appears saying "Unknown Error: Failed to fetch KV Store." I can't display Incident Review or Investigations. Is anyone else experiencing the same issue?
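A common first diagnostic for errors like these (a sketch of standard checks, not specific to this installation) is to confirm that the KV store itself is healthy on the search head, since Incident Review filters and Investigations are both backed by KV store collections:

    # Run on the search head as the splunk user; paths assume a default install
    $SPLUNK_HOME/bin/splunk show kvstore-status

    # KV store startup and replication errors are logged here
    tail -n 100 $SPLUNK_HOME/var/log/splunk/mongod.log

The status output should report the member as ready; anything else usually points at the KV store rather than at Enterprise Security itself.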
Hi Splunk Community, I’m new to integrating Citrix NetScaler with Splunk, but I have about 9 years of experience working with Splunk. I need your guidance on how to successfully set up this integration to ensure that: All data from NetScaler is ingested and extracted correctly. The dashboards in the Splunk App for Citrix NetScaler display the expected panels and trends. Currently, I have a 3-machine Splunk environment (forwarder, indexer, and search head). Here's what I’ve done so far: I installed the Splunk App for Citrix NetScaler on the search head. Data is being ingested from the NetScaler server via the heavy forwarder, but I have not installed the Splunk Add-on for Citrix NetScaler on the forwarder or indexer. Despite this, the dashboards in the app show no data. From your experience, is it necessary to install the Splunk Add-on for Citrix NetScaler on the heavy forwarder (or elsewhere) to extract and normalize the data properly? If so, would that resolve the issue of empty dashboards? Any insights or steps to troubleshoot and ensure proper integration would be greatly appreciated! Thanks in advance!    
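As a general pattern (not specific to this environment), dashboards in apps like this depend on the sourcetypes, field extractions, and tags supplied by the corresponding add-on, so checking where the Splunk Add-on for Citrix NetScaler is installed is a reasonable first step. Below is a minimal sketch of a syslog input on the heavy forwarder; the port, index, and sourcetype shown are illustrative assumptions, so verify the exact sourcetype names against the add-on's documentation:

    # inputs.conf on the heavy forwarder (illustrative sketch only)
    [udp://514]
    sourcetype = citrix:netscaler:syslog
    index = netscaler

If the data is already arriving under a different sourcetype, the app's dashboards may not match it, which would also produce empty panels.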
"""https://docs.splunk.com/observability/en/rum/rum-rules.html#use-cases Write custom rules for URL grouping in Splunk RUM — Splunk Observability Cloud documentation Write custom rules to group URL... See more...
"""https://docs.splunk.com/observability/en/rum/rum-rules.html#use-cases Write custom rules for URL grouping in Splunk RUM — Splunk Observability Cloud documentation Write custom rules to group URLs based on criteria that matches your business specifications, and organize data to match your business needs. Group URLs by both path and domain.  you also need custom URL grouping rules to generate page-level metrics (rum.node.*) in Splunk RUM.""   As per the splunk documentation ,we have configured custom URL grouping. But rum.node.* metrics not available.   pls help on this Path configured    
Hello there, I would like to pass multiple values in the label. With the current search I can only pass one value at a time:

<input type="multiselect" token="siteid" searchWhenChanged="true">
  <label>Site</label>
  <choice value="*">All</choice>
  <choice value="03">No Site Selected</choice>
  <fieldForLabel>displayname</fieldForLabel>
  <fieldForValue>prefix</fieldForValue>
  <search>
    <query>
      | inputlookup site_ids.csv
      | search displayname != "ABCN8" AND displayname != "ABER8" AND displayname != "AFRA7" AND displayname != "AMAN2"
    </query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <delimiter>_fc7 OR index=</delimiter>
  <suffix>_fc7</suffix>
  <default>03</default>
  <initialValue>03</initialValue>
  <change>
    <eval token="form.siteid">case(mvcount('form.siteid') == 2 AND mvindex('form.siteid', 0) == "03", mvindex('form.siteid', 1), mvfind('form.siteid', "\\*") == mvcount('form.siteid') - 1, "03", true(), 'form.siteid')</eval>
  </change>
  <change>
    <set token="tokLabel">$label$</set>
  </change>
</input>

I need to pass this label value as well, which is a multiselect value. Thanks!
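One possible approach, sketched below purely as an illustration, is to drive the label from a separate dashboard-level search that maps the selected prefixes back to their display names and joins them into a single token. This assumes that $form.siteid$ expands to the raw selected values in a form the IN clause can consume, which may need adjusting to the actual token format in this dashboard:

    <!-- Illustrative sketch: rebuild a label token from the selected values -->
    <search>
      <query>
        | inputlookup site_ids.csv
        | search prefix IN ($form.siteid$)
        | stats values(displayname) as labels
        | eval labels=mvjoin(labels, ", ")
      </query>
      <done>
        <set token="tokLabel">$result.labels$</set>
      </done>
    </search>

The tokLabel token could then be referenced anywhere a combined, multi-value label is needed.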
The choropleth map provides city-level resolution. Is there a way to get higher resolution, such as street or block level? Thanks!
Hi dear Splunkers, I have been working on creating a custom TA for counting Unicode characters in a non-English dataset (the long-story discussion post is linked in the PS), and I am getting these lookup file errors:

1) Error in 'lookup' command: Could not construct lookup 'ucd_count_chars_lookup, _raw, output, count'. See search.log for more details.
2) The lookup table 'ucd_count_chars_lookup' does not exist or is not available. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

The custom TA creation steps I followed (on my personal laptop, with a bare-minimum fresh 9.3.2 Enterprise trial installed):

1) Created the custom TA named "TA-ucd" on the app creation page (read for all, execute for owner, shared with all apps).

2) Created ucd_category_lookup.py (and made sure of the permissions) at $SPLUNK_HOME/etc/apps/TA-ucd/bin/ucd_category_lookup.py (this file should be readable and executable by the Splunk user, i.e. have at least mode 0500):

#!/usr/bin/env python
import csv
import unicodedata
import sys

def main():
    if len(sys.argv) != 3:
        print("Usage: python category_lookup.py [char] [category]")
        sys.exit(1)

    charfield = sys.argv[1]
    categoryfield = sys.argv[2]

    infile = sys.stdin
    outfile = sys.stdout

    r = csv.DictReader(infile)
    header = r.fieldnames
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()

    for result in r:
        if result[charfield]:
            result[categoryfield] = unicodedata.category(result[charfield])
        w.writerow(result)

main()

$SPLUNK_HOME/etc/apps/TA-ucd/default/transforms.conf:

[ucd_category_lookup]
external_cmd = ucd_category_lookup.py char category
fields_list = char, category
python.version = python3

$SPLUNK_HOME/etc/apps/TA-ucd/metadata/default.meta:

[]
access = read : [ * ], write : [ admin, power ]
export = system

3) After creating the three files above, I restarted the Splunk service and the laptop as well.
4) The search still fails with the lookup errors mentioned above.
5) source=*search.log* does not produce anything (surprisingly!).

Could you please upvote the idea: https://ideas.splunk.com/ideas/EID-I-2176

PS - The long story is available here: https://community.splunk.com/t5/Splunk-Search/non-english-words-length-function-not-working-as-expected/m-p/705650
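For reference, with the transforms.conf stanza shown above, an explicit lookup invocation in SPL would normally reference the stanza name and the fields it declares. This is only a sketch using those names, not a fix for the error above (which references a differently named lookup, ucd_count_chars_lookup):

    | makeresults
    | eval char="あ"
    | lookup ucd_category_lookup char OUTPUT category

The lookup name used in the search must match the transforms.conf stanza name exactly, and the lookup definition must be visible to the app where the search runs.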
I have created a dashboard and am trying to give each field a different color. I navigated to "Source" and tried updating the XML code with "charting.fieldColors">{"Failed Logins":"#FF9900", "NonCompliant_Keys":"#FF0000", "Successful Logins":"#009900", "Provisioning Successful":"#FFFF00"</option>", but all columns are still showing as purple. Can someone help me with it?
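For comparison, the usual shape of this setting in Simple XML is a complete <option> element whose value is a JSON-style map from series name to color, including the closing brace. The series names below are taken from the post and would need to match the chart's series labels exactly; the 0x color notation is one commonly documented form (a sketch, not a verified fix for this dashboard):

    <option name="charting.fieldColors">
      {"Failed Logins": 0xFF9900, "NonCompliant_Keys": 0xFF0000, "Successful Logins": 0x009900, "Provisioning Successful": 0xFFFF00}
    </option>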
We are looking to configure the Splunk Add-on for Microsoft Cloud Services to use a service principal as opposed to a client key. The documentation for the add-on does not provide insight into how one would configure it to work with a service principal. Does the Splunk Add-on for Microsoft Cloud Services support service principals for authentication?
The Splunk Guide to Risk-Based Alerting is here to empower your SOC like never before. Join Haylee Mills, Security Strategist at Splunk, as she walks through the new and improved Splunk Guide to RBA! Join this Tech Talk to learn about the power of RBA, including how to: Reduce the number of overall alerts while increasing the fidelity of alerts that arise Define and produce internal threat intelligence to identify normal or anomalous behavior Create high-value detections from traditionally noisy data sources, which align to popular cybersecurity frameworks Develop a valuable risk library of metadata-enriched objects and behaviors for manual analysis or machine learning Watch full Tech Talk here:
I have a heavy forwarder that sends the same event to two different indexer clusters. This event has a new field "X" that I only want to see in one of the indexer clusters. I know that in props.conf I can configure the sourcetype to remove the field, but that would apply at the sourcetype level. Is there any way to remove it on one copy and not the other? Alternatively, I could make the props.conf change at the indexer level instead.
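For reference, the sourcetype-level approach mentioned above usually looks something like the sketch below, assuming the field appears in _raw as a key=value pair; the field name X and the sourcetype are placeholders. Note that this applies wherever the sourcetype is parsed, which is exactly why it cannot by itself distinguish between the two destination clusters:

    # props.conf on the parsing tier (heavy forwarder or indexer) - illustrative sketch
    [my_sourcetype]
    SEDCMD-strip_field_x = s/X=[^ ]+ ?//g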
A Step-by-Step Guide to Setting Up and Monitoring Redis with AppDynamics on Ubuntu EC2

Monitoring your Redis instance is essential for ensuring optimal performance and identifying potential bottlenecks in real time. In this guide, we'll walk through the process of setting up Redis on an Ubuntu EC2 instance and configuring the Splunk AppDynamics Redis Monitoring Extension to capture key metrics.

Step 1: Setting up Redis on Ubuntu

Prerequisites:
- An AWS account with an EC2 instance running Ubuntu.
- SSH access to your EC2 instance.

Installing Redis

Update package lists and install Redis:

    sudo apt-get update
    sudo apt-get install redis-server

Verify the installation:

    redis-server --version

Ensure Redis is running:

    sudo systemctl status redis

Step 2: Installing the AppDynamics Machine Agent

1. Download the Machine Agent: visit AppDynamics and download the Machine Agent for your environment.
2. Install the Machine Agent: follow the installation steps provided in the AppDynamics Machine Agent documentation: https://docs.appdynamics.com/appd/24.x/24.11/en/infrastructure-visibility/machine-agent/install-the-machine-agent
3. Verify the installation: start the Machine Agent and confirm it connects to your AppDynamics Controller.

Step 3: Configuring the AppDynamics Redis Monitoring Extension

Clone the Redis Monitoring Extension repository:

    git clone https://github.com/Appdynamics/redis-monitoring-extension.git
    cd redis-monitoring-extension

Build the extension:

    sudo apt-get install openjdk-8-jdk maven
    mvn clean install

Locate the .zip file in the target folder and extract it:

    unzip target/RedisMonitor-*.zip -d <MachineAgent_Dir>/monitors/

Edit the configuration file: navigate to the extracted folder and edit config.yml:

    metricPrefix: "Custom Metrics|Redis"
    # Add your list of Redis servers here.
    servers:
      - name: "localhost"
        host: "localhost"
        port: "6379"
        password: ""
        #encryptedPassword: ""
        useSSL: false

Restart the Machine Agent:

    <MachineAgent_Dir>/bin/machine-agent

Step 4: Verifying Metrics in AppDynamics

1. Log in to your AppDynamics Controller.
2. Navigate to the Metric Browser.
3. Look for metrics under the path: Custom Metrics|Redis
4. Verify that metrics like used_memory, connected_clients, and keyspace_hits are visible.

Conclusion

By combining the power of Redis with the advanced monitoring capabilities of AppDynamics, you can ensure your application remains scalable and responsive under varying workloads. Whether you're troubleshooting an issue or optimizing performance, this setup gives you full visibility into your Redis instance. If you found this guide helpful, please share and connect with me for more DevOps insights!
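To confirm the extension has something to report, it can help to generate a little traffic before checking the Metric Browser. A quick sketch using the standard redis-cli client (the key name is arbitrary):

    # Write and read a test key, then check hit/miss counters locally
    redis-cli set demo:key "hello"
    redis-cli get demo:key
    redis-cli info stats | grep -E "keyspace_hits|keyspace_misses"

If the local counters move but nothing appears under Custom Metrics|Redis, the issue is more likely in the extension configuration or Machine Agent connectivity than in Redis itself.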
We have a new SH node that we are trying to add to the search head cluster. We updated the configs in shcluster config and the other configs. After adding this node to the cluster, we now have two nodes as part of the SH cluster. We can see both nodes up and running as part of the cluster when we check with "splunk show shcluster-status". But when we check the KV store status with "splunk show kvstore-status", the old node shows as captain, while the newly built node is not joining this cluster and gives the below error in the logs.

Error in splunkd.log on the search head that has the issue:

12-04-2024 16:36:45.402 +0000 ERROR KVStoreBulletinBoardManager [534432 KVStoreConfigurationThread] - Local KV Store has replication issues. See introspection data and mongod.log for details. Cluster has not been configured on this member. KVStore cluster has not been configured

We have configured all the cluster-related info on the newly built search head server (server.conf); we don't see any configs missing. We also see the below error on the SH UI messages tab:

Failed to synchronize configuration with KVStore cluster. Quorum check failed because not enough voting nodes responded; required 2 but only the following 1 voting nodes responded: search-head01:8191; the following nodes did not respond affirmatively: search-head01:8191 failed with Error connecting to search-head01:8191 (172.**.***.**:8191) :: caused by :: compression disabled.

Has anyone else faced this error before? We need some support here.
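As a general starting point (a sketch of standard diagnostics, not a fix for this specific quorum error), the KV store replication port and the mongod log on the new member are usually the first things to check when a member will not join:

    # Confirm the members can reach each other on the KV store replication port (8191)
    nc -zv search-head01 8191

    # Review KV store startup and replication messages on the new member
    tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log

The "compression disabled" text in the error suggests the two members may not agree on KV store transport settings, so comparing the [kvstore] stanzas in server.conf across members is also worth doing.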
Comprehensive Guide to RabbitMQ Setup, Integration with Python, and Monitoring with AppDynamics

Introduction

RabbitMQ is a powerful open-source message broker that supports a variety of messaging protocols, including AMQP. It allows developers to build robust, scalable, and asynchronous messaging systems. However, to ensure optimal performance, monitoring RabbitMQ metrics is crucial. This tutorial walks you through setting up RabbitMQ, integrating it with a Python application, and monitoring its metrics using AppDynamics.

Step 1: Setting Up RabbitMQ

1.1 Install RabbitMQ via Docker

To quickly get RabbitMQ up and running, use the official RabbitMQ Docker image with the management plugin enabled. Run the following command to start RabbitMQ:

    docker run -d --hostname my-rabbit --name rabbitmq \
      -e RABBITMQ_DEFAULT_USER=guest \
      -e RABBITMQ_DEFAULT_PASS=guest \
      -p 5672:5672 -p 15672:15672 \
      rabbitmq:management

Management Console: accessible at http://localhost:15672.
Default credentials: username guest, password guest.

1.2 Verify the Setup

Once the container is running, verify the RabbitMQ server by accessing the Management Console in your browser. Alternatively, test the API endpoint:

    curl -u guest:guest http://localhost:15672/api/overview

This should return RabbitMQ metrics in JSON format.

Step 2: Writing a Simple RabbitMQ Producer and Consumer in Python

2.1 Install Required Library

Install the pika library for Python, which is used to interact with RabbitMQ:

    pip install pika

2.2 Create the Producer Script (send.py)

This script connects to RabbitMQ, declares a queue, and sends a message.

    import pika

    # Connect to RabbitMQ
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Declare a queue
    channel.queue_declare(queue='hello')

    # Publish a message
    channel.basic_publish(exchange='', routing_key='hello', body='Hello RabbitMQ!')
    print(" [x] Sent 'Hello RabbitMQ!'")

    connection.close()

2.3 Create the Consumer Script (receive.py)

This script connects to RabbitMQ, consumes messages from the queue, and prints them.

    import pika

    # Connect to RabbitMQ
    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()

    # Declare a queue
    channel.queue_declare(queue='hello')

    # Define a callback to process messages
    def callback(ch, method, properties, body):
        print(f" [x] Received {body}")

    channel.basic_consume(queue='hello', on_message_callback=callback, auto_ack=True)

    print(' [*] Waiting for messages. To exit press CTRL+C')
    channel.start_consuming()

2.4 Test the Application

a. Run the consumer in one terminal:

    python3 receive.py

b. Send a message from another terminal:

    python3 send.py

c. Observe the message output in the consumer terminal:

    [x] Sent 'Hello RabbitMQ!'
    [x] Received b'Hello RabbitMQ!'

Step 3: Monitoring RabbitMQ with AppDynamics

3.1 Configure the RabbitMQ Management Plugin

Ensure that the RabbitMQ Management Plugin is enabled (default in the Docker image). It exposes an HTTP API that provides metrics.

3.2 Create a Custom Monitoring Script

Use a shell script to fetch RabbitMQ metrics and send them to the AppDynamics Machine Agent.

script.sh:

    #!/bin/bash

    # RabbitMQ Management API credentials
    USERNAME="guest"
    PASSWORD="guest"
    URL="http://localhost:15672/api/overview"

    # Fetch metrics from RabbitMQ Management API
    RESPONSE=$(curl -s -u $USERNAME:$PASSWORD $URL)

    if [[ $? -ne 0 || -z "$RESPONSE" ]]; then
        echo "Error: Unable to fetch RabbitMQ metrics"
        exit 1
    fi

    MESSAGES=$(echo "$RESPONSE" | jq '.queue_totals.messages // 0')
    MESSAGES_READY=$(echo "$RESPONSE" | jq '.queue_totals.messages_ready // 0')
    DELIVER_GET=$(echo "$RESPONSE" | jq '.message_stats.deliver_get // 0')

    echo "name=Custom Metrics|RabbitMQ|Total Messages, value=$MESSAGES"
    echo "name=Custom Metrics|RabbitMQ|Messages Ready, value=$MESSAGES_READY"
    echo "name=Custom Metrics|RabbitMQ|Deliver Get, value=$DELIVER_GET"

3.3 Integrate with the AppDynamics Machine Agent

1. Place the script: copy script.sh to the Machine Agent monitors directory:

    cp script.sh <MachineAgent_Dir>/monitors/RabbitMQMonitor/

2. Create monitor.xml: create a monitor.xml file to configure the Machine Agent:

    <monitor>
      <name>RabbitMQ</name>
      <type>managed</type>
      <enabled>true</enabled>
      <enable-override os-type="linux">true</enable-override>
      <description>RabbitMQ</description>
      <monitor-configuration>
      </monitor-configuration>
      <monitor-run-task>
        <execution-style>periodic</execution-style>
        <name>Run</name>
        <type>executable</type>
        <task-arguments>
        </task-arguments>
        <executable-task>
          <type>file</type>
          <file>script.sh</file>
        </executable-task>
      </monitor-run-task>
    </monitor>

3. Restart the Machine Agent to apply the changes:

    cd <MachineAgent_Dir>/bin
    ./machine-agent &

Step 4: Viewing Metrics in AppDynamics

1. Log in to your AppDynamics Controller.
2. Navigate to Servers > Custom Metrics.
3. Look for metrics under: Custom Metrics|RabbitMQ

You should see metrics like:
- Total Messages
- Messages Ready
- Deliver Get
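Before wiring the script into the Machine Agent, it can be worth running it by hand to confirm the jq parsing works against your broker (jq must be installed for the script above to work; the values shown are just the expected shape of the output):

    chmod +x script.sh
    ./script.sh
    # Expected shape of the output:
    # name=Custom Metrics|RabbitMQ|Total Messages, value=0
    # name=Custom Metrics|RabbitMQ|Messages Ready, value=0
    # name=Custom Metrics|RabbitMQ|Deliver Get, value=0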
I need to display a list of all failed status codes in a column, by consumer. Desired final result:

Consumers | Errors | Total_Requests | Error_Percentage | list_of_Status
Test      | 10     | 100            | 10               | 500 400 404

Is there a way we can display the failed status codes as well, in the list_of_Status column? My current search:

index=test
| stats count(eval(status>399)) as Errors, count as Total_Requests by consumers
| eval Error_Percentage=((Errors/Total_Requests)*100)
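One possible way to get that column, sketched under the assumption that status is a numeric field on each event, is to add a values() aggregation that only keeps the failing codes:

    index=test
    | stats count(eval(status>399)) as Errors,
            count as Total_Requests,
            values(eval(if(status>399, status, null()))) as list_of_Status by consumers
    | eval Error_Percentage=round((Errors/Total_Requests)*100, 2)

values() returns the distinct failing codes as a multivalue field, which renders as a list in the results table.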
Dear all, where could I find the Availability Trend metric for a job, to use it in a dashboard? I would like to replicate it exactly as it appears in the image, but in a dashboard. (Originally posted in Spanish; translated by @Ryan.Paredez.)
Hello, I am having issues configuring the HTTP Event Collector on my organization's Splunk Cloud instance. I have set up a token and have been trying to test using the example curl commands. However, I am having trouble discerning which endpoint is the correct one. I have tested several endpoint formats:
- https://<org>.splunkcloud.com:8088/services/collector
- https://<org>.splunkcloud.com:8088/services/collector/event
- https://http-inputs-<org>.splunkcloud.com:8088/services/collector...
- several others that I have forgotten.
For context, I do receive a response when I GET from https://<org>.splunkcloud.com/services/server/info. From what I understand, you cannot change the port from 8088 on a cloud instance, so I do not think it is a port error. Can anyone point me to any resources that would help me determine the correct endpoint? (Not this: Set up and use HTTP Event Collector in Splunk Web - Splunk Documentation. I've browsed for hours trying to find a more comprehensive resource.) Thank you!
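For reference, the general shape of a HEC test call looks like the sketch below. The hostname, port, and token are placeholders: the correct host and port for a given Splunk Cloud stack depend on the stack type, and the Splunk Cloud HEC documentation lists the http-inputs- prefixed hostname and port to use for each:

    curl -k "https://<hec-host>:<port>/services/collector/event" \
      -H "Authorization: Splunk <hec-token>" \
      -d '{"event": "hello from curl", "sourcetype": "manual"}'

A successful call returns {"text":"Success","code":0}; an incorrect host or port typically results in a connection timeout rather than an HTTP error.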
Splunk Lantern is a Splunk customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk. This month, we’re focusing on our exciting new articles related to the Solution Accelerator for OT Security and Solution Accelerator for Supply Chain Optimization, which are both designed to enhance visibility, protect critical systems, and optimize operations for manufacturing customers. In addition, for Amazon users, we’re exploring the wealth of use cases featured on our Amazon data descriptor page, as well as sharing our new guide on sending masked PII data to federated search for Amazon S3 - a must-read for managing sensitive data securely. Plus, we’re sharing all of the other new articles we’ve published over the past month. Read on to find out more.   Enhancing OT Security and Optimizing Supply Chains Operational Technology (OT) environments pose unique security challenges that require tailored solutions. Traditional IT security strategies often fall short when applied to OT systems due to these systems' reliance on legacy infrastructure, critical safety requirements, and the necessity for high availability. To address these challenges, Splunk has introduced the Solution Accelerator for OT Security, a free resource designed to enhance visibility, strengthen perimeter defenses, and mitigate risks specific to OT environments. Our Lantern article on this new Solution Accelerator provides you with everything you need to know to get started with this helpful tool. Key capabilities include: Perimeter monitoring: Validate ingress and egress traffic against expectations, ensuring firewall rules and access controls are effective. Remote access monitoring: Gain insights into who is accessing critical systems, from where, and when, so you can safeguard against unauthorized access. Industrial protocol analysis: Detect unusual activity by monitoring specific protocol traffic like Modbus, providing early warnings of potential threats. External media device tracking: Identify and manage risks from USB devices or other external media that could bypass perimeter defenses. With out-of-the-box dashboards, analysis queries, and a dedicated Splunk app, this accelerator empowers organizations to protect their critical OT systems effectively.   For businesses navigating the complexities of supply chain management, real-time visibility is crucial to maintaining efficiency and meeting customer expectations. The Lantern article on the Solution Accelerator for Supply Chain Optimization shows how organizations can use this tool to overcome blind spots and optimize every stage of the supply chain. This accelerator offers: End-to-end visibility: Unified insights from procurement to delivery, ensuring no process is overlooked. Inventory optimization: Real-time and historical data analyses to fine-tune inventory levels and forecast demand with precision. Fulfillment and logistics monitoring: Tools to track order processing and delivery performance, minimizing delays and costs. Supplier risk management: Assess supplier performance and identify potential risks to maintain a resilient supply network. 
Featuring prebuilt dashboards, data models, and guided use cases for key processes like purchase order monitoring and EDI transmission tracking, this accelerator simplifies the adoption of advanced analytics in supply chain operations. Both accelerators are freely available on GitHub and offer robust frameworks and tools to address the unique challenges of OT security and supply chain optimization. Explore these resources to drive better outcomes in your operations today.   Working with Amazon Data Do you use Amazon Data in your Splunk environment? If so, don’t miss our Amazon data descriptor page! Packed with advice and one of the most often accessed sections in our site library, it covers everything from monitoring AWS environments to detecting privilege escalation and managing S3 data. This month, we’ve published a new article tailored for S3 users: Sending masked PII data to the Splunk platform and routing unmasked data to federated search for Amazon S3 (FS-S3). It guides you on how to: Mask sensitive data like credit card numbers for Splunk Cloud ingestion. Store unmasked raw data in S3 for compliance and use federated search for cost-effective access.   Explore this article and more on our Amazon data descriptor page to enhance your AWS and Splunk integration!   Everything Else That’s New Here’s everything else we’ve published over the month: Monitoring and logging MQTT topic messages using Eclipse Mosquitto Configuring and monitoring NETSCOUT Omnis AI Streamer data Netscout Classic dashboard export deprecation FAQ Monitoring electronic data interchange transmission and acknowledgement Monitoring purchase order lifecycles We hope you’ve found this update helpful. Thanks for reading! Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
We have run into an issue with the indexer: if the maximum daily volume is exceeded five times, the indexer will be disabled, and that is now the case with our installation:

Error in 'tstats' command: your Splunk license expired or you have exceeded license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK

The license is now Free, and there appears to be no way back to the Enterprise trial once the license has expired.
I deleted my custom dashboard from the dashboard list on my AppDynamics SaaS Controller, is there a way I can recover a deleted dashboard?