First, do not reach for inputlookup when lookup is readily applicable. Assuming that you have distinct names in indexed events and the lookup, all you need to do is

    index=buildings_core
    | lookup roomlookup_buildings.csv building_from_search2 as building_from_search1 output building_from_search2
    | where isnull(building_from_search2)
    | stats values(building_from_search1) as unmatched_buildings_from_search1 by request_unique_id

Here, I assume that you have already defined a lookup called roomlookup_buildings.csv. (I usually name my lookups without that .csv.) But then, why would you have different field names for the same thing? If the field name is building in both indexed events and the lookup, you can do

    index=buildings_core
    | lookup roomlookup_buildings.csv building output building as matching_building
    | where isnull(matching_building)
    | stats values(building) as unique_buildings by request_unique_id

Hope this helps.
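For completeness, a minimal sketch of such a lookup definition in transforms.conf, assuming the CSV has already been uploaded to the app (the stanza name is what the lookup command references):

    [roomlookup_buildings.csv]
    filename = roomlookup_buildings.csv

With that stanza in place, and the lookup shared appropriately, | lookup roomlookup_buildings.csv ... resolves against the CSV without any inputlookup/append gymnastics.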
I didn't have anything specific in mind. More like "that's where I'd look first and see if anything seems off". What's the lispy search for such a lookup-value-based search?
I am trying to ingest some JSON data into a new Splunk Cloud instance, with a custom sourcetype, but I keep getting duplicate data in the search results. This seems to be an extremely common problem, based on the number of old posts, but none of them seem to address the Cloud version.

I have a JSON file that looks like this:

    {
      "RowNumber": 1,
      "ApplicationName": "177525278",
      "ClientProcessID": 114889,
      "DatabaseName": "1539703986",
      "StartTime": "2024-07-30 12:15:13"
    }

I have a Windows 2022 server with a 9.2.2 universal forwarder installed. I manually added a very simple app to the C:\Program Files\SplunkUniversalForwarder\etc\apps folder.

inputs.conf contains this monitor:

    [batch://C:\splunk_test_files\ErrorMaster\*.json]
    move_policy=sinkhole
    index=centraladmin_errormaster
    sourcetype=errormaster

props.conf contains this type (copied from _json):

    [errormaster]
    pulldown_type = true
    INDEXED_EXTRACTIONS = json
    KV_MODE = none
    category = Structured
    description = JavaScript Object Notation format. For more information, visit http://json.org/

On the cloud side I created (from the UI) a new sourcetype called 'errormaster' as a direct clone of the existing _json type. When I add a .json file to the folder, it is ingested and the events show up in the cloud instance, under the correct centraladmin_errormaster index, and with sourcetype=errormaster. However, the fields all have duplicate values. If I switch it to the built-in _json type, it works fine. I have some field extractions I want to add, which is why I wanted a custom type.

I'm guessing this is something obvious to the Cloud experts, but I am an accidental Splunk Admin with very little experience, so any help you can offer would be appreciated.
Hi @Mario.Morelli , Thanks for the clarification. Have a great day! Regards
Hi there, I'm now trying some of ESCU's built-in rules and sending them as notable alerts and via MS Teams webhooks. However, from the built-in query, only a few fields can be sent to the webhook alert, as shown in the capture below. Is it possible to enrich this information with details like those in the annotations section?
Check search.log and python.log for any messages that might explain why the script returned an error code.
Hi Splunk Community,

I have a query that retrieves building data from two sources and I need assistance in identifying the unique buildings that are present in buildings_from_search1 but not in buildings_from_search2. Here's the query I'm currently using:

    index=buildings_core
    | stats values(building_from_search1) as buildings_from_search1 by request_unique_id
    | append
        [| inputlookup roomlookup_buildings.csv
         | stats values(building_from_search2) as buildings_from_search2 ]

Could someone please guide me on how to modify this query to get the unique buildings that are only present in buildings_from_search1 and not in buildings_from_search2?

Thank you in advance for your help!
Recently upgraded to 9.2.2 and Historic License Usage panels in the Monitoring Console are now broken. The panels in License Usage - Today still work. Most answers I found had to do with the license manager not indexing or reading the usage data. But on the license manager the panels all work in Settings » Licensing » License Usage Reporting.  The suggestion in this post did not apply either: https://community.splunk.com/t5/Splunk-Enterprise/Why-is-Historical-license-usage-page-showing-blank/m-p/635412  
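For anyone else debugging this: the historic panels are driven by RolloverSummary events from license_usage.log, while the Today panels use type=Usage, so a hedged first check is whether those summary events are searchable from the Monitoring Console at all:

    index=_internal source=*license_usage.log* type=RolloverSummary earliest=-30d
    | stats count by host

If that returns nothing from the Monitoring Console but the equivalent search works on the license manager, the historic data is not reaching (or not searchable from) the MC, which would explain blank historic panels while the Today panels still work.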
Hi, you will only need one database agent, as a single agent can connect to many databases, provided it can communicate with all the databases it monitors. However, licensing will consume 6 licenses, as you are monitoring 6 instances.
We got flagged for an OpenSSL 1.0.2f vulnerability on our SOAR instance, within the default installed Splunk Universal Forwarder path. It seems this vulnerability is remediated in later UF versions. I'm wondering what version of the UF ships with the latest SOAR 6.2.2 release?
When ingesting Microsoft Azure data, we see different time formats for different Azure categories, and I wonder how to parse them correctly. Both timezones seem to be UTC. Is the proper approach to set TZ=UTC and specify the two formats in datetime.xml?

    {
      category: NonInteractiveUserSignInLogs
      time: 2024-07-30T18:02:42.0324621Z
      . . .
    }

    {
      category: RiskyUsers
      time: 7/30/2024 1:48:56 PM
      . . .
    }
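If the two categories can be routed to separate sourcetypes, one hedged alternative to datetime.xml is a per-sourcetype TIME_FORMAT in props.conf. A sketch, with made-up stanza names and a TIME_PREFIX that assumes the raw events carry a quoted time field:

    [azure:signin:noninteractive]
    TZ = UTC
    TIME_PREFIX = "time":\s*"
    TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N
    MAX_TIMESTAMP_LOOKAHEAD = 40

    [azure:riskyusers]
    TZ = UTC
    TIME_PREFIX = "time":\s*"
    TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
    MAX_TIMESTAMP_LOOKAHEAD = 30

The %7N subsecond width matches the seven-digit fraction in the sign-in logs; adjust the prefix regex and formats to whatever your raw events actually look like.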
Thank you, @PickleRick. What would I look for specifically? I've reviewed the search.log extensively, and nothing seems to point to the problem. I should also mention that I created a support case with Splunk, but I wanted to post on Splunk Community because often the community solves my problem.
Hey, I am doing Predictive Maintenance using LLMs and I want to use Splunk to build a dashboard. There I am going to include a question/answering mechanism where my model answers the question the user has. For that, I have created an app "customApp" in src/etc/apps, added a user-queries.py file in the apps/customApp/bin folder, and a command.conf file in the apps/myapp/default folder. I am getting the following error on the Splunk dashboard when calling this command. Btw, it worked when I hardcoded the response instead of using my model to generate it.

Error in 'userquery' command: External search command exited unexpectedly with non-zero error code 1.

Can someone please save me from this? Thanks in advance!

#CustomApp / #userQuery / #Dashboard / #LLM's / #models
I'm hoping you've found a solution. I'm working on a similar project where I created an app in splunk/etc/apps/my-app with a .py file in the bin folder and a .conf file in the default folder. Initially, when I ran the command | mycommand "hello" in Splunk, it output a response that I had hardcoded in my .py file. However, after updating the script to generate responses via a large language model, I started encountering the following error:

Error in 'mycommand' command: External search command exited unexpectedly with non-zero error code 1. The search job has failed due to an error. You may be able to view the job in the Job Inspector.

Please help me with this. Thanks in advance. #splunk #LLM's
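Not a definitive fix, but the pattern described above (hardcoded output works, the model call fails with exit code 1) usually means the script raises an unhandled exception, or crashes on an import or model load, before Splunk's command protocol completes. A minimal defensive sketch using splunklib, where userquery, my_model_answer, and the log path are all hypothetical names:

    import sys
    import logging

    from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration, Option

    # Write tracebacks somewhere easy to find; unhandled exceptions are the
    # usual cause of "exited unexpectedly with non-zero error code 1".
    logging.basicConfig(filename='/tmp/userquery_debug.log', level=logging.DEBUG)


    def my_model_answer(prompt):
        # Placeholder for the real LLM call; swap in your model here. Loading
        # a heavy model at import time can crash or time out the command
        # before Splunk even talks to it, so keep module-level imports cheap.
        return "stubbed answer for: " + prompt


    @Configuration()
    class UserQueryCommand(GeneratingCommand):
        prompt = Option(require=True)

        def generate(self):
            try:
                yield {'_raw': my_model_answer(self.prompt)}
            except Exception:
                # Full traceback goes to the log so the failure is diagnosable
                # rather than just "error code 1" in the search UI.
                logging.exception("userquery failed")
                raise


    dispatch(UserQueryCommand, sys.argv, sys.stdin, sys.stdout, __name__)

Invoked as | userquery prompt="hello", any failure then leaves a traceback in the log file (and in search.log, per the advice above), which usually shows whether the problem is a missing Python package, a model-loading crash, or a timeout.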
Hello community. We have a cluster architecture with 5 indexes. We have detected high license consumption and we are trying to identify the sources that generate it. I am using the following search to find out which host in the wineventlog index consumes the most license:

    index=_internal type="Usage" idx=wineventlog
    | eval MB=round(b/1024/1024, 2)
    | stats sum(MB) as "Consumo de Licencia (MB)" by h
    | rename h as "Host"
    | sort -"Consumo de Licencia (MB)"

With this search I can see the hosts and their consumption in megabytes, but some events have no value in the h field, so there are hosts I cannot identify, and the sum of those unknowns gives me a high license consumption. What could be the cause of that? These are the events with unknown hosts: I cannot identify what they are, whether they are a specific host, a Splunk component, or something else that is causing this license increase. Regards
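One likely cause, offered with the usual hedging: when an indexer tracks more distinct (source, host) pairs than the squash_threshold setting in server.conf allows, it squashes the source and host fields in license_usage.log to save memory, which shows up as exactly these empty h values. The per-host split is then gone from the usage logs, but it can still be approximated from the data itself, for example:

    index=wineventlog earliest=-24h
    | eval raw_bytes=len(_raw)
    | stats sum(raw_bytes) as bytes by host
    | eval MB=round(bytes/1024/1024, 2)
    | sort - MB

len(_raw) is only an approximation of licensed volume, but it is usually close enough to point at the noisy hosts.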
https://github.com/Cisco-Observability-TME/ansible-smartagent-install

Introduction

Welcome, intrepid tech explorer, to the ultimate guide on deploying the Cisco AppDynamics Smart Agent across multiple hosts! In this adventure, we'll blend the magic of automation with the precision of Ansible, ensuring your monitoring infrastructure is both robust and elegant. So, buckle up, fire up your terminal, and let's dive into a journey that will turn your deployment woes into a seamless orchestration symphony.

Steps to Deploy Cisco AppDynamics Smart Agent

Step 1: Install Ansible on macOS

Before we embark on this deployment journey, we need our trusty automation tool, Ansible. Follow these steps to install Ansible on your macOS system using Homebrew.

Install Homebrew (if not already installed):

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Install Ansible:

    brew install ansible

Verify the installation:

    ansible --version

You should see output indicating the installed version of Ansible.

Step 2: Prepare Your Files and Directory Structure

The project directory should contain the following files:

    ├── appdsmartagent_64_linux_24.6.0.2143.deb
    ├── appdsmartagent_64_linux_24.6.0.2143.rpm
    ├── inventory-cloud.yaml
    ├── inventory-local.yaml
    ├── inventory-multiple.yaml
    ├── smartagent.yaml
    └── variables.yaml

Step 3: Understanding the Files

1. inventory-cloud.yaml, inventory-local.yaml, inventory-multiple.yaml

These inventory files list the hosts where the Smart Agent will be deployed. Each file is structured similarly:

    all:
      hosts:
        smartagent-hosts:
          ansible_host: <IP_ADDRESS>
          ansible_username: <USERNAME>
          ansible_password: <PASSWORD>
          ansible_become: yes
          ansible_become_method: sudo
          ansible_become_password: <BECOME_PASSWORD>
          ansible_ssh_common_args: '-o StrictHostKeyChecking=no'

Update these placeholders with your actual host details.

2. Explanation of smartagent.yaml

This Ansible playbook is designed to deploy the Cisco AppDynamics Smart Agent on multiple hosts. Let's break down each section and task in detail.

Playbook header:

    ---
    - name: Deploy Cisco AppDynamics SmartAgent
      hosts: all
      become: yes
      vars_files:
        - variables.yaml  # Include the variable file

- name: Describes the playbook.
- hosts: Specifies that the playbook should run on all hosts defined in the inventory.
- become: Indicates that the tasks should be run with elevated privileges (sudo).
- vars_files: Includes external variables from the variables.yaml file.

Tasks

Ensure required packages are installed (RedHat):

    - name: Ensure required packages are installed (RedHat)
      yum:
        name:
          - yum-utils
        state: present
        update_cache: yes
      when: ansible_os_family == "RedHat"

Uses the yum module to install the yum-utils package on RedHat-based systems. The task runs only if the operating system family is RedHat (when: ansible_os_family == "RedHat").

Ensure required packages are installed (Debian):

    - name: Ensure required packages are installed (Debian)
      apt:
        name:
          - curl
          - debian-archive-keyring
          - apt-transport-https
          - software-properties-common
        state: present
        update_cache: yes
      when: ansible_os_family == "Debian"

Uses the apt module to install necessary packages on Debian-based systems. This task is conditional on the operating system family being Debian.

Ensure the directory exists:

    - name: Ensure the directory exists
      file:
        path: /opt/appdynamics/appdsmartagent
        state: directory
        mode: '0755'

Uses the file module to create the directory /opt/appdynamics/appdsmartagent with the specified permissions.

Check if config.ini exists:

    - name: Check if config.ini exists
      stat:
        path: /opt/appdynamics/appdsmartagent/config.ini
      register: stat_config

Uses the stat module to check for the existence of config.ini and registers the result in stat_config.

Create default config.ini file if it doesn't exist:

    - name: Create default config.ini file if it doesn't exist
      copy:
        dest: /opt/appdynamics/appdsmartagent/config.ini
        mode: '0644'
        content: |
          [default]
          AccountAccessKey="{{ smart_agent.account_access_key }}"
          ControllerURL="{{ smart_agent.controller_url }}"
          ControllerPort=443
          AccountName="{{ smart_agent.account_name }}"
          FMServicePort={{ smart_agent.fm_service_port }}
          EnableSSL={{ smart_agent.ssl | ternary('true', 'false') }}
      when: not stat_config.stat.exists

Uses the copy module to create a default config.ini file if it doesn't exist. The content field uses Jinja2 templating to populate the configuration with variables from variables.yaml.

Configure Smart Agent:

    - name: Configure Smart Agent
      lineinfile:
        path: /opt/appdynamics/appdsmartagent/config.ini
        regexp: '^{{ item.key }}='
        line: "{{ item.key }}={{ item.value }}"
      loop:
        - { key: 'AccountAccessKey', value: "{{ smart_agent.account_access_key }}" }
        - { key: 'ControllerURL', value: "{{ smart_agent.controller_url }}" }
        - { key: 'AccountName', value: "{{ smart_agent.account_name }}" }
        - { key: 'FMServicePort', value: "{{ smart_agent.fm_service_port }}" }
        - { key: 'EnableSSL', value: "{{ smart_agent.ssl | ternary('true', 'false') }}" }

Uses the lineinfile module to ensure specific lines in config.ini are present and correctly configured.

Set the Smart Agent package path (Debian):

    - name: Set the Smart Agent package path (Debian)
      set_fact:
        smart_agent_package: "{{ playbook_dir }}/appdsmartagent_64_linux_24.6.0.2143.deb"
      when: ansible_os_family == "Debian"

Uses the set_fact module to define the path to the Smart Agent package for Debian systems.

Set the Smart Agent package path (RedHat):

    - name: Set the Smart Agent package path (RedHat)
      set_fact:
        smart_agent_package: "{{ playbook_dir }}/appdsmartagent_64_linux_24.6.0.2143.rpm"
      when: ansible_os_family == "RedHat"

Defines the path to the Smart Agent package for RedHat systems.

Fail if Smart Agent package not found (Debian):

    - name: Fail if Smart Agent package not found (Debian)
      fail:
        msg: "Smart Agent package not found for Debian."
      when: ansible_os_family == "Debian" and not (smart_agent_package is defined and smart_agent_package is file)

Uses the fail module to halt execution if the Smart Agent package is not found for Debian systems.

Fail if Smart Agent package not found (RedHat):

    - name: Fail if Smart Agent package not found (RedHat)
      fail:
        msg: "Smart Agent package not found for RedHat."
      when: ansible_os_family == "RedHat" and not (smart_agent_package is defined and smart_agent_package is file)

Halts execution if the Smart Agent package is not found for RedHat systems.

Copy Smart Agent package to target (Debian):

    - name: Copy Smart Agent package to target (Debian)
      copy:
        src: "{{ smart_agent_package }}"
        dest: "/tmp/{{ smart_agent_package | basename }}"
      when: ansible_os_family == "Debian"

Uses the copy module to transfer the Smart Agent package to the target host for Debian systems.

Install Smart Agent package (Debian):

    - name: Install Smart Agent package (Debian)
      command: dpkg -i /tmp/{{ smart_agent_package | basename }}
      when: ansible_os_family == "Debian"

Uses the command module to install the Smart Agent package on Debian systems.

Copy Smart Agent package to target (RedHat):

    - name: Copy Smart Agent package to target (RedHat)
      copy:
        src: "{{ smart_agent_package }}"
        dest: "/tmp/{{ smart_agent_package | basename }}"
      when: ansible_os_family == "RedHat"

Transfers the Smart Agent package to the target host for RedHat systems.

Install Smart Agent package (RedHat):

    - name: Install Smart Agent package (RedHat)
      yum:
        name: "/tmp/{{ smart_agent_package | basename }}"
        state: present
        disable_gpg_check: yes
      when: ansible_os_family == "RedHat"

Uses the yum module to install the Smart Agent package on RedHat systems.

Restart Smart Agent service:

    - name: Restart Smart Agent service
      service:
        name: smartagent
        state: restarted

Uses the service module to restart the Smart Agent service to apply the new configuration.

Clean up temporary files:

    - name: Clean up temporary files
      file:
        path: "/tmp/{{ smart_agent_package | basename }}"
        state: absent

Uses the file module to remove the temporary Smart Agent package files from the target hosts.

3. variables.yaml

This file contains the variables used in the playbook:

    smart_agent:
      controller_url: 'tme.saas.appdynamics.com'
      account_name: 'ACCOUNT NAME'
      account_access_key: 'ACCESS KEY'
      fm_service_port: '443'
      ssl: true
    smart_agent_package_debian: 'appdsmartagent_64_linux_24.6.0.2143.deb'
    smart_agent_package_redhat: 'appdsmartagent_64_linux_24.6.0.2143.rpm'

Explaining the Variables File

In Ansible, variables are used to store values that can be reused throughout your playbooks, roles, and tasks. They help make your playbooks more flexible and easier to maintain by allowing you to define values in one place and reference them wherever needed. Variables can be defined in several places, including:

- Playbooks: Directly within the playbook file.
- Inventory files: Associated with hosts or groups of hosts.
- Variable files: Separate YAML files that are included in playbooks.
- Roles: Within the defaults and vars directories of a role.
- Command line: Passed as extra variables when running the playbook.

Variables are referenced using the Jinja2 templating syntax, denoted by double curly braces {{ }}. Here is a breakdown of the variables defined in this project's file:

- smart_agent: A dictionary containing several key-value pairs related to the configuration of the Smart Agent.
- controller_url: The URL of the controller.
- account_name: The name of the account.
- account_access_key: The access key for the account.
- fm_service_port: The port number for the service.
- ssl: A boolean indicating whether SSL is used.
- smart_agent_package_debian: The filename of the Debian package for the Smart Agent.
- smart_agent_package_redhat: The filename of the Red Hat package for the Smart Agent.

Step 4: Execute the Playbook

To deploy the Smart Agent, run the following command from your project directory:

    ansible-playbook -i inventory-cloud.yaml smartagent.yaml

Replace inventory-cloud.yaml with the appropriate inventory file for your setup.

And there you have it! With these steps, you're now equipped to deploy the Cisco AppDynamics Smart Agent to multiple hosts with ease. Happy deploying!
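A small optional extra that is not part of the original repo's steps: before running the playbook against a new inventory, it can save a failed run to first confirm that Ansible can reach and authenticate to every host, using the built-in ping module:

    ansible all -i inventory-cloud.yaml -m ping

If every host answers with "pong", connectivity, credentials, and the remote Python interpreter are all in working order.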
Gone are the days of point-and-click monotony – we're going full CLI commando! Whether you're managing a lone server or herding a flock of hosts, this guide will transform you from a nervous newbie to a confident commander of the AppDynamics realm. So grab your favorite caffeinated beverage, fire up that terminal, and let's turn those command-line frowns upside down!

Installing the Smart Agent CLI

Before we can conquer the world of application monitoring, we need to arm ourselves with the right tools. Let's start by installing the AppDynamics Smart Agent CLI with Python 3.11, our trusty sidekick in this adventure.

Hosting the Smart Agent Package on a Local Web Server

Before we start spreading the Smart Agent love to multiple hosts, let's set up a local web server to host our package. We'll use Python's built-in HTTP server because, let's face it, who doesn't love a bit of Python magic?

Navigate to your Smart Agent package directory:

    cd /path/to/smartagent/package/

Start the Python HTTP server:

    python3 -m http.server 8000

Your package is now available at http://your-control-node-ip:8000/smartagent-package-name.rpm. Verify with:

    curl http://your-control-node-ip:8000/smartagent-package-name.rpm --output /dev/null

Keep this terminal window open – it's the lifeline for our installation process!

1. Verify Python 3.11 Installation

First, let's make sure Python 3.11 is ready and waiting:

    which python3.11

You should see something like /usr/bin/python3.11. If Python 3.11 is playing hide and seek, you'll need to find and install it before proceeding.

2. Install the Smart Agent CLI

Now, let's summon the Smart Agent CLI using the magical incantation below (adjust the RPM filename to match your version):

    sudo APPD_SMARTAGENT_PYTHON3=/usr/bin/python3.11 yum install appdsmartagent_cli_64_linux_24.6.0.2143.rpm

3. Verify the Installation

Let's make sure our new CLI friend is ready to party:

    appd --version

If you see the version number, congratulations! You've just leveled up your AppDynamics game.

Installing Smart Agent on a Single Host

Let's start small and install the Smart Agent on a single host. Baby steps, right? Prepare your configuration file (config.ini):

    [default]
    controller_url: "your-controller-url.saas.appdynamics.com"
    controller_port: 443
    controller_account_name: "your-account-name"
    access_key: "your-access-key"
    enable_ssl: true

Installing Smart Agent on Multiple Hosts or Locally

Feeling confident? Let's scale up and install the Smart Agent across multiple hosts like a boss!

Preparing Your Inventory

Before we unleash our Smart Agent army, we need to create an inventory of our target hosts. Here are a couple of examples to get you started. For a simple target with additional Ansible variables:

    [targets]
    54.221.141.103 ansible_user=ec2-user ansible_ssh_pass=ins3965! ansible_python_interpreter=/usr/bin/python3.11 ansible_ssh_common_args='-o StrictHostKeyChecking=no'

Let's break down this hosts.ini file.

Group [targets]: This is a group name; in this case, the group is named targets. You can use this group name in your playbooks to refer to all the hosts listed under it.

Host 54.221.141.103: This is the IP address of the host that belongs to the targets group.

Host variables, defined for the host 54.221.141.103:

- ansible_user=ec2-user: Specifies the SSH user to connect as. In this case, the user is ec2-user.
- ansible_ssh_pass=ins3965!: Specifies the SSH password to use for authentication. Note that using plain-text passwords in inventory files is generally not recommended for security reasons; it's better to use SSH keys or Ansible Vault to encrypt sensitive data.
- ansible_python_interpreter=/usr/bin/python3.11: Specifies the path to the Python interpreter on the remote host. Ansible needs Python installed on the remote host to execute its modules; here, it is set to use Python 3.11 located at /usr/bin/python3.11.
- ansible_ssh_common_args='-o StrictHostKeyChecking=no': Specifies additional SSH arguments. Here, -o StrictHostKeyChecking=no disables strict host key checking, so SSH automatically adds new host keys to the known hosts file without prompting. This can be useful in automated environments, but it poses a security risk because it makes man-in-the-middle attacks easier.

In summary, this hosts.ini file defines a single host (54.221.141.103) in the targets group that connects as ec2-user, authenticates with the password ins3965!, uses Python 3.11 at /usr/bin/python3.11 on the remote host, and skips strict host key checking for SSH connections.

For multiple managed nodes:

    [managed_nodes]
    managed1 ansible_host=192.168.33.20 ansible_python_interpreter=/usr/bin/python3
    managed2 ansible_host=192.168.33.30 ansible_python_interpreter=/usr/bin/python3

Save the file as hosts. You can adjust the hostnames, IP addresses, and other parameters to match your environment.

Executing a Local or Multi-Host Installation

We can install on the local host with the following command:

    sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx --auto-start -vvvv

Now that we have our targets lined up, let's fire away:

    sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx -i hosts -q ssh --auto-start -vvvv

Point -i at the hosts file from the multiple managed nodes setup if that's your scenario.

Verifying Installation

Let's make sure our Smart Agents are alive and kicking. Check the service status:

    sudo systemctl status appdynamics-smartagent

Then look for new nodes in your AppDynamics controller UI under Infrastructure Visibility.

Troubleshooting

If things go sideways, don't panic! Check the verbose output, verify SSH connectivity, double-check your config file, and peek at those Smart Agent logs. Remember, every IT pro was once a beginner – persistence is key!

There you have it, intrepid AppDynamics adventurer! You've now got the knowledge to install, host, and deploy Smart Agents like a true CLI warrior. Go forth and monitor with confidence, knowing that you've mastered the art of the AppDynamics Smart Agent CLI. May your applications be forever performant and your alerts be always actionable!
Check the search job log, especially the lispy search performed.
It was the SH that was also extracting. Setting KV_MODE = none for the sourcetype on the SH and letting the indexer do the extraction should NOT show duplicate results for JSON.
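For anyone landing here later, a minimal sketch of what that fix looks like, assuming the sourcetype from this thread: with INDEXED_EXTRACTIONS = json applied on the forwarder side, the search head's copy of the sourcetype should not parse the JSON again. On the search-head side (in Splunk Cloud, via the UI sourcetype editor):

    [errormaster]
    KV_MODE = none
    AUTO_KV_JSON = false

Depending on version, KV_MODE = none alone may be enough; AUTO_KV_JSON = false additionally disables automatic JSON field discovery at search time.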
Hi @Jonathan.Wang, Welcome to the Cisco AppDynamics Community. Thanks for asking and answering your first question! haha.