
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Gone are the days of point-and-click monotony – we're going full CLI commando! Whether you're managing a lone server or herding a flock of hosts, this guide will transform you from a nervous newbie to a confident commander of the AppDynamics realm. So grab your favorite caffeinated beverage, fire up that terminal, and let's turn those command-line frowns upside down!

Installing the Smart Agent CLI

Before we can conquer the world of application monitoring, we need to arm ourselves with the right tools. Let's start by installing the AppDynamics Smart Agent CLI with Python 3.11, our trusty sidekick in this adventure.

Hosting the Smart Agent Package on a Local Web Server

Before we start spreading the Smart Agent love to multiple hosts, let's set up a local web server to host our package. We'll use Python's built-in HTTP server because, let's face it, who doesn't love a bit of Python magic?

Navigate to your Smart Agent package directory:

```
cd /path/to/smartagent/package/
```

Start the Python HTTP server:

```
python3 -m http.server 8000
```

Your package is now available at http://your-control-node-ip:8000/smartagent-package-name.rpm

Verify with:

```
curl http://your-control-node-ip:8000/smartagent-package-name.rpm --output /dev/null
```

Keep this terminal window open – it's the lifeline for our installation process!

1. Verify Python 3.11 Installation

First, let's make sure Python 3.11 is ready and waiting:

```
which python3.11
```

You should see something like:

```
/usr/bin/python3.11
```

If Python 3.11 is playing hide and seek, you'll need to find and install it before proceeding.

2. Install the Smart Agent CLI

Now, let's summon the Smart Agent CLI using the magical incantation below (adjust the RPM filename to match your version):

```
sudo APPD_SMARTAGENT_PYTHON3=/usr/bin/python3.11 yum install appdsmartagent_cli_64_linux_24.6.0.2143.rpm
```

3. Verify the Installation

Let's make sure our new CLI friend is ready to party:

```
appd --version
```

If you see the version number, congratulations! You've just leveled up your AppDynamics game.

Installing Smart Agent on a Single Host

Let's start small and install the Smart Agent on a single host. Baby steps, right?

Prepare your configuration file (config.ini):

```
[default]
controller_url: "your-controller-url.saas.appdynamics.com"
controller_port: 443
controller_account_name: "your-account-name"
access_key: "your-access-key"
enable_ssl: true
```

Installing Smart Agent on Multiple Hosts or Locally

Feeling confident? Let's scale up and install the Smart Agent across multiple hosts like a boss!

Preparing Your Inventory

Before we unleash our Smart Agent army, we need to create an inventory of our target hosts. Here are a couple of examples to get you started.

For a simple target with additional Ansible variables:

```
[targets]
54.221.141.103 ansible_user=ec2-user ansible_ssh_pass=ins3965! ansible_python_interpreter=/usr/bin/python3.11 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
```

Let's break down this hosts.ini file:

Group [targets]: This is a group name. In this case, the group is named targets. You can use this group name in your playbooks to refer to all the hosts listed under it.

Host 54.221.141.103: This is the IP address of the host that belongs to the targets group.

Host Variables: Several variables are defined for the host 54.221.141.103:

- ansible_user=ec2-user: This specifies the SSH user to connect as. In this case, the user is ec2-user.
- ansible_ssh_pass=ins3965!: This specifies the SSH password to use for authentication. The password is ins3965!. Note that using plain-text passwords in inventory files is generally not recommended for security reasons. It's better to use SSH keys or Ansible Vault to encrypt sensitive data.
- ansible_python_interpreter=/usr/bin/python3.11: This specifies the path to the Python interpreter on the remote host. Ansible needs Python to be installed on the remote host to execute its modules. Here, it is set to use Python 3.11 located at /usr/bin/python3.11.
- ansible_ssh_common_args='-o StrictHostKeyChecking=no': This specifies additional SSH arguments. In this case, -o StrictHostKeyChecking=no disables strict host key checking, meaning SSH will automatically add new host keys to the known hosts file without prompting the user to confirm them. This can be useful in automated environments but poses a security risk, as it makes man-in-the-middle attacks easier.

In short, this hosts.ini file defines a single host (54.221.141.103) in the targets group with specific SSH and Python interpreter settings:

- Connect to the host using the ec2-user account.
- Use the password ins3965! for SSH authentication.
- Use Python 3.11 located at /usr/bin/python3.11 on the remote host.
- Disable strict host key checking for SSH connections.

For multiple managed nodes:

```
[managed_nodes]
managed1 ansible_host=192.168.33.20 ansible_python_interpreter=/usr/bin/python3
managed2 ansible_host=192.168.33.30 ansible_python_interpreter=/usr/bin/python3
```

Save the file as hosts. You can adjust the hostnames, IP addresses, and other parameters to match your environment.

Executing a Local or Multi-Host Installation

We can install on our local host with the following command:

```
sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx --auto-start -vvvv
```

Now that we have our targets lined up, let's fire away:

```
sudo ./appd install smartagent -c config.ini -u http://your-control-node-ip:8000/smartagent-package-name.xxx -i hosts -q ssh --auto-start -vvvv
```

The -i hosts flag points at the inventory file you saved above; use whichever inventory (simple target or managed nodes) matches your setup.

Verifying Installation

Let's make sure our Smart Agents are alive and kicking.

Check the service status:

```
sudo systemctl status appdynamics-smartagent
```

Look for new nodes in your AppDynamics controller UI under Infrastructure Visibility.

Troubleshooting

If things go sideways, don't panic! Check the verbose output, verify SSH connectivity (a quick check for this follows below), double-check your config file, and peek at those Smart Agent logs. Remember, every IT pro was once a beginner – persistence is key!

There you have it, intrepid AppDynamics adventurer! You've now got the knowledge to install, host, and deploy Smart Agents like a true CLI warrior. Go forth and monitor with confidence, knowing that you've mastered the art of the AppDynamics Smart Agent CLI. May your applications be forever performant and your alerts be always actionable!
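P.S. About that "verify SSH connectivity" troubleshooting tip: the fastest sanity check is an Ansible ad-hoc ping against your inventory. A minimal sketch, assuming the hosts file and the targets group from the examples above:

```
# Ad-hoc connectivity test: every reachable host should answer "pong"
ansible -i hosts targets -m ping
```

If a host doesn't answer, fix the credentials, the Python interpreter path, or the host key settings in the inventory before retrying the multi-host install.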
Check the search job log (search.log in the Job Inspector), especially the lispy search performed.
It was the SH that was also extracting. Setting KV_MODE = none on the SH and letting the indexer do the extraction should NOT show the duplicate results for JSON.
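For anyone landing here later, a minimal props.conf sketch of that fix; the sourcetype name is an assumption:

```
# props.conf on the search head: disable search-time KV extraction
# so the index-time JSON extractions aren't duplicated
# (sourcetype name below is hypothetical)
[my_json_sourcetype]
KV_MODE = none
```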
Hi @Jonathan.Wang, Welcome to the Cisco AppDynamics Community. Thanks for asking and answering your first question! haha.
Well, I did find another line which has the date and time, but it's over 15 lines into the log file. We need to start with the first line, which is the beginning of the stanza, but get the timestamp from the line that appears 15 lines after the opening line shown below:

C:\Program Files\Universal\UAGSrv\xxxl_p01.nam>set StartDate=Tue 07/23/2024

This is the actual timestamp, which I think would work since it has both date and time (hoping the _80514 is the time?):

Files\Universal\UAGSrv\xxx_p01.nam>set timestamp=20240723_80514
Thanks for the help, @gcusello! I fixed my rex; I am seeing results now.
Hi @kc_prane, good for you, see you next time! Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Thanks @KendalW for the help!
Hello, @gcusello. Thank you for your response. I had an issue with my rex. I've corrected it now, and your earlier query works for me.
Hello, could you tell me what takes priority: capabilities explicitly enabled/disabled, or capabilities from inherited roles? I had to manually edit etc/system/local/authorize.conf (clustered environment) to set edit_correlationsearches = enabled (it was disabled), even though the role inherited ess_admin, ess_analyst, and power. Thanks for your help.
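For context, the manual edit looked roughly like this; the role name here is a placeholder for whichever role needed the capability:

```
# etc/system/local/authorize.conf (role name is hypothetical)
[role_soc_analyst]
importRoles = ess_admin;ess_analyst;power
edit_correlationsearches = enabled
```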
Thank you @PickleRick for your answer. Eventually I worked around the problem like this:

```
| makeresults
| eval amount = 10.6
| eval integer = floor(amount)
| eval fraction = round(amount - floor(amount), 2)
| eval compare = if(fraction = 0.6, "T", "F")
```

I simply rounded the floating point number to some decimal places. I tested your example too, and it solves this problem (which is not actually a problem, as you suggested). Thank you!
I have a KV Store with replicate turned on, a lookup definition with WILDCARD(match_field), and an automatic lookup configured to output a numeric lookup_field. When I run a search on the relevant source type, I see the lookup_field. However, when I search with the lookup_field (e.g., "lookup_field=1"), the search finishes quickly and doesn't return anything.

This is an example of the lookup:

```
mac,exception
00ABCD*,1
11EEFF*,1
```

This is an example of the lookup definition:

```
WILDCARD(mac)
```

This is an example of the automatic lookup:

```
lookup mac_addresses mac OUTPUT exception
```

Here is an example of a search that does not return the expected results:

```
index=mac_index exception=1
```

Here's what's really strange: it works for some events, but not others. When I run this, I get five events:

```
earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs exception=1
```

When I run this (adding the manual lookup), I get 109 (which is accurate):

```
earliest=7/29/2024:00:00:00 latest=7/30/2024:00:00:00 index=logs
| lookup exception_lookup mac OUTPUTNEW exception
| search exception=1
```

Any ideas of what could cause this? Any ideas on how to troubleshoot it?
I don't think so. Post-process search is a parameter for the POST request and needs a valid SPL search. If you wanted the post-process search to reference the base search itself, you'd have to loadjob with that particular search's ID. EDIT: OK, you can do that using the same saved search (but for this you need a scheduled saved search).
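A minimal sketch of that loadjob approach (the saved search name and the status field are placeholders):

```
| loadjob savedsearch="admin:search:my_scheduled_base_search"
| stats count by status
```

This pulls the cached results of the most recent scheduled run instead of re-executing the base search.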
I suppose it's not "for Splunk" but rather simply floating-point arithmetic, which is not as straightforward as we are used to. You could simply work with numbers 1 or 2 orders of magnitude bigger than your "real" values so that you operate on integers. This is a common problem with floating-point arithmetic - numbers are not what they seem (or what it seems they should be).
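A minimal sketch of that scaling trick, reusing the 10.6 example from this thread:

```
| makeresults
| eval amount = 10.6
| eval tenths = round(amount * 10)
| eval compare = if(tenths % 10 == 6, "T", "F")
```

Scaling by 10 and rounding turns the comparison into exact integer arithmetic, so a 0.6-vs-0.6000000001 mismatch can't occur.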
@Siddharthnegi - Two things I would say to check:

1. Restart Splunk and check.
2. Use "My Reports" instead of "Reports" and check.

(Do restart Splunk if you are updating the XML from the backend, and ensure you are updating on the right server.)

I hope this helps!!!!
My first thought is that the blocks downstream from the ansible block don't require it to complete, while the blocks downstream from the splunk block do. To check on this:

- Click on all downstream blocks.
- For each, open the advanced dropdown in the left panel.
- See if the Join Settings require the ansible/splunk blocks.
- If you don't want the block to be required, uncheck the box here.

To directly answer your title question, you can build your own error handling by placing a decision block after the splunk block to check whether splunk_block:action_results:status returns success or failed. If you take this approach and have the different branches reconnect at any point, you'll have to check the join settings, because they will automatically require the splunk block to have completed even if your playbook previously followed the "failed" path.
@kwiki - You are on the right track using streamstats. But I would just run two searches and compare the results; it would be much easier to write a query for. Here it is:

```
index=myindex sourcetype=trans response_code!=00 earliest=-3d@d latest=-2d@d
| stats count as error_count_3_days_ago
| append
    [ search index=myindex sourcetype=trans response_code!=00 earliest=-2d@d latest=-1d@d
    | stats count as error_count_2_days_ago]
| stats first(*) as *
| eval perc_increase = round(((error_count_2_days_ago - error_count_3_days_ago) / error_count_3_days_ago) * 100, 2)
| where perc_increase>3
| table perc_increase
```

(I have not tested the query, but the logic is to append the data together and compare.)

I hope this helps!!!!
Please confirm the "bin" field is present in the index.  It is not created by the bin command. If the 'bin' field is null or not present then the stats command will return no results and so the streamstats command will have nothing to evaluate.
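If it helps, a tiny runnable illustration of that distinction: bin buckets an existing field rather than creating a field called "bin" (the field names here are made up):

```
| makeresults count=5
| eval value = random() % 100
| bin span=10 value as value_bucket
| stats count by value_bucket
```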
You can use the savedsearch command to run a saved search in your query.  If you use the time picker to specify a time range other than All Time then the saved search will use your selected time range; otherwise, the time range in the saved search will be used.
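For example, a minimal sketch (the saved search name and host value are placeholders):

```
| savedsearch "errors_by_host"
| search host="web-01"
```

Run it with the time picker set to anything other than All Time to override the saved search's own time range.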
@nivets - Your question contains a paradox: do you want to change the time range, or not change the time range?