All Topics

Hello everyone, I have a requirement that data should be searchable only up to the last 30 days on the search page, even though the index retention period is 90 days. Basically, users should by default only be able to search events from the last 30 days, and only be allowed to search the full 90 days when it is really required. Is there any configuration available in Splunk to control which data is searchable and which is not? Thanks in advance.
We need to integrate NAS logs into Splunk, but I don't know how to set up the integration. We have an SC4S container. Can anyone help with this?
Hello community, we are currently a bit desperate because of a Splunk memory leak problem under Windows that most probably affects all of you, even if you have not noticed it yet. Here is the history and analysis:

The first time we observed a heavy memory leak on a Windows Server 2019 instance was after updating to Splunk Enterprise 9.1.3 (from 9.0.7). The affected Windows server has several Splunk apps installed (Symantec, ServiceNow, MS O365, DB Connect, SolarWinds), which start a lot of Python scripts at very short intervals. After the update the server crashed every few hours due to low memory. We opened Splunk case #3416998 on Feb 9th.

With the Microsoft Sysinternals tool rammap.exe we found a lot of "zombie" processes (PIDs no longer listed in Task Manager) which are still using a few KB of memory (~20-32 KB). The process names are btool.exe, python3.exe, splunk-optimiz and splunkd.exe. It seems that every time a process of one of these programs ends, it leaves behind such a memory allocation. The Splunk apps on our Windows server do this very often and very fast, which results in thousands of zombie processes.

After this insight we downgraded Splunk on the server to 9.0.7 and the problem disappeared. Then, on a test server, we installed Splunk Enterprise versions 9.1.3 and 9.0.9; both versions show the same issue. New Splunk case #3428922. On March 28th we got this information from Splunk: ".... got an update from our internal dev team on this "In Windows, after upgrading Splunk enterprise to 9.1.3 or 9.2.0 consumes more memory usage. (memory and processes are not released)" internal ticket. They investigated the diag files and seems system memory usage is high, but only Splunk running. This issue comes from the mimalloc (memory allocator). This memory issue will be fixed in the 9.1.5 and 9.2.2 .........."

9.2.2 arrived on July 1st: unfortunately, still the same issue, the memory leak persists. Third Splunk case #3518811 (which is still open). Also not fixed in version 9.3.0. Even after an online session showing them the rammap.exe screen, they wanted us to provide diags again and again from our (test) servers - but they should actually be able to reproduce it in their own lab.

The huge problem is: because of existing vulnerabilities in the installed (affected) versions we need to update Splunk (Heavy Forwarders) on our Windows servers, but we cannot, due to the memory leak issue.

How to reproduce:
- OS tested: Windows Server 2016, 2019, 2022, Windows 10 22H2
- Splunk Enterprise versions tested: 9.0.9, 9.1.3, 9.2.2 (Universal Forwarder not tested)
- Let the default installation run for some hours (Splunk service running).
- Download rammap.exe from https://learn.microsoft.com/en-us/sysinternals/downloads/rammap and start it.
- Go to the Processes tab and sort by the Process column.
- Look for btool.exe, python3.exe and splunkd.exe with a small total memory usage of about ~20-32 KB. The PIDs of these processes don't exist in the task list (see Task Manager or tasklist.exe).
- With the Splunk default installation (without any other apps) the memory usage increases only slowly, because the default apps' scripts don't run very often.
- Stopping the Splunk service releases the memory (and the zombie processes disappear in rammap.exe).
- For faster results you can add an app for excessive testing with python3.exe, starting it at short (0 second) intervals. The test.py doesn't need to exist - Splunk starts python3.exe anyway. Only an inputs.conf file is needed:
\etc\apps\pythonDummy\local\inputs.conf:

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 0000]
python.version = python3
interval = 0

[script://$SPLUNK_HOME/etc/apps/pythonDummy/bin/test.py 1111]
python.version = python3
interval = 0

(If you want, add some more stanzas: 2222, 3333 and so on.)

- The more Python script stanzas there are, the more zombie processes appear in rammap.exe, and the faster they appear.

Please share your experiences, and please also open tickets with Splunk Support if you see the problem. We hope Splunk finally reacts.
Hello, I have a query that works in Splunk Enterprise web search: "index="__eit_ecio*" | ... | bin _time span=12h | ... | table ... |". I am trying to put it into Python API code using the Job class, like this: searchquery_oneshot = "<my above query>". I am getting the error "SyntaxError: invalid decimal literal", pointing to the 12h in the main query. How can I fix this? 2) Can I direct "collect" results (summary index) via this API into JSON format? Thanks
Hi Team, can you please help me find a way to change the color of the output value in a single value visualization?

If COUNT_MSG is OK, then display OK in green.
If COUNT_MSG is NOK, then display NOK in red.

Current code:

<panel>
  <title>SEMT FAILURES DASHBOARD</title>
  <single>
    <search>
      <query>(index="events_prod_gmh_gateway_esa") sourcetype="mq_PROD_GMH" Cr=S* (ID_FCT=SEMT_002 OR ID_FCT=SEMT_017 OR ID_FCT=SEMT_018) ID_FAMILLE!=T2S_ALLEGEMENT
| eval ERROR_DESC= case(Cr == "S267", "T2S - Routing Code not related to the System Subscription." , Cr == "S254", "T2S - Transcodification of parties is incorrect." , Cr == "S255", "T2S - Transcodification of accounts are impossible.", Cr == "S288", "T2S - The Instructing party should be a payment bank.", Cr == "S299", "Structure du message incorrecte.",1=1,"NA")
| stats count as COUNT_MSG
| eval status = if(COUNT_MSG = 0 , "OK" , "NOK" )
| table status</query>
      <earliest>@d</earliest>
      <latest>now</latest>
      <sampleRatio>1</sampleRatio>
      <refresh>1m</refresh>
      <refreshType>delay</refreshType>
    </search>
    <option name="drilldown">all</option>
    <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0xdc4e41"]</option>
    <option name="refresh.display">progressbar</option>
    <option name="trellis.enabled">0</option>
    <option name="useColors">1</option>
  </single>
</panel>

Current output:
Our deployment has indexers located in the main data center, and we have multiple branches. We plan to deploy intermediate forwarders and Universal Forwarder (UF) agents in our remote branches to collect logs from security devices like firewalls and load balancers. What is the recommended bandwidth between the intermediate forwarders and the indexers? What is the recommended bandwidth between the UF agents and the indexers?
Hi, I have a requirement to create a software stack dashboard in Splunk which shows all the details that are seen in Task Manager, as shown below. We have both the Windows and Unix add-ons installed and are getting the logs. Can someone please help me create a dashboard that shows all of these details?
Hi, we maintain a lookup table which contains a list of account_id values and some other info, as shown below.

account_id   account_owner   type
12345        David           prod
123456       John            non-prod
45678        Nat             non-prod

In our query we use the lookup command to enrich the data using this lookup table. We match by account_id and get the corresponding owner and type as follows:

| lookup accounts.csv account_id OUTPUT account_owner type

In some events (depending on the source), the account_id values contain a leading 0, but in our lookup table the account_id column does not have a leading 0. Basically, some events will have account_id=12345 and some might have account_id=012345; both refer to the same account. The lookup command returns results when there is an exactly matching account_id in the events, but fails when there is that extra 0 at the beginning. How can we tune the lookup so that it matches the lookup table both with and without the leading 0 on the account_id field, and returns the corresponding results if either form matches? Hope I am clear. I am unable to come up with a regex for this.
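One possible approach for the question above (a minimal sketch, not a definitive answer - it assumes account_id is purely numeric and that the lookup file itself never stores leading zeros) is to normalize the event field before the lookup instead of changing the lookup:

... your base search ...
| eval account_id_clean=ltrim(account_id, "0")
| lookup accounts.csv account_id AS account_id_clean OUTPUT account_owner type

ltrim strips any leading zeros from the event value, so both 12345 and 012345 match the 12345 row in accounts.csv, while the original account_id field in the event is left untouched.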
Hello everyone, I have created a dashboard that shows total log volumes for different sources across 7 days. I am using a line chart with trellis. As shown in the picture, I want to add the median/average log volume as a horizontal red line. Is there a way to achieve this? The final aim is to be able to observe the pattern and the median/average log volume for a given week, which ultimately helps to define a baseline log volume for each source. Below is the SPL I am using:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time, source
| timechart span=1d sum(log_count) by source

Any suggestions would be highly appreciated. Thanks
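A rough sketch of one way to do this (not a definitive answer - it keeps the tstats base search above and assumes the trellis panels are split by source): compute the per-source median with eventstats and emit it as a second, constant series that renders as a flat line:

| tstats count as log_count where index=myindex AND hostname="colla" AND source=* earliest=-7d@d latest=now by _time, source
| bin _time span=1d
| stats sum(log_count) as daily_volume by _time, source
| eventstats median(daily_volume) as median_volume by source
| timechart span=1d max(daily_volume) as daily_volume max(median_volume) as median_volume by source

Because median_volume is identical for every day of a given source, its series draws as a horizontal line next to the daily volume; alternatively, the median series can be plotted through the Chart Overlay setting. Swap median for avg if an average baseline is preferred.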
Hi, I am trying to get a list of all users that hit our AI rule and see whether this increases or decreases over a timespan of 90 days. I want to see the application they use, with the last three months displayed as columns containing a count of users. Example below:

Applications   June (Month 1)   July (Month 2)   August (Month 3)
chatGPT        213              233              512

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access
| table user, app, date_month
```| dedup user, app, date_month```
| stats count by date_month, app
| sort date_month, app 0
| rename count as "Number of Users"
| table date_month, app, "Number of Users"
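A sketch of one possible query for this (assuming the user, app and rule fields from the search above, and that a distinct count of users per month is what is wanted):

index=db_it_network sourcetype=pan* rule=g_artificial-intelligence-access earliest=-3mon@mon
| eval month=strftime(_time, "%Y-%m")
| chart dc(user) over app by month

chart ... over app by month puts one row per application and one column per month, which matches the example table; dc(user) counts each user only once per app per month, while plain count would count every hit. A final rename can relabel app as "Applications" if needed.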
I've been out of touch with core Splunk for some time, so I'm just checking whether there are options for the requirement below. The organisation is preparing an RFP for various Big Data products and needs:
- A multi-cloud design for various applications. The applications (and thus the data) reside in AWS/Azure/GCP, in multiple regions within Europe.
- Low egress cost. Aggregating all the data into whichever cloud Splunk is predominantly installed in is therefore out of the question.
- 'Data nodes' (indexer clusters or data clusters) in each of the cloud providers where the applications/data reside.
- A search head cluster (cross-cloud search) spun up in the main provider (e.g. AWS), which can then search ALL of these remote 'data nodes'.
Is this design feasible in Splunk? (I'm aware of the Mothership add-on, but my last encounter with it at enterprise scale was not great.) Looking for something like the below, with low latency.
I have tried responding to a prompt via my email. The email says: "To execute the requested action, deny or delegate, click here https://10.250.74.118:8443/approval/14." This requires entering the web UI and finding the specific prompt. If I have 10,000 prompts, I cannot quickly find the event related to the email. Is it possible to use the REST API to post a prompt decision to a specific SOAR event?
Hello, I have an issue getting the Windows performance "Velocity SD Service Counters" logs. I used:

[perform://Velocity SD Service Counters]
counter=*
disable==0
instances=*
object=Velocity SD Service Counters
mode=multikv
showZeroValue=1
index=windows

But I am not getting events. Any recommendation will be highly appreciated!
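As a first troubleshooting step (a hedged sketch - it simply searches the forwarder's internal logs for the counter name, assuming _internal logs are reaching the indexers), it can help to check whether splunkd complains about the stanza at all:

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN) "Velocity SD Service Counters"

Any warnings about an unrecognized stanza or an invalid attribute should show up there. It is also worth double-checking the stanza against the Splunk Add-on for Windows inputs.conf.spec, which expects perfmon:// stanzas and a disabled = 0 setting rather than perform:// and disable==0.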
In today's fast-paced and highly competitive business landscape, organizations rely on robust and efficient enterprise resource planning (ERP) systems to streamline operations, enhance productivity, and drive growth. SAP is one of the leading ERP solutions adopted by enterprises worldwide due to its comprehensive suite of applications that cater to various business needs, including finance, logistics, human resources, and supply chain management. However, the complexity and criticality of SAP environments necessitate continuous monitoring to ensure optimal performance, security, and compliance. Monitoring an SAP environment involves tracking system health, performance metrics, and user activities to identify potential issues before they escalate into significant problems. This proactive approach not only helps maintain system reliability and efficiency but also safeguards sensitive business data and supports regulatory compliance.

Comprehensive Visibility

SAP environments are inherently complex, comprising multiple interconnected components that collectively support critical business functions. This complexity often results in fragmented visibility, making it challenging for IT teams to monitor the entire ecosystem effectively. While SAP's native monitoring tools such as Solution Manager, CCMS monitoring and Focused Run are robust and well-suited for managing SAP-specific components, they come with certain limitations that can be challenging for organizations with heterogeneous IT landscapes. They lack comprehensive visibility into non-SAP components, third-party applications, and external services that interact with the SAP environment. This leads to fragmented monitoring and potential blind spots, such as the inability to trace transactions end-to-end, making it difficult to diagnose performance bottlenecks or errors that span beyond SAP components. SAP Basis administrators use a variety of transaction codes to troubleshoot an issue; below are some of them.

AppDynamics excels in delivering comprehensive visibility across both SAP and non-SAP components, ensuring that every aspect of the system is monitored and optimized. AppDynamics allows for end-to-end tracing of business transactions as they flow through various components of the SAP environment. This means that every user action, from the initial request to the final response, can be tracked across different modules, databases, and external services, all in real time while being baselined. This granular level of visibility helps in pinpointing exactly where performance bottlenecks or errors occur, enabling faster and more accurate troubleshooting. Most of this is done in the background with no user interaction, then laid out in various ways for easy identification of issues.

Daily/Monthly/Quarterly/Yearly Check Lists

By default, SAP systems, like all major systems, need to be looked after to prevent issues from stacking up. These checklists cover various areas within the system, typically including hardware resources, processing utilization, job execution, and updates. These checks are performed at least on production systems and generally take about 10 to 15 minutes per system. This ensures the smooth operation of an SAP system and helps identify and resolve potential issues before they impact business operations. When problems are detected, manual troubleshooting and correlation are often required to determine the necessary actions. In such cases, the functional team is involved to coordinate and implement corrective measures.
Below is an example of a daily checklist for an ERP system. AppDynamics takes a proactive stance to monitoring these systems, with 35+ dashboards and 350+ metrics/KPIs out of the box. These checks can now be automated, removing the human error factor. AppDynamics supplies default health rules on key SAP system metrics and events for faster set-up of alerts, effectively switching the environment to being proactive and giving your organization the ability to identify and resolve issues much faster, ensuring that the SAP system runs smoothly and efficiently. This ongoing maintenance helps preserve system stability, performance, and security, minimizing the risk of disruptions to business operations and ensuring your systems operate efficiently and keep up with end-user demand.

DB Specific Support

SAP provides various tools and functionalities to monitor the databases supporting its environments. While these tools offer valuable insights, they also come with certain boundaries. These native SAP tools often lack comprehensive user experience monitoring capabilities, which are crucial for understanding the end-to-end performance impact on users. While some historical data analysis is available, it may not be as extensive or detailed as some companies need. SAP has been pushing its customers to move to HANA for some time, having announced that support for its ERP systems will end in 2027 unless you move to S/4HANA. This is no small task, and the SAP management tools leave much to be desired.

AppDynamics supports monitoring a wide range of databases commonly used in SAP environments, ensuring comprehensive visibility and performance management across the entire IT landscape. AppDynamics provides several out-of-the-box dashboards dedicated to databases, with 8 specifically for SAP HANA®, greatly reducing the learning curve. This ensures organizations can maintain optimal performance, reliability, and efficiency in their SAP environments.

Final Thoughts

By leveraging the advanced monitoring, alerting, and automation capabilities of AppDynamics, organizations can significantly reduce or even eliminate many of the manual tasks a Basis person needs to do. AppDynamics provides continuous, real-time visibility into system performance, automates diagnostics and reporting, and proactively alerts IT teams to potential issues (down to the code level if needed), greatly reducing the time a developer typically needs to get involved. This not only enhances system reliability and performance but also frees up valuable time for IT staff to focus on strategic initiatives and innovation, rather than routine maintenance tasks.
Hello Community!  Splunk and AppDynamics, united as part of Cisco, are driving the future of Observability. We are proud to announce that Splunk has been named a Leader in the 2024 Gartner® Magic Quadrant™ for Observability Platforms. Read the full blog on AppDynamics.com, republished from Splunk.com “Transformative Solution” says a Director of IT in a $30B+ retailer. “Best Monitoring and Observability Tool > Splunk,” is how a software engineer in a software company labels it. These are only a couple of the terms our customers use when describing the value they are getting from Splunk. With these descriptions in mind, we are elated that Splunk has been named a Leader in the 2024 Gartner® Magic Quadrant™ for Observability Platforms for the second year in a row in this category. Splunk’s leadership in the Observability market comes on the heels of Splunk being named a Leader in the 2024 Gartner® Magic Quadrant™ for Security Information and Event Management (SIEM) just about two months ago, which is the tenth consecutive time for Splunk in the Leaders Quadrant for that category. The old proverb says that good things come in pairs, and indeed Splunk delivers a whole lot of goodness for both Observability and Security, with Splunk being the singular vendor to appear in the Leaders Quadrant across both Magic Quadrant reports.
Hello and welcome back to "Splunk Smartness," where we explore how Splunk Education can turbocharge your career and your technical skills. I'm your host, Callie Skokos, and today we're chatting with Pedro Borges, a Senior Security Engineer at Siemens, based out of Colorado Springs, Colorado, USA. Pedro, great to have you on the series!

Pedro Borges: Thanks, Callie. I'm really glad to be here!

Callie Skokos: Pedro, you seem to balance a busy life with some fun activities. What do you like to do in your spare time?

Pedro: Well, spending time with my family is my top priority. When I find some extra time, I enjoy playing video games and noodling around on my bass guitar. It's a great way to unwind.

Callie: Sounds like a perfect way to recharge. Now, diving into your professional journey, how did you start using Splunk?

Pedro: My Splunk journey kicked off around Splunk .conf18. I was working at Concur as a Security Engineer, mainly focused on vulnerability management. I was a bit skeptical about how Splunk could fit into that, but attending the conference with my manager/mentor opened my eyes to its capabilities. By the next year, I was deep into learning how to support our Splunk deployment.

Callie: What an introduction! What Splunk products are you currently using?

Pedro: Right now, we're using Splunk Enterprise and Splunk Enterprise Security. They're integral tools for our security architecture.

Callie: With these powerful tools at your disposal, how did you go about learning to use them effectively?

Pedro: My management invested in Splunk instructor-led training and eLearning with labs, plus Splunk Certifications. So, I attended this type of training, but the best learning experience, really, has been on-the-job, really getting hands-on with the platform. I was also fortunate to have a great mentor and learn from others who've been working with Splunk. Plus, using Splunk exam prep for Splunk Certification is another great motivator to learn and dig deeper into what the products can do.

Callie: Oh, so you have Splunk Certifications? Can you tell us about the ones you've achieved?

Pedro: Absolutely! I have quite a few certifications under my belt: Splunk Enterprise Certified Architect, Splunk Enterprise Security Certified Admin, Splunk Cloud Certified Admin, and Splunk Certified Cybersecurity Defense Analyst. Honestly, I'm running out of certifications to take! Getting Splunk certified and becoming really proficient in using Splunk has been one of the best career moves I've made.

Callie: That's really impressive, Pedro. Shows such a growth mindset! Along with the traditional Splunk training, have you participated in any Splunk University bootcamps?

Pedro: Yes, I've taken part in bootcamps, especially to ramp up on SOAR. The in-person experience is invaluable because you can interact directly with instructors and ask questions on the spot. It's usually a long, intensive week in Las Vegas, but it's worth it. It's tough to match that level of engagement when you're learning virtually.

Callie: It sounds like those experiences have been integral to your learning. Pedro, what are the most valuable insights you've gained from using Splunk in your role?

Pedro: My role leans heavily towards the architectural and administrative side of things, managing three Splunk environments end-to-end. It's not just about sifting through data; it's about managing updates, developing dashboards, and optimizing our processes. Splunk helps us keep our security posture proactive rather than reactive.

Callie: And how has becoming proficient with Splunk impacted your career?

Pedro: It's been a game-changer. In a previous role, I was promoted to SIEM design manager, overseeing close to 20 Splunk deployments, which led to my current role at Siemens. Getting certified in Splunk was definitely one of the best career decisions I've made.

Callie: That's so amazing to hear. What future plans do you have regarding Splunk or further certifications?

Pedro: I'm looking forward to tackling more complex challenges: integrating intricate data sources, managing complex deployments. It's about continuous growth and pushing the boundaries of what I can do with Splunk to support my organization. I'm also very active in the Splunk Community and hope to share my expertise and experience with those who are early career learners like I once was when I started.

Callie: I love hearing this because we LOVE our Splunk Community, too. I hear you're a Splunk Community MVP, which is a way for Splunk to recognize and celebrate our star contributors. What's that been like?

Pedro: It's been super validating. I'm on my way toward becoming a SplunkTrust member, and it feels great to give back to the community that has helped me so much. From sharing tips on the community channels to participating in presentations at .conf, it's been a rewarding experience.

Callie: I'm excited about what's ahead for you, Pedro. Thank you so much for sharing your Splunk journey with us today.

Pedro: Thank you, Callie. It was a pleasure to be here and share my story!

___________________________________________________

And thank you all for tuning into "Splunk Smartness." We'll be back next time with more insights on how Splunk can enhance your tech skills and career – and help you make your organization more resilient. Until then, stay smart and keep Splunking!

PS: Be like Pedro and check out our latest Splunk Certification – Splunk Certified Cybersecurity Defense Engineer.
Hello, is there a way to add third-party Python modules to the Add-on Builder? I am trying to create a Python script in the Add-on Builder, but it looks like I need to use a module that is not included with it. Thanks for any help on this. Tom
Having the flexibility to use a different AccessKey for different applications you auto-instrument with the Cluster Agent is essential. This flexibility was introduced with the latest versions of the Cluster Agent. Follow these steps:

1. Un-instrument your application.

2. Create a new secret with the accessKey using the command below:

kubectl -n appdynamics create secret generic <secret-name> --from-literal=<custom-Controller-key-name>=<key-value>

In this command <key-value> will be your AccessKey. For example:

kubectl -n appdynamics create secret generic abhi-java-apps --from-literal=controller-key=xxxxx-fb91-4dfc-895a-xxxxx

3. I modified my yaml, and in the specific instrumentationRule section I added:

- namespaceRegex: abhi-java-apps
  language: java
  matchString: tomcat-app-abhi-non-root
  appNameLabel: app
  runAsUser: 1001
  runAsGroup: 1001
  customSecretName: abhi-java-apps
  customSecretKey: controller-key
  imageInfo:
    image: "docker.io/appdynamics/java-agent:latest"
    agentMountPath: /opt/appdynamics
    imagePullPolicy: Always

customSecretName is the secret name and customSecretKey is the key for that secret.

4. After this, I re-instrumented my application, and in the Cluster Agent logs I confirmed:

[INFO]: 2024-08-19 14:45:09 - deploymenthandler.go:262 - custom secretName is %s and is %s %!(EXTRA string=abhi-java-apps, string=controller-key)

Also, when I exec'd inside the application pod and ran env | grep -i Access, I confirmed that this AccessKey is used:

wso2carb@tomcat-app-abhi-non-root-5d558dddf4-rllzc:/usr/local/tomcat/webapps$ env | grep -i Access
APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY=xxxxx-fb91-4dfc-895a-xxxxx
JAVA_TOOL_OPTIONS=-Xmx512m -Dappdynamics.agent.accountAccessKey=xxxxx-fb91-4dfc-895a-xxxxx -Dappdynamics.agent.reuse.nodeName=true -Dappdynamics.socket.collection.bci.enable=true -javaagent:/opt/appdynamics-java/javaagent.jar

Additional Resources: Use Custom Access Key
Hi all, I have 20+ panels in a Dashboard Studio dashboard. As per the customer requirement, they want only 5 panels per page. Could you please help with the JSON code for segregating the panels? For example, if I click on the 1st dot it should display only the first 5 panels, if I click on the next dot it should display the next 5 panels, and so on.