Hi, is there any documentation on how tokens work when using them in JavaScript files? The docs at https://docs.splunk.com/Documentation/Splunk/9.1.2/Viz/tokens don't present much info on JavaScript usage. In particular, I am trying to use tokens to delete KV store values, and I am confused about how this can be done. Just using tokens.unset() is not working. Any help would be appreciated!
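A minimal SPL-side sketch of one possible approach, with loud assumptions: the collection is exposed through a lookup definition (here called kv_lookup_name, a placeholder) and a dashboard token $record_key$ holds the _key of the record to remove. Note that tokens.unset() only clears the token value on the page; it does not touch the KV store itself, which is usually modified through the storage/collections/data REST endpoint or by rewriting the lookup as below.

| inputlookup kv_lookup_name
``` kv_lookup_name and $record_key$ are placeholders, not names from the question ```
| where _key!="$record_key$"
| outputlookup kv_lookup_name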
We are setting up the Splunk OTel Collector with an SSL- and authorization-enabled Solr. We are facing an issue passing the username and password for the Solr endpoint in the agent_config.yaml file. Refer to the config file content below; for security reasons, we have masked the hostname, user ID, and password details.

receivers:
  smartagent/solr:
    type: collectd/solr
    host: <hostname>
    port: 6010
    enhancedMetrics: true
exporters:
  sapm:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    endpoint: "${SPLUNK_TRACE_URL}"
  signalfx:
    access_token: "${SPLUNK_ACCESS_TOKEN}"
    api_url: "${SPLUNK_API_URL}"
    ingest_url: "${SPLUNK_INGEST_URL}"
    sync_host_metadata: true
    headers:
      username: <username>
      password: <password>
    correlation:
  otlp:
    tls:
      insecure: false
      cert_file: <certificate_file>.crt
      key_file: <key_file>.key

Error log:

-- Logs begin at Fri 2023-11-17 23:32:38 EST, end at Tue 2023-11-28 02:46:22 EST. --
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/scheduler/simple.py", line 57, in _call_on_interval
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: func()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 194, in read_metrics
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: solr_cloud = fetch_collections_info(data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 328, in fetch_collections_info
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: get_data = _api_call(url, data["opener"])
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/collectd-python/solr/solr_collectd.py", line 286, in _api_call
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: resp = urllib.request.urlopen(req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 216, in urlopen
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return opener.open(url, data, timeout)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 519, in open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response = self._open(req, data)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 536, in _open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = self._call_chain(self.handle_open, protocol, protocol +
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 496, in _call_chain
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: result = func(*args)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1377, in http_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: return self.do_open(http.client.HTTPConnection, req)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/urllib/request.py", line 1352, in do_open
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: r = h.getresponse()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 1378, in getresponse
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: response.begin()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 318, in begin
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: version, status, reason = self._read_status()
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: ^^^^^^^^^^^^^^^^^^^
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: File "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/http/client.py", line 300, in _read_status
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: raise BadStatusLine(line)
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [30B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: [3B blob data]
Nov 18 14:06:48 aescrsbsql01.scr.dnb.net otelcol[1035]: {"kind": "receiver", "name": "smartagent/solr", "data_type": "metrics", "createdTime": 1700334408.8198304, "lineno": 56, "logger": "root", "monitorID": "smartagentsolr", "monitorType": "collectd/solr", "runnerPID": 1703, "sourcePath": "/usr/lib/splunk-otel-collector/agent-bundle/lib/python3.11/site-packages/sfxrunner/logs.py"}
Nov 18 14:06:58 aescrsbsql01.scr.dnb.net otelcol[1035]: 2023-11-18T14:06:58.821-0500 error signalfx/handler.go:188 Traceback (most recent call last):
Hi! I am trying to evaluate AppDynamics for monitoring IIS sites, but I get the error "unable to create application" when I try to create an application. / FKE
Hi, has anyone had issues monitoring Splunk servers with Zabbix agents? Just monitoring, no integration or log ingestion. Thanks, M
Hi, I'm trying to set up a way to automatically assign notables to the analysts evenly. The "default owner" in the notable adaptive response wouldn't help, as it will keep assigning the same rule to the same person. Is there a way to shuffle the assignees automatically?
I'm getting this error from a connection in DB Connect: "There was an error processing your request. It has been logged (ID xxxx)". I've manually copied the query into db_inputs.conf, but it doesn't work. Can anyone help me?
Hi, I am trying to set up an alert with the following query for tickets that are not assigned to someone after 10 minutes. I want the ticket number to be populated in the mail, but I am not getting it; the mail arrives without the ticket number.

index="servicenow" sourcetype=":incident"
| where assigned_to = ""
| eval age = now() - _time
| where age>600
| table ticket_number, age, assignment_group, team
| lookup team_details.csv team as team OUTPUTNEW alert_email, enable_alert
| where enable_alert = Y
| sendemail to="$alert_email$" subject="Incident no. "$ticket_number$" is not assigned for more than 10 mins - Please take immediate action" message=" Hi Team, This is to notify you that the ticket: "$ticket_number$" is not assigned for more than 10 mins. Please take necessary action on priority"
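A minimal sketch of one alternative, with the assumptions stated plainly: per-result tokens such as $result.ticket_number$ are documented for alert actions rather than for the sendemail search command, so one option is to save the search as an alert and move the email into the alert's "Send email" trigger action. The search below is the same query without sendemail; the subject line is an example of how the alert action could reference the first result's ticket number (assumed configuration, not taken from the original post).

index="servicenow" sourcetype=":incident"
| where assigned_to = ""
| eval age = now() - _time
| where age > 600
| table ticket_number, age, assignment_group, team

Example "Send email" subject for the alert action:
Incident no. $result.ticket_number$ is not assigned for more than 10 mins - Please take immediate action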
I want to change the message in a log, i.e.

<list >
  <Header>.....</Header>
  <status>
    <Message>Thuihhh_4y3y27y234yy4 is pending</Message>
  </status>
</list>

to

<list >
  <Header>.....</Header>
  <status>
    <Message>request is pending</Message>
  </status>
</list>

How can I achieve this using rex + sed commands in Splunk?
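A minimal sketch of one way to do this with rex in sed mode, assuming the identifier before "is pending" never contains "<" and that the rewrite only needs to happen at search time (this changes the displayed _raw, not the indexed data):

| rex mode=sed field=_raw "s/<Message>[^<]+ is pending<\/Message>/<Message>request is pending<\/Message>/g"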
I want to extract the following information and make it a field called "error message".

index=os source="/var/log/syslog" "*authentication failure*" OR "Generic preauthentication failure"

Example events:

Nov 28 01:02:31 server1 sssd[ldap_child[12010]]: Failed to initialize credentials using keytab [MEMORY:/etc/krb5.keytab]: Generic preauthentication failure. Unable to create GSSAPI-encrypted LDAP connection.
Nov 28 01:02:29 server2  proxy_child[1939385]: pam_unix(system-auth-ac:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.177.46.57 user=hippm
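A minimal sketch of a search-time extraction, assuming the goal is to capture everything from the failure phrase to the end of the event (the field is named error_message here, with an underscore, since field names with spaces are awkward to work with):

| rex field=_raw "(?<error_message>(Generic preauthentication failure|authentication failure).*)"

For the sample events this would yield "Generic preauthentication failure. Unable to create GSSAPI-encrypted LDAP connection." and "authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.177.46.57 user=hippm"; the pattern can be tightened if only the phrase itself is wanted.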
Hello there, I would like to convert the default time to the local country timezone and place the converted time next to the default one. The default timezone is Central European Time, and the timezone needs to be converted based on the country name available in the report. I guess I need to have a lookup table with the country name and the timezone of that country.

Timestamp                  CountryCode  CountryName    Region
2023-10-29T13:15:51.711Z   BR           Brazil         Americas
2023-10-30T10:13:19.160Z   BH           Bahrain        APEC
2023-10-30T19:15:24.263Z   AE           Arab Emirates  APEC
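A minimal sketch along the lookup-table lines mentioned above, with assumptions: a CSV lookup named country_timezones.csv (hypothetical) exists with fields CountryName and offset_hours, where offset_hours is the number of hours to add to the parsed Timestamp to get that country's local time. A fixed offset ignores daylight saving changes, and strptime interprets the string in the search head's timezone, so the offsets may need adjusting accordingly.

| lookup country_timezones.csv CountryName OUTPUT offset_hours
| eval ts_epoch=strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval local_time=strftime(ts_epoch + offset_hours*3600, "%Y-%m-%d %H:%M:%S")
| table Timestamp local_time CountryCode CountryName Region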
Hi All, we have configured application log monitoring on Windows application servers. The log path has a folder where all the _json files are stored. There are more than 300 JSON files in each folder, with different timestamps and dates. We have configured inputs.conf as shown below with ignoreOlderThan = 2d so that Splunk does not consume too much CPU/memory, but we can still see the memory and CPU of the application server going high. Kindly suggest best-practice methods so that the Splunk universal forwarder won't consume so much CPU and memory.

[monitor://C:\Logs\xyz\zbc\*]
disabled = false
index = preprod_logs
interval =300
ignoreOlderThan = 2d
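A minimal sketch of a tighter stanza, under assumptions: the files of interest all end in .json (the whitelist pattern below is a guess based on the description), and interval is generally only honored by scripted/modular inputs, so it is dropped here. Narrowing the match and letting ignoreOlderThan skip stale files is a common way to reduce the tailing workload on the forwarder.

[monitor://C:\Logs\xyz\zbc\*]
disabled = false
index = preprod_logs
whitelist = \.json$
ignoreOlderThan = 2d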
Hi, I want to inventory all Splunk tools related to artificial intelligence and observability. Here is the list:

Splunk AI Assistant - PREVIEW (formerly SPL Copilot)
Splunk Machine Learning Toolkit (MLTK)
Splunk App for Data Science and Deep Learning (DSDL)
Splunk IT Service Intelligence
Splunk App for Anomaly Detection

Did I forget some tools? Thanks
"Hey Splunk experts! I'm a Splunk newbie and working with data where running `stats count by status` gives me 'progress' and 'Not Started'. I'd like to include 'Wip progress' and 'Completed' in the r... See more...
"Hey Splunk experts! I'm a Splunk newbie and working with data where running `stats count by status` gives me 'progress' and 'Not Started'. I'd like to include 'Wip progress' and 'Completed' in the results. When running `stats count by status`. Desired output is: - Not Started - Progress - Wip Progress - Completed  Any tips or examples on how to modify my query to achieve this would be fantastic! Thanks 
Hi, I am trying to report on access requests against actual logins. I have a list of events from our systems of when users have logged in:

| table _time os host user clientName clientAddress signature logonType

I have a list of requests which cover a time frame and potentially multiple logins to multiple systems:

| table key host reporterName reporterEmail summary changeStartDate changeEndDate

So I want a list of events with any corresponding requests (there could be none, so I can alert the user/IT), joining on host and user, with _time between changeStartDate and changeEndDate. I do have this working by using map (see below), but it's very slow and not workable over large datasets/time ranges. There must be a better way. I had issues with matching on the time range, handling the case where there may not be a match, and optional username matching based on OS. Does anyone have any ideas?

Existing search:

...search...
| table _time os host user clientName clientAddress signature logonType
| convert mktime(_time) as epoch
| sort -_time
| map maxsearches=9999 search="
    | inputlookup Request_admin_access.csv
    | eval os=\"$os$\"
    | eval outerHost=\"$host$\"
    | eval user=\"$user$\"
    | eval clientName=\"$clientName$\"
    | eval clientAddress=\"$clientAddress$\"
    | eval signature=\"$signature$\"
    | eval logonType=\"$logonType$\"
    | eval startCheck=if(tonumber($epoch$)>=tonumber(changeStartDate), 1, 0)
    | eval endCheck=if(tonumber($epoch$)<=tonumber(changeEndDate), 1, 0)
    | eval userCheck=if(normalisedReporterName==\"$normalisedUserName$\", 1, 0)
    | where host=outerHost
    | eval match=case(
        os==\"Windows\" AND startCheck==1 AND endCheck==1,1,
        os==\"Linux\" AND startCheck==1 AND endCheck==1 AND userCheck==1,1)
    | appendpipe
        [ | makeresults format=csv data=\"_time,os,host,user,clientName,clientAddress,signature,logonType,wimMatch
          $epoch$,$os$,$host$,$user$,$clientName$,$clientAddress$,$signature$,$logonType$,1\" ]
    | where match==1
    | eval _time=$epoch$
    | head 1
    | convert ctime(changeStartDate) timeformat=\"%F %T\"
    | convert ctime(changeEndDate) timeformat=\"%F %T\"
    | fields _time os host user clientName clientAddress signature logonType key reporterName reporterEmail summary changeStartDate changeEndDate"
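A rough, hedged sketch of one map-free direction (assumptions: changeStartDate/changeEndDate in the lookup are epoch values, host is the only join key, and the per-OS username check is left out for brevity). It joins every request for a host onto each login event, flags whether any request covers the event time, and keeps either the in-window matches or the events with no covering request:

...search...
| table _time os host user clientName clientAddress signature logonType
| join type=left max=0 host
    [| inputlookup Request_admin_access.csv]
| eval in_window=if(_time>=changeStartDate AND _time<=changeEndDate, 1, 0)
| eventstats max(in_window) as has_request by _time host user
| where in_window==1 OR has_request==0
| dedup _time host user

Events whose host has no request at all come through with in_window=0, so they stay in the output for alerting; the dedup keeps one row per login when several requests overlap. Edge cases (and the Linux-only user match) would still need handling on top of this.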
Hi, I mistakenly cloned an alert to the "Slack Alerts" app instead of the normal "Search & Reporting" app. The alert is functioning and sending Slack messages when triggered, but it is in the wrong app. Worse, the alert now appears on the "All Configurations" page. I am able to disable the alert but not remove it, and I really need to remove it from the "All Configurations" page. I'm also not able to edit the alert in any way. Is it possible to remove the alert from the "All Configurations" page? Thank you.
In September, v23.9 included enhancements to FSO Platform Developer Support, Cloud Native Application Observability, SaaS Controller and Agent, and On-premises Platform.

WATCH THIS PAGE FOR UPDATES — Click the Options menu above right, then Subscribe
Want to receive all monthly Product Updates? Click here, then subscribe to the series

In this article…
What new product enhancements are there this month? FSO | Cloud Native Application Observability | Agents | SAP | SaaS Controller | On-premises Controller
Where can I find additional information about product enhancements?
Advisories and Notices
Resolved and known issues
Essentials

What new product enhancements are there this month?
This month, there were enhancements to FSO Platform Developer Support, Cloud Native Application Observability, SaaS Controller and Agent, and On-premises Platform. Find the highlights under each heading below.
TIP | Each product category below includes a link to the product's complete Release Notes, where you can find more detail for each enhancement.

Full-Stack Observability (FSO), Developer Support
NOTE | There is no FSO Platform release for August 2023. For Developer Support enhancements, see the 23.9 FSO Platform Developer Support Release Notes.

Accounts sign-in process
The sign-in process for Cloud Native Application Observability customers has been updated. New customers will sign in through a new URL, while existing customers will continue using the existing URL. No action is required from users, who will be automatically directed to their correct login page.

Anomaly Detection
Anomaly Detection enabled for Kubernetes entities
Anomaly Detection is now enabled for various Kubernetes entity types. This feature allows automatic detection of anomalies and provides options for customization such as linking HTTP request actions, tuning sensitivity levels, and testing anomaly detection in different environments.

App Root Cause Analysis using Anomaly Detection
You can now view Pod readiness and liveness probe information on the Properties panel of Pods, Workloads, Clusters, and Namespaces.

Cloud service expansions
We now extend monitoring support to the following cloud services:
Amazon SNS
AWS KDA Flink application support
AWS Glue
GCP Cloud Run
GCP Cloud SQL
GCP Load Balancers

Differentiate between Lambda and APM domains
Now, you can differentiate between Lambda and APM domains, filter by Lambda entities, and view Lambda details in the unified service detail view and via the Properties panel.

Grafana plugin
The new version of the Grafana plugin includes the Include All toggle option.

Infrastructure Collector
The Cisco AppDynamics Infrastructure Collector now supports monitoring of Amazon ECS tasks and containers.

New artifacts
New versions of various AppDynamics artifacts have been released, including:
OTel Docker images
Cluster Collector Docker images
Infrastructure Collectors Docker images
AppDynamics Operator Docker image
AppDynamics Helm charts

Observe page enhancements
Enhancements have been made to entity Observe pages, including alert messages for unknown health rules and the ability to view the Violating Metrics chart for a metric expression associated with a health rule.

Pattern detection for containers on Logs
Pattern detection is now available for containers on the Logs page, and the Log Pattern tab has been renamed to Patterns.
Specify attribute timestamp format for HTTP requests
When creating an HTTP request action, you can now specify the timestamp format for certain attributes.

Back to TOC | To Essentials

Cloud Native Application Observability* enhancement highlights
NOTE | See the Cloud Native Application Observability Release Notes page for the complete v23.9 enhancement details—released September 27, 2023.
NOTE | *As of 11/27/2023, now Cisco Cloud Observability

Accounts sign-in process
The sign-in process for Cloud Native Application Observability customers has been updated. New customers will sign in through a new URL, while existing customers will continue using the existing URL. No action is required from users, who will be directed automatically to their correct login page.

Anomaly Detection enabled for Kubernetes entities
Anomaly Detection is now enabled for various Kubernetes entity types. This feature allows automatic detection of anomalies and provides options for customization such as linking HTTP request actions, tuning sensitivity level, and testing anomaly detection in different environments.

Cloud Services Expansion
We now support monitoring the following cloud services:
Amazon SNS
AWS KDA Flink application support
AWS Glue
GCP Cloud Run
GCP Cloud SQL
GCP Load Balancers

Amazon ECS for Infrastructure Collector
The Cisco AppDynamics Infrastructure Collector now supports monitoring of Amazon ECS tasks and containers.

Grafana plugin
The 23.9 version of the Grafana plugin includes the Include All toggle option.

App Root Cause Analysis using Anomaly Detection
You can now view Pod readiness and liveness probe information on the Properties panel of Pods, Workloads, Clusters, and Namespaces.

New artifacts
New versions of various AppDynamics artifacts have been released, including:
OTel Docker images
Cluster Collector Docker images
Infrastructure Collectors Docker images
AppDynamics Operator Docker image
AppDynamics Helm charts

Observe page enhancements
Enhancements have been made to entity Observe pages, including alert messages for unknown health rules and the ability to view the Violating Metrics chart for a metric expression associated with a health rule.

Specify attribute timestamp format for HTTP requests
When creating an HTTP request action, you can now specify the timestamp format for certain attributes.

Differentiate between Lambda and APM domains
Now, you can differentiate between Lambda and APM domains, filter by Lambda entities, and view Lambda details in the unified service detail view and via the Properties panel.

Back to TOC | To Essentials

Agent enhancement highlights
NOTE | See the AppDynamics v23.9 APM Platform (SaaS) Release Notes for the complete September 2023 enhancement details.

Analytics Agent
GA 23.9, September 15, 2023
This release replaces the deprecated javax.el 3.0.0 library with jakarta.el 3.0.4.

Cluster Agent
GA 23.9, September 26, 2023
The enhancements in this release include:
Cluster Agent uses the container ID to correlate APM entities with Infrastructure entities. For Kubernetes versions >=1.25, you must configure the auto-instrumentation configuration file because the container runtime uses the cgroup v2 API from Kubernetes version 1.25 onwards. See Correlate Application Containers with App Agents (For Kubernetes version 1.25).
You can use the instructions at Install Cluster Agent with OpenShift Operator Bundle and Install Infrastructure Visibility with OpenShift OperatorHub Bundle to install Cluster Agent and Infrastructure Visibility, respectively, with the OpenShift OperatorHub bundle.

Flutter Agent
GA 23.9, September 11, 2023
Flutter Agent is now compatible with Dio 5.1.0.

IBM Integration Bus Agent
GA 23.9, September 27, 2023
With this release, Java Agent includes support for query statement details on Couchbase database exit calls.
There is now a new option, View Detection Rule, in Business Transaction > Actions. Click this option to view the custom detection rule associated with a business transaction. See View Business Transactions in the documentation.

Private Synthetic Agent
GA 23.9, September 28, 2023
This release supports Private Synthetic Agent deployment in Red Hat OpenShift. See Set up PSA in Red Hat OpenShift in the documentation.

.NET Agent
GA 23.9, September 29, 2023
This release includes JIT instrumentation on Azure App Service with the coordinator.

Back to TOC | To Essentials

SaaS Controller enhancement highlights
NOTES | See the AppDynamics v23.9 APM Platform SaaS Controller Release Notes page for the complete September 2023 enhancements.

Alert and Respond
GA 23.9, September 29, 2023
When creating or editing an email action without a template, you can now add details including the name of the email action, preferred time zone, and the To, Cc, and Bcc recipients list.

Cisco Secure Application
GA 23.9, September 29, 2023
Cisco Secure Application now includes support for email-based alerts.

Cluster Monitoring
GA 23.9, September 29, 2023
When monitoring failed pods, you can now hide historic pod details in the new Historical Pods section. Find it under Pods > Filters in the Cluster Details view.

View Detection Rule option
GA 23.9, September 29, 2023
View the custom detection rule associated with a business transaction with the View Detection Rule option, found in Business Transaction > Actions.

End User Monitoring
GA 23.9, September 29, 2023
The End User Monitoring update upgrades EUM to Java 17 to ensure that Java-supported services remain secure and efficient.

Back to TOC | To Essentials

Where can I find additional information about product enhancements?
In Documentation, each product category has a Release Notes page where enhancements are described in detail on an ongoing basis. Links to the most recent versions are:
Cisco Full Stack Observability
Cloud Native Application Observability
AppDynamics APM Platform 23.x
On-premises AppDynamics APM Platform
Accounts Administration
AppDynamics SAP Agent

Back to TOC | To Essentials

Resolved issues
DID YOU KNOW? You can find ongoing lists of Resolved Issues on each Release Notes page by version. Sort the list on each page by headings, including key, product, severity, or affected version(s). Find Resolved Issues by Product here:
• Cisco Full Stack Observability (FSO) Release Notes
• Cloud Native Application Observability Release Notes
• AppDynamics APM Platform, Resolved Issues for Agents and SaaS Controller
• On-premises AppDynamics APM Platform
• Release Notes for Accounts and Licensing
• AppDynamics SAP Agent Release Notes

Back to TOC | To Essentials

Advisories and Notifications
Upcoming End of Support for Cluster Collectors <23.10 and required upgrade for continued Kubernetes monitoring
Cluster Collectors <23.10 are deprecated with support ending January 30, 2024.
After this date, monitoring Kubernetes entities via the relationship pane will require an upgrade to Cluster Collector version >=23.10.

Essentials
ADVISORY | Customers are advised to check backward compatibility in the Agent and Controller Compatibility documentation.
Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
Download Additional Components (SDKs, Plugins, etc.)
How do I get started upgrading my AppDynamics components for any release?
Product Announcements, Alerts, and Hot Fixes
Open Source Extensions
License Entitlements and Restrictions

CAN'T FIND WHAT YOU'RE LOOKING FOR? NEED ASSISTANCE? Connect in the Forums
Is there a definitive KB article that tells us exactly what makes up a user's "Disk Space Limit"? What activities count towards this limit? Also, what can help users clean up their usage? Screenshot attached of the setting I am talking about. Appreciate any and all help.
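One hedged sketch for investigating this, on the assumption that the setting in the screenshot is the role-level search disk quota (srchDiskQuota), which counts the disk used by a user's search job artifacts in the dispatch directory: the search jobs REST endpoint reports per-job disk usage, so something like the following (run from the search head) can show where each user's quota is going. Users can also free space by deleting old search jobs from Activity > Jobs or letting them expire.

| rest /services/search/jobs
| stats sum(diskUsage) as artifact_bytes count as jobs by eai:acl.owner
| sort - artifact_bytes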
Hello, I have a table that shows vulnerabilities by asset name and severity level. For example, I have an asset name that has 3 critical, 2 high, 3 medium, and 1 low. What I want to do is click on the critical field and show just those critical vulnerabilities for that asset name, and so on. I am not sure whether that requires a condition, and how that would be set up, or whether it just requires a simple drilldown. Can someone please help?
November 2023 Edition

Hayyy Splunk Education Enthusiasts and the Eternally Curious!
We’re back with another edition of indexEducation, the newsletter that takes an untraditional twist on what’s new with Splunk Education. We hope the updates about our courses, certification, and technical training will feed your obsession to learn, grow, and advance your careers. Let’s get started with an index for maximum performance readability:
Training You Gotta Take | Things You Needa Know | Places You’ll Wanna Go

Training You Gotta Take

All the New Stuff | Listen for the Steady Drumbeat
Can you hear it? That’s the sound of new Splunk Education courses landing on the Splunk learning catalog almost weekly! You can always search the Splunk Training and Enablement Platform (STEP) for courses that align with your security, cloud, or O11y learning journey, or check out our latest Release Announcements. And, don’t forget to check in with your Org Manager if you’re looking to enroll in paid training using your company’s Training Units. Turn up the volume and get rockin’ on some new coursework today.
Gotta Get Current | Tune into New Releases

Español | Say Hola to Our Translated Content
It’s a big world out there – with 8 billion people and about 7,000 languages spoken. Splunk Education is determined to get closer to as many of these people as possible by publishing training and certification in more diverse languages. We are pleased to share that we now offer free, self-paced eLearning courses with Spanish captions. Watch for more translated content and captions coming soon. Mucho gusto!
Gotta Be Clear | Spanish Captions are Here

Things You Needa Know

Learning Splunk | Proof It’s a Career-Booster
Wowsa. The newly-released 2023 Splunk Career Impact Study shows that almost two-thirds of our Splunk Community survey-takers believe that proficiency in Splunk directly correlates to their positive career benefits and success. Over 60 percent of these folks found the most benefits through Splunk Education – including Splunk University, Hands on Labs, Splunk Certification, and Splunk Academic Alliance. Future-proof your career with courses and certifications from Splunk Education.
Needa Know the Numbers | Quantify your Career Resilience

Being Inclusive | Diverse Learning Spaces at Splunk
October was National Disability Employment Awareness Month (NDEAM). At Splunk, we are grateful that our community is made up of all types of people, with all types of experiences and points of view. This diversity creates interest, sparks innovation, and fosters growth. Find out how Splunk supports NDEAM and weaves this awareness into its Splunk Education programs in our latest blog.
Needa Know About Accessibility | Read our Blog

Places You’ll Wanna Go

To the ALPs | Authorized Learning Partners, that is…
The Alps may be a mountain range in South-Central Europe running through France, Switzerland, Monaco, Italy, Liechtenstein, Austria, Germany, and Slovenia, but our Splunk ALPs (Authorized Learning Partners) cover even more territory! In fact, we have certified partners teaching Splunk courses throughout EMEA, LATAM, APAC, and AMER (including PBST) – 12 languages in total – to ensure Splunk courses and education services are available to you around the world. Get ready to learn on your terms!
Wanna Learn on Your Terms | ALPs are Ready for You

To YouTube | Once Upon an Attack
No matter how vigilant you are and how much you know about internet security, it can be helpful to get regular reminders.
Once Upon An Attack is a creative web series developed by the Splunk Education team depicting some common cybersecurity threats and attacks. In the first episode we meet our protagonist, Jane, and go on a journey with her as she experiences a ruthless phishing campaign. Find out how Jane fares and subscribe to our YouTube How-To channel so you can access all-new episodes as they are launched.
Wanna Go Online | How-To Do it Safely

Find Your Way | Learning Bits and Breadcrumbs
Go Get Rewarded | Learning Points are Waiting to be Redeemed
Go to STEP | Get Upskilled
Go Discuss Stuff | Join the Community
Go Social | LinkedIn for News
Go Share | Subscribe to the Newsletter

Thanks for sharing a few minutes of your day with us – whether you’re looking to grow your mind, career, or spirit, you can bet your sweet SaaS, we got you. If you think of anything else we may have missed, please reach out to us at indexEducation@splunk.com.

Answer to Index This: Too much JAVA.
Hi! We use Splunk Stream 7.3.0. When an event longer than 1,000,000 characters is received in a log, Splunk cuts it off. The event is in JSON format. Can you tell me what settings should be applied in Splunk Stream so that Splunk parses the data correctly? Thanks!
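A minimal sketch, assuming the truncation is the indexing-time line limit rather than something inside Stream itself: props.conf on the parsing tier (indexers or heavy forwarders) controls how many characters of a line are kept via TRUNCATE, so raising it, or setting it to 0 for no limit, for the sourcetype these events arrive under is one thing to try. The sourcetype name below is a placeholder, not the actual Stream sourcetype.

[your_stream_sourcetype]
TRUNCATE = 0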