Hello, I am wondering how other security service providers have handled this issue, or what best practice is. To plan for least privilege, indexes would be separated out by group: we could store all data related to a group in a respective index. Network traffic, network security, antivirus, Windows event data, etc. would all go into a single index for the group, and that group would be given permissions to the index. An issue with this scenario is search performance. Searches may be performed on network traffic, or on host data, or on antivirus data, but Splunk will have to search the buckets containing all of the other, unrelated data. If antivirus data is only producing 5 GB a day while network traffic is producing 10 TB a day, this will have a huge negative effect on searches for antivirus data. This will be compounded with SmartStore (S2), where IOPS will be used to write the buckets back to disk. If least privilege weren't an issue, it would be optimal to create an index for each specific data type: network traffic would have its own index, Windows hosts would have their own index. But the crux of architecting in this fashion is how to implement least privilege: one group must not be able to see the host data of another group. One idea to get around this is to limit search capability by host, but that would require a lot of work from the Splunk team and is not 100% reliable if wildcards are used. Another idea is to simply create a separate index for each data type for each group. My concern with this is scaling: if we have 10 groups that each require 10 indexes, that's 100 indexes; if we have 50 groups, that's 500; 100 groups would mean 1,000 indexes. This does not scale well. Thank you in advance for your help.
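For reference, a minimal sketch of the role restriction that the "separate index per data type per group" option implies; the index and role names here are hypothetical examples, and the exact attributes should be verified against the indexes.conf and authorize.conf documentation.

# indexes.conf (indexers) - one index per data type per group (names are examples)
[groupA_network]
homePath   = $SPLUNK_DB/groupA_network/db
coldPath   = $SPLUNK_DB/groupA_network/colddb
thawedPath = $SPLUNK_DB/groupA_network/thaweddb

[groupA_antivirus]
homePath   = $SPLUNK_DB/groupA_antivirus/db
coldPath   = $SPLUNK_DB/groupA_antivirus/colddb
thawedPath = $SPLUNK_DB/groupA_antivirus/thaweddb

# authorize.conf (search heads) - the group's role can only search its own indexes
[role_groupA]
srchIndexesAllowed = groupA_*
srchIndexesDefault = groupA_network

With a consistent naming convention, the wildcard in srchIndexesAllowed keeps role management to roughly one stanza per group even as the index count grows, which partly addresses the scaling concern, though the indexes themselves still have to exist and be managed.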
Improve software compliance, accelerate delivery, and gain business-critical insights faster — with Smart Agent for AppDynamics. The intricacies of managing agents across diverse infrastructures, and throughout their lifecycles, can be daunting. But the solution is here: Cisco AppDynamics has announced the Smart Agent, a revolutionary tool designed to streamline this process. To learn more about these topics and how the Smart Agent can revolutionize your agent management, head over to Manasa HG's blog post, a must-read if you want to focus more on innovation and less on maintenance: Cisco AppDynamics reimagines agent lifecycle management with Smart Agent.
Key topics Manasa explores in the blog post include:
Agent lifecycle management capabilities: from adherence to versioning compliance standards and upgrade processes at scale, to preserving agent health and agility.
How it works: a step-by-step guide on how to install and register the Smart Agent with the Controller, and how to conduct all agent lifecycle management operations from within the user interface.
The Agent Management user interface: explore the new interface that lets you view the inventory of all your existing agents, their status, and much more.
A real-world upgrade scenario: understand how the Smart Agent and central UI controls can help address compliance needs, based on a real-world scenario.
What's next: learn about the upcoming auto-discovery and auto-deploy capabilities that aim to further simplify agent management.
About Manasa HG: Manasa is a Product Manager at Cisco AppDynamics who has worked on various AppDynamics agents and OpenTelemetry, and is currently focused on Agent Management. A product enthusiast, Manasa likes to bring delight to customers by solving their key pain points.
Sample data:

<?xml version="1.0" encoding="UTF-8" ?>
<Results xmlns:xsi="http://www.w3.org">
  <Result>
    <Code>OK</Code>
    <Details>LoadMessageOverviewData</Details>
    <Text>Successful</Text>
  </Result>
  <Data>
    <ColumnNames>
      <Column>Sender&#x20;Component</Column>
      <Column>Receiver&#x20;Component</Column>
      <Column>Interface</Column>
      <Column>System&#x20;Error</Column>
      <Column>Waiting</Column>
    </ColumnNames>
    <DataRows>
      <Row>
        <Entry>XYZ</Entry>
        <Entry>ABC</Entry>
        <Entry>Mobile</Entry>
        <Entry>-</Entry>
        <Entry>3</Entry>
      </Row>
    </DataRows>
  </Data>
</Results>

Hello, I need to extract fields from the above XML data. I have tried the props below, but the data is still not extracting properly.

Props.conf:

CHARSET = UTF-8
BREAK_ONLY_BEFORE = <\/Row>
MUST_BREAK_AFTER = <Row>
SHOULD_LINEMERGE = true
KV_MODE = xml
pulldown_type = true
DATETIME_CONFIG = CURRENT
NO_BINARY_CHECK = true
TRUNCATE = 0
description = describing props config
disabled = false

How can I parse this data? Thanks in advance.
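As a starting point, a minimal props.conf sketch that treats each <Results>...</Results> document as one event and lets KV_MODE=xml do the field extraction; the sourcetype name xml_results is hypothetical, and the line breaking assumes each event begins with the XML declaration.

[xml_results]
# index-time: break on the XML declaration instead of line-merging
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<\?xml
TRUNCATE = 0
DATETIME_CONFIG = CURRENT
# search-time: automatic XML field extraction (must be present where searches run)
KV_MODE = xml

If each XML document arrives as its own file, the LINE_BREAKER setting is largely a safety net; the key changes from the original attempt are turning off line merging and keeping KV_MODE = xml on the search tier.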
Hi everyone, for a project I need to deploy a test environment with Splunk, and I need to capture stream logs in order to analyze them. For this project I have deployed Splunk Enterprise (9.1.2) on Ubuntu 20.04, and on another VM (also Ubuntu 20.04) I installed my UF (9.1.2). On the UF I installed the Splunk Add-on for Stream Forwarders (8.1.1) to capture packets, and on my Splunk Enterprise instance the Splunk App for Stream (8.1.1). I followed all the installation and configuration steps and debugged some issues, but I still have an error that I don't know how to fix. In the streamfwd.log file I see this error:

2024-01-24 06:14:03 ERROR [140599052777408] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <ens33>: 253
2024-01-24 06:14:03 FATAL [140599052777408] (CaptureServer.cpp:2337) stream.CaptureServer - SnifferReactor was unable to start packet capture

The sniffer interface ens33 is the right interface where I want to capture stream packets, but I don't understand why it doesn't recognize it. If you have any ideas, I will be very grateful.
Hey everyone, I am in a situation where I have to provide a solution to a client of mine. Our application is deployed on their k8s and logs everything to stdout, where they take it and put it into a Splunk index, let's call it "standardIndex". Due to a change in legislation, and a change in how they operate under this legislation change, we need to send specific logs, selected by message content (easiest for us..), to a special index we can call "specialIndex". I managed to rewrite the messages we log to satisfy their needs in that regard, but now I fail to route those messages to a separate index. The collectord annotations I put in our patch look like this, and they seem to work just fine:

spec:
  replicas: 1
  template:
    metadata:
      annotations:
        collectord.io/logs-replace.1-search: '"message":"(?P<message>Error while doing the special thing\.).*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.1-val: '${timestamp} message="${message}" applicationid=superImportant status=failed'
        collectord.io/logs-replace.2-search: '"message":"(?P<message>Starting to do the thing\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.2-val: '${timestamp} message="${message}" applicationid=superImportant status=pending'
        collectord.io/logs-replace.3-search: '"message":"(?P<message>Nothing to do but completed the run\.)".*?"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.3-val: '${timestamp} message="${message}" applicationid=superImportant status=successful'
        collectord.io/logs-replace.4-search: '("message":"(?P<message>Deleted \d+ of the thing [^\s]+ where type is [^\s]+ with id)[^"]*").*"@timestamp":"(?P<timestamp>[^"]+)"'
        collectord.io/logs-replace.4-val: '${timestamp} message="${message} <removed>" applicationid=superImportant status=successfull'

My only remaining goal is to send these specific messages to a specific index, and this is where I can't follow the outcold documentation very well. Actually, I am even doubting this is possible, but I fail to understand it completely. Does anyone have a hint?
We encounter an issue with our IIS logs in an Azure storage account. Our logging data becomes duplicated at the end of the hour, when the last-modified timestamp of a closed log file is updated. This is caused by a known bug in the Azure extension that we are using and which we cannot update; however, it is the behaviour of the add-on that causes the duplication in the logs. An example error can be seen below:

2024-01-24 12:03:09,811 +0000 log_level=WARNING, pid=7648, tid=ThreadPoolExecutor-1093_9, file=mscs_storage_blob_data_collector.py, func_name=_get_append_blob, code_line_no=301 | [stanza_name="prd10915-iislogs" account_name="prd10915logs" container_name="iislogs" blob_name="WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log"] Invalid Range Error: Bytes stored in Checkpoint : 46738047 and Bytes stored in WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log : 46738047. Restarting the data collection for WAD/bd136adb-2f39-4042-94f3-2ac21450cc22/IaaS/_prd10920EOLAUWebNeuVmss_2/u_ex24012410_x.log

The error happens in the %SPLUNK_HOME%\etc\apps\Splunk_TA_microsoft-cloudservices\lib\mscs_storage_blob_data_collector.py file on line 280. The blob stream downloader expects more bytes than the known checkpoint and raises an exception when the byte counts are the same. This exception is then handled by this piece of code:

blob_stream_downloader = blob_client.download_blob(
    snapshot=self._snapshot
)
blob_content = blob_stream_downloader.readall()
self._logger.warning(
    "Invalid Range Error: Bytes stored in Checkpoint : "
    + str(received_bytes)
    + " and Bytes stored in "
    + str(self._blob_name)
    + " : "
    + str(len(blob_content))
    + ". Restarting the data collection for "
    + str(self._blob_name)
)
first_process_blob = True
self._ckpt[mscs_consts.RECEIVED_BYTES] = 0
received_bytes = 0

Here the blob is marked as new and fully re-downloaded and ingested, causing our data duplication. We would like to request a change to the add-on that prevents this behaviour when the checkpoint byte count is equal to the log file byte count. The add-on should not assume that a file has grown in size just because the last-modified timestamp has changed.
Hi Team, my requirement is to install the Universal Forwarder on an on-premises Kubernetes system. Please point me to a guide for installing it on Kubernetes.
Hi, I have the SPL below and I am not able to get the expected results. Could you please help? If I add a stats count by clause, I'm still not getting the expected result shown below.

SPL:

basesearch earliest=@d latest=now
| append [ search earliest=-1d@d latest=-1d ]
| eval Consumer = case(match(File_Name,"^ABC"), "Down", match(File_Name,"^csd"),"UP", match(File_Name,"^CSD"),"UP", 1==1,"Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| eval percentage_variance=abs(round(((Yesterday-Today)/Yesterday)*100,2))
| table Name Consumer Today Yesterday percentage_variance

Expected result:

Name  Consumer  Today  Yesterday  percentage_variance
TEN   UP        10     10         0.0%
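For reference, a rough sketch of one way to produce the Today/Yesterday comparison; it assumes the base search can cover both days in a single pass (earliest=-1d@d) and that the Name and File_Name fields exist on the events, so adjust field names to your data.

basesearch earliest=-1d@d latest=now
| eval Consumer=case(match(File_Name,"^ABC"),"Down", match(File_Name,"^(csd|CSD)"),"UP", 1==1,"Others")
| eval Day=if(_time<relative_time(now(),"@d"),"Yesterday","Today")
| eval key=Name."|".Consumer
| chart count over key by Day
| eval Name=mvindex(split(key,"|"),0), Consumer=mvindex(split(key,"|"),1)
| fillnull value=0 Today Yesterday
| eval percentage_variance=if(Yesterday=0, "n/a", abs(round(((Yesterday-Today)/Yesterday)*100,2))."%")
| table Name Consumer Today Yesterday percentage_variance

The idea is that the counts per day have to be aggregated (chart/stats) into Today and Yesterday columns before the percentage variance can be computed; in the original SPL, Today and Yesterday never exist as fields, so the eval returns nothing.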
We want to install Splunk in our golden image using Packer. This is for deploying servers in Azure using golden images for RHEL 8 and Ubuntu 22. I found documentation for Windows (Integrate a universal forwarder onto a system image - Splunk Documentation), but not for RHEL/Ubuntu. Any help appreciated.
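On Linux the same general idea as the Windows doc applies: install and configure the UF in the image, then clear the instance-specific state before the image is captured. A minimal sketch for a Packer shell provisioner, assuming the default /opt/splunkforwarder install path; verify the exact steps against the universal forwarder documentation for *nix.

# Run inside the Packer build, after installing/configuring the UF and before capture
/opt/splunkforwarder/bin/splunk stop
# Clears the instance GUID, server name, and other per-host state so each clone
# registers as a new forwarder when it first boots
/opt/splunkforwarder/bin/splunk clone-prep-clear-config

On first start of a cloned VM, the forwarder regenerates its GUID and server name, which avoids duplicate-host problems on the deployment server and indexers.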
Hello, for a dashboard, the user wants the canvas size to fit their screen every time they open the dashboard. How can I define this?
01-24-2024 10:24:31.312 +0000 WARN sendmodalert [3050674 AlertNotifierWorker-0] - action=slack - Alert action script returned error code=1
01-24-2024 10:24:31.312 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack - Alert action script completed in duration=96 ms with exit code=1
01-24-2024 10:24:31.304 +0000 FATAL sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Alert action failed
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Slack API responded with HTTP status=200
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Using configured Slack App OAuth token: xoxb-XXXXXXXX
01-24-2024 10:24:31.304 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - action=slack STDERR - Running python 3
01-24-2024 10:24:31.212 +0000 INFO sendmodalert [3050674 AlertNotifierWorker-0] - Invoking modular alert action=slack for search="Updated Testing Nagasri Alert" sid="scheduler_xxxxx__RMDxxxxxxx" in app="xxxxx" owner="xxxx" type="saved"

I have done the entire setup correctly: created an app with the chat:write scope, added the channel to the app, and got the OAuth token and the webhook link of the channel. But sendalert is failing with error code 1, and the GitHub README (slack-alerts/src/app/README.md at main · splunk/slack-alerts) doesn't mention this. Is it an issue on the Splunk end or the Slack end? What would be the fix for it?
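For troubleshooting, it can help to trigger the alert action directly from a search so the failure is reproducible outside the scheduler; a rough sketch is below, where the parameter names (param.channel, param.message) are assumed from the Slack alert app's setup page and should be checked against its README, and the channel name is an example.

| makeresults
| sendalert slack param.channel="#my-test-channel" param.message="sendalert test from Splunk"

Running it this way surfaces the action's stderr in the search UI and in splunkd.log, which usually narrows down whether the HTTP 200 from Slack carried an application-level error (for example, channel_not_found or not_in_channel) rather than a transport failure.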
When I search different date ranges in my Splunk dashboard, it shows the same results. For example, I select 1/1/2024 to 1/10/2024 and then 1/3/2024 to 1/4/2024 and get the same output. I have added earliest=-7d@d latest=+1d to the query; when these are removed, the values do not match. Please help me out with this.
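If the intent is for the dashboard's time picker to drive the panel, the hardcoded earliest/latest should come out of the SPL, since inline time modifiers override the picker. A Simple XML style sketch is below; the token name timepicker and the index/sourcetype are assumptions, so substitute your own.

<search>
  <query>index=my_index sourcetype=my_sourcetype</query>
  <earliest>$timepicker.earliest$</earliest>
  <latest>$timepicker.latest$</latest>
</search>

Binding the panel's earliest/latest to the input's tokens (or simply omitting them so the global time range applies) makes the selected date range take effect instead of the fixed -7d@d window.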
Hi All, I need to collect system metrics and monitor local files on Solaris servers. I'm considering installing the Universal Forwarder (UF) and using the Splunk Add-on for Unix and Linux to collect system metrics. Has anyone implemented this before? Any insights or thoughts on this approach?
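This is a common pattern. A minimal sketch of a local inputs.conf for the Splunk Add-on for Unix and Linux (Splunk_TA_nix) on the forwarder is below, enabling two of its scripted inputs plus a file monitor; the index name (os), intervals, and the log path are assumptions, and Solaris support for each individual script should be confirmed in the add-on's documentation.

# Splunk_TA_nix/local/inputs.conf on the universal forwarder
[script://./bin/cpu.sh]
interval = 30
sourcetype = cpu
index = os
disabled = 0

[script://./bin/vmstat.sh]
interval = 60
sourcetype = vmstat
index = os
disabled = 0

# Plain local file monitoring (path is an example)
[monitor:///var/log/myapp/app.log]
index = os
sourcetype = myapp:log
disabled = 0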
Hi, I have HTML tags like <p>, <br>, <a href="www.google/com" target="_blank"> and so on in my raw data, and I want to capture everything except these HTML tags. Please help me with a regex.

Sample raw data:

A flaw in the way Internet Explorer handles a specific HTTP request could allow arbitrary code to execute in the context of the logged-on user, should the <UL> <LI> The first vulnerability occurs because Internet Explorer does not correctly determine an obr in a pop-up window.</LI> <LI> The t type that is returned from a Web server during XML data binding.</LI> </UL> <P> &quot;Location: URL:ms-its:C:WINDOWSHelpiexplore.::/itsrt.htm&quot; <P> :<P><A HREF='http://blogs.msdn.com/embres/archive/20/81.aspx' TARGET='_blank'>October Security Updates are (finally) available!</A><BR>
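If the aim is simply to drop the markup at search time, a sed-style replace is usually enough; a sketch is below, assuming the field to clean is _raw and that nothing inside the angle brackets needs to be kept.

your_base_search
| rex mode=sed field=_raw "s/<[^>]+>//g"

The pattern <[^>]+> matches any tag (opening, closing, or with attributes) and removes it. HTML entities such as &quot; would need additional sed expressions or a replace() eval if they should also be decoded.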
Hello, I'm installing the .NET Agent in a Windows 10 VM. When I run the \dotNetAgentSetup64-23.12.0.10912\Installer.bat file I get the following error, and I can't find the missing key. I execute the install batch with the "Run as Administrator" option. Any ideas? Help? Thank you. Here are the install logs:

Action ended 11:15:47: SetCoordinatorServiceUserNTAuthoritySystem. Return value 1.
Action start 11:15:47: AppSearch.
MSI (s) (3C:6C) [11:15:47:838]: Note: 1: 2262 2: Signature 3: -2147287038
MSI (s) (3C:6C) [11:15:47:839]: Note: 1: 2262 2: Signature 3: -2147287038
MSI (s) (3C:6C) [11:15:47:839]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\AppDynamics\dotNet Agent 3: 2
MSI (s) (3C:6C) [11:15:47:839]: Note: 1: 2262 2: Signature 3: -2147287038
MSI (s) (3C:6C) [11:15:47:839]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\AppDynamics\dotNet Agent 3: 2
MSI (s) (3C:6C) [11:15:47:839]: Note: 1: 2262 2: Signature 3: -2147287038
MSI (s) (3C:6C) [11:15:47:840]: PROPERTY CHANGE: Adding WIXNETFX4RELEASEINSTALLED property. Its value is '#528372'.
MSI (s) (3C:6C) [11:15:47:840]: Doing action: SetWIX_IS_NETFRAMEWORK_462_OR_LATER_INSTALLED
Action ended 11:15:47: AppSearch. Return value 1.
MSI (s) (3C:6C) [11:15:47:840]: PROPERTY CHANGE: Adding WIX_IS_NETFRAMEWORK_462_OR_LATER_INSTALLED property. Its value is '1'.
Action start 11:15:47: SetWIX_IS_NETFRAMEWORK_462_OR_LATER_INSTALLED.
MSI (s) (3C:6C) [11:15:47:841]: Doing action: LaunchConditions
Action ended 11:15:47: SetWIX_IS_NETFRAMEWORK_462_OR_LATER_INSTALLED. Return value 1.
Action start 11:15:47: LaunchConditions.
MSI (s) (3C:6C) [11:15:47:842]: Product: AppDynamics .NET Agent -- AppDynamics .NET Agent installer requires administrative privileges.
Action ended 11:15:47: LaunchConditions. Return value 3.
Action ended 11:15:47: INSTALL. Return value 3.
MSI (s) (3C:6C) [11:15:47:844]: Note: 1: 1708
MSI (s) (3C:6C) [11:15:47:844]: Product: AppDynamics .NET Agent -- Installation failed.
MSI (s) (3C:6C) [11:15:47:845]: Windows Installer installed the product. Product Name: AppDynamics .NET Agent. Product Version: 23.12.0. Product Language: 1033. Manufacturer: AppDynamics. Installation success or error status: 1603.
MSI (s) (3C:6C) [11:15:47:848]: Deferring clean up of packages/files, if any exist
MSI (s) (3C:6C) [11:15:47:848]: MainEngineThread is returning 1603
MSI (s) (3C:A8) [11:15:47:848]: No System Restore sequence number for this installation.
=== Logging stopped: 1/24/2024 11:15:47 ===
MSI (s) (3C:A8) [11:15:47:849]: User policy value 'DisableRollback' is 0
MSI (s) (3C:A8) [11:15:47:849]: Machine policy value 'DisableRollback' is 0
MSI (s) (3C:A8) [11:15:47:849]: Incrementing counter to disable shutdown. Counter after increment: 0
MSI (s) (3C:A8) [11:15:47:849]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (3C:A8) [11:15:47:850]: Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
MSI (s) (3C:A8) [11:15:47:850]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (D4:24) [11:15:47:851]: Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MSI (c) (D4:24) [11:15:47:852]: MainEngineThread is returning 1603
=== Verbose logging stopped: 1/24/2024 11:15:47 ===
Is Cisco FMC compatible with Splunk Enterprise 8.2.7? Do you have a compatibility matrix?
Hi, I have a situation where I need to install the management tier on 2 sites (DC and DR) on VPC servers. The problem is, I don't have permission to capture the vMotion from one site to another. The reason I need to install the management tier on both sites is that I want to upgrade the OS, currently RHEL 7.9 (EOS soon), to RHEL 8.9, and the compan And, if I just install it on both sites, how do I sync the data from one management tier to the other? Do I need to copy the data every day, or just once? Need help, would be appreciated.
In December, v23.12 enhancements included Cisco Cloud Observability, SaaS Controller and Agent enhancements, and On-premises Controller upgrades.

Product name change announcements
As of November 27, 2023, the Cisco Full-Stack Observability Platform is now the Cisco Observability Platform, and Cloud Native Application Observability is now Cisco Cloud Observability, powered by the Cisco Observability Platform. These name changes better align our products with the Cisco portfolio and with our business strategy.

WATCH THIS PAGE FOR UPDATES — Click the Options menu above right, then Subscribe. Want to receive all monthly Product Updates? Click here, then subscribe to the series.

In this article…
What new product enhancements were released in December 2023? Cisco Observability Platform | Cisco Cloud Observability | AppDynamics APM Platform: Agents, SaaS Controller | On-premises Controller
Where can I find ongoing information about product enhancements? Advisories and Notices | Updates to Documentation Information Architecture
Essentials: Download components | Get started upgrading AppDynamics components for any release | Product Announcements, Alerts, and Hot Fixes | Open source extensions | License entitlements and restrictions

What new product enhancements were released in December 2023?
TIP | This article provides product enhancement highlights, organized by product. Each product section below includes a link to its corresponding Release Notes page. When available, links to the specific release version are also included.

Cisco Observability Platform enhancement highlights
Formerly Cisco Full-Stack Observability Platform
NOTE | In addition to the following highlights, find the complete v23.12 Cisco Cloud Observability Release Notes in the documentation.

Log Collector
The Log Collector now supports TLS protocol version 1.3 by default. To collect logs over a different TLS protocol, see:
From AWS services: TLSMinVersion in Create a CloudFormation Stack.
From Amazon ECS backed by Amazon EC2: APPD_LOGCOL_SSL_* in Amazon Elastic Container Service on EC2 Application Logs.
From Amazon EC2 instances (bare metal): APPD_LOGCOL_SSL_* in Amazon Elastic Compute Cloud Application Logs.
See the changes related to advanced format deployments on Kubernetes clusters (filebeatYaml, collectors-values.yaml) in Log Collector Settings - Advanced YAML Layout.
NOTE | The Log Collector running on Windows-backed Kubernetes nodes no longer supports x386 architecture.

v23.12 Cisco Observability Platform Release Notes
For complete release details, see: Enhancements | Known Issues and Limitations | Resolved Issues
Back to TOC | To Essentials

Cisco Cloud Observability enhancement highlights
Formerly Cloud Native Application Observability prior to November 27, 2023
NOTE | See the Cisco Cloud Observability v23.12 Release Notes page for a complete list of enhancements in December 2023.

Alerting
All system components and collectors, along with all communication within the platform, now use TLS protocol version 1.3 by default. (GA v23.12.14)

Anomaly Detection
On the entity-centric page, the anomaly status is displayed as unknown when data is unavailable for anomaly evaluation. (GA v23.12.14)

Cloud Service Expansions
AWS: Cisco Cloud Observability now supports monitoring the following Amazon Web Services (AWS) services: AWS Certificate Manager, AWS Config, AWS Direct Connect, Amazon DynamoDB, Amazon ElastiCache, Amazon MQ, and Amazon Route 53.
GCP: Cisco Cloud Observability now supports monitoring the following Google Cloud Platform™ (GCP) services: GCP API Gateway, GCP Cloud Spanner, GCP Data Flow, GCP Filestore, and GCP Virtual Private Cloud.

License Consumption
Visualize and meter MELT data and token usage: visually evaluate the monthly capacity units of MELT data ingested by the FSO Platform against the purchased SKU amount to understand usage limits.

Module Enhancements
Cisco Secure Application: the release introduces business transaction context to business risk factors, a new Detail View on the Vulnerabilities page for tracking entities affected by vulnerabilities, and a new Image page for monitoring vulnerabilities impacting each image. In addition to the Release Notes, see Using Business Metrics on Cisco Cloud Observability Platform in the Knowledge Base.

Smart Agent
The Helm chart deployment of the Smart Agent now automatically adds OPTIMIZER_ID and OPTIMIZED_WORKLOAD_NAME labels to help in tracking and troubleshooting Application Resource Optimizer issues by identifying objects created by ARO.

Application Resource Optimizer (ARO)
New YAML resource configurations, including auto-generated YAML snippets with optimized resource configurations, can be copied and applied to your workload environment.
The Helm chart deploying the Smart Agent will automatically apply labels to facilitate tracking and troubleshooting issues with the ARO.
ARO requirement change: the observed minimum of Kubernetes workload pods over the previous seven days of metrics history increased from three to five.

v23.12 Cisco Cloud Observability Release Notes
For complete release details, see the AppDynamics Cloud 23.12 Release Notes.
Back to TOC | To Essentials

AppDynamics APM Platform Agent enhancement highlights
NOTE | See the full 23.12 Release Notes for a complete, ongoing, and sortable list of Agent enhancements.
Analytics Agent: configures the TLS version for agent-to-Controller and Event Service communications, and upgrades several third-party components. (GA 23.12, December 14, 2023)
iOS Agent: offers compatibility support with Alamofire and includes minor bug fixes. (v23.12.0, GA December 6, 2023)
Java Agent: adds the option to associate error detection methods, log messages, and HTTP codes with business transactions, provides support for JDK 21, and fixes some bugs. (v23.12.0, GA December 20, 2023)
Machine Agent: supports excluding Docker container networks, fixes some bugs, and upgrades logback-classic. (v23.12.0, GA December 20, 2023)
Xamarin Agent: includes stack trace support for hybrid applications built using Xamarin, segregating the hybrid stack trace as native and Xamarin. (v23.12.0, GA December 21, 2023)
v23.12 AppDynamics SaaS Agent Release Notes: Agent Release Notes
Back to TOC | To Essentials

SaaS Controller enhancement highlights
NOTE | See the AppDynamics v23.10 SaaS Controller Release Notes page for the complete October 2023 enhancements. No SaaS Controller enhancements were released in November.
Analytics: Infrastructure-based Licensing (IBL) usage details are now shown by default on the Configuration page. To hide IBL usage details, set the CONFIG_EXCLUDE_ANALYTICS_LICENSE_USAGE flag to false. See Collect Transaction Analytics Data. (GA v22.12, Released December 21, 2022)
Alert and Respond: when you configure action suppression for servers, you can now select object scope by servers and server subgroups. (GA v22.12, Released December 21, 2022)
Back to TOC | To Essentials

On-premises enhancement highlights
NOTE | See the On-premises Platform Release Notes page for the complete December 2023 enhancements.
Agent Management: the release introduced a Smart Agent, which allows bulk operations and provides a command line utility, the Smart Agent CLI, for buildtime workflows. (v23.11.0, GA December 5, 2023) Agent Management can now be administered in the Enterprise Console, and all installed agents can be viewed on the Agent Management tab in the Controller UI. (v23.11.0, GA December 5, 2023) See Administer the Fleet Management Service and Agent Management User Interface.
Browser Real User Monitoring: the Speed Index metric can now be enabled when configuring the JavaScript Agent. This metric evaluates page load performance. (v23.11.0, GA December 5, 2023)
Controller: a number of Controller components were upgraded. In addition, the Linux Kernel was removed, and Jetty replaced the GlassFish server. NOTE: configuration changes are required due to this switch; see the Release Notes and Port Settings. (v23.11.1, GA December 13, 2023) The TLS version was upgraded to 1.3; see End of Support for TLS 1.0 and 1.1. (v23.11.0, GA December 5, 2023)
Dash Studio: you can now enable ThousandEyes access on the administration page, to allow users to visualize ThousandEyes data along with application data on dashboards created in Dash Studio. (v23.11.0, GA December 5, 2023)
End User Monitoring: the Controller UI now displays EUM data in milliseconds. This feature can be enabled via a setting. (v23.11.0, GA December 5, 2023)
Enterprise Console: administering Fleet Management (Agent Management) is now supported. (v23.11.0, GA December 5, 2023)
Events Service: Events Service data can be migrated from 4.5.x to 23.x using a single node. This release also upgrades Elasticsearch from 2.x to 8.10.x. (v23.11.0, GA December 5, 2023)
Back to TOC | To Essentials

Where can I find ongoing information about product enhancements?
The following links show the most up-to-date product information.
Documentation by product | Latest information
Cisco Observability Platform | Latest Release Notes
Cisco Cloud Observability | Latest Release Notes
AppDynamics SaaS | Latest Release Notes, Past Release Notes
AppDynamics On-Premises | Latest Release Notes, Resolved and Known Issues
SAP Monitoring using AppDynamics | Latest Release Notes (Resolved Issues and Improvements are listed in the month's Release Notes when there are items to report)
Accounts and Licensing | Release Notes, Known Issues and Limitations
Back to TOC | To Essentials

Advisories and Notifications
Changes on the Documentation Portal: in December, there were a number of enhancements to Cisco Cloud Observability documentation. The team has updated its information architecture for an easier, more organized user experience. You'll also find new content. Check the Release Notes for a complete summary.

Essentials
ADVISORY | Customers are advised to check backward compatibility in the Agent and Controller Tenant Compatibility documentation.
Download Essential Components (Agents, Enterprise Console, Controller (on-prem), Events Service, EUM Components)
Download Additional Components (SDKs, Plugins, etc.)
How do I get started upgrading my AppDynamics components for any release?
Product Announcements, Alerts, and Hot Fixes
Open Source Extensions
License Entitlements and Restrictions
CAN'T FIND WHAT YOU'RE LOOKING FOR? NEED ASSISTANCE? Connect in the Forums
Hi all, today I successfully updated Splunk Enterprise to 9.1.3 (from 9.1.2) on a Windows 10 22H2 Pro machine with the newest Windows updates (January 2024). Then I wanted to update the Universal Forwarder on this machine too. Currently 9.1.2 is running and everything is working fine, but updating to 9.1.3 doesn't work. Near the end of the installation process, the installation is rolled back to 9.1.2. Before the rollback, some additional windows pop up for a very short time, and then more than one message window says that the installation failed. You then have to click OK in every message window for the rollback to finish. I don't see why the update is failing. Does anyone have the same issue? And how did you solve it? Thank you.
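A rollback near the end of an MSI run is usually easiest to diagnose from the verbose installer log. A sketch of running the 9.1.3 upgrade from an elevated command prompt with logging enabled is below; the MSI file name is an assumption (use the exact name of the downloaded package), while AGREETOLICENSE is the documented Splunk universal forwarder MSI property.

msiexec /i splunkforwarder-9.1.3-x64-release.msi AGREETOLICENSE=Yes /quiet /L*v uf_913_upgrade.log

Searching the resulting log for "Return value 3" typically points at the custom action or launch condition that triggered the rollback.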
Hi All, just wanted to get your feedback on the issue below that we have right now with our new Splunk Cloud instance. Unlike in the Enterprise version, where you can assign an index to an app, we don't see the same option available in Splunk Cloud. Does anyone know how apps determine which index to search without defining it? When we create new indexes, the app column shows 000-self-service and not the app we want. Thank you.