Here are the settings you can enable in log.conf to get more detailed logging:

$splunk_install_dir$/etc/log.conf
category.X509=DEBUG
category.UiAuth=DEBUG

Post the error message here or call support.
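Once enabled, you can check the debug output in Splunk's internal logs with something like this (a sketch; it assumes the categories above show up as the component field in splunkd.log):

index=_internal source=*splunkd.log* (component=X509 OR component=UiAuth)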
Hi, almost a year late to this thread, but I'm experiencing the same issue. Has there been any resolution for you? Thanks
Simply put: don't. Do not treat structured data as text and use rex to extract fields; use rex only to isolate the structured data. In this case, the structure is JSON.

Not only that: your data contains very different JSON nodes that all have "LastModifiedBy", "RecordId", etc. Your result table must distinguish between the JSON nodes "servicechannel", "omnisupervisorconfig", and so on. Does this make sense?

Further, each of these nodes uses an array. You may need to distinguish between the elements in the arrays. Because your illustration does not include multiple elements in any array, I cannot speculate what your developers' intention (semantics) is in using an array for three distinct nodes. It is possible that they committed the ultimate JSON data sin by assuming an implied semantic meaning in the arrays.

In light of this, I will not introduce the more intricate parts of mixed kv-array data processing, and will just assume that all your data comes with a single element in each of the three arrays. (When your developers give you this type of data, it is even more dangerous to use rex to extract data elements, because no regex is compatible with the inherent flexibility afforded by the data structure.) Here is my suggestion:

| rex "^[^{]+(?<jsondata>.+)"
| spath input=jsondata

Your sample data gives 3 sets of the 4 columns you want, for a total of 12 columns. That's too wide for display, so I will show a transposed table:

field_name                             field_value
livechatbutton{}.ComponentName         LiveChatButton
livechatbutton{}.LastmodifiedBy        XYZ
livechatbutton{}.ModifiedDate          2024-04-16T16:31:35.000Z
livechatbutton{}.RecordId              5638X000000Xw55QAC
omnisupervisorconfig{}.ComponentName   OmniSupervisorConfig
omnisupervisorconfig{}.LastmodifiedBy  XYZ
omnisupervisorconfig{}.ModifiedDate    2024-04-16T16:17:37.000Z
omnisupervisorconfig{}.RecordId        0Q27X000000KyrESAS
servicechannel{}.ComponentName         ServiceChannel
servicechannel{}.LastmodifiedBy        XYZ
servicechannel{}.ModifiedDate          2024-04-15T17:20:09.000Z
servicechannel{}.RecordId              0N98X001200Gvv3SAC

You must then decide how you want to present such data. I do notice that each JSON node key and each ComponentName have an apparent semantic relationship. If the key name servicechannel and the ComponentName ServiceChannel, etc., are indeed semantically related, your developers have also committed a different type of data structure sin: duplicating semantic notation (without declaration). The data could easily be presented without losing resolution, but in a much simpler and more comprehensible form:

[
  {
    "LastmodifiedBy": "XYZ",
    "ModifiedDate": "2024-04-15T17:20:09.000Z",
    "ComponentName": "ServiceChannel",
    "RecordId": "0N98X001200Gvv3SAC"
  },
  {
    "LastmodifiedBy": "XYZ",
    "ModifiedDate": "2024-04-16T16:17:37.000Z",
    "ComponentName": "OmniSupervisorConfig",
    "RecordId": "0Q27X000000KyrESAS"
  },
  {
    "LastmodifiedBy": "XYZ",
    "ModifiedDate": "2024-04-16T16:31:35.000Z",
    "ComponentName": "LiveChatButton",
    "RecordId": "5638X000000Xw55QAC"
  }
]

If you have any influence on the developers, discuss the data structure with them, ask them to clarify the intention/semantics of the structure, and help improve it. This is better for everybody in the long run.
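If you want to reproduce that transposed view yourself, something like this should work (a sketch; it assumes the event contains no other extracted fields of interest):

| rex "^[^{]+(?<jsondata>.+)"
| spath input=jsondata
| fields - _time _raw jsondata
| transpose column_name=field_name
| rename "row 1" as field_value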
If you have no influence, one possible way to deal with this mess is to ignore all the key names and treat them as a single name, i.e., by assuming the data to be

[
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ]},
  {"array": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ]}
]

To do the equivalent in SPL (and also handle potential multiple array elements in the absence of semantic knowledge):

| rex "^[^{]+(?<jsondata>.+)"
| eval jsonnode = json_keys(jsondata)
| foreach jsonnode mode=json_array
    [eval newjson = mvappend(newjson, json_object("array", json_extract(jsondata, <<ITEM>>)))]
| mvexpand newjson
| spath input=newjson path=array{}
| mvexpand array{} ``` potential multiple elements ```
| spath input=array{}

Your sample data will give

ComponentName         LastmodifiedBy  ModifiedDate              RecordId            newjson
ServiceChannel        XYZ             2024-04-15T17:20:09.000Z  0N98X001200Gvv3SAC  {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-15T17:20:09.000Z","ComponentName":"ServiceChannel","RecordId":"0N98X001200Gvv3SAC"}]}
OmniSupervisorConfig  XYZ             2024-04-16T16:17:37.000Z  0Q27X000000KyrESAS  {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:17:37.000Z","ComponentName":"OmniSupervisorConfig","RecordId":"0Q27X000000KyrESAS"}]}
LiveChatButton        XYZ             2024-04-16T16:31:35.000Z  5638X000000Xw55QAC  {"array":[{"LastmodifiedBy":"XYZ","ModifiedDate":"2024-04-16T16:31:35.000Z","ComponentName":"LiveChatButton","RecordId":"5638X000000Xw55QAC"}]}

Here is an emulation for you to play with and compare with real data:

| makeresults
| eval _raw="message: Updated Components { \"servicechannel\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-15T17:20:09.000Z\", \"ComponentName\": \"ServiceChannel\", \"RecordId\": \"0N98X001200Gvv3SAC\" } ], \"omnisupervisorconfig\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:17:37.000Z\", \"ComponentName\": \"OmniSupervisorConfig\", \"RecordId\": \"0Q27X000000KyrESAS\" } ], \"livechatbutton\": [ { \"LastmodifiedBy\": \"XYZ\", \"ModifiedDate\": \"2024-04-16T16:31:35.000Z\", \"ComponentName\": \"LiveChatButton\", \"RecordId\": \"5638X000000Xw55QAC\" } ] }"
Could you try explicitly disabling workload management? Tscroggins has the instructions to do that above.
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND"... See more...
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]} I want to extract all fields in the form of table from  "message" which is holding JSON array . And I want a total row for each column where total running total will display for each numeric column based on TARGET_SYSTEM . 
All I can say is: use now() instead of _time in the evaluation of whether to trigger or not, in the solution provided earlier. Do you have any test data showing your attribute values, to help figure out why it's triggering falsely?

| eval current_time=now()
| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
| eval is_maintenance_window=if(current_time >= excluded_start_time AND current_time < excluded_end_time, 1, 0)
@sjringo - We don't have a specific date, as it keeps changing, so I created two variables where I specify the date and time.

| eval excluded_start_time=strptime("2024-04-14 21:00:00", "%Y-%m-%d %H:%M:%S")
| eval excluded_end_time=strptime("2024-04-15 04:00:00", "%Y-%m-%d %H:%M:%S")
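Since the dates keep changing but the times of day appear fixed, one way to avoid hardcoding dates is to compare only the time of day (a sketch assuming a nightly 21:00-04:00 window; hhmm is an illustrative field name, not from the original search):

| eval hhmm = tonumber(strftime(now(), "%H%M"))
| eval is_maintenance_window = if(hhmm >= 2100 OR hhmm < 400, 1, 0)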
Assuming Invetory is spelled (in)correctly, you could try this - the rex at the end is required because this date has an embedded space and it is the last field in the message. If the fields were re-ordered, or an extra field (without an embedded space) was in the message, then the rex would not be required.

The problem is less the embedded space and more the lack of embedded quotation marks/a proper field separator. It is semantically more pleasing to fix the structure with rex than to use rex to extract one data snippet while the rest are extracted with the extract command. (But if you have any influence on the developers, beg them to add quotation marks - more on this later.)

| rex field=message mode=sed "s/Date=/&\"/ s/$/\"/"
| rename message as _raw
| extract

It would give you the same result, like

CPWRemaining  CPWTotal  EASRemaining  EASTotal  InvetoryDate          SEQRemaining  SEQTotal  VRSRemaining  VRSTotal  id  severity
5612          749860    15            1062804   4/16/2024 7:34:25 PM  32746         1026137   0             238       0   Information

About feedback to developers: @ITWhisperer gave one option by taking advantage of a side effect (gem feature) of Splunk's extract command, adding a comma at the end of every key-value pair. The developers do not have to swap the field order; simply adding a literal comma after each value is enough, like this:

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM,"}

A more robust fix (one that does not rely on Splunk's "generosity") is to properly quote the value. Any language can extract that without the programmer's attention.

{"id":"0","severity":"Information","message":"CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=\"4/16/2024 7:34:25 PM\""}

The logic should be simple enough: numeric data, no quotes; string data, quotes.
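Here is a run-anywhere emulation of the sed fix above, reusing the message from the sample (only the makeresults and eval lines are new):

| makeresults
| eval message="CPWTotal=749860, SEQTotal=1026137, EASTotal=1062804, VRSTotal=238, CPWRemaining=5612, SEQRemaining=32746, EASRemaining=15, VRSRemaining=0, InvetoryDate=4/16/2024 7:34:25 PM"
| rex field=message mode=sed "s/Date=/&\"/ s/$/\"/"
| rename message as _raw
| extract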
message: Updated Components
{
  "servicechannel": [
    { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" }
  ],
  "omnisupervisorconfig": [
    { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" }
  ],
  "livechatbutton": [
    { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" }
  ]
}

LastModifiedBy ModifiedBy Component RecordId
thanks
We’ll explore how to deploy a robust application monitoring solution using AWS ECS (Elastic Container Service) and Cisco AppDynamics. This integration allows businesses to leverage the scalability of AWS and the comprehensive monitoring capabilities of AppDynamics, ensuring applications perform optimally in a cloud environment.

What is AWS ECS?

AWS ECS (Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, making it easier to schedule and run Docker containers on AWS.

What is AppDynamics?

AppDynamics is a powerful application performance management (APM) and IT operations analytics (ITOA) tool that helps monitor, analyze, and optimize complex software environments. It provides real-time visibility and insights into the performance of applications, enabling organizations to quickly detect, diagnose, and resolve issues to improve user experience and operational efficiency.

Application Image

The application we are deploying is packaged in a Docker image, abhimanyubajaj98/java-tomcat-wit-otel-app-buildx, which contains a Java-based web application running on Apache Tomcat. This image is enhanced with the OpenTelemetry Java agent to facilitate detailed performance monitoring and telemetry.

Configuration Overview

Our setup involves several AWS resources managed through Terraform, a popular infrastructure as code (IaC) tool, ensuring our infrastructure is reproducible and maintainable. Below is a high-level overview of our configuration:

ECS Cluster
- AWS ECS Cluster: Acts as the hosting environment for our Docker containers.
- Task Definitions: Specify the Docker containers to be deployed, their CPU and memory allocations, and essential configurations such as environment variables and network mode.

IAM Roles and Policies
- IAM Roles and Policies: Ensure proper permissions for ECS tasks to interact with other AWS services, such as retrieving Docker images from ECR and sending logs to CloudWatch.

Container Setup
- Machine Agent Container: Hosts the AppDynamics machine agent, configured to monitor the underlying EC2 instances and collect machine metrics.
- Java Application Container: Runs the main Java application with OpenTelemetry instrumentation, configured to send telemetry data to AppDynamics.
- OpenTelemetry Collector Container: Aggregates and forwards telemetry data to the AppDynamics controller.

Security and Network
- Network Mode: Uses host networking to ensure that containers can communicate efficiently and leverage the networking capabilities of the host EC2 instance.
- Security Groups: Configured to allow appropriate inbound and outbound traffic necessary for operation and monitoring.

Detailed Steps and Configuration

1. ECS Cluster Creation: Define an ECS cluster using Terraform to serve as the runtime environment for the containers.
2. Task Definitions: Specify the containers that need to run as part of the ECS service, with detailed settings for image versions, CPU and memory requirements, environment variables for configuration, and volume mounts for persistent or shared data.
3. IAM Configuration: Set up IAM roles and attach policies that grant the necessary permissions for ECS tasks, including logging to CloudWatch and accessing ECR for image retrieval.
4. Logging and Monitoring: Configure CloudWatch for logging and set up AppDynamics for advanced monitoring, linking it with OpenTelemetry for comprehensive observability.
5. Deployment and Management: Use Terraform to manage deployment and updates to the infrastructure, ensuring consistency and reproducibility.

provider "aws" {
  region = "us-east-1" # Change to your preferred AWS region
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "ecs_cluster_with_agents"

  tags = {
    owner = "Abhi Bajaj"
  }
}

resource "aws_ecs_task_definition" "container_tasks" {
  family       = "container_tasks"
  network_mode = "host"

  container_definitions = jsonencode([
    {
      "name" : "machine-agent-container",
      "user" : "0",
      "privileged" : true,
      "image" : "docker.io/appdynamics/machine-agent:root-latest",
      "cpu" : 256,
      "memory" : 512,
      "essential" : true,
      "environment" : [
        { "name" : "APPDYNAMICS_CONTROLLER_HOST_NAME", "value" : "xxx.saas.appdynamics.com" },
        { "name" : "APPDYNAMICS_CONTROLLER_PORT", "value" : "443" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_UNIQUE_HOST_ID", "value" : "machine_agent_ecs" },
        { "name" : "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_SIM_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_DOCKER_ENABLED", "value" : "true" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/proc", "sourceVolume" : "proc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false },
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : false },
        { "containerPath" : "/var/run/docker.sock", "sourceVolume" : "docker_sock", "readOnly" : false }
        // Add more mount points as needed
      ]
    },
    {
      "name" : "ecs_with_otel_java_app",
      "image" : "abhimanyubajaj98/java-tomcat-wit-otel-app-buildx",
      "cpu" : 512,
      "memory" : 1024,
      "privileged" : true,
      "essential" : true,
      "environment" : [
        { "name" : "JAVA_TOOL_OPTIONS", "value" : "-Dotel.resource.attributes=service.name=ECS_otel_abhi,service.namespace=ECS_otel_abhi" }
      ]
    },
    {
      "name" : "OpenTelemetryCollector",
      "image" : "appdynamics/appdynamics-cloud-otel-collector",
      "privileged" : true,
      "memory" : 1024,
      "cpu" : 512,
      "portMappings" : [
        { "containerPort" : 13133, "hostPort" : 13133 },
        { "containerPort" : 4317, "hostPort" : 4317 },
        { "containerPort" : 4318, "hostPort" : 4318 }
      ],
      "environment" : [
        { "name" : "APPD_OTELCOL_CLIENT_ID", "value" : "xxx" },
        { "name" : "APPD_OTELCOL_CLIENT_SECRET", "value" : "xxxx" },
        { "name" : "APPD_OTELCOL_TOKEN_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/auth/4f8da76d-01a8-4df6-85cd-3a111fba946e/default/oauth2/token" },
        { "name" : "APPD_OTELCOL_ENDPOINT_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/data" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false }
      ]
    }
  ])

  volume {
    name      = "proc"
    host_path = "/proc"
  }

  volume {
    name      = "sys"
    host_path = "/sys"
  }

  volume {
    name      = "etc"
    host_path = "/etc"
  }

  volume {
    name      = "docker_sock"
    host_path = "/var/run/docker.sock"
  }
}

resource "aws_ecs_service" "container_services" {
  name            = "container-services"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.container_tasks.arn
  desired_count   = 1
}
##############################################################################################################

resource "aws_launch_template" "ecs_launch_template" {
  name          = "alma"
  image_id      = "ami-xxxxx" # Amazon ECS-Optimized Amazon Linux 2 (AL2) x86_64 AMI
  instance_type = "t2.medium"

  user_data = base64encode(<<EOF
#!/bin/bash
sudo su
echo ECS_CLUSTER=ecs_cluster_with_agents >> /etc/ecs/ecs.config
# Note: this .deb package targets Debian-based distros; Amazon Linux 2 AMIs ship with the SSM agent preinstalled
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
dpkg -i amazon-ssm-agent.deb
systemctl enable amazon-ssm-agent
EOF
  )

  vpc_security_group_ids = ["sg-xxxx"]

  iam_instance_profile {
    name = aws_iam_instance_profile.dev-resources-iam-profile.name
  }

  tag_specifications {
    resource_type = "instance"

    tags = {
      Name  = "ECS_with_Agents"
      Owner = "abhibaj@cisco.com"
    }
  }
}

resource "aws_autoscaling_group" "auto_scaling_group" {
  name                      = "ecs_asg"
  availability_zones        = ["us-east-1a", "us-east-1b"]
  desired_capacity          = 1
  min_size                  = 1
  max_size                  = 10
  health_check_grace_period = 300
  health_check_type         = "EC2"

  launch_template {
    id = aws_launch_template.ecs_launch_template.id
  }
}

resource "aws_ecs_capacity_provider" "provider" {
  name = "alma"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.auto_scaling_group.arn

    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 100
      minimum_scaling_step_size = 1
      maximum_scaling_step_size = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "providers" {
  cluster_name       = aws_ecs_cluster.ecs_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.provider.name]
}

#############################################

resource "aws_iam_instance_profile" "dev-resources-iam-profile" {
  name = "ec2_profile_for_services_otel"
  role = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role" "dev-resources-iam-role" {
  name        = "role_for_services_ec2_otel"
  description = "The role for the developer resources on EC2"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF

  tags = {
    Owner = "abhibaj"
  }
}

resource "aws_iam_role_policy_attachment" "dev-resources-ssm-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecr_read_only_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_full_access_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonECS_FullAccess"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_ecs_task_execution_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  role       = aws_iam_role.dev-resources-iam-role.name
}

To deploy: edit the environment variables substituted with "xxx". Once done, run:

terraform init
terraform apply --auto-approve

How it will look in your UI:
Close. The count will depend on how many of those hostname-customer_name pairs you have. If you have just one (supposedly only from the lookup), it will be 1. If you have two (one from the lookup and one from the data), it will be 2. At least it should work this way. Of course, I don't know either your events or your lookup contents, so I'm only deducing the data format from your searches.

See this short run-anywhere example:

| makeresults
| eval hosts=split("a,b,c,d:a,b,c",":")
| mvexpand hosts
| eval customers="aaa"
| eval hosts=split(hosts,",")

This prepares some mockup data. You have one row supposedly representing your summarized values from events and one representing the summarized lookup. After you add

| stats count by customers hosts

you get three rows with a count of 2 and one row with a count of 1.
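For convenience, here are the two snippets above combined into a single run-anywhere search:

| makeresults
| eval hosts=split("a,b,c,d:a,b,c",":")
| mvexpand hosts
| eval customers="aaa"
| eval hosts=split(hosts,",")
| stats count by customers hosts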
I know vCenter has an API to get information about the local file systems of a guest VM running on an ESXi host (as long as VMware Tools is installed on the VM):

- capacity (in bytes)
- freeSpace (in bytes)
- diskPath (e.g. C:\ for Windows or / for *nix)
- fileSystemType (e.g. ext3, NTFS, etc.)

Ref #1: https://vdc-download.vmware.com/vmwb-repository/dcr-public/184bb3ba-6fa8-4574-a767-d0c96e2a38f4/ba9422ef-405c-47dd-8553-e11b619185b2/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestInfo.DiskInfo.html
Ref #2: https://developer.vmware.com/apis/vsphere-automation/latest/vcenter/api/vcenter/vm/vm/guest/local-filesystem/get/

I believe RVTools and some monitoring tools use this specific API to grab info about the local file systems of guest VMs.

So far I have only been able to find metrics for datastore usage. This is fine, but an equally important metric is the local disk utilization of the guest VM. Which metric is responsible for getting this info in the VMware or VMware Metrics add-ons?

https://docs.splunk.com/Documentation/AddOns/released/VMW/Sourcetypes
https://docs.splunk.com/Documentation/AddOns/released/VMWmetrics/Sourcetypes

If none of the listed ones, is there a way to customize the VMW or VMWmetrics add-ons to grab this crucial information about VMs from vCenter? Or perhaps I should look elsewhere - I mean a different app/add-on?
Hello @martynjlewis , Go to the Support Portal and submit a New Case. 
This is perfect. Thank you very much!
In the props.conf example, when it says "REPORT-file_name = url_domain", what should I replace file_name with? I'll stay tuned, thank you very much.
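For context, a minimal sketch of how a REPORT- extraction pairs with transforms.conf (your_sourcetype and the regex below are illustrative placeholders): the token after "REPORT-" (file_name in the example) is an arbitrary class name you choose, while the value (url_domain) must match a stanza name in transforms.conf.

# props.conf
[your_sourcetype]
REPORT-file_name = url_domain

# transforms.conf
[url_domain]
REGEX = https?://(?<url_domain>[^/\s]+)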
Greetings!

I'm unable to start appdynamics-machine-agent following the same install instructions that work on RHEL 7. Machine Agent Bundle - 64-bit linux (rpm) 24.3.0 is installed. I updated the config file to match the same controller/settings/etc. as the RHEL 7 servers. Upon starting the service I see the status is failed, and the logs say: Could not initialize class com.sun.jna.Native

/opt/appdynamics/machine-agent/logs/startup.out OUTPUT

2024-04-16 11:15:53.430 Using Agent Version [Machine Agent v24.3.0.4127 GA compatible with 4.4.1.0 Build Date 2024-03-20 05:00:40]
ERROR StatusLogger Reconfiguration failed: No configuration found for '10dba097' at 'null' in 'null'
2024-04-16 11:15:55.037 [INFO] Agent logging directory set to: [/opt/appdynamics/machine-agent/logs]
2024-04-16 11:15:53.468 Could not start up the machine agent due to: Could not initialize class com.sun.jna.Native
2024-04-16 11:15:53.468 Please see startup.log in the current working directory for details.

/opt/appdynamics/machine-agent/startup.log OUTPUT

Tue Apr 16 11:15:55 CDT 2024
java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
    at oshi.jna.platform.linux.LinuxLibc.<clinit>(LinuxLibc.java:22)
    at oshi.software.os.linux.LinuxOperatingSystem.<clinit>(LinuxOperatingSystem.java:97)
    at oshi.hardware.platform.linux.LinuxCentralProcessor.initProcessorCounts(LinuxCentralProcessor.java:166)
    at oshi.hardware.common.AbstractCentralProcessor.<init>(AbstractCentralProcessor.java:65)
    at oshi.hardware.platform.linux.LinuxCentralProcessor.<init>(LinuxCentralProcessor.java:57)
    at oshi.hardware.platform.linux.LinuxHardwareAbstractionLayer.createProcessor(LinuxHardwareAbstractionLayer.java:43)
    at oshi.util.Memoizer$1.get(Memoizer.java:61)
    at oshi.hardware.common.AbstractHardwareAbstractionLayer.getProcessor(AbstractHardwareAbstractionLayer.java:48)
    at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getOshiBasedLicenseCpuInfo(MachineLicensePropertiesProvider.java:75)
    at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getLicenseCpuInfo(MachineLicensePropertiesProvider.java:44)
    at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:106)
    at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:25)
    at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86)
    at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72)
    at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60)
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59)
    at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
    at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:253)
    at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:307)
    at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:289)
    at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113)
    at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
    at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
    at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
    at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
    at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
    at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
    at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
    at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
    at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
    at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
    at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
    at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
    at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:213)
    at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:186)
    at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:113)
    at com.google.inject.Guice.createInjector(Guice.java:87)
    at com.google.inject.Guice.createInjector(Guice.java:69)
    at com.appdynamics.voltron.FrameworkBootstrap.createInjector(FrameworkBootstrap.java:107)
    at com.appdynamics.voltron.FrameworkBootstrap.start(FrameworkBootstrap.java:162)
    at com.appdynamics.voltron.FrameworkBootstrap.startAndRun(FrameworkBootstrap.java:120)
    at com.appdynamics.voltron.FrameworkApplication.start(FrameworkApplication.java:31)
    at com.appdynamics.agent.sim.Main.startSafe(Main.java:64)
    at com.appdynamics.agent.sim.bootstrap.Bootstrap.main(Bootstrap.java:48)
Your trigger condition is the same as it was before?

| where is_maintenance_window = 0 AND is_server_down=1

I'm assuming your maintenance window is on a specific day of the week? April 23rd is a Tuesday; is your maintenance window every Tuesday night/Wednesday morning?

Introduce a new attribute for the day of the week:

| tstats count where index=cts-dcpsa-app sourcetype=app:dcpsa host_ip IN (xx.xx.xxx.xxx, xx.xx.xxx.xxx) by host
| eval current_time=strftime(now(), "%H%M")
| eval aDayNumber = strftime(now(), "%w")
| eval is_maintenance_window=if((aDayNumber = 2 AND current_time >= 2100) OR (aDayNumber = 3 AND current_time < 0400), 1, 0)
| eval is_server_down=if(count == 0, 1, 0)
| where is_maintenance_window = 0 AND is_server_down=1
@KothariSurbhi Yes, I have developed an app and placed the settings in its default/ui-prefs.conf, and even after the app vetting process it didn't work. I need your inputs on this, please. I have also restarted the Splunk Cloud search head instance, but it's still the same.
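For reference, a minimal sketch of what a time-range default in default/ui-prefs.conf typically looks like (the stanza and values here are illustrative, not your app's actual settings):

# default/ui-prefs.conf
[search]
dispatch.earliest_time = -7d@d
dispatch.latest_time = now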
Hi @Fadil.Chalakandy, I was told that after the Ops team ran a report, they saw nothing reporting over TLS versions 1.0 and 1.1.