All Topics

Hi Splunkers, we have a Windows log source with a UF installed on it. We have no access to this host: we only know that we collect Windows logs via the UF and that it works properly. The collected logs are the usual ones: Security, Application, and so on. Starting today, we need to add a monitor input: some files are stored in a folder and we need to collect them. So, on our DS, we created another app inside the deployment-apps folder, with a proper inputs.conf and props.conf, and then we deployed it. Why did we create another app instead of simply adding a monitor stanza to the Windows add-on's inputs.conf? Because the Windows add-on is deployed to many hosts, while we need to monitor the path on only one specific host, so we preferred a dedicated app with its own server class. The DS gives no errors; the app is shown as deployed with no issues. Likewise, we see no errors in splunkd.log and/or the _internal index. Nevertheless, the logs are not collected. We are going to contact the host owner and perform basic checks, such as: 1. Is the provided path the right one? 2. Does the user running the UF have read permission on that folder? 3. Is the app we deployed visible in the UF's apps folder? But before that, I have a doubt about point 2: in case of permission denied, I should see some error message in the _internal logs, right? Currently I don't see any error related to this issue. The behavior is as if the inputs.conf we set in the deployment app is totally ignored: searching _internal and/or splunkd.log, I cannot find anything related to the path we have to monitor.
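For reference, a minimal monitor stanza for the dedicated app's inputs.conf would look something like this (the path, index, and sourcetype below are hypothetical placeholders, not values from the poster's environment):

```ini
# inputs.conf in the dedicated deployment app
[monitor://C:\Example\Logs]
disabled = false
index = main
sourcetype = custom:files
```

On the UF itself, `splunk btool inputs list --debug` shows whether the stanza is actually merged into the running configuration (and which file it came from), and `splunk list inputstatus` reports the state of each monitored file, which helps distinguish a permission problem from an app that never arrived.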
Hi all, I am trying to set up SSE (v3.8.0); however, all the searches that use sseanalytics are failing: ERROR SearchMessages - orig_component="script" app="Splunk_Security_Essentials" sid="1713345298.75124_FB5E91CC-FD94-432D-8605-815038CDF897" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'sseanalytics' returned error code 1.
Hi guys! Could you recommend a good way to archive logs from k8s to an S3 bucket? Would it be better to write a custom script or to use some Splunk tooling (like Hadoop)?
Hi Team, I have a requirement where I need to send an email for approval; when the recipients reply to that email, I need to read the reply and continue with the next steps. Could you please help me with your suggestions? Thanks in advance.
Hi, We have a Splunk installation with SmartStore enabled. We have plenty of cache on disk, so we are nowhere near the space padding setting. Still, I have seen bucket downloads from S3, which I did not expect. So my question is: does Splunk pre-emptively evict buckets even when there is enough space? I see no documentation stating that it does anything other than LRU. Regards, André
I want to deploy a single Splunk collector in my AWS ECS cluster which will: 1. Collect all resource metrics for the other tasks running in the same cluster. 2. Receive, process, and forward all custom OTel metrics sent to it by the applications themselves. Is this possible? Thanks
I created a role with the capabilities 'edit_license' and 'edit_user', but I didn't receive all the users from the GET request to the URL: /services/authentication/users?output_mode=json. It only returned part of the users. Without the role 'edit_license', I received the following error: "messages": [ { "type": "ERROR", "text": "Unauthorized" } ] What are the minimum permissions required to retrieve all users, and does anyone know if this is the same for Splunk Cloud?  
Hello all, I am trying to ingest metrics via OpenTelemetry in an enterprise environment. I have installed the Splunk Add-on for OpenTelemetry Collector, which according to the documentation is compatible. I have some doubts about configuring it: where can I find the following connection points for my enterprise environment? - SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com - SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com - SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on - SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace Is there a configuration file where I can view these? Do I have to do some step beforehand to get those services up? Thanks in advance. BR, JAR
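For context, these endpoint URLs are all derived from the Splunk Observability Cloud realm, which is shown on your profile page in the Observability Cloud UI; a sketch of the resulting values, assuming a placeholder realm of `us0`:

```
SPLUNK_REALM=us0                                           # your realm, from your Observability Cloud profile
SPLUNK_API_URL=https://api.us0.signalfx.com                # pattern: https://api.<realm>.signalfx.com
SPLUNK_INGEST_URL=https://ingest.us0.signalfx.com          # pattern: https://ingest.<realm>.signalfx.com
SPLUNK_TRACE_URL=https://ingest.us0.signalfx.com/v2/trace
SPLUNK_LISTEN_INTERFACE=0.0.0.0                            # bind receivers to all interfaces
```

Substitute your own realm before use; nothing needs to be "brought up" on your side, since these are the SaaS endpoints the collector sends to.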
Hello, The UI of my search head is not loading... I am seeing only a white screen, with no error message as such. splunkd is also running. Kindly suggest?
I am a beginner in Splunk and I have created a new app in Splunk Enterprise. I am not able to see the appserver folder in the newly created app. How can I add that directory?
Error message: "Unable to load app list. Refresh the page to try again." Can anyone help with this?
Good day, all. I am new to Splunk, and I am currently having a problem at startup. How do I switch from the Enterprise Trial license to the Free license?
In our log, I'd like to extract statusText and categorize it in a table to see how many error responses there are per statusCode and statusText. For example:

eventSource    statusCode    statusText
bulkDelete     1020          3031: No Card found with the identifier for the request

But my query gets "has exceeded configured match_limit, consider raising the value in limits.conf." after the field extraction:

index = xxx sourcetype=xxx "Publish message on SQS"
| search bulkDelete
| rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts; they suggested using a non-greedy pattern, so I tried:

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete
| rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\""
| where NOT LIKE(statusText, "%Success%")

If I add "| table", I get blank content in statusText.
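As a sanity check outside Splunk, the target event parses cleanly in two stages: the JSON that follows "message=", and then the JSON string stored inside errorDetails (that inner layer is why the escaped-quote regex is needed at all). A minimal sketch using the sample event from the question:

```python
import json

# Sample event copied from the question; the value of errorDetails is
# itself a JSON-encoded string, hence the escaped quotes.
raw = ('Publish message on SQS, queueName=xxx, retryCount=0, message='
       '{"traceId":"xxx1112233","clientContext":"xxxxxclientContext",'
       '"cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete",'
       '"errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.",'
       '"retryCategory":"RETRYABLE","errorDetails":"{\\"clientContext\\":\\"xxxxxclientContext\\",'
       '\\"ewSID\\":\\"xxxxSID\\",\\"statusCode\\":\\"1020\\",'
       '\\"statusText\\":\\"3031: No Card found with the identifier for the request\\",'
       '\\"timestampISO8601\\":\\"2024-04-05T00:00:26Z\\"}"}]}')

# Stage 1: parse the JSON payload that follows "message=".
msg = json.loads(raw.split("message=", 1)[1])

# Stage 2: errorDetails is a string containing JSON, so parse it again.
details = json.loads(msg["errors"][0]["errorDetails"])
print(details["statusCode"], details["statusText"])
```

The same two-stage idea in SPL would be to spath the message payload and then spath the extracted errorDetails field, which avoids the match_limit-prone multi-group regex entirely.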
We would like to be able to configure the Okta application to be of an "API Services" application type vs. a "Web Application" type when setting up the Splunk Add-on for Okta Identity Cloud TA for OAuth2. Using a "Web Application" type requires a user account associated with the auth flow. This ties the auth to a specific user; if that user is suspended or disabled, the TA stops working. Ideally the auth is tied not to a user but to an "API Services" application type. Okta recommends the "API Services" application type for machine-to-machine auth. Are there plans to support this in the add-on going forward, since the "Web Application" type is less robust and not what Okta ideally recommends?
Due to some oddities of our environment, my team needs default fields in order to run some playbooks automatically. We've built these fields into the notable events which get sent over from Splunk. However, containers are built without an artifact when created manually. While we could certainly train people to follow some manual steps to create an artifact or toggle the Artifact Dependency switch, that goes against the nature of SOAR and it's easy to miss something. It's easier to have a playbook create an artifact with those fields we need. Unfortunately, the Artifact Dependency switch defaults to off. So, the actual question: Has anyone found a way to change the default for the Artifact Dependency switch or to make a playbook run before an artifact is created?
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]} I want to extract all the fields from "message" (which holds a JSON array) into a table, and I want a totals row for each numeric column, showing a running total per TARGET_SYSTEM.
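Outside of SPL, the shape of the computation is easy to see in a short sketch: expand the "message" array into rows, then accumulate per-TARGET_SYSTEM totals for each numeric column (values like "35 (0%)" are reduced to their leading integer):

```python
import json
from collections import defaultdict

# Event from the question; "message" holds a JSON array of rows.
event = json.loads('{"id":"0","severity":"Information","message":['
    '{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},'
    '{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},'
    '{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]}')

NUMERIC = ["TOTAL", "PROCESSED", "REMAINING", "ERROR", "SKIPPED", "ERROR_IND", "ERROR_1"]

def num(value):
    # "35 (0%)" -> 35; plain "121257" -> 121257
    return int(value.split()[0])

totals = defaultdict(lambda: defaultdict(int))
for row in event["message"]:
    for col in NUMERIC:
        totals[row["TARGET_SYSTEM"]][col] += num(row[col])

for system, cols in totals.items():
    print(system, dict(cols))
```

In SPL the equivalent structure would be spath to expand message{} into rows, mvexpand, then a stats sum(...) by TARGET_SYSTEM appended under the detail rows; the sketch above just makes the grouping explicit.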
message: Updated Components { "servicechannel": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ], "omnisupervisorconfig": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ], "livechatbutton": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ] }

Desired table columns: LastModifiedBy | ModifiedBy | Component | RecordId
We’ll explore how to deploy a robust application monitoring solution using AWS ECS (Elastic Container Service) and Cisco AppDynamics. This integration allows businesses to leverage the scalability of AWS and the comprehensive monitoring capabilities of AppDynamics, ensuring applications perform optimally in a cloud environment.

What is AWS ECS?
AWS ECS (Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, making it easier to schedule and run Docker containers on AWS.

What is AppDynamics?
AppDynamics is a powerful application performance management (APM) and IT operations analytics (ITOA) tool that helps monitor, analyze, and optimize complex software environments. It provides real-time visibility and insights into the performance of applications, enabling organizations to quickly detect, diagnose, and resolve issues to improve user experience and operational efficiency.

Application Image
The application we are deploying is packaged in a Docker image, abhimanyubajaj98/java-tomcat-wit-otel-app-buildx, which contains a Java-based web application running on Apache Tomcat. This image is enhanced with the OpenTelemetry Java agent to facilitate detailed performance monitoring and telemetry.

Configuration Overview
Our setup involves several AWS resources managed through Terraform, a popular infrastructure-as-code (IaC) tool, ensuring our infrastructure is reproducible and maintainable. Below is a high-level overview of our configuration:

ECS Cluster
- AWS ECS Cluster: Acts as the hosting environment for our Docker containers.
- Task Definitions: Specify the Docker containers to be deployed, their CPU and memory allocations, and essential configurations such as environment variables and network mode.
IAM Roles and Policies
- IAM Roles and Policies: Ensure proper permissions for ECS tasks to interact with other AWS services, such as retrieving Docker images from ECR and sending logs to CloudWatch.

Container Setup
- Machine Agent Container: Hosts the AppDynamics machine agent, configured to monitor the underlying EC2 instances and collect machine metrics.
- Java Application Container: Runs the main Java application with OpenTelemetry instrumentation, configured to send telemetry data to AppDynamics.
- OpenTelemetry Collector Container: Aggregates and forwards telemetry data to the AppDynamics controller.

Security and Network
- Network Mode: Uses host networking to ensure that containers can communicate efficiently and leverage the networking capabilities of the host EC2 instance.
- Security Groups: Configured to allow appropriate inbound and outbound traffic necessary for operation and monitoring.

Detailed Steps and Configuration
1. ECS Cluster Creation: Define an ECS cluster using Terraform to serve as the runtime environment for the containers.
2. Task Definitions: Specify the containers that need to run as part of the ECS service, including detailed settings for image versions, CPU and memory requirements, environment variables for configuration, and volume mounts for persistent or shared data.
3. IAM Configuration: Set up IAM roles and attach policies that grant the necessary permissions for ECS tasks, including logging to CloudWatch and accessing ECR for image retrieval.
4. Logging and Monitoring: Configure CloudWatch for logging and set up AppDynamics for advanced monitoring, linking it with OpenTelemetry for comprehensive observability.
5. Deployment and Management: Use Terraform to manage deployment and updates to the infrastructure, ensuring consistency and reproducibility.
provider "aws" {
  region = "us-east-1" # Change to your preferred AWS region
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "ecs_cluster_with_agents"
  tags = {
    owner = "Abhi Bajaj"
  }
}

resource "aws_ecs_task_definition" "container_tasks" {
  family       = "container_tasks"
  network_mode = "host"
  container_definitions = jsonencode([
    {
      "name" : "machine-agent-container",
      "uid" : "0",
      "privileged" : true,
      "image" : "docker.io/appdynamics/machine-agent:root-latest",
      "cpu" : 256,
      "memory" : 512,
      "essential" : true,
      "environment" : [
        { "name" : "APPDYNAMICS_CONTROLLER_HOST_NAME", "value" : "xxx.saas.appdynamics.com" },
        { "name" : "APPDYNAMICS_CONTROLLER_PORT", "value" : "443" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_UNIQUE_HOST_ID", "value" : "machine_agent_ecs" },
        { "name" : "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_SIM_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_DOCKER_ENABLED", "value" : "true" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/proc", "sourceVolume" : "proc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false },
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : false },
        { "containerPath" : "/var/run/docker.sock", "sourceVolume" : "docker_sock", "readOnly" : false }
        // Add more mount points as needed
      ]
    },
    {
      "name" : "ecs_with_otel_java_app",
      "image" : "abhimanyubajaj98/java-tomcat-wit-otel-app-buildx",
      "cpu" : 512,
      "memory" : 1024,
      "privileged" : true,
      "essential" : true,
      "environment" : [
        { "name" : "JAVA_TOOL_OPTIONS", "value" : "-Dotel.resource.attributes=service.name=ECS_otel_abhi,service.namespace=ECS_otel_abhi" }
      ]
    },
    {
      "name" : "OpenTelemetryCollector",
      "image" : "appdynamics/appdynamics-cloud-otel-collector",
      "privileged" : true,
      "cpu" : 512,
      "memory" : 1024,
      "portMappings" : [
        { "containerPort" : 13133, "hostPort" : 13133 },
        { "containerPort" : 4317, "hostPort" : 4317 },
        { "containerPort" : 4318, "hostPort" : 4318 }
      ],
      "environment" : [
        { "name" : "APPD_OTELCOL_CLIENT_ID", "value" : "xxx" },
        { "name" : "APPD_OTELCOL_CLIENT_SECRET", "value" : "xxxx" },
        { "name" : "APPD_OTELCOL_TOKEN_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/auth/4f8da76d-01a8-4df6-85cd-3a111fba946e/default/oauth2/token" },
        { "name" : "APPD_OTELCOL_ENDPOINT_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/data" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false }
      ]
    }
  ])

  volume {
    name      = "proc"
    host_path = "/proc"
  }
  volume {
    name      = "sys"
    host_path = "/sys"
  }
  volume {
    name      = "etc"
    host_path = "/etc"
  }
  volume {
    name      = "docker_sock"
    host_path = "/var/run/docker.sock"
  }
}

resource "aws_ecs_service" "container_services" {
  name            = "container-services"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.container_tasks.arn
  desired_count   = 1
}

##############################################################################################################

resource "aws_launch_template" "ecs_launch_template" {
  name          = "alma"
  image_id      = "ami-xxxxx" # Amazon ECS-Optimized Amazon Linux 2 (AL2) x86_64 AMI
  instance_type = "t2.medium"

  user_data = base64encode(<<EOF
#!/bin/bash
echo ECS_CLUSTER=ecs_cluster_with_agents >> /etc/ecs/ecs.config
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
dpkg -i amazon-ssm-agent.deb
systemctl enable amazon-ssm-agent
EOF
  )

  vpc_security_group_ids = ["sg-xxxx"]

  iam_instance_profile {
    name = aws_iam_instance_profile.dev-resources-iam-profile.name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name  = "ECS_with_Agents"
      Owner = "abhibaj@cisco.com"
    }
  }
}

resource "aws_autoscaling_group" "auto_scaling_group" {
  name                      = "ecs_asg"
  availability_zones        = ["us-east-1a", "us-east-1b"]
  desired_capacity          = 1
  min_size                  = 1
  max_size                  = 10
  health_check_grace_period = 300
  health_check_type         = "EC2"

  launch_template {
    id = aws_launch_template.ecs_launch_template.id
  }
}

resource "aws_ecs_capacity_provider" "provider" {
  name = "alma"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.auto_scaling_group.arn

    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 100
      minimum_scaling_step_size = 1
      maximum_scaling_step_size = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "providers" {
  cluster_name       = aws_ecs_cluster.ecs_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.provider.name]
}

#############################################

resource "aws_iam_instance_profile" "dev-resources-iam-profile" {
  name = "ec2_profile_for_services_otel"
  role = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role" "dev-resources-iam-role" {
  name        = "role_for_services_ec2_otel"
  description = "The role for the developer resources on EC2"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF

  tags = {
    Owner = "abhibaj"
  }
}

resource "aws_iam_role_policy_attachment" "dev-resources-ssm-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecr_read_only_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_full_access_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonECS_FullAccess"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_ecs_task_execution_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  role       = aws_iam_role.dev-resources-iam-role.name
}

To deploy: edit the environment variables substituted with "xxx". Once done, run:

terraform init
terraform apply --auto-approve

How it will look on your UI is:
I know vCenter has an API to get information about the local file systems on a guest VM running on an ESXi host (as long as VMware Tools is installed in the VM): capacity (in bytes), freeSpace (in bytes), diskPath (e.g. C:\ for Windows or / for *nix), fileSystemType (e.g. ext3, NTFS, etc.). Ref #1: https://vdc-download.vmware.com/vmwb-repository/dcr-public/184bb3ba-6fa8-4574-a767-d0c96e2a38f4/ba9422ef-405c-47dd-8553-e11b619185b2/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestInfo.DiskInfo.html Ref #2: https://developer.vmware.com/apis/vsphere-automation/latest/vcenter/api/vcenter/vm/vm/guest/local-filesystem/get/ I believe RVTools and some monitoring tools use this specific API to grab info about the local file systems of guest VMs. So far I have only been able to find metrics regarding datastore usage. That is fine, but an equally important metric is the local disk utilization of the guest VM. Which metric is responsible for getting this info in the VMware or VMware Metrics add-ons? https://docs.splunk.com/Documentation/AddOns/released/VMW/Sourcetypes https://docs.splunk.com/Documentation/AddOns/released/VMWmetrics/Sourcetypes If none of the listed ones, is there a way to customize the VMW or VMWmetrics add-ons to grab this crucial information about VMs from vCenter? Or perhaps I should look elsewhere, i.e. at a different app/add-on?
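If no add-on sourcetype turns out to carry this, the data is reachable directly: in a pyvmomi session the vim.vm.GuestInfo.DiskInfo list from Ref #1 is exposed as vm.guest.disk. A minimal sketch of summarizing it; the VM object would normally come from a live pyvmomi connection, so a SimpleNamespace stand-in (with made-up sizes) is used here purely for illustration:

```python
from types import SimpleNamespace

def summarize_guest_disks(vm):
    """Return per-mount capacity/free-space figures from a VM's guest
    disk info (requires VMware Tools running in the guest)."""
    rows = []
    for d in vm.guest.disk:  # vim.vm.GuestInfo.DiskInfo entries
        used = d.capacity - d.freeSpace
        rows.append({
            "diskPath": d.diskPath,
            "capacity_bytes": d.capacity,
            "free_bytes": d.freeSpace,
            "pct_used": round(100 * used / d.capacity, 1),
        })
    return rows

# Stand-in for a pyvmomi VM object, for illustration only.
vm = SimpleNamespace(guest=SimpleNamespace(disk=[
    SimpleNamespace(diskPath="C:\\", capacity=100_000_000_000, freeSpace=25_000_000_000),
    SimpleNamespace(diskPath="/", capacity=50_000_000_000, freeSpace=40_000_000_000),
]))
print(summarize_guest_disks(vm))
```

Wrapped in a scripted or modular input, output like this could be indexed as a custom sourcetype if the stock add-ons don't cover it.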
Greetings! I'm unable to start appdynamics-machine-agent following the same install instructions that work on RHEL 7. Machine Agent Bundle - 64-bit linux (rpm) 24.3.0 is installed. I updated the config file to match the same controller/settings/etc. as the RHEL 7 servers. Upon starting the service, the status is failed, and the logs say: Could not initialize class com.sun.jna.Native

/opt/appdynamics/machine-agent/logs/startup.out OUTPUT
2024-04-16 11:15:53.430 Using Agent Version [Machine Agent v24.3.0.4127 GA compatible with 4.4.1.0 Build Date 2024-03-20 05:00:40]
ERROR StatusLogger Reconfiguration failed: No configuration found for '10dba097' at 'null' in 'null'
2024-04-16 11:15:55.037 [INFO] Agent logging directory set to: [/opt/appdynamics/machine-agent/logs]
2024-04-16 11:15:53.468 Could not start up the machine agent due to: Could not initialize class com.sun.jna.Native
2024-04-16 11:15:53.468 Please see startup.log in the current working directory for details.

/opt/appdynamics/machine-agent/startup.log OUTPUT
Tue Apr 16 11:15:55 CDT 2024 java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
at oshi.jna.platform.linux.LinuxLibc.<clinit>(LinuxLibc.java:22)
at oshi.software.os.linux.LinuxOperatingSystem.<clinit>(LinuxOperatingSystem.java:97)
at oshi.hardware.platform.linux.LinuxCentralProcessor.initProcessorCounts(LinuxCentralProcessor.java:166)
at oshi.hardware.common.AbstractCentralProcessor.<init>(AbstractCentralProcessor.java:65)
at oshi.hardware.platform.linux.LinuxCentralProcessor.<init>(LinuxCentralProcessor.java:57)
at oshi.hardware.platform.linux.LinuxHardwareAbstractionLayer.createProcessor(LinuxHardwareAbstractionLayer.java:43)
at oshi.util.Memoizer$1.get(Memoizer.java:61)
at oshi.hardware.common.AbstractHardwareAbstractionLayer.getProcessor(AbstractHardwareAbstractionLayer.java:48)
at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getOshiBasedLicenseCpuInfo(MachineLicensePropertiesProvider.java:75)
at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getLicenseCpuInfo(MachineLicensePropertiesProvider.java:44)
at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:106)
at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:25)
at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86)
at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72)
at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60)
at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:253)
at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:307)
at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:289)
at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40)
at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60)
at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113)
at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91)
at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300)
at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40)
at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169)
at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:213)
at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:186)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:113)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.google.inject.Guice.createInjector(Guice.java:69)
at com.appdynamics.voltron.FrameworkBootstrap.createInjector(FrameworkBootstrap.java:107)
at com.appdynamics.voltron.FrameworkBootstrap.start(FrameworkBootstrap.java:162)
at com.appdynamics.voltron.FrameworkBootstrap.startAndRun(FrameworkBootstrap.java:120)
at com.appdynamics.voltron.FrameworkApplication.start(FrameworkApplication.java:31)
at com.appdynamics.agent.sim.Main.startSafe(Main.java:64)
at com.appdynamics.agent.sim.bootstrap.Bootstrap.main(Bootstrap.java:48)