All Topics

Hi guys! Could you recommend a better way to archive logs from k8s to an S3 bucket? Would it be better to write a custom script or to use some Splunk tools (like Hadoop)?
Hi Team, I have a requirement where I need to send an email for approval; if the recipient replies to that email, I need to read the reply and continue with the next steps. Could you please help me with your suggestions? Thanks in advance.
Hi, We have a Splunk installation with SmartStore enabled. We have plenty of cache on disk, so we are nowhere near the space padding setting. I have seen bucket downloads from S3, which I did not expect. So my question is: does Splunk pre-emptively evict buckets even if there is enough space? I see no documentation stating that it does anything other than LRU. Regards, André
I want to deploy a single Splunk collector in my AWS ECS cluster which will 1. collect all resource metrics for the other tasks running within the same cluster, and 2. receive, process, and forward all custom OTel metrics sent to it by the applications themselves. Is this possible? Thanks
I created a role with the capabilities 'edit_license' and 'edit_user', but I didn't receive all the users from the GET request to the URL: /services/authentication/users?output_mode=json. It only returned part of the users. Without the role 'edit_license', I received the following error: "messages": [ { "type": "ERROR", "text": "Unauthorized" } ] What are the minimum permissions required to retrieve all users, and does anyone know if this is the same for Splunk Cloud?  
Hello all, I am trying to ingest metrics via OpenTelemetry in an enterprise environment. I have installed the Splunk Add-on for OpenTelemetry Collector, which according to the documentation is compatible. I have some doubts about configuring it: where can I find the following connection endpoints for my enterprise environment?
- SPLUNK_API_URL: The Splunk API URL, e.g. https://api.us0.signalfx.com
- SPLUNK_INGEST_URL: The Splunk ingest URL, e.g. https://ingest.us0.signalfx.com
- SPLUNK_LISTEN_INTERFACE: The network interface the agent receivers listen on
- SPLUNK_TRACE_URL: The Splunk trace endpoint URL, e.g. https://ingest.us0.signalfx.com/v2/trace
Is there a configuration file where I can view these values? Do I have to complete some step beforehand to get those services up? Thanks in advance. BR, JAR
Hello, The UI of my search head is not loading. I am seeing only a white screen with no error message as such, and splunkd is running. Kindly suggest what to check.
I am a beginner in Splunk and I have created a new app in Splunk Enterprise. I am not able to see the appserver folder in the newly created app. How can I add that directory?
Error message: Unable to load app list. Refresh the page to try again. Can anyone help with this?
Good day all, I am new to Splunk and am currently having a problem at startup. How do I switch from the Enterprise Trial license to the Free license?
In our log, I'd like to extract statusText and categorize it in a table to see how many error responses there are per statusCode and statusText. Example:

eventSource statusCode statusText
bulkDelete 1020 3031: No Card found with the identifier for the request

But my query gets "has exceeded configured match_limit, consider raising the value in limits.conf." after the field extraction:

index = xxx sourcetype=xxx "Publish message on SQS" | search bulkDelete | rex field=_raw "(?ms)^(?:[^:\\n]*:){7}\"(?P<error_bulkDelete>[^\"]+)(?:[^:\\n]*:){2}\"(?P<error_errorCode>[^\"]+)[^:\\n]*:\"(?P<error_desc>[^\"]+)(?:[^:\\n]*:){6}\\\\\"(?P<error_statusText>[^\\\\]+)" offset_field=_extracted_fields_bounds

Target log:

Publish message on SQS, queueName=xxx, retryCount=0, message={"traceId":"xxx1112233","clientContext":"xxxxxclientContext","cardTokenReferenceId":"xxxcardTokenReferenceId","eventSource":"bulkDelete","errors":[{"errorCode":"52099","errorDescription":"Feign Client Exception.","retryCategory":"RETRYABLE","errorDetails":"{\"clientContext\":\"xxxxxclientContext\",\"ewSID\":\"xxxxSID\",\"statusCode\":\"1020\",\"statusText\":\"3031: No Card found with the identifier for the request\",\"timestampISO8601\":\"2024-04-05T00:00:26Z\"}"}]}

I checked similar posts; they suggested using a non-greedy match, so I tried:

index = "xxx" sourcetype=xxx "Publish message on SQS*" bulkDelete | rex field=_raw "\"statusText\":\s*\"(?P<statusText>[^\"]+)\"" | where NOT LIKE( statusText, "%Success%")

If I add "| table", I get blank content for statusText.
We would like to be able to configure the Okta application as an "API Services" application type rather than a "Web Application" type when setting up the "Splunk Add-on for Okta Identity Cloud TA" for OAuth2. Using a "Web Application" type requires a user account associated with the auth flow. This ties the auth to a specific user, so if that user is suspended or disabled, the TA stops working. Ideally the auth is not tied to a user but to an "API Services" application type, which Okta recommends for machine-to-machine auth. Are there plans to support this in the add-on going forward, given that the "Web Application" type is less robust and not what Okta ideally recommends?
Due to some oddities of our environment, my team needs default fields in order to run some playbooks automatically. We've built these fields into the notable events which get sent over from Splunk. However, containers are built without an artifact when created manually. While we could certainly train people to follow some manual steps to create an artifact or toggle the Artifact Dependency switch, that goes against the nature of SOAR and it's easy to miss something. It's easier to have a playbook create an artifact with those fields we need. Unfortunately, the Artifact Dependency switch defaults to off. So, the actual question: Has anyone found a way to change the default for the Artifact Dependency switch or to make a playbook run before an artifact is created?
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND"... See more...
{"id":"0","severity":"Information","message":[{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02170","TOTAL":"121257","PROCESSED":"121257","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"35 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"CPW","FUNCTION_NAME":"CPW_02171","TOTAL":"26434","PROCESSED":"26434","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"},{"TARGET_SYSTEM":"SEQ","FUNCTION_NAME":"CPW_02172","TOTAL":"23343","PROCESSED":"2647812","REMAINING":"0","ERROR":"0 (0%)","SKIPPED":"19 (0%)","ERROR_IND":"0","ERROR_1":"0"}]} I want to extract all fields in the form of table from  "message" which is holding JSON array . And I want a total row for each column where total running total will display for each numeric column based on TARGET_SYSTEM . 
message: Updated Components { "servicechannel": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-15T17:20:09.000Z", "ComponentName": "ServiceChannel", "RecordId": "0N98X001200Gvv3SAC" } ], "omnisupervisorconfig": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:17:37.000Z", "ComponentName": "OmniSupervisorConfig", "RecordId": "0Q27X000000KyrESAS" } ], "livechatbutton": [ { "LastmodifiedBy": "XYZ", "ModifiedDate": "2024-04-16T16:31:35.000Z", "ComponentName": "LiveChatButton", "RecordId": "5638X000000Xw55QAC" } ] }   LastModifiedBy ModifiedBy Component RecordId
We’ll explore how to deploy a robust application monitoring solution using AWS ECS (Elastic Container Service) and Cisco AppDynamics. This integration allows businesses to leverage the scalability of AWS and the comprehensive monitoring capabilities of AppDynamics, ensuring applications perform optimally in a cloud environment.

What is AWS ECS?
AWS ECS (Elastic Container Service) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to easily run applications on a managed cluster of Amazon EC2 instances. ECS eliminates the need to install, operate, and scale your own cluster management infrastructure, making it easier to schedule and run Docker containers on AWS.

What is AppDynamics?
AppDynamics is a powerful application performance management (APM) and IT operations analytics (ITOA) tool that helps monitor, analyze, and optimize complex software environments. It provides real-time visibility and insights into the performance of applications, enabling organizations to quickly detect, diagnose, and resolve issues to improve user experience and operational efficiency.

Application Image
The application we are deploying is packaged in a Docker image, abhimanyubajaj98/java-tomcat-wit-otel-app-buildx, which contains a Java-based web application running on Apache Tomcat. This image is enhanced with the OpenTelemetry Java agent to facilitate detailed performance monitoring and telemetry.

Configuration Overview
Our setup involves several AWS resources managed through Terraform, a popular infrastructure as code (IaC) tool, ensuring our infrastructure is reproducible and maintainable. Below is a high-level overview of our configuration:

ECS Cluster
- AWS ECS Cluster: Acts as the hosting environment for our Docker containers.
- Task Definitions: Specifies the Docker containers to be deployed, their CPU and memory allocations, and essential configurations such as environment variables and network mode.

IAM Roles and Policies
- IAM Roles and Policies: Ensure proper permissions for ECS tasks to interact with other AWS services, such as retrieving Docker images from ECR and sending logs to CloudWatch.

Container Setup
- Machine Agent Container: Hosts the AppDynamics machine agent, configured to monitor the underlying EC2 instances and collect machine metrics.
- Java Application Container: Runs the main Java application with OpenTelemetry instrumentation, configured to send telemetry data to AppDynamics.
- OpenTelemetry Collector Container: Aggregates and forwards telemetry data to the AppDynamics controller.

Security and Network
- Network Mode: Uses host networking to ensure that containers can communicate efficiently and leverage the networking capabilities of the host EC2 instance.
- Security Groups: Configured to allow appropriate inbound and outbound traffic necessary for operation and monitoring.

Detailed Steps and Configuration
- ECS Cluster Creation: Define an ECS cluster using Terraform to serve as the runtime environment for the containers.
- Task Definitions: Specify containers that need to be run as part of the ECS service. Include detailed settings for image versions, CPU and memory requirements, environment variables for configuration, and volume mounts for persistent or shared data.
- IAM Configuration: Set up IAM roles and attach policies that grant necessary permissions for ECS tasks, including logging to CloudWatch and accessing ECR for image retrieval.
- Logging and Monitoring: Configure CloudWatch for logging and set up AppDynamics for advanced monitoring, linking it with OpenTelemetry for comprehensive observability.
- Deployment and Management: Use Terraform to manage deployment and updates to the infrastructure, ensuring consistency and reproducibility.

provider "aws" {
  region = "us-east-1" # Change to your preferred AWS region
}

resource "aws_ecs_cluster" "ecs_cluster" {
  name = "ecs_cluster_with_agents"
  tags = {
    owner = "Abhi Bajaj"
  }
}

resource "aws_ecs_task_definition" "container_tasks" {
  family       = "container_tasks"
  network_mode = "host"

  container_definitions = jsonencode([
    {
      "name" : "machine-agent-container",
      "uid" : "0",
      "privileged" : true,
      "image" : "docker.io/appdynamics/machine-agent:root-latest",
      "cpu" : 256,
      "memory" : 512,
      "essential" : true,
      "environment" : [
        { "name" : "APPDYNAMICS_CONTROLLER_HOST_NAME", "value" : "xxx.saas.appdynamics.com" },
        { "name" : "APPDYNAMICS_CONTROLLER_PORT", "value" : "443" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value" : "xxx" },
        { "name" : "APPDYNAMICS_AGENT_UNIQUE_HOST_ID", "value" : "machine_agent_ecs" },
        { "name" : "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_SIM_ENABLED", "value" : "true" },
        { "name" : "APPDYNAMICS_DOCKER_ENABLED", "value" : "true" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/proc", "sourceVolume" : "proc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false },
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : false },
        { "containerPath" : "/var/run/docker.sock", "sourceVolume" : "docker_sock", "readOnly" : false }
        // Add more mount points as needed
      ]
    },
    {
      "name" : "ecs_with_otel_java_app",
      "image" : "abhimanyubajaj98/java-tomcat-wit-otel-app-buildx",
      "cpu" : 512,
      "memory" : 1024,
      "network_mode" : "host",
      "privileged" : true,
      "essential" : true,
      "environment" : [
        { "name" : "JAVA_TOOL_OPTIONS", "value" : "-Dotel.resource.attributes=service.name=ECS_otel_abhi,service.namespace=ECS_otel_abhi" }
      ]
    },
    {
      "name" : "OpenTelemetryCollector",
      "image" : "appdynamics/appdynamics-cloud-otel-collector",
      "privileged" : true,
      "network_mode" : "host",
      "memory" : 1024,
      "cpu" : 512,
      "ports" : [
        { "containerPort" : 13133, "hostPort" : 13133 },
        { "containerPort" : 4317, "hostPort" : 4317 },
        { "containerPort" : 4318, "hostPort" : 4318 }
      ],
      "environment" : [
        { "name" : "APPD_OTELCOL_CLIENT_ID", "value" : "xxx" },
        { "name" : "APPD_OTELCOL_CLIENT_SECRET", "value" : "xxxx" },
        { "name" : "APPD_OTELCOL_TOKEN_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/auth/4f8da76d-01a8-4df6-85cd-3a111fba946e/default/oauth2/token" },
        { "name" : "APPD_OTELCOL_ENDPOINT_URL", "value" : "https://xxx-pdx-p01-c4.observe.appdynamics.com/data" }
      ],
      "mountPoints" : [
        { "containerPath" : "/hostroot/etc", "sourceVolume" : "etc", "readOnly" : true },
        { "containerPath" : "/hostroot/sys", "sourceVolume" : "sys", "readOnly" : false }
      ]
    }
  ])

  volume {
    name      = "proc"
    host_path = "/proc"
  }
  volume {
    name      = "sys"
    host_path = "/sys"
  }
  volume {
    name      = "etc"
    host_path = "/etc"
  }
  volume {
    name      = "docker_sock"
    host_path = "/var/run/docker.sock"
  }
}

resource "aws_ecs_service" "container_services" {
  name            = "container-services"
  cluster         = aws_ecs_cluster.ecs_cluster.id
  task_definition = aws_ecs_task_definition.container_tasks.arn
  desired_count   = 1
}
##############################################################################################################

resource "aws_launch_template" "ecs_launch_template" {
  name          = "alma"
  image_id      = "ami-xxxxx" # Amazon ECS-Optimized Amazon Linux 2 (AL2) x86_64 AMI
  instance_type = "t2.medium"

  user_data = base64encode(<<EOF
#!/bin/bash
sudo su
echo ECS_CLUSTER=ecs_cluster_with_agents >> /etc/ecs/ecs.config
wget https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/debian_amd64/amazon-ssm-agent.deb
dpkg -i amazon-ssm-agent.deb
systemctl enable amazon-ssm-agent
EOF
  )

  vpc_security_group_ids = ["sg-xxxx"]

  iam_instance_profile {
    name = aws_iam_instance_profile.dev-resources-iam-profile.name
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name  = "ECS_with_Agents"
      Owner = "abhibaj@cisco.com"
    }
  }
}

resource "aws_autoscaling_group" "auto_scaling_group" {
  name                      = "ecs_asg"
  availability_zones        = ["us-east-1a", "us-east-1b"]
  desired_capacity          = 1
  min_size                  = 1
  max_size                  = 10
  health_check_grace_period = 300
  health_check_type         = "EC2"

  launch_template {
    id = aws_launch_template.ecs_launch_template.id
  }
}

resource "aws_ecs_capacity_provider" "provider" {
  name = "alma"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.auto_scaling_group.arn

    managed_scaling {
      status                    = "ENABLED"
      target_capacity           = 100
      minimum_scaling_step_size = 1
      maximum_scaling_step_size = 100
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "providers" {
  cluster_name       = aws_ecs_cluster.ecs_cluster.name
  capacity_providers = [aws_ecs_capacity_provider.provider.name]
}

#############################################

resource "aws_iam_instance_profile" "dev-resources-iam-profile" {
  name = "ec2_profile_for_services_otel"
  role = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role" "dev-resources-iam-role" {
  name        = "role_for_services_ec2_otel"
  description = "The role for the developer resources on EC2"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF

  tags = {
    Owner = "abhibaj"
  }
}

resource "aws_iam_role_policy_attachment" "dev-resources-ssm-policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecr_read_only_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_full_access_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonECS_FullAccess"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs_ecs_task_execution_policy_attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
  role       = aws_iam_role.dev-resources-iam-role.name
}

resource "aws_iam_role_policy_attachment" "ecs-instance-role-attachment" {
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
  role       = aws_iam_role.dev-resources-iam-role.name
}

To deploy: edit the environment variables substituted with “xxx”. Once done, run:

terraform init
terraform apply --auto-approve

How it will look on your UI:
I know vCenter has an API to get information about the local file systems on a guest VM running on an ESXi host (as long as VMware Tools is installed on the VM):
- capacity (in bytes)
- freeSpace (in bytes)
- diskPath (e.g. C:\ for Windows or / for *nix)
- fileSystemType (e.g. ext3, NTFS, etc.)
Ref #1: https://vdc-download.vmware.com/vmwb-repository/dcr-public/184bb3ba-6fa8-4574-a767-d0c96e2a38f4/ba9422ef-405c-47dd-8553-e11b619185b2/SDK/vsphere-ws/docs/ReferenceGuide/vim.vm.GuestInfo.DiskInfo.html
Ref #2: https://developer.vmware.com/apis/vsphere-automation/latest/vcenter/api/vcenter/vm/vm/guest/local-filesystem/get/
I believe RVTools and some monitoring tools use this specific API to grab info about the local file systems on the guest VM.
So far I was able to find metrics regarding datastore usage. This is fine, but an equally important metric is the local disk utilization of the guest VM. Which metric is responsible for getting this info in the VMware or VMware Metrics add-ons?
https://docs.splunk.com/Documentation/AddOns/released/VMW/Sourcetypes
https://docs.splunk.com/Documentation/AddOns/released/VMWmetrics/Sourcetypes
If none of the listed ones, is there a way to customize the VMW or VMWmetrics add-ons to grab this crucial information about VMs from vCenter? Or perhaps I should look elsewhere, i.e. a different app/add-on?
Greetings!    Im unable to start appdynamics-machine-agent following the same install instructions that work with rhel 7. Machine Agent Bundle - 64-bit linux (rpm)    24.3.0 installed. I updated the config file to match the same controller/settings/etc as the rhel 7 servers. Upon starting the service I see the status is failed, and the logs say: Could not initialize class com.sun.jna.Native   /opt/appdynamics/machine-agent/logs/startup.out OUTPUT  2024-04-16 11:15:53.430 Using Agent Version [Machine Agent v24.3.0.4127 GA compatible with 4.4.1.0 Build Date 2024-03-20 05:00:40] ERROR StatusLogger Reconfiguration failed: No configuration found for '10dba097' at 'null' in 'null' 2024-04-16 11:15:55.037 [INFO] Agent logging directory set to: [/opt/appdynamics/machine-agent/logs] 2024-04-16 11:15:53.468 Could not start up the machine agent due to: Could not initialize class com.sun.jna.Native 2024-04-16 11:15:53.468 Please see startup.log in the current working directory for details.   /opt/appdynamics/machine-agent/startup.log OUTPUT Tue Apr 16 11:15:55 CDT 2024 java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native at oshi.jna.platform.linux.LinuxLibc.<clinit>(LinuxLibc.java:22) at oshi.software.os.linux.LinuxOperatingSystem.<clinit>(LinuxOperatingSystem.java:97) at oshi.hardware.platform.linux.LinuxCentralProcessor.initProcessorCounts(LinuxCentralProcessor.java:166) at oshi.hardware.common.AbstractCentralProcessor.<init>(AbstractCentralProcessor.java:65) at oshi.hardware.platform.linux.LinuxCentralProcessor.<init>(LinuxCentralProcessor.java:57) at oshi.hardware.platform.linux.LinuxHardwareAbstractionLayer.createProcessor(LinuxHardwareAbstractionLayer.java:43) at oshi.util.Memoizer$1.get(Memoizer.java:61) at oshi.hardware.common.AbstractHardwareAbstractionLayer.getProcessor(AbstractHardwareAbstractionLayer.java:48) at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getOshiBasedLicenseCpuInfo(MachineLicensePropertiesProvider.java:75) at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.getLicenseCpuInfo(MachineLicensePropertiesProvider.java:44) at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:106) at com.appdynamics.agent.sim.properties.MachineLicensePropertiesProvider.get(MachineLicensePropertiesProvider.java:25) at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:86) at com.google.inject.internal.BoundProviderFactory.provision(BoundProviderFactory.java:72) at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:60) at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:59) at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) at com.google.inject.internal.RealMultibinder$RealMultibinderProvider.doProvision(RealMultibinder.java:253) at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:307) at com.google.inject.internal.RealMultibinder$ExtensionRealMultibinderProvider.doProvision(RealMultibinder.java:289) at com.google.inject.internal.InternalProviderInstanceBindingImpl$Factory.get(InternalProviderInstanceBindingImpl.java:113) at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) at 
com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) at com.google.inject.internal.BoundProviderFactory.get(BoundProviderFactory.java:58) at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:40) at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:60) at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:113) at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:91) at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:300) at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:169) at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45) at com.google.inject.internal.InternalInjectorCreator.loadEagerSingletons(InternalInjectorCreator.java:213) at com.google.inject.internal.InternalInjectorCreator.injectDynamically(InternalInjectorCreator.java:186) at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:113) at com.google.inject.Guice.createInjector(Guice.java:87) at com.google.inject.Guice.createInjector(Guice.java:69) at com.appdynamics.voltron.FrameworkBootstrap.createInjector(FrameworkBootstrap.java:107) at com.appdynamics.voltron.FrameworkBootstrap.start(FrameworkBootstrap.java:162) at com.appdynamics.voltron.FrameworkBootstrap.startAndRun(FrameworkBootstrap.java:120) at com.appdynamics.voltron.FrameworkApplication.start(FrameworkApplication.java:31) at com.appdynamics.agent.sim.Main.startSafe(Main.java:64) at com.appdynamics.agent.sim.bootstrap.Bootstrap.main(Bootstrap.java:48)
Hi, can anyone please suggest where I can submit a bug report for dashboard visualisations? Thanks
The Splunk Distribution of the OpenTelemetry (OTel) Collector is a product that provides a way to ingest metrics, traces, and logs to the Splunk platform using HEC. If you are a DevOps Engineer or SRE, you may already be familiar with the OTel Collector’s flexibility, but for those less experienced, this blog post will serve as an introduction to routing logs.

The idea of OpenTelemetry as a whole is to unify the data so it's suitable for every input and output, and to put processors in between that make it possible to perform operations on the data (such as transforming and filtering). You may already see that one of the biggest advantages of the OTel Collector is its flexibility - but sometimes figuring out how to use it in practice is a challenge.

One of the most common cases in log processing is setting up the event’s index. If you’re familiar with the Splunk HEC exporter, you might recall this configuration snippet: This indicates that every event used by this exporter will be sent to the logs index.

As you may see, the logs index is specific to an exporter, so the intuition is to create as many splunk_hec exporters as you need, and additionally create multiple filelog receivers as well, so that we can filter which files go to which index. Using your imagination, visualize a scenario where all the logs go to the ordinary logs index, but some are only visible to people with higher permission levels. These logs are gathered by the filelog/security receiver and the pipeline structure would look like this one:

But is it really the best solution? Let’s consider a few questions here:
- The splunk_hec exporter config seems to be the same; the only difference is the index field. Does it make sense to copy the configuration over and over?
- The filelog receiver gives a way of configuring a place to gather logs. What about other filtering options, like ones based on severity or a specific phrase in the log’s body?
- Every time we create a new pipeline, a new process comes to life - doesn’t this consume too many resources?

The solution: Dynamic index routing
Today I’ll show you how to create a pipeline with dynamic index routing, meaning it is based on incoming logs and not statically set, with a transform processor and Splunk OpenTelemetry Collector for Kubernetes (SOCK). The idea is based on this attribute from the Splunk HEC exporter documentation: This means that we can specify com.splunk.index as a resource attribute for a log, and it will overwrite the default index. Let’s go through a few examples of how we can do it in SOCK.

Viewing the pipelines configured in SOCK
Before we cover how to overwrite your pipelines, let’s start with how you can view the pipeline. The final config is the result of your configuration in values.yaml, as well as the default configuration that is delivered by SOCK. The config’s yaml file is in the pod’s configmap. As logs are generated by the agent, you can look at the agent’s config; the command is:

kubectl describe configmap my-splunk-otel-collector-otel-agent

where my-splunk-otel-collector-otel-agent is the configmap’s name - it might differ in your case, especially if you chose a different name for the installation versus the one from the Getting Started docs. You can take a look at the configmaps you have with the command:

kubectl get configmaps

An output example for a default namespace would be:

After successfully running the describe command, scroll all the way down until you see the pipelines section. For logs, it looks more or less like this:

Now you know what components your logs pipeline is made of!

Easy scenarios
Now let’s get our hands dirty!
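Before diving into the scenarios, here is a rough sketch of the two building blocks the rest of this post relies on: a splunk_hec exporter with a static index, and a SOCK values.yaml override that appends a transform processor to the agent's logs pipeline. Treat it as an illustration rather than a drop-in config: the token and endpoint values are placeholders, the processor name transform/set_index is just an example, and the list of pipeline processors shown below is not the real default list - copy that from your own agent configmap (see the describe command above).

# Stand-alone collector config: a splunk_hec exporter with a fixed index.
exporters:
  splunk_hec:
    token: "00000000-0000-0000-0000-000000000000"                    # placeholder HEC token
    endpoint: "https://splunk.example.com:8088/services/collector"   # placeholder endpoint
    index: "logs"                                                    # every event from this exporter lands here

# SOCK values.yaml override: define a transform processor and append it to the logs pipeline.
agent:
  config:
    processors:
      transform/set_index:                 # example name for the new processor
        log_statements:
          - context: log
            statements:
              # route stderr events to their own index (the first scenario below)
              - 'set(resource.attributes["com.splunk.index"], "error_index") where attributes["log.iostream"] == "stderr"'
    service:
      pipelines:
        logs:
          processors:
            # paste the default processor list from your agent configmap here,
            # then append the new transform processor at the end, e.g.:
            - memory_limiter
            - batch
            - transform/set_index

After a helm upgrade with a values.yaml along these lines, events whose log.iostream attribute is stderr should end up in error_index, which is exactly the first scenario described next.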
Let’s see the easy examples of index routing based on real scenarios.

Scenarios based on the log attribute
The scenario: Let’s say we want to pass all the events with a log.iostream attribute of stderr to error_index. This would capture events emitted to the error stream and send them to their own index.
The solution: This requires doing two things:
- Overwriting the default agent config with a new transform processor and adding it to the logs pipeline.
- Setting up the new processor’s statement to specify the correct target index.
Every transform processor consists of a set of statements. We need to create one that matches our use case, by defining what we need and writing it specifically for OTel. The logical statement here would be: set the com.splunk.index value to error_index for EVERY log from the pipeline whose log.iostream attribute is set to stderr. Then the statement in the transform processor’s syntax described here looks like this (as in the sketch above):
Next, we need to append the processor to the logs pipeline. To do that, we need to copy and paste the current processors under the agent.config section, then insert our processor at the end. The whole config will be:
After applying the config, the stderr events appear in the error_index:

Scenarios based on specific log text
The scenario: Passing an event to a different index when something specific is written in the body of the log, for example, every log that contains [WARNING].
The solution: All the keywords used here come from the transform processor documentation. We can use the transform processor, this time using the following logic:
Here are some sources that can be used to learn more about the OpenTelemetry Transformation Language and its grammar. Then we repeat the steps described in the previous solution section. The final config is:
And the result in Splunk Enterprise looks like this:

How do I know what attributes I can use?
At this point, you might think “Oh right, that looks easy, but how would I know what attributes to use?” The logs in the transform processor can use all the elements described here, but the most useful ones are:
- body - referring to the log body
- attributes - referring to the attributes specific to a single log
- resource.attributes - referring to the attributes specific to multiple logs from the same source
You can see them in the Splunk Enterprise event preview:
However, there’s no indication as to which dimensions are attributes and which are resource.attributes. You can see how it looks by running your OTel agent with this config:
This will produce all the information about the log structure and which attributes are really the resource.attributes:
From this snippet, you can see that only logtag and log.iostream are attributes; all the rest are part of the resource.attributes. The transform processor has many options aside from the ones described above; check them out here.

Complex Scenarios
Let’s go even deeper and operate on two variables instead of one.

Scenarios based on setting index-based annotations
You may want to annotate the whole namespace with one splunk.com/index, but want specific pods from this namespace to redirect somewhere else. You can do this by using a transform processor to provide additional annotations to the pod of your choice. Let’s say the annotation is second_index. This is how it looks in kubectl describe of the pod:

Transforming annotation into resource.attribute
First, redirect logs from the pods according to the second_index annotation to convert the annotation to a resource.attribute.
This can be done with the extraAttributes.fromAnnotations config (a rough sketch appears at the end of this post):
tag_name is the identifier of an element in resource.attributes; it is optional. If you don’t configure it, your attribute will look like this: k8s.pod.annotations.<key>. With tag_name you can decide the name of your attribute; in this example it is the same as the key:

Make OTel pass logs to the index
Now that we have the resource.attribute second_index set up, we can set the index destination for logs. We will use the transform processor for this purpose:
We will replace the com.splunk.index resource attribute with the second_index attribute, but only when the second_index attribute is present - so it doesn’t affect logs from other pods.

Delete unnecessary attributes
Once the attribute has been moved to the log's index, we can get rid of it. This requires adding another statement to the transform processor:

Scenarios based on labels setting the index
This will work exactly the same as the annotation example from the previous section; the only difference is in how we’re transforming the label into a resource.attribute. We now have the second_index label on a pod:
We can make it visible to the OTel collector with this config snippet:

Conclusion
In this article, I showed you how to route logs to different indexes. It is a commonly used feature and it can be used in many scenarios, as we can see in the examples. We will expand on other SOCK features in later articles, so stay tuned!
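For readers who want a concrete starting point, here is a rough, untested sketch of the annotation-based routing pieces described above: the extraAttributes.fromAnnotations entry plus the two transform statements. The annotation key second_index comes from the example; the processor name, the exact placement of extraAttributes in values.yaml (which can vary between chart versions), and the pipeline wiring are assumptions, so compare against your own agent configmap before relying on it.

# values.yaml sketch (not verified against a specific chart version)

# 1. Surface the pod annotation as a resource attribute named "second_index".
extraAttributes:
  fromAnnotations:
    - key: second_index        # the pod annotation from the example
      tag_name: second_index   # optional; without it the attribute becomes k8s.pod.annotations.second_index
  # For the label-based variant, the equivalent entry would go under fromLabels:
  # fromLabels:
  #   - key: second_index
  #     tag_name: second_index

# 2. Route logs to that index and drop the helper attribute afterwards.
agent:
  config:
    processors:
      transform/index_from_annotation:     # example processor name
        log_statements:
          - context: log
            statements:
              # only touch logs coming from pods that actually carry the annotation
              - 'set(resource.attributes["com.splunk.index"], resource.attributes["second_index"]) where resource.attributes["second_index"] != nil'
              - 'delete_key(resource.attributes, "second_index") where resource.attributes["second_index"] != nil'
    service:
      pipelines:
        logs:
          processors:
            # append transform/index_from_annotation to your default processor list,
            # as in the earlier sketch
            - memory_limiter
            - batch
            - transform/index_from_annotation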