All Topics

I'm trying to set up the Splunk OTel Collector using the image quay.io/signalfx/splunk-otel-collector:latest in Docker Desktop and in an Azure Container App, reading logs from a file with the filelog receiver and sending them with the splunk_hec exporter. However, I am receiving the following error:

2024-03-07 12:56:27 2024-03-07T17:56:27.001Z info exporterhelper/retry_sender.go:118 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec", "error": "Post \"https://splunkcnqa-hf-east.com.cn:8088/services/collector/event\": dial tcp 42.159.148.223:8088: i/o timeout (Client.Timeout exceeded while awaiting headers)", "interval": "2.984810083s"}

I am using the config below:

extensions:
  memory_ballast:
    size_mib: 500

receivers:
  filelog:
    include:
      - /var/log/*.log
    encoding: utf-8
    fingerprint_size: 1kb
    force_flush_period: "0"
    include_file_name: false
    include_file_path: true
    max_concurrent_files: 100
    max_log_size: 1MiB
    operators:
      - id: parser-docker
        timestamp:
          layout: '%Y-%m-%dT%H:%M:%S.%LZ'
          parse_from: attributes.time
        type: json_parser
    poll_interval: 200ms
    start_at: beginning

processors:
  batch:

exporters:
  splunk_hec:
    token: "XXXXXX"
    endpoint: "https://splunkcnqa-hf-east.com.cn:8088/services/collector/event"
    source: "otel"
    sourcetype: "otel"
    index: "index_preprod"
    profiling_data_enabled: true
    tls:
      insecure_skip_verify: true

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [batch]
      exporters: [splunk_hec]
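Editor's note: the "dial tcp ... i/o timeout" in this error points at network reachability from the container to the HEC endpoint, not at the collector config itself. A quick, generic way to check whether the host can even open a TCP connection to port 8088 is sketched below (the hostname is simply the one quoted in the error message; this is not a Splunk-specific tool):

```python
import socket


def can_connect(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within `timeout` seconds."""
    try:
        # create_connection resolves DNS and performs the TCP handshake.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # DNS failure, connection refused, and timeout all land here.
        return False


# Example usage (hostname/port taken from the error message above):
# can_connect("splunkcnqa-hf-east.com.cn", 8088)
```

If this returns False from inside the container's network, the fix is a firewall/egress rule or proxy setting rather than a collector change.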
Hi, I am trying to explore APM in Splunk Observability, but I am facing a challenge setting up AlwaysOn Profiling. I am wondering whether this feature is unavailable in the trial version. Can someone confirm?
In this article we will get your ECS Java application monitored by the AppDynamics Java Agent. For starters I am assuming you have an application ready; if not, a sample Tomcat application image is used in the task definition file below. The task defines two containers: the application itself, and an AppDynamics Java agent container from which the application mounts the agent.

{
  "family": "aws-opensource-otel",
  "containerDefinitions": [
    {
      "name": "aws-otel-emitter",
      "image": "docker.io/abhimanyubajaj98/tomcat-app-buildx:latest",
      "cpu": 0,
      "portMappings": [
        {
          "name": "aws-otel-emitter",
          "containerPort": 8080,
          "hostPort": 8080,
          "protocol": "tcp",
          "appProtocol": "http"
        }
      ],
      "essential": true,
      "environment": [
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_NAME", "value": "XXXXX" },
        { "name": "APPDYNAMICS_AGENT_TIER_NAME", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_PORT", "value": "443" },
        { "name": "JAVA_TOOL_OPTIONS", "value": "-javaagent:/opt/appdynamics/javaagent.jar" },
        { "name": "APPDYNAMICS_AGENT_APPLICATION_NAME", "value": "abhi-ecs-fargate" },
        { "name": "APPDYNAMICS_CONTROLLER_HOST_NAME", "value": "XXXXX.saas.appdynamics.com" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME_PREFIX", "value": "abhi-tomcat-ecs" },
        { "name": "APPDYNAMICS_CONTROLLER_SSL_ENABLED", "value": "true" },
        { "name": "APPDYNAMICS_AGENT_ACCOUNT_ACCESS_KEY", "value": "XXXXX" },
        { "name": "APPDYNAMICS_JAVA_AGENT_REUSE_NODE_NAME", "value": "true" }
      ],
      "mountPoints": [],
      "volumesFrom": [
        { "sourceContainer": "appdynamics-java-agent" }
      ],
      "dependsOn": [
        { "containerName": "appdynamics-java-agent", "condition": "START" }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/ecs-aws-otel-java-tomcat-app",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": [ "CMD-SHELL", "curl -f http://localhost:8080/sample || exit 1" ],
        "interval": 300,
        "timeout": 60,
        "retries": 10,
        "startPeriod": 300
      }
    },
    {
      "name": "appdynamics-java-agent",
      "image": "docker.io/abhimanyubajaj98/java-agent-ecs",
      "cpu": 0,
      "portMappings": [],
      "essential": false,
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/java-agent-ecs",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "ecs"
        }
      }
    }
  ],
  "taskRoleArn": "arn:aws:iam::778192218178:role/ADOTRole",
  "executionRoleArn": "arn:aws:iam::778192218178:role/ADOTTaskRole",
  "networkMode": "bridge",
  "requiresCompatibilities": [ "EC2" ],
  "cpu": "256",
  "memory": "512"
}

You will need to edit all the sections marked "XXXXX". Your ECS task should also have the appropriate permissions. In my case I created a task role, ADOTRole, and a task execution role, ADOTTaskRole. The same policy is attached to both roles:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogStreams",
        "logs:DescribeLogGroups",
        "logs:PutRetentionPolicy",
        "xray:PutTraceSegments",
        "xray:PutTelemetryRecords",
        "xray:GetSamplingRules",
        "xray:GetSamplingTargets",
        "xray:GetSamplingStatisticSummaries",
        "cloudwatch:PutMetricData",
        "ec2:DescribeVolumes",
        "ec2:DescribeTags",
        "ssm:GetParameters"
      ],
      "Resource": "*"
    }
  ]
}

Going back to the template: you can build your own image as well.
The Dockerfile for the image, along with the task definition file, can be found here: https://github.com/Abhimanyu9988/ecs-java-agent To learn more about the AppDynamics Java Agent, please refer to this document: https://docs.appdynamics.com/appd/22.x/22.12/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent For any queries, you can reach out to me on LinkedIn or simply post your question on GitHub.
Hello, how do I assign a search_now value from info_max_time in _raw? I am trying to push "past" data into a summary index using the collect command, and I want to use search_now as a baseline time. I appreciate your help, thank you. Here's my attempt using some code from @bowesmana, but it gave me a duplicate search_now:

index=original_index
| addinfo
| eval _raw=printf("search_now=%d", info_max_time)
| foreach "*"
    [| eval _raw = _raw.case(isnull('<<FIELD>>'), "",
        mvcount('<<FIELD>>')>1, ", <<FIELD>>=\"".mvjoin('<<FIELD>>',"###")."\"",
        true(), ", <<FIELD>>=\"".'<<FIELD>>'."\"")
    | fields - "<<FIELD>>"]
| collect index=summary testmode=false file=summary_test_1.stash_new name=summary_test_1 marker="report=\"summary_test_1\""
Is the geostats command supported by this visualization type for displaying city names in cluster bubbles? It seems not. Here is the command I am using:

| (some search that produces destination IPs and a total count per IP)
| iplocation prefix=dest_iploc_ dest_ip
| eval dest_Region_Country=dest_iploc_Region.", ".dest_iploc_Country
| geostats globallimit=0 locallimit=15 binspanlat=21.5 binspanlong=21.5 longfield=dest_iploc_lon latfield=dest_iploc_lat sum(Total) BY dest_Region_Country

In the search result visualization (which uses the old dashboard cluster map, not the new Dashboard Studio one), this returns a proper cluster map: there are bubbles over the grid areas with many total connections, and on mouseover I can see the individual regions/cities contributing to each total. However, when I put this query into my Dashboard Studio visualization using Map > Bubble, it either breaks when there are too many city values (and there are as many cities as there are), or, when I change the grouping to countries, it breaks in a different way by alphabetizing all the countries under each bubble. (I am obviously mousing over a bubble in Bogota, Colombia here, not Busan, South Korea or anywhere in Germany.) Not to mention the severe lag this dashboard element causes. What should I do for my use case, switch off of Dashboard Studio? That aside, has anyone figured out a way to make interconnected bubbles/points showing sources and destinations like this (this is not intended as an ad, just an example)?
Hi, we have a log that records how many times each message type was sent by a user in every session. The log contains the user's ID (data.entity-id), the message ID (data.message-code), the message name (data.message-name), and the total number of times it was sent during the session (data.total-received). I'm trying to create a table where the first column shows the user's ID (data.entity-id) and each following column shows the sum of data.total-received for one message type. Ideally I would also be able to keep, in a dashboard, a list of the data.message-code values I want used as columns. Example data:

data: { entity-id: 1, message-code: 445, message-name: handshake,    total-received: 10 }
data: { entity-id: 1, message-code: 269, message-name: switchPlayer, total-received: 20 }
data: { entity-id: 1, message-code: 269, message-name: switchPlayer, total-received: 22 }
data: { entity-id: 2, message-code: 445, message-name: handshake,    total-received: 12 }
data: { entity-id: 2, message-code: 269, message-name: switchPlayer, total-received: 25 }
data: { entity-id: 2, message-code: 269, message-name: switchPlayer, total-received: 30 }

Ideally the table would look like this:

Entity-id | handshake | switchPlayer
1         | 10        | 42
2         | 12        | 55

Is this possible? And what would be the best way to store the message-code list in a dashboard? Thanks!
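Editor's note: a sketch of one common way to get that pivot in SPL. It is untested against this data; the field names come from the sample above, `your_index` is a placeholder, and the literal IN list could instead be fed from a dashboard multiselect token.

```spl
index=your_index "data.message-code" IN (445, 269)
| rename "data.entity-id" as entity_id, "data.message-name" as message_name, "data.total-received" as total_received
| chart limit=0 sum(total_received) over entity_id by message_name
```

chart sums total_received per entity, with one column per message_name, which is the requested handshake/switchPlayer layout.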
Hi, I am attempting to integrate Microsoft Azure with Splunk Enterprise to retrieve the status of App Services. Could someone please provide a step-by-step guide for the integration? I have attached a snapshot for reference.
Dear Splunk Community, I am seeking your thoughts and suggestions on an error I am facing with TrackMe:

ERROR search_command.py _report_unexpected_error 1013 Exception at "/opt/splunk/etc/apps/trackme/bin/splunkremotesearch.py", line 501 : This TrackMe deployment is running in Free limited edition and you have reached the maximum number of 1 remote deployment, only the first remote account (local) can be used

Background and objective: set up TrackMe monitoring (a virtual tenant with dsm, dhm, and mhm) for our remote Splunk deployment (Splunk Cloud). The TrackMe app is installed on our on-prem Splunk instance, and I added the Splunk target's URL and port under Configuration --> Remote deployment accounts (only one account). There are no issues with connectivity; the test is successful (screenshot below). We are using the free license, which per the TrackMe documentation allows one remote deployment. Can we use the free license in our case, and how do we get rid of the 'local' deployment? Please advise.
Cisco AppDynamics for SAP in Healthcare: an analysis of challenges and solutions

Video length: 2 min 50 seconds

CONTENTS | Introduction | Video | Resources | About the presenter
Cisco AppDynamics for SAP in Healthcare: an analysis of challenges and solutions Video Length: 2 min 50 seconds    CONTENTS | Introduction | Video |Resources | About the presenter   In this video, Matt Schuetze delves into the role of Cisco AppDynamics for SAP in addressing the challenges in healthcare, biotech, and life sciences, including security, privacy, cost management, and the critical nature of patient-facing applications—as well as managing diverse systems across individual medical practices, electronic medical records (EMR), and SAP ERP systems.     With App Dynamics, transactions can be tagged, traced, and followed, ensuring sustained connectivity and security in the SAP environment.    Additional Resources  AppDynamics Monitoring for SAP® Solutions: Build resiliency into your SAP landscape  Explore SAP Monitoring with AppDynamics in the documentation  About presenter Matt Schuetze Matt Schuetze Field Architect Matt Schuetze is a Field Architect at Cisco on the AppDynamics product. He confers with customers and engineers to assess application tooling choices and helps clients resolve application performance problems. Matt runs the Detroit Java User Group and the AppDynamics Great Lakes User Group. His career includes 10+ years of speaking periodically at user groups and industry trade shows. He has a Master’s degree in Nuclear Engineering from MIT and a Bachelor’s degree in Engineering Physics from the University of Michigan.
Getting this error via PowerShell during the Splunk Universal Forwarder installation:

The term 'C:\Program Files\SplunkUniversalForwarder\bin\splunk' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. working.ps1:17 char:3

Here is how I defined my variables:

$SplunkInstallationDir = "C:\Program Files\SplunkUniversalForwarder"
& "$SplunkInstallationDir\bin\splunk" start --accept-license --answer-yes --no-prompt

It works only when I run the command manually. Kindly assist.
Hi team, what minimum bandwidth is necessary between the indexers and the rest of the platform elements (heavy forwarders, search heads, cluster master, license server, deployment server, etc.) for the different communications?
Hello, we had an index that stopped receiving logs. Since we do not manage the host sending the logs, I wanted to gather more information before reaching out. The one interesting error that appeared right around the time the logs stopped is the following; I have not been able to find anything useful about this type of error, and it is being thrown by the indexer:

Unable to read from raw size file="/mnt/splunkdb/<indexname>/db/hot_v1_57/.rawSize": Looks like the file is invalid.

Thanks for any assistance.
Heavy Forwarder issues: on version 9.0.2 and can't connect to the indexer after an upgrade from 8.2.0. Anyone know of a more current discussion than this 2015 post? https://community.splunk.com/t5/Getting-Data-In/Why-am-I-getting-error-quot-Unexpected-character-while-looking/m-p/250699

ERROR httpclient request [6244 indexerpipe] - caught exception while parsing http reply: unexpected character while looking for value: '<'
ERROR S2SOverHttpOutputProcessor [6244 indexerpipe] - HTTP 502 Bad Gateway
Hey, can someone help me get profiling metrics (CPU and RAM used by the app) to show up in the Splunk Observability portal? The APM metrics do show up: I used a simple Java app that curls google.com every second, and it appears under APM metrics. I have done all the configuration to enable profiling per the Splunk docs, but nothing shows up in the profiling section. Is it because I am using the free trial? I am running this on a plain EC2 instance, instrumenting the app with the java -jar command; I have exported the necessary variables and added the required Java options while instrumenting the app with splunk-otel-agent-collector.jar, but nothing shows up. Please help.
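Editor's note: for comparison, the settings that, as I understand the Splunk OTel Java agent docs, toggle AlwaysOn Profiling look roughly like the fragment below. Treat every flag and variable name here as an assumption to verify against the current documentation before relying on it.

```shell
# Assumed variable names -- check the current Splunk OTel Java docs.
export SPLUNK_PROFILER_ENABLED=true          # CPU (call stack) profiling
export SPLUNK_PROFILER_MEMORY_ENABLED=true   # memory profiling
export SPLUNK_REALM=us1                      # your Observability Cloud realm (placeholder)
export SPLUNK_ACCESS_TOKEN=<token>           # placeholder
export OTEL_SERVICE_NAME=my-curl-app         # hypothetical service name

java -javaagent:./splunk-otel-javaagent.jar -jar app.jar
```

Profiling data is sent through the collector's logs pipeline, so the collector in front of the agent also has to accept and forward it.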
Hello, can someone help me with a search to find out whether any changes have been made to Splunk reports (e.g., the Palo Alto report) in the last 30 days? Thanks
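Editor's note: one hedged starting point. Saved searches/reports expose an `updated` timestamp over REST, so something like the sketch below may get close; the strptime format is an assumption, so check what `updated` actually looks like on your instance. Note this only shows last-modified times, not who changed what; for that, the _audit index is the usual place to dig.

```spl
| rest /servicesNS/-/-/saved/searches
| eval updated_epoch=strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch >= relative_time(now(), "-30d")
| table title eai:acl.app updated
```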
Hi Experts, I am encountering an issue with using filter tokens in a specific row on my dashboard. I have two filters named ABC and DEF, represented by the tokens $abc$ and $def$. I want to pass these tokens to only one specific row and have the other rows ignore them. For the row where I need the tokens, I've used the following syntax: <row depends="$abc$ $def$"></row>. For the rows where I don't want the tokens, I've used: <row rejects="$abc$ $def$"></row>. However, when I use the rejects condition, those rows are hidden; I want them to remain visible. Could someone please advise on how to resolve this? I would appreciate any help. Thank you in advance!
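Editor's note: `rejects` hides a row whenever the listed tokens ARE set, which matches the behavior described. A minimal sketch of the pattern I'd expect to work in SimpleXML (`depends` takes a comma-separated token list; rows that should always be visible simply carry neither attribute):

```xml
<!-- Shown only once BOTH tokens are set -->
<row depends="$abc$,$def$">
  <panel>...</panel>
</row>

<!-- Always visible: no depends/rejects, and its searches do not reference $abc$/$def$ -->
<row>
  <panel>...</panel>
</row>
```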
I have a relatively simple query that counts HTTP 404 events in IIS logs. I wanted to sort hosts by their highest individual count, but the "highcount" field is always blank. (I probably also need to sort by host, but that's irrelevant to the eventstats issue.)

index=iis status=404 uri="*/*.*"
| stats count by host uri
| eventstats max(count) by host as highcount
| sort -highcount -count
| table highcount count host uri
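Editor's note: in SPL the `as` rename clause must come before the `by` clause, so `max(count) by host as highcount` never creates a field named highcount, which is why it comes back blank. The likely fix is the same search with one clause reordered:

```spl
index=iis status=404 uri="*/*.*"
| stats count by host uri
| eventstats max(count) as highcount by host
| sort -highcount -count
| table highcount count host uri
```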
I'm wondering whether a single input box can act as both a dropdown and a free-text input, feeding a single token. I am not sure this will work; I need guidance. Thanks in advance. Sanjai S
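Editor's note: a single SimpleXML input can't be both, but one pattern that might get you there is two inputs writing to one shared token via <change> handlers (a sketch; the token and label names here are made up):

```xml
<input type="dropdown" token="dd_raw" searchWhenChanged="true">
  <label>Pick a value</label>
  <choice value="alpha">alpha</choice>
  <choice value="beta">beta</choice>
  <change>
    <set token="combined">$value$</set>
  </change>
</input>
<input type="text" token="txt_raw" searchWhenChanged="true">
  <label>...or type one</label>
  <change>
    <set token="combined">$value$</set>
  </change>
</input>
```

Whichever input changed most recently wins, and panels reference only $combined$.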
Dear Splunkers, I would like your feedback on the following issue with the ServiceNow add-on app. The problem is that I'm not able to display the settings page for the add-on, where I need to select one of the successfully configured ServiceNow accounts. What it should look like is the following (taken from a different project): this is where I can choose my preferred account and configure its details. Can you suggest what could be the reason these settings are not visible, even with my admin user role account? Thanks in advance. BR
Hello, I have started a Cloud Trial to create a test environment for a connector that I wanted to test for a customer. This connector requires additional ports to be opened to allow data ingestion from Azure Event Hub, which should be configured using the ACS API. I've enabled token authentication from the portal and generated a new token. I then set up a new Postman request to test API access:

https://admin.splunk.com/{{stack}}/adminconfig/v2/status

where {{stack}} is my instance name defined at the collection level, with the bearer token configured in the Authorization tab. However, when executing the request, it loops for approximately 30 seconds to a minute before failing with the following error message:

{
  "code": "500-internal-server-error",
  "message": "An error occurred while processing this request. Trying this request again may succeed if the bug is transient, otherwise please report this issue this response. (requestID=426a14b3-97e3-968a-a924-f3abc4300795). Please refer to https://docs.splunk.com/Documentation/SplunkCloud/latest/Config/ACSerrormessages for general troubleshooting tips."
}

Despite my efforts, this error has persisted for over 24 hours, and I have no idea what the root cause might be. Could anyone advise on how to address this issue and successfully configure the necessary settings? Any assistance would be greatly appreciated. Thank you.
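Editor's note: as a sanity check outside Postman, the same request can be issued with curl. This is a command sketch mirroring the setup described above; STACK and TOKEN are placeholders for the stack name and the generated bearer token.

```shell
STACK="your-stack-name"    # placeholder
TOKEN="your-bearer-token"  # placeholder

curl -sS "https://admin.splunk.com/${STACK}/adminconfig/v2/status" \
  -H "Authorization: Bearer ${TOKEN}"
```

If curl fails the same way, the problem is on the stack/token side rather than in the Postman configuration.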