All Posts

dyld[8605]: Library not loaded: @executable_path/../lib/libbz2.1.dylib
Referenced from: <155E4B06-EBFB-3512-8A38-AF5B870FD832> /opt/splunk/bin/splunkd
Reason: tried: '/opt/splunk/lib/libbz2.1.dylib' (code signature in <8E64DF20-704B-3A23-9512-41A3BCD72DEA> '/opt/splunk/lib/libbz2.1.0.3.dylib' not valid for use in process: library load disallowed by system policy), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
ERROR: pid 8605 terminated with signal 6
Hi, are you using the Splunk distribution of the OTel Collector? You'll need it to use the smartagent receivers, I think. Here is a working example; please note all the indentation, since YAML is picky. If you want to share your agent_config.yaml, that may help.
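For reference, a rough sketch of the shape a smartagent receiver takes in agent_config.yaml (the postgresql monitor, host, and port below are placeholders for illustration, not your actual config):

receivers:
  smartagent/postgresql:
    # placeholder monitor; swap in the monitor type you actually need
    type: postgresql
    host: localhost
    port: 5432

service:
  pipelines:
    metrics:
      # keep your existing processors/exporters and just add the receiver here
      receivers: [smartagent/postgresql]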
Hi @jessieb_83, let me understand: do you want to use a removable hard drive as $SPLUNK_DB? I'm not sure that's possible. Open a case with Splunk Support; they are the only ones who can answer that for you. Ciao. Giuseppe
Hi @Millowster , this (and many others) is the reason why I don't use Dashboard Studio: not all the functions of the Classic Dashboard have been implemented in it yet. Ciao. Giuseppe
Here is the sample log:

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

We have around 10 services. Using the query below I am getting 8 services; the other 2 are not displayed in the table, but we can see them in the events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE columns are missing):

_time      AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE
1/2/2022   1          100        1          100        1          1         66        1
2/2/2022   5          0          5          0          3          3         0         3
3/2/2022   10         0          10         0          8          7         0         8
4/2/2022   100        1          100        1          97         80        1         80
5/2/2022   0          5          0          5          350        0         4         0

Expected output:

_time      AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE  DCC:DONE  PIP:DONE
1/2/2022   1          100        1          100        1          1         66        1         99        1
2/2/2022   5          0          5          0          3          3         0         3         0         2
3/2/2022   10         0          10         0          8          7         0         8         0         3
4/2/2022   100        1          100        1          97         80        1         80        1         90
5/2/2022   0          5          0          5          350        0         4         0         5         200
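One thing that can cause this (an assumption, not confirmed from the events shown): timechart only draws 10 series by default and folds the rest into OTHER or drops them. A sketch of the tail of the pipeline with the limit lifted:

| eval split=services.":".actions
| timechart span=1d limit=0 useother=f count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done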
@bowesmana , Thank you so much, it worked
Try this at the end of your query

| transpose 0 header_field=Tipo_Traffic
| eval diff='APP DELIV REPORT'-MT
| where diff!=0
This looks very useful. Is there a recommended way to set maxSendQSize? Do I need to vary it depending on the throughput of the HF per pipeline? I'm assuming maxSendQSize would be an in-memory buffer/queue per pipeline, in addition to the overall maxQueueSize? Finally, I'm assuming this would be useful when there is no load balancer in front of the indexers?
What do you want to extract? See this example, which extracts parts of the text:

| makeresults
| fields - _time
| eval msgs=split("Initial message received with below details,Letter published correctley to ATM subject,Letter published correctley to DMM subject,Letter rejected due to: DOUBLE_KEY,Letter rejected due to: UNVALID_LOG,Letter rejected due to: UNVALID_DATA_APP",",")
| mvexpand msgs
| rex field=msgs "(Initial message |Letter published correctley to |Letter rejected due to: )(?<reason>.*)"

You'll need to decide what you want and what you intend to use it for.
Hello, I have these two results. I need to compare them and flag when they are different. Could you help me? Regards.
Hi @bowesmana , Thank you for sharing the query, it worked. I have another question: how do we write a rex to extract these strings?

index=app-index source=application.logs ("Initial message received with below details" OR "Letter published correctley to ATM subject" OR "Letter published correctley to DMM subject" OR "Letter rejected due to: DOUBLE_KEY" OR "Letter rejected due to: UNVALID_LOG" OR "Letter rejected due to: UNVALID_DATA_APP")
Look at the raw text rather than the JSON view to see what Splunk may be using for timestamp detection. The JSON view is sorted, and Splunk will only look a certain distance into the event to detect a timestamp (128 characters by default). If it cannot find a timestamp, it will use the current time. https://docs.splunk.com/Documentation/Splunk/9.2.1/Admin/Propsconf#Timestamp_extraction_configuration
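If you need to steer timestamp recognition explicitly, a sketch of the relevant props.conf settings (the sourcetype stanza name, regex, and time format below are assumptions; adjust them to your data):

[my_json_sourcetype]
# Regex that positions Splunk just before the timestamp value
TIME_PREFIX = "date":\s*"
# strptime-style format of the timestamp itself
TIME_FORMAT = %d/%m/%Y %H:%M:%S,%3N
# How many characters past the TIME_PREFIX match to scan
MAX_TIMESTAMP_LOOKAHEAD = 32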
Try

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| rex "Initial message (?<type>\w+)"
| chart count over RampdataSet by type
| addtotals

This extracts a 'type' field, which will be received, Error or Successfull, and then the chart command will do what you want. It will give you field names as above, but you can rename those to whatever you want, as in the sketch below.
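For example, a sketch of that rename step using the column names from the expected output (the new names are just the ones shown there):

| rename received AS IntialMessage, Successfull AS SuccessfullMessage, Error AS ErrorMessage

addtotals already names its column Total, so that one shouldn't need renaming.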
You can use the populating search of the drop down to add dynamic options and do something like this to categorise the host type:

index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")

Change the match statement regex as needed, along with the category names you want to show. category will be the <fieldForLabel>, and then you need to make the <fieldForValue> contain the value element you want for the token, along the lines of the sketch below.
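A rough Simple XML sketch of the input (the token name and the choice of host as the value field are assumptions; pick whatever your dashboard needs):

<input type="dropdown" token="selected_host">
  <label>Host type</label>
  <fieldForLabel>category</fieldForLabel>
  <fieldForValue>host</fieldForValue>
  <search>
    <query>index=aaa source="/var/log/test1.log"
| stats count by host
| eval category=case(match(host, "t"), "Test", match(host, "q"), "QA", match(host, "p"), "Prod", true(), "Unknown")</query>
  </search>
</input>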
No difference - same speed - what's your macro doing?
There may be a few ways to do that. Here's one.

| eval Status = case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6",
                     isnotnull(IPv4), "IPv4",
                     isnotnull(IPv6), "IPv6",
                     1==1, "")
Hi, I have the scenario below. My brain is very slow at this time of the day! I need an eval to create a Status field, as in the table below, that flags whether a host is running on IPv4, IPv6, or both IPv4 + IPv6.

HOSTNAME  IPv4     IPv6         Status
SampleA   0.0.0.1               IPv4
SampleB            0.0.0.2      IPv6
SampleC   0.0.0.3  A:B:C:D:E:F  IPv4 + IPv6

Thanks in advance!!!
The integration of OpenTelemetry Java agents into your application’s Docker containers represents a significant leap towards enhanced observability and monitoring capabilities. This guide details how to embed the OpenTelemetry Java agent into the Dockerfile for a Java application, deploy it on Kubernetes, and monitor its traces using Cisco AppDynamics, providing a robust solution for real-time application performance monitoring.

Pre-requisites

Ensure you have the following set up and ready:
A Kubernetes cluster
Docker and Kubernetes command-line tools, docker and kubectl, installed and configured
Access to an AppDynamics account for monitoring

1. Preparing Your Dockerfile for Observability

The Dockerfile outlined below integrates the OpenTelemetry Java agent into a Tomcat server to enable automated instrumentation of your Java application.

FROM tomcat:latest
RUN apt-get update -y && apt-get -y install wget
RUN apt-get install -y curl
ADD sample.war /usr/local/tomcat/webapps/
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar /tmp
ENV JAVA_OPTS="-javaagent:/tmp/opentelemetry-javaagent.jar -Dappdynamics.opentelemetry.enabled=true -Dotel.resource.attributes=service.name=tomcatOtelJavaK8s,service.namespace=tomcatOtelJavaK8s"
ENV OTEL_EXPORTER_OTLP_ENDPOINT=http://appdynamics-collectors-ds-appdynamics-otel-collector.cco.svc.cluster.local:4318
CMD ["catalina.sh","run"]

Base Image: Start with tomcat:latest as the base image for deploying a Java web application.
Installing Utilities: Update the package list and install utilities like wget and curl for downloading the OpenTelemetry Java agent.
Adding Your Application: Use the ADD command to place your .war file in the webapps directory of Tomcat.
Integrating OpenTelemetry: Download the latest OpenTelemetry Java agent using the ADD command and set JAVA_OPTS to include the path to the downloaded agent, enabling the OpenTelemetry-specific configuration.
Environment Variables: Define OTEL_EXPORTER_OTLP_ENDPOINT to specify the endpoint of the AppDynamics OTel Collector (or your custom OTel collector), which will process and forward your telemetry data to AppDynamics.

2. Deploying Your Application on Kubernetes

Your deployment YAML file configures Kubernetes to deploy your containerized application, exposing it through a service for external access.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app-with-otel-agent
  template:
    metadata:
      labels:
        app: java-app-with-otel-agent
    spec:
      containers:
      - name: java-app-with-otel-agent
        image: docker.io/abhimanyubajaj98/java-app-with-otel-agent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: java-app-with-otel-agent

Deployment Configuration: Define a deployment in Kubernetes to manage your application’s replicas, ensuring it matches your application’s requirements.
Service Exposure: Create a Kubernetes service to expose your application on a specified port, allowing traffic to reach your application.
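If you are building the image yourself, a quick sketch of the build-and-deploy steps (the image tag matches the deployment above; the manifest file name deployment.yaml is an assumption):

# Build and push the application image, then apply the Kubernetes manifests
docker build -t docker.io/abhimanyubajaj98/java-app-with-otel-agent .
docker push docker.io/abhimanyubajaj98/java-app-with-otel-agent
kubectl apply -f deployment.yaml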
3. Setting Up the AppDynamics OTel Collector

To monitor your application’s traces in Cisco AppDynamics, deploy the AppDynamics OTel Collector within your Kubernetes cluster. This collector processes traces from your application and sends them to AppDynamics.

Collector Configuration: Use the official documentation to deploy the AppDynamics OTel Collector, ensuring it’s correctly configured to receive telemetry data from your application: https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring
Service Discovery: Ensure your application’s deployment is configured to send traces to the collector service, typically through environment variables or configuration files.

4. Monitoring Traces in AppDynamics

To produce load for the sample app, exec inside the pod and run curl -v http://localhost:8080/sample/

With your application deployed and the OTel Collector set up, you can now monitor your application’s performance in AppDynamics.

Accessing AppDynamics: Log into your AppDynamics dashboard.
Viewing Traces: Navigate to the tracing or application monitoring section to view the traces sent from your Kubernetes-deployed application, allowing you to monitor requests, response times, and error rates.

Conclusion

Integrating the OpenTelemetry Java agent into your Java application’s Dockerfile and deploying it on Kubernetes offers a seamless path to observability. By leveraging Cisco AppDynamics in conjunction with this setup, you gain powerful insights into your application’s performance, helping you diagnose and resolve issues more efficiently. This guide serves as a starting point for developers looking to enhance their application’s observability in a Kubernetes environment.
Query 1:

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet

Output:

RampdataSet  IntialMessage
WAC          10
WAX          30
WAM          22
STC          33
STX          66
OTP          20

Query 2:

index=app-index source=application.logs "Initial message Successfull"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as SuccessfullMessage by RampdataSet

Output:

RampdataSet  SuccessfullMessage
WAC          0
WAX          15
WAM          20
STC          12
STX          30
OTP          10
TTC          5
TAN          7
TXN          10
WOU          12

Query 3:

index=app-index source=application.logs "Initial message Error"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as ErrorMessage by RampdataSet

Output:

RampdataSet  ErrorMessage
WAC          0
WAX          15
WAM          20
STC          12

We want to combine the three queries and get the output shown below. How do we do that?

RampdataSet  IntialMessage  SuccessfullMessage  ErrorMessage  Total
WAC          10             0                   0             10
WAX          30             15                  15            60
WAM          22             20                  20            62
STC          33             12                  12            57
STX          66             30                  0             96
OTP          20             10                  0             30
TTC          0              5                   0             5
TAN          0              7                   0             7
TXN          0              10                  0             10
WOU          0              12                  0             12
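One common pattern for combining searches like these (a sketch only; it keeps the three base searches exactly as above and assumes Total is simply the row sum):

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet
| append
    [ search index=app-index source=application.logs "Initial message Successfull"
    | rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
    | stats count as SuccessfullMessage by RampdataSet ]
| append
    [ search index=app-index source=application.logs "Initial message Error"
    | rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
    | stats count as ErrorMessage by RampdataSet ]
| stats sum(*) as * by RampdataSet
| fillnull value=0
| addtotals fieldname=Total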
We have around 10 services. Using the query below I am getting 8 services; the other 2 are not displayed in the table, but we can see them in the events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table. Please find the output below.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE columns are missing):

_time      AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE
1/2/2022   1          100        1          100        1          1         66        1
2/2/2022   5          0          5          0          3          3         0         3
3/2/2022   10         0          10         0          8          7         0         8
4/2/2022   100        1          100        1          97         80        1         80
5/2/2022   0          5          0          5          350        0         4         0

Expected output:

_time      AAP:START  ACC:START  ABB:START  DCC:START  PIP:START  AAP:DONE  ACC:DONE  ABB:DONE  DCC:DONE  PIP:DONE
1/2/2022   1          100        1          100        1          1         66        1         99        1
2/2/2022   5          0          5          0          3          3         0         3         0         2
3/2/2022   10         0          10         0          8          7         0         8         0         3
4/2/2022   100        1          100        1          97         80        1         80        1         90
5/2/2022   0          5          0          5          350        0         4         0         5         200