Hi. If I replace, for example, src=10.0.0.1 with my tag containing src=10.0.0.1 in the query, it doesn't work. Please help.
Hi, I need to extend my correlation search for Excessive Failed Logins with Username:

| tstats summariesonly=true values("Authentication.tag") as "tag", dc("Authentication.user") as "user_count", values("Authentication.user") as "usernames", dc("Authentication.dest") as "dest_count", count from datamodel="Authentication"."Authentication" where nodename="Authentication.Failed_Authentication" by "Authentication.app","Authentication.src"
| rename "Authentication.app" as "app", "Authentication.src" as "src"
| where 'count'>=6

I would like the query to trigger only when there is a successful authentication after 6 failed authentications. Thank you!
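A rough sketch of one way to approach this, assuming the CIM Authentication data model with its Failed_Authentication and Successful_Authentication child datasets; it keeps sources with at least 6 failures that also have a success, and approximates the ordering by requiring the latest success to come after the earliest failure rather than enforcing a strict six-failures-then-success sequence:

| tstats summariesonly=true count as failed_count min(_time) as first_failure from datamodel=Authentication.Authentication where nodename=Authentication.Failed_Authentication by Authentication.app Authentication.src
| rename Authentication.app as app Authentication.src as src
| append
    [| tstats summariesonly=true count as success_count max(_time) as last_success from datamodel=Authentication.Authentication where nodename=Authentication.Successful_Authentication by Authentication.app Authentication.src
    | rename Authentication.app as app Authentication.src as src]
| stats sum(failed_count) as failed_count sum(success_count) as success_count min(first_failure) as first_failure max(last_success) as last_success by app src
| where failed_count>=6 AND success_count>0 AND last_success>first_failure

If a strict event sequence matters (exactly six consecutive failures immediately followed by a success), that needs an event-by-event pass, for example sorting by src and _time and using streamstats, at the cost of a heavier search.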
Hello there, I am writing to describe my use case for integrating Splunk Cloud/Enterprise features into my website. I am looking for web services for integration with Splunk Cloud or Splunk Enterprise; my aim is to render Splunk Cloud/Enterprise dashboards and reports on my website.

I have:
- a Splunk Cloud admin account (trial)
- a Splunk Enterprise admin account (trial)

I want to:
- get the list of apps in Splunk Cloud/Enterprise programmatically;
- then see the list of dashboards and reports in the desired app;
- then select a dashboard or report to embed on my website.

This will allow me to easily visualize up-to-date Splunk data on my website. Thank you in advance for considering my query.
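For the programmatic part, the Splunk REST API exposes apps and dashboard (view) objects; a minimal sketch using the rest search command (the same endpoints can also be called over HTTPS on management port 8089 with any HTTP client, authenticated with a token or session key):

| rest /services/apps/local splunk_server=local
| table title label version disabled

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search isDashboard=1
| table title label eai:acl.app

The first search lists installed apps; the second lists views, with the isDashboard flag filtering out non-dashboard view types and eai:acl.app showing which app each dashboard belongs to. Actually embedding a dashboard in an external site is a separate step on top of this listing.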
I am unable to find a REST API Postman collection for Splunk Enterprise. Can anyone please provide a link to export or download a Postman collection for Enterprise?
Seeing some errors in the internal logs for lookup files. Can someone help me with the reason for these errors?

1) Unable to find filename property for lookup=xyz.csv, will attempt to use implicit filename.
2) No valid lookup table file found for this lookup=*
3) The lookup table '*' does not exist or is not available. This one can occur when the lookup definition or reference still exists but the file itself has been deleted.
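When chasing messages like these, it often helps to compare what lookup definitions and lookup files the search head actually knows about; a small sketch, with xyz.csv standing in for whichever lookup the message names:

| rest /services/data/transforms/lookups splunk_server=local
| search filename="xyz.csv"
| table title filename eai:acl.app eai:acl.sharing disabled

| rest /services/data/lookup-table-files splunk_server=local
| search title="xyz.csv"
| table title eai:acl.app eai:acl.sharing eai:data

As a rule of thumb, message 1 typically means a lookup is referenced by a name whose definition has no filename attribute, so Splunk falls back to treating the name itself as the CSV file; messages 2 and 3 typically mean the definition or reference exists but the .csv file is missing on that instance, or is not shared to the app and user running the search.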
Looking for a solution that performs certain validation checks when we upgrade any Splunk add-on to the latest version. This is to make sure that when the add-on is upgraded it does not break any of the existing working configurations in prod, such as field parsing or search execution time. So we need to check whether it is possible to create a dashboard, or something similar, where we can compare the old state vs. the upgraded state of the add-on before we deploy to prod. Two basic validations could be CIM fields and search execution time, and to kick this off we can pick any one sourcetype.
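As one possible starting point for the old-state vs. upgraded-state comparison, you can snapshot the extracted fields for the chosen sourcetype before the upgrade and diff them afterwards; a sketch, where pre_upgrade_fields.csv is only a hypothetical lookup name and the index/sourcetype are placeholders:

Before the upgrade:

index=<your_index> sourcetype=<your_sourcetype> earliest=-24h
| fieldsummary maxvals=0
| fields field count distinct_count
| outputlookup pre_upgrade_fields.csv

After the upgrade:

index=<your_index> sourcetype=<your_sourcetype> earliest=-24h
| fieldsummary maxvals=0
| fields field count distinct_count
| join type=outer field
    [| inputlookup pre_upgrade_fields.csv
    | rename count as pre_count, distinct_count as pre_distinct_count]
| eval status=case(isnull(pre_count), "new after upgrade",
                   count==pre_count AND distinct_count==pre_distinct_count, "unchanged",
                   true(), "changed")

For the search execution time side, the run times of the same saved searches before and after the upgrade can be compared from index=_audit, which records completed searches with their total run time.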
Hi, I am trying to install the PureStorage Unified Add-on for Splunk, but when adding configurations I get the below error on the configuration page. I am installing it on my on-prem deployment server rather than Splunk Cloud. Can anyone advise what the reason might be and how to resolve it?

Error: Failed to load current state for selected entity in form! Details Error: Request failed with status code 500

Add-on: https://splunkbase.splunk.com/app/5513

Thanks
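A 500 from an add-on's configuration page usually has a matching stack trace in the internal logs of the instance where the add-on is installed; a quick sketch to pull those, where the *purestorage* source filter is only a guess at the add-on's log file name:

index=_internal log_level=ERROR (sourcetype=splunkd OR source=*purestorage*) earliest=-1h
| table _time host source component _raw

Whatever Python traceback or permission error shows up there is usually far more specific than the generic message in the UI.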
Hello everyone, I have around 3600 events to review, but they are all encoded in hex. I know I can decode them by hand one by one, but that would take a lot of time which I do not have. I spent a few hours reading about similar problems here, but none of them helped me. I found an app called decode2, but it was not able to help either: it wants me to feed it a field to decode, and I only have two, one called time and one called event; pointing it at event returns nothing. Below I'm posting 2 of the events as a sample:

```\hex string starts here\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x005\xE6\x00ppt/tags/tag6.\x00\x00\x00\x00]\x00]\x00\xA9\x00\x00N\xE7\x00\x00\x00

\hex start\x00\x00\x00n\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xE5\x00ppt/tags/tag3.-\x00\x00\x00\x00\x00\x00!\x00\xA1

I changed the first part of each string because the forum did not let me post it, and I also deleted the part between tag6. and the next slash (same goes for tag3.-).

Is there a way to automatically convert all events from hex to text?
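If the \xNN sequences are literal text in the events (rather than real binary bytes), one pragmatic way to make them readable in bulk is to strip the escape sequences and keep only the printable remainder; a sketch, with placeholder index and sourcetype, and with the caveat that backslash escaping inside rex is fiddly and may need an extra or a removed level of escaping on your version:

index=<your_index> sourcetype=<your_sourcetype>
| rex mode=sed field=event "s/\\\\x[0-9A-Fa-f]{2}//g"
| table _time event

If the events really are raw binary rather than literal "\x.." text, a true hex-to-text conversion is easier outside SPL, for example with a small scripted or external lookup that decodes the field.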
Hi Splunkers, I have a strange behavior with a Splunk Enterprise Security SH. In the target environment we have an indexer cluster queried by 2 SHs: a Core one and an Enterprise Security one. For one particular index, if we perform a search on the ES SH we cannot see data. I mean, even if we perform the simplest query possible, which is:

index=<index_name>

we get no results. However, if I try the same search on the Core SH, the data is shown. The behavior seems very strange to me because it happens only with this specific index; all the other indexes return the same identical data whether the query is run on the ES SH or the Core SH. So, in a nutshell:

Indexes that return results on the Core SH: N
Indexes that return results on the ES SH: N - 1
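Since both SHs query the same indexers, this usually comes down to what the ES search head (or the role you search with) is allowed to see rather than to the data itself; two quick checks, run on the ES SH:

| eventcount summarize=false index=<index_name>

| rest /services/authorization/roles splunk_server=local
| table title srchIndexesAllowed srchIndexesDefault srchFilter

If the index shows events in eventcount but a plain index=<index_name> search returns nothing, look at srchIndexesAllowed and especially srchFilter on the roles of the user you are testing with; ES roles often carry restrictive search filters that silently exclude indexes.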
dyld[8605]: Library not loaded: @executable_path/../lib/libbz2.1.dylib
  Referenced from: <155E4B06-EBFB-3512-8A38-AF5B870FD832> /opt/splunk/bin/splunkd
  Reason: tried: '/opt/splunk/lib/libbz2.1.dylib' (code signature in <8E64DF20-704B-3A23-9512-41A3BCD72DEA> '/opt/splunk/lib/libbz2.1.0.3.dylib' not valid for use in process: library load disallowed by system policy), '/usr/lib/libbz2.1.dylib' (no such file, not in dyld cache)
ERROR: pid 8605 terminated with signal 6
Here is the sample log:

{"date": "1/2/2022 00:12:22,124", "DATA": "[http:nio-12567-exec-44] DIP: [675478-7655a-56778d-655de45565] Data: [7665-56767ed-5454656] MIM: [483748348-632637f-38648266257d] FLOW: [NEW] { SERVICE: AAP | Applicationid: iis-675456 | ACTION: START | REQ: GET data published/data/ui } DADTA -:TIME:<TIMESTAMP> (0) 1712721546785 to 1712721546885 ms GET /v8/wi/data/*, GET data/ui/wi/load/success", "tags": {"host": "GTU5656", "insuranceid": "8786578896667", "lib": "app"}}

We have around 10 services. With the query below I am getting 8 services; the other 2 are not displayed in the table, although we can see them in the events. Field extraction is working correctly, so I am not sure why the other 2 services are not showing up in the table.

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

Current output (the DCC:DONE and PIP:DONE columns are missing):

_time | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE
1/2/2022 | 1 | 100 | 1 | 100 | 1 | 1 | 66 | 1
2/2/2022 | 5 | 0 | 5 | 0 | 3 | 3 | 0 | 3
3/2/2022 | 10 | 0 | 10 | 0 | 8 | 7 | 0 | 8
4/2/2022 | 100 | 1 | 100 | 1 | 97 | 80 | 1 | 80
5/2/2022 | 0 | 5 | 0 | 5 | 350 | 0 | 4 | 0

Expected output:

_time | AAP:START | ACC:START | ABB:START | DCC:START | PIP:START | AAP:DONE | ACC:DONE | ABB:DONE | DCC:DONE | PIP:DONE
1/2/2022 | 1 | 100 | 1 | 100 | 1 | 1 | 66 | 1 | 99 | 1
2/2/2022 | 5 | 0 | 5 | 0 | 3 | 3 | 0 | 3 | 0 | 2
3/2/2022 | 10 | 0 | 10 | 0 | 8 | 7 | 0 | 8 | 0 | 3
4/2/2022 | 100 | 1 | 100 | 1 | 97 | 80 | 1 | 80 | 1 | 90
5/2/2022 | 0 | 5 | 0 | 5 | 350 | 0 | 4 | 0 | 5 | 200
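One likely culprit is the default series limit on timechart (10 series, with the rest folded into OTHER); with five services times two actions you are right at the edge, and any additional split values push real columns out. A sketch of the relevant change, keeping the rest of the search as written:

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT services="null"
| eval split=services.":".actions
| timechart span=1d limit=0 useother=false count by split

Two smaller observations: the original filter NOT service="null" refers to a field named service that is never extracted (the rex creates services), so it currently filters nothing; and since the column names come out as AAP:START, AAP:DONE, and so on, the wildcards in the final table command may need to be *START and *DONE, because field names are matched case-sensitively. The eval _time=strftime(...) after timechart also turns _time into a string, which is usually better left to the panel's time formatting.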
Hello, I have these two results. I need to compare them and be told when they are different. Could you help me? Regards.
Hi, I have the scenario below (my brain is very slow at this time of the day!). I need an eval that creates a Status field, as in the table below, flagging whether a host is running on IPv4, IPv6, or both IPv4 + IPv6.

HOSTNAME | IPv4 | IPv6 | Status
SampleA | 0.0.0.1 | | IPv4
SampleB | | 0.0.0.2 | IPv6
SampleC | 0.0.0.3 | A:B:C:D:E:F | IPv4 + IPv6

Thanks in advance!
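A minimal sketch, assuming a host with no address for a family has that field null; if the field is instead present but empty, swap isnotnull(x) for len(x)>0:

| eval Status=case(isnotnull(IPv4) AND isnotnull(IPv6), "IPv4 + IPv6",
                   isnotnull(IPv4), "IPv4",
                   isnotnull(IPv6), "IPv6",
                   true(), "none")

The case() function evaluates the conditions in order, so the combined IPv4 + IPv6 check has to come first.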
The integration of OpenTelemetry Java agents into your application's Docker containers represents a significant leap towards enhanced observability and monitoring capabilities. This guide details how to embed the OpenTelemetry Java Agent into your Dockerfile for a Java application, deploy it on Kubernetes, and monitor its traces using Cisco AppDynamics, thereby providing a robust solution for real-time application performance monitoring.

Pre-requisites

Ensure you have the following set up and ready:
- A Kubernetes cluster
- Docker and Kubernetes command-line tools, docker and kubectl, installed and configured
- Access to an AppDynamics account for monitoring

1. Preparing Your Dockerfile for Observability

The Dockerfile outlined below integrates the OpenTelemetry Java agent into a Tomcat server to enable automated instrumentation of your Java application.

FROM tomcat:latest
RUN apt-get update -y && apt-get -y install wget
RUN apt-get install -y curl
ADD sample.war /usr/local/tomcat/webapps/
ADD https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar /tmp
ENV JAVA_OPTS="-javaagent:/tmp/opentelemetry-javaagent.jar -Dappdynamics.opentelemetry.enabled=true -Dotel.resource.attributes="service.name=tomcatOtelJavaK8s,service.namespace=tomcatOtelJavaK8s""
ENV OTEL_EXPORTER_OTLP_ENDPOINT=http://appdynamics-collectors-ds-appdynamics-otel-collector.cco.svc.cluster.local:4318
CMD ["catalina.sh","run"]

Base Image: Start with tomcat:latest as the base image for deploying a Java web application.
Installing Utilities: Update the package list and install necessary utilities like wget and curl for downloading the OpenTelemetry Java agent.
Adding Your Application: Use the ADD command to place your .war file in the webapps directory of Tomcat.
Integrating OpenTelemetry: Download the latest OpenTelemetry Java agent using the ADD command and set JAVA_OPTS to include the path to the downloaded agent, enabling specific OpenTelemetry configurations.
Environment Variables: Define OTEL_EXPORTER_OTLP_ENDPOINT to specify the endpoint of the AppDynamics OTel Collector, or your custom OTel collector, which will process and forward your telemetry data to AppDynamics.

2. Deploying Your Application on Kubernetes

Your deployment YAML file configures Kubernetes to deploy your containerized application, exposing it through a service for external access.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: java-app-with-otel-agent
  template:
    metadata:
      labels:
        app: java-app-with-otel-agent
    spec:
      containers:
      - name: java-app-with-otel-agent
        image: docker.io/abhimanyubajaj98/java-app-with-otel-agent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: java-app-with-otel-agent
  labels:
    app: java-app-with-otel-agent
  namespace: appd-cloud-apps
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: java-app-with-otel-agent

Deployment Configuration: Define a deployment in Kubernetes to manage your application's replicas, ensuring it's set to match your application's requirements.
Service Exposure: Create a Kubernetes service to expose your application on a specified port, allowing traffic to reach your application.

3. Setting Up the AppDynamics Otel Collector

To monitor your application's traces in Cisco AppDynamics, deploy the AppDynamics Otel Collector within your Kubernetes cluster.
This collector processes traces from your application and sends them to AppDynamics.

Collector Configuration: Use the official documentation to deploy the AppDynamics Otel Collector, ensuring it is correctly configured to receive telemetry data from your application: https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring
Service Discovery: Ensure your application's deployment is configured to send traces to the collector service, typically through environment variables or configuration files.

4. Monitoring Traces in AppDynamics

To generate load for the sample app, exec into the pod and run:

curl -v http://localhost:8080/sample/

With your application deployed and the Otel Collector set up, you can now monitor your application's performance in AppDynamics.

Accessing AppDynamics: Log into your AppDynamics dashboard.
Viewing Traces: Navigate to the tracing or application monitoring section to view the traces sent from your Kubernetes-deployed application, allowing you to monitor requests, response times, and error rates.

Conclusion

Integrating the OpenTelemetry Java agent into your Java application's Dockerfile and deploying it on Kubernetes offers a seamless path to observability. By leveraging Cisco AppDynamics in conjunction with this setup, you gain powerful insights into your application's performance, helping you diagnose and resolve issues more efficiently. This guide serves as a starting point for developers looking to enhance their application's observability in a Kubernetes environment.
Query 1:

index=app-index source=application.logs "Initial message received with below details"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as IntialMessage by RampdataSet

Output:

RampdataSet | IntialMessage
WAC | 10
WAX | 30
WAM | 22
STC | 33
STX | 66
OTP | 20

Query 2:

index=app-index source=application.logs "Initial message Successfull"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as SuccessfullMessage by RampdataSet

Output:

RampdataSet | SuccessfullMessage
WAC | 0
WAX | 15
WAM | 20
STC | 12
STX | 30
OTP | 10
TTC | 5
TAN | 7
TXN | 10
WOU | 12

Query 3:

index=app-index source=application.logs "Initial message Error"
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| stats count as ErrorMessage by RampdataSet

Output:

RampdataSet | ErrorMessage
WAC | 0
WAX | 15
WAM | 20
STC | 12

We want to combine the three queries and get the output shown below. How can we do that?

RampdataSet | IntialMessage | SuccessfullMessage | ErrorMessage | Total
WAC | 10 | 0 | 0 | 10
WAX | 30 | 15 | 15 | 60
WAM | 22 | 20 | 20 | 62
STC | 33 | 12 | 12 | 57
STX | 66 | 30 | 0 | 96
OTP | 20 | 10 | 0 | 30
TTC | 0 | 5 | 0 | 5
TAN | 0 | 7 | 0 | 7
TXN | 0 | 10 | 0 | 10
WOU | 0 | 12 | 0 | 12
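Since all three queries read the same index and source and differ only in the message they match, one way is to classify the message type in a single search and pivot with chart; a sketch that keeps the field names (including the original spellings) from the queries above and fills the gaps with 0:

index=app-index source=application.logs ("Initial message received with below details" OR "Initial message Successfull" OR "Initial message Error")
| rex field=_raw "RampData :\s(?<RampdataSet>\w+)"
| eval msg_type=case(searchmatch("Initial message Successfull"), "SuccessfullMessage",
                     searchmatch("Initial message Error"), "ErrorMessage",
                     searchmatch("Initial message received with below details"), "IntialMessage")
| chart count over RampdataSet by msg_type
| fillnull value=0 IntialMessage SuccessfullMessage ErrorMessage
| eval Total=IntialMessage+SuccessfullMessage+ErrorMessage
| table RampdataSet IntialMessage SuccessfullMessage ErrorMessage Total

If more message types are added later, addtotals fieldname=Total can replace the explicit eval for the Total column.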
Hi all! I've got an issue with macro expansion taking an excessively long time when you use the keyboard shortcut Ctrl+Shift+E. I'm looking for someone to try the same thing on their own system and let me know if you're seeing this too. That will help me determine whether this is a problem in my environment or a possible bug in the software.

To test, find any macro in your environment.

Establish a baseline: enter just the macro name in the search box and press Ctrl+Shift+E (or, I think, Command+Shift+E on a Mac). Note how long it takes for the modal pop-up to show you the expanded macro. It is not necessary to run the search.

`mymacro`

Test the issue: using the same macro as above, create a simple search that has the macro inside a subsearch, then try expanding the macro. Are you getting a slow response? For me, it takes more than 20 seconds to expand the macro.

|makeresults
|append [`mymacro`]

I appreciate the help from anyone willing to test.
I'm setting up a lab instance of Splunk Enterprise in preparation to replace our legacy instance in a live environment, and I'm getting this error message:

"homePath='/mnt/splunk_hot/abc/db' of index=abc on unusable filesystem"

I'm running RHEL 8 VMs with Splunk 9.1: 2 indexers clustered together and a cluster manager. I've attached external drives for hot and cold storage to each indexer. The external drives have been formatted as ext4, set in fdisk to mount at boot every time as /mnt/splunk_hot and /mnt/splunk_cold, and indexes.conf points to them by volume. They come up at boot, and I can navigate to them and write to them. They're currently owned by root; I couldn't find who should have permission over them, so I left them as-is to start. I tried enabling OPTIMISTIC_ABOUT_FILE_LOCKING=1, but that didn't do anything.

That being said, I suspect I've missed a step in mounting the external drives. I wasn't able to find specifics about the way I'm doing this, so I pose the question: am I doing something wrong, or missing a step, in mounting these external drives? Is this now a bad practice? I'm stumped.

My indexes.conf:

[volume:hot]
path = /mnt/splunk_hot

[volume:cold]
path = /mnt/splunk_cold

[abc]
repFactor = auto
homePath = volume:hot/abc/db
coldPath = volume:cold/abc/db
thawedPath = $SPLUNK_DB/abc/thaweddb
## We're not utilizing frozen storage at all, so I left it at the default

Any advice here would be greatly appreciated!
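Assuming splunkd runs as a non-root user, root-owned mount points are the usual suspect for "unusable filesystem": the indexer has to be able to create and write the bucket directories under homePath and coldPath, so the mounts generally need to be owned by, or at least writable for, the account that runs Splunk. To confirm how each peer actually resolves the volume paths, a quick check from the indexer itself:

| rest /services/data/indexes splunk_server=local
| search title=abc
| table title homePath homePath_expanded coldPath coldPath_expanded disabled

If the expanded paths look right but the error persists, the next place to look is the filesystem ownership and mount options on /mnt/splunk_hot and /mnt/splunk_cold.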
index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

When we run the above query, not all services are getting captured, but we do have data; I attached a screenshot (the highlighted ones are missing). Can anyone tell me what the issue with the query is?
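Before the timechart, it is worth confirming which service/action pairs actually survive the rex extractions and the search filter; a small diagnostic sketch (and note that timechart's default limit of 10 series silently folds extra columns into OTHER unless you pass limit=0):

index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| stats count by services actions

If the missing services show up here, the problem is on the timechart side (series limit or the wildcarded table fields); if they do not, the rex patterns or the base search terms are dropping them before the chart is built.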
I am reading the host from a log file and have a query that returns all the hosts:

index=aaa source="/var/log/test1.log" | stats count by host

Can we include a step to categorize hosts as test/qa/prod for the dropdown list, based on the list of hosts returned in the query itself (using a wildcard, e.g. if the host name has "t" then test, if it has "q" then qa, and so on)? For now I am using static options:

test - testhost
qa - qahost
prod - prodhost
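A sketch of the dynamic classification, assuming the hostname itself carries a recognizable marker; the ^t and ^q patterns below are only placeholder examples and should be adjusted to your real naming convention:

index=aaa source="/var/log/test1.log"
| stats count by host
| eval env=case(match(host, "(?i)^t"), "test",
                match(host, "(?i)^q"), "qa",
                true(), "prod")
| table host env

In the dashboard, this search can then populate the dropdown dynamically (host as the value, env as the label or as a separate environment input) instead of the static test/qa/prod options.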
Hi Team, the Docker build is failing with this error:

=> ERROR [15/16] RUN sed -i 's/<AppenderRef ref=\"Console\"\/>/<!-- <AppenderRef ref=\"Console\"\/> -->/g' /usr/local/lib/python3.10/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml

Below is the package version:

appdynamics==23.8.0.6197

appdynamics-bindeps is not getting pulled/installed. I tried with the latest version of the appdynamics package (appdynamics==24.2.0.6567) and had the same experience. This is happening only on a Mac M1 Pro.

Adding "appdynamics-bindeps-linux-x64==23.8.0" explicitly to requirements.txt gives the error below:

File "/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py", line 6, in <module>
2024-04-10 11:14:44 from . import (constants, error, message, context,
2024-04-10 11:14:44 ImportError: cannot import name 'constants' from partially initialized module 'appdynamics_bindeps.zmq.backend.cython' (most likely due to a circular import) (/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py)