All Topics
Simplify Application Performance Troubleshooting with Log Observer Connect for AppDynamics

Log Observer Connect for AppDynamics lets you reach the right logs in Splunk with a single click, all while keeping the troubleshooting context from AppDynamics. By integrating Splunk's powerful log analytics with AppDynamics, Log Observer Connect lets you perform in-context log troubleshooting, quickly pinpoint issues, and centralize logs across teams for a single source of truth. If you want to streamline your troubleshooting process and keep everything running smoothly, this integration could be a game-changer.

Getting started takes five steps:

1. Ensure there is an appropriate Splunk service account for accessing the relevant Splunk-gathered logs. This means creating a new role with the edit_tokens_own and search capabilities, then assigning that role to an existing or new Splunk user that will be used to create the connection within AppDynamics. A new user is recommended to serve as a service account.
2. Configure the Splunk Universal Forwarder to send application metadata.
3. Configure the AppDynamics agents (currently Java, .NET, and Node.js) to enrich log data by setting the appdynamics.enable.log.metadata.enrichment system property.
4. Configure Cisco AppDynamics for Log Observer Connect using the user and role created previously.
5. Allow Splunk Cloud Platform IP addresses to be used by Cisco AppDynamics.

NOTE: These steps are high level; consult our Integration Steps documentation when completing this integration.

Configuration
Integration Demo Video
Additional Resources
- Take a self-guided test drive with Log Observer Connect
- Log Observer Connect One Pager
- Link to Website
Speak to an expert?
For Java-based agents, we quite often run into connectivity issues when the agent is migrating from an on-prem controller to a SaaS controller, when it is redirected from one controller to another, when the network team introduces a tool such as Zscaler, or after a simple Java upgrade. This is much more pronounced for Java agents than for Machine Agents and Database Agents, where the JRE is usually bundled with the product. The JRE is what starts the JVM, and, as the name suggests, the JVM is a virtual machine for Java: it ignores whatever trust material exists on the system or node and only trusts what the spawned virtual machine has access to.

For Java, the key to establishing a TLS connection with any server is that the server's public certificate must be trusted. For ease of consumption, this article is split into two sections, since the JRE is usually under the agent's control for Machine and Database Agents.

For Java agents

The key parameter the JVM uses to establish trust with a server is the truststore passed to the JVM. Ideally this should contain not just the certificates for the controller but those for any secured service the JVM will connect to. Quite often developers pass a custom truststore to the application. This is the root of the issue in most scenarios: by passing a truststore to the JVM we are putting blinders on it, and the JVM will no longer trust the default JRE certificates (usually at JRE/lib/security/cacerts) but only the certificates in the passed truststore. A quick check of the JVM arguments will show whether a custom truststore is being passed; if not, the default JRE cacerts file is the truststore. If a Java agent is not connecting to the controller, add the controller's public certificate to the truststore in use. If no custom truststore is being passed to the JVM, check the JRE's cacerts file with any tool and confirm that the public certificate offered by the controller is present in the file.

Network Connectivity Issue

Sometimes the certificate is not the issue but network connectivity is. For that we use a neutral tool such as keytool, which exists in every JRE/bin folder, to test connectivity and show the entire TLS handshake on the terminal. Simply run the following:

keytool -printcert -sslserver <host>:<port> -J-Djavax.net.debug=ssl

If a proxy is involved, add proxy details with extra JVM arguments such as -J-DproxyHost and -J-DproxyPort. For example:

keytool -printcert -sslserver <insertyourtenant>.saas.appdynamics.com:443 -J-Djavax.net.debug=ssl -J-DproxyHost=10.8.9.234 -J-DproxyPort=443

If the command above fails, the network team can be engaged to check connectivity between the server and the controller.

For Machine and Database agents

Since the JRE is usually bundled, the chance of global root CAs missing from the JRE is minimized. However, if you have an on-prem secured controller, the TLS handshake will fail until the public certificate is available to the JVM. The easiest option is to create a JKS file holding the public certificates, name it cacerts.jks, and place it in the conf folder; the agents have logic to trust whatever certificates you specify in the cacerts.jks file.
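A minimal sketch of building that file with keytool, assuming the controller's public certificate has been exported to a file named controller-public.crt (the alias, file name, and store password here are placeholders, not required values):

# Import the controller's public cert into a JKS truststore named cacerts.jks
keytool -importcert -alias appd-controller -file controller-public.crt -keystore cacerts.jks -storetype JKS -storepass changeit

# Verify the entry is present
keytool -list -keystore cacerts.jks -storepass changeit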
Please note that the file name must be cacerts.jks; if the file is read, the logs will say so, including the actual certificate details when debug logging is enabled.

TLS handshake capture

If you wish to capture in the logs the actual TLS handshake the JVM establishes with the servers, you need to enable TLS debugging and ensure that stdout and stderr are actually being written to a file. Note that for JVMs run as a service, stdout/stderr may be redirected to /dev/null, so the JVM must be restarted with the redirection in place. The JVM arguments are:

-Djavax.net.ssl=debug -Djavax.net.debug=all

For example:

nohup /app/appdynamics/machineagent/bin/machine-agent -Dappdynamics.http.proxyHost=10.8.9.234 -Dappdynamics.http.proxyPort=443 -Djavax.net.ssl=debug -Djavax.net.debug=all > /app/appdynamics/machineagent/bin/nohup.out 2>&1 &
Introduction

In Kubernetes, securing communications between different components is essential for maintaining the integrity and confidentiality of your applications. Certificates play a pivotal role in this security ecosystem, enabling encrypted communication and authentication and ensuring that data remains tamper-proof. However, as Kubernetes environments grow in complexity, tracking which certificates are used by specific deployments can become challenging. This guide provides a comprehensive, secrets-focused approach to identifying and verifying the certificates used by a specific Kubernetes deployment, using the AppDynamics Cluster Agent as a practical example.

Understanding Kubernetes Certificates

What Are Certificates in Kubernetes?
Certificates in Kubernetes are digital documents used to secure communications between components such as API servers, nodes, and deployed applications. These are typically X.509 certificates and are used for encryption, authentication, and integrity checks. Certificates can be issued by a Certificate Authority (CA) and are often managed automatically, although there are scenarios where manual intervention is required.

What Are Secrets in Kubernetes?
Kubernetes Secrets are objects used to store sensitive information such as passwords, OAuth tokens, and, importantly, TLS certificates. Storing certificates as secrets allows Kubernetes to manage and distribute them securely to the pods that require them.

Common Certificate Use Cases
- Securing API server communications: ensuring that communications between the Kubernetes API server and nodes are encrypted.
- Ingress controllers: handling SSL/TLS termination for external traffic entering the cluster.
- Mutual TLS (mTLS) between services: encrypting communication between microservices within the cluster.
- Application routes: securing external access to applications through routes or services.

Understanding Kubernetes Secrets

Types of secrets:
- kubernetes.io/tls: used specifically for storing TLS certificates.
- Opaque: a generic type of secret that can store any key-value pair, including certificates.
- kubernetes.io/dockercfg: used for storing Docker configuration, particularly for private image registries.
- kubernetes.io/service-account-token: used to manage service account tokens.

Prerequisites
Before you begin, ensure you have the following:
- kubectl: the command-line tool for interacting with Kubernetes clusters.
- OpenSSL: a tool for working with SSL/TLS certificates.
- Access: sufficient permissions to access and manage secrets in the Kubernetes cluster.
- AppDynamics Cluster Agent installed: Install the Cluster Agent

Step-by-Step Guide to Identifying Certificates via Secrets

Step 1: List All Secrets in the Namespace
Certificates are stored within secrets, so start by listing all secrets in the relevant namespace to identify potential candidates.

kubectl get secrets -n <namespace>

In our case the Cluster Agent is deployed in the appdynamics namespace, so appdynamics is referenced in the subsequent steps.

Step 2: Filter Secrets Related to Certificates
Kubernetes secrets that store TLS certificates usually have the type kubernetes.io/tls. Filter secrets by this type to narrow down your search.

kubectl get secrets -n appdynamics -o json | jq '.items[] | select(.type=="kubernetes.io/tls") | .metadata.name'

The Cluster Agent does not create a TLS secret by default, so the command above should return nothing.
Step 3: Check Other Secrets
If no kubernetes.io/tls secrets are found, inspect the other secrets in the namespace to see whether they contain certificates. This command lists all secrets in the namespace, including those that might contain certificate data.

kubectl get secrets -n appdynamics
NAME                                        TYPE                                  DATA   AGE
appdynamics-cluster-agent-dockercfg-zvglk   kubernetes.io/dockercfg               1      30m
appdynamics-cluster-agent-token-5dt8s       kubernetes.io/service-account-token   4      30m
appdynamics-infraviz-dockercfg-f9ncg        kubernetes.io/dockercfg               1      30m
appdynamics-infraviz-token-5qpfs            kubernetes.io/service-account-token   4      30m
appdynamics-operator-dockercfg-v9k7k        kubernetes.io/dockercfg               1      30m
appdynamics-operator-token-478lb            kubernetes.io/service-account-token   4      30m
builder-dockercfg-wkdnm                     kubernetes.io/dockercfg               1      30m
builder-token-nvjmq                         kubernetes.io/service-account-token   4      30m
cluster-agent-secret                        Opaque                                1      30m
default-dockercfg-6dn5t                     kubernetes.io/dockercfg               1      30m
default-token-m7hkp                         kubernetes.io/service-account-token   4      30m
deployer-dockercfg-25ws4                    kubernetes.io/dockercfg               1      30m
deployer-token-dd9s9                        kubernetes.io/service-account-token   4      30m
sh.helm.release.v1.abhi-cluster-agent.v1    helm.sh/release.v1                    1      30m

Step 4: Inspect the Content of a Secret
To examine the contents of a specific secret, such as appdynamics-cluster-agent-token-5dt8s, use the following command:

kubectl get secret appdynamics-cluster-agent-token-5dt8s -n appdynamics -o yaml

The output will look something like this:

apiVersion: v1
data:
  ca.crt: xxxxxxxx
  namespace: YXBwZHluYW1pY3M=
  service-ca.crt: xxxxxx==
  token: xxxxxxx
kind: Secret
metadata:
  annotations:
    kubernetes.io/created-by: openshift.io/create-dockercfg-secrets
    kubernetes.io/service-account.name: appdynamics-cluster-agent
    kubernetes.io/service-account.uid: 582c10bc-b3fb-4435-9adb-7c6dfb25c2ff
  creationTimestamp: "2024-09-03T17:42:20Z"
  name: appdynamics-cluster-agent-token-5dt8s
  namespace: appdynamics
  resourceVersion: "84672"
  uid: a7cc1cba-27e4-4b46-a33d-212614af9cad
type: kubernetes.io/service-account-token

Step 5: Decode and Inspect the Certificate
To check the details of a certificate (e.g., ca.crt), decode it and then use openssl to view its content:

echo "<base64-encoded content of ca.crt>" | base64 --decode | openssl x509 -noout -text

If the certificate is found and properly decoded, the output will resemble:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: ...
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: OU=openshift, CN=kube-apiserver-lb-signer
        ...

Conclusion
This guide has walked you through identifying the certificates used by a specific Kubernetes deployment, focusing on secrets. By following these steps, you can effectively manage and verify the certificates that secure your Kubernetes environments, ensuring the integrity and confidentiality of your applications.
Hi folks,

I have a quick question based on this kind of data. Consider this table:

Age   sex      id   ^N-S-Ba   S-N mm
17    male     1    125       84
17    female   2    133       75

I have to create a dynamic range for the field "S-N mm": for females the range is from 74,6 to 77, and for males it is from 79,3 to 87,7. I need to create a table where, when one of these values is within its range, the cell turns green.

Thanks for the support,
Ale
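A minimal SPL sketch of the range check itself (assuming the measurements are converted to numbers with a period as the decimal separator, and that the green colouring is then applied through the table's colour formatting on the resulting flag field):

... | eval snmm = tonumber(replace('S-N mm', ",", "."))
| eval in_range = case(sex=="female" AND snmm>=74.6 AND snmm<=77, "in range",
                       sex=="male" AND snmm>=79.3 AND snmm<=87.7, "in range",
                       true(), "out of range")
| table Age, sex, id, "^N-S-Ba", "S-N mm", in_range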
Hello, I'm trying to obtain a table like this:

FQDN           uri       list of attack_types                                           attack_number
www.test.com   /index    Information Leakage, Path Traversal                            57
www.test.com   /test     Path Traversal                                                 30
prod.com       /sample   Abuse of Functionality, Forceful Browsing, Command Execution   10

I can obtain the table without the list of attack_types, but I can't figure out how to add the values function:

| stats count as attack_number by FQDN,uri
| stats values(attack_type) as "Types of attack"

For each FQDN/uri pair I want the number of attacks and all the attack_types seen. It seems obvious, but I'm missing it. Can someone help me?
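A sketch of one way to get both in a single stats call (assuming each event carries the FQDN, uri, and attack_type fields):

| stats count as attack_number values(attack_type) as "list of attack_types" by FQDN, uri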
My role as an Observability Specialist at Splunk provides me with the opportunity to work with customers of all sizes as they implement OpenTelemetry in their organizations. If you read my earlier article, 3 Things I Love About OpenTelemetry, you'll know that I'm a huge fan of OpenTelemetry. But like any technology, there's always room for improvement. In this article, I'll share three areas where I think OpenTelemetry could be improved to make it even better.

#1: Make OpenTelemetry Even Easier

While OpenTelemetry has come a long way in the past few years, making it even easier would allow more organizations to adopt it and result in a faster time to value for everyone. I'll share a few specific examples below.

Expand Auto-Instrumentation Coverage

One example where ease of use could be improved is in the instrumentation of languages that don't support auto-instrumentation, such as Golang. The good news is that efforts are already underway to build a solution that provides auto-instrumentation for Go using eBPF. While this is still a work in progress, you can learn more about it in GitHub and even try it out yourself (on a non-production app, please!).

Troubleshooting Guidance

Many practitioners that I work with find it challenging to troubleshoot OpenTelemetry-related issues. While some of these issues are with the OpenTelemetry Collector, the majority are at the instrumentation level. For example, they may not see the spans they're expecting, or the application may even crash on startup when auto-instrumentation is added. Regardless of the specific instrumentation issue, there's frequently confusion about where to start troubleshooting. More often than not, the focus is on the OpenTelemetry Collector logs, due to a lack of understanding of where the instrumentation occurs versus what role the Collector plays.

I believe the OpenTelemetry community as a whole would benefit from further guidance on troubleshooting techniques. This could take the form of expanded documentation, videos demonstrating the troubleshooting process for real-world issues, or tutorials that let practitioners go through these processes themselves with a mock application.

Generative AI can also play a role in providing troubleshooting guidance. As of July 2024, ChatGPT is already able to help with Collector configuration tasks and provide direction on how to troubleshoot OpenTelemetry-related issues. For example, if we provide the following prompt:

I'm having trouble with the OpenTelemetry collector. Can you help me to troubleshoot this error?

warn kubelet/accumulator.go:102 failed to fetch container metrics {"kind": "receiver", "name": "kubeletstats", "data_type": "metrics", "pod": "mypod-5888f4d9fb-lbbww", "container": "mypod", "error": "failed to set extra labels from metadata: pod \"a3473219-4ab1-427c-b0f2-226a6e5271e5\" with container \"mypod\" has an empty containerID"}

ChatGPT was able to dissect the error message and provide the following interpretation:

"The key part of the error message is has an empty containerID. This suggests that the OpenTelemetry collector is attempting to fetch metrics for a container within a pod (mypod-5888f4d9fb-lbbww), but it cannot proceed because the container ID is empty."

It also provided suggested troubleshooting steps, such as confirming that the Kubernetes API server is up and running and ensuring the container has a valid container ID associated with it.
While it's not perfect, Generative AI is already helpful for troubleshooting OpenTelemetry issues today, and it will only continue to get better in the future.

#2: Narrow the Gap with Traditional APM Agents

One of my favorite aspects of traditional APM solutions that rely on proprietary APM agents is their ability to apply code-level instrumentation without requiring code changes. It would be great to see similar "no code change required" capabilities added to OpenTelemetry. I've provided a few examples below.

Creating Spans

It's sometimes helpful to capture spans that go above and beyond what auto-instrumentation provides. With OpenTelemetry today, this typically requires making a code change. And while the code change itself is straightforward, a few additional lines of code at most, it can take time to get this prioritized in a team's sprint, tested, and pushed out to production.

OpenTelemetry does provide some support for this today with Java. Specifically, with Java it's possible to create additional spans by adding a system property to the JVM at startup. See Creating spans around methods with otel.instrumentation.methods.include for further details. It would be great to see similar capabilities added to languages beyond Java.

Capturing Span Attributes

As an observability enthusiast, I believe it's critical to capture span attributes to ensure engineers have the context they need during troubleshooting. For example, let's say we have a user profile service, and one of the endpoints is /get-profile, which retrieves the profile of a particular user. We may find that the response time of this service varies widely: sometimes it responds in a few milliseconds, and other times it takes upwards of 1-2 seconds. Adding span attributes to provide context about the request, such as the user ID and the number of items in that user's history, is critical to ensure the engineer has the information they need for troubleshooting. The /get-profile operation might run slowly for users that have a large number of items in their history, but it wouldn't be possible to determine this without having those attributes included with the trace.

While some traditional APM solutions provide similar capabilities without code changes, capturing span attributes with OpenTelemetry currently requires code changes. As with creating spans, these code changes aren't difficult, but it can be challenging to prioritize them among competing feature requests and get them into production in a timely manner. There is some support in OpenTelemetry today for capturing HTTP headers as span attributes with the Java agent. It would be great to see OpenTelemetry extend these capabilities and add further support for capturing span attributes without requiring code changes.

Profiling

Spans captured with OpenTelemetry's auto-instrumentation tell us how long calls between services are taking and whether any errors are occurring. This can be supplemented with manually created spans to provide insight into long-running tasks that require a deeper level of visibility. But when it comes to finding the exact method or line of code that's causing an application to run slowly, we need to move beyond spans and look at profiling instead. Many traditional APM tools provide some form of profiling, but this level of detail hasn't been available with OpenTelemetry. The good news is that the process of adding profiling to OpenTelemetry is already underway.
In fact, OpenTelemetry announced upcoming support for profiling in March 2024; please see OpenTelemetry announces support for profiling for details. Note: while profiling is in the process of being added to OpenTelemetry, Splunk distributions of OpenTelemetry already include AlwaysOn Profiling capabilities.

#3: Expand Scope to Network and Security Domains

OpenTelemetry provides a wealth of information about applications and the infrastructure they run on. This includes apps running on traditional host-based environments as well as containerized apps running on Kubernetes. It also includes observability data from other components that applications depend on, such as databases, caches, and message queues. This data goes a long way in determining why an application is performing slowly, or why the error rate for a particular service has suddenly spiked. But sometimes issues go beyond the application code and the server infrastructure the applications run on. It would be helpful to see OpenTelemetry broaden its scope and provide visibility into additional domains.

Network

We've all heard the joke about software engineers blaming the network whenever something goes wrong in their application. Well, the truth is that sometimes the problem *is* caused by the network. Yet the engineers responsible for building and maintaining these applications rarely have direct insight into how the network is performing. So, for more holistic visibility into anything that could be impacting application performance, it would be wonderful to see OpenTelemetry add support for the network domain in the future. This could include ingesting metric and log data from existing network monitoring solutions, or pulling data directly from the network devices themselves.

Security

While it's important for an application to be available and performant, none of that matters if the application isn't secure. Since OpenTelemetry already has a wealth of information about the applications it instruments, expanding into the security domain would open up a whole new set of use cases for observability data. For example, OpenTelemetry could gather information about which specific packages and versions are used by an instrumented application. This could take the form of a new "security" signal, with a corresponding set of security-related semantic conventions that ensure this data is captured consistently across different languages and runtimes. The security signals could then be analyzed by an observability backend, and engineers could be alerted when a security vulnerability is present in their application. And by correlating security-related signals with existing signals such as traces and the upcoming profiling signal, observability backends could also determine when a particular vulnerability is actually being exercised, by analyzing which code paths are actively executed.

Summary

Thanks for taking the time to hear my thoughts on OpenTelemetry. Please leave a comment or reach out to let us know how you would make OpenTelemetry even better.
Hi Splunky people! We are excited to share the newest updates in Splunk Cloud Platform 9.2.2406, with many awaited features for both Analysts and Admins, helping you further your organizational progress toward resilience.

Analysts and Admins can benefit from these key highlights of the release:

- New Federated Search for Amazon S3 functions that facilitate the extraction of field values and matched strings from JSON objects in your Amazon S3 datasets.
- Forwarder certificate rotation that detects upcoming forwarder certificate expiration and rotates the certificate with a new one, without requiring downtime.
- New Federated Search for Splunk capabilities for metric indexes and the |eventcount command that further enrich your searches.
- Enjoy seamless communication and improved efficiency with enhanced email validation using regular expressions, increasing resiliency against situations such as malformed addresses.
- Improve your productivity by consolidating dashboard views with multiple tabs within a given dashboard.
- Boost efficiency with decentralized Search Telemetry by sending data directly and bypassing the Search Head bottleneck.
- Search Head Cluster replication has been improved to reduce out-of-sync errors.

Check out the 9.2.2406 release notes for additional details.

- Python 2 is in the process of being removed and will soon no longer be available in coming releases.
- The jQuery v3.5 library is now set as the platform default; prior jQuery libraries are no longer supported.
In August, the Splunk Threat Research Team had 3 releases of new security content via the Enterprise Security Content Update (ESCU) app (v4.38.0, v4.39.0 and v4.39.1). With these releases, there are 49 new analytics, 4 new analytic stories, 15 updated analytics, and 1 updated analytic story now available in Splunk Enterprise Security via the ESCU application update process.

Content highlights include:
- Detections aimed at addressing vulnerabilities in Ivanti Virtual Traffic Manager (CVE-2024-7593), with a particular focus on detecting SQL injection remote code execution and unauthorized account creation activities.
- A comprehensive set of new detections for Windows Active Directory, targeting potential threats related to privilege escalation, dangerous ACL modifications, GPO changes, and suspicious attribute modifications.
- New analytic stories to help detect Compromised Windows Hosts or activities linked to the Handala Wiper Malware.

New Analytics (49)
- Detect Password Spray Attack Behavior From Source (External Contributor: @nterl0k)
- Detect Password Spray Attack Behavior On User (External Contributor: @nterl0k)
- Ivanti EPM SQL Injection Remote Code Execution
- Ivanti VTM New Account Creation
- O365 DLP Rule Triggered (External Contributor: @nterl0k)
- O365 Email Access By Security Administrator (External Contributor: @nterl0k)
- O365 Email Reported By Admin Found Malicious (External Contributor: @nterl0k)
- Email Reported By User Found Malicious (External Contributor: @nterl0k)
- O365 Email Security Feature Changed (External Contributor: @nterl0k)
- O365 Email Suspicious Behavior Alert (External Contributor: @nterl0k)
- O365 Safe Links Detection (External Contributor: @nterl0k)
- O365 SharePoint Allowed Domains Policy Changed (External Contributor: @nterl0k)
- O365 SharePoint Malware Detection (External Contributor: @nterl0k)
- O365 Threat Intelligence Suspicious Email Delivered (External Contributor: @nterl0k)
- O365 Threat Intelligence Suspicious File Detected (External Contributor: @nterl0k)
- O365 ZAP Activity Detection (External Contributor: @nterl0k)
- Windows AD DCShadow Privileges ACL Addition (External Contributor: @dluxtron)
- Windows AD Dangerous Deny ACL Modification (External Contributor: @dluxtron)
- Windows AD Dangerous Group ACL Modification (External Contributor: @dluxtron)
- Windows AD Dangerous User ACL Modification (External Contributor: @dluxtron)
- Windows AD Domain Root ACL Deletion (External Contributor: @dluxtron)
- Windows AD Domain Root ACL Modification (External Contributor: @dluxtron)
- Windows AD GPO Deleted (External Contributor: @dluxtron)
- Windows AD GPO Disabled (External Contributor: @dluxtron)
- Windows AD GPO New CSE Addition (External Contributor: @dluxtron)
- Windows AD Hidden OU Creation (External Contributor: @dluxtron)
- Windows AD Object Owner Updated (External Contributor: @dluxtron)
- Windows AD Self DACL Assignment (External Contributor: @dluxtron)
- Windows AD Suspicious Attribute Modification (External Contributor: @dluxtron)
- Crowdstrike Admin Weak Password Policy
- Crowdstrike Admin With Duplicate Password
- Crowdstrike High Identity Risk Severity
- Crowdstrike Medium Identity Risk Severity
- Crowdstrike Medium Severity Alert
- Crowdstrike Multiple Low Severity Alerts
- Crowdstrike Privilege Escalation For Non-Admin User
- Crowdstrike User Weak Password Policy
- Crowdstrike User with Duplicate Password
- O365 Application Available To Other Tenants
- O365 Cross-Tenant Access Change
- O365 External Guest User Invited
- O365 External Identity Policy Changed
- O365 Privileged Role Assigned To Service Principal
- O365 Privileged Role Assigned
- Windows Multiple NTLM Null Domain Authentications
- Windows Unusual NTLM Authentication Destinations By Source
- Windows Unusual NTLM Authentication Destinations By User
- Windows Unusual NTLM Authentication Users By Destination
- Windows Unusual NTLM Authentication Users By Source

New Analytic Stories (4)
- Ivanti Virtual Traffic Manager CVE-2024-7593
- MoonPeak
- Compromised Windows Host
- Handala Wiper

Updated Analytics (15)
- Azure AD Concurrent Sessions From Different IPs
- Azure AD High Number Of Failed Authentications From IP
- Detect Regasm Spawning a Process
- Detect Regasm with Network Connection
- Detect Regasm with no Command Line Arguments
- Executables Or Script Creation In Suspicious Path
- Internal Horizontal Port Scan
- Linux c99 Privilege Escalation
- Powershell Windows Defender Exclusion Commands
- Suspicious Process File Path
- Windows AutoIt3 Execution
- Windows Data Destruction Recursive Exec Files Deletion
- Windows Gather Victim Network Info Through Ip Check Web Services
- Windows High File Deletion Frequency
- Windows Vulnerable Driver Installed

Updated Analytic Stories (1)
- CISA AA23-347A

For all our tools and security content, please visit research.splunk.com.

— The Splunk Threat Research Team
Hi,
Has anyone tried using the Node.js agent to see whether it will detect the Nest.js framework? NestJS is a framework for building efficient, scalable Node.js web applications and uses modern JavaScript, so I don't know whether this would at least partially work.
Hello, I have an issue when creating some visualizations in a Splunk dashboard. I'm using Dashboard Studio, and my objective is to make a table panel with a token for each column. Is that possible in Splunk? For example, in this captured dashboard, is it possible that when I click on a signature value, the rest of the visualizations below the table change dynamically based on the clicked value? The same behavior should also apply when I click on values in other columns of the first table. Is this possible in Dashboard Studio?
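Dashboard Studio does support setting tokens from a table click via event handlers in the dashboard's source JSON. A minimal sketch for one column follows; the token name signature_tok is made up here, and the key must use your actual field name (assumed to be signature). The searches below the table would then reference $signature_tok$:

"eventHandlers": [
    {
        "type": "drilldown.setToken",
        "options": {
            "tokens": [
                { "token": "signature_tok", "key": "row.signature.value" }
            ]
        }
    }
]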
I have logs indexed like this. How do I break the entries based on each line? I need each line as a separate event. I tried to do this via the line breaker but didn't succeed. Is there any method to do it via search, after indexing?
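One search-time sketch, assuming each indexed event really does contain multiple lines separated by newline characters: extract every line into a multivalue field, then expand it into separate results.

... | rex max_match=0 field=_raw "(?<line>[^\r\n]+)"
| mvexpand line
| table _time line

(If the events should instead be split at index time, that is a props.conf LINE_BREAKER / SHOULD_LINEMERGE change on the indexer or heavy forwarder.)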
Hello, I am trying to integrate the Splunk UI Toolkit into my own Splunk instance that is running on localhost. I am using React to get a session key with the following function:

async function GetSessionKey(username, password, server) {
  // POST credentials to the Splunk REST login endpoint and return the session key
  var key = await fetch(server + "/services/auth/login", {
    method: "POST",
    body: new URLSearchParams({
      username: username,
      password: password,
      output_mode: "json",
    }),
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
    },
  })
    .then((response) => response.json())
    .then((data) => {
      return data["sessionKey"];
    });
  return key;
}

But I always get this showing in my network tab:
Hi Splunkers,

Splunk Enterprise: 9.2.2
Splunk Security Essentials: 3.8 (and 3.4)

I installed Splunk Security Essentials 3.8, but I can't launch the app due to a Custom JavaScript Error. I tried using an older version of SSE, but it didn't resolve the issue. I also enabled the 'old version' setting in the internal library, but it still didn't help. If you know the solution, please help.
Step-by-Step Guide to Creating Custom JMX Attributes in the MBean Browser

1. From the MBean browser page, select an MBean. Inside it you will have many attributes; choose the attribute from which you want to create a custom attribute, e.g. ImplementationVersion.
2. Click on Configure JMX Metrics.
3. Click Add -> JMX Config.
4. Provide a name and description and click Save.
5. Select the JMX Config and add a Rule. Provide a name for the Rule. The Metric Path can be any path that you want the attribute to be reported under, the Domain name is the MBean name, and the Object Name Match Pattern is the Object Name from the MBean. For example:
   Metric Path -> Tomcat Test
   Domain -> JMImplementation
   Object Name Match Pattern -> JMImplementation:type=MBeanServerDelegate
6. Under the Define Metric from MBean Attributes section, define the MBean Attribute, Metric Name, and Metric Getter Chain that you want, and save it. For example:
   MBean Attribute -> ImplementationVersion
   Metric Name -> ImplementationVersion
   Metric Getter Chain -> toString().split(\\.).[0]
7. Go to Node Dashboard -> JMX -> JMX Metrics -> View JMX Metrics. Under JMX you will be able to see the custom JMX attribute that you created.
Choosing the Correct Image when the Agent version is >= 24

When you are deciding which image to use for the Node.js or Cluster Agent, you can select it based on the title of the image. The naming pattern that AppDynamics uses for the Node.js Agent is intuitive. There are three segments in the tag, with each segment separated by a hyphen:
- The first segment refers to the Agent version.
- The second segment refers to the major version of Node.js that the image was built for.
- The last segment refers to the Linux distribution that it is compatible with.

In the latest version of the Node.js Agent, there are only three variants regarding the distribution: Alma, Alma-Arm64, and Alpine. For all Linux distributions, use the Alma version, unless you're working with Alpine. When deciding between Alma and Alma-Arm64, select Alma-Arm64 for systems with an ARM64 CPU architecture, such as AWS Graviton; for AMD64 systems, choose Alma. Also, please refer to the list below for examples of popular supported Linux distributions.

When choosing an image, the tag "nodejs-agent:24.3.0-14-alpine" indicates that the Agent version is 24.3.0, that Node.js 14 was used to build the image, and that it is intended for an Alpine system.

Popular Supported Linux Distributions:
- RHEL
- Debian
- Ubuntu

Choosing the Correct Image when the Agent version is < 24

When you are deciding which image to use for the Node.js or Cluster Agent, you can again select it based on the title of the image, using the same three-segment tag described above: the Agent version, the major Node.js version the image was built for, and the Linux distribution it is compatible with.

In version 23.x of the Node.js Agent, there are only three variants regarding the distribution: Slim, Stretch-Slim, and Alpine. The Slim and Stretch-Slim versions should be used for every Debian-based Linux distribution, and Alpine should be used for Alpine-based distributions. Slim is the smallest Debian-based image optimized for production use, while Stretch-Slim includes a larger set of packages and dependencies from the Debian "stretch" release.

When choosing an image, the tag "nodejs-agent:23.10.0-14-alpine" indicates that the Agent version is 23.10.0, that Node.js 14 was used to build the image, and that it is intended for an Alpine system.
I'm working with Dashboard Studio for the first time and I've got another question. In the input on the Dashboard, I set this: $servers_entered$. I thought I had a solution for counting how many items are in $servers_entered$, but I found a case that failed. This is what $servers_entered$ looks like:

host_1, host_2, host_3, host_4, ..., host_n

What I need is a way of counting how many entries are in $servers_entered$. So far the commands I've tried have failed. What would work?

TIA,
Joe
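One sketch of the counting itself, assuming the token resolves to a plain comma-separated string (if the entries carry spaces after the commas, trim them first):

| makeresults
| eval servers = "$servers_entered$"
| eval server_count = mvcount(split(servers, ","))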
Do you want to SPL™, too?

SPL2, Splunk's next-generation data search and preparation language, is designed to serve as the single entry point for data processing behaviors across data in motion and at rest. While it is already available in Splunk Data Management (DM) pipelines in Splunk® Edge Processor and Ingest Processor, it is now in public beta in Splunk Enterprise! Not only can you build dashboards, reports, and alerts with SPL2 (just like SPL today), but you can also leverage SPL2's advanced code-like capabilities, such as custom functions, custom types, views, imports, and exports, to solve long-standing administration and developer challenges.

Watch this Tech Talk to learn:
- What SPL2 is, and how it extends SPL's capabilities for developers and admins
- How to build your first app with SPL2
- How SPL2 can help solve common access control challenges with run-as-owner views, and conduct data quality validation with custom type checks

Watch the full Tech Talk here:
I see references to Blue Team Academy, but I cannot find a webpage with a dedicated curriculum. I am working through the Cybersecurity Defense Analyst certification and the course list seems only partial. Within a couple of the courses, at the very beginning, there is a nine-step flowchart of courses. I am at "The Art of Investigation", but the remaining courses do not come up when I search the catalog. Specifically, where can I find:
- SIEM Success with Splunk ES
- Risk Based Analysis
- Threat Hunting Theory and Practice
- The Analyst Life

Thanks for the help and guidance.
As of v23.5, the AppDynamics APM Java, Node.js, and .NET Agents have been upgraded for dual output, so your existing instrumentation can be consumed by any third-party OpenTelemetry (OTel)-capable backend, such as Splunk Observability Cloud. The application itself doesn't need to be written to support OpenTelemetry or be re-instrumented in any way. Once you have updated your AppDynamics agents, you can enable OTel in the agent and start to push telemetry to your OpenTelemetry Collector. To push data to Splunk Observability Cloud we recommend the use of the Splunk Distribution of the OTel Collector.

In this article...
- Re-instrumentation options for OpenTelemetry
- Solutions/Processes
  - Cloud native applications
  - Existing and legacy applications
- What are the steps to instrumenting
- Additional resources

Re-instrumentation options for OpenTelemetry

There are three options for enabling OTel in your application:
1. Manually re-instrument your entire application. While this approach necessitates changes to your code, it provides complete flexibility. Since OpenTelemetry is an industry standard and vendor-independent, this code-level instrumentation only needs to be performed once.
2. Automatically instrument with an open-source OpenTelemetry agent, comparable to an AppDynamics agent. You have the option to utilize the standard open-source agents or the Splunk Distribution agent, which offers enhancements and preconfiguration.
3. Auto-instrument with an AppDynamics agent configured to output OpenTelemetry, allowing you to retain your AppDynamics data and workflows while simultaneously emitting OTel data.

Solutions/Processes

While having multiple options is beneficial, it can sometimes be overwhelming and raise questions about the best choice for specific situations. Although there is no universal solution that fits all scenarios, we aim to provide some high-level guidance and recommendations.

Cloud native applications

For new cloud native applications, it is advantageous to instrument each service natively with OpenTelemetry. As mentioned earlier, OpenTelemetry SDKs and APIs are vendor-neutral, allowing you to send telemetry data to various backends. If you are working with an existing cloud native application, you can also apply auto-instrumentation on a per-service basis. Tools like the OpenTelemetry Operator for Kubernetes simplify this process by enabling instrumentation through annotations when running in Kubernetes environments.

Existing and legacy applications

For existing applications, particularly legacy ones, manual re-instrumentation can be a significant effort. In such cases, you can opt to re-instrument using either an open-source agent or an AppDynamics agent. If your legacy application is already instrumented with AppDynamics, the simplest approach is to reconfigure it to output OpenTelemetry data instead of switching to an open-source agent. This allows you to retain all your existing configurations, troubleshooting workflows, dashboards, alerts, and notifications while adding OpenTelemetry export capabilities. Additionally, this setup enables AppDynamics agents to correlate data between services instrumented with both OpenTelemetry and AppDynamics, providing comprehensive end-to-end visibility. AppDynamics Smart Agent management features also make the agents easier to install, manage, configure, and maintain compared to managing open-source agents.

What are the steps to instrumenting?
OpenTelemetry can be enabled with a few simple steps; here is an example for Java.

1. Enable OTel using the following configuration flags:

-Dappdynamics.opentelemetry.enabled=true
-Dotel.traces.exporter=otlp

2. Set your collector endpoint if the Collector is not running on the same host:

-Dotel.exporter.otlp.traces.endpoint=http://<your collector ip>:4317

3. Pass the additional resource attributes to be included; service.name is a required attribute:

-Dotel.resource.attributes="service.name=myServiceName,service.namespace=myServiceNameSpace"

A combined launch command is sketched at the end of this post. More information and exact steps, also for .NET and Node.js, can be found in the Instrument Applications with AppDynamics for OpenTelemetry™ documentation.

Additional resources:
- Enable OpenTelemetry in the Java Agent
- Enable OpenTelemetry in the .NET Agent
- Enable OpenTelemetry in the NodeJS Agent
- Instrument Ruby Application using OpenTelemetry
- AppDynamics Smart Agent
- Splunk Distribution of the OpenTelemetry Collector
- Splunk Observability Cloud
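Putting the flags from the steps above together, a Java launch command might look like the following sketch (the agent path and application jar are placeholders, not values from the documentation):

java -javaagent:/opt/appdynamics/javaagent.jar \
  -Dappdynamics.opentelemetry.enabled=true \
  -Dotel.traces.exporter=otlp \
  -Dotel.exporter.otlp.traces.endpoint=http://<your collector ip>:4317 \
  -Dotel.resource.attributes="service.name=myServiceName,service.namespace=myServiceNameSpace" \
  -jar myapp.jar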
Hello, I am working on monitoring whether someone has moved a file outside a specific folder inside a preset folder structure on a network, using data from a CSV source. Inside the CSV, I am evaluating two specific fields: Source_Directory and Destination_Directory.

I am trying to compare the two, going three folders deep in the file path, but I'm running into an issue when performing my rex command. The preset folder structure is "\\my.local\d\p\", pulled from the data set used. Within the folder "\p\" there are various folder names. I need to evaluate whether a folder path is different beyond the preset path of "\\my.local\d\p\...". I put in bold what a discrepancy would look like, if there is one.

Example data in the CSV:

Source_Directory                          Destination_Directory
\\my.local\d\p\prg1\folder1\bfolder       \\my.local\d\p\prg1\folder1\ffolder
\\my.local\d\p\prg2\folder1               \\my.local\d\p\prg2\folder2
\\my.local\d\p\prg1\folder2               \\my.local\d\p\prg2\folder1\xfolder\mfolder\
\\my.local\d\p\prg3\folder2\afolder       \\my.local\d\p\prg3\folder2
\\my.local\d\p\prg2\folder1               \\my.local\d\p\prg1\folder3

Output query I am trying to create:

Status      Source_Directory                          Destination_Directory
Same        \\my.local\d\p\prg1\folder1\bfolder       \\my.local\d\p\prg1\folder1\ffolder
Same        \\my.local\d\p\prg2\folder1               \\my.local\d\p\prg2\folder2
Different   \\my.local\d\p\prg1\folder2               \\my.local\d\p\prg2\folder1\xfolder\mfolder\
Same        \\my.local\d\p\prg3\folder2\afolder       \\my.local\d\p\prg3\folder2
Different   \\my.local\d\p\prg2\folder1               \\my.local\d\p\prg1\folder3

If a folder name is different after the preset "\\my.local\d\p\" path, I need that to show in the "Status" output. I have searched extensively on how to use the rex command in this instance with no luck, so I thought I would post my issue. Here is the search I have been trying to use:

host="my.local" source="file_source.csv" sourcetype="csv"
| eval src_dir = Source_Directory
| eval des_dir = Destination_Directory
| rex src_path = src_dir "(?<path>.*)\\\\\w*\.\w+$"
| rex des_path= des_dir "(?<path>.*)\\\\\w*\.\w+$"
| eval status = if (src_path = des_path, "Same", "Diffrent")
| table status, Source_Directory, Destination_Directory

Any assistance would be much appreciated.
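A minimal sketch of one approach that avoids rex entirely, assuming (based on the sample rows) that Same/Different is decided by the first folder under "\p\", which is the sixth path segment (index 5) after splitting on backslashes; the index would need adjusting for a different prefix depth:

host="my.local" source="file_source.csv" sourcetype="csv"
| eval src_parts = split(Source_Directory, "\\")
| eval dest_parts = split(Destination_Directory, "\\")
| eval Status = if(mvindex(src_parts, 5) == mvindex(dest_parts, 5), "Same", "Different")
| table Status, Source_Directory, Destination_Directory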