All Topics

Hi all, I am currently encountering an error that is affecting performance and causing delays in file processing:

01-17-2025 04:33:12.580 -0600 INFO TailReader [1853894 batchreader0] - Will retry path="/apps2.log" after deferring for 10000ms, initCRC changed after being queued (before=0x47710a7c475501b6, after=0x23c7e0f63f123bf1). File growth rate must be higher than indexing or forwarding rate.

01-17-2025 04:20:24.672 -0600 WARN TailReader [1544431 tailreader0] - Enqueuing a very large file=/apps2.log in the batch reader, with bytes_to_read=292732393, reading of other large files could be delayed

If anyone has insights or solutions to address this issue, I would greatly appreciate your assistance. Thank you.

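These messages say the file is growing faster than the forwarder can read and send it. One setting that is often involved in that situation is the forwarder's throughput cap; a minimal limits.conf sketch, with illustrative values rather than a recommendation:

# limits.conf on the universal forwarder
[thruput]
# the universal forwarder defaults to 256 KB/s; 0 removes the limit
maxKBps = 0
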
I am writing a log file on my host using the script below:

for ACCOUNT in "$TARGET_DIR"/*/; do
    if [ -d "$ACCOUNT" ]; then
        cd "$ACCOUNT"
        AccountId=$(basename "$ACCOUNT")
        AccountSize=$(du -sh . | awk '{print $1}')
        ProfilesSize=$(du -chd1 --exclude={events,segments,data_integrity,api} | tail -n1 | awk '{print $1}')
        NAT=$(curl -s ifconfig.me)
        echo "AccountId: $AccountId, TotalSize: $AccountSize, ProfilesSize: $ProfilesSize" >> "$LOG_FILE"
    fi
done

I have forwarded this log file to Splunk using the Splunk Universal Forwarder. The script appends a new log entry to the file after each loop iteration completes successfully. However, the logs are not being indexed with the correct timestamps, as shown in the attached screenshot: the events show timestamps from 2022, even though I only started sending them to Splunk on 17/01/2025. Additionally, the forwarder is sending some logs as single-line events and others as multi-line events. Could you explain why this is happening?

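Note that the echoed lines contain no timestamp at all, so Splunk has to guess one from the event text, and line merging is likewise decided at parsing time. A minimal props.conf sketch of the attributes usually involved; the sourcetype name account_sizes is hypothetical:

# props.conf (hypothetical sourcetype name)
[account_sizes]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# the events carry no timestamp of their own, so index them with the current time
DATETIME_CONFIG = CURRENT
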
Hello Splunk Community, I have a use case where we need to send metrics directly to Splunk instead of AWS CloudWatch, while still sending CPU and memory metrics to CloudWatch for auto-scaling purposes. Datadog offers solutions, such as their AgentCheck package (https://docs.datadoghq.com/developers/custom_checks/write_agent_check/), and their repository (https://github.com/DataDog/integrations-core) provides several integrations for similar use cases. Is there an equivalent solution or approach available in Splunk for achieving this functionality? Looking forward to your suggestions and guidance! Thanks!
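For comparison, the Splunk-native path for pushing custom metrics without going through CloudWatch is usually the HTTP Event Collector (HEC) against a metrics index; a minimal sketch, where the host name, token, and metric name are placeholders:

curl -k https://splunk.example.com:8088/services/collector \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "metric", "source": "custom_check", "host": "web-01", "fields": {"metric_name": "queue.depth", "_value": 42, "region": "us-east-1"}}'
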
Hello Team, When an organization has a hybrid deployment and is also using the Splunk Cloud service, can data be sent directly to Splunk Cloud? For example, if there is a SaaS application that only has an option to send logs over syslog, how can this be achieved while using Splunk Cloud? What are the options for data input here? If someone could elaborate, I would appreciate it. Thanking you in advance, regards, Moh

Hello everyone, this is the error message I am getting while forwarding data from the universal forwarder to the indexer. This is what I found in the error logs, and I am not able to understand it. Can anyone help me with this?

01-17-2025 06:32:15.605 +0000 INFO TailReader [1654 tailreader0] - Batch input finished reading file='/opt/splunkforwarder/var/spool/splunk/tracker.log'

I am unable to select an option from the dropdown or type anything (the first part of the URL) in the "Select Cloud Stack" field while creating a support case. The dropdown for adding the Cloud Stack Name seems to be stuck; I have tried other browsers too.

I am not able to see the file content in the indexer after restarting the universal forwarder. What can be the reason?

I am trying to execute the sample command in Splunk MLTK. For some reason, I get an error every time I run a stats command after the sample command:

index=_internal | sample partitions=3 seed=42 | stats count by action, partition_number

Search error:
Error in 'sample' command: The specified field name for the partition already exists: partition_number

I tried providing a different field name and it is still the same error. If I remove the stats command and run the same search multiple times, it works without any issues. What could be the reason?

Hello support team, we urgently need to update the Illumio logo here: https://splunkbase.splunk.com/app/3658 It would also be helpful to update the Illumio logo in any other locations throughout Splunk marketing materials.  Updated Illumio logos can be downloaded at the bottom of this page: https://illumio.frontify.com/d/VMEFDUaDvuv5/design-system#/design-system/logo Please contact me, Jacy, via PM for additional support.  
I don't see a create new token option under Settings > Token. Anyone else having this issue? Not sure if it's a permission-related issue, but others on the team also can't create a new token.

We have a custom dashboard in Splunk that has a few filters, one of which is a multiselect. This dashboard allows users to perform CRUD operations with POA&Ms. The multiselect in question lists all POA&M statuses that have been previously created, filtering the results displayed in the table. The filter works fine for searching results for the table. The issue is that if someone creates a new POA&M with a status that hasn't been used yet, i.e. "Closed", the page must be refreshed for the multiselect to execute the search powering it and display "Closed" as an option. Is there a way to "refresh" the multiselect with Javascript after a new POA&M is created? The POA&M CRUD operations are performed with JS and Python btw. Here's the XML of the multiselect for reference:  
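In case it helps frame answers, here is a sketch of the kind of SplunkJS call that could re-run the populating search after a POA&M is created; the search manager id status_search is hypothetical and would need to match whatever id the dashboard actually registers for that search:

require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // call this after the POA&M create request returns successfully
    function refreshStatusChoices() {
        var statusSearch = mvc.Components.get('status_search');
        if (statusSearch) {
            // re-dispatch the search that feeds the multiselect's choices
            statusSearch.startSearch();
        }
    }
    window.refreshStatusChoices = refreshStatusChoices;
});
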
So I have an index which contains "Starting iteration" on one event and "Stopping iteration" on another event. I want to get the time taken from event 1 to event 2, and if it is over 15 minutes I can set up an alert to warn me.

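A minimal SPL sketch of one common approach, assuming a hypothetical index name and that the two phrases appear in the raw events:

index=my_index ("Starting iteration" OR "Stopping iteration")
| transaction startswith="Starting iteration" endswith="Stopping iteration"
| where duration > 900
| table _time duration

The transaction command produces a duration field in seconds, so 900 corresponds to 15 minutes; an alert can then trigger whenever this search returns results.
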
Guide to Monitoring URLs with Authentication Using Splunk AppDynamics and Python

Monitoring URLs is an important part of your full-stack monitoring. Splunk AppDynamics lets you monitor URLs that require different kinds of authentication. In this article, we will create a simple URL protected by a username and password, and then monitor it using the AppDynamics Machine Agent.

Create a Simple API with Python (Flask)

Install Flask and Flask-HTTPAuth:

pip install flask flask-httpauth

Create the API. Save the following Python code to a file, e.g., basic_auth_api.py:

from flask import Flask, request, jsonify
from flask_httpauth import HTTPBasicAuth

app = Flask(__name__)
auth = HTTPBasicAuth()

# Dummy users for authentication
users = {
    "user1": "password123",
    "user2": "securepassword",
}

@auth.get_password
def get_pw(username):
    return users.get(username)

@app.route('/api/data', methods=['GET'])
@auth.login_required
def get_data():
    return jsonify({"message": f"Hello, {auth.username()}! Here is your data."})

if __name__ == '__main__':
    app.run(debug=True, port=5000)

Run the API. Start the server by running:

python basic_auth_api.py

Test the API. Use curl to access it:

curl -u user1:password123 http://127.0.0.1:5000/api/data

You should see a response like this:

{ "message": "Hello, user1! Here is your data." }

Install Machine Agent

You can install the Machine Agent as recommended here.

Setup URL Monitoring Extension

Clone the GitHub repo:

git clone https://github.com/Appdynamics/url-monitoring-extension.git
cd url-monitoring-extension

Download and install Apache Maven configured with Java 8 to build the extension artifact from source. You can check the Java version used by Maven with mvn -v or mvn --version. If your Maven is using some other Java version, download Java 8 for your platform and set JAVA_HOME before starting Maven.

Run the following in the url-monitoring-extension directory:

mvn clean install

Go into the target directory, copy UrlMonitor-2.2.1.zip into <MA-Home>/monitors/, and unzip it there:

cd target/
mv UrlMonitor-2.2.1.zip /opt/appdynamics/machine-agent/monitors
cd /opt/appdynamics/machine-agent/monitors
unzip UrlMonitor-2.2.1.zip

This creates a UrlMonitor directory inside the monitors folder.

Monitor the URL

Inside the UrlMonitor folder, edit the config.yml file. Under sites, I have added:

sites:
  - name: AppDynamics
    url: http://127.0.0.1:5000/api/data
    username: user1
    password: password123
    authType: BASIC

Change:

metricPrefix: "Custom Metrics|URL Monitor|"

Now all you need to do is start your Machine Agent again. Afterwards, you can see this URL monitor in your AppDynamics Controller.

Hi, I'm trying to get a query for a table containing all the indexes that do not have self storage attached, but I couldn't find anything useful. Does anyone have an idea of how to do it? Thanks!

I'm seeing hundreds of these errors in the internal splunkd logs:

01-16-2025 12:05:00.584 -0600 ERROR UserManagerPro [721361 SchedulerThread] - user="nobody" had no roles

Is this a known bug? I'm guessing knowledge objects with no owner defined are causing this. It's annoying because it fills the internal logs with noise. Is there an easy workaround without having to re-assign all objects that lack a valid owner?

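If it helps narrow down the objects involved, the saved-search metadata is queryable over REST; a sketch that lists scheduled searches whose owner is nobody (field names as exposed by the saved/searches endpoint):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1 eai:acl.owner="nobody"
| table title eai:acl.app eai:acl.owner
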
Hello, I wanted to know where I should keep the attribute KV_MODE=json to extract the JSON fields automatically. On the deployment server, the manager node, or the deployer? We have a props.conf in an app on the deployment server; the DS pushes that app to the manager node, and the manager distributes it to the peer nodes. Can I add this setting to that props.conf, or is there an alternative? Please suggest.

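For reference, the stanza itself looks the same wherever it ends up, and KV_MODE is a search-time extraction setting. A minimal props.conf sketch with a hypothetical sourcetype name:

# props.conf (hypothetical sourcetype)
[my_json_sourcetype]
KV_MODE = json
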
Agent Saturation: What and Whys

In application performance monitoring, saturation is defined as the total load on a system or how much of a given resource is consumed at a time. If saturation is at 100%, your system is running at 100% capacity. This is generally a bad thing.

Agent saturation is a similar concept. It represents the percentage of available system resources currently being monitored by an observability agent. 100% agent saturation means 100% of available resources are instrumented with observability agents, which is a great thing. When it comes to observability practices, agent saturation is how well a system is instrumented and can be represented by:

( instrumented resources / total resources ) x 100

Because greater visibility into system health and performance means proactive detection of issues, improved user experience, more efficient troubleshooting, decreased downtime, and countless other pluses, 100% agent saturation is the ultimate goal.

So why doesn't everyone get to 100% agent saturation for full system observability and magical unicorn application visibility status? It's challenging! Setting up observability agents across distributed applications and environments takes time. In ephemeral, dynamic systems already integrated into existing solutions, it can just be too much of a lift.

But the good news is that if you're already using Splunk (maybe for logging and security) there are quick and easy ways to improve your system observability. In this post, we're going to look at how to leverage the Splunk Add-on for OpenTelemetry Collector to gain a quick win when it comes to improving agent saturation.

Improve Agent Saturation with the Splunk Add-on for OpenTelemetry Collector

For Splunk Enterprise or Splunk Cloud Platform customers who ingest logs using universal forwarders, you can quickly improve agent saturation and deploy, update, and configure OpenTelemetry Collector agents in the same way you do any of your other technology add-ons (TAs). The Splunk Add-on for OpenTelemetry Collector leverages your existing Splunk Platform and Splunk Cloud deployment mechanisms (specifically the universal forwarder and the deployment server) to deploy the OpenTelemetry Collector and its capabilities for increased visibility into your system from Splunk Observability Cloud. The add-on is a version of the Splunk Distribution of the OpenTelemetry Collector that simplifies configuration, management, and data collection of metrics and traces.

This means OpenTelemetry instrumentation will out-of-the-box exist anywhere the universal forwarder is present for logs and security use cases, making it easier to instrument systems quickly and gain visibility into telemetry data from within Splunk Observability Cloud. This comprehensive system coverage also comes with out-of-the-box Collector content and configuration with Splunk-specific metadata and optimizations (like batching, compression, and efficient exporting), all preconfigured. This means that you can get answers using observability data faster, saving you time and effort.

Prerequisites for using the Splunk Add-on for OpenTelemetry Collector include:

- Splunk Universal Forwarder (version 8.x or 9.x on Windows or Linux)
- Splunk Observability Cloud
- Splunk Enterprise or Splunk Cloud or deployment server as forwarders
- (Optional) Install the deployment server if you plan to use it to push the Collector to multiple hosts

Getting started with the Splunk Add-on for OpenTelemetry Collector

The Splunk Add-on for OpenTelemetry Collector is available on Splunkbase like other TAs, and you can deploy it alongside universal forwarders using existing Splunk tools like the deployment server.

We have a Linux EC2 instance we're going to instrument, so we first need to download the Splunk Add-on for OpenTelemetry Collector from Splunkbase.

We'll unzip the package, create a local folder, and copy over the config credential files.

In Splunk Observability Cloud, we'll get the access token and the realm for our Splunk Observability Cloud organization. Your organization's realm can be found under your user's organizations.

Next, we set these values in our /local/access_token file.

We then need to make sure the Splunk Add-on for OpenTelemetry folder (Splunk_TA_otel) is in the deployment-apps folder on the deployment server instance.

We'll then move over to the deployment server UI in Splunk Enterprise to create the Splunk_TA_otel server class and add the relevant hosts along with the Splunk_TA_otel app. Once the TA is installed, make sure you check both Enable App and Restart Splunkd, and select Save.

That's it! If we now navigate to Splunk Observability Cloud, we'll see the telemetry data flowing in from our EC2 instance.

Wrap up

Increasing agent saturation and improving observability for comprehensive system insight can be quick and easy. Not sure how you're currently doing in terms of agent saturation? Check out our Measuring & Improving Observability-as-a-Service blog post to learn how to set KPIs on agent saturation. Ready to improve your agent saturation? Sign up for a Splunk Observability Cloud 14-day free trial, integrate the Splunk Add-on for OpenTelemetry Collector, and start on your journey to 100% agent saturation.

Resources

- Splunk Add-On for OpenTelemetry Collector
- Differences between the OpenTelemetry Collector and the Splunk Add-on for OpenTelemetry Collector
- Get started with Splunk Observability Cloud

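If you prefer configuration files over the forwarder management UI, the server class can also be declared directly in serverclass.conf on the deployment server; a minimal sketch, where the class name and whitelist pattern are hypothetical:

# serverclass.conf on the deployment server
[serverClass:otel_hosts]
whitelist.0 = ec2-app-*

[serverClass:otel_hosts:app:Splunk_TA_otel]
restartSplunkd = true
stateOnClient = enabled
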
Hi, I am trying to push the configuration bundle from the CM to the indexers. I keep getting the error message "Last Validate and Check Restart: Unsuccessful". The validation is done for one of the indexers, and it is still 'checking for restart' for the other two indexers. When I checked the last change date for all the indexers, only one of them has been updated and the other two have not, but this is the opposite of what is shown in the CM's UI. Regards, Pravin

Hi everyone! My goal is to create an alert that monitors ALL saved searches for any email address that no longer exists (mainly colleagues who have left the company or similar). My idea was to search for the same kind of Mail Delivery Subsystem pattern that appears when sending an email from Gmail (or any other provider) to a non-existent address. But I didn't find anything in the _internal index, nor with a rest call to the saved searches, and index=mail is empty. Any ideas?

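One part that can be sketched with reasonable confidence is listing the e-mail recipients configured on saved searches through the REST endpoint; checking which addresses still exist would then happen against a lookup of current employees or outside Splunk:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.email=1
| table title eai:acl.app eai:acl.owner action.email.to
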
Hello Hello! I'm trying to match the values from a lookup file, in this case Amazon CIDR values, against IP addresses that are dynamically retrieved from events, but I can't get it to work. The following is a snippet of what I have:

| append [| inputlookup cidr_aws.csv ]
| foreach CIDR [ eval matched_ip = if(cidrmatch(<<FIELD>>, ip_address), ip_address, null()) ]
| search matched_ip!=null
| table matched_ip, CIDR

Nothing is output from this, and if I remove the "| search matched_ip!=null" line I can see that the IP appears, which means it failed the cidrmatch comparison. After some experimenting I figured out that the whole thing works if I hardcode either the <<FIELD>> value or ip_address, as in the following two examples:

| append [| inputlookup cidr_aws.csv ]
| foreach CIDR [ eval matched_ip = if(cidrmatch("3.248.0.0/13", ip_address), ip_address, null()) ]
| search matched_ip!=null
| table matched_ip, CIDR, Country

or

| append [| inputlookup cidr_aws.csv ]
| foreach CIDR [ eval matched_ip = if(cidrmatch(<<FIELD>>, "3.248.163.69"), ip_address, null()) ]
| search matched_ip!=null
| table matched_ip, CIDR, Country

But this is not optimal, since it's supposed to be dynamic. Does anybody know how to solve this?

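An alternative worth considering instead of foreach/cidrmatch is a lookup defined with a CIDR match type, which lets Splunk do the subnet comparison per event. A sketch, assuming a lookup definition named cidr_aws over the same CSV with a CIDR column:

# transforms.conf (the lookup definition name is an assumption)
[cidr_aws]
filename = cidr_aws.csv
match_type = CIDR(CIDR)

... | lookup cidr_aws CIDR as ip_address OUTPUT CIDR as matched_cidr
    | where isnotnull(matched_cidr)
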