All Topics



I have a KPI for Instance status:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host Status

Now val is 0 or 1, and the Status field is Up or Down. The split-by field is Instance host, and the threshold is based on val. The alert triggers fine, but in the email alert I want to use $result.Status$ instead of $result.val$. However, I don't see the field Status in tracked alerts. How can I make the Status field show up in the tracked alerts index or in the generated events, so that I can use it in my email? (This is to avoid confusion: instead of saying 0 or 1, it will say up or down.)
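A minimal SPL sketch of one way to keep a human-readable status in the final results - dropping Status from the by clause and re-deriving it afterwards is an assumption about what fits this KPI, not a confirmed fix:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host
| eval Status=if(val=1,"up","down")

Because Status is derived after the stats, it is present in every result row, which is what the $result.Status$ token reads from.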
We already have logs in CloudWatch. What is the best way to get the logs from CloudWatch into Splunk on-prem? We also have a VPN established between them. Given this, are there any add-ons, or viable solutions other than add-ons? If yes, any details/steps, etc.?
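One common route is the Splunk Add-on for AWS, which includes a CloudWatch Logs input. A minimal inputs.conf sketch, assuming the add-on is installed on a heavy forwarder with an AWS account already configured in it - the stanza and parameter names here are from memory and should be verified against the add-on documentation for your version:

[aws_cloudwatch_logs://my_cloudwatch_input]
aws_account = my_aws_account
groups = /my/log/group
index = aws_logs
sourcetype = aws:cloudwatchlogs
interval = 600

The HF pulls over the VPN via the CloudWatch Logs API and forwards to the on-prem indexers, so no inbound connectivity from AWS is required.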
Hi there, I want to point the secondary storage of my Splunk indexer to a mix with another storage, like pointing it to cloud storage. So it would look like this (this one is the common setup):

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

Is there a mechanism or reference for doing this?
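A minimal indexes.conf sketch of how a remote volume is normally wired in via SmartStore - index and bucket names are placeholders, and note that SmartStore replaces warm/cold volume tiering for that index rather than acting as a plain secondary volume:

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:s3volume/$_index_name

Once remotePath is set, warm buckets are uploaded to the remote store and the local disk becomes a cache, so classic local volume mixing no longer applies to that index.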
In the AppD metric browser for Java apps, there is a metric called Allocated-Objects (MB). I thought I understood what this was, but I'm getting some unexpected results after making a code change.

We had a service that had an allocation rate that was too high for the request volume, IMO, so we took a 1-minute flight recording in a test environment. Total allocation, according to the flight recording samples, was around 35GB. Based on where the samples were coming from, we made a code change. When we retested, the total allocation for the same 1-minute test was only 9GB, approximately 75% less.

When we deployed the change and re-ran our endurance test, we saw an object allocation rate that was only slightly lower than the baseline. Dividing the allocation rate by the request volume, the number had only gone down from 12MB/req to 10MB/req.

We do not have verbose GC enabled, so I can't check against that.

What could be causing the numbers to be so similar? Is the allocation rate reported by AppD reliable?

Thanks
<form version="1.1" theme="light"> <label>Dashboard</label> <!-- Hidden base search for dropdowns --> <search id="base_search"> <query> index=$index$ ---------- </query> <earliest>$time_tok.earliest$... See more...
<form version="1.1" theme="light"> <label>Dashboard</label> <!-- Hidden base search for dropdowns --> <search id="base_search"> <query> index=$index$ ---------- </query> <earliest>$time_tok.earliest$</earliest> <latest>$time_tok.latest$</latest> </search> <fieldset submitButton="false"></fieldset> <row> <panel> <html> <p>⚠️ Kindly avoid running the Dashboard for extended time frames <b>(Last 30 days)</b> unless absolutely necessary, as it may impact performance.</p> <p>To get started, Please make sure to select your <b>Index Name</b> - this is required to display the dashboard data </p> </html> </panel> </row> This is how I am writing the description. But I am not satisfied because it is not eye catchy. When the user opens the dashboard he should see this note first, i want in that way. I am not aware of HTML as well. Can some one help me. Copied icon from google and it seems small in dashboard.  
Hi All,

I am building synthetic monitoring in Observability Cloud for an online shop. One thing I want to monitor is whether or not an item's stock status is correct. I have a synthetic step "Save return value from JavaScript" to this effect:

function checkStockStatus() {
  if (document.body.textContent.includes("OUT OF STOCK")) {
    return "oos";
  }
  return "instock";
}
checkStockStatus();

I am storing this in a variable named stockStatus. Is there any way I can use the value of this variable in a dashboard, or to trigger an alert? For example, say I am selling a golden lamp and it gets sold out - how can I get a dashboard to show "oos" somewhere?

Thanks
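One workaround sketch, assuming a thrown error in the JavaScript step fails the step and therefore the run - an assumption worth verifying in your Synthetics version: make the out-of-stock condition fail the check, so the built-in run-status metrics, dashboards, and detectors surface it without needing the custom variable.

function checkStockStatus() {
  if (document.body.textContent.includes("OUT OF STOCK")) {
    // Failing the step makes the condition visible via standard run-status alerting.
    throw new Error("Item is OUT OF STOCK");
  }
  return "instock";
}
checkStockStatus();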
Hello,

I need to send all syslog data from OPNsense to a specific index. As this is not a known vendor source, what is the easiest way to do this? I have seen that you can create a parser based on certain factors, so I'm guessing this would be the easiest way? If so, does anyone have a good parser guide?
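A minimal inputs.conf sketch for routing a syslog feed to a dedicated index on the receiving Splunk instance - the port, index, and sourcetype names are placeholders:

[udp://5514]
index = opnsense
sourcetype = opnsense:syslog
no_appending_timestamp = true

Any parsing (line breaking, field extractions) can then be keyed off the opnsense:syslog sourcetype in props.conf/transforms.conf.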
I have an issue transforming data and extracting field values. Here is my sample data:

2025-07-20T10:15:30+08:00 h1 test[123456]: {"data": {"a": 1, "b": 2, "c": 3}}

The data has a timestamp and other information at the beginning, and the data dictionary at the end. I want my data to go into Splunk in JSON format. Other than data, what I need is the timestamp. So I created a transform to pick only the data dictionary and move the timestamp into that dictionary. Here are my transforms:

[test_add_timestamp]
DEST_KEY = _raw
REGEX = ([^\s]+)[^:]+:\s*(.+)}
FORMAT = $2, "timestamp": "$1"}
LOOKAHEAD = 32768

Here is my props to use the transforms:

[test_log]
SHOULD_LINEMERGE = false
TRUNCATE = 0
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true
TIME_PREFIX = timestamp
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
TRANSFORMS-timestamp = test_add_timestamp

After the transform, the data will look like this:

{"data": {"a": 1, "b": 2, "c": 3}, "timestamp": "2025-07-20T10:15:30+08:00"}

But when I search the data in Splunk, why do I see "none" as a value in timestamp as well? Another thing I noticed: in my Splunk index that has a lot of data, I can see a few events with this timestamp extracted, while most of them have no timestamp, which is fine. But when I click "timestamp" under interesting fields, why is it showing only "none"? I also noticed some of the JSON keys are not available under "interesting fields". What is the logic behind this?
Hello All,

Below is my dataset from a base query. How can I calculate the average value of the column?

Incident | avg_time           | days                | hrs                | minutes
P1       | 1 hour, 29 minutes | 0.06181041204508532 | 1.4834498890820478 | 29.00699334492286
P2       | 1 hour, 18 minutes | 0.05428940107829018 | 1.3029456258789642 | 18.176737552737862

I need to display the average of the avg_time values. Since there is date/time involved, merely doing the function below is not working:

stats avg(avg_time) as average_ttc_overall
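A minimal SPL sketch of one approach, assuming the numeric hrs column from the base query is available - average the numeric value, then format it back into a readable duration:

...
| stats avg(hrs) as avg_hrs
| eval average_ttc_overall = tostring(round(avg_hrs * 3600), "duration")

avg() needs a numeric field, which is why it returns nothing on a string like "1 hour, 29 minutes"; tostring(<seconds>, "duration") renders the result as HH:MM:SS.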
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to someh... See more...
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to somehow show the parameters I used for a prior version. I've checked the global config json file and everywhere else I could think of and they all reflect the parameters for the new version. Despite that, the UI still shows the old parameters. Does anyone have any idea why or where else I could check?
Hi all,

When upgrading from v9.4.1 to a newer version (including 10) on macOS (ARM), I receive the error message:

-> Currently configured KVStore database path="/Users/andreas/splunk/var/lib/splunk/kvstore"
-> Currently used KVStore version=4.2.22. Expected version=4.2 or version=7.0
CPU Vendor: GenuineIntel
CPU Family: 6
CPU Model: 44
CPU Brand: \x
AVX Support: No
SSE4.2 Support: Yes
AES-NI Support: Yes

There seems to be an issue with determining AVX correctly through Rosetta?! Anyway, I tried to upgrade on v9.4.1 using

~/splunk/bin/splunk start-standalone-upgrade kvstore -version 7.0 -dryRun true

and receive the error:

In handler 'kvstoreupgrade': Missing Mongod Binaries :: /Users/andreas/splunk/bin/mongod-7.0; /Users/andreas/splunk/bin/mongod-6.0; /Users/andreas/splunk/bin/mongod-5.0; /Users/andreas/splunk/bin/mongod-4.4; Please make sure they are present under :: /Users/andreas/splunk/bin before proceeding with upgrade.
Upgrade Path = /Users/andreas/splunk/bin/mongod_upgrade not found
Please make sure upgrade tool binary exists under /Users/andreas/splunk/bin

The error that mongod-4.4, mongod-5.0, mongod-6.0, and mongod-7.0 are missing is correct - the files are not there. They are not in the delivered Splunk .tgz for macOS; the Linux tarball includes them.

Any hints?

Best regards,
Andreas
I'm at a loss and hoping for an assist.

Running a distributed Splunk instance, I used the deployment server to push props.conf and transforms.conf to my heavy forwarders to drop specific external customer logs at the HF.

We're receiving logs from several external customers, each with their own index. I'm in the process of dividing each customer into sub-indexes like {customer}-network, {customer}-authentication, and {customer}-syslog. Yes, I'm trying to dump all Linux syslog. This is a temporary move while their syslog is flooding millions of errors, before we're able to finish moving them to their new {customer}-syslog index. I did inform them and they're working it, with no ETA.

I've been over a dozen posts on the boards, I've asked two different AIs how to do this backwards and forwards, and I've triple-checked spelling, placement, and permissions. I tried pushing the configs to the indexers from the cluster manager and that didn't work either. I created copies of the configs in ~/etc/system/local/ and no dice. I've done similar in my lab with success. I verified the customer inputs.conf is declaring the sourcetype as linux_messages_syslog.

I'm at a total loss as to why this isn't working.

props.conf:

[linux_messages_syslog]
TRANSFORMS-dropLog = dropLog

transforms.conf:

[dropLog]
REGEX = (?s)^.*$
DEST_KEY = queue
FORMAT = nullQueue

Anyone have any idea what gotcha I'm getting got by?
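A quick sanity-check sketch, assuming shell access on the heavy forwarder - btool shows which props/transforms settings actually won after config layering, which often exposes a precedence or sourcetype-rename gotcha:

$SPLUNK_HOME/bin/splunk btool props list linux_messages_syslog --debug
$SPLUNK_HOME/bin/splunk btool transforms list dropLog --debug

Also note that parsing-phase TRANSFORMS only run where the data is first parsed: if these events arrive at this HF already cooked (e.g. from another HF upstream), the nullQueue routing will never fire here.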
We currently have email as a trigger action for searches, reports, and alerts. The issue arises when we try to email certain company email addresses, because the address is configured to only allow internal email messages (like a distribution-list type email address). The email coming from Splunk Cloud is from alerts@splunkcloud.com. We would prefer not to make internal email addresses allow receipt of external emails.

There is no way to configure the "From" address in the Triggered Actions section. Ideally, what was proposed was that we somehow configure Splunk to send the email as if it came from an internal service email address for our company. I found some documentation on email configuration; however, where I would insert an internal email address to be the "From", the documentation states: "Send email as: This value is set by your Splunk Cloud Platform implementation and cannot be changed. Entering a value in this field has no effect."

Any suggestions on how to accomplish this without too much time investment?
Hello everyone,

I'm trying to track all the resources loaded on a page - specifically, the ones that appear in the browserResourceRecord index. Right now, I only see a portion of the data, and the captured entries seem completely random.

My final goal is to correlate a browser_record session (via cguid) with its corresponding entries in browserResourceRecord. Currently, I'm able to do the reverse: occasionally, a page is randomly captured in browserResourceRecord, and I can trace it back to the session it belongs to. But I can't do the opposite - which is what I actually need.

I've tried various things in the RUM script. My most recent changes involved setting the following capture parameters:

config.resTiming = {
  sampler: "TopN",
  maxNum: 500,
  bufSize: 500,
  clearResTimingOnBeaconSend: true
};

Unfortunately, this hasn't worked either. I also suspected that resources only appear when they violate the resource performance thresholds, so I configured extremely low thresholds - but this didn't help either.

What I'd really like is to have access to something similar to a HAR file - with all the resource information - and make it available via Analytics, so I can download and compare it. Unfortunately, the session waterfall isn't downloadable - which is a major limitation.

Thank you,
Marco.
Add-on Builder 4.5.0. This app adds a data input automatically. This is a good thing; I then go to "Add new" to complete the configuration. Everything is running well.

A few days later, I thought of a better name for the input. I cloned the original, gave it a different name, and kept all the same config. I disabled the original. I noticed that I can still run the script and see the API output, but when I searched for the output, I did not find it. I started to see 401 errors instead. I went back to the data inputs, disabled the clone, enabled the original, and all is back to normal.

Is there a rule about cloning data inputs in Add-on Builder that says not to clone?
Add-on Builder 4.5.0, modular input using my Python code.

In this example the collection interval is set to 30 seconds. I added a log to verify it is running here:

log_file = "/opt/splunk/etc/apps/TA-api1/logs/vosfin_cli.log"

The main page (Configure Data Collection) shows all the input names that I built. But looking at the event count, I see 0. When I go into the log, it shows the input running and giving me data OK. Why doesn't the event count go up every time the script runs? Is there additional configuration in inputs, props, or web.conf that I need to add/edit to make it count up?
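A minimal sketch of the event-writing path Add-on Builder's generated collect_events() expects - the event count only reflects events emitted through the event writer, not data that is merely logged to a file; the payload, source, and sourcetype values here are placeholders:

import json

def collect_events(helper, ew):
    payload = {"example": "api response"}  # hypothetical API result
    event = helper.new_event(
        data=json.dumps(payload),
        index=helper.get_output_index(),
        source=helper.get_input_type(),
        sourcetype=helper.get_sourcetype())
    ew.write_event(event)  # this is what gets indexed and counted

If the script only writes to its own log file and never calls ew.write_event(), nothing is indexed, so the count stays at 0.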
Hi team,

I am on Splunk 9.4 and have configured DB Connect. The SQL query searches for any failures in the table and passes the result to a Splunk search. I configured a real-time alert to send the log details to my email address. However, the emails are landing in the junk folder, and I'm not able to figure out why. Any help is appreciated.
What I am following is as follows:

1. Log into the Monitoring Console:
- Log in to the Splunk Cloud UI.
- Search for Cloud Monitoring Console under Apps.

2. Check indexing health:
- Check indexing performance: go to Indexing -> Indexing Performance.
- Review ingestion rate trends.
- Identify queue buildup (parsing, indexing, or pipeline queues).

3. Monitor data inputs:
- Go to Forwarders -> Forwarder Deployment.
- Check forwarder connectivity and status.
- Confirm data forwarding from universal forwarders or heavy forwarders.

What other steps can be included in this? (For a queue-level check, see the search sketch below.)
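A minimal SPL sketch for inspecting queue fill directly from internal metrics, as a supplement to the console views - the "near 100%" interpretation is a rule of thumb, not an official cutoff:

index=_internal source=*metrics.log* group=queue
| timechart span=5m perc90(eval(current_size_kb/max_size_kb*100)) as pct_full by name

Sustained values near 100% for a queue (e.g. parsingqueue, indexqueue) point to a bottleneck at that stage of the pipeline.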
Hi all,

I'm working on a Splunk SOAR connector where we plan to add support for webhooks (introduced in SOAR v6.4.1), allowing the connector to receive data from external sources. I see there's an option to enable authentication for the webhook, but after enabling it, I'm unsure what type of information needs to be included in the request. I've tried using basic authentication and an auth token, but neither worked. Could someone please guide me on what information should be included in the request once authentication is enabled?
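A sketch of what a token-authenticated request might look like, assuming webhook authentication follows the same ph-auth-token header convention as the SOAR REST API - both that assumption and the URL shape are unverified and should be checked against the webhook documentation for your SOAR version:

curl -X POST "https://<soar-host>/<webhook-handler-path>" \
     -H "ph-auth-token: <automation-user-token>" \
     -H "Content-Type: application/json" \
     -d '{"key": "value"}'

If it still fails, comparing the SOAR daemon logs for the request may show whether it was rejected at the auth layer or reached the connector's webhook handler.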
I am trying to display raw logs in a dashboard, but it is removing the raw logs. Is there a way to display them? In a standard search it shows the raw logs, but not in the dashboard.

Sample query:

index=*
| eval device = coalesce(dvc, device_name)
| eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false")
| where is_valid_str="false"
| stats count by device, index, _raw
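A minimal SPL sketch of a common workaround, on the assumption that the dashboard table is suppressing the underscore-prefixed _raw field (dashboard visualizations often hide internal fields that plain search results still show):

index=*
| eval device = coalesce(dvc, device_name)
| where NOT match(device, "^[a-zA-Z0-9_\-.,$]*$")
| stats count by device, index, _raw
| rename _raw as raw_event

Renaming _raw to a non-underscore field such as raw_event lets the dashboard table render it like any other column.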