All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We're trying to customize the Mean Time to Triage and Mean Time to Resolution queries in the dashboard to filter for specific urgency or rule names. We were previously using the standalone Mission Control app before it was integrated into Enterprise Security. Reviewing the incident_updates_lookup table, it seems to have stopped updating both "urgency" and "rule_name" around the time we migrated to Enterprise Security's Mission Control. We can see older entries prior to that, but more recent ones are very infrequent. Does anyone know how to resolve this, or know of another way to filter?
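For reference, a rough sketch of one way to check which urgency and rule_name values are still being written to the lookup named above, assuming it can be read with inputlookup; the urgency and rule_name values are placeholders, not confirmed field values:

| inputlookup incident_updates_lookup
| search urgency="critical" OR rule_name="*Brute Force*"
| stats count by urgency rule_name

If the recent rows genuinely lack those fields, any filter built on them will drop the newer entries regardless of how the dashboard query is written.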
Hi, I need to create an investigation with SOAR. When I create the investigation, it doesn't link the Finding to the Investigation. Do you have a playbook that can help me with this feature?
Can you please share download links for the HF and Splunk Enterprise versions prior to 9.1 (i.e. 9.0.x), for both Linux and Windows? Thanks.
Hi everyone, I am in the process of installing Splunk UBA and have a question regarding the storage partitioning requirements. The official documentation (link below) states that separate physical disks, /dev/sdb and /dev/sdc, are required for specific mount points to ensure performance.

Documentation link: https://docs.splunk.com/Documentation/UBA/5.3.0/Install/InstallSingleServer#Prepare_the_disks

However, my current server is configured with a single physical disk (/dev/sda) that uses LVM to create multiple logical volumes. Here is my current lsblk output:

[zake@ubaserver]# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   2.7T  0 disk
├─sda1               8:1    0     1G  0 part /boot/efi
├─sda2               8:2    0     1G  0 part /boot
└─sda3               8:3    0   2.7T  0 part
  ├─rhel-root      253:0    0    80G  0 lvm  /
  ├─rhel-swap      253:1    0    16G  0 lvm  [SWAP]
  ├─rhel-var_vcap2 253:2    0     1T  0 lvm  /var/vcap2
  ├─rhel-var_vcap1 253:3    0     1T  0 lvm  /var/vcap1
  ├─rhel-home      253:4    0 118.8G  0 lvm  /home
  └─rhel-backup    253:5    0   500G  0 lvm  /backup
sr0                 11:0    1  1024M  0 rom

My question is: can my existing logical volumes, /dev/mapper/rhel-var_vcap1 and /dev/mapper/rhel-var_vcap2, be used as a substitute for the required /dev/sdb and /dev/sdc disks?

I understand the requirement for separate physical disks is likely due to I/O performance. Would using this LVM setup on a single disk be a supported configuration, or is adding two new physical/virtual disks a mandatory step?

Thank you for your guidance.
Hi everyone! I am new to Splunk and this is probably really easy for many of you. I am trying to left join a lookup with a source table.

I tried this initially and it looks great, but it's not displaying the total number of records contained in the lookup table. I need to display all records in the lookup, showing the matching records and a blank where there is no match in table1. The TempTableLookup.csv lookup just has one column called "NUMBER" with 7,500 records. table1 has NUMBER, ORIGINDATE and other columns which are not needed; table1 has 360,000 records. When I run this query I get 7,479 instead of the total 7,500. There are around 20+ records that either do not have an ORIGINDATE or whose lookup number does not exist in table1.

index=test sourcetype="table1"
| lookup TempTableLookup.csv NUMBER output NUMBER as matched_number
| where isnotnull(matched_number)
| table NUMBER ORIGINDATE

So I read that I need to do a left join, and I tried the following. It brings back all 7,500 records I want, but it is not bringing back the ORIGINDATE. Could someone please let me know what I am doing wrong in the second search? I know that left joins are not recommended, but I cannot think of any other way to get what I need.

| inputlookup TempTableLookup.csv
| join type=left NUMBER
    [ search index=test sourcetype="table1"
      | dedup NUMBER
      | fields NUMBER, ORIGINDATE ]
| table NUMBER ORIGINDATE

The output should look like:

NUMBER     ORIGINDATE
123456     01/10/2025
128544     05/05/2029

and so forth. I'd appreciate greatly any ideas on how to do this. Thank you in advance and have a great day, Diana
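A minimal join-free sketch of the same idea, assuming the field names given above (NUMBER in both sources, ORIGINDATE only in table1); it keeps every lookup row and leaves ORIGINDATE blank where there is no match:

index=test sourcetype="table1"
| stats latest(ORIGINDATE) as ORIGINDATE by NUMBER
| append
    [| inputlookup TempTableLookup.csv
     | eval in_lookup=1 ]
| stats values(ORIGINDATE) as ORIGINDATE max(in_lookup) as in_lookup by NUMBER
| where in_lookup=1
| fields - in_lookup
| table NUMBER ORIGINDATE

The second stats merges the indexed rows with the lookup rows by NUMBER, so numbers that exist only in the lookup survive with an empty ORIGINDATE, which is what a left join starting from the lookup would return.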
I need to configure a certain customer app to ingest files. Those files need an add-on which will convert them so they can be read by Splunk; they are in ckls format. I already have the add-on and I have already configured it in the deployment apps. How do I connect it with the customer app so the data shows on the dashboard?
I have a KPI for instance status:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host Status

Now val is 0 or 1 and the Status field is up or down. The split-by field is Instance host and the threshold is based on val. The alert triggers fine, but I want to put $result.Status$ in the email alert instead of $result.val$. However, I don't see the field Status in the tracked alerts. How can I make the Status field show up in the tracked alerts index or in the generated events, so that I can use it in my email? (This is to avoid confusion: instead of saying 0 or 1 it will say up or down.)
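One workaround sketch, assuming val does survive into the tracked alert results (the post indicates $result.val$ works): derive a readable label from val in the search that feeds the email action, so the token always exists in the final result row.

... existing alert search ...
| eval Status=if(val=1,"up","down")

The email can then reference $result.Status$, since the field is recomputed from val in the final results rather than relying on the original Status field being carried through.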
We have logs already in CloudWatch. What is the best way to get the logs from CloudWatch into Splunk on-prem? We also have a VPN established between them. Based on this, are there any add-ons, or other viable solutions besides add-ons? If yes, any details/steps would be appreciated.
Hi there, I want to point the secondary storage of my Splunk indexer at a different kind of storage, such as cloud storage, mixing it with local disk. So it would look something like this; the first stanza is the common local volume:

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path

Is there a mechanism or reference for doing this?
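For reference, remote object storage in indexes.conf is normally wired up through SmartStore's remotePath rather than as a cold-path volume. A minimal sketch under that assumption (the bucket name, endpoint, and index name are placeholders), noting that SmartStore then manages warm/cold data itself instead of mixing a local cold tier with a remote one:

# indexes.conf (sketch)
[volume:s3volume]
storageType = remote
path = s3://<bucketname>/rest/of/path
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

[my_index]
homePath   = $SPLUNK_DB/my_index/db
coldPath   = $SPLUNK_DB/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
remotePath = volume:s3volume/$_index_name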
In the AppD metric browser for Java apps, there is a metric called Allocated-Objects (MB). I thought I understood what this was, but I'm getting some unexpected results after making a code change.

We had a service whose allocation rate was too high for the request volume, IMO, so we took a 1-minute flight recording in a test environment. Total allocation, according to the flight recording samples, was around 35GB. Based on where the samples were coming from, we made a code change. When we retested, the total allocation for the same 1-minute test was only 9GB, approximately 75% less.

When we deployed the change and re-ran our endurance test, we saw an object allocation rate that was only slightly lower than the baseline. Dividing the allocation rate by the request volume, the number had only gone down from 12MB/req to 10MB/req.

We do not have verbose GC enabled, so I can't check against that.

What could be causing the numbers to be so similar? Is the allocation rate reported by AppD reliable?

Thanks
<form version="1.1" theme="light">
  <label>Dashboard</label>
  <!-- Hidden base search for dropdowns -->
  <search id="base_search">
    <query>
      index=$index$ ----------
    </query>
    <earliest>$time_tok.earliest$</earliest>
    <latest>$time_tok.latest$</latest>
  </search>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel>
      <html>
        <p>⚠️ Kindly avoid running the Dashboard for extended time frames <b>(Last 30 days)</b> unless absolutely necessary, as it may impact performance.</p>
        <p>To get started, please make sure to select your <b>Index Name</b> - this is required to display the dashboard data.</p>
      </html>
    </panel>
  </row>

This is how I am writing the description, but I am not satisfied because it is not eye-catching. I want the user to see this note first when they open the dashboard. I am not familiar with HTML either. Can someone help me? I copied the icon from Google and it appears small in the dashboard.
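One possible way to make the note more prominent, as a rough sketch only: replace the plain paragraphs in the html panel with a styled banner. Inline styles generally work in Simple XML html panels, though some Splunk versions sanitize certain attributes, so treat the exact styling as an assumption to test.

<row>
  <panel>
    <html>
      <div style="background-color:#fff3cd; border-left:8px solid #f8be34; padding:12px 16px; font-size:16px;">
        <p style="margin:0; font-weight:bold;">⚠️ Select your <b>Index Name</b> first - it is required to display the dashboard data.</p>
        <p style="margin:6px 0 0 0;">Kindly avoid running the dashboard for extended time frames (Last 30 days) unless absolutely necessary, as it may impact performance.</p>
      </div>
    </html>
  </panel>
</row>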
Hi all, I am building synthetic monitoring in Observability Cloud for an online shop. One thing I want to monitor is whether or not an item's stock status is correct. I have a synthetic step "Save return value from JavaScript" to this effect:

function checkStockStatus() {
  if (document.body.textContent.includes("OUT OF STOCK")) {
    return "oos";
  }
  return "instock";
}
checkStockStatus();

I am storing this in a variable named stockStatus. Is there any way I can use the value of this variable in a dashboard, or to trigger an alert? For example, say I am selling a golden lamp and it gets sold out; how can I get a dashboard to show "oos" somewhere?

Thanks
Hello, I need to send all syslog data from OPNsense to a specific index. As this is not a known vendor source, what is the easiest way to do this? I have seen that you can create a parser based on certain factors, so I'm guessing that would be the easiest way? If so, does anyone have a good parser guide?
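A minimal sketch of one common approach, assuming the syslog traffic arrives on a dedicated network input on a forwarder or indexer (the port, sourcetype, and index names below are placeholders); a dedicated input makes index routing trivial without any parsing rules, and field extraction can then be layered onto the sourcetype separately:

# inputs.conf (on the instance receiving the syslog feed)
[udp://5140]
sourcetype = opnsense:syslog
index = opnsense

# indexes.conf (on the indexers)
[opnsense]
homePath   = $SPLUNK_DB/opnsense/db
coldPath   = $SPLUNK_DB/opnsense/colddb
thawedPath = $SPLUNK_DB/opnsense/thaweddb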
I have an issue transforming data and extracting field values. Here is my sample data:

2025-07-20T10:15:30+08:00 h1 test[123456]: {"data": {"a": 1, "b": 2, "c": 3}}

The data has a timestamp and other information at the beginning, and the data dictionary at the end. I want my data to go into Splunk in JSON format. Other than the data, what I need is the timestamp, so I created a transform to pick only the data dictionary and move the timestamp into that dictionary. Here are my transforms:

[test_add_timestamp]
DEST_KEY = _raw
REGEX = ([^\s]+)[^:]+:\s*(.+)}
FORMAT = $2, "timestamp": "$1"}
LOOKAHEAD = 32768

Here is my props to use the transforms:

[test_log]
SHOULD_LINEMERGE = false
TRUNCATE = 0
KV_MODE = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
disabled = false
pulldown_type = true
TIME_PREFIX = timestamp
MAX_TIMESTAMP_LOOKAHEAD = 100
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = UTC
TRANSFORMS-timestamp = test_add_timestamp

After the transform, the data looks like this:

{"data": {"a": 1, "b": 2, "c": 3}, "timestamp": "2025-07-20T10:15:30+08:00"}

But when I search the data in Splunk, why do I also see "none" as a value in timestamp? Another thing I noticed: in my Splunk index, which has a lot of data, a few events have this timestamp extracted and most of them have no timestamp, which is fine. But when I click "timestamp" under interesting fields, why is it showing only "none"? I also noticed some of the JSON keys are not available under "interesting fields". What is the logic behind this?
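As a quick sanity check at search time, a hedged sketch (the index name is a placeholder) that parses the rewritten _raw explicitly instead of relying on automatic KV_MODE=json extraction; if spath sees the fields but the field picker does not, the issue is on the search-time extraction/sampling side rather than in the index-time transform:

index=your_index sourcetype=test_log
| spath
| table _time timestamp data.a data.b data.c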
Hello all, below is my dataset from a base query. How can I calculate the average value of the column?

Incident   avg_time              days                   hrs                   minutes
P1         1 hour, 29 minutes    0.06181041204508532    1.4834498890820478    29.00699334492286
P2         1 hour, 18 minutes    0.05428940107829018    1.3029456258789642    18.176737552737862

I need to display the average of the avg_time values. Since date/time is involved, simply doing the following is not working:

stats avg(avg_time) as average_ttc_overall
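A minimal sketch of one way to do this, assuming the days column holds the full duration expressed in days (so it can be averaged as a number and formatted afterwards):

<your base query>
| stats avg(days) as avg_days
| eval avg_secs = round(avg_days * 86400)
| eval average_ttc_overall = tostring(avg_secs, "duration")

The key point is to average a numeric duration field and only convert it to a human-readable string at the very end (here with tostring(..., "duration"), which renders seconds as HH:MM:SS); a pre-formatted string such as "1 hour, 29 minutes" cannot be averaged directly.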
The "configuration" page that the Add On Builder has created for my add on isn't matching the additional parameters that I've added for my alert action. Instead, the configuration page seems to somehow show the parameters I used for a prior version. I've checked the global config json file and everywhere else I could think of and they all reflect the parameters for the new version. Despite that, the UI still shows the old parameters. Does anyone have any idea why or where else I could check?
Hi all, when upgrading from v9.4.1 to a newer version (including 10) on macOS (ARM) I receive the error message:

-> Currently configured KVSTore database path="/Users/andreas/splunk/var/lib/splunk/kvstore"
-> Currently used KVSTore version=4.2.22. Expected version=4.2 or version=7.0
CPU Vendor: GenuineIntel
CPU Family: 6
CPU Model: 44
CPU Brand: \x
AVX Support: No
SSE4.2 Support: Yes
AES-NI Support: Yes

There seems to be an issue with determining AVX correctly through Rosetta?! Anyway, I tried to upgrade on v9.4.1 using

~/splunk/bin/splunk start-standalone-upgrade kvstore -version 7.0 -dryRun true

and receive the error:

In handler 'kvstoreupgrade': Missing Mongod Binaries :: /Users/andreas/splunk/bin/mongod-7.0; /Users/andreas/splunk/bin/mongod-6.0; /Users/andreas/splunk/bin/mongod-5.0; /Users/andreas/splunk/bin/mongod-4.4; Please make sure they are present under :: /Users/andreas/splunk/bin before proceeding with upgrade.
Upgrade Path = /Users/andreas/splunk/bin/mongod_upgrade not found
Please make sure upgrade tool binary exists under /Users/andreas/splunk/bin

The complaint that mongod-4.4, mongod-5.0, mongod-6.0 and mongod-7.0 are missing is correct; the files are not there. They are not in the delivered Splunk .tgz for macOS, while the Linux tarball includes them.

Any hints?

Best regards, Andreas
I'm at a loss and hoping for an assist. Running a distributed Splunk instance, I used the deployment server to push props.conf and transforms.conf to my heavy forwarders to drop specific external customer logs at the HF.

We're receiving logs from several external customers, each with their own index. I'm in the process of dividing each customer into sub-indexes like {customer}-network, {customer}-authentication and {customer}-syslog. Yes, I'm trying to dump all Linux syslog. This is a temporary move while their syslog is flooding millions of errors, until we're able to finish moving them to their new {customer}-syslog index. I did inform them and they're working on it, with no ETA.

I've been over a dozen posts on the boards, I've asked two different AIs how to do this backwards and forwards, and I've triple-checked spelling, placement and permissions. I tried pushing the configs to the indexers from the cluster manager and that didn't work either. I created copies of the configs in ~/etc/system/local/ and no dice. I've done similar in my lab with success. I verified the customer inputs.conf is declaring the sourcetype as linux_messages_syslog. I'm at a total loss as to why this isn't working.

props.conf:

[linux_messages_syslog]
TRANSFORMS-dropLog = dropLog

transforms.conf:

[dropLog]
REGEX = (?s)^.*$
DEST_KEY = queue
FORMAT = nullQueue

Anyone have any idea what gotcha I'm getting got by?
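One quick check, sketched under the assumption of a default install path: confirm with btool on the heavy forwarder that the stanzas actually survive the deployment and are not being shadowed by another app (with --debug, btool prints the file each winning setting came from):

$SPLUNK_HOME/bin/splunk btool props list linux_messages_syslog --debug
$SPLUNK_HOME/bin/splunk btool transforms list dropLog --debug

Another gotcha worth mentioning: queue-routing transforms only apply where the data is first parsed, so if the events reach that HF already cooked/parsed by another full Splunk instance, the drop rule on the HF will not take effect.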
We currently have email as a trigger action for Searches, Reports and Alerts. The issue arises when we try to email certain company email addresses, because those addresses are configured to only allow internal email messages (like a distribution-list type email address). The email coming from Splunk Cloud is from alerts@splunkcloud.com. We would prefer not to make internal email addresses accept external email. There is no way to configure the "From" address in the Triggered Actions section. Ideally, what was proposed was that we somehow configure Splunk to send the email as if it came from an internal service email address for our company. I found some documentation on email configuration; however, where I would insert an internal email address to be the "From", the documentation states: "Send email as: This value is set by your Splunk Cloud Platform implementation and cannot be changed. Entering a value in this field has no effect." Any suggestions on how to accomplish this without too much time investment?
Hello everyone, I'm trying to track all the resources loaded on a page, specifically the ones that appear in the browserResourceRecord index. Right now, I only see a portion of the data, and the captured entries seem completely random.

My final goal is to correlate a browser_record session (via cguid) with its corresponding entries in browserResourceRecord. Currently, I'm able to do the reverse: occasionally, a page is randomly captured in browserResourceRecord, and I can trace it back to the session it belongs to. But I can't do the opposite, which is what I actually need.

I've tried various things in the RUM script. My most recent changes involved setting the following capture parameters:

config.resTiming = {
  sampler: "TopN",
  maxNum: 500,
  bufSize: 500,
  clearResTimingOnBeaconSend: true
};

Unfortunately, this hasn't worked either. I also suspected that resources only appear when they violate the Resource Performance thresholds, so I configured extremely low thresholds, but this didn't help either.

What I'd really like is to have access to something similar to a HAR file, with all the resource information, and make it available via Analytics so I can download and compare it. Unfortunately, the session waterfall isn't downloadable, which is a major limitation.

Thank you, Marco.