All Topics

I'm running Splunk in a Red Hat Linux environment and trying to collect logs generated by the auditd service. I could simply put a monitor on "/var/log/audit/audit.log", but the lines of that file aren't organized so that the records from a specific event sit together, so I would end up having to correlate them on the back end. I'd rather use ausearch in a scripted input to correlate the records on the front end and provide clear delimiters (----) separating events before reporting them to the central server.

Obviously, I don't want the entirety of all logged events being reported each time the script runs. I just want the script to report any new events since the last run, and the "--checkpoint" parameter ought to be useful for that purpose. Here's the script that I'm using:

path_to_checkpoint=$( realpath "$( dirname "$0" )/../metadata/checkpoint" )
path_to_temp_file=$( realpath "$( dirname "$0" )/../metadata/temp" )
/usr/sbin/ausearch --input-logs --checkpoint $path_to_checkpoint > $path_to_temp_file 2>&1
output_code="$?"
chmod 777 $path_to_checkpoint
if [ "$output_code" -eq "0" ]; then
        cat $path_to_temp_file
fi
echo "" >> $path_to_temp_file
date >> $path_to_temp_file
echo "" >> $path_to_temp_file
echo $output_code >> $path_to_temp_file

It works just fine in the first round, when the checkpoint doesn't exist yet and is generated for the first time, but in the second and all subsequent rounds I get error code 10: "invalid checkpoint data found in checkpoint file."

It works fine in all rounds when I run the bash script manually from the command line, so there isn't any kind of syntax error and I'm not using the parameters incorrectly. Because the first round runs without error, I know there isn't a permissions issue with running ausearch. It also works fine in all rounds when I run the script as a cron job via crontab, so the fact that scripted inputs run like a scheduled service isn't the root of the problem either.

I've confirmed that the misbehavior occurs in the interpretation of the checkpoint (rather than the generation of the checkpoint) with the following trials:

Trial 1: first round, script executed manually to generate the first checkpoint; second round, script executed manually, interpreting the old checkpoint and generating a new one. Result: code 0, no errors.
Trial 2: first round, script executed manually to generate the first checkpoint; second round, script executed by the Splunk forwarder as a scripted input. Result: code 10, "invalid checkpoint data found in checkpoint file".
Trial 3: first round, script executed by the Splunk forwarder as a scripted input to generate the first checkpoint; second round, script executed manually. Result: code 0, no errors.
Trial 4: first round, script executed by the Splunk forwarder as a scripted input to generate the first checkpoint; second round, script executed by the Splunk forwarder as a scripted input. Result: code 10, "invalid checkpoint data found in checkpoint file".

Inference: the error only occurs when the Splunk forwarder scripted input is interpreting the checkpoint, regardless of how the checkpoint was generated, so the interpretation is where the misbehavior takes place.

I'm aware that I can include the "--start checkpoint" parameter to avoid this error by causing ausearch to start from the timestamp in the checkpoint file rather than look for a specific record to start from. I'd like to avoid that option, though, because it causes the script to send duplicate records: any records that occurred at the timestamp recorded in the checkpoint are reported both when that checkpoint was generated and in the following execution of ausearch. If no events are logged by auditd between executions, the same events may be reported several times until a new event does get logged.

I tried adding the "-i" parameter to the command, hoping it would help interpret the checkpoint file, but it didn't make any difference.

For reference, here's the format of the checkpoint file that is generated:

dev=0xFD00 inode=1533366 output=<hostname> 1754410692.161:65665 0x46B

I'm starting to wonder if it might be a line-termination issue, as if the Splunk Universal Forwarder were expecting each line to end with CRLF the way it would on Windows but instead sees lines ending in LF because it's Linux. I can't imagine why that would be the case, since the version of the Universal Forwarder I have installed is meant for Linux, but that's the only thing that comes to mind.

I'm using version 9.4.1 of the Splunk Universal Forwarder. The forwarder is acting as a deployment client that installs and runs apps issued to it by a separate deployment server running Splunk Enterprise version 9.1.8.

Any thoughts on what it is about Splunk Universal Forwarder scripted inputs that might be preventing ausearch from interpreting its own checkpoint files?
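For what it's worth, here is a small debugging addition I'm considering bolting onto the script (it reuses the same path variables as above; nothing else is assumed), to compare forwarder-launched runs against manual runs, since a CRLF or locale difference should show up here:

od -c "$path_to_checkpoint" >> "$path_to_temp_file"   # raw bytes of the checkpoint, so a stray \r would be visible
env | sort >> "$path_to_temp_file"                    # LANG/LC_* and PATH differences between launch contexts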
I will start by saying that I am very new to Splunk, so I could be missing an obvious step. Please forgive me while I learn. My Dashboard Studio date picker is not working. Here are the steps I have taken:

1. Created a search.
2. Saved the search as a new Dashboard Studio dashboard.
3. Dashboard Studio automatically added the date picker to the dashboard.
4. Linked the date picker to the dashboard by selecting "Sharing date range".

But it's still not working. I must be missing something. Thank you in advance.
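From what I've read, the data sources in the dashboard's source definition also have to reference the time picker's tokens; a sketch of what I think that should look like (the data source name, the query, and the "global_time" token name are assumptions based on the defaults Dashboard Studio creates):

"dataSources": {
    "ds_my_search": {
        "type": "ds.search",
        "options": {
            "query": "index=main | timechart count",
            "queryParameters": {
                "earliest": "$global_time.earliest$",
                "latest": "$global_time.latest$"
            }
        }
    }
}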
Getting the following error after upgrading the Splunk Add-on for ServiceNow to 9.0.0: "Error Failed to create 1 tickets out of 1 events for account". The ticket was created but the ticket number is not returned. We get return code 201 with a curl command.

Versions: Splunk Add-on for ServiceNow 9.0.0; Splunk Cloud 9.3.2411.112
Hi guys, I'm searching for a way to disable the outline of links in a Splunk classic dashboard. There was a similar question on the community, but I don't understand the answers. In my CSS I'm trying: a:focus { outline: none !important } but it doesn't work. Thank you!
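For reference, this is the kind of rule I've been experimenting with (the "#my_dashboard" container id is a placeholder for however the stylesheet scopes to the dashboard); Splunk's theme may draw the focus ring with box-shadow as well as outline, which could be why outline alone has no visible effect:

#my_dashboard a:focus,
#my_dashboard a:focus-visible {
    outline: none !important;
    box-shadow: none !important;
}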
Trying to extract some data from a hybrid log where the log format is <Syslog header> <JSON Data>. I've had success extracting via spath and regex at search time, but I want to do this before ingestion, so I'm trying to complete the field extractions on a heavy forwarder using props.conf and transforms.conf. I got this working to a degree, but it only partly functions: for some logs the nested values inside msg are not fully extracted, and some logs don't extract anything from the JSON at all.

An example of one of many log types, all in the format <Syslog header> <JSON Data>:

Aug 3 04:45:01 server.name.local program {"_program":{"uid":"0","type":"newData","subj":"unconfined","pid":"4864","msg":"ab=new:session_create creator=sam,sam,echo,ba_permit,ba_umask,ba_limits acct=\"su\" exe=\"/usr/sbin/vi\" hostname=? addr=? terminal=vi res=success","auid":"0","UID":"user1","AUID":"user1"}}

Here, creator=sam stops at the first comma, and acct=\ and exe=\ don't collect the data after the backslash.

The following two logs had no field extractions from the JSON at all:

Aug 3 04:31:01 server.name.local program {"_program":{"uid":"0","type":"SYSCALL","tty":"pts1","syscall":"725","su":"0","passedsuccess":"yes","pass":"unconfined","id":"0","sess":"3417","pid":"4568732","msg":"utime(1754195461.112:457):","items":"2","gid":"0","fsuid":"0","fsgid":"0","exit":"3","exe":"/usr/bin/vi","euid":"0","egid":"0","comm":"vi","auid":"345742342","arch":"c000003e","a3":"1b6","a2":"241","a1":"615295291b60","a0":"ffffff9c","UID":"user1","SYSCALL":"openmat","SUID":"user1","SGID":"user1","GID":"user1","FSUID":"user1","FSGID":"user1","EUID":"user1","EGID":"user1","AUID":"user1","ARCH":"x86_64"}}

Aug 3 04:10:01 server.name.local program {"_program":{"type":"data","data":"/usr/bin/vi","msg":"utime(1754194201.112:457):"}}

Thanks in advance for any help.
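A sketch of the direction I'm leaning (the sourcetype name is made up, and it assumes losing the syslog header text from _raw is acceptable): strip the header at index time on the heavy forwarder so the remaining event is pure JSON, then let search time parse it.

# props.conf on the heavy forwarder
[hybrid:syslog_json]
# Index-time: remove everything before the first "{" so _raw becomes valid JSON
SEDCMD-strip_syslog_header = s/^[^{]+//

# props.conf on the search head (search-time JSON parsing)
[hybrid:syslog_json]
KV_MODE = json

The nested msg value is key=value text rather than JSON, so it would still need its own search-time extraction (for example a rex over msg), which may be why creator= and acct= stop early today.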
Route logs from combined_large.log to webapp1_index or webapp2_index based on log content ([webapp1] or [webapp2]).

Setup:
- Universal Forwarder: Windows (sending logs)
- Indexer: Windows (receiving & parsing)
- Logs contain [webapp1] or [webapp2]
- Expect routing to happen on the Indexer

Sample log:
2025-05-03 16:41:36 [webapp1] Session timeout for user
2025-04-13 20:25:59 [webapp2] User registered successfully

inputs.conf (on UF):
[monitor://C:\logs\combined_large.log]
disabled = false
sourcetype = custom_combined_log
index = default

props.conf (on Indexer):
[custom_combined_log]
TRANSFORMS-route_app_logs = route-webapp1_index, route-webapp2_index

transforms.conf (on Indexer):
[route-webapp1_index]
REGEX = \[webapp1\]
DEST_KEY = _MetaData:Index
FORMAT = webapp1_index

[route-webapp2_index]
REGEX = \[webapp2\]
DEST_KEY = _MetaData:Index
FORMAT = webapp2_index

Tried:
- Verified the file is being read
- Confirmed btool loads the configs
- Restarted services
- Re-indexed by duplicating the file

Issue: logs are not appearing in either webapp1_index or webapp2_index.

Questions:
- Is this config correct?
- Am I missing a key step or using the wrong config location?
- Any way to debug routing issues?

Any help or insight would be greatly appreciated. Thanks in advance.
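As a debugging sketch (assuming search access across indexes), a search like this shows where the events are actually landing and whether the sourcetype survived the trip:

index=* sourcetype=custom_combined_log
| stats count by index, sourcetype, source

If the events show up in the default index with the expected sourcetype, the transforms aren't being applied at parse time; if they don't show up anywhere, the destination indexes may simply not exist yet on the indexer.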
Hello everyone! I'm trying to create a table view of IIS logs. The main issue I've encountered is some very long URL fields. In similar situations elsewhere, I've seen interactive "URL wrapping" — like clicking or hovering to reveal the full link. But Splunk's table view doesn't seem to offer anything like that. How can I handle this?
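If it helps frame the question, here's a sketch of a classic (Simple XML) table I've been experimenting with (the search and field names are placeholders); the "wrap" option makes long cell values wrap rather than being cut off, though it doesn't give the click/hover behavior I described:

<table>
  <title>IIS requests</title>
  <search>
    <query>index=iis sourcetype=iis | table _time c_ip cs_uri_stem cs_uri_query sc_status</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <option name="wrap">true</option>
  <option name="drilldown">none</option>
</table>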
I have two deployment servers (DSs) that fail to deploy the TA_nix to themselves. How is this normally done? That is, how does a deployment server deploy apps to itself?
Why do we find postgres in /apps/splunk/splunkforwarder/quarantined_files/bin/postgres even after we have upgraded to 9.4.3? Splunk must have moved it there. If so, why?
I want to show the tab with the watermark on the Splunk configuration page. How can I achieve this?
We are running Splunk Universal Forwarder on a virtual machine and using the Splunk Add-on for Unix and Linux. The VM is configured with 2 vCPUs and 4GB of RAM. During metric collection, it appears that the hardware.sh script executes the lshw command, which causes a temporary CPU spike of around 20–40%. Since these scripts run periodically, this behavior may impact performance, especially on resource-constrained VMs.

I would appreciate any insights or experiences regarding the following:
・Recommended VM specifications for running the Linux Add-On
・Ways to reduce CPU load caused by lshw or other scripts
・Is this kind of CPU spike expected behavior for the Add-On?
・Any operational tips or configuration examples to mitigate the impact

Thanks in advance for your help!
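For reference, the knob I'm currently looking at is the add-on's own input stanza; a sketch of a local/inputs.conf override inside Splunk_TA_nix (the interval value is just an example) that reduces how often hardware.sh, and therefore lshw, runs:

# Splunk_TA_nix/local/inputs.conf -- run hardware.sh once a day,
# or set disabled = 1 to stop running lshw entirely
[script://./bin/hardware.sh]
interval = 86400
disabled = 0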
Hello all, I'm new to Splunk. I'm currently integrating Splunk with BMC Remedy using a self-signed certificate, but after configuring the cert there is an error, and when I check the log:

tail -n 100 splunk_ta_remedy_rest_account_validation.log | grep "ERROR"
2025-07-31 12:04:45,394 ERROR pid=3684890 tid=MainThread file=splunk_ta_remedy_rest_account_validation.py:validate:114 | SSLError occurred. If you are using self signed certificate and your certificate is at /etc/ssl/ca-bundle.crt, please refer the troubleshooting section in add-on documentation. Traceback = Traceback (most recent call last):
2025-08-01 13:52:51,828 ERROR pid=758565 tid=MainThread file=splunk_ta_remedy_rest_account_validation.py:validate:114 | SSLError occurred. If you are using self signed certificate and your certificate is at /etc/ssl/ca-bundle.crt, please refer the troubleshooting section in add-on documentation. Traceback = Traceback (most recent call last):
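The troubleshooting hint in that error message is what I'm trying to follow; a sketch of what I think it's asking for (remedy_ca.pem is my assumed export of the Remedy server's self-signed certificate):

# back up the bundle the add-on's error message points at, then append the self-signed CA
cp /etc/ssl/ca-bundle.crt /etc/ssl/ca-bundle.crt.bak
cat remedy_ca.pem >> /etc/ssl/ca-bundle.crt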
I have trouble getting public and private IP address fields separately. How can I extract private and public IP address fields separately using regex? When I extract an IP field from the failed SSH login log, I get both public and private addresses in the same field, so I want to extract them into different fields.
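Something like the following is the shape I'm aiming for (the index name, the "Failed password" filter, and the RFC 1918 prefixes are my assumptions; the rex just grabs every dotted-quad in the event):

index=linux_secure "Failed password"
| rex max_match=0 "(?<ip>\d{1,3}(?:\.\d{1,3}){3})"
| eval private_ip=mvfilter(match(ip, "^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)"))
| eval public_ip=mvfilter(NOT match(ip, "^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.|127\.)"))
| table _raw private_ip public_ip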
We're trying to customize the Mean Time to Triage and Mean Time to Resolution queries in the ES Executive Summary dashboard to filter for specific urgency or rule names. We previously used the standalone Mission Control app before it was integrated into Enterprise Security. Reviewing the incident_updates_lookup table, it seems to have stopped updating both "urgency" and "rule_name" around the time we migrated into Enterprise Security's Mission Control. We can see old entries prior to that, but more recent ones are very infrequent. Does anyone know how to resolve this, or know of another way to filter?
Hi, I need to create an investigation with SOAR. When I create the investigation, it doesn't link the Finding to the Investigation. Do you have a playbook that can help me with this feature?        
Can you please share download links for the heavy forwarder (HF) and Splunk Enterprise prior to 9.1, i.e. 9.0.x, for both Linux and Windows? Thanks.
Hi everyone, I am in the process of installing Splunk UBA and have a question regarding the storage partitioning requirements. The official documentation (link below) states that separate physical disks, /dev/sdb and /dev/sdc, are required for specific mount points to ensure performance.

Documentation link: https://docs.splunk.com/Documentation/UBA/5.3.0/Install/InstallSingleServer#Prepare_the_disks

However, my current server is configured with a single physical disk (/dev/sda) that uses LVM to create multiple logical volumes. Here is my current lsblk output:

[zake@ubaserver]# lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0   2.7T  0 disk
├─sda1                 8:1    0     1G  0 part /boot/efi
├─sda2                 8:2    0     1G  0 part /boot
└─sda3                 8:3    0   2.7T  0 part
  ├─rhel-root        253:0    0    80G  0 lvm  /
  ├─rhel-swap        253:1    0    16G  0 lvm  [SWAP]
  ├─rhel-var_vcap2   253:2    0     1T  0 lvm  /var/vcap2
  ├─rhel-var_vcap1   253:3    0     1T  0 lvm  /var/vcap1
  ├─rhel-home        253:4    0 118.8G  0 lvm  /home
  └─rhel-backup      253:5    0   500G  0 lvm  /backup
sr0                   11:0    1  1024M  0 rom

My question is: can my existing logical volumes, /dev/mapper/rhel-var_vcap1 and /dev/mapper/rhel-var_vcap2, be used as a substitute for the required /dev/sdb and /dev/sdc disks? I understand the requirement for separate physical disks is likely due to I/O performance. Would using this LVM setup on a single disk be a supported configuration, or is adding two new physical/virtual disks a mandatory step? Thank you for your guidance.
Hi everyone! I am new to Splunk, and this is probably really easy for many of you. I am trying to left join a lookup with a source table.

I tried this initially and it looks great, but it's not displaying the total number of records contained in the lookup table. I need to display all records in the lookup, showing the matching ORIGINDATE where found in table1 and a blank where not found. The TempTableLookup.csv lookup just has one column called "NUMBER" with 7,500 records. table1 has NUMBER, ORIGINDATE and other columns which are not needed; it has 360,000 records. When I run this query I get 7,479 records instead of the total 7,500; there are around 20+ records that either do not have an ORIGINDATE or whose lookup NUMBER does not exist in table1.

index=test sourcetype="table1"
| lookup TempTableLookup.csv NUMBER output NUMBER as matched_number
| where isnotnull(matched_number)
| table NUMBER ORIGINDATE

So I read that I need to do a left join, and I tried this; it brings back all 7,500 records I want, but it is not bringing back the ORIGINDATE. Could someone please let me know what I am doing wrong in the second query? I know that left joins are not recommended, but I cannot think of any other way to get what I need.

| inputlookup TempTableLookup.csv
| join type=left NUMBER
    [ search index=test sourcetype="table1"
      | dedup NUMBER
      | fields NUMBER, ORIGINDATE ]
| table NUMBER ORIGINDATE

The output should look like:

NUMBER     ORIGINDATE
123456     01/10/2025
128544     05/05/2029

and so forth. I'd greatly appreciate any ideas on how to do this. Thank you in advance and have a great day, Diana
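A join-free sketch of the same idea (assuming latest() is an acceptable way to pick one ORIGINDATE when a NUMBER appears many times in table1): filter table1 by the lookup, aggregate per NUMBER, then append the full lookup so unmatched NUMBERs come back with a blank date.

index=test sourcetype="table1"
    [ | inputlookup TempTableLookup.csv | fields NUMBER ]
| stats latest(ORIGINDATE) as ORIGINDATE by NUMBER
| append
    [ | inputlookup TempTableLookup.csv | fields NUMBER ]
| stats first(ORIGINDATE) as ORIGINDATE by NUMBER
| table NUMBER ORIGINDATE

With 7,500 values the filtering subsearch should stay under the default 10,000-result subsearch limit, so it should expand fully.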
I need to configure a certain customer app to ingest files. Those files need an add-on that converts them so Splunk can read them; they are in ckls format. I already have the add-on and I have configured it in the deployment apps. How do I connect it with the customer app so the data shows on its dashboard?
I have a KPI for instance status:

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host Status

Now val is 0 or 1 and the Status field is Up or Down. The split-by field is Instance host and the threshold is based on val. The alert triggers fine, but I want to put $result.Status$ in the email alert instead of $result.val$, and I don't see the Status field in the tracked alerts. How can I make the Status field show up in the tracked alerts index or the generated events so that I can use it in my email? (This is to avoid confusion: instead of saying 0 or 1, it will say up or down.)
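One sketch I'm considering is to re-derive a readable status from val inside the search itself, so it lands in the results the alert tokens can see (the only changes from my search above are dropping Status from the by clause and adding the final eval):

index=xxxxx source="yyyyy"
| eval UpStatus=if(Status=="up",1,0)
| stats last(UpStatus) as val by Instance host
| eval Status=if(val==1,"up","down")

The email could then reference $result.Status$.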