What is the proper way to deploy TA_nix from the deployment server? Is the reload option available, or do I need to bounce the server?
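For context, the reload I have in mind is along these lines (paths assume a default $SPLUNK_HOME on the deployment server):

# Stage the add-on in the deployment server's deployment-apps directory
cp -r Splunk_TA_nix $SPLUNK_HOME/etc/deployment-apps/

# Ask the deployment server to re-read serverclass.conf and app contents
# without restarting splunkd
$SPLUNK_HOME/bin/splunk reload deploy-server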
I'm running Splunk in a Red Hat Linux environment and trying to collect logs generated by the auditd service. I could simply put a monitor on "/var/log/audit/audit.log", but the lines of that file aren't organized such that the records from a specific event are together, so I would end up having to correlate them on the back end. I'd rather use ausearch in a scripted input to correlate the records on the front end and provide clear delimiters (----) separating events before reporting them to the central server.

Obviously, I don't want the entirety of all logged events being reported each time the script is run; I just want the script to report any new events since the last run. I found that the "--checkpoint" parameter ought to be useful for that purpose. Here's the script that I'm using:

path_to_checkpoint=$( realpath "$( dirname "$0" )/../metadata/checkpoint" )
path_to_temp_file=$( realpath "$( dirname "$0" )/../metadata/temp" )

# Report only events newer than the checkpoint; capture output and exit code
/usr/sbin/ausearch --input-logs --checkpoint "$path_to_checkpoint" > "$path_to_temp_file" 2>&1
output_code="$?"
chmod 777 "$path_to_checkpoint"

if [ "$output_code" -eq "0" ]; then
    cat "$path_to_temp_file"
fi

# Append a run marker and the exit code for troubleshooting
echo "" >> "$path_to_temp_file"
date >> "$path_to_temp_file"
echo "" >> "$path_to_temp_file"
echo "$output_code" >> "$path_to_temp_file"

It works just fine in the first round, when the checkpoint doesn't exist yet and is generated for the first time, but in the second and all subsequent rounds I get error code 10: "invalid checkpoint data found in checkpoint file".

It works fine in all rounds when I run the bash script manually from the command line, so there isn't any kind of syntax error, and I'm not using the parameters incorrectly. Based on the fact that the first round runs without error, I know that there isn't any kind of permissions issue with running ausearch. It also works fine in all rounds when I run the bash script as a cron job via crontab, so the fact that scripted inputs run like a scheduled service isn't the root of the problem either.

I've confirmed that the misbehavior occurs in the interpretation of the checkpoint (rather than in its generation) with the following trials:

Trial 1:
First round: script executed manually from the command line to generate the first checkpoint
Second round: script executed manually from the command line, interpreting the old checkpoint and generating a new one
Result: code 0, no errors

Trial 2:
First round: script executed manually from the command line to generate the first checkpoint
Second round: script executed by the Splunk forwarder as a scripted input, interpreting the old checkpoint and generating a new one
Result: code 10, "invalid checkpoint data found in checkpoint file"

Trial 3:
First round: script executed by the Splunk forwarder as a scripted input to generate the first checkpoint
Second round: script executed manually from the command line, interpreting the old checkpoint and generating a new one
Result: code 0, no errors

Trial 4:
First round: script executed by the Splunk forwarder as a scripted input to generate the first checkpoint
Second round: script executed by the Splunk forwarder as a scripted input, interpreting the old checkpoint and generating a new one
Result: code 10, "invalid checkpoint data found in checkpoint file"

Inference: the error occurs only when the Splunk forwarder scripted input is interpreting the checkpoint, regardless of how the checkpoint was generated, so the interpretation is where the misbehavior takes place.
I'm aware that I can include the "--start checkpoint" parameter to avoid this error by causing ausearch to start from the timestamp in the checkpoint file rather than look for a specific record to start from. I'd like to avoid that option, though, because it causes the script to send duplicate records: any records that occurred at the timestamp recorded in the checkpoint are reported both when that checkpoint was generated and in the following execution of ausearch. If no events are logged by auditd between executions, the same events may be reported several times until a new event does get logged.

I tried adding the "-i" parameter to the command, hoping that it would help interpret the checkpoint file, but it didn't make any difference.

For reference, here's the format of the checkpoint file that is generated:

dev=0xFD00
inode=1533366
output=<hostname> 1754410692.161:65665 0x46B

I'm starting to wonder if it might be a line-termination issue: as if the Splunk Universal Forwarder were expecting each line to terminate with CRLF the way it would on Windows, but instead it sees lines ending in LF because it's Linux. I can't imagine why that would be the case, since the version of the Universal Forwarder I have installed is meant for Linux, but that's the only thing that comes to mind.

I'm using version 9.4.1 of the Splunk Universal Forwarder. The forwarder is acting as a deployment client that installs and runs apps issued to it by a separate deployment server running Splunk Enterprise version 9.1.8.

Any thoughts on what it is about Splunk Universal Forwarder scripted inputs that might be preventing ausearch from interpreting its own checkpoint files?
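In case anyone wants to reproduce the comparison, a debugging variant I can drop into the script records the execution context in both scenarios, since Splunk launches scripted inputs with a different environment, working directory, and umask than an interactive shell (the debug path is just illustrative):

# Hypothetical debugging block: append the execution context on every run
# so the manual run and the Splunk-launched run can be diffed side by side
debug_file="$( dirname "$0" )/../metadata/debug"
{
    echo "=== $( date ) ==="
    pwd          # working directory
    umask        # file-creation mask
    id           # effective user and groups
    locale       # LANG/LC_* settings
    env | sort   # full environment
} >> "$debug_file"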
Sorry for the late reply. What that ends up giving me is:

Unable to parse event_time_field='_time', check whether it is in epoch format.
0 results (7/1/25 12:00:00.000 AM to 8/1/25 12:00:00.000 AM)

I tried manipulating it with generative AI some more and continued to hit roadblocks.
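One thing still on my list is converting the time field to epoch explicitly before it's used, along these lines (the field name and format string are guesses for my data):

| eval _time=strptime(my_time_field, "%m/%d/%Y %H:%M:%S")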
Hi @JH2,

Can you check the link below?

https://answers.splunk.com/answers/627432/jquery-datepicker-in-splunk.html
I will start by saying that I am very new to Splunk, so I could be missing an obvious step. Please forgive me while I learn... lol

My Dashboard Studio date picker is not working. Here are the steps I have taken:

1. Created a search
2. Saved the search as a new Dashboard Studio dashboard
3. Dashboard Studio automatically adds the date picker to the dashboard
4. Linked the date picker to the dashboard by selecting "Sharing date range"

But it's still not working. I must be missing something. Thank you in advance.
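For reference, in Dashboard Studio's Source (JSON) view, each data source has to reference the time picker's tokens for the picker to take effect; a minimal sketch assuming the default global_time input (the data source name and query are illustrative):

"ds_my_search": {
    "type": "ds.search",
    "options": {
        "query": "index=main | timechart count",
        "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
        }
    }
}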
Getting the following error after upgrading the Splunk Add-on for ServiceNow to 9.0.0:

"Error Failed to create 1 tickets out of 1 events for account"

The ticket is created but the ticket number is not returned. A direct curl call returns code 201.

Versions:
Splunk Add-on for ServiceNow: 9.0.0
Splunk Cloud: 9.3.2411.112
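For reference, the direct curl check was along these lines (instance, credentials, and payload are placeholders):

curl -u 'user:password' \
  -H "Content-Type: application/json" \
  -X POST "https://<instance>.service-now.com/api/now/table/incident" \
  -d '{"short_description": "test incident from curl"}'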
Thank you @thahir, it's working. But when I try to navigate the page with TAB, the outlines are hidden... I'm accepting your answer as correct, hoping no developer is going to hate me.

Thank you,
AleCanzo.
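If keyboard accessibility matters, a gentler variant is to suppress the outline only for mouse clicks while keeping it for TAB navigation; a sketch using the standard :focus-visible pseudo-class:

/* Hide the outline on mouse/touch focus, keep it for keyboard (TAB) focus */
a:focus:not(:focus-visible) {
    outline: none !important;
    box-shadow: none !important;
}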
Hi @AleCanzo,

Can you try the CSS below?

a:focus {
    outline: none !important;
    box-shadow: none !important;
}
@livehybrid's idea with an on-top row for the full URL is pretty close to what I wanted to achieve. As for filtering or searching by the full URL, I can still do it using something like:

| search _full_url="*$token_for_search$*"
I just tested this approach and think that, at least for now, it suits my goal.
Hi guys, I'm searching for a way to disable the outline of links in a Splunk classic dashboard. There was a similar question on the community, but I don't understand the answers. In my CSS I'm trying:

a:focus { outline: none !important }

but it doesn't work. Thank you!
Hi @Keigo,

You're using the Splunk Universal Forwarder with the Linux add-on on a 2 vCPU / 4 GB RAM VM, and a script (hardware.sh) that runs lshw causes 20–40% CPU spikes, which may impact performance. lshw is not lightweight and is overkill for most use cases, so this behavior is expected, especially on low-spec machines. Below are my recommendations:

1. Check whether you actually need hardware data at all.
2. If you do, reduce the run frequency to minimize impact (see the inputs.conf sketch below).
3. Alternatively, run the script via cron during off-peak hours and monitor its output file with Splunk.
4. Use lightweight tools like collectd for performance metrics instead of heavy scripts.
5. If you keep such scripts, use 4 vCPUs and 6–8 GB RAM for better performance.
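For point 2, the run frequency is controlled in the add-on's inputs.conf; a minimal sketch, assuming the stock Splunk_TA_nix layout (put the override in the local directory so upgrades don't clobber it):

# $SPLUNK_HOME/etc/apps/Splunk_TA_nix/local/inputs.conf
[script://./bin/hardware.sh]
# hardware rarely changes, so run once a day instead of a short interval
interval = 86400
disabled = 0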
The add-on for *nix contains several inputs; some are more useful than others... The question is why you would run this input in the first place. Is it your only source of hardware inventory? Even then, this is something that doesn't change often, so the interval between subsequent runs can be quite large without any significant impact on the usefulness of the output data.
What access was given to the service account for the connection to happen?
The old Splunk dashboard examples app (https://classic.splunkbase.splunk.com/app/1603/) is no longer supported but can still be downloaded, and it will give you an idea of how to write extensions that would, for example, show a tooltip on hovering over the URL, depending on your level of CSS/JavaScript skills.
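As a minimal, hypothetical example of such an extension: a JavaScript file referenced from the dashboard's script attribute that copies each table cell's full text into its title attribute, so the browser shows it as a native tooltip on hover:

// url_tooltip.js (hypothetical) - load via <dashboard script="url_tooltip.js">
require(['jquery', 'splunkjs/mvc/simplexml/ready!'], function ($) {
    // Splunk classic tables render text cells as <td class="string">;
    // expose each cell's full contents as a native browser tooltip
    $(document).on('mouseenter', 'td.string', function () {
        $(this).attr('title', $(this).text());
    });
});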
Trying to extract some data from a hybrid log where the format is <Syslog header> <JSON data>. I've had success extracting via spath and regex at search time, but I want to do this before ingestion, so I'm trying to complete the field extractions on a heavy forwarder using props.conf and transforms.conf. I got this working to a degree, but it only partly functions: the nested values in msg are not fully extracted, and some logs don't extract anything from the JSON at all.

An example of one of many log types, all in this <Syslog header> <JSON data> format:

Aug 3 04:45:01 server.name.local program {"_program":{"uid":"0","type":"newData","subj":"unconfined","pid":"4864","msg":"ab=new:session_create creator=sam,sam,echo,ba_permit,ba_umask,ba_limits acct=\"su\" exe=\"/usr/sbin/vi\" hostname=? addr=? terminal=vi res=success","auid":"0","UID":"user1","AUID":"user1"}}

Problems with this one: creator=sam stops at the first comma, and acct=\" and exe=\" don't collect the data after the escaped quote.

And the following two logs had no field extractions from the JSON at all:

Aug 3 04:31:01 server.name.local program {"_program":{"uid":"0","type":"SYSCALL","tty":"pts1","syscall":"725","su":"0","passedsuccess":"yes","pass":"unconfined","id":"0","sess":"3417","pid":"4568732","msg":"utime(1754195461.112:457):","items":"2","gid":"0","fsuid":"0","fsgid":"0","exit":"3","exe":"/usr/bin/vi","euid":"0","egid":"0","comm":"vi","auid":"345742342","arch":"c000003e","a3":"1b6","a2":"241","a1":"615295291b60","a0":"ffffff9c","UID":"user1","SYSCALL":"openmat","SUID":"user1","SGID":"user1","GID":"user1","FSUID":"user1","FSGID":"user1","EUID":"user1","EGID":"user1","AUID":"user1","ARCH":"x86_64"}}

Aug 3 04:10:01 server.name.local program {"_program":{"type":"data","data":"/usr/bin/vi","msg":"utime(1754194201.112:457):"}}

Thanks in advance for any help.
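For anyone attempting the same thing: true index-time extraction on a heavy forwarder needs, per field, a transform with WRITE_META plus a matching fields.conf entry; a hedged sketch for the creator field (sourcetype name and regex are illustrative, and a whitespace-excluding character class keeps the full comma-separated value instead of stopping at the first comma):

# props.conf
[my_hybrid_sourcetype]
TRANSFORMS-extract_creator = extract_creator

# transforms.conf
[extract_creator]
# capture the whole comma-separated list, stopping at whitespace
REGEX = creator=(\S+)
FORMAT = creator::$1
WRITE_META = true

# fields.conf
[creator]
INDEXED = true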
I am using the Splunk Add-on for Microsoft Windows. The inputs.conf files on the hosts are located in: C:\SplunkUF\etc\apps\Splunk_TA_windows\local\inputs.conf
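For reference, a typical stanza in that local inputs.conf looks something like this (channel and index are illustrative):

[WinEventLog://Security]
disabled = 0
index = wineventlog
renderXml = true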
I am getting records from 5 or more .log files.
Hi @Shakeer_Spl

Are you able to see the data land in *any* index (e.g. main)? If so, can you confirm the sourcetype matches the one configured in inputs.conf?

I assume (but want to check) that the indexes have been created on the indexers, and that you have appropriate RBAC/access to view their contents?

Are you able to see the UF sending logs to _internal on your indexers? If not, this would indicate that the issue lies with the output (from the UF) or the input (into the indexers).

Are there any other props/transforms that apply to that sourcetype in your props.conf?

Sorry for all the questions (in addition to those already asked re the HF etc.), there is a lot to establish in a situation like this!

Did this answer help you? If so, please consider:

Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
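For the _internal check above, a quick search like this (host value is a placeholder) will show whether the UF is forwarding anything at all:

index=_internal host="your-uf-host" | stats count by source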