All Posts

Hello Splunkers, I have logs that are generated only when there is a change in the system:

6:01:01 - System Stop
10:54:01 - System Start
13:09:04 - System Stop
16:01:01 - System Start
17:01:01 - System Stop

Let's say I chart these for the time range 7 AM - 4 PM. The chart from 8 AM until 10:54:01 AM is empty, since the previous event was generated at 6:01:01, so there is a gap. I would like to fix this. In some cases only two values alternate, so we can take the present one and infer that the past one was its opposite. E.g. at 10:54:01 we received "System Start", so the state before it must have been "Stop". That covers some cases, but I need two solutions: one for this two-value scenario, and another for multiple values, like these:

14:01:01 - System Started
17:54:01 - System reset
22:09:04 - System Stop
23:01:01 - System Started
01:01:01 - System Stop

Here I'm getting three values: Started, Stop and reset. Thanks in advance!
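One common way to handle this in SPL is to bucket the events, keep the latest known state per bucket, and carry it forward with filldown. A rough sketch; the index, sourcetype, and status extraction are placeholders for your own:

index=main sourcetype=system_state
| rex field=_raw "System (?<status>\w+)"
| timechart span=1h latest(status) as status
| filldown status

filldown repeats the last non-null value into the empty buckets, which removes the gap between 8 AM and 10:54. The same pipeline also works unchanged for the three-value (Started/Stop/reset) case, because it carries forward whatever the last observed state was instead of assuming a binary opposite.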
Hello, I am trying to collect bash_history logs in real time from multiple Linux hosts using Splunk. I have deployed the following script to append executed commands to /var/log/bash_history.log:

#!/bin/bash
LOG_FILE="/var/log/bash_history.log"
PROMPT_COMMAND_STR='export PROMPT_COMMAND='\''RECORD_CMD=$(history 1 | sed "s/^[ ]*[0-9]*[ ]*//"); echo "$(date "+%Y-%m-%d %H:%M:%S") $(whoami) $RECORD_CMD" >> /var/log/bash_history.log'\'''

# 1. Create log file if it doesn't exist and set permissions
if [ ! -f "$LOG_FILE" ]; then
    touch "$LOG_FILE"
    echo "[INFO] Log file created: $LOG_FILE"
fi
chmod 666 "$LOG_FILE"
chown root:users "$LOG_FILE"
echo "[INFO] Log file permissions set"

# 2. Add PROMPT_COMMAND to /etc/bash.bashrc
if ! grep -q "PROMPT_COMMAND" /etc/bash.bashrc; then
    echo "$PROMPT_COMMAND_STR" >> /etc/bash.bashrc
    echo "[INFO] PROMPT_COMMAND added to /etc/bash.bashrc"
fi

# 3. Force loading of ~/.bashrc through /etc/profile
if ! grep -q "source ~/.bashrc" /etc/profile; then
    echo 'if [ -f ~/.bashrc ]; then source ~/.bashrc; fi' >> /etc/profile
    echo "[INFO] ~/.bashrc now loads via /etc/profile"
fi

# 4. Add PROMPT_COMMAND to all users' ~/.bashrc and ~/.profile
for user in $(ls /home); do
    for FILE in "/home/$user/.bashrc" "/home/$user/.profile"; do
        if [ -f "$FILE" ] && ! grep -q "PROMPT_COMMAND" "$FILE"; then
            echo "$PROMPT_COMMAND_STR" >> "$FILE"
            echo "[INFO] PROMPT_COMMAND added to $FILE (user: $user)"
        fi
    done
done

# 5. Add PROMPT_COMMAND for root user
for FILE in "/root/.bashrc" "/root/.profile"; do
    if [ -f "$FILE" ] && ! grep -q "PROMPT_COMMAND" "$FILE"; then
        echo "$PROMPT_COMMAND_STR" >> "$FILE"
        echo "[INFO] PROMPT_COMMAND added to $FILE (root)"
    fi
done

# 6. Ensure ~/.bashrc is sourced in ~/.profile for all users
for user in $(ls /home); do
    PROFILE_FILE="/home/$user/.profile"
    if [ -f "$PROFILE_FILE" ] && ! grep -q ". ~/.bashrc" "$PROFILE_FILE"; then
        echo ". ~/.bashrc" >> "$PROFILE_FILE"
        echo "[INFO] ~/.bashrc now sources from ~/.profile (user: $user)"
    fi
done

# 7. Ensure all users use Bash shell
while IFS=: read -r username _ _ _ _ home shell; do
    if [[ "$home" == /home/* || "$home" == "/root" ]]; then
        if [[ "$shell" != "/bin/bash" ]]; then
            echo "[WARNING] User $username has shell $shell, changing to Bash..."
            usermod --shell /bin/bash "$username"
        fi
    fi
done < /etc/passwd

# 8. Apply changes (exec replaces this shell, so log the status first)
echo "[INFO] Configuration applied"
exec bash

The script runs correctly, and /var/log/bash_history.log is created on all hosts. However, Splunk is not collecting logs from all hosts: some hosts send data properly, while others do not.

What I have checked:

Permissions on /var/log/bash_history.log: the file is writable by all users (chmod 666 and chown root:users).
Presence of PROMPT_COMMAND in user sessions: when running echo $PROMPT_COMMAND, it appears correctly for most users.
su behavior: if users switch with su - username, it works. However, if they switch with su username, sometimes the logs are missing.
Splunk inputs configuration, properly deployed to all hosts:

[monitor:///var/log/bash_history.log]
disabled = false
index = os
sourcetype = bash_history

Questions:

Could there be permission issues with writing to /var/log/bash_history.log under certain circumstances? Would another directory (e.g., /tmp/) be better?
How can I ensure that all user sessions (including su username) log commands consistently?
Could there be an issue with the Splunk Universal Forwarder not properly monitoring /var/log/bash_history.log on some hosts? Any insights or best practices would be greatly appreciated! Thanks.
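One thing worth trying on a silent host is checking what the UF itself reports. The install path /opt/splunkforwarder below is an assumption; substitute your own $SPLUNK_HOME:

# Confirm the monitor stanza the UF actually loaded
/opt/splunkforwarder/bin/splunk btool inputs list monitor:///var/log/bash_history.log --debug

# Ask the tailing processor about the file's position and any errors
/opt/splunkforwarder/bin/splunk list inputstatus

# Look for tailing or permission errors in the UF's own log
grep -i bash_history /opt/splunkforwarder/var/log/splunk/splunkd.log

If the file appears in inputstatus but its read position never advances, the problem is likely on the write side (PROMPT_COMMAND not firing in some session types) rather than on the forwarder.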
@SN1 To resolve the missing indexer in your License Master:

1. Check whether the indexer is running.

2. On the indexer, open $SPLUNK_HOME/etc/system/local/server.conf and look for:

[license]
master_uri = https://<license-master-host>:8089

Replace <license-master-host> with the License Master's IP or FQDN. If this setting is missing or incorrect, update it.

3. From the indexer's host, test network connectivity: ping <license-master-host>. Then test the management port: telnet <license-master-host> 8089. If it doesn't connect, troubleshoot firewalls and network routes, or confirm the License Master is listening (netstat -tuln | grep 8089 on the License Master).

4. Restart the indexer (and the License Master if needed) after fixing config or network issues.
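You can also confirm the connection from the indexer over REST. A quick sketch, with the credentials and host as placeholders (the endpoint name may vary across Splunk versions):

# Shows the indexer's licenser peer state, including the manager it last contacted
curl -k -u admin:changeme https://localhost:8089/services/licenser/localslave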
Hi @SN1  Are you able to share the output of this page from the server with the issue, please? Go to https://yourSplunkInstance/en-US/manager/system/licensing  Is the license showing as valid? Does it show a connection to the license server? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@SN1  The indexer likely can't reach the License Master; common causes are firewall rules, network outages, or DNS resolution problems. Please check with your network team.
Hi @nnkreddy  Each of the objects you want to sit next to each other horizontally needs to be in its own <panel> within the same <row>. This will produce something like below; is this what you are looking to achieve?

<form version="1.1" theme="light">
  <label>xmltest</label>
  <fieldset submitButton="false"></fieldset>
  <row>
    <panel id="input1">
      <input type="text" token="tk1" id="tk1id" searchWhenChanged="true">
        <label>Refine further?</label>
        <prefix> | where </prefix>
      </input>
    </panel>
    <panel id="html1">
      <html id="htmlid">
        <p>count: $job.resultCount$</p>
      </html>
    </panel>
  </row>
  <row>
    <panel id="panel1">
      <table>
        <search>
          <query>|makeresults | eval msg="test" | stats count by msg</query>
        </search>
        <option name="drilldown">cell</option>
      </table>
    </panel>
  </row>
</form>

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
We don't have any UF on the Windows machines; we are receiving the logs through UDP.
The server count is huge, and the logs reach our syslog server through the LB.
After running this query, I am not able to see one indexer. How can I resolve it?
Thanks @syedabuthahira  Please could you share the props/transforms you are referring to here, so we can understand why they might not be applying the filtering you are expecting? Also, if you can share raw samples of the junk data, that would be great. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@syedabuthahira

Universal Forwarder (UF): Designed to collect logs (e.g., Windows Event Logs) and send them directly to Splunk indexers or intermediate Heavy Forwarders.

Heavy Forwarder (HF): A full Splunk instance that can parse, filter, and route data before sending it to indexers. Useful if data needs transformation or routing logic.

Syslog Forwarder: Typically used for network devices or non-Splunk agents sending logs in syslog format (e.g., UDP/TCP). Windows logs aren't natively syslog-formatted (they're in Event Log format, EVTX), so converting them to syslog adds complexity.

If a UF is installed on a Windows machine, it's generally unnecessary and inefficient to forward Windows logs to a syslog server first. The UF can send logs directly to indexers or an HF, avoiding extra hops, format conversion (e.g., to syslog), and potential data loss or latency; a sketch of the direct route is below.
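A minimal sketch of that direct route, assuming the default Security event channel and a placeholder index and indexer address:

# inputs.conf on the Windows UF
[WinEventLog://Security]
disabled = 0
index = win

# outputs.conf on the Windows UF
[tcpout:primary_indexers]
server = indexer1.example.com:9997

With this in place the UF reads events in native Event Log format and sends them straight to the indexer, with no syslog conversion anywhere in the path.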
So @Poojitha  - If you ran the dashboard search now for the last 60 minutes, it would search from the start of the minute 60 minutes ago until now, for example 08:40:00.000 to 09:40:12.000 (note that "now" in this case is 09:40:12, i.e. 12 seconds after the start of 09:40). If you then ran the same search in the Splunk search bar 10 seconds later, you would be searching 08:40:00.000 to 09:40:22.000. This matters because you may have more counts and more errors in those extra 10 seconds. To verify the counts, you will need to run the search over a specific time window in both the dashboard and the Splunk search bar. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
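For example, you can pin both searches to an identical fixed window with explicit time modifiers (the index name and dates here are placeholders):

index=your_app earliest="02/27/2025:08:40:00" latest="02/27/2025:09:40:00"
| stats count

Running exactly this in both the dashboard panel and the search bar should return identical counts; if it doesn't, the difference isn't the time range.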
@syedabuthahira  First, I recommend sending the Windows logs directly to the indexers or via a heavy forwarder. I'm not sure why you're routing them through a syslog forwarder when the Universal Forwarder is already installed.
I'm trying to display a simple input type and HTML text side by side, but they are appearing vertically. I have tried a few CSS options but none of them worked. Sample panel code:

<panel id="panel1">
  <input type="text" token="tk1" id="tk1id" searchWhenChanged="true">
    <label>Refine further?</label>
    <prefix> | where </prefix>
  </input>
  <html id="htmlid">
    <p>count: $job.resultCount$</p>
  </html>
  <event>
    <search> . . . . </search>
  </event>
</panel>

How can I make the input and HTML text appear side by side, with the events at the bottom? My requirement is to achieve this in a single panel in an XML dashboard. Thanks for any help!
Thank you @ITWhisperer @livehybrid  for looking into it. I am receiving the logs from multiple Windows hosts through the syslog server, and we have the UF on the syslog server; through the UF we forward the data to the indexer. The actual event-breaking issue is this: when I search the logs for a particular source using the SPL

index="win" sourcetype="*Snare*" | table host source

I should see only the hostname (that is what we have configured in our props and transforms), but I am seeing junk values in the host field; the event is actually breaking somewhere in the middle.
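For comparison, a typical host-override configuration for syslog-received Windows events looks something like this; the stanza names and regex are illustrative placeholders, not the poster's actual config:

# props.conf
[Snare]
TRANSFORMS-sethost = set_host_from_snare

# transforms.conf
[set_host_from_snare]
REGEX = ^\S+\s+(\S+)
DEST_KEY = MetaData:Host
FORMAT = host::$1

If the event is breaking mid-line before this runs, the regex will match whatever fragment starts the broken event, which would explain junk values in host; checking LINE_BREAKER and SHOULD_LINEMERGE in props.conf for this sourcetype is a reasonable next step.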
@kfchen  Unfortunately, Splunk does not provide a built-in way to disable replication specifically for cold storage. Your idea of using a cron job to delete replicated buckets in cold storage is creative, but it comes with risks: deleting replicated buckets manually might lead to data inconsistency and potential search issues.

If Indexer A is under maintenance and the primary bucket is on Indexer A, Indexer B should still be able to search the replicated bucket. However, this depends on the search factor (SF) being met; if the SF is not met, searches might not return complete results.

If Indexer A is down and Indexer B detects that it needs to meet the replication factor (RF), it will attempt to replicate the bucket to another indexer. This process is part of Splunk's mechanism to ensure data availability and redundancy.

Configuring all indexers to refer to a single shared file path for cold storage is possible: you would need to modify indexes.conf to set the coldPath to a shared directory (see the sketch below). However, ensure that the shared storage is reliable and has sufficient performance to handle the load.

Before proceeding with any changes, it's crucial to test your setup in a staging environment to avoid any disruptions in your production environment. Please contact Splunk support or PS.

NOTE: The official answer from support is to NOT remove any replicated buckets, even with clustering disabled, as they may be marked as the primary bucket. It is best to let them age out.
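A minimal indexes.conf sketch of that shared coldPath idea, assuming an NFS-style mount at /mnt/shared_cold (the index name and paths are placeholders):

# indexes.conf, deployed to all peers via the cluster manager
[main]
homePath   = $SPLUNK_DB/main/db
coldPath   = /mnt/shared_cold/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb

Note that pointing multiple peers at the same physical colddb directory is exactly the kind of setup to validate with Splunk support first, since each peer normally expects exclusive ownership of its bucket directories.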
You are probably going to have to be a bit more precise. What does "junk values" mean? What does "breaking" mean? What configuration values do you have on the UF and SH? Is this issue isolated to specific hosts, data sources, times of day, sourcetypes, etc?
What does "When I run the query" mean? Are you copying/rewriting the search in a search window, or are you using the "Open in Search" button on the pie-chart? Does the timeframe for the results of t... See more...
What does "When I run the query" mean? Are you copying/rewriting the search in a search window, or are you using the "Open in Search" button on the pie-chart? Does the timeframe for the results of the search match the time frame you think you are using? For example:  
So I currently have an indexer cluster; the RF and SF are 2. My hot/warm DB and my cold buckets will be on different storage disks in a cluster that has its own replication features, and I happen to have EC+2:1, meaning the data on my cold storage will be replicated twice.

As a result, I would like to disable replication on my cold storage, but there is currently no way to do that in Splunk (or none that I know of). I am thinking of writing a cron job that deletes all replicated buckets on the cold storage disk. For this to happen, all of the indexers should be referring to a single shared file path on the cold storage.

However, this begs the question: will search still work as normal? Let's say the primary bucket is on Indexer A and the replicated copy is on Indexer B, but Indexer A is currently under maintenance. Would it be possible for Indexer B to query the bucket with Indexer A's bid? Additionally, will Indexer B sense that something is wrong and try to replicate the bucket in the warm tier again?
Hi @syedabuthahira  Are you able to give us an example of the junk you are seeing in the logs? Redact anything sensitive if needed. Also - what are the source(s) file paths for these events? This information will help us to answer your question more accurately. Please consider adding karma to this or any other answer if it has helped. Regards Will