All Topics

Please, is it possible to create a tag for a group of IP addresses? I need to search on a group of servers.
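One hedged sketch of a common alternative: Splunk tags apply to individual field/value pairs, so a tag would need one entry per IP. For a group of servers it is often easier to keep the IPs in a lookup and filter with a subsearch. The lookup name server_group_ips.csv, its ip column, and the dest_ip field below are hypothetical; adjust them to your data:

index=your_index
    [| inputlookup server_group_ips.csv
     | rename ip AS dest_ip
     | table dest_ip]
| table _time host dest_ip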
I essentially want to trigger an alarm if a user changes the passwords of multiple distinct user accounts within a given period of time. I was able to start with the search below, which gives me a count of user account changes grouped by the source user. When I try to apply threshold logic to it, it doesn't appear to work.

source="WinEventLog:Security" (EventCode=628 OR EventCode=627 OR EventCode=4723 OR EventCode=4724) | stats count(Target_Account_Name) by Subject_Account_Name
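A minimal sketch of one way to add the threshold, keeping the fields from the search above; the 4-hour window and the threshold of 5 are placeholder values to adjust:

source="WinEventLog:Security" (EventCode=628 OR EventCode=627 OR EventCode=4723 OR EventCode=4724)
| bin _time span=4h
| stats dc(Target_Account_Name) AS distinct_targets BY _time Subject_Account_Name
| where distinct_targets >= 5

Using dc() instead of count() is what makes the comparison "distinct accounts changed" rather than "number of change events".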
Hi, I cross the results of a subsearch with a main search like this:

index=toto [ inputlookup test.csv | eval user=Domain."\\".Sam | table user ] | table _time user

Imagine I need to add a new lookup to my search. For example, I would try something like this:

index=toto [ inputlookup test.csv OR inputlookup test2.csv | eval user=Domain."\\".Sam | table user ] | table _time user

How can I do this, please?
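A minimal sketch of one way to combine the two lookups inside the subsearch, assuming both files have the same Domain and Sam columns; inputlookup append=true adds the second file's rows to the results of the first:

index=toto
    [| inputlookup test.csv
     | inputlookup append=true test2.csv
     | eval user=Domain."\\".Sam
     | table user]
| table _time user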
How can I change the colour of the info button in a dashboard?
Hi Splunkers! I am using Splunk Enterprise Security and creating correlation searches. One of them I created and tested manually by running the search over a specific period of time; many events matched, but no notable events are being created. To test the correlation, I added another action (send email) for when it triggers, and sure enough, an email was sent to me. Can anyone help me solve this issue?
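One hedged check that sometimes helps here: Enterprise Security writes notable events to the notable index, so you can verify whether anything was written for this rule (the search_name value below is a placeholder for your correlation search name):

index=notable search_name="My Correlation Search - Rule"
| stats count BY search_name

If nothing comes back, it is worth confirming that the correlation search has the Notable adaptive response action enabled, not just the email action.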
Dear Splunkers, I am currently facing an issue. We have a lookup on the SHC with some location information, e.g. location.csv:

location
DE
EN

The goal is to ingest data on the indexers only when the location in the event also appears in the lookup. The solution works with ingest_eval and lookup filtering. The question now is: can we manage this lookup at the SH level and give certain roles permission to add/remove locations on demand, with the change also reaching the lookup on the indexer cluster? E.g. I update the lookup on the SH and it is replicated to the lookup on the indexer cluster too. How can I achieve this? Kind Regards
Hi guys, I want to detect a service ticket request (Windows event code 4769) where neither of the following corresponding events appears before the service ticket request: 1. user ticket (TGT) request, Windows event code 4768; 2. ticket renewal request, Windows event code 4770.
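A minimal sketch of one way to start on this, assuming the account and source fields are normalized to user and src (placeholder names; adjust to your data) and ignoring strict event ordering for simplicity. It keeps user/source pairs that produced a 4769 with no 4768 or 4770 in the same time range:

index=wineventlog (EventCode=4769 OR EventCode=4768 OR EventCode=4770)
| stats values(EventCode) AS codes min(_time) AS first_seen BY user src
| where mvfind(codes, "4769") >= 0 AND isnull(mvfind(codes, "4768")) AND isnull(mvfind(codes, "4770"))

For the "appears before" requirement you could additionally compare the earliest timestamp per event code, but the sketch above is usually a reasonable first pass.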
Hi guys, I want to detect when more than 10 different ports on the same host are probed or scanned within a 15-minute window; if this triggers 5 times in a row, raise an alarm. If the same pattern keeps triggering for three consecutive days, raise an alarm as well.
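A minimal sketch of the first part, assuming firewall or network events with src, dest and dest_port fields (placeholder names). It counts distinct destination ports per source/destination pair in 15-minute buckets, then counts how many buckets crossed the threshold. Note this counts triggering buckets rather than verifying they are strictly consecutive (that would need an extra check on the gaps between bucket times), and the "three consecutive days" condition is usually easier to handle with a scheduled search writing to a summary index:

index=network sourcetype=firewall
| bin _time span=15m
| stats dc(dest_port) AS ports BY _time src dest
| where ports > 10
| streamstats count AS triggered_windows BY src dest
| where triggered_windows >= 5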
Hi All, I have the below two logs:

First log
2023-09-05 00:17:56.987 [INFO ] [pool-3-thread-1] ReadControlFileImpl - Reading Control-File /absin/CARS.HIERCTR.D090423.T001603

Second log
2023-09-05 03:55:15.808 [INFO ] [Thread-20] FileEventCreator - Completed Settlement file processing, CARS.HIER.D090423.T001603 records processed: 161094

I want to capture the timings for both logs. My current queries:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Reading Control-File /absin/CARS.HIERCTR."

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" "Completed Settlement file processing, CARS.HIER."
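A minimal sketch of one way to pair the two events and compute the elapsed time, assuming the D…T… token in the file name is the common key; the rex below is illustrative and may need adjusting to the exact raw format:

index="abc" sourcetype=600000304_gg_abs_ipc2 source="/amex/app/gfp-settlement-raw/logs/gfp-settlement-raw.log" ("Reading Control-File /absin/CARS.HIERCTR." OR "Completed Settlement file processing, CARS.HIER.")
| rex "CARS\.HIER(?:CTR)?\.(?<file_key>D\d+\.T\d+)"
| stats min(_time) AS start_time max(_time) AS end_time BY file_key
| eval duration_minutes=round((end_time-start_time)/60, 2)
| convert ctime(start_time) ctime(end_time)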
Hi Everyone, I have to extract a relative file path from a path. The path will be in the format C:\a\b\c\abc\xyz\abc.h. I want to skip the first 4 folders; that is, in this example I want to extract \abc\xyz\abc.h. How can I do it using regex?
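A minimal sketch, assuming the value is in a field named path (placeholder) and always has at least four leading segments. In SPL, each literal backslash in a regex is typically written as \\\\ inside the quoted string, so the escaping may need tweaking for your version:

| makeresults
| eval path="C:\\a\\b\\c\\abc\\xyz\\abc.h"
| rex field=path "^(?:[^\\\\]+\\\\){3}[^\\\\]+(?<relative_path>\\\\.*)$"

The non-capturing group plus the following segment skip the first four pieces (C:, a, b, c), and the capture keeps everything from the next backslash onward, i.e. \abc\xyz\abc.h.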
Is there any performance difference between using index IN ("windows_server") and index="windows_server"?
Hi Splunkers! I need to extract specific fields that the sourcetype does not already extract from the logs. Fields to extract: OS, OSRelease. Thanks in advance, Manoj Kumar S
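Without a sample event it is hard to be precise, but as a hedged illustration: if the raw data contained key/value pairs such as OS=Ubuntu OSRelease=22.04 (a purely hypothetical format), a search-time extraction could look like this; adjust the rex to the actual event layout:

index=<your_index> sourcetype=<your_sourcetype>
| rex field=_raw "OS=(?<OS>\S+)\s+OSRelease=(?<OSRelease>\S+)"
| table OS OSRelease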
I have a field in the event that contains multi-line data (between double quotes), and I need to split it into individual lines and finally present it as a table with a column for each header. Basically, the requirement is to report this data in table format to users.

output = "DbName|CurrentSizeGB|UsedSpaceGB|FreeSpaceGB|ExtractedDate
abc|60.738|39.844|20.894|Sep 5 2023 10:00AM
def|0.098|0.017|0.081|Sep 5 2023 10:00AM
pqr|15.859|0.534|15.325|Sep 5 2023 10:00AM
xyz|32.733|0.675|32.058|Sep 5 2023 10:00AM"
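A minimal sketch of one way to do this, assuming the field is already extracted as output and its lines are separated by newlines: the first rex turns the value into one multivalue entry per line, mvexpand gives one result row per line, the second rex splits on the pipe delimiter, and the header row is dropped at the end:

| rex field=output max_match=0 "(?<row>[^\r\n]+)"
| mvexpand row
| rex field=row "^(?<DbName>[^|]+)\|(?<CurrentSizeGB>[^|]+)\|(?<UsedSpaceGB>[^|]+)\|(?<FreeSpaceGB>[^|]+)\|(?<ExtractedDate>.+)$"
| where DbName!="DbName"
| table DbName CurrentSizeGB UsedSpaceGB FreeSpaceGB ExtractedDate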
Hi all! Recently there was a need to implement a centralized Splunk Universal Forwarder installation on Linux machines. I managed to do this without using Ansible, starting from the script by lguinn2: https://community.splunk.com/t5/Getting-Data-In/Simple-installation-script-for-Universal-Forwarder/m-p/21517. Today I want to share it with everyone; of course, if you have any comments or improvements, please write! I ended up with two slightly different scripts: 1 for CentOS and SuSE, 2 for Ubuntu and Debian, because different installation packages are used. Yes, some things could have been done differently; I just had to get it working as quickly as possible, and my Linux knowledge is also limited. Well, this post is for people like me. Note: the script has been tested on an Ubuntu distribution, so it is recommended that you run it from an Ubuntu machine. Successful completion requires SSH access to the target devices and the ability to execute the ssh and sshpass commands. All target machines must have an identical account with the same password, and this account must be able to act as superuser (run commands via sudo). MyForwarders and MyForwarders_U are simple text files storing the IP addresses of the target machines. I think the rest is clear from the description; even if questions come up along the way, you will figure everything out!

1.

#!/bin/bash
# Credentials of the user who will connect to the target host and run Splunk.
read -p "Enter SSH user name: " username
echo -n "Enter SSH user password: "
stty -echo
read password
stty echo
echo
INSTALLED=False
# Logging file for Splunk status
STATUS_LOG="/home/zhanali/splunk_status.txt"
# File with machine's IPs
HOSTS_FILE="/home/zhanali/MyForwarders"
# Installation file location
INSTALL_FILE="/home/zhanali/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm"
PREPARE_COMMANDS="
echo $password | sudo -S -k mkdir /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
"
INSTALL_COMMANDS="
echo $password | sudo -S -k chmod 644 /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm 2>/dev/null
echo $password | sudo -S -k rpm -i /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85.x86_64.rpm 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd '!@#qweasdZXC' 2>/dev/null
echo 'Please wait 10 second....'
sleep 10
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk stop 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk disable boot-start 2>/dev/null
echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk enable boot-start -user $username 2>/dev/null
echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start 2>/dev/null
echo $password | sudo -S -k mkdir /home/$username/.splunk 2>/dev/null
echo $password | sudo -S -k chmod 777 -R /home/$username/.splunk 2>/dev/null
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk add forward-server 172.16.30.104:9997 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk set deploy-poll 172.16.30.104:8089 -auth 'admin':'!@#qweasdZXC' 2>/dev/null
"
echo "In 5 seconds, will run the following script on each remote host:"
echo
sleep 5
echo "Reading host IPs from $HOSTS_FILE"
echo
echo "Starting."
for DST in `cat "$HOSTS_FILE"`; do
    if [ -z "$DST" ]; then continue; fi
    echo "---------------------------------" | tee -a $STATUS_LOG
    echo "Starting work with $DST" | tee -a $STATUS_LOG
    sshpass -p $password ssh -q $username@$DST [[ -f /opt/splunkforwarder/bin/splunk ]] && INSTALLED=True || INSTALLED=False
    if [ "$INSTALLED" = "True" ]; then
        echo "Splunk UF is already installed" | tee -a $STATUS_LOG
        version=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk version | grep 'Splunk Universal Forwarder'" 2>/dev/null)
        echo "Splunk UF version: $version" | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
        INSTALLED=False
    else
        echo "Splunk UF is not installed to host $DST" | tee -a $STATUS_LOG
        echo "Installing..." | tee -a $STATUS_LOG
        sshpass -p $password ssh $username@$DST "$PREPARE_COMMANDS"
        sshpass -p $password scp $INSTALL_FILE $username@$DST:/opt/splunkforwarder
        sshpass -p $password ssh $username@$DST "$INSTALL_COMMANDS"
        echo "Installation is done" | tee -a $STATUS_LOG
        echo "Checking..." | tee -a $STATUS_LOG
        status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null)
        echo "Splunk UF status: $status" | tee -a $STATUS_LOG
        dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null)
        fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null)
        if [ -z "$dep" ]; then
            echo "Deployment server is not configured" | tee -a $STATUS_LOG
        else
            echo "Deployment server is configured" | tee -a $STATUS_LOG
        fi
        if [ -z "$fwd" ]; then
            echo "Forward server is not configured" | tee -a $STATUS_LOG
        else
            echo "Forward server is configured" | tee -a $STATUS_LOG
        fi
    fi
    echo "---------------------------------" | tee -a $STATUS_LOG
done
echo "Done"

And 2.

#!/bin/bash
# Credentials of the user who will connect to the target host and run Splunk.
read -p "Enter SSH user name: " username echo -n "Enter SSH user password: " stty -echo read password stty echo echo INSTALLED=False # Logging file for Splunk status STATUS_LOG="/home/zhanali/splunk_status.txt" # File with machine's IPs HOSTS_FILE="/home/zhanali/MyForwarders_U" # Installation file location INSTALL_FILE="/home/zhanali/splunkforwarder-9.1.0.1-77f73c9edb85-linux-2.6-amd64.deb" PREPARE_COMMANDS=" echo $password | sudo -S -k mkdir /opt/splunkforwarder 2>/dev/null echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null " INSTALL_COMMANDS=" echo $password | sudo -S -k dpkg -i /opt/splunkforwarder/splunkforwarder-9.1.0.1-77f73c9edb85-linux-2.6-amd64.deb 2>/dev/null echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt --seed-passwd '!@#qweasdZXC' 2>/dev/null echo 'Please wait 10 second....' sleep 10 echo $password | sudo -S -k chown -R splunk:splunk /opt/splunkforwarder 2>/dev/null echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk stop 2>/dev/null echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk enable boot-start -user $username 2>/dev/null echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk start 2>/dev/null echo 'Please wait 5 second....' sleep 5 echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk add forward-server 172.16.30.104:9997 -auth 'admin':'!@#qweasdZXC' 2>/dev/null echo $password | sudo -S -k sudo -u $username /opt/splunkforwarder/bin/splunk set deploy-poll 172.16.30.104:8089 -auth 'admin':'!@#qweasdZXC' 2>/dev/null " echo "In 5 seconds, will run the following script on each remote host:" echo sleep 5 echo "Reading host IPs from $HOSTS_FILE" echo echo "Starting." for DST in `cat "$HOSTS_FILE"`; do if [ -z "$DST" ]; then continue; fi echo "---------------------------------" | tee -a $STATUS_LOG echo "Starting work with $DST" | tee -a $STATUS_LOG sshpass -p $password ssh -q $username@$DST [[ -f /opt/splunkforwarder/bin/splunk ]] && INSTALLED=True || INSTALLED=False if [ "$INSTALLED" = "True" ]; then echo "Splunk UF is already installed" | tee -a $STATUS_LOG version=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk version | grep 'Splunk Universal Forwarder'" 2>/dev/null) echo "Splunk UF version: $version" | tee -a $STATUS_LOG status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null) echo "Splunk UF status: $status" | tee -a $STATUS_LOG dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null) fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null) if [ -z "$dep" ]; then echo "Deployment server is not configured" | tee -a $STATUS_LOG else echo "Deployment server is configured" | tee -a $STATUS_LOG fi if [ -z "$fwd" ]; then echo "Forward server is not configured" | tee -a $STATUS_LOG else echo "Forward server is configured" | tee -a $STATUS_LOG fi INSTALLED=False else echo "Splunk UF is not installed to host $DST" | tee -a $STATUS_LOG echo "Installing..." 
| tee -a $STATUS_LOG sshpass -p $password ssh $username@$DST "$PREPARE_COMMANDS" sshpass -p $password scp $INSTALL_FILE $username@$DST:/opt/splunkforwarder sshpass -p $password ssh $username@$DST "$INSTALL_COMMANDS" echo "Installation is done" | tee -a $STATUS_LOG echo "Checking..." | tee -a $STATUS_LOG status=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k /opt/splunkforwarder/bin/splunk status | grep 'splunkd is '" 2>/dev/null) echo "Splunk UF status: $status" | tee -a $STATUS_LOG dep=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/deploymentclient.conf | grep '172.16.30.104:8089'" 2>/dev/null) fwd=$(sshpass -p $password ssh $username@$DST "echo $password | sudo -S -k cat /opt/splunkforwarder/etc/system/local/outputs.conf | grep '172.16.30.104:9997'" 2>/dev/null) if [ -z "$dep" ]; then echo "Deployment server is not configured" | tee -a $STATUS_LOG else echo "Deployment server is configured" | tee -a $STATUS_LOG fi if [ -z "$fwd" ]; then echo "Forward server is not configured" | tee -a $STATUS_LOG else echo "Forward server is configured" | tee -a $STATUS_LOG fi fi echo "---------------------------------" | tee -a $STATUS_LOG done echo "Done"  
Hi, my environment is UF -> HF -> IDX cluster. I have many errors on my HF indicating that it can't receive the data. Some are like:

"ERROR TcpInputProc - Message rejected. Received unexpected message of size=369295616 bytes from src=xxxx:xxxx in streaming mode. Maximum message size allowed=67108864. (::) Possible invalid source sending data to splunktcp port or valid source sending unsupported payload."

and some are like:

"ERROR TcpInputProc - Encountered Streaming S2S error = Received reference to unknown channel_code=1 for data received from src=xxx:xxx"

Any help?
Hi Team, how do I write a search query for CPU and memory utilization? Please help with this. Thanks.
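As a hedged starting point, assuming the Splunk Add-on for Unix and Linux is collecting cpu and vmstat data into an os index (index, sourcetype and field names vary by add-on and platform, so treat these as placeholders). For CPU:

index=os sourcetype=cpu cpu=all
| eval cpu_used_pct=100-pctIdle
| timechart span=5m avg(cpu_used_pct) AS avg_cpu_pct BY host

and for memory:

index=os sourcetype=vmstat
| timechart span=5m avg(memUsedPct) AS avg_mem_pct BY host

For Windows hosts the equivalent data usually comes from Perfmon sourcetypes with different field names.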
Hi, we have Splunk Website Monitoring 2.6 on Splunk Enterprise 7.2.6. All of a sudden my website monitoring summary page is blank and no URLs are being monitored. Please help.
Data: {"Field1":"xxx","message1":"{0}","message2":"xxx","message3":{"TEXT":"xxxx: xxx\r\n.xxxxx: {\"xxxxx\":{\"@CDI\":\"@ABC-123G-dhskdd-ghdkshd122@hkfhksdf12-djkshd12-hkdshd12 \",\"@RETURN\":\"xxxx-... See more...
Data: {"Field1":"xxx","message1":"{0}","message2":"xxx","message3":{"TEXT":"xxxx: xxx\r\n.xxxxx: {\"xxxxx\":{\"@CDI\":\"@ABC-123G-dhskdd-ghdkshd122@hkfhksdf12-djkshd12-hkdshd12 \",\"@RETURN\":\"xxxx-xxxxxxxxxx-xx-xxxxx\",\"@message4\":\"xxxxxx:xxx\",\"message5\":{\"message6............   Want to extract new field highlighted above but not getting any result.    This is what I tried: | rex field=_raw "RETURN\\\"\:\\\"(?<Field2>[^\\]+)"  
I basically have a long playbook consisting of sub-playbooks. I have 5 artifacts in a container I am using, where 4 will be dropped via 4 different decision actions and posted to a Confluent topic. The final artifact will make it through to the end of the playbook and also be posted in a Confluent topic. When I run each artifact individually, they work perfectly. However, when I try to run "all artifacts (5 in the container)" to simulate the artifacts coming in at the same time, they are each posted 5 times in the Confluent topic, totaling 25 instead of just 5. I have two hunches as to where the problem might be; one where the phantom.decision() is evaluating to True, despite only one artifact matching that criterion and just posting all 5 instead of 1 artifact. The other is that there is no "end" after my Post actions, so each artifact is being posted to Confluent, but then also continuing to the next Playbook against my intentions. I have no idea what is causing this and haven't found much in terms of documentation for my issue. I just find it annoying that they will work perfectly fine individually but the opposite when called together. This might be how it is designed to be, or probably that I'm doing something simply incorrectly, but any help regarding this would be greatly appreciated!
Where can I find the HL7 add-on for Splunk? We created a solution around this for the healthcare field. We now have an official go-ahead for a POC with Splunk in Asia, and we need the HL7 add-on. Can you please help us? Thanks, Sanjay