Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hello, I am trying to match values in two different columns to see if both data sets contain the same serial number for a cellphone part.

My search:

index=.. my search..... CellNumber="978499-"
| dedup CellSerialNumber
| table CellNumber CellSerialNumber
| appendcols [search ...... CellNumber="978499-ALL" | dedup CellSerial | table CellNumber CellSerial]
| eval result=if(match(CellSerial,"%".CellSerialNumber."%"),"Contained", "Not Contained")

Results: looking deeper into the data, I see CellSerialNumber values whose last six digits (-6digits) equal the six-digit CellSerial value, yet they are marked "Not Contained". Why is this?
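A likely explanation (an assumption, not verified against the poster's data): the eval match() function treats its second argument as a regular expression, so the "%" characters are matched literally rather than acting as SQL-style wildcards. The like() function is the one that understands "%". A sketch of the last pipeline step:

```
| eval result=if(like(CellSerial, "%".CellSerialNumber."%"), "Contained", "Not Contained")
```

Alternatively, match(CellSerial, CellSerialNumber) should also find a substring, since a regex match is unanchored by default, provided CellSerialNumber contains no regex metacharacters.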
When I search for the string "ERROR" in a log, I get the below:

< DEBUG : blah blah INFO : blah blah blah ERROR : <some error string> More blah blah >

I want to show only the whole line that starts with ERROR. The length of the error line is variable. How can I do this? I understand that fixing the line-breaking in props.conf might be a quicker way, but I don't have access to that file, so I would like to do it at search time. Thanks in advance.
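One search-time approach (a sketch; it assumes the event keeps its embedded newlines, and error_line is a field name I made up):

```
... your search for "ERROR" ...
| rex field=_raw "(?m)^(?<error_line>ERROR\s*:.*)$"
| table error_line
```

The (?m) flag makes ^ and $ match at each line break inside the event, so the capture grabs just the line that starts with ERROR, whatever its length.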
Hi. I am monitoring service status on a number of paired servers. While the service is running on server1, a report of the service stopped on server2 is a false positive. But if it stopped on server1 and did not start on server2, that is the case when I need to be alerted. Any example of alert logic you can share? Thank you.
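One way to express "alert only when no member of a pair is running" (a sketch with assumed field names host, pair, and status; adapt to your data): take the latest status per host, then roll up to the pair and keep pairs with zero running members:

```
... your service-status events ...
| stats latest(status) as status by pair host
| stats count(eval(status="running")) as running_members by pair
| where running_members=0
```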
The default port is 8088 with the below address. Due to an invalid certificate, I have a problem posting data there with my iPaaS application. Can someone advise, given that changing the port to 443 (valid) is disabled?

https://prd-p-4le0q.splunkcloud.com:8088/
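To isolate the problem, an insecure test post to the HEC endpoint shows whether the certificate is the only blocker (a sketch; the token is a placeholder, and -k bypasses certificate validation for testing only, never for production):

```
curl -k "https://prd-p-4le0q.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "hello world"}'
```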
We are currently using Splunk Enterprise on-prem 9.0.0, and when we try to install IT Essentials Work (https://splunkbase.splunk.com/app/5403/) it raises the following error: "Invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-DATABASE". Has anyone encountered this error and managed to install IT Essentials Work?
We have one standard mode federated index on a remote Splunk cluster. A local data model (model1) has a base search of index="federated:blah" | head 10. Using the search dialog for 'index="federated:blah" | head 10', we get 10 results as expected. Running '| from datamodel model1', we get nothing. Inspecting search.log, we see the remote Splunk instance being queried when using the search dialog; when calling the data model, there doesn't seem to be any communication out to the remote instance. Does standard mode federated search not support local data models querying a federated index? Am I doing something wrong?
Hello all, I am new to Splunk and need a little help. I have the following configuration:

- Splunk indexer server
- Splunk deployment server

I have installed the Universal Forwarder on my clients and specified the deployment server during installation. After installation, the clients report correctly to the deployment server. I have created two server classes, one for Windows and one for Linux.

Server class Linux:
- App "fwd_to_receiver": the Splunk indexer server is specified here.
- App "Linmess": inputs.conf (defines what should be monitored).

My question: I would like to monitor the /var/log/lastlog file, but this does not work with inputs.conf. I have now installed the Splunk Add-on for Unix and Linux. How can I set this up so that my deployment server distributes a central configuration where the lastlog file is monitored correctly and the sourcetype fits? Do I need to install the add-on on the indexer and on the deployment server as well?

Many thanks in advance!
Best regards, Codyy_Fast
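For reference, /var/log/lastlog is a binary file, so a plain [monitor://] stanza cannot parse it; the Splunk Add-on for Unix and Linux reads it through a scripted input instead. A sketch of what the deployed app's local/inputs.conf might enable (stanza name and sourcetype taken from the add-on's defaults; verify against the version you installed):

```
[script://./bin/lastlog.sh]
sourcetype = lastlog
source = lastlog
interval = 300
disabled = 0
```

Deploying the add-on with this input enabled from the deployment server to the Linux server class keeps the configuration central.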
I'm benchmarking the performance of search queries. I noticed that although the entire search pipeline takes a long time to complete, initial results are returned quickly. How can I measure the query run time until the first result is returned? Currently I'm measuring the entire run time with history.total_run_time, but that gives me the total time, and I want the time to first result.
Hi all, how can I find out whether a particular saved search is being used in any dashboard, alert, or report in Splunk?
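One common technique for the dashboard part (a sketch; my_saved_search is a placeholder) is to pull dashboard definitions over REST and search their XML for the saved search name:

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*my_saved_search*"
| table title eai:acl.app eai:acl.owner
```

Alerts and reports are themselves saved searches, so their definitions can be inspected the same way via /servicesNS/-/-/saved/searches.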
Hello everyone,

My environment: part of my infrastructure is deployed as Docker containers that I build and configure myself. Basically, I'm pulling an ubuntu:latest image on which I install a Splunk Universal Forwarder that transfers logs to a central Splunk Enterprise. At start time I'm using supervisord, a process control system, to start the UF and other processes.

The following steps are done on every build / deployment:
- Pull the latest Ubuntu image
- Install / configure the Splunk forwarder (create user / download .deb / install)
- Install other packages
- Start the Docker container with a simple bash script that does things at runtime
- Start the service via supervisord as root, which then starts the UF as the splunk user

Configurations

My Dockerfile:

FROM ubuntu:latest
ENV TZ=Europe/Paris
ARG DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN adduser --home /home/www-python --disabled-password --gecos "" www-python \
    && groupadd -r splunk \
    && useradd -r -m -g splunk splunk \
    && apt update \
    && apt install -y python3 python3-pip wget curl supervisor
RUN wget -O splunkforwarder-8.2.8-da25d08d5d3e-linux-2.6-amd64.deb "https://download.splunk.com/products/universalforwarder/releases/8.2.8/linux/splunkforwarder-8.2.8-da25d08d5d3e-linux-2.6-amd64.deb" \
    && dpkg -i splunkforwarder-*.deb \
    && rm -f splunkforwarder-*
COPY [ "src/splunkforwarder/inputs.conf", "src/splunkforwarder/outputs.conf", "src/splunkforwarder/server.conf", "/opt/splunkforwarder/etc/system/local/" ]
USER root
WORKDIR /root/
COPY [ "src/supervisor/service.conf", "/root/" ]
COPY ./src/start.sh /root/
RUN chmod +x /root/start.sh

My start.sh script:

#!/bin/bash
# Doing runtime stuff
supervisord -c /root/service.conf

My supervisor configuration:
[supervisord]
nodaemon=true
user=root

[program:splunkforwarder]
command=/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
user=splunk

[program:python-script]
command=some command to start a service

Problem encountered

I've traced the problem back to version 9.0.0: all the steps and configuration in this post work on every version below 9.0.0. Version 9.0.1 shows the same behavior. When the container starts, supervisord indicates that everything started smoothly:

2022-09-12 09:15:17,813 INFO Set uid to user 0 succeeded
2022-09-12 09:15:17,844 INFO supervisord started with pid 10
2022-09-12 09:15:18,856 INFO spawned: 'python-script' with pid 11
2022-09-12 09:15:18,858 INFO spawned: 'splunkforwarder' with pid 12
2022-09-12 09:15:19,863 INFO success: python-web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-09-12 09:15:19,863 INFO success: splunkforwarder entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)

But here is the catch: the splunk daemon seems to be stuck. As you can see, ps aux indicates that the process is taking all the CPU:

root@demo:~# ps aux
USER   PID %CPU %MEM   VSZ   RSS TTY STAT START   TIME COMMAND
root     1  0.0  0.0  2904  1012 ?   Ss  09:15   0:00 /bin/sh -c /root/start.sh
root     7  0.0  0.0  4508  3516 ?   S   09:15   0:00 /bin/bash /root/start.sh
root    10  0.0  0.1 33220 27204 ?   S   09:15   0:02 /usr/bin/python3 /usr/bin/supervisord -c /root/service.conf
splunk  12 56.0  0.0  4516  2900 ?   R   09:15 145:55 /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt

When I go looking for more logs, nothing has been created, as if the service never started:

root@demo:~# ls -alh /opt/splunkforwarder/var/log/splunk/
total 4.0K
drwx------ 2 splunk splunk 31 Sep 12 09:15 .
drwx--x--- 5 splunk splunk 57 Sep 12 09:15 ..
-rw------- 1 splunk splunk 70 Sep 12 09:15 first_install.log

If I then try to start the service manually, it asks me to accept the license agreement and indicates that a previous installation has been found and the instance needs to be migrated. The only error lines I have are these:

Creating unit file...
Error calling execve(): No such file or directory
Error launching command: No such file or directory

As I said earlier, the only thing that changed in this case is the Splunk version: I can start the service using version 8.2.x but not the latest one. Does anyone have any input on this? I didn't get any insight from the threads on this site (or elsewhere).
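One avenue worth checking (an assumption based on similar reports, not a confirmed diagnosis for this container): Splunk 9.x startup has been reported to spin at near-100% CPU inside containers whose open-file ulimit is extremely high. Capping nofile when starting the container is a cheap experiment:

```
docker run --ulimit nofile=65536:65536 ... my-uf-image
```

If the hang persists, the "Creating unit file..." error suggests the 9.x migration is attempting systemd boot-start setup, which cannot work inside a container without systemd and would be a separate thing to rule out.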
Hi, I am new to Splunk. I am trying to send a REST request from a Splunk dashboard, via a submit button, to an external server that listens for HTTP requests. How can I achieve that? I basically need to send a simple curl-style request with a variable in the path that the user selects from a drop-down list.
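Without custom JavaScript, the closest built-in mechanism is a drilldown link whose URL embeds a token (a Simple XML sketch; the host, path, and token name are placeholders). Note that the request is issued as a GET from the user's browser, not from the Splunk server:

```
<drilldown>
  <link target="_blank">http://my-external-server:8080/api/$selected_value$</link>
</drilldown>
```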
Why does alert manager not always trigger an alert?
Hi everyone, I have a DB Connect input and get a table like this:

_time               count
12/09/2022 10:00    1
12/09/2022 10:01    1
12/09/2022 10:03    1
12/09/2022 10:04    1
12/09/2022 11:05    2
12/09/2022 11:15    5
12/09/2022 11:05    6
12/09/2022 11:17    4
12/09/2022 12:05    1
12/09/2022 12:10    1
12/09/2022 12:12    1

I want to find the trend of the events I receive by hour, based on now. As I understand it, I have to count the number of events by hour to get a table like this, before displaying it as a single value:

_time               count
12/09/2022 10:30    14
12/09/2022 11:30    3

Suppose now is 12/09/2022 12:30, and I want to count events from 10:30-11:30 and from 11:30-12:30. If I use | timechart span=1h sum(count) as count, I get this table instead:

_time               count
12/09/2022 10:00    4
12/09/2022 11:00    17
12/09/2022 12:00    3

Please, is it possible to get the table that I want?

Have a nice day!
Julia
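If the goal is hourly buckets aligned to half past the hour instead of the top of the hour, timechart's aligntime option may do it (a sketch; aligntime requires a reasonably recent Splunk version):

```
| timechart span=1h aligntime=@h+30m sum(count) as count
```

With now at 12:30, this should produce buckets 10:30-11:30 and 11:30-12:30 rather than 10:00-11:00 and 11:00-12:00.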
Hello,

When I download a dashboard from Dashboard Studio, it comes out with horizontal and vertical scrollbars. Is there an option to download to PDF with full tables and without the scrollbars? Or is there an option to make the height dynamic, based on custom variables?

Thanks, Ran
Hi all, I have 3 indexers with 26 CPUs per indexer, and they are crying out loud due to load. I may not be able to increase CPUs, as there is a limit and the max is 26 cores. Will adding more indexers help?
This is a script for finding frozen bucket files in a time range you give. It shows the folders + size + start time and end time of the logs contained in each bucket folder, and asks whether to thaw (unfreeze) them.

#!/bin/bash
clear
echo "############################"
echo "##created.by mehran.safari##"
echo "##           2022         ##"
echo "############################"
##############
echo -n " Enter index name to lookup:"
read INAME
####
FROZENPATH="/frozendata"
echo " Default Splunk Frozen Indexes Path is $FROZENPATH. is it ok? (y to continue or n to give new path):"
read ANSWER3
case "$ANSWER3" in
  "y") echo -e "OK Default Frozen Index Path Selected.";;
  "n") echo -e "Enter NEW Frozen Index Path:"; read FROZENPATH;;
esac
####
find "$FROZENPATH/$INAME" -type d -iname "db_*" -print > "./frozendb.txt"
echo -n " Enter starting date you need (\"MM/DD/YYYY HH:MM:SS\"):"
read SDATE
echo -n " Enter end date you need (\"MM/DD/YYYY HH:MM:SS\"):"
read EDATE
##############
BSDATE=$(date -d "$SDATE" +%s)
BEDATE=$(date -d "$EDATE" +%s)
#############
FILE='./frozendb.txt'
while read line; do
  # bucket directory names look like db_<newest-epoch>_<oldest-epoch>_<id>
  LOGSTART=$(echo "$line" | cut -d "_" -f3)
  LOGEND=$(echo "$line" | cut -d "_" -f2)
  if [[ $BSDATE -le $LOGEND && $BEDATE -gt $LOGSTART ]]; then
    echo -e "******************************"
    echo -e "Frozen Log Path You want: $line"
    HLOGSTART=$(date -d @"$LOGSTART")
    HLOGEND=$(date -d @"$LOGEND")
    LOGSIZE=$(du -hs "$line" | cut -d "/" -f1)
    echo -e "*** this Bucket contains logs from: $HLOGSTART"
    echo -e "*** this Bucket contains logs to: $HLOGEND"
    echo -e "**** The Size Of This Log Is: $LOGSIZE"
    echo -e "$line" >> "./frozenmatched.txt"
    echo -e "******************************"
  #else
  #  echo "not in date range you want: $line"
  fi
done < "$FILE"
############
sudo rm -rf "./frozendb.txt"
echo "Do you Want to Unfreeze these Logs? (y to copy): "
read ANSWER
FILE2='./frozenmatched.txt'
INDEXPATH="/opt/splunk/var/lib/splunk"
DST="$INDEXPATH/$INAME/thaweddb/"
if [[ "$ANSWER" == "y" ]]; then
  echo " Default Destination is $DST. is it ok? (y to continue or n to give new path):"
  read ANSWER2
  case "$ANSWER2" in
    "y") echo -e "OK Default Destination Selected.";;
    "n") echo -e "Enter NEW Destination Path:"; read DST;;
  esac
  while read line2; do
    sudo cp -R "$line2" "$DST"
    echo -e "Executing copy of $line2 to $DST DONE."
    echo -e "$DST$(basename "$line2")"
    sudo /opt/splunk/bin/splunk rebuild "$DST$(basename "$line2")" "$INAME" --ignore-read-error
  done < "$FILE2"
fi
sudo rm -rf "./frozenmatched.txt"
##########
echo " Do you want to restart splunk service? (y to continue or n to exit):"
read ANSWER4
if [[ "$ANSWER4" == "y" ]]; then
  sudo /opt/splunk/bin/splunk restart
fi
##########
echo "################################"
echo "## GOOD LUCK WITH BEST REGARDS##"
echo "################################"
#########

This is the GitHub project if you need it; it may help you: https://github.com/mehransafari/Splunk_FrozenData_FIND_by_DATE_and_Restore
This bash script searches the frozen path you give for buckets older than the age you need, shows them, and asks whether to remove them. It shows the path + size + start and end time of the logs each bucket contains. For example, it will find buckets older than 30 days and ask you to remove them if you agree. It also detects buckets with wrong timestamps (log time > current time).

#!/bin/bash
clear
echo "############################"
echo "##created.by mehran.safari##"
echo "##           2022         ##"
echo "############################"
##############
echo -n " Enter index name to lookup:"
read INAME
####
FROZENPATH="/frozendata"
echo " Default Splunk Frozen Indexes Path is $FROZENPATH. is it ok? (y to continue or n to give new path):"
read ANSWER1
case "$ANSWER1" in
  "y") echo -e "OK Default Frozen Index Path Selected.";;
  "n") echo -e "Enter NEW Frozen Index Path:"; read FROZENPATH;;
esac
####
find "$FROZENPATH/$INAME" -type d -iname "db_*" -print > "./frozendb.txt"
ODATE=30
echo " oldest Frozen Bucket Should be $ODATE days old. is it ok? (press \"y\" to continue or \"n\" to change it):"
read ANSWER2
case "$ANSWER2" in
  "y") echo -e "OK Default Frozen Age Kept.";;
  "n") echo -e "Enter NEW Frozen AGE You Want:"; read ODATE;;
esac
BODATE=$(date --date="$(date) -${ODATE} days" +%s)
BCDATE=$(date +%s)
#############
FILE1='./frozendb.txt'
while read line; do
  # bucket directory names look like db_<newest-epoch>_<oldest-epoch>_<id>
  LOGSTART=$(echo "$line" | cut -d "_" -f3)
  LOGEND=$(echo "$line" | cut -d "_" -f2)
  # match buckets whose newest event is in the future (bad clock) or whose oldest event predates the cutoff
  if [[ $LOGEND -gt $BCDATE || $LOGSTART -lt $BODATE ]]; then
    echo -e "******************************"
    echo -e "Frozen Log Path You want: $line"
    HLOGSTART=$(date -d @"$LOGSTART")
    HLOGEND=$(date -d @"$LOGEND")
    LOGSIZE=$(du -hs "$line" | cut -d "/" -f1)
    echo -e "*** this Bucket contains logs from: $HLOGSTART"
    echo -e "*** this Bucket contains logs to: $HLOGEND"
    echo -e "**** The Size Of This Log Is: $LOGSIZE"
    echo -e "$line" >> "./frozenmatched.txt"
    echo -e "******************************"
  fi
done < "$FILE1"
############
sudo rm -rf "./frozendb.txt"
echo "Do you Want to DELETE these Logs? (y to DELETE): "
read ANSWER3
FILE2='./frozenmatched.txt'
if [[ "$ANSWER3" == "y" ]]; then
  while read line2; do
    sudo rm -rf "$line2"
    echo -e "DELETING of $line2 DONE."
  done < "$FILE2"
fi
sudo rm -rf "./frozenmatched.txt"
##########
echo "################################"
echo "## GOOD LUCK WITH BEST REGARDS##"
echo "################################"
#########

This is the GitHub link if you want it: https://github.com/mehransafari/Splunk_Frozen_Cleanup
Hi all, is there a way to integrate with O365 and, given a malicious email (identified by subject and sender), search for it in the mailboxes of all users and then delete it? I was looking for an action in the "EWS for Office 365" app and in "MS Graph for Office 365", but I do not see any action able to do that. For instance, the "run query" actions require a precise mailbox to look into. Thank you in advance.
Is there a way to create/update/delete tags other than through Administration Settings > Tags? I was looking for a way to do it through playbooks.
I have integrated Splunk with ServiceNow using the add-on. Now I have two questions:

1. I'm able to bring the desired case data into Splunk, but I can only create records; when I delete a case in ServiceNow, the corresponding record is not deleted in Splunk. What should I do?
2. When trying to push data from Splunk to ServiceNow, I'm able to push data only to the incident and event tables, not to my desired table. Is there a way to do that?