All Topics

Hi all - I'm attempting to write a query using earliest/latest based on a date field in the event, not _time. I've tried a dozen things, and no matter what I try the earliest/latest fields are not showing what I expect. I'm using 'my_report_date' as the desired earliest/latest field. When I run the following search, the earliest should be 11/1/22, but it shows as 11/2 (these events were sent to a summary index prior to the events of 11/1). The rest of the query finds the number of days between the first and last events. How do I refine this search to use 'my_report_date' instead of _time?

index=summary
| stats earliest(my_report_date) AS FirstFound, latest(my_report_date) AS LastFound by my_asset
| convert mktime(FirstFound) AS FirstFoundEpoch timeformat="%Y-%m-%d"
| convert mktime(LastFound) AS LastFoundEpoch timeformat="%Y-%m-%d"
| eval daysdiff=round((LastFoundEpoch-FirstFoundEpoch)/86400,0)
| stats count by my_asset, FirstFound, LastFound, daysdiff
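A minimal sketch of one possible fix (untested; it assumes my_report_date is always in "%Y-%m-%d" format): earliest() and latest() pick values by event _time, so convert my_report_date to epoch first and aggregate with min()/max(), which compare by value instead:

index=summary
| eval report_epoch=strptime(my_report_date, "%Y-%m-%d")
| stats min(report_epoch) AS FirstFoundEpoch, max(report_epoch) AS LastFoundEpoch by my_asset
| eval FirstFound=strftime(FirstFoundEpoch, "%Y-%m-%d"), LastFound=strftime(LastFoundEpoch, "%Y-%m-%d")
| eval daysdiff=round((LastFoundEpoch-FirstFoundEpoch)/86400,0)
| table my_asset, FirstFound, LastFound, daysdiff
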
I have 5 separate endpoints for our Okta environment that I'm pulling into Splunk. The data is all event driven, so if I'm trying to map user, group and application data together and the groups or applications were created over a year ago, it won't find the data unless I move the search window back, causing long searches. What I would like to do is create lookup tables for each of those endpoints so I only have to run one long query, one time, for those endpoints, and then append any group, application and user that is created each day via a saved search. Is this the right strategy, and could someone help me with how you would do that? I did see a few articles on appending data to a lookup table, but they didn't seem to meet my needs for this scenario. Thanks, Joel
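One common pattern for this (a sketch only; the sourcetype and field names below are placeholders, not the real Okta add-on fields): build each lookup once over a long window, then have a daily saved search merge only new rows into it. Baseline, run once:

index=okta sourcetype=okta:group earliest=-2y
| stats latest(_time) AS last_seen by group_id, group_name
| outputlookup okta_groups.csv

Daily saved search over just the last day, merged with the existing lookup so the table stays deduplicated:

index=okta sourcetype=okta:group earliest=-1d
| stats latest(_time) AS last_seen by group_id, group_name
| inputlookup append=true okta_groups.csv
| stats max(last_seen) AS last_seen by group_id, group_name
| outputlookup okta_groups.csv

Repeat per endpoint (users, applications), then join the small lookups at search time with the lookup or inputlookup commands instead of re-searching a year of raw events.
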
Hello everyone. First of all, this was working fine using images 8.x. Here is my compose for 8.2:

version: '3.6'
services:
  splunkuf82:
    tty: true
    image: splunk/universalforwarder:8.2
    hostname: universalforwarder82
    container_name: universalforwarder82
    environment:
      SPLUNK_START_ARGS: "--accept-license --answer-yes --no-prompt"
      SPLUNK_USER: root
      SPLUNK_GROUP: root
      SPLUNK_PASSWORD: "adminadmin"

Here are some commands to check if it is running:

jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder82$ docker compose down
jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder82$ docker compose up -d
[+] Running 2/2
 ⠿ Network rduniversalforwarder82_default  Created  0.1s
 ⠿ Container universalforwarder82          Started  0.4s
jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder82$ docker exec -it universalforwarder82 bash
[ansible@universalforwarder82 splunkforwarder]$ cd bin
[ansible@universalforwarder82 bin]$ sudo ./splunk status
splunkd is running (PID: 1125).
splunk helpers are running (PIDs: 1126).

Here is my compose for 9.0.3:

version: '3.6'
services:
  splunkuf903:
    tty: true
    image: splunk/universalforwarder:9.0.3
    hostname: universalforwarder903
    container_name: universalforwarder903
    environment:
      SPLUNK_START_ARGS: "--accept-license --answer-yes --no-prompt"
      SPLUNK_USER: root
      SPLUNK_GROUP: root
      SPLUNK_PASSWORD: "adminadmin"

Here are the same commands to check if it is running:

jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder903$ docker compose down
jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder903$ docker compose up -d
[+] Running 2/2
 ⠿ Network rduniversalforwarder903_default  Created  0.1s
 ⠿ Container universalforwarder903          Started  0.5s
jpla@rd:~/pd/rd/docker/rundeck/rd.universalforwarder903$ docker exec -it universalforwarder903 bash
[ansible@universalforwarder903 splunkforwarder]$ cd bin
[ansible@universalforwarder903 bin]$ sudo ./splunk status
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R root /opt/splunkforwarder"
Error calling execve(): No such file or directory
Error launching command: No such file or directory
execvp: No such file or directory
Do you agree with this license? [y/n]: y
This appears to be an upgrade of Splunk.
--------------------------------------------------------------------------------
Splunk has detected an older version of Splunk installed on this machine. To finish upgrading to the new version, Splunk's installer will automatically update and alter your current configuration files. Deprecated configuration files will be renamed with a .deprecated extension.
You can choose to preview the changes that will be made to your configuration files before proceeding with the migration and upgrade:
If you want to migrate and upgrade without previewing the changes that will be made to your existing configuration files, choose 'y'.
If you want to see what changes will be made before you proceed with the upgrade, choose 'n'.
Perform migration and upgrade without previewing configuration changes? [y/n] y
-- Migration information is being logged to '/opt/splunkforwarder/var/log/splunk/migration.log.2023-01-31.23-16-18' --
Migrating to:
VERSION=9.0.3
BUILD=dd0128b1f8cd
PRODUCT=splunk
PLATFORM=Linux-x86_64
Error calling execve(): No such file or directory
Error launching command: Invalid argument
^C
[ansible@universalforwarder903 bin]$ sudo ./splunk status
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R root /opt/splunkforwarder"
Error calling execve(): No such file or directory
Error launching command: No such file or directory
execvp: No such file or directory
Do you agree with this license? [y/n]:

As you can see, in 9.0.3 it asks for the license again, and again after saying yes the first time. This behaviour occurs on Docker version 20.10.23 and also on Minikube v1.29.0, on Linux Mint 21.1. I added tty: true per this recommendation, but it didn't work for me. Could anybody please confirm the issue? Thanks!
How can I combine the results of multiple fields into a single column with a common name, for example Test1, Test2, Test3 and so on up to Test20, with the common word "Test" in all the field names (either using foreach or any other solution)?

Test1  Test2  Test3  Test4  Test5  Test6  Test7  Test8
1      6      11     16     21     26     31     36
2      7      12     17     22     27     32     37
3      8      13     18     23     28     33     38
4      9      14     19     24     29     34     39
5      10     15     20     25     30     35     40

Result:

Test21
1
2
3
4
5
6
7
8
9
and so on

Any help would be appreciated.
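A minimal foreach-based sketch (untested; 'combined' is a placeholder name, and the final sort assumes the values are numeric): collect every Test* value into one multivalue field, then expand it into one row per value:

...
| eval combined=null()
| foreach Test* [ eval combined=mvappend(combined, '<<FIELD>>') ]
| mvexpand combined
| table combined
| sort 0 num(combined)

Without the sort, the expanded values come out row by row (1, 6, 11, ...) rather than 1, 2, 3, ...
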
I'm fairly new to Splunk and I am having some trouble setting up a data input from my universal forwarder. I currently have it configured to pull Windows event files from a specific folder on the machine; the files are moved there manually. However, it is only pulling seemingly random files, and 99% aren't getting indexed. I've tried specifying the file type to see if that was an issue, with no luck. I've also tried adding crcSalt = <string> to the inputs.conf file, no luck there either. Trying to see if I'm missing something, as I've gone through many other posts for similar issues to no avail. Any ideas are greatly appreciated.
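For reference, a hedged inputs.conf sketch (the path and index are placeholders): crcSalt takes the literal value <SOURCE>, which mixes the file path into the CRC check and helps when many exported files start with identical header bytes and Splunk mistakes them for files it has already read:

# Hypothetical monitor stanza; adjust path and index for your environment
[monitor://C:\exports\eventfiles\]
disabled = false
index = main
crcSalt = <SOURCE>

One other thing worth checking: if the files are binary .evtx exports rather than text, the monitor input skips files it detects as binary by default, which would also match the "random files" symptom.
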
So I have a vSphere environment. Our indexer machines are running RHEL 8.7 and I installed Splunk Enterprise on all of them. We named them indx01, indx02 and indx03 (real creative, yep). With some googling, we turned off distributed search and disabled the firewall just to be sure. We initially had success in adding peers to the index cluster master, but they were throwing an error saying they were unable to connect to the cluster master, something about the replication factor, and so on. So then we disabled indexer clustering on all of them, and now I can't get any of them to be added.

Distributed search turned off: check.
Firewall disabled: check.
On the same domain and DNS: check.

I am attaching an image of a warning, but I don't know what, if anything, it has to do with the problem.
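For comparison, a hedged sketch of re-enabling clustering from the CLI (hostnames, the secret, and ports are placeholders; this is the Splunk 9.x syntax, where 'manager'/'peer' replace the older 'master'/'slave' modes). On the node acting as cluster manager:

splunk edit cluster-config -mode manager -replication_factor 2 -search_factor 2 -secret mysecret
splunk restart

On each of indx01, indx02 and indx03:

splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -replication_port 9887 -secret mysecret
splunk restart

The secret (pass4SymmKey) must match on the manager and every peer; a mismatch is a classic cause of "unable to connect to the cluster master" errors.
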
I've been working on a dashboard/query that takes two date/time values (UTC) from Zscaler ZPA logs and converts them to the local timezone (PST). Some entries have a blank Time_Disconnected value and I do not know why.

Original (Zscaler):
TimestampAuthentication=2023-01-31T16:51:09.000Z
TimestampUnAuthentication=2023-01-31T17:19:05.169Z

Query:
| rename TimestampAuthentication AS Time_Auth, TimestampUnAuthentication AS Time_Disconn
| eval Time_Authenticated=strftime(strptime(Time_Auth, "%Y-%m-%dT%H:%M:%S.%z"), "%Y-%m-%d %H:%M:%S")
| eval Time_Disconnected=strftime(strptime(Time_Disconn, "%Y-%m-%dT%H:%M:%S.%z"), "%Y-%m-%d %H:%M:%S")
| sort -_time
| table _time, Time_Auth, Time_Authenticated, Time_Disconn, Time_Disconnected

(Time_Auth and Time_Disconn are the raw values.) Why is it that the last entry does not have the Time_Disconnected field populated? I have seen a few of those conversions not working. Is my query incorrectly formatted in some way?
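One hedged observation (a sketch, not verified against this data): the format string ends in ".%z", but the raw values end in fractional seconds plus a literal Z (for example "169Z"), not a numeric UTC offset, so strptime can fail intermittently. Splunk's date-time format variables include %3N and %3Q for milliseconds, so something like this may parse more reliably:

| eval Time_Disconnected=strftime(strptime(Time_Disconn, "%Y-%m-%dT%H:%M:%S.%3NZ"), "%Y-%m-%d %H:%M:%S")

Note that parsing without %z means the string is interpreted in the search-time timezone rather than as UTC, so the PST conversion may need an explicit offset adjustment.
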
Dear Splunkers, I am very keen to learn Splunk admin. Can anyone refer me to a good YouTube channel or any other online institute? Moreover, if possible, please provide me a list of the topics available in a Splunk admin course. I would appreciate your kind support. Thanks in advance.
I have installed my first Splunk Enterprise on a Linux server and installed forwarders on Windows workstations using the ports as instructed. The firewall is off and SELinux is off. The forwarders are calling in. Now, perhaps I am missing something: in Splunk, I select Search and enter * (or index=anything; there is a long list) and the error is:

The transform ca_pam_login_auth_action_success is invalid. Its regex has no capturing groups, but its FORMAT has capturing group references.

I tried another search, and saw another error:

Error in "litsearch" command: Your Splunk license expired (the license is new) or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store or calling 866-GET-SPLUNK. The search job failed due to an error. You may be able to view the job in the job inspector.

All I want is to understand why FORMAT has capturing group references but the regex does not, and to turn my paperweight into a thriving reporting tool. Can anyone help? Thank you!
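To illustrate the rule behind the first message (a hypothetical transforms.conf stanza; the real ca_pam_login_auth_action_success contents will differ): if FORMAT refers to $1, the REGEX must contain a parenthesized capturing group for $1 to point at:

# Invalid: FORMAT references $1, but the regex captures nothing
[ca_pam_login_auth_action_success]
REGEX = authentication succeeded
FORMAT = action::$1

# Valid: the group in the regex supplies $1
[ca_pam_login_auth_action_success]
REGEX = authentication (succeeded|failed)
FORMAT = action::$1

The transform name suggests it ships with a CA PAM add-on, so the fix is likely in that app's transforms.conf (or an add-on update). The license error is separate: Splunk disables search after repeated daily volume violations, and it recovers once the violations age out or a valid or reset license is applied.
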
Query:

| tstats count where index=afg-juhb-appl host_ip=* source=* TERM(offer)

I want to get the count of each source by host_ip, as shown below.

Output:

source                   11.56.67.12  11.56.67.15  11.56.67.18  11.56.67.19
/app/clts/shift.logs     987          67           67           89
/apps/lts/server.logs    45           45           67           43
/app/mts/catlog.logs     89           89           65           56
/var/http/show.logs      12           87           43           65
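A minimal sketch of one way to get that shape (untested): group the tstats count by both fields, then pivot host_ip into columns with xyseries:

| tstats count where index=afg-juhb-appl host_ip=* source=* TERM(offer) by source, host_ip
| xyseries source host_ip count
| fillnull value=0
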
I feel like there's a simple solution to this that I just can't remember. I have a field named Domain that has 13 values, and I want to combine the ones that are similar into single field values. This is how it currently looks:

Domain                 Count
BC                     1
WIC                    3
WIC, BC                2
WIC, UPnet             3
WIC, DWnet             5
WIC, DWnet, BC         6
WIC, DWnet, UPnet      1
WIC/UPnet              3
WIC/DWnet              2
UPnet                  5
UPnet, SG              6
DWnet                  1
DW                     1

I want to merge the values "WIC, UPnet" and "WIC/UPnet" into "WIC, UPnet", "WIC, DWnet" and "WIC/DWnet" into "WIC, DWnet", and "DWnet" and "DW" into "DWnet". The new results should read:

Domain                 Count
BC                     1
WIC                    3
WIC, BC                2
WIC, UPnet             6
WIC, DWnet             7
WIC, DWnet, BC         6
WIC, DWnet, UPnet      1
UPnet                  5
UPnet, SG              6
DWnet                  2
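A minimal sketch (untested; it assumes the table above comes from an earlier stats count by Domain): normalize the variants with eval, then re-aggregate:

...
| eval Domain=replace(Domain, "/", ", ")
| eval Domain=if(Domain="DW", "DWnet", Domain)
| stats sum(Count) AS Count by Domain

The replace() turns "WIC/UPnet" into "WIC, UPnet" and "WIC/DWnet" into "WIC, DWnet". If the normalization runs before the original stats count instead, use stats count rather than sum(Count).
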
Hi, I need a search query that raises an alert if either the first_find or the last_find value matches the current date.

1. The first_find and last_find fields are in this format:
2020-04-30T13:18:13.000Z
2023-01-15T14:12:18.000Z
I need them in the 2020-04-30 format.

2. Instead of receiving all the alerts, we require an alert only if today's date matches first_find or last_find. (Today's date changes every day; do not bind the search to a hard-coded date.)

Note: last_find and first_find are multivalue fields. Thanks...
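A minimal sketch (untested; assumes Splunk 8.0+ for mvmap): trim each multivalue entry to its date part, then keep rows where any entry equals today:

...
| eval first_days=mvmap(first_find, substr(first_find, 1, 10))
| eval last_days=mvmap(last_find, substr(last_find, 1, 10))
| eval first_hit=mvfilter(first_days=strftime(now(), "%Y-%m-%d"))
| eval last_hit=mvfilter(last_days=strftime(now(), "%Y-%m-%d"))
| where isnotnull(first_hit) OR isnotnull(last_hit)

Saved as an alert that triggers on "number of results > 0", this fires only on days where a first_find or last_find date equals the run date.
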
Hello! I am calculating utilization using the code below. Yet I want to only account for utilization during weekdays, instead of the whole week. To do this, I set date_wday = Monday, Tuesday, Wednesday, Thursday, or Friday, BUT when doing this, the utilization still accounts for the whole search time frame, when I just want it to look at the time for business weeks.

Code:

index=example date_wday=monday OR tuesday OR wednesday OR thrusday OR friday
| transaction Machine maxpause=300s maxspan=1d keepevicted=T keeporphans=T
| addinfo
| eval timepast=info_max_time-info_min_time
| eventstats sum(duration) as totsum by Machine
| eval Util=min(round((totsum)/(timepast)*100,1),100)
| stats values(Util) as "Utilized" by Machine
| stats max(Utilized)

Can I please have help? Thank you.
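Two hedged observations (not a verified fix): first, in "date_wday=monday OR tuesday OR ..." only monday is bound to date_wday; the other words become bare search terms (and "thrusday" is misspelled). The IN operator scopes them all:

index=example date_wday IN (monday, tuesday, wednesday, thursday, friday)

Second, addinfo's info_min_time and info_max_time always span the full search window, weekends included, so timepast stays too large even once the events are filtered; the denominator would need to count only weekday seconds (for example, the number of weekdays in the window multiplied by 86400).
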
Hello everyone, I have a question about base searches in Splunk Dashboard Studio. I used this option to make my parent search and my chain search. For example, I created a search which uses the base search SI_bs_nb_de_pc. However, I have a problem with these errors (see attached screenshots). Can you help me please? Another question: how do I use multiple base searches in the same search? I didn't find a way to do this in Dashboard Studio. I need your help, thank you so much.
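For reference, a hedged sketch of how a chain search is wired in the Dashboard Studio source JSON (the queries and the chain's name are placeholders; only SI_bs_nb_de_pc comes from the post). Two things commonly cause errors here: the chain's query must start with a pipe, and its extend option names exactly one base data source, which is also why chaining one search off several bases at once isn't directly expressible:

"dataSources": {
    "SI_bs_nb_de_pc": {
        "type": "ds.search",
        "options": {
            "query": "index=main sourcetype=pc_logs | fields host, status"
        }
    },
    "chain_count_by_status": {
        "type": "ds.chain",
        "options": {
            "extend": "SI_bs_nb_de_pc",
            "query": "| stats count by status"
        }
    }
}
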
I configured a script-based app for the databases, which brings in data as follows. When I run the script on the UF directly, I get the expected output, but when I push an app containing the same script, it fetches different output.

Expected data after running the script on the UF:
Date, datname="sql", age="00:00:00"

Output we are receiving at the Splunk SH:
Date, datname="datname", age="age"

The script is kept in /opt/splunkforwarder/etc/apps/appname/bin (scripts) and /opt/splunkforwarder/etc/apps/appname/local (inputs.conf).

For troubleshooting I have followed the below steps:
- Removed and pushed the app again
- Tried restarting the UF

Has anyone seen or faced a similar issue? Please help me with this.
Hi, we have multiple UFs running on an old version, and I want to upgrade them to the latest version using the deployment server and scripts. Can you please help me with how to do it? Can you please provide a PowerShell script to upgrade the UF version?
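A caveat and a hedged sketch: the deployment server only distributes apps, it can't push a new forwarder binary by itself, so the usual workaround is a deployed app whose script runs the installer locally. A minimal PowerShell sketch (the share path, version, and build hash are placeholders; AGREETOLICENSE=Yes is the documented MSI property for silent installs):

# Upgrade the Windows universal forwarder in place from a staged MSI (hypothetical path)
$msi = "\\fileserver\splunk\splunkforwarder-9.0.3-dd0128b1f8cd-x64-release.msi"
Start-Process msiexec.exe -ArgumentList "/i `"$msi`" AGREETOLICENSE=Yes /quiet /norestart" -Wait
# The installer normally restarts the SplunkForwarder service; start it if it didn't
Start-Service SplunkForwarder

Test on one host first; an in-place MSI upgrade keeps the existing configuration under SPLUNK_HOME.
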
Hi, is there an alert action to save the results of the search directly to a specified, existing index? I already tried the "Log event" alert action, but I did not know how to access the results of my search from the "Event" field that has to be specified. Thanks for your help!
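One possible approach (a sketch; the index name is a placeholder): instead of an alert action, append the collect command to the search itself, which writes the search results as events into an index you have already created:

... your search ...
| collect index=my_results_index sourcetype=stash

Scheduled as a saved search, this lands each run's results in my_results_index without any alert action at all.
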
Hi! I'm trying to detect multiple user access from the same source (same mobile device). I'm feeding Splunk with logs from a mobile app like this:

09:50:14,524 INFO [XXXXXXXXXXXX] (default task-XXXXXX) [ authTipoPassword=X, authDato=XXXXX, authTipoDato=X, nroDocEmpresa=X, tipoDocEmpresa=X, authCodCanal=XXX, authIP=XXX.XXX.XXX.XXX, esDealer=X, dispositivoID=XXXXXXXXXX, dispositivoOS=XXXXX ]

I'm using the following search:

search XXXX
| stats dc(authDato) as count, values(authDato) as AuthDato by dispositivoID dispositivoOS authIP
| where count > 1
| sort - count

and I get almost all the info I wanted (like two different users - authDato - from the same device ID - dispositivoID), but I would like to enrich the data with the time of the last occurrence of the event. Is there a way to include this information? Thanks in advance.
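A minimal sketch (untested): latest(_time) in the same stats call carries the most recent event time through, and strftime makes it readable:

search XXXX
| stats dc(authDato) as count, values(authDato) as AuthDato, latest(_time) as last_seen by dispositivoID dispositivoOS authIP
| where count > 1
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - count
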
I have two tables and I want to relate them by the number of events in an hour. I managed to write a SQL query, but I'm struggling to do the same in Splunk. I send the data of these two tables to two different indexes (a simple copy) and want to reproduce this:

WITH count_reserved AS (
    SELECT count(ru.id) AS reserved, to_char(ru.date, 'yyyy-mm-dd hh24') AS time
    FROM reserved ru
    GROUP BY to_char(ru.date, 'yyyy-mm-dd hh24')
),
count_concluid AS (
    SELECT count(u.id) AS concluid, to_char(u.date, 'yyyy-mm-dd hh24') AS time
    FROM concluid u
    GROUP BY to_char(u.date, 'yyyy-mm-dd hh24')
)
SELECT coalesce(concluid, 0) AS concluid,
       reserved,
       count_reserved.time,
       ((coalesce(concluid::decimal, 0) / reserved) * 100) AS percentage
FROM count_reserved
LEFT JOIN count_concluid ON count_concluid.time = count_reserved.time
ORDER BY 3 ASC

The information I want returned is the percentage value and the time, to make an hourly bar graph.
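A minimal SPL sketch (untested; it assumes the two indexes are literally named reserved and concluid and that _time carries the event date): searching both indexes at once and pivoting on the index name avoids a join entirely:

(index=reserved) OR (index=concluid)
| bin _time span=1h
| chart count over _time by index
| eval percentage=if(reserved > 0, round((coalesce(concluid, 0) / reserved) * 100, 2), null())
| table _time, concluid, reserved, percentage
| sort 0 _time

The "chart count over _time by index" step plays the role of the SQL LEFT JOIN on the hour bucket, and coalesce covers hours with reservations but no conclusions.
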
Hi friends, we are using Splunk Cloud 9.0. I want to ingest Azure SQL Managed Instance data into Splunk. Could you please suggest which add-on I need to use to integrate Azure SQL Managed Instance data into Splunk? If possible, share some references on how to set up that add-on. In our environment we are using the following components: Universal Forwarder, Heavy Forwarder, Deployment Server, Search Head, IDM, Indexer, Cluster Master. Could you please confirm where we need to install the add-on: HF, SH, or IDM? Regards, Jagadeesh