All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, in my first dashboard I use the time picker below:

<fieldset submitButton="false">
  <input type="dropdown" token="period">
    <label>Période</label>
    <choice value="1654466400.0">Lundi 6 Juin 2022</choice>
    <choice value="1655071200.0">Lundi 13 Juin 2022</choice>
    <choice value="1655676000.0">Lundi 20 Juin 2022</choice>
    <change>
      <eval token="debut">period</eval>
      <eval token="fin">debut+432000</eval>
      <eval token="debut_4w">relative_time(debut,"-4w")</eval>
      <eval token="fin_4w">relative_time(debut,"-0w")</eval>
    </change>
    <default>1655071200.0</default>
    <initialValue>1655071200.0</initialValue>
  </input>
</fieldset>

Now I need to retrieve the time choice made in that picker in a second dashboard, so here is the link I use:

<a href="/app/spl_pub/bp?form.period=$form.period$" target="_blank">Cliquez ici</a>

And in the second dashboard I added this to each panel, but it doesn't work:

| search period=$form.period$

What is the problem, please?
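For anyone comparing notes later: the period token holds an epoch timestamp, not a field that is presumably present in the events, so | search period=$form.period$ is filtering on a field that likely does not exist and returns nothing. A minimal sketch of how a panel in the second dashboard could instead use the passed token as a time filter, assuming the token really arrives as form.period and with a placeholder index name:

index=web earliest=$form.period$
| eval fin=$form.period$+432000
| where _time<fin
| stats count by host

The 432000 mirrors the five-day window the first dashboard derives for its fin token; both the index and the final stats are illustrative only.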
Problem: getting API data from an external service.

Script location: /opt/splunk/etc/apps/statuscake/bin/statuscake.sh

#!/bin/bash
curl https://api.statuscake.com/v1/pagespeed \
  -H "Authorization: Bearer secretkeyhere"

When I run the curl command directly on the Linux host, the command works. When I run the script above from the CLI (as the splunk user), it also works. When Splunk itself runs the script, the output shows up only in splunkd.log:

09-13-2022 03:21:49.753 +0000 ERROR ExecProcessor [7634 ExecProcessor] - message from "/opt/splunk/etc/apps/statuscake/bin/statuscake_api.sh" \r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (77) Problem with the SSL CA cert (path? access rights?)
09-13-2022 03:21:49.649 +0000 ERROR ExecProcessor [7634 ExecProcessor] - message from "/opt/splunk/etc/apps/statuscake/bin/statuscake_api.sh" % Total % Received % Xferd Average Speed Time Time Time Current
09-13-2022 03:20:49.771 +0000 ERROR ExecProcessor [7634 ExecProcessor] - message from "/opt/splunk/etc/apps/statuscake/bin/statuscake_api.sh" \r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (77) Problem with the SSL CA cert (path? access rights?)

Already tried (without success):
- Updating the ca-bundle on CentOS (curl points to the right path).
- Updating the Splunk cert.
- Trying to replicate, but curl and the script work without problems as long as they are not triggered by Splunk.
Hello, I understand that the HTTP Event Collector receives data over HTTPS on TCP port 8088 by default. What I am wondering is: if I have virtual machines running in the Azure cloud, do I need to open both inbound and outbound port 8088 in the Azure portal firewall settings? Also, I was hoping to disable HTTPS by clicking the Global Settings button at the top of the HTTP Event Collector management page in Splunk Cloud, but I see that it's greyed out. I am in the admin role, so is this changeable?
Data is not being indexed from my Universal Forwarders. There are 12 Universal Forwarders in total, and only one of them gets data in. Even if I change crcSalt, the data is still not indexed. Which part should I check?

Settings:

[monitor://D:\Splunk\iocheck\PMC*\PMC*.csv]
disabled = 0
index = pmc
host_segment = 3
sourcetype = pmc_iotable
crcSalt = <SOURCE>
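Not an answer to the crcSalt question itself, but a hedged sketch of a search that can help narrow this down: the indexer's own metrics.log records incoming forwarder connections under group=tcpin_connections, so running this on the indexing tier shows which of the 12 forwarders are actually connecting at all.

index=_internal source=*metrics.log* group=tcpin_connections
| stats count latest(_time) as last_seen by hostname
| eval last_seen=strftime(last_seen, "%F %T")

Forwarders missing from that list have a connectivity or outputs problem rather than an inputs.conf problem; for the ones that do appear, the TailingProcessor messages in their splunkd.log usually say why a monitored file is being skipped.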
I am using Splunk Cloud. I would like to use a lookup file to find out whether an IP matches a blacklist, but only 10.50.88.22 is hit.

Lookup definition (match type):
WILDCARD(IP)

Contents of the lookup file:
IP
10.50.88.22
10.30.50.70

Search statement:

| makeresults format=csv data="IP
10.50.88.220
10.50.88.22
10.50.88.2"
| lookup test.csv IP OUTPUT IP as list_IP
| where list_IP IN(IP)
| table IP list_IP

If it worked the way I want, the following two would hit:
10.50.88.220
10.50.88.22

Referencing past questions and changing the lookup definition to WILDCARD(IP) did not help either. Is my search statement wrong? Any advice would be greatly appreciated.
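For what it's worth, a minimal sketch of the wildcard-lookup setup this appears to be aiming for, assuming the goal is prefix matching: with match_type = WILDCARD(IP) set on the lookup definition, the wildcard characters have to be inside the lookup file itself (for example an IP value of 10.50.88.22* rather than 10.50.88.22), and the matching then happens in the lookup command, so no extra where comparison against the returned value is needed.

| makeresults format=csv data="IP
10.50.88.220
10.50.88.22
10.50.88.2"
| lookup test.csv IP OUTPUT IP as list_IP
| where isnotnull(list_IP)
| table IP list_IP

The behaviour described (only the exact value hitting) is what you would see if the CSV rows contain no * characters, since values without wildcards still match exactly under WILDCARD match_type.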
Brand new VM server. Fresh copy of the Splunk 9.0 install file. Running the installer with elevated privileges. Selecting the Domain Account option in the wizard. The account used is a member of Domain Admins. Accou... Generated a log file for the install, but nothing in it shows an error. All of these were suggestions I found online to check when the install is not working. Yet it still fails during the install and does a rollback. Any other suggestions? Thanks
Hello All, I have been tasked with building a clustered environment from scratch in PROD. This will be my first. I have only practiced in a test environment, where everything usually goes fine, so I would like to know any DOs and DON'Ts, or tips to be more successful.

Secondly, once I am done and everything is running, how do I connect the old environment to the new one and transfer (or rather copy) the same alerts, reports, dashboards, and apps to the new site?

Thanks for your help in advance.
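On the second question, a hedged starting point for taking inventory before the move: alerts and reports are saved searches stored inside apps on the old search head, so a read-only REST search such as the sketch below lists what exists and which app owns it (run it on the old environment).

| rest /servicesNS/-/-/saved/searches splunk_server=local
| table title eai:acl.app eai:acl.owner eai:acl.sharing

Dashboards can be listed the same way from the /servicesNS/-/-/data/ui/views endpoint; copying the apps (including their local/ directories) to the new site is what actually carries these knowledge objects over.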
Hi all! We use stats commands to pull in data from our APIs, but our APIs get called multiple times in a single session. This works well if you want to use the first or last API call, using first(variable) or last(variable). However, we want to pull in the middle API call. Is there a way to do this? I realize there's no param for middle(variable), but I'm looking for possible alternatives. Any help would be much appreciated!

index=conversation sourcetype="cui-orchestration-log" botId=123456
| stats first(experiments__40000) as treatment middle(case_number) as case_ID by sessionId
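For context, a sketch of one possible alternative, assuming list() keeps the calls in event order and that a session stays under list()'s 100-value cap: collect the ordered values per session and pick the middle one with mvindex. The field names are taken from the search above.

index=conversation sourcetype="cui-orchestration-log" botId=123456
| stats first(experiments__40000) as treatment list(case_number) as all_cases by sessionId
| eval case_ID=mvindex(all_cases, floor((mvcount(all_cases)-1)/2))
| fields - all_cases

For an even number of calls this picks the earlier of the two central values; adjust the floor/ceiling to taste.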
My organization has a 10 GB/day data ingest subscription with Splunk. Recently, every Tuesday, our firewall data ingest spikes and pushes us over the 10 GB limit. How can I find out what is causing this Tuesday spike? Any suggestion will be appreciated.
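A hedged sketch of the kind of search that usually answers this: the license usage log on the license manager breaks daily ingest down by sourcetype (st), source (s), host (h) and index (idx), so a day-over-day chart shows which firewall sourcetype or host is responsible for the Tuesday jump. This assumes the _internal index of the license manager is searchable from where you run it.

index=_internal source=*license_usage.log* type=Usage
| eval GB=round(b/1024/1024/1024, 2)
| timechart span=1d sum(GB) as GB by st useother=f limit=20

Swapping st for h or idx in the by clause slices the same data by host or destination index.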
Hey, I was trying to filter some search data in Splunk using regex. I was able to figure out the regex part; however, when I try to put it into Splunk, I get an error:

Error in 'SearchParser': Missing a search command before '\'. Error at position '321' of search query 'search index=nessus [ search index=nessus ...{snipped} {errorcontext = <paths>^([\w]+[^\w\r\}'.

Splunk command:

| rex field=pluginText (?<paths>^([\w]+[^\w\r\n]+){2}[\w]+)

Regex link: regex101: build, test, and debug regex
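The parser error here is most likely about quoting rather than the regex itself: rex expects its regular expression as a quoted string, and an unquoted backslash is read as the start of a new (missing) search command. A minimal sketch, keeping the same field and pattern:

| rex field=pluginText "(?<paths>^([\w]+[^\w\r\n]+){2}[\w]+)"

If the rex runs inside the subsearch, the quotes are just as necessary there.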
Hello, I am trying to match values in two different columns to see if both data sets contain the same serial number for a cellphone part.

My search:

index=.. my search..... CellNumber="978499-"
| dedup CellSerialNumber
| table CellNumber CellSerialNumber
| appendcols [search ......CellNumber="978499-ALL" | dedup CellSerial | table CellNumber CellSerial]
| eval result=if(match(CellSerial,"%".CellSerialNumber."%"),"Contained", "Not Contained")

Results:

Looking deeper into the data, I see there are CellSerialNumber values whose last 6 digits (-6digits) equal the six-digit CellSerial number, yet they are given a "Not Contained" value. Why is this?
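One likely explanation, for reference: match() treats its second argument as a regular expression, so the % characters are matched literally rather than as wildcards (% is SQL/like() syntax), which produces "Not Contained" even when one value genuinely contains the other. A minimal sketch of the same comparison with like(), keeping the field names from the search above:

| eval result=if(like(CellSerial, "%".CellSerialNumber."%"), "Contained", "Not Contained")

Also worth noting: appendcols pairs rows purely by position, so row N of the outer search is compared with row N of the subsearch; a lookup or a join on a shared key is usually safer for this kind of containment check.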
When I search for the string "ERROR" in a log, I get events like the below:

< DEBUG : blah blah
INFO : blah blah blah
ERROR : <some error string>
More blah blah >

I want to show only the whole line that starts with ERROR. The length of the error line is variable. How can I do this? I understand that fixing the line-breaking in props.conf might be a quicker way, but I don't have access to that file, so I would like to do it at search time instead. Thanks in advance.
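A minimal search-time sketch, assuming the wanted line always starts with the literal string ERROR and with a placeholder index name; the (?m) flag makes ^ match at the start of each line inside the multi-line event.

index=app_logs "ERROR"
| rex field=_raw "(?m)^(?<error_line>ERROR\s*:.*)"
| table _time error_line

If an event can contain several ERROR lines, adding max_match=0 to the rex turns error_line into a multivalue field holding all of them.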
Hi. I am monitoring service status on a number of paired servers. While the service is running on server1, a report that the service is stopped on server2 is a false positive. But if it stopped on server1 and did not start on server2, that is the case where I need to be alerted. Any example of alert logic you can share? Thank you.
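A hedged sketch of one way to express that logic, under the assumption that each event carries host, service and status fields and that the pairing is hardcoded (index, service and host names here are all placeholders): take the latest status per host, group the hosts into their pairs, and alert only when no host in a pair reports the service as running.

index=os_services service="my_service"
| stats latest(status) as status by host
| eval pair=case(host="server1" OR host="server2", "pair1", host="server3" OR host="server4", "pair2")
| stats count(eval(status="running")) as running_count values(host) as hosts by pair
| where running_count=0

A lookup mapping host to pair scales better than the case() once there are more than a few pairs.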
The default port is 8088 with the address below. Due to an invalid certificate, I have a problem posting data there with my iPaaS application. Can someone help advise, as changing the port to 443 (which has a valid certificate) is disabled?

https://prd-p-4le0q.splunkcloud.com:8088/
We are currently using Splunk Enterprise (on-prem) 9.0.0, and when we try to install IT Essentials Work (https://splunkbase.splunk.com/app/5403/) it raises the following error: "Invalid app contents: archive contains more than one immediate subdirectory: and DA-ITSI-DATABASE". Has anyone encountered this error and managed to install IT Essentials Work?
We have one standard-mode federated index on a remote Splunk cluster. A local data model (model1) has a base search of index="federated:blah" | head 10.

Using the search dialog for 'index="federated:blah" | head 10', we get 10 results as expected. Running '| from datamodel model1' we get nothing.

Inspecting the search.log, we see the remote Splunk instance being queried when using the search dialog. When calling the data model, there doesn't seem to be any communication out to the remote instance.

Does standard-mode federated search not support local data models querying a federated index? Am I doing something wrong?
Hello all, I am new to Splunk and need a little help. I have the following configuration:
- Splunk indexer server
- Splunk deployment server

I have installed the Universal Forwarder on my clients and specified the deployment server during installation. After installation, the clients report correctly to the deployment server. I have created two server classes, one for Windows and one for Linux.

Server class Linux:
- App "fwd_to_receiver" = the Splunk indexer server is specified here.
- App "Linmess" = inputs.conf (this defines what should be monitored).

My question: I would like to monitor the /var/log/lastlog file, but this does not work with inputs.conf. I have now installed the Splunk Add-on for Unix and Linux. How can I set this up so that my deployment server distributes a central configuration in which the lastlog file is monitored correctly and the source type also fits? Do I need to install the add-on on the indexer and on the deployment server?

Many thanks in advance!
Best regards, Codyy_Fast
I'm benchmarking the performance of search queries. I noticed that although the entire search pipeline takes a long time to complete, initial results are returned quickly. How can I measure the query run time until the first result is returned? Currently I'm measuring the entire query run time with history.total_run_time, but that gives me the total time, and I want the time to the first result.
Hi All, how can I find out whether a particular saved search is being used in any dashboards, alerts, or reports in Splunk?
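For the dashboard part, a hedged sketch using the REST endpoint that exposes dashboard XML: dashboards reference saved searches by name, so searching the XML source for the name finds them (the saved search name below is a placeholder; run with a role that can read the views).

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*My Saved Search Name*"
| table title label eai:acl.app eai:acl.owner

Alerts and reports are themselves saved searches, so /servicesNS/-/-/saved/searches lists those directly rather than "using" a saved search.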
Hello everyone,

My environment: part of my infrastructure is deployed as Docker containers that I build and configure myself. Basically, I'm pulling an ubuntu:latest image on which I install a Splunk Universal Forwarder that transfers logs to a central Splunk Enterprise. At start time I use supervisord, a process control system, to start the UF and other processes.

The following steps are done on every build/deployment:
- Pull the latest Ubuntu image
- Install and configure the Splunk forwarder (create user / download the .deb / install)
- Install other packages
- Start the Docker container with a simple bash script that does things at runtime
- Start the service via supervisord as root, which then starts the UF as the splunk user

Configurations

My Dockerfile:

FROM ubuntu:latest
ENV TZ=Europe/Paris
ARG DEBIAN_FRONTEND=noninteractive
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN adduser --home /home/www-python --disabled-password --gecos "" www-python \
    && groupadd -r splunk \
    && useradd -r -m -g splunk splunk \
    && apt update \
    && apt install -y python3 python3-pip wget curl supervisor
RUN wget -O splunkforwarder-8.2.8-da25d08d5d3e-linux-2.6-amd64.deb "https://download.splunk.com/products/universalforwarder/releases/8.2.8/linux/splunkforwarder-8.2.8-da25d08d5d3e-linux-2.6-amd64.deb" \
    && dpkg -i splunkforwarder-*.deb \
    && rm -f splunkforwarder-*
COPY [ "src/splunkforwarder/inputs.conf", "src/splunkforwarder/outputs.conf", "src/splunkforwarder/server.conf", "/opt/splunkforwarder/etc/system/local/" ]
USER root
WORKDIR /root/
COPY [ "src/supervisor/service.conf", "/root/"]
COPY ./src/start.sh /root/
RUN chmod +x /root/start.sh

My start.sh script:

#!/bin/bash
#Doing runtime stuff
supervisord -c /root/service.conf

My supervisor configuration:

[supervisord]
nodaemon=true
user=root

[program:splunkforwarder]
command=/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt
user=splunk

[program:python-script]
command=some command to start a service

Problem encountered

I've traced the problem back to version 9.0.0, so all the steps and configuration in this post work on every version below 9.0.0; version 9.0.1 shows the same behavior. When the container starts, supervisord indicates that everything started smoothly:

2022-09-12 09:15:17,813 INFO Set uid to user 0 succeeded
2022-09-12 09:15:17,844 INFO supervisord started with pid 10
2022-09-12 09:15:18,856 INFO spawned: 'python-script' with pid 11
2022-09-12 09:15:18,858 INFO spawned: 'splunkforwarder' with pid 12
2022-09-12 09:15:19,863 INFO success: python-web entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-09-12 09:15:19,863 INFO success: splunkforwarder entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)

But here is the catch: the splunk daemon seems to be stuck. As you can see, ps aux shows the process taking all the CPU:

root@demo:~# ps aux
USER   PID %CPU %MEM   VSZ   RSS TTY STAT START   TIME COMMAND
root     1  0.0  0.0  2904  1012 ?   Ss   09:15   0:00 /bin/sh -c /root/start.sh
root     7  0.0  0.0  4508  3516 ?   S    09:15   0:00 /bin/bash /root/start.sh
root    10  0.0  0.1 33220 27204 ?   S    09:15   0:02 /usr/bin/python3 /usr/bin/supervisord -c /root/service.conf
splunk  12 56.0  0.0  4516  2900 ?   R    09:15 145:55 /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt

When I look for more logs, nothing has been created, as if the service never started:

root@demo:~# ls -alh /opt/splunkforwarder/var/log/splunk/
total 4.0K
drwx------ 2 splunk splunk 31 Sep 12 09:15 .
drwx--x--- 5 splunk splunk 57 Sep 12 09:15 ..
-rw------- 1 splunk splunk 70 Sep 12 09:15 first_install.log

If I try to start the service manually, it asks me to accept the license agreement and indicates that a previous installation has been found and the instance needs to migrate. The only error lines I get are these:

Creating unit file...
Error calling execve(): No such file or directory
Error launching command: No such file or directory

As I said earlier, the only thing that changed in this case is the Splunk version: I can start the service with version 8.2.x but not with the latest one. Does anyone have any input on this? I didn't get any insight from the threads on this site (or elsewhere).