How can I automate the downloading of universal forwarder?

Everything I am reading says that to download via wget, cURL, etc., you have to specify the full path, which contains the specific version number in the name. How can I get the latest/current version through automation instead of hard-coding the path?

Re: How can I automate the downloading of universal forwarder?

Hello @petersonjared,

You can see all the Universal Forwarder download links for the latest version when you take a look at the HTML source code of https://www.splunk.com/en_us/download/universal-forwarder.html.
That even works without logging in.
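
For example, you can list every download link embedded in that page (they live in data-link attributes, which is what the script below relies on; this of course assumes Splunk keeps the current page markup):

curl -s "https://www.splunk.com/en_us/download/universal-forwarder.html" \
  | grep -o 'data-link="[^"]*"'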

I use a Bash script for downloading the latest version of Splunk.

I cut the script down to its essence (removing command-line argument parsing and if-then-else logic) for readability.

Pick one URL line (full Splunk Enterprise or Universal Forwarder) and one OS_REGEX line (Linux RPM, Linux TGZ, or Windows 64-bit MSI):

URL="https://www.splunk.com/en_us/download/splunk-enterprise.html"
URL="https://www.splunk.com/en_us/download/universal-forwarder.html"
OS_REGEX="linux-2\.6-x86_64\.rpm"
OS_REGEX="Linux-x86_64\.tgz"
OS_REGEX="x64-release.msi"
RESPONSE=`curl -s --connect-timeout 10 --max-time 10 $URL`
LINK=`echo $RESPONSE | egrep -o "data-link=\"https://[^\"]+-${OS_REGEX}\"" | cut -c12- | rev | cut -c2- | rev`
wget --no-check-certificate -P /tmp $LINK
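
As an untested variation (assuming GNU grep with PCRE support via -P), the cut/rev round trip can be collapsed into a single match, using \K to drop the data-link=" prefix and a lookahead to drop the closing quote:

LINK=$(curl -s --connect-timeout 10 --max-time 10 "$URL" \
  | grep -oP "data-link=\"\Khttps://[^\"]+-${OS_REGEX}(?=\")")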

For example, when I run

URL="https://www.splunk.com/en_us/download/universal-forwarder.html"
REGEX="linux-2\.6-x86_64\.rpm"
RESPONSE=`curl -s --connect-timeout 5 --max-time 5 $URL`
LINK=`echo $RESPONSE | egrep -o "data-link=\"https://[^\"]+-${REGEX}\"" | cut -c12- | rev | cut -c2- | rev`

then I get the link:

https://download.splunk.com/products/universalforwarder/releases/7.2.4/linux/splunkforwarder-7.2.4-8...
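
Putting the pieces together, a minimal end-to-end sketch with basic error handling might look like this (same page-scraping assumption as above; the head -n 1 guards against the page listing more than one matching package):

#!/bin/bash
# Sketch: download the latest Universal Forwarder RPM to /tmp.
# Assumes the download page still embeds links in data-link attributes.
URL="https://www.splunk.com/en_us/download/universal-forwarder.html"
OS_REGEX="linux-2\.6-x86_64\.rpm"

RESPONSE=$(curl -s --connect-timeout 10 --max-time 10 "$URL") || exit 1
LINK=$(echo "$RESPONSE" \
  | egrep -o "data-link=\"https://[^\"]+-${OS_REGEX}\"" \
  | head -n 1 \
  | cut -c12- | rev | cut -c2- | rev)

if [ -z "$LINK" ]; then
  echo "No download link matching ${OS_REGEX} found" >&2
  exit 1
fi

wget --no-check-certificate -P /tmp "$LINK"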