Hello everyone, I keep getting this error when I connect Splunk to Tableau. Could you shed some light on what the issue is and how to fix it? I have not been able to find anything online.

[Splunk][SplunkODBC] (60) Unexpected response from server. Verify the server URL. Error parsing JSON: Text only contains white space(s)

Thank you.
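One thing worth checking (an assumption on my part, since this error message is generic): the Splunk ODBC driver talks to splunkd's management API, which by default listens on port 8089, not the web UI on port 8000. A whitespace-only body where JSON is expected often means the URL points at the wrong port. A small sketch that only inspects the URL string (the host below is illustrative):

```shell
# Hedged sketch, not a confirmed fix: checks only which port the URL names.
check_odbc_url() {
  case "$1" in
    *:8089*) echo "ok: management port 8089" ;;
    *:8000*) echo "warning: 8000 is the web UI port; try 8089" ;;
    *)       echo "check that the URL includes the management port (8089)" ;;
  esac
}

check_odbc_url "https://splunk.example.com:8000"
```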
Hey mate, just use Splunk DELIMS instead of REGEX when dealing with ASCII-only, delimiter-based field extractions, where field values or field/value pairs are separated by delimiters such as commas, colons, spaces, tabs, line breaks, and so on.
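As a sketch, assuming a comma-separated event and hypothetical field and stanza names, the extraction lives in transforms.conf and is referenced from props.conf:

```ini
# transforms.conf -- stanza and field names here are hypothetical
[extract_csv_fields]
DELIMS = ","
FIELDS = "time", "user", "action"

# props.conf -- wire the transform to your sourcetype (name assumed)
[my_csv_sourcetype]
REPORT-csv = extract_csv_fields
```

With this in place, each comma-separated value in an event of that sourcetype is assigned to the next name in FIELDS, with no regex needed.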
Visit the Splunk download page to download the Splunk .deb package: http://www.splunk.com/download?r=header Upload the file to your Ubuntu server and place it in a temporary directory. Run the dpkg command to install the Splunk server. The file name of the .deb file may change as new versions are made available, so make sure it matches the version you downloaded.

# dpkg -i splunk-7.1.1-8f0ead9ec3db-linux-2.6-amd64.deb
Selecting previously unselected package splunk.
(Reading database ... 51260 files and directories currently installed.)
Preparing to unpack splunk-7.1.1-8f0ead9ec3db-linux-2.6-amd64.deb ...
Unpacking splunk (7.1.1) ...
Setting up splunk (7.1.1) ...
Complete

Now let's create the init.d script so that we can easily start and stop Splunk. Change to the Splunk directory and run the splunk executable with the arguments below.

# cd /opt/splunk/bin/
# ./splunk enable boot-start

Press SPACE to view the full license agreement and then Y to accept it. It will then ask for a password to set up the admin account. Once it is done, you will see a confirmation like the one below.

Moving '/opt/splunk/share/splunk/search_mrsparkle/modules.new' to '/opt/splunk/share/splunk/search_mrsparkle/modules'.
Init script installed at /etc/init.d/splunk.
Init script is configured to run at boot.

Start Splunk with the service command.

# /etc/init.d/splunk start

You will now be able to access Splunk's web GUI, which runs on port 8000: http://YOUR-HOST-IP:8000 Open the URL in a browser and log in with the details below:

User Name: admin
Password: YOURPASSWORD

Once you log in, you will see the Splunk home screen.

Install the Splunk Forwarder

The Splunk Universal Forwarder is a small, lightweight daemon which forwards data to your main Splunk server from a variety of sources. Download the Splunk Universal Forwarder .deb file from the Splunk website: https://www.splunk.com/download/universalforwarder Once you accept the agreement, the file will download to your local machine. Upload it to your Ubuntu server and place it in a temporary directory. Run the dpkg command to install the forwarder. The file name of the .deb file may change as new versions are made available, so make sure it matches the version you downloaded.

# dpkg -i splunkforwarder-7.1.1-8f0ead9ec3db-linux-2.6-amd64.deb
Selecting previously unselected package splunkforwarder.
(Reading database ... 65803 files and directories currently installed.)
Preparing to unpack splunkforwarder-7.1.1-8f0ead9ec3db-linux-2.6-amd64.deb ...
Unpacking splunkforwarder (7.1.1) ...
Setting up splunkforwarder (7.1.1) ...
complete

Let's create the init.d script to start the log forwarder.

# cd /opt/splunkforwarder/bin/
# ./splunk enable boot-start

Press SPACE to view the full license agreement and then Y to accept it. You can now start the forwarder daemon using the init.d script.

# /etc/init.d/splunk start

We have now set up Splunk and the Splunk forwarder. In the next post we will see how to get logs into Splunk.

Enable receiving input on the Index Server

CLI:

# /opt/splunk/bin/splunk enable listen 9997

where 9997 (the default) is the receiving port for Splunk Forwarder connections.

GUI: Configure the Splunk Index Server to receive data in the manager: Settings -> Forwarding and receiving -> Configure receiving -> New, and add port 9997.

Configure the Forwarder connection to the Index Server

CLI:

# /opt/splunkforwarder/bin/splunk add forward-server hostname_or_IP:9997 -auth admin:PASSWORD

where hostname_or_IP is the fully qualified address or IP of the index server.

GUI: Settings -> Forwarding and receiving -> Configure forwarding -> New, and add hostname_or_IP:9997.

Add data

CLI:

# /opt/splunkforwarder/bin/splunk add monitor /path/to/app/logs/ -index main -sourcetype %app%

where /path/to/app/logs/ is the path to the application logs on the host that you want to bring into Splunk, and %app% is the name you want to associate with that type of data. This will create a file, inputs.conf, in /opt/splunkforwarder/etc/apps/search/local/. Here is some documentation on inputs.conf: http://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf

Note: System logs in /var/log/ (and application logs in /var/log/*/) are covered in the configuration part of Step 7.

Once you have added the monitor successfully, you can log in to your dashboard and start searching the logs under the index you specified in the last command.
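For reference, the inputs.conf generated by the add monitor command above might look roughly like this; treat the exact layout as an assumption and keep the %app% placeholder from the command:

```ini
# /opt/splunkforwarder/etc/apps/search/local/inputs.conf
[monitor:///path/to/app/logs/]
index = main
sourcetype = %app%
disabled = false
```

If you later edit this file by hand instead of using the CLI, restart the forwarder for the change to take effect.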
I am new to Hadoop, so this is what I understood: if your data upload is not an actual service of the cluster (which would run on an edge node of the cluster), then you can configure your own computer to work as an edge node. An edge node doesn't need to be known by the cluster (except for security purposes), as it neither stores data nor computes jobs. This is basically what it means to be an edge node: it is connected to the Hadoop cluster but does not participate in it.

In case it can help someone, here is what I did to connect to a cluster that I don't administer:

get an account on the cluster, say myaccount
create an account on your computer with the same name: myaccount
configure your computer to access the cluster machines (ssh without passphrase, registered IP, ...)
get the Hadoop configuration files from an edge node of the cluster
get a Hadoop distribution (e.g. from here) and uncompress it where you want, say /home/myaccount/hadoop-x.x
add the following environment variables: JAVA_HOME, HADOOP_HOME (/home/myaccount/hadoop-x.x)
(if you'd like) add the Hadoop bin directory to your path: export PATH=$HADOOP_HOME/bin:$PATH
replace your Hadoop configuration files with those you got from the edge node (with Hadoop 2.5.2, this is the folder $HADOOP_HOME/etc/hadoop)

Also, I had to change the value of a couple of $JAVA_HOME settings defined in the conf files. To find them, use: grep -r "export.*JAVA_HOME"

Then run hadoop fs -ls / which should list the root directory of the cluster HDFS.
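The environment setup from the steps above can be sketched as follows; both paths are assumptions to substitute with your own JDK and Hadoop locations:

```shell
# Client-side environment sketch; the paths below are illustrative assumptions.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_HOME=/home/myaccount/hadoop-2.5.2
export PATH="$HADOOP_HOME/bin:$PATH"

# With the edge node's conf files copied into $HADOOP_HOME/etc/hadoop,
# verify connectivity with:
#   hadoop fs -ls /
```

Putting these exports in your shell profile (e.g. ~/.bashrc) makes them persistent across sessions.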