Yes, this is relatively easy using Python egg files, though you must be careful: you will have to build all the eggs and dependencies per architecture type. Looking at the pyAOS module, it depends on NumPy/SciPy, which is a pain to build all the eggs for. You will need to build them on the same OS as your Splunk instance. If you are using a Windows server, you will need to download Intel's Math Kernel Library (Intel MKL) to build some of the eggs.
You will need to do the following:
1. Make sure you have setuptools installed with a Python 2.7.x install.
2. Download pyAOS source.
3. Untar it, then run `python setup.py bdist_egg`. Note: you will need to do this on the same OS architecture type as your production Splunk.
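The steps above can be sketched on a throwaway package (not pyAOS itself; the `/tmp/eggdemo` path and `demo` package name are placeholders, and on the Splunk box you would run the same command with its Python 2.7):

```shell
# Placeholder demo of step 3: build an egg from a minimal setup.py.
mkdir -p /tmp/eggdemo && cd /tmp/eggdemo
printf 'from setuptools import setup\nsetup(name="demo", version="0.1")\n' > setup.py
python3 setup.py bdist_egg      # the built egg lands in dist/
ls dist/
```

The resulting filename encodes the Python version, and for binary packages the platform tag as well, which is why eggs must be built per architecture.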
Now you will probably get an error. Review the error to find the missing module, then download that module's source and repeat step three. Keep going until you have built all the missing eggs. Once they are built, you can use the following code to load the compiled eggs at your script's run time. It finds the running path of your script, then searches an `eggs` directory in the same path.
```python
import os
import sys
from platform import system

platform = system().lower()
# Loading eggs into the Python execution path
if platform == 'darwin':
    platform = 'macosx'

running_dir = os.path.dirname(os.path.realpath(__file__))
egg_dir = os.path.join(running_dir, 'eggs')

for filename in os.listdir(egg_dir):
    file_segments = filename.split('-')
    if filename.endswith('.egg'):
        filename = os.path.join(egg_dir, filename)
        # Pure-Python eggs (<= 3 dash-separated segments) load everywhere;
        # binary eggs load only when the platform tag matches.
        if len(file_segments) <= 3:
            sys.path.append(filename)
        else:
            if platform in filename:
                sys.path.append(filename)

import foo, bar, spam
```

Alternative:

```python
import os
import sys
import subprocess
import splunk.Intersplunk

# Remove problematic environment variables if they exist.
for envvar in ("PYTHONPATH", "LD_LIBRARY_PATH"):
    if envvar in os.environ:
        del os.environ[envvar]

# OS interpreter
python_executable = "/usr/bin/python"

try:
    realscript = os.path.join(os.path.dirname(__file__), "realscript.py")
    p = subprocess.Popen([python_executable, realscript],
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    print out
    splunk.Intersplunk.outputResults(out)
except:
    # log something
    pass
```
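The selection rule in the loop above can be factored into a small, testable helper. This is a sketch, not code from the original answer; the function name and the sample egg filenames are illustrative:

```python
from platform import system

def eggs_to_load(filenames, platform=None):
    """Pick which egg filenames should be appended to sys.path.

    Pure-Python eggs (name-version-pyX.Y.egg, at most 3 dash-separated
    segments) always load; platform-specific eggs load only when the
    current platform tag appears in the filename.
    """
    if platform is None:
        platform = system().lower()
        if platform == 'darwin':
            platform = 'macosx'  # egg filenames use the macosx tag, not darwin
    selected = []
    for name in filenames:
        if not name.endswith('.egg'):
            continue  # skip non-egg files in the directory
        if len(name.split('-')) <= 3 or platform in name:
            selected.append(name)
    return selected

# A pure-Python egg loads everywhere; a binary egg only on a matching platform.
print(eggs_to_load(
    ['six-1.16.0-py2.7.egg', 'numpy-1.7.1-py2.7-linux-x86_64.egg', 'README'],
    platform='linux'))
# → ['six-1.16.0-py2.7.egg', 'numpy-1.7.1-py2.7-linux-x86_64.egg']
```

On a Mac, the same call with `platform='macosx'` would return only the pure-Python six egg, since the numpy egg's tag does not match.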
This is the same approach I use in my app, Compuware GPN, which loads SUDS. I have also done this with the NLTK module, but it's a pain and took about six hours of tinkering.
I had a problem with the egg. It looks like it would have worked if I could have sorted out my issues. I was going to try calling a native Python script from within the Splunk script using os.popen.
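A minimal sketch of that fallback, using `echo` as a stand-in for the real invocation (the actual command would be something like `/usr/bin/python realscript.py`; both names are placeholders):

```python
import os

# Run an external command via os.popen and capture its stdout, the same way a
# Splunk wrapper script would shell out to the system Python.
pipe = os.popen("echo hello from the child process")
out = pipe.read().strip()
pipe.close()
print(out)
# → hello from the child process
```

Note this pays full interpreter-startup cost on every invocation, which is one likely source of the slowdown described below; the subprocess module offers the same pattern with better error handling.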
Yeah, that's kind of what I set up. It works fine for a day or so of data, but with a couple of thousand records the processing time increases: 49 seconds for 432 events, which seems very inefficient. I tried `python setup.py bdist_egg`, but it appears that isn't an option in the setup file.
[root@watson aoslib-master]# python setup.py bdist_egg
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_egg'
You could try `python -c 'import setuptools; execfile("setup.py")' build`.
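That trick works because importing setuptools registers extra commands, including bdist_egg, that a plain-distutils setup.py does not know about (execfile is Python 2 only). A sketch showing the command becomes resolvable once setuptools is imported; the `demo` metadata is a placeholder:

```python
import setuptools
from setuptools.dist import Distribution

# With setuptools imported, its Distribution can resolve the bdist_egg
# command class that plain distutils rejects with "invalid command".
dist = Distribution({'name': 'demo', 'version': '0.1'})
cmd = dist.get_command_class('bdist_egg')
print(cmd.__name__)
# → bdist_egg
```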
I also have a bunch of eggs built for Mac and GNU/Linux which may work for you: ipython, nltk, numpy, pandas, scipy, six, statsmodels, and sympy. Would you be interested?