Can Splunk get lookup data from a remote peer server?
The problem is that we have many Splunk servers, so if the lookup is located on each server, then whenever we want to update it manually, we have to update it on every server.
What we want is to maintain the Splunk lookup table on one server and have all other servers get the data from that server.
I tried this query, but it is not working:
* | head 1 | eval fieldinput="abc" | lookup local=false lookup-table-a fieldinput AS fieldinput OUTPUTNEW fieldoutput AS fieldoutput
The error is
Error in 'lookup' command: The lookup table 'lookup-table-a' does not exist.
Well, lookup-table-a is located on server A. I run the query on server B, which has server A enabled as a search peer.
So, can Splunk get lookup data from a remote peer server?
Short answer: no, not natively. You can achieve this, kind of, using dynamic lookups.
Longer answer: We have this exact problem in our environment, and I've spent a fair amount of time dealing with it. As the post @lukejadamec linked to explains, when you issue a search on a search head, that search head puts all of its knowledge objects into a bundle that is sent out to its search peers. When the search peers process the search, only the knowledge objects in this bundle are used, so objects that exist locally on a search peer, such as lookups, cannot be used in the search.
While I do not know the full reasons why only this mode of operation is supported, one good reason is that Splunk distributed search operates in a map/reduce fashion: sooner or later, when a non-streaming ("reduce-type") command is encountered in the search, the search peer returns its results to the search head, which carries out the rest of the search centrally. Couple this with knowledge objects that exist on different Splunk instances, and the results can be pretty confusing unless you fully understand what is going on. In your example, for instance, only the very first part of the search ("*") would be carried out on the search peers, while the head 1 command causes the rest to run on the search head, because it's only possible to know which one event to present by combining the results from all search peers.
If you DO know how to deal with this, though, there are ways to work around the limitation. Specifically, what I did was to create dynamic lookups that work similarly to Splunk's regular static lookups - i.e. read CSV-formatted input on stdin, match it against a CSV-formatted file on the file system, populate the requested fields in the CSV from stdin, and finally send the result to stdout. What happens when you use this in a distributed search is that instead of having Splunk send a static CSV lookup to the search peer, it now sends the dynamic lookup script instead. This script is then free to read files on the search peer's local filesystem when it runs, thus making it possible for the lookup to return search-peer-local values even though it was invoked from a search head.
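The stdin/stdout protocol described above can be sketched roughly like this. This is a minimal sketch, not the script from the linked answer; the CSV path and the field names fieldinput/fieldoutput are illustrative assumptions borrowed from the question:

```python
#!/usr/bin/env python
# Minimal sketch of a dynamic (external) lookup script. Splunk streams the
# events' lookup fields as CSV on stdin and reads the filled-in CSV back
# from stdout. Path and field names below are illustrative assumptions.
import csv
import sys

LOCAL_CSV = "/apps/splunk/etc/system/lookups/mycsv.csv"  # assumed path

def build_map(lookup_rows, key_field, value_field):
    """Index the search peer's local CSV by the key field."""
    header = lookup_rows[0]
    k, v = header.index(key_field), header.index(value_field)
    return dict((row[k], row[v]) for row in lookup_rows[1:])

def fill(header, rows, mapping, key_field, out_field):
    """Fill the output field of each input row from the mapping."""
    k, o = header.index(key_field), header.index(out_field)
    filled = []
    for row in rows:
        row = list(row)
        row[o] = mapping.get(row[k], row[o])
        filled.append(row)
    return filled

def main(key_field, out_field):
    # The file is read locally on whichever search peer runs the script.
    with open(LOCAL_CSV) as f:
        mapping = build_map(list(csv.reader(f)), key_field, out_field)
    reader = csv.reader(sys.stdin)
    header = next(reader)
    writer = csv.writer(sys.stdout)
    writer.writerow(header)
    writer.writerows(fill(header, list(reader), mapping, key_field, out_field))

# Splunk passes the field names configured in transforms.conf as arguments.
if __name__ == "__main__" and len(sys.argv) == 3:
    main(sys.argv[1], sys.argv[2])
```

Note that on Splunk 5.x the bundled interpreter is Python 2, so minor adjustments may be needed depending on your version.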
I pasted a version of this dynamic lookup script here: http://answers.splunk.com/answers/85324/regular-expression-in-my-lookup-table
It's a modified version that can perform regex matches.
But again, NOTE that when you do this, it becomes VERY important to keep track of the order in which you do things. I've spent many frustrating moments debugging my seemingly broken lookups, only to find that the error was that I had ordered my search incorrectly, so the lookup ran on the search head instead of on the search peers.
As another example, this lookup will run on the search peer:
* | lookup mydynamiclookup a OUTPUT b | stats count,values(b) by a
While this will run on the search head:
* | stats count,values(b) by a | lookup mydynamiclookup a OUTPUT b
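For reference, a dynamic lookup like this is wired up in transforms.conf. A rough sketch, assuming the script is named mydynamiclookup.py and sits in an app's bin directory (the names and file locations here are assumptions; check the documentation for your Splunk version):

```ini
# $SPLUNK_HOME/etc/apps/<your_app>/local/transforms.conf (assumed location)
[mydynamiclookup]
external_cmd = mydynamiclookup.py a b
fields_list = a, b
```

Splunk invokes the script with the listed field names as arguments and streams the events' field values to it as CSV on stdin, as described above.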
I think the solution is to use an SSH module in the Python code, so that the script runs only on the job server (by connecting to the job server over SSH first). I tried that last night; there are some Python modules I need to install first, and I don't know why, but it is proving difficult 😐 . Maybe it's because the Python version (I run Splunk 5.0.2) is older than what the module installer requires.
Well, because I'm not in a hurry and I have some other work, I will try again later. I will report back if it works.
Thanks ayn and lukejadamec
That's trickier, because when running a query from the search head your job server will not be involved at all. I can't think of any way you could throw the job server into the mix if it's not part of the search at all.
Here is my lookup Python script:
import csv
import sys

try:
    csvfile = open("/apps/splunk/etc/system/lookups/mycsv.csv", "rb")
except Exception:
    sys.stderr.write("No file found.")
    sys.exit(0)

# Detect the CSV dialect from the first 10 KB, then read the whole file.
dialect = csv.Sniffer().sniff(csvfile.read(10 * 1024))
csvfile.seek(0)
reader = csv.reader(csvfile, dialect)
header = reader.next()
data = [row for row in reader]

# Copy the header and all rows straight through to stdout.
writer = csv.writer(sys.stdout)
writer.writerow(header)
for row in data:
    writer.writerow(row)
Because when I run the search on the search head (not the job server), the data comes from an indexer, let's say indexer A, and the lookup command after the search only looks for the lookup on indexer A.
When I try to move my lookup to indexer A instead of the job server, I face a new problem: if my search data comes from indexer B, then the lookup command will only look for the lookup on indexer B.
I'm not a Python coder, I have little programming experience, and I am not a Splunk master, so it takes some time for me to try your trick.
Here is my result and my condition :
My condition :
I have a few servers acting as search heads, and one of them is a job server. I have some indexers. Our Splunk users access Splunk through the search heads (except the job server), with a load balancer in front of them. What I want to do is locate the lookup on the job server.
My result with your trick :
I cannot get the dynamic lookup to work from my job server.
This configuration is supported, so you must not have it configured correctly. See this post:
You should check permissions on the lookup, and make sure the lookup config is in the right location.