I've written a lookup app called TA-browscap_lookup_express. It needs to write data out to a CSV so it can be re-used on future searches. When I had everything on a single machine, it worked fine. Now that I've gone to a distributed setup, I've noticed that the app runs on the indexer, not the search head, and that it runs in $SPLUNK_HOME/var/run/searchpeers/mypeer-somecode/apps/TA-browscap_express/bin. "somecode" gets regenerated on each run, so the whole point of speeding up browscap lookups with a cache is lost: the cache is written to a different location every time. I'm obviously doing something wrong, so my question is: what's the right way to write this file so that it is updated and persists between searches?
Thanks,
Rob
In the end I decided to make the file location configurable, and configured it to be a CIFS share. This solved the problem with the least amount of work. Writing data back to Splunk through the API would have been cooler, but much more work.
Hi Martin, thanks for your reply. I thought about this, either extracting the TEMP env var or making it configurable to point at some CIFS/NFS share, which is still on the table. I'm hoping there is a Splunk-centric way of doing this (maybe I can use the API and store the matches in an index). Worst case, I'll drop an ini file in the app and, for distributed setups, let people use a share.
A generic idea would be to create a directory such as /tmp/splunk/TA-browscap_express and use that.
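That suggestion could be sketched as follows: pin the cache to a fixed per-app directory outside the search-peer bundle, so it survives across searches on the same indexer. The directory and file names are illustrative only.

```python
import os

# Fixed per-app cache directory outside the ephemeral searchpeers bundle.
CACHE_DIR = "/tmp/splunk/TA-browscap_express"

def cache_file(name="browscap_cache.csv"):
    """Ensure the shared cache directory exists and return the file path."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    return os.path.join(CACHE_DIR, name)
```

The trade-off is that /tmp may be cleared on reboot, so the cache is a best-effort speedup rather than durable storage.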