I have a HF running on a Linux machine. I have root access to that machine using sudo bash, since sudo - splunk or su - splunk is not allowing me to switch to the splunk user. But when I copy files into the folders that the monitor input is pointing at to pick up the files, the events are not forwarded to the Splunk indexer, since I cannot see those events within Splunk. However, when I run chown -R splunk:splunk /opt/splunk and then restart Splunk, it works as expected, meaning I can see those events within Splunk. So every time I copy files into the HF folders, I need to run the chown command and restart Splunk to make them available within Splunk. Is there any way to resolve this so that I don't need to run chown and restart Splunk to forward events? Thank you so much.
Short answer - no. You can't get around the fact that the files have the wrong ownership/permissions; that's what the whole permission system is for.
Long answer - in general, you shouldn't copy files into /opt/splunk. The proper approach would be to write the log files normally to - for example - /var/log/somewhere or /opt/your_service/var/log and add a monitor input so Splunk reads directly from there. Then you should make sure the splunk user has access to those files (possibly by means of proper umasks, group membership, and ACLs).
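To make that concrete, here is a minimal sketch of the group-membership approach. The production commands are assumptions (the path /var/log/incoming and the splunk group are illustrative and need root), so the runnable demo below uses a temp directory to show the setgid mechanism:

```shell
# Production idea (requires root; path and group are assumptions, shown as comments):
#   mkdir -p /var/log/incoming
#   chgrp splunk /var/log/incoming
#   chmod 2775 /var/log/incoming    # setgid: new files inherit the splunk group
#
# Portable demo of the setgid bit using a temp directory:
STAGE=$(mktemp -d)
chmod 2775 "$STAGE"          # rwx for owner and group, setgid on the directory
touch "$STAGE/sample.log"    # files created here inherit the directory's group
ls -ld "$STAGE"              # permissions show as drwxrwsr-x (note the "s")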
Thank you for your reply.
Let me explain a little more about how I copy the source files. I create an app and use the GUI feature "Install app from file" to pull the source files into the HF's /opt/splunk/etc/apps/TA-my_sourcefile folder, then copy those source files from that folder to /opt/splunk/var/log/sourcefiles and add a monitor input so Splunk reads directly from there. The only problem now is how to make sure I have access as the splunk user, since sudo bash gives me root access (i.e., whoami shows only root) and su/sudo - splunk is not working for me on that Linux machine. Is there any other way I can get splunk user access? Thank you again.
That sounds way too overcomplicated. Why do you do it like that? Apps are not meant as a way of uploading files to ingest 😲
If your event-generating solution is not on the same host as your HF, why aren't you using a UF or sending events via other means (syslog, HEC)?
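For the HEC route, sending an event is a single HTTP POST. The host and token below are placeholders (substitute your HF's HEC endpoint and a real token), so run as-is the request fails and falls through to the echo:

```shell
# Placeholder host and token - substitute your own. The || branch keeps this
# sketch from erroring out when the placeholder host is unreachable.
curl -sk "https://your-hf.example.com:8088/services/collector/event" \
     -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
     -d '{"event": "test event", "sourcetype": "my_sourcetype"}' \
  || echo "HEC endpoint not reachable (expected with the placeholder host)"
```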
It's simply confusing, since you apparently have CLI access (with permission to run sudo bash), so you have quite "wide" access to the machine. Furthermore, if you can install apps, you also have quite high-privileged access to Splunk itself. So it's very unusual to do it this way.
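As an aside on the "su - splunk doesn't work" part: the usual cause is that the splunk account's login shell is set to something like /sbin/nologin. A quick check, with the root-requiring workarounds shown as comments (the shell value is an assumption about your box):

```shell
# Show the splunk account's login shell; a value like /sbin/nologin or
# /bin/false would explain why "su - splunk" fails:
getent passwd splunk | cut -d: -f7
# Workarounds (require root, so shown as comments):
#   sudo -u splunk whoami        # run a single command as the splunk user
#   su -s /bin/bash - splunk     # open a shell as splunk, forcing bash
```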
There are way more efficient ways to onboard data. Why don't you point a monitor input at a "static" file or directory and update it periodically with scp/sftp/whatever?
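That "static" monitor could look like this in inputs.conf (the path, index, and sourcetype are placeholders for illustration):

```ini
# inputs.conf, e.g. in /opt/splunk/etc/apps/<your_app>/local/
[monitor:///var/log/incoming]
disabled = false
index = my_index
sourcetype = my_sourcetype
```

Files dropped into that directory via scp/sftp are then picked up continuously, with no restart required.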
I can see those events using index=_internal (X OR Y) host=zzzzz, but when I use index=X I can't; I'm getting the error message "Insufficient permission to read file='/opt/splunk/var/folder". It looks like I can see the internal events, but Splunk itself cannot read the files. Thank you so much, any help will be highly appreciated.
In regard to complexity: we receive files from 5 different sources by email, transform them using Python scripts based on our requirements, pull them onto our Linux server using the app, and then copy them from the app folder to /opt/splunk/var/folder. We are in the process of automating this system; it's an interim solution.
If you have to pull the data using a script, why not make it into a scripted/modular input and run it from within the Splunk service?
That seems more consistent with the overall Splunk architecture.
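A scripted input is just an executable whose stdout Splunk ingests on a schedule, and it runs as the splunk user - which sidesteps the permission problem entirely. A minimal sketch; the fetch step is commented out because the remote host and paths are assumptions:

```shell
#!/bin/sh
# Minimal scripted-input sketch: everything printed to stdout becomes events.
# In your case the real work would happen first, e.g. (placeholder host/paths):
#   scp -q transform-host:/data/out/*.log /opt/splunk/var/run/mydata/
#   cat /opt/splunk/var/run/mydata/*.log
echo "$(date -u +%Y-%m-%dT%H:%M:%SZ) scripted_input=demo status=ok"
```

You would enable it with a stanza like `[script://./bin/get_data.sh]` and an `interval = 300` in the app's inputs.conf (script name and interval are illustrative).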
Thank you so much. Yes, agreed, but we perform the transformation process on a different server/computer at this stage, and then pull the data using the app, as I mentioned.