Hi,
I'm doing a project and I've installed a Splunk Enterprise trial on a server and the Universal Forwarder on three other servers (running Ubuntu) that send me logs. On the forwarders there is a script that sends the logs of every process running on the server.
I would like to create a dynamic list where process logs are added and tagged as "Well-Known Processes".
After that, when new process logs arrive at the indexer, they are compared with the dynamic list, and if a process is not recognized (it doesn't exist in the list) an alert is triggered.
I would like to do this to check for suspicious processes.
Thanks
Hi @raffaelecervino,
you have two choices:
I prefer the first solution because it is quicker, even if it requires a little more work.
In a few words, you have to create a scheduled search that populates the lookup:
index=your_index
| dedup process
| sort process
| table process
| outputlookup processes.csv append=true
and then schedule an alert search that checks new events against the lookup:
index=your_index NOT [ | inputlookup processes.csv | dedup process | fields process ]
| dedup process
| sort process
| table process
In this way you have a very quick search that you can also run at a high frequency and, if you want, you can also manually modify the lookup by adding or deleting processes.
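If it helps, both searches can also be saved and scheduled in savedsearches.conf (or from the UI under Settings > Searches, reports, and alerts). This is only a rough sketch: the stanza names, cron schedules and time ranges below are assumptions you should adapt to your environment:
# hypothetical stanza name; populates the lookup once a day
[Populate Well-Known Processes Lookup]
search = index=your_index | dedup process | sort process | table process | outputlookup processes.csv append=true
cron_schedule = 0 6 * * *
dispatch.earliest_time = -24h
dispatch.latest_time = now
enableSched = 1
# hypothetical stanza name; fires when at least one unknown process is found
[Alert - Unknown Process]
search = index=your_index NOT [ | inputlookup processes.csv | dedup process | fields process ] | dedup process | table process
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
enableSched = 1
counttype = number of events
relation = greater than
quantity = 0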
Ciao.
Giuseppe
Thanks, it works perfectly!
Is there a way to avoid appending the same processes to the .csv file?
I run the search every day (for a while) to append new processes (to train the model on the main processes of each machine), and I would like to prevent duplicate processes in the .csv file.
Thanks a lot!
Hi @raffaelecervino,
you could create another scheduled search that removes duplicates every day, something like this:
| inputlookup processes.csv
| dedup process
| sort process
| table process
| outputlookup processes.csv
or modify the scheduled search that populates the lookup:
index=your_index
| fields process
| append [ | inputlookup processes.csv | fields process ]
| dedup process
| sort process
| table process
| outputlookup processes.csv
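A small variant, assuming the lookup is still processes.csv and the field is still process: inputlookup itself accepts an append=true option, so the existing lookup rows can be merged into the pipeline without the append command and its subsearch limits:
index=your_index
| fields process
| inputlookup append=true processes.csv
| dedup process
| sort process
| table process
| outputlookup processes.csv
Either way, the final dedup before outputlookup (used without append=true on outputlookup) rewrites the file with unique process values only.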
Ciao.
Giuseppe
Hi @raffaelecervino ,
good for you, see you next time!
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated 😉