@francesco1g try the below one:
index=audit db_name=* | fields db_name | dedup db_name | table db_name
| outputlookup audit.csv
Also, if this reply helps you, an upvote would be appreciated.
Hi @francesco1g,
the only approach when you have to manage millions of events and still get fast searches is acceleration:
https://docs.splunk.com/Documentation/Splunk/8.2.2/Report/Acceleratereports
https://docs.splunk.com/Documentation/Splunk/8.2.1/Knowledge/Aboutsummaryindexing
https://docs.splunk.com/Documentation/Splunk/8.2.2/Knowledge/Acceleratedatamodels
In your situation, the easiest way is to schedule your search to run e.g. every day, every hour, or in another time period with fewer events; write the results to a summary index, and then run your search on the summary index.
Alternatively, create an accelerated data model containing only the few fields you need.
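As a rough sketch of the summary-index approach: a scheduled search pre-aggregates the raw events and writes them to a summary index with the collect command, and the ad-hoc search then reads the much smaller summary. The index name summary_audit below is a placeholder you would create yourself; adjust the time range to match your schedule.

Scheduled search (e.g. run daily over the previous day):

index=audit db_name=* earliest=-1d@d latest=@d
| stats count by db_name
| collect index=summary_audit

Ad-hoc search against the summary:

index=summary_audit
| stats sum(count) AS count by db_name

Because the summary index holds one row per db_name per day rather than every raw event, the second search stays fast even as the raw data grows.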
Ciao.
Giuseppe