Splunk Search

Large number of hosts in search criteria

rgisrael
Explorer

So I have about 40k hosts logging syslog data to a Splunk cluster, and I've been given a requirement to regularly extract data (sudo, login, etc.) for a subset of these hosts (~500) into a report.

Is there a saner way than creating an OR statement with 500 hosts in it? Can't really use a regex, these are mostly uniquely named hosts.

1 Solution

gkanapathy
Splunk Employee

The best solution for what you're looking for is to create a lookup table and apply an automatic lookup. This will be a lot easier than tagging the fields. Take a look at LOOKUP in the props.conf documentation; I would recommend it highly over tagging and tags.conf.

This lets you create a CSV file that maps a host name (or any other field or set of fields) to another field value, and also lets you do a reverse search/lookup using that new field:

http://www.splunk.com/base/Documentation/4.1.6/AppManagement/Usealookup
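
A minimal sketch of the pieces involved, using made-up names (orgcodes.csv, orgcode_lookup, and OrganizationCode are illustrative; host is the default Splunk field):

lookups/orgcodes.csv:
host,OrganizationCode
web-prod-01,ORB
db-prod-02,ORB

transforms.conf:
# hypothetical lookup definition; the stanza name is what props.conf references
[orgcode_lookup]
filename = orgcodes.csv

props.conf:
# hypothetical automatic lookup on the syslog sourcetype
[syslog]
LOOKUP-orgcode = orgcode_lookup host OUTPUTNEW OrganizationCode

With that in place, the report can filter on the new field instead of a 500-host OR, for example:

sourcetype=syslog OrganizationCode=ORB (sudo OR login)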

rgisrael
Explorer

Has no effect... no error messages on restart to indicate the settings were read but improper, and no visible change to the log events to suggest it worked.
props.conf:
[syslog]
pulldown_type = true
maxDist = 3
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
TRANSFORMS = syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
lookup_orblookup = orblookup Hostname OUTPUTNEW OrganizationCode

transforms.conf:
[orblookup]
filename = ol1.csv

head apps/lookups/ol1.csv
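
A couple of checks that can help here (a sketch, assuming a default $SPLUNK_HOME and the stanza names above): btool shows whether the settings were actually read, and inputlookup shows whether the lookup file itself is usable. Also note that the props.conf spec names the automatic-lookup attribute LOOKUP-<class>, so LOOKUP-orblookup (rather than lookup_orblookup) may be worth trying.

$SPLUNK_HOME/bin/splunk cmd btool props list syslog --debug
$SPLUNK_HOME/bin/splunk cmd btool transforms list orblookup --debug

And from the search bar:

| inputlookup orblookup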


rgisrael
Explorer

In the splunkd log, I see:
02-28-2011 11:50:06.127 WARN LookupTableConfPathMapper - Refuse to copy file from unsafe location: /splunk/var/run/splunk/lookup_tmp/ol1.csv.0132362786125
02-28-2011 11:50:06.127 ERROR PropertiesMapConfig - Failed to save settings: /admin/search/lookups/ol1.csv (user: admin, app: search, root: /opt/splunk/etc): Data could not be written: /admin/search/lookups/ol1.csv: /opt/splunk/var/run/splunk/lookup_tmp/ol1.csv.0132362786125

Putting the modified props.conf and transforms.conf in apps/search/local, and putting the file in apps/search/lookups (cont)
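
For reference, a sketch of the layout being described, assuming the /opt/splunk install path shown in the log:

/opt/splunk/etc/apps/search/local/props.conf
/opt/splunk/etc/apps/search/local/transforms.conf
/opt/splunk/etc/apps/search/lookups/ol1.csv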


rgisrael
Explorer

OK, so I've spent a good bit of time trying to implement this according to the docs, and I'm having no luck at all. When I try to use the GUI to add a lookup table file, I get the following error:

Encountered the following error while trying to save: In handler 'lookup-table-files': Error performing action=create on object id=ol1.csv in config=lookups.


rgisrael
Explorer

This sounds like a great start, but wouldn't that mean finding a log entry for each one of these hosts and individually tagging them? That'd be quite onerous... or is there maybe a way to create a file that lists every hostname that needs this tag?

---- Actually, just found the docs for tags.conf, looking at it now...
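
For what it's worth, tags.conf entries look roughly like this (host values and the tag name are made up), one stanza per host value, which is part of why the lookup approach scales better for ~500 hosts:

[host=web-prod-01]
subset_report = enabled

[host=db-prod-02]
subset_report = enabled

The report search could then use something like: tag=subset_report sourcetype=syslog (sudo OR login)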

