
Best way to search for server class?

kaufmanm
Communicator

We have a new auto-scaling application launching in AWS, and we are using the deployment server to pull down the appropriate log monitoring configuration; e.g., there is a new_web_app server class, which we set as the clientName in the forwarders' deploymentclient.conf.

This is all working great, but the host field is populated with hostnames like ip-x-x-x-x, which makes it very hard to search for logs from the new_web_app server class. We could set the hostnames on all of these servers to new_web_app, and then we'd be able to do the search easily, but we wouldn't have any way to see down to the instance level, so we'd prefer not to lose the IP address or individual host data.

Is there any way to keep another field with this data, like clientName, that shows which deployment server group the host sending the data belongs to?


kaufmanm
Communicator

After talking to our technical account manager, this is the solution we came up with, and it works very well:

We run a scheduled search every five minutes on the search head that gets the complete list of deployment clients from our deployment server (replace deploymentserver with the host name of your deployment server, or use local if it's the same as your search head) and then stores them in a static lookup file.

| rest /services/deployment/server/clients count=0 splunk_server=deploymentserver | fields hostname name | rename name as clientName | outputlookup clientNames.csv
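For reference, scheduling that search can be done in savedsearches.conf with a cron schedule; a minimal sketch might look like the following (the stanza name is our own choice, and the search line assumes your deployment server's hostname as above):

```
[Update clientName lookup]
search = | rest /services/deployment/server/clients count=0 splunk_server=deploymentserver | fields hostname name | rename name as clientName | outputlookup clientNames.csv
enableSched = 1
cron_schedule = */5 * * * *
```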

Then we add the below stanzas to props.conf and transforms.conf so all messages get their clientName looked up based on their host:

props.conf:

[host::*]
LOOKUP-client = clientLookup hostname AS host OUTPUT clientName

transforms.conf:

[clientLookup]
filename = clientNames.csv
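For illustration, the resulting clientNames.csv is just a two-column table keyed by hostname (the hostnames and server classes below are made up):

```
hostname,clientName
ip-10-0-1-17,new_web_app
ip-10-0-1-42,new_web_app
ip-10-0-2-9,db_servers
```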

And now we can simply type clientName=new_web_app into the Search app, and all of our logs from that app come up. It works great, and we haven't seen any noticeable performance hit from the extra lookups.
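And because the host field is untouched, we can still drill down to the instance level; for example, something like this would break the server class out by individual host:

```
clientName=new_web_app | stats count by host
```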



kaufmanm
Communicator

This is the best way I've figured out to do it now:

[search index=_internal source=*splunkd.log name=new_web_app | dedup ip | rex field=ip mode=sed "s/\./-/g" | eval host = "ip-" . ip | fields + host]

Basically, from the deployment logging I can pull the server class out of the name field, and then transform the actual IP address into the ip-x-x-x-x hostname. This subsearch then passes the set of hosts into the main search. Instead, I'd like to be able to do something like:

clientName=new_web_app

To the same effect, without the performance hit of searching across all time in splunkd.log for when the clients were added to the deployment server. And presumably that data could be aged off, in which case I'd really need another way to do it.
