Archive

How do I tell what environment is production?

Path Finder

We have two environments, prod1 and prod2. At any given point in time one is production and the other is staging. We can change our DNS to switch what environment is in production. Currently there is nothing going into Splunk that lets us figure out what is what.

All hosts come in as systemname.prod1.foo.com, or systemname.prod2.foo.com.

I am trying to figure out the best way to write a search that only searches production. I wrote a simple script to send "production=prod1" to Splunk, so we have a searchable value.

How would I write a search to take advantage of this?
Is there a better way?
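For context, the "simple script" could be as small as appending a timestamped marker event to a file that Splunk already monitors. A rough sketch only; the file path and the prod1 value are placeholders for your setup:

```shell
#!/bin/sh
# Sketch: append a timestamped marker event so "production=prod1" becomes
# a searchable value. MARKER_FILE is an assumption - point it at a file
# your Splunk instance actually monitors.
MARKER_FILE="./prod_marker.log"
ACTIVE_ENV="prod1"   # whichever environment DNS currently points at
echo "$(date '+%Y-%m-%dT%H:%M:%S%z') production=$ACTIVE_ENV" >> "$MARKER_FILE"
```

With something like this running on a schedule, you can qualify searches with `production=prod1`, though as the answers note, this only tells you the assignment at the time each marker event was written.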

1 Solution

Legend

Part 2 of answer:

To solve the "historical problem" with lookups (see part 1) --

Create the CSV file like before, but with a timestamp:

host,assignedTo,asOf
*.prod1.foo.com,production,Thu Jul 12 23:00:00
*.prod2.foo.com,staging,Thu Jul 12 23:00:00
*.prod2.foo.com,production,Thu Jul 26 23:00:00
*.prod1.foo.com,staging,Thu Jul 26 23:00:00

My example is a bit lame, because you should really append to the CSV once a day, at the same time each day; that keeps the lookup matching reliably. Even if you save your data for 5 years, you will have fewer than 4000 entries in the table - that's not too big.
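The daily append could be a small cron job along these lines. This is only a sketch: the CSV path and the way you detect the active environment are assumptions you would replace with your own logic (e.g. a DNS check).

```shell
#!/bin/sh
# Sketch of a daily cron job that appends the current assignment to the
# lookup CSV. Run it at the same time each day, e.g. 23:00.
CSV="./yourlookupfile.csv"       # assumed path to the lookup CSV
NOW=$(date '+%a %b %d %T')       # matches time_format = %a %b %d %T
ACTIVE="prod1"                   # placeholder: detect the live environment via DNS
if [ "$ACTIVE" = "prod1" ]; then
    echo "*.prod1.foo.com,production,$NOW" >> "$CSV"
    echo "*.prod2.foo.com,staging,$NOW"    >> "$CSV"
else
    echo "*.prod1.foo.com,staging,$NOW"    >> "$CSV"
    echo "*.prod2.foo.com,production,$NOW" >> "$CSV"
fi
```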

Change the transforms.conf to

[yourlookupname]
filename = yourlookupfile.csv
max_matches = 1
min_matches = 1
default_match = Unknown
match_type = WILDCARD(host)
time_field = asOf
time_format = %a %b %d %T
max_offset_secs = 86400

The max_offset_secs setting says that the event must fall within 24 hours of the asOf timestamp in order to match.

I believe that the props.conf can remain the same.


Path Finder

I am doing something wrong. Whenever I search for anything I get "The lookup table 'prdlookup' does not exist. It is referenced by configuration 'source::/var/log/..."

[root@log01ptk01 lookups]# pwd
/opt/splunk/etc/apps/cheese/lookups
[root@log01ptk01 lookups]# cat prdlookup.cvs
host,assignedTo,asOf
*.prd01.*,production,Jul 26 2012 20:23:34
*.prd02.*,staging,Jul 26 2012 20:23:34

[root@log01ptk01 default]# pwd
/opt/splunk/etc/apps/cheese/default
[root@log01ptk01 default]# head props.conf -n 6
[host::*]
LOOKUP-lookup1 = prdlookup host OUTPUT assignedTo

[host::*]
LOOKUP-lookup2 = prdlookup host OUTPUT assignedTo

[root@log01ptk01 default]# head transforms.conf 
[prdlookup]
filename = prdlookup.csv
max_matches = 1
min_matches = 1
default_match = Unknown
match_type = WILDCARD(host)
time_field = asOf
time_format = %b %d %Y %T
max_offset_secs = 86400

Any ideas what I messed up?


Legend

Nice! And if I don't make at least one typo a day, I feel like I am not working...

Path Finder

WIN! User error (csv vs. cvs). (Quietly hangs head in shame.)

Also built a simple shell script/cron job to update prdlookup.csv daily.


Legend

Also, since you are doing this with the [host::*] stanza, you don't need two lookups in props.conf.


Legend

Is your file name .csv or .cvs? Gotta be prdlookup.csv


Legend

Part 1 (original answer)

I would consider doing this as a lookup. Your script could just write to a CSV file that looks like this

host,assignedTo
*.prod1.foo.com,production
*.prod2.foo.com,staging

You can set up this lookup using the Splunk Manager GUI, but you will need to manually edit the transforms.conf configuration file to make it do the wildcard match. In the transforms.conf where your lookup is defined:

[yourlookupname]
filename = yourlookupfile.csv
max_matches = 1
min_matches = 1
default_match = Unknown
match_type = WILDCARD(host)

In props.conf, you could set this as an automatic lookup to be used for a variety of different sourcetypes:

[sourcetype1]
LOOKUP-lookup1 = yourlookupname host OUTPUT assignedTo

[sourcetype2]
LOOKUP-lookup2 = yourlookupname host OUTPUT assignedTo

Now, for these sourcetypes, you can just search like this:

yoursearchhere assignedTo=production 

or

yoursearchhere assignedTo=staging

and you will only get the data from the particular domain.

Note that your lookup CSV file, props.conf and transforms.conf need to all belong to the same app. More information is in the documentation.

Path Finder

Blast! That is the exact problem I am trying to get around: the historical lookups. Regardless, I have other things a lookup will work great for!


Legend

The only problem that I see with this is that it doesn't track historically. So if you search over last week, when xyz.prod1.foo.com was assigned to staging, the assignedTo is based on the current assignment, not last week's.

You could add a timestamp to the lookup and use it as a second key. But try this first, and see if it works for you. Once you get this working, look in the docs for time-based lookups. I think you will need to update your lookup CSV file every day or every hour to make a time-based lookup work accurately.
