Extracting fields from Microsoft Internet Authentication Service (IAS) logs?

mfrost8
Builder

I'm trying to figure out a strategy to perform field extractions from Microsoft Internet Authentication Service (IAS) logs. I see there was a question about this in the old Splunk forums that was never answered.

I'm not a stranger to search-time field extractions using props.conf and transforms.conf, but I'm not quite sure how to approach this one.

Microsoft describes the format in Interpret IAS Format Log Files

The format is essentially just a comma-separated list, but with a twist. The first 6 fields are fixed (the header), and those are really straightforward to parse out with Splunk. The fields after those first 6, however, are really key/value pairs, but with the separator still being the comma. The keys themselves are also just numbers, which then map to what we'd really want as field names.

A (fairly short) example event given in their documentation is

10.10.10.10,client,06/04/1999,14:42:19,NPS,CLIENTCOMP,6,2,7,1,5,9,61,5,64,1,65,1,31,1

The first 6 fixed fields would be a straightforward match in Splunk as the following k/v pairs:

NAS-IP-Address = 10.10.10.10
User-Name      = client
Record-Date    = 06/04/1999
Record-Time    = 14:42:19
Service-Name   = NPS
Computer-Name  = CLIENTCOMP

But the rest of the fields are key/value pairs like 6,2 and 7,1 and 5,9 etc., which would "extract" to

6 = 2
7 = 1
5 = 9
61 = 5
64 = 1
65 = 1
31 = 1

Those key "names" are unique and, based on Microsoft's mappings, actually translate to mean:

Service-Type       = 2
Framed-Protocol    = 1
NAS-Port           = 9
NAS-Port-Type      = 5
Tunnel-Type        = 1
Tunnel-Medium-Type = 1
Calling-Station-ID = 1

which would ultimately be the field/value extractions I'd like to get out of this event, but I'm not sure how.

The first problem I see is extracting the fields as they are. You could potentially make a separate extraction for each field name, which would be around 70 different regexes, one for each possible key number, where each regex looked for, say, the key "4128" and its value, as long as there were an odd number of commas preceding it, in an attempt to line up with the even-numbered start of a key/value pair (see the sketch below). Then you'd tell the props.conf entry to run through all 70 possible extractions (wildly inefficient, I would think), resulting in fields like "6" where what I really want is Service-Type. I didn't think Splunk even allowed field names that start with a number. I could then alias a field like "6" to Service-Type, but then I've got a big mess of "number" fields that I don't want. Or I could maybe use a lookup table to add the translated (i.e., correct) field names like Service-Type, but I'd still wind up with lots of numeric field names.
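
Just to make that concrete, I imagine one of those ~70 transforms.conf stanzas might look something like this (an untested sketch, using my made-up key 4128; the stanza and field names are invented):

# Skip the six header fields and any number of complete key/value pairs,
# then capture the value that follows key 4128
[ias-attr-4128]
REGEX = ^([^,]+,){6}([^,]+,[^,]+,)*4128,([^,]+)
FORMAT = attr_4128::$3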

I was thinking I could maybe use SEDCMD from props.conf, but that would be a rather huge sed expression, and it would be altering the event quite significantly before it gets indexed.
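
For example, handling just one attribute might look something like this in props.conf (a rough sketch; the [ias] sourcetype name is made up), and I'd need one of these per attribute:

[ias]
# Rewrite ",6,<value>" to ",Service-Type,<value>" at index time.
# Naive: this could also match a 6 that is a value rather than a key.
SEDCMD-ias_service_type = s/,6,([^,]+)/,Service-Type,\1/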

I guess this is Microsoft's way of dealing with variable contents in an event. Clearly they can't guarantee any particular ordering of fields, or whether a given field will even exist in an event. I've seen other vendors handle this problem by listing all attributes every time in a fixed order, which results in a lot of ",,,," type entries, but at least it's easy to parse.

Other than the ideas I listed above (which I'm not sure will even work), does anyone have any ideas on an approach to pick fields from these IAS events?

Thanks very much.

southeringtonp
Motivator

+1 to araitz's answer. That's similar to our in-house setup.

I've uploaded a copy of what we're already using to SplunkBase. As it may take a day or two to get through their approval process, you can also download a copy here:
http://www.southerington.com/redir.php?id=273

jwalzerpitt
Influencer

Is this download still available?

Thx

southeringtonp
Motivator

Yes, though the redirect link appears to be temporarily broken. It's on SplunkBase now, so go to https://splunkbase.splunk.com/app/907/

jwalzerpitt
Influencer

Thx for the update

thomastorggler
New Member

Thanks man, now that's useful!

araitz
Splunk Employee

It seems like you could just extract all key-value pairs and map the RADIUS ID and possible values to a lookup. I had to do a similar thing in my Windows DHCP and DNS apps.

To ensure the positional extractions are correct, you will need to extract using the 6 "header" fields as an anchor. Also, write a static value in front of the extracted key, as by default Splunk will not assign fields whose names start with a number. Just off the top of my head, since it is Saturday morning:

# Positional extraction of the six fixed header fields
[your_regex]
REGEX = ^([^,]+),([^,]+),([^,]+),([^,]+),([^,]+),([^,]+)
FORMAT = NAS-IP-Address::$1 User-Name::$2 <yada yada>

# Skip the six header fields and zero or more complete pairs, then
# capture the next key/value pair; the radius_ prefix keeps the field
# name from starting with a digit
[your_regex2]
REGEX = ^([^,]+,){6}([^,]+,[^,]+,)*?([^,]+),([^,]+)
FORMAT = radius_$3::$4
REPEAT_MATCH = true
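
Then a CSV lookup could handle the ID-to-value translation. A sketch only; the file, sourcetype, and field names here are made up, and the enum values shown are from the standard RADIUS Service-Type attribute:

# ias_service_type.csv -- translates the value of attribute 6 (Service-Type)
radius_6,service_type
1,Login
2,Framed

# transforms.conf -- lookup definition
[ias_service_type]
filename = ias_service_type.csv

# props.conf -- apply the lookup automatically to your sourcetype
[your_sourcetype]
LOOKUP-ias_service_type = ias_service_type radius_6 OUTPUT service_type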

jwalzerpitt
Influencer

araitz,

I downloaded the IAS app and got it to work, and it really saved a ton of time parsing the NPS RADIUS logs. However, I did notice there are some errors and/or missing key/value pairs. For example, in the transforms.conf file I see the following stanza:

[ias-attr-45]
SOURCE_KEY = ias_message
REGEX = ^(([^,]+),){6}(([^,]+),([^,]+),)*(45),([^,]+)
FORMAT = acct_authentic::$7

I found a parser for the IAS logs, and for the acct_authentic key/value pair I see the following:

'enum' => { '0' => 'None', '1' => 'RADIUS', '2' => 'Local', '3' => 'Remote' },
'name' => 'Acct-Authentic',

How do I modify the transforms.conf stanza to include the values (0 = 'None', 1 = 'RADIUS', 2 = 'Local', 3 = 'Remote')?
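
Would an automatic lookup on the extracted field be the way to do it? This is just my guess at the config (file and sourcetype names made up), using the enum above:

# ias_acct_authentic.csv -- value translation table
acct_authentic,acct_authentic_name
0,None
1,RADIUS
2,Local
3,Remote

# transforms.conf -- lookup definition
[ias_acct_authentic_lookup]
filename = ias_acct_authentic.csv

# props.conf -- apply the lookup automatically (sourcetype name assumed)
[your_sourcetype]
LOOKUP-acct_authentic = ias_acct_authentic_lookup acct_authentic OUTPUT acct_authentic_name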

Thx
