Splunk Machine Learning Toolkit: Does ML Toolkit c...


jsinnott_

Explorer

12-13-2017 12:30 PM

Hi Splunk Experts--

A colleague of mine and I are exploring the Splunk Machine Learning Toolkit and, more specifically, using the ML Toolkit to perform Logistic Regression analysis on a dataset that includes categorical data as independent variables.

When performing LR on categorical data, we've been taught the statistical technique of creating "dummy variables" that, in effect, transform the categorical data into a series of numeric variables.

Example: Imagine a single categorical attribute `color` with values `"red"`, `"yellow"`, and `"blue"`. That categorical data could be transformed into three dummy variables (say, `is_red`, `is_yellow`, and `is_blue`) where each dummy variable would have a value of `0` or `1`.

Given this, a data record where `color` had value `yellow` would be transformed into

```
is_red: 0
is_yellow: 1
is_blue: 0
```

We were both taught that when using dummy variables in Logistic Regression, you need to omit one dummy variable from the set representing a given categorical variable. Doing this prevents double-counting of the omitted categorical value (say, `is_blue`), because having zeros in all the other dummy variables effectively represents a one in the omitted dummy variable.

We've been crawling through the ML Toolkit (Logistic Regression) source code to see how it handles categorical data and have found something that surprises both of us: specifically, the `prepare_features` method in `df_util.py` (see below), which uses pandas to create dummy variables for categorical data by invoking the pandas `get_dummies` method (see the `pd.get_dummies` call in the code below).

```
def prepare_features(X, variables, final_columns=None, get_dummies=True):
    """Prepare features.

    This method defines conventional steps to prepare features:
    - drop unused columns
    - drop rows that have missing values
    - optionally (if get_dummies==True)
      - convert categorical fields into indicator dummy variables
    - optionally (if final_columns is provided)
      - make the resulting dataframe match final_columns

    Args:
        X (dataframe): input dataframe
        variables (list): column names
        final_columns (list): finalized column names
        get_dummies (bool): indicate if categorical variable should be converted

    Returns:
        X (dataframe): prepared feature dataframe
        nans (np array): boolean array to indicate which rows have missing
            values in the original dataframe
        columns (list): sorted list of feature column names
    """
    X, nans = drop_unused_and_missing(X, variables)
    if get_dummies:
        filter_non_numeric(X)
        X = pd.get_dummies(X, prefix_sep='=', sparse=True)
    if final_columns is not None:
        drop_unused_fields(X, final_columns)
        assert_any_fields(X)
        fill_missing_fields(X, final_columns)
    assert_any_rows(X)
    assert_any_fields(X)
    columns = sort_fields(X)
    return (X, nans, columns)
```
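As a quick sanity check of what that `pd.get_dummies` call produces, here is a toy input of our own (not MLTK data). All `m` indicator columns come back, and none is dropped:

```
import pandas as pd

# Toy frame standing in for MLTK's feature dataframe
X = pd.DataFrame({'color': ['red', 'yellow', 'blue']})
print(pd.get_dummies(X, prefix_sep='=', sparse=True).columns.tolist())
# ['color=blue', 'color=red', 'color=yellow']  -- all m dummies kept
```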

The ML Toolkit seems to use pandas 0.17. In pandas 0.18 the `get_dummies` method supports a `drop_first` parameter, which omits the first dummy variable for each categorical variable, but that parameter is not available in pandas 0.17. To us this means that the Splunk ML Toolkit should contain code to drop one of the dummy variables returned by pandas, and we don't see code that does this.
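To illustrate the difference, here is a quick sketch on hypothetical data (the second call assumes pandas 0.18+):

```
import pandas as pd

df = pd.DataFrame({'color': ['red', 'yellow', 'blue']})
print(pd.get_dummies(df).columns.tolist())
# ['color_blue', 'color_red', 'color_yellow']
print(pd.get_dummies(df, drop_first=True).columns.tolist())
# ['color_red', 'color_yellow']  -- the first (alphabetical) level is omitted
```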

So (finally!) here are our questions:

1) Are the assertions/interpretations above correct?

2) If so, does it follow that the ML Toolkit is not handling categorical data correctly, i.e. that it will produce biased models when the input contains categorical data?

3) And if so, is there a technique for using the ML Toolkit to perform Logistic Regression on categorical data that allows creation of models without this bias?

Hope this is reasonably clear-- thanks in advance for any advice!

1 Solution


yangzd

Splunk Employee

12-18-2017 11:32 PM

Hi,

Thank you for asking; this is an incredibly valuable question! You have a very good understanding of dummy variables.

First, about the bias in the model. Let's assume you have dummy variables `x1`, `x2`, `x3`, such that `x1 + x2 + x3 = 1`.

With `m-1` dummy variables, your linear model can be expressed as

`y = α0 + α1 * x1 + α2 * x2`

With `m` dummy variables, your linear model is now

`y = β0 + β1 * x1 + β2 * x2 + β3 * x3`

Since `x3 = 1 − x1 − x2`, you get

`y = β0 + β1 * x1 + β2 * x2 + β3 * (1 − x1 − x2) = (β0 + β3) + (β1 − β3) * x1 + (β2 − β3) * x2`

Essentially you have

`α0 = β0 + β3`, `α1 = β1 − β3`, `α2 = β2 − β3`

So these two models are equivalent, and, as this exercise shows, no bias is introduced.
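A quick numeric check of this reparameterization (an illustrative sketch with arbitrary coefficients, not MLTK code):

```
import numpy as np

# Arbitrary beta coefficients and every one-hot input (so x1 + x2 + x3 = 1)
b0, b1, b2, b3 = 0.5, 1.2, -0.7, 2.0
X = np.eye(3)  # rows are (x1, x2, x3)

y_m = b0 + X @ np.array([b1, b2, b3])        # model with m dummies
a0, a1, a2 = b0 + b3, b1 - b3, b2 - b3       # derived alpha coefficients
y_m1 = a0 + X[:, :2] @ np.array([a1, a2])    # model with m-1 dummies

print(np.allclose(y_m, y_m1))  # True: the two parameterizations agree
```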

Now, the question is: what is introduced here? Collinearity, since you can always tell the value of the left-out dummy variable if you know the other `m-1` of them. Collinearity can cause computational problems for linear regression, since the matrix inversion cannot be performed. But for logistic regression, depending on the computational scheme under the hood (e.g. gradient descent), numerical instability may not be an issue. Moreover, the LogisticRegression model in sklearn uses regularization by default (`penalty='l2'` and `C=1.0`), which means feature collinearity will be penalized. Therefore, using the full `m` dummy variables instead of `m-1` does not introduce bias into the model, aside from potential numerical instability.
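The same point can be checked empirically. Here is a small sketch on toy synthetic data of our own, where the regularized fits on `m` and on `m-1` dummies produce nearly identical probabilities:

```
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: one categorical feature, binary target
rng = np.random.RandomState(0)
color = rng.choice(['red', 'yellow', 'blue'], size=200)
y = (rng.rand(200) < np.where(color == 'red', 0.8, 0.3)).astype(int)

X_full = pd.get_dummies(pd.Series(color))   # all m dummies (collinear)
X_m1 = X_full.iloc[:, 1:]                   # m-1 dummies

p_full = LogisticRegression(penalty='l2', C=1.0).fit(X_full, y).predict_proba(X_full)[:, 1]
p_m1 = LogisticRegression(penalty='l2', C=1.0).fit(X_m1, y).predict_proba(X_m1)[:, 1]
print(np.abs(p_full - p_m1).max())  # typically a very small difference
```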

In practice, to avoid the potential numerical instability issue, if you decide to go with `m-1` dummy variables, you have the following options:

1) With the latest version of MLTK (you are right, it uses pandas 0.17), you can modify the `prepare_features_and_target` method in `df_util.py`: instead of doing

```
X = pd.get_dummies(X, prefix_sep='=', sparse=True)
```

you can use the following code to drop the first column of the created dummy variables for each categorical variable:

```
columns_to_encode = X.select_dtypes(include=['object', 'category']).columns
for col in columns_to_encode:
    X = X.join(pd.get_dummies(X.pop(col), prefix=col, prefix_sep='=').iloc[:, 1:])
```
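For example, running that loop over a toy frame (hypothetical data) shows the first dummy of each categorical column being dropped while numeric columns pass through:

```
import pandas as pd

X = pd.DataFrame({'color': ['red', 'yellow', 'blue', 'yellow'], 'size': [1, 2, 3, 4]})
columns_to_encode = X.select_dtypes(include=['object', 'category']).columns
for col in columns_to_encode:
    X = X.join(pd.get_dummies(X.pop(col), prefix=col, prefix_sep='=').iloc[:, 1:])
print(X.columns.tolist())
# ['size', 'color=red', 'color=yellow']  -- 'color=blue' (the first) was dropped
```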

2) As you already mentioned in your post, `drop_first=True` is supported in pandas 0.18+; you could use this when a future version of Python for Scientific Computing ships a newer pandas.
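With that newer pandas, the change would reduce to a single extra parameter on the existing call (a sketch, untested against MLTK):

```
X = pd.get_dummies(X, prefix_sep='=', sparse=True, drop_first=True)
```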

On the other hand, if you want to reduce the effect of collinearity in your model, you can also use preprocessing methods, e.g. Field Selector to select features, or PCA to remove collinearity. You can also use algorithms like Random Forest that are less affected by feature multicollinearity.

Hope it helps clarify some of the issues.

zd


aljohnson_splun

Splunk Employee

12-20-2017 09:07 PM

In addition to @yangzd's response above, you can do your own categorical encoding quite simply with eval. Say, for example, our field `color` has values `is_red`, `is_yellow`, and `is_blue`, and you'd like to encode the 3 levels into two dummy variables (treating `is_blue` as the base):

```
| eval {color} = 1
| fillnull is_red is_yellow
| fields - is_blue
```

The `{color}` on the left side of eval will take the value of the field and use it as the name of the new field. For example, an event where `color=is_yellow` gets a new field `is_yellow` set to `1`; `fillnull` then fills the other dummies with `0`.


jsinnott_

Explorer

01-02-2018 09:14 AM

Hi aljohnson-- Thanks very much for this. This, it turns out, is the method we're using to compare letting the ML Toolkit handle categorical data (described above) against converting our categorical data to dummy variables prior to invoking the `fit` command. In fact, we generalize this to something like the following (for a categorical attribute "foo"):

```
...
| eval foo_is_{foo} = 1
...
| foreach foo_is_* [ eval <<FIELD>>=coalesce(<<FIELD>>,0) ]
```

Thanks for taking the time to comment!

