When we look at "Understanding fit and apply" from the MLTK docs, we see that apply can handle null fields, unlike fit: apply will still generate a predicted field even when some inputs are null. However, you may not get all the functionality of the algorithm (such as probabilities) if that functionality relies on good data.
Is it possible that the fields your logistic regression is being applied to are null? That would explain why the probabilities field isn't being populated: there isn't a continuous/valid value for each field.
Most likely, your email data is under some textual field, say email_text . When you go to do some analysis on this textual data, machine learning algorithms require that the input be converted into some kind of numeric representation.
The Machine Learning Toolkit will try to help you out here by automatically converting categorical variables, like email_text , into numeric fields. The most common and easiest way to do this is called "dummy encoding", sometimes also called "one-hot" encoding. The MLTK does this using pandas' get_dummies.
Let's look at a simpler example. Say we have a field called color , and it has three values: red, green, and blue.
If we left it at this, as a categorical field, behind the scenes the MLTK would convert this categorical field into a one-hot representation like this:
color_red  color_green  color_blue
        1            0           0
        0            1           0
        0            0           1
For a simple field with a moderate number of discrete values, this strategy actually works pretty well despite its simplicity.
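You can see this encoding directly with pandas, which the MLTK is built on (a minimal sketch, reproducing the color table above):

```python
import pandas as pd

# A small categorical field, as in the color example above
df = pd.DataFrame({"color": ["red", "green", "blue"]})

# get_dummies expands the single categorical column into one
# 0/1 indicator column per distinct value
encoded = pd.get_dummies(df["color"], prefix="color")
print(encoded)
```

Note that pandas orders the new columns alphabetically (color_blue, color_green, color_red), but the idea is the same: exactly one indicator per row is set.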
Let's go back to the email example. When this type of encoding happens on a field where each value of email_text is unique, you basically get a huge sparse matrix of 0's with the occasional 1. In this case, each event isn't really being represented by anything useful for machine learning, and by default the MLTK will throw an error.
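Here is a tiny illustration of why that's useless (the email_text values are made up): when every value is unique, one-hot encoding just produces an N x N identity matrix, one column per event, which carries no generalizable signal:

```python
import pandas as pd

# Hypothetical field where every event has a unique value,
# like raw email text
emails = pd.Series(["hi bob", "meeting at 3", "free prize!!!"])

# One-hot encoding creates one column per distinct value,
# so for N unique values you get an N x N identity-like matrix
encoded = pd.get_dummies(emails)
print(encoded.shape)  # (3, 3) -- grows with the number of events
```

With real email volumes that matrix would have as many columns as events, which is exactly the situation the MLTK's distinct-value limit is protecting you from.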
When you need to create features out of a textual variable like email_text , the preprocessing you most likely need is vectorization of the text. You can read more about vectorization here. Thankfully, the MLTK already ships with a vectorizer algorithm: term frequency-inverse document frequency (TFIDF) vectorization.
... base search ...
| fit TFIDF email_text
This will create 100 fields based on the TFIDF weighting of word occurrences. You can look into the parameters on the linked docs page to try featurizing on characters instead of words (e.g. analyzer=char or ngram_range=1-3 , max_features=200 ).
You'll see that the TFIDF algorithm will then turn your text into a useful representation for machine learning, rather than the useless representation you get (for this kind of data) with the one-hot/dummy encoding.
Hope this helps 🙂 If you need more help, try asking on the #machinelearning channel on the Splunk user groups Slack - read more about that here.
P.S. If however you do want to change the limit that is imposed, the maximum number of distinct values can be modified in mlspl.conf: see https://docs.splunk.com/Documentation/MLApp/3.3.0/User/Configurefitandapply