I have a CSV that I am monitoring. The CSV has many fields and my extraction works correctly. What I have noticed is that, depending on the item in the CSV, a field either has a value or not, and that this appears to be common to fields that all share the same prefix. An example of the field set:
comp_domain
comp_cputype
comp_department
last_logon_date
Enabled
Name
If I run the following SPL, then for all fields EXCEPT the comp_* ones, Splunk will populate them with my value:
index=foo
| fillnull value="Nothing"
So using the above fields
| field | value |
| comp_domain | |
| comp_cputype | |
| comp_department | |
| last_logon_date | Nothing |
| Enabled | Nothing |
| Name | Nothing |
If I run an eval to look for null for one of these values (e.g. comp_domain), I get the same result:
index=foo
| eval job=if(isnull(comp_domain),"Nothing here",comp_domain)
| field | value |
| comp_domain | |
The same happens for any field prefixed with "comp_", but it works fine for fields without the prefix.
Hi @willadams
Go back to the source CSV file; I suspect it must contain a whitespace value or something similar, so Splunk does not consider the field a true null, as the eval test in your example proves.
Here's a run-anywhere example of what I mean...
| makeresults
| eval test=1, blank=" ", empty=""
| foreach blank empty [ eval <<FIELD>>_size=len(<<FIELD>>) ]
| foreach blank empty [ eval <<FIELD>>=if(isnull('<<FIELD>>'), "NULL", "NOT NULL") ]
| eval empty=null()
| appendpipe [
eval test=2
| foreach blank empty [ eval <<FIELD>>_size=len(<<FIELD>>) ]
| foreach blank empty [ eval <<FIELD>>=if(isnull('<<FIELD>>'), "NULL", "NOT NULL") ]
]
Results
| | _time | blank | blank_size | empty | empty_size | test |
| 1 | 2020-09-01 17:24:52 | NOT NULL | 1 | | 0 | 1 |
| 2 | 2020-09-01 17:24:52 | NOT NULL | 8 | NULL | | 2 |
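If the comp_* fields do turn out to hold whitespace or empty strings rather than true nulls, one possible fix is to trim them and convert anything empty into a real null before fillnull runs. This is only a sketch using the index and field prefix from your question, not something tested against your data:

index=foo
| foreach comp_* [ eval <<FIELD>>=if(trim('<<FIELD>>')=="", null(), trim('<<FIELD>>')) ]
| fillnull value="Nothing"

trim() strips leading and trailing whitespace, so a whitespace-only value becomes an empty string, the if() turns that into null(), and fillnull can then substitute "Nothing" as expected.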
Hope this helps.