I've got a lot of CSV data that I'm indexing, and for one of the fields in the CSV the values are themselves big jumbles of different fields joined together.
e.g.:
MLQK=3.5000;MLQKav=3.7370;MLQKmn=3.2782;MLQKmx=5.3000;ICR=0.0000;CCR=0.0003;ICRmx=0.0027;CS=1;SCS=0;MLQKvr=0.93
The extract command (aka kv) springs to mind, but extract only runs against _raw as far as I know. Are there any good tricks to using extract with other fields besides _raw?
http://www.splunk.com/base/Documentation/latest/SearchReference/Extract
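For reference, extract does let you spell out the delimiters it should look for, but it still only parses _raw. A minimal sketch against a throwaway event built from the sample values above (the pairdelim/kvdelim choices are just my guess at what fits this data):
| stats count
| eval _raw="MLQK=3.5000;MLQKav=3.7370;MLQKmn=3.2782;CS=1"
| extract pairdelim=";" kvdelim="="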
Right now I'm thinking of:
<my search> | rename _raw as _actualRaw | eval _raw=myCrazyField | extract | eval _raw=_actualRaw
but it seems really clunky and I thought maybe there's a better way.
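A slightly tidier variant of the same swap, still clunky, might look like this (myCrazyField and <my search> are placeholders carried over from above; the explicit delimiters and the cleanup of the temporary copy are my additions):
<my search>
| rename _raw as _actualRaw
| eval _raw=myCrazyField
| extract pairdelim=";" kvdelim="="
| eval _raw=_actualRaw
| fields - _actualRaw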
If you don't want to tackle it via conf, I'll see your clunky and raise you one (sans _raw):
| stats count
| eval jumboField="MLQK=3.5000;MLQKav=3.7370;MLQKmn=3.2782;MLQKmx=5.3000;ICR=0.0000;CCR=0.0003;ICRmx=0.0027;CS=1;SCS=0;MLQKvr=0.93"
| eval jumboField=replace(jumboField,"([^=]+)=([^;]+);?","<\1>\2</\1>")
| xmlkv jumboField
| fields - jumboField
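Against the real events, the same trick would look roughly like this (a sketch; myCrazyField is the field name assumed from the question):
<my search>
| eval jumboField=replace(myCrazyField,"([^=]+)=([^;]+);?","<\1>\2</\1>")
| xmlkv jumboField
| fields - jumboField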
Close. But the performance of xmlkv on the number of events I need is pretty horrendous. In fact, a nice little warning pops up in the 4.3 UI telling me I'm insane to send this many events through xmlkv. 😃
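For the "tackle it via conf" route mentioned in the answer, which avoids the per-search replace/xmlkv overhead entirely, a search-time REPORT extraction can be pointed at a field other than _raw via SOURCE_KEY. A minimal sketch, with the sourcetype, transform name, and field name all assumed:
props.conf:
[my_csv_sourcetype]
REPORT-jumbo = jumbo-kv

transforms.conf:
[jumbo-kv]
SOURCE_KEY = myCrazyField
DELIMS = ";", "="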