If I don't put a wildcard in the search value after extracting the field, the search returns nothing.
The field extraction itself succeeds, and I can verify the field when searching by index and sourcetype.
myfield=aaaa -> no search
myfield=*aaaa* -> search ok
It behaves like this for all fields in one specific index.
If your sourcetype definitions include that event-dropping transformation, you can fix it per sourcetype; but if the issue comes from the field definition, you must fix it in fields.conf.
SPLUNK_HOME/etc/system/local/fields.conf
[MyField]
INDEXED_VALUE = false
I found that approach, but I have too many fields to apply it to one by one. Is there a similar but less manual way?
You could try the following options.
props.conf – disable the event-dropping transformation for this sourcetype
fields.conf – you could try this:
[MyField]
INDEXED_VALUE = *<VALUE>
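As a sketch of how that second option might look when packaged in its own TA (the app path is hypothetical, and you should check fields.conf.spec for the exact INDEXED_VALUE syntax supported by your version):

```ini
# $SPLUNK_HOME/etc/apps/my_ta/local/fields.conf   (hypothetical TA path)

[MyField]
# Tell search that the field value may appear inside a larger indexed
# token, so Splunk stops requiring an exact "aaaa" keyword in the tsidx.
INDEXED_VALUE = *<VALUE>
```

If many fields need the same treatment, it may be worth testing whether a [default] stanza in fields.conf can carry the setting for all of them at once; I haven't verified that, so confirm against fields.conf.spec before relying on it.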
You could check how those are working with the Job Inspector, and compare the lispy for your search before and after the changes.
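To make that Job Inspector comparison concrete, here is roughly what the lispy looks like for the two searches from the question (illustrative output; the exact format varies by Splunk version):

```
# search: index=foo myfield=aaaa
base lispy: [ AND aaaa index::foo ]
#   -> requires the literal indexed token "aaaa" in the tsidx files

# search: index=foo myfield=*aaaa*
base lispy: [ AND index::foo ]
#   -> no usable token; Splunk scans the events and filters after
#      the search-time extraction runs
```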
btw, it's a much better way of working to create separate apps/TAs for those integrations than to use the .../etc/system/local folder.
Is it not possible at the sourcetype level, rather than per field?
If your sourcetype definitions include that event-dropping transformation, you can fix it per sourcetype; but if the issue comes from the field definition, you must do it in fields.conf.
The situation has changed: I can now search character values, but for numeric values I still have to add * to the search. Can you help me?
Hi
this is known behaviour in some cases. The reason is how your props & transforms are configured to extract data, and it of course depends on your data. If you have taken the Data Administration training you have seen this, and it was explained there (if I recall correctly; I couldn't find my exact notes about it).
Can you show your raw data before indexing, and your props and transforms for that index/sourcetype, so we can try to help you if possible?
r. Ismo
Sorry, I didn't understand that well.
This is a simplification for that reason. If/when you show your sample and configuration, we can try to explain and fix it better.
When the raw data contains something like
11-11-2020 12:12:12 host name=abc myfield=ABCAaaaa qsdae aaaa ics
And you have defined, e.g. in props.conf, that your field myfield is extracted like this:
[foo]
EXTRACT-myfield = .+ABC(?<origin>\w)(?<myfield>\D{4})\s........
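Walking through that sample (this is my reading of the regex above; the real configuration isn't shown, so treat it as an assumed illustration), the mismatch between the extracted value and the indexed tokens becomes visible:

```
# raw event:
#   11-11-2020 12:12:12 host name=abc myfield=ABCAaaaa qsdae aaaa ics
#
# the EXTRACT above produces, at search time:
#   origin  = A
#   myfield = aaaa
#
# but with the default segmenters the tokens written to the tsidx around
# that part of the event are "myfield" and "ABCAaaaa" - not "aaaa".
# So myfield=aaaa can only match when "aaaa" also happens to exist as its
# own keyword elsewhere in the event, while myfield=*aaaa* skips the
# keyword lookup entirely, which is why the wildcard search works.
```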
And here is the explanation of this issue from the course material:
The mysterious search behavior was caused by a different expectation of how Splunk search works. Search-time field extraction works against the indexed keywords, which are produced according to the defined segmenters. By default, Splunk treats the &lt;space&gt; character as a keyword segmenter. If a tsidx bucket does not contain the keyword that a search-time extraction is based on, then any subsequent extraction cannot continue.
Depending on your events and configuration (index time vs. search time, etc.), there are a couple of ways to fix it.
Actually this was in troubleshooting course.
r. Ismo
SPLUNK_HOME/etc/system/local/fields.conf
[MyField]
INDEXED_VALUE = false
I'm sorry, I don't think I can show you the log. I was able to solve it with this method, but there are too many fields to apply it to. Do you know another way to solve it?
Hi,
This could happen if there are spaces around your field value. To check this, you can look at the length of the field value using the search below:
...|eval length=len(myfield)
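A slightly extended version of that check (a sketch; the index, sourcetype, and field names are placeholders for your own) directly flags any values that carry surrounding whitespace:

```
index=foo sourcetype=bar
| eval length=len(myfield)
| eval trimmed_length=len(trim(myfield))
| where length != trimmed_length
| table myfield length trimmed_length
```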
I checked, but there are no spaces or other extra characters.