To do this properly, you would need to handle it at index time so that proper event breaking occurs. Trying to do event breaking at search time is much harder. If this data were properly event-broken, each event would have the correct time assigned to it; that is the best practice. However, I know that is not always within your control.

If that is not possible, I would start with a rex command using the max_match=0 parameter so the pattern is captured repeatedly. It might look something like this (use field=_raw, or whatever your field is actually named, and adjust the regex to your data):

| rex field=_raw max_match=0 "(?<ipAddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})(?<connectionTime>.*?)(?=\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|$)"

This creates two new multivalue fields per event: one with ALL the IP addresses and one with ALL the time values. To separate the multivalue fields, you can use mvexpand, which creates a duplicate event for each value. Once you have captured a time field, you might then need to use strptime to convert it into epoch time so that Splunk can put the events in time order for you:

| eval _time=strptime(connectionTime, "%Y-%m-%d %H:%M:%S.%N")

As you can see, fixing this at search time is very complicated: the search is poorly optimized and there are many points of failure. The best way to proceed, especially if you want to use this data long term, is to set up proper event breaking. If your Splunk team needs any help with the event breaking, I'd be happy to walk them through it.
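To see what the rex max_match=0 plus mvexpand pattern does, here is a minimal Python sketch of the same idea: find every (IP, timestamp) pair in one raw event instead of stopping at the first match, then emit one record per pair. The sample log line and field names are made up for illustration; your actual data will differ. Note that Python spells named groups `(?P<name>...)` where Splunk's PCRE-style rex uses `(?<name>...)`.

```python
import re

# Hypothetical raw event holding several connection records in one event
# (the situation that proper event breaking would normally prevent).
raw = (
    "10.0.0.1 2024-01-15 08:30:00.123 "
    "10.0.0.2 2024-01-15 08:31:05.456 "
    "10.0.0.3 2024-01-15 08:32:10.789"
)

# Equivalent of `rex max_match=0`: capture every pair, not just the first.
pattern = re.compile(
    r"(?P<ipAddress>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\s+"
    r"(?P<connectionTime>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+)"
)
matches = [m.groupdict() for m in pattern.finditer(raw)]

# Equivalent of `mvexpand`: one row per extracted value pair.
for row in matches:
    print(row["ipAddress"], row["connectionTime"])
```

Running this against the sample line yields three separate (ipAddress, connectionTime) rows, which is exactly the shape mvexpand gives you in Splunk.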
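And here is a small sketch of what the strptime conversion is doing, assuming the sample timestamp format above. Python's equivalent of Splunk's subsecond specifier `%N` is `%f`; the timestamp value and the UTC assumption are illustrative only.

```python
from datetime import datetime, timezone

# Splunk's strptime(connectionTime, "%Y-%m-%d %H:%M:%S.%N") turns a
# timestamp string into epoch seconds so events can be time-ordered.
connection_time = "2024-01-15 08:30:00.123"  # example value, not real data

# Python uses %f for fractional seconds where Splunk uses %N.
dt = datetime.strptime(connection_time, "%Y-%m-%d %H:%M:%S.%f")
epoch = dt.replace(tzinfo=timezone.utc).timestamp()  # assume UTC for the demo
print(epoch)  # -> 1705307400.123
```

Once every event has a numeric epoch value in _time, Splunk can sort and bucket the events correctly.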