I have a field called file_content on a source type.
This field contains an entire CSV.
In other words, every event has a field (file_content) with a whole CSV inside it, and every event is an email. Normal field extraction isn't an option, because "file_content" is really hard to find inside the raw data.
I used a rex query to extract the data, but it's slow, and I receive one CSV per hour, every day. So I wonder if there is any automation or a better way to do this?
My regex follows:
| rex field=file_content "(?P<ContactId>[^\s,]*),(?P<Customernumber>[^\s,]*),(?P<AfterContactWorkDuration>[^\s,]*),(?P<AfterContactWorkEndTimestamp>[^,]*),(?P<AfterContactWorkStartTimestamp>[^,]*),(?P<AgentInteractionDuration>[^\s,]*),(?P<ConnectedToAgentTimestamp>[^,]*),(?P<CustomerHoldDuration>[^\s,]*),(?P<Hierarchygroups_Level1_GroupName>[^\s,]*),(?P<Hierarchygroups_Level2_GroupName>[^\s,]*),(?P<Hierarchygroups_Level3_GroupName>[^\s,]*),(?P<LongestHoldDuration>[^\s,]*),(?P<NumberOfHolds>[^\s,]*),(?P<Routingprofile>[^\s,]*),(?P<Agent>[^\s,]*),(?P<AgentConnectionAttempts>[^\s,]*),(?P<ConnectedToSystemTimestamp>[^,]*),(?P<DisconnectTimestamp>[^,]*),(?P<InitiationMethod>[^\s,]*),(?P<InitiationTimestamp>[^,]*),(?P<LastUpdateTimestamp>[^,]*),(?P<NextContactId>[^\s,]*),(?P<PreviousContactId>[^\s,]*),(?P<DequeueTimestamp>[^,]*),(?P<Duration>[^\s,]*),(?P<EnqueueTimestamp>[^,]*),(?P<Name>[^\s,]*),(?P<TransferCompletedTimestamp>[^,]*),(?P<HandleTime>[^\s,]*),(?P<TicketNumber>((\"[^\"]*\")+|[^\s,]*)),(?P<Account>[^\s,]*),(?P<AccountName>[^\s,]*),(?P<Country>[^\s,]*),(?P<Language>[^\s,]*),(?P<Site>[^\s,]*),(?P<WrapCode>[^\s,]*)"
And here is an example of how the data should look (in CSV):
ContactId,Customernumber,AfterContactWorkDuration,AfterContactWorkEndTimestamp,AfterContactWorkStartTimestamp,AgentInteractionDuration,ConnectedToAgentTimestamp,CustomerHoldDuration,Hierarchygroups_Level1_GroupName,Hierarchygroups_Level2_GroupName,Hierarchygroups_Level3_GroupName,LongestHoldDuration,NumberOfHolds,Routingprofile,Agent,AgentConnectionAttempts,ConnectedToSystemTimestamp,DisconnectTimestamp,InitiationMethod,InitiationTimestamp,LastUpdateTimestamp,NextContactId,PreviousContactId,DequeueTimestamp,Duration,EnqueueTimestamp,Name,TransferCompletedTimestamp,HandleTime,TicketNumber,Account,AccountName,Country,Language,Site,WrapCode
aaaa-xxxxxx,123456789,90,29/06/2021 01:00,29/06/2021 01:00,111,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,A123,xxx,xxx,country_y,language,type_w,xxxx
bbbb-xxxxxx,987654321,90,29/06/2021 01:00,29/06/2021 01:00,111+P4,29/06/2021 01:00,0,country1,xx,yy,90,90,language,dummy,1,29/06/2021 01:00,29/06/2021 01:00,type_x,29/06/2021 01:00,29/06/2021 01:00,,,29/06/2021 01:00,11,29/06/2021 01:00,type_y,29/06/2021 01:00,201,"""A123,B123""",xxx,xxx,country_y,language,type_w,xxxx
For example, you could run a report every day and save the outcome to a lookup, but that wouldn't work here because it would be too much data for a lookup. I looked around and found some people talking about a summary index. Do you think that would be a good option for me?
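A summary index is a reasonable fit for this volume. As a rough sketch (the index name summary_csv, the index/sourcetype in the base search, and the line-splitting pattern are assumptions — adjust them to your environment, and substitute your full per-row rex where the expression is shortened), you would schedule a search to run once per hour after each CSV arrives, do the extraction a single time, and write the result rows out with the collect command:

index=mail sourcetype=your_sourcetype earliest=-1h@h latest=@h
| rex field=file_content max_match=0 "(?<csv_line>[^\r\n]+)"
| mvexpand csv_line
| rex field=csv_line "(?P<ContactId>[^\s,]*),(?P<Customernumber>[^\s,]*),..."
| fields - file_content csv_line
| collect index=summary_csv

Reports and dashboards then search index=summary_csv directly, so the expensive rex runs once per incoming CSV instead of once per user search.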
It sounds like you are trying to extract a multi-value field, so you might try using max_match:
| rex field=file_content max_match=0 <expression>
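With max_match=0, rex captures every match into a multi-value field, so you can first split the CSV blob into individual lines and then apply your existing per-row expression to each one. A sketch under those assumptions (the line-splitting pattern assumes rows are newline-delimited; replace the shortened rex with your full expression):

| rex field=file_content max_match=0 "(?<csv_line>[^\r\n]+)"
| mvexpand csv_line
| where NOT match(csv_line, "^ContactId,")
| rex field=csv_line "(?P<ContactId>[^\s,]*),(?P<Customernumber>[^\s,]*),..."

The where step drops the CSV header row, and mvexpand turns each line into its own result so the per-row rex applies cleanly.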
Documentation: