Thanks all, but no luck so far in either case, unfortunately.
I did, however, notice that the health check flagged problems in the 'Event-processing issues' section for this sourcetype: events exceeding the maximum length in bytes, and the maximum number of lines per event also being hit. The warning reads:
Some recently ingested events are triggering event-processing warnings and indicate the presence of one or more of these scenarios:
1. Lines in the event are too long, exceeding props.conf / TRUNCATE
2. There are too many lines per event, exceeding props.conf / MAX_EVENTS
3. The extraction of event time stamps was partially or completely unsuccessful
These event-processing issues can have a negative impact on the performance of data ingestion.
Check the events that are triggering these warnings. Adjust event-processing settings as needed to ensure their proper ingestion.
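If I understand that advice correctly, the specific warnings behind this should be visible in _internal. A rough search sketch (the component names are my assumption and may differ by version):

index=_internal sourcetype=splunkd log_level=WARN (component=LineBreakingProcessor OR component=AggregatorMiningProcessor OR component=DateParserVerbose)

As far as I know, LineBreakingProcessor covers the TRUNCATE warnings, AggregatorMiningProcessor the MAX_EVENTS ones, and DateParserVerbose the timestamp issues.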
So I added TRUNCATE and MAX_EVENTS to the stanza, resulting in the following:
TRUNCATE = 15000
MAX_EVENTS = 300
LINE_BREAKER = ([\r\n]+)EventDate
SHOULD_LINEMERGE = false
TIME_PREFIX = EventDate\s*:\s*
TIME_FORMAT = %Y-%m-%d %H:%M
MAX_TIMESTAMP_LOOKAHEAD = 16
Note I also changed the sourcetype name just to be sure there was no issue there.
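One thing I still want to double-check is that the renamed stanza is actually being picked up on the instance that first parses the data (LINE_BREAKER is an index-time setting, so it has to live on the indexer or heavy forwarder, and it only affects newly indexed events). btool should show this, with your_sourcetype standing in for the real name:

$SPLUNK_HOME/bin/splunk btool props list your_sourcetype --debug

The --debug flag shows which props.conf file each setting is coming from.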
Now the length and count warnings are no longer showing up, but the data is still coming in as one big event.
I'm wondering if what I see on the screen and what Splunk is actually indexing are two different things. I'm also wondering if I should modify my script to produce a format that's more digestible to Splunk, perhaps XML.
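As a first check of that theory, something like the following should show whether the indexed _raw actually contains the newlines that LINE_BREAKER is looking for (a rough sketch; your_index and your_sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| head 1
| eval has_newlines=if(match(_raw, "[\r\n]"), "yes", "no"), eventdate_count=mvcount(split(_raw, "EventDate"))
| table has_newlines eventdate_count

If has_newlines comes back "no", the ([\r\n]+) part of LINE_BREAKER can never match, and changing the script's output (one record per line, or a structured format like XML) would probably be the way to go. If eventdate_count is much larger than 2, multiple records are being glued into a single event.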