I am trying to do some performance/stress testing using eventgen and I'm running into a few problems.
If I use the splunkd sample/tutorial events, the data flows into Splunk fairly normally.
If I use my own data, exported with this search:
index=main | reverse | fields index, source, sourcetype, _raw
I can get eventgen to write the events to a file, but not to the Splunk management port.
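For reference, a couple of rows from the exported CSV sample look roughly like this (the event text and timestamps are just placeholders; the columns match the fields in the search above):

index,source,sourcetype,_raw
main,eventgen,eventgen,"2019-01-15 10:23:45,123 INFO sample event one"
main,eventgen,eventgen,"2019-01-15 10:23:46,456 INFO sample event two"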
The other thing I've noticed with both configs is that if I tell eventgen to generate 1 GB of data per day, it creates all of that minute's events in the first couple of seconds and then sits idle until the next minute starts. Is there any way to distribute the event load over the entire minute?
Here is my eventgen.conf:
[small.events]
mode = sample
sampletype = csv
timeMultiple = 1
perDayVolume = 1
autotimestamp = true
threading = process
#queuing = zeromq
# (the eventgen Performance doc says to use this for higher performance, but eventgen reports that it isn't a valid setting when I enable it)
index = main
host = eventgen
source = eventgen
sourcetype = eventgen
outputMode = splunkstream
splunkHost = localhost
splunkPort = 8089
splunkUser = admin
splunkPass = *****
splunkMethod = https
token.0.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}
token.0.replacementType = timestamp
token.0.replacement = %Y-%m-%d %H:%M:%S,%f
token.1.token = \d{2}-\d{2}-\d{4} \d{2}:\d{2}:\d{2}.\d{3}
token.1.replacementType = timestamp
token.1.replacement = %m-%d-%Y %H:%M:%S.%f
token.2.token = \d{2}/\w{3}/\d{4}:\d{2}:\d{2}\:\d{2}.\d{3}
token.2.replacementType = timestamp
token.2.replacement = %d/%b/%Y:%H:%M:%S.%f
token.3.token = \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}
token.3.replacementType = timestamp
token.3.replacement = %Y-%m-%d %H:%M:%S
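For comparison, when I swap the output section for file output, the events are generated without any problem. That variant looks roughly like this (the path is just an example, and I'm going from memory on the fileName setting):

outputMode = file
fileName = /tmp/eventgen_test.log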