Hello
I am trying to understand how Splunk works on a workstation after a network disconnect.
Is it the same process as usual, where incoming data goes first to the parsingQueue and from there to the parsing pipeline for event processing, then moves to the indexQueue and on to the indexing pipeline, which builds the index? Or is it a different queue process?
If, for example, I disconnect the computer for one month, is it possible to have a slowness issue due to the indexing process?
thanks in advance
If you have a forwarder running on a workstation and that workstation is disconnected, the forwarder's queues will start filling up. Depending on the queue sizes you configured and the event rate, it will take a certain amount of time before the queues are full. Once full, the forwarder will stop receiving (reading) new logs.
If the forwarder (or the workstation it runs on) is restarted in the meantime, you may lose data. The same applies when log files rotate while the forwarder's queues were full and inputs were blocked.
Once the forwarder comes online again, it will empty its queues and then start reading again. By default, a Universal Forwarder is configured with a 256KBps throughput limit, so it shouldn't go all out trying to catch up and cause performance issues. But if you removed that limit, it may get rather busy catching up on reading and forwarding all the events (depending on how long it was offline and how much data it has to process).
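To put that default limit in perspective: at 256KBps a forwarder can move roughly 256 KB/s × 86,400 s ≈ 21 GB per day. So unless the workstation produces far more log data than that, even a month-long backlog should drain within a day or so. (Rough back-of-the-envelope math, assuming the throughput limit is the only constraint.)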
perfect
thanks!
ok thanks
Last question:
How and where do you configure the queue sizes and the event rate?
In-memory input queue, in inputs.conf (specific to each input stanza):
queueSize =
Persistent (on-disk) queue, in inputs.conf:
persistentQueueSize =
Output queue, in outputs.conf:
maxQueueSize =
Throughput limit, in limits.conf:
[thruput]
maxKBps =
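Putting those together, a minimal sketch of the forwarder-side files might look like this. The stanza names, port, server address, and sizes below are illustrative values only, not recommendations. Also note that persistent queues apply to network, scripted, and similar inputs, not to file monitor inputs (file monitoring tracks its position via checkpoints instead).

inputs.conf:
[tcp://9999]
# in-memory queue for this input, then spill to disk
queueSize = 1MB
persistentQueueSize = 50MB

outputs.conf:
[tcpout:my_indexers]
server = indexer1.example.com:9997
# output queue that buffers events when indexers are unreachable
maxQueueSize = 10MB

limits.conf:
[thruput]
# 256 is the Universal Forwarder default; 0 removes the limit
maxKBps = 512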
Can you provide a bit more info on your setup? Are you talking about a single instance running on your workstation, collecting and indexing locally? Or are you referring to a distributed setup where you have a forwarder installed on one or more workstations that send to indexers?
Hi,
I'm referring to a distributed setup where I have a forwarder installed on all workstations, which send to indexers.