Hello Splunkers,
I configured my HF to pull data from an Event Hub. All good, I'm receiving logs, but too much (around 130 GB/day), and my HF often has trouble parsing and forwarding the logs during the peak of data.
I wanted to use an additional HF in order to "share" the work, but I don't know how to proceed. If I configure the Add-On on this new HF the same way I did for the first, I will just end up with duplicated data...
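For reference, the input on the current HF looks something like this sketch (values anonymized and illustrative; stanza and parameter names follow the Splunk Add-on for Microsoft Cloud Services):

```
# inputs.conf on the existing HF (illustrative values)
[mscs_azure_event_hub://prod_eventhub]
account = my_azure_account
event_hub_namespace = my-namespace.servicebus.windows.net
event_hub_name = my-event-hub
consumer_group = $Default
interval = 300
index = azure_logs
```

Configuring the exact same stanza on a second HF would make both forwarders read the whole stream, hence the duplicates.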
Would you have any idea?
Thanks,
GaetanVP
Hi @GaetanVP,
you have two solutions; they are both efficient.
I'd use the second one, monitoring whether the network connection is able to accept the traffic: if yes, you have solved the issue; if not, you can add a second HF.
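If you do add a second HF, the usual way to avoid duplicates is to give each HF a disjoint subset of the inputs rather than the same ones. A sketch, assuming you pull from more than one event hub (hub and stanza names are hypothetical):

```
# inputs.conf on HF1: pulls only hub-a
[mscs_azure_event_hub://hub_a]
account = my_azure_account
event_hub_namespace = my-namespace.servicebus.windows.net
event_hub_name = hub-a
consumer_group = $Default
index = azure_logs

# inputs.conf on HF2: pulls only hub-b
[mscs_azure_event_hub://hub_b]
account = my_azure_account
event_hub_namespace = my-namespace.servicebus.windows.net
event_hub_name = hub-b
consumer_group = $Default
index = azure_logs
```

Since each event hub is read by exactly one HF, nothing is ingested twice.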
Ciao.
Giuseppe
Hello @gcusello thanks for your answer, it makes sense.
However, do we agree that both solutions still represent a single point of failure? If the HF (or one of the two HFs) goes down, I will miss all the logs (or at least some of them).
Thanks,
GaetanVP
Hi @GaetanVP,
it would be a Single Point of Failure if you were using the HF to receive data, but you are using this HF to pull data from the cloud, so it isn't mandatory for it to be redundant.
You could also have a cold copy of it ready to be turned on.
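A minimal failover sketch for that cold copy, assuming the standby is a clone of the primary with the event hub input disabled, and assuming the add-on keeps file-based checkpoints under $SPLUNK_HOME/var/lib/splunk/modinputs (verify where your version actually stores them):

```
# On the standby HF, once the primary is down:

# 1. Restore the latest checkpoint copy taken from the primary
rsync -a backup-host:/opt/splunk-backup/modinputs/ \
    /opt/splunk/var/lib/splunk/modinputs/

# 2. Enable the input that was shipped disabled on the cold copy
sed -i 's/^disabled = 1/disabled = 0/' \
    /opt/splunk/etc/apps/Splunk_TA_microsoft-cloud-services/local/inputs.conf

# 3. Restart so the modular input resumes pulling from the last checkpoint
/opt/splunk/bin/splunk restart
```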
Ciao.
Giuseppe
Currently the issue with an HA HF pair is replicating checkpoints to avoid duplicate data. As far as I know there is currently no official way to replicate checkpoints between several HFs. Of course you could use some synchronization method between the master and secondary nodes to prepare for a failure situation, but quite probably you will get at least some duplicated events when a failover happens.
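To illustrate what such a synchronization could look like (a sketch only, assuming file-based checkpoints under $SPLUNK_HOME/var/lib/splunk/modinputs and hypothetical host names):

```
# cron entry on the secondary HF: copy the active HF's checkpoints every 5 minutes
*/5 * * * * rsync -a --delete splunk@active-hf:/opt/splunk/var/lib/splunk/modinputs/ /opt/splunk/var/lib/splunk/modinputs/
```

The gap between the last sync and the moment of failover is exactly where the duplicated events would come from.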
Hi
I haven't tried this myself, but could you use several consumer groups and then configure each HF to pull only one or some of those? https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-features
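In inputs.conf terms the idea would look something like the sketch below (stanza and parameter names as in the Microsoft Cloud Services add-on, consumer group names hypothetical). Whether the load actually splits this way, or each group just receives its own full copy of the stream, is exactly what would need testing:

```
# inputs.conf on HF1
[mscs_azure_event_hub://eventhub_cg1]
account = my_azure_account
event_hub_namespace = my-namespace.servicebus.windows.net
event_hub_name = my-event-hub
consumer_group = splunk-hf1
index = azure_logs

# inputs.conf on HF2
[mscs_azure_event_hub://eventhub_cg2]
account = my_azure_account
event_hub_namespace = my-namespace.servicebus.windows.net
event_hub_name = my-event-hub
consumer_group = splunk-hf2
index = azure_logs
```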
r. Ismo