Re: [xmlblaster] Cluster peers
Michael Lum wrote:
I couldn't figure this out from the reference book and didn't find
anything on the mailing list --
How can I set up two nodes in a cluster to be peers? I'd like to put two
machines behind a load balancer, so that any client could publish a
message, but it might end up on one of the two machines depending on
where the load balancer sends it. Also, subscribers would connect via
the load balancer, so any subscriber might end up on one of the two
machines. But, I need any message published to either of the
two machines to be received by all subscribers, so if a client
connects and publishes to server 'A', subscribers on server 'A'
AND on server 'B' get copies of the message.
Finally, if possible, I'd like these two nodes to be slaves to a pair of
masters, so that both the two slaves that are peered AND the two masters
(also peered) are getting copies of messages.
These are nice features which are not yet available,
but xmlBlaster should definitely support such scenarios
in the near future.
1. load balancer
I could imagine that the load balancer initially chooses
the IP of one of the xmlBlaster slaves and the client sticks
to it until it is finished or until the connected xmlBlaster
node fails.
If the load balancer is to intercept each publish call
you need some sort of proxy running on the load balancer.
In the simplest case this is an xmlBlaster slave itself
or a slightly extended SOCKET protocol plugin ...
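To make the proxy idea concrete, below is a minimal sketch of a
sticky pass-through proxy in plain Java: every incoming client is
pinned to one backend, chosen round-robin at connect time. This is
generic socket code, not an xmlBlaster SOCKET protocol plugin; the
backend host names and the port numbers are placeholders only.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class StickyProxy {
   // Placeholder backends; 7607 is used here as an example port.
   private static final String[] BACKENDS = { "slaveA:7607", "slaveB:7607" };
   private static int next = 0;

   public static void main(String[] args) throws Exception {
      ServerSocket server = new ServerSocket(7607);
      while (true) {
         Socket client = server.accept();
         // Round-robin choice; the whole session then sticks to it.
         String[] hp = BACKENDS[next++ % BACKENDS.length].split(":");
         Socket backend = new Socket(hp[0], Integer.parseInt(hp[1]));
         pump(client, backend); // client -> backend
         pump(backend, client); // backend -> client
      }
   }

   // Copy bytes one-way on a daemon thread until one side closes.
   private static void pump(final Socket in, final Socket out) {
      Thread t = new Thread(new Runnable() {
         public void run() {
            try {
               InputStream is = in.getInputStream();
               OutputStream os = out.getOutputStream();
               byte[] buf = new byte[8192];
               int n;
               while ((n = is.read(buf)) != -1) {
                  os.write(buf, 0, n);
                  os.flush();
               }
            } catch (Exception e) {
               // one side went away, let the session die
            }
         }
      });
      t.setDaemon(true);
      t.start();
   }
}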
2. master/slave operation
This is available already.
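For completeness, here is roughly what a subscriber looks like,
modeled on the HelloWorld demos shipped with xmlBlaster (the login
name, the topic oid 'stockQuote' and the command-line selection of
the target node are illustration only; please check the demos of
your release for the exact API):

import org.xmlBlaster.client.I_Callback;
import org.xmlBlaster.client.I_XmlBlasterAccess;
import org.xmlBlaster.client.key.UpdateKey;
import org.xmlBlaster.client.qos.ConnectQos;
import org.xmlBlaster.client.qos.UpdateQos;
import org.xmlBlaster.util.Global;

public class SlaveSubscriber {
   public static void main(String[] args) throws Exception {
      // Command-line properties choose the node to attach to,
      // see the protocol requirement docs for the exact flags.
      Global glob = new Global(args);
      I_XmlBlasterAccess con = glob.getXmlBlasterAccess();

      con.connect(new ConnectQos(glob, "subscriber/1", "secret"),
         new I_Callback() {
            public String update(String cbSessionId, UpdateKey updateKey,
                                 byte[] content, UpdateQos updateQos) {
               // Called for every message the cluster routes to us,
               // regardless of which node the publisher used.
               System.out.println("Received '" + updateKey.getOid()
                     + "': " + new String(content));
               return "";
            }
         });

      con.subscribe("<key oid='stockQuote'/>", "<qos/>");
      Thread.sleep(Long.MAX_VALUE); // keep the callback alive
   }
}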
3. master mirroring
I believe you need this feature of having two masters
to have high availability (HA) support.
This is currently not supported directly by xmlBlaster,
but we are not far from it.
Probably it can be handled by a master/slave operation
of the two backends and by a dynamic reconfiguration
from slave to master when the original master fails.
Here the typical cluster reconfiguration logic applies:
the nodes need a heartbeat and a solid decision about which
of the remaining sub-clusters takes over control in case
of problems.
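As a rough illustration (generic Java, not xmlBlaster code), such
a watchdog could look like the sketch below; the master address,
the 2 second connect timeout and the threshold of three missed
pings are made-up values:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class FailoverWatchdog {
   public static void main(String[] args) throws InterruptedException {
      final String masterHost = "master.example.com"; // placeholder
      final int masterPort = 3412;                    // placeholder
      int missed = 0;

      while (true) {
         Socket s = new Socket();
         try {
            s.connect(new InetSocketAddress(masterHost, masterPort), 2000);
            missed = 0; // master answered, all is well
         } catch (IOException e) {
            missed++;   // heartbeat lost
         } finally {
            try { s.close(); } catch (IOException ignored) {}
         }
         if (missed >= 3) {
            // Real cluster logic needs a quorum decision here, so
            // that two surviving sub-clusters never both take over.
            System.out.println("Master unreachable, promoting slave ...");
            // reconfigure this node from slave to master here
            break;
         }
         Thread.sleep(1000); // heartbeat interval
      }
   }
}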
-> Depending on your exact use case, the existing
   master/slave setup could be sufficient.
Sorry, no out-of-the-box solution yet.