RE: [xmlblaster] Clustering or simply server to server connects
Given the time and resources available, I probably won't be able to add what I
would like to have :) Thanks for the input, Marcel.
I'm thinking...
Suppose each node has the same topics, but named with the node name as a
prefix. Then every node acts as a slave for the other nodes' topics, with a
filter or plug-in that re-publishes them as if they had been published on the
local topic. What would that break? And is that doable with the plug-in or
filter (I think that's what I read the feature was called)?
I know it would mean queues could be duplicated on multiple nodes, but I could
be OK with that... I would also have to handle, in the consuming
application(s), the case where a node is not there...
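
To make the idea concrete, here is a rough, untested sketch of such a bridge
client. The class names follow the xmlBlaster Java client API as I understand
it, but the -bootstrapHostname property, the XPath trick on the oid prefix
and the node/host names are guesses on my part:

import org.xmlBlaster.client.I_Callback;
import org.xmlBlaster.client.I_XmlBlasterAccess;
import org.xmlBlaster.client.key.UpdateKey;
import org.xmlBlaster.client.qos.ConnectQos;
import org.xmlBlaster.client.qos.UpdateQos;
import org.xmlBlaster.util.Global;
import org.xmlBlaster.util.MsgUnit;

/**
 * Bridge client: subscribes on a remote node to every topic whose oid
 * starts with that node's prefix and re-publishes the messages on the
 * local node under the unprefixed oid. Untested sketch.
 */
public class TopicBridge implements I_Callback {
   private final I_XmlBlasterAccess localCon;
   private final I_XmlBlasterAccess remoteCon;
   private final String remotePrefix;

   public TopicBridge(String remoteNode, String remoteHost, String localHost) throws Exception {
      this.remotePrefix = remoteNode + ".";

      // Connection to the local node (where we re-publish).
      // The property name is an assumption; use whatever selects the server address.
      Global localGlob = new Global(new String[]{"-bootstrapHostname", localHost});
      localCon = localGlob.getXmlBlasterAccess();
      localCon.connect(new ConnectQos(localGlob), this);

      // Connection to the remote node (where we subscribe).
      Global remoteGlob = new Global(new String[]{"-bootstrapHostname", remoteHost});
      remoteCon = remoteGlob.getXmlBlasterAccess();
      remoteCon.connect(new ConnectQos(remoteGlob), this);

      // XPATH subscription on every oid starting with e.g. "nodeB."
      remoteCon.subscribe("<key oid='' queryType='XPATH'>" +
         "//key[starts-with(@oid,'" + remotePrefix + "')]</key>", "<qos/>");
   }

   // Called for every update from the remote node: strip the prefix and
   // publish the message locally as if it had originated here.
   public String update(String cbSessionId, UpdateKey updateKey,
                        byte[] content, UpdateQos updateQos) {
      try {
         String oid = updateKey.getOid();
         if (oid.startsWith(remotePrefix)) {
            String localOid = oid.substring(remotePrefix.length());
            localCon.publish(new MsgUnit("<key oid='" + localOid + "'/>", content, "<qos/>"));
         }
      }
      catch (Exception e) {
         e.printStackTrace();
      }
      return "";
   }

   public static void main(String[] args) throws Exception {
      new TopicBridge("nodeB", "hostB", "localhost");
      Thread.sleep(Long.MAX_VALUE);   // keep the bridge alive
   }
}

One such bridge per remote node on each server would re-publish everything
locally, which is roughly the mirroring I'm after.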
Thoughts, anyone?
> -----Original Message-----
> From: Marcel Ruff [SMTP:mr at marcelruff.info]
> Sent: Friday, April 18, 2003 1:03 PM
> To: xmlblaster at server.xmlblaster.org
> Subject: Re: [xmlblaster] Clustering or simply server to server
> connects
>
> Madere, Colin wrote:
>
> >I'm getting a little lost in the documentation, so I decided to post my
> >question:
> >
> >I'm interested in setting up multiple servers which pass all or a
> >designated subset of messages to each other. I guess I would set up the
> >"Domain" idea as I saw in the docs; however, the choice of "masters of
> >topics" will be arbitrary, since there won't be an application-level
> >requirement of a certain location being the master of most of the topics.
> >A few will have that property, but most will not.
> >
> >In that same vein, if I arbitrarily choose a master for a topic (or domain
> >of topics) and that server is disconnected from the rest, won't this mean
> >that messages published to non-masters will not be propagated to the
> >others, since only the master propagates topic publishes to the "slaves"?
> >I kind of want to avoid that... so is that one of the clustering features
> >that has not yet been implemented?
> >
> You got the point.
>
> The current clustering support was a first shot; it supports exactly
> what I needed in a commercial project.
>
> Having mirrored cluster nodes (for fail over) is not implemented yet.
>
> The current clustering is a good base to add such features, but it will
> still take time to implement, document and test. It involves:
>
> o Only messages of stratum level 0 are mirrored
>
> o A cluster setup may have any number of mirror nodes
>
> o Changing mirror nodes to hot standby and adding mirror
> nodes in hot operation needs to be supported (administrative control).
>
> o If a cluster collective breaks into pieces (no heartbeat), we need
> a semaphore (depending on the size of the sub-collectives) to
> decide which sub-collective takes over the leadership
>
> o The session information needs to be mirrored as well
> (time to live, subscribes, callback queue states for persistent
> messages ...)
>
> o The client library needs to be extended to switch to other mirror
> nodes on failure (this is reused in the cluster nodes themselves
> when connecting to other nodes)
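>
> To illustrate that last point, the client side change would be roughly
> along these lines (untested sketch; the -bootstrapHostname property is an
> assumption and depends on the protocol you use):
>
> import org.xmlBlaster.client.I_XmlBlasterAccess;
> import org.xmlBlaster.client.qos.ConnectQos;
> import org.xmlBlaster.util.Global;
>
> /** Rough idea of client-side fail over: try the mirror nodes in order. */
> public class FailoverConnect {
>    public static I_XmlBlasterAccess connectToAny(String[] hosts) throws Exception {
>       for (int i = 0; i < hosts.length; i++) {
>          try {
>             // Property name is a guess; use whatever selects the server address.
>             Global glob = new Global(new String[]{"-bootstrapHostname", hosts[i]});
>             I_XmlBlasterAccess con = glob.getXmlBlasterAccess();
>             con.connect(new ConnectQos(glob), null);  // no callback in this sketch
>             return con;                               // first reachable node wins
>          }
>          catch (Exception e) {
>             System.out.println("Node " + hosts[i] + " not reachable, trying next ...");
>          }
>       }
>       throw new Exception("No mirror node is reachable");
>    }
> }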
>
> As soon as somebody has some Euros/Kroner left in his commercial
> project and needs it, we could add these features :-)
>
> >Follow-up questions, it seems, would be:
> >
> >1) Is clustering implemented in a functionally complete way for sharing
> >topics amongst multiple instances in a non-load-balancing structure?
> >
> As mentioned, no mirroring is implemented.
> Probably this can partly be simulated with the current configuration
> possibilities; in the directory
> xmlBlaster/demo/javaclients/cluster
> there are examples to play with (just start the servers in different
> xterms/MS-DOS boxes).
>
> >2) Is anyone successfully using the clustering setup that is currently
> >available in xmlBlaster?
> >
> I do; it is a commercial clustering setup with a master and many slaves
> for real-time radar/GPS data from ships.
>
> regards,
>
> Marcel
>
> >
> >
> >Thanks in advance for your input.
> >
> >Colin
> >