Re: [xmlblaster] locking bug
Hi,
I have looked at your thread dump, but I couldn't come to a conclusion;
it is not possible to see the thread that is holding the lock ...
Could you please also send me the log files covering the time when the
threads are being consumed
(preferably with -logging/org.xmlBlaster.engine FINE,
but this will blow up your files)?
Could you please add the keyword 'volatile' in TopicAccessor.java:376:
private volatile TopicHandler topicHandler;
It is just a very blind guess.
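To illustrate why that might help (just a sketch; only the field
declaration above is from the actual file, the surrounding class body
and accessor methods are made up):

   // If one thread assigns topicHandler while another thread reads it
   // without holding the container's ReentrantLock, the reader may see
   // a stale value unless the field is volatile.
   class TopicContainer {
      private volatile TopicHandler topicHandler;

      TopicHandler getTopicHandler() {
         return this.topicHandler;   // volatile read sees the latest write
      }

      void clear() {
         this.topicHandler = null;   // volatile write is immediately visible
      }
   }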
Could you grep your log files for the line
log.warning("Trying again to get a TopicHandler ...
to see if it is logged anywhere?
Is it always the same topicId which fails?
Do you have many other topics which work fine?
Thanks
Marcel
Marcel Ruff wrote:
Póka Balázs wrote:
Hello to all XmlBlaster developers!
I may have discovered a locking problem in
org.xmlBlaster.engine.TopicAccessor. The symptom is xmlBlaster running
out of memory. I took a thread dump the last time this happened
(and it happens a lot), and innumerable threads were stuck here:
"XmlBlaster.HandleClient" prio=10 tid=0x00002aaac12ce800 nid=0xf62 in
Object.wait() [0x00002aaae417e000..0x00002aaae417eca0]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:485)
at
edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:199)
- locked <0x00002aaab42ec888> (a
edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock$NonfairSync)
at
edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:481)
at
org.xmlBlaster.engine.TopicAccessor$TopicContainer.lock(TopicAccessor.java:403)
at
org.xmlBlaster.engine.TopicAccessor.findOrCreate(TopicAccessor.java:179)
at
org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1677)
at
org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1405)
at
org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1393)
at
org.xmlBlaster.engine.XmlBlasterImpl.publish(XmlBlasterImpl.java:180)
at
org.xmlBlaster.engine.XmlBlasterImpl.publishArr(XmlBlasterImpl.java:219)
at
org.xmlBlaster.util.protocol.RequestReplyExecutor.receiveReply(RequestReplyExecutor.java:447)
at
org.xmlBlaster.protocol.socket.HandleClient.handleMessage(HandleClient.java:194)
at
org.xmlBlaster.protocol.socket.HandleClient$1.run(HandleClient.java:369)
at
edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
at
edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
at java.lang.Thread.run(Thread.java:619)
There was no deadlock. It seems that some thread locked the
ReentrantLock instance contained in
org.xmlBlaster.engine.TopicAccessor$TopicContainer and never unlocked it.
Since the server did not respond from their point of view, the clients
subsequently disconnected and reconnected under new session ids, which
is where the huge number of stale threads and clients came from.
I noticed that the implementation of
org.xmlBlaster.engine.TopicAccessor.release(...) seems problematic:
under erroneous conditions it may return _without unlocking_, without
even logging a message at INFO level or higher! Maybe the tc.unlock()
call needs to be placed in a finally block. There may also be other
places that acquire this lock and never release it.
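Roughly the pattern I have in mind (just a sketch, not the real
TopicAccessor code; the method signature and the log message are made up):

   // Whatever happens after the container has been obtained, the unlock
   // must run, and a failed path should at least show up in the logs.
   public void release(TopicContainer tc) {
      if (tc == null) {
         log.warning("release() called without a TopicContainer");
         return;   // nothing to unlock, but now it is visible in the logs
      }
      try {
         // ... bookkeeping that might throw a RuntimeException ...
      }
      finally {
         tc.unlock();   // runs on the normal and on the exceptional path
      }
   }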
If the map throws a RuntimeException, yes. But this should
be logged somewhere up the stack.
Could you please send us the complete stack trace?
Since when did you notice this behaviour?
Which version of xmlBlaster do you use?
Thanks
Marcel
Thanks for your help.
regards,
Balázs Póka
--
Marcel Ruff
http://www.xmlBlaster.org
http://watchee.net
Phone: +49 7551 309371