[xmlblaster] locking bug
- To: xmlblaster at server.xmlBlaster.org
- Subject: [xmlblaster] locking bug
- From: "Póka Balázs" <p.balazs at gmail.com>
- Date: Sat, 18 Oct 2008 14:24:51 +0200
- Reply-to: xmlblaster at server.xmlBlaster.org
- Sender: owner-xmlblaster at server.xmlBlaster.org
Hello to all XmlBlaster developers!
I may have discovered a locking problem in
org.xmlBlaster.engine.TopicAccessor. The symptom is xmlBlaster running
out of memory. I made a stack dump the last time this happened
(and it happens a lot), and countless threads were stuck here:
"XmlBlaster.HandleClient" prio=10 tid=0x00002aaac12ce800 nid=0xf62 in
Object.wait() [0x00002aaae417e000..0x00002aaae417eca0]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:485)
at edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:199)
- locked <0x00002aaab42ec888> (a
edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock$NonfairSync)
at edu.emory.mathcs.backport.java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:481)
at org.xmlBlaster.engine.TopicAccessor$TopicContainer.lock(TopicAccessor.java:403)
at org.xmlBlaster.engine.TopicAccessor.findOrCreate(TopicAccessor.java:179)
at org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1677)
at org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1405)
at org.xmlBlaster.engine.RequestBroker.publish(RequestBroker.java:1393)
at org.xmlBlaster.engine.XmlBlasterImpl.publish(XmlBlasterImpl.java:180)
at org.xmlBlaster.engine.XmlBlasterImpl.publishArr(XmlBlasterImpl.java:219)
at org.xmlBlaster.util.protocol.RequestReplyExecutor.receiveReply(RequestReplyExecutor.java:447)
at org.xmlBlaster.protocol.socket.HandleClient.handleMessage(HandleClient.java:194)
at org.xmlBlaster.protocol.socket.HandleClient$1.run(HandleClient.java:369)
at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:665)
at edu.emory.mathcs.backport.java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:690)
at java.lang.Thread.run(Thread.java:619)
There was no deadlock. It seems that some thread locked the
ReentrantLock instance contained in
org.xmlBlaster.engine.TopicAccessor$TopicContainer and never
unlocked it again.
The clients subsequently disconnected and reconnected with new
session ids because, from their point of view, the server had stopped
responding; hence the huge number of stale threads and clients.
I noticed that the implementation of
org.xmlBlaster.engine.TopicAccessor.release(...) seems problematic:
it may return _without unlocking_ under error conditions, and it does
not even log a message at INFO level or higher when that happens!
Perhaps the tc.unlock() call should be placed in a finally block
(a rough sketch of what I mean follows). There may well be other
places that acquire this lock without ever unlocking it again.
regards,
Balázs Póka