JPPF Issue Tracker
JPPF (jppf)
April 08, 2018
09:02 JPPF 5.1.1
A new milestone has been reached
April 02, 2018
14:00 JPPF 5.2.9
New version released
March 20, 2018
10:57  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Title updated
March 15, 2018
13:54  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Description updated
13:54  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Assignee changed: lolo4j
13:53  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Issue created
Excellent feedback, suggestions and comments from a long-time JPPF user:
# It is not obvious that JPPF takes advantage of multi-core processors. Making that more obvious might help attract more users and help people decide what kind of cloud service or physical servers to use. People may be spending time multi-threading their programs when it would be better to use JPPF. There was a time when Rackspace had only single-core servers, and JPPF would have been fine for that; Linode has many cores, and using them with JPPF is not wasteful. Moving from one to the other requires no programming or configuration changes. To me this was a non-obvious benefit of JPPF.
# Nor is it obvious that JPPF is probably better suited to many small tasks than to a few tasks with a long elapsed time: 64 million tasks of 10 seconds each work better than 64 tasks of 10 million seconds each. Knowing this up front might save people some rewriting effort.
# The documentation is lengthy and somewhat discouraging. As a result, readers are unwilling or unable to read the whole thing, because they cannot grasp the relevance of much of it. I haven't looked at the documentation in recent years, so perhaps this comment is no longer relevant. Yes, it still is!
# The following use case is typical: homogeneous tasks, each lasting many hours, one task per processor core, no internet and no database, since everything required is in the serialized object passed to each processor core. Because the case is typical, users may think they can set up JPPF by themselves, yet in the end they need help from the forums. The allocation decisions JPPF makes, i.e. how tasks are distributed to the nodes, are often very difficult to understand; the "nodethreads" algorithm is particularly hard to grasp. A suggestion is to match each typical case with a brief cheat sheet.
# It is not obvious where to enable assertions. It was observed that they needed to be enabled in two places: the invocation of the application and the node configuration file. It is not obvious that it is unnecessary to enable them for the invocation of the driver, in the driver configuration file, or for the invocation of the node. This would be good to put on a cheat sheet (see the configuration sketch after this list).
# Similarly, when an application needs more memory, it is not obvious that the increase does not have to be applied to the driver launcher, the node launcher or the driver configuration, but rather to the application itself and in the node properties. This too would be good to put on a cheat sheet.
# For some applications, virtual memory doesn't solve a problem; it creates one. VM is probably helpful mainly in applications whose memory needs vary widely over time, whereas a compute-intensive application may need a nearly constant amount of memory. If swapping happens at all, not only will it slow progress, but worse, it will probably persist for hours. It was found best to disable swapping and to discover as early as possible whether more physical memory is needed. While this is not a JPPF-specific problem, it would help some users to learn this as soon as possible in the development life cycle.
# Files that contain the "jppf.jvm.options" property are probably the most relevant to the user, so it might be good to point out which configuration files are unimportant: the configuration files are long, but only a few lines matter to some (or most) users. Suggestions such as "do not waste time looking at log4j-node.properties, because you will not enable assertions there and you will not increase memory there" can be useful.
# It is a minor point, but sometimes the word "server" appears and it is confusing, because everything is a server. The line "#jppf.server.host=localhost" is confusing: if the reader thinks a node is a "server" because the driver is a "master", he will be even more confused, since he will think this property is supposed to point to a node. The default value "localhost" is also confusing, because most of the time it will not be local. Perhaps it could be "#jppf.driver.host=IPADDRESS".
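As a concrete illustration of the assertions and memory points above, here is a minimal sketch of where such settings typically go; the values are examples only, and the file name is the usual default node configuration file name, which may differ per installation. In the node configuration file (e.g. jppf-node.properties), the JVM that actually runs the tasks is configured with:

jppf.jvm.options = -server -ea -Xmx2g

For the client application, assertions and heap size are set on its own JVM invocation, for example:

java -ea -Xmx2g -cp <classpath> my.application.Main

No equivalent change is needed in the driver launcher, the driver configuration or the node launcher for these two purposes.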
March 14, 2018
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue closed
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Status changed: New ⇒ Closed
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Resolution changed: Not determined ⇒ RESOLVED
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : lolo4j ⇒ Not being worked on
08:26  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Assignee changed: lolo4j
08:26  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Reproducibility changed: Reproduction steps updated
08:24  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue created
When the admin console's configuration specifies a pool size greater than 1 for the client connections to the driver(s), the "Job data" view is and remains empty.

This happens in 5.1.x releases, but has been fixed in 5.2.x releases and later.
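For reference, such a pool size is typically declared in the console's (client) configuration along the following lines. This is only an illustrative sketch using the standard client properties, not the exact configuration of the failing setup:

jppf.discovery.enabled = false
jppf.drivers = driver1
driver1.jppf.server.host = 192.168.1.10
driver1.jppf.server.port = 11111
driver1.jppf.pool.size = 2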
08:24  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : 'JPPF 5.1.6' added
March 03, 2018
13:15 JPPF 4.2.9
A new milestone has been reached
10:46  Feature request JPPF-526 - Enable NIO-based recovery/heartbeat mechanism
lolocohen : Issue created
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Configuring_a_JPPF_server#Recovery_from_hardware_failures_of_remote_nodes recovery mechanism] uses a connection to a separate driver port. We propose to rewrite it so that it uses the non-blocking NIO-based communications, which will allow it to use the same port as the node and client connections. It will then simply be a matter of enabling it or not.
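In other words, once rewritten, turning the heartbeat mechanism on or off should reduce to a single flag in the driver and node configurations, for example (a sketch only, assuming the existing property name is kept):

jppf.recovery.enabled = true

with no separate recovery port to configure anymore.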
January 30, 2018
icon_milestone.png 12:36 JPPF 5.1
A new milestone has been reached
January 18, 2018
icon_milestone.png 14:25 JPPF 5.0
A new milestone has been reached
January 17, 2018
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Issue closed
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Status changed: New ⇒ Closed
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Resolution changed: Not determined ⇒ RESOLVED
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : lolo4j ⇒ Not being worked on
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue closed
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Status changed: New ⇒ Closed
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Resolution changed: Not determined ⇒ RESOLVED
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : lolo4j ⇒ Not being worked on
January 13, 2018
08:41  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Reproducibility changed: Reproduction steps updated
08:40  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Reproducibility changed: Reproduction steps updated
08:10  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Assignee changed: lolo4j
08:08  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue created
During the run of the tests for test.org.jppf.client.event.TestJobListener on 5.2.8, the following deadlock was detected:
- thread id 23 "JPPF NIO-0004" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4 which is held by thread id 20 "JPPF NIO-0001"
- thread id 20 "JPPF NIO-0001" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515 which is held by thread id 23 "JPPF NIO-0004"

Stack trace information for the threads listed above

"JPPF NIO-0004" - 23 - state: WAITING - blocked count: 3 - blocked time: 0 - wait count: 33 - wait time: 11618
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.protocol.AbstractServerJobBase.addBundle(AbstractServerJobBase.java:229)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:121)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:88)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:78)
- locked org.jppf.nio.SelectionKeyWrapper@158778fd
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
- java.util.concurrent.ThreadPoolExecutor$Worker@6524a69

"JPPF NIO-0001" - 20 - state: WAITING - blocked count: 7 - blocked time: 1 - wait count: 32 - wait time: 11579
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.getJob(JPPFPriorityQueue.java:286)
at org.jppf.server.queue.JPPFPriorityQueue.getBundleForJob(JPPFPriorityQueue.java:371)
at org.jppf.server.nio.client.CompletionListener.removeJobFromQueue(CompletionListener.java:121)
at org.jppf.server.nio.client.CompletionListener.taskCompleted(CompletionListener.java:74)
at org.jppf.server.protocol.ServerTaskBundleClient.fireTasksCompleted(ServerTaskBundleClient.java:361)
at org.jppf.server.protocol.ServerTaskBundleClient.resultReceived(ServerTaskBundleClient.java:218)
at org.jppf.server.protocol.ServerJob.resultsReceived(ServerJob.java:139)
at org.jppf.server.protocol.ServerTaskBundleNode.resultsReceived(ServerTaskBundleNode.java:197)
at org.jppf.server.protocol.ServerJob.handleCancelledStatus(ServerJob.java:241)
at org.jppf.server.protocol.ServerJob.cancel(ServerJob.java:276)
at org.jppf.server.nio.client.ClientContext.cancelJobOnClose(ClientContext.java:279)
at org.jppf.server.nio.client.ClientContext.handleException(ClientContext.java:121)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:94)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
- java.util.concurrent.ThreadPoolExecutor$Worker@69663655
Attaching the whole logs zip for this test run.
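For readers less familiar with this failure mode, the dump shows a classic lock-ordering deadlock: judging from the stack traces, one NIO thread holds the queue's ReentrantLock while waiting for a job's lock, and the other holds that job's lock while waiting for the queue's. A minimal, self-contained sketch of the same pattern (illustrative only, not JPPF code) follows:

import java.util.concurrent.locks.ReentrantLock;

// Two threads acquire the same pair of locks in opposite orders, which can deadlock,
// loosely mirroring the queue-lock vs job-lock interaction reported above.
public class LockOrderDeadlockSketch {
  static final ReentrantLock queueLock = new ReentrantLock(); // stands in for the priority queue's lock
  static final ReentrantLock jobLock = new ReentrantLock();   // stands in for a server job's lock

  public static void main(String[] args) {
    new Thread(() -> { // like the thread adding a bundle: queue lock first, then job lock
      queueLock.lock();
      try {
        pause();
        jobLock.lock();
        try { /* add the bundle to the job */ } finally { jobLock.unlock(); }
      } finally { queueLock.unlock(); }
    }).start();
    new Thread(() -> { // like the thread cancelling a job: job lock first, then queue lock
      jobLock.lock();
      try {
        pause();
        queueLock.lock();
        try { /* remove the job from the queue */ } finally { queueLock.unlock(); }
      } finally { jobLock.unlock(); }
    }).start();
  }

  static void pause() {
    try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
  }
}

The standard remedy is to acquire the two locks in a consistent order in both code paths, or to avoid calling back into the queue while a job's lock is still held.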
08:08  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : 'JPPF 5.2.8' added
January 09, 2018
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Issue closed
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Status changed: New ⇒ Closed
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Resolution changed: Not determined ⇒ RESOLVED
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : lolo4j ⇒ Not being worked on
13:44  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Description updated
11:53  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Assignee changed: lolo4j
January 08, 2018
16:52  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
igor : Issue created
During a restart of a Payara server, I sometimes see this warning in the logs:
[2018-01-08T13:31:47.119+0100] [Payara 4.1] [WARNING] [NCLS-COMUTIL-00023] [javax.enterprise.system.util] [[
Input stream has been finalized or forced closed without being explicitly closed; stream instantiation reported in following stack trace
java.lang.Throwable
at com.sun.enterprise.loader.ASURLClassLoader$SentinelInputStream.<init>(ASURLClassLoader.java:1284)
at com.sun.enterprise.loader.ASURLClassLoader$InternalJarURLConnection.getInputStream(ASURLClassLoader.java:1392)
at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:238)
at com.sun.enterprise.loader.ASURLClassLoader.getResourceAsStream(ASURLClassLoader.java:936)
at org.jppf.utils.VersionUtils.createVersionInfo(VersionUtils.java:59)
at org.jppf.utils.VersionUtils.<clinit>(VersionUtils.java:42)
at org.jppf.client.AbstractJPPFClient.<init>(AbstractJPPFClient.java:97)
at org.jppf.client.AbstractGenericClient.<init>(AbstractGenericClient.java:87)
at org.jppf.client.JPPFClient.<init>(JPPFClient.java:61)
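The warning indicates that the InputStream obtained via getResourceAsStream() in createVersionInfo() is never closed explicitly and is only reclaimed through finalization by Payara's class loader. The usual fix is try-with-resources; the following is only a sketch of that pattern with a hypothetical resource name, not the actual JPPF code:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Sketch of the fix pattern: try-with-resources guarantees the stream is closed,
// instead of relying on finalization by the container's class loader.
public class VersionInfoSketch {
  static Properties loadVersionProperties(ClassLoader cl) {
    Properties props = new Properties();
    try (InputStream is = cl.getResourceAsStream("META-INF/jppf-version.properties")) {
      if (is != null) props.load(is);
    } catch (IOException e) {
      // the version information is purely informational, so a failure here is not fatal
    }
    return props;
  }
}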
16:52  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
igor : 'JPPF 5.1.6' added
January 04, 2018
13:38  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Issue closed
13:38  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Status changed: New ⇒ Closed
13:38  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Resolution changed: Not determined ⇒ RESOLVED
13:38  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : lolo4j ⇒ Not being worked on
January 02, 2018
10:43  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Description updated
10:41  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Assignee changed: lolo4j
December 02, 2017
02:53 JPPF 5.0.5
A new milestone has been reached
August 10, 2017
10:00 JPPF 5.2.8
New version released
June 12, 2017
10:00 JPPF 5.2.7
New version released
April 27, 2017
15:29 JPPF 4.2.8
A new milestone has been reached
April 02, 2017
22:00 JPPF 5.2.6
New version released
21:30 JPPF 5.1.6
New version released
March 10, 2017
10:00 JPPF 5.2.5
New version released
January 18, 2017
10:00 JPPF 5.2.4
New version released
January 07, 2017
13:15 JPPF 5.1.5
A new milestone has been reached
January 05, 2017
22:35 JPPF 4.2.4
A new milestone has been reached
November 27, 2016
11:00 JPPF 5.2.3
New version released
November 13, 2016
08:09 JPPF 5.0.4
A new milestone has been reached
November 09, 2016
17:29 JPPF 5.0.2
A new milestone has been reached
October 16, 2016
01:33 JPPF 4.2.1
A new milestone has been reached
01:29 JPPF 4.2.5
A new milestone has been reached