JPPF Issue Tracker
JPPF (jppf)
June 20, 2018
07:36  Enhancement JPPF-490 - Timestamps for statistics updates
lolocohen : Issue closed
06:17  Bug report JPPF-537 - Task is being submitted to a driver and canceled immediately afterwards, no cancellation effect is visible.
lolocohen : Issue closed
June 19, 2018
13:13  Bug report JPPF-537 - Task is being submitted to a driver and canceled immediately afterwards, no cancellation effect is visible.
ecx_q : Issue created
We have an issue related to task cancellation. Initially the problem was noticed because a job's client timeout sometimes did not work, but in the end it appears the issue is simply related to cancelling the job.
June 18, 2018
22:15 JPPF 5.1
A new milestone has been reached
June 17, 2018
00:18  Feature request JPPF-471 - Show master / slave nodes relationships in the admin console
lolocohen : Issue closed
June 11, 2018
07:29  Task JPPF-520 - Make JPPF work with Java 9
lolocohen : Issue closed
07:21  Enhancement JPPF-531 - Ability to specify alternate drivers/servers for a node to connect to in a single configuration property
lolocohen : Issue closed
June 01, 2018
23:02  Bug report JPPF-533 - IllegalStateException in driver in multi-server topology
lolocohen : Issue closed
May 30, 2018
20:05 JPPF 4.1
A new milestone has been reached
May 26, 2018
22:31  Enhancement JPPF-530 - Port J2EE connector to Open Liberty
lolocohen : Issue closed
08:04  Task JPPF-532 - Upgrade to latest version of SLF4J
lolocohen : Issue closed
May 25, 2018
08:16  Feature request JPPF-462 - Node temperature
lolocohen : Issue closed
08:14  Feature request JPPF-519 - Admin console: ability to add custom data to the JVM health view and the charts
lolocohen : Issue closed
08:14  Feature request JPPF-396 - Provide information on remote drivers/nodes not natively available from the JDK
lolocohen : Issue closed
May 24, 2018
22:24  Enhancement JPPF-535 - Desktop console: use picklist to select visible columns in tree table views
lolocohen : Issue closed
07:11  Task JPPF-536 - Generic build script in the repo root
lolocohen : Issue closed
07:05  Bug report JPPF-533 - IllegalStateException in driver in multi-server topology
lolocohen : Issue closed
May 23, 2018
19:47  Task JPPF-536 - Generic build script in the repo root
lolocohen : Issue created
We propose to create a build.xml that can easily be used to build all modules, publish all Maven artifacts, run the tests, etc.

The script should be at the root of the repo and would call the various individual build scripts involved.
19:11  Enhancement JPPF-535 - Desktop console: use picklist to select visible columns in tree table views
lolocohen : Issue created
The desktop console uses a dialog with a list of checkboxes, one for each column, to select the visible columns in the topology, JVM health monitoring and jobs views. On the other hand, it uses a pick list to select the visible statistics in the statistics view. The web console uses pick lists for all of these views.

A pick list is more compact and also provides a way to reorder the columns. We propose to use one for the desktop console's tree table views.
May 22, 2018
22:25 JPPF 4.2.1
A new milestone has been reached
May 18, 2018
06:09  Bug report JPPF-534 - When no jvm.options is in the config file, a NullPointerException is thrown
lolocohen : Issue closed
May 17, 2018
10:54  Bug report JPPF-534 - When no jvm.options is in the config file, a NullPointerException is thrown
boris.klug : Issue created
When the option "jppf.jvm.options" is not set in the node's configuration file "jppf-node.properties", the node crashes shortly after startup with a NullPointerException:
java.lang.NullPointerException
at java.util.regex.Matcher.getTextLength(Unknown Source)
at java.util.regex.Matcher.reset(Unknown Source)
at java.util.regex.Matcher.<init>(Unknown Source)
at java.util.regex.Pattern.matcher(Unknown Source)
at org.jppf.process.AbstractProcessLauncher.parseJvmOptions(AbstractProcessLauncher.java:84)
at org.jppf.process.ProcessLauncher.buildProcess(ProcessLauncher.java:156)
at org.jppf.process.ProcessLauncher.startProcess(ProcessLauncher.java:142)
at org.jppf.process.ProcessLauncher.run(ProcessLauncher.java:119)
at org.jppf.node.NodeLauncher.main(NodeLauncher.java:38)
The same happens with the option "jppf.node.provisioning.slave.jvm.options" when provisioning a slave node.

Workaround: Set the option to an empty string like this:
jppf.jvm.options =
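
For illustration, here is a minimal sketch of a defensive fix on the launcher side (hypothetical code, not the actual JPPF implementation): a missing "jppf.jvm.options" value is replaced by an empty string before it reaches the regex-based parser, so Pattern.matcher() never receives null:

import java.util.*;
import java.util.regex.*;

// Hypothetical sketch: treat a missing "jppf.jvm.options" value as an empty
// string, so the regex matcher never sees null (the NPE above comes from
// passing a null CharSequence to Pattern.matcher() inside parseJvmOptions()).
public class JvmOptionsParsingSketch {
  static List<String> parseJvmOptions(String rawValue) {
    String value = (rawValue == null) ? "" : rawValue.trim();
    List<String> options = new ArrayList<>();
    Matcher m = Pattern.compile("\\S+").matcher(value);
    while (m.find()) options.add(m.group());
    return options;
  }

  public static void main(String[] args) {
    System.out.println(parseJvmOptions(null));               // [] instead of an NPE
    System.out.println(parseJvmOptions("-Xmx256m -server")); // [-Xmx256m, -server]
  }
}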
May 12, 2018
21:08  Bug report JPPF-533 - IllegalStateException in driver in multi-server topology
lolocohen : Issue created
In the driver logs of a failed test of load-balancer state persistence in a multi-server topology, I see the following stack trace:
2018-05-12 20:10:59,727 [DEBUG][JPPF-0001 ][org.jppf.server.job.JPPFJobManager.jobEnded(166)] jobId 'testPersistentAlgos-proportional' ended
2018-05-12 20:10:59,728 [DEBUG][JobManager-0001 ][org.jppf.server.job.management.DriverJobManagement.sendNotification(223)] sending event JOB_ENDED for job JobInformation[jobUuid=98768233-45EB-071D-6C66-5BAA431E33A6, jobName=testPersistentAlgos-proportional, taskCount=0, initialTaskCount=100, priority=0, suspended=false, pending=false, maxNodes=2147483647], node=null
2018-05-12 20:10:59,735 [DEBUG][JPPF-0001 ][org.jppf.server.job.JPPFJobManager.jobQueued(145)] jobId 'testPersistentAlgos-proportional' queued
2018-05-12 20:10:59,736 [DEBUG][JobManager-0001 ][org.jppf.server.job.management.DriverJobManagement.sendNotification(223)] sending event JOB_QUEUED for job JobInformation[jobUuid=98768233-45EB-071D-6C66-5BAA431E33A6, jobName=testPersistentAlgos-proportional, taskCount=0, initialTaskCount=100, priority=0, suspended=false, pending=false, maxNodes=2147483647], node=null
2018-05-12 20:10:59,737 [DEBUG][JPPF-0002 ][org.jppf.nio.StateTransitionTask.run(90)] error on channel SelectionKeyWrapper[id=10, channel=java.nio.channels.SocketChannel[connected local=/127.0.0.1:11102 remote=/127.0.0.1:56375], readyOps=1, interestOps=0, context=ClientContext[channel=SelectionKeyWrapper[id=10], state=WAITING_JOB, uuid=B9700023-8320-CE2A-FACA-B38D9EF05B0E, connectionUuid=B9700023-8320-CE2A-FACA-B38D9EF05B0E_2, peer=false, ssl=false], nbTasksToSend=31, completedBundles={[]}] :
java.lang.IllegalStateException: Job ENDED
at org.jppf.server.protocol.AbstractServerJobBase.addBundle(AbstractServerJobBase.java:235)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:128)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:87)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Full logs are provided in the attached file.

As a consequence, not all task results are sent back to the client.
May 10, 2018
08:35  Task JPPF-532 - Upgrade to latest version of SLF4J
lolocohen : Issue created
The latest version (or maybe an earlier one) has a new API that allows writing log statements with vararg parameters, for instance:
log.debug("logging object {} with param1={}, param2={}, param3={}", this, p1, p2, p3);
In the version we currently use, we have to explicitly create an array:
log.debug("logging object {} with param1={}, param2={}, param3={}", new Object[] { this, p1, p2, p3 });
May 08, 2018
07:56  Enhancement JPPF-531 - Ability to specify alternate drivers/servers for a node to connect to in a single configuration property
lolocohen : Issue created
What we propose here is a new built-in [https://www.jppf.org/doc/6.0/index.php?title=Defining_the_node_connection_strategy '''node connection strategy'''] which reads a list of drivers from a single configuration property. We propose that the value of the property be in a format similar to the built-in [https://www.jppf.org/doc/6.0/index.php?title=Defining_the_node_connection_strategy#Built-in_strategies '''CSV file-based strategy'''], except we use a pipe ('|') character to separate driver definitions instead of newlines. For example:
jppf.server.connection.strategy.definitions = sslFlag1, host1, port1, recoveryPort1 | ... | sslFlagN, hostN, portN, recoveryPortN
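As an illustration of the proposed format (not the actual JPPF implementation), a small sketch that splits such a value into individual driver definitions:

// Illustrative sketch only: split a pipe-separated list of driver definitions
// of the form "sslFlag, host, port, recoveryPort | ..." into its fields.
public class DriverDefinitionsSketch {
  public static void main(String[] args) {
    String value = "false, host1, 11111, 22221 | true, host2, 11443, 22443";
    for (String def : value.split("\\|")) {
      String[] f = def.trim().split("\\s*,\\s*");
      boolean ssl = Boolean.parseBoolean(f[0]);
      String host = f[1];
      int port = Integer.parseInt(f[2]);
      int recoveryPort = Integer.parseInt(f[3]);
      System.out.printf("driver %s:%d (ssl=%b, recovery port=%d)%n", host, port, ssl, recoveryPort);
    }
  }
}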
May 07, 2018
10:35  Enhancement JPPF-530 - Port J2EE connector to Open Liberty
lolocohen : Issue created
We propose to port the JCA connector to the [https://openliberty.io '''Open Liberty'''] application server.
09:47  Feature request JPPF-436 - Integration of JMX remote with NIO
lolocohen : Issue closed
08:56  Task JPPF-529 - Explore usage of .Net Core instead of Visual Studio for the .Net bridge
lolocohen : Issue created
Currently, we use Visual Studio and associated SDK tools to build the [https://www.jppf.org/doc/6.0/index.php?title=.Net_Bridge '''.Net bridge''']. We propose to try and switch to the open-source [https://dotnet.github.io/ '''.Net Core framework'''] instead, and explore the possibility of porting it to Linux and Mac.
April 08, 2018
09:02 JPPF 5.1.1
A new milestone has been reached
April 02, 2018
14:00 JPPF 5.2.9
New version released
March 15, 2018
13:53  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Issue created
Excellent feedback, suggestions and comments from a long-time JPPF user:
# It is not obvious that JPPF benefits from multiple-core processors. We might want to make that more obvious, as it might help attract more users and help people decide what kind of cloud service or physical servers to use. People might be wasting time multi-threading their programs when it would be better to use JPPF. There was a time when Rackspace had only single-core servers; JPPF would have been fine for that. Linode has many cores, and using them with JPPF is not wasteful. Going from one to the other requires no programming changes and no configuration changes. To me this was a non-obvious benefit of JPPF.
# It isn't obvious either that JPPF is probably better suited to many small tasks than to a few long-running tasks. That is, 64 million tasks of 10 seconds each is better than 64 tasks of 10 million seconds each. Knowing this might save people some rewriting effort.
# The documentation is kind of discouraging and lengthy. This results in readers/users being unwilling or unable to read the whole thing because they cannot grasp the relevance of much of it. I haven't looked at the documentation in recent years so perhaps my comment is no longer relevant. Yes it is!
# The following use case is typical: homogeneous tasks, each lasting many hours, one task per processor core, no internet and no database, since everything required is in the serialized object passed to each processor core. The fact that it is typical may lead users to think they can set up JPPF by themselves, but in the end they actually need help found in the forums. The allocation decisions JPPF makes are often very difficult to understand (i.e. how the tasks are distributed to the nodes); in particular, it is difficult to understand the "nodethreads" algorithm. A suggestion is to match each typical case with a brief cheat sheet.
# It is not obvious where to enable assertions. It was observed that they needed to be enabled in two places: the invocation of the application and the node configuration file. It is not obvious that it is not necessary to enable assertions for the invocation of the driver, in the driver's configuration file, or for the invocation of the node. This would be good to put on a cheat sheet.
# When an application might need more memory, it is not obvious that it is not necessary to increase it for the driver launcher, the node launcher or the driver configuration; rather, it must be done for the application and in the node properties. This would be good to put on a cheat sheet.
# For some applications, virtual memory doesn't solve a problem; it creates one. VM is probably helpful mainly in applications where the memory need varies a lot over time. A compute-intensive application might have a nearly constant level of needed memory. If swapping happens at all, not only will it slow progress, but worse, it will probably persist for hours. It was found that it is best to disable swapping and discover as soon as possible whether more physical memory is needed. While this is not a JPPF-specific problem, it might be relevant for some users, and it would help to know it as early as possible in the development life cycle.
# Files that contain the "jppf.jvm.options" property are probably the most relevant to the user, so perhaps it would be good to point out that some configuration files are not important, and that while the configuration files are long, only a few lines matter to some (or most) users. It can be useful to see suggestions like "do not waste time looking at log4j-node.properties", because you will not enable assertions there and you will not increase memory there.
# It is a minor point, but sometimes the word "server" appears and it is confusing, because everything is a server. The line "#jppf.server.host=localhost" is confusing. If the reader thinks a node is a "server" because the driver is a "master", he will be even more confused, because he will think this property is supposed to point to a node. The default value "localhost" is also confusing, because most of the time it will not be local. Perhaps it could be "#jppf.driver.host=IPADDRESS".
March 14, 2018
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue closed
08:24  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue created
When the admin console's configuration specifies a pool size greater than 1 for the client connections to the driver(s), the "Job data" view remains empty.

This happens in 5.1.x releases, but has been fixed in 5.2.x releases and later.
March 03, 2018
13:15 JPPF 4.2.9
A new milestone has been reached
10:46  Feature request JPPF-526 - Enable NIO-based recovery/heartbeat mechanism
lolocohen : Issue created
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Configuring_a_JPPF_server#Recovery_from_hardware_failures_of_remote_nodes recovery mechanism] uses a connection to a separate driver port. We propose to rewrite it so that it uses the non-blocking NIO-based communications, which will allow it to use the same port as the node and client connections. It will then just be a matter of enabling it or not.
January 18, 2018
14:25 JPPF 5.0
A new milestone has been reached
January 17, 2018
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Issue closed
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue closed
January 13, 2018
08:08  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue created
During the run of the tests for test.org.jppf.client.event.TestJobListener on 5.2.8, the following deadlock was detected:
- thread id 23 "JPPF NIO-0004" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4 which is held by thread id 20 "JPPF NIO-0001"
- thread id 20 "JPPF NIO-0001" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515 which is held by thread id 23 "JPPF NIO-0004"

Stack trace information for the threads listed above

"JPPF NIO-0004" - 23 - state: WAITING - blocked count: 3 - blocked time: 0 - wait count: 33 - wait time: 11618
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.protocol.AbstractServerJobBase.addBundle(AbstractServerJobBase.java:229)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:121)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:88)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:78)
- locked org.jppf.nio.SelectionKeyWrapper@158778fd
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
- java.util.concurrent.ThreadPoolExecutor$Worker@6524a69

"JPPF NIO-0001" - 20 - state: WAITING - blocked count: 7 - blocked time: 1 - wait count: 32 - wait time: 11579
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.getJob(JPPFPriorityQueue.java:286)
at org.jppf.server.queue.JPPFPriorityQueue.getBundleForJob(JPPFPriorityQueue.java:371)
at org.jppf.server.nio.client.CompletionListener.removeJobFromQueue(CompletionListener.java:121)
at org.jppf.server.nio.client.CompletionListener.taskCompleted(CompletionListener.java:74)
at org.jppf.server.protocol.ServerTaskBundleClient.fireTasksCompleted(ServerTaskBundleClient.java:361)
at org.jppf.server.protocol.ServerTaskBundleClient.resultReceived(ServerTaskBundleClient.java:218)
at org.jppf.server.protocol.ServerJob.resultsReceived(ServerJob.java:139)
at org.jppf.server.protocol.ServerTaskBundleNode.resultsReceived(ServerTaskBundleNode.java:197)
at org.jppf.server.protocol.ServerJob.handleCancelledStatus(ServerJob.java:241)
at org.jppf.server.protocol.ServerJob.cancel(ServerJob.java:276)
at org.jppf.server.nio.client.ClientContext.cancelJobOnClose(ClientContext.java:279)
at org.jppf.server.nio.client.ClientContext.handleException(ClientContext.java:121)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:94)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
- java.util.concurrent.ThreadPoolExecutor$Worker@69663655
Attaching the whole logs zip for this test run.
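
For illustration only (not the actual fix), the pattern above is the classic two-lock deadlock: one thread holds the job lock and waits for the queue lock while the other holds the queue lock and waits for the job lock. Acquiring both locks in a single fixed order on every code path removes the cycle; a minimal sketch with hypothetical lock names:

import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch, not JPPF code: both code paths take the locks in the
// same order (jobLock then queueLock), so the circular wait above cannot occur.
public class LockOrderingSketch {
  private final ReentrantLock jobLock = new ReentrantLock();
  private final ReentrantLock queueLock = new ReentrantLock();

  void addBundle() {        // analogous to the WAITING_JOB path
    jobLock.lock();
    try {
      queueLock.lock();
      try { /* add the bundle to the job and the queue */ } finally { queueLock.unlock(); }
    } finally { jobLock.unlock(); }
  }

  void cancelJob() {        // analogous to the cancel-on-close path
    jobLock.lock();         // same acquisition order as addBundle()
    try {
      queueLock.lock();
      try { /* remove the job from the queue */ } finally { queueLock.unlock(); }
    } finally { jobLock.unlock(); }
  }
}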
January 09, 2018
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Issue closed