JPPF Issue Tracker
JPPF (jppf)
October 20, 2017
09:16  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Issue created
The "jppf.node.provisioning.master.uuid" property, also represented as [http://jppf.org/javadoc/5.2/org/jppf/utils/configuration/JPPFProperties.html#PROVISIONING_MASTER_UUID '''JPPFProperties.PROVISIONING_MASTER_UUID'''] in the configuration API is a property that is only set on slave nodes and contains the UUID of the master node that started them.

It appears this property is only documented in the Javadoc and nowhere else. We should add something about it in the [http://www.jppf.org/doc/5.2/index.php?title=Node_provisioning '''provisioning'''] documentation.
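For illustration, here is a minimal sketch of how code running in a node could read this property through the configuration API; how the documentation will present it is an assumption, but the property name and the JPPFConfiguration/TypedProperties calls are standard:
import org.jppf.utils.JPPFConfiguration;
import org.jppf.utils.TypedProperties;

public class MasterUuidExample {
  public static void main(String[] args) {
    TypedProperties config = JPPFConfiguration.getProperties();
    // only set on slave nodes: the UUID of the master node that started this slave
    String masterUuid = config.getString("jppf.node.provisioning.master.uuid");
    if (masterUuid == null) System.out.println("this node is not a slave");
    else System.out.println("this slave was started by master node " + masterUuid);
  }
}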
October 09, 2017
08:25  Task JPPF-520 - Make JPPF work with Java 9
lolocohen : Issue created
The title says it all. We should be able to compile, build and run JPPF with Java 9. In particular, passing all automated tests will be the main acceptance criterion.
08:06  Feature request JPPF-519 - Admin console: ability to add custom data to the JVM health view and the charts
lolocohen : Issue created
The idea is to be able to add custom columns to the JVM health view of the desktop and web admin consoles, along with the ability to make the data available for the charts in the desktop console. This would be the client-side counterpart to the changes proposed in feature request JPPF-396.

This should be implemented as a plugin for the admin console(s), with new data based on that found in the refactored [http://www.jppf.org/javadoc/6.0/index.html?org/jppf/management/diagnostics/HealthSnapshot.html '''HealthSnapshot'''].
October 05, 2017
09:00  Bug report JPPF-518 - Admin console job data view does not display peer drivers to which jobs are dispatched
lolocohen : Issue closed
08:50  Bug report JPPF-518 - Admin console job data view does not display peer drivers to which jobs are dispatched
lolocohen : Issue created
When a job is dispatched to a peer driver, the job data view of the admin console displays a blank entry instead of an icon and a host:port string.
October 04, 2017
08:45  Feature request JPPF-493 - Parametrized configuration properties
lolocohen : Issue closed
September 30, 2017
18:31  Bug report JPPF-517 - Deadlock in the driver during stress test
lolocohen : Issue closed
11:04  Bug report JPPF-517 - Deadlock in the driver during stress test
lolocohen : Issue created
While performing a stress test on the driver, I was monitoring it with the admin console, and the JVM Health view showed the following deadlock:
Deadlock detected

- thread id 32 "JPPF NIO-0008" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c which is held by thread id 30 "JPPF NIO-0006"
- thread id 30 "JPPF NIO-0006" is waiting to lock org.jppf.nio.SelectionKeyWrapper@20bda110 which is held by thread id 29 "JPPF NIO-0005"
- thread id 29 "JPPF NIO-0005" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c which is held by thread id 30 "JPPF NIO-0006"

Stack trace information for the threads listed above

"JPPF NIO-0008" - 32 - state: WAITING - blocked count: 5932 - blocked time: 2065 - wait count: 247708 - wait time: 864535
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:99)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:87)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@358ddcb3
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.ThreadPoolExecutor$Worker@7074c91e

"JPPF NIO-0006" - 30 - state: BLOCKED - blocked count: 6280 - blocked time: 212246 - wait count: 263218 - wait time: 669040
at org.jppf.server.nio.client.CompletionListener.taskCompleted(CompletionListener.java:85)
- waiting on org.jppf.nio.SelectionKeyWrapper@20bda110
at org.jppf.server.protocol.ServerTaskBundleClient.fireTasksCompleted(ServerTaskBundleClient.java:393)
at org.jppf.server.protocol.ServerTaskBundleClient.resultReceived(ServerTaskBundleClient.java:245)
at org.jppf.server.protocol.ServerJob.postResultsReceived(ServerJob.java:165)
at org.jppf.server.protocol.ServerJob.resultsReceived(ServerJob.java:132)
at org.jppf.server.protocol.ServerTaskBundleNode.resultsReceived(ServerTaskBundleNode.java:197)
at org.jppf.server.nio.nodeserver.WaitingResultsState.processResults(WaitingResultsState.java:151)
at org.jppf.server.nio.nodeserver.WaitingResultsState.process(WaitingResultsState.java:87)
at org.jppf.server.nio.nodeserver.WaitingResultsState.performTransition(WaitingResultsState.java:67)
at org.jppf.server.nio.nodeserver.WaitingResultsState.performTransition(WaitingResultsState.java:43)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@41f49664
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
- java.util.concurrent.ThreadPoolExecutor$Worker@1c3590e

"JPPF NIO-0005" - 29 - state: WAITING - blocked count: 6000 - blocked time: 1502 - wait count: 256419 - wait time: 877563
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:99)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:87)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@20bda110
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.ThreadPoolExecutor$Worker@7c52859c
September 25, 2017
07:53  Feature request JPPF-511 - Ability to persist and reuse the state of adaptive load-balancers
lolocohen : Issue closed
August 30, 2017
06:42  Feature request JPPF-443 - Variable substitutions and scripted expressions for execution policies arguments
lolocohen : Issue closed
06:42  Enhancement JPPF-514 - Lighter syntax for scripted property values
lolocohen : Issue closed
August 29, 2017
08:04  Enhancement JPPF-514 - Lighter syntax for scripted property values
lolocohen : Issue created
Examples of current syntax:
# following two are equivalent
my.prop1 = $script:javascript:inline{ 1 + 2 }$
my.prop2 = $script{ 1 + 2 }$

my.prop1 = $script:groovy:file{ /home/me/script.groovy }$
my.prop1 = $script:javascript:url{ file:///home/me/script.js }$
This is a bit cumbersome. We propose to relax the syntactic constraints and allow 'S' or 's' instead of 'script', as well as specifying only the first character of each possible script source type ('u' or 'U' for url, etc.).
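A sketch of what the relaxed syntax could look like under this proposal (the exact abbreviations are still to be decided):
# equivalent to $script:javascript:inline{ 1 + 2 }$
my.prop1 = $s{ 1 + 2 }$
# 'f' for a file source, 'u' for a url source
my.prop2 = $s:groovy:f{ /home/me/script.groovy }$
my.prop3 = $s:javascript:u{ file:///home/me/script.js }$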
August 21, 2017
08:45  Feature request JPPF-508 - Peer to peer connection pooling
lolocohen : Issue closed
August 19, 2017
10:08  Feature request JPPF-28 - Asynchronous communication between servers
lolocohen : Issue closed
August 12, 2017
10:33  Feature request JPPF-445 - Provide access to the node from a task
lolocohen : Issue closed
August 10, 2017
10:00 JPPF 5.2.8
New version released
August 09, 2017
08:09  Bug report JPPF-512 - PeerAttributesHandler spawns too many threads
lolocohen : Issue closed
07:56  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue closed
06:41  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue created
When using a POJO task where one of the methods or the constructor is annotated with @JPPFRunnable, the node executing the task throws a ClassNotFoundException saying it can't find the class of the POJO task.
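For reference, a minimal sketch of the kind of POJO task that triggers this; the package of the @JPPFRunnable annotation is an assumption here, while the annotation usage and JPPFJob.add(Object, Object...) follow the standard API:
import org.jppf.client.JPPFJob;
import org.jppf.node.protocol.JPPFRunnable; // annotation package assumed

public class PojoTaskExample {
  public static class MyPojoTask {
    // the annotated method is executed on the node with the arguments passed to job.add()
    @JPPFRunnable
    public String compute(int value) {
      return "result of " + (value * 2);
    }
  }

  public static void main(String[] args) throws Exception {
    JPPFJob job = new JPPFJob();
    // executing this job on a node results in a ClassNotFoundException for MyPojoTask
    job.add(new MyPojoTask(), 21);
  }
}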
August 08, 2017
09:56  Bug report JPPF-512 - PeerAttributesHandler spawns too many threads
lolocohen : Issue created
The PeerAttributesHandler class uses a thread pool to handle JMX notifications from peer drivers when they update their number of nodes and total number of node threads. It uses Runtime.getRuntime().availableProcessors() as the number of threads, which seems wasteful since the tasks performed by these threads are very short-lived.

We should instead use a configuration property "jppf.peer.handler.threads", defaulting to 1, to configure this number of threads.
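A sketch of the proposed driver configuration entry (the property name is the one proposed above and does not exist yet):
# number of threads handling peer driver attribute notifications; proposed default: 1
jppf.peer.handler.threads = 1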
08:49  Feature request JPPF-480 - Jobs persistence in the driver
lolocohen : Issue closed
July 10, 2017
10:26  Bug report JPPF-510 - Documentation on job listeners does not mention isRemoteExecution() and getConnection() methods of JobEvent
lolocohen : Issue closed
July 08, 2017
06:23  Feature request JPPF-511 - Ability to persist and reuse the state of adaptive load-balancers
lolocohen : Issue created
From [http://www.jppf.org/forums/index.php/topic,7993.0.html this forum post]:

> Adaptive algorithms use statistics, but when the driver restarts or there is a hardware failure, the statistics are gone and the load-balancing algorithm's adaptation starts over from the beginning.
>
> - Is it possible (and does it make sense?) to save job execution statistics periodically and load them back into the same driver when it restarts, or into another driver which is already running?
> - Another idea: maybe share these statistics with peer drivers, so that when one of them goes down, the information still exists on the other peers, and when it restarts or a new driver is added as a peer, it starts with the existing statistics.
>
> We are planning to use p2p because of the risk of a single point of failure, but the progress of the algorithm's learning is important and it shouldn't reset each time the server restarts.
06:08  Bug report JPPF-510 - Documentation on job listeners does not mention isRemoteExecution() and getConnection() methods of JobEvent
lolocohen : Issue created
The documentation on [http://www.jppf.org/doc/5.2/index.php?title=Jobs_runtime_behavior,_recovery_and_failover#Job_lifecycle_notifications:_JobListener '''job listeners'''] does not mention the '''isRemoteExecution()''' and '''getConnection()''' methods in the [http://www.jppf.org/javadoc/5.2/index.html?org/jppf/client/event/JobEvent.html '''JobEvent'''] class.
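For illustration, a minimal job listener using these two methods; this is only a sketch, and it assumes the JobListenerAdapter base class and the jobDispatched() callback from the same org.jppf.client.event package as JobEvent:
import org.jppf.client.event.JobEvent;
import org.jppf.client.event.JobListenerAdapter;

public class DispatchLoggingListener extends JobListenerAdapter {
  @Override
  public void jobDispatched(JobEvent event) {
    if (event.isRemoteExecution()) {
      // getConnection() gives access to the client connection used for this dispatch
      System.out.println("job dispatched via connection " + event.getConnection());
    } else {
      System.out.println("job dispatched to the local executor");
    }
  }
}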
June 25, 2017
10:45  Bug report JPPF-509 - Regression: topology monitoring API does not detect peer to peer connections anymore
lolocohen : Issue closed
09:03  Bug report JPPF-509 - Regression: topology monitoring API does not detect peer to peer connections anymore
lolocohen : Issue created
I've noticed in the admin console that peer to peer driver connections are not detected anymore. Looking at the logs, I could see that the topology monitoring API never logs peer connections. I suspect this is due to the JPPFNodeForwardingMBean excluding peer nodes when retrieving the nodes specified with a NodeSelector.
08:13  Feature request JPPF-508 - Peer to peer connection pooling
lolocohen : Issue created
Currently, in a multi-server topology where servers are connected to each other, each server can only send one job at a time to each of its peers. This has an impact on scalability.

It is possible to "trick" each server into connecting multiple times to the same peer, but this only works with manual peer configuration, for example:
jppf.peers = driver2a driver2b
jppf.peer.driver2a.server.host = localhost
jppf.peer.driver2a.server.port = 11111
jppf.peer.driver2b.server.host = localhost
jppf.peer.driver2b.server.port = 11111
However, this is quite cumbersome and is not possible with auto discovery of peer drivers.

We propose to enable the definition of connection pools instead, with a configurable pool size:
jppf.peers = driver2
# five connections to driver2
jppf.peer.driver2.pool.size = 5
jppf.peer.driver2.server.host = localhost
jppf.peer.driver2.server.port = 11111
or, with peer discovery enabled:
jppf.peer.discovery.enabled = true
# five connections to each discovered peer
jppf.peer.pool.size = 5
07:00  Feature request JPPF-507 - New persisted jobs view in the web and desktop admin consoles
lolocohen : Issue created
The feature request JPPF-480 provides a pluggable way for the driver to persist jobs, to enable both job failover/recovery and the ability to execute jobs and retrieve their results offline. In particular, it provides a client-side API to administer persisted jobs.

We propose to add an administration interface to the web and desktop consoles to allow users to perform these tasks graphically in addition to programmatically.
06:45  Bug report JPPF-506 - Client side load-balancer does not use the configuration passed to the JPPFClient constructor
lolocohen : Issue closed
June 24, 2017
06:46  Bug report JPPF-506 - Client side load-balancer does not use the configuration passed to the JPPFClient constructor
lolocohen : Issue created
When using the constructor JPPFClient(String uuid, TypedProperties config, ConnectionPoolListener... listeners), the load-balancer for this client does not use the supplied TypedProperties object, but instead uses the global configuration via a static call to JPPFConfiguration.getProperties(). This results in the wrong settings being applied to the client-side load-balancer.

A possible workaround is to dynamically set the load-balancer configuration once the client is initialized, using JPPFClient.setLoadBalancerSettings(String algorithm, Properties config).
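A sketch of this workaround, using only the constructor and method signatures quoted above; the algorithm name and parameter shown are illustrative, not a recommendation:
import java.util.Properties;

import org.jppf.client.JPPFClient;
import org.jppf.utils.TypedProperties;

public class LoadBalancerWorkaround {
  public static void main(String[] args) throws Exception {
    TypedProperties config = new TypedProperties();
    // ... populate the client configuration; its load-balancer settings are ignored due to this bug ...
    try (JPPFClient client = new JPPFClient(null, config)) {
      // workaround: re-apply the intended load-balancer settings once the client is initialized
      Properties lbConfig = new Properties();
      lbConfig.setProperty("performanceCacheSize", "1000"); // illustrative parameter
      client.setLoadBalancerSettings("proportional", lbConfig);
      // ... submit jobs as usual ...
    }
  }
}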
June 20, 2017
07:03  Enhancement JPPF-505 - Ability to disable the bias towards local node in the driver
lolocohen : Issue closed
06:41  Enhancement JPPF-505 - Ability to disable the bias towards local node in the driver
lolocohen : Issue created
Currently, when a driver is configured with a local (same JVM) node, this local node is always given priority for job scheduling. We propose to give users the ability to disable this behavior via a driver configuration property such as "jppf.local.node.bias = false", with a default value of "true" to keep compatibility with previous versions.
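The corresponding driver configuration would then look like this (property name as proposed above, default true):
# when false, the local node no longer gets scheduling priority over remote nodes
jppf.local.node.bias = false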
June 15, 2017
06:53  Bug report JPPF-504 - Local node never completes connection to server
lolocohen : Issue closed
June 14, 2017
06:47  Bug report JPPF-504 - Local node never completes connection to server
lolocohen : Issue created
When starting a JPPF driver with a local node, the local node does not complete its connection with the driver it is embedded in, even though it displays the message "Node successfully initialized". The node then behaves as if it were not started at all, and does not appear in the administration console.
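For context, the setup that exhibits the problem is a driver started with a local node enabled in its configuration; assuming the usual property for this, the relevant entry would be:
# run a node in the same JVM as the driver
jppf.local.node.enabled = true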
June 12, 2017
10:00 JPPF 5.2.7
New version released
June 11, 2017
20:19  Enhancement JPPF-502 - Ability to dynamically change the settings of the client load balancer
lolocohen : Issue closed
07:27  Bug report JPPF-503 - JPPF Serialization: ConcurrentModificationException when serializing a java.util.Vector
lolocohen : Issue closed
June 08, 2017
08:36  Bug report JPPF-503 - JPPF Serialization: ConcurrentModificationException when serializing a java.util.Vector
lolocohen : Issue created
When trying to serialize a Spring ApplicationContext using the JPPF serialization scheme, I get the following exception:
2017-06-08 08:02:07,204 [DEBUG][org.jppf.client.balancer.ChannelWrapperRemote.run(231)]:
java.io.IOException
at org.jppf.serialization.JPPFObjectOutputStream.writeObjectOverride(JPPFObjectOutputStream.java:91)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at org.jppf.serialization.DefaultJPPFSerialization.serialize(DefaultJPPFSerialization.java:58)
at org.jppf.utils.ObjectSerializerImpl.serialize(ObjectSerializerImpl.java:79)
at org.jppf.io.IOHelper.serializeDataToMemory(IOHelper.java:330)
at org.jppf.io.IOHelper.serializeData(IOHelper.java:311)
at org.jppf.io.IOHelper.sendData(IOHelper.java:283)
at org.jppf.client.BaseJPPFClientConnection.sendTasks(BaseJPPFClientConnection.java:137)
at org.jppf.client.JPPFClientConnectionImpl.sendTasks(JPPFClientConnectionImpl.java:34)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.ConcurrentModificationException
at java.util.Vector$Itr.checkForComodification(Vector.java:1184)
at java.util.Vector$Itr.next(Vector.java:1137)
at org.jppf.serialization.VectorHandler.writeDeclaredFields(VectorHandler.java:49)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:179)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.JPPFObjectOutputStream.writeObjectOverride(JPPFObjectOutputStream.java:89)
... 12 more
The fix in VectorHandler appears to be simple: replace the line
for (Object o: vector) serializer.writeObject(o);
with
// iterate over a copy of the vector, so that concurrent modifications cannot break the iteration
List<Object> list = new ArrayList<>(vector);
for (Object o: list) serializer.writeObject(o);
This works in the scenario I used to reproduce the issue.