JPPF Issue Tracker
JPPF (jppf)
April 08, 2018
09:02 JPPF 5.1.1
A new milestone has been reached
April 02, 2018
14:00 JPPF 5.2.9
New version released
March 15, 2018
13:53  Feature request JPPF-528 - Restructuring of and major improvements to the documentation
lolocohen : Issue created
Excellent feedback, suggestions and comments from a long-time JPPF user:
# It is not obvious that JPPF benefits from multi-core processors. We might want to make that more obvious, as it could help attract more users and help people decide what kind of cloud service or physical servers to use. People might be wasting time multi-threading their programs when it would be better to use JPPF. There was a time when Rackspace had only single-core servers, and JPPF would have been fine for that; Linode has many cores, and using them with JPPF is not wasteful. Moving from one to the other requires no programming or configuration changes. To me this was a non-obvious benefit of JPPF.
# It isn't obvious either that JPPF is probably better suited to many small tasks than to a few tasks with a long elapsed time. That is, 64 million tasks of 10 seconds each is better than 64 tasks of 10 million seconds each. Knowing this might save people some rewriting effort.
# The documentation is kind of discouraging and lengthy. This results in readers being unwilling or unable to read the whole thing, because they cannot grasp the relevance of much of it. I haven't looked at the documentation in recent years, so perhaps my comment is no longer relevant. Yes it is!
# The following use case is typical: homogeneous tasks, each lasting many hours, one task per processor core, no internet and no database, since everything required is in the serialized object passed to each processor core. The fact that it is typical may lead users to think they can set up JPPF by themselves, but in the end they actually need help from the forums. The allocation decisions JPPF makes (i.e. how the tasks are distributed to the nodes) are often very difficult to understand; in particular, the "nodethreads" algorithm is hard to follow. A suggestion is that each typical case be matched with a brief cheat sheet.
# It is not obvious where to enable assertions. It was observed that they needed to be enabled in 2 places: the invocation of the application and the node configuration file. It is not obvious that it is not necessary to enable them for the invocation of the driver, in the driver's configuration file, or for the invocation of the node. This would be good to put on a cheat sheet.
# When an application needs more memory, it is not obvious that it is not necessary to increase it for the driver launcher, the node launcher or the driver configuration; rather, it must be done for the application and in the node properties (see the configuration sketch after this list). This would be good to put on a cheat sheet.
# For some applications virtual memory doesn't solve a problem; it creates one. Virtual memory is probably helpful mainly in applications whose memory needs vary widely over time. A compute-intensive application might have a nearly constant memory need. If swapping happens at all, not only will it slow progress, but worse, it will probably persist for hours. It was found best to disable swapping and discover as soon as possible whether more physical memory is needed. While this is not a JPPF-specific problem, it is relevant for some users, and it would help to know it as early as possible in the development life cycle.
# Files that contain the "jppf.jvm.options" property are probably the most relevant for the user, so it would be good to point out that some configuration files are not important, and that, while the configuration files are long, only a few lines matter to most users. Suggestions like "do not waste time looking at log4j-node.properties" are useful, because you will not enable assertions or increase memory there.
# It is a minor point but sometimes the word "server" appears and it is confusing because everything is a server. The line "#jppf.server.host=localhost" is confusing. If the reader thinks a node is a "server" because the driver is a "master", he will be more confused because he will think this property is supposed to point to a node. The default value "localhost" is confusing because the majority of the time it will not be local. Perhaps it could be "#jppf.driver.host=IPADDRESS".
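A hedged, cheat-sheet-style configuration sketch for points 5, 6 and 7 above, assuming a typical setup where only the application's own JVM invocation and the node configuration need changing (the option values are placeholders):
# in the node configuration file (e.g. jppf-node.properties), a single property sets the node JVM's options,
# so assertions and heap size for task execution are configured here (values are placeholders)
jppf.jvm.options = -server -Xmx4g -ea
# the client application enables assertions / sets its heap on its own command line (java -ea -Xmx4g ...);
# nothing needs to change in the driver launcher or driver configuration for this purpose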
March 14, 2018
08:34  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue closed
08:24  Bug report JPPF-527 - Admin console's job data view does not display anything when client pool size is greater than 1
lolocohen : Issue created
When the admin console's configuration specifies a pool size greater than 1 for the client connections to the driver(s), the "Job data" view stays empty.

This happens in 5.1.x releases, but has been fixed in 5.2.x releases and later.
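For reference, a hedged sketch of a console/client configuration that reproduces the condition (the driver name, host and port are placeholders; the per-driver pool size property follows the standard client configuration format):
jppf.drivers = driver1
driver1.jppf.server.host = 192.168.1.20
driver1.jppf.server.port = 11111
# a pool size greater than 1 is what leaves the "Job data" view empty in 5.1.x
driver1.jppf.pool.size = 2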
March 03, 2018
13:15 JPPF 4.2.9
A new milestone has been reached
10:46  Feature request JPPF-526 - Enable NIO-based recovery/heartbeat mechanism
lolocohen : Issue created
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Configuring_a_JPPF_server#Recovery_from_hardware_failures_of_remote_nodes recovery mechanism] uses a connection to a separate driver port. We propose to rewrite it so that it uses the non-blocking, NIO-based communications, which will allow it to use the same port as the node and client connections. It will then be a matter of enabling it or not.
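For context, a hedged sketch of the driver-side configuration with the current mechanism and its dedicated recovery port; what remains after the NIO-based rewrite is an assumption based on this proposal:
# current mechanism: the driver listens for heartbeat connections on a separate port
jppf.recovery.enabled = true
jppf.recovery.server.port = 22222
# after the rewrite, heartbeats would travel over the same port as node and client connections,
# so only an enable/disable flag (or equivalent) would remain relevant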
January 30, 2018
12:36 JPPF 5.1
A new milestone has been reached
January 18, 2018
14:25 JPPF 5.0
A new milestone has been reached
January 17, 2018
09:01  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Issue closed
08:21  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue closed
January 13, 2018
08:08  Bug report JPPF-525 - Deadlock in the driver during automated test run
lolocohen : Issue created
During the run of the tests for test.org.jppf.client.event.TestJobListener on 5.2.8, the following deadlock was detected:
- thread id 23 "JPPF NIO-0004" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4 which is held by thread id 20 "JPPF NIO-0001"
- thread id 20 "JPPF NIO-0001" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515 which is held by thread id 23 "JPPF NIO-0004"

Stack trace information for the threads listed above

"JPPF NIO-0004" - 23 - state: WAITING - blocked count: 3 - blocked time: 0 - wait count: 33 - wait time: 11618
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.protocol.AbstractServerJobBase.addBundle(AbstractServerJobBase.java:229)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:121)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:88)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:78)
- locked org.jppf.nio.SelectionKeyWrapper@158778fd
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
- java.util.concurrent.ThreadPoolExecutor$Worker@6524a69

"JPPF NIO-0001" - 20 - state: WAITING - blocked count: 7 - blocked time: 1 - wait count: 32 - wait time: 11579
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@71183515
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.getJob(JPPFPriorityQueue.java:286)
at org.jppf.server.queue.JPPFPriorityQueue.getBundleForJob(JPPFPriorityQueue.java:371)
at org.jppf.server.nio.client.CompletionListener.removeJobFromQueue(CompletionListener.java:121)
at org.jppf.server.nio.client.CompletionListener.taskCompleted(CompletionListener.java:74)
at org.jppf.server.protocol.ServerTaskBundleClient.fireTasksCompleted(ServerTaskBundleClient.java:361)
at org.jppf.server.protocol.ServerTaskBundleClient.resultReceived(ServerTaskBundleClient.java:218)
at org.jppf.server.protocol.ServerJob.resultsReceived(ServerJob.java:139)
at org.jppf.server.protocol.ServerTaskBundleNode.resultsReceived(ServerTaskBundleNode.java:197)
at org.jppf.server.protocol.ServerJob.handleCancelledStatus(ServerJob.java:241)
at org.jppf.server.protocol.ServerJob.cancel(ServerJob.java:276)
at org.jppf.server.nio.client.ClientContext.cancelJobOnClose(ClientContext.java:279)
at org.jppf.server.nio.client.ClientContext.handleException(ClientContext.java:121)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:94)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@6c452fb4
- java.util.concurrent.ThreadPoolExecutor$Worker@69663655
Attaching the whole logs zip for this test run.
January 09, 2018
13:45  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
lolocohen : Issue closed
January 08, 2018
16:52  Bug report JPPF-524 - Stream isn't closed properly in org.jppf.utils.VersionUtils#createVersionInfo
igor : Issue created
During a restart of a Payara server, I sometimes see this warning in the logs:
[2018-01-08T13:31:47.119+0100] [Payara 4.1] [WARNING] [NCLS-COMUTIL-00023] [javax.enterprise.system.util] [[
Input stream has been finalized or forced closed without being explicitly closed; stream instantiation reported in following stack trace
java.lang.Throwable
at com.sun.enterprise.loader.ASURLClassLoader$SentinelInputStream.<init>(ASURLClassLoader.java:1284)
at com.sun.enterprise.loader.ASURLClassLoader$InternalJarURLConnection.getInputStream(ASURLClassLoader.java:1392)
at java.net.URLClassLoader.getResourceAsStream(URLClassLoader.java:238)
at com.sun.enterprise.loader.ASURLClassLoader.getResourceAsStream(ASURLClassLoader.java:936)
at org.jppf.utils.VersionUtils.createVersionInfo(VersionUtils.java:59)
at org.jppf.utils.VersionUtils.<clinit>(VersionUtils.java:42)
at org.jppf.client.AbstractJPPFClient.<init>(AbstractJPPFClient.java:97)
at org.jppf.client.AbstractGenericClient.<init>(AbstractGenericClient.java:87)
at org.jppf.client.JPPFClient.<init>(JPPFClient.java:61)
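A hedged sketch of the kind of fix the report implies: close the classpath resource stream with try-with-resources. The class name VersionInfoFix, the resource name and the surrounding logic are placeholders, not the actual body of createVersionInfo():
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import org.jppf.utils.VersionUtils;

public class VersionInfoFix {
  // hypothetical illustration: read the version resource and close the stream deterministically
  static Properties readVersionProperties() {
    Properties props = new Properties();
    // placeholder resource name, not necessarily the file read by createVersionInfo()
    try (InputStream is = VersionUtils.class.getClassLoader().getResourceAsStream("META-INF/jppf-version.properties")) {
      if (is != null) props.load(is);
    } catch (IOException e) {
      // log the error and fall back to an "unknown version" value
    }
    return props;
  }
}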
January 04, 2018
13:38  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
lolocohen : Issue closed
January 01, 2018
22:54  Bug report JPPF-523 - nullpointer in org.jppf.utils.SystemUtils.addOtherSystemProperties
subes : Issue created
The following NullPointerException is thrown:
2018-01-01 21:46:26,498 [ |JPPF Client-0001 ] ERROR org.jppf.client.balancer.JobManagerClient.addConnection - Error while adding connection p://invesdwin.de:5532-1[invesdwin.de:5532] : NEW
java.lang.NullPointerException: null
at java.util.Hashtable.put(Hashtable.java:460)
at java.util.Properties.setProperty(Properties.java:166)
at org.jppf.utils.SystemUtils.addOtherSystemProperties(SystemUtils.java:124)
at org.jppf.utils.SystemUtils.getSystemProperties(SystemUtils.java:101)
at org.jppf.management.JPPFSystemInformation.populate(JPPFSystemInformation.java:282)
at org.jppf.management.JPPFSystemInformation.<init>(JPPFSystemInformation.java:107)
at org.jppf.management.JPPFSystemInformation.<init>(JPPFSystemInformation.java:83)
at org.jppf.client.balancer.ChannelWrapperRemote.<init>(ChannelWrapperRemote.java:68)
at org.jppf.client.balancer.JobManagerClient.addConnection(JobManagerClient.java:198)
at org.jppf.client.balancer.JobManagerClient$3.connectionAdded(JobManagerClient.java:139)
at org.jppf.client.AbstractJPPFClient.fireConnectionAdded(AbstractJPPFClient.java:307)
at org.jppf.client.AbstractGenericClient.newConnection(AbstractGenericClient.java:328)
at org.jppf.client.AbstractGenericClient.submitNewConnection(AbstractGenericClient.java:311)
at org.jppf.client.AbstractGenericClient$4.run(AbstractGenericClient.java:263)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
This is caused by a system property "jmockit-instrumentation" whose value is read back as null by System.getProperty(). A proper fix would be to ignore null values when adding the other properties:

private static void addOtherSystemProperties(final TypedProperties props) {
  try {
    // run as privileged so we don't have to set write access on all properties in the security policy file.
    Properties sysProps = AccessController.doPrivileged(new PrivilegedAction<Properties>() {
      @Override
      public Properties run() {
        return System.getProperties();
      }
    });
    Enumeration<?> en = sysProps.propertyNames();
    while (en.hasMoreElements()) {
      String name = (String) en.nextElement();
      try {
        // proposed fix: skip properties whose value is read back as null
        if (!props.contains(name)) {
          String value = System.getProperty(name);
          if (value != null) {
            props.setProperty(name, value);
          }
        }
      } catch (SecurityException e) {
        if (debugEnabled) log.debug(e.getMessage(), e);
        else log.info(e.getMessage());
      }
    }
  } catch (SecurityException e) {
    if (debugEnabled) log.debug(e.getMessage(), e);
    else log.info(e.getMessage());
  }
}
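For context, a minimal sketch of one way System.getProperty() can return null for a key listed in the system properties, assuming the offending property holds a non-String value (the jmockit-instrumentation name is taken from the report; the non-String value is an assumption about how it got there):
import java.util.Properties;

public class NullPropertyValueDemo {
  public static void main(String[] args) {
    // a tool can store a non-String value under a system property key...
    System.getProperties().put("jmockit-instrumentation", new Object());
    // ...which Properties.getProperty() reports as null
    String value = System.getProperty("jmockit-instrumentation");
    System.out.println("value = " + value); // prints "value = null"
    // passing that null to setProperty() reproduces the NullPointerException from the stack trace above
    new Properties().setProperty("jmockit-instrumentation", value);
  }
}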
December 02, 2017
02:53 JPPF 5.0.5
A new milestone has been reached
November 28, 2017
13:55  Enhancement JPPF-522 - Enhancements to the pluggable view sample
lolocohen : Issue closed
08:59  Enhancement JPPF-522 - Enhancements to the pluggable view sample
lolocohen : Issue created
Currently, the sample view displays job dispatch/return events as "job x dispatched to node y". We propose to add the following missing information:
* the driver which dispatches the job or to which it is returned
* whether the node is a real node or a peer driver seen as a node
The display would then look like this (see the formatting sketch after this list):
* for a dispatch event: "job x dispatched to {node | peer node} y by driver z"
* for a returned event: "job x returned from {node | peer node} y to driver z"
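A hypothetical Java helper showing only the proposed message formats; the plain string/boolean parameters are assumptions, not the pluggable view sample's actual event API:
// formats the proposed dispatch / return messages from plain values
static String formatEvent(boolean dispatched, String job, String node, boolean peerNode, String driver) {
  String nodeKind = peerNode ? "peer node" : "node";
  return dispatched
    ? String.format("job %s dispatched to %s %s by driver %s", job, nodeKind, node, driver)
    : String.format("job %s returned from %s %s to driver %s", job, nodeKind, node, driver);
}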
October 20, 2017
09:16  Bug report JPPF-521 - Document the "jppf.node.provisioning.master.uuid" configuration property
lolocohen : Issue created
The "jppf.node.provisioning.master.uuid" property, also represented as [http://jppf.org/javadoc/5.2/org/jppf/utils/configuration/JPPFProperties.html#PROVISIONING_MASTER_UUID '''JPPFProperties.PROVISIONING_MASTER_UUID'''] in the configuration API is a property that is only set on slave nodes and contains the UUID of the master node that started them.

It appears this property is only documented in the Javadoc and nowhere else. We should add something about it in the [http://www.jppf.org/doc/5.2/index.php?title=Node_provisioning '''provisioning'''] documentation.
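As a possible illustration for that documentation, a hedged sketch of reading the property from code running inside a slave node (e.g. a node startup class or a task); using JPPFConfiguration to access the node's configuration is an assumption:
import org.jppf.utils.JPPFConfiguration;

public class MasterUuidLookup {
  // returns the UUID of the master that started this slave node, or null when not running as a slave
  public static String masterUuid() {
    return JPPFConfiguration.getProperties().getString("jppf.node.provisioning.master.uuid", null);
  }
}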
October 09, 2017
08:25  Task JPPF-520 - Make JPPF work with Java 9
lolocohen : Issue created
The title says it all. We should be able to compile, build and run JPPF with Java 9. In particular, passing all automated tests will be the main acceptance criterion.
08:06  Feature request JPPF-519 - Admin console: ability to add custom data to the JVM health view and the charts
lolocohen : Issue created
The idea is to be able to add custom columns to the JVM health view of the desktop and web admin consoles, along with the ability to make data available for the charts in the desktop console. This would be the client-side counterpart to the changes proposed in feature request JPPF-396.

This should be implemented as a plugin for the admin console(s), with new data based on those found in the refactored [http://www.jppf.org/javadoc/6.0/index.html?org/jppf/management/diagnostics/HealthSnapshot.html '''HealthSnapshot'''].
October 05, 2017
09:00  Bug report JPPF-518 - Admin console job data view does not display peer drivers to which jobs are dispatched
lolocohen : Issue closed
08:50  Bug report JPPF-518 - Admin console job data view does not display peer drivers to which jobs are dispatched
lolocohen : Issue created
When a job is dispatched to a peer driver, the job data view of the admin console displays a blank entry instead of an icon and a host:port string.
October 04, 2017
08:45  Feature request JPPF-493 - Parametrized configuration properties
lolocohen : Issue closed
September 30, 2017
18:31  Bug report JPPF-517 - Deadlock in the driver during stress test
lolocohen : Issue closed
11:04  Bug report JPPF-517 - Deadlock in the driver during stress test
lolocohen : Issue created
While performing a stress test on the driver, I was monitoring it with the admin console, and the JVM Health view showed the following deadlock:
Deadlock detected

- thread id 32 "JPPF NIO-0008" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c which is held by thread id 30 "JPPF NIO-0006"
- thread id 30 "JPPF NIO-0006" is waiting to lock org.jppf.nio.SelectionKeyWrapper@20bda110 which is held by thread id 29 "JPPF NIO-0005"
- thread id 29 "JPPF NIO-0005" is waiting to lock java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c which is held by thread id 30 "JPPF NIO-0006"

Stack trace information for the threads listed above

"JPPF NIO-0008" - 32 - state: WAITING - blocked count: 5932 - blocked time: 2065 - wait count: 247708 - wait time: 864535
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:99)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:87)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@358ddcb3
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.ThreadPoolExecutor$Worker@7074c91e

"JPPF NIO-0006" - 30 - state: BLOCKED - blocked count: 6280 - blocked time: 212246 - wait count: 263218 - wait time: 669040
at org.jppf.server.nio.client.CompletionListener.taskCompleted(CompletionListener.java:85)
- waiting on org.jppf.nio.SelectionKeyWrapper@20bda110
at org.jppf.server.protocol.ServerTaskBundleClient.fireTasksCompleted(ServerTaskBundleClient.java:393)
at org.jppf.server.protocol.ServerTaskBundleClient.resultReceived(ServerTaskBundleClient.java:245)
at org.jppf.server.protocol.ServerJob.postResultsReceived(ServerJob.java:165)
at org.jppf.server.protocol.ServerJob.resultsReceived(ServerJob.java:132)
at org.jppf.server.protocol.ServerTaskBundleNode.resultsReceived(ServerTaskBundleNode.java:197)
at org.jppf.server.nio.nodeserver.WaitingResultsState.processResults(WaitingResultsState.java:151)
at org.jppf.server.nio.nodeserver.WaitingResultsState.process(WaitingResultsState.java:87)
at org.jppf.server.nio.nodeserver.WaitingResultsState.performTransition(WaitingResultsState.java:67)
at org.jppf.server.nio.nodeserver.WaitingResultsState.performTransition(WaitingResultsState.java:43)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@41f49664
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
- java.util.concurrent.ThreadPoolExecutor$Worker@1c3590e

"JPPF NIO-0005" - 29 - state: WAITING - blocked count: 6000 - blocked time: 1502 - wait count: 256419 - wait time: 877563
at sun.misc.Unsafe.park(Native Method)
- waiting on java.util.concurrent.locks.ReentrantLock$NonfairSync@7494873c
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:834)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:867)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1197)
at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:214)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:290)
at org.jppf.server.queue.JPPFPriorityQueue.addBundle(JPPFPriorityQueue.java:99)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:87)
at org.jppf.server.nio.client.WaitingJobState.performTransition(WaitingJobState.java:34)
at org.jppf.nio.StateTransitionTask.run(StateTransitionTask.java:79)
- locked org.jppf.nio.SelectionKeyWrapper@20bda110
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Locked ownable synchronizers:
- java.util.concurrent.ThreadPoolExecutor$Worker@7c52859c
September 25, 2017
07:53  Feature request JPPF-511 - Ability to persist and reuse the state of adaptive load-balancers
lolocohen : Issue closed
August 30, 2017
06:42  Feature request JPPF-443 - Variable substitutions and scripted expressions for execution policies arguments
lolocohen : Issue closed
06:42  Enhancement JPPF-514 - Lighter syntax for scripted property values
lolocohen : Issue closed
August 29, 2017
08:04  Enhancement JPPF-514 - Lighter syntax for scripted property values
lolocohen : Issue created
Examples of current syntax:
# following two are equivalent
my.prop1 = $script:javascript:inline{ 1 + 2 }$
my.prop2 = $script{ 1 + 2 }$

my.prop1 = $script:groovy:file{ /home/me/script.groovy }$
my.prop1 = $script:javascript:url{ file:///home/me/script.js }$
This is a bit cumbersome. We propose to relax the syntactic constraints and allow using 'S' or 's' instead of 'script', and specifying only the first character of each possible script source type ('u' or 'U' for url, etc.).
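A hedged before/after sketch of what the relaxed syntax might look like under this proposal (the shorthand forms are hypothetical until implemented):
# current form
my.prop = $script:javascript:url{ file:///home/me/script.js }$
# proposed shorthand: 's' for 'script' and the first letter of the source type
my.prop = $s:javascript:u{ file:///home/me/script.js }$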
August 21, 2017
08:45  Feature request JPPF-508 - Peer to peer connection pooling
lolocohen : Issue closed
August 19, 2017
10:08  Feature request JPPF-28 - Asynchronous communication between servers
lolocohen : Issue closed
August 12, 2017
10:33  Feature request JPPF-445 - Provide access to the node from a task
lolocohen : Issue closed
August 10, 2017
10:00 JPPF 5.2.8
New version released
August 09, 2017
08:09  Bug report JPPF-512 - PeerAttributesHandler spawns too many threads
lolocohen : Issue closed
07:56  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue closed
06:41  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue created
When using a POJO task where one of the methods or constructors is annotated with @JPPFRunnable, the node executing the task throws a ClassNotFoundException saying it can't find the class of the POJO task.
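For reference, a minimal sketch of the kind of POJO task that reportedly triggers the problem; the class and method names are made up, and the annotation's package (org.jppf.node.protocol) is an assumption:
import java.io.Serializable;
import org.jppf.node.protocol.JPPFRunnable;

// hypothetical POJO task: no JPPF Task superclass, just an annotated entry point
public class MyPojoTask implements Serializable {
  @JPPFRunnable
  public String process(String input) {
    // executing this on a node reportedly fails with a ClassNotFoundException for MyPojoTask
    return "processed " + input;
  }
}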