JPPF Issue Tracker
JPPF (jppf)
February 22, 2019
04:45  Bug report JPPF-581 - Setting a MBeanServerForwarder on the jmxremote-nio connector server has no effect
lolocohen : Issue closed
04:05  Bug report JPPF-581 - Setting a MBeanServerForwarder on the jmxremote-nio connector server has no effect
lolocohen : Issue created
When calling JPPFJMXConnectorServer.setMBeanServerForwarder(), the MBeanServerForwarder that is set is never used afterwards. I just realized I forgot to implement that part.
February 18, 2019
07:17  Enhancement JPPF-576 - Client methods for sync and async job submission, deprecation of blocking job flag
lolocohen : Issue closed
February 17, 2019
08:05  Enhancement JPPF-580 - Allow MavenCentralLocation or a subclass to get artifacts from different repositories, including snapshots
lolocohen : Issue closed
February 16, 2019
06:29  Enhancement JPPF-580 - Allow MavenCentralLocation or a subclass to get artifacts from different repositories, including snapshots
lolocohen : Issue created
Currently, [https://www.jppf.org/doc/6.1/index.php?title=The_Location_API#MavenCentralLocation MavenCentralLocation] only allows downloading artifacts from Maven Central. We propose to add the ability to specify a different repository, as well as the ability to download SNAPSHOT artifacts, for instance in a class named MavenLocation, of which MavenCentralLocation could be a specialized subclass.
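As a rough illustration of what this could look like, here is a minimal sketch; MavenLocation, its constructor and the repository URL parameter are assumptions about the proposed API (only MavenCentralLocation exists today, and the constructor forms shown are themselves assumptions):

// illustration only: MavenLocation and the constructor forms shown are assumptions, not an existing API
Location<?> fromCentral = new MavenCentralLocation("org.jppf:jppf-common:6.1");
Location<?> fromSnapshots = new MavenLocation(
  "https://oss.sonatype.org/content/repositories/snapshots",   // assumed repository URL parameter
  "org.jppf:jppf-common:6.1-SNAPSHOT");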
February 14, 2019
13:31  Enhancement JPPF-579 - Monitoring data providers: ability to configure a value converter for each datum
lolocohen : Issue closed
February 13, 2019
08:47  Enhancement JPPF-577 - JVM health monitoring enhancements
lolocohen : Issue closed
08:30  Enhancement JPPF-579 - Monitoring data providers: ability to configure a value converter for each datum
lolocohen : Issue created
[https://www.jppf.org/doc/6.1/index.php?title=Monitoring_data_providers '''Monitoring data providers'''] allow defining properties of various types that are monitored over time. However, there is currently no way to specify how these values should be displayed in the JVM health view of the desktop and web administration consoles. JPPF currently uses default conversions based on the type of each property, but this may not always be convenient.

For instance, let's say we want to monitor the JVM uptime. This value is expressed in milliseconds as a long integer value. However, in the GUI we'd rather have it displayed as days:hours:minutes:seconds.millis.

We propose to implement the ability to configure a value converter for each defined property to this effect.

For instance (just for example purposes, this is not what the actual design will be):
public interface MonitoringValueConverter {
  String convert(String value);
}

public abstract class MonitoringDataProvider {
  // ...

  public MonitoringDataProvider setConverter(String name, MonitoringValueConverter converter) {
    // ...
  }
}

public class MyProvider extends MonitoringDataProvider {
  // ...

  @Override
  public void defineProperties() {
    // ...
    // display the "time" property as a java.util.Date string instead of a raw number of milliseconds
    setLongProperty("time", -1L).setConverter("time", value -> new Date(Long.valueOf(value)).toString());
  }
}
February 11, 2019
14:32  Feature request JPPF-575 - IsMasterNode, IsSlaveNode and other convenience execution policies
lolocohen : Issue closed
00:02  Feature request JPPF-562 - Fix the preference execution policy
lolocohen : Issue closed
February 10, 2019
17:15  Enhancement JPPF-578 - Allow jppf-admin-web jar dependency as alternative to war to make embedding possible
gsubes : Issue created
I would like to embed jppf-admin-web into my own embedded webserver as an executable jar. I need jppf-admin-web as a jar dependency instead of a war to make this work. I would define my own web.xml for this and ignore the one inside the war.

See description here: https://pragmaticintegrator.wordpress.com/2010/10/22/using-a-war-module-as-dependency-in-maven/

You would need to add:
<build>
  ...
  <plugins>
    ...
    <plugin>
      <artifactId>maven-war-plugin</artifactId>
      <version>${version.maven-war-plugin}</version>
      <configuration>
        <!-- also produce an additional artifact with the "classes" classifier -->
        <attachClasses>true</attachClasses>
      </configuration>
    </plugin>
    ...
  </plugins>
  ...
</build>
So that I could use:
<dependency>
  <groupId>org.jppf</groupId>
  <artifactId>jppf-admin-web</artifactId>
  <version>${version.jppf}</version>
  <classifier>classes</classifier>
</dependency>
Also it would be nice if you could define jppf.css and the images/ folder as Maven resources behind a package name and add those resources into the classes folder. You could then mount those resources in your Wicket application under your current paths using PackageResourceReferences to serve them from the classpath. This makes embedding easier, and I wouldn't have to copy these resources myself.
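A rough sketch of what the embedding side could look like, using Wicket's PackageResourceReference as suggested above; the class names (EmbeddedConsoleApplication, HomePage) and the assumption that jppf.css is packaged next to the application class are placeholders, not actual JPPF classes or paths:

import org.apache.wicket.Page;
import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.protocol.http.WebApplication;
import org.apache.wicket.request.resource.PackageResourceReference;

// placeholder embedding application, not a JPPF class
public class EmbeddedConsoleApplication extends WebApplication {

  // trivial placeholder home page (its markup file is omitted here)
  public static class HomePage extends WebPage {
  }

  @Override
  public Class<? extends Page> getHomePage() {
    return HomePage.class;
  }

  @Override
  protected void init() {
    super.init();
    // serve jppf.css from the classpath at the path the console pages already reference,
    // assuming it was packaged as a classpath resource next to this class
    mountResource("jppf.css", new PackageResourceReference(EmbeddedConsoleApplication.class, "jppf.css"));
  }
}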
February 05, 2019
07:33  Enhancement JPPF-577 - JVM health monitoring enhancements
lolocohen : Issue created
We propose to add a number of data elements to the JVM health monitoring:
* peak thread count and total created threads (to be displayed in the same column as the live thread count, i.e. "live / peak / total")
* JVM uptime
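Both values can be read from the standard platform MXBeans; the following minimal sketch (not the actual provider code) shows where they come from:

import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.lang.management.ThreadMXBean;

public class ThreadAndUptimeSample {
  public static void main(String[] args) {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
    // "live / peak / total" as proposed for the thread count column
    String threadCounts = threads.getThreadCount() + " / " + threads.getPeakThreadCount()
      + " / " + threads.getTotalStartedThreadCount();
    long uptime = runtime.getUptime();  // JVM uptime in milliseconds
    System.out.println("threads: " + threadCounts + ", uptime: " + uptime + " ms");
  }
}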
07:00  Enhancement JPPF-576 - Client methods for sync and async job submission, deprecation of blocking job flag
lolocohen : Issue created
Jobs have a [https://www.jppf.org/doc/6.0/index.php?title=Dealing_with_jobs#Non-blocking_jobs blocking job] attribute whose semantics are confusing. Technically, there is no difference between a blocking (i.e. synchronous) and a non-blocking (asynchronous) job. The difference is only in the client code that submits the job (JPPFClient.submitJob() method).

We consider that a job should be submittable either synchronously or asynchronously, regardless of its state.

To this effect, we propose to deprecate the '''blocking''' job attribute in JPPFJob, as well as the '''submitJob()''' method in JPPFClient, and to add the '''submit(JPPFJob)''' and '''submitAsync(JPPFJob)''' methods to JPPFClient instead, to fulfill the same functionality.
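For illustration, a sketch of how the proposed methods could be used; HelloTask stands for any task implementation, and awaitResults() is an assumption about how the results of an asynchronous submission would be collected:

import java.util.List;
import org.jppf.client.JPPFClient;
import org.jppf.client.JPPFJob;
import org.jppf.node.protocol.AbstractTask;
import org.jppf.node.protocol.Task;

public class SubmitSample {

  // trivial task used for the example
  public static class HelloTask extends AbstractTask<String> {
    @Override
    public void run() {
      setResult("hello");
    }
  }

  public static void main(String[] args) throws Exception {
    try (JPPFClient client = new JPPFClient()) {
      JPPFJob syncJob = new JPPFJob();
      syncJob.add(new HelloTask());
      // synchronous submission: blocks until all results are available
      List<Task<?>> results = client.submit(syncJob);

      JPPFJob asyncJob = new JPPFJob();
      asyncJob.add(new HelloTask());
      // asynchronous submission: returns immediately
      client.submitAsync(asyncJob);
      // collect the results later; awaitResults() is an assumption, not part of this proposal
      List<Task<?>> asyncResults = asyncJob.awaitResults();
    }
  }
}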

Also, the deprecated members should '''''not''''' be removed before the next major version (v7.0) or even later, to ensure that users have plenty of advance warning and time to adjust their applications. In other words, this should be a long-term deprecation.

Care should also be taken to adapt the J2EE/JCA connector to take this into account.
February 03, 2019
23:41  Feature request JPPF-575 - IsMasterNode, IsSlaveNode and other convenience execution policies
lolocohen : Issue closed
February 02, 2019
21:40  Feature request JPPF-575 - IsMasterNode, IsSlaveNode and other convenience execution policies
lolocohen : Issue created
Currently, an execution policy predicate to determine whether a node is a master or slave node is written as follows:
ExecutionPolicy masterPolicy = new Equal("jppf.node.provisioning.master", true);
with the XML equivalent:
<Equal valueType="boolean">
  <Property>jppf.node.provisioning.master</Property>
  <Value>true</Value>
</Equal>
This is quite cumbersome. We propose to implement simple policy classes instead, as follows:
ExecutionPolicy masterPolicy = new IsMasterNode();
ExecutionPolicy slavePolicy = new IsSlaveNode();
with the XML equivalents:
<IsMasterNode/>
<IsSlaveNode/>
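For illustration, such a policy would then be set on the job SLA like any other execution policy (assuming the proposed IsMasterNode class):

JPPFJob job = new JPPFJob();
// only dispatch this job to master nodes (IsMasterNode is the class proposed above)
job.getSLA().setExecutionPolicy(new IsMasterNode());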
12:41  Feature request JPPF-558 - Node provisioning notifications
lolocohen : Issue closed
January 28, 2019
23:07  Feature request JPPF-573 - Pluggable mechanism to warn the driver when a node can't accept any more jobs
lolocohen : Issue closed
06:36  Task JPPF-574 - Reorganize the documentation into a single LibreOffice document
lolocohen : Issue closed
January 27, 2019
04:44  Task JPPF-574 - Reorganize the documentation into a single LibreOffice document
lolocohen : Issue created
Since the beginning, the documentation has been organized into multiple .odt documents, grouped via a master (.odm) file. This has notoriously caused problems with cross-document links, in particular in the generated PDF version.

I propose to group all documents into a single one instead, and fix the links.
January 22, 2019
08:44  Feature request JPPF-569 - New job SLA attributes
lolocohen : Issue closed
January 19, 2019
22:22  Feature request JPPF-573 - Pluggable mechanism to warn the driver when a node can't accept any more jobs
lolocohen : Issue created
Since a node can process any number of jobs concurrently, there is a risk that it can be overwhelmed, and its performance may degrade suddenly or it could even crash, for instance because of an out of memory condition.

We propose to implement a pluggable mechanism in the node to alert the driver that it cannot accept any more jobs when a given condition is true. The same mechanism would send another alert when the condition is no longer true, so the node can resume taking additional jobs.
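A purely hypothetical sketch of what such a pluggable check could look like; none of these names exist in JPPF, and the heap-usage condition is just one possible example:

// hypothetical SPI: the interface and method names are illustrative only
interface NodeJobAcceptanceCheck {
  // return false to tell the driver this node cannot accept more jobs,
  // true once it can resume taking additional jobs
  boolean canAcceptNewJobs();
}

// example condition: refuse new jobs when more than 90% of the max heap is used
class HeapBasedAcceptanceCheck implements NodeJobAcceptanceCheck {
  @Override
  public boolean canAcceptNewJobs() {
    Runtime rt = Runtime.getRuntime();
    long used = rt.totalMemory() - rt.freeMemory();
    return (double) used / rt.maxMemory() < 0.9;
  }
}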
January 17, 2019
08:01  Enhancement JPPF-433 - Add missing value snapshots to the server statistics
lolocohen : Issue closed
07:48  Task JPPF-572 - Performance, endurance and stress testing
lolocohen : Issue created
Given the major changes in the upcoming 6.1 release, in particular feature request JPPF-548, feature request JPPF-549 and feature request JPPF-564, it is important to check for possible regressions of the performance and health indicators.

Some specific points to check:
* load-balancing performance: how is load-balancing impacted by the fact that nodes can now process multiple jobs concurrently?
* look for memory leaks; I'm hoping endurance tests will help with that
* on the client side, attempt to measure the performance impact of single vs. multiple connections with multiple concurrent jobs
January 15, 2019
07:06  Feature request JPPF-570 - Accessing the job from a task
lolocohen : Issue closed
January 10, 2019
08:24  Feature request JPPF-564 - Asynchronous communication between node and driver
lolocohen : Issue closed
January 05, 2019
18:28  Bug report JPPF-571 - Driver started with JPPFDriver noLauncher exits immediately when jppf.discovery.enabled=false
lolocohen : Issue closed
04:52  Bug report JPPF-571 - Driver started with JPPFDriver noLauncher exits immediately when jppf.discovery.enabled=false
lolocohen : Issue created
When starting a driver using the main class org.jppf.server.JPPFDriver, with the argument "noLauncher", the driver will exit immediately after its initialization, when UDP multicast discovery is disabled, that is, when the configuration property "jppf.discovery.enabled" is set to "false".

This is due to the fact that the UDP broadcast thread is the only non-daemon thread started at driver startup. The driver startup essentially starts new threads, and nothing prevents the JVM from exiting when only daemon threads are started.
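A minimal illustration of the underlying JVM behavior (not JPPF code): once main() returns, the JVM exits if only daemon threads remain.

public class DaemonExitDemo {
  public static void main(String[] args) {
    Thread worker = new Thread(() -> {
      while (true) { }  // simulates the server doing its work
    });
    worker.setDaemon(true);  // with setDaemon(false) the JVM would keep running
    worker.start();
    // main() returns here; since the only remaining thread is a daemon, the JVM exits
  }
}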
December 29, 2018
01:35  Feature request JPPF-570 - Accessing the job from a task
lolocohen : Issue created
We propose to add the following method to the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/Task.html '''Task'''] interface and its [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/AbstractTask.html '''AbstractTask'''] default implementation, such that the code in the task can access the job as an instance of the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/JPPFDistributedJob.html '''JPPFDistributedJob'''] interface:
public interface Task extends Runnable, Serializable, Interruptibility {
  // get the job this task is a part of
  JPPFDistributedJob getJob();

  // ... other methods
}
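For illustration, assuming the proposed getJob() accessor, a task could then read its enclosing job's metadata from its own code:

import org.jppf.node.protocol.AbstractTask;
import org.jppf.node.protocol.JPPFDistributedJob;

public class MyTask extends AbstractTask<String> {
  @Override
  public void run() {
    // proposed accessor: retrieve the enclosing job from within the task
    JPPFDistributedJob job = getJob();
    setResult("executed as part of job " + job.getName());
  }
}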
December 23, 2018
23:43  Feature request JPPF-569 - New job SLA attributes
lolocohen : Issue created
We propose to add the following attributes to the server-side SLA of a job:
* whether to accept peer servers in a multi-driver topology (this is already possible via the [https://www.jppf.org/doc/6.1/index.php?title=Execution_policy_properties#JPPF_configuration_properties "jppf.peer.driver"] boolean property available to an execution policy).
* max driver depth: in a multi-server topology, an upper bound on how many drivers a job can be transferred to before being executed on a node. '''''Done'''''
* maximum dispatch size: the maximum number of tasks in a job that can be sent at once to a node (driver-side SLA) or to a driver (client-side SLA)
* allow multiple dispatches to the same node (driver-side SLA) or driver (client-side SLA): a flag to specify whether a job can be dispatched to the same node or driver multiple times at any given moment. This is in anticipation of the completion of feature request JPPF-564
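A purely hypothetical illustration of how these attributes might surface on the SLA; none of these setter names exist yet, they merely mirror the list above:

JPPFJob job = new JPPFJob();
// hypothetical setters mirroring the proposed attributes
job.getSLA().setAcceptPeerServers(false);                     // do not let peer drivers take this job
job.getSLA().setMaxDriverDepth(2);                            // at most 2 driver hops before execution on a node
job.getSLA().setMaxDispatchSize(100);                         // at most 100 tasks per dispatch to a node
job.getSLA().setAllowMultipleDispatchesToSameChannel(true);   // concurrent dispatches to the same node allowed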
December 19, 2018
07:37  Bug report JPPF-567 - JMXDriverConnectionWrapper.getAllJobIds still exists and raises an exception
lolocohen : Issue closed
06:46  Bug report JPPF-568 - Exceptions shown in the log when JMXDriverConnectionWrapper fails to connect to the driver
lolocohen : Issue closed
December 18, 2018
08:30  Bug report JPPF-568 - Exceptions shown in the log when JMXDriverConnectionWrapper fails to connect to the driver
lolocohen : Issue created
When no driver is started, the log level is set to DEBUG for the package org.jppf.management, and a connection is attempted via JMXDriverConnectionWrapper, the following is shown in the log:
2018-12-18 07:16:44,485 [WARN ][jmx@localhost:11111 ][org.jppf.comm.socket.QueuingSocketInitializer.initialize(68)]: java.lang.InterruptedException
2018-12-18 07:16:44,491 [DEBUG][jmx@localhost:11111 ][org.jppf.management.JMXConnectionThread.run(68)]: localhost:11111 JMX URL = service:jmx:jppf://localhost:11111
java.net.ConnectException: Connection refused: connect
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Net.java:454)
at sun.nio.ch.Net.connect(Net.java:446)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
at org.jppf.comm.socket.SocketChannelClient.open(SocketChannelClient.java:248)
at org.jppf.comm.socket.SocketInitializerImpl.initialize(SocketInitializerImpl.java:105)
at org.jppf.comm.socket.QueuingSocketInitializer.access$001(QueuingSocketInitializer.java:31)
at org.jppf.comm.socket.QueuingSocketInitializer$1.call(QueuingSocketInitializer.java:61)
at org.jppf.comm.socket.QueuingSocketInitializer$1.call(QueuingSocketInitializer.java:58)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-12-18 07:16:44,491 [DEBUG][alhost:11111 closing][org.jppf.management.JMXConnectionWrapper.run(139)]:
java.lang.NullPointerException
at org.jppf.jmxremote.JPPFJMXConnector.close(JPPFJMXConnector.java:140)
at org.jppf.management.JMXConnectionWrapper$1.run(JMXConnectionWrapper.java:137)
at java.lang.Thread.run(Thread.java:748)
These exceptions are harmless, as they are indeed caught and handled in the JPPF code; however, they may cause some worry:
* the InterruptedException is logged as a warning. This is wrong, because it is expected when connection attempts fail after a specified timeout. It should be logged at TRACE level instead
* the NullPointerException results from poor handling in the JPPFJMXConnector code; this must be fixed.
07:41  Bug report JPPF-567 - JMXDriverConnectionWrapper.getAllJobIds still exists and raises an exception
lolocohen : Issue created
From [https://www.jppf.org/forums/index.php/topic,8057.0.html '''this forums thread''']:

The method [https://www.jppf.org/javadoc/6.0/org/jppf/management/JMXDriverConnectionWrapper.html#getAllJobIds() getAllJobIds()] still exists in class [https://www.jppf.org/javadoc/6.0/index.html?org/jppf/management/JMXDriverConnectionWrapper.html JMXDriverConnectionWrapper], but it was removed from the [https://www.jppf.org/javadoc/6.0/index.html?org/jppf/server/job/management/DriverJobManagementMBean.html DriverJobManagementMBean] interface.

Using this method on a connected JMX wrapper always raises an exception:
javax.management.AttributeNotFoundException: No such attribute: AllJobIds
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(Unknown Source)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(Unknown Source)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(Unknown Source)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(Unknown Source)
at org.jppf.jmxremote.nio.JMXMessageReader.handleRequest(JMXMessageReader.java:125)
at org.jppf.jmxremote.nio.JMXMessageReader.handleMessage(JMXMessageReader.java:98)
at org.jppf.jmxremote.nio.JMXMessageReader.access$0(JMXMessageReader.java:95)
at org.jppf.jmxremote.nio.JMXMessageReader$HandlingTask.run(JMXMessageReader.java:339)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)
We should remove this method from JMXDriverConnectionWrapper as well, and update the documentation to reflect that, in particular stating that the following should be used to achieve the same goal:
JMXDriverConnectionWrapper jmx = ...;
String[] jobUuids = jmx.getJobManager().getAllJobUuids();
December 09, 2018
09:17  Feature request JPPF-566 - New sample: embedded grid
lolocohen : Issue closed
06:07  Feature request JPPF-563 - Make the JPPF driver and node not singletons
lolocohen : Issue closed
05:36  Feature request JPPF-566 - New sample: embedded grid
lolocohen : Issue created
Following feature request JPPF-563, create a new sample in the samples pack which demonstrates how to start a driver, node and client programmatically, all embedded within the same JVM. The sample will show the following functionalities:
* embedded driver life cycle: create, start, stop
* embedded node life cycle: create, start, stop
* connecting a client and submitting a job
* programmatically creating the configuration for a driver, node and client
* using management and monitoring APIs for an embedded driver and node
December 05, 2018
07:16  Task JPPF-565 - Feature removals
lolocohen : Issue created
We propose that the following features be either deprecated or dropped altogether:

'''1. .Net integration'''

This feature relies heavily on the [http://jni4net.com/ '''jni4net'''] framework, which hasn't seen a new version in 4 years. Following the switch to Java 8 (feature request JPPF-548), its .Net proxy generator is no longer fully working, as it doesn't handle new Java 8 constructs such as default methods in interfaces. It is currently not possible to build it with the current code, and I don't see any solution that can be maintained in the long term. I propose to drop this feature from JPPF 6.1 forward. We will still maintain it for prior versions.

'''2. Android integration'''

The switch to Java 8 requires a lot of changes to the Android port, including, but definitely not limited to, the minimum Android SDK version and build tools. I haven't evaluated the changes that need to be made to the code itself, and, given the lack of bandwidth (I'm just 1 developer), I tend to think it should be dropped so we can focus on more modern features such as job streaming and big data. If anyone volunteers to take this feature on, I'll be happy to assist in any way. In any case, we'll keep maintaining it for versions up to 6.0.

'''3. Persistent data in NodeRunner'''

The [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/NodeRunner.html '''NodeRunner'''] class provides 3 methods, '''setPersistentData()''', '''getPersistentData()''' and '''removePersistentData()''', which were intended for tasks to be able to store, access and manage data on the nodes across job executions. These methods are inherently dangerous, because they can cause the nodes to retain objects and classes from many different class loaders, resulting in class loader leaks and potential out-of-memory conditions. This feature isn't used anymore in the JPPF code, and I believe now is a good time to remove it.

'''4. Node security policy'''

I can no longer see the benefit of the [https://www.jppf.org/doc/6.0/index.php?title=Node_configuration#Security_policy '''security policy'''] in the node configuration. We haven't touched this code in years, the default node security policy in the node distribution is no longer close to being useful or even accurate, and this can be easily replaced with a standard SecurityManager and associated security policy file. This feature should definitely be removed.

'''5. JMXMP JMX remote connector'''

Since JPPF 6.0, we use the new NIO-based connector, which allows the JMX port to be the same as the server port, and thus simplifies the configuration. It is still possible to switch to the JMXMP connector via a configuration property. However, there isn't much of a point in keeping the JMXMP connector as part of the JPPF distribution, since it is not used and simply adds a useless dependency. I propose that we drop it from the distro, and perhaps set it up as a separate project/GitHub repo.
November 27, 2018
09:02  Feature request JPPF-564 - Asynchronous communication between node and driver
lolocohen : Issue created
Similarly to feature request JPPF-549, we propose to refactor the communication between drivers and nodes and make it possible for a node to handle multiple jobs concurrently.

We could also explore the possibility for a node to connect to multiple drivers.
08:55  Feature request JPPF-563 - Make the JPPF driver and node not singletons
lolocohen : Issue created
Currently, the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/server/JPPFDriver.html '''JPPFDriver'''] is implemented as a singleton. This means there can be only one per JVM. The same is true for the nodes, whether local to the driver's JVM or remote.

We propose to change that and make it possible to have any number of drivers and/or nodes per JVM. They could also share some common resources, for example the NIO thread pool.
07:26  Feature request JPPF-562 - Fix the preference execution policy
lolocohen : Issue created
Currently the [https://www.jppf.org/doc/6.1/index.php?title=Execution_Policy_Elements#Preference '''Preference'''] execution policy is applied to each node individually and is identical to the [https://www.jppf.org/doc/6.1/index.php?title=Execution_Policy_Elements#OR '''OR'''] execution policy. To sum up: despite its name, it has nothing to do with a "preference".

We propose to make it live up to its name, which implies:

* it should define a real order of preference for a number of node execution policies, where a node that satisfies policy N in the list will have priority over a node that satisfies policy N + 1
* it should be applied globally to all the nodes available to the driver
* because of the previous point, it should be a separate attribute of the job SLA
* it should be applicable to the client-side as well, where it would define a driver preference rather than a node preference
* special care should be taken about performance, as the algorithm will be in O(nbNodes * nbJobs). Should we allow parallel (with regards to the nodes) evaluation of the policy for each job?
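For illustration, assuming the existing Preference(ExecutionPolicy...) constructor, a preference over two node policies could be expressed as follows; under the proposed semantics, nodes matching the first policy would be chosen before nodes that only match the second:

import org.jppf.node.policy.Equal;
import org.jppf.node.policy.ExecutionPolicy;
import org.jppf.node.policy.Preference;

// 1st choice: master nodes; 2nd choice: slave nodes
ExecutionPolicy preference = new Preference(
  new Equal("jppf.node.provisioning.master", true),
  new Equal("jppf.node.provisioning.slave", true));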