JPPF Issue Tracker
JPPF (jppf)
March 23, 2017
07:30  Enhancement JPPF-494 - Extend the driver's JobTaskListener facility
lolocohen : Issue closed
March 19, 2017
20:19  Bug report JPPF-495 - JobListener.jobDispatched() notification is sent too early
lolocohen : Issue closed
20:01  Bug report JPPF-495 - JobListener.jobDispatched() notification is sent too early
lolocohen : Issue created
While reviewing part of the client code, I noticed that the jobDispatched() notification is sent right after the asynchronous task that sends the job's tasks to the driver is submitted. This means the notification is generally emitted before the tasks have been fully sent to the driver, which contradicts the intended semantics of the notification.
09:37  Enhancement JPPF-494 - Extend the driver's JobTaskListener facility
lolocohen : Issue created
We propose the following additions to the [http://www.jppf.org/doc/5.2/index.php?title=Receiving_the_status_of_tasks_dispatched_to_or_returned_from_the_nodes '''JobTaskListener'''] plugin:

1) Add a new callback method to the listener, invoked when task results are about to be sent back to the client:
public interface JobTasksListener extends EventListener {
...

/**
* Called when task results are about to be sent back to the client.
* @param event encapsulates information on the task results.
*/
void resultsReceived(JobTasksEvent event);
}
2) Add the job SLA and metadata to the available job information in the event:
public class JobTasksEvent extends TaskReturnEvent {
...

/**
* Get the job SLA from this event.
* @return an instance of {@link JobSLA}.
*/
public JobSLA getJobSLA()

/**
* Get the job metadata from this event.
* @return an instance of {@link JobMetadata}.
*/
public JobMetadata getJobMetadata()
}
3) Add the task result to each ServerTaskInformation and enable accessing it as either a stream or a deserialized Task object:
public class ServerTaskInformation implements Serializable {
...

/**
* Get an input stream of the task's result data, which can be deserialized as a {@link Task}.
* @return an {@link InputStream}, or {@code null} if no result could be obtained.
* @throws Exception if any error occurs getting the stream.
*/
public InputStream getResultAsStream() throws Exception

/**
* Deserialize the result into a Task object.
* @return a {@link Task}, or {@code null} if no result could be obtained.
* @throws Exception if any error occurs deserializing the result.
*/
public Task getResultAsTask() throws Exception
}
The combination of 1) and 3) will then allow task results to be processed even if the client disconnects before the job completes, provided ''job.getSLA().setCancelUponClientDisconnect(false)'' was called.
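To illustrate how 1) and 3) could combine, here is a self-contained sketch of a listener that collects task results on the driver side. The JPPF types are replaced with minimal stand-ins declared locally (the real proposed API deserializes results from a stream and throws checked exceptions), so this only mirrors the proposed surface, not the actual implementation.

```java
import java.util.*;

public class ResultsListenerSketch {
  // Minimal stand-ins mirroring the proposed API surface (hypothetical simplifications).
  interface Task { Object getResult(); }

  static class ServerTaskInformation {
    private final Task task;
    ServerTaskInformation(Task task) { this.task = task; }
    // The proposal deserializes from a stream and may throw; this stand-in returns directly.
    Task getResultAsTask() { return task; }
  }

  static class JobTasksEvent {
    private final List<ServerTaskInformation> tasks;
    JobTasksEvent(List<ServerTaskInformation> tasks) { this.tasks = tasks; }
    List<ServerTaskInformation> getTasks() { return tasks; }
  }

  interface JobTasksListener { void resultsReceived(JobTasksEvent event); }

  public static void main(String[] args) {
    List<Object> persisted = new ArrayList<>();
    // A listener that stores results driver-side, so they survive a client disconnect.
    JobTasksListener listener = event -> {
      for (ServerTaskInformation info : event.getTasks()) {
        persisted.add(info.getResultAsTask().getResult());
      }
    };
    listener.resultsReceived(new JobTasksEvent(List.of(new ServerTaskInformation(() -> 42))));
    System.out.println(persisted); // [42]
  }
}
```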
March 10, 2017
10:00 JPPF 5.2.5
New version released
March 09, 2017
08:06  Feature request JPPF-493 - Parametrized configuration properties
lolocohen : Issue created
Currently the configuration API does not allow easy handling of configuration properties whose name has one or more parameters. For instance, the following properties:
<driver_name>.jppf.server.host = localhost
jppf.load.balancing.profile.<profile_name>.<property_name> = value
We propose to extend the existing configuration API to handle these types of constructs.
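A hypothetical sketch of what such an extension could look like (the method and placeholder names below are illustrative, not the actual JPPF API): a property name template whose parameters are substituted with concrete values.

```java
public class ParametrizedPropertySketch {
  // Resolve a parametrized property name such as
  // "jppf.load.balancing.profile.<profile_name>.<property_name>" by substituting
  // each "<...>" placeholder with the supplied values, in order.
  static String resolve(String template, String... values) {
    String name = template;
    for (String v : values) name = name.replaceFirst("<[^>]+>", v);
    return name;
  }

  public static void main(String[] args) {
    System.out.println(resolve("<driver_name>.jppf.server.host", "driver1"));
    System.out.println(resolve("jppf.load.balancing.profile.<profile_name>.<property_name>",
        "proportional", "performanceCacheSize"));
  }
}
```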
February 28, 2017
08:52  Feature request JPPF-486 - Removal of JPPFDataTransform and replacement with composite serialization
lolocohen : Issue closed
February 25, 2017
09:50  Enhancement JPPF-444 - Fluent interfaces
lolocohen : Issue closed
February 23, 2017
17:13  Enhancement JPPF-468 - Add connection/executor information to job events on the client side
lolocohen : Issue closed
February 22, 2017
10:04  Enhancement JPPF-492 - Monitoring API: move collapsed state handling out of TopologyDriver class
lolocohen : Issue created
The class TopologyDriver has these 2 methods to handle the collapsed state in a tree or graph representation:
public boolean isCollapsed() { ... }
public void setCollapsed(final boolean collapsed) { ... }
This is a mistake, as TopologyDriver is part of the model, whereas the collapsed state is part of the view. The collapsed state should be moved to another part of the code, perhaps into its own class.

February 21, 2017
08:42  Task JPPF-487 - Drop support of Apache Geronimo in the JCA connector
lolocohen : Issue closed
08:24  Feature request JPPF-23 - Web based administration console
lolocohen : Issue closed
February 19, 2017
08:28  Feature request JPPF-491 - Node statistics
lolocohen : Issue created
The title says it all. In the same way we made statistics available for the servers, we propose to do the same for the nodes, including the possibility to access them remotely via the management/monitoring API, the ability to register statistics listeners, and the ability to define charts in the admin console.
08:24  Enhancement JPPF-490 - Timestamps for statistics updates
lolocohen : Issue created
We propose to add timestamps to all statistics updates, along with a creation time for all statistics in the current JVM. We could express the update timestamp as the number of nanoseconds since creation, giving the best available accuracy, especially since many of the intervals between updates have sub-millisecond precision.
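A minimal sketch of the idea, with illustrative field names (not an actual JPPF API): record a creation time once, then store each update's timestamp as the nanoseconds elapsed since creation, using System.nanoTime() for sub-millisecond resolution.

```java
public class StatTimestampSketch {
  static class StatSnapshot {
    final long creationNanos = System.nanoTime(); // creation time of the statistic
    volatile long lastUpdateOffsetNanos;          // nanos elapsed since creation

    void update() { lastUpdateOffsetNanos = System.nanoTime() - creationNanos; }
  }

  public static void main(String[] args) {
    StatSnapshot stat = new StatSnapshot();
    stat.update();
    // Offsets are non-negative since nanoTime() is monotonic.
    System.out.println(stat.lastUpdateOffsetNanos >= 0); // true
  }
}
```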
February 16, 2017
09:01  Bug report JPPF-488 - Priority of client connection pools is not respected
lolocohen : Issue closed
February 11, 2017
08:32  Bug report JPPF-489 - JPPFDriverAdminMBean.nbNodes() returns incorrect value when management is disabled on one or more nodes
lolocohen : Issue closed
February 09, 2017
09:34  Bug report JPPF-489 - JPPFDriverAdminMBean.nbNodes() returns incorrect value when management is disabled on one or more nodes
lolocohen : Issue created
When management is disabled on one or more nodes attached to a server, the server's management API returns an incorrect number of nodes: it reports the number of nodes with a valid management connection instead of all nodes. This is true for both nbIdleNodes() and nbNodes() in the JPPFDriverAdminMBean interface.

I've located the problem in the NodeSelectionHelper class, where the selection/filtering methods include a hasWorkingJmxConnection() condition, which is obviously false when management is disabled on a node. This class was first designed as a helper for the node management forwarding feature, then reused for the driver management methods that count nodes, but I forgot to take into account that nodes can have management disabled.
February 08, 2017
09:00  Bug report JPPF-488 - Priority of client connection pools is not respected
lolocohen : Issue created
When two or more connection pools are defined with different priorities in the client configuration, jobs are not always sent to the pools with the highest priority.

There are two scenarios in which this happens:
* when jobs are submitted while the client is initializing, it is possible that only connections with a lower priority are established at that time, in which case they are still considered to be at the highest priority
* when all connections of the pool with the highest priority are busy, they are in fact removed from the idle connections map. This map is a sorted multimap whose keys are priorities and whose values are collections of connections to a server. When a connection is selected to execute a job, it is removed from its collection; when the collection becomes empty, it is removed from the map, which changes the highest priority found in the map. Subsequent jobs are therefore sent to connections that do not have the highest priority defined in the configuration.
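The second scenario can be reproduced with a plain TreeMap modelling the idle connections multimap (the real client code differs; this only models the data structure):

```java
import java.util.*;

public class PriorityPoolSketch {
  public static void main(String[] args) {
    // priority -> idle connections, sorted so the highest priority comes first
    TreeMap<Integer, Deque<String>> idle = new TreeMap<>(Comparator.reverseOrder());
    idle.put(10, new ArrayDeque<>(List.of("A1")));       // high-priority pool
    idle.put(1,  new ArrayDeque<>(List.of("B1", "B2"))); // low-priority pool

    // A job takes the single high-priority connection: its bucket empties
    // and the key is removed from the map.
    Deque<String> top = idle.firstEntry().getValue();
    String used = top.poll();
    if (top.isEmpty()) idle.remove(10);

    // The next job now sees priority 1 as the "highest", even though A1
    // will become idle again once it finishes its job.
    System.out.println(used + " then highest=" + idle.firstKey()); // A1 then highest=1
  }
}
```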
February 05, 2017
08:59  Task JPPF-487 - Drop support of Apache Geronimo in the JCA connector
lolocohen : Issue created
From the [http://geronimo.apache.org/ Apache Geronimo] web site, the project appears to be dead: last release, news item and source code commit happened more than 3 years ago. There doesn't seem to be any point in supporting this app server anymore.
January 20, 2017
08:51  Feature request JPPF-486 - Removal of JPPFDataTransform and replacement with composite serialization
lolocohen : Issue created
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Transforming_and_encrypting_networked_data data transform feature] is a drain on performance and memory resources: even when no transform is defined, it forces us to fully read each serialized object from the network connection before it can be deserialized. In the same way, each object is fully serialized before it is sent through the connection.

Since the same functionality can be accomplished with [http://www.jppf.org/doc/6.0/index.php?title=Composite_serialization composite serialization], we propose to remove the data transformation feature and replace it with composite serialization. This will allow the code to serialize/deserialize directly to/from the network stream, increasing performance while decreasing memory usage.

This implies updating the current "Network Data Encryption" sample to use composite serialization.
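The performance point can be illustrated with plain JDK serialization (this is not JPPF's composite serialization, just a sketch of buffering versus streaming):

```java
import java.io.*;

public class StreamSerializationSketch {
  public static void main(String[] args) throws Exception {
    // With a data transform, the serialized form must be fully buffered first,
    // transformed, and only then written to the connection:
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
      out.writeObject("payload");
    }
    byte[] fullCopy = buffer.toByteArray(); // whole serialized object held in memory

    // With stream-based serialization, the writer wraps the connection
    // stream directly, so no intermediate full copy is needed:
    ByteArrayOutputStream socket = new ByteArrayOutputStream(); // stands in for the socket
    try (ObjectOutputStream out = new ObjectOutputStream(socket)) {
      out.writeObject("payload");
    }
    try (ObjectInputStream in =
        new ObjectInputStream(new ByteArrayInputStream(socket.toByteArray()))) {
      System.out.println(in.readObject() + " " + (fullCopy.length > 0));
    }
  }
}
```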
January 19, 2017
21:10  Bug report JPPF-485 - Number of peer total processing threads is not propagated properly
lolocohen : Issue closed
20:56  Bug report JPPF-485 - Number of peer total processing threads is not propagated properly
lolocohen : Issue created
When a peer driver notifies its remote peer of a change in its number of nodes and/or threads, the remote peer updates the properties of its peer accordingly but does not notify the associated bundler (i.e. load-balancer).

In the case of the "nodethreads" algorithm, this causes a wrong number of total processing threads to be computed for the peer driver, which in turn impairs the efficiency of load-balancing and the overall performance of the grid.
January 18, 2017
10:00 JPPF 5.2.4
New version released
January 17, 2017
08:20  Bug report JPPF-484 - Invocation of tasks' onCancel() method is not clearly documented
lolocohen : Issue closed
January 15, 2017
01:42  Bug report JPPF-479 - Task cancelation/timeout problems
lolocohen : Issue closed
January 09, 2017
10:09  Bug report JPPF-479 - Task cancelation/timeout problems
lolocohen : Issue closed
January 07, 2017
13:15 JPPF 5.1.5
A new milestone has been reached
January 05, 2017
22:35 JPPF 4.2.4
A new milestone has been reached
December 30, 2016
08:46  Bug report JPPF-482 - Kryo serialization uses wrong class loader upon deserialization
lolocohen : Issue closed
08:23  Bug report JPPF-484 - Invocation of tasks' onCancel() method is not clearly documented
lolocohen : Issue created
The documentation does not mention that, when a task executing in a node is cancelled, its onCancel() method is called after its execution completes, whether the cancellation succeeded or not.

We also need to describe a mechanism for a callback invoked immediately upon cancellation, with the CancellationHandler interface.
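A self-contained sketch of the two callbacks' timing, with local stand-in types (the real JPPF Task and CancellationHandler interfaces differ in detail): doCancelAction() fires as soon as cancellation is requested, while onCancel() only runs after execution completes.

```java
public class CancelSketch {
  // Stand-ins for the JPPF interfaces (simplified for this example).
  interface CancellationHandler { void doCancelAction() throws Exception; }
  static abstract class AbstractTask implements Runnable { void onCancel() {} }

  static class MyTask extends AbstractTask implements CancellationHandler {
    volatile boolean stopRequested;
    final StringBuilder log = new StringBuilder();

    public void doCancelAction() { log.append("immediate;"); stopRequested = true; }
    public void run() { while (!stopRequested) { /* simulated work */ } log.append("run-done;"); }
    void onCancel() { log.append("onCancel;"); }
  }

  public static void main(String[] args) throws Exception {
    MyTask task = new MyTask();
    Thread worker = new Thread(task);
    worker.start();
    Thread.sleep(50);
    task.doCancelAction(); // invoked right away when the task is cancelled
    worker.join();         // run() finishes once it observes the stop flag
    task.onCancel();       // invoked only after execution completes
    System.out.println(task.log);
  }
}
```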
07:58  Bug report JPPF-483 - ConcurrentModificationException in AbstractExecutionManager
lolocohen : Issue closed
07:54  Bug report JPPF-483 - ConcurrentModificationException in AbstractExecutionManager
lolocohen : Issue created
While investigating an automated test failure, I found that cancelling a job can sometimes cause a node to fail if the node is just starting to execute the job.

This is due to a lack of synchronization in AbstractExecutionManager which leads to an intermittent ConcurrentModificationException.
December 29, 2016
08:34  Bug report JPPF-482 - Kryo serialization uses wrong class loader upon deserialization
lolocohen : Issue created
From this [http://www.jppf.org/forums/index.php/topic,5071.msg12459.html#msg12459 forums post]. We found evidence that Kryo instances cache a mapping of class names to class objects, causing class objects from the original class loader to always be reused and preventing classes with the same name, loaded by a new class loader, from being used.
November 27, 2016
11:00 JPPF 5.2.3
New version released
10:53  Bug report JPPF-463 - Information Missing in Monitoring Tool
lolocohen : Issue closed
10:32  Bug report JPPF-479 - Task cancelation/timeout problems
lolocohen : Issue closed
November 17, 2016
07:14  Enhancement JPPF-481 - Monitoring and management UI enhancements
lolocohen : Issue closed
November 13, 2016
08:09 JPPF 5.0.4
A new milestone has been reached
November 09, 2016
17:29 JPPF 5.0.2
A new milestone has been reached
November 01, 2016
10:17  Enhancement JPPF-481 - Monitoring and management UI enhancements
lolocohen : Issue created
Some improvements to the Swing and Web admin console:
* add a "select all" button in the topology view to select all nodes and drivers
* add a "select all jobs" button in the jobs view
* replace the "load balancing" tab with an action in the topology view, based on the driver selected in the tree
