We propose to add Docker images for JPPF drivers, nodes and the web admin console. The configuration of a JPPF grid with Docker should allow any kind of JPPF topology, including multi-server topologies.
The class [https://www.jppf.org/javadoc/6.2/index.html?org/jppf/scheduling/JPPFSchedule.html JPPFSchedule] is used to specify the start or expiration schedule of a job, as well as the expiration schedule of a task. It currently has 2 basic constructors: one that takes a duration in milliseconds, the other a string representing a date, along with a SimpleDateFormat-compliant format used to parse it.
We propose to extend this class to enable building JPPFSchedule objects from the classes in java.time.*, such as ZonedDateTime, Duration, etc.
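To make this concrete, here is a hedged sketch: the first two statements use the existing constructors, while the commented-out java.time-based calls only illustrate what the proposed overloads might look like; their exact signatures are an assumption, not part of the current API.

<pre>
import java.time.Duration;
import java.time.ZonedDateTime;
import org.jppf.scheduling.JPPFSchedule;

public class ScheduleExamples {
  public static void main(final String[] args) {
    // existing constructor: schedule based on a duration in milliseconds
    JPPFSchedule afterDelay = new JPPFSchedule(5_000L);
    // existing constructor: schedule at a fixed date, parsed with a SimpleDateFormat-compliant pattern
    JPPFSchedule atDate = new JPPFSchedule("2025-12-31 23:59:59.000", "yyyy-MM-dd HH:mm:ss.SSS");

    // hypothetical overloads illustrating the proposal (not in the current API):
    // JPPFSchedule atZonedDate = new JPPFSchedule(ZonedDateTime.now().plusHours(1));
    // JPPFSchedule afterDuration = new JPPFSchedule(Duration.ofMinutes(5));
  }
}
</pre>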
When a job is dispatched to multiple nodes in parallel, this can result in the same class loading request being issued to the same client in parallel or in sequence. This happens when identical requests are forwarded to the same client before the first response is received by the server, and therefore before it can be added to the server-side cache. It could be worthwhile, from a performance perspective, to use a cache of class definitions, such that identical requests (same client-side class loader and same resource path) only result in a single lookup in the classpath.
To this effect, we propose to implement a cache in the client, as follows:
* an identity hash map whose keys are class loaders
* the values are hash maps where the key is a path in the classpath and the value is the bytes of the resource located at that path. These could be implemented as [https://www.jppf.org/javadoc/6.2/index.html?org/jppf/utils/collections/SoftReferenceValuesMap.html SoftReferenceValuesMap]s to avoid out of memory conditions due to the cache (a sketch is shown after this list)
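As an illustration, here is a minimal sketch of the proposed cache structure. The class name, method signatures and synchronization strategy are assumptions for illustration only, and it is assumed that SoftReferenceValuesMap implements java.util.Map and has a no-arg constructor.

<pre>
import java.util.IdentityHashMap;
import java.util.Map;
import org.jppf.utils.collections.SoftReferenceValuesMap;

/**
 * Illustrative sketch of the proposed client-side cache of class definitions:
 * an identity map keyed by class loader, whose values map a classpath resource
 * path to the corresponding bytes, held via soft references so the cache can
 * be reclaimed under memory pressure.
 */
public class ClassDefinitionCache {
  // identity semantics: two distinct class loaders are always distinct keys
  private final Map<ClassLoader, Map<String, byte[]>> cache = new IdentityHashMap<>();

  public synchronized byte[] get(final ClassLoader loader, final String resourcePath) {
    final Map<String, byte[]> definitions = cache.get(loader);
    return (definitions == null) ? null : definitions.get(resourcePath);
  }

  public synchronized void put(final ClassLoader loader, final String resourcePath, final byte[] definition) {
    // assumes SoftReferenceValuesMap<K, V> implements java.util.Map<K, V>
    cache.computeIfAbsent(loader, cl -> new SoftReferenceValuesMap<>()).put(resourcePath, definition);
  }
}
</pre>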
The online and offline docs still show, in the tutorial, code snippets that use the "blocking" job attribute, which is now deprecated, as well as job submission with JPPFClient.submitJob(), which is also deprecated and replaced with submit() and submitAsync().
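For reference, here is a minimal sketch of the non-deprecated submission pattern the tutorial snippets could show instead, assuming the JPPF 6.2+ API where JPPFClient.submit() blocks until the results are available and JPPFClient.submitAsync() returns immediately, with the results collected later via JPPFJob.awaitResults(). MyTask is a placeholder task class.

<pre>
import java.util.List;
import org.jppf.client.JPPFClient;
import org.jppf.client.JPPFJob;
import org.jppf.node.protocol.AbstractTask;
import org.jppf.node.protocol.Task;

public class SubmissionExamples {
  /** Placeholder task used for illustration. */
  public static class MyTask extends AbstractTask<String> {
    @Override
    public void run() {
      setResult("hello from task " + getId());
    }
  }

  public static void main(final String[] args) throws Exception {
    try (JPPFClient client = new JPPFClient()) {
      // synchronous submission: submit() blocks until the results are available
      JPPFJob job = new JPPFJob();
      job.add(new MyTask());
      List<Task<?>> results = client.submit(job);

      // asynchronous submission: submitAsync() returns immediately,
      // the results are collected later, e.g. with awaitResults()
      JPPFJob asyncJob = new JPPFJob();
      asyncJob.add(new MyTask());
      client.submitAsync(asyncJob);
      List<Task<?>> asyncResults = asyncJob.awaitResults();
    }
  }
}
</pre>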
This is about refactoring the distributed class loader communication model into the more efficient and scalable model introduced in JPPF 6.1 (see feature request JPPF-549 and feature request JPPF-564). This includes both driver/node and driver/client communication channels.
These are the last components to switch to the new model. Once it is done, we expect a number of benefits:
* we will be able to get rid of the old nio code, which should reduce the maintenance burden
* performance should also improve, simply because we will remove parts of the code inherited from the old model, which are still present but unused in the new model
* increased performance and scalability, because the new nio model is more efficient