We propose to enable dependencies between tasks in the same job. For instance, we envision the ability to express that task A depends on the completion of tasks B and C, each of which depends on the completion of task D (a diamond dependency graph).
This implies a number of challenges, including but not limited to:
* decide how to schedule and parallelize the execution of the tasks: a task with dependencies cannot be scheduled before its dependencies have completed
* handle failures / cancellation of tasks on which other tasks depend
* provide an expressive and intuitive API to specify the dependencies
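This is not the actual JPPF API, which remains to be designed; as a plain-Java illustration of the scheduling constraint, the following sketch computes an execution order in which a task only becomes eligible once all of its dependencies have completed:

```java
import java.util.*;

// Plain-Java sketch (not the JPPF API): order tasks so that a task
// only runs after all of its dependencies have completed.
class DependencyScheduler {
  // deps maps a task name to the set of tasks it depends on; the graph is assumed acyclic.
  static List<String> executionOrder(Map<String, Set<String>> deps) {
    List<String> order = new ArrayList<>();
    Set<String> done = new HashSet<>();
    while (done.size() < deps.size()) {
      for (String task : new TreeSet<>(deps.keySet())) { // deterministic iteration order
        if (!done.contains(task) && done.containsAll(deps.get(task))) {
          order.add(task); // all dependencies completed: eligible to run
          done.add(task);
        }
      }
    }
    return order;
  }

  // The diamond graph from the description: A depends on B and C, which both depend on D.
  static List<String> diamondOrder() {
    Map<String, Set<String>> deps = new HashMap<>();
    deps.put("A", Set.of("B", "C"));
    deps.put("B", Set.of("D"));
    deps.put("C", Set.of("D"));
    deps.put("D", Set.of());
    return executionOrder(deps);
  }
}
```

In a real implementation, tasks without pending dependencies could additionally be dispatched in parallel rather than sequentially.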
We have a [https://www.jppf.org/samples-pack/JobDependencies/ job dependencies sample] which illustrates an ad-hoc way of executing acyclic graphs of dependent jobs.
We propose to make this an actual feature instead of a sample, and to explore other possibilities such as:
* a broader set of relationships between jobs than just "depends on", e.g. split/join (or map/reduce). This implies being able to gather global results for an entire job and to apply transformations to these results.
* an expressive and intuitive way to build the job graph
Currently, [https://www.jppf.org/doc/6.1/index.php?title=The_Location_API#MavenCentralLocation MavenCentralLocation] only allows downloading artifacts from Maven Central. We propose to add the ability to specify a different repository, as well as the ability to download SNAPSHOT artifacts, for instance in a class named MavenLocation, of which MavenCentralLocation could become a specialized subclass.
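A hypothetical sketch of what such a MavenLocation might look like (the class and method names are assumptions, not the actual design); the key point is that the artifact URL is derived from a configurable repository root, so that MavenCentralLocation only needs to hard-code the Maven Central root:

```java
// Hypothetical sketch (names are assumptions, not the actual design).
class MavenLocation {
  private final String repositoryRoot;

  MavenLocation(String repositoryRoot) {
    // strip a trailing slash for consistent URL building
    this.repositoryRoot = repositoryRoot.endsWith("/")
      ? repositoryRoot.substring(0, repositoryRoot.length() - 1) : repositoryRoot;
  }

  // Standard Maven 2 repository layout: group/artifact/version/artifact-version.jar
  String artifactUrl(String groupId, String artifactId, String version) {
    return repositoryRoot + "/" + groupId.replace('.', '/') + "/" + artifactId
      + "/" + version + "/" + artifactId + "-" + version + ".jar";
  }
}

// The existing behavior becomes a specialization with a fixed repository root.
class MavenCentralLocation extends MavenLocation {
  MavenCentralLocation() { super("https://repo1.maven.org/maven2"); }
}
```

SNAPSHOT support would additionally require resolving the timestamped snapshot version from the repository's `maven-metadata.xml`, which is omitted here.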
[https://www.jppf.org/doc/6.1/index.php?title=Monitoring_data_providers '''Monitoring data providers'''] allow defining properties of various types that are monitored over time. However, there is currently no way to specify how these values should be displayed in the JVM health view of the desktop and web administration consoles. JPPF currently uses default conversions based on the type of each property, but this may not always be convenient.
For instance, let's say we want to monitor the JVM uptime. This value is expressed in milliseconds as a long integer value. However, in the GUI we'd rather have it displayed as days:hours:minutes:seconds.millis.
We propose to implement the ability to configure a value converter for each defined property to this effect.
For instance (for illustration purposes only, this is not what the actual design will be):
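One possible shape, assuming a converter is simply a function from the raw monitored value to a display string (the names below are illustrative, not the actual design):

```java
// Illustration only, not the actual design: a converter turns a raw monitored
// value into the string displayed by the consoles.
interface ValueConverter {
  String convert(Object rawValue);
}

// Example: format a JVM uptime in milliseconds as days:hours:minutes:seconds.millis.
class UptimeConverter implements ValueConverter {
  @Override public String convert(Object rawValue) {
    long ms = (Long) rawValue;
    long days = ms / 86_400_000L;  ms %= 86_400_000L;
    long hours = ms / 3_600_000L;  ms %= 3_600_000L;
    long minutes = ms / 60_000L;   ms %= 60_000L;
    long seconds = ms / 1_000L;
    long millis = ms % 1_000L;
    return String.format("%d:%02d:%02d:%02d.%03d", days, hours, minutes, seconds, millis);
  }
}
```

A monitoring data provider would then register such a converter for each property that needs a non-default rendering.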
I would like to embed jppf-admin-web into my own embedded webserver as an executable jar. I need jppf-admin-web as a jar dependency instead of a war to make this work. I would define my own web.xml for this and ignore the one inside the war.
See description here: https://pragmaticintegrator.wordpress.com/2010/10/22/using-a-war-module-as-dependency-in-maven/
You would need to add:
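Presumably, following the approach in the linked article, something like the following in the jppf-admin-web POM (the maven-war-plugin's attachClasses option attaches the war's classes as a secondary "-classes" jar artifact):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <!-- also attach a "-classes.jar" artifact alongside the war -->
    <attachClasses>true</attachClasses>
  </configuration>
</plugin>
```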
So that I could use:
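A dependency on the attached classes jar, via the "classes" classifier (the groupId, artifactId and version below are assumptions):

```xml
<dependency>
  <groupId>org.jppf</groupId>
  <artifactId>jppf-admin-web</artifactId>
  <version>6.1</version>
  <!-- the "-classes.jar" attached by the war plugin -->
  <classifier>classes</classifier>
</dependency>
```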
Also it would be nice if you could define jppf.css and images/ as maven resources behind a package name and add those resources into the classes folder. You could then mount those resources in your wicket application under your current paths using PackageResourceReferences to serve them from the classpath. This would make embedding easier, and I wouldn't have to copy these resources myself.
We propose to add a number of data elements to the JVM health monitoring:
* peak thread count and total created threads (to be displayed in the same column as the live thread count, i.e. "live / peak / total")
* JVM uptime
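All of these values are readily available from the standard JMX platform MXBeans; a minimal sketch of where they would come from:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;
import java.lang.management.ThreadMXBean;

// Sketch: the proposed data elements are all exposed by the platform MXBeans.
class JvmHealthSnapshot {
  final int liveThreads, peakThreads;
  final long totalStartedThreads, uptimeMillis;

  JvmHealthSnapshot() {
    ThreadMXBean threads = ManagementFactory.getThreadMXBean();
    RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
    liveThreads = threads.getThreadCount();
    peakThreads = threads.getPeakThreadCount();
    totalStartedThreads = threads.getTotalStartedThreadCount();
    uptimeMillis = runtime.getUptime();
  }

  // "live / peak / total", as proposed for the thread count column
  String threadsColumn() {
    return liveThreads + " / " + peakThreads + " / " + totalStartedThreads;
  }
}
```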
Jobs have a [https://www.jppf.org/doc/6.0/index.php?title=Dealing_with_jobs#Non-blocking_jobs blocking job] attribute whose semantics are confusing. Technically, there is no difference between a blocking (i.e. synchronous) and a non-blocking (asynchronous) job. The difference lies only in the client code that submits the job (the JPPFClient.submitJob() method).
We consider that a job should be submittable either synchronously or asynchronously, regardless of its state.
To this effect, we propose to deprecate the '''blocking''' job attribute in JPPFJob, as well as the '''submitJob()''' method in JPPFClient, and to add the '''submit(JPPFJob)''' and '''submitAsync(JPPFJob)''' methods to JPPFClient instead, to fulfill the same functionality.
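Assuming the proposed methods behave as described (submit() blocks until the results are available, submitAsync() returns immediately), client code would look something like this sketch; the methods shown do not exist yet:

```java
// Sketch of the proposed API (submit/submitAsync are not implemented yet).
try (JPPFClient client = new JPPFClient()) {
  JPPFJob job1 = new JPPFJob();
  // synchronous submission: blocks until all results are available
  List<Task<?>> results1 = client.submit(job1);

  JPPFJob job2 = new JPPFJob();
  // asynchronous submission: returns immediately
  client.submitAsync(job2);
  // ... do other work, then collect the results when needed ...
  List<Task<?>> results2 = job2.awaitResults();
}
```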
Also, the deprecated members should '''''not''''' be removed before the next major version (v7.0) or even later, to ensure that users have plenty of advance warning and time to adjust their applications. In other words, this should be a long-term deprecation.
Care should also be taken to adapt the J2EE/JCA connector to take this into account.
Since the beginning, the documentation has been organized into multiple .odt documents, grouped via a master (.odm) file. This has notoriously caused problems with cross-document links, in particular in the generated PDF version.
I propose to group all documents into a single one instead, and fix the links.
Since a node can process any number of jobs concurrently, there is a risk that it will be overwhelmed: its performance may degrade suddenly, or it may even crash, for instance due to an out-of-memory condition.
We propose to implement a pluggable mechanism in the node to alert the driver that it cannot accept any more jobs when a given condition is true. The same mechanism would send another alert when the condition is no longer true, so that the node can resume accepting additional jobs.
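A hypothetical shape for such a pluggable condition (the interface and class names are assumptions), with heap usage as an example condition:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Hypothetical plugin contract (names are assumptions, not an actual JPPF API):
// the node would poll or be notified by this condition, and alert the driver
// whenever the returned value changes.
interface JobAcceptancePolicy {
  boolean canAcceptJobs();
}

// Example condition: stop accepting jobs when heap usage exceeds a threshold.
class HeapThresholdPolicy implements JobAcceptancePolicy {
  private final double threshold; // fraction of max heap, e.g. 0.9

  HeapThresholdPolicy(double threshold) { this.threshold = threshold; }

  @Override public boolean canAcceptJobs() {
    MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    return (double) heap.getUsed() / heap.getMax() < threshold;
  }
}
```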
Given the major changes in the upcoming 6.1 release, in particular feature request JPPF-548, feature request JPPF-549 and feature request JPPF-564, it is important to check for possible regressions of the performance and health indicators.
Some specific points to check:
* load-balancing performance: how is load-balancing impacted by the fact that nodes can now process multiple jobs concurrently?
* look for memory leaks; I'm hoping endurance tests will help with that
* on the client side, attempt to measure the performance impact of single vs. multiple connections with multiple concurrent jobs
When starting a driver using the main class org.jppf.server.JPPFDriver with the argument "noLauncher", the driver exits immediately after its initialization if UDP multicast discovery is disabled, that is, when the configuration property "jppf.discovery.enabled" is set to "false".
This is due to the fact that the UDP broadcast thread is the only non-daemon thread started at driver startup. The driver startup essentially starts new threads, and nothing prevents the JVM from exiting when only daemon threads remain.
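The underlying JVM rule can be illustrated in isolation: a daemon thread does not keep the JVM alive, so a fix is to ensure at least one non-daemon thread survives startup.

```java
// Illustration of the JVM rule: daemon threads do not prevent JVM exit.
class DaemonDemo {
  static Thread startBackgroundThread(boolean daemon) {
    Thread t = new Thread(() -> {
      try { Thread.sleep(60_000L); } catch (InterruptedException e) { /* exit */ }
    });
    // must be set before start(); if every remaining thread is a daemon,
    // the JVM terminates even though this thread is still running
    t.setDaemon(daemon);
    t.start();
    return t;
  }
}
```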
We propose to add the following method to the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/Task.html '''Task'''] interface and its [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/AbstractTask.html '''AbstractTask'''] default implementation, such that the code in the task can access the job as an instance of the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/JPPFDistributedJob.html '''JPPFDistributedJob'''] interface:
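Presumably along these lines (the exact signature and javadoc remain to be confirmed):

```java
/**
 * Get the job this task is a part of.
 * @return the job as a JPPFDistributedJob instance.
 */
JPPFDistributedJob getJob();
```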
We propose to add the following attributes to the server-side SLA of a job:
* whether to accept peer servers in a multi-driver topology (this is already achievable via the [https://www.jppf.org/doc/6.1/index.php?title=Execution_policy_properties#JPPF_configuration_properties "jppf.peer.driver"] boolean property available to an execution policy).
* max driver depth: in a multi-server topology, an upper bound on how many drivers a job can be transferred to before being executed on a node. '''''Done'''''
* maximum dispatch size: the maximum number of tasks in a job that can be sent at once to a node (driver-side SLA) or to a driver (client-side SLA)
* allow multiple dispatches to the same node (driver-side SLA) or driver (client-side SLA): a flag to specify whether a job can be dispatched to the same node or driver multiple times at any given moment. This is in anticipation of the completion of feature request JPPF-564.
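A hypothetical configuration sketch; all of the setter names below are assumptions about an API that does not exist yet:

```java
// Hypothetical sketch; the attribute and method names are assumptions.
JPPFJob job = new JPPFJob();
JobSLA sla = job.getSLA();
sla.setAllowPeerServers(false);       // do not dispatch this job to peer drivers
sla.setMaxDriverDepth(2);             // traverse at most 2 drivers before reaching a node
sla.setMaxDispatchSize(10);           // at most 10 tasks per dispatch to a node
sla.setAllowMultipleDispatches(true); // allow concurrent dispatches to the same node
```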