Currently, [https://www.jppf.org/doc/6.1/index.php?title=The_Location_API#MavenCentralLocation MavenCentralLocation] only allows downloading artifacts from Maven Central. We propose to add the ability to specify a different repository, as well as the ability to download SNAPSHOT artifacts, for instance in a class named MavenLocation, of which MavenCentralLocation could be a specialized subclass.
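As a sketch of what a MavenLocation might do (hypothetical code, not the actual design), here is how an artifact URL could be resolved against an arbitrary repository root using the standard Maven repository layout:

```java
// Hypothetical sketch (not the proposed API): a MavenLocation could resolve
// an artifact URL against any repository root; MavenCentralLocation would
// then simply fix the root to the Maven Central URL.
public class MavenLocationSketch {
    static String artifactUrl(String repoRoot, String groupId, String artifactId, String version) {
        // standard layout: <root>/<group/as/path>/<artifact>/<version>/<artifact>-<version>.jar
        return repoRoot + "/" + groupId.replace('.', '/') + "/" + artifactId
            + "/" + version + "/" + artifactId + "-" + version + ".jar";
    }

    public static void main(String[] args) {
        System.out.println(artifactUrl("https://repo1.maven.org/maven2",
            "org.jppf", "jppf-common", "6.1"));
        // → https://repo1.maven.org/maven2/org/jppf/jppf-common/6.1/jppf-common-6.1.jar
    }
}
```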
[https://www.jppf.org/doc/6.1/index.php?title=Monitoring_data_providers '''Monitoring data providers'''] allow defining properties of various types that are monitored over time. However, there is currently no way to specify how these values should be displayed in the JVM health view of the desktop and web administration consoles. JPPF currently uses default conversions based on the type of each property, but this may not always be convenient.
For instance, let's say we want to monitor the JVM uptime. This value is expressed in milliseconds as a long integer value. However, in the GUI we'd rather have it displayed as days:hours:minutes:seconds.millis.
We propose to implement the ability to configure a value converter for each defined property to this effect.
For instance (for illustration purposes only, this is not what the actual design will be):
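One possible shape, purely illustrative (the ValueConverter interface below is an assumption, not an existing JPPF type): a converter that renders a JVM uptime in milliseconds as days:hours:minutes:seconds.millis.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: a converter turns a raw monitored value into its
// display form, here a JVM uptime in millis shown as d:hh:mm:ss.mmm.
public class UptimeConverterSketch {
    interface ValueConverter { String convert(Object rawValue); } // assumed shape

    static final ValueConverter UPTIME = raw -> {
        long ms = (Long) raw;
        return String.format("%d:%02d:%02d:%02d.%03d",
            TimeUnit.MILLISECONDS.toDays(ms),
            TimeUnit.MILLISECONDS.toHours(ms) % 24,
            TimeUnit.MILLISECONDS.toMinutes(ms) % 60,
            TimeUnit.MILLISECONDS.toSeconds(ms) % 60,
            ms % 1000);
    };

    public static void main(String[] args) {
        // 1 day, 2 hours, 3 minutes, 4.005 seconds
        long uptime = TimeUnit.DAYS.toMillis(1) + TimeUnit.HOURS.toMillis(2)
            + TimeUnit.MINUTES.toMillis(3) + 4005;
        System.out.println(UPTIME.convert(uptime)); // 1:02:03:04.005
    }
}
```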
I would like to embed jppf-admin-web into my own embedded web server as an executable jar. I need jppf-admin-web as a jar dependency instead of a war to make this work. I would define my own web.xml for this and ignore the one inside the war.
See description here: https://pragmaticintegrator.wordpress.com/2010/10/22/using-a-war-module-as-dependency-in-maven/
You would need to add:
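Based on the linked article, this is presumably a pom change along these lines in jppf-admin-web, configuring the war plugin to attach a secondary jar of the compiled classes (exact placement in the build is an assumption):

```xml
<!-- presumed addition to the jppf-admin-web pom: attach a secondary
     jar artifact (classifier "classes") alongside the war -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <attachClasses>true</attachClasses>
  </configuration>
</plugin>
```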
So that I could use:
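Presumably a consumer-side dependency referencing the attached classes artifact via its classifier (the artifactId and version below are assumptions):

```xml
<!-- presumed consumer-side dependency on the classes jar -->
<dependency>
  <groupId>org.jppf</groupId>
  <artifactId>jppf-admin-web</artifactId>
  <version>6.1</version>
  <classifier>classes</classifier>
</dependency>
```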
Also, it would be nice if you could define jppf.css and images/ as Maven resources behind a package name and add those resources into the classes folder. You could then mount those resources in your Wicket application under your current paths using PackageResourceReferences to serve them from the classpath. This makes embedding easier and saves me from copying these resources myself.
We propose to add a number of data elements to the JVM health monitoring:
* peak thread count and total created threads (to be displayed in the same column as the live thread count, i.e. "live / peak / total")
* JVM uptime
Jobs have a [https://www.jppf.org/doc/6.0/index.php?title=Dealing_with_jobs#Non-blocking_jobs blocking job] attribute whose semantics are confusing. Technically, there is no difference between a blocking (i.e. synchronous) and a non-blocking (asynchronous) job. The difference lies only in the client code that submits the job (the JPPFClient.submitJob() method).
We consider that a job should be submittable either synchronously or asynchronously, regardless of its state.
To this effect, we propose to deprecate the '''blocking''' job attribute in JPPFJob, as well as the '''submitJob()''' method in JPPFClient, and to add the '''submit(JPPFJob)''' and '''submitAsync(JPPFJob)''' methods to JPPFClient instead, to fulfill the same functionality.
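To illustrate the point that synchrony belongs to the submission call rather than to the job itself, here is a plain-JDK sketch (not JPPF code) of the two proposed submission modes:

```java
import java.util.concurrent.*;

// Illustration only: the "job" is the same object in both cases;
// blocking vs. non-blocking is decided entirely by how the caller submits it.
public class SubmitModes {
    public static void main(String[] args) throws Exception {
        Callable<Integer> job = () -> 6 * 7; // one job definition

        // analogous to submit(job): block until the result is available
        int syncResult = job.call();

        // analogous to submitAsync(job): return immediately, collect the result later
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<Integer> future = pool.submit(job);
        int asyncResult = future.get();
        pool.shutdown();

        System.out.println(syncResult + " / " + asyncResult); // 42 / 42
    }
}
```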
Also, the deprecated members should '''''not''''' be removed before the next major version (v7.0) or even later, to ensure that users have plenty of advance warning and time to adjust their applications. In other words, this should be a long-term deprecation.
Care should also be taken to adapt the J2EE/JCA connector to take this into account.
Since the beginning, the documentation has been organized into multiple .odt documents, grouped via a master (.odm) file. This has notoriously caused problems with cross-document links, in particular in the generated PDF version.
I propose to group all documents into a single one instead, and fix the links.
Since a node can process any number of jobs concurrently, there is a risk that it can be overwhelmed, and its performance may degrade suddenly or it could even crash, for instance because of an out of memory condition.
We propose to implement a pluggable mechanism in the node to alert the driver that it cannot accept any more jobs when a given condition is true. The same mechanism would send another alert when the condition is no longer true, so that the node can resume taking additional jobs.
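As a purely hypothetical sketch (none of this is an actual JPPF API), such a pluggable condition could be as simple as a heap-usage check that the node evaluates to decide when to send the two alerts:

```java
// Hypothetical sketch: a node-side condition that could trigger the
// "cannot accept more jobs" alert when heap usage crosses a threshold,
// and the "resume" alert once it drops back below it.
public class OverloadCondition {
    static boolean overloaded(double heapUsageThreshold) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        return (double) used / rt.maxMemory() >= heapUsageThreshold;
    }

    public static void main(String[] args) {
        // e.g. alert the driver when more than 90% of the max heap is used
        System.out.println(overloaded(0.9));
    }
}
```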
Given the major changes in the upcoming 6.1 release, in particular feature request JPPF-548, feature request JPPF-549 and feature request JPPF-564, it is important to check for possible regressions of the performance and health indicators.
Some specific points to check:
* load-balancing performance: how is load-balancing impacted by the fact that nodes can now process multiple jobs concurrently?
* look for memory leaks; I'm hoping endurance tests will help with that
* on the client side, attempt to measure the performance impact of single vs. multiple connections with multiple concurrent jobs
When starting a driver with the main class org.jppf.server.JPPFDriver and the argument "noLauncher", the driver will exit immediately after its initialization whenever UDP multicast discovery is disabled, that is, when the configuration property "jppf.discovery.enabled" is set to "false".
This is due to the fact that the UDP broadcast thread is the only non-daemon thread started at driver startup. The driver startup essentially just starts new threads, and nothing prevents the JVM from exiting when only daemon threads remain.
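The underlying JVM behavior can be demonstrated with a minimal, JPPF-independent example:

```java
// The JVM exits as soon as the last non-daemon thread terminates,
// no matter how many daemon threads are still running.
public class DaemonExit {
    public static void main(String[] args) {
        Thread broadcast = new Thread(() -> {
            try { Thread.sleep(60_000); } catch (InterruptedException ignored) {}
        });
        broadcast.setDaemon(true); // with setDaemon(false), the JVM would wait a full minute
        broadcast.start();
        // main() returns here: since the only other thread is a daemon,
        // the JVM exits immediately instead of waiting for it
    }
}
```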
We propose to add the following method to the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/Task.html '''Task'''] interface and its [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/AbstractTask.html '''AbstractTask'''] default implementation, such that the code in the task can access the job as an instance of the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/protocol/JPPFDistributedJob.html '''JPPFDistributedJob'''] interface:
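Presumably the addition looks like the following sketch; the method name getJob() is an assumption, and the interfaces below are stand-ins for the real org.jppf.node.protocol types:

```java
// Stand-in types for illustration only; getJob() is an assumed name.
public class TaskJobAccessSketch {
    // stand-in for org.jppf.node.protocol.JPPFDistributedJob
    interface JPPFDistributedJob { String getName(); }

    // stand-in for org.jppf.node.protocol.Task, showing the proposed accessor
    interface Task<T> {
        JPPFDistributedJob getJob(); // the proposed addition
    }

    public static void main(String[] args) {
        JPPFDistributedJob job = () -> "demo job";
        Task<String> task = () -> job;
        // task code can now query its enclosing job's metadata
        System.out.println(task.getJob().getName()); // demo job
    }
}
```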
We propose to add the following attributes to the server-side SLA of a job:
* whether to accept peer servers in a multi-driver topology (this is already available to execution policies via the [https://www.jppf.org/doc/6.1/index.php?title=Execution_policy_properties#JPPF_configuration_properties "jppf.peer.driver"] boolean property).
* max driver depth: in a multi-server topology, an upper bound on how many drivers a job can be transferred to before being executed on a node. '''''Done'''''
* maximum dispatch size: the maximum number of tasks in a job that can be sent at once to a node (driver-side SLA) or to a driver (client-side SLA)
* allow multiple dispatches to the same node (driver-side SLA) or driver (client-side SLA): a flag to specify whether a job can be dispatched to the same node or driver multiple times at any given moment. This is in anticipation of the completion of feature request JPPF-564
When no driver is started, the log level is set to DEBUG for the package org.jppf.management, and a connection is attempted via JMXDriverConnectionWrapper, the following exceptions are shown in the log:
These exceptions are harmless, as they are indeed caught and handled in the JPPF code; however, they may cause some worries:
* the InterruptedException is logged as a warning. This is wrong, because it is expected when the connection attempts fail after a specified timeout; it should be logged at TRACE level instead
* the NullPointerException results from poor error handling in the JPPFJMXConnector code; this must be fixed.
From [https://www.jppf.org/forums/index.php/topic,8057.0.html '''this forums thread''']:
The method [https://www.jppf.org/javadoc/6.0/org/jppf/management/JMXDriverConnectionWrapper.html#getAllJobIds() getAllJobIds()] still exists in class [https://www.jppf.org/javadoc/6.0/index.html?org/jppf/management/JMXDriverConnectionWrapper.html JMXDriverConnectionWrapper], but it was removed from the [https://www.jppf.org/javadoc/6.0/index.html?org/jppf/server/job/management/DriverJobManagementMBean.html DriverJobManagementMBean] interface.
Using this method on a connected JMX wrapper always raises an exception:
We should remove this method from JMXDriverConnectionWrapper as well, and update the documentation to reflect that; in particular, it should state that to achieve the same goal the following should be used:
Following feature request JPPF-563, create a new sample in the samples pack which demonstrates how to start a driver, node and client programmatically, all embedded within the same JVM. The sample will show the following functionalities:
* embedded driver life cycle: create, start, stop
* embedded node life cycle: create, start, stop
* connecting a client and submitting a job
* programmatically creating the configuration for a driver, node and client
* using management and monitoring APIs for an embedded driver and node
We propose that the following features be either deprecated or dropped altogether:
'''1. .Net integration'''
This feature relies heavily on the [http://jni4net.com/ '''jni4net'''] framework, which hasn't seen a new version in 4 years. Following the switch to Java 8 (feature request JPPF-548), its .Net proxy generator is no longer fully working, as it doesn't handle new Java 8 constructs such as default methods in interfaces. It is currently not possible to build it with the current code, and I don't see any solution that can be maintained in the long term. I propose to drop this feature from JPPF 6.1 onward. We will still maintain it for prior versions.
'''2. Android integration'''
The switch to Java 8 requires a lot of changes to the Android port, including, but definitely not limited to, the minimum Android SDK version and build tools. I haven't evaluated the changes that need to be made to the code itself, and, given the lack of bandwidth (I'm just one developer), I tend to think it should be dropped so we can focus on more modern features such as job streaming and big data. If anyone volunteers to take this feature on, I'll be happy to assist in any way. In any case, we'll keep maintaining it for versions up to 6.0.
'''3. Persistent data in NodeRunner'''
The [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/node/NodeRunner.html '''NodeRunner'''] class provides 3 methods, '''setPersistentData()''', '''getPersistentData()''' and '''removePersistentData()''', which were intended for tasks to be able to store, access and manage data on the nodes across job executions. These methods are inherently dangerous, because they can cause the nodes to retain objects and classes from many different class loaders, resulting in class loader leaks and potential out-of-memory conditions. This feature isn't used anymore in the JPPF code, and I believe now's a good time to remove it.
'''4. Node security policy'''
I can no longer see the benefit of the [https://www.jppf.org/doc/6.0/index.php?title=Node_configuration#Security_policy '''security policy'''] in the node configuration. We haven't touched this code in years, the default node security policy in the node distribution is no longer close to being useful or even accurate, and this can be easily replaced with a standard SecurityManager and associated security policy file. This feature should definitely be removed.
'''5. JMXMP JMX remote connector'''
Since JPPF 6.0, we use the new NIO-based connector, which allows the JMX port to be the same as the server port, and thus simplifies the configuration. It is still possible to switch to the JMXMP connector via a configuration property. However, there isn't much of a point in keeping the JMXMP connector as part of the JPPF distribution, since it is not used and simply adds a useless dependency. I propose that we drop it from the distro, and perhaps set it up as a separate project/Github repo.
Currently, the [https://www.jppf.org/javadoc/6.1/index.html?org/jppf/server/JPPFDriver.html '''JPPFDriver'''] is implemented as a singleton. This means there can be only one per JVM. The same is true for the nodes, whether local to the driver's JVM or remote.
We propose to change that and make it possible to have any number of drivers and/or nodes per JVM. They might have the possibility to share some common resources, for example the NIO thread pool.
Currently the [https://www.jppf.org/doc/6.1/index.php?title=Execution_Policy_Elements#Preference '''Preference'''] execution policy is applied to each node individually and is identical to the [https://www.jppf.org/doc/6.1/index.php?title=Execution_Policy_Elements#OR '''OR'''] execution policy. To sum up: despite its name, it has nothing to do with a "preference".
We propose to make it live up to its name, which implies:
* it should define a real order of preference for a number of node execution policies, where a node that satisfies policy N in the list will have priority over a node that satisfies policy N + 1
* it should be applied globally to all the nodes available to the driver
* because of the previous point, it should be a separate attribute of the job SLA
* it should be applicable to the client-side as well, where it would define a driver preference rather than a node preference
* special care should be taken about performance, as the algorithm will be in O(nbNodes * nbJobs). Should we allow parallel (with regard to the nodes) evaluation of the policy for each job?
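The intended semantics can be sketched as follows (hypothetical code, not the actual design): each node's rank is the index of the first policy it satisfies in the ordered list, and only the nodes with the best rank are retained.

```java
import java.util.*;
import java.util.function.Predicate;

// Hypothetical sketch: given an ordered list of policies, a node's "rank"
// is the index of the first policy it satisfies; nodes with a lower rank
// are preferred, and nodes satisfying no policy are excluded.
public class PreferenceSketch {
    static <N> int rank(N node, List<Predicate<N>> policies) {
        for (int i = 0; i < policies.size(); i++) {
            if (policies.get(i).test(node)) return i;
        }
        return -1; // satisfies no policy
    }

    static <N> List<N> preferredNodes(Collection<N> nodes, List<Predicate<N>> policies) {
        int best = Integer.MAX_VALUE;
        List<N> result = new ArrayList<>();
        for (N node : nodes) {
            int r = rank(node, policies);
            if (r < 0) continue;                  // excluded
            if (r < best) { best = r; result.clear(); }
            if (r == best) result.add(node);
        }
        return result;
    }

    public static void main(String[] args) {
        // nodes modeled by their core count, with two policies in order of preference
        List<Predicate<Integer>> policies = Arrays.asList(
            n -> n >= 8,  // preferred: at least 8 cores
            n -> n >= 4   // fallback: at least 4 cores
        );
        System.out.println(preferredNodes(Arrays.asList(2, 4, 8, 16), policies)); // [8, 16]
    }
}
```

Note that this global evaluation is what makes the cost O(nbNodes * nbJobs), since every job must rank every available node.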