I've noticed in the admin console that peer-to-peer driver connections are no longer detected. Looking at the logs, I can see that the topology monitoring API never logs peer connections. I suspect this is due to the JPPFNodeForwardingMBean excluding peer nodes when retrieving the nodes specified by a NodeSelector.
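Here is a minimal sketch to check this from a client, by listing the nodes the driver reports and flagging which ones are peers. The method names (awaitWorkingConnectionPool, awaitWorkingJMXConnection, nodesInformation, isPeer) are as I recall them from the JPPF management docs, so treat them as assumptions and adjust to your build:

    import java.util.Collection;
    import org.jppf.client.JPPFClient;
    import org.jppf.management.JMXDriverConnectionWrapper;
    import org.jppf.management.JPPFManagementInfo;

    public class PeerDetectionCheck {
      public static void main(String[] args) throws Exception {
        try (JPPFClient client = new JPPFClient()) {
          JMXDriverConnectionWrapper jmx =
            client.awaitWorkingConnectionPool().awaitWorkingJMXConnection();
          // With the suspected bug, peer drivers never show up in this
          // collection, so the topology monitoring API never sees them either.
          Collection<JPPFManagementInfo> nodes = jmx.nodesInformation();
          for (JPPFManagementInfo info : nodes) {
            System.out.printf("uuid=%s peer=%b%n", info.getUuid(), info.isPeer());
          }
        }
      }
    }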
The feature request JPPF-480 provides a pluggable way for the driver to persist jobs, to enable both job failover/recovery and the ability to execute jobs and retrieve their results offline. In particular, it provides a client-side API to administer persisted jobs.
We propose to add an administration interface to the web and desktop consoles to allow users to perform these tasks graphically in addition to programmatically.
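For reference, a sketch of the kind of programmatic administration the console UI would wrap. The class and method names (JPPFDriverJobPersistence, listJobs, retrieveJob, deleteJob, JobSelector.ALL_JOBS) are as I recall them from the JPPF-480 work and are assumptions, not a verified API:

    import java.util.List;
    import org.jppf.client.JPPFClient;
    import org.jppf.client.JPPFDriverJobPersistence;
    import org.jppf.client.JPPFJob;
    import org.jppf.node.protocol.JobSelector;

    public class PersistedJobsAdmin {
      public static void main(String[] args) throws Exception {
        try (JPPFClient client = new JPPFClient()) {
          JPPFDriverJobPersistence persistence = new JPPFDriverJobPersistence(client);
          // list the uuids of all jobs in the driver's persistence store
          List<String> uuids = persistence.listJobs(JobSelector.ALL_JOBS);
          for (String uuid : uuids) {
            // rebuild the job, including any results received so far
            JPPFJob job = persistence.retrieveJob(uuid);
            System.out.printf("job %s: %d of %d tasks completed%n",
              uuid, job.getResults().size(), job.getJobTasks().size());
            // remove the job from the persistence store once handled
            persistence.deleteJob(uuid);
          }
        }
      }
    }

The console views would essentially surface these same operations: browse persisted jobs, inspect their completion status, retrieve results, and delete entries.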
When using the constructor JPPFClient(String uuid, TypedProperties config, ConnectionPoolListener... listeners), the load-balancer for this client does not use the supplied TypedProperties object; instead it uses the global configuration via a static call to JPPFConfiguration.getProperties(). This results in incorrect settings for the client-side load-balancer.
A possible workaround is to dynamically set the load-balancer configuration once the client is initialized, using JPPFClient.setLoadBalancerSettings(String algorithm, Properties config).
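A sketch of that workaround, assuming the constructor and setLoadBalancerSettings signatures quoted above; the "proportional" algorithm and its parameter names are only illustrative, so substitute your own configuration:

    import java.util.Properties;
    import org.jppf.client.JPPFClient;
    import org.jppf.utils.TypedProperties;

    public class LoadBalancerWorkaround {
      public static void main(String[] args) throws Exception {
        TypedProperties config = new TypedProperties();
        // ... populate the client configuration as usual ...
        // null uuid: assuming the client then generates its own uuid
        try (JPPFClient client = new JPPFClient(null, config)) {
          Properties lbParams = new Properties();
          lbParams.setProperty("performanceCacheSize", "2000");
          lbParams.setProperty("proportionalityFactor", "1");
          // re-apply the intended settings, bypassing the ignored TypedProperties
          client.setLoadBalancerSettings("proportional", lbParams);
          // ... submit jobs ...
        }
      }
    }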
Currently, when a driver is configured with a local (same-JVM) node, this local node is always given priority for job scheduling. We propose to give users the ability to disable this behavior via a driver configuration property such as "jppf.local.node.bias = false", with a default value of "true" to preserve compatibility with previous versions; a configuration example follows.
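For illustration, the driver configuration would combine the existing local-node property with the proposed one (jppf.local.node.enabled is the existing property as I recall it; jppf.local.node.bias is the proposal):

    # existing property: run a node in the same JVM as the driver
    jppf.local.node.enabled = true
    # proposed property: disable the scheduling bias toward the local node
    jppf.local.node.bias = false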
When starting a JPPF driver with a local node, the local node does not complete its connection with the driver it is embedded in, even though it displays the message "Node successfully initialized". The node then behaves as if it had not been started at all, and does not appear in the administration console.