I've noticed in the admin console that peer-to-peer driver connections are no longer detected. Looking at the logs, I could see that the topology monitoring API never logs peer connections. I suspect this is due to JPPFNodeForwardingMBean excluding peer nodes when retrieving the nodes specified with a NodeSelector.
The feature request JPPF-480 provides a pluggable way for the driver to persist jobs, to enable both job failover/recovery and the ability to execute jobs and retrieve their results offline. In particular, it provides a client-side API to administer persisted jobs.
We propose to add an administration interface to the web and desktop consoles to allow users to perform these tasks graphically in addition to programmatically.
When using the constructor JPPFClient(String uuid, TypedProperties config, ConnectionPoolListener... listeners), the load-balancer for this client does not use the supplied TypedProperties object, but instead uses the global configuration via a static call to JPPFConfiguration.getProperties(). This results in the wrong settings being applied to the client load-balancer.
A possible workaround is to dynamically set the load-balancer configuration once the client is initialized, using JPPFClient.setLoadBalancerSettings(String algorithm, Properties config).
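For instance (a sketch; the built-in "manual" algorithm and its "size" parameter are used purely for illustration):

<pre>
import java.util.Properties;

import org.jppf.client.JPPFClient;
import org.jppf.utils.TypedProperties;

public class LoadBalancerWorkaround {
  public static void main(String[] args) throws Exception {
    TypedProperties config = new TypedProperties();
    // ... set the client connection properties here ...
    try (JPPFClient client = new JPPFClient(null, config)) {
      // workaround: apply the intended load-balancer settings explicitly,
      // since the constructor ignores those in the TypedProperties argument
      Properties params = new Properties();
      params.setProperty("size", "5");
      client.setLoadBalancerSettings("manual", params);
      // ... create and submit jobs as usual ...
    }
  }
}
</pre>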
Currently, when a driver is configured with a local (same JVM) node, this local node is always given priority for job scheduling. We propose to give users the ability to disable this behavior via a driver configuration property such as "jppf.local.node.bias = false", with a default value of "true" to preserve compatibility with previous versions.
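For example, in the driver configuration:

<pre>
# when false, the local node is scheduled like any other node
# defaults to true, preserving the behavior of previous versions
jppf.local.node.bias = false
</pre>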
When starting a JPPF driver with a local node, the local node does not complete its connection with the driver it is embedded in, even though it displays the message "Node successfully initialized". The node then behaves as if it were not started at all, and does not appear in the administration console.
Currently, there is no way to dynamically change the algorithm or parameters of the load balancer in the client. This can only be done statically in the configuration, whereas it is possible to change the ''server-side'' load balancing with the driver JMX APIs.
We propose to add 2 methods to JPPFClient to allow dynamic changes of its load-balancer, for instance:
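(A sketch of the proposed signatures; the getter's return type is assumed to be the LoadBalancingInformation class already used by the driver JMX APIs.)

<pre>
// in class JPPFClient:

// return the current load-balancer configuration
public LoadBalancingInformation getLoadBalancerSettings()

// dynamically change the load-balancing algorithm and its parameters
public void setLoadBalancerSettings(String algorithm, Properties parameters) throws Exception
</pre>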
We propose to implement a set of facilities to provide easy access to one or more databases from a JPPF application. One goal will be to make it as painless as possible to define, cache and use JDBC data sources using a simple API.
Some important considerations:
'''1) choice of a connection pool/datasource implementation''': we propose [https://github.com/brettwooldridge/HikariCP '''HikariCP''']. It has great performance, is small (a 131 KB jar), and has no runtime dependency other than SLF4J, which is already distributed with JPPF.
'''2) how to define datasources''': we propose to do this from the JPPF configuration, for instance:
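(A sketch of the proposed format; the concrete values in the example are hypothetical, and jdbcUrl, username and password are standard HikariCP property names.)

<pre>
jppf.datasource.<configId>.name = <datasource name>
jppf.datasource.<configId>.<hikaricp_property_x> = <value_x>

# hypothetical example
jppf.datasource.myConfig.name = myDS
jppf.datasource.myConfig.jdbcUrl = jdbc:mysql://localhost:3306/mydb
jppf.datasource.myConfig.username = testuser
jppf.datasource.myConfig.password = testpassword
</pre>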
* '''configId''' is used to distinguish the datasource properties when multiple datasources are defined
* the datasource '''name''' is mandatory and is used to store and retrieve the datasource in a custom registry. It is also the name by which the datasource is referenced in the configuration of the job persistence implementation (see JPPF-480 below)
* '''hikaricp_property_x''' designates any valid HikariCP configuration property. Properties not supported by HikariCP are simply ignored
'''3) scope and class loading considerations''': we want to be able to define, in a single place, datasources that will be instantiated in every node. To achieve that, we want to be able to create the definitions on the driver side and use the built-in distributed class loader to download them and make the JDBC driver classes available to the nodes, without deploying them in each node. We propose implementing a "datasource provider" discovered via SPI, with a different implementation in the driver. Each datasource configuration could also specify a "scope" property, only used in the driver, to tell whether the datasource is to be deployed on the nodes (scope = node) or only in the driver's local JVM.
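For instance, a driver-side definition intended for the nodes might look like this (a sketch extending the configuration format proposed above):

<pre>
# defined in the driver configuration, deployed to every node
jppf.datasource.nodeConfig.name = nodeDS
jppf.datasource.nodeConfig.scope = node
jppf.datasource.nodeConfig.jdbcUrl = jdbc:mysql://localhost:3306/mydb
</pre>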
This feature will also be used by feature request JPPF-480, for a built-in database implementation of job persistence.
Referring to the snippet in [http://www.jppf.org/forums/index.php?topic=748.0], I put together a Task which uses [https://github.com/brettwooldridge/HikariCP HikariCP] as a connection pool.
When I start the client code, everything is fine. But when I start it again, I get the following output:
Maybe the class loader of the node gets the bytecode a second time and isn't able to cast the "old" stored object to the newly loaded class?
The main goal is to set up a connection pool on a node and let the tasks on the node do something on the database for different clients.
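For reference, a minimal sketch of the kind of task described here, assuming the pool is cached via NodeRunner's persistent data facility as in the referenced forum snippet (all names and connection settings are hypothetical):

<pre>
import java.sql.Connection;

import org.jppf.node.NodeRunner;
import org.jppf.node.protocol.AbstractTask;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class DbTask extends AbstractTask<String> {
  @Override
  public void run() {
    try {
      // retrieve the pool cached on the node; this cast is the suspected failure
      // point: after a client restart, the cached instance was created by the
      // previous client class loader, while the class is reloaded by a new one
      HikariDataSource ds = (HikariDataSource) NodeRunner.getPersistentData("pool");
      if (ds == null) {
        HikariConfig cfg = new HikariConfig();
        cfg.setJdbcUrl("jdbc:mysql://localhost:3306/mydb");
        cfg.setUsername("user");
        cfg.setPassword("password");
        ds = new HikariDataSource(cfg);
        NodeRunner.setPersistentData("pool", ds);
      }
      try (Connection c = ds.getConnection()) {
        // ... do something on the database ...
        setResult("connection obtained: " + !c.isClosed());
      }
    } catch (Exception e) {
      setThrowable(e);
    }
  }
}
</pre>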
Currently there is too much complexity in the handling of client connections to the drivers and of their status. In particular, each JPPFClientConnection implementation holds 2 actual connections, both subclasses of AbstractClientConnectionHandler, each with its own status listeners. The main connection status is set either directly or as a combination of the states of the two "sub-connections". In the former case, the sub-connections' status becomes inconsistent with that of the main connection.
Overall, this complexity results in many observed problems in the client, especially when running the automated tests: deadlocks, race conditions, failures of the recovery and failover mechanisms.
What we propose is to remove the code that handles the status in the sub-connections (and thus in AbstractClientConnectionHandler) and keep only one source of status and associated events.
Additionally, the abstract class org.jppf.client.balancer.ChannelWrapper, subclassed as ChannelWrapperLocal and ChannelWrapperRemote, holds an executor field of type ExecutorService, defined as a single-thread executor in both subclasses. Instead of a separate thread pool for each ChannelWrapper, we should make use of the executor held by the JPPFClient instance, and add proper synchronization if needed.
To have the JPPF logging work in JBoss 7 and Wildfly deployments of the J2EE connector, the dependency on slf4j must be declared in the MANIFEST.MF with the attribute "'''Dependencies: org.slf4j,org.slf4j.impl'''". Also, the slf4j and log4j jars should be removed from the JPPF rar file, since the logging APIs are provided by JBoss as OSGi dependencies.
The J2EE connector build should be modified to reflect this.
Then, the JBoss/Wildfly logging configuration can be modified, for instance like this:
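(A sketch of a logger entry in the logging subsystem of standalone.xml; the subsystem namespace version depends on the JBoss/Wildfly release.)

<pre>
<subsystem xmlns="urn:jboss:domain:logging:1.1">
  ...
  <logger category="org.jppf">
    <level name="DEBUG"/>
  </logger>
  ...
</subsystem>
</pre>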
Reviewing part of the client code, I noticed that the jobDispatched() notification is sent right after the asynchronous task that sends the tasks of a job to the driver is submitted. This means the notification is generally emitted before the tasks are fully sent to the driver, which contradicts the intended semantics of the notification.
We propose the following additions to the [http://www.jppf.org/doc/5.2/index.php?title=Receiving_the_status_of_tasks_dispatched_to_or_returned_from_the_nodes '''JobTaskListener'''] plugin:
1) Add a new callback method to the listener, called when task results are about to be sent back to the client:
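(A sketch, assuming the listener and event types are JobTasksListener and JobTasksEvent as in the linked 5.2 documentation; the name of the new method is hypothetical.)

<pre>
public interface JobTasksListener extends EventListener {
  // existing: called when tasks are dispatched to a node
  void tasksDispatched(JobTasksEvent event);

  // existing: called when tasks return from a node
  void tasksReturned(JobTasksEvent event);

  // proposed: called when task results are about to be sent back to the client
  void resultsSent(JobTasksEvent event);
}
</pre>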
2) Add the job SLA and metadata to the available job information in the event:
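(A sketch; the accessor names are assumptions, while JobSLA and JobMetadata are the existing JPPF types.)

<pre>
public class JobTasksEvent extends EventObject {
  // ... existing accessors ...

  // proposed: the SLA of the job the tasks belong to
  public JobSLA getJobSLA() { ... }

  // proposed: the metadata of the job the tasks belong to
  public JobMetadata getJobMetadata() { ... }
}
</pre>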
3) Add the task result to each ServerTaskInformation and enable accessing it as either a stream or a deserialized Task object:
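(A sketch; the accessor names and signatures are assumptions.)

<pre>
public class ServerTaskInformation {
  // ... existing accessors ...

  // proposed: access the serialized task result as a stream
  public InputStream getResultAsStream() throws IOException { ... }

  // proposed: deserialize the task result into a Task object
  public Task<?> getResultAsTask() throws Exception { ... }
}
</pre>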
The combination of 1) and 3) will then allow tasks results to be processed even if the client is disconnected before the job completes, provided ''job.getSLA().setCancelUponClientDisconnect(false)'' was set.