We propose to implement a set of facilities to provide easy access to one or more databases from a JPPF application. One goal will be to make it as painless as possible to define, cache and use JDBC data sources using a simple API.
Some important considerations:
'''1) choice of a connection pool/datasource implementation''': we propose [https://github.com/brettwooldridge/HikariCP '''HikariCP''']. It has excellent performance, a small footprint (131 KB jar), and no runtime dependency other than SLF4J, which is already distributed with JPPF
'''2) how to define datasources''': we propose to do this from the JPPF configuration, for instance:
* '''configId''' is used to distinguish the datasource properties when multiple datasources are defined
* the datasource '''name''' is mandatory and is used to store and retrieve the datasource in a custom registry. It is also the datasource name used in the configuration of this job persistence implementation
* '''hikaricp_property_x''' designates any valid HikariCP configuration property. Properties not supported by HikariCP are simply ignored
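Assuming a properties-file based configuration, a definition following the scheme above might look like this (the exact property names are illustrative, not a confirmed format):

```properties
# "jobDS" plays the role of configId; "name" and "scope" are the proposal's own properties
jppf.datasource.jobDS.name = jobDataSource
# scope = node to deploy on the nodes, otherwise local to the defining JVM
jppf.datasource.jobDS.scope = node
# the remaining entries are passed through as HikariCP configuration properties
jppf.datasource.jobDS.driverClassName = com.mysql.cj.jdbc.Driver
jppf.datasource.jobDS.jdbcUrl = jdbc:mysql://localhost:3306/mydb
jppf.datasource.jobDS.username = dbuser
jppf.datasource.jobDS.password = dbpassword
jppf.datasource.jobDS.maximumPoolSize = 10
```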
'''3) scope and class loading considerations''': we want to be able to define, in a single place, datasources that will be instantiated in every node. To achieve this, the definitions would be created on the driver side, and the built-in distributed class loader would be used to download them and make the JDBC driver classes available to the nodes, without deploying them in each node. We propose implementing a "datasource provider" discovered via SPI, with a different implementation in the driver. Each datasource configuration could also specify a "scope" property, used only in the driver, to tell whether the datasource is to be deployed on the nodes (scope = node) or in the driver's local JVM
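As a rough illustration of the provider idea, the SPI could be sketched as follows. All names below (the interface, the implementation class, the `jppf.datasource.<configId>.<property>` key scheme) are assumptions for the sake of the example, not an actual JPPF API:

```java
import java.util.*;

// Hypothetical sketch of the proposed "datasource provider" SPI.
interface DataSourceProvider {
  /** Scope handled by this provider: "local" for the current JVM, "node" on the driver. */
  String scope();
  /** Extract the datasource definitions matching this provider's scope from the configuration. */
  Map<String, Properties> defineDataSources(Properties config);
}

// A driver-side implementation would collect the definitions to ship to the nodes.
class DriverDataSourceProvider implements DataSourceProvider {
  public String scope() { return "node"; }

  public Map<String, Properties> defineDataSources(Properties config) {
    Map<String, Properties> byConfigId = new HashMap<>();
    for (String key : config.stringPropertyNames()) {
      // expect keys of the form jppf.datasource.<configId>.<property>
      String[] parts = key.split("\\.", 4);
      if (parts.length == 4 && "jppf".equals(parts[0]) && "datasource".equals(parts[1])) {
        byConfigId.computeIfAbsent(parts[2], k -> new Properties())
                  .setProperty(parts[3], config.getProperty(key));
      }
    }
    // keep only the definitions whose "scope" matches this provider
    byConfigId.values().removeIf(p -> !scope().equals(p.getProperty("scope", "local")));
    return byConfigId;
  }
}
```

A node-side implementation would do the symmetric filtering, instantiating the HikariCP pools from the definitions it receives through the distributed class loader.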
This feature will also be used by feature request JPPF-480, for a built-in database implementation of job persistence
Referring to the snippet in [http://www.jppf.org/forums/index.php?topic=748.0], I made up a task which uses [https://github.com/brettwooldridge/HikariCP HikariCP] as a connection pool.
When I start the client code, everything is fine. But when I start it again, I get the following output:
Maybe the classloader of the node gets the bytecode a second time and isn't able to cast the "old" stored object to the newly loaded class?
The main goal is to set up a connection pool on a node and let the tasks on the node perform database operations on behalf of different clients.
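The hypothesis above can be reproduced outside JPPF. The following self-contained experiment (not JPPF code) compiles a tiny class and loads it through two independent classloaders, showing that the resulting classes are distinct and a cast from one to the other would fail with a `ClassCastException`:

```java
import javax.tools.*;
import java.net.*;
import java.nio.file.*;

// Demonstrates that the same class loaded by two different classloaders
// yields two distinct Class objects, so instances are not cast-compatible.
class ClassLoaderDemo {

  static String demo() throws Exception {
    // compile a tiny class into a temporary directory
    Path dir = Files.createTempDirectory("cl-demo");
    Path src = dir.resolve("Payload.java");
    Files.writeString(src, "public class Payload {}");
    ToolProvider.getSystemJavaCompiler().run(null, null, null, "-d", dir.toString(), src.toString());

    URL[] cp = { dir.toUri().toURL() };
    // two independent loaders (no parent delegation), each defining its own Payload class
    try (URLClassLoader l1 = new URLClassLoader(cp, null);
         URLClassLoader l2 = new URLClassLoader(cp, null)) {
      Class<?> c1 = l1.loadClass("Payload");
      Class<?> c2 = l2.loadClass("Payload");
      Object oldInstance = c1.getDeclaredConstructor().newInstance();
      return c1.getName().equals(c2.getName()) // true: same name...
          + "," + (c1 == c2)                   // false: ...but distinct Class objects
          + "," + c2.isInstance(oldInstance);  // false: a cast would throw ClassCastException
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(demo()); // prints true,false,false
  }
}
```

This matches the described symptom: if the node re-downloads the bytecode under a fresh classloader while a pool instance created under the old classloader is still cached, casting the cached instance fails.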
Currently there is too much complexity in the handling of client connections to the drivers and their status. In particular, each JPPFClientConnection implementation holds two actual connections, both subclasses of AbstractClientConnectionHandler, each with its own status listeners. The main connection's status is set either directly or as a combination of the states of the two "sub-connections". In the former case, the sub-connections' status becomes inconsistent with that of the main connection.
Overall, this complexity results in many observed problems in the client, especially when running the automated tests: deadlocks, race conditions, failures of the recovery and failover mechanisms.
What we propose is to remove the code that handles the status in the sub-connections (and thus in AbstractClientConnectionHandler) and keep a single source of status and associated events.
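The single source of status could be a small holder owned by the main connection, roughly as sketched below. The enum values and class names are assumptions, loosely modeled on the existing client API, not the actual implementation:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical single source of connection status and events.
enum ConnectionStatus { NEW, CONNECTING, ACTIVE, EXECUTING, DISCONNECTED, FAILED }

interface StatusListener {
  void statusChanged(ConnectionStatus oldStatus, ConnectionStatus newStatus);
}

class ConnectionStatusHolder {
  private ConnectionStatus status = ConnectionStatus.NEW;
  private final List<StatusListener> listeners = new CopyOnWriteArrayList<>();

  void addListener(StatusListener listener) { listeners.add(listener); }

  synchronized ConnectionStatus getStatus() { return status; }

  // the only place where the status changes: sub-connections cannot diverge from it
  synchronized void setStatus(ConnectionStatus newStatus) {
    ConnectionStatus old = status;
    if (old == newStatus) return;  // no event for no-op transitions
    status = newStatus;
    for (StatusListener listener : listeners) listener.statusChanged(old, newStatus);
  }
}
```

Sub-connections would then only read from, and report transitions to, this holder, which removes the possibility of the main status and sub-connection statuses drifting apart.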
Additionally, the abstract class org.jppf.client.balancer.ChannelWrapper, subclassed as ChannelWrapperLocal and ChannelWrapperRemote, holds an executor field of type ExecutorService, defined as a single-thread executor in both subclasses. Instead of a separate thread pool for each ChannelWrapper, we should make use of the executor held by the JPPFClient instance, adding proper synchronization where needed.
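The intended structure could be sketched as follows; the class names mirror the ones cited above with a "Sketch" suffix, and everything else is an assumption about the eventual design:

```java
import java.util.concurrent.*;

// Channel wrappers share the client's executor instead of each creating
// their own single-thread pool.
class JPPFClientSketch {
  // one pool owned by the client, sized for all channels
  final ExecutorService executor = Executors.newCachedThreadPool();
}

abstract class ChannelWrapperSketch {
  private final ExecutorService executor;  // shared, not owned by the wrapper

  ChannelWrapperSketch(JPPFClientSketch client) {
    this.executor = client.executor;       // reuse the client's pool
  }

  // jobs from all wrappers run on the shared pool; per-wrapper ordering,
  // previously implied by the single-thread executors, must now be
  // enforced explicitly if it is required
  Future<?> submit(Runnable job) {
    return executor.submit(job);
  }
}

class RemoteChannelSketch extends ChannelWrapperSketch {
  RemoteChannelSketch(JPPFClientSketch client) { super(client); }
}
```

One design point to keep in mind: the per-wrapper single-thread executors currently guarantee that each channel processes its jobs sequentially, so moving to a shared pool is where the "proper synchronization if needed" mentioned above comes in.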