JPPF Issue Tracker
CLOSED  Feature request JPPF-501  -  Database services
Posted May 07, 2017 - updated May 25, 2017
This issue has been closed with status "Closed" and resolution "RESOLVED".
Issue details
  • Type of issue
    Feature request
  • Status
  • Assigned to
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
  • Posted by
  • Owned by
    Not owned by anyone
  • Category
  • Resolution
  • Priority
  • Targeted for
    JPPF 6.0
Issue description
We propose to implement a set of facilities to provide easy access to one or more databases from a JPPF application. One goal will be to make it as painless as possible to define, cache and use JDBC data sources using a simple API.

Some important considerations:

1) choice of a connection pool/datasource implementation: we propose HikariCP. It offers excellent performance, has a small footprint (131 KB jar), and has no runtime dependency other than SLF4J, which is already distributed with JPPF

2) how to define datasources: we propose to do this from the JPPF configuration, for instance:
# datasource definition
jppf.datasource.<configId>.name = jobDS
jppf.datasource.<configId>.<hikaricp_property_1> = value_1
jppf.datasource.<configId>.<hikaricp_property_N> = value_N
  • configId is used to distinguish the datasource properties when multiple datasources are defined
  • the datasource name is mandatory and is used to store and retrieve the datasource in a custom registry. It is also the name used to reference the datasource in the configuration of the database job persistence implementation (JPPF-480)
  • hikaricp_property_x designates any valid HikariCP configuration property. Properties not supported by HikariCP are simply ignored
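As an illustration, a configuration defining two datasources might look as follows (the configIds, names, and JDBC URLs are hypothetical; the remaining keys are standard HikariCP properties):

```properties
# first datasource, named "jobDS"
jppf.datasource.job.name = jobDS
jppf.datasource.job.driverClassName = org.h2.Driver
jppf.datasource.job.jdbcUrl = jdbc:h2:mem:jobs
jppf.datasource.job.username = sa
jppf.datasource.job.maximumPoolSize = 10

# second datasource, named "resultsDS"
jppf.datasource.results.name = resultsDS
jppf.datasource.results.jdbcUrl = jdbc:h2:mem:results
```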
3) scope and class loading considerations: we want to be able to define, in a single place, datasources that will be instantiated in every node. To achieve this, the definitions would be created on the driver side, and the built-in distributed class loader would be used to download them and make the JDBC driver classes available to the nodes, without deploying these classes in each node. We propose implementing a "datasource provider" discovered via SPI, with a different implementation in the driver. Each datasource configuration could also specify a "scope" property, used only in the driver, to tell whether the datasource is to be deployed on the nodes (scope = node) or in the local JVM
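A minimal sketch of the driver-side parsing this implies, assuming the "jppf.datasource.&lt;configId&gt;.&lt;property&gt;" naming scheme and the "scope" property described above (the class and method names are illustrative, not an actual JPPF API; the default scope of "driver" is an assumption):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Hypothetical helper that groups "jppf.datasource.*" configuration entries
 * by configId and filters them by scope. Not part of the JPPF API.
 */
public class DataSourceDefinitions {
    // matches jppf.datasource.<configId>.<property>
    private static final Pattern KEY =
        Pattern.compile("jppf\\.datasource\\.([^.]+)\\.(.+)");

    /** Groups raw configuration entries by configId. */
    public static Map<String, Properties> parse(Properties config) {
        Map<String, Properties> defs = new HashMap<>();
        for (String key : config.stringPropertyNames()) {
            Matcher m = KEY.matcher(key);
            if (!m.matches()) continue;
            defs.computeIfAbsent(m.group(1), id -> new Properties())
                .setProperty(m.group(2), config.getProperty(key));
        }
        return defs;
    }

    /**
     * Keeps only the definitions whose "scope" matches; definitions without
     * an explicit scope are assumed local to the driver ("driver").
     */
    public static Map<String, Properties> withScope(
            Map<String, Properties> defs, String scope) {
        Map<String, Properties> result = new HashMap<>();
        for (Map.Entry<String, Properties> e : defs.entrySet()) {
            if (scope.equals(e.getValue().getProperty("scope", "driver"))) {
                result.put(e.getKey(), e.getValue());
            }
        }
        return result;
    }
}
```

The driver would pass the node-scoped subset to the nodes, while the SPI provider implementations decide how each subset is turned into actual HikariCP datasources.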

This feature will also be used by Feature request JPPF-480 - Jobs persistence in the driver, for a built-in database implementation of job persistence

Comment posted by
May 25, 11:30
Implemented in trunk revision 4518