JPPF Issue Tracker
CLOSED  Feature request JPPF-501  -  Database services
Posted May 07, 2017 - updated May 25, 2017
This issue has been closed with status "Closed" and resolution "RESOLVED".
Issue details
  • Type of issue
    Feature request
  • Status
    Closed
  • Assigned to
     lolo4j
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
  • Posted by
     lolo4j
  • Owned by
    Not owned by anyone
  • Category
    Core
  • Resolution
    RESOLVED
  • Priority
    Normal
  • Targeted for
    JPPF 6.0
Issue description
We propose to implement a set of facilities to provide easy access to one or more databases from a JPPF application. One goal will be to make it as painless as possible to define, cache and use JDBC data sources using a simple API.

Some important considerations:

1) choice of a connection pool/datasource implementation: we propose HikariCP. It has great performance, a small footprint (131 KB jar), and no runtime dependency other than SLF4J, which is already distributed with JPPF.
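To illustrate why HikariCP is a lightweight choice, here is a minimal sketch of configuring it programmatically. This is plain HikariCP usage, not JPPF API code; the in-memory H2 URL and credentials are placeholders.

```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

// Minimal HikariCP sketch; the JDBC URL and credentials are placeholders.
public class HikariSketch {
    public static void main(String[] args) {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:h2:mem:testdb"); // placeholder in-memory database
        config.setUsername("sa");
        config.setPassword("");
        config.setMaximumPoolSize(10);           // standard HikariCP pool property
        // HikariDataSource is Closeable, so try-with-resources shuts the pool down
        try (HikariDataSource ds = new HikariDataSource(config)) {
            // the datasource is now ready to hand out pooled JDBC connections
            System.out.println("pool started: " + ds.getPoolName());
        }
    }
}
```

The same properties (jdbcUrl, username, maximumPoolSize, ...) are what the configuration-based definition in point 2) below would map onto.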

2) how to define datasources: we propose to do this from the JPPF configuration, for instance:
# datasource definition
jppf.datasource.<configId>.name = jobDS
jppf.datasource.<configId>.<hikaricp_property_1> = value_1
...
jppf.datasource.<configId>.<hikaricp_property_N> = value_N
Where:
  • configId is used to distinguish the datasource properties when multiple datasources are defined
  • the datasource name is mandatory; it is used to store and retrieve the datasource in a custom registry, and is also the name referenced in the configuration of the job persistence implementation (JPPF-480)
  • hikaricp_property_x designates any valid HikariCP configuration property. Properties not supported by HikariCP are simply ignored
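As a concrete illustration of the scheme above, a hypothetical definition of a single datasource with configId "job" might look as follows (the driver class, URL and property values are illustrative, not from the source; all properties after "name" are standard HikariCP properties):

```properties
# hypothetical example: datasource "jobDS" defined under configId "job"
jppf.datasource.job.name = jobDS
jppf.datasource.job.driverClassName = org.h2.Driver
jppf.datasource.job.jdbcUrl = jdbc:h2:mem:jppf_jobs
jppf.datasource.job.username = sa
jppf.datasource.job.maximumPoolSize = 10
```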
3) scope and class loading considerations: we want to be able to define, in a single place, datasources that will be instantiated in every node. To achieve this, we want to be able to create the definitions on the driver side and use the built-in distributed class loader to download them and make the JDBC driver classes available to the nodes, without deploying them in each node. We propose implementing a "datasource provider" discovered via SPI, with a different implementation in the driver. Each datasource configuration could also specify a "scope" property, used only in the driver, to tell whether the datasource is to be deployed on the nodes (scope = node) or in the local JVM.
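The SPI discovery described above can be sketched with the JDK's ServiceLoader. The interface name "DataSourceProvider" and its methods are hypothetical, not an actual JPPF API; in a real deployment an implementation would be registered in a META-INF/services file and downloaded to the nodes via the distributed class loader.

```java
import java.util.ServiceLoader;

// Sketch of SPI-based provider discovery; "DataSourceProvider" is a
// hypothetical contract, not part of the JPPF API.
public class DataSourceProviderDemo {

    // Hypothetical provider contract: creates the datasources defined in
    // the configuration, for a given scope ("node" or local JVM).
    public interface DataSourceProvider {
        String scope();
        void createDataSources();
    }

    public static void main(String[] args) {
        // Implementations would be listed in
        // META-INF/services/DataSourceProviderDemo$DataSourceProvider
        // and discovered here at runtime.
        int found = 0;
        for (DataSourceProvider provider : ServiceLoader.load(DataSourceProvider.class)) {
            System.out.println("found provider with scope " + provider.scope());
            found++;
        }
        // No provider is registered in this standalone sketch.
        System.out.println("providers found: " + found);
    }
}
```

Separate driver-side and node-side implementations can then be shipped as distinct service registrations, which matches the "different implementation in the driver" idea above.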

This feature will also be used by feature request JPPF-480 - Jobs persistence in the driver, for a built-in database implementation of job persistence.

#3
Comment posted by
 lolo4j
May 25, 11:30
Implemented in trunk revision 4518