JPPF Issue Tracker
CLOSED  Bug report JPPF-59  -  Cannot submit multiple jobs concurrently with JCA adaptor
Posted Sep 05, 2012 - updated Dec 27, 2014
This issue has been closed with status "Closed" and resolution "RESOLVED".
Issue details
  • Type of issue
    Bug report
  • Status
    Closed
  • Assigned to
     lolo4j
  • Progress
    100%
  • Type of bug
    Not triaged
  • Likelihood
    Not triaged
  • Effect
    Not triaged
  • Posted by
     lolo4j
  • Owned by
    Not owned by anyone
  • Category
    J2EE
  • Resolution
    RESOLVED
  • Priority
    Normal
  • Reproducibility
    Always
  • Severity
    Normal
  • Targeted for
    JPPF 3.1.x
Issue description
From this forum thread.

When submitting multiple non-blocking jobs concurrently with a connection pool configured, jobs are still sent one at a time to the driver.
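The symptom can be illustrated with a plain-JDK sketch that uses no JPPF classes: a Semaphore stands in for the driver connection pool, and each "job" holds a connection while it runs. With a pool of one connection only one job is ever in flight at a time, which is exactly the serialized behavior reported here; a larger pool allows genuine concurrency. All names in this sketch are hypothetical and do not reflect the actual JPPF JCA API.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ConnectionPoolDemo {
    // Returns the highest number of jobs observed "on the wire" at once
    // when `jobs` jobs are submitted through a pool of `poolSize` connections.
    static int maxConcurrent(int poolSize, int jobs) throws Exception {
        Semaphore pool = new Semaphore(poolSize); // stands in for the driver connection pool
        AtomicInteger active = new AtomicInteger();
        AtomicInteger max = new AtomicInteger();
        CountDownLatch go = new CountDownLatch(1);
        ExecutorService exec = Executors.newFixedThreadPool(jobs);
        for (int i = 0; i < jobs; i++) {
            exec.submit(() -> {
                pool.acquire();                   // take a connection from the pool
                int now = active.incrementAndGet();
                max.accumulateAndGet(now, Math::max);
                go.await();                       // hold the connection until released
                active.decrementAndGet();
                pool.release();
                return null;
            });
        }
        // wait until every available connection is busy, then let the jobs finish
        while (active.get() < Math.min(poolSize, jobs)) Thread.sleep(5);
        go.countDown();
        exec.shutdown();
        exec.awaitTermination(10, TimeUnit.SECONDS);
        return max.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("pool=1: max concurrent jobs = " + maxConcurrent(1, 4));
        System.out.println("pool=4: max concurrent jobs = " + maxConcurrent(4, 4));
    }
}
```

With a single-connection pool the sketch always reports a maximum of one concurrent job; the bug was that the JCA adapter behaved this way even when the pool held several connections.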
Steps to reproduce this issue
  • modify the JPPF configuration in ra.xml to define a pool of connections
  • submit multiple jobs via the JPPF resource adapter
  • observe the jobs in the JPPF admin console: only one job executes at a time, even when multiple nodes are available
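The first step above can be sketched as follows, assuming the standard JPPF 3.x client property format for the configuration embedded in the adapter's ra.xml; the host, port, and pool-size values are placeholders:

```
# hypothetical excerpt of the JPPF client configuration carried in ra.xml,
# using the standard JPPF 3.x client property names
jppf.drivers = driver1
driver1.jppf.server.host = localhost
driver1.jppf.server.port = 11111
# size of the connection pool for this driver: with the bug present, jobs
# were still sent one at a time even when this value was greater than 1
driver1.jppf.pool.size = 5
```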

#3
Comment posted by
 lolo4j
Sep 07, 20:59
I've actually integrated the JCA adapter with the new balancer, which doesn't have this problem.

I think it's much better than trying to fix the old client load-balancer and keep both in the same JPPF version. This will save us a lot of time in maintenance and make the code leaner, which is always desirable.
#5
Comment posted by
 lolo4j
Sep 07, 22:20
Fixed. Changes committed to SVN:

The issue was updated with the following change(s):
  • This issue has been closed
  • The status has been updated, from New to Closed.
  • This issue's progression has been updated to 100 percent completed.
  • The resolution has been updated, from Not determined to RESOLVED.
  • Information about the user working on this issue has been changed, from lolo4j to Not being worked on.