1) One of the drivers is used solely as failover, as is the case in the originating forum thread. In this situation an easy fix would be to have the server driver mark the peer driver as such (e.g. "jppf.driver.peer = true") in the information available to execution policies. Then it is enough to set an appropriate execution policy on the job to work around the issue, like this:
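The policy snippet referenced here did not survive; as a sketch, the idea is that the job's execution policy rejects any channel advertising `jppf.driver.peer = true`, so tasks only go to real nodes. The property name comes from the proposal above; the matching logic (a property-equality test, as JPPF's `Equal` policy performs) can be reproduced in plain Java:

```java
import java.util.Properties;

// Sketch of the proposed work-around: a predicate over a channel's system
// properties that excludes peer drivers (those advertising the hypothetical
// "jppf.driver.peer = true" flag). In actual JPPF code this would be expressed
// as an execution policy set on the job's SLA, e.g. a negated Equal policy on
// the "jppf.driver.peer" property.
public class PeerPolicySketch {

    /** Returns true if the channel is acceptable, i.e. NOT a peer driver. */
    public static boolean accepts(Properties channelProps) {
        // nodes that do not advertise the flag default to "not a peer"
        return !Boolean.parseBoolean(channelProps.getProperty("jppf.driver.peer", "false"));
    }

    public static void main(String[] args) {
        Properties peerDriver = new Properties();
        peerDriver.setProperty("jppf.driver.peer", "true");
        Properties regularNode = new Properties(); // no peer flag set

        System.out.println(accepts(peerDriver));  // prints false: peer excluded
        System.out.println(accepts(regularNode)); // prints true: real node accepted
    }
}
```

With such a policy in place, a job submitted to the failover driver is never dispatched back to its peer, which is exactly the behavior needed in the forum scenario.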
This is the solution I will implement for JPPF 3.3.7 as it satisfies the scenario in the forum thread.
2) In the general case, for instance when the peer initially has some nodes attached and they later disconnect for any reason, there are several possibilities: