JPPF Issue Tracker
JPPF (jppf)
August 20, 2017
icon_milestone.png 13:38 JPPF 5.2.9
A new milestone has been reached
August 19, 2017
feature_request_tiny.png 10:08  Feature request JPPF-28 - Asynchronous communication between servers
lolocohen : Issue closed
August 12, 2017
feature_request_tiny.png 10:33  Feature request JPPF-445 - Provide access to the node from a task
lolocohen : Issue closed
August 10, 2017
icon_build.png 10:00 JPPF 5.2.8
New version released
August 09, 2017
bug_report_tiny.png 08:09  Bug report JPPF-512 - PeerAttributesHandler spawns too many threads
lolocohen : Issue closed
bug_report_tiny.png 07:56  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue closed
bug_report_tiny.png 06:41  Bug report JPPF-513 - Using @JPPFRunnable annotation leads to ClassNotFoundException
lolocohen : Issue created
When using a POJO task where one of the methods or the constructor is annotated with @JPPFRunnable, the node executing the task throws a ClassNotFoundException saying it can't find the class of the POJO task.
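For reference, a minimal sketch of the kind of POJO task affected; the @JPPFRunnable annotation and the JPPFJob.add(Object, Object...) overload come from the JPPF task API, while the class, method and argument names (and the annotation's package) are illustrative assumptions:

import org.jppf.client.JPPFClient;
import org.jppf.client.JPPFJob;
import org.jppf.server.protocol.JPPFRunnable; // annotation package assumed; check the javadoc of your JPPF version

// a plain POJO: it does not implement any JPPF task interface, only one method is annotated
public class MyPojoTask {
  @JPPFRunnable
  public String compute(final int value) {
    return "result for " + value;
  }
}

// illustrative client-side setup that triggers execution of the annotated method on a node
class PojoJobRunner {
  public static void main(final String[] args) throws Exception {
    try (JPPFClient client = new JPPFClient()) {
      JPPFJob job = new JPPFJob();
      job.setName("POJO job");
      // the arguments following the task object are passed to the @JPPFRunnable method
      job.add(new MyPojoTask(), 42);
      // with this issue, the node raises a ClassNotFoundException for MyPojoTask here
      client.submitJob(job);
    }
  }
}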
August 08, 2017
bug_report_tiny.png 09:56  Bug report JPPF-512 - PeerAttributesHandler spawns too many threads
lolocohen : Issue created
The PeerAttributesHandler class uses a thread pool to handle JMX notifications from peer drivers when they update their number of nodes and total number of node threads. The pool size is set to Runtime.getRuntime().availableProcessors(), which seems wasteful since the tasks performed by the threads are very short-lived.

Instead, we should configure this number of threads via a configuration property "jppf.peer.handler.threads", defaulting to 1.
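A minimal sketch of how the proposed property could be used to size the pool, assuming the existing JPPFConfiguration / TypedProperties API; the property name comes from this request, the surrounding class is purely illustrative:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.jppf.utils.JPPFConfiguration;

public class PeerHandlerPoolSizing {
  // create the pool that processes peer JMX notifications, sized from configuration, defaulting to a single thread
  static ExecutorService createPeerHandlerExecutor() {
    int nbThreads = JPPFConfiguration.getProperties().getInt("jppf.peer.handler.threads", 1);
    return Executors.newFixedThreadPool(Math.max(1, nbThreads));
  }
}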
feature_request_tiny.png 08:49  Feature request JPPF-480 - Jobs persistence in the driver
lolocohen : Issue closed
July 10, 2017
bug_report_tiny.png 10:26  Bug report JPPF-510 - Documentation on job listeners does not mention isRemoteExecution() and getConnection() methods of JobEvent
lolocohen : Issue closed
July 08, 2017
feature_request_tiny.png 06:23  Feature request JPPF-511 - Ability to persist and reuse the state of adaptive load-balancers
lolocohen : Issue created
From [http://www.jppf.org/forums/index.php/topic,7993.0.html this forum post]:

> Adaptive algorithms use statistics, but when the driver restarts or a hardware failure occurs, the statistics are gone and the load-balancing algorithm's adaptation starts over from the beginning.
>
> - Is it possible (and does it make sense?) to save job execution statistics periodically and load them into the same driver upon restart, or into another driver which is already running?
> - Another idea: maybe share these statistics with the peer drivers, so that when one of them goes down the information still exists on the other peers, and when it restarts, or when a new driver is added as a peer, it starts with the existing statistics.
>
> We are planning to use p2p because of the risk of a single point of failure, but the progress of the algorithm's learning is important and it shouldn't reset each time the server restarts.
bug_report_tiny.png 06:08  Bug report JPPF-510 - Documentation on job listeners does not mention isRemoteExecution() and getConnection() methods of JobEvent
lolocohen : Issue created
The documentation on [http://www.jppf.org/doc/5.2/index.php?title=Jobs_runtime_behavior,_recovery_and_failover#Job_lifecycle_notifications:_JobListener '''job listeners'''] does not mention the '''isRemoteExecution()''' and '''getConnection()''' methods in the [http://www.jppf.org/javadoc/5.2/index.html?org/jppf/client/event/JobEvent.html '''JobEvent'''] class.
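To make the gap concrete, here is a small sketch of a job listener using both methods; JobListenerAdapter and the jobDispatched(JobEvent) callback are part of the client events API, and the listener class itself is illustrative:

import org.jppf.client.event.JobEvent;
import org.jppf.client.event.JobListenerAdapter;

// logs, for each dispatch of a job, whether it goes to a remote driver or to the local executor,
// and which client connection (if any) carries it
public class DispatchLoggingListener extends JobListenerAdapter {
  @Override
  public void jobDispatched(final JobEvent event) {
    boolean remote = event.isRemoteExecution();
    System.out.println("job '" + event.getJob().getName() + "' dispatched "
      + (remote ? "remotely via connection " + event.getConnection() : "locally"));
  }
}

It would be registered with job.addJobListener(new DispatchLoggingListener()) before the job is submitted.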
June 25, 2017
bug_report_tiny.png 10:45  Bug report JPPF-509 - Regression: topology monitoring API does not detect peer to peer connections anymore
lolocohen : Issue closed
bug_report_tiny.png 09:03  Bug report JPPF-509 - Regression: topology monitoring API does not detect peer to peer connections anymore
lolocohen : Issue created
I've noticed in the admin console that peer to peer driver connections are no longer detected. Looking at the logs, I could see that the topology monitoring API never logs peer connections. I suspect this is due to the JPPFNodeForwardingMBean excluding peer nodes when retrieving the nodes specified with a NodeSelector.
feature_request_tiny.png 08:13  Feature request JPPF-508 - Peer to peer connection pooling
lolocohen : Issue created
Currently, in a multi-server topology where servers are connected to each other, each server can only send one job at a time to each of its peers. This has an impact on scalability.

It is possible to "trick" each server into connecting multiple times to the same peer, but this only works with manual peer configuration, for example:
jppf.peers = driver2a driver2b
jppf.peer.driver2a.server.host = localhost
jppf.peer.driver2a.server.port = 11111
jppf.peer.driver2b.server.host = localhost
jppf.peer.driver2b.server.port = 11111
However, this is quite cumbersome and is not possible with auto discovery of peer drivers.

We propose to enable the definition of connection pools instead, with a configurable pool size:
jppf.peers = driver2
# five connections to driver2
jppf.peer.driver2.pool.size = 5
jppf.peer.driver2.server.host = localhost
jppf.peer.driver2.server.port = 11111
or, with peer discovery enabled:
jppf.peer.discovery.enabled = true
# five connections to each discovered peer
jppf.peer.pool.size = 5
feature_request_tiny.png 07:00  Feature request JPPF-507 - New persisted jobs view in the web and desktop admin consoles
lolocohen : Issue created
The feature request JPPF-480 provides a pluggable way for the driver to persist jobs, to enable both job failover/recovery and the ability to execute jobs and retrieve their results offline. In particular, it provides a client-side API to administer persisted jobs.

We propose to add an administration interface to the web and desktop consoles to allow users to perform these tasks graphically in addition to programmatically.
bug_report_tiny.png 06:45  Bug report JPPF-506 - Client side load-balancer does not use the configuration passed to the JPPFClient constructor
lolocohen : Issue closed
June 24, 2017
bug_report_tiny.png 06:46  Bug report JPPF-506 - Client side load-balancer does not use the configuration passed to the JPPFClient constructor
lolocohen : Issue created
When using the constructor JPPFClient(String uuid, TypedProperties config, ConnectionPoolListener... listeners), the load-balancer for this client does not use the supplied TypedProperties object; instead it uses the global configuration via a static call to JPPFConfiguration.getProperties(). This results in wrong settings for the client-side load-balancer.

A possible workaround is to dynamically set the load-balancer configuration once the client is initialized, using JPPFClient.setLoadBalancerSettings(String algorithm, Properties config).
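A sketch of this workaround, using the constructor and the setLoadBalancerSettings() method named above; the "manual" algorithm and its "size" parameter are placeholders:

import java.util.Properties;
import org.jppf.client.JPPFClient;
import org.jppf.utils.TypedProperties;

public class ClientLoadBalancerWorkaround {
  public static void main(final String[] args) throws Exception {
    TypedProperties config = new TypedProperties();
    // ... fill in the client configuration; its load-balancer settings are currently ignored (this bug) ...
    // null uuid lets the client generate one
    try (JPPFClient client = new JPPFClient(null, config)) {
      // workaround: re-apply the desired load-balancer settings once the client is initialized
      Properties params = new Properties();
      params.setProperty("size", "5");
      client.setLoadBalancerSettings("manual", params);
      // ... submit jobs as usual ...
    }
  }
}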
June 20, 2017
enhancement_tiny.png 07:03  Enhancement JPPF-505 - Ability to disable the bias towards local node in the driver
lolocohen : Issue closed
enhancement_tiny.png 06:41  Enhancement JPPF-505 - Ability to disable the bias towards local node in the driver
lolocohen : Issue created
Currently, when a driver is configured with a local (same JVM) node, this local node is always given priority for job scheduling. We propose to give users the ability to disable this behavior via a driver configuration property such as "jppf.local.node.bias = false", with a default value of "true" to preserve compatibility with previous versions.
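For illustration, a driver configuration combining a local node with the proposed property; jppf.local.node.enabled is an existing driver property, while jppf.local.node.bias is the one proposed here:
# run a node in the same JVM as the driver
jppf.local.node.enabled = true
# proposed: do not give the local node priority when scheduling jobs
jppf.local.node.bias = false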
June 15, 2017
bug_report_tiny.png 06:53  Bug report JPPF-504 - Local node never completes connection to server
lolocohen : Issue closed
June 14, 2017
bug_report_tiny.png 06:47  Bug report JPPF-504 - Local node never completes connection to server
lolocohen : Issue created
When starting a JPPF driver with a local node, the local node does not complete its connection with the driver it is embedded in, even though it displays the message "Node successfully initialized". The node then behaves as if it had not been started at all, and does not appear in the administration console.
June 12, 2017
icon_build.png 10:00 JPPF 5.2.7
New version released
June 11, 2017
enhancement_tiny.png 20:19  Enhancement JPPF-502 - Ability to dynamically change the settings of the client load balancer
lolocohen : Issue closed
bug_report_tiny.png 07:27  Bug report JPPF-503 - JPPF Serialization: ConcurrentModificationException when serializing a java.util.Vector
lolocohen : Issue closed
June 08, 2017
bug_report_tiny.png 08:36  Bug report JPPF-503 - JPPF Serialization: ConcurrentModificationException when serializing a java.util.Vector
lolocohen : Issue created
When trying to serialize a Spring ApplicationContext using the JPPF serialization scheme, I get the following exception:
2017-06-08 08:02:07,204 [DEBUG][org.jppf.client.balancer.ChannelWrapperRemote.run(231)]:
java.io.IOException
at org.jppf.serialization.JPPFObjectOutputStream.writeObjectOverride(JPPFObjectOutputStream.java:91)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at org.jppf.serialization.DefaultJPPFSerialization.serialize(DefaultJPPFSerialization.java:58)
at org.jppf.utils.ObjectSerializerImpl.serialize(ObjectSerializerImpl.java:79)
at org.jppf.io.IOHelper.serializeDataToMemory(IOHelper.java:330)
at org.jppf.io.IOHelper.serializeData(IOHelper.java:311)
at org.jppf.io.IOHelper.sendData(IOHelper.java:283)
at org.jppf.client.BaseJPPFClientConnection.sendTasks(BaseJPPFClientConnection.java:137)
at org.jppf.client.JPPFClientConnectionImpl.sendTasks(JPPFClientConnectionImpl.java:34)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:212)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.ConcurrentModificationException
at java.util.Vector$Itr.checkForComodification(Vector.java:1184)
at java.util.Vector$Itr.next(Vector.java:1137)
at org.jppf.serialization.VectorHandler.writeDeclaredFields(VectorHandler.java:49)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:179)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.Serializer.writeDeclaredFields(Serializer.java:219)
at org.jppf.serialization.Serializer.writeFields(Serializer.java:192)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:146)
at org.jppf.serialization.Serializer.writeObject(Serializer.java:122)
at org.jppf.serialization.JPPFObjectOutputStream.writeObjectOverride(JPPFObjectOutputStream.java:89)
... 12 more
The fix in VectorHandler appears to be simple: iterating over a snapshot copy of the vector avoids the co-modification check. Replace the line:
for (Object o: vector) serializer.writeObject(o);
with:
List list = new ArrayList<>(vector);
for (Object o: list) serializer.writeObject(o);
This works in the scenario I used to reproduce.
June 01, 2017
enhancement_tiny.png 07:02  Enhancement JPPF-502 - Ability to dynamically change the settings of the client load balancer
lolocohen : Issue created
Currently, there is no way to dynamically change the algorithm or parameters of the load-balancer in the client. This can only be done statically in the configuration, whereas it is possible to change the ''server-side'' load balancing via the driver's JMX APIs.

We propose to add 2 methods to JPPFClient to allow dynamic changes of its load-balancer, for instance:
public class JPPFClient {
// change the load-balancer configuration
public void setLoadBalancerConfig(LoadBalancingInformation currentInfo);

// get the load-balancer configuration
public LoadBalancingInformation getLoadBalancerConfig();

...
}
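A possible usage sketch for the proposed methods; it assumes LoadBalancingInformation exposes the algorithm name and a mutable parameters object, as it does for the driver-side JMX API, so the exact accessors (and the class's package) are assumptions:

import org.jppf.client.JPPFClient;
import org.jppf.load.balancer.LoadBalancingInformation; // package as in JPPF 5.x, adjust if needed

public class ClientLoadBalancerConfigDemo {
  public static void main(final String[] args) throws Exception {
    try (JPPFClient client = new JPPFClient()) {
      // read the current client-side load-balancer configuration (proposed getter)
      LoadBalancingInformation info = client.getLoadBalancerConfig();
      System.out.println("algorithm = " + info.getAlgorithm() + ", parameters = " + info.getParameters());
      // tweak a parameter of the current algorithm and apply the change (proposed setter)
      info.getParameters().setProperty("size", "10");
      client.setLoadBalancerConfig(info);
    }
  }
}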
May 25, 2017
feature_request_tiny.png 11:30  Feature request JPPF-501 - Database services
lolocohen : Issue closed
May 07, 2017
feature_request_tiny.png 10:08  Feature request JPPF-501 - Database services
lolocohen : Issue created
We propose to implement a set of facilities to provide easy access to one or more databases from a JPPF application. One goal will be to make it as painless as possible to define, cache and use JDBC data sources using a simple API.

Some important considerations:

'''1) choice of a connection pool/datasource implementation''': we propose [https://github.com/brettwooldridge/HikariCP '''HikariCP''']. It has great performance, it is small (131 KB jar) and has no runtime dependency other than SLF4J, which is already distributed with JPPF.

'''2) how to define datasources''': we propose to do this from the JPPF configuration, for instance:
# datasource definition
jppf.datasource.<configId>.name = jobDS
jppf.datasource.<configId>.<hikaricp_property_1> = value_1
...
jppf.datasource.<configId>.<hikaricp_property_N> = value_N
Where:
* '''configId''' is used to distinguish the datasource properties when multiple datasources are defined
* the datasource '''name''' is mandatory and is used to store and retrieve the datasource in a custom registry. It is also the datasource name used in the configuration of this job persistence implementation
* '''hikaricp_property_x''' designates any valid HikariCP configuration property. Properties not supported by HikariCP are simply ignored
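Following up on the template above, a filled-in definition for illustration; "jobds" is an arbitrary configId, and driverClassName, jdbcUrl, username, password and maximumPoolSize are standard HikariCP properties (the values are placeholders):
# datasource named "jobDS", defined under the configId "jobds"
jppf.datasource.jobds.name = jobDS
jppf.datasource.jobds.driverClassName = org.h2.Driver
jppf.datasource.jobds.jdbcUrl = jdbc:h2:tcp://localhost:9092/./jppf_jobs
jppf.datasource.jobds.username = sa
jppf.datasource.jobds.password = secret
jppf.datasource.jobds.maximumPoolSize = 10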
'''3) scope and class loading considerations''': we want to be able to define, in a single place, datasources that will be instantiated in every node. To achieve this, we want to be able to create the definitions on the driver side and use the built-in distributed class loader to download them and make the JDBC driver classes available to the nodes, without deploying them in each node. We propose implementing a "datasource provider" discovered via SPI, with a different implementation in the driver. Each datasource configuration could also specify a "scope" property, only used in the driver, to tell whether the datasource is to be deployed on the nodes (scope = node) or in the local JVM.

This feature will also be used by the feature request JPPF-480, for a built-in database implementation of job persistence.
April 28, 2017
bug_report_tiny.png 08:10  Bug report JPPF-500 - Node Persistent Data Casting error
lolocohen : Issue closed
April 27, 2017
icon_milestone.png 21:36 JPPF 5.1.7
A new milestone has been reached
icon_milestone.png 16:16 JPPF 4.2.9
A new milestone has been reached
icon_milestone.png 15:29 JPPF 4.2.8
A new milestone has been reached
April 19, 2017
bug_report_tiny.png 11:14  Bug report JPPF-500 - Node Persistent Data Casting error
steveoh444 : Issue created
Referring to the snippet in [http://www.jppf.org/forums/index.php?topic=748.0], I made up a task which uses [https://github.com/brettwooldridge/HikariCP HikariCP] as a connection pool.
When starting the client code for the first time, everything is fine. But when I start it again, I get the following output:


cd C:\Users\swende01\Documents\NetBeansProjects\JPPFDBm; "JAVA_HOME=C:\\Program Files\\Java\\jdk1.8.0_121" cmd /c "\"\"C:\\Program Files\\NetBeans 8.2\\java\\maven\\bin\\mvn.bat\" -Dexec.args=\"-Xmx64m -Dlog4j.configuration=log4j.properties -Djppf.config=jppf.properties -Djava.util.logging.config.file=config/logging.properties -classpath %classpath de.itout.jppf.test.jppfdbm.Runner\" -Dexec.executable=\"C:\\Program Files\\Java\\jdk1.8.0_121\\bin\\java.exe\" -Dmaven.ext.class.path=\"C:\\Program Files\\NetBeans 8.2\\java\\maven-nblib\\netbeans-eventspy.jar\" -Dfile.encoding=UTF-8 -Djava.net.useSystemProxies=true process-classes org.codehaus.mojo:exec-maven-plugin:1.2.1:exec\""
Scanning for projects...

------------------------------------------------------------------------
Building JPPFDBm 1.0-SNAPSHOT
------------------------------------------------------------------------

--- maven-resources-plugin:2.5:resources (default-resources) @ JPPFDBm ---
[debug] execute contextualize
Using 'UTF-8' encoding to copy filtered resources.
skip non existing resourceDirectory C:\Users\swende01\Documents\NetBeansProjects\JPPFDBm\src\main\resources

--- maven-compiler-plugin:2.3.2:compile (default-compile) @ JPPFDBm ---
Nothing to compile - all classes are up to date

--- exec-maven-plugin:1.2.1:exec (default-cli) @ JPPFDBm ---
log4j:WARN No appenders could be found for logger (org.jppf.utils.JPPFConfiguration).
log4j:WARN Please initialize the log4j system properly.
client process id: 1088, uuid: BDB1A448-26B2-738A-C31A-AF1B490F1FFE
[client: jppf_discovery-1-1 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-1 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-1 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-2 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-2 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-2 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-3 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-4 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-4 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-5 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-5 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-5 - TasksServer] Reconnected to the JPPF task server
Doing something while the jobs are executing ...
Results for job 'Template concurrent job 1' :
Template concurrent job 1 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 2' :
Template concurrent job 2 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 3' :
Template concurrent job 3 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 4' :
Template concurrent job 4 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 5' :
Template concurrent job 5 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
------------------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------------------
Total time: 10.582s
Finished at: Wed Apr 19 10:02:18 CEST 2017
Final Memory: 5M/123M
------------------------------------------------------------------------


Maybe the class loader of the node gets the bytecode a second time and isn't able to cast the "old" stored object to the newly loaded class?
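For context on this hypothesis, here is a minimal, JPPF-independent sketch showing that the "same" class loaded by two different class loaders is two distinct types for the JVM, which is exactly what the "HikariDataSource cannot be cast to HikariDataSource" message suggests (the class and loader setup are illustrative):

import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoadersDemo {
  public static void main(final String[] args) throws Exception {
    // the location this class was loaded from, reused as the classpath of two sibling loaders
    URL[] cp = { TwoLoadersDemo.class.getProtectionDomain().getCodeSource().getLocation() };
    // parent = null: neither loader delegates to the application class loader for this class
    try (URLClassLoader loader1 = new URLClassLoader(cp, null);
         URLClassLoader loader2 = new URLClassLoader(cp, null)) {
      Class<?> c1 = loader1.loadClass("TwoLoadersDemo");
      Class<?> c2 = loader2.loadClass("TwoLoadersDemo");
      System.out.println(c1 == c2); // false: same name, different runtime classes
      Object instanceFromLoader1 = c1.getDeclaredConstructor().newInstance();
      // false: casting this instance to the class from loader2 would throw ClassCastException
      System.out.println(c2.isInstance(instanceFromLoader1));
    }
  }
}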

The main goal is to set up a connection pool on a node and let the tasks on the node do something on the database for different clients.


=== The Database ===


CREATE TABLE JPPFTEST (
  IDENT NUMERIC(8,0) NOT NULL,
  RES VARCHAR(255),
  CONSTRAINT PK_JPPFTEST PRIMARY KEY (IDENT)
);


=== The code of the Task ===


package de.itout.jppf.test.jppfdbm;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.io.UnsupportedEncodingException;
import java.net.InetAddress;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.jppf.node.protocol.AbstractTask;
import org.jppf.node.NodeRunner;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.logging.Level;
import java.util.logging.Logger;

/**
*
* @author swende01
*/
public class DBTask extends AbstractTask<String> {

@Override
public void run() {
System.out.println("running task:" + this.getId());
System.out.println("calling for datasource");
HikariDataSource ds = getDataSource();
System.out.println("datasource returned");
if (ds == null) {
System.out.println("datasource==null");
}
try {
Connection conn = ds.getConnection();
System.out.println("connection created");
if (conn == null) {
System.out.println("conn==null");
}
String res = calculateHash();
Statement stmt = conn.createStatement();
if (stmt == null) {
System.out.println("stmt==null");
}
String host = InetAddress.getLocalHost().getHostName();
System.out.println("host:" + host);
String q = "INSERT INTO JPPFTEST VALUES ("+getNextID(conn)+",'" + res + "')";
System.out.println(q);
stmt.executeUpdate(q);
stmt.close();
stmt = null;
conn.close();
conn = null;
setResult(res);
} catch (Exception ex) {
System.out.println(ex);
}
}

private int getNextID(Connection con){
Statement stmt = null;
String query = "SELECT MAX(IDENT)+1 FROM JPPFTEST";
try {
stmt = con.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
int id = rs.getInt(1);
return id;
}
} catch (SQLException e) {
System.err.println(e);
} finally {
if (stmt != null) {
try {
stmt.close();
} catch (SQLException ex) {
System.err.println(ex);
}
}
}
return -1;
}

private String calculateHash() {
System.out.println("Generate Random Numbers...");
double a = Math.random();
double b = Math.random();
System.out.println("Random Numbers are A="+a+" and B="+b);
MessageDigest md;
String result = "";
try {
md = MessageDigest.getInstance("SHA-256");
String text = a+""+b+"there is salt in the sea";
System.out.println("Encrypt the two numbers with a salt ["+text+"]");
md.update(text.getBytes("UTF-8"));
byte[] digest = md.digest();
result = String.format("%064x", new java.math.BigInteger(1, digest));
System.out.println("Encryted text is["+result+"]");
} catch (NoSuchAlgorithmException | UnsupportedEncodingException ex) {
System.err.println(ex);
}
return result;
}

protected static HikariDataSource setUpDataSource() {
HikariConfig config = new HikariConfig();
config.setJdbcUrl("SOMEJDBCURL");
config.setUsername("user");
config.setPassword("pw");
config.addDataSourceProperty("cachePrepStmts", "true");
config.addDataSourceProperty("prepStmtCacheSize", "250");
config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");

HikariDataSource dataSource = new HikariDataSource(config);
NodeRunner.setPersistentData("datasource", dataSource);
return dataSource;
}

public synchronized static HikariDataSource getDataSource() {
System.out.println("returning dataSource");
HikariDataSource ds = (HikariDataSource) NodeRunner.getPersistentData("datasource");
if (ds == null) {
System.out.println("setting up dataSource");
ds = setUpDataSource();
}
return ds;
}
}


=== The Client Runner Class ===


/*
* To change this license header, choose License Headers in Project Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package de.itout.jppf.test.jppfdbm;

import java.util.ArrayList;
import java.util.List;
import org.jppf.client.*;
import org.jppf.node.protocol.Task;

/**
*
* @author swende01
*/
public class Runner {

public static void main(String[] args) {

// create the JPPFClient. This constructor call causes JPPF to read the configuration file
// and connect with one or multiple JPPF drivers.
try (JPPFClient jppfClient = new JPPFClient()) {

// create a runner instance.
Runner runner = new Runner();

// create and execute a blocking job
// runner.executeBlockingJob(jppfClient);
// create and execute a non-blocking job
//runner.executeNonBlockingJob(jppfClient);
// create and execute 3 jobs concurrently
runner.executeMultipleConcurrentJobs(jppfClient, 5);

} catch (Exception e) {
e.printStackTrace();
}
}

public void executeMultipleConcurrentJobs(final JPPFClient jppfClient, final int numberOfJobs) throws Exception {
// ensure that the client connection pool has as many connections
// as the number of jobs to execute
ensureNumberOfConnections(jppfClient, numberOfJobs);

// this list will hold all the jobs submitted for execution,
// so we can later collect and process their results
final List<JPPFJob> jobList = new ArrayList<>(numberOfJobs);

// create and submit all the jobs
for (int i = 1; i <= numberOfJobs; i++) {
// create a job with a distinct name
JPPFJob job = createJob("Template concurrent job " + i);

// set the job in non-blocking (or asynchronous) mode.
job.setBlocking(false);

// submit the job for execution, without blocking the current thread
jppfClient.submitJob(job);

// add this job to the list
jobList.add(job);
}

// the non-blocking jobs are submitted asynchronously, we can do anything else in the meantime
System.out.println("Doing something while the jobs are executing ...");

// wait until the jobs are finished and process their results.
for (JPPFJob job : jobList) {
// wait if necessary for the job to complete and collect its results
List<Task<?>> results = job.awaitResults();

// process the job results
processExecutionResults(job.getName(), results);
}
}

/**
* Create a JPPF job that can be submitted for execution.
*
* @param jobName an arbitrary, human-readable name given to the job.
* @return an instance of the {@link org.jppf.client.JPPFJob JPPFJob} class.
* @throws Exception if an error occurs while creating the job or adding
* tasks.
*/
public JPPFJob createJob(final String jobName) throws Exception {
// create a JPPF job
JPPFJob job = new JPPFJob();
// give this job a readable name that we can use to monitor and manage it.
job.setName(jobName);

// add a task to the job.
Task<?> task = job.add(new DBTask());
// provide a user-defined name for the task
task.setId(jobName + " - DB task");

// add more tasks here ...
// there is no guarantee on the order of execution of the tasks,
// however the results are guaranteed to be returned in the same order as the tasks.
return job;
}

/**
* Process the execution results of each submitted task.
*
* @param jobName the name of the job whose results are processed.
* @param results the tasks results after execution on the grid.
*/
public synchronized void processExecutionResults(final String jobName, final List<Task<?>> results) {
// print a results header
System.out.printf("Results for job '%s' :\n", jobName);
// process the results
for (Task<?> task : results) {
String taskName = task.getId();
// if the task execution resulted in an exception
if (task.getThrowable() != null) {
// process the exception here ...
System.out.println(taskName + ", an exception was raised: " + task.getThrowable().getMessage());
} else {
// process the result here ...
System.out.println(taskName + ", execution result: " + task.getResult());
}
}
}

/**
* Ensure that the JPPF client has the desired number of connections.
*
* @param jppfClient the JPPF client which submits the jobs.
* @param numberOfConnections the desired number of connections.
* @throws Exception if any error occurs.
*/
public void ensureNumberOfConnections(final JPPFClient jppfClient, final int numberOfConnections) throws Exception {
// wait until the client has at least one connection pool with at least one available connection
JPPFConnectionPool pool = jppfClient.awaitActiveConnectionPool();

// if the pool doesn't have the expected number of connections, change its size
if (pool.getConnections().size() != numberOfConnections) {
// set the pool size to the desired number of connections
pool.setSize(numberOfConnections);
}

// wait until all desired connections are available (ACTIVE status)
pool.awaitActiveConnections(Operator.AT_LEAST, numberOfConnections);
}
}
April 18, 2017
task_tiny.png 06:05  Task JPPF-499 - Simplify client internal code
lolocohen : Issue closed
April 17, 2017
task_tiny.png 10:06  Task JPPF-499 - Simplify client internal code
lolocohen : Issue created
Currently there is too much complexity in the handling of client connections to the drivers and of their status. In particular, each JPPFClientConnection implementation holds 2 actual connections, both subclasses of AbstractClientConnectionHandler, each with its own status listeners. The main connection status is set either directly or as a combination of the states of the two "sub-connections". In the former case, the sub-connections' status becomes inconsistent with that of the main connection.

Overall, this complexity results in many observed problems in the client, especially when running the automated tests: deadlocks, race conditions, failures of the recovery and failover mechanisms.

What we propose is to remove the code that handles the status in the sub-connections (and thus in AbstractClientConnectionHandler) and keep only one source of status and associated events.

Additionally, the abstract class org.jppf.client.balancer.ChannelWrapper, subclassed as ChannelWrapperLocal and ChannelWrapperRemote, holds an executor field of type ExecutorService, defined as a single-thread executor in both subclasses. Instead of a separate thread pool for each ChannelWrapper, we should make use of the executor held by the JPPFClient instance, adding proper synchronization if needed.
April 07, 2017
bug_report_tiny.png 08:04  Bug report JPPF-497 - 6.0 JMXDriverConnectionWrapper.restartShutdown(5000L, 0L) restarts the Driver
lolocohen : Issue closed
bug_report_tiny.png 08:02  Bug report JPPF-498 - Client reconnect to driver failure
lolocohen : Issue closed
April 05, 2017
bug_report_tiny.png 11:49  Bug report JPPF-498 - Client reconnect to driver failure
steveoh444 : Issue created
We tested this on 2 different machines.

Here is the mixed output of the client and the node:


<[jppf_discovery-1-4 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...>
[client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-1 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-1 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-5 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-5 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-2 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-2 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-3 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-3 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-4 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Attempting connection to the class server at 10.225.120.160:11111
process exited with code 0
Apr 05, 2017 11:29:39 AM GenericClientCommunicatorAdmin close
INFO: java.io.IOException: The connection is not currently established.
java.io.IOException: The connection is not currently established.
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl.checkState(ClientSynchroMessageConnectionImpl.java:514)
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl.sendOneWay(ClientSynchroMessageConnectionImpl.java:217)
at javax.management.remote.generic.GenericConnector.close(GenericConnector.java:292)
at javax.management.remote.generic.GenericConnector.close(GenericConnector.java:265)
at javax.management.remote.generic.GenericClientCommunicatorAdmin.doStop(GenericClientCommunicatorAdmin.java:145)
at com.sun.jmx.remote.opt.internal.ClientCommunicatorAdmin.restart(ClientCommunicatorAdmin.java:238)
at com.sun.jmx.remote.opt.internal.ClientCommunicatorAdmin.gotIOException(ClientCommunicatorAdmin.java:133)
at javax.management.remote.generic.GenericConnector$RequestHandler.execute(GenericConnector.java:372)
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl$RemoteJob.run(ClientSynchroMessageConnectionImpl.java:477)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at 10.225.120.160:11111
[client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
RemoteClassLoaderConnection: Reconnected to the class server
JPPF Node management initialized on port 12001
Attempting connection to the node server at 10.225.120.160:11111
Reconnected to the node server
Node successfully initialized
Apr 05, 2017 11:31:38 AM org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable run
WARNING: java.net.SocketException: Software caused connection abort: socket write error
Apr 05, 2017 11:31:38 AM org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable run
SEVERE: future already removed
java.lang.IllegalStateException: future already removed
at org.jppf.client.balancer.ClientJob.taskCompleted(ClientJob.java:327)
at org.jppf.client.balancer.ClientTaskBundle.taskCompleted(ClientTaskBundle.java:174)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Apr 05, 2017 11:31:38 AM org.jppf.client.TaskServerConnectionHandler init
INFO: [client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:31:38 AM org.jppf.client.TaskServerConnectionHandler init
INFO: [client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server

[client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at FDAC8040.HR.LOCAL:11111
java.lang.IllegalStateException: future already removed
at org.jppf.client.balancer.ClientJob.taskCompleted(ClientJob.java:327)
at org.jppf.client.balancer.ClientTaskBundle.taskCompleted(ClientTaskBundle.java:174)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Truncated. see log file for complete stacktrace
>
[client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server

enhancement_tiny.png 07:59  Enhancement JPPF-492 - Monitoring API: move collapsed state handling out of TopologyDriver class
lolocohen : Issue closed