JPPF Issue Tracker
JPPF (jppf)
April 19, 2017
11:14  Bug report JPPF-500 - Node Persistent Data Casting error
steveoh444 : Issue created
Referring to the snippet in [http://www.jppf.org/forums/index.php?topic=748.0], I made a task which uses [https://github.com/brettwooldridge/HikariCP HikariCP] as a connection pool.
When I start the client code the first time, everything is fine. But when I start it again, I get the following output:


cd C:\Users\swende01\Documents\NetBeansProjects\JPPFDBm; "JAVA_HOME=C:\\Program Files\\Java\\jdk1.8.0_121" cmd /c "\"\"C:\\Program Files\\NetBeans 8.2\\java\\maven\\bin\\mvn.bat\" -Dexec.args=\"-Xmx64m -Dlog4j.configuration=log4j.properties -Djppf.config=jppf.properties -Djava.util.logging.config.file=config/logging.properties -classpath %classpath de.itout.jppf.test.jppfdbm.Runner\" -Dexec.executable=\"C:\\Program Files\\Java\\jdk1.8.0_121\\bin\\java.exe\" -Dmaven.ext.class.path=\"C:\\Program Files\\NetBeans 8.2\\java\\maven-nblib\\netbeans-eventspy.jar\" -Dfile.encoding=UTF-8 -Djava.net.useSystemProxies=true process-classes org.codehaus.mojo:exec-maven-plugin:1.2.1:exec\""
Scanning for projects...

------------------------------------------------------------------------
Building JPPFDBm 1.0-SNAPSHOT
------------------------------------------------------------------------

--- maven-resources-plugin:2.5:resources (default-resources) @ JPPFDBm ---
[debug] execute contextualize
Using 'UTF-8' encoding to copy filtered resources.
skip non existing resourceDirectory C:\Users\swende01\Documents\NetBeansProjects\JPPFDBm\src\main\resources

--- maven-compiler-plugin:2.3.2:compile (default-compile) @ JPPFDBm ---
Nothing to compile - all classes are up to date

--- exec-maven-plugin:1.2.1:exec (default-cli) @ JPPFDBm ---
log4j:WARN No appenders could be found for logger (org.jppf.utils.JPPFConfiguration).
log4j:WARN Please initialize the log4j system properly.
client process id: 1088, uuid: BDB1A448-26B2-738A-C31A-AF1B490F1FFE
[client: jppf_discovery-1-1 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-1 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-1 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-2 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-2 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-2 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-3 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-4 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-4 - TasksServer] Reconnected to the JPPF task server
[client: jppf_discovery-1-5 - ClassServer] Attempting connection to the class server at localhost:11111
[client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-5 - TasksServer] Attempting connection to the task server at localhost:11111
[client: jppf_discovery-1-5 - TasksServer] Reconnected to the JPPF task server
Doing something while the jobs are executing ...
Results for job 'Template concurrent job 1' :
Template concurrent job 1 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 2' :
Template concurrent job 2 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 3' :
Template concurrent job 3 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 4' :
Template concurrent job 4 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
Results for job 'Template concurrent job 5' :
Template concurrent job 5 - DB task, an exception was raised: com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource
------------------------------------------------------------------------
BUILD SUCCESS
------------------------------------------------------------------------
Total time: 10.582s
Finished at: Wed Apr 19 10:02:18 CEST 2017
Final Memory: 5M/123M
------------------------------------------------------------------------


Perhaps the node's class loader receives the bytecode a second time and cannot cast the "old" stored object to the newly loaded class?

The main goal is to set up a connection pool on a node and let the tasks on that node work against the database on behalf of different clients.
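For what it's worth, the error message "com.zaxxer.hikari.HikariDataSource cannot be cast to com.zaxxer.hikari.HikariDataSource" is the classic symptom of the same class being loaded by two different class loaders: the data source persisted under the first run's class loader cannot be cast to the "same" class loaded by a fresh class loader on the next run. A minimal, self-contained demonstration of that failure mode (the jar path and class name below are placeholders, not JPPF code):

import java.net.URL;
import java.net.URLClassLoader;

public class TwoLoadersDemo {
  public static void main(String[] args) throws Exception {
    // any jar containing the class to load twice; the path is a placeholder
    URL[] cp = { new URL("file:some-library.jar") };
    // two independent loaders, neither delegating to the other for this jar
    try (URLClassLoader l1 = new URLClassLoader(cp, null);
         URLClassLoader l2 = new URLClassLoader(cp, null)) {
      Class<?> c1 = l1.loadClass("some.library.SomeClass");
      Class<?> c2 = l2.loadClass("some.library.SomeClass");
      // same fully qualified name, yet two distinct runtime classes
      System.out.println(c1 == c2);                // false
      System.out.println(c2.isAssignableFrom(c1)); // false -> a cast would fail
    }
  }
}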


=== The Database ===


CREATE TABLE JPPFTEST (
  IDENT NUMERIC(8,0) NOT NULL,
  RES VARCHAR(255),
  CONSTRAINT PK_JPPFTEST PRIMARY KEY (IDENT)
);


=== The code of the Task ===


package de.itout.jppf.test.jppfdbm;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.io.UnsupportedEncodingException;
import java.net.InetAddress;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import org.jppf.node.protocol.AbstractTask;
import org.jppf.node.NodeRunner;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 *
 * @author swende01
 */
public class DBTask extends AbstractTask<String> {

  @Override
  public void run() {
    System.out.println("running task:" + this.getId());
    System.out.println("calling for datasource");
    HikariDataSource ds = getDataSource();
    System.out.println("datasource returned");
    if (ds == null) {
      System.out.println("datasource==null");
    }
    try {
      Connection conn = ds.getConnection();
      System.out.println("connection created");
      if (conn == null) {
        System.out.println("conn==null");
      }
      String res = calculateHash();
      Statement stmt = conn.createStatement();
      if (stmt == null) {
        System.out.println("stmt==null");
      }
      String host = InetAddress.getLocalHost().getHostName();
      System.out.println("host:" + host);
      String q = "INSERT INTO JPPFTEST VALUES (" + getNextID(conn) + ",'" + res + "')";
      System.out.println(q);
      stmt.executeUpdate(q);
      stmt.close();
      stmt = null;
      conn.close();
      conn = null;
      setResult(res);
    } catch (Exception ex) {
      System.out.println(ex);
    }
  }

  private int getNextID(Connection con) {
    Statement stmt = null;
    String query = "SELECT MAX(IDENT)+1 FROM JPPFTEST";
    try {
      stmt = con.createStatement();
      ResultSet rs = stmt.executeQuery(query);
      while (rs.next()) {
        int id = rs.getInt(1);
        return id;
      }
    } catch (SQLException e) {
      System.err.println(e);
    } finally {
      if (stmt != null) {
        try {
          stmt.close();
        } catch (SQLException ex) {
          System.err.println(ex);
        }
      }
    }
    return -1;
  }

  private String calculateHash() {
    System.out.println("Generate Random Numbers...");
    double a = Math.random();
    double b = Math.random();
    System.out.println("Random Numbers are A=" + a + " and B=" + b);
    MessageDigest md;
    String result = "";
    try {
      md = MessageDigest.getInstance("SHA-256");
      String text = a + "" + b + "there is salt in the sea";
      System.out.println("Hash the two numbers with a salt [" + text + "]");
      md.update(text.getBytes("UTF-8"));
      byte[] digest = md.digest();
      result = String.format("%064x", new java.math.BigInteger(1, digest));
      System.out.println("Hashed text is [" + result + "]");
    } catch (NoSuchAlgorithmException | UnsupportedEncodingException ex) {
      System.err.println(ex);
    }
    return result;
  }

  protected static HikariDataSource setUpDataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl("SOMEJDBCURL");
    config.setUsername("user");
    config.setPassword("pw");
    config.addDataSourceProperty("cachePrepStmts", "true");
    config.addDataSourceProperty("prepStmtCacheSize", "250");
    config.addDataSourceProperty("prepStmtCacheSqlLimit", "2048");

    HikariDataSource dataSource = new HikariDataSource(config);
    NodeRunner.setPersistentData("datasource", dataSource);
    return dataSource;
  }

  public static synchronized HikariDataSource getDataSource() {
    System.out.println("returning dataSource");
    HikariDataSource ds = (HikariDataSource) NodeRunner.getPersistentData("datasource");
    if (ds == null) {
      System.out.println("setting up dataSource");
      ds = setUpDataSource();
    }
    return ds;
  }
}


=== The Client Runner Class ===


/*
 * To change this license header, choose License Headers in Project Properties.
 * To change this template file, choose Tools | Templates
 * and open the template in the editor.
 */
package de.itout.jppf.test.jppfdbm;

import java.util.ArrayList;
import java.util.List;
import org.jppf.client.*;
import org.jppf.node.protocol.Task;
import org.jppf.utils.Operator;

/**
 *
 * @author swende01
 */
public class Runner {

  public static void main(String[] args) {
    // create the JPPFClient. This constructor call causes JPPF to read the configuration file
    // and connect with one or multiple JPPF drivers.
    try (JPPFClient jppfClient = new JPPFClient()) {
      // create a runner instance.
      Runner runner = new Runner();

      // create and execute a blocking job
      // runner.executeBlockingJob(jppfClient);

      // create and execute a non-blocking job
      // runner.executeNonBlockingJob(jppfClient);

      // create and execute 5 jobs concurrently
      runner.executeMultipleConcurrentJobs(jppfClient, 5);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  public void executeMultipleConcurrentJobs(final JPPFClient jppfClient, final int numberOfJobs) throws Exception {
    // ensure that the client connection pool has as many connections
    // as the number of jobs to execute
    ensureNumberOfConnections(jppfClient, numberOfJobs);

    // this list will hold all the jobs submitted for execution,
    // so we can later collect and process their results
    final List<JPPFJob> jobList = new ArrayList<>(numberOfJobs);

    // create and submit all the jobs
    for (int i = 1; i <= numberOfJobs; i++) {
      // create a job with a distinct name
      JPPFJob job = createJob("Template concurrent job " + i);

      // set the job in non-blocking (or asynchronous) mode.
      job.setBlocking(false);

      // submit the job for execution, without blocking the current thread
      jppfClient.submitJob(job);

      // add this job to the list
      jobList.add(job);
    }

    // the non-blocking jobs are submitted asynchronously, we can do anything else in the meantime
    System.out.println("Doing something while the jobs are executing ...");

    // wait until the jobs are finished and process their results.
    for (JPPFJob job : jobList) {
      // wait if necessary for the job to complete and collect its results
      List<Task<?>> results = job.awaitResults();

      // process the job results
      processExecutionResults(job.getName(), results);
    }
  }

  /**
   * Create a JPPF job that can be submitted for execution.
   *
   * @param jobName an arbitrary, human-readable name given to the job.
   * @return an instance of the {@link org.jppf.client.JPPFJob JPPFJob} class.
   * @throws Exception if an error occurs while creating the job or adding tasks.
   */
  public JPPFJob createJob(final String jobName) throws Exception {
    // create a JPPF job
    JPPFJob job = new JPPFJob();
    // give this job a readable name that we can use to monitor and manage it.
    job.setName(jobName);

    // add a task to the job.
    Task<?> task = job.add(new DBTask());
    // provide a user-defined name for the task
    task.setId(jobName + " - DB task");

    // add more tasks here ...
    // there is no guarantee on the order of execution of the tasks,
    // however the results are guaranteed to be returned in the same order as the tasks.
    return job;
  }

  /**
   * Process the execution results of each submitted task.
   *
   * @param jobName the name of the job whose results are processed.
   * @param results the task results after execution on the grid.
   */
  public synchronized void processExecutionResults(final String jobName, final List<Task<?>> results) {
    // print a results header
    System.out.printf("Results for job '%s' :\n", jobName);
    // process the results
    for (Task<?> task : results) {
      String taskName = task.getId();
      // if the task execution resulted in an exception
      if (task.getThrowable() != null) {
        // process the exception here ...
        System.out.println(taskName + ", an exception was raised: " + task.getThrowable().getMessage());
      } else {
        // process the result here ...
        System.out.println(taskName + ", execution result: " + task.getResult());
      }
    }
  }

  /**
   * Ensure that the JPPF client has the desired number of connections.
   *
   * @param jppfClient the JPPF client which submits the jobs.
   * @param numberOfConnections the desired number of connections.
   * @throws Exception if any error occurs.
   */
  public void ensureNumberOfConnections(final JPPFClient jppfClient, final int numberOfConnections) throws Exception {
    // wait until the client has at least one connection pool with at least one available connection
    JPPFConnectionPool pool = jppfClient.awaitActiveConnectionPool();

    // if the pool doesn't have the expected number of connections, change its size
    if (pool.getConnections().size() != numberOfConnections) {
      // set the pool size to the desired number of connections
      pool.setSize(numberOfConnections);
    }

    // wait until all desired connections are available (ACTIVE status)
    pool.awaitActiveConnections(Operator.AT_LEAST, numberOfConnections);
  }
}
April 18, 2017
06:05  Task JPPF-499 - Simplify client internal code
lolocohen : Issue closed
April 17, 2017
10:06  Task JPPF-499 - Simplify client internal code
lolocohen : Issue created
Currently there is too much complexity in the handling of client connections to the drivers and their status. In particular, each JPPFClientConnection implementation holds 2 actual connections, both subclasses of AbstractClientConnectionHandler, each with its own status listeners. The main connection status is set either directly or as a combination of the states of the two "sub-connections". In the former case, the sub-connections' status becomes inconsistent with that of the main connection.

Overall, this complexity results in many observed problems in the client, especially when running the automated tests: deadlocks, race conditions, failures of the recovery and failover mechanisms.

What we propose is to remove the code that handles the status in the sub-connections (and thus in AbstractClientConnectionHandler) and keep only one source of status and associated events.

Additionally, the abstract class org.jppf.client.balancer.ChannelWrapper, subclassed as ChannelWrapperLocal and ChannelWrapperRemote, holds an executor field of type ExecutorService, defined as a single-thread executor in both subclasses. Instead of a separate thread pool for each ChannelWrapper, we should make use of the executor held by the JPPFClient instance, and add proper synchronization if needed.
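A minimal sketch of the proposed executor change; the class shape below is illustrative, not the actual JPPF internals:

import java.util.concurrent.ExecutorService;

// Illustrative only: instead of each channel wrapper owning its own
// single-thread executor, all wrappers share the executor owned by the client.
abstract class ChannelWrapperSketch {
  private final ExecutorService clientExecutor; // shared, held by the JPPFClient

  ChannelWrapperSketch(final ExecutorService clientExecutor) {
    this.clientExecutor = clientExecutor;
  }

  void submit(final Runnable jobDispatch) {
    // synchronization around shared state would be added here if needed
    clientExecutor.execute(jobDispatch);
  }
}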
April 07, 2017
08:04  Bug report JPPF-497 - 6.0 JMXDriverConnectionWrapper.restartShutdown(5000L, 0L) restarts the Driver
lolocohen : Issue closed
08:02  Bug report JPPF-498 - Client reconnect to driver failure
lolocohen : Issue closed
April 05, 2017
11:49  Bug report JPPF-498 - Client reconnect to driver failure
steveoh444 : Issue created
We tested this on two different machines.

Here is the mixed output of the client and the node:


<[jppf_discovery-1-4 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...>
[client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-1 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-1 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-5 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-5 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-2 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-2 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-3 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-3 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl run
WARNING: [jppf_discovery-1-4 - ClassServer] caught java.io.EOFException: could only read 0 bytes out of 4, will re-initialise ...
Apr 05, 2017 11:29:36 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-4 - ClassServer] Attempting connection to the class server at FDAC8040.HR.LOCAL:11111
Attempting connection to the class server at 10.225.120.160:11111
process exited with code 0
Apr 05, 2017 11:29:39 AM GenericClientCommunicatorAdmin close
INFO: java.io.IOException: The connection is not currently established.
java.io.IOException: The connection is not currently established.
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl.checkState(ClientSynchroMessageConnectionImpl.java:514)
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl.sendOneWay(ClientSynchroMessageConnectionImpl.java:217)
at javax.management.remote.generic.GenericConnector.close(GenericConnector.java:292)
at javax.management.remote.generic.GenericConnector.close(GenericConnector.java:265)
at javax.management.remote.generic.GenericClientCommunicatorAdmin.doStop(GenericClientCommunicatorAdmin.java:145)
at com.sun.jmx.remote.opt.internal.ClientCommunicatorAdmin.restart(ClientCommunicatorAdmin.java:238)
at com.sun.jmx.remote.opt.internal.ClientCommunicatorAdmin.gotIOException(ClientCommunicatorAdmin.java:133)
at javax.management.remote.generic.GenericConnector$RequestHandler.execute(GenericConnector.java:372)
at com.sun.jmx.remote.generic.ClientSynchroMessageConnectionImpl$RemoteJob.run(ClientSynchroMessageConnectionImpl.java:477)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at localhost:11111
resetting with stopJmx=true
node process id: 10724, uuid: A647D6AA-9961-27FF-7EB9-E7F7C8B5E311
Attempting connection to the class server at 10.225.120.160:11111
[client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-1 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-4 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-3 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-5 - ClassServer] Reconnected to the class server
Apr 05, 2017 11:30:41 AM org.jppf.client.ClassServerDelegateImpl init
INFO: [client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
[client: jppf_discovery-1-2 - ClassServer] Reconnected to the class server
RemoteClassLoaderConnection: Reconnected to the class server
JPPF Node management initialized on port 12001
Attempting connection to the node server at 10.225.120.160:11111
Reconnected to the node server
Node successfully initialized
Apr 05, 2017 11:31:38 AM org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable run
WARNING: java.net.SocketException: Software caused connection abort: socket write error
Apr 05, 2017 11:31:38 AM org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable run
SEVERE: future already removed
java.lang.IllegalStateException: future already removed
at org.jppf.client.balancer.ClientJob.taskCompleted(ClientJob.java:327)
at org.jppf.client.balancer.ClientTaskBundle.taskCompleted(ClientTaskBundle.java:174)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Apr 05, 2017 11:31:38 AM org.jppf.client.TaskServerConnectionHandler init
INFO: [client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at FDAC8040.HR.LOCAL:11111
Apr 05, 2017 11:31:38 AM org.jppf.client.TaskServerConnectionHandler init
INFO: [client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server

[client: jppf_discovery-1-3 - TasksServer] Attempting connection to the task server at FDAC8040.HR.LOCAL:11111
java.lang.IllegalStateException: future already removed
at org.jppf.client.balancer.ClientJob.taskCompleted(ClientJob.java:327)
at org.jppf.client.balancer.ClientTaskBundle.taskCompleted(ClientTaskBundle.java:174)
at org.jppf.client.balancer.ChannelWrapperRemote$RemoteRunnable.run(ChannelWrapperRemote.java:250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
Truncated. see log file for complete stacktrace
>
[client: jppf_discovery-1-3 - TasksServer] Reconnected to the JPPF task server

07:59  Enhancement JPPF-492 - Monitoring API: move collapsed state handling out of TopologyDriver class
lolocohen : Issue closed
April 04, 2017
17:28  Bug report JPPF-497 - 6.0 JMXDriverConnectionWrapper.restartShutdown(5000L, 0L) restarts the Driver
steveoh444 : Issue created
In my test with the nightly build from 04.04.2017, restartShutdown on the driver restarts the driver instead of shutting it down for good.

The Admin UI shutdown works.

The exit code is 2, which means the restart flag of the org.jppf.server.ShutdownRestartTask that is triggered must be set to false by the MBean.

I did not test other releases.

Thanks in advance

Stefan Wendelmann
April 02, 2017
22:00 JPPF 5.2.6
New version released
21:30 JPPF 5.1.6
New version released
06:59  Enhancement JPPF-494 - Extend the driver's JobTaskListener facility
lolocohen : Issue closed
March 24, 2017
10:20  Bug report JPPF-496 - JCA connector: packaging prevents effective logging in JBoss 7 and Wildfly
lolocohen : Issue closed
07:04  Bug report JPPF-496 - JCA connector: packaging prevents effective logging in JBoss 7 and Wildfly
lolocohen : Issue created
To have JPPF logging working in JBoss 7 and Wildfly deployments of the J2EE connector, the dependency on slf4j must be declared in MANIFEST.MF with the attribute "'''Dependencies: org.slf4j,org.slf4j.impl'''". Also, the slf4j and log4j jars should be removed from the JPPF rar file, since the logging APIs are provided by JBoss as OSGi dependencies.

The J2EE connector build should be modified to reflect this.
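For reference, a minimal sketch of the manifest with that attribute (all other attributes omitted):

Manifest-Version: 1.0
Dependencies: org.slf4j,org.slf4j.impl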

Then, the JBoss/Wildfly logging configuration can be modified, for instance like this:

... existing handlers ...
... existing loggers ...
... root logger (not changed) ...
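Filling in the sections above, such a configuration could declare a dedicated file handler and a logger category for the org.jppf packages, for example in standalone.xml; every name, level and path below is an assumption, not taken from an actual deployment:

<subsystem xmlns="urn:jboss:domain:logging:1.5">
    <!-- ... existing handlers ... -->
    <periodic-rotating-file-handler name="JPPF" autoflush="true">
        <formatter>
            <named-formatter name="PATTERN"/>
        </formatter>
        <file relative-to="jboss.server.log.dir" path="jppf.log"/>
        <suffix value=".yyyy-MM-dd"/>
    </periodic-rotating-file-handler>
    <!-- ... existing loggers ... -->
    <logger category="org.jppf">
        <level name="DEBUG"/>
        <handlers>
            <handler name="JPPF"/>
        </handlers>
    </logger>
    <!-- ... root logger (not changed) ... -->
</subsystem>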

March 23, 2017
07:30  Enhancement JPPF-494 - Extend the driver's JobTaskListener facility
lolocohen : Issue closed
March 19, 2017
20:19  Bug report JPPF-495 - JobListener.jobDispatched() notification is sent too early
lolocohen : Issue closed
20:01  Bug report JPPF-495 - JobListener.jobDispatched() notification is sent too early
lolocohen : Issue created
Reviewing part of the client code, I noticed that the jobDispatched() notification is sent right after the asynchronous task that sends a job's tasks to the driver is submitted. This means that the notification is generally emitted before the tasks are fully sent to the driver, which contradicts the intended semantics of the notification.
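A minimal sketch of the ordering issue and the intended fix, using a plain executor in place of the client internals (all names are illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DispatchOrderSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newSingleThreadExecutor();

    Future<?> send = executor.submit(() -> sendTasksToDriver());
    // wrong: firing here only means the send was scheduled, not completed
    // fireJobDispatched();

    send.get();          // wait until the tasks are fully sent to the driver
    fireJobDispatched(); // correct: notify once the dispatch actually happened
    executor.shutdown();
  }

  static void sendTasksToDriver() { /* serialize and write the tasks ... */ }

  static void fireJobDispatched() { System.out.println("jobDispatched"); }
}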
09:37  Enhancement JPPF-494 - Extend the driver's JobTaskListener facility
lolocohen : Issue created
We propose the following additions to the [http://www.jppf.org/doc/5.2/index.php?title=Receiving_the_status_of_tasks_dispatched_to_or_returned_from_the_nodes '''JobTasksListener'''] plugin:

1) Add a new callback method to the listener, called when tasks results are about to be sent back to the client:
public interface JobTasksListener extends EventListener {
  ...

  /**
   * Called when tasks results are about to be sent back to the client.
   * @param event encapsulates information on the tasks results.
   */
  void resultsReceived(JobTasksEvent event);
}
2) Add the job SLA and metadata to the available job information in the event:
public class JobTasksEvent extends TaskReturnEvent {
  ...

  /**
   * Get the job SLA from this event.
   * @return an instance of {@link JobSLA}.
   */
  public JobSLA getJobSLA()

  /**
   * Get the job metadata from this event.
   * @return an instance of {@link JobMetadata}.
   */
  public JobMetadata getJobMetadata()
}
3) Add the task result to each ServerTaskInformation and enable accessing it as either a stream or a deserialized Task object:
public class ServerTaskInformation implements Serializable {
  ...

  /**
   * Get an input stream of the task's result data, which can be deserialized as a {@link Task}.
   * @return an {@link InputStream}, or {@code null} if no result could be obtained.
   * @throws Exception if any error occurs getting the stream.
   */
  public InputStream getResultAsStream() throws Exception

  /**
   * Deserialize the result into a Task object.
   * @return a {@link Task}, or {@code null} if no result could be obtained.
   * @throws Exception if any error occurs deserializing the result.
   */
  public Task<?> getResultAsTask() throws Exception
}
The combination of 1) and 3) will then allow tasks results to be processed even if the client is disconnected before the job completes, provided ''job.getSLA().setCancelUponClientDisconnect(false)'' was set.
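For illustration, a listener callback could combine 2) and 3) as sketched below; getJobSLA() and getResultAsTask() come from the proposal above, while event.getTasks() is an assumed accessor for the list of ServerTaskInformation objects:

// sketch only: shapes follow the proposal, event.getTasks() is an assumption
public void resultsReceived(final JobTasksEvent event) {
  System.out.println("job SLA priority: " + event.getJobSLA().getPriority());
  for (ServerTaskInformation info : event.getTasks()) {
    try {
      Task<?> task = info.getResultAsTask(); // null if no result could be obtained
      if (task != null) System.out.println("result: " + task.getResult());
    } catch (Exception e) {
      e.printStackTrace(); // handle deserialization errors as appropriate
    }
  }
}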
March 10, 2017
10:00 JPPF 5.2.5
New version released
March 09, 2017
08:06  Feature request JPPF-493 - Parametrized configuration properties
lolocohen : Issue created
Currently the configuration API does not allow easy handling of configuration properties whose names have one or more parameters. For instance, the following properties:
<driver name>.jppf.server.host = localhost
jppf.load.balancing.profile.<profile name>.<property name> = value
We propose to extend the existing configuration API to handle these types of constructs.
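A sketch of what such an API could look like: a property template whose parameters are substituted at lookup time (the class below is illustrative, not the actual JPPF API):

import java.util.Properties;

// Illustrative only: resolve a parametrized property name such as
// "jppf.load.balancing.profile.<profile name>.<property name>".
public class ParametrizedProperty {
  private final String template; // e.g. "jppf.load.balancing.profile.%s.%s"

  public ParametrizedProperty(final String template) {
    this.template = template;
  }

  /** Substitute the parameters into the name and look the property up. */
  public String resolve(final Properties config, final String... params) {
    return config.getProperty(String.format(template, (Object[]) params));
  }
}

For example, new ParametrizedProperty("jppf.load.balancing.profile.%s.%s").resolve(config, "myProfile", "size") would read the property jppf.load.balancing.profile.myProfile.size.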
February 28, 2017
08:52  Feature request JPPF-486 - Removal of JPPFDataTransform and replacement with composite serialization
lolocohen : Issue closed
February 25, 2017
09:50  Enhancement JPPF-444 - Fluent interfaces
lolocohen : Issue closed
February 23, 2017
17:13  Enhancement JPPF-468 - Add connection/executor information to job events on the client side
lolocohen : Issue closed
February 22, 2017
10:04  Enhancement JPPF-492 - Monitoring API: move collapsed state handling out of TopologyDriver class
lolocohen : Issue created
The class TopologyDriver has these 2 methods to handle the collapsed state in a tree or graph representation:
public boolean isCollapsed() { ... }
public void setCollapsed(final boolean collapsed) { ... }
This is a mistake, as TopologyDriver is part of the model, whereas the collapsed state is part of the view. The collapsed state should be moved to another part of the code, maybe into its own class.
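A sketch of one way to hold that state on the view side instead, keyed by the driver's UUID (illustrative, not the actual console code):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// the view keeps its own collapsed flags, leaving the topology model untouched
public class CollapsedStateModel {
  private final Map<String, Boolean> collapsed = new ConcurrentHashMap<>();

  public boolean isCollapsed(final String driverUuid) {
    return collapsed.getOrDefault(driverUuid, false);
  }

  public void setCollapsed(final String driverUuid, final boolean value) {
    collapsed.put(driverUuid, value);
  }
}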

February 21, 2017
08:42  Task JPPF-487 - Drop support of Apache Geronimo in the JCA connector
lolocohen : Issue closed
08:24  Feature request JPPF-23 - Web based administration console
lolocohen : Issue closed
February 19, 2017
08:28  Feature request JPPF-491 - Node statistics
lolocohen : Issue created
The title says it all. In the same way we made statistics available for the servers, we propose to do the same for the nodes, including the possibility to access them remotely via the management/monitoring API, the ability to register statistics listeners, and the ability to define charts in the admin console.
08:24  Enhancement JPPF-490 - Timestamps for statistics updates
lolocohen : Issue created
We propose to add timestamps to all statistics updates, along with a creation time for all statistics in the current JVM. We could express the stat update timestamp as the number of nanoseconds since creation, to get the best available accuracy, especially since many of the intervals between updates are shorter than a millisecond.
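A sketch of the proposed timestamping, where each update records its offset from a creation time captured with System.nanoTime() (the class is illustrative):

// illustrative: nanosecond-resolution update timestamps relative to creation time
public class TimestampedStat {
  private final long createdNanos = System.nanoTime(); // creation time in this JVM
  private volatile long lastUpdateNanos;               // offset of the latest update
  private volatile double value;

  public void update(final double newValue) {
    value = newValue;
    lastUpdateNanos = System.nanoTime() - createdNanos; // nanoseconds since creation
  }

  public long getLastUpdateNanos() {
    return lastUpdateNanos;
  }

  public double getValue() {
    return value;
  }
}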
February 16, 2017
09:01  Bug report JPPF-488 - Priority of client connection pools is not respected
lolocohen : Issue closed
February 11, 2017
08:32  Bug report JPPF-489 - JPPFDriverAdminMBean.nbNodes() returns incorrect value when management is disabled on one or more nodes
lolocohen : Issue closed
February 09, 2017
09:34  Bug report JPPF-489 - JPPFDriverAdminMBean.nbNodes() returns incorrect value when management is disabled on one or more nodes
lolocohen : Issue created
When management is disabled on one or more nodes attached to a server, the server's management API returns an incorrect number of nodes: it reports the number of nodes with a valid management connection instead of all nodes. This is true for the nbIdleNodes() and nbNodes() methods of the JPPFDriverAdminMBean interface.

I've located the problem in the class NodeSelectionHelper, where the selection/filtering methods have a hasWorkingJmxConnection() condition, which is obviously false when management is disabled on a node. This class was first designed as a helper for the node management forwarding feature, then reused for the driver management methods that count nodes, but I forgot to take into account that nodes can have management disabled.
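The shape of the fix this suggests: only apply the JMX condition when the operation actually requires a management connection. The snippet below is hypothetical, not the actual NodeSelectionHelper code:

// hypothetical filtering predicate: counting nodes should not require JMX,
// while forwarding management requests should
boolean accepts(final NodeContext node, final boolean requiresJmx) {
  if (requiresJmx && !node.hasWorkingJmxConnection()) return false;
  return matchesSelector(node); // the selector criteria still apply to all nodes
}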
February 08, 2017
09:00  Bug report JPPF-488 - Priority of client connection pools is not respected
lolocohen : Issue created
When, in the client configuration, 2 or more connection pools are defined with different priorities, jobs are not always sent to the pools with the highest priority.

There are two scenarios in which this happens:
* when jobs are submitted while the client is initializing, it is possible that at this time only connections with a lower priority are established, in which case they are still considered to be at the highest priority
* when all connections of the pool with the highest priority are busy, they are in fact removed from the idle connections map. This map is a sorted multimap whose keys are priorities and whose values are collections of connections to a server. When a connection is selected to execute a job, it is removed from its collection, and when the collection becomes empty, its key is removed from the map, which changes the highest priority found in the map. If more jobs are to be executed, they will therefore be sent to connections which don't have the highest priority as defined in the configuration, as demonstrated in the sketch below.
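A self-contained demonstration of the second scenario, using a TreeMap as the sorted multimap (this mirrors the described data structure, not the actual JPPF classes):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.TreeMap;

public class PriorityPoolDemo {
  public static void main(String[] args) {
    // priority -> idle connections, highest priority first
    TreeMap<Integer, Deque<String>> idle = new TreeMap<>((a, b) -> b - a);
    idle.computeIfAbsent(10, k -> new ArrayDeque<>()).add("conn-A"); // priority 10
    idle.computeIfAbsent(1, k -> new ArrayDeque<>()).add("conn-B");  // priority 1

    // the only high-priority connection is selected and becomes busy
    Deque<String> best = idle.firstEntry().getValue();
    String used = best.poll();
    if (best.isEmpty()) idle.remove(idle.firstKey()); // the priority 10 key disappears

    // the next job now sees priority 1 as the "highest", even though
    // the priority 10 connection still exists (it is just busy)
    System.out.println("using " + used + ", next pool priority: " + idle.firstKey());
  }
}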
February 05, 2017
08:59  Task JPPF-487 - Drop support of Apache Geronimo in the JCA connector
lolocohen : Issue created
From the [http://geronimo.apache.org/ Apache Geronimo] web site, the project appears to be dead: the last release, news item and source code commit all happened more than 3 years ago. There doesn't seem to be any point in supporting this app server anymore.
January 20, 2017
08:51  Feature request JPPF-486 - Removal of JPPFDataTransform and replacement with composite serialization
lolocohen : Issue created
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Transforming_and_encrypting_networked_data data transform feature] is a drain on performance and memory resources since, even when no transform is defined, it still forces us to fully read each serialized object from the network connection before it can be deserialized. In the same way, each object is fully serialized before it is sent through the connection.

Since the same functionality can be accomplished with [http://www.jppf.org/doc/6.0/index.php?title=Composite_serialization composite serialization], we propose to remove the data transformation and replace it with composite serialization. This will allow the code to serialize/deserialize directly to/from the network stream, increasing performance while decreasing memory usage.

This implies updating the current "Network Data Encryption" sample to use composite serialization.
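A minimal sketch of why streaming helps: a composite serialization can wrap the network stream directly (here with java.util.zip compression standing in for any composite layer), instead of transforming a full in-memory copy of the serialized data as the data transform requires:

import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// illustrative composite layer: objects are (de)serialized straight to/from the
// network stream, so no full in-memory copy of the serialized form is needed
public class StreamingCompositeDemo {

  static void write(final OutputStream networkOut, final Object o) throws IOException {
    DeflaterOutputStream deflater = new DeflaterOutputStream(networkOut);
    ObjectOutputStream oos = new ObjectOutputStream(deflater);
    oos.writeObject(o);
    oos.flush();
    deflater.finish(); // flush the compressed trailer without closing the socket stream
  }

  static Object read(final InputStream networkIn) throws Exception {
    ObjectInputStream ois = new ObjectInputStream(new InflaterInputStream(networkIn));
    return ois.readObject();
  }
}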
January 19, 2017
21:10  Bug report JPPF-485 - Number of peer total processing threads is not propagated properly
lolocohen : Issue closed
20:56  Bug report JPPF-485 - Number of peer total processing threads is not propagated properly
lolocohen : Issue created
When a peer driver notifies its remote peer of a change in its number of nodes and/or threads, the remote peer updates the properties of its peer accordingly, but does not notify the associated bundler (i.e. load-balancer).

In the case of the "nodethreads" algorithm, this causes a wrong number of total processing threads to be computed for the peer driver, which in turn impairs the efficiency of load-balancing and the overall performance of the grid.
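A sketch of the missing step, with hypothetical names: when the peer's properties are updated, the change should also be pushed to the load-balancer so the "nodethreads" computation stays correct:

// hypothetical shape of the fix: propagate peer updates to the bundler
void onPeerUpdate(final PeerDriverInfo peer, final int nodes, final int threadsPerNode) {
  peer.setNodes(nodes);                   // what the code already does
  peer.setThreadsPerNode(threadsPerNode);
  // the missing notification: let the "nodethreads" bundler recompute
  // the total processing threads for this peer
  bundler.updateTotalThreads(nodes * threadsPerNode);
}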
January 18, 2017
10:00 JPPF 5.2.4
New version released
January 17, 2017
08:20  Bug report JPPF-484 - Invocation of tasks' onCancel() method is not clearly documented
lolocohen : Issue closed
January 15, 2017
01:42  Bug report JPPF-479 - Task cancelation/timeout problems
lolocohen : Issue closed
January 09, 2017
10:09  Bug report JPPF-479 - Task cancelation/timeout problems
lolocohen : Issue closed
January 07, 2017
13:15 JPPF 5.1.5
A new milestone has been reached