

Client and administration console configuration

From JPPF 5.1 Documentation



Main Page > Configuration guide > Client and administration console

1 Server discovery in the client

By default, JPPF clients are configured to automatically discover active servers on the network. This mechanism works in the same way as for the nodes, and uses the same configuration properties, except for the discovery timeout:

# Enable or disable automatic discovery of JPPF drivers
jppf.discovery.enabled = true
# UDP multicast group to which drivers broadcast their connection parameters
jppf.discovery.group = 230.0.0.1
# UDP multicast port to which drivers broadcast their connection parameters
jppf.discovery.port = 11111

# IPv4 address inclusion patterns
jppf.discovery.include.ipv4 = 
# IPv4 address exclusion patterns
jppf.discovery.exclude.ipv4 = 
# IPv6 address inclusion patterns
jppf.discovery.include.ipv6 = 
# IPv6 address exclusion patterns
jppf.discovery.exclude.ipv6 = 

A major difference is that, when discovery is enabled, the client never stops attempting to find one or more servers. A client can also connect to multiple servers, and will effectively connect to every server it discovers on the network.

The JPPF client will manage a pool of connections to each discovered server, which can be used for concurrent job submissions. The size of the connection pools is configured with the following property:

# connection pool size for each discovered server; defaults to 1 (single connection)
jppf.pool.size = 5

Each server connection has an assigned name, following the pattern "jppf_discovery-<n>-<p>", where n is the driver number, in order of discovery, and p is the connection number within the pool. For instance, if we define jppf.pool.size = 2, then the first discovered driver will have 2 connections named "jppf_discovery-1-1" and "jppf_discovery-1-2".
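As an illustration, the naming pattern above can be sketched in Java. This is a toy helper written for this example, not part of the JPPF API:

```java
// Toy helper (not part of the JPPF API): builds the name JPPF assigns
// to a discovered server connection, per the "jppf_discovery-<n>-<p>" pattern.
public class DiscoveryConnectionName {
    static String connectionName(int driverNumber, int connectionNumber) {
        return "jppf_discovery-" + driverNumber + "-" + connectionNumber;
    }

    public static void main(String[] args) {
        // with jppf.pool.size = 2, the first discovered driver yields:
        System.out.println(connectionName(1, 1)); // jppf_discovery-1-1
        System.out.println(connectionName(1, 2)); // jppf_discovery-1-2
    }
}
```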

Each connection pool has an associated pool of JMX connections, whose size is independently configured as follows:

# JMX connection pool size, defaults to 1
jppf.jmx.pool.size = 1

The inclusion and exclusion pattern definitions work in exactly the same way as for the node configuration. Please refer to the section Node configuration » Server discovery for more details.

It is also possible to specify the priority of all discovered server connections, so that they will easily fit into a failover strategy defined via the manual network configuration:

# priority assigned to all auto-discovered connections; defaults to 0
# this is equivalent to "<driver_name>.jppf.priority" in manual network configuration
jppf.discovery.priority = 10

Additionally, you can specify the behavior to adopt when a driver broadcasts its connection information for multiple network interfaces. In this case, the client may end up creating multiple connections to the same driver, each with a different IP address. This behavior is controlled by the following property:

# enable or disable multiple network interfaces for each driver
jppf.pool.acceptMultipleInterfaces = false

This property is set to false by default, meaning that only the first discovered interface for a driver will be taken into account.

2 Manual network configuration

As we have seen, a JPPF client can connect to multiple drivers. The first step is thus to name these drivers:

# space-separated list of drivers this client may connect to
# defaults to “default-driver”
jppf.drivers = driver-1 driver-2

Then for each driver, we will define the connection and behavior attributes, including:

Connection to the JPPF server

# host name, or ip address, of the host the JPPF driver is running on
driver-1.jppf.server.host = localhost
# port number the server is listening to for connections
driver-1.jppf.server.port = 11111

Here, driver-1.jppf.server.port must have the same value as the corresponding property jppf.server.port defined in the server configuration.

Connection pool size

# size of the pool of connections to this driver
driver-1.jppf.pool.size = 5

This allows the creation of a connection pool with a specific size for each server we connect to, whereas all pools would have the same size when server discovery is enabled.

As for automatic server discovery, each connection pool has an associated pool of JMX connections, whose size is independently configured as follows:

# JMX connection pool size, defaults to 1
driver-1.jppf.jmx.pool.size = 1


Connection priority

# assigned driver priority
driver-1.jppf.priority = 10

The priority assigned to a server connection enables the definition of a failover strategy for the client. In effect, the client will always use the connections that have the highest priority. If the connection with the server is interrupted, then the client will use the connections with the next highest priority among the remaining accessible server connection pools.
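For instance, a two-driver failover setup could look like the following (the host names are placeholders):

# primary driver, used as long as it is reachable
driver-1.jppf.server.host = host_a
driver-1.jppf.server.port = 11111
driver-1.jppf.priority = 10
# backup driver, used when driver-1 becomes unreachable
driver-2.jppf.server.host = host_b
driver-2.jppf.server.port = 11111
driver-2.jppf.priority = 5

jppf.drivers = driver-1 driver-2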

Connection to the management server

# management port for this driver; defaults to 11198
driver-1.jppf.management.port = 11198

This will allow direct access to the driver's JMX server using the client APIs, unless management is disabled in the client configuration.

Note that this property is optional, since the JPPF client fetches it from the JPPF driver during the communication handshake. It is thus recommended to leave it unspecified.

3 Using manual configuration and server discovery together

It is also possible to use the manual server configuration simultaneously with the server discovery, by adding the special driver name "jppf_discovery" to the list of manually configured drivers:

# enable discovery
jppf.discovery.enabled = true
# specify both discovered and manually configured drivers
jppf.drivers = jppf_discovery driver-1
# host for this driver
driver-1.jppf.server.host = my_host
# port for this driver
driver-1.jppf.server.port = 11111

4 Socket connections idle timeout

In some environments, a firewall may be configured to automatically close socket connections that have been idle for more than a specified time. This can lead to a situation where the server is unaware that a client was disconnected, causing one or more jobs to never return. To remedy that situation, it is possible to configure an idle timeout on the client side of the connection, so that the connection is closed cleanly and grid operations can continue unhindered. This is done via the following property:

jppf.socket.max-idle = timeout_in_seconds

Any timeout value less than 10 seconds is treated as no timeout. The default value is -1 (no timeout).
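The "less than 10 seconds means no timeout" rule can be sketched as follows. This is an illustrative helper, not JPPF's actual implementation:

```java
// Illustrative sketch (not JPPF's actual code): normalizing a configured
// jppf.socket.max-idle value, where anything under 10 seconds means "no timeout".
public class IdleTimeout {
    static final int NO_TIMEOUT = -1;

    static int effectiveMaxIdle(int configuredSeconds) {
        // values under 10 seconds are treated as "no timeout"
        return (configuredSeconds < 10) ? NO_TIMEOUT : configuredSeconds;
    }

    public static void main(String[] args) {
        System.out.println(effectiveMaxIdle(5));  // under the threshold: -1
        System.out.println(effectiveMaxIdle(30)); // valid timeout: 30
    }
}
```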

5 Local and remote execution

It is possible for a client to execute jobs locally (i.e. in the client JVM) rather than submitting them to a server. This feature allows taking advantage of multiple CPUs or cores on the client machine, while using the exact same APIs as for distributed remote execution. It can also be used for local testing and debugging before performing the "real-life" execution of a job.

Local execution is disabled by default. To enable it, set the following configuration property:

# enable local job execution; defaults to false
jppf.local.execution.enabled = true

Local execution uses a pool of threads, whose size is configured as follows:

# number of threads to use for local execution
# the default value is the number of CPUs or cores available to the JVM
jppf.local.execution.threads = 4

A priority can be assigned to the local executor, so that it will easily fit into a failover strategy defined via the manual network configuration:

# priority assigned to the local executor; defaults to 0
# this is equivalent to "<driver_name>.jppf.priority" in manual network configuration
jppf.local.execution.priority = 10

It is also possible to mix local and remote execution. This will happen whenever the client is connected to a server and has local execution enabled. In this case, the JPPF client uses an adaptive load-balancing algorithm to balance the workload between local execution and node-side execution.

Finally, the JPPF client also provides the ability to disable remote execution. This can be useful if you want to test the execution of jobs purely locally, even if the server discovery is enabled or the server connection properties would otherwise point to a live JPPF server. To achieve this, simply configure the following:

# enable remote job execution; defaults to true
jppf.remote.execution.enabled = false
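Putting the local and remote execution properties together, a configuration for purely local testing could be:

# execute jobs in the client JVM only
jppf.local.execution.enabled = true
jppf.remote.execution.enabled = false
# optionally cap the number of local execution threads
jppf.local.execution.threads = 2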

6 Load-balancing in the client

The JPPF client allows load balancing between local and remote execution. The load balancing configuration is exactly the same as for the driver, which means it uses exactly the same configuration properties, algorithms, parameters, etc. Please refer to the driver load-balancing configuration section for the configuration details. The default configuration, if none is provided, is equivalent to the following:

# name of the load balancing algorithm
jppf.load.balancing.algorithm = manual
# name of the set of parameter values (aka profile) to use for the algorithm
jppf.load.balancing.profile = jppf
# "jppf" profile
jppf.load.balancing.profile.jppf.size = 1000000

Also note that load balancing is active even if only remote execution is available. This has an impact on how the tasks within a job are sent to the server. For instance, if the "manual" algorithm is configured with a size of 1, the tasks in a job will be sent to the server one at a time.
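For example, to send the tasks in batches of up to 10 instead of one at a time:

# "manual" algorithm with a fixed batch size
jppf.load.balancing.algorithm = manual
jppf.load.balancing.profile = jppf
# each batch sent to the server contains up to 10 tasks
jppf.load.balancing.profile.jppf.size = 10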

7 Resolution of the drivers IP addresses

You can switch the DNS name resolution on or off for the drivers a client connects to, with the following property:

# whether to resolve the drivers' ip addresses into host names
# defaults to true (resolve the addresses)
org.jppf.resolve.addresses = true

8 UI refresh intervals in the administration tool

You may change the values of these properties if the graphical administration and monitoring tool is having trouble displaying all the information received from the nodes and servers. This may happen when the number of nodes and servers becomes large and the UI cannot cope. Increasing the refresh intervals (or decreasing the frequency of the updates) in the UI resolves such situations. The available configuration properties are defined as follows:

# refresh interval for the statistics panel in millis; defaults to 1000
# this is the interval between 2 successive stats requests to a driver via JMX
jppf.admin.refresh.interval.stats = 1000

# refresh interval in millis for the topology panels: tree view and graph views
# this is the interval between 2 successive runs of the task that refreshes the
# topology via JMX requests; defaults to 1000
jppf.admin.refresh.interval.topology = 1000

# refresh interval for the JVM health panel in millis; defaults to 1000
# this is the interval between 2 successive runs of the task that refreshes
# the JVM health via JMX requests
jppf.admin.refresh.interval.health = 1000

# UI refresh interval for the job data panel in ms. Its meaning depends on the
# publish mode specified with property "jppf.gui.publish.mode" (see below):
# - in "immediate_notifications" mode, this is not used
# - in "deferred_notifications" mode, this is the interval between 2 publications
#   of updates as job monitoring events
# - in "polling" mode this is the interval between 2 polls of each driver
jppf.gui.publish.period = 1000

# UI refresh mode for the job data panel. The possible values are:
# - polling: the job data is polled at regular intervals and updates to the view are
#   computed as the differences with the previous poll. This mode generates less network
#   traffic than the other modes, but some updates, possibly entire jobs, may be missed
# - deferred_notifications: updates are received as jmx notifications and published at
#   regular intervals, possibly aggregated in the interval. This mode provides a more
#   accurate view of the jobs life cycle, at the cost of increased network traffic
# - immediate_notifications: updates are received as jmx notifications and are all
#   published immediately as job monitoring events, which are pushed to the UI. In this
#   mode, no event is missed, however this causes higher cpu and memory consumption
# The default value is immediate_notifications
jppf.gui.publish.mode = immediate_notifications


JPPF Copyright © 2005-2020