The desktop console uses a dialog with a list of checkboxes, one for each column, to select the visible columns in the topology, JVM health monitoring and jobs views. On the other hand, it uses a pick list to select the visible statistics in the statistics view. The web console uses pick lists for all of these views.
A pick list is more compact and also provides a way to reorder the columns. We propose to use one for the desktop console's tree table views.
What we propose here is a new built-in [https://www.jppf.org/doc/6.0/index.php?title=Defining_the_node_connection_strategy '''node connection strategy'''] which reads a list of drivers from a single configuration property. We propose that the value of the property be in a format similar to the built-in [https://www.jppf.org/doc/6.0/index.php?title=Defining_the_node_connection_strategy#Built-in_strategies '''CSV file-based strategy'''], except that a pipe ('|') character separates driver definitions instead of newlines. For example:
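As a sketch of what such a property could look like, reusing the CSV strategy's field layout (secure flag, host, port, recovery flag) with pipe-separated definitions; the property name and host names below are illustrative:

```
# Illustrative property name; each definition follows the CSV strategy's
# field order: secure, host, port, recovery_enabled
jppf.server.connection.strategy.definitions = false, my.host1.org, 11111, false | true, my.host2.org, 11443, true
```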
Currently, we use Visual Studio and associated SDK tools to build the [https://www.jppf.org/doc/6.0/index.php?title=.Net_Bridge '''.Net bridge''']. We propose to try switching to the open-source [https://dotnet.github.io/ '''.Net Core framework'''] instead, and to explore the possibility of porting it to Linux and Mac.
Excellent feedback, suggestions and comments from a long-time JPPF user:
# It is not obvious that JPPF benefits from multiple core processors. We might want to make that more obvious, as it might help attract more users and help people decide what kind of cloud service or physical servers to use. People might be wasting time multi-threading their programs when it would be better to use JPPF. There was a time when Rackspace had only single-core servers; JPPF would have been fine for that. Linode has many cores, and using them with JPPF is not wasteful. Moving from one to the other requires no programming or configuration changes. To me this was a non-obvious benefit of JPPF.
# It isn't obvious either that JPPF is probably better suited to many small tasks than to a few long-running ones. That is, 64 million tasks of 10 seconds each are better than 64 tasks of 10 million seconds each. Knowing this up front might save people some rewriting effort.
# The documentation is kind of discouraging and lengthy. This results in readers/users being unwilling or unable to read the whole thing because they cannot grasp the relevance of much of it. I haven't looked at the documentation in recent years so perhaps my comment is no longer relevant. Yes it is!
# The following use case is typical: homogeneous tasks, each lasting many hours, one task per processor core, no internet and no database access, since everything required is in the serialized object passed to each core. Because the case is typical, users may think they can set up JPPF by themselves, but in the end they actually need help found in the forums. The allocation decisions JPPF makes (i.e. how the tasks are distributed to the nodes) are often very difficult to understand; the "nodethreads" algorithm in particular. A suggestion is to match each typical case with a brief cheat sheet.
# It is not obvious where to enable assertions. It was observed that this needed to be done in two places: the invocation of the application and the node's configuration file. It is not obvious that it is unnecessary to enable assertions in the invocation of the driver, the driver's configuration file, or the invocation of the node. This would be good to put on a cheat sheet.
# When an application needs more memory, it is not obvious that it is unnecessary to increase it for the driver launcher, the node launcher or the driver configuration; rather, it must be done for the application and in the node's properties. This would be good to put on a cheat sheet.
# For some applications, virtual memory doesn't solve a problem; it creates one. VM is probably helpful mainly in applications whose memory needs vary widely over time, whereas a compute-intensive application might need a nearly constant amount of memory. If swapping happens at all, not only will it slow progress, but worse, it will probably persist for hours. It was found best to disable swapping and discover as soon as possible whether more physical memory is needed. While this is not a JPPF-specific problem, it would help users to know it as early as possible in the development life cycle.
# Files that contain the "jppf.jvm.options" property are probably the most relevant to the user, so it would be good to point out that some configuration files are unimportant, and that while the configuration files are long, only a few lines matter to some (or most) users. It can be useful to see suggestions like: "do not waste time looking at log4j-node.properties", because you will not enable assertions there and you will not increase memory there.
# It is a minor point, but sometimes the word "server" appears and it is confusing, because everything is a server. The line "#jppf.server.host=localhost" is confusing: if readers think a node is a "server" because the driver is a "master", they will be even more confused, thinking this property is supposed to point to a node. The default value "localhost" is also confusing, because most of the time it will not be local. Perhaps it could be "#jppf.driver.host=IPADDRESS".
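As a sketch of what a cheat sheet entry for the assertions and memory items above could contain: both concerns reduce to the application's own JVM invocation plus the node configuration's documented "jppf.jvm.options" property (file name, heap size and class name below are illustrative):

```
# In the node's configuration file (e.g. jppf-node.properties):
# enable assertions and raise the heap for the JVM that executes the tasks
jppf.jvm.options = -ea -Xmx4g

# The application JVM is launched with the same standard flags:
#   java -ea -Xmx4g -cp ... my.app.Main
# No change is needed in the driver's launcher or configuration files.
```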
Currently, the [http://www.jppf.org/doc/6.0/index.php?title=Configuring_a_JPPF_server#Recovery_from_hardware_failures_of_remote_nodes recovery mechanism] uses a connection to a separate driver port. We propose to rewrite it so that it uses the non-blocking NIO-based communications, which will allow it to use the same port as node and client connections. It will then be a matter of enabling it or not.
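Assuming the rewritten mechanism keeps a simple on/off switch, enabling it might then reduce to a single property on the driver and nodes; the property name below follows the existing recovery configuration's naming convention and is shown as an assumption:

```
# Driver and node configuration: turn the recovery mechanism on or off.
# With the NIO rewrite, no separate recovery server port would be needed,
# since recovery traffic would share the regular node/client port.
jppf.recovery.enabled = true
```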