The application is organized as follows: GPU discovers new computers on the network and establishes and maintains a predefined number of connections with other computers. When GPU detects an incoming request, it starts a thread to handle it, or queues the request for later handling if the maximum number of threads has been reached. Once a computing thread is done, GPU sends the result back to the requester and frees that thread from memory.
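To make this flow concrete, the C sketch below shows a dispatcher that starts a new thread for each request while a slot is free and queues further requests until a thread becomes available. It is an illustration only, not GPU's actual implementation: the constants MAX_THREADS and QUEUE_SIZE, the worker body and the textual job format are assumptions made for the example.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define MAX_THREADS 4          /* maximum number of computing threads (assumed) */
#define QUEUE_SIZE  32         /* requests waiting for a free thread (assumed)  */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int  active_threads = 0;
static char *queue[QUEUE_SIZE];
static int  queue_len = 0;

/* A computing thread: handle the job, report the result, then either pick up
 * a queued request or free its slot and exit. */
static void *worker(void *arg)
{
    char *job = arg;
    for (;;) {
        printf("computing: %s\n", job);          /* a plugin call would run here */
        printf("result sent back to requester\n");
        free(job);

        pthread_mutex_lock(&lock);
        if (queue_len > 0) {                     /* a queued request: handle it next */
            job = queue[--queue_len];
            pthread_mutex_unlock(&lock);
        } else {
            active_threads--;                    /* free this thread's slot */
            pthread_mutex_unlock(&lock);
            return NULL;
        }
    }
}

/* Incoming request: start a thread if one is free, otherwise queue it. */
static void dispatch(const char *request)
{
    pthread_mutex_lock(&lock);
    if (active_threads < MAX_THREADS) {
        active_threads++;
        pthread_t t;
        pthread_create(&t, NULL, worker, strdup(request));
        pthread_detach(t);
    } else if (queue_len < QUEUE_SIZE) {
        queue[queue_len++] = strdup(request);    /* handled when a thread frees up */
    }
    pthread_mutex_unlock(&lock);
}

int main(void)
{
    dispatch("3,7,add");                         /* job syntax is made up */
    dispatch("10,2,sub");
    sleep(1);                                    /* crude wait, for the sketch only */
    return 0;
}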
Plugins encapsulate algorithms that answer the incoming request: a brute-force attack on the discrete logarithm is one example [3]; another is the computation of a partial differential equation using the random-walkers approach and the Feynman-Kac formula [1]. Games like chess, where chessboards become leaves of a tree evaluated by a fitness function, could also be suitable for the framework if implemented correctly.
Plugins are libraries of functions loaded at runtime by GPU. On Windows, this dynamic-linking mechanism is provided by DLLs (files with the extension .dll). On Linux, the same mechanism is called shared objects, with the corresponding extension .so. We give here a brief overview of how to write plugins with graphical output.
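As an illustration of this dynamic-link mechanism, the following C sketch loads a plugin library at runtime and resolves a function from it, using LoadLibrary/GetProcAddress on Windows and dlopen/dlsym on Linux. The library names, the exported symbol compute and its signature are assumptions made for the example and are not GPU's actual plugin interface.

#include <stdio.h>

#ifdef _WIN32
#include <windows.h>
typedef HMODULE plugin_handle;
#define PLUGIN_OPEN(path)    LoadLibraryA(path)
#define PLUGIN_SYM(h, name)  ((void *)GetProcAddress((h), (name)))
#define PLUGIN_CLOSE(h)      FreeLibrary(h)
#else
#include <dlfcn.h>
typedef void *plugin_handle;
#define PLUGIN_OPEN(path)    dlopen((path), RTLD_LAZY)
#define PLUGIN_SYM(h, name)  dlsym((h), (name))
#define PLUGIN_CLOSE(h)      dlclose(h)
#endif

/* Assumed entry point: a plugin function taking a textual job description. */
typedef double (*compute_fn)(const char *job);

int main(void)
{
#ifdef _WIN32
    const char *path = "myplugin.dll";      /* hypothetical plugin name */
#else
    const char *path = "./libmyplugin.so";  /* hypothetical plugin name */
#endif
    plugin_handle handle = PLUGIN_OPEN(path);
    if (!handle) {
        fprintf(stderr, "cannot load %s\n", path);
        return 1;
    }

    compute_fn compute = (compute_fn)PLUGIN_SYM(handle, "compute");
    if (!compute) {
        fprintf(stderr, "symbol 'compute' not found\n");
        PLUGIN_CLOSE(handle);
        return 1;
    }

    printf("result: %f\n", compute("3,7,add"));  /* job syntax is made up */
    PLUGIN_CLOSE(handle);
    return 0;
}

On both systems the plugin only needs to export its functions by name so the host can resolve them at load time; this is what makes it possible to drop new algorithm libraries into GPU without recompiling the application.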
Frontends are a complement to plugins; they simplify the submission of jobs and the visualization of results: imagine playing chess by typing the 64 numbers representing the chessboard each time, or visualizing the result of the partial differential equation by reading the list of results for each coordinate. Frontends communicate with GPU using Windows messages (a Linux implementation would use pipes and signals to achieve the same goal). Through this privileged channel, the frontend is able to submit jobs and to receive results.
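As a rough illustration of this channel, the Win32 C sketch below submits a textual job to a running GPU instance through a WM_COPYDATA message. The window title "GPU", the choice of WM_COPYDATA and the job syntax are assumptions made for the example; the actual message protocol and data layout are defined by GPU itself.

#include <windows.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Locate the GPU main window (title assumed here). */
    HWND gpu = FindWindowA(NULL, "GPU");
    if (!gpu) {
        fprintf(stderr, "GPU window not found\n");
        return 1;
    }

    /* A textual job; the syntax is purely illustrative. */
    const char *job = "3,7,add";

    COPYDATASTRUCT cds;
    cds.dwData = 0;                          /* application-defined message tag */
    cds.cbData = (DWORD)(strlen(job) + 1);   /* include terminating NUL */
    cds.lpData = (PVOID)job;

    /* Deliver the job; wParam would normally carry the sender's window handle
     * so that GPU can answer with its own message carrying the result. */
    SendMessageA(gpu, WM_COPYDATA, 0, (LPARAM)&cds);
    return 0;
}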
In this document, we give a detailed overview of how to implement frontends. For more insight into the framework, the reader should also consult the previous work on GPU [3].