
Different topologies on a local area network

Figure 15: Topologies on the local area network; the master node is shown in black.

In the star topology, the setup mimics the centralized model; however, we used a branch computer as master. All computers received their job, but the central computer had trouble sending the results back. Linear speedup was not achieved because the central computer was overloaded and lost some packets.

Figure 16: Speedup for the star topology



Figure 17: Speedup for the line topology

In the line topology, we observe some packet losses for $n>5$. For $n>8$ the speedup no longer increases linearly but stays at the $n=8$ level. In fact, the ninth computer never received the job: the packet carrying it was discarded by the eighth GPU node because its Time-To-Live counter had reached zero. Note that the TTL is set to 7 in the GPU program.
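The following is a minimal sketch of such hop-limited forwarding along a line of nodes, assuming each node decrements the packet's TTL before relaying it and drops the packet once the counter runs out. The class and function names are illustrative and are not taken from the GPU source code.

class Node:
    def __init__(self, name):
        self.name = name
        self.next = None          # downstream neighbour in the line
        self.received_job = False

    def receive(self, ttl):
        self.received_job = True
        # relay the job downstream only while hops remain
        if self.next is not None and ttl > 0:
            self.next.receive(ttl - 1)

# Build a line of 9 worker nodes behind the master and inject a job with TTL = 7.
nodes = [Node("node%d" % i) for i in range(1, 10)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[0].receive(7)

for n in nodes:
    print(n.name, "got job" if n.received_job else "job never arrived")
# node1..node8 receive the job; node9 does not, because the TTL reaches
# zero at the eighth node, which matches the observed plateau at n = 8.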

Figure 18: Speedup for the tree topology

The tree topology gives the best results: only one packet was lost while running on 9 computers.

Figure 19: Speedup for the random graph topology

Results for the random graph topology show that GPU is not yet ready to scale: we found duplicate packets containing the same answer, which explains the apparently impossible super-linear speedup. The answer mechanism should therefore be fixed in version 0.847.
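One possible fix is sketched below, assuming every answer carries a unique job identifier: duplicates that reach the master over different paths of the random graph are then counted only once, so the measured speedup can no longer appear super-linear. The names used here (job_id, accept_answer) are assumptions, not the actual GPU interface.

seen_job_ids = set()

def accept_answer(job_id, payload, results):
    """Store an answer only the first time its job_id is seen."""
    if job_id in seen_job_ids:
        return False          # duplicate arriving over another path; ignore it
    seen_job_ids.add(job_id)
    results[job_id] = payload
    return True

results = {}
for job_id, payload in [(1, "a"), (2, "b"), (1, "a"), (2, "b")]:
    accept_answer(job_id, payload, results)
print(results)                # {1: 'a', 2: 'b'} -- each answer counted once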

