Parallel iterative solver ISOL
Download the complete software package ISOL 1.45a (TAR.GZ), a part of the finite element system GEM. The solver ISOL and associated programs are free software distributed under the terms of the GNU General Public License as published by the Free Software Foundation.
The code ISOL is intended for the solution of large linear systems arising from the FE analysis of 3D boundary value problems of elasticity. The solver implements the conjugate gradient method accelerated by additive Schwarz preconditioners. The domain is decomposed along the Z direction into several non-overlapping subdomains, which are then extended so that adjacent subdomains usually share a minimal overlap.
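The one-dimensional decomposition with minimal overlap can be sketched as follows. This is an illustrative example, not ISOL's code; the function name and the `overlap` parameter are assumptions chosen for clarity.

```python
# Sketch (not part of ISOL): divide nz mesh layers along Z into nsub
# non-overlapping subdomains, then extend each subdomain by `overlap`
# layers into its neighbours, mirroring the minimal-overlap scheme
# described above.

def decompose_1d(nz, nsub, overlap=1):
    """Return inclusive (start, end) layer ranges of the extended subdomains."""
    base, rem = divmod(nz, nsub)
    ranges = []
    lo = 0
    for i in range(nsub):
        hi = lo + base + (1 if i < rem else 0) - 1
        # extend into the neighbouring subdomains by `overlap` layers
        ranges.append((max(0, lo - overlap), min(nz - 1, hi + overlap)))
        lo = hi + 1
    return ranges

print(decompose_1d(12, 3, overlap=1))
# -> [(0, 4), (3, 8), (7, 11)]: adjacent subdomains share 2*overlap layers
```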
The parallel implementation of the solver consists of several concurrent processes corresponding to the subproblems. Each process works with its own portion of the data and follows the PCG algorithm. Thanks to the special one-dimensional domain decomposition, the communication requirements of the parallel solver are fairly small: in the iterative phase, each process communicates only locally with its neighbours, mainly when the matrix-by-vector multiplication and the preconditioning are performed. The amount of transferred data is small and proportional to the overlapped region. The parallelization therefore has very good prerequisites for efficiency and scalability.
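For reference, the serial skeleton of the PCG iteration that each process follows might look like the sketch below. This is not ISOL's code; `apply_M` stands in for the Schwarz preconditioner, and in the parallel version the matrix-by-vector product, the preconditioning, and the inner products are the only steps that require communication.

```python
# Minimal serial PCG sketch (illustrative; ISOL itself is Fortran 77 + MPI).
# A is a dense list-of-lists matrix, b the right-hand side, apply_M the
# preconditioner application z = M^{-1} r.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def pcg(A, b, apply_M, tol=1e-10, maxit=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A*x with x = 0
    z = apply_M(r)                # preconditioned residual
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)         # needs neighbour data in parallel
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = apply_M(r)            # preconditioning step
        rz_new = dot(r, z)
        beta = rz_new / rz
        p = [zi + beta * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

For example, `pcg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], lambda r: [r[0] / 4.0, r[1] / 3.0])` (a Jacobi-preconditioned 2x2 SPD system) converges to approximately `[1/11, 7/11]`.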
The preconditioning is given by the one-level additive Schwarz method. The subproblems are solved inexactly, with the local matrices replaced by their incomplete factorizations. However, the efficiency of this preconditioner decreases as the number of subproblems grows, and it then becomes necessary to add a coarse-grid correction to the preconditioner. This improvement yields the two-level Schwarz method, which ensures numerical scalability. The global coarse-grid problem can be created either separately or numerically by aggregation. More detailed information about the program can be found in the documentation of the library of parallel solvers ELPAR (PDF).
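One application of a one-level additive Schwarz preconditioner, z = sum_i R_i^T A_i^{-1} R_i r, can be sketched as below. This is illustrative only: the local problems are solved exactly on small dense blocks for clarity, whereas ISOL replaces the local solves by incomplete factorizations; `ranges` are the overlapping index ranges of the subdomains.

```python
# Illustrative one-level additive Schwarz application (not ISOL's code).
# Each overlapping subdomain restricts the residual, solves its local
# problem, and adds the result back; contributions sum up in the overlap.

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def additive_schwarz(A, r, ranges):
    """z = sum_i R_i^T A_i^{-1} R_i r over overlapping subdomain ranges."""
    z = [0.0] * len(r)
    for lo, hi in ranges:
        idx = list(range(lo, hi + 1))
        A_loc = [[A[i][j] for j in idx] for i in idx]   # R_i A R_i^T
        r_loc = [r[i] for i in idx]                      # R_i r
        z_loc = solve_dense(A_loc, r_loc)                # local solve
        for k, i in enumerate(idx):                      # z += R_i^T z_loc
            z[i] += z_loc[k]
    return z
```

Because the subdomain contributions are simply added, entries in the overlap receive corrections from both neighbours; this additivity is also what makes the method attractive for parallelization.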
The program is written in Fortran 77. Communication between the parallel processes is realized by message passing according to the MPI standard, which is supported and generally available on all parallel architectures, including distributed-memory systems. The code was tested on a number of parallel computers, e.g. the symmetric multiprocessors Natan and Simba, or the clusters Thea and Ra.