Rational Krylov-subspace methods are a natural candidate for the reduction of very-large-scale linear models due to their moderate computational cost and memory requirements. However, to achieve good approximation quality, state-of-the-art Krylov algorithms such as IRKA iteratively search for a set of locally H2-optimal reduction parameters. This search requires repeated reduction of the high-dimensional model and can therefore still incur significant computational cost, especially in case of slow convergence. In this contribution, we investigate the cost of H2-optimal rational Krylov methods and propose an enhanced reduction framework, based on the local nature of such methods, that reduces the computational effort while guaranteeing optimality at convergence. The improvement achieved through this framework is analyzed theoretically and validated numerically on a modified IRKA algorithm.
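To make the iterative search concrete, the following is a minimal sketch of the basic IRKA fixed-point iteration for a SISO system in Python/NumPy. It is illustrative only, not the algorithm variant proposed here: the function name irka, the dense solves, and the shift-stagnation test are assumptions for readability; practical implementations use sparse factorizations and real-valued bases for conjugate shift pairs.

    import numpy as np

    def irka(A, b, c, sigma, tol=1e-6, maxit=100):
        # Hypothetical minimal IRKA sketch (SISO, complex arithmetic).
        # sigma: initial shifts; at a fixed point the reduced poles
        # mirror the shifts, which encodes the first-order
        # H2-optimality (Hermite interpolation) conditions.
        n, I = A.shape[0], np.eye(A.shape[0])
        for _ in range(maxit):
            # Rational Krylov bases: one large-scale solve per shift
            V = np.column_stack([np.linalg.solve(s * I - A, b) for s in sigma])
            W = np.column_stack([np.linalg.solve((s * I - A).conj().T, c) for s in sigma])
            V, _ = np.linalg.qr(V)
            W, _ = np.linalg.qr(W)
            # Petrov-Galerkin projection of the full model
            Ar = np.linalg.solve(W.conj().T @ V, W.conj().T @ A @ V)
            sigma_new = -np.linalg.eigvals(Ar)   # mirror the reduced poles
            # Heuristic convergence test: shift set has stagnated
            if np.max(np.abs(np.sort_complex(sigma_new) - np.sort_complex(np.asarray(sigma, dtype=complex)))) < tol:
                sigma = sigma_new
                break
            sigma = sigma_new
        # Assemble the reduced-order model with the final bases
        Er = W.conj().T @ V
        Ar = np.linalg.solve(Er, W.conj().T @ A @ V)
        br = np.linalg.solve(Er, W.conj().T @ b)
        cr = V.T @ c
        return Ar, br, cr, sigma

Each pass through the loop requires 2r shifted solves with the high-dimensional matrix; this repeated reduction cost, multiplied by the number of iterations until the shifts stagnate, is precisely the expense that the proposed framework aims to reduce.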