Automatic selection of the fastest algorithm implementations
Sidnev A.A., Gergel V.P.
We propose an approach for predicting the runtime of distributed high-performance computations that does not require experiments on every target computer system. The optimal algorithm is selected according to the asymptotic complexity of the candidate algorithms using machine learning methods. The proposed approach significantly reduces both the number of experiments and the size of the problems that must be solved when evaluating the performance of a computer system. Estimating algorithm execution time from the known parameters of a system makes it possible to determine how efficiently that system solves certain classes of problems without running experiments on it, and to quickly refine the forecast with a minimum number of experiments on small-size problems on the target system. The proposed solution can be used for automatic library tuning before use (similar to the auto-tuning in the ATLAS (Automatically Tuned Linear Algebra Software) library). A comparative analysis of runtime prediction results obtained when solving several problems on 84 computers is given. A random forest combined with the linear least squares method yields an average relative error of 17% in the estimated execution time when the training data correspond to problems of small size, and 9% when training uses data from the entire range of algorithm parameters in the test samples. The resulting estimates allow the most efficient implementation of the algorithm to be selected in more than 80% of cases.

Keywords: selection of algorithm implementation, program running time, asymptotic estimates of complexity, characteristics of computing systems, machine learning, regression analysis.
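To make the least-squares component of the approach concrete, the following is a minimal sketch (not the authors' implementation): runtimes are measured only on small problem sizes, a linear model over asymptotic complexity terms is fitted by ordinary least squares, and the fitted model extrapolates the runtime for a large problem that was never run on the machine. The basis terms, sizes, and timing coefficients below are illustrative assumptions.

```python
# Hedged sketch: fit runtime ~ c1*n^3 + c2*n^2 by linear least squares
# on small-size measurements, then extrapolate to a large problem size.
# All data here is synthetic; the real method also uses a random forest
# over system characteristics, which is omitted in this illustration.

def fit_lls(ns, times, terms):
    """Solve the normal equations (A^T A) x = A^T b for basis `terms`."""
    k = len(terms)
    A = [[f(n) for f in terms] for n in ns]
    ATA = [[sum(A[r][i] * A[r][j] for r in range(len(ns)))
            for j in range(k)] for i in range(k)]
    ATb = [sum(A[r][i] * times[r] for r in range(len(ns))) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(ATA[r][c]))
        ATA[c], ATA[p] = ATA[p], ATA[c]
        ATb[c], ATb[p] = ATb[p], ATb[c]
        for r in range(c + 1, k):
            f = ATA[r][c] / ATA[c][c]
            for j in range(c, k):
                ATA[r][j] -= f * ATA[c][j]
            ATb[r] -= f * ATb[c]
    x = [0.0] * k
    for i in reversed(range(k)):
        x[i] = (ATb[i] - sum(ATA[i][j] * x[j]
                             for j in range(i + 1, k))) / ATA[i][i]
    return x

# Assumed asymptotic profile of an O(n^3) algorithm with a lower-order term.
terms = [lambda n: n ** 3, lambda n: n ** 2]
small_ns = [100, 200, 300, 400]
# Synthetic "measured" times on small problems: 2e-9*n^3 + 5e-7*n^2 seconds.
times = [2e-9 * n ** 3 + 5e-7 * n ** 2 for n in small_ns]
coef = fit_lls(small_ns, times, terms)

def predict(n):
    """Extrapolated runtime for problem size n on this machine."""
    return sum(c * f(n) for c, f in zip(coef, terms))

print(round(predict(5000), 3))  # -> 262.5 for the synthetic data above
```

In practice the predicted runtimes of several candidate implementations would be compared and the one with the smallest estimate selected, which is the selection step the abstract reports succeeding in more than 80% of cases.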

