Roger Dannenberg, co-creator of Audacity, visits the UPV to collaborate on AI projects

Roger Dannenberg, pioneer of computational music, visits UPV within the CIAICO/2023/275 project led by Jorge Sastre and Nuria Lloret.

Below you can access press coverage of Roger Dannenberg’s visit to the UPV, where he gave a live demonstration of his virtual musical performer system.

The visit is part of the Valencian Regional Government’s CIAICO/2023/275 project, which focuses on artificial intelligence applications and is co-directed by UPV professors Nuria Lloret and Jorge Sastre.

Beyond Paterson–Stockmeyer: Advancing Matrix Polynomial Computation

For over fifty years, the Paterson–Stockmeyer method has been the benchmark for efficient matrix polynomial evaluation. In our recent open-access article, we summarize recent advances in this area and present a constructive scheme that evaluates a degree‑20 matrix polynomial using only 5 matrix multiplications, two fewer than Paterson–Stockmeyer requires.

We also show how the coefficients of this scheme can be derived from the solutions of a single equation in one unknown, and we include the full derivation in our supplementary materials.
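For readers unfamiliar with the baseline, here is a minimal NumPy sketch of the classical Paterson–Stockmeyer scheme (the baseline only, not the paper's new 5-multiplication scheme). The function also counts matrix-matrix products, so the 7-product cost for degree 20 can be checked directly.

```python
import math
import numpy as np

def paterson_stockmeyer(c, A):
    """Evaluate p(A) = sum_k c[k] * A**k with the Paterson-Stockmeyer scheme.

    Returns (p(A), number of matrix-matrix multiplications used).
    """
    n = A.shape[0]
    m = len(c) - 1                      # polynomial degree
    s = int(np.ceil(np.sqrt(m)))        # block size ~ sqrt(m)
    P = [np.eye(n), A]                  # powers I, A, A^2, ..., A^s
    mults = 0
    for _ in range(2, s + 1):
        P.append(P[-1] @ A)
        mults += 1
    X = P[s]                            # X = A^s
    r = m // s                          # number of blocks above B_0

    def block(i):
        # B_i(A) = sum_j c[s*i + j] * A^j, j < s (shorter for the top block)
        hi = min(s - 1, m - s * i)
        return sum(c[s * i + j] * P[j] for j in range(hi + 1))

    # Horner in X; when m is a multiple of s, the top block is c[m]*I, so it
    # folds into the next step as a scalar multiple of X (saves one product).
    if m % s == 0:
        y = c[m] * X + block(r - 1)
        lo = r - 2
    else:
        y = block(r)
        lo = r - 1
    for i in range(lo, -1, -1):
        y = y @ X + block(i)
        mults += 1
    return y, mults
```

For a degree-20 polynomial this sketch performs 7 matrix products (4 powers plus 3 Horner steps), which is the count the new scheme reduces to 5.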


Publication Details

  • Title: Beyond Paterson–Stockmeyer: Advancing Matrix Polynomial Computation
  • Authors: J. Sastre, J. Ibáñez, J. M. Alonso, E. Defez
  • Journal: WSEAS Transactions on Mathematics, Vol. 24, pp. 684–693, 2025
  • Conference: 5th Int. Conf. on Applied Mathematics, Computational Science and Systems Engineering (AMCSE), Paris, France, April 14–16, 2025
  • Open Access: https://doi.org/10.37394/23206.2025.24.68
  • Supplementary Material:

Main Contributions

  • Survey of recent advances in matrix polynomial evaluation.
  • Constructive result: A method to compute a degree‑20 matrix polynomial with just 5 matrix multiplications, improving on Paterson–Stockmeyer, which needs 7 matrix products.
  • Coefficient derivation: All coefficients can be obtained by solving an equation in one unknown, documented step by step in the .txt file.
  • Generalization: We propose a framework for evaluation formulas of the type y_{k2}(A) with C_k^2 available variables, and state two conjectures for future research.

Why This Matters

Reducing matrix multiplications significantly lowers computational cost, which is crucial for:

  • Large-scale scientific computing
  • Numerical linear algebra
  • AI and machine learning models involving matrix functions

Next Steps

If you work with matrix functions or large-scale computations:

  • Try the 5-multiplication scheme for degree‑20 polynomials.
  • Benchmark against Paterson–Stockmeyer.
  • Explore adapting the rational-coefficient approach to other degrees.

We welcome collaboration on proving the conjectures and extending these ideas to broader polynomial families.

Polynomial approximations for the matrix logarithm with computation graphs

Polynomial approximations for the matrix logarithm with computation graphs, E. Jarlebring, J. Sastre, J. Ibáñez, Linear Algebra and its Applications, in press (open access), 2024. https://doi.org/10.1016/j.laa.2024.10.024, https://arxiv.org/abs/2401.10089, code.

In this article, the matrix logarithm is computed using matrix polynomial approximations evaluated with matrix multiplications and additions. The most popular method for computing the matrix logarithm combines the inverse scaling and squaring method with a Padé approximation, sometimes preceded by a Schur decomposition. The main computational effort lies in matrix-matrix multiplications and left matrix division. In this work we show that the number of such operations can be substantially reduced by using a graph-based representation of an efficient polynomial evaluation scheme. A technique to analyze the rounding error is proposed, and backward error analysis is adapted. We provide extensive simulations illustrating competitiveness both in terms of computation time and rounding errors.
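To make the inverse scaling-and-squaring idea concrete, here is a NumPy-only sketch: repeated square roots bring A close to the identity, a truncated Taylor series approximates log(I + E), and the scaling is undone at the end. A Denman–Beavers iteration and a plain Taylor polynomial stand in for the Padé and graph-based evaluation schemes of the paper; there is no error control, so this is an illustration only.

```python
import numpy as np

def sqrtm_db(A, iters=60, tol=1e-14):
    """Principal matrix square root via the Denman-Beavers iteration
    (assumes no eigenvalues on the closed negative real axis)."""
    Y, Z = A, np.eye(A.shape[0])
    for _ in range(iters):
        # Tuple assignment: both updates use the previous Y and Z.
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
        if np.linalg.norm(Y @ Y - A, 1) <= tol * np.linalg.norm(A, 1):
            break
    return Y

def logm_iss(A, taylor_deg=16, theta=0.25):
    """Inverse scaling and squaring: take square roots until A is close to I,
    apply a truncated Taylor series for log(I + E), then undo the scaling."""
    n = A.shape[0]
    s = 0
    while np.linalg.norm(A - np.eye(n), 1) > theta:
        A = sqrtm_db(A)
        s += 1
    E = A - np.eye(n)
    # Horner evaluation of log(I + E) ~ sum_{k=1}^{m} (-1)^(k+1) E^k / k
    q = ((-1) ** (taylor_deg + 1) / taylor_deg) * np.eye(n)
    for k in range(taylor_deg - 1, 0, -1):
        q = q @ E + ((-1) ** (k + 1) / k) * np.eye(n)
    return (2.0 ** s) * (E @ q)
```

Each square root and each Horner step costs a matrix inversion or multiplication, which is exactly the operation count the paper's graph-based schemes are designed to reduce.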

A Matrix Spline Method for a Class of Fourth-Order Ordinary Differential Problems

A Matrix Spline Method for a Class of Fourth-Order Ordinary Differential Problems, M.M. Tung, E. Defez, J. Ibáñez, J.M. Alonso, J.I. Real-Herráiz, Mathematics 2022, 10(16), 2826, https://doi.org/10.3390/math10162826

Differential matrix models provide an elementary blueprint for the adequate and efficient treatment of many important applications in science and engineering. In the present work, we suggest a procedure, extending our previous research results, to represent the solutions of nonlinear matrix differential problems of fourth order given in the form Y^(4)(x) = f(x, Y(x)) in terms of higher-order matrix splines. The corresponding algorithm is explained, and some numerical examples for the illustration of the method are included.

On the Approximated Solution of a Special Type of Nonlinear Third-Order Matrix Ordinary Differential Problem

On the Approximated Solution of a Special Type of Nonlinear Third-Order Matrix Ordinary Differential Problem, E. Defez, J. Ibáñez, J.M. Alonso, M.M. Tung, T.P. Real-Herráiz. Mathematics, 2021, 9(18), 2262, https://doi.org/10.3390/math9182262

Matrix differential equations are at the heart of many science and engineering problems. In this paper, a procedure based on higher-order matrix splines is proposed to provide the approximated numerical solution of special nonlinear third-order matrix differential equations, having the form Y^(3)(x) = f(x, Y(x)). Some numerical test problems are also included, whose solutions are computed by our method.

Press: UPV Researchers ‘Revolutionize’ Matrix Calculation with a ‘Faster and More Accurate’ Method

Below you can access the press coverage of our discovery of a new, faster and more accurate method for calculating matrix functions:

Accurate Approximation of the Matrix Hyperbolic Cosine Using Bernoulli Polynomials

José M. Alonso, Javier Ibáñez, Emilio Defez and Fernando Alvarruiz. Mathematics, vol. 11, 520, 2023. https://doi.org/10.3390/math11030520.

This paper presents three different alternatives to evaluate the matrix hyperbolic cosine using Bernoulli matrix polynomials, comparing them from the point of view of accuracy and computational complexity. The first two alternatives are derived from two different Bernoulli series expansions of the matrix hyperbolic cosine, while the third one is based on the approximation of the matrix exponential by means of Bernoulli matrix polynomials. We carry out an analysis of the absolute and relative forward errors incurred in the approximations, deriving corresponding suitable values for the matrix polynomial degree and the scaling factor to be used. Finally, we use a comprehensive matrix testbed to perform a thorough comparison of the alternative approximations, also taking into account other current state-of-the-art approaches. As a result, the most accurate and efficient options are identified.
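For orientation, the general shape of such a scheme (a polynomial approximation combined with scaling and a recovery recurrence) can be sketched in a few lines of NumPy. Note the hedge: this sketch uses a plain truncated Taylor series and the double-angle identity cosh(2X) = 2 cosh(X)^2 - I, not the Bernoulli-polynomial approximants of the paper.

```python
import math
import numpy as np

def coshm_taylor(A, m=10):
    """Matrix hyperbolic cosine by a truncated Taylor series of degree 2m
    combined with the double-angle identity cosh(2X) = 2*cosh(X)^2 - I.
    A plain-Taylor stand-in, not the paper's Bernoulli-polynomial method."""
    n = A.shape[0]
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm)))) if nrm > 1 else 0
    X = A / 2.0 ** s                         # scale so ||X||_1 <= 1
    X2 = X @ X                               # cosh is even: work with X^2
    C = np.eye(n) / math.factorial(2 * m)    # Horner in X2
    for k in range(m - 1, -1, -1):
        C = C @ X2 + np.eye(n) / math.factorial(2 * k)
    for _ in range(s):                       # undo the scaling
        C = 2.0 * (C @ C) - np.eye(n)
    return C
```

Choosing the polynomial degree and the scaling factor s from a forward-error analysis, rather than fixing them as here, is precisely the design question the paper addresses.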

Euler polynomials for the matrix exponential approximation

José M. Alonso, Javier Ibáñez, Emilio Defez, Pedro Alonso-Jordá. Journal of Computational and Applied Mathematics, vol. 425, 115074, 2023. https://doi.org/10.1016/j.cam.2023.115074.

In this work, a new method to compute the matrix exponential function by using an approximation based on Euler polynomials is proposed. These polynomials are used in combination with the scaling and squaring technique, considering an absolute forward-type theoretical error. Its numerical and computational properties have been evaluated and compared with the most current and competitive codes dedicated to the computation of the matrix exponential. Through exhaustive experiments on a heterogeneous test battery, it has been demonstrated that the new method offers accuracy and stability as good as or even better than those of the considered methods, with an intermediate computational cost among all of them. All of the above makes this a very competitive alternative that should be considered in the growing list of available numerical methods and implementations dedicated to the approximation of the matrix exponential.
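The scaling and squaring framework mentioned above is easy to sketch. In this minimal NumPy version, a truncated Taylor polynomial stands in for the Euler-polynomial approximant of the paper; the structure (scale A down, evaluate a polynomial, square repeatedly) is the same.

```python
import math
import numpy as np

def expm_ss(A, deg=16):
    """Matrix exponential by scaling and squaring, with a truncated Taylor
    polynomial standing in for the paper's Euler-polynomial approximant."""
    n = A.shape[0]
    nrm = np.linalg.norm(A, 1)
    s = max(0, int(np.ceil(np.log2(nrm)))) if nrm > 1 else 0
    X = A / 2.0 ** s                       # scale so ||X||_1 <= 1
    P = np.eye(n) / math.factorial(deg)    # Horner evaluation of the Taylor sum
    for k in range(deg - 1, -1, -1):
        P = P @ X + np.eye(n) / math.factorial(k)
    for _ in range(s):                     # exp(A) = exp(A / 2^s)^(2^s)
        P = P @ P
    return P
```

The interplay between the approximant's degree and the scaling parameter s, fixed arbitrarily here, is what the forward-type error analysis in the paper determines.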

On the backward and forward error of approximations of analytic functions and applications to the computation of matrix functions

Jorge Sastre, Javier Ibáñez, Journal of Computational and Applied Mathematics, Volume 419, 2023, 114706, https://doi.org/10.1016/j.cam.2022.114706

A new formula is given that writes the forward error of Taylor approximations of analytic functions in terms of the backward error of those approximations, overcoming problems of backward error analyses that use inverse functions. Examples of the backward error analysis of functions such as the matrix cosine cos(A) or cos(sqrt(A)) are given.
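The forward error that the paper's formula relates to the backward error can be measured numerically. The snippet below (an illustration, not the paper's analysis) computes the forward error of a truncated Taylor approximation of the matrix cosine against a reference value obtained by eigendecomposition.

```python
import math
import numpy as np

def cos_taylor(A, m=8):
    """Truncated Taylor polynomial of cos(A) of degree 2m (Horner in A^2)."""
    n = A.shape[0]
    X2 = A @ A
    P = ((-1.0) ** m / math.factorial(2 * m)) * np.eye(n)
    for k in range(m - 1, -1, -1):
        P = P @ X2 + ((-1.0) ** k / math.factorial(2 * k)) * np.eye(n)
    return P

# Forward error ||T_m(A) - cos(A)||_1 for a small symmetric test matrix,
# using an eigendecomposition as the reference value of cos(A).
rng = np.random.default_rng(3)
R = rng.standard_normal((4, 4))
S = 0.25 * (R + R.T)
w, Q = np.linalg.eigh(S)
fwd_err = np.linalg.norm(cos_taylor(S) - (Q * np.cos(w)) @ Q.T, 1)
```

Expressing this forward error through the backward error of the same approximation, without inverting cos, is the contribution of the paper.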