• bee_rider 8 hours ago

    I guess the Tropp mentioned at the end of the article is the same one as in

    https://arxiv.org/abs/0909.4061

    It is a pretty good framework for eigenvalues, as long as your eigenvalues are good (decay nicely). If your eigenvalues don’t decay nicely… find a different problem!
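
    Roughly, the skeleton of the idea, as an untested numpy sketch (the function name, the oversampling fudge factor, and the omission of power iterations are all mine):

        import numpy as np

        # Toy sketch of the randomized range-finder + Rayleigh-Ritz idea for a
        # symmetric matrix A. It only works well when the trailing eigenvalues
        # are small -- the "decay nicely" caveat above.
        def randomized_eigs(A, k, oversample=10):
            n = A.shape[0]
            Omega = np.random.randn(n, k + oversample)   # random test matrix
            Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for range(A @ Omega)
            B = Q.T @ A @ Q                              # small projected matrix
            vals, vecs = np.linalg.eigh(B)               # exact eigensolve on the small problem
            idx = np.argsort(np.abs(vals))[::-1][:k]     # keep the k largest in magnitude
            return vals[idx], Q @ vecs[:, idx]           # lift approximate eigenvectors back up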

    • jey 9 hours ago

      There are also a number of other algorithms from randomized numerical linear algebra ("RandNLA") that are especially useful for matrices too big to fit in memory, which come up often in "big data" applications.

      Here's a recent survey paper with an eye towards practical applications: https://arxiv.org/abs/2302.11474
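
      A minimal sketch of why these suit out-of-core matrices, assuming the data arrives as a stream of row blocks (the generator and names here are hypothetical, not from the survey):

          import numpy as np

          # The sketch Y = A @ Omega can be accumulated one block of rows at a
          # time, so the full matrix A never has to fit in RAM.
          def sketch_in_one_pass(row_blocks, n_cols, k):
              Omega = np.random.randn(n_cols, k)   # shared random test matrix
              pieces = []
              for block in row_blocks:             # stream over the rows of A
                  pieces.append(block @ Omega)     # only block-sized memory needed here
              Y = np.vstack(pieces)                # n x k sketch, much smaller than A
              Q, _ = np.linalg.qr(Y)               # orthonormal basis for an approximate range
              return Q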

      • jonathanyc 8 hours ago

        Interesting:

        > He and Kressner’s algorithm is delightful. Ultimately, it uses randomness in only a small way. For most coefficients a_1, a_2 \in \mathbb{R}, a matrix Q diagonalizing a_1 H + a_2 S will also diagonalize A = H + iS. But, for any specific choice of a_1, a_2, there is a possibility of failure. To avoid this possibility, we can just pick a_1 and a_2 at random. It’s really as simple as that.

        Does anyone use randomized algorithms like these at work? I’m very curious about the conditions under which they make sense. Another comment links to a monograph, but I’m more curious about the product side. I worked a little on geometry processing for maps in the past, and I don’t think we used any randomized algorithms.

        I can see how you could e.g. fix the random seed for a partition of the data so that if you end up in the bad case you can change it (similar to how you can change consistent hashing keys to load balance).
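
        For concreteness, here is my reading of the quoted trick as an untested numpy sketch; it assumes H and S are real symmetric and genuinely commute, and the function name, tolerance, and retry loop are my own:

            import numpy as np

            # Diagonalize a random real combination a1*H + a2*S; per the quote,
            # for almost every choice that basis also diagonalizes A = H + iS.
            # The retry loop is the "re-roll the seed on failure" idea.
            def joint_diagonalizer(H, S, tol=1e-10, max_tries=5, seed=0):
                rng = np.random.default_rng(seed)
                A = H + 1j * S
                for _ in range(max_tries):
                    a1, a2 = rng.standard_normal(2)          # random coefficients
                    _, Q = np.linalg.eigh(a1 * H + a2 * S)   # orthogonal eigenvectors of the combination
                    off = Q.T @ A @ Q                        # should come out (numerically) diagonal
                    if np.abs(off - np.diag(np.diag(off))).max() < tol:
                        return Q
                raise RuntimeError("unlucky draws (or H and S don't commute)")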

        • adgjlsfhk1 an hour ago

          Depends a little bit on what you mean by "like these". In the broadest sense, quicksort is an algorithm like this, in that it can use a random pivot to make it unlikely that bad data triggers its worst-case behavior.
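
          Something like:

              import random

              # Random pivot selection: no fixed input can reliably trigger the
              # O(n^2) worst case, because the "bad" orderings depend on coin flips.
              def quicksort(xs):
                  if len(xs) <= 1:
                      return xs
                  pivot = random.choice(xs)
                  less = [x for x in xs if x < pivot]
                  equal = [x for x in xs if x == pivot]
                  greater = [x for x in xs if x > pivot]
                  return quicksort(less) + equal + quicksort(greater)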