If you're trying to solve 10,000 equations with 10,000 unknowns, then a single instance of the 10,000x10,000 matrix uses 800 MB (or 400 MB if you use single-precision floating point). Out-of-core solvers can handle the problem without keeping everything in memory, but they're a fair bit slower than in-core solvers, which simply load up the matrices and have at it; the out-of-core solvers have to keep scratch files on the hard disk to track all the data they don't hold in RAM.
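Here's a rough back-of-the-envelope sketch in Python/NumPy (the variable names are just illustrative) showing where the 800 MB / 400 MB figures come from, and what actually allocating such a matrix looks like:

```python
import numpy as np

# Memory footprint of a dense n x n system matrix.
n = 10_000

double_bytes = n * n * np.dtype(np.float64).itemsize  # 8 bytes per entry
single_bytes = n * n * np.dtype(np.float32).itemsize  # 4 bytes per entry

print(f"float64: {double_bytes / 1e6:.0f} MB")  # ~800 MB
print(f"float32: {single_bytes / 1e6:.0f} MB")  # ~400 MB

# Holding the full matrix in RAM (what an in-core solver does) only works if
# the machine and the process's address space can accommodate it.
A = np.zeros((n, n), dtype=np.float64)
print(A.nbytes / 1e6, "MB resident for this one matrix alone")
```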
So, any time a problem requires "massive matrices" (Tim's words) to solve, it's far simpler (and faster) to solve if the matrices themselves can be wholly loaded into RAM.