Roland Melkert Wrote: If you really need more than 2GB (most software doesn't, but programmers seem to get more and more lazy these days)...
The obvious solution would be to develop for 64-bit OSes only.
An alternative would be to use some kind of swap file, although that needs decent management classes and could be extremely slow if done inefficiently (I'm having EMS flashbacks here).
Something else I have been thinking of, but never really tried, is using multiple program instances: if the system runs a 64-bit OS, every 32-bit process has its OWN 2GB limit, so using multiple instances would give you 'n' times 2GB. This approach would need a sophisticated way of communicating between the processes and dividing the data, as in the sketch below.
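For illustration, a rough sketch of that idea, assuming Python's multiprocessing module (the worker function and chunk sizes are made up). Each worker is a separate OS process with its own address space, so on a 64-bit OS each 32-bit worker gets its own 2GB:

    import multiprocessing as mp
    import numpy as np

    def worker(chunk_id, n):
        # Separate process = separate address space, so each worker
        # has its own 2GB limit when built as a 32-bit binary.
        block = np.zeros((n, n), dtype=np.complex128)  # local slab only
        # ... fill and process this worker's share of the data ...
        return chunk_id, block.nbytes

    if __name__ == "__main__":
        with mp.Pool(processes=4) as pool:
            print(pool.starmap(worker, [(i, 1000) for i in range(4)]))

The hard part, as said, is dividing the data and getting results back without funnelling everything through one process.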
my 2cts
Hi Roland,
Yes, I really do need that much. I'm writing physics code (true computation!), so I need 3x3 tensors of 3D representations of complex values, which means (3N^3)^2 elements, where N is largish.
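To put numbers on it (assuming double-precision complex at 16 bytes per element; the N values are just for illustration):

    # Memory for a dense (3*N**3) x (3*N**3) complex128 matrix.
    for N in (10, 15, 20):
        dim = 3 * N**3
        gib = 16 * dim**2 / 2**30
        print(f"N={N}: {dim} x {dim} -> {gib:.2f} GiB")

So N=15 already puts a single array near the 2GB ceiling, and N=20 is hopeless in a 32-bit address space.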
With the current code I can save myself one (somewhat easily) or two (not as easily) very large arrays, but that only gains me a small increase in N.
The idea of parallelising/forking is interesting. If I can find a parallelisable eigendecomposition routine, it might be a good solution in general.
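In the meantime, a threaded LAPACK backend (MKL or OpenBLAS) already parallelises the dense Hermitian solve within one process, and for genuinely distributed memory there are libraries like ScaLAPACK and SLEPc. A minimal single-node sketch with scipy.linalg.eigh, using a random Hermitian matrix as a stand-in for the real operator:

    import numpy as np
    from scipy.linalg import eigh

    dim = 1000  # stand-in for 3*N**3; small enough to run anywhere
    A = np.random.rand(dim, dim) + 1j * np.random.rand(dim, dim)
    H = (A + A.conj().T) / 2   # Hermitian, like a physical operator
    w, v = eigh(H)             # LAPACK's Hermitian eigensolver;
                               # threading comes from the BLAS backend
    print(w[:5])               # lowest eigenvalues (ascending order)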
And yeah... swap files... sloooooow access, annoying to deal with... but they may have to be the solution.
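If it does come to that, one way to avoid hand-rolled swap management is to let the OS page a memory-mapped file; a minimal sketch with numpy.memmap (file name and size are placeholders):

    import numpy as np

    dim = 4096  # placeholder; in practice 3*N**3
    # Backed by a file on disk; only the pages actually touched
    # occupy physical RAM, and the OS handles the paging.
    M = np.memmap("scratch.dat", dtype=np.complex128,
                  mode="w+", shape=(dim, dim))
    M[0, :] = 1.0 + 0j   # write one row; only those pages go dirty
    M.flush()            # push dirty pages back to the file

Two caveats: on a 32-bit build the mapping itself is still limited by address space, so a huge file would have to be mapped in windows (numpy.memmap takes an offset argument for that), and access patterns matter a lot; row-wise streaming is fine, random access is back to sloooooow.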
Thanks,
Tim