NERON: Miscellaneous Remarks about Neron

From: Arthur Gaer 
To: David Harvey 
Cc: William A. Stein 
Subject: Re: Lots o'Mem
Date: Mon, 12 Apr 2004 15:01:35 -0400 (EDT)

Hi David,

I wouldn't worry too much about outrunning physical memory.  Sun's big
selling point, besides robustness and stability, is its ability to
handle large data sets and heavy memory usage--they do a lot of work
optimizing the paging and swapping algorithms on their hardware for
exactly this sort of workload, so the effect won't be nearly as
detrimental as swapping on your laptop.

I would guess, too, that the Sun compilers may produce binaries that are
better optimized and better integrated with the system- and kernel-level
virtual memory and swapping facilities, though that's just a guess.  But
it's probably worth building with those compilers for this sort of
computation.

In any case, neron is a $70,000 machine, so it has much faster data and
memory buses and much faster disks and disk access than your laptop,
orders of magnitude faster.  Swapping won't be nearly as noticeable,
especially when you're not simultaneously trying to use the machine as
an interactive workstation, so don't sweat it.  Obviously, running out
of swap space entirely would be a bad thing.

To help prevent that, I'll see if I can add a good chunk of that second
disk as extra swap space.  I know it can be done, and I'm pretty sure it
can even be done on a live system without a reboot, but I've never done
that myself, so it might take a day or two to find the magic words.
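
From a first look at the docs, the magic words seem to be something
along these lines (run as root; the file name and size are hypothetical,
and an /etc/vfstab entry would be needed for it to survive a reboot):

    # create an 8 GB file on the second disk and register it as swap
    mkfile 8192m /disk2/swapfile
    swap -a /disk2/swapfile

    # verify that the new swap area is listed
    swap -l

but I'll confirm before touching anything.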

ulimit is mostly a shell built-in, so it can sometimes get kind of
funky, depending on the system and which shell you're running under.
No reason not to try it, but it could turn out not to do exactly what
you want.
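
If you do try it, the starting point would be something like the
following (bash syntax; the flag and its units vary between shells, and
the program name here is just a stand-in for your job):

    # cap the process at roughly 10 GB of virtual memory (units are KB
    # in bash), so allocations fail cleanly instead of the machine thrashing
    ulimit -v 10485760
    ./compute_polynomial

One nice side effect: with an address-space cap in place, your code can
just check for failed allocations and bail out on its own.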

Sun actually has all sorts of proprietary facilities for controlling
resource usage by particular jobs, etc.  They're mostly aimed at those
big multi-million-dollar servers Sun loves to sell--I don't know whether
any of them depend on that expensive hardware or require extra software.
I've never had occasion to poke into them before, but I'll look around a
bit and see what I can see.
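
From a quick glance, recent Solaris releases expose some of this through
resource controls and the prctl(1) command; something like the following
should at least display the per-process address-space cap (the flags for
actually setting it differ between releases, so treat this as a sketch
until I've tried it):

    # show the address-space resource control for the current shell
    prctl -n process.max-address-space $$

    # on newer releases, replace the cap on a running process
    # (pid 12345 is hypothetical) with roughly 10 GB
    prctl -n process.max-address-space -r -v 10gb -i process 12345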

Arthur


On Fri, 9 Apr 2004, David Harvey wrote:

> Hi Arthur,
>
> Yes, I definitely did notice; I was keeping an eye on it most of the
> time. (I was estimating it would get up to about 10G on that run, but
> it exceeded my expectations somewhat.) I am planning to do some more
> runs like that over the next few weeks, as long as that's acceptable in
> terms of the department's computing resources. Let me know if I'm
> pushing the friendship.
>
> The problem with the swap space is that my code references memory all
> over the place, and from my experience on my 256 MB laptop, if it starts
> swapping it will just collapse in a thrashing heap. My main priority is
> to ensure that I don't interfere with any of the other users on the
> machine (for example, Stephanie has been running a few processes there
> for a while), so I don't want to force them to start swapping either.
>
> It would be nice if my code could tell when physical RAM is running
> out and just bail out at that point, but I'm not sure how to do that.
> Or perhaps there is some way to limit the physical and swap memory
> from the command line when I run it? [A sketch of the in-code check
> follows this message.]
>
> In any case, I still have plenty of scope for improvements in my
> code, which should cut down the memory requirements for similar-sized
> problems.
>
> In case you were wondering what I am doing... for a few months I have
> been studying a type of chromatic polynomial defined for cubic graphs.
> In general the time required to compute this polynomial is exponential
> in the number of vertices; for PLANAR cubic graphs I have a faster
> algorithm, but there's a memory tradeoff. Hence the 14 GB. My code
> basically cooks up a large random planar graph (today's had 300
> vertices) and computes the associated polynomial. There are certain
> features of the coefficients of the polynomial which may or may not
> persist for larger graphs.
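
Regarding the in-code memory check David asks about above: on Solaris
(and Linux) a process can query free physical memory through sysconf(3C).
A minimal sketch, assuming that facility (_SC_AVPHYS_PAGES is not
strictly POSIX, and the 512 MB threshold is an arbitrary stand-in):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Free physical memory in bytes, or -1 if the query fails.
     * _SC_AVPHYS_PAGES reports currently available physical pages. */
    static long long free_phys_bytes(void)
    {
        long pages = sysconf(_SC_AVPHYS_PAGES);
        long psize = sysconf(_SC_PAGESIZE);
        if (pages < 0 || psize < 0)
            return -1;
        return (long long)pages * psize;
    }

    int main(void)
    {
        const long long threshold = 512LL * 1024 * 1024;
        long long freemem = free_phys_bytes();
        printf("free physical memory: %lld bytes\n", freemem);
        if (freemem >= 0 && freemem < threshold) {
            fprintf(stderr, "low memory, bailing out\n");
            exit(1);
        }
        /* an allocation-heavy computation would repeat this check
         * periodically, before each large allocation */
        return 0;
    }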