A UPS is always a good idea for any sort of critical server. Of course, in an extended power failure all it really buys you is a clean shutdown, but it will keep you up and running smoothly through a flicker. It also smooths out irregularities in your power lines, which can cause instability or, over time, power supply failure.
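If you do get a UPS, make sure the server can actually see it -- the clean shutdown only happens if something is watching the battery. In practice you'd just run the monitoring daemon that ships with the unit (upsmon, if you use Network UPS Tools), but here's a toy sketch of what that watchdog amounts to, assuming NUT is installed and the UPS is registered under the placeholder name "myups":

```python
import os
import subprocess
import time

UPS = "myups@localhost"  # placeholder name; matches whatever your NUT config calls the unit

def ups_status():
    # Ask NUT's upsc client for the status string:
    # "OL" = on line power, "OB" = on battery, "LB" = low battery.
    out = subprocess.run(["upsc", UPS, "ups.status"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

while True:
    status = ups_status()
    if "OB" in status and "LB" in status:
        # Battery nearly exhausted: shut down cleanly before the power is cut.
        os.system("shutdown -h now")
        break
    time.sleep(30)
```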
With most hardware RAID solutions, you should be more or less OK if the power does drop. You may end up with an inconsistency in the block(s) being written when the power failed (which would corrupt whatever file was in flight, but at least it would be noticed!), but the rest of the data will be fine. If only a single disk has an error, the array *should* be able to rebuild properly. It's a very good idea to do a live test of things like this once you get it set up -- unplug a drive while running, unplug the system while running and see what's there when you power up. If it's not truly fault-tolerant, you want to know about it *before* something bad happens.
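Along the same lines, have something watch array health so a dead disk doesn't sit unnoticed until its partner dies too. Hardware controllers come with their own alerting tools; purely as a sketch of the idea, here's what the check looks like if you were on Linux software RAID (md) instead, where a degraded mirror shows up as an underscore in /proc/mdstat (a healthy two-disk mirror reports "[UU]", a degraded one "[U_]" or "[_U]"):

```python
import re

def degraded_arrays(mdstat_path="/proc/mdstat"):
    """Return the names of md arrays whose status line shows a failed slot."""
    bad = []
    current = None
    with open(mdstat_path) as f:
        for line in f:
            m = re.match(r"(md\d+)\s*:", line)
            if m:
                current = m.group(1)  # the status line follows this array's header line
            status = re.search(r"\[([U_]+)\]", line)
            if status and "_" in status.group(1) and current:
                bad.append(current)
    return bad

if __name__ == "__main__":
    for name in degraded_arrays():
        print(f"WARNING: array {name} is degraded -- replace the disk!")
```

Run it out of cron and mail yourself the output, and a single-disk failure stops being a silent one.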
If you don't have hot-swap bays, your only choice is to shut down the system to replace a bad disk. Whether or not this is an acceptable solution is up to you -- it takes significantly longer to open the case up and replace a bad disk than to just swap a drive from a bay. If you're Amazon.com, that's bad. Your lab, however, may be able to tolerate an hour of downtime for the significant price savings.
IMHO (as someone who works in the computer storage business), SCSI disks themselves are not inherently more fault-tolerant than IDE/SATA drives. High-end SCSI disks are certainly better built than cheap IDE ones, but I doubt you're going to see a significant difference in failure rates between, say, IBM 10KRPM 74GB SCSI disks and WD Raptor SATA drives. If you're concerned about drive failure, use RAID1, RAID0+1, or RAID5 to provide redundancy. Many controllers can do 3- or 4-way mirroring in RAID1 as well, but if your backups are reasonably regular, that's probably overkill. If you want to make backups easier and more regular, you might look into NAS (Network Attached Storage) and some automated backup software -- might be out of your budget, though.
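For a back-of-the-envelope feel of those redundancy trade-offs, the usable-capacity arithmetic per level fits in a few lines (rough numbers only -- this ignores controller and metadata overhead):

```python
def raid_capacity(level, n_disks, disk_gb):
    """Usable capacity in GB, and failures guaranteed survivable, per RAID level."""
    if level == "RAID1":       # n-way mirror: one disk's worth of space
        return disk_gb, n_disks - 1
    if level == "RAID0+1":     # mirror of two stripes (assumes an even disk count)
        return (n_disks // 2) * disk_gb, 1
    if level == "RAID5":       # one disk's worth of capacity goes to parity
        return (n_disks - 1) * disk_gb, 1
    raise ValueError(f"unhandled level: {level}")

for level in ("RAID1", "RAID0+1", "RAID5"):
    cap, failures = raid_capacity(level, 4, 74)  # e.g. four 74 GB Raptors
    print(f"{level}: {cap} GB usable, survives {failures} failure(s) guaranteed")
```

With four 74 GB drives that works out to 74 / 148 / 222 GB usable for RAID1 / RAID0+1 / RAID5, which is usually what settles the argument.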
I understand your position, having worked in a neuroscience lab in college that did a lot of number-crunching (mostly DSP on 32- and 64-channel high-resolution intracellular recordings, with dataset sizes up in the 1-8 GB range). For bizarre reasons that I don't entirely understand, scientific computing packages seem to have practically no support for multithreading OR distributed computing, even though they're exactly the sort of apps that could make the most use of them! If an x86-64 (Opteron) version of your software is available, that's probably your best bet: with a 64-bit app you'll get a lot more speed than you would out of Xeons or Athlon MPs. Itanium solutions are prohibitively expensive in my experience.
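One consolation: if your analysis is per-channel (most of our DSP was), the channels are independent, so you can fake the parallelism yourself by fanning them out across processes. A toy sketch in Python -- filter_channel here is a made-up stand-in for whatever your package actually computes per channel:

```python
from multiprocessing import Pool

import numpy as np

def filter_channel(samples):
    # Hypothetical stand-in for the real per-channel DSP (e.g. a band-pass
    # filter). Each channel is independent, so they parallelize trivially.
    return samples - samples.mean()

if __name__ == "__main__":
    # Fake a 32-channel recording: one row of samples per channel.
    recording = np.random.randn(32, 100_000)
    with Pool() as pool:
        filtered = pool.map(filter_channel, recording)  # one channel per task
```

Each worker gets one channel, so an N-CPU box gives you close to N-fold speedup, modulo the cost of shipping the data out to the worker processes.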