So, this is our HPCC. It is very loud, very hot and very powerful.
We have a monitor, keyboard, and mouse in the middle, connected to two switches that give us access to all the nodes.
Pretty impressive specs, although I forget the exact specifications. I just know the nodes have 16GB of memory each, four quad-core processors, and some decent hard drives. At the bottom there is a UPS.
The problem with the UPS is that it had to be hardwired, which took a while for the electrician to come out and do. But then we found out that the UPS only supports 8000W of output, while the combined unit actually draws close to 15000W. You can see the problem there.
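To put numbers on that mismatch, here is a quick back-of-the-envelope check. The 8000W UPS rating and the roughly 15000W combined draw come from the paragraph above; everything else is just arithmetic.

```python
# Rough power-budget check for the cluster UPS described above.
# Both wattage figures are taken from the post; this just shows the gap.

UPS_CAPACITY_W = 8000    # rated continuous output of the UPS
CLUSTER_DRAW_W = 15000   # approximate combined draw of the racked gear

shortfall_w = CLUSTER_DRAW_W - UPS_CAPACITY_W
load_pct = CLUSTER_DRAW_W / UPS_CAPACITY_W * 100

print(f"Shortfall: {shortfall_w} W ({load_pct:.0f}% of UPS capacity)")
```

In other words, the cluster asks for nearly twice what the UPS can deliver, so the UPS can only ever carry part of the load.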
Now these pics were taken before the UPS was turned on.
This is the back of the unit. IT IS HOT!!
You'll notice the quite adequate below-floor cooling. We have two very powerful units blowing cold air under the floor, and very few rooms have openings into that space; this room is one of them.
The upper half: the head node is a 2950 and the other nodes are 1950s, all from Dell, of course.
The whole thing is networked over 1Gb/s full-duplex Ethernet, with jumbo frames enabled, usually.
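For anyone curious what enabling jumbo frames looks like on a Linux node, here is a minimal sketch. The interface name `eth0` and the common 9000-byte jumbo MTU are assumptions, not details from this post; check your NIC and switch documentation.

```shell
# Enable jumbo frames on one node (run as root).
# eth0 and the 9000-byte MTU are assumed values, not from the post.
ip link set dev eth0 mtu 9000

# Verify the new MTU took effect.
ip link show dev eth0 | grep mtu
```

Note that every device on the path (switch ports included) has to support the larger MTU, or you get silent fragmentation problems, which is probably why "usually" is doing some work in that sentence.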
That monitor is sweet, and very useful during installs, but the room is WAY TOO COLD for regular use.
The bottom half. We have the whole rack filled except for one U, but that's OK.
Hopefully this will be the first of many research compute clusters.
Oh, and its name, after some debate (apparently Locutus of Borg was inappropriate), is CESAR200, which as you might guess is an acronym for Computer Science Engineering Science and Arts Research Computer Cluster. What we did was take the trailing CC, read each C as the Roman numeral for 100, and make it 200. Sure, we might have done something else, but no one really complained, so it works.