There is a TPR BOINC Beta team

The SETI BOINC beta test site can be found at http://setiweb.ssl.berkeley.edu/beta/

We have a team (thanks to the Worm That Turned :thumbsup:), so feel free to join the two of us who already have, even if it’s only with a low resource share on one machine.

As usual, the disclaimer is that this is BETA software, blah blah blah, and credits probably won’t count in the wider world, but it’s your decision whether helping develop and test the software is worth that.

Team is at http://setiweb.ssl.berkeley.edu/beta/team_display.php?teamid=55

Joined, but it’s not giving out work at the moment :(

Has anyone seen one of these procs before?

http://setiweb.ssl.berkeley.edu/beta/show_host_detail.php?hostid=2037

From the technical news (the beta test uses the same servers as live):

September 8, 2005 - 17:00 UTC
We are now moving all the results from the upload/download file server onto a separate file system (directly attached to the upload/download server). We are copying as fast as we can - our early estimates show this will take about 24 hours. After that we will turn on all the backend processes and drain all the queues throughout the night. Tomorrow morning we will turn everything back on. Since the upload directories will be on local disks, and the download directories won’t be bogged down with upload traffic, we should see a vast improvement in performance.
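For anyone curious what that file move looks like in practice, here’s a rough sketch of the kind of bulk copy described above, done in Python with shutil. The paths are entirely made up and this is not Berkeley’s actual tooling, just an illustration of copying a result tree onto a new filesystem and checking the counts before the backend is switched back on.

```python
# Illustrative only: copy an upload/result tree onto a newly mounted
# filesystem while the services are offline. Paths are hypothetical;
# the project's real scripts are not public.
import shutil
from pathlib import Path

OLD_UPLOAD = Path("/old_fileserver/upload")   # hypothetical current location
NEW_UPLOAD = Path("/new_storage/upload")      # hypothetical directly attached disks

def copy_tree(src: Path, dst: Path) -> int:
    """Recursively copy src into dst, returning the number of files copied."""
    copied = 0
    for path in src.rglob("*"):
        target = dst / path.relative_to(src)
        if path.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        else:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # copy2 keeps timestamps/permissions
            copied += 1
    return copied

if __name__ == "__main__":
    n = copy_tree(OLD_UPLOAD, NEW_UPLOAD)
    print(f"Copied {n} files; re-enable the backend once both sides agree.")
```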

September 7, 2005 - 20:30 UTC
A temporary solution to our current woes is at hand. In fact, it’s already half implemented. During our regular weekly database-backup outage we dismantled the disk volume attached to our old replica database server (which hasn’t been in use for months) and attached it to the E3500 which is currently handling all the uploads/downloads. Right now a new 0.25TB RAID 10 filesystem is being created/synced. Should take about a day.

This space should be enough to hold the entire upload directory, but that’s all. Thus we are splitting the uploads and downloads onto two separate file servers, with the upload disks directly attached to the server that writes the result files.

When the system is ready, we estimate it will take about half a day to move the upload directories to the new location, during which all services will be offline. This may happen very soon.

Note that this is not a permanent fix, but something has to happen ASAP before a new client (or new hardware) arrives. We’d rather move both the upload and download directories to directly attached storage, but we currently don’t have the disk space available. And the disks we are going to use are old, with a potentially high failure rate (there are several hot spare disks in the RAID system). But we’re running out of space as the queues fail to drain, so we’re out of options.
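To put the bit in the Sept 7 update about building the RAID 10 volume in more familiar terms, on a Linux box you’d do something like the sketch below with mdadm. The E3500 is a Sun machine, so Berkeley will actually be using Solaris volume management, and the device names here are invented; treat this purely as an illustration of the “stripe over mirrors built from spare disks” idea.

```python
# Linux-flavoured illustration only: the actual server is a Sun E3500, and
# the device names below are made up. This just shows the general shape of
# building a RAID 10 array from spare disks and putting a filesystem on it.
import subprocess

SPARE_DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]  # hypothetical

def create_raid10(md_device: str = "/dev/md0") -> None:
    # Assemble the mirrored/striped array; the initial sync runs in the
    # background and can take many hours on old drives.
    subprocess.run(
        ["mdadm", "--create", md_device, "--level=10",
         f"--raid-devices={len(SPARE_DISKS)}", *SPARE_DISKS],
        check=True,
    )
    # Put a filesystem on the new array device.
    subprocess.run(["mkfs.ext3", md_device], check=True)

if __name__ == "__main__":
    create_raid10()
```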

Right, I didn’t realise it used the same backend as live.

September 9, 2005 - 17:00 UTC
(This is an update to a post made yesterday)
We are now moving all the results from the upload/download file server onto a separate file system (directly attached to the upload/download server). We are copying as fast as we can - our early estimates were a bit off. Now we see that the entire file transfer process will take about 48 hours, all told (it should finish Saturday morning, Pacific time). After that we will turn on all the backend processes and drain all the queues. Since the upload directories will be on local disks, and the download directories won’t be bogged down with upload traffic, we should see a vast improvement in performance.