Any spare GPU power available - Nvidia only

I’m starting to wonder a little about Folding@Home and Stanford, as so many things need to be done manually, like the V7 updates. Comms just seem to be lacking, and I’m not sure at this moment whether to turn my 2x460+955 Folding rig into a BOINC rig. As a team, we are in a steady decline in numbers and backers of the project.

BOINC + Folding + Skynet seem to be spreading us thin, and the crossover in science is much easier to compare now that more projects adopt BOINC. It’s becoming the “Google” of DC now, and I’m slightly in shock at the number of projects that choose not to be on the platform - they must be missing out in terms of computational power?

DT.

Agree.

To be honest, I am sticking with BOINC. It is a mature, no-nonsense, easy to install and maintain client that gives you access to a wide variety of projects and covers CPU and GPU crunching smoothly and easily, from a dual core to a 32-core beast.

I did use to fold many years ago, but I gave it up as it was just one big pain. In the olden days you had to eternally fiddle with command-line clients, then separate SMP clients, separate GPU clients - and all were a pain. Probably won’t ever go back now.

Skynet is interesting; I did fiddle about with it a bit, but it also seems to be a royal pain in the bum. Got more than one CPU? Tough. Got a GPU? Tough. Frankly, it was not worth the time. Also, both the installable client and the web page client had an adverse effect on my PC in terms of slowdown and mouse stuttering. I think Skynet has a long way to go.

Boinc all the way for me mate - if you turned your folding rig into a boinc rig I would be most pleased with you! :smiley:

Butuz

Hated BOINC when it came out, still hate it now. Credits via benchmark never made sense (fun watching the credit-chasing herds shop around for projects). Work time estimates are always out. It will somehow manage to swing from getting far too much work to not having any (hint: allow a per-subproject duration correction factor (DCF), not a single global value). But I still ended up preferring it over working out everything that needs to be done for standalone clients.
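To illustrate what I mean, here's a toy sketch in Python - not BOINC's actual code, and the run times and smoothing factor are made up - of why one shared correction factor whipsaws the cache when two projects mis-estimate in opposite directions:

```python
# Toy sketch (NOT BOINC's real scheduler): one global duration
# correction factor (DCF) vs. a per-project DCF.

estimates = {"ProjectA": 2.0, "ProjectB": 2.0}   # server estimates, hours
actual    = {"ProjectA": 6.0, "ProjectB": 0.5}   # what really happens

# Global DCF: every completed task drags the shared factor toward its
# own actual/estimate ratio, so A's tasks inflate B's predicted run
# times and B's tasks shrink A's - too much work, then none.
global_dcf = 1.0
for project in ("ProjectA", "ProjectB", "ProjectA", "ProjectB"):
    ratio = actual[project] / estimates[project]
    global_dcf += 0.1 * (ratio - global_dcf)     # smoothed update
    print(f"after {project}: global DCF = {global_dcf:.2f}")

# Per-project DCFs: each one converges on its own project's ratio,
# so the predicted run times stop swinging.
per_project_dcf = {p: 1.0 for p in estimates}
for project in ("ProjectA", "ProjectB", "ProjectA", "ProjectB"):
    ratio = actual[project] / estimates[project]
    per_project_dcf[project] += 0.1 * (ratio - per_project_dcf[project])
print(per_project_dcf)
```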

Decided I don’t like GPU projects though. Run too hot, and screen lag is unacceptable. Also with increasing applications supporting GPU assist, I’d rather prioritise that.

[QUOTE=mackerel;463717]
Decided I don’t like GPU projects though. Run too hot, and screen lag is unacceptable. Also with increasing applications supporting GPU assist, I’d rather prioritise that.[/QUOTE]

All my crunchers have on-board graphics, which I use, and I don’t even connect the Nvidia cards’ outputs, so I don’t suffer from screen lag.

wow - that unit that just completed had a results file upload of just shy of 40MB !!!

[3] NVIDIA GeForce GTX 460 (993MB) driver: 260.99 :smiley:

It’s not quite true - one is the 1GB version and two are 768MB versions, but all are running slight overclocks to 750MHz with fans at 100%, as it’s in the office I hardly ever work in (apart from today, on a Sunday!!)

Should give us a kick

Awesome work DT :trophy::tiphat:

Butuz

Been working this afternoon in the office. According to Nvidia they have a max temp of 104°C; there’s not much space between them, they’re 100% loaded (of course), and the top one is at 90°C, the middle 80°C, and the bottom 70°C.

Can you stick something really thin in between them to space them out a little bit and let some air in?

Butuz

I’ve pushed my EVGA 460 up to 800MHz and it seems quite happy at that speed, running at a temp of 66°C. :smiley:

I had mine up to around there, but had huge choke squeal from them :frowning: The MSI Cyclone 460 in my main machine has a higher “stock” clock than the Palits; both the 768MB and 1GB versions were, in my opinion, deliberately clocked low. It’s all down to the cooler on the 460.

[Picture 1](http://doubletop.org.uk/pics/aria/phenom955-buildlog/2012-04-29 16.03.55.jpg)
[Picture 2](http://doubletop.org.uk/pics/aria/phenom955-buildlog/2012-04-29 16.04.14.jpg)
[Picture 3](http://doubletop.org.uk/pics/aria/phenom955-buildlog/2012-04-29 16.04.27.jpg)

Considering the space, and the ambient room temp due to the other machines, the bottom GPU temp is about what I’d expect; it just can’t push the air out fast enough with them being so close. Before, I didn’t use the middle slot and left a space. They’ve been folding for months now :smiley:

DT.

The ventilation on my old skool Lian Li cases is two 80mm inlets and one 80mm exhaust, so I am quite chuffed my 460’s temp is so good. If you run all three of those 460’s on GPUGRID for a while, it won’t be long before you catch me.

I might look for another EVGA 460 after I get back from our holiday, funds permitting, as I have a slot going spare. :Plot:

Is it just me, or are these units REALLY unstable?

Many many cycles wasted now, on a GPU that will sit and crunch Folding all day long without issue. Anyone care to peek through the results on my machines?

Both rigs on latest drivers.

DT.

Hmm, I think I’ve skim-read things on the GPUGRID forum about not using the latest drivers, or having trouble with new drivers?

Will try and find it.

Butuz

http://setiathome.berkeley.edu/results.php?hostid=5873209&offset=0&show_names=0&state=5&appid=

Put it on SETI GPU, same thing - some results are returning fine and then we have these three? Is it normal BOINC GPU behaviour for units to crash, or could I have an issue?

DT.

From what I know about it, if you are using the latest WHQL drivers you will get failures. Supposedly, BOINC 7 will refuse to run GPU tasks if the drivers are the 29* series. I’ve also heard the 30* beta drivers fix the problems.

Thanks Egad, I’ll have a go tonight on the home rig - better not mess with my gaming though!!

My GPUGRID cruncher is running XP, BOINC 6.10.60, and driver 280.26, and it very rarely gets compute errors…

My attitude is “If it ain’t broke then don’t fix it”. :smiley:

The GPUGRID message board is usually a good source of information for any errors that come up; I tend to keep an eye on it daily. They’re usually quick to let you know if there are problems with the latest drivers and stuff, or if they have released some dodgy WUs.

One bit of advice for running GPUGRID is to set your cache as low as possible… I run mine at 0.01 days, as it maximises the chance of getting the 24hr bonus with long units. And when I want to stock up on Docking units, I set GPUGRID to “no new tasks” while I fill up with a few days’ worth of Docking WUs.
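If you’d rather do it in the client than on the project website, something like this should work - a minimal sketch, assuming your BOINC client reads a global_prefs_override.xml from its data directory (recent versions do):

```xml
<!-- global_prefs_override.xml - keep the work cache tiny for GPUGRID.
     work_buf_min_days is the minimum cache in days;
     work_buf_additional_days is extra buffer on top of that. -->
<global_preferences>
   <work_buf_min_days>0.01</work_buf_min_days>
   <work_buf_additional_days>0.00</work_buf_additional_days>
</global_preferences>
```

If I remember right, you then either restart the client or use “Read local prefs file” in the manager’s Advanced menu for it to take effect.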

I have the same issue DT has… my CUDA card is in my gaming rig (of course!) and I need the latest drivers to support my gaming addiction, so I can’t downrev to the 28* drivers. I might throw the 30* beta driver in there and see if anything catches fire.