This CUDA thingy

a.) Has anyone got a list of what NVIDIA cards crunch at what speed, i.e. GPU v GPU?

b.) Ditto the above, comparing CPU v GPU?

Gonna see what I can ramp things up to. I haven't bothered in a year, and although some machines still seem to be crunching, a large number of the PCs have since been replaced.

a) Not that I know of. I suppose I could work it out by going through all the WUs my GPUs have crunched and totting up the wall time, but that would be a LOT of work. I wouldn't do it unless I was paid to.

As a first approximation, multiply the number of shaders by their clock speed. That should give a reasonable estimate of relative speed.
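Something like this back-of-the-envelope sketch is all I mean (the shader counts and clocks below are illustrative figures only - plug in the real specs for your own cards):

```python
# Back-of-the-envelope GPU comparison: shaders x shader clock.
# The figures below are illustrative only - substitute the real
# specs of the cards you actually own.
cards = {
    "8400 GS":    (16, 900),    # (shader count, shader clock in MHz)
    "9800 GT":    (112, 1500),
    "8800 Ultra": (128, 1512),
}

baseline = "8400 GS"
base_score = cards[baseline][0] * cards[baseline][1]

for name, (shaders, clock_mhz) in cards.items():
    score = shaders * clock_mhz
    print(f"{name:11s}  score {score:7d}  ~{score / base_score:.1f}x an {baseline}")
```

It obviously ignores memory bandwidth and architecture differences, but as a first cut it ranks cards sensibly.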

b) GPU is much faster but the applications are limited.

I guess if you had a dig around the SETI 'Number Crunching' boards (open BOINC Manager, click on a unit, look left, click Message Boards, then Number Crunching, then Search - that's the lazy way :slight_smile: ), someone there might have one.

But I suspect there are so many variables to take into account that it's hard to compare. Stuff like unit size, CPU and GPU combo, the angle range of the unit, overclocking, whether you're running AP and/or MB and other projects, not forgetting the version of BOINC you run or whether it's the standard or optimised apps.

It's dashed involved, ol' chap. Then you have pending credit and suchlike.

The only tip I can give with CUDA is that tasks actually take longer than the time displayed in the BOINC Manager suggests, because it shows CPU time and not CUDA time - the app uses both the CPU and CUDA, and I'm not sure it's possible to read the GPU time.

The only thing I'm sure about is that GPU time sometimes depends on the angle range.

I’d go for the best kit you can afford to throw at it, sit back and enjoy :slight_smile:

It would be an idea to make a new venue, move your GPU crunchers into it, and then switch off Astropulse for that venue. The reason is that GPU crunching messes about with your result duration correction factor, e.g. 8800 GTS RDC = 11.7, 8800 Ultra RDC = 85.88. You can imagine what those do to the estimated completion times for Astropulse: the machine thinks it is massively overworked and then won't download tasks, so no GPU multibeam units will be given out, which kinda makes it pointless. See if you can guess which two machines in the screenshot have CUDA cards :furious:
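To see why a high correction factor kills work fetch, this is roughly the sum the client does (a simplified sketch only - the Astropulse task size and benchmark speed are made-up numbers purely for illustration):

```python
# Simplified sketch of how the BOINC client of this era turns a task's
# size estimate into a runtime estimate.  The task size and benchmark
# speed below are made-up numbers, purely for illustration.
def estimated_hours(rsc_fpops_est, host_flops, dcf):
    """rsc_fpops_est: project's estimate of the task's floating-point ops.
    host_flops:     host speed from the CPU benchmarks.
    dcf:            the (result) duration correction factor."""
    return rsc_fpops_est / host_flops * dcf / 3600.0

ap_task   = 1.5e15  # hypothetical Astropulse task
cpu_speed = 3.0e9   # hypothetical benchmark result, ~3 GFLOPS

for dcf in (1.0, 11.7, 85.88):
    print(f"RDC {dcf:6.2f} -> estimated {estimated_hours(ap_task, cpu_speed, dcf):9.1f} hours")
```

With the factor up around 86 the client thinks a single Astropulse task needs the best part of a year and a half of wall time, decides it is hopelessly overcommitted, and stops asking for work - exactly the behaviour described above.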

Oh, and CUDA does not preempt, so be aware that graphics performance on the machine will drop massively when CUDA is running.

Well, I've ordered an 8400 GS to test with; then I'll order something a bit better.

Any idea if you can use one graphics card just for CUDA and another for the display?

The easiest way is to have a non-CUDA card driving the display.

That was kinda what I was trying to ask, cheerz :smiley:

I’ve done a lot of testing. The best bang-for-the-buck is the 9800 GT. Pair it with a heavily overclocked quad core CPU (Intel based) and you have a monster cruncher.

Yep…this is necessary if you want to use your cruncher for something other than crunching at the same time it is crunching. Hey…lotsa crunching in that sentence.:sigh::smiley:

I have a few hundred PCs, I ain't spending £100 a card :sigh:

FYI,

The best and simplest method of determining actual GPU performance compared to the CPU is probably to look at the message log. It records the actual time each task started and the time it finished, not the CPU time as reported elsewhere.
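If we do, something along these lines would do the legwork (a rough sketch only - it assumes your saved message log looks roughly like the BOINC 6.x lines shown in the comments, so adjust the patterns and file name to whatever your client actually writes):

```python
# Rough sketch: pull wall-clock time per task out of a saved BOINC message log.
# Assumes log lines shaped roughly like the BOINC 6.x client writes, e.g.
#   05-Jan-2009 10:00:00 [SETI@home] Starting task 03mr09aa.1234_2 using setiathome_enhanced version 608
#   05-Jan-2009 10:42:10 [SETI@home] Computation for task 03mr09aa.1234_2 finished
# Adjust the patterns and date format if your client logs differently.
# Note: this counts elapsed wall time, so preemption/restarts inflate the figure.
import re
from datetime import datetime

START  = re.compile(r"^(\S+ \S+) \[.*\] Starting task (\S+)")
FINISH = re.compile(r"^(\S+ \S+) \[.*\] Computation for task (\S+) finished")
FMT = "%d-%b-%Y %H:%M:%S"

def wall_times(log_path):
    started = {}
    with open(log_path) as log:
        for line in log:
            m = START.match(line)
            if m:
                started[m.group(2)] = datetime.strptime(m.group(1), FMT)
                continue
            m = FINISH.match(line)
            if m and m.group(2) in started:
                finish = datetime.strptime(m.group(1), FMT)
                yield m.group(2), finish - started.pop(m.group(2))

if __name__ == "__main__":
    for task, elapsed in wall_times("stdoutdae.txt"):   # path is an assumption
        print(f"{task}  {elapsed}")
```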

Maybe we could start a bit of a stats session to gain the figures?

Gandelf

When you get your 8400, download the latest driver and accept CUDA work units.

Off topic: you finally got off OGame and back to SETI :slight_smile:

The question you posed was about video card performance. You said nothing about needing hundreds of them. You’re sort of an idiot, are you not?:kickbum:

If your question was not to determine what the best performance for the money is, what is the reason for the question?:cuckoo:

Surely a big man like you with all those computers can afford a few decent video cards. That 8400 is not worth the money, because its power/pound ratio is much lower than a 9800's.

I love it when monkeys ask for help and then spit on those stupid enough to try. Give us a kiss mate!:moon:

RP - once again you've stepped over the line. Sarcastic or not, calling another forum user what you did in your post is not acceptable. That will be the last time you do so.

DT.

Sorry, I wasn't being sarky, I just phrased that wrong. What I meant was: use a non-CUDA card to drive the display whatever the status of BOINC, and another card (which will still be recognised by the OS) with no display connected to run your CUDA apps. That way you won't get the display interference, and you can run flat out all the time.

If anyone is planning investment in video cards purely for CUDA, then this list at http://www.gpugrid.net/forum_thread.php?id=316 shows cards which work with CUDA 2.0. At the moment SETI uses CUDA 1 and I have no knowledge of a planned move to 2, but for investment protection it's probably worth a read. Neither of mine will work :frowning: but then I didn't buy them purely for crunching, so I guess no biggie really.

[QUOTE=DoubleTop;434958]RP - once again you've stepped over the line. Sarcastic or not, calling another forum user what you did in your post is not acceptable. That will be the last time you do so.

DT.[/QUOTE]

Whoops, he's gone again…

:rotate:

[QUOTE=Mojo;434976]Sorry, I wasn't being sarky, I just phrased that wrong. What I meant was: use a non-CUDA card to drive the display whatever the status of BOINC, and another card (which will still be recognised by the OS) with no display connected to run your CUDA apps. That way you won't get the display interference, and you can run flat out all the time.

If anyone is planning investment in video cards purely for CUDA, then this list at http://www.gpugrid.net/forum_thread.php?id=316 shows cards which work with CUDA 2.0. At the moment SETI uses CUDA 1 and I have no knowledge of a planned move to 2, but for investment protection it's probably worth a read. Neither of mine will work :frowning: but then I didn't buy them purely for crunching, so I guess no biggie really.[/QUOTE]

Wasn't taken as being sarky m8, I just didn't explain what I was trying to say well enough.

The card I was planning on using is on the second list, so as long as it works OK that's job done for me, at least. Not all of the PCs I run have PCI-e slots; because most are HP business PCs, they only have the "special HP graphics slot", which is a bit picky about what cards it likes.
But I do have a fair number of PCs that I can slowly add £20 graphics cards to, so hopefully a reasonably small company investment should pay dividends.

Edit: I didn't get to read Phenom's comment; those of you who know me may decide that was probably the best outcome :-p

It will. Just watching this machine running two normal multibeam tasks and a CUDA unit, it's no slouch: after 10 minutes it has finished around 20% of the multibeams (S@H 6.03 stock app). The CUDA task is 55% complete in under 3 minutes, and trust me, that is a slow one. Even a low-power CUDA card should therefore add a relatively large amount of processing power to a machine whilst leaving enough grunt to run normal BOINC tasks or, god forbid, user processing :wink:

Edit: while I typed that it went to 85% in 3:47; the overhead is less than 20% of (total) CPU.
Edit 2: complete in 3:48 - http://setiathome.berkeley.edu/result.php?resultid=1182159847
Edit 3: its little brother, the 8800 GTS, has just put a unit into S@H Beta in 1:52. It has 45 CUDA units in the queue; those will take under 3 hours to complete. Staggering, isn't it? I have an 8600 GT on the way - it looked like a good bang/buck combination - and I will post some results for it next week if anyone is interested.

The card should be here today, so I should be able to post at least an indication today.

[QUOTE=Gandelf;434952]FYI,

The best and simplest method of determining actual GPU performance compared to the CPU is probably to look at the message log. It records the actual time each task started and the time it finished, not the CPU time as reported elsewhere.

Maybe we could start a bit of a stats session to gain the figures?

Gandelf[/QUOTE]

Decide which information is required and post it; I'll certainly put some results in for ya :nod: