Anandtech vs Tom's Hardware Folding@Home Coronavirus Race thread


blckgrffn

Diamond Member
May 1, 2003
9,145
3,086
136
www.teamjuchems.com
Do it! I'm kinda glad I took the plunge myself. I game at 4K, and the 2070 Super was okay if you turned down the IQ settings. With the 2080 Ti, this thing can run a solid 60 FPS between high and ultra IQ. I'm actually surprised at how well my 2600K has held up, but then again, at 4K the GPU is the bottleneck the vast majority of the time.

On another note, my 2080 Ti downclocked itself and was running at 1350 MHz for a good part of today, so it will take a hit on PPD. I had to reboot the computer to resolve the low clock.

Ha! A 2600K and a 2080 Ti! Magnificent!

I need to update my sig, but I've moved on from my hexacore Sandy Bridge... to a hexacore Ryzen 3. I really have only played Borderlands 3 on it (we don't need to say how many hours), but wow is it so much smoother, and the lack of FPS dips and stutters is amazing, not to mention the load times are cut way down between the new hardware and the NVMe drive. The 5700 XT didn't change, but my perception of its 2K performance certainly did.
 

ZipSpeed

Golden Member
Aug 13, 2007
1,302
169
106
Ha! A 2600K and a 2080 Ti! Magnificent!

I need to update my sig, but I've moved on from my hexacore Sandy Bridge... to a hexacore Ryzen 3. I really have only played Borderlands 3 on it (we don't need to say how many hours), but wow is it so much smoother, and the lack of FPS dips and stutters is amazing, not to mention the load times are cut way down between the new hardware and the NVMe drive. The 5700 XT didn't change, but my perception of its 2K performance certainly did.

Yeah, the framerate dips are definitely there, but they don't bother me as much as I thought they would. I have a 3700X build all specced out, but with all the uncertainty in our world and the fact that I blew this year's budget on the 2080 Ti, I will hold off on any upgrades for now. Maybe this holiday season if things improve.
 
Reactions: blckgrffn

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Personally, I think we are close enough to the 3000 series to wait until fall and get one of those. I'll be upgrading CPUs and playing with some cheaper hardware in the meantime.

Yeah, this is more of what I wanna do; I originally had stuff planned around a 3900X upgrade. That and a 1 TB NVMe.
 
Reactions: 1979Damian

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
I own 2 houses outright, no mortgage; one is a rental.
Home solar cost 24k for one house, and it wouldn't cover more than $100 a month of my electric. CPUs are now taking some heat; the EPYC boxes do get a little warm (64 cores/128 threads), but a 1070 Ti produces as much heat as those.

Now if I had an acre or two for a solar farm, that's a different story.
Have you ever looked into whether you could harvest the heat to, say, partially heat your hot water? That would also reduce the heat load in your house; maybe something as relatively 'simple' as passing the hot air through a radiator (I'm thinking car style here, not house) in the water supply to the hot water tank/boiler etc. Quite how you would duct all, or even a large portion, of that hot air into one place is another matter though! Unless the waste heat is already ducted??

Btw, that's an awesome thing you're doing for the team (& F@H), but we don't really want you to cook yourself or bust your budget this month!
I'd say shut off at least a couple of rigs. I'd offer to fire up my 2nd rig in trade for shutting one of yours down, but its GPU is ancient (see sig; its CPU isn't too bad though) and would be nowhere near the performance of any of yours, I'd imagine! (However, if you are running an ancient rig, lmk!)
 
Reactions: 1979Damian

sswingle

Diamond Member
Mar 2, 2000
7,183
45
91
A certain drop in daily production for both teams, but also for other teams as far as I have seen. I am not sure if this is due to work supply or just lagging stats exports again.

I'm guessing work supply. Mine were running solid for a while, and then late yesterday/early today I've had to pause and restart because they were up to waiting 30 minutes to retry for work. Both are set to the advanced client type as well.
 

1979Damian

Junior Member
Apr 4, 2020
5
10
41
I'm guessing work supply. Mine were running solid for a while, and then late yesterday/early today I've had to pause and restart because they were up to waiting 30 minutes to retry for work. Both are set to the advanced client type as well.

Same here.
 
Reactions: NesuD

StefanR5R

Elite Member
Dec 10, 2016
5,598
8,025
136
Thanks @sswingle and @1979Damian, it's hard for me to tell because I have had reasonable work supply myself. I have a script running which prints the current state of each of my clients every 10 minutes. Now that you mention recent supply issues, I scrolled back through the log but didn't see anything conclusive.
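
(For anyone curious, the idea is roughly the following. This is just a sketch in Python against FAHClient's command interface on port 36330, with example host names; it is not my script verbatim, and the interface is localhost-only unless remote access is configured.)

Code:
import socket, time

HOSTS = ["cpu-only1", "gpu-only1"]          # example host names

def query(host, command):
    # FAHClient exposes a telnet-style command interface on port 36330
    with socket.create_connection((host, 36330), timeout=10) as s:
        s.recv(4096)                        # discard the greeting banner
        s.sendall((command + "\n").encode())
        time.sleep(1)
        return s.recv(65536).decode(errors="replace")

while True:
    print(time.ctime())
    for host in HOSTS:
        # "queue-info" reports per-WU state, percent done, ETA and PPD
        print(host, query(host, "queue-info"))
    time.sleep(600)                         # every 10 minutes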

I am using the pause-and-restart method too. Actually, not personally, since I have a trained monkey doing it. ;-) The monkey needs to press that button very very rarely though.

------------

3 of my 9 GPUs are out of service now: the triple-GPU computer shut itself off. On power-on, the BIOS complains about CPU overtemperature and lets me enter the BIOS setup (where it shows 89 °C CPU temperature), but doesn't let me boot into the OS. So I can't even clear its current F@H and Rosetta work caches, except perhaps by plugging the disk into another computer.

The pump is working, the water is cool, so I suspect the CPU waterblock is blocked with old dirt in the loop. I will have to take it apart and open the CPU and GPU waterblocks to check.

Couldn't this have happened before the weekend rather than on Sunday evening?
To add to this, I have got four watercooled PCs, all left without maintenance since winter 2017/2018 except for blowing dust out now and then. The main difference of the 3-GPU computer vs. the other ones is that it contains components of my first watercooled PC from 2016. That one had a modular all-in-one cooler, which had sealant on the fittings put on by the manufacturer. The sealant had already worked itself into the loop some time in 2017, and my suspicion is that there are still remains of it somewhere in this loop. What I am trying to say is that watercooling can work reliably and even maintenance-free until the computing components become obsolete, but I took a risk by re-using components from an earlier failed loop.
Today being a holiday, I disassembled the loop and opened each of the four waterblocks. The CPU block, which has narrower fins than the GPU blocks, was completely blocked by gunk, as I expected. The first and second GPU blocks were almost clean. The third GPU block had a lot of gunk in it. That blob of gunk almost jumped at me the moment I opened the waterblock. =:-O

Incidentally, the third GPU has always been hotter than the first two. So far I attributed this to the fact that this GPU sits at the warmest spot in this serial loop (reservoir -> pump -> CPU block -> GPU 1 block -> 280 mm radiator -> GPU 2 block -> GPU 3 block -> quick disconnect fitting -> MORA-360 radiator -> quick disconnect fitting -> reservoir), and to the single pump having a hard time keeping up a high enough flow rate in this large loop for more even temperatures. This was certainly true, but in hindsight not the only reason.

Now, with the Xeon E5 v4 CPU running Rosetta on 31 threads, and the three GPUs running FahCore_22 at a reduced board power target of 200 W each, temperatures are CPU: 41...46 °C, GPU1: 42 °C, GPU2: 43 °C, GPU3: 54 °C, which is about 10 °C lower than before the cleaning for CPU/GPU1/GPU2, and about 20 °C lower for GPU3, if my memory doesn't play tricks on me.
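
(Side note on the 200 W target: on Nvidia cards a board power limit can be set with nvidia-smi, roughly as in the sketch below. It needs root rights, the indices depend on the system, and this is a generic example rather than my exact setup.)

Code:
import subprocess

POWER_LIMIT_W = 200      # board power target per card
GPU_INDICES = [0, 1, 2]  # indices of the GPUs that are folding

for idx in GPU_INDICES:
    # "nvidia-smi -i <index> -pl <watts>" sets the board power limit;
    # the value must lie within the card's allowed min/max range
    subprocess.run(
        ["nvidia-smi", "-i", str(idx), "-pl", str(POWER_LIMIT_W)],
        check=True,
    )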
 

NesuD

Diamond Member
Oct 9, 1999
4,999
106
106
No work here. No Work No Production.

StefanR5R

Elite Member
Dec 10, 2016
5,598
8,025
136
No work here. No Work No Production.
In case you haven't already seen it buried somewhere in this thread: if the client doesn't get work on a request, it retries after a period which gets longer after each unsuccessful request, meaning the number of requests in a given time frame gets fewer and fewer. In the advanced control (FAHControl), the retry period can be seen when you click on the work unit which is in the "downloading" state, on the right as "Next Attempt".

If the retry period has gotten rather long, click "pause" and then "fold", either for the entire host or, via right-click, just for the idle slot. Then the retry period goes back to something short again... for a while at least.

On one of the first days of the race, I* counted that I played this game on average every 7 hours per slot, with 9 slots in operation. But now I* do this maybe 10 times a day or fewer, all of my folding slots taken together.

*) Not really I, but my trained monkey.
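
(If someone wants to replace the monkey with software: the same command interface that FAHControl uses on port 36330 also accepts "pause" and "unpause". A crude sketch with an example host name, not necessarily what the monkey runs:)

Code:
import socket, time

def send(host, command):
    # FAHClient's command interface; localhost only unless remote access is configured
    with socket.create_connection((host, 36330), timeout=10) as s:
        s.recv(4096)                        # discard the greeting banner
        s.sendall((command + "\n").encode())
        time.sleep(1)

def kick(host):
    # pausing and unpausing resets the "Next Attempt" back-off to something short
    send(host, "pause")
    send(host, "unpause")

kick("cpu-only1")                           # example: kick one host once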
 
Last edited:

Soulkeeper

Diamond Member
Nov 23, 2001
6,712
142
106
What is the Radeon vs. Nvidia performance situation?
I can't seem to find a good benchmark chart comparing all the current cards on the market.
I'm guessing that because they are partnered with Nvidia, their software is mostly optimized for Nvidia.
 

Endgame124

Senior member
Feb 11, 2008
955
669
136
What is the Radeon vs. Nvidia performance situation?
I can't seem to find a good benchmark chart comparing all the current cards on the market.
I'm guessing that because they are partnered with Nvidia, their software is mostly optimized for Nvidia.
This is the best resource I know of:


My 1080 Ti results are better than listed there, I think, but I'm not sure I get consistent enough work to prove it.
 
Reactions: 1979Damian

StefanR5R

Elite Member
Dec 10, 2016
5,598
8,025
136
(ninja'd by @Endgame124)
What is the Radeon vs. Nvidia performance situation?
I can't seem to find a good benchmark chart comparing all the current cards on the market.
Here is a chart with several popular GPUs.


My 1080 Ti results are better than listed there, I think, but I'm not sure I get consistent enough work to prove it.
See the footnotes on the sheet: FahCore_22 performs better than FahCore_21, but the sheet still reflects FahCore_21 results.
 

StefanR5R

Elite Member
Dec 10, 2016
5,598
8,025
136
Not even sure if folding could honestly push 32 threads?
It can honestly push 128 threads. Maybe more, but I don't have more to try. B-)
Aim for a billion lead as a team goal?
I have now switched my GPU-less computers back to Folding@home. I had them running Rosetta, then TN-Grid, for a while. Right now, at ~10...20% completion, I am seeing quite good estimated PPD, much better than on Sunday when I last tried it:

56-thread slot: ~0.9 M
88 threads: ~1.1 M combined (partitioned into a 64-thread slot + 24-thread slot)
128-thread slot: ~3.2 M :-O

Code:
Fri Apr 10 23:00:44 2020
cpu-only1
        slot 00 unit 00: RUNNING, 11.26% done, ETA 01:08:00, 930 k ppd
        = 930 k ppd
cpu-only2
        slot 00 unit 00: RUNNING, 11.26% done, ETA 01:08:00, 931 k ppd
        = 931 k ppd
cpu-only3
        slot 01 unit 00: RUNNING, 6.10% done, ETA 02:59:00, 237 k ppd
        slot 00 unit 01: RUNNING, 10.29% done, ETA 01:11:00, 874 k ppd
        = 1.11 M ppd
cpu-only4
        slot 00 unit 00: RUNNING, 10.68% done, ETA 01:09:00, 901 k ppd
        slot 01 unit 01: RUNNING, 4.33% done, ETA 03:04:00, 233 k ppd
        = 1.13 M ppd
cpu-only5
        slot 00 unit 00: RUNNING, 23.00% done, ETA 00:25:40, 3.24 M ppd
        = 3.24 M ppd
gpu-only1
        slot 01 unit 00: RUNNING, 84.70% done, ETA 00:18:37, 1.45 M ppd
        slot 00 unit 02: RUNNING, 12.26% done, ETA 02:33:00, 1.46 M ppd
        = 2.90 M ppd
gpu-only2
        slot 01 unit 00: RUNNING, 12.76% done, ETA 01:43:00, 1.59 M ppd
        slot 00 unit 01: RUNNING, 12.42% done, ETA 02:30:00, 1.51 M ppd
        = 3.10 M ppd
gpu-only3
        slot 01 unit 00: RUNNING, 87.13% done, ETA 00:15:14, 1.47 M ppd
        slot 00 unit 02: RUNNING, 2.58% done, ETA 02:45:00, 1.54 M ppd
        = 3.01 M ppd
gpu-only4
        slot 01 unit 03: RUNNING, 96.08% done, ETA 00:06:59, 1.38 M ppd
        slot 00 unit 01: RUNNING, 45.35% done, ETA 01:37:00, 1.43 M ppd
        slot 02 unit 04: RUNNING, 60.03% done, ETA 00:51:58, 1.27 M ppd
        slot 01 unit 00: READY
        = 4.08 M ppd

= 20.44 M ppd, 0 dl, 16 run, 0 ul
That's with ~4.2 kW drawn from the wall.
 
Last edited:

Endgame124

Senior member
Feb 11, 2008
955
669
136
My 1660 Super arrived today. Will install tonight and then measure at stock for a while (probably the rest of the race with Tom’s). Then I’ll start playing with clock speeds and efficiency.
 
Reactions: VirtualLarry

Soulkeeper

Diamond Member
Nov 23, 2001
6,712
142
106
Vs. the Radeons, it looks like Nvidia gets a 25-50% boost with folding compared to some other projects.
Just going by the temps/fan, my Radeon is not being pushed very hard.
 
Reactions: Assimilator1

IEC

Elite Member
Super Moderator
Jun 10, 2004
14,345
4,966
136
Vs. the Radeons, it looks like Nvidia gets a 25-50% boost with folding compared to some other projects.
Just going by the temps/fan, my Radeon is not being pushed very hard.

My Radeon VII varies wildly in estimated PPD depending on the WU (800K-2000K+). I chose to underclock and undervolt to 900 mV / 1600 MHz max core clock vs. stock 1084 mV / 1800 MHz. This seems to produce closer to 1000-1200K PPD on average at ~95 W power consumption while running at 45 °C (+20 °C over ambient).
 

Endgame124

Senior member
Feb 11, 2008
955
669
136
My 1660 Super arrived today. Will install tonight and then measure at stock for a while (probably the rest of the race with Tom’s). Then I’ll start playing with clock speeds and efficiency.
Figures, I got it installed and ready to go and... no work, even with the client type set to advanced.
 

StefanR5R

Elite Member
Dec 10, 2016
5,598
8,025
136
I have now switched my GPU-less computers back to Folding@home. [...] I am seeing quite good estimated PPD, much better than on Sunday when I last tried it:

56-thread slot: ~0.9 M
88 threads: ~1.1 M combined (partitioned into a 64-thread slot + 24-thread slot)
128-thread slot: ~3.2 M :-O
It's notably lower during the last few hours, but still much better than last Sunday:
56-thread slot: ~0.5...0.9 M
64+24 thread slots: ~0.8...0.9 M combined
128-thread slot: ~1.5...2.6 M

I also had the 128-thread slot fail just now because a WU could not be scaled that large ("domain decomposition" not compatible); I had to reduce the slot to 96 threads temporarily to get it going. This 96-thread slot alone on the 128-thread computer gave 1.0 M PPD.
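
(For anyone who runs into the same "domain decomposition" complaint: the thread count of a CPU slot can be lowered via the cpus slot option, either in FAHControl's slot configuration or in config.xml. The snippet below is just a sketch of the relevant part; the slot id and the rest of the file depend on the setup.)

Code:
<config>
  <!-- example CPU slot; lower 'cpus' if a WU refuses to decompose
       onto the full thread count -->
  <slot id='0' type='CPU'>
    <cpus v='96'/>
  </slot>
</config>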
 
Last edited:

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Well, thanks to a little help in the GPU area, I was able to get my HD 630 working for desktop/light gaming, and it's working like a dream while the 1080 Ti chugs away. It was as simple as enabling onboard as default, shutting down, sticking the HDMI cable in the back of the board, and after one more reboot (because apparently it's 1995 again and EVERYTHING requires a reboot) I was up and running.

The GPU looks a bit more pegged, though still not maxed; I wonder if that's program- or CPU-bottleneck based?

Edit: the BIOS for this Z270 is looney tunes; there's no reason in the world why it applied a self-OC of 4.5 GHz. I put the BIOS back to stock after the last fiasco trying to overclock, and there isn't even a 4.5 GHz OC setting. It's got a mind of its own. Maybe it's powered by Christine inside? Cause I am so lost right now, even Google can't find me. It's stable......
 
Last edited:
Reactions: biodoc

mopardude87

Diamond Member
Oct 22, 2018
3,348
1,575
96
Since I switched over to my HD 630, I get a very small PPD bump folding on the 7700K with the 1080 Ti. According to the client, I get about 1k more now doing both vs. not. Not sure how accurate those numbers are; GPU usage is staying more or less the same as before, lots of micro dips but nothing too serious.
 

Ionstream

Member
Nov 19, 2016
55
24
51
It's notably lower during the last few hours, but still much better than last Sunday:
56-thread slot: ~0.5...0.9 M
64+24 thread slots: ~0.8...0.9 M combined
128-thread slot: ~1.5...2.6 M

I also had the 128-thread slot fail just now because a WU could not be scaled that large ("domain decomposition" not compatible); I had to reduce the slot to 96 threads temporarily to get it going. This 96-thread slot alone on the 128-thread computer gave 1.0 M PPD.

I think one of my recent WUs also encountered this error, and my setup is nowhere near your 128 threads. Those scores though, good lord!
 
Reactions: 1979Damian

Assimilator1

Elite Member
Nov 4, 1999
24,120
507
126
Is there any way to get F@H to alert you to hardware or overclocking errors?

A couple of days ago I hadn't realised that 3 WUs (not in succession) had aborted early due to this (I'd undervolted, and since then underclocked, my RX 580 to cut power & heat). It was at 1350 MHz @ 1025 mV a couple of days ago, and then I upped the GPU vcore to 1050 mV, which gave a single error (which it recovered from) this morning; I've now underclocked it to 1325 MHz, so hopefully it should be OK now. But I'd like to know for sure, at least from F@H's POV!
JFYI, dropping from 1150 mV (default) to 1050 mV cut system power draw by about 26 W & GPU temp by ~8 °C!

08:46:49:WU00:FS01:0x22:Completed 3120000 out of 4000000 steps (78%)
08:46:58:WU00:FS01:0x22:Bad State detected... attempting to resume from last good checkpoint. Is your system overclocked?
08:46:58:WU00:FS01:0x22:Following exception occured: Particle coordinate is nan
08:49:27:WU00:FS01:0x22:Completed 3160000 out of 4000000 steps (79%)


What does 'nan' mean again, btw?
What happens to the aborted WUs, btw, do they get resent? (I noticed the log flagged them as 'bad work unit'!)
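
(In the meantime I might just script something to watch the log for those lines. A rough sketch; the log path and the exact strings are guesses based on the snippet above:)

Code:
import time

LOGFILE = "log.txt"   # the client's log file; the actual path depends on the install
ALERT_STRINGS = ("bad state detected", "bad work unit", "particle coordinate is nan")

def follow(path):
    # yield lines as they are appended to the log, tail -f style
    with open(path, "r", errors="replace") as f:
        f.seek(0, 2)                        # start at the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(5)
                continue
            yield line

for line in follow(LOGFILE):
    if any(s in line.lower() for s in ALERT_STRINGS):
        print("ALERT:", line.strip())       # or hook up an email/notification here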
 
Last edited: