Scheduled Maintenance Concluded
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
Like what? I have a 600W PSU on the XP machine but limited cooling. The Win10 machine is an HP Slimline requiring a half-height board and only has a 350W PSU. See my previous post for Titan Black performance. If left running, this will likely place my host PC within the top 5 performing PCs crunching for MW@H. Cheers.
Joined: 30 Apr 14 · Posts: 67 · Credit: 160,674,488 · RAC: 0
Cool. My single Radeon 280X can crunch up to 380k-400k per day, running 4 WUs at the same time. Times are around 110-130s, so 22-26s per WU.

As for the earlier report ("Updated to 6 MW@H 1.43 WU bundles running per GPU, 12 total. Also allocated more CPU headroom so WUs are finishing about 3:28 (88s) total. GPUs are loaded up to around 93%, VRAM up to around 3668MB (59% on SLIed Titan Black cards, so the memory usage is double from being mirrored between cards – expect roughly half this on independent cards)."): 3:28 is 208s, not 88s. 208 / 6 (single-card performance) = ~35s per WU. So it would appear the Titan Black gets around 70% of the R280X's performance in MW@H.
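For what it's worth, here's that comparison spelled out as a rough calculation (just an illustration using the times quoted above; the variable names are mine):

```python
# Rough per-WU throughput comparison from the numbers quoted above (illustrative only).
r280x_per_wu = (22 + 26) / 2            # ~24 s per WU quoted for the R280X

titan_bundle_time = 3 * 60 + 28          # 3:28 = 208 s with 6 tasks running concurrently per card
titan_per_wu = titan_bundle_time / 6     # ~35 s per WU

relative = r280x_per_wu / titan_per_wu   # Titan Black throughput as a fraction of the R280X's
print(f"R280X ~{r280x_per_wu:.0f}s/WU, Titan Black ~{titan_per_wu:.0f}s/WU, ratio ~{relative:.0%}")
```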
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
Yeah, sorry, I was thinking 1 minute instead of 3 for some reason. I blame it on not being awake enough, at least that's my excuse. Either way, this PC was within the top 5 not too long ago. (Edit: And of course it's too late to edit those posts now...) Also, I recall others recommending running 8 MW@H tasks per GPU on Titan Black cards so that they're kept more consistently loaded, but I personally don't want to push mine that hard. I seem to be getting some computation errors now though, so I'm going to try dialing it back down to 5 tasks per GPU to see if that fixes it. We'll see. I've typically been running 4 per GPU in the more recent past (the last few months). Cheers.

Update: Still getting some errors. It may be on the CPU end, as the tasks seem to occasionally error out at the same time, when they switch between the bundled tasks and load the CPU. For now I will assume this is on my end and not an issue with the WUs, though I'm not sure about it. This problem may also work itself out over time as the WU tasks stop hitting the CPU at the same time. I might also try further limiting CPU usage, but I had increased it to get faster turnaround between tasks.
Joined: 30 Apr 14 · Posts: 67 · Credit: 160,674,488 · RAC: 0
My setup has a 0% failure rate. I think it's more likely a GPU issue than a CPU one.
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
I'm not sure, but it doesn't look like it's the GPUs, since the WU errors seem to occur when the tasks hit the CPU at the same time, and I get far fewer (seemingly no) errors when they don't hit the CPU at the same time. https://i.imgur.com/qiY2tv0.png I do have my CPU overclocked a little bit (and not my GPUs), so it may be related to that, given the load spikes the CPU was going through. I'll mess around with it some more and report back if I find anything conclusive.
Joined: 25 Feb 13 · Posts: 580 · Credit: 94,200,158 · RAC: 0
Hey Everyone, Sorry for the silence; I decided to take yesterday off to recharge a bit after the huge push I've been doing for the last two weeks. Today I will be working on the Linux GPU apps and the Mac applications, and if I have time I will look into fixing the cosmetic issues with the progress bar. Jake
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
Hey Everyone, Thanks for the update. Sounds like you could have used the break, and things seem to be running rather well overall now. As for the cosmetic changes, I actually rather like being able to easily see on the progress bar when one of the bundled tasks ends; it's helping me troubleshoot some issues I seem to be having on my end (see my last couple posts) by letting me monitor bundled task progress, CPU load, and GPU load in real time. But hey, that's just me. Best of luck and thanks for all your efforts.
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
Yes, it is just you :-P I'd like to see the progress of the whole WU. Since there are 5 in the bundle, you can just watch for 20/40/60/80/100% :-)
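Just to illustrate the mapping (a rough sketch, assuming the 5 sub-tasks are equally sized; the function name is made up for illustration):

```python
def overall_progress(completed_subtasks: int, current_fraction: float, bundle_size: int = 5) -> float:
    """Map the running sub-task's progress onto a whole-bundle progress bar,
    assuming equally sized sub-tasks (only an approximation)."""
    return (completed_subtasks + current_fraction) / bundle_size

# 2 sub-tasks done and the 3rd half-way through -> the whole-WU bar would read 50%
print(overall_progress(2, 0.5))  # 0.5
```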
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
To be fair, it's easier watching multiple things at once with the progress bar resetting. The only reason I see for changing this is aesthetics; it wouldn't serve any practical purpose that I'm aware of. Maybe it would help avoid some confusion, though on the other hand the current behaviour might also help illustrate to people who aren't aware of the bundling how the WUs are actually running. I understand some people prefer form over function though... ;) I'll make do either way. Cheers, guys.
Joined: 30 Apr 14 · Posts: 67 · Credit: 160,674,488 · RAC: 0
Hey Everyone, Jake, I hope that fixing the issues with "Max tasks per day" will come next. There are at least a dozen hosts that spam the server with invalid tasks and process 10,000-20,000 tasks every day. They also cause problems by creating invalid WUs, so some of our work is wasted. IMHO this parameter could start at a lower number and grow linearly as valid tasks are returned.
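Something like this, as a minimal sketch of the idea (not the actual BOINC/MilkyWay@Home server logic; the function name, floor, ceiling, and growth rule are made up):

```python
def update_daily_quota(current_quota: int, valid_returned: int, invalid_returned: int,
                       floor: int = 100, ceiling: int = 10_000) -> int:
    """Hypothetical per-host 'Max tasks per day' rule: collapse to a small floor
    when a host returns mostly invalid work, and grow linearly with valid results."""
    if invalid_returned > valid_returned:
        return floor                                      # mostly-invalid host gets throttled
    return min(ceiling, current_quota + valid_returned)   # linear growth with valid tasks
```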
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
I prefer function over form, actually, and a progress bar that moves back and forth is illogical.
Joined: 24 Oct 16 · Posts: 12 · Credit: 56,127,036 · RAC: 0
Changes since the update on this 'puter:
GPU ver. 1.43: Bundle5 ~4 min. (prior single WU 19-30 seconds)
CPU still ver. 1.42: Bundle5 ~1 hr. 25 min. (prior single WU 9-10 min.)
Inconclusive WUs 4 Nov. thru 15 Nov. were 1-3 WUs per day; 16 Nov.: 35 WUs so far... 11 CPU and 24 GPU.
Looks like production is down... Am I reading too much into this?

Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz [Family 6 Model 42 Stepping 7] (8 processors)
AMD Radeon HD 6900 series (Cayman) (2048MB) driver: 1.4.1848 OpenCL: 1.2
Microsoft Windows 7 Pro x64 Service Pack 1, (06.01.7601.00)
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
Regarding "Bundle5 ~4 min. (prior single WU 19-30 seconds)": that looks wrong. My bundles take more or less 5 times longer than a single WU used to; yours are over 8 times longer. I've got a similar setup - Windows 10, R9 290, i5 3570K.
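Rough numbers behind that, spelled out (illustrative only, using the times quoted above):

```python
single_wu = (19, 30)                  # seconds per single WU before the update (quoted range)
observed_bundle = 4 * 60              # ~240 s observed for a 5-WU bundle
expected_bundle = [5 * t for t in single_wu]           # 95-150 s if a bundle were exactly 5x a single WU
ratios = [observed_bundle / t for t in single_wu]      # how much longer the bundle actually takes
print(expected_bundle, [f"{r:.1f}x" for r in ratios])  # [95, 150] ['12.6x', '8.0x']
```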
Joined: 8 Apr 09 · Posts: 70 · Credit: 11,027,167,827 · RAC: 0
Hello everyone, Is anyone running a 390 or 390X? (A 290 or 290X may have the same problem.) I still have the problem: when I run several WUs at once, then after some time (minutes to an hour or so) some WUs start to hang and go on forever, while one or two crunch on. I have tested drivers since 15.9; always the same problem. Win 7 or Win 10 doesn't matter either. Tried different hardware setups and new or old Windows installations; no difference. I hope someone can confirm the problem, so we can start searching for the root cause and maybe even a fix. PS: Running on the 280X or 7970 doesn't give me the error, and running one WU at a time is fine. Running 2 Einstein@Home WUs at the same time causes calculation errors (invalid tasks).
Joined: 13 Feb 09 · Posts: 51 · Credit: 72,880,793 · RAC: 4,462
Well, I discovered a simple cure for my excessive CPU usage when running MW GPU tasks: don't run multiple tasks on the GPU! At least not MW tasks -- Seti and Einstein don't cause a problem with multiple tasks. On my XP machine with a fairly old dual-core AMD CPU and GTX-750Ti, changing from two MW tasks to one reduced the CPU usage from ~40% to ~4%! Running just one task results in a minimal reduction in GPU throughput -- less than 5%. So . . . single tasking from here on for MW! Saw a similar thing with my Win10 machine using a slightly newer AMD dual core and a GT-730: from ~30% down to ~3%. (My previous statement that CPU usage was ~10% on that machine was in error.) Since I am not seeking to rack up as many points as possible but just to run multiple BOINC projects reasonably efficiently, I think I will stick with my current configuration. For now, at least. One of these days I may get around to putting together a real number cruncher. :-) Folks running older machines with mid- to low-end NVIDIA cards should take note.
Joined: 24 Oct 16 · Posts: 12 · Credit: 56,127,036 · RAC: 0
I notice that there are (9) WUs running at once now: (8) CPU tasks Ver. 1.42 and (1) GPU Ver. 1.43... I believe it was only (7) and (1) earlier... with the occasional N-Body Ver. 1.62 taking up a CPU. Wish I had recorded the number of tasks running on each earlier... hate depending on my worn memory. I have an N-Body coming up... we'll see what, if anything, changes. Thanks for the reply, Peter

Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz [Family 6 Model 42 Stepping 7] (8 processors)
AMD Radeon HD 6900 series (Cayman) (2048MB) driver: 1.4.1848 OpenCL: 1.2
Microsoft Windows 7 Pro x64 Service Pack 1, (06.01.7601.00)
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
Hey Everyone,

On that note, I've done a bunch of testing on my end and am still not sure what the exact issue is. I've loaded up the CPU and GPUs much more than BOINC does during these tests - Prime95 for a few hours, plus other GPU load and compute tests looking for artifacts or similar. Everything seems to check out 100% so far. I haven't started crunching for MW@H again yet as I still want to do some more tests, but so far it was looking like the first several WUs would have computational errors as they would "finish" bundled tasks and change CPU and GPU loads at the same time as each other for a while. Once they start deviating from each other, they seem to stop erroring out.

I'm not wanting to give out bad work, waste server resources, or anything similar. Unless there is truly a shortage of WUs and it is determined that more people should get them as opposed to more capable computers which can turn them around faster, I'm not sure that discriminating by work per unit time should be a thing. Theoretically the good of the project is served by getting the most work done in the shortest amount of time overall. Discriminating against failed WUs or similar does make more sense to me, though, as those might gunk up the works a bit, even though I might technically fall under this category for the time being. I can assure you that isn't my intent. Hopefully my comments in this thread have helped shed some light on these issues as well. Pointing fingers doesn't solve the problem.

My computer has done a lot of good work for this project and, as mentioned, was within the top 5 performing hosts in the not too distant past. Off the top of my head, I think it got up to 4th place, and it had been within the top 10 for a few months. I hope to get it back up there again after I get these issues sorted out. Cheers.
Joined: 4 Nov 12 · Posts: 96 · Credit: 251,528,484 · RAC: 0
It doesn't exactly follow the usual expectations, if that's what you mean, but I'm not sure that it necessarily should, given that it isn't actually running just one WU, but several, one at a time, separately from each other.
Joined: 24 Oct 16 · Posts: 12 · Credit: 56,127,036 · RAC: 0
When the N-Body Ver. 1.62 begins running, all progress stops on the (7) remaining CPU Bundle5 tasks and their status changes to "waiting to run". The GPU Ver. 1.43 tasks continue to crunch, but their completion time changes to 3 min. 55 sec. from the 4 min. 25 sec. required when the (8) CPU Bundle5 Ver. 1.42 tasks are running. The N-Body task completes in 8 min. 50 sec. and the CPU Bundle5 tasks restart.

Intel(R) Core(TM) i7-2700K CPU @ 3.50GHz [Family 6 Model 42 Stepping 7] (8 processors)
AMD Radeon HD 6900 series (Cayman) (2048MB) driver: 1.4.1848 OpenCL: 1.2
Microsoft Windows 7 Pro x64 Service Pack 1, (06.01.7601.00)
Joined: 30 Apr 14 · Posts: 67 · Credit: 160,674,488 · RAC: 0
Lol. :) I wasn't talking about you at all. I was talking about hosts with 100% or almost 100% invalid or errored tasks:

http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=606779&offset=0&show_names=0&state=5&appid= (50k invalid tasks in 2 days)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=23432&offset=0&show_names=0&state=6&appid= (2k invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=698232&offset=0&show_names=0&state=5&appid= (3k+ invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=586986&offset=0&show_names=0&state=5&appid= (3k+ invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=694659&offset=0&show_names=0&state=5&appid= (2k+ invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=630056&offset=0&show_names=0&state=6&appid= (almost 2k invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=259168&offset=0&show_names=0&state=6&appid= (almost 2k invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=706643&offset=0&show_names=0&state=6&appid= (almost 2k invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=709083&offset=0&show_names=0&state=6&appid= (almost 1k invalid tasks)
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=637080&offset=0&show_names=0&state=5&appid= (500 invalid tasks)

Those hosts don't really generate ANY credit, and those are just from 15 of my "can't validate" WUs, so there are many more of them... My personal favorite is hosts that have a limit of 10k tasks per day and somehow receive more than 20k. After a single day, "Max tasks per day" should drop to 100 unless SOME WUs are correctly returned. Some time ago I created a thread, http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3990, which was "ignored" by the community, so apparently "we" weren't interested in server performance. I suspect that this time it will be similar. Bonus points: it was the SAME HOST that currently gives us 20k invalid results per day.