Scheduled Maintenance Concluded

Arivald Ha'gel

Joined: 30 Apr 14
Posts: 67
Credit: 160,674,488
RAC: 0
Message 65821 - Posted: 15 Nov 2016, 9:00:06 UTC - in response to Message 65820.  


In the past, the project delay was 60 seconds.
Then it was changed; now it's:
Project requested delay of 91 seconds
[sched_op] Deferring communication for 00:01:31


If this setting needs to be changed, please keep in mind the very fast PCs that are out there (like mine with 4x R9 Fury X cards ;-) ) - so that they can still be fed/saturated 24/7... :-)


IMHO your PC will still be saturated :) The current max of 80 tasks will take your PC at least 10 minutes to crunch through :)
If Jake increases the bundle sizes, this will be even easier to achieve.
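
(For context: if I read the BOINC server docs right, both of these limits live in the project's server-side config.xml. A minimal sketch with purely illustrative values - not necessarily MilkyWay@home's actual settings - could look like this:)

<!-- Sketch of the relevant BOINC server scheduler options (project config.xml).
     Values here are illustrative guesses, not MilkyWay@home's real configuration. -->
<config>
  <!-- Minimum seconds between sending work to the same host;
       clients see this reflected as the "project requested delay" -->
  <min_sendwork_interval>91</min_sendwork_interval>
  <!-- Cap on unfinished tasks per usable GPU (the per-host total scales with GPU count) -->
  <max_wus_in_progress_gpu>20</max_wus_in_progress_gpu>
</config>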
ID: 65821
Dirk Sadowski

Joined: 30 Apr 09
Posts: 101
Credit: 29,874,293
RAC: 0
Message 65822 - Posted: 15 Nov 2016, 9:00:30 UTC

BTW.

AFAIK, MilkyWay WUs can only be created/sent out once results come back.

The results that come back from members' PCs are what make it possible to create new WUs.

This is different from other BOINC projects.
ID: 65822
Michael H.W. Weber

Joined: 22 Jan 08
Posts: 29
Credit: 242,730,423
RAC: 0
Message 65823 - Posted: 15 Nov 2016, 9:16:36 UTC - in response to Message 65786.  
Last modified: 15 Nov 2016, 9:22:27 UTC

Just released the GPU version. It is a 32-bit application that works on 64-bit machines. Let me know if there are any issues.

Well, first of all, congratulations on finally making it happen!
The returned tasks validate just as they did before the bundling effort, i.e. many are instantly valid, while the majority are first marked inconclusive and then shifted to the valid bucket (behavior I have never understood, by the way...).

They should take about 5x longer than normal work units since you are crunching 5.

In fact, a 280X needs 9 secs for a single WU, while the 5x bundle completes in 38 secs - quicker than five singles (5 x 9 s = 45 s, so roughly 15% less time). Same with the 290X: 13 secs for a single task, 58 secs for a bundle of 5 (versus 65 s). So the computation is certainly more time-efficient.
Moreover, it is better for the GPU hardware, because it does not cool down and heat up as frequently as before but is kept at a fairly constant operating temperature.

Finally, I am unsure whether bundling only 5 tasks will solve the DDoS-like load on your server. You could easily increase the bundle size by another factor of 10 or even 100 and then disallow server contacts below a reasonable time threshold.
But let's see. As soon as you find that the 'GPU people' are running out of work again, you might want to increase the bundle size as suggested.

And thanks again for taking our concerns seriously. As a result, I am quite sure you will be flooded with new results.

Michael.
President of Rechenkraft.net e.V. - This planet's first and largest distributed computing organization.

ID: 65823
mmonnin

Joined: 2 Oct 16
Posts: 167
Credit: 1,008,062,758
RAC: 155
Message 65825 - Posted: 15 Nov 2016, 10:58:23 UTC - in response to Message 65805.  

Hey Everyone,

I did see the Linux GPU ones were getting a few invalid results, but I thought that was just due to the significant number of errors on release day. Is that still an issue now that the error counts are down? (Can anyone confirm there are invalid results on Linux from WUs assigned today?)

I will work on getting a fix for the cosmetic issues for the next scheduled server maintenance in a week or so. I also need to get a Mac application working and released. I'll make a new news thread with a date to expect the next updates.

Thank you all for your help with debugging and words of encouragement. The MilkyWay@home community is the best.

Jake


I had a decent number of errors prior to the server update/bundling. I checked some of the top users and only saw errors on Linux machines. It seems like the errors have gone down, maybe due to bundling, but the invalids have gone up. This is on my PC with a 280X, everything at stock:
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=706926&offset=0&show_names=0&state=5&appid=
ID: 65825
Arivald Ha'gel

Joined: 30 Apr 14
Posts: 67
Credit: 160,674,488
RAC: 0
Message 65829 - Posted: 15 Nov 2016, 12:49:33 UTC
Last modified: 15 Nov 2016, 12:51:18 UTC

I see we have some rogue hosts that have wasted some of my GPU's work. Here's one example:
http://milkyway.cs.rpi.edu/milkyway/results.php?hostid=606779

About 20k invalid WUs in a single day. I thought BOINC was supposed to curb that with the "Number of tasks today" limit, but it seems to be over 20k per day (?) :/

I have already PMed the owner.

Jake,
Could you look at this particular problem? With increased bundle sizes this will become increasingly problematic.
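
(For reference, BOINC's per-host daily quota is configurable server-side; a sketch of the relevant config.xml entry, with a purely illustrative value:)

<!-- Sketch: per-host daily task quota in the project's config.xml (illustrative value).
     BOINC scales this base limit by the host's usable CPUs/GPUs and lowers a host's own
     quota when it keeps returning bad results, so a misbehaving host should throttle
     itself over time. -->
<config>
  <daily_result_quota>80</daily_result_quota>
</config>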
ID: 65829
w1hue

Joined: 13 Feb 09
Posts: 51
Credit: 72,827,746
RAC: 2,413
Message 65830 - Posted: 15 Nov 2016, 13:14:34 UTC

I am currently running two machines:

WinXP, AMD 64 X2 CPU, NVIDIA GTX 750 Ti GPU, two WUs per GPU
Win10, AMD 7750 Dual-Core CPU, NVIDIA GT 730 GPU, two WUs per GPU

Running Milkyway, Einstein and SETI GPU apps on both machines.

On the XP machine, Milkyway WUs take about 20 min of run time and use about 40% CPU.

On the Win10 machine, Milkyway WUs take about 30 min of run time and typically use less than 10% CPU time (with fairly large variation in CPU time).

Einstein and SETI WUs take 10% or less CPU time on both machines.

Given the high CPU usage, I don't consider running the current Milkyway WUs on my XP machine an efficient use of the machine, so I have stopped running them until (or unless...) a new version of the app is released that requires less CPU time.
ID: 65830
Arivald Ha'gel

Joined: 30 Apr 14
Posts: 67
Credit: 160,674,488
RAC: 0
Message 65831 - Posted: 15 Nov 2016, 13:37:33 UTC - in response to Message 65830.  
Last modified: 15 Nov 2016, 13:38:39 UTC

On the XP machine, Milkyway WUs take about 20 min of run time and use about 40% CPU.

On the Win10 machine, Milkyway WUs take about 30 min of run time and typically use less than 10% CPU time (with fairly large variation in CPU time).


This is probably down to the CPU/GPU combination. The AMD 64 X2 is a very old 90 nm CPU (introduced in 2005/2006, with newer 65 nm versions from 2007/2008). The AMD 7750 Dual-Core CPU is from the same family, a 65 nm CPU from 2008.

I wrote about CPU/GPU profile analysis a few posts earlier.
Also, neither the GTX 750 Ti nor the GT 730 is an especially efficient GPU for double-precision work. Either one WILL be much more efficient doing single-precision work for Einstein@Home or SETI@Home.

Given the high CPU usage, I don't consider running the current Milkyway WUs on my XP machine an efficient use of the machine, so I have stopped running them until (or unless...) a new version of the app is released that requires less CPU time.


With those GPUs, MilkyWay@Home will probably never be more efficient than Einstein@Home or SETI@Home credit-wise.

If you wish to contribute more to MW@H, I can suggest some DP-capable GPUs that would be efficient.
ID: 65831
mmonnin

Joined: 2 Oct 16
Posts: 167
Credit: 1,008,062,758
RAC: 155
Message 65832 - Posted: 15 Nov 2016, 14:48:08 UTC - in response to Message 65831.  

On the XP machine, Milkyway WUs take about 20 min of run time and use about 40% CPU.

On the Win10 machine, Milkyway WUs take about 30 min of run time and typically use less than 10% CPU time (with fairly large variation in CPU time).


This is probably down to the CPU/GPU combination. The AMD 64 X2 is a very old 90 nm CPU (introduced in 2005/2006, with newer 65 nm versions from 2007/2008). The AMD 7750 Dual-Core CPU is from the same family, a 65 nm CPU from 2008.

I wrote about CPU/GPU profile analysis a few posts earlier.
Also, neither the GTX 750 Ti nor the GT 730 is an especially efficient GPU for double-precision work. Either one WILL be much more efficient doing single-precision work for Einstein@Home or SETI@Home.

Given the high CPU usage, I don't consider running the current Milkyway WUs on my XP machine an efficient use of the machine, so I have stopped running them until (or unless...) a new version of the app is released that requires less CPU time.


With those GPUs, MilkyWay@Home will probably never be more efficient than Einstein@Home or SETI@Home credit-wise.

If you wish to contribute more to MW@H, I can suggest some DP-capable GPUs that would be efficient.


To add to that, my 280X at stock running MW out-credits my 970 and 1070 running E@H, and those would do even worse running MW. The DP on the 280X just rocks MW WUs.
ID: 65832
Mr P Hucker

Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 0
Message 65833 - Posted: 15 Nov 2016, 15:04:03 UTC - in response to Message 65822.  

BTW.

AFAIK, MilkyWay WUs can only be created/sent out once results come back.

The results that come back from members' PCs are what make it possible to create new WUs.

This is different from other BOINC projects.


Could this be sorted out by shortening the deadlines?
ID: 65833
Nick Name

Joined: 27 Jul 14
Posts: 23
Credit: 921,261,826
RAC: 0
Message 65834 - Posted: 15 Nov 2016, 15:36:04 UTC

Please observe these work units.

This is expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887793313
Name: de_modfit_fast_19_3s_136_bundle5_ModfitConstraints3
Run time: 20 min 41 sec
Credit: 133.66

This is not expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887790878
Name: de_modfit_fast_19_3s_136_ModfitConstraints3
Run time: 4 min 5 sec
Credit: 26.73

It appears some work units are getting through that are not bundled, but they run 5x as long as an old single work unit and pay 1/5 as much. I have a handful of these.
Team USA forum | Team USA page
Always crunching / Always recruiting
ID: 65834
wb8ili

Joined: 18 Jul 10
Posts: 76
Credit: 639,904,038
RAC: 58,719
Message 65835 - Posted: 15 Nov 2016, 15:43:42 UTC

Nick Name -

The first task you have is bundled (5 "old" tasks), ran roughly 5 times longer than the second, and gave roughly 5 times more credit than the second.

What was not expected?

It is true that not all tasks being sent out are bundled. I have a few of those.
ID: 65835
Nick Name

Joined: 27 Jul 14
Posts: 23
Credit: 921,261,826
RAC: 0
Message 65836 - Posted: 15 Nov 2016, 16:12:00 UTC - in response to Message 65835.  

Nick Name -

The first task you have is bundled (5 "old" tasks), ran roughly 5 times longer than the second, and gave roughly 5 times more credit than the second.

What was not expected?

It is true that not all tasks being sent out are bundled. I have a few of those.

Thanks for your comment; I think I've figured out what is happening. I was surprised to see some unbundled work units. The run time also surprised me, even taking into account that I'm running six at once - that's why the bundled one I linked ran for 20 minutes. At first glance it seemed that some tasks that should have been bundled weren't, and were also running unusually long.

I don't have many of these and haven't been at the computer to see one come through. If I catch one I'll probably run it by itself, just for peace of mind.
Team USA forum | Team USA page
Always crunching / Always recruiting
ID: 65836
Mr P Hucker

Joined: 5 Jul 11
Posts: 990
Credit: 376,143,149
RAC: 0
Message 65837 - Posted: 15 Nov 2016, 16:28:46 UTC - in response to Message 65834.  

Please observe these work units.

This is expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887793313
Name: de_modfit_fast_19_3s_136_bundle5_ModfitConstraints3
Run time: 20 min 41 sec
Credit: 133.66

This is not expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887790878
Name: de_modfit_fast_19_3s_136_ModfitConstraints3
Run time: 4 min 5 sec
Credit: 26.73

It appears some work units are getting through that are not bundled, but they run 5x as long as an old single work unit and pay 1/5 as much. I have a handful of these.


The times you quote look fine. The first took 5 times longer with 5 times the credit.

Although they shouldn't be coming through, I'll grant you that.
ID: 65837
Arivald Ha'gel

Joined: 30 Apr 14
Posts: 67
Credit: 160,674,488
RAC: 0
Message 65838 - Posted: 15 Nov 2016, 16:34:03 UTC - in response to Message 65834.  

Please observe these work units.

This is expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887793313
Name: de_modfit_fast_19_3s_136_bundle5_ModfitConstraints3
Run time: 20 min 41 sec
Credit: 133.66

This is not expected.
https://milkyway.cs.rpi.edu/milkyway/result.php?resultid=1887790878
Name: de_modfit_fast_19_3s_136_ModfitConstraints3
Run time: 4 min 5 sec
Credit: 26.73

It appears some work units are getting through that are not bundled, but they run 5x as long as an old single work unit and pay 1/5 as much. I have a handful of these.


They're not bundled because they were first sent out before bundling was available. I believe it's expected that a small fraction (1%-2%) of WUs will still be unbundled for now, but that should disappear entirely within a few days.

Check: https://milkyway.cs.rpi.edu/milkyway/workunit.php?wuid=1377085803
This is the WU for that task; it was first sent out to a MW@H 1.38 client.
ID: 65838
Nick Name

Joined: 27 Jul 14
Posts: 23
Credit: 921,261,826
RAC: 0
Message 65839 - Posted: 15 Nov 2016, 17:16:16 UTC

Thanks everyone. I caught an unbundled one and it processed at the normal (pre-bundle) rate, so it seems things are working as they should. Any extra time they take can be explained by the stop/start of the bundled work units running at the same time.
Team USA forum | Team USA page
Always crunching / Always recruiting
ID: 65839
hans dorn

Joined: 6 Apr 13
Posts: 8
Credit: 215,367,305
RAC: 0
Message 65844 - Posted: 15 Nov 2016, 20:23:51 UTC

Everything seems to be running just fine at my end now.

Thanks for the fast fix.


Cheers
ID: 65844
Wrend

Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 65851 - Posted: 16 Nov 2016, 4:23:24 UTC
Last modified: 16 Nov 2016, 4:48:12 UTC

As I mentioned here → http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=4058&postid=65850#65850

I like that the progress goes back to 0 between the 5 WUs bundled into one, since it lets me monitor the progress of each WU in the package.

Currently I'm running 5 of the new 1.43 WUs in parallel on each of my 2 Titan Black cards, 10 in total. Crunching time per new WU is 3:33 (93s) to 3:36 (96s), loading each GPU to about 78%, so total productivity does seem to have improved for me as well.

Very nice job, it seems. Bundling is probably easier on both the servers and most hosts, with less downtime and less communication overall.


Good job, guys. I'm looking forward to keeping an eye on these and seeing how well they work overall now.

Cheers.
ID: 65851
w1hue

Joined: 13 Feb 09
Posts: 51
Credit: 72,827,746
RAC: 2,413
Message 65852 - Posted: 16 Nov 2016, 4:41:41 UTC - in response to Message 65831.  

If you wish to contribute more to MW@H, I can suggest some DP-capable GPUs that would be efficient.

Like what? I have a 600W PSU in the XP machine but limited cooling. The Win10 machine is an HP Slimline requiring a half-height board, and it only has a 350W PSU.
ID: 65852
Arivald Ha'gel

Joined: 30 Apr 14
Posts: 67
Credit: 160,674,488
RAC: 0
Message 65853 - Posted: 16 Nov 2016, 10:27:48 UTC
Last modified: 16 Nov 2016, 10:29:39 UTC

Like what? I have a 600W PSU in the XP machine but limited cooling. The Win10 machine is an HP Slimline requiring a half-height board, and it only has a 350W PSU.


For AMD/ATI, the Radeon 280/280X is still the best for DP. They're also quite cheap right now (around $150 each?).

As for NVIDIA, there are the GeForce GTX Titan and GeForce GTX Titan Black (from the GeForce 700 series), but they're extra costly - I believe more than $500 each. Where I live they're practically unavailable; I saw some on UK eBay for 700 pounds each... oh my eyes.

The Titan should be almost twice as effective per watt in DP, but in the old MW@H app it performed a little worse than the R9 280X per second (so I assume that per watt it works out about the same).
See the benchmark thread:
https://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=3551&postid=64162#64162

In that thread you can see that pretty much ANYTHING from AMD/ATI beats NVIDIA in terms of DP.

For either card, a 600W PSU should be enough.
ID: 65853
Wrend

Joined: 4 Nov 12
Posts: 96
Credit: 251,528,484
RAC: 0
Message 65854 - Posted: 16 Nov 2016, 13:19:43 UTC
Last modified: 16 Nov 2016, 13:20:34 UTC

Updated to 6 MW@H 1.43 WU bundles running per GPU, 12 in total. I also allocated more CPU headroom, so WUs are finishing in about 3:28 (88s). GPUs are loaded to around 93% and VRAM to around 3668 MB (59% on the SLIed Titan Black cards; memory usage is doubled by being mirrored between the cards, so expect roughly half this on independent cards).

There can be significant CPU load spikes between the individual tasks within a bundle, even though the CPU otherwise tends to sit mostly idle. I've allocated up to 0.75 CPUs per GPU WU, leaving 33% of my 12-thread CPU free.
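
(For anyone who wants to replicate this: that kind of allocation is normally done with an app_config.xml in the MilkyWay project folder. A rough sketch matching the numbers above - the app name "milkyway" is an assumption, so check the names your BOINC client actually reports:)

<!-- Rough app_config.xml sketch for ~6 tasks per GPU with 0.75 CPUs budgeted per task.
     The app name "milkyway" is an assumption; adjust it to whatever your client shows. -->
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.16</gpu_usage>  <!-- ~1/6 of a GPU per task => 6 tasks per GPU -->
      <cpu_usage>0.75</cpu_usage>  <!-- CPU fraction reserved per GPU task -->
    </gpu_versions>
  </app>
</app_config>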

Loads: https://i.imgur.com/8wygOs6.png

Config: https://i.imgur.com/SGKV9XD.png

Still seems to be running great.
ID: 65854