Message boards : News : News General
Joined: 8 May 22 · Posts: 3 · Credit: 4,640,216 · RAC: 0
I'm on 13.3.1 and it has been working fine.
Joined: 30 Dec 14 · Posts: 2 · Credit: 12,751,333 · RAC: 0
I have been running 24/7, and in the past 3 days I have gotten 0 new work units. I get project backoffs of 5 minutes to 15 hours in some cases. The site has been down several times recently but is back up now; I had a RAC of 140 that is now dropping because I have nothing to crunch. I was not sure where to address this issue.
Joined: 8 May 09 · Posts: 3339 · Credit: 524,010,781 · RAC: 0
> I have been running 24/7, and in the past 3 days I have gotten 0 new work units.

What kind of tasks are you trying to get? I'm getting N-Body tasks with no problem.
Joined: 19 Jul 10 · Posts: 627 · Credit: 19,363,476 · RAC: 3,495
You have lots of N-Body tasks that expire today; that might cause some issues. Try disabling work fetch for the CPU to get some GPU tasks, and for the future set your cache lower, since you have lots of timed-out tasks.
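For anyone looking for the cache setting mentioned above: it is the "store at least / up to an additional N days of work" pair in the computing preferences, which can be changed on the website, in the BOINC Manager, or through a local override file. A minimal global_prefs_override.xml sketch with illustrative values (work fetch for the CPU itself is switched off in the project's web preferences, not here):

<!-- global_prefs_override.xml in the BOINC data directory; values are illustrative
     and take effect once the client re-reads its preferences or is restarted. -->
<global_preferences>
    <work_buf_min_days>0.1</work_buf_min_days>                 <!-- "store at least ... days of work" -->
    <work_buf_additional_days>0.25</work_buf_additional_days>  <!-- "store up to an additional ... days" -->
</global_preferences>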
Joined: 11 Jul 08 · Posts: 13 · Credit: 10,015,444 · RAC: 0
I have been monitoring run times for the GPU app, and they have been slowly increasing over the last couple of weeks. My guess is that WUs are getting more demanding, but only by around 5% to date. An odd thing about them is that my bigger card (1650 Max-Q, 4 GB, 1024 shaders) uses about twice as much CPU as my smaller one (MX350, 2 GB, 640 shaders). We all know that GPU WUs get "normal" CPU priority versus CPU WUs' "low" priority. I don't have other normal jobs running, and the two machines (Intel 10 & 11) are running at similar speeds. Is it because the bigger card requires more exchange of data?
Joined: 15 Feb 21 · Posts: 3 · Credit: 17,403,101 · RAC: 3,576
Sorry to bother you, but this is the first human I've found. For the first time I have a task that shut down all my others. Without going into the details, my computer usually runs at least a dozen at a time.

Application: Milkyway@home N-Body Simulation 1.82 (mt)
Name: de_nbody_02_27_2023_v182_pal5__data__10_1688749648_1132564
State: Running
Received: 8/20/2023 11:46:09 PM
Report deadline: 9/1/2023 11:46:10 PM
Resources: 16 CPUs
Estimated computation size: 3,531 GFLOPs
CPU time: 05:35:58
CPU time since checkpoint: 00:05:28
Elapsed time: 00:44:22
Estimated time remaining: 00:35:26
Fraction done: 36.094%
Virtual memory size: 16.32 MB
Working set size: 19.07 MB
Directory: slots/6
Process ID: 21108
Progress rate: 49.320% per hour
Executable: milkyway_nbody_1.82_windows_x86_64__mt.exe
Joined: 19 Jul 10 · Posts: 627 · Credit: 19,363,476 · RAC: 3,495
Yes, N-Body tasks use all available cores (unless you have more than 16).
Joined: 22 May 11 · Posts: 71 · Credit: 5,685,114 · RAC: 0
Why not? Will it crash if more than 16 is specified in app_config?
Joined: 8 May 09 · Posts: 3339 · Credit: 524,010,781 · RAC: 0
> Why not? Will it crash if more than 16 is specified in app_config?

You can try, but I believe it will still only use 16.
Joined: 3 Mar 13 · Posts: 84 · Credit: 779,527,712 · RAC: 0
I don't see any real advantage to using more than 16 CPUs/threads. When the app is allowed to use up to 16 CPUs, it does not make full use of all of them all the time anyway. I was running 4x16 on a 48-CPU/thread system [as an experiment], and at times [with all units "running"] it was only using 70% CPU time, sometimes less. 16 will crunch them fast; 6 or thereabouts [depending on system/config etc.] will get more done per day. Though running 20 or 30 more would be an interesting experiment.
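For reference, a minimal app_config.xml sketch along the lines discussed above, limiting the multithreaded N-Body app to 6 threads per task so BOINC runs several tasks side by side. The app name and the --nthreads switch are assumptions about the MilkyWay N-Body mt application; check the names your own client reports (e.g. in client_state.xml) before relying on it:

<!-- app_config.xml in the MilkyWay@home project directory.
     avg_ncpus tells BOINC how many CPUs to budget per task; the cmdline
     switch (assumed here) tells the app itself how many threads to spawn. -->
<app_config>
    <app_version>
        <app_name>milkyway_nbody</app_name>   <!-- assumed app name -->
        <plan_class>mt</plan_class>
        <avg_ncpus>6</avg_ncpus>
        <cmdline>--nthreads 6</cmdline>       <!-- assumed switch -->
    </app_version>
</app_config>

After saving the file, have the client re-read its config files (or restart it) so the change takes effect.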
Joined: 14 Oct 19 · Posts: 6 · Credit: 3,455,883 · RAC: 237
Even without improvements, any parallel code that does not use 100% of a critical resource may benefit from running more threads, so that while one thread is busy elsewhere, another thread can use the resource. I used to use double the CPU core count to ensure close to maximum throughput. With Intel hyperthreaded processors, one might even double that.

For instance, if 50% of a thread's time is spent in I/O, you might run twice as many threads to compensate, although you may just discover they become more I/O-bound as a result. Even so, total I/O throughput increases somewhat with additional requests, like shorter seeks on a disk as it services requests elevator-style. When we were tied to modems, a second transfer might increase the flow just a little by ensuring no idle time between packets, like two threads running at 50% in place of one running at 95%. If paging is involved, one might trigger some thrashing, but the parallel app should be OK, as all threads use the same VM, although possibly different pages.

In a producer-consumer queue situation, one might create more producers if the queue runs dry, and more consumers if the queue goes full, in addition to short sleeps for threads with no queue data or space. Of course, not all apps can support multiple producers or consumers on a queue.
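As a concrete illustration of the producer-consumer point above, here is a minimal Python sketch; the thread counts, queue size, and sleeps are illustrative and unrelated to the N-Body code. A bounded queue makes put() block when it is full and get() block when it is empty, which is exactly the "wait for space / wait for data" behaviour described.

import queue
import threading
import time

work = queue.Queue(maxsize=8)    # bounded queue: put() blocks while it is full
SENTINEL = None                  # marks end-of-input for consumers

def producer(n_items: int) -> None:
    # Simulates an I/O-bound producer; put() blocks while the queue is full,
    # which is the "wait until there is space" behaviour described above.
    for i in range(n_items):
        time.sleep(0.01)         # stand-in for I/O latency
        work.put(i)

def consumer() -> None:
    # get() blocks while the queue is empty, so an idle consumer costs almost nothing.
    while True:
        item = work.get()
        if item is SENTINEL:
            break
        _ = item * item          # stand-in for CPU-bound work

# More consumers than producers, since each producer spends most of its time "in I/O".
producers = [threading.Thread(target=producer, args=(50,)) for _ in range(2)]
consumers = [threading.Thread(target=consumer) for _ in range(4)]
for t in producers + consumers:
    t.start()
for t in producers:
    t.join()
for _ in consumers:
    work.put(SENTINEL)           # one sentinel per consumer so every consumer exits
for t in consumers:
    t.join()

With two producers that are mostly waiting on I/O and four consumers doing the CPU work, the consumers stay busy even though no single producer could keep them fed on its own.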
Joined: 8 May 09 · Posts: 3339 · Credit: 524,010,781 · RAC: 0
> Even without improvements, any parallel code that does not use 100% of a critical resource may benefit from running more threads, so that while one thread is busy elsewhere, another thread can use the resource. I used to use double the CPU core count to ensure close to maximum throughput. With Intel hyperthreaded processors, one might even double that.

BUT the software isn't optimized to handle any of that except by accident, and most people will never take the time to figure out which setup is best. It's a lot like at PrimeGrid, where they advise people not to use hyperthreading to crunch their tasks and to limit AMD CPU cores to stay on a single chiplet, yet people still say, "I've got 24 CPU cores and I'm putting them all on one task and it's working, so I'm okay." Little do they realize that with a bit of tweaking they could easily get 2 or even 3 times the output they are getting now.
Joined: 24 Mar 22 · Posts: 1 · Credit: 72,293 · RAC: 0
I have a problem: milkyway_nbody_1.82_windows_x86_64_mt.exe is not running properly. It shows 0.674% complete and 79 days to completion. Task Manager indicates it is using 1% of the CPU power. I believe that others have had the same problem.
Joined: 19 Jul 10 · Posts: 627 · Credit: 19,363,476 · RAC: 3,495
> I believe that others have had the same problem.

Yes, most if not all of them, because of not allowing BOINC to use 100% of CPU time.
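The option in question is "Use at most N % of CPU time" in the computing preferences. It can be set back to 100% on the website or in the BOINC Manager; for completeness, the equivalent line in a local global_prefs_override.xml sketch (illustrative, and it overrides the web value once the client re-reads its preferences):

<!-- global_prefs_override.xml: stop the client from throttling CPU time. -->
<global_preferences>
    <cpu_usage_limit>100</cpu_usage_limit>   <!-- "use at most 100% of CPU time" -->
</global_preferences>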
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
> Even without improvements, any parallel code that does not use 100% of a critical resource may benefit from running more threads, so that while one thread is busy elsewhere, another thread can use the resource. I used to use double the CPU core count to ensure close to maximum throughput. With Intel hyperthreaded processors, one might even double that.

If? Here's the latest one; the performance cores have HT:
https://www.intel.com/content/www/us/en/products/sku/232167/intel-core-i913900ks-processor-36m-cache-up-to-6-00-ghz/specifications.html
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
> BUT the software isn't optimized to handle any of that except by accident, and most people will never take the time to figure out which setup is best. It's a lot like at PrimeGrid, where they advise people not to use hyperthreading to crunch their tasks and to limit AMD CPU cores to stay on a single chiplet, yet people still say, "I've got 24 CPU cores and I'm putting them all on one task and it's working, so I'm okay." Little do they realize that with a bit of tweaking they could easily get 2 or even 3 times the output they are getting now.

PrimeGrid folk aren't right in the head. If it works better with fewer threads, the server should hand them out that way.
Joined: 5 Jul 11 · Posts: 990 · Credit: 376,143,149 · RAC: 0
Why would someone deliberately put "1%" in that option?

> > I believe that others have had the same problem.
> Yes, most if not all of them, because of not allowing BOINC to use 100% of CPU time.