Message boards : MilkyWay@home Science : Single vs. Double Precision
Joined: 21 Feb 09 | Posts: 180 | Credit: 27,806,824 | RAC: 0
Just wondering: how big a % error would single precision introduce into the project's results? I have two 4670s I'd like to crunch with, which only do single precision. I personally do electrochemical simulations at work, and single precision does not affect the results there. I wonder just how much of an effect it would have here.
Joined: 24 Dec 07 | Posts: 1947 | Credit: 240,884,648 | RAC: 0
You might want to read the application code board.
Joined: 12 Nov 07 | Posts: 2425 | Credit: 524,164 | RAC: 0
> Just wondering: how big a % error would single precision introduce into the project's results? I have two 4670s I'd like to crunch with, which only do single precision.

GPUs are required to use double precision; single precision is not usable for this project. See this thread for usable GPUs: http://milkyway.cs.rpi.edu/milkyway/forum_thread.php?id=589

Doesn't expecting the unexpected make the unexpected the expected? If it makes sense, DON'T do it.
Joined: 21 Feb 09 | Posts: 180 | Credit: 27,806,824 | RAC: 0
> Just wondering: how big a % error would single precision introduce into the project's results? I have two 4670s I'd like to crunch with, which only do single precision.

I'm already running a 4850, so I know what the current set of usable GPUs is, thank you. I've read through that part of the forum thoroughly; to be honest, I'm asking for specifics: just how much of a difference does single precision make to the results? All I see in the forum is "this science requires double precision" reiterated over and over, rather than a reason.

For example, compare a constant, say pi, as a single and a double. Its true value is 3.1415926535897932384626433832795. In single precision, pi has a decimal value of 3.1415927410125; in double precision, 3.1415926535898. So in single, pi is correct to 6 decimal places; in double, to 12. Depending on the calculations involved, the base error will either compound or grow with each successive calculation.

Now, like I said, I do electrochemical simulation. This involves solving partial differential equations via matrix manipulation. A typical simulation for a set of functions takes 5 minutes (at double precision) on a 3 GHz Core2Quad machine (single-threaded), and on conversion to single precision I see barely 0.001% discrepancy in the results, so I can just increase the density of the matrix to make up that discrepancy at very little cost in time. The single vs. double discrepancy evens out over time due to the nature of my simulation.

I'm asking this just as a 10-minute detour to compare. If it sucks, then OK. But if it isn't that big an issue, there could be a bigger set of GPUs out there willing to crunch. I don't program in C, but if it's possible to compile for single instead of double, we can run and compare. Or if someone can link me to something to that effect, please post it here.

Many thanks, Ian
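As a concrete illustration of those two representations, here is a minimal C sketch that prints pi at both precisions (it assumes a math.h that defines M_PI, as POSIX systems do):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        float  pi_f = (float)M_PI;   /* round the double constant to single */
        double pi_d = M_PI;

        printf("single: %.13f\n", pi_f);   /* prints 3.1415927410126 */
        printf("double: %.13f\n", pi_d);   /* prints 3.1415926535898 */
        return 0;
    }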
Joined: 26 Jul 08 | Posts: 627 | Credit: 94,940,203 | RAC: 0
> Its true value is 3.1415926535897932384626433832795

Where did you get those numbers? Double precision values actually have about 16 significant decimal digits, whereas single precision values have about 7. The double precision value of pi is 3.1415926535897930; the single precision value is 3.1415927. Count them: that is 16 and 8 correct digits (when rounded), respectively. For the single precision value of pi one gets a bit lucky with the binary representation: the next smaller representable value would be 3.1415925, the next larger one 3.1415930, so the resolution is actually worse than 8 digits. Btw., the "true" value of pi has an infinite number of decimal places ;)

But back to the topic: could Milkyway make use of single precision? You have to know that MW uses quite a few exponential functions, and unfortunately the exponential amplifies the numerical noise in its argument as well. Therefore one needs some more digits to start with.

But I think in the early stages of a search, when one is far away from the optimum fitness, it may be possible. Most probably one could exclude some parameter ranges with single precision calculations and then search for the optimum in a second pass using double precision. But that would require two sets of applications (or one incorporating both) and more effort on the server side. And it is not even certain that it would be much faster, considering how fast the searches converge already. How do you decide which parameter regions to search again with double precision if not all clients have reported back their single precision results? If you wait for the results, the whole search might already have finished had it been started with the higher precision from the beginning. Milkyway really likes results to be returned fast.

As a side note, one would run into real problems awarding credits in comparison to other projects. Flops counting would yield approximately 5 to 6 times as much as GPUGrid (even after their 60% credit raise), as one would get >500 GFlops (single precision) on high-end cards, and not only the 140-150 double precision GFlops as now. Milkyway@home should change its name to Milky GPU Highway then ;)
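To put a number on that amplification: since exp(x+eps) ≈ exp(x)·(1+eps), an absolute rounding error eps in the argument becomes a relative error of about eps in the result, and rounding the argument to single precision alone costs about |x|·2^-24. A minimal C sketch (C99, link with -lm; the argument 87.3 is arbitrary, chosen only to stay inside float range):

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double x  = 87.3;             /* arbitrary argument, within float range */
        float  xf = (float)x;         /* rounding alone costs ~|x| * 2^-24 */

        double ref    = exp(x);            /* double-precision reference */
        double approx = (double)expf(xf);  /* whole exp in single precision */

        /* exp(x+eps) ~= exp(x)*(1+eps): the absolute error eps in the
           argument becomes a relative error eps in the result */
        printf("double: %.15e\n", ref);
        printf("single: %.15e\n", approx);
        printf("relative error: %.1e\n", fabs(approx - ref) / ref);
        return 0;
    }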
Joined: 21 Feb 09 | Posts: 180 | Credit: 27,806,824 | RAC: 0
Cluster Physik, awesome answer. Many thanks, exactly what I was looking for!! By your answer, single precision works on my end because my own simulations use few exponentials, which get evened out over time; I use a load of matrix calculations that are simple operator functions. That being said, I'm going to try to sell the two 4670s I have and pick up another 4850 or something. Also just got paid... wondering whether to get another 4850, or the PC case I've wanted for a while for my home machine.
Joined: 30 May 09 | Posts: 9 | Credit: 105,674 | RAC: 0
I experimented with the integral steps for "mu" and "nu", which resulted in significant differences in the final fitness. For example, increasing the mu and nu steps in the test file astronomy_parameters-20.txt from 800 to 900 and from 80 to 90, respectively, gives the following fitness values in double precision:

    original astronomy_parameters-20.txt:
    ...
    mu[min,max,steps]: 133, 249, 800
    nu[min,max,steps]: -1.25, 1.25, 80
    ...
    fitness = -2.985312797571516

    modified:
    ...
    mu[min,max,steps]: 133, 249, 900
    nu[min,max,steps]: -1.25, 1.25, 90
    ...
    fitness = -2.985312877665851

Natan said the optimization algorithm sometimes hits a local maximum, so there must always be careful manual checking of the congruence of the crunched numbers. In other words, the algorithm is not noise-free, and more accurate calculations are probably not necessary. Something suggests to me that the scientific focus will move (if it hasn't already) toward using BOINC not for fully automatic solving of the problem, but more for brute-force experimenting, followed by more or less manual data processing. From this point of view, I think the scientists will sooner or later prefer a quick response from the BOINC community over slow but more accurate processing results.

The fitness results for the above example, with a single precision exponent in the innermost loop only, are respectively:

    original: -2.985312797574330
    modified: -2.985312877668430

The single precision exponent, including the stupid conversion from double to single and back to double immediately before and after the exp() call, increases the whole performance about 4 times. The production tasks currently use larger integral steps, so the deviations there would probably be smaller, but I am using a CPU and haven't had time to experiment with 1-hour tasks.

For me, the requirement to use double precision sounds like a requirement not to make love for 3 months before measuring your blood pressure, because it may affect the results ;-) I know the project is still in alpha stage, so please treat my comment as a hint only.
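For what it's worth, a minimal C sketch of that mixed-precision trick, with purely illustrative names (integrate_slice, r_vals, and coeff are hypothetical, not the real MilkyWay code): the accumulation stays in double while only the innermost exponential runs in single precision.

    #include <stdio.h>
    #include <math.h>

    /* Hypothetical inner loop: everything stays in double except the
       exponential itself, which is converted to single precision and back. */
    double integrate_slice(const double *r_vals, int n, double coeff)
    {
        double sum = 0.0;                 /* accumulator stays in double */
        for (int i = 0; i < n; i++) {
            float arg = (float)(coeff * r_vals[i]);  /* double -> single */
            sum += (double)expf(arg);     /* single-precision exp(), back to double */
        }
        return sum;
    }

    int main(void)
    {
        double r[4] = { 0.5, 1.0, 1.5, 2.0 };
        printf("%.15f\n", integrate_slice(r, 4, -2.0));
        return 0;
    }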