Japan Slices The Biggest Pi Ever

from the well,-that's-useful dept

Is it just me, or is it a really slow news day? It seems that some researchers in Japan have calculated pi to 1.2411 trillion places, roughly six times the old world record for calculating pi (talk about trouncing some poor mathematician’s claim to fame). It took a supercomputer 400 hours to calculate the answer, but (get this) it took the team five years to write the software for the supercomputer to do it. To be honest, I’m a bit disappointed in the team. I mean, why stop at 1.2411 trillion places? If you’ve gone that far, you might as well continue on for a few more billion, right? Anyway, in case you were wondering, there is no practical use for pi to 1.2411 trillion places.



Comments on “Japan Slices The Biggest Pi Ever”

9 Comments
dorpus says:

Nature of the business

In mathematics, published findings take years of peer reviews and counter-proposals before they are accepted into the mainstream of mathematics. Five years sounds about right for calculating pi to that degree of precision, when you consider that computers are, in fact, quite unreliable when it comes to floating-point calculations.

Because computers store numbers as digital bits in base-2 form, it is mathematically impossible to store the exact value “0.1”; it is an irrational number (0*1/2 + 0*1/4 + 0*1/8 + 1*1/16 + …). Computers approximate such values with long sequences of negative powers of 2.
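
To make the point concrete, here is a minimal Python sketch (illustrative; not part of the original comment) recovering the exact value a 64-bit double stores for 0.1:

```python
from fractions import Fraction

# A double holds the nearest sum of negative powers of two, not 0.1 itself.
# Fraction() recovers the exact value the hardware actually stores.
print(Fraction(0.1))    # 3602879701896397/36028797018963968
print(f"{0.1:.20f}")    # 0.10000000000000000555
```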

To avoid the irrational number problem, one would have to devise an elaborate algorithm of raising powers of ten to convert decimal-place digits into integers.
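
Exact decimal arithmetic along those lines already exists; Python’s decimal module, for instance, keeps coefficients as base-10 integers plus an exponent. A short sketch (illustrative, not from the original comment):

```python
from decimal import Decimal

# Decimal stores base-10 digits directly, so decimal fractions
# like 0.1 are represented exactly and sums stay exact.
print(Decimal("0.1") + Decimal("0.2"))   # 0.3, exactly
print(0.1 + 0.2)                         # 0.30000000000000004 with binary floats
```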

And yes, calculating pi to trillions of digits does have a practical purpose. Some mathematical theorems have been shown to fail at very large numbers. If a comparable discovery is made for pi, it would have profound implications for mathematics, therefore for cryptology, engineering sciences, physics, cosmology, everything.

LittleW0lf says:

Re: Nature of the business

Because computers store numbers as digital bits in base-2 form, it is mathematically impossible to store the exact value “0.1”; it is an irrational number (0*1/2 + 0*1/4 + 0*1/8 + 1*1/16 + …). Computers approximate such values with long sequences of negative powers of 2.

You were really beginning to frighten me there for a moment, dorpus… you actually had a rational thought… then you destroyed it.

Computers do not have any problem storing floating point numbers (finite ones, at least). Calculating them is another matter, but storing them is not. Most modern programming languages have single/double precision floating point data types, and dealing with the storage of floating point is quite easy even in assembly language. To a computer, storing 0.1 is no different than storing 42, or 270,000.147.

True, the machine stores a floating point number in base-2 form, just like every other bit of data, so computers are as good at storing floating point numbers as they are at storing strings, integers, etc. Computers don’t know how to use base-10 for storing bits, nor do they care about it except when they need to display the results to us.
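
One way to see that storage itself is lossless is to round-trip the stored bytes; a quick Python sketch (illustrative, not part of the original comment):

```python
import struct

# Write the 8 bytes of a double out and read them back: nothing is lost.
# Any inexactness happened earlier, when the decimal literal was parsed,
# not during storage.
x = 0.1
packed = struct.pack("<d", x)     # the raw bits as stored in memory
y, = struct.unpack("<d", packed)
print(x == y)                     # True: a perfect round trip
print(packed.hex())               # 9a9999999999b93f
```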

However, in this particular exercise, you don’t think the computer is actually computing the value of pi in one operation, do you? The calculation of pi is one a computer can do very easily: it breaks the computation down into an (infinite) series of computations of smaller values, using fractions, arctan, etc. This is something a computer can do.
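
For the curious, here is what breaking pi into a series can look like: a toy Python sketch (illustrative, not from the original comment) of Machin’s formula, pi = 16*atan(1/5) - 4*atan(1/239), with each arctangent summed from its Taylor series:

```python
# atan(x) = x - x^3/3 + x^5/5 - ..., summed term by term.
def atan_series(x, terms=30):
    total, power = 0.0, x
    for n in range(terms):
        total += (-1) ** n * power / (2 * n + 1)
        power *= x * x
    return total

# Machin's formula converges fast enough for full double precision.
pi = 16 * atan_series(1 / 5) - 4 * atan_series(1 / 239)
print(pi)   # 3.141592653589793
```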

Anonymous Coward says:

Re: Re: Nature of the business

What you describe is just a programming trick (done at the compiler level) that works for a limited number of decimal places. What if the number has a lot of decimal places, or worse, is irrational, with no repeating patterns? And you had to add trillions of them? Then you’ve got a problem.

The problem of calculating very precise numbers is an old one in scientific programming. What makes matters worse is that modern CPUs have sacrificed mathematical purity for performance. Even at the design level, a modern CPU is no longer tested for the accuracy of every single possible state; only probabilistic methods are employed. At the manufacturing level, about half of CPUs are thrown out because they make too many calculation errors. The other half are “accurate enough,” and the compiler trick you described usually hides the errors from users anyway.

Back in 1994, Intel famously had this problem with the new Pentium chip, which would make mistakes in scientific computations. Since then, chip makers have both cooperated with language designers to sweep the problem under the rug, and kept a tighter lid on such problems.

The future of chip design may have less to do with speed and more to do with reliability of extremely precise calculations.

Brandon (user link) says:

Re: Re: Re: Nature of the business

The first person’s post about floating point numbers is true.

Yours is mostly fiction.

The bug in the Pentium was a couple of missing values in the fdiv lookup table. Logically, the mechanism for deriving the results of fdiv to 80 bits is accurate; implementing it incorrectly is something else.

Also, it is virtually impossible to run every possible combination of states in a modern processor (24 million transistors, i.e., switches), even at multi-GHz speeds. When you’re talking emulation before production, double hah. When I worked at Intel, the emulator ran at about 8 Hz. Sure, computers run faster now, but the model is also several times the size. At that size, you can’t even do all the standard integer math operations on all 32-bit integers, much less 80-bit floating point.
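
A back-of-envelope calculation (illustrative, not from the original comment) shows the scale of the problem for even one two-operand 32-bit instruction:

```python
# 2**32 values per operand means 2**64 input pairs for a single
# two-operand instruction: centuries of work even at a billion
# tests per second.
pairs = 2 ** 64
rate = 10 ** 9                             # an optimistic tests-per-second
years = pairs / rate / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")               # 585 years
```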

As for the need for reliability beyond 80 bits… that strikes me as a very small subset of people. Most people would probably be fine just using 64-bit integers and multiplying all values by 10000 or so, if they need that level of accuracy.

Brandon
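
Brandon’s scaled-integer suggestion in miniature, as a Python sketch (illustrative; the sample values are made up for the example):

```python
# Keep every value as an integer count of 1/10000ths; addition and
# comparison are then exact, and only display needs a divide.
SCALE = 10_000

a = 1_999        # represents 0.1999
b = 3_000_001    # represents 300.0001
total = a + b    # 3_002_000, i.e. 300.2000, exact
print(f"{total // SCALE}.{total % SCALE:04d}")   # 300.2000
```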

dorpus says:

Re: Re: Re:2 Nature of the business

So, in other words, computers are indeed unreliable for scientific computations. We live with the myth that computers can never make math errors, when they may be making some pretty basic arithmetic errors with 32-bit integers.

When NASA sent astronauts to the moon, they had 3 different computers make orbit and trajectory computations, since they would all come up with different answers.

LittleW0lf says:

Re: Re: Re:3 Nature of the business

When NASA sent astronauts to the moon, they had 3 different computers make orbit and trajectory computations, since they would all come up with different answers.

Source?

I couldn’t believe this. Why would NASA use three different computers to come up with orbital and trajectory info if none of them came up with the right answer? These weren’t Intel Pentiums, but custom-designed computers, built solely for the purpose of calculating the flight trajectory and orbit of a space vessel. I couldn’t find info on this system either at NASA or elsewhere. (They did have a ton of information on the navigational systems within the Apollo spacecraft, but not about the computers on the ground.)

However, when it comes to NASA, I suspect the use of more than one computer for this operation was not due to bad computation, but to redundancy, especially in a real time environment where lives were at stake. If the computer went down, there would be a need to switch to a backup. When I’ve done real-time application development in the past for the government, they ALWAYS required multiple redundant systems to be in place.

If you have a source to back this claim up, I’d be really interested in reading it.

A programmer says:

Re: Re: Re:4 Nature of the business

Jeez, you guys need to stop posting bullshit about stuff you clearly don’t know anything about. You don’t seriously think that anyone calculates pi to that many places using the floating point unit of an actual computer. I should imagine that they do it with some kind of simulation of fixed point which, though slow, is not prone to all the errors that you have been talking about. Interestingly, this is a case where you might need *less* storage as the computation proceeds: after a while you can save the leading digits and stop storing them, since they will not be changed by the computation (depending, of course, on the formula you are using).
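
That fixed-point simulation is straightforward to sketch. Here is an illustrative Python version (not from the original comment) running Machin’s formula entirely in arbitrary-precision integers scaled by a power of ten:

```python
def atan_inv(x, one):
    """atan(1/x) scaled by `one`, summed entirely in integers."""
    term = one // x                 # first term of the Taylor series
    total, n, sign = 0, 1, 1
    while term:
        total += sign * (term // n)
        term //= x * x              # next odd power of 1/x
        n += 2
        sign = -sign
    return total

digits = 50
one = 10 ** (digits + 10)           # ten guard digits absorb truncation
pi = 16 * atan_inv(5, one) - 4 * atan_inv(239, one)
print(str(pi)[:digits + 1])         # 314159265358979... (decimal point omitted)
```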
