# Tech Optimism Is Back

### from the *wave-2.0* dept

It looks like tech optimism is back in fashion. Here come the glowing press reports about “the next big thing.” USA Today starts us off by noticing that the internet is now poised to go to a new level. While we’ve pretty much reached the point on the web that matched many of the initial predictions, it has people wondering where we go from here. USA Today notes that the pieces have been laid down over the past few years to expand the internet into a much more useful tool. Wireless devices and technologies help expand the web beyond the computer. The core infrastructure of the internet is now a commodity, and the concept of standard web services is gaining acceptance. The tools to build internet businesses are now understood and cheap. All that’s left is creating the innovative services and applications. Of course, not everything is about the internet. Business Week is running a big issue on the Innovation Economy, with a ton of articles about what they expect will be our new age of innovation over the next 75 years, covering everything from biotech to nanotech to infotech to energy. If you’ve been down about the prospects for technology going forward, read through some of this and realize there’s still plenty of innovation on the way.

## Comments on “Tech Optimism Is Back”

## Will we ever have decimal computers?

For the foreseeable future, computers will continue to store data represented in 0’s and 1’s. Nobody has seriously talked about storing each digit of data as a value ranging from 0 to 9. And for the foreseeable future, humans will continue to use the decimal number system.

The problem with binary computing is that the decimal number “0.1” cannot be accurately stored in memory.

Computers can represent fractional numbers as negative powers of 2, e.g.

0.5 = 1/2

0.75 = 1/2 + 1/4

0.375 = 0/2 + 1/4 + 1/8

etc.

However, it is mathematically impossible to represent the decimal number 0.1 as an exact finite sum of negative powers of 2.

Proof by contradiction:

Suppose 0.1 can be represented as a finite sum of powers of 2, namely

n(1)/2 + n(2)/4 + n(3)/8 + … + n(N)/2^N = 1/10

where each n(.) is either 0 or 1.

The above equation is equivalent to

n(N) + 2*n(N-1) + 4*n(N-2) + … + 2^(N-1)*n(1) = (2^N)/10

(multiplying both sides by 2^N)

The left side of this equation adds up to some integer, whereas the right side cannot be an integer:

(2^N)/10 = 2^(N-1)/5

A power of 2 is never divisible by 5, since its only prime factor is 2.

This is a contradiction, therefore 0.1 cannot be represented as a finite sum of negative powers of 2.

This has real implications for computer science: computers cannot be fully trusted with calculations involving decimal fractions. Computer makers compensate by carrying many binary digits of precision, but small errors remain; 0.1 stored in computer memory is really something like 0.1000000000000000055511151231257827.
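You can see the stored value for yourself; here is a small sketch in Python (whose floats are IEEE 754 double precision), using the standard `decimal` and `fractions` modules to expose the exact number the float 0.1 actually holds:

```python
from decimal import Decimal
from fractions import Fraction

# Constructing a Decimal from the float 0.1 reveals the exact binary
# value the float holds, not the "0.1" you typed.
print(Decimal(0.1))
# → 0.1000000000000000055511151231257827021181583404541015625

# As an exact ratio: the denominator is a power of 2, as the proof predicts.
print(Fraction(0.1))
# → 3602879701896397/36028797018963968  (that denominator is 2^55)

# The classic symptom: the rounding errors in 0.1 and 0.2 don't cancel.
print(0.1 + 0.2 == 0.3)
# → False
```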

So yes, if there is a vast conspiracy by the CIA, Trilateral Commission, Microsoft, etc. to hide the truth, it is that computers do make math mistakes. If you add up millions of small numbers, errors do accumulate.
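The accumulation claim is easy to check; a quick Python sketch, using the standard library's error-compensated `math.fsum` as the reference:

```python
import math

# Add 0.1 one million times with naive floating-point addition.
total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)              # slightly off from 100000.0
print(total == 100000.0)  # False: tiny per-addition errors have piled up

# math.fsum tracks the lost rounding error as it goes, so the same
# million additions come out exact.
print(math.fsum(0.1 for _ in range(1_000_000)))  # 100000.0
```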

## Re: Will we ever have decimal computers?

>> computers cannot be fully trusted with calculations involving decimal-point numbers.

Actually, just as you can program computers to perform spell checking of text, you can program them to perform flawless decimal arithmetic. It’s slower than native binary, but if you want your money to add up, it is fairly straightforward to accomplish.

This is usually more of an issue for financial systems where pennies count rather than scientific systems where all bases are more or less equally valid.
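For what it’s worth, Python ships exactly such a facility in its standard library; a minimal sketch with the `decimal` module:

```python
from decimal import Decimal

# Construct from strings so the values start out exact.
price = Decimal("0.10")
total = price + price + price

print(total)                      # → 0.30, exactly
print(total == Decimal("0.30"))  # → True

# Contrast with binary floats, where the same sum misses.
print(0.1 + 0.1 + 0.1 == 0.3)  # → False
```

Constructing from strings rather than floats matters: `Decimal(0.1)` would faithfully preserve the float’s binary rounding error, while `Decimal("0.1")` is exact.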

## Re: Re: Will we ever have decimal computers?

Still, people have to remember that programs need to perform such tricks behind the scenes. I demonstrated this fact today to people who have advanced degrees in computer science or related fields, and they didn’t remember it right away.

## Re: Re: Re: Will we ever have decimal computers?

The first computers were decimal (especially if you go back as far as Babbage). We switched to binary because 10-state electronic devices are hard to build.

## Re: Re: Re: Re: Will we ever have decimal computers?

Actually, 10-state electronic devices are not necessary for decimal computing.

Approximately 3.3 times (2^3.3 ≈ 10) more traces on a chip are needed to represent decimal numbers with digital devices, but the devices themselves are binary.

Binary devices aren’t even the most efficient use of electronics.

A wire (or trace) can actually hold 3 states: + voltage, 0 voltage, and – voltage.

Imagine logic circuits made up of single pole, double throw switches instead of s.p.s.t. switches and one will see what I mean.

Trinary logic circuits, as this arrangement is called, have been designed, but, as far as I know, never implemented.

It’s too bad since they are inherently the most efficient use of chip space.

One doesn’t have to use a trinary number system with trinary logic circuits.

Ordinary binary numbers can be used with signed digits, allowing arithmetic without using the two’s complement system.

The mixed signed-digit combinations can be used to represent everything other than numbers: words, instructions, addresses, etc.

## Re: Re: Re: Will we ever have decimal computers?

It’s a fact that the translation from decimal arithmetic to binary arithmetic and back again entails inaccuracies.

The problem comes in the rounding.

There are situations where only binary coded decimal is used because of this problem.
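Binary-coded decimal just packs each decimal digit into its own four-bit nibble, so no digit is ever approximated. A minimal sketch (the helper names `to_bcd`/`from_bcd` are my own, not any standard API):

```python
def to_bcd(n: int) -> int:
    """Pack each decimal digit of a non-negative int into a 4-bit nibble."""
    bcd, shift = 0, 0
    while True:
        bcd |= (n % 10) << shift  # store the lowest decimal digit
        n //= 10
        shift += 4
        if n == 0:
            return bcd

def from_bcd(bcd: int) -> int:
    """Unpack 4-bit nibbles back into an ordinary decimal integer."""
    n, place = 0, 1
    while True:
        n += (bcd & 0xF) * place  # read one nibble per decimal place
        bcd >>= 4
        place *= 10
        if bcd == 0:
            return n

# 42 becomes nibbles 4 and 2, which reads as hex 0x42.
print(hex(to_bcd(42)))  # → 0x42
print(from_bcd(0x42))   # → 42
```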

I’d recommend building decimal computers again, since the hardware is advanced enough to build decimal machines with acceptable performance, but the binary system really is superior to the decimal one.

If we switched to the binary system generally, there’d be no need to memorize a multiplication table with 45 entries!