The Power Of Defective Chips
from the saving-yield dept
In the semiconductor business, yield is everything. Yield is (basically) the proportion of manufactured chips that work versus those that are defective and need to be tossed out. It often goes ignored in the general business press, but part of the reason Intel was so successful for so many years against competitors like AMD was that its manufacturing process allowed it to achieve a much higher yield — meaning that even if competitors could make equally good chips, they could never reach the same margins, as they’d be throwing out a lot more defective chips.

It’s commonly accepted that, in producing chips, a certain percentage (hopefully decreasing over time) are going to get thrown out. However, some researchers are trying to make that waste not so wasteful. They’ve realized that even defective chips can be useful for some things — usually simpler processes that don’t need exact calculations, such as decoding MPEG video. They’ve now developed a system for testing defective chips to see just how defective they really are, and whether they can be reused in other applications or devices.

While they say that some chip firms haven’t been interested in talking to them, for fear of being associated with selling “defective” chips, it appears that some are realizing that this is pure incremental sales. These are chips they would otherwise throw out, so sales of such chips are simply icing on the cake — and in a low-margin business, that icing can be pretty sweet. On the other side of the coin, such chips could also help make special-purpose devices cheaper, since the chips needed to build them will be cheaper.
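To make the economics concrete, here's a rough back-of-the-envelope sketch (all numbers invented for illustration) of how yield drives the cost of each sellable chip, and why selling otherwise-discarded dies is pure incremental revenue:

```python
# Hypothetical numbers: cost to process one wafer and dies per wafer.
WAFER_COST = 5000.0
DIES_PER_WAFER = 400

def cost_per_good_die(yield_fraction: float) -> float:
    """Cost of each sellable die when only `yield_fraction` of dies work."""
    good_dies = DIES_PER_WAFER * yield_fraction
    return WAFER_COST / good_dies

for y in (0.9, 0.7, 0.5):
    print(f"yield {y:.0%}: ${cost_per_good_die(y):.2f} per good die")

# If some of the "bad" dies can still be sold for error-tolerant uses
# (say 30% of them, at a lower price), that revenue carries no extra
# manufacturing cost: the wafers were paid for either way.
salvage_fraction = 0.3
salvage_price = 10.0                      # hypothetical price per salvaged die
bad_dies = DIES_PER_WAFER * (1 - 0.7)     # defective dies at 70% yield
extra_revenue = bad_dies * salvage_fraction * salvage_price
print(f"extra revenue per wafer from salvaged dies: ${extra_revenue:.2f}")
```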
Comments on “The Power Of Defective Chips”
Maximum Likelihood Estimators
It could be that the whole of science is, indeed, built on unreliable measurements with unknown errors. We merely take the most plausible explanation that minimizes the scope of the errors, but that can introduce its own biases by assuming the errors are minimal.
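As a toy illustration of that idea (invented measurements, Gaussian error assumption): a maximum likelihood estimate just picks the parameter value that makes the noisy observations most plausible, which under these assumptions coincides with minimizing squared error.

```python
import math

# Invented noisy measurements of some quantity.
measurements = [9.8, 10.3, 9.9, 10.6, 10.1]

def log_likelihood(mu: float, sigma: float = 0.5) -> float:
    """Log-likelihood of the data assuming i.i.d. Gaussian errors."""
    return sum(
        -0.5 * math.log(2 * math.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)
        for x in measurements
    )

# Brute-force search over candidate means; the maximum lands on the
# sample mean, which is also the least-squares answer.
candidates = [i / 100 for i in range(900, 1100)]
mle = max(candidates, key=log_likelihood)
print("MLE of the mean:", mle)
print("sample mean:    ", sum(measurements) / len(measurements))
```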
Re: Maximum Likelihood Estimators
When the errors turn out to be larger than theory predicts, we look for new explanations. We can devise more complex models, but they require us to estimate more parameters, which eats up the information available in the data, leaving less room for useful predictions.
The number of errors on a chip could be modelled as a Poisson distribution. We live in a logarithmic world in which failures are waiting to happen to us.
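A minimal sketch of that suggestion (defect density and die sizes invented): if defects land on a wafer at random with average density D per unit area, the defect count on a die of area A is Poisson-distributed with mean D*A, so the chance of a perfect die falls off exponentially with area (hence the "logarithmic world"), while dies with a single defect are the candidates for salvage.

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(exactly k defects) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

defect_density = 0.5                 # hypothetical defects per cm^2
for die_area in (0.5, 1.0, 2.0):     # hypothetical die areas in cm^2
    lam = defect_density * die_area
    perfect = poisson_pmf(0, lam)    # fully working die
    one_defect = poisson_pmf(1, lam) # possible candidate for reuse
    print(f"area {die_area} cm^2: P(0 defects)={perfect:.2f}, "
          f"P(1 defect)={one_defect:.2f}")
```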
Uncertainty Principle?
Everything is actually uncertain.