Talk:Nanotechnology

The timeline on this page is, I think, ridiculously optimistic. I don't see any argument or support on the linked page; it seems to be just raw guesswork.

If "simple nanotech products" were going to be released in 2008, you'd think we'd be hearing something about them by now.

We hear a lot about nanotech products right now, but they are all on the "grain" end of things ("oh, look, we can attach these little molecules to this, and it has these different properties!"), which is essentially just traditional chemistry relabeled as nanotech. The news we hear is largely more of these "grain" products.

The most interesting work right now seems to me to be artificial DNA and the like. But even that is far from commercial application, far from "simple nanotech products."

I'd rather wipe away the assertions and put questions in their place, questions we can hope domain experts will answer.


 * What nanotech products do we expect to appear when, and why?
 * What are reasonable expectations for tiny machines, and what supports those arguments?
 * How much miniaturization can we expect? What are the experimental limits?
 * "There's plenty of space at the bottom." What is the bottom, what are we building right now, and how much is the difference between the two?

I hear that transistors in computers have a smallest feature size right now of 30 nm. I need to know: "What does that mean? Does it mean a transistor is 30 nm wide, or that some part of the transistor is 30 nm wide? How tall are they? What is the actual geometry of a transistor? Atoms average about 0.5-3 nanometers, to my understanding; that means we only have so much potential miniaturization left, perhaps 15x. That sounds like a lot, but it's only about 4 more iterations of Moore's law, which means your exponential trends mean squat, unless you can show me that I misunderstand transistors, or that I misunderstand switching, or something like that. Can you give me good reason to believe, beyond faith in exponentials that have served us well so far?"
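To make the arithmetic behind that question explicit, here is a quick back-of-the-envelope sketch in Python. The inputs are the assumptions stated above (a 30 nm feature, atoms taken as roughly 2 nm across), not established device figures:

 # Back-of-the-envelope miniaturization headroom, using the figures assumed
 # above: 30 nm smallest feature, atoms taken as roughly 2 nm across.
 import math
 
 feature_nm = 30.0   # assumed current smallest transistor feature size
 atom_nm = 2.0       # assumed effective atom diameter (middle of the 0.5-3 nm guess)
 
 shrink = feature_nm / atom_nm   # linear shrink remaining, ~15x
 halvings = math.log2(shrink)    # feature-size halvings before one-atom features
 
 print(f"Linear shrink remaining: ~{shrink:.0f}x")
 print(f"Feature-size halvings remaining: ~{halvings:.1f}")  # ~3.9, i.e. about 4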

This is actually very crucial. If no one can make a convincing argument that miniaturization will proceed as predicted, then I cannot accept predictions built on that assumption.

I understand that Feynman said there's "plenty of room at the bottom." That may have been true when he said it, back in 1959. But we may have actually gotten pretty close to the bottom by now. "For instance, the wires should be 10 or 100 atoms in diameter, and the circuits should be a few thousand angstroms across." Isn't that about where we're at? Today? Now? Are we around Feynman's limit?

So, I'd like to know something, and see some arguments, rather than just hearing: "Oh, it'll get smaller." Please, show me how.

I have heard claims that there will be more computing power in a grain of sand than in all of the world's computers, or than all of the computing power in a child's brain. That's a lot of computing power, and, frankly, I'm skeptical that such a density will ever be achieved. Wires themselves, even in the most optimistic projections I have seen, require thousands of atoms. And for enormous capabilities, we will require thousands, hundreds of thousands, even millions of wires.

If we want computers significantly smaller than, say, blood cells, floating around running Java code or whatever, I think we have a heck of a lot of explaining to do. I don't think the ITRS would support this vision. If a red blood cell is 7 micrometers in diameter, that gives you just 7,000 nanometers across in which to put your computer and all its manipulators. (You've also got to disguise the thing, so your body doesn't treat it poorly.) You've got to give it all the intelligence it needs to do whatever magic you have in mind. It's got to have a radio inside it, it's got to have security, it has to have a bunch of programs, it's got to have power (a battery, or some mechanism for drawing power from the body), and so on. You only have so much space in there: how are you going to fit it all in?

Suppose you can set aside 3,000 x 3,000 x 3,000 nanometers just for memory. Now let's say each bit requires 5 x 5 x 5 atoms to keep its record; we'll be liberal and suggest that includes everything you need to keep state, to address, to read, and to write. At roughly 2 nm per atom, that's 10 x 10 x 10 nanometers per bit. Our field is then 300 x 300 x 300 bits, or a paltry 3,375,000 bytes. We've got about 3 MB with which to take over the world from within a cell. Perhaps you can find a way to ramp this juggernaut up to 100 MB or so; we're still in a very finite world, for the complex things we want to imagine.
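Spelled out as a minimal sketch, using the same liberal assumptions:

 # Memory-budget arithmetic for the in-cell computer, using the liberal
 # assumptions stated above; none of these are real device parameters.
 budget_nm = 3_000   # assumed cube edge reserved for memory, in nm
 bit_nm = 10         # assumed bit-cell edge: 5 atoms at ~2 nm each
 
 bits_per_edge = budget_nm // bit_nm   # 300 bit cells along each edge
 total_bits = bits_per_edge ** 3       # 27,000,000 bits
 total_bytes = total_bits // 8         # 3,375,000 bytes, about 3 MB
 
 print(f"{bits_per_edge}^3 = {total_bits:,} bits = {total_bytes:,} bytes")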


 * The human genome is around 3,000,000,000 base pairs, each worth 2 bits. That's about 715 MB of storage, of which only around 11 MB (the ~1.5% that codes for proteins) is needed to code for a human being (recomputed in the sketch below). So:
 * The storage capacity is higher than the estimate being given.
 * We can do an awful lot more with 11 MB than you might think.
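For the record, a quick recomputation of those round numbers (the inputs are the approximations given above):

 # Genome storage estimate: 3 billion base pairs at 2 bits each,
 # with ~1.5% protein-coding. Figures are the reply's round numbers.
 base_pairs = 3_000_000_000
 bits_per_bp = 2          # A/C/G/T encodes 2 bits per base pair
 coding_fraction = 0.015  # ~1.5% of the genome codes for proteins
 
 total_bytes = base_pairs * bits_per_bp / 8   # 750,000,000 bytes
 total_mb = total_bytes / 2**20               # ~715 MB (binary megabytes)
 coding_mb = total_mb * coding_fraction       # ~11 MB
 
 print(f"Total: ~{total_mb:.0f} MB; protein-coding share: ~{coding_mb:.0f} MB")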

It's easy to say, "Oh, we'll find answers, we'll just keep miniaturizing." But we're running up against some pretty stark limits here: individual atoms. We may be able to find ways to achieve greater densities with quantum effects and the like, but my suspicion is that they will require macro-scale technologies in order to work. You know: a CD doesn't read itself; you've got to have a device to contain it, to spin it, and to read from it, and writing to it puts you in a different game again. You might be able to achieve incredible densities somehow with a cyclotron, but are we really going to carry one around? Density may be inaccessible without supporting machinery: the macro-scale CD reader.

Communications may be the answer: instead of the little device knowing what it wants to do, it'll just know to trust what something outside tells it to do. But which outside will it trust, if a cracker is sending signals as well? The little device needs enough intelligence to distinguish good signal from bad. This may well be a limiting factor. We do not know, at present.

Have we ever observed evolution working at the molecular scale? Yes: we see clear patterns of evolved machinery at the level of molecules. Atoms are bound together in clever ways that give us all these neat "cybernetic" systems at work within plants and animals.

Have we ever observed evolution working at the scale of individual atoms? Or at the quantum level, or anything like that? Nope, not that we know of. Nothing has found any advantage in one particular atom over another of the same kind. We see no evidence of "exploitability" below the level of molecules, only some building blocks used at the higher levels, and those building blocks do not seem to be "developed" within themselves. Nothing that we can exploit. We've got our smallest 1x1x1 Legos, but I don't see how we're going to get anything out of anything smaller. Nature has had billions of years, and hasn't seemed to figure out anything smaller, either.


 * Whilst highly controversial, Roger Penrose argues that we do actually use quantum effects in the brain, in microtubule assemblies. The apparent absence of sub-atomic engineering in nature may reflect our current poor understanding of it, rather than its actual absence.

There is certainly nothing on this page to address these questions and concerns. Nor have I seen anything elsewhere.

Thus, I think we should build caps on the growth of these technologies into our visions of the future.

We do have room for quite a bit of growth, it would seem to me. Perhaps we can keep Moore's law chugging past 2017-2018 (when the ITRS thinks we reach the smallest theoretical feature size for transistor technology) by going with rod logic or who knows what, and perhaps we can make it to 2025. Then we can start getting smarter about the patterns and designs: break backwards compatibility, try out radical designs, get more efficient at the macro scale of chip production and layout, go into 3-D designs, break the von Neumann bottleneck, what have you, and make it to 2030. Then we can just make really freaking BIG computers. Bring "big" back. Rely on communications more than the carting around of smart matter. Perhaps we can get Moore's law out to 2040 in a practical way like this. Maybe even 2050. But I don't see how we're going to get any further by exploiting the small; we'll just be making bigger and bigger computers. We'd care more about the technology behind factories, manufacturing, and interconnect than about basic research into smaller sizes.

It would require a major physics breakthrough, I believe, to get smaller. My primary argument is: "Nature didn't do it." Nature exploited almost all of the small-scale stuff that we're exploiting in our computers, but I haven't seen anything smaller exploited. So we'd have to come up with some major physics breakthrough, something that nature never figured out.

And I don't think we can just "buy" this breakthrough with sufficient intelligence. I don't think we can just say: "Well, we've had breakthroughs before, and we've always surpassed prior limits." Not so: Einstein's speed of light remains constant; c does not waver. I think we need more thinking here.

LionKimbro

I posted a link to this criticism to WiseNano wiki where this timeline first appeared. Paranoid 08:32, 21 June 2006 (UTC)

Thank you! Good idea. :) LionKimbro

Not Intended as a Prediction
As the creator of that timeline, I'd strongly urge you NOT to use it as a prediction. It's an illustration. If you go back to the source page you'll see it's part of a debate on whether there'll be a "big bang" event, when nanotechnology goes from the laboratory to omnipresence in a matter of days or weeks, or whether there will be a gradual increase in technology, with self-replicating molecular assemblers being just another incremental step. Putting years on the illustration was just to make it easier to understand. I'd be shocked if my guesses turned out to be right. Selenite