Achieving the computational capacity of the brain

Many approaches are being pursued to achieve this goal.

There is reason to suppose that if we can create genuine artificial intelligence, we can build neuron-like components that are a million times faster than biological ones. This leads to the conclusion that it is possible to create systems that think a million times more quickly than an individual human.

Everything has changed as a result of the acceleration of computation, from political institutions to social and economic relations. Gordon Moore, an American businessman and co-founder of Intel Corporation, is renowned for his contributions to semiconductor technology and for formulating "Moore's Law," but his papers neglected to mention that integrated-circuit scaling was not actually the first paradigm to bring exponential growth to computation and communication.

It was in fact the fifth, and the next was already beginning to take shape: computing at the molecular level and in three dimensions. Even though the fifth paradigm still has more than a decade of life left in it, all of the supporting technologies needed for the sixth paradigm have already made convincing progress.

Here are some of the technologies that could be used to achieve the computational capacity of the human brain.

The Bridge to 3D Molecular Computing

One method is to build three-dimensional circuits using "conventional" silicon lithography. Matrix Semiconductor already manufactures memory chips with many vertically stacked planes of transistors rather than a single flat layer. Matrix is focusing first on portable devices, where it hopes to compete with flash memory (used in cell phones and digital cameras because it does not lose information when the power is turned off): a single 3D chip can hold more memory while reducing the overall product size.

Stacked circuitry also reduces the overall cost per bit. One of Matrix's rivals, Fujio Masuoka, a former Toshiba engineer and the inventor of flash memory, is pursuing a different strategy. Masuoka asserts that his novel memory design, which resembles a cylinder, dramatically reduces the size and cost per bit of memory compared with flat chips.
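
A toy cost model makes the stacking argument concrete; all of the numbers below are made up for illustration and are not Matrix's actual figures.

```python
# Toy cost model for stacked memory: stacking planes multiplies the bits
# stored per unit of chip area, while fixed per-chip costs are shared
# across planes. All figures are illustrative assumptions.

def cost_per_bit(planes, bits_per_plane=1e9, fixed_cost=2.0, cost_per_plane=1.0):
    """Total chip cost divided by total bits stored."""
    total_bits = planes * bits_per_plane
    total_cost = fixed_cost + planes * cost_per_plane  # fixed cost is shared
    return total_cost / total_bits

for planes in (1, 2, 4, 8):
    print(planes, cost_per_bit(planes))
# 1 plane: 3.0e-09 per bit; 8 planes: 1.25e-09 per bit, and the same
# footprint now stores 8x the data.
```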

Nanotubes

Nanotubes can achieve high densities because of their small size—single-wall nanotubes have a diameter of only one nanometer. They are also potentially very fast.

According to a report in Science on July 6, 2001, a nanotube-based transistor measuring one by twenty nanometers and operating at room temperature uses a single electron to switch between its on and off states.
At about the same time, IBM demonstrated an integrated circuit with 1,000 nanotube-based transistors.

One of the difficulties in deploying this technology is that some nanotubes act like conductors and merely transport electricity, while others behave like semiconductors and can switch and form logic gates. The difference in capability rests on subtle structural features. These once had to be sorted out manually, which made designing large-scale circuits impractical. To overcome this problem, researchers at Berkeley and Stanford developed a fully automated method for separating out the semiconducting nanotubes.
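
What separates the two kinds of tube is chirality: a nanotube with chiral indices (n, m) is approximately metallic when n - m is divisible by 3 and semiconducting otherwise. The sketch below only illustrates that classification rule; the researchers' actual separation method was a chemical process, not a computation.

```python
# Classify carbon nanotubes by chirality: a tube with chiral indices (n, m)
# is (approximately) metallic when n - m is divisible by 3, and
# semiconducting otherwise. Only semiconducting tubes can switch and form
# logic gates. (Illustration only: the real sorting method is chemical.)

def is_semiconducting(n: int, m: int) -> bool:
    return (n - m) % 3 != 0

tubes = [(10, 10), (9, 0), (17, 0), (13, 0), (12, 8)]
usable = [t for t in tubes if is_semiconducting(*t)]
print(usable)  # -> [(17, 0), (13, 0), (12, 8)]; (10,10) and (9,0) are metallic
```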

Nanotubes also tend to grow in every direction, which makes aligning them in circuits difficult. In 2001, IBM scientists demonstrated that nanotube transistors could be mass-produced in much the same way as silicon transistors, using a technique known as "constructive destruction" that destroys defective nanotubes in place on the wafer rather than requiring them to be sorted out by hand.

Computing with Molecules

Alongside nanotubes, significant advances have recently been made in computing with just one or a few molecules. The idea of molecular computing was first proposed in the early 1970s by Mark A. Ratner of Northwestern University and Avi Aviram of IBM.

In 2002, researchers at the Universities of Wisconsin and Basel created an "atomic memory drive" that uses atoms to emulate a hard drive. A scanning tunneling microscope was used to add an atom to, or remove one from, a block of twenty silicon atoms. Although the demonstration used only a small number of bits, the researchers anticipate that the technique could store millions of times more data on a disk of comparable size: a density of around 250 terabits per square inch.
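
A toy model of the scheme, with the data layout simplified for illustration (the real device manipulates atoms with the microscope's tip):

```python
# Toy model of the atomic memory drive: each bit is the presence (1) or
# absence (0) of a single silicon atom at a known site on a track. In the
# real device, a scanning tunneling microscope adds or removes the atom;
# the layout here is a simplified assumption.

track = [0] * 8          # eight empty atomic sites

def write(position, bit):
    track[position] = bit   # "place" or "remove" an atom

def read(position):
    return track[position]

write(3, 1)
print(read(3))           # -> 1
```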

Self-Assembly

Self-assembly of nanoscale circuits is another key enabling technique for effective nanoelectronics. Self-assembly allows the potentially billions of circuit components to organize themselves rather than being painstakingly assembled in a top-down process, and it enables badly formed components to be discarded automatically.

In 2004, researchers from the University of Southern California and NASA's Ames Research Center demonstrated a technique that self-organizes extremely dense circuits in a chemical solution. The process generates nanowires spontaneously and then prompts nanoscale memory cells, each capable of storing three bits of data, to self-assemble onto the wires.

Emulating Biology

Biology, which depends on these very characteristics, is the inspiration for the idea of building self-replicating and self-organizing electronic or mechanical systems. Research published in the Proceedings of the National Academy of Sciences used prions, which are self-replicating proteins, to create self-replicating nanowires.

The project team chose prions as a model because of their innate strength. Prions do not ordinarily conduct electricity, however, so the researchers engineered a genetically modified form carrying a thin coating of gold, which conducts with low resistance.

DNA is, of course, the ultimate self-replicating biological molecule. Duke University researchers used self-assembling DNA molecules to make "tiles," small molecular building blocks. They were able to control the structure of the assembly, forming "nanogrids." With this method, protein molecules attach automatically to each cell of the nanogrid, and these could be used to perform computation.

Computing with DNA

DNA is nature’s very own nanoengineered computer, and specialized “DNA computers” have already made use of its capacity to store data and perform logical operations at the molecular level. In essence, a DNA computer is a test tube filled with water and trillions of DNA molecules, each of which functions as a computer.

The purpose of the computation is to solve a problem, with the solution expressed as a sequence of symbols (the symbols might stand for a mathematical argument or a collection of numbers, for instance). Here is how a DNA computer works. Each symbol is given a unique code, and a small strand of DNA is created for each of these codes. Each strand is then replicated trillions of times using a technique called the "polymerase chain reaction" (PCR), and these pools of DNA are placed in a test tube.

Because DNA has an affinity for linking strands together, long strands form naturally, with the sequences of the strands standing for the different symbols, each strand a possible solution to the problem. Since there will be many trillions of such strands, there are multiple strands for each possible answer (that is, for each possible sequence of symbols).
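
This generate-and-filter logic can be mimicked in ordinary code. The sketch below, a minimal illustration rather than anything from the book, uses a small subset-sum puzzle as the stand-in problem: enumerate every candidate "strand," then keep only those satisfying the constraint, just as amplification fills the tube with every sequence and subsequent chemical steps weed out the wrong ones.

```python
# DNA-computing strategy in miniature: generate an enormous pool of
# candidate symbol sequences (in a real test tube, strands self-assemble
# and PCR amplifies them by the trillions), then filter out every sequence
# that violates the problem's constraints. Survivors encode solutions.
# The subset-sum problem and numbers below are illustrative assumptions.

from itertools import product

NUMBERS = [3, 5, 8, 13]   # each symbol means "include this number or not"
TARGET = 16

# Step 1: "amplify" every possible strand (all 2^n include/exclude patterns).
pool = product([0, 1], repeat=len(NUMBERS))

# Step 2: "filter" the pool, keeping strands that satisfy the constraint.
solutions = [p for p in pool
             if sum(n for n, keep in zip(NUMBERS, p) if keep) == TARGET]

print(solutions)  # -> [(1, 0, 0, 1), (1, 1, 1, 0)], i.e. 3+13 and 3+5+8
```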

Computing with Spin

In addition to carrying a negative electrical charge, electrons have a property called spin that can be used for memory and computation. According to quantum mechanics, electrons spin on an axis much as the Earth does.

Since an electron is thought to occupy a mere point in space, it is hard to picture a point with no size that spins, so the idea is best treated as a theoretical construct. The magnetic field produced by a moving electrical charge, however, is real and measurable. An electron can spin in one of two directions, "up" or "down," and this property can be used to switch logic or to encode a bit of memory.

The fascinating aspect of spintronics is that the spin state of an electron can be changed without the need for energy.

Computing with Light

Another approach to SIMD (single instruction, multiple data) computing uses multiple laser beams, with information encoded in each stream of photons. Optical components can then apply logical and arithmetic operations to the encoded data streams. A system created by Lenslet, a small Israeli company, uses 256 lasers and can perform eight trillion calculations per second by executing the identical computation on each of its 256 streams of data.

Application areas for the technology include data compression for 256 video channels.
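
The same idea can be sketched in conventional code with NumPy, where one vectorized instruction acts on all lanes at once; the 256 lanes below echo Lenslet's 256 lasers, but the data and the operation are purely illustrative.

```python
# SIMD in miniature: a single instruction applied to many data streams at
# once. Lenslet's device does this optically; here the lanes and the
# operation are illustrative stand-ins. (At 8 trillion ops/s across 256
# streams, each stream carries roughly 31 billion ops/s: 8e12 / 256.)

import numpy as np

lanes = np.random.rand(256, 1024)    # 256 data streams of 1,024 samples each

# One vectorized "instruction" processes every lane simultaneously.
processed = np.where(lanes > 0.5, lanes * 2.0, 0.0)

print(processed.shape)               # (256, 1024): every lane, one operation
```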

Quantum Computing

Even more revolutionary than SIMD parallel processing, quantum computing is still in its infancy compared with the other emerging technologies covered here. A quantum computer contains a series of qubits, each of which is effectively both zero and one at once. The qubit is based on the fundamental ambiguity inherent in quantum mechanics. In a quantum computer, the qubits are represented by a quantum property of a particle, such as the spin state of an individual electron. When the qubits are in an "entangled" state, each one is simultaneously in both states.

A process known as "quantum decoherence" resolves each qubit's ambiguity, leaving an unambiguous sequence of ones and zeroes. If the quantum computer is set up correctly, that decohered sequence will be the solution to the problem: in essence, only the correct sequence survives the decoherence process.
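
A minimal state-vector sketch, simulated classically, of what that means: n qubits hold 2^n amplitudes at once, and measurement collapses them to a single definite bit string. The boosted index below is an arbitrary stand-in for the "proper order" a real quantum algorithm would amplify.

```python
# Classical simulation of a tiny qubit register: n qubits hold 2**n complex
# amplitudes, and measurement ("decoherence" in the text) yields one bit
# string, with probability given by the squared amplitude. The boost of
# index 5 is an assumed placeholder for a real algorithm's work.

import numpy as np

n = 3
state = np.ones(2**n, dtype=complex) / np.sqrt(2**n)  # uniform superposition

state[5] *= 10                        # pretend the algorithm boosted index 5
state /= np.linalg.norm(state)        # renormalize the amplitudes

probs = np.abs(state) ** 2            # Born rule: outcome probabilities
outcome = np.random.choice(2**n, p=probs)  # measurement: one outcome survives
print(format(outcome, "03b"))         # almost always '101' (index 5)
```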

Accelerating the Availability of Human-Level Personal Computing

Personal computers today provide more than 10^9 cps; by 2025 we will have 10^16 cps. There are, however, a number of ways to accelerate this timetable. Application-specific integrated circuits (ASICs) can offer better price-performance than general-purpose processors for extremely repetitive calculations. Such circuits already provide exceptionally high computational throughput for the repetitive calculations needed to render moving images in video games. ASICs can improve price-performance a thousand-fold, cutting the 2025 date by around eight years.
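
As a rough sanity check on that figure (the doubling time below is an assumption for illustration, not a number quoted in the text), a thousand-fold jump amounts to about ten doublings of price-performance:

```python
# Back-of-the-envelope check on the "eight years" claim. log2(1000) is
# about 10 doublings; the doubling time used here is our assumption.

import math

speedup = 1000                          # ASIC advantage over general-purpose chips
doublings = math.log2(speedup)          # ~9.97 doublings
years_per_doubling = 0.8                # assumed; at 1.0 the answer is ~10 years
print(round(doublings * years_per_doubling, 1))   # -> 8.0 years shaved off
```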

The many different processes in a simulation of the human brain will also involve a great deal of repetition, making them well suited to ASIC implementation. In the cerebellum, for instance, a basic wiring pattern is repeated billions of times.

We will also be able to amplify the power of personal computers by harvesting the unused computation of devices on the Internet. One new communication paradigm, mesh computing, treats every device in the network as a node in its own right. Instead of devices (such as personal computers and PDAs) merely sending information to and from central nodes, each device sends information to, and receives information from, every other device. The result will be very robust, self-organizing communication networks, and it will also be easier for computers and other devices to tap the spare CPU cycles of the mesh members in their region.

Human Memory Capacity

How much memory does one person hold, in computational terms? It turns out that our timing projections come out much the same if we base them on the requirements of human memory. An expert in a given field has typically mastered on the order of 10^5 "chunks" of knowledge.

These chunks represent both specific knowledge and patterns (such as faces). A world-class chess master, for example, is estimated to have mastered about 100,000 board positions. Shakespeare used 29,000 words, but those words carried close to 100,000 distinct meanings. The development of expert systems in medicine suggests that humans can master about 100,000 concepts in a given domain. If we assume that this "professional" knowledge represents only 1 percent of a person's overall patterns and knowledge, we arrive at an estimate of 10^7 chunks.

Experience with rule-based expert systems and self-organizing pattern-recognition systems suggests a plausible estimate of about 10^6 bits per chunk (whether a pattern or an item of knowledge), which yields a functional human memory capacity of about 10^13 (10 trillion) bits.
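
Putting the estimate together as explicit arithmetic:

```python
# The chapter's memory estimate, step by step.

expert_chunks = 10**5               # chunks mastered within one domain
total_chunks = expert_chunks * 100  # expert knowledge assumed ~1% of the total
bits_per_chunk = 10**6              # plausible chunk size in bits

print(f"{total_chunks * bits_per_chunk:.0e}")   # -> 1e+13 bits (~10 trillion)
```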

Machines that achieve the computational capacity of the human brain could bring far more advanced problem-solving and data-analysis capabilities. We could also have computers with richer memory and stronger learning abilities. Scientific research and technological development would surely accelerate in every field.

There are troubling implications as well. Machines smarter than humans could deceive and manipulate people, and, like human brains, they could exhibit unpredictable behaviors with unpredictable consequences. There could also be socio-economic impacts, since these machines will outperform humans at many tasks, as well as an environmental cost arising from their substantial computational and energy demands.

The Singularity Is Near, by Ray Kurzweil, is available to purchase here.
