33rd Square Business Tools: Moore's Law - All Posts
Showing posts with label Moore's Law.

Tuesday, October 3, 2017

Computers: Where They Came From and Where They're Going


The remarkable exponential progress of computer technology has continued for over a century, and the promise of artificial intelligence running on quantum computers with access to big data may soon be upon us. Nobody can predict exactly what will happen, but the future is sure to be exciting.



In the Beginning


When modern computers were first created, they used vacuum tubes to perform computations. The UNIVAC I weighed sixteen tons and took up over 350 square feet of space. Many scientists theorized that future computers would be the size of buildings.

Obviously that didn't happen, but only because a few brilliant physicists at Bell Labs invented the transistor. With transistors, computers thousands of times more powerful than a UNIVAC could fit into a space the size of a postage stamp.

Engineers have been making transistors smaller as time has passed, which has made computers cheaper and more powerful. In 1965 Gordon Moore predicted that manufacturers would be able to double the number of transistors in an integrated circuit every two years or so.

For decades this law held up, but transistors can only get so small before quantum tunneling prevents them from working properly. According to some experts, we’ll hit this barrier somewhere around the year 2026.


Looking to the Future

Many researchers don't think progress in computing power will end with transistors. The next step on the road to more powerful computers involves turning a limitation into a tool: the same electron quantum tunneling that is catastrophic for transistors is instrumental for quantum computers.

Quantum computers are still hard to work with, but research is promising. Google claims D-Wave's 2X quantum annealer is already over a hundred million times faster than traditional processors at certain tasks. Quantum computers won't necessarily ever do everything faster than traditional transistor-based computers, but the fields where they are superior have powerful implications for the manipulation of big data and for the computer industry as a whole.

Kurzweil Curve

The Goal

Modern computers are able to handle an awful lot of data, but according to Cisco, people around the world produce over 4.4 exabytes of data per month. That works out to roughly 1,700,000,000,000 bytes every single second. No modern computer can process anywhere near that amount of information, but quantum computers might be able to some day. They might be able to sift through that massive amount of data and spot patterns or irregularities that we'd never be able to find with modern processors.
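That per-second figure is simple arithmetic on Cisco's monthly total; here is a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the data-rate claim above.
EXABYTE = 10 ** 18                      # bytes

monthly_bytes = 4.4 * EXABYTE           # Cisco's ~4.4 exabytes per month
seconds_per_month = 30 * 24 * 60 * 60   # ~2.59 million seconds

rate = monthly_bytes / seconds_per_month
print(f"{rate:.2e} bytes/second")       # ~1.70e+12, i.e. about 1.7 TB/s
```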

Nobody really knows how long this could take. It will require more research and changes in computer architecture. Since there are few quantum computers around, the number of people who can truly test the limits of quantum computing is small. IBM is experimenting with letting people use quantum computers via the internet, but this doesn't give quantum computers access to the vast amounts of information that would be required to see if they could effectively analyze big data.

There are many resources for streaming information on the internet already. Apache Kafka, for instance, is an open source platform for distributed streaming. Netflix, Spotify, Uber and many more companies already use it to monitor and deliver their streaming information.

If it or another distributed streaming platform could be adapted to handle quantum computer architecture, it could in theory be used to give programs access to vast amounts of data so that researchers could experiment, improve, and find out what quantum computers are really capable of.
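For readers curious what consuming such a stream looks like in practice, here is a minimal sketch using the community kafka-python client. The broker address and topic name are placeholders invented for illustration, not part of any real deployment:

```python
# Minimal Kafka consumer sketch. Assumes a broker at localhost:9092 and a
# topic named "sensor-readings" -- both placeholders for this illustration.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-readings",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",              # start at the oldest record
    value_deserializer=lambda raw: raw.decode("utf-8"),
)

for record in consumer:
    # Each record is one event in the stream; an analysis job, quantum or
    # classical, would consume millions of these per second.
    print(record.topic, record.offset, record.value)
```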

Artificial intelligence is already all over the place, whether people realize it or not. IBM's Deep Blue beat the world chess champion back in 1997, and its Watson system later won at Jeopardy! against the best human players. Amazon Echo's Alexa can interpret many things its owner says and give useful advice. Google Maps can guide someone through a town, choosing the best route and rerouting on the fly.

Of course, these forms of AI are limited and nothing like what most people might imagine when they think of artificial intelligence. But quantum computing just might help researchers reach the next level of artificial intelligence. Nobody can predict exactly what will happen, but the future of AI research using quantum computers with access to big data looks bright.


By Mark Palmer

Author Bio - Mark Palmer is a small business expert and has a passion for helping entrepreneurs make the most out of their company. As a freelance writer, Mark hopes to influence others so they can have a positive business experience.



Tuesday, June 6, 2017

Moore's Law Has Another Life with Development of 5 Nanometer Chip


Moore's Law

IBM and Samsung have developed a first-of-a-kind process to build silicon nanosheet transistors that will enable 5 nanometer chips. The resulting increase in performance will help accelerate artificial intelligence, the Internet of Things (IoT) and other data-intensive applications delivered in the cloud. The power savings alone might mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.


"The economic value that Moore’s Law generates is unquestionable. That’s where innovations such as this one come into play, to extend scaling not by traditional ways but coming up with innovative structures."
IBM and Samsung have announced the development of an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips.

The breakthrough means that silicon technology has yet again extended the potential of Moore's Law.

Less than two years after developing a 7nm test node chip with 20 billion transistors, the researchers involved have paved the way for 30 billion switches on a fingernail-sized chip.


“The economic value that Moore’s Law generates is unquestionable. That’s where innovations such as this one come into play, to extend scaling not by traditional ways but coming up with innovative structures,” says Mukesh Khare, vice president of semiconductor research for IBM Research.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering's NanoTech Complex in Albany, NY, achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor, instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology.

Moore's Law extended again
IBM scientists at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering's NanoTech Complex in Albany, NY prepare test wafers with 5nm silicon nanosheet transistors, loaded into front-opening unified pods, or FOUPs, to test the process of building 5nm transistors using silicon nanosheets. Image Source - Connie Zhou / IBM

The silicon nanosheet transistor demonstration, detailed in the Research Alliance paper "Stacked Nanosheet Gate-All-Around Transistor to Enable Scaling Beyond FinFET" and presented at the 2017 Symposia on VLSI Technology and Circuits, proves that 5nm chips are possible, more powerful, and not too far off in the future.

5 Nanometer Chip
Pictured: a scan of IBM Research Alliance’s 5nm transistor, built using an industry-first process to stack silicon nanosheets as the device structure – achieving a scale of 30 billion switches on a fingernail-sized chip that will deliver significant power and performance enhancements over today’s state-of-the-art 10nm chips. Image Source - IBM

Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES, stated: “As we make progress toward commercializing 7nm in 2018 at our Fab 8 manufacturing facility, we are actively pursuing next-generation technologies at 5nm and beyond to maintain technology leadership and enable our customers to produce a smaller, faster, and more cost efficient generation of semiconductors.”

IBM Research has explored nanosheet semiconductor technology for more than 10 years. This work is the first in the industry to demonstrate the feasibility to design and fabricate stacked nanosheet devices with electrical properties better than FinFET architecture.

The scientists applied the same extreme ultraviolet (EUV) lithography approach used to produce the 7nm test node and its 20 billion transistors to the nanosheets in the new transistor architecture. Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design.

This adjustability allows the performance and power of specific circuits to be fine-tuned, something not possible with today's FinFET transistor architecture.

Dr. Bahgat Sammakia, interim president of SUNY Polytechnic Institute, said, “We believe that enabling the first 5nm transistor is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities.”

Full implementation of this technology will still require 10 to 15 years of further development, according to some reports.

The details of the process will be presented at the 2017 Symposia on VLSI Technology and Circuits conference in Kyoto, Japan.




SOURCE  IBM


By 33rd Square





Thursday, August 4, 2016

4 Interesting Facts about Microchips


Technology

Microchips are involved in almost every aspect of life: they are present in essentially all electronic devices, which have become so prevalent that it's impossible to get away from them.


Microchips are everywhere, but how do they work? How can they be so small? The answers are quite amazing, and there's much more to learn about these fascinating little components. Here are four facts about microchips that you probably didn't know.

1. Transistors Run Things

The heart of a microchip is the transistor, or more accurately the transistors: there are literally hundreds of millions of them on each individual chip, and the connections between those transistors allow for high-level calculations at a fast, efficient pace. Intel makes so many transistors that it can't say precisely how many it sells in a year, but it estimates the number at 10 quintillion. That's a 10 with 18 zeroes after it, and that's just the production rate of a single company.


2. Abundant Material

Modern microchips are printed on slices of silicon, which is where the 'chip' half of microchip comes from. This is not only convenient for the chip-making process itself, which companies like Streamline Circuits handle with ease thanks to the semiconducting nature of silicon, but it is also ideal for production costs. Silicon is the second most abundant element in Earth's crust, second only to oxygen.

3. Small & Fast

The newest generation of transistors is built at the nanoscale, and these transistors run dramatically faster at their smaller size. It has been estimated that as many as 30 million of these nanotransistors could fit on the head of a standard ink pen. Given the shrinking size gap between nanotransistors and human neurons, Intel believes it will be able to produce a single microchip with as many transistors as there are neurons in the brain by the year 2026.

4. Moore's Law

This isn't actually a scientific law, but rather a sort of prophecy regarding the rate at which technology would improve once the transistor-based computer took over. Moore's Law basically says that the number of transistors on a chip will double every two years. So far, that law has held for over 50 years.

Microchips are only going to continue playing a huge part in society as the years progress. Chips will get smaller, faster, and more energy efficient, and soon they will be able to do things we can't even imagine today!
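The doubling rule is easy to see in a few lines of Python. The 1965 starting count of 64 components is illustrative rather than a precise historical figure:

```python
# Illustration of Moore's Law: transistor counts doubling every two years.
# The 1965 baseline of 64 components per chip is an illustrative assumption.
base_year, base_count = 1965, 64

for year in range(1965, 2016, 10):
    doublings = (year - base_year) / 2
    print(year, f"{base_count * 2 ** doublings:,.0f}")
# By 2015 this gives ~2.1 billion transistors, the right order of
# magnitude for a high-end chip of that era.
```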

Moore's Law


By Dixie Somers


Author Bio - Dixie is a freelance writer who loves to write about business, finance and self improvement. She lives in Arizona with her husband and three beautiful daughters.


Monday, December 21, 2015

Raspberry Pi Zero Shows How Remarkable Exponential Technology Is


Exponential Technology

A recently shared comparison of a 'super computer' from 1957 and the latest offering from Raspberry Pi clearly demonstrates the power of exponential technologies.


Compared to its own predecessor, the Raspberry Pi Zero is half the size of a Model A+, with twice the utility. This tiny computer is even more remarkable for its cost and performance if we turn back the clock a few more years.

Twitter users @SadHappyAmazing and @HistoricalPics recently posted two photographs juxtaposing the Raspberry Pi Zero with the 1957 photograph of the Elliott 405 computer being delivered to the Norwich City Council Treasurer's Department building.
The folks at blog dds extended the comparison here:


| | Elliott 405 | Raspberry Pi Zero |
| --- | --- | --- |
| Year | 1957 | 2015 |
| Price | £85,000 (1957) | $5 |
| Instruction cycle time | 10.71-0.918 ms (93-1089 Hz) | 1 ns (1 GHz clock) |
| Main memory | 16 kB drum store | 512 MB LPDDR2 SDRAM |
| Fast memory | 1,280 bytes (nickel delay lines) | 32 kB (16 kB instruction + 16 kB data L1 cache) |
| Secondary memory | 1.2 MB (300,000-word magnetic film) | 8 GB (typical microSD flash card, not included) |
| Output bandwidth | 25 characters/s | 373 MB/s (1080p60 HDMI) |
| Weight | 3-6 tons | 9 g |
| Size | | |
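To make the scale of the comparison concrete, here is a rough calculation of a few of the improvement factors (midpoint figures, ignoring inflation and currency conversion):

```python
# Rough improvement factors implied by the table above.
weight = 4.5e6 / 9            # ~4.5 tonnes (midpoint) vs 9 g
speed  = 10.71e-3 / 1e-9      # 10.71 ms vs 1 ns per instruction
memory = 512e6 / 16e3         # 512 MB vs 16 kB of main memory

print(f"{weight:.0e}x lighter")    # ~5e+05
print(f"{speed:.0e}x faster")      # ~1e+07
print(f"{memory:,.0f}x more RAM")  # 32,000
```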



The comparison reminds us of another example we recently shared on our Facebook page.



SOURCE  blog dds


By 33rd Square


Wednesday, September 30, 2015

Researchers Use Genetic Algorithms To Push Past the Limits of Moore's Law


Genetic Algorithms


Scientists have demonstrated working electronic circuits that have been 'designed' in a radically new way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer in operation to natural networks like our brain.
 


Scientists of the MESA+ Institute for Nanotechnology and the CTIT Institute for ICT Research at the University of Twente in The Netherlands have demonstrated working electronic circuits that have been produced in a novel way, using methods that resemble Darwinian evolution. The size of these circuits is comparable to the size of their conventional counterparts, but they are much closer to natural networks like the human brain.


The findings promise a new generation of powerful, energy-efficient electronics, and have been published in the journal Nature Nanotechnology.

During the last few years computers have become more and more powerful by integrating ever smaller components on silicon chips, in the familiar pattern of Moore's Law. The technology has reached a point where it is becoming increasingly difficult and extremely expensive to continue this miniaturization. 

Current transistors consist of only a handful of atoms. It is a major challenge to produce chips in which the millions of transistors have the same characteristics, and thus to make the chips operate without error. Another drawback is that their energy consumption is reaching unacceptable levels. It is obvious that one has to look for alternative directions, and it is interesting to see what we can learn from nature. Natural evolution has led to powerful ‘computers’ like the human brain, which can solve complex problems in an energy-efficient way. Nature exploits complex networks that can execute many tasks in parallel.

genetic algorithm


The approach of the researchers at the University of Twente is based on methods that resemble those found in nature. They have used networks of gold nanoparticles for the execution of essential computational tasks. Also, unlike conventional electronics, they have moved away from designed circuits. 

With these 'designless' systems, costly design mistakes are avoided. The computational power of the networks is unlocked by applying artificial evolution, also known as genetic algorithms. This evolution takes less than an hour, rather than millions of years. By applying electrical signals, one and the same network can be configured into 16 different logic gates. The evolutionary approach works around - or can even take advantage of - possible material defects that would be fatal in conventional electronics.
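To give a flavor of how a genetic algorithm can 'configure' a system into a logic gate, here is a toy Python sketch. A tiny threshold network stands in for the gold nanoparticle device, and its 13-number genome plays the role of the control voltages; this illustrates the evolutionary method in general, not the researchers' actual setup:

```python
import random

random.seed(0)

# Truth table for XOR, the target behavior we want to evolve.
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def network(genome, a, b):
    # A tiny 2-3-1 threshold network. Its 13 numbers play the role of the
    # control voltages applied to the real nanoparticle network.
    hidden = [max(0.0, genome[3*i] * a + genome[3*i + 1] * b + genome[3*i + 2])
              for i in range(3)]
    out = sum(w * h for w, h in zip(genome[9:12], hidden)) + genome[12]
    return 1 if out > 0 else 0

def fitness(genome):
    # Number of input patterns (out of 4) the network gets right.
    return sum(network(genome, a, b) == y for (a, b), y in XOR.items())

# Evolution: keep the fittest genomes, mutate them, repeat.
population = [[random.uniform(-1, 1) for _ in range(13)] for _ in range(60)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == 4:
        break  # the network now behaves as an XOR gate
    survivors = population[:12]
    population = [[g + random.gauss(0, 0.2) for g in random.choice(survivors)]
                  for _ in range(60)]

print(generation, fitness(population[0]))
```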

This is the first time scientists have succeeded in realizing robust electronics of this kind with dimensions that can compete with commercial technology.

"With this research we have delivered proof of principle: demonstrated that our approach works in practice."


According to lead researcher Wilfred van der Wiel, the circuits currently generated with the artificial evolution method still have limited computing power. “But with this research we have delivered proof of principle: demonstrated that our approach works in practice. By scaling up the system, real added value will be produced in the future. Take for example the efforts to recognize patterns, such as with face recognition. This is very difficult for a regular computer, while humans and possibly also our circuits can do this much better." 

Another key advantage may be that this type of circuitry uses much less energy, both in the production, and during use. The researchers anticipate a wide range of applications, in portable electronics and in the medical field.


SOURCE  University of Twente


By 33rd Square


Thursday, July 16, 2015

We All Rely on Them, But How Does a Microchip Work?

 Technology


Microchips have undoubtedly revolutionized the world. Find out more about these technological powerhouses that have enabled exponential growth of nearly every aspect of our economy and culture.





Microchips are in almost every piece of technology. They're an essential component that provides for the miniaturization, energy efficiency and raw computational power of devices. The microchip is the single thing that has allowed Moore's Law to remain true.

These tiny things are invaluable to people around the world, yet few realize just how amazing microchips are. Let's examine what they are, how they work and some of the unique opportunities they create.

What is a Microchip?

A microchip is an electrical component that utilizes layers of semiconducting materials to create logic circuits. These circuits are special in that they are extremely small, incredibly efficient and allow trillions of operations to be conducted per second in just one part of a larger device.

One of the most interesting things about microchips is the wide variety that exists. Some act as controllers for temperature probes, others test the conductivity of semifluid materials, and others simply perform countless calculations.

The best way to think of a microchip is as a much larger circuit shrunk down to a very small size.


How Does a Microchip Work?

The most basic building block of any microchip is the transistor. These variable-resistance circuits can be networked together to perform almost any logical operation, which in turn is what allows digital devices to work.

Each of these transistor circuits outputs a voltage that varies with how much voltage is supplied, the thickness of the semiconductive layer within the transistor, and the transistor's maximum output.

When you put billions of these tiny transistors in a network together, companies like Semi Source can create a microchip that performs countless calculations on something smaller than a dime.
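The reason a network of simple switches can perform almost any operation is gate universality: NAND, which takes only a couple of transistors to build, can be wired into every other logic function. A small Python illustration of that composition:

```python
# NAND is "universal": every other gate can be built from it alone,
# which is why networks of simple transistor switches suffice.
def NAND(a, b):
    return 1 - (a & b)

def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))

def XOR(a, b):
    # The classic four-NAND construction of exclusive-or.
    n = NAND(a, b)
    return NAND(NAND(a, n), NAND(b, n))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
```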

Further Astounding Effects

The uses of the microchip extend past simply making electronics more efficient and compact. Microchips make every process, from resource gathering to refining, manufacturing and distribution, exponentially more efficient than it would be otherwise.

They can even make possible things that would otherwise be unthinkable. The best example pertains to space flight, where there are countless microchips in one vehicle. Each one serves a purpose, adding redundancy or providing real-time calculations of sensitive flight trajectories and other processing-intensive operations that would be too difficult for a pilot to do in his or her head.

The Importance of the Microchip

Microchips have revolutionized the world, much the way steam revolutionized industry or the combustion engine changed transportation forever.

By understanding how microchips work, you likely now have a firm grasp of just how important these tiny components are.



By Anica Oaks


Author Bio - A recent college graduate from University of San Francisco, Anica loves dogs, the ocean, and anything outdoor-related. She was raised in a big family, so she's used to putting things to a vote. Also, cartwheels are her specialty. You can connect with Anica here.

Wednesday, April 29, 2015

Critical Steps to Building First Practical Quantum Computer Made

 Quantum Computers
Scientists have made two advances needed to create viable quantum computers. They have shown the ability to detect and measure both kinds of quantum errors simultaneously, and they have created a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions.





Scientists at IBM have unveiled two critical advances towards the realization of practical quantum computers. For the first time, they showed the ability to detect and measure both kinds of quantum errors simultaneously, as well as demonstrated a new, square quantum bit circuit design that is the only physical architecture that could successfully scale to larger dimensions.

With Moore's Law expected to run out of steam, quantum computing will be among the inventions that could usher in a new era of innovation across industries. Quantum computers promise to open up new capabilities in the fields of optimization and simulation simply not possible using today's computers. If a quantum computer could be built with just 50 quantum bits (qubits), no combination of today's supercomputers could successfully outperform it.

The IBM breakthroughs, described in the journal Nature Communications, show for the first time the ability to detect and measure the two types of quantum errors (bit-flip and phase-flip) that will occur in any real quantum computer. Until now, it was only possible to address one type of quantum error or the other, but never both at the same time. This is a necessary step toward quantum error correction, which is a critical requirement for building a practical and reliable large-scale quantum computer.

IBM Square Lattice Quantum Computer Chip

IBM's new and complex quantum bit circuit, based on a square lattice of four superconducting qubits on a chip roughly one-quarter-inch square, enables both types of quantum errors to be detected at the same time. By opting for a square-shaped design versus a linear array – which prevents the detection of both kinds of quantum errors simultaneously – IBM's design shows the best potential to scale by adding more qubits to arrive at a working quantum system.

"Quantum computing could be potentially transformative, enabling us to solve problems that are impossible or impractical to solve today," said Arvind Krishna, senior vice president and director of IBM Research. "While quantum computers have traditionally been explored for cryptography, one area we find very compelling is the potential for practical quantum systems to solve problems in physics and quantum chemistry that are unsolvable today. This could have enormous potential in materials or drug design, opening up a new realm of applications."

For instance, in physics and chemistry, quantum computing could allow scientists to design new materials and drug compounds without expensive trial and error experiments in the lab, potentially speeding up the rate and pace of innovation across many industries.

For a world consumed by Big Data, quantum computers could quickly sort and curate ever larger databases as well as massive stores of diverse, unstructured data. This could transform how people make decisions and how researchers across industries make critical discoveries.

One of the great challenges for scientists seeking to harness the power of quantum computing is controlling or removing quantum decoherence – the creation of errors in calculations caused by interference from factors such as heat, electromagnetic radiation, and material defects. The errors are especially acute in quantum machines, since quantum information is so fragile.

"Up until now, researchers have been able to detect bit-flip or phase-flip quantum errors, but never the two together. Previous work in this area, using linear arrangements, only looked at bit-flip errors offering incomplete information on the quantum state of a system and making them inadequate for a quantum computer," said Jay Gambetta, a manager in the IBM Quantum Computing Group. "Our four qubit results take us past this hurdle by detecting both types of quantum errors and can be scalable to larger systems, as the qubits are arranged in a square lattice as opposed to a linear array."

"Quantum computing could be potentially transformative, enabling us to solve problems that are impossible or impractical to solve today."


The most basic piece of information that a typical computer understands is a bit. Much like a beam of light that can be switched on or off, a bit can have only one of two values: "1" or "0". However, a quantum bit (qubit) can hold a value of 1 or 0 as well as both values at the same time, described as superposition and simply denoted as "0+1". The sign of this superposition is important because both states 0 and 1 have a phase relationship to each other. This superposition property is what allows quantum computers to choose the correct solution among millions of possibilities in a time much faster than a conventional computer.

Two types of errors can occur on such a superposition state. One is called a bit-flip error, which simply flips a 0 to a 1 and vice versa. This is similar to classical bit-flip errors, and previous work has shown how to detect these errors on qubits. However, this is not sufficient for quantum error correction, because phase-flip errors can also be present, which flip the sign of the phase relationship between 0 and 1 in a superposition state. Both types of errors must be detected in order for quantum error correction to function properly.
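On paper the two error types are the Pauli X (bit-flip) and Pauli Z (phase-flip) operations, and a short NumPy sketch (a generic textbook illustration, not IBM's code) shows why detecting only one kind is not enough:

```python
import numpy as np

# A qubit state a|0> + b|1> as a 2-vector; this is the superposition "0+1".
state = np.array([1.0, 1.0]) / np.sqrt(2)

X = np.array([[0, 1], [1, 0]])   # bit-flip error: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])  # phase-flip error: flips the sign of |1>

print(X @ state)  # [0.707 0.707] -- "0+1" is unchanged by a bit-flip
print(Z @ state)  # [0.707 -0.707] -- but a phase-flip turns it into "0-1"

# A bit-flip is invisible on this state while a phase-flip is not; on the
# states |0> and |1> the situation reverses. A scheme that watches for only
# one kind of error therefore misses half the picture.
```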

quantum computers

Quantum information is very fragile because all existing qubit technologies lose their information when interacting with matter and electromagnetic radiation. Theorists have found ways to preserve the information much longer by spreading it across many physical qubits. "Surface code" is the technical name for a specific error correction scheme that spreads quantum information across many qubits. It requires only nearest-neighbor interactions to encode one logical qubit, making the encoded information sufficiently stable for error-free operations.

The research team used a variety of techniques to measure the states of two independent syndrome (measurement) qubits. Each reveals one aspect of the quantum information stored on two other qubits (called code, or data qubits). Specifically, one syndrome qubit revealed whether a bit-flip error occurred to either of the code qubits, while the other syndrome qubit revealed whether a phase-flip error occurred. Determining the joint quantum information in the code qubits is an essential step for quantum error correction because directly measuring the code qubits destroys the information contained within them.

Because these qubits can be designed and manufactured using standard silicon fabrication techniques, the company anticipates that once a handful of superconducting qubits can be manufactured reliably and repeatedly, and controlled with low error rates, there will be no fundamental obstacle to demonstrating error correction in larger lattices of qubits.

Next, the Experimental Quantum Computing team is planning to make a similar lattice with eight qubits. “Thirteen or 17 qubits is the next important milestone,” Jerry Chow, Manager of Experimental Quantum Computing at IBM Research, told TechCrunch, because it's at that point that they'll be able to start encoding logic into the qubits, which is when things start to get really interesting.


SOURCE  Market Watch

By 33rd Square

Thursday, April 23, 2015

50 Years of Moore's Law and Beyond

 Computers
Moore's Law, the trend that has delivered ever-faster computers every couple of years, is now 50 years old. Will a new paradigm take its place to continue our progress in the years to come?





April 19th recently marked the 50th anniversary of Moore's Law: the prediction, since borne out, of tremendous exponential growth in computing power, a trend that has powered industry and civilization and led to the further prediction of a technological Singularity.

The invention of the integrated circuit in 1958 started off the electronics revolution. In 1965, Gordon Moore, a chemist turned electronics engineer, noticed that in the years since the first integrated circuits were built, engineers had managed to roughly double the number of components, such as transistors, on a chip every year.

Gordon Moore

Moore also predicted that this rate of doubling — which he later revised to once every two years — would continue for at least another decade. It turns out he was too conservative: the trend has extended for many more decades.

The transistors continued to shrink, leading to microprocessors with ever-higher performance and functionality.

First plot of Moore's Law
The original plot by Gordon Moore
Initially the semiconductor industry met Moore's Law mainly through feats of engineering genius and gigantic strides in manufacturing processes. Gradually, though, basic science came to play a major role as well, one that grows ever more important as the physical limits of further shrinkage are encountered. From the transistor to contemporary microprocessors, each success has built upon the last, driving innovation forward.

In the 1990s, when components reached around 100 nanometres across, miniaturization began to have adverse effects, worsening performance. Science was called upon to improve the performance of transistor materials. Major help came from condensed-matter physics.

The field was well aware that silicon conducts electricity even better once its crystal lattice is stretched. Engineers introduced such strained silicon into chips in the 2000s, and Moore's Law continued along at full steam.

Now microprocessors have transistors that are just 14 nanometres wide, and Moore’s Law is reaching its ultimate physical limits.

Waste heat in particular has become a source of concern. It has already caused one form of Moore’s law — the exponential acceleration of computer ‘clock speed’ — to grind to a halt. Power-hungry chips also limit the ability of mobile devices to survive more than a few hours between charges.

Further efforts might yet bring one or two more generations of smaller transistors, down to a size of perhaps 5 nanometres, but beyond that we will require fundamentally new physics.

Researchers around the world are experimenting with approaches and materials to shift Moore's Law to a new paradigm. So far every potential obstacle to the Law has been overcome, but will the next one be?


SOURCE  Nature

By 33rd Square

Monday, January 19, 2015


 Exponential Technologies
At the recent DLD Conference, economist Andrew McAfee used the example of the second half of the chessboard as it relates to the power of exponential technologies, and what the future may bring.




From time to time, we still get asked, "Why do you call the website 33rd Square?"  Contrary to some opinions, it is not based on some esoteric Masonic symbolism or the Knights Templar. The 33rd square on a chessboard represents the beginning of what Andrew McAfee and Erik Brynjolfsson call, "the second half of the chessboard."

As McAfee tells it in the video above, recorded at the recent Digital-Life-Design (DLD) Conference in Munich, Germany, the second half of the chessboard relates to the parable of the invention of the game of chess. As the legend goes, the inventor of chess introduced his game to the emperor of India, who was so pleased that he offered the inventor any reward he named. Appearing humble, the inventor asked for a single grain of rice on the first square of the chessboard, two on the second, four on the third, and so on.


Image Source: www.etereaestudios.com

Soon after the piles of rice entered the second half of the chessboard, the emperor figured out what was going on, and as some versions of the story go, the inventor lost his head as a result.

What the story relates, as we know, is the power of exponential growth, which in technological terms is synonymous with Moore's Law. As McAfee continues, citing Ray Kurzweil, "things only get crazy in the second half of the chessboard." Our human intuition can't cope with the constant doubling after the 32nd square of the board.

In their book, The Second Machine Age, McAfee and Brynjolfsson calculated that if Moore's Law started in 1958, and the doubling period was every 18 months, "by that calculation, we entered the second half of the chessboard with digital progress in about 2006."
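Both the parable and that 2006 date reduce to a couple of lines of arithmetic:

```python
# The rice parable in numbers: grains double on every square.
last_square = 2 ** 63            # grains on square 64
total = 2 ** 64 - 1              # grains on the whole board
print(f"{last_square:.2e} on the last square, {total:.2e} in total")

# McAfee and Brynjolfsson's dating of the second half of the chessboard:
# 32 doublings at 18 months each, starting from 1958.
print(1958 + 32 * 1.5)           # -> 2006.0
```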

"That really helps me to understand why we are seeing smartphones and self-driving cars, and automatic translation and powerful artificial intelligence and this amazing parade of technologies. I think of them as second half of the chessboard technologies."


"For me that really helps me to understand why we are seeing smartphones and self-driving cars, and automatic translation and powerful artificial intelligence and this amazing parade of technologies," he says.  "I think of them as second half of the chessboard technologies."

If this calculated date of 2006 is reasonable, the clear implication McAfee points out is that we are only at the beginning of this.  We are still in the early part of the second half of the chessboard.

McAfee is also concerned about the dark side of the rise of technology.  One of these is the increase of winner-take-all dynamics in the new digital age.

Essentially optimistic, McAfee says that "the pie is getting bigger." He suggests that the new situation is creating a distribution problem above all else.

When an audience member suggested that recent economic data does not match his predictions, McAfee countered that the headline unemployment numbers paint too rosy a picture, since so many workers have simply stopped looking for jobs and are no longer counted. "Some of my colleagues are really worried about the Singularity and really worried about when we have super powerful artificial intelligence—let me tell you something: if things continue on the same trajectory they are now, the people are going to rise up way before the machines do."


SOURCE  DLD Conference

By 33rd Square

Thursday, October 30, 2014

DNA-based electrical circuits

 Molecular Electronics
Scientists have announced a significant breakthrough toward developing DNA-based electrical circuits. Molecular electronics, which uses molecules as building blocks for the fabrication of electronic components, has been seen as the ultimate solution to overcoming the limits of Moore's Law.




In new research published in Nature Nanotechnology, an international group of scientists has announced the most significant breakthrough in a decade toward developing DNA-based electrical circuits.

The central technological revolution of the 20th century was the development of computers, leading to the communication and Internet era. The main measure of this evolution is miniaturization: making our machines smaller.

"This research paves the way for implementing DNA-based programmable circuits for molecular electronics, which could lead to a new generation of computer circuits that can be more sophisticated, cheaper and simpler to make."


A computer with the memory of the average laptop today was the size of a tennis court in the 1970s. Yet while scientists made great strides in reducing the size of individual computer components through microelectronics, as Moore's Law shows, they have been less successful at reducing the distance between transistors, the main element of our computers. These spaces between transistors have been much more challenging and extremely expensive to miniaturize – an obstacle that limits the future development of computers.

Danny Porath

Molecular electronics, which uses molecules as building blocks for the fabrication of electronic components, was seen as the ultimate solution to the miniaturization challenge. However, to date, no one has actually been able to make complex electrical circuits using molecules. The only known molecules that can be pre-designed to self-assemble into complex miniature circuits, which could in turn be used in computers, are DNA molecules. Nevertheless, so far no one has been able to demonstrate reliably and quantitatively the flow of electrical current through long DNA molecules.

Now, an international group led by Professor Danny Porath, the Etta and Paul Schankerman Professor in Molecular Biomedicine at the Hebrew University of Jerusalem, reports reproducible and quantitative measurements of electricity flow through long molecules made of four DNA strands, signaling a significant breakthrough towards the development of DNA-based electrical circuits. The research, which could re-ignite interest in the use of DNA-based wires and devices in the development of programmable circuits, appears in the prestigious journal Nature Nanotechnology under the title "Long-range charge transport in single G-quadruplex DNA molecules."

Porath is affiliated with the Hebrew University's Institute of Chemistry and its Center for Nanoscience and Nanotechnology. The molecules were produced by the group of Alexander Kotlyar from Tel Aviv University, who has been collaborating with Porath for 15 years. The measurements were performed mainly by Gideon Livshits, a PhD student in the Porath group, who carried the project forward with great creativity, initiative and determination. The research was carried out in collaboration with groups from Denmark, Spain, the US, Italy and Cyprus.

According to Porath, "This research paves the way for implementing DNA-based programmable circuits for molecular electronics, which could lead to a new generation of computer circuits that can be more sophisticated, cheaper and simpler to make."



SOURCE  The Hebrew University of Jerusalem

By 33rd Square

Tuesday, September 23, 2014


 Artificial Intelligence
This month, the Stanford Graduate School of Business held a sold-out session on deep learning, moderated by Steve Jurvetson.




A machine learning approach inspired by the human brain, deep learning is taking many industries by storm. Empowered by the latest generation of commodity computing, deep learning is beginning to derive significant value from Big Data. It has already radically improved the computer's ability to recognize speech and identify objects in images, two fundamental hallmarks of human intelligence.

In deep learning, each layer of representation builds upon the previous ones, so that, as the system goes deeper, the representation becomes clearer.
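That layering idea can be sketched in a few lines of NumPy. The random weights below are purely illustrative, standing in for what training would actually learn:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # One layer: a linear map followed by a ReLU nonlinearity. In a real
    # network the weights are learned; here they are random placeholders.
    W = rng.normal(size=(x.shape[0], n_out))
    return np.maximum(0.0, W.T @ x)

x = rng.normal(size=64)       # raw measurements, e.g. pixel values
h1 = layer(x, 32)             # low-level features (edges, say)
h2 = layer(h1, 16)            # mid-level features (shapes)
h3 = layer(h2, 8)             # high-level features (whole objects)
print(h1.shape, h2.shape, h3.shape)   # (32,) (16,) (8,)
```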

"Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort."


Industry giants such as Google, Facebook, and Baidu have acquired most of the dominant players in this space to improve their product offerings. At the same time, startup entrepreneurs are creating a new paradigm, Intelligence as a Service, by providing APIs that democratize access to deep learning algorithms.

Recently, moderator Steve Jurvetson, partner at DFJ Ventures, discussed trends in artificial intelligence with panelists Adam Berenzweig, co-founder and CTO of Clarifai; Naveen Rao, co-founder and CEO of Nervana Systems; Elliot Turner, founder and CEO of AlchemyAPI; and Ilya Sutskever, research scientist at Google Brain, exploring the new application domains, business models, and key players in this emerging field.

The discussion took place at the Stanford Graduate School of Business this month.

Jurvetson explains why deep learning has had so much impact on AI in the last few years. First, there's a lot more data around because of the Internet; there's metadata such as tags and translations; and there are even services such as Amazon's Mechanical Turk, which allows for cheap labeling or tagging. Jurvetson focuses on new developments in artificial intelligence, including Big Data and algorithmic advances in using unlabeled data, unsupervised training and successive layers of learning, that are leading to many new advances in the field.

Jurvetson, like others, states that neural networks have been around for a long time; the new factor is that they can now process Big Data thanks to the progress of Moore's Law. In the case of Google Brain, for instance, a network with 1 billion synapses was used to detect cats in YouTube videos.

A fan of Ray Kurzweil, Jurvetson states, when showing Kurzweil's Moore's Law plot, that it may be the most important graph ever plotted, not just in technology and business. He also mentions that quantum computing may have considerable impact on deep learning as well. As a concluding image, Jurvetson shows a picture of a black swan drinking from a fire hose to represent what is coming in the field of artificial intelligence.

Kurzweil's Moore's Law

Berenzweig, a former engineer at Google for 10 years, made the case that deep learning is “adding a new primary sense to computing” in the form of useful computer vision. Deep learning is forming that bridge between the physical world and the world of computing, according to Berenzweig. “Now we’re getting into a world where we can take measurements of the physical world, like pixels in a picture, and turn them into symbols that we can sort,” he says.

clarifai image recognition

Berenzweig gives a demo of the Clarifai system, which you can try at http://clarifai.com/. For Clarifai, the ImageNet database is incredibly useful for scoring its image recognition system. The convolutional neural network used by Clarifai can now recognize 10,000 objects.

Deep learning is “that missing link between computing and what the brain does,” according to Nervana Systems' Rao. It is more than just fast computation now: “We can start building new hardware that takes computer processing in a whole new direction,” more like the brain does. This also opens up the business case for deep learning. "We really need to scale these things up," says Rao.

What excites Sutskever, a colleague of Geoffrey Hinton, is that neural networks and deep learning actually work. "It may seem that when we have these programs that do very complicated things, the programs must be very complicated themselves, but that is not the case," he says.

Turner states that his company's mission is to “democratize deep learning.” The company works across many industries, from advertising to financial services to business intelligence, helping companies apply deep learning to their businesses. It features demos of text/language and image recognition on its website.

Turner is excited by the transition in machine learning. "Despite having the word 'machine' in the name, machine learning historically had a lot of human involvement.  It relied on the innate cleverness of individuals in the process known as feature engineering to translate raw data into something that traditional shallow learning algorithms could effectively deal with." These processes are now becoming weakly supervised or even unsupervised.

These breakthroughs are also allowing smaller teams and companies to compete with larger organizations.

Jurvetson asks the panel whether this work is a path leading to AI (based on his description, we can assume he means AGI). Turner replies that a lot of different problems are now being solved with human data - what is important to people (pictures of houses, cats, dogs, etc.) - and that this will be a factor in how neural networks evolve.

Sutskever says that it is progress on learning principles that matters: "Whenever we make conceptual progress on learning principles, we will make very huge practical progress very rapidly."


SOURCE  VLAB

By 33rd Square