33rd Square Business Tools: microprocessors
Showing posts with label microprocessors. Show all posts

Sunday, December 18, 2016

Amazing Circuitry Advances in the 21st Century


Electronics

We take them for granted—the microscopic, and soon to be nanoscopic, circuitry that makes our electronic and digital lives possible. The advancement of the electronic circuit has taken the average computer from being the size of a house to small enough to hold in the palm of your hand.


Electronics technology has taken a leap forward in the last few decades, and the pace of this advancement continues to accelerate. In a world that is now almost entirely reliant on computers and electronic communication technology, it can be easy to forget about or dismiss the microscopic, and soon to be nanoscopic, circuitry that makes it all possible. It is the advancement of the circuit that has taken the average computer from being the size of a house to small enough to hold in the palm of your hand.


The Role of a Circuit

All computers and most types of electronic systems function based on a surprisingly simple “on” or “off” principle. The function of a circuit in any electronic device is to provide this on or off switch that regulates the amount of power flowing through the device. More advanced computing power simply uses a greater number of circuit switches to provide greater complexity. The key advancement in computer technology in the last century has been to make this simple technological concept increasingly smaller and more efficient. Two basic principles stand out in the development of circuit and thus computer chip technology. The more circuits you can put into a smaller space, the more powerful, smaller and lighter the resulting machine will be. At the same time the faster and cheaper you can produce those circuit boards, the less expensive the end-product will become. Advances in both these principles are the cornerstone of our technological age.
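The on/off principle above can be sketched in a few lines. This is a toy software model, not real circuitry: each "switch" is a boolean, and composing a handful of switches already performs arithmetic.

```python
# Toy model of the on/off principle: each "switch" is a boolean, and
# composing enough switches yields arithmetic. (Illustration only.)

def NAND(a: bool, b: bool) -> bool:
    """Output is off only when both inputs are on."""
    return not (a and b)

# Every other gate can be built from NAND switches alone.
def NOT(a):    return NAND(a, a)
def AND(a, b): return NOT(NAND(a, b))
def OR(a, b):  return NAND(NOT(a), NOT(b))
def XOR(a, b): return AND(OR(a, b), NAND(a, b))

def half_adder(a: bool, b: bool):
    """More switches, more capability: two composed gates add one bit."""
    return XOR(a, b), AND(a, b)  # (sum, carry)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

Real chips apply exactly this composition, only with billions of switches instead of a handful.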


The Microchip

One of the most significant advances in circuit technology is the microchip. A microchip is a tiny wafer of semi-conductive material, usually silicon derived from ultra-refined sand, upon which microscopic circuits can be etched to create an integrated circuit. The full process of creating a microchip is complex, but the result makes modern computers possible. This etched circuit technology uses computer designs and light to create circuits in the microscale.


The Printed Circuit Board

Originally, circuit boards were time-consuming and difficult to construct. They involved large and bulky materials that usually had to be assembled by hand. They were prone to impurities and failures that hampered their functionality. Companies like Streamline Circuits know that the printed circuit board, or PCB, was a breakthrough in the circuit manufacturing process that allowed the circuit material to be “printed” onto a board of nonconductive material like fiberglass. The printed material is usually a metallic “ink” based on copper that creates the conductive pathways. The PCB can be easily custom designed using computers and rapidly assembled to very precise specifications. This process, when combined with other circuit advances, is what makes most technology affordable for the average consumer today.

microchips


Carbon Nanotube Circuits

There is a fear among modern technology manufacturers that the standard silicon circuit may be reaching its apex. A stall in circuit technology means a halt to electronics advancement. The cutting edge of circuit design now involves hybrid chips that include carbon nanotubes in the traditional silicon structure. This allows the circuit to function with much greater speed, so that fewer circuits are needed to provide the same power. This increase in efficiency is predicted to make up for the inability to add more circuits to future computer chips.

Electronics technology continues to advance rapidly. As long as our technology is based on the flow of electrons through conductors, the circuit will continue to play a vital role in our everyday life.



By Rachelle Wilber



Monday, June 20, 2016

UC Davis Unveils First 1,000-Processor Chip


Microprocessors

Researchers at UC Davis have created a new microprocessor, called “KiloCore,” which is made up of 1,000 independent processors. KiloCore has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors.


Researchers at California's UC Davis have created a microchip containing 1,000 independent programmable processors. The energy-efficient “KiloCore” chip has a maximum computation rate of 1.78 trillion instructions per second and contains 621 million transistors. The KiloCore was presented at the 2016 Symposium on VLSI Technology and Circuits in Honolulu this month.

“To the best of our knowledge, it is the world’s first 1,000-processor chip and it is the highest clock-rate processor ever designed in a university,” said Bevan Baas, professor of electrical and computer engineering, who led the team that designed the chip architecture. While other multiple-processor chips have been created, none exceed about 300 processors, according to an analysis by Baas’ team. Most were created for research purposes and few are sold commercially. The KiloCore chip was fabricated by IBM using their 32 nm CMOS technology.

Each processor core can run its own small program independently of the others, which is a fundamentally more flexible approach than so-called Single-Instruction-Multiple-Data approaches utilized by processors such as GPUs; the idea is to break an application up into many small pieces, each of which can run in parallel on different processors, enabling high throughput with lower energy use, Baas said.
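As a rough software analogy for that distinction (this is not the KiloCore toolchain, just an illustration of the idea), each worker below runs its own small program over the data, whereas a SIMD device would drive every lane with one identical instruction stream:

```python
from concurrent.futures import ThreadPoolExecutor

# Three independent "programs", standing in for independent cores.
def checksum(data):  return sum(data) % 256
def scale(data):     return [x * 2 for x in data]
def threshold(data): return [x > 10 for x in data]

stream = [3, 14, 15, 9, 26]

# Each submitted task is a *different* program over the same data,
# which is the MIMD-style flexibility described above.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(f, stream) for f in (checksum, scale, threshold)]
    results = [f.result() for f in futures]

print(results)  # [67, [6, 28, 30, 18, 52], [False, True, True, False, True]]
```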

KiloCore


Each processor is independently clocked, so it can shut itself down to further save energy when not needed, said graduate student Brent Bohnenstiehl, who developed the principal architecture. Cores operate at an average maximum clock frequency of 1.78 GHz, and they transfer data directly to each other rather than using a pooled memory area that can become a bottleneck for data.

The chip is the most energy-efficient “many-core” processor ever reported, Baas said. For example, the 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts, low enough to be powered by a single AA battery. The KiloCore chip executes instructions more than 100 times more efficiently than a modern laptop processor.
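The efficiency figures quoted above are easy to sanity-check with back-of-envelope arithmetic; note that the AA battery capacity below is an assumed round figure, not a measured one.

```python
# Back-of-envelope check of the efficiency figures quoted above.
instructions_per_second = 115e9   # 115 billion instructions per second
power_watts = 0.7                 # reported dissipation

instructions_per_joule = instructions_per_second / power_watts
print(f"{instructions_per_joule:.2e} instructions per joule")

# Assume an AA battery stores roughly 10 kJ (a round-number assumption):
aa_battery_joules = 10_000
hours = aa_battery_joules / power_watts / 3600
print(f"about {hours:.1f} hours of full-tilt operation on one AA")
```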

Applications already developed for the chip include wireless coding/decoding, video processing, encryption, and others involving large amounts of parallel data such as scientific data applications and datacenter record processing.

The team has completed a compiler and automatic program mapping tools for use in programming the chip.


SOURCE  UC Davis


By 33rd Square


Thursday, July 16, 2015

We All Rely on Them, But How Does a Microchip Work?

 Technology


Microchips have undoubtedly revolutionized the world. Find out more about these technological powerhouses that have enabled exponential growth of nearly every aspect of our economy and culture.





Microchips are in almost every piece of technology. They're an essential component that provides for the miniaturization, energy efficiency and raw computational power of devices. The microchip is the single thing that has allowed Moore's Law to remain true.

That means these tiny things are invaluable for most people around the world, but most people fail to realize just how amazing microchips are. Let's examine what they are, how they work and some of the unique opportunities they create.

What is a Microchip?

A microchip is an electrical component that utilizes layers of semi-conducting materials to create logic circuits. These circuits are special in that they are extremely small, incredibly efficient and allow for trillions of operations to be conducted per second in just one part of a larger device.

The most interesting part about the microchip is that there are a wide variety of microchips that exist. Some act as controllers for temperature probes, others work to test the conductivity of a semifluid material and others simply perform countless calculations.

The best way to think of a microchip is as if it was a much larger circuit shrunk into a very small size.


How Does a Microchip Work?

The most basic building block of any microchip is the transistor. These variable resisting circuits can be networked together to perform almost any electrical operation, which in turn allows digital devices to work.

Each of these transistor circuits will output a variable voltage depending upon how much voltage is supplied, the thickness of the semi-conductive layer within the transistor and the maximum output for the transistor.
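That behavior can be caricatured numerically. The sketch below models a transistor as an idealized voltage-controlled switch with an assumed 0.5 V threshold; the values are illustrative, not real device physics.

```python
# Crude numerical caricature of a transistor as a voltage-controlled switch:
# the output swings between the supply rails depending on whether the input
# crosses an assumed threshold voltage. (Illustrative values only.)
V_DD = 1.0         # supply voltage
V_THRESHOLD = 0.5  # assumed switching threshold

def inverter(v_in: float) -> float:
    """Idealized inverter: input above threshold pulls the output low."""
    return 0.0 if v_in > V_THRESHOLD else V_DD

def nand(v_a: float, v_b: float) -> float:
    """Output goes low only when both inputs are above threshold."""
    both_on = v_a > V_THRESHOLD and v_b > V_THRESHOLD
    return 0.0 if both_on else V_DD

print(nand(0.9, 0.9), nand(0.9, 0.1))  # 0.0 1.0
```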

When you network billions of these tiny transistors together, companies like Semi Source can create a microchip that performs countless calculations on something smaller than a dime.

Further Astounding Effects

The uses of the microchip extend past simply making electronics more efficient and compact. They make every process, from resource gathering to refining, manufacturing and distribution exponentially more efficient than they would be otherwise.

They can even make things that would otherwise be unthinkable possible. The best example of this pertains to space flight, where there are countless microchips in one vehicle. Each one serves a purpose to add redundancy or to provide real-time calculations of sensitive flight trajectories and other processing-intensive operations that would be too difficult for a pilot to do in his or her head.

The Importance of the Microchip

Microchips have revolutionized the world. They act similarly to the way that steam revolutionized industry or combustion engines changed transportation forever.

By understanding how microchips work, you likely now have a firm grasp of just how important these tiny components are.



By Anica OaksEmbed


Author Bio - A recent college graduate from University of San Francisco, Anica loves dogs, the ocean, and anything outdoor-related. She was raised in a big family, so she's used to putting things to a vote. Also, cartwheels are her specialty. You can connect with Anica here.

Wednesday, December 17, 2014

Nanotech ‘High-Rise’ 3D Chips Developed by Researchers

 Computers
Researchers have built 3D “high-rise” chips that could leapfrog the performance of the single-story logic and memory chips on today’s circuit cards, which are subject to frequent traffic jams between logic and memory.




Researchers at Stanford University have built a new multi-layered "high-rise" chip that could significantly outperform traditional computer chips, taking on the hefty workloads that will be needed for the Internet of Things, Big Data and to continue the exponential trends in computation after Moore's Law.

By using nanotechnology, the new chips are built with layers of processing on top of layers of memory, greatly cutting down on the time and energy typically needed to move information from memory to processing and back.

Max Shulaker, a researcher on the project and a Ph.D. candidate in Stanford's Department of Electrical Engineering, said they have built a four-layer chip but could easily see building a 100-layer chip if that were needed.

"The slowest part of any computer is sending information back and forth from the memory to the processor and back to the memory. That takes a lot of time and lot of energy," Shulaker told Computerworld. "If you look at where the new exciting apps are, it's with big data… For these sorts of new applications, we need to find a way to handle this big data."

The conventional separation of memory and logic is not well-suited for these types of heavy workloads. With traditional chip design, information is passed from the memory to the processor for computing, and then it goes back to the memory to be saved again.

In relative terms, that takes a lot of energy and time – way more than the computation itself.
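A toy cost model makes that point concrete. The per-event energy numbers below are illustrative assumptions, not measurements, but the ratio they produce shows why data movement can dwarf the arithmetic it feeds:

```python
# Toy energy model: the per-event costs are illustrative assumptions,
# not measurements, but the ratio shows why movement dominates compute.
ENERGY_ALU_OP_PJ = 1.0         # assumed cost of one arithmetic op (pJ)
ENERGY_DRAM_ACCESS_PJ = 640.0  # assumed cost of one off-chip access (pJ)

def cost_separate_memory(n_ops):
    """Traditional layout: two operand reads and one write per operation."""
    return 3 * n_ops * ENERGY_DRAM_ACCESS_PJ + n_ops * ENERGY_ALU_OP_PJ

def cost_stacked_memory(n_ops, vertical_discount=0.01):
    """Memory stacked on logic: assume far cheaper vertical transfers."""
    return (3 * n_ops * ENERGY_DRAM_ACCESS_PJ * vertical_discount
            + n_ops * ENERGY_ALU_OP_PJ)

ratio = cost_separate_memory(1_000_000) / cost_stacked_memory(1_000_000)
print(f"separate memory costs ~{ratio:.0f}x more energy in this model")
```

The exact discount for vertical transfers is a guess; the qualitative conclusion survives any reasonable choice.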

"People talk about the Internet of Things, where we're going to have millions and trillions of sensors beaming information all around," said Shulaker. "You can beam all the data to the cloud to organize all the data there, but that's a huge data deluge. You need [a chip] that can process on all this data… You want to make sense of this data before you send it off to the cloud."

That, he noted, would make working with the cloud, as well as with the Internet of Things, more efficient.

The new high-rise chip is based on three emerging technologies, according to Stanford.

The researchers, led by Subhasish Mitra, a Stanford associate professor of electrical engineering and computer science, and H.S. Philip Wong, a professor in Stanford's school of engineering, used carbon nanotube transistors instead of silicon and replaced typical memory with resistive random-access memory (RRAM) or spin-transfer torque magnetic random-access memory (STT-RAM). Both use less power and are more efficient than traditional memory systems.



The third new technique is to build the logic and memory technologies in layers that sit on top of each other in what scientists describe as "high-rise" structures.

"The connectivity between the layers increases by three orders of magnitude or a thousand times the benefit in bandwidth of how much data you can move back and forth," Shulaker said. "For all of these Internet of Things applications, all of them would run much, much more efficiently and much, much faster. For way less energy, you'd be able to do way more work."

Shulaker said they've built four-layer chips but could build many more layers. Now they're trying to figure out what size structure gives the most benefit for the cost of the build.

“This research is at an early stage, but our design and fabrication techniques are scalable,” Mitra said. “With further development this architecture could lead to computing performance that is much, much greater than anything available today.” Wong said the prototype chip unveiled at the IEEE International Electron Devices Meeting shows how to put logic and memory together into three-dimensional structures that can be mass-produced.

The researchers also said the chips could be built in a traditional chip fabrication plant without much retooling. Shulaker declined to say what kind of interest the researchers are receiving from commercial computer chip manufacturers but did say they are collaborating with industry.


SOURCE  Computer World

By 33rd Square

Friday, January 24, 2014

Cooling microprocessor chips through the combination of carbon nanotubes and organic molecules

 Carbon Nanotubes
Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab) have developed a “process friendly” technique that would enable the cooling of microprocessor chips through carbon nanotubes.




Researchers with the U.S. Department of Energy (DOE)’s Lawrence Berkeley National Laboratory (Berkeley Lab), with funding from Intel have developed a “process friendly” technique that would enable the cooling of microprocessor chips through carbon nanotubes.

Frank Ogletree, a physicist with Berkeley Lab’s Materials Sciences Division, led a study in which organic molecules were used to form strong covalent bonds between carbon nanotubes and metal surfaces. This improved by six-fold the flow of heat from the metal to the carbon nanotubes, paving the way for faster, more efficient cooling of computer chips. The technique is done through gas vapor or liquid chemistry at low temperatures, making it suitable for the manufacturing of computer chips.

“We’ve developed covalent bond pathways that work for oxide-forming metals, such as aluminum and silicon, and for more noble metals, such as gold and copper,” says Ogletree, who serves as a staff engineer for the Imaging Facility at the Molecular Foundry, a DOE nanoscience center hosted by Berkeley Lab. “In both cases the mechanical adhesion improved so that surface bonds were strong enough to pull a carbon nanotube array off of its growth substrate and significantly improve the transport of heat across the interface.”

Researchers Cool Microprocessors with Carbon Nanotubes
Image Source - Nature Communications
Ogletree is the corresponding author of a paper describing this research in Nature Communications. The paper is titled “Enhanced Thermal Transport at Covalently Functionalized Carbon Nanotube Array Interfaces.” Co-authors are Sumanjeet Kaur, Nachiket Raravikar, Brett Helms and Ravi Prasher.

Overheating is the bane of microprocessors. As transistors heat up, their performance can deteriorate to the point where they no longer function as transistors. With microprocessor chips becoming more densely packed and processing speeds continuing to increase, the overheating problem looms ever larger. The first challenge is to conduct heat out of the chip and onto the circuit board where fans and other techniques can be used for cooling. Carbon nanotubes have demonstrated exceptionally high thermal conductivity but their use for cooling microprocessor chips and other devices has been hampered by high thermal interface resistances in nanostructured systems.

“The thermal conductivity of carbon nanotubes exceeds that of diamond or any other natural material but because carbon nanotubes are so chemically stable, their chemical interactions with most other materials are relatively weak, which makes for  high thermal interface resistance,” Ogletree says.

Sumanjeet Kaur, lead author of the Nature Communications paper and an expert on carbon nanotubes, with assistance from co-author and Molecular Foundry chemist Brett Helms, used reactive molecules to bridge the carbon nanotube/metal interface – aminopropyl-trialkoxy-silane (APS) for oxide-forming metals, and cysteamine for noble metals. First vertically aligned carbon nanotube arrays were grown on silicon wafers, and thin films of aluminum or gold were evaporated on glass microscope cover slips. The metal films were then “functionalized” and allowed to bond with the carbon nanotube arrays. Enhanced heat flow was confirmed using a characterization technique developed by Ogletree that allows for interface-specific measurements of heat transport.

“You can think of interface resistance in steady-state heat flow as being an extra amount of distance the heat has to flow through the material,” Kaur says. “With carbon nanotubes, thermal interface resistance adds something like 40 microns of distance on each side of the actual carbon nanotube layer. With our technique, we’re able to decrease the interface resistance so that the extra distance is around seven microns at each interface.”
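Kaur's "extra distance" picture reduces to simple arithmetic. The nanotube layer thickness below is an assumed round figure for illustration; the 40-micron and 7-micron interface penalties come from the text.

```python
# Arithmetic behind the "extra distance" picture of interface resistance.
layer_um = 100.0   # assumed nanotube layer thickness (illustrative)
before_um = 40.0   # extra distance per interface, untreated (from the text)
after_um = 7.0     # extra distance per interface with covalent bonding

effective_before = layer_um + 2 * before_um  # one interface on each side
effective_after = layer_um + 2 * after_um

improvement = effective_before / effective_after
print(effective_before, effective_after)             # 180.0 114.0
print(f"{improvement:.2f}x shorter effective path")  # 1.58x
```

A thinner nanotube layer would make the interface penalty, and hence the improvement, loom even larger.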

Although the approach used by Ogletree, Kaur and their colleagues substantially strengthened the contact between a metal and individual carbon nanotubes within an array, a majority of the nanotubes within the array may still fail to connect with the metal. The Berkeley team is now developing a way to improve the density of carbon nanotube/metal contacts. Their technique should also be applicable to single and multi-layer graphene devices, which face the same cooling issues.

Ogletree says, “In developing this technique to address a real-world technology problem, we also created tools that yield new information on fundamental chemistry.”



SOURCE  Berkeley Lab

By 33rd Square

Tuesday, January 7, 2014


 Gadgets
Intel has introduced Edison, a Quark technology-based computer housed in an SD card form factor with built-in wireless capabilities and support for multiple operating systems. Intel says it will “enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when available this summer.”





Intel has introduced Edison, a new Quark technology-based computer housed in an SD card form factor with built-in wireless capabilities and support for multiple operating systems.

The Edison processor is made with a 22-nm process, and it's backed by a handful of onboard features. It has integrated Wi-Fi and Bluetooth connectivity in addition to LPDDR2 memory and NAND-based storage. Intel hasn't revealed exactly how much memory and storage has been squeezed inside the tiny form factor yet.

Edison from Intel

Intel says Edison will “enable rapid innovation and product development by a range of inventors, entrepreneurs and consumer product designers when available this summer.” The device is mainly aimed at developers, who Intel hopes will use it to build the next generation of wearable and connected devices.

That said, Intel is leading by example, and showed a small collection of design concept products using embedded Edison chips, like a toy frog that monitors a baby's vital signs, an LED coffee cup that reports them to a parent, and a milk warmer that starts heating when the frog hears the baby cry.


SOURCE  Intel

By 33rd Square

Sunday, October 13, 2013


 Microprocessors
Qualcomm has a new processor in the works that aims to bring a wealth of new enhancements to mobile processors as we know it. The biologically inspired Zeroth processors can learn as they go and can be taught through positive reinforcement.




Taking a ground-up approach to its latest design, Qualcomm has built the Zeroth processor with speed and power efficiency in mind.

The company's research teams have been working on the new computer architecture that mimics the human brain and nervous system so devices can have embedded cognition driven by brain-inspired computing.

There are three main goals for Qualcomm Zeroth processors:

1. Biologically Inspired Learning

"We want Qualcomm Zeroth products to not only mimic human-like perception but also have the ability to learn how biological brains do," writes Samir Kumar, Qualcomm's Director of Business Development. "Instead of preprogramming behaviors and outcomes with a lot of code, we’ve developed a suite of software tools that enable devices to learn as they go and get feedback from their environment."

In the video below, a robot outfitted with a Qualcomm Zeroth processor is placed in an environment with colored boxes. The researchers were then able to teach it to visit white boxes only. They did this through dopaminergic-based learning, a.k.a. positive reinforcement—not by programming lines of code.

2. Enable Devices To See and Perceive the World as Humans Do

Brain Architecture

Another major pillar of Zeroth processor function is striving to replicate the efficiency with which our senses and our brain communicate information, says Kumar. Neuroscientists have created mathematical models that accurately characterize biological neuron behavior when neurons are sending, receiving or processing information. Neurons send precisely timed electrical pulses referred to as “spikes” only when a certain voltage threshold in a biological cell’s membrane is reached.

These spiking neural networks (SNN) encode and transmit data very efficiently in both how our senses gather information from the environment and then how our brain processes and fuses all of it together.
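A minimal leaky integrate-and-fire neuron, the standard textbook model of this spiking behavior (not Qualcomm's actual Zeroth implementation), shows how a spike fires only when accumulated input crosses a membrane threshold:

```python
# Leaky integrate-and-fire neuron: input accumulates with a leak, and a
# precisely timed "spike" is emitted only when a threshold is crossed.
def simulate_lif(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate with leak
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Steady weak input eventually accumulates to a spike; strong input
# reaches threshold much faster.
print(simulate_lif([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
# [0, 0, 0, 1, 0, 0, 1]
```

The sparse output stream is what makes spiking encodings so energy-efficient: silence costs nothing.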

3. Creation and Definition of a Neural Processing Unit (NPU)


Qualcomm Neural Processing Unit

The final goal of Qualcomm Zeroth is to create, define and standardize this new processing architecture, called a Neural Processing Unit (NPU). Qualcomm envisions NPUs in a variety of different devices, but also able to live side-by-side in future systems-on-chip. "This way you can develop programs using traditional programming languages, or tap into the NPU to train the device for human-like interaction and behavior," writes Kumar.



SOURCE  Qualcomm

By 33rd Square

Tuesday, October 8, 2013

Silicon Photonics

 Moore's Law
Researchers have developed a new technique in silicon photonics that could allow for exponential improvement in microprocessors to continue well into the future. By using light waves instead of electrical wires for microprocessor communication the breakthrough could eliminate some of the limitations now faced by conventional microprocessors.






A pair of breakthroughs in the field of silicon photonics by DARPA-funded researchers at the University of Colorado Boulder, the Massachusetts Institute of Technology and Micron Technology Inc. could allow for the trajectory of exponential improvement in microprocessors that began nearly half a century ago—the famous Moore’s Law—to continue well into the future, allowing for increasingly faster electronics, from supercomputers to laptops to smartphones.

The research team, led by CU-Boulder researcher Milos Popovic, an assistant professor of electrical, computer and energy engineering, developed a new technique that allows microprocessors to use light, instead of electrical wires, to communicate with transistors on a single chip, a system that could lead to extremely energy-efficient computing and a continued skyrocketing of computing speed into the future.

Popovic and his colleagues created two different optical modulators—structures that detect electrical signals and translate them into optical waves—that can be fabricated within the same processes already used in industry to create today’s state-of-the-art electronic microprocessors. The modulators are described in a recent issue of the journal Optics Letters.

Moore's Law


First laid out in 1965, Moore’s Law predicted that the size of the transistors used in microprocessors could be shrunk by half about every two years for the same production cost, allowing twice as many transistors to be placed on the same-sized silicon chip. The net effect would be a doubling of computing speed every couple of years.
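That doubling rule is easy to express directly. The 1971 figure below is the transistor count of Intel's first microprocessor, the 4004; the 40-year projection is illustrative, not an exact historical fit.

```python
# Moore's Law as stated above: transistor counts double roughly every
# two years at the same production cost.
def projected_transistors(start_count, start_year, end_year, doubling_years=2):
    periods = (end_year - start_year) / doubling_years
    return start_count * 2 ** periods

# Intel's 4004 (1971) had about 2,300 transistors; projecting 40 years
# forward lands in the billions, roughly matching chips of that era.
print(projected_transistors(2_300, 1971, 2011))  # 2411724800.0
```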

The projection has held true until relatively recently. While transistors continue to get smaller, halving their size today no longer leads to a doubling of computing speed. That’s because the limiting factor in microelectronics is now the power that’s needed to keep the microprocessors running. The vast amount of electricity required to flip on and off tiny, densely packed transistors causes excessive heat buildup.

“The transistors will keep shrinking and they’ll be able to continue giving you more and more computing performance,” Popovic said. “But in order to be able to actually take advantage of that you need to enable energy-efficient communication links.”

Microelectronics also are limited by the fact that placing electrical wires that carry data too closely together can result in “cross talk” between the wires.

In the last half-dozen years, microprocessor manufacturers, such as Intel, have been able to continue increasing computing speed by packing more than one microprocessor into a single chip to create multiple “cores.” But that technique is limited by the amount of communication that then becomes necessary between the microprocessors, which also requires hefty electricity consumption.

Using light waves instead of electrical wires for microprocessor communication functions could eliminate the limitations now faced by conventional microprocessors and extend Moore’s Law into the future, Popovic said.

Optical communication circuits, known as photonics, have two main advantages over communication that relies on conventional wires: Using light has the potential to be brutally energy efficient, and a single fiber-optic strand can carry a thousand different wavelengths of light at the same time, allowing for multiple communications to be carried simultaneously in a small space and eliminating cross talk.

Optical communication is already the foundation of the Internet and the majority of phone lines. But to make optical communication an economically viable option for microprocessors, the photonics technology has to be fabricated in the same foundries that are being used to create the microprocessors. Photonics have to be integrated side-by-side with the electronics in order to get buy-in from the microprocessor industry, Popovic said.

“In order to convince the semiconductor industry to incorporate photonics into microelectronics you need to make it so that the billions of dollars of existing infrastructure does not need to be wiped out and redone,” Popovic said.

Last year, Popovic collaborated with scientists at MIT to show, for the first time, that such integration is possible. “We are building photonics inside the exact same process that they build microelectronics in,” Popovic said. “We use this fabrication process and instead of making just electrical circuits, we make photonics next to the electrical circuits so they can talk to each other.”

In two papers published last month in Optics Letters with CU-Boulder postdoctoral researcher Jeffrey Shainline as lead author, the research team refined their original photonic-electronic chip further, detailing how the crucial optical modulator, which encodes data on streams of light, could be improved to become more energy efficient. That optical modulator is compatible with a manufacturing process—known as Silicon-on-Insulator Complementary Metal-Oxide-Semiconductor, or SOI CMOS—used to create state-of-the-art multicore microprocessors such as the IBM Power7 and Cell, which is used in the Sony PlayStation 3.

The researchers also detailed a second type of optical modulator that could be used in a different chip-manufacturing process, called bulk CMOS, which is used to make memory chips and the majority of the world’s high-end microprocessors.

“This innovation could enable optical processor-to-memory interconnects in supercomputers within five years, and likely within consumer electronics like game consoles and cell phones as well,” Popovic told KurzweilAI. “Chips enabled with optical processing could also impact an array of other areas, including medical imaging and advanced analog signal processing like analog-to-digital conversion. The bottom line is that optics is becoming an essential part of advanced microelectronics as performance continues to scale.”



SOURCE  University of Colorado Boulder

By 33rd Square

Wednesday, September 25, 2013

Intel Shows Moore's Law Not Over With 14 Nanometer Gate Broadwell Chips

 Moore's Law
Intel recently revealed its 14-nanometer Broadwell chips were operational and performing very well, so well that in the future your laptop may not even need an annoying fan.




Recently, Intel CEO Brian Krzanich revealed that Broadwell, the successor to its current Haswell processor line, is up and running.

"Fourteen nanometers is here, it's working, and will be shipping by the end of this year," said Krzanich.

The chips, with a mere 14-nanometer gate, are said to be already exhibiting a "30 percent power improvement," Krzanich said. "And we're not done yet. That's only what we've tested so far."

Broadwell marks another impressive leap in Moore's Law. With Intel focusing on extending productivity, there also may come the elimination of another minor inconvenience for users: fan noise.

At the Intel Developers Forum, Krzanich showed off a fanless HP laptop, highlighting that the hardware's low wattage (4.5 watts) allowed it to operate without a need for a whirring fan.

Intel is reportedly planning to ship its Broadwell family of processors in the second half of 2014.

Moore's Law Gate Distance


Intel president Renée James, who spoke with Krzanich, was adamant about the health of the famous microprocessor rule. "Moore's Law has been declared dead at least once a decade since I've been at Intel," she said, "and as you know – you heard from Brian – we have 14 nanometer working and we can see beyond that. I assure you it's alive and well."

According to James, that "alive and well" status will allow Intel to make it down to 7nm – although her presentation didn't include any projections beyond that node.

Shrinking the size of transistors allows Intel to put more of them on a single processor. That means chips continue to grow more powerful while consuming less power, and more processing power means more features. For instance, Ultrabooks built on Broadwell will support 3D cameras built directly into the laptop. Kirk Skaugen, Intel's senior vice president of PC clients, said during his keynote that Intel also envisions laptops with Kinect-like 3D gesture controls, face recognition, eye-tracking and voice recognition, all made possible by these more powerful chips.
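As a rough illustration of why node shrinks matter, transistor density ideally scales with the inverse square of the feature size (a back-of-envelope sketch only; real-world density depends on far more than gate length):

```python
def density_scale(old_nm: float, new_nm: float) -> float:
    """Idealized transistor-density gain from a process-node shrink:
    density scales with the inverse square of the feature size."""
    return (old_nm / new_nm) ** 2

# 22 nm (Haswell era) -> 14 nm (Broadwell): roughly 2.5x the density
print(round(density_scale(22, 14), 2))

# 14 nm -> Intel's projected 7 nm node: 4x the density
print(density_scale(14, 7))
```

Actual products never achieve this ideal scaling, since wiring, yield, and design rules all intervene, but the quadratic relationship is why each node shrink is such a significant leap.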



SOURCE  Business Insider

By 33rd Square

Sunday, April 15, 2012


The era of Moore's Law driven by 2D chip lithography is drawing to a close. Intel is already working on chips that build upward into the third dimension, and other non-silicon computation methods are being developed.

In this Big Think video with Michio Kaku, the physicist and author of Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 helps further explore the significance of this development.


SOURCE  Big Think

Thursday, March 1, 2012

Image: MIT

Microelectromechanical systems, or MEMS, are small devices with a lot of potential applications.   Typically made of components less than 100 microns in size — the diameter of a human hair — they have been used as tiny biological sensors, accelerometers, gyroscopes and actuators.

For the most part, existing MEMS devices are two-dimensional, with functional elements engineered on the surface of a chip. It was thought that operating in three dimensions — to detect acceleration, for example — would require complex manufacturing and costly merging of multiple devices in precise orientations.

Now researchers at MIT have come up with a new approach to MEMS design that enables engineers to design 3D configurations, using existing fabrication processes; with this approach, the researchers built a MEMS device that enables 3D sensing on a single chip. The silicon device, not much larger than Abraham Lincoln’s ear on a U.S. penny, contains microscopic elements about the width of a red blood cell that can be engineered to reach heights of hundreds of microns above the chip’s surface.

Fabio Fachin, a postdoc in the Department of Aeronautics and Astronautics, says the device may be outfitted with sensors, placed atop and underneath the chip’s minuscule bridges, to detect three-dimensional phenomena such as acceleration. Such a compact accelerometer may be useful in several applications, including autonomous space navigation, where extremely accurate resolution of three-dimensional acceleration fields is key.

“One of the main driving factors in the current MEMS industry is to try to make fully three-dimensional devices on a single chip, which would not only enable real 3D sensing and actuation, but also yield significant cost benefits,” Fachin says. “A MEMS accelerometer could give you very accurate acceleration [measurements] with a very small footprint, which in space is critical.”

Fachin collaborated with Brian Wardle, an associate professor of aeronautics and astronautics at MIT, and Stefan Nikles, a design engineer at MEMSIC, an Andover, Mass., company that develops wireless-sensor technology. The team outlined the principles behind their 3D approach in a paper accepted for publication in the Journal of Microelectromechanical Systems.

While most MEMS devices are two-dimensional, there have been efforts to move the field into 3D, particularly for devices made from polymers. Scientists have used lithography to fabricate intricate, three-dimensional structures from polymers, which have been used as tiny gears, cogs and micro-turbines. However, Fachin says, polymers lack the stiffness and strength required for some applications, and can deform at high temperatures — qualities that are less than ideal in applications like actuators and shock absorbers.

By contrast, materials such as silicon are relatively durable and temperature-resistant. But, Fachin says, fabricating 3D devices in silicon is tricky. MEMS engineers use a common technique called deep reactive ion etching to make partially 3D structures, in which two-dimensional elements are etched into a wafer. The technique, however, does not enable full 3D configurations, where structures rise above a chip’s surface.

To make such devices, engineers fabricate tiny two-dimensional bridges, or cantilevers, on a chip’s surface. After the chip is produced, they apply a small force to arch the bridge into a three-dimensional configuration. This last step, Fachin says, requires great precision.

Instead, the MIT team came up with a way to create 3D MEMS elements without this final nudge. The group based its approach on residual stress: In any bridge structure, no matter its size, there exist stresses that remain in a material even after the original force needed to produce it — such as the heat or mechanical force of a fabrication process — has disappeared. Such stresses can be strong enough to deform a material, depending on its dimensions.

Fachin and his colleagues studied previous work on microbeam configurations and developed equations to represent the relationship between a thin-film material’s flexibility, geometry and residual stress. The group then plugged their desired bridge height into the equations, and came up with the amount of residual stress required to buckle or bend the structure into the desired shape. Fachin says other researchers can use the group’s equations as an analytical tool to design other 3D devices using pre-existing fabrication processes.
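The buckling criterion at the heart of this design approach can be illustrated with the classic Euler result for a doubly clamped beam (a simplified sketch only; the team's actual equations account for thin-film behavior, and the material values and dimensions below are hypothetical, chosen just to show the calculation):

```python
import math

def critical_buckling_stress(E: float, h: float, L: float) -> float:
    """Compressive residual stress (Pa) at which a doubly clamped beam of
    thickness h (m) and length L (m) buckles out of plane.
    Euler buckling: P_cr = 4*pi^2*E*I / L^2, with I = w*h^3/12 and
    cross-section A = w*h, giving sigma_cr = pi^2 * E * h^2 / (3 * L^2)."""
    return math.pi ** 2 * E * h ** 2 / (3 * L ** 2)

# Hypothetical silicon microbeam: E = 160 GPa, 2 um thick, 500 um long
sigma = critical_buckling_stress(160e9, 2e-6, 500e-6)
print(f"critical residual stress: {sigma / 1e6:.1f} MPa")
```

A fabrication process that leaves more compressive residual stress than this threshold will pop the beam out of plane on its own, which is exactly the effect the MIT group exploits: pick film stress and beam geometry so the structure buckles to the desired height without any post-fabrication nudge.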

“This offers a very cost-effective way for 3D structures,” says Y.K. Yoon, an associate professor of electrical and computer engineering at the University of Florida who did not take part in the research. “Since the process is based on a silicon substrate, and compatible with standard complementary metal oxide semiconductor (CMOS) processes, it will also offer a pathway to a smart CMOS-MEMS process, with good manufacturability.”

“For other applications where you want to go much larger in size, you could just pick a material that has a larger residual stress, and that would cause the beam to buckle more,” Fachin says. “The flexibility of the tool is important.”

SOURCE  MIT News