33rd Square Business Tools: crowd sourcing
Showing posts with label crowd sourcing. Show all posts

Saturday, April 9, 2016

This Mini Computer Is Opening Possibilities for DIY Tech Projects


Computing

VoCore is open hardware with Wi-Fi, USB, UART and 20+ GPIOs, yet it is only one inch square. It will help developers and enthusiasts build a smart house, study embedded systems or even make the tiniest router in the world.


The saying “Good things come in small packages” is often overused, but the VoCore mini-computer is a product that actually warrants it. The computer measures about an inch on each side, so you can literally carry it in your pocket or hold it in the palm of your hand.


Learn About and Control the Computer

Some operating systems are so locked down that anyone who wants to learn more about how they work is out of luck. However, the VoCore runs on OpenWrt Linux. When purchasing a VoCore, you’ll get details about its source code and hardware design. That insight gives you more possibilities if you’re looking to push the boundaries of technology, and even makes it possible to build your own VoCore if you feel so inclined.

A Crowdsourced Hit

Tech-related projects find varying degrees of success on crowdfunding sites. Given what you already know about it, though, it perhaps shouldn’t be surprising that the VoCore easily surpassed its funding goal. The creators hoped to raise at least $6,000 so that the VoCore could be produced on a larger scale; it was clear early on that unless mass production became a reality, the cost of the individual parts for just a few computers would be too high.

Impressively, 2,950 supporters raised over $116,000 in just two months. Clearly, those people quickly recognized the potential of this tiny product.

Explore What’s Possible

So, just what can you do with a VoCore? It can become a wireless router, and that’s just one example. Since the VoCore also has a USB port, you could use it to drive a hot melt glue gun that has a USB connector. You can connect peripheral devices to your VoCore, or use it as a component of a much larger system.

In this project, the VoCore computer is used as an NES emulator.

The computer’s small size is one advantage it has over larger products. At the VoCore website, there’s a collection of user projects that demonstrates how the VoCore and innovation go hand in hand. One creative person even figured out how to use the VoCore to build a motion detector that sends an alert whenever someone walks into a room.
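As a rough illustration of that kind of project, here is a minimal Python sketch of a motion-detector loop. It assumes a PIR motion sensor wired to one of the VoCore’s GPIOs (the pin number below is hypothetical) and the legacy Linux sysfs GPIO interface that OpenWrt exposes; check the VoCore pinout before wiring anything.

```python
# Minimal sketch: poll a PIR motion sensor on a VoCore GPIO via the legacy
# Linux sysfs interface. The pin number (17) and the sensor wiring are
# hypothetical -- consult the VoCore datasheet for the real pinout.
import time

GPIO_PIN = "17"  # hypothetical GPIO number
GPIO_ROOT = "/sys/class/gpio"

def setup(pin):
    # Export the pin and configure it as an input.
    try:
        with open(GPIO_ROOT + "/export", "w") as f:
            f.write(pin)
    except IOError:
        pass  # pin was already exported
    with open("%s/gpio%s/direction" % (GPIO_ROOT, pin), "w") as f:
        f.write("in")

def motion_detected(pin):
    with open("%s/gpio%s/value" % (GPIO_ROOT, pin)) as f:
        return f.read().strip() == "1"

setup(GPIO_PIN)
while True:
    if motion_detected(GPIO_PIN):
        print("Motion detected!")  # swap in any alert: LED, HTTP push, etc.
    time.sleep(0.2)
```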


A Well-Equipped Bargain

The VoCore has 32 MB of RAM, plus 8 MB worth of flash memory. If desired, you can purchase a dock separately that includes a headphone jack. Since people have reportedly used the VoCore to turn their speakers into wireless accessories, the headphone jack seems like a smart addition.

If you purchase the VoCore alone, it’s $19.99, and buying it with the dock brings the cost to $44.99. You can even buy a camera with a built-in sound recorder that’s made for the VoCore. It doesn’t require drivers, so after waiting about 30 seconds for the VoCore to boot up, you’ll be good to go. Keep in mind, the store reports it may take up to 90 days for the VoCore to arrive at your doorstep depending on your country of residence, so you might need to practice patience after ordering this petite but powerful product.

VoCore Robot Car

Share Knowledge and Learn From Others

The open-source nature of the VoCore makes it simple for tech enthusiasts to understand more about what they’ve purchased. There’s also a forum where people can post details about how they’ve used the VoCore and get help with troubleshooting. If you like not only learning about technology but also building on what others already know, the VoCore forum may become your new favorite online hangout.

Due to the VoCore’s low cost, you could plan to purchase one and see if it’s as helpful as expected for your next tech project. If it meets or exceeds expectations, or you find you need more than one, you won’t be out much money by making another purchase.




By Kayla Matthews


Author Bio - Kayla Matthews is a technology journalist and blogger, as well as editor of ProductivityBytes.com. Follow Kayla on Facebook and Twitter to read all of her latest posts.

Monday, August 25, 2014

Robo Brain

 Artificial Intelligence
Robo Brain is now at work examining images and concepts available on the Internet so that it can teach robots how to recognize, grasp and manipulate objects and predict human behavior in the environment.




Hey there! I'm a robot brain. I learn concepts by searching the Internet. I can interpret natural language text, images, and videos. I watch humans with my sensors and learn things from interacting with them. Here are a few things I've learned recently...

And so Robo Brain introduces itself. The project, based at Cornell University, was switched on last month.

The AI project, which is led by professor Ashutosh Saxena, is described as "a large-scale computational system that learns from publicly available internet resources, computer simulations, and real-life robot trials".

The open-source effort, which includes Brown, Cornell and Stanford Universities as well as the University of California, Berkeley, is addressing research challenges in various domains:

  • Large-Scale Data Processing
  • Language and Dialog
  • Perception
  • AI and Reasoning Systems
  • Robotics and Automation

The system is continuously downloading images, YouTube videos, how-to documents and appliance manuals, along with the training that Cornell researchers have given other robots in their laboratories.

By reviewing these materials, Robo Brain is intended to learn how to recognize objects and how they are used, as well as human language and behavior, in order to train robots to function in the human-built physical world.

"Our laptops and cell phones have access to all the information we want," explains Saxena. "If a robot encounters a situation it hasn't seen before it can query Robo Brain in the cloud."

For instance, Robo Brain can learn from the Internet that the knobs on a microwave oven are turned to set the time for reheating a cup of coffee, and, combined with other how-to information it finds, it could instruct a robot, like a Willow Garage PR2 or a Rethink Robotics Baxter research robot, how long to heat the beverage.

Robo Brain learning
Other pieces of online learning could eventually be combined so that the robot completes all of the steps required to get you a hot cup of coffee.

It can also contain layers of abstraction, a system the researchers call "structured deep learning". For example, if the robot sees an armchair, it knows that it is a type of furniture, and more specifically, that it is furniture used for sitting -- a sub-class that contains a wide range of chairs, stools, benches and couches.
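To make those layers of abstraction concrete, here is a toy Python sketch of walking such an is-a hierarchy. The concept names are invented for illustration, not taken from Robo Brain’s actual database.

```python
# Toy illustration (invented names): a tiny is-a hierarchy like the one
# described above, where "armchair" resolves up to broader concepts.
IS_A = {
    "armchair": "furniture_for_sitting",
    "stool": "furniture_for_sitting",
    "bench": "furniture_for_sitting",
    "couch": "furniture_for_sitting",
    "furniture_for_sitting": "furniture",
}

def ancestors(concept):
    """Walk the chain of abstractions from a concept to the root."""
    chain = []
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

print(ancestors("armchair"))  # ['furniture_for_sitting', 'furniture']
```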

“The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries,” said Aditya Jami, a visiting researcher at Cornell, who designed the large-scale database for the brain.

Robotic Planning

This information will then be stored in what mathematicians call a Markov model, represented as a series of points ("nodes") connected by lines ("edges"), like a giant branching graph, where each state depends on the previous states.

"The Robo Brain will look like a gigantic, branching graph with abilities for multi-dimensional queries."


The nodes could be actions, objects, or parts of an image, and each one is assigned a probability, a measure of how much it can vary while still counting as a match. A key, for example, can vary in form, but still usually consists of a handle, a shaft and teeth. The robot can then follow a chain and look for nodes that match within those probability limits.
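As a hedged illustration of that idea (a toy sketch, not Robo Brain’s actual code or data), a node’s neighbors can be scored against a detector’s observations and accepted only when the combined probability clears a threshold:

```python
# Hedged sketch: a knowledge graph whose nodes carry a match probability.
# The robot follows edges from a concept and keeps only the parts whose
# combined prior * observed score stays within the probability limits.
GRAPH = {
    # node: (prior probability of a correct match, neighboring nodes)
    "key":    (0.9, ["handle", "shaft", "teeth"]),
    "handle": (0.8, []),
    "shaft":  (0.7, []),
    "teeth":  (0.6, []),
}

def match_chain(start, observed_scores, threshold=0.5):
    """Follow the chain from `start`, accepting neighbors whose combined
    prior * observed score clears the threshold."""
    prior, neighbors = GRAPH[start]
    return [n for n in neighbors
            if GRAPH[n][0] * observed_scores.get(n, 0.0) >= threshold]

# A detector reports how strongly each part appears in the image:
scores = {"handle": 0.9, "shaft": 0.8, "teeth": 0.4}
print(match_chain("key", scores))  # ['handle', 'shaft']
```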

Moreover, by learning to recognize objects in the human environment, Robo Brain is also learning about human behavior. By finding out what humans use objects for, the system can anticipate the actions of the people it is watching.

Robo Brain is very similar to the European RoboEarth Project. Like Robo Brain, RoboEarth is cloud storage and computing for robots, with an ever-expanding database intended to store knowledge created by both humans and robots in a robot-readable open format.

One area where Robo Brain is new is that it also uses crowdsourcing to feed information into the graph. Complementing Robo Brain’s object detection system are PlanIt, a simulation through which users can teach robots how to grasp objects or move about a room, and Tell Me Dave, a crowd-sourced project that teaches robots how to understand language.

A major challenge for the system currently is that it does not have a good source of data for haptics, which would be useful for teaching robots how to touch and feel.

As researchers continue to add other types of learning models and data sources, such as ImageNet, 3D Warehouse and more, along with the knowledge of the crowd, the team expects the system to enter a positive feedback loop. Moreover, each robot in the world that uses Robo Brain will feed what it learns back into the system, teaching other robots in turn and compounding the effect. Already at this early stage, the researchers are pleased with the results.

By merging all this software and data, the researchers hope to create a system with a primitive sense of perception, one that can “discover most of the common sense knowledge of the world,” says Bart Selman, a Robo Brain collaborator at Cornell.




SOURCES  Wired, Popular Science, Engadget, TechCrunch

By 33rd Square

Monday, April 21, 2014

Future Technology Timeline Survey


 Future Tech
Help us crowd source a map of future technologies, inventions and processes by filling out our survey.






[Embedded SurveyMonkey survey]


By 33rd Square

Monday, September 23, 2013

What Computers See

 Object Recognition
Researchers have developed a new technique that enables the visualization of a common mathematical representation of images, which should help researchers understand why their current recognition algorithms fail.




Object-recognition systems — software that tries to identify objects in digital images — are still fairly limited in capability. Even the best systems succeed only around 30 or 40 percent of the time, and their failures can be totally baffling.

Now, in an attempt to improve these systems, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory have created a system that allows humans to see the world the way an object-recognition system does.

The team has also published their results in a paper available online.

HOG

Their system, called HOGgles, takes an ordinary image, translates it into the mathematical representation used by an object-recognition system and then, using inventive new algorithms, translates it back into a conventional image.

The researchers report that, when presented with the re-translation of a translation, human volunteers make classification errors that are very similar to those made by computers.

That suggests that the learning algorithms are just fine, and throwing more data at the problem won’t help; it’s the feature selection that’s the culprit.

The researchers are also hopeful that, in addition to identifying the problem, their system will also help solve it, by letting their colleagues reason more intuitively about the consequences of particular feature decisions.

The feature set most widely used in computer-vision research is called the histogram of oriented gradients, or HOG. HOG first breaks an image into square chunks, usually eight pixels by eight pixels. Then, for each square, it identifies a “gradient,” or change in color or shade from one region to another. It characterizes the gradient according to 32 distinct variables, such as its orientation — vertical, horizontal or diagonal, for example — and the sharpness of the transition — whether it changes color suddenly or gradually.

Thirty-two variables for each square translates to thousands of variables for a single image, which define a space with thousands of dimensions. Any conceivable image can be characterized as a single point in that space, and most object-recognition systems try to identify patterns in the collections of points that correspond with particular objects.
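As a simplified sketch of the descriptor itself, the following Python splits a grayscale image into 8-by-8 cells and builds a magnitude-weighted histogram of gradient orientations per cell. This is only the core idea: it uses a simple 9-bin orientation histogram rather than the full 32-variable characterization described above, and omits block normalization and other refinements of the real HOG pipeline.

```python
# Simplified HOG sketch: per-cell histograms of gradient orientations,
# weighted by gradient magnitude. Real HOG adds block normalization and
# a richer per-cell characterization than this 9-bin histogram.
import numpy as np

def hog_cells(img, cell=8, bins=9):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180  # unsigned orientation
    h, w = img.shape
    out = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            a = ang[i*cell:(i+1)*cell, j*cell:(j+1)*cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            out[i, j] = hist
    return out

features = hog_cells(np.random.rand(64, 64))
print(features.shape)  # (8, 8, 9): one 9-bin histogram per 8x8 cell
```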

“This feature space, HOG, is very complex,” says Carl Vondrick, an MIT graduate student in electrical engineering and computer science and first author on the new paper. “A bunch of researchers sat down and tried to engineer, ‘What’s the best feature space we can have?’ It’s very highly dimensional. It’s almost impossible for a human to comprehend intuitively what’s going on. So what we’ve done is built a way to visualize this space.”

Vondrick; his advisor, Antonio Torralba, an associate professor of electrical engineering and computer science; and two other researchers in Torralba’s group, graduate student Aditya Khosla and postdoc Tomasz Malisiewicz, experimented with several different algorithms for converting points in HOG space back into ordinary images. One of those algorithms, which didn’t turn out to be the most reliable, nonetheless offers a fairly intuitive understanding of the process.

The algorithm first produces a HOG for an image and then scours a database for images that match it — on a very weak understanding of the word “match.”

“Because it’s a weak detector, you won’t find very good matches,” Vondrick explains. “But if you average all the top ones together, you actually get a fairly good reconstruction. Even though each detection is wrong, each one still captures the statistics of the original image patch.”
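A hedged sketch of that averaging idea follows, under the assumption of a database of (HOG, image) pairs; it reuses the hog_cells() helper sketched earlier, and the distance measure and value of k are illustrative choices, not the authors’ exact method.

```python
# Hedged sketch of the "average the top matches" reconstruction: rank a
# database of (hog, image) pairs by HOG distance to the query and average
# the images of the weak top matches.
import numpy as np

def reconstruct_by_averaging(query_img, database, k=20):
    q = hog_cells(query_img).ravel()
    # Rank database entries by HOG distance to the query (weak matches).
    dists = [np.linalg.norm(q - hog.ravel()) for hog, _ in database]
    top = np.argsort(dists)[:k]
    # Each match is wrong on its own, but their average preserves the
    # statistics of the original image patch.
    return np.mean([database[i][1] for i in top], axis=0)
```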

The reconstruction algorithm that ended up proving the most reliable is more complex. It uses a so-called “dictionary,” a technique that’s increasingly popular in computer-vision research. The dictionary consists of a large group of HOGs with fairly regular properties: One, for instance, might have a top half that’s all diagonal gradients running bottom left to upper right, while the bottom half is all horizontal gradients; another might have gradients that rotate slowly as you move from left to right across each row of squares. But any given HOG can be represented as a weighted combination of these dictionary “atoms.”

The researchers’ algorithm assembled the dictionary by analyzing thousands of images downloaded from the Internet and settled on the dictionary that allowed it to reconstruct the HOG for each of them with, on average, the fewest atoms. The trick is that, for each atom in the dictionary, the algorithm also learned the ordinary image that corresponds to it. So for an arbitrary HOG, it can apply the same weights to the ordinary images that it does to the dictionary atoms, producing a composite image.
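Here is a hedged sketch of the paired-dictionary idea. The real system learns the dictionary with sparse coding; plain least squares stands in for it here, and the array shapes are assumptions for illustration.

```python
# Hedged sketch of paired-dictionary reconstruction: solve for weights
# that express a HOG as a combination of HOG atoms, then apply the same
# weights to the paired image atoms to form a composite image.
import numpy as np

def reconstruct_with_dictionary(query_hog, hog_atoms, image_atoms):
    # hog_atoms: (n_atoms, hog_dim); image_atoms: (n_atoms, img_dim)
    weights, *_ = np.linalg.lstsq(hog_atoms.T, query_hog.ravel(), rcond=None)
    return weights @ image_atoms  # composite image under the same weights
```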

The volunteers were slightly better than machine-learning algorithms at identifying the objects depicted in the reconstructions, but only slightly — nowhere near the disparity of 60 or 70 percent when object detectors and humans are asked to identify objects in the raw images. And the dropoff in accuracy as the volunteers moved from the easiest cases to the more difficult ones mirrored that of the object detectors.

Using HOGgles, the researchers hope to help others develop more effective object-recognition systems and to highlight why failures occur. As Marcel Proust noted, "The real voyage of discovery consists not in seeking new landscapes but in having new eyes."



SOURCE  MIT

By 33rd Square