Camo-bot — Science imitates the art of the octopus (video)

From the Wall Street Journal: “When it comes to camouflage, nature is light years ahead of human efforts. The octopus, for example, is a master at swiftly changing its color and shape. Stephen Morin of Harvard University has been trying to duplicate this natural quick-change ability with a soft-bodied robot. Dr. Morin upgraded a previous Harvard robot’s back with a sheet of silicone containing a network of tiny tubes, each less than a half-millimeter wide. By pumping colored liquids through these ‘microfluidic’ channels, he can change the robot’s color in about 30 seconds. He also has one-upped nature by using fluids that can make the camo-bot invisible to heat sensors, like those found in infrared cameras (the natural-world equivalent of which would be heat-sensitive bats or snakes).”

[Embedded YouTube video]

Caption for the YouTube video above, narrated by Stephen Morin: “Scientists have developed a soft, flexible robot that can change colors to blend in or stand out in its environment. Such devices may be useful for animal-behavior research or other activities using machines that aren’t supposed to be noticed. Stephen Morin and colleagues at Harvard University describe the robot in the 17 August 2012 issue of Science (www.sciencemag.org).”

Below, the abstract of the paper published in Science.

Camouflage and Display for Soft Machines

Synthetic systems cannot easily mimic the color-changing abilities of animals such as cephalopods. Soft machines—machines fabricated from soft polymers and flexible reinforcing sheets—are rapidly increasing in functionality. This manuscript describes simple microfluidic networks that can change the color, contrast, pattern, apparent shape, luminescence, and surface temperature of soft machines for camouflage and display. The color of these microfluidic networks can be changed simultaneously in the visible and infrared—a capability that organisms do not have. These strategies begin to imitate the functions, although not the anatomies, of color-changing animals.
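
The article's numbers invite a quick sanity check. Here is a minimal back-of-envelope sketch in Python; the channel depth, total network length, and pump flow rate are assumptions, with only the sub-half-millimeter channel width and the roughly 30-second color change coming from the article:

```python
# Back-of-envelope check (not from the paper) that a ~30 s color change is
# plausible for a microfluidic network pumped full of colored liquid.
channel_width_m = 0.5e-3      # "less than a half-millimeter wide" (article)
channel_depth_m = 0.5e-3      # assumed
total_channel_length_m = 1.0  # assumed total length of the network

network_volume_m3 = channel_width_m * channel_depth_m * total_channel_length_m
network_volume_ml = network_volume_m3 * 1e6   # 1 m^3 = 1e6 mL -> 0.25 mL

pump_rate_ml_per_min = 0.5    # assumed, typical of small syringe pumps
fill_time_s = network_volume_ml / pump_rate_ml_per_min * 60
print(f"{network_volume_ml:.2f} mL network fills in ~{fill_time_s:.0f} s")  # ~30 s
```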

Leap Motion: Gesture tech’s come-hither allure | Cutting Edge – CNET News

Leap Motion: Gesture tech’s come-hither allure

In less than two months, developers have submitted more than 26,000 requests to use the gesture control technology to drive cars, fly planes, and even interpret sign language in real time.

by Daniel Terdiman July 31, 2012 5:00 AM PDT

With the Leap, from Leap Motion, developers will be able to create apps that can translate the movement of users’ hands — and even their fingers — onto the screen.

(Credit: Leap Motion)

Developers eager to be among the first to create applications for Leap Motion’s new gesture control system think it could be used to auto-translate sign language.

That was among the details the company released this morning about the initial round of requests from developers to design tools that work with the Leap — technology that lets users control what’s on their computers, to within a hundredth of a millimeter, with nothing more than their fingers or their hands.
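
The Leap’s SDK was not yet public when this article ran. Purely as a sketch, a minimal polling loop in the style of the Python bindings the company later shipped might look like this; the Leap module and its frame and hand accessors should be treated as assumptions from the article’s vantage point:

```python
# Minimal sketch of reading hand positions from a Leap device. The "Leap"
# module and its accessors follow the Python bindings Leap Motion later
# shipped; they are assumptions here, not something from this article.
import time

import Leap  # Leap Motion SDK Python bindings (assumed installed)

controller = Leap.Controller()
time.sleep(1.0)  # give the device a moment to connect

for _ in range(50):                      # poll ~5 seconds of frames
    frame = controller.frame()           # latest tracking frame
    for hand in frame.hands:
        p = hand.palm_position           # Leap.Vector, in millimeters
        print(f"palm at x={p.x:.2f} y={p.y:.2f} z={p.z:.2f} mm")
    time.sleep(0.1)
```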

The San Francisco company said that in the two months since taking the wraps off the Leap, more than 26,000 people have asked for software development kits (SDKs), including 15,000 in the first week alone. Those developers come from 143 countries and include 1,500 university researchers and students.

When Leap Motion first unveiled its technology, it said it saw the Leap as ideal for upending industries like gaming, surgery, architecture, engineering, and design. But among the most interesting potential applications suggested by the developers asking for SDKs were ideas for using the technology to automatically translate sign language, the company said.

[Photo gallery: Leap Motion bounds ahead with 3D motion control]

Developers also proposed using it to drive cars or fly planes, and to support physical rehabilitation and users with special needs. And more than 400 people presented ideas for using the Leap in computer-aided design software — the same computing challenge that led Leap co-founder and CTO David Holz to begin creating the technology four years ago.

Leap Motion said that 14 percent of developers proposed gaming-related applications, with 12 percent wanting to use the Leap in conjunction with music and video, 11 percent seeing it as ideal for art and design, 8 percent for science and medicine, and 6 percent for robotics. The company said it would build an Apple-style app store at launch, and more than 90 percent of those asking for SDKs want to sell their work through such a store.

Holz and co-founder and CEO Michael Buckwald told CNET at launch that they intended to identify a small initial group of developers whose highly compelling projects they could highlight when the Leap becomes available and the app store opens. But the company has not yet revealed any details about who those developers will be, or what they want to do with the Leap, which is expected to cost $70 and begin shipping early next year.

Earlier this month, the company announced it had brought on Andy Miller, a former partner at the venture capital firm Highland Capital Partners and former Apple vice president of mobile advertising, as president and COO. Miller had spearheaded Highland’s $12.75 million first-round investment in Leap Motion.

via Leap Motion: Gesture tech’s come-hither allure | Cutting Edge – CNET News.

Gesture controlled car functions courtesy a gadget to let you concentrate on driving instead | Damn Geeky – The geek’s guide to awesomeness

Gesture controlled car functions courtesy a gadget to let you concentrate on driving instead

Posted by Gaurav about a month ago

In under two years’ time you could be driving a car that lets you control secondary functions, such as turning on the music, making a call, or adjusting the air conditioning, by raising or lowering a hand. All of this will be possible thanks to a special gadget, under development by engineers at global infotainment specialist Harman, that controls the car’s functions using nods and winks. The technology uses an infrared sensor mounted on the dashboard to recognize the driver’s facial expressions, which a computer then translates into commands for devices such as the radio, satellite navigation, heating, and mobile phone.

The obvious question is how the technology distinguishes deliberate commands from ordinary, instinctive movements. To avoid triggering the wrong action, the gesture-recognition software looks for specific, sustained movements from the driver. Some of the main functions the gadget can control are listed below; a toy sketch of how gesture-to-command dispatch might work follows the list.

• Tilt the head left or right to raise or lower the volume of the radio or music player.

• Raise or lower a hand in front of the gear stick to control the car’s air conditioning or heating.

• Make a “phone receiver” gesture with the hand to start a call, then dial by saying the contact’s name.

• Tap the steering wheel to skip to the next song or station.
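
Purely as a toy illustration, not Harman’s implementation, the following Python sketch shows how a recognizer’s output might be debounced and dispatched to car functions. The gesture names, the Car methods, and the hold threshold are all invented:

```python
# Hypothetical sketch: require a gesture to be *held* before it fires, so
# instinctive movements (a quick glance left) don't trigger commands.
HOLD_SECONDS = 0.8  # invented threshold for a "deliberate" gesture


class Car:
    """Stand-in for the real control surface (radio, climate, phone)."""

    def adjust_volume(self, step):
        print(f"volume {step:+d}")

    def adjust_temperature(self, step):
        print(f"temperature {step:+d}")

    def next_track(self):
        print("next track / station")


COMMANDS = {
    "head_tilt_left":  lambda car: car.adjust_volume(-1),
    "head_tilt_right": lambda car: car.adjust_volume(+1),
    "hand_raise":      lambda car: car.adjust_temperature(+1),
    "hand_lower":      lambda car: car.adjust_temperature(-1),
    "wheel_tap":       lambda car: car.next_track(),
}


class GestureDispatcher:
    """Filters out instinctive movements by requiring a gesture to be held."""

    def __init__(self, car):
        self.car = car
        self.current = None   # gesture seen on the previous frame
        self.since = 0.0      # timestamp when the current gesture started
        self.fired = False    # has the current gesture already triggered?

    def update(self, gesture, now):
        """Call once per sensor frame with the recognized gesture (or None)."""
        if gesture != self.current:
            self.current, self.since, self.fired = gesture, now, False
        elif (gesture in COMMANDS and not self.fired
              and now - self.since >= HOLD_SECONDS):
            COMMANDS[gesture](self.car)  # only held, deliberate gestures fire
            self.fired = True


# Simulated 10 Hz sensor frames: a brief glance left is ignored,
# a sustained tilt right turns the volume up once.
dispatcher = GestureDispatcher(Car())
frames = ["head_tilt_left"] * 2 + [None] * 3 + ["head_tilt_right"] * 12
for i, g in enumerate(frames):
    dispatcher.update(g, now=i * 0.1)
```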

Via: DailyMail

via Gesture controlled car functions courtesy a gadget to let you concentrate on driving instead | Damn Geeky – The geek’s guide to awesomeness.

T-Mobile’s Genius Voice Command Is Getting Smarter – Businessweek

T-Mobile’s Genius Voice Command Is Getting Smarter

By Kevin Fitchard

August 10, 2012 0:55 PM EDT

Anyone who has ever owned a T-Mobile myTouch is familiar with the little button on the lower right-hand corner labeled with a stylized “G.” It’s the Genius button, which, once pushed, allows you to issue basic voice commands, from calling or texting a contact to searching the Web or Google Maps. If you’ve used it before, you know: Siri it’s not.

The service’s vocabulary and contextual understanding are pretty limited. For instance, if you ask Genius to “find a restaurant,” it will pull up the nearest eatery on Google Maps. But if you ask it to “find nearby restaurants,” it searches Maps for a joint named “Nearby.” If you’re like me, you’ve probably fiddled with the button a few times and never touched it again, despite its relative convenience on the phone’s faceplate.

But T-Mobile has given Genius a much-needed overhaul, at least on the latest versions of the myTouch manufactured by Huawei Technologies. Nuance Communications, which powers the voice recognition technology on T-Mobile devices, is upgrading Genius’s capabilities and features, providing a deeper level of natural-language understanding and integrating the service with a much broader array of content sources beyond Maps and Google Search.

For instance, if I were to ask the new Genius for nearby restaurants it would not only understand my intent, it would also use Yelp to pull up nearby dining options and display their rankings and reviews. If I were to change that command to “make a reservation at nearby restaurants,” it would bring me to OpenTable’s website and display eateries in the vicinity that accept online bookings.
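
To make the contrast concrete, here is a toy Python sketch of rules-based intent parsing in the spirit of what the article describes; the rules, intents, and provider routing are invented for illustration and bear no relation to Nuance’s actual engine:

```python
# Toy illustration of rules-based intent parsing. The old Genius matched
# keywords literally; a semantic layer instead maps an utterance to an
# intent plus a content provider. Every rule below is invented.
import re

RULES = [
    # (pattern, intent, content provider) -- ordered, first match wins
    (re.compile(r"make a reservation at (.+)"), "reserve",      "OpenTable"),
    (re.compile(r"find (nearby|close) (.+)"),   "local_search", "Yelp"),
    (re.compile(r"find (.+)"),                  "map_search",   "Google Maps"),
]


def parse(utterance):
    text = utterance.lower().strip()
    for pattern, intent, provider in RULES:
        m = pattern.match(text)
        if m:
            return {"intent": intent, "provider": provider, "slots": m.groups()}
    return {"intent": "web_search", "provider": "Google", "slots": (text,)}


# The old keyword matcher would have searched Maps for a place named
# "Nearby"; these rules instead route the phrase to a local search.
print(parse("find nearby restaurants"))
print(parse("make a reservation at nearby restaurants"))
```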

If this sounds familiar you’ve probably used Nuance’s consumer semantic-search app Dragon Go!, available for Android and iOS smartphones. In fact, if you look at the list of 200 content partners the Genius can access, they’re the same as the ones Dragon Go! uses. Nuance wouldn’t acknowledge specifically that T-Mobile is white-labeling the semantic-search app, but it’s pretty obvious that’s exactly what it’s doing—which is by no means a bad thing. I’m a big fan of Nuance’s intuitive little search app, and being able to access it in fewer steps is a bonus.

The official line, though, is that T-Mobile has basically upgraded its relationship with Nuance to a kind of platinum status. The old Genius tapped into Nuance’s basic speech-recognition APIs, but it had none of the rules-based language-parsing abilities of Nuance’s more sophisticated offerings. By adding greater contextual understanding and a host of content providers, T-Mobile may be able to turn a pretty lame voice command feature into something quite useful. So far, though, it’s only available on the myTouch and myTouch Q.

Both T-Mobile and Nuance will be represented at GigaOM’s Mobilize conference next month. Vlad Sejnoha, Nuance’s chief technology officer, will be speaking on a panel about the future smartphone interface on Sept. 20, while Brad Duea, T-Mobile’s senior vice president for marketing, will discuss the evolution of voice services on Sept. 21.

via T-Mobile’s Genius Voice Command Is Getting Smarter – Businessweek.