Performativity · Technology

EnableTalk: Gloves That Translate Sign Language Into Speech



This year’s Microsoft Imagine Cup in Sydney has produced a finalist project called EnableTalk by the Ukrainian team QuadSquad. The gloves automatically translate sign language into spoken words with the aid of a text-to-speech engine.
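The EnableTalk hardware and software are not described in detail here, so the following is only a rough Python sketch of the general idea: flex-sensor readings from the glove are matched against stored sign templates, and the recognized word is handed to a text-to-speech engine. All names and values (SIGN_TEMPLATES, recognize, speak, the sensor vectors) are hypothetical placeholders, not the team’s actual code.

```python
import math

# Hypothetical sign templates: each sign is a stored vector of flex-sensor
# readings captured during a calibration phase (values are placeholders).
SIGN_TEMPLATES = {
    "hello":     [0.9, 0.8, 0.1, 0.1, 0.2],
    "thank you": [0.2, 0.9, 0.9, 0.3, 0.1],
}

def euclidean(a, b):
    """Distance between two sensor-reading vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(reading, threshold=0.5):
    """Return the closest stored sign, or None if nothing is close enough."""
    word, dist = min(((w, euclidean(reading, t)) for w, t in SIGN_TEMPLATES.items()),
                     key=lambda pair: pair[1])
    return word if dist < threshold else None

def speak(word):
    """Stand-in for a text-to-speech call (e.g. a library such as pyttsx3)."""
    print(f"[TTS] {word}")

# One pass of the glove-to-speech loop with a made-up sensor reading.
reading = [0.85, 0.82, 0.15, 0.12, 0.18]
word = recognize(reading)
if word:
    speak(word)
```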

Worldwide, there are currently about 40 million deaf, mute and deaf-mute people, and many of them use sign language to communicate. The problem is that very few other people actually understand sign language.

Via 33rdsquare. Continue HERE

Human-ities · Performativity · Social/Politics

Deaf sign language users pick up faster on body language

Deaf people who use sign language are quicker at recognizing and interpreting body language than hearing non-signers, according to new research from investigators at UC Davis and UC Irvine.

The work suggests that deaf people may be especially adept at picking up on subtle visual traits in the actions of others, an ability that could be useful for some sensitive jobs, such as airport screening.

“There are a lot of anecdotes about deaf people being better able to pick up on body language, but this is the first evidence of that,” said David Corina, professor in the UC Davis Department of Linguistics and Center for Mind and Brain.

Corina and graduate student Michael Grosvald, now a postdoctoral researcher at UC Irvine, measured the response times of both deaf and hearing people to a series of video clips showing people making American Sign Language signs or “non-language” gestures, such as stroking the chin. Their work was published online Dec. 6 in the journal Cognition.

“We expected that deaf people would recognize sign language faster than hearing people, as the deaf people know and use sign language daily, but the real surprise was that deaf people also were about 100 milliseconds faster at recognizing non-language gestures than were hearing people,” Corina said.

This work is important because it suggests that the human ability for communication is modifiable and is not limited to speech, Corina said. Deaf people show us that language can be expressed by the hands and be perceived through the visual system. When this happens, deaf signers get the added benefit of being able to recognize non-language actions better than hearing people who do not know a sign language, Corina said.

The study supports the idea that sign language is based on a modification of the system that all humans use to recognize gestures and body language, rather than working through a completely different system, Corina said.

Provided by University of California – Davis. Via Medical Xpress

Design · Digital Media · Performativity · Sonic/Musical

Mogees: Gesture Recognition With Contact-Microphones

Designed by Bruno Zamborlin, Mogees is a project that uses contact microphones to turn any surface into an interactive board that associates different gestures with different sounds. This means that desktop drummers could transform their finger taps and hand slaps into the sound of a marimba or xylophone.

Users attach any contact microphone to a surface, be it a tree, a cupboard, a piece of glass or even a balloon. They can then record several different types of touch using their hands or any object that makes a sound: one touch could be a hand slap, another a finger tap, and another a strike with a drumstick. The system can be trained to detect a new type of touch by recording it just once.

Each gesture can then be associated with a different sound. When the user performs, the Mogees software recognizes which of the recorded types of touch is closest to the one being made and triggers the corresponding sound engine or synthesizer. The tone of the synthesized sound is also influenced by the actual sound picked up by the microphone, so the same gesture, for example a tap, made at different spots on the surface will produce the sound in a different key.
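Mogees’ own implementation isn’t shown here, but the train-once, closest-match workflow described above can be illustrated with a minimal nearest-template classifier over a few simple audio features. The feature choice (RMS loudness, zero-crossing rate, spectral centroid) and all names are assumptions for illustration only.

```python
import numpy as np

def features(buf, sr=44100):
    """Very coarse descriptors of a short audio buffer: loudness (RMS),
    zero-crossing rate, and spectral centroid. A placeholder for whatever
    features Mogees actually uses."""
    rms = np.sqrt(np.mean(buf ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(buf)))) / 2
    spectrum = np.abs(np.fft.rfft(buf))
    freqs = np.fft.rfftfreq(len(buf), 1 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid / sr])

class TouchClassifier:
    """Train-once template matching: one recorded example per type of touch."""
    def __init__(self):
        self.templates = {}  # touch name -> feature vector

    def train(self, name, buf):
        self.templates[name] = features(buf)

    def classify(self, buf):
        f = features(buf)
        return min(self.templates, key=lambda n: np.linalg.norm(f - self.templates[n]))

# Usage with synthetic stand-ins for recorded touches.
sr = 44100
t = np.linspace(0, 0.1, int(sr * 0.1), endpoint=False)
tap  = np.exp(-40 * t) * np.sin(2 * np.pi * 200 * t)     # short, low thump
slap = np.exp(-10 * t) * np.random.randn(len(t)) * 0.5   # broadband burst

clf = TouchClassifier()
clf.train("finger tap", tap)
clf.train("hand slap", slap)
print(clf.classify(tap * 0.8))  # -> "finger tap"
```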

Mogees currently uses two audio synthesis techniques. The first is physical modelling, which generates sound by simulating the propagation of a sound wave through different physical materials, such as strings, membranes or tubes, using a piece of software called Modalys. The second is mosaicing: the user loads a folder of sounds, the audio coming from the contact microphone is analyzed, and the software looks for the closest segment within that folder. If a folder of voices is loaded, for example, touching the surface gently produces a whispering sound, while scratching it produces something closer to screaming voices.
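The mosaicing step, picking the corpus segment whose analysis best matches the incoming microphone audio, can likewise be sketched roughly as follows. The segment length, descriptors and synthetic stand-in sounds are assumed values for illustration, not Mogees’ actual parameters.

```python
import numpy as np

SEG = 2048  # segment length in samples (an assumed value)

def descriptor(frame):
    """Coarse description of a frame: loudness and normalized spectral centroid."""
    rms = np.sqrt(np.mean(frame ** 2))
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame))  # normalized frequencies in [0, 0.5]
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)
    return np.array([rms, centroid])

def build_corpus(sounds):
    """Cut every loaded sound into fixed-length segments and analyze each one."""
    segments = []
    for snd in sounds:
        for i in range(0, len(snd) - SEG + 1, SEG):
            seg = snd[i:i + SEG]
            segments.append((descriptor(seg), seg))
    return segments

def mosaic(mic_frame, corpus):
    """Return the corpus segment whose descriptor is closest to the mic input."""
    d = descriptor(mic_frame)
    _, best = min(corpus, key=lambda item: np.linalg.norm(item[0] - d))
    return best

# Illustrative use: a quiet, noisy mic frame picks a quiet, noisy corpus segment.
rng = np.random.default_rng(0)
whisper = rng.normal(0, 0.05, 44100)                            # soft, noisy stand-in
scream  = np.sin(2 * np.pi * 880 * np.arange(44100) / 44100)    # loud tonal stand-in
corpus = build_corpus([whisper, scream])
gentle_touch = rng.normal(0, 0.04, SEG)
out = mosaic(gentle_touch, corpus)  # closest to a whisper segment
```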

The idea of using contact microphones comes from the desire to turn ordinary objects into percussive instruments. The goal is to allow musicians and performers to take full advantage of electronic music without losing the feeling of touching a real surface.

Text by Bruno Zamborlin. See project HERE