Lipreading, also known as lip reading, speech reading, or speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue with information provided by the context, language, and any residual hearing.
People with normal vision, hearing and social skills unconsciously use information from the lips and face to aid aural comprehension in everyday conversation, and most fluent speakers of a language are able to speechread to some extent. (See McGurk effect.) Each speech sound (phoneme) has a particular facial and mouth position (viseme), but many phonemes share the same viseme and are therefore impossible to distinguish from visual information alone. Sounds whose place of articulation is inside the mouth or throat, such as glottal consonants, are not detectable at all. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z] (American English); likewise for nasalisation. It has been estimated that only 30% to 40% of sounds in the English language are distinguishable from sight alone; the phrase "where there's life, there's hope" looks identical to "where's the lavender soap" in most English dialects. Author Henry Kisor titled his book What's That Pig Outdoors?: A Memoir of Deafness in reference to misreading the question, "What's that big loud noise?" He used this example in the book to discuss the shortcomings of lipreading.
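The phoneme-to-viseme collapse described above can be sketched as a small lookup table. This is an illustrative toy only: the phoneme symbols and viseme group names below are simplified assumptions, not a standard phonetic inventory, but the voiced/unvoiced pairs it merges are the ones listed above.

```python
# Toy viseme table: each phoneme maps to a visual mouth-shape class.
# The voiced/unvoiced pairs from the article ([p]/[b], [k]/[g], [t]/[d],
# [f]/[v], [s]/[z]) fall into the same class, so they are
# indistinguishable by sight alone. Groupings are simplified assumptions.
VISEME_GROUP = {
    "p": "bilabial",    "b": "bilabial",    "m": "bilabial",
    "k": "velar",       "g": "velar",
    "t": "alveolar",    "d": "alveolar",
    "f": "labiodental", "v": "labiodental",
    "s": "sibilant",    "z": "sibilant",
}

def to_visemes(phonemes):
    """Map a phoneme sequence to its viseme sequence.

    Phonemes outside the toy table (e.g. vowels) pass through unchanged.
    """
    return [VISEME_GROUP.get(p, p) for p in phonemes]

# "pat" /p ae t/ and "bat" /b ae t/ produce the same viseme sequence,
# so the two words look identical on the lips:
print(to_visemes(["p", "ae", "t"]) == to_visemes(["b", "ae", "t"]))  # True
```

Because the mapping is many-to-one, it cannot be inverted: a viseme sequence corresponds to several candidate phoneme sequences, which is why a speechreader must fall back on context to choose among them.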
Thus a speechreader must use cues from the environment and a knowledge of what is likely to be said. It is much easier to speechread customary phrases such as greetings than utterances that appear in isolation and without supporting information, such as the name of a person never met before. Speechreaders who have grown up deaf may never have heard the spoken language and are unlikely to be fluent users of it, which makes speechreading much more difficult; they must also learn the individual visemes through conscious training in an educational setting. In addition, lipreading demands intense concentration and can be extremely tiring. For these and other reasons, many deaf people prefer other means of communication with non-signers, such as mime and gesture, writing, and sign language interpreters. When conversing with a speechreader, exaggerated mouthing of words is not considered helpful and may in fact obscure useful cues. However, it is possible to learn to emphasize useful cues; this is known as lip speaking.
Other difficult scenarios in which to speechread include:
- lack of a clear view of the speaker's lips, whether from obstructions such as moustaches or hands in front of the mouth, the speaker's head turned aside or away, or a bright light source (such as a window) behind the speaker;
- group discussions, especially when multiple people are talking in quick succession.
Lip reading may be combined with Cued Speech; one of the arguments in favor of the use of cued speech is that it helps develop lip reading skills that may be useful even when cues are absent, i.e., when communicating with non-deaf, non-hard of hearing people.
Dorothy Clegg, in The Listening Eye (1953), wrote: "When you are deaf you live inside a well-corked glass bottle. You see the entrancing outside world, but it does not reach you. After learning to lip read, you are still inside the bottle, but the cork has come out and the outside world slowly but surely comes in to you." This view is controversial within the deaf world; see manualism and oralism for an incomplete history of the debate.
See also
- CSAIL: Articulatory Feature Based Visual Speech Recognition - To develop a visual speech recognition system that models visual speech in terms of the underlying articulatory processes.
This page uses Creative Commons Licensed content from Wikipedia (view authors).