To date, CSLR has developed eight programs that use virtual humans. Each of these programs is currently under development and being tested with human subjects. Each has been developed in close collaboration with “domain experts”: reading researchers, teachers, and/or clinicians who have developed treatments demonstrated to be effective in the laboratory, classroom, or clinic. The programs include:
The lifelike computer characters in each of these programs use one of the 3D characters in the CU Animate system, developed by Dr. Jiyong Ma and his colleagues at CSLR. In all of our applications, the virtual tutor or therapist produces accurate and natural visual speech, using a novel technique invented at CSLR that concatenates motion capture data collected from human lips. In all of our applications to date, the visual speech is synchronized with a recorded human voice, since the human voice is a remarkable instrument that conveys emotion and enthusiasm and imparts personality to the virtual human.

The synchronization of the human voice to the lip movements occurs fully automatically in all of our programs: a voice talent (who may be a clinician) records an utterance, and the utterance and its associated text string are input to the alignment system. The system transforms the text string into a sequence of expected phonetic segments, and the SONIC speech recognition system aligns these phonetic segments to the recorded speech. The waveform is then played at the appropriate point in an application, and the time-aligned phonetic segments tell the CU Animate system when and how to move the lips and the regions of the lower face of the 3D model. An algorithm developed by Jie Yan uses a set of rules to move the head and face while the character talks. In some applications, specific animation sequences portray emotions when the virtual human is speaking or responding to the user's speech.
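The alignment-to-animation step described above can be sketched in code. The sketch below is illustrative only: the `PhoneSegment` type, the `PHONE_TO_VISEME` table, and the `viseme_keyframes` function are all hypothetical names, and the mapping is a toy subset; the actual SONIC and CU Animate interfaces are not shown in the text. It demonstrates the general idea of converting time-aligned phonetic segments into keyframes that an animation engine could interpolate between.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PhoneSegment:
    """One time-aligned phonetic segment from the aligner (hypothetical type)."""
    phone: str      # phonetic label
    start: float    # seconds into the recorded waveform
    end: float

# Toy phone-to-viseme table (an assumed mapping, not CSLR's actual one).
PHONE_TO_VISEME = {
    "p": "lips_closed", "b": "lips_closed", "m": "lips_closed",
    "f": "lip_teeth",   "v": "lip_teeth",
    "aa": "jaw_open",   "iy": "spread",
    "sil": "rest",
}

def viseme_keyframes(segments: List[PhoneSegment]) -> List[Tuple[float, str]]:
    """Turn time-aligned phones into (time, viseme) keyframes for the face."""
    frames: List[Tuple[float, str]] = []
    for seg in segments:
        viseme = PHONE_TO_VISEME.get(seg.phone, "rest")
        # Emit a keyframe at the segment start; skip repeats so the
        # animation engine only interpolates between distinct poses.
        if not frames or frames[-1][1] != viseme:
            frames.append((seg.start, viseme))
    return frames

# Example: a plausible alignment for the syllable "ma" (/m aa/).
segments = [
    PhoneSegment("sil", 0.00, 0.12),
    PhoneSegment("m",   0.12, 0.25),
    PhoneSegment("aa",  0.25, 0.60),
    PhoneSegment("sil", 0.60, 0.80),
]
print(viseme_keyframes(segments))
# → [(0.0, 'rest'), (0.12, 'lips_closed'), (0.25, 'jaw_open'), (0.6, 'rest')]
```

In the real system, each keyframe would drive the lip and lower-face pose of the 3D model at the corresponding point in the played waveform, keeping the mouth movements synchronized with the recorded voice.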