
Speech Synthesis (English Source Text for Translation)

English original text

Speech synthesis

Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer, and can be implemented in software or hardware. A text-to-speech (TTS) system converts normal language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech.

Synthesized speech can be created by concatenating pieces of recorded speech that are stored in a database. Systems differ in the size of the stored speech units; a system that stores phones or diphones provides the largest output range, but may lack clarity. For specific usage domains, the storage of entire words or sentences allows for high-quality output. Alternatively, a synthesizer can incorporate a model of the vocal tract and other human voice characteristics to create a completely "synthetic" voice output.

The quality of a speech synthesizer is judged by its similarity to the human voice and by its ability to be understood. An intelligible text-to-speech program allows people with visual impairments or reading disabilities to listen to written works on a home computer. Many computer operating systems have included speech synthesizers since the early 1990s.

Overview of text processing

A text-to-speech system, or "engine", is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words. This process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word, and divides and marks the text into prosodic units such as phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech.
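As a rough illustration of the front-end stages described above, the following Python sketch runs a minimal text-normalization pass followed by dictionary-based grapheme-to-phoneme lookup. The tiny tables (ABBREVIATIONS, NUMBERS, LEXICON) and the example sentence are invented for this sketch and stand in for the much larger resources a real front-end would use.

```python
# Toy TTS front-end sketch: normalization + grapheme-to-phoneme lookup.
# All lookup tables here are illustrative placeholders, not real resources.
ABBREVIATIONS = {"dr.": "doctor", "st.": "street", "etc.": "et cetera"}
NUMBERS = {"1": "one", "2": "two", "3": "three", "4": "four", "5": "five"}
LEXICON = {  # ARPABET-style pronunciations, illustrative only
    "doctor": "D AA K T ER", "smith": "S M IH TH",
    "lives": "L IH V Z", "at": "AE T",
    "five": "F AY V", "elm": "EH L M", "street": "S T R IY T",
}

def normalize(text: str) -> list[str]:
    """Expand abbreviations and digits into written-out words (tokenization)."""
    out = []
    for tok in text.lower().split():
        bare = tok.strip(",;:!?")
        if bare in ABBREVIATIONS:
            out.append(ABBREVIATIONS[bare])
        elif bare.isdigit():
            out.extend(NUMBERS.get(d, d) for d in bare)  # naive digit-by-digit
        else:
            out.append(bare.strip("."))
    return out

def to_phonemes(words: list[str]) -> list[str]:
    """Grapheme-to-phoneme conversion by lookup, spelling out unknown words."""
    return [LEXICON.get(w, " ".join(w.upper())) for w in words]

if __name__ == "__main__":
    words = normalize("Dr. Smith lives at 5 Elm St.")
    print(words)         # ['doctor', 'smith', 'lives', 'at', 'five', 'elm', 'street']
    print(to_phonemes(words))
```

A real front-end would additionally mark prosodic units (phrase and sentence boundaries) before handing the symbolic representation to the back-end.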
History

Long before electronic signal processing was invented, there were those who tried to build machines to create human speech. Some early legends of the existence of "speaking heads" involved Gerbert of Aurillac (d. 1003 AD), Albertus Magnus (1198–1280), and Roger Bacon (1214–1294). In 1779, the Danish scientist Christian Kratzenstein, working at the Russian Academy of Sciences, built models of the human vocal tract that could produce the five long vowel sounds (in International Phonetic Alphabet notation, they are [aː], [eː], [iː], [oː] and [uː]).[5] This was followed by the bellows-operated "acoustic-mechanical speech machine" by Wolfgang von Kempelen of Pressburg, Hungary, described in a 1791 paper.[6] This machine added models of the tongue and lips, enabling it to produce consonants as well as vowels. In 1837, Charles Wheatstone produced a "speaking machine" based on von Kempelen's design, and in 1857, M. Faber built the "Euphonia". Wheatstone's design was resurrected in 1923 by Paget.

In the 1930s, Bell Labs developed the vocoder, which automatically analyzed speech into its fundamental tone and resonances. From his work on the vocoder, Homer Dudley developed a manually keyboard-operated voice synthesizer called The Voder (Voice Demonstrator), which he exhibited at the 1939 New York World's Fair. The Pattern Playback was built by Dr. Franklin S. Cooper and his colleagues at Haskins Laboratories in the late 1940s and completed in 1950. There were several different versions of this hardware device, but only one currently survives. The machine converts pictures of the acoustic patterns of speech, in the form of a spectrogram, back into sound. Using this device, Alvin Liberman and colleagues were able to discover acoustic cues for the perception of phonetic segments (consonants and vowels).

Dominant systems in the 1980s and 1990s were the MITalk system, based largely on the work of Dennis Klatt at MIT, and the Bell Labs system;[8] the latter was one of the first multilingual language-independent systems, making extensive use of natural language processing methods. Early electronic speech synthesizers sounded robotic and were often barely intelligible. The quality of synthesized speech has steadily improved, but output from contemporary speech synthesis systems is still clearly distinguishable from actual human speech. As the cost-performance ratio makes speech synthesizers cheaper and more accessible, more people will benefit from the use of text-to-speech programs.

Electronic devices

The first computer-based speech synthesis systems were created in the late 1950s. The first general English text-to-speech system was developed by Noriko Umeda et al. in 1968 at the Electrotechnical Laboratory, Japan.[10] In 1961, physicist John Larry Kelly, Jr. and colleague Louis Gerstman[11] used an IBM 704 computer to synthesize speech, an event among the most prominent in the history of Bell Labs. Kelly's voice recorder synthesizer (vocoder) recreated the song "Daisy Bell", with musical accompaniment from Max Mathews. Coincidentally, Arthur C. Clarke was visiting his friend and colleague John Pierce at the Bell Labs Murray Hill facility. Clarke was so impressed by the demonstration that he used it in the climactic scene of his screenplay for his novel 2001: A Space Odyssey, where the HAL 9000 computer sings the same song as it is being put to sleep by astronaut Dave Bowman. Despite the success of purely electronic speech synthesis, research is still being conducted into mechanical speech synthesizers.

Handheld electronics featuring speech synthesis began emerging in the 1970s. One of the first was the Telesensory Systems Inc. (TSI) Speech+ portable calculator for the blind, in 1976. Other devices were produced primarily for educational purposes, such as the Speak & Spell, produced by Texas Instruments in 1978. Fidelity released a speaking version of its electronic chess computer in 1979. The first video game to feature speech synthesis was the 1980 shoot 'em up arcade game Stratovox, from Sun Electronics. Another early example was the arcade version of Berzerk, released that same year.
The first multi-player electronic game using voice synthesis was Milton from the Milton Bradley Company, which produced the device in 1980.

Synthesizer technologies

The most important qualities of a speech synthesis system are naturalness and intelligibility.[citation needed] Naturalness describes how closely the output sounds like human speech, while intelligibility is the ease with which the output is understood. The ideal speech synthesizer is both natural and intelligible. Speech synthesis systems usually try to maximize both characteristics. The two primary technologies for generating synthetic speech waveforms are concatenative synthesis and formant synthesis. Each technology has strengths and weaknesses, and the intended uses of a synthesis system will typically determine which approach is used.

Concatenative synthesis

Concatenative synthesis is based on the concatenation (stringing together) of segments of recorded speech. Generally, concatenative synthesis produces the most natural-sounding synthesized speech. However, differences between natural variations in speech and the nature of the automated techniques for segmenting the waveforms sometimes result in audible glitches in the output. There are three main sub-types of concatenative synthesis.

Unit selection synthesis

Unit selection synthesis uses large databases of recorded speech. During database creation, each recorded utterance is segmented into some or all of the following: individual phones, diphones, half-phones, syllables, morphemes, words, phrases, and sentences. Typically, the division into segments is done using a specially modified speech recognizer set to a "forced alignment" mode, with some manual correction afterward, using visual representations such as the waveform and spectrogram.[12] An index of the units in the speech database is then created based on the segmentation and on acoustic parameters like the fundamental frequency (pitch), duration, position in the syllable, and neighboring phones. At run time, the desired target utterance is created by determining the best chain of candidate units from the database (unit selection). This process is typically achieved using a specially weighted decision tree. Unit selection provides the greatest naturalness, because it applies only a small amount of digital signal processing (DSP) to the recorded speech. DSP often makes recorded speech sound less natural, although some systems use a small amount of signal processing at the point of concatenation to smooth the waveform. The output from the best unit-selection systems is often indistinguishable from real human voices, especially in contexts for which the TTS system has been tuned. However, maximum naturalness typically requires unit-selection speech databases to be very large, in some systems ranging into the gigabytes of recorded data, representing dozens of hours of speech.[13] Also, unit selection algorithms have been known to select segments from a place that results in less than ideal synthesis (e.g. minor words become unclear) even when a better choice exists in the database.[14] Recently, researchers have proposed various automated methods to detect unnatural segments in unit-selection speech synthesis systems.
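The text above mentions weighted decision trees; another common way to frame the same search for the "best chain of candidate units" is as minimizing a weighted sum of target costs (how well a unit matches the wanted phone, pitch, and duration) and join costs (how smoothly adjacent units concatenate). The sketch below is a toy dynamic-programming (Viterbi-style) version of that cost view; the Unit fields, cost weights, and candidate data are invented for illustration and do not reflect any particular system.

```python
from dataclasses import dataclass

@dataclass
class Unit:
    phone: str       # label of the stored diphone/phone segment
    pitch: float     # average F0 in Hz
    duration: float  # length in seconds

def target_cost(unit: Unit, want_pitch: float, want_dur: float) -> float:
    """How far a candidate unit is from the requested prosody."""
    return abs(unit.pitch - want_pitch) / 100.0 + abs(unit.duration - want_dur) * 10.0

def join_cost(left: Unit, right: Unit) -> float:
    """Penalize pitch discontinuities at the concatenation point."""
    return abs(left.pitch - right.pitch) / 50.0

def select_units(candidates, targets):
    """Viterbi-style search. candidates[i] is a list of Units for position i,
    targets[i] is the wanted (pitch, duration). Returns the cheapest unit chain."""
    best = [(target_cost(u, *targets[0]), [u]) for u in candidates[0]]
    for i in range(1, len(candidates)):
        new_best = []
        for u in candidates[i]:
            cost, path = min(
                ((c + join_cost(p[-1], u), p) for c, p in best),
                key=lambda t: t[0],
            )
            new_best.append((cost + target_cost(u, *targets[i]), path + [u]))
        best = new_best
    return min(best, key=lambda t: t[0])[1]

# Two candidate recordings for each of two target positions (toy data).
candidates = [
    [Unit("k-ae", 120, 0.09), Unit("k-ae", 180, 0.11)],
    [Unit("ae-t", 125, 0.10), Unit("ae-t", 200, 0.08)],
]
targets = [(120, 0.10), (125, 0.10)]
for u in select_units(candidates, targets):
    print(u)
```

In a production system the candidate lists come from the indexed speech database described above, and the costs are tuned per voice; the structure of the search, however, stays essentially this simple.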
Diphone synthesis

Diphone synthesis uses a minimal speech database containing all the diphones (sound-to-sound transitions) occurring in a language. The number of diphones depends on the phonotactics of the language: for example, Spanish has about 800 diphones, and German about 2500. In diphone synthesis, only one example of each diphone is contained in the speech database. At runtime, the target prosody of a sentence is superimposed on these minimal units by means of digital signal processing techniques such as linear predictive coding, PSOLA[16] or MBROLA.[17] Diphone synthesis suffers from the sonic glitches of concatenative synthesis and the robotic-sounding nature of formant synthesis, and has few of the advantages of either approach other than small size. As such, its use in commercial applications is declining,[citation needed] although it continues to be used in research because there are a number of freely available software implementations.

Domain-specific synthesis

Domain-specific synthesis concatenates prerecorded words and phrases to create complete utterances. It is used in applications where the variety of texts the system will output is limited to a particular domain, like transit schedule announcements or weather reports.[18] The technology is very simple to implement, and has been in commercial use for a long time, in devices like talking clocks and calculators. The level of naturalness of these systems can be very high because the variety of sentence types is limited, and they closely match the prosody and intonation of the original recordings. Because these systems are limited by the words and phrases in their databases, they are not general-purpose and can only synthesize the combinations of words and phrases with which they have been preprogrammed. The blending of words within naturally spoken language, however, can still cause problems unless the many variations are taken into account. For example, in non-rhotic dialects of English the "r" in words like "clear" /ˈklɪə/ is usually only pronounced when the following word has a vowel as its first letter (e.g. "clear out" is realized as /ˌklɪəɹˈaʊt/). Likewise in French, many final consonants are no longer silent if followed by a word that begins with a vowel, an effect called liaison. This alternation cannot be reproduced by a simple word-concatenation system, which would require additional complexity to be context-sensitive.
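To make domain-specific synthesis concrete, consider a talking clock: it only ever needs a fixed inventory of recorded words, so producing an utterance reduces to looking up and concatenating prerecorded clips. The sketch below assumes hypothetical per-word recordings (e.g. "clips/three.wav") and a play_clip stub; both are placeholders, not part of any real product.

```python
# Minimal domain-specific synthesis sketch: a talking clock built by
# concatenating prerecorded word clips. Clip paths and play_clip() are
# hypothetical placeholders.

HOURS = ["twelve", "one", "two", "three", "four", "five", "six",
         "seven", "eight", "nine", "ten", "eleven"]
ONES  = ["", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine"]
TEENS = ["ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
         "sixteen", "seventeen", "eighteen", "nineteen"]
TENS  = ["", "", "twenty", "thirty", "forty", "fifty"]

def clock_words(hour: int, minute: int) -> list[str]:
    """Map a time to the fixed sequence of prerecorded words covering it."""
    words = ["it", "is", HOURS[hour % 12]]
    if minute == 0:
        words.append("o'clock")
    elif minute < 10:
        words += ["oh", ONES[minute]]
    elif minute < 20:
        words.append(TEENS[minute - 10])
    else:
        tens, ones = divmod(minute, 10)
        words.append(TENS[tens])
        if ones:
            words.append(ONES[ones])
    return words

def play_clip(path: str) -> None:
    """Stand-in for real audio playback of one prerecorded clip."""
    print(f"[playing {path}]")

def speak_time(hour: int, minute: int) -> None:
    for word in clock_words(hour, minute):
        play_clip(f"clips/{word}.wav")   # hypothetical per-word recordings

speak_time(3, 45)   # plays: it is three forty five
```

The naturalness of such a system depends almost entirely on how the clips were recorded, which is why the prosody of the original recordings matters so much in this approach.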
Formant synthesis

Formant synthesis does not use human speech samples at runtime. Instead, the synthesized speech output is created using additive synthesis and an acoustic model (physical modelling synthesis).[19] Parameters such as fundamental frequency, voicing, and noise levels are varied over time to create a waveform of artificial speech. This method is sometimes called rules-based synthesis; however, many concatenative systems also have rules-based components. Many systems based on formant synthesis technology generate artificial, robotic-sounding speech that would never be mistaken for human speech. However, maximum naturalness is not always the goal of a speech synthesis system, and formant synthesis systems have advantages over concatenative systems. Formant-synthesized speech can be reliably intelligible, even at very high speeds, avoiding the acoustic glitches that commonly plague concatenative systems. High-speed synthesized speech is used by the visually impaired to quickly navigate computers using a screen reader. Formant synthesizers are usually smaller programs than concatenative systems because they do not have a database of speech samples. They can therefore be used in embedded systems, where memory and microprocessor power are especially limited. Because formant-based systems have complete control of all aspects of the output speech, a wide variety of prosodies and intonations can be output, conveying not just questions and statements, but a variety of emotions and tones of voice. Examples of non-real-time but highly accurate intonation control in formant synthesis include the work done in the late 1970s for the Texas Instruments toy Speak & Spell, and in the early 1980s for Sega arcade machines[20] and many Atari, Inc. arcade games[21] using the TMS5220 LPC chips. Creating proper intonation for these projects was painstaking, and the results have yet to be matched by real-time text-to-speech interfaces.

Articulatory synthesis

Articulatory synthesis refers to computational techniques for synthesizing speech based on models of the human vocal tract and the articulation processes occurring there. The first articulatory synthesizer regularly used for laboratory experiments was developed at Haskins Laboratories in the mid-1970s by Philip Rubin, Tom Baer, and Paul Mermelstein. This synthesizer, known as ASY, was based on vocal tract models developed at Bell Laboratories in the 1960s and 1970s by Paul Mermelstein, Cecil Coker, and colleagues. Until recently, articulatory synthesis models have not been incorporated into commercial speech synthesis systems. A notable exception is the NeXT-based system originally developed and marketed by Trillium Sound Research, a spin-off company of the University of Calgary, where much of the original research was conducted. Following the demise of the various incarnations of NeXT (started by Steve Jobs in the late 1980s and merged with Apple Computer in 1997), the Trillium software was published under the GNU General Public License, with work continuing as gnuspeech. The system, first marketed in 1994, provides full articulatory-based text-to-speech conversion using a waveguide or transmission-line analog of the human oral and nasal tracts controlled by Carré's "distinctive region model".

HMM-based synthesis

HMM-based synthesis is a synthesis method based on hidden Markov models, also called statistical parametric synthesis. In this system, the frequency spectrum (vocal tract), fundamental frequency (vocal source), and duration (prosody) of speech are modeled simultaneously by HMMs. Speech waveforms are generated from the HMMs themselves based on the maximum likelihood criterion.

Sinewave synthesis

Sinewave synthesis is a technique for synthesizing speech by replacing the formants (main bands of energy) with pure tone whistles.
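To make the idea of replacing formants with pure tones concrete, the following sketch generates a short sinewave-speech-style signal as the sum of three sinusoids that glide between hand-picked frequencies standing in for the first three formants. The frequency values, durations, weights, and output file name are illustrative only, not taken from any published analysis.

```python
import numpy as np
import wave

SAMPLE_RATE = 16_000

def tone_track(start_hz: float, end_hz: float, duration_s: float) -> np.ndarray:
    """One 'whistle': a sinusoid whose frequency glides linearly between two values."""
    n = int(SAMPLE_RATE * duration_s)
    freq = np.linspace(start_hz, end_hz, n)             # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / SAMPLE_RATE   # integrate frequency to get phase
    return np.sin(phase)

# Illustrative formant glides (Hz), roughly spanning a vowel-like transition.
f1 = tone_track(300, 700, 0.4)     # first formant replaced by a pure tone
f2 = tone_track(2200, 1200, 0.4)   # second formant
f3 = tone_track(3000, 2600, 0.4)   # third formant
signal = 0.5 * f1 + 0.3 * f2 + 0.2 * f3

# Write a 16-bit mono WAV so the "whistled" result can be played back.
samples = (signal / np.abs(signal).max() * 32767).astype(np.int16)
with wave.open("sinewave_demo.wav", "wb") as wav_file:
    wav_file.setnchannels(1)
    wav_file.setsampwidth(2)
    wav_file.setframerate(SAMPLE_RATE)
    wav_file.writeframes(samples.tobytes())
```

In actual sinewave synthesis the formant tracks are measured from a real utterance rather than hand-picked, which is what makes the resulting whistles intelligible as speech.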
Challenges

Text normalization challenges

The process of normalizing text is rarely straightforward. Texts are full of heteronyms, numbers, and abbreviations that all require expansion into a phonetic representation. There are many spellings in English which are pronounced differently based on context. For example, "My latest project is to learn how to better project my voice" contains two pronunciations of "project". Most text-to-speech (TTS) systems do not generate semantic representations of their input texts, as processes for doing so are not reliable, well understood, or computationally effective. As a result, various heuristic techniques are used to guess the proper way to disambiguate homographs, like examining neighboring words and using statistics about frequency of occurrence. Recently, TTS systems have begun to use HMMs (discussed above) to generate "parts of speech" to aid in disambiguating homographs. This technique is quite successful for many cases, such as whether "read" should be pronounced as "red" (implying past tense) or as "reed" (implying present tense). Typical error rates when using HMMs in this fashion are usually below five percent. These techniques also work well for most European languages, although access to required training corpora is frequently difficult in these languages. Deciding how to convert numbers is another problem that TTS systems have to address. It is a simple programming challenge to convert a number into words (at least in English), like "1325" becoming "one thousand three hundred twenty-five".
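The number expansion just described can be sketched directly. The function below spells out integers up to 999,999 in English words; it is a simplified illustration, not the rule set of any production normalizer, and it ignores hyphenation, "and", ordinals, years, phone numbers, and decimals, all of which real systems must decide how to read from context.

```python
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight", "nine",
        "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen", "sixteen",
        "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy", "eighty", "ninety"]

def number_to_words(n: int) -> str:
    """Spell out 0..999999 in English words (simplified; no hyphens or 'and')."""
    if n < 20:
        return ONES[n]
    if n < 100:
        tens, ones = divmod(n, 10)
        return TENS[tens] + ("" if ones == 0 else " " + ONES[ones])
    if n < 1000:
        hundreds, rest = divmod(n, 100)
        words = ONES[hundreds] + " hundred"
        return words + ("" if rest == 0 else " " + number_to_words(rest))
    thousands, rest = divmod(n, 1000)
    words = number_to_words(thousands) + " thousand"
    return words + ("" if rest == 0 else " " + number_to_words(rest))

print(number_to_words(1325))  # one thousand three hundred twenty five
```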
