<html><body>
<style>

body, h1, h2, h3, div, span, p, pre, a {
  margin: 0;
  padding: 0;
  border: 0;
  font-weight: inherit;
  font-style: inherit;
  font-size: 100%;
  font-family: inherit;
  vertical-align: baseline;
}

body {
  font-size: 13px;
  padding: 1em;
}

h1 {
  font-size: 26px;
  margin-bottom: 1em;
}

h2 {
  font-size: 24px;
  margin-bottom: 1em;
}

h3 {
  font-size: 20px;
  margin-bottom: 1em;
  margin-top: 1em;
}

pre, code {
  line-height: 1.5;
  font-family: Monaco, 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', 'Lucida Console', monospace;
}

pre {
  margin-top: 0.5em;
}

h1, h2, h3, p {
  font-family: Arial, sans-serif;
}

h1, h2, h3 {
  border-bottom: solid #CCC 1px;
}

.toc_element {
  margin-top: 0.5em;
}

.firstline {
  margin-left: 2em;
}

.method {
  margin-top: 1em;
  border: solid 1px #CCC;
  padding: 1em;
  background: #EEE;
}

.details {
  font-weight: bold;
  font-size: 14px;
}

</style>

<h1><a href="texttospeech_v1.html">Cloud Text-to-Speech API</a> . <a href="texttospeech_v1.text.html">text</a></h1>
<h2>Instance Methods</h2>
<p class="toc_element">
  <code><a href="#synthesize">synthesize(body, x__xgafv=None)</a></code></p>
<p class="firstline">Synthesizes speech synchronously: receive results after all text input has been processed.</p>
<h3>Method Details</h3>
<div class="method">
    <code class="details" id="synthesize">synthesize(body, x__xgafv=None)</code>
  <pre>Synthesizes speech synchronously: receive results after all text input
has been processed.

Args:
  body: object, The request body. (required)
    The object takes the form of:

{ # The top-level message sent by the client for the `SynthesizeSpeech` method.
    "input": { # Required. The Synthesizer requires either plain text or SSML as input.
        # Contains text input to be synthesized. Either `text` or `ssml` must be
        # supplied. Supplying both or neither returns
        # google.rpc.Code.INVALID_ARGUMENT. The input size is limited to 5000
        # characters.
      "text": "A String", # The raw text to be synthesized.
      "ssml": "A String", # The SSML document to be synthesized. The SSML document must be valid
          # and well-formed. Otherwise the RPC will fail and return
          # google.rpc.Code.INVALID_ARGUMENT. For more information, see
          # [SSML](/speech/text-to-speech/docs/ssml).
    },
    "voice": { # Required. The desired voice of the synthesized audio.
        # Description of which voice to use for a synthesis request.
      "ssmlGender": "A String", # The preferred gender of the voice. Optional; if not set, the service will
          # choose a voice based on the other parameters such as language_code and
          # name. Note that this is only a preference, not a requirement; if a
          # voice of the appropriate gender is not available, the synthesizer should
          # substitute a voice with a different gender rather than failing the request.
      "languageCode": "A String", # Required. The language (and optionally also the region) of the voice expressed as a
          # [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language tag, e.g.
          # "en-US". This should not include a script tag (e.g. use
          # "cmn-cn" rather than "cmn-Hant-cn"), because the script will be inferred
          # from the input provided in the SynthesisInput. The TTS service
          # will use this parameter to help choose an appropriate voice. Note that
          # the TTS service may choose a voice with a slightly different language code
          # than the one selected; it may substitute a different region
          # (e.g. using en-US rather than en-CA if there isn't a Canadian voice
          # available), or even a different language, e.g. using "nb" (Norwegian
          # Bokmal) instead of "no" (Norwegian).
      "name": "A String", # The name of the voice. Optional; if not set, the service will choose a
          # voice based on the other parameters such as language_code and gender.
    },
    "audioConfig": { # Required. The configuration of the synthesized audio.
        # Description of audio data to be synthesized.
      "audioEncoding": "A String", # Required. The format of the requested audio byte stream.
      "effectsProfileId": [ # An identifier which selects 'audio effects' profiles that are applied on
          # (post synthesized) text to speech.
          # Effects are applied on top of each other in the order they are given.
          # See
          # [audio-profiles](https://cloud.google.com/text-to-speech/docs/audio-profiles)
          # for current supported profile ids.
        "A String",
      ],
      "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. Optional. If this is
          # different from the voice's natural sample rate, then the synthesizer will
          # honor this request by converting to the desired sample rate (which might
          # result in worse audio quality), unless the specified sample rate is not
          # supported for the encoding chosen, in which case it will fail the request
          # and return google.rpc.Code.INVALID_ARGUMENT.
      "pitch": 3.14, # Optional speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
          # semitones from the original pitch. -20 means decrease 20 semitones from the
          # original pitch.
      "speakingRate": 3.14, # Optional speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
          # native speed supported by the specific voice. 2.0 is twice as fast, and
          # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
          # other value below 0.25 or above 4.0 will return an error.
      "volumeGainDb": 3.14, # Optional volume gain (in dB) relative to the normal native volume supported by
          # the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
          # 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
          # will play at approximately half the amplitude of the normal native signal
          # amplitude. A value of +6.0 (dB) will play at approximately twice the
          # amplitude of the normal native signal amplitude. It is strongly recommended
          # not to exceed +10 (dB), as there is usually no effective increase in loudness
          # for any value greater than that.
    },
  }

  x__xgafv: string, V1 error format.
    Allowed values
      1 - v1 error format
      2 - v2 error format

Returns:
  An object of the form:

{ # The message returned to the client by the `SynthesizeSpeech` method.
    "audioContent": "A String", # The audio data bytes encoded as specified in the request, including the
        # header (for LINEAR16 audio, the WAV header is included). Note: as
        # with all bytes fields, protocol buffers use a pure binary representation,
        # whereas JSON representations use base64.
  }</pre>
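<p>A minimal usage sketch for this method (not part of the generated reference). It assumes the <code>google-api-python-client</code> library and an already-configured API key; the key value and output filename below are placeholders:</p>
<pre>
import base64

from googleapiclient.discovery import build

# Build a client for the Cloud Text-to-Speech API (v1). An API key is one way to
# authorize; Application Default Credentials would also work here.
service = build('texttospeech', 'v1', developerKey='YOUR_API_KEY')  # placeholder key

# Minimal request body: plain text input, a language for voice selection, and an
# audio encoding for the returned bytes (LINEAR16 includes a WAV header).
body = {
    'input': {'text': 'Hello, world!'},
    'voice': {'languageCode': 'en-US'},
    'audioConfig': {'audioEncoding': 'LINEAR16'},
}

response = service.text().synthesize(body=body).execute()

# In the JSON response the bytes field `audioContent` is base64-encoded,
# so decode it before writing the audio to disk.
with open('output.wav', 'wb') as out:           # placeholder filename
    out.write(base64.b64decode(response['audioContent']))
</pre>
<p>Because the response is JSON, <code>audioContent</code> arrives as a base64 string rather than raw bytes; decoding it yields the audio stream in the encoding requested above.</p>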
</div>
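<p>A fuller request body sketch exercising the optional fields documented above. The effects profile id is illustrative, and the numeric settings simply sit inside the documented ranges; it reuses the <code>service</code> object built in the previous example:</p>
<pre>
# Assumes `service` was built as in the previous example.
body = {
    'input': {'text': 'This request tunes the optional audio parameters.'},
    'voice': {
        'languageCode': 'en-US',
        'ssmlGender': 'FEMALE',      # a preference only; the service may substitute another gender
    },
    'audioConfig': {
        'audioEncoding': 'MP3',
        'speakingRate': 1.25,        # range [0.25, 4.0]; 1.0 is the voice's native rate
        'pitch': -2.0,               # semitones, range [-20.0, 20.0]
        'volumeGainDb': 0.0,         # range [-96.0, 16.0]; avoid exceeding +10 dB
        'sampleRateHertz': 24000,    # resampled if it differs from the voice's native rate
        'effectsProfileId': ['handset-class-device'],  # illustrative profile id
    },
}

response = service.text().synthesize(body=body).execute()
</pre>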

</body></html>