docs: update generated docs (#981)
diff --git a/docs/dyn/dialogflow_v2.projects.agent.environments.users.sessions.html b/docs/dyn/dialogflow_v2.projects.agent.environments.users.sessions.html
index fa9ae76..b0a8d29 100644
--- a/docs/dyn/dialogflow_v2.projects.agent.environments.users.sessions.html
+++ b/docs/dyn/dialogflow_v2.projects.agent.environments.users.sessions.html
@@ -138,16 +138,170 @@
"-". It's up to the API caller to choose an appropriate `Session ID` and
`User ID`. They can be a random number or some type of user and session
identifiers (preferably hashed). The length of the `Session ID` and
-`User ID` must not exceed 36 characters. (required)
+`User ID` must not exceed 36 characters.
+
+For more information, see the [API interactions
+guide](https://cloud.google.com/dialogflow/docs/api-overview). (required)
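
As a sketch of the naming rules above (the project, environment, and user identifiers below are hypothetical, not from this API's examples), a session resource name respecting the 36-character limit might be assembled like this:

```python
import hashlib
import uuid


def build_session_path(project_id, session_id, environment_id=None, user_id=None):
    """Builds a Dialogflow v2 session resource name.

    If no environment is given, the short `projects/<Project ID>/agent/
    sessions/<Session ID>` form is used; otherwise the environments/users
    form, defaulting the user to '-' as the docs describe.
    """
    if environment_id is None:
        return "projects/%s/agent/sessions/%s" % (project_id, session_id)
    user_id = user_id or "-"
    return "projects/%s/agent/environments/%s/users/%s/sessions/%s" % (
        project_id, environment_id, user_id, session_id)


# A random UUID string is exactly 36 characters, the documented maximum.
session_id = str(uuid.uuid4())
# Hash a raw user identifier (as recommended above) and truncate to 36 chars.
user_id = hashlib.sha256(b"raw-user-identifier").hexdigest()[:36]
```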
body: object, The request body.
The object takes the form of:
{ # The request to detect user's intent.
+ "outputAudioConfigMask": "A String", # Mask for output_audio_config indicating which settings in this
+ # request-level config should override speech synthesizer settings defined at
+ # agent-level.
+ #
+ # If unspecified or empty, output_audio_config replaces the agent-level
+ # config in its entirety.
+ "queryParams": { # Represents the parameters of the conversational query. # The parameters of this query.
+ "resetContexts": True or False, # Specifies whether to delete all contexts in the current session
+ # before the new ones are activated.
+ "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Configures the type of sentiment analysis to perform. If not
+ # provided, sentiment analysis is not performed.
+ "analyzeQueryTextSentiment": True or False, # Instructs the service to perform sentiment analysis on
+ # `query_text`. If not provided, sentiment analysis is not performed on
+ # `query_text`.
+ },
+ "sessionEntityTypes": [ # Additional session entity types to replace or extend developer
+ # entity types with. The entity synonyms apply to all languages and persist
+ # for the session of this query.
+ { # A session represents a conversation between a Dialogflow agent and an
+ # end-user. You can create special entities, called session entities, during a
+ # session. Session entities can extend or replace custom entity types and only
+ # exist during the session that they were created for. All session data,
+ # including session entities, is stored by Dialogflow for 20 minutes.
+ #
+ # For more information, see the [session entity
+ # guide](https://cloud.google.com/dialogflow/docs/entities-session).
+ "name": "A String", # Required. The unique identifier of this session entity type. Format:
+ # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
+ # Display Name>`, or `projects/<Project ID>/agent/environments/<Environment
+ # ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display
+ # Name>`.
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # `<Entity Type Display Name>` must be the display name of an existing entity
+ # type in the same agent that will be overridden or supplemented.
+ "entities": [ # Required. The collection of entities associated with this session entity
+ # type.
+ { # An **entity entry** for an associated entity type.
+ "synonyms": [ # Required. A collection of value synonyms. For example, if the entity type
+ # is *vegetable*, and `value` is *scallions*, a synonym could be *green
+ # onions*.
+ #
+ # For `KIND_LIST` entity types:
+ #
+ # * This collection must contain exactly one synonym equal to `value`.
+ "A String",
+ ],
+ "value": "A String", # Required. The primary value associated with this entity entry.
+ # For example, if the entity type is *vegetable*, the value could be
+ # *scallions*.
+ #
+ # For `KIND_MAP` entity types:
+ #
+ # * A reference value to be used in place of synonyms.
+ #
+ # For `KIND_LIST` entity types:
+ #
+ # * A string that can contain references to other entity types (with or
+ # without aliases).
+ },
+ ],
+ "entityOverrideMode": "A String", # Required. Indicates whether the additional data should override or
+ # supplement the custom entity type definition.
+ },
+ ],
+ "payload": { # This field can be used to pass custom data to your webhook.
+ # Arbitrary JSON objects are supported.
+ # If supplied, the value is used to populate the
+ # `WebhookRequest.original_detect_intent_request.payload`
+ # field sent to your webhook.
+ "a_key": "", # Properties of the object.
+ },
+ "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # The geo location of this conversational query.
+ # of doubles representing degrees latitude and degrees longitude. Unless
+ # specified otherwise, this must conform to the
+ # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
+ # standard</a>. Values must be within normalized ranges.
+ "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
+ "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
+ },
+ "contexts": [ # The collection of contexts to be activated before this query is
+ # executed.
+ { # Dialogflow contexts are similar to natural language context. If a person says
+ # to you "they are orange", you need context in order to understand what "they"
+ # is referring to. Similarly, for Dialogflow to handle an end-user expression
+ # like that, it needs to be provided with context in order to correctly match
+ # an intent.
+ #
+ # Using contexts, you can control the flow of a conversation. You can configure
+ # contexts for an intent by setting input and output contexts, which are
+ # identified by string names. When an intent is matched, any configured output
+ # contexts for that intent become active. While any contexts are active,
+ # Dialogflow is more likely to match intents that are configured with input
+ # contexts that correspond to the currently active contexts.
+ #
+ # For more information about context, see the
+ # [Contexts guide](https://cloud.google.com/dialogflow/docs/contexts-overview).
+ "parameters": { # Optional. The collection of parameters associated with this context.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
+ "a_key": "", # Properties of the object.
+ },
+ "name": "A String", # Required. The unique identifier of the context. Format:
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
+ #
+ # The `Context ID` is always converted to lowercase, may only contain
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "lifespanCount": 42, # Optional. The number of conversational query requests after which the
+ # context expires. The default is `0`. If set to `0`, the context expires
+ # immediately. Contexts expire automatically after 20 minutes if there
+ # are no matching queries.
+ },
+ ],
+ "timeZone": "A String", # The time zone of this conversational query from the
+ # [time zone database](https://www.iana.org/time-zones), e.g.,
+ # America/New_York, Europe/Paris. If not provided, the time zone specified in
+ # agent settings is used.
+ },
+ "inputAudio": "A String", # The natural language speech audio to be processed. This field
+ # should be populated iff `query_input` is set to an input audio config.
+ # A single request can contain up to 1 minute of speech audio data.
"outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # Instructs the speech synthesizer how to generate the output
# audio. If this field is not set and agent-level speech synthesizer is not
# configured, no output audio is generated.
# If this audio config is supplied in a request, it overrides all existing
# text-to-speech settings applied to the agent.
+ "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
+ # provided, then the synthesizer will use the default sample rate based on
+ # the audio encoding. If this is different from the voice's natural sample
+ # rate, then the synthesizer will honor this request by converting to the
+ # desired sample rate (which might result in worse audio quality).
"audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
"synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
"volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
@@ -158,6 +312,10 @@
# amplitude of the normal native signal amplitude. We strongly recommend not
# to exceed +10 (dB) as there's usually no effective increase in loudness for
# any value greater than that.
+ "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
+ # native speed supported by the specific voice. 2.0 is twice as fast, and
+      # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
+ # other values < 0.25 or > 4.0 will return an error.
"pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
# semitones from the original pitch. -20 means decrease 20 semitones from the
# original pitch.
@@ -171,31 +329,13 @@
# voice of the appropriate gender is not available, the synthesizer should
# substitute a voice with a different gender rather than failing the request.
},
- "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
- # native speed supported by the specific voice. 2.0 is twice as fast, and
- # 0.5 is half as fast. If unset(0.0), defaults to the native 1.0 speed. Any
- # other values < 0.25 or > 4.0 will return an error.
"effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
# applied on (post synthesized) text to speech. Effects are applied on top of
# each other in the order they are given.
"A String",
],
},
- "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
- # provided, then the synthesizer will use the default sample rate based on
- # the audio encoding. If this is different from the voice's natural sample
- # rate, then the synthesizer will honor this request by converting to the
- # desired sample rate (which might result in worse audio quality).
},
- "inputAudio": "A String", # The natural language speech audio to be processed. This field
- # should be populated iff `query_input` is set to an input audio config.
- # A single request can contain up to 1 minute of speech audio data.
- "outputAudioConfigMask": "A String", # Mask for output_audio_config indicating which settings in this
- # request-level config should override speech synthesizer settings defined at
- # agent-level.
- #
- # If unspecified or empty, output_audio_config replaces the agent-level
- # config in its entirety.
"queryInput": { # Represents the query input. It can contain either: # Required. The input specification. It can be set to:
#
# 1. an audio config
@@ -211,16 +351,19 @@
      # 2. A conversational query in the form of text.
#
# 3. An event that specifies which intent to trigger.
+ "text": { # Represents the natural language text to be processed. # The natural language text to be processed.
+ "text": "A String", # Required. The UTF-8 encoded natural language text to be processed.
+ # Text length must not exceed 256 characters.
+ "languageCode": "A String", # Required. The language of this conversational query. See [Language
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
+ # for a list of the currently supported language codes. Note that queries in
+ # the same session do not necessarily need to specify the same language.
+ },
"event": { # Events allow for matching intents by event name instead of the natural # The event to be processed.
# language input. For instance, input `<event: { name: "welcome_event",
# parameters: { name: "Sam" } }>` can trigger a personalized welcome response.
# The parameter `name` may be used by the agent in the response:
# `"Hello #welcome_event.name! What can I do for you today?"`.
- "languageCode": "A String", # Required. The language of this query. See [Language
- # Support](https://cloud.google.com/dialogflow/docs/reference/language)
- # for a list of the currently supported language codes. Note that queries in
- # the same session do not necessarily need to specify the same language.
- "name": "A String", # Required. The unique identifier of the event.
"parameters": { # The collection of parameters associated with the event.
#
# Depending on your protocol or client library language, this is a
@@ -238,32 +381,25 @@
# - Else: parameter value
"a_key": "", # Properties of the object.
},
- },
- "text": { # Represents the natural language text to be processed. # The natural language text to be processed.
- "languageCode": "A String", # Required. The language of this conversational query. See [Language
+ "name": "A String", # Required. The unique identifier of the event.
+ "languageCode": "A String", # Required. The language of this query. See [Language
# Support](https://cloud.google.com/dialogflow/docs/reference/language)
# for a list of the currently supported language codes. Note that queries in
# the same session do not necessarily need to specify the same language.
- "text": "A String", # Required. The UTF-8 encoded natural language text to be processed.
- # Text length must not exceed 256 characters.
},
"audioConfig": { # Instructs the speech recognizer how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
- "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
- "singleUtterance": True or False, # If `false` (default), recognition does not cease until the
- # client closes the stream.
- # If `true`, the recognizer will detect a single spoken utterance in input
- # audio. Recognition ceases when it detects the audio's voice has
- # stopped or paused. In this case, once a detected intent is received, the
- # client should close the stream and start a new request with a new stream as
- # needed.
- # Note: This setting is relevant only for streaming methods.
- # Note: When specified, InputAudioConfig.single_utterance takes precedence
- # over StreamingDetectIntentRequest.single_utterance.
- "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
- # translations. See [Language
- # Support](https://cloud.google.com/dialogflow/docs/reference/language)
- # for a list of the currently supported language codes. Note that queries in
- # the same session do not necessarily need to specify the same language.
+ "model": "A String", # Which Speech model to select for the given request. Select the
+ # model best suited to your domain to get best results. If a model is not
+ # explicitly specified, then we auto-select a model based on the parameters
+ # in the InputAudioConfig.
+ # If enhanced speech model is enabled for the agent and an enhanced
+ # version of the specified model for the language does not exist, then the
+ # speech is recognized using the standard version of the specified model.
+ # Refer to
+ # [Cloud Speech API
+ # documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model)
+ # for more details.
+ "modelVariant": "A String", # Which variant of the Speech model to use.
"speechContexts": [ # Context information to assist speech recognition.
#
# See [the Cloud Speech
@@ -275,6 +411,7 @@
# recognizer should recognize with higher likelihood.
#
# This list can be used to:
+ #
# * improve accuracy for words and phrases you expect the user to say,
# e.g. typical commands for your Dialogflow agent
# * add additional words to the speech recognizer vocabulary
@@ -296,6 +433,31 @@
# find a value that fits your use case with binary search.
},
],
+ "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in
+ # StreamingRecognitionResult with information about the recognized speech
+ # words, e.g. start and end time offsets. If false or unspecified, Speech
+ # doesn't return any word-level information.
+ "singleUtterance": True or False, # If `false` (default), recognition does not cease until the
+ # client closes the stream.
+ # If `true`, the recognizer will detect a single spoken utterance in input
+ # audio. Recognition ceases when it detects the audio's voice has
+ # stopped or paused. In this case, once a detected intent is received, the
+ # client should close the stream and start a new request with a new stream as
+ # needed.
+ # Note: This setting is relevant only for streaming methods.
+ # Note: When specified, InputAudioConfig.single_utterance takes precedence
+ # over StreamingDetectIntentRequest.single_utterance.
+ "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+ "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query.
+ # Refer to
+ # [Cloud Speech API
+ # documentation](https://cloud.google.com/speech-to-text/docs/basics) for
+ # more details.
+ "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
+ # translations. See [Language
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
+ # for a list of the currently supported language codes. Note that queries in
+ # the same session do not necessarily need to specify the same language.
"phraseHints": [ # A list of strings containing words and phrases that the speech
# recognizer should recognize with higher likelihood.
#
@@ -308,149 +470,6 @@
# treat the [phrase_hints]() as a single additional [SpeechContext]().
"A String",
],
- "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in
- # StreamingRecognitionResult with information about the recognized speech
- # words, e.g. start and end time offsets. If false or unspecified, Speech
- # doesn't return any word-level information.
- "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query.
- # Refer to
- # [Cloud Speech API
- # documentation](https://cloud.google.com/speech-to-text/docs/basics) for
- # more details.
- "model": "A String", # Which Speech model to select for the given request. Select the
- # model best suited to your domain to get best results. If a model is not
- # explicitly specified, then we auto-select a model based on the parameters
- # in the InputAudioConfig.
- # If enhanced speech model is enabled for the agent and an enhanced
- # version of the specified model for the language does not exist, then the
- # speech is recognized using the standard version of the specified model.
- # Refer to
- # [Cloud Speech API
- # documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model)
- # for more details.
- "modelVariant": "A String", # Which variant of the Speech model to use.
- },
- },
- "queryParams": { # Represents the parameters of the conversational query. # The parameters of this query.
- "contexts": [ # The collection of contexts to be activated before this query is
- # executed.
- { # Represents a context.
- "lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. The default is `0`. If set to `0`, the context expires
- # immediately. Contexts expire automatically after 20 minutes if there
- # are no matching queries.
- "name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
- # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
- # ID>/sessions/<Session ID>/contexts/<Context ID>`.
- #
- # The `Context ID` is always converted to lowercase, may only contain
- # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
- #
- # If `Environment ID` is not specified, we assume default 'draft'
- # environment. If `User ID` is not specified, we assume default '-' user.
- #
- # The following context names are reserved for internal use by Dialogflow.
- # You should not use these contexts or create contexts with these names:
- #
- # * `__system_counters__`
- # * `*_id_dialog_context`
- # * `*_dialog_params_size`
- "parameters": { # Optional. The collection of parameters associated with this context.
- #
- # Depending on your protocol or client library language, this is a
- # map, associative array, symbol table, dictionary, or JSON object
- # composed of a collection of (MapKey, MapValue) pairs:
- #
- # - MapKey type: string
- # - MapKey value: parameter name
- # - MapValue type:
- # - If parameter's entity type is a composite entity: map
- # - Else: string or number, depending on parameter value type
- # - MapValue value:
- # - If parameter's entity type is a composite entity:
- # map from composite entity property names to property values
- # - Else: parameter value
- "a_key": "", # Properties of the object.
- },
- },
- ],
- "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Configures the type of sentiment analysis to perform. If not
- # provided, sentiment analysis is not performed.
- "analyzeQueryTextSentiment": True or False, # Instructs the service to perform sentiment analysis on
- # `query_text`. If not provided, sentiment analysis is not performed on
- # `query_text`.
- },
- "timeZone": "A String", # The time zone of this conversational query from the
- # [time zone database](https://www.iana.org/time-zones), e.g.,
- # America/New_York, Europe/Paris. If not provided, the time zone specified in
- # agent settings is used.
- "sessionEntityTypes": [ # Additional session entity types to replace or extend developer
- # entity types with. The entity synonyms apply to all languages and persist
- # for the session of this query.
- { # Represents a session entity type.
- #
- # Extends or replaces a custom entity type at the user session level (we
- # refer to the entity types defined at the agent level as "custom entity
- # types").
- #
- # Note: session entity types apply to all queries, regardless of the language.
- "name": "A String", # Required. The unique identifier of this session entity type. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
- # Display Name>`, or `projects/<Project ID>/agent/environments/<Environment
- # ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display
- # Name>`.
- # If `Environment ID` is not specified, we assume default 'draft'
- # environment. If `User ID` is not specified, we assume default '-' user.
- #
- # `<Entity Type Display Name>` must be the display name of an existing entity
- # type in the same agent that will be overridden or supplemented.
- "entityOverrideMode": "A String", # Required. Indicates whether the additional data should override or
- # supplement the custom entity type definition.
- "entities": [ # Required. The collection of entities associated with this session entity
- # type.
- { # An **entity entry** for an associated entity type.
- "value": "A String", # Required. The primary value associated with this entity entry.
- # For example, if the entity type is *vegetable*, the value could be
- # *scallions*.
- #
- # For `KIND_MAP` entity types:
- #
- # * A reference value to be used in place of synonyms.
- #
- # For `KIND_LIST` entity types:
- #
- # * A string that can contain references to other entity types (with or
- # without aliases).
- "synonyms": [ # Required. A collection of value synonyms. For example, if the entity type
- # is *vegetable*, and `value` is *scallions*, a synonym could be *green
- # onions*.
- #
- # For `KIND_LIST` entity types:
- #
- # * This collection must contain exactly one synonym equal to `value`.
- "A String",
- ],
- },
- ],
- },
- ],
- "resetContexts": True or False, # Specifies whether to delete all contexts in the current session
- # before the new ones are activated.
- "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # The geo location of this conversational query.
- # of doubles representing degrees latitude and degrees longitude. Unless
- # specified otherwise, this must conform to the
- # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
- # standard</a>. Values must be within normalized ranges.
- "latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
- "longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
- },
- "payload": { # This field can be used to pass custom data to your webhook.
- # Arbitrary JSON objects are supported.
- # If supplied, the value is used to populate the
- # `WebhookRequest.original_detect_intent_request.payload`
- # field sent to your webhook.
- "a_key": "", # Properties of the object.
},
},
}
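
Putting the request shape above together, a minimal sketch of building the body for a text query (project and session names here are hypothetical; the helper function is illustrative, not part of this library):

```python
def build_detect_intent_body(text, language_code="en-US", time_zone=None):
    """Builds a minimal DetectIntentRequest body with a text query input."""
    body = {
        "queryInput": {
            "text": {
                # The docs above cap query text at 256 characters.
                "text": text[:256],
                "languageCode": language_code,
            }
        }
    }
    if time_zone:
        body["queryParams"] = {"timeZone": time_zone}
    return body


# With a discovery-built service object, this body would be sent roughly as:
# service.projects().agent().environments().users().sessions().detectIntent(
#     session="projects/my-project/agent/environments/draft/users/-/sessions/abc",
#     body=build_detect_intent_body("hi"),
# ).execute()
```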
@@ -464,9 +483,26 @@
An object of the form:
{ # The message returned from the DetectIntent method.
+ "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
+ # Note: The output audio is generated based on the values of default platform
+ # text responses found in the `query_result.fulfillment_messages` field. If
+ # multiple default text responses exist, they will be concatenated when
+ # generating audio. If no default platform text responses exist, the
+ # generated audio content will be empty.
+ #
+ # In some scenarios, multiple output audio fields may be present in the
+ # response structure. In these cases, only the top-most-level audio output
+ # has content.
+ "responseId": "A String", # The unique identifier of the response. It can be used to
+ # locate a response in the training example set or for reporting issues.
"outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # The config used by the speech synthesizer to generate the output audio.
# If this audio config is supplied in a request, it overrides all existing
# text-to-speech settings applied to the agent.
+ "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
+ # provided, then the synthesizer will use the default sample rate based on
+ # the audio encoding. If this is different from the voice's natural sample
+ # rate, then the synthesizer will honor this request by converting to the
+ # desired sample rate (which might result in worse audio quality).
"audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
"synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
"volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
@@ -477,6 +513,10 @@
# amplitude of the normal native signal amplitude. We strongly recommend not
# to exceed +10 (dB) as there's usually no effective increase in loudness for
# any value greater than that.
+ "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
+ # native speed supported by the specific voice. 2.0 is twice as fast, and
+      # 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any
+ # other values < 0.25 or > 4.0 will return an error.
"pitch": 3.14, # Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20
# semitones from the original pitch. -20 means decrease 20 semitones from the
# original pitch.
@@ -490,367 +530,271 @@
# voice of the appropriate gender is not available, the synthesizer should
# substitute a voice with a different gender rather than failing the request.
},
- "speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
- # native speed supported by the specific voice. 2.0 is twice as fast, and
- # 0.5 is half as fast. If unset(0.0), defaults to the native 1.0 speed. Any
- # other values < 0.25 or > 4.0 will return an error.
"effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
# applied on (post synthesized) text to speech. Effects are applied on top of
# each other in the order they are given.
"A String",
],
},
- "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
- # provided, then the synthesizer will use the default sample rate based on
- # the audio encoding. If this is different from the voice's natural sample
- # rate, then the synthesizer will honor this request by converting to the
- # desired sample rate (which might result in worse audio quality).
},
"queryResult": { # Represents the result of conversational query or event processing. # The selected results of the conversational query or event processing.
# See `alternative_query_results` for additional potential results.
- "fulfillmentMessages": [ # The collection of rich messages to present to the user.
- { # A rich response message.
- # Corresponds to the intent `Response` field in the Dialogflow console.
- # For more information, see
- # [Rich response
- # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
- "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
- "mediaType": "A String", # Optional. What type of media is the content (ie "audio").
- "mediaObjects": [ # Required. List of media objects.
- { # Response media object for media content card.
- "name": "A String", # Required. Name of media card.
- "description": "A String", # Optional. Description of media card.
- "contentUrl": "A String", # Required. Url where the media is stored.
- "icon": { # The image response message. # Optional. Icon to display above media content.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "largeImage": { # The image response message. # Optional. Image to display above media content.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- },
- ],
- },
- "image": { # The image response message. # The image response.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "payload": { # A custom platform-specific response.
- "a_key": "", # Properties of the object.
- },
- "text": { # The text response message. # The text response.
- "text": [ # Optional. The collection of the agent's responses.
- "A String",
- ],
- },
- "platform": "A String", # Optional. The platform that this message is intended for.
- "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
- "suggestions": [ # Required. The list of suggested replies.
- { # The suggestion chip message that the user can tap to quickly post a reply
- # to the conversation.
- "title": "A String", # Required. The text shown the in the suggestion chip.
- },
- ],
- },
- "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
- "subtitle": "A String", # Optional. Subtitle of the list.
- "items": [ # Required. List items.
- { # An item in the list.
- "title": "A String", # Required. The title of the list item.
- "image": { # The image response message. # Optional. The image to display.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "description": "A String", # Optional. The main text describing the item.
- "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
- # dialog.
- "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
- # item in dialog.
- "A String",
- ],
- "key": "A String", # Required. A unique key that will be sent back to the agent if this
- # response is given.
- },
- },
- ],
- "title": "A String", # Optional. The overall title of the list.
- },
- "quickReplies": { # The quick replies response message. # The quick replies response.
- "title": "A String", # Optional. The title of the collection of quick replies.
- "quickReplies": [ # Optional. The collection of quick replies.
- "A String",
- ],
- },
- "card": { # The card response message. # The card response.
- "imageUri": "A String", # Optional. The public URI to an image file for the card.
- "title": "A String", # Optional. The title of the card.
- "buttons": [ # Optional. The collection of card buttons.
- { # Contains information about a button.
- "text": "A String", # Optional. The text to show on the button.
- "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
- # open.
- },
- ],
- "subtitle": "A String", # Optional. The subtitle of the card.
- },
- "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
- "title": "A String", # Optional. The title of the card.
- "image": { # The image response message. # Optional. The image for the card.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "formattedText": "A String", # Required, unless image is present. The body text of the card.
- "buttons": [ # Optional. The collection of card buttons.
- { # The button object that appears at the bottom of a card.
- "title": "A String", # Required. The title of the button.
- "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
- "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
- },
- },
- ],
- "subtitle": "A String", # Optional. The subtitle of the card.
- },
- "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
- "title": "A String", # Required. Title of the card.
- "rows": [ # Optional. Rows in this table of data.
- { # Row of TableCard.
- "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
- "cells": [ # Optional. List of cells that make up this row.
- { # Cell of TableCardRow.
- "text": "A String", # Required. Text in this cell.
- },
- ],
- },
- ],
- "subtitle": "A String", # Optional. Subtitle to the title.
- "columnProperties": [ # Optional. Display properties for the columns in this table.
- { # Column properties for TableCard.
- "header": "A String", # Required. Column heading.
- "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
- },
- ],
- "image": { # The image response message. # Optional. Image which should be displayed on the card.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "buttons": [ # Optional. List of buttons for the card.
- { # The button object that appears at the bottom of a card.
- "title": "A String", # Required. The title of the button.
- "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
- "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
- },
- },
- ],
- },
- "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
- "items": [ # Required. Carousel items.
- { # An item in the carousel.
- "description": "A String", # Optional. The body text of the card.
- "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
- # dialog.
- "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
- # item in dialog.
- "A String",
- ],
- "key": "A String", # Required. A unique key that will be sent back to the agent if this
- # response is given.
- },
- "title": "A String", # Required. Title of the carousel item.
- "image": { # The image response message. # Optional. The image to display.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- },
- ],
- },
- "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
- # or website associated with this agent.
- "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
- "uri": "A String", # Required. The URI of the app or site to open when the user taps the
- # suggestion chip.
- },
- "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
- # https://developers.google.com/actions/assistant/responses#browsing_carousel
- "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
- # items, maximum of ten.
- { # Browsing carousel tile
- "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
- "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
- # the URL. Defaults to opening via web browser.
- "url": "A String", # Required. URL
- },
- "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
- # Card. Maximum of one line of text.
- "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
- "image": { # The image response message. # Optional. Hero image for the carousel item.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
- # text.
- },
- ],
- "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
- # items.
- },
- "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
- # This message in `QueryResult.fulfillment_messages` and
- # `WebhookResponse.fulfillment_messages` should contain only one
- # `SimpleResponse`.
- "simpleResponses": [ # Required. The list of simple responses.
- { # The simple response message containing speech or text.
- "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
- # speech output. Mutually exclusive with ssml.
- "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
- # response to the user in the SSML format. Mutually exclusive with
- # text_to_speech.
- "displayText": "A String", # Optional. The text to display.
- },
- ],
- },
- },
- ],
- "webhookPayload": { # If the query was fulfilled by a webhook call, this field is set to the
- # value of the `payload` field returned in the webhook response.
- "a_key": "", # Properties of the object.
- },
- "action": "A String", # The action name from the matched intent.
- "webhookSource": "A String", # If the query was fulfilled by a webhook call, this field is set to the
- # value of the `source` field returned in the webhook response.
- "fulfillmentText": "A String", # The text to be pronounced to the user or shown on the screen.
- # Note: This is a legacy field, `fulfillment_messages` should be preferred.
- "parameters": { # The collection of extracted parameters.
- #
- # Depending on your protocol or client library language, this is a
- # map, associative array, symbol table, dictionary, or JSON object
- # composed of a collection of (MapKey, MapValue) pairs:
- #
- # - MapKey type: string
- # - MapKey value: parameter name
- # - MapValue type:
- # - If parameter's entity type is a composite entity: map
- # - Else: string or number, depending on parameter value type
- # - MapValue value:
- # - If parameter's entity type is a composite entity:
- # map from composite entity property names to property values
- # - Else: parameter value
- "a_key": "", # Properties of the object.
- },
- "sentimentAnalysisResult": { # The result of sentiment analysis as configured by # The sentiment analysis result, which depends on the
- # `sentiment_analysis_request_config` specified in the request.
- # `sentiment_analysis_request_config`.
- "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit # The sentiment analysis result for `query_text`.
- # of analysis, such as the query text.
- "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive
- # sentiment).
- "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute
- # magnitude of sentiment, regardless of score (positive or negative).
- },
- },
- "intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
- # (completely uncertain) to 1.0 (completely certain).
- # This value is for informational purpose only and is only used to
- # help match the best intent within the classification threshold.
- # This value may change for the same end-user expression at any time due to a
- # model retraining or change in implementation.
- # If there are `multiple knowledge_answers` messages, this value is set to
- # the greatest `knowledgeAnswers.match_confidence` value in the list.
- "allRequiredParamsPresent": True or False, # This field is set to:
- #
- # - `false` if the matched intent has required parameters and not all of
- # the required parameter values have been collected.
- # - `true` if all required parameter values have been collected, or if the
- # matched intent doesn't contain any required parameters.
- "queryText": "A String", # The original conversational query text:
- #
- # - If natural language text was provided as input, `query_text` contains
- # a copy of the input.
- # - If natural language speech audio was provided as input, `query_text`
- # contains the speech recognition result. If speech recognizer produced
- # multiple alternatives, a particular one is picked.
- # - If automatic spell correction is enabled, `query_text` will contain the
- # corrected user input.
- "speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
- # indicates an estimated greater likelihood that the recognized words are
- # correct. The default of 0.0 is a sentinel value indicating that confidence
- # was not set.
- #
- # This field is not guaranteed to be accurate or set. In particular this
- # field isn't set for StreamingDetectIntent since the streaming endpoint has
- # separate confidence estimates per portion of the audio in
- # StreamingRecognitionResult.
- "diagnosticInfo": { # Free-form diagnostic information for the associated detect intent request.
- # The fields of this data can change without notice, so you should not write
- # code that depends on its structure.
- # The data may contain:
- #
- # - webhook call latency
- # - webhook errors
- "a_key": "", # Properties of the object.
- },
- "intent": { # Represents an intent. # The intent that matched the conversational query. Some, not
+ "intent": { # An intent categorizes an end-user's intention for one conversation turn. For # The intent that matched the conversational query. Some, not
# all fields are filled in this message, including but not limited to:
# `name`, `display_name`, `end_interaction` and `is_fallback`.
- # Intents convert a number of user expressions or patterns into an action. An
- # action is an extraction of a user command or sentence semantics.
+ # each agent, you define many intents, where your combined intents can handle a
+ # complete conversation. When an end-user writes or says something, referred to
+ # as an end-user expression or end-user input, Dialogflow matches the end-user
+ # input to the best intent in your agent. Matching an intent is also known as
+ # intent classification.
+ #
+ # For more information, see the [intent
+ # guide](https://cloud.google.com/dialogflow/docs/intents-overview).
+ "name": "A String", # Optional. The unique identifier of this intent.
+ # Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
+ # methods.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ "webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
+ "isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
+ "displayName": "A String", # Required. The name of this intent.
+ "messages": [ # Optional. The collection of rich messages corresponding to the
+ # `Response` field in the Dialogflow console.
+ { # A rich response message.
+ # Corresponds to the intent `Response` field in the Dialogflow console.
+ # For more information, see
+ # [Rich response
+ # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
+ "card": { # The card response message. # The card response.
+ "title": "A String", # Optional. The title of the card.
+ "subtitle": "A String", # Optional. The subtitle of the card.
+ "buttons": [ # Optional. The collection of card buttons.
+ { # Contains information about a button.
+ "text": "A String", # Optional. The text to show on the button.
+ "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
+ # open.
+ },
+ ],
+ "imageUri": "A String", # Optional. The public URI to an image file for the card.
+ },
+ "text": { # The text response message. # The text response.
+ "text": [ # Optional. The collection of the agent's responses.
+ "A String",
+ ],
+ },
+ "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
+ "items": [ # Required. Carousel items.
+ { # An item in the carousel.
+ "description": "A String", # Optional. The body text of the card.
+ "title": "A String", # Required. Title of the carousel item.
+ "image": { # The image response message. # Optional. The image to display.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
+ # dialog.
+ "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
+ # item in dialog.
+ "A String",
+ ],
+ "key": "A String", # Required. A unique key that will be sent back to the agent if this
+ # response is given.
+ },
+ },
+ ],
+ },
+ "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
+ # This message in `QueryResult.fulfillment_messages` and
+ # `WebhookResponse.fulfillment_messages` should contain only one
+ # `SimpleResponse`.
+ "simpleResponses": [ # Required. The list of simple responses.
+ { # The simple response message containing speech or text.
+ "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
+ # speech output. Mutually exclusive with ssml.
+ "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
+ # response to the user in the SSML format. Mutually exclusive with
+ # text_to_speech.
+ "displayText": "A String", # Optional. The text to display.
+ },
+ ],
+ },
+ "platform": "A String", # Optional. The platform that this message is intended for.
+ "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
+ # https://developers.google.com/actions/assistant/responses#browsing_carousel
+ "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
+ # items, maximum of ten.
+ { # Browsing carousel tile
+ "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
+ # Card. Maximum of one line of text.
+ "image": { # The image response message. # Optional. Hero image for the carousel item.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
+ # text.
+ "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
+ "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
+ "url": "A String", # Required. URL
+ "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
+ # the URL. Defaults to opening via web browser.
+ },
+ },
+ ],
+ "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
+ # items.
+ },
+ "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
+ # or website associated with this agent.
+ "uri": "A String", # Required. The URI of the app or site to open when the user taps the
+ # suggestion chip.
+ "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
+ },
+ "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
+ "buttons": [ # Optional. The collection of card buttons.
+ { # The button object that appears at the bottom of a card.
+ "title": "A String", # Required. The title of the button.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ },
+ ],
+ "subtitle": "A String", # Optional. The subtitle of the card.
+ "formattedText": "A String", # Required, unless image is present. The body text of the card.
+ "title": "A String", # Optional. The title of the card.
+ "image": { # The image response message. # Optional. The image for the card.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ },
+ "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
+ "suggestions": [ # Required. The list of suggested replies.
+ { # The suggestion chip message that the user can tap to quickly post a reply
+ # to the conversation.
+              "title": "A String", # Required. The text shown in the suggestion chip.
+ },
+ ],
+ },
+ "quickReplies": { # The quick replies response message. # The quick replies response.
+ "quickReplies": [ # Optional. The collection of quick replies.
+ "A String",
+ ],
+ "title": "A String", # Optional. The title of the collection of quick replies.
+ },
+ "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
+ "title": "A String", # Required. Title of the card.
+ "columnProperties": [ # Optional. Display properties for the columns in this table.
+ { # Column properties for TableCard.
+ "header": "A String", # Required. Column heading.
+ "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle to the title.
+ "image": { # The image response message. # Optional. Image which should be displayed on the card.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "rows": [ # Optional. Rows in this table of data.
+ { # Row of TableCard.
+ "cells": [ # Optional. List of cells that make up this row.
+ { # Cell of TableCardRow.
+ "text": "A String", # Required. Text in this cell.
+ },
+ ],
+ "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
+ },
+ ],
+ "buttons": [ # Optional. List of buttons for the card.
+ { # The button object that appears at the bottom of a card.
+ "title": "A String", # Required. The title of the button.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ },
+ ],
+ },
+ "image": { # The image response message. # The image response.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
+ "mediaObjects": [ # Required. List of media objects.
+ { # Response media object for media content card.
+ "largeImage": { # The image response message. # Optional. Image to display above media content.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+              "contentUrl": "A String", # Required. URL where the media is stored.
+ "icon": { # The image response message. # Optional. Icon to display above media content.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "name": "A String", # Required. Name of media card.
+ "description": "A String", # Optional. Description of media card.
+ },
+ ],
+ "mediaType": "A String", # Optional. What type of media is the content (ie "audio").
+ },
+ "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
+ "title": "A String", # Optional. The overall title of the list.
+ "items": [ # Required. List items.
+ { # An item in the list.
+ "image": { # The image response message. # Optional. The image to display.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
+ # dialog.
+ "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
+ # item in dialog.
+ "A String",
+ ],
+ "key": "A String", # Required. A unique key that will be sent back to the agent if this
+ # response is given.
+ },
+ "title": "A String", # Required. The title of the list item.
+ "description": "A String", # Optional. The main text describing the item.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle of the list.
+ },
+ "payload": { # A custom platform-specific response.
+ "a_key": "", # Properties of the object.
+ },
+ },
+ ],
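The `messages` list above mirrors the `Response` field in the Dialogflow console: each entry is a plain dict keyed by exactly one message type (`text`, `simpleResponses`, `card`, and so on) plus an optional `platform`. A minimal sketch of assembling such a list with the field names from the schema above — the helper name and reply text are illustrative, not part of the API:

```python
def build_messages(reply_text):
    """Return a `messages` list with a text response and an
    Actions on Google simple response (illustrative helper)."""
    return [
        {
            # Generic text response shown on all platforms.
            "text": {"text": [reply_text]},
        },
        {
            # Per the schema comment above, this message should contain
            # only one SimpleResponse when used in fulfillment_messages.
            "simpleResponses": {
                "simpleResponses": [
                    {
                        # textToSpeech and ssml are mutually exclusive;
                        # exactly one of them must be provided.
                        "textToSpeech": reply_text,
                        "displayText": reply_text,
                    }
                ]
            },
            "platform": "ACTIONS_ON_GOOGLE",
        },
    ]

messages = build_messages("Here is your order status.")
```

The same dicts can be placed in an intent's `messages` field or returned from a webhook as `fulfillment_messages`; the schema is identical in both places.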
"events": [ # Optional. The collection of event names that trigger the intent.
# If the collection of input contexts is not empty, all of the contexts must
# be present in the active user session for an event to trigger this intent.
# Event names are limited to 150 characters.
"A String",
],
- "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
- # chain of followup intents. You can set this field when creating an intent,
- # for example with CreateIntent or
- # BatchUpdateIntents, in order to make this
- # intent a followup intent.
- #
- # It identifies the parent followup intent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- "priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
- # priorities.
- #
- # - If the supplied value is unspecified or 0, the service
- # translates the value to 500,000, which corresponds to the
- # `Normal` priority in the console.
- # - If the supplied value is negative, the intent is ignored
- # in runtime detect intent requests.
"outputContexts": [ # Optional. The collection of contexts that are activated when the intent
# is matched. Context messages in this collection should not set the
# parameters field. Setting the `lifespan_count` to 0 will reset the context
# when the intent is matched.
# Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
- { # Represents a context.
- "lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. The default is `0`. If set to `0`, the context expires
- # immediately. Contexts expire automatically after 20 minutes if there
- # are no matching queries.
- "name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
- # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
- # ID>/sessions/<Session ID>/contexts/<Context ID>`.
- #
- # The `Context ID` is always converted to lowercase, may only contain
- # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
- #
- # If `Environment ID` is not specified, we assume default 'draft'
- # environment. If `User ID` is not specified, we assume default '-' user.
- #
- # The following context names are reserved for internal use by Dialogflow.
- # You should not use these contexts or create contexts with these names:
- #
- # * `__system_counters__`
- # * `*_id_dialog_context`
- # * `*_dialog_params_size`
+ { # Dialogflow contexts are similar to natural language context. If a person says
+ # to you "they are orange", you need context in order to understand what "they"
+ # is referring to. Similarly, for Dialogflow to handle an end-user expression
+ # like that, it needs to be provided with context in order to correctly match
+ # an intent.
+ #
+ # Using contexts, you can control the flow of a conversation. You can configure
+ # contexts for an intent by setting input and output contexts, which are
+ # identified by string names. When an intent is matched, any configured output
+ # contexts for that intent become active. While any contexts are active,
+ # Dialogflow is more likely to match intents that are configured with input
+ # contexts that correspond to the currently active contexts.
+ #
+ # For more information about context, see the
+ # [Contexts guide](https://cloud.google.com/dialogflow/docs/contexts-overview).
"parameters": { # Optional. The collection of parameters associated with this context.
#
# Depending on your protocol or client library language, this is a
@@ -868,256 +812,53 @@
# - Else: parameter value
"a_key": "", # Properties of the object.
},
+ "name": "A String", # Required. The unique identifier of the context. Format:
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
+ #
+ # The `Context ID` is always converted to lowercase, may only contain
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "lifespanCount": 42, # Optional. The number of conversational query requests after which the
+ # context expires. The default is `0`. If set to `0`, the context expires
+ # immediately. Contexts expire automatically after 20 minutes if there
+ # are no matching queries.
},
],
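Per the context schema above, an output context becomes active when its intent matches and then expires after `lifespanCount` conversational queries (or after 20 minutes with no matching queries). A sketch of constructing such a context dict for the default `draft` environment and `-` user — project, session, and context IDs here are placeholders, and the lowercasing mirrors the conversion the service performs on `Context ID`:

```python
def make_context(project_id, session_id, context_id, lifespan_count=5):
    """Build an output context dict (illustrative helper).

    The service lowercases `Context ID`; it may only contain characters
    in a-zA-Z0-9_-% and be at most 250 bytes long, per the schema above.
    """
    name = (
        f"projects/{project_id}/agent/sessions/{session_id}"
        f"/contexts/{context_id.lower()}"
    )
    return {
        "name": name,
        # 0 would make the context expire immediately.
        "lifespanCount": lifespan_count,
        "parameters": {},
    }

ctx = make_context("my-project", "session-123", "OrderFlow")
```

Note that contexts listed in an intent's `outputContexts` should leave `parameters` unset; the populated form shown here applies when sending contexts in a detect intent request.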
- "defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
- # copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
- "A String",
- ],
"action": "A String", # Optional. The name of the action associated with the intent.
# Note: The action name must not contain whitespaces.
- "name": "A String", # Optional. The unique identifier of this intent.
- # Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
- # methods.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- "messages": [ # Optional. The collection of rich messages corresponding to the
- # `Response` field in the Dialogflow console.
- { # A rich response message.
- # Corresponds to the intent `Response` field in the Dialogflow console.
- # For more information, see
- # [Rich response
- # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
- "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
- "mediaType": "A String", # Optional. What type of media is the content (ie "audio").
- "mediaObjects": [ # Required. List of media objects.
- { # Response media object for media content card.
- "name": "A String", # Required. Name of media card.
- "description": "A String", # Optional. Description of media card.
- "contentUrl": "A String", # Required. Url where the media is stored.
- "icon": { # The image response message. # Optional. Icon to display above media content.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "largeImage": { # The image response message. # Optional. Image to display above media content.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- },
- ],
- },
- "image": { # The image response message. # The image response.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "payload": { # A custom platform-specific response.
- "a_key": "", # Properties of the object.
- },
- "text": { # The text response message. # The text response.
- "text": [ # Optional. The collection of the agent's responses.
- "A String",
- ],
- },
- "platform": "A String", # Optional. The platform that this message is intended for.
- "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
- "suggestions": [ # Required. The list of suggested replies.
- { # The suggestion chip message that the user can tap to quickly post a reply
- # to the conversation.
- "title": "A String", # Required. The text shown the in the suggestion chip.
- },
- ],
- },
- "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
- "subtitle": "A String", # Optional. Subtitle of the list.
- "items": [ # Required. List items.
- { # An item in the list.
- "title": "A String", # Required. The title of the list item.
- "image": { # The image response message. # Optional. The image to display.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "description": "A String", # Optional. The main text describing the item.
- "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
- # dialog.
- "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
- # item in dialog.
- "A String",
- ],
- "key": "A String", # Required. A unique key that will be sent back to the agent if this
- # response is given.
- },
- },
- ],
- "title": "A String", # Optional. The overall title of the list.
- },
- "quickReplies": { # The quick replies response message. # The quick replies response.
- "title": "A String", # Optional. The title of the collection of quick replies.
- "quickReplies": [ # Optional. The collection of quick replies.
- "A String",
- ],
- },
- "card": { # The card response message. # The card response.
- "imageUri": "A String", # Optional. The public URI to an image file for the card.
- "title": "A String", # Optional. The title of the card.
- "buttons": [ # Optional. The collection of card buttons.
- { # Contains information about a button.
- "text": "A String", # Optional. The text to show on the button.
- "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
- # open.
- },
- ],
- "subtitle": "A String", # Optional. The subtitle of the card.
- },
- "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
- "title": "A String", # Optional. The title of the card.
- "image": { # The image response message. # Optional. The image for the card.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "formattedText": "A String", # Required, unless image is present. The body text of the card.
- "buttons": [ # Optional. The collection of card buttons.
- { # The button object that appears at the bottom of a card.
- "title": "A String", # Required. The title of the button.
- "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
- "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
- },
- },
- ],
- "subtitle": "A String", # Optional. The subtitle of the card.
- },
- "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
- "title": "A String", # Required. Title of the card.
- "rows": [ # Optional. Rows in this table of data.
- { # Row of TableCard.
- "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
- "cells": [ # Optional. List of cells that make up this row.
- { # Cell of TableCardRow.
- "text": "A String", # Required. Text in this cell.
- },
- ],
- },
- ],
- "subtitle": "A String", # Optional. Subtitle to the title.
- "columnProperties": [ # Optional. Display properties for the columns in this table.
- { # Column properties for TableCard.
- "header": "A String", # Required. Column heading.
- "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
- },
- ],
- "image": { # The image response message. # Optional. Image which should be displayed on the card.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "buttons": [ # Optional. List of buttons for the card.
- { # The button object that appears at the bottom of a card.
- "title": "A String", # Required. The title of the button.
- "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
- "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
- },
- },
- ],
- },
- "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
- "items": [ # Required. Carousel items.
- { # An item in the carousel.
- "description": "A String", # Optional. The body text of the card.
- "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
- # dialog.
- "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
- # item in dialog.
- "A String",
- ],
- "key": "A String", # Required. A unique key that will be sent back to the agent if this
- # response is given.
- },
- "title": "A String", # Required. Title of the carousel item.
- "image": { # The image response message. # Optional. The image to display.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- },
- ],
- },
- "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
- # or website associated with this agent.
- "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
- "uri": "A String", # Required. The URI of the app or site to open when the user taps the
- # suggestion chip.
- },
- "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
- # https://developers.google.com/actions/assistant/responses#browsing_carousel
- "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
- # items, maximum of ten.
- { # Browsing carousel tile
- "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
- "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
- # the URL. Defaults to opening via web browser.
- "url": "A String", # Required. URL
- },
- "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
- # Card. Maximum of one line of text.
- "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
- "image": { # The image response message. # Optional. Hero image for the carousel item.
- "imageUri": "A String", # Optional. The public URI to an image file.
- "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
- # e.g., screen readers.
- },
- "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
- # text.
- },
- ],
- "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
- # items.
- },
- "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
- # This message in `QueryResult.fulfillment_messages` and
- # `WebhookResponse.fulfillment_messages` should contain only one
- # `SimpleResponse`.
- "simpleResponses": [ # Required. The list of simple responses.
- { # The simple response message containing speech or text.
- "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
- # speech output. Mutually exclusive with ssml.
- "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
- # response to the user in the SSML format. Mutually exclusive with
- # text_to_speech.
- "displayText": "A String", # Optional. The text to display.
- },
- ],
- },
- },
- ],
- "webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
- "inputContextNames": [ # Optional. The list of context names required for this intent to be
- # triggered.
- # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
- "A String",
- ],
- "followupIntentInfo": [ # Read-only. Information about all followup intents that have this intent as
- # a direct or indirect parent. We populate this field only in the output.
- { # Represents a single followup intent in the chain.
- "followupIntentName": "A String", # The unique identifier of the followup intent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- "parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- },
- ],
+ "priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
+ # priorities.
+ #
+ # - If the supplied value is unspecified or 0, the service
+ # translates the value to 500,000, which corresponds to the
+ # `Normal` priority in the console.
+ # - If the supplied value is negative, the intent is ignored
+ # in runtime detect intent requests.
"rootFollowupIntentName": "A String", # Read-only. The unique identifier of the root intent in the chain of
# followup intents. It identifies the correct followup intents chain for
# this intent. We populate this field only in the output.
#
# Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- "displayName": "A String", # Required. The name of this intent.
- "mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
- # Note: If `ml_disabled` setting is set to true, then this intent is not
- # taken into account during inference in `ML ONLY` match mode. Also,
- # auto-markup in the UI is turned off.
- "isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
+ "followupIntentInfo": [ # Read-only. Information about all followup intents that have this intent as
+ # a direct or indirect parent. We populate this field only in the output.
+ { # Represents a single followup intent in the chain.
+ "parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ "followupIntentName": "A String", # The unique identifier of the followup intent.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ },
+ ],
"trainingPhrases": [ # Optional. The collection of examples that the agent is
# trained on.
{ # Represents an example that the agent is trained on.
@@ -1125,7 +866,6 @@
"timesAddedCount": 42, # Optional. Indicates how many times this example was added to
# the intent. Each time a developer adds an existing sample by editing an
# intent or training, this counter is increased.
- "type": "A String", # Required. The type of the training phrase.
"parts": [ # Required. The ordered list of training phrase parts.
# The parts are concatenated in order to form the training phrase.
#
@@ -1146,9 +886,6 @@
# and the `entity_type`, `alias`, and `user_defined` fields are all
# set.
{ # Represents a part of a training phrase.
- "text": "A String", # Required. The text for this part.
- "entityType": "A String", # Optional. The entity type name prefixed with `@`.
- # This field is required for annotated parts of the training phrase.
"alias": "A String", # Optional. The parameter name for the value extracted from the
# annotated part of the example.
# This field is required for annotated parts of the training phrase.
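The annotated training-phrase parts described above can be sketched as a plain Python dict (the request bodies in this reference are ordinary dicts). The phrase text, entity type, and alias below are illustrative assumptions, not values taken from this API:

```python
# A sketch of one annotated training phrase using the field names from the
# schema above. The phrase, entity type ("@sys.geo-city"), and alias
# ("destination") are assumed for illustration.
training_phrase = {
    "type": "EXAMPLE",
    "parts": [
        # Unannotated part: only `text` is set.
        {"text": "I want to fly to "},
        # Annotated part: `entityType`, `alias`, and `userDefined` are all set.
        {
            "text": "Tokyo",
            "entityType": "@sys.geo-city",
            "alias": "destination",
            "userDefined": True,  # must be True when annotating via the API
        },
    ],
}

# The parts are concatenated in order to form the training phrase.
phrase_text = "".join(part["text"] for part in training_phrase["parts"])
print(phrase_text)  # I want to fly to Tokyo
```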
@@ -1156,12 +893,37 @@
# This field is set to true when the Dialogflow Console is used to
# manually annotate the part. When creating an annotated part with the
# API, you must set this to true.
+ "text": "A String", # Required. The text for this part.
+ "entityType": "A String", # Optional. The entity type name prefixed with `@`.
+ # This field is required for annotated parts of the training phrase.
},
],
+ "type": "A String", # Required. The type of the training phrase.
},
],
+ "inputContextNames": [ # Optional. The list of context names required for this intent to be
+ # triggered.
+ # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
+ "A String",
+ ],
+ "mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
+        # Note: If the `ml_disabled` setting is set to true, then this intent is not
+ # taken into account during inference in `ML ONLY` match mode. Also,
+ # auto-markup in the UI is turned off.
"resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
# session when this intent is matched.
+ "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
+ # chain of followup intents. You can set this field when creating an intent,
+ # for example with CreateIntent or
+ # BatchUpdateIntents, in order to make this
+ # intent a followup intent.
+ #
+ # It identifies the parent followup intent.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ "defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
+ # copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
+ "A String",
+ ],
"parameters": [ # Optional. The collection of parameters associated with the intent.
{ # Represents intent parameters.
"value": "A String", # Optional. The definition of the parameter value. It can be:
@@ -1172,22 +934,22 @@
# - a parameter value from some context defined as
# `#context_name.parameter_name`.
"displayName": "A String", # Required. The name of the parameter.
- "entityTypeDisplayName": "A String", # Optional. The name of the entity type, prefixed with `@`, that
- # describes values of the parameter. If the parameter is
- # required, this must be provided.
- "prompts": [ # Optional. The collection of prompts that the agent can present to the
- # user in order to collect a value for the parameter.
- "A String",
- ],
"mandatory": True or False, # Optional. Indicates whether the parameter is required. That is,
# whether the intent cannot be completed without collecting the parameter
# value.
+ "isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
+ "entityTypeDisplayName": "A String", # Optional. The name of the entity type, prefixed with `@`, that
+ # describes values of the parameter. If the parameter is
+ # required, this must be provided.
"defaultValue": "A String", # Optional. The default value to use when the `value` yields an empty
# result.
# Default values can be extracted from contexts by using the following
# syntax: `#context_name.parameter_name`.
"name": "A String", # The unique identifier of this parameter.
- "isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
+ "prompts": [ # Optional. The collection of prompts that the agent can present to the
+ # user in order to collect a value for the parameter.
+ "A String",
+ ],
},
],
},
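The parameter schema above can likewise be sketched as a dict, showing how `value`, `defaultValue`, and `prompts` relate. The concrete names (the display name, entity type, context name, and prompt text) are illustrative assumptions:

```python
# A sketch of one intent parameter matching the schema above. All concrete
# names here are assumed for illustration.
parameter = {
    "displayName": "destination",
    # Required here because the parameter is mandatory.
    "entityTypeDisplayName": "@sys.geo-city",
    # The extracted value, referenced as `$parameter_name`.
    "value": "$destination",
    # Fallback pulled from a context via `#context_name.parameter_name`.
    "defaultValue": "#flight-context.destination",
    "mandatory": True,
    "isList": False,
    # Prompts the agent can use to collect a missing mandatory value.
    "prompts": ["Where would you like to fly to?"],
}

# A mandatory parameter must name the entity type that describes its values.
if parameter["mandatory"]:
    assert parameter["entityTypeDisplayName"].startswith("@")
```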
@@ -1199,28 +961,21 @@
# `output_contexts.parameters` contains entries with name
# `<parameter name>.original` containing the original parameter values
# before the query.
- { # Represents a context.
- "lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. The default is `0`. If set to `0`, the context expires
- # immediately. Contexts expire automatically after 20 minutes if there
- # are no matching queries.
- "name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
- # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
- # ID>/sessions/<Session ID>/contexts/<Context ID>`.
- #
- # The `Context ID` is always converted to lowercase, may only contain
- # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
- #
- # If `Environment ID` is not specified, we assume default 'draft'
- # environment. If `User ID` is not specified, we assume default '-' user.
- #
- # The following context names are reserved for internal use by Dialogflow.
- # You should not use these contexts or create contexts with these names:
- #
- # * `__system_counters__`
- # * `*_id_dialog_context`
- # * `*_dialog_params_size`
+ { # Dialogflow contexts are similar to natural language context. If a person says
+ # to you "they are orange", you need context in order to understand what "they"
+ # is referring to. Similarly, for Dialogflow to handle an end-user expression
+ # like that, it needs to be provided with context in order to correctly match
+ # an intent.
+ #
+ # Using contexts, you can control the flow of a conversation. You can configure
+ # contexts for an intent by setting input and output contexts, which are
+ # identified by string names. When an intent is matched, any configured output
+ # contexts for that intent become active. While any contexts are active,
+ # Dialogflow is more likely to match intents that are configured with input
+ # contexts that correspond to the currently active contexts.
+ #
+ # For more information about context, see the
+ # [Contexts guide](https://cloud.google.com/dialogflow/docs/contexts-overview).
"parameters": { # Optional. The collection of parameters associated with this context.
#
# Depending on your protocol or client library language, this is a
@@ -1238,19 +993,327 @@
# - Else: parameter value
"a_key": "", # Properties of the object.
},
+ "name": "A String", # Required. The unique identifier of the context. Format:
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
+ #
+ # The `Context ID` is always converted to lowercase, may only contain
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "lifespanCount": 42, # Optional. The number of conversational query requests after which the
+ # context expires. The default is `0`. If set to `0`, the context expires
+ # immediately. Contexts expire automatically after 20 minutes if there
+ # are no matching queries.
},
],
+ "webhookPayload": { # If the query was fulfilled by a webhook call, this field is set to the
+ # value of the `payload` field returned in the webhook response.
+ "a_key": "", # Properties of the object.
+ },
+ "action": "A String", # The action name from the matched intent.
+ "fulfillmentMessages": [ # The collection of rich messages to present to the user.
+ { # A rich response message.
+ # Corresponds to the intent `Response` field in the Dialogflow console.
+ # For more information, see
+ # [Rich response
+ # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
+ "card": { # The card response message. # The card response.
+ "title": "A String", # Optional. The title of the card.
+ "subtitle": "A String", # Optional. The subtitle of the card.
+ "buttons": [ # Optional. The collection of card buttons.
+ { # Contains information about a button.
+ "text": "A String", # Optional. The text to show on the button.
+ "postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
+ # open.
+ },
+ ],
+ "imageUri": "A String", # Optional. The public URI to an image file for the card.
+ },
+ "text": { # The text response message. # The text response.
+ "text": [ # Optional. The collection of the agent's responses.
+ "A String",
+ ],
+ },
+ "carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
+ "items": [ # Required. Carousel items.
+ { # An item in the carousel.
+ "description": "A String", # Optional. The body text of the card.
+ "title": "A String", # Required. Title of the carousel item.
+ "image": { # The image response message. # Optional. The image to display.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "info": { # Additional info about the select item for when it is triggered in a # Required. Additional info about the option item.
+ # dialog.
+ "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
+ # item in dialog.
+ "A String",
+ ],
+ "key": "A String", # Required. A unique key that will be sent back to the agent if this
+ # response is given.
+ },
+ },
+ ],
+ },
+ "simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
+ # This message in `QueryResult.fulfillment_messages` and
+ # `WebhookResponse.fulfillment_messages` should contain only one
+ # `SimpleResponse`.
+ "simpleResponses": [ # Required. The list of simple responses.
+ { # The simple response message containing speech or text.
+ "textToSpeech": "A String", # One of text_to_speech or ssml must be provided. The plain text of the
+ # speech output. Mutually exclusive with ssml.
+ "ssml": "A String", # One of text_to_speech or ssml must be provided. Structured spoken
+ # response to the user in the SSML format. Mutually exclusive with
+ # text_to_speech.
+ "displayText": "A String", # Optional. The text to display.
+ },
+ ],
+ },
+ "platform": "A String", # Optional. The platform that this message is intended for.
+ "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
+ # https://developers.google.com/actions/assistant/responses#browsing_carousel
+ "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
+ # items, maximum of ten.
+ { # Browsing carousel tile
+ "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
+ # Card. Maximum of one line of text.
+ "image": { # The image response message. # Optional. Hero image for the carousel item.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
+ # text.
+ "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
+              "openUriAction": { # Actions on Google action to open a given URL. # Required. Action to present to the user.
+ "url": "A String", # Required. URL
+ "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
+ # the URL. Defaults to opening via web browser.
+ },
+ },
+ ],
+ "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
+ # items.
+ },
+ "linkOutSuggestion": { # The suggestion chip message that allows the user to jump out to the app # The link out suggestion chip for Actions on Google.
+ # or website associated with this agent.
+ "uri": "A String", # Required. The URI of the app or site to open when the user taps the
+ # suggestion chip.
+ "destinationName": "A String", # Required. The name of the app or site this chip is linking to.
+ },
+ "basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
+ "buttons": [ # Optional. The collection of card buttons.
+ { # The button object that appears at the bottom of a card.
+ "title": "A String", # Required. The title of the button.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ },
+ ],
+ "subtitle": "A String", # Optional. The subtitle of the card.
+ "formattedText": "A String", # Required, unless image is present. The body text of the card.
+ "title": "A String", # Optional. The title of the card.
+ "image": { # The image response message. # Optional. The image for the card.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ },
+ "suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
+ "suggestions": [ # Required. The list of suggested replies.
+ { # The suggestion chip message that the user can tap to quickly post a reply
+ # to the conversation.
+              "title": "A String", # Required. The text shown in the suggestion chip.
+ },
+ ],
+ },
+ "quickReplies": { # The quick replies response message. # The quick replies response.
+ "quickReplies": [ # Optional. The collection of quick replies.
+ "A String",
+ ],
+ "title": "A String", # Optional. The title of the collection of quick replies.
+ },
+ "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
+ "title": "A String", # Required. Title of the card.
+ "columnProperties": [ # Optional. Display properties for the columns in this table.
+ { # Column properties for TableCard.
+ "header": "A String", # Required. Column heading.
+ "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle to the title.
+ "image": { # The image response message. # Optional. Image which should be displayed on the card.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "rows": [ # Optional. Rows in this table of data.
+ { # Row of TableCard.
+ "cells": [ # Optional. List of cells that make up this row.
+ { # Cell of TableCardRow.
+ "text": "A String", # Required. Text in this cell.
+ },
+ ],
+ "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
+ },
+ ],
+ "buttons": [ # Optional. List of buttons for the card.
+ { # The button object that appears at the bottom of a card.
+ "title": "A String", # Required. The title of the button.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ },
+ ],
+ },
+ "image": { # The image response message. # The image response.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
+ "mediaObjects": [ # Required. List of media objects.
+ { # Response media object for media content card.
+ "largeImage": { # The image response message. # Optional. Image to display above media content.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+            "contentUrl": "A String", # Required. URL where the media is stored.
+ "icon": { # The image response message. # Optional. Icon to display above media content.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "name": "A String", # Required. Name of media card.
+ "description": "A String", # Optional. Description of media card.
+ },
+ ],
+          "mediaType": "A String", # Optional. What type of media is the content (e.g. "audio").
+ },
+ "listSelect": { # The card for presenting a list of options to select from. # The list card response for Actions on Google.
+ "title": "A String", # Optional. The overall title of the list.
+ "items": [ # Required. List items.
+ { # An item in the list.
+ "image": { # The image response message. # Optional. The image to display.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ },
+ "info": { # Additional info about the select item for when it is triggered in a # Required. Additional information about this option.
+ # dialog.
+ "synonyms": [ # Optional. A list of synonyms that can also be used to trigger this
+ # item in dialog.
+ "A String",
+ ],
+ "key": "A String", # Required. A unique key that will be sent back to the agent if this
+ # response is given.
+ },
+ "title": "A String", # Required. The title of the list item.
+ "description": "A String", # Optional. The main text describing the item.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle of the list.
+ },
+ "payload": { # A custom platform-specific response.
+ "a_key": "", # Properties of the object.
+ },
+ },
+ ],
+ "webhookSource": "A String", # If the query was fulfilled by a webhook call, this field is set to the
+ # value of the `source` field returned in the webhook response.
+ "allRequiredParamsPresent": True or False, # This field is set to:
+ #
+ # - `false` if the matched intent has required parameters and not all of
+ # the required parameter values have been collected.
+ # - `true` if all required parameter values have been collected, or if the
+ # matched intent doesn't contain any required parameters.
+ "speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
+ # indicates an estimated greater likelihood that the recognized words are
+ # correct. The default of 0.0 is a sentinel value indicating that confidence
+ # was not set.
+ #
+ # This field is not guaranteed to be accurate or set. In particular this
+ # field isn't set for StreamingDetectIntent since the streaming endpoint has
+ # separate confidence estimates per portion of the audio in
+ # StreamingRecognitionResult.
+ "fulfillmentText": "A String", # The text to be pronounced to the user or shown on the screen.
+ # Note: This is a legacy field, `fulfillment_messages` should be preferred.
+ "sentimentAnalysisResult": { # The result of sentiment analysis. Sentiment analysis inspects user input # The sentiment analysis result, which depends on the
+ # `sentiment_analysis_request_config` specified in the request.
+ # and identifies the prevailing subjective opinion, especially to determine a
+ # user's attitude as positive, negative, or neutral.
+ # For Sessions.DetectIntent, it needs to be configured in
+ # DetectIntentRequest.query_params. For
+ # Sessions.StreamingDetectIntent, it needs to be configured in
+ # StreamingDetectIntentRequest.query_params.
+ # And for Participants.AnalyzeContent and
+ # Participants.StreamingAnalyzeContent, it needs to be configured in
+ # ConversationProfile.human_agent_assistant_config.
+ "queryTextSentiment": { # The sentiment, such as positive/negative feeling or association, for a unit # The sentiment analysis result for `query_text`.
+ # of analysis, such as the query text.
+ "magnitude": 3.14, # A non-negative number in the [0, +inf) range, which represents the absolute
+ # magnitude of sentiment, regardless of score (positive or negative).
+ "score": 3.14, # Sentiment score between -1.0 (negative sentiment) and 1.0 (positive
+ # sentiment).
+ },
+ },
+ "intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
+ # (completely uncertain) to 1.0 (completely certain).
+ # This value is for informational purposes only and is only used to
+ # help match the best intent within the classification threshold.
+ # This value may change for the same end-user expression at any time due to a
+ # model retraining or change in implementation.
+ # If there are multiple `knowledge_answers` messages, this value is set to
+ # the greatest `knowledgeAnswers.match_confidence` value in the list.
+ "parameters": { # The collection of extracted parameters.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
+ "a_key": "", # Properties of the object.
+ },
+ "queryText": "A String", # The original conversational query text:
+ #
+ # - If natural language text was provided as input, `query_text` contains
+ # a copy of the input.
+ # - If natural language speech audio was provided as input, `query_text`
+ # contains the speech recognition result. If the speech recognizer produced
+ # multiple alternatives, a particular one is picked.
+ # - If automatic spell correction is enabled, `query_text` will contain the
+ # corrected user input.
+ "diagnosticInfo": { # Free-form diagnostic information for the associated detect intent request.
+ # The fields of this data can change without notice, so you should not write
+ # code that depends on its structure.
+ # The data may contain:
+ #
+ # - webhook call latency
+ # - webhook errors
+ "a_key": "", # Properties of the object.
+ },
},
- "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
- # Note: The output audio is generated based on the values of default platform
- # text responses found in the `query_result.fulfillment_messages` field. If
- # multiple default text responses exist, they will be concatenated when
- # generating audio. If no default platform text responses exist, the
- # generated audio content will be empty.
- #
- # In some scenarios, multiple output audio fields may be present in the
- # response structure. In these cases, only the top-most-level audio output
- # has content.
"webhookStatus": { # The `Status` type defines a logical error model that is suitable for # Specifies the status of the webhook request.
# different programming environments, including REST APIs and RPC APIs. It is
# used by [gRPC](https://github.com/grpc). Each `Status` message contains
@@ -1258,19 +1321,17 @@
#
# You can find out more about this error model and how to work with it in the
# [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
"details": [ # A list of messages that carry the error details. There is a common set of
# message types for APIs to use.
{
"a_key": "", # Properties of the object. Contains field @type with type URL.
},
],
- "code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "message": "A String", # A developer-facing error message, which should be in English. Any
- # user-facing error message should be localized and sent in the
- # google.rpc.Status.details field, or localized by the client.
},
- "responseId": "A String", # The unique identifier of the response. It can be used to
- # locate a response in the training example set or for reporting issues.
}</pre>
</div>
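The response fields documented above (`queryResult.queryText`, `fulfillmentText`, `intentDetectionConfidence`, `allRequiredParamsPresent`, and `webhookStatus`) arrive as a plain decoded-JSON dict from the client library. The sketch below shows one way to pull out the commonly used fields; the sample payload and the `summarize_query_result` helper are made up for illustration, not part of the generated API surface.

```python
# Illustrative sketch: reading the documented DetectIntent response fields
# from the dict that the google-api-python-client returns after .execute().
# The sample payload below is invented for demonstration purposes.

def summarize_query_result(response):
    """Extract commonly used fields from a DetectIntent response dict."""
    result = response.get("queryResult", {})
    status = response.get("webhookStatus", {})
    return {
        "text": result.get("queryText", ""),
        "fulfillment": result.get("fulfillmentText", ""),
        "confidence": result.get("intentDetectionConfidence", 0.0),
        # False means required parameters are still being collected.
        "complete": result.get("allRequiredParamsPresent", False),
        # webhookStatus.code is an enum value of google.rpc.Code; 0 means OK.
        "webhook_ok": status.get("code", 0) == 0,
    }

sample = {
    "queryResult": {
        "queryText": "book a table for two",
        "fulfillmentText": "For what time?",
        "intentDetectionConfidence": 0.87,
        "allRequiredParamsPresent": False,
        "parameters": {"party-size": 2},
    },
    "webhookStatus": {"code": 0, "message": ""},
}

summary = summarize_query_result(sample)
print(summary)
```

Because `allRequiredParamsPresent` is `False` in the sample, a caller would keep prompting the user for the missing required parameters before acting on the intent.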