chore: regens API reference docs (#889)
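Notable surface changes in this regen: `detectIntent` now declares `body=None` (the request body is no longer marked required in the generated signature), and session names gain an optional `environments/<Environment ID>/users/<User ID>` form with documented defaults (`draft` and `-`). A minimal sketch of building such a request against the regenerated surface — the project/session IDs below are illustrative placeholders, not values from this change:

```python
# Sketch of request construction per the regenerated docs in this diff.
# IDs ("my-project", "abc123") are hypothetical placeholders.

def session_path(project, session, environment=None, user=None):
    """Build a session name. Per the regenerated docs, an unspecified
    environment defaults to 'draft' and an unspecified user to '-' when
    the environments/users form is used."""
    if environment is None and user is None:
        # Short form: the server applies the same defaults.
        return f"projects/{project}/agent/sessions/{session}"
    environment = environment or "draft"
    user = user or "-"
    return (
        f"projects/{project}/agent/environments/{environment}"
        f"/users/{user}/sessions/{session}"
    )

def text_query_body(text, language_code="en-US"):
    """Minimal DetectIntentRequest body: a text query.
    The docs cap query text at 256 characters."""
    if len(text) > 256:
        raise ValueError("query text must not exceed 256 characters")
    return {
        "queryInput": {
            "text": {"text": text, "languageCode": language_code},
        }
    }

session = session_path("my-project", "abc123", environment="prod")
body = text_query_body("book a table for two")
print(session)
# → projects/my-project/agent/environments/prod/users/-/sessions/abc123
```

With the discovery-based client this body would be passed as `service.projects().agent().sessions().detectIntent(session=session, body=body).execute()`.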
diff --git a/docs/dyn/dialogflow_v2.projects.agent.sessions.html b/docs/dyn/dialogflow_v2.projects.agent.sessions.html
index 9795a18..aae05ae 100644
--- a/docs/dyn/dialogflow_v2.projects.agent.sessions.html
+++ b/docs/dyn/dialogflow_v2.projects.agent.sessions.html
@@ -88,7 +88,7 @@
<code><a href="#deleteContexts">deleteContexts(parent, x__xgafv=None)</a></code></p>
<p class="firstline">Deletes all active contexts in the specified session.</p>
<p class="toc_element">
- <code><a href="#detectIntent">detectIntent(session, body, x__xgafv=None)</a></code></p>
+ <code><a href="#detectIntent">detectIntent(session, body=None, x__xgafv=None)</a></code></p>
<p class="firstline">Processes a natural language query and returns structured, actionable data</p>
<h3>Method Details</h3>
<div class="method">
@@ -97,7 +97,11 @@
Args:
parent: string, Required. The name of the session to delete all contexts from. Format:
-`projects/<Project ID>/agent/sessions/<Session ID>`. (required)
+`projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project
+ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session
+ID>`.
+If `Environment ID` is not specified, we assume default 'draft' environment.
+If `User ID` is not specified, we assume default '-' user. (required)
x__xgafv: string, V1 error format.
Allowed values
1 - v1 error format
@@ -119,7 +123,7 @@
</div>
<div class="method">
- <code class="details" id="detectIntent">detectIntent(session, body, x__xgafv=None)</code>
+ <code class="details" id="detectIntent">detectIntent(session, body=None, x__xgafv=None)</code>
<pre>Processes a natural language query and returns structured, actionable data
as a result. This method is not idempotent, because it may cause contexts
and session entity types to be updated, which in turn might affect
@@ -127,24 +131,30 @@
Args:
session: string, Required. The name of the session this query is sent to. Format:
-`projects/<Project ID>/agent/sessions/<Session ID>`. It's up to the API
-caller to choose an appropriate session ID. It can be a random number or
-some type of user identifier (preferably hashed). The length of the session
-ID must not exceed 36 bytes. (required)
- body: object, The request body. (required)
+`projects/<Project ID>/agent/sessions/<Session ID>`, or
+`projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume
+default 'draft' environment. If `User ID` is not specified, we assume
+default '-' user. It's up to the API caller to choose an appropriate
+`Session ID` and `User ID`. They can be a random number or some type of
+user and session identifiers (preferably hashed). The length of the
+`Session ID` and `User ID` must not exceed 36 characters. (required)
+ body: object, The request body.
The object takes the form of:
{ # The request to detect user's intent.
- "outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # Optional. Instructs the speech synthesizer how to generate the output
+ "outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # Instructs the speech synthesizer how to generate the output
# audio. If this field is not set and agent-level speech synthesizer is not
# configured, no output audio is generated.
- "sampleRateHertz": 42, # Optional. The synthesis sample rate (in hertz) for this audio. If not
+ # If this audio config is supplied in a request, it overrides all existing
+ # text-to-speech settings applied to the agent.
+ "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
# provided, then the synthesizer will use the default sample rate based on
# the audio encoding. If this is different from the voice's natural sample
# rate, then the synthesizer will honor this request by converting to the
# desired sample rate (which might result in worse audio quality).
"audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
- "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Optional. Configuration of how speech should be synthesized.
+ "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
"effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
# applied on (post synthesized) text to speech. Effects are applied on top of
# each other in the order they are given.
@@ -157,12 +167,13 @@
# voice of the appropriate gender is not available, the synthesizer should
# substitute a voice with a different gender rather than failing the request.
"name": "A String", # Optional. The name of the voice. If not set, the service will choose a
- # voice based on the other parameters such as language_code and gender.
+ # voice based on the other parameters such as language_code and
+ # ssml_gender.
},
"speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
# native speed supported by the specific voice. 2.0 is twice as fast, and
# 0.5 is half as fast. If unset(0.0), defaults to the native 1.0 speed. Any
- # other values < 0.25 or > 4.0 will return an error.
+ # other values < 0.25 or > 4.0 will return an error.
"volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
# specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
# 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
@@ -176,9 +187,15 @@
# original pitch.
},
},
- "inputAudio": "A String", # Optional. The natural language speech audio to be processed. This field
+ "inputAudio": "A String", # The natural language speech audio to be processed. This field
# should be populated iff `query_input` is set to an input audio config.
# A single request can contain up to 1 minute of speech audio data.
+ "outputAudioConfigMask": "A String", # Mask for output_audio_config indicating which settings in this
+ # request-level config should override speech synthesizer settings defined at
+ # agent-level.
+ #
+ # If unspecified or empty, output_audio_config replaces the agent-level
+ # config in its entirety.
"queryInput": { # Represents the query input. It can contain either: # Required. The input specification. It can be set to:
#
# 1. an audio config
@@ -198,99 +215,199 @@
"text": "A String", # Required. The UTF-8 encoded natural language text to be processed.
# Text length must not exceed 256 characters.
"languageCode": "A String", # Required. The language of this conversational query. See [Language
- # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
# for a list of the currently supported language codes. Note that queries in
# the same session do not necessarily need to specify the same language.
},
"event": { # Events allow for matching intents by event name instead of the natural # The event to be processed.
- # language input. For instance, input `<event: { name: "welcome_event",
- # parameters: { name: "Sam" } }>` can trigger a personalized welcome response.
+ # language input. For instance, input `<event: { name: "welcome_event",
+ # parameters: { name: "Sam" } }>` can trigger a personalized welcome response.
# The parameter `name` may be used by the agent in the response:
# `"Hello #welcome_event.name! What can I do for you today?"`.
"languageCode": "A String", # Required. The language of this query. See [Language
- # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
# for a list of the currently supported language codes. Note that queries in
# the same session do not necessarily need to specify the same language.
"name": "A String", # Required. The unique identifier of the event.
- "parameters": { # Optional. The collection of parameters associated with the event.
+ "parameters": { # The collection of parameters associated with the event.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
"a_key": "", # Properties of the object.
},
},
"audioConfig": { # Instructs the speech recognizer how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
- "phraseHints": [ # Optional. A list of strings containing words and phrases that the speech
+ "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
+ # translations. See [Language
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
+ # for a list of the currently supported language codes. Note that queries in
+ # the same session do not necessarily need to specify the same language.
+ "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+ "phraseHints": [ # A list of strings containing words and phrases that the speech
# recognizer should recognize with higher likelihood.
#
# See [the Cloud Speech
# documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
# for more details.
+ #
+ # This field is deprecated. Please use [speech_contexts]() instead. If you
+ # specify both [phrase_hints]() and [speech_contexts](), Dialogflow will
+ # treat the [phrase_hints]() as a single additional [SpeechContext]().
"A String",
],
- "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do
- # translations. See [Language
- # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
- # for a list of the currently supported language codes. Note that queries in
- # the same session do not necessarily need to specify the same language.
- "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+ "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in
+ # StreamingRecognitionResult with information about the recognized speech
+ # words, e.g. start and end time offsets. If false or unspecified, Speech
+ # doesn't return any word-level information.
"sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query.
# Refer to
# [Cloud Speech API
# documentation](https://cloud.google.com/speech-to-text/docs/basics) for
# more details.
- "modelVariant": "A String", # Optional. Which variant of the Speech model to use.
+ "modelVariant": "A String", # Which variant of the Speech model to use.
+ "model": "A String", # Which Speech model to select for the given request. Select the
+ # model best suited to your domain to get best results. If a model is not
+ # explicitly specified, then we auto-select a model based on the parameters
+ # in the InputAudioConfig.
+ # If enhanced speech model is enabled for the agent and an enhanced
+ # version of the specified model for the language does not exist, then the
+ # speech is recognized using the standard version of the specified model.
+ # Refer to
+ # [Cloud Speech API
+ # documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model)
+ # for more details.
+ "speechContexts": [ # Context information to assist speech recognition.
+ #
+ # See [the Cloud Speech
+ # documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints)
+ # for more details.
+ { # Hints for the speech recognizer to help with recognition in a specific
+ # conversation state.
+ "phrases": [ # Optional. A list of strings containing words and phrases that the speech
+ # recognizer should recognize with higher likelihood.
+ #
+ # This list can be used to:
+ # * improve accuracy for words and phrases you expect the user to say,
+ # e.g. typical commands for your Dialogflow agent
+ # * add additional words to the speech recognizer vocabulary
+ # * ...
+ #
+ # See the [Cloud Speech
+ # documentation](https://cloud.google.com/speech-to-text/quotas) for usage
+ # limits.
+ "A String",
+ ],
+ "boost": 3.14, # Optional. Boost for this context compared to other contexts:
+ # * If the boost is positive, Dialogflow will increase the probability that
+ # the phrases in this context are recognized over similar sounding phrases.
+ # * If the boost is unspecified or non-positive, Dialogflow will not apply
+ # any boost.
+ #
+ # Dialogflow recommends that you use boosts in the range (0, 20] and that you
+ # find a value that fits your use case with binary search.
+ },
+ ],
+ "singleUtterance": True or False, # If `false` (default), recognition does not cease until the
+ # client closes the stream.
+ # If `true`, the recognizer will detect a single spoken utterance in input
+ # audio. Recognition ceases when it detects the audio's voice has
+ # stopped or paused. In this case, once a detected intent is received, the
+ # client should close the stream and start a new request with a new stream as
+ # needed.
+ # Note: This setting is relevant only for streaming methods.
+ # Note: When specified, InputAudioConfig.single_utterance takes precedence
+ # over StreamingDetectIntentRequest.single_utterance.
},
},
- "queryParams": { # Represents the parameters of the conversational query. # Optional. The parameters of this query.
- "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # Optional. The geo location of this conversational query.
+ "queryParams": { # Represents the parameters of the conversational query. # The parameters of this query.
+ "geoLocation": { # An object representing a latitude/longitude pair. This is expressed as a pair # The geo location of this conversational query.
# of doubles representing degrees latitude and degrees longitude. Unless
# specified otherwise, this must conform to the
- # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
- # standard</a>. Values must be within normalized ranges.
+ # <a href="http://www.unoosa.org/pdf/icg/2012/template/WGS_84.pdf">WGS84
+ # standard</a>. Values must be within normalized ranges.
"latitude": 3.14, # The latitude in degrees. It must be in the range [-90.0, +90.0].
"longitude": 3.14, # The longitude in degrees. It must be in the range [-180.0, +180.0].
},
- "contexts": [ # Optional. The collection of contexts to be activated before this query is
+ "contexts": [ # The collection of contexts to be activated before this query is
# executed.
{ # Represents a context.
- "parameters": { # Optional. The collection of parameters associated with this context.
- # Refer to [this
- # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
- # for syntax.
- "a_key": "", # Properties of the object.
- },
"name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`.
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
#
# The `Context ID` is always converted to lowercase, may only contain
- # characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long.
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "parameters": { # Optional. The collection of parameters associated with this context.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
+ "a_key": "", # Properties of the object.
+ },
"lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. If set to `0` (the default) the context expires
+ # context expires. The default is `0`. If set to `0`, the context expires
# immediately. Contexts expire automatically after 20 minutes if there
# are no matching queries.
},
],
- "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Optional. Configures the type of sentiment analysis to perform. If not
+ "sentimentAnalysisRequestConfig": { # Configures the types of sentiment analysis to perform. # Configures the type of sentiment analysis to perform. If not
# provided, sentiment analysis is not performed.
- "analyzeQueryTextSentiment": True or False, # Optional. Instructs the service to perform sentiment analysis on
+ "analyzeQueryTextSentiment": True or False, # Instructs the service to perform sentiment analysis on
# `query_text`. If not provided, sentiment analysis is not performed on
# `query_text`.
},
- "resetContexts": True or False, # Optional. Specifies whether to delete all contexts in the current session
+ "resetContexts": True or False, # Specifies whether to delete all contexts in the current session
# before the new ones are activated.
- "timeZone": "A String", # Optional. The time zone of this conversational query from the
+ "timeZone": "A String", # The time zone of this conversational query from the
# [time zone database](https://www.iana.org/time-zones), e.g.,
# America/New_York, Europe/Paris. If not provided, the time zone specified in
# agent settings is used.
- "payload": { # Optional. This field can be used to pass custom data into the webhook
- # associated with the agent. Arbitrary JSON objects are supported.
+ "payload": { # This field can be used to pass custom data to your webhook.
+ # Arbitrary JSON objects are supported.
+ # If supplied, the value is used to populate the
+ # `WebhookRequest.original_detect_intent_request.payload`
+ # field sent to your webhook.
"a_key": "", # Properties of the object.
},
- "sessionEntityTypes": [ # Optional. Additional session entity types to replace or extend developer
+ "sessionEntityTypes": [ # Additional session entity types to replace or extend developer
# entity types with. The entity synonyms apply to all languages and persist
# for the session of this query.
{ # Represents a session entity type.
#
- # Extends or replaces a developer entity type at the user session level (we
- # refer to the entity types defined at the agent level as "developer entity
+ # Extends or replaces a custom entity type at the user session level (we
+ # refer to the entity types defined at the agent level as "custom entity
# types").
#
# Note: session entity types apply to all queries, regardless of the language.
@@ -312,7 +429,7 @@
#
# For `KIND_MAP` entity types:
#
- # * A canonical value to be used in place of synonyms.
+ # * A reference value to be used in place of synonyms.
#
# For `KIND_LIST` entity types:
#
@@ -321,13 +438,17 @@
},
],
"name": "A String", # Required. The unique identifier of this session entity type. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
- # Display Name>`.
+ # `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type
+ # Display Name>`, or `projects/<Project ID>/agent/environments/<Environment
+ # ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display
+ # Name>`.
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
#
- # `<Entity Type Display Name>` must be the display name of an existing entity
+ # `<Entity Type Display Name>` must be the display name of an existing entity
# type in the same agent that will be overridden or supplemented.
"entityOverrideMode": "A String", # Required. Indicates whether the additional data should override or
- # supplement the developer entity type definition.
+ # supplement the custom entity type definition.
},
],
},
@@ -343,13 +464,15 @@
{ # The message returned from the DetectIntent method.
"outputAudioConfig": { # Instructs the speech synthesizer on how to generate the output audio content. # The config used by the speech synthesizer to generate the output audio.
- "sampleRateHertz": 42, # Optional. The synthesis sample rate (in hertz) for this audio. If not
+ # If this audio config is supplied in a request, it overrides all existing
+ # text-to-speech settings applied to the agent.
+ "sampleRateHertz": 42, # The synthesis sample rate (in hertz) for this audio. If not
# provided, then the synthesizer will use the default sample rate based on
# the audio encoding. If this is different from the voice's natural sample
# rate, then the synthesizer will honor this request by converting to the
# desired sample rate (which might result in worse audio quality).
"audioEncoding": "A String", # Required. Audio encoding of the synthesized audio content.
- "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Optional. Configuration of how speech should be synthesized.
+ "synthesizeSpeechConfig": { # Configuration of how speech should be synthesized. # Configuration of how speech should be synthesized.
"effectsProfileId": [ # Optional. An identifier which selects 'audio effects' profiles that are
# applied on (post synthesized) text to speech. Effects are applied on top of
# each other in the order they are given.
@@ -362,12 +485,13 @@
# voice of the appropriate gender is not available, the synthesizer should
# substitute a voice with a different gender rather than failing the request.
"name": "A String", # Optional. The name of the voice. If not set, the service will choose a
- # voice based on the other parameters such as language_code and gender.
+ # voice based on the other parameters such as language_code and
+ # ssml_gender.
},
"speakingRate": 3.14, # Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal
# native speed supported by the specific voice. 2.0 is twice as fast, and
# 0.5 is half as fast. If unset(0.0), defaults to the native 1.0 speed. Any
- # other values < 0.25 or > 4.0 will return an error.
+ # other values < 0.25 or > 4.0 will return an error.
"volumeGainDb": 3.14, # Optional. Volume gain (in dB) of the normal native volume supported by the
# specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of
# 0.0 (dB), will play at normal native signal amplitude. A value of -6.0 (dB)
@@ -381,30 +505,6 @@
# original pitch.
},
},
- "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
- # Note: The output audio is generated based on the values of default platform
- # text responses found in the `query_result.fulfillment_messages` field. If
- # multiple default text responses exist, they will be concatenated when
- # generating audio. If no default platform text responses exist, the
- # generated audio content will be empty.
- "webhookStatus": { # The `Status` type defines a logical error model that is suitable for # Specifies the status of the webhook request.
- # different programming environments, including REST APIs and RPC APIs. It is
- # used by [gRPC](https://github.com/grpc). Each `Status` message contains
- # three pieces of data: error code, error message, and error details.
- #
- # You can find out more about this error model and how to work with it in the
- # [API Design Guide](https://cloud.google.com/apis/design/errors).
- "message": "A String", # A developer-facing error message, which should be in English. Any
- # user-facing error message should be localized and sent in the
- # google.rpc.Status.details field, or localized by the client.
- "code": 42, # The status code, which should be an enum value of google.rpc.Code.
- "details": [ # A list of messages that carry the error details. There is a common set of
- # message types for APIs to use.
- {
- "a_key": "", # Properties of the object. Contains field @type with type URL.
- },
- ],
- },
"queryResult": { # Represents the result of conversational query or event processing. # The selected results of the conversational query or event processing.
# See `alternative_query_results` for additional potential results.
"sentimentAnalysisResult": { # The result of sentiment analysis as configured by # The sentiment analysis result, which depends on the
@@ -427,11 +527,25 @@
# - `true` if all required parameter values have been collected, or if the
# matched intent doesn't contain any required parameters.
"parameters": { # The collection of extracted parameters.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
"a_key": "", # Properties of the object.
},
"languageCode": "A String", # The language that was triggered during intent detection.
# See [Language
- # Support](https://cloud.google.com/dialogflow-enterprise/docs/reference/language)
+ # Support](https://cloud.google.com/dialogflow/docs/reference/language)
# for a list of the currently supported language codes.
"speechRecognitionConfidence": 3.14, # The Speech recognition confidence between 0.0 and 1.0. A higher number
# indicates an estimated greater likelihood that the recognized words are
@@ -444,24 +558,28 @@
# StreamingRecognitionResult.
"intentDetectionConfidence": 3.14, # The intent detection confidence. Values range from 0.0
# (completely uncertain) to 1.0 (completely certain).
+ # This value is for informational purpose only and is only used to
+ # help match the best intent within the classification threshold.
+ # This value may change for the same end-user expression at any time due to a
+ # model retraining or change in implementation.
# If there are `multiple knowledge_answers` messages, this value is set to
# the greatest `knowledgeAnswers.match_confidence` value in the list.
"action": "A String", # The action name from the matched intent.
"intent": { # Represents an intent. # The intent that matched the conversational query. Some, not
# all fields are filled in this message, including but not limited to:
- # `name`, `display_name` and `webhook_state`.
+ # `name`, `display_name`, `end_interaction` and `is_fallback`.
# Intents convert a number of user expressions or patterns into an action. An
# action is an extraction of a user command or sentence semantics.
"isFallback": True or False, # Optional. Indicates whether this is a fallback intent.
"mlDisabled": True or False, # Optional. Indicates whether Machine Learning is disabled for the intent.
- # Note: If `ml_diabled` setting is set to true, then this intent is not
+ # Note: If `ml_disabled` setting is set to true, then this intent is not
# taken into account during inference in `ML ONLY` match mode. Also,
# auto-markup in the UI is turned off.
"displayName": "A String", # Required. The name of this intent.
- "name": "A String", # The unique identifier of this intent.
+ "name": "A String", # Optional. The unique identifier of this intent.
# Required for Intents.UpdateIntent and Intents.BatchUpdateIntents
# methods.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
"parameters": [ # Optional. The collection of parameters associated with the intent.
{ # Represents intent parameters.
"displayName": "A String", # Required. The name of the parameter.
@@ -480,7 +598,7 @@
# - a parameter value from some context defined as
# `#context_name.parameter_name`.
"prompts": [ # Optional. The collection of prompts that the agent can present to the
- # user in order to collect value for the parameter.
+ # user in order to collect a value for the parameter.
"A String",
],
"isList": True or False, # Optional. Indicates whether the parameter represents a list of values.
@@ -535,17 +653,27 @@
# a direct or indirect parent. We populate this field only in the output.
{ # Represents a single followup intent in the chain.
"followupIntentName": "A String", # The unique identifier of the followup intent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
"parentFollowupIntentName": "A String", # The unique identifier of the followup intent's parent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
},
],
"webhookState": "A String", # Optional. Indicates whether webhooks are enabled for the intent.
- "resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
- # session when this intent is matched.
+ "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
+ # chain of followup intents. You can set this field when creating an intent,
+ # for example with CreateIntent or
+ # BatchUpdateIntents, in order to make this
+ # intent a followup intent.
+ #
+ # It identifies the parent followup intent.
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
"messages": [ # Optional. The collection of rich messages corresponding to the
# `Response` field in the Dialogflow console.
- { # Corresponds to the `Response` field in the Dialogflow console.
+ { # A rich response message.
+ # Corresponds to the intent `Response` field in the Dialogflow console.
+ # For more information, see
+ # [Rich response
+ # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
"simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
# This message in `QueryResult.fulfillment_messages` and
# `WebhookResponse.fulfillment_messages` should contain only one
@@ -578,6 +706,26 @@
# e.g., screen readers.
"imageUri": "A String", # Optional. The public URI to an image file.
},
+ "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
+ "mediaObjects": [ # Required. List of media objects.
+ { # Response media object for media content card.
+ "contentUrl": "A String", # Required. Url where the media is stored.
+ "description": "A String", # Optional. Description of media card.
+ "name": "A String", # Required. Name of media card.
+ "largeImage": { # The image response message. # Optional. Image to display above media content.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "icon": { # The image response message. # Optional. Icon to display above media content.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ },
+ ],
+ "mediaType": "A String", # Optional. What type of media the content is (i.e., "audio").
+ },
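The `mediaContent` fields above can be assembled as a plain dict when building a fulfillment message. A minimal sketch, using the field names documented here; the URLs, titles, and the `"AUDIO"` media type value are illustrative placeholders (the reference only cites "audio" as an example type):

```python
# Hypothetical media content fulfillment message, mirroring the schema above.
media_message = {
    "mediaContent": {
        "mediaType": "AUDIO",  # assumed enum value; the docs cite "audio" as an example
        "mediaObjects": [
            {
                "name": "Episode 1",             # Required. Name of the media card.
                "description": "First episode",  # Optional. Description of the media card.
                "contentUrl": "https://example.com/audio/episode1.mp3",  # Required.
                "largeImage": {
                    "imageUri": "https://example.com/img/episode1.png",
                    "accessibilityText": "Episode 1 cover art",
                },
            },
        ],
    },
}
```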
"suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
"suggestions": [ # Required. The list of suggested replies.
{ # The suggestion chip message that the user can tap to quickly post a reply
@@ -592,6 +740,31 @@
# suggestion chip.
"destinationName": "A String", # Required. The name of the app or site this chip is linking to.
},
+ "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
+ # https://developers.google.com/actions/assistant/responses#browsing_carousel
+ "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
+ # items, maximum of ten.
+ { # A tile in the browsing carousel.
+ "image": { # The image response message. # Optional. Hero image for the carousel item.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
+ "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
+ # text.
+ "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
+ "url": "A String", # Required. The URL to open.
+ "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
+ # the URL. Defaults to opening via web browser.
+ },
+ "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
+ # Card. Maximum of one line of text.
+ },
+ ],
+ "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
+ # items.
+ },
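The Browse Carousel Card above requires between two and ten items. A hedged sketch of a helper that assembles the message and enforces that bound; the helper name and sample URLs are hypothetical, only the field names come from the schema:

```python
def make_browse_carousel(items, image_display_options=None):
    """Build a browseCarouselCard message dict, enforcing the 2-10 item bound."""
    if not 2 <= len(items) <= 10:
        raise ValueError("Browse Carousel Card requires between 2 and 10 items")
    card = {"items": items}
    if image_display_options is not None:
        card["imageDisplayOptions"] = image_display_options  # applies to every item image
    return {"browseCarouselCard": card}

def item(title, url):
    return {
        "title": title,                 # Required; maximum of two lines of text
        "openUriAction": {"url": url},  # urlTypeHint omitted: defaults to web browser
    }

msg = make_browse_carousel([
    item("Docs", "https://example.com/a"),
    item("Blog", "https://example.com/b"),
])
```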
"basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
"buttons": [ # Optional. The collection of card buttons.
{ # The button object that appears at the bottom of a card.
@@ -610,6 +783,39 @@
"formattedText": "A String", # Required, unless image is present. The body text of the card.
"title": "A String", # Optional. The title of the card.
},
+ "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
+ "rows": [ # Optional. Rows in this table of data.
+ { # Row of TableCard.
+ "cells": [ # Optional. List of cells that make up this row.
+ { # Cell of TableCardRow.
+ "text": "A String", # Required. Text in this cell.
+ },
+ ],
+ "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle shown below the title.
+ "title": "A String", # Required. Title of the card.
+ "image": { # The image response message. # Optional. Image which should be displayed on the card.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "columnProperties": [ # Optional. Display properties for the columns in this table.
+ { # Column properties for TableCard.
+ "header": "A String", # Required. Column heading.
+ "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
+ },
+ ],
+ "buttons": [ # Optional. List of buttons for the card.
+ { # The button object that appears at the bottom of a card.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ "title": "A String", # Required. The title of the button.
+ },
+ ],
+ },
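A `tableCard` message can likewise be built from headers and rows. An illustrative sketch; the consistency check (one cell per column in every row) is an assumption for sanity, not a requirement stated in the schema above:

```python
def make_table_card(title, headers, rows, subtitle=None):
    """Build a tableCard message dict from column headers and row text values."""
    card = {
        "title": title,  # Required.
        "columnProperties": [{"header": h} for h in headers],
        "rows": [
            {"cells": [{"text": c} for c in row], "dividerAfter": False}
            for row in rows
        ],
    }
    # Assumed invariant: cell count matches the number of declared columns.
    for row in card["rows"]:
        assert len(row["cells"]) == len(headers), "cell count must match columns"
    if subtitle:
        card["subtitle"] = subtitle
    return {"tableCard": card}

msg = make_table_card("Scores", ["Team", "Points"], [["Red", "3"], ["Blue", "5"]])
```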
"carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
"items": [ # Required. Carousel items.
{ # An item in the carousel.
@@ -653,16 +859,15 @@
"title": "A String", # Required. The title of the list item.
},
],
+ "subtitle": "A String", # Optional. Subtitle of the list.
"title": "A String", # Optional. The overall title of the list.
},
- "payload": { # Returns a response containing a custom, platform-specific payload.
- # See the Intent.Message.Platform type for a description of the
- # structure that may be required for your platform.
+ "payload": { # A custom platform-specific response.
"a_key": "", # Properties of the object.
},
"card": { # The card response message. # The card response.
"buttons": [ # Optional. The collection of card buttons.
- { # Optional. Contains information about a button.
+ { # Contains information about a button.
"text": "A String", # Optional. The text to show on the button.
"postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
# open.
@@ -674,65 +879,91 @@
},
},
],
- "parentFollowupIntentName": "A String", # Read-only after creation. The unique identifier of the parent intent in the
- # chain of followup intents. You can set this field when creating an intent,
- # for example with CreateIntent or BatchUpdateIntents, in order to
- # make this intent a followup intent.
- #
- # It identifies the parent followup intent.
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
"defaultResponsePlatforms": [ # Optional. The list of platforms for which the first responses will be
# copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
"A String",
],
"priority": 42, # Optional. The priority of this intent. Higher numbers represent higher
- # priorities. If this is zero or unspecified, we use the default
- # priority 500000.
+ # priorities.
#
- # Negative numbers mean that the intent is disabled.
+ # - If the supplied value is unspecified or 0, the service
+ # translates the value to 500,000, which corresponds to the
+ # `Normal` priority in the console.
+ # - If the supplied value is negative, the intent is ignored
+ # in runtime detect intent requests.
"rootFollowupIntentName": "A String", # Read-only. The unique identifier of the root intent in the chain of
# followup intents. It identifies the correct followup intents chain for
# this intent. We populate this field only in the output.
#
- # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
- "inputContextNames": [ # Optional. The list of context names required for this intent to be
- # triggered.
- # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
- "A String",
- ],
+ # Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
"action": "A String", # Optional. The name of the action associated with the intent.
# Note: The action name must not contain whitespaces.
+ "resetContexts": True or False, # Optional. Indicates whether to delete all contexts in the current
+ # session when this intent is matched.
"outputContexts": [ # Optional. The collection of contexts that are activated when the intent
# is matched. Context messages in this collection should not set the
# parameters field. Setting the `lifespan_count` to 0 will reset the context
# when the intent is matched.
- # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
+ # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
{ # Represents a context.
- "parameters": { # Optional. The collection of parameters associated with this context.
- # Refer to [this
- # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
- # for syntax.
- "a_key": "", # Properties of the object.
- },
"name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`.
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
#
# The `Context ID` is always converted to lowercase, may only contain
- # characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long.
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "parameters": { # Optional. The collection of parameters associated with this context.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
+ "a_key": "", # Properties of the object.
+ },
"lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. If set to `0` (the default) the context expires
+ # context expires. The default is `0`. If set to `0`, the context expires
# immediately. Contexts expire automatically after 20 minutes if there
# are no matching queries.
},
],
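The context name format and defaults described above (the 'draft' environment and the '-' user) can be sketched as a small helper. The function name is hypothetical; the path format and lowercase rule come from the reference:

```python
def context_name(project, session, context, environment=None, user=None):
    """Build a context resource name per the format documented above.

    With no environment or user, the plain session form is used, which the
    service treats as the default 'draft' environment and '-' user.
    """
    context = context.lower()  # The Context ID is always converted to lowercase.
    if environment is None and user is None:
        return (f"projects/{project}/agent/sessions/{session}"
                f"/contexts/{context}")
    return (f"projects/{project}/agent/environments/{environment or 'draft'}"
            f"/users/{user or '-'}/sessions/{session}/contexts/{context}")

name = context_name("my-project", "session-1", "Order-Flow")
# name == "projects/my-project/agent/sessions/session-1/contexts/order-flow"
```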
+ "inputContextNames": [ # Optional. The list of context names required for this intent to be
+ # triggered.
+ # Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
+ "A String",
+ ],
"events": [ # Optional. The collection of event names that trigger the intent.
# If the collection of input contexts is not empty, all of the contexts must
# be present in the active user session for an event to trigger this intent.
+ # Event names are limited to 150 characters.
"A String",
],
},
"fulfillmentMessages": [ # The collection of rich messages to present to the user.
- { # Corresponds to the `Response` field in the Dialogflow console.
+ { # A rich response message.
+ # Corresponds to the intent `Response` field in the Dialogflow console.
+ # For more information, see
+ # [Rich response
+ # messages](https://cloud.google.com/dialogflow/docs/intents-rich-messages).
"simpleResponses": { # The collection of simple response candidates. # The voice and text-only responses for Actions on Google.
# This message in `QueryResult.fulfillment_messages` and
# `WebhookResponse.fulfillment_messages` should contain only one
@@ -765,6 +996,26 @@
# e.g., screen readers.
"imageUri": "A String", # Optional. The public URI to an image file.
},
+ "mediaContent": { # The media content card for Actions on Google. # The media content card for Actions on Google.
+ "mediaObjects": [ # Required. List of media objects.
+ { # Response media object for media content card.
+ "contentUrl": "A String", # Required. URL where the media is stored.
+ "description": "A String", # Optional. Description of the media card.
+ "name": "A String", # Required. Name of the media card.
+ "largeImage": { # The image response message. # Optional. Image to display above media content.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "icon": { # The image response message. # Optional. Icon to display above media content.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ },
+ ],
+ "mediaType": "A String", # Optional. What type of media the content is (i.e., "audio").
+ },
"suggestions": { # The collection of suggestions. # The suggestion chips for Actions on Google.
"suggestions": [ # Required. The list of suggested replies.
{ # The suggestion chip message that the user can tap to quickly post a reply
@@ -779,6 +1030,31 @@
# suggestion chip.
"destinationName": "A String", # Required. The name of the app or site this chip is linking to.
},
+ "browseCarouselCard": { # Browse Carousel Card for Actions on Google. # Browse carousel card for Actions on Google.
+ # https://developers.google.com/actions/assistant/responses#browsing_carousel
+ "items": [ # Required. List of items in the Browse Carousel Card. Minimum of two
+ # items, maximum of ten.
+ { # A tile in the browsing carousel.
+ "image": { # The image response message. # Optional. Hero image for the carousel item.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "title": "A String", # Required. Title of the carousel item. Maximum of two lines of text.
+ "description": "A String", # Optional. Description of the carousel item. Maximum of four lines of
+ # text.
+ "openUriAction": { # Actions on Google action to open a given url. # Required. Action to present to the user.
+ "url": "A String", # Required. The URL to open.
+ "urlTypeHint": "A String", # Optional. Specifies the type of viewer that is used when opening
+ # the URL. Defaults to opening via web browser.
+ },
+ "footer": "A String", # Optional. Text that appears at the bottom of the Browse Carousel
+ # Card. Maximum of one line of text.
+ },
+ ],
+ "imageDisplayOptions": "A String", # Optional. Settings for displaying the image. Applies to every image in
+ # items.
+ },
"basicCard": { # The basic card message. Useful for displaying information. # The basic card response for Actions on Google.
"buttons": [ # Optional. The collection of card buttons.
{ # The button object that appears at the bottom of a card.
@@ -797,6 +1073,39 @@
"formattedText": "A String", # Required, unless image is present. The body text of the card.
"title": "A String", # Optional. The title of the card.
},
+ "tableCard": { # Table card for Actions on Google. # Table card for Actions on Google.
+ "rows": [ # Optional. Rows in this table of data.
+ { # Row of TableCard.
+ "cells": [ # Optional. List of cells that make up this row.
+ { # Cell of TableCardRow.
+ "text": "A String", # Required. Text in this cell.
+ },
+ ],
+ "dividerAfter": True or False, # Optional. Whether to add a visual divider after this row.
+ },
+ ],
+ "subtitle": "A String", # Optional. Subtitle shown below the title.
+ "title": "A String", # Required. Title of the card.
+ "image": { # The image response message. # Optional. Image which should be displayed on the card.
+ "accessibilityText": "A String", # Optional. A text description of the image to be used for accessibility,
+ # e.g., screen readers.
+ "imageUri": "A String", # Optional. The public URI to an image file.
+ },
+ "columnProperties": [ # Optional. Display properties for the columns in this table.
+ { # Column properties for TableCard.
+ "header": "A String", # Required. Column heading.
+ "horizontalAlignment": "A String", # Optional. Defines text alignment for all cells in this column.
+ },
+ ],
+ "buttons": [ # Optional. List of buttons for the card.
+ { # The button object that appears at the bottom of a card.
+ "openUriAction": { # Opens the given URI. # Required. Action to take when a user taps on the button.
+ "uri": "A String", # Required. The HTTP or HTTPS scheme URI.
+ },
+ "title": "A String", # Required. The title of the button.
+ },
+ ],
+ },
"carouselSelect": { # The card for presenting a carousel of options to select from. # The carousel card response for Actions on Google.
"items": [ # Required. Carousel items.
{ # An item in the carousel.
@@ -840,16 +1149,15 @@
"title": "A String", # Required. The title of the list item.
},
],
+ "subtitle": "A String", # Optional. Subtitle of the list.
"title": "A String", # Optional. The overall title of the list.
},
- "payload": { # Returns a response containing a custom, platform-specific payload.
- # See the Intent.Message.Platform type for a description of the
- # structure that may be required for your platform.
+ "payload": { # A custom platform-specific response.
"a_key": "", # Properties of the object.
},
"card": { # The card response message. # The card response.
"buttons": [ # Optional. The collection of card buttons.
- { # Optional. Contains information about a button.
+ { # Contains information about a button.
"text": "A String", # Optional. The text to show on the button.
"postback": "A String", # Optional. The text to send back to the Dialogflow API or a URI to
# open.
@@ -861,9 +1169,13 @@
},
},
],
- "diagnosticInfo": { # The free-form diagnostic info. For example, this field could contain
- # webhook call latency. The string keys of the Struct's fields map can change
- # without notice.
+ "diagnosticInfo": { # Free-form diagnostic information for the associated detect intent request.
+ # The fields of this data can change without notice, so you should not write
+ # code that depends on its structure.
+ # The data may contain:
+ #
+ # - webhook call latency
+ # - webhook errors
"a_key": "", # Properties of the object.
},
"queryText": "A String", # The original conversational query text:
@@ -879,22 +1191,45 @@
# value of the `source` field returned in the webhook response.
"outputContexts": [ # The collection of output contexts. If applicable,
# `output_contexts.parameters` contains entries with name
- # `<parameter name>.original` containing the original parameter values
+ # `<parameter name>.original` containing the original parameter values
# before the query.
{ # Represents a context.
- "parameters": { # Optional. The collection of parameters associated with this context.
- # Refer to [this
- # doc](https://cloud.google.com/dialogflow-enterprise/docs/intents-actions-parameters)
- # for syntax.
- "a_key": "", # Properties of the object.
- },
"name": "A String", # Required. The unique identifier of the context. Format:
- # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`.
+ # `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`,
+ # or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User
+ # ID>/sessions/<Session ID>/contexts/<Context ID>`.
#
# The `Context ID` is always converted to lowercase, may only contain
- # characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long.
+ # characters in a-zA-Z0-9_-% and may be at most 250 bytes long.
+ #
+ # If `Environment ID` is not specified, we assume default 'draft'
+ # environment. If `User ID` is not specified, we assume default '-' user.
+ #
+ # The following context names are reserved for internal use by Dialogflow.
+ # You should not use these contexts or create contexts with these names:
+ #
+ # * `__system_counters__`
+ # * `*_id_dialog_context`
+ # * `*_dialog_params_size`
+ "parameters": { # Optional. The collection of parameters associated with this context.
+ #
+ # Depending on your protocol or client library language, this is a
+ # map, associative array, symbol table, dictionary, or JSON object
+ # composed of a collection of (MapKey, MapValue) pairs:
+ #
+ # - MapKey type: string
+ # - MapKey value: parameter name
+ # - MapValue type:
+ # - If parameter's entity type is a composite entity: map
+ # - Else: string or number, depending on parameter value type
+ # - MapValue value:
+ # - If parameter's entity type is a composite entity:
+ # map from composite entity property names to property values
+ # - Else: parameter value
+ "a_key": "", # Properties of the object.
+ },
"lifespanCount": 42, # Optional. The number of conversational query requests after which the
- # context expires. If set to `0` (the default) the context expires
+ # context expires. The default is `0`. If set to `0`, the context expires
# immediately. Contexts expire automatically after 20 minutes if there
# are no matching queries.
},
@@ -904,8 +1239,32 @@
"a_key": "", # Properties of the object.
},
},
+ "webhookStatus": { # The `Status` type defines a logical error model that is suitable for # Specifies the status of the webhook request.
+ # different programming environments, including REST APIs and RPC APIs. It is
+ # used by [gRPC](https://github.com/grpc). Each `Status` message contains
+ # three pieces of data: error code, error message, and error details.
+ #
+ # You can find out more about this error model and how to work with it in the
+ # [API Design Guide](https://cloud.google.com/apis/design/errors).
+ "message": "A String", # A developer-facing error message, which should be in English. Any
+ # user-facing error message should be localized and sent in the
+ # google.rpc.Status.details field, or localized by the client.
+ "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+ "details": [ # A list of messages that carry the error details. There is a common set of
+ # message types for APIs to use.
+ {
+ "a_key": "", # Properties of the object. Contains field @type with type URL.
+ },
+ ],
+ },
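Since `webhookStatus` is a `google.rpc.Status`, a zero `code` means OK and any non-zero code indicates a webhook error. An illustrative check; the sample status dicts are hypothetical:

```python
def webhook_failed(status):
    """Return True if a webhookStatus dict reports a non-OK google.rpc.Code."""
    return bool(status) and status.get("code", 0) != 0

ok_status = {"code": 0, "message": "", "details": []}
err_status = {"code": 14, "message": "Webhook call failed. Error: UNAVAILABLE."}
```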
"responseId": "A String", # The unique identifier of the response. It can be used to
# locate a response in the training example set or for reporting issues.
+ "outputAudio": "A String", # The audio data bytes encoded as specified in the request.
+ # Note: The output audio is generated based on the values of default platform
+ # text responses found in the `query_result.fulfillment_messages` field. If
+ # multiple default text responses exist, they will be concatenated when
+ # generating audio. If no default platform text responses exist, the
+ # generated audio content will be empty.
}</pre>
</div>
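Putting the updated `detectIntent(session, body=None, x__xgafv=None)` signature to use with this client library might look like the sketch below. The project and session IDs are placeholders, and the service call is wrapped in a function because it needs real credentials at runtime:

```python
def build_detect_intent_body(text, language_code="en-US"):
    # A queryInput carrying a plain text query, following the
    # DetectIntentRequest schema on this page.
    return {"queryInput": {"text": {"text": text, "languageCode": language_code}}}

def detect_intent(project_id, session_id, text):
    # Deferred import: building the service requires network access and
    # application default credentials.
    from googleapiclient.discovery import build
    service = build("dialogflow", "v2")
    session = f"projects/{project_id}/agent/sessions/{session_id}"
    request = service.projects().agent().sessions().detectIntent(
        session=session, body=build_detect_intent_body(text))
    return request.execute()

body = build_detect_intent_body("I want a pizza")
```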