Response types reference
You can use the JSON editor to specify responses of many different types for a customer query. When you add a JSON script in the JSON editor, your assistant uses the response format that the JSON script defines.
For more information, see Defining responses with the JSON editor.
Because action variables are resolved at run time, the format of a response type differs between the message API and the JSON editor. The following examples show the differences in response type format when you use the message API and the JSON editor.
If the text response from the message API has the following format:
{ "response_type": "text", "text": "Hello world" }
Then, the assistant displays the actual text message, Hello world, in a single step.
If the text response from the JSON action editor has the following format:
{
"generic": [
{
"response_type": "text",
"values": [
{
"text_expression": {
"concat": [
{
"scalar": "Hi, "
},
{
"variable": "step_472"
},
{
"scalar": ". How can I help you?"
}
]
}
}
],
"selection_policy": "sequential"
}
]
}
Then, the assistant combines the actual value of the variable with the other items in the values array and displays the response. For example, if step_472 takes the value "Bob", the assistant displays "Hi, Bob. How can I help you?".
Viewing the response type at run time
You can refer to the API documentation for watsonx Assistant to view the details of response types and the APIs.
For example, to view the runtime response type, do the following:
1. In the Response section, click MessageOutput in the output to see the generic section.
2. In the generic section, click RuntimeResponseGeneric[].
3. Select an option in the One of dropdown.

To view more details about the selected option, click One of.
The following response types are supported in the JSON editor.
audio
Plays an audio clip that is specified by a URL.
Integration channel support
Web chat, Phone, SMS, Slack
- Some channel integrations do not display audio titles or descriptions.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | audio | Y |
source | string | The https: URL of the audio clip. The URL can specify either an audio file or an audio clip on a supported hosting service. | Y |
title | string | The title to show before the audio player. | N |
description | string | The text of the description that accompanies the audio player. | N |
alt_text | string | Descriptive text that can be used for screen readers or other situations where the audio player cannot be seen. | N |
channel_options.voice_telephony.loop | string | Whether the audio clip repeats indefinitely (phone integration only). | N |
The URL specified by the source property can be one of the following types:

- The URL of an audio file in any standard format such as MP3 or WAV. In the web chat, the linked audio clip renders as an embedded audio player.
- The URL of an audio clip on a supported streaming service. In the web chat, the linked audio clip renders by using the embeddable player for the hosting service. Specify the URL that you use to access the audio file in your browser (for example, https://soundcloud.com/ibmresearchfallen-star-amped). The web chat automatically converts the URL to an embeddable form. You can embed audio that is hosted on a supported service.

For the phone integration, the URL must specify an audio file that is single-channel (mono), PCM-encoded, and sampled at 8,000 Hz with 16 bits per sample.
Example
This example plays an audio clip with a title and descriptive text.
{
"generic":[
{
"response_type": "audio",
"source": "https://example.com/audio/example-file.mp3",
"title": "Example audio file",
"description": "An example audio clip returned as part of a multimedia response."
}
]
}
card
A card presents visual content in a structured layout to improve the information experience for users.
Integration channel support
Web chat
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | card | Y |
body[] | list | A list of response types to create rich content. A maximum of 10 response types are allowed in the list. | Y |
footer[] | list | A list of only button response types. A maximum of 5 buttons are allowed in the list. | N |
A card can be rendered in a panel, but it is not allowed to have buttons.
Example
The following example shows the basic structure for building a card response type:
{
"response_type": "card",
"body": [
{
"response_type": "text",
"text": "# Heading"
},
{
"response_type": "text",
"text": "body"
}
]
}
carousel
A carousel presents cards with rich content. If there is only one card in the carousel, the web chat integration renders just the card instead of a carousel.
Integration channel support
Web chat
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | carousel | Y |
items[] | list | A list of card response types. A maximum of 5 cards are allowed in the list. | Y |
Example
The following example shows the basic structure for building a carousel response type:
{
"response_type": "carousel",
"items": [
{
"response_type": "card",
"body": [
{
"response_type": "text",
"text": "# Heading"
},
{
"response_type": "text",
"text": "body"
}
]
},
{
"response_type": "card",
"body": [
{
"response_type": "text",
"text": "# Heading"
},
{
"response_type": "text",
"text": "body"
}
]
}
]
}
channel_transfer
Transfers the conversation to a different channel integration. Currently, the web chat integration is the only supported target of a channel transfer.
Integration channel support
Phone, SMS, Slack
- The indicated channel integrations support initiating a channel transfer (currently, the web chat integration is the only supported transfer target).
- Initiating a channel transfer from the phone integration requires that the SMS integration is also configured.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | channel_transfer | Y |
message_to_user | string | A message to display to the user before the link for initiating the transfer. | Y |
transfer_info | object | Information used by an integration to transfer the conversation to a different channel. | Y |
transfer_info.target.chat | string | URL for the website that hosts the web chat to which the conversation is transferred. | Y |
Example
This example requests a transfer from WhatsApp to the web chat. In addition to the channel_transfer response, the output also includes a connect_to_agent response to be handled by the web chat integration after the transfer. The use of the channels array ensures that the channel_transfer response is handled only by the WhatsApp integration (before the transfer), and the connect_to_agent response only by the web chat integration (after the transfer).
{
"generic": [
{
"response_type": "channel_transfer",
"channels": [
{
"channel": "whatsapp"
}
],
"message_to_user": "Click the link to connect with an agent using our website.",
"transfer_info": {
"target": {
"chat": {
"url": "https://example.com/webchat"
}
}
}
},
{
"response_type": "connect_to_agent",
"channels": [
{
"channel": "chat"
}
],
"message_to_human_agent": "User asked to speak to an agent.",
"agent_available": {
"message": "Please wait while I connect you to an agent."
},
"agent_unavailable": {
"message": "I'm sorry, but no agents are online at the moment. Please try again later."
},
"transfer_info": {
"target": {
"zendesk": {
"department": "Payments department"
}
}
}
}
]
}
connect_to_agent
Transfers the conversation to a live agent for help. Service desk support must be configured for the channel integration.
Integration channel support
Web chat, Phone
- For information about adding service desk support to the web chat integration, see Adding contact center support.
- For information about adding service desk support to the phone integration, see Configuring backup call center support.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | connect_to_agent | Y |
message_to_human_agent | string | A message to display to the live agent to whom the conversation is being transferred. | Y |
agent_available | object | An object containing the message to display to the user when agents are available. | Y |
agent_unavailable | object | An object containing the message to display to the user when no agents are available. | Y |
transfer_info | object | Information that is used by the web chat service desk integrations for routing the transfer. | N |
transfer_info.target.zendesk.department | string | A valid department from your Zendesk account. | N |
transfer_info.target.salesforce.button_id | string | A valid button ID from your Salesforce deployment. | N |
Example
This example requests a transfer to a live agent and specifies messages to be displayed both to the user and to the agent at the time of transfer.
{
"generic": [
{
"response_type": "connect_to_agent",
"message_to_human_agent": "User asked to speak to an agent.",
"agent_available": {
"message": "Please wait while I connect you to an agent."
},
"agent_unavailable": {
"message": "I'm sorry, but no agents are online at the moment. Please try again later."
}
}
]
}
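If your web chat uses a Zendesk service desk integration, you can also route the transfer to a specific department by adding the transfer_info.target.zendesk.department field that is documented in the table above. The following is a minimal sketch; the department name "Billing" is a hypothetical example, so substitute a valid department from your own Zendesk account.

{
  "generic": [
    {
      "response_type": "connect_to_agent",
      "message_to_human_agent": "User asked to speak to an agent.",
      "agent_available": {
        "message": "Please wait while I connect you to an agent."
      },
      "agent_unavailable": {
        "message": "I'm sorry, but no agents are online at the moment. Please try again later."
      },
      "transfer_info": {
        "target": {
          "zendesk": {
            "department": "Billing"
          }
        }
      }
    }
  ]
}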
date
Shows an interactive date picker that a customer can use to specify a date value.
Integration channel support
Web chat
- In the web chat, the customer can specify a date value either by clicking the interactive date picker or typing a date value in the input field.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | date | Y |
Example
This example sends a text response that asks the user to specify a date, and then shows an interactive date picker.
{
"generic": [
{
"response_type": "text",
"text": "What day will you be checking in?"
},
{
"response_type": "date"
}
]
}
dtmf
Sends commands to the phone integration to control input or output with dual-tone multi-frequency (DTMF) signals. (DTMF is a protocol that transmits tones, which are generated when a user presses keys on a push-button phone.)
Integration channel support
Phone
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | dtmf | Y |
command_info | object | Information specifying the DTMF command to send to the phone integration. | Y |
command_info.type | string | The DTMF command to send (collect, disable_barge_in, enable_barge_in, or send). | Y |
command_info.parameters | object | See Handling phone interactions. | N |
The command_info.type field can specify any of the following supported commands:

- collect: Collects DTMF keypad input.
- disable_barge_in: Disables DTMF barge-in so that playback from the phone integration is not interrupted when the customer presses a key.
- enable_barge_in: Enables DTMF barge-in so that the customer can interrupt playback from the phone integration by pressing a key.
- send: Sends DTMF signals.
For detailed information about how to use each of these commands, see Handling phone interactions.
Example
This example shows the dtmf response type with the collect command, used to collect DTMF input. For more information, see Handling phone interactions.
{
"generic": [
{
"response_type": "dtmf",
"command_info": {
"type": "collect",
"parameters": {
"termination_key": "#",
"count": 16,
"ignore_speech": true
}
},
"channels": [
{
"channel": "voice_telephony"
}
]
}
]
}
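The barge-in commands need only the command_info.type field. The following sketch, built only from the fields documented above, disables DTMF barge-in so that a prompt plays to completion; you can send enable_barge_in in a later turn to restore the default behavior.

{
  "generic": [
    {
      "response_type": "dtmf",
      "command_info": {
        "type": "disable_barge_in"
      },
      "channels": [
        {
          "channel": "voice_telephony"
        }
      ]
    }
  ]
}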
end_session
Sends a command to the channel that ends the session. This response type instructs the phone integration to hang up the call.
Integration channel support
Phone, SMS
- The SMS integration supports ending a session by using the terminateSession action command.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | end_session | Y |
For the phone integration, you can use the channel_options object to include custom headers with the SIP BYE request that is generated. For more information, see End the call.
Example
This example uses the end_session response type to end a conversation.
{
"generic": [
{
"response_type": "end_session"
}
]
}
grid
A grid gives you the flexibility to lay out content in rows and columns so that you can present information in the arrangement that works best for your users.
Integration channel support
Web chat
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | grid | Y |
horizontal_alignment | string | The horizontal alignment for all items in the grid (left, center, or right). | N |
vertical_alignment | string | The vertical alignment for all items in the grid (top, center, or bottom). | N |
columns[] | list | The list of columns. A maximum of 5 columns are allowed in the list. Each column is separated by 8px of space. | N |
columns[].width | string | The width of the column. You can set the value of width by using a number (for example, 1) or pixels (for example, 48px). The number value of a column width is calculated based on the total width of the row and the width of the other columns in the row. | Y |
rows[] | list | The list of rows. A maximum of 5 rows are allowed in the list. Each row is separated by 8px of space. | Y |
rows[].cells[] | list | The list of cells in a row. Each cell is a column in a row (for example, cell 1 is column 1 in a row). The width of the cell is equal to the width of the column. | Y |
rows[].cells[].items[] | list | A list of response-type items. Each item is separated by 8px of space. A maximum of 5 response-type items are allowed in the list. Supported response-type items include grid only within a grid and below the first level; a grid in a cell cannot contain a grid response type. | Y |
rows[].cells[].horizontal_alignment | string | The horizontal alignment for items in the cell (left, center, or right). | N |
rows[].cells[].vertical_alignment | string | The vertical alignment for items in the cell (top, center, or bottom). | N |
Example
The following example shows the basic structure for building a grid response type:
{
"response_type": "grid",
"columns": [
{
"width": "1"
},
{
"width": "1"
}
],
"rows": [
{
"cells": [
{
"items": [
{
"response_type": "text",
"text": "row 1 cell 1"
}
]
},
{
"items": [
{
"response_type": "text",
"text": "row 1 cell 2"
}
]
}
]
},
{
"cells": [
{
"items": [
{
"response_type": "text",
"text": "row 2 cell 1"
}
]
},
{
"items": [
{
"response_type": "text",
"text": "row 2 cell 2"
}
]
}
]
}
]
}
iframe
Embeds content from an external website as an HTML iframe element.
Integration channel support
Web chat
- Currently, the web chat integration ignores the description and image_url properties. Instead, the description and preview image are dynamically retrieved from the source at run time.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | iframe | Y |
source | string | The URL of the external content. The URL must specify content that is embeddable in an HTML iframe element. | Y |
title | string | The title to show before the embedded content. | N |
description | string | The text of the description that accompanies the embedded content. | N |
image_url | string | The URL of an image that shows a preview of the embedded content. | N |
channel_options.chat.display | string | The way web chat renders the response type (inline or panel). The default value is panel for this response type. | N |
channel_options.chat.dimensions.base_height | number | The base height (in pixels) to use to scale the content to a specific display size. This property works only when display is set to inline. | N |
Different sites have varying restrictions for embedding content, and different processes for generating embeddable URLs. An embeddable URL is one that can be specified as the value of the src attribute of the iframe element.
For example, to embed an interactive map with Google Maps, you can use the Google Maps Embed API. (For more information, see The Maps Embed API overview.) Other sites have different processes for creating embeddable content.
For technical details about using Content-Security-Policy: frame-src to allow embedding of your website content, see CSP: frame-src.
Example
The following example embeds an iframe with a title and description.
{
"generic":[
{
"response_type": "iframe",
"source": "https://example.com/embeddable/example",
"title": "Example iframe",
"description": "An example of embeddable content returned as an iframe response.",
"channel_options": {
"chat": {
"display": "inline",
"base_height": 180
}
}
}
]
}
image
Displays an image that is specified by a URL.
Integration channel support
Web chat, SMS, Slack, MS Teams
- Some channel integrations do not display image titles or descriptions.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | image | Y |
source | string | The https: URL of the image. The specified image must be in .jpg, .gif, or .png format. | Y |
title | string | The title to show before the image. | N |
description | string | The text of the description that accompanies the image. | N |
alt_text | string | Descriptive text that can be used for screen readers or other situations where the image cannot be seen. | N |
Example
This example displays an image with a title and descriptive text.
{
"generic":[
{
"response_type": "image",
"source": "https://example.com/image.jpg",
"title": "Example image",
"description": "An example image returned as part of a multimedia response."
}
]
}
option
Use to show a set of options (such as buttons or a drop-down list) that users can choose from. The selected value is then sent to the assistant as user input. An option response is automatically defined when you choose the Options customer response type for a step. For more information, see Collecting information from your customers.
Integration channel support
Web chat, Phone, SMS, Slack, MS Teams
- How options are presented varies by channel integration. The preference field is supported when possible, but not all channels support drop-down lists or buttons.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | option | Y |
title | string | The title to show before the options. | Y |
description | string | The text of the description that accompanies the options. | N |
preference | string | The preferred type of control to display, if supported by the channel (dropdown or button). | N |
options | list | A list of key-value pairs that specify options from which a user can choose. | Y |
options[].label | string | The user-facing label for the option. | Y |
options[].value | object | An object that defines the response that is sent to the watsonx Assistant service if the user selects the option. | Y |
options[].value.input | object | An object that includes the message input corresponding to the option, including input text and any other field that is a valid part of a watsonx Assistant message. For more information, see the API Reference. | N |
Example
This example presents two options (Buy something and Exit).
{
"generic":[
{
"response_type": "option",
"title": "Choose from the following options:",
"preference": "button",
"options": [
{
"label": "Buy something",
"value": {
"input": {
"text": "Place order"
}
}
},
{
"label": "Exit",
"value": {
"input": {
"text": "Exit"
}
}
}
]
}
]
}
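If the channel supports it, you can request a drop-down list instead of buttons by setting preference to dropdown. The following sketch uses only the fields documented above; the menu labels and values are hypothetical examples.

{
  "generic": [
    {
      "response_type": "option",
      "title": "Which size do you want?",
      "description": "Choose a size from the list.",
      "preference": "dropdown",
      "options": [
        {
          "label": "Small",
          "value": {
            "input": {
              "text": "Small"
            }
          }
        },
        {
          "label": "Large",
          "value": {
            "input": {
              "text": "Large"
            }
          }
        }
      ]
    }
  ]
}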
pause
Pauses before the next message to the channel, and optionally sends a "user is typing" event (for channels that support it).
Integration channel support
Web chat
- With the phone integration, you can add a pause by including the SSML break element in the assistant output. For more information, see the Text to Speech documentation.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | pause | Y |
time | int | How long to pause, in milliseconds. | Y |
typing | Boolean | Whether to send the "user is typing" event during the pause. Ignored if the channel does not support this event. | N |
Example
This example sends the "user is typing" event and pauses for 5 seconds.
{
"generic":[
{
"response_type": "pause",
"time": 5000,
"typing": true
}
]
}
speech_to_text
Sends a command to the Speech to Text service instance used by the phone integration. These commands can dynamically change the configuration or behavior of the service during a conversation.
Integration channel support
Phone
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | speech_to_text | Y |
command_info | object | Information specifying the command to send to the Speech to Text service. | Y |
command_info.type | string | The command to send (currently only the configure command is supported). | Y |
command_info.parameters | object | See Applying advanced settings to the Speech to Text service. | N |
The command_info.type field can specify the following supported command:

- configure: Dynamically updates the Speech to Text configuration. Configuration changes can be applied only to the next conversation turn, or for the rest of the session.
For information about how to use this command, see Applying advanced settings to the Speech to Text service.
Example
This example uses the speech_to_text response type with the configure command to change the language model that the Speech to Text service uses to Spanish, and to enable smart formatting.
{
"generic": [
{
"response_type": "speech_to_text",
"command_info": {
"type": "configure",
"parameters": {
"narrowband_recognize": {
"model": "es-ES_NarrowbandModel",
"smart_formatting": true
}
}
},
"channels":[
{
"channel": "voice_telephony"
}
]
}
]
}
start_activities
Sends a command to a channel integration to start one or more activities that are specific to that channel. You can use this response type to restart any activity you previously stopped with the stop_activities response type.
Integration channel support
Phone
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | start_activities | Y |
activities | list | A list of objects that identify the activities to start. | Y |
activities[].type | string | The name of the activity to start. | Y |
Currently, the following activities for the phone integration can be started:

- speech_to_text_recognition: Recognizes speech. Streaming audio to the Speech to Text service is resumed.
- dtmf_collection: Processes inbound DTMF signals.
Example
This example uses the start_activities response type to restart recognizing speech. Because this command is specific to the phone integration, the channels property specifies voice_telephony only.
{
"generic": [
{
"response_type": "start_activities",
"activities": [
{
"type": "speech_to_text_recognition"
}
],
"channels":[
{
"channel": "voice_telephony"
}
]
}
]
}
stop_activities
Sends a command to a channel integration to stop one or more activities that are specific to that channel. The activities remain stopped until they are restarted with the start_activities response type.
Integration channel support
Phone
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | stop_activities | Y |
activities | list | A list of objects that identify the activities to stop. | Y |
activities[].type | string | The name of the activity to stop. | Y |
Currently, the following activities for the phone integration can be stopped:

- speech_to_text_recognition: Stops recognizing speech. All streaming audio to the Speech to Text service is stopped.
- dtmf_collection: Stops processing of inbound DTMF signals.
Example
This example uses the stop_activities response type to stop recognizing speech. Because this command is specific to the phone integration, the channels property specifies voice_telephony only.
{
"generic": [
{
"response_type": "stop_activities",
"activities": [
{
"type": "speech_to_text_recognition"
}
],
"channels":[
{
"channel":"voice_telephony"
}
]
}
]
}
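Because activities is a list, a single response can stop more than one activity in the same turn. The following sketch stops both speech recognition and DTMF processing; it uses only the activity types documented above.

{
  "generic": [
    {
      "response_type": "stop_activities",
      "activities": [
        {
          "type": "speech_to_text_recognition"
        },
        {
          "type": "dtmf_collection"
        }
      ],
      "channels": [
        {
          "channel": "voice_telephony"
        }
      ]
    }
  ]
}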
table
Beta
The web chat supports the new table response type, which presents structured data in rows and columns, with headers and cells.
Integration channel support
Web chat
Properties and Definitions
Property | Description | Type | Required |
---|---|---|---|
title | The title of the table. | String | No |
description | A brief description of the table. | String | No |
headers | Array of column headers. | Array<String, Number> | Yes |
rows | Array of rows, each containing an array of cells. | Array | Yes |
rows[].cells | Data for each cell in the row. | Array<String, Number> | Yes |
Each row must have the same number of cells as the headers. A mismatch between cells and headers will cause the web chat to throw an error when it attempts to render the table.
Example
This example displays structured data in a table.
{
"generic": [
{
"response_type": "table",
"title": "Data Table",
"description": "A table with data",
"headers": ["Column 1", "Column 2"],
"rows": [
{
"cells": ["Row 1, Column 1", "Row 1, Column 2"]
},
{
"cells": ["Row 2, Column 1", "22"]
}
]
}
]
}
text
Displays text (or reads it aloud, for the phone integration). To add variety, you can specify multiple alternative text responses. If you specify multiple responses, you can choose to rotate sequentially through the list, choose a response randomly, or output all specified responses.
Integration channel support
Web chat, Phone, SMS, Slack
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | text | Y |
values | list | A list of one or more objects that define a text response. | Y |
values.[n].text_expression | object | An object that describes the text of the response. | N |
values.[n].text_expression.concat | list | A list of objects that form components of the text response. These objects can include scalar text strings and references to variables. | N |
selection_policy | string | How a response is selected from the list, if more than one response is specified. The possible values are sequential, random, and multiline. | N |
delimiter | string | The delimiter to output as a separator between responses. Used only when selection_policy is multiline. The default delimiter is a newline. | N |
Example
This example displays a greeting message to the user.
{
"generic": [
{
"response_type": "text",
"values": [
{
"text_expression": {
"concat": [
{
"scalar": "Hi, "
},
{
"variable": "step_472"
},
{
"scalar": ". How can I help you?"
}
]
}
}
],
"selection_policy": "sequential"
}
]
}
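To vary the wording from one conversation to the next, you can list several alternatives in values and set selection_policy to random. The following is a minimal sketch that uses only scalar text; the greeting strings are placeholder examples.

{
  "generic": [
    {
      "response_type": "text",
      "values": [
        {
          "text_expression": {
            "concat": [
              {
                "scalar": "Hello! How can I help you today?"
              }
            ]
          }
        },
        {
          "text_expression": {
            "concat": [
              {
                "scalar": "Hi there! What can I do for you?"
              }
            ]
          }
        }
      ],
      "selection_policy": "random"
    }
  ]
}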
text_to_speech
Sends a command to the Text to Speech service instance used by the phone integration. These commands can dynamically change the configuration or behavior of the service during a conversation.
Integration channel support
Phone
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | text_to_speech | Y |
command_info | object | Information specifying the command to send to the Text to Speech service. | Y |
command_info.type | string | The command to send (configure, disable_barge_in, or enable_barge_in). | Y |
command_info.parameters | object | See Applying advanced settings to the Text to Speech service. | N |
The command_info.type field can specify any of the following supported commands:

- configure: Dynamically updates the Text to Speech configuration. Configuration changes can be applied only to the next conversation turn, or for the rest of the session.
- disable_barge_in: Disables speech barge-in so that playback from the phone integration is not interrupted when the customer speaks.
- enable_barge_in: Enables speech barge-in so that the customer can interrupt playback from the phone integration by speaking.
For detailed information about how to use each of these commands, see Applying advanced settings to the Text to Speech service.
Example
This example uses the text_to_speech response type with the configure command to change the voice that is used by the Text to Speech service.
{
"generic": [
{
"response_type": "text_to_speech",
"command_info": {
"type": "configure",
"parameters" : {
"synthesize": {
"voice": "en-US_LisaVoice"
}
}
},
"channels":[
{
"channel": "voice_telephony"
}
]
}
]
}
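The barge-in commands need only the command_info.type field. The following sketch, built only from the fields documented above, disables speech barge-in so that an announcement plays to completion; send enable_barge_in in a later turn to let callers interrupt playback again.

{
  "generic": [
    {
      "response_type": "text_to_speech",
      "command_info": {
        "type": "disable_barge_in"
      },
      "channels": [
        {
          "channel": "voice_telephony"
        }
      ]
    }
  ]
}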
user_defined
A custom response type with any JSON data that the client or integration knows how to handle. For example, you might customize the web chat to display a special card, or build a custom application to format responses with a table or chart.
The user-defined response type is not displayed unless the channel has code to handle it. For more information, see Applying advanced customizations.
Integration channel support
Web chat, Phone, SMS, Slack
- With the phone integration, the user_defined response type is used to send legacy commands (for example, vgwActForceNoInputTurn or vgwActSendSMS). For more information, see Handling phone interactions.
- With the SMS integration, the user_defined response type is used to send action commands (for example, terminateSession or smsActSendMedia).
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | user_defined | Y |
user_defined | object | An object that contains any data the client or integration knows how to handle. This object can contain any valid JSON data, but it cannot exceed a total size of 5000 bytes. | Y |
Example
This is a generic example of a user-defined response. The user_defined object can contain any valid JSON data.
{
"generic":[
{
"response_type": "user_defined",
"user_defined": {
"field_1": "String value",
"array_1": [
1,
2
],
"object_1": {
"property_1": "Another string value"
}
}
}
]
}
video
Displays a video that is specified by a URL.
Integration channel support
Web chat, SMS, Slack
- Some channel integrations do not display video titles or descriptions.
Fields
Name | Type | Description | Required? |
---|---|---|---|
response_type | string | video | Y |
source | string | The https: URL of the video. The URL can specify a video file or a streaming video on a supported hosting service. | Y |
title | string | The title to show before the video. | N |
description | string | The text of the description that accompanies the video. | N |
alt_text | string | Descriptive text that can be used for screen readers or other situations where the video cannot be seen. | N |
channel_options.chat.dimensions.base_height | number | The base height (in pixels) to use to scale the video to a specific display size. | N |
The URL specified with the source property can be one of the following types:

- The URL of a video file in a standard format such as MPEG or AVI. In the web chat, the linked video renders as an embedded video player. HLS (.m3u8) and DASH (MPD) streaming videos are not supported.
- The URL of a video from a supported service. In the web chat, the linked video renders with the embeddable player for the hosting service. Specify the URL of the video that you want to view in your browser (for example, https://www.youtube.com/watch?v=52bpMKVigGU). The web chat automatically converts the URL to an embeddable form. You can embed videos from supported services.
Example
This example displays a video with a title and descriptive text.
{
"generic":[
{
"response_type": "video",
"source": "https://example.com/videos/example-video.mp4",
"title": "Example video",
"description": "An example video returned as part of a multimedia response."
}
]
}