MRCP
AWS Transcribe Plugin
Usage Guide
Created: June 28, 2020
Last updated: December 18, 2020
Author: Arsen Chaloyan
Table of Contents
4.1 Using Default Configuration
4.3 Specifying AWS Credentials
4.4 Specifying Recognition Language
4.6 Specifying Speech Input Parameters
4.7 Specifying DTMF Input Parameters
4.8 Specifying No-Input and Recognition Timeouts
4.9 Specifying Vendor Specific Parameters
4.11 Maintaining Recognition Details Records
5 Recognition Grammars and Results
5.1 Using Built-in Speech Grammar
5.2 Using Built-in DTMF Grammars
7.3 Speech and DTMF Recognition
This guide describes how to configure and use the Amazon Web Services (AWS) Transcribe plugin for the UniMRCP server. The document is intended for users who have some knowledge of AWS Transcribe and UniMRCP.
For installation instructions, use one of the guides below.
· RPM Package Installation (Red Hat / CentOS)
· Deb Package Installation (Debian / Ubuntu)
Instructions provided in this guide are applicable to the following versions.
· UniMRCP 1.7.0 and above
· UniMRCP Transcribe Plugin 1.0.0 and above
This is a brief checklist of the features currently supported by the UniMRCP server running with the Transcribe plugin.
✓ DEFINE-GRAMMAR
✓ RECOGNIZE
✓ START-INPUT-TIMERS
✓ STOP
✓ SET-PARAMS
✓ GET-PARAMS
✓ RECOGNITION-COMPLETE
✓ START-OF-INPUT
✓ Input-Type
✓ No-Input-Timeout
✓ Recognition-Timeout
✓ Speech-Complete-Timeout
✓ Waveform-URI
✓ Media-Type
✓ Completion-Cause
✓ Confidence-Threshold
✓ Start-Input-Timers
✓ DTMF-Interdigit-Timeout
✓ DTMF-Term-Timeout
✓ DTMF-Term-Char
✓ Save-Waveform
✓ Speech-Language
✓ Cancel-If-Queue
✓ Sensitivity-Level
✓ Built-in speech, event and DTMF grammars
✓ SRGS XML (limited support)
✓ NLSML
✓ JSON
The configuration file of the Transcribe plugin is located in /opt/unimrcp/conf/umstranscribe.xml. The configuration file is written in XML.
The root element of the XML document must be <umstranscribe>.
Name | Unit | Description
license-file | File path | Specifies the license file. The file name may include patterns containing the '*' sign. If multiple files match the pattern, the most recent one gets used.
credentials-file | File path | Specifies the AWS credentials file to use. The file name may include patterns containing the '*' sign. If multiple files match the pattern, the most recent one gets used.
credentials-provider | String | Specifies a credentials provider. If not specified or set to custom, the custom credentials provider is used to read credentials from credentials-file. If set to default, the AWS default credentials provider chain is used, and credentials-file is not observed. If set to sts, the AWS STS profile credentials provider is used, and credentials-file is not observed.
init-sdk | Boolean | Specifies whether to initialize the AWS SDK upon loading of the plugin. Set to true by default. Set it to false if another plugin using the same AWS SDK is loaded prior to this plugin.
shutdown-sdk | Boolean | Specifies whether to shut down the AWS SDK upon unloading of the plugin. Set to true by default. Set it to false if another plugin using the same AWS SDK is unloaded after this plugin.
sdk-log-level | Integer | Specifies the log level of the AWS SDK. If not specified or set to 0, the SDK logs are disabled. Acceptable values are from 0 (OFF) to 6 (TRACE).
Parent element: None.
Child elements:
Name | Unit | Description
<streaming-recognition> | String | Specifies recognition parameters of streaming recognition.
<results> | String | Specifies parameters of recognition results set in RECOGNITION-COMPLETE events.
<speech-contexts> | String | Contains a list of speech contexts.
<speech-dtmf-input-detector> | String | Specifies parameters of the speech and DTMF input detector.
<utterance-manager> | String | Specifies parameters of the utterance manager.
<rdr-manager> | String | Specifies parameters of the Recognition Details Record (RDR) manager.
<monitoring-agent> | String | Specifies parameters of the monitoring agent.
<license-server> | String | Specifies parameters used to connect to the license server. The use of the license server is optional.
This is an example of a bare document.
<umstranscribe license-file="umstranscribe_*.lic"
               credentials-file="*.json"
               credentials-provider="custom"
               init-sdk="true"
               shutdown-sdk="true">
</umstranscribe>
This element specifies parameters of streaming recognition.
Name | Unit | Description
language | String | Specifies the default language to use, if not set by the client. For a list of supported languages, visit https://docs.aws.amazon.com/transcribe/latest/dg/websocket.html
single-utterance | Boolean | Specifies whether to detect a single spoken utterance or perform continuous recognition.
interim-results | Boolean | Specifies whether to request interim results or not.
start-of-input | String | Specifies the source of the start-of-input event sent to the client (use "service-originated" for an event originated based on the first-received interim result and "internal" for an event determined by the plugin).
max-alternatives | Integer | Specifies the maximum number of speech recognition result alternatives to be returned. Can be overridden by the client by means of the header field N-Best-List-Length.
alternatives-below-threshold | Boolean | Specifies whether to return speech recognition result alternatives with a confidence score below the confidence threshold.
skip-unsupported-grammars | Boolean | Specifies whether to skip or raise an error on a malformed or unsupported grammar.
transcription-grammar | String | Specifies the name of the built-in speech transcription grammar. The grammar can be referenced as builtin:speech/transcribe or builtin:grammar/transcribe, where transcribe is the default value of this parameter.
inter-result-timeout | Time interval [msec] | Specifies a timeout between interim results containing transcribed speech. If the timeout elapses, input is considered complete. The timeout defaults to 0 (disabled).
region | String | Specifies the AWS region.
vocabulary-name | String | Specifies an optional custom vocabulary. See https://docs.aws.amazon.com/transcribe/latest/dg/how-vocabulary.html
Parent element: <umstranscribe>
Child elements: None.
This is an example of the streaming-recognition element.
<streaming-recognition language="en-US"
                       single-utterance="true"
                       interim-results="true"
                       skip-unsupported-grammars="true"
                       transcription-grammar="transcribe"
                       region=""
                       vocabulary-name="" />
This element specifies parameters of recognition results set in RECOGNITION-COMPLETE events.
Name | Unit | Description
format | String | Specifies the format of results to be returned to the client (use "standard" for NLSML and "json" for JSON).
indent | Integer | Specifies the indent to use while composing the results.
confidence-format | String | Specifies the format of the confidence score to be returned (use "auto" for a format based on the protocol version, "mrcpv2" for a float value in the range of 0..1, "mrcpv1" for an integer value in the range of 0..100).
Parent element: <umstranscribe>
Child elements: None.
This is an example of the results element.
<results format="standard" indent="0" confidence-format="auto" />
This element specifies a list of speech contexts.
Attributes: None.
Parent element: <umstranscribe>
Child elements: <speech-context>
The example below defines two speech contexts, booking and directory.
<speech-contexts>
  <speech-context id="booking" enable="true">
    <phrase>I would like to book a flight from New York to Rome with a ticket eligible for free cancellation</phrase>
    <phrase>I would like to book a one-way flight from New York to Rome</phrase>
  </speech-context>
  <speech-context id="directory" enable="true">
    <phrase>call Steve</phrase>
    <phrase>call John</phrase>
    <phrase>dial 5</phrase>
    <phrase>dial 6</phrase>
  </speech-context>
</speech-contexts>
This element specifies a speech context.
Name | Unit | Description
id | String | Specifies a unique string identifier of the speech context to be referenced by the MRCP client.
enable | Boolean | Specifies whether the speech context is enabled or disabled.
speech-complete | Boolean | Specifies whether to complete input as soon as an interim result matches one of the specified phrases.
language | String | The language the phrases are defined for.
scope | String | Specifies a scope of the speech context, which can be set to either hint or strict.
Parent element: <speech-contexts>
Child elements: <phrase>
This is an example of a speech-context element.
<speech-context id="directory" enable="true">
  <phrase>call Steve</phrase>
  <phrase>call John</phrase>
  <phrase>dial 5</phrase>
  <phrase>dial 6</phrase>
</speech-context>
This element specifies a phrase in the speech context.
Name | Unit | Description
tag | String | Specifies an optional arbitrary string identifier to be returned as an instance in the NLSML result, if the transcription result matches the phrase.
Parent element: <speech-context>
Child elements: None.
This is an example of a speech context with phrases having tags specified.
<speech-context id="boolean" speech-complete="true" scope="strict" enable="true">
  <phrase tag="true">yes</phrase>
  <phrase tag="true">sure</phrase>
  <phrase tag="true">correct</phrase>
  <phrase tag="false">no</phrase>
  <phrase tag="false">not sure</phrase>
  <phrase tag="false">incorrect</phrase>
</speech-context>
This element specifies parameters of the utterance manager.
Name | Unit | Description
save-waveforms | Boolean | Specifies whether to save waveforms or not.
purge-existing | Boolean | Specifies whether to delete existing records on start-up.
max-file-age | Time interval [min] | Specifies a time interval in minutes after expiration of which a waveform is deleted. Set 0 for infinite.
max-file-count | Integer | Specifies the max number of waveforms to store. If reached, the oldest waveform is deleted. Set 0 for infinite.
waveform-base-uri | String | Specifies the base URI used to compose an absolute waveform URI.
waveform-folder | Dir path | Specifies a folder the waveforms should be stored in.
file-prefix | String | Specifies a prefix used to compose the name of the file to be stored. Defaults to 'umstranscribe-', if not specified.
use-logging-tag | Boolean | Specifies whether to use the MRCP header field Logging-Tag, if present, to compose the name of the file to be stored.
Parent element: <umstranscribe>
Child elements: None.
The example below defines a typical utterance manager having the default parameters set.
<utterance-manager save-waveforms="false"
                   purge-existing="false"
                   max-file-age="60"
                   max-file-count="100"
                   waveform-base-uri="http://localhost/utterances/"
                   waveform-folder="" />
This element specifies parameters of the Recognition Details Record (RDR) manager.
Name | Unit | Description
save-records | Boolean | Specifies whether to save recognition details records or not.
purge-existing | Boolean | Specifies whether to delete existing records on start-up.
max-file-age | Time interval [min] | Specifies a time interval in minutes after expiration of which a record is deleted. Set 0 for infinite.
max-file-count | Integer | Specifies the max number of records to store. If reached, the oldest record is deleted. Set 0 for infinite.
record-folder | Dir path | Specifies a folder to store recognition details records in. Defaults to ${UniMRCPInstallDir}/var.
file-prefix | String | Specifies a prefix used to compose the name of the file to be stored. Defaults to 'umstranscribe-', if not specified.
use-logging-tag | Boolean | Specifies whether to use the MRCP header field Logging-Tag, if present, to compose the name of the file to be stored.
Parent element: <umstranscribe>
Child elements: None.
The example below defines a typical RDR manager having the default parameters set.
<rdr-manager save-records="false"
             purge-existing="false"
             max-file-age="60"
             max-file-count="100"
             record-folder="" />
This element specifies parameters of the monitoring agent.
Name | Unit | Description
refresh-period | Time interval [sec] | Specifies a time interval in seconds used to periodically refresh usage details. See <usage-refresh-handler>.
Parent element: <umstranscribe>
Child elements: <usage-change-handler>, <usage-refresh-handler>
The example below defines a monitoring agent with usage change and refresh handlers.
<monitoring-agent refresh-period="60">
  <usage-change-handler>
    <log-usage enable="true" priority="NOTICE"/>
  </usage-change-handler>
  <usage-refresh-handler>
    <dump-channels enable="true" status-file="umstranscribe-channels.status"/>
  </usage-refresh-handler>
</monitoring-agent>
This element specifies an event handler called on every usage change.
Attributes: None.
Parent element: <monitoring-agent>
Child elements: <log-usage>, <update-usage>, <dump-channels>
This is an example of the usage change event handler.
<usage-change-handler>
  <log-usage enable="true" priority="NOTICE"/>
  <update-usage enable="false" status-file="umstranscribe-usage.status"/>
  <dump-channels enable="false" status-file="umstranscribe-channels.status"/>
</usage-change-handler>
This element specifies an event handler called periodically to update usage details.
Attributes: None.
Parent element: <monitoring-agent>
Child elements: <log-usage>, <update-usage>, <dump-channels>
This is an example of the usage refresh event handler.
<usage-refresh-handler>
  <log-usage enable="true" priority="NOTICE"/>
  <update-usage enable="false" status-file="umstranscribe-usage.status"/>
  <dump-channels enable="false" status-file="umstranscribe-channels.status"/>
</usage-refresh-handler>
This element specifies parameters used to connect to the license server.
Name | Unit | Description
enable | Boolean | Specifies whether the use of the license server is enabled or not. If enabled, the license-file attribute is not honored.
server-address | String | Specifies the IP address or host name of the license server.
certificate-file | File path | Specifies the client certificate used to connect to the license server. The file name may include patterns containing the '*' sign. If multiple files match the pattern, the most recent one gets used.
ca-file | File path | Specifies the certificate authority used to validate the license server.
channel-count | Integer | Specifies the number of channels to check out from the license server. If not specified or set to 0, either all available channels or a pool of channels will be checked out based on the configuration of the license server.
http-proxy-address | String | Specifies the IP address or host name of the HTTP proxy server, if used.
http-proxy-port | Integer | Specifies the port number of the HTTP proxy server, if used.
Parent element: <umstranscribe>
Child elements: None.
The example below defines a typical configuration which can be used to connect to a license server located, for example, at 10.0.0.1.
<license-server enable="true" server-address="10.0.0.1" certificate-file="unilic_client_*.crt" ca-file="unilic_ca.crt" />
For further reference to the license server, visit
This element specifies a set of credentials profiles.
Attributes: None.
Parent element: <umstranscribe>
Child elements: <credentials-profile>
This is an example of credentials profiles.
<credentials-profiles>
  <credentials-profile name="default" duration="60" />
  <credentials-profile name="dev" duration="60" />
  <credentials-profile name="prod" duration="60" />
</credentials-profiles>
This section outlines common configuration steps.
The default configuration should be sufficient for general use.
This section can be skipped if the Transcribe plugin is used without the Polly plugin. However, if both the Polly and Transcribe plugins are loaded into the same instance of the UniMRCP server, the plugins need to be configured in a certain way to ensure the AWS SDK is initialized and shut down only once.
<umspolly license-file="umspolly_*.lic" credentials-file="aws.credentials" init-sdk="true" shutdown-sdk="false">

<umstranscribe license-file="umstranscribe_*.lic" credentials-file="aws.credentials" init-sdk="false" shutdown-sdk="true">
By default, the plugin uses credentials of an IAM user to consume AWS services. The procedure for setting up the credentials is documented in the installation guide. No further action is required if the IAM user is supposed to be used.
The plugin can be configured to use the AWS default credentials provider chain, which in turn makes it possible to derive an IAM role set on the instance the UniMRCP server is running on. The behavior is controlled by the configuration attribute credentials-provider, which should be set to default to use the default credentials provider chain.
<umstranscribe license-file="umstranscribe_*.lic" credentials-file="aws.credentials" credentials-provider="default" init-sdk="true" shutdown-sdk="true">
Note that if the attribute credentials-provider is not set to custom, the attribute credentials-file is not observed.
Since 1.4.0, the plugin can be configured to use the AWS STS profile credentials provider to assume a role. The behavior is controlled by the configuration attribute credentials-provider, which should be set to sts to use STSProfileCredentialsProvider.
<umstranscribe license-file="umstranscribe_*.lic" credentials-file="aws.credentials" credentials-provider="sts" init-sdk="true" shutdown-sdk="true">
Since 1.1.0, different AWS credentials profiles can be configured per UniMRCP server profile. The profiles can also be created and referenced on demand by the MRCP client via the header field Vendor-Specific-Parameters. The following parameters can be specified (see the example request after the table).
Name | Unit | Description
aws-credentials-file | File path | Specifies the AWS credentials file to use. The file name may include patterns containing the '*' sign. If multiple files match the pattern, the most recent one gets used. Available since 1.1.0.
aws-credentials-provider | String | Specifies a credentials provider. Use one of: custom (credentials read from the specified file), default (the AWS default credentials provider chain), or sts (the AWS STS profile credentials provider). Available since 1.1.0.
aws-credentials-profile | String | Specifies a credentials profile to reference and/or create. Available since 1.1.0.
aws-credentials-duration | Integer | Specifies the lifetime of the credentials profile to create. Available since 1.1.0.
aws-arn-role | String | Specifies an ARN role. Available since 1.1.0.
aws-region | String | Specifies an AWS region. Available since 1.1.0.
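The sketch below illustrates how such parameters might be passed in a SET-PARAMS request via Vendor-Specific-Parameters. The channel identifier, message length, profile name and the ARN role value are placeholders for illustration only, not values from a real deployment.
C->S:
MRCP/2.0 248 SET-PARAMS 1
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Vendor-Specific-Parameters: aws-credentials-provider=sts;aws-credentials-profile=dev;aws-credentials-duration=60;aws-arn-role=arn:aws:iam::123456789012:role/ExampleRole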
Recognition language can be specified by the client per MRCP session by means of the header field Speech-Language set in a SET-PARAMS or RECOGNIZE request. Otherwise, the parameter language set in the configuration file umstranscribe.xml is used. The parameter defaults to en-US.
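For instance, the client may request a different language for a single RECOGNIZE request as sketched below, assuming the language (here fr-FR) is among those supported by the service; the channel identifier and message length are illustrative.
C->S:
MRCP/2.0 230 RECOGNIZE 1
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Content-Type: text/uri-list
Speech-Language: fr-FR
Start-Input-Timers: true
Content-Length: 25

builtin:speech/transcribe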
Sampling rate is determined based on the SDP negotiation. Refer to the configuration guide of the UniMRCP server on how to specify supported encodings and sampling rates to be used in communication between the client and server.
The native sampling rate with the linear16 audio encoding is used to stream audio data to the service.
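For reference, the list of codecs offered in SDP negotiation is defined in the UniMRCP server configuration (unimrcpserver.xml) and typically looks similar to the sketch below; the exact element location and values depend on the server setup and are shown here only as an assumption, not as part of this plugin's configuration.
<!-- unimrcpserver.xml (illustrative): codecs offered in SDP negotiation -->
<codecs>PCMU PCMA L16/96/8000 telephone-event/101/8000</codecs>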
While the default parameters specified for the speech input detector are sufficient for general use, various parameters can be adjusted to better suit a particular requirement.
· speech-start-timeout
This parameter is used to trigger the start of speech input. The shorter the timeout, the sooner a START-OF-INPUT event is delivered to the client. However, a short timeout may also lead to a false positive.
· speech-complete-timeout
This parameter is used to trigger the end of speech input. The shorter the timeout, the shorter the response time. However, a short timeout may also lead to a false positive.
· vad-mode
This parameter is used to specify an operating mode of the Voice Activity Detector (VAD) within an integer range of [0 … 3]. A higher mode is more aggressive and, as a result, is more restrictive in reporting speech. The parameter can be overridden per MRCP session by setting the header field Sensitivity-Level in a SET-PARAMS or RECOGNIZE request. The following table shows how the Sensitivity-Level is mapped to the vad-mode.
Sensitivity-Level | Vad-Mode
[0.00 ... 0.25) | 0
[0.25 ... 0.50) | 1
[0.50 ... 0.75) | 2
[0.75 ... 1.00] | 3
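A minimal sketch of how these parameters might be set in the <speech-dtmf-input-detector> element of umstranscribe.xml is shown below; the attribute values are illustrative, not recommended defaults. The DTMF and timeout parameters described in the following sections belong to the same element.
<speech-dtmf-input-detector
  vad-mode="2"
  speech-start-timeout="300"
  speech-complete-timeout="1000" />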
While the default parameters specified for the DTMF input detector are sufficient for general use, various parameters can be adjusted to better suit a particular requirement.
· dtmf-interdigit-timeout
This parameter is used to set an inter-digit timeout on DTMF input. The parameter can be overridden per MRCP session by setting the header field DTMF-Interdigit-Timeout in a SET-PARAMS or RECOGNIZE request.
· dtmf-term-timeout
This parameter is used to set a termination timeout on DTMF input and is in effect when dtmf-term-char is set and there is a match for an input grammar. The parameter can be overridden per MRCP session by setting the header field DTMF-Term-Timeout in a SET-PARAMS or RECOGNIZE request.
· dtmf-term-char
This parameter is used to set a character terminating DTMF input. The parameter can be overridden per MRCP session by setting the header field DTMF-Term-Char in a SET-PARAMS or RECOGNIZE request.
· noinput-timeout
This parameter is used to trigger a no-input event. The parameter can be overridden per MRCP session by setting the header field No-Input-Timeout in a SET-PARAMS or RECOGNIZE request.
· input-timeout
This parameter is used to limit input (recognition) time. The parameter can be overridden per MRCP session by setting the header field Recognition-Timeout in a SET-PARAMS or RECOGNIZE request.
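The RECOGNIZE request below sketches how a client might override these timeouts and DTMF settings per request; the channel identifier, message length, and parameter values are illustrative.
C->S:
MRCP/2.0 283 RECOGNIZE 1
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Content-Type: text/uri-list
No-Input-Timeout: 5000
Recognition-Timeout: 10000
Dtmf-Interdigit-Timeout: 3000
Dtmf-Term-Char: #
Start-Input-Timers: true
Content-Length: 45

builtin:dtmf/digits
builtin:speech/transcribe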
The following parameters can optionally be specified by the MRCP client in SET-PARAMS, DEFINE-GRAMMAR and RECOGNIZE requests via the MRCP header field Vendor-Specific-Parameters.
Name | Unit | Description
start-of-input | String | Specifies the source of the start-of-input event sent to the client (use "service-originated" for an event originated based on the first-received interim result and "internal" for an event determined by the plugin).
alternatives-below-threshold | Boolean | Specifies whether to return speech recognition result alternatives with a confidence score below the confidence threshold.
single-utterance | Boolean | Specifies whether to detect a single spoken utterance or perform continuous recognition.
speech-start-timeout | Time interval [msec] | Specifies how long to wait in transition mode before triggering a start of speech input event.
interim-result-timeout | Time interval [msec] | Specifies a timeout between interim results containing transcribed speech. If the timeout elapses, input is considered complete.
All the vendor-specific parameters can also be specified at the grammar-level via a built-in or SRGS XML grammar.
The following example demonstrates the use of a built-in grammar with the vendor-specific parameters alternatives-below-threshold and speech-start-timeout set to true and 100, respectively.
builtin:speech/transcribe?alternatives-below-threshold=true;speech-start-timeout=100
The following example demonstrates the use of an SRGS XML grammar with the vendor-specific parameters alternatives-below-threshold and speech-start-timeout set to true and 100, respectively.
<grammar mode="voice" root="transcribe" version="1.0" xml:lang="en-US" xmlns="http://www.w3.org/2001/06/grammar">
  <meta name="scope" content="builtin"/>
  <meta name="alternatives-below-threshold" content="true"/>
  <meta name="speech-start-timeout" content="100"/>
  <rule id="transcribe">
    <one-of><item>blank</item></one-of>
  </rule>
</grammar>
Saving of utterances is not required for regular operation and is disabled by default. However, enabling this functionality makes it possible to save utterances sent to the service and listen to them later offline.
The relevant settings can be specified via the element utterance-manager.
· save-waveforms
Utterances can optionally be recorded and stored if the configuration parameter save-waveforms is set to true. The parameter can be overridden per MRCP session by setting the header field Save-Waveform in a SET-PARAMS or RECOGNIZE request.
· purge-existing
This parameter specifies whether to delete existing waveforms on start-up.
· max-file-age
This parameter specifies a time interval in minutes after expiration of which a waveform is deleted. If set to 0, there is no expiration time specified.
· max-file-count
This parameter specifies the maximum number of waveforms to store. If the specified number is reached, the oldest waveform is deleted. If set to 0, there is no limit specified.
· waveform-base-uri
This parameter specifies the base URI used to compose an absolute waveform URI returned in the header field Waveform-Uri in response to a RECOGNIZE request.
· waveform-folder
This parameter specifies a path to the directory used to store waveforms in. The directory defaults to ${UniMRCPInstallDir}/var.
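For instance, with the waveform-base-uri value from the utterance-manager example above, the absolute URI returned in the Waveform-Uri header field might be composed as follows; the file name, size and duration are illustrative.
waveform-base-uri: http://localhost/utterances/
stored file name:  utter-6e1a2e4e54ae11e7-1.wav
Waveform-Uri: <http://localhost/utterances/utter-6e1a2e4e54ae11e7-1.wav>;size=20480;duration=1280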
Producing recognition details records (RDR) is not required for regular operation and is disabled by default. However, enabling this functionality makes it possible to store details of each recognition attempt in a separate file and analyze them later offline. The RDRs are stored in the JSON format.
The relevant settings can be specified via the element rdr-manager.
· save-records
This parameter specifies whether to save recognition details records or not.
· purge-existing
This parameter specifies whether to delete existing records on start-up.
· max-file-age
This parameter specifies a time interval in minutes after expiration of which a record is deleted. If set to 0, there is no expiration time specified.
· max-file-count
This parameter specifies the maximum number of records to store. If the specified number is reached, the oldest record is deleted. If set to 0, there is no limit specified.
· record-folder
This parameter specifies a path to the directory used to store records in. The directory defaults to ${UniMRCPInstallDir}/var.
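For example, the rdr-manager element could be configured as sketched below to enable saving of records to a custom folder; the age, count, and folder values are illustrative.
<rdr-manager save-records="true" purge-existing="false" max-file-age="1440" max-file-count="1000" record-folder="/opt/unimrcp/var/rdr" />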
A pre-set built-in speech grammar can be referenced by the MRCP client in a RECOGNIZE request as follows:
builtin:speech/transcribe
Pre-set built-in DTMF grammars can be referenced by the MRCP client in a RECOGNIZE request as follows:
builtin:dtmf/$id
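For example, the digits grammar can be referenced directly, optionally with a length constraint, as also used in the recognition examples later in this guide:
builtin:dtmf/digits
builtin:dtmf/digits?length=4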
Results received from the service are transformed to a certain data structure and sent to the MRCP client in a RECOGNITION-COMPLETE event. The way results are composed can be adjusted via the <results> element in the configuration file umstranscribe.xml.
If the format attribute is set to standard, which is the default setting, then the header field Content-Type is set to application/x-nlsml and the body of a RECOGNITION-COMPLETE event is set to an NLSML representation of received results.
If the format attribute is set to json, then the header field Content-Type is set to application/json and the body of a RECOGNITION-COMPLETE event is set to a JSON representation of received results.
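For instance, to have results delivered as JSON, the results element described earlier could be configured as follows; the indent value is illustrative.
<results format="json" indent="2" confidence-format="auto" />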
The number of in-use and total licensed channels can be monitored in several alternate ways. There is a set of actions which can take place on certain events. The behavior is configurable via the element monitoring-agent, which contains two event handlers: usage-change-handler and usage-refresh-handler.
While the usage-change-handler is invoked on every acquisition and release of a licensed channel, the usage-refresh-handler is invoked periodically on expiration of a timeout specified by the attribute refresh-period.
The following actions can be specified for either of the two handlers.
The action log-usage logs the following data in the order specified.
· The number of currently in-use channels.
· The maximum number of channels used concurrently.
· The total number of licensed channels.
The following is a sample log statement, indicating 0 in-use, 0 max-used and 2 total channels.
[NOTICE] Transcribe Usage: 0/0/2
The action update-usage writes the following data to a status file umstranscribe-usage.status, located by default in the directory ${UniMRCPInstallDir}/var/status.
· The number of currently in-use channels.
· The maximum number of channels used concurrently.
· The total number of licensed channels.
· The current status of the license permit.
· The license server alarm. Set to on, if the license server is not available for more than one hour; otherwise, set to off. This parameter is maintained only if the license server is used.
The following is a sample content of the status file.
in-use channels: 0
max used channels: 0
total channels: 2
license permit: true
licserver alarm: off
The action dump-channels writes the identifiers of in-use channels to a status file umstranscribe-channels.status, located by default in the directory ${UniMRCPInstallDir}/var/status.
This example demonstrates how to reference a built-in speech transcription grammar in a RECOGNIZE request.
C->S:
MRCP/2.0 336 RECOGNIZE 1
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Content-Id: request1@form-level
Content-Type: text/uri-list
Cancel-If-Queue: false
No-Input-Timeout: 5000
Recognition-Timeout: 10000
Start-Input-Timers: true
Confidence-Threshold: 0.87
Save-Waveform: true
Content-Length: 25

builtin:speech/transcribe
S->C:
MRCP/2.0 83 1 200 IN-PROGRESS
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
S->C:
MRCP/2.0 115 START-OF-INPUT 1 IN-PROGRESS
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Input-Type: speech
S->C:
MRCP/2.0 498 RECOGNITION-COMPLETE 1 COMPLETE
Channel-Identifier: 6e1a2e4e54ae11e7@speechrecog
Completion-Cause: 000 success
Waveform-Uri: <http://localhost/utterances/utter-6e1a2e4e54ae11e7-1.wav>;size=20480;duration=1280
Content-Type: application/x-nlsml
Content-Length: 214

<?xml version="1.0"?>
<result>
  <interpretation grammar="builtin:speech/transcribe" confidence="1.00">
    <instance>Book a room</instance>
    <input mode="speech">Book a room</input>
  </interpretation>
</result>
This example demonstrates how to reference a built-in DTMF grammar in a RECOGNIZE request.
C->S:
MRCP/2.0 266 RECOGNIZE 1
Channel-Identifier: d26bef74091a174c@speechrecog
Content-Type: text/uri-list
Cancel-If-Queue: false
Start-Input-Timers: true
Confidence-Threshold: 0.7
Speech-Language: en-US
Dtmf-Term-Char: #
Content-Length: 19

builtin:dtmf/digits
S->C:
MRCP/2.0 83 1 200 IN-PROGRESS
Channel-Identifier: d26bef74091a174c@speechrecog
S->C:
MRCP/2.0 113 START-OF-INPUT 1 IN-PROGRESS
Channel-Identifier: d26bef74091a174c@speechrecog
Input-Type: dtmf
S->C:
MRCP/2.0 382 RECOGNITION-COMPLETE 1 COMPLETE
Channel-Identifier: d26bef74091a174c@speechrecog
Completion-Cause: 000 success
Content-Type: application/x-nlsml
Content-Length: 197

<?xml version="1.0"?>
<result>
  <interpretation grammar="builtin:dtmf/digits" confidence="1.00">
    <input mode="dtmf">1 2 3 4</input>
    <instance>1234</instance>
  </interpretation>
</result>
This example demonstrates how to perform recognition by activating both speech and DTMF grammars. In this example, the user is expected to input a 4-digit pin.
C->S:
MRCP/2.0 275 RECOGNIZE 1
Channel-Identifier: 6ae0f23e1b1e3d42@speechrecog
Content-Type: text/uri-list
Cancel-If-Queue: false
Start-Input-Timers: true
Confidence-Threshold: 0.7
Speech-Language: en-US
Content-Length: 47

builtin:dtmf/digits?length=4
builtin:speech/transcribe
S->C:
MRCP/2.0 83 2 200 IN-PROGRESS
Channel-Identifier: 6ae0f23e1b1e3d42@speechrecog
S->C:
MRCP/2.0 115 START-OF-INPUT 2 IN-PROGRESS
Channel-Identifier: 6ae0f23e1b1e3d42@speechrecog
Input-Type: speech
S->C:
MRCP/2.0 399 RECOGNITION-COMPLETE 2 COMPLETE
Channel-Identifier: 6ae0f23e1b1e3d42@speechrecog
Completion-Cause: 000 success
Content-Type: application/x-nlsml
Content-Length: 214

<?xml version="1.0"?>
<result>
  <interpretation grammar="builtin:speech/transcribe" confidence="1.00">
    <instance>one two three four</instance>
    <input mode="speech">one two three four</input>
  </interpretation>
</result>
The following sequence diagram outlines common interactions between all the main components involved in a typical recognition session performed over MRCPv2.
· Using Amazon Transcribe Streaming with WebSockets