Documentation ¶
Index ¶
- type Alternative
- type AudioEvent
- type AudioStream
- type AudioStreamMemberAudioEvent
- type AudioStreamMemberConfigurationEvent
- type BadRequestException
- type CallAnalyticsEntity
- type CallAnalyticsItem
- type CallAnalyticsLanguageCode
- type CallAnalyticsTranscriptResultStream
- type CallAnalyticsTranscriptResultStreamMemberCategoryEvent
- type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent
- type CategoryEvent
- type ChannelDefinition
- type CharacterOffsets
- type ClinicalNoteGenerationResult
- type ClinicalNoteGenerationSettings
- type ClinicalNoteGenerationStatus
- type ConfigurationEvent
- type ConflictException
- type ContentIdentificationType
- type ContentRedactionOutput
- type ContentRedactionType
- type Entity
- type InternalFailureException
- type IssueDetected
- type Item
- type ItemType
- type LanguageCode
- type LanguageWithScore
- type LimitExceededException
- type MediaEncoding
- type MedicalAlternative
- type MedicalContentIdentificationType
- type MedicalEntity
- type MedicalItem
- type MedicalResult
- type MedicalScribeAudioEvent
- type MedicalScribeChannelDefinition
- type MedicalScribeConfigurationEvent
- type MedicalScribeEncryptionSettings
- type MedicalScribeInputStream
- type MedicalScribeInputStreamMemberAudioEvent
- type MedicalScribeInputStreamMemberConfigurationEvent
- type MedicalScribeInputStreamMemberSessionControlEvent
- type MedicalScribeLanguageCode
- type MedicalScribeMediaEncoding
- type MedicalScribeParticipantRole
- type MedicalScribePostStreamAnalyticsResult
- type MedicalScribePostStreamAnalyticsSettings
- type MedicalScribeResultStream
- type MedicalScribeResultStreamMemberTranscriptEvent
- type MedicalScribeSessionControlEvent
- type MedicalScribeSessionControlEventType
- type MedicalScribeStreamDetails
- type MedicalScribeStreamStatus
- type MedicalScribeTranscriptEvent
- type MedicalScribeTranscriptItem
- type MedicalScribeTranscriptItemType
- type MedicalScribeTranscriptSegment
- type MedicalScribeVocabularyFilterMethod
- type MedicalTranscript
- type MedicalTranscriptEvent
- type MedicalTranscriptResultStream
- type MedicalTranscriptResultStreamMemberTranscriptEvent
- type PartialResultsStability
- type ParticipantRole
- type PointsOfInterest
- type PostCallAnalyticsSettings
- type ResourceNotFoundException
- type Result
- type Sentiment
- type ServiceUnavailableException
- type Specialty
- type TimestampRange
- type Transcript
- type TranscriptEvent
- type TranscriptResultStream
- type TranscriptResultStreamMemberTranscriptEvent
- type Type
- type UnknownUnionMember
- type UtteranceEvent
- type VocabularyFilterMethod
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type Alternative ¶
type Alternative struct { // Contains entities identified as personally identifiable information (PII) in // your transcription output. Entities []Entity // Contains words, phrases, or punctuation marks in your transcription output. Items []Item // Contains transcribed text. Transcript *string // contains filtered or unexported fields }
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of Items, Entities, or Transcript.
type AudioEvent ¶
type AudioEvent struct { // An audio blob containing the next segment of audio from your application, with // a maximum duration of 1 second. The maximum size in bytes varies based on audio // properties. // // Find recommended size in [Transcribing streaming best practices]. // // Size calculation: Duration (s) * Sample Rate (Hz) * Number of Channels * 2 // (Bytes per Sample) // // For example, a 1-second chunk of 16 kHz, 2-channel, 16-bit audio would be 1 * // 16000 * 2 * 2 = 64000 bytes . // // For 8 kHz, 1-channel, 16-bit audio, a 1-second chunk would be 1 * 8000 * 1 * 2 // = 16000 bytes . // // [Transcribing streaming best practices]: https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html#best-practices AudioChunk []byte // contains filtered or unexported fields }
A wrapper for your audio chunks. Your audio stream consists of one or more audio events, which consist of one or more audio chunks.
For more information, see Event stream encoding.
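The sketch below is not part of the generated documentation; it applies the size calculation above and wraps a buffer in an AudioEvent. The sample rate, channel count, and empty buffer are placeholder assumptions — a real application would fill the chunk with PCM audio from its source.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    // Chunk size = duration (s) * sample rate (Hz) * channels * 2 bytes per sample.
    const (
        durationSeconds = 1
        sampleRateHertz = 16000
        channels        = 2
        bytesPerSample  = 2
    )
    chunkSize := durationSeconds * sampleRateHertz * channels * bytesPerSample // 64000 bytes

    // Placeholder buffer; in practice this holds audio read from your source.
    chunk := make([]byte, chunkSize)

    event := types.AudioEvent{AudioChunk: chunk}
    fmt.Printf("audio event carries %d bytes\n", len(event.AudioChunk))
}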
type AudioStream ¶
type AudioStream interface {
// contains filtered or unexported methods
}
An encoded stream of audio blobs. Audio streams are encoded as either HTTP/2 or WebSocket data frames.
For more information, see Transcribing streaming audio.
The following types satisfy this interface:
AudioStreamMemberAudioEvent
AudioStreamMemberConfigurationEvent
Example (OutputUsage) ¶
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    var union types.AudioStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.AudioStreamMemberAudioEvent:
        _ = v.Value // Value is types.AudioEvent

    case *types.AudioStreamMemberConfigurationEvent:
        _ = v.Value // Value is types.ConfigurationEvent

    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)

    default:
        fmt.Println("union is nil or unknown type")

    }
}
Output:
type AudioStreamMemberAudioEvent ¶
type AudioStreamMemberAudioEvent struct {
    Value AudioEvent
    // contains filtered or unexported fields
}
A blob of audio from your application. Your audio stream consists of one or more audio events.
For more information, see Event stream encoding.
type AudioStreamMemberConfigurationEvent ¶ added in v1.8.0
type AudioStreamMemberConfigurationEvent struct {
    Value ConfigurationEvent
    // contains filtered or unexported fields
}
Contains audio channel definitions and post-call analytics settings.
type BadRequestException ¶
type BadRequestException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
One or more arguments to the StartStreamTranscription, StartMedicalStreamTranscription, or StartCallAnalyticsStreamTranscription operation were not valid. For example, MediaEncoding or LanguageCode used unsupported values. Check the specified parameters and try your request again.
func (*BadRequestException) Error ¶
func (e *BadRequestException) Error() string
func (*BadRequestException) ErrorCode ¶
func (e *BadRequestException) ErrorCode() string
func (*BadRequestException) ErrorFault ¶
func (e *BadRequestException) ErrorFault() smithy.ErrorFault
func (*BadRequestException) ErrorMessage ¶
func (e *BadRequestException) ErrorMessage() string
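A hedged sketch of how a caller might detect this error with errors.As. The handleStartError helper and the constructed error value are hypothetical; in practice err comes from a Start*StreamTranscription call on the transcribestreaming client.

package main

import (
    "errors"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// handleStartError is a hypothetical helper; err would come from starting a stream.
func handleStartError(err error) {
    var badReq *types.BadRequestException
    if errors.As(err, &badReq) {
        // One or more request parameters were invalid; fix them before retrying.
        fmt.Println("bad request:", badReq.ErrorMessage())
        return
    }
    var limit *types.LimitExceededException
    if errors.As(err, &limit) {
        fmt.Println("limit exceeded:", limit.ErrorMessage())
        return
    }
    fmt.Println("other error:", err)
}

func main() {
    handleStartError(&types.BadRequestException{Message: aws.String("unsupported LanguageCode")})
}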
type CallAnalyticsEntity ¶ added in v1.8.0
type CallAnalyticsEntity struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the identified entity. BeginOffsetMillis *int64 // The category of information identified. For example, PII . Category *string // The confidence score associated with the identification of an entity in your // transcript. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified entity correctly matches the entity spoken in // your media. Confidence *float64 // The word or words that represent the identified entity. Content *string // The time, in milliseconds, from the beginning of the audio stream to the end of // the identified entity. EndOffsetMillis *int64 // The type of PII identified. For example, NAME or CREDIT_DEBIT_NUMBER . Type *string // contains filtered or unexported fields }
Contains entities identified as personally identifiable information (PII) in your transcription output, along with various associated attributes. Examples include category, confidence score, content, type, and start and end times.
type CallAnalyticsItem ¶ added in v1.8.0
type CallAnalyticsItem struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the identified item. BeginOffsetMillis *int64 // The confidence score associated with a word or phrase in your transcript. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified item correctly matches the item spoken in your // media. Confidence *float64 // The word or punctuation that was transcribed. Content *string // The time, in milliseconds, from the beginning of the audio stream to the end of // the identified item. EndOffsetMillis *int64 // If partial result stabilization is enabled, Stable indicates whether the // specified item is stable ( true ) or if it may change when the segment is // complete ( false ). Stable *bool // The type of item identified. Options are: PRONUNCIATION (spoken words) and // PUNCTUATION . Type ItemType // Indicates whether the specified item matches a word in the vocabulary filter // included in your Call Analytics request. If true , there is a vocabulary filter // match. VocabularyFilterMatch bool // contains filtered or unexported fields }
A word, phrase, or punctuation mark in your Call Analytics transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type CallAnalyticsLanguageCode ¶ added in v1.8.0
type CallAnalyticsLanguageCode string
const (
    CallAnalyticsLanguageCodeEnUs CallAnalyticsLanguageCode = "en-US"
    CallAnalyticsLanguageCodeEnGb CallAnalyticsLanguageCode = "en-GB"
    CallAnalyticsLanguageCodeEsUs CallAnalyticsLanguageCode = "es-US"
    CallAnalyticsLanguageCodeFrCa CallAnalyticsLanguageCode = "fr-CA"
    CallAnalyticsLanguageCodeFrFr CallAnalyticsLanguageCode = "fr-FR"
    CallAnalyticsLanguageCodeEnAu CallAnalyticsLanguageCode = "en-AU"
    CallAnalyticsLanguageCodeItIt CallAnalyticsLanguageCode = "it-IT"
    CallAnalyticsLanguageCodeDeDe CallAnalyticsLanguageCode = "de-DE"
    CallAnalyticsLanguageCodePtBr CallAnalyticsLanguageCode = "pt-BR"
)
Enum values for CallAnalyticsLanguageCode
func (CallAnalyticsLanguageCode) Values ¶ added in v1.8.0
func (CallAnalyticsLanguageCode) Values() []CallAnalyticsLanguageCode
Values returns all known values for CallAnalyticsLanguageCode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
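For illustration, a minimal sketch that iterates the known enum values; because the list can grow with SDK updates, treat the output as a snapshot of this client version. Values is defined on the enum type itself, so a zero value works as a receiver.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    // Print every language code known to this version of the client.
    for _, code := range types.CallAnalyticsLanguageCode("").Values() {
        fmt.Println(code)
    }
}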
type CallAnalyticsTranscriptResultStream ¶ added in v1.8.0
type CallAnalyticsTranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your real-time Call Analytics session. These details are provided in the UtteranceEvent and CategoryEvent objects.
The following types satisfy this interface:
CallAnalyticsTranscriptResultStreamMemberCategoryEvent
CallAnalyticsTranscriptResultStreamMemberUtteranceEvent
Example (OutputUsage) ¶
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    var union types.CallAnalyticsTranscriptResultStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.CallAnalyticsTranscriptResultStreamMemberCategoryEvent:
        _ = v.Value // Value is types.CategoryEvent

    case *types.CallAnalyticsTranscriptResultStreamMemberUtteranceEvent:
        _ = v.Value // Value is types.UtteranceEvent

    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)

    default:
        fmt.Println("union is nil or unknown type")

    }
}
Output:
type CallAnalyticsTranscriptResultStreamMemberCategoryEvent ¶ added in v1.8.0
type CallAnalyticsTranscriptResultStreamMemberCategoryEvent struct {
    Value CategoryEvent
    // contains filtered or unexported fields
}
Provides information on matched categories that were used to generate real-time supervisor alerts.
type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent ¶ added in v1.8.0
type CallAnalyticsTranscriptResultStreamMemberUtteranceEvent struct {
    Value UtteranceEvent
    // contains filtered or unexported fields
}
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to channel definitions, partial result stabilization, sentiment, issue detection, and other transcription-related data.
type CategoryEvent ¶ added in v1.8.0
type CategoryEvent struct { // Lists the categories that were matched in your audio segment. MatchedCategories []string // Contains information about the matched categories, including category names and // timestamps. MatchedDetails map[string]PointsOfInterest // contains filtered or unexported fields }
Provides information on any TranscriptFilterType categories that matched your transcription output. Matches are identified for each segment upon completion of that segment.
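A minimal sketch of reading this event; logMatches is a hypothetical helper. The category names in MatchedCategories double as keys into MatchedDetails.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// logMatches reports which categories matched a completed segment.
func logMatches(ev types.CategoryEvent) {
    for _, name := range ev.MatchedCategories {
        poi := ev.MatchedDetails[name]
        fmt.Printf("category %q matched %d timestamp range(s)\n", name, len(poi.TimestampRanges))
    }
}

func main() {
    logMatches(types.CategoryEvent{
        MatchedCategories: []string{"escalation"},
        MatchedDetails: map[string]types.PointsOfInterest{
            "escalation": {TimestampRanges: []types.TimestampRange{{}}},
        },
    })
}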
type ChannelDefinition ¶ added in v1.8.0
type ChannelDefinition struct { // Specify the audio channel you want to define. // // This member is required. ChannelId int32 // Specify the speaker you want to define. Omitting this parameter is equivalent // to specifying both participants. // // This member is required. ParticipantRole ParticipantRole // contains filtered or unexported fields }
Makes it possible to specify which speaker is on which audio channel. For example, if your agent is the first participant to speak, you would set ChannelId to 0 (to indicate the first channel) and ParticipantRole to AGENT (to indicate that it's the agent speaking).
type CharacterOffsets ¶ added in v1.8.0
type CharacterOffsets struct { // Provides the character count of the first character where a match is // identified. For example, the first character associated with an issue or a // category match in a segment transcript. Begin *int32 // Provides the character count of the last character where a match is identified. // For example, the last character associated with an issue or a category match in // a segment transcript. End *int32 // contains filtered or unexported fields }
Provides the location, using character count, in your transcript where a match is identified. For example, the location of an issue or a category match within a segment.
type ClinicalNoteGenerationResult ¶ added in v1.23.0
type ClinicalNoteGenerationResult struct { // Holds the Amazon S3 URI for the output Clinical Note. ClinicalNoteOutputLocation *string // If ClinicalNoteGenerationResult is FAILED , information about why it failed. FailureReason *string // The status of the clinical note generation. // // Possible Values: // // - IN_PROGRESS // // - FAILED // // - COMPLETED // // After audio streaming finishes, and you send a MedicalScribeSessionControlEvent // event (with END_OF_SESSION as the Type), the status is set to IN_PROGRESS . If // the status is COMPLETED , the analytics completed successfully, and you can find // the results at the locations specified in ClinicalNoteOutputLocation and // TranscriptOutputLocation . If the status is FAILED , FailureReason provides // details about the failure. Status ClinicalNoteGenerationStatus // Holds the Amazon S3 URI for the output Transcript. TranscriptOutputLocation *string // contains filtered or unexported fields }
The details for clinical note generation, including the status, the output locations for the clinical note and aggregated transcript if the analytics completed, or the failure reason if the analytics failed.
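A hedged sketch of interpreting this result. The report helper and the literal values are hypothetical; in practice the result is read from MedicalScribeStreamDetails.PostStreamAnalyticsResult.ClinicalNoteGenerationResult.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// report is a hypothetical helper that acts on the generation status.
func report(r *types.ClinicalNoteGenerationResult) {
    switch r.Status {
    case types.ClinicalNoteGenerationStatusCompleted:
        fmt.Println("clinical note:", aws.ToString(r.ClinicalNoteOutputLocation))
        fmt.Println("transcript:", aws.ToString(r.TranscriptOutputLocation))
    case types.ClinicalNoteGenerationStatusFailed:
        fmt.Println("analytics failed:", aws.ToString(r.FailureReason))
    case types.ClinicalNoteGenerationStatusInProgress:
        fmt.Println("analytics still in progress")
    }
}

func main() {
    report(&types.ClinicalNoteGenerationResult{Status: types.ClinicalNoteGenerationStatusInProgress})
}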
type ClinicalNoteGenerationSettings ¶ added in v1.23.0
type ClinicalNoteGenerationSettings struct { // The name of the Amazon S3 bucket where you want the output of Amazon Web // Services HealthScribe post-stream analytics stored. Don't include the S3:// // prefix of the specified bucket. // // HealthScribe outputs transcript and clinical note files under the prefix: // S3://$output-bucket-name/healthscribe-streaming/session-id/post-stream-analytics/clinical-notes // // The role ResourceAccessRoleArn specified in the MedicalScribeConfigurationEvent // must have permission to use the specified location. You can change Amazon S3 // permissions using the [Amazon Web Services Management Console]. See also [Permissions Required for IAM User Roles] . // // [Amazon Web Services Management Console]: https://console.aws.amazon.com/s3 // [Permissions Required for IAM User Roles]: https://docs.aws.amazon.com/transcribe/latest/dg/security_iam_id-based-policy-examples.html#auth-role-iam-user // // This member is required. OutputBucketName *string // contains filtered or unexported fields }
The output configuration for aggregated transcript and clinical note generation.
type ClinicalNoteGenerationStatus ¶ added in v1.23.0
type ClinicalNoteGenerationStatus string
const (
    ClinicalNoteGenerationStatusInProgress ClinicalNoteGenerationStatus = "IN_PROGRESS"
    ClinicalNoteGenerationStatusFailed     ClinicalNoteGenerationStatus = "FAILED"
    ClinicalNoteGenerationStatusCompleted  ClinicalNoteGenerationStatus = "COMPLETED"
)
Enum values for ClinicalNoteGenerationStatus
func (ClinicalNoteGenerationStatus) Values ¶ added in v1.23.0
func (ClinicalNoteGenerationStatus) Values() []ClinicalNoteGenerationStatus
Values returns all known values for ClinicalNoteGenerationStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ConfigurationEvent ¶ added in v1.8.0
type ConfigurationEvent struct { // Indicates which speaker is on which audio channel. ChannelDefinitions []ChannelDefinition // Provides additional optional settings for your Call Analytics post-call // request, including encryption and output locations for your redacted transcript. // // PostCallAnalyticsSettings provides you with the same insights as a Call // Analytics post-call transcription. Refer to [Post-call analytics]for more information on this // feature. // // [Post-call analytics]: https://docs.aws.amazon.com/transcribe/latest/dg/tca-post-call.html PostCallAnalyticsSettings *PostCallAnalyticsSettings // contains filtered or unexported fields }
Allows you to set audio channel definitions and post-call analytics settings.
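A minimal sketch constructing a ConfigurationEvent. The role ARN and S3 location are placeholders borrowed from the field documentation on this page, and sending the event as the first member of the audio stream reflects typical usage rather than anything shown here.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    // Placeholder values; substitute your own role ARN and output location.
    cfg := types.ConfigurationEvent{
        ChannelDefinitions: []types.ChannelDefinition{
            {ChannelId: 0, ParticipantRole: types.ParticipantRoleAgent},
            {ChannelId: 1, ParticipantRole: types.ParticipantRoleCustomer},
        },
        PostCallAnalyticsSettings: &types.PostCallAnalyticsSettings{
            DataAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/Admin"),
            OutputLocation:    aws.String("s3://DOC-EXAMPLE-BUCKET/my-output-folder/"),
        },
    }

    // A configuration event is typically the first member sent on the audio stream.
    first := &types.AudioStreamMemberConfigurationEvent{Value: cfg}
    fmt.Printf("%T with %d channel definitions\n", first, len(first.Value.ChannelDefinitions))
}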
type ConflictException ¶
type ConflictException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
A new stream started with the same session ID. The current stream has been terminated.
func (*ConflictException) Error ¶
func (e *ConflictException) Error() string
func (*ConflictException) ErrorCode ¶
func (e *ConflictException) ErrorCode() string
func (*ConflictException) ErrorFault ¶
func (e *ConflictException) ErrorFault() smithy.ErrorFault
func (*ConflictException) ErrorMessage ¶
func (e *ConflictException) ErrorMessage() string
type ContentIdentificationType ¶
type ContentIdentificationType string
const (
ContentIdentificationTypePii ContentIdentificationType = "PII"
)
Enum values for ContentIdentificationType
func (ContentIdentificationType) Values ¶
func (ContentIdentificationType) Values() []ContentIdentificationType
Values returns all known values for ContentIdentificationType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ContentRedactionOutput ¶ added in v1.8.0
type ContentRedactionOutput string
const (
    ContentRedactionOutputRedacted              ContentRedactionOutput = "redacted"
    ContentRedactionOutputRedactedAndUnredacted ContentRedactionOutput = "redacted_and_unredacted"
)
Enum values for ContentRedactionOutput
func (ContentRedactionOutput) Values ¶ added in v1.8.0
func (ContentRedactionOutput) Values() []ContentRedactionOutput
Values returns all known values for ContentRedactionOutput. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ContentRedactionType ¶
type ContentRedactionType string
const (
ContentRedactionTypePii ContentRedactionType = "PII"
)
Enum values for ContentRedactionType
func (ContentRedactionType) Values ¶
func (ContentRedactionType) Values() []ContentRedactionType
Values returns all known values for ContentRedactionType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type Entity ¶
type Entity struct { // The category of information identified. The only category is PII . Category *string // The confidence score associated with the identified PII entity in your audio. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified entity correctly matches the entity spoken in // your media. Confidence *float64 // The word or words identified as PII. Content *string // The end time, in milliseconds, of the utterance that was identified as PII. EndTime float64 // The start time, in milliseconds, of the utterance that was identified as PII. StartTime float64 // The type of PII identified. For example, NAME or CREDIT_DEBIT_NUMBER . Type *string // contains filtered or unexported fields }
Contains entities identified as personally identifiable information (PII) in your transcription output, along with various associated attributes. Examples include category, confidence score, type, stability score, and start and end times.
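A sketch of one possible use: collecting high-confidence PII entities from an Alternative for downstream redaction. The helper name, confidence threshold, and sample values are hypothetical.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// redactionCandidates returns PII entities whose confidence meets the threshold.
func redactionCandidates(alt types.Alternative, minConfidence float64) []types.Entity {
    var out []types.Entity
    for _, e := range alt.Entities {
        if e.Confidence != nil && *e.Confidence >= minConfidence {
            out = append(out, e)
        }
    }
    return out
}

func main() {
    alt := types.Alternative{
        Transcript: aws.String("My name is Jane."),
        Entities: []types.Entity{
            {Category: aws.String("PII"), Type: aws.String("NAME"), Content: aws.String("Jane"),
                Confidence: aws.Float64(0.97), StartTime: 1200, EndTime: 1500},
        },
    }
    for _, e := range redactionCandidates(alt, 0.9) {
        fmt.Printf("%s (%s) at %v-%v ms\n", aws.ToString(e.Content), aws.ToString(e.Type), e.StartTime, e.EndTime)
    }
}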
type InternalFailureException ¶
type InternalFailureException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
A problem occurred while processing the audio. Amazon Transcribe terminated processing.
func (*InternalFailureException) Error ¶
func (e *InternalFailureException) Error() string
func (*InternalFailureException) ErrorCode ¶
func (e *InternalFailureException) ErrorCode() string
func (*InternalFailureException) ErrorFault ¶
func (e *InternalFailureException) ErrorFault() smithy.ErrorFault
func (*InternalFailureException) ErrorMessage ¶
func (e *InternalFailureException) ErrorMessage() string
type IssueDetected ¶ added in v1.8.0
type IssueDetected struct {

    // Provides the timestamps that identify when in an audio segment the specified
    // issue occurs.
    CharacterOffsets *CharacterOffsets
    // contains filtered or unexported fields
}
Lists the issues that were identified in your audio segment.
type Item ¶
type Item struct { // The confidence score associated with a word or phrase in your transcript. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified item correctly matches the item spoken in your // media. Confidence *float64 // The word or punctuation that was transcribed. Content *string // The end time, in milliseconds, of the transcribed item. EndTime float64 // If speaker partitioning is enabled, Speaker labels the speaker of the specified // item. Speaker *string // If partial result stabilization is enabled, Stable indicates whether the // specified item is stable ( true ) or if it may change when the segment is // complete ( false ). Stable *bool // The start time, in milliseconds, of the transcribed item. StartTime float64 // The type of item identified. Options are: PRONUNCIATION (spoken words) and // PUNCTUATION . Type ItemType // Indicates whether the specified item matches a word in the vocabulary filter // included in your request. If true , there is a vocabulary filter match. VocabularyFilterMatch bool // contains filtered or unexported fields }
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type ItemType ¶
type ItemType string
type LanguageCode ¶
type LanguageCode string
const (
    LanguageCodeEnUs LanguageCode = "en-US"
    LanguageCodeEnGb LanguageCode = "en-GB"
    LanguageCodeEsUs LanguageCode = "es-US"
    LanguageCodeFrCa LanguageCode = "fr-CA"
    LanguageCodeFrFr LanguageCode = "fr-FR"
    LanguageCodeEnAu LanguageCode = "en-AU"
    LanguageCodeItIt LanguageCode = "it-IT"
    LanguageCodeDeDe LanguageCode = "de-DE"
    LanguageCodePtBr LanguageCode = "pt-BR"
    LanguageCodeJaJp LanguageCode = "ja-JP"
    LanguageCodeKoKr LanguageCode = "ko-KR"
    LanguageCodeZhCn LanguageCode = "zh-CN"
    LanguageCodeThTh LanguageCode = "th-TH"
    LanguageCodeEsEs LanguageCode = "es-ES"
    LanguageCodeArSa LanguageCode = "ar-SA"
    LanguageCodePtPt LanguageCode = "pt-PT"
    LanguageCodeCaEs LanguageCode = "ca-ES"
    LanguageCodeArAe LanguageCode = "ar-AE"
    LanguageCodeHiIn LanguageCode = "hi-IN"
    LanguageCodeZhHk LanguageCode = "zh-HK"
    LanguageCodeNlNl LanguageCode = "nl-NL"
    LanguageCodeNoNo LanguageCode = "no-NO"
    LanguageCodeSvSe LanguageCode = "sv-SE"
    LanguageCodePlPl LanguageCode = "pl-PL"
    LanguageCodeFiFi LanguageCode = "fi-FI"
    LanguageCodeZhTw LanguageCode = "zh-TW"
    LanguageCodeEnIn LanguageCode = "en-IN"
    LanguageCodeEnIe LanguageCode = "en-IE"
    LanguageCodeEnNz LanguageCode = "en-NZ"
    LanguageCodeEnAb LanguageCode = "en-AB"
    LanguageCodeEnZa LanguageCode = "en-ZA"
    LanguageCodeEnWl LanguageCode = "en-WL"
    LanguageCodeDeCh LanguageCode = "de-CH"
    LanguageCodeAfZa LanguageCode = "af-ZA"
    LanguageCodeEuEs LanguageCode = "eu-ES"
    LanguageCodeHrHr LanguageCode = "hr-HR"
    LanguageCodeCsCz LanguageCode = "cs-CZ"
    LanguageCodeDaDk LanguageCode = "da-DK"
    LanguageCodeFaIr LanguageCode = "fa-IR"
    LanguageCodeGlEs LanguageCode = "gl-ES"
    LanguageCodeElGr LanguageCode = "el-GR"
    LanguageCodeHeIl LanguageCode = "he-IL"
    LanguageCodeIdId LanguageCode = "id-ID"
    LanguageCodeLvLv LanguageCode = "lv-LV"
    LanguageCodeMsMy LanguageCode = "ms-MY"
    LanguageCodeRoRo LanguageCode = "ro-RO"
    LanguageCodeRuRu LanguageCode = "ru-RU"
    LanguageCodeSrRs LanguageCode = "sr-RS"
    LanguageCodeSkSk LanguageCode = "sk-SK"
    LanguageCodeSoSo LanguageCode = "so-SO"
    LanguageCodeTlPh LanguageCode = "tl-PH"
    LanguageCodeUkUa LanguageCode = "uk-UA"
    LanguageCodeViVn LanguageCode = "vi-VN"
    LanguageCodeZuZa LanguageCode = "zu-ZA"
)
Enum values for LanguageCode
func (LanguageCode) Values ¶
func (LanguageCode) Values() []LanguageCode
Values returns all known values for LanguageCode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type LanguageWithScore ¶ added in v1.1.0
type LanguageWithScore struct { // The language code of the identified language. LanguageCode LanguageCode // The confidence score associated with the identified language code. Confidence // scores are values between zero and one; larger values indicate a higher // confidence in the identified language. Score float64 // contains filtered or unexported fields }
The language code that represents the language identified in your audio, including the associated confidence score. If you enabled channel identification in your request and each channel contained a different language, you will have more than one LanguageWithScore result.
type LimitExceededException ¶
type LimitExceededException struct {
    Message *string

    ErrorCodeOverride *string
    // contains filtered or unexported fields
}
Your client has exceeded one of the Amazon Transcribe limits. This is typically the audio length limit. Break your audio stream into smaller chunks and try your request again.
func (*LimitExceededException) Error ¶
func (e *LimitExceededException) Error() string
func (*LimitExceededException) ErrorCode ¶
func (e *LimitExceededException) ErrorCode() string
func (*LimitExceededException) ErrorFault ¶
func (e *LimitExceededException) ErrorFault() smithy.ErrorFault
func (*LimitExceededException) ErrorMessage ¶
func (e *LimitExceededException) ErrorMessage() string
type MediaEncoding ¶
type MediaEncoding string
const (
    MediaEncodingPcm     MediaEncoding = "pcm"
    MediaEncodingOggOpus MediaEncoding = "ogg-opus"
    MediaEncodingFlac    MediaEncoding = "flac"
)
Enum values for MediaEncoding
func (MediaEncoding) Values ¶
func (MediaEncoding) Values() []MediaEncoding
Values returns all known values for MediaEncoding. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalAlternative ¶
type MedicalAlternative struct { // Contains entities identified as personal health information (PHI) in your // transcription output. Entities []MedicalEntity // Contains words, phrases, or punctuation marks in your transcription output. Items []MedicalItem // Contains transcribed text. Transcript *string // contains filtered or unexported fields }
A list of possible alternative transcriptions for the input audio. Each alternative may contain one or more of Items, Entities, or Transcript.
type MedicalContentIdentificationType ¶
type MedicalContentIdentificationType string
const (
MedicalContentIdentificationTypePhi MedicalContentIdentificationType = "PHI"
)
Enum values for MedicalContentIdentificationType
func (MedicalContentIdentificationType) Values ¶
func (MedicalContentIdentificationType) Values() []MedicalContentIdentificationType
Values returns all known values for MedicalContentIdentificationType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalEntity ¶
type MedicalEntity struct { // The category of information identified. The only category is PHI . Category *string // The confidence score associated with the identified PHI entity in your audio. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified entity correctly matches the entity spoken in // your media. Confidence *float64 // The word or words identified as PHI. Content *string // The end time, in milliseconds, of the utterance that was identified as PHI. EndTime float64 // The start time, in milliseconds, of the utterance that was identified as PHI. StartTime float64 // contains filtered or unexported fields }
Contains entities identified as personal health information (PHI) in your transcription output, along with various associated attributes. Examples include category, confidence score, type, stability score, and start and end times.
type MedicalItem ¶
type MedicalItem struct { // The confidence score associated with a word or phrase in your transcript. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified item correctly matches the item spoken in your // media. Confidence *float64 // The word or punctuation that was transcribed. Content *string // The end time, in milliseconds, of the transcribed item. EndTime float64 // If speaker partitioning is enabled, Speaker labels the speaker of the specified // item. Speaker *string // The start time, in milliseconds, of the transcribed item. StartTime float64 // The type of item identified. Options are: PRONUNCIATION (spoken words) and // PUNCTUATION . Type ItemType // contains filtered or unexported fields }
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type MedicalResult ¶
type MedicalResult struct { // A list of possible alternative transcriptions for the input audio. Each // alternative may contain one or more of Items , Entities , or Transcript . Alternatives []MedicalAlternative // Indicates the channel identified for the Result . ChannelId *string // The end time, in milliseconds, of the Result . EndTime float64 // Indicates if the segment is complete. // // If IsPartial is true , the segment is not complete. If IsPartial is false , the // segment is complete. IsPartial bool // Provides a unique identifier for the Result . ResultId *string // The start time, in milliseconds, of the Result . StartTime float64 // contains filtered or unexported fields }
The Result associated with a MedicalTranscript.
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
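A sketch of the common pattern of acting only on completed segments (IsPartial is false); the helper and sample data are hypothetical.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// printFinalSegments ignores partial results and prints the first alternative
// of each completed segment.
func printFinalSegments(results []types.MedicalResult) {
    for _, r := range results {
        if r.IsPartial {
            continue // the segment is re-delivered until it is complete
        }
        if len(r.Alternatives) == 0 {
            continue
        }
        fmt.Printf("[%v-%v ms] %s\n", r.StartTime, r.EndTime, aws.ToString(r.Alternatives[0].Transcript))
    }
}

func main() {
    printFinalSegments([]types.MedicalResult{
        {IsPartial: true, Alternatives: []types.MedicalAlternative{{Transcript: aws.String("The patient repo")}}},
        {IsPartial: false, StartTime: 0, EndTime: 2100,
            Alternatives: []types.MedicalAlternative{{Transcript: aws.String("The patient reports mild pain.")}}},
    })
}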
type MedicalScribeAudioEvent ¶ added in v1.23.0
type MedicalScribeAudioEvent struct { // An audio blob containing the next segment of audio from your application, with // a maximum duration of 1 second. The maximum size in bytes varies based on audio // properties. // // Find recommended size in [Transcribing streaming best practices]. // // Size calculation: Duration (s) * Sample Rate (Hz) * Number of Channels * 2 // (Bytes per Sample) // // For example, a 1-second chunk of 16 kHz, 2-channel, 16-bit audio would be 1 * // 16000 * 2 * 2 = 64000 bytes . // // For 8 kHz, 1-channel, 16-bit audio, a 1-second chunk would be 1 * 8000 * 1 * 2 // = 16000 bytes . // // [Transcribing streaming best practices]: https://docs.aws.amazon.com/transcribe/latest/dg/streaming.html#best-practices // // This member is required. AudioChunk []byte // contains filtered or unexported fields }
A wrapper for your audio chunks.
For more information, see Event stream encoding.
type MedicalScribeChannelDefinition ¶ added in v1.23.0
type MedicalScribeChannelDefinition struct { // Specify the audio channel you want to define. // // This member is required. ChannelId int32 // Specify the participant that you want to flag. The allowed options are CLINICIAN // and PATIENT . // // This member is required. ParticipantRole MedicalScribeParticipantRole // contains filtered or unexported fields }
Makes it possible to specify which speaker is on which channel. For example, if the clinician is the first participant to speak, you would set the ChannelId of the first ChannelDefinition in the list to 0 (to indicate the first channel) and ParticipantRole to CLINICIAN (to indicate that it's the clinician speaking). Then you would set the ChannelId of the second ChannelDefinition in the list to 1 (to indicate the second channel) and ParticipantRole to PATIENT (to indicate that it's the patient speaking).
If you don't specify a channel definition, HealthScribe will diarize the transcription and identify speaker roles for each speaker.
type MedicalScribeConfigurationEvent ¶ added in v1.23.0
type MedicalScribeConfigurationEvent struct { // Specify settings for post-stream analytics. // // This member is required. PostStreamAnalyticsSettings *MedicalScribePostStreamAnalyticsSettings // The Amazon Resource Name (ARN) of an IAM role that has permissions to access // the Amazon S3 output bucket you specified, and use your KMS key if supplied. If // the role that you specify doesn’t have the appropriate permissions, your request // fails. // // IAM role ARNs have the format // arn:partition:iam::account:role/role-name-with-path . For example: // arn:aws:iam::111122223333:role/Admin . // // For more information, see [Amazon Web Services HealthScribe]. // // [Amazon Web Services HealthScribe]: https://docs.aws.amazon.com/transcribe/latest/dg/health-scribe-streaming.html // // This member is required. ResourceAccessRoleArn *string // Specify which speaker is on which audio channel. ChannelDefinitions []MedicalScribeChannelDefinition // Specify the encryption settings for your streaming session. EncryptionSettings *MedicalScribeEncryptionSettings // Specify how you want your custom vocabulary filter applied to the streaming // session. // // To replace words with *** , specify mask . // // To delete words, specify remove . // // To flag words without changing them, specify tag . VocabularyFilterMethod MedicalScribeVocabularyFilterMethod // Specify the name of the custom vocabulary filter you want to include in your // streaming session. Custom vocabulary filter names are case-sensitive. // // If you include VocabularyFilterName in the MedicalScribeConfigurationEvent , you // must also include VocabularyFilterMethod . VocabularyFilterName *string // Specify the name of the custom vocabulary you want to use for your streaming // session. Custom vocabulary names are case-sensitive. VocabularyName *string // contains filtered or unexported fields }
Specify details to configure the streaming session, including channel definitions, encryption settings, post-stream analytics settings, resource access role ARN and vocabulary settings.
Whether you are starting a new session or resuming an existing session, your first event must be a MedicalScribeConfigurationEvent. If you are resuming a session, then this event must have the same configurations that you provided to start the session.
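A minimal sketch of a configuration event with the two required members plus optional channel definitions; the role ARN and bucket name are placeholders.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    // Placeholder role ARN and bucket name; both members below are required.
    cfg := types.MedicalScribeConfigurationEvent{
        ResourceAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/Admin"),
        PostStreamAnalyticsSettings: &types.MedicalScribePostStreamAnalyticsSettings{
            ClinicalNoteGenerationSettings: &types.ClinicalNoteGenerationSettings{
                OutputBucketName: aws.String("DOC-EXAMPLE-BUCKET"), // no "S3://" prefix
            },
        },
        // Optional: identify which speaker is on which channel.
        ChannelDefinitions: []types.MedicalScribeChannelDefinition{
            {ChannelId: 0, ParticipantRole: types.MedicalScribeParticipantRoleClinician},
            {ChannelId: 1, ParticipantRole: types.MedicalScribeParticipantRolePatient},
        },
    }

    // The configuration event must be the first event on the input stream.
    first := &types.MedicalScribeInputStreamMemberConfigurationEvent{Value: cfg}
    fmt.Printf("configuration event ready: %T\n", first)
}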
type MedicalScribeEncryptionSettings ¶ added in v1.23.0
type MedicalScribeEncryptionSettings struct {

    // The ID of the KMS key you want to use for your streaming session. You can
    // specify its KMS key ID, key Amazon Resource Name (ARN), alias name, or alias
    // ARN. When using an alias name, prefix it with "alias/" . To specify a KMS key in
    // a different Amazon Web Services account, you must use the key ARN or alias ARN.
    //
    // For example:
    //
    //   - Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
    //
    //   - Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
    //
    //   - Alias name: alias/ExampleAlias
    //
    //   - Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
    //
    // To get the key ID and key ARN for a KMS key, use the [ListKeys] or [DescribeKey] KMS API operations.
    // To get the alias name and alias ARN, use the [ListAliases] API operation.
    //
    // [ListKeys]: https://docs.aws.amazon.com/kms/latest/APIReference/API_ListKeys.html
    // [DescribeKey]: https://docs.aws.amazon.com/kms/latest/APIReference/API_DescribeKey.html
    // [ListAliases]: https://docs.aws.amazon.com/kms/latest/APIReference/API_ListAliases.html
    //
    // This member is required.
    KmsKeyId *string

    // A map of plain text, non-secret key:value pairs, known as encryption context
    // pairs, that provide an added layer of security for your data. For more
    // information, see [KMS encryption context] and [Asymmetric keys in KMS].
    //
    // [KMS encryption context]: https://docs.aws.amazon.com/transcribe/latest/dg/key-management.html#kms-context
    // [Asymmetric keys in KMS]: https://docs.aws.amazon.com/transcribe/latest/dg/symmetric-asymmetric.html
    KmsEncryptionContext map[string]string
    // contains filtered or unexported fields
}
Contains encryption-related settings to be used for data encryption with Key Management Service, including KmsEncryptionContext and KmsKeyId. The KmsKeyId is required, while KmsEncryptionContext is optional and provides an additional layer of security.
By default, Amazon Web Services HealthScribe provides encryption at rest to protect sensitive customer data using Amazon S3-managed keys. HealthScribe uses the KMS key you specify as a second layer of encryption.
Your ResourceAccessRoleArn must have permission to use your KMS key. For more information, see Data Encryption at rest for Amazon Web Services HealthScribe.
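A minimal sketch; the key ID is the placeholder from the field documentation and the encryption context pair is hypothetical.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    enc := &types.MedicalScribeEncryptionSettings{
        KmsKeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"),
        KmsEncryptionContext: map[string]string{
            "department": "cardiology", // hypothetical context pair
        },
    }
    fmt.Println("KMS key:", aws.ToString(enc.KmsKeyId))
}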
type MedicalScribeInputStream ¶ added in v1.23.0
type MedicalScribeInputStream interface {
// contains filtered or unexported methods
}
An encoded stream of events. The stream is encoded as HTTP/2 data frames.
An input stream consists of the following types of events. The first element of the input stream must be the MedicalScribeConfigurationEvent event type.
MedicalScribeConfigurationEvent
MedicalScribeAudioEvent
MedicalScribeSessionControlEvent
The following types satisfy this interface:
MedicalScribeInputStreamMemberAudioEvent
MedicalScribeInputStreamMemberConfigurationEvent
MedicalScribeInputStreamMemberSessionControlEvent
Example (OutputUsage) ¶
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    var union types.MedicalScribeInputStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.MedicalScribeInputStreamMemberAudioEvent:
        _ = v.Value // Value is types.MedicalScribeAudioEvent

    case *types.MedicalScribeInputStreamMemberConfigurationEvent:
        _ = v.Value // Value is types.MedicalScribeConfigurationEvent

    case *types.MedicalScribeInputStreamMemberSessionControlEvent:
        _ = v.Value // Value is types.MedicalScribeSessionControlEvent

    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)

    default:
        fmt.Println("union is nil or unknown type")

    }
}
Output:
type MedicalScribeInputStreamMemberAudioEvent ¶ added in v1.23.0
type MedicalScribeInputStreamMemberAudioEvent struct {
    Value MedicalScribeAudioEvent
    // contains filtered or unexported fields
}
A wrapper for your audio chunks.
For more information, see Event stream encoding.
type MedicalScribeInputStreamMemberConfigurationEvent ¶ added in v1.23.0
type MedicalScribeInputStreamMemberConfigurationEvent struct {
    Value MedicalScribeConfigurationEvent
    // contains filtered or unexported fields
}
Specify additional streaming session configurations beyond those provided in your initial start request headers. For example, specify channel definitions, encryption settings, and post-stream analytics settings.
Whether you are starting a new session or resuming an existing session, your first event must be a MedicalScribeConfigurationEvent.
type MedicalScribeInputStreamMemberSessionControlEvent ¶ added in v1.23.0
type MedicalScribeInputStreamMemberSessionControlEvent struct {
    Value MedicalScribeSessionControlEvent
    // contains filtered or unexported fields
}
Specify the lifecycle of your streaming session, such as ending the session.
type MedicalScribeLanguageCode ¶ added in v1.23.0
type MedicalScribeLanguageCode string
const (
MedicalScribeLanguageCodeEnUs MedicalScribeLanguageCode = "en-US"
)
Enum values for MedicalScribeLanguageCode
func (MedicalScribeLanguageCode) Values ¶ added in v1.23.0
func (MedicalScribeLanguageCode) Values() []MedicalScribeLanguageCode
Values returns all known values for MedicalScribeLanguageCode. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribeMediaEncoding ¶ added in v1.23.0
type MedicalScribeMediaEncoding string
const (
    MedicalScribeMediaEncodingPcm     MedicalScribeMediaEncoding = "pcm"
    MedicalScribeMediaEncodingOggOpus MedicalScribeMediaEncoding = "ogg-opus"
    MedicalScribeMediaEncodingFlac    MedicalScribeMediaEncoding = "flac"
)
Enum values for MedicalScribeMediaEncoding
func (MedicalScribeMediaEncoding) Values ¶ added in v1.23.0
func (MedicalScribeMediaEncoding) Values() []MedicalScribeMediaEncoding
Values returns all known values for MedicalScribeMediaEncoding. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribeParticipantRole ¶ added in v1.23.0
type MedicalScribeParticipantRole string
const (
    MedicalScribeParticipantRolePatient   MedicalScribeParticipantRole = "PATIENT"
    MedicalScribeParticipantRoleClinician MedicalScribeParticipantRole = "CLINICIAN"
)
Enum values for MedicalScribeParticipantRole
func (MedicalScribeParticipantRole) Values ¶ added in v1.23.0
func (MedicalScribeParticipantRole) Values() []MedicalScribeParticipantRole
Values returns all known values for MedicalScribeParticipantRole. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribePostStreamAnalyticsResult ¶ added in v1.23.0
type MedicalScribePostStreamAnalyticsResult struct {

    // Provides the Clinical Note Generation result for post-stream analytics.
    ClinicalNoteGenerationResult *ClinicalNoteGenerationResult
    // contains filtered or unexported fields
}
Contains details for the result of post-stream analytics.
type MedicalScribePostStreamAnalyticsSettings ¶ added in v1.23.0
type MedicalScribePostStreamAnalyticsSettings struct {

    // Specify settings for the post-stream clinical note generation.
    //
    // This member is required.
    ClinicalNoteGenerationSettings *ClinicalNoteGenerationSettings
    // contains filtered or unexported fields
}
The settings for post-stream analytics.
type MedicalScribeResultStream ¶ added in v1.23.0
type MedicalScribeResultStream interface {
// contains filtered or unexported methods
}
The result stream on which you receive output events. The details are provided in the MedicalScribeTranscriptEvent object.
The following types satisfy this interface:
MedicalScribeResultStreamMemberTranscriptEvent
Example (OutputUsage) ¶
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    var union types.MedicalScribeResultStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.MedicalScribeResultStreamMemberTranscriptEvent:
        _ = v.Value // Value is types.MedicalScribeTranscriptEvent

    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)

    default:
        fmt.Println("union is nil or unknown type")

    }
}
Output:
type MedicalScribeResultStreamMemberTranscriptEvent ¶ added in v1.23.0
type MedicalScribeResultStreamMemberTranscriptEvent struct {
    Value MedicalScribeTranscriptEvent
    // contains filtered or unexported fields
}
The transcript event that contains real-time transcription results.
type MedicalScribeSessionControlEvent ¶ added in v1.23.0
type MedicalScribeSessionControlEvent struct { // The type of MedicalScribeSessionControlEvent . // // Possible Values: // // - END_OF_SESSION - Indicates the audio streaming is complete. After you send // an END_OF_SESSION event, Amazon Web Services HealthScribe starts the post-stream // analytics. The session can't be resumed after this event is sent. After Amazon // Web Services HealthScribe processes the event, the real-time StreamStatus is // COMPLETED . You get the StreamStatus and other stream details with the [GetMedicalScribeStream]API // operation. For more information about different streaming statuses, see the // StreamStatus description in the [MedicalScribeStreamDetails]. // // [GetMedicalScribeStream]: https://docs.aws.amazon.com/transcribe/latest/APIReference/API_streaming_GetMedicalScribeStream.html // [MedicalScribeStreamDetails]: https://docs.aws.amazon.com/transcribe/latest/APIReference/API_streaming_MedicalScribeStreamDetails.html // // This member is required. Type MedicalScribeSessionControlEventType // contains filtered or unexported fields }
Specify the lifecycle of your streaming session.
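A minimal sketch constructing the END_OF_SESSION event and wrapping it as an input-stream member; post-stream analytics starts after this event is processed.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    // Signal that audio streaming is complete; the session cannot be resumed afterwards.
    end := &types.MedicalScribeInputStreamMemberSessionControlEvent{
        Value: types.MedicalScribeSessionControlEvent{
            Type: types.MedicalScribeSessionControlEventTypeEndOfSession,
        },
    }
    fmt.Printf("%T\n", end)
}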
type MedicalScribeSessionControlEventType ¶ added in v1.23.0
type MedicalScribeSessionControlEventType string
const (
MedicalScribeSessionControlEventTypeEndOfSession MedicalScribeSessionControlEventType = "END_OF_SESSION"
)
Enum values for MedicalScribeSessionControlEventType
func (MedicalScribeSessionControlEventType) Values ¶ added in v1.23.0
func (MedicalScribeSessionControlEventType) Values() []MedicalScribeSessionControlEventType
Values returns all known values for MedicalScribeSessionControlEventType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribeStreamDetails ¶ added in v1.23.0
type MedicalScribeStreamDetails struct { // The Channel Definitions of the HealthScribe streaming session. ChannelDefinitions []MedicalScribeChannelDefinition // The Encryption Settings of the HealthScribe streaming session. EncryptionSettings *MedicalScribeEncryptionSettings // The Language Code of the HealthScribe streaming session. LanguageCode MedicalScribeLanguageCode // The Media Encoding of the HealthScribe streaming session. MediaEncoding MedicalScribeMediaEncoding // The sample rate (in hertz) of the HealthScribe streaming session. MediaSampleRateHertz *int32 // The result of post-stream analytics for the HealthScribe streaming session. PostStreamAnalyticsResult *MedicalScribePostStreamAnalyticsResult // The post-stream analytics settings of the HealthScribe streaming session. PostStreamAnalyticsSettings *MedicalScribePostStreamAnalyticsSettings // The Amazon Resource Name (ARN) of the role used in the HealthScribe streaming // session. ResourceAccessRoleArn *string // The identifier of the HealthScribe streaming session. SessionId *string // The date and time when the HealthScribe streaming session was created. StreamCreatedAt *time.Time // The date and time when the HealthScribe streaming session was ended. StreamEndedAt *time.Time // The streaming status of the HealthScribe streaming session. // // Possible Values: // // - IN_PROGRESS // // - PAUSED // // - FAILED // // - COMPLETED // // This status is specific to real-time streaming. A COMPLETED status doesn't mean // that the post-stream analytics is complete. To get status of an analytics // result, check the Status field for the analytics result within the // MedicalScribePostStreamAnalyticsResult . For example, you can view the status of // the ClinicalNoteGenerationResult . StreamStatus MedicalScribeStreamStatus // The method of the vocabulary filter for the HealthScribe streaming session. VocabularyFilterMethod MedicalScribeVocabularyFilterMethod // The name of the vocabulary filter used for the HealthScribe streaming session . VocabularyFilterName *string // The vocabulary name of the HealthScribe streaming session. VocabularyName *string // contains filtered or unexported fields }
Contains details about an Amazon Web Services HealthScribe streaming session.
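A hedged sketch of reading the stream details; the summarize helper and literal values are hypothetical, and the details would normally come from the GetMedicalScribeStream operation. A COMPLETED stream status does not mean the post-stream analytics has finished, so the analytics status is checked separately.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// summarize prints the real-time stream status and the analytics status.
func summarize(d *types.MedicalScribeStreamDetails) {
    fmt.Println("stream status:", d.StreamStatus)
    if r := d.PostStreamAnalyticsResult; r != nil && r.ClinicalNoteGenerationResult != nil {
        fmt.Println("clinical note generation:", r.ClinicalNoteGenerationResult.Status)
    }
}

func main() {
    summarize(&types.MedicalScribeStreamDetails{
        StreamStatus: types.MedicalScribeStreamStatusCompleted,
        PostStreamAnalyticsResult: &types.MedicalScribePostStreamAnalyticsResult{
            ClinicalNoteGenerationResult: &types.ClinicalNoteGenerationResult{
                Status: types.ClinicalNoteGenerationStatusInProgress,
            },
        },
    })
}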
type MedicalScribeStreamStatus ¶ added in v1.23.0
type MedicalScribeStreamStatus string
const (
    MedicalScribeStreamStatusInProgress MedicalScribeStreamStatus = "IN_PROGRESS"
    MedicalScribeStreamStatusPaused     MedicalScribeStreamStatus = "PAUSED"
    MedicalScribeStreamStatusFailed     MedicalScribeStreamStatus = "FAILED"
    MedicalScribeStreamStatusCompleted  MedicalScribeStreamStatus = "COMPLETED"
)
Enum values for MedicalScribeStreamStatus
func (MedicalScribeStreamStatus) Values ¶ added in v1.23.0
func (MedicalScribeStreamStatus) Values() []MedicalScribeStreamStatus
Values returns all known values for MedicalScribeStreamStatus. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribeTranscriptEvent ¶ added in v1.23.0
type MedicalScribeTranscriptEvent struct {

    // The TranscriptSegment associated with a MedicalScribeTranscriptEvent.
    TranscriptSegment *MedicalScribeTranscriptSegment
    // contains filtered or unexported fields
}
The event associated with MedicalScribeResultStream.
Contains MedicalScribeTranscriptSegment, which contains segment-related information.
type MedicalScribeTranscriptItem ¶ added in v1.23.0
type MedicalScribeTranscriptItem struct { // The start time, in milliseconds, of the transcribed item. BeginAudioTime float64 // The confidence score associated with a word or phrase in your transcript. // // Confidence scores are values between 0 and 1. A larger value indicates a higher // probability that the identified item correctly matches the item spoken in your // media. Confidence *float64 // The word, phrase or punctuation mark that was transcribed. Content *string // The end time, in milliseconds, of the transcribed item. EndAudioTime float64 // The type of item identified. Options are: PRONUNCIATION (spoken words) and // PUNCTUATION . Type MedicalScribeTranscriptItemType // Indicates whether the specified item matches a word in the vocabulary filter // included in your configuration event. If true , there is a vocabulary filter // match. VocabularyFilterMatch *bool // contains filtered or unexported fields }
A word, phrase, or punctuation mark in your transcription output, along with various associated attributes, such as confidence score, type, and start and end times.
type MedicalScribeTranscriptItemType ¶ added in v1.23.0
type MedicalScribeTranscriptItemType string
const (
    MedicalScribeTranscriptItemTypePronunciation MedicalScribeTranscriptItemType = "pronunciation"
    MedicalScribeTranscriptItemTypePunctuation   MedicalScribeTranscriptItemType = "punctuation"
)
Enum values for MedicalScribeTranscriptItemType
func (MedicalScribeTranscriptItemType) Values ¶ added in v1.23.0
func (MedicalScribeTranscriptItemType) Values() []MedicalScribeTranscriptItemType
Values returns all known values for MedicalScribeTranscriptItemType. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalScribeTranscriptSegment ¶ added in v1.23.0
type MedicalScribeTranscriptSegment struct { // The start time, in milliseconds, of the segment. BeginAudioTime float64 // Indicates which audio channel is associated with the // MedicalScribeTranscriptSegment . // // If MedicalScribeChannelDefinition is not provided in the // MedicalScribeConfigurationEvent , then this field will not be included. ChannelId *string // Contains transcribed text of the segment. Content *string // The end time, in milliseconds, of the segment. EndAudioTime float64 // Indicates if the segment is complete. // // If IsPartial is true , the segment is not complete. If IsPartial is false , the // segment is complete. IsPartial bool // Contains words, phrases, or punctuation marks in your segment. Items []MedicalScribeTranscriptItem // The identifier of the segment. SegmentId *string // contains filtered or unexported fields }
Contains a set of transcription results, along with additional information about the segment.
type MedicalScribeVocabularyFilterMethod ¶ added in v1.23.0
type MedicalScribeVocabularyFilterMethod string
const (
    MedicalScribeVocabularyFilterMethodRemove MedicalScribeVocabularyFilterMethod = "remove"
    MedicalScribeVocabularyFilterMethodMask   MedicalScribeVocabularyFilterMethod = "mask"
    MedicalScribeVocabularyFilterMethodTag    MedicalScribeVocabularyFilterMethod = "tag"
)
Enum values for MedicalScribeVocabularyFilterMethod
func (MedicalScribeVocabularyFilterMethod) Values ¶ added in v1.23.0
func (MedicalScribeVocabularyFilterMethod) Values() []MedicalScribeVocabularyFilterMethod
Values returns all known values for MedicalScribeVocabularyFilterMethod. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type MedicalTranscript ¶
type MedicalTranscript struct { // Contains a set of transcription results from one or more audio segments, along // with additional information per your request parameters. This can include // information relating to alternative transcriptions, channel identification, // partial result stabilization, language identification, and other // transcription-related data. Results []MedicalResult // contains filtered or unexported fields }
The MedicalTranscript associated with a MedicalTranscriptEvent.
MedicalTranscript contains Results, which contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type MedicalTranscriptEvent ¶
type MedicalTranscriptEvent struct { // Contains Results , which contains a set of transcription results from one or // more audio segments, along with additional information per your request // parameters. This can include information relating to alternative transcriptions, // channel identification, partial result stabilization, language identification, // and other transcription-related data. Transcript *MedicalTranscript // contains filtered or unexported fields }
The MedicalTranscriptEvent associated with a MedicalTranscriptResultStream.
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type MedicalTranscriptResultStream ¶
type MedicalTranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your streaming session.
The following types satisfy this interface:
MedicalTranscriptResultStreamMemberTranscriptEvent
Example (OutputUsage) ¶
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
    var union types.MedicalTranscriptResultStream
    // type switches can be used to check the union value
    switch v := union.(type) {
    case *types.MedicalTranscriptResultStreamMemberTranscriptEvent:
        _ = v.Value // Value is types.MedicalTranscriptEvent

    case *types.UnknownUnionMember:
        fmt.Println("unknown tag:", v.Tag)

    default:
        fmt.Println("union is nil or unknown type")

    }
}
Output:
type MedicalTranscriptResultStreamMemberTranscriptEvent ¶
type MedicalTranscriptResultStreamMemberTranscriptEvent struct {
    Value MedicalTranscriptEvent
    // contains filtered or unexported fields
}
The MedicalTranscriptEvent associated with a MedicalTranscriptResultStream.
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
type PartialResultsStability ¶
type PartialResultsStability string
const (
    PartialResultsStabilityHigh   PartialResultsStability = "high"
    PartialResultsStabilityMedium PartialResultsStability = "medium"
    PartialResultsStabilityLow    PartialResultsStability = "low"
)
Enum values for PartialResultsStability
func (PartialResultsStability) Values ¶
func (PartialResultsStability) Values() []PartialResultsStability
Values returns all known values for PartialResultsStability. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type ParticipantRole ¶ added in v1.8.0
type ParticipantRole string
const ( ParticipantRoleAgent ParticipantRole = "AGENT" ParticipantRoleCustomer ParticipantRole = "CUSTOMER" )
Enum values for ParticipantRole
func (ParticipantRole) Values ¶ added in v1.8.0
func (ParticipantRole) Values() []ParticipantRole
Values returns all known values for ParticipantRole. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.
type PointsOfInterest ¶ added in v1.8.0
type PointsOfInterest struct { // Contains the timestamp ranges (start time through end time) of matched // categories and rules. TimestampRanges []TimestampRange // contains filtered or unexported fields }
Contains the timestamps of matched categories.
type PostCallAnalyticsSettings ¶ added in v1.8.0
type PostCallAnalyticsSettings struct { // The Amazon Resource Name (ARN) of an IAM role that has permissions to access // the Amazon S3 bucket that contains your input files. If the role that you // specify doesn’t have the appropriate permissions to access the specified Amazon // S3 location, your request fails. // // IAM role ARNs have the format // arn:partition:iam::account:role/role-name-with-path . For example: // arn:aws:iam::111122223333:role/Admin . For more information, see [IAM ARNs]. // // [IAM ARNs]: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-arns // // This member is required. DataAccessRoleArn *string // The Amazon S3 location where you want your Call Analytics post-call // transcription output stored. You can use any of the following formats to specify // the output location: // // - s3://DOC-EXAMPLE-BUCKET // // - s3://DOC-EXAMPLE-BUCKET/my-output-folder/ // // - s3://DOC-EXAMPLE-BUCKET/my-output-folder/my-call-analytics-job.json // // This member is required. OutputLocation *string // Specify whether you want only a redacted transcript or both a redacted and an // unredacted transcript. If you choose redacted and unredacted, two JSON files are // generated and stored in the Amazon S3 output location you specify. // // Note that to include ContentRedactionOutput in your request, you must enable // content redaction ( ContentRedactionType ). ContentRedactionOutput ContentRedactionOutput // The KMS key you want to use to encrypt your Call Analytics post-call output. // // If using a key located in the current Amazon Web Services account, you can // specify your KMS key in one of four ways: // // - Use the KMS key ID itself. For example, 1234abcd-12ab-34cd-56ef-1234567890ab // . // // - Use an alias for the KMS key ID. For example, alias/ExampleAlias . // // - Use the Amazon Resource Name (ARN) for the KMS key ID. For example, // arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab . // // - Use the ARN for the KMS key alias. For example, // arn:aws:kms:region:account-ID:alias/ExampleAlias . // // If using a key located in a different Amazon Web Services account than the // current Amazon Web Services account, you can specify your KMS key in one of two // ways: // // - Use the ARN for the KMS key ID. For example, // arn:aws:kms:region:account-ID:key/1234abcd-12ab-34cd-56ef-1234567890ab . // // - Use the ARN for the KMS key alias. For example, // arn:aws:kms:region:account-ID:alias/ExampleAlias . // // Note that the role making the request must have permission to use the specified // KMS key. OutputEncryptionKMSKeyId *string // contains filtered or unexported fields }
Allows you to specify additional settings for your Call Analytics post-call request, including output locations for your redacted transcript, which IAM role to use, and which encryption key to use.
DataAccessRoleArn and OutputLocation are required fields.
PostCallAnalyticsSettings provides you with the same insights as a Call Analytics post-call transcription. Refer to Post-call analytics for more information on this feature.
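A brief construction sketch using only the fields documented above. The role ARN, bucket path, and KMS key ID are placeholders rather than real resources, and aws.String is the pointer helper from the SDK's aws package:

package main

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
	// DataAccessRoleArn and OutputLocation are the two required members.
	settings := &types.PostCallAnalyticsSettings{
		DataAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/ExampleTranscribeRole"), // placeholder role
		OutputLocation:    aws.String("s3://DOC-EXAMPLE-BUCKET/my-output-folder/"),

		// Optional: encrypt the post-call output with a customer managed key
		// (placeholder key ID).
		OutputEncryptionKMSKeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"),

		// Optional: ContentRedactionOutput selects redacted, or redacted plus
		// unredacted, output; it requires content redaction to be enabled on
		// the request, so it is left unset here.
	}
	_ = settings
}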
type ResourceNotFoundException ¶ added in v1.23.0
type ResourceNotFoundException struct { Message *string ErrorCodeOverride *string // contains filtered or unexported fields }
The request references a resource which doesn't exist.
func (*ResourceNotFoundException) Error ¶ added in v1.23.0
func (e *ResourceNotFoundException) Error() string
func (*ResourceNotFoundException) ErrorCode ¶ added in v1.23.0
func (e *ResourceNotFoundException) ErrorCode() string
func (*ResourceNotFoundException) ErrorFault ¶ added in v1.23.0
func (e *ResourceNotFoundException) ErrorFault() smithy.ErrorFault
func (*ResourceNotFoundException) ErrorMessage ¶ added in v1.23.0
func (e *ResourceNotFoundException) ErrorMessage() string
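Because ResourceNotFoundException satisfies the error interface, the usual errors.As pattern surfaces it from an operation error. A small sketch; the reportIfNotFound helper is ours, and the err value would normally come from one of this service's streaming operations:

package main

import (
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// reportIfNotFound checks whether err wraps a ResourceNotFoundException and,
// if so, prints its code, fault, and message.
func reportIfNotFound(err error) {
	var rnf *types.ResourceNotFoundException
	if errors.As(err, &rnf) {
		fmt.Printf("code=%s fault=%s msg=%s\n",
			rnf.ErrorCode(), rnf.ErrorFault(), rnf.ErrorMessage())
	}
}

func main() {
	// With an unrelated error the check simply does not match.
	reportIfNotFound(errors.New("unrelated error"))
}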
type Result ¶
type Result struct { // A list of possible alternative transcriptions for the input audio. Each // alternative may contain one or more of Items , Entities , or Transcript . Alternatives []Alternative // Indicates which audio channel is associated with the Result . ChannelId *string // The end time, in milliseconds, of the Result . EndTime float64 // Indicates if the segment is complete. // // If IsPartial is true , the segment is not complete. If IsPartial is false , the // segment is complete. IsPartial bool // The language code that represents the language spoken in your audio stream. LanguageCode LanguageCode // The language code of the dominant language identified in your stream. // // If you enabled channel identification and each channel of your audio contains a // different language, you may have more than one result. LanguageIdentification []LanguageWithScore // Provides a unique identifier for the Result . ResultId *string // The start time, in milliseconds, of the Result . StartTime float64 // contains filtered or unexported fields }
The Result associated with a TranscriptEvent .
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters. This can include information relating to alternative transcriptions, channel identification, partial result stabilization, language identification, and other transcription-related data.
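A short sketch of reading completed segments out of a slice of Result values: partial results are skipped, and the first alternative's Transcript is collected. The collectFinalText helper is illustrative only, not part of this package:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// collectFinalText returns the transcript text of the first alternative from
// every completed (non-partial) Result.
func collectFinalText(results []types.Result) []string {
	var lines []string
	for _, r := range results {
		if r.IsPartial {
			continue // still being refined; a later Result carries the final segment
		}
		if len(r.Alternatives) == 0 || r.Alternatives[0].Transcript == nil {
			continue
		}
		lines = append(lines, *r.Alternatives[0].Transcript)
	}
	return lines
}

func main() {
	text := "hello world"
	results := []types.Result{{
		IsPartial:    false,
		Alternatives: []types.Alternative{{Transcript: &text}},
	}}
	fmt.Println(collectFinalText(results)) // [hello world]
}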
type Sentiment ¶ added in v1.8.0
type Sentiment string
type ServiceUnavailableException ¶
type ServiceUnavailableException struct { // contains filtered or unexported fields }
The service is currently unavailable. Try your request later.
func (*ServiceUnavailableException) Error ¶
func (e *ServiceUnavailableException) Error() string
func (*ServiceUnavailableException) ErrorCode ¶
func (e *ServiceUnavailableException) ErrorCode() string
func (*ServiceUnavailableException) ErrorFault ¶
func (e *ServiceUnavailableException) ErrorFault() smithy.ErrorFault
func (*ServiceUnavailableException) ErrorMessage ¶
func (e *ServiceUnavailableException) ErrorMessage() string
type Specialty ¶
type Specialty string
type TimestampRange ¶ added in v1.8.0
type TimestampRange struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the category match. BeginOffsetMillis *int64 // The time, in milliseconds, from the beginning of the audio stream to the end of // the category match. EndOffsetMillis *int64 // contains filtered or unexported fields }
Contains the timestamp range (start time through end time) of a matched category.
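Since both offsets are measured in milliseconds from the start of the audio stream, the length of a category match is simply the difference between the two. A minimal sketch; matchDuration is an illustrative helper, not part of this package:

package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// matchDuration converts a TimestampRange into a time.Duration for the
// matched category, returning zero if either offset is missing.
func matchDuration(tr types.TimestampRange) time.Duration {
	if tr.BeginOffsetMillis == nil || tr.EndOffsetMillis == nil {
		return 0
	}
	return time.Duration(*tr.EndOffsetMillis-*tr.BeginOffsetMillis) * time.Millisecond
}

func main() {
	begin, end := int64(1500), int64(4250)
	fmt.Println(matchDuration(types.TimestampRange{
		BeginOffsetMillis: &begin,
		EndOffsetMillis:   &end,
	})) // 2.75s
}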
type Transcript ¶
type Transcript struct { // Contains a set of transcription results from one or more audio segments, along // with additional information per your request parameters. This can include // information relating to alternative transcriptions, channel identification, // partial result stabilization, language identification, and other // transcription-related data. Results []Result // contains filtered or unexported fields }
The Transcript associated with a TranscriptEvent .
Transcript contains Results , which contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type TranscriptEvent ¶
type TranscriptEvent struct { // Contains Results , which contains a set of transcription results from one or // more audio segments, along with additional information per your request // parameters. This can include information relating to alternative transcriptions, // channel identification, partial result stabilization, language identification, // and other transcription-related data. Transcript *Transcript // contains filtered or unexported fields }
The TranscriptEvent associated with a TranscriptResultStream .
Contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type TranscriptResultStream ¶
type TranscriptResultStream interface {
// contains filtered or unexported methods
}
Contains detailed information about your streaming session.
The following types satisfy this interface:
TranscriptResultStreamMemberTranscriptEvent
Example (OutputUsage) ¶
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

func main() {
	var union types.TranscriptResultStream
	// type switches can be used to check the union value
	switch v := union.(type) {
	case *types.TranscriptResultStreamMemberTranscriptEvent:
		_ = v.Value // Value is types.TranscriptEvent

	case *types.UnknownUnionMember:
		fmt.Println("unknown tag:", v.Tag)

	default:
		fmt.Println("union is nil or unknown type")

	}
}
Output:
type TranscriptResultStreamMemberTranscriptEvent ¶
type TranscriptResultStreamMemberTranscriptEvent struct { Value TranscriptEvent // contains filtered or unexported fields }
Contains Transcript , which contains Results . The object contains a set of transcription results from one or more audio segments, along with additional information per your request parameters.
type UnknownUnionMember ¶
type UnknownUnionMember struct { Tag string Value []byte // contains filtered or unexported fields }
UnknownUnionMember is returned when a union member is returned over the wire, but has an unknown tag.
type UtteranceEvent ¶ added in v1.8.0
type UtteranceEvent struct { // The time, in milliseconds, from the beginning of the audio stream to the start // of the UtteranceEvent . BeginOffsetMillis *int64 // The time, in milliseconds, from the beginning of the audio stream to the end // of the UtteranceEvent . EndOffsetMillis *int64 // Contains entities identified as personally identifiable information (PII) in // your transcription output. Entities []CallAnalyticsEntity // Indicates whether the segment in the UtteranceEvent is complete ( FALSE ) or // partial ( TRUE ). IsPartial bool // Provides the issues that were detected in the specified segment. IssuesDetected []IssueDetected // Contains words, phrases, or punctuation marks that are associated with the // specified UtteranceEvent . Items []CallAnalyticsItem // Provides the role of the speaker for each audio channel, either CUSTOMER or // AGENT . ParticipantRole ParticipantRole // Provides the sentiment that was detected in the specified segment. Sentiment Sentiment // Contains transcribed text. Transcript *string // The unique identifier that is associated with the specified UtteranceEvent . UtteranceId *string // contains filtered or unexported fields }
Contains a set of transcription results from one or more audio segments, along with additional information about the parameters included in your request. For example, channel definitions, partial result stabilization, sentiment, and issue detection.
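A brief sketch of consuming a completed UtteranceEvent, using only the fields shown above. The summarizeUtterance helper is illustrative, and the sample sentiment string is an assumed value since the Sentiment constants are not listed here:

package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/transcribestreaming/types"
)

// summarizeUtterance prints the completed portion of a Call Analytics
// utterance: who spoke, the detected sentiment, the transcribed text, and how
// many issues were flagged. Partial segments are skipped because a later
// event carries the finished version.
func summarizeUtterance(ev types.UtteranceEvent) {
	if ev.IsPartial {
		return
	}
	text := ""
	if ev.Transcript != nil {
		text = *ev.Transcript
	}
	fmt.Printf("[%s] (%s) %q, issues flagged: %d\n",
		ev.ParticipantRole, ev.Sentiment, text, len(ev.IssuesDetected))
}

func main() {
	text := "I want to cancel my subscription."
	summarizeUtterance(types.UtteranceEvent{
		IsPartial:       false,
		ParticipantRole: types.ParticipantRoleCustomer,
		Sentiment:       "NEGATIVE", // assumed sentiment value for illustration
		Transcript:      &text,
	})
}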
type VocabularyFilterMethod ¶
type VocabularyFilterMethod string
const ( VocabularyFilterMethodRemove VocabularyFilterMethod = "remove" VocabularyFilterMethodMask VocabularyFilterMethod = "mask" VocabularyFilterMethodTag VocabularyFilterMethod = "tag" )
Enum values for VocabularyFilterMethod
func (VocabularyFilterMethod) Values ¶
func (VocabularyFilterMethod) Values() []VocabularyFilterMethod
Values returns all known values for VocabularyFilterMethod. Note that this can be expanded in the future, and so it is only as up to date as the client.
The ordering of this slice is not guaranteed to be stable across updates.