Use the analytics rule to define whether recordings are transcribed or analyzed for matching a certain category. The rule determines whether recordings are transcribed, including setting a default transcription language, enabling translation options, and applying advanced features such as speaker separation. You can also assign categories to identify relevant keywords within transcripts.
Create New Rule:
Click on the button Add rule.
In the field Rule Type, select the option Analytics from the drop-down list.
Edit existing Analytics rule:
Select the analytics rule and click on the icon to edit it.
The following window is displayed:
Add analytics rule
You can configure the following settings:
Option/Function
Description
Rule Name
Enter a unique, unambiguous rule name.
Description
Entering a description is optional.
Analytics rights
No Analytics: Recordings are not analyzed.
Analytics On Demand: Users can perform analytics on demand for certain recordings. NOTICE! The dynamic display options are explained later in this section.
Analyze All: All conversations of the user are analyzed.
Analyze Selected: Recordings are analyzed depending on the conversation type to reduce analytics costs.
Activate one or several check boxes to select a conversation type:
NOTICE! Dynamic display when activating:
Inbound: Only incoming calls are analyzed.
Outbound: Only outgoing calls are analyzed.
Meeting: All external and internal meetings are analyzed.
External: Only external 1:1 Conversations are analyzed.
Internal: Only internal calls are analyzed.
Chat: All chats inside and outside a conversation are analyzed.
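The conversation-type check boxes above act as a filter. As an illustration only (the function and field names are assumptions, not the product's actual API), the Analyze Selected behavior can be sketched as a simple predicate over each recording's conversation type:

```python
# Illustrative sketch of "Analyze Selected": only recordings whose
# conversation type matches one of the activated check boxes are passed
# on to analytics. The type names mirror the options above; everything
# else (field names, structure) is a hypothetical example.

SELECTED_TYPES = {"inbound", "meeting"}   # example: two check boxes activated

def should_analyze(recording):
    """Return True if the recording's conversation type is selected."""
    return recording["conversation_type"] in SELECTED_TYPES

calls = [
    {"id": 1, "conversation_type": "inbound"},
    {"id": 2, "conversation_type": "outbound"},
    {"id": 3, "conversation_type": "meeting"},
]
to_analyze = [c["id"] for c in calls if should_analyze(c)]
print(to_analyze)  # [1, 3]
```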
Content Understanding
Analytics Options
Within the analytics options, you define whether and how recordings are transcribed. You can choose the transcription method, set the transcription language, enable translation of transcripts, activate speech analytics and speaker separation, and apply categories to identify relevant keywords.
NOTICE! Dynamic display when activating: Speech to text
Standard Transcript
Batch Transcript
NOTICE! Batch transcription is processed asynchronously to handle large volumes of audio data stored in the system. It uses Microsoft Azure AI Speech to convert spoken words into text, enabling keyword searches, phrase detection, categorizations, and keyword spotting based on predefined or custom word lists. Because this is an offline process, results typically appear in the app within 30 to 60 minutes after the call. This approach is optimized for scalability and efficiency when transcribing multiple recordings at once.
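Since the notice above states that batch transcription uses Microsoft Azure AI Speech, a minimal sketch of such a job request may help administrators understand what happens in the background. This builds a request body in the shape of the Azure Speech batch transcription REST API (v3.1); the region placeholder, URLs, and helper name are assumptions, not part of the product:

```python
import json

# Hedged sketch, assuming the Azure Speech batch transcription REST API
# (v3.1) is used as described in the notice above. The endpoint region,
# content URLs, and this helper are illustrative placeholders.

def build_batch_transcription_request(content_urls, locale="en-US",
                                      diarization=False):
    """Assemble the JSON body for an Azure batch transcription job."""
    return {
        "displayName": "Recording transcription",
        "locale": locale,                      # default transcription language
        "contentUrls": list(content_urls),     # audio files stored in the system
        "properties": {
            # The rule's Speaker Separation option maps to diarization here
            "diarizationEnabled": diarization,
            "wordLevelTimestampsEnabled": True,  # useful for keyword spotting
        },
    }

# The {region} placeholder stays as-is; it depends on the tenant's deployment.
endpoint = "https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
body = build_batch_transcription_request(
    ["https://example.com/recordings/call-0001.wav"],
    locale="de-DE", diarization=True)
print(json.dumps(body, indent=2))
```

Because the job runs asynchronously, the API returns immediately and the finished transcript is fetched later, which is consistent with the 30 to 60 minute delay mentioned above.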
Default language: Select the language in which the transcript is output.
No change of default language in settings: Activate this option if the default transcription language should remain fixed and not be changed automatically or manually by users. This ensures consistent transcription results in the defined standard language.
No transcript without user setting: Activate this option to prevent automatic transcription for users who have not manually selected transcription languages in their settings view. If enabled, recordings of such users will not be transcribed until they define one or more languages. This ensures that transcription only occurs for users with explicitly configured preferences, reducing unnecessary processing and costs.
Select Alternative Language(s): Activate this option to add up to 3 alternative languages.
Multilingual conversations (this will decrease the accuracy): Activate this option if more than one language is regularly spoken in conversations.
Category Selection: Select one or more categories for the transcription.
Speech Analytics: Activate this option to enable emotion detection in the detail view of the recording.
Speaker Separation: Activate the checkbox for speaker recognition to automatically identify different speakers in the recording.
NOTICE! If none of the predefined languages apply, the transcription may be inaccurate or incomplete and manual transcription may be required.
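The language options above impose two constraints: a default language is required, and at most 3 alternative languages may be added. A minimal sketch of this validation, with assumed function and field names that do not reflect the product's internal configuration:

```python
# Illustrative sketch of the language constraints described above; the
# function and field names are assumptions, not the product's actual API.

def validate_language_settings(default_language, alternative_languages=(),
                               multilingual=False):
    """Check a rule's transcription language settings."""
    if not default_language:
        raise ValueError("A default transcription language is required.")
    if len(alternative_languages) > 3:
        raise ValueError("At most 3 alternative languages may be selected.")
    return {
        "defaultLanguage": default_language,
        "alternativeLanguages": list(alternative_languages),
        # Per the rule editor, multilingual mode decreases accuracy.
        "multilingual": multilingual,
    }
```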
Screenshare Summary
Activate Screenshare Summary to summarize a screen sharing session. The summary is output in text form and describes what is displayed on the shared screen.
The AI Assistant can evaluate the results of the screenshare summary, so you can ask the AI Assistant questions about it. However, the corresponding open question must be actively used: define it as an on-demand question and assign it to the user, or have this done automatically based on a policy.
Translation
Activate the translation option to translate transcripts into another language.
Manual Start: Activate this option to start the translation manually.
Automatic Start: Activate this option to translate transcripts automatically.
Language: Select the target language into which the transcript is to be translated.
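As a brief illustration only, the three translation options above can be modeled as a small settings object; the function and field names are hypothetical and do not reflect the product's internal configuration:

```python
# Hedged sketch of the Translation section of an analytics rule; all
# names here are illustrative assumptions.

def build_translation_settings(enabled, start="manual", target_language="en"):
    """Model the translation options: on/off, start mode, target language."""
    if start not in ("manual", "automatic"):
        raise ValueError("start must be 'manual' or 'automatic'")
    return {
        "translationEnabled": enabled,
        "start": start,                  # Manual Start vs. Automatic Start
        "targetLanguage": target_language,
    }
```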
Click on the button Save as new rule to save the entries. Click on the button Cancel to discard the settings.
Save changes of an assigned analytics rule
Click on the button Save.
If the rule is already assigned to a user or group, the following message is displayed:
Click on the button Yes to save the changes. Click on the button No to discard the changes.
Alternatively, you can create a new rule.
Click on the button Save as new rule to save the changes as a new rule. Click on the button Cancel to discard the changes.