UserBit's transcription module supports a host of languages, with advanced models for a few of them. You can find the list of supported languages here.
The transcription model currently supports speaker recognition for English, Chinese (Mandarin), French, German, Italian, Portuguese, Russian, and Spanish (Spain).
For these languages, UserBit splits the transcription into separate speakers:
Pro tip: You can click on a speaker to assign a participant or stakeholder to them. That way, any highlights created within the speaker's text will automatically be attributed to the attached participant/stakeholder.
The play button next to the speaker name plays exactly the part of the video that corresponds to that section of the transcript.
As you scroll down the transcript, a companion video player scrolls with you to help with your analysis. When you click on the video seek bar to jump to a certain part of the video, the transcript scrolls to the corresponding place as well.
Similarly, when you click on a speaker's section, the video seeks to the relevant position. Moreover, the highlighted words in the transcript show you exactly what is being said in the video at that moment.
You can edit any part of the transcript. Click on it once to activate edit mode, which lets you change any word or passage. Keep in mind, though, that once a transcript is edited, it might no longer be in perfect sync with the video.
Adding a tag to a part of the transcript automatically creates a video clip spanning the highlighted portion of speech. The clip is stored with the highlight and can be used in reports or insights.