Building a Real Time .NET Transcription Service

Published December 30, 2019 by Steve Lorello

Building speech-to-text transcription services has never been easier. In this demo you will build an extremely simple, yet powerful, real-time transcription service in ASP.NET using the Vonage .NET SDK and Microsoft Azure's Speech SDK.


Prerequisites

  • Visual Studio 2019 version 16.3 or higher
  • A Vonage account
  • An Azure account
  • Optional: ngrok for test deployment

Create Azure Cognitive Services Resource

  • Go to your Azure Dashboard
  • Open the Hamburger menu and select “Create Resource”
  • Search for Speech and Create a Speech Resource:

    Speech Resource page
  • Fill out the create form. For demonstration purposes, you can use the following inputs:
    • Name: TranscriptionTest
    • Subscription: pay-as-you-go
    • Location: East US
    • Pricing Tier: F0
    • Resource Group: Transcription

This will take some time to spin up. Once it’s deployed you are going to navigate to the Quick Start Section to gather your access keys. You are looking for the Key1 value, so search for the highlighted section:

Quick Start Section

Save this key value offline somewhere for the moment.

Building the Real Time Transcription Service

Project Setup

Open Visual Studio and create a new ASP.NET Core Web Application project, named, for example, "TranscriptionBlogPostDemo".

Now, you’ll create an MVC web application for this demonstration in ASP.NET Core 3.0.

Web Application Type Selection

With this created, add the following NuGet packages to the project:

  • Nexmo.Csharp.Client
  • Newtonsoft.Json
  • Microsoft.CognitiveServices.Speech

Transcription Engine

With these packages installed, create a new class called TranscriptionEngine.

First, define some constants for the Speech SDK and for managing the WebSocket buffer.
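As a sketch, assuming the Key1 value you saved from the Azure portal, the eastus region, and a 4 KB receive buffer (the key string and buffer size here are placeholders, not fixed requirements), the constants might look like this:

```csharp
// Placeholder values - substitute the Key1 value you saved from the
// Azure portal and the region your Speech resource was created in.
private const string SUBSCRIPTION_KEY = "YOUR_AZURE_SPEECH_KEY";
private const string REGION = "eastus";

// Size of the buffer used when reading audio off the WebSocket.
private const int BUFFER_SIZE = 1024 * 4;
```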

Next, add the following fields to the class:

  • _config – this will hold the subscription/regional info of the speech analyzer. The region in the demo is eastus – derived from the region you configured your speech service for. For a mapping of region to input string see the Microsoft Azure Speech Service Supported Regions Documentation
  • _inputStream – this is going to be a push Stream that will serve as the buffer that will be streamed over to the Azure Speech-to-text service
  • _audioInput – this will be the input for the speech recognizer
  • _recognizer – this will be the recognizer that will perform the speech recognition task

These can be defined like so:
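One possible shape for these fields, assuming the constants above and the `Microsoft.CognitiveServices.Speech` and `Microsoft.CognitiveServices.Speech.Audio` namespaces (the 8 kHz, 16-bit, mono PCM format matches the audio the Voice API will stream over the socket):

```csharp
using Microsoft.CognitiveServices.Speech;
using Microsoft.CognitiveServices.Speech.Audio;

public class TranscriptionEngine
{
    // Subscription and region info for the speech analyzer.
    private readonly SpeechConfig _config =
        SpeechConfig.FromSubscription(SUBSCRIPTION_KEY, REGION);

    // Push stream serving as the buffer streamed over to the Azure
    // Speech-to-text service; 16-bit linear PCM at 8 kHz, mono.
    private readonly PushAudioInputStream _inputStream =
        AudioInputStream.CreatePushStream(
            AudioStreamFormat.GetWaveFormatPCM(8000, 16, 1));

    // Input for the speech recognizer, built from the push stream.
    private readonly AudioConfig _audioInput;

    // The recognizer that performs the speech recognition task.
    private SpeechRecognizer _recognizer;
}
```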

Since several of the fields are IDisposable, have this class implement IDisposable and dispose of each disposable field on teardown.
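A minimal Dispose implementation, assuming the field names sketched in this post:

```csharp
public void Dispose()
{
    // Dispose the recognizer first, then the audio inputs it reads from.
    _recognizer?.Dispose();
    _audioInput?.Dispose();
    _inputStream?.Dispose();
}
```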

Then add a constructor that will initialize the _audioInput with the push input stream defined above:
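The constructor only needs to wire the push stream up as the recognizer's audio source:

```csharp
public TranscriptionEngine()
{
    // Use the push input stream as the recognizer's audio input.
    _audioInput = AudioConfig.FromStreamInput(_inputStream);
}
```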

Next, add the method that will listen for speech recognition events from the recognizer.
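A minimal handler might look like this; it writes finished utterances to the debug console via `System.Diagnostics`, and the name `RecognizerRecognized` is simply the convention used in this post:

```csharp
using System.Diagnostics;

private void RecognizerRecognized(object sender, SpeechRecognitionEventArgs e)
{
    // Only fully recognized utterances carry final text; intermediate
    // hypotheses are raised via the Recognizing event instead.
    if (e.Result.Reason == ResultReason.RecognizedSpeech)
    {
        Debug.WriteLine($"Recognized: {e.Result.Text}");
    }
}
```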

From here, you can add a function to stop and start the speech recognizer.

The start method accepts a language string, sets the language of the SpeechConfig, initializes the recognizer with the config and audio input source, registers the RecognizerRecognized event you created earlier, and starts a Continuous Recognition.
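A sketch of the start method described above (the method name is illustrative):

```csharp
public async Task StartSpeechTranscriptionEngine(string language)
{
    // Set the recognition language, e.g. "en-US".
    _config.SpeechRecognitionLanguage = language;

    // Build a fresh recognizer from the config and audio input source.
    _recognizer = new SpeechRecognizer(_config, _audioInput);

    // Register the event handler created earlier and start recognizing.
    _recognizer.Recognized += RecognizerRecognized;
    await _recognizer.StartContinuousRecognitionAsync();
}
```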

Your stop method will unregister the RecognizerRecognized event and stop the recognizer.

NOTE: The StopContinuousRecognitionAsync can take upwards of 20 seconds as there isn’t a mechanism to cancel the currently running input stream as of this writing. This demo explicitly mitigates this issue by not reusing the recognizer between calls, and not blocking the shutdown of the socket for this to complete.
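The stop method, again as a sketch with an illustrative name:

```csharp
public async Task StopSpeechTranscriptionEngine()
{
    if (_recognizer != null)
    {
        // Unregister the handler, then stop recognition. As noted above,
        // StopContinuousRecognitionAsync may take a while to return.
        _recognizer.Recognized -= RecognizerRecognized;
        await _recognizer.StopContinuousRecognitionAsync();
    }
}
```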

The final task for this class is to receive audio over the WebSocket you will be setting up and push it through to the PushAudioStream created earlier. This method is awaited after the WebSocket is established and continues until the WebSocket is closed.

NOTE: The initial buffer you get back from the websocket will contain metadata for the call—and should you wish, you may extract this data from the first ReceiveAsync—for the sake of the demo, this is not done, as the buffer and Recognizer are robust enough to manage.
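A sketch of that receive loop, using `System.Net.WebSockets`; per the note above, the initial metadata message is pushed through to the stream along with the audio rather than parsed:

```csharp
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

public async Task ReceiveAudioOnWebSocket(WebSocket webSocket)
{
    var buffer = new byte[BUFFER_SIZE];
    var result = await webSocket.ReceiveAsync(
        new ArraySegment<byte>(buffer), CancellationToken.None);

    while (!result.CloseStatus.HasValue)
    {
        // Push the received bytes into the recognizer's input stream.
        _inputStream.Write(buffer, result.Count);
        result = await webSocket.ReceiveAsync(
            new ArraySegment<byte>(buffer), CancellationToken.None);
    }

    await webSocket.CloseAsync(result.CloseStatus.Value,
        result.CloseStatusDescription, CancellationToken.None);
}
```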

Setting up WebSockets in Your App

Open Startup.cs.

In the Configure method, enable WebSockets on the server and add a piece of middleware that accepts inbound WebSocket connections and hands them to the TranscriptionEngine to receive audio over the socket.
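A sketch of that middleware; the `/ws` path is an assumption for this post and must match the WebSocket URI you put in the NCCO, and the engine's method names match the sketches earlier:

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseWebSockets();

    // Accept inbound WebSocket connections on /ws and feed the socket's
    // audio to a fresh TranscriptionEngine for the lifetime of the call.
    app.Use(async (context, next) =>
    {
        if (context.Request.Path == "/ws" && context.WebSockets.IsWebSocketRequest)
        {
            using var socket = await context.WebSockets.AcceptWebSocketAsync();
            using var engine = new TranscriptionEngine();
            await engine.StartSpeechTranscriptionEngine("en-US");
            await engine.ReceiveAudioOnWebSocket(socket);
            await engine.StopSpeechTranscriptionEngine();
        }
        else
        {
            await next();
        }
    });

    // ...the rest of the scaffolded Configure method (UseRouting,
    // UseEndpoints, etc.) stays as generated.
}
```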

Setting up the Voice Controller

The last piece of code to implement is a Voice Controller. Add a new controller under the Controllers folder and name it VoiceController. Add a constant string for the BASE_URL of your service.

Your Voice Controller will have two HTTP endpoints: a POST request for the event webhook, and a GET request for the answer webhook. The GET request is going to construct an NCCO with a single connect action, which will instruct the Voice API to open a WebSocket to your server and push the audio stream back over that socket. Set the content type to 16-bit linear PCM at 8 kHz. See below:
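One way to build that controller, constructing the NCCO with Newtonsoft.Json (the `/ws` URI path is the assumption used in this post's WebSocket middleware, and BASE_URL is a placeholder until you have your ngrok host):

```csharp
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json.Linq;

[Route("[controller]")]
public class VoiceController : Controller
{
    // Host of your ngrok tunnel, without the scheme - filled in later.
    private const string BASE_URL = "YOUR_NGROK_HOST";

    // GET /voice/answer - returns an NCCO with a single connect action
    // pointing the call's audio at the /ws WebSocket endpoint.
    [HttpGet("answer")]
    public IActionResult Answer()
    {
        var ncco = new JArray
        {
            new JObject
            {
                ["action"] = "connect",
                ["endpoint"] = new JArray
                {
                    new JObject
                    {
                        ["type"] = "websocket",
                        ["uri"] = $"ws://{BASE_URL}/ws",
                        // 16-bit linear PCM at 8 kHz
                        ["content-type"] = "audio/l16;rate=8000"
                    }
                }
            }
        };
        return Content(ncco.ToString(), "application/json");
    }

    // POST /voice/events - the Voice API posts call events here; this
    // demo simply acknowledges them.
    [HttpPost("events")]
    public IActionResult Events() => Ok();
}
```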

Getting Up and Running

Setting up IIS Express

Open the properties dialog for your project and, under Debug, take note of the port number. For this demo, disable SSL, which will make setting up ngrok easier.

Setting up the Ngrok Tunnel

For the Voice API to forward the Event/Answer webhooks, you need to expose the site to the internet. For testing purposes, you can use ngrok to expose your IIS Express port. Open up your command line and use this command, replacing PORT_NUMBER with your IIS Express instance's port number:

ngrok http --host-header="localhost:PORT_NUMBER" http://localhost:PORT_NUMBER

This command produces an output like this:

ngrok config

Setting up the Vonage Voice Application

The next step is going to be setting up a Vonage Voice Application.

  • Navigate to your Vonage Dashboard
  • In the left-hand pane open Voice, and click on ‘Create an application’
  • Name the application, e.g. ‘TranscriptionTest’
  • Under Capabilities enable Voice
  • For Event URL add base_url_of_ngrok_tunnel/voice/events
  • For Answer URL add base_url_of_ngrok_tunnel/voice/answer
  • For Fallback answer URL add base_url_of_ngrok_tunnel/voice/answer

Final Touches

Now that you have the ngrok URL, change the BASE_URL in the VoiceController file to that URL (excluding the 'http://').

With this, you are up and running. Call the Vonage number linked to your application, and the app will transcribe your speech to the debug console.

