For most businesses, the story generally goes as follows: a customer calls to complain, praise, or ask for help; the call is recorded for training or assessment; then a recording is usually chosen at random, listened to by someone, and reviewed with the customer service representative. In this article, I'll walk you through how we can analyze call records with machine learning.
Analyzing a call record this way can take anywhere from an hour to a week after the customer hangs up. During this time, a lot can go wrong: compliance issues and poor service can leave you with unhappy customers. I'll show you how to work smarter, not harder, and identify problems as they arise. What most developers don't realize is that the complex pieces are already built into Google Cloud Platform.
There are three essentials to look for when analyzing call records:
- Identity – clearly separate the people on the call.
- Sentiment – are the speakers generally positive or negative in the interaction?
- Trigger words – have any words or phrases been spoken that merit closer examination?
Analyze Call Records with Machine Learning & Python
Let's complicate things a bit and evaluate single-channel phone-call audio. That means we are dealing not only with call-quality audio, but also with audio where both callers share a single channel, which makes it much harder to tell who is speaking and when.
A Google Cloud Function is the easiest way to trigger code at scale whenever a file is uploaded to Cloud Storage. Setting one up for this is simple and straightforward.
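As a sketch, such a function can be deployed with the gcloud syntax of this library era. The function name (handle_call) and bucket name (my-call-recordings) are placeholder assumptions, not from the original post.

```shell
# Deploy a Python Cloud Function that fires whenever an object is
# finalized (finished uploading) in the bucket.
gcloud functions deploy handle_call \
  --runtime python37 \
  --trigger-resource my-call-recordings \
  --trigger-event google.storage.object.finalize
```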
Let’s start with the requirements.txt file and the imports to analyze call records:
google-cloud-speech==1.3.2
google-cloud-storage==1.27.0
pathlib
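The imports implied by that requirements file look roughly like this; the exact import lines are a sketch, since the original post does not show them. Note that on Python 3 `pathlib` is part of the standard library, so the PyPI pin is only needed for the Python 2 backport.

```python
import pathlib  # stdlib on Python 3; the PyPI "pathlib" pin is a py2 backport

try:
    # v1p1beta1 carries speaker diarization and the phone_call model
    from google.cloud import speech_v1p1beta1 as speech
    from google.cloud import storage
except ImportError:
    # Allows importing this module locally without the GCP packages installed.
    speech = storage = None
```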
Getting the Call Recording to Analyze
Because the Cloud Function is triggered by a google.storage.object.finalize event in GCS, it receives a dictionary containing data specific to this event type.
Getting the file path is as easy as reading the object's name from the event dictionary (event['name']). Together with the bucket name, that lets us build a gs:// URI, which works with the various Google AI services.
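A minimal sketch of that entry point, assuming a handler named handle_call (the function name and the mp3-only filter are assumptions consistent with the steps described here):

```python
import pathlib


def gcs_uri(event):
    """Build a gs:// URI from a google.storage.object.finalize event dict."""
    return f"gs://{event['bucket']}/{event['name']}"


def handle_call(event, context):
    """Cloud Function entry point for new uploads."""
    # Only process mp3 uploads; ignore any other objects.
    if pathlib.PurePath(event["name"]).suffix.lower() != ".mp3":
        return
    uri = gcs_uri(event)
    # ...hand the URI to the Speech-to-Text client from here...
```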
Before I transcribe the audio, I want to make sure the upload is actually an audio file; in this example, I will only deal with mp3 audio. There are tons of configuration options to choose from, and I'll highlight a few. First, the sample rate in hertz is essential, and for phone recordings it is most often 8000. Second, because it's a phone call, the audio itself is different.
Google offers a dedicated machine learning model for phone-call audio that produces better transcriptions overall. Finally, for a correct setup, be sure to enable diarization and set the expected number of speakers on the call. If necessary, tune your phrase hints with proper nouns, business names, or phrases that may come up in the conversation.
For longer audio, such as entire phone conversations, the best practice is to use the client.long_running_recognize(config, audio) method, which performs asynchronous speech recognition.
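Putting the configuration and the long-running call together gives a sketch like the one below. The phrase hints, speaker count, and the group_by_speaker helper are illustrative assumptions; the field names follow the google-cloud-speech 1.3.2 / v1p1beta1 API pinned above.

```python
def group_by_speaker(words):
    """Collapse (speaker_tag, word) pairs into 'Speaker N: ...' lines."""
    runs = []
    for tag, word in words:
        if runs and runs[-1][0] == tag:
            runs[-1][1].append(word)  # same speaker keeps talking
        else:
            runs.append((tag, [word]))  # speaker change starts a new line
    return "\n".join(f"Speaker {tag}: {' '.join(ws)}" for tag, ws in runs)


def transcribe(gcs_uri):
    """Transcribe a single-channel phone-call mp3 stored in GCS."""
    # Imported inside the function so the module loads even where the
    # google-cloud-speech package is unavailable.
    from google.cloud import speech_v1p1beta1 as speech

    client = speech.SpeechClient()
    config = speech.types.RecognitionConfig(
        encoding=speech.enums.RecognitionConfig.AudioEncoding.MP3,
        sample_rate_hertz=8000,            # typical for phone recordings
        language_code="en-US",
        use_enhanced=True,
        model="phone_call",                # phone-call-tuned model
        enable_speaker_diarization=True,
        diarization_speaker_count=2,       # assumed: agent + customer
        speech_contexts=[speech.types.SpeechContext(
            phrases=["refund", "cancel", "supervisor"])],  # assumed hints
    )
    audio = speech.types.RecognitionAudio(uri=gcs_uri)
    operation = client.long_running_recognize(config, audio)
    response = operation.result(timeout=3600)  # wait for transcription
    # With diarization enabled, the last result aggregates every word
    # with its per-word speaker_tag.
    words = [(w.speaker_tag, w.word)
             for w in response.results[-1].alternatives[0].words]
    return group_by_speaker(words)
```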
After transcription, I check the transcript for any keyword triggers and, if one is found, send the transcript to Slack for an immediate notification via a send_slack(transcript, filename, keyword) helper.
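A minimal sketch of that keyword check and Slack alert, assuming an incoming-webhook URL; the keyword list and message layout are assumptions, since the original post does not show the implementation.

```python
import json
import urllib.request

KEYWORDS = ["cancel", "lawsuit", "refund"]  # example trigger words


def find_keywords(transcript, keywords=KEYWORDS):
    """Return the trigger words that appear in the transcript."""
    lowered = transcript.lower()
    return [kw for kw in keywords if kw in lowered]


def send_slack(transcript, filename, keyword, webhook_url):
    """Post an alert to a Slack incoming webhook."""
    payload = {"text": f"Trigger word '{keyword}' in {filename}:\n{transcript}"}
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # fire-and-forget notification
```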
I hope you liked this article on how to analyze call records with machine learning using Google Cloud Platform. Feel free to ask your valuable questions in the comments section below.