Low AMD Rate


Overview

Often, a campaign is identified as having low accuracy before the steps needed to confidently make that claim have been taken.  This page covers the metrics that truly matter, how to gather them, and what to do once a real issue has been identified.

Gathering raw data

  1. In order to confidently describe the accuracy of a campaign, a real test campaign of roughly 500 calls needs to be run using a sample set of numbers that represents the production environment.  It is possible to catch common issues with a sample of roughly 100 calls, but as with all analyses, the larger the sample size, the more accurate the results.

  2. Once all of the calls have been made, each call needs to be manually tagged by a human listening to the audio recordings of the calls.  This step cannot be avoided and is quite time-consuming.

  3. After each call has been tagged, discard all calls that are neither a "Human" nor a "Machine".

  4. For the remaining calls, place each into one of the following four categories:

    1. True Negative = A Human that NCA correctly detected as a Human

    2. False Positive = A Human that NCA incorrectly detected as a Machine

    3. False Negative = A Machine that NCA incorrectly detected as a Human

    4. True Positive = A Machine that NCA correctly detected as a Machine

The table below will help illustrate these categories.

                    NCA: Human          NCA: Machine
  Actual Human      True Negative       False Positive
  Actual Machine    False Negative      True Positive

  • True = Actual and NCA result are Equal

  • False = Actual and NCA results are different

  • Positive = Answering Machine

  • Negative = Human
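The tagging steps above can be sketched as a short script.  This is a hypothetical sketch, not NCA code; the tag values "Human"/"Machine" and the call list are assumptions for illustration.

```python
from collections import Counter

def categorize(human_tag, nca_result):
    """Map a manually tagged call and NCA's disposition to a
    confusion-matrix cell.  Positive = Machine, Negative = Human."""
    if human_tag == "Human":
        return "TN" if nca_result == "Human" else "FP"
    else:  # human_tag == "Machine"
        return "TP" if nca_result == "Machine" else "FN"

# Hypothetical tagged calls: (what the human heard, what NCA asserted).
calls = [
    ("Human", "Human"), ("Human", "Machine"),
    ("Machine", "Machine"), ("Machine", "Human"),
    ("Machine", "Machine"), ("Fax", "Machine"),
]
# Step 3: discard anything that is neither a Human nor a Machine.
calls = [c for c in calls if c[0] in ("Human", "Machine")]
# Step 4: count each remaining call into its category.
counts = Counter(categorize(tag, result) for tag, result in calls)
print(counts)  # counts per confusion-matrix cell
```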


The Right Metric

What matters in terms of accuracy is what percentage of actual Human calls NCA correctly asserted as Human, and what percentage of actual AM calls NCA correctly asserted as an AM.  This means the numbers need to be analyzed according to the rows of the previous table.

If this issue is approached with either of the following statements in mind, the resulting analysis is likely to be accurate:

  • "Of all the calls that were actually Answering Machines, NCA correctly identified only 80% of them"

  • "NCA did not connect 15% of people who answered the phone to agents"

From a mathematical point of view, the percentages you are looking for are:

  • Human Detection Accuracy = TN / (TN + FP)

  • AM Detection Accuracy = TP / (TP + FN)

The Wrong Metric

If this issue is approached with either of the following statements in mind, the resulting analysis is likely to be incorrect, because you would be deriving statistics from the columns of the table above:

  • "Out of all the calls that NCA asserted were Answering Machines, 20% of them were People!"

  • "Of all the calls that made it through to my Agents, 15% of them were Answering Machines!"

Example

Below is an example table using sample data that is not uncommon to see.  Because the ratio of actual AMs to actual Humans is high, from the Agent's point of view it looks like NCA has terrible accuracy (~50%) when in fact the results are quite good.

The table below has the following stats:

  • Just over 50% of calls passed to an agent are connected in error

  • NCA has a 97% Human Detection Accuracy

  • NCA has a 90% AM Detection Accuracy

                    NCA: Human          NCA: Machine        Total
  Actual Human      97 (TN)             3 (FP)              100
  Actual Machine    100 (FN)            900 (TP)            1,000

(Counts are illustrative; any dataset with these ratios gives the same percentages.)

While monitoring accuracy from an Agent's point of view can be a good indication that there might be an issue, it is impossible to be confident that there is an accuracy issue without taking all of this information into account.
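The stats in the example can be reproduced with a few lines of arithmetic.  The specific counts below (100 actual Humans, 1,000 actual Machines) are illustrative assumptions chosen to match the stated percentages:

```python
# Illustrative confusion-matrix counts, chosen to match the example stats.
TN = 97    # Humans correctly detected as Human
FP = 3     # Humans incorrectly detected as Machine
TP = 900   # Machines correctly detected as Machine
FN = 100   # Machines incorrectly detected as Human

# The right metrics: read along the rows (by actual call type).
human_accuracy = TN / (TN + FP)   # 97% Human detection accuracy
am_accuracy = TP / (TP + FN)      # 90% AM detection accuracy

# The Agent's (column-based) view: of the calls NCA passed through
# as "Human", how many were actually Machines?
agent_error = FN / (TN + FN)      # just over 50% connected in error

print(f"{human_accuracy:.0%} {am_accuracy:.0%} {agent_error:.1%}")
# prints: 97% 90% 50.8%
```

Note how a strongly AM-heavy call mix drives the agent-side error above 50% even though both row-based accuracies are high.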

General Accuracy Issues

After running a test campaign and gathering the raw data, you may find that NCA is producing lower-than-expected results.

In these cases there are sometimes technical issues at the root, but other times NCA's default thresholds need to be modified to match the needs of a specific campaign.  It is important to note that modifying the threshold values often has unintended and undesirable consequences; they should only be changed if necessary.

Audio Issues

If there are any issues with the RTP we receive, NCA may have trouble correctly classifying the call.  This usually manifests as calls that should be classified as SIT tones instead being classified as an Answering Machine or Human, because there are strict rules on which tones need to be on the line.  Any artificial gap introduced by jitter or excessive packet loss will cause the SIT tone not to be detected; the subsequent "I'm sorry, the number you have dialed cannot be reached" message will then be detected as an Answering Machine.

Sometimes if there is excessive static or background noise on a call, NCA will detect a Human picking up the phone as an Answering Machine.

If you find that NCA is producing lower-than-expected results, it is important that a human listen to the call recordings to try to isolate these issues.

Excessive Thresholds

If the thresholds have been increased from their default values, it is possible that NCA will never be confident enough to assert a disposition.  This issue manifests as an excessive number of "Unknown" results, as the Post-Connect timer expires before the audio analysis finishes.

Alternatively, the Machine threshold is sometimes lowered in an attempt to increase Agent efficiency.  This modification does not have the desired effect and should be avoided.

Tuning Solution

Sangoma offers a service to optimize NCA's thresholds for specific outbound campaigns.  By running the logs and call recordings of roughly 500 Human and AM calls through an analysis, the settings that yield optimal results in terms of both speed and accuracy can be obtained.  This process is extremely time-consuming and should only be considered for massive campaigns.

Single Recording Issue

There is an extremely uncommon scenario where the default AM recording for an entire provider, either consistently or sporadically, causes NCA to assert that there is a Human on the line.  When this happens, the overall AM detection rate for the entire campaign is very low even though only one specific scenario is causing the problem.  These cases are usually due to unique practices put in place by the provider that involve custom "delays" in the SIP signalling.  Because NCA takes into account both the RTP and the SIP signalling, this mismatch makes the NCA result relatively unreliable.

In these cases we generally find that NCA is unable to be configured to increase accuracy.

Example

In order to help save customers money, when a caller hits an answering machine the provider waits 5 seconds before it starts billing, so the caller has a chance to hang up and not be charged.  This way, callers are charged if they leave a message but not charged if they don't.

  • NCA makes an outbound call attempt

  • The call starts ringing

  • The provider decides enough time has elapsed, and starts playing the AM message

  • While the audio is being played, we receive the 200 OK signalling that the call is connected

    • This happens before NCA hits the pre-connect AM detection threshold

    • There is audio on the line as we receive the 200 OK

      • This is an extremely unique scenario that does not occur normally

      • This is the technical reason why NCA would fail to come up with a reliable disposition

  • Due to the 200 OK arriving at slightly varied points in the recording, NCA comes up with inconsistent dispositions

    • NCA bases its analysis of the current set of packets on the audio and its previous confidence level

    • The AM message is technically identical on every call, but because of this timing variation NCA treats each one differently
