Your Accuracy rate shows you how accurately your chatbot responded to your chatbot users' messages. You can use your Accuracy rate to gauge how well your NLP training phrases match the messages your chatbot receives.
The Accuracy rate shows your chatbot's performance as a percentage and as the actual count for this review:
- Expected answers indicate where the chatbot replied with the correct response.
- Unexpected answers indicate where the chatbot did not correctly match the NLP question and replied with the wrong passage.
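For illustration, here is a minimal sketch of how a percentage like this can be derived from the two counts. The function name and the handling of an empty review are assumptions made for this example, not the product's actual calculation:

```python
def accuracy_rate(expected: int, unexpected: int) -> float:
    """Accuracy rate as a percentage of the answers counted in a review.

    Assumes only counted interactions are passed in; excluded fallback
    interactions (see below) never reach this calculation.
    """
    total = expected + unexpected
    if total == 0:
        return 0.0  # assumed behaviour for a review with no counted answers
    return 100 * expected / total


# Example: a review with 88 expected and 12 unexpected answers.
print(f"{accuracy_rate(88, 12):.1f}%")  # -> 88.0%
```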
You can open a more detailed view of the data in a new window.
Your Accuracy rate doesn't consider whether the chatbot had the content it needed, only whether it chose correctly from the content it had. If you marked a lot of responses as needing to go to a different passage (but not a new passage), this score will be a little lower.
Your Accuracy rate doesn't include interactions where the chatbot sent a fallback because:
- The chatbot user sent a garbled message.
- The chatbot user sent a message the chatbot is not intended to handle.
- The chatbot did not have the correct content to respond to the user.
The Accuracy rate does include instances where the chatbot sent a fallback when it should have matched the chatbot user's message to an existing NLP question.
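As a rough sketch of these inclusion rules, the example below models the four fallback cases described above. The enum and function names are invented for this illustration; only the split between excluded and included cases comes from the text:

```python
from enum import Enum, auto


class FallbackReason(Enum):
    GARBLED_MESSAGE = auto()        # excluded from the Accuracy rate
    OUT_OF_SCOPE = auto()           # excluded
    MISSING_CONTENT = auto()        # excluded
    MISSED_EXISTING_MATCH = auto()  # included: should have matched an NLP question


EXCLUDED_REASONS = {
    FallbackReason.GARBLED_MESSAGE,
    FallbackReason.OUT_OF_SCOPE,
    FallbackReason.MISSING_CONTENT,
}


def counts_toward_accuracy(reason: FallbackReason) -> bool:
    """Return True only for fallbacks where the chatbot should have
    matched an existing NLP question; those count in the Accuracy rate."""
    return reason not in EXCLUDED_REASONS
```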
Remember, each review depends not just on your chatbot, but also on what your chatbot users happened to send this time around. Don't worry if your Accuracy rate fluctuates a little between reviews.
You can improve your chatbot's Accuracy rate by generating training phrase suggestions from your finalised reviews.