Best Practice: Prepare for Large Test Sets
So you invested quite some time into chatbot test coverage by writing thousands of Botium test cases. That’s awesome. Here are some hints on how to structure your test sets to make it easier to manage a large number of test cases.
Selecting the Right File Format
The scripting language behind Botium test cases is called BotiumScript and supports several file formats, among them plain text and Excel.
When using the visual Convo Editor in Botium Box, the plain text format is used in the background.
While it might seem tempting to use Excel for writing your test cases because everyone out there is familiar with spreadsheets, this quickly becomes hard to keep track of and maintain once hundreds of test cases are spread over multiple worksheets.
The clear suggestion is to use one of the text-based formats and set up a Git integration with Botium Box.
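For reference, a minimal test case in the plain text format is just the convo name followed by alternating #me/#bot sections (the conversation content here is made up for illustration):

```
TC_GREETING

#me
hello

#bot
Hi! How can I help you today?
```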
Establish Naming Conventions
This is maybe a no-brainer. The suggested naming conventions are:
Utterance names should start with UTT_
Partial convo names should start with PCONVO_
Convo names should start with anything you choose for naming your test cases, for example TC04711_…
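Applied to BotiumScript files, these conventions could look like this (the file extensions shown are the ones commonly used for plain text BotiumScript; the TC number scheme is just an example):

```
UTT_GREETING.utterances.txt      - utterance file, first line is the utterance name
PCONVO_LOGIN.pconvo.txt          - partial convo file
TC04711_ORDER_STATUS.convo.txt   - regular convo file
```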
Convos and Utterances
Consider splitting your test cases into convo files and utterance files, if you haven't done so already. There are several benefits:
A clear separation of concerns between the conversational flow and the actual conversation content
Extensive NLP testing with Botium Coach requires a massive amount of user examples, and this is only feasible with separate utterance files
Some of the awesome tools in Botium Box work exclusively on the utterance level
It makes it easier to build up multi-lingual test scenarios (see below)
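A sketch of this split in the plain text format: the convo file only references the utterance list by name, and the utterance file holds the actual phrasings (one per line after the utterance name):

```
TC_ORDER_PIZZA.convo.txt:

TC_ORDER_PIZZA

#me
UTT_ORDER_PIZZA

#bot
Which toppings would you like?

UTT_ORDER_PIZZA.utterances.txt:

UTT_ORDER_PIZZA
I want to order a pizza
one pizza please
can I get a pizza
```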
With partial convos it is possible to reuse parts of a convo in multiple places. For software engineers: partial convos provide modularization and separation of concerns. Use them at least in these cases:
If all of your test cases share a common greeting / goodbye section, you can extract it into two partial convos that are included in the “real” test cases.
If you have to set up conversation context before starting some test cases, extract the setup process into a partial convo. A typical example is a user login process.
The awesome thing is that in partial convos you can do everything you can do in full convos, including utterance expansion, scripting memory, assertions, logic hooks and more.
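A partial convo file looks exactly like a regular convo file - only the naming (and the way it is used) differs. A minimal sketch of a greeting partial convo that other convos can include:

```
PCONVO_GREETING

#me
hello

#bot
Hi! How can I help you?
```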
Partial Convos in Botium Box
You can write partial convos in the visual convo editor or in the script editor.
For inserting a partial convo into a full convo, use the Include Partial Convo button.
All partial convos available in the test set are then shown in the selection box. You can even insert more than one partial convo in a section, which allows even finer-grained control over partial convo content.
There are three ways to use the Botium scripting memory:
Extract and Reuse Dynamic Content in Test Cases
With scripting memory variables it is possible to extract dynamic content from chatbot responses in test cases and reuse this dynamic content. See Scripting Memory Variables.
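As a sketch (assuming the scripting memory is enabled with the SCRIPTING_ENABLE_MEMORY capability): a $variable in a #bot section captures the matching part of the bot response, and the same variable can then be reused in later steps. The conversation content is made up for illustration:

```
TC_ORDER_STATUS

#me
place my order

#bot
Your order number is $orderNumber

#me
what is the status of order $orderNumber
```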
Inject Dynamic Content into Test Cases
With scripting memory functions it is possible to inject dynamic data into your test cases. Example usage:
Use current date and time in various formats
Generate random numbers in various formats
Inject system environment variables
Use test case metadata in the test cases themselves (test session name, test case name, …)
Extract custom payload
and more …
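For example, a registration test case could use the $random and $env scripting memory functions to generate a fresh username on every run (the chatbot behavior and the TEST_EMAIL environment variable are made up for illustration; see the scripting memory documentation for the full function list):

```
TC_REGISTER_USER

#me
register user botium_$random(5) with email $env(TEST_EMAIL)

#bot
Registration complete
```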
Parameterize Test Cases
With scripting memory files it is possible to set parameters for test cases and let them run multiple times with different parameters. See Scripting Memory Files.
This comes in handy when testing the same conversation flow with different content, for example testing a shopping workflow for various products.
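The convo itself just references the parameter as a variable; the scripting memory file (see Scripting Memory Files for the file format) supplies one value per run. As a sketch, with the values Bread and Cheese provided for $productName, this convo runs twice:

```
TC_ADD_TO_CART

#me
add $productName to my cart

#bot
$productName has been added to your cart
```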
Test Set Dependencies
In Botium Box it is possible to define dependencies between test sets. When running test sessions, all content from dependent test sets is loaded as well. Possible use cases:
When working on multiple chatbot projects, you can have basic test cases for greeting, goodbye, smalltalk etc. in a shared Smalltalk test set, and set it as a dependency for the specialized test sets
When working on multi-lingual chatbot projects you can have multiple test sets:
one shared test set holding the convo files with the conversational flow of the test cases - only utterance names are used here, no text content
one test set per supported language holding the utterance files for that language used in the convo files, and maybe some additional language-specific test cases
the language test sets then depend on the shared test set holding the conversational flow
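A sketch of this layout: the shared test set contains only the flow, while each language test set provides utterance files with the same utterance names (utterance references work in #bot sections as well):

```
Shared test set - TC_ORDER.convo.txt:

TC_ORDER

#me
UTT_ORDER

#bot
UTT_ORDER_CONFIRMED

English test set - UTT_ORDER.utterances.txt:

UTT_ORDER
I want to place an order

German test set - UTT_ORDER.utterances.txt:

UTT_ORDER
Ich möchte etwas bestellen
```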
Multi-Lingual Test Sets
For multi-lingual test scenarios, see the Test Set Dependencies section above.
Keep Test Sets Up to Date with Botium Crawler
Run Botium Crawler for the first time to detect all conversation flows
Copy the resulting test scripts into an empty test set and do a first-time export to the main branch of a linked Git repository
From time to time, re-run the Botium Crawler to detect new conversation flows
Copy the resulting test scripts into another empty test set and export to the same linked Git repository as before, but into a feature branch
Use Git pull requests to review the detected changes in the test set