Preparing Multi-Brand and Multi-Environment Tests

When designing a chatbot test strategy from scratch, there are often requirements like:

  • We are running a chatbot for all of our five brands. The conversations are basically the same for each brand, but there are slight differences.
  • We have a development, a test and a production environment, and we have to test on all of them.

If you recognize yourself in one of the above points, then read on. This article presents a best-practice technique to prepare Botium Box for these requirements.

Features and Techniques Overview

For setting up Botium Box we will use several techniques that are independent of each other, but incredibly powerful in combination.

Wildcard Matching

When asserting chatbot answers, wildcards ("jokers") can be used to accept any text. This is not specific to Botium, but it comes in handy when asserting content for different brands.
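
For illustration (the utterance itself is made up), a bot assertion with a wildcard looks like this in BotiumScript:

#bot
Welcome to *, nice to meet you!

The * matches any text at this position, so the same assertion works for every brand.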

Scripting Memory

With the Botium Scripting Memory it is possible to inject dynamically generated or static values into your test cases. We will use this concept to set different conversation parameters for each environment your tests should run against.
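
As a quick preview (the variable name is the one used later in this article), a convo can reference a scripting memory variable, and Botium replaces it with the concrete value at runtime:

#bot
Hello, this is $chatbot_name!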

Test Set Dependencies

In Botium Box it is possible to define dependencies between test sets and combine them into a single test set. We will use this technique to separate the different requirements into individual test sets and combine them as needed.

Environment-specific Test Project Capabilities

In Botium Box it is possible to define environment-specific capabilities which will be merged with the chatbot capabilities. So it is sufficient to define the basic chatbot capabilities only once, and then add environment-specific adaptations on Test Project level (e.g. selecting a different IBM Watson Assistant workspace or a different HTTP endpoint).
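
Conceptually, the merge works on capability name level, with the Test Project value taking precedence. A minimal sketch (the capability name is the one documented for the Botium Watson connector, the values are placeholders):

# Defined once on the chatbot:
WATSON_WORKSPACE_ID: <development workspace id>

# Added on Test Project level, overriding the chatbot value:
WATSON_WORKSPACE_ID: <test workspace id>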

Resulting Test Suite

In the end there will be several new objects in Botium Box:

  • There will be only one chatbot defined
  • There will be one shared test set holding the test cases valid for all brands (with placeholders)
  • For each brand, there will be a brand-specific test set with brand-specific test cases and brand-specific scripting memory
  • For each combination of brand and environment you have to run your tests on, there will be one test project combining:
    • the chatbot, enhanced with environment-specific capabilities
    • the brand-specific test set with the scripting memory files
    • the shared test set
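
Sketched as a plain-text hierarchy, with two brands and a DEV and TEST environment as an example, the structure looks like this:

Chatbot (defined once, pointing to the development workspace)
Test Sets
  Shared Convos (brand-neutral convos with placeholders)
  Params BRAND-1 (scripting memory for brand 1)
  Params BRAND-2 (scripting memory for brand 2)
Test Projects
  BRAND-1 DEV  = Chatbot + DEV capabilities  + Shared Convos + Params BRAND-1
  BRAND-1 TEST = Chatbot + TEST capabilities + Shared Convos + Params BRAND-1
  BRAND-2 DEV  = Chatbot + DEV capabilities  + Shared Convos + Params BRAND-2
  BRAND-2 TEST = Chatbot + TEST capabilities + Shared Convos + Params BRAND-2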

Step By Step

Now comes the interesting part - follow these steps to set up the basic structure in Botium Box.

1. Connect to IBM Watson Assistant

In this example, we will use IBM Watson Assistant, but the same principle works for all supported technologies. We connect the chatbot to the Assistant's development workspace, so we can use it for developing the test cases. When running the test cases later, we will connect to the environment-specific Assistant workspaces by overwriting this setting on Test Project level.
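
For reference, such a connection boils down to a handful of capabilities. A minimal sketch (the capability names are the ones documented for the Botium Watson connector, the values are placeholders):

CONTAINERMODE: watson
WATSON_APIKEY: <IBM Cloud API key>
WATSON_URL: <Watson Assistant service endpoint>
WATSON_WORKSPACE_ID: <id of the development workspace>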


2. Create a Shared Convos Test Set with Wildcards

Create a test set named Shared Convos in Botium Box and add the first convos in the Visual Convo Designer. The convos should reflect the conversation structure, and they should be kept free of any brand-specific content by using wildcards.

Here we have a convo named TC_HELLO, which sends a default greeting to the chatbot and expects a default greeting back:

Here is the corresponding BotiumScript (for Copy&Paste):

TC_HELLO

#me
Hello

#bot
Hello, this is Heinz, the chatbot of *. How can I help you ?

Note the use of the * as a wildcard - this is the spot where the brand name would be shown.

3. Create Brand-Specific Test Sets with Scripting Memory (optional)

The above test case asserts that

  • the chatbot introduces itself as Heinz, and
  • some brand name is included.

But for your brands, you want to make sure that

  • each brand can choose a different name for the chatbot, and
  • the actual brand name is asserted (instead of just accepting anything with the wildcard).

For each of the brands, create a new test set. The brand-specific parameters will be saved in Scripting Memory files. Create a test set named Params BRAND-1 and add a Scripting Memory file in YAML format:

In this file we define the variables that we will use in our test cases, like the chatbot name and the brand name:

scriptingMemory:
  - header:
      name: heinz
    values:
      $chatbot_name: Heinz
      $brand_name: My first brand

For another brand, the test set Params BRAND-2 can look roughly the same, but with different variable values:

scriptingMemory:
  - header:
      name: anna
    values:
      $chatbot_name: Anna
      $brand_name: Another brand

Enable the Scripting Memory for these test sets in the Settings / Scripting Settings section:

  • Enable the switch Enable Scripting Memory
  • Enable the switch Expand Scripting Memory

4. Adapt Shared Convos with brand-specific content (optional)

The shared test cases from above now have to be changed to use the scripting memory placeholders for the chatbot name and the brand name, instead of just a wildcard. Replace the corresponding spots in the test case with the variable names:

TC_HELLO

#me
Hello

#bot
Hello, this is $chatbot_name, the chatbot of $brand_name. How can I help you ?

This means that before a test case is run, the variables are filled in from the scripting memory files you defined upfront, replacing them with concrete chatbot and brand names for the assertions.
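
For example, with the Params BRAND-1 scripting memory from above, the convo is expanded to the following before the assertion runs:

#me
Hello

#bot
Hello, this is Heinz, the chatbot of My first brand. How can I help you ?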

5. Create Test Projects

Now go to Test Projects / Register Test Project to combine everything from above and apply environment-specific settings.

  • As Test Project Name choose something like BRAND-1 DEV
  • Select the chatbot
  • Select the Shared Convos Test Set and the brand-specific Params BRAND-1 Test Set
  • Save the Test Project and immediately head over to the Settings tab


In the Advanced Settings section, it is now possible to overwrite the capabilities from the chatbot with environment-specific settings. You can find the names of the capabilities (the basic configuration items for the Botium connectors) either in the connector documentation, or in the Advanced Mode of the chatbot connector settings.

In this case, we have to overwrite the IBM Watson Assistant workspace ID to connect to a different workspace.
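
The capability name WATSON_WORKSPACE_ID is the one documented for the Botium Watson connector; the workspace id shown here is a made-up placeholder:

WATSON_WORKSPACE_ID: 11111111-2222-3333-4444-555555555555

Repeat the above steps for the other brands and environments, and name the Test Projects accordingly (BRAND-2 TEST, BRAND-1 PROD, ...).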

Everything is ready now for running your brand- and environment-specific test cases.

Conclusion

In this article you learned how to use Botium Box to prepare a test suite that tests a chatbot for multiple brands on multiple environments without duplicating test cases, keeping the effort for future maintenance as low as possible.