What data do we need for gamification design?

We rarely receive this question; usually we have to tell the client which data we need. This week, however, a client asked it. To me, that is a sign that buyers of gamification design services are becoming savvier about what it takes to make a design work.

What we always need for every project is the purpose, the reason why, and the intended target audience. Then, depending on the type of project, different data inputs can become invaluable. Data always tells a story: what you measure gets focus, and what you don't measure is deemed of lesser importance, rightly or wrongly.

If you want to start changing behaviour, it would be strange not to look at the data too. This is something I learned early in my consulting career, working on a cultural change project for a big retail chain. We wanted staff to be more customer-focused instead of product-focused, even though product focus was first and foremost what they had been hired for. For years, managers had been trained to reinforce putting clothes out neatly, keeping the store tidy and offering quick service at the tills. These things still mattered, but the key was now to offer customers sales advice and look for opportunities to upsell, cross-sell or just basically sell. Initially, the store performance metrics were not changed. So guess what? Managers and staff stuck to their old ways of working. When we then added customer service ratings and multi-product sales to the metrics, and gave staff clear expectations of what a good day of selling looked like for their team, behaviour began to shift. Managers were the hardest to change, until we pointed out that their own measures should also change accordingly. I had managers on quests to spot their people selling well and give them positive feedback when they were impressed.

With those early insights in the back of my mind, I always start a project from the perspective that understanding the data will give me more than just an intuitive insight into what is alive in a business. So here are some lists of data I would look for, depending on the type of project we are working on. Depending on the reason for the project there may be more or less, so these lists are non-exhaustive and non-prescriptive.

If we are working on learner engagement projects:

  • learning-related data on which courses are working well
  • where people drop out
  • satisfaction ratings
  • how proof of learning is measured
  • relevant key performance indicators for employees, including the learning and development team
  • surveys relating to learning and performance

If we are working on employee engagement projects:

  • employee engagement surveys
  • competency frameworks
  • performance structures and key performance indicators
  • exit interview data
  • attrition rates
  • clock out times
  • absences and their reasons

If we are working on customer engagement projects:

  • purchases
  • repeat customers and number of times they return
  • store traffic in a physical store; Google Analytics and heatmaps for online traffic
  • lines that sell most
  • communication
  • engagement and interactions
  • staff performance indicators
  • incentives both for staff and clients

The lists can be longer or shorter depending on the purpose of the project. What we learn from these items is which behaviours happen no matter what, and that gives us a good grounding in the current story. We then back this up with our own surveys and interviews, to validate some of what we have been told and to form a picture of the people we are designing for. I think it is my original marketing degree, with its heavy focus on market research and on understanding an organisation's strategy, that has stayed with me. I also know that in order to drive change, we need to address some of the numbers as well.

I would be interested to hear what data you look for when you are working on changing behaviours.

Simple and balanced may make for a dry outcome

As we work on some game designs, the conversation around keeping a game simple and at the same time balanced is a hard one to get right. The game geeks want to add as many features as possible to keep it interesting, while the less geeky you are, the simpler you want the game to be. It makes for an interesting discussion.

I think enthusiasm sometimes gets the better of people when designing a game, and at the end of a day or evening, every feature adds something. The emotional attachment to what was added then has a role to play when it comes time to drop some features and simplify the gameplay.

Personal preferences for the types of games we like or dislike are important to set aside in game design; they can inhibit your choices and limit the type of game you come up with. We had a recent request for a collaborative game with some features of Forbidden Island. However, one of the team decided that it was a boring game and too easy, so they added a few features from Shadows over Camelot, which definitely enhanced the gameplay.

When it came to playtesting, some of these features proved complicated and perhaps not all essential. So we had to go back and decide which ones could be dropped without significantly altering the balance of the game. Some game elements were essential, others negotiable. The point system is the hardest to change around.

In all of our gamification and game design, balancing is the most difficult thing to get right. Changes to graphics, wording and additional fun features can always be made. What you reward and punish, however, is typically what changes a game. If we take out core scoring mechanics, we need to rethink how the game is played. The scoring mechanics are a reflection of what is important to bring home to a player and of what makes the game a challenge to win.

In our work, mainly in the space of work-related games, we learn the most from failing, so most of our games need to be relatively hard to win. The first two rounds may be easy, but from the third onwards you may need to make resource decisions that are not optimal. In a test it is important to play all the way through, so that the win or lose conditions take effect. A game often reflects real-life conditions better when easy wins are avoided and people are made to think through the implications of scenarios and decisions.
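The "easy early rounds, hard later rounds" idea can be sketched as a simple cost ramp. This is a hypothetical illustration, not the scoring of any of our actual games; the function names and numbers are invented for the example.

```python
# Hypothetical difficulty ramp for a round-based resource game:
# rounds 1-2 cost the base amount, and from round 3 onwards the cost
# of the "optimal" action climbs, forcing non-optimal trade-offs.

def action_cost(round_number: int, base_cost: int = 1) -> int:
    """Cost of taking the optimal action in a given round."""
    if round_number <= 2:
        return base_cost          # easy opening rounds
    return base_cost + (round_number - 2)  # steadily harder afterwards

def can_afford_optimal(resources: int, round_number: int) -> bool:
    """Whether a player's per-round budget still covers the optimal play."""
    return resources >= action_cost(round_number)

# A player with a fixed budget of 3 per round sails through the opening,
# but at some point the optimal action slips out of reach.
per_round_budget = 3
for rnd in range(1, 7):
    print(rnd, action_cost(rnd), can_afford_optimal(per_round_budget, rnd))
```

Playing such a ramp all the way through is what makes the win/lose conditions bite: the player must decide what to sacrifice once the budget no longer covers the best move.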

The best advice in terms of balancing a game is to know what is core to the gameplay and what is negotiable and can be dropped if simplification is required. The instructions are crucial to whether your game is perceived as simple or complex. Cryptic instructions have bitten us a few times, and I try my best to keep a close eye on these as we work on them. Headers for key items, with simple directive language, work best.

From an accessibility perspective, the choice of colours, contrasts, avatars and game parts in a board game can affect access for people with visual or physical impairments. In a digital game it is most often the equipment requirements, and again the avatars and colours, that determine whether people relate to a game and whether they can play it at all.

Having an element of complexity is essential to keep the gamers involved, along with those who like a bit of an intellectual challenge. But if it takes more than 10 minutes to understand how to set up or get started with a game, in a digital world people will have deleted it and moved on 8 minutes ago, and in a board game scenario only the die-hards will continue. Testing is the only thing that can give you ultimate guidance on what to do and what works.

If all else fails, you start again with a new game…