Happy Birthday, Doppio!

Believe it or not, Doppio turned two years old last week! We thought it would be a good time to take a moment to both look back at what we’ve accomplished so far, and ahead to where we’re looking to go in the future.

Never settle

Two years in, we’ve been fortunate to have some success with our first two titles, The Vortex and The 3% Challenge, both of which approach the challenge of designing fun and engaging voice games from slightly different angles. 

I wouldn’t say, however, that we’ve found the “perfect formula” for success here yet. Falling into complacency with existing design patterns and limiting ourselves to building more of the same games that already exist would be a terrible mistake in this space right now. The popular voice assistant platforms have evolved so rapidly in the last few years, particularly in adding new features for third-party developers, that playing it safe and passing up chances to try new and different things would inevitably mean a massive missed opportunity.


We’re only human

One of the key considerations in voice interface design is that people can say anything at any time. This is different from traditional graphical interface design, where the buttons and menus presented on screen somewhat constrain the choices available at any given moment.

Because of this, a good voice game needs to handle shifting back and forth between different contexts in a way that isn’t jarring to the player. And to make that happen, as a game studio, we need to empower our writers and game designers to build contextually aware content with as little friction and frustration as possible.

We took two pretty different approaches to this problem with our two games. On The Vortex, our writer worked primarily in Google Docs, using a variety of tables and colored highlighting to identify different branches and context-dependent sections. We then processed that into machine-readable chunks of SSML that could be loaded into the game. The result was a highly dynamic end product – every robot character in the game remembers your prior choices and has different responses and reactions to being asked to do each type of task – but the process of authoring all this content was painstaking and slow.
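To give a flavor of the output, here’s a minimal sketch of the kind of SSML chunk that pipeline produced – illustrative only, with a placeholder voice name and audio URL rather than actual game content:

```xml
<speak>
  <!-- A robot character reacting to a repeated request -->
  <voice name="Justin">
    <prosody rate="95%" pitch="-10%">
      I already scanned the cargo bay for you once.
      Are you sure you want me to do it again?
    </prosody>
  </voice>
  <audio src="https://example.com/sfx/robot-chirp.mp3"/>
</speak>
```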

On The 3% Challenge, our second game, we used the Alexa Skill Flow Builder (SFB), which empowered our writer and designers to work directly in a format that could run in our skill with minimal adjustments. 
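If you haven’t seen it, SFB’s authoring format is a plain-text, scene-based script that writers can edit directly. A simplified sketch (invented scenes and lines, not actual game content) looks roughly like this:

```
@challenge intro
    *say
        Welcome back, agent. Ready for today's challenge?
    *reprompt
        Are you ready to start?
    *then
        hear yes, ready, sure {
            -> first puzzle
        }
        hear no, not yet {
            -> goodbye
        }
```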

This was a massive help for productivity – the game has a huge amount of content, with eight full story chapters and six very dynamic mini-game challenges, and there’s no way we could have done all of that in a reasonable amount of time following the same manual process as The Vortex. 

However, there were some challenges in getting a “flow”-oriented framework like SFB to work well in a voice context that, as previously mentioned, inherently fights the concept of “flow” every step of the way. We managed to make it work with a collection of over 30 custom extensions – and more than a few lower-level patches to SFB itself.
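To give a sense of the shape of those extensions – this is a simplified sketch, and the instruction name and state fields here are hypothetical rather than from our actual games – SFB lets you implement its InstructionExtension interface in TypeScript, with each public method becoming a new instruction that story files can call:

```typescript
import {
  InstructionExtension,
  InstructionExtensionParameter
} from '@alexa-games/sfb-f';

// Hypothetical extension: lets a story file call "grantBadge" with a
// "name" parameter to record an achievement in the player's state.
export class BadgeExtension implements InstructionExtension {
  public async grantBadge(param: InstructionExtensionParameter): Promise<void> {
    // storyState persists between turns alongside the story's own variables.
    const badges: string[] = param.storyState.badges || [];
    badges.push(param.instructionParameters.name);
    param.storyState.badges = badges;
  }
}
```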

It’s also worth noting that there are some nice features on the platform side that we’ve used in both games. One great example, on the Google platform, is Dialogflow’s aptly named “contexts” feature. By using contexts, the application code can activate particular parts of the interaction model based on things the player has done in their session. We’ve made extensive use of contexts in both games to improve the platform’s speech recognition and intent matching.
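In practice, activating a context from application code just means including it in the webhook’s fulfillment response. Here’s a minimal sketch of the relevant JSON, with placeholder project, session, and context names:

```json
{
  "fulfillmentText": "The robot turns to face you. What should it do?",
  "outputContexts": [
    {
      "name": "projects/my-project/agent/sessions/my-session/contexts/robot-commands",
      "lifespanCount": 2
    }
  ]
}
```

While a context like robot-commands is active, intents that declare it as an input context become eligible for matching, which keeps recognition focused on what the player can plausibly say at that moment.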

Going forward, and echoing my earlier comments, I’d say we certainly haven’t found the perfect answer here yet, but we’ll definitely be applying the lessons learned from our first two games to keep improving how we bring fun voice games to market!



Keep looking ahead

As for where we’re headed next? Well, we see things getting bigger and better in the voice gaming industry!

The market for voice-forward content keeps growing rapidly – smart speakers and smart displays continue to sell really well; TV-attached streaming devices are increasingly opting for a voice-based interface to aid with content discovery, while cars are doing the same to promote hands-free interaction while driving; and nearly everyone on the planet is carrying a smartphone that is itself a pretty capable voice-controlled device. We continue to see this growing market as a great opportunity to bring our games to a broad base of new players.

And we’ll continue pushing to build better, richer, and more dynamic games. One thing we’re really excited about right now is the ability to deliver more dynamic multimodal experiences on devices with screens, like smart displays, TV streaming boxes, and of course smartphones. We’ve dipped our toes in the water here already – The Vortex was one of the first games to use the Alexa Presentation Language to deliver rich visuals along with voice, and we recently updated The 3% Challenge to use Google’s Interactive Canvas and Amazon’s Alexa Web API for Games to deliver an HTML-driven interface to players on smart displays.
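On the Alexa side, for example, handing control over to that HTML interface comes down to returning a start directive that points at the hosted web app – the URI and timeout below are placeholders:

```json
{
  "type": "Alexa.Presentation.HTML.Start",
  "request": {
    "uri": "https://example.com/game/index.html"
  },
  "configuration": {
    "timeoutInSeconds": 300
  }
}
```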

We have some awesome new projects in the pipeline that we think will advance voice-forward multimodal game design quite a bit – we’re looking forward to sharing them with you soon!


Learning from those around us

We’re in a great industry in which cooperation and communication can only benefit everyone.

We’re so excited to be part of such a vibrant and dynamic corner of the games business. Everyone in this space is constantly trying new and different things, which puts us all in a great position to learn from one another. And with the platforms we’re building on evolving so rapidly, it’s even more important that we do so regularly!

At Doppio we try to stay active in community forums, Slacks, and the like, as well as at conferences and other events, so feel free to say hi if you see us around!


Thank you!

Wrapping up, I wanted to extend a sincere thank you from all of us at Doppio to everyone who has helped us along the way so far, and particularly to each and every person who has taken the time to play one of our games. Please keep sending us your feedback – we read all of it, and take everything into account as we plan our future games.

[Photo: the Doppio team]

So, here’s to the next two years – we can’t wait to share what’s in store!