On this episode of the Humans of DevOps, Jason Baum is joined by Christian Wiklund, co-founder and CEO of unitQ. They discuss why DevOps should prioritize real-time customer feedback in their decision making, the importance of fast feedback cycles and more!
Prior to unitQ, Christian and Niklas Lindstrom founded Skout, a social app with over 50 million users that was acquired by The Meet Group in 2016. Christian is a native Swede, an avid surfer, a prolific tomato grower, and a proud father of three.
Want access to more content like this? Gain the tools, resources and knowledge to help your organization adapt and respond to challenges by becoming a member of DevOps Institute. Get started for free: https://www.devopsinstitute.com/membership/
Have questions, feedback or just want to chat? Send us an email at podcast@devopsinstitute.com
Please find a lightly edited transcript below:
Narrator 00:02
You're listening to the Humans of DevOps podcast, a podcast focused on advancing the humans of DevOps through skills, knowledge, ideas and learning, or the SKIL framework.
Jason Baum 00:34
Hey everyone, it's Jason Baum, Director of Member Experience at DevOps Institute, and this is the Humans of DevOps podcast. Happy New Year, and welcome back. Thank you for joining us for season three of the Humans of DevOps podcast, my second one with you. I'm really excited to head into a new year. New year, new beginnings. And we're going to shake things up on the podcast, we're going to do things differently. Today on the episode, we'll be chatting with the founder and CEO of unitQ, Christian Wiklund, on how DevOps should evolve their decision making. I love this topic, so I'm really excited to talk to Christian about it. Christian believes that customer feedback is the missing ingredient in how developer teams prioritize and make decisions. His company, unitQ, observes millions of quality issues for consumer tech products. Speaking to their progress, the company also raised $30 million from top VC firm Accel. Prior to unitQ, he and Niklas Lindstrom founded Skout, a social app with over 50 million users that was acquired by The Meet Group in 2016. Christian is a native Swede, an avid surfer, a prolific tomato grower, and a proud father of three. Christian, welcome to the Humans of DevOps podcast.
Christian Wiklund 01:55
Thank you, Jason. Happy to be here.
Jason Baum 01:57
Awesome. And thank you for taking a second out of your day with three kids. I don't know how you do it. I've got one, and I don't know how it's done, especially right now.
Christian Wiklund 02:07
They say two kids is four times the work of one. And then the third doesn't make a difference.
Jason Baum 02:16
Just adding to the chaos. Yeah. A couple more and you've got a basketball team. So there you go. So, are you ready to get human, Christian? Yes? Awesome. Let's just dive right into it. I'm really excited to talk about this, because experience is the topic I love to talk about. With DevOps Institute, I'm the Director of Member Experience, and one of the things that I like to talk about all the time is: no more anecdotal, let's get to the data. But of course, how do you get to the data? What are the goals you should look at? What are the metrics you should be tracking? Because you can metric yourself to death, right? There are so many metrics, so many things to look at. How do you get into it? So let's just start with: why should DevOps prioritize real-time customer feedback in their decision-making?
Christian Wiklund 03:12
Yes. Maybe I should start with a little backstory of why we're building unitQ and how we set out on this journey to build really the quality company. It started years ago with me and my co-founder, Niklas. We were building this company called Skout, which is still a social network for meeting people. It was a mobile-first company, which doesn't sound very interesting today, but back then in 2009 and 2010, starting with mobile was pretty special. And what we discovered there, as we were moving from a slower release cadence with more predictability and more pre-release QA into this ship-at-will environment, because in order to stay competitive you need to ship fast and be agile: we had many, many integrations, I think 20-plus integrations, between authentication, a bunch of ad networks, analytics, and so forth. We had 25 languages to support. We had Android, iOS, web, big screen, small screen. You have environmental variables such as different connectivity out there. And when you layer all of this together, in each of these dimensions you can have bugs that leak out, and the system is continuously morphing. So how do we make sure that nothing is broken out there when we're operating a global, multi-language, multi-platform experience, where our users expect, of course, that you're up 24/7, but also that your features are working? And we found that we had really good instrumentation lower in the stack. If you were to look at monitoring machine data, we had, of course, Datadog and other solutions that were looking for anomalies and trends. And when you climb up the stack to the clients, we had AppDynamics in there that could look at what's going on on the client.
But then the surface layer, which is how the product manifests itself: there you also have signal, where your user base is actually telling you what may or may not be working. But that part was not monitored by machines, it was monitored by humans. And we found that that didn't really scale. Once you get over 10 to 15,000 pieces of user feedback a month, it gets really hard to extract signal in real time to the DevOps team, and to other teams in the company, so that they can fix bugs faster. So that is really the idea with what we're doing: to provide monitoring, but instead of using machine data, we're using human data, user feedback.
Jason Baum 06:23
Which, at the end of the day, is the best feedback, right?
Christian Wiklund 06:27
Well, I would say, when a user takes time out of their day to tell you something, it's a pretty special moment, and we should listen to what they have to say. A lot of feedback will be great, you know, you're getting praise: I love this product, it is incredible. And some of it will be, hey, I can no longer reset my password, or the equalizer disappeared, or whatever it may be. We also have to realize that not every person is going to report feedback. So what you want to do is make sure that you really take that into account and make it trickle down into the organization, so that the right person in your org gets the right information at the right time. How do we build, basically, an interface between the user base and the company itself so that we have a better flow of information?
Jason Baum 07:28
So how did you do that?
Christian Wiklund 07:31
So, how we did it at our last company: when we look at teams that are dealing with user feedback, we have, of course, the support team. The support team will deflect tickets and resolve issues for customers and users, and they work in the support ticket bucket. Then you have the marketing team, and they are probably looking at what's going on on Reddit and Twitter and other social media platforms, trying to keep tabs on what's happening there. You likely have an app store marketing team that's looking at reviews and trying to derive insights from that. And then you may have a user research or user insights team, typically some team in the product management organization, that will do surveys and so forth. And what we had, and what we also found out there in the market, is that when it comes to data, these teams are siloed. There is no single source of truth around all of this user feedback data. So that was the first thing we had to solve: every channel where users are leaving feedback needs to be aggregated into one repository. And then it comes down to, how do we categorize all of this data? We have customers such as Chime, Pinterest, Spotify, these really large and incredible brands, and they get a lot of data, millions of pieces of user feedback coming in every month. In order to make signals actionable, we need granularity. I can give an example: if I tell the dev team, hey, we're seeing an increase in password reset issues, they're going to say, okay, what can I do with that? What is actually breaking? To bucketize it as something more generic, like "password reset not working," is hard to take action on, and it creates confusion. It's easy to brush it off and say, hey, maybe they signed up with the wrong password or the wrong email or whatever. What is the root cause of this stopping working?
So what we have to do is break it down into the root cause of why password reset is not working. That can be: the link didn't work, or I couldn't pass the CAPTCHA, or the email was never delivered. When you break it into the root cause, that's when you have actionability. And in order to break it into those fine buckets, we bucketize data into up to 1,500 unique buckets for one customer. We call these quality monitors. These quality monitors can be really anything that breaks the experience. I would say 80% is product-related, but we also have customers such as HelloFresh, and we have Uber and grocery delivery companies, where the bug is not going to be that the password reset link broke; it may be that I ordered bananas but I got avocados. So we do capture all of these, and we named them quality issues. A quality issue is basically the delta between the experience and the expectation. When I'm using a product, I expect something to happen, and then it didn't happen, and I reported it: that we define as a quality issue. We then bucketize quality issues into these quality monitors. And what happens next is we can alert on these different buckets. If you see a trend going in the wrong direction, just as you would with any monitoring system, we can then ping different teams on Slack channels and PagerDuty channels to make sure they fix the stuff that's breaking out there.
Jason Baum 11:35
How do you break it down from expectation? Because, okay, that sounds great in concept. It's amazing that you can actually do that. But how do you get that expectation data and match it to the outcome? How does that happen?
Christian Wiklund 11:55
So we use machine learning to first classify a piece of text as a quality issue, yes or no. You know, it's interesting: when you look at quality of a product, if you ask somebody what quality is, everybody's definition of it is
Jason Baum 12:08
So subjective.
Christian Wiklund 12:11
Very subjective, yes. It's something that we live and experience every day, and something that we of course think is important. If you look at quality of the product: how do we compete in today's marketplace? If we have a music app, how do we compete? Do you compete with features anymore? Well, the feature set is basically the same across the competition. What about content? Content is becoming a commodity; you don't have different access to content on different music apps, it's pretty much the same. What about pricing? Well, it turns out you're going to pay $9.99 a month for music no matter what. So quality of product is something that's incredibly important for how you stand out and compete. We're sitting on Zoom, we're not sitting on GoToMeeting. Why is that? Does GoToMeeting have worse marketing? No, I don't think so. Do they have a different feature set from Zoom? Not really. It comes down to the fact that Zoom works. It's a great experience. And quality impacts the top of the funnel: high-quality products spread faster organically. Even more importantly, it impacts the conversion cycle inside of the product machine. If you were to look at signup to activated user, to second-day return rates, seven-day return rates, conversion to paid, how many times they log in per day, what's the session length: quality is impacting all of those metrics. So we want to make sure that what you put into the product machine has a really clear signal to the outputs, and then you can reinvest those outputs, like revenue and so forth, into the beginning of...
Jason Baum 14:01
Today's episode of the Humans of DevOps podcast is sponsored by Kolide. Kolide is an endpoint security solution that sends your employees important and timely security recommendations for their Linux, Mac, and Windows devices, right inside Slack. Kolide is perfect for organizations that care deeply about compliance and security, but don't want to get there by locking down devices to the point where they become unusable. Instead of frustrating your employees, Kolide educates them about security and device management while directing them to fix important problems. You can try Kolide with all its features on an unlimited number of devices, free for 14 days, no credit card required. Visit Kolide.com/HODP to sign up today. That's Kolide.com/HODP. Enter your email when prompted to receive your free Kolide gift bundle after trial activation.
Christian Wiklund 15:06
So, to get back to how we do it. First, it's a binary classifier: is this a quality issue, yes or no? And we have defined what a quality issue is. There are three layers to it. If you look at it as a pyramid, the bottom of the pyramid is just functionality. Does the functionality work, yes or no? Can I reset my password? Can I log in? Can I upgrade to premium? The next layer is usability. There may be a mismatch that's perceived as a bug, or a quality issue, where it's implemented according to spec but it's just too hard to use. That can be usability, that can be slowness of the product, friction, and so forth. And then the top of the quality pyramid is just delight: hey, there's really no friction, it's a beautiful experience, and I never get angry when I use this product. It's great.
Jason Baum 16:10
Sometimes it just comes down to: does it work, right? They have a saying in American football: the best ability is availability. And God, that is such a true statement for almost anything. But yeah, like you mentioned Zoom. What happened to Skype? What happened to GoToMeeting? It's because Zoom just works. It's as simple as that.
Christian Wiklund 16:39
And you know, Jason, what's funny here is, you ask anyone at any company whether quality of the product or service is important, and they're all going to say yes. And then you ask them, well, how do you measure it? And they say, we don't, we have no good metrics. And how do you align teams around goals for quality of the product if you don't have a metric, if you don't have a single source of truth of what's impacting the user base? So that's also part of what we've done here: we had to develop some quality metrics. One is the unitQ Score, which is basically like a credit score. It gives you a score from zero to 100; at 100 you have epic quality, and so forth. And that's been eye-opening for a lot of our companies, to benchmark themselves. On our website, you can actually check the unitQ Scores for the 4,000 largest apps out there.
Jason Baum 17:35
Tell me about the unitQ Score a little bit.
Christian Wiklund 17:40
So the way it works is that it measures what fraction of all the public feedback data we analyze contains a quality issue. If we look at 1,000 pieces of feedback and we find no quality issues reported, then you get a score of 100. If 100 of them contain a quality issue, then you get a score of 90. So it's a direct instrumentation of how much of the public user feedback data out there refers to a quality issue. And you know, we've seen, with apps in particular, that star ratings are not the best measurement for quality, because they're not a snapshot in time; star ratings are an average over a longer period of time. But also, I'm sure you've seen, Jason, when you use an app four or five times, it will say, hey, can you rate me five stars? Or, hey, would you rate us? Do you like
Jason Baum 18:38
Us? It's like a very basic question: do you like this app? And clearly you're using the app, so you say yes.
Christian Wiklund 18:47
Do you like us? And the oldest trick in the book is, you say, hey, please give us one to five stars in the app. If you give three or lower, they will not send you to the app store; if you give four or five, they will send you to the App Store to review it.
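Editor's note: the unitQ Score arithmetic Christian walks through (zero issues in 1,000 pieces of feedback scores 100; 100 issues in 1,000 scores 90) reduces to a one-line formula. The sketch below is inferred from those two examples only, not from unitQ's published methodology, and the function name is made up.

```python
# Back-of-the-envelope version of the score described in the interview: the
# percentage of analyzed public feedback that does NOT contain a quality
# issue. The real score may add weighting, smoothing, or channel mixing.
def unitq_style_score(total_feedback: int, with_quality_issue: int) -> float:
    """Score from 0 to 100, where 100 means no quality issues reported."""
    if total_feedback <= 0:
        raise ValueError("need at least one piece of feedback")
    return 100.0 * (1 - with_quality_issue / total_feedback)

print(unitq_style_score(1000, 0))    # 100.0 -> no issues in 1,000 reports
print(unitq_style_score(1000, 100))  # 90.0  -> 100 issues in 1,000 reports
```

Unlike an app-store star average accumulated over years, a ratio like this can be recomputed over any window, which is what makes it usable as a real-time operational signal.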
Jason Baum 19:02
Or on Amazon, people just make up reviews, just to be funny, you know.
Christian Wiklund 19:09
Yes. How many times have you ordered vitamins and there's a card saying, give us a five-star review and take a picture of the review, and we'll give you another box for free? I mean, there's a lot of stuff going on out there. But we said, hey, look, if we can't measure something, we can't improve it. So we need to develop quality metrics. The industry has been amazing at building a set of growth metrics: what's your daily active users over monthly active users, what does your five-over-seven retention look like, what does your seven-day churn look like, your 30-day, your 60-day. I think the industry has been really good at building out growth metrics that we have put into operations, but less so for quality of service and product. There were two metrics we developed. One is the unitQ Score. The other one we call time to fix, which is really time to resolution: when something impacts the user base and we see the signal that people are reporting it, how quickly can the company fix it? Because, of course, the longer you have issues out in production, the more it impacts the entire company. Let's take an example of what we had at Skout. We had this epic bug where, in the Polish language on Android, longitude and latitude were reported in a different format than in other languages, and our parser couldn't take that, so it crashed. And Skout is a location-based app, which means we ask for location every time you open the app. So no one in Poland could use the app for six months. And they were reporting it. In the app store, so we got a lot of one-star reviews from Polish Android users. They were emailing support, clogging the support queue with tickets that didn't have to be there. And if they can't use the app, we can't really make any revenue there, and they're not going to be very happy. So of course we lost out on revenue.
And the signal was there. The issue was that the Polish Android community was a small piece of our overall community. I actually stumbled upon it; I was sort of the human in the loop, the monitoring machine for user feedback. I said, hey, something is going on in Poland. And it was a parser bug. It took ten minutes for an engineer to fix, and we made $800,000 in revenue from Poland in the next 12 months. So that's like a million-dollar bug. A very expensive bug. And that's where we saw these different bugs that were out there, and we were always too slow to react. We started obsessing over it. I told the team, hey, we probably have 50 million-dollar bugs; we don't know where they are, but let's go find them. So that's where you need to be able to measure it, you need to be able to track it. We have companies using our unitQ Score as an OKR: they will say, hey, our score is 85, let's get it to 90, so what are the top 10 things we need to fix? Strava is a great customer of ours; they've done a bug squash week where they take our data, fix a bunch of bugs, and drive the score up, and they do that on a regular basis. But the sooner in the cycle you detect that something is broken, the faster you can align the teams. Like, hey, this is not an anecdote. This is not just one user talking about it, or a support ticket that was filed in Jira with one example, like, why should I fix this? When you can align the teams around a single source of truth and look at the data and say, well, up until today, zero people reported that the password reset link was broken, and today we had 10 people. Okay, it's 10 people out of a million using the product today, but we had zero every other day, so something must have broken.
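Editor's note: the "zero reports every other day, ten today" reasoning is a simple comparison of today's report count against a recent baseline. A minimal sketch of that check, with made-up thresholds:

```python
# Flag a quality-monitor bucket when today's report count clearly exceeds the
# recent daily baseline. The min_reports floor and 3x multiplier are
# illustrative, not anything unitQ has published.
def report_spike(history: list[int], today: int, min_reports: int = 5) -> bool:
    """True when today's count is both non-trivial and well above baseline."""
    baseline = sum(history) / len(history) if history else 0.0
    return today >= min_reports and today > 3 * baseline

print(report_spike(history=[0, 0, 0, 1, 0], today=10))  # True: jump from ~0
print(report_spike(history=[8, 9, 10, 9, 8], today=10))  # False: normal noise
```

The point of the floor is the one Christian makes: 10 reports out of a million users is a tiny absolute rate, but against a near-zero baseline it is a clear regression worth paging on.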
And if you don't fix it, in maybe five days you will have hundreds of people reporting it every day. So get in early, fix it, align the teams to get it done. And then, of course, the thing I really love about this is
Christian Wiklund 23:46
We give context to developers who are going to fix the bug. Imagine you're a developer and someone threw over a Jira ticket from support with one example, saying, hey, please go fix this. Well, I need some context. I want to see the data. Is this happening on every platform? Is it happening on a particular device? Is there some evidence here that I can gather so I can reproduce the bug? Because a lot of times, when I'm tasked with fixing a bug, the first step is to reproduce it, and a lot of times I can't; it works on my device. So that's really it: early detection of issues, aligning cross-functional teams quicker around what needs to be fixed, and then, for the actual fixing of the bug, getting developers the evidence so they can reproduce faster.
Jason Baum 24:33
So we talked about all the positives. But what are the drawbacks of relying on anecdotal decision-making? What are the biggest drawbacks?
Christian Wiklund 24:43
So, anecdotal to me: if you look at what we do at unitQ, really, we take qualitative data and make it quantitative. You're sending an email, and other people send emails to support, and some of them talk about one particular quality issue. That's qualitative data, and I can't make decisions based on one data point. But when you take this qualitative data and you actually turn it into a trendline, you create data out of it. Now it's no longer anecdotes; now we're looking at patterns. Anecdotes are really, really hard, because what is the scope of this bug? Where is it happening? Why should I care? It's one person out of a million who reported it, so maybe it's a user error. It's something that I like to call the great wall between support and engineering. Support, from their perspective, will file a Jira ticket, throw it over the fence, and then they have no idea what happens. There's no feedback loop, and they keep seeing the same issue being reported by different users over and over and over again. So I think there's been a breakdown in how these different teams communicate, and in how they align around a single source of truth: okay, if it shows up in this trend, in this product, then we've got to jump in and fix it. And that's what's been really cool, even with some of these really incredible big brands: when we go live, we see a before and after, where the unitQ Score always goes up. And it's not because we fix their bugs; it's because our data tells the company, hey, here are your 10 to 15 things you need to fix right away. And things keep breaking, because guess what, every product is like a living organism. A lot of changes are not even your internal changes; you might have an API that changed.
We had a case at Skout where Ukraine banned SSL on public hotspots. We had no idea that 30% of our Ukrainian users were in internet cafes, and our session handler needed SSL to function, so they couldn't use the app on public Wi-Fi. How did we find that out? Well, the users told us. But since our manual processes couldn't capture it, we found out way, way too late. So it comes down to this, Jason: we're using machines to monitor large datasets lower in the stack, so we should use them as well in the surface layer of the stack, where the product manifests.
Jason Baum 27:34
So, from one person who follows experience trends to another: a couple of metrics that I have always looked at are Net Promoter Score, customer satisfaction, and the customer effort score. And of course churn rate, retention, percentage new, all that stuff you're going to track anyway. But those three, Net Promoter Score, customer satisfaction, and customer effort, to me shape an experience. So it sounds like I'm missing something.
Christian Wiklund 28:17
Well, CSAT is great, right? It measures the support experience, so that's a metric they should obsess over, and it's very much established. NPS, I think, is a great snapshot in time. And there are some people out there who say, well, NPS has biases, you know, self-selecting users who respond to the survey and so forth. But I think
Jason Baum 28:45
Isn't that true about anything, though? Every survey?
Christian Wiklund 28:49
Yeah, surveys. You typically see the most responses on the number eight or the number one, right? So it is what it is. But I think NPS is very much established as a crucial metric. Now, for operations: how is the quality of the product and experience trending today on Android, this hour on Android, with the latest release on iOS? Having a snapshot once every quarter of how our NPS is trending is not good enough for operations. In operations, we need data now. If we look at the unitQ Score, and we've done some studies on this, it's a leading indicator of both NPS and star rating. If your score goes down and stays down consistently, then in the next NPS survey you do, you're going to see that you took a hit there. And vice versa: if you get the unitQ Score up and keep it up consistently, you're going to see NPS going up in the next survey as well. So I think NPS is great for that quarterly check-in, but for daily operations, we need something that's daily.
Jason Baum 30:06
Awesome. I appreciate all the time that you've spent with us, talking to us a bit about these metrics, and certainly the unitQ Score; I'm going to have to check that out for myself. I really appreciate you coming on. And one last question, which is a question we asked all of our guests in season two. Maybe in season three we'll have a new question, but since you're our first guest of season three, let's ask: what is one thing that you do that is unique to you, that maybe no one else knows, professionally?
Christian Wiklund 30:44
Well, I make quite a lot of music. So that's one thing. I have been tinkering with analog circuitry since a very young age, and when I was younger I even built my own analog synthesizers and stuff. I love the soldering iron and breaking things apart. When I was a kid, my dad was not very happy when I disassembled the VHS recorder and then tried to put it back together. I never managed to get it back together. But I know electronics, and I love music. For me, there were three paths: musician, entrepreneur, and, at an early age, I wanted to be a quant analyst on Wall Street, I wanted to build pricing models. But I ended up in Silicon Valley. And that's what I love, Jason, about software and music. Software and music are very similar; they kind of go hand in hand. You start with a blank piece of paper, an empty class file, or an empty track in Ableton or Cubase or whatever you use, and you can create stuff. I just love that creative process. The most fun part about software is you can build stuff from nothing, and then the entire world is your marketplace, and you have no inventory. There's so much to love about software.
Jason Baum 32:11
You know, it's funny, you're clearly not the first person who has come on this podcast who is also a musician, in addition to their life in tech. I think we have talked ad nauseam about having a DevOps playlist or something like that, and I think we're going to build it out. I might even do another podcast with the musicians among the humans of DevOps, because I feel like there are so many out there. And by the way, what is with you Swedes and synthesizers? It's like Swedes love synths.
Christian Wiklund 32:45
Well, here, I can tell you something brilliant. I think music is a top-five export industry for Sweden. With 10 million people, how come there are so many great musicians? And not just ABBA; you have Max Martin, Ace of Base, Roxette, and of course Swedish House Mafia and all these great artists. And it's very, very simple. Some clever politician way back when said, hey, shouldn't private tutoring and music education be free of charge for every child in Sweden? And that's what they did. So if you want to learn to play the guitar, or the piano, or the trumpet, or whatever, you get free tutoring from a very young age. And as a result, we have a lot of musicians. So the investment they make in giving free music classes pays itself off many times over. I really love that one. I think it's super cool.
Jason Baum 33:55
That's great. Well, thanks again for coming on the podcast, and it's great having you as a guest, sharing. I did not know that about musicians in Sweden, so now I know, and I hope everyone listening learned something too.
Christian Wiklund 34:09
Yeah. Thank you so much, Jason. I think my DevOps team were a little nervous, like, oh my goodness, is he going to get grilled here? But this was very pleasant. Thank you very much.
Jason Baum 34:22
Awesome. Yeah, one thing I do not do is grill. I don't believe in gotcha questions. Maybe next time I'll think of a few good gotcha questions. Thanks again, Christian. And thanks for listening to this episode of the Humans of DevOps podcast. Happy New Year again. So glad we're back. We're going to have a bunch of these; we come at you every week with a new podcast, new guest, new topic. So until then, I'm going to end this episode the same way I always do, encouraging you to become a member of DevOps Institute to get access to even more great resources, just like this one. Until next time, stay safe, stay healthy, and most of all, stay human. We'll see you on the next episode. Live long and prosper.
Narrator 35:09
Thanks for listening to this episode of the Humans of DevOps podcast. Don't forget to join our global community to get access to even more great resources like this. Until next time, remember: you are part of something bigger than yourself. You belong.