August 9, 2021
On this episode of the Humans of DevOps, Jason Baum is joined by Jason Meller (@jmeller), CEO and founder of Kolide and Author of Honest Security – A guide to endpoint security and device management that doesn’t erode your values. Download your free copy. They discuss his journey into IT, cybersecurity and ethics, lessons learned and more!
Jason has dedicated his career to building products and tools that enable security experts to successfully defend western interests from sophisticated and organized global cyber threats.
The lightly edited transcript can be found below.
You’re listening to the Humans of DevOps podcast, a podcast focused on advancing the humans of DevOps through skills, knowledge, ideas and learning, or the SKIL framework.
Jason Meller 00:17
The other thing that’s really interesting about companies today is a complete shift in the power dynamic between employees and employers. There’s never before been a time, at least in my career, where I’ve seen employees have so much power and say so over the cultures of the organization they belong to.
Jason Baum 00:33
Hey, everyone, welcome back to another episode of the Humans of DevOps podcast. I’m Jason Baum, the director of membership at the DevOps Institute. Thanks for joining us for another episode today. So let’s get to today’s guest, who I’m very excited to be chatting with: Jason Meller. Jason, welcome to the show.
Jason Meller 00:53
Great to be here.
Jason Baum 00:55
It’s great to meet you. So Jason is CEO and founder at Kolide. Kolide provides user-focused security, or honest security, for teams. Their SaaS product focuses entirely on helping organizations that use Slack integrate user-focused security into their organization; in turn, employees are briefed on their company’s security guidelines and achieve compliance without resorting to rigid management, which sounds fantastic, and a real answer to some of the situations people are running into these days. Prior to Kolide, Jason spent his 10-plus-year career in cybersecurity at GE, and from there he moved to Mandiant and worked his way up to Chief Security Strategist. Jason, welcome to the podcast. Are you ready to get human?
Jason Meller 01:43
I’m ready. I think we have a great topic to talk about.
Jason Baum 01:47
Definitely, definitely. And it’s a topic of the utmost importance, and I think at the top of everybody’s mind: cybersecurity. It seems like about every other day someone’s either under attack or trying to put out a fire, and I’m sure we could speak for days about this topic. So let’s start right with the manifesto that you wrote on security, talk a little bit about that concept, and maybe we can work backward into how you landed on writing it.
Jason Meller 02:24
Yeah, so Honest Security is a guide that I wrote that’s looking to solve what I think is a growing problem endemic to the cybersecurity industry. And it’s the relationship between the end-users, the employees of the company, and the IT and security teams that actually protect the organization. Today, that relationship is antagonistic, it’s at odds, and it’s getting worse over time. And as you know, the pandemic has fundamentally altered the way people work. Now that they’re at home, sitting with their company-provisioned laptops, surrounded by their families, we wanted to take another look at the best practices that IT and security teams use to monitor the environment and protect the organization, and see which of the things they’re doing are still appropriate, and which might actually be an overreach that’s causing that relationship to erode over time. The way it’s always been done, to actually protect an organization, is to extract the user from the problem and treat it like a computer issue: we need to make these computers secure, so we’re going to lock them down, we’re going to force the firewall on, we’re going to restart the computer every time there’s an update, because at a fundamental level we just don’t believe it’s the end-user’s job to have any say over the hygiene of the device itself. And when you do that, you create a lot of issues with that relationship, because people fundamentally want to use their computers. It’s their window into their work world. It’s how they actually generate useful work product. That’s how they’re productive. 
And when you hamper this incredibly powerful machine, it really frustrates them, especially when, say, they’re in the middle of a conference call giving a presentation, and suddenly the screen lock comes up and they have to unlock their computer in the middle of an important presentation. Like, I have to move my mouse every five minutes just because the cybersecurity team says I do. Well, that’s annoying. And now I feel bad about that relationship. They don’t understand what my needs are, and now I look foolish over something that was, in my opinion, a very minimal threat. The other thing that’s really interesting about companies today is that we are faced with a complete shift in the power dynamic between employees and employers. There’s never before been a time, at least in my career, where I’ve seen employees have so much power and say-so over the cultures of the organizations they belong to, and that’s certainly true with developers; DevOps roles are among the most sought-after positions in the world. And the way that people are now competing for talent is about more than just monetary compensation. It’s about work-life balance, it’s about creating freedom for people to be able to do the job with the tools that they want, in the manner that they choose. And it just feels as though the classic way of doing it in cybersecurity is at odds with those goals. You go to most organizations’ websites, on their career page, and they preach these values of personal responsibility and freedom and trusting one another. But then when you actually see their implementation from an IT perspective, it’s the exact opposite of that. It’s about locking things down. It’s about surveilling the employee. It’s about worrying about this insider threat, that one employee who’s going to leave with all the intellectual property when they depart the organization. These things are just at odds with each other. 
And it just always felt odd to me that we put all this effort and work into hiring new employees, and we trust them, and then on their first day on the job we prevent them from accessing or doing what they need to do to get their job done. Those things just don’t feel like they’re in alignment with each other.
Hey, humans of DevOps, it’s Nixon here from DevOps Institute. I’m dropping by to share how you can skill up and advance your career with the DevOps Institute premium membership. Our goal is to empower you to gain the edge you need to advance your DevOps skills, knowledge, ideas, and learning. A premium membership gives you all-access to what DevOps Institute has to offer. How does getting 30% off all certification exams sound? What about having access to a tool that can assess your team’s DevOps capabilities? And if that wasn’t enough, members can also read our entire library of skill books to support you during your DevOps journey. Get started today for the price of a cup of coffee by going to DevOpsInstitute.com/membership. Head to the link in the episode description to receive 20% off your DevOps Institute premium membership.
Jason Baum 07:03
So it’s so interesting that you say that. You know, we trust an employee, they’re the hot new thing, right? You love them in the interview, you bring them in, and then: here’s your VPN, we’re gonna lock it down, and we’re gonna monitor everything that you say.
Jason Meller 07:17
Right? I think it’s completely backward. The hiring process is already filled with the background checks that you need. You’re obviously checking references; you’re hiring people that you’re depending on to build the next great thing that’s going to take your company to that next level of growth. And yet here you are, like, can we trust you to really manage your firewall? And at most companies, the answer is no, we can’t. So you’re just going to get this locked-down device. The thing that’s changed, though, is that with all of us working from home, it’s so easy for that person to get frustrated and just swivel their chair 30 degrees to their personal laptop. And now that’s where they’re doing the work, and now cybersecurity has zero visibility. And that’s really where the risks are generated.
Jason Baum 08:01
Or phones, I mean, yeah, people use their personal phones, and unless they’re logged into the VPN, what control do you have?
Jason Meller 08:08
Right. And a lot of companies now are offering bring-your-own-device as an incentive as well, and there’s a legal quagmire there: what are they really allowed to surveil on that device? Other people in the family might be using the laptop, and then there are all these privacy laws, like GDPR and the California privacy laws. How do those play into the classic role of cybersecurity, which is getting as much data as possible to do its detection mission? So what we’re trying to say in Honest Security is that the relationship between end-users and the cybersecurity team is so valuable, and it’s not being leveraged right now; actually, it’s being actively damaged. And we wanted to take another look at that and say, you know what, what if we actually built that relationship? The folks sitting there on the ground are the eyes and ears of the organization. They actually have a lot more insight into where the real risks of the company lie, because they’re doing the job every day. They know all the little quirks and the processes. If we can tap into their visibility in a way that’s useful, that’s actually way better than just locking them out of their devices and not dealing with them at all. So that’s really the hope with Honest Security: drawing attention to this relationship, seeing how we can strengthen it, and asking how we can create rules of engagement to monitor users’ laptops without violating their privacy. There’s a ton of specific information there, and I posted the manifesto at a great domain; it’s just at honest.security. It’s free, and the idea there is really to inspire people to take another look at how they do this.
Jason Baum 09:46
So that’s the whole manifesto, Honest Security, and obviously people should go and read it. I’m interested in understanding a little bit of the why, you know, what led you to writing the manifesto. I’m assuming there was some clash that happened that led you to it. And look, hey, we all have our frustrations with this aspect. We know cybersecurity is of the utmost importance, and yet, if you’re like me, you probably get super frustrated every time you have to log into, like, LastPass or something like that, right? You know you have to do it, you know it’s in your best interest, but it’s another piece, it’s another thing, and every time it logs out and you have to enter the master passcode, I just get so frustrated. So, you know, it’s kind of a culture change, right? It’s what really
Jason Meller 10:49
is. I mean, I cut my teeth in this industry on the defender side, really right out of college, when I started working at GE, and I got to work there at a very interesting time. Right now GE is sort of on the skids; they’re not doing so great, and they’re really spinning off various parts of their business. But back then, over 10 years ago, GE was a major multinational conglomerate. We had NBC Universal, appliances, healthcare, GE Capital; just about any major US business that you could think of, we were a part of it in some way. And this created a massive attack surface for nation-states that were really looking to get as much intellectual property out of the business as possible. China was a major threat actor that we had to deal with; they were really interested in what GE was doing with the Department of Defense in our aviation division. We were building important components for, like, the Joint Strike Fighter. And so there was a real need to get a very sophisticated defense team in there to monitor what amounted to hundreds of thousands of endpoints and a network topology that spanned the globe. So I was on this team. I was an entry-level person really doing threat intelligence, and it was really my first professional role. I’d been interested in cybersecurity my whole life, even being somewhat of a bad actor early on in my childhood. And I really got to sit and get a perspective from the defender’s side. The thing that we really focused on was getting as much high-quality data as possible from endpoints and from the network, having really specific detections, and then doing our best to respond as quickly as possible. Because when people attack your network, there are usually different phases of that attack. There’s some initial reconnaissance, they’re probing for vulnerabilities, they get in, they establish some degree of command and control. 
And it usually isn’t until days or sometimes weeks or months later that they execute the killing blow: we’re going to find the crown jewels of the organization and exfiltrate them. So there is a window of time where you can actually detect and defend the organization and prevent one of those issues from happening. That was the goal of the CERT: to respond within a certain amount of time and detect these issues before they spiraled out of control. We actually did relatively well with that program. But there was this one time, and here’s how the story goes, where we got an alert that someone was exfiltrating these zipped-up RAR files from a staging server. And this is, like, the perfect situation, where this matches the standard operating procedure of the threat actor we were going after: they take all the files, they stage them on one server, they put them in a RAR archive, and then they send them out to some third party, like an FTP site or something like that. So it matched all the heuristics that we were looking for, for that type of behavior. We’d actually caught them at the moment they were starting the file transfer. And we called our boss, and our bosses called their boss, and it went all the way up to GE leadership, and we’re like, can we actually burn, you know, delete the files before they get them off the server, and maybe even reach into their server and delete the files that they’ve already transferred over? Because we can see it; we can see their password. And there was a lot of, is that ethical? Should we do that? And then we were like, you know what, let’s just do it. We got authorization, and that’s exactly what we did. We went in and we deleted all the files from the staging server. 
And I think we even reached into the third-party server where files had already been transferred, and we deleted them off there too. And we felt psyched about this; we had actually stopped a real exfiltration event live, in real time. So we were, like, high-fiving. This is very unusual for the industry. It really showed how far we had come, and we were just celebrating, and everybody was getting an attaboy at the end of the day. Two days later, we find out that we were just completely wrong about this entire situation. In fact, we hadn’t thwarted a threat actor at all. What we had thwarted was some contractor in India backing up his personal photos. And he just happened to do it in a way that mimicked this threat actor perfectly. These were family photos. And not only did we delete them from whatever staging server he put them on, we actually reached into his personal backup area and destroyed all the photos.
Jason Baum 15:37
Oh, my God. All his personal photos.
Jason Meller 15:40
Yeah. And this was someone who really wasn’t doing the right thing with the resources that they had. Yet at the same time, we hadn’t really done the right thing on our end. It’s this murky gray area of, can we hack back against the people we perceive to be hacking us? We got this one wrong, and it was the first one. So in my mind, I thought that we were just going to be admonished for this, that it was going to be a major thing. But what ended up happening was that the person involved, who in my opinion was actually the victim, got fired from the company. They’re like, you shouldn’t have been doing this. And we received very limited admonishment. And that just completely changed my perspective. We were the good guys in my mind. We were doing everything right; we were using the data for noble purposes that made sense. And even when all of that was true, we had inadvertently created a situation that caused harm to another person. That was a very formative moment for me, where I realized that yes, we have a mission, and yes, maybe there’s collateral damage that happens with that mission. But to me, that was an unacceptable moment: even with the greatest of intentions, just because we had that access and we had the authorization, we ended up causing harm because we didn’t fully understand the situation we were presented with. It really started me trying to find this boundary. Cybersecurity teams need enough information to do their job, but there is a limit, and there’s a balance against the rights of the end-user as well. And that person, you know, I don’t know where they are now. They were a contractor, and I’m sure they don’t have a positive opinion of any security team after that experience. And yeah, maybe they weren’t doing the exact right thing. 
But I don’t think that was something they deserved to get fired over. So ultimately, I think there are very complex issues at hand here. And the situation now is that cybersecurity teams, in general, have very limited oversight within the organization. They have all these tools, and those tools are detection tools, but they mimic a lot of the capabilities of classic spyware. You can see what processes any user is running; you can see what their browser history is. What is the oversight there to make sure that information is being used in a way that’s appropriate? And what transparency can we give the end-user, to let them know that we are collecting this information, and for what purposes? Most end users have no idea. They can’t answer the simple question: can your IT team or your security team see what websites you’re going to, or what programs you’re running and how long you’ve been running them? Some assume maybe; some say no; some say yes, definitely. But there’s no definitive place most of these people can go to learn the truth without asking the security team, which creates this awkward situation where that team has to answer it. So that was a huge formative moment for me. And even though I continued in the cybersecurity industry as a defender, building tools to help defenders, I always felt in the back of my mind that there was room for a more nuanced approach. And that’s the genesis of Honest Security.
Jason Baum 18:52
That’s such an interesting story; thank you for sharing it. So let me put you back. I guess this was what, 10 years ago, you said? Nine or ten years ago, roughly. Okay, so we’ll go back in time and put you back there, now knowing what you know, with all the work that you’ve put into this manifesto. Put you back in that seat nine or ten years ago: what would you do differently?
Jason Meller 19:16
So I think the key thing that we didn’t have was pre-approved rules of engagement for how we wanted to handle certain scenarios, especially ones that have an ethical dilemma associated with them. I think most cybersecurity teams are now discussing that particular issue, because there are a lot of legal ramifications if they get it wrong. But back in 2010, 2011, it was just less defined. There wasn’t a lot of precedent for what companies could really do to defend themselves, and for when defense becomes offense. So I think it’s about really sitting down and understanding: what happens if, in the course of some unrelated investigation, we learn something about an employee through their web browsing history, or the apps that they’re using, that doesn’t necessarily impact the security of the company, but maybe creates some sort of picture for us around their productivity, or what they’re doing in their free time? How do we handle those types of dilemmas? And how do we write that down and actually communicate it to end users beforehand, so they understand what the rules of engagement are?
Jason Meller 20:29
What I hate to see happening is that cybersecurity teams, who were so used to defending the company against external threats, now see the end-users as the threats: this idea of the insider threat, where we have to treat every employee with suspicion. And it just reminds me of these classic examples where you have one bad apple that does something wrong or unethical, and then there’s a punishment for everybody. Like, I don’t know if you’ve worked in a big office where someone didn’t clean up after themselves in the shared kitchen area, and then all these passive-aggressive signs start showing up. We’ve all lived that, right? And it’s really just one or two individuals causing it. But the solution isn’t to directly address those individuals; it’s to create these broad, sweeping policies that affect everybody. I just talked to one prospect recently, and they were all gung ho on the Honest Security stuff, and then they had an insider threat situation where an employee left the company and took some valuable IP, and their mindset completely changed to, we need to completely surveil every single device and lock these things down, because we really don’t want that to happen again. And my question is, what are you doing about that specific case? Maybe that’s how you create a deterrent: you commit to the fact that you detected that this happened, and you deal with that person, versus locking down everybody and making their experience miserable. I just think that’s such an overreaction to one or two isolated incidents. But that tends to be how cybersecurity and IT like to address things: they like to address them at scale, they like to automate them. 
But in practice, I think it’s going to be much more competitive for companies who really want to attract talent to avoid that natural instinct, to treat folks with respect, give them the freedom they need to do their job, and then find ways to bring them into the process. So the thing that we do at Kolide is allow our customers to just message people on Slack when they start going off the rails a little bit. Most people, when they start at a new company, have this big security orientation, and there’s a PowerPoint, or a document that just lays flat on the page, and you don’t remember half the things that are in there. But the thing that we do, and that I think most companies should try to do, is reach out when you’re actively getting things wrong. That’s when an automated system can chat with you and say, hey, we noticed that you just downloaded a customer list from Salesforce, which is cool, you’re totally allowed to do that, that’s probably part of your job. But let’s delete it after 72 hours, and it’s been on there for maybe five or six days now. So here’s where the file is, here’s why we do this, here’s what you should do, and here’s a button that lets us know you did it. Those are the types of interactions that are way more powerful, because they’re catching you at the point of performance: you’ve actively done something that’s now running the risk of hurting the security of the company, it’s a specific instance, there’s a rationale behind it, and there are remediation steps. And it’s not a person pinging you; it’s an automated system, so you don’t feel like someone has their thumb on the button, watching you through a video camera. That, we think, is a much better approach; people learn why a policy is in place. 
And it also gets at the nuance that sometimes it’s okay to have this data on our devices, just not sitting in a folder for three years, never to be looked at again. That’s when something that’s really part of your job becomes an unnecessary risk. And the only way to solve that problem is by talking to people. You can’t solve it with automation alone; you have to talk to people, and I don’t get why IT and security teams try to avoid that. Because if you don’t avoid it, and you embrace it, you can solve a lot more complicated issues and really reduce recidivism, you know, prevent these issues from happening again, because people understand why the policies are in place. Yeah, I
Jason Baum 24:36
love that. And you know, if you’ve been listening to the podcast, you know I have a four-year-old at home, so I relate literally everything to parenting. But it’s like anything: when you teach someone and you help them understand the why, that typically goes better than punishment. Because if you go straight to the punishment, they don’t really understand the why. All they know is that they’re punished, and they don’t like it. Or, in a sense, you’ve lost your job? Well, that’s too late. Or if you’re punishing everybody, now there’s resentment. Whereas if you actually catch them in the moment, and it’s a teaching moment, and you’re explaining the why: well, 72 hours? I didn’t even know that. Okay, so you have the list on your computer, and keeping it too long is clearly a security risk, but you’re not in that mind frame. The person in sales, or wherever, who’s downloading that list isn’t thinking that way; they’re just trying to download the list to do their job. So I love that concept of teaching, and a proactive approach rather than a hostile, reactive one. It sounds like that’s what you’re after.
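[Editor’s note: the 72-hour “nudge, don’t punish” check Jason describes can be sketched in a few lines of code. This is a minimal illustration only; the file-record fields, the retention constant, and the message wording are all hypothetical, not Kolide’s actual data model or API.]

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the nudge check discussed above: find sensitive
# exports that have outlived a retention window and draft one friendly,
# explanatory message per file instead of locking the device down.
RETENTION = timedelta(hours=72)

def stale_export_nudges(files, now):
    """Return a nudge message for each sensitive export older than RETENTION."""
    nudges = []
    for f in files:
        age = now - f["downloaded_at"]
        if f["sensitive"] and age > RETENTION:
            nudges.append(
                f"Hey! We noticed `{f['path']}` was exported {age.days} day(s) ago. "
                "Downloading it was fine, but our policy is to delete exports "
                "after 72 hours. Here's where the file is, why we do this, and "
                "a button to confirm once you've removed it."
            )
    return nudges

if __name__ == "__main__":
    now = datetime(2021, 8, 9, 12, 0)
    files = [
        {"path": "~/Downloads/salesforce_customers.csv", "sensitive": True,
         "downloaded_at": now - timedelta(days=5)},   # stale: gets a nudge
        {"path": "~/Downloads/offsite_agenda.pdf", "sensitive": False,
         "downloaded_at": now - timedelta(days=30)},  # not sensitive: ignored
        {"path": "~/Downloads/q3_pipeline.csv", "sensitive": True,
         "downloaded_at": now - timedelta(hours=12)}, # inside the window
    ]
    for msg in stale_export_nudges(files, now):
        print(msg)  # a real system would post this to Slack instead
```

In a real deployment the loop at the bottom would post via something like Slack’s `chat.postMessage` rather than `print`, and the file inventory would come from endpoint telemetry; both are out of scope for this sketch.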
Jason Meller 25:56
Exactly. And, you know, the reason there is a security industry is because of people. If people didn’t need to use computers, we could likely make them perfectly secure. It’s the human element that introduces a lot of the good parts, like the creativity, the unique things that you can do on a computer; it also introduces the risk. So people have to be a part of the solution, because they’re the ones generating all the X factors in all these interactions, and they’re the ones most vulnerable to outside influence. And instead of saying, you know what, we’re going to invest in teaching people, we take the opposite approach: let’s wrap them in bubble wrap, send them out into the world, and assume their defenses are impenetrable. That’s just not a solution that works in the long term. Eventually it just becomes an exercise for a threat actor: okay, where did they miss, where’s the chink in the armor that we can go after next? And then it’s this cat-and-mouse game. But if you bring users into it, they’ll start actually providing you with the feedback on how to write your next Slack workflow, your next check. Because if a salesperson gets dinged on, hey, we shouldn’t have these documents, they may feel like, you know what, there are a couple of other things I do that feel similar to this that the security team probably doesn’t even know about. And this was fine, I didn’t get in trouble, and I actually want to make sure the company sticks around and doesn’t have a company-ending headline event. So I’m going to reach out to the security team, because I don’t feel like we’re diametrically opposed. We’re not adversaries; we’re actually working together. I think there’s so much unexplored territory and benefit there. 
And I’ve always been someone who’s really interested in these naturally antagonistic relationships that seem to form in organizations, and certainly IT and users has always been one. Certainly SRE and DevOps, and engineers and QA; it’s because there are checks and balances there. But ultimately, I think there’s a lot more to gain from preserving those relationships instead of allowing them to become antagonistic over time. In order to do that, you need to put in the effort and really have conversations with people, which is really hard to scale. But with tools like Slack, and the fact that we’re all on tools like Slack today, we have a new opportunity to automate a lot of those interactions so that the outcome is achievable, and you can have most of these really educational moments without everybody having to lift a finger. Which is, which is the
Jason Baum 28:36
hope. You know, this is a great transition point, I think, because you’ve described your passion around it, and clearly you can hear it when you talk. So prior to GE, and we’re working backwards here, what led you to cybersecurity? What was it? Did you know, like, when you were five years old, that you wanted to go into cybersecurity? Was this something you were passionate about early in your life? How did you get there? So,
Jason Meller 29:09
I was born in 1985, and it was a really interesting time to be born, because I got a chance to ride a lot of different major technological waves at their zenith. The first was really the personal computer renaissance that we had in the late ’80s and early ’90s; that was a very formative age for me. Then came the dawn of the internet, right around the time I hit middle school, and peer-to-peer file sharing and the legal quagmire around that, which was early high school for me. My first year in college was the same year that Facebook launched, and the rise of social media, and right at the end of college came the rise of mobile devices, smartphones, the iPhone. So really, I was very fortunate that I got a lot of the history of where we came from while I was still young enough to remember it, during those very formative years. The common element across all these different waves is that security was always the last thing to get figured out. I remember early on, I didn’t really know much about computers, but I was super interested, and the thing that I could affect the most was breaking things. Breaking things was the way I could cause the most change in what I was working on, whether that was breaking my own computer through exploration, which eventually morphed into, oh, I’m on AOL, which used to be called America Online, and I can write a certain instant message that will crash my friend’s computer when it gets sent over the wire. That’s cool. Breaking stuff became really fun, and then suddenly it was, how can I break things more efficiently? And that led me to, I need to learn to code to be able to do that. Okay, there are different coding languages out there; I’ll pick up this one, Visual Basic. 
And then I started building tools that other people could use to break other people's computers. It was so easy back then. There used to be this thing in the mid-90s called the ping of death, and all you had to do was send this one malformed ping to any computer that was connected to the internet, and it would blue-screen it, which seems ridiculous today. But that's what it took back then. So it was a really great learning ground for a lot of the offensive tactics, what people would refer to as script-kiddie tactics, and for really starting to explore the offensive side. Then, growing my skills there, I remember being really interested in cracking software, because I wanted to have Adobe Photoshop and all this stuff. But then I got really interested in how these people created the workarounds to get through the serial protection, and that's when I started learning about reverse engineering. They used to have these little training grounds online, with exercises called crackmes. They were these little programs, and the goal was to learn how to bypass the serial protection. They were there to actually create this new world of people who could do this skill. I thought it was just fun; I didn't realize I was actually learning a computer science skill that would serve me well later in life. So I was just really fortunate. My parents gave me unfettered access to the computer, which may not necessarily be a good idea today, and it was during a time when there was just a lot to learn and a lot of easy stuff where I could show a lot of initial progress, which got me excited.
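As an editorial aside: the ping of death Jason describes worked because IPv4's total-length field is only 16 bits, capping a datagram at 65,535 bytes, while IP fragmentation let a sender ship fragments that reassembled past that limit and overflowed fixed-size buffers in unpatched network stacks. A minimal sketch of just that arithmetic (the function name here is illustrative, not from any real codebase):

```python
# Toy illustration of why the mid-90s "ping of death" crashed machines.
# IPv4's 16-bit total-length field means no legal datagram can exceed
# 65,535 bytes, yet fragment offsets (counted in 8-byte units) allowed a
# final fragment to land so the reassembled datagram exceeded that limit,
# overflowing fixed-size reassembly buffers on unpatched systems.

MAX_IP_DATAGRAM = 65_535   # 2**16 - 1, the IPv4 total-length limit
FRAGMENT_UNIT = 8          # fragment offsets are measured in 8-byte units

def reassembled_size(last_fragment_offset_units: int, last_fragment_len: int) -> int:
    """Bytes the receiver must buffer once the final fragment arrives."""
    return last_fragment_offset_units * FRAGMENT_UNIT + last_fragment_len

# The classic attack: place the last fragment near the maximum offset so the
# reassembled datagram is larger than any buffer sized for a legal packet.
size = reassembled_size(last_fragment_offset_units=8189, last_fragment_len=1480)
print(size)                    # 66992 bytes
print(size > MAX_IP_DATAGRAM)  # True: larger than any legal IPv4 datagram
```

Modern kernels simply reject reassembly past 65,535 bytes, which is why, as Jason says, the attack seems ridiculous today.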
And then it really wasn't until I hit college that the first big wave of malware and spyware started hitting computers, early 2005, 2006, when everybody had Windows XP and it was just loaded with crap. Everything was busted; you had a million toolbars in your Internet Explorer. I was in a support role there, and I had to clean up a lot of the mess. And I realized: oh, wow, this is not just for fun anymore. People are building business economics around harming other people through viruses and malware, and I don't want to be on that side. I want to be with the good guys. It feels really good to fix problems. Taking that one student who wrote their thesis on their computer, who can't recover it because some malware screwed up their entire hard drive, and being able to get that thesis back for them, that felt really good. That's when I knew I wanted to be a defender, and that's really what kicked off my career. Tapping into that IT support skill set led me to eventually become a defender, and then to the formative moments that led me to try to find that balance between gathering as much data as possible and setting the right limits. I've always gotten a lot out of helping others, and there are not as many opportunities to do that in cybersecurity as I think there should be. That's a big reason why Honest Security is a major part of, you know, almost my identity.
Jason Baum 34:05
Yeah, we have very similar backgrounds, because I'm just slightly older than you, and it sounds so similar. I kind of wish I went into that phase; it was very sophomoric for me. It was just, hey, we can do a prank call over the modem. Cool. That was what I spent my time doing. I wish I was doing some of the other cool things. But it is an interesting path that we took, because you weren't born with these things; they didn't already exist. So I think we were the early adopters, in a way. And I think our parents just didn't know any better, to your comment that you had unfettered access. I think we all did, in a way. Our parents had no clue what we were doing with the computers or how we were doing it. Right? You got your 50 free hours of AOL or whatever, and then it expired. Or, yeah, I get my encyclopedia online now, Mom. Cool. So, we're wrapping up, but I would really love to hear your thoughts on the future. Cybersecurity just seems to be getting worse and worse, or the threats, I should say, seem to be getting worse and worse, and it would seem easier for someone to hack or get into these systems. I just hear about it every day. I mean, what is it, all these major sites were down the other day? What do you think the future is going to be like with cybersecurity? How can we all play a better role in it?
Jason Meller 35:58
I think, for the general audience out there, it's important to recognize how far we've come, because you're right, we hear about all this stuff in the news all the time, and it feels like it's getting so much worse and becoming a major part of our lives. But as we just talked about, I can't remember a time when devices have been so locked down and so secure. So in some ways, we've come a long way in terms of just fundamentally raising the bar. What that has created, though, is a very sophisticated set of threat actors who are really the only ones with the economic resources to take advantage of the gaps in security. It's really them performing their own R&D, finding zero-day exploits that people haven't heard of before, using them at the right time, weaponizing them, and trying to build an economic advantage for themselves. The big one today is really ransomware: the ability to hold files hostage, receive the ransom, and then give the files back. So I think that, like anything else, you can't really appeal to the good nature of criminals. What you need to do to beat them is raise the economic stakes, the amount of money it takes for them to deploy their attacks, so that it's no longer economically viable for them to even exist. And that really means investments in increasing the baseline security of the actual devices themselves.
I think Apple, Microsoft, and the creators of the various flavors of Linux all have a responsibility to add as much ransomware protection into the operating system itself. We shouldn't be dependent on third parties; these are really flaws in the architecture of the operating system, and having third parties service them creates an opportunity for those parties to extract a lot of money from folks. Even if it's well deserved, I think that's really an issue that should be solved by the vendors. The other side of it is that I think we need to spend a lot more time investing in human beings, the people behind the computer, because if we can solve that problem, or at least reduce its frequency, a lot of these attacks don't become economically viable. I really like the fact that people are using Slack more and more, because Slack is so much different from email. If you think about email, it's just this giant inbox in the sky, and anyone in the world can send you an email, and they can pretend to be anyone else. When I'm in Slack, I'm only talking to people who presumably have access to that Slack instance, so there's a different level of trust there. Email is basically a way for anyone to have access to your digital space. It's this weird entry point where people can just send you unsolicited messages that can cause you to perform actions. So I'd like to see us rely on email a lot less, because I think it's a major root cause of a lot of the untrusted actions that end users take. I'd like us to be a little bit more particular about how we establish communication with external third parties, and I think Slack is a really good example of that.
Even some of the social media things we do today have this whole dance of really vetting the person first, accepting the friend request, versus just having this address out there that anyone can bombard with nonsense, where maybe 0.01% of that nonsense is going to lead me to compromise myself. It just feels wrong to me; it always has. So I think the future is really making the investment in people and trying to prepare them as much as possible for the litany of things that can happen. There's always going to be more complexity that needs to be dealt with, but I think we need to actively train people, and train them in a way that actually works. Catching them at that point of performance, telling them why, giving them visibility, and really fixing that relationship is going to be a great way to raise the bar yet again for the attackers, and maybe some of the major threats out there won't be viable anymore because we've done that. Then there'll be a new thing, and probably a much scarier thing, but that's the goal: the attacks may get scarier, but there'll be a lot less frequency to them. We can already see there's just a lot less background malware out there. Things like installing adware just aren't as big of a problem anymore, because we've raised the bar. Now we've created a new problem, and life goes on. So I don't want people to get too cynical out there and feed too much into fear, uncertainty, and doubt.
I think if you're going to make an investment today in security for your organization, make it in your employees, and not necessarily in buying the next shiny tool that's going to give you even more visibility into devices. I think you really need to establish a communication channel with your people and give them an opportunity to talk back with you and have a conversation about security, versus the one-way affair it's always been.
Jason Baum 40:49
I love that: invest in the future, people are the future. You don't hear that enough, and it certainly fits within our theme of the Humans of DevOps. I really appreciate what you had to say today, Jason. It was really great getting to meet you.
Jason Meller 41:05
Thank you for having me
Jason Baum 41:05
That was Jason Meller, the founder of Kolide, and again, the guide that he wrote is at honest.security. Please go and read it; it's a really interesting read. This topic, I feel like we could go on for days. Jason, it was great having you on.
Jason Meller 41:27
Yeah, thank you. And for folks out there who want to follow me on Twitter, you can reach me at @jmeller. And if you're interested in Kolide, you can find us at kolide.com.
Jason Baum 41:39
Awesome. And thank you for listening to this episode of the Humans of DevOps podcast. Before I finish, again, this is the part of the podcast where I'm going to recommend that you go on and ask us some questions, because I don't want it to be a one-way street. Maybe we could get Jason back on to answer a few of them. Maybe we could send him something on Slack; we'll figure it out. Don't email Jason. By the way, the concept of no emails is beautiful to me, as I think it is to all of us. All right, I'm going to wrap it up. Thanks for listening to this episode of the Humans of DevOps podcast. I'm going to end this one the same as I always do, encouraging you to become a premium member of DevOps Institute to get access to even more great resources just like this one. And until next time, stay safe, stay healthy, and most of all, stay human. Live long and prosper.
Thanks for listening to this episode of the Humans of DevOps podcast. Don't forget to join our global community to get access to even more great resources like this. Until next time, remember: you are part of something bigger than yourself. You belong.