
Cloud-Native Considerations for SMBs with Apurva Joshi
The conversation covers:
- The difference between cloud computing and cloud-native, according to AJ.
- Whether it's possible to have a cloud-native application that runs on-premise.
- The types of conversations that AJ has with customers as VP of product. AJ also talks about the different types of customers that DigitalOcean serves.
- How the needs of smaller teams tend to differ from the needs of enterprise users — and the challenges that smaller teams face when learning and implementing cloud-native applications.
- Making decisions when using Kubernetes, and how it can be overwhelming due to the sheer number of choices that you can make.
- Some of the main motivations that are driving smaller companies to Kubernetes. AJ also explains what he thinks is the best rationale for using Kubernetes.
- Popular misconceptions about cloud-native and Kubernetes that AJ is seeing.
- Why customers often struggle to make technology decisions to support their business goals. AJ's advice for businesses when making technology decisions.
- Why startups are encouraged to start by using open source — and why open source wins in the end when compared to proprietary solutions.

Links
DigitalOcean: https://www.digitalocean.com/
Twitter: https://twitter.com/apurvajo
LinkedIn: https://www.linkedin.com/in/apurvajo/

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I'm chatting with AJ. AJ, can you go ahead and introduce yourself?

AJ: Hey, I'm AJ. I'm vice president of product for DigitalOcean. I've been with the company for about 15 months. Before that, I spent about a couple of decades with Microsoft. I was fortunate to work on Azure for the last decade, and I had the opportunity to build some cloud services with the company.

Emily: And thank you so much for joining us.

AJ: Thank you, thank you for having me.

Emily: I always like to start out by asking, what do you actually do? What does a day look like?

AJ: [laughs]. It's an interesting question. So, yes, the day is usually all over the place depending on the priorities and things that are in motion for a given quarter or a week, per se. But usually, my days involve working with the team around the strategic initiatives that have been planned, driving clarity around different projects that I [unintelligible]. Mainly working with leadership on defining some of the roadmap for the product as well as the company. And yeah, and talking to lots of customers. That's something that I really, really enjoy. And every other day I have a meeting or two talking to our customers, learning from them, how they use our products and how we can get better.

Emily: I'm going to ask more about those conversations with customers because that's what I find really interesting. But first, actually, I wanted to start with another question.
What do you see as the difference between cloud computing and cloud-native?AJ: The difference essentially, in a way, the cloud computing is a much bigger umbrella around how we as a technology industry are enabling other businesses to bring their workload to a more scalable, more efficient, more secure environment versus trying to host, optimize, or do things by themselves. And the cloud-native, in a way, it's a subset of a cloud computing where not necessarily you always have to have existing workloads or something that is prior technology that has been already built and you're looking for a place to host. In a way, when you're building something out, new, greenfield apps and whatnot, you're starting from scratch, you're building your applications and solutions that are cloud-native by definition. They're built for Cloud; they're born in Cloud, and are optimizing the latest and the greatest innovations that are present and as future-looking to help you scale and succeed your business, in a way.Emily: Do you think it's possible to have a cloud-native application that runs on-premise?AJ: There's a lot of [laughs] innovations happening in pockets, and especially from the top providers to enable those scenarios. But at the end of the day, those investments are essentially driven to help people and companies, especially on the larger scale, to buy some time to completely move to the public cloud where the industry takes their time to come up with the compliance, security requirements and [unintelligible]. So, you'll start to see—you might have heard about some of the investments these top cloud providers are doing about allowing and bringing those similar stack and technologies that they are building in a public cloud to on-premise or running on their own data center, in a way. So, it is possible, in bits and pockets to start with a cloud-native to run, on-premise, but that customer segment and the target is very, very different than the ones that start in a public cloud first.Emily: I want to switch to talking about some of the conversations that you have with customers. I really like to understand what end users are thinking. What would you say when you talk to customers? What's the thing that they're most excited about?AJ: Right. So, it depends on what segment of customers you're speaking with, right? DigitalOcean serves a very different set of customers than a typical large cloud providers do. We're focused more on individual developers, small startups, or SMBs. Again, when I say SMBs, it's a broad term, when I say SMBs the S with [unintelligible]. So, we focus mainly on two to ten devs team, and smaller companies and whatnot. So, their requirements are very different; their needs are very unique compared to what I used to talk, back in my past life, with enterprise customers. Their requirements are very unique and different as well. So, what I hear from the customers that I speak with recently, and have been speaking with for last over a year, is how can I make my business that is [unintelligible] on a cloud? And what I mean by that is how do I build solutions that are simple, easy to understand, and where I'm focused on building software and not really worrying about the complexity of the infrastructure, at the same time, keep the price in control and very simple and predictable. And that resonates really, really well. 
Tons and tons of customers that I spoke with recently moved from large cloud providers to our platform because their business was not viable on those cloud providers. And what I mean by that...
23 September 2020 · 32 min

Enabling Cloud Native Environments with Gou Rao
The conversation covers:
- Gou's role as CTO of Portworx, and how he works with customers on a day-to-day basis.
- Common pain points that Gou talks about with customers. Gou explains how he helps customers create agile and cost-effective application development and deployment environments.
- The types of people that Gou talks to when approaching customers about cloud-native discussions.
- Why customers often struggle with infrastructure-related problems during their cloud-native journeys, and how Gou and his team help.
- Common misconceptions that exist among customers when exploring cloud-native solutions. For example, Gou mentions moving to Kubernetes for the sake of moving to Kubernetes.
- Gou's thoughts on state — including why there is no such thing as an end-to-end stateless architecture.
- Some cloud-native vertical trends that Gou is noticing taking place in the market.
- The issue of vendor lock-in, and how data and state fit into lock-in discussions.
- Gou's opinion on where he sees the cloud-native ecosystem heading.

Links
Portworx: https://portworx.com/
Portworx Blog: https://portworx.com/blog/
Gou Rao Email: gou@portworx.com

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native, I'm your host Emily Omier, and today I am chatting with Gou Rao. Gou, I want to go ahead and have you introduce yourself. Where do you work? What do you do?

Gou: Sure. Hi, Emily, and hi to everybody that's listening in. Thanks for having me on this podcast. My name is Gou Rao. I'm the CTO at Portworx. Portworx is a leader in the cloud-native storage space. We help companies run mission-critical stateful applications in production in hybrid, multi-cloud, and cloud-native environments.

Emily: So, when you say you're CTO, obviously that's a job title everyone, sort of, understands. But what does that mean you spend your day doing?

Gou: Yeah, it is an overloaded term. As a CTO, I think CTOs in different companies wear multiple hats doing different things. Here at Portworx, technically I'm in charge of the company's strategy and technical direction. What does that mean in terms of my day-to-day activities?
And it's spending a lot of time with customers understanding the problems that they're trying to solve, and then trying to build a pattern around what different people in different industries and companies are doing, and then identifying common problems and trying to bring solutions to market, by working with our engineering teams, that sort of address, holistically, the underlying areas that I see people try and craft solutions around, whether it's enabling an agile development environment for their internal developers, or cost optimization, there's usually some underlying theme, and my job is to identify what that is, and come up with a meaningful solution that addresses a wide segment of the market.Emily: What are the most common pain points that you end up talking to customers about?Gou: Over the past, I think, eight-plus years or so—I think the enterprise software space goes through iterations in the types of problems that are being solved. Over the past eight-plus years or so, it really has been around this—we use this term cloud-native—enabling cloud-native environments. And what does that really mean? In talking to customers, what this is really meant recently is enabling an agile application development and deployment environment. And let's even define what that is. Me as an application developer, I have to rely on traditional IT techniques where there's a separate storage department, compute department, networking department, security department, and I have to interact with all of them just to develop and try out an application. But that really is impeding me as a developer from how fast I can iterate and build product and get it out there, so by and large, the common underlying theme has been, “Make that process better for me.” So, if I'm head of infrastructure how can I enable my developers to build and push product faster? So, getting that agility up in a sense where it makes—cost-wise, too, so it has to make cost sense—how do I enable an efficient, cost-efficient development platform? That has been the underlying theme. That sort of defines a set of technologies that we call cloud-native, and so orchestration tools like Kubernetes, and storage technologies like, hopefully, what we're doing at Portworx, these are all aimed at facilitating that. That's been sort of what we've been focused on over the past couple of years.Emily: And when you talk to customers, do they tend to say, “Hey, we need to figure out a way to increase our development velocity?” Or do they tend to say, “We need a better solution for stateful applications?” What's the type of vocabulary that they're attempting to use to describe their problems, and how high-level do they usually go?Gou: That's a good question. Both. So, the backdrop really is, “Increase my development velocity. Make it easier for me to put product out there faster.” Now, what does it take to get there? So, the second-order problems then become do I run in the public cloud, private cloud? Do I need help running stateful applications? So, these are all pillars that support the main theme here, which is increasing development velocity. So, the primary umbrella under which our customers are operating under is really around increasing the development velocity in a way that makes cost sense. And if you double-click on that and look at the type of problems that they're solving, they would include, “How do I efficiently run my applications in a public cloud? Or a hybrid cloud? 
How do I enable workflows that need to span multiple clouds?" Again because maybe they're using cloud provider technologies, like either compute resources, or even services that a cloud provider may be offering, so that, again, all of this so that they can increase their development velocity.

Emily: And in the past, and to a certain extent now, storage was somewhat of a siloed area of expertise. When you're talking to customers, who are you talking to in an organization? I mean, is it somebody who's a storage specialist or is it someone who's not?

Gou: No, they're not. So, that's been one of the things that have really changed in this ecosystem, which is the shift away from this kind of like, hey, there's a storage admin and a storage architect, and then there's a compute admin or VM admin or a security admin, that's really not who are driving this because if you look at that—that world really thinks in terms of infrastructure first.
16 September 2020 · 29 min

Exploring Single Music’s Cloud Native Journey with Kevin Crawley
The conversation covers:
- Why Kevin helped launch Single Music, where he currently provides SRE and architect duties.
- Single Music's technical evolution from Docker Swarm to Kubernetes, and the key reasons that drove Kevin and his team to make the leap.
- What's changed at Single Music since migrating to Kubernetes, and how Kubernetes is opening new doors for the company — increasing stability and making life easier for developers.
- How Kubernetes allows Single Music to grow and pivot when needed, and introduce new features and products without spending a large amount of time on backend configurations.
- How the COVID-19 pandemic has impacted music sales.
- Single Music's new plugin system, which empowers their users to create their own middleware.
- Kevin's current project, which is a series of how-to manuals and guides for users of Kubernetes.
- Some common misconceptions about Kubernetes.

Links
Single Music
Traefik Labs
Twitter: https://twitter.com/notsureifkevin?lang=en
Connect with Kevin on LinkedIn: https://www.linkedin.com/in/notsureifkevin

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I am chatting with Kevin Crawley. And Kevin actually has two jobs that we're going to talk about. Kevin, can you sort of introduce yourself and what your two roles are?

Kevin: First, thank you for inviting me on to the show, Emily. I appreciate the opportunity to talk a little bit about both my roles because I certainly enjoy doing both jobs. I don't necessarily enjoy the amount of work it gives me, but it also allows me to explore the technical aspects of cloud-native, as well as the business and marketing aspects of it. So, as you mentioned, my name is Kevin Crawley. I work at a company called Containous. They are the company who created Traefik, the cloud-native load balancer. We've also created a couple other projects, and I'll talk a little bit about those later. For Containous, I'm a developer advocate. I work both with the marketing team and the engineering team. But also I moonlight as a co-founder and a co-owner of Single Music. And there, I fulfill mostly SRE-type duties and also architect duties where a lot of times people will ask me feedback, and I'll happily share my opinion. And Single Music is actually based out of Nashville, Tennessee, where I live, and I started that with a couple friends here.

Emily: Tell me actually a little bit more about why you started Single Music. And what do you do exactly?

Kevin: Yeah, absolutely. So, the company started out of really an idea that labels and artists—and these are musicians if you didn't pick up on the name Single Music—we saw an opportunity for those labels and artists to sell their merchandise through a platform called Shopify to have advanced tools around selling music alongside that merchandise.
And at the time, which was in 2016, there weren't any tools really to allow independent artists and smaller labels to upload their music to the web and sell it in a way in which could be reported to the Billboard charts, as well as for them to keep their profits. At the time, there was really only Apple Music, or iTunes. And iTunes keeps a significant portion of an artist's revenue, as well as they don't release those funds right away; it takes months for artists to get that money. And we saw an opportunity to make that turnaround time immediate so that the artists would get that revenue almost instantaneously. And also we saw an opportunity to be more affordable as well. So, initially, we offered that Shopify integration—and they call those applications—and that would allow those store owners to distribute that music digitally and have those sales reported in Nielsen SoundScan, and that drives the Billboard Top 100. Now since then, we've expanded quite considerably since the launch. We now report on sales for physical merchandise as well. Things like cassette tapes, and vinyl, so records. And you'd be surprised at how many people actually still buy cassette tapes. I don't know what they're doing with them, but they still do. And we're also moving into the live streaming business now, with all the COVID stuff going on, and there's been some pretty cool events that we've been a part of since we started doing that, and bands have gotten really elaborate with their live production setups and live streaming. To answer the second part of your question, what I do for them, as I mentioned, I mostly serve as an advisor, which is pretty cool because the CTO and the developers on staff, I think there's four or five developers now working on the team, they manage most of the day-to-day operations of the platform, and we have, like, over 150 Kubernetes pods running on an EKS cluster that has roughly, I'd say, 80 cores and 76 gigabytes of RAM. That is around, I'd say about 90 or 100 different services that are running at any given time, and that's across two or three environments, just depending on what we're doing at the time.Emily: Can you tell me a little bit about the sort of technical evolution at Single? Did you start in 2016 on Kubernetes? That's, I suppose, not impossible.Kevin: It's not impossible, and it's something we had considered at the time. But really, in 2016, Kubernetes, I don't even think there wasn't even a managed offering of Kubernetes outside of Google at that time, I believe, and it was still pretty early on in development. If you wanted to run Kubernetes, you were probably going to operate it on-premise, and that just seemed like way too high of a technical burden. At the time, it was just myself and the CTO, the lead developer on the project, and also the marketing or business person who was also part of the company. And at that time, it was just deemed—it was definitely going to solve the problems that we were anticipating having, which was scaling and building that microservice application environment, but at the time, it was impractical for myself to manage Kubernetes on top of managing all the stuff that Taylor, the CTO, had to build to actually make this product a reality. So, initially, we launched on Docker Swarm in my garage, on a Dell R815, which was like a, I think was 64 cores and 256 gigs of RAM, which was, like, overkill, but it was also, I think it cost me, like, $600. I bought it off of Craigslist from somebody here in the area. 
But it served really well as a server for us to grow into, and it was, for the most part, other than electricity and the internet connection into m...
9 September 2020 · 38 min

Navigating the Cloud Native Ecosystem with Harness Evangelist Ravi Lachhman
The conversation covers:
- An overview of Ravi's role as an evangelist — an often misunderstood, but important, technology enabler.
- Balancing organizational versus individual needs when making decisions.
- Some of the core motivations that are driving cloud-native migrations today.
- Why Ravi believes in empowering engineers to make business decisions.
- Some of the top misconceptions about cloud native. Ravi also provides his own definition of cloud native.
- How cloud-native architectures are forcing developers to "shift left."

Links
https://harness.io/
Twitter: https://twitter.com/ravilach
Harness community: https://community.harness.io/
Harness Slack: https://harnesscommunity.slack.com/

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native, I am your host Emily Omier. And today I'm chatting with Ravi Lachhman. Ravi, I want to always start out with, first of all, saying thank you—

Ravi: Sure, excited to be here.

Emily: —and second of all, I like to have you introduce yourself, in your own words. What do you do? Where do you work?

Ravi: Yes, sure. I'm an evangelist for Harness. So, what an evangelist does, I focus on the ecosystem, and I always like the joke, I marry people with software because when people think of evangelists, they think of a televangelist. Or at least that's what I told my mother and she believes me still. I focus on the ecosystem Harness plays in. And so, Harness is a continuous delivery as a service company. So, what that means, all of the confidence-building steps that you need to get software into production, such as approvals and test orchestration—Harness helps you do that with lots of convention, and as a service.

Emily: So, when you start your day, walk me through what you're actually doing on a typical day?

Ravi: A typical day—dude, I wish there was a typical day because we wear so many hats as a start-up here, but kind of a typical day for me and a typical day for my team, I ended up reading a lot. I probably read about two hours a day, at least during the business day. Now, for some people that might not be a lot, but for me, that's a lot. So, I'll usually catch up with a lot of technology news and news in general, then kind of see how certain things are playing out. So, a big fan of The New Stack, big fan of InfoQ. I also like reading Hacker News for more emotional reading. The big orange angry site, I call Hacker News. And then really just interacting with the community and teams at large. So, I'm the person I used to make fun of, you know, quote-unquote, "thought leader." I used to not understand what they do, then I became one that was like, "Oh, boy." [laughs]. And so just providing guidance for some of our field teams, some of the marketing teams around the cloud-native ecosystem, what I'm seeing, what I'm hearing, my opinion on it. And that's pretty much it. And I get to do fun stuff like this, talking on podcasts, always excited to talk to folks and talk to the public.
And then kind of just a mix of, say, making some sort of demos, or writing scaffolding code, just exploring new technologies. I'm pretty fortunate in my day to day activities.Emily: And tell me a little bit more about marrying people with software. Are you the matchmaker? Are you the priest, what role?Ravi: I can play all parts of the marrying lifecycle. Sometimes I'm the groom, sometimes I’m the priest. But I'm really helping folks make technical decisions. So, it’s go a joke because I get the opportunity to take a look at a wide swath of technology. And so just helping folks make technical decisions. Oh, is this new technology hot? Does this technology make sense? Does this project fatality? What do you think? I just play, kind of, masters of ceremony on folks who are making technology decisions.Emily: What are some common decisions that you help people with, and common questions that they have?Ravi: Lot of times it comes around common questions about technology. It's always finding rationale. Why are you leveraging a certain piece of technology? The ‘why’ question is always important. Let's say that you're a forward-thinking engineer or a forward-thinking technology leader. They also read a lot, and so if they come across, let's say a new hot technology, or if they're on Twitter, seeing, yeah, this particular project’s getting a lot of retweets, or they go in GitHub and see oh, this project has little stars, or forks. What does that mean? So, part of my role when talking to people is actually to kind of help slow that roll down, saying, “Hey, what’s the business rationale behind you making a change? Why do you actually want to go about leveraging a certain, let's say, technology?” I’m just taking more of a generic approach, saying, “Hey, what’s the shiny penny today might not be the shiny penny tomorrow.” And also just providing some sort of guidance like, “Hey, let's take a look at project vitality. Let's take a look at some other metrics that projects have, like defect close ratio—you know, how often it's updates happening, what's your security posture?” And so just walking through a more, I would say the non-fun tasks or non-functional tasks, and also looking about how to operationalize something like, “Hey, given you want to make sure you're maintaining innovation, and making sure that you're maintaining business controls, what are some best operational practices?” You know, want to go for gold, or don't boil the ocean, it’s helping people make decisive decisions.Emily: What do you see as sort of the common threads that connect to the conversations that you have?Ravi: Yeah, so I think a lot of the common threads are usually like people say, “Oh, we have to have it. We're going to fall behind if you don't use XYZ technology.” And when you really start getting to talking to them, it's like, let’s try to line up some sort of technical debt or business problem that you have, and how about are you going to solve these particular technical challenges? It's something that, of the space I play into, which is ironic, it's the double-edged sword, I call it ‘chasing conference tech.’ So, sometimes people see a really hot project, if my team implements this, I can go speak at a conference about a certain piece of technology. And it's like, eh, is that a really r...
2 September 2020 · 32 min

Simplifying Cloud Native Testing with Jón Eðvald
The conversation covers:
- Some of the pain points and driving factors that led Jón and his partners to launch Garden. Jón also talks about his early engineering experiences prior to Garden.
- How the developer experience can impact the overall productivity of a company, and why companies should try and optimize it.
- Kubernetes shortcomings, and the challenges that developers often face when working with it. Jón also talks about the Kubernetes skills gap, and how Garden helps to close that gap.
- Business stakeholder perception regarding Kubernetes challenges.
- The challenge of deploying a single service on Kubernetes in a secure manner — and why Jón was surprised by this process.
- How the Kubernetes ecosystem has grown, and the benefits of working with a large community of people who are committed to improving it.
- Jón's multi-faceted role as CEO of Garden, and what his day typically entails as a developer, producer, and liaison.
- Garden's main mission, which involves streamlining end-to-end application testing.

Links:
Company site: https://garden.io/
Twitter: https://twitter.com/jonedvald
Kubernetes Slack: https://slack.k8s.io/

Transcript:

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm your host Emily Omier. And today I'm chatting with Jón Eðvald. And, Jón, thank you so much for joining me.

Jón: Thank you so much for having me. You got the name pretty spot on. Kudos.

Emily: Woohoo, I try. So, if you could actually just start by introducing yourself and where you work in Garden, that would be great.

Jón: Sure. So, yeah, my name is Jón, one of the founders, and I'm the CEO of Garden. I've been doing software engineering for more years than I'd like to count, but Garden is my second startup. Previous company was some years ago; dropped out of Uni to start what became a natural language processing company. So, different sort of thing than what I'm doing now. But it's actually interesting just to scan through the history of how we used to do things compared to today. We ran servers out of basically a cupboard with a fan in it, back in the day, and now, things are done somewhat differently. So, yeah, I moved to Berlin, it's about four years ago now, met my current co-founders. We all shared a passion and, I guess to some degree, frustrations about the general developer experience around, I guess, distributed systems in general. And now it's become a lot about Kubernetes these days in the cloud-native world, but we are interested in addressing common developer headaches regarding all things microservices. Testing, in particular, has become a big part of our focus. Garden itself is an open-source product that aims to ease the developer experience around Kubernetes, again, with an emphasis on testing. When we started it, there wasn't a lot of these types of tools around, or they were pretty early on. Now there's a whole bunch of them, so we're trying to fit into this broad ecosystem. Happy to expand on that journey.
But yeah, that's roughly—that's what Garden is, and that’s… yeah, a few hop-skips of my history as well.Emily: So, tell me a little bit more about the frustration that led you to start Garden. What were you doing, and what were you having trouble doing, basically?Jón: So, when I first moved to Berlin, it was to work for a company called Clue. They make a popular period tracking app. So, initially, I was meant to focus on the data science and data engineering side of things, but it became apparent that there was a lot of need for people on the engineering side as well. So, I gravitated into that and ended up managing the engineering team there. And it was a small operation. We had more than a million daily active users yet just a single back end developer, so it was bursting at the seams. And at the time running a simple Node.js backend on Heroku, single Postgres database, pretty simple. And I took that through—first, we adopted containers and moved into Docker Cloud. Then Docker Cloud disappeared, or was terminated without—we had to discover that by ourselves. And then Kubernetes was manifesting as the de facto way to do these things. So, we went through that transition, and I was kind of surprised. It was easy enough to get going and get to a functional level with Kubernetes and get everything running and working. The frustration came more from just the general developer experience and developer productivity side. Specifically, we found it very difficult to test the whole application because we had, by the end of that journey, a few different services doing different things. And for just the time you make a simple change to your code to it actually having been built, deployed, and ultimately tested was a rather tedious experience. And I found myself building tools, bespoke tools to be able to deal with that, and that ended up being sort of a janky prototype of what Garden is today. And I realized that my passion was getting the better of me, and we wanted to start a company to try and do better.Emily: Why do you think developer experience matters?Jón: Beyond just the, kind of, psychological effect of having to have these long and tedious feedback loops—just as a developer myself, it kind of grinds and reduces the overall joy of working on something. But in more concrete material terms, it really limits your productivity. You basically, you take—if your feedback loop is 10 times longer than it should be, that exponentially reduces the overall output of you as an individual or your team. So, it has a pretty significant impact on just the overall productivity of a company.Emily: And, in fact, it seems like a lot of companies move to Kubernetes or adopt distributed systems, cloud-native in general, precisely to get the speed.Jón: And, yeah, that makes sense. I think it's easy to underestimate all the, what are often called these day-two problems, when—so, it's easy enough to grok how you might adopt Kubernetes. You might get the application working, and you even get to production fairly quickly, and then you find that you've left a lot of problems unsolved, that Kubernetes by itself doesn't really address for you. And it's often conflated by the fact that you may be actually adopting multiple things at the same time. You may be not only transitioning to Kubernetes from something analogous, you may be going from simpler, bespoke processes, or you might have just a monolith that didn't really have any complicated requirements when it comes to dev tooling and dev setups. 
So, yeah, you might be adopting microservices, containers, and Kuberne...
26 August 2020 · 27 min

CERN’s Transition to Containerization and Kubernetes with Ricardo Rocha
Some of the highlights of the show include:
- The challenges that CERN was facing when storing, processing, and analyzing data, and why it pushed them to think about containerization.
- CERN's evolution from using mainframes, to physical commodity hardware, to virtualization and private clouds, and eventually to containers. Ricardo also explains how the migration to containerization and Kubernetes was started.
- Why there was a big push from groups that focus on reproducibility to explore containerization.
- How end users have responded to Kubernetes and containers. Ricardo talks about the steep Kubernetes learning curve, and how they dealt with frustration and resistance.
- Some of the top benefits of migrating to Kubernetes, and the impact that the move has had on their end users.
- Current challenges that CERN is working through, regarding hybrid infrastructure and rising data loads. Ricardo also talks about how CERN optimizes system resources for their scientists, and what it's like operating as a public sector organization.
- How CERN handles large data transfers.

Links:
Email: ricardo.rocha@cern.ch
Twitter: https://twitter.com/ahcorporto
CERN

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to the Business of Cloud Native. I'm your host, Emily Omier, and today I'm here with Ricardo Rocha. Ricardo, thank you so much for joining us.

Ricardo: It's a pleasure.

Emily: Ricardo, can you actually go ahead and introduce yourself: where you work, and what you do?

Ricardo: Yeah, yes, sure. I work at CERN, the European Organization for Nuclear Research. I'm a software engineer and I work in the CERN IT department. I've done quite a few different things in the past in the organization, including software development in the areas of storage and monitoring, and also distributed computing. But right now, I'm part of the CERN Cloud Team, and we manage the CERN private cloud and all the resources we have. And I focus mostly on networking and containerization, so Kubernetes and all these new technologies.

Emily: And on a day to day basis, what do you usually do? What sort of activities are you actually doing?

Ricardo: Yeah. So, it's mostly making sure we provide the infrastructure that our physics users and experiments require, and also the people on campus. So, CERN is a pretty large organization. We have around 10,000 people on-site, and many more around the world that depend on our resources. So, we operate private clouds, we basically do DevOps-style work. And we have a team dedicated for the Cloud, but also for other areas of the data center. And it's mostly making sure everything operates correctly; try to automate more and more, so we do some improvements gradually; and then giving support to our users.

Emily: Just so everyone knows, can you tell a little bit more about what kind of work is done at CERN? What kind of experiments people are running?

Ricardo: Our main goal is fundamental research. So, we try to answer some questions about the universe. So, what's dark matter?
What's dark energy? Why don't we see antimatter? And similar questions. And for that, we build very large experiments. So, the biggest experiment we have, which is actually the biggest scientific experiment ever built, is the Large Hadron Collider, and this is a particle accelerator that accelerates two beams of protons in opposite directions, and we make them collide at very specific points where we build this very large physics experiments that try to understand what happens in these collisions and try to look for new physics. And in reality, what happens with these collisions is that we generate large amounts of data that need to be stored, and processed, and analyzed, so the IT infrastructure that we support, it’s larger fraction dedicated to this physics analysis.Emily: Tell me a little bit more about some of the challenges related to processing and storing the huge amount of data that you have. And also, how this has evolved, and how it pushed you to think about containerization.Ricardo: The big challenge we have is the amount of data that we have to support. So, these experiments, each of the experiments, at the moment of the collisions, it can generate data in the order of one petabyte a second. This is, of course, not something we can handle, so the first thing we do, we use these hardware triggers to filter this data quite significantly, but we still generate, per experiment, something like a few gigabytes a second, so up to 10 gigabytes a second. And this we have to store, and then we have large farms that will handle the processing and the reconstruction of all of this. So, we've had these sort of experiments since quite a while, and to analyze all of this, we need a large amount of resources, and with time. If you come and visit CERN, you can see a bit of the history of computing, kind of evolving with what we used to have in the past in our data center. But it's mostly—we used to have large mainframes, that now it's more in the movies that we see them, but we used to have quite a few of those. And then we transitioned to physical commodity hardware with Linux servers. Eventually introduced virtualization and private clouds to improve the efficiency and the provisioning of these resources to our users, and then eventually, we moved to containers and the main motivation is always to try to be as efficient as possible, and to speed up this process of provisioning resources, and be more flexible in the way we assign compute and also storage. What we've seen is that in the move from physical to virtualization, we saw that the provisioning and maintenance got significantly improved. What we see with containerization is the extra speed in also deployment and update of the applications that run on those resources. And we also see an improving resource utilization. We already had the possibility to improve quite a bit with virtualization by doing things like overcommit, but with containers, we can go one step further by doing more efficient resource sharing for the different applications we have to run.Emily: Is the amount of data that you're processing stable? Is it steadily increasing, have spikes, a combination?Ricardo: So, the way it works is, we have what we call ‘beam’ which is when we actually have protons circulating in the accelerator. And during these periods, we try to get as much collisions as ...
19 August 2020 · 34 min

Discussing the Latest Cloud Trends with Cloud Comrade Co-founder Andy Waroma
Highlights from this episode include:
- Key market drivers that are causing Cloud Comrade's clients to containerize applications — including the role that the global pandemic is playing.
- The pitfalls of approaching cloud migration with a cost-first strategy, and why Andy doesn't believe in this approach.
- Common misconceptions that can arise when comparing cloud TCO to on-premise infrastructure.
- How today's enterprises tend to view cloud computing versus cloud-native. Andy also mentions a key requirement that companies have to have when integrating cloud services.
- Andy's thoughts on build versus buy when integrating cloud services at the enterprise level.
- Why cloud migration is a relatively safe undertaking for companies because it's easy to correct mistakes.
- Why businesses need to re-think AI and to be more realistic in terms of what can actually be automated.
- Andy's must-have engineering tool, which may surprise you.

Links:
Cloud Comrade LinkedIn: https://www.linkedin.com/company/cloud-comrade/
Follow Andy on Twitter: @andywaroma
Connect with Andy on LinkedIn: https://www.linkedin.com/in/andyw/

Transcript

Emily: Hi everyone. I'm Emily Omier, your host, and my day job is helping companies position themselves in the cloud-native ecosystem so that their product's value is obvious to end-users. I started this podcast because organizations embark on the cloud-native journey for business reasons, but in general, the industry doesn't talk about them. Instead, we talk a lot about technical reasons. I'm hoping that with this podcast, we focus more on the business goals and business motivations that lead organizations to adopt cloud-native and Kubernetes. I hope you'll join me.

Emily: Welcome to The Business of Cloud Native. I'm Emily Omier, your host, and today I'm here with Andy Waroma. Andy, I just wanted to start with having you introduce yourself.

Andy: Yeah, hi. Thanks, Emily, for having me on your podcast. My name is Andy Waroma, and I'm based in Singapore, but originally from Finland. I've been [unintelligible] in Singapore for about 20 years, and for 11 years I spent with a company called SAP focusing on business software applications. And then more recently, about six years ago, I co-founded together with my ex-colleague from SAP, a company called Cloud Comrade, and we have been running Cloud Comrade now for six years and Cloud Comrade focuses on two things: number one, on cloud migrations; and number two, on cloud managed services across the Southeast Asia region.

Emily: What kind of things do you help companies understand when you're helping with cloud migrations? Is this like, like, a lift and shift? To what extent are you helping them change the architecture of their applications?

Andy: Good question. So, typically, if you look at the Southeast Asian market, we are probably anywhere between one to two years behind that of the US market. And I always like to say that the benefit that we have in Southeast Asia is that we have a time machine at our disposal. So, whatever has happened in the US in the past 18 months or so it's going to be happening also in Singapore and Southeast Asia.
And for the first three to four years of this business, we saw a lot of lift and shift migrations, but more recently, we have been asked to go and containerize applications to microservices, revamp applications from monolithic approach to a much more flexible and cloud-native approach, and we just see those requirements increasing as companies understand what kind of innovation they can do on different cloud platforms.Emily: And what do you think is driving, for your clients, this desire to containerize applications?Andy: Well, if you asked me three months ago, I probably would have said it's about innovation, and business advantage, and getting ahead in the market, and investing in the future. Now, with the global pandemic situation, I would say that most companies are looking at two things: they're looking at cost savings, and they are also looking at automation. And I think cost savings is quite obvious; most companies need to know how they can reduce on their IT expenditure, how they can move from CAPEX to OPEX, how they can be targeting their resources up and down depending on the business demand what they have. And at the same time, they're also not looking to hire a lot of new people into their internal IT organization. So, therefore, most of our customers want to see their applications to be as automated as possible. And of course, microservices, CI/CD pipelines, and everything else helps them to achieve that somewhat. But first and foremost, of course, it's about all services that Cloud provides in general. And then once they have been moving some of those applications and getting positive experiences, that's where we typically see the phase two kicking in, going into cloud-native microservices, containers, Kubernetes, Docker, and so forth.Emily: And do you think when companies are going into this, thinking, “Oh, I'm going to really reduce my costs.” Do you think they're generally successful?Andy: I don't think in a way that they think they are. So, especially if I'm looking at the Southeast Asian markets: Singapore, Malaysia, Thailand, Philippines, Indonesia, and perhaps other countries like Vietnam, Myanmar, and Cambodia, it’s a very cost-conscious market, and I always, also like to say that when we go into a meeting, the first question that we get from the customers, “How much?” It is not even what are we going to be delivering, but how much it's going to cost them. That's the first gate of assessment. So, it's very much of an on-premise versus clouds comparison in the beginning.And I think if companies go in with that type of a mindset, that's not necessarily the winning strategy for them. What they will come to know after a while is that, for example, setting up disaster recovery systems on an on-premise environment, especially when a separate location is extremely expensive, and doing something like that on the Cloud is going to be very cost-efficient. And that's when they start seeing cost savings. But typically, what they will start seeing on Cloud is a process cost-saving, so how they can do things faster, quicker, and be more flexible in terms of responding to end-user demands.Emily: At the beginning of the process, how much do you think your customers generally understand about how different the cost structure is going to be?Andy: So, we have more than 200 customers, and we have done more than 500 projects over the six years, and there's a vast range of customers. 
We have done work with companies with a few people; we have done companies with Fortune 10 organizations, and everything in between, in all kinds of different industries: manufacturing, finance, insurance, public sector, industrial level things, nonprofits, research organizations. So, we can't really say that each customer are same. There are customers who are very sophisticated and they know exactly what they want when going to a cloud platform, but then there are, of course, many other customers who need to be advised much more in the beginning, and that’s where we typically...
12 August 2020 · 24 min

RVU’s Cloud Native Transformation with Paul Ingles
Some highlights of the show include:
- The company's cloud-native journey, which accelerated with the acquisition of Uswitch.
- How the company assessed risk prior to their migration, and why they ultimately decided the task was worth the gamble.
- Uswitch's transformation into a profitable company resulting from their cloud-native migration.
- The role that multidisciplinary, collaborative teams played in solving problems and moving projects forward. Paul also offers commentary on some of the tensions that resulted between different teams.
- Key influencing factors that caused the company to adopt containerization and Kubernetes. Paul goes into detail about their migration to Kubernetes, and the problems that it addressed.
- Paul's thoughts on management and prioritization as CTO. He also explains his favorite engineering tool, which may come as a surprise.

Links:
RVU Website: https://www.rvu.co.uk/
Uswitch Website: https://www.uswitch.com/
Twitter: https://twitter.com/pingles
GitHub: https://github.com/pingles

Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.

Emily: Welcome to The Business of Cloud Native. I'm your host, Emily Omier, and today I am chatting with Paul Ingles. Paul, thank you so much for joining me.

Paul: Thank you for having me.

Emily: Could you just introduce yourself: where do you work? What do you do? And include, sort of, some specifics. We all have a job title, but it doesn't always reflect what our actual day-to-day is.

Paul: I am the CTO at a company called RVU in London. We run a couple of reasonably big-ish price comparison, aggregator type sites. So, we help consumers figure out and compare prices on broadband products, mobile phones, energy—so in the UK, energy is something which is provided through a bunch of different private companies, so you've got a fair amount of choice on kind of that thing. So, we tried to make it easier and simpler for people to make better decisions on the household choices that they have. I've been there for about 10 years, so I've had a few different roles. So, as CTO now, I sit on the exec team and try to help inform the business and technology strategy. But I've come through a bunch of teams. So, I've worked on some of the early energy price comparison stuff, some data infrastructure work a while ago, and then some underlying DevOps type automation and Kubernetes work a couple of years ago.

Emily: So, when you get in to work in the morning, what types of things are usually on your plate?

Paul: So, I keep a journal. I use bullet journalling quite extensively. So, I try to track everything that I've got to keep on top of. Generally, what I would try to do each day is catch up with anybody that I specifically need to follow up with. So, at the start of the week, I make a list of every day, and then I also keep a separate column for just general priorities. So, things that are particularly important for the week, themes of work going on, like, technology changes, or things that we're trying to launch, et cetera. And then I will prioritize speaking to people based on those things. So, I'll try and make sure that I'm focusing on the most important thing. I do a weekly meeting with the team. So, we have a few directors that look after different aspects of the business, and so we do a weekly meeting to just run through everything that's going on and sharing the problems.
We use the three P's model: so, sharing progress problems and plans. And we use that to try and steer on what we do. And we also look at some other team health metrics. Yeah, it's interesting actually. I think when I switched from working in one of the teams to being in the CTO role, things change quite substantially. That list of things that I had to care about increase hugely, to the point where it far exceeded how much time I had to spend on anything. So, nowadays, I find that I'm much more likely for some things to drop off. And so it's unfortunate, and you can't please everybody, so you just have to say, “I'm really sorry, but this thing is not high on the list of priorities, so I can't spend any time on it this week, but if it's still a problem in a couple of weeks time, then we'll come back to it.” But yeah, it can vary quite a lot.Emily: Hmm, interesting. I might ask you more questions about that later. For now, let's sort of dive into the cloud-native journey. What made RVU decide that containerization was a good idea and that Kubernetes was a good idea? What were the motivations and who was pushing for it?Paul: That's a really good question. So, I got involved about 10 years ago. So, I worked for a search marketing startup that was in London called Forward Internet Group, and they acquired USwitch in 2010. And prior to working at Forward, I'd worked as a consultant at ThoughtWorks in London, so I spent a lot of time working in banks on continuous delivery and things like that. And so when Uswitch came along, there were a few issues around the software release process. Although there was a ton of automation, it was still quite slow to actually get releases out. We were only doing a release every fortnight. And we also had a few issues with the scalability of data. So, it was a monolithic Windows Microsoft stack. So, there was SQL Server databases, and .NET app servers, and things like that. And our traffic can be quite spiky, so when companies are in the news, or there's policy changes and things like that, we would suddenly get an increase in traffic, and the Microsoft solution would just generally kind of fall apart as soon as we hit some kind of threshold. So, I got involved, partly to try and improve some of the automation and release practices because at the search start-up, we were releasing experiments every couple of hours, even. And so we wanted to try and take a bit of that ethos over to Uswitch, and also to try and solve some of the data scalability and system scalability problems. And when we got started doing that, a lot of it was—so that was in the early heyday of AWS, so this was about 2008, that I was at the search startup. And we were used to using EC2 to try and spin up Hadoop clusters and a few other bits and pieces that we were playing around with. And when we acquired Uswitch, we felt like it was quickest for us to just create a different environment, stick it under the load balancer so end users wouldn't realize that some requests was being served off of the AWS infrastructure instead, and then just gradually go from there. We found that that was just the fastest way to move. So, I think it was interesting, and it was both a deliberate move, but it was also I think the degree to which we followed through on it, I don't think we'd really anticipated quite how quickly we would shift everything. And so when Forward made the acquisition, I joined summer of 2010, and myself and a colleague wrote ...
5 August 2020 · 37 min