LIVIN' ON THE EDGE PODCAST

Developer Control Planes: An Ecosystem Leader’s Point of View

About

While cloud-native development has not been adopted into production environments everywhere, it is an increasingly visible force, bolstered by a robust community, several widely adopted emerging standards, and increasingly cohesive tooling and platforms. Ambassador's Head of Developer Relations, Daniel Bryant, recently sat down with Katie Gamanji to discuss these points as well as real-world, cloud-native implementations driven by business goals and the importance of both education and community in achieving critical mass.

Episode Guests

Katie Gamanji
Senior Kubernetes Field Engineer at Apple and former Ecosystem Advocate at the CNCF

The conversation yielded several related themes that underpin success for cloud-native adoption in the real world:

  • Business goals should be the key driver of real-world, cloud-native development and of any digital transformation.

    In a previous role, Katie worked with the Condé Nast organization to build a centralized, cloud-native solution that unified upwards of 32 different media platforms into a single hosting platform, single CMS, and single visual identity in order to create and deliver content consistently on a global level. "With business critical applications, 'centralizing the decentralized' ensured stability and ownership when we needed to take other nuances into consideration, for example, operating in China and other complex business problems."

  • Adopting established tools for faster success.

    In guiding the creation and delivery of a centralized platform, Katie's team didn't embrace bleeding-edge technology but went with proven cloud-native technologies. "We didn't want to fight underlying technologies. We had a two-year timeline and needed to adopt tools accepted as somewhat standard in industry…. Kubernetes was already established; AWS was already established. This eased the journey and helped us meet deadlines and key goals."

  • Upskilling teams creates cross-functional insight and ability, forged through collaboration.

    Even as the original team grew into multiple teams with their own functions, the core focus for all of them remained building the single platform. In the beginning, as a single team, developers had to take on some DevOps responsibility, upskilling on delivery, monitoring, authentication, and other areas that are not normally a developer's focus. As the team grew, a successful migration to the new platform demanded one team focused on the core platform and another on application delivery, with close collaboration and communication between the two.

    With a team humming along collaboratively, the next step in hitting the business goals was observability. "We needed analytics to prove that the application is healthy and identify quickly when it isn't, that budgets are justified, to find out whether the infrastructure and components involved are cost efficient, and whether what we built solves the problems we needed to solve." Observability insight feeds back into development and delivery workflows and is key to ensuring that the entire team functions and collaborates optimally.

  • Education is a key enabler.

    How does this kind of cloud-native development become the norm? Through education. Katie cites cloud-native fundamentals education as one area the community consistently asks for. She is working on a beginner-friendly, inclusive Kubernetes and Cloud Native Associate (KCNA) certification exam that focuses on the basics first. In addition, one of Katie's focus areas is making Kubernetes and cloud native more approachable and ubiquitous, which starts with making learning accessible. Katie collaborated with Udacity to create foundational coursework on cloud-native principles. "There are all kinds of courses out there, but I am keen to take learners on a journey, literally step-by-step. And being very, very declarative. I want to translate the fundamentals because the fundamentals are always going to be the same. That is, you want something that is packaged. You want something that is scalable, something you can deploy automatically."

  • Focus on the cloud native community.

    Ultimately, as critical as education is, community is the linchpin to understanding and being a part of the cloud-native movement. "For anyone who's trying to understand what cloud native is, why it is important and how to get into it, I continue to highlight that while it is partly about the tooling, it is much more importantly about the community. Once you get into this space, I think it's very important to get to know your folks. Get to know the maintainers for the project that you're using, or maybe try to be one of the contributors. If you have time and you have the resources that's extremely valuable. Just try to reach out. That's what keeps the community vibrant."

  • Embrace the wider open source community.

    Community is not just the cloud-native community, necessarily, but the entire expanse of open source. "There isn't a single industry that has not been touched by open source," Katie explains. "An open source team is the dream team because it includes thousands of people from everywhere with different perspectives, ideas, prospects for the tools, how they want to use it, and they all come together and contribute. This creates a momentum that cannot be replicated anywhere else. This is why the Kubernetes community, and the open source community overall, are so powerful."


Transcript

Daniel Bryant (00:02):

Hello, and welcome to the Ambassador Labs podcast, where we explore all things cloud native platforms, developer control planes, and developer experience. I'm your host, Daniel Bryant, head of DevRel here at Ambassador Labs, and today I have the pleasure of sitting down with Katie Gamanji, a well-known technical leader within the cloud native ecosystem and the author of several popular online cloud skills training courses. Join us for a fantastic discussion covering topics such as building and supporting a Kubernetes platform at Condé Nast, the role of the CNCF and the fantastic cloud native community, and the importance of education for developers. And remember, if you want to dive deep into the motivation for and the benefits of a cloud native developer control plane, or you are new to Kubernetes and want to learn more via our free Kubernetes developer learning center, please visit getambassador.io to learn more. So welcome, Katie. Many thanks for joining us today. Could you briefly introduce yourself and tell us a little bit about your background as well, please?

Katie Gamanji (00:52):

Hello Daniel and thank you for having me again for this podcast. My name is Katie Gamanji and currently I am working with the end user community within the CNCF, or Cloud Native Computing Foundation. I am pretty much leading this community but, at the same time, I'm making sure to bridge the gap between adopters and the projects within the ecosystem, so actually generating that close feedback loop between these two entities. As well, I have many roles in the community. I am one of the advisory board members for Keptn, which currently is a sandbox CNCF project, but they're applying for incubation very soon so hopefully they're going to get more adoption. As well, I'm working with OpenUK to make sure that open standards are fairly used across data, hardware, and software, and another thing that I would like to mention is currently I am working with Udacity to create the cloud native fundamentals course. So I have many roles within the community but the main one is focused on the end user community at the moment and the cloud native tools.

Daniel Bryant (01:50):

Very cool, Katie. You're involved in so many things, but I like the thread across them, end user focus, and we'll definitely dive more into your Udacity experience as well because education is such an important part of cloud native, as you and I were talking a little bit off mic. So I first bumped into your work at KubeCon Barcelona, I think it's back in 2019, when you keynoted. It seems such a long time ago now with all the craziness of the world, but you explored in great detail how Condé Nast had built their platform and the system, so I was keen to explore that in a little bit more detail. I remember you discussed implementing a centralized platform and I was curious, what was it like before and what were the motivations that led to a centralized platform?

Katie Gamanji (02:32):

Right, I'm more than happy to talk about that use case because it was a great usage and implementation of cloud native. So Condé Nast is known for having a lot of ownership around luxury media companies, such as Vogue, GQ, Wired, which is focused on the tech aspects. And all of these brands, they were distributed in every single country and it was different. So for example, we had GQ France and GQ Germany, GQ UK. It was the same for Vogue, but the thing is all of the websites were hosted very differently and there was a lot of discrepancy in terms of the CMS tools, what kind of tools the editors would use to put this content online to the actual users. The other thing was discrepancy in visual and design, so even if Vogue by itself is a very well known brand, so maybe the logo of Vogue was everywhere the same, but when you look into the website, there was a discrepancy in the visual and design. Let me find words.

And that is not necessarily what you want for a large brand; we want that unified customer experience. But at the same time, when we looked underneath, the platforms that were hosting all of these websites, they were completely different as well and, most of the time, they were outsourced to third parties. So there would be another organization, or company, or consultancy company taking care of some of the tools in the way these websites are hosted and delivered. If you look at all of this, we had 32 markets at the time that we intended to migrate our content and all of these 32 markets, they had, again, as I mentioned, very different CMS tools, visual and design, and the actual platform hosting tools. And we wanted to centralize that to create one unique way for them to create that content and deliver that to the user. So I think that was the underlying motivation, looking back three years ago. I think more, four years ago.

Daniel Bryant (04:32):

Yeah, it took a long time, didn't it? Yeah, that was great stuff, Katie. And back then, were you looking to make the platform self-service? Because now we talk a lot about that in the cloud native space, developers should be able to self-serve, deploy things on their own. Was that on the roadmap at Condé Nast back in 2019 and before?

Katie Gamanji (04:50):

I think that came as a natural procedure or a process within the organization, because one thing that I should mention is that Condé Nast is the wider organization, but in London we had this new entity called Condé Nast International, and it was a very small team, so it started with five people, and we scaled up to 60 people across 18 months. So it scaled very, very quickly. But the thing is we started the team in late 2017, early 2018, and we had this greenfield platform, we had a greenfield team. We could choose the best talent, we could choose the best team. Our mission was to create a platform that would help us to host those websites. But because it was a new team, we could really shape the culture and the way we interact with different teams, and the way we collaborate as well. So I think at the time DevOps was a very big keyword and it was something which resonated across all of the industries and many of the communities. And I think it was one of the natural things that we adopted within the Condé Nast team as well.

Daniel Bryant (05:58):

Very nice, Katie. What did the dev tools look like at that time? Was it a case of using your IDE to create code, build a Docker container, write some Kubernetes YAML, and push it up, or were there more abstractions involved there?

Katie Gamanji (06:13):

Right, so at the time, so we're talking 2018, that was the pinnacle of the work of the platform creation, Kubernetes was already well known. We were in version 1.12, maybe 1.10, something like that. We are in version 1.22 now, so a couple of releases back, but at the time Kubernetes was strongly and firmly asserting its ownership of the way you deploy your containers. At the beginning, we actually went with a third party service, we went with AWS, and we went with, I think it's ECS. So they had their own container platform that you can use underneath. But then we wanted to have full ownership of the clusters, the way we deploy them, and it was cheaper, actually. Cost was another thing that we were looking into quite heavily, so we decided to have our own clusters and manage them.

We had the team, we had the resources, and the thing is we didn't need to scale to 100 clusters straight away, we just had a small cluster in Europe and then we scaled it, well, based on the migration process as it came in. So it was a very natural expansion. So we could start small and have the cost in mind as well. So from AWS, we just used the compute services, so EC2 instances, and we used the networking for our load balancers. We didn't use any storage because all of our applications were stateless, so if something restarts, that's absolutely fine. So our use case was quite straightforward, but this allowed us to really choose the best tooling as well. So in terms of the tools, Kubernetes was out there.

In terms of the packaging of applications, you've mentioned Docker. Of course, Docker has been the main way for us to collaborate with developer teams to prepare their applications to be deployed to Kubernetes. So there was a certain amount of upskilling in that regard, but again, it wasn't something that was unknown. Docker had been on the market for many years before that; packaging an application, having these kinds of containers even running locally with Docker Compose, it was absolutely fine. It wasn't groundbreaking. So we just made sure that we have a procedure that makes sense. We need an image and, with that image, we'll be able to deploy within our cluster. So that bridge, we made sure that it's very clear, and we made very clear the requirements we need to deploy an application to the cluster. Yeah, I think the fundamentals were there. I wouldn't say it was completely bleeding-edge or cutting-edge adoption; we were in a good state.
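
To make the image-to-cluster bridge Katie describes a little more concrete, here is a minimal sketch using the official Kubernetes Python client. The image name, namespace, port, and replica count are hypothetical illustrations, not details of the Condé Nast setup; teams commonly express the same thing as YAML manifests applied by a CI pipeline.

```python
# A minimal, illustrative sketch: turning an already-built container image into a
# Kubernetes Deployment with the official `kubernetes` Python client
# (pip install kubernetes). The image name, namespace, port, and replica count
# below are hypothetical examples, not details from the platform discussed here.
from kubernetes import client, config


def deploy_image(name: str, image: str, namespace: str = "default", replicas: int = 2) -> None:
    # Reads ~/.kube/config; inside a cluster you would use config.load_incluster_config().
    config.load_kube_config()
    apps = client.AppsV1Api()

    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    pod_template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=replicas,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=pod_template,
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name, namespace=namespace),
        spec=spec,
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)


if __name__ == "__main__":
    # Hypothetical image tag produced by a CI pipeline.
    deploy_image("hello-web", "registry.example.com/hello-web:1.0.0")
```

The point is simply that a built image plus a small amount of declarative configuration is the whole contract between an application team and the cluster; in practice the same manifest is usually written as YAML and applied automatically by the delivery pipeline.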

Daniel Bryant (08:44):

That's very sensible, Katie, because I think it's tempting, I've been involved in a few greenfield projects, to literally be on the bleeding edge, but you pay a tax for that because you're often the one finding the bugs and committing upstream, and the Kubernetes community is fantastic at encouraging those kinds of contributions, but when you've got a practical business goal to be met, you really just want to not be fighting with the underlying technologies, right?

Katie Gamanji (09:08):

Mm-hmm. Again, for us, we had a very clear business goal in mind and we had a timeline for that as well. It wasn't a hard deadline, but we had a timeline of two years, create a platform and start the migration, because this would pretty much define the success of this team and Condé Nast International as an entity in itself. So we had a very clear business goal and, with a business, when you have a goal, I mean you have a budget but you have to produce results with that budget. So again, we were very driven to adopt tools that had proven to be somewhat of a standard within the industry. So for example, I wouldn't say it's bleeding edge, but we were like a CoreOS house. So for example, we used Tectonic, which was a way to bootstrap your clusters. However, CoreOS was bought by Red Hat and the Tectonic project was no longer maintained, unless we would fork it in house and maintain it, which no one wants to do. So that was maybe a risky adoption. At the time, CoreOS was very well known.

It had proven to be very established, it had roots within the community, the work they did was very good and, quality-wise, it was at the right level, but with time, with the acquisition, it moved away, so in that regard, we actually had to move to a new bootstrap provider. So there were some decisions that we had to reconsider over time, but again, at the core, when it comes to the core platform, the way we run containers, Kubernetes was there. We used Fluentd for all logging, we used AWS, which already was doing very well, especially when it comes to China as well. Well, there are many aspects here because we had to distribute our infrastructure to China, so we had to use a provider that would have some presence in that region, and AWS at the time was ahead of the curve compared to other providers. So that was one of the reasons for us to choose this provider. So there were a lot of nuances that we had to take into consideration when choosing even a provider and even the way we're going to manage our clusters.

Daniel Bryant (11:22):

Very good, Katie. Yes, getting those key constraints and requirements out early is key to success, I think, isn't it?

Katie Gamanji (11:28):

Precisely.

Daniel Bryant (11:29):

So one more question on the platform. I know we want to cover education as well. Now SRE is a big thing, site reliability engineering. I'm guessing you might have been doing that back in the Condé Nast role. Did you call it site reliability engineering? And if you were an SRE or that kind of role, how did you interact with the developers on the team?

Katie Gamanji (11:49):

Right, this is a very good question because the way we created the SRE team was a very organic movement. Maybe I could talk a bit more about the shape of our team at the beginning. So to begin with, we were one big team. We were around 12 engineers and all of us were focused on creating the platform. We were focused on upskilling developers, actually interacting with the development teams, doing monitoring, logging, anything in regards to authentication and security; everything was done by our team. But then moving forward, we realized that if we wanted to have a successful migration process, we really needed to have people focused only on the delivery process. So how exactly do you deliver your application to the cluster?

We realized that we needed a team which was going to be fully focused on the creation of the platform and maintaining the clusters, because we replicated our clusters in five different regions at the time. So we already had around nine clusters and we were self-hosting; we needed to do upgrades. We needed to make sure that changes to all of our components, like the migration from kube-dns to CoreDNS, would happen, so we needed a core team as well. So the next step from this one big team doing everything was that we moved into two teams: one of them was the core platform team and the other one was focused on application delivery. Many people would call that a DevOps team, but they had a lot of interaction with the developers.

They made sure that the developers understood how to create Docker containers, how to interact with CircleCI when deploying their application or debugging things. They required some Terraform to bootstrap, so for example, IAM roles in AWS; all of that interaction, all of that upskilling was within that team. But then moving forward, we realized that we had a lot of infrastructure, we had a lot of components, but we didn't have a lot of insight or analytics into how this platform behaves. So is it cost efficient? Is it actually solving the problems we need to solve within the budget that we have? So I think the next organic step for us was to focus on observability. Are the applications healthy? Are we able to identify if something went down within a reasonable amount of time?

So as a natural step, once we had a healthy platform and a healthy collaboration between the dev and ops teams, the next thing was to make sure that we had analytics to prove, and have those results or those indicators, that the application is healthy and that we were able to troubleshoot it or identify issues in time as well. And that was the first step, and this is how we created the third team, which was the SRE team. Yeah, so the SRE team was actually the latest team that was created within the platform team, or the platform engineering team, and they really focused on bringing in SLOs and SLIs, and actually having those metrics and indicators of: this is the standard we're going to run the applications at in production, this is how we're going to monitor it, and this is how we're actually going to do this throughout. So they really focused mainly on the observability aspect. So yeah, long story short, I think it was an organic step for the team, but again, being a greenfield team on a greenfield project, it was a natural step for us.
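
As a rough illustration of the SLO and SLI work Katie mentions, the sketch below computes an availability SLI and the error budget it leaves against a target. The request counts, the 99.9% target, and the function names are hypothetical; in a real setup these numbers would be pulled from a metrics backend such as Prometheus rather than hard-coded.

```python
# Illustrative sketch of an availability SLI checked against an SLO target.
# All numbers and function names are hypothetical; in practice the request
# counts would come from a metrics backend such as Prometheus.


def availability_sli(total_requests: int, failed_requests: int) -> float:
    """SLI: fraction of requests served successfully over the measurement window."""
    if total_requests == 0:
        return 1.0
    return (total_requests - failed_requests) / total_requests


def error_budget_remaining(sli: float, slo_target: float) -> float:
    """Fraction of the error budget still unspent (negative means the SLO is breached)."""
    budget = 1.0 - slo_target   # e.g. an SLO of 99.9% allows 0.1% of requests to fail
    spent = 1.0 - sli           # fraction of requests that actually failed
    return (budget - spent) / budget


if __name__ == "__main__":
    sli = availability_sli(total_requests=1_200_000, failed_requests=900)
    remaining = error_budget_remaining(sli, slo_target=0.999)
    print(f"SLI over the window: {sli:.5f}")           # 0.99925
    print(f"Error budget remaining: {remaining:.1%}")  # 25.0%
```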

Daniel Bryant (15:16):

Yeah, that's awesome. That's a great evolution. I've seen some of those things commented on in the Team Topologies book, which I think both you and I have read. And Matthew and Manuel have done a fantastic job of encapsulating a lot of the learnings that you've gone through and other teams like yours went through, and this natural missing role emerged. And I'll put this link in the show notes because the Team Topologies book has just been a guiding light for me and really important work, I think.

Katie Gamanji (15:41):

Yeah, and actually I delivered a talk around this, talking about this team evolution within Condé Nast; it was at KubeCon San Diego. I'm talking about how to create a micro open source community within an organization, so pretty much inner-sourcing, but that was mainly focused on Helm. So how exactly the developers within our organization created PRs for our main Helm charts, which was great. We created a mini open source community. And I'm talking about how this evolution of the team and upskilling at the right time, making sure that you take your developers on the journey, really, really helped. And I'm making reference to Team Topologies as well. So yeah, I can provide a link, so make sure to check it out.

Daniel Bryant (16:23):

Please do, that'd be awesome, Katie. Thank you so much. Yeah, that sounds like an amazing reference for folks that are going through this now, because there are so many folks that have not adopted cloud native, and if they can learn from folks like yourself, and folks like Matthew and Manuel... we're all going to make mistakes, but I'd rather not make mistakes that have already been made, and building on others' successes is really important too. So thanks Katie, that's a great link, and that's a nice segue actually into the next topic I was keen to pick your brains on, because I see you on Twitter talking a lot about education, bringing folks along on the journey, as you just mentioned, a great way of putting it. And I see you've done work around Udacity courses and other courses. Could you share with the listeners a little bit about your motivations and what you've done in that space, please?

Katie Gamanji (17:04):

I think one of my goals within my professional side, and even personal side, is to make cloud native approachable or easy for everyone. Because when I jumped into cloud native, Kubernetes was one or two years old, I had to do Kubernetes the hard way. When I bootstrapped a cluster of two nodes, I was beyond happy, I was just ecstatic. I was like wow, this thing works. And I set up etcd as well and I was like oh my God, I have this key-value storage which is replicating across the ... I was very happy.

Daniel Bryant (17:35):

It's awesome, right?

Katie Gamanji (17:38):

But the thing is, to do that, it took me a week and a half. I'm not joking, it took me a week and a half to bootstrap a cluster of two nodes, and it was still cracking because some of the networking components were not fully bootstrapped. So it was the hard way. So I do remember when I jumped into cloud native, I was like, it's a great space, already there is a growing community, and I wanted to make it easier for people like me, because at the time I had just stepped out of university, I was a grad, moved into my mid-level position. I was like okay, let's do this, but then I realized it's still difficult. So, being fresh out of uni, I wanted to make it easier for students to do cloud native as well. So I think my underlying motivation, to use one of the cloud native missions, is to make cloud native ubiquitous, so pretty much approachable by everyone. So when I collaborated with Udacity, this was my core motivation: make it easier for someone with little programming experience.

All you need is maybe to write a Hello World in Python, and even that is not necessary because the code is provided. But for example, if you are a programmer and you want to move to cloud native, or you are a student who usually does some programming within your computer science degree, or there are so many free workshops that you can do nowadays which allow you to write that Hello World function. So I'm taking these personas and helping them understand how exactly they can package an application, how they can deploy to Kubernetes, how they can automate the delivery process using CI/CD, and how they can even use a PaaS if, for example, they don't want to manage the infrastructure themselves but use a service instead. So I'm taking them on a journey, literally step by step, and being very, very declarative. I'm very, very keen to create a story throughout the way and just making sure that they follow through. And by the end of the course, pretty much everyone should be able to use Docker, and they'll even look into how to use Cloud Foundry as a PaaS.

Daniel Bryant (19:43):

I saw that on the curriculum, I thought Cloud Foundry, interesting.

Katie Gamanji (19:47):

Well, I know it's a project which is not very used out there, but again, it's to showcase that there is a solution for that. If you want to manage your containerized applications, you can do that with a PaaS solution if you choose so. So it was more for demonstration. What I'm trying to translate is the fundamentals, because the fundamentals are always going to be the same. You want something that is packaged, you want something that is scalable, something you can deploy automatically. So all of these principles, they run throughout the entire course. They're going to learn how to use Argo, they're going to learn how to use many technologies, they'll learn how to use GitHub Actions for the CI part to actually package an application automatically, and they will deploy their clusters using K3s. So there is a lot of good tooling that many of our professionals in the cloud native community are using. So by following this course, they will be in a good position to hopefully look for a job and contribute to cloud native.

Daniel Bryant (20:42):

Perfect. That's the perfect bootstrapping course for bringing folks along. And I guess, is the core audience, Katie, folks that are completely new to cloud, or is it also applicable to folks who are, say, enterprise programmers, Java or .NET springs to mind, 20 years, super experienced, and they're new to cloud? Is it for both the new new folks and the new old folks? And I consider myself an old folk, I would say.

Katie Gamanji (21:08):

It's definitely for anyone. So anyone who's trying to understand what cloud native is and what to make of cloud native, pretty much. Why exactly it's an important domain and how to get into it. One of the messages that I am highlighting at the beginning and the end of the course is that cloud native is about the tooling, but more importantly it's about the community as well, because once you get within this space, I think it's very important to get to know your folks, or get to know the maintainers of the projects, or maybe try to be one of the contributors. If you have time and if you have the resources, that's extremely valuable. So just try to reach out. And this is a message that I'm highlighting quite intensely at the beginning and the end, because technology is great, but the community around cloud native is what really makes it great. So I hope this is a key message that the students will take away.

Daniel Bryant (22:04):

Very well said, Katie, because it's such a challenge and probably a separate podcast to talk about OSS, and I've worked on teams that didn't want to contribute, and we were using OSS, and I was like it only seems the right thing to do, and it's a can of worms. But I think if you look at successful projects, you and I were talking off mic around Linux as an example, obviously there's a different model of ownership around that, but so many folks have contributed, and arguably a lot of the work in Linux has set the foundations for the cloud and beyond, right?

Katie Gamanji (22:35):

100%. So this is actually a very good thought, or set of thoughts, that I had a couple of months ago, since Linux turned 30. So I was actually thinking, because Linux itself is open source, but it's been on the market for 30 years, and open source has been there and slowly gaining momentum. And I think now it's at one of its climax points with cloud native, where open source is very well valued; open source is in so many organizations. There is not one industry that hasn't been touched by open source, or Kubernetes, or they're using Prometheus, or they're using other tools. It's amazing to see how overarching this ecosystem has become so far. And with Linux, what it actually did, it set the fundamentals of how to contribute upstream, so that transparent environment and governed environment as well.

It wasn't like I can do anything and just be rude to everyone, it was very well structured, but at the same time it was open. So if you had time, you had resources, you could contribute and everyone would build up on top of that. And these fundamentals, they were set for at least 30 years and they were very easily transferred to cloud native. And on top of that, we put the cloud native mission and principles, so what exactly cloud native is. It's about automation, resilience, being scalable, dynamic, observable. There are so many things that you can define cloud native as, but they were built on top of this freedom to commit upstream, a governed and transparent environment, and collaboration between different organizations and different industries.

So I think it had a very good foundation to begin with, and maybe that's why it's such a success today. And if you're thinking about the timeframe, cloud native, or Kubernetes, was donated seven years ago. It's not that much, but if you're looking into, again, the number of industries and sectors it's reached, it's absolutely amazing. But again, I think this has been possible because this foundation of open source and how you can contribute had already been laid down, and you could just build on top of that with the missions and the principles that I've mentioned before. So yeah, I think it's a great achievement for the Linux community, 30 years, it's a great anniversary. So I'm looking forward to the cloud native community reaching the same milestone at some point.

Daniel Bryant (25:02):

Absolutely brilliant. And what you were saying there, I remember chatting to Matt Klein a few years ago about Envoy, and he said he couldn't have dreamed of assembling the team of people that ultimately worked on Envoy into a company because there's different conflicts, different things people want, but to your point, open source, you can work for different companies but we're all contributing to the same mission. And I know Matt's done many stories around the birth of Envoy and I can see many other successful CNCF projects in that space too, right?

Katie Gamanji (25:32):

You mentioned the dream team, and I think an open source team is the dream team because you have people from everywhere. But the thing is it's not just 10 people, it's thousands of people. This is the thing. And what is important about this, all of these thousands of people with different perspectives, ideas, maybe prospects for the tools or how they want to use them, they come together and they contribute. And it's amazing because you have this extremely diversified input, and this creates a momentum, when it comes to the tooling, that cannot be replicated anywhere else.

So this is the thing with Kubernetes, and maybe that's why it has been so powerful so far, because the Kubernetes community and everyone who contributes at the moment, and there are more than 4,000 authors every single year contributing to it, you cannot replicate that momentum and that contribution velocity in an organization. With all the resources you have, you won't be able to reach that within the same timeframe. We're talking, again, let's even put seven years. If you try to create a new container orchestration as powerful as Kubernetes in seven years with the same success, I think that's going to be nearly impossible. Even with all of the budget we have, I'm thinking if we want to create that internally within the organization. So yeah, I think it's the dream team, it's a journey that never really ceases to amaze me. I'm going to be honest, it's just truly, truly amazing to observe how it grows and the rate at which it grows as well.

Daniel Bryant (26:55):

Agreed, Katie. Every time I go to a KubeCon, I'm humbled. The number of folks, new and old, there, and the contributions, the innovation, it's epic, right?

Katie Gamanji (27:04):

Yeah, KubeCon is definitely one of my favorite places to be. I mean, well, again, I'm a bit biased, because when I joined cloud native, I had a chance to attend one KubeCon, it was in Seattle, and actually I applied for the diversity and inclusion scholarship. So actually, I somehow got the budget for traveling there, because my organization at the time did not have the budget for it. So the stars were aligned, let's put it this way. But it was one of my first conferences as well, when I joined, and I saw 5,000 people, and this ginormous stage with the Kubernetes logo lit up, and I was like wow, this is definitely bigger than I imagined. And it was in 2018, December 2018, three years ago. Feels like thousands of years ago, honestly.

But that was, again, seeing this community and interacting with people throughout. I knew no one. It was my first time in Seattle. I knew no one, literally, and I was just at the beginning of my journey with cloud native, but seeing how welcoming it is and the potential of people just to interact and create ideas out of nothing. You have a coffee or you chat at the concessions and you have maybe a new role, a new position that you want to talk about, or you have a new project to contribute to. It just happens within seconds and I'm like wow, this is amazing. And I wanted to be part of that since then, so I think the community aspect is quite important and a very big part of cloud native.

Daniel Bryant (28:27):

Perfect. Those have been some amazing comments. We're getting close to time here. Are there any final comments, or any final things you're working on, that you wanted to share with the listeners at all?

Katie Gamanji (28:35):

Yeah, absolutely. So currently, I am working on the KCNA exam, the Kubernetes and Cloud Native Associate certification, which is undergoing the beta testing stage and is going to be released quite soon, in a couple of weeks. It's going to be GA, so you'll be able to purchase it and actually take the exam. So I have been leading the creation of this exam, and what it is focused on is, again, being more beginner-friendly and inclusive, because if you look into the CNCF certifications, you have CKAD, Certified Kubernetes Application Developer, or CKA, Certified Kubernetes Administrator, and CKS, which is the security specialist. I'm not going to decipher all of them, but they are very advanced certifications and, if anyone has taken them, you actually have a terminal in front of you and you actually need to interact with the cluster to produce the right results or to write the right results.

So if you don't have that hands-on experience with this, it's going to be nearly impossible to pass these exams. So the feedback from the community was to create something which is more beginner-friendly, and we created this associate exam that is multiple choice, but you're going to have a set of 60 questions to solve in one hour and a half, so 90 minutes. It's focused mainly on Kubernetes, so the Kubernetes fundamentals. Do you understand what a container is? Do you understand what a pod is? Do you understand the relation between a replica set and a deployment? What are volumes? So all of these things are going to be within the Kubernetes space, but we have Kubernetes and cloud native.

So we will explore some other principles within the ecosystem and other tools, so we're going to touch upon observability with Prometheus, we're going to talk about GitOps, we're going to talk about Helm, we're going to talk about some of the storage providers, such as Rook. So again, it's trying to cover an extensive understanding of the landscape as well. So again, it's multiple choice, but you still need to study to take it. So it's not going to be something completely easy that you can do in one breath; you really need to understand those core principles within Kubernetes and cloud native. So yeah, the feedback so far is great. We had 500 slots for our beta testers, which were filled within half an hour after it was announced at the KubeCon keynote. I was amazed. I was actually watching the numbers go down, in terms of the spots we had available. So the feedback so far has been great, and I'm looking forward to this being GA and seeing how it will impact students and anyone new to cloud native.
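
For anyone studying the kind of question Katie mentions, such as the relation between a ReplicaSet and a Deployment, the sketch below walks that ownership chain with the official Kubernetes Python client: a Deployment manages ReplicaSets, and each ReplicaSet manages Pods. The Deployment name and namespace are hypothetical, and the snippet assumes kubeconfig access to a running cluster.

```python
# Sketch: a Deployment manages ReplicaSets, and each ReplicaSet manages Pods.
# This walks that ownership chain with the official `kubernetes` Python client.
# The Deployment name and namespace are hypothetical, and kubeconfig access to
# a running cluster is assumed.
from kubernetes import client, config


def show_ownership(deployment_name: str, namespace: str = "default") -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    dep = apps.read_namespaced_deployment(deployment_name, namespace)
    print(f"Deployment {dep.metadata.name} wants {dep.spec.replicas} replica(s)")

    # ReplicaSets owned by this Deployment, matched via ownerReferences.
    for rs in apps.list_namespaced_replica_set(namespace).items:
        rs_owners = rs.metadata.owner_references or []
        if not any(o.kind == "Deployment" and o.name == deployment_name for o in rs_owners):
            continue
        print(f"  ReplicaSet {rs.metadata.name}: {rs.status.ready_replicas or 0} ready")

        # Pods owned by this ReplicaSet.
        for pod in core.list_namespaced_pod(namespace).items:
            pod_owners = pod.metadata.owner_references or []
            if any(o.kind == "ReplicaSet" and o.name == rs.metadata.name for o in pod_owners):
                print(f"    Pod {pod.metadata.name}: {pod.status.phase}")


if __name__ == "__main__":
    show_ownership("hello-web")  # hypothetical Deployment name
```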

Daniel Bryant (31:06):

Awesome stuff, Katie. I always enjoy chatting to you. We could talk for hours. I learn so much from you.

Katie Gamanji (31:08):

I know.

Daniel Bryant (31:09):

I really appreciate all your sharing of wisdom and knowledge there, and we'll see you again soon. Thanks so much, Katie.

Katie Gamanji (31:16):

Thank you very much for having me, Daniel.
