Oracle Hybrid Cloud

In this season’s final episode, Lois Houston and Nikita Abraham, along with special guest Rohit Rahi, discuss Oracle Hybrid Cloud and how it gives customers the flexibility to choose their infrastructure based on their workload, regulatory, and latency needs. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community Twitter: https://twitter.com/Oracle_Edu LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Special thanks to Arijit Ghosh, Kiran BR, David Wright, the OU Podcast Team, and the OU Studio Team for helping us create this episode.

Episodes (132)

Database Essentials

Join hosts Lois Houston and Nikita Abraham, along with Hope Fisher, Oracle’s Product Manager for Database Technologies, as they break down the basics of databases, explore different database management systems, and delve into database development.   Whether you're a newcomer or just need a refresher, this quick, informative episode is sure to offer you some valuable insights.   Oracle MyLearn: https://mylearn.oracle.com/ou/course/database-essentials/133032/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! For the last seven weeks, we’ve been exploring the world of OCI Container Engine for Kubernetes with our senior instructor Mahendra Mehra. We covered key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. So, be sure you check out those episodes if you’re interested in Kubernetes. 01:00 Nikita: Today, we’re doing something a little different. We’ve had a lot of episodes on different aspects of Oracle Database, but what if you’re just getting started in this world? We wanted you to have something that you could listen to as well. And so we have Hope Fisher with us today. Hope is a Product Manager for Database Technologies at Oracle, and we’re going to ask her to take us through the basics of database, the different database management systems, and database development.  Lois: Hi Hope! Thanks for joining us for this episode. Before we dive straight into terminologies and concepts, I want to take a step back and really get down to the basics. We sometimes use the terms data and information interchangeably, but they’re not the same, right? 01:43 Hope: Data is raw material or a set of facts and observations. Information is the meaning derived from the facts. The difference between data and information can be explained by using an example, such as test scores. In one class, if every student receives a numbered score and the scores can be calculated to determine a class average, the class average can be calculated to determine the school average. So in this scenario, each student's test score is one piece of data. And information is the class’s average score or the school's average score. There is no value in data until you actually do something with it. 02:24 Nikita: Right, so then how do we make all this data useful? Do we create a database system?  Hope: A database system provides a simple function—treat data as a collection of information, organize it, and make the data usable by providing easy access to it and giving you a place where that data can be stored. Every organization needs to collect and maintain data to meet its requirements. Most organizations today use a database to automate their information systems. 
An information system can be defined as a formal system for storing and processing data. A database is an organized collection of data put together as a unit. The rationale of a database is to collect, store, and retrieve related data for use by database applications. A database application is a software program that interacts with the database to access and manipulate data. A database is usually managed by a Database Administrator, also known as a DBA. 03:25 Nikita: Hope, give us some examples of database systems. Hope: Popular examples of database systems include Oracle Database, MySQL, which is also owned by Oracle, Microsoft SQL server, Postgres, and others. There are relational database management systems. The acronym is DBMS. Some of the strengths of a DBMS include flexibility and scalability. Given the huge amounts of information that modern businesses need to handle, these are important factors to consider when surveying different types of databases. 03:59 Lois: This may seem a little bit silly, but why not just use spreadsheets, Hope? Why use databases? Hope: The easy answer is that spreadsheets are designed for specific problems, relatively small amounts of data and individual users. Databases are designed for lots of data, shared information use, and complex data analysis. Spreadsheets are typically used for specific problems or small amounts of data. Individual users generally use spreadsheets. In a database, cells contain records that come from external tables. Databases are designed for lots of data. They are intended to be shared and used for more complex data analysis. They need to be scalable, secure, and available to many users. This differentiation means that spreadsheets are static documents, while databases can be relational. 04:51 Nikita: Hope, what are some common database applications?  Hope: Database applications are used in far and wide use cases that most commonly can be grouped into three areas. Applications that run companies called enterprise applications. Enterprise applications are designed to integrate computer systems that run all phases of an enterprise's operations to facilitate cooperation and coordination of work across the enterprise. The intent is to integrate core business processes, like sales, accounting, finance, human resources, inventory, and manufacturing. Applications that do something very specific, like healthcare applications-- specialized software is software that's written for a specific task rather than for a broad application area.  And then there are also applications that are used to examine data and turn it into information, like a data warehouse, analytics, and data lake. 05:54 Lois: We’ve spoken about data lakes before. But since this is an episode about the basics of database, can you briefly tell us what a data lake is? Hope: A data lake is a place to store your structured and unstructured data as well as a method for organizing large volumes of highly diverse data from diverse sources. Data lakes are becoming increasingly important as people, especially in businesses and technology, want to perform broad data exploration and discovery. Bringing data together into a single place or most of it into a single place makes that simpler. 06:29 Nikita: Thanks for that, Hope. So, what kind of organizations use databases? And, who within these organizations uses databases the most? Hope: Almost every enterprise uses databases. Enterprises use databases for a variety of reasons and in a variety of ways. 
Data and databases are part of almost any process of the enterprise. Data is being collected to help solve business needs and drive value. Many people in an organization work with databases. These include the application developers who create applications that support and drive the business. The database administrator or DBA maintains and updates the database. And the end user uses the data as needed. 07:19 Do you want to stay ahead of the curve in the ever-evolving AI  landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free. So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai. 07:57 Nikita: Welcome back. Now that we’ve discussed foundational database concepts, I want to move on to database management systems. Take us through what a database management system is, Hope. Hope: A Database Management System, DBMS, has the following elements. The kernel code manages memory and storage for the DBMS. The repository of metadata is called a data dictionary. The query language enables applications to access the data. Oracle database functions include data definitions, storage, structure, and security. Additional functionality also provides for user access control, backup and recovery, integrity, and communications. There are many different database types and management systems. The most common is the relational database management system. 08:51 Nikita: And how do relational databases store data?  Hope: Essentially and very simplistically, there are key elements of the relational database. Database table containing rows and columns; the data in the table, which is stored a row at a time; and the columns which contain attributes or related information. And then the different tables in a database relate to one another and share a column. 09:17 Lois: Customers usually have a mix of applications and data structures, and ideally, they should be able to implement a data management strategy that effectively uses all of their data in applications, right? How does Oracle approach this?  Hope: Oracle's approach to this enterprise data management strategy and architecture is converged database to all different data types and workloads. The converged database is a database that has native support for all modern data types and, of course, traditional relational data.  By providing support for all of these data types, a converged database can run all sorts of workloads, from transaction processing to analytics and machine learning to blockchain to support the applications and systems. Oracle provides a single database engine that supports all data models, process types, and development environments. It also addresses many kinds of workloads against the same data sets. And there's no need to use dozens of specialized databases. Deploying several single-purpose databases would increase costs, complexity, and risk. 10:25 Nikita: In the final part of our conversation today, I want to bring up database development. Hope, how are databases developed?  Hope: Data modeling is the first part of the database development process. Conceptual data modeling is the examination of a business and business data to determine the structure of business information and the rules that govern it. 
This structure forms the basis for database design. A conceptual model is relatively stable over long periods of time. Physical data modeling, or database building, is concerned with implementation in each technical software and hardware environment. The physical implementation is highly dependent on the current state of technology and is subject to change as available technologies rapidly change. Conceptual model captures the functional and informational needs of a business and is used to identify important entities and their relationships.  A logical model includes the entities and relationships. This is also called an entity relationship model and provides the details of the relationships.  11:34 Lois: I think that’s a good place to wrap up our episode. To know more about the Oracle Database architecture, offerings, and so on, visit mylearn.oracle.com. Thanks for joining us today, Hope.  Nikita: Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:55 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

23 July 2024 · 12min

Container Engine for Kubernetes: Security Practices

In the season's final episode, hosts Lois Houston and Nikita Abraham interview senior OCI instructor Mahendra Mehra about the security practices that are vital for OKE clusters on OCI.   Mahendra shares his expert insights on the importance of Kubernetes security, especially in today's digital landscape where the integrity of data and applications is paramount.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we spoke about self-managed nodes and how you can manage Kubernetes deployments. Nikita: Today is the final episode of this series on OCI Container Engine for Kubernetes. We’re going to look at the security side of things and discuss how you can implement vital security practices for your OKE clusters on OCI, and safeguard your infrastructure and data.  00:59 Lois: That’s right, Niki! We can’t overstate the importance of Kubernetes security, especially in today's digital landscape, where the integrity of your data and applications is paramount. With us today is senior OCI instructor, Mahendra Mehra, who will take us through Kubernetes security and compliance practices. Hi Mahendra! It’s great to have you here. I want to jump right in and ask you, how can users add a service account authentication token to a kubeconfig file? Mahendra: When you set up the kubeconfig file for a cluster, by default, it contains an Oracle Cloud Infrastructure CLI command to generate a short-lived, cluster-scoped, user-specific authentication token. The authentication token generated by the CLI command is appropriate to authenticate individual users accessing the cluster using kubectl and the Kubernetes Dashboard. However, the generated authentication token is not appropriate to authenticate processes and tools accessing the cluster, such as continuous integration and continuous delivery tools. To ensure access to the cluster, such tools require long-lived non-user-specific authentication tokens. One solution is to use a Kubernetes service account. Having created a service account, you bind it to a cluster role binding that has cluster administration permissions. You can create an authentication token for this service account, which is stored as a Kubernetes secret. You can then add the service account as a user definition in the kubeconfig file itself. Other tools can then use this service account authentication token when accessing the cluster. 02:47 Nikita: So, as I understand it, adding a service account authentication token to a kubeconfig file enhances security and enables automated tools to interact seamlessly with your Kubernetes cluster. 
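To make that flow concrete, here is a rough sketch of the setup Mahendra outlines: a service account, a binding to the built-in cluster-admin ClusterRole, and an explicitly requested token secret. The resource names (oke-admin, oke-admin-token) are illustrative, not prescribed by OKE.

```yaml
# Service account for non-user tools such as CI/CD pipelines.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oke-admin
  namespace: kube-system
---
# Bind the service account to the built-in cluster-admin ClusterRole,
# giving it cluster administration permissions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oke-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: oke-admin
  namespace: kube-system
---
# In Kubernetes 1.24 and later, token secrets are no longer created
# automatically, so request a long-lived token explicitly.
apiVersion: v1
kind: Secret
metadata:
  name: oke-admin-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: oke-admin
type: kubernetes.io/service-account-token
```

Once applied, the token can be read with kubectl get secret oke-admin-token -n kube-system -o jsonpath='{.data.token}' | base64 --decode and added to the kubeconfig file as a user entry with kubectl config set-credentials, which is the user definition Mahendra refers to.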
So, let’s talk about the permissions users need to access clusters they have created using Container Engine for Kubernetes. Mahendra: For most operations on Container Engine for Kubernetes clusters, IAM leverages the concept of groups. A user's permissions are determined by the IAM groups they belong to, including dynamic groups. The access rights for these groups are defined by policies. IAM provides granular control over various cluster operations, such as the ability to create or delete clusters; add, remove, or modify node pools; and dictate which Kubernetes object create, delete, and view operations a user can perform. All these controls are specified at the group and policy levels. In addition to IAM, the Kubernetes role-based access control authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC Roles and ClusterRoles.  04:03 Nikita: What are Kubernetes RBAC Roles and ClusterRoles, Mahendra? Mahendra: A Role defines permissions for resources within a specific namespace, while a ClusterRole is a global object that provides access to global objects as well as non-resource URLs, such as the API version and health endpoints on the API server. Kubernetes RBAC also includes RoleBindings and ClusterRoleBindings. A RoleBinding grants permissions to subjects, which can be users, services, or groups interacting with the Kubernetes API. It specifies the allowed operations for a given subject in the cluster. A RoleBinding is always created in a specific namespace. When associated with a Role, it grants the user the permissions specified within that Role for objects within that namespace. When associated with a ClusterRole, it grants only the namespaced permissions defined in that ClusterRole, and only within the RoleBinding's own namespace. A ClusterRoleBinding, on the other hand, is a global object. It associates ClusterRoles with users, groups, and service accounts, but it cannot be associated with a namespaced Role. A ClusterRoleBinding is used to provide access to global objects, non-namespaced objects, or namespaced objects in all namespaces. 05:36 Lois: Mahendra, what’s IAM’s role in this? How do IAM and Kubernetes RBAC work together? Mahendra: IAM provides broader permissions, while Kubernetes RBAC offers fine-grained control. Users authorized either by IAM or Kubernetes RBAC can perform Kubernetes operations. When a user attempts to perform any operation on a cluster, except for create Role and create ClusterRole operations, IAM first determines whether a group or dynamic group to which the user belongs has the appropriate and sufficient permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes RBAC Role or ClusterRole, the Kubernetes RBAC authorizer then determines whether the user or group has been granted the appropriate Kubernetes Role or ClusterRole. 06:41 Lois: OK. What kind of permissions do users need to define custom Kubernetes RBAC Roles and ClusterRoles?  Mahendra: It's common to define custom Kubernetes RBAC Roles and ClusterRoles for precise control. To create these, a user must have existing Roles or ClusterRoles with equal or higher privileges. By default, users don't have any RBAC roles assigned, but there are default ClusterRoles, such as cluster-admin, which grants superuser privileges. 07:12 Nikita: I want to ask you about securing and handling sensitive information within Kubernetes clusters, and ensuring a robust security posture. 
What can you tell us about this? Mahendra: When creating Kubernetes clusters using OCI Container Engine for Kubernetes, there are two fundamental approaches to store application secrets. We can opt for storing and managing secrets in an external secrets store accessed seamlessly through the Kubernetes Secrets Store CSI driver. Alternatively, we have the option of storing Kubernetes secret objects directly in etcd.  07:53 Lois: OK, let’s tackle them one by one. What can you tell us about the first method, storing secrets in an external secret store? Mahendra: This integration allows Kubernetes clusters to mount multiple secrets, keys, and certificates into pods as volumes. The Kubernetes Secrets Store CSI driver facilitates seamless integration between our Kubernetes clusters and external secret stores. With the Secrets Store CSI driver, our Kubernetes clusters can mount and manage multiple secrets, keys, and certificates from external sources. These are accessible as volumes, making it easy to incorporate them into our application containers. OCI Vault is a notable external secrets store. And Oracle provides the Oracle Secrets Store CSI driver provider to enable Kubernetes clusters to seamlessly access secrets stored in Vault. 08:54 Nikita: And what about the second method? How can we store secrets as Kubernetes secret objects in etcd? Mahendra: In this approach, we store and manage our application secrets using Kubernetes secret objects. These objects are directly managed within etcd, the distributed key value store used for Kubernetes cluster coordination and state management. In OKE, etcd reads and writes data to and from block storage volumes in OCI block volume service. By default, OCI ensures security of our secrets and etcd data by encrypting it at rest. Oracle handles this encryption automatically, providing a secure environment for our secrets. Oracle takes responsibility for managing the master encryption key for data at rest, including etcd and Kubernetes secrets. This ensures the integrity and security of our stored secrets. If needed, there are options for users to manage the master encryption key themselves. 10:06 Lois: OK. We understand that managing secrets is a critical aspect of maintaining a secure Kubernetes environment, and one that users should not take lightly. Can we talk about OKE Container Image Security? What essential characteristics should container images possess to fortify the security posture of a user’s applications? Mahendra: In the dynamic landscape of containerized applications, ensuring the security of containerized images is paramount.  It is not uncommon for the operating system packages included in images to have vulnerabilities. Managing these vulnerabilities enables you to strengthen the security posture of your system and respond quickly when new vulnerabilities are discovered. You can set up Oracle Cloud Infrastructure Registry, also known as Container Registry, to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures Database. 11:10 Lois: And how is this done? Is it automatic? Mahendra: To perform image scanning, Container Registry makes use of the Oracle Cloud Infrastructure Vulnerability Scanning Service and Vulnerability Scanning REST API. When new vulnerabilities are added to the CVE database, the container registry initiates automatic rescanning of images in repositories that have scanning enabled. 
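Returning for a moment to the second secrets approach discussed above, here is a minimal, hypothetical sketch of a Kubernetes secret object stored in etcd and mounted into a pod as a volume. All names and values are placeholders, not anything required by OKE.

```yaml
# A secret stored in etcd (which OCI encrypts at rest by default);
# values are base64-encoded placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: app-db-credentials
type: Opaque
data:
  username: YWRtaW4=   # "admin"
  password: czNjcjN0   # "s3cr3t"
---
# A pod that mounts the secret as a read-only volume, so the
# application reads the credentials from files under /etc/db-creds.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: db-creds
      mountPath: /etc/db-creds
      readOnly: true
  volumes:
  - name: db-creds
    secret:
      secretName: app-db-credentials
```

With the first approach, the external secrets store, the equivalent wiring is done through the Secrets Store CSI driver's SecretProviderClass resource rather than a native Secret object.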
11:41 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai. 12:20 Nikita: Welcome back! Mahendra, what are the benefits of image scanning? Mahendra: You can gain valuable insights into each image scan conducted over the past 13 months. This includes an overview of the number of vulnerabilities detected and an overall risk assessment for each scan. Additionally, you can delve into comprehensive details of each scan featuring descriptions of individual vulnerabilities, their associated risk levels, and direct links to the CVE database for more comprehensive information. This historical and detailed data empowers you to monitor, compare, and enhance image security over time. You can also disable image scanning on a particular repository by removing the image scanner.  13:11 Nikita: Another characteristic that container images should have is unaltered integrity, right?  Mahendra: For compliance and security reasons, system administrators often want to deploy software into a production system. Only when they are satisfied that the software has not been modified since it was published compromising its integrity. Ensuring the unaltered integrity of software is paramount for compliance and security in production environment. 13:41 Lois: Mahendra, what are the mechanisms that guarantee this integrity within the context of Oracle Cloud Infrastructure? Mahendra: Image signatures play a pivotal role in not only verifying the source of an image but also ensuring its integrity. Oracle's Container Registry facilitates this process by allowing users or systems to push images and sign them using a master encryption key sourced from the OCI Vault.  It's worth noting that an image can have multiple signatures, each associated with a distinct master encryption key. These signatures are uniquely tied to an image OCID, providing granularity to the verification process. Furthermore, the process of image signing mandates the use of an RSA asymmetric key from the OCI Vault, ensuring a robust and secure validation of the image's unaltered integrity. 14:45 Nikita: In the context of container images, how can users ensure the use of trusted sources within OCI? Mahendra: System administrators need the assurance that the software being deployed in a production system originates from a source they trust. Signed images play a pivotal role, providing a means to verify both the source and the integrity of the image. To further strengthen this, administrators can create image verification policies for clusters, specifying which master encryption keys must have been used to sign images. This enhances security by configuring container engine for Kubernetes clusters to allow the deployment of images signed with specific encryption keys from Oracle Cloud Infrastructure Registry. Users or systems retrieving signed images from OCIR can trust the source and be confident in the image's integrity. 15:46 Lois: Why is it imperative for users to use signed images from Oracle Cloud Infrastructure Registry when deploying applications to a Container Engine for Kubernetes cluster?  
Mahendra: This practice is crucial for ensuring the integrity and authenticity of the deployed images.  To achieve this enforcement. It's important to note that an image in OCIR can have multiple signatures, each linked to a different master encryption key. This multikey association adds layers of security to the verification process. A cluster's image verification policy comes into play, allowing administrators to specify up to five master encryption keys. This policy serves as a guideline for the cluster, dictating which keys are deemed valid for image signatures.  If a cluster's image verification policy doesn't explicitly specify encryption keys, any signed image can be pulled regardless of the key used. Any unsigned image can also be pulled potentially compromising the security measures. 16:56 Lois: Mahendra, can you break down the essential permissions required to bolster security measures within a user’s OKE clusters? Mahendra: To enable clusters to include master encryption key in image verification policies, you must give clusters permission to use keys from OCI Vault. For example, to grant this permission to a particular cluster in the tenancy, we must use the policy—allow any user to use keys in tenancy where request.user.id is set to the cluster's OCID. Additionally, for clusters to seamlessly pull signed images from Oracle Cloud Infrastructure Registry, it's vital to provide permissions for accessing repositories in OCIR. 17:43 Lois: I know this may sound like a lot, but OKE container image security is vital for safeguarding your containerized applications. Thank you so much, Mahendra, for being with us through the season and taking us through all of these important concepts. Nikita: To learn more about the topics covered today, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois Houston: And Lois Houston, signing off! 18:16 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

16 July 2024 · 18min

Working with Self-Managed Nodes and Managing Kubernetes Deployments

In this episode, hosts Lois Houston and Nikita Abraham speak with senior OCI instructor Mahendra Mehra about the capabilities of self-managed nodes in Kubernetes, including how they offer complete control over worker nodes in your OCI Container Engine for Kubernetes environment.   They also explore the various options that are available to effectively manage your Kubernetes deployments.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we discussed how OKE virtual nodes can offer you a complete serverless Kubernetes experience. Nikita: Yeah, and in today’s episode, we’ll focus on self-managed nodes, where you get complete control over the worker nodes within your OKE environment. We’ll also talk about how you can manage your Kubernetes deployments. 00:57 Lois: To tell us more about this, we have Mahendra Mehra, a senior OCI instructor with Oracle University. Hi Mahendra! Welcome back! Let’s get started with self-managed nodes. Can you tell us what they are? Mahendra: In Container Engine for Kubernetes, a self-managed node is essentially a worker node that you personally create and host on a compute instance or instance pool within the compute service. Unlike managed nodes or virtual nodes, self-managed nodes are not grouped into node pools by default. They are often referred to as Bring Your Own Nodes, also abbreviated as BYON. If you wish to streamline administration and manage multiple self-managed nodes collectively, you can utilize the compute service to create a compute instance pool for hosting these nodes. This allows for greater flexibility and customization in your Kubernetes environment. 01:58 Nikita: Mahendra, what are some practical usage scenarios for OKE self-managed nodes? Mahendra: These nodes offer a range of advantages for specific use cases. Firstly, for specialized workloads, leveraging the compute service allows you to configure compute instances with shapes and image combination that may not be available for managed nodes or virtual nodes. This includes options like GPU shapes for hardware accelerated workloads or high frequency processor cores for demanding high-performance computing tasks. Secondly, if you require complete control over your compute instance configuration, self-managed nodes are the ideal choice. This gives you the flexibility to tailor each node to your specific requirements. Additionally, self-managed nodes are particularly well suited for Oracle Cloud Infrastructure cluster networks. 
These nodes provide high bandwidth, low latency RDMA connectivity, making them a preferred option for certain networking setups. Lastly, the use of compute instance pools with self-managed nodes enables the creation of infrastructure for handling complex distributed computing tasks. This can greatly enhance the efficiency of your Kubernetes environment. Consider these points carefully to determine the optimal use of OKE self-managed nodes in your deployments. 03:30 Lois: What do we need to consider before creating a self-managed node and integrating it into a cluster? Mahendra: There are two crucial aspects to address. Firstly, you need to confirm that the cluster to which you plan to add a self-managed node is configured appropriately.  Secondly, it's essential to choose the right image for the compute instance hosting the self-managed node.  03:53 Nikita: Can you dive a little deeper into these prerequisites? Mahendra: To successfully integrate a self-managed node into your cluster, you must ensure that the cluster is an enhanced cluster. This is a crucial prerequisite for the addition of self-managed nodes. The flannel CNI plugin for pod networking should be utilized, not the VCN-native pod networking CNI plugin. This ensures optimal pod networking for your self-managed nodes. The control plane nodes of the cluster must be running Kubernetes version 1.25 or later. This is essential for compatibility and optimal performance. Lastly, maintain compatibility between the Kubernetes version on control plane nodes and worker nodes with a maximum allowable difference of two minor versions. This ensures a smooth and stable operation of your Kubernetes environment. Keep these cluster requirements in mind as you prepare to add self-managed nodes to your OKE cluster. 04:55 Lois: What about the image requirements when creating self-managed nodes? Mahendra: Choose either Oracle Linux 7 or Oracle Linux 8 image, for your self-managed nodes. Ensure that the selected image has a release date of March 28, 2023 or later. Obtain the image OCID, also known as Oracle Cloud Identifier, from the respective sources. When specifying an image, be mindful of the Kubernetes version it contains. It's your responsibility to select an image with a Kubernetes version that aligns with the Kubernetes version skew support policy. Keep in mind that the Container Engine for Kubernetes does not automatically check the compatibility. So it's up to you to ensure harmony between the Kubernetes version on the self-managed node and the cluster's control plane nodes. These considerations will help you make informed choices when configuring images for your self-managed nodes. 05:57 Nikita: I really like the flexibility and customization OKE self-managed nodes offer. Now I want to switch gears a little and ask you about OCI Service Operator for Kubernetes. Can you tell us a bit about it? Mahendra: OCI Service Operator for Kubernetes is an open-source Kubernetes add-on that transforms the way we manage and connect OCI resources within our Kubernetes clusters. This powerful operator enables you to effortlessly create, configure, and interact with OCI resources directly from your Kubernetes environment, eliminating the need for constant navigation between the Oracle Cloud Infrastructure Console, CLI, or other tools. With the OCI Service Operator, you can seamlessly leverage kubectl to call the operator framework APIs, providing a streamlined and efficient workflow. 06:53 Lois: On what framework is the OCI Service Operator built? 
Mahendra: OCI Service Operator for Kubernetes is built using the open-source Operator Framework toolkit. The Operator Framework manages Kubernetes-native applications called operators in an effective, automated, and scalable way. The Operator Framework comprises essential components like Operator SDK. This leverages the Kubernetes controller-runtime library, providing high-level APIs and abstractions for writing operational logic. Additionally, it offers tools for scaffolding and code generation. 07:35 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai. 08:14 Nikita: Welcome back! Mahendra, are there any other components within OCI Service Operator to manage Kubernetes deployments? Mahendra: The other essential component is Operator Lifecycle Manager, also abbreviated as OLM. OLM extends Kubernetes by introducing a declarative approach to install, manage, and upgrade operators within a cluster. The OCI Service Operator for Kubernetes is intelligently packaged as an Operator Lifecycle Manager bundle, simplifying the installation process on Kubernetes clusters. This comprehensive bundle encapsulates all necessary objects and definitions, including CRDs, RBACs, ConfigMaps, and deployments, making it effortlessly deployable on a cluster. 09:02 Lois: So much that users can take advantage of! What about OCI Service Operator’s integration with other OCI services?  Mahendra: One of its standout features is its seamless integration with a range of OCI services. The first one is Autonomous Database, specifically tailored for transaction processing, mixed workloads, analytics, and data warehousing. Enjoy automated patching, upgrades, and tuning, allowing routine maintenance tasks to be performed without human intervention. The next on the list is MySQL HeatWave, a fully-managed Database Service designed for developing and deploying secure cloud-native applications using widely adopted MySQL open-source database. Third on the list is OCI Streaming service. Experience a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams in real time. Next is Service Mesh. This service offers a set of capabilities to facilitate communication among microservices within a cloud-native application. The communication is centrally managed and secured, ensuring a smooth and secure interaction. The OCI Service Operator for Kubernetes serves as a versatile bridge, seamlessly connecting your Kubernetes clusters with these powerful Oracle Cloud Infrastructure services. 10:31 Nikita: That’s awesome! I’ve also heard about Ingress Controllers. Can you tell us what they are? Mahendra: A Kubernetes Ingress Controller serves as the enforcer of rules defined in a Kubernetes Ingress. Its primary role is to manage, load balance, and route incoming traffic to specific service pods residing on worker nodes within the cluster. At the heart of this process is the Kubernetes Ingress Resource. Think of it as a blueprint, a rich configuration holding routing rules and options, specifically crafted for handling HTTP and HTTPS traffic. 
It serves as a powerful orchestrator for managing external communication with services inside the cluster. 11:15 Lois: Mahendra, how do Ingress Controllers bring about efficiency? Mahendra: Efficiency comes with consolidation. With a single ingress resource, you can neatly gather routing rules for multiple services. This eliminates the need to create a Kubernetes service of type LoadBalancer for each service seeking external or private network traffic. The OCI native ingress controller is a powerhouse. It crafts an OCI Flexible Load Balancer, your gateway to efficient request handling. The OCI native ingress controller seamlessly adapts to changes in routing rules with real-time updates. 11:53 Nikita: And what about integration with an OKE cluster? Mahendra: Absolutely. It harmonizes with the cluster for streamlined traffic management. Operating as a single pod on a randomly selected worker node, it ensures a balanced workload distribution. 12:08 Lois: Moving on, let’s talk about running applications on ARM-based nodes and GPU nodes. We’ll start with ARM-based nodes.  Mahendra: Typically, developers use ARM-based worker nodes in Kubernetes cluster to develop and test applications. Selecting the right infrastructure is crucial for optimal performance.  12:28 Nikita: What kind of options do developers have when running applications on ARM-based nodes? Mahendra: When it comes to running applications on ARM-based nodes, you have a range of options at your fingertips. First up, consider the choice between ARM-based bare metal shapes and flexible VM shapes. Each comes with its own unique advantages. Now, let's talk about the heart of it all, the Ampere A1 Compute instances. These instances are driven by the cutting edge Ampere Altra processor, ensuring high performance and efficiency for your workloads. You must specify the ARM-based node pool shapes during cluster or node pool creation, whether you choose to navigate through the user-friendly console, leverage the flexibility of the API, or command with precision through the CLI, the process remains seamless. 13:23 Lois: Can you define pods to run exclusively on ARM-based nodes within a heterogeneous cluster setup? Mahendra: In scenarios where a cluster comprises node pools with ARM-based shapes alongside other shapes, such as AMD64, you can employ a powerful tool called node selector in the pod specification. This allows you to precisely dictate that an application should exclusively run on ARM-based worker nodes, ensuring your workloads aligns with the desired architecture. 13:55 Nikita: And before we end this episode, can you explain why developers must run applications on GPU nodes? Mahendra: Originally designed for graphics manipulations, GPUs prove highly efficient in parallel data processing. This makes them a top choice for deploying data-intensive applications. Our GPU nodes utilize cutting edge NVIDIA graphics cards ensuring efficient and powerful data processing. Seamless access to this computing prowess is made possible through CUDA libraries. To ensure smooth integration, be sure to select a GPU shape and opt for an Oracle Linux GPU image preloaded with the essential CUDA libraries. CUDA here is Compute Unified Device Architecture, which is a parallel computing platform and application-programming interface model created by NVIDIA. It allows developers to use NVIDIA graphics-processing units for general-purpose processing, rather than just rendering graphics. 14:57 Nikita: Thank you, Mahendra, for another insightful session. 
We appreciate you joining us today. Lois: For more information on everything we discussed, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. You’ll find plenty of demos and skill checks to supplement your learning. Join us next week when we’ll discuss vital security practices for your OKE clusters on OCI. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:28 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

9 July 2024 · 15min

Working with OKE Virtual Nodes

Want to gain insights into how virtual nodes provide a serverless Kubernetes experience?   Join hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, as they compare managed nodes and virtual nodes. Continuing from the previous episode, they explore how virtual nodes enhance Kubernetes deployments in Oracle Cloud Infrastructure.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hey everyone! In our last episode, we examined OCI Container Engine for Kubernetes, including its key features and benefits. Lois: Yeah, that was an interesting one. Today, we’re going to discuss virtual nodes and their role in enhancing Kubernetes deployments in Oracle Cloud Infrastructure. Nikita: We’re going to compare virtual nodes and managed nodes, and look at their differences and advantages. To take us through all this, we have Mahendra Mehra with us. Mahendra is a senior OCI instructor with Oracle University.  01:09 Lois: Hi Mahendra! From our discussion last week, we know that when creating a node pool with Container Engine for Kubernetes, we have the option of specifying the type of Oracle nodes as either managed nodes or virtual nodes. But I’m sure there are some key differences in the features supported by each type, right?  Mahendra: The primary point of differentiation between virtual nodes and managed nodes is in their management approach. When it comes to managed nodes, users are responsible for managing the nodes. They have the flexibility to configure them to meet the specific requirements. Users are also responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. You can create managed nodes and node pools in both basic clusters and enhanced clusters, whereas in virtual nodes, virtual nodes provide a serverless Kubernetes, experience, enabling users to run containerized applications at scale. The Kubernetes software is upgraded and security patches are applied while respecting application's availability requirements.  You can only create virtual nodes and virtual node pools in enhanced clusters. 02:17 Nikita: What about differences in terms of resource allocation? Are there any differences we should be aware of? Mahendra: When it comes to managed nodes, the resource allocation is at the node pool level and the users specify CPU and memory resource requirements for a given node pool. In the virtual nodes, the resource allocation is done at the pod level, where you can specify the CPU and memory resource requirements, but this time, as requests and limits in the pod specification.  
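As a minimal sketch of the pod-level allocation Mahendra just described (the pod name, image, and values are illustrative), the requests and limits go directly into the pod specification:

```yaml
# On virtual nodes, OKE allocates resources at the pod level,
# based on the requests and limits declared here.
apiVersion: v1
kind: Pod
metadata:
  name: billing-service
spec:
  containers:
  - name: app
    image: nginx:latest
    resources:
      requests:
        cpu: "500m"     # half a CPU requested for scheduling
        memory: 1Gi
      limits:
        cpu: "1"        # hard ceiling of one CPU
        memory: 2Gi
```

With managed nodes, by contrast, these same fields influence scheduling, but capacity is provisioned at the node pool level as described above.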
02:45 Lois: What about differences in the approach to load balancing? Mahendra: When it comes to managed nodes, load balancing is between the worker nodes, whereas in virtual nodes, load balancing is between pods.  Also, load balancer security list management is never enabled, and you always must manually configure security rules. When using virtual nodes, load balances distribute traffic among pods' IP addresses and then assign node port.  03:12 Lois: And when it comes to pod networking? Mahendra: Under managed nodes, both the VCN-Native Pod Networking CNI plugin and the flannel CNI plugin are supported. When it comes to virtual nodes, only VCN-Native Pod Networking is supported. Also, only one VNIC is attached to each virtual node. Remember, IP addresses are not pre-allocated before pods are created. And the VCN-Native Pod Networking CNI plugin is not shown as running in the kube-system namespace. Pod subnet route tables must have route rules defined for a NAT gateway and a service gateway. 03:48 Nikita: OK… I have a question, Mahendra. When it comes to scaling Kubernetes clusters and node pools, can users adjust the cluster capacity in response to their changing requirements? Mahendra: When it comes to managed nodes, customers can scale the cluster and node pool up and down by changing the number of managed node pools and nodes respectively. They also have an option to enable autoscaling to automatically scale managed node pools and pods. When it comes to virtual nodes, operational overhead of cluster capacity management is handled for you by OCI. A virtual node pool scales automatically and can support up to 1000 pods per virtual node. Users also have an option to increase the number of virtual node pools or virtual nodes to scale up the cluster or node pool respectively. 04:37 Lois: And what about the pricing for each? Mahendra: Under managed nodes, you pay for the compute instances that execute applications, whereas under virtual nodes, you pay for the exact compute resources consumed by each Kubernetes pod. 04:55 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we’re offering both the course and certification for free! So, don’t miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That’s https://education.oracle.com/genai. 05:34 Nikita: Welcome back! We were just discussing how when you have to choose between virtual nodes and managed nodes for your Kubernetes cluster, you need to consider several key points of differentiation, like the management approach, resource allocation, load balancing, pod networking, scaling, and pricing.  Lois: Yeah, it’s important to understand the benefits and drawbacks of each approach to make informed decisions. Mahendra, now let’s talk about the prerequisites to configure clusters with virtual nodes and the IAM policies that are required to use virtual nodes. Mahendra: Before you can use virtual nodes, you always have to set up at least one IAM policy, which is required in all circumstances by both tenancy administrators and non-administrator users. 
This basically means, to create and use clusters with virtual nodes and virtual node pools, you must endorse Container Engine for Kubernetes service to allow virtual nodes to create container instances in the Container Engine for Kubernetes service tenancy with a VNIC connected to a subnet of a VCN in your tenancy. All you need to do is create a policy in the root compartment with policy statements from the official documentation page. You will find them under the Working with Virtual Nodes section within the Container Engine topic.  06:55 Lois: Mahendra, how do you create and configure virtual nodes and virtual node pools? Mahendra: Creating virtual nodes is a pivotal step and it involves setting up a virtual node pool in a new cluster. This is exclusively applicable to enhanced clusters. You can initiate this process using the console, the CLI, or the API. Configuring your virtual node pools involves defining critical parameters. Firstly, we have the node count. This represents the number of virtual nodes you wish to create within your virtual node pool. These nodes will be strategically placed in the availability domains that you specify. Now, it's important to carefully consider the placement of these nodes. You can distribute them across different availability domains, ensuring high availability for your applications. Additionally, you have the option to place these nodes in a regional subnet, which is the recommended approach for optimal performance. 07:53 Nikita: Isn’t the pod shape another important parameter? Can you tell us a bit about it? Mahendra: Pod shape refers to the type of shape you want for pods running on your virtual nodes within the virtual node pool. The pod shape is crucial as it determines the processor type on which you want your pods to run. It is important to note that only shapes available in your tenancy and supported by Container Engine for Kubernetes will be shown. So choose a shape that aligns with the requirements of your applications and services. A noteworthy point is that you explicitly specify the CPU and memory resource requirements for virtual nodes in the pod specification file. This ensures that your virtual nodes have the necessary resources to handle the workloads of your applications. Precision in specifying these requirements is key to achieving optimal performance. 08:49 Lois: What is the network setup for virtual nodes?  Mahendra: The pod running on virtual nodes utilize VCN-native pod networking, and it's crucial to specify how these pods in the node pool communicate with each other. This involves setting up a pod subnet, which is a regional subnet configured specially to host pods. The pod subnet you specify for virtual nodes must be private. Oracle recommends that the pod subnet and the virtual node subnets are the same. In addition to subnet configurations, you have the option to use security rules in network security group to control access to the pod subnet. This involves defining security rules within one or more NSGs that you specify with a maximum limit of five network security groups. Also, it is worth noting that using network security group is recommended over using security list. Now, let's shift our focus to virtual node communication. For this, you will configure a virtual node subnet. This subnet can be either a regional subnet, which is recommended, or an availability domain-specific subnet. And it's designed to host your virtual nodes. 10:02 Nikita: What are some key considerations for virtual node subnets? 
Mahendra: If you've specified load balancer subnets, ensure that the virtual node subnets are different. As with pod communication, Oracle recommends that the pod subnet and the virtual node subnet are the same, with the added condition that the virtual node subnet must be private. 10:23 Lois: Mahendra, can you take us through the fundamental tasks involved in managing virtual nodes and virtual node pools? Mahendra: Whether you're creating a new enhanced cluster using the Console, or looking to scale up an existing one, the creation process is versatile.  Creating virtual nodes involves establishing a virtual node pool. Virtual nodes can only be created within enhanced clusters. Listing virtual nodes task offers visibility into virtual nodes within a virtual node pool. Whether you prefer Console, CLI, or the API, you have the flexibility to choose the method that suits your workflow best. For a comprehensive understanding of your virtual node pools, navigate to the Cluster List page, and click on the name of the cluster. This will unveil the specifics of the virtual node pool you are interested in. Now let's talk about updating virtual node pools. Whether your initiating a new enhanced cluster, or expanding an existing one, the update process ensures your cluster aligns with your evolving requirements. You can easily update the virtual node pool’s name for clarity. You can also dynamically change the number of virtual nodes to meet the workload demands, and you can fine tune the Node Placement using options like Availability Domain and Fault Domain settings. Moving on to an essential aspect of node pool management, that is deletion. It's crucial to understand that deleting a node pool is a permanent action. Once deleted, the node pool cannot be recovered.  12:04 Lois: Before we wrap up, Mahendra, can you talk about the critical factors when allocating CPU, memory, and storage resources to pods provisioned by virtual nodes within your OKE cluster? Mahendra: To ensure optimal performance, OKE calculates CPU and memory allocations at the pod level, a distinctive feature when using virtual nodes. This approach stands in contrast to the traditional worker node-level allocation. The allocation process takes into account several factors. First one is the CPU and memory requests and limits. These are specified for each container in the pod spec file, if present. Secondly, number of containers in the pod. The total number of containers impacts the overall resource requirements. And kube-proxy and container runtime requirements. A small but essential consideration taking up 0.25 GB of memory and negligible CPU. Pod CPU and memory requests must meet a minimum of 0.125 OCPUs and 0.5 GB of memory. 13:12 Nikita: Thank you, Mahendra, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course.  Lois: You’ll find demos that you watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll journey into the world of self-managed nodes and discuss how to manage Kubernetes deployments. Until then, this is Lois Houston…  Nikita: And Nikita Abraham, signing off! 13:45 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. 
13:12 Nikita: Thank you, Mahendra, for this really insightful session. If you're interested in learning more about the topics we discussed today, head over to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course.  Lois: You'll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we'll journey into the world of self-managed nodes and discuss how to manage Kubernetes deployments. Until then, this is Lois Houston…  Nikita: And Nikita Abraham, signing off! 13:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

2 July 2024 · 14 min

Introduction to OCI Container Engine for Kubernetes


Curious about how OCI Container Engine for Kubernetes (OKE) can transform the way your development team builds, deploys, and manages cloud-native applications? Listen to hosts Lois Houston and Nikita Abraham explore OKE's key features and benefits with senior OCI instructor Mahendra Mehra.   Mahendra breaks down complex concepts into digestible bits, making it easy for you to understand the magic behind OKE.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! If you've been listening to us these last few weeks, you'll know we've been discussing containerization, the Oracle Cloud Infrastructure Registry, and the basics of Kubernetes. Today, we'll dive into the world of OCI Container Engine for Kubernetes, also referred to as OKE.  Nikita: We're joined by Mahendra Mehra, a senior OCI instructor with Oracle University, who will take us through the key features and benefits of OKE and also talk about working with managed nodes. Hi Mahendra! Thanks for joining us today. 01:09 Lois: So, Mahendra, what is OKE exactly? Mahendra: Oracle Cloud Infrastructure Container Engine for Kubernetes is a fully managed, scalable, and highly available service that empowers you to effortlessly deploy your containerized applications to the cloud. But that's just the beginning. OKE can transform the way you and your development team build, deploy, and manage cloud native applications. 01:36 Nikita: What would you say are some of its most defining features?    Mahendra: One of the defining features of OKE is the flexibility it offers. You can specify whether you want to run your applications on virtual nodes or opt for managed nodes. Regardless of your choice, Container Engine for Kubernetes will efficiently provision them within your existing OCI tenancy on Oracle Cloud Infrastructure. Creating an OKE cluster is a breeze, and you have a couple of fantastic tools at your disposal-- the Console and the REST API. These make it super easy to get started. OKE relies on Kubernetes, which is an open-source system that simplifies the deployment, scaling, and management of containerized applications across clusters of hosts. Kubernetes is an incredible system that groups containers into logical units known as pods. And these pods make managing and discovering your applications very simple. Not to mention, Container Engine for Kubernetes uses Kubernetes versions that are certified as conformant by the Cloud Native Computing Foundation, also abbreviated as CNCF. And here's the icing on the cake. Container Engine for Kubernetes is ISO-compliant, certified against three ISO-IEC standards—27001, 27017, and 27018.
That's your guarantee of a secure and reliable platform.   03:08 Lois: That's great. But how do you access all this power? Mahendra: You can define and create your Kubernetes cluster using the intuitive Console and the robust REST API. Once your clusters are up and running, you can manage them using the Kubernetes command line, also known as kubectl, the user-friendly Kubernetes dashboard, and the powerful Kubernetes API. 03:32 Nikita: I love the idea of an intuitive console and being able to manage everything from a centralized place. Lois: Yeah, that's fantastic! Mahendra, can you talk us through the magic that happens behind the scenes? What's Oracle's role in all this? Mahendra: All the master nodes or control plane nodes are managed by Oracle. This includes components like etcd, the API server, and the controller manager, among others. To ensure reliability, we make sure multiple copies of these master components are distributed across different availability domains. And we don't stop there. We also manage the Kubernetes dashboard and even handle the self-healing mechanism of both the cluster and the worker nodes. All of these are meticulously created and managed within the Oracle tenancy. 04:19 Lois: And what happens at the user's end? What is their responsibility? Mahendra: At your end, you have the power to manage your worker nodes. Using different compute shapes, you can create and control them in your own user tenancy. So, as you can see, it's a perfect blend of Oracle's expertise and your control. 04:38 Nikita: So, in your opinion, why should users consider OKE their go-to solution for all things Kubernetes? Mahendra: Imagine a world where building and maintaining Kubernetes environments, be it master nodes or worker nodes, is no longer complex, costly, or even time-consuming. OKE is here to make your life easier by seamlessly integrating Kubernetes with various container life cycle management products, which include container registries, CI/CD frameworks, networking solutions, storage options, and top-notch security features. And speaking of security, OKE gives you the tools you need to manage and control team access to production clusters, ensuring granular access to Kubernetes clusters in a straightforward process. It empowers developers to deploy containers quickly, provides DevOps teams with visibility and control for seamless Kubernetes management, and brings together Kubernetes container orchestration with Oracle's advanced cloud infrastructure. This results in robust control, top-tier security, IAM, and consistent performance. 05:50 Nikita: OK…a lot of benefits! Mahendra, I know there have been ongoing enhancements to the OKE service. So, when creating a new cluster with Container Engine for Kubernetes, what is the cluster type we should specify?  Mahendra: The first type is the basic clusters. Basic clusters support all the core functionality provided by Kubernetes and Container Engine for Kubernetes. Basic clusters come with a service-level objective, but not a financially backed service level agreement. This means that Oracle guarantees a certain level of availability for the basic cluster, but there is no monetary compensation if that level is not met. On the other hand, we have the enhanced clusters. Enhanced clusters support all available features, including features not supported by basic clusters.
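To make the choice concrete, the cluster type is specified at creation time. The following OCI CLI invocation is a sketch rather than a verified command: the OCIDs and version are placeholders, and the exact parameter set for your setup should be confirmed with oci ce cluster create --help.

oci ce cluster create \
  --compartment-id ocid1.compartment.oc1..<placeholder> \
  --name demo-cluster \
  --vcn-id ocid1.vcn.oc1..<placeholder> \
  --kubernetes-version <version> \
  --type ENHANCED_CLUSTER   # use BASIC_CLUSTER instead for a basic cluster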
06:38 Lois: OK. So, can you tell us more about the features supported by enhanced clusters? Mahendra: As we move towards a more digitized world, the demand for infrastructure continues to rise. However, with virtual nodes, managing the infrastructure of your cluster becomes much simpler. The burden of manually scaling, upgrading, or troubleshooting worker nodes is removed, giving you more time to focus on your applications rather than the underlying infrastructure. Virtual nodes provide a great solution for managing large clusters with a high number of nodes that require frequent updates or scaling. With this feature, you can easily simplify the management of your cluster and focus on what really matters, that is, your applications. Managing cluster add-ons can be a daunting task. But with enhanced clusters, you can now deploy and configure them in a more granular way. This means that you can manage both essential add-ons like CoreDNS and kube-proxy as well as a growing portfolio of optional add-ons like the Kubernetes Dashboard.  With enhanced clusters, you have complete control over the add-ons you install or disable, the ability to select specific add-on versions, and the option to opt in or opt out of automatic updates by Oracle. You can also manage add-on-specific customizations to tailor your cluster to meet the needs of your application. 08:05 Lois: Do users need to worry about deploying add-ons themselves? Mahendra: Oracle manages the lifecycle of add-ons so that you don't have to worry about deploying them yourself. This level of control over add-ons gives you the flexibility to customize your cluster to meet the unique needs of your applications, making managing your cluster a breeze. 08:25 Lois: What about scaling? Mahendra: Scaling your clusters to meet the demands of your workload can be a challenging task. However, with enhanced clusters, you can now provision more worker nodes in a single cluster, allowing you to deploy larger workloads on the same cluster, which can lead to better resource utilization and lower operational overhead. Having fewer, larger environments to secure, monitor, upgrade, and manage is generally more efficient and can help you save on cost. Remember, there are limits to the number of worker nodes supported on an enhanced cluster, so you should review the Container Engine for Kubernetes limits documentation and consider the additional considerations when defining enhanced clusters with a large number of managed nodes.  09:09  Nikita: Ensuring the security of my cluster would be of utmost importance to me, right? How would I do that with enhanced clusters? Mahendra: With enhanced clusters, you can now strengthen cluster security through the use of workload identity. Workload identity enables you to define OCI IAM policies that authorize specific pods to make OCI API calls and access OCI resources. By scoping the policies to the Kubernetes service accounts associated with application pods, you can now allow the applications running inside those pods to directly access the API based on the permissions provided by the policies. 09:48 Nikita: Mahendra, what type of uptime and server availability benefits do enhanced clusters provide? Mahendra: You can now rely on a financially backed service level agreement tied to Kubernetes API server uptime and availability. This means that you can expect a certain level of uptime and availability for your Kubernetes API server, and if it degrades below the stated SLA, you'll receive compensation. This provides an extra level of assurance and helps ensure that your cluster is highly available and performant.
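A hedged sketch of what such a workload identity policy can look like follows. The resource family, compartment, namespace, service account, and cluster OCID are all placeholders; the request.principal attributes are what scope the grant to pods running under that Kubernetes service account.

Allow any-user to read secret-family in compartment <app-compartment> where all {
  request.principal.type = 'workload',
  request.principal.namespace = '<k8s-namespace>',
  request.principal.service_account = '<service-account-name>',
  request.principal.cluster_id = 'ocid1.cluster.oc1..<placeholder>'
}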
10:20 Lois: Mahendra, do you have any tips for us to remember when creating basic and enhanced clusters? Mahendra: When using the Console to create a cluster, a new cluster is created as an enhanced cluster by default unless you explicitly choose to create a basic cluster. If you don't select any enhanced features during cluster creation, you have the option to create the new cluster as a basic cluster. When using the CLI or API to create a cluster, you can specify whether to create a basic cluster or an enhanced cluster. If you don't explicitly specify the type of cluster to create, a new cluster is created as a basic cluster by default. Creating a new cluster as an enhanced cluster enables you to easily add enhanced features later, even if you didn't select any enhanced features initially. If you do choose to create a new cluster as a basic cluster, you can still choose to upgrade the basic cluster to an enhanced cluster later on. However, you cannot downgrade an enhanced cluster to a basic cluster. These points are really important when you're choosing between a basic cluster and an enhanced cluster. 11:34 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 12:13 Nikita: Welcome back! I want to move on to serverless Kubernetes with virtual nodes. But I think before we do that, we first need to have a basic understanding of what managed nodes are.   Mahendra: Managed nodes run on compute instances within your tenancy, and are at least partly managed by you. In the context of Kubernetes, a node is a compute host that can be either a virtual machine or a bare metal host. As you are responsible for managing managed nodes, you have the flexibility to configure them to meet your specific requirements. You are responsible for upgrading Kubernetes on managed nodes and for managing cluster capacity. Nodes are responsible for running a collection of pods or containers, and they comprise two system components: the kubelet, which is the host's brain, and the container runtime, such as CRI-O or containerd.  13:07 Nikita: Ok… so what are virtual nodes, then? Mahendra: Virtual nodes are fully managed and highly available nodes that look and act like real nodes to Kubernetes. They are built using the open source CNCF Virtual Kubelet Project, which provides the translation layer between OCI and Kubernetes. 13:25 Lois: So, what makes Oracle's managed virtual Kubernetes product different? Mahendra: OCI is the first major cloud provider to offer a fully managed virtual kubelet product that provides a serverless Kubernetes experience through virtual nodes. Virtual nodes are configured by customers and are located within a single availability domain and fault domain within OCI. Virtual nodes have two main components: pod management and container instance management. Virtual nodes delegate all the responsibility of managing the lifecycle of pods to the virtual kubelet, while on a managed node, the kubelet is responsible for managing all the lifecycle state. The key distinction of virtual nodes is that they support up to 1,000 pods per virtual node, with the expectation of supporting more in the future.
14:15 Nikita: What are the other benefits of virtual nodes? Mahendra: Virtual nodes offer a fully managed experience where customers don't have to worry about managing the underlying infrastructure of their containerized applications. Virtual nodes simplify scaling patterns for customers. Customers can scale their containerized application up or down quickly without worrying about the underlying infrastructure, and they can focus solely on their applications. With virtual nodes, customers only pay for the resources that their containerized applications use. This allows customers to optimize their costs and ensures that they are not paying for any unused resources. Virtual nodes can support over 10 times the number of pods that a normal node can. This means that customers can run more containerized applications on virtual nodes, which reduces operational burden and makes it easier to scale applications. Customers can leverage container instances, a serverless offering from OCI, to take advantage of many OCI functionalities natively. These functionalities include strong isolation and ultimate elasticity with respect to compute capacity. 15:26 Lois: When creating a cluster using Container Engine for Kubernetes, we have the flexibility to customize the worker nodes within the cluster, right? Could you tell us more about this customization? Mahendra: This customization includes specifying two key elements. Firstly, you can select the operating system image to be used for worker nodes. This image serves as a template for the worker node's virtual hard drive, and determines the operating system and other software installed. Secondly, you can choose the shape for your worker nodes. The shape defines the number of CPUs and the amount of memory allocated to each instance, ensuring it meets your specific requirements. This customization empowers you to tailor your OKE cluster to your exact needs. It is important to note that you can define and create OKE clusters using both the Console and the REST API. This level of control is especially valuable for your development team when building, deploying, and managing cloud native applications.  You have the option to specify whether applications should run on virtual nodes or managed nodes. And Container Engine for Kubernetes efficiently provisions them on Oracle Cloud Infrastructure within your existing OCI tenancy. This flexibility ensures that you can adapt your OKE cluster to suit the specific requirements of your projects and workloads. 16:56 Lois: Thank you so much, Mahendra, for giving us your time today. For more on the topics we discussed, visit mylearn.oracle.com and look for the OCI Container Engine for Kubernetes Specialist course. Join us next week as we dive deeper into working with OKE virtual nodes. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 17:18 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

25 June 2024 · 17 min

Basics of Kubernetes


In this episode, Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, dive into the fundamentals of Kubernetes. They talk about how Kubernetes tackles challenges in deploying and managing microservices, and enhances software performance, flexibility, and availability.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to another episode of the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! We've spent the last two episodes getting familiar with containerization and the Oracle Cloud Infrastructure Registry. Today, it's going to be all about Kubernetes. So if you've heard of Kubernetes but you don't know what it is, or you've been playing with Docker and containers and want to know how to take it to the next level, you'll want to stay with us. Lois: That's right, Niki. We'll be chatting with Mahendra Mehra, a senior OCI instructor with Oracle University, about the challenges in containerized applications within a complex business setup and how Kubernetes facilitates container orchestration and improves its effectiveness, resulting in better software performance, flexibility, and availability. 01:20 Nikita: Hi Mahendra. To start, can you tell us when you would use Kubernetes?  Mahendra: While deploying and managing microservices in a distributed environment, you may run into issues such as failures or container crashes, or issues such as scheduling containers to specific machines depending upon the configuration. You also might face issues while upgrading or rolling back the applications which you have containerized. Scaling up or scaling down containers across a set of machines can be troublesome.  01:50 Lois: And this is where Kubernetes helps automate the entire process?  Mahendra: Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.  You can think of Kubernetes as you would a conductor for an orchestra. Similar to how a conductor would say how many violins are needed, which ones play first, and how loud they should play, Kubernetes would say how many webserver front-end containers or back-end database containers are needed, what they serve, and how many resources are to be dedicated to each one. 02:27 Nikita: That's so cool! So, how does Kubernetes work?  Mahendra: In Kubernetes, there is a master node, and there are multiple worker nodes. Each worker node can handle multiple pods. Pods are just a bunch of containers clustered together as a working unit. If a worker node goes down, Kubernetes starts new pods on a functioning worker node.
02:47 Lois: So, the benefits of Kubernetes are… Mahendra: Kubernetes can containerize applications of any scale without any downtime. Kubernetes can self-heal containerized applications, making them resilient to unexpected failures.  Kubernetes can autoscale containerized applications as per the workload and ensure optimal utilization of cloud resources. Kubernetes also greatly simplifies the process of deployment operations. With Kubernetes, however complex an operation is, it can be performed reliably by executing a couple of commands at the most. 03:19 Nikita: That's great. Mahendra, can you tell us a bit about the architecture and main components of Kubernetes? Mahendra: The Kubernetes cluster has two main components. One is the control plane, and one is the data plane. The control plane hosts the components used to manage the Kubernetes cluster. And the data plane basically hosts all the worker nodes that can be virtual machines or physical machines. These worker nodes basically host pods which run one or more containers. The containers running within these pods are making use of Docker images, which are managed within the image registry. In the case of OCI, it is the container registry. 03:54 Lois: Mahendra, you mentioned nodes and pods. What are nodes? Mahendra: A node is the smallest unit of computing hardware within Kubernetes. Its work is to encapsulate one or more applications as containers. A node is a worker machine that has a container runtime environment within it. 04:10 Lois: And pods? Mahendra: A pod is a basic object of Kubernetes, and it is in charge of encapsulating containers, storage resources, and network IPs. One pod represents one instance of an application within Kubernetes. And these pods are launched in a Kubernetes cluster, which is composed of nodes. This means that a pod runs on a node but can easily be instantiated on another node. 04:32 Nikita: Can you run multiple containers within a pod? Mahendra: A pod can even contain more than one container if these containers are relatively tightly coupled. A pod is usually meant to run one application container inside of it, but you can run multiple containers inside one pod. Usually, that is only the case if you have one main application container and a helper container or some sidecar containers that have to run inside of that pod. Every pod is assigned a unique private IP address, using which the pods can communicate with one another. Pods are meant to be ephemeral, which means they die easily. And if they do, upon re-creation, they are assigned a new private IP address. In fact, Kubernetes can scale the number of these pods to adapt to the incoming traffic, consequently creating or deleting pods on demand. Kubernetes guarantees the availability of the pods and replicas specified, but not the liveness of each individual pod. This means that other pods that need to communicate with this application or component cannot rely on the underlying individual pod's IP address. 05:35 Lois: So, how does Kubernetes manage traffic to this fluctuating number of pods with changing IP addresses? Mahendra: This is where another component of Kubernetes called services comes in as a solution. A service gets allocated a virtual IP address and lives until explicitly destroyed. Requests to the service get redirected to the appropriate pods; thus the service offers a stable endpoint for inter-component or application communication. And the best part here is that the lifecycles of the service and the pods are not connected. So even if the pod dies, the service and the IP address will stay, so you don't have to change their endpoints anymore.
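To picture that, a Service is itself declared in YAML. The names, labels, and ports in this sketch are placeholders; the selector is what ties the stable Service endpoint to whichever pods currently carry the matching label, and the ClusterIP type shown here is one of the service types discussed next.

apiVersion: v1
kind: Service
metadata:
  name: backend-svc            # illustrative name
spec:
  type: ClusterIP              # internal-only; other types are covered below
  selector:
    app: backend               # routes to pods labeled app=backend
  ports:
  - port: 80                   # port the service exposes
    targetPort: 8080           # port the container listens on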
06:13 Nikita: What types of services can you create with Kubernetes? Mahendra: There are two types of services that you can create. An external service is created to allow external users to connect to the containerized applications within the pod. Internal services can also be created that restrict the communication within the cluster. Services can be exposed in different ways by specifying a particular type. 06:33 Nikita: And how do you define these services? Mahendra: There are three types you can choose from when defining services. The first one is the ClusterIP, which is the default service type that exposes services on an internal IP within the cluster. This type makes the service only reachable from within the cluster. You can also specify the type of service as NodePort. NodePort basically exposes the service on the same port on each selected node in the cluster using network address translation, and makes the service accessible from outside the cluster using the node IP and NodePort combination. This is basically a superset of ClusterIP. You can also go for a LoadBalancer type, which basically creates an external load balancer in the current cloud. OCI supports the LoadBalancer type. It also assigns a fixed external IP to the service. And the LoadBalancer type is a superset of NodePort. 07:25 Lois: There's another component called ingress, right? When do you use that? Mahendra: An ingress is used when we have multiple services on our cluster, and we want the user requests routed to the services based on their path, and also if you want to talk to your application with a secure protocol and a domain name. Unlike NodePort or LoadBalancer, ingress is not actually a type of service. Instead, it is an entry point that sits in front of the multiple services within the cluster. It can be defined as a collection of routing rules that govern how external users access services running inside a Kubernetes cluster. Ingress is most useful if you want to expose multiple services under the same IP address, and these services all use the same Layer 7 protocol, typically HTTP. 08:10 Lois: Mahendra, what about deployments in Kubernetes?  Mahendra: A deployment is an object in Kubernetes that lets you manage a set of identical pods. Without a deployment, you would need to create, update, and delete a bunch of pods manually. With a deployment, you declare a single object in a YAML file, and the object is responsible for creating the pods, making sure they stay up to date and ensuring there are enough of them running. You can also easily autoscale your applications using a Kubernetes deployment. In a nutshell, the Kubernetes deployment object lets you deploy a replica set of your pods and update the pods and the replica sets. It also allows you to roll back to your previous deployment versions. It helps you scale a deployment. It also lets you pause or continue a deployment.
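To make that concrete, here is a minimal Deployment sketch; the names, labels, and image are placeholders, and the replicas field is the count Kubernetes keeps reconciled for you.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy             # illustrative name
spec:
  replicas: 3                  # Kubernetes keeps three identical pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web               # must match the selector above
    spec:
      containers:
      - name: web
        image: nginx:1.25      # placeholder image
        ports:
        - containerPort: 80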
08:59 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 09:37 Nikita: Welcome back! We were talking about how useful a Kubernetes deployment is in scaling operations. Mahendra, how do pods communicate with each other?  Mahendra: Pods communicate with each other using a service. For example, my application has a database endpoint. Let's say it's a MySQL service that it uses to communicate with the database. But where do you configure this database URL or endpoint? Usually, you would do it in the application properties file or as some kind of external environment variable. But usually, it's inside the build image of the application. So for example, if the endpoint of the service or the service name, in this case, changes to something else, you would have to adjust the URL in the application. And this will cause you to rebuild the entire application with a new version, and you will have to push it to the repository. You'll then have to pull that new image into your pod and restart the whole thing. For a small change like a database URL, this is a bit tedious. So for that purpose, Kubernetes has a component called ConfigMap. A ConfigMap is a Kubernetes object that maintains a key-value store that can easily be used by other Kubernetes objects, such as pods, deployments, and services. Thus, you can define a ConfigMap composed of all the specific variables for your environment. Now you just need to connect your pod to the ConfigMap, and the pod will read all the new changes that you have specified within the ConfigMap, which means you don't have to build a new image every time the configuration changes. 11:07 Lois: So then, I'm just wondering, if we have a ConfigMap to manage all the environment variables and URLs, should we be passing our username and password in the same file? Mahendra: The answer is no. Passwords or other credentials within a ConfigMap in plain text format would be insecure, even though it's an external configuration. So for this purpose, Kubernetes has another component called secret. Kubernetes secrets are secure objects which store sensitive data, such as passwords, OAuth tokens, and SSH keys, with encryption, within your cluster. Using secrets gives you more flexibility in a pod lifecycle definition and control over how sensitive data is used. It reduces the risk of exposing the data to unauthorized users. 11:50 Nikita: So, you're saying that the secret is just like ConfigMap or is there a difference? Mahendra: A secret is just like a ConfigMap, but the difference is that it is used to store secret data credentials, for example, database usernames and passwords, and it's stored in base64-encoded format. The kubelet service stores this secret on a temporary file system. 12:11 Lois: Mahendra, how does data storage work within Kubernetes? Mahendra: So let's say we have this database pod that our application uses, and it has some data or generates some data. What happens when the database container or the pod gets restarted? Ordinarily, the data would be gone, and that's problematic and inconvenient, obviously, because you want your database data or log data to be persisted reliably for the long term. To achieve this, Kubernetes has a solution called volumes. A Kubernetes volume basically is a directory that contains data accessible to containers in a given pod within the Kubernetes platform. Volumes provide a plug-in mechanism to connect ephemeral containers with persistent data stores elsewhere. The data within a volume will outlast the containers running within the pod. Containers can shut down and restart because they are ephemeral units. Data remains saved in the volume even if a container crashes, because a container crash is not enough to cut off a pod from a node.
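To tie the ConfigMap and secret ideas together, here is a sketch with made-up names and values; note the Secret value is just base64-encoded text, which is encoding rather than strong encryption on its own.

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  DB_URL: jdbc:mysql://mysql-svc:3306/appdb   # plain, non-sensitive config
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQxMjM=               # base64 of 'password123' (illustrative)
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myorg/app:1.0                      # placeholder image
    envFrom:
    - configMapRef:
        name: app-config                      # injects DB_URL as an env variable
    - secretRef:
        name: db-credentials                  # injects DB_PASSWORD as an env variable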
13:10 Nikita: Another main component of Kubernetes is a StatefulSet, right? What can you tell us about it?  Mahendra: Stateful applications are applications that store data and keep track of it. All databases, such as MySQL, Oracle, and PostgreSQL, are examples of stateful applications. In a modern web application, we see stateless applications connecting with stateful applications to serve user requests. For example, a Node.js application is a stateless application that receives new data on each request from the user. This application is then connected with a stateful application, such as a MySQL database, to process the data. MySQL stores the data and keeps updating the database on the user's request.  Now, assume you deployed a MySQL database in the Kubernetes cluster and scaled this to another replica, and a frontend application wants to access the MySQL cluster to read and write data. Read requests will be forwarded to both these pods. However, write requests will only be forwarded to the first, primary pod, and the data will be synchronized with the other pods. You can achieve this by using StatefulSets. Deleting or scaling down a StatefulSet will not delete the volumes associated with the stateful applications. This gives you data safety. If you delete the MySQL pod or if the MySQL pod restarts, you can have access to the data in the same volume.  So overall, a StatefulSet is a good fit for those applications that require unique network identifiers; stable persistent storage; ordered, graceful deployment and scaling; as well as ordered, automatic rolling updates. 14:43 Lois: Before we wrap up, I want to ask you about the features of Kubernetes. I'm sure there are countless, but can you tell us the most important ones? Mahendra: Health checks are used to check the container's readiness and liveness status. Readiness probes are intended to let Kubernetes know if the app is ready to serve traffic.  Networking plays a significant role in container orchestration to isolate independent containers, connect coupled containers, and provide access to containers from external clients. Service discovery allows containers to discover other containers and establish connections to them. Load balancing is a dedicated service that knows which replicas are running and provides an endpoint that is exposed to the clients. Logging allows us to oversee the application behavior.  The rolling update allows you to update a deployed containerized application with minimal downtime using different update scenarios. The typical way to update such an application is to provide new images for its containers. Containers, in a production environment, can grow from a few to many in no time. Kubernetes makes managing multiple containers an easy task. And lastly, resource usage monitoring-- resources such as CPU and RAM must be monitored within the Kubernetes environment. Kubernetes resource usage looks at the amount of resources that are utilized by a container or pod within the Kubernetes environment. It is very important to keep an eye on the resource usage of the pods and containers, as more usage translates to more cost.
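Those health checks are declared directly in a pod manifest. In the sketch below, the endpoints, port, and timings are illustrative; the liveness probe tells Kubernetes when to restart a stuck container, while the readiness probe simply withholds traffic until the app reports it is ready.

apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
  - name: app
    image: myorg/app:1.0            # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz              # illustrative health endpoint
        port: 8080
      initialDelaySeconds: 10       # give the app time to start
      periodSeconds: 15             # check every 15 seconds
    readinessProbe:
      httpGet:
        path: /ready                # illustrative readiness endpoint
        port: 8080
      periodSeconds: 5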
16:18 Nikita: I think we can wind up our episode with that. Thank you, Mahendra, for joining us today. Kubernetes sure can be challenging to work with, but we covered a lot of ground in this episode.  Lois: That's right, Niki! If you want to learn more about the rich features Kubernetes offers, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Remember, all the training is free, so you can dive right in! Join us next week when we'll take a look at the fundamentals of Oracle Cloud Infrastructure Container Engine for Kubernetes. Until then, Lois Houston… Nikita: And Nikita Abraham, signing off! 16:57 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

18 June 2024 · 17 min

Oracle Cloud Infrastructure Registry


In this episode, hosts Lois Houston and Nikita Abraham, along with senior OCI instructor Mahendra Mehra, discuss how Oracle Cloud Infrastructure Registry simplifies the development-to-production workflow for developers.   Listen to Mahendra explain important container registry concepts, such as images, repositories, image tags, and image paths, as well as how they relate to each other.   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and I'm joined by Lois Houston, Director of Innovation Programs. Lois: Hi there! This is our second episode on OCI Container Engine for Kubernetes, and today we're going to spend time discussing container registries with our colleague and senior OCI instructor, Mahendra Mehra. Nikita: We'll talk about how you can become proficient in managing Oracle Cloud Infrastructure Registry, a vital component in your container workflow.  00:58 Lois: Hi Mahendra, can you explain what Oracle Cloud Infrastructure Registry, or OCIR, is and how it simplifies the container image management process? Mahendra: OCIR is an Oracle-managed registry designed to simplify the development-to-production workflow for developers. It offers a range of functionalities, serving as a private Docker registry for internal use where developers can easily store, share, and manage container images.  The strength of OCIR lies in its highly available and scalable architecture. Leveraging OCI to ensure reliable deployment of applications, developers can use OCIR not only as a private registry but also as a public registry, facilitating the pulling of images from public repositories for users with internet access. 01:55 Lois: But what sets OCIR apart? Mahendra: What sets OCIR apart is its compliance with the Open Container Initiative standards, allowing the storage of container images conforming to the OCI specifications. It goes a step further by supporting manifest lists, sometimes known as multi-architecture images, accommodating diverse architectures like ARM and AMD64. Additionally, OCIR extends its support to Helm charts. Security is a priority with OCIR, offering private access through a service gateway. This means that OCI resources within a VCN in the same region can securely access OCIR without exposing them to the public internet. 02:46 Nikita: OK. What are some other key advantages of OCIR?  Mahendra: Firstly, OCIR seamlessly integrates with the Container Engine for Kubernetes, ensuring a cohesive container management experience. In terms of security, OCIR provides flexibility by allowing registries to be either private or public, giving administrators control over accessibility.
It is intricately integrated with IAM, offering straightforward authentication through OCI Identity. Another notable benefit is regional availability. You can efficiently pull container images from the same region as your deployments. For high-performance, high-availability, and low-latency image operations, OCIR leverages the robust infrastructure of OCI, enhancing the overall reliability of image push and pull operations. OCIR ensures anywhere access, allowing you to use a container CLI for image operations from various locations, be it on the cloud, on-premises, or even from personal laptops.  03:57 Lois: I believe OCIR has repository quotas? Is there a cap on them?  Mahendra: In each enabled region for your tenancy, you can establish up to 500 repositories with a cumulative storage limit of 500 GB. Each repository is capable of holding up to 100,000 images. Importantly, charges apply only for stored images. 04:21 Nikita: That's good to know, Mahendra. I want to move on to basic container registry concepts. Maybe we can start with what an image is. Mahendra: An image is basically a read-only template with instructions for creating a container. It holds the application that you want to run as a container, along with any dependencies that are required. The container registry is an Open Container Initiative-compliant registry. As a result, you can store any artifacts that conform to Open Container Initiative specifications, such as Docker images, manifest lists, sometimes also known as multi-architecture images, and Helm charts. 05:02 Lois: And what's a repository then? Mahendra: It's a meaningfully named collection of related images which are grouped together for convenience in a container registry. Different versions of the same source image are grouped together into the same repository.  You can have multiple images stored under this repository. The only thing that you need to keep changing is the image version. Every image version is given a tag, and the tag uniquely identifies the image. 05:33 Lois: Is it possible to make the repository public or private? Mahendra: Depending upon your need, a repository can be made private or public. One important thing to note is that the user needs to have an OCI username and authentication token before being able to push or pull an image from OCIR. 05:52 Nikita: There are so many terms that you come across when working with repositories and container registry, right? Could you take us through them and explain how they relate to each other? I've heard of the region key and tenancy namespace. Mahendra: The region key identifies the container registry region that you are using. A tenancy namespace is an auto-generated, random, and immutable string of alphanumeric characters. The tenancy namespace can be retrieved from the value of your object storage namespace field. The repository name is the name of a repository in the container registry, to and from which you can push and pull images. Repository names can include one or more slash characters and are unique across all the compartments in the entire tenancy. You should note that although a repository name can include slash characters, the slash does not represent a hierarchical directory structure. It is simply one character in the string of characters. As a convenience, you might choose to start the names of different repositories with the same string. A registry identifier is the combination of your container registry region key and the tenancy namespace.
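Putting those pieces together, a fully qualified image path has this shape (the values in the second line are purely illustrative):

<region-key>.ocir.io/<tenancy-namespace>/<repo-name>:<tag>
iad.ocir.io/mytenancynamespace/project01/web-app:v1.0

Here, iad is the region key for the Ashburn region, project01/web-app is the repository name (the slash is just a character, not a directory), and v1.0 is the tag.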
07:07 Lois: What about an image tag and an image path? How do they differ from each other? Mahendra: A tag, or an image tag, is a string used to refer to a particular image in a known registry. The term "image name" is sometimes used as a shorthand way to refer to a particular image in a particular repository. A tag can be a numerical value or it can be a string. An image path is a fully qualified path to a particular image in a registry. It extends the repository path by adding the tag associated with the image.  07:46 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:24 Nikita: Welcome back! Mahendra, from what you've told us, OCIR seems like such a pivotal tool for modern containerized workflows, with its seamless integration, robust security measures, regional accessibility, and efficient image management. So, how do we actually manage OCIR?  Mahendra: Managing OCIR can be done in three ways. Starting with managing the repository itself, followed by managing the images within the repository, and, last but not least, managing the overall security of your repository alongside the images. 08:58 Nikita: Can we dive into each of these approaches in a little more detail? How does managing the repository itself work? Mahendra: You can create an empty repository in a compartment and give it a name that's unique across all the compartments in the entire tenancy. There is a limit to the number of repositories you can have in a given region in a tenancy. So, when you no longer need a repository, it makes sense to delete it from the Oracle Cloud Infrastructure Registry. Note that when you delete a repository, it can take up to 48 hours for the deletion to take effect and for the storage to actually be released. When you create a new repository in Oracle Cloud Infrastructure Registry, you specify the compartment in which you want to create it. Having created the repository in one compartment, you can subsequently move it to a different compartment. The reasons can be many. It can be to change the users who are authorized to use the repository or to change how the billing for a repository is charged. 09:52 Lois: OK. And what about managing images within the repository?  Mahendra: You can view the images stored on OCIR using the OCI Console, or using the docker images command from your Docker client after logging in to the OCIR repo. To push an image, you first use the docker tag command to create a copy of the local source image as a new image. As the name for the new image, you specify the fully qualified path to the target location in your container registry where you want to push the image, including the name of a repository. In order to pull an image, you must be logged in to the OCIR registry using the auth token, and use the docker pull command followed by the fully qualified name of the image you wish to download on your Docker client.
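As a sketch of that push/pull workflow with illustrative values (the region key, tenancy namespace, repository, and username are placeholders, and the password prompt expects your auth token rather than your console password):

# log in; the username takes the form <tenancy-namespace>/<oci-username>
docker login iad.ocir.io -u mytenancynamespace/jdoe@example.com
# tag the local image with its fully qualified OCIR path
docker tag web-app:latest iad.ocir.io/mytenancynamespace/project01/web-app:v1.0
# push the tagged image to the repository
docker push iad.ocir.io/mytenancynamespace/project01/web-app:v1.0
# pull it later from any logged-in Docker client
docker pull iad.ocir.io/mytenancynamespace/project01/web-app:v1.0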
10:36 Nikita: What happens when you no longer need an old image or you simply want to clean up the list of image tags in a repository? Mahendra: You can delete images from the Oracle Cloud Infrastructure Registry. You can undelete an image you've previously deleted for up to 48 hours after you deleted it. After that time, the image is permanently removed from the container registry. You can set up image retention policies to automatically delete images that meet particular selection criteria. 11:02 Lois: What sort of selection criteria? Mahendra: Criteria can be images that have not been pulled for a certain number of days, or images that have not been tagged for a certain number of days. It can also be images that have not been given particular Docker tags specified as exempt from the automatic deletion. There's an hourly process that checks images against the selection criteria, and any that meet the selection criteria are automatically deleted. In each region in a tenancy, there's a global image retention policy. The default criterion of the policy is to retain all images, so that no images are automatically deleted. However, you can change the global image retention policy so that images are deleted if they meet certain criteria that you specify. A region's global image retention policy applies to all the repositories within that region unless it is explicitly overridden by one or more custom image retention policies. Only one custom image retention policy at a time can be applied to a repository. If a repository has already been added to a custom retention policy and you want to add the repository to a different custom retention policy, you have to remove the repository from the first retention policy before adding it to the second one. 12:15 Lois: Mahendra, what should we keep in mind when we're dealing with the global image retention policy? Mahendra: Global image retention policies are specific to a particular region. To delete images consistently in different regions in your tenancy, you need to set up image retention policies in each region with identical selection criteria.  If you want to prevent images from being deleted on the basis of Docker tags they've been given, you need to specify those tags as exempt in a comma-separated list. When you want to clean up the list of images in a repository without actually deleting the images, you can remove the tags from the images in OCIR. Removing the tags in this way is referred to as untagging. 12:53 Nikita: OK…and the last approach was managing the overall security of your repository alongside the images, right?  Mahendra: While managing security, you are given fine-grained control over the operations that users are allowed to perform on repositories within the Container Registry. Using the concept of users and groups, you can control repository access by setting up identity and access management policies at the tenancy and at the compartment level.  You can write policies to allow inspect, read, use, and manage operations on the repository based on the requirements. You can set up Oracle Cloud Infrastructure Registry to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures databases. To perform image scanning, the container registry makes use of the Oracle Cloud Infrastructure vulnerability-scanning service and vulnerability scanning REST API.
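For example, a tenancy administrator might grant a developer group broad repository rights and an audit group read-only visibility with statements along these lines; the group and compartment names are placeholders, and repos is the Container Registry resource type those verbs act on.

Allow group Developers to manage repos in compartment Dev-Team
Allow group Auditors to inspect repos in tenancy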
13:46 Nikita: What do I need to have in place before I can push and pull Docker images to and from Oracle Cloud Infrastructure Registry? Mahendra: The first thing is, your tenancy must be subscribed to one or more of the regions in which the container registry is available. You can check the same within the Oracle documentation. The next thing is, you need to have access to the Docker command line interface to push and pull images on your local machine. The third thing is, users must belong to a group to which a policy grants the appropriate permission, or belong to the tenancy's Administrators group, which by default has access permissions on the container registry. Lastly, users must already have an Oracle Cloud Infrastructure username and an authentication token, which enables them to perform operations on the container registry. 14:29 Lois: Thank you, Mahendra, for sharing your insights on OCIR with us. To watch demos on managing OCIR, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Nikita: Mahendra will be back next week to walk us through the basics of Kubernetes. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 14:53 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

11 June 2024 · 15 min

What is Containerization?


Welcome to a new season of the Oracle University Podcast, where we delve deep into the world of OCI Container Engine for Kubernetes. Join hosts Lois Houston and Nikita Abraham as they ask senior OCI instructor Mahendra Mehra about the transformative power of containers in application deployment and why they're so crucial in today's software ecosystem.   Uncover key differences between virtualization and containerization, and gain insights into Docker components and commands.   Getting Started with Oracle Cloud Infrastructure: https://oracleuniversitypodcast.libsyn.com/getting-started-with-oracle-cloud-infrastructure-1   Networking in OCI: https://oracleuniversitypodcast.libsyn.com/networking-in-oci   OCI Identity and Access Management: https://oracleuniversitypodcast.libsyn.com/oci-identity-and-access-management   OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X (formerly Twitter): https://twitter.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode.   ---------------------------------------------------------   Episode Transcript:   00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.  Nikita: Hi everyone! Welcome to a new season of the Oracle University Podcast. This time around, we're going to delve into the world of OCI Container Engine for Kubernetes, or OKE. For the next couple of weeks, we'll cover key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. 00:58 Lois: So, whether you're a cloud native developer, Kubernetes administrator and developer, a DevOps engineer, or site reliability engineer who wants to enhance your expertise in leveraging the OCI OKE service for cloud native application solutions, you'll want to tune in to these episodes for sure. And if that doesn't sound like you, I'll bet you will find the season interesting even if you're just looking for a deep dive into this service. Nikita: That's right, Lois. In today's episode, we'll focus on concepts of containerization, laying the foundation for your journey into the world of containers. And taking us through all this is Mahendra Mehra, a senior OCI instructor with Oracle University. 01:38 Lois: Hi Mahendra! We're so glad to start our look at containerization with you today. Could you give us an overview? Why is it important in today's software world? Mahendra: Containerization is a form of virtualization that operates by running applications in isolated user spaces known as containers.  All these containers share the same underlying operating system. The container engine, pivotal in containerization technologies and container orchestration platforms, serves as the container runtime environment. It effectively manages the creation, deployment, and execution of containers. 02:18 Lois: Can you simplify this for a novice like me, maybe by giving us an analogy?
Mahendra: Imagine a container as a fully packaged and portable computing environment. It's like a digital suitcase that holds everything an application needs to run—binaries, libraries, configuration files, dependencies, you name it. And the best part? It's all encapsulated and isolated within the container. 02:46 Nikita: Mahendra, how is containerization making our lives easier today?  Mahendra: In the old days, running an application meant matching it with your machine's operating system. For example, Windows software required a Windows machine. However, containerization has rewritten this narrative. Now, it's ancient history. With containerization, you create a single software package, a container, that gracefully runs on any device or operating system. What's fascinating is that these containers seamlessly run while sharing the host operating system. The container engine is abstracted from the host operating system, with limited access to underlying resources. Think of it as a super lightweight virtual machine. The beauty of this? The containerized application becomes a globetrotter, seamlessly running on bare metal, within VMs, or on cloud platforms, without needing tweaks for each environment. 03:52 Nikita: How is containerization different from traditional virtualization? Mahendra: On one side, we have traditional virtualization. It's like having multiple houses on a single piece of land, and each house, or virtual machine, has its complete setup—walls, roofs, and utilities. This setup, while providing isolation, can be resource-intensive, with each virtual machine carrying its entire operating system. Now, let's shift gears to containerization, the modern-day superhero. Imagine a high-rise building where each floor represents a container. These containers share the same building, or host operating system, but have their private space, or isolated user space. Here's the magic. They are super lightweight, don't carry the extra baggage of a full operating system, and can swiftly move between different floors. 04:50 Lois: Ok, gotcha. That sounds pretty efficient! So, what are the direct benefits of containerization?  Mahendra: With containerization technology, there's less overhead during startup and no need to set up a separate guest OS for each application, since they all share the same OS kernel. Because of this high efficiency, containerization is commonly used for packaging up the many individual microservices that make up modern applications. Containerization unfolds a spectrum of benefits, delivering unparalleled portability as containers run uniformly across diverse platforms. This agility, fostered by open source container engines, empowers developers with cross-platform flexibility. The speed of containerized applications, known for their lightweight nature, reduces cost, boosts efficiency, and accelerates start times. Fault isolation ensures robustness, allowing independent operations without affecting others. Efficiency thrives as containers share the OS kernel and reusable layers, optimizing server utilization. The ease of management is achieved through orchestration platforms like Kubernetes automating essential tasks. Security remains paramount as container isolation and defined permissions fortify the infrastructure against malicious threats. Containerization emerges not just as a technology but as a transformative force, redefining how we build, deploy, and manage applications in the digital landscape. 06:37 Lois: It sure makes deployment efficient, scalable, and seamless!
06:37 Lois: It sure makes deployment efficient, scalable, and seamless! Mahendra, the various components of the Docker architecture work together to achieve containerization goals, right? Can you walk us through them?

Mahendra: A developer or a DevOps professional communicates with the Docker engine through the Docker client, which may run on the same computer as the Docker engine, as in development environments, or connect through a remote shell. Whenever a developer fires a Docker command, the client sends it to the Docker daemon, which carries it out. Communication between the Docker client and the Docker host usually takes place through REST APIs, and a Docker client can communicate with more than one daemon at a time.

The Docker daemon is a persistent background process that manages Docker images, containers, networks, and storage volumes. It constantly listens for Docker API requests from Docker clients and processes them.

Docker registries are services that provide locations where you can store and download Docker images. In other words, a Docker registry contains repositories that host one or more Docker images. Public registries include Docker Hub and Docker Cloud, and private registries can also be used. Oracle Cloud Infrastructure offers services like OCIR, also called Container Registry, where you can host your own private or public registry.
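[Editor's note: As a rough sketch of how the client, daemon, and registry interact in practice, here is what pushing an image to OCIR could look like from the Docker client. The region key, tenancy namespace, and repository name are placeholders, not values from the episode.]

  # The client sends each command to the daemon, which executes it
  # and talks to the registry.
  # Authenticate to OCIR (prompts for your username and auth token).
  docker login <region-key>.ocir.io
  # Retag the local image with the full OCIR repository path.
  docker tag my-app:latest <region-key>.ocir.io/<tenancy-namespace>/my-app:latest
  # Upload the image layers to the registry.
  docker push <region-key>.ocir.io/<tenancy-namespace>/my-app:latest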
08:02 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai.

08:39 Nikita: Welcome back! Mahendra, I'm wondering how virtual machines are different from containers. How do virtual machines work?

Mahendra: A hypervisor, or virtual machine monitor, is software, firmware, or hardware that creates and runs virtual machines. It sits between the hardware and the virtual machines and is necessary to virtualize the server. Within each virtual machine runs a unique guest operating system, and VMs with different operating systems can run on the same physical server: a Linux VM can sit alongside a Windows VM, and so on. Each VM has its own binaries, libraries, and the application it serves, and a VM may be many gigabytes in size.

09:22 Lois: What kind of benefits do we see from virtual machines?

Mahendra: This technique provides a variety of benefits, like the ability to consolidate applications onto a single system, cost savings through a reduced footprint, and faster server provisioning. But the approach has its drawbacks. Each VM includes a separate operating system image, which adds overhead in memory and storage footprint. That adds complexity to every stage of the software development lifecycle, from development and test to production and disaster recovery. It also severely limits the portability of applications between different cloud providers and traditional data centers. And this is where containers come to the rescue.

10:05 Lois: OK…how do containers help in this situation?

Mahendra: Containers sit on top of a physical server and its host operating system—typically, Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries as well, but the shared components are read-only. Sharing OS resources such as libraries significantly reduces the need to reproduce operating system code, so a server can run multiple workloads with a single operating system installation. Containers are thus exceptionally lightweight: they are only megabytes in size and take just seconds to start. What this means in practice is that you can put two or three times as many applications on a single server with containers as you can with virtual machines. Compared to containers, virtual machines take minutes to start and are an order of magnitude larger than an equivalent container, measured in gigabytes versus megabytes.

11:01 Nikita: So then, is there ever a time you should use a virtual machine?

Mahendra: You should use a virtual machine when you run applications that require their own operating system, or when isolation and security are your priority over everything else. In most scenarios, a container will provide a lighter, faster, and more cost-effective solution than a virtual machine.

11:22 Lois: Now that we've discussed containerization and the different Docker components, can you tell us more about working with Docker images? We first need to know what a Dockerfile is, right?

Mahendra: A Dockerfile is a text file that defines a Docker image. You use a Dockerfile to create your own custom Docker image; in other words, to define the custom environment to be used in a Docker container. You'll want to create your own Dockerfile when existing images won't meet your project's needs due to different runtime requirements, which means that learning about Dockerfiles is an essential part of working with Docker. A Dockerfile is a step-by-step definition for building up a Docker image: it provides a set of standard instructions that Docker executes when you issue a docker build command.

12:09 Nikita: Before we wrap up, can you walk us through some Docker commands?

Mahendra: Every Dockerfile must start with a FROM instruction. The idea is that you need a starting point to build your image. It can be from scratch or from an existing image available in the Docker registry. The RUN instruction executes a command and waits until that command finishes. Since most images are Linux-based, a good practice is to set up a directory to work in; that's the purpose of the WORKDIR line, which defines a directory and moves you into it. The COPY instruction copies your source code into the image. ENV provides default values for variables that can be accessed within the containers. If your app needs to be reachable from outside the container, you must declare its listening port using the EXPOSE instruction. Once your application is ready to run, the last thing to do is specify how to execute it: you add a CMD line with the same command, and all the arguments, that you used locally to launch your application. This instruction can also be used to execute commands at runtime for the containers, but we can be more flexible using the ENTRYPOINT instruction. Finally, labels are used in a Dockerfile to help organize your Docker images.
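[Editor's note: Putting those instructions together, a complete Dockerfile could look like the following minimal sketch. The Python app, requirements file, port, and label value are illustrative assumptions, not anything prescribed in the episode.]

  # Every Dockerfile must start with FROM.
  FROM python:3.12-slim
  # LABEL helps organize your images.
  LABEL maintainer="you@example.com"
  # WORKDIR defines a working directory and moves you into it.
  WORKDIR /app
  # COPY brings source and config files into the image.
  COPY requirements.txt .
  # RUN executes a command and waits for it to finish.
  RUN pip install -r requirements.txt
  COPY app.py .
  # ENV sets a default value that is visible inside the container.
  ENV PORT=8080
  # EXPOSE declares the port the app listens on.
  EXPOSE 8080
  # ENTRYPOINT fixes the executable; CMD supplies default arguments
  # that can be overridden on the docker run command line.
  ENTRYPOINT ["python"]
  CMD ["app.py"]

[The ENTRYPOINT/CMD split shown here is a common pattern: the executable stays fixed while the default arguments can be swapped out at runtime.]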
13:20 Lois: Thank you, Mahendra, for joining us today. I learned a lot! And if you want to learn more about working with Docker images, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. The course is free, so you can get started right away.

Nikita: Yeah, a fundamental understanding of core OCI services, like Identity and Access Management, networking, compute, storage, and security, is a prerequisite for the course and will certainly serve you well when leveraging the OKE service. And the quickest way to gain this knowledge is by completing the OCI Foundations Associate learning path on MyLearn and getting certified. You can also listen to episodes from our first season, called OCI Made Easy, where we discussed these topics. We'll put a few links in the show notes so you can easily find them.

Lois: We're looking forward to having Mahendra join us again next week, when we'll talk about container registries. Until next time, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

14:24 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

4 June 2024 · 14 min
