The Employee Life Cycle

During an employee's tenure in an organization, they may experience different situations or have varying demands: they may get promoted, apply for leave, or get transferred to another team, for instance. Clearly, hiring employees is just the tip of the iceberg. Managing them requires a lot more work. In this episode, hosts Lois Houston and Nikita Abraham, along with Cloud Delivery Lead Nigel Wiltshire, take a closer look at the Employee Life Cycle, which covers how employee information, separation, and absence are managed.

Oracle MyLearn: https://mylearn.oracle.com/
Oracle University Learning Community: https://education.oracle.com/ou-community
LinkedIn: https://www.linkedin.com/showcase/oracle-university/
Twitter: https://twitter.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

--------------------------------------------------------

Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started.

00:26

Lois: Welcome to the Oracle University Podcast. I'm Lois Houston, Director of Product Innovation and Go-to-Market Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.

Nikita: Hi everyone! Last week, we spoke about the Applicant Life Cycle, which is the first in the overall HCM business process life cycle, with our Cloud Delivery Lead Nigel Wiltshire. Nigel joins us once again today to talk about the second life cycle: the Employee Life Cycle.

00:54

Lois: Right. And since we're walking through HCM business processes, you might want to go back and listen to the last few episodes so you can get an idea of the big picture and the life cycles we've already discussed. Nigel, we're so glad you're back with us again this week. Thanks for agreeing to be our guide through this series. You know, I've never really thought about there being a "life cycle" for an employee. I'm an employee, but I just never really considered myself as part of a cycle. Can you tell us a little bit about how that's defined?

01:23

Nigel: Hi, and thank you once again for inviting me to participate. To put it very simply, the Employee Life Cycle continues from where the Applicant Life Cycle ends, and encompasses all the tasks that are performed against the employee from start to finish.

Nikita: Now when you say, "start to finish," what exactly do you mean by that?

Nigel: Well, the very last act performed in relation to an applicant is for Recruiting to pass the baton over to Human Resources, so that HR can officially create an employee record, and take care of all the needs and tasks associated with an employee. This typically includes transferring or accepting all the relevant data from their applicant record, expanding that to include their Work Relationship, and managing their career changes.

02:04

Nikita: Sorry to interrupt, Nigel, but what is a "Work Relationship"?

Nigel: For each employee, we need to create and maintain a relationship with the business. This serves a couple of purposes.

Firstly, it establishes which legal entity they belong to. A legal entity is the governing body that takes care of all the legislative rules and laws that affect the employee, and from the HR perspective, it is going to control such things as employment laws, working time directives, absence entitlements, and taxation, to list just a few.

Secondly, we need to provide the employee with an assignment. This will indicate what their remit is within the organization and will record such details as their Job, Department, Location, Work Hours, Grade, Salary, and much more.

Many smaller organizations will operate in a single legal entity, so managing this is not a huge piece of the puzzle, but for larger organizations, especially those that operate globally, this is a major aspect of the company setup.
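
To picture how these pieces fit together, here is a minimal sketch of an employee record that holds a work relationship and an assignment. The class and field names are illustrative assumptions for this episode, not Oracle HCM's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Assignment:
    """The employee's remit: job, department, location, hours, grade, salary."""
    job: str
    department: str
    location: str
    work_hours: float          # contracted hours per week
    grade: str
    salary: float

@dataclass
class WorkRelationship:
    """Ties the employee to a legal entity, which governs employment laws,
    working time directives, absence entitlements, and taxation."""
    legal_entity: str
    assignments: list = field(default_factory=list)

@dataclass
class Employee:
    name: str
    hire_date: str             # ISO date, e.g. "2020-04-01"
    work_relationships: list = field(default_factory=list)

# A small organization may have one legal entity; a global one, many.
emp = Employee("A. Sample", "2020-04-01")
emp.work_relationships.append(WorkRelationship(
    legal_entity="Acme UK Ltd",
    assignments=[Assignment("Engineer", "R&D", "Reading", 37.5, "G5", 52000.0)],
))
```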

03:00

Lois: I hadn't really considered all of that before. Thanks for going through that, Nigel. So, now we have the employee on board. What processes does the Employee Life Cycle encompass?

Nigel: Unlike the Applicant Life Cycle that we spoke about last week, which has only one process under it, there are three main processes in the Employee Life Cycle: Hire to Retire, Absence to Productivity, and Employee Separation to Workforce Analysis.

03:26

Nigel: Hire to Retire is the process that encompasses an employee's whole career in an organization, from when they are hired to when they decide to leave. Of course, within that time frame there are many changes that occur, such as promotions, transfers, and terminations, as well as general assignment changes, like a change in work hours, salary, or line manager, to name just a few. A major aspect of this is the need to manage and maintain the organization structure so that reporting lines can be established, and for many larger organizations, this is a regular occurrence and therefore a major job for someone.

04:00

Nikita: OK, so that's Hire to Retire. What's Absence to Productivity?

Nigel: Absence to Productivity is the process where employees take time away from work, which would mostly be due to vacation or personal time off, but would also incorporate other types of absence such as sickness, maternity, paternity, and jury service, again to name just a few. This process provides the framework and mechanism to record such absences and to monitor entitlements. It also goes as far as analysing the impact on the business and its operational effectiveness. Of course, we can't always predict when somebody is going to be absent from work, but we can monitor trends and plan for eventualities. Another aspect of this process comes from the "human" angle. For anybody who has been absent for a while due to illness, injury, or stress, there is a duty of care to help them return to work. This may involve finding the employee a different role within the organization, or simply introducing the employee back to work gradually, maybe on a part-time basis for a couple of weeks.
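
As a rough illustration of recording absences and monitoring entitlements, here is a sketch that subtracts booked days from an assumed annual allowance; the absence types and figures are invented for the example.

```python
from datetime import date

# Assumed annual entitlement per absence type, in working days.
ENTITLEMENTS = {"vacation": 25, "sickness": 10}

def remaining_entitlement(absences, absence_type):
    """Days of one absence type still available this year."""
    taken = sum(days for kind, _start, days in absences if kind == absence_type)
    return ENTITLEMENTS[absence_type] - taken

absences = [
    ("vacation", date(2024, 7, 1), 10),   # (type, start date, working days)
    ("sickness", date(2024, 2, 12), 3),
]
print(remaining_entitlement(absences, "vacation"))  # 15
```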

04:59

Nigel: The third and final process in the Employee Life Cycle is Employee Separation to Workforce Analysis. Now, although employee terminations are very much part of the "Hire to Retire" process, there is a much more robust and complex process that is usually put into place. So, you shouldn't really think of it as simply the employee leaving and being replaced.

Lois: What do you mean by that, Nigel?

05:20

Nigel: Lois, the manner in which the employee leaves is quite important. For instance, many organizations issue their employees with equipment, such as laptops and mobile phones, especially with a lot of employees working from home. So, we need a process that makes it quite clear as to how and when that will be returned.

Lois: They could also have security badges and keys that need to be returned.

Nigel: Exactly. And to deal with this, many organizations adopt an off-boarding process, which on the face of it is almost the reverse of the onboarding process the employee may well have been through.
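
To make the "reverse of onboarding" idea concrete, here is a toy sketch in which everything issued at onboarding becomes the return checklist at off-boarding; the items are invented.

```python
# Items issued during onboarding become the off-boarding return checklist.
issued_at_onboarding = ["laptop", "mobile phone", "security badge", "office keys"]

offboarding_checklist = [f"Collect {item}" for item in reversed(issued_at_onboarding)]
for step in offboarding_checklist:
    print(step)
```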

05:49

Nikita: OK, that makes sense. And what happens after the employee leaves?

Nigel: What happens? We are left with a gap in the workforce, and this vacancy may need to be filled. So, we would initiate a recruitment campaign and the Applicant Life Cycle would be triggered. Now, before a decision is made, many organizations go through a period of analysis to establish whether the employee does in fact need to be replaced. In some cases, it would be a "no-brainer" based on the job the employee was performing, but in others it may be that a simple reorganization would fill the gap and negate the need to hire a new employee. So, like I said, it's not often a simple case of employee leaves, employee gets replaced.

06:31

Have an idea for a new course or learning opportunity? We'd love to hear it! Visit the Oracle University Learning Community and share your thoughts with us. Your suggestion could find a place in future development projects.

If you are already an Oracle MyLearn user, go to MyLearn to join the community. You will need to log in first. If you have not yet accessed Oracle MyLearn, visit mylearn.oracle.com and create an account to get started.

07:00

Nikita: Welcome back. Nigel, you'd mentioned that the hiring of an employee is included in the "Hire to Retire" process. But why would hiring an employee be part of this life cycle? Surely, that's part of the Applicant Life Cycle, right?

Nigel: Yeah, I can see how that could be confusing. It really boils down to an organization's processes. A lot of larger organizations will have a dedicated recruitment team who will most likely use an applicant tracking system to manage their recruitment campaigns. And as I was saying earlier, the final act would be to hand over the successful application to the HR team, who will take it from there. In that regard, the "onboarding" of the employee, which could be seen as part of the Applicant Life Cycle, is often set and monitored by the HR team. What we also have to consider is that smaller organizations may not have the luxury of having a recruitment team, nor have the resources at their disposal, such as an applicant tracking system. Therefore, the whole process of recruiting is swallowed up within the HR team's process. However, the fundamentals will be the same: recruit, onboard employee, manage employment, terminate, etc.

08:04

Lois: Let's move on to some more specifics about an employee record. For instance, some organizations recognize and measure employee seniority. Why do employers do this and how does it affect the employee?

Nigel: Great question, Lois. Measuring seniority is a way for the employer to keep track of how long an employee has been in a given situation. This mostly involves measuring the number of years and months from when the employee is hired. The reasons for doing this vary, but a couple of examples would be things like bonuses and vacation, i.e., the amount that you are entitled to could depend on how long you have been with the organization. Some organizations also like to track when the employee started so that they can recognize loyalty and provide the employee with a thank you gift at certain increments such as 5, 10, or 25 years.
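
A minimal sketch of the date arithmetic behind this, assuming seniority is simply measured from the hire date and milestone anniversaries trigger recognition; none of this reflects a specific product's calculation rules.

```python
from datetime import date

def seniority(hire_date, as_of):
    """Whole years and leftover months of service since the hire date."""
    months = (as_of.year - hire_date.year) * 12 + (as_of.month - hire_date.month)
    if as_of.day < hire_date.day:   # this month's anniversary not yet reached
        months -= 1
    return divmod(months, 12)       # (years, months)

MILESTONES = {5, 10, 25}            # assumed service-award increments

years, months = seniority(date(1999, 6, 15), date(2024, 6, 20))
print(years, months)                # 25 0
if months == 0 and years in MILESTONES:
    print("Time for a loyalty thank-you gift")
```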

08:50

Nikita: So, other than measuring seniority from the employee's start date, are there any other times an organization would start counting?

Nigel: Absolutely. Some organizations like to know how long somebody has been in a particular role. For example, an employee may have been at the company for 25 years (so that would be one continuous seniority period), and within that time they may have moved jobs two or three times; therefore, additional periods of seniority would be measured for each role performed.

09:17

Nigel: In addition to that, some organizations recognize previous service as part of the employee's seniority. For example, let's say the employee has recently joined the organization. You would think that their seniority would be quite low, but they may also have served 10 years with the organization previously, which is taken into account. Now the impact could be that their entitlements, bonuses, and so on are set at the level of a 10-year employee, instead of an employee who has literally just joined the organization.
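
Bridging previous service could then be a matter of summing every recognized period, open or closed. A hedged sketch, with made-up dates:

```python
from datetime import date

def months_between(start, end):
    """Complete calendar months of service within one period."""
    m = (end.year - start.year) * 12 + (end.month - start.month)
    return m - 1 if end.day < start.day else m

def bridged_seniority_months(periods, as_of):
    """Total service across all periods; an open period runs to the as-of date."""
    return sum(months_between(start, end or as_of) for start, end in periods)

periods = [
    (date(2005, 1, 10), date(2015, 1, 9)),  # 10 years of previous service
    (date(2024, 3, 1), None),               # recently rejoined, still open
]
total = bridged_seniority_months(periods, date(2024, 9, 1))
print(total // 12, "years,", total % 12, "months")  # 10 years, 5 months
```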

09:44

Lois: So, bridging their service to include previous employment there. That makes sense. OK, I'd like to explore one of the other processes you mentioned a little further – Employee Separation to Workforce Analysis. In particular, the Workforce Analysis part. Can you tell us what this is and why it's so important?

Nigel: For an organization to be effective, we need to protect its operational capabilities. This can come in many forms, including equipment maintenance, fire and emergency procedures, and also staffing levels. No point having equipment if there's nobody to operate it, right?

10:19

Nigel: So, workforce analysis is a process that will allow the organization to establish the optimal numbers to run each part of the business effectively and efficiently. It also goes some way to work out how the organization should be structured so that they can deploy employees for optimal productivity.

The reason why this process is associated with Employee Separation is to allow for the analysis of "why" people are leaving. Of course, there will be a certain amount of attrition based on reasons like retirement and redundancies, which, in the main, can be predicted, but what about the ad hoc leavers? It's a good idea to identify why they are leaving as it may highlight certain flaws in the organization, its processes, or even its management structure, which can be addressed and, hopefully, plugged. A lot of organizations achieve this by conducting exit interviews as part of the employee's off-boarding process.
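
As a toy example of what that analysis might look like, here is a tally of leaving reasons captured at exit interviews; the data is invented.

```python
from collections import Counter

# Leaving reasons recorded during exit interviews (illustrative data).
exit_reasons = ["retirement", "better pay", "management", "better pay",
                "relocation", "management", "management"]

for reason, count in Counter(exit_reasons).most_common():
    print(f"{reason}: {count}")
# "management" topping the list may point to a flaw worth addressing.
```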

11:08

Nigel: This process can also be closely linked to the Absence processes so that we can monitor why people are absent. Now, I'm not talking about absences due to entitlement, such as vacation, or absences that can't be predicted, such as jury service or bereavement. I'm mostly referring to absences due to sickness. Again, it's not always possible to predict these, but it's possible to spot trends, and cater to these accordingly. For instance, most people tend to catch a cold or the flu during the winter months. Therefore, if this is the case with your organization, steps can be taken to ensure the company remains operational, which may involve being prepared to hire temporary staff. It is also possible to see from the data how a virus is spreading across the organization and to put measures in place, such as the provision of hand sanitizer.
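
And spotting a seasonal sickness trend could be as simple as bucketing absence start dates by month, as in this sketch with assumed data and an assumed staffing threshold.

```python
from collections import Counter
from datetime import date

# (employee_id, start date) of sickness absences -- illustrative data.
sick_absences = [
    (101, date(2024, 1, 8)), (102, date(2024, 1, 15)),
    (103, date(2024, 2, 2)), (104, date(2024, 7, 9)),
]

by_month = Counter(start.month for _emp, start in sick_absences)
TEMP_STAFF_THRESHOLD = 2   # assumed trigger for lining up temporary cover

for month in sorted(by_month):
    note = "  <- plan temporary staff" if by_month[month] >= TEMP_STAFF_THRESHOLD else ""
    print(f"month {month:02d}: {by_month[month]} sickness absence(s){note}")
```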

11:52

Nikita: On the face of it, it sounds like there are a ton of tasks that need to be performed just to keep an employee's record up to date. Surely this is not all done by one person.

Nigel: I guess that would depend on the size of the organization and the complexity of the processes they adopt, but generally, there are three main roles that play their part in the Employee Life Cycle.

First and foremost, we have the HR Specialist. This person is an expert who has the skills, knowledge, and experience to maintain employee records and ensure that all necessary processes are launched, monitored, and run as smoothly as possible. They are the intermediary between the employee and the business, and ensure everybody is happy.

12:29

Lois: So they're a generalist who does everything?

Nigel: There are some processes that require a little more specialized knowledge and skills, so it is not uncommon to have specific HR Specialists looking after specific parts of the employee record. A classic example of this is the Payroll Administrator whose knowledge of payroll is very specialized.

12:48

Nigel: Then there's the employee's Line Manager. Over the last couple of decades, line managers have increasingly become more involved with the management and maintenance of employee records. Of course, they would not have the knowledge and years of experience that an HR Specialist would have, but would perform simple tasks such as approving an employee vacation request or interviewing potential employees. Over the years, as more and more people become savvy with technology, we have seen this role become more and more involved, to the point where some of the HR tasks are now the responsibility of the line manager, such as initiating promotions, transfers, terminations, salary changes, and many more. This frees up a lot of the HR Specialist's time to concentrate on more specialized tasks.

13:31

Nigel: And last but certainly not least, the Employees themselves take some responsibility. At the end of the day, HR mostly centers around employee data. Therefore, very much like the line manager role, the employee is increasingly required to take responsibility for that data. As a result, it is not uncommon these days for employees to enter and update certain data, such as a change of address, the addition of emergency contacts, and absence requests and withdrawals, which, again, frees up the HR Specialist's time for more complex tasks.

14:00

Lois: Like I implied at the beginning, there's certainly a lot more to this life cycle than it appears on the surface. We just show up to do our work, but there's a lot happening behind the scenes to track and manage our employment. We're not even aware of this sometimes.

Nigel: Exactly, Lois.

14:16

Nikita: Thank you, Nigel, for your insights into the Employee Life Cycle. To learn more about HCM business processes, visit mylearn.oracle.com.

Lois: Yes, and you should definitely consider catching up on the previous episodes of this season so you'll get the full picture of the business processes for HCM. And don't forget to join us again next week, where we will be introducing the Reward Life Cycle. So much good stuff. Until then, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

14:45

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Avsnitt(143)

Database Sharding: Part 2

Database Sharding: Part 2

Join hosts Lois Houston and Nikita Abraham in Part 2 of the discussion on database sharding with Ron Soltani, a Senior Principal Database & Security Instructor. They talk about sharding native replication, directory-based sharding, and coordinated backup and restore for sharded databases, explaining how these features work and their benefits. Additionally, they explore the automatic bulk data move on sharding keys and the ability to split and move partition sets, highlighting the flexibility and efficiency they bring to data management. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! In our last episode, we dove into database sharding and Oracle Database Sharding in particular. If you haven't listened to it yet, I'd suggest you go back and do so before you listen to this episode because it will give you a lot of context. 00:53 Lois: Right, Niki. Today, we will discuss all the 23ai new features related to database sharding. We will cover sharding native replication, directory-based sharding, coordinated backup and restore for sharded databases, and a few more. Nikita: And we're so happy to have Ron Soltani back on the podcast. If you don't already know him, Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Let's talk about sharding native replication, which is RAFT-based, meaning that it is reliable and fault tolerant-based, usually providing subzero or subsecond zero data loss replication support. Tell us more about it, please. 01:33 Ron: This is completely transparent replication built in within Oracle sharding that duplicates data across the different shards. So data are generally put into chunks. And then the chunks are replicated either between three or five different shards, depending on how much of the fault tolerance is required. This is completely provided by the Oracle sharding database, and does not require use of any other component like GoldenGate and Data Guard. So if you remember when we talked about the architecture, we said that each shard, each database can have a Data Guard component, whether through GoldenGate or whether through Data Guard to have a standby. And that way support high availability with the sharding native replication, you don't rely on the secondary database. You actually-- the shards will back each other up by holding replicas and being able to globally manage the replica, make sure everything is preserved, and manage all of the fault operations. Now this is a logical replication, generally consensus-based, kind of like different components all aware of each other. They know which component is good, depending on the load, depending on the failure. 
The sharded databases behind the scene decide who is actually serving the data to the client. That can provide subsecond failovers with zero data loss. 03:15 Lois: And what are the benefits of this? Ron: Major benefits for having sharding native replication is that it is completely transparent to the application or any of the structures. You just identify that you want to go ahead and use this replication and identify the replication factor. The rest is managed by the Oracle sharded database behind the scene. It supports fast failover with zero data loss, usually subsecond failovers. And depending on the number of replicas, it can even tolerate multiple failures like two server failures. And when the loads are submitted, the loads are also load-balanced across all of these shards based on where the data is located, based on the replicas. So this way, it can also provide you with a little bit of a better utilization of the hardware and load administration. So generally, it's designed to help you keep your regular SQL-based databases without having to resolve to FauxSQL or NoSQL environment getting into other databases. 04:33 Nikita: So next is directory-based sharding. Can you tell us what directory-based sharding is, Ron? Ron: Directory-based sharding basically allows the user to define the values that are used and combined for different partition, so better control, location of the data, in what partition, what shard. So this allows you to set up a good configuration. Now, many times we may have a key that may not be large enough for hash partitioning to distribute the data enough. Sometimes we may not even know what keys are going to come in the future. And these need to be built in the future. So having to build these, you really don't want to have to go reorganize the whole data based on new hash functions, and so when data cannot be managed and distributed using hash partitioning or when we need full control over combination of where data exists. 05:36 Lois: Can you give us a practical example of how this works? Ron: So let's say our company is very small in three different countries. So I can combine those three countries into one single shard. And then have three other big countries, each one sitting in their own individual shards. So all of this done through this directory-based sharding. However, what is good about this is the directory is created, which is a table, created behind the scene, stored in the catalog, available to the client that is cached with them, used for connection mapping, used for data access. So it can give you a lot of very high-level benefits. 06:24 Nikita: Speaking of benefits, what are the key advantages of using directory-based sharding? Ron: First benefit allow you to group the data together based on the whatever values you want, depending on what location you want to put them as far as across the shards are concerned. So all of that is much better and easier controlled by us or by the designers. Now, this is when there is not enough values available. So when you're going to use hash-based partition, that would result into an uneven distribution of the data. Therefore, we may be able to use this directory for better distribution of the data since we understand the data structure better than just the hash function. And having a specification where you can go ahead and create future component, future partitions, depending on how large they're going to be. Maybe you're creating them with an existing shard, later put them in another shard. 
So capability of having all of those controls become essential for management of this specific type of data. If a shard value, the key value is required, for example, as we said, client getting too big or can use the key value, split it or get multiple key value. Combine them. Move data from one location to another. So all of these components maintain automatically behind the scene by us providing the changes. And then the directory sharding and then the sharded database manages all of the data structure, movement, everything behind the scene using some of the future functionalities. And finally, large chunk of data, all of that can then be moved from one location to another. This is part of the automatic chunk data move and whatnot, but utilized within the directory-based sharding to allow us the control of this data and how we're going to move and manage the data based on the load as the load or the size of the data changes. 08:50 Lois: Ron, what is the purpose of the coordinated backup and restore system in Oracle Database Sharding? Ron: So, basically when we talk about a coordinated backup and restore, remember in a sharded database, I have different databases. Each database is a shard. When you take a backup, each database creates its own backup. So to have consistent data across all of the shards for the whole schema, it is extremely important for these databases to be coordinated when the backup is taken, when the restore is being done. So you have consistency of the data maintained across all of the shards. 09:28 Nikita: So, how does this coordination actually happen? Ron: You don't submit this through our main. You submit this through the Global Management tool that is used for the sharded database. And it's the Global Management tool that is actually submit your request to each database, but maintains the consistency of when the actual backup is taken, what SCN. So that SCN coordination across all of the shards is then maintained for the backup so you can create a consistent backup or restore to a consistent point in time across the sharded database. So now this system was enhanced in 23C to support multiple destinations. So you can now send your backup to an object store. You can send it to ZDLRA. You can send it to Amazon S3. So multiple locations can now be defined where you can send these backups to. You can also use multiple recovery catalogs. So let's say I have data that is located on different countries and we have requirement that data for each country must stay in that country. So I need to also use a separate catalog to maintain that partition. So now I can use multiple catalog and define which catalog is maintaining which partition to satisfy those type of requirements or any data administration requirement when it comes to backup recovery. In addition, you can also now specify different type of encryption to be used, whether you want to have different type of encryption algorithm for each of the databases that you're backing up that is maintained. It can be identified, and then set up for each one of those components. So these advancements now allow you to manage this coordinated backup and restore with all of the various specific configuration that may be required based on the data organization. So the encryption, now can also be done across that, as I mentioned, for different algorithms. And you can define different components. Finally, there is much better error handling and response available through this global system. 
Since things have been synchronized, you get much better information into diagnosing any issues. 12:15 Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in challenges. And stay up-to-date with upcoming certification opportunities. Visit www.mylearn.oracle.com to get started. 12:41 Nikita: Welcome back! Continuing with the updates… next up is the automatic bulk data move on sharding keys. Ron, can you explain how this works and why it's significant? Ron: And by the way, this doesn't have to be a bulk data. This could be just an individual row or it could be bulk data, a huge piece of data that is going to be moved. Now, in the past, when the shard key of an existing record was going to be updated, we basically had to remove that row from the table, so moving it to a temporary table or moving it to another location. Basically, you're deleting the row, and then change the value and reinsert the row so the row would then be inserted into the proper location. That causes a lot of work and requires specific code-writing and whatnot to manage those specific type of situations. And of course, if there is a lot of data, now, you're moving those bulk data in twice. 13:45 Lois: Yeah… you're moving it to one location and then moving it back in. That's a lot of double work, not to mention that it all needs to be managed manually, right? So, how has this process been improved? Ron: So now, basically, you can just go ahead and update the value of the partition key, and then data will then automatically move to the new location. So this gives you complete flexibility of the shard key values. This is also completely transparent, and again, completely managed behind the scenes. All you do is identify what is going to be changed. Then the database will maintain the actual data location and movement behind the scenes. 14:31 Lois: And what are some of the specific benefits of this feature? Ron: Basically, it allows you to now be flexible, be able to update the shard key without having to worry about, oh, which location does this value have to exist? Do I have to delete it, reinsert it? And all of those different operations. And this is done automatically by Oracle database, but it does require for you to enable row movement at the table level. So for tables that are expected to have partition key updates kind of without knowing when that happens, can happen, any time it happens by the clients directly or something, then we may need to enable row movement at the table level and leave it enabled. It does have tiny bit of overhead of maintaining these row locations behind the scenes when enabled, as it maintains some metadata behind the scenes. But for cases that, let's say I know when the shard key is going to be changed, and we can use, let's say, a written procedure or something for that when the particular shard key is going to be changed. Then when the shard key is updated, the data will then automatically move to the new location based on that shard key operation. So we don't need to move the data manually in and out or to different locations. 16:03 Nikita: In our final segment, I want to bring up the update on splitting and moving a partition set, or basically subpartitioning tables and then being able to move all of the data associated with that in a bulk data move to a new location. 
Ron, can you explain how this process works? Ron: This gives us a lot of flexibility for data management based on future requirements, size of the data, key changes, or key management requirements. So generally when we use a composite sharding, remember, this is a combination of user-defined partitioning plus the system partitioning put together. That kind of defines a little bit more control over how the shards are, where the data is distributed evenly across the shards. So sometimes based on this type of configuration, we may actually need to split partition and that can cause the shard key values to be now assigned to a new shard space based on the partitioning reconfiguration. So data, this needs to be automatically managed. So when you go ahead and split partition or partitionsets, then the data based on your configuration, based on your identification can automatically move to the new location automatically between those shard spaces. 17:32 Lois: What are some of the key advantages of this for clients? Ron: This provides a huge benefit to clients because it allows them flexibility of better managing their configuration, expanding both configuration servers, the structures for better management of the data and the load. Data is completely online during all of this data move. Since this is being done behind the scenes by the database, it does not impact the availability of the data for anyone who is actually using the data. And then, data is generally moved using transportable tablespaces in big bulk and big chunks. So it's almost like copying portions of the files. If you remember in Oracle database, we could take a backup of big files as image copy in pieces. This is kind of similar where chunks of data can then be moved and then transported if possible depending on the organization of the data itself for those particular partitions. 18:48 Lois: So, what does it look like in practice? Ron: Well, clients now can go ahead and rearrange their data structure based on the adjustments of the partitioning that already exists within the sharded database. The bulk data move then automatically triggers once the customer execute the statement to go ahead and restructure the partitioning. And then all of the client, they're still accessing data. All of the data operation are completely maintained behind the scene. 19:28 Nikita: Thank you for joining us today, Ron. If you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:51 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

27 Aug 202420min

Database Sharding: Part 1

Database Sharding: Part 1

In this two-part episode, hosts Lois Houston and Nikita Abraham are joined by Ron Soltani, a Senior Principal Database & Security Instructor, to discuss the ins and outs of database sharding. In Part 1, they delve into the fundamentals of database sharding, including what it is and how it works, specifically looking at Oracle Database Sharding and its benefits. They also explore the architecture of a sharded database, examining components such as shards, shard catalogs, and shard directors. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! The last two weeks of the podcast have been dedicated to all things database security. We discussed why it's so important and looked at all the new features related to database security that have been released in Oracle Database 23ai, previously known as 23c. 00:55 Nikita: Today's episode is also going to be the first of two parts, and we're going to explore database sharding with Ron Soltani. Ron is a Senior Principal Database & Security Instructor with Oracle University. We'll ask Ron about what database sharding is and then talk specifically about Oracle Database Sharding. We'll look at the benefits of it and also discuss the architecture. Lois: All this will help us to prepare for next week's episode when we dive into each 23ai new feature related to Oracle Database Sharding. So, let's get to it. Hi Ron! What's database sharding? 01:32 Ron: This is basically an architecture to allow you to divide data for better computing and scaling across multiple environments instead of having a single system performing the work. So this allows you to do hyperscale computing and other different technologies that are included that will allow you to distribute your queries and all other requests across these multiple components to be able to get a very fast response. Now many times with this distributed segment across each kind of database that is called a shard allow you to have some geographical location component while you are not really sharing any of the servers or the components. So it allows you separation and data management for each of the shards separately. However, when it comes to the application, the sharded database is totally invisible. So as far as the application is concerned, they connect to a global service, submit their statements. Everything else is managed then by the sharded database underneath. With sharded tables, basically it gets distributed across each shard. Normally, this is done through horizontal partitioning. 
And then the data depending on the partitioning scheme will be distributed across like server A, server B, server C, which are independent servers that are running independent databases. 03:18 Nikita: And what about Oracle Database Sharding specifically? Ron: The Oracle Database Sharding allows you to automate how the data is distributed, replicated, and maintain the kind of a directory that defines the complete sharding scheme, while everything is distributed across many servers with no sharing whether the hardware or software. It allows you to have a very good scaling to be able to scale based on this partitioning across all of these independent servers. And based on the subset and the discrete data configuration, you can go ahead and distribute this data across these components where each shard is an independent data location or data component, a subset of data that can be used, whether individually on its own or globally across all of the shards together. And as we said to the application, the Oracle Database Sharding also looks as a single component. 04:35 Lois: Ron, what are some of the benefits of Oracle Database Sharding? Ron: With Oracle Database, you basically have linear scaling capability across as many shards as you like. And all of the different database configurations are supported with this. So you can have rack databases across the shards, Oracle Data Guard, GoldenGate. So all of the different components are still used to give you all of the high availability and every other kind of functionality that we generally used to having a single database with. It provides you with fault toleration. So each component could be down. It could have its own replicated data. It doesn't affect other location and availability of the data in those other locations. And finally, depending on data sovereignty and configuration, you could actually distribute data geographically across the different locations based on requirements and also data access to provide a higher speed for local data management. 05:46 Lois: I'd like to understand more about the architecture of Oracle Database Sharding. Ron, can you first give us a broad overview of how Oracle Database Sharding is structured? Ron: When it comes to dealing with Oracle Database architecture, the components include, first, your shards. The shards-- each one is an independent Oracle Database depending on the partitioning you decide on a partition key and then how the actual data is divided across those shards. 06:18 Nikita: So, these shards are like separate pieces of the database puzzle…Ok. What's next in the architecture? Ron: Then you have shard catalog. Shard catalog is a catalog of your sharding configuration, is aware of all of the components in the shard, and any kind of replicated object that master object exists in the shard catalog to be maintained from there. And it also manages the global queries acting as a proxy. So queries can be distributed across multiple shards. The data from the shards returned back to the catalog to group together and then sent back to the client. Now, this shard catalog is basically another version of an Oracle Database that is created independently of the shards that include the actual data, and its job is to maintain this catalog functionality. 07:19 Nikita: Got it. And what about the shard director? Ron: The shard director is like another form of a global service manager. So it understands the sharding by being able to access the catalog, knows where everything exists. 
The client connection pool will hit the shard director. In general, communication and then whether it's being distributed to the shard catalog to be able to proxy it, or, if the key is available, then the director can send the query directly to the shard based on the key where the data exists. So the shard can then respond to the client directly. So all of the connection pool and the components for global administration, generally managed by the shard director. 08:11 Nikita: Can we dive into each of these components in a little more detail? Let's go backwards and start with the shard director. Ron: The shard director, as we said, this is like a global service manager. It acts as a regional listener where all of the connection requests will be coming to the shard director and then distributed from that depending on the type of connection that is being used. Now the director understands the topology--maintains the complete understanding of the mapping of the data against the shards. And based on the shard key, if the request are specified on the specific key, it can then route the connection request directly to the shard that is appropriate where the data resides for the direct response. 09:03 Lois: And what can you tell us about the shard catalog? Ron: The shard catalog, this is another Oracle Database that is created for special purpose of holding the topology of the sharded database. And have all of the centralized information metadata about your sharded database. It also act as a proxy. So, if a client request comes in without providing a shard key, then the request would go to the catalog. It can be distributed to all of the shards. So the shards that you actually have the data can respond, but the data can then be combined and sent back to the client. So, it also creates the master copy of all the duplicate tables that are created in the shard database. 09:56 Lois: Ok. I've got it. Now, let's talk more about the shards themselves. Ron: Each shard is basically a database. And data is horizontally partitioned to be placed on each of these shards. So, this physical database is called the shard. And depending on the topology of your sharding, there could be user sharding, for example, where multiple keys are in a single shard or could be a system sharding that based on the hash value data is distributed whether singly or multiple data components across each shard. Now, this is completely transparent to the application. So, as far as application is concerned, this is a single database and the response everything that they do is generally just operating as a single database interaction. However, when it comes to the administrators, each shard is a separate database. Each shard can be managed independently and can have its own standby and other components that is then set up for high availability and management of the data operations. 11:21 Do you have an idea for a new course or learning opportunity? We'd love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects! Visit mylearn.oracle.com to get started. 11:41 Nikita: Welcome back! Let's move on to global services and the various sharding methods. Ron, can you explain what global services are and how they function in a sharded database? Ron: Global services is generally the service that is used for the application to be able to connect to the sharded database. This is provided and supported through the shard director. 
So clients are routed using this global service. 12:11 Lois: What are the different sharding methods that are available? Ron: When it comes to sharding methods that were available, originally we started with the system sharding, which is a hash partition, basically data is distributed evenly across the shards. Then we needed to allow for the user-defined sharding because sometimes it's not about just distributing the data evenly, it's also about controlling where the data goes to be able to control individual query execution based on the keys. And even for data sovereignty and position of the data itself. And then a composite sharding, which provides you kind of a combination of the user-defined sharding and the system hash sharding that gives you a little bit of a combination of the two to better distribute your data across the shard. And finally, sub-partitioning all types of sub-partitionings are supported to provide a better structure of the data depending on the application schema design. 13:16 Nikita: Ron, how do clients typically connect to a sharded database? Ron: When it comes to the client connections, all the client connections are generally routed to the director and then managed from there. So there are multiple ways that clients can connect. One could be a direct connect. With a direct connect, they're providing the shard key in the request. Therefore, the director knowing the topology can route the client directly to the shard that has the data. The proxy routing is done by the catalog. This is when generally a shard key is not provided or data is requested from many shards. So data will then request is then sent to the catalog. The catalog database will then distribute the query to the shards, collects the results, and then combine sending it back to the client acting as a proxy sitting in the middle. And the middle tier routing, this is when you can expose the middle tier to the structure of your sharding. So when the middle tier send the request, the request identifies which shard the data is going to. So take advantage of that from the middle tier. So the data is then routed properly. But that requires exposing the structures and everything in the middle tier. 14:40 Lois: Let's dive a bit deeper into direct routing. What are the advantages of using this method? Ron: With the client request routing, as we talked about the direct routing, this allows the applications to get very quick data access when they know the key that is used for the distribution of the data. And that is used to access the data from the shard. This provides you a direct connection to a shard from the shard director. And once the connection is established, then the queries can get data directly across the shard with the key that is supplied. So the RAC respond for that particular subset of data with the data request. Now with the direct routing again, you get some advantages. The advantage is you have much better performance for capturing subset of the data because you don't have to wait for every shard to respond for a particular query. If you want to distribute data geographically or based on the specific key, of course, all of that is perfectly supported. And kind of allows you to now distribute your query to actually the location where the data exists. So for example, data that is in Canada can then be locally accessed in Canada through this direct access. And of course, when it comes to management of your client connection, load balancing of those connections. 
And of course, supporting all types of queries and application requests. 16:18 Nikita: And what about routing by proxy? Ron: The proxy routing is when queries do not supply the actual sharding key, where identifies which shard the data reside. Or the actual routing cannot be properly identified. Then the shard director will send the request to the catalog performing the work as the proxy. So proxy will then send a request to all of the shards. If any shards can be eliminated, would be. But generally all of the shards that could have any portion of the data will then get the request. The requests are then sent back to the proxy. And then the proxy will then coordinate the data going back and forth between the client. And the shard catalog basically hands this type of data access to the catalog to act as the proxy. And then the catalog is-- the shard director is no longer part of the connection management since everything is then handled by the shard catalog itself. 17:37 Lois: Can you explain middle tier routing, Ron? Ron: This generally allows you to use the middle tier to define which shard your data is being routed into. This is a type of routing that can be used where the data geographically have some sovereignty or the application is aware of the structure. So the middle tier is exposed to the sharded database topology. So understand exactly what these components are based on the specific request on the shard key, then the middle tier can then route the application to the appropriate location for the connection. And then the middle tier, and then the either one shard or the subset of shard will maintain those connections for the data access going back and forth since the topology is now being managed by the middle tier. Of course, all of the work that is done here still is known in the catalog, will be registered in the catalog. So catalog is fully aware of any operations that are going on, whether connection is done through middle tier or through direct routing. 18:54 Nikita: Ron, can you tell us how query execution and DDL operations work in a sharded database? Ron: When it comes to the query execution of the application, there are no changes, no requirement for identifying specifically how the data is distributed. All of that is maintained behind the scene based on your sharding topology. For the DDL, most of your tables, most of the structures work exactly the same way as it did before. There are some general structures that are associated to the sharded database that we will originally create and set up with mapping. Once the mappings are configured, then the rest of the components are created just like a regular database. 19:43 Lois: Ok. What about the deployment process? Is it complicated to set up a sharded database? Ron: The deployment for the sharded database is fully automated using Terraform, Kubernetes, and scripts that are put together. Basically what you do is you provide some of your configuration information, structure of your topology through an input file, like a parameter file type of a thing. And then you execute the scripts and then it will build everything else based on the structure that you have provided. 20:19 Nikita: What if someone wants to migrate from a non-sharded database to a sharded database? Is there support for that? Ron: If you are going to migrate from a regular database to a sharded database, there are two components that are fully shard aware. First, you have the Shard Advisor. 
This can look at your current structure, the schema, how the data is distributed. And the workload and how the data is used to give you recommendation in what type of sharding would work best based on the workload. And then Data Pump is fully aware of the sharding component. Normally, we use Data Pump and load into each of the databases individually on its own. So instead of one job having to read all the data and move data across many shards, data can be loaded individually across each shard using Data Pump for much faster operations. 21:18 Lois: Ron, thank you for joining us today. Now that we've had a good understanding of Oracle Database Sharding, we'll talk about the new 23ai features related to this topic next week. Nikita: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Until next week, this is Nikita Abraham… Lois: And Lois Houston signing off! 21:45 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

20 Aug 202422min

Database Security: Part 2

Database Security: Part 2

In this episode, hosts Lois Houston and Nikita Abraham continue their exploration of Oracle Database 23ai's database security capabilities. They are joined once again by Ron Soltani, a Senior Principal Database & Security Instructor, who delves into the intricacies of the new hybrid read-only mode for pluggable databases, the flexibility of read-only users and sessions, and the newly introduced developer role. They also discuss simplified schema-level privileges and the integration of Azure Active Directory with Oracle Database. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! In our last episode, we discussed database security, why it is so important, and all its different components. Today, we're going to be continuing that conversation by looking at all the new features related to database security that have been released in Oracle Database 23ai, previously known as 23c. 00:59 Lois: And we're so happy to have Ron Soltani back as our guide. Ron is a Senior Principal Database & Security Instructor with Oracle University. Hi Ron! Thanks for joining us again! We have a list of the new features related to database security and we'd like to ask you about them one by one, starting with the new mode for pluggable databases. What's that about? 01:21 Ron: With the hybrid read-only mode for pluggable database, the database could be in the read/write mode or read-only mode, depending on the user that is actually connected. So one of the things we have to realize is the regular read-only mode has one major issue. The major issue is everything, including data dictionary, including SysAux and all of the other elements are also locked up read-only. So we cannot do any database maintenance. We cannot collect statistics to monitor anything. So you pretty much have to hard tune everything for the load you want and maintain everything. And this happens in many warehouse environments, in environments where the data itself is generally loaded. And then just heavily read. So it requires to be in a read-only mode to protect it. So with a hybrid read-only mode, if you are a local user in the PDB, even a PDB administrator-- so I can create a local user in the PDB as a PDB administrator. And grant that PDB administrator even sysdba privilege. But once the PDB is open hybrid read-only mode, even for that user, the PDB is read-only. However, if a common user connect, who is, as you know, is a CDB user. Generally, CDB-level privileges granted and considered CDB administrators. If they connect to the PDB, then the PDB is actually in read/write mode. So now, they can take snapshots. 
03:17 Nikita: So you don't have to flip back and forth between read-only, read/write, read-only, read/write… Ron: Right, because if the database is in read/write mode, to go to read-only we generally have to shut the database down and then open it read-only. From read-only we can go to read/write, but to get back to read-only, we have to shut down again. Lois: Which was the issue with the normal read-only on the pluggable database, right? I'm glad that's been made easier. Ok… Moving on to the next new feature, which is read-only users and sessions. What can you tell us about this one, Ron? 03:51 Ron: As we previously discussed, you can put the PDB in the hybrid read-only mode, but then the PDB is read-only for all local application users. However, let's say we have an environment with multiple application users: one needs to be able to perform maintenance and updates, while the other sessions are just reading the data. To protect against security issues, and for better performance and operational management, we want to set those sessions up as read-only. Setting read-only at the pluggable database level can be too coarse for what the application needs. So the read-only users and sessions feature gives you the capability of setting read-only for a particular user: when that user connects, all the user can do is read. We do a lot of testing, for example, and we have users that may have read/write privileges in the test environment; when we then want to perform other operations, we would have to take privileges away, set things to read-only, then go back and change them to read/write again. Performing all of those different types of tests, even during development, has always been an issue. So having the granular capability of managing this at the user or session level gives us a major benefit: we can manage all application needs without sacrificing security and without the extra steps that administrators would otherwise have to perform. 05:33 Nikita: Yeah, this gives you a lot of flexibility and you don't have to keep temporarily changing privileges or configuring specific types of sessions. It's also an easy way to control user behavior, right? Ron: An application, as we said, has a schema owner, and today we want that to be a schema-only user that nobody connects as. But then we have multiple schema users: one may be used for performing updates, one for administration, and one for read-only access. This feature gives me a mechanism to manage that. Or, if a particular operation needs to run and, for security purposes, that particular session needs to be set to read-only, it gives us control over that too. And in the cloud environment, this can be a very, very good component for better managing all of the security levels, where you can enable very fine-grained control while supporting all the functionality of the application.
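
Here is a short sketch of what that granularity might look like in SQL (the account name is invented; check the exact syntax in the 23ai documentation before relying on it):

  -- Make an existing application account read-only, then revert it:
  ALTER USER report_app READ ONLY;
  ALTER USER report_app READ WRITE;

  -- Restrict just the current session instead of the whole user:
  ALTER SESSION SET READ_ONLY = TRUE;
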
06:39 Lois: Ok. So, can you tell us about this new developer role in the database? Ron: If we think about application administration, we usually create a schema owner and start by granting that schema owner the RESOURCE role. With the RESOURCE role, they can create simple objects. But when you design an application, you need to implement it, test it, and then deploy it, and today there are many complex objects that can be used at the application level to manage the application. So traditionally, we grant the RESOURCE role to the schema owner, then wait until they complain that they don't have the privilege for a certain object they want to create, and then grant them privileges as needed. That used to be the way security worked. But today, since we have schema-only accounts, where we only enable the account when we want to do any schema work and it is locked the rest of the time so the schema is protected, giving the schema owner the developer role, which has all of those privileges in it, should not cause any security issue when managed properly. It provides all of the privileges they need to perform their work, including for the many complex schema structures like analytical views, hierarchies, dimensions, and data-specific types that you can create. And many of these privileges are not assigned through a regular privilege assignment; some of them are assigned through procedures. 08:21 Lois: And could you give us some examples of how this feature could be used? Ron: There are many different ways of granting all of these granular privileges. At the time we perform development of the schema, we don't really know what privileges we will need. As we said, there are many packages we may be able to use to create complex objects, and we would gradually have to acquire privileges on executing those packages in order to use them. At the time we actually build the application, we may not even know we're going to use many of these objects until later, when it becomes evident that one of them may be a better structure to represent what we want. Having to continuously deal with these types of changes can become extremely cumbersome and tedious, and it delays operations. Especially now that the application schema owner can be secured, we can grant this developer role to the schema owner and very quickly give them all the privileges needed to manage their schemas and all the complex objects for that schema's operation. The role is called DB_DEVELOPER_ROLE, and just like any other role, you connect as an administrator and grant DB_DEVELOPER_ROLE to the schema owner. We no longer need to grant the RESOURCE role and all the other pieces, because everything is included in the DB_DEVELOPER_ROLE.
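
In practice the grant itself is a one-liner. DB_DEVELOPER_ROLE is the documented 23ai role name; the schema owner name below is made up:

  -- Instead of RESOURCE plus a trickle of follow-up grants:
  GRANT DB_DEVELOPER_ROLE TO hr_app;
  -- The role bundles the system and object privileges a developer
  -- typically needs; check its contents in your release with
  -- a query against ROLE_SYS_PRIVS before granting it broadly.
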
10:01 The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community. Visit www.mylearn.oracle.com to get started. 10:28 Nikita: Welcome back! Ron, how have schema-level privileges been simplified in 23ai? Ron: To understand this, let's first review privilege assignment in Oracle Database. You can be granted a privilege at an object level, so you can perform certain work on a particular object. However, let's say I have an app user account that has to read from multiple objects within a particular schema. Granting at the object level is too low, because I have to go to each object and assign the privileges needed on that particular object to the user. Or we had system privileges, for example, granting CREATE ANY TABLE to a user. The problem with that is the privilege is not limited to the schema I want you to work with: it goes across all the schemas in the database. Not the database's own schemas, of course, those are protected, but all user schemas. 11:34 Lois: Right. So, you're getting that privilege on other schemas that you may not really need that privilege for... Ron: So now that gap is closed by the schema-level privilege, which allows you to grant the same "any" privilege, but on all objects of a particular schema rather than across all schemas. This lets us manage schemas much better. We can have schema user accounts with different levels of privileges on all the objects they need for the type of work they perform, without having to granularly assign each one of those privileges. We used to create many different roles with the different privileges needed and then try to control users by granting them those roles; this is much simpler with the schema-level privilege.
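
The 23ai syntax sits exactly in the gap Ron describes; the schema and grantee names below are invented:

  -- Too narrow: one object at a time
  GRANT SELECT ON hr.employees TO app_reader;
  -- Too broad: every user schema in the database
  GRANT SELECT ANY TABLE TO app_reader;
  -- 23ai middle ground: every table in one schema, including
  -- tables created in it later
  GRANT SELECT ANY TABLE ON SCHEMA hr TO app_reader;
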
12:34 Nikita: Ron, I want to ask you about the new feature on creating audit policies at the column level. Ron: If you remember, in the past we talked about creating audit policies with the old system, where you would identify what to audit but then had to manage a whole bunch of parameters, and protecting the audit trail, even from the administrator, was a major issue. In Oracle Database 12c, Oracle added unified audit, which protects the audit schema: even administrators cannot access it. You manage it through privileges that are assigned specifically to the users who will manage the audit. It also allows you to audit Oracle operations and tools like Data Pump and RMAN. So you can create a really secure audit environment, monitoring everything in the database using unified audit, and then maintain and manage those audits. One important aspect of auditing is generating the minimal amount of audit records, so that the audits can actually be reviewed; if you generate too many, it is very hard to review them, whether with an automated system or with human reviewers. Furthermore, if we wanted to audit specific columns for different operations like SELECT or DML, we would have had to use row-level security and build additional policies to individually monitor those columns, which is not simple to use or manage. And those audit records end up in different tables, so maintaining all of them and relating them to one another has always caused major issues. The benefit of adding column-level auditing to the normal unified audit policies is that you can now build your audits not at the table level but only for a particular column. This reduces the false positives that get generated: if I audit UPDATE on a table, an update that touches no sensitive column still generates an audit record. But if I audit UPDATE on the salary column, an audit record is generated only if salary is updated. So that gives me just the audits that are needed, without the additional false positives that were generally generated before.
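
A sketch of such a policy follows; the policy and object names are invented, and the column syntax should be confirmed against the unified auditing documentation:

  CREATE AUDIT POLICY salary_change_pol
    ACTIONS UPDATE(salary) ON hr.employees;
  AUDIT POLICY salary_change_pol;
  -- An UPDATE that does not touch SALARY no longer generates a record.
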
15:08 Lois: Ron, can you talk to us about the management of authorization for Unified Audit administration, especially when using Database Vault? Ron: First, as we know, Unified Audit has the audit admin privilege and the audit viewer privilege. If you want to create, administer, and manage all of the audit information, including audit purging and time periods, you must have the audit admin privilege. If you want to read the audits and generate reports from them, you must have the audit viewer privilege. Now, we also have Oracle Database Vault. Database Vault uses a kind of row-level security, but not on end-user data: it applies that security and administration to the Oracle data dictionary. It allows you to control when a particular object can be used and at what level, and it gives you complete control over how the database and its objects are used and become available to other users, including other administrators and even schema owners. In the past, when Database Vault was enabled, Unified Audit was still managed outside of it, which was awkward: one of the major security functions sat outside the main security administration utility of the database. Now, Unified Audit has been incorporated into Database Vault, so you can use Database Vault to set up the privileges and configuration for the authorizations required to manage Unified Audit. This also controls all the high-level users, including SYS, SYSTEM, and anyone who may have the DBA role or other high-level privileges. So we can enable Database Vault and then manage the authorizations for Unified Audit through Database Vault; all authorization administration is unified under the same security tool. 17:28 Nikita: The final new feature to discuss is the integration of Microsoft Azure Active Directory with the Oracle database environment. What can you tell us about it, Ron? Ron: This has been requested by many clients who use other platforms and directories and need to access either Oracle Cloud Infrastructure, where their databases are running, or Oracle databases in a local environment. If you remember, originally we had the capability of mapping database users into Oracle's directory service, so that the users' roles and privileges could be centrally managed and the user did not inherit any privileges in the database: connect directly to the database and you have no privileges; connect properly through the directory and everything is enabled. Then, in Database 18c, Oracle created centrally managed users, CMUs, where we could map a third-party Active Directory and use it for authentication and user administration when connecting to the Oracle database. However, many of our clients use Microsoft Azure Active Directory, and they wanted to integrate that particular directory into the Oracle environment, especially the Oracle OCI Database as a Service environment. To make that possible, Oracle has built multiple components that allow this to be configured and used, so the client can use this Active Directory for their user administration centrally. 19:20 Lois: With that, I think we've covered all the new features related to database security in 23ai. Thanks so much for taking us through all of them and giving us some context. Nikita: Yeah, it's really been so helpful. To learn more about these new features and watch some demonstrations on them, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 19:54 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

13 Aug 2024 · 20 min

Database Security: Part 1

Join hosts Lois Houston and Nikita Abraham, along with Senior Principal Database & Security Instructor Ron Soltani, as they dive into the critical topic of database security. In the first of a two-part series on database security in Oracle Database 23ai, they discuss the importance of protecting data against external and internal threats, common security risks like phishing and SQL injection, and the principle of least privilege. Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and joining me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In case you missed last week's episode, we've begun a new season of the podcast, talking about all the new features in Oracle Database 23ai. We covered blockchain tables and their new features, and today's episode is going to be the first of two dedicated to database security. Nikita: Right, Lois. So, in Part 1, we want to set the scene, so to speak, by looking at an overview of database security so that when we discuss some of the new features, we'll know exactly where they actually fit into the process. Joining us for these two episodes is Ron Soltani. Ron is a Senior Principal Database & Security Instructor with Oracle University. 01:16 Lois: Hi Ron! Thanks for being with us today. To start off, let's discuss the importance of database security. Why is database security so critical today? Ron: Security requirements describe the need to keep things private and to protect against threats and against data destruction. Today, data is also global: there is consolidation of data, there is globalization, and there are data sourcing and data location concerns, where the data is actually located, with rules imposed by different governments and guidelines that enforce a certain type of security administration on the data. And finally, there are many companies and organizations that publish guidelines or rules that must be followed, for which we must set up security and demonstrate compliance. 02:24 Nikita: Ron, what are some of the common security risks that databases face? Ron: Security risks can include external threats. These could be unauthorized users trying to use phishing to get privileged user information and get in as a privileged user to do whatever damage they want. Then there's the denial-of-service attack, one of the most common attacks out there, where the attackers target a component, like the listener in a database, and cause a situation where the listener can no longer establish connections to the database. So now no client can connect to the database to get data. That is the denial-of-service attack.
Then there's unauthorized access to the data, which again is generally achieved through phishing or sometimes even SQL injection. SQL injection lets someone insert a SQL statement into the application where it's not expected, where it can then be executed in the database and return unwanted data to the attacker. 03:42 Nikita: Sorry, can you explain that? Ron: For example, when you go to Google, you want to run a search, and they expect you to type something like "meaning of" a particular word. Now, what if I knew the structure of the data organization in Google, and instead of just putting in "meaning of" whatever word, I actually plugged in a SQL statement that then gets passed along to the system to be executed? If the components exist and the statement falls within the privileges of what is being executed, it could expose information to me. That's the idea behind this type of attack. 04:24 Lois: Ok. So, those are external threats. But, could you also have internal threats? Ron: An internal threat could be abuse by someone who is privileged, or sabotage of the system and the data. It could be data complexity creating an environment where data is not being properly secured, or even accidental damage, which is a security issue too. And if there is damage, we need to be able to perform recovery, so we create backups, and that recovery information must itself be properly secured. There's also obstruction: someone blocking access to the data or causing issues with it. Then there are external threats that come in through internal abuse; internal abuse can open the door to let external threats in. Now, the final type of security risk can come from partners who have the privilege to load or access data. For example, I may sell a particular product, but the product description actually comes from the product distributor. 05:47 Nikita: Yeah, so they have access to push that product information into your system. So, what are the typical points of attack for a database? I'm familiar with phishing. Ron: People send you emails or do something to extract information from the pieces and things that come back. This is one of the reasons why, for many operations, we return deliberately vague error messages. In Oracle Database, if you don't have privileges on a table and you try to select from it, we tell you the table or view does not exist. This way, you don't know if it's a table, you don't know if it's a view, and as far as you know, it doesn't exist: the name you have does not correspond to any particular data. 06:32 Nikita: That's clever! Ron: If we instead told you that you don't have the privilege, you would now know that a table by that name exists, and you would just have to find a way of hacking into it. So this is basically what phishing means: extracting different pieces of information through different channels and putting them together. Then, in the database, we have well-known privileged accounts that, if not protected, can be a vulnerable access point. There are back doors into the database, for example somebody getting into the operating system DBA group and then connecting to the database without a user ID and password; that's why we have to protect every layer. There can be debug code that reveals how the system operates, cross-site scripting between the different data and operations that go on, and, as we talked about, SQL injection.
07:28 Lois: Can you dive a little deeper into SQL injection, Ron? Ron: With SQL injection, you have to understand that, in general, it means somebody knows the structure of something, the way the application operates, and is able to inject a SQL statement where the application would normally expect a condition, some parameters, or some other information. That injected SQL becomes part of the application's statement and is submitted to the database. Now, we need to understand that SQL injection is not really about the person, and it's not generally about your overall database configuration. The most important aspect of SQL injection is the session that is actually doing the work. For example, say I am a DBA and I am going to collect statistics for a table. If I connect AS SYSDBA to collect those statistics and somebody hacks into my session and injects a SQL statement to drop the database, the database is gone, because the session has the SYSDBA privilege. But suppose instead I have a user that only has the CREATE SESSION privilege plus the right to execute one script, and in that script I write the statement to collect statistics, giving the script only the privileges needed to collect stats. Now I can connect as that user with minimal privileges and just execute the script. If anyone injects SQL into that session, it will never be executed, because the session has no other privileges. This is the important thing to understand about SQL injection: what matters is what happens at the session level. And many of the security elements we will see, like read-only sessions and the hybrid read-only PDB, relate to this type of SQL injection or abuse. 09:30 Lois: Yeah, we are looking forward to talking through those new features in the next episode. Ron: So those are the common vulnerabilities that can be exploited, along with any of the users that are part of operations where strings can be built up and supplied into the middle of statements.
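
Ron's statistics example translates into SQL roughly like this (all names are hypothetical; it's a sketch of the pattern, not a hardened setup):

  -- A minimally privileged account that can only log in:
  CREATE USER stats_runner IDENTIFIED BY "A_Strong_Password_1";
  GRANT CREATE SESSION TO stats_runner;

  -- A definer's-rights procedure owned by a privileged schema does the real work:
  CREATE OR REPLACE PROCEDURE sec_admin.gather_hr_stats AUTHID DEFINER IS
  BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(ownname => 'HR', tabname => 'EMPLOYEES');
  END;
  /
  GRANT EXECUTE ON sec_admin.gather_hr_stats TO stats_runner;
  -- SQL injected into a stats_runner session has no privileges to act on.
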
09:56 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You'll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit www.mylearn.oracle.com to get started. 10:25 Nikita: Welcome back! One of the concepts I wanted to ask you about, Ron, is the principle of least privilege. Can you explain what it really means? Ron: The principle of least privilege means that the work that needs to be done should be done with minimal privileges. 10:40 Nikita: But, we've always thought about that, right? Giving a user minimal privileges… Ron: Well, back in the old days, we used to execute everything as the schema owner, so we had privileges on all the data. Then we said, OK, let's create schema users and only give them, say, a read privilege, so they can only do the type of work they need to do. Which is fine, but at the same time that can be very complex: now I need a lot of different users. The principle of least privilege is broader than that. It means only installing the software that is required; only enabling the machines and segments that are going to be used; having proper operating-system-level users and privileges configured for all of the software installed at the operating system level; having proper administrator accounts that are properly maintained; and setting up a privileged user account for each operation, so that when we do maintenance and database administration, we are not creating a very highly privileged session. That's why the separate administrative privileges were introduced in Database 12c and up, like SYSBACKUP, SYSDG, and SYSRAC, so you don't inherit the full SYSDBA privilege to do that work; you only have privileges for what you need. And, of course, limit users' access to the particular objects and tasks they need. However, as I mentioned, this is not just about the user level. This is also about the session level. If I'm going to do maintenance and I'm connecting as the schema owner, and somebody injects a SQL statement to drop a table, the table is gone. So that's why it is very important for us to have control over how sessions can operate within the database.
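
Ron's point about separate administrative privileges is easy to see in SQL. SYSBACKUP, SYSDG, and SYSRAC are real administrative privileges from 12c onward; the account name here is invented:

  -- Grant only the backup-shaped slice of administration:
  CREATE USER backup_op IDENTIFIED BY "A_Strong_Password_1";
  GRANT SYSBACKUP TO backup_op;
  -- backup_op can now run backup and recovery tasks
  -- (connecting AS SYSBACKUP) without inheriting full SYSDBA rights.
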
12:34 Lois: Right, so, what about the strategy of defense in depth? Ron: Defense in depth means we have to strengthen and apply security at every level, whether it's at the operating system, in the database, in the application, or in the network. We have to have policies defining all the different security levels. Most important, train users, so there are no avoidable mistakes and damage. Harden every component, including the operating system. Set up proper firewalls. Set up proper network security, like the Oracle firewall that protects against unwanted SQL statements by comparing each SQL statement to a whitelist of acceptable statements. And then use the other database security features, like VPD, auditing (as we will talk about), and other components, to give you an overall very secure environment. 13:35 Nikita: Ron, what are the fundamental aspects of managing security only within the database… not including the operating system or the application? Ron: First, we have to have confidentiality. Confidentiality means making sure all of the data is properly secured at the data level, both at the storage level and in the database during data usage, and we have many different ways of managing confidentiality. Number one: properly creating and maintaining users, with proper passwords and proper authentication. Then, setting up authorization, because privileges alone may not be enough: if I give you the SELECT privilege, you can see every column and every row. So I may need row-level security, data redaction, data masking for duplication, and other mechanisms to help us manage even a subset of the data. 14:35 Lois: Ok… so that's confidentiality. What's next? Ron: Data integrity means making sure data is not destroyed, whether it is at rest in the database, in memory, in a data file, in a backup, in exports, or in transit on the network. We usually apply encryption and checksumming, not only to protect the data but also to validate it and make sure it's not corrupted. Next is data availability. Today, especially, we are a 24/7 operation. And remember, we talked about the denial-of-service attack on a database, which usually targets the listener, because if the listener is crashed, nobody can connect. We have to utilize the available tools and components: RAC, to have multiple instances in case a particular host crashes and I lose an instance; Data Guard, in case my storage or the whole database crashes; real-time PDB management with duplication, having PDB standbys that are maintained and managed behind the scenes; PDB snapshots, which preserve data at a point in time that I can use for restoring data to that particular point; and backup and recovery through RMAN or other backup and recovery processes, so that if data is damaged, I can restore and recover it. And finally, auditing. Auditing historically was always seen as an after-the-fact measure. 16:09 Nikita: That's what I was wondering… You only see what's going on after something happens, right? Ron: It can also be a deterrent. When people know they are being audited, they're more careful and make fewer mistakes, and of course they try not to do anything that would get them caught. And today, auditing can be set up in a way that doesn't just record what is going on; it can actually help us better secure data and respond much faster. Now, the problem with auditing has always been the overhead. That's why unified audit, which has much lower management overhead, can give us extremely detailed audits. And the new features allow us to reduce the amount of audit generated even further, by auditing only at the column level, with better protection for those audit records. By the way, in the older days, most auditing was done in the application, because we never knew who the end user of the app was. But today, with Active Directory mapped into the database and information passed between the two, all audits can come back centrally to the database. 17:21 Lois: So, to wrap up today's conversation, Ron, can you just summarize database security for us? All the things we need to think about. Ron: Database security starts with making sure, number one, that our network is secure and that we access the data through a very secure connection coming in from the user. If required, you can have a three-tier environment where the clients go through an external firewall to get to the middle tier, then from the middle tier through an internal firewall to get to the database; or, for direct access such as remote administration and operations, set up an equally secure network path coming in. Then, set up proper authentication and authentication management, configure detailed access control, and set up multiple levels of security for data access, not just at the table level but even at the row and column level. Build complete data confidentiality by adding storage encryption and managing the data even on components sitting outside the database; of course, Oracle has components that can manage some of those for you, such as RMAN backups. And to complete that data confidentiality, you also add in, as we said, efficient auditing that can describe any issue and tell us where the problem is and how it happened. If we set up an audit system that is very focused, we can even tie it to triggers and notifications, so issues are responded to very quickly. Because the problem with audits has always been that there are just way too many of them, so nobody ever reviews them to see exactly what has happened, and many vulnerabilities may go undetected until major damage happens. And you know how common that is out there when you hear in the news that such-and-such company was broken into and so much data was stolen.
Well, if proper security had been set up, and if, while the network was being hacked, a proper alert system and automated processes had been configured to catch it through real-time auditing, then corrective action could have stopped a lot of that damage. 20:04 Nikita: Thanks for that wonderful overview, Ron. In our next episode, we're going to go through each of the new security features and try to understand how Oracle is tightening the screws around security. Lois: And if you want to learn more about what we discussed today, visit mylearn.oracle.com and search for the Oracle Database 23ai New Features for Administrators course. Until next week, this is Lois Houston… Nikita: And Nikita Abraham signing off! 20:31 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

6 Aug 2024 · 21 min

Blockchain Tables

In this episode of the Oracle University Podcast, hosts Lois Houston and Nikita Abraham kick off a new season with a deep dive into the latest features of Oracle Database 23ai. Joined by Bill Millar, a Senior Principal Database & MySQL Instructor, they explore the new enhancements to blockchain tables, such as row versions, user chains, delegate signer, and countersignature. So, if you're curious about harnessing the power of blockchain tables for your database needs, this is the perfect episode for you! Oracle MyLearn: https://mylearn.oracle.com/ou/course/oracle-database-23ai-new-features-for-administrators/137192/207062 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Thank you for joining us as we begin a new season of the podcast. For the next few weeks, we're going to explore all the new features in Oracle Database 23ai, previously known as 23c. These episodes will be great for you if you're a database administrator, a developer, or even a database architect. Lois: Right Niki, and while anyone can listen to the podcast, you're probably going to get the most out of this season if you have prior knowledge or experience with the previous versions of Oracle Database and have used SQL to manage Oracle Databases. Throughout this season, we'll discuss new features in database availability, architecture, manageability, performance, and security. 01:21 Nikita: Exactly. Today, we're diving into the world of blockchain tables and the new features introduced. First, we'll try to get an overview of blockchain tables, which were introduced in 21c. Then, we'll discuss the new features in 23ai, including row versions, user chains, delegate signer, and countersignature. Lois: So, let's get started. To take us through all this, we are joined today by Bill Millar. Bill is a Senior Principal Database & MySQL Instructor with Oracle University. Hi Bill! Thanks for joining us. To begin, what is a blockchain table? 01:59 Bill: Well, a blockchain table provides the means for recording transactions where only insert operations are allowed, and rows are protected or restricted based on time, as defined when the table is created. The chaining algorithm makes the rows tamper-resistant. 02:16 Nikita: Bill, take us through some common attributes of a blockchain table. Bill: They are append-only, which protects the existing data in the table. They're made tamper-resistant with a hashing algorithm. And optionally, rows can be digitally signed; in Blockchain Platform transactions, signatures are mandatory. Transaction logs, audit trails, and compliance information can benefit the most from using blockchain tables. 02:44 Lois: Bill, let's talk for a minute about blockchain tables being tamper-resistant. What makes a blockchain table tamper-proof?
Bill: Well, with these insert-only tables, each row is chained to the previous row, except the first row, because there's nothing to chain it to. So as each row is added, it is chained to the previous row, which is chained to its previous row, and so on. Rows are linked when the transaction commits; we don't link them beforehand, because the transaction might roll back. 03:13 Nikita: Do we have some considerations or guidelines for managing blockchain tables? Bill: A few. One, they may be partitioned. You can specify retention at the table level, for the blockchain table itself, using the NO DROP clause. And you can also define retention at the row level when you create the blockchain table. So you're defining a retention period for the table itself and, optionally, a retention period for the rows. 03:41 Nikita: And are there any restrictions when using blockchain tables? Bill: There are several restrictions for the blockchain table. Some data types are not supported: ROWID, LONG, TIMESTAMP WITH TIME ZONE, and so forth. And some operations are not allowed. A few of them are updating rows, merging rows, truncating, dropping partitions, and converting a regular table to a blockchain table or vice versa. So you do want to make sure that you understand the restrictions if you decide that you're going to use a blockchain table. There are some things you can alter in a blockchain table. For one, you can modify a retention period: it cannot be reduced, but you can make it longer.
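
Bill's description maps to DDL along these lines. The table itself is made up, and the clauses shown follow the 21c syntax for blockchain tables; verify the details for your release:

  CREATE BLOCKCHAIN TABLE bank_ledger (
    account_no   NUMBER,
    entry_date   DATE,
    amount       NUMBER
  )
  NO DROP UNTIL 31 DAYS IDLE             -- table-level retention
  NO DELETE UNTIL 365 DAYS AFTER INSERT  -- row-level retention
  HASHING USING "SHA2_512" VERSION "v1";
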
04:30 Lois: Ok, I think I've got it. So, coming to the 23ai features, what's new with blockchain tables? Could you give us a brief overview of them before we dive into each one? Bill: So we have the user chain, a chain of rows based on up to three user-defined columns; previously, the system defined the chain. The row versions feature allows me to have multiple historical views of a row, maintained within the blockchain table. We have the log history: the Flashback Data Archive history tables are now blockchain tables. There's also the countersignature: you can request one at the time of signing a row, and that signature metadata is stored within the row, in some hidden columns. And then you can also have a delegate signer, an alternate user who is allowed to sign rows inserted by the primary user. 05:31 Nikita: What are some advantages of using blockchain tables? Bill: One benefit of blockchain tables is that the fraud protection is transparent: users don't know it's there as they insert data, and you can detect tampering by verifying the rows in the blockchain table. The verification routines are not part of the data itself, which makes validation more secure. And it is easier than distributed blockchains, where multiple chains with identical data have to be maintained across multiple different platforms. 06:03 Lois: And what about benefits specifically from the 23ai new features? Bill: They allow increased flexibility: user-defined chains instead of relying only on the system-defined ones, guaranteed row versioning, the blockchain log history to record and protect changes, and the countersignature which, along with the digital signature, can help protect it even more. Note that you must specify a version; there is no default version, so you must specify either version 1 or version 2 when you create the table. Version 1 is the version from 21c. You have to specify version 2 if you're going to take advantage of the new features in 23ai. The versions differ in how many hidden columns they reserve to maintain the blockchain information: version 1 uses 20 additional columns, whereas a version 2 blockchain table uses 40 additional columns, so with version 2 the number of columns left for your own use is reduced by 40 instead of 20. Even though version 2 uses more columns for the hidden information, it has its benefits: it allows you to add and drop columns, you can drop partitions, you get distributed transactions, and you can use it with replication, such as Oracle GoldenGate and Active Data Guard. 07:32 Nikita: Are there restrictions when it comes to using blockchain tables? Bill: Again, make sure that you understand the requirements of your tables when determining if a blockchain table is going to be appropriate for your application or not. XMLType is not supported. You can't truncate. It doesn't work with sharded tables, or with certain features such as Automatic Data Optimization, Virtual Private Database, and Label Security. And you cannot use the DBMS_REDEFINITION package on a blockchain table. 08:10 Are you planning to become an Oracle Certified Professional this year? Whether you're a seasoned IT pro or just starting your career, getting certified can give you a significant boost. And don't worry, we've got your back! Join us at one of our cert prep live events in the Oracle University Learning Community. You'll get insider tips from seasoned experts and learn from other professionals' experiences. Plus, once you've earned your certification, you'll become part of our exclusive forum for Oracle-certified users. So, what are you waiting for? Head over to www.mylearn.oracle.com and create an account to jump-start your journey towards certification today! 08:53 Nikita: Welcome back! Let's get into each of those 23ai new features, Bill. What can you tell us about the row versions feature? Bill: The row version option allows you to have multiple historical views of a row corresponding to a set of user-defined columns. Previously, only the system would define the columns. When you create a blockchain table with row versions, the system automatically creates a view that lets you see row-version information for that table. The view has the same columns as your table, and its name is your table name with _last$ appended. It also has additional columns, one of which is the last row version; this lets you see the latest version of each row. In order to use row versions, you must specify the row version clause when you create the table. It is supported with or without a primary key, although the primary key columns must not be identical to the set of row version columns. There are some restrictions: you must specify a row version name; three columns is the maximum (you don't have to have three, you can have one, two, or three); the columns are restricted to the types NUMBER, CHAR, VARCHAR, and RAW; and it cannot be used with version 1 blockchain tables, meaning blockchain tables as they came out in 21c. It's a 23ai feature, which is why a version 2 table is required. So you specify the row version clause, give it the required row version name, and then list up to three different columns that you want to use.
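
Here's what that might look like in DDL. The names are invented, and the shape of the ROW VERSION clause is our reading of Bill's description; check it against the 23ai SQL reference:

  CREATE BLOCKCHAIN TABLE account_balances (
    account_no NUMBER,
    balance    NUMBER
  )
  WITH ROW VERSION balances_rv (account_no)  -- a name, then up to 3 columns
  NO DROP UNTIL 31 DAYS IDLE
  NO DELETE UNTIL 365 DAYS AFTER INSERT
  HASHING USING "SHA2_512" VERSION "v2";     -- row versions need a v2 table
  -- The generated ACCOUNT_BALANCES_LAST$ view then exposes the latest
  -- version of each row.
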
10:58 Lois: What about user chains? How do they enhance blockchain tables? Bill: With user chains, previously, again, only system chains were available: the system decided how to chain the rows and which columns to chain them with. Now a user chain can be defined by the end user, on one, two, or three columns; you decide which columns the chain applies to. Again, only the column types we just talked about are supported: NUMBER, CHAR, VARCHAR, and RAW. But with user chains, being able to identify the columns yourself adds the flexibility to let your applications use this tamper-resistant table naturally. The user chain is defined when you create the blockchain table, so you decide at creation time what the chain will be. Once created, any rows that have the same chain values will be grouped together. For example, take a banking application: I have an account, I make deposits, I make withdrawals, I do balance inquiries. Because those are all based on the same field, the account, the table groups them together within the chain, and it applies the hashing value to the columns that are stored within that chain.
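
A sketch of Bill's banking example follows; the USER CHAIN clause shape is our assumption from his description, so confirm the exact syntax in the documentation:

  CREATE BLOCKCHAIN TABLE bank_txns (
    account_no NUMBER,
    txn_type   VARCHAR2(10),
    amount     NUMBER
  )
  WITH USER CHAIN account_chain (account_no)  -- rows with the same
                                              -- account_no share a chain
  NO DROP UNTIL 31 DAYS IDLE
  NO DELETE UNTIL 365 DAYS AFTER INSERT
  HASHING USING "SHA2_512" VERSION "v2";
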
12:27 Lois: Bill, can you explain the blockchain table delegate signer feature? Bill: A signature is something you can optionally apply to a row to provide additional security against tampering. If you do use it, it requires a digital certificate when adding a signature to a row; signatures are validated using that digital certificate and a signature algorithm. The delegate is an alternate signer that can be used instead of, or in addition to, the user's own signature. So when I, as the user, create a row, I can add my signature and my certificate to it, or I can now have a delegate do that for me, so the row is digitally signed by the delegate instead of the user. That way it is still verified. Maybe the users are not able to sign the rows they created, but they trust the delegate. 13:32 Nikita: And the last new feature to discuss is the blockchain table countersignature. Bill: A countersignature provides an additional guarantee that the data has been securely stored within the table itself. You request a countersignature at the time of signing a row. The operation records the signature metadata in that row, and the countersignature and the signed bytes can be returned to the caller, so you can retrieve that information to use in another source: another data store, or perhaps the Oracle Blockchain Platform. This is for non-repudiation purposes, which basically means proof of the origin, the authenticity, and the integrity of the data. If I want to pass that information to something else, another application or source, it's trusted information: it assures the sender that their message was delivered, plus it gives proof of the signer's identity. Countersignatures are saved in the blockchain log history, which itself happens to be a blockchain table. The countersignature is computed over the bytes using the hashing algorithm, and it includes the end user's signature, the delegate's, or both; remember, the end user can sign, a delegate can, or both can. Even though we do save that information in the blockchain table, if you're going to use this, we recommend storing that information outside of the database as well, for those non-repudiation purposes. 15:37 Lois: Thank you so much, Bill, for taking us though all these updates. We look forward to having you back soon to talk us through some more of these new features. Nikita: To learn more about blockchain tables, visit mylearn.oracle.com and search for the Oracle Database 23ai: New Features for Administrators course. Join us next week for a discussion on some more Oracle Database 23ai new features. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 16:06 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

30 July 2024 · 16 min

Database Essentials

Join hosts Lois Houston and Nikita Abraham, along with Hope Fisher, Oracle's Product Manager for Database Technologies, as they break down the basics of databases, explore different database management systems, and delve into database development. Whether you're a newcomer or just need a refresher, this quick, informative episode is sure to offer you some valuable insights. Oracle MyLearn: https://mylearn.oracle.com/ou/course/database-essentials/133032/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast. I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! For the last seven weeks, we've been exploring the world of OCI Container Engine for Kubernetes with our senior instructor Mahendra Mehra. We covered key aspects of OKE to help you create, manage, and optimize Kubernetes clusters in Oracle Cloud Infrastructure. So, be sure you check out those episodes if you're interested in Kubernetes. 01:00 Nikita: Today, we're doing something a little different. We've had a lot of episodes on different aspects of Oracle Database, but what if you're just getting started in this world? We wanted you to have something that you could listen to as well. And so we have Hope Fisher with us today. Hope is a Product Manager for Database Technologies at Oracle, and we're going to ask her to take us through the basics of database, the different database management systems, and database development. Lois: Hi Hope! Thanks for joining us for this episode. Before we dive straight into terminologies and concepts, I want to take a step back and really get down to the basics. We sometimes use the terms data and information interchangeably, but they're not the same, right? 01:43 Hope: Data is raw material or a set of facts and observations. Information is the meaning derived from the facts. The difference between data and information can be explained by using an example, such as test scores. In one class, if every student receives a numbered score and the scores can be calculated to determine a class average, the class average can be calculated to determine the school average. So in this scenario, each student's test score is one piece of data. And information is the class's average score or the school's average score. There is no value in data until you actually do something with it. 02:24 Nikita: Right, so then how do we make all this data useful? Do we create a database system? Hope: A database system provides a simple function—treat data as a collection of information, organize it, and make the data usable by providing easy access to it and giving you a place where that data can be stored. Every organization needs to collect and maintain data to meet its requirements. Most organizations today use a database to automate their information systems. 
An information system can be defined as a formal system for storing and processing data. A database is an organized collection of data put together as a unit. The rationale of a database is to collect, store, and retrieve related data for use by database applications. A database application is a software program that interacts with the database to access and manipulate data. A database is usually managed by a Database Administrator, also known as a DBA. 03:25 Nikita: Hope, give us some examples of database systems. Hope: Popular examples of database systems include Oracle Database, MySQL, which is also owned by Oracle, Microsoft SQL server, Postgres, and others. There are relational database management systems. The acronym is DBMS. Some of the strengths of a DBMS include flexibility and scalability. Given the huge amounts of information that modern businesses need to handle, these are important factors to consider when surveying different types of databases. 03:59 Lois: This may seem a little bit silly, but why not just use spreadsheets, Hope? Why use databases? Hope: The easy answer is that spreadsheets are designed for specific problems, relatively small amounts of data and individual users. Databases are designed for lots of data, shared information use, and complex data analysis. Spreadsheets are typically used for specific problems or small amounts of data. Individual users generally use spreadsheets. In a database, cells contain records that come from external tables. Databases are designed for lots of data. They are intended to be shared and used for more complex data analysis. They need to be scalable, secure, and available to many users. This differentiation means that spreadsheets are static documents, while databases can be relational. 04:51 Nikita: Hope, what are some common database applications? Hope: Database applications are used in far and wide use cases that most commonly can be grouped into three areas. Applications that run companies called enterprise applications. Enterprise applications are designed to integrate computer systems that run all phases of an enterprise's operations to facilitate cooperation and coordination of work across the enterprise. The intent is to integrate core business processes, like sales, accounting, finance, human resources, inventory, and manufacturing. Applications that do something very specific, like healthcare applications-- specialized software is software that's written for a specific task rather than for a broad application area. And then there are also applications that are used to examine data and turn it into information, like a data warehouse, analytics, and data lake. 05:54 Lois: We've spoken about data lakes before. But since this is an episode about the basics of database, can you briefly tell us what a data lake is? Hope: A data lake is a place to store your structured and unstructured data as well as a method for organizing large volumes of highly diverse data from diverse sources. Data lakes are becoming increasingly important as people, especially in businesses and technology, want to perform broad data exploration and discovery. Bringing data together into a single place or most of it into a single place makes that simpler. 06:29 Nikita: Thanks for that, Hope. So, what kind of organizations use databases? And, who within these organizations uses databases the most? Hope: Almost every enterprise uses databases. Enterprises use databases for a variety of reasons and in a variety of ways. 
Data and databases are part of almost any process of the enterprise. Data is collected to help solve business needs and drive value. Many people in an organization work with databases. These include the application developers who create applications that support and drive the business. The database administrator, or DBA, maintains and updates the database. And the end user uses the data as needed. 07:19 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free. So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 07:57 Nikita: Welcome back. Now that we've discussed foundational database concepts, I want to move on to database management systems. Take us through what a database management system is, Hope. Hope: A Database Management System, DBMS, has the following elements. The kernel code manages memory and storage for the DBMS. The repository of metadata is called a data dictionary. The query language enables applications to access the data. Oracle Database functions include data definition, storage, structure, and security. Additional functionality also provides for user access control, backup and recovery, integrity, and communications. There are many different database types and management systems. The most common is the relational database management system. 08:51 Nikita: And how do relational databases store data? Hope: Essentially, and very simplistically, the key elements of a relational database are: a database table containing rows and columns; the data in the table, which is stored one row at a time; and the columns, which contain attributes or related information. The different tables in a database then relate to one another by sharing a column.
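
That "shared column" idea is easy to see in a two-table SQL sketch (the tables are hypothetical):

  CREATE TABLE departments (
    dept_id   NUMBER PRIMARY KEY,
    dept_name VARCHAR2(50)
  );

  CREATE TABLE employees (
    emp_id   NUMBER PRIMARY KEY,
    emp_name VARCHAR2(50),
    dept_id  NUMBER REFERENCES departments(dept_id)  -- the shared column
                                                     -- that relates the tables
  );
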
This structure forms the basis for database design. A conceptual model is relatively stable over long periods of time. Physical data modeling, or database building, is concerned with implementation in a given technical software and hardware environment. The physical implementation is highly dependent on the current state of technology and is subject to change as available technologies rapidly change. A conceptual model captures the functional and informational needs of a business and is used to identify important entities and their relationships. A logical model includes those entities and relationships; it is also called an entity-relationship model, and it provides the details of the relationships. 11:34 Lois: I think that's a good place to wrap up our episode. To learn more about the Oracle Database architecture, offerings, and so on, visit mylearn.oracle.com. Thanks for joining us today, Hope. Nikita: Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 11:55 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
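As a rough illustration of how the modeling stages Hope describes build on one another, here is a hedged sketch; the entities, attributes, and relationship are invented for the example, and SQLite again stands in for whatever DBMS the physical model targets.

  # Conceptual model: the business fact "a CUSTOMER places ORDERS".
  # Logical (entity-relationship) model: entities CUSTOMER and ORDER,
  #   related one-to-many through a customer identifier.
  # Physical model: the implementation in a specific DBMS, here SQLite.
  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT)")
  conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, "
               "customer_id INTEGER REFERENCES customers(customer_id), "  # the relationship
               "order_date TEXT)")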

23 July 2024 · 12 min

Container Engine for Kubernetes: Security Practices

In the season's final episode, hosts Lois Houston and Nikita Abraham interview senior OCI instructor Mahendra Mehra about the security practices that are vital for OKE clusters on OCI. Mahendra shares his expert insights on the importance of Kubernetes security, especially in today's digital landscape where the integrity of data and applications is paramount. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we spoke about self-managed nodes and how you can manage Kubernetes deployments. Nikita: Today is the final episode of this series on OCI Container Engine for Kubernetes. We're going to look at the security side of things and discuss how you can implement vital security practices for your OKE clusters on OCI, and safeguard your infrastructure and data. 00:59 Lois: That's right, Niki! We can't overstate the importance of Kubernetes security, especially in today's digital landscape, where the integrity of your data and applications is paramount. With us today is senior OCI instructor, Mahendra Mehra, who will take us through Kubernetes security and compliance practices. Hi Mahendra! It's great to have you here. I want to jump right in and ask you, how can users add a service account authentication token to a kubeconfig file? Mahendra: When you set up the kubeconfig file for a cluster, by default, it contains an Oracle Cloud Infrastructure CLI command to generate a short-lived, cluster-scoped, user-specific authentication token. The authentication token generated by the CLI command is appropriate to authenticate individual users accessing the cluster using kubectl and the Kubernetes Dashboard. However, the generated authentication token is not appropriate to authenticate processes and tools accessing the cluster, such as continuous integration and continuous delivery tools. To ensure access to the cluster, such tools require long-lived, non-user-specific authentication tokens. One solution is to use a Kubernetes service account. Having created a service account, you bind it, via a ClusterRoleBinding, to a ClusterRole that has cluster administration permissions. You can create an authentication token for this service account, which is stored as a Kubernetes secret. You can then add the service account as a user definition in the kubeconfig file itself. Other tools can then use this service account authentication token when accessing the cluster. 02:47 Nikita: So, as I understand it, adding a service account authentication token to a kubeconfig file enhances security and enables automated tools to interact seamlessly with your Kubernetes cluster.
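As a rough sketch of what the end result can look like, the snippet below appends a service account user entry, with its long-lived token, to a kubeconfig file using Python and PyYAML. The file path, user name, and token value are placeholders; in practice the token would be read from the Kubernetes secret created for the service account.

  import yaml  # PyYAML

  # Load an existing kubeconfig (path is illustrative).
  with open("kubeconfig") as f:
      cfg = yaml.safe_load(f)

  # Add the service account as a user definition with its token.
  cfg.setdefault("users", []).append({
      "name": "ci-service-account",                  # placeholder name
      "user": {"token": "<token-from-k8s-secret>"},  # placeholder token
  })

  with open("kubeconfig", "w") as f:
      yaml.safe_dump(cfg, f)

A CI/CD tool pointed at this kubeconfig would then authenticate as the service account rather than as an individual user.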
So, let's talk about the permissions users need to access clusters they have created using Container Engine for Kubernetes. Mahendra: For most operations on Container Engine for Kubernetes clusters, IAM leverages the concept of groups. A user's permissions are determined by the IAM groups they belong to, including dynamic groups. The access rights for these groups are defined by policies. IAM provides granular control over various cluster operations, such as the ability to create or delete clusters; add, remove, or modify node pools; and dictate which Kubernetes object create, delete, and view operations a user can perform. All these controls are specified at the group and policy levels. In addition to IAM, the Kubernetes role-based access control (RBAC) authorizer can enforce additional fine-grained access control for users on specific clusters via Kubernetes RBAC Roles and ClusterRoles. 04:03 Nikita: What are Kubernetes RBAC Roles and ClusterRoles, Mahendra? Mahendra: A Role defines permissions for resources within a specific namespace, while a ClusterRole is a global object that provides access to global objects as well as non-resource URLs, such as the API version and health endpoints on the API server. Kubernetes RBAC also includes RoleBindings and ClusterRoleBindings. A RoleBinding grants permissions to subjects, which can be users, groups, or service accounts interacting with the Kubernetes API. It specifies the allowed operations for a given subject in the cluster. A RoleBinding is always created in a specific namespace. When associated with a Role, it gives users the permissions specified within that Role for objects within that namespace. When associated with a ClusterRole, it grants the permissions defined in that ClusterRole, but only for namespaced objects within the RoleBinding's own namespace. A ClusterRoleBinding, on the other hand, is a global object. It associates ClusterRoles with users, groups, and service accounts, but it cannot be associated with a namespaced Role. A ClusterRoleBinding is used to provide access to global objects, non-namespaced objects, or to namespaced objects in all namespaces. 05:36 Lois: Mahendra, what's IAM's role in this? How do IAM and Kubernetes RBAC work together? Mahendra: IAM provides broader permissions, while Kubernetes RBAC offers fine-grained control. Users authorized either by IAM or Kubernetes RBAC can perform Kubernetes operations. When a user attempts to perform any operation on a cluster, except for create role and create cluster role operations, IAM first determines whether a group or dynamic group to which the user belongs has the appropriate and sufficient permissions. If so, the operation succeeds. If the attempted operation also requires additional permissions granted via a Kubernetes RBAC Role or ClusterRole, the Kubernetes RBAC authorizer then determines whether the user or group has been granted the appropriate Kubernetes Role or ClusterRole. 06:41 Lois: OK. What kind of permissions do users need to define custom Kubernetes RBAC Roles and ClusterRoles? Mahendra: It's common to define custom Kubernetes RBAC Roles and ClusterRoles for precise control. To create these, a user must have existing Roles or ClusterRoles with equal or higher privileges. By default, users don't have any RBAC roles assigned, but there are default roles, such as cluster-admin, which grants superuser privileges. 07:12 Nikita: I want to ask you about securing and handling sensitive information within Kubernetes clusters, and ensuring a robust security posture.
What can you tell us about this? Mahendra: When creating Kubernetes clusters using OCI Container Engine for Kubernetes, there are two fundamental approaches to storing application secrets. We can opt for storing and managing secrets in an external secrets store, accessed seamlessly through the Kubernetes Secrets Store CSI driver. Alternatively, we have the option of storing Kubernetes secret objects directly in etcd. 07:53 Lois: OK, let's tackle them one by one. What can you tell us about the first method, storing secrets in an external secrets store? Mahendra: The Kubernetes Secrets Store CSI driver facilitates seamless integration between our Kubernetes clusters and external secrets stores. With the driver, our clusters can mount and manage multiple secrets, keys, and certificates from external sources. These are accessible as volumes, making it easy to incorporate them into our application containers. OCI Vault is a notable external secrets store, and Oracle provides the Oracle Secrets Store CSI driver provider to enable Kubernetes clusters to seamlessly access secrets stored in Vault. 08:54 Nikita: And what about the second method? How can we store secrets as Kubernetes secret objects in etcd? Mahendra: In this approach, we store and manage our application secrets using Kubernetes secret objects. These objects are managed directly within etcd, the distributed key-value store used for Kubernetes cluster coordination and state management. In OKE, etcd reads and writes data to and from block storage volumes in the OCI Block Volume service. By default, OCI ensures the security of our secrets and etcd data by encrypting it at rest. Oracle handles this encryption automatically, providing a secure environment for our secrets. Oracle takes responsibility for managing the master encryption key for data at rest, including etcd and Kubernetes secrets. This ensures the integrity and security of our stored secrets. If needed, there are options for users to manage the master encryption key themselves. 10:06 Lois: OK. We understand that managing secrets is a critical aspect of maintaining a secure Kubernetes environment, and one that users should not take lightly. Can we talk about OKE container image security? What essential characteristics should container images possess to fortify the security posture of a user's applications? Mahendra: In the dynamic landscape of containerized applications, ensuring the security of container images is paramount. It is not uncommon for the operating system packages included in images to have vulnerabilities. Managing these vulnerabilities enables you to strengthen the security posture of your system and respond quickly when new vulnerabilities are discovered. You can set up Oracle Cloud Infrastructure Registry, also known as Container Registry, to scan images in a repository for security vulnerabilities published in the publicly available Common Vulnerabilities and Exposures (CVE) database. 11:10 Lois: And how is this done? Is it automatic? Mahendra: To perform image scanning, Container Registry makes use of the Oracle Cloud Infrastructure Vulnerability Scanning Service and Vulnerability Scanning REST API. When new vulnerabilities are added to the CVE database, Container Registry initiates automatic rescanning of images in repositories that have scanning enabled.
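Looping back to the RBAC objects covered a few minutes ago, here is a minimal sketch using the official Kubernetes Python client; the namespace, names, subject, and verbs are illustrative only, and the same manifests could just as well be applied with kubectl.

  from kubernetes import client, config

  config.load_kube_config()  # e.g. a kubeconfig generated for an OKE cluster
  rbac = client.RbacAuthorizationV1Api()

  # A Role: read-only access to pods, scoped to the "dev" namespace.
  rbac.create_namespaced_role(namespace="dev", body={
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "Role",
      "metadata": {"name": "pod-reader", "namespace": "dev"},
      "rules": [{"apiGroups": [""], "resources": ["pods"],
                 "verbs": ["get", "list", "watch"]}],
  })

  # A RoleBinding: grants that Role's permissions to one user in the namespace.
  rbac.create_namespaced_role_binding(namespace="dev", body={
      "apiVersion": "rbac.authorization.k8s.io/v1",
      "kind": "RoleBinding",
      "metadata": {"name": "pod-reader-binding", "namespace": "dev"},
      "subjects": [{"kind": "User", "name": "jane@example.com",
                    "apiGroup": "rbac.authorization.k8s.io"}],
      "roleRef": {"kind": "Role", "name": "pod-reader",
                  "apiGroup": "rbac.authorization.k8s.io"},
  })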
11:41 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 12:20 Nikita: Welcome back! Mahendra, what are the benefits of image scanning? Mahendra: You can gain valuable insights into each image scan conducted over the past 13 months. This includes an overview of the number of vulnerabilities detected and an overall risk assessment for each scan. Additionally, you can delve into the details of each scan, featuring descriptions of individual vulnerabilities, their associated risk levels, and direct links to the CVE database for further information. This historical and detailed data empowers you to monitor, compare, and enhance image security over time. You can also disable image scanning on a particular repository by removing the image scanner. 13:11 Nikita: Another characteristic that container images should have is unaltered integrity, right? Mahendra: For compliance and security reasons, system administrators often want to deploy software into a production system only when they are satisfied that it has not been modified since it was published, which would compromise its integrity. Ensuring the unaltered integrity of software is paramount for compliance and security in production environments. 13:41 Lois: Mahendra, what are the mechanisms that guarantee this integrity within the context of Oracle Cloud Infrastructure? Mahendra: Image signatures play a pivotal role in not only verifying the source of an image but also ensuring its integrity. Oracle's Container Registry facilitates this process by allowing users or systems to push images and sign them using a master encryption key sourced from the OCI Vault. It's worth noting that an image can have multiple signatures, each associated with a distinct master encryption key. These signatures are uniquely tied to an image OCID, providing granularity to the verification process. Furthermore, the process of image signing mandates the use of an RSA asymmetric key from the OCI Vault, ensuring a robust and secure validation of the image's unaltered integrity. 14:45 Nikita: In the context of container images, how can users ensure the use of trusted sources within OCI? Mahendra: System administrators need the assurance that the software being deployed in a production system originates from a source they trust. Signed images play a pivotal role, providing a means to verify both the source and the integrity of the image. To further strengthen this, administrators can create image verification policies for clusters, specifying which master encryption keys must have been used to sign images. This enhances security by configuring Container Engine for Kubernetes clusters to allow the deployment of images signed with specific encryption keys from Oracle Cloud Infrastructure Registry. Users or systems retrieving signed images from OCIR can trust the source and be confident in the image's integrity. 15:46 Lois: Why is it imperative for users to use signed images from Oracle Cloud Infrastructure Registry when deploying applications to a Container Engine for Kubernetes cluster?
Mahendra: This practice is crucial for ensuring the integrity and authenticity of the deployed images. To achieve this enforcement, it's important to note that an image in OCIR can have multiple signatures, each linked to a different master encryption key. This multi-key association adds layers of security to the verification process. A cluster's image verification policy comes into play, allowing administrators to specify up to five master encryption keys. This policy serves as a guideline for the cluster, dictating which keys are deemed valid for image signatures. If a cluster's image verification policy doesn't explicitly specify encryption keys, any signed image can be pulled regardless of the key used, and any unsigned image can also be pulled, potentially compromising the security measures. 16:56 Lois: Mahendra, can you break down the essential permissions required to bolster security measures within a user's OKE clusters? Mahendra: To enable clusters to include master encryption keys in image verification policies, you must give clusters permission to use keys from OCI Vault. For example, to grant this permission to a particular cluster in the tenancy, we must use a policy along the lines of: Allow any-user to use keys in tenancy where request.user.id = '<cluster-OCID>'. Additionally, for clusters to seamlessly pull signed images from Oracle Cloud Infrastructure Registry, it's vital to provide permissions for accessing repositories in OCIR. 17:43 Lois: I know this may sound like a lot, but OKE container image security is vital for safeguarding your containerized applications. Thank you so much, Mahendra, for being with us through the season and taking us through all of these important concepts. Nikita: To learn more about the topics covered today, visit mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 18:16 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
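The signing mechanics underneath are standard public-key cryptography. As a conceptual illustration only, and not the actual OCI Vault or Container Registry flow, here is how an RSA signature over an image digest can be created and verified in Python with the cryptography package; the digest value is invented.

  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import padding, rsa

  # Stand-in key: in OCI, the private key would live in OCI Vault and the
  # signature would be attached to the image in the Container Registry.
  private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  image_digest = b"sha256:0123abcd..."  # illustrative image digest

  signature = private_key.sign(image_digest, padding.PKCS1v15(), hashes.SHA256())

  # Verification raises InvalidSignature if the digest or signature was altered.
  private_key.public_key().verify(
      signature, image_digest, padding.PKCS1v15(), hashes.SHA256())
  print("signature verified")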

16 July 2024 · 18 min

Working with Self-Managed Nodes and Managing Kubernetes Deployments

In this episode, hosts Lois Houston and Nikita Abraham speak with senior OCI instructor Mahendra Mehra about the capabilities of self-managed nodes in Kubernetes, including how they offer complete control over worker nodes in your OCI Container Engine for Kubernetes environment. They also explore the various options that are available to effectively manage your Kubernetes deployments. OCI Container Engine for Kubernetes Specialist: https://mylearn.oracle.com/ou/course/oci-container-engine-for-kubernetes-specialist/134971/210836 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:26 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Last week, we discussed how OKE virtual nodes can offer you a complete serverless Kubernetes experience. Nikita: Yeah, and in today's episode, we'll focus on self-managed nodes, where you get complete control over the worker nodes within your OKE environment. We'll also talk about how you can manage your Kubernetes deployments. 00:57 Lois: To tell us more about this, we have Mahendra Mehra, a senior OCI instructor with Oracle University. Hi Mahendra! Welcome back! Let's get started with self-managed nodes. Can you tell us what they are? Mahendra: In Container Engine for Kubernetes, a self-managed node is essentially a worker node that you personally create and host on a compute instance or instance pool within the compute service. Unlike managed nodes or virtual nodes, self-managed nodes are not grouped into node pools by default. They are often referred to as Bring Your Own Nodes, also abbreviated as BYON. If you wish to streamline administration and manage multiple self-managed nodes collectively, you can utilize the compute service to create a compute instance pool for hosting these nodes. This allows for greater flexibility and customization in your Kubernetes environment. 01:58 Nikita: Mahendra, what are some practical usage scenarios for OKE self-managed nodes? Mahendra: These nodes offer a range of advantages for specific use cases. Firstly, for specialized workloads, leveraging the compute service allows you to configure compute instances with shape and image combinations that may not be available for managed nodes or virtual nodes. This includes options like GPU shapes for hardware-accelerated workloads or high-frequency processor cores for demanding high-performance computing tasks. Secondly, if you require complete control over your compute instance configuration, self-managed nodes are the ideal choice. This gives you the flexibility to tailor each node to your specific requirements. Additionally, self-managed nodes are particularly well suited for Oracle Cloud Infrastructure cluster networks. These nodes provide high-bandwidth, low-latency RDMA connectivity, making them a preferred option for certain networking setups.
Lastly, the use of compute instance pools with self-managed nodes enables the creation of infrastructure for handling complex distributed computing tasks. This can greatly enhance the efficiency of your Kubernetes environment. Consider these points carefully to determine the optimal use of OKE self-managed nodes in your deployments. 03:30 Lois: What do we need to consider before creating a self-managed node and integrating it into a cluster? Mahendra: There are two crucial aspects to address. Firstly, you need to confirm that the cluster to which you plan to add a self-managed node is configured appropriately. Secondly, it's essential to choose the right image for the compute instance hosting the self-managed node. 03:53 Nikita: Can you dive a little deeper into these prerequisites? Mahendra: To successfully integrate a self-managed node into your cluster, you must ensure that the cluster is an enhanced cluster. This is a crucial prerequisite for the addition of self-managed nodes. The flannel CNI plugin for pod networking should be utilized, not the VCN-native pod networking CNI plugin. This ensures optimal pod networking for your self-managed nodes. The control plane nodes of the cluster must be running Kubernetes version 1.25 or later. This is essential for compatibility and optimal performance. Lastly, maintain compatibility between the Kubernetes versions on the control plane nodes and the worker nodes, with a maximum allowable difference of two minor versions. This ensures a smooth and stable operation of your Kubernetes environment. Keep these cluster requirements in mind as you prepare to add self-managed nodes to your OKE cluster. 04:55 Lois: What about the image requirements when creating self-managed nodes? Mahendra: Choose either an Oracle Linux 7 or an Oracle Linux 8 image for your self-managed nodes. Ensure that the selected image has a release date of March 28, 2023, or later. Obtain the image OCID, also known as the Oracle Cloud Identifier, from the respective sources. When specifying an image, be mindful of the Kubernetes version it contains. It's your responsibility to select an image with a Kubernetes version that aligns with the Kubernetes version skew support policy. Keep in mind that Container Engine for Kubernetes does not automatically check the compatibility, so it's up to you to ensure harmony between the Kubernetes version on the self-managed node and the cluster's control plane nodes. These considerations will help you make informed choices when configuring images for your self-managed nodes. 05:57 Nikita: I really like the flexibility and customization OKE self-managed nodes offer. Now I want to switch gears a little and ask you about the OCI Service Operator for Kubernetes. Can you tell us a bit about it? Mahendra: OCI Service Operator for Kubernetes is an open-source Kubernetes add-on that transforms the way we manage and connect OCI resources within our Kubernetes clusters. This powerful operator enables you to effortlessly create, configure, and interact with OCI resources directly from your Kubernetes environment, eliminating the need for constant navigation between the Oracle Cloud Infrastructure Console, CLI, or other tools. With the OCI Service Operator, you can seamlessly leverage kubectl to call the operator framework APIs, providing a streamlined and efficient workflow. 06:53 Lois: On what framework is the OCI Service Operator built? Mahendra: OCI Service Operator for Kubernetes is built using the open-source Operator Framework toolkit.
The Operator Framework manages Kubernetes-native applications called operators in an effective, automated, and scalable way. The Operator Framework comprises essential components like Operator SDK. This leverages the Kubernetes controller-runtime library, providing high-level APIs and abstractions for writing operational logic. Additionally, it offers tools for scaffolding and code generation. 07:35 Do you want to stay ahead of the curve in the ever-evolving AI landscape? Look no further than our brand-new OCI Generative AI Professional course and certification. For a limited time only, we're offering both the course and certification for free! So, don't miss out on this exclusive opportunity to get certified on Generative AI at no cost. Act fast because this offer is valid only until July 31, 2024. Visit https://education.oracle.com/genai to get started. That's https://education.oracle.com/genai. 08:14 Nikita: Welcome back! Mahendra, are there any other components within OCI Service Operator to manage Kubernetes deployments? Mahendra: The other essential component is Operator Lifecycle Manager, also abbreviated as OLM. OLM extends Kubernetes by introducing a declarative approach to install, manage, and upgrade operators within a cluster. The OCI Service Operator for Kubernetes is intelligently packaged as an Operator Lifecycle Manager bundle, simplifying the installation process on Kubernetes clusters. This comprehensive bundle encapsulates all necessary objects and definitions, including CRDs, RBACs, ConfigMaps, and deployments, making it effortlessly deployable on a cluster. 09:02 Lois: So much that users can take advantage of! What about OCI Service Operator's integration with other OCI services? Mahendra: One of its standout features is its seamless integration with a range of OCI services. The first one is Autonomous Database, specifically tailored for transaction processing, mixed workloads, analytics, and data warehousing. Enjoy automated patching, upgrades, and tuning, allowing routine maintenance tasks to be performed without human intervention. The next on the list is MySQL HeatWave, a fully-managed Database Service designed for developing and deploying secure cloud-native applications using widely adopted MySQL open-source database. Third on the list is OCI Streaming service. Experience a fully managed, scalable, and durable solution for ingesting and consuming high-volume data streams in real time. Next is Service Mesh. This service offers a set of capabilities to facilitate communication among microservices within a cloud-native application. The communication is centrally managed and secured, ensuring a smooth and secure interaction. The OCI Service Operator for Kubernetes serves as a versatile bridge, seamlessly connecting your Kubernetes clusters with these powerful Oracle Cloud Infrastructure services. 10:31 Nikita: That's awesome! I've also heard about Ingress Controllers. Can you tell us what they are? Mahendra: A Kubernetes Ingress Controller serves as the enforcer of rules defined in a Kubernetes Ingress. Its primary role is to manage, load balance, and route incoming traffic to specific service pods residing on worker nodes within the cluster. At the heart of this process is the Kubernetes Ingress Resource. Think of it as a blueprint, a rich configuration holding routing rules and options, specifically crafted for handling HTTP and HTTPS traffic. It serves as a powerful orchestrator for managing external communication with services inside the cluster. 
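As a concrete picture of the Ingress resource described here, the sketch below defines a single Ingress that routes two URL paths to two backend services, created with the official Kubernetes Python client; the host, paths, and service names are invented for the example.

  from kubernetes import client, config

  config.load_kube_config()

  # One Ingress consolidating routing rules for two services.
  ingress = {
      "apiVersion": "networking.k8s.io/v1",
      "kind": "Ingress",
      "metadata": {"name": "demo-ingress", "namespace": "default"},
      "spec": {"rules": [{
          "host": "app.example.com",
          "http": {"paths": [
              {"path": "/api", "pathType": "Prefix",
               "backend": {"service": {"name": "api-svc",
                                       "port": {"number": 8080}}}},
              {"path": "/", "pathType": "Prefix",
               "backend": {"service": {"name": "web-svc",
                                       "port": {"number": 80}}}},
          ]},
      }]},
  }
  client.NetworkingV1Api().create_namespaced_ingress(
      namespace="default", body=ingress)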
11:15 Lois: Mahendra, how do Ingress Controllers bring about efficiency? Mahendra: Efficiency comes with consolidation. With a single Ingress resource, you can neatly gather routing rules for multiple services. This eliminates the need to create a Kubernetes service of type LoadBalancer for each service seeking external or private network traffic. The OCI native ingress controller is a powerhouse. It crafts an OCI Flexible Load Balancer, your gateway to efficient request handling. The OCI native ingress controller seamlessly adapts to changes in routing rules with real-time updates. 11:53 Nikita: And what about integration with an OKE cluster? Mahendra: Absolutely. It harmonizes with the cluster for streamlined traffic management. Operating as a single pod on a randomly selected worker node, it ensures a balanced workload distribution. 12:08 Lois: Moving on, let's talk about running applications on ARM-based nodes and GPU nodes. We'll start with ARM-based nodes. Mahendra: Typically, developers use ARM-based worker nodes in a Kubernetes cluster to develop and test applications. Selecting the right infrastructure is crucial for optimal performance. 12:28 Nikita: What kind of options do developers have when running applications on ARM-based nodes? Mahendra: When it comes to running applications on ARM-based nodes, you have a range of options at your fingertips. First up, consider the choice between ARM-based bare metal shapes and flexible VM shapes. Each comes with its own unique advantages. Now, let's talk about the heart of it all, the Ampere A1 Compute instances. These instances are driven by the cutting-edge Ampere Altra processor, ensuring high performance and efficiency for your workloads. You must specify the ARM-based node pool shapes during cluster or node pool creation. Whether you choose to navigate through the user-friendly console, leverage the flexibility of the API, or command with precision through the CLI, the process remains seamless. 13:23 Lois: Can you define pods to run exclusively on ARM-based nodes within a heterogeneous cluster setup? Mahendra: In scenarios where a cluster comprises node pools with ARM-based shapes alongside other shapes, such as AMD64, you can employ a powerful tool called a node selector in the pod specification. This allows you to precisely dictate that an application should run exclusively on ARM-based worker nodes, ensuring your workloads align with the desired architecture. 13:55 Nikita: And before we end this episode, can you explain why developers might want to run applications on GPU nodes? Mahendra: Originally designed for graphics manipulation, GPUs prove highly efficient in parallel data processing. This makes them a top choice for deploying data-intensive applications. Our GPU nodes utilize cutting-edge NVIDIA graphics cards, ensuring efficient and powerful data processing. Seamless access to this computing prowess is made possible through CUDA libraries. To ensure smooth integration, be sure to select a GPU shape and opt for an Oracle Linux GPU image preloaded with the essential CUDA libraries. CUDA here is the Compute Unified Device Architecture, a parallel computing platform and application programming interface (API) model created by NVIDIA. It allows developers to use NVIDIA graphics processing units for general-purpose processing, rather than just rendering graphics. 14:57 Nikita: Thank you, Mahendra, for another insightful session. We appreciate you joining us today.
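To round out the node selector idea from this exchange, here is a minimal sketch of a pod pinned to ARM-based worker nodes via the standard kubernetes.io/arch label; the pod name and container image are placeholders.

  from kubernetes import client, config

  config.load_kube_config()

  pod = {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": {"name": "arm-only-demo"},
      "spec": {
          # Schedule this pod only onto ARM-based worker nodes.
          "nodeSelector": {"kubernetes.io/arch": "arm64"},
          "containers": [{"name": "app",
                          "image": "my-registry/app:latest"}],  # placeholder
      },
  }
  client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)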
Lois: For more information on everything we discussed, go to mylearn.oracle.com and search for the OCI Container Engine for Kubernetes Specialist course. You'll find plenty of demos and skill checks to supplement your learning. Join us next week when we'll discuss vital security practices for your OKE clusters on OCI. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 15:28 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

9 July 2024 · 15 min
