Core AI Concepts – Part 3

Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25

Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services. 

Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we'll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what's the difference between traditional AI and generative AI?

01:01

Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. That's what we call traditional AI: it's focused on analysis, classification, and prediction.

But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you.

Another example: traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on a description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video, that resembles human-made work.

Instead of simply analyzing or predicting, generative AI produces something original that resembles what a human might create.

02:16

Lois: How did traditional AI progress to the generative AI we know today?

Himanshu: First, we will look at small supervised learning. In the early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If an email contains "congratulations," it might be spam. This was efficient for straightforward tasks, but it struggled with anything more complex.

Then comes large supervised learning. As the internet exploded, massive data sets became available: millions of images, billions of text snippets. Models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. For example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images as cars, dogs, planes, and so on.

Models became more sophisticated, capturing deeper patterns rather than simple rules. And then, generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content.

Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer a yes or no, but to create outputs that look and feel human-made. Instead of judging spam or sentiment, now the model can write an article, compose a song, paint a picture, or generate new software code.

03:55

Nikita: Himanshu, what motivated this sort of progression?

Himanshu: Because of three reasons. The first one is data: we had way more of it thanks to the internet, smartphones, and social media. The second is compute: graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models.

And the third, and most important, is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas.

04:25

Lois: So, what's happening behind the scenes? How is gen AI making these things happen?

Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models or LLMs.

They are masters of generating text: conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos.

And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these specialists together, we create something called multimodal AI: systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities.

Multimodal AI can not only take in text but also work across different media. So today, when ChatGPT or Gemini generates images, it's not just one model doing everything. These are specialized systems working together behind the scenes.

05:38

Lois: You mentioned large language models and how they power text-based gen AI, so let's talk more about them. Himanshu, what is an LLM and how does it work?

Himanshu: So it's a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before.

This ability to predict one word at a time intelligently is what builds full sentences, paragraphs, and even stories.
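The next-word idea can be sketched in a few lines of Python. This is a toy illustration only: the candidate words and their scores are invented, whereas a real LLM computes such scores with billions of learned parameters. The sketch turns raw scores into a probability distribution with a softmax and then greedily picks the most likely next word.

```python
import math

# Made-up scores ("logits") a model might assign to candidate next words
# after the prompt "The cat sat on the". Purely illustrative numbers.
logits = {"mat": 4.0, "roof": 2.5, "moon": 0.5}

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
# Greedy decoding: always pick the single most likely next word.
next_word = max(probs, key=probs.get)
```

Repeating this prediction step, feeding each chosen word back in as new context, is what builds full sentences and paragraphs one word at a time.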

06:06

Nikita: But what's large about this? Why's it called a large language model?

Himanshu: It simply means the model has lots and lots of parameters. Think of parameters as adjustable dials the model fine-tunes during learning.

There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more parameters there are, the more complex the patterns the model can understand, and the more human-like the language it can generate.
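To give a feel for how those billions of dials add up, here is a tiny sketch. The layer sizes below are made-up illustrative numbers, not any real model's dimensions: a single dense layer alone contributes one weight per input-output pair plus one bias per output.

```python
def dense_layer_params(n_in, n_out):
    """Parameter count for one dense layer: n_in * n_out weights + n_out biases."""
    return n_in * n_out + n_out

# One hypothetical projection from a 12,288-wide representation to a
# 49,152-wide one already exceeds 600 million parameters on its own.
params = dense_layer_params(12_288, 49_152)
```

Stack hundreds of layers like this and the totals quickly reach into the billions, which is why "large" is the operative word.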

06:37

Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work?

Himanshu: Diffusion models start with something that looks like pure random noise.

Imagine static on an old TV screen: no meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful. Think of it like sculpting a statue. You start with a rough block of stone, and slowly, carefully, you chisel away to reveal a beautiful sculpture hidden inside.

And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images: it's trying to predict what the clean, noise-free image should look like.
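The step-by-step denoising idea can be sketched with a toy loop. This is purely illustrative: a real diffusion model uses a trained neural network to predict the noise to remove at each step, whereas this sketch simply nudges random values toward a fixed target to show the iterative refinement.

```python
import random

random.seed(0)

# The "clean image" is just four pixel values here; real images have millions.
target = [0.2, 0.8, 0.5, 0.9]

# Start from pure noise, like static on an old TV screen.
image = [random.random() for _ in target]

# Each step removes a little more "noise" by moving toward the target.
for step in range(50):
    image = [x + 0.2 * (t - x) for x, t in zip(image, target)]

# After many small steps, the noise is essentially gone.
error = max(abs(x - t) for x, t in zip(image, target))
```

The key point the sketch preserves is that the image emerges gradually, over many small corrections, rather than in a single jump.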

07:24

Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com.

07:53

Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I'm sure the uses go far beyond that, right? Can you walk us through some of them?

Himanshu: The first one is text generation. So we can talk about chatbots, which are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. You can also have embedding models, which power smart search systems.

Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents.

The third one is code generation. Tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a web form with just a few prompts.

The second application is image generation. The first obvious use is art. Designers and marketers can generate creative concepts instantly. Say you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes.

For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And for realistic images, retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces cost and increases personalization.

The third application is multimodal systems. These are combined systems that take one kind of input, or a combination of different inputs, and produce a different kind of output, or even combine various kinds, be it text or image, in both input and output.

Text to image is being used in e-commerce, movie concept art, and educational content creation. Text to video is still in its early days, but imagine creating a product explainer video just by typing out the script. Marketing teams love this for quick turnarounds. And the last one is text to audio.

Tools like ElevenLabs can convert text into realistic, human-like voiceovers, useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool. It's becoming a creative copilot across departments, whether it's marketing, design, product support, or even operations.

10:42

Lois: That's great! So, we've established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general?

Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media: videos, audio, or even faces that look and sound authentic.

Imagine a fake video of a political leader announcing a policy they never approved. This could cause mass confusion or even impact elections. In the case of business, deepfakes can also be used in scams where a CEO's voice is faked to approve fraudulent transactions.

Number two is bias. If AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because the historical data was biased.

And this bias can expose companies to discrimination lawsuits, brand damage, and ethical concerns. Number three is hallucinations. Sometimes AI systems confidently generate information that is completely wrong without realizing it.

Say you ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made-up court ruling. In terms of business impact, in sectors like health care, finance, or law, hallucinations could have serious or even dangerous consequences if not caught.

The fourth one is copyright and IP issues. Generative AI creates new content, but often based on material it was trained on. Who owns the new work? A real-life example could be where an artist finds their unique style was copied by an AI that was trained on their paintings without permission.

In terms of business impact, companies using AI-generated content for marketing, branding, or product designs must watch for legal gray areas around copyright and intellectual property. So generative AI is not just a technology conversation, it's a responsibility conversation. Businesses must innovate and protect.

Creativity and caution must go together.

12:50

Nikita: Let's move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool?

Himanshu: So think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. So you don't just ask, what's the best flight to Vegas? Instead, you tell the agent, book me a flight to Vegas and a room at the Hilton. And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done.

So agents act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals are user-defined outcomes. For example, I want to fly to Vegas and stay at the Hilton. Context includes preferences, history, and constraints, like economy class only or don't book for Mondays.

Tools could be APIs, databases, or services it can call, such as a travel API or a company calendar. And together, they let the agent reason, plan, and act.

14:02

Nikita: How does a gen AI agent work under the hood?

Himanshu: So usually, they go through four stages. The first one is understanding and interpreting your request, that is, natural language understanding. Second, it figures out what needs to be done, in this case flight booking plus hotel search.

Third, it retrieves data or connects to tools and APIs if needed, such as Skyscanner, Expedia, or a calendar. And fourth, it takes action. That means confirming the booking and giving you a response like, your travel is booked. Keep in mind not all gen AI agents are fully independent.
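The four stages above can be sketched as a minimal pipeline. Everything here is a hypothetical stand-in: the keyword matching fakes natural language understanding (a real agent would use an LLM), and the tool functions stand in for real travel APIs.

```python
def understand(request):
    """Stage 1: interpret the request (toy keyword spotting, not a real NLU)."""
    tasks = []
    if "flight" in request:
        tasks.append("book_flight")
    if "hotel" in request or "Hilton" in request:
        tasks.append("book_hotel")
    return tasks

# Stage 3 stand-ins: a real agent would call services like a travel API here.
TOOLS = {
    "book_flight": lambda: "flight booked",
    "book_hotel": lambda: "hotel booked",
}

def run_agent(request):
    tasks = understand(request)                 # Stage 1: understand and interpret
    plan = [TOOLS[t] for t in tasks]            # Stage 2: figure out what to do
    results = [tool() for tool in plan]         # Stage 3: connect to the tools
    return "Your travel is booked: " + ", ".join(results)  # Stage 4: take action, respond

reply = run_agent("Book me a flight to Vegas and a room at the Hilton")
```

The separation matters: understanding, planning, tool calls, and the final response are distinct stages, which is also why some agents keep a human approval step between planning and execution.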

14:38

Lois: Himanshu, we've seen people use the terms generative AI agents and agentic AI interchangeably. What's the difference between the two?

Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal and may improve and adapt over time.

Most gen AI agents are reactive, not proactive. On the other hand, agentic AI can plan ahead, anticipate problems, and can even adjust strategies.

So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, Auto-GPT runs loops without user prompts, and an autonomous car decides routes and reactions.

Most gen AI agents can only take multiple steps if explicitly designed that way, like step-by-step logic flows in LangChain. In the case of agentic AI, it can plan across multiple steps with evolving decisions.

On memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. In the case of agentic AI, these systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts.

Some generative AI agents are agentic, like Auto-GPT. They use LLMs to reason, plan, and act, but not all are. And likewise, not all agentic AIs are generative. For example, an autonomous car may use computer vision, control systems, and planning, but no generative models.

So agentic AI is a design philosophy or system behavior: goal-driven, autonomous decision making. The two can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative.

16:39

Lois: What makes a generative AI agent actually work?

Himanshu: A gen AI agent isn't just about answering a question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components, each playing a critical role.

The first one is the goal. What is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organize a team meeting for Friday, the goal in that case would be to schedule a meeting.

Number two is memory. What does it remember? This is the agent's context awareness: storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon, or I have already shared my team's availability, the agent can reuse that. And without memory, the agent is stateless, like a typical chatbot that forgets context after every prompt.

Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on.

The fourth one is the planner. How does it break down the goal? This is where the reasoning happens. The planner breaks big goals into step-by-step plans, for example, checking team availability, drafting the meeting invite, sending the invite, and then probably confirming the booking. Agents don't just guess. They reason and organize actions into a logical path.

And the fifth and final one is the executor, which gets it done. This is where the action takes place. The executor performs what the planner lays out, for example, calling APIs, sending messages, or booking reservations. If the planner is the architect, the executor is the builder.
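The five components can be sketched as a minimal class. The hard-coded plan and the tool functions below are hypothetical stand-ins: a real agent would use an LLM for the planning step and real calendar or email services as tools.

```python
class Agent:
    def __init__(self, tools):
        self.memory = {}      # component 2: remembered context and preferences
        self.tools = tools    # component 3: what the agent is allowed to access

    def remember(self, key, value):
        self.memory[key] = value

    def plan(self, goal):
        # component 4: break the goal into ordered steps (hard-coded here;
        # a real planner would reason about the goal with an LLM).
        if goal == "schedule_meeting":
            return ["check_availability", "draft_invite", "send_invite"]
        return []

    def execute(self, goal):
        # component 5: the executor carries out each step the planner laid out.
        return [self.tools[step](self.memory) for step in self.plan(goal)]

# Stand-in tools; each receives the memory so it can use stored preferences.
tools = {
    "check_availability": lambda m: f"free slots on {m.get('day', 'Friday')}",
    "draft_invite": lambda m: f"invite drafted for the {m.get('time', 'morning')}",
    "send_invite": lambda m: "invite sent",
}

agent = Agent(tools)
agent.remember("time", "afternoon")          # preference from an earlier chat
results = agent.execute("schedule_meeting")  # component 1: the goal
```

Note how the remembered preference changes the drafted invite: drop the `remember` call and the agent falls back to its default, which is exactly the stateless chatbot behavior described above.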

18:36

Nikita: And where are generative AI agents being used?

Himanshu: Generative AI agents aren't just abstract ideas, they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision making. For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history.

For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERP spreadsheets and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes.

For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can also log CRM entries automatically once a meeting summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage.

For human resources, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount.

20:02

Nikita: Himanshu, let's talk about the capabilities and benefits of generative AI agents.

Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first-level queries, freeing humans for complex cases.

They also enhance the decision making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts.

In the case of personalization, agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving the HR team hours each week.

21:06

Lois: Ok. And what are the risks of using generative AI agents?

Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks, such as data entry or content sorting, are at risk. In the case of ethics and accountability, when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness.

For data privacy, agents often access sensitive data, for example, employee records or customer history. If mishandled, it could lead to compliance violations. And then there are hallucinations: agents may generate confident but incorrect outputs. This can mislead users, especially in critical domains like health care, finance, or legal.

So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully with a human lens and strong guardrails. And that's how we ensure the benefits outweigh the risks.

22:10

Lois: Thank you so much, Himanshu, for educating us. We've had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course.

Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham…

Lois: And Lois Houston signing off!

22:32

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

Episoder(142)

Best of 2025: What is Multicloud?

Best of 2025: What is Multicloud?

This week, hosts Lois Houston and Nikita Abraham are shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers. Joined by Senior Manager of CSS OU Cloud Delivery Samvit Mishra, they discuss why multicloud is becoming essential for businesses, offering freedom from vendor lock-in and the ability to cherry-pick the best services. They also talk about Oracle's pioneering role in multicloud and its partnerships with Microsoft Azure, Google Cloud, and Amazon Web Services.   Oracle Cloud Infrastructure Multicloud Architect Professional: https://mylearn.oracle.com/ou/course/oracle-cloud-infrastructure-multicloud-architect-professional-2025-/144474 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, and the OU Studio Team for helping us create this episode.   ------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Communications and Adoption with Customer Success Services, and with me is Nikita Abraham, Team Lead: Editorial Services with Oracle University.   Nikita: Hi everyone! You're listening to our Best of 2025 series, where over the next few weeks, we're revisiting four of our most popular episodes of the year.      Lois: Today is #2 of 4, and we're throwing it back to an episode with Senior Manager of CSS OU Cloud Delivery Samvit Mishra. This episode was all about shining a light on multicloud, a game-changing strategy involving the use of multiple cloud service providers.     
01:07 Nikita: That's right, Lois. Oracle has been an early adopter of multicloud and a pioneer in multicloud services. So, we began that conversation by asking Samvit to explain what multicloud is and why someone would need more than one cloud provider.  Samvit: Multicloud is a very simple, basic concept. It is the coordinated use of cloud services from more than one cloud service provider.  01:30 Nikita: But why would someone want to use more than one cloud service provider? Samvit: There are many reasons why a customer might want to leverage two or more cloud service providers. First, it addresses the very real concern of mitigating or avoiding vendor lock-in. By using multiple providers, companies can avoid being tied down to one vendor and maintain their flexibility. 01:53 Lois: That's like not putting all your eggs in one basket, so to speak. Samvit: Exactly. Another reason is that customers want the best of breed. What that means is basically leveraging or utilizing the best product from one cloud service provider and pairing it against the best product from another cloud service provider. Getting a solution out of the combined products…out of the coordinated use of those services.  02:22 Nikita: So, it sounds like multicloud is becoming the new normal. And as we were saying before, Oracle was a pioneer in this space. But why did we embrace multicloud so wholeheartedly? Samvit: We recognized that our customers were already moving in this direction. Independent studies from Flexera found that 89% of the subjects of the study used multicloud. And we conducted our own study and came to similar numbers. Over 90% of our customers use two or more cloud service providers.  HashiCorp, the big infrastructure as code company, came to similar numbers as well, 94%. They basically asked companies if multicloud helped them advance their business goals. And 94% said yes. And all this is very recent data.  
03:13 Lois: Can you give us the backstory of Oracle's entry into the multicloud space? Samvit: Sure. So back in 2019, Oracle and Microsoft Azure joined forces and announced the interconnect service between Oracle Cloud Infrastructure and Microsoft Azure. The interconnect was between Oracle's FastConnect and Microsoft Azure's ExpressRoute. This was a big step, as it allowed for a direct connection between the two providers without needing a third-party. And now we have several of our data centers interconnected already. So, out of the 48 regions, 12 of them are already interconnected. And more are coming. And you can very easily configure the interconnect.  This interconnectivity guarantees low latency, high throughput, and predictable performance. And also, on the OCI side, there are no egress or ingress charges for your data.  There's also a product called Oracle Database@Azure, where Oracle and Microsoft deliver Oracle Database services in Microsoft Azure data centers.  04:20 Lois: That's exciting! And what are the benefits of this product? Samvit: The main advantage is the co-location. Being co-located with the Microsoft Azure data center offers you native integration between Azure and OCI resources. No manual configuration of a private interconnect between the two providers is needed. You're going to get microsecond latency between your applications and the Oracle Database.  The OCI-native Exadata Database Service is available on Oracle Database@Azure. This enables you to get the highest level of Oracle Database performance, scalability, security, and availability. And your tech support can be provided either from Microsoft or from Oracle.  05:11 AI is being used in nearly every industry…healthcare, manufacturing, retail, customer service, transportation, agriculture, you name it! And it's only going to get more prevalent and transformational in the future. It's no wonder that AI skills are the most sought-after by employers. 
If you're ready to dive in to AI, check out the OCI AI Foundations training and certification that's available for free! It's the perfect starting point to build your AI knowledge. So, get going! Head on over to mylearn.oracle.com to find out more.  05:51 Nikita: Welcome back. Samvit, there have been some new multicloud milestones from OCI, right? Can you tell us about them?  Samvit: That's right, Niki. I am thrilled to share the latest news on Oracle's multicloud partnerships. We now have agreements with Microsoft Azure, Google Cloud, and Amazon Web Services.  So, as we were discussing earlier, with Azure, we have the Oracle Interconnect for Azure and Oracle Database@Azure. Now, with Google Cloud, we have the Oracle Interconnect for Google Cloud. And it is very similar to the Oracle Interconnect for Azure. With Google Cloud, we have physically interconnected data centers and they provide a sub-2 millisecond latency private interconnection. So, you can come in and provision virtual circuits going from Oracle FastConnect to Google Cloud Interconnect.  And the best thing is that there are no egress or ingress charges for your data. The way it is structured is you have your Oracle Cloud Infrastructure on one side, with your virtual cloud network, your subnets, and your resources. And on the other side, you have your Google Cloud router with your virtual private cloud subnet and your resources interconnecting.  You initiate the connectivity on the Google Cloud side, retrieve the service key and provide that service key to Oracle Cloud Infrastructure, and complete the interconnection on the OCI side. So, for example, our US East Ashburn interconnect will match with us-east4 on the Google Cloud side.  07:29 Lois: Now, wasn't the other major announcement Oracle Database@Google Cloud? Tell us more about that, please. Samvit: With Oracle Database@Google Cloud, you can run your applications on Google Cloud and the database inside the Google Cloud platform. 
That's the Oracle Cloud Infrastructure database co-located in Google Cloud platform data centers. It allows you to run native integration between GCP and OCI resources with no manual configuration of private interconnect between these two cloud service providers.  That means no FastConnect, no Interconnect because, again, the database is located in the Google Cloud data center. And you're going to get microsecond latency and the OCI native Exadata Database Service. So, you're going to gain the highest level of Oracle Database performance, scalability, security, and availability.  08:25 Lois: And how is the tech support managed? Samvit: The technical support is a collaboration between Google Cloud and Oracle Cloud Infrastructure. That means you can either have the technical support provided to completion by Google Cloud or by Oracle. One of us will provide you with an end-to-end solution.  08:43 Nikita: During CloudWorld last year, we also announced Oracle Database@AWS, right?  Samvit: Yes, Niki. That's where Oracle and Amazon Web Services deliver the Oracle Database service on Oracle Cloud Infrastructure in your AWS data center. This will provide you with native integration between AWS and OCI resources, with no manual configuration of private interconnect between AWS and OCI. And you're getting microsecond latency with the OCI-native Exadata Database Service.  And again, as with Oracle Database@Google Cloud and Oracle Database@Azure, you're gaining the highest level of Oracle Database performance, scalability, security, and availability. And the technical support is provided by either AWS or Oracle all the way to completion. Now, Oracle Database@AWS is currently available in limited preview, with broader availability in the coming months as it expands to new regions to meet the needs of our customers.  09:49 Lois: That's great. Now, how does Oracle fare when it comes to pricing, especially compared to our major cloud competitors?  
Samvit: Our pricing is pretty consistent. You'll see that in all cases across the world, we have the less expensive solution for you and the highest performance as well.  10:06 Nikita: Let's move on to some use cases, Samvit. How might a company use the multicloud setup? Samvit: Let's start with the split-stack architecture between Oracle Cloud Infrastructure and Microsoft Azure.  Like I was saying earlier, this partnership dates back to 2019. And basically, we eliminated the FastConnect partner from the middle. And this will provide you with high throughput, low latency, and very predictable performance, all of this on highly available links. These links are redundant, ensuring business continuity between OCI and Azure.  And you can have your database on the OCI side and your application on Microsoft Azure side or the other way around. You can have SQL Server on Azure and the application running on Oracle Cloud Infrastructure. And this is very easy to configure.  10:55 Lois: It really sounds like Oracle is at the forefront of the multicloud revolution. Thanks so much, Samvit, for shedding light on this exciting topic.  Samvit: It was my pleasure.  Nikita: That's a wrap for today. To learn more about what we discussed, head over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure Multicloud Architect Professional course.  Lois: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Lois Houston...   Nikita: And Nikita Abraham, signing off!   11:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

9 Dec 11min

Best of 2025: Introduction to MySQL

Join hosts Lois Houston and Nikita Abraham as they explore the world of MySQL 8.4. Together with Perside Foster, a MySQL Principal Solution Engineer, they break down the fundamentals of MySQL, its wide range of applications, and why it's so popular among developers and database administrators. This episode also covers key topics like licensing options, support services, and the various tools, features, and plugins available in MySQL Enterprise Edition.   MySQL 8.4 Essentials: https://mylearn.oracle.com/ou/course/mysql-84-essentials/141332/226362 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   -----------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Communications and Adoption with Customer Success Services.    Lois: Hi there! If you've been following along with us, you'll know we've had some really interesting seasons this year. We covered MySQL, Multicloud, APEX, GoldenGate, Artificial Intelligence, and Cloud Tech.   Nikita: And we've had some pretty awesome special guests too. Do go back and check out those episodes if any of these topics interest you.         Lois: As we close out the year, we thought this would be a good time to revisit some of our best episodes. So over the next few weeks, you'll be able to listen to four of our most popular episodes of the year.     
01:12 Nikita: Right, this is the best of the best according to you, our listeners. Today's episode is #1 of 4 and is a throwback to a discussion with MySQL Principal Solution Engineer Perside Foster on the Oracle MySQL ecosystem and its various components. We began by asking Perside to explain what MySQL is and why it's so widely used. So, let's get to it!     Perside: MySQL is a relational database management system that organizes data into structured tables, rows, and columns for efficient programming and data management. MySQL is transactional by nature. When storing and managing data, actions such as selecting, inserting, updating, or deleting are required. MySQL groups these actions into a transaction. The transaction is saved only if every part completes successfully. 02:13 Lois: Now, how does MySQL work under the hood? Perside: MySQL is a high-performance database that uses its default storage engine, known as InnoDB. InnoDB helps MySQL handle complex operations and large data volumes smoothly. 02:33 Nikita: For the unversed, what are some day-to-day applications of MySQL? How is it used in the real world? Perside: MySQL works well with online transaction processing workloads. It handles transactions quickly and manages large volumes of transactions at once. OLTP, with its low latency and high throughput, makes MySQL ideal for high-speed environments like banking or online shopping. MySQL not only stores data but also replicates it from a main server to several replicas. 03:14 Nikita: That's impressive! And what are the benefits of using MySQL?  Perside: It improves data availability and load balancing, which is crucial for businesses that need up-to-date information. MySQL replication supports read scale-out by distributing queries across servers, which increases availability. MySQL is the most popular database on the web. 03:44 Lois: And why is that? What makes it so popular? What sets it apart from the other database management systems? 
Perside: First, it is a relational database management system that supports SQL. It also works as a document store, enabling the creation of both SQL and NoSQL applications without the need for separate NoSQL databases. Additionally, MySQL offers advanced security features to protect data integrity and privacy. It also uses tablespaces for better disk space management. This gives database administrators total control over their data storage. MySQL is simple, solid in its reliability, and secure by design. It is easy to use and ideal for both beginners and professionals. MySQL is proven at scale by efficiently handling large data volumes and high transaction rates. MySQL is also open source. This means anyone can download and use it for free. Users can modify the MySQL software to meet their needs. However, it is governed by the GNU General Public License, or GPL. GPL outlines specific rules for its use. MySQL offers two major editions. For developers and small teams, the Community Edition is available for free and includes all of the core features needed. For large enterprises, the Commercial Edition provides advanced features, management tools, and dedicated technical support. 05:42 Nikita: Ok. Let's shift focus to licensing. Who is it useful for?  Perside: MySQL licensing is essential for independent software vendors. They're called ISVs. And original equipment manufacturers, they're called OEMs. This is because these companies often incorporate MySQL code into their software products or hardware systems to boost the functionality and performance of their products. MySQL licensing is equally important for value-added resellers. We call those VARs. And also, it's important for other distributors. These groups bundle MySQL with other commercially licensed software to sell as part of their product offering. The GPL v.2 license might suit Open Source projects that distribute their products under that license.   
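The all-or-nothing transaction behavior Perside described at the top of the conversation can be sketched in a few lines of Python. This is a toy illustration: it uses the standard library's sqlite3 as a stand-in database so the sketch runs without a MySQL server, but the same commit/rollback pattern applies to MySQL through a connector such as mysql-connector-python.

```python
import sqlite3

# sqlite3 stands in for MySQL here so the sketch needs no server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    # The debit below is one half of a transfer; the matching credit
    # never runs because we simulate a failure partway through.
    conn.execute("UPDATE accounts SET balance = balance - 30 WHERE name = 'alice'")
    raise RuntimeError("simulated failure before the matching credit")
except RuntimeError:
    # One part of the transaction failed, so nothing is saved.
    conn.rollback()

balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)  # 100: the debit was rolled back
```

Because the whole group of statements is treated as a unit, the partial debit disappears on rollback, which is exactly the "saved only if every part completes successfully" guarantee.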
06:46 Lois: But what if some independent software vendors, original equipment manufacturers, or value-added resellers don't want to create Open Source products? They don't want their source to be publicly available and they want to keep it private. What happens then? Perside: This is why Oracle provides a commercial licensing option. This license allows businesses to use MySQL in their products without having to disclose their source code as required by GPL v2. 07:17 Nikita: I want to bring up the robust support services that are available for MySQL Enterprise. What can we expect in terms of support, Perside?  Perside: MySQL Enterprise Support provides direct access to the MySQL Support team. This team consists of experienced MySQL developers, who are experts in databases. They understand the issues and challenges their customers face because they, too, have personally tackled these issues and challenges. This support service operates globally and is available in 29 languages. So no matter where customers are located, Oracle Support provides assistance, most likely in their preferred language. MySQL Enterprise Support offers regular updates and hot fixes to ensure that MySQL customers' systems stay current with the latest improvements and security patches. MySQL Support is available 24 hours a day, 7 days a week. This ensures that whenever there is an issue, Oracle Support can provide the needed help without any delay. There are no restrictions on how many times customers can receive help from the team because MySQL Enterprise Support allows for unlimited incidents. MySQL Enterprise Support goes beyond simply fixing issues. It also offers guidance and advice. Whether customers require assistance with performance tuning or troubleshooting, the team is there to support them every step of the way.  09:11 Lois: Perside, can you walk us through the various tools and advanced features that are available within MySQL? Maybe we could start with MySQL Shell. 
Perside: MySQL Shell is an integrated client tool used for all MySQL database operations and administrative functions. It's a top choice among MySQL users for its versatility and powerful features. MySQL Shell offers multi-language support for JavaScript, Python, and SQL. These naturally scriptable languages make coding flexible and efficient. They also allow developers to use their preferred programming language for everything, from automating database tasks to writing complex queries. MySQL Shell supports both document and relational models. Whether your project needs the flexibility of NoSQL's document-oriented structures or the structured relationships of traditional SQL tables, MySQL Shell manages these different data types without any problems. Another key feature of MySQL Shell is its full access to both development and administrative APIs. This ability makes it easy to automate complex database operations and do custom development directly from MySQL Shell. MySQL Shell excels at DBA operations. It has extensive tools for database configuration, maintenance, and monitoring. These tools not only improve the efficiency of managing databases, but they also reduce the possibility of human error, making MySQL databases more reliable and easier to manage.  11:21 Nikita: What about the MySQL Server tool? I know that it is the core of the MySQL ecosystem and is available in both the community and commercial editions. But how does it enhance the MySQL experience? Perside: It connects with various devices, applications, and third-party tools to enhance its functionality. The server manages both SQL for structured data and NoSQL for schemaless applications. It has many key components. The parser, which interprets SQL commands. The optimizer, which ensures efficient query execution. And then the query cache and buffer pools. They reduce disk usage and speed up access. 
InnoDB, the default storage engine, maintains data integrity and supports robust transaction and recovery mechanisms. MySQL is designed for scalability and reliability. With features like replication and clustering, it distributes data, manages more users, and ensures consistent uptime. 12:44 Nikita: What role does MySQL Enterprise Edition play in MySQL server's capabilities? Perside: MySQL Enterprise Edition improves MySQL server by adding a suite of commercial extensions. These exclusive tools and services are designed for enterprise-level deployments and challenging environments. These tools and services include secure online backup. It keeps your data safe with efficient backup solutions. Real-time monitoring provides insight into database performance and health. The seamless integration connects easily with existing infrastructure, improving data flow and operations. Then you have the 24/7 expert support. It offers round-the-clock assistance to optimize and troubleshoot your databases. 13:48 Lois: That's an extensive list of features. Now, can you explain what MySQL Enterprise plugins are? I know they're specialized extensions that boost the capabilities of MySQL server, tools, and services, but I'd love to know a little more about how they work. Perside: Each plugin serves a specific purpose. The firewall plugin protects against SQL injection by allowing only pre-approved queries. The audit plugin logs database activities, tracking who accesses databases and what they do. The encryption plugin secures data at rest, protecting it from unauthorized access. Then we have the authentication plugin, which integrates with systems like LDAP and Active Directory for access control. Finally, the thread pool plugin optimizes performance in high-load situations by effectively controlling how many execution threads are used and how long they run. The plugins and tools are included in the MySQL Enterprise Edition suite. 
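The buffer pool Perside mentioned is essentially an in-memory page cache: recently used pages stay in memory so repeated reads avoid the disk. As a rough illustration of that idea (a toy least-recently-used cache, not InnoDB's actual implementation), consider:

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU page cache illustrating the buffer-pool idea."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page_id -> page data, in LRU order
        self.disk_reads = 0

    def read_page(self, page_id):
        if page_id in self.pages:            # cache hit: no disk access
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        self.disk_reads += 1                 # cache miss: simulate a disk read
        data = f"data-for-page-{page_id}"
        self.pages[page_id] = data
        if len(self.pages) > self.capacity:  # evict the least recently used page
            self.pages.popitem(last=False)
        return data

pool = BufferPool(capacity=2)
pool.read_page(1)
pool.read_page(2)
pool.read_page(1)        # hit: page 1 is still in memory
print(pool.disk_reads)   # 2: only the two misses touched "disk"
```

Real buffer pools track dirty pages, pin counts, and more, but the core win is the same: hot pages are served from memory, which is why they "reduce disk usage and speed up access."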
15:16 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with like-minded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 15:47 Nikita: Welcome back! We've been going through the various MySQL tools, and another important one is MySQL Enterprise Backup, right?  Perside: MySQL Enterprise Backup is a powerful tool that offers online, non-blocking backup and recovery. It makes sure databases remain available and perform optimally during the backup process. It also includes advanced features, such as incremental and differential backups. Additionally, MySQL Enterprise Backup supports compression to reduce backup size and encryption to keep data secure. One of the standard capabilities of MySQL Enterprise Backup is its seamless integration with media management software, or MMS. This integration simplifies the process of managing and storing backups, ensuring that data is easily accessible and secure. Then we have the MySQL Workbench Enterprise. It enhances database development and design with robust tools for creating and managing your diagrams and ensuring proper documentation. It simplifies data migration with powerful tools that make it easy to move databases between platforms. For database administration, MySQL Workbench Enterprise offers efficient tools for monitoring, performance tuning, user management, and backup and recovery. MySQL Enterprise Monitor is another tool. It provides real-time MySQL performance and availability monitoring. It helps track a database's health and performance. It visually finds and fixes problem queries. This is to make it easy to identify and address performance issues. 
It offers MySQL best-practice advisors to guide users in maintaining optimal performance and security. Lastly, MySQL Enterprise Monitor is proactive and provides forecasting. 18:24 Lois: Oh, that's really going to help users stay ahead of potential issues. That's fantastic! What about the Oracle Enterprise Manager Plugin for MySQL? Perside: This one offers availability and performance monitoring to make sure MySQL databases are running smoothly and efficiently. It provides configuration monitoring. This is to help keep track of the database settings and configuration. Finally, it collects all available metrics to provide comprehensive insight into the database operation. 19:03 Lois: Are there any tools designed to handle higher loads and improve security? Perside: MySQL Enterprise Thread Pool improves scalability as concurrent connections grow. It makes sure the database can handle increased loads efficiently. MySQL Enterprise Authentication is another tool. This one integrates MySQL with existing security infrastructures. It provides robust security solutions. It supports Linux PAM, LDAP, Windows, Kerberos, and even FIDO for passwordless authentication. 19:46 Nikita: Do any tools offer benefits like customized logging, data protection, and database security? Perside: The MySQL Enterprise Audit provides out-of-the-box logging of connections, logins, and queries in XML or JSON format. It also offers simple to fine-grained policies for filtering and log rotation. This is to ensure comprehensive and customizable logging. MySQL Enterprise Firewall detects and blocks out-of-policy database transactions. This is to protect your data from unauthorized access and activities. We also have MySQL Enterprise Asymmetric Encryption. It uses MySQL encryption libraries for key management, signing, and verifying data. It ensures data stays secure during handling. MySQL Transparent Data Encryption, another tool, provides data-at-rest encryption within the database. 
The Master Key is stored outside of the database in a KMIP 1.1-compliant Key Vault. That is to improve database security. Finally, MySQL Enterprise Masking offers masking capabilities, including string masking and dictionary replacement. This ensures sensitive data is protected by obscuring it. It also provides random data generators, such as range-based, payment card, email, and social security number generators. These tools help create realistic but anonymized data for testing and development. 21:56 Lois: Can you tell us about HeatWave, the MySQL cloud service? We're going to have a whole episode dedicated to it soon, but just a quick introduction for now would be great. Perside: MySQL HeatWave offers a fully managed MySQL service. It provides deployment, backup and restore, high availability, resizing, and read replicas, all the features you need for efficient database management. This service is a powerful union of Oracle Infrastructure and MySQL Enterprise Edition 8. It combines robust performance with top-tier infrastructure. With MySQL HeatWave, your systems are always up to date with the latest security fixes, ensuring your data is always protected. Plus, it supports both OLTP and analytics/ML use cases, making it a versatile solution for diverse database needs. 23:05 Nikita: So to wrap up, what are your key takeaways when it comes to MySQL? Perside: When you use MySQL, here is the bottom line. MySQL Enterprise Edition delivers unmatched performance at scale. It provides advanced monitoring and tuning capabilities to ensure efficient database operation, even under heavy loads. Plus, it provides assurance and immediate help when needed, allowing you to depend on expert support whenever an issue arises. Regarding total cost of ownership, TCO, this edition significantly reduces the risk of downtime and enhances productivity. This leads to significant cost savings and improved operational efficiency. 
On the matter of risk, MySQL Enterprise Edition addresses security and regulatory compliance. This is to make sure your data meets all necessary standards. Additionally, it provides direct contact with the MySQL team for expert guidance. In terms of DevOps agility, it supports automated scaling and management, as well as flexible real-time backups, making it ideal for agile development environments. Finally, concerning customer satisfaction, it enhances application performance and uptime, ensuring your customers have a reliable and smooth experience. 25:02 Lois: Thank you so much, Perside. This is really insightful information. To learn more about all the support services that are available, visit support.oracle.com. This is the central hub for all MySQL Enterprise Support resources.  Nikita: Yeah, and if you want to know about the key commercial products offered by MySQL, visit mylearn.oracle.com and search for the MySQL 8.4: Essentials course.  Lois: We hope you enjoyed that conversation. Join us next week for another throwback episode. Until then, this is Lois Houston…    Nikita: And Nikita Abraham, signing off!    25:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
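The masking techniques Perside mentioned earlier in the episode (string masking, dictionary replacement, and random data generators) boil down to small, mechanical transformations. This toy sketch illustrates the ideas only; it is not the MySQL Enterprise Masking API, and the function names here are made up for illustration:

```python
import random

def mask_string(value, keep_last=4, mask_char="*"):
    """String masking: hide all but the last few characters."""
    return mask_char * (len(value) - keep_last) + value[-keep_last:]

def dictionary_replace(value, dictionary, rng):
    """Dictionary replacement: swap the real value for one from a list."""
    return rng.choice(dictionary)

def random_card_number(rng):
    """Random data generator: a made-up 16-digit payment card number."""
    return "".join(str(rng.randint(0, 9)) for _ in range(16))

rng = random.Random(42)  # seeded so the sketch is repeatable
print(mask_string("4111111111111111"))   # ************1111
print(dictionary_replace("Perside", ["Alice", "Bob", "Carol"], rng))
print(len(random_card_number(rng)))      # 16
```

The output data keeps the shape of the original (length, format) while hiding the real values, which is what makes it usable for testing and development.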

2 Dec 26min

Understanding Security Risks and Threats in the Cloud - Part 1

This week, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to explore what truly keeps data safe, and what puts it at risk.   They discuss the CIA triad, dive into hashing and encryption, and shed light on how cyber threats like malware, phishing, and ransomware try to sneak past defenses.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! Last week, we discussed how you can keep your data safe with authentication and authorization. Today, we'll talk about various security risks that could threaten your systems. 00:48 Lois: And to help us understand this better, we have Orlando Gentil, Principal OCI Instructor, back with us. Orlando, welcome back! Let's start with the big picture—why is security such a crucial part of our digital world today? Orlando: Whether you are dealing with files stored on a server or data flying across the internet, one thing is always true—security matters. In today's digital world, it's critical to ensure that data stays private, accurate, and accessible only to the right people.  01:20 Nikita: And how do we keep data private, secure, and unaltered? 
Is there a security framework that we can use to make sense of different security practices? Orlando: The CIA triad defines three core goals of information security.  CIA stands for confidentiality, integrity, and availability. Confidentiality is about keeping data private. Only authorized users should be able to access sensitive information. This is where encryption plays a huge role. Integrity means ensuring that the data hasn't been altered, whether accidentally or maliciously. That's where hashing helps. You can compare a stored hash of data to a new hash to make sure nothing's changed. Availability ensures that data is accessible when it's needed. This includes protections like system redundancy, backups, and anti-DDoS mechanisms. Encryption and hashing directly support confidentiality and integrity. And they indirectly support availability by helping keep systems secure and resilient. 02:31 Lois: Let's rewind a bit. You spoke about something called hashing. What does that mean? Orlando: Hashing is a one-way transformation. You feed in data and it produces a unique fixed-length string called a hash. The important part is the same input always gives the same output, but you cannot go backward and recover the original data from the hash. It's commonly used for verifying integrity. For example, to check if a file has changed or a message was altered in transit. Hashing is also used in password storage. Systems don't store actual passwords, just their hashes. When you log in, the system hashes what you type and compares it to the stored hash. If they match, you're in. But your actual password was never stored or revealed. So hashing isn't about hiding data, it's about proving it hasn't changed. So, while hashing is all about protecting integrity, encryption is the tool we use to ensure confidentiality. 03:42 Nikita: Right, the C in CIA. And how does it do that? Orlando: Encryption takes readable data, also known as plaintext, and turns it into something unreadable called ciphertext using a key. 
To get the original data back, you need to decrypt it using the right key. This is especially useful when you are storing sensitive files or sending data across networks. If someone intercepts the data, all they will see is gibberish, unless they have the correct key to decrypt it. Unlike hashing, encryption is reversible as long as you have the right key. 04:23 Lois: And are there different types of encryption that serve different purposes? Orlando: Symmetric and asymmetric encryption. With symmetric encryption, the same key is used to both encrypt and decrypt the data. It's fast and great for securing large volumes of data, but the challenge lies in safely sharing the key. Asymmetric encryption solves that problem. It uses a pair of keys: a public key that anyone can use to encrypt data, and a private key that only the recipient holds to decrypt it. This method is more secure for communications, but also slower and more resource-intensive. In practice, systems often use both: asymmetric encryption to exchange a secure symmetric key, and then symmetric encryption for the actual data transfer. 05:21 Nikita: Orlando, where is encryption typically used in day-to-day activities? Orlando: Data can exist in two primary states: at rest and in transit. Data at rest refers to data stored on disk, in databases, backups, or object storage. It needs protection from unauthorized access, especially if a device is stolen or compromised. This is where things like full disk encryption or encrypted storage volumes come in. Data in transit is data being sent from one place to another, like a user logging into a website or an API sending information between services. To protect it from interception, we use protocols like TLS, SSL, VPNs, and encrypted communication channels. Both forms of data need encryption, but the strategies and threats can differ. 06:19 Lois: Can you do a quick comparison between hashing and encryption? Orlando: Hashing is one way. 
It's used to confirm that data hasn't changed. Once data is hashed, it cannot be reversed. It's perfect for use cases like password storage or checking the integrity of files. Encryption, on the other hand, is two-way. It's designed to protect data from unauthorized access. You encrypt the data so only someone with the right key can decrypt and read it. That's what makes it ideal for keeping files, messages, or network traffic confidential. Both are essential for different reasons. Hashing for trust, and encryption for privacy. 07:11 Adopting a multicloud strategy is a big step towards future-proofing your business and we're here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey to multicloud today by visiting mylearn.oracle.com.  07:39 Nikita: Welcome back! When we talk about cybersecurity, we hear a lot about threats and vulnerabilities. But what do those terms really mean? Orlando: In cybersecurity, a threat is a potential danger and a vulnerability is a weakness an asset possesses that a threat can exploit. When a threat and a vulnerability align, it creates a risk of harm. A threat actor then performs an exploit to leverage that vulnerability, leading to undesirable impact, such as data loss or downtime. After an impact, the focus shifts to response and recovery to mitigate damage and restore operations.  08:23 Lois: Ok, let's zero in on vulnerabilities. What counts as a vulnerability, and what categories do attackers usually target first?  Orlando: Software and hardware bugs are simply unintended flaws in a system's core programming or design. Misconfigurations arise when systems aren't set up securely, leaving gaps. Weak passwords and authentication provide easy entry points for attackers. A lack of encryption means sensitive data is openly exposed. 
Human error involves mistakes made by people that unintentionally create security risks. Understanding these common vulnerability types is the first step in building more resilient and secure systems as they represent the critical entry points attackers leverage to compromise systems and data. By addressing these, we can significantly reduce our attack surface and enhance overall security.  09:28 Nikita: Can we get more specific here? What are the most common cybersecurity threats that go after vulnerabilities in our systems and data? Orlando: Malware is a broad category, including viruses, worms, Trojans, and spyware. Its goal is to disrupt or damage systems. Ransomware has been on the rise, targeting everything from hospitals to government agencies. It locks your files and demands a ransom, usually in cryptocurrency. Phishing relies on deception. Attackers impersonate legitimate contacts to trick users into clicking malicious links or giving up credentials. Insider threats are particularly dangerous because they come from within: employees, contractors, or even former staff with lingering access. Lastly, DDoS attacks aim to make online services unavailable by overwhelming them with traffic, often using a botnet, a network of compromised devices. 10:34 Lois: Orlando, can you walk us through how each of these common cybersecurity threats works? Orlando: Malware, short for malicious software, is one of the oldest and most pervasive types of threats. It comes in many forms, each with unique methods and objectives. A virus typically attaches itself to executable files and documents and spreads when those are shared or opened. Worms are even more dangerous in networked environments as they self-replicate and spread without any user action. Trojans deceive users by posing as harmless or helpful applications. Once inside, they can steal data or open backdoors for remote access. 
Spyware runs silently in the background, collecting sensitive information like keystrokes or login credentials. Adware might seem like just an annoyance, but it can also track your activity and compromise privacy. Finally, rootkits are among the most dangerous because they operate at a low system level, often evading detection tools and allowing attackers long-term access. In practice, malware can be a combination of these types. Attackers often bundle different techniques to maximize damage.  12:03 Nikita: And what about ransomware? Why is it such a serious threat? Orlando: Ransomware has become one of the most disruptive and costly types of cyber attacks in recent years. Its goal is simple but devastating: to encrypt your data and demand payment in exchange for access. It usually enters through phishing emails, insecure remote desktop protocol ports, or known vulnerabilities. Once inside, it often spreads laterally across the network before activating, ensuring maximum impact. There are two main forms. Crypto ransomware encrypts user files, making them inaccessible. Locker ransomware goes a step further, locking the entire system interface, preventing any use at all. Victims are then presented with a ransom note, typically requesting cryptocurrency payments in exchange for the decryption key. What makes ransomware so dangerous is not just the encryption itself, but the pressure it creates. Healthcare institutions, for instance, can't afford the downtime, making them prime targets.  13:18 Lois: Wow. Thanks, Orlando, for joining us today.  Nikita: Yeah, thanks Orlando. We'll be back next week with more on how you use security models to tackle these threats head-on. And if you want to learn about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Until next time, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:42 That's all for this episode of the Oracle University Podcast. 
If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
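The one-way hashing behavior Orlando described in this episode (fixed-length output, same input gives same output, and any change produces a completely different hash) is easy to see with Python's standard hashlib module:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # One-way transformation: a fixed-length SHA-256 hash. The same
    # input always gives the same output, and the original data
    # cannot be recovered from the hash.
    return hashlib.sha256(data).hexdigest()

message = b"transfer $100 to account 42"
stored_hash = fingerprint(message)

# Integrity check: re-hash the received data and compare to the
# stored hash. Any alteration to the message fails the comparison.
unchanged = fingerprint(b"transfer $100 to account 42") == stored_hash
tampered = fingerprint(b"transfer $900 to account 42") == stored_hash
print(unchanged, tampered)  # True False
```

Note the hash is always 64 hex characters regardless of input size, and nothing here can be reversed to recover the message, which is exactly why hashing serves integrity rather than confidentiality.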

18 Nov 14min

Networking & Security Essentials

How do all your devices connect and stay safe in the cloud? In this episode, Lois Houston and Nikita Abraham talk with OCI instructors Sergio Castro and Orlando Gentil about the basics of how networks work and the simple steps that help protect them.   You'll learn how information gets from one place to another, why tools like switches, routers, and firewalls are important, and what goes into keeping access secure.   The discussion also covers how organizations decide who can enter their systems and how they keep track of activity.   Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! In the last episode, we spoke about local area networks and domain name systems. Today, we'll continue our conversation on the fundamentals of networking, covering a variety of important topics.  00:50 Lois: That's right, Niki. And before we close, we'll also touch on the basics of security. Joining us today are two OCI instructors from Oracle University: Sergio Castro and Orlando Gentil. So glad to have you both with us guys. Sergio, with so many users and devices connecting to the internet, how do we make sure everyone can get online? 
Can you break down what Network Address Translation, or NAT, does to help with this?

Sergio: The world population is bigger than 4.3 billion people, which is roughly the number of IPv4 addresses available. That means that if we were to interconnect every single human to the internet, we would not have enough addresses. And not all of us are connected to the internet, but those of us who are, you know that we have more than one device at our disposal. We might have a computer, a laptop, mobile phones, you name it. And all of them need IP addresses. So that's why Network Address Translation exists: it translates your communication from a private IP to a public IP address. That's the main purpose: translate.

02:05

Nikita: Okay, so with NAT handling the IP translation, how do we ensure that the right data reaches the right device within a network? Or to put it differently, what directs external traffic to specific devices inside a network?

Sergio: Port forwarding works in a reverse way to Network Address Translation. So, let's assume you want to turn this PC here into a web server. So, people from the outside, customers from outside of your local area network, will access your PC web server. Let's say that it's an online store. Now all of these devices are using the same public IP address. So how would the traffic be routed specifically to this PC and not to the camera, or to the laptop, which is not a web server, or to your IP TV? This is where port forwarding comes into play. Basically, whenever the router detects a request coming to a specific port, it will forward that request to your PC. It will allow any external device that wants to access this particular web server to establish a session. So, it's a permission that you're allowing to this PC and only to this PC. The other devices will still be isolated. That's what port forwarding is.

03:36

Lois: Sergio, let's talk about networking devices.
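The port-forwarding behavior Sergio describes is essentially a lookup table on the router: a public port maps to exactly one private device, and anything without a rule stays isolated. A minimal Python sketch, with invented addresses, ports, and rules:

```python
# Hypothetical port-forwarding table for a home router.
# The private addresses and port numbers are made up for illustration.
FORWARDING_RULES = {
    80: ("192.168.1.10", 80),    # HTTP requests go to the web server PC
    443: ("192.168.1.10", 443),  # HTTPS requests go to the same PC
}

def forward(public_port: int):
    """Return the private (ip, port) a request should be forwarded to,
    or None if no rule exists, so the device stays isolated."""
    return FORWARDING_RULES.get(public_port)

print(forward(80))    # routed to the web server PC
print(forward(8080))  # no rule: the request is dropped
```

Only the ports listed in the table are reachable from outside; every other device on the LAN remains invisible to external traffic, which is exactly the "permission for this PC and only this PC" idea.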
What are some of the key ones, and what role do they play in connecting everything together?

Sergio: There's plenty of devices for interconnectivity. These are devices that are different from the actual compute instances, virtual machines, cameras, and IPTVs. These are for interconnecting networks. And they have several functionalities.

03:59

Nikita: Yeah, I often hear about a default gateway. Could you explain what that is and why it's essential for a network to function smoothly?

Sergio: A gateway is basically what a web browser goes through when it asks for a service from a web server. We have a gateway in the middle that will take us to that web server. So that's basically the router. A gateway doesn't necessarily have to be a router. It depends on what device you're addressing in a particular configuration. So, a gateway is a connectivity device that connects two different networks. That's basically the functionality.

04:34

Lois: Ok. And when does one use a default gateway?

Sergio: When you do not have a specific route that is targeting a specific router. You might have more than one router in your network, connecting to different other local area networks. You might have a route that will take you to local area network B. And then you might have another router that is connecting you to the internet. So, if you don't have a specific route that will take you to local area network B, then it's going to be utilizing the default gateway. It directs data packets to other networks when no specific route is known. In general terms, the default gateway, again, doesn't have to be a router. It can be any device.

05:22

Nikita: Could you give us a real-world example, maybe comparing a few of these devices in action, so we can see how they work together in a typical network?

Sergio: For example, we have the hub. And the hub operates at the physical layer, or layer 1. And then we have the switch. And the switch operates at layer 2. And we also have the router.
And the router operates at layer 3. So, what's the big difference between these devices and the layers that they operate in? Hubs work in the physical layer of the OSI model, and a hub is basically for connecting multiple devices and making them act as a single network segment. Now, the switch operates at the data link layer and is used for filtering content by reading the addresses of the source and destination. And these are the MAC addresses that I'm talking about. So, it reads where the packet is coming from and where it is going at the local area network level. It connects multiple network segments, and each port is connected to a different segment. And the router is used for routing outside of your local area network; it performs traffic directing functions on the internet. A data packet is typically forwarded from one router to another through different networks until it reaches its destination node. The router works at the TCP/IP network layer, or internet layer.

07:22

Lois: Sergio, what kind of devices help secure a network from external threats?

Sergio: The network firewall is a security device that acts as a barrier between a trusted internal network and an untrusted external network, such as the internet. The network firewall is the first line of defense for traffic that passes in and out of your network. The firewall examines traffic to ensure that it meets the security requirements set by your organization, allowing or blocking traffic based on set criteria. And the main benefit is that it improves security for access management and network visibility.

08:10

Are you keen to stay ahead in today's fast-paced world? We've got your back!
Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you're always in the know, we offer New Features courses that give you an insider's look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started.

08:36

Nikita: Welcome back! Sergio, how do networks manage who can and can't enter based on certain permissions and criteria?

Sergio: The access control list is like the gatekeeper into your local area network. Think about the access control list as the visa on your passport, assuming that the country is your local area network. Now, when you have a passport, you might get a visa that allows you to go into a certain country. So the access control list is a list of rules that defines which users, groups, or systems have permission to access specific resources on your networks. It is a gatekeeper that specifies who's allowed and who's denied. If you don't have a visa to go into a specific country, then you are denied. Similarly here, if you are not part of the rules, if the service that you're trying to access is not part of the rules, then you cannot get in.

09:37

Lois: That's a great analogy, Sergio. Now, let's turn our attention to one of the core elements of network security: authentication and authorization. Orlando, can you explain why authentication and authorization are such crucial aspects of a secure cloud network?

Orlando: Security is one of the most critical pillars in modern IT systems. Whether you are running a small web app or managing global infrastructure, every secure system starts by answering two key questions: Who are you, and what are you allowed to do? This is the essence of authentication and authorization. Authentication is the first step in access control. It's how a system verifies that you are who you claim to be. Think of it like showing your driver's license at a security checkpoint.
The guard checks your photo and personal details to confirm your identity. In IT systems, the same process happens using one or more of these factors: something you know, like a password; something you have, like a security token; or something you are, like a fingerprint. An identity does not refer to just a person. It's any actor, human or not, that interacts with your systems. Users are straightforward: think employees logging into a dashboard. But services and machines are equally important. A backend API may need to read data from a database, or a virtual machine may need to download updates. Treating these non-human identities with the same rigor as human ones helps prevent unauthorized access and improves visibility and security. After confirming your identity, the system can move on to deciding what you're allowed to access. That's where authorization comes in. Once authentication confirms who you are, authorization determines what you are allowed to do. Sticking with the driver's license analogy, you've shown your license and proven your identity, but that doesn't mean that you can drive anything anywhere. Your license class might let you drive a car, but not a motorcycle or a truck. It might be valid in your country, but not in others. Similarly, in IT systems, authorization defines what actions you can take and on which resources. This is usually controlled by policies and roles assigned to your identity. It ensures that users or services only get access to the things they are explicitly allowed to interact with.

12:34

Nikita: How can organizations ensure secure access across their systems, especially when managing multiple users and resources?

Orlando: Identity and Access Management governs who can do what in our systems. Individually, authentication verifies identity and authorization grants access.
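The two-step check Orlando describes, first "who are you?" and then "what may you do?", can be sketched in a few lines of Python. This is a toy illustration, not a real IAM system; the user, password, and permission names are invented:

```python
# Toy identity store: "something you know" and granted permissions.
# Real systems would hash passwords and use policies, not plain dicts.
USERS = {"alice": "s3cret"}
PERMISSIONS = {"alice": {"read:reports"}}

def authenticate(user: str, password: str) -> bool:
    """Authentication: verify the claimed identity."""
    return USERS.get(user) == password

def authorize(user: str, action: str) -> bool:
    """Authorization: check what this identity is allowed to do."""
    return action in PERMISSIONS.get(user, set())

def access(user: str, password: str, action: str) -> str:
    if not authenticate(user, password):
        return "denied: unknown identity"
    if not authorize(user, action):
        return "denied: not permitted"
    return "granted"

print(access("alice", "s3cret", "read:reports"))    # granted
print(access("alice", "s3cret", "delete:reports"))  # denied: not permitted
```

Note that a correct password is never enough on its own: the action must also appear in the identity's permission set, which mirrors the least-privilege idea discussed later in the episode.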
However, managing these processes at scale, across countless users and resources, becomes a complex challenge. That's where Identity and Access Management, or IAM, comes in. IAM is an overarching framework that centralizes and orchestrates both authentication and authorization, along with other critical functions, to ensure secure and efficient access to resources.

13:23

Lois: And what are the key components and methods that make up a robust IAM system?

Orlando: User management, a core component of IAM, provides a centralized identity management system for all user accounts and their attributes, ensuring consistency across applications. Key functions include user provisioning and deprovisioning: automating account creation for new users and timely removal upon departure or role changes. It also covers the full user account lifecycle management, including password policies and account recovery. Lastly, user management often involves directory services integration to unify user information. Access management is about defining access permissions: specifically, what actions users can perform and which resources they can access. A common approach is role-based access control, or RBAC, where permissions are assigned to roles and users inherit those permissions by being assigned to roles. For more granular control, policy-based access control allows for rules based on specific attributes. Crucially, access management enforces the principle of least privilege, granting only the minimum necessary access, and supports segregation of duties to prevent conflicts of interest. For authentication, IAM systems support various methods. Single-factor authentication, relying on just one piece of evidence like a password, offers basic security. However, multi-factor authentication significantly boosts security by requiring two or more distinct verification types, such as a password plus a one-time code.
We also have biometric authentication, using unique physical traits, and token-based authentication, common for APIs and web services.

15:33

Lois: Orlando, when it comes to security, it's not just about who can access what, but also about keeping track of it all. How does auditing and reporting maintain compliance?

Orlando: Auditing and reporting are essential for security and compliance. This involves tracking user activities, logging all access attempts and permission changes. It's vital for meeting compliance and regulatory requirements, allowing you to generate reports for audits. Auditing also aids in security incident detection by identifying unusual activities and providing data for forensic analysis after an incident. Lastly, it offers performance and usage analytics to help optimize your IAM system.

16:22

Nikita: That was an incredibly informative conversation. Thank you, Sergio and Orlando, for sharing your expertise with us. If you'd like to dive deeper into these concepts, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.

Lois: I agree! This was such a great conversation! Don't miss next week's episode, where we'll continue exploring key security concepts that help organizations operate in a scalable, secure, and auditable way. Until next time, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

16:56

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

11 Nov 17min

Inside Cloud Networking

Inside Cloud Networking

In this episode, hosts Lois Houston and Nikita Abraham team up with Senior Principal OCI Instructor Sergio Castro to unpack the basics of cloud networking and the Domain Name System (DNS). You'll learn how local and virtual networks connect devices, and how DNS seamlessly translates familiar names like oracle.com into addresses computers understand.

Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

------------------------------------------------

Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25

Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.

Nikita: Hi everyone! For the last few weeks, we've been talking about different aspects of cloud data centers. Today, we're focusing on something that's absolutely key to how everything works in the cloud: networking and domain name systems.

00:52

Lois: And to guide us through it, we've got Sergio Castro, Senior Principal OCI Instructor at Oracle University. We'll start by trying to understand why networking is so crucial and how it connects everything behind the scenes. Sergio, could you explain what networking means in simple terms, especially for folks new to cloud tech?

Sergio: Networking is the backbone of cloud computing.
It is a fundamental service because it provides the infrastructure for connecting users, applications, and resources within a cloud environment. It basically enables data transfers. It facilitates remote access. And it ensures that cloud services are accessible to users, provided that these users have the correct credentials.

01:38

Nikita: Ok, can you walk us through how a typical network operates?

Sergio: Networking typically starts with the local area network. Basically, networking is a crucial component for any IT service because it's the foundation for the architecture framework of any of the services that we consume today. So, a network is two or more computers interconnected to each other. And it doesn't necessarily need to be a computer. It can be another device such as a printer or an IP TV or an IP phone or an IP camera. Many devices can be part of a local area network. And a local area network can be very small, like I mentioned before, two or more computers, or it could grow into a very robust and complicated set of interconnected networks. And if that happens, then it can become very expensive as well. Cloud networking is the Achilles heel for many database administrators, programmers, quality assurance engineers, and almost any IT professional other than a network administrator. Actually, when the network starts to grow, managing access and permissions and implementing robust security measures, coupled with the critical importance of reliable and secure performance, can create significant hurdles.

03:09

Nikita: What are the different types of networks we have?

Sergio: A local area network is basically in one building. It can cover maybe two buildings that are in close proximity in a small campus, but typically it's very small by definition, and the devices are all interconnected to each other via one router, typically. A metropolitan area network is a network that spans a city or a metro area, hence the name metropolitan area network.
So, one building can be on one edge of the city and the other building can be at the other edge of the city, and they are interconnected by a digital circuit, typically. So that's the case: it's more than one building, and the separation of those buildings is considerable. It can go into several miles. And a wide area network is a network that spans multiple cities, states, or countries; it can even be international.

04:10

Lois: I think we'll focus on the local area network for today's conversation. Could you give us a real-world example, maybe what a home office network setup looks like?

Sergio: You might be accessing this session from your home office, or even from your corporate office. In a home office or a home network, typically, you have a router that is provided to you by the internet vendor, the internet service provider. And then you have your laptop or your computer, your PC, connected to that router. And then you might have other devices, either connected via cable—ethernet cable—or Wi-Fi. And the interconnectivity within that small building is what makes a local area network. And it looks very similar once you move on into a corporate office. Again, it's two or more computers interconnected. That's what makes a local area network. In a corporate office, the difference from a home office or your home is that you have many more computers. And because you have many more computers, that local area network might be divided into subnets. And for that, you need a switch. So, you have additional devices like a switch and a firewall and the router. And then you might have a server as well. So that's the local area network: two or more computers. And local area networks are capable of high speeds because the devices are in close proximity to each other.

05:47

Nikita: Ok… so obviously a local area network has several different components. Let's break them down. What's a client, what's a server, and how do they interact?
Sergio: A client basically is a requester of a service. Like when you hop into your browser and you want to go to a website, for example, oracle.com, you type www.oracle.com; you are requesting a service from a server. And that server typically resides in a data center. Oracle.com, under the Oracle domain, is a big data center with many interconnected servers, interconnected so they can concurrently serve many millions of requests coming into www.oracle.com at the same time. So, servers provide services to client computers. So basically, that's the relation: a client requests a service and the server provides that service.

06:50

Lois: And what does that client-server setup actually look like?

Sergio: So, let's continue with our example of a web browser requesting a service from a web server. So, in this case, the physical computer is the server. And then it has software running on it, and that makes it a web server. So, once you type www.oracle.com, it sends the request and the request is received. And provided that everything's configured correctly and that there are no typos, then it will provide a response and basically give the view of the website. And that's obviously in the local area network, maybe quality assurance testing this before going live. But when it goes live, then you have the internet in the middle. And the internet in the middle has many routers, hubs, and switches.

07:51

Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today!

08:16

Nikita: Welcome back! Sergio, would this client-server model also apply to my devices at home?

Sergio: In your own local area network, you have client-server interactions even without noticing.
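The request-and-response exchange described here can be sketched as a toy in-process "server." The paths and page content are invented for illustration; a real exchange would travel over HTTP and sockets:

```python
# Toy web server: a mapping from requested paths to responses.
# A real server would listen on a socket and speak HTTP.
SITE = {"/": "<html>Welcome to the store</html>"}

def server(request_path: str) -> tuple[int, str]:
    """Return an HTTP-like (status, body) pair for a client's request."""
    if request_path in SITE:
        return 200, SITE[request_path]   # request is well-formed: respond
    return 404, "Not Found"              # e.g., a typo in the address

# The client's side: request a path, receive a response.
status, body = server("/")
print(status, body)
```

The client's only job is to ask; the server decides what comes back, which is the whole client-server relation in miniature.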
For example, let's go back to our home office example. What happens if we add another laptop into the scenario? Then all of these devices need a way to communicate. And for that, they have an IP address. And who provides that IP address? The minute you add the other device, it is going to send a request to the router. We call it a router, but it has multiple functions, much like the handheld device we call a smartphone has many functions, like a camera, a calendar, and many other functionalities. The router has an additional functionality called the Dynamic Host Configuration Protocol, or DHCP, server. So basically, the laptop requests, hey, give me an IP address, and then the router, or the DHCP server, replies, here's your IP address. And it's going to be a different one, so they don't overlap. So that's an example of client-server.

09:32

Lois: And where do virtual networks fit into all this?

Sergio: A virtual network is basically a software version of the physical network. It looks and feels exactly as a physical network does. We do have a path for communication: in the physical network, you have either Wi-Fi or an internet cable. And then you add your workstations or devices on top of that. And then you might create subnets. So, in a software-defined network, or a virtual network, the connectivity, the cabling, and all of that are software-defined. Everything is software-defined. And it looks exactly the same, except that everything is software. In a software or virtual network, you can communicate with a physical network as if that virtual network were another physical network. Again, this is a software network or a software-defined network, a virtual network, no longer a physical network.

10:42

Lois: Let's switch gears a little and talk about Domain Name Systems. Sergio, can you explain what DNS is, and why it's important when we browse the web?
Sergio: DNS is the global database for internet addressing. DNS plays a very important role on the internet, and many internet services are closely related to DNS. The main functionality of DNS is to translate easy-to-remember names into IP addresses. Some IP addresses might be very easy to remember. However, if you have many of them, then it's easier to remember oracle.com or ucla.edu or navy.mil for military or eds.org for organization or gobierno.mx for Mexico. So that's the main feature of the DNS. It's very similar to the contacts application in your mobile phone, because the contacts application maps names to phone numbers. It's easier to remember Bob's phone than 555-123-4567. So, it's easier to remember the names of the people in your contacts list, just as it is easier to remember, as previously mentioned, oracle.com than 138.1.33.162. Again, 138.1.33.162 might be easy for you to remember if that's the only one that you need to remember. But if you have 20, 40, 50, like we do with phone numbers, it's easier to remember oracle.com or ucla.edu. And this mapping is essential because we work with names; they're easier for us to remember. However, the fact is that computers still need to use IP addresses. And remember that this is the decimal representation of the binary number. It's a lot harder for us to remember the 32 bits or each one of the octets in binary. So that's the main purpose of DNS. Now, the big difference is that the contact list in a cell phone is unique to that individual phone. However, DNS is global. It applies to everybody in the world. Anybody typing oracle.com will have that translated into 138.1.33.162. Now, this is an actual IP address of oracle.com. Oracle.com has many IP addresses. If you ping oracle.com, chances are that this is one of the many addresses that maps to oracle.com.

13:35

Nikita: You mentioned that a domain name like oracle.com can have many IP addresses.
So how does DNS help my computer find the right one?

Sergio: So, let's say that you want to look for www.example.com. How do you do that? In your computer, your laptop, or your terminal, you type "www.example.com" into your browser. If the browser doesn't have that information in cache, then it's going to first ask your DNS server, the one that you have assigned and indicated in your network configuration. And the DNS server will reply that the address is 96.7.128.198. This address is real, and your browser will go to this address once you type www.example.com.

14:34

Nikita: But what happens if the browser doesn't know the address?

Sergio: This is where it gets interesting. Your browser wants to go to www.example.com, and it's going to go and look within its cache. If it doesn't have it, then the first step is to go to your DNS server and ask it, hey, if you don't know this address, go ahead and find out. So, it goes to the root server. All the root servers are administered by IANA. And it's going to send the question, hey, what's the IP address for www.example.com? And if the root server doesn't know it, it's going to let you know, hey, ask the top-level domain name server, in this case, the .com top-level domain name server. So, you go ahead and ask this top-level domain name server, hey, what's the IP address for example.com? And if the top-level domain name server doesn't know, it's going to tell you, hey, ask example.com. And example.com is actually within the customer's domain. And then, based on these instructions, you ask, what is the IP address for www.example.com? So, it will provide you with the IP address. And once your DNS server has the IP address, it's going to relay it to your web browser. And this is where your web browser actually reaches 96.7.128.198. Very interesting, isn't it?

16:23

Lois: Absolutely!
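The lookup chain Sergio walks through (root server, then the .com top-level domain server, then example.com's own name server) can be sketched as a toy resolver over an in-memory hierarchy. The final address is the one quoted in the episode; the server names and zone data are invented for illustration:

```python
# Toy DNS hierarchy. Each level only knows who to ask next,
# exactly like the referral chain described above.
ROOT = {"com": "tld-com"}                                  # root -> TLD server
TLD = {"tld-com": {"example.com": "ns-example"}}           # TLD -> domain's NS
AUTHORITATIVE = {"ns-example": {"www.example.com": "96.7.128.198"}}

def resolve(name: str) -> str:
    """Follow root -> TLD -> authoritative server to find an address."""
    tld_label = name.rsplit(".", 1)[-1]       # "com"
    domain = ".".join(name.split(".")[-2:])   # "example.com"
    tld_server = ROOT[tld_label]              # ask the root server
    name_server = TLD[tld_server][domain]     # ask the TLD server
    return AUTHORITATIVE[name_server][name]   # ask the domain's own server

print(resolve("www.example.com"))  # 96.7.128.198
```

Each lookup is a referral, not an answer, until the authoritative server is reached; real resolvers also cache every step so the full chain is rarely walked twice.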
Sergio, you mentioned top-level domain names. What are they and how are they useful?

Sergio: A top-level domain is the rightmost segment of a domain name, located after the last visible dot in the domain name. So oracle.com or cloud.oracle.com is a domain name, and .com is a top-level domain. And the purpose of the top-level domain is to recognize certain elements of a website. This top-level domain indicates that this is a commercial site. Now, .edu, for example, is a top-level domain name for higher education. We also have .org for nonprofit organizations and .net for network service providers. And we also have country-specific ones: .ca for Canadian websites, .it for Italian websites. Now, with .it, a lot of companies that are in the information technology business use it to indicate that they're in information technology. There's also the .us for US companies; most of the time this is optional, as .com, .org, and .net sites are understood to be from the US. Now, if .com is a top-level domain name, what is the oracle in cloud.oracle.com? Oracle is the second-level domain name. And in this case, cloud is the third-level domain name. And lately you've been seeing a lot more top-level domain names. These are the classic ones, but now you get .AI, .media, .comedy, .people, and so on and so forth. Companies even have the option of registering their company name as a top-level domain name.

18:24

Nikita: Thank you, Sergio, for this deep dive into local area networks and domain name systems. If you want to learn about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course.

Lois: And don't forget to join us next week for another episode on networking essentials. Until next time, this is Lois Houston…

Nikita: And Nikita Abraham, signing off!

18:46

That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes.
We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

4 Nov 19min

Cloud Data Centers: Core Concepts - Part 4

Cloud Data Centers: Core Concepts - Part 4

In this episode, hosts Lois Houston and Nikita Abraham, along with Principal OCI Instructor Orlando Gentil, break down the differences between Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. The conversation explores how each framework influences control, cost efficiency, expansion, reliability, and contingency planning.

Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu

Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.

-----------------------------------------------------

Episode Transcript:

00:00

Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started!

00:25

Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.

Lois: Hey there! Last week, we spoke about how hypervisors, virtual machines, and containers have transformed data centers. Today, we're moving on to something just as important—the main cloud models that drive modern cloud computing.

Nikita: Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again for part four of our discussion on cloud data centers.

01:01

Lois: Hi Orlando! Glad to have you with us today. Can you walk us through the different types of cloud models?

Orlando: These are commonly categorized into three main service models: Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service. Let's use the idea of getting around town to understand cloud service models. IaaS is like renting a car.
You don't own the car, but you control where it goes, how fast, and when to stop. In cloud terms, the provider gives you the infrastructure—virtual machines, storage, and networking—but you manage everything on top—the OS, middleware, runtime, and application. PaaS is like using a shuttle service. You bring your bags (your code), pick your destination (your app requirements), but someone else drives and maintains the vehicle. You don't worry about the engine, fuel, or route planning. That's the platform's job. Your focus stays on development and deployment, not on servers or patching. SaaS is like ordering a taxi. You say where you want to go and everything else is handled for you. It's the full-service experience. In the cloud, SaaS is software you access over the web—email, CRM, project management. No infrastructure, no updates, just productivity.

02:32

Nikita: Ok. How do the trade-offs between control and convenience differ across SaaS, PaaS, and IaaS?

Orlando: With IaaS, much like renting a car, you gain high control. You are managing components like the operating system, runtime, your applications, and your data. In return, the provider expertly handles the underlying virtual machines, storage, and networking. This model gives you immense flexibility. Moving to PaaS, our shuttle service, you shift to a medium level of control but gain significantly higher convenience. Your primary focus remains on your application code and data. The provider now takes on the heavy lifting of managing the runtime environment, the operating system, the servers themselves, and even the scaling. Finally, SaaS, our taxi service, offers the highest convenience with the lowest level of control. Here, your responsibility is essentially just using the application and managing your specific configurations or data within it. The cloud provider manages absolutely everything else—the entire infrastructure, the platform, and the application itself.
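The control-versus-convenience split Orlando outlines can be summarized as a small responsibility matrix. The layer names below are a common simplification, invented here for illustration rather than taken from any provider's official model:

```python
# Who manages each layer under each service model?
# In IaaS the customer manages everything above the virtualized hardware;
# in PaaS only the app and its data; in SaaS the provider manages it all.
CUSTOMER_MANAGED = {
    "IaaS": {"application", "data", "runtime", "os"},
    "PaaS": {"application", "data"},
    "SaaS": set(),
}

def managed_by(model: str, layer: str) -> str:
    """Return who is responsible for a given layer under a given model."""
    return "customer" if layer in CUSTOMER_MANAGED[model] else "provider"

print(managed_by("IaaS", "os"))          # customer: you patch your own VMs
print(managed_by("PaaS", "os"))          # provider: the platform handles it
print(managed_by("SaaS", "application")) # provider: you just use the app
```

Reading down a column of this matrix is exactly the renting-a-car versus shuttle versus taxi progression: each step hands another layer over to the provider.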
03:52 Nikita: One of the top concerns for cloud users is cost optimization. How can we manage this? Orlando: Each cloud service model offers distinct strategies to help you manage and reduce your spending effectively, as well as different factors that drive those costs. For Infrastructure-as-a-Service, where you have more control, optimization largely revolves around smart resource management. This means rightsizing your VMs, ensuring they are not overprovisioned, and actively turning off idle resources when not in use. Leveraging preemptible or spot instances for flexible workloads can also significantly cut costs. Your charges here are directly tied to your compute, storage, and network usage, so efficiency is key. Moving to Platform-as-a-Service, where the platform is managed for you, optimization shifts slightly. Strategies include choosing scalable platforms that can efficiently handle fluctuating demand, opting for consumption-based pricing where available, and diligently optimizing your runtime usage to minimize processing time. Costs in PaaS are typically based on your application usage, runtime hours, and storage consumed. Finally, for Software-as-a-Service, where you consume a ready-to-use application, cost optimization centers on licensing and usage. This involves consolidating tools to avoid redundant subscriptions, selecting usage-based plans if they align better with your needs, and crucially, eliminating any unused licenses. SaaS costs are generally based on subscription or per-user fees. Understanding these nuances is essential for effective cloud financial management. 05:52 Lois: Ok. And what about scalability? How does each model handle the ability to grow and shrink with demand, without needing manual hardware changes? Orlando: How you achieve and manage that scalability varies significantly across our three service models. For Infrastructure-as-a-Service, you have the most direct control over scaling. 
You can implement manual or auto scaling by adding or removing virtual machines as needed, often leveraging load balancers to distribute traffic. In this model, you configure the scaling policies and parameters based on your specific workload. Moving to Platform-as-a-Service, the scaling becomes more automated and elastic. The platform automatically adjusts resources based on your application's demand, allowing it to seamlessly handle traffic spikes or dips. Here, the provider manages the underlying scaling behavior, freeing you from that operational burden. Finally, with Software-as-a-Service, scalability is largely abstracted and invisible to the user. The application scales automatically in the background, with the entire process fully managed by the provider. As a user, you simply benefit from the application's ability to handle millions of users without ever needing to worry about the infrastructure. Understanding these scaling differences is crucial for selecting the right model for your application's needs. 07:34 Join the Oracle University Learning Community and tap into a vibrant network of over 1 million members, including Oracle experts and fellow learners. This dynamic community is the perfect place to grow your skills, connect with like-minded learners, and celebrate your successes. As a MyLearn subscriber, you have access to engage with your fellow learners and participate in activities in the community. Visit community.oracle.com/ou to check things out today! 08:05 Nikita: Welcome back! We've talked about cost optimization and scalability in cloud environments. But what about ensuring availability? How does that work? Orlando: Availability refers to the ability of a system or service to remain accessible and operational, even in the face of failures or extremely high demand. The approach to achieving and managing availability, and crucially, your role versus the provider's, differs greatly across each model. 
With Infrastructure-as-a-Service, you have the most direct control over your availability strategy. You will be responsible for designing an architecture that includes redundant VMs, deploying load balancers, and potentially even multi-region setups for disaster recovery. Your specific role involves designing this architecture and managing your failover process and data backups. The provider's role, in turn, is to deliver the underlying infrastructure with defined service level agreements (SLAs) and health monitoring. For Platform-as-a-Service, the platform itself offers a higher degree of built-in high availability and automated failover. While the provider maintains the runtime platform's availability, your role shifts. You need to ensure your application's logic is designed to gracefully handle retries and potential transient failures that might occur. Finally, with Software-as-a-Service, availability is almost entirely handled for you. The provider ensures fully abstracted redundancy and failover behind the scenes. Your role becomes largely minimal, often just involving specific application configurations. The provider is entirely responsible for the full application uptime and the underlying high availability infrastructure. Understanding these distinct roles in ensuring availability is essential for setting expectations and designing your cloud strategy efficiently. 10:19 Lois: Building on availability, let's talk Disaster Recovery. Orlando: DR is about ensuring your systems and data can be recovered and brought back online in the event of a significant failure, whether it's a hardware crash, a natural disaster, or even human error. Just like the other aspects, the strategy and responsibilities for DR vary significantly across the cloud service models. For Infrastructure-as-a-Service, you have the most direct involvement in your DR strategy. You need to design and execute custom DR plans. 
This involves leveraging capabilities like multi-region backups, taking VM snapshots, and setting up failover clusters. A real-world example might be using Oracle Cloud compute to replicate your VMs to a secondary region with block volume backups to ensure business continuity. Essentially, you manage your entire DR process here. Moving to Platform-as-a-Service, disaster recovery becomes a shared responsibility. The platform itself offers built-in redundancy and provides APIs for backup and restore. Your role will be to configure the application-level recovery and ensure your data is backed up appropriately, while the provider handles the underlying infrastructure's DR capability. An example could be Azure App Service or Oracle APEX applications, where your apps are redeployed from source control like Git after an incident. Finally, with Software-as-a-Service, disaster recovery is almost entirely vendor-managed. The provider takes full responsibility, offering features like auto replication and continuous backup, often backed by specific Recovery Point Objective (RPO) and Recovery Time Objective (RTO) SLAs. A common example is how Microsoft 365 or Salesforce manage user data backup and restoration. It's all handled seamlessly by the provider without your direct intervention. Understanding these different approaches to DR is crucial for defining your own business continuity plans in the cloud. 12:46 Lois: Thank you, Orlando, for this insightful discussion. To recap, we spoke about the three main cloud models: IaaS, PaaS, and SaaS, and how each one offers a different mix of control and convenience, impacting cost, scalability, availability, and recovery. Nikita: Yeah, hopefully this helps you pick the right cloud solution for your needs. If you want to learn more about the topics we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. In our next episode, we'll take a close look at the essentials of networking. 
Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 13:26 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
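The IaaS scaling approach described in this episode (manual or auto scaling by adding or removing VMs behind a load balancer) can be sketched as a toy policy. The thresholds and VM limits below are illustrative, not any provider's defaults:

```python
def desired_vm_count(current: int, cpu_util: float,
                     scale_out_at: float = 0.75,
                     scale_in_at: float = 0.25,
                     min_vms: int = 1, max_vms: int = 10) -> int:
    """Toy auto-scaling rule: add a VM when average CPU utilization is
    high, remove one when it is low, and stay within the allowed range."""
    if cpu_util > scale_out_at:
        return min(current + 1, max_vms)
    if cpu_util < scale_in_at:
        return max(current - 1, min_vms)
    return current
```

For example, with three VMs at 90% average CPU the rule asks for a fourth VM, while at 10% it retires one; in PaaS and SaaS, the provider runs logic like this for you behind the scenes.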

28 Oct, 13 min

Cloud Data Centers: Core Concepts - Part 3


Have you ever considered how a single server can support countless applications and workloads at once? In this episode, hosts Lois Houston and Nikita Abraham, together with Principal OCI Instructor Orlando Gentil, explore the sophisticated technologies that make this possible in modern cloud data centers. They discuss the roles of hypervisors, virtual machines, and containers, explaining how these innovations enable efficient resource sharing, robust security, and greater flexibility for organizations. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! For the last two weeks, we've been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers. 00:58 Lois: That's right, Niki. We'll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that? 
Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes. In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers. Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s. The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved the resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today. 02:26 Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization? Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges. First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure. Secondly, and hand-in-hand with more servers came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became. And finally, a major problem was unused capacity. 
Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time. This meant we were wasting valuable compute power, memory, and storage, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization. 04:05 Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table? Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs. It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers. This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure. 05:09 Nikita: Ok, Orlando. Let's get into the core components of virtualization. To start, what exactly is a hypervisor? Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs. Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. 
It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely. 05:49 Lois: And are there types of hypervisors? Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare-metal hypervisors, run directly on the host server's hardware. This means they interact directly with the physical resources offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments. In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop. 06:55 Nikita: We've spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics? Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other. Each VM operates like a completely independent computer with its own operating system and applications. 07:40 Lois: What are the benefits of this? Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. 
This greatly enhances stability and security. A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized operating systems, all operating simultaneously. Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs. And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine. 09:10 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 09:54 Nikita: Welcome back! Orlando, let's move on to containers. Many see them as a lighter, more agile way to build and run applications. What's your take? Orlando: A container packages an application and all its dependencies, like libraries and other binaries, into a single, lightweight executable unit. Unlike a VM, a container shares the host operating system's kernel, running on top of the container runtime process. This architectural difference provides several key advantages. Containers are incredibly portable. 
They can be taken virtually anywhere, from a developer's laptop to a cloud environment, and run consistently, eliminating "it works on my machine" issues. Because containers share the host OS kernel, they don't need to bundle a full operating system themselves. This results in significantly smaller footprints and less administration overhead compared to VMs. They are faster to start. Without the need to boot a full operating system, containers can start up in seconds, or even milliseconds, providing rapid deployment and scaling capabilities. 11:12 Nikita: Ok. Throughout our conversation, you've spoken about the various advantages of virtualization but let's consolidate them now. Orlando: From a security standpoint, virtualization offers several crucial benefits. Each VM operates in its own isolated sandbox. This means if one VM experiences a security breach, the impact is generally contained to that single virtual machine, significantly limiting the spread of potential threats across your infrastructure. Containers also provide some isolation. Virtualization allows for rapid recovery. This is invaluable for disaster recovery or undoing changes after a security incident. You can implement separate firewalls, access rules, and network configuration for each VM. This granular control reduces the overall exposure and attack surface across your virtualized environments, making it harder for malicious actors to move laterally. Beyond security, virtualization also brings significant advantages in terms of operational and agility benefits for IT management. Virtualization dramatically improves operational efficiency and agility. Things are faster. With virtualization, you can provision new servers or containers in minutes rather than days or weeks. This speed allows for quicker deployment of applications and services. It becomes much simpler to deploy consistent environment using templates and preconfigured VM images or containers. 
This reduces errors and ensures uniformity across your infrastructure. It's more scalable. Virtualization makes your infrastructure far more scalable. You can reshape VMs and containers to meet changing demands, ensuring your resources align precisely with your needs. These operational benefits directly contribute to the power of cloud computing, especially when we consider virtualization's role in enabling cloud scalability. Virtualization is the very backbone of modern cloud computing, fundamentally enabling its scalability. It allows multiple virtual machines to run on a single physical server, maximizing hardware utilization, which is essential for cloud providers. This capability is at the core of Infrastructure-as-a-Service offerings, where users can provision virtualized compute resources on demand. Virtualization makes services globally scalable. Resources can be easily deployed and managed across different geographic regions to meet worldwide demand. Finally, it provides elasticity, meaning resources can be automatically scaled up or down in response to fluctuating workloads, ensuring optimal performance and cost efficiency. 14:21 Lois: That's amazing. Thank you, Orlando, for joining us once again. Nikita: Yeah, and remember, if you want to learn more about the topics we covered today, go to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: Well, that's all we have for today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 14:40 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
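The consolidation benefit described in this episode, many isolated VMs sharing one physical host's resources, can be illustrated with a toy allocator. The capacity figures and VM names are invented for the example:

```python
class Host:
    """Toy physical host: hands out CPU/RAM slices to VMs and
    rejects placements that would overcommit the hardware."""
    def __init__(self, cpus: int, ram_gb: int):
        self.free_cpus, self.free_ram = cpus, ram_gb
        self.vms = {}

    def place_vm(self, name: str, cpus: int, ram_gb: int) -> bool:
        if cpus > self.free_cpus or ram_gb > self.free_ram:
            return False  # not enough capacity left on this host
        self.free_cpus -= cpus
        self.free_ram -= ram_gb
        self.vms[name] = (cpus, ram_gb)
        return True
```

A 16-CPU, 64 GB host can take a 4-CPU web VM and an 8-CPU database VM, but a third 8-CPU request is refused, which is exactly the admission decision a hypervisor's scheduler makes (real hypervisors add overcommit ratios, NUMA placement, and live migration on top).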

21 Oct, 15 min

Cloud Data Centers: Core Concepts - Part 2


Have you ever wondered where all your digital memories, work projects, or favorite photos actually live in the cloud? In this episode, Lois Houston and Nikita Abraham are joined by Principal OCI Instructor Orlando Gentil to discuss cloud storage. They explore how data is carefully organized, the different ways it can be stored, and what keeps it safe and easy to find. Cloud Tech Jumpstart: https://mylearn.oracle.com/ou/course/cloud-tech-jumpstart/152992 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------ Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I'm Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hey there! Last week, we spoke about the differences between traditional and cloud data centers, and covered components like CPU, RAM, and operating systems. If you haven't listened to the episode yet, I'd suggest going back and listening to it before you dive into this one. Nikita: Joining us again is Orlando Gentil, Principal OCI Instructor at Oracle University, and we're going to ask him about another fundamental concept: storage. 01:04 Lois: That's right, Niki. Hi Orlando! Thanks for being with us again today. You introduced cloud data centers last week, but tell us, how is data stored and accessed in these centers? Orlando: At a fundamental level, storage is where your data resides persistently. 
Data stored on a storage device is accessed by the CPU and, for specialized tasks, the GPU. The RAM acts as a high-speed intermediary, temporarily holding data that the CPU and the GPU are actively working on. This cyclical flow ensures that applications can effectively retrieve, process, and store information, forming the backbone for our computing operations in the data center. 01:52 Nikita: But how is data organized and controlled on disks? Orlando: To effectively store and manage data on physical disks, a structured approach is required, which is defined by file systems and permissions. The process begins with disks. These are the raw physical storage devices. Before data can be written to them, disks are typically divided into partitions. A partition is a logical division of a physical disk that acts as if it were a separated physical disk. This allows you to organize your storage space and even install multiple operating systems on a single drive. Once partitions are created, they are formatted with a file system. 02:40 Nikita: Ok, sorry but I have to stop you there. Can you explain what a file system is? And how is data organized using a file system? Orlando: The file system is the method and the data structure that an operating system uses to organize and manage files on storage devices. It dictates how data is named, stored, retrieved, and managed on the disk, essentially providing the roadmap for data. Common file systems include NTFS for Windows and ext4 or XFS for Linux. Within this file system, data is organized hierarchically into directories, also known as folders. These containers help to logically group related files, which are the individual units of data, whether they are documents, images, videos, or applications. Finally, overseeing this entire organization are permissions. 03:42 Lois: And what are permissions? 
Orlando: Permissions define who can access specific files and directories and what actions they are allowed to perform—for example, read, write, or execute. This access control, often managed by user, group, and other permissions, is fundamental for security, data integrity, and multi-user environments within a data center. 04:09 Lois: Ok, now that we have a good understanding of how data is organized logically, can we talk about how data is stored locally within a server? Orlando: Local storage refers to storage devices directly attached to a server or computer. The three common types are hard disk drives, solid state drives, and NVMe drives. Hard disk drives are traditional storage devices using spinning platters to store data. They offer large capacity at a lower cost per gigabyte, making them suitable for bulk data storage when high performance isn't the top priority. Unlike hard disks, solid state drives use flash memory to store data, similar to USB drives but on a larger scale. They provide significantly faster read and write speeds, better durability, and lower power consumption than hard disks, making them ideal for operating systems, applications, and frequently accessed data. Non-Volatile Memory Express is a communication interface specifically designed for solid state drives that connects directly to the PCI Express bus. NVMe offers even faster performance than traditional SATA-based solid state drives by reducing latency and increasing bandwidth, making it the top choice for demanding workloads that require extreme speed, such as high-performance databases and AI applications. Each type serves different performance and cost requirements within a data center. While local storage is essential for immediate access, data centers also heavily rely on storage that isn't directly attached to a single server. 05:59 Lois: I'm guessing you're hinting at remote storage. Can you tell us more about that, Orlando? 
Orlando: Remote storage refers to data storage solutions that are not physically connected to the server or client accessing them. Instead, they are accessed over the network. This setup allows multiple clients or servers to share access to the same storage resources, centralizing data management and improving data availability. This architecture is fundamental to cloud computing, enabling vast pools of shared storage that can be dynamically provisioned to various users and applications. 06:35 Lois: Let's talk about the common forms of remote storage. Can you run us through them? Orlando: One of the most common and accessible forms of remote storage is Network Attached Storage or NAS. NAS is a dedicated file storage device connected to a network that allows multiple users and client devices to retrieve data from a centralized disk capacity. It's essentially a server dedicated to serving files. A client connects to the NAS over the network, and the NAS then provides access to files and folders. NAS devices are ideal for scenarios requiring shared file access, such as document collaboration, centralized backups, or serving media files, making them very popular in both home and enterprise environments. While NAS provides file-level access over a network, some applications, especially those requiring high performance and direct block-level access to storage, need a different approach. 07:38 Nikita: And what might this approach be? Orlando: Internet Small Computer System Interface, which provides block-level storage over an IP network. iSCSI, or Internet Small Computer System Interface, is a standard that allows the SCSI protocol, traditionally used for local storage, to be sent over IP networks. Essentially, it enables servers to access storage devices as if they were directly attached, even though they are located remotely on the network. 
This means it can leverage standard Ethernet infrastructure, making it a cost-effective solution for creating high-performance, centralized storage accessible over an existing network. It's particularly useful for server virtualization and database environments where block-level access is preferred. While iSCSI provides block-level access over standard IP, for environments demanding even higher performance, lower latency, and greater dedicated throughput, a specialized network is often deployed. 08:47 Nikita: And what's this specialized network called? Orlando: Storage Area Network or SAN. A Storage Area Network or SAN is a high-speed network specifically designed to provide block-level access to consolidated shared storage. Unlike NAS, which provides file-level access, a SAN presents storage volumes to servers as if they were local disks, allowing for very high performance for applications like databases and virtualized environments. While iSCSI SANs use Ethernet, many high-performance SANs utilize Fibre Channel for even faster and more reliable data transfer, making them a cornerstone of enterprise data centers where performance and availability are paramount. 09:42 Oracle University's Race to Certification 2025 is your ticket to free training and certification in today's hottest technology. Whether you're starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That's education.oracle.com/race-to-certification-2025. 10:26 Nikita: Welcome back! Orlando, are there any other popular storage paradigms we should know about? Orlando: Beyond file level and block level storage, cloud environments have popularized another flexible and highly scalable storage paradigm, object storage. 
Object storage is a modern approach to storing data, treating each piece of data as a distinct, self-contained unit called an object. Unlike file systems that organize data in a hierarchy or block storage that breaks data into fixed-size blocks, object storage manages data as flat, unstructured objects. Each object is stored with unique identifiers and rich metadata, making it highly scalable and flexible for massive amounts of data. This service handles the complexity of storage, providing access to vast repositories of data. Object storage is ideal for use cases like cloud-native applications, big data analytics, content distribution, and large-scale backups thanks to its immense scalability, durability, and cost effectiveness. While object storage is excellent for frequently accessed data in rapidly growing data sets, sometimes data needs to be retained for very long periods but is accessed infrequently. For these scenarios, a specialized low-cost storage tier, known as archive storage, comes into play. 12:02 Lois: And what's that exactly? Orlando: Archive storage is specifically designed for long-term backup and retention of data that you rarely, if ever, access. This includes critical information, like old records, compliance data that needs to be kept for regulatory reasons, or disaster recovery backups. The key characteristic of archive storage is extremely low cost per gigabyte, achieved by optimizing for infrequent access rather than speed. Historically, tape backup systems were the common solution for archiving, where data from a data center is moved to tape. In modern cloud environments, this has evolved into cloud backup solutions. Cloud-based archiving leverages highly cost-effective, durable cloud storage tiers that are purpose-built for long-term retention, providing a scalable and often more reliable alternative to physical tapes. 
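The flat, key-addressed, metadata-rich model Orlando describes for object storage can be mimicked in a few lines. This is a toy in-memory sketch, not the API of any real object store:

```python
import hashlib

class ObjectStore:
    """Toy flat object store: no directories, just keys -> (data, metadata)."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes, **metadata) -> str:
        etag = hashlib.md5(data).hexdigest()  # content fingerprint
        self._objects[key] = (data, {**metadata, "etag": etag})
        return etag

    def get(self, key: str) -> bytes:
        return self._objects[key][0]

    def head(self, key: str) -> dict:
        return self._objects[key][1]  # metadata only, like an HTTP HEAD
```

Note there is no hierarchy here: a key like "backups/2024/db.dump" is just a string with slashes in it, which is how real object stores simulate folders while keeping a flat namespace.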
13:05 Lois: Thank you, Orlando, for taking the time to talk to us about the hardware and software layers of cloud data centers. This information will surely help our listeners to make informed decisions about cloud infrastructure to meet their workload needs in terms of performance, scalability, cost, and management. Nikita: That's right, Lois. And if you want to learn more about what we discussed today, head over to mylearn.oracle.com and search for the Cloud Tech Jumpstart course. Lois: In our next episode, we'll take a look at more of the fundamental concepts within modern cloud environments, such as Hypervisors, Virtualization, and more. I can't wait to learn more about it. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:47 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
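The read/write/execute permissions covered earlier in this episode can be inspected and set from Python's standard library. A minimal POSIX-only sketch (the scratch file is created just for the demonstration):

```python
import os
import stat
import tempfile

# Create a scratch file, then restrict it to owner read/write only.
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # rw-------, i.e. mode 0o600

# Read the permission bits back from the inode.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # 0o600: owner reads and writes; group and others get nothing
os.remove(path)  # clean up the scratch file
```

The user/group/other triads map directly to the octal digits, so 0o640 would add read access for the file's group while still excluding everyone else.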

14 Oct, 14 min
