OCI Pricing


Oracle offers simple pricing models and compelling savings programs to get you more value, faster. With uniform pricing across all global regions, you can deploy your environment in new regions without any constraints. Listen to Lois Houston and Nikita Abraham, along with special guest Rohit Rahi, talk about the flexibility of Oracle’s approach to pricing and how it allows you to accurately forecast your cloud spending and avoid billing surprises. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community Twitter: https://twitter.com/Oracle_Edu LinkedIn: https://www.linkedin.com/showcase/oracle-university/ Special thanks to Arijit Ghosh, Kiran BR, David Wright, the OU Podcast Team, and the OU Studio Team for helping us create this episode.

Episodes (132)

The AI Workflow


Join Lois Houston and Nikita Abraham as they chat with Yunus Mohammed, a Principal Instructor at Oracle University, about the key stages of AI model development. From gathering and preparing data to selecting, training, and deploying models, learn how each phase impacts AI’s real-world effectiveness. The discussion also highlights why monitoring AI performance and addressing evolving challenges are critical for long-term success.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode.   --------------------------------------------------------------   Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hey everyone! In our last episode, we spoke about generative AI and gen AI agents. Today, we’re going to look at the key stages in a typical AI workflow. We’ll also discuss how data quality, feedback loops, and business goals influence AI success. With us today is Yunus Mohammed, a Principal Instructor at Oracle University.  01:00 Lois: Hi Yunus! We're excited to have you here! Can you walk us through the various steps in developing and deploying an AI model?  Yunus: The first step is to collect data. We gather relevant data, either historical or real-time, like customer transactions, support tickets, survey feedback, or sensor logs. 
A travel company, for example, can collect past booking data to predict future demand. So, data is the most crucial component for building your AI models. But it's not just about having the data. You need to prepare it. In the prepare data step, we clean, organize, and label the data. AI can't learn from messy spreadsheets. We make the data more understandable and organized, like removing duplicates, filling missing values with default values, or formatting dates. All of this comes under organizing the data, along with labeling it so that it can support supervised learning. After preparing the data, I go for selecting the model to train. So now, we pick the type of model that fits your goals. It can be a traditional ML model, a deep learning network, or a generative model. The model is chosen based on the business problem and the data we have. Then we train the model using the prepared data, so it can learn the patterns in the data. After the model is trained, I need to evaluate it. You check how well the model performs. Is it accurate? Is it fair? The evaluation metrics will vary based on the goal you're trying to reach. If your model frequently misclassifies emails as spam, then it is not ready, so I need to train it further, until it accurately identifies official mail as official and spam as spam.  After evaluating and making sure your model fits well, you go for the next step, which is to deploy the model. Once we are happy, we put it into the real world, like into a CRM, a web application, or an API. So I can expose it through an API, which is an application programming interface, or add it to a CRM, a Customer Relationship Management system, or to a web application that I've got. 
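The cleaning steps described here, removing duplicates, filling missing values, and formatting dates, can be sketched in a few lines of plain Python. The booking records, field names, and default age below are hypothetical, just to make the idea concrete:

```python
from datetime import datetime

# Hypothetical raw booking records: a duplicate row, a missing age,
# and dates stored in two different formats.
raw = [
    {"id": 1, "age": 34,   "booked": "2024-03-15"},
    {"id": 1, "age": 34,   "booked": "2024-03-15"},   # duplicate row
    {"id": 2, "age": None, "booked": "15/04/2024"},   # missing age, day-first date
]

def prepare(records, default_age=30):
    seen, cleaned = set(), []
    for r in records:
        if r["id"] in seen:                  # remove duplicates
            continue
        seen.add(r["id"])
        # fill a missing value with a default
        age = r["age"] if r["age"] is not None else default_age
        # normalize dates to one consistent ISO format
        for fmt in ("%Y-%m-%d", "%d/%m/%Y"):
            try:
                booked = datetime.strptime(r["booked"], fmt).date().isoformat()
                break
            except ValueError:
                continue
        cleaned.append({"id": r["id"], "age": age, "booked": booked})
    return cleaned

clean = prepare(raw)
```

In practice a library like pandas would handle this, but the logic, deduplicate, impute, standardize, is the same.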
Like for example, a chatbot becomes available on your company's website, and the chatbot might be using a generative AI model. Once I have deployed the model and it is working fine, I need to keep track of how it is working, and monitor and improve it whenever needed. So I go for a stage called monitor and improve. AI isn't set it and forget it. Over time, a lot of changes happen to the data, so we monitor performance and retrain when needed. An e-commerce recommendation model needs updates as trends shift.  So the end user finally sees the results after all these processes. A better product, a smarter service, or faster decision-making, if we do this right. That is, if we get the flow right, they may not even realize AI is behind the accurate results they're getting.  04:59 Nikita: Got it. So, everything in AI begins with data. But what are the different types of data used in AI development?  Yunus: We work with three main types of data: structured, unstructured, and semi-structured. Structured data is like a clean set of tables in Excel or databases, consisting of rows and columns with clear and consistent information. Unstructured data is messy data, like emails, customer call recordings, videos, or social media posts.  Semi-structured data is things like logs, XML files, or JSON files. Not quite neat, but not entirely messy either, so they are termed semi-structured. So you have structured, unstructured, and then semi-structured. 05:58 Nikita: Ok… and how do the data needs vary for different AI approaches?  Yunus: Machine learning often needs labeled data. Like a bank might feed past transactions labeled as fraud or not fraud to train a fraud detection model. But machine learning also includes unsupervised learning, like clustering customer spending behavior. Here, no labels are needed. 
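The semi-structured category mentioned here, logs and JSON files, is "not quite neat" because fields can nest. A hypothetical JSON log line can be flattened into structured columns a model could consume; the field names below are invented for illustration:

```python
import json

# A hypothetical semi-structured log line: not a flat table, but not free text either.
log_line = '{"user": "a123", "event": "login", "meta": {"device": "mobile", "retries": 2}}'

record = json.loads(log_line)

# Flatten the nested "meta" object into top-level structured columns.
row = {
    "user": record["user"],
    "event": record["event"],
    "device": record["meta"]["device"],
    "retries": record["meta"]["retries"],
}
```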
Deep learning needs a lot of data, usually unstructured, like thousands of loan documents, call recordings, or scanned checks. These are fed into neural network models to detect complex patterns. Data science focuses on insights rather than predictions. So a data scientist at the bank might use customer relationship management exports and customer demographics to analyze which age group prefers credit cards over loans. Then we have generative AI, which thrives on diverse, unstructured, internet-scale data. It gets data from books, code, images, and chat logs. So these models, like ChatGPT, are trained to generate responses, mimic styles, and synthesize content. So generative AI can power a banking virtual assistant trained on chat logs and frequently asked questions to answer customer queries 24/7. 07:35 Lois: What are the challenges when dealing with data?  Yunus: Data isn't just about having enough. We must also think about quality. Is it accurate and relevant? Volume. Do we have enough for the model to learn from? Bias. Does my data contain any unfair patterns, like rejecting more loan applications from a certain zip code, which introduces bias into the data? And also privacy. Are we handling personal data responsibly? Especially data that is critical or regulated, like banking data or patients' health data. Before building anything smart, we must start smart.  08:23 Lois: So, we’ve established that collecting the right data is non-negotiable for success. Then comes preparing it, right?  Yunus: This is arguably the most important part of any AI or data science project. Clean data leads to reliable predictions. Imagine you have a column for age, and someone accidentally entered an age of, like, 999. That's likely a data entry error. Or maybe a few rows have missing ages. So we either fix, remove, or impute such issues. 
This step ensures our model isn't misled by incorrect values. Dates are often stored in different formats. For instance, a date can be stored month first and day next, or in some places day first and month next. We want to bring everything into a consistent, usable format. This process is called transformation. Machine learning models can get confused if one feature, like income, ranges from 10,000 to 100,000, and another, like the number of kids, ranges from 0 to 5. So we normalize or scale values to bring them into a similar range, say 0 to 1. Binary choices can simply be represented as yes or no. Models also don't understand words like small, medium, or large. We convert them into numbers using encoding. One simple way is assigning 1, 2, and 3 respectively. Then there's removing stop words and punctuation, and breaking sentences into smaller meaningful units called tokens. This is used for generative AI tasks. In deep learning, especially for gen AI, image or audio inputs must be of uniform size and format.  10:31 Lois: And does each AI system have a different way of preparing data?  Yunus: For machine learning, the focus is on cleaning, encoding, and scaling. Deep learning needs resizing and normalization for text and images. Data science is about reshaping, aggregating, and getting the data ready for insights. Generative AI needs special preparation, like chunking and tokenizing large documents, or compressing images. 11:06 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 
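The scaling, encoding, and tokenization steps described above can be sketched in plain Python. The income figures, size codes, and stop-word list are illustrative, not from any real dataset:

```python
import string

def min_max_scale(values):
    # Bring numeric values into the range [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Encode ordinal categories as numbers: small/medium/large -> 1/2/3.
SIZE_CODES = {"small": 1, "medium": 2, "large": 3}

def tokenize(text, stop_words=("the", "a", "is")):
    # Strip punctuation, lowercase, split into tokens, drop stop words.
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [t for t in cleaned.split() if t not in stop_words]

incomes = [10_000, 55_000, 100_000]
scaled = min_max_scale(incomes)                    # all values now in [0, 1]
sizes = [SIZE_CODES[s] for s in ["small", "large"]]
tokens = tokenize("The model is ready, finally!")
```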
11:50 Nikita: Welcome back! Yunus, how does a user choose the right model to solve their business problem?  Yunus: Just like a business uses different dashboards for marketing versus finance, in AI, we use different model types depending on what we are trying to solve. Classification is choosing a category. A real-world example can be whether an email is spam or not, used in fraud detection, medical diagnosis, et cetera. So you classify the data and then assess that classification for accuracy. Regression is used for predicting a number, like, what will be the price of a house next month? It's commonly useful in forecasting sales demand or costs. Clustering groups things without labels. A real-world example can be segmenting customers based on behavior for targeted marketing. It helps discover hidden patterns in large data sets.  Generation is creating new content. AI writing product descriptions or generating images are real-world examples, powered by generative AI models like ChatGPT or DALL-E, which operate on generative AI principles. 13:16 Nikita: And how do you train a model? Yunus: We feed it data in small chunks or batches, compare its guesses to the correct values, and adjust its internal weights to improve next time. The cycle repeats until the model gets good at making predictions. So if you're building a fraud detection system, ML may be enough. If you want to analyze medical images, you will need deep learning. If you're building a chatbot, go for a generative model like an LLM. And for all of these use cases, you need to select and train the applicable models as appropriate. 14:04 Lois: OK, now that the model’s been trained, what else needs to happen before it can be deployed? Yunus: Evaluate the model: assess its accuracy, reliability, and real-world usefulness before it's put to work. 
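The training cycle described here, feed data in small batches, compare guesses to correct values, adjust the weights, then check against held-out data, can be sketched with a one-weight toy model in pure Python. All the numbers are invented for illustration; real models have millions of weights but follow the same loop:

```python
import random

# Toy labeled data following the rule y = 2x, which the model should learn.
random.seed(0)
data = [(x, 2.0 * x) for x in range(1, 21)]
random.shuffle(data)
train, test = data[:16], data[16:]         # hold out some data for evaluation

w = 0.0                                    # the single "weight" the model adjusts
lr = 0.001                                 # learning rate

for epoch in range(100):
    for i in range(0, len(train), 4):      # feed data in small batches of 4
        for x, y in train[i:i + 4]:
            guess = w * x                  # the model's guess
            w -= lr * (guess - y) * x      # adjust the weight to reduce the error

# Evaluate on unseen test data: mean absolute error of the predictions.
mae = sum(abs(w * x - y) for x, y in test) / len(test)
```

After training, `w` should be close to 2, and the held-out error close to zero.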
That is, how often is the model right? Does it consistently perform well? Is it practical to use this model in the real world? Because bad predictions don't just look bad, they can lead to costly business mistakes. Think of recommending the wrong product to a customer or misidentifying a financial risk.  So what we do here is start by splitting the data into two parts. We train the model with the training data, and this is like teaching the model. Then we have the testing data, which is used for checking how well the model has learned. So once trained, the model makes predictions. We compare the predictions to the actual answers, just like checking your answer after a quiz. We go for tailored evaluation based on the AI type. In machine learning, we care about prediction accuracy. Deep learning is about fitting complex data like voice or images, where the model repeatedly sees examples and tunes itself to reduce errors. In data science, we look for patterns and insights, such as which features matter. In generative AI, we judge by output quality. Is it coherent, useful, and natural?  The model improves in accuracy with the number of epochs it has been trained for.  15:59 Nikita: So, after all that, we finally come to deploying the model… Yunus: Deploying a model means we are integrating it into our actual business system, so it can start making decisions, automating tasks, or supporting customer experiences in real time. Think of it like this. Training is teaching the model. Evaluating is testing it. And deployment is giving it a job.  The model needs a home, either in the cloud or inside your company's own servers. Think of it like putting the AI in a place where it can be reached by other tools. Exposed via API or embedded in an app, or you can say application, this is how the AI becomes usable.  Then, we have the concept of receiving live data and returning predictions. 
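The "exposed via API, receives live data, returns predictions" idea can be sketched as a minimal JSON-in, JSON-out handler. The threshold model, field names, and decision labels below are hypothetical stand-ins for a real trained model behind a real web framework:

```python
import json

def predict(features):
    # Stand-in for a trained model: flags a transaction above a threshold.
    return "review" if features["amount"] > 1000 else "approve"

def handle_request(body: str) -> str:
    """Minimal API-style handler: JSON request in, JSON response out."""
    features = json.loads(body)        # live input, e.g. from an HTTP POST
    decision = predict(features)
    return json.dumps({"decision": decision})

response = handle_request('{"amount": 2500}')
```

In production this function would sit behind an HTTP endpoint; the shape, parse input, call the model, serialize the result, stays the same.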
Receiving live data and returning predictions is when the model listens to real-time inputs, like a user typing, searching, clicking, or making a transaction, and instantly your AI responds with a recommendation, decision, or result. Deploying the model isn’t the end of the story. It is just the beginning of the AI's real-world journey. Models may work well on day one, but things change. Customer behavior might shift. New products get introduced in the market. Economic conditions might evolve, like the COVID era, when demand shifted and economic conditions changed. 17:48 Lois: Then it’s about monitoring and improving the model to keep things reliable over time. Yunus: The monitor and improve loop is a continuous process that ensures an AI model remains accurate, fair, and effective after deployment. First, live predictions: the model is running in real time, making decisions or recommendations. Then, monitor performance: are those predictions still accurate and helpful? Is latency acceptable? This is where we track metrics, user feedback, and operational impact. Then we detect issues: is accuracy declining, are responses feeling biased, are customers dropping off due to long response times? And the next step is to retrain or update the model. We add fresh data, tweak the logic, or even use better architectures, then deploy the updated model. The new version replaces the old one, and the cycle continues. 18:58 Lois: And are there challenges during this step? Yunus: The common issues related to monitor and improve are model drift, bias, and latency or failures. With model drift, the model becomes less accurate as the environment changes. With bias, the model may favor or penalize certain groups unfairly. With latency or failures, if the model is too slow or fails unpredictably, it disrupts the user experience. Let's take loan approvals. 
In loan approvals, if we notice an unusually high rejection rate due to model bias, we might retrain the model with more diverse or balanced data. For a chatbot, we watch for drops in customer satisfaction, which might arise due to model failure, and fine-tune the model's responses. And in forecasting demand, if the predictions no longer match real trends, say post-pandemic, due to model drift, we update the model with fresh data.  20:11 Nikita: Thanks for that, Yunus. Any final thoughts before we let you go? Yunus: No matter how advanced your model is, its effectiveness depends on the quality of the data you feed it. That means the data needs to be clean, structured, and relevant. It should map to the problem you're solving. If the foundation is weak, the results will be too. So data preparation is not just a technical step, it is a business-critical stage. Once deployed, AI systems must be monitored continuously. You need to watch for drops in performance, any bias being generated, or outdated logic, and improve the model with new data or refinements. That's what makes AI reliable, ethical, and sustainable in the long run. 21:09 Nikita: Yunus, thank you for this really insightful session. If you’re interested in learning more about the topics we discussed today, go to mylearn.oracle.com and search for the AI for You course.  Lois: That’s right. You’ll find skill checks to help you assess your understanding of these concepts. In our next episode, we’ll discuss the idea of buy versus build in the context of AI. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 21:39 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

2 Sep 22min

Core AI Concepts – Part 3


Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services.   Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we’ll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what’s the difference between traditional AI and generative AI?  01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. And that's what we call traditional AI that's focused on analysis, classification, and prediction.  But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. 
It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you.  Another example: traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on a description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video that resembles human-made work.  Instead of simply analyzing or predicting, generative AI produces something original that resembles what a human might create.   02:16 Lois: How did traditional AI progress to the generative AI we know today?  Himanshu: First, we will look at small supervised learning. In the early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If an email contains "congratulations," it might be spam. This was efficient for a straightforward task, but it struggled with anything more complex.  Then comes large supervised learning. As the internet exploded, massive data sets became available, so millions of images, billions of text snippets, and models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. For example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images of cars, dogs, planes, etc.  Models became more sophisticated, capturing deeper patterns rather than simple rules. And then generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content.  
Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer a yes or no, but to create outputs that look and feel like human made. Instead of judging the spam or sentiment, now the model can write an article, compose a song, or paint a picture, or generate new software code.  03:55 Nikita: Himanshu, what motivated this sort of progression?   Himanshu: Because of the three reasons. First one, data, we had way more of it thanks to the internet, smartphones, and social media. Second is compute. Graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models.  And third, and most important is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas.   04:25 Lois: So, what’s happening behind the scenes? How is gen AI making these things happen?  Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models or LLMs.  They are masters of generating text conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos.  And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these spaces together, we create something called multimodal AI, systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities.  It can not only take the text, but also deal in different media options. So today when we say ChatGPT or Gemini, they can generate images, and it's not just one model doing everything. These are specialized systems working together behind the scenes.  
05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let’s talk more about them. Himanshu, what is an LLM and how does it work?  Himanshu: So it's a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before.  This ability to predict one word at a time intelligently is what builds full sentences, paragraphs, and even stories.  06:06 Nikita: But what’s large about this? Why’s it called a large language model?   Himanshu: It simply means the model has lots and lots of parameters. Think of parameters as adjustable dials the model fine-tunes during learning.  There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more the parameters, the more complex patterns the model can understand, and the better it can generate language, more like a human.  06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work?  Himanshu: Diffusion models start with something that looks like pure random noise.  Imagine static on an old TV screen. No meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful. Think of it like sculpting a statue. You start with a rough block of stone, and slowly, carefully, you chisel away to reveal a beautiful sculpture hidden inside.  And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images. It's trying to predict.   07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com.  
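The "probabilistic model of text" idea, predicting the most likely next word from what came before, can be illustrated with a toy bigram counter over a made-up corpus. Real LLMs learn billions of parameters instead of counting word pairs, but the prediction principle is the same:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the model's training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude probabilistic model of text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    # Predict the most likely next word given the word before it.
    return following[prev].most_common(1)[0][0]

prediction = next_word("the")   # "cat" follows "the" most often in this corpus
```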
07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I’m sure the uses go far beyond that, right? Can you walk us through some of them?  Himanshu: The first one is text generation. We can talk about chatbots, which are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. You can have embedding models, which power smart search systems.  Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents.  The third is code generation: tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a web form with just a few prompts.  The second application is image generation. The first obvious use is art. Designers and marketers can generate creative concepts instantly. Say you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes.  For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And realistic images: retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces cost and increases personalization.  The third application is multimodal systems. These are combined systems that take one kind of input, or a combination of different inputs, and produce different kinds of outputs, even combining text and images in both input and output.  
Text-to-image is being used in e-commerce, movie concept art, and educational content creation. Text-to-video is still in its early days, but imagine creating a product explainer video just by typing out the script. Marketing teams love this for quick turnarounds. And the last one is text-to-audio.  Tools like ElevenLabs can convert text into realistic, human-like voiceovers, useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool. It's becoming a creative copilot across departments, whether that's marketing, design, product support, or even operations.  10:42 Lois: That’s great! So, we’ve established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general?  Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media: videos, audio, or even faces that look and sound authentic.  Imagine a fake video of a political leader announcing a policy they never approved. This could cause mass confusion or even impact elections. In business, deepfakes can also be used in scams where a CEO's voice is faked to approve fraudulent transactions.  Number two, bias. If AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because the historical data was biased.  This bias can expose companies to discrimination lawsuits, brand damage, and ethical concerns. Number three is hallucinations. Sometimes AI systems confidently generate information that is completely wrong without realizing it.   Sometimes you ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made-up court ruling. In sectors like healthcare, finance, or law, hallucinations could have serious or even dangerous consequences if not caught.  
The fourth one is copyright and IP issues. Generative AI creates new content, but often based on material it was trained on. Who owns the new work? A real-life example could be where an artist finds their unique style was copied by an AI that was trained on their paintings without permission.  In terms of business impact, companies using AI-generated content for marketing, branding, or product designs must watch for legal gray areas around copyright and intellectual property. So generative AI is not just a technology conversation, it's a responsibility conversation. Businesses must innovate and protect.  Creativity and caution must go together.   12:50 Nikita: Let’s move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool?  Himanshu: Think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. You don't just ask, what's the best flight to Vegas? Instead, you tell the agent, book me a flight to Vegas and a room at the Hilton. And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done.   So agents act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals are user-defined outcomes. Example: I want to fly to Vegas and stay at the Hilton. Context includes preferences, history, and constraints like economy class only or don't book for Mondays.  Tools could be APIs, databases, or services it can call, such as a travel API or a company calendar. And together, they let the agent reason, plan, and act.   14:02 Nikita: How does a gen AI agent work under the hood?  Himanshu: Usually, they go through four stages. The first one is understanding and interpreting your request, like natural language understanding. Second, figuring out what needs to be done, in this case flight booking plus hotel search.  Third, retrieving data or connecting to tools and APIs if needed, such as Skyscanner, Expedia, or a calendar. 
And fourth, it takes action. That means confirming the booking and giving you a response like, your travel is booked. Keep in mind, not all gen AI agents are fully independent.  14:38 Lois: Himanshu, we’ve seen people use the terms generative AI agents and agentic AI interchangeably. What’s the difference between the two?  Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal, and may improve and adapt over time.   Most gen AI agents are reactive, not proactive. Agentic AI, on the other hand, can plan ahead, anticipate problems, and even adjust strategies.  So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, Auto-GPT runs loops without user prompts, and an autonomous car decides routes and reactions.  Most gen AI agents can only take multiple steps if explicitly designed that way, like step-by-step logic flows in LangChain. Agentic AI, in contrast, can plan across multiple steps with evolving decisions.  On memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. Agentic systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts.  Some generative AI agents are agentic, like Auto-GPT. They use LLMs to reason, plan, and act, but not all do. And likewise, not all agentic AIs are generative. For example, an autonomous car may use computer vision, control systems, and planning, but no generative models.  So agentic AI is a design philosophy or system behavior: goal-driven, autonomous, decision-making. The two can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative.  16:39 Lois: What makes a generative AI agent actually work?  
Himanshu: A gen AI agent isn't just about answering the question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components, each playing a critical role.  The first one is goal. What is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organize a team meeting for Friday, the goal in that case would be: schedule a meeting.  Number two, memory. What does it remember? This is the agent's context awareness: storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon, or I have already shared my team's availability, the agent can reuse that. And without memory, the agent behaves statelessly, like a typical chatbot that forgets context after every prompt.  Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on.  The fourth one is the planner. How does it break down the goal? This is where the reasoning happens. The planner breaks big goals into step-by-step plans, for example checking team availability, drafting the meeting invite, sending the invite, and then probably confirming the booking. Agents don't just guess. They reason and organize actions into a logical path.  And the fifth and final one is the executor, which gets it done. This is where the action takes place. The executor performs what the planner lays out, for example, calling APIs, sending messages, booking reservations. If the planner is the architect, the executor is the builder.   18:36 Nikita: And where are generative AI agents being used?  Himanshu: Generative AI agents aren't just abstract ideas, they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision making. 
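As an aside, the five components Himanshu lists (goal, memory, tools, planner, executor) can be sketched in a few lines of Python. This is purely an illustration, not any particular framework's API; every class and function name below is made up:

```python
# Illustrative sketch of a gen AI agent's five components.
# Everything here is hypothetical; a real agent would call an LLM and live APIs.
class Agent:
    def __init__(self, tools):
        self.memory = []    # memory: past goals, preferences, context
        self.tools = tools  # tools: services the agent is allowed to call

    def plan(self, goal):
        # Planner: break the goal into ordered steps.
        # Hard-coded for brevity; a real planner would reason with an LLM.
        return ["check_availability", "draft_invite", "send_invite"]

    def execute(self, goal):
        # Executor: carry out each planned step using the available tools.
        self.memory.append(goal)  # remember the goal for later context
        return [self.tools[step]() for step in self.plan(goal)]

# Stub tools standing in for a calendar or email API.
tools = {
    "check_availability": lambda: "Friday 2pm is free",
    "draft_invite": lambda: "Invite drafted",
    "send_invite": lambda: "Invite sent",
}
agent = Agent(tools)
print(agent.execute("Organize a team meeting for Friday"))
```

Even in this toy form, the separation is visible: the planner decides the steps, the executor runs them through tools, and memory carries context forward.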
For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history.  For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERPs, spreadsheets, and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes.  For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can also log CRM entries automatically once a meeting summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage.  For human resources, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount.   20:02 Nikita: Himanshu, let’s talk about the capabilities and benefits of generative AI agents.  Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first-level queries, freeing humans for complex cases.  They also enhance decision making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts.  In case of personalization, agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. 
For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving the HR team hours each week.  21:06 Lois: Ok. And what are the risks of using generative AI agents?  Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks, such as data entry and content sorting, are at risk. In case of ethics and accountability, when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness.  For data privacy, agents often access sensitive data, for example, employee records or customer history. If mishandled, it could lead to compliance violations. And finally, agents may generate confident but incorrect outputs, called hallucinations. These can often mislead users, especially in critical domains like health care, finance, or law.  So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully with a human lens and strong guardrails. And that's how we ensure the benefits outweigh the risks.  22:10 Lois: Thank you so much, Himanshu, for educating us. We’ve had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course.  Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham…  Lois: And Lois Houston signing off!  22:32 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

26 Aug 23min

Core AI Concepts – Part 2

Core AI Concepts – Part 2

In this episode, Lois Houston and Nikita Abraham continue their discussion on AI fundamentals, diving into Data Science with Principal AI/ML Instructor Himanshu Raj. They explore key concepts like data collection, cleaning, and analysis, and talk about how quality data drives impactful insights.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ---------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me today is Nikita Abraham, Team Lead: Editorial Services.  Nikita: Hi everyone! Last week, we began our exploration of core AI concepts, specifically machine learning and deep learning. I’d really encourage you to go back and listen to the episode if you missed it.   00:52 Lois: Yeah, today we’re continuing that discussion, focusing on data science, with our Principal AI/ML Instructor Himanshu Raj.  Nikita: Hi Himanshu! Thanks for joining us again. So, let’s get cracking! What is data science?  01:06 Himanshu: It's about collecting, organizing, analyzing, and interpreting data to uncover valuable insights that help us make better business decisions. Think of data science as the engine that transforms raw information into strategic action.  You can think of a data scientist as a detective. They gather clues, which is our data. 
Connect the dots between those clues and ultimately solve mysteries, meaning they find hidden patterns that can drive value.  01:33 Nikita: Ok, and how does this happen exactly?  Himanshu: Just like a detective relies on both instincts and evidence, data science blends domain expertise and analytical techniques. First, we collect raw data. Then we prepare and clean it because messy data leads to messy conclusions. Next, we analyze to find meaningful patterns in that data. And finally, we turn those patterns into actionable insights that businesses can trust.  02:00 Lois: So what you’re saying is, data science is not just about technology; it's about turning information into intelligence that organizations can act on. Can you walk us through the typical steps a data scientist follows in a real-world project?  Himanshu: So it all begins with business understanding. Identifying the real problem we are trying to solve. It's not about collecting data blindly. It's about asking the right business questions first. And once we know the problem, we move to data collection, which is gathering the relevant data from available sources, whether internal or external.  Next is data cleaning. Probably the least glamorous but one of the most important steps. And this is where we fix missing values, remove errors, and ensure that the data is usable. Then we perform data analysis, or what we call exploratory data analysis.  Here we look for patterns, trends, and initial signals hidden inside the data. After that comes modeling and evaluation, where we apply machine learning or deep learning techniques to predict, classify, or forecast outcomes. Machine learning and deep learning are like specialized equipment in a data science detective's toolkit. Powerful, but not the whole investigation.  We also check how good the models are in terms of accuracy, relevance, and business usefulness. 
Finally, if the model meets expectations, we move to deployment and monitoring, putting the model into real-world use and continuously watching how it performs over time.  03:34 Nikita: So, it’s a linear process?  Himanshu: It's not linear. That's because in real-world data science projects, the process does not stop after deployment. Once the model is live, business needs may evolve, new data may become available, or unexpected patterns may emerge.  And that's why we come back to business understanding again, redefining the questions, the strategy, and sometimes even the goals based on what we have learned. In a way, a good data science project behaves like a living system which grows, adapts, and improves over time. Continuous improvement keeps it aligned with business value.   Now, think of it like adjusting your GPS while driving. The route you plan initially might change as new traffic data comes in. Similarly, in data science, new information constantly helps refine our course. The quality of our data determines the quality of our results.   If the data we feed into our models is messy, inaccurate, or incomplete, the outputs, no matter how sophisticated the technology, will also be unreliable. And this concept is often called garbage in, garbage out. Bad input leads to bad output.  Now, think of it like cooking. Even the world's best Michelin star chef can't create a masterpiece with spoiled or poor-quality ingredients. In the same way, even the most advanced AI models can't perform well if the data they are trained on is flawed.  05:05 Lois: Yeah, that's why high-quality data is not just nice to have, it’s absolutely essential. But Himanshu, what makes data good?   Himanshu: Good data has a few essential qualities. The first one is complete. Make sure we aren't missing any critical fields. For example, every customer record must have a phone number and an email. Second, it should be accurate. The data should reflect reality. 
If a customer's address has changed, it must be updated, not outdated. Third, it should be consistent. Similar data must follow the same format. Imagine if dates are written differently, like 2024/04/28 versus April 28, 2024. We must standardize them.   Fourth, good data should be relevant. We collect only the data that actually helps solve our business question, not unnecessary noise. And last, it should be timely. Data should be up to date. Using last year's purchase data for a real-time recommendation engine wouldn't be helpful.  06:13 Nikita: Ok, so ideally, we should use good data. But that’s a bit difficult in reality, right? Because what comes to us is often pretty messy. So, how do we convert bad data into good data? I’m sure there are processes we use to do this.  Himanshu: The first one is cleaning. This is about correcting simple mistakes, like fixing typos in city names or standardizing dates.  The second one is imputation. If some values are missing, we fill them intelligently, for instance, using the average income for a missing salary field. The third one is filtering. In this, we remove irrelevant or noisy records, like discarding fake email signups from marketing data. The fourth one is enriching. We can enhance our data by adding trusted external sources, like appending credit scores from a verified bureau.  And the last one is transformation. Here, we reshape data formats to be consistent, for example, converting all units to the same currency. So even messy data can become usable, but it takes deliberate effort, a structured process, and attention to quality at every step.  07:26 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! 
Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 08:10 Nikita: Welcome back! Himanshu, we spoke about how to clean data. Now, once we get high-quality data, how do we analyze it?  Himanshu: In data science, there are four primary types of analysis we typically apply, depending on the business goal we are trying to achieve.  The first one is descriptive analysis. It helps summarize and report what has happened, often using averages, totals, or percentages. For example, retailers use descriptive analysis to understand things like, what was the average customer spend last quarter? How did store foot traffic trend across months?  The second one is diagnostic analysis. Diagnostic analysis digs deeper into why something happened. For example, hospitals use this type of analysis to find out why a certain department has higher patient readmission rates. Was it due to staffing, post-treatment care, or patient demographics?  The third one is predictive analysis. Predictive analysis looks forward, trying to forecast future outcomes based on historical patterns. For example, energy companies predict future electricity demand so they can better manage resources and avoid shortages. And the last one is prescriptive analysis. It does not just predict, it recommends specific actions to take.  So logistics and supply chain companies use prescriptive analytics to suggest the most efficient delivery routes or warehouse stocking strategies based on traffic patterns, order volume, and delivery deadlines.   09:42 Lois: So really, we’re using data science to solve everyday problems. Can you walk us through some practical examples of how it’s being applied?  Himanshu: The first one is predictive maintenance. It's common in manufacturing. A factory collects real-time sensor data from machines. 
Data scientists first clean and organize this massive data stream, explore patterns of past failures, and design predictive models.  The goal is not just to predict breakdowns but to optimize maintenance schedules, reducing downtime and saving millions. The second one is a recommendation system. It's prevalent in the retail and entertainment industries. Companies like Netflix or Amazon gather massive user interaction data, such as views, purchases, and likes.  Data scientists structure and analyze this behavioral data to find meaningful patterns of preferences and build models that suggest relevant content, eventually driving more engagement and loyalty. The third one is fraud detection. It's applied in the finance and banking sector.  Banks store vast amounts of transaction records. Data scientists clean and prepare this data, understand typical spending behaviors, and then use statistical techniques and machine learning to spot unusual patterns, catching fraud faster than manual checks could ever achieve.  The last one is customer segmentation, which is often applied in marketing. Businesses collect demographic and behavioral data about their customers. Instead of treating all customers the same, data scientists use clustering techniques to find natural groupings, and this insight helps businesses tailor their marketing efforts, offers, and communication for each of those individual groups, making them far more effective.  Across all these examples, notice that data science isn't just building a model. Again, it's understanding the business need, reviewing the data, analyzing it thoughtfully, and building the right solution while helping the business act smarter.  11:44 Lois: Thank you, Himanshu, for joining us on this episode of the Oracle University Podcast. We can’t wait to have you back next week for part 3 of this conversation on core AI concepts, where we’ll talk about generative AI and gen AI agents.     
Nikita: And if you want to learn more about data science, visit mylearn.oracle.com and search for the AI for You course. Until next time, this is Nikita Abraham…  Lois: And Lois Houston signing off!  12:13 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

19 Aug 12min

Core AI Concepts – Part 1

Core AI Concepts – Part 1

Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we’re going to explore some foundational AI concepts, starting with machine learning. After that, we’ll discuss the two main machine learning models: supervised learning and unsupervised learning. And we’ll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today’s episode. Hi Himanshu! Let’s dive right in. What is machine learning?  
01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. It helps computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale.  01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give the dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called a model with multiple examples of input and the desired output, and it learns the pattern. It's trial and error, learning from experience.  Here is another example. Recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of the face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We show them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning doesn't have hand-written rules, it's just learning from examples. This is the kind of learning behind things like Face ID on your smartphone, security systems that recognize employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you’re saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs. 
Over time, the model figures out patterns and relationships within the data on its own, and it can make a smart guess when it sees something new. I got it! Now let’s move on to how machine learning actually works. Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images, historical sales data, or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system, where the model starts spotting relationships in the data. It figures out things like, customers who churn or leave usually contact support twice in the same month. It's not given rules, it just learns patterns based on the examples. And finally, we have the output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say, this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step-by-step instructions. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not.  04:55 Nikita: Ok, I think we need an example here. Let’s go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, whether they churned or not. And each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is yes or no. 
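That churn table can be made concrete with a tiny, self-contained sketch. The numbers below are invented, and a toy nearest-neighbor rule stands in for a real machine learning library, just to show features, labels, and prediction side by side:

```python
# Toy stand-in for a real ML model: nearest-neighbor on the churn example.
# Each feature row is [age, number_of_visits]; each label is 1 = churned, 0 = stayed.
# The data is invented for illustration; a real project would use an ML library.

def predict(features, labels, new_point):
    # Find the training example closest to new_point and return its label.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(range(len(features)), key=lambda i: dist(features[i], new_point))
    return labels[closest]

features = [[25, 1], [28, 1], [45, 12], [50, 9], [22, 2], [47, 15]]
labels   = [1, 1, 0, 0, 1, 0]

# A new, unseen customer: age 26, visited once.
print(predict(features, labels, [26, 1]))  # nearest example is [25, 1] -> 1 (churn)
```

The point is only the shape of the problem: known inputs (features) paired with known answers (labels), and a prediction for an unseen row.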
Over time, the model might learn patterns like, customers under 30 who visit only once are more likely to leave, or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what the machine learning system builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we’re with you so far. Let’s talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples like an apple and says, "this is an apple." The student receives feedback on whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works. It's learning from feedback using labeled data and then making predictions. 06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, the sender, or links in the email. The label is whether the email is spam or not. 
The model learns patterns to help us filter our inbox, as you would have seen in your Gmail inbox. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning, that is regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms. Or estimating next quarter's revenue based on marketing spend.  08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering, is this yes or no, or which group does this belong to?  Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known.  08:53 Ready to level-up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 09:25 Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Himanshu? Himanshu: Unlike supervised learning, here, the model is not given any labels or correct answers. 
It's just handed the raw input data and left to make sense of it on its own.  The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. It's more like a student learning from observations and making their own inferences. 09:55 Lois: Where is unsupervised machine learning used? Can you take us through some of the use cases? Himanshu: The first one is product recommendation. Customers are grouped based on shared behavior, even without knowing their intent. This helps show what other users like you also prefer. The second one is anomaly detection. Unusual patterns, such as fraud, network breaches, or manufacturing defects, can stand out, all without needing thousands of labeled examples. And the third one is customer segmentation. Customers can be grouped by purchase history or behavior to tailor experiences, pricing, or marketing campaigns. 10:32 Lois: And finally, we come to deep learning. What is deep learning, Himanshu? Himanshu: Humans learn from experience by seeing patterns repeatedly. The brain learns to recognize an image by seeing it many times. The human brain contains billions of neurons. Each neuron is connected to others through synapses. Neurons communicate by passing signals. The brain adjusts connections based on repeated stimuli. Deep learning was inspired by how the brain works, using artificial neurons and connections. Just like our brains need a lot of examples to learn, so do deep learning models. The more layers and connections there are, the more complex the patterns it can learn. The brain is not hard-coded. It learns from patterns. Deep learning follows the same idea. Metaphorically speaking, a deep learning model can have over a billion neurons, more than a cat's brain, which has around 250 million neurons. Here, the neurons are mathematical units, often called nodes, or simply units. Layers of these units are connected, mimicking how biological neurons interact. 
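Those layered mathematical units can be sketched very simply. The weights and biases below are made up purely for illustration; a real network learns these values from many examples:

```python
import math

# Tiny forward pass: input layer -> one hidden layer -> output layer.
# All weights and biases are invented for illustration only;
# a real deep learning model learns them from data during training.

def layer(inputs, weights, biases):
    # Each artificial neuron: weighted sum of its inputs plus a bias,
    # squashed through a sigmoid so its output lands between 0 and 1.
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

inputs = [0.5, 0.8]  # raw input features (say, two pixel intensities)
hidden = layer(inputs, [[0.4, -0.6], [0.3, 0.9]], [0.1, -0.2])  # 2 hidden units
output = layer(hidden, [[1.2, -0.7]], [0.05])                   # 1 output unit

print(round(output[0], 3))  # a single score between 0 and 1
```

Stacking more `layer` calls between input and output is, quite literally, what makes a network "deep."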
So deep learning is a type of machine learning where the computer learns to understand complex patterns. What makes it special is that it uses neural networks with many layers, which is why we call it deep learning. 11:56 Lois: And how does deep learning work? Himanshu: Deep learning is all about finding high-level meaning from low-level data, layer by layer, much like how our brains process what we see and hear. A neural network is a system of connected artificial neurons, or nodes, that work together to learn patterns and make decisions.  12:15 Nikita: I know there are different types of neural networks, with ANNs or Artificial Neural Networks being the one for general learning. How is it structured? Himanshu: There is an input layer, which is the raw data, which could be an image, a sentence, or numbers; a hidden layer, where the patterns are detected or the features are learned; and the output layer, where the final decision is made. For example, given an image, is this a dog? A neural network is like a team of virtual decision makers, called artificial neurons, or nodes, working together. It takes input data, like a photo, and passes it through layers of neurons. And each neuron makes a small judgment and passes its result to the next layer.  This process happens across multiple layers, learning more and more complex patterns as it goes, and the final layer gives the output. Imagine a factory assembly line where each station, or layer, refines the input a bit more. By the end, you have turned raw parts into something meaningful. And this is a very simple analogy. This structure forms the foundation of many deep learning models.  More advanced architectures, like convolutional neural networks (CNNs) for images or recurrent neural networks (RNNs) for sequences, build upon this basic idea. So, what I meant is that the ANN is the base structure, like LEGO bricks. 
CNNs and RNNs use those same bricks, but arrange them in ways that are better suited for images, videos, or sequences like text or speech.  13:52 Nikita: So, why do we call it deep learning? Himanshu: The word deep in deep learning does not refer to how profound or intelligent the model is. It actually refers to the number of layers in the neural network. It starts with an input layer, followed by hidden layers, and ends with an output layer. The layers are called hidden in the sense that they are black boxes whose data is not visible or directly interpretable to the user. A model that has only one hidden layer is called shallow learning. As data moves, each layer builds on what the previous layer has learned. So layer one might detect a very basic feature, like edges or colors in an image. Layer two takes those edges and starts forming shapes, like curves or lines. And layer three uses those shapes to identify complete objects, like a face, a car, or a person. This hierarchical learning is what makes deep learning so powerful. It allows the model to learn abstract patterns and generalize across complex data, whether it's visual, audio, or even language. And that's the essence of deep learning. It's not just about layers. It's about how each layer refines the information and moves one step closer to understanding. 15:12 Nikita: Himanshu, where does deep learning show up in our everyday lives? Himanshu: Deep learning is not just about futuristic robots, it's already powering the tools we use today. So think of when you interact with a virtual assistant on a website. Whether you are booking a hotel, resolving a banking issue, or asking customer support questions, behind the scenes, deep learning models understand your text, interpret your intent, and respond intelligently. There are many real-life examples, for example, ChatGPT, Google's Gemini, any airline website’s chatbot, or a bank's virtual agent. The next one is speech-to-text systems. 
For example, if you have ever used voice typing on your phone, dictated a message to Siri, or used Zoom's live captions, you have seen this in action already. The system listens to your voice and instantly converts it into text. This saves time, enhances accessibility, and helps automate tasks, like meeting transcriptions. Again, you would have seen real-life examples, such as Siri, Google Assistant, autocaptioning on Zoom, or YouTube live subtitles. And lastly, image recognition. For example, hospitals today use AI to detect early signs of cancer in X-rays and CT scans that might be missed by the human eye. Deep learning models can analyze visual patterns, like a suspicious spot on a lung X-ray, and flag abnormalities faster and more consistently than humans. Self-driving cars recognize stop signs, pedestrians, and other vehicles using the same technology. So, for example, cancer detection in medical imaging, Tesla's self-driving navigation, and security systems that recognize faces are very prominent examples of image recognition. 17:01 Lois: Deep learning is one of the most powerful tools we have today to solve complex problems. But like any tool, I’m sure it has its own set of pros and cons. What are its advantages, Himanshu? Himanshu: The first is high accuracy. When trained with enough data, deep learning models can outperform humans, for example, spotting early signs of cancer in X-rays with higher accuracy. Second is handling of unstructured data. Deep learning shines when working with messy real-world data, like images, text, and voice, and it's why your phone can recognize your face or transcribe your speech into text. The third one is automatic pattern learning. Unlike traditional models that need hand-coded features, deep learning models figure out important patterns by themselves, making them extremely flexible. And the fourth one is scalability. 
Once trained, deep learning systems can scale easily, serving millions of customers, like Netflix recommending movies personalized to each one of us. 18:03 Lois: And what about its challenges? Himanshu: The first one is that it's data and resource intensive. Deep learning demands huge amounts of labeled data and powerful computing hardware, which means high costs, especially during training. The second is that it lacks explainability. These models often act like a black box. We know the output, but it's hard to explain exactly how the model reached that decision. This becomes a problem in areas like health care and finance, where transparency is critical. The third challenge is vulnerability to bias. If the data contains biases, like favoring certain groups, the model will learn and amplify those biases unless we manage them carefully. The fourth and last challenge is that it's harder to debug and maintain. Unlike a traditional software program, it's tough to manually correct a deep learning model if it starts behaving unpredictably. It requires retraining with new data. So deep learning offers powerful opportunities to solve complex problems using data, but it also brings challenges that require careful strategy, resources, and responsible use. 19:13 Nikita: We’re taking away a lot from this conversation. Thank you so much for your insights, Himanshu.  Lois: If you’re interested to learn more, make sure you log into mylearn.oracle.com and look for the AI for You course. Join us next week for part 2 of the discussion on AI Concepts & Terminology, where we’ll focus on Data Science. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 19:39 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
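The customer-segmentation use case mentioned earlier in this episode, grouping customers by behavior without labels, can be sketched with a toy clustering pass. This is purely illustrative: the customer figures (purchases per month, average spend) and the starting centers are invented, and a real system would use a library implementation rather than this minimal k-means.

```python
# Toy k-means clustering: group customers by (purchases per month,
# average spend) without any labels. The algorithm discovers the
# groups on its own, which is the essence of unsupervised learning.

def kmeans(points, centers, iterations=10):
    for _ in range(iterations):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in centers]
        for p in points:
            dists = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: each center moves to the mean of its cluster.
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters

# Hypothetical customers: occasional low spenders vs. frequent high spenders.
customers = [(1, 20), (2, 25), (1, 30), (9, 200), (10, 220), (8, 210)]
centers, clusters = kmeans(customers, centers=[(0, 0), (10, 100)])
print(len(clusters[0]), len(clusters[1]))  # two discovered segments
```

No label ever says which customer belongs to which segment; the grouping emerges from similarity alone, which is why the same idea also powers recommendation and anomaly detection.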

12 Aug 20min

What is AI?


In this episode, hosts Lois Houston and Nikita Abraham, together with Senior Cloud Engineer Nick Commisso, break down the basics of artificial intelligence (AI). They discuss the differences between Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI), and explore the concepts of machine learning, deep learning, and generative AI. Nick also shares examples of how AI is used in everyday life, from navigation apps to spam filters, and explains how AI can help businesses cut costs and boost revenue.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Hello and welcome to the Oracle University Podcast. I’m Nikita Abraham, Team Lead of Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi everyone! Welcome to a new season of the podcast. I’m so excited about this one because we’re going to dive into the world of artificial intelligence, speaking to many experts in the field. Nikita: If you've been listening to us for a while, you probably know we’ve covered AI from a bunch of different angles. But this time, we’re dialing it all the way back to basics. We wanted to create something for the absolute beginner, so no jargon, no assumptions, just simple conversations that anyone can follow. 01:08 Lois: That’s right, Niki. 
You don’t need to have a technical background or prior experience with AI to get the most out of these episodes. In our upcoming conversations, we’ll break down the basics of AI, explore how it's shaping the world around us, and understand its impact on your business. Nikita: The idea is to give you a practical understanding of AI that you can use in your work, especially if you’re in sales, marketing, operations, HR, or even customer service.  01:37 Lois: Today, we’ll talk about the basics of AI with Senior Cloud Engineer Nick Commisso. Hi Nick! Welcome back to the podcast. Can you tell us about human intelligence and how it relates to artificial intelligence? And within AI, I know we have Artificial General Intelligence, or AGI, and Artificial Narrow Intelligence, or ANI. What’s the difference between the two? Nick: Human intelligence is the intellectual capability of humans that allows us to learn new skills through observation and mental digestion, to think through and understand abstract concepts and apply reasoning, and to communicate using language and understand non-verbal cues, such as facial expressions, tone variation, and body language. We can handle objections and situations in real time, even in a complex setting. We can plan for short and long-term situations or projects. And we can create music, art, or invent something new or have original ideas. If machines can replicate a wide range of human cognitive abilities, such as learning, reasoning, or problem solving, we call it artificial general intelligence.  Now, AGI is hypothetical for now, but when we apply AI to solve problems with specific, narrow objectives, we call it artificial narrow intelligence, or ANI. AGI is a hypothetical AI that thinks like a human. It represents the ultimate goal of artificial intelligence, which is a system capable of chatting, learning, and even arguing like us. 
If AGI existed, it would take the form of a robot doctor that accurately diagnoses and comforts patients, or an AI teacher that customizes lessons in real time based on each student's mood, pace, and learning style, or an AI therapist that comprehends complex emotions and provides empathetic, personalized support. ANI, on the other hand, focuses on doing one thing really well. It's designed to perform specific tasks by recognizing patterns and following rules, but it doesn't truly understand or think beyond its narrow scope. Think of ANI as a specialist. Your phone's face ID can recognize you instantly, but it can't carry on a conversation. Google Maps finds the best route, but it can't write you a poem. And spam filters catch junk mail, but they can't make you coffee. So, most of the AI you interact with today is ANI. It's smart, efficient, and practical, but limited to specific functions without general reasoning or creativity. 04:22 Nikita: Ok then what about Generative AI?  Nick: Generative AI is a type of AI that can produce content such as audio, text, code, video, and images. ChatGPT can write essays, but it can't fact check itself. DALL-E creates art, but it doesn't actually know if it's good. Or AI song covers can create deepfakes like Drake singing "Baby Shark."  04:47 Lois: Why should I care about AI? Why is it important? Nick: AI is already part of your everyday life, often working quietly in the background. ANI powers things like navigation apps, voice assistants, and spam filters. Generative AI helps create everything from custom playlists to smart writing tools. And while AGI isn't here yet, it's shaping ideas about what the future might look like. Now, AI is not just a buzzword, it's a tool that's changing how we live, work, and interact with the world. So, whether you're using it or learning about it or just curious, it's worth knowing what's behind the tech that's becoming part of everyday life.  
05:32 Lois: Nick, whenever people talk about AI, they also throw around terms like machine learning and deep learning. What are they and how do they relate to AI? Nick: As we shared earlier, AI is the ability of machines to imitate human intelligence. And Machine Learning, or ML, is a subset of AI where the algorithms are used to learn from past data and predict outcomes on new data or to identify trends from the past. Deep Learning, or DL, is a subset of machine learning that uses neural networks to learn patterns from complex data and make predictions or classifications. And Generative AI, or GenAI, on the other hand, is a specific application of DL focused on creating new content, such as text, images, and audio, by learning the underlying structure of the training data.  06:24 Nikita: AI is often associated with key domains like language, speech, and vision, right? So, could you walk us through some of the specific tasks or applications within each of these areas? Nick: Language-related AI tasks can be text related or generative AI. Text-related AI tasks use text as input, and the output can vary depending on the task. Some examples include detecting language, extracting entities in a text, extracting key phrases, and so on.  06:54 Lois: Ok, I get you. That’s like translating text, where you can use a text translation tool, type your text in the box, choose your source and target language, and then click Translate. That would be an example of a text-related AI task. What about generative AI language tasks? Nick: These are generative, which means the output text is generated by the model. Some examples are creating text, like stories or poems, summarizing texts, and answering questions, and so on. 07:25 Nikita: What about speech and vision? Nick: Speech-related AI tasks can be audio related or generative AI. Speech-related AI tasks use audio or speech as input, and the output can vary depending on the task. 
For example, speech to text conversion, speaker recognition, or voice conversion, and so on. Generative AI tasks are generative, i.e., the output audio is generated by the model (for example, music composition or speech synthesis). Vision-related AI tasks can be image related or generative AI. Image-related AI tasks use an image as the input, and the output depends on the task. Some examples are classifying images or identifying objects in an image. Facial recognition is one of the most popular image-related tasks that's often used for surveillance and tracking people in real time. It's used in a lot of different fields, like security and biometrics, law enforcement, entertainment, and social media. For generative AI tasks, the output image is generated by the model. For example, creating an image from a textual description or generating images of specific style or high resolution, and so on. It can create extremely realistic new images and videos by generating original 3D models of objects, such as machine, buildings, medications, people and landscapes, and so much more. 08:58 Lois: This is so fascinating. So, now we know what AI is capable of. But Nick, what is AI good at? Nick: AI frees you to focus on creativity and more challenging parts of your work. Now, AI isn't magic. It's just very good at certain tasks. It handles work that's repetitive, time consuming, or too complex for humans, like processing data or spotting patterns in large data sets.  AI can take over routine tasks that are essential but monotonous. Examples include entering data into spreadsheets, processing invoices, or even scheduling meetings, freeing up time for more meaningful work. AI can support professionals by extending their abilities. Now, this includes tools like AI-assisted coding for developers, real-time language translation for travelers or global teams, and advanced image analysis to help doctors interpret medical scans much more accurately. 
10:00 Nikita: And what would you say is AI's sweet spot? Nick: That would be tasks that are both doable and valuable. A few examples of tasks that are feasible technically and have business value are things like predicting equipment failure. This saves downtime and the loss of business. Call center automation, like the routing of calls to the right person. This saves time and improves customer satisfaction. Document summarization and review. This helps save time for busy professionals. Or inspecting power lines. Now, this task is dangerous. By automating it, it protects human life and saves time. 10:48 Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest tech. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025. 11:30 Nikita: Welcome back! Now one big way AI is helping businesses today is by cutting costs, right? Can you give us some examples of this?  Nick: Now, AI can contribute to cost reduction in several key areas. For instance, chatbots are capable of managing up to 50% of customer queries. This significantly reduces the need for manual support, thereby lowering operational costs. AI can streamline workflows, for example, reducing invoice processing time from 10 days to just 1 hour. This leads to substantial savings in both time and resources. In addition to cost savings, AI can also support revenue growth. One way is enabling personalization and upselling. Platforms like Netflix use AI-driven recommendation systems to influence user choices. This not only enhances the user experience, but it also increases the engagement and the subscription revenue. Or unlocking new revenue streams. 
AI technologies, such as generative video tools and virtual influencers, are creating entirely new avenues for advertising and branded content, expanding business opportunities in emerging markets. 12:50 Lois: Wow, saving money and boosting bottom lines. That’s a real win! But Nick, how is AI able to do this?  Nick: Now, data is what teaches AI. Just like we learn from experience, so does AI. It learns from good examples, bad examples, and sometimes even the absence of examples. The quality and variety of data shape how smart, accurate, and useful AI becomes. Imagine teaching a kid to recognize animals using only pictures of squirrels that are labeled dogs. That would be very confusing at the dog park. AI works the exact same way, where bad data leads to bad decisions. With the right data, AI can be powerful and accurate. But with poor or biased data, it can become unreliable and even misleading.  AI amplifies whatever you feed it. So, give it gourmet data, not data junk food. AI is like a chef. It needs the right ingredients. It needs numbers for predictions, like will this product sell? It needs images for cool tricks like detecting tumors, and text for chatting, or generating excuses for why you'd be late. Variety keeps AI from being a one-trick pony. To give some examples of data types and their uses: numbers feed machine learning models that predict things like the weather; text powers generative AI, where chatbots write emails or bad poetry; images feed deep learning models that identify defective parts on an assembly line; and audio can be used to transcribe a doctor's dictation into text. 14:35 Lois: With so much data available, things can get pretty confusing, which is why we have the concept of labeled and unlabeled data. Can you help us understand what that is? Nick: Labeled data are like flashcards, where everything has an answer. Spam filters learn from emails that are already marked as junk, and X-rays are marked either normal or pneumonia. 
Let's say we're training AI to tell cats from dogs, and we show it a hundred labeled pictures. Cat, dog, cat, dog, etc. Over time, it learns: hmm, fluffy and pointy ears? That's probably a cat. And then we test it with new pictures to verify. Unlabeled data is like a mystery box, where the AI has to figure it out itself. Social media posts or product reviews have no labels, so the AI clusters them by similarity. AI finding trends in unlabeled data is like a kid sorting through LEGOs without instructions. No one tells them which blocks go together.  15:36 Nikita: With all the data that’s being used to train AI, I’m sure there are issues that can crop up too. What are some common problems, Nick? Nick: AI's performance depends heavily on the quality of its data. Poor or biased data leads to unreliable and unfair outcomes. Dirty data includes errors like typos, missing values, or duplicates. For example, an age recorded as 250, or as NA, can confuse the AI. A variety of data cleaning techniques are available: missing values can be filled in, and duplicates can be removed. AI can inherit human prejudices if the data is unbalanced. For example, a hiring AI may favor one gender if the past three hires were mostly male. Ensuring diverse and representative data helps promote fairness. Good data is required to train better AI. Data can be messy and needs to be processed before it's used to train AI. 16:39 Nikita: Thank you, Nick, for sharing your expertise with us. To learn more about AI, go to mylearn.oracle.com and search for the AI for You course. As you complete the course, you’ll find skill checks that you can attempt to solidify your learning.  Lois: In our next episode, we’ll dive deep into fundamental AI concepts and terminologies. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 17:05 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
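The data-quality points Nick makes in this episode, filling in missing values and removing duplicates, can be sketched in a few lines of Python. The records and the fill-with-median rule are assumptions made purely for the sketch:

```python
# Toy data cleaning: drop duplicate records and repair missing or
# implausible ages before the data is used to train a model.

def clean(records, max_age=120):
    # 1) Remove exact duplicate records while preserving order.
    seen, unique = set(), []
    for r in records:
        key = tuple(r.items())
        if key not in seen:
            seen.add(key)
            unique.append(dict(r))
    # 2) Replace missing or implausible ages with the median valid age.
    valid = sorted(r["age"] for r in unique
                   if isinstance(r["age"], int) and 0 < r["age"] <= max_age)
    median = valid[len(valid) // 2]
    for r in unique:
        if not (isinstance(r["age"], int) and 0 < r["age"] <= max_age):
            r["age"] = median
    return unique

# Hypothetical records, including the kinds of errors mentioned above.
rows = [
    {"name": "Ana", "age": 34},
    {"name": "Ben", "age": 250},   # implausible: a typo in entry
    {"name": "Ana", "age": 34},    # exact duplicate
    {"name": "Cy", "age": None},   # missing value
    {"name": "Di", "age": 41},
]
print(clean(rows))
```

Real pipelines would use a data-processing library and more careful imputation rules, but the principle is the same: clean the data before the model ever sees it.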

5 Aug 17min

Modernize Your Business with Oracle Cloud Apps – Part 2


In this episode, hosts Lois Houston and Nikita Abraham welcome back Cloud Delivery Lead Sarah Mahalik for a detailed tour of the four pillars of Oracle Fusion Cloud Applications: ERP, HCM, SCM, and CX.   Discover how Oracle weaves AI, analytics, and automation into every layer of enterprise operations. Plus, learn how Oracle Modern Best Practice is redefining digital workflows.   Oracle Fusion Cloud Applications: Process Essentials https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-hcm/146870 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundations-enterprise-resource-planning-erp/146928/241047 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-scm/146938 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-cx/146972   Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript:   00:00   Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!   00:25   Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and joining me is Nikita Abraham, Team Lead: Editorial Services.     Nikita: Hi everyone! Last week, we spoke about Oracle Cloud Apps and the Redwood design system. Today, we’ll take a closer look at the four key pillars of Oracle Cloud Apps.    Lois: And we’re so excited to have Sarah Mahalik back with us. Sarah is a Cloud Delivery Lead here at Oracle. Hi Sarah! 
In the last episode, we briefly spoke about the various Oracle Cloud Apps offerings and their capabilities. For anyone who missed that episode, can you give us a quick introduction?   01:06   Sarah: Oracle Cloud Applications is an incredibly broad suite that covers many of the most important business functions, from Human Capital Management and Supply Chain Management to Enterprise Resource Planning and Customer Experience. The products in the Oracle Fusion Cloud Applications suite are organized by functional groups, or pillars. All of these applications sit on Oracle Cloud Infrastructure, a foundation built from scratch to support mission-critical applications.    Oracle Fusion Applications deliver a single source of truth, enabling quick responses to disruptions and market opportunities. With unified data and consistent business rules, teams can build streamlined end-to-end processes, access real-time analytics, and make faster data-driven decisions for improved outcomes.   01:52   Nikita: Ok, let’s actually get into each of these areas. I think we can start with Human Capital Management.   Sarah: Oracle Human Capital Management is an end-to-end solution that allows you to manage all aspects of people data from hire to retire. It all starts with recruiting, where requisitions are used to advertise vacant positions and candidates are managed through the hiring process. After recruitment, successful candidates are transferred to the human resources module.   You can configure the organization structure to mirror that of your business. And this allows for easy reorganization whenever the structure changes. People data is a staple element of HCM. Therefore, as part of this product, an HR specialist can manage everything about the employee life cycle, including promotions, transfers, general assignment changes, and terminations.   
A robust self-service offering allows employees and managers to take ownership and responsibility for the data pertaining to themselves and their teams. By removing the burden of simple data processing from the HR specialists, it not only eases the pressure on the HR department but allows them to concentrate on more specialized tasks.   03:00   Lois: And how are the core products of HCM categorized?   Sarah: The core products of Human Capital Management are categorized into four main groupings according to their logical purpose. First up, we have our human resources. This grouping includes the elements for implementing and maintaining the enterprise and workforce structure and employee life cycle data. This is where you would configure the organization structure as well as manage an employee's data from the HR specialist point of view. In addition, modules such as benefits, work life, workforce modeling and planning, and advanced HCM controls also sit within this category.   This brings us to talent management. This category is one of the largest because it includes recruiting, learning, goals and performance management, career development, succession planning, talent reviews, and compensation. In addition to that, dynamic skills and opportunity marketplace are also included in this grouping. Within workforce management, you'll find absence management and time and labor.   These naturally sit together because most organizations that implement both configure it so that an employee can enter both work time and absences on a time card, instead of having to visit two different entry points. You'll also find workforce health and safety here. And finally, payroll. All aspects of payroll are included here, whether you're simply using global payroll or localizations, such as UK, Canada, and Mexico.   
It also encompasses payroll interface for those organizations that run their payroll from another system, and just need to extract and migrate the relevant data from Fusion HCM Cloud. When talking about HCM systems, we cannot forget the employee self-service aspect of the product. For this, there's an employee experience module called Oracle Me. Here you'll find options, such as HCM communicate, touchpoints, journeys, HR help desk, and Oracle digital assistant.   All of these combined enable an employee to take control and ownership of their own data, and use the many self-help options to get the information they need quickly and efficiently. In order to control how the system behaves and how users interact with it and perform the various processes, there are configuration options. These options allow organizations to define such things as the user experience, workflows, and approval policies based on their business requirements. And to meet the constant need for reporting, there's analytics, planning, and data modeling.   And in addition to all of that, you can use configuration options, such as extensibility, integration, or import and extracts, security, and adaptive intelligence to help enhance the system and have it working and looking the way you need. Of course, much of these latter configuration items are not exclusive to HCM but are available for the Oracle Fusion Cloud as a whole.   05:47   Lois: That’s great. Ok, let’s move on to Oracle Enterprise Resource Planning, or ERP.   Sarah: This is a complete modern Cloud ERP suite that provides your teams with advanced capabilities, such as AI, to automate the manual processes that slow them down, analytics to react to market shifts in real time, and automatic updates to stay current and gain a competitive advantage.   Oracle Cloud ERP automates the entire Record to Report process and provides a common repository of information for global financial reporting and compliance. 
Within ERP, we have the broadest and deepest suite offering everything you need, from financials, project management, enterprise performance management, risk management and compliance, and analytics.    06:34   Nikita: Sarah, could you break down the different modules within ERP?   Sarah: First, we have Financials, which is a global financial platform that connects and automates your financial management processes, including payables, receivables, fixed assets, expenses, and reporting for a clear view into your total financial health. Oracle Project Management offers a single project cloud solution designed to help you gain a complete picture of your organization's project finances and operations. It's seamlessly integrated across the enterprise with the Oracle Fusion Cloud ERP, HCM, and SCM applications.   Oracle Fusion Cloud Enterprise Performance Management, or EPM, helps you model and plan across finance, HR, supply chain and sales, streamline the financial close process, and drive better decisions. Oracle Fusion Cloud Risk Management and Compliance is a security and audit solution that controls user access to your Oracle Cloud ERP financial data, monitors user activity, and makes it easier to meet compliance regulations through automation.   Oracle Risk Management Compliance uses AI and ML to strengthen financial controls to help prevent cash leaks, enforce audit, and protect against emerging risks, saving you hours of manual work. Oracle Analytics for Cloud ERP complements the embedded analytics in Cloud ERP to provide pre-packaged use cases, predictive analysis, and KPIs based on variance analysis and historical trends.    08:06   Lois: And what about Supply Chain Management?   Sarah: Oracle Supply Chain Management empowers organizations to plan, source, make, deliver, and service goods with agility and resilience. 
It offers a solution that integrates advanced capabilities, such as AI/ML and blockchain, to optimize the supply chain life cycle from start to finish.   08:31   Adopting a multicloud strategy is a big step towards future-proofing your business and we’re here to help you navigate this complex landscape. With our suite of courses, you'll gain insights into network connectivity, security protocols, and the considerations of working across different cloud platforms. Start your journey today to multicloud today by visiting mylearn.oracle.com.    08:58   Nikita: Welcome back! Sarah, what makes Oracle Fusion SCM so powerful?   Sarah: When it comes to planning, you can leverage strategic, tactical, and operational processes for accurate forecasting and resource alignment. Sourcing and manufacturing help you streamline procurement and production to meet supply and demand efficiently. Inventory and warehousing processes ensure the right goods are available, stored, and managed effectively.   Fulfillment is also known as the pick, pack, and ship part of the supply chain and delivery entails order tracking and receipt. Having connected processes in place ensures that billing and revenue recognition are applied correctly on the goods and services. Great customer service models provide accurate tracking of customer orders and deliveries. And this can provide insight for an accurate picture of future planning, manufacturing, and inventory forecasts. This is a constant cycle because information and analytics feed into the planning process.   Oracle Supply Chain Management is designed to seamlessly integrate and optimize every step of the supply chain process, ensuring businesses can adapt to dynamic market conditions and customer expectations. The solution supports end-to-end supply chain processes and leverages cutting-edge technologies to transform how organizations manage their operations.   
In planning, Oracle SCM empowers businesses with advanced planning tools to align supply and demand effectively. Sourcing and manufacturing assists in streamlining procurement and manufacturing workflows to drive efficiency. Inventory and warehousing optimizes inventory and warehouse management processes with intelligent capabilities.   Fulfillment and delivery help to accelerate order fulfillment and delivery operations to meet customer needs. And servicing allows you to maintain strong customer relationships through seamless post-sale servicing. Oracle SCM ensures an agile and resilient supply chain with the help of technologies like AI, ML, and blockchain. These tools empower organizations to stay competitive in a fast-paced environment while exceeding customer expectations.   11:03   Lois: To round out our discussion, let’s talk about Oracle Customer Experience.   Sarah: Customer Experience, or CX, provides the platform and products necessary to capture all customer touch points and interactions. This platform also automates the business process from interest and lead generation to the sale and provision of products and services. The major product areas are marketing, sales, service, and CX platform.    11:32   Nikita: Could you dive a bit deeper into its key areas?   Sarah: Oracle Marketing solutions allow you to create targeted cross-channel marketing campaigns, optimize lead generation activities, personalize customer and prospect communication, and automate marketing activities. Use real-time data-driven insights to engage, convert, and nurture buyer relationships to increase sales. Featured products include Eloqua Marketing Automation, Responsys Campaign Management, CrowdTwist Loyalty and Engagement, Infinity Behavioral Intelligence, Unity Customer Data Platform, and more.   With Oracle Sales, you can deliver responsive selling across all touchpoints. 
Oracle Sales guides sellers with intelligent recommendations and gives them a faster path to critical records to help them focus on the right prospects at the right time. The modern, unified selling and buying approach of Oracle CX connects sales and commerce to service, marketing, and the entire customer experience. Featured products include Sales Force Automation, Sales Planning, Sales Performance Management, Configure, Price, and Quote, Subscription Management, Partner Relationship Management, and Customer Data Management.   Oracle Service enables you to help customers when and where they need you with automated workflows for customer self-service, agent-assisted service, and Field Service engagements. You can accelerate the resolution of service issues with AI-driven recommendations, unified data visibility, and cross-organization and cross-channel collaboration tools.   At Oracle, we make every customer interaction matter by using a suite of CX Cloud applications that connect marketing, sales, customer service, Field Service, and e-commerce. Oracle connects our customer experience systems with finance, supply chain, and HR on a unified cloud platform for a single, dynamic 360-degree view of the customer.   13:31   Lois: Before we wrap up, how does Oracle Modern Best Practice, or OMBP, fit into Oracle Cloud Apps?   Sarah: OMBP illustrates common business processes optimized to leverage the latest applications and technologies in Oracle Fusion Applications.   Oracle Modern Best Practice comprises reimagined industry standard business processes powered by Oracle technology. Engineered into Fusion Applications, OMBP simplifies and streamlines workflows, enabling organizations to leverage modern, efficient, and scalable practices. As we align more assets with OMBP, there will be a stronger connection between global process owners and business process innovation within a customer's organization.   OMBP was derived from over 10,000 successful delivery projects. 
To publish an OMBP, past Oracle projects were analyzed for successful and unsuccessful processes. Successful processes were reviewed and optimized by product experts, engineers, customers, and key users. Optimized processes were published to OMBP to make them available to other customers.   14:40   Lois: Well, that’s it for this episode. Thank you, Sarah, for all of your incredible insights.    Nikita: If you want to learn more about what we discussed today, head over to mylearn.oracle.com and take a look at the Oracle Fusion Cloud Applications Process Essentials courses. Until next time, this is Nikita Abraham…   Lois: And Lois Houston, signing off!   15:01   That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

29 July 15min

Modernize Your Business with Oracle Cloud Apps – Part 1


Join hosts Lois Houston and Nikita Abraham, along with Cloud Delivery Lead Sarah Mahalik, as they unpack the core pillars of Oracle Fusion Cloud Applications—ERP, HCM, SCM, and CX.   Learn how Oracle’s SaaS model, Redwood UX, and built-in AI are reshaping business productivity, adaptability, and user experience. From quarterly updates to advanced AI agents, discover how Oracle delivers agility, lower costs, and smarter decision-making across departments.   Oracle Fusion Cloud Applications: Process Essentials https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-hcm/146870 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundations-enterprise-resource-planning-erp/146928/241047 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-scm/146938 https://mylearn.oracle.com/ou/course/oracle-fusion-cloud-applications-foundation-cx/146972   Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. -------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Team Lead: Editorial Services with Oracle University, and with me is Lois Houston, Director of Innovation Programs.  Lois: Hi everyone! In our last two episodes, we explored the Oracle Cloud Success Navigator platform. This week and next, we’re diving into Oracle Fusion Cloud Applications with Sarah Mahalik, a Cloud Delivery Lead here at Oracle. 
We’ll ask Sarah about Oracle’s cloud apps suite, the Redwood design system, and also look at some of Oracle’s AI capabilities.  01:02 Nikita: Yeah, let’s jump right in. Hi Sarah! How does Oracle approach the SaaS model? Sarah: Oracle's Cloud Applications suite is a complete enterprise cloud designed to modernize your business. Our cloud suite of SaaS applications, which includes Enterprise Resource Planning, or ERP, Supply Chain Management, or SCM, Human Capital Management, or HCM, and Customer Experience, or CX, brings consistent processes and a single source of truth across the most important business functions.  At Oracle, we own all of the technology stacks that power our suite of cloud applications. Oracle Cloud Applications are built on Oracle Cloud Infrastructure and ensure the performance, resiliency, and security that enterprises need. Your business no longer needs to worry about maintaining a data center, hardware, operating systems, database, network, or all of the security. With deep integrations, a common data model, and a unified user interface, these applications help improve customer engagement, increase agility, and accelerate response to change. Oracle's Cloud Applications are updated quarterly with new features and improvements. These updates are based on our deep understanding of customer's functional needs, as well as modern technologies such as artificial intelligence, machine learning, blockchain, and digital assistants. Expectations for user experience only go up. Oracle's Redwood User Experience methodology ensures those expectations are matched and exceeded by including powerful and predictive search, a look and feel that actually helps users see what they need to in the order they need to see it, and by providing conversational and micro-interactions. 
Oracle, as a SaaS provider, puts the customer first by having enough dedicated resources to ensure zero downtime and increasing the speed of implementation by eliminating much of the hardware and software setup activity. 02:59 Nikita: What are the advantages of adopting Oracle Cloud Apps? Sarah: First off, Oracle provides automatic quarterly updates, and they're usable immediately. Customers can focus on leveraging the new functionality instead of spending cycles on installing it. There's much more accessibility because Oracle hosts the heavy part of the applications and customers access it via thin clients. The applications can be used from nearly anywhere and on a wide range of devices, including smartphones and tablets. Another great advantage is speed and agility. A lot of the benefits you see here result from Oracle's provider model. That means customers aren't spending time on customization, application testing, and report development. Instead, they work on the much lighter and faster tasks of configuration, validation, and leveraging embedded analytics. And finally, it's just better economics. Because of the pricing model, it is easy to compare costs with an on-premises implementation. While upfront costs are almost always lower, overall operational costs and risk are usually lower as well. This translates to better total cost of ownership and improved overall economics and agility for your business. 04:10 Lois: Sarah, in your experience, why do customers love Oracle Cloud Apps? Sarah: At Oracle, we empower you with embedded AI that drives real breakthroughs in productivity and efficiency, helping you stay ahead of the curve. With the power of Oracle Cloud Infrastructure, you get the best of performance, security, and scalability, making it the perfect foundation for your business. Our modern user experience is intuitive and designed with your needs in mind, while our relentless innovation is focused on what truly matters to you. 
Above all, our commitment to your success is unwavering. We're here to support you every step of the way, ensuring you thrive and grow with Oracle. 04:49 Lois: Let’s talk about Oracle’s Redwood design system. What is it? And how does it enhance the user experience?  Sarah: Redwood is the name of Oracle's next-generation user experience. The Redwood design system is a collection of prefabricated components, templates, and patterns to enable developers to quickly create very sophisticated and polished interactions that are upgrade safe. It provides a consumer-grade-plus experience, where you have high-quality functionality that can be used across multiple devices. You have access to insightful data readily at your fingertips for quick access and decision making, with the option to personalize your application to create your own state-of-the-art experience. Processes and entry time will now be more efficient and streamlined by having fewer clicks and faster downloads, which will lead to high productivity in areas that matter the most. The Redwood design is intelligent, meaning you have access to AI, where you will receive recommendations and guidance based on your preferences and business processes. It's also adaptable, allowing you to use the same tools to create new experiences by using the Business Rule Framework with modern UX components. Oracle's Redwood user experience will help you to be more productive, efficient, and engaged with a highly personalized experience. 06:11 Are you keen to stay ahead in today's fast-paced world? We’ve got your back! Each quarter, Oracle rolls out game-changing updates to its Fusion Cloud Applications. And to make sure you’re always in the know, we offer New Features courses that give you an insider’s look at all of the latest advancements. Don't miss out! Head over to mylearn.oracle.com to get started.  06:37 Nikita: Welcome back! Sarah, you said the Redwood design system is adaptable. Can you elaborate on what you mean by that?  
Sarah: In a nutshell, this means that developers can extend their applications using the same development platform that Oracle Cloud Applications are built on. Oracle Visual Builder Studio is a robust application development platform that enables users to rapidly create and extend web, mobile, and progressive web interfaces using a visual development environment. It streamlines application development and reduces coding, while also providing flexibility and support for popular build and testing frameworks. With Oracle Visual Builder Studio, users can build apps for the web, create progressive web apps, and develop on-device mobile apps. The tool also offers access to REST services and allows for planning and managing development processes, as well as managing the code lifecycle. Additionally, Oracle Visual Builder Studio provides hosting for apps along with easy publishing and version management. Changes made using Visual Builder Studio are called Application Extensions.  Visual Builder Studio Express Mode has two key components: Business Rules and Constants. Use Business Rules, which is the Redwood equivalent to Transaction Design Studio for responsive pages, to leverage delivered best practices or create your own rules based on various criteria, such as country and business unit. Make fields and regions required or optional, read-only or editable, and show or hide fields in regions, depending on specific criteria. Use the various delivered Constants to customize your Redwood pages to best fit your specific business needs, such as hide the evaluation panel and connections or reorder the columns in the person search result table. 08:23 Lois: Sarah, here’s a question that's probably on everyone's mind—what about AI for Fusion Applications? Sarah: Oracle integrates AI into Fusion Applications, enabling faster, better decision making and empowering your workforce. 
With both classic and generative AI embedded, customers can access AI-driven insights seamlessly within their everyday software environment. In HCM, AI helps to automate routine tasks. It's also used to attract and manage talent more efficiently by doing things like reducing the time to hire and performing automatic skill matching for job vacancies. It also uses some of that skill matching for existing employees to optimize their engagement, improve productivity, and maximize career growth. All the while, it provides suggested actions so that those tasks are quick, accurate, and easy. In SCM, AI helps predict order cycle times by analyzing historical data and trends, allowing for more accurate planning. It also generates item descriptions automatically and uncovers potential suppliers by analyzing market data, thereby improving efficiency and sourcing decisions. With ERP, it's all about delivering efficiencies and improving strategic contributions. You can use AI to automate some of the core processes and provide guided actions for users in the rest of the processes. This is our recipe for improved efficiency and reduced human error. For example, in Payables, you can use AI features to accelerate and simplify invoice processing and identify duplicate transactions. And in CX, AI helps you to identify which sales leads offer the greatest potential, provides real-time news alerts, and then recommends actions to ensure that reps are working on the right opportunity at the right time and improving conversion to sale. 10:11 Lois: Everyone’s heard about AI agents, but I’ve always wondered how they work.  Sarah: AI agents are a combination of large language models and other advanced technologies that interact with their environments, automate complex tasks, and collaborate with employees in real time. Reasoning capabilities in these LLMs differentiate AI agents from the brittle rules-based automation of the past. 
Since they can make judgment calls, AI agents can create action plans and manage workflows, either independently or with human supervision. At the core of their functionality is the capability to learn from previous interactions, use data from internal systems, and collaborate with both people and other agents. This ability to continuously adapt makes AI agents particularly valuable for complex business environments, where flexibility and scalability are key. 11:01 Nikita: And how do they work specifically in Fusion Apps? Sarah: Oracle Fusion AI agents are autonomous assistants designed to help organizations streamline operations, improve decision making, and reduce manual workloads. They can assist with simple or complex tasks and work across departments. 11:19 Lois: Sarah, what are the different types of AI agents? Sarah: Functional agents act as digital assistants for different personas within the enterprise and perform domain-specific tasks. Supervisory agents manage other agents, overseeing complex workflows and making decisions on whether human intervention is needed. Utility agents perform routine low-risk tasks such as retrieving data, sending notifications, or running reports. They're often optimized to help with specific roles, so an AI agent or collection of agents might act as a finance clerk, hiring manager, or a customer service representative. 11:54 Nikita: Can you give us some real-world use cases? Sarah: In human resources, agents will assist employees with benefit inquiries and policy clarifications. In finance, agents will automate invoice approvals and help optimize financial workflows. In supply chain management, a field service agent can guide technicians through repairs by providing real-time diagnostic data, troubleshooting steps, and automating orders for parts. 
In customer experience, the contracts researcher agent enables sales teams to automate routine contract workflows and approvals so they can focus on selling rather than administrative tasks. Oracle Fusion AI agents represent a leap beyond traditional AI. They don't just automate, they collaborate with human workers, making AI agents more than just tools. By integrating advanced AI within business systems, Oracle continues to lead the way in improving productivity and operational efficiency. As AI technology evolves, expect to see even more sophisticated AI agents capable of managing entire business processes autonomously, giving your team the freedom to focus on strategic, high-impact activities. 13:04 Nikita: Thank you so much for taking us through all that, Sarah. We’re really excited to have you back next week to continue this discussion.  Lois: And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the free Oracle Fusion Cloud Applications Process Essentials courses to learn more. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 13:27 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

22 July 13min

Oracle Cloud Success Navigator – Part 2


Hosts Lois Houston and Nikita Abraham continue their discussion with Mitchell Flinn, VP of Program Management for the CSS Platform, by exploring how Oracle Cloud Success Navigator helps teams align faster, reduce risk, and drive value.   Learn how built-in quality benchmarks, modern best practices, and Starter Configuration tools accelerate cloud adoption, and explore ways to stay ahead with a mindset of continuous innovation.   Oracle Cloud Success Navigator Essentials: https://mylearn.oracle.com/ou/course/oracle-cloud-success-navigator-essentials/147489/242186 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ----------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:25 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and joining me today is Nikita Abraham, Team Lead of Editorial Services.  Nikita: Hi everyone! In our last episode, we gave you a broad overview of the Oracle Cloud Success Navigator platform—what it is, how it works, and its key features and benefits. Today, we’re continuing that discussion with Mitchell Flinn. Mitchell is VP of Program Management for Oracle Cloud Success Navigator, and in this episode, we’re going to ask him to walk us through some of the core components of the platform that we couldn’t get into last week. 01:04 Lois: Right, Niki. Hi Mitchell! You spoke a little about Cloud Quality Standards in our last episode. 
But how do they contribute or align with the vision of Oracle Cloud Success Navigator?  Mitchell: The vision for Navigator is to support customers throughout every phase of their cloud journey, providing timely advice to help improve outcomes to reduce cost and increase overall value. This model is driven through Oracle Cloud Quality Standards. These standards are intended to improve the transparency and collaboration between customer, partner, and Oracle members of a project. They provide a project blueprint that enables business and IT users to align on project coordination and expectations, and ultimately drive tighter alignment. Tracking key milestones and activities can help visualize and measure progress. You can build assessments and help answer questions so that at the right time, you have the right resources to make the right decisions for an organization. Cloud Quality Standards represent the key milestone dates and accomplishments along the journey. You can leverage these to increase project transparency, reduce risk, and increase the overall collaboration. Cloud Quality Standards are a proactive list of must-haves leveraged by customers, partners, and Oracle. They're a collection of knowledge and lessons learned from thousands of implementations globally. Cloud Quality Standards are partner agnostic and complementary to all SI methodologies and tool sets. And they've been identified to address delivery issues before they happen and reduce the risk of implementations. 02:34 Lois: Ok, and a crucial component of Oracle Cloud Success Navigator is Oracle Modern Best Practice, or OMBP, right? Can you tell us more about what this is?  Mitchell: Oracle Modern Best Practices are based on distilled knowledge of our customers' needs gained from 10,000 successful delivery projects. They illustrate the business process components and their optimization to take advantage of the latest Oracle applications and technologies. 
Oracle Modern Best Practices comprise industry best practices and processes powered by Oracle technology. Engineered in Fusion Applications, OMBPs simplify and streamline workflows. They enable organizations to leverage modern, efficient, and scalable practices. As we align our assets with OMBPs, there's a stronger connection between global process owners and business process innovation within a customer's organization. 03:21 Nikita: And how do they help deliver end-to-end success for businesses?  Mitchell: An OMBP approach involves a digital business process, so evolving and adapting in real time to changing market dynamics. End-to-end across the organization, so we're breaking down silos and ensuring there's operational agility and a seamless collaboration between departments. We're leveraging emerging technologies, so utilizing AI and other cutting-edge technologies to automate routine tasks, enabling greater human creativity and unlocking new value and insights. And radically superior results, driving a significant improvement in measurable outcomes. OMBPs are dynamic, and when regularly updated, they meet evolving customer needs and technologies. They're trusted, tested, and validated by Oracle experts and publicly available for download on oracle.com. If you go to oracle.com and search modern best practice, you'll find a more detailed introduction to Oracle Modern Best Practices. You'll also find Oracle Modern Best Practice business processes for domains such as ERP, EPM, Supply Chain, HCM, and Customer Experience. We also have Oracle Modern Best Practices for specific industries. 04:25 Nikita: What are the key benefits of OMBP? Mitchell: Revolutionary new technologies are available for organizations and business leaders. You might wonder how existing business processes are optimized with old technology and how they can drive the best solution. With more emerging technologies reaching commercial availability, existing best practices become outdated. 
And to stay competitive, organizations need to continuously innovate and incorporate new technology within their best practices. In Oracle's definition of OMBPs, common business processes are considered historic input, but we also factor in what could be done with new technologies. And based on this approach, Oracle Modern Best Practices help us evolve with the organizational needs as market dynamics change, work end to end across organizations to eliminate department silos and ensure agility. It allows us to use technologies such as AI to automate the mundane and unlock human creativity for new value and insight. This allows us to incorporate next generation digital technologies to enable radically superior, measurable results. To achieve these, Oracle makes use of key differentiators such as analytics, AI, and machine learning. Analytics, also known as business intelligence, provides you with information in the form of pre-built dashboards, showing your key metrics in real time. Embedded analytic capabilities enable you to monitor business performance and make better decisions. 05:44 Lois: And what about AI and machine learning? Mitchell: These focus on building systems that learn or improve performance based on the data that they consume. Smart digital assistants, recommendation engines, predictive analytics, they're all used within AI and machine learning to help organizations automate operations and drive innovation, and ultimately make better decisions faster. 06:02 Nikita: Mitchell, let’s move on to the Starter Configuration. Can you explain what it is and how it helps during a cloud implementation? Mitchell: Starter Configuration is a predefined configuration of Oracle Cloud Applications aligned with the Oracle Modern Best Practices. It's very comprehensive and includes business processes in several domains, such as ERP, HCM, Supply Chain, EPM, and so on. 
It includes sample, master, and transactional data, and predetermined usernames, aligned with and tested against the same use cases you saw in Oracle Modern Best Practices in Cloud Success Navigator. Customers can request deployment of a Starter Configuration into their test environment. Oracle will run an automated process for replicating the configuration, master data, transaction data, and predetermined usernames from Oracle to the Oracle Cloud Applications Test Environment of the customer's choice. For best user experience, customers can add a basic level of personalization, such as their customer name, limited number of employees, suppliers, customers, and a few other items. The Starter Configuration is delivered with predetermined step guides for a comprehensive set of use cases. Using these, customers can replay the same use cases they've seen in Oracle Modern Best Practices and Success Navigator. In the customer's Oracle Cloud Applications Test Environment, we've been able to enable in-app guidance using Oracle Guided Learning. This makes it easier to navigate through the business processes supported by the application. Oracle can deploy the Starter Configuration in days, not weeks or months, which means the implementation partners don't need to invest time and effort for the first configuration of an Oracle Cloud Application environment before they can even get the chance to show it to a customer. In turn, once Starter Configuration is deployed, it's ready to be used for solution familiarization and design activities. Using Starter Configuration of Oracle Cloud Applications early in the cloud journey will offer several benefits to customers. 08:00 Lois: What are these benefits? Mitchell: The first, it helps to cut down on environment configuration time from several weeks or months to potentially just days. 
Next, implementation partners can engage stakeholders early, and get them familiar with Oracle Cloud Applications, especially those who may have never participated in the sales cycle. Because customer stakeholders actually see what Oracle Cloud solutions might look like in the future, it becomes easier to make design decisions. Starter Configuration provides hands-on familiarization with Oracle Cloud Applications and Oracle Leading Practices. This makes it easier to understand what leading practices and standard features can be adopted to support future business processes. It also reduces the level of customization and accelerates implementation. 08:45 Transform the way you work with Oracle Database 23ai! This cutting-edge technology brings the power of AI directly to your data, making it easier to build powerful applications and manage critical workloads. Want to learn more about Database 23ai? Visit mylearn.oracle.com to pick from our range of courses and enroll today!   09:10 Nikita: Welcome back! Mitchell, how can customers and implementation partners best use the Starter Configuration?  Mitchell: Customers and implementation partners will work in close collaboration to make the implementation successful. Hence, Oracle recommends that customers and implementation partners discuss how the best use of Starter Configuration will take place. This is one of the key activities in the mobilize stage of the cloud journey. First, Oracle recommends using Starter Configuration to prepare the customer stakeholders for the project. Customer stakeholders who participate in the project should go to the Oracle Modern Best Practice section of the Success Navigator platform in order to learn more about the modern best practices, business processes, personas, leading practices, and use cases. The project team can request Starter Configuration early in the project to allow customer stakeholders to get hands-on experience with performing use cases in the Starter Configuration. 
Customer stakeholders will perform use cases in Starter Config to learn more about modern best practices. They'll use the step-by-step guides and Guided Learning to easily perform the use cases within the Starter Configuration. This is how they'll visualize use cases in Oracle Cloud Applications and get a good understanding of Oracle Modern Best Practices. Next, in the mobilize stage of the journey, the project team can use Starter Configuration to visualize the solution and make design decisions with confidence. First, by requesting Starter Configuration, implementation partners can engage stakeholders early and create the space to get familiar with Oracle Applications. This applies especially to those that may have not participated during the sales cycle. You could personalize Starter Configuration to enhance the user experience to help the customer connect to the application and, for example, change the company name, the logo, a few supplier names, customer names, employee names, etc. And implementation partners are going to be able to run sessions to familiarize the customer with modern best practices and show how cloud applications support use cases. For structured guidance, the implementation partners can use the step guides. These include screenshots of OGL within cloud applications environments. And you could run design workshops and use Starter Configuration to show and explain which design decisions must take place to define a customer-centric configuration. Finally, you can show use cases that help you explain what the impact of design decisions might be. 11:20 Lois: Mitchell, before we wrap up, can you take us through how the Release Readiness features facilitate innovation?  Mitchell: In order to innovate with the Release Readiness features, it's important to learn about the new features in a one-stop shop, and then connect with the capability. The first item is to be able to find and familiarize yourself with the content as it exists within Release Notes. 
From there, it's important to actually experience those items by looking at the text, pictures, and Oracle University videos that we provide in the Feature Overviews, as well as additional capabilities that will be coming with the Navigator in the preview environment: your ability to get hands-on in a demo experience through Cloud Success Navigator. Furthermore, it's important for you to be able to explore across theme-based items, which we call Adoption Centers, currently available for AI and Redwood. This gives you the ability to span across Release Notes and different releases to understand the themes and trends around AI and Redwood, and how those capabilities in our technology can advance your innovation in the Cloud. And finally, you need to be able to understand those opportunities based on business processes, data insights, and industry benchmarks. That way, you can understand the capabilities as they exist, not just for your business specifically, but in the context of the broader industry and technology trends. From there, it's important for you to think about your ability to collaborate to drive continuous innovation. We want to leverage Cloud Success Navigator to drive collaboration and increase confidence across all members of the project team, whether it be you as a customer, our partners, or the Oracle team. It should also drive increased efficiency in decision making, driving greater value and readiness as you think about the proposed adoption changes. Finally, we want to think about the ability to reduce cycles related to features and decisions so that you can more quickly adapt, adjust, and consume innovations as they're produced on a quarterly basis.  13:09 Nikita: I think we can end with that. Thank you so much, Mitchell, for taking us through the Navigator platform. 
Lois: And if you liked what you heard today, head over to mylearn.oracle.com and take a look at the Oracle Cloud Success Navigator Essentials course to learn more. It’s available for free! Until next time, this is Lois Houston…. Nikita: And Nikita Abraham, signing off! 13:33 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.