Oracle University Podcast

Oracle University Podcast delivers convenient, foundational training on popular Oracle technologies such as Oracle Cloud Infrastructure, Java, Autonomous Database, and more to help you jump-start or advance your career in the cloud.

Episodes (132)

Developing Redwood Applications

Redwood is a state-of-the-art graphical interface that defines the look and feel of the new Oracle Cloud Redwood Applications.  In this episode, hosts Lois Houston and Nikita Abraham, along with Senior Principal OCI Instructor Joe Greenwald, take a closer look at the intent behind the design and development aspects of the new Redwood experience. They also explore Redwood page templates and components.  Developing Redwood Applications with Visual Builder: https://mylearn.oracle.com/ou/learning-path/developing-redwood-applications-with-visual-builder/112791 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Last week, we discussed how Visual Builder Studio can be used to extend Oracle Fusion Apps.  Lois: That’s right, Niki. In today’s episode, we will focus on Oracle’s Redwood design system and how it helps us create stunning, world-class enterprise applications and user experiences.  00:56 Nikita: Yeah, Redwood is the basis for all the new Oracle Cloud Applications being re-designed, developed, and delivered. To tell us more, we have Senior OCI Learning Solutions Architect and Principal Instructor Joe Greenwald, who’s been working with Oracle software development tools since the early 90s and is responsible for OU’s Visual Builder Studio and Redwood course content. Lois: Hi Joe! Thanks for being with us today. 01:21 Joe: Hi Lois. Hi Niki. I am excited to join you on this episode because with the release of 24A Fusion applications, we are encouraging all our customers to adopt the new Redwood design system and components, and take advantage of the world-class look and feel of the new Redwood user experience. Redwood represents a new approach and direction for us at Oracle, and we’re excited to have our customers benefit from it. 01:44 Nikita: Joe, you’ve been working with Oracle user interface development tools and frameworks for a long time. How and why is Redwood different? Joe: I joined Oracle in 1992, and the first Oracle user interface I experienced was Oracle Forms. And that was the character mode. I came from a background of Smalltalk and its amazing, pioneering graphical user interface (GUI) design capabilities. I worked at Apple and I developed my own GUIs for a few years on PCs and Macs. So, Character Mode Forms, what we used to call DMV (Department of Motor Vehicles) screens, was a shock, to say the least. 
Since then, I’ve worked with almost every user interface and development platform Oracle has created: Character Mode Forms, GUI Forms, Power Objects, HyperCard on the Macintosh, that was pre-OS X by the way, Sedona, written in native C++ and ActiveX and OLE, which didn’t make it to a product but appeared in other things later, ADF Faces, which uses Java to generate HTML pages, and APEX, which uses PL/SQL to generate HTML pages. And I’ve worked with, and written training classes for, Java Swing, an excellent GUI framework for event-driven desktop and enterprise applications, but it wasn’t designed for the web. So, it’s with pleasure that I introduce you to the Redwood design system, easily the best effort I’ve ever seen, from its holistic, user-goal-centered design philosophy and approach to its cutting-edge WYSIWYG design tools. 03:11 Lois: Joe, is Redwood just another set of styles, colors, and fonts, albeit very nice-looking ones? Joe: The Redwood platform is new for Oracle, and it represents a significant change, not just in the look and feel, colors, fonts, and styles, I mean that too, but it’s also a fundamental change in how Oracle is creating, designing, and imagining user interfaces. As you may be aware, all Oracle Cloud Applications are being re-designed, re-engineered, and rebuilt from the ground up, with significant changes to both back-end and front-end architectures. The front end is being redesigned, re-developed, and re-created in pure HTML5, CSS3, and JavaScript using Visual Builder Studio and its design-time browser-based Integrated Development Environment. The back end is being re-architected, re-designed, and implemented in a modern microservice architecture for Oracle Cloud using Kubernetes and other modern technologies that improve performance and work better in the cloud than our current legacy architecture. The new Oracle Cloud Applications platform uses Redwood for its design system—its tools, its patterns, its components, and page templates. Redwood is a richer and more productive platform to create solutions while still being cost-effective for Oracle. It encourages a transformation of the fundamental user experience, emphasizing identifying, understanding, and meeting end users’ goals and how the applications are used. 04:34 Nikita: Joe, do you think Oracle's user interface has been improved with Redwood? In what ways has the UI changed? Joe: Yes, absolutely. Redwood has changed a lot of things. When I joined Oracle back in the '90s, there was effectively no user interface division or UI team. There was no user interface lab until one was started in the mid-‘90s, and then I was asked to give product usability feedback and participate in UI tests and experiments in those labs. I also helped test the products I was teaching at the time. I actually distinctly remember having to take a week to train users on Oracle’s Designer CASE tool product just to prep the participants enough to perform usability testing. I can still hear the UI lab manager shaking her head and saying any product that requires a week of training to do usability testing has usability issues! And if you’re like me and you’ve been around Oracle long enough, you know that Oracle hasn’t always been known for its user interfaces and has been known to release products that look like they were designed by two or more different companies. All that has changed with Redwood. 
With Redwood, there’s a new internal design group that oversees the design choices of all development teams that develop products. This includes a design system review and an ongoing audit process to ensure that all the products being released, whether Fusion apps or something else, all look and feel similar so it looks like it’s designed by a single company with a single thought in mind. Which it is. 06:00 Joe: There’s a deeper, consistent commitment in identifying user needs, understanding how the applications are being used, and how they meet those user needs through things like telemetry: gathering metrics from the actual components and the Redwood system itself to see how the applications are being used, what’s working well, and what isn’t.  This telemetry is available to us here at Oracle, and we use it to fine tune the applications’ usability and purpose.  06:25 Lois: That’s really interesting, Joe. So, it’s a fundamental change in the way we’re doing things. What about the GUI components themselves? Are these more sophisticated than simple GUI components like buttons and text fields? Joe: The graphical components themselves are at a much higher level, more comprehensive, and work better together. And in Redwood, everything is a component. And I’m not just talking about things like input text fields and buttons, though it applies to these more fine-grained components as well. Leveraging Oracle’s deep experience in building enterprise applications, we’ve incorporated that knowledge into creating page templates so that the structure and look and feel of the page is fixed based on our internal design standards. The developer has control over certain portions of it, but the overall look and feel of the page is controlled by Oracle. So there is consistency of look and feel within and across applications. These page templates come with predefined functionalities: headers, titles, properties, and variables to manipulate content and settings, slots for other components to hold like search fields, collections, contextual information, badges, and images, as well as primary and secondary actions, and variables for events and event handling through Visual Builder action chains, which handle the various actions and processing of the request on the page. And all these page templates and components are responsive, meaning they respond to the change in the size of the page and the orientation. So, when you move from a desktop to a handheld mobile device or a tablet, they respond appropriately and consistently to deliver a clean, easy-to-use interface and experience. 07:58 Nikita: You mentioned WYSIWYG design tools and their integration with Visual Builder Studio’s integrated development environment. How does Redwood work with Visual Builder Studio? Joe: This is easily one of my most favorite aspects about Redwood and the integration with Visual Builder Studio Designer. The components and page templates are responsive at runtime as well as responsive at design time! In over 30 years of working with Oracle software development products, this is the first development system and integrated development environment I’ve seen Oracle produce where what you see is what you get at design time. Now, with products such as Designer and JDeveloper ADF Faces and even APEX—all those page-generation types of products—you have to generate the page, deploy it, and only then can you view the final page to see whether it meets the needs of your user interface. 
For example, with Designer, there were literally hundreds of configuration parameters that you could set to control how forms and reports looked when they were generated —down to how many buttons on a row or how many rows to a page, that sort of thing, all done in text mode. Then you’d generate and run the page to see what the result was and then go back and modify things until you got what you wanted. 09:05 Joe: I remember hearing the product managers for Oracle ADF Faces being asked…well, a customer asked, “What happens if I put this component here and this component here? What will the page look like?” and they’d say, “I don’t know. Render the page and let’s see.” That’s just crazy talk. With Redwood and its integration with Visual Builder Studio Designer, what you see on the page at design time is literally what you get. And if I make the page narrower or I even convert it to a mobile display while in the Designer itself, I immediately see what the page looks like in that new mode. Everything just moves accordingly, at design time. For example, when changing to a mobile UI, everything stacks up nicely; the components adjust to the page size and change right there in the design environment. Again, I can’t emphasize enough the simple luxury of being able to see exactly what the user is going to see on my page and having the ability to change the resolution, orientation, and screen size, and it changes right there immediately in my design environment. 10:01 Lois: I’m intrigued by the idea of page templates that are managed by Oracle but still leave room for the developer to customize aspects of the look and feel and functionality. How does that work? Joe: Well, the page templates themselves represent the typical pages you would most likely use in an enterprise application. Things like a welcome page, a search page, and edit and create pages, and a couple of different ways to display summary information, including foldout pages, though this is not an exhaustive list of course. Not only do they provide a logical and complete starting point for the layout of the page itself, but they also include built-in functionality. These templates include functionality for buttons, primary and secondary actions, and areas for holding contextual information, badges, avatars, and images. And this is all built right into the page, and all of them use variables to describe the contents for the various parts, so the contents can change programmatically as the variables’ contents change, if necessary. 10:59 Do you have an idea for a new course or learning opportunity? We’d love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects. Visit mylearn.oracle.com to get started.  11:19 Nikita: Welcome back! So, Joe, let’s say I’m a developer. How do I get started working with Redwood? Joe: One of the easiest ways to do it is to use Visual Builder Studio Designer and create a new visual application. If you’re creating a standalone, bespoke custom application, you can choose a Redwood starter template, which will include all the Redwood components and page templates automatically. Or, if you’re extending and customizing an Oracle Fusion application, Redwood is already included.  Either way, when you create a new page, you have a choice of different page templates—welcome page templates, edit pages, search pages, etc. 
—and all you have to do is choose a page that you want and begin configuring it. And actually if you make a mistake, it’s easy to switch page templates. All the components, page templates, and so on have documentation right there inside Visual Builder Studio Designer, and we do recommend that you read through the documentation first to get an understanding of what the use case for that template is and how to use it. And some components are more granular, like a collection container which holds a collection of rows of a list or a table and provides capabilities like toolbars and other actions that are already built and defined. You decide what actions you want and then use predefined event listeners that are triggered when an event occurs in the application—like a button being clicked or a row being selected—which kicks off a series of actions to be performed. 12:38 Lois: That sounds easy enough if you know what you’re doing. Joe, what are some of the more common pages and what are they used for?  Joe: Redwood page templates can be broken down into categories. There are overview templates like the welcome page template, which has a nice banner, colors, and illustrations that can be used for a welcoming page—like for entering a new application or a new logical section of the application. The dashboard landing page template displays key information values and their charts and graphs, which can come from Oracle Analytics, and automatically switches the display depending on which set of data is selected.  The detail templates include a general overview, which presents read-only information related to a single record or resource. The item overview gives you a small panel to view summary information (for example, information on a customer) and in the main section, you can view details like all the orders for that customer. And you can even navigate through a set of customers, clicking arrows for next-previous navigation. And that’s all built in. There’s no programming required. The fold-out page template folds out horizontally to show you individual panels with more detail that can be displayed about the subject being retrieved as well as overflow and drill-down areas. And there’s a collection detail template that will display a list with additional details about the selected item (for example, an order and its order line items). 13:52 Joe: The smart search page does exactly what it says. It has a search component that you use to filter or search the data coming back from the REST data sources and then display the results in a list or a table. You define the filter yourself and apply it using different kinds of comparators, so you can look for strings that start with certain values or contain values, or numerical values that are equal to or less than, depending on what you’re filtering for. And then there are the transactional templates, which are meant to make changes. This includes both the simple create and edit and advanced create and edit templates.  The simple create and edit page template edits a single record or creates a single record. And the advanced page template works well if you’re working with master-detail, parent-child type relationships. Let’s say you want to view the parent and create children for it or even create a parent and the children at the same time. 
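To make the smart search filtering just described concrete, here is a minimal JavaScript sketch of the kind of filtered REST request such a page issues against a Fusion data source, wired to a button click the way an event listener kicks off an action chain. The host name, resource path, field names, and token below are illustrative placeholders, not a real endpoint; the general pattern of a q query parameter carrying comparators such as LIKE and >= is how Fusion REST APIs express server-side filters.

```javascript
// Minimal sketch: a "smart search" style filtered query against a Fusion
// REST resource. Host, resource path, field names, and token are made up.
async function searchOrders(namePrefix, minTotal) {
  const base = "https://your-fusion-host.example.com/crmRestApi/resources/latest";
  // Comparators, as described in the episode: strings that start with a
  // value (LIKE 'x%') and numbers greater than or equal to a value (>=).
  const q = `CustomerName LIKE '${namePrefix}%' AND OrderTotal >= ${minTotal}`;
  const url = `${base}/orders?q=${encodeURIComponent(q)}&limit=25&orderBy=OrderDate:desc`;

  const response = await fetch(url, {
    headers: { Authorization: "Bearer <token>" },
  });
  if (!response.ok) throw new Error(`Search failed: ${response.status}`);
  const body = await response.json();
  return body.items; // Fusion REST collections return matching rows in "items"
}

// Wired to a GUI event, loosely mirroring an event listener that starts an
// action chain: click, then call REST, store the results, update the display.
document.querySelector("#searchButton")?.addEventListener("click", async () => {
  const rows = await searchOrders("A", 1000);
  console.log(`Found ${rows.length} matching orders`);
});
```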
And there’s a Gantt chart page for project management–type tracking and a guided process page for multiple-step processes and there’s a data management page template specifically for viewing and editing data collections like Excel spreadsheets. 14:50 Nikita: You mentioned that there’s a design system behind all this. How is this used, and how does the customer benefit from it? Joe: Redwood comprises both a design system and a development system. The design system has a series of steps that we follow here at Oracle and can suggest that you, our customers and partners, can follow as well. This includes understanding the problem, articulating the vision for the page and the application (what it should do), identifying the proper Redwood page templates to use, adding detail and refining the design and then using a number of different mechanisms, including PowerPoint or Figma design tools to specify the design for development, and then monitor engagement in the real world. These are the steps that we follow here at Oracle. The Redwood development process starts with learning how to use Redwood components and templates using the documentation and other content from redwood.oracle.com and Visual Builder Studio. Then it’s about understanding the design created by the design team, learning more about components and templates for your application, specifically the ones you’re going to use, how they work, and how they work together. Then developing your application using Visual Builder Studio Designer, and finally improving and refining your application. Now, right now, as I mentioned, telemetry is available to us here at Oracle so we can get a sense of the feedback on the pages of how components are being used and where time is being spent, and we use that to tune the designs and components being used. That telemetry data may be available to customers in the future. Now, when you go to redwood.oracle.com, you can access the Redwood pattern book that shows you in detail all the different page templates that are available: smart search page, data grid, welcome page, dashboard landing page, and so on, and you can select these and read more about them as well as the actual design specifications that were used to build the pages—defining what they do and what they respond to. They provide a lot of detailed information about the templates and components, how they work and how they’re intended to be used. 16:45 Lois: That’s a lot of great resources available. But what if I don’t have access to Visual Builder Studio Designer? Can I still see how Redwood looks and behaves? Joe: Well, if you go to redwood.oracle.com, you can log in and work with the Redwood reference application, which is a live application working with live data. It was created to show off the various page templates and components, their look and feel and functionality from the Redwood design and development systems. This is an order management application, so you can do things like view filtered pending orders, create new orders, manage orders, and view information about customers and inventory. It uses the different page templates to show you how the application can perform. 17:23 Nikita: I assume there are common aspects to how these page templates are designed, built, and intended to be used. Is that a good way to begin understanding how to work with them? Becoming familiar with their common properties and functionality? Joe: Absolutely! Good point! 
All pages have titles, and most have primary and secondary actions that can be triggered through a variety of GUI events, like clicking a button or a link or selecting something in a list or a table. The transactional page templates include validation groups that validate whether the data is correct before it is submitted, as well as a message dialog that can pop up if there are unsaved changes and someone tries to leave the page. All the pages can use variables to display information or set properties and can easily display specific contextual information about records that have been retrieved, like adding the Order Number or Customer Name and Number to the page title or section headers. 18:13 Lois: If I were a developer, I’d be really excited to get started! So, let’s say I’m a developer. What’s the best way to begin learning about Redwood, Joe? Joe: A great place to start learning about the Redwood design and development system is at the redwood.oracle.com page I mentioned. We have many different pages that describe the philosophy and fundamental basis for Redwood, the ideas and intent behind it, and how we’re using it here at Oracle. It also has a list of all the different page templates and components you can use and a link to the Redwood reference application where you can sign in and try it yourself. In addition, we at Oracle University offer a course called Design and Develop Redwood Applications, and in there, we have both lecture content as well as hands-on practices where you build a lightweight version of the Redwood reference application using data from the Fusion apps application, as well as the pages that I talked about: the welcome page, detail pages, transactional pages, and the dashboard landing page.  And you’ll see how those pages are designed and constructed while building them yourself.  It’s very important though to take one of the free Visual Builder Developer courses first: either Build Visual Applications Using Visual Builder Studio and/or Develop Fusion Applications Using Visual Builder Studio before you try to work through the practices in the Redwood course because it uses a lot of Visual Builder Designer technology.  You’ll get a lot more out of the Redwood practices if you understand the basics of Visual Builder Studio first. The Build Visual Applications Using Visual Builder Studio course is probably a better place to start unless you know for a fact you will be focusing on extending Oracle Fusion Applications using Visual Builder Studio. Now, a lot of the content is the same between the two courses as they share much of the same technology and architectures. 19:53 Lois: Ok, so Build Visual Applications Using Visual Builder Studio and Develop Fusion Applications Using Visual Builder Studio…all on mylearn.oracle.com and all free for anyone who wants to take them, right? Joe: Yes, exactly. And the free Redwood learning path leads to an Associate certification. While our courses are a great place to start in preparing for your certification exam, they are not, of course, by themselves sufficient to pass and you will want to study and be familiar with the redwood.oracle.com content as well. The learning path is free, but you do have to pay for the certification exam. 20:29 Nikita: Thanks for those tips, Joe, and we appreciate you joining us today. Joe: Thanks for having me! Lois: Join us next week when we’ll discuss how model-based development is still alive and well today. Until next time, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 
20:45 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

2 April 2024 · 21 min

Preparing to Extend Oracle Fusion Apps Using Visual Builder Studio

What do you need to start customizing the next generation of Oracle Fusion Apps? How do you create new pages for business processes? What level of expertise do you require for this? Join Lois Houston and Nikita Abraham as they get answers to all these questions and more from Senior Principal OCI Instructor Joe Greenwald. Develop Fusion Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/develop-fusion-applications-using-visual-builder-studio/122614/ Build Visual Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/build-visual-applications-using-oracle-visual-builder-studio/110035/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this  series of informative podcasts, we’ll bring you foundational training on the most popular  Oracle technologies. Let’s get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Last week, we were introduced to Visual Builder Studio and the Oracle JavaScript Extension Toolkit, also known as JET. Lois: Our friend and Senior Principal OCI Instructor Joe Greenwald is back with us today to talk about how to extend Oracle Cloud Applications that are being built using Visual Builder for its front-end. Nikita: That’s right. All Fusion Applications are being redesigned and rebuilt using Visual Builder. And we’ll find out more about that from Joe. Hi Joe! Thanks for being with us today. Joe: Hi Lois! Hi Niki! My pleasure to be here. 01:09 Nikita: Joe, tell us a little about what’s happening with the redesign and re-architecture of Oracle Cloud Applications using Visual Builder Studio, or VBS. I hear some very exciting changes are coming that are important for our customers and partners. Joe: That’s right, Niki. Oracle is redesigning and rebuilding its entire suite of Fusion Cloud Applications, over 330 different products, utilizing over 60,000 engineers — that is “60,” not “16” — at Oracle to develop the next generation of Oracle Fusion Applications. What’s most exciting is that the same tools the engineers are using to accomplish this are available to our partners and our customers to use to extend the functionality and capabilities of Fusion Applications to meet their custom needs and processes.  01:54 Lois: That’s pretty awesome! We want to use this time today to ask you about extensions, the types of extensions you can create, and how to use Visual Builder Studio to create those extensions. Nikita: Yeah, can we start with you telling us what an extension is? I’ve gotten the sense that Oracle uses the term extension as both a noun and a verb and that’s a bit confusing to me. 02:15 Joe: Yeah, good catch, Niki. Yes, Oracle does use the term extension in two ways: both as a noun and a verb. As a noun, an extension is a container for the code changes that you make to your applications. Basically, it’s a Git repository that Oracle creates and manages for you. 
So, the extension container holds the code changes you make to your page layouts: the fields, their positioning, showing and hiding fields, that sort of thing, as well as page functionality. These code changes are stored in the extension, and it is this extension, with your code changes, that is eventually merged with the main Git branch and then deployed using continuous integration/continuous deployment jobs defined in Visual Builder Studio, which manages the project and its assets. Your extension is a Git branch that is an asset of the project. Once your extension code is merged with the main branch and deployed, then the next time someone brings up the application, they’ll see the changes you’ve made in the app. 03:08 Lois: And as a verb? Joe: As a verb, extension means to extend the functionality and the look and feel of the application, though I prefer the term customization or configuration to describe this aspect, as the documentation does, and to avoid confusion, though I’ll admit I’m not always consistent about the terms I use. 03:26 Lois: What types of customizations, or extensions, and I’m using the verb now, are available for Fusion Apps in Visual Builder Studio? Joe: There are three different ways Fusion Apps can be customized (or, effectively, configured or extended). The first way is what we call a basic extension, where you’re rearranging, hiding, showing, or moving around fields and sections on the page that have been set up to be extendable by the Fusion Application development teams. Things like hiding fields, showing fields, hiding sections, showing sections… Nikita: So fairly basic actions… Joe: Yeah, exactly, and they can be done in Visual Builder Studio Designer by people with minimal Visual Builder training. And, most recently, if you have access to it, you can do it in the new Express mode, where the page shows you just those things you can work with and just the tools you need to work with the page. This is new and makes it much easier for folks who are not highly technical to make basic changes to the page layout. 04:18 Lois: People like me! That sounds easy enough. Joe: And the next type of extension is more of an intermediate change and requires some training with Visual Builder Studio because you’re creating rules that govern the display of layouts based on certain conditions on the page. These are highly flexible, powerful, and useful for creating customized page layouts based on a variety of factors, from page size and orientation to the role of the person using it to values in the actual fields on the page itself. These rules can be combined to create complex rule-based conditions that display exactly what the user should see, given the conditions of the page and their role. I would also include making changes to action chains, which execute sequences of behaviors and navigation, and the actual structure of the application, but this is more advanced. Lastly, there’s creating mashup applications, which are stand-alone Visual Builder visual applications that use data from Fusion apps, customer data sources like their own database tables, and potentially third-party APIs to create brand-new pages and applications with new functionality, new processes, new procedures, and new displays, all of which look just like Fusion Applications and use the same data as Fusion applications. 05:27 Lois: Joe, how do I get started if I want to extend a page?  
Joe: The easiest way to do it is to open a page in Fusion Applications and then select Edit Page in Visual Builder Studio from the Profile menu. You’re then prompted for a project to hold the Git repository for the extension container. And since there’s probably already one that exists, after you select the project, an extension Git container is assigned to you. Unless this is the very first time the application has been extended in which case it creates an extension for you. When creating customizations or configurations, we recommend that each application be done in its own separate project. So, for example, if you’re working on Customer Experience Sales, you might do it in Project A and if you’re working on extensions with HCM, you might do it in Project B. And if you decide to create your own pages and flows in your own app, you might do that in Project C.  06:13 Nikita: But why do you need to do this? Joe: That’s just to keep things nice and separate and organized. The tool, Visual Builder Studio, doesn’t really care, but it makes for cleaner development and can help with the management of the development teams. 06:23 Nikita: Ok, Joe, I have a question. How do I know if the page I’m on in Fusion Apps can be edited in Visual Builder? I know there are a lot of legacy pages still out there and they can co-exist with the new VB-based pages. Joe: If the URL of the page you’re on has the word /Redwood in it instead of /faces, then you know this is a page that was created using Visual Builder Studio and you’ll be able to extend it and make changes to it using the Edit in Visual Builder Studio option. So, if you select Edit in Visual Builder Studio, then the page you are on opens inside Visual Builder Studio Designer and you can make changes to any part of the page that has been explicitly enabled for extension by the development team. 07:02 Lois: That’s an important part, right? The application is not extendable by default.  Joe: That’s right, Lois. It is all locked down and you can’t make any changes to it by default. The development team must specifically enable certain parts of the page: sections, fields, layouts, variables, types, action chains, etc. as extendable for you to be able to make changes to it. This ensures the changes the development team makes to the application in the future won’t break your extensions. And conversely, the development team can choose to not extend portions that they do not want you to touch or mess with. Then if they do change that bit of the app in the future, it won’t break the application and you won’t get a big surprise. So, using the Edit page in Visual Builder Studio, you can make both basic changes, like moving, showing, and hiding fields and sections, as well as the more intermediate types of configurations, like using dynamic components to create rule-based layouts that change dynamically based on several conditions such as page size, roles of the user, and field values on the page itself. 08:00 Nikita: What happens if two developers make changes and essentially overwrite each other’s customizations — say one hides a field and another later exposes it? Joe: Well, whoever commits their changes and deploys last wins. The other developer’s changes get overwritten. So, this is something the team would want to consider carefully. It is possible to roll back to an earlier version if one must. And this can be done in Visual Builder Studio — the part that manages project assets like Git repositories. 
And there are Oracle blog posts about how to do that if you’re interested in learning more. 08:29 Lois: Joe, earlier you mentioned creating new pages and flows, but so far you’ve only talked about modifying existing extendable pages. How do I create new pages and flows? Joe: In a Visual Builder extension, a set of pages and flows is called an App UI. When I use the terms pages and flows, what I’m talking about is a set of pages that are logically related—whatever logical means to the designer and developer—in a group called a flow that you can navigate between. But you can also navigate between flows and even between applications. So, without getting too technical, each application has a default flow, which has a default page where that flow starts when the app first comes up. So, you can think of an App UI as a collection of flows and their pages, and a URL that accesses the default flow and its default page. That’s the page you would see first when accessing that URL. Of course, this can be configured and changed by the developer, as needed. Now, when Oracle creates the original application (for example, digital sales, helpdesk, or something like that), we create an App UI, which contains the pages and flows for that application and is the “entry point” into the app, accessing that App UI’s default flow and its default page and then things flow on from there. 09:40 Joe: Partners and customers can create their own application extensions that are dependent on an Oracle application and even create their own App UI – their own sets of pages and flows to accommodate their own processing and workflow needs. This gives them the ability to add their own processes and rules, and still leverage and navigate to the core application that Oracle built. For example, say Oracle delivered digital sales as an Oracle Cloud Application built using Visual Builder to a customer and the customer needs to add a few pages to do some validation or other type of business processing before entering the digital sales application. What the customer does, in this case, is create a new extension of the Oracle Digital Sales app and an App UI of their own, which would be the set of pages and flows that contain the processing they want to start with before then navigating into the digital sales app to use Oracle’s application. 10:31 Nikita: Wait, did I hear that correctly? We’re creating an extension of an extension or creating an extension on an existing extension? Joe: I know, right? I realize this can sound confusing the first time you hear it or the second time or even the third time. It took me a while to get my head around what they’re talking about. Let’s start with a Fusion application. In a Fusion application, everything is an extension of something. This is just how the code base and the architecture are organized and how they manage the Git repositories and the code base itself. So, Oracle created a base application called the Unified App. The Unified Application contains the basic page structure and common functionality needed for all applications. For example, it contains the header at the top that has the profile and the footer at the bottom of the page that has that little Ask Oracle icon. 11:16 Joe: Within that page, between the header and the footer, are the pages that are created by the developers, whether they be Oracle engineers or partners or customers. They display the contents of the page with the data and the layouts and all of that. 
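A toy model can make the App UI idea above easier to hold onto. The sketch below is plain, runnable JavaScript and is purely conceptual, not Visual Builder's actual metadata format: it treats an App UI as a collection of flows, each flow as a set of pages, and shows how the App UI's URL resolves to the default flow's default page. All names are invented for illustration.

```javascript
// Conceptual model only: an App UI is a collection of flows and their pages,
// plus a default flow whose default page is what the URL lands on first.
const appUI = {
  id: "time-and-expense", // a hypothetical custom App UI
  defaultFlow: "expenses",
  flows: {
    expenses: { defaultPage: "expense-list", pages: ["expense-list", "expense-edit"] },
    approvals: { defaultPage: "approval-queue", pages: ["approval-queue"] },
  },
};

// Resolve what a user sees first when they access the App UI's URL,
// optionally entering a specific flow instead of the default one.
function entryPage(app, flowId = app.defaultFlow) {
  const flow = app.flows[flowId];
  if (!flow) throw new Error(`Unknown flow: ${flowId}`);
  return { flow: flowId, page: flow.defaultPage };
}

console.log(entryPage(appUI));              // { flow: "expenses", page: "expense-list" }
console.log(entryPage(appUI, "approvals")); // { flow: "approvals", page: "approval-queue" }
```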
In a sense, you can think of the Unified App as an index page, the starting page of the web application. Though that’s not completely true technically, it’s good enough for illustrative purposes. So, Oracle starts with the Unified App and then a development team extends that Unified App to build their product. This is how digital sales did it. This is how customer experience did it. This is how helpdesk did it. They start with the Unified App and they extend that and create an App UI that contains the flows and pages for their specific application, and then add functionality for all the pages and flows, as needed for the design. Partners and customers can then create a new extension that extends the Oracle Application and add their own App UI and their own URL if they want their pages accessed first, before navigating to the Oracle application. For example, if the digital sales application has functionality you’d like to leverage, like it has data services or fragments or page layouts that you want to reuse or other things, you extend the digital sales application, and this extension holds your code changes. You could then create a new App UI, and once deployed, users can use that URL for the new App UI to access your new pages. And your page can then navigate to the Oracle app when it needs to. Though I will say to date, we’re really not seeing much demand for this particular use case, but it is possible. 12:42 Lois: Is that the only option available to customers and partners—to extend an existing Oracle application? Joe: No, Lois. We’re seeing customers and partners create brand new Fusion applications of their own, based on the Unified App Oracle created. In a sense, doing the same thing that our development teams here are doing.  Remember, I said an Oracle development team starts with the Unified App, which has common functionality and look and feel for all applications, and then extends that to add business rules processing, flows, App UI, whatever they need for their specific Oracle application. We’re seeing our partners and customers wanting to build their own applications. Maybe a customer or partner wants to create a Time & Expense application and leverage the Fusion application data and the APIs available, but define their own flows, their own pages, their own processing. This is very easy to do. They’d start by extending the Unified App just like the Oracle development teams do, and then build their own App UI and within that, their own flows, pages, and custom processing. The nice thing about it is that the application looks and works and feels just like a Fusion application and it appears alongside other Fusion applications, because it is a Fusion application. 13:52 Did you know that the Oracle University Learning Community regularly holds live events hosted by Oracle expert instructors. Find out how to prepare for your certification exams. Learn about the latest technology advances and features. Ask questions in real time and learn from an Oracle subject matter expert. From Ask Me Anything about certification to Ask the Instructor coaching sessions, you’ll be able to achieve your learning goals for 2024 in no time. Join a live event today and witness firsthand the transformative power of the Oracle University Learning Community. Visit mylearn.oracle.com to get started.  14:33 Nikita: Welcome back! So Joe, it sounds like there are two different paths or life cycles to create extensions for future applications in Visual Builder Studio. Is that correct? Joe: Yes, exactly. 
So one path to extending the functionality of Fusion apps is to edit the page in Visual Builder Studio, which opens the page in Visual Builder Designer, and you then make changes to the existing pages, depending on what the development team has made extendable.  14:58 Nikita: But you can’t create new pages and flows in this scenario, right? Joe: This is strictly about modifying an existing page. The other path is creating a new application extension, which is a new application from scratch or extending an existing Oracle application or even an existing partner or customer application. Again, we’re not seeing this typically being done too much. Most partners and customers create new applications or make customizations to existing pages. But the architecture does support it. So, your partner might create a new application based on the production app released by Oracle, and you could extend their application. Or a development team at your site could extend Oracle’s application and you could then extend that team’s application. This is mechanically possible, although I question the use case behind that. Usually, we see our apps being extended – becoming a dependency when there’s code that can be leveraged or reused for a new app and its new App UI. 15:49 Lois: Joe, what did you mean when you say one extension is a dependency of another? Can you talk a bit about dependencies, what that means, how it looks to the developer? Joe: When you extend an application, it becomes a dependency to your application, and you get access to all the resources within that dependency that are marked as extendable by the developer who created that extension. Most useful are things like service connections to REST APIs from Fusion apps data sources, reusable code fragments, and layouts that you can leverage in those cases where you want to create a new App UI. When an extension is listed as a dependency, you’ll see this graphically in Visual Builder Studio Designer. When you see an extension listed as a dependency, it means you can reference any of that extension’s resources that have been marked extendable by the developer. Recall all resources are closed off or hidden by default, but development teams can mark resources as open to being extended and reused, and then you can see and use those resources. So, you can easily add and remove extensions as dependencies in Visual Builder Designer as needed. Now, this can be a nice way to modularize and reuse your resources and assets. To summarize: I can modify an existing page – this is most common, extend an existing application and create a new App UI – which is not common, or I can extend the unified app to create a new app and a new App UI and add other extensions as dependencies, as needed, to leverage their services, fragments, and layouts when building my own pages – this is pretty common as well. 17:14 Nikita: There’s one thing I’d like to come back to, Joe. You mentioned something called a mashup application earlier. Can you tell us a little more about that? Joe: To recap: I mentioned a couple of different ways that you can extend Fusion applications. One is changing layouts or creating rule-based layouts. You can also extend existing apps and create your own App UI on top of them or create your own Fusion app from scratch. But these are Fusion apps and they have restrictions.  
These can only run within the Fusion applications ecosystem, which means they can only be accessed by people who are registered in the Fusion application ecosystem, and there are some other restrictions (for example, in terms of the APIs you can access). And you also have no access to customer data tables.   Mashup applications use the stand-alone Visual Builder Cloud Service, which enables you to create custom visual applications. These are visual applications that run outside the Fusion apps ecosystem. Users only need to be identified to the Identity Cloud Service, IDCS, and then they can get access to these mashup apps, depending on the roles and privileges given to them, of course. These mashup applications can access Fusion apps API data, as well as customer database tables, Excel spreadsheet data, CSV files, and third-party APIs. And all this data can appear on the same page, in the same app, using the same Redwood components, so they look and work just like Fusion applications. 18:32 Lois: I know in the past there’s been some friction to making changes in Fusion applications. Partner and customer developers use different tools than the ones Oracle engineers use and there have been some deployment issues. To wrap up things, can you tell us why customers should use Visual Builder Studio to customize Fusion apps? Joe: Glad to, Lois. The big benefit to customers is that they are using the exact same tools, Visual Builder Designer for page design work and Visual Builder Studio for project and code management, to build the customizations and extensions that Oracle is using to create the applications and extensions that are delivered to them. I can’t emphasize enough how big a deal this is and how wonderful it is for the customer. We’re constantly making the Visual Builder Designer interface easier and easier to work with. We’re currently releasing a new version of Visual Builder Designer—the Express mode version. This version of Designer is lightweight and has only the necessary features required to allow you to make changes to pages and layouts, and create and manage dynamic rule-based layouts. If you need more (for example, you need to create service connections, fragments, and do a lot more of that type of advanced work), then use the advanced version of the Designer. Both are available to you, assuming that your user has the appropriate permission and the Fusion app you are using has implemented Express Designer. 19:46 Lois: OK Joe, what courses does Oracle University offer for me if I wanted to learn more about developing extensions for Fusion apps and creating mashup apps using Visual Builder Studio? Joe: Oracle University has several courses. We have the Develop Visual Applications Using Visual Builder Studio, which focuses on creating the stand-alone custom bespoke mashup visual applications. We also have our Design and Develop Redwood Applications course, which goes into detail about working with the Redwood page templates and components. All these courses are free and available today. And all you need to do is log in to mylearn.oracle.com to get started. 20:19 Nikita: Thank you so much, Joe, for joining us today. This has been so educational. Joe: It’s been lovely talking to you both. Thank you. Lois: Yeah, my brain is full. Thanks Joe. Until next week, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 20:32 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. 
We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

26 March 2024 · 21 min

Introduction to Visual Builder Studio, Visual Builder Cloud Service, Stand-Alone, and JET

The next generation of front-end user interfaces for Oracle Fusion Applications is being built using Visual Builder Studio and Oracle JavaScript Extension Toolkit. However, many of the terms associated with these tools can be confusing. In this episode, Lois Houston and Nikita Abraham are joined by Senior Principal OCI Instructor Joe Greenwald. Together, they take you through the different terminologies, how they relate to each other, and how they can be used to deliver the new Oracle Fusion Applications as well as stand-alone, bespoke visual web applications. Develop Fusion Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/develop-fusion-applications-using-visual-builder-studio/122614/ Build Visual Applications Using Visual Builder Studio: https://mylearn.oracle.com/ou/course/build-visual-applications-using-oracle-visual-builder-studio/110035/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. --------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this  series of informative podcasts, we’ll bring you foundational training on the most popular  Oracle technologies. Let’s get started. 00:26 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Today, we’re starting a new season on building the next generation of Oracle Cloud Apps with Visual Builder Studio. 00:45 Lois: And I’m so excited that we have someone really special to take us through the next few episodes. Joe Greenwald is joining us. Joe is a Senior Principal OCI Instructor with Oracle University. He joined Oracle in 1992 with an extensive background in CASE tools. Since then, he has used and taught all of Oracle's software development tools, including Oracle Forms, APEX, JDeveloper ADF, as well as all the Fusion Middleware courses. Currently, Joe is responsible for the Visual Builder Studio and Redwood development courses, including extending Fusion Applications with Visual Builder. 01:22 Nikita: In today’s episode, we’re going to ask Joe about Visual Builder Studio and Oracle JavaScript Extension Toolkit, also known as JET. Together, they form the basis of the technology for the next generation of front-end user interfaces for Oracle Fusion Applications, as well as many other Oracle applications, including most Oracle Cloud Infrastructure (OCI) interfaces.  Lois: We’ll look at the different terminologies and technologies, how they relate to each other, and how they deliver the new Oracle Fusion applications and stand-alone, bespoke visual web applications. Hi Joe! Thanks for being with us today. 01:57 Joe: Hi Lois! Hi Niki! I’m glad to be here. Nikita: Joe, I’m somewhat thrown by the terminology around Visual Builder, Visual Studio, and JET. Can you help streamline that for us?  Lois: Yeah, things that are named the same sometimes refer to different things, and sometimes things with a different name refer to the same thing.  02:15 Joe: Yeah, I know where you’re coming from. So, let’s start with Visual Builder Studio. It’s abbreviated as VBS and can go by a number of different names. 
Some of the most well-known ones are Visual Builder Studio, VBS, Visual Builder, Visual Builder Stand-Alone, and Visual Builder Cloud Service. Clearly, this can be very confusing. For the purposes of these episodes as well as the training courses I create, I use certain definitions.  02:39 Lois: Can you take us through those? Joe: Absolutely, Lois. Visual Builder Studio refers to a product that comes free with an OCI account and allows you to manage your project-related assets. This includes the project itself, which is a container for all of its assets. You can assign teams to your projects, as well as secure the project and declare roles for the different team members. You manage GIT repositories with full graphical and command-line GIT support, define package, build, and deploy jobs, and create and run continuous integration/continuous deployment graphical and code-managed pipelines for your applications. These can be visual applications, created using the Visual Builder Integrated Development Environment, the IDE, or non-visual apps, such as Java microservices, docker builds, NPM apps, and things like that. And you can define environments, which determine where your build jobs can be deployed. You can also define issues, which allow you to identify, track, and manage things like bugs, defects, and enhancements. And these can be tracked in code review merge requests and build jobs, and be mapped to agile sprints and scrum boards. There’s also support for wikis for team collaboration, code snippets, and the management of the repository and the project itself. So, VBS supports code reviews before code is merged into GIT branches for package, build, and deploy jobs using merge requests.  03:57 Nikita: OK, what exactly do you mean by that? Joe: Great. So, for example, you could have developers working in one GIT branch and when they’re done, they would push their private code changes into that remote branch. Then, they’d submit a merge request and their changes would be reviewed.  Once the changes are approved, their code branch is merged into the main branch and then automatically runs a CI/CD package (continuous integration/continuous deployment) package, build, and deploy job on the code. Also, the CI/CD package, build, and deploy jobs can run against any branches, not just the main branch. So Visual Builder Studio is intended for managing the project and all of its assets. 04:37 Lois: So Joe, what are the different tools used in developing web applications? Joe: Well, Visual Builder, Visual Builder Studio Designer, Visual Builder Designer, Visual Builder Design-Time, Visual Builder Cloud Service, Visual Builder Stand-Alone all kind of get lumped together. You can kinda see why. What I’m referring to here are the tools that we use to build a visual web application composed of HTML5, CSS3, JavaScript, and JSON (JavaScript Object Notation) for metadata. I call this Visual Builder Designer. This is an Integrated Development Environment, it’s the “IDE” which runs in your browser. You use a combination of drag and drop, setting properties, and writing and modifying custom and generated code to develop your web applications. You work within a workspace, which is your own private copy of a remote Git branch. When you’re ready to start development work, you open an existing workspace or create a new one based on a clone of the remote branch you want to work on. Typically, a new branch would be created for the development work or you would join an existing branch. 
05:35 Nikita: What’s a workspace, Joe? Is it like my personal laptop and drive?  Joe: A workspace is your own private code area that stores any changes you make on the Oracle servers, so your code changes are never lost—even when working in a browser-based, network-based tool. A good analogy is, say I was working at home on my own machine. And I would make a copy of a remote GIT branch and then copy that code down to my local machine, make my code changes, do my testing, etc. and then commit my work—create a logical save point periodically—and then when I’m ready, I’d push that code up into the remote branch so it can be reviewed and merged with the main branch. My local machine is my workspace. However, since this code is hosted up by Oracle on our servers, and the code and the IDE are all running in your browser, the workspace is a simulation of a local work area on your own computer. So, the workspace is a hosted allocation of resources for you that’s private. Other people can’t see what’s going on in your workspace. Your workspace has a clone of the remote branch that you’re working with and the changes you make are isolated to your cloned code in your workspace. 06:38 Lois: Ok… the code is actually hosted on the server, so each time you make a change in the browser, the change is written back to the server? Is it possible that you might lose your edits if there’s a networking interruption? Joe: I want to emphasize that while I started out not personally being a fan of web-based integrated development environments, I have been using these tools for over three years and in all that time, while I have lost a connection at times—networks are still subject to interruptions—I’ve never lost any changes that I’ve made. Ever. 07:08 Nikita: Is there a way to save where you are in your work so that you could go back to it later if you need to? Joe: Yes, Niki, you’re asking about commits and savepoints, like in a Git repository or a Git branch. When you reach a logical stopping or development point in your work, you would create a commit or a savepoint. And when you’re ready, you would push that committed code in your workspace up to the remote branch where it can be reviewed and then eventually merged, usually with the main Git branch, and then continuous integration/continuous package and deployment build jobs are run. Now, I’m only giving you a high-level overview, but we cover all this and much more in detail with hands-on practices in our Visual Builder developer courses. Right now, I’m just trying to give you a sense of how these different tools are used. 07:49 Lois: Yes, that makes sense, Joe. It’s a lot to cover in a short amount of time. Now, we’ve discussed the Visual Builder Designer IDE and workspace. But can you tell us more about Visual Builder Cloud Service and stand-alone environments? What are they used for? What features do they provide? Are they the same or different things? Joe: Visual Builder Cloud Service or Visual Builder Stand-Alone, as it’s sometimes called, is a service that Oracle hosts on its servers. It provides hosting for the deployed web application source code as well as database tables for business objects that we build and maintain to store your customer data. This data can come from XLS or CSV files, or even your own Oracle database customer table data.  A custom REST proxy makes calls to external third-party REST services on your behalf and supports several popular authentication mechanisms. 
There is also integration with the Identity Cloud Service (IDCS) to manage users and their access to your web apps. 08:47 Joe: Visual Builder Cloud Service is a for-fee product. You pay licensing fees based on how much you use it because it’s a hosted service. Visual Builder Studio, the project asset management aspect I discussed earlier, is free with a standard OCI license. Now, keep in mind these are separate from something like Visual Builder Design Time and the service that’s running in Fusion application environments. What I’m talking about now is creating standalone, bespoke, custom visual applications. These are applications that are built using industry-standard HTML5, CSS3, JavaScript, and JSON for metadata and are hosted on the Oracle servers. 09:27 Are you looking for practical use cases to help you plan and apply configurations that solve real-world challenges?  With the new Applied Learning courses for Cloud Applications, you’ll be able to practically apply the concepts learned in our implementation courses and work through case studies featuring key decisions and configurations encountered during a typical Oracle Cloud Applications implementation. Applied learning scenarios are currently available for General Ledger, Payables, Receivables, Accounting Hub, Global Human Resources, Talent Management, Inventory, and Procurement, with many more to come!  Visit mylearn.oracle.com to get started. 10:09 Nikita: Welcome back! Joe, you said Visual Builder Cloud Service or Stand-Alone is a for-fee service. Is there a way I can learn about using Visual Builder Designer to build bespoke visual applications without a fee? Joe: Yes. Actually, we’ve added an option where you can run the Visual Builder Designer and learn how to create web apps without using the app hosting, or the business object database that stores your customer data, or the REST proxy for authentication, or the Identity Cloud Service. So you don’t get those features, but you can still learn the fundamentals of developing with Visual Builder Designer. You can call third-party APIs, and you can download the source and run it locally, for example, in a Tomcat server. This is a great and free way to learn how to develop with the Visual Builder Designer. 10:52 Lois: Joe, I want to know more about the kinds of apps you can build in VB Designer and the capabilities that VB Cloud Service provides. Joe: Visual Builder Designer allows you to build custom, bespoke web applications made of interactive webpages; flows of pages for navigation; events that respond when things happen in the app, for example, GUI events like a button being clicked or values being entered into a text field; variables to store state; and the ability to make REST calls, all from your browser. These applications have full access to the Oracle Fusion Applications APIs, given that you have the right security permissions and credentials, of course. They can access your customer business data as business objects in our internally hosted database tables or your own customer database tables. They can access third-party APIs, and all these different data sources can appear in the same visual application, on the same page, at the same time. They use the Identity Cloud Service to identify which users can log in and authenticate against the application. And they all use the new Redwood graphical user interface components and page templates, so they have the same look and feel as all Oracle applications. 11:59 Nikita: But what if you’re building or extending Oracle Fusion Applications? 
Don’t things change a little bit? Joe: Good point, Niki. Yes. You still work within Visual Builder Studio, so that doesn’t change. VBS maintains your project and all your project-related assets; that is still the same. However, in this case, there is no separate hosted Visual Builder Cloud Service or Stand-Alone instance. In this case, Visual Builder is hosted inside of Fusion apps itself as part of the installation. I won’t go into the details of how the architecture works, but the Visual Builder instance that you’re running your code against is part of Fusion applications and is included in the architecture as well as the billing. All your code changes are maintained and stored within a single container called an extension. And this extension is a Git repository that is created for you, or you can create it yourself, depending on how you choose to work within Visual Builder Studio. You create an extension to hold the source code changes that provide a customization or configuration. This means making a change to an existing page or a set of pages, or even adding new pages and flows to your Oracle Fusion Applications. You use Visual Builder Studio and Visual Builder Designer in much the same way as you would use them for bespoke, stand-alone visual applications.  13:10 Lois: I’m trying to envision how this workflow is used. How is it different from bespoke VB app development? Or is it different at all? Joe: So, recall that the Visual Builder Designer is effectively the Integrated Development Environment, the IDE, where you make your code changes by working with both the raw HTML5, CSS3, and JavaScript code, if need be, or the Page Designer for drag and drop and setting properties, and then Live mode to test your work. You use a version of VB Designer to view and modify your customizations, and the code is stored in a Git repository called an extension. So, in that sense, the work of developing pages and flows and such is the same.  You still start by creating or, more typically, joining a project and then either create a new extension from scratch or base it on an existing application, or go directly to the page that you want to edit and, on that page, select from your profile menu to edit in Visual Builder Studio. Now, this is a different lifecycle path from bespoke visual applications. With them, you’re not extending an app or modifying individual pages in the same way. 14:11 Joe: You get a choice of which project you want to add your extension to when you’re working with Fusion apps, and potentially which repository will store your customizations, unless one already exists, in which case it’s assigned automatically to hold your code changes. So you make your changes and edits to the portions of the application that have been opened for extensibility by the development team. This is another difference. Once you make your code changes, the workflow is pretty much the same as for a bespoke visual application: do your development work, commit your changes, push your changes to the remote branch. And then typically, your code is reviewed, and if the code passes and is approved, it’s merged with the main branch. Then, the package and deploy jobs run to deploy the main code to the production environment or whatever environment you’re targeting. And once the package and deploy jobs complete, the code base is updated and users who log in see the changes that you’ve made. 
15:00 Nikita: You mentioned creating apps that combine data from Fusion Cloud Applications, customer data, and third-party APIs into one page. Why is it necessary? Why can’t you just do all that in one Fusion Applications extension? Joe: When you create extensions, you are working within the Oracle Fusion Applications ecosystem, that’s what they actually call it, which includes a defined set of users who are, therefore, known to Fusion Applications. So, if you’re a user and you’re not part of that Fusion Apps ecosystem, you can’t access the pages. Period. That’s how Fusion Apps works to maintain its security and integrity. Secondly, you’re working pretty much solely with the Fusion Applications APIs, data sources coming directly from Fusion Applications, which are also available to you when you’re creating bespoke visual apps. When you’re working with Fusion Applications in Visual Builder, you don’t have access to the business objects that give you access to your own customer database data through Visual Builder-generated REST APIs. Business objects are available only to bespoke visual applications in the hosted VB Cloud Service instance.  So, your data sources are restricted to the Oracle Fusion Applications APIs and some third-party APIs that work within a narrow set of authentication mechanisms currently, although there are plans to expand this in the future. Mashup apps, which allow you to access all these data sources while creating apps that leverage the Redwood component system so they look and work like Fusion Apps, are a highly popular option for our partners and customers. 16:25 Lois: So, to review, we have two different approaches. You can create a visual application using the for-fee, hosted Visual Builder Cloud Service/Stand-Alone or the one that comes with Oracle Integration Cloud, or you can use the extension architecture for Fusion applications, where you use the designer and create your extensions, and the code is delivered and deployed to the Fusion Applications code base.  You haven’t talked about JET yet though, Joe. What is that? Joe: So, JET is an abbreviation. It stands for Oracle JavaScript Extension Toolkit, and JET is the underlying technology that makes Visual Builder, visual applications, and Visual Builder extensions for Fusion Applications possible. Oracle JavaScript Extension Toolkit provides a module-based, open-source toolkit that leverages modern JavaScript, TypeScript, CSS3, and HTML5 to deliver web applications. It’s targeted at JavaScript developers working on client-side applications. It is not for backend development.  It’s a collection of popular, powerful JavaScript libraries and a set of Oracle-contributed JavaScript libraries that make it very simple, easy, and efficient to build front-end applications that can consume and interact with Oracle products and services, especially Oracle Cloud services, but of course it can work with any type of third-party API. 17:42 Nikita: How are JET applications architected, Joe, and how does that relate to Visual Builder pages and flows? Joe: The architecture of JET applications is what’s called a single page architecture. We’ve all seen these. These are where you have a single web page—think of your index page that provides the header and footer for your web page—and then the middle portion, or the middle content of the page, represented by modules, allows you to navigate from one page or module to another. 
It also provides the data mapping so that the data elements in the variables and the state of the application, as well as the graphical user interface elements that provide the fields and functionality for the interface of the application, are all maintained on the client side. If you’re working in pure JET, then you work with these modules at the raw JavaScript code level. And there are a lot of JavaScript developers who want to work like this and create their custom applications from the code up, so to speak. However, it also provides the basis for Visual Builder visual applications and Fusion Apps visual extensions in Visual Builder. 18:38 Lois: How does JET support VB Apps? You didn’t talk much about having to write a bunch of JavaScript and HTML5, so I got the impression that this is all done for you by VB Designer? Joe: Visual Builder applications are composed of HTML5, CSS3, and JavaScript code that is usually generated by the developer when she drags and drops components onto the Page Designer canvas or sets properties or creates action chains to respond to events. But there’s also a lot of JavaScript Object Notation (JSON) metadata created at the same time that describes the pages, the flows, the navigation, the REST services, the variables, their data types, and other assets needed for the app to function. This JSON metadata is translated at runtime using a large JavaScript Extension Toolkit library called the Visual Builder Runtime that runs in the browser and, in real time, translates the metadata and other assets in the Visual Builder source code into JET code and assets, which are actually executed at runtime. And it’s very quick, very fast, very efficient, and provides a layer of abstraction between the raw JET code and the Visual Builder architecture of pages, flows, action chains for executing code, and events to handle things that occur in the user interface, including saving the state in variables that are mapped to GUI components. For example, if you have an Input Text component, you need to have a variable to store the value that was entered into that Input Text component between page refreshes. The data can move from the Input Text component to the variable, and from the variable to that Input Text component if it’s changed programmatically, for example. So, JET manages binding these data values to variables and the UI components on the page. So, a change to a variable value or a change to the contents of the component causes the other to change automatically. Now, this is only a small part of what JET and the frameworks and libraries it uses do for the applications.  JET also provides more complex GUI components like lists and tables, and selection lists, and check boxes, and all the sorts of things you would expect in a modern GUI application. 20:34 Nikita: You mentioned a layer of abstraction between Visual Builder Studio Designer and JET. What’s the benefit of working in Visual Builder Designer versus JET itself? Joe: The benefit of Visual Builder is that you work at a higher level of abstraction than having to get down into the more detailed levels of deep JavaScript code, working with modules, data mappings, HTML code, single page architecture navigation, and the related functionalities. You can work at a higher level, a graphical level, where you can drag and drop things onto a design canvas and set properties. The VB architecture insulates you from the more technical bits of JET. 
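To picture the two-way binding described here, consider this minimal sketch in Python. It is purely illustrative: every class and method name is invented for this example, and JET implements the idea in JavaScript with far more sophistication.

# Illustrative two-way binding: a variable and a UI component stay in sync.
class Variable:
    def __init__(self, value=""):
        self._value = value
        self._listeners = []  # callbacks to notify on change

    def subscribe(self, callback):
        self._listeners.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if new_value != self._value:
            self._value = new_value
            for notify in self._listeners:
                notify(new_value)  # push the change to subscribers


class InputText:
    """A stand-in for a UI component bound to a variable."""
    def __init__(self, variable):
        self.variable = variable
        self.text = variable.value
        variable.subscribe(self._on_variable_changed)

    def _on_variable_changed(self, new_value):
        self.text = new_value  # variable -> component

    def user_types(self, text):
        self.text = text
        self.variable.value = text  # component -> variable


name = Variable("Jane")
field = InputText(name)
field.user_types("Joe")   # a user edit updates the variable
print(name.value)         # Joe
name.value = "Niki"       # a programmatic change updates the field
print(field.text)         # Niki

The guard in the setter, which only fires listeners when the value actually changes, is what keeps the two directions from ping-ponging endlessly.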
Now, this frees the developer to concentrate more on application and page design, implementing logic and business rules, and creating a pleasing workflow and look and feel for the user. This keeps them from having to get caught up in the details of getting this working at the code level.  Now, if needed, you can write custom JavaScript, HTML5, and CSS3 code, though much less than in a JET app, and all that is part of the VB application source, which becomes part of the code used by JET to execute the application itself. And yet it all works seamlessly together. 21:35 Lois: Joe, I know we have courses in JavaScript, HTML, and CSS. But does a developer getting ready to work in Visual Builder Designer have to go take those courses first, or can they start working in VB Designer right away? Joe: Yeah, that question does often come up: Do I need to learn JET to work with Visual Builder? No, you don’t. That’s all taken care of for you in the products themselves. I don’t really think it helps that much to learn JET if you are going to be a VB developer. In some ways, it could even be a bit distracting, since some of the things you learn to do in JET you would have to unlearn or not do so much, because VB does them for you. The things you would have to do manually in code in JET are done for you. This is why we call VB a low code development tool.  I mean, you certainly can learn JET if you want to, but I would spend more time learning about the different GUI components, page templates, the Visual Builder architecture — events, action chains, and the data provider variables and types. Now, I know JET myself. I started with that before learning Visual Builder, but I use very little of my JET knowledge as a VB developer. Visual Builder Designer provides a nice, abstracted, clean layer of modern visual development on top of JET, while leveraging the power and flexibility of JET and keeping the lower-level details out of my way. 22:46 Nikita: Joe, where can I go to get started with Visual Builder? Joe: Well, for more information, I recommend you take a look at our Develop Fusion Applications course if you’re working with Fusion Applications and Visual Builder Studio. The other course is Develop Visual Applications with Visual Builder Studio, and that’s if you’re creating stand-alone bespoke applications. Both these courses are free. We also have a comprehensive course that covers JavaScript, HTML5, and CSS3, and while it’s not required that you take that to be successful, it can be helpful down the road. I would say that some basic knowledge of HTML5, CSS3, and JavaScript will certainly support you and serve you well when working with Visual Builder. You learn more as you go along and find that you need to create more sophisticated applications. I would also mention that a lot of the look and feel of Visual Builder visual applications and Fusion apps extensions and customizations comes through JET components, JET styles, JET variables, and CSS variables, so that’s something that you would want to pursue at some point. There’s a JET cookbook out there. You can search for Oracle JET and look for the JET cookbook; that’s a good introduction to all of that. 23:47 Lois: Joe, thank you so much for joining us today. We’re really looking forward to having you back next week to discuss extending Oracle Fusion Applications with Visual Builder Studio. Joe: Thanks for having me. Nikita: And if you want to learn about some of the courses Joe mentioned, visit mylearn.oracle.com to get started. 
Until next time, this is Nikita Abraham… Lois: And Lois Houston signing off! 24:09 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

19 March 2024, 24 min

OCI AI Services

OCI AI Services

Listen to Lois Houston and Nikita Abraham, along with Senior Principal Product Manager Wes Prichard, as they explore the five core components of OCI AI services: language, speech, vision, document understanding, and anomaly detection, to help you make better sense of all that unstructured data around you. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! In our last episode, we spoke about the OCI AI Portfolio, including AI and ML services, and the OCI AI infrastructure. Nikita: Yeah, and in today’s episode, we’re going to continue down a similar path and take a closer look at OCI AI services. 00:55 Lois: With us today is Senior Principal Product Manager, Wes Prichard. Hi Wes! It’s lovely to have you here with us. Hemant gave us a broad overview of the various OCI AI services last week, but we’re really hoping to get into each of them with you. So, let’s jump right in and start with the OCI Language service. What can you tell us about it? Wes: OCI Language analyzes unstructured text for you. It provides models trained on industry data to perform language analysis with no data science experience needed.  01:27 Nikita: What kind of big things can it do? Wes: It has five main capabilities. First, it detects the language of the text. It recognizes 75 languages, from Afrikaans to Welsh.  It identifies entities, things like names, places, dates, emails, currency, organizations, phone numbers, 14 types in all. It identifies the sentiment of the text, and not just one sentiment for the entire block of text, but the different sentiments for different aspects.  01:56 Nikita: What do you mean by that, Wes? Wes: So let’s say you read a restaurant review that said, “The food was great, but the service sucked.” You’ll get food with a positive sentiment and service with a negative sentiment. And it also analyzes the sentiment for every sentence.  Lois: Ah, that’s smart. Ok, so we covered three capabilities. What else? Wes: It identifies key phrases in the text that represent the important ideas or subjects. And it classifies the general topic of the text from a list of 600 categories and subcategories.  02:27 Lois: Ok, and then there’s the OCI Speech service...  Wes: OCI Speech is very straightforward. It unlocks the data in audio tracks by converting speech to text. Developers can use Oracle’s time-tested acoustic language models to provide highly accurate transcription for audio or video files across multiple languages.  OCI Speech automatically transcribes audio and video files into text using advanced deep learning techniques. There’s no data science experience required. It processes data directly in object storage. 
And it generates timestamped, grammatically accurate transcriptions.  03:01 Nikita: What are some of the main features of OCI Speech? Wes: OCI Speech supports multiple languages, specifically English, Spanish, and Portuguese, with more coming in the future. It has batching support, where multiple files can be submitted with a single call. It has blazing fast processing. It can transcribe hours of audio in less than 10 minutes. It does this by chunking up your audio into smaller segments, transcribing each segment, and then joining them all back together into a single file. It provides a confidence score, both per word and per transcription. It punctuates transcriptions to make the text more readable and to allow downstream systems to process the text with less friction.  And it has SRT file support.  03:45 Lois: SRT? What’s that? Wes: SRT is the most popular closed caption output file format. And with this SRT support, users can add closed captions to their video. OCI Speech makes transcribed text more readable to resemble how humans write. This is called normalization. And the service will normalize things like addresses, times, numbers, URLs, and more.  It also does profanity filtering, where it can either remove, mask, or tag profanity in the output text, where removing replaces the word with asterisks, masking does the same thing but retains the first letter, and tagging will leave the word in place but provides tagging in the output data.  04:29 Nikita: And what about OCI Vision? What are its capabilities? Wes: Vision is a computer vision service that works on images, and it provides two main capabilities: image analysis and document AI. Image analysis analyzes photographic images. Object detection is the feature that detects objects inside an image using a bounding box and assigning a label to each object with an accuracy percentage. Object detection also locates and extracts text that appears in the scene, like on a sign.  Image classification will assign classification labels to the image by identifying the major features in the scene. One of the most powerful capabilities of image analysis is that, in addition to pretrained models, users can retrain the models with their own unique data to fit their specific needs.  05:20 Lois: So object detection and image classification are features of image analysis. I think I got it! So then what’s document AI?  Wes: It’s used for working with document images. You can use it to understand PDFs or document image types, like JPEG, PNG, and TIFF, or photographs containing textual information.  05:40 Lois: And what are its most important features? Wes: The features of document AI are text recognition, also known as OCR or optical character recognition.  And this extracts text from images, including non-trivial scenarios, like handwritten texts, plus tilted, shaded, or rotated documents. Document classification classifies documents into 10 different types based on visual appearance, high-level features, and extracted keywords. This is useful when you need to process a document based on its classification, like an invoice, a receipt, or a resume.  Language detection analyzes the visual features of text to determine the language rather than relying on the text itself. Table extraction identifies tables in docs and extracts their content in tabular form. Key value extraction finds values for 13 common fields and line items in receipts, things like merchant name and transaction date. 
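To make the earlier restaurant-review example concrete, here is a short sketch of calling the Language service’s aspect-based sentiment analysis through the OCI Python SDK. Treat it as illustrative rather than definitive: it assumes a configured ~/.oci/config file, and the client and model class names reflect the oci SDK as best recalled here, so verify them against the current SDK reference.

import oci

# Load credentials from the default OCI config file and profile.
config = oci.config.from_file()
client = oci.ai_language.AIServiceLanguageClient(config)

documents = [
    oci.ai_language.models.TextDocument(
        key="review-1",
        text="The food was great, but the service sucked.",
        language_code="en",
    )
]

details = oci.ai_language.models.BatchDetectLanguageSentimentsDetails(
    documents=documents
)

# Aspect-level sentiment: expect "food" positive and "service" negative.
response = client.batch_detect_language_sentiments(details)
for doc in response.data.documents:
    for aspect in doc.aspects:
        print(aspect.text, aspect.sentiment)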
06:41 Want to get the inside scoop on Oracle University? Head over to the Oracle University Learning Community. Attend exclusive events. Read up on the latest news. Get first-hand access to new products. Read the OU Learning Blog. Participate in Challenges. And stay up-to-date with upcoming certification opportunities. Visit mylearn.oracle.com to get started.  07:06 Nikita: Welcome back! Wes, I want to ask you about OCI Anomaly Detection. We discussed it a bit last week and it seems like such an intelligent and efficient service. Wes: Oracle Cloud Infrastructure Anomaly Detection identifies anomalies in time series data. Equipment sensors generate time series data, but all kinds of business metrics are also time-based. The unique feature of this service is that it finds anomalies, not just in a single signal, but across many signals at once. That’s important because machines often generate multiple signals at once and the signals are often related.  07:42 Nikita: Ok, you need to give us an example of this! Wes: Think of a pump that has an output pressure, a flow rate, an RPM, and an electrical current draw. When a pump’s going to fail, anomalies may appear across several of those signals but at different times. OCI Anomaly Detection helps you to identify anomalies in a multivariate data set by taking advantage of the interrelationship among signals.  The service contains algorithms for both multi-signal (as in multivariate) and single-signal (as in univariate) anomaly detection, and it automatically determines which algorithm to use based on the training data provided. The multivariate algorithm is called MSET-2, which stands for Multivariate State Estimation Technique, and it’s unique to Oracle.  08:28 Lois: And the 2? Wes: The 2 in the name refers to the patented enhancements by Oracle Labs that automatically identify and fix data quality issues, resulting in fewer false alarms and more accurate results.  Now, unlike some of the other AI services, OCI Anomaly Detection is always trained on the customer’s data. It’s trained using actual historical data with no anomalies, and there can be as many different trained models as needed for different sets of signals.  08:57 Nikita: So where would one use a service like this? Wes: One of the most obvious applications of this service is for predictive maintenance. Early warning of a problem provides the opportunity to deploy maintenance resources and schedule downtime to minimize disruption to the business.  09:12 Lois: How would you train an OCI Anomaly Detection model? Wes: It’s a simple four-step process to prepare a model that can be used for anomaly detection. The first step is to obtain training data from the system to be monitored. The data must contain no anomalies and should cover the normal range of values that would be experienced in a full business cycle.  Second, the training data file is uploaded to an object storage bucket.  Third, a data set is created for the training data. So a data set in this context is an object in the OCI Anomaly Detection service to manage data used for training and testing models.  And fourth, the model is trained. A wizard in the user interface steps the user through the required inputs, such as the training data set and some training parameters like the target false alarm probability.  10:02 Lois: How would this service know about the data and whether the trained model is univariate or multivariate? 
Wes: When training OCI Anomaly Detection models, the user does not need to specify whether the intended model is for multivariate or univariate data. It does this detection automatically.  For example, if a model is trained with 10 signals and five of those signals are determined to be correlated enough for multivariate anomaly detection, it will create an internal multivariate model for those signals. If the other five signals are not correlated with each other, it will create an internal univariate model for each one.  From the user’s perspective, the result will be a single OCI Anomaly Detection model for the 10 signals. But internally, the signals are treated differently based on the training. A user can also train a model on a single signal, and it will result in a univariate model.  10:55 Lois: What does this OCI Anomaly Detection model training entail? How does it ensure that it does not have any false alarms? Wes: Training a model requires a single data file with no anomalies that should cover a complete business cycle, which means it should represent all the normal variations in the signal. During training, OCI Anomaly Detection will use a portion of the data for training and another portion for automated testing. The fraction used for each is specified when the model is trained.  When model training is complete, it’s best practice to do another test of the model with a data set containing anomalies to see if the anomalies are detected and if there are any false alarms. Based on the outcome, the user may want to retrain the model and specify a different false alarm probability, also called F-A-P or FAP. The FAP is the probability that the model would produce a false alarm. The false alarm probability can be thought of as the sensitivity of the model. The lower the false alarm probability, the lower the likelihood of it reporting a false alarm, but the less sensitive it will be to detecting anomalies. Selecting the right FAP is a business decision based on the need for sensitive detections balanced by the ability to tolerate false alarms.  Once a model has been trained and the user is satisfied with its detection performance, it can then be used for inferencing.  12:23 Nikita: Inferencing? Is that what I think it is?  Wes: New data is submitted to the model, and OCI Anomaly Detection will respond with anomalies that are detected. The input data must contain the same signals that the model was trained on. So, for example, if the model was trained on signals A, B, C, and D, then for detection inferencing, the same four signals must be provided. No more, no less. 12:46 Lois: Where can I find the features of OCI Anomaly Detection that you mentioned?  Wes: The training and inferencing features of OCI Anomaly Detection can be accessed through the OCI console. However, a human-driven interface is not efficient for most business scenarios.  In most cases, automating the detection of anomalies through software is preferred, to be able to process hundreds or thousands of signals using many trained models. The service provides multiple software interfaces for this purpose.  Each trained model is accessible through a REST API and an HTTP endpoint. Additionally, programming language-specific SDKs are available for multiple languages, including Python. Using the Python SDK, data scientists can work with OCI Anomaly Detection for both training and inferencing in an OCI Data Science notebook.  13:37 Nikita: How can a data scientist take advantage of these capabilities?  
Wes: Well, you can write code against the REST API or use any of the various language SDKs. But for data scientists working in OCI Data Science, it makes sense to use Python.  13:51 Lois: That’s exciting! What does it take to use the Python SDK in a notebook… to be able to use the AI services? Wes: You can use a Notebook session in OCI Data Science to invoke the SDK for any of the AI services.  This might be useful to generate new features for a custom model or simply as a way to consume the service using a familiar Python interface. But before you can invoke the SDK, you have to prepare the data science notebook session by supplying it with an API Signing Key.  The Signing Key is unique to a particular user and tenancy and authenticates that user to OCI when invoking the SDK. Therefore, you want to make sure you safeguard your Signing Key and never share it with another user.  14:34 Nikita: And where would I get my API Signing Key? Wes: You can obtain an API Signing Key from your user profile in the OCI Console. Then you save that key as a file to your local machine.  When you generate the API Signing Key, you also get entries to be added to a config file that the SDK expects to find in the environment where the SDK code is executing. The config file then references the key file. Once these files are prepared on your local machine, you can upload them to the Notebook session, where you will execute SDK code for the AI service.  The API Signing Key and config file can be reused with any of your notebook sessions, and the same files also work for all of the AI services. So, the files only need to be created once for each user and tenancy combination.  15:27 Lois: Thank you so much, Wes, for this really insightful discussion. To learn more about the topics covered today, you can visit mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. Nikita: And remember, that course prepares you for the Oracle Cloud Infrastructure AI Foundations Associate certification that you can take for free! So, don’t wait too long to check it out. Join us next week for another episode of the Oracle University Podcast. Until then, this is Nikita Abraham… Lois: And Lois Houston, signing off! 16:03 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

12 March 2024, 16 min

The OCI AI Portfolio

The OCI AI Portfolio

Oracle has been actively focusing on bringing AI to the enterprise at every layer of its tech stack, be it SaaS apps, AI services, infrastructure, or data. In this episode, hosts Lois Houston and Nikita Abraham, along with senior instructors Hemant Gahankari and Himanshu Raj, discuss OCI AI and Machine Learning services. They also go over some key OCI Data Science concepts and responsible AI principles. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hey everyone! In our last episode, we dove into Generative AI and Large Language Models.  Lois: Yeah, that was an interesting one. But today, we’re going to discuss the AI and machine learning services offered by Oracle Cloud Infrastructure, and we’ll look at the OCI AI infrastructure. Nikita: I’m also going to try and squeeze in a couple of questions on a topic I’m really keen about, which is responsible AI. To take us through all of this, we have two of our colleagues, Hemant Gahankari and Himanshu Raj. Hemant is a Senior Principal OCI Instructor and Himanshu is a Senior Instructor on AI/ML. So, let’s get started! 01:16 Lois: Hi Hemant! We’re so excited to have you here! We know that Oracle has really been focusing on bringing AI to the enterprise at every layer of our stack.  Hemant: It all begins with the data and infrastructure layers. OCI AI services consume data, and AI services, in turn, are consumed by applications.  This approach involves extensive investment from infrastructure to SaaS applications. Generative AI and massive scale models are the more recent steps. Oracle AI is the portfolio of cloud services for helping organizations use the data they have for business-specific uses.  Business applications consume AI and ML services. The foundation of AI services and ML services is data. AI services contain pre-built models for specific uses. Some of the AI services are pre-trained, and some can be additionally trained by the customer with their own data.  AI services can be consumed by calling the API for the service, passing in the data to be processed, and the service returns a result. There is no infrastructure to be managed for using AI services.  02:37 Nikita: How do I access OCI AI services? Hemant: OCI AI services provide multiple methods for access. The most common method is the OCI Console. The OCI Console provides an easy-to-use, browser-based interface that enables access to notebook sessions and all the features of the Data Science service, as well as the AI services.  The REST API provides access to service functionality but requires programming expertise. An API reference is provided in the product documentation. 
OCI also provides programming language SDKs for Java, Python, TypeScript, JavaScript, .NET, Go, and Ruby. The command line interface provides both quick access and full functionality without the need for scripting.  03:31 Lois: Hemant, what are the types of OCI AI services that are available?  Hemant: OCI AI services is a collection of services with pre-built machine learning models that make it easier for developers to build a variety of business applications. The models can also be custom trained for more accurate business results. The different services provided are digital assistant, language, vision, speech, document understanding, and anomaly detection.  04:03 Lois: I know we’re going to talk about them in more detail in the next episode, but can you introduce us to OCI Language, Vision, and Speech? Hemant: OCI Language allows you to perform sophisticated text analysis at scale. Using the pre-trained and custom models, you can process unstructured text to extract insights without data science expertise. Pre-trained models include language detection, sentiment analysis, key phrase extraction, text classification, named entity recognition, and personal identifiable information detection.  Custom models can be trained for named entity recognition and text classification with domain-specific data sets. In text translation, neural machine translation is used to translate text across numerous languages.  Using OCI Vision, you can upload images to detect and classify objects in them. Pre-trained models and custom models are supported. In image analysis, pre-trained models perform object detection, image classification, and optical character recognition. In image analysis, custom models can perform custom object detection by detecting the location of custom objects in an image and providing a bounding box.  The OCI Speech service is used to convert media files to readable text that’s stored in JSON and SRT formats. Speech enables you to easily convert media files containing human speech into highly accurate text transcriptions.  05:52 Nikita: That’s great. And what about document understanding and anomaly detection? Hemant: Using OCI document understanding, you can upload documents to detect and classify text and objects in them. You can process individual files or batches of documents. In OCR, document understanding can detect and recognize text in a document. In text extraction, document understanding provides the word level and line level text, and the bounding box coordinates of where the text is found.  In key value extraction, document understanding extracts a predefined list of key value pairs of information from receipts, invoices, passports, and driver IDs. In table extraction, document understanding extracts content in tabular format, maintaining the row and column relationship of cells. In document classification, document understanding classifies documents into different types.  The OCI Anomaly Detection service is a service that analyzes large volumes of multivariate or univariate time series data. The Anomaly Detection service increases the reliability of businesses by monitoring their critical assets and detecting anomalies early with high precision. Anomaly detection is the identification of rare items, events, or observations in data that differ significantly from the expectation.  07:34 Nikita: Where is Anomaly Detection most useful? 
Hemant: The Anomaly Detection service is designed to help with analyzing large amounts of data and identifying the anomalies at the earliest possible time with maximum accuracy. Different sectors, such as utilities, oil and gas, transportation, manufacturing, telecommunications, banking, and insurance, use the Anomaly Detection service for their day-to-day activities.  08:02 Lois: Ok.. and the first OCI AI service you mentioned was digital assistant… Hemant: Oracle Digital Assistant is a platform that allows you to create and deploy digital assistants, which are AI-driven interfaces that help users accomplish a variety of tasks with natural language conversations. When a user engages with the Digital Assistant, the Digital Assistant evaluates the user input and routes the conversation to and from the appropriate skills.  Digital Assistant greets the user upon access. Upon user request, it lists what it can do and provides entry points into the given skills. It routes explicit user requests to the appropriate skills. And it also handles interruptions to flows and disambiguation, as well as requests to exit the bot.  09:00 Nikita: Excellent! Let’s bring Himanshu in to tell us about machine learning services. Hi Himanshu! Let’s talk about OCI Data Science. Can you tell us a bit about it? Himanshu: OCI Data Science is the cloud service focused on serving the data scientist throughout the full machine learning life cycle with support for Python and open source.  The service has many features, such as model catalog, projects, JupyterLab notebook, model deployment, model training, management, model explanation, open source libraries, and AutoML.  09:35 Lois: Himanshu, what are the core principles of OCI Data Science?  Himanshu: There are three core principles of OCI Data Science. The first one, accelerated. The first principle is about accelerating the work of the individual data scientist. OCI Data Science provides data scientists with open source libraries along with easy access to a range of compute power without having to manage any infrastructure. It also includes Oracle’s own library to help streamline many aspects of their work.  The second principle is collaborative. It goes beyond an individual data scientist’s productivity to enable data science teams to work together. This is done through the sharing of assets, reducing duplicative work, and supporting reproducibility and auditability of models for collaboration and risk management.  Third is enterprise grade. That means it’s integrated with all the OCI security and access protocols. The underlying infrastructure is fully managed. The customer does not have to think about provisioning compute and storage. And the service handles all the maintenance, patching, and upgrades so users can focus on solving business problems with data science.  10:50 Nikita: Let’s drill down into the specifics of OCI Data Science. So far, we know it’s a cloud service to rapidly build, train, deploy, and manage machine learning models. But who can use it? Where is it? And how is it used? Himanshu: It serves data scientists and data science teams throughout the full machine learning life cycle.  Users work in a familiar JupyterLab notebook interface, where they write Python code. And how is it used? Users preserve their models in the model catalog and deploy their models to a managed infrastructure.  11:25 Lois: Walk us through some of the key terminology that’s used. Himanshu: Some of the important product terminology of OCI Data Science are projects. 
The projects are containers that enable data science teams to organize their work. They represent collaborative workspaces for organizing and documenting data science assets, such as notebook sessions and models.  Note that a tenancy can have as many projects as needed, without limits. Now, the notebook session is where the data scientists work. Notebook sessions provide a JupyterLab environment with pre-installed open source libraries and the ability to add others. Notebook sessions are interactive coding environments for building and training models.  Notebook sessions run in a managed infrastructure, and the user can select CPU or GPU, the compute shape, and the amount of storage without having to do any manual provisioning. The other important feature is the Conda environment. It’s an open source environment and package management system and was created for Python programs.  12:33 Nikita: What is a Conda environment used for? Himanshu: It is used in the service to quickly install, run, and update packages and their dependencies. Conda easily creates, saves, loads, and switches between environments in your notebook sessions. 12:46 Nikita: Earlier, you spoke about the support for Python in OCI Data Science. Is there a dedicated library? Himanshu: Oracle’s Accelerated Data Science (ADS) SDK is a Python library that is included as part of OCI Data Science.  ADS has many functions and objects that automate or simplify the steps in the data science workflow, including connecting to data, exploring and visualizing data, training a model with AutoML, evaluating models, and explaining models. In addition, ADS provides a simple interface to access the Data Science service model catalog and other OCI services, including object storage.  13:24 Lois: I also hear a lot about models. What are models? Himanshu: Models define a mathematical representation of your data and business process. You create models in notebook sessions, inside projects.  13:36 Lois: What are some other important terminologies related to models? Himanshu: The next terminology is model catalog. The model catalog is a place to store, track, share, and manage models.  The model catalog is a centralized and managed repository of model artifacts. A stored model includes metadata about the provenance of the model, including Git-related information and the script or notebook used to push the model to the catalog. Models stored in the model catalog can be shared across members of a team, and they can be loaded back into a notebook session.  The next one is model deployments. Model deployments allow you to deploy models stored in the model catalog as HTTP endpoints on managed infrastructure.  14:24 Lois: So, how do you operationalize these models? Himanshu: Deploying machine learning models as web applications (HTTP API endpoints) serving predictions in real time is the most common way to operationalize models. HTTP endpoints, or the API endpoints, are flexible and can serve requests for the model predictions. Data science jobs enable you to define and run repeatable machine learning tasks on fully managed infrastructure.  Nikita: Thanks for that, Himanshu.  14:57 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? 
Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit mylearn.oracle.com to get started.  15:25 Nikita: Welcome back! The Oracle AI Stack consists of AI services and machine learning services, and these services are built using AI infrastructure. So, let’s move on to that. Hemant, what are the components of OCI AI Infrastructure? Hemant: OCI AI Infrastructure is mainly composed of GPU-based instances. Instances can be virtual machines or bare metal machines. High-performance cluster networking allows instances to communicate with each other. Superclusters are a massive network of GPU instances with multiple petabytes per second of bandwidth. And a variety of fully managed storage options, from a single byte to exabytes, without upfront provisioning are also available.  16:14 Lois: Can we explore each of these components a little more? First, tell us, why do we need GPUs? Hemant: ML and AI need lots of repetitive computations to be made on huge amounts of data. Parallel computing on GPUs is designed for many processes at the same time. A GPU is a piece of hardware that is incredibly good at performing computations.  A GPU has thousands of lightweight cores, all working on their share of data in parallel. This gives them the ability to crunch through extremely large data sets at tremendous speed.  16:54 Nikita: And what are the GPU instances offered by OCI? Hemant: GPU instances are ideally suited for model training and inference. Bare metal and virtual machine compute instances powered by NVIDIA GPUs H100, A100, A10, and V100 are made available by OCI.  17:14 Nikita: So how do we choose what to train on from these different GPU options?  Hemant: For large scale AI training, data analytics, and high performance computing, bare metal instances BM 8 X NVIDIA H100 and BM 8 X NVIDIA A100 can be used.  These provide up to nine times faster AI training and 30 times higher acceleration for AI inferencing. The other bare metal and virtual machines are used for small AI training, inference, streaming, gaming, and virtual desktop infrastructure.  17:53 Lois: And why would someone choose the OCI AI stack over its counterparts? Hemant: Oracle offers all the features and is the most cost-effective option when compared to its counterparts.  For example, the BM GPU 4.8 version 2 instance costs just $4 per hour and is used by many customers.  Superclusters are a massive network with multiple petabytes per second of bandwidth. They can scale up to 4,096 OCI bare metal instances with 32,768 GPUs.  We also have a choice of bare metal A100 or H100 GPU instances, and we can select a variety of storage options, like object store, or block store, or even file system. For networking speeds, we can reach 1,600 GB per second with A100 GPUs and 3,200 GB per second with H100 GPUs.  With OCI storage, we can select local SSD up to four NVMe drives, block storage up to 32 terabytes per volume, object storage up to 10 terabytes per object, and file systems up to eight exabytes per file system. OCI File Storage employs five-way replicated storage, located in different fault domains, to provide redundancy for resilient data protection.  HPC file systems, such as BeeGFS, and many others are also offered. OCI HPC file systems are available on Oracle Cloud Marketplace and make it easy to deploy a variety of high performance file servers.  19:50 Lois: I think a discussion on AI would be incomplete if we don’t talk about responsible AI. 
We’re using AI more and more every day, but can we actually trust it? Hemant: For us to trust AI, it must be driven by ethics that guide us as well. Nikita: And do we have some principles that guide the use of AI? Hemant: AI should be lawful, complying with all applicable laws and regulations. AI should be ethical, that is, it should ensure adherence to ethical principles and values that we uphold as humans. And AI should be robust, both from a technical and social perspective. Because even with good intentions, AI systems can cause unintentional harm. AI systems do not operate in a lawless world. A number of legally binding rules at the national and international levels apply or are relevant to the development, deployment, and use of AI systems today. The law not only prohibits certain actions but also enables others, like protecting the rights of minorities or protecting the environment. Besides horizontally applicable rules, various domain-specific rules exist that apply to particular AI applications. For instance, the medical device regulation in the health care sector.  In the AI context, equality entails that the systems’ operations cannot generate unfairly biased outputs. And while we adopt AI, citizens’ rights should also be protected.  21:30 Lois: Ok, but how do we derive AI ethics from these? Hemant: There are three main principles.  AI should be used to help humans and allow for oversight. It should never cause physical or social harm. Decisions taken by AI should be transparent and fair, and also should be explainable. AI that follows the AI ethical principles is responsible AI.  So if we map the AI ethical principles to responsible AI requirements, these will be like, AI systems should follow human-centric design principles and leave meaningful opportunity for human choice. This means securing human oversight. AI systems and the environments in which they operate must be safe and secure, they must be technically robust, and they should not be open to malicious use.  The development, deployment, and use of AI systems must be fair, ensuring equal and just distribution of both benefits and costs. AI should be free from unfair bias and discrimination. Decisions taken by AI should, to the extent possible, be explainable to those directly and indirectly affected.  23:01 Nikita: This is all great, but what does a typical responsible AI implementation process look like? Hemant: First, governance needs to be put in place. Second, develop a set of policies and procedures to be followed. And once implemented, ensure compliance by regular monitoring and evaluation.  Lois: And this is all managed by developers? Hemant: Typical roles that are involved in the implementation cycles are developers, deployers, and end users of the AI.  23:35 Nikita: Can we talk about AI specifically in health care? How do we ensure that there is fairness and no bias? Hemant: AI systems are only as good as the data that they are trained on. If that data is predominantly from one gender or racial group, the AI systems might not perform as well on data from other groups.  24:00 Lois: Yeah, and there’s also the issue of ensuring transparency, right? Hemant: AI systems often make decisions based on complex algorithms that are difficult for humans to understand. As a result, patients and health care providers can have difficulty trusting the decisions made by the AI. AI systems must be regularly evaluated to ensure that they are performing as intended and not causing harm to patients.  
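One concrete way to act on the fairness point Hemant makes is to evaluate a model’s performance separately for each group it serves, rather than as a single overall number. The sketch below is a minimal, hypothetical illustration in plain Python; the data is invented, and real fairness audits use richer metrics and dedicated tooling.

def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that match."""
    correct = sum(1 for predicted, actual in pairs if predicted == actual)
    return correct / len(pairs)

# Predictions and ground-truth labels, disaggregated by demographic group.
results_by_group = {
    "group_a": [(1, 1), (0, 0), (1, 1), (1, 0), (0, 0)],
    "group_b": [(1, 0), (0, 1), (1, 1), (0, 0), (1, 0)],
}

for group, pairs in results_by_group.items():
    print(f"{group}: accuracy = {accuracy(pairs):.2f}")

# A large gap between groups signals unfair bias: re-examine the training
# data and retrain before the model is deployed.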
24:29 Nikita: Thank you, Hemant and Himanshu, for this really insightful session. If you’re interested in learning more about the topics we discussed today, head on over to mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course.  Lois: That’s right, Niki. You’ll find demos that you can watch as well as skill checks that you can attempt to better your understanding. In our next episode, we’ll get into the OCI AI Services we discussed today and talk about them in more detail. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off! 25:05 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

5 March 2024, 25 min

Generative AI and Large Language Models

Generative AI and Large Language Models

In this week’s episode, Lois Houston and Nikita Abraham, along with Senior Instructor Himanshu Raj, take you through the extraordinary capabilities of Generative AI, a subset of deep learning that doesn’t make predictions but rather creates its own content. They also explore the workings of Large Language Models. Oracle MyLearn: https://mylearn.oracle.com/ou/learning-path/become-an-oci-ai-foundations-associate-2023/127177 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal  Technical Editor.  Nikita: Hi everyone! In our last episode, we went over the basics of deep learning. Today, we’ll look at generative AI and large language models, and discuss how they work. To help us with that, we have Himanshu Raj, Senior Instructor on AI/ML. So, let’s jump right in. Hi Himanshu, what is generative AI?  01:00 Himanshu: Generative AI refers to a type of AI that can create new content. It is a subset of deep learning, where the models are trained not to make predictions but rather to generate output on their own.  Think of generative AI as an artist who looks at a lot of paintings and learns the patterns and styles present in them. Once it has learned these patterns, it can generate new paintings that resemble what it learned. 01:27 Lois: Let’s take an example to understand this better. Suppose we want to train a generative AI model to draw a dog. How would we achieve this? Himanshu: You would start by giving it a lot of pictures of dogs to learn from. The AI does not know anything about what a dog looks like. But by looking at these pictures, it starts to figure out common patterns and features, like dogs often have pointy ears, narrow faces, whiskers, etc. You can then ask it to draw a new picture of a dog.  The AI will use the patterns it learned to generate a picture that hopefully looks like a dog. But remember, the AI is not copying any of the pictures it has seen before but creating a new image based on the patterns it has learned. This is the basic idea behind generative AI. In practice, the process involves a lot of complex maths and computation, and there are different techniques and architectures that can be used, such as variational autoencoders (VAEs) and Generative Adversarial Networks (GANs).  02:27 Nikita: Himanshu, where is generative AI used in the real world? Himanshu: Generative AI models have a wide variety of applications across numerous domains. For image generation, generative models like GANs are used to generate realistic images. They can be used for tasks like creating artwork, synthesizing images of human faces, or transforming sketches into photorealistic images.  For text generation, large language models like GPT-3, which are generative in nature, can create human-like text. 
This has applications in content creation, like writing articles and generating ideas, and in conversational AI, like chatbots and customer service agents. They are also used in programming for code generation and debugging, and much more. For music generation, generative AI models can also be used. They create new pieces of music after being trained on a specific style or collection of tunes. A famous example is OpenAI's MuseNet. 03:21 Lois: You mentioned large language models in the context of text-based generative AI. So, let’s talk a little more about it. Himanshu, what exactly are large language models? Himanshu: LLMs are a type of artificial intelligence model built to understand, generate, and process human language at a massive scale. They were primarily designed for sequence-to-sequence tasks such as machine translation, where an input sequence is transformed into an output sequence. LLMs can be used to translate text from one language to another. For example, an LLM could be used to translate English text into French. To do this job, the LLM is trained on a massive data set of text and code, which allows it to learn the patterns and relationships that exist between different languages. The LLM translates, “How are you?” from English to French, “Comment allez-vous?” It can also answer questions like, what is the capital of France? And it would answer, the capital of France is Paris. And it will write an essay on a given topic. For example, write an essay on the French Revolution, and it will come up with a response with a title and introduction. 04:33 Lois: And how do LLMs actually work? Himanshu: So, LLMs are typically based on deep learning architectures such as transformers. They are also trained on vast amounts of text data to learn language patterns and relationships, again, with a massive number of parameters, usually on the order of millions or even billions. LLMs also have the ability to comprehend and understand natural language text at a semantic level. They can grasp context, infer meaning, and identify relationships between words and phrases. 05:05 Nikita: What are the most important factors for a large language model? Himanshu: Model size and parameters are crucial aspects of large language models and other deep learning models. They significantly impact the model’s capabilities, performance, and resource requirements. So, what is model size? The model size refers to the amount of memory required to store the model's parameters and other data structures. Larger model sizes generally lead to better performance as they can capture more complex patterns and representations from the data. The parameters are the numerical values of the model that change as it learns to minimize the model's error on the given task. In the context of LLMs, parameters refer to the weights and biases of the model's transformer layers. Parameters are usually measured in terms of millions or billions. For example, GPT-3, one of the largest LLMs to date, has 175 billion parameters, making it extremely powerful in language understanding and generation. Tokens represent the individual units into which a piece of text is divided during processing by the model. In natural language, tokens are usually words, subwords, or characters. Some models have a maximum token limit that they can process, and longer text may require truncation or splitting. Again, balancing model size, parameters, and token handling is crucial when working with LLMs. 06:29 Nikita: But what’s so great about LLMs?
Himanshu: Large language models can understand and interpret human language more accurately and contextually. They can comprehend complex sentence structures, nuances, and word meanings, enabling them to provide more accurate and relevant responses to user queries. These models can generate human-like text that is coherent and contextually appropriate. This capability is valuable for content creation, automated writing, and generating personalized responses in applications like chatbots and virtual assistants. They can perform a variety of tasks. Large language models are very versatile and adaptable to various industries. They can be customized to excel in applications such as language translation, sentiment analysis, code generation, and much more. LLMs can handle multiple languages, making them valuable for cross-lingual tasks like translation, sentiment analysis, and understanding diverse global content. Large language models can also be fine-tuned for a specific task using a minimal amount of domain data. The efficiency of LLMs usually grows with more data and parameters. 07:34 Lois: You mentioned the “sequence-to-sequence tasks” earlier. Can you explain the concept in simple terms for us? Himanshu: Understanding language is difficult for computers and AI systems. The reason is that words often have meanings based on context. Consider a sentence such as “Jane threw the frisbee, and her dog fetched it.” In this sentence, there are a few things that relate to each other. Jane is doing the throwing. The dog is doing the fetching. And “it” refers to the frisbee. Suppose we are looking at the word “it” in the sentence. As a human, we understand easily that “it” refers to the frisbee. But for a machine, it can be tricky. The goal in sequence problems is to find patterns, dependencies, or relationships within the data and make predictions, classifications, or generate new sequences based on that understanding. 08:27 Lois: And where are sequence models mostly used? Himanshu: Some common examples of sequence models are in natural language processing, which we call NLP: tasks such as machine translation, text generation, sentiment analysis, and language modeling involve dealing with sequences of words or characters. Speech recognition: converting audio signals into text involves working with sequences of phonemes or subword units to recognize spoken words. Music generation: generating new music involves modeling musical sequences, notes, and rhythms to create original compositions. Gesture recognition: sequences of motion or hand gestures are used to interpret human movements for applications such as sign language recognition or gesture-based interfaces. Time series analysis: in fields such as finance, economics, weather forecasting, and signal processing, time series data is used to predict future values, detect anomalies, and understand patterns in temporal data. 09:35 The Oracle University Learning Community is an excellent place to collaborate and learn with Oracle experts and fellow learners. Grow your skills, inspire innovation, and celebrate your successes. All your activities, from liking a post to answering questions and sharing with others, will help you earn a valuable reputation, badges, and ranks to be recognized in the community. Visit mylearn.oracle.com to get started. 10:03 Nikita: Welcome back! Himanshu, what would be the best way to solve those sequence problems you mentioned? Let’s use the same sentence, “Jane threw the frisbee, and her dog fetched it” as an example.
Himanshu: The solution is transformers. It's like the model has a bird's-eye view of the entire sentence and can see how all the words relate to each other. This allows it to understand the sentence as a whole instead of just a series of individual words. Transformers, with their self-attention mechanism, can look at all the words in the sentence at the same time and understand how they relate to each other. For example, a transformer can simultaneously understand the connections between “Jane” and “dog” even though they are far apart in the sentence. 10:52 Nikita: But how? Himanshu: The answer is attention, which adds context to the text. Attention would notice “dog” comes after “frisbee,” “fetched” comes after “dog,” and “it” comes after “fetched.” The transformer does not look at “it” in isolation. Instead, it also pays attention to all the other words in the sentence at the same time. By considering all these connections, the model can figure out that “it” likely refers to the frisbee. The most famous current models that are emerging in natural language processing tasks consist of dozens of transformer blocks or some of their variants, for example, GPT or BERT. 11:32 Lois: I was looking at the AI Foundations course on MyLearn and came across the terms “prompt engineering” and “fine-tuning.” Can you shed some light on them? Himanshu: A prompt is the input or initial text provided to the model to elicit a specific response or behavior. So, this is something you write or ask a language model. Now, what is prompt engineering? Prompt engineering is the process of designing and formulating specific instructions or queries to interact with a large language model effectively. In the context of large language models, such as GPT-3 or BERT, prompts are the input text or questions given to the model to generate responses or perform specific tasks. The goal of prompt engineering is to ensure that the language model understands the user's intent correctly and provides accurate and relevant responses. 12:26 Nikita: That sounds easy enough, but fine-tuning seems a bit more complex. Can you explain it with an example? Himanshu: Imagine you have a versatile recipe robot named Chef Bot. Suppose that Chef Bot is designed to create delicious recipes for any dish you desire. When you ask it for a pizza recipe, Chef Bot recognizes the prompt as a request for a pizza recipe, and it knows exactly what to do. However, if you want Chef Bot to be an expert in a particular type of cuisine, such as Italian dishes, you fine-tune Chef Bot for Italian cuisine by immersing it in a culinary crash course filled with Italian cookbooks, traditional Italian recipes, and even Italian cooking shows. During this process, Chef Bot becomes more specialized in creating authentic Italian recipes, and this process is called fine-tuning. LLMs are general-purpose models that are pre-trained on large data sets but are often fine-tuned to address specific use cases. When you combine prompt engineering and fine-tuning, you get a culinary wizard in Chef Bot, a recipe robot that is not only great at understanding specific dish requests but also capable of following them and even mastering the art of cooking in a particular culinary style. 13:47 Lois: Great! Now that we’ve spoken about all the major components, can you walk us through the life cycle of a large language model? Himanshu: The life cycle of a Large Language Model, LLM, involves several stages, from its initial pre-training to its deployment and ongoing refinement.
The first phase of this life cycle is pre-training. The LLM is initially pre-trained on a large corpus of text data from the internet. During pre-training, the model learns grammar, facts, reasoning abilities, and general language understanding. The model predicts the next word in a sentence given the previous words, which helps it capture relationships between words and the structure of language. The second phase is fine-tuning initialization. After pre-training, the model's weights are initialized, and it's ready for task-specific fine-tuning. Fine-tuning can involve supervised learning on labeled data for specific tasks, such as sentiment analysis, translation, or text generation. The model is fine-tuned on specific tasks using a smaller domain-specific data set. The weights from pre-training are updated based on the new data, making the model task-aware and specialized. The next phase of the LLM life cycle is prompt engineering. In this phase, you craft effective prompts to guide the model's behavior in generating specific responses. Different prompt formulations, instructions, or context can be used to shape the output. 15:13 Nikita: Ok… we’re with you so far. What’s next? Himanshu: The next phase is evaluation and iteration. Models are evaluated using various metrics to assess their performance on specific tasks. Iterative refinement involves adjusting model parameters, prompts, and fine-tuning strategies to improve results. As a part of this step, you also do few-shot and one-shot inference. If needed, you further fine-tune the model with a small number of examples: a few examples (few-shot) or a single example (one-shot) for new tasks or scenarios. You also do bias mitigation and consider the ethical concerns that may arise in the model's output, and you need to implement measures to ensure fairness, inclusivity, and responsible use. 16:07 Himanshu: The next phase in the LLM life cycle is deployment. Once the model has been fine-tuned and evaluated, it is deployed for real-world applications. Deployed models can perform tasks such as text generation, translation, summarization, and much more. You also perform monitoring and maintenance in this phase. So you continuously monitor the model's performance and output to ensure it aligns with desired outcomes. You also periodically update and retrain the model to incorporate new data and to adapt to evolving language patterns. This overall life cycle can also include a feedback loop, where you gather feedback from users and incorporate it into the model’s improvement process. You use this feedback to further refine prompts, fine-tuning, and overall model behavior. RLHF, which is Reinforcement Learning from Human Feedback, is a very good example of this feedback loop. You also research and innovate as a part of this life cycle, where you continue to research and develop new techniques to enhance the model's capabilities and address different challenges associated with it. 17:19 Nikita: As we’re talking about the LLM life cycle, I see that fine-tuning is not only about making an LLM task-specific. So, what are some other reasons you would fine-tune an LLM? Himanshu: The first one is task-specific adaptation. Pre-trained language models are trained on extensive and diverse data sets and have good general language understanding. They excel in language generation and comprehension tasks, though the broad understanding of language may not lead to optimal performance in specific tasks.
These models are not task-specific, so the solution is fine-tuning. The fine-tuning process customizes the pre-trained models for a specific task by further training on task-specific data to adapt the model's knowledge. The second reason is domain-specific vocabulary. Pre-trained models might lack knowledge of specific words and phrases essential for certain tasks in fields such as the legal, medical, finance, and technical domains. This can limit their performance when applied to domain-specific data. Fine-tuning enables the model to adapt and learn domain-specific words and phrases. These words could be, again, from different domains. 18:35 Himanshu: The third reason to fine-tune is efficiency and resource utilization. Fine-tuning is computationally efficient compared to training from scratch. Fine-tuning reuses the knowledge from pre-trained models, saving time and resources. Fine-tuning requires fewer iterations to achieve task-specific competence. Shorter training cycles expedite the model development process. It conserves computational resources, such as GPU memory and processing power. Fine-tuning enables quicker model deployment and a faster time to production for real-world applications. Fine-tuning is also scalable, enabling adaptation to various tasks with the same base model, which further reduces resource demands and leads to cost savings for research and development. The fourth reason to fine-tune is ethical concerns. Pre-trained models learn from diverse data and can potentially inherit different biases. Fine-tuning might not completely eliminate biases, but careful curation of task-specific data helps avoid biased or harmful vocabulary. The responsible use of domain-specific terms promotes ethical AI applications. 19:53 Lois: Thank you so much, Himanshu, for spending time with us. We had such a great time learning from you. If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on our free AI Foundations course. Nikita: Yeah, we even have a detailed walkthrough of the architecture of transformers that you might want to check out. Join us next week for a discussion on the OCI AI Portfolio. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 20:24 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
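The self-attention mechanism Himanshu describes can be sketched in a few lines of NumPy. The following is a minimal sketch of scaled dot-product attention over the frisbee sentence; the token embeddings and weight matrices are random stand-ins for learned values, so the numbers it prints are illustrative only.

```python
# Minimal scaled dot-product self-attention sketch (untrained, random weights).
import numpy as np

rng = np.random.default_rng(42)
tokens = ["Jane", "threw", "the", "frisbee", "and", "her", "dog", "fetched", "it"]
d = 4                                          # toy embedding size (assumption)
X = rng.normal(size=(len(tokens), d))          # one random embedding per token

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values

scores = Q @ K.T / np.sqrt(d)                  # every token scores every other token
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
context = weights @ V                          # context-aware representation per token

# The attention row for "it": how much it attends to each word in the sentence.
for tok, w in zip(tokens, weights[tokens.index("it")]):
    print(f"{tok:>8}: {w:.2f}")
```

With random weights the attention pattern is arbitrary; training adjusts Wq, Wk, and Wv so that useful patterns, such as “it” attending strongly to “frisbee,” carry high weight.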

27 Feb 2024 · 20 min

Deep Learning

Did you know that the concept of deep learning goes way back to the 1950s? However, it is only in recent years that this technology has created a tremendous amount of buzz (and for good reason!). A subset of machine learning, deep learning is inspired by the structure of the human brain, making it fascinating to learn about. In this episode, Lois Houston and Nikita Abraham interview Senior Principal OCI Instructor Hemant Gahankari about deep learning concepts, including how Convolutional Neural Networks work, and help you get your deep learning basics right. Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Himanshu Raj, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor. Nikita: Hi everyone! Last week, we covered the new MySQL HeatWave Implementation Associate certification. So do go check out that episode if it interests you. Lois: That was a really interesting discussion for sure. Today, we’re going to focus on the basics of deep learning with our Senior Principal OCI Instructor, Hemant Gahankari. 00:58 Nikita: Hi Hemant! Thanks for being with us today. So, to get started, what is deep learning? Hemant: Deep learning is a subset of machine learning that focuses on training Artificial Neural Networks to solve a task at hand. Say, for example, image classification. A very important quality of the ANN is that it can process raw data like pixels of an image and extract patterns from it. These patterns are treated as features to predict the outcomes. Let us say we have a set of handwritten images of digits 0 to 9. As we know, everyone writes the digits in a slightly different way. So how do we train a machine to identify a handwritten digit? For this, we use an ANN. The ANN accepts image pixels as inputs, extracts patterns like edges and curves and so on, and correlates these patterns to predict an outcome. That is, what digit the image has, in this case. 02:04 Lois: Ok, so what you’re saying is given a bunch of pixels, an ANN is able to process pixel data, learn an internal representation of the data, and predict outcomes. That’s so cool! So, why do we need deep learning? Hemant: We need to specify features while we train a machine learning algorithm. With deep learning, features are automatically extracted from the data. An internal representation of features and their combinations is built by deep learning algorithms to predict outcomes. This may not be feasible manually. Deep learning algorithms can also make use of parallel computations. For this, usually data is split into small batches and processed in parallel. So these algorithms can process large amounts of data in a short time to learn the features and their combinations. This leads to scalability and performance.
In short, deep learning complements machine learning algorithms for complex data for which features cannot be described easily. 03:13 Nikita: What can you tell us about the origins of deep learning? Hemant: Some of the deep learning concepts like the artificial neuron, perceptron, and multilayer perceptron existed as early as the 1950s. One of the most important concepts, using backpropagation for training ANNs, came in the 1980s. In the 1990s, convolutional neural networks were also introduced for image analysis tasks. Starting in 2000, GPUs were introduced. And from 2010 onwards, GPUs became cheaper and widely available. This fueled the widespread adoption of deep learning for uses like computer vision, natural language processing, speech recognition, text translation, and so on. In 2012, major networks like AlexNet and the Deep-Q Network were built. From 2016 onward, generative use cases of deep learning also started to come up. Today, we have widely adopted deep learning for a variety of use cases, including large language models and many other types of generative models. 04:29 Lois: Hemant, what are various applications of deep learning algorithms? Hemant: Deep learning algorithms are targeted at a variety of data and applications. For data, we have images, videos, text, and audio. For images, applications can be image classification, object detection, and so on. For textual data, applications are to translate the text or detect the sentiment of a text. For audio, the applications can be music generation, speech-to-text, and so on. 05:08 Lois: It's important that we select the right deep learning algorithm based on the data and application, right? So how do we do that? Hemant: For image tasks like image classification, object detection, image segmentation, or facial recognition, CNN is a suitable architecture. For text, we have a choice of the latest transformers, or LSTM, or even RNN. For generative tasks like text summarization and question answering, transformers are a good choice. For generating images or text-to-image generation, transformers, GANs, or diffusion models are available choices. 05:51 Nikita: Let’s dive a little deeper into Artificial Neural Networks. Can you tell us more about them, Hemant? Hemant: Artificial Neural Networks are inspired by the human brain. They are made up of interconnected nodes called neurons. Nikita: And how are inputs processed by a neuron? Hemant: In an ANN, we assign weights to the connections between neurons. Weighted inputs are added up, and if the sum crosses a specified threshold, the neuron is fired. And the outputs of one layer of neurons become an input to another layer. 06:27 Lois: Hemant, tell us about the building blocks of an ANN so we understand this better. Hemant: So the first building block is layers. We have an input layer, an output layer, and multiple hidden layers. The input layer and output layer are mandatory, and the hidden layers are optional. The second unit is neurons. Neurons are computational units which accept an input and produce an output. Weights determine the strength of connection between neurons. So the connection could be between an input and a neuron, or it could be between a neuron and another neuron. Activation functions work on the weighted sum of inputs to a neuron and produce an output. And an additional input to the neuron that allows a certain degree of flexibility is called a bias. 07:27 Nikita: I think we’ve got the components of an ANN straight, but maybe you should give us an example.
You mentioned this example earlier…of needing to train an ANN to recognize handwritten digits from images. How would we go about that? Hemant: For that, we have to collect a large number of digit images, and we need to train the ANN using these images. So, in this case, the images consist of 28 by 28 pixels, which act as the input layer. For the output, we have 10 neurons which represent the digits 0 to 9. And we have multiple hidden layers. So, in this case, we have two hidden layers consisting of 16 neurons each. The hidden layers are responsible for capturing the internal representation of the raw image data. And the output layer is responsible for producing the desired outcomes. So, in this case, the desired outcome is the prediction of whether the digit is 0 or 1 or up to digit 9. So how do we train this particular ANN? First, we use the backpropagation algorithm. During training, we show an image to the ANN. Let us say it is an image of the digit 2, so we expect the output neuron for digit 2 to fire. But in reality, let us say the output neuron for digit 6 fired. 09:12 Lois: So, then, what do we do? Hemant: We know that there is an error. So to correct the error, we adjust the weights of the connections between neurons based on a calculation, which we call the backpropagation algorithm. By showing thousands of images and adjusting the weights iteratively, the ANN is able to predict the correct outcome for most of the input images. This process of adjusting weights through backpropagation is called model training. 09:48 Do you have an idea for a new course or learning opportunity? We’d love to hear it! Visit the Oracle University Learning Community and share your thoughts with us on the Idea Incubator. Your suggestion could find a place in future development projects! Visit mylearn.oracle.com to get started. 10:09 Nikita: Welcome back! Let’s move on to CNN. Hemant, what is a Convolutional Neural Network? Hemant: A CNN is a type of deep learning model specifically designed for processing and analyzing grid-like data, such as images and videos. In an ANN, the input image is converted to a single-dimensional array and given as an input to the network. But that does not work well with image data, because image data is inherently two-dimensional, and a CNN works better with two-dimensional data. The role of the CNN is to reduce the image into a form that is easier to process, without losing features that are critical for getting a good prediction. 10:53 Lois: A CNN has different layers, right? Could you tell us a bit about them? Hemant: The first one is the input layer. The input layer is followed by feature extraction layers, which are a combination and repetition of multiple layers, including a convolutional layer with ReLU activation and a pooling layer. And this is followed by a classification layer. These are the fully connected output layers, where the classification occurs as output classes. The feature extraction layers play a vital role in image classification. 11:33 Nikita: Can you explain these layers with an example? Hemant: Let us say we have a robot to inspect a house and tell us what type of house it is. It uses many tools for this purpose. The first tool is a blueprint detector. It scans different parts of the house, like walls, floors, or windows, and looks for specific patterns or features. The second tool is a pattern highlighter. This tool marks the areas detected by the blueprint detector. The next tool is a summarizer.
It tries to capture the most significant features of every room. The next tool is a house expert, which looks at all the highlighted patterns and features and tries to understand the house. The next tool is a guess maker. It assigns probabilities to the different possible house types. And finally, the quality checker randomly checks different parts of the analysis to make sure that the robot doesn't rely too much on any single piece of information. 12:40 Nikita: Ok, so how are you mapping these to the feature extraction layers? Hemant: Similar to the blueprint detector, we have a convolutional layer. This layer applies convolution operations to the input image using small filters known as kernels. Each filter slides across the input image to detect specific features, such as edges, corners, or textures. Similar to the pattern highlighter, we have an activation function. The activation function allows the network to learn more complex and non-linear relationships in the data. The pooling layer is similar to the room summarizer. Pooling helps reduce the spatial dimensions of the feature maps generated by the convolutional layers. Similar to the house expert, we have a fully connected layer, which is responsible for making final predictions or classifications based on the learned features. The softmax layer converts the output of the last fully connected layer into probability scores. The class with the highest probability is the predicted class. This is similar to the guess maker. And finally, we have the dropout layer. This layer is a regularization technique used to prevent overfitting in the network. This has the same role as that of the quality checker. 14:05 Lois: Do CNNs have any limitations that we need to be aware of? Hemant: Training CNNs on large data sets can be computationally expensive and time-consuming. CNNs are susceptible to overfitting, especially when the training data is limited or imbalanced. CNNs are considered black-box models, making them difficult to interpret. And CNNs can be sensitive to small changes in the input, leading to unstable predictions. 14:33 Nikita: And what are the top applications of CNN? Hemant: One of the most widely used applications of CNNs is image classification. For example, classifying whether an image contains a specific object, say a cat or a dog. CNNs are used for object detection tasks. The goal here is to draw bounding boxes around objects in an image. CNNs can perform pixel-level segmentation, where each pixel in the image is labeled to represent different objects or regions. CNNs are employed for face recognition tasks as well, identifying and verifying individuals based on facial features. CNNs are widely used in medical image analysis, helping with tasks like tumor detection, diagnosis, and classification of various medical conditions. CNNs play an important role in the development of self-driving cars, helping them recognize and understand road traffic signs, pedestrians, and other vehicles. And CNNs are applied in analyzing satellite images and remote sensing data for tasks such as land cover classification and environmental monitoring. 15:50 Nikita: Hemant, let’s talk about sequence models. What are they and what are they used for? Hemant: Sequence models are used to solve problems where the input data is in the form of sequences. The sequences are ordered lists of data points or events. The goal in sequence models is to find patterns and dependencies within the data and make predictions, classifications, or even generate new sequences.
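Hemant's mapping from the robot's tools to CNN layers translates almost line for line into code. Below is a minimal sketch, assuming PyTorch is available; the 28 by 28 grayscale input and the layer sizes are illustrative assumptions, not values from an Oracle course lab.

```python
# A tiny CNN following the layer stack described above:
# convolution + ReLU, pooling, dropout, fully connected, softmax.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),  # kernels slide over the image ("blueprint detector")
            nn.ReLU(),                                  # non-linear activation ("pattern highlighter")
            nn.MaxPool2d(2),                            # pooling shrinks the feature maps ("room summarizer")
        )
        self.dropout = nn.Dropout(0.25)                 # regularization against overfitting ("quality checker")
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)  # fully connected layer ("house expert")

    def forward(self, x):
        x = self.features(x)
        x = self.dropout(torch.flatten(x, 1))
        return torch.softmax(self.classifier(x), dim=1)  # probability scores ("guess maker")

# One 28x28 grayscale image in, probabilities over 10 classes out.
probs = TinyCNN()(torch.randn(1, 1, 28, 28))
print(probs.shape)  # torch.Size([1, 10])
```

Here the convolution keeps the 28 by 28 grid across 8 feature maps, pooling halves it to 14 by 14, and the flattened 8 * 14 * 14 = 1,568 values feed the fully connected classifier.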
16:17 Lois: Can you give us some examples of sequence models? Hemant: Some common examples of sequence models are in natural language processing, where deep learning models are used for tasks such as machine translation, sentiment analysis, or text generation. In speech recognition, deep learning models are used to convert recorded audio into text. In music generation, deep learning models can generate new music or create original compositions. Even sequences of hand gestures are interpreted by deep learning models for applications like sign language recognition. In fields like finance or weather prediction, time series data is used to predict future values. 17:03 Nikita: Which deep learning models can be used to work with sequence data? Hemant: Recurrent Neural Networks, abbreviated as RNNs, are a class of neural network architectures specifically designed to handle sequential data. Unlike traditional feedforward neural networks, RNNs have a feedback loop that allows information to persist across different timesteps. The key feature of RNNs is their ability to maintain an internal state, often referred to as a hidden state or memory, which is updated as the network processes each element in the input sequence. The hidden state is then used as input to the network for the next time step, allowing the model to capture dependencies and patterns in the data that are spread across time. 17:58 Nikita: Are there various types of RNNs? Hemant: There are different types of RNN architectures based on the application. One of them is one-to-one. This is like a feedforward neural network and is not suited for sequential data. A one-to-many model produces multiple output values for one input value. Music generation or sequence generation are some applications using this architecture. A many-to-one model produces one output value after receiving multiple input values. An example is sentiment analysis based on a review. A many-to-many model produces multiple output values for multiple input values. Examples are machine translation and named entity recognition. RNNs do not perform that well when it comes to capturing long-term dependencies. This is due to the vanishing gradients problem, which is overcome by using the LSTM model. 19:07 Lois: Another acronym. What is LSTM, Hemant? Hemant: Long Short-Term Memory, abbreviated as LSTM, works by using a specialized memory cell and gating mechanisms to capture long-term dependencies in sequential data. The key idea behind LSTM is to selectively remember or forget information over time, enabling the model to maintain relevant information over long sequences, which helps overcome the vanishing gradients problem. 19:40 Nikita: Can you take us, step-by-step, through the working of LSTM? Hemant: At each timestep, the LSTM takes an input vector representing the current data point in the sequence. The LSTM also receives the previous hidden state and cell state. These represent what the LSTM has remembered and forgotten up to the current point in the sequence. The core of the LSTM lies in its gating mechanisms, which include three gates: the input gate, the forget gate, and the output gate. These gates are like filters that control the flow of information within the LSTM cell. The input gate decides what new information from the current input should be added to the memory cell. The forget gate determines what information in the current memory cell should be discarded or forgotten.
The output gate regulates how much of the current memory cell should be exposed as the output of the current time step. Using the information from the input gate and forget gate, the LSTM updates its cell state. The LSTM then uses the output gate to produce the current hidden state, which becomes the output of the LSTM for the next time step. 21:12 Lois: Thank you, Hemant, for joining us in this episode of the Oracle University Podcast. I learned so much today. If you want to learn more about deep learning, visit mylearn.oracle.com and search for the Oracle Cloud Infrastructure AI Foundations course. And remember, the AI Foundations course and certification are free. So why not get started now? Nikita: Right, Lois. In our next episode, we will discuss generative AI and large language models. Until then, this is Nikita Abraham… Lois: And Lois Houston signing off! 21:45 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
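Hemant's step-by-step walkthrough of the gates maps directly onto a handful of equations. Here is a minimal sketch of a single LSTM timestep in plain NumPy; the dimensions and random weights are illustrative assumptions, not a trained model.

```python
# One LSTM timestep: input, forget, and output gates controlling the cell state.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # Each gate looks at the current input and the previous hidden state.
    i = sigmoid(W["i"] @ x + U["i"] @ h_prev + b["i"])  # input gate: what to add
    f = sigmoid(W["f"] @ x + U["f"] @ h_prev + b["f"])  # forget gate: what to discard
    o = sigmoid(W["o"] @ x + U["o"] @ h_prev + b["o"])  # output gate: what to expose
    g = np.tanh(W["g"] @ x + U["g"] @ h_prev + b["g"])  # candidate memory content
    c = f * c_prev + i * g                              # update the cell state
    h = o * np.tanh(c)                                  # hidden state for the next timestep
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 4, 3                                      # toy sizes (assumption)
W = {k: rng.normal(size=(n_hid, n_in)) for k in "ifog"}
U = {k: rng.normal(size=(n_hid, n_hid)) for k in "ifog"}
b = {k: np.zeros(n_hid) for k in "ifog"}

h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):                    # a sequence of five input vectors
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```

Repeating this step over a whole sequence, and learning W, U, and b through backpropagation, is what deep learning frameworks do under the hood.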

20 Feb 2024 · 22 min

Everything You Need to Know About the MySQL HeatWave Implementation Associate Certification

What is MySQL HeatWave? How do I get certified in it? Where do I start? Listen to Lois Houston and Nikita Abraham, along with MySQL Developer Advocate Scott Stroz, answer all these questions and more on this week's episode of the Oracle University Podcast. MySQL Document Store: https://oracleuniversitypodcast.libsyn.com/mysql-document-store Oracle MyLearn: https://mylearn.oracle.com/ Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X (formerly Twitter): https://twitter.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, and the OU Studio Team for helping us create this episode. -------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started! 00:26 Nikita: Welcome to the Oracle University Podcast! I’m Nikita Abraham, Principal Technical Editor with Oracle University, and with me is Lois Houston, Director of Innovation Programs. Lois: Hi there! For the last two weeks, we’ve been having really exciting discussions on everything AI. We covered the basics of artificial intelligence and machine learning, and we’re taking a short break from that today to talk about the new MySQL HeatWave Implementation Associate Certification with MySQL Developer Advocate Scott Stroz. 00:59 Nikita: You may remember Scott from an episode last year where he came on to discuss MySQL Document Store. We’ll post the link to that episode in the show notes so you can listen to it if you haven’t already. Lois: Hi Scott! Thanks for joining us again. Before diving into the certification, tell us, what is MySQL HeatWave? 01:19 Scott: Hi Lois, hi Niki. I’m so glad to be back. So, MySQL HeatWave Database Service is a fully managed database that is capable of running transactional and analytic queries in a single database instance. This can be done across data warehouses and data lakes. We get all the benefits of analytic queries without the latency and potential security issues of performing standard extract, transform, and load, or ETL, operations. Some other MySQL HeatWave Database Service features are automated system updates and database backups, high availability, in-database machine learning with AutoML, MySQL Autopilot for managing instance provisioning, and enhanced data security. HeatWave is the only cloud database service running MySQL that is built, managed, and supported by the MySQL Engineering team. 02:14 Lois: And where can I find MySQL HeatWave? Scott: MySQL HeatWave is only available in the cloud. MySQL HeatWave instances can be provisioned in Oracle Cloud Infrastructure (OCI), Amazon Web Services (AWS), and Microsoft Azure. Some features, though, are only available in Oracle Cloud, such as access to MySQL Document Store. 02:36 Nikita: Scott, you said MySQL HeatWave runs transactional and analytic queries in a single instance. Can you elaborate on that? Scott: Sure, Niki. So, MySQL HeatWave allows developers, database administrators, and data analysts to run transactional queries (OLTP) and analytic queries (OLAP). OLTP, or online transaction processing, allows for real-time execution of database transactions. A transaction is any kind of insertion, deletion, update, or query of data. Most DBAs and developers work with this kind of processing in their day-to-day activities.
OLAP, or online analytical processing, is one way to handle multi-dimensional analytical queries typically used for reporting or data analytics. OLTP system data must typically be exported, aggregated, and imported into an OLAP system. This procedure is called ETL, as I mentioned – extract, transform, and load. With large datasets, ETL processes can take a long time to complete, so analytic data could be “old” by the time it is available in an OLAP system. There is also an increased security risk in moving the data to an external source. 03:56 Scott: MySQL HeatWave eliminates the need for time-consuming ETL processes. We can actually get real-time analytics from our data since HeatWave allows for OLTP and OLAP in a single instance. I should note, this also includes analytics from JSON data that may be stored in the database. Another advantage is that applications can use MySQL HeatWave without changing any of the application code. Developers only need to point their applications at the MySQL HeatWave databases. MySQL HeatWave is fully compatible with on-premise MySQL instances, which can allow for a seamless transition to the cloud. And one other thing. When MySQL HeatWave has OLAP features enabled, MySQL can determine what type of query is being executed and route it to either the normal database system or the in-memory database. 04:52 Lois: That’s so cool! And what about the other features you mentioned, Scott? Automated updates and backups, high availability… Scott: Right, Lois. But before that, I want to tell you about the in-memory query accelerator. MySQL HeatWave offers a massively parallel, in-memory hybrid columnar query processing engine. It provides high performance by utilizing algorithms for distributed query processing. And this query processing in MySQL HeatWave is optimized for cloud environments. MySQL HeatWave can be configured to automatically apply system updates, so you will always have the latest and greatest version of MySQL. Then, we have automated backups. By this, I mean MySQL HeatWave can be configured to provide automated backups with point-in-time recovery to ensure data can be restored to a particular date and time. MySQL HeatWave also allows us to define a retention plan for our database backups, that is, how long we keep the backups before they are deleted. High availability with MySQL HeatWave allows for more consistent uptime. When using high availability, MySQL HeatWave instances can be provisioned across multiple availability domains, providing automatic failover for when the primary node becomes unavailable. All availability domains within a region are physically separated from each other to mitigate the possibility of a single point of failure. 06:14 Scott: We also have MySQL Lakehouse. Lakehouse allows for the querying of data stored in object storage in various formats. This can be CSV, Parquet, Avro, or an export format from other database systems. And basically, we point Lakehouse at data stored in Oracle Cloud, and once it’s ingested, the data can be queried just like any other data in a database. Lakehouse supports querying data up to half a petabyte in size using the HeatWave engine. And this allows users to take advantage of HeatWave for non-MySQL workloads. MySQL AutoPilot is a part of MySQL HeatWave and can be used to predict the number of HeatWave nodes a system will need and automatically provision them as part of a cluster. AutoPilot has features that can handle automatic thread pooling and database shape prediction.
A “shape” is one of the many different CPU, memory, and ethernet traffic configurations available for MySQL HeatWave. MySQL HeatWave includes some advanced security features such as asymmetric encryption and automated data masking at query execution. As you can see, there are a lot of features covered under the HeatWave umbrella! 07:31 Did you know that Oracle University offers free courses on Oracle Cloud Infrastructure? You’ll find training on everything from cloud computing, database, and security to artificial intelligence and machine learning, all free to subscribers. So, what are you waiting for? Pick a topic, leverage the Oracle University Learning Community to ask questions, and then sit for your certification. Visit mylearn.oracle.com to get started.  08:02 Nikita: Welcome back! Now coming to the certification, who can actually take this exam, Scott? Scott: The MySQL HeatWave Implementation Associate Certification Exam is designed specifically for administrators and data scientists who want to provision, configure, and manage MySQL HeatWave for transactions, analytics, machine learning, and Lakehouse. 08:22 Nikita: Can someone who’s just graduated, say an engineering graduate interested in data analytics, take this certification? Are there any prerequisites? What are the career prospects for them? Scott: There are no mandatory prerequisites, but anyone who wants to take the exam should have experience with MySQL HeatWave and other aspects of OCI, such as virtual cloud networks and identity and security processes. Also, the learning path on MyLearn will be extremely helpful when preparing for the exam, but you are not required to complete the learning path before registering for the exam. The exam focuses more on getting MySQL HeatWave running (and keeping it running) than accessing the data. That doesn’t mean it is not helpful for someone interested in data analytics. I think it can be helpful for data analysts to understand how the system providing the data functions, even if it is at just a high level. It is also possible that data analysts might be responsible for setting up their own systems and importing and managing their own data. 09:23 Lois: And how do I get started if I want to get certified on MySQL HeatWave? Scott: So, you’ll first need to go to mylearn.oracle.com and look for the “Become a MySQL HeatWave Implementation Associate” learning path. The learning path consists of over 10 hours of training across 8 different courses.  These courses include “Getting Started with MySQL HeatWave Database Service,” which offers an introduction to some Oracle Cloud functionality such as security and networking, as well as showing one way to connect to a MySQL HeatWave instance. Another course demonstrates how to configure MySQL instances and copy that configuration to other instances. Other courses cover how to migrate data into MySQL HeatWave, set up and manage high availability, and configure HeatWave for OLAP. You’ll find labs where you can perform hands-on activities, student and activity guides, and skill checks to test yourself along the way. And there’s also the option to Ask the Instructor if you have any questions you need answers to. You can also access the Oracle University Learning Community and discuss topics with others on the same journey. The learning path includes a practice exam to check your readiness to pass the certification exam. 
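Since Scott stresses hands-on experience, it is worth seeing how little client code is involved once an instance is provisioned. Below is a minimal sketch, assuming the mysql-connector-python package and a reachable HeatWave DB system; the host, credentials, and the orders table are placeholders invented for illustration, not values from the episode. Both statements travel over the same connection, and, as Scott notes, HeatWave routes each one to the appropriate engine.

```python
# Connect once; run a transactional write (OLTP) and an analytic read (OLAP).
import mysql.connector

conn = mysql.connector.connect(
    host="10.0.1.5",              # private IP of the DB system (placeholder)
    user="admin",                 # placeholder credentials
    password="example-password",
    database="sales",             # placeholder schema
)
cur = conn.cursor()

# OLTP: a routine transactional insert.
cur.execute("INSERT INTO orders (customer_id, amount) VALUES (%s, %s)", (42, 19.99))
conn.commit()

# OLAP: an analytic aggregate over the same data, with no ETL step in between.
cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
for customer_id, total in cur.fetchall():
    print(customer_id, total)

cur.close()
conn.close()
```

The same application code works against an on-premise MySQL server, which is the compatibility point Scott makes about moving to the cloud without changing applications.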
10:33 Lois: Yeah, and remember, access to the entire learning path is free, so there’s nothing stopping you from getting started right away. Now Scott, what does the certification test you on? Scott: The MySQL HeatWave Implementation exam, which is an associate-level exam, covers various topics. It will validate your ability to identify key features and benefits of MySQL HeatWave and describe the MySQL HeatWave architecture; identify Virtual Cloud Network (VCN) requirements and the different methods of connecting to a MySQL HeatWave instance; manage the automatic backup process and restore database systems from these backups; configure and manage read replicas and inbound replication channels; import data into MySQL HeatWave; and configure and manage high availability and clustering of MySQL HeatWave instances. I know this seems like a lot of different topics. That is why we recommend anyone interested in the exam follow the learning path. It will help make sure you have the exposure to all the topics that are covered by the exam. 11:35 Lois: Tell us more about the certification process itself. Scott: While the courses we already talked about are valuable when preparing for the exam, nothing is better than hands-on experience. We recommend that candidates have hands-on experience with MySQL HeatWave in real-world implementations. The format of the exam is multiple choice. It is 90 minutes long and consists of 65 questions. When you’ve taken the recommended training and feel ready to take the certification exam, you need to purchase the exam and register for it. You go through the section on things to do before the exam and the exam policies, and then all that’s left to do is schedule the date and time of the exam, whenever is convenient for you. 12:16 Nikita: And once you’ve finished the exam? Scott: When you’re done, your score will be displayed on the screen. You will also receive an email indicating whether you passed or failed. You can view your exam results and full score report in Oracle CertView, Oracle’s certification portal. From CertView, you can download and print your eCertificate and even share your newly earned badge on places like Facebook, Twitter, and LinkedIn. 12:38 Lois: And for how long does the certification remain valid, Scott? Scott: There is no expiration date for the exam, so the certification will remain valid for as long as the material that is covered remains relevant. 12:49 Nikita: What’s the next step for me after I get this certification? What other training can I take? Scott: So, because this exam is an associate-level exam, it is kind of a stepping stone along a person’s MySQL training. I do not know if there are plans for a professional-level exam for HeatWave, but Oracle University has several other training programs that are MySQL-specific. There are learning paths to help prepare for the MySQL Database Administrator and MySQL Database Developer exams. As with the HeatWave learning paths, the learning paths for these exams include video tutorials, hands-on activities, skill checks, and practice exams. 13:27 Lois: I think you’ve told us everything we need to know about this certification, Scott. Are there any parting words you might have? Scott: We know that the whole process of training and getting certified may seem daunting, but we’ve really tried to simplify things for you with the “Become a MySQL HeatWave Implementation Associate” learning path.
It not only prepares you for the exam but also gives you experience with features of MySQL HeatWave that will surely be valuable in your career. 13:51 Lois: Thanks so much, Scott, for joining us today. Nikita: Yeah, we’ve had a great time with you. Scott: Thanks for having me. Lois: Next week, we’ll get back to our focus on AI with a discussion on deep learning. Until then, this is Lois Houston… Nikita: And Nikita Abraham, signing off. 14:07 That’s all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We’d also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.

13 Feb 2024 · 14 min
