Never-down applications with Oracle Maximum Availability Architecture


Hello and welcome to Never-Down Applications with Oracle Maximum Availability Architecture. My name is Markus Michalewicz, and I am Vice President of Product Management for Oracle Database High Availability, Scalability, and Oracle Maximum Availability Architecture. Today I want to talk about how to establish never-down applications with the Oracle Database.

But before I go in medias res, let me ask you: what do you think is the biggest challenge for never-down applications? In my opinion, the biggest challenge for providing never-down applications is application state management. Think about it: in order to ensure never-down applications, the application state needs to be maintained during planned outages and needs to be restored after unplanned outages. Of course, Oracle has a recommendation in this regard: keep your application stateless or state-safe, which means server-side applications should be stateless in that there is no state to be maintained between two API or REST calls, for example. In addition, the recommendation is to keep application state in tables within the database, because that eliminates the need for applications to manage the state, as mentioned, but it also simplifies lifecycle management, high availability, and disaster recovery, which is exactly what we will be talking about in the course of this presentation. Now, of course, we understand that sometimes applications just have to maintain state outside of the database — caches, for example — in which case we recommend that you make sure those applications use a state-safe way of managing state, meaning the state can be reconstituted from database tables. A stateless application also makes more efficient use of database resources: any database connection can be used for any application or user, connections are normally of lighter weight, and you can get by with far fewer of them.

However, when I talk about putting your application state into tables within an Oracle database, I am specifically talking about pluggable databases. As far as the Oracle Database is concerned, pluggable databases were introduced a while ago, originally as database containers for consolidation. That is still a great use case, but they also provide a lot of means for online database lifecycle operations. If you want to know more about multitenant, I have given you a link to more information about Oracle Multitenant, which is basically the description of pluggable databases. It is an option to Oracle Enterprise Edition, but you can use up to three PDBs for free in an Enterprise Edition database, which means pluggable databases are really convenient for simplified database lifecycle management for never-down applications. The reason is that they accommodate the very common steps in a typical database lifecycle circle or flow. The four steps are the creation of the application, then the testing of it, followed by integration testing, and eventually production, and I want to walk you through these four steps very quickly and show you how pluggable databases can help you efficiently manage that lifecycle for never-down applications.

Obviously it starts with the creation: you have an application that you want to develop, you put your application state into the pluggable database, and you develop the database application in that pluggable database. Once you are at a certain state, you want to start testing, and you can easily use rapid deployment with hot cloning to provide as many test environments as you want, either in the same container database — the database that contains the pluggable databases — or in other container databases in your estate. Notice that I said hot cloning, which means without impacting the production environment and without impacting your development environment, which is where we came from. Once you have tested your application long enough, you can move into the integration testing stage. A lot of our customers, when they do this, want to test not only on test or development data; they really want the data from a designated or even the production system. For that reason you can create a pluggable database with a refresh, or incremental refresh, option: instead of just hot cloning it and having a point-in-time copy, you can make it so that the pluggable database is refreshed, or incrementally refreshed, on a regular basis, so you always get the latest data from the source system from which you copied the pluggable database and can run your integration tests on very fresh, state-of-the-art data. Once that testing is sufficient, you probably want to move your application into production at some point, and you can either do this yet again by cloning, or you can perform a relocation with no downtime, which is something some of our customers do: they go from integration into production without downtime. Then you have the application in production, and eventually the circle closes because an application upgrade or a database upgrade is required. Staying with the database upgrade: you can easily unplug and plug the pluggable database into a new container database of the higher database version — the version you would like your pluggable database to assume — and then you have a new version, and as you continue developing applications on that pluggable database the cycle repeats itself. For that reason, pluggable databases provide very simplified database lifecycle management for never-down applications, because they help a lot with the operations you need to perform in the course of this cycle.
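To make these four steps a bit more concrete, here is a minimal SQL sketch of the lifecycle operations just described — hot cloning, refreshable clones, online relocation, and unplug/plug. All PDB names, database links, and file paths are hypothetical placeholders, and hot cloning and refreshable clones assume the source CDB uses local undo and runs in ARCHIVELOG mode.

    -- 1. Test: hot-clone the development PDB while the source stays open read-write.
    CREATE PLUGGABLE DATABASE pdb_test FROM pdb_dev@dev_cdb_link;
    ALTER PLUGGABLE DATABASE pdb_test OPEN;

    -- 2. Integration: create a refreshable clone of production that is
    --    incrementally refreshed from the source every 60 minutes.
    CREATE PLUGGABLE DATABASE pdb_int FROM pdb_prod@prod_cdb_link
      REFRESH MODE EVERY 60 MINUTES;

    -- 3. Production: relocate a PDB into another CDB with minimal downtime.
    CREATE PLUGGABLE DATABASE pdb_prod FROM pdb_int@int_cdb_link RELOCATE;
    ALTER PLUGGABLE DATABASE pdb_prod OPEN;

    -- 4. Upgrade: unplug from the old CDB and plug into a higher-version CDB
    --    (followed by the usual PDB upgrade step in the new CDB).
    ALTER PLUGGABLE DATABASE pdb_prod CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE pdb_prod UNPLUG INTO '/u01/app/oracle/pdb_prod.xml';
    -- ... then, connected to the new, higher-version CDB:
    CREATE PLUGGABLE DATABASE pdb_prod USING '/u01/app/oracle/pdb_prod.xml' NOCOPY;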
Now, the other question is: this is all great, but how do I protect my database from failures? What if I have a production system that really doesn't allow me to take down the database for any sort of planned or unplanned maintenance or failure? The answer to that question is Oracle's Maximum Availability Architecture, in short MAA. MAA provides a standardized set of reference architectures for never-down deployments. When I talk about standardized reference architectures, I am referring to the left side of this picture, where I show four levels: Bronze, Silver, Gold, and Platinum. As you go up the level of metal, so to speak, you increase the level of protection. The assumption here is that MAA gives you very clear definitions of database configurations that deliver a certain SLA, and these SLAs are leveled Bronze, Silver, Gold, and Platinum. The other thing MAA does is illustrated in the middle of the slide, where we show you the typical setup and what MAA does to ensure maximum availability.
MAA is a long-standing project around the Oracle database — it has been around for over 20 years — and over that period it has taken customer insights and expert recommendations, which we then use to continuously validate and improve new features and configurations, and to provide you with MAA feature configurations as well as operational best practices. The typical setup around which we provide those configurations and best practices is shown here: you have a production site, and then ideally you have a replicated site, which means you are already at the Silver or Gold level. The Silver level assumes you have a production site, you take backups on a regular basis, and you probably have a cluster; in my picture this cluster is represented by the Exadata engineered system, symbolized by the X. If you then want to protect the database against disasters, you would replicate your data across distance to a replicated site, which is also symbolized here, and with that you enter the Gold standard already.

MAA is very flexible. You don't need to use engineered systems — I have just shown them in this picture. MAA can be deployed on generic systems, on engineered systems, and of course our cloud deployments — Base Database Service, Exadata Database Service, or Exadata Cloud@Customer — benefit from MAA as well: in the cloud, MAA is part of the operations and the architecture; in engineered systems it is part of the architecture; and in the Autonomous Database, which is the highest level of integration, it is autonomously integrated for you. By the way, if you want to move from, for example, generic systems into the cloud, onto Exadata Database Service or Exadata Cloud@Customer, or into the Autonomous Database, we even use MAA as part of Zero Downtime Migration (ZDM), so you can move from one deployment to the other while maintaining the availability of the database you want to migrate.

For that reason MAA continuously improves, and that is shown on the right side of this picture. We have been covering a lot of functional areas with MAA. We started out talking about availability and scale-out, for example, and those are the two bottom boxes you can see here. We still focus on scale-out and lifecycle with MAA; typical features and products you would use in this context are RAC, FPP, and sharding — FPP stands for Fleet Patching and Provisioning. And of course we use active replication — I already mentioned the replication part; in terms of products, Active Data Guard would be one of the solutions to be named here, the other one is GoldenGate. There are more solutions, for example for data protection: we have Flashback, which helps you recover from logical mistakes, and we have RMAN, which is basically the main tool to take backups and provide recovery sets, plus the advanced functionality of the Zero Data Loss Recovery Appliance (ZDLRA). Last but not least, a stronger focus over recent years has been continuous availability, and in this area we use technologies such as Application Continuity, online redefinition, and edition-based redefinition to provide continuous availability for your applications. So MAA has really become more than just a database protection mechanism; it focuses on providing never-down deployments.

For that matter, I would like to focus on the hidden gems of Oracle MAA today — Oracle MAA components that really help you achieve never-down applications. The three I have chosen are Application Continuity, in the context of continuous availability; active replication, with Active Data Guard as one of the products in this area; and sharding, for the purpose of talking about scaling out and protecting your data in very massive databases.
Last but not least, I want to talk about a new feature — really a product — on the Oracle Cloud: Full Stack Disaster Recovery. It is a service that brings all of these features together and ensures a full-stack disaster recovery, but we will come to that later.

Now, the first feature listed on that slide was Application Continuity. Application Continuity is basically one solution that comes in two flavors. The base solution is Application Continuity (AC), and it is very flexible; it helps you ensure — hence the name — application continuity for a lot of use cases. There is, however, a second flavor called Transparent Application Continuity (TAC), which aims to do the same, except in a transparent way. There is not a lot of difference between these two solutions: both work for planned maintenance and unplanned outages, and both are available with Oracle RAC and Active Data Guard. The difference is in the last bullet of the picture here, on either side: Application Continuity, while very flexible and configurable, requires an Oracle connection pool, whereas Transparent Application Continuity does not, and that is the reason why TAC is actually the default on Oracle Autonomous Database. Autonomous Database can be used with a lot of applications, and we don't always know whether the application uses an Oracle connection pool, so we use Transparent Application Continuity as the default whenever someone connects to an Autonomous Database.

So what does Transparent Application Continuity do in that context? Well, it hides database server failures from your application — more precisely, it hides database downtime from your users. As the picture on the right shows, it sits right between the database and your application and ensures that if there is an outage scenario on the database, your application, and hence your users, are not affected and do not see the outage. In more technical terms, TAC rebuilds the session state and replays in-flight transactions as part of an automated session failover. Should a failure occur on the database, your session is automatically reconnected to another instance — in my case here, another instance of the database cluster — the session state is reinstated thanks to TAC and functionality on the database side, and even transactions that had started but not yet committed, hence were in flight at the moment of failure, are replayed on the destination instance and hopefully come to a successful end for your users. For that matter, TAC eliminates errors unless they are unrecoverable — there are certain errors we cannot recover, whether during planned maintenance or unplanned outages — but don't worry about it: there is a tool, which I'll show you in a moment, that tells you which errors will be recovered and how, so you don't have to worry about the protection level; you can easily determine it. If everything works out, TAC ensures that there are no exceptions during outages for your application; without TAC, however, there may be exceptions, and I'm sure you will be familiar with those.

Underneath the covers we use a technology called Fast Application Notification (FAN), together with draining, to achieve this behavior. FAN in this context is a solution that notifies clients of database status changes: instead of waiting for the client to determine that there is an outage on the database layer, we inform the client — that can be either the driver or the connection pool; for TAC at least, we inform the client — about changes such as failures on the database layer, and then the client, together with the database, does what I just described: it goes to another instance, reinstates the session, and replays the transactions that were in flight. Draining, on the other hand, is a feature that helps with planned maintenance: it lets sessions complete their work on a given instance in order to prepare the node or the database for maintenance. You do not want to perform maintenance on an instance or node while it is under full workload; you really want to drain it down, and then move the remaining workload — if it didn't drain in time — to another instance, so you can conveniently patch the node or instance that requires maintenance. TAC is best used with Oracle Database 19c, including the 19c drivers, and if you can, make use of a connection pool — we highly recommend that, even though for Transparent Application Continuity it isn't required; for Application Continuity it is.
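The replay behavior is controlled by attributes on the database service your application connects to. As a minimal, hedged sketch — assuming a PDB-level service managed with the DBMS_SERVICE package and a hypothetical service name; on a clustered database you would typically set the equivalent options with srvctl instead — the TAC-relevant attributes could be set roughly like this:

    DECLARE
      params dbms_service.svc_parameter_array;
    BEGIN
      params('FAILOVER_TYPE')    := 'AUTO';  -- AUTO enables TAC; TRANSACTION would enable AC
      params('COMMIT_OUTCOME')   := 'true';  -- preserve the commit outcome across a failure
      params('FAILOVER_RESTORE') := 'AUTO';  -- restore session state automatically on failover
      params('DRAIN_TIMEOUT')    := '300';   -- seconds allowed for draining during planned maintenance
      dbms_service.modify_service('app_tac_svc', params);
    END;
    /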
Whether you need to go to Application Continuity you can easily determine with ACCHK, a tool that helps you determine the protection provided by TAC. It also tells you the protection level provided by AC, but for the purpose of this session I will focus on what it does for TAC. As a side effect, ACCHK also catches applications that use coding practices which may prevent safe replay. The ACCHK report, which I am showing here in the screenshot, is available via the DBMS_APP_CONT_REPORT package; it has been available since Oracle Database 19c — I think 19.11 is the minimum RU. If you use 19c, you can easily run ACCHK either together with the application you want to assess, or after a dedicated or production run of that application. ACCHK then goes to data stored inherently in the database — statistics and metadata — gathers information about the transactions performed during that application run, lists the application transactions it has found, and tells you exactly what the result would be: whether it would replay a transaction, which means you are covered and there shouldn't be any errors even if there is an outage. It will also tell you, however, when it cannot replay a transaction — that is then an unrecoverable error — and in that case it tells you, by means of the error code, why a replay of that particular transaction would not be possible. The error code is basically the disabled reason, for example ORA-41429, which means a side effect was detected. That makes perfect sense if you think about it: Transparent Application Continuity can really only assess the transactions that it sees. Some transactions may be what we call autonomous transactions, for example — a transaction embedded in another transaction — and that is something TAC cannot replay, because we have no control over the embedded transaction. Another very common example of a transaction that would be disabled for replay, so would not be automatically recovered, is one that uses external callouts, for example based on a package such as UTL_HTTP, which allows you to reach out to external sources: file systems, or URLs of web services. If you make the result of that external call part of a transaction, then that transaction is not recoverable by TAC, because we cannot control the success of the external callout.
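To illustrate the idea, here is a purely hypothetical PL/SQL block of the kind ACCHK would flag as not safely replayable: the external HTTP callout is a side effect outside the database's control, and the subsequent DML depends on its result (the table, URL, and values are made up for the example).

    DECLARE
      v_rate VARCHAR2(100);
    BEGIN
      -- External callout to a (hypothetical) web service: a side effect that is not
      -- guaranteed to return the same result if the transaction is replayed.
      v_rate := UTL_HTTP.REQUEST('http://example.com/exchange-rate?ccy=EUR');

      -- DML that depends on the callout result; replaying it would repeat the callout.
      UPDATE orders
         SET amount_eur = amount_usd * TO_NUMBER(v_rate)
       WHERE order_id = 4711;
      COMMIT;
    END;
    /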
Now, if you really want that case to be handled, and if you want to give us instructions on how it should be handled, then AC — Application Continuity — is the solution of your choice, and that is the reason why we have two flavors: Application Continuity itself and Transparent Application Continuity. That was a lot of theory, so let's not go into further details; instead, let's look at a demo of how Transparent Application Continuity works. I have asked my colleague Sinan to prepare a demo for us, so please, Sinan, take it from here.

In this demo we use the windows on the right side of the screen to manage the services needed for the applications to connect to the database. We create the first service with Transparent Application Continuity enabled; this will be used by the application on the top left side. The service runs on both instances of a two-node RAC database. The second database service, without Transparent Application Continuity enabled, will be used by the application on the bottom left side; this service also runs on both database instances. On the bottom right side we have a single user using the recommended connection string and the Transparent Application Continuity enabled service. As it also uses the SCAN listener and the service runs on both instances, the connection can be established to either instance; in this case the connection goes to instance number one — please keep this number in mind for later. The user buys a movie but leaves for lunch without committing the transaction. In the middle window on the right we query the total number of sessions connected to the database; at this moment it is only that one connection to instance one. Now the application using the Transparent Application Continuity enabled service starts, using a connection pool: 50 users log in, spread across both database instances, and start executing transactions. On the screen we also see the number of transactions per second over time. The application using the service without Transparent Application Continuity also starts, using a connection pool; here again 50 users log in, spread across both database instances, and start executing transactions. The events tab on the left side of each application shows information and messages coming from the database; at this moment it shows that all users are logged in. Now it is time to patch node number one, so we need to stop the database services running on that node to free it up for maintenance. When we stop the Transparent Application Continuity enabled service on node 1, the application using this service doesn't get any error messages and is not interrupted in any way. However, when we stop the service without Transparent Application Continuity enabled, end users are interrupted and errors appear on the application side. Instance 2 continues serving both applications; we also see that the number of transactions per second stays similar to what it was before, while stopping the service was completely transparent for the application using Transparent Application Continuity. Finally, the single user comes back from lunch; even though the service on instance 1 has been stopped in the meantime, the commit completes successfully and transparently to the user — now connected to instance 2 — without any interruption, thanks to Transparent Application Continuity.
As you can see, Transparent Application Continuity helps you hide database failures and outages, planned or unplanned, from your application.

Another hidden gem that I mentioned in the beginning is Active Data Guard. Oracle Active Data Guard is a solution many of you will know, because it has been in the Oracle database for the longest time for the purpose of providing zero-data-loss disaster recovery across any distance, and as with the previous solution, Application Continuity, it comes in two flavors if you will: Data Guard, which is part of Oracle Database Enterprise Edition, and Active Data Guard, which is an option to Enterprise Edition. Both make protection against site outages simple. Data Guard in itself prevents data loss and downtime by maintaining one or more replicated databases using in-memory replication. That is a huge benefit of Data Guard: we don't use storage replication, we perform the replication over the network, in memory, in that a parallelized process applies redo from the primary database — here shown on the left side of the picture — on the standby database — here shown on the right side of the picture, called the active standby — ensuring read consistency on and across both sites. So you get an ACID-compliant solution, and you don't have to worry whether you replicate over short or long distances.

And that is perhaps one of the differences between Data Guard and Active Data Guard, because Active Data Guard provides a better return on investment from the standby database through, for example, backups and read-mostly workload offloading. It provides a block-identical copy of the database, so you can easily use the active standby to take backups that could later be applied to the primary, should that need ever arise. But more importantly, the active standby database in an Active Data Guard environment is open for read-mostly access. Read-mostly means you can run a report against the active standby, but you can also run certain updates: ever since DML redirection — Data Manipulation Language redirection — was introduced in one of the later Oracle Database versions, that feature allows you to perform certain updates on the standby, which are redirected to the primary and then flow via redo back into the standby. It is a very good feature for occasional updates. Let's assume your application has a reporting requirement every quarter, but at the end of that report you need to write an update back to the data — you can now do this entirely on the active standby database.
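As a small, hedged illustration of how that looks in practice (the table names are made up; DML redirection requires Oracle Database 19c or later), a session on the Active Data Guard standby might do something like this:

    -- Enable DML redirection for this session on the Active Data Guard standby
    -- (it can also be enabled database-wide with the ADG_REDIRECT_DML parameter).
    ALTER SESSION ENABLE ADG_REDIRECT_DML;

    -- The read-mostly reporting workload runs locally on the standby ...
    SELECT region, SUM(amount) AS total_amount
      FROM sales
     GROUP BY region;

    -- ... while the occasional update is transparently redirected to the primary
    -- and flows back to the standby via redo.
    UPDATE report_runs
       SET status = 'PUBLISHED'
     WHERE run_id = 42;
    COMMIT;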
The second feature in the list for Active Data Guard is automated block corruption identification and repair, and that is exactly what it does. Active Data Guard not only identifies a corruption — you probably wouldn't need Data Guard for that, because your application will likely tell you if there is a corruption in the database, which likely wasn't even caused by the database but came in while data files were at rest on storage, for example — it also repairs the corruption automatically for you, and how that looks we will see in a demo later on. The other feature Active Data Guard provides is zero data loss across any distance, and for that we use a component called far sync, which stores redo closer to the primary in a synchronous fashion: you ship redo synchronously to the far sync instance, and beyond the far sync instance you can span longer distances using asynchronous replication. If you want to know more about far sync, please reach out to me later and we will provide you with the respective information. Last but not least, rolling database upgrades for planned maintenance are also possible with Active Data Guard, making it a more complete solution for planned maintenance operations as well.

Now, I spoke about pluggable databases earlier in this presentation, and one of the things I want to point out is that Data Guard and Active Data Guard have historically operated on the database as a whole. So when pluggable databases, in the course of multitenant, were introduced with the Oracle database, the question arose: what happens if I have pluggable databases? That is exactly why we introduced a new feature which allows you to manage Data Guard on a per-workload basis, and that feature is called per-PDB Data Guard. Unlike Data Guard and Active Data Guard as previously described, per-PDB Data Guard does not work on the container database itself but on the pluggable databases within the container database, which means you have two container databases (CDBs) that are both actively running workload, as illustrated here on the right of my picture: a primary CDB on the left and a primary CDB on the right. Both are primaries because the PDBs within each of these container databases are the level on which per-PDB Data Guard operates. In other words, per-PDB Data Guard provides disaster protection at the PDB level, and I have tried to illustrate this here: we have three PDBs in each primary CDB — this doesn't need to be symmetrical; you can have asymmetrical setups — with unprotected PDBs on the outer ends, the light blue and light green ones, and then two PDBs, the gray and the red one, which are source and target PDBs, just with the direction inverted: the red one goes from the left primary to the right, and the gray one goes from the right primary to the left, symbolizing the source PDB and target PDB in each case. That disaster protection at PDB level is provided with real-time apply, so there is no downgrade compared with the established container-level Data Guard and Active Data Guard solution. It also means you don't need to fail over the full container should a failure require a failover; the role transition can be performed on a single PDB, too. If you want to switch the role over from one primary to the other, you can do this with the Data Guard broker, and it does not affect any other PDB in the container. We also provide features such as automatic gap fetching from the source: should there ever be a gap, we will not conclude either the failover or the switchover before that gap has been closed, so your destination PDB is always up to date. So Active Data Guard and per-PDB Data Guard really help enhance the MAA architecture and the solutions within it.

One of the solutions I mentioned earlier is sharding. Sharding is one of the features that became part of MAA only later — it has been part of the Oracle database since, I believe, Oracle Database 12c — and it is a solution that allows you to divide and conquer your databases, especially when those databases are large, very large actually, because sharding is a globally distributed, or distributable, massively scaling database architecture. Some of our customers have told us that, because of their applications, they prefer to scale globally and want to divide massive databases into farms of smaller databases. These smaller databases are known as shards in the context of sharding.
The way you would establish that is fairly simple. Assume you have a typical table family for a consumer application — customers, orders, line items — as you can see in the picture on the right; in these tables you have the customer information, the orders, and the line items. What you do when you use sharding is partition and distribute that data so that it is not residing in only one table but spread out across different tables in different databases — in shards — and hence these tables are called sharded tables. The benefit is very simple: you can spread out data so that it is closer to its users. For example, if you have customers in certain geographical regions, you may want to keep their data close to them; another way of dividing it could be customers buying different products, so you divide up the data by product. If you do so, then the other information — the orders those customers place and the line items — needs to be aligned accordingly. On the other hand, what customers order are products, so the products table — the reference table for the products these customers order — does not need to be spread out. On the contrary, you don't want it to be spread out; you want to make sure each database has a copy, so that access to any reference table is very fast: it is in the same database and doesn't need to be requested from a remote database. Those tables you duplicate in sharding, so they are duplicated into each shard across the three shards I have shown you, and that is how the picture concludes.

If you do this, what you basically, indirectly, do is avoid the scalability and availability issues of very large databases, because you can break the data down into much smaller databases and have many of them. In addition — and this is where MAA provides a holistic approach — you can replicate shards with Data Guard. You can also use native SQL: you don't have to change the SQL that queries the data. The only difference is that you want your application to provide a sharding key, which allows us to redirect your SQL to up to a thousand shards. When your application connects to this sharded database, you still maintain a logical view of the data, which we support with cross-shard queries, but you also want your application to supply a sharding key with each query so that we can route the request to the database that contains the data the query wants to access. That data is identified by the sharding key, which is also the key you used to spread out the data — geographically distributed customers were the example, or customers ordering different kinds of products. And don't worry if you ever need to change anything: let's assume you get more customers in new regions, or more products, and you want to reshuffle the setup of your sharded database — you can do so online, because we allow online addition and reorganization of shards.
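As a minimal sketch of what such a layout could look like in SQL — assuming a sharded database (shard catalog and shards) is already configured and the DDL is issued through the catalog; table names, columns, and the tablespace set are hypothetical — a sharded customers table and a duplicated products reference table might be created like this:

    -- Sharded table: rows are distributed across shards by consistent hash
    -- on the sharding key (cust_id).
    CREATE SHARDED TABLE customers (
      cust_id  NUMBER        NOT NULL,
      name     VARCHAR2(100),
      region   VARCHAR2(30),
      CONSTRAINT customers_pk PRIMARY KEY (cust_id)
    )
    PARTITION BY CONSISTENT HASH (cust_id)
    PARTITIONS AUTO
    TABLESPACE SET ts_shard_set;

    -- Duplicated table: every shard keeps a full copy of the reference data,
    -- so lookups never need to leave the local shard.
    CREATE DUPLICATED TABLE products (
      product_id NUMBER        PRIMARY KEY,
      name       VARCHAR2(100),
      list_price NUMBER
    );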
With that it should be clear that sharding is really a solution for linear scalability and extreme availability: linear scalability, because you can add shards online to increase database size and throughput, which essentially establishes online elasticity; extreme availability, because it is a shared-nothing hardware architecture, which means a failure of one shard does not affect the other shards. So if you have ten equally distributed shards in your sharded database and you lose one, you logically lose only a tenth of the data. Of course, that may not be acceptable to many of our customers, which is why you can protect each shard with Data Guard or GoldenGate. Last but not least, you can spread out your data across geographic regions — or more precisely, you can use geographic distribution — based on user-defined data placement, for performance, for availability, or to meet regulatory requirements. The latter has become very interesting to a lot of our financial services customers who need to ensure data sovereignty. They don't need to use sharding for that purpose, but sharding helps establish the same: you can use sharding to place data into different geographic regions across the world — America, Europe, Asia — and ensure that this data resides in the respective region and country, even though you still maintain a logical view and can query across the entirety of your data using Oracle Sharding.

Now, one thing that, despite sharding, is still a very problematic issue is corruption. A corruption cannot be automatically recovered by a shard alone, because the shard itself could be the subject of the corruption. But luckily we have Active Data Guard, and as I mentioned before, Active Data Guard has a feature called corruption identification and automatic repair. If you protect the shard shown here — the one in Europe, for example, which is subject to a corruption — with Active Data Guard, you can use exactly that feature to not only identify the corruption, which as I said typically happens anyway, but to repair it automatically, so this shard would not even appear corrupted to the application using it. Again, this is easier shown than described, so please, Ludovico, show us how corruption identification and automatic repair work with Active Data Guard.

Hi everyone. For this demo I have a window on top with the primary alert log and two windows with the primary and the standby database in an Active Data Guard configuration. The primary contains the table regions with a few records, and the standby database uses Active Data Guard real-time query, which is required for automatic block repair, so I can query the same table and get the same results. Now I corrupt the block on disk containing the data of the table regions on the primary; but because Data Guard does not use storage replication, the block on the standby database is still good. If I query the data on the primary, as you can see in the alert log, the database detects the corruption but fixes it automatically by getting a good copy from the standby database. In a normal situation my select would return an error, but thanks to Active Data Guard automatic block repair it keeps selecting without getting any error.

Thank you, Ludovico. As you can see, sharding and Data Guard — Active Data Guard in particular — make a good combination.
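If you want to see what the database itself has registered, a simple, hedged check is to query the V$DATABASE_BLOCK_CORRUPTION view (populated, for example, by RMAN validation or when a corruption is hit at read time); with Active Data Guard automatic block repair, repaired blocks disappear from this list again:

    -- Corrupt blocks currently known to the database; an empty result means no
    -- registered corruptions (or that automatic block repair already fixed them).
    SELECT file#, block#, blocks, corruption_type
      FROM v$database_block_corruption;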
Speaking of good combinations, the last item I want to talk about today is Full Stack Disaster Recovery. So far we have learned a lot about how MAA can protect your database and which configurations you can use within MAA, but it is not just the database that needs to be considered when it comes to disaster recovery: the application and the infrastructure are equally part of this consideration — at least they should be — and that is the reason why we introduced Oracle Cloud Infrastructure Full Stack Disaster Recovery. OCI Full Stack Disaster Recovery is a fully managed disaster recovery (DR) service providing DR for the entire application stack. It provides orchestrated, single-click DR for infrastructure, applications, and databases — so the whole stack is covered — and it automates DR plan creation, execution, and monitoring. You therefore get unified management with validated and monitored execution of disaster recovery plans through an integrated user interface as well as APIs, which makes it much easier for you, for example, to test and run DR plans before disaster ever hits — hopefully it won't.

Some of the features Full Stack Disaster Recovery provides: business continuity for applications, not just the infrastructure — there is a whole bunch of infrastructure components that need to be considered when it comes to DR, and Full Stack Disaster Recovery considers all of them, but also the database and any application on top. It is also very flexible in that it lets you manage any DR topology; it is serverless, with no — or at least low — maintenance; and you can choose the DR topology for each protection group, meaning for each of your applications including database and infrastructure. A protection group, for example, can manage DR for the infrastructure only, for the infrastructure and the database, or for all three: infrastructure, database, and the applications on top. Again, it provides business continuity through a single pane of glass — you don't have to go anywhere else, just go to the Oracle Cloud — and it is a cloud-only feature as of this moment; we are looking to expand, but right now it is an OCI (Oracle Cloud Infrastructure) feature. Under Migration & Disaster Recovery you will find DR protection groups, and you can use those to create a DR plan for a given set of components. And when I say you can do this, it is a single-click, fully automated deal: you don't have to move around, you can choose from predefined DR plans — which we are working on enhancing — and you can even customize your DR plans. You can use the service's intelligence to automatically build complex DR plans; in other words, if you add components to your DR protection group, Full Stack Disaster Recovery looks at the group and the components within and advises what else needs to be covered to make that DR protection complete, so you wouldn't leave any component behind should a DR case ever occur. Of course, you can customize the DR plans to suit unique requirements: if you know that one of your components needs special tailoring, you can write instructions and embed them into your DR plan, and Full Stack Disaster Recovery will consider those instructions either during pre-checks or during the full execution. Which brings me to the next feature: monitoring and managing live DR operations. Not only can you set up DR plans and protection groups with Full Stack Disaster Recovery, you can also monitor their execution while in progress, and you can pre-check them if you need to. Pre-checks are particularly important, because you don't want to find out that your DR plan doesn't work at the moment you need it — you want to check beforehand and do some dry runs, and that is where the pre-check to validate DR readiness comes in. If you use all eight of these features that I have described, you will be ready for full-stack DR in the Oracle Cloud.
Speaking of ready: I hope today I have shown you how you can provide never-down applications with Oracle Maximum Availability Architecture. The three aspects I want you to take away from this presentation are these. First, you can keep your applications running by storing the application state in database tables and hiding database outages from users with Transparent Application Continuity. Second, you can protect that application state in an Oracle database — particularly in pluggable databases — using Oracle Active Data Guard, per-PDB Data Guard, Oracle Sharding, and the other hidden gems of Oracle Maximum Availability Architecture. Last but not least, you can plan, configure, pre-check, run, and monitor disaster recovery plans for your infrastructure, your Oracle databases, and your applications with Oracle Cloud Infrastructure Full Stack Disaster Recovery, so that you are ready for any disaster that might strike and hopefully never will.

With that said, you can try many of the features I have talked about today for free by going to one of the sources listed on the slide, particularly the developer.oracle.com LiveLabs, which demonstrate a lot of the features I have mentioned. If you still have questions, please feel free to reach out to me; my email is given below. Until then, thank you so much for staying with me, enjoy the rest of the day and the event, and thank you.

