Abstract:
Provided herein are data cloud administration techniques that achieve autonomy by using a rules engine that reacts to a database system event by autonomously submitting an asynchronous job to reconfigure a database. In an embodiment, a rules engine receives an event from a DBMS. Based on the event, the rules engine executes a rule to generate a request that indicates configuration details for a database. The rules engine sends the request to a request broker. The request broker dispatches an asynchronous job based on the request. The asynchronous job configures the database based on the configuration details. Thus, databases in a cloud, data grid, or data center may be administered autonomously (without human intervention) based on dynamic conditions that are foreseen and unforeseen.
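The event-to-job flow above can be sketched in Python as follows; the class names (RulesEngine, RequestBroker), the event shape, and the example rule are illustrative assumptions, not the claimed implementation:

```python
# Hypothetical sketch: a rules engine reacts to a database event by
# generating a configuration request and handing it to a request broker,
# which dispatches it as an asynchronous job.
from concurrent.futures import ThreadPoolExecutor

class RequestBroker:
    """Dispatches each configuration request as an asynchronous job."""
    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=4)

    def submit(self, request):
        # The job runs asynchronously; the caller does not block.
        return self._pool.submit(self._configure, request)

    def _configure(self, request):
        print(f"configuring {request['database']} with {request['details']}")

class RulesEngine:
    def __init__(self, broker):
        self.broker = broker
        self.rules = []  # (predicate, action) pairs

    def add_rule(self, predicate, action):
        self.rules.append((predicate, action))

    def on_event(self, event):
        # React to a database system event by executing matching rules.
        for predicate, action in self.rules:
            if predicate(event):
                self.broker.submit(action(event))

engine = RulesEngine(RequestBroker())
# Example rule: when a tablespace fills up, request more storage.
engine.add_rule(
    lambda e: e["type"] == "TABLESPACE_FULL",
    lambda e: {"database": e["database"], "details": {"grow_gb": 10}},
)
engine.on_event({"type": "TABLESPACE_FULL", "database": "sales_pdb"})
```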
Abstract:
Provided herein are workload management techniques that asynchronously configure pluggable databases within a compute cloud. In an embodiment, the compute cloud receives an administrative request that indicates configuration details for a pluggable database. The compute cloud generates a configuration descriptor that specifies an asynchronous job based on the configuration details of the request. The compute cloud accesses hosting metadata to detect at least one of: a) a current container database that already hosts the pluggable database, b) a target container database that will host the pluggable database, or c) a particular computer that hosts at least one of: the current container database, or the target container database. The compute cloud executes the asynchronous job to configure the pluggable database based on at least one of: the hosting metadata, or the configuration descriptor. Thread pools, lanes, and queues may facilitate load balancing to avoid priority inversion, starvation, and denial of service.
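A minimal sketch of this pipeline follows, assuming a dict-based configuration descriptor, an illustrative hosting_metadata table, and two lanes backed by queues and dedicated worker threads (all names are hypothetical):

```python
# Sketch of the asynchronous configuration pipeline. Separate lanes keep
# cheap jobs from being starved by expensive ones, which also avoids
# priority inversion and denial of service by a flood of slow requests.
import queue, threading

hosting_metadata = {
    # pluggable database -> (current container database, host computer)
    "sales_pdb": ("cdb1", "host-a.example.com"),
}

def make_descriptor(request):
    # Consult hosting metadata to find where the pluggable database lives.
    current_cdb, host = hosting_metadata[request["pdb"]]
    return {"pdb": request["pdb"], "current_cdb": current_cdb,
            "host": host, "details": request["details"]}

lanes = {"fast": queue.Queue(), "slow": queue.Queue()}

def worker(lane):
    while True:
        descriptor = lanes[lane].get()
        print(f"[{lane}] configuring {descriptor['pdb']} on {descriptor['host']}")
        lanes[lane].task_done()

for lane in lanes:
    threading.Thread(target=worker, args=(lane,), daemon=True).start()

request = {"pdb": "sales_pdb", "details": {"open_mode": "READ WRITE"}}
lanes["fast"].put(make_descriptor(request))
lanes["fast"].join()
```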
Abstract:
Embodiments create a clone of a pluggable database (PDB) while the PDB accepts write operations. While the PDB remains in read-write mode, the DBMS copies the data of the PDB and sends the data to a destination location. The DBMS performs data recovery on the PDB clone based on redo entries that record changes made to the source PDB while the DBMS copied the source PDB files. This data recovery makes all changes, to the PDB clone, that occurred to the source PDB during the copy operation. The redo information on which the data recovery is based is foreign to the PDB clone, because the redo entries were recorded for a different PDB. To apply foreign redo information during recovery of the PDB clone, the DBMS managing the PDB clone maintains mapping information that maps PDB source reference information to the corresponding information for the PDB clone.
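The recovery step can be illustrated as below; the redo record layout, the mapping keys, and the Clone class are assumptions made for the sketch:

```python
# Illustrative sketch of recovery from foreign redo: redo entries recorded
# against the source PDB are remapped to the clone's identifiers before
# being replayed.
source_to_clone = {
    # maps source PDB reference info to the clone's equivalents
    "pdb_id": {101: 202},
    "file_id": {5: 9, 6: 10},
}

def apply_foreign_redo(clone, redo_entries):
    for entry in redo_entries:
        if entry["pdb_id"] not in source_to_clone["pdb_id"]:
            continue  # redo for an unrelated PDB; skip it
        # Remap source identifiers to the clone before replaying the change.
        remapped = dict(entry,
                        pdb_id=source_to_clone["pdb_id"][entry["pdb_id"]],
                        file_id=source_to_clone["file_id"][entry["file_id"]])
        clone.replay(remapped)

class Clone:
    def replay(self, entry):
        print(f"replaying change to file {entry['file_id']} "
              f"block {entry['block']} of PDB {entry['pdb_id']}")

# Changes captured while the source PDB files were being copied:
redo = [{"pdb_id": 101, "file_id": 5, "block": 42, "change": b"..."}]
apply_foreign_redo(Clone(), redo)
```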
Abstract:
Techniques herein use rules automation and template pluggable databases to facilitate deployment into container databases. In an embodiment, a system of computers loads rules into a rules engine. Each rule associates a predicate with suitable container databases. The system receives a request to install a target pluggable database. The rules engine detects satisfied rules whose predicates match the request. Based on the suitable container databases of the satisfied rules, the rules engine selects a particular container database. The system installs the target pluggable database into the particular container database. In an embodiment, a system of computers stores a plurality of template pluggable databases in a repository. The repository receives an installation request. Based on the installation request, the system selects a particular template pluggable database. The system installs the particular template pluggable database into a container database.
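Rule matching and selection might look like the following sketch, where each rule pairs a predicate with the container databases it deems suitable; the selection policy (intersecting the satisfied rules and picking deterministically) is an assumption:

```python
# Hedged sketch of predicate-based placement of a pluggable database.
rules = [
    (lambda req: req["region"] == "eu", ["cdb_eu1", "cdb_eu2"]),
    (lambda req: req["size_gb"] > 100, ["cdb_eu2", "cdb_us3"]),
]

def select_container(request):
    # Collect suitable container databases from every satisfied rule.
    candidates = [set(cdbs) for pred, cdbs in rules if pred(request)]
    if not candidates:
        raise ValueError("no rule matches the installation request")
    suitable = set.intersection(*candidates)
    if not suitable:
        raise ValueError("satisfied rules share no container database")
    return sorted(suitable)[0]  # deterministic pick among survivors

request = {"region": "eu", "size_gb": 250, "template": "crm_template"}
target_cdb = select_container(request)
print(f"install template {request['template']} into {target_cdb}")
```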
Abstract:
Techniques are described herein for cloning a database. According to some embodiments, a database server receives a request to clone a source database. In response to receiving the request, the database server retrieves a set of one or more storage credentials for a set of one or more respective storage systems on which a set of files of the source database are stored. The set of storage credentials grant permission to the database server to create snapshot copies on the set of storage systems. The database server generates, for a target database using the set of storage credentials, a snapshot copy of each respective file in the set of files of the source database. The snapshot copy of the respective file points to the same set of one or more data blocks as the respective file until at least one of the data blocks is modified.
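The copy-on-write semantics of a snapshot copy can be modeled as below; this sketch covers only the block-sharing behavior and treats credential handling and the storage-system API as out of scope:

```python
# Sketch of copy-on-write: a snapshot copy shares data blocks with its
# source file until a block is modified, at which point only that block
# is privately copied.
class SnapshotFile:
    def __init__(self, source_blocks):
        self.source = source_blocks   # shared with the source until written
        self.overrides = {}           # privately owned, modified blocks

    def read(self, i):
        return self.overrides.get(i, self.source[i])

    def write(self, i, data):
        self.overrides[i] = data      # copy-on-write: source stays intact

source_blocks = ["blk0", "blk1", "blk2"]
snap = SnapshotFile(source_blocks)
assert snap.read(1) == "blk1"         # shared block, no copy made
snap.write(1, "blk1'")                # first modification triggers the copy
assert snap.read(1) == "blk1'" and source_blocks[1] == "blk1"
```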
Abstract:
A container database stores redo records and logical timestamps for multiple pluggable databases. When it is detected that a first read-write instance of the pluggable database is opened and no other read-write instances of the pluggable database are open, offline range data associated with the pluggable database is updated. When it is detected that a second read-write instance of the pluggable database is closed, and the second read-write instance is the last open read-write instance, the offline range data associated with the pluggable database is updated. The pluggable database is restored to a logical timestamp associated with a restore request based on the offline range data.
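A sketch of the offline-range bookkeeping follows, using integer logical timestamps; the range-list structure and the recovery check are illustrative assumptions:

```python
# Sketch of offline-range maintenance for one pluggable database.
offline_ranges = []     # [(closed_at, reopened_at)] logical timestamps
open_rw_instances = 0
last_close = None

def on_open_read_write(now):
    global open_rw_instances, last_close
    open_rw_instances += 1
    if open_rw_instances == 1 and last_close is not None:
        # First read-write instance after a fully closed period:
        # the PDB was offline from last_close until now.
        offline_ranges.append((last_close, now))

def on_close_read_write(now):
    global open_rw_instances, last_close
    open_rw_instances -= 1
    if open_rw_instances == 0:
        last_close = now  # the last open read-write instance just closed

def needs_redo_from(restore_ts):
    # A restore to a timestamp inside an offline range needs no redo
    # from that range: the PDB could not change while fully closed.
    return not any(a <= restore_ts <= b for a, b in offline_ranges)

on_open_read_write(100); on_close_read_write(200)
on_open_read_write(500)                    # records offline range (200, 500)
print(offline_ranges, needs_redo_from(300))  # [(200, 500)] False
```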
Abstract:
Techniques are provided for synchronizing database system metadata between primary and standby persistent storage systems using an object store. A first persistent storage system is enabled to store first configuration metadata describing the configuration of the first persistent storage system. A first broker process of the first persistent storage system detects receipt, at an object store endpoint, of a new version of an object message sent by a second broker process of a second persistent storage system. The object message specifies a particular value of a configuration attribute of second configuration metadata from the second persistent storage system. In response to detecting receipt of the new version of the object message, the first broker process reads the particular value of the configuration attribute in the object message. The first broker process sets the configuration attribute in the first configuration metadata to the particular value.
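One way to picture the broker exchange, with an in-memory dict standing in for the object store endpoint; the object layout, version check, and attribute names are assumptions:

```python
# Sketch of brokers synchronizing configuration metadata via an object
# store: one broker publishes a new version of an object message, the
# other detects it and applies the attribute value locally.
object_store = {}   # stand-in for a shared object store endpoint

def publish(broker_id, version, attribute, value):
    # The second broker writes a new version of the object message.
    object_store["config_msg"] = {"from": broker_id, "version": version,
                                  attribute: value}

def poll_and_apply(local_config, seen_version):
    msg = object_store.get("config_msg")
    if msg is None or msg["version"] <= seen_version:
        return seen_version           # no new version detected
    # New version detected: read the attribute values and set them locally.
    for key, value in msg.items():
        if key not in ("from", "version"):
            local_config[key] = value
    return msg["version"]

primary_config = {"protection_mode": "MAX_PERFORMANCE"}
publish("standby_broker", 2, "protection_mode", "MAX_AVAILABILITY")
seen = poll_and_apply(primary_config, seen_version=1)
print(primary_config)   # {'protection_mode': 'MAX_AVAILABILITY'}
```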
Abstract:
Techniques described herein automatically check for persistently inactive instances, based on defined metrics, and auto-archive such instances to lower-cost cloud resources. An inactivity time threshold is dynamically adjusted to a longer or shorter time period based on the current load on the limited, more expensive resources, so that inactive instances are archived more or less aggressively. This frees the limited, more expensive resources to run additional active instances and supports more total users.
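For illustration only, a load-adaptive threshold could be computed as below; the linear scaling rule and the bounds are assumptions, not the defined metrics of the technique:

```python
# Hypothetical sketch: higher load on the expensive tier shortens the
# inactivity threshold, archiving inactive instances more aggressively.
MIN_THRESHOLD_H, MAX_THRESHOLD_H = 2, 48

def inactivity_threshold(load):   # load in [0.0, 1.0]
    # Linear interpolation: full load => MIN, idle => MAX (an assumption).
    return MAX_THRESHOLD_H - load * (MAX_THRESHOLD_H - MIN_THRESHOLD_H)

def instances_to_archive(instances, load, now_h):
    threshold = inactivity_threshold(load)
    return [name for name, last_active_h in instances
            if now_h - last_active_h > threshold]

instances = [("dev_pdb", 10.0), ("prod_pdb", 95.0)]  # (name, last active hour)
print(instances_to_archive(instances, load=0.9, now_h=100.0))  # ['dev_pdb']
```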
Abstract:
Techniques are provided for using a sparse file to create a hot archive of a pluggable database of a container database. In an embodiment and while a source pluggable database is in service, a source database server creates a clone of the source pluggable database. Also while the source pluggable database is in service, the source database server creates an archive of the source pluggable database that is based on the clone. Eventually, a need arises to consume the archive. A target database server (which may also be the source database server) creates a target pluggable database based on the archive.
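A rough sketch of the lifecycle; the function names and the sparse-file delta representation are hypothetical illustrations:

```python
# Hypothetical sketch of the three-step lifecycle: hot clone, archive
# from the clone, then later restore. A sparse file is modeled as a
# delta layer over the source files; block I/O is elided.
def hot_clone(source_pdb):
    # Created while the source PDB stays in service (read-write);
    # the sparse file stores only blocks that diverge from the source.
    return {"kind": "clone", "of": source_pdb, "sparse_deltas": {}}

def archive_from_clone(clone):
    # The archive is derived from the clone, not the live source,
    # so the source PDB is never taken out of service.
    return {"kind": "archive", "of": clone["of"],
            "deltas": dict(clone["sparse_deltas"])}

def create_pdb_from_archive(archive, target_name):
    # A target database server (possibly the source server) consumes
    # the archive to materialize a new pluggable database.
    return {"kind": "pdb", "name": target_name, "cloned_from": archive["of"]}

archive = archive_from_clone(hot_clone("sales_pdb"))
print(create_pdb_from_archive(archive, "sales_pdb_restored"))
```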