OCA: Oracle Database 11g Administrator Certified Associate, Part 23

Review Questions

12. To recover a data file from the SYSTEM or UNDO tablespace, the instance must be in which database state?
A. NOMOUNT
B. OPEN
C. ABORT
D. MOUNT

13. The STATUS column of the dynamic performance view V$LOGFILE contains what value if one of the redo log file group members has been lost because of a media failure?
A. INVALID
B. STALE
C. DELETED
D. The column contains a NULL value.

14. Place the following events or actions leading up to and during instance recovery in the correct order.
1. The database is opened and available.
2. Oracle uses undo segments in the undo tablespace to roll back uncommitted transactions.
3. The DBA issues the STARTUP command at the SQL*Plus prompt.
4. Oracle applies the information in the online redo log files to the data files.
A. 4, 3, 2, 1
B. 3, 4, 1, 2
C. 2, 1, 3, 4
D. 2, 1, 4, 3
E. 3, 2, 4, 1
F. 3, 4, 2, 1

15. You noticed that when your instance crashes, it takes a long time to start up the database. Which advisor can be used to tune this situation?
A. The Undo Advisor
B. The SQL Tuning Advisor
C. The Database Tuning Advisor
D. The MTTR Advisor
E. The Instance Tuning Advisor

16. If a data file is missing when the instance is started, where is the error message recorded?
A. Only in the alert log.
B. All missing files are returned directly to the administrator in the SQL*Plus session.
C. The first missing file is returned directly to the administrator in the SQL*Plus session, and the rest of the missing files are identified in V$RECOVER_FILE.
D. Only in the alert log and in the DBWR background-process trace files.

17. In ARCHIVELOG mode, the loss of a data file for any tablespace other than the SYSTEM or UNDO tablespace affects which objects in the database?
A. The loss affects only objects whose extents reside in the lost data file.
B. The loss affects only the objects in the affected tablespace, and work can continue in other tablespaces.
C. The loss will not abort the instance but will prevent other transactions in any tablespace other than SYSTEM or UNDO until the affected tablespace is recovered.
D. The loss affects only those users whose default tablespace contains the lost or damaged data file.

18. Which dynamic performance view shows the data files either needing media recovery or missing at instance startup?
A. V$RECOVER_FILE
B. V$DATAFILE
C. V$TABLESPACE
D. V$RECOVERY_FILE_DEST
E. V$RECOVERY_FILE_STATUS

19. A fire breaks out in the server room near the routers, and the operations manager cuts off power to all servers, including the database servers. Before the fire is put out, the disk drive containing the SYSTEM tablespace and both network cards on the Oracle Database 11g server are destroyed. The user SCOTT was about to create a new table, but the connection was dropped after the power was disconnected from the server. This scenario is primarily an example of what kind of failure?
A. Network
B. Instance
C. Statement
D. Media
E. User error
F. User process

20. Which of the following conditions prevents the instance from progressing through the NOMOUNT, MOUNT, and OPEN states?
A. One of the redo log file groups is missing a member.
B. The instance was previously shut down uncleanly with SHUTDOWN ABORT.
C. Either the spfile or init.ora file is missing.
D. One of the five multiplexed control files is damaged.
E. The USERS tablespace is offline, with one of its data files deleted.
Answers to Review Questions

1. D. The distance (in bytes) between the checkpoint position in a redo log group and the end of the current redo log group can never be more than 90 percent of the size of the smallest redo log group.

2. C. The failure of one statement is considered a statement failure, and one way to solve the problem is to enable resumable-space allocation. When resumable space is enabled, Oracle generates an alert and places the session in a suspended state.

3. C. The parameter FAST_START_MTTR_TARGET specifies the desired time, in seconds, to recover a single instance from a crash or instance failure. The parameters LOG_CHECKPOINT_TIMEOUT and FAST_START_IO_TARGET can still be used in Oracle 11g, but they should be used only in an advanced tuning scenario or for compatibility with older versions of Oracle. MTTR_TARGET_ADVICE and FAST_START_TARGET_MTTR are not valid initialization parameters.

4. D. The PMON process periodically polls server processes to make sure their sessions are still connected.

5. C. A DBA's disconnection of a session is an intentional process termination, not a failure. If a user's PC reboots, the user does not get a chance to log off, and the session is cleaned up by PMON; similarly, disconnecting from the application or SQL*Plus before logging out is considered a user-process failure. A network problem can prematurely disconnect a user session, causing a user-process failure. In all cases, PMON performs the session cleanup, whether the disconnection was intentional or not.

6. A, C. In addition to configuring a backup listener process and installing multiple network cards, you can implement connect-time failover and a backup network connection to reduce the possibility of network failures.

7. B. The instance must be shut down, if it is not already down, to repair or replace the missing or damaged control file.

8. B, C. Media failure, physical corruption, logical corruption, and missing data files can all be identified by the Data Recovery Advisor, which also provides recommendations for repair.

9. B, E. If a tablespace is taken offline because a data file is missing, the instance can still be started as long as the missing data file does not belong to the SYSTEM or UNDO tablespace.

10. A. If a network card fails, the failure type is network; the actual media containing the database files are not affected.

11. B. The Data Recovery Advisor in Oracle 11g Release 1 does not support RAC databases. It is integrated with EM Database Control and with RMAN. CHANGE FAILURE and other commands can be executed using RMAN. The ADVISE FAILURE command must be run before you can perform REPAIR FAILURE.

12. D. Tablespaces other than SYSTEM or UNDO can be recovered with the database in the OPEN state; to recover the SYSTEM or UNDO tablespace, the database must be in the MOUNT state.

13. A. If a redo log file group member has been lost because of a media failure or inadvertent deletion, the STATUS column is set to INVALID when an attempt is made to write redo information to that member.
14. B. Instance recovery, also known as crash recovery, occurs when the DBA attempts to open the database but the files were not synchronized to the same SCN when the database was shut down. Once the DBA issues the STARTUP command, Oracle uses information in the redo log files to restore the data files (including the undo tablespace's data files) to the state before the instance failure. Oracle then uses undo data in the undo tablespace, after the database has been opened and made available to users, to roll back uncommitted transactions.

15. D. The MTTR Advisor can tell the DBA the most effective value for the FAST_START_MTTR_TARGET parameter. This parameter specifies the maximum time required, in seconds, to perform instance recovery.

16. C. In addition to reporting the first missing file to the administrator and listing all the missing files in the dynamic performance view V$RECOVER_FILE, Oracle notes the missing data files in the DBWR background-process trace files.

17. B. The loss of one or more of a tablespace's data files does not prevent other users from doing their work in other tablespaces. Recovery of the affected data files can proceed while the database is still online and available.

18. A. The dynamic performance view V$RECOVER_FILE contains a list of the data files that either need media recovery or are missing when the instance is started.

19. B. The primary failure in this scenario is an instance failure. Subsequently, a network failure will occur when connections are attempted through the burned-out router. However, no connections are possible until the network card in the server is replaced, and the instance cannot start because of a media failure on the disk containing the SYSTEM tablespace.

20. D. All copies of the control files as defined in the spfile or the init.ora file must be identical and available. If one of the redo log file groups is missing a member, a warning is recorded in the alert log, but instance startup still proceeds. If the instance was previously shut down with SHUTDOWN ABORT, instance recovery automatically occurs during startup. Only an spfile or an init.ora file is needed to enter the NOMOUNT state, not both. If a tablespace is offline, the status of its data files is not checked until an attempt is made to bring it online; therefore, it will not prevent instance startup.

Chapter 17: Moving Data and Using EM Tools

ORACLE DATABASE 11g: ADMINISTRATION I EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Moving Data
- Describe and use methods to move data (directory objects, SQL*Loader, external tables)
- Explain the general architecture of Oracle Data Pump
- Use Data Pump Export and Import to move data between Oracle databases

Intelligent Infrastructure Enhancements
- Use the Enterprise Manager Support Workbench
- Managing Patches

As a DBA, you are often required to move data between databases, extract data, or load data received from external sources. Oracle 11g provides tools to move data. You can use these tools to back up data from a table or a schema before making changes, allowing quick recovery. Oracle Data Pump is a high-performance data-movement tool that you can use to unload and load data between Oracle databases, and you can use the SQL*Loader tool to load data received from external sources such as flat files.
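As a quick taste of SQL*Loader before the detailed coverage later in this chapter, the following is a minimal sketch of loading a comma-separated flat file into a table. The file name, table name, and columns are hypothetical; only the control-file keywords are standard SQL*Loader syntax.

-- employees.ctl: minimal SQL*Loader control file (hypothetical names)
LOAD DATA
INFILE 'employees.dat'                -- flat file received from the external source
APPEND INTO TABLE employees           -- add rows to the existing table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(employee_id,
 first_name,
 last_name,
 hire_date DATE 'YYYY-MM-DD')         -- convert the text field to a DATE

The load is then run from the operating-system prompt, with results written to a log file:

$ sqlldr scott/tiger CONTROL=employees.ctl LOG=employees.log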
In this chapter, you will also learn about contacting Oracle Support through the Enterprise Manager Support Workbench. EM Support Workbench is new in Oracle 11g and can be used to examine a database problem and contact Oracle Support for a resolution. EM can also alert you when database patches are ready. You will learn to use EM to stage and apply a patch.

Understanding Data Pump

The Data Pump facility is a high-speed mechanism for transferring data or metadata from one database to another or from operating-system files. Data Pump employs direct path unloading and direct path loading technologies. Unlike the older export and import programs (exp and imp), which operated on the client side of a database session, the Data Pump facility runs on the server. Thus, you must use a database directory to specify dump-file and log-file locations.

You can use Data Pump to copy data from one schema to another between two databases or within a single database. You can also use it to extract a logical copy of the entire database, a list of schemas, a list of tables, or a list of tablespaces to portable operating-system files. Data Pump can also transfer or extract the metadata (DDL statements) for a database, schema, or table.

You can call Data Pump from the command-line programs expdp and impdp or through the DBMS_DATAPUMP PL/SQL package, or you can invoke it from EM.

Data Pump export extracts data and metadata from your database, and Data Pump import loads this extracted data into the same database or into a different database, optionally transforming metadata along the way. These transformations let you, for example, copy tables from one schema to another or remap a tablespace from one database to another.

These are some of the key features of Data Pump:
- Fine-grained object selection using INCLUDE and EXCLUDE options
- An option to specify a lower-compatibility version so only supported object types are exported
- The ability to perform export and import using parallel processes
- The ability to detach from and attach to a job from the client session, allowing the DBA to close the export/import session and still administer the jobs
- An option to change target table names, tablespace names, and schema names
- An option to compress metadata, data, or both during export
- A tablespace metadata export to support the transportable tablespace feature of the database
- An option to append data to an existing table or to truncate and load data into an existing table
- The automatic use of direct path export whenever possible
- The ability to copy data from one database to another using a network
- The ability to specify a sample percentage to unload only a subset of data
- The ability to monitor job progress; job status can be queried from the database or using EM
- An option to restart or terminate failed export and import jobs

Architecture of Data Pump

In Oracle 11g Data Pump, the database does all the work. This is a major deviation from the architecture of the export/import utilities, which ran as clients and did the major part of the work. The dump files for export/import were stored at the client, whereas the Data Pump files are stored at the server. Figure 17.1 shows the Data Pump architecture.
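Because the engine runs inside the server, a Data Pump job can be driven entirely from PL/SQL through the DBMS_DATAPUMP package mentioned earlier. Here is a minimal sketch of a schema-mode export; the job name, schema, and DUMP_DIR directory object are hypothetical, and error handling is omitted.

DECLARE
  h          NUMBER;
  job_state  VARCHAR2(30);
BEGIN
  -- Create a schema-mode export job; this also creates the master table
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT',
                          job_mode  => 'SCHEMA',
                          job_name  => 'SCOTT_EXP_DEMO');

  -- Attach the dump file and the log file through a directory object
  DBMS_DATAPUMP.ADD_FILE(h, 'scott.dmp', 'DUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(h, 'scott.log', 'DUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Limit the job to the SCOTT schema
  DBMS_DATAPUMP.METADATA_FILTER(h, 'SCHEMA_EXPR', 'IN (''SCOTT'')');

  -- Start the job and wait for it to complete
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  DBMS_OUTPUT.PUT_LINE('Job finished with state: ' || job_state);
END;
/

The expdp client makes essentially these same calls on your behalf, which is why a client can disconnect and later reattach without stopping the job.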
Data Pump Components

Data Pump consists of the following components:

Data Pump API: DBMS_DATAPUMP is the PL/SQL API for Data Pump, which is the engine. Data Pump jobs are created and monitored using this API.

Metadata API: The DBMS_METADATA API provides the database object definitions to the Data Pump processes.

Client tools: The Data Pump client tools expdp and impdp use the procedures provided by the DBMS_DATAPUMP package. These tools make calls to the Data Pump API to initiate and monitor Data Pump operations.

Data-movement APIs: Data Pump uses the Direct Path API (DPAPI) to move data. Certain circumstances do not allow the use of the DPAPI; in those cases, the Oracle external table with the ORACLE_DATAPUMP access-driver API is used.

FIGURE 17.1 Data Pump architecture (diagram: the expdp and impdp clients, and other clients such as Enterprise Manager and SQL*Plus, call the DBMS_DATAPUMP data- and metadata-movement engine in the database, which uses the DBMS_METADATA API and either the Direct Path API or the external-table ORACLE_DATAPUMP API)

Data Pump Processes

Oracle Data Pump jobs, once started, are performed by various processes on the database server. The following are the processes involved in a Data Pump operation:

Client process: This process is initiated by the client utility (expdp, impdp, or another client) to make calls to the Data Pump API. Since Data Pump is completely integrated into the database, once the Data Pump job is initiated, this process is not necessary for the progress of the job.

Shadow process: When a client logs in to the Oracle database, a foreground process is created (a standard feature of Oracle). This shadow process services the client's Data Pump API requests. It creates the master table and the Advanced Queuing (AQ) queues used for communication. Once the client process ends, the shadow process goes away too.

Master control process (MCP): The master control process controls the execution of the Data Pump job; there is one MCP per job. The MCP divides the Data Pump job into various metadata and data load or unload tasks and hands them over to the worker processes. The MCP has a process name of the format <ORACLE_SID>_DMnn_<PROCESS_ID>. It maintains the job state, job description, restart information, and file information in the master table.

Worker process: The MCP creates the worker processes based on the value of the PARALLEL parameter. The workers perform the tasks requested by the MCP, mainly loading or unloading data and metadata. The worker processes have names of the format <ORACLE_SID>_DWnn_<PROCESS_ID>. They maintain their current status in the master table, which can be used to restart a failed job.

Parallel query (PQ) processes: The worker processes can initiate parallel-query processes if an external table is used as the data-access method for loading or unloading. These are standard parallel-query slaves of the parallel-execution architecture.

Note: Oracle Data Pump cannot be used to load data into a database from data exported using the exp utility.

Let's consider the example of an export Data Pump operation and see all the activities and processes involved. Say user A invokes the expdp client, which initiates the shadow process.
The client calls the DBMS_DATAPUMP.OPEN procedure to establish the kind of export to be performed. The OPEN call starts the MCP and creates two AQ queues.

The first queue is the status queue, used to send the status of the job, including logging information and errors. Clients interested in the status of the job can query this queue. It is strictly unidirectional: the MCP posts the information to the queue, and the clients consume it.

The second queue is the command-and-control queue, which is used to control the worker processes established by the MCP and to perform API commands and file requests. This is a bidirectional queue where the MCP listens and writes. Commands are sent to this queue by the DBMS_DATAPUMP methods or by using the parameters of the expdp client.

Once all the components (parameters and filters) of the job are defined, the client (expdp) invokes DBMS_DATAPUMP.START_JOB. Based on the number of parallel processes requested, the MCP starts the worker processes. The MCP directs one of the worker processes to do the metadata extraction using the DBMS_METADATA API.

During the operation, a master table is maintained in the schema of the user who initiated the Data Pump export. The master table has the same name as the Data Pump job. This table maintains one row per object with status information. In the event of a failure, Data Pump uses the information in this table to restart the job.

Note: The master table is the heart of every Data Pump operation; it maintains all the information about the job. Data Pump uses the master table to restart a failed or suspended job. The master table is dropped (by default) when the Data Pump job finishes successfully.

The master table is written to the dump-file set as the last step of the export operation and is removed from the user's schema. For an import operation, the master table is loaded from the dump-file set into the user's schema as the first step and is used to sequence the objects being imported.

While the export job is under way, the original client who invoked it can detach from the job without aborting it. This is especially useful when performing long-running data export jobs. Users can attach to the job at any time using the DBMS_DATAPUMP methods, query its status, or change its parallelism.

Note: Since the master table is created in the Data Pump user's schema as a table, the job fails if there is an existing table in the schema with the same name as the Data Pump job. The user must have appropriate privileges to create the table and must have an appropriate tablespace quota.

Data Access Methods

Data Pump chooses the most appropriate data-access method. Two methods are supported: direct path access and external table access. Direct path export has been supported since Oracle 7.3. External tables were introduced in Oracle9i, and support for writing to external tables has been available since Oracle 10g. Data Pump provides an external-table access driver (ORACLE_DATAPUMP) that can be used to read and write files. The file format is the same as that of the direct path method; hence, it is possible to load data with one method that was unloaded with the other. Data Pump uses the direct path method whenever possible.
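Before listing when the external-table method is chosen, here is a hedged sketch of that path: the ORACLE_DATAPUMP access driver unloads query results to a file, and a second external-table definition reads the same file back. The directory object, table names, and columns are hypothetical.

-- Unload: write the query results to a Data Pump format file
CREATE TABLE emp_unload
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY dump_dir
    LOCATION ('emp_unload.dmp'))
  AS SELECT employee_id, last_name, salary FROM employees;

-- Reload elsewhere: present the same file as a read-only external table
CREATE TABLE emp_reload (
    employee_id NUMBER,
    last_name   VARCHAR2(25),
    salary      NUMBER)
  ORGANIZATION EXTERNAL (
    TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY dump_dir
    LOCATION ('emp_unload.dmp'));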
The following are the exceptions; when any of these conditions applies, the external-table method is used:
- A table has fine-grained access control enabled for INSERT or SELECT operations.
- A domain index exists for a LOB column.
- A global index on a multipartition table exists during a single-partition load.
- A clustered table exists, or a table has an active trigger, during import.
- A table contains BFILE columns.
- A referential integrity constraint is present during import.
- A table contains a VARRAY column with an embedded opaque type.
- Very large tables and partitions are being loaded or unloaded, where the PARALLEL SQL clause can be used to advantage.
- Tables are partitioned differently at load time and unload time.

Using Data Pump Clients

Oracle 11g comes with the expdp utility to invoke Data Pump for export and with impdp for import. The Data Pump export utility (expdp) unloads data and metadata to a set of OS files called dump files. The Data Pump import utility (impdp) loads data and metadata stored in an export dump file into a target database. expdp and impdp accept parameters that are then passed to the DBMS_DATAPUMP program. The command-line executable name is expdp for Data Pump export and impdp for Data Pump import, on Windows as well as Unix platforms.

For a user to invoke expdp/impdp, you need to set up a directory where the dump files will be stored, and the user must have appropriate privileges to perform Data Pump export/import. In the next section, I will discuss how to set up the export dump location.
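As a preview of that setup, here is a hedged sketch; the directory path, directory name, and schema are hypothetical. A privileged user creates the directory object and grants access to it, and the schema owner then runs the export from the operating-system prompt.

-- As a privileged user (CREATE ANY DIRECTORY): map a name to a server path
CREATE DIRECTORY dump_dir AS '/u01/app/oracle/dumpfiles';
GRANT READ, WRITE ON DIRECTORY dump_dir TO scott;

The export itself then references the directory object, never the OS path directly:

$ expdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott.dmp LOGFILE=scott.log SCHEMAS=scott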
