Oracle Database 19c New Features
While inspecting the Oracle 19c documentation, I came across some fascinating new database features. The following notes cover them for your reference.
Automatic Indexing
The automatic indexing feature automates the index management tasks in an Oracle database. Automatic indexing automatically creates, rebuilds, and drops indexes in a database based on the changes in application workload, thus improving database performance. The automatically managed indexes are known as auto indexes.
Functionality
- Runs the automatic indexing process in the background periodically at a predefined time interval.
- Analyzes the application workload and, accordingly, creates new indexes and drops existing underperforming indexes to improve database performance.
- Rebuilds the indexes that are marked unusable due to table partitioning maintenance operations, such as ALTER TABLE MOVE.
- Provides PL/SQL APIs for configuring automatic indexing in a database and generating reports related to automatic indexing operations.
How to Configure?
Enable automatic indexing in a database and create any new auto indexes as visible indexes, so that they can be used in SQL statements:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');
Enable automatic indexing in a database, but create any new auto indexes as invisible indexes, so that they cannot be used in SQL statements:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');
Disable automatic indexing in a database, so that no new auto indexes are created and the existing auto indexes are disabled:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','OFF');
Set the retention period for unused auto indexes to 90 days:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_AUTO', '90');
Set the retention period for unused non-auto indexes to 60 days:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_RETENTION_FOR_MANUAL', '60');
Define tablespace TBSAUTO as the tablespace for storing auto indexes:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE', 'TBSAUTO');
Allocate 5% of the tablespace for auto indexes:
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET', '5');
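To verify the resulting configuration, you can query the DBA_AUTO_INDEX_CONFIG view (a minimal sketch; the column formatting commands are illustrative):

col PARAMETER_NAME for a40
col PARAMETER_VALUE for a15
select PARAMETER_NAME, PARAMETER_VALUE from DBA_AUTO_INDEX_CONFIG order by PARAMETER_NAME;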
Generating Reports
Generate a report about the automatic indexing operations for the last 24 hours in plain-text (default) format:
set long 9999999 pages 50000 lines 500
col xxxx for a300
SELECT dbms_auto_index.report_activity() as xxxx FROM dual;
Generate a report about the automatic indexing operations for a specific time window in HTML format:
set long 9999999 pages 50000 lines 500
col xxxx for a300
SELECT dbms_auto_index.report_activity(activity_start => systimestamp-1,
                                       activity_end   => systimestamp,
                                       type           => 'HTML',
                                       section        => 'ALL') as xxxx
FROM dual;
Create a report about the last automatic indexing operation:
set long 9999999 pages 50000 lines 500
col xxxx for a300
SELECT dbms_auto_index.report_last_activity() as xxxx FROM dual;
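To keep a copy of a report on disk, one option is to spool the query output from SQL*Plus. A minimal sketch, assuming an HTML report should land in /tmp (the path is illustrative):

set long 9999999 pages 0 lines 500 trimspool on
spool /tmp/auto_index_report.html
SELECT dbms_auto_index.report_activity(type => 'HTML') FROM dual;
spool off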
Behind the Scenes!
col DESCRIPTION for a30
col ADVISOR_NAME for a25
col task_name for a30
select TASK_NAME, DESCRIPTION, ADVISOR_NAME, CREATED, LAST_MODIFIED, LAST_EXECUTION,
       EXECUTION_START, EXECUTION_END, STATUS, PROGRESS_METRIC
from dba_advisor_tasks;

TASK_NAME                      DESCRIPTION                    ADVISOR_NAME              CREATED              LAST_MODIFIED
------------------------------ ------------------------------ ------------------------- -------------------- ---------------------
SYS_AUTO_SPM_EVOLVE_TASK       Automatic SPM Evolve Task      SPM Evolve Advisor        17-apr-2019 01:32:48 17-apr-2019 01:32:48
SYS_AI_SPM_EVOLVE_TASK         Automatic SPM Evolve Task      SPM Evolve Advisor        17-apr-2019 01:32:48 17-apr-2019 01:32:48
SYS_AI_VERIFY_TASK                                            SQL Performance Analyzer  17-apr-2019 01:32:48 05-aug-2019 19:41:36
SYS_AUTO_INDEX_TASK                                           SQL Access Advisor        17-apr-2019 01:32:48 05-aug-2019 19:41:37
AUTO_STATS_ADVISOR_TASK                                       Statistics Advisor        17-apr-2019 01:32:53 17-apr-2019 01:56:11
INDIVIDUAL_STATS_ADVISOR_TASK                                 Statistics Advisor        17-apr-2019 01:32:53 17-apr-2019 01:32:53

Task Run for Every 15 minutes

col TASK_NAME for a30
col EXECUTION_NAME for a30
set lines 200 pages 5000
select TASK_NAME, EXECUTION_NAME, EXECUTION_START, EXECUTION_END, STATUS, REQUESTED_DOP, ACTUAL_DOP
from dba_advisor_executions
where TASK_NAME in ('SYS_AUTO_INDEX_TASK','SYS_AI_SPM_EVOLVE_TASK','SYS_AI_VERIFY_TASK')
order by EXECUTION_START;

TASK_NAME                      EXECUTION_NAME                 EXECUTION EXECUTION STATUS      REQUESTED_DOP ACTUAL_DOP
------------------------------ ------------------------------ --------- --------- ----------- ------------- ----------
SYS_AUTO_INDEX_TASK            SYS_AI_2019-08-05/19:26:07     05-AUG-19 05-AUG-19 COMPLETED               0          1
SYS_AUTO_INDEX_TASK            SYS_AI_2019-08-05/19:41:17     05-AUG-19 05-AUG-19 COMPLETED               0          1
SYS_AI_VERIFY_TASK             SYS_AI_2019-08-05/19:41:17_C   05-AUG-19 05-AUG-19 COMPLETED               0          1
SYS_AI_VERIFY_TASK             SYS_AI_2019-08-05/19:41:17_E1  05-AUG-19 05-AUG-19 COMPLETED               0          1
SYS_AUTO_INDEX_TASK            SYS_AI_2019-08-05/19:56:27     05-AUG-19 05-AUG-19 COMPLETED               0          1
SYS_AUTO_INDEX_TASK            SYS_AI_2019-08-05/20:11:36     05-AUG-19 05-AUG-19 COMPLETED               0          1

Internal Parameters

select * From SMB$CONFIG where PARAMETER_NAME like '%INDEX%';

PARAMETER_NAME                           PARAMETER_VALUE LAST_UPDATED                   UPDATED_BY   PARAMETER_DATA
---------------------------------------- --------------- ------------------------------ ------------ --------------------
AUTO_INDEX_SCHEMA                                      0
AUTO_INDEX_DEFAULT_TABLESPACE                          0
AUTO_INDEX_SPACE_BUDGET                               50
AUTO_INDEX_REPORT_RETENTION                           31
AUTO_INDEX_RETENTION_FOR_AUTO                          0                                              373
AUTO_INDEX_RETENTION_FOR_MANUAL                        0
AUTO_INDEX_MODE                                        0 05-AUG-19 07.10.13.000000 PM   SYS          IMPLEMENT
_AUTO_INDEX_TRACE                                      0
_AUTO_INDEX_TASK_INTERVAL                            900
_AUTO_INDEX_TASK_MAX_RUNTIME                        3600
_AUTO_INDEX_IMPROVEMENT_THRESHOLD                     20
_AUTO_INDEX_REGRESSION_THRESHOLD                      10
_AUTO_INDEX_ABSDIFF_THRESHOLD                        100
_AUTO_INDEX_STS_CAPTURE_TASK                           0 05-AUG-19 07.10.13.000000 PM   SYS          ON
_AUTO_INDEX_CONTROL                                    0
_AUTO_INDEX_DERIVE_STATISTICS                          0                                              ON
_AUTO_INDEX_CONCURRENCY                                1
_AUTO_INDEX_SPA_CONCURRENCY                            1
_AUTO_INDEX_REBUILD_TIME_LIMIT                        30
_AUTO_INDEX_REBUILD_COUNT_LIMIT                        5
_AUTO_INDEX_REVERIFY_TIME                             30
AUTO_INDEX_COMPRESSION                                 0                                              OFF

Change Task Interval from 15 minutes to 30 minutes

exec dbms_auto_index_internal.configure('_AUTO_INDEX_TASK_INTERVAL', '1800', allow_internal => true);

PARAMETER_NAME             PARAMETER_VALUE LAST_UPDATED                   UPDATED_BY PARAMETER_DATA
-------------------------- --------------- ------------------------------ ---------- --------------------
_AUTO_INDEX_TASK_INTERVAL             1800 06-AUG-19 07.02.01.000000 PM   SYS
Demo
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

col PARAMETER_NAME for a40
col PARAMETER_VALUE for a15
col LAST_MODIFIED for a20
col MODIFIED_BY for a15
select * From DBA_AUTO_INDEX_CONFIG;

PARAMETER_NAME                           PARAMETER_VALUE LAST_MODIFIED                  MODIFIED_BY
---------------------------------------- --------------- ------------------------------ ------------
AUTO_INDEX_COMPRESSION                   OFF
AUTO_INDEX_DEFAULT_TABLESPACE
AUTO_INDEX_MODE                          IMPLEMENT       05-AUG-19 07.10.13.000000 PM   SYS
AUTO_INDEX_REPORT_RETENTION              31
AUTO_INDEX_RETENTION_FOR_AUTO            373
AUTO_INDEX_RETENTION_FOR_MANUAL
AUTO_INDEX_SCHEMA
AUTO_INDEX_SPACE_BUDGET                  50

CREATE TABLE TEST(id NUMBER, name VARCHAR2(20));

begin
  for a in 1..500000 loop
    insert into TEST values (a,'test'||to_char(a));
  end loop;
  commit;
end;
/

set serveroutput on
declare
  vid   number;
  vname varchar2(20);
begin
  for a in 1..500 loop
    execute immediate 'select /*+ DINESH_005 */ id,name from test where id=:a'
      into vid, vname using a;
    dbms_output.put_line('Data : '||vid||','||vname);
  end loop;
end;
/
Sample Report – Auto_Index_Report
Real-Time Statistics
Oracle Database automatically gathers online statistics during conventional DML operations.
Statistics can go stale between execution of DBMS_STATS statistics gathering jobs. By gathering some statistics automatically during DML operations, the database augments the statistics gathered by DBMS_STATS. Fresh statistics enable the optimizer to produce more optimal plans.
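Real-time statistics can also be bypassed for an individual statement with the NO_GATHER_OPTIMIZER_STATISTICS hint. A minimal sketch, using the TEST table from the demo below:

-- skip online statistics gathering for this conventional insert
INSERT /*+ NO_GATHER_OPTIMIZER_STATISTICS */ INTO test
SELECT * FROM test WHERE rownum < 100000;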
Parameters to control this feature
NAME                                               VALUE  DESCRIPTION
-------------------------------------------------- ------ ------------------------------------------------------------------
_optimizer_gather_stats_on_conventional_config     0      settings for optimizer online stats gathering on conventional DML
_optimizer_gather_stats_on_conventional_dml        TRUE   optimizer online stats gathering for conventional DML
_optimizer_stats_on_conventional_dml_sample_rate   100    sampling rate for online stats gathering on conventional DML
_optimizer_use_stats_on_conventional_config        0      settings for optimizer usage of online stats on conventional DML
_optimizer_use_stats_on_conventional_dml           TRUE   use optimizer statistics gathered for conventional DML
Demo
Note: In my environment, this feature functioned only after setting “_exadata_feature_on”=TRUE, which defaults to FALSE.
CREATE TABLE TEST(id NUMBER, name VARCHAR2(20));

begin
  for a in 1..500000 loop
    insert into TEST values (a,'test'||to_char(a));
  end loop;
  commit;
end;
/

begin
  DBMS_STATS.GATHER_TABLE_STATS('DINESH', 'TEST', METHOD_OPT=>'FOR ALL COLUMNS SIZE 1');
end;
/

SET PAGES 5000 LINES 200
COL TABLE_NAME for a15
COL COLUMN_NAME FORMAT a13
COL NOTES FORMAT a35
select TABLE_NAME,COLUMN_NAME,NUM_DISTINCT,SAMPLE_SIZE,LAST_ANALYZED,NOTES
from USER_TAB_COL_STATISTICS where table_name='TEST';

TABLE_NAME      COLUMN_NAME   NUM_DISTINCT SAMPLE_SIZE LAST_ANALYZED        NOTES
--------------- ------------- ------------ ----------- -------------------- -------------------
TEST            ID                  500000      500000 05-aug-2019 17:19:49
TEST            NAME                500000      500000 05-aug-2019 17:19:49

select TABLE_NAME,NUM_ROWS,BLOCKS,AVG_ROW_LEN,LAST_ANALYZED,NOTES
from USER_TAB_STATISTICS where table_name='TEST';

TABLE_NAME        NUM_ROWS     BLOCKS AVG_ROW_LEN LAST_ANALYZED        NOTES
--------------- ---------- ---------- ----------- -------------------- -------------------------
TEST                500000       1504          16 05-aug-2019 17:19:49

begin
  for a in 500001..600000 loop
    insert into TEST values (a,'test'||to_char(a));
  end loop;
  commit;
end;
/

SET PAGES 5000 LINES 200
COL TABLE_NAME for a25
COL COLUMN_NAME FORMAT a13
COL NOTES FORMAT a35
select TABLE_NAME,COLUMN_NAME,NUM_DISTINCT,SAMPLE_SIZE,LAST_ANALYZED,NOTES
from USER_TAB_COL_STATISTICS where table_name='TEST';

TABLE_NAME      COLUMN_NAME   NUM_DISTINCT SAMPLE_SIZE LAST_ANALYZED        NOTES
--------------- ------------- ------------ ----------- -------------------- ------------------------------
TEST            ID                  500000      500000 05-aug-2019 17:19:49
TEST            NAME                500000      500000 05-aug-2019 17:19:49
TEST            ID                                 957 05-aug-2019 17:21:23 STATS_ON_CONVENTIONAL_DML
TEST            NAME                               935 05-aug-2019 17:21:23 STATS_ON_CONVENTIONAL_DML

EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

select TABLE_NAME,NUM_ROWS,BLOCKS,AVG_ROW_LEN,LAST_ANALYZED,NOTES,SCOPE
from USER_TAB_STATISTICS where table_name='TEST';

TABLE_NAME        NUM_ROWS     BLOCKS AVG_ROW_LEN LAST_ANALYZED        NOTES
--------------- ---------- ---------- ----------- -------------------- -----------------------------------
TEST                500000       1504          16 05-aug-2019 17:19:49
TEST                600000       1756             05-aug-2019 17:21:23 STATS_ON_CONVENTIONAL_DML

Insert Single Row - Plan

explain plan for insert into test values(1000001,'test1000001');

Explained.

SQL> select * From table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
| Id  | Operation                | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |      |     1 |    16 |     1   (0)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL | TEST |       |       |            |          |
---------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: statistics for conventional DML enabled

11 rows selected.

Insert Multiple Rows - Plan

SQL> explain plan for insert into test select * From test where rownum <100000;

Explained.

SQL> select * From table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------
Plan hash value: 3931117773

-----------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |      | 99999 |  1562K|   412   (1)| 00:00:01 |
|   1 |  LOAD TABLE CONVENTIONAL         | TEST |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |   500K|  7812K|   412   (1)| 00:00:01 |
|*  3 |    COUNT STOPKEY                 |      |       |       |            |          |
|   4 |     TABLE ACCESS FULL            | TEST |   500K|  7812K|   412   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter(ROWNUM<100000)

16 rows selected.

Query Plan

SELECT /*+ DINESH_001 */ COUNT(*) FROM TEST WHERE ID > 590000;

SQL_ID  7nn4gu7s0p5yu, child number 0
-------------------------------------
SELECT /*+ DINESH_001 */ COUNT(*) FROM TEST WHERE ID > 590000

Plan hash value: 1950795681

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |   481 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |    13 |            |          |
|*  2 |   TABLE ACCESS FULL| TEST |  9897 |   125K|   481   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("ID">590000)

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------
   1 -  SEL$1
         E -  DINESH_001

Note
-----
   - dynamic statistics used: statistics for conventional DML
Active Data Guard DML Redirection
You can run DML operations on Active Data Guard standby databases. This enables you to run read-mostly applications, which occasionally execute DMLs, on the standby database.
DML operations on a standby can be transparently redirected to and run on the primary database. This includes DML statements that are part of PL/SQL blocks. The Active Data Guard session waits until the corresponding changes are shipped to and applied to the Active Data Guard standby. Read consistency is maintained during the DML operation and the standby database on which the DML is run can view its uncommitted changes. However, all the other standby database instances can view these changes only after the transaction is committed.
How to configure?
To configure automatic redirection of DML operations for all standby sessions in an Active Data Guard environment, set the following initialization parameter:
ADG_REDIRECT_DML=TRUE
To configure automatic redirection of DML operations for the current session, use the following command:
ALTER SESSION ENABLE ADG_REDIRECT_DML;
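Redirection can likewise be switched off again for the current session; a minimal sketch:

ALTER SESSION DISABLE ADG_REDIRECT_DML;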
Demo
Production

select name, open_mode, controlfile_type from v$database;

NAME      OPEN_MODE            CONTROL
--------- -------------------- -------
DBADB     READ WRITE           CURRENT

begin
  for a in 1..500 loop
    insert into TEST values (a,'test'||to_char(a));
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:00:00.05

select count(*) from test;

  COUNT(*)
----------
    601000

Here, as highlighted above, inserting 500 records completes within 0.05 seconds.

Standby

select name, open_mode, controlfile_type from v$database;

NAME      OPEN_MODE            CONTROL
--------- -------------------- -------
DBADB     READ ONLY WITH APPLY STANDBY

show parameter redirect

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
adg_redirect_dml                     boolean     TRUE

select count(*) from test;

  COUNT(*)
----------
    601000

begin
  for a in 1..500 loop
    insert into TEST values (a,'test'||to_char(a));
  end loop;
  commit;
end;
/

PL/SQL procedure successfully completed.
Elapsed: 00:08:18.72

select count(*) from test;

  COUNT(*)
----------
    601500

On the standby, the same DML block took around 8 minutes 18 seconds. This is the impact of running DML over the network.
What’s happening behind the scenes?
While performing DMLs on the standby database, session tracing was enabled. Following are the details from the trace file.
begin
for a in 1..500 loop
insert into TEST values (a,'test'||to_char(a));
end loop;
commit;
end;
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.16 464.14 0 0 0 1
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.16 464.15 0 0 0 1
Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 107
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 12.57 12.57
********************************************************************************
SQL ID: cz467hg4s37vr Plan Hash: 0
INSERT INTO TEST
VALUES
(:B1 ,'test'||TO_CHAR(:B1 ))
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 5.97 0 0 3 0
Execute 500 0.28 0.41 0 0 0 500
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 501 0.29 6.39 0 0 3 500
Misses in library cache during parse: 0
Optimizer mode: ALL_ROWS
Parsing user id: 107 (recursive depth: 1)
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
single-task message 1 0.11 0.11
SQL*Net message from dblink 1010 0.07 0.66
SQL*Net message to dblink 1008 0.00 0.00
Disk file operations I/O 1 0.00 0.00
SQL*Net vector data to dblink 501 0.01 0.03
standby query scn advance 500 1.49 463.66
********************************************************************************
As highlighted above, almost all of the wait time was consumed by the “standby query scn advance” event.
For event details, refer to: Wait event ‘standby query scn advance’ (Doc ID 2533359.1)
Automatic Resolution of SQL Plan Regressions
SQL plan management searches for SQL statements in the Automatic Workload Repository (AWR). Prioritizing by highest load, it looks for alternative plans in all available sources, adding better-performing plans to the SQL plan baseline. Oracle Database also provides a plan comparison facility and improved hint reporting.
Automatic SQL plan management resolves plan regressions without user intervention. For example, if high-load statements are performing suboptimally, then the SQL plan management evolve advisor can locate the statements automatically, and then test and accept the best plans.
Automatic SPM Evolve Advisor
Finer granularity Supplemental Logging
Supplemental logging was designed and implemented for Logical Standby or full database replication requirements. This adds unnecessary overhead in environments where only a subset of tables is being replicated. Fine-grained supplemental logging provides a way for partial database replication users to disable supplemental logging for uninteresting tables so that even when supplemental logging is enabled in database or schema level, there is no supplemental logging overhead for uninteresting tables.
Use of this feature can significantly reduce the overhead in terms of resource usage and redo generation in cases where only some of the tables in the database require supplemental logging, such as in an Oracle GoldenGate partial replication configuration.
If both the source and mining databases are at redo compatibility 19 or higher and enable_goldengate_replication is TRUE, then Oracle Database 19c and higher provide the advantage of reduced supplemental logging overhead for Oracle GoldenGate.
How to enable
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION
*
ERROR at line 1:
ORA-26947: Oracle GoldenGate replication is not enabled.

SQL> alter system set enable_goldengate_replication=true;

System altered.

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database;

SUPPLEMENTAL_LOG_DATA_MIN      SUPPLEMENTAL_LOG_DATA_SR
------------------------------ ------------------------------
IMPLICIT                       YES
How to disable
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;
ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION
*
ERROR at line 1:
ORA-31545: cannot drop subset database replication when there is no other
database (container) level supplemental log setting

Note: If no other database-level supplemental log is explicitly enabled, this DDL would fail. This is to prevent the loss of supplemental log data during replication. In this case, you need to add minimal supplemental logging explicitly using the command ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; and then execute ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Database altered.

SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA SUBSET DATABASE REPLICATION;

Database altered.

SQL> SELECT supplemental_log_data_min, supplemental_log_data_sr from v$database;

SUPPLEMENTAL_LOG_DATA_MIN      SUPPLEMENTAL_LOG_DATA_SR
------------------------------ ------------------------------
YES                            NO
Dry-Run Validation of Clusterware Upgrade
The Oracle Grid Infrastructure installation wizard (gridSetup.sh) enables you to perform a dry-run mode upgrade using the -dryRunForUpgrade flag to check your system’s upgrade readiness.
In dry-run upgrade mode, the installation wizard performs all of the system readiness checks that it would perform in an actual upgrade and enables you to verify whether your system is ready for upgrade before you start the upgrade. This mode does not perform an actual upgrade. It helps anticipate potential problems with the system setup and avoid upgrade failures.
Steps to perform Dry-Run
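A minimal sketch of invoking dry-run mode, assuming the 19c Grid Infrastructure software has already been unzipped into the new Grid home (the path is illustrative):

$ export NEW_GRID_HOME=/u01/app/19.0.0/grid    # hypothetical 19c Grid home
$ cd $NEW_GRID_HOME
$ ./gridSetup.sh -dryRunForUpgrade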
AutoUpgrade for Oracle Database
AutoUpgrade enables you to upgrade one or many Oracle databases at the command line with a single command and a single configuration file.
AutoUpgrade will run the preupgrade tasks, perform automated fixups where needed, execute the database upgrade, and finish by taking care of post-upgrade tasks. It includes automatic retry and fallback, the possibility to schedule upgrades for future points in time, and the ability to set, change or remove initialization parameters as desired.
It saves time and money by upgrading hundreds of databases with one command, replacing bespoke, high-maintenance upgrade solutions.
Reference Doc ID: AutoUpgrade Tool (Doc ID 2485457.1)
Workflow and Stages
Demo
Config File

global.autoupg_log_dir=/home/oracle/DBADB122_UPG/DBADB122
DBADB122.dbname=DBADB122
DBADB122.start_time=NOW
DBADB122.source_home=/u02/app/oracle/product/12c/dbhome_1
DBADB122.target_home=/opt/oracle/product/19c/dbhome_1
DBADB122.sid=DBADB122
DBADB122.log_dir=/home/oracle/DBADB122_UPG/DBADB122
DBADB122.upgrade_node=dinesh19c
DBADB122.target_version=19.3
DBADB122.run_utlrp=yes
DBADB122.timezone_upg=yes

Run PRECHECK

$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode analyze

upg> status -job 100
Progress
-----------------------------------
Start time:      19/08/09 16:55
Elapsed (min):   3
End time:        N/A
Last update:     2019-08-09T16:58:08.991
Stage:           PRECHECKS
Operation:       PREPARING
Status:          RUNNING
Pending stages:  1
Stage summary:
    SETUP             <1 min
    PRECHECKS         2 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/DBADB122_UPG/DBADB122/DBADB122
Job logs:     /home/oracle/DBADB122_UPG/DBADB122/DBADB122/100
Stage logs:   /home/oracle/DBADB122_UPG/DBADB122/DBADB122/100/prechecks
TimeZone:     /home/oracle/DBADB122_UPG/DBADB122/DBADB122/temp

Error Details:
None

upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------

PRECHECK completion status

cat /home/oracle/DBADB122_UPG/DBADB122/cfgtoollogs/upgrade/auto/status/status.json
{
  "totalJobs" : 1,
  "lastUpdateTime" : "2019-08-09 16:58:12",
  "jobs" : [ {
    "dbName" : "DBADB122",
    "jobNo" : 100,
    "logDirectory" : "/home/oracle/DBADB122_UPG/DBADB122/DBADB122/100",
    "conNumber" : 1,
    "lastUpdateTime" : "2019-08-09 16:58:12",
    "modules" : [ {
      "moduleName" : "PRECHECKS",
      "status" : 0,
      "errors" : [ ],
      "lastUpdateTime" : "2019-08-09 16:58:12"
    } ]
  } ]
}

Upgrade Database using DEPLOY

$ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -config DBADB122_config.cfg -mode deploy

upg> status -job 101
Progress
-----------------------------------
Start time:      19/08/09 17:42
Elapsed (min):   94
End time:        N/A
Last update:     2019-08-09T19:15:36.494
Stage:           POSTUPGRADE
Operation:       EXECUTING
Status:          RUNNING
Pending stages:  1
Stage summary:
    SETUP             <1 min
    PREUPGRADE        <1 min
    PRECHECKS         3 min
    GRP               <1 min
    PREFIXUPS         5 min
    DRAIN             <1 min
    DBUPGRADE         70 min
    POSTCHECKS        4 min
    POSTFIXUPS        8 min
    POSTUPGRADE       <1 min (IN PROGRESS)

Job Logs Locations
-----------------------------------
Logs Base:    /home/oracle/DBADB122_UPG/DBADB122/DBADB122
Job logs:     /home/oracle/DBADB122_UPG/DBADB122/DBADB122/101
Stage logs:   /home/oracle/DBADB122_UPG/DBADB122/DBADB122/101/postupgrade
TimeZone:     /home/oracle/DBADB122_UPG/DBADB122/DBADB122/temp

Additional information
-----------------------------------
Details:
Update of oratab [DBADB122] [/etc/oratab] [SUCCESS] [None]
Network Files [DBADB122]
  [/opt/oracle/product/19c/dbhome_1/network/admin/tnsnames.ora] [SUCCESS] [None]
  [/opt/oracle/product/19c/dbhome_1/network/admin/listener.ora] [SUCCESS] [None]
  [/opt/oracle/product/19c/dbhome_1/network/admin/sqlnet.ora] [SUCCESS] [None]
Copy of password file [DBADB122] [/opt/oracle/product/19c/dbhome_1/dbs/orapwDBADB122] [SUCCESS] [None]
Database State
  Resetting the database's state: [SUCCESS] [None]
Drop GRP [DBADB122] [SUCCESS] [None]
Restart of database [DBADB122] [PENDING] [None]

Error Details:
None

upg> Job 101 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]
Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------

Job 101 FOR DBADB122

---- Drop GRP at your convenience once you consider it is no longer needed ----
Drop GRP from DBADB122: drop restore point AUTOUPGRADE_221145114461854_DBADB122
Oracle Network Log File Segmentation
The maximum size and number of text log files can be configured for Oracle Network components such as Oracle Net Listener, Oracle Connection Manager, and Global Service Manager. This feature allows better management of log files, particularly in Cloud environments.
Use the following parameters to define the number of log file segments and the size of each segment.
LOG_FILE_NUM_listener_name
Specifies the number of log file segments. At any point in time there can be at most n log file segments, where n is the value of LOG_FILE_NUM_listener_name.
If the log grows beyond this number, the oldest segments are deleted.
LOG_FILE_SIZE_listener_name
Specifies the size of each log file segment, in MB.
Sample listener.ora
DBADB_LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dinesh19c)(PORT = 1521))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
  )

SID_LIST_DBADB_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = DBADB)
      (ORACLE_HOME = /opt/oracle/product/19c/dbhome_1)
      (SID_NAME = DBADB)
    )
  )

LOG_FILE_NUM_DBADB_LISTENER=5
LOG_FILE_SIZE_DBADB_LISTENER=1
Flashback Standby when Primary is flashed back
Flashback Database moves the entire database to an older point in time and opens the database with RESETLOGS. In a Data Guard setup, if the primary database is flashed back, the standby site is no longer in sync with the primary.
In previous releases, getting the standby to the same point in time as the primary required a manual procedure to flash back standby databases. A new parameter (_standby_auto_flashback) is introduced, which enables the standby database to be flashed back automatically when Flashback Database is performed on the primary database.
Demo
Note:
On the PRIMARY database, if you create a restore point without explicitly enabling FLASHBACK ON, the standby database fails to flash back automatically. In my case, I had to recreate the standby database from scratch. You must enable FLASHBACK ON explicitly for the standby database as well.
Let’s start with the demo.
On the PRIMARY database, a guaranteed restore point is created with FLASHBACK ON enabled for the entire database. The standby must be configured with FLASHBACK ON as well.
create restore point DINESH_REST guarantee flashback database;
Restore point created.
select name, open_mode, controlfile_type ,database_role,flashback_on
from v$database;
NAME OPEN_MODE CONTROL DATABASE_ROLE FLASHBACK_ON
---------- -------------------- ------- ---------------- -------------
DBADB READ WRITE CURRENT PRIMARY YES
col NAME for a25
col TIME for a40
col REPLICATED for a15
col GUARANTEE_FLASHBACK_DATABASE for a30
select SCN, GUARANTEE_FLASHBACK_DATABASE, TIME, NAME, REPLICATED
from v$restore_point;
SCN GUARANTEE_FLASHBACK_DATABASE TIME NAME REPLICATED
---------- ------------------------------ ---------------------------------------- ------------------------- -----------
3189676 YES 29-AUG-19 05.40.18.000000000 PM DINESH_REST NO
The PRIMARY database is restarted and flashed back to the restore point:
shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

startup mount
ORACLE instance started.

Total System Global Area 1795159792 bytes
Fixed Size                  9135856 bytes
Variable Size             452984832 bytes
Database Buffers         1191182336 bytes
Redo Buffers                7639040 bytes
In-Memory Area            134217728 bytes
Database mounted.

SQL> flashback database to restore point DINESH_REST;

Flashback complete.

SQL> alter database open resetlogs;

Database altered.
During this process, the standby database was running in read-only mode and failed with the highlighted error; media recovery was interrupted.
2019-08-29T18:01:43.445468+05:30
rfs (PID:24762): New archival redo branch: 1017597674 current: 1017589052
2019-08-29T18:01:43.508354+05:30
rfs (PID:24762): Selected LNO:6 for T-1.S-1 dbid 3467819188 branch 1017597674
rfs (PID:24762): Standby in the future of new recovery destination branch(resetlogs_id) 1017597674
rfs (PID:24762): Incomplete Recovery SCN:0x000000000030ba2f
rfs (PID:24762): Resetlogs SCN:0x000000000030abae
rfs (PID:24762): Flashback database to SCN:0x000000000030abad (3189677) to follow new branch
rfs (PID:24762): New Archival REDO Branch(resetlogs_id): 1017597674 Prior: 1017589052
rfs (PID:24762): Archival Activation ID: 0xced3adfb Current: 0xced35e65
rfs (PID:24762): Effect of primary database OPEN RESETLOGS
rfs (PID:24762): Managed Standby Recovery process is active
2019-08-29T18:01:43.628450+05:30
Incarnation entry added for Branch(resetlogs_id): 1017597674 (DBADBFALL)
2019-08-29T18:01:43.665243+05:30
Setting recovery target incarnation to 4
2019-08-29T18:01:43.667334+05:30
PR00 (PID:22886): MRP0: Incarnation has changed! Retry recovery...
2019-08-29T18:01:43.667635+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_pr00_22886.trc:
ORA-19906: recovery target incarnation changed during recovery
PR00 (PID:22886): Managed Standby Recovery not using Real Time Apply
2019-08-29T18:01:43.715752+05:30
ARC3 (PID:21354): Archived Log entry 10 added for T-1.S-1 ID 0xced3adfb LAD:1
2019-08-29T18:01:43.793205+05:30
rfs (PID:24825): Opened log for T-1.S-11 dbid 3467819188 branch 1017589052
2019-08-29T18:01:43.881460+05:30
Recovery interrupted!
2019-08-29T18:01:44.126230+05:30
rfs (PID:24825): Archived Log entry 11 added for B-1017589052.T-1.S-11 ID 0xced35e65 LAD:2
Recovered data files to a consistent state at change 3193867
2019-08-29T18:01:44.129553+05:30
stopping change tracking
2019-08-29T18:01:44.131161+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_pr00_22886.trc:
ORA-19906: recovery target incarnation changed during recovery
2019-08-29T18:01:44.307981+05:30
Started logmerger process
2019-08-29T18:01:44.324883+05:30
PR00 (PID:24828): Managed Standby Recovery starting Real Time Apply
max_pdb is 3
Warning: Recovery target destination is in a sibling branch
of the controlfile checkpoint. Recovery will only recover
changes to datafiles.
Datafile 1 (ckpscn 3193867) is orphaned on incarnation#=3
PR00 (PID:24828): MRP0: Detected orphaned datafiles!
2019-08-29T18:01:44.381034+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_pr00_24828.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/oradata/data/DBADBFALL/system01.dbf'
PR00 (PID:24828): Managed Standby Recovery not using Real Time Apply
stopping change tracking
2019-08-29T18:01:44.545290+05:30
Recovery Slave PR00 previously exited with exception 19909
2019-08-29T18:01:44.545932+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_mrp0_22880.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/oradata/data/DBADBFALL/system01.dbf'
2019-08-29T18:01:53.132342+05:30
rfs (PID:24856): Primary database is in MAXIMUM PERFORMANCE mode
2019-08-29T18:01:53.243039+05:30
rfs (PID:24856): Selected LNO:6 for T-1.S-2 dbid 3467819188 branch 1017597674
2019-08-29T18:02:04.551559+05:30
MRP0 (PID:22880): Recovery coordinator encountered one or more errors during automatic flashback on standby
2019-08-29T18:02:04.552127+05:30
Background Media Recovery process shutdown (DBADBFALL)
To proceed further, the standby database was restarted in mount mode and media recovery was started. The standby database was then able to identify the flashback performed on the primary and automatically flashed back to the restore point.
Background Managed Standby Recovery process started (DBADBFALL)
2019-08-29T18:05:40.530166+05:30
Started logmerger process
2019-08-29T18:05:40.559253+05:30
PR00 (PID:25235): Managed Standby Recovery starting Real Time Apply
max_pdb is 3
Warning: Recovery target destination is in a sibling branch
of the controlfile checkpoint. Recovery will only recover
changes to datafiles.
Datafile 1 (ckpscn 3193867) is orphaned on incarnation#=3
PR00 (PID:25235): MRP0: Detected orphaned datafiles!
2019-08-29T18:05:40.587358+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_pr00_25235.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/oradata/data/DBADBFALL/system01.dbf'
PR00 (PID:25235): Managed Standby Recovery not using Real Time Apply
stopping change tracking
2019-08-29T18:05:40.753195+05:30
Recovery Slave PR00 previously exited with exception 19909
2019-08-29T18:05:40.875141+05:30
Errors in file /opt/oracle/diag/rdbms/dbadbfall/DBADBFALL/trace/DBADBFALL_mrp0_25221.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/u01/oradata/data/DBADBFALL/system01.dbf'
2019-08-29T18:06:00.882678+05:30
MRP0 (PID:25221): Recovery coordinator performing automatic flashback of database to SCN:0x000000000030abac (3189676)
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
2019-08-29T18:06:01.438806+05:30
Setting recovery target incarnation to 3
2019-08-29T18:06:01.451938+05:30
Started logmerger process
2019-08-29T18:06:01.486379+05:30
max_pdb is 3
2019-08-29T18:06:01.561969+05:30
Parallel Media Recovery started with 3 slaves
2019-08-29T18:06:01.683807+05:30
stopping change tracking
2019-08-29T18:06:01.859008+05:30
Media Recovery Log /u01/oradata/data/DBADBFALL/arch/1_9_1017589052.dbf
2019-08-29T18:06:01.961192+05:30
Media Recovery Log /u01/oradata/data/DBADBFALL/arch/1_10_1017589052.dbf
2019-08-29T18:06:02.090738+05:30
Media Recovery Log /u01/oradata/data/DBADBFALL/arch/1_11_1017589052.dbf
2019-08-29T18:06:02.356221+05:30
Incomplete Recovery applied until change 3189676 time 08/29/2019 17:40:18
2019-08-29T18:06:02.373747+05:30
Flashback Media Recovery Complete
2019-08-29T18:06:02.473453+05:30
stopping change tracking
2019-08-29T18:06:02.506925+05:30
Setting recovery target incarnation to 4
2019-08-29T18:06:02.542883+05:30
Started logmerger process
2019-08-29T18:06:02.568830+05:30
PR00 (PID:25267): Managed Standby Recovery starting Real Time Apply
max_pdb is 3
2019-08-29T18:06:02.682426+05:30
Parallel Media Recovery started with 3 slaves
2019-08-29T18:06:02.686792+05:30
Media Recovery start incarnation depth : 1, target inc# : 4, irscn : 3189677
stopping change tracking
2019-08-29T18:06:02.880310+05:30
PR00 (PID:25267): Media Recovery Log /u01/oradata/data/DBADBFALL/arch/1_11_1017589052.dbf
2019-08-29T18:06:03.101114+05:30
PR00 (PID:25267): Media Recovery Log /u01/oradata/data/DBADBFALL/arch/1_1_1017597674.dbf
PR00 (PID:25267): Media Recovery Waiting for T-1.S-2 (in transit)
2019-08-29T18:06:03.213438+05:30
Recovery of Online Redo Log: Thread 1 Group 6 Seq 2 Reading mem 0
  Mem# 0: /u01/oradata/data/DBADBFALL/redostby03.log
2019-08-29T18:06:03.545178+05:30
Completed: ALTER DATABASE RECOVER managed standby database disconnect
Clear Flashback Logs Periodically for FRA Size Certainty
Customers have many databases that all use the Fast Recovery Area (FRA). They usually cap its size by using the db_recovery_file_dest_size initialization parameter. Difficulties arise when flashback logs are not cleared until space pressure requires it. In many cases, the only remedy is to turn off flashback logging and turn it back on.
This feature makes flashback space usage predictable from a storage management perspective, since flashback uses no more space than is required by the retention target. It also allows users to control cumulative space pressure by adjusting the flashback retention.
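Since flashback space usage is bounded by retention, DB_FLASHBACK_RETENTION_TARGET becomes the knob for controlling cumulative space pressure. A minimal sketch (the value, in minutes, is illustrative):

-- keep roughly 24 hours of flashback logs; logs beyond retention can be cleared
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH;
-- observe current and estimated flashback log usage
select retention_target, flashback_size, estimated_flashback_size from v$flashback_database_log;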
Propagate Restore Points from Primary to Standby
Normal restore points or guaranteed restore points can be defined at the primary site to enable fast point-in-time recovery in the event of any logical corruption issues. However, this restore point is stored in the control file and is not propagated to the standby database. In the event of a failover, the standby becomes the primary and the restore point information is lost.
This feature ensures that the restore point is propagated from the primary to standby sites, so that the restore point is available even after a failover event.
Demo
Production

select name, open_mode, controlfile_type, database_role, flashback_on from v$database;

NAME      OPEN_MODE            CONTROL DATABASE_ROLE    FLASHBACK_ON
--------- -------------------- ------- ---------------- ------------------
DBADB     READ WRITE           CURRENT PRIMARY          YES

create restore point DINESH_REST guarantee flashback database;

Restore point created.

col NAME for a25
col TIME for a40
col REPLICATED for a15
col GUARANTEE_FLASHBACK_DATABASE for a30
select SCN, GUARANTEE_FLASHBACK_DATABASE, TIME, NAME, REPLICATED from v$restore_point;

       SCN GUARANTEE_FLASHBACK_DATABASE   TIME                                     NAME          REPLICATED
---------- ------------------------------ ---------------------------------------- ------------- ---------------
   3189676 YES                            29-AUG-19 05.40.18.000000000 PM          DINESH_REST   NO

Standby

select name, open_mode, controlfile_type, database_role, flashback_on from v$database;

NAME      OPEN_MODE            CONTROL DATABASE_ROLE    FLASHBACK_ON
--------- -------------------- ------- ---------------- ------------------
DBADB     READ ONLY WITH APPLY STANDBY PHYSICAL STANDBY YES

col NAME for a25
col TIME for a40
col REPLICATED for a15
col GUARANTEE_FLASHBACK_DATABASE for a30
select SCN, GUARANTEE_FLASHBACK_DATABASE, TIME, NAME, REPLICATED from v$restore_point;

       SCN GUARANTEE_FLASHBACK_DATABASE   TIME                                     NAME                      REPLICATED
---------- ------------------------------ ---------------------------------------- ------------------------- -----------
   3189676 NO                             29-AUG-19 05.40.18.000000000 PM          DINESH_REST_PRIMARY       YES
As highlighted in the “Standby” section, once you create a restore point on the primary database, it is created with a “_PRIMARY” suffix on the standby database. The important column to note here is REPLICATED, which shows YES on the standby.
Easy Parameter Management in a Broker Config
Users can now manage all Data Guard related parameter settings using the SQL*Plus ALTER SYSTEM command or in DGMGRL with the new EDIT DATABASE … SET PARAMETER command. Parameter changes made in the DGMGRL interface are immediately executed on the target database. In addition, this new capability allows the user to modify a parameter on all databases in a Data Guard configuration using the ALL qualifier, eliminating the requirement to attach to each database and execute an ALTER SYSTEM command.
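A minimal sketch of the DGMGRL form (the database name and parameter are illustrative):

DGMGRL> EDIT DATABASE dbadb SET PARAMETER log_archive_max_processes = 8;

Per the description above, the ALL qualifier can apply the same change to every database in the configuration.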
Parameters to tune auto outage resolution with ADG
In Oracle Database 19c, the DBA can tune the amount of time Data Guard waits before considering an unresponsive I/O operation as failed, using two new parameters: DATA_GUARD_MAX_IO_TIME and DATA_GUARD_MAX_LONGIO_TIME.
These parameters allow the wait times to be tuned for a specific Data Guard configuration based on the user network and disk I/O behavior. Users can now tune Oracle Data Guard automatic outage resolution to fit their specific needs.
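A minimal sketch of adjusting both parameters (the values, in seconds, are illustrative):

-- maximum wait for regular I/Os to a remote destination
ALTER SYSTEM SET DATA_GUARD_MAX_IO_TIME=120 SCOPE=BOTH;
-- maximum wait for long-running I/Os, such as opening remote files
ALTER SYSTEM SET DATA_GUARD_MAX_LONGIO_TIME=120 SCOPE=BOTH;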
MAX_DATAPUMP_PARALLEL_PER_JOB
MAX_DATAPUMP_PARALLEL_PER_JOB is introduced to control the number of parallel workers that can be used for an individual Data Pump job, making it easier to implement resource usage policies for Data Pump.
In a database with MAX_DATAPUMP_PARALLEL_PER_JOB set to N, even if parallel>N is specified for a Data Pump job, only N worker processes are started.
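A minimal sketch: cap every Data Pump job at four workers, then request a higher degree and let the database enforce the limit (credentials and file names are illustrative):

ALTER SYSTEM SET MAX_DATAPUMP_PARALLEL_PER_JOB=4;

-- this job requests 8 workers but only 4 will be started
expdp dinesh/password directory=DATA_PUMP_DIR dumpfile=test%U.dmp logfile=test.log parallel=8 tables=test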
Test Mode for Transportable Tablespaces
TTS_CLOSURE_CHECK=TEST_MODE indicates that the tablespaces are not required to be in read-only mode. This option is for testing purposes only, to provide the timing of a transportable tablespace export operation. The resulting export dump file is not available for use by Data Pump Import.
Now you can more easily determine how long an export takes, and discover unforeseen issues not reported by the closure check.
Demo
select status, tablespace_name from dba_tablespaces where tablespace_name='USERS';

STATUS    TABLESPACE_NAME
--------- ----------------
ONLINE    USERS

expdp directory=DATA_PUMP_DIR dumpfile=users_tts_test.dmp logfile=users_tts_test.log transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode

Export: Release 19.0.0.0.0 - Production on Fri Sep 13 17:13:38 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba
Password:

Connected to: Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Warning: Oracle Data Pump operations are not typically needed when connected to the root or seed of a container database.

Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  /******** AS SYSDBA directory=DATA_PUMP_DIR dumpfile=users_tts_test.dmp logfile=users_tts_test.log transport_tablespaces=users TTS_CLOSURE_CHECK=test_mode
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /opt/oracle/admin/DBADB/dpdump/users_tts_test.dmp
Dump file set is unusable. TEST_MODE requested.
******************************************************************************
Datafiles required for transportable tablespace USERS:
  /u01/oradata/data/DBADB/users01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Fri Sep 13 17:15:13 2019 elapsed 0 00:01:15

ls -rlt /opt/oracle/admin/DBADB/dpdump/users_tts_test.dmp
-rw-r-----. 1 oracle oinstall 2367488 Sep 13 17:15 /opt/oracle/admin/DBADB/dpdump/users_tts_test.dmp
Orachk & Exachk Support for Encrypting Collection Files
Oracle ORAchk and Oracle EXAchk diagnostic collection files may contain sensitive data. Starting in this release, you can encrypt and decrypt diagnostic collection ZIP files and protect them with a password.
Oracle ORAchk and Oracle EXAchk collections and their reports can include sensitive data. When you email or transfer these reports to repositories, it is critical that only the intended recipients can view the sensitive data. To prevent leaks, you can restrict access to sensitive data by encrypting the diagnostic collections and protecting them with a password.
This feature is available only on Linux and Solaris platforms.
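A minimal sketch of the encrypt/decrypt workflow as I understand it from the documentation (the collection file name is illustrative; both commands prompt for a password):

$ orachk -encryptzip orachk_dinesh19c_DBADB_091319.zip
$ orachk -decryptzip orachk_dinesh19c_DBADB_091319.zip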
Stay tuned for more demos!