How To Estimate Disk Space Needed for an Export Job?

If we just want to know how much disk space the dump would consume, without actually exporting any data, we can use the ESTIMATE_ONLY=y parameter on the expdp command. ESTIMATE_ONLY=y estimates the space, in bytes, for each table. The demo below shows how a user connecting as SYSDBA can see the disk space estimate for an export of the SCOTT schema.


[oracle@primaryDB ~]$ expdp schemas=scott estimate_only=y directory=DATA_PUMP_DIR

Export: Release 11.2.0.4.0 - Production on Sat Sep 14 09:10:16 2019

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_SCHEMA_02":  /******** AS SYSDBA schemas=scott estimate_only=y directory=DATA_PUMP_DIR
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
.  estimated "SCOTT"."EMP2"                                104 MB
.  estimated "SCOTT"."DEPT"                                 64 KB
.  estimated "SCOTT"."EMP"                                  64 KB
.  estimated "SCOTT"."EMP1"                                 64 KB
.  estimated "SCOTT"."EMPLOYEE"                             64 KB
.  estimated "SCOTT"."PHANI"                                64 KB
.  estimated "SCOTT"."SALGRADE"                             64 KB
.  estimated "SCOTT"."STACK"                                64 KB
.  estimated "SCOTT"."BONUS"                                 0 KB
Total estimation using BLOCKS method: 104.4 MB
Job "SYS"."SYS_EXPORT_SCHEMA_02" successfully completed at Sat Sep 14 09:10:42 2019 elapsed 0 00:00:15


Note : 

  • The ESTIMATE_ONLY parameter cannot be used in conjunction with the QUERY parameter.
  • The estimate does not give the exact size; it gives an approximate size of the dump file.
  • If the Data Pump export job involves compressed tables, the default size estimation given for the compressed table is inaccurate when ESTIMATE=BLOCKS is used. This is because the size estimate does not reflect that the data was stored in a compressed form. To get a more accurate size estimate for compressed tables, use ESTIMATE=STATISTICS.
  • The estimate may also be inaccurate if the QUERY, SAMPLE, or REMAP_DATA parameter is used.

Now we can see that we need about 104 MB of disk space to export the SCOTT schema.
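Because the BLOCKS method works from the blocks currently allocated to the tables, you can cross-check the estimate against the data dictionary. A quick sketch, assuming you have access to DBA_SEGMENTS; the total should land close to the 104.4 MB reported above:

SQL> SELECT ROUND(SUM(bytes)/1024/1024, 1) AS size_mb
     FROM   dba_segments
     WHERE  owner = 'SCOTT'
     AND    segment_type = 'TABLE';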

This approach may not give you the exact size, since it uses the BLOCKS method; you can use the STATISTICS method instead, which can give a more accurate number.

Refer to ESTIMATE={BLOCKS | STATISTICS}
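For example, here is a minimal sketch of the same estimate run with the STATISTICS method. This assumes the optimizer statistics for the SCOTT schema are current; the DBMS_STATS call refreshes them first, since the STATISTICS method is only as good as the statistics it reads:

SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');

[oracle@primaryDB ~]$ expdp schemas=scott estimate_only=y estimate=statistics directory=DATA_PUMP_DIR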

It all depends on the conditions. Neither method gives the exact size, but in general the BLOCKS method gives a reasonable approximation.

I have done some experiments and come to the conclusion that the actual dump size may vary by up to 50% from the estimate. If the dump file is small, the deviation tends to be larger; for large files, the deviation keeps decreasing.
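If you want to verify this on your own system, run the real export and compare the resulting file size with the estimate. A sketch, using a hypothetical dump file name scott.dmp; query DBA_DIRECTORIES to find the OS path that DATA_PUMP_DIR points to:

[oracle@primaryDB ~]$ expdp schemas=scott directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log

SQL> SELECT directory_path FROM dba_directories WHERE directory_name = 'DATA_PUMP_DIR';

[oracle@primaryDB ~]$ ls -lh <directory_path>/scott.dmp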

I have added a few notes on this above.

Your comments are very useful to me.

Thanks and good times ahead. :) :)


