New Commands (Just a Collection)

alter database open resetlogs upgrade; -- used when an RMAN backup of a lower-version database is restored into a higher-version home
Reference:

https://taliphakanozturken.wordpress.com/tag/alter-database-open-resetlogs-upgrade/
https://shivanandarao-oracle.com/2015/09/16/rman-restore-backup-of-lower-version-database-to-a-higher-version/
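As a rough sketch of the flow those two posts describe (paths below are placeholders, and the exact steps depend on the source/target versions): restore the lower-version backup into the higher-version Oracle home with RMAN, open with RESETLOGS UPGRADE, then run the database upgrade.

RMAN> startup nomount;
RMAN> restore controlfile from '/backup/ctl_bkp.ctl';
RMAN> alter database mount;
RMAN> catalog start with '/backup/';
RMAN> restore database;
RMAN> recover database;                     -- recover as far as the available archivelogs allow
SQL>  alter database open resetlogs upgrade;
SQL>  -- then run the upgrade scripts (catupgrd.sql / dbupgrade) and utlrp.sql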

EBS Upgrades and Platform Migration

Upgrade     : Install & Upgrade All Data
Reimplement : Legacy Instance --> Data Migration --> Reimplemented Instance
Hybrid (Upgrade & Consolidate): Upgrade and migrate data ==> Single Global Instance
Are you upgrading to a newer version of your current operating system?
Are you migrating to a different platform with the same endian format?
Are you migrating to a different platform with a different endian format?

Best practices for combined upgrades
----------------------------------------------------------
1) Consider the database and application tier migrations separately and plan to perform the database migration first.
2) Choose the right migration process for the database while considering the target platform, database size and process complexity.
3) Migrate and upgrade your EBS application tier from 11i to R12 by laying down the new application tier on the target platform as part of the upgrade.
Database Migrations and Upgrades
-------------------------------------------------------------------
A. An OS upgrade on the same platform
B. Migration to a new platform of the same endian format
Transportable Database (TDB)
C. Migration to a new platform of different endian formats
1. Export/Import (Data Pump), but slow for databases > 1 TB
2. Transportable Tablespaces is better for large databases

Application Tier Migrations and Upgrades
-------------------------------------------------------------------
Migrations without Upgrades
R12 Upgrade using Rapid Install
R12 Upgrade using Release Update Packs

Transportable Database (TDB) is the fastest way to migrate data between two platforms of the same endian format, as the process fundamentally consists of copying the database files and
then using Recovery Manager (RMAN) to convert the data files (with the 'rman convert database' command).
The EBS TDB migration process does, however, require the source and target databases to be on the same release and patchset version.
For larger databases (>1 TB), export/import can be an extremely slow process and alternatives should be considered.
TTS is essentially a process of moving or copying the data portion of the database,
and then using the Recovery Manager (RMAN) utility to convert the endian format of the data.
TTS still requires export/import of certain objects in the EBS database, such as metadata and system tables.
Full Transportable Export/Import is similar to Transportable Tablespaces,
but automates a number of steps in the migration process and reduces the manual tasks required to perform the migration.
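A minimal sketch of the Full Transportable Export/Import parameters (directory, dump file and datafile names are placeholders; VERSION=12 is only needed when exporting from an 11.2.0.3/11.2.0.4 source):

expdp system/<password> full=y transportable=always version=12 directory=DP_DIR dumpfile=ftex_%U.dmp logfile=ftex_exp.log metrics=yes exclude=statistics
impdp system/<password> full=y directory=DP_DIR dumpfile=ftex_%U.dmp logfile=ftex_imp.log transport_datafiles='/u01/oradata/TARGET/users01.dbf'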

References
Migration
https://blogs.oracle.com/stevenchan/a-primer-on-migrating-oracle-applications-to-a-new-platform
https://blogs.oracle.com/stevenchan/best-practices-for-combining-ebs-upgrades-with-platform-migrations
Planning Your EBS Upgrade from 11i to R12 and beyond https://www.oracle.com/technetwork/apps-tech/upgrade-planning-2011-1403212.pdf
Oracle E-Business Suite Upgrades and Platform Migration (Doc ID 1377213.1) *************
https://www.oracle.com/technetwork/database/availability/maa-ebs-exadata-xtts-321616.pdf
https://www.cisco.com/c/en/us/solutions/collateral/servers-unified-computing/ucs-5100-series-blade-server-chassis/Whitepaper_c11-707249.html
https://hansrajsao.wordpress.com/2015/03/06/migrating-oracle-database-to-exadata/
Cross Platform Transportable Tablespaces on 11i with 10gR2 (Doc ID 454574.1)

Upgrade
Oracle E-Business Suite Upgrade Guide, Release 11i to 12.1.3 https://docs.oracle.com/cd/B53825_08/current/acrobat/121upgrade.pdf
Oracle E-Business Suite Upgrade Guide, Release 12.0/1 to 12.2 https://docs.oracle.com/cd/E26401_01/doc.122/e48839.pdf
Oracle E-Business Suite Upgrade Guide, Release 11i to 12.2 https://docs.oracle.com/cd/E51111_01/current/acrobat/122upg11i.pdf
https://www.datavail.com/blog/planning-tips-and-best-practices-for-upgrading-to-ebs-12-2/
Oracle E-Business Suite Release 12.2 Information Center – Upgrade (Doc ID 1583158.1) *****
Best Practices for Minimizing Oracle E-Business Suite Release 12.2.n Upgrade Downtime (Doc ID 1581549.1)
Oracle E-Business Suite Release 12.2 Information Center (Doc ID 1581299.1)
https://blogs.oracle.com/stevenchan/getting-started-with-the-release-12-technology-stack
Oracle E-Business Suite Release 12.2 Technology Stack Documentation Roadmap (Doc ID 1934915.1)

SCHEMA Refresh

SCHEMA Refresh Steps with Datapump
Script to take an EXPDP backup of the XX_DUMMY schema via Cron

### Script to take an EXPDP backup of the XX_DUMMY schema
. $HOME/EBSDEV_dev-01.env
pswd=$(cat $HOME/scripts/db_mon/.watchword); export pswd
export SCHEMA=XX_DUMMY
DUMP_LOG="$SCHEMA"_EXPDP_$(date +"%d_%b_%y").log; export DUMP_LOG
DBA_MAIL=ebsdba@srinalla.com
echo "$SCHEMA Schema Backup started in $ORACLE_SID"|mailx -s "$SCHEMA Schema Backup started in $ORACLE_SID" $DBA_MAIL
expdp system/"$pswd" directory=WEEKLY_BACKUP dumpfile="$SCHEMA"_"$ORACLE_SID"_$(date +"%d_%b_%y")_EXP%U.dmp logfile="$DUMP_LOG" schemas=$SCHEMA filesize=3G cluster=n parallel=16
# the path below must match the directory behind the WEEKLY_BACKUP directory object
bkp_err=$(grep -ic "ora-" /u03/EBSDEV/DMP_BKPS/WEEKLY_BKPS/$DUMP_LOG); export bkp_err
if [ "$bkp_err" -gt "0" ]
then
  cat /u03/EBSDEV/DMP_BKPS/WEEKLY_BKPS/$DUMP_LOG|mailx -s "$SCHEMA Schema Backup completed with errors in $ORACLE_SID, Pls chk!!!" $DBA_MAIL
else
  echo "$SCHEMA Schema Backup completed in $ORACLE_SID"|mailx -s "$SCHEMA Schema Backup completed in $ORACLE_SID" $DBA_MAIL
fi
# Remove dump files older than 10 days
find /u03/EBSDEV/DMP_BKPS/WEEKLY_BKPS/* -mtime +10 -exec rm {} \;
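The import side of the refresh is not shown above; a minimal sketch, assuming the dump files were copied to the target's WEEKLY_BACKUP directory and the existing XX_DUMMY objects should be replaced (the dump file name follows the pattern used by the export script; <date> is a placeholder):

impdp system/"$pswd" directory=WEEKLY_BACKUP dumpfile=XX_DUMMY_"$ORACLE_SID"_<date>_EXP%U.dmp logfile=XX_DUMMY_impdp.log schemas=XX_DUMMY table_exists_action=replace parallel=16 cluster=n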

Working with Data Pump

Let's look at the following in detail


Data Pump Best Practices
Don't invoke expdp using SYS
Purge the recycle bin before export (user/table/DB level)
** PARALLEL doesn't work with LOB columns
How to use/estimate the PARALLEL parameter in Data Pump?
How to check/monitor Data Pump jobs?
** Data Pump uses two different load methods during import (impdp).

Data Pump Best Practices

pga_aggregate_target --> Set this high; it improves Data Pump performance (index builds in particular benefit).
For export consistency use:-
------------------------------
FLASHBACK_TIME=SYSTIMESTAMP; this will increase UNDO requirements for the duration of the export

compression_algorithm=medium -- recommended option from 12c; similar characteristics to BASIC, but uses a different algorithm

Always set parameters:-
------------------------------
METRICS=YES
EXCLUDE=STATISTICS
LOGTIME=ALL -->Timestamps   (From 12C)

Speed up Data Pump:-
------------------------------
PARALLEL=n
EXCLUDE=STATISTICS on export
EXCLUDE=INDEXES on import
1. Initial impdp with EXCLUDE=INDEXES 
2. Second impdp with INCLUDE=INDEXES SQLFILE=indexes.sql 
3. Split indexes.sql into multiple SQL files and run in multiple sessions

– Set the database parameters COMMIT_WAIT=NOWAIT and COMMIT_LOGGING=BATCH during full imports
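A minimal sketch of the exclude/include-indexes approach listed above (directory, dump file and schema names are placeholders):

impdp system/<password> directory=DP_DIR dumpfile=exp_%U.dmp schemas=XX_DUMMY exclude=INDEX logfile=imp_data.log        # 1. load the data without indexes
impdp system/<password> directory=DP_DIR dumpfile=exp_%U.dmp schemas=XX_DUMMY include=INDEX sqlfile=indexes.sql         # 2. write the index DDL to indexes.sql only; nothing is executed
# 3. split indexes.sql into several scripts and run them from multiple sqlplus sessions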

Direct import via database link (Network bandwidth and CPU bound):-
----------------------------------------------------------------------------------------------------
– Parameter: NETWORK_LINK
Run only impdp on the target system - no expdp necessary
No dump file written, no disk I/O, no file transfer needed

Restrictions of database links apply: – Does not work with LONG/LONG RAW and certain object types
Performance: Depends on network bandwidth and target's CPUs
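A minimal sketch of the NETWORK_LINK approach (link, directory and password are placeholders):

-- on the target database, create a link pointing at the source
create database link src_link connect to system identified by <password> using 'SOURCEDB';
-- then run only the import on the target; no dump file is written
impdp system/<password> network_link=src_link full=y parallel=8 directory=DP_DIR logfile=net_imp.log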

Some Commands /Use Cases

remap_tablespace=OLD_TBS:NEW_TBS ==>Move all objects from one tablespace to another
remap_schema=old_schema:new_schema ==> Move objects to a different schema
expdp with content=metadata_only & impdp with remap_schema=A:Z  ==> Clone a user
remap_datafile='/u01/app/oracle/oradata/datafile_01.dbf':'/u01/datafile_01.dbf'  ==> Create your database in a different file structure
sample=70 (expdp) with transform=pctspace:70 (impdp) ==> export roughly a 70% sample of the data and reduce extent allocations to 70% on import
transform=disable_archive_logging:Y ==> disables redo logging during import;
note that the database-level FORCE LOGGING setting overrides this feature.
sqlfile=x_24112010.sql

EXPDP Filesize : Split or Slice the Dump file into Multiple Directories
expdp srinalla/srinalla job_name=exp_job_multiple_dir  schemas=STHOMAS  filesize=3G dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp logfile=dump.log compression=all parallel=10

During import, specify the dump files the same way:
dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp


Statistics are imported by default
compression
parallel
cluster  (default=Y, from 11gR2; parallelizes across RAC nodes, or only some nodes via a service, e.g. service_name=EBS_DP_12)
         Status check: select inst_id, session_type from dba_datapump_sessions;
Commit the import on every row with COMMIT=Y (legacy imp utility only, not impdp).

If COMMIT=Y, Import commits tables containing LONG, LOB, BFILE, ROWID, UROWID,
DATE or type columns after each row.
		  
Restart
restart the job with a different degree of parallelism, say 4 (earlier it was 6):
Export> parallel=4
Export> START_JOB
Export> continue_client --show progress

import using "table_exists_action=replace" and TABLES=(list of skipped tables)

nohup impdp system/secret NETWORK_LINK=olddb FULL=y  PARALLEL=25 &
impdp system attach
Import> status
Import> parallel=30 << this will increase the parallel processes if you want

Do not invoke expdp using '/ as sysdba'

Also, do not invoke expdp using '/ as sysdba'; use the SYSTEM account instead (see the first Note section here):
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#i1012781

Purge recyclebin before Export , User/Table/DB Level
select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime from dba_recyclebin
where owner = 'XX_DUMMY';
purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";
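For the other levels mentioned above:

purge recyclebin;        -- user level: purges the current user's recycle bin
purge dba_recyclebin;    -- database level: purges all recycle bins (requires SYSDBA or the PURGE DBA_RECYCLEBIN privilege)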

** PARALLEL doesn't work with LOB columns
Parallelism doesn't help here because Data Pump serializes the dump when it comes to a LOB table.
The approach should be to export in two passes (see the sketch below):
1) the whole database/schema minus the LOB table, and
2) the LOB table on its own.
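A sketch of that two-pass export, assuming the LOB table is XX_DUMMY.XX_LOB_TAB (placeholder names; complex EXCLUDE values are often easier to keep in a parfile to avoid shell quoting issues):

expdp system/<password> directory=DP_DIR schemas=XX_DUMMY exclude=TABLE:\"IN ('XX_LOB_TAB')\" dumpfile=no_lob_%U.dmp parallel=8 logfile=no_lob.log
expdp system/<password> directory=DP_DIR tables=XX_DUMMY.XX_LOB_TAB dumpfile=lob_tab.dmp logfile=lob_tab.log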
** pga_aggregate_target proved to be the most important change in the overall scheme of things
because indexes were built towards the end of the job and took 3 times longer
than actually creating the tables and importing the data in this test.
Check LOB columns with the query below:

SELECT  s.tablespace_name ,l.owner,l.table_name,l.column_name,l.segment_name,s.segment_type, round(s.bytes/1024/1024/1024,2) "Size(GB)"
FROM DBA_SEGMENTS s,dba_lobs l
where l.owner = s.owner and l.segment_name = s.segment_name
and l.owner not in ('SYS','SYSTEM','APPS','APPLSYS')
--and round(s.bytes/1024/1024/1024,2)>1
order by s.bytes desc;

See the links below for how to address this:
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)

How to use/estimate PARALLEL parameter in Datapump?

Before starting any export/import, it is better to run with the ESTIMATE_ONLY parameter first. Divide the estimated size by 250 MB and use the result as a starting point for the PARALLEL value.
When using the PARALLEL option, keep the points below in mind:
a. Set the degree of parallelism to 2x the number of CPUs, then tune from there.
b. For Data Pump Export, the PARALLEL parameter value should not exceed the number of dump files.
c. For Data Pump Import, the PARALLEL parameter value should not exceed the number of dump files.
For more details, you can refer to MOS doc 365459.1
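A sketch of the estimate step (schema name is a placeholder):

expdp system/<password> schemas=XX_DUMMY estimate_only=yes estimate=blocks logfile=estimate.log
# divide the reported estimate by ~250 MB (per the note above) to pick a starting PARALLEL value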

How to Check/Monitor DATAPUMP JOBS?

DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS
Monitoring Data Pump http://www.dbaref.com/home/oracle-11g-new-features/monitoringdatapump
Queries to Monitor Datapump Jobs https://databaseinternalmechanism.com/2016/09/13/how-to-monitor-datapump-jobs/
How to delete/remove non executing datapump jobs? https://pavandba.com/2011/07/12/how-to-deleteremove-non-executing-datapump-jobs/
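A couple of queries pulled together from the views above (a sketch; adjust the filters as needed):

-- Data Pump jobs currently defined or running
select owner_name, job_name, operation, job_mode, state, degree, attached_sessions
from dba_datapump_jobs;

-- rough progress of the running job's workers
select sid, serial#, opname, sofar, totalwork, round(sofar/totalwork*100,2) pct_done
from v$session_longops
where totalwork > 0
and opname in (select job_name from dba_datapump_jobs);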

Datapump will use two different load methods during import(impdp)

  1. Direct path load – this is the main reason why Data Pump import (impdp) is faster than traditional import (imp)
  2. External table path
    Data Pump cannot always use direct path due to some restrictions, which is why you may sometimes observe impdp running slower than expected.
    In which situations will Data Pump not use direct path? When the table being loaded has any of the following:
    1. A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
    2. A domain index exists for a LOB column.
    3. A table is in a cluster.
    4. There is an active trigger on a pre-existing table.
    5. Fine-grained access control is enabled in insert mode on a pre-existing table.
    6. A table contains BFILE columns or columns of opaque types.
    7. A referential integrity constraint is present on a pre-existing table.
    8. A table contains VARRAY columns with an embedded opaque type.
    9. The table has encrypted columns
    10. The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
    – There is an active trigger
    – The table is partitioned
    – A referential integrity constraint exists
    – A unique index exists
    11. Supplemental logging is enabled and the table has at least 1 LOB column.
    Note: Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or re-enabled.
    12. TABLE_EXISTS_ACTION=TRUNCATE is used on an IOT (index-organized table)

References
http://www.orafaq.com/wiki/Datapump
Master Note for Data Pump:MOS Note:1264715.1
For Compatibility and version changes:MOS Note:553337.
Using Oracle’s recycle bin http://www.orafaq.com/node/968
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) (Doc ID 453895.1)
https://mayankoracledba.wordpress.com/2018/01/15/oracle-12c-data-pump-best-practices/
http://jeyaseelan-m.blogspot.com/2016/05/speed-up-expdpimpdp.html
https://dbatricksworld.com/data-pump-export-import-performance-tips/
http://its-all-about-oracle.blogspot.com/2013/06/datapump-expdpimpdp-utility.html
https://rajat1205sharma.wordpress.com/2015/07/03/data-pump-export-import-performance-tips/
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html ***************
How to use PARALLEL parameter in Datapump? https://pavandba.com/2011/07/15/how-to-use-parallel-parameter-in-datapump/
impdp slow with TABLE_EXISTS_ACTION=TRUNCATE https://pavandba.com/2013/03/22/impdp-slow-with-table_exists_actiontruncate/

Transportable Tablespaces(TTS) for Upgrade/Migration

Let's look at the following in detail

Why TTS ?  
-->Transportable Tablespaces (TTS)
-->FTEX (TTS + Data Pump)  
-->FTEX using RMAN Backups (Less Down Time) 
High Level Steps for Migration of Data Using TTS
Time Taken: Cross-Platform Transportable Tablespace (XTTS) Transport vs. the Export/Import Method
-->Normal TTS Approach 
-->RMAN TTS Approach  
-->12C TTS Enhancement using RMAN backup sets  


Why TTS ?
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.

Transportable Tablespaces (TTS)

The Transportable Tablespaces feature has existed since Oracle 8i
– Can be used cross version; the version you transport to must always be equal or higher
– Cross-platform Transportable Tablespaces was introduced in Oracle Database 10g
==> Can be used cross version and cross platform
==> Requires tablespaces to be in read-only mode
==> Extra work necessary for everything in SYSTEM/SYSAUX

Full Transportable Export/Import FTEX (TTS + Data Pump)

Transport an entire database in a single operation
– Cross version and cross platform
– Can include the database upgrade
– Combination of TTS for data tablespaces and Data Pump for administrative tablespaces (SYSTEM, SYSAUX ...)
– Supports information from database components such as Spatial,Text, Multimedia, OLAP, etc.
 Full transportable export supported since Oracle 11.2.0.3
 Full transportable import supported since Oracle 12.1.0.1
Relationship of XTTS to Data Pump and Recovery Manager 
XTTS works within the framework of Data Pump and Recovery Manager
(RMAN). Use Data Pump to move the metadata of the objects in the tablespaces
being transported to the target database. 
RMAN converts the datafiles being transported to the endian format of the target platform
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
RMAN uses the transportable tablespaces functionality to perform TSPITR. Therefore, any limitations on transportable tablespaces are also applicable to TSPITR.
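For the endian conversion step, a sketch of running the conversion with RMAN on the target (paths are placeholders and the platform name must match an entry in v$transportable_platform):

RMAN> convert datafile '/stage/tts_01.dbf'
        from platform 'Solaris[tm] OE (64-bit)'
        db_file_name_convert '/stage', '/u01/oradata/TARGET'
        parallelism 4;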

Learn more at ………
OLL Video : Oracle Database 12c: Transporting Data

FTEX using RMAN Backups (Less Down Time)

Read more at ………
11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)

High Level Steps for Migration of Data Using TTS

Step 1: Check Platform Support And File Conversion Requirement
Step 2: Identify Tablespaces To Be Transported And Verify Self-Containment
Step 3: Check For Problematic Data Types
Step 4: Check For Missing Schemas And Duplicate Tablespace And Object Names
Step 5: Make Tablespaces Read-Only In Source Database
Step 6: Extract Metadata From Source Database
Step 7: Copy Files To Target Server And Convert If Necessary 
Step 8: Import Metadata Into Target Database
Step 9: Copy Additional Objects To Target Database As Desired

Read more at ………
Moving Oracle Databases Across Platforms without Export/Import

Time Taken: Cross-Platform Transportable Tablespace (XTTS) Transport vs. the Export/Import Method

Task                                                  Export/Import   Tablespace Transport
Export time                                           37 min          2 min
File transfer time                                    8 min           13 min
File conversion time                                  n/a             14 min
Import time                                           42 min          2 min
Approximate total time                                87 min          31 min
Export file size                                      4100 MB         640 KB
Target database extra TEMP tablespace requirement     1200 MB         n/a

Normal TTS Approach

1) On the source database
--------------------------------------
Validate the self-contained property:
exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
select * from transport_set_violations;   -- should return no rows
a) Put the tablespaces in READ ONLY mode
b) Export the metadata (Data Pump, so that it can be imported with impdp below)
expdp DIRECTORY=exp_dir DUMPFILE=tts.dmp LOGFILE=tts_exp.log TRANSPORT_TABLESPACES=tbs-names TRANSPORT_FULL_CHECK=y
   (legacy equivalent: exp FILE=/path/dump_file.dmp LOG=/path/tts_exp.log TABLESPACES=tbs-names TRANSPORT_TABLESPACE=y STATISTICS=none, imported with imp)
************** Transfer datafiles and the export file to the TARGET *********************
2) On the destination database
--------------------------------------
a) Import the export file:
impdp DUMPFILE=tts.dmp LOGFILE=tts_imp.log DIRECTORY=exp_dir REMAP_SCHEMA=master:scott TRANSPORT_DATAFILES='/path/tts.dbf'
b) Put the tablespaces back in READ WRITE mode

RMAN TTS Approach

1) RMAN transport tablespace  On the source database
**************Transfer Datafiles and export file to TARGET *********************
2) Run Import script created by RMAN on the destination database
   impscrpt.sql (or) impdp command from the file

Read more at ………
RMAN TRANSPORT TABLESPACE By Franck Pachot
RMAN Transportable Tablespace
RMAN TRANSPORT TABLESPACE – Oracle Doc

Why Using RMAN TTS ?

-->RMAN checks that the tablespaces are self-contained
-->Faster
-->No need to put the tablespaces in read-only mode
Using the RMAN TTS feature, the datafiles which contain the actual data can be copied, making the migration faster.
RMAN creates transportable tablespace sets from backup, which eliminates the need to put tablespaces in read-only mode.
The RMAN process for creating transportable tablespaces from backup uses the Data Pump Export and Import utilities.
Because RMAN creates the automatic auxiliary instance used for restore and recovery on the same node as the source instance,
there is some performance overhead during the operation of the TRANSPORT TABLESPACE command.

RMAN> transport tablespace tbs_2, tbs_3
   tablespace destination '/disk1/transportdest' --->  Set of Datafiles will be created here with Original Names & export log,Export dump file ,impscrpt.sql will also be created
   auxiliary destination '/disk1/auxdest'
   DATAPUMP DIRECTORY  mypumpdir
   DUMP FILE 'mydumpfile.dmp'
   IMPORT SCRIPT 'myimportscript.sql'
   EXPORT LOG 'myexportlog.log';

12C TTS Enhancement using RMAN backup sets
1) RMAN TTS on the source database

   a) Put TBS in Read Only
   b) RMAN --> BACKUP FOR TRANSPORT (metadata by Data Pump, backup set by RMAN)
      --> converts the platform and the endian format if required
   c) Set TBS back to READ WRITE

****** Transfer the backup set & dump files from the source to the target server
2) RMAN TTS on the destination database

   a) Restore foreign tablespace (restore by RMAN, import of metadata by Data Pump)
   b) Make TBS READ WRITE
RMAN> backup for transport format '/tmp/stage/tbs1.bkset' datapump format '/tmp/stage/tbs1.dmp' tablespace tbs1;
RMAN> restore foreign tablespace tbs1 
format '/u01/app/oracle/oradata/sekunda/tbs1.dbf' 
from backupset '/tmp/stage/tbs1.bkset' 
dump file from backupset '/tmp/stage/tbs1.dmp';

Read more at ………
Transport Tablespace using RMAN Backupsets in #Oracle 12c
12c How Perform Cross-Platform Database Transport to different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)

References:
How to Move a Database Using Transportable Tablespaces (Doc ID 1493809.1)
How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
Master Note for Transportable Tablespaces (TTS) — Common Questions and Issues (Doc ID 1166564.1)
Transportable Tablespaces
Transportable Tablespaces Tips
Using Transportable Tablespaces for Oracle Upgrades

Transparent Data Encryption(TDE) -Overview


Let's look at the following in detail

What is Oracle Transparent Data Encryption (TDE)? 
How do I migrate existing clear data to TDE encrypted data? 
How to find details for encryption/encrypted objects? 
How to Enable TDE in 12C? 
Best Practices for TDE 

What is Oracle Transparent Data Encryption (TDE)?
Oracle Advanced Security Transparent Data Encryption (TDE), available from 10gR2,
allows administrators to encrypt sensitive data (e.g. Personally Identifiable Information, or PII),
protecting it from unauthorized access if storage media, backups, or datafiles are stolen.
TDE supports two levels of encryption
1)TDE column encryption ( From 10gR2)
2)TDE tablespace encryption ( From 11gR1)

TDE uses a two tier encryption key architecture, consisting of a master key and one or more table and/or tablespace keys.
The table and tablespace keys are encrypted using the master key. The master key is stored in the Oracle Wallet.
When TDE column encryption is applied to an existing application table column, a new table key is created and stored in the Oracle data dictionary.
When TDE tablespace encryption is used, the individual tablespace keys are stored in the header of the underlying OS file(s).

How do I migrate existing clear data to TDE encrypted data?

You can migrate existing clear data to encrypted tablespaces or columns online or offline.
Existing tablespaces can be encrypted online with zero downtime on production systems, or encrypted offline with no storage overhead during a maintenance window.
Online conversion is available in Oracle Database 12.2.0.1 and above;
offline conversion in Oracle Database 11.2.0.4 and 12.1.0.2.
In 11g/12c you can also use DBMS_REDEFINITION (online table redefinition) to copy existing clear data into a new encrypted tablespace in the background with no downtime.
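A sketch of the 12.2 online conversion of an existing tablespace (tablespace and file names are placeholders; the keystore must already be open):

alter tablespace USERS encryption online using 'AES256' encrypt
  file_name_convert = ('users01.dbf', 'users01_enc.dbf');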

How to find details for encryption/encrypted objects?
gv$encryption_wallet
V$ENCRYPTED_TABLESPACES
DBA_ENCRYPTED_COLUMNS

select * from gv$encryption_wallet; ---(gv$wallet)
select * from dba_encrypted_columns;
select table_name from dba_tables where tablespace_name in (select tablespace_name from dba_tablespaces where encrypted='YES');
select tablespace_name, encrypted from dba_tablespaces where encrypted='YES';
select et.inst_id, ts.name, et.encryptionalg, et.encryptedts from v$encrypted_tablespaces et, ts$ ts where et.ts# = ts.ts#;

How to Enable TDE in 12C?

Step 1: Set KEYSTORE location in $TNS_ADMIN/sqlnet.ora
Step 2: Create a Password-based KeyStore
Step 3: Open the KEYSTORE
Step 4: Set Master Encryption Key
Step 5: Encrypt your Data
     --> Make sure compatible parameter value greater than 11.2.0.
     5.1 Encrypt a Columns in Table
     5.2 Encrypt Tablespace
	 -->Note: Encrypting existing tablespace is not supported.( Do online/Offline Conversion)

Step 1: Set KEYSTORE location in $TNS_ADMIN/sqlnet.ora

==> created the directory
mkdir -p /u01/app/oracle/admin/SLOB/tde_wallet
==>declared it in sqlnet.ora
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/SLOB/tde_wallet)))

Step 2: Create a Password-based KeyStore

  ==> sqlplus sys/password as sysdba
        administer key management create keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;

Step 3: Open the KEYSTORE

administer key management set keystore open identified by oracle;

Step 4: Set Master Encryption Key

administer key management set key identified by oracle with backup;
  ==> Optionally created an auto-login wallet
Administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;

Step 5: Encrypt your Data
–> Make sure compatible parameter value greater than 11.2.0.
===> 5.1 Encrypt a Columns in Table

create table job(title varchar2(128) encrypt);  -- create a table with an encrypted column (AES192 by default)
create table emp(name varchar2(128) encrypt using '3DES168', age number encrypt no salt);  -- create a column with a chosen encryption algorithm
alter table employees add (new_name varchar2(40) encrypt);  -- add a new encrypted column to an existing table
alter table employees rekey using '3DES168';  -- change the encryption key/algorithm of the table's encrypted columns

===> 5.2 Encrypt Tablespace
Note: Encrypting existing tablespace is not supported.( Do online/Offline Conversion)

CREATE TABLESPACE D_CRYPT DATAFILE '+DATA' SIZE 10G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
Alter table TABLE_WITH_PAN move online tablespace D_CRYPT;

For 11g, We use orapki wallet & encryption key commands are used as below

orapki wallet create -wallet <wallet location> -pwd "<wallet password>" -auto_login
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY  "welcome1";

Best Practices for TDE
1) Avoid accidentally deleting the TDE wallet files on Linux
Go to the wallet directory and set the 'immutable' bit:

chattr +i ewallet.p12  ( encrypted wallet )
chattr +i cwallet.sso  ( auto-open wallet )

Using Data Pump with TDE
By default, data in the resulting dump file will be in clear text, even the encrypted column data.
To protect your encrypted column data in the data pump dump files, you can password-protect your dump file when exporting the table.

expdp arup/arup ENCRYPTION_PASSWORD=pooh tables=accounts

Importing a password-protected dump file

impdp arup/arup ENCRYPTION_PASSWORD=pooh  tables=accounts table_exists_action=replace

References
https://easyteam.fr/oracle-tde-12c-concepts-and-implementation/
http://www.catgovind.com/oracle/how-to-enable-transparent-data-encryption-tde-in-oracle-database/
TDE best practices in EBS https://www.oracle.com/technetwork/database/security/twp-transparent-data-encryption-bes-130696.pdf
http://psoug.org/reference/tde.html
https://www.oracle.com/database/technologies/security/advanced-security.html
https://www.oracle.com/technetwork/database/security/tde-faq-093689.html
Encrypting Tablespaces By Arup Nanda https://www.oracle.com/technetwork/articles/oem/o19tte-086996.html
Encrypt sensitive data transparently without writing a single line of code https://blogs.oracle.com/oraclemagazine/transparent-data-encryption-v2
https://blogs.oracle.com/stevenchan/using-fast-offline-conversion-to-enable-transparent-data-encryption-in-ebs
http://expertoracle.com/2017/12/21/oracle-ebs-r12-and-tde-tablespace-encryption/
https://juliandontcheff.wordpress.com/2017/06/02/twelve-new-features-for-cyber-security-dbas/
Transparent Data Encryption (TDE) (Doc ID 317311.1)
Master Note For Transparent Data Encryption ( TDE ) (Doc ID 1228046.1)
Quick TDE Setup and FAQ (Doc ID 1251597.1)
Managing TDE Wallets in a RAC Environment (Doc ID 567287.1)
How to Export/Import with Data Encrypted with Transparent Data Encryption (TDE) (Doc ID 317317.1)
Using Transportable Tablespaces to Migrate Oracle EBS Release 12.0 or 12.1 Using Oracle Database 12.1.0 (Doc ID 1945814.1)
Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12 (Doc ID 732764.1)

What is Table Fragmentation in Oracle ? ( Finding & Removing)

Fragmentation/ Reclaiming Wasted Space by Table Shrinking

select owner, table_name,
       round(((blocks*8)/1024/1024),2) "Size(GB)",
       round(((num_rows*avg_row_len/1024))/1024/1024,2) "Actual_Data(GB)",
       round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) "Wasted_Space(GB)",
       round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) "Reclaimable Space(%)",
       partitioned
from   dba_tables
where  (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
and    round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) > 20  -- more than 20% fragmentation
and    round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) > 5          -- wasted space is more than 5 GB
order by 5 desc;
OWNER    TABLE_NAME                      Size(GB)  Actual_Data(GB)  Wasted_Space(GB)  Reclaimable Space(%)  PARTITIONED
XLA      XLA_DISTRIBUTION_LINKS          375.6     246.22           129.37            24.44                 YES
ZX       ZX_LINES_DET_FACTORS            162.61    108.24           54.36             23.43                 NO
AR       RA_CUST_TRX_LINE_GL_DIST_ALL    122.6     74.72            47.88             29.05                 NO
ASO      ASO_ORDER_FEEDBACK_T            54.91     8.16             46.75             75.14                 NO
XLA      XLA_AE_LINES                    82.74     55.29            27.45             23.17                 YES
APPLSYS  WF_ITEM_ACTIVITY_STATUSES_H     27        1.92             25.07             82.87                 NO
AR       RA_CUSTOMER_TRX_LINES_ALL       57.44     36.49            20.96             26.48                 NO
ZX       ZX_LINES                        34.06     22.78            11.28             23.12                 NO
AR       AR_RECEIVABLE_APPLICATIONS_ALL  21.81     12.97            8.84              30.54                 NO
XXCUST   XXCUST_INTERFACE_LINES_ALL      15.39     8.9              6.49              32.16                 NO
APPLSYS  WF_JAVA_DEFERRED                5.08      0                5.08              90                    NO
select owner, segment_name TABLE_NAME, segment_type, round(bytes/1024/1024/1024,2) "Size(GB)"
from   dba_segments
where  segment_type = 'TABLE'
and    owner = 'APPLSYS'
and    round(bytes/1024/1024/1024,2) > 1
order by 4 desc;
OWNER    TABLE_NAME                    SEGMENT_TYPE  Size(GB)
APPLSYS  FND_LOG_MESSAGES              TABLE         86.08
APPLSYS  WF_ITEM_ACTIVITY_STATUSES_H   TABLE         27.11
APPLSYS  WF_ITEM_ATTRIBUTE_VALUES      TABLE         20.38
APPLSYS  FND_FLEX_VALUE_HIERARCHI_A    TABLE         10
APPLSYS  WF_ITEM_ACTIVITY_STATUSES     TABLE         6.95
APPLSYS  WF_NOTIFICATION_ATTRIBUTES    TABLE         5.35
APPLSYS  WF_JAVA_DEFERRED              TABLE         5.1
APPLSYS  WF_DEFERRED                   TABLE         4.8
APPLSYS  FND_CONCURRENT_REQUESTS       TABLE         3.51
APPLSYS  FND_DOCUMENTS_TL              TABLE         2.95
APPLSYS  FND_LOGINS                    TABLE         2.16
APPLSYS  FND_SEARCHABLE_CHANGE_LOG     TABLE         1.97
APPLSYS  WF_NOTIFICATIONS              TABLE         1.79
APPLSYS  DR$FND_LOBS_CTX$I             TABLE         1.48
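Once a fragmented candidate is confirmed, the wasted space can usually be reclaimed with an online segment shrink (or an ALTER TABLE ... MOVE plus index rebuilds). A minimal sketch using a placeholder table XX_OWNER.XX_FRAG_TAB; segment shrink requires an ASSM tablespace and does not work for tables with LONG columns, and for seeded EBS tables the reorg best-practice reference below should be followed:

alter table XX_OWNER.XX_FRAG_TAB enable row movement;
alter table XX_OWNER.XX_FRAG_TAB shrink space cascade;    -- cascade also shrinks dependent indexes
alter table XX_OWNER.XX_FRAG_TAB disable row movement;
exec dbms_stats.gather_table_stats('XX_OWNER','XX_FRAG_TAB', cascade => true);   -- re-gather stats afterwards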

References

http://expertoracle.com/2017/05/07/reorg-tables-and-indexes-in-oracle-ebs-applications-best-practices/
http://select-star-from.blogspot.com/2013/09/how-to-check-table-fragmentation-in.html
https://orahow.com/how-to-find-and-remove-table-fragmentation-in-oracle-database/
https://www.oracle.com/technetwork/articles/database/fragmentation-table-shrink-tips-2330997.html

#tuning