New Commands (Just a Collection)

alter database open resetlogs upgrade ;

Working with Datapump ?

Let's look at the topics below in detail


Data Pump Best Practices
Don't invoke expdp using SYS
Purge the recycle bin before export (user/table/DB level)
** PARALLELISM doesn't work with LOB columns
How to use/estimate PARALLEL parameter in Datapump?
How to Check/Monitor DATAPUMP JOBS?
** Data Pump uses two different load methods during import (impdp).

Data Pump Best Practices

pga_aggregate_target --> Set this high; it will improve Data Pump performance.
For export consistency use:-
------------------------------
FLASHBACK_TIME=SYSTIMESTAMP -- this increases UNDO requirements for the duration of the export

compression_algorithm=medium --> Recommended option from 12c. Similar characteristics to BASIC, but uses a different algorithm

Always set parameters:-
------------------------------
METRICS=YES
EXCLUDE=STATISTICS
LOGTIME=ALL -->Timestamps   (From 12C)
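
A minimal expdp parameter-file sketch pulling the recommendations above together; the directory, dump file and schema names are hypothetical placeholders, and the COMPRESSION settings assume the Advanced Compression option is licensed.

-- expdp_scott.par (names are illustrative only)
DIRECTORY=dp_dir
DUMPFILE=expdp_scott_%U.dmp
LOGFILE=expdp_scott.log
SCHEMAS=SCOTT
PARALLEL=4
METRICS=YES
EXCLUDE=STATISTICS
LOGTIME=ALL
FLASHBACK_TIME=SYSTIMESTAMP
COMPRESSION=ALL
COMPRESSION_ALGORITHM=MEDIUM

Run it with: expdp system PARFILE=expdp_scott.par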

Speed up Data Pump:-
------------------------------
PARALLEL=n
EXCLUDE=STATISTICS on export
EXCLUDE=INDEXES on import
1. Initial impdp with EXCLUDE=INDEXES 
2. Second impdp with INCLUDE=INDEXES SQLFILE=indexes.sql 
3. Split indexes.sql into multiple SQL files and run in multiple sessions

– Set the instance parameters COMMIT_WAIT=NOWAIT and COMMIT_LOGGING=BATCH during full imports (see the sketch below)
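
A hedged sketch of the index-splitting steps and the batch-commit settings above; directory, dump file and schema names are hypothetical, and the COMMIT_* settings trade durability for speed, so revert them after the import.

-- Instance-level settings (revert after the full import)
alter system set commit_wait='NOWAIT';
alter system set commit_logging='BATCH';

# Step 1: load data without indexes
impdp system DIRECTORY=dp_dir DUMPFILE=expdp_scott_%U.dmp SCHEMAS=SCOTT EXCLUDE=INDEXES LOGFILE=imp_noidx.log
# Step 2: extract only the index DDL (no data is loaded)
impdp system DIRECTORY=dp_dir DUMPFILE=expdp_scott_%U.dmp SCHEMAS=SCOTT INCLUDE=INDEXES SQLFILE=indexes.sql
# Step 3: split indexes.sql into several scripts and run them in parallel sqlplus sessions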

Direct import via database link (Network bandwidth and CPU bound):-
----------------------------------------------------------------------------------------------------
– Parameter: NETWORK_LINK
Run only impdp on the target system - no expdp necessary
No dump file written, no disk I/O, no file transfer needed

Restrictions of database links apply: does not work with LONG/LONG RAW and certain object types
Performance: Depends on network bandwidth and target's CPUs

Some Commands /Use Cases

remap_tablespace=OLD_TBS:NEW_TBS ==>Move all objects from one tablespace to another
remap_schema=old_schema:new_schema ==> Move an object to a different schema
expdp with content=metadata_only & impdp with remap_schema=A:Z  ==> Clone a user
remap_datafile='/u01/app/oracle/oradata/datafile_01.dbf':'/u01/datafile_01.dbf'  ==> Create your database in a different file structure
transform=pctspace:70, sample=70 --> tells Data Pump to reduce the size of extents to 70% on impdp
transform=disable_archive_logging:Y ==> disable archive logging during import; note that the database-level FORCE LOGGING setting overrides this feature.
sqlfile=x_24112010.sql

EXPDP Filesize : Split or Slice the Dump file into Multiple Directories
expdp srinalla/srinalla job_name=exp_job_multiple_dir  schemas=STHOMAS  filesize=3G dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp logfile=dump.log compression=all parallel=10

While importing, specify the dump files like this:
dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp


Statistics are imported by default.
compression
parallel
cluster (default=Y, from 11gR2; controls parallelization across RAC -- can run on all nodes or only a few nodes based on a service, e.g. service_name=EBS_DP_12)
         Status check: select inst_id, session_type from dba_datapump_sessions;
Commit the import on every row with COMMIT=Y (legacy imp utility).

If COMMIT=Y, Import commits tables containing LONG, LOB, BFILE, ROWID, UROWID,
DATE or Type columns after each row.
		  
Restart
restart the job with a different degree of parallelism, say 4 (earlier it was 6):
Export> parallel=4
Export> START_JOB
Export> continue_client --show progress

import using "table_exists_action=replace" and TABLES=(list of skipped tables)

nohup impdp system/secret NETWORK_LINK=olddb FULL=y  PARALLEL=25 &
impdp system attach
Import> status
Import> parallel=30 << this will increase the parallel processes if you want
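
A short sketch of finding and attaching to a running job; the job name shown is hypothetical (take the real one from DBA_DATAPUMP_JOBS).

select owner_name, job_name, operation, state from dba_datapump_jobs;

impdp system ATTACH=SYS_IMPORT_FULL_01
Import> status
Import> parallel=30
Import> continue_client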

Do not invoke expdp using '/ as sysdba'

Also, do not invoke expdp using '/ as sysdba' -- use the SYSTEM account; see the first Note section here:
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#i1012781

Purge recyclebin before Export , User/Table/DB Level
select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime from dba_recyclebin
where owner = 'XX_DUMMY';
purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";
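
For reference, the purge statement at each level mentioned above (the table name is just an example):

purge recyclebin;        -- user level (current schema)
purge table employees;   -- table level (or use the BIN$... name as above)
purge dba_recyclebin;    -- database level, requires SYSDBA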

** PARALLELISM doesn't work with LOB columns
Parallelism doesn't work because Data Pump serializes the dump when it comes to a LOB table.
The approach should be to export in two passes (see the sketch after the query below):
1) the whole database/schema minus the LOB tables, and
2) the LOB tables separately.
** pga_aggregate_target proved to be the most important change in the overall scheme of things,
because indexes were built towards the end of the job and took 3 times longer
than actually creating the tables and importing the data in this test.
Check LOB columns with the query below:

SELECT  s.tablespace_name ,l.owner,l.table_name,l.column_name,l.segment_name,s.segment_type, round(s.bytes/1024/1024/1024,2) "Size(GB)"
FROM DBA_SEGMENTS s,dba_lobs l
where l.owner = s.owner and l.segment_name = s.segment_name
and l.owner not in ('SYS','SYSTEM','APPS','APPLSYS')
--and round(s.bytes/1024/1024/1024,2)>1
order by s.bytes desc;
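
A minimal sketch of the two-pass export described above, assuming a hypothetical LOB table STHOMAS.DOC_BLOBS identified by the query (on a shell command line the quotes usually need escaping, or put the filter in a parfile):

# Pass 1: everything except the LOB table (PARALLEL helps here)
expdp system SCHEMAS=STHOMAS EXCLUDE=TABLE:"IN ('DOC_BLOBS')" DIRECTORY=dp_dir DUMPFILE=sthomas_nolob_%U.dmp PARALLEL=8 LOGFILE=sthomas_nolob.log

# Pass 2: the LOB table on its own (runs serially regardless of PARALLEL)
expdp system TABLES=STHOMAS.DOC_BLOBS DIRECTORY=dp_dir DUMPFILE=sthomas_lob.dmp LOGFILE=sthomas_lob.log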

Check the links below on how to fix the issue:
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)

How to use/estimate PARALLEL parameter in Datapump?

Before starting any export/import, it is better to run with the ESTIMATE_ONLY parameter first. Divide the estimated size by 250 MB and use the result to decide the PARALLEL value (see the sketch below).
Finally, when using the PARALLEL option, keep the points below in mind:
a. Set the degree of parallelism to 2x the number of CPUs, then tune from there.
b. For Data Pump Export, the PARALLEL value should not exceed the number of dump files.
c. For Data Pump Import, the PARALLEL value should not exceed the number of dump files.
For more details, you can refer to MOS doc 365459.1
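
A quick sketch of the estimate step; the schema name is hypothetical, and the rule of thumb above is then applied to the reported size.

expdp system SCHEMAS=STHOMAS ESTIMATE_ONLY=YES NOLOGFILE=YES
-- e.g. if the estimate comes back around 2 GB, 2048 MB / 250 MB is roughly 8;
-- then cap the value at 2x the CPU count and at the number of dump files.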

How to Check/Monitor DATAPUMP JOBS?

DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS
Monitoring Data Pump http://www.dbaref.com/home/oracle-11g-new-features/monitoringdatapump
Queries to Monitor Datapump Jobs https://databaseinternalmechanism.com/2016/09/13/how-to-monitor-datapump-jobs/
How to delete/remove non executing datapump jobs? https://pavandba.com/2011/07/12/how-to-deleteremove-non-executing-datapump-jobs/
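
A couple of quick monitoring queries based on the views above (filter the LONGOPS query on your own job name):

select owner_name, job_name, operation, state, degree, attached_sessions from dba_datapump_jobs;

select sid, serial#, opname, sofar, totalwork, round(sofar/totalwork*100,1) pct_done
from v$session_longops
where totalwork > 0 and sofar <> totalwork and opname like '%EXPORT%';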

Data Pump uses two different load methods during import (impdp)

  1. Direct load path – this is the main reason why Data Pump import (impdp) is faster than traditional import (imp)
  2. External table path
    But Data Pump cannot always use the direct path due to some restrictions, and because of this, impdp sometimes runs slower than expected.
    Now, in which situations will Data Pump not use the direct path? When any of the following is true:
    1. A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
    2. A domain index exists for a LOB column.
    3. A table is in a cluster.
    4. There is an active trigger on a pre-existing table.
    5. Fine-grained access control is enabled in insert mode on a pre-existing table.
    6. A table contains BFILE columns or columns of opaque types.
    7. A referential integrity constraint is present on a pre-existing table.
    8. A table contains VARRAY columns with an embedded opaque type.
    9. The table has encrypted columns
    10. The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
    – There is an active trigger
    – The table is partitioned
    – A referential integrity constraint exists
    – A unique index exists
    11. Supplemental logging is enabled and the table has at least 1 LOB column.
    Note: Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or re-enabled.
    12. TABLE_EXISTS_ACTION=TRUNCATE is used on an IOT (index-organized table).
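
A hedged pre-check for a couple of the conditions above on pre-existing target tables; the schema name is a placeholder.

-- Active triggers on the target schema's tables
select table_name, trigger_name, status from dba_triggers where table_owner = 'STHOMAS' and status = 'ENABLED';

-- Referential integrity constraints
select table_name, constraint_name from dba_constraints where owner = 'STHOMAS' and constraint_type = 'R';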

References
http://www.orafaq.com/wiki/Datapump
Master Note for Data Pump:MOS Note:1264715.1
For Compatibility and version changes:MOS Note:553337.
Using Oracle’s recycle bin http://www.orafaq.com/node/968
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) (Doc ID 453895.1)
https://mayankoracledba.wordpress.com/2018/01/15/oracle-12c-data-pump-best-practices/
http://jeyaseelan-m.blogspot.com/2016/05/speed-up-expdpimpdp.html
https://dbatricksworld.com/data-pump-export-import-performance-tips/
http://its-all-about-oracle.blogspot.com/2013/06/datapump-expdpimpdp-utility.html
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html
https://rajat1205sharma.wordpress.com/2015/07/03/data-pump-export-import-performance-tips/
How to use PARALLEL parameter in Datapump? https://pavandba.com/2011/07/15/how-to-use-parallel-parameter-in-datapump/
impdp slow with TABLE_EXISTS_ACTION=TRUNCATE https://pavandba.com/2013/03/22/impdp-slow-with-table_exists_actiontruncate/

Transportable Tablespaces(TTS) for Upgrade/Migration

Let's look at the topics below in detail

Why TTS ?  
-->Transportable Tablespaces (TTS)
-->FTEX (TTS + Data Pump)  
-->FTEX using RMAN Backups (Less Down Time) 
High Level Steps for Migration of Data Using TTS
Time Taken: Cross-Platform Tablespace (XTTS) Transport vs. the Export/Import Method
-->Normal TTS Approach 
-->RMAN TTS Approach  
-->12C TTS Enhancement using RMAN backup sets  

[Image: tts-flow]

Why TTS ?
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.

Transportable Tablespaces (TTS)

The Transportable Tablespaces feature has existed since Oracle 8i
– Can be used cross-version; the version transported to must always be equal or higher
– Cross-platform Transportable Tablespaces were introduced in Oracle Database 10g
==> Can be used cross-version and cross-platform
==> Requires tablespaces to be in read-only mode
==> Extra work necessary for everything in SYSTEM/SYSAUX

Full Transportable Export/Import FTEX (TTS + Data Pump)

Transport an entire database in a single operation
– Cross version and cross platform
– Can include the database upgrade
– Combination of TTS for data tablespaces and Data Pump for administrative tablespaces (SYSTEM, SYSAUX ...)
– Supports information from database components such as Spatial,Text, Multimedia, OLAP, etc.
 Full transportable export supported since Oracle 11.2.0.3
 Full transportable import supported since Oracle 12.1.0.1
Relationship of XTTS to Data Pump and Recovery Manager 
XTTS works within the framework of Data Pump and Recovery Manager
(RMAN). Use Data Pump to move the metadata of the objects in the tablespaces
being transported to the target database. 
RMAN converts the datafiles being transported to the endian format of the target platform
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR).
RMAN uses the transportable tablespaces functionality to perform TSPITR. Therefore, any limitations on transportable tablespaces are also applicable to TSPITR.

Learn more at ………
OLL Video : Oracle Database 12c: Transporting Data

FTEX using RMAN Backups (Less Down Time)

Read more at ………
11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)

High Level Steps for Migration of Data Using TTS

Step 1: Check Platform Support And File Conversion Requirement
Step 2: Identify Tablespaces To Be Transported And Verify Self-Containment
Step 3: Check For Problematic Data Types
Step 4: Check For Missing Schemas And Duplicate Tablespace And Object Names
Step 5: Make Tablespaces Read-Only In Source Database
Step 6: Extract Metadata From Source Database
Step 7: Copy Files To Target Server And Convert If Necessary 
Step 8: Import Metadata Into Target Database
Step 9: Copy Additional Objects To Target Database As Desired

Read more at ………
Moving Oracle Databases Across Platforms without Export/Import

Time Taken: Cross-Platform Tablespace (XTTS) Transport vs. the Export/Import Method

Task                                                Export/Import   Tablespace Transport
Export time                                         37 min          2 min
File transfer time                                  8 min           13 min
File conversion time                                n/a             14 min
Import time                                         42 min          2 min
Approximate total time                              87 min          31 min
Export file size                                    4100 MB         640 KB
Target database extra TEMP tablespace requirement   1200 MB         n/a

Normal TTS Approach

1) On the source database
--------------------------------------
Validate the self-contained property:
exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
a) Put TBS in Read Only
b) Export the Metadata
exp FILE=/path/dump_file.dmp LOG=/path/tts_exp.log TABLESPACES=tbs-names TRANSPORT_TABLESPACE=y STATISTICS=none
**************Transfer Datafiles and export file to TARGET *********************
2) on the destination database
--------------------------------------
a) Import the export file.
impdp DUMPFILE=tts.dmp LOGFILE=tts_imp.log DIRECTORY=exp_dir REMAP_SCHEMA=master:scott TRANSPORT_DATAFILES='/path/tts.dbf'
b) Set the tablespaces back to READ WRITE (see the supporting SQL below)
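
The supporting SQL for the read-only/read-write steps and the self-containment check above; the tablespace name is a placeholder.

-- On the source, after DBMS_TTS.TRANSPORT_SET_CHECK:
select * from transport_set_violations;
alter tablespace tbs read only;    -- before exporting the metadata
-- On the destination, after the metadata import:
alter tablespace tbs read write;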

RMAN TTS Approach

1) RMAN transport tablespace  On the source database
**************Transfer Datafiles and export file to TARGET *********************
2) Run Import script created by RMAN on the destination database
   impscrpt.sql (or) impdp command from the file

Read more at ………
RMAN TRANSPORT TABLESPACE By Franck Pachot
RMAN Transportable Tablespace
RMAN TRANSPORT TABLESPACE – Oracle Doc

Why Using RMAN TTS ?

-->RMAN checks that the tablespaces are self-contained
-->Faster
-->No need to put the tablespaces in read-only mode
Using the RMAN TTS feature, the datafiles which contain the actual data can be copied, making the migration faster.
RMAN creates transportable tablespace sets from backups, eliminating the need to put tablespaces in read-only mode.
The RMAN process for creating transportable tablespaces from backup uses the Data Pump Export and Import utilities.
Because RMAN creates the automatic auxiliary instance used for restore and recovery on the same node as the source instance,
there is some performance overhead during the operation of the TRANSPORT TABLESPACE command.

RMAN> transport tablespace tbs_2, tbs_3
   tablespace destination '/disk1/transportdest' --->  Set of Datafiles will be created here with Original Names & export log,Export dump file ,impscrpt.sql will also be created
   auxiliary destination '/disk1/auxdest'
   DATAPUMP DIRECTORY  mypumpdir
   DUMP FILE 'mydumpfile.dmp'
   IMPORT SCRIPT 'myimportscript.sql'
   EXPORT LOG 'myexportlog.log';

12C TTS Enhancement using RMAN backup sets
1) RMAN TTS on the source database

    a) Put TBS in Read Only
   b) RMAN --> BACKUP FOR TRANSPORT (metadata by Data Pump, backup set by RMAN)
      --> convert the platform and the endian format if required
   c) Set TBS back to READ WRITE

****** Transfer backup set & dump files to the target server from the source
2) RMAN TTS on the destination database

   a) Restore foreign tablespace ( Restore by RMAN, Import Metadata by Datapump)
   b) Make TBS in READ-WRITE  
RMAN> backup for transport format '/tmp/stage/tbs1.bkset' datapump format '/tmp/stage/tbs1.dmp' tablespace tbs1;
RMAN> restore foreign tablespace tbs1 
format '/u01/app/oracle/oradata/sekunda/tbs1.dbf' 
from backupset '/tmp/stage/tbs1.bkset' 
dump file from backupset '/tmp/stage/tbs1.dmp';

Read more at ………
Transport Tablespace using RMAN Backupsets in #Oracle 12c
12c How Perform Cross-Platform Database Transport to different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)

References:
How to Move a Database Using Transportable Tablespaces (Doc ID 1493809.1)
How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
Master Note for Transportable Tablespaces (TTS) — Common Questions and Issues (Doc ID 1166564.1)
Transportable Tablespaces
Transportable Tablespaces Tips
Using Transportable Tablespaces for Oracle Upgrades

Transparent Data Encryption(TDE) -Overview

[Image: ebs-tde]

Let's look at the topics below in detail

What is Oracle Transparent Data Encryption (TDE)? 
How do I migrate existing clear data to TDE encrypted data? 
How to find details for encryption/encrypted objects? 
How to Enable TDE in 12C? 
Best Practices for TDE 

What is Oracle Transparent Data Encryption (TDE)?
Oracle Advanced Security Transparent Data Encryption (TDE, from 10gR2)
allows administrators to encrypt sensitive data (e.g. Personally Identifiable Information, or PII),
protecting it from unauthorized access if storage media, backups, or datafiles are stolen, since the data cannot be read without the encryption key.
TDE supports two levels of encryption
1)TDE column encryption ( From 10gR2)
2)TDE tablespace encryption ( From 11gR1)
[Images: tde-col, tde-tablesspace]

TDE uses a two tier encryption key architecture, consisting of a master key and one or more table and/or tablespace keys.
The table and tablespace keys are encrypted using the master key. The master key is stored in the Oracle Wallet.
When TDE column encryption is applied to an existing application table column, a new table key is created and stored in the Oracle data dictionary.
When TDE tablespace encryption is used, the individual tablespace keys are stored in the header of the underlying OS file(s).

How do I migrate existing clear data to TDE encrypted data?

You can migrate existing clear data to encrypted tablespaces or columns either online or offline.
Existing tablespaces can be encrypted online with zero downtime on production systems, or encrypted offline with no storage overhead during a maintenance window.
Online conversion is available in Oracle Database 12.2.0.1 and above.
Offline conversion is available in Oracle Database 11.2.0.4 and 12.1.0.2.
In 11g/12c, you can also use DBMS_REDEFINITION (online table redefinition) to copy existing clear data into a new encrypted tablespace in the background with no downtime.
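
A minimal sketch of the 12.2 online conversion mentioned above; the tablespace name and algorithm are placeholders.

alter tablespace users encryption online using 'AES256' encrypt;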

How to find details for encryption/encrypted objects?
gv$encryption_wallet
V$ENCRYPTED_TABLESPACES
DBA_ENCRYPTED_COLUMNS

select * from gv$encryption_wallet; ---(gv$wallet)
select * from dba_encrypted_columns;
select table_name from dba_tables where tablespace_name in (select tablespace_name from dba_tablespaces where encrypted='YES');
select tablespace_name, encrypted from dba_tablespaces where encrypted='YES';
select et.inst_id, ts.name, et.encryptionalg, et.encryptedts from v$encrypted_tablespaces et, ts$ ts where et.ts# = ts.ts#;

How to Enable TDE in 12C?

Step 1: Set KEYSTORE location in $TNS_ADMIN/sqlnet.ora
Step 2: Create a Password-based KeyStore
Step 3: Open the KEYSTORE
Step 4: Set Master Encryption Key
Step 5: Encrypt your Data
     --> Make sure the COMPATIBLE parameter is 11.2.0 or higher.
     5.1 Encrypt columns in a table
     5.2 Encrypt a tablespace
	 --> Note: Encrypting an existing tablespace in place is not supported here (do an online/offline conversion instead).

Step 1: Set KEYSTORE location in $TNS_ADMIN/sqlnet.ora

==> Create the directory:
mkdir -p /u01/app/oracle/admin/SLOB/tde_wallet
==> Declare it in sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/SLOB/tde_wallet)))

Step 2: Create a Password-based KeyStore

  ==> sqlplus sys/password as sysdba
        administer key management create keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;

Step 3: Open the KEYSTORE

administer key management set keystore open identified by oracle;

Step 4: Set Master Encryption Key

administer key management set key identified by oracle with backup;
  ==> Optionally, create an auto-login wallet:
Administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;

Step 5: Encrypt your Data
–> Make sure the COMPATIBLE parameter is 11.2.0 or higher.
===> 5.1 Encrypt columns in a table

create table job(title varchar2(128) encrypt);  -- Create a table with an encrypted column; the default algorithm is AES192
create table emp(name varchar2(128) encrypt using '3DES168', age NUMBER ENCRYPT NO SALT);  -- Create columns with a specific encryption algorithm
alter table employees add (new_name varchar2(40) ENCRYPT);  -- Add an encrypted column to an existing table
alter table employees rekey using '3DES168';  -- Change the encryption key/algorithm of an existing column

===> 5.2 Encrypt Tablespace
Note: Encrypting an existing tablespace in place is not supported here (do an online/offline conversion instead).

CREATE TABLESPACE D_CRYPT DATAFILE '+DATA' SIZE 10G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
Alter table TABLE_WITH_PAN move online tablespace D_CRYPT;

For 11g, We use orapki wallet & encryption key commands are used as below

orapki wallet create -wallet <wallet location> -pwd "<wallet password>" -auto_login
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY  "welcome1";

Best Practices for TDE
1) Avoid accidentally deleting the TDE wallet files on Linux
Go to the wallet directory and set the 'immutable' bit:

chattr +i ewallet.p12  ( encrypted wallet )
chattr +i cwallet.sso  ( auto-open wallet )

Using Data Pump with TDE
By default, data in the resulting dump file will be in clear text, even the encrypted column data.
To protect your encrypted column data in the data pump dump files, you can password-protect your dump file when exporting the table.

expdp arup/arup ENCRYPTION_PASSWORD=pooh tables=accounts

Importing a password-protected dump file

impdp arup/arup ENCRYPTION_PASSWORD=pooh  tables=accounts table_exists_action=replace

References
https://easyteam.fr/oracle-tde-12c-concepts-and-implementation/
http://www.catgovind.com/oracle/how-to-enable-transparent-data-encryption-tde-in-oracle-database/
TDE best practices in EBS https://www.oracle.com/technetwork/database/security/twp-transparent-data-encryption-bes-130696.pdf
http://psoug.org/reference/tde.html
https://www.oracle.com/database/technologies/security/advanced-security.html
https://www.oracle.com/technetwork/database/security/tde-faq-093689.html
Encrypting Tablespaces By Arup Nanda https://www.oracle.com/technetwork/articles/oem/o19tte-086996.html
Encrypt sensitive data transparently without writing a single line of code https://blogs.oracle.com/oraclemagazine/transparent-data-encryption-v2
https://blogs.oracle.com/stevenchan/using-fast-offline-conversion-to-enable-transparent-data-encryption-in-ebs
http://expertoracle.com/2017/12/21/oracle-ebs-r12-and-tde-tablespace-encryption/
https://juliandontcheff.wordpress.com/2017/06/02/twelve-new-features-for-cyber-security-dbas/
Transparent Data Encryption (TDE) (Doc ID 317311.1)
Master Note For Transparent Data Encryption ( TDE ) (Doc ID 1228046.1)
Quick TDE Setup and FAQ (Doc ID 1251597.1)
Managing TDE Wallets in a RAC Environment (Doc ID 567287.1)
How to Export/Import with Data Encrypted with Transparent Data Encryption (TDE) (Doc ID 317317.1)
Using Transportable Tablespaces to Migrate Oracle EBS Release 12.0 or 12.1 Using Oracle Database 12.1.0 (Doc ID 1945814.1)
Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12 (Doc ID 732764.1)

Configuring Oracle SGA/PGA ?

Let's look at the topics below in detail


What is SGA/PGA ? 
SGA/PGA Suggestions (Best Practices)
Configuring SHMMAX and SHMALL for Oracle in Linux
Linux Huge Pages for Oracle (For Large SGA On Linux)
Disabling Transparent HugePages 
SGA/PGA Suggestions ( From AWR/ADDM)
SGA/PGA Parameters (What are they?)
SGA/PGA Details ( From v$views)

[Image: sga]

SGA is shared memory, so it is allocated when the database is started.
PGA (Program Global Area) is private memory allocated to individual processes; it cannot be pre-allocated and is allocated in RAM on an as-needed basis.

SGA/PGA Suggestions (Best Practices)

OS Reserved RAM                  --> 10% of RAM for Linux & 20% of RAM for Windows
Oracle Database Connections RAM  --> pga_aggregate_target (RAM regions for sorting and hash joins)
Oracle SGA Sizing for RAM        --> sga_max_size/sga_target
SGA Allocation (Capacity Planning) example:
With RAM SIZE = 484 MB, 45% of RAM is roughly 216 MB
--> Split the 216 MB across SGA, PGA and background processes
--> Fixed background processes require 40 MB, SGA = 160 MB, PGA = 16 MB

Configuring SHMMAX and SHMALL for Oracle in Linux

Shared memory is important for the Oracle Database System Global Area (SGA).
Shared memory is nothing but part of Unix IPC System (Inter Process Communication) maintained by kernel
where multiple processes share a single chunk of memory to communicate with each other.
If there is insufficient shared memory, the Oracle database instance will not start. Set SHMMAX and SHMALL large enough to cover the SGA requirements.

[Image: shm]

SHMALL ==> total size of shared memory segments system-wide, set in pages (guideline here: ~40% of RAM).
SHMMAX ==> the maximum size (in bytes) of a single shared memory segment.
If SHMMAX is set incorrectly, for example too low, you may receive the following errors:
ORA-27123: unable to attach to shared memory segment.
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device.
Note that these kernel parameters are not relevant when you use /dev/shm POSIX shared memory (Oracle AMM).

Note :

SHMALL is SHMMAX divided by PAGE_SIZE, e.g. 1073741824/4096 = 262144.
Make SHMALL smaller than free RAM to avoid paging.
32-bit servers: 3 GB
64-bit servers: half the RAM
Oracle recommends that SHMALL should be the sum of the SGA regions divided by 4096 (the Linux page size).
--> Oracle 11g uses AMM by default, which relies on POSIX shared memory (/dev/shm maps shared memory to files using a virtual shared-memory filesystem), uses 4 KB pages (the default PAGE_SIZE in Linux) and is set to 50% of RAM (allocated on demand and can be swapped).
For performance reasons, any Oracle database that uses more than 4 GB of SGA should use kernel HugePages.
Note: Linux kernel HugePages are set up manually, use 2 MB (2048 KB) pages, cannot be swapped and are reserved at system startup.
AMM uses either Sys V shared memory (ipcs), including kernel HugePages, or POSIX /dev/shm for the SGA, but not both at the same time.
oracle@srinalla-db-1:~ $ grep -i shm /etc/sysctl.conf|grep -v '^#';ls -lrth /proc/sys/kernel/shm*;cd /proc/sys/kernel/;cat shmall;cat shmmni;cat shmmax
kernel.shmall = 154618822656
kernel.shmmax = 154618822656
kernel.shmmni = 4096
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmall
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmmni
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmmax
154618822656
4096
154618822656
oracle@srinalla-db-1:~ $ cat /etc/*-release ;ipcs -l
Red Hat Enterprise Linux Server release 5.11 (Tikanga)
------ Shared Memory Limits --------
max number of segments = 4096  /* SHMMNI  */
max seg size (kbytes) = 150994944 /* SHMMAX  */
max total shared memory (kbytes) = 618475290624 /* SHMALL  */
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 4096
max semaphores per array = 256
max semaphores system wide = 32000
max ops per semop call = 100
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 2878
max size of message (bytes) = 8192
default max size of queue (bytes) = 65535

Linux Huge Pages for Oracle (For Large SGA On Linux)

With HugePages, the Linux memory page size is set at 2MB (instead of the default 4K).
This will improve OS performance when running Oracle databases with large SGA sizes.
For 11g, additionally enable the ASMM feature; refer to Linux HugePages for more.

The AMM and HugePages are not compatible. One needs to disable AMM on 11g to be able to use HugePages. See Document 749851.1 for further information.
ebsdba@ebsdb-prd-01:~ $ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 kB
Oracle Linux: Script to find Recommended Linux HugePages (Doc ID 401749.1)
HugePages on Linux: What It Is... and What It Is Not... (Doc ID 361323.1)
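
A rough sizing sketch, assuming a 100 GB SGA and the 2 MB page size shown above; Doc ID 401749.1 provides a script that computes this for you, so treat these numbers as illustrative.

# 100 GB SGA / 2 MB per page is about 51200 pages; add a small margin
vm.nr_hugepages = 51300          # /etc/sysctl.conf, then reboot or sysctl -p
# Also raise the memlock limit (in KB) to at least the HugePages allocation
oracle soft memlock 105906176    # /etc/security/limits.conf
oracle hard memlock 105906176
# Verify after instance startup:
grep Huge /proc/meminfo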

Disabling Transparent HugePages 
Starting from RHEL6/OL6, Transparent HugePages are implemented and enabled by default. 
This is causing node reboots in RAC installations and performance problems on both single instance and RAC installations.
Oracle recommends disabling Transparent HugePages on all servers running Oracle databases, as described in this MOS note (Doc ID 1557478.1)
Check the current setting as below:
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
#
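
One common way to disable Transparent HugePages per Doc ID 1557478.1 is via a kernel boot parameter; the exact grub file path varies by OS release, so treat the path below as an assumption and follow the note for your platform.

# Append to the kernel boot line (e.g. in /etc/default/grub or /etc/grub.conf), rebuild the grub config and reboot
transparent_hugepage=never
# After the reboot, the check above should show: always madvise [never]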
SGA/PGA Suggestions (From AWR/ADDM)
1) Undersized SGA findings in the ADDM report
2) SGA/PGA advice views:
   select * from v$SGA_TARGET_ADVICE;
   select * from v$PGA_TARGET_ADVICE;
   (Remember, bigger does not necessarily mean better/faster.)

SGA/PGA Parameters (What are they?)

SGA

Some parameters to consider in SGA sizing include

SGA_MAX_SIZE --> For dynamic allocation, set this higher than SGA_TARGET.
SGA_TARGET --> New in 10g. Automatically sizes the SGA components dynamically; can be increased up to SGA_MAX_SIZE. A change in its value affects only the automatically sized components.
The SGA includes the (redo) log buffer, the shared pool, Java pool, streams pool, buffer cache, keep/recycle caches,
and, if they are specified, the non-standard block size caches.
--------------> Usually, sga_max_size and sga_target will be set to the same value.
DB_CACHE_SIZE --> RAM buffer for the data buffers (DB_nK_CACHE_SIZE for non-standard block sizes).
SHARED_POOL_SIZE --> RAM buffer for Oracle's library cache and dictionary cache.
LARGE_POOL_SIZE --> For RMAN and parallel queries; parallel execution allocates buffers out of the large pool only when parallel_automatic_tuning=true.
LOG_BUFFER --> RAM buffer for redo logs.

PGA

PGA_AGGREGATE_LIMIT --> (12c) Hard limit on total PGA per instance; cannot be set below PGA_AGGREGATE_TARGET.
** To revert to the pre-12c behavior, set the parameter value to 0.

PGA_AGGREGATE_TARGET --> (9i) Soft limit (target) on the total PGA used by the instance.
_pga_max_size --> Hidden parameter that limits the PGA work-area memory of a single process.

SGA/PGA Details ( From v$views)

set lines 200 pages 300 
Select Round(sum(bytes)/1024/1024,0) "Total Free memory(MB)" From V$sgastat Where Name Like '%free memory%'; 
Select POOL, Round(bytes/1024/1024,0) "Free memory(MB)" From V$sgastat Where Name Like '%free memory%';
show parameter sga ;
show parameter pga;
select NAME,round(BYTES/(1024*1024*1024),0) as "Size(GB)",round(BYTES/(1024*1024),0) as "Size(MB)" ,RESIZEABLE from v$sgainfo order by 2 desc;
Total Free memory(MB)
---------------------
5117
SQL>
POOL Free memory(MB)
------------ ---------------
shared pool 2143
large pool 706
java pool 1756
streams pool 512
SQL>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga boolean FALSE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 100G
sga_target big integer 85G
unified_audit_sga_queue_size integer 1048576
SQL>
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit big integer 50G
pga_aggregate_target big integer 30G
SQL>
NAME Size(GB) Size(MB) RES
-------------------------------- ---------- ---------- ---
Maximum SGA Size 100 102400 No
Buffer Cache Size 70 71936 Yes
Free SGA Memory Available 15 15360
Shared Pool Size 12 11776 Yes
Startup overhead in Shared Pool 5 5078 No
Java Pool Size 2 1792 Yes
Streams Pool Size 1 512 Yes
Large Pool Size 1 768 Yes
In-Memory Area Size 0 0 No
Shared IO Pool Size 0 256 Yes
Data Transfer Cache Size 0 0 Yes
Granule Size 0 256 No
Redo Buffers 0 249 No
Fixed SGA Size 0 7 No
14 rows selected.

References:

Oracle PGA behavior
PGA and SGA sizing on linux
Monitoring Oracle SGA & PGA Memory Changes
Oracle SGA Sizing
Optimal SHMMAX for Oracle
Monitoring SGA (Free Memory) Using v$sgastat
Get confused about how to calculate SHMMAX and SHMALL on Linux.
Configuring SHMMAX and SHMALL for Oracle in Linux
https://docs.oracle.com/database/121/LADBI/app_manual.htm#LADBI7864
Configuring HugePages for Oracle on Linux (x86-64)

#tuning

Oracle SQL Plan Changed?

Let's look at the topics below in detail


Why does the execution plan change? 
How to fix Bad SQL Plan? 
Image flow of SQL Profile & SQL Plan BaseLine
Create SQL Profile ( Considering SQL Tuning Adviser Suggestions)
How to Create SQL Profile ? ( Which script is used?)

How to fix a bad SQL plan?

You can fix the SQL plan with either a SQL Profile or a SQL Plan Baseline.
Let's take a look at what they are and how they differ.

SQL profiles and SQL plan baselines help improve the performance of SQL statements by ensuring that the optimizer uses only optimal plans.
==> Baselines define the set of execution plans that can be used by each query,
    while SQL Profiles only provide additional information to "push" the optimizer to favor one plan or another.
==>SQL plan baselines are proactive, whereas SQL profiles are reactive.
==>SQL plan baselines reproduce a specific plan, whereas SQL profiles correct optimizer cost estimates.
SQL plan baselines prevent the optimizer from using suboptimal plans in the future.
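
As a complement to the SQL Profile flow below, here is a hedged sketch of pinning a known good plan with a baseline via DBMS_SPM, reusing the sql_id and plan_hash_value from the example later in this section:

declare
  l_plans pls_integer;
begin
  -- Load the good plan from the cursor cache into a fixed SQL plan baseline
  l_plans := dbms_spm.load_plans_from_cursor_cache(
               sql_id          => '8ap1zyhp2q7xd',
               plan_hash_value => 2141716862,
               fixed           => 'YES');
  dbms_output.put_line('Plans loaded: ' || l_plans);
end;
/

Note: the plan must still be in the cursor cache for this call; otherwise load it from an SQL tuning set or AWR first.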

Image flow of SQL Profile & SQL Plan Baseline

[Image: sqltsqlbase]

Create SQL Profile ( Considering SQL Tuning Adviser Suggestions)

1) Run SQL Tuning Adviser(sqltrpt.sql/OEM) and get the suggestions for available best plan
2) Create SQL Profile using SQLT Script (coe_xfr_sql_profile.sql)
3) Flush the SQL ID from memory (cancel any jobs related to the SQL ID, then flush)
4) Now you could see the new plan is reflected ( From gv$sql) 

This is the best plan available for this SQL ID, so we force the good plan by creating a SQL Profile.

PLAN_HASH_VALUE  AVG_ET_SECS
---------------  -----------
     2538395789        0.437   --> not reproducible
     2141716862      100.932   --> last seen 2018-12-03/17:00:03, original plan
      420475214     4898.131   --> current plan

How to Create SQL Profile ? ( Which script is used?)

Using coe_xfr_sql_profile.sql we can create it (no installation needed; just download SQLTXPLAIN (SQLT) -- refer to the Download SQLT section).

SQL >  @coe_xfr_sql_profile.sql
SQL> @/dba/srinalla/scripts/sqlt/sqlt/utl/coe_xfr_sql_profile.sql
Parameter 1:
SQL_ID (required)

Enter value for 1: 8ap1zyhp2q7xd

PLAN_HASH_VALUE AVG_ET_SECS
--------------- -----------
     2538395789        .437
     2819292650        .599
     3897290700       4.068
     3111005336        4.21
     3446519203         6.4

Parameter 2:

PLAN_HASH_VALUE (required)
Enter value for 2: 2141716862

Values passed to coe_xfr_sql_profile:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_ID : "8ap1zyhp2q7xd"
PLAN_HASH_VALUE: "2141716862"

Execute coe_xfr_sql_profile_8ap1zyhp2q7xd_2141716862.sql
on TARGET system in order to create a custom SQL Profile
with plan 2141716862 linked to adjusted sql_text.

Now, You can create the SQL Profile with above Script

SQL> @coe_xfr_sql_profile_8ap1zyhp2q7xd_2141716862.sql
PL/SQL procedure successfully completed.
SQL> WHENEVER SQLERROR CONTINUE
SQL> SET ECHO OFF;
            SIGNATURE
---------------------
   582750245629242078
           SIGNATUREF
---------------------
   582750245629242078
  ... manual custom SQL Profile has been created 

FROM SQL Tuning Advisor, this can be run from the DB node as below (or from OEM Cloud Control):
SQL> @?/rdbms/admin/sqltrpt.sql

Specify the Sql id
~~~~~~~~~~~~~~~~~~
Enter value for sqlid: 8ap1zyhp2q7xd
Sql Id specified: 8ap1zyhp2q7xd

Tune the sql
~~~~~~~~~~~~
GENERAL INFORMATION SECTION
Tuning Task Name    : TASK_313228
Tuning Task Owner   : SYS
Workload Type       : Single SQL Statement
Scope               : COMPREHENSIVE
Time Limit(seconds) : 1800
Completion Status   : COMPLETED
Started at          : 12/10/2018 02:09:31
Completed at        : 12/10/2018 02:16:37

Schema Name: APPS
SQL ID     : 8ap1zyhp2q7xd
SQL Text : SELECT /*+ leading (gjl gjh) */ GJL.EFFECTIVE_DATE ACC_DATE,
/
FINDINGS SECTION (3 findings)
1- SQL Profile Finding (see explain plans section below)
 -------------------------------------------------------- 
A potentially better execution plan was found for this statement.
Recommendation (estimated benefit: 82.16%)
 ------------------------------------------ -
 Consider accepting the recommended SQL profile. 
execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_313228',task_owner => 'SYS', replace => TRUE); 
id plan hash last seen elapsed (s) origin note 
-- ---------- -------------------- ------------ --------------- -------------- 
1 3013459780 2018-06-03/23:58:03 2.674 STS not reproducible 
2 3897290700 2018-12-06/14:00:54 7.268 AWR not reproducible 
3 2141716862 2018-12-03/17:00:03 22.321 AWR original plan
 4 1462347114 2018-12-06/17:00:38 50.572 AWR not reproducible 
5 1471114134 2018-12-07/16:00:04 60.236 AWR not reproducible

3- Alternative Plan Finding
----------------------------------------------
Some alternative execution plans for this statement were found by searching the system's real-time and historical performance data. The following table lists these plans ranked by their average elapsed time. 
See section "ALTERNATIVE PLANS SECTION" for detailed information on each plan.

References

Database SQL Tuning Guide
Using Oracle baselines you can fix the sql plan for a SQLID
How To Improve SQL Statements Performance: Using SQL Plan Baselines
SQL Profiles Vs SQL Plan Baselines? by Girish Kumar
What is the difference between SQL Profiles and SQL Plan Baselines?
One Easy Step To Get Started With SQL Plan Baselines
2 Useful Things To Know About SQL Plan Baselines
Parsing SQL Statements in Oracle

#tuning

EBS on OCI ( My Notes)

Oracle Cloud Infrastructure (OCI) is Oracle's IaaS offering.
There are many cloud infrastructure providers, like Amazon (AWS), Oracle (OCI), Microsoft (Azure), Google (Google Cloud).
What’s New in Oracle Cloud Infrastructure

EBS (R12) Deployment Options on Cloud
EBS Middle/Application Tier
The Oracle EBS middle tier can only be deployed on the IaaS service model; within IaaS, it runs either on Oracle Cloud Infrastructure (OCI) or on OCI Classic.
EBS Database Tier
The Oracle EBS database tier can be deployed on either the IaaS or the PaaS service model; within PaaS, it is Database Cloud Service (DBCS). DBCS has a few deployment options, of which two are supported for EBS (R12) on Cloud: Database as a Service (DBaaS) and Exadata Cloud Service (ExaCS).

Metalink EBS on Oracle Cloud

OLL
OCI Study Guide *********
Just-in-Time Videos

OCI Introduction
OCI -Foundation
OCI Advanced
Tips for OCI Architect Certification
Certification Path

White Paper: EBS Deployment on OCI ****

Cloud On boarding Guide

OCI Blog

EBS on OCI Blog Posts
Cloud computing concepts (HA, DR, Security), regions, availability domains, OCI terminology and services, networking, databases, load balancing, IAM, DNS, FASTCONNECT, VPN, Compartments, tagging, Terraform, with focus on how to use it with OCI and Exadata

OCI for Apps DBA **
EBS (R12) on Cloud: Architects Perspective ****
EBS (R12) On Cloud (OCI): High Level Steps *****
EBS Cloud Admin Tool
EBS Cloud Manager

The EBS Cloud Admin Tool on OCI has been superseded by EBS Cloud Manager
EBS Cloud Manager
Provisioning –> 2 options (Simple & Advanced)
Migration –> Lift and Shift using Cloud Manager
Cloning –>
Deletion
Cloud Service Model: SaaS | PaaS | IaaS
EBS (R12) On Cloud Deployment Architecture
Role of Apps DBA in Cloud

OCI vs OCI Classic
Storage

Summary

Knowledge of Oracle Cloud (OCI & DBCS), EBS-OCI Lift & Shift
	Took Oracle Cloud internal training on OCI (IaaS) and DBCS (PaaS)
	High-level understanding of core OCI fundamentals
	Familiar with EBS-OCI Lift & Shift, high-level deployment, and tools like EBS Cloud Manager (EBS Cloud Admin Tool)

INTERNAL TRAININGS & KNOWLEDGE

Topic: Oracle Cloud offerings for EBS (OCI & DBCS - DBaaS/ExaCS)
Mode: Online, self-paced
Role: Cloud EBS DBA

Gained Knowledge in following Areas

•	OCI (IaaS) & DBCS (ExaCS)
•	Knowledge of OCI: infrastructure, compute, database, networking (VCN, VPN, IPSec), storage services (FSS), IAM, and VPN/IPSec tunnel functionality
•	Knowledge of DBCS (DBaaS & ExaCS) and the Cloud Backup Storage Service
•	Basic understanding of migrating EBS to OCI (Lift & Shift)
•	Familiar with the high-level steps of deploying EBS on OCI
•	Knowledge of the EBS Cloud Admin Tool
•	Familiar with installing and configuring EBS Cloud Manager
•	Knowledge of EBS Cloud Manager (provisioning, migration, cloning, deletion)

#ebs-cloud, #notes