Inspiration Collection from Social Media
What this generation does not know: joyful, beautiful memories for our generation.
Interested in Data Science?
Just exploring a new skill.
This is being updated along with my learning path...
How I Started with Data Science
Simple & easily understandable concepts in the Machine Learning tutorial by Javatpoint.com
Python Essentials for Data Scientists by Kevin Markham
(Learn the key Python features that will help you to be productive as a data scientist!)
Easier Data Analysis with pandas (video series)
Introduction to Machine Learning with scikit-learn by Kevin Markham (free 4-hour course!)
Building an Effective Machine Learning Workflow with scikit-learn
(Become fluent in scikit-learn & get better Machine Learning results faster!)
Learning To-do
FAQ: Oracle E-Business Suite and the Oracle Multitenant Architecture (Doc ID 2567105.1) *
Document 2552181.1, Interoperability Notes: Oracle E-Business Suite Release 12.2 with Oracle Database 19c
- Oracle E-Business Suite Performance Best Practices (Doc ID 2528000.1)
- Using Database Partitioning with Oracle E-Business Suite (Doc ID 554539.1)
- Oracle Database Initialization Parameters for Oracle E-Business Suite Release 12 (Doc ID 396009.1)
Learn about Oracle E-Business Suite 12.2
Module : Planning your Oracle E-Business Suite Upgrade to Release 12.2
https://learn.oracle.com/ols/course/50662/80641Module : Strategies for Maintenance and Online Patching for Oracle E-Business Suite 12.2
https://learn.oracle.com/ols/course/50662/75306Module : Oracle E-Business Suite 12.2 Technical Upgrade Overview and Process Flow
https://learn.oracle.com/ols/course/50662/80643Module : Oracle E-Business Suite 12.2 Upgrade Best Practices to Minimize Downtime
https://learn.oracle.com/ols/course/50662/80644Module : Migrating and Managing Customizations for Oracle E-Business Suite 12
https://learn.oracle.com/ols/course/50662/80642*** Performance Best Practices for Oracle E-Business Suite on Oracle Cloud Infrastructure and On-Premises (MOS Note 2528000.1)
The best support article covering all EBS performance-related info:
https://techgoeasy.com/step-step-upgrade-steps-r12-2/
https://techgoeasy.com/oracle-tutorials/oracle-performance-tuning/
https://techgoeasy.com/downloads/
Personal Finance
Select your Time Horizon. There are 4 options –
- Very short Term – Less than 1 year
- Short Term – 1 to 5 years
- Medium Term – 5 to 10 years
- Long Term – More than 10 years
Collection of Best Reads
- https://unovest.co/2021/10/its-the-world-financial-planning-day/
- https://unovest.co/2018/04/frugality-financial-freedom-and-facts/
- https://unovest.co/2017/12/save-rs-1-crore-dream-startup/
- https://unovest.co/2017/09/guide-mutual-fund-portfolio-xirr/ (Great read; complete PDF)
- https://unovest.co/2018/02/balanced-funds-taxation/
- https://unovest.co/2020/06/5-financial-must-dos-for-married-couples/
- https://unovest.co/2018/09/6-investing-basics/
- https://unovest.co/2018/09/21-money-investing-rules/
- https://unovest.co/2018/05/50-decisions-financial-plan/
- https://unovest.co/2018/01/question-money-ask/
Insurance Related
https://www.valueresearchonline.com/stories/46426/simplicity-in-insurance-is-key/
Gold
- https://www.jagoinvestor.com/2019/11/sovereign-gold-bonds.html
- https://www.jagoinvestor.com/2015/11/sovereign-gold-bond-scheme.html (If it's only for wealth creation, avoid)
Investment Related
- Emergency corpus: Keep an amount equal to six months of your expenses and EMIs in a combination of liquid funds and sweep-in FDs.
- Life insurance: You need life insurance only if you have financial dependents and your assets are not sufficient to take care of them in your absence. Buy only pure term plans for life insurance.
- Health insurance: Unplanned medical expenses can derail your financial future. Maintain a sufficient health cover for all your family members.
https://www.jagoinvestor.com/2012/12/6-rules-of-great-financial-life.html
https://www.jagoinvestor.com/2017/01/buying-first-car.html
https://www.jagoinvestor.com/2013/08/real-estate-and-property-terminologies.html
https://www.jagoinvestor.com/2019/01/critical-illness-cover.html
https://www.jagoinvestor.com/2019/06/nps.html (Detailed Article on NPS)
https://www.jagoinvestor.com/2015/06/atal-pension-yojana.html (Detailed Article on APY)
https://www.valueresearchonline.com/stories/48426/the-right-time-to-start-investing/
https://www.valueresearchonline.com/stories/45887/investing-on-a-salary-of-35k/
https://www.valueresearchonline.com/stories/47020/a-plan-for-an-it-professional-with-dependents/
https://www.valueresearchonline.com/stories/48128/financial-planning-amidst-the-crash/
https://www.valueresearchonline.com/stories/46969/a-financial-plan-that-puts-idle-money-to-use/
https://www.valueresearchonline.com/stories/34288/first-things-first/
https://www.valueresearchonline.com/stories/46611/assess-your-lifestyle/
https://www.valueresearchonline.com/stories/47106/charm-of-early-retirement/
https://www.valueresearchonline.com/stories/46166/managing-children-s-education/
https://www.valueresearchonline.com/stories/46303/investing-for-your-child-s-education/
https://www.valueresearchonline.com/stories/46300/a-5-point-guide-to-buying-a-home/
https://www.valueresearchonline.com/stories/49808/where-should-i-invest-my-gratuity-money/
https://www.valueresearchonline.com/stories/49905/should-i-book-profit-and-reinvest/
NRI Investments Related
- https://www.jagoinvestor.com/2019/09/nri-investments-guide.html
- https://www.jagoinvestor.com/2020/01/dtaa.html (Double Taxation Avoidance Agreement (DTAA))
- https://www.jagoinvestor.com/2019/05/nri-mutual-funds.html
Income Tax Related
- https://www.valueresearchonline.com/stories/48404/not-much-surplus-here-s-how-you-can-still-save-tax/
- https://www.valueresearchonline.com/stories/47647/does-tax-harvesting-make-a-difference/
- https://www.valueresearchonline.com/stories/47015/why-elss-is-the-best-tax-saving-investment/
- https://www.valueresearchonline.com/stories/49631/should-one-continue-to-invest-in-sukanya-samriddhi-yojana/
- https://www.valueresearchonline.com/stories/24297/all-about-tax-deductions/
- https://www.valueresearchonline.com/stories/46823/a-quick-analysis-of-popular-80c-options/
- https://www.valueresearchonline.com/stories/46816/essential-tax-saving-guide/
- https://www.valueresearchonline.com/stories/26858/an-overview-of-6-popular-tax-saving-options/
- https://www.jagoinvestor.com/2018/12/80c.html
- https://www.jagoinvestor.com/2019/04/income-tax-section-80d.html
- https://www.jagoinvestor.com/2015/06/beyond-life-insurance.html
- https://www.jagoinvestor.com/2015/04/tax-free-income-in-india.html
Other Resources for further reading:
- https://zerodha.com/varsity/module/personalfinance/
- https://zerodha.com/varsity/
- https://fyers.in/introducing-school-of-stocks/
- https://www.skillshare.com/classes/Modern-Money-Habits-5-Steps-to-Build-the-Life-You-Want/403120065?via=search-layout-grid
Notes on Investing in Businesses
Income Statement
Balance Sheet
Cash Flows
Sales ==> Revenue ==> EBITDA ==> Net Profit ==> Margins
EBITDA = Earnings Before Interest, Taxes, Depreciation & Amortization
Raw material + labour --> Production cost
Employees + salaries --> Operational cost
Taxes
Interest
Cash Flow Statements:
Raising, borrowing and re-paying capital
Cash flow from Operations
New Commands (Just a Collection)
alter database open resetlogs upgrade;  -- RMAN: restore a backup of a lower-version database to a higher version
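A minimal sketch of where this command fits, assuming the backup pieces from the lower-version database are available to the higher-version Oracle Home (all paths and names below are illustrative, not from the referenced posts):

rman target /
RMAN> startup nomount;
RMAN> restore controlfile from '/backup/ctlfile_c-1234567890.bkp';   # illustrative backup piece
RMAN> alter database mount;
RMAN> restore database;
RMAN> recover database;   # apply the available archived logs; an UNTIL clause may be needed

-- Then, from SQL*Plus in the higher-version home:
SQL> alter database open resetlogs upgrade;
SQL> @?/rdbms/admin/catupgrd.sql   -- pre-12.2 style; from 12.2 onwards run the dbupgrade utility instead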
Reference:
https://taliphakanozturken.wordpress.com/tag/alter-database-open-resetlogs-upgrade/
https://shivanandarao-oracle.com/2015/09/16/rman-restore-backup-of-lower-version-database-to-a-higher-version/
EBS Upgrades and Platform Migration
Upgrade: Install & Upgrade All Data
Reimplement: Legacy Instance --> Data Migration --> Reimplemented Instance
Hybrid (Upgrade & Consolidate): Upgrade, Migrate Data ==> Single Global Instance
... upgrading to a newer version of your current operating system?
... migrating to a different platform with the same endian format?
... migrating to a different platform with a different endian format?

Best practices for combined upgrades
1) Consider the database and application tier migrations separately, and plan to perform the database migration first.
2) Choose the right migration process for the database, considering the target platform, database size and process complexity.
3) Migrate and upgrade your EBS application tier from 11i to R12 by laying down the new application tier on the target platform as part of the upgrade.
Database Migrations and Upgrades
A. An OS upgrade on the same platform
B. Migration to a new platform of the same endian format: Transportable Database (TDB)
C. Migration to a new platform of a different endian format:
   1. Export/Import (Data Pump), but slow for databases > 1 TB
   2. Transportable Tablespaces is better for large databases

Application Tier Migrations and Upgrades
- Migrations without upgrades
- R12 upgrade using Rapid Install
- R12 upgrade using Release Update Packs
Transportable Database is the fastest way to migrate data between two platforms of the same endian format, as the process fundamentally is one of copying database files and then using Recovery Manager (RMAN) to convert the datafiles (using the 'rman convert database' command). The EBS TDB process for migration does, however, require that the source and target databases be of the same release and patchset version.
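For reference, the RMAN conversion step of a TDB migration looks roughly like this (the database name, platform string and paths are illustrative assumptions):

RMAN> convert database
        new database 'targdb'
        transport script '/stage/transport_targdb.sql'
        to platform 'Linux x86 64-bit'
        db_file_name_convert '/u01/oradata/srcdb' '/stage/targdb';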
For larger databases (>1TB) however, the use of export/import can be an extremely slow process and alternatives should be considered.
TTS essentially is a process of moving or copying the data portion of the database,
and then using the Recovery Manager (rman) utility to convert the endian format of the data.
The use of TTS will still require export/import of certain objects in the EBS database such as metadata, system tables, etc.
Full Transportable Export/Import (FTEX) is similar to Transportable Tablespaces, but it automates a number of steps in the migration process and reduces the manual tasks required to perform the migration.
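A minimal FTEX sketch under assumed names (directory object DP_DIR; datafile path illustrative). On an 11.2.0.3+ source, VERSION=12 plus TRANSPORTABLE=ALWAYS is what makes the full export transportable:

# Source (11.2.0.3 or higher); user tablespaces must be READ ONLY during the export
expdp system full=y transportable=always version=12 directory=DP_DIR dumpfile=ftex.dmp logfile=ftex_exp.log

# Target (12.1 or higher); copy the datafiles over first, then plug in metadata + files
impdp system full=y directory=DP_DIR dumpfile=ftex.dmp transport_datafiles='/u02/oradata/TARGDB/users01.dbf' logfile=ftex_imp.log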
References
Migration
https://blogs.oracle.com/stevenchan/a-primer-on-migrating-oracle-applications-to-a-new-platform
https://blogs.oracle.com/stevenchan/best-practices-for-combining-ebs-upgrades-with-platform-migrations
Planning Your EBS Upgrade from 11i to R12 and beyond https://www.oracle.com/technetwork/apps-tech/upgrade-planning-2011-1403212.pdf
Oracle E-Business Suite Upgrades and Platform Migration (Doc ID 1377213.1) *************
maa-ebs-exadata-xtts-321616.pdf
https://www.cisco.com/c/en/us/solutions/collateral/servers-unified-computing/ucs-5100-series-blade-server-chassis/Whitepaper_c11-707249.html
https://hansrajsao.wordpress.com/2015/03/06/migrating-oracle-database-to-exadata/
Cross Platform Transportable Tablespaces on 11i with 10gR2 (Doc ID 454574.1)
Upgrade
Oracle E-Business Suite Upgrade Guide, Release 11i to 12.1.3 https://docs.oracle.com/cd/B53825_08/current/acrobat/121upgrade.pdf
Oracle E-Business Suite Upgrade Guide, Release 12.0/1 to 12.2 https://docs.oracle.com/cd/E26401_01/doc.122/e48839.pdf
Oracle E-Business Suite Upgrade Guide, Release 11i to 12.2 https://docs.oracle.com/cd/E51111_01/current/acrobat/122upg11i.pdf
Oracle E-Business Suite Release 12.2 Information Center – Upgrade (Doc ID 1583158.1) *****
Best Practices for Minimizing Oracle E-Business Suite Release 12.2.n Upgrade Downtime (Doc ID 1581549.1)
Oracle E-Business Suite Release 12.2 Information Center (Doc ID 1581299.1)
https://blogs.oracle.com/stevenchan/getting-started-with-the-release-12-technology-stack
Oracle E-Business Suite Release 12.2 Technology Stack Documentation Roadmap (Doc ID 1934915.1)
SCHEMA Refresh
SCHEMA Refresh Steps with Datapump
Script to take an EXPDP backup of the XX_DUMMY schema via Cron
### Script to take an EXPDP backup of the XX_DUMMY schema
. $HOME/EBSDEV_dev-01.env
pswd=`cat $HOME/scripts/db_mon/.watchword`; export pswd
export SCHEMA=XX_DUMMY
DUMP_LOG="$SCHEMA"_EXPDP_`date +"%d_%b_%y"`.log; export DUMP_LOG
DBA_MAIL=ebsdba@srinalla.com

echo "$SCHEMA Schema Backup started in $ORACLE_SID" | mailx -s "$SCHEMA Schema Backup started in $ORACLE_SID" $DBA_MAIL

expdp system/"$pswd" directory=WEEKLY_BACKUP dumpfile="$SCHEMA"_"$ORACLE_SID"_`date +"%d_%b_%y"`_EXP%U.dmp logfile="$DUMP_LOG" schemas=$SCHEMA filesize=3G cluster=n parallel=16

# Check the export log for ORA- errors and mail the result accordingly
bkp_err=`cat /u03/EBSDEV/DMP_BKPS/$DUMP_LOG | grep -i "ora-" | wc -l`; export bkp_err
if [ "$bkp_err" -gt "0" ]
then
  cat /u03/EBSDEV/DMP_BKPS/WEEKLY_BKPS/$DUMP_LOG | mailx -s "$SCHEMA Schema Backup completed with errors in $ORACLE_SID, Pls chk!!!" $DBA_MAIL
else
  echo "$SCHEMA Schema Backup completed in $ORACLE_SID" | mailx -s "$SCHEMA Schema Backup completed in $ORACLE_SID" $DBA_MAIL
fi

# Remove dump files older than 10 days
find /u03/EBSDEV/DMP_BKPS/WEEKLY_BKPS/* -mtime +10 -exec rm {} \;
Working with Data Pump?
Let’s look at below in detail
- Data Pump best practices
- Don't invoke expdp using SYS
- Purge the recycle bin before export (user/table/DB level)
- ** PARALLELISM doesn't work with LOB columns
- How to use/estimate the PARALLEL parameter in Data Pump?
- How to check/monitor Data Pump jobs?
- ** Data Pump will use two different load methods during import (impdp)
Data Pump Best Practices
pga_aggregate_target --> set this high; it will improve Data Pump performance

For export consistency use:
FLASHBACK_TIME=SYSTIMESTAMP (this will increase UNDO requirements for the duration of the export)
compression_algorithm=medium (12c recommended option; similar characteristics to BASIC, but uses a different algorithm)

Always set these parameters:
METRICS=YES
EXCLUDE=STATISTICS
LOGTIME=ALL --> timestamps (from 12c)

Speed up Data Pump:
PARALLEL=n
EXCLUDE=STATISTICS on export
EXCLUDE=INDEXES on import
1. Initial impdp with EXCLUDE=INDEXES
2. Second impdp with INCLUDE=INDEXES SQLFILE=indexes.sql
3. Split indexes.sql into multiple SQL files and run them in multiple sessions
Set COMMIT_WAIT=NOWAIT and COMMIT_LOGGING=BATCH during full imports

Direct import via database link (network-bandwidth and CPU bound):
- Parameter: NETWORK_LINK
- Run only impdp on the target system; no expdp necessary
- No dump file written, no disk I/O, no file transfer needed
- Restrictions of database links apply: does not work with LONG/LONG RAW and certain object types
- Performance depends on network bandwidth and the target's CPUs
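Pulling those flags together, a sketch of an export parameter file (the directory object DP_DIR and the file names are assumptions, not from any MOS note):

# exp_best_practice.par -- run as: expdp system parfile=exp_best_practice.par
directory=DP_DIR
dumpfile=full_%U.dmp
logfile=full_exp.log
full=y
parallel=8
filesize=10G
metrics=yes
logtime=all
exclude=statistics
flashback_time=systimestamp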
Some Commands / Use Cases

remap_tablespace=OLD_TBS:NEW_TBS ==> move all objects from one tablespace to another
remap_schema=old_schema:new_schema ==> move objects to a different schema
expdp with content=metadata_only & impdp with remap_schema=A:Z ==> clone a user
remap_datafile='/u01/app/oracle/oradata/datafile_01.dbf':'/u01/datafile_01.dbf' ==> create your database in a different file structure
transform=pctspace:70, sample=70 --> tells Data Pump to reduce the size of extents to 70% in impdp
transform=disable_archive_logging:Y --> note the database parameter FORCE LOGGING overrides this feature
sqlfile=x_24112010.sql

EXPDP FILESIZE: split/slice the dump file across multiple directories:
expdp srinalla/srinalla job_name=exp_job_multiple_dir schemas=STHOMAS filesize=3G dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp logfile=dump.log compression=all parallel=10
While importing, mention it the same way: dumpfile=datapump:expdp_datapump_%U.dmp,TESTING:expdp_testing_%U.dmp

Statistics are imported by default.
compression / parallel / cluster (cluster: default=Y, from 11gR2; parallelization in RAC, can be on all nodes or only a few nodes based on service_name=EBS_DP_12)
Status check: select inst_id, session_type from dba_datapump_sessions;

Commit the import on every row with COMMIT=Y. If COMMIT=Y, import commits tables containing LONG, LOB, BFILE, ROWID, UROWID, DATE or type columns after each row.

Restart a job with a different degree of parallelism, say 4 (earlier it was 6):
Export> parallel=4
Export> START_JOB
Export> continue_client  -- show progress

Import using table_exists_action=replace and TABLES=(list of skipped tables)
nohup impdp system/secret NETWORK_LINK=olddb FULL=y PARALLEL=25 &
impdp system attach
Import> status
Import> parallel=30  << this will increase the parallel processes if you want
Do not invoke expdp using ‘/ as sysdba’
Also, do not invoke expdp using ‘/ as sysdba’ – use the SYSTEM account – see the first Note section here
http://docs.oracle.com/cd/E11882_01/server.112/e22490/dp_export.htm#i1012781
Purge the recycle bin before export (user/table/DB level)
select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime
from dba_recyclebin
where owner = 'XX_DUMMY';

purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";
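For completeness, the three purge levels mentioned in the heading:

purge table "BIN$HGnc55/7rRPgQPeM/qQoRw==$0";  -- table level (use the name returned by the query above)
purge recyclebin;                              -- current user's recycle bin
purge dba_recyclebin;                          -- entire database; requires SYSDBA or the PURGE DBA_RECYCLEBIN privilege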
** PARALLELISM doesn't work with LOB columns
Parallelism doesn't work because Data Pump serializes the dump when it comes to a LOB table.
The approach should be to split the export in two (see the sketch after this list):
1) the whole database/schema minus LOB table and
2) the LOB table.
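A sketch of that two-pass export, assuming the big LOB table is SCOTT.BIG_LOB_TAB (both names are made up for illustration; shell quoting of EXCLUDE may need escaping):

# Pass 1: everything in the schema except the LOB table (PARALLEL helps here)
expdp system schemas=SCOTT directory=DP_DIR parallel=8 exclude=TABLE:"IN ('BIG_LOB_TAB')" dumpfile=scott_nolob_%U.dmp logfile=scott_nolob.log

# Pass 2: only the LOB table (effectively serial), run concurrently with pass 1
expdp system tables=SCOTT.BIG_LOB_TAB directory=DP_DIR dumpfile=scott_lob.dmp logfile=scott_lob.log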
** pga_aggregate_target proved to be the most important change in the overall scheme of things
because indexes were built towards the end of the job and took 3 times longer
than actually creating the tables and importing the data in this test.
Check LOB columns with the below query:

SELECT s.tablespace_name, l.owner, l.table_name, l.column_name, l.segment_name, s.segment_type,
       round(s.bytes/1024/1024/1024,2) "Size(GB)"
FROM   dba_segments s, dba_lobs l
WHERE  l.owner = s.owner
AND    l.segment_name = s.segment_name
AND    l.owner NOT IN ('SYS','SYSTEM','APPS','APPLSYS')
--AND  round(s.bytes/1024/1024/1024,2) > 1
ORDER BY s.bytes DESC;
Check the links below on how to fix the issue:
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)
How to use/estimate PARALLEL parameter in Datapump?
Before starting any export/import, it is better to use the ESTIMATE_ONLY parameter. Divide the output by 250 MB and, based on the result, decide on the PARALLEL value.
Finally, when using the PARALLEL option, keep the below points in mind:
a. Set the degree of parallelism to 2x the number of CPUs, then tune from there.
b. For Data Pump Export, the PARALLEL parameter value < number of dump files.
c. For Data Pump Import, the PARALLEL parameter value < number of dump files.
For more details, you can refer to MOS doc 365459.1.
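For example (schema and directory object assumed):

# Estimate only; no dump file is written
expdp system schemas=SCOTT directory=DP_DIR estimate_only=y estimate=blocks logfile=scott_estimate.log

# If the log reports ~4 GB: 4096 MB / 250 MB is roughly 16, so start around PARALLEL=16,
# then cap it at 2x the CPU count and at the number of dump files, per the points above.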
How to Check/Monitor DATAPUMP JOBS?
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
V$SESSION_LONGOPS
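Typical queries against those views (the LIKE patterns cover both export and import jobs):

-- What Data Pump jobs exist and their state
SELECT owner_name, job_name, operation, job_mode, state, degree, attached_sessions
FROM   dba_datapump_jobs;

-- Sessions attached to Data Pump jobs
SELECT owner_name, job_name, session_type
FROM   dba_datapump_sessions;

-- Progress of the long-running pieces
SELECT sid, serial#, opname, sofar, totalwork, ROUND(sofar/totalwork*100, 2) pct_done
FROM   v$session_longops
WHERE  (opname LIKE 'SYS_EXPORT%' OR opname LIKE 'SYS_IMPORT%')
AND    sofar <> totalwork;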
Monitoring Data Pump http://www.dbaref.com/home/oracle-11g-new-features/monitoringdatapump
Queries to Monitor Datapump Jobs https://databaseinternalmechanism.com/2016/09/13/how-to-monitor-datapump-jobs/
How to delete/remove non executing datapump jobs? https://pavandba.com/2011/07/12/how-to-deleteremove-non-executing-datapump-jobs/
Data Pump will use two different load methods during import (impdp):
- Direct path load: this is the main reason why Data Pump import (impdp) is faster than traditional import (imp)
- External table path
But Data Pump cannot always use the direct path due to some restrictions, and because of this you may sometimes observe impdp running slower than expected.
Now, what are those situations when Data Pump will not use the direct path? If a table exists with:
1. A global index on multipartition tables during a single-partition load. This includes object tables that are partitioned.
2. A domain index for a LOB column.
3. The table is in a cluster.
4. There is an active trigger on a pre-existing table.
5. Fine-grained access control is enabled in insert mode on a pre-existing table.
6. The table contains BFILE columns or columns of opaque types.
7. A referential integrity constraint is present on a pre-existing table.
8. The table contains VARRAY columns with an embedded opaque type.
9. The table has encrypted columns.
10. The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
    - There is an active trigger
    - The table is partitioned
    - A referential integrity constraint exists
    - A unique index exists
11. Supplemental logging is enabled and the table has at least one LOB column.
    Note: Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or re-enabled.
12. Using TABLE_EXISTS_ACTION=TRUNCATE on an IOT.
References
http://www.orafaq.com/wiki/Datapump
Master Note for Data Pump:MOS Note:1264715.1
For compatibility and version changes: MOS Note 553337.1
Using Oracle’s recycle bin http://www.orafaq.com/node/968
Master Note: Overview of Oracle Large Objects (BasicFiles LOBs and SecureFiles LOBs) (Doc ID 1490228.1)
Checklist For Slow Performance Of DataPump Export (expdp) And Import (impdp) (Doc ID 453895.1)
https://mayankoracledba.wordpress.com/2018/01/15/oracle-12c-data-pump-best-practices/
http://jeyaseelan-m.blogspot.com/2016/05/speed-up-expdpimpdp.html
http://its-all-about-oracle.blogspot.com/2013/06/datapump-expdpimpdp-utility.html
https://rajat1205sharma.wordpress.com/2015/07/03/data-pump-export-import-performance-tips/
http://jensenmo.blogspot.com/2012/10/optimising-data-pump-export-and-import.html ***************
How to use PARALLEL parameter in Datapump? https://pavandba.com/2011/07/15/how-to-use-parallel-parameter-in-datapump/
impdp slow with TABLE_EXISTS_ACTION=TRUNCATE https://pavandba.com/2013/03/22/impdp-slow-with-table_exists_actiontruncate/
Transportable Tablespaces(TTS) for Upgrade/Migration
Let’s look at below in detail
- Why TTS?
- Transportable Tablespaces (TTS)
- FTEX (TTS + Data Pump)
- FTEX using RMAN backups (less downtime)
- High-level steps for migration of data using TTS
- Time taken: Cross-Platform Transportable Tablespace (XTTS) transport vs. the export/import method
- Normal TTS approach
- RMAN TTS approach
- 12c TTS enhancement using RMAN backup sets
Why TTS ?
Moving data using transportable tablespaces is much faster than performing either an export/import or unload/load of the same data. This is because the datafiles containing all of the actual data are just copied to the destination location, and you use an export/import utility to transfer only the metadata of the tablespace objects to the new database.
Transportable Tablespaces (TTS)
The Transportable Tablespaces feature has existed since Oracle 8i
- Can be used cross-version: the version transported to must always be equal or higher
- Cross-platform Transportable Tablespaces was introduced in Oracle Database 10g
  ==> Can be used cross-version and cross-platform
  ==> Requires tablespaces to be in read-only mode
  ==> Extra work necessary for everything in SYSTEM/SYSAUX
Full Transportable Export/Import FTEX (TTS + Data Pump)
Transport an entire database in a single operation
- Cross-version and cross-platform
- Can include the database upgrade
- Combination of TTS for the data tablespaces and Data Pump for the administrative tablespaces (SYSTEM, SYSAUX, ...)
- Supports information from database components such as Spatial, Text, Multimedia, OLAP, etc.
Full transportable export is supported since Oracle 11.2.0.3; full transportable import since Oracle 12.1.0.1.
Relationship of XTTS to Data Pump and Recovery Manager
XTTS works within the framework of Data Pump and Recovery Manager (RMAN). Use Data Pump to move the metadata of the objects in the tablespaces being transported to the target database; RMAN converts the datafiles being transported to the endian format of the target platform.
You can use transportable tablespaces to perform tablespace point-in-time recovery (TSPITR). RMAN uses the transportable tablespaces functionality to perform TSPITR; therefore, any limitations on transportable tablespaces are also applicable to TSPITR.
Learn more at ………
OLL Video : Oracle Database 12c: Transporting Data
FTEX using RMAN Backups (Less Down Time)
Read more at ………
11G – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 1389592.1)
12C – Reduce Transportable Tablespace Downtime using Cross Platform Incremental Backup (Doc ID 2005729.1)
High Level Steps for Migration of Data Using TTS
Step 1: Check platform support and file conversion requirement
Step 2: Identify tablespaces to be transported and verify self-containment
Step 3: Check for problematic data types
Step 4: Check for missing schemas and duplicate tablespace and object names
Step 5: Make tablespaces read-only in the source database
Step 6: Extract metadata from the source database
Step 7: Copy files to the target server and convert if necessary
Step 8: Import metadata into the target database
Step 9: Copy additional objects to the target database as desired
Read more at ………
Moving Oracle Databases Across Platforms without Export/Import
Time Taken: Cross-Platform Transportable Tablespace (XTTS) Transport vs. the Export/Import Method
Task | Export/Import | Tablespace Transport |
Export time | 37 min | 2 min |
File transfer time | 8 min | 13 min |
File conversion time | n/a | 14 min |
Import time | 42 min | 2 min |
Approximate total time | 87 min | 31 min |
Export file size | 4100 MB | 640 KB |
Target database extra TEMP tablespace requirement | 1200 MB | n/a |
Normal TTS Approach
1) On the source database
Validate the self-contained property:
exec DBMS_TTS.TRANSPORT_SET_CHECK('tbs', TRUE);
a) Put the tablespace(s) in READ ONLY
b) Export the metadata:
exp FILE=/path/dump_file.dmp LOG=/path/tts_exp.log TABLESPACES=tbs-names TRANSPORT_TABLESPACE=y STATISTICS=none

********** Transfer the datafiles and export file to the TARGET **********

2) On the destination database
a) Import the export file:
impdp DUMPFILE=tts.dmp LOGFILE=tts_imp.log DIRECTORY=exp_dir REMAP_SCHEMA=master:scott TRANSPORT_DATAFILES='/path/tts.dbf'
b) Put the tablespace(s) back in READ WRITE
RMAN TTS Approach
1) RMAN transport tablespace on the source database

********** Transfer the datafiles and export file to the TARGET **********

2) Run the import script created by RMAN on the destination database:
impscrpt.sql (or the impdp command from the file)
Read more at ………
RMAN TRANSPORT TABLESPACE By Franck Pachot
RMAN Transportable Tablespace
RMAN TRANSPORT TABLESPACE – Oracle Doc
Why Use RMAN TTS?

--> RMAN checks that the tablespaces are self-contained
--> Faster
--> No need to put the tablespaces in read-only mode

Using the RMAN TTS feature, the datafiles which contain the actual data can be copied, making the migration faster. RMAN creates transportable tablespace sets from backups, which eliminates the need to put tablespaces in read-only mode. The RMAN process for creating transportable tablespaces from backup uses the Data Pump Export and Import utilities. Because RMAN creates the automatic auxiliary instance used for restore and recovery on the same node as the source instance, there is some performance overhead during the operation of the TRANSPORT TABLESPACE command.

(The set of datafiles is created in the tablespace destination with their original names; the export log, export dump file and impscrpt.sql are also created there.)

RMAN> transport tablespace tbs_2, tbs_3
        tablespace destination '/disk1/transportdest'
        auxiliary destination '/disk1/auxdest'
        datapump directory mypumpdir
        dump file 'mydumpfile.dmp'
        import script 'myimportscript.sql'
        export log 'myexportlog.log';
12C TTS Enhancement using RMAN backup sets
1) RMAN TTS on the source database
a) Put the tablespace(s) in READ ONLY
b) RMAN --> BACKUP FOR TRANSPORT (metadata by Data Pump, backup set by RMAN) --> converts the platform and the endian format if required
c) Put the tablespace(s) back to READ WRITE
****** Transfer the backup set & dump files from the source to the target server
2) RMAN TTS on the destination database
a) Restore foreign tablespace ( Restore by RMAN, Import Metadata by Datapump) b) Make TBS in READ-WRITE
RMAN> backup for transport format '/tmp/stage/tbs1.bkset'
        datapump format '/tmp/stage/tbs1.dmp'
        tablespace tbs1;

RMAN> restore foreign tablespace tbs1
        format '/u01/app/oracle/oradata/sekunda/tbs1.dbf'
        from backupset '/tmp/stage/tbs1.bkset'
        dump file from backupset '/tmp/stage/tbs1.dmp';
Read more at ………
Transport Tablespace using RMAN Backupsets in #Oracle 12c
12c How Perform Cross-Platform Database Transport to different Endian Platform with RMAN Backup Sets (Doc ID 2013271.1)
References:
How to Move a Database Using Transportable Tablespaces (Doc ID 1493809.1)
How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN (Doc ID 371556.1)
Master Note for Transportable Tablespaces (TTS) — Common Questions and Issues (Doc ID 1166564.1)
Transportable Tablespaces
Transportable Tablespaces Tips
Using Transportable Tablespaces for Oracle Upgrades
Transparent Data Encryption (TDE): Overview
Let’s look at below in detail
- What is Oracle Transparent Data Encryption (TDE)?
- How do I migrate existing clear data to TDE-encrypted data?
- How to find details for encryption/encrypted objects?
- How to enable TDE in 12c?
- Best practices for TDE
What is Oracle Transparent Data Encryption (TDE)?
Oracle Advanced Security's TDE (Transparent Data Encryption), available from 10gR2, allows administrators to encrypt sensitive data (e.g. Personally Identifiable Information, or PII), protecting it from unauthorized access if storage media, backups, or datafiles are stolen.
TDE supports two levels of encryption
1)TDE column encryption ( From 10gR2)
2)TDE tablespace encryption ( From 11gR1)
TDE uses a two tier encryption key architecture, consisting of a master key and one or more table and/or tablespace keys. The table and tablespace keys are encrypted using the master key. The master key is stored in the Oracle Wallet. When TDE column encryption is applied to an existing application table column, a new table key is created and stored in the Oracle data dictionary. When TDE tablespace encryption is used, the individual tablespace keys are stored in the header of the underlying OS file(s).
How do I migrate existing clear data to TDE encrypted data?
You can migrate existing clear data to encrypted tablespaces or columns either online or offline. Existing tablespaces can be encrypted online with zero downtime on production systems, or encrypted offline with no storage overhead during a maintenance window. Online conversion is available on Oracle Database 12.2.0.1 and above; offline conversion on Oracle Database 11.2.0.4 and 12.1.0.2. In 11g/12c you can also use DBMS_REDEFINITION (online table redefinition) to copy existing clear data into a new encrypted tablespace in the background with no downtime.
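For illustration, a 12.2+ online conversion of an existing tablespace looks like this (tablespace and file names are assumptions; FILE_NAME_CONVERT names the re-encrypted copies of the datafiles):

ALTER TABLESPACE users ENCRYPTION ONLINE USING 'AES256'
  ENCRYPT FILE_NAME_CONVERT = ('users01.dbf', 'users01_enc.dbf');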
How to find details for encryption/encrypted objects?
gv$encryption_wallet
V$ENCRYPTED_TABLESPACES
DBA_ENCRYPTED_COLUMNS
select * from gv$encryption_wallet;  -- (gv$wallet)
select * from dba_encrypted_columns;
select table_name from dba_tables where tablespace_name in (select tablespace_name from dba_tablespaces where encrypted='YES');
select tablespace_name, encrypted from dba_tablespaces where encrypted='YES';
select et.inst_id, ts.name, et.encryptionalg, et.encryptedts from v$encrypted_tablespaces et, ts$ ts where et.ts# = ts.ts#;
How to Enable TDE in 12C?
Step 1: Set the KEYSTORE location in $TNS_ADMIN/sqlnet.ora
Step 2: Create a password-based keystore
Step 3: Open the keystore
Step 4: Set the master encryption key
Step 5: Encrypt your data --> make sure the COMPATIBLE parameter value is 11.2.0 or higher
  5.1 Encrypt columns in a table
  5.2 Encrypt a tablespace --> Note: encrypting an existing tablespace is not supported this way (do an online/offline conversion)
Step 1: Set KEYSTORE location in $TNS_ADMIN/sqlnet.ora
==> Create the directory:
mkdir -p /u01/app/oracle/admin/SLOB/tde_wallet
==> Declare it in sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/u01/app/oracle/admin/SLOB/tde_wallet)))
Step 2: Create a Password-based KeyStore
==> sqlplus sys/password as sysdba
administer key management create keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;
Step 3: Open the KEYSTORE
administer key management set keystore open identified by oracle;
Step 4: Set Master Encryption Key
administer key management set key identified by oracle with backup;
==> Optionally create an auto-login wallet:
administer key management create auto_login keystore from keystore '/u01/app/oracle/admin/SLOB/tde_wallet' identified by oracle;
Step 5: Encrypt your Data
--> Make sure the COMPATIBLE parameter value is 11.2.0 or higher.
===> 5.1 Encrypt a Columns in Table
create table job (title varchar2(128) encrypt);  -- create a table with an encrypted column; by default it uses AES192
create table emp (name varchar2(128) encrypt using '3DES168', age number encrypt no salt);  -- create columns with a chosen encryption algorithm
alter table employees add (new_name varchar2(40) encrypt);  -- add a new encrypted column to an existing table
alter table employees rekey using '3DES168';  -- change the encryption key/algorithm of existing encrypted columns
===> 5.2 Encrypt Tablespace
Note: Encrypting existing tablespace is not supported.( Do online/Offline Conversion)
CREATE TABLESPACE d_crypt DATAFILE '+DATA' SIZE 10G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);
ALTER TABLE table_with_pan MOVE ONLINE TABLESPACE d_crypt;
For 11g, the orapki wallet & encryption key commands are used as below:

orapki wallet create -wallet <wallet location> -pwd "<wallet password>" -auto_login
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
ALTER SYSTEM SET WALLET OPEN IDENTIFIED BY "welcome1";
Best Practices for TDE
1)Avoiding accidentally deleting the TDE Wallet files on Linux
Go to the wallet directory and set the 'immutable' bit:

chattr +i ewallet.p12   # encrypted wallet
chattr +i cwallet.sso   # auto-open wallet
Using Data Pump with TDE
By default, data in the resulting dump file will be in clear text, even the encrypted column data.
To protect your encrypted column data in the Data Pump dump files, you can password-protect the dump file when exporting the table.
expdp arup/arup ENCRYPTION_PASSWORD=pooh tables=accounts
Importing a password-protected dump file
impdp arup/arup ENCRYPTION_PASSWORD=pooh tables=accounts table_exists_action=replace
References
http://www.catgovind.com/oracle/how-to-enable-transparent-data-encryption-tde-in-oracle-database/
TDE best practices in EBS https://www.oracle.com/technetwork/database/security/twp-transparent-data-encryption-bes-130696.pdf
http://psoug.org/reference/tde.html
https://www.oracle.com/database/technologies/security/advanced-security.html
https://www.oracle.com/technetwork/database/security/tde-faq-093689.html
Encrypting Tablespaces By Arup Nanda https://www.oracle.com/technetwork/articles/oem/o19tte-086996.html
Encrypt sensitive data transparently without writing a single line of code https://blogs.oracle.com/oraclemagazine/transparent-data-encryption-v2
https://blogs.oracle.com/stevenchan/using-fast-offline-conversion-to-enable-transparent-data-encryption-in-ebs
https://juliandontcheff.wordpress.com/2017/06/02/twelve-new-features-for-cyber-security-dbas/
Transparent Data Encryption (TDE) (Doc ID 317311.1)
Master Note For Transparent Data Encryption ( TDE ) (Doc ID 1228046.1)
Quick TDE Setup and FAQ (Doc ID 1251597.1)
Managing TDE Wallets in a RAC Environment (Doc ID 567287.1)
How to Export/Import with Data Encrypted with Transparent Data Encryption (TDE) (Doc ID 317317.1)
Using Transportable Tablespaces to Migrate Oracle EBS Release 12.0 or 12.1 Using Oracle Database 12.1.0 (Doc ID 1945814.1)
Using Transparent Data Encryption (TDE) Column Encryption with Oracle E-Business Suite Release 12 (Doc ID 732764.1)
What is Table Fragmentation in Oracle? (Finding & Removing)
Fragmentation/ Reclaiming Wasted Space by Table Shrinking
select owner, table_name,
       round(((blocks*8)/1024/1024),2) "Size(GB)",
       round(((num_rows*avg_row_len/1024))/1024/1024,2) "Actual_Data(GB)",
       round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) "Wasted_Space(GB)",
       round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) "Reclaimable Space(%)",
       partitioned
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
  and round(((((blocks*8)-(num_rows*avg_row_len/1024))/(blocks*8))*100 -10),2) > 20  -- more than 20% fragmentation
  and round((((blocks*8)) - ((num_rows*avg_row_len/1024)))/1024/1024,2) > 5          -- wasted space is more than 5 GB
order by 5 desc;
OWNER    TABLE_NAME                      Size(GB)  Actual_Data(GB)  Wasted_Space(GB)  Reclaimable Space(%)  PARTITIONED
XLA      XLA_DISTRIBUTION_LINKS          375.6     246.22           129.37            24.44                 YES
ZX       ZX_LINES_DET_FACTORS            162.61    108.24           54.36             23.43                 NO
AR       RA_CUST_TRX_LINE_GL_DIST_ALL    122.6     74.72            47.88             29.05                 NO
ASO      ASO_ORDER_FEEDBACK_T            54.91     8.16             46.75             75.14                 NO
XLA      XLA_AE_LINES                    82.74     55.29            27.45             23.17                 YES
APPLSYS  WF_ITEM_ACTIVITY_STATUSES_H     27        1.92             25.07             82.87                 NO
AR       RA_CUSTOMER_TRX_LINES_ALL       57.44     36.49            20.96             26.48                 NO
ZX       ZX_LINES                        34.06     22.78            11.28             23.12                 NO
AR       AR_RECEIVABLE_APPLICATIONS_ALL  21.81     12.97            8.84              30.54                 NO
XXCUST   XXCUST_INTERFACE_LINES_ALL      15.39     8.9              6.49              32.16                 NO
APPLSYS  WF_JAVA_DEFERRED                5.08      0                5.08              90                    NO
select owner, segment_name table_name, segment_type, round(bytes/1024/1024/1024,2) "Size(GB)"
from dba_segments
where segment_type = 'TABLE'
  and owner = 'APPLSYS'
  and round(bytes/1024/1024/1024,2) > 1
order by 4 desc;
OWNER    TABLE_NAME                   SEGMENT_TYPE  Size(GB)
APPLSYS  FND_LOG_MESSAGES             TABLE         86.08
APPLSYS  WF_ITEM_ACTIVITY_STATUSES_H  TABLE         27.11
APPLSYS  WF_ITEM_ATTRIBUTE_VALUES     TABLE         20.38
APPLSYS  FND_FLEX_VALUE_HIERARCHI_A   TABLE         10
APPLSYS  WF_ITEM_ACTIVITY_STATUSES    TABLE         6.95
APPLSYS  WF_NOTIFICATION_ATTRIBUTES   TABLE         5.35
APPLSYS  WF_JAVA_DEFERRED             TABLE         5.1
APPLSYS  WF_DEFERRED                  TABLE         4.8
APPLSYS  FND_CONCURRENT_REQUESTS      TABLE         3.51
APPLSYS  FND_DOCUMENTS_TL             TABLE         2.95
APPLSYS  FND_LOGINS                   TABLE         2.16
APPLSYS  FND_SEARCHABLE_CHANGE_LOG    TABLE         1.97
APPLSYS  WF_NOTIFICATIONS             TABLE         1.79
APPLSYS  DR$FND_LOBS_CTX$I            TABLE         1.48
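To actually reclaim the space flagged above, the usual online option is a segment shrink; a sketch against one of the tables listed (requires ASSM and row movement; test carefully on EBS tables before doing this in production):

ALTER TABLE applsys.fnd_log_messages ENABLE ROW MOVEMENT;
ALTER TABLE applsys.fnd_log_messages SHRINK SPACE CASCADE;  -- compacts rows and lowers the HWM; CASCADE also shrinks dependent indexes
ALTER TABLE applsys.fnd_log_messages DISABLE ROW MOVEMENT;
-- Alternative: ALTER TABLE ... MOVE plus index rebuilds, then re-gather statistics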
References
http://expertoracle.com/2017/05/07/reorg-tables-and-indexes-in-oracle-ebs-applications-best-practices/
http://select-star-from.blogspot.com/2013/09/how-to-check-table-fragmentation-in.html
How to Find and Remove Table Fragmentation in Oracle Database
https://www.oracle.com/technetwork/articles/database/fragmentation-table-shrink-tips-2330997.html
#tuning
What is Oracle Partitioning? (Why, How & Benefits)
Partitioning ==>Increased performance and Ease of data management
Partitioning allows a database object (table, view, or IOT index-organized table) to be subdivided into smaller pieces, called partitions.
Partition
Each partition has its own name, may optionally have its own storage characteristics, and can be managed either collectively or individually. From the application's point of view, no modifications are required.
Moreover, partitioning can greatly reduce the total cost of data ownership by using a "tiered archiving" approach of keeping older information online on low-cost storage devices. IT administrators can implement Information Lifecycle Management (ILM) protocols by partitioning data and moving historical data to low-cost storage. Partitioning can also be used to obtain better concurrency, as well as to decrease the number of rows to be processed through partition pruning and partition-wise joins.
Partitioning for Performance
Partition pruning ==> leveraging the partitioning metadata to touch only the data relevant to a SQL operation
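A small sketch of pruning in action with a hypothetical range-partitioned SALES table (all names are illustrative):

CREATE TABLE sales (
  sale_id   NUMBER,
  sale_date DATE
)
PARTITION BY RANGE (sale_date) (
  PARTITION p_2021 VALUES LESS THAN (TO_DATE('01-JAN-2022','DD-MON-YYYY')),
  PARTITION p_2022 VALUES LESS THAN (TO_DATE('01-JAN-2023','DD-MON-YYYY'))
);

EXPLAIN PLAN FOR
SELECT COUNT(*)
FROM   sales
WHERE  sale_date >= TO_DATE('01-MAR-2022','DD-MON-YYYY')
AND    sale_date <  TO_DATE('01-APR-2022','DD-MON-YYYY');

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Expect PARTITION RANGE SINGLE with Pstart = Pstop = 2: only P_2022 is touched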
Partitioning for Manageability
Partitioning for Availability
Information Lifecycle Management (ILM) with Partitioning
Which Partitioning Method Should Be Used?
Step 1 – Is Partitioning Necessary?
Step 2 – Should This Object Be Partitioned?
Step 3 – Which Partitioning Method Should Be Used?
Step 4 – Identify the Partition Key
Step 5 – Performance Check & Access Path Analysis
Step 6 – Partitioned Table Creation and Data Migration
  Method 1 – Straight Insert (Oracle Data Pump / Data Pump access methods)
  Method 2 – Import/Export using Data Pump
Step 7 – Maintenance Step: PARTITION MAINTENANCE OPERATIONS (see the DDL sketch after this list)
  - Adding partitions or subpartitions
  - Dropping a partition
  - Moving a partition
  - Splitting and merging partitions
  - Exchanging a partition with a table
  - Renaming a partition
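Using the same hypothetical SALES table from the pruning sketch above, the Step 7 operations look roughly like this (the HIST_DATA tablespace and the stage table are assumptions; global indexes may need UPDATE INDEXES or a rebuild afterwards):

ALTER TABLE sales ADD PARTITION p_2023 VALUES LESS THAN (TO_DATE('01-JAN-2024','DD-MON-YYYY'));
ALTER TABLE sales DROP PARTITION p_2021;
ALTER TABLE sales MOVE PARTITION p_2022 TABLESPACE hist_data;
ALTER TABLE sales SPLIT PARTITION p_2023 AT (TO_DATE('01-JUL-2023','DD-MON-YYYY'))
  INTO (PARTITION p_2023_h1, PARTITION p_2023_h2);
ALTER TABLE sales MERGE PARTITIONS p_2023_h1, p_2023_h2 INTO PARTITION p_2023;
ALTER TABLE sales EXCHANGE PARTITION p_2022 WITH TABLE sales_2022_stage;
ALTER TABLE sales RENAME PARTITION p_2022 TO p_2022_archive;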
Practical Partitioning Case Study
Oracle General Ledger --> 340 GB in size, with General Ledger representing 90% of this data.
Background – current table volumes: (35% of database), (6% of database), (5% of database)
Strategy ?
Partition Maintenance ?
The Benefits of This Partitioning Strategy
- Maintenance: purging historic data
- Performance: SQL performance improvements were achieved through partition pruning
- Read-only partitions: placed in read-only tablespaces, as this reduced the cost of storage
DBA_PART_TABLES          | DBA_PART_INDEXES
DBA_TAB_PARTITIONS       | DBA_TAB_SUBPARTITIONS
DBA_PART_KEY_COLUMNS     | DBA_SUBPART_KEY_COLUMNS
DBA_PART_COL_STATISTICS  | DBA_SUBPART_COL_STATISTICS
DBA_PART_HISTOGRAMS      | DBA_SUBPART_HISTOGRAMS
DBA_IND_PARTITIONS       | DBA_IND_SUBPARTITIONS
DBA_SUBPARTITION_TEMPLATES
References
Using_Database_Partitioning_with_Oracle_E-Business_Suite.pdf
White-paper:Optimizing Storage for Oracle E-Business Suite Applications
Meta link:Using Database Partitioning with Oracle E-Business Suite
#tuning
11g Partitioning Features
Partitioning in Oracle
sqlrpt.sql
sqlrpt.sql (Enhanced for Output Display & Skip Top Session)
SET NUMWIDTH 10
SET TAB OFF
set long 1000000;
set lines 300 pages 300
set longchunksize 1000;
set feedback off;
set veri off;
prompt Specify the Sql id
prompt ~~~~~~~~~~~~~~~~~~
column sqlid new_value sqlid;
set heading off;
select 'Sql Id specified: &&sqlid' from dual;
set heading on;
prompt
prompt Tune the sql
prompt ~~~~~~~~~~~~
variable task_name varchar2(64);
variable err number;
-- By default, no error
execute :err := 0;
set serveroutput on;
DECLARE
  cnt NUMBER;
  bid NUMBER;
  eid NUMBER;
BEGIN
  -- If it's not in V$SQL we will have to query the workload repository
  select count(*) into cnt from V$SQLSTATS where sql_id = '&&sqlid';
  IF (cnt > 0) THEN
    :task_name := dbms_sqltune.create_tuning_task(sql_id => '&&sqlid',
                    scope => DBMS_SQLTUNE.scope_comprehensive,
                    time_limit => 7200);
  ELSE
    select min(snap_id) into bid from dba_hist_sqlstat where sql_id = '&&sqlid';
    select max(snap_id) into eid from dba_hist_sqlstat where sql_id = '&&sqlid';
    :task_name := dbms_sqltune.create_tuning_task(begin_snap => bid,
                    end_snap => eid,
                    sql_id => '&&sqlid',
                    time_limit => 7200);
  END IF;
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || :task_name);
  dbms_sqltune.execute_tuning_task(:task_name);
EXCEPTION
  WHEN OTHERS THEN
    :err := 1;
    IF (SQLCODE = -13780) THEN
      dbms_output.put_line('ERROR: statement is not in the cursor cache ' ||
                           'or the workload repository.');
      dbms_output.put_line('Execute the statement and try again');
    ELSE
      RAISE;
    END IF;
END;
/
set heading off;
set lines 300 pages 3000
select dbms_sqltune.report_tuning_task(:task_name) from dual where :err <> 1;
select ' ' from dual where :err = 1;
set heading on;
undefine sqlid;
set feedback on;
set veri on;
Let’s look at below in detail
- What is SGA/PGA?
- SGA/PGA suggestions (best practices)
- Configuring SHMMAX and SHMALL for Oracle in Linux
- Linux HugePages for Oracle (for a large SGA on Linux)
- Disabling Transparent HugePages
- SGA/PGA suggestions (from AWR/ADDM)
- SGA/PGA parameters (what is what?)
- SGA/PGA details (from v$ views)
The SGA is shared memory, so it is allocated when the database is started.
The PGA (Program Global Area) is private memory allocated to individual processes; it can't be pre-allocated and is allocated in RAM on an as-needed basis.
SGA/PGA Suggestions (Best Practices)
OS Reserved RAM --> 10% of RAM for Linux & 20% of RAM for Windows
Oracle Database Connections RAM --> pga_aggregate_target (RAM regions for sorting and hash joins)
Oracle SGA Sizing for RAM --> sga_max_size / sga_target
SGA Allocation (Capacity Planning): 45% of RAM size
RAM SIZE = 484 MB, so 45% = 216 MB
--> Split the 216 MB across the SGA, PGA and background processes
--> Fixed background processes require 40 MB
SGA = 160 MB
PGA = 16 MB
Configuring SHMMAX and SHMALL for Oracle in Linux
Shared memory is important for the Oracle Database System Global Area (SGA). Shared memory is part of the Unix IPC (Inter-Process Communication) system maintained by the kernel, where multiple processes share a single chunk of memory to communicate with each other. If there is insufficient shared memory, the Oracle database instance will not start. Set SHMMAX and SHMALL high enough for your SGA requirements.
SHMALL ==> (40% of RAM) the total size of shared memory segments system-wide, set in "pages"
SHMMAX ==> the maximum size (in bytes) of a single shared memory segment
If SHMMAX is set incorrectly (for example, too low), you may receive the following errors:
ORA-27123: unable to attach to shared memory segment
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device
SHMMAX is not relevant when you use /dev/shm POSIX shared memory (Oracle AMM).
Note :
SHMALL is the division SHMMAX/PAGE_SIZE, e.g. 1073741824/4096 = 262144.
Make SHMALL smaller than free RAM to avoid paging.
32-bit servers: 3 GB
64-bit servers: half the RAM
Oracle recommends that SHMALL be the sum of the SGA regions divided by 4096 (the Linux page size).
--> Oracle 11g uses AMM by default, which relies on POSIX shared memory (/dev/shm maps shared memory to files using a virtual shared-memory filesystem), uses 4 KB pages (the default PAGE_SIZE in Linux) and is set to 50% of RAM (allocated on demand; can be swapped).
For performance reasons, any Oracle database that uses more than 4 GB of SGA should use kernel HugePages.
Note: Linux kernel HugePages are set up manually, use 2M (2048 KB) pages, cannot be swapped and are reserved at system start-up.
AMM uses either Sys V shared memory (ipcs), including kernel HugePages, or POSIX /dev/shm for the SGA, but not both at the same time.
oracle@srinalla-db-1:~ $ grep -i shm /etc/sysctl.conf | grep -v '^#'; ls -lrth /proc/sys/kernel/shm*; cd /proc/sys/kernel/; cat shmall; cat shmmni; cat shmmax
kernel.shmall = 154618822656
kernel.shmmax = 154618822656
kernel.shmmni = 4096
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmall
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmmni
-rw-r--r-- 1 root root 0 Dec 21 03:11 /proc/sys/kernel/shmmax
154618822656
4096
154618822656
oracle@srinalla-db-1:~ $ cat /etc/*-release; ipcs -l
Red Hat Enterprise Linux Server release 5.11 (Tikanga)

------ Shared Memory Limits --------
max number of segments = 4096                     /* SHMMNI */
max seg size (kbytes) = 150994944                 /* SHMMAX */
max total shared memory (kbytes) = 618475290624   /* SHMALL */
min seg size (bytes) = 1

------ Semaphore Limits --------
max number of arrays = 4096
max semaphores per array = 256
max semaphores system wide = 32000
max ops per semop call = 100
semaphore max value = 32767

------ Messages: Limits --------
max queues system wide = 2878
max size of message (bytes) = 8192
default max size of queue (bytes) = 65535
Linux Huge Pages for Oracle (For Large SGA On Linux)
With HugePages, the Linux memory page size is set at 2 MB (instead of the default 4 KB). This improves OS performance when running Oracle databases with large SGA sizes.
For 11g, additionally enable the ASMM feature; refer to "Linux HugePages" for more.
AMM and HugePages are not compatible; one needs to disable AMM on 11g to be able to use HugePages. See Document 749851.1 for further information.

ebsdba@ebsdb-prd-01:~ $ grep Hugepagesize /proc/meminfo
Hugepagesize: 2048 kB

Oracle Linux: Script to find Recommended Linux HugePages (Doc ID 401749.1)
HugePages on Linux: What It Is... and What It Is Not... (Doc ID 361323.1)
Disabling Transparent HugePages
Starting from RHEL6/OL6, Transparent HugePages are implemented and enabled by default. They cause node reboots in RAC installations and performance problems on both single-instance and RAC installations. Oracle recommends disabling Transparent HugePages on all servers running Oracle databases, as described in MOS note 1557478.1.
Check the setting as below:
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
SGA/PGA Suggestions ( From AWR/ADDM)
1) Undersized SGA findings in the ADDM report
2) SGA/PGA advice views:
select * from v$sga_target_advice;
(Remember, bigger does not necessarily mean better/faster.)
select * from v$PGA_TARGET_ADVICE;
SGA/PGA Parameters (What is what?)
SGA
Some parameters to consider in SGA sizing include:
SGA_MAX_SIZE --> for dynamic allocation, set this higher than SGA_TARGET
SGA_TARGET --> new in 10g --> automatically and dynamically sizes the SGA components; can be increased up to SGA_MAX_SIZE. A change in its value affects only the automatically sized components: the (redo) log buffer, the shared pool, Java pool, streams pool, buffer cache, keep/recycle caches and, if specified, the non-standard block size caches.
  --> Usually, SGA_MAX_SIZE and SGA_TARGET will be the same value
DB_CACHE_SIZE --> RAM buffer for the data buffers
DB_nK_CACHE_SIZE --> non-standard block size caches
SHARED_POOL_SIZE --> RAM buffer for Oracle and the library cache
LARGE_POOL_SIZE --> for RMAN and parallel queries; parallel execution allocates buffers out of the large pool only when parallel_automatic_tuning=true
LOG_BUFFER --> RAM buffer for the redo logs
PGA
PGA_AGGREGATE_LIMIT --> (12c) a hard limit on the total PGA per instance; cannot be set below PGA_AGGREGATE_TARGET. ** To revert to the pre-12c behaviour, set the parameter value to 0
PGA_AGGREGATE_TARGET --> (9i) a soft limit on the PGA used by the instance
_pga_max_size -->
SGA/PGA Details ( From v$views)
set lines 200 pages 300
select round(sum(bytes)/1024/1024,0) "Total Free memory(MB)" from v$sgastat where name like '%free memory%';
select pool, round(bytes/1024/1024,0) "Free memory(MB)" from v$sgastat where name like '%free memory%';
show parameter sga
show parameter pga
select name, round(bytes/(1024*1024*1024),0) as "Size(GB)", round(bytes/(1024*1024),0) as "Size(MB)", resizeable from v$sgainfo order by 2 desc;
Total Free memory(MB)
---------------------
                 5117

POOL         Free memory(MB)
------------ ---------------
shared pool             2143
large pool               706
java pool               1756
streams pool             512

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
allow_group_access_to_sga            boolean     FALSE
lock_sga                             boolean     FALSE
pre_page_sga                         boolean     TRUE
sga_max_size                         big integer 100G
sga_target                           big integer 85G
unified_audit_sga_queue_size         integer     1048576

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
pga_aggregate_limit                  big integer 50G
pga_aggregate_target                 big integer 30G

NAME                               Size(GB)   Size(MB) RES
-------------------------------- ---------- ---------- ---
Maximum SGA Size                        100     102400 No
Buffer Cache Size                        70      71936 Yes
Free SGA Memory Available                15      15360
Shared Pool Size                         12      11776 Yes
Startup overhead in Shared Pool           5       5078 No
Java Pool Size                            2       1792 Yes
Streams Pool Size                         1        512 Yes
Large Pool Size                           1        768 Yes
In-Memory Area Size                       0          0 No
Shared IO Pool Size                       0        256 Yes
Data Transfer Cache Size                  0          0 Yes
Granule Size                              0        256 No
Redo Buffers                              0        249 No
Fixed SGA Size                            0          7 No

14 rows selected.
References:
Oracle PGA behavior
PGA and SGA sizing on Linux
Monitoring Oracle SGA & PGA Memory Changes
Oracle SGA Sizing
Optimal SHMMAX for Oracle
Monitoring SGA (Free Memory) Using v$sgastat
Get confused about how to calculate SHMMAX and SHMALL on Linux
Configuring SHMMAX and SHMALL for Oracle in Linux
https://docs.oracle.com/database/121/LADBI/app_manual.htm#LADBI7864
Configuring HugePages for Oracle on Linux (x86-64)
#tuning
OCI Level 200 – Identity and Access Management
OCI Level 200 – Virtual Cloud Network
OCI Level 200 – Connectivity IPsec VPN
OCI Level 200 – Virtual Cloud Network Best Practices
OCI Level 200 – Connectivity FastConnect – Part 1
OCI Level 200 – Connectivity FastConnect – Part 2
OCI Level 200 – Compute
OCI Level 200 – Storage
OCI Level 200 – LoadBalancer
OCI Level 200 – Database
#ebs-cloud
Interested in Oracle Cloud (OCI)?
Now through Feb 28, 2022, take OCI certification exams from Oracle University for free.
Get Oracle Cloud Infrastructure (OCI) certified for free.
Oracle Cloud Infrastructure 2021 Architect Associate
–This certification retires on 30-Jun-2022. The 2022 certification will be released in April 2022.
The Oracle Cloud Infrastructure 2021 Architect Associate exam is designed for individuals who possess a strong foundation knowledge in architecting infrastructure using Oracle Cloud Infrastructure services. This certification covers topics such as: Launching Bare Metal and Virtual Compute Instances, Instantiating a Load Balancer, Using Advanced Database features (Dataguard, BYOL, Data encryption, RAC, and EXADATA), Advanced Networking Concepts, Architecting Best Practices for HA and Security, Identity and Access Management, Networking, Compute, and Storage.
This certification validates deep understanding of OCI services to spin up infrastructure and provides a competitive edge in the industry. Up-to-date OCI training and hands-on experience are recommended. This certification is available to all candidates.
Oracle SQL Plan Changed?
Let’s look at below in detail
- Why does an execution plan change?
- How to fix a bad SQL plan?
- Image flow of SQL Profile & SQL Plan Baseline
- Create a SQL Profile (considering SQL Tuning Advisor suggestions)
- How to create a SQL Profile? (Which script is used?)
How to fix a Bad SQL Plan?
You can fix the SQL plan with either a SQL Profile or a SQL Plan Baseline.
Let's take a look at what they are and their differences.
SQL profiles and SQL plan baselines help improve the performance of SQL statements by ensuring that the optimizer uses only optimal plans.
==> Baselines define the set of execution plans that can be used by each query; SQL profiles only provide additional information to "push" the optimizer to favor one plan or another.
==> SQL plan baselines are proactive, whereas SQL profiles are reactive.
==> SQL plan baselines reproduce a specific plan, whereas SQL profiles correct optimizer cost estimates. SQL plan baselines prevent the optimizer from using suboptimal plans in the future.
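As a sketch of the baseline route (reusing the sql_id and plan_hash_value from the example later in this post), DBMS_SPM can load a known good plan from the cursor cache and mark it FIXED:

SET SERVEROUTPUT ON
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id          => '8ap1zyhp2q7xd',
         plan_hash_value => 2141716862,
         fixed           => 'YES');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || n);
END;
/

-- Verify
SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines;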
Image flow of SQL Profile & SQL Plan Baseline
Create SQL Profile ( Considering SQL Tuning Adviser Suggestions)
1) Run the SQL Tuning Advisor (sqltrpt.sql/OEM) and get the suggestion for the best available plan
2) Create the SQL Profile using the SQLT script (coe_xfr_sql_profile.sql)
3) Flush the SQL ID from memory (cancel any jobs with respect to the SQL ID, then flush)
4) Now you should see the new plan reflected (from gv$sql)
This is the best plan available for this SQL ID; we have to force the good plan by creating a SQL Profile.
PLAN_HASH_VALUE AVG_ET_SECS
--------------- -----------
     2538395789        .437   <-- best plan, but not reproducible
     2141716862     100.932   <-- last seen 2018-12-03/17:00:03, original plan
      420475214    4898.131   <-- current plan
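The plan history above can be pulled from AWR with a query along these lines (a sketch; the DBA_HIST_* views require Diagnostics Pack licensing):

-- Average elapsed seconds per plan for one SQL ID, from AWR
SELECT plan_hash_value,
       ROUND(SUM(elapsed_time_delta) / NULLIF(SUM(executions_delta), 0) / 1e6, 3) AS avg_et_secs
  FROM dba_hist_sqlstat
 WHERE sql_id = '8ap1zyhp2q7xd'
 GROUP BY plan_hash_value
 ORDER BY avg_et_secs;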
How to create a SQL Profile? (Which script is used?)
Using coe_xfr_sql_profile.sql, we can create it (no installation needed; just download SQLTXPLAIN (SQLT) and refer to the Download SQLT section).
SQL> @coe_xfr_sql_profile.sql
SQL> @/dba/srinalla/scripts/sqlt/sqlt/utl/coe_xfr_sql_profile.sql

Parameter 1:
SQL_ID (required)

Enter value for 1: 8ap1zyhp2q7xd

PLAN_HASH_VALUE AVG_ET_SECS
--------------- -----------
     2538395789        .437
     2819292650        .599
     3897290700       4.068
     3111005336        4.21
     3446519203         6.4
Parameter 2:
PLAN_HASH_VALUE (required)
Enter value for 2: 2141716862
Values passed to coe_xfr_sql_profile:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SQL_ID : "8ap1zyhp2q7xd"
PLAN_HASH_VALUE: "2141716862"
Execute coe_xfr_sql_profile_8ap1zyhp2q7xd_2141716862.sql
on TARGET system in order to create a custom SQL Profile
with plan 2141716862 linked to adjusted sql_text.
Now you can create the SQL Profile with the above script:
SQL> @coe_xfr_sql_profile_8ap1zyhp2q7xd_2141716862.sql

PL/SQL procedure successfully completed.

SQL> WHENEVER SQLERROR CONTINUE
SQL> SET ECHO OFF;

            SIGNATURE
---------------------
   582750245629242078

           SIGNATUREF
---------------------
   582750245629242078
...
Manual custom SQL Profile has been created.
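To confirm the profile exists and gets picked up, a quick sketch (the SQL ID is the one from this example; the purge address/hash value is a placeholder taken from v$sqlarea):

-- Check the profile was created
SQL> SELECT name, status, created FROM dba_sql_profiles ORDER BY created DESC;
-- Optionally purge the old cursor so the next execution re-parses with the profile
SQL> SELECT address || ',' || hash_value FROM v$sqlarea WHERE sql_id = '8ap1zyhp2q7xd';
SQL> exec dbms_shared_pool.purge('<address,hash_value>', 'C')
-- After the next hard parse, v$sql shows the profile name against the cursor
SQL> SELECT sql_id, plan_hash_value, sql_profile FROM v$sql WHERE sql_id = '8ap1zyhp2q7xd';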
From SQL Tuning Advisor: this can be run from the DB node as below (or from OEM Cloud Control).
SQL> @?/rdbms/admin/sqltrpt.sql
Specify the Sql id
~~~~~~~~~~~~~~~~~~
Enter value for sqlid: 8ap1zyhp2q7xd

Sql Id specified: 8ap1zyhp2q7xd

Tune the sql
~~~~~~~~~~~~
GENERAL INFORMATION SECTION
Tuning Task Name   : TASK_313228
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 1800
Completion Status  : COMPLETED
Started at         : 12/10/2018 02:09:31
Completed at       : 12/10/2018 02:16:37

Schema Name: APPS
SQL ID     : 8ap1zyhp2q7xd
SQL Text   : SELECT /*+ leading (gjl gjh) */ GJL.EFFECTIVE_DATE ACC_DATE,

FINDINGS SECTION (3 findings)

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 82.16%)
------------------------------------------
- Consider accepting the recommended SQL profile.
  execute dbms_sqltune.accept_sql_profile(task_name => 'TASK_313228',
          task_owner => 'SYS', replace => TRUE);

id plan hash   last seen            elapsed (s)  origin  note
-- ----------  -------------------  -----------  ------  ----------------
 1 3013459780  2018-06-03/23:58:03        2.674  STS     not reproducible
 2 3897290700  2018-12-06/14:00:54        7.268  AWR     not reproducible
 3 2141716862  2018-12-03/17:00:03       22.321  AWR     original plan
 4 1462347114  2018-12-06/17:00:38       50.572  AWR     not reproducible
 5 1471114134  2018-12-07/16:00:04       60.236  AWR     not reproducible

3- Alternative Plan Finding
---------------------------
Some alternative execution plans for this statement were found by searching
the system's real-time and historical performance data. The following table
lists these plans ranked by their average elapsed time. See section
"ALTERNATIVE PLANS SECTION" for detailed information on each plan.
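If the recommended profile is accepted and the plan later regresses, it can be disabled or dropped again; a sketch using the documented DBMS_SQLTUNE procedures (the profile name placeholder is whatever dba_sql_profiles shows):

-- Accept the recommended profile, as suggested in the findings above
SQL> exec dbms_sqltune.accept_sql_profile(task_name => 'TASK_313228', task_owner => 'SYS', replace => TRUE)
-- Disable it if the plan regresses
SQL> exec dbms_sqltune.alter_sql_profile(name => '<profile_name>', attribute_name => 'STATUS', value => 'DISABLED')
-- Or drop it entirely
SQL> exec dbms_sqltune.drop_sql_profile(name => '<profile_name>')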
References
Database SQL Tuning Guide
Using Oracle baselines you can fix the sql plan for a SQLID
How To Improve SQL Statements Performance: Using SQL Plan Baselines
SQL Profiles Vs SQL Plan Baselines? by Girish Kumar
What is the difference between SQL Profiles and SQL Plan Baselines?
One Easy Step To Get Started With SQL Plan Baselines
2 Useful Things To Know About SQL Plan Baselines
Parsing SQL Statements in Oracle
#tuning
AD Patch Scripts
Patch Application Steps
1. Review whether downtime is required or not.
a. If downtime is required:
i. Get the list of invalid objects.
ii. Bring down the applications.
iii. Enable maintenance mode through adadmin.
iv. Apply the patch through adpatch.
v. Disable maintenance mode through adadmin.
vi. Bring up the applications.
vii. Validate that the patch has been applied and check for any new invalid objects.
viii. Ensure all services are up, then release the environment.
b. If downtime is not required:
i. Get the list of invalid objects.
ii. Apply the patch through adpatch options=hotpatch.
iii. Validate that the patch has been applied and check for any new invalid objects.
iv. Ensure all services are functioning.
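For the hotpatch path, a minimal sketch of the before/after check and the apply command:

-- Baseline count of invalid objects before patching (re-run afterwards to compare)
SQL> SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';

$ adpatch options=hotpatch    # applies the patch without enabling maintenance mode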
Patching Activity on EBSPRD:
Steps:
1. Bring down the application.
2. Enable maintenance mode.
3. adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt
4. Enter credentials until it prompts for the patch location.
5. When prompted for the location, type 'abort'.
6. Disable maintenance mode.
7. Open a screen session.
8. Sudo to the application user (applmgr) and source the application environment.
9. Execute sh /apps/patches/Test2.sh
10. When it prompts about an old session, say 'NO'.
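To double-check that maintenance mode toggled as expected, one quick query (a sketch; adadmin/adsetmmd.sql remain the supported ways to change the mode):

SQL> SELECT fnd_profile.value('APPS_MAINTENANCE_MODE') FROM dual;
-- Returns MAINT while maintenance mode is enabled, NORMAL once disabled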
Bringing down the application tier:
1. Concurrent node services
2. Forms/web services
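A sketch of the standard control scripts for these two steps (run as the application owner with the environment sourced; the apps password is a placeholder):

$ adcmctl.sh stop apps/<apps_pwd>    # stop the concurrent managers (concurrent node)
$ adstpall.sh apps/<apps_pwd>        # stop all application tier services (forms/web)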
Applying Patch:
There are 43 patches in total. The command adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt is run first to record all the input we provide into the ERPdefaults.txt file. Merging patches: if there are language patches, merge all the language patches into one.
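A minimal sketch of that merge using admrgpch (the source directory is a placeholder; the destination and merge name match the All_Lan_Dst/u_Test22.drv entries in the script below):

$ admrgpch -s /apps/patches/Test2/<lang_patches_src> -d /apps/patches/Test2/All_Lan_Dst -merge_name u_Test22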
Adpatch script:
cat /apps/patches/Test2.sh
sqlplus apps @$AD_TOP/patch/115/sql/adsetmmd.sql ENABLE
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/22673920 logfile=l22673920.log driver=u22673920.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/19066382 logfile=l19066382.log driver=u19066382.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/20086596 logfile=l20086596.log driver=u20086596.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/21471243 logfile=l21471243.log driver=u21471243.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/25611260 logfile=l25611260.log driver=u25611260.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/16972536 logfile=l16972536.log driver=u16972536.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/21424549 logfile=l21424549.log driver=u21424549.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/18426069 logfile=l18426069.log driver=u18426069.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/16966157 logfile=l16966157.log driver=u16966157.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/12802881 logfile=l12802881.log driver=u12802881.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/22653309 logfile=l22653309.log driver=u22653309.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/13692238 logfile=l13692238.log driver=u13692238.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/16970138 logfile=l16970138.log driver=u16970138.drv workers=50
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt patchtop=/apps/patches/Test2/All_Lan_Dst logfile=Test2_lan_pth.log driver=u_Test22.drv workers=128
sqlplus apps @$AD_TOP/patch/115/sql/adsetmmd.sql DISABLE
exit
ebsprd@apps-ebsprd01:/apps/patches/ $ cat $APPL_TOP/admin/$TWO_TASK/ERPdefaults.txt
#
# AD Default Values File
#
# Updated by AutoPatch on Fri Jun 16 2017 22:16:42
#
## Start of Defaults Record
%%START_OF_TOKEN%% APPL_TOP %%END_OF_TOKEN%%
%%START_OF_VALUE%% /im/EBSPRD/apps/apps_st/appl %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% DBNAME %%END_OF_TOKEN%%
%%START_OF_VALUE%% EBSPRD_BALANCE %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% MATCH_APPL_TOP %%END_OF_TOKEN%%
%%START_OF_VALUE%% Yes %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% LOG_FNAME %%END_OF_TOKEN%%
%%START_OF_VALUE%% ERPdefaults.log %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% DEF_BATCH_SIZE %%END_OF_TOKEN%%
%%START_OF_VALUE%% 1000 %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% ORACLE_HOME %%END_OF_TOKEN%%
%%START_OF_VALUE%% /im/EBSPRD/apps/tech_st/10.1.2 %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% CORRECT_DBENV %%END_OF_TOKEN%%
%%START_OF_VALUE%% Yes %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% DEF_SYSTEM_PWD %%END_OF_TOKEN%%
%%START_OF_VALUE%% 7782A9A20F0B4F635CADA0EA24D095D610C0 %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% ORACLE_username_Application_Object_Library %%END_OF_TOKEN%%
%%START_OF_VALUE%% APPLSYS %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% ORACLE_password_Application_Object_Library %%END_OF_TOKEN%%
%%START_OF_VALUE%% 7782A9A282CA42FED619A93FF9202C88D75612 %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% DEF_ACTIVATE_EMAIL %%END_OF_TOKEN%%
%%START_OF_VALUE%% No %%END_OF_VALUE%%
## End of Defaults Record
## Start of Defaults Record
%%START_OF_TOKEN%% MISSING_TRANSLATED_VERSION %%END_OF_TOKEN%%
%%START_OF_VALUE%% yes %%END_OF_VALUE%%
## End of Defaults Record
#notes
EBS on OCI ( My Notes)
Oracle Cloud Infrastructure (OCI) is Oracle's IaaS offering.
There are many cloud infrastructure providers, such as Amazon (AWS), Oracle (OCI), Microsoft (Azure), and Google (Google Cloud).
What’s New in Oracle Cloud Infrastructure
EBS (R12) Deployment Options on Cloud
EBS Middle/Application Tier
The Oracle EBS middle tier can only be deployed on the IaaS service model; within IaaS, it runs either on Oracle Cloud Infrastructure (OCI) or on OCI Classic.
EBS Database Tier
The Oracle EBS database tier can be deployed on either the IaaS or the PaaS service model; within PaaS, this means Database Cloud Service (DBCS). DBCS has a few deployment options, of which two are supported for EBS (R12) on Cloud: Database as a Service (DBaaS) and Exadata Cloud Service (ExaCS).
OLL
OCI Study Guide *********
Just-in-Time Videos
OCI Introduction
OCI -Foundation
OCI Advanced
Tips for OCI Architect Certification
Certification Path
White Paper: EBS Deployment on OCI ****
Cloud Onboarding Guide
EBS on OCI Blog Posts
Cloud computing concepts (HA, DR, security), regions, availability domains, OCI terminology and services, networking, databases, load balancing, IAM, DNS, FastConnect, VPN, compartments, tagging, and Terraform, with a focus on how to use them with OCI and Exadata
OCI for Apps DBA **
EBS (R12) on Cloud: Architects Perspective ****
EBS (R12) On Cloud (OCI): High Level Steps *****
EBS Cloud Admin Tool
EBS Cloud Manager
The EBS Cloud Admin Tool on OCI has been superseded by EBS Cloud Manager.
EBS Cloud Manager
Provisioning –> 2 options (Simple & Advanced)
Migration –> Lift and Shift using Cloud Manager
Cloning –>
Deletion
Cloud Service Model: SaaS | PaaS | IaaS
EBS (R12) On Cloud Deployment Architecture
Role of Apps DBA in Cloud
Summary
Knowledge of Oracle Cloud (OCI & DBCS) and EBS-OCI Lift & Shift. Took Oracle's internal cloud training on OCI (IaaS) and DBCS (PaaS). High-level understanding of core OCI fundamentals. Familiar with EBS-OCI Lift & Shift, the high-level deployment steps, and tools like EBS Cloud Manager (formerly the EBS Cloud Admin Tool).
INTERNAL TRAININGS & KNOWLEDGE
Topic: Oracle Cloud offerings for EBS (OCI & DBCS - DBaaS/ExaCS)
Mode: Online, Self-Paced
Role: Cloud EBS DBA
Gained Knowledge in following Areas
• OCI (IaaS) & DBCS (ExaCS)
• Knowledge of OCI: infrastructure, compute, database, networking (VCN, VPN, IPSec tunnel functionality), storage services (FSS), and IAM
• Knowledge of DBCS (DBaaS & ExaCS) and the Cloud Backup Storage Service
• Basic understanding of migrating EBS to OCI (Lift & Shift)
• Familiar with the high-level steps of deploying EBS on OCI
• Knowledge of the EBS Cloud Admin Tool
• Familiar with installing and configuring EBS Cloud Manager
• Knowledge of EBS Cloud Manager (provisioning, migration, cloning, deletion)
#ebs-cloud, #notes
DBA References
My Posts
EBS Maintenance (Patching/Upgrades/Migration)
Performance Tuning (DB/EBS)
Interested in Oracle Cloud (OCI)?
Newsletters
Oracle Database Monthly News
Other Blogs
https://blogs.oracle.com/ebs/oracle-ebs-news
https://blogs.oracle.com/ebsandoraclecloud/
https://blogs.oracle.com/stevenchan/training-3
https://blogs.oracle.com/stevenchan
E-Business Suite DBA
dba-self
Apps DBA Interview-1
Apps DBA Interview-2
Apps DBA Interview-3
Few Support Issues
Support Issues
EBS 12.1: Premier Support to December 2021
EBS 12.2: Premier Support to At Least December 2032
Oracle E-Business Suite 12.2.12 Now Available (Nov-2022) - Latest Release
Oracle E-Business Suite 12.2.11 Now Available (Nov-2021)
Oracle E-Business Suite 12.2.10 Now Available (Sep-2020)
Oracle E-Business Suite 12.2.9 Now Available (Aug-2019)
Previous releases are listed below.
EBS Maintenance (Patching/Upgrades/Migration)
My Posts –
(Still writing notes for the items below...)
EBS Upgrades and Platform Migration
Transportable Tablespaces (TTS) for Upgrade/Migration
Working with Data Pump?
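Until those notes are written, a minimal Data Pump sketch (the schema, directory object, and file names are placeholders):

$ expdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr_exp.dmp LOGFILE=hr_exp.log SCHEMAS=HR
$ impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=hr_exp.dmp LOGFILE=hr_imp.log SCHEMAS=HR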
EBS Upgrades
Oracle E-Business Suite Upgrade Guide
Sizing and Best Practices
12.2 Upgrade Process Flow
Planning Your Upgrade
12.2 Upgrade Best Practices to Minimize Downtime
EBS Patching Issues
Top Patching Issues 11i & 12.x
Restart a failed patch?
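A sketch of the usual recovery flow (the option numbers refer to the interactive AD Controller menu):

$ adctrl      # option 1: show worker status to find the failed job
              # fix the underlying error, then option 2: tell worker to restart a failed job
$ adpatch     # or re-run adpatch and answer 'Yes' to continue the previous session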
Webcasts
Webcast: EBS Maintenance
Webcast: Advanced Architectures: DMZ
Webcast: Auditing and Security
Webcast: TLS 1.2 Configuration
EBS Debug and Tracing
Query: Trace/Debug Profile
Critical Patch Update Advisory
- 15 January 2019
- 16 April 2019
- 16 July 2019
- 15 October 2019
JRE Plugin Upgrade (Doc ID 393931.1)
JRE 8 --> JRE 1.8.0_191/192
JRE 7 --> JRE 1.7.0_201
JRE 6 --> JRE 1.6.0_211
Java Web Start and Java Plug-in?
Java Web Start provides a browser-independent architecture; the Java Plug-in for web browsers was deprecated starting with the release of Java SE 9.
#notes