Feed aggregator

Latest Blog Posts by Oracle ACEs - May 5-11, 2019

OTN TechBlog - Thu, 2019-05-23 11:20
Skills on display...

As I explained in Tuesday's blog post, members of the Oracle ACE Program were extraordinarily busy blogging the week of May 5-11, 2019, so much so that I've had to break the list of the latest posts into separate lists for the three levels in the program: ACE Directors, ACEs, and ACE Associates. Tuesday's post featured the ACE Director posts. Today it's the ACEs' turn. Posts from ACE Associates will soon follow.

 

Anoop Johny
Senior Developer, Wesfarmers Chemicals, Energy & Fertilisers
Perth, Australia

Anton Els
VP Engineering, Dbvisit Software Limited
Auckland, New Zealand

 

Atul Kumar
Founder & CTO, K21 Academy
London, United Kingdom

 

Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil

 

Kyle Goodfriend
Vice President, Planning & Analytics, Accelytics Inc.
Columbus, Ohio

 

Laurent Leturgez
President/CTO, Premiseo
Lille, France

 

Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina

 

Marco Mischke
Group Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Martin Berger
Database Teamleader, Hutchison Drei Austria GmbH
Vienna, Austria

 

Miguel Palacios
Gerente General, Global Business Solutions Perú
Peru

 

Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France

 

Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida

 

 


RMAN Full Backup & Demo

Zed DBA's Oracle Blog - Thu, 2019-05-23 09:13

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Full Backup

A full backup is a backup of all the database's data, excluding free space, stored in RMAN backup pieces. For example, if you have a newly created database whose datafiles were added with an initial size of 10GB, most of each datafile is just free space; RMAN does not back up that free space, which makes the backup space efficient.
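
As an aside, you can get a rough feel for how much of each tablespace is allocated versus free - the free space being what RMAN skips - with a quick query against the standard DBA views. This is just an illustrative sketch, not part of the original demo:

-- Rough sketch: allocated vs free space per tablespace, using the
-- standard DBA_DATA_FILES and DBA_FREE_SPACE views
select df.tablespace_name,
       round(df.bytes / 1024 / 1024)         as allocated_mb,
       round(nvl(fs.bytes, 0) / 1024 / 1024) as free_mb
from   (select tablespace_name, sum(bytes) bytes
        from   dba_data_files
        group by tablespace_name) df
left join
       (select tablespace_name, sum(bytes) bytes
        from   dba_free_space
        group by tablespace_name) fs
on     fs.tablespace_name = df.tablespace_name
order by df.tablespace_name;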

Archive Log Mode

When DML (Data Manipulation Language) is run against a database, the changes are stored in the online redo logs, so Oracle can replay the DML during crash recovery to ensure the database is consistent and that all committed data is present. As each online redo log group fills up, Oracle moves on to the next group until it reaches the last one and then cycles back to the first. In order to preserve the redo, Oracle "archives" each filled online redo log to an archive log, hence the name.
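
As a quick sanity check before enabling it, you can confirm whether a database is already in archive log mode from SQL*Plus - a minimal sketch using the standard V$DATABASE view and the SQL*Plus ARCHIVE LOG LIST command:

-- Shows ARCHIVELOG or NOARCHIVELOG
select log_mode from v$database;

-- SQL*Plus command: shows the mode, archive destination and log sequences
archive log list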

Archived redo logs, along with the online redo logs, are required to recover a database after a restore, because if the database was open it's a moving target and the DML will need to be replayed so that the first and last datafiles that were backed up are consistent, i.e. at the same point (SCN), if doing incomplete recovery. Complete recovery is when all the archived redo logs and then the online redo logs are applied, taking the database to the current SCN from before the restore was done, i.e. no data loss.
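
To make the distinction concrete, here is a hedged RMAN sketch of the two recovery styles - the SCN used is purely hypothetical and the commands are a simplified outline, not the demo scripts used below:

# Complete recovery: apply all archived and online redo - no data loss
# (assumes the database has been started in MOUNT mode first)
RUN {
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN;
}

# Incomplete (point-in-time) recovery to a hypothetical SCN,
# which requires opening with RESETLOGS afterwards
RUN {
  SET UNTIL SCN 123456;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}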

Demos

Enable Archive Log Mode

Before we can take a full backup we need to enable archive log mode using my script 1_enable_archive_log_mode.sh:

[oracle@dc1sbxdb001 demo]$ ./1_enable_archive_log_mode.sh
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base has been set to /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then a demo log table is created to record each demo as it runs:

-----------------------------
Step 2: Create demo log table
-----------------------------

Content of 1_create_demo_table.sql file:

create table demo_log (when timestamp, comments varchar2(200));
exit

Press Enter to continue

Calling 'sqlplus / as sysdba @1_create_demo_table.sql'

Table created.

Press Enter to continue

Next we enable archive log mode:

-------------------------------
Step 3: Enable Archive Log Mode
-------------------------------

Content of 1_enable_archive_log_mode.sql file:

alter system set db_recovery_file_dest_size = 15G;
shutdown immediate;
startup mount;
alter database archivelog;
alter database open;
insert into demo_log values (sysdate, 'Enable Archive Log Mode');
commit;
@/media/sf_Software/scripts/demo/demo_log.sql
exit

Press Enter to continue

Calling 'sqlplus / as sysdba @1_enable_archive_log_mode.sql'

System altered.

Database closed.
Database dismounted.
ORACLE instance shut down.
ORACLE instance started.

Total System Global Area 2952790016 bytes
Fixed Size 8625080 bytes
Variable Size 1677722696 bytes
Database Buffers 1258291200 bytes
Redo Buffers 8151040 bytes
Database mounted.

Database altered.

Database altered.

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Full Database Backup

We take a full backup using my script 2_full_backup.sh:

[oracle@dc1sbxdb001 demo]$ ./2_full_backup.sh 
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base has been set to /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then the full backup is taken:

------------------------
Step 2: Take Full Backup
------------------------

Content of 2_full_backup.cmd file:

BACKUP DATABASE TAG 'FULL BACKUP';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/2_full_backup.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Thu May 16 17:38:52 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP DATABASE TAG 'FULL BACKUP';
2> HOST 'read Press Enter to LIST BACKUP';
3> LIST BACKUP;
4> 
Starting backup at 16-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=149 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 16-MAY-19
channel ORA_DISK_1: finished piece 1 at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp tag=FULL BACKUP comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 16-MAY-19

Starting Control File and SPFILE Autobackup at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 16-MAY-19

host command complete

We use host within RMAN just to wait for input before moving on.  Next we list the backup within RMAN using 'LIST BACKUP':

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 498.93M DISK 00:00:08 16-MAY-19 
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20190516T173912
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 353836 Ckp time: 16-MAY-19

Recovery Manager complete.

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To update and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Full Database Backup with Archive Logs

We take a full backup with archive logs using my script 3_full_backup_plus_archivelogs.sh:

[oracle@dc1sbxdb001 demo]$ ./3_full_backup_plus_archivelogs.sh
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base has been set to /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then the full backup with archive logs is taken:

----------------------------------------
Step 2: Take Full Backup plus archivelog
----------------------------------------

Content of 3_full_backup_plus_archivelogs.cmd file:

BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT TAG 'FULL BACKUP PLUS ARCHIVELOG';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/3_full_backup_plus_archivelogs.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Thu May 16 17:45:54 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT TAG 'FULL BACKUP PLUS ARCHIVELOG';
2> HOST 'read Press Enter to LIST BACKUP';
3> LIST BACKUP;
4>

Starting backup at 16-MAY-19
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=158 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=13 RECID=1 STAMP=1008438357
channel ORA_DISK_1: starting piece 1 at 16-MAY-19
channel ORA_DISK_1: finished piece 1 at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp tag=FULL BACKUP PLUS ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/archivelog/2019_05_16/o1_mf_1_13_gfv4y4mk_.arc RECID=1 STAMP=1008438357
Finished backup at 16-MAY-19

Starting backup at 16-MAY-19
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 16-MAY-19
channel ORA_DISK_1: finished piece 1 at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp tag=TAG20190516T174603 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:07
Finished backup at 16-MAY-19

Starting backup at 16-MAY-19
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=14 RECID=2 STAMP=1008438370
channel ORA_DISK_1: starting piece 1 at 16-MAY-19
channel ORA_DISK_1: finished piece 1 at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp tag=FULL BACKUP PLUS ARCHIVELOG comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/ZEDDBA/archivelog/2019_05_16/o1_mf_1_14_gfv4ylo5_.arc RECID=2 STAMP=1008438370
Finished backup at 16-MAY-19

Starting Control File and SPFILE Autobackup at 16-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008438372_gfv4ynmv_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 16-MAY-19

host command complete

We use host within RMAN just to wait for input before moving on, for demo purposes.  Next we list the backup within RMAN using 'LIST BACKUP':

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 498.93M DISK 00:00:08 16-MAY-19 
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20190516T173912
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 353836 Ckp time: 16-MAY-19

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
3 77.83M DISK 00:00:01 16-MAY-19 
BP Key: 3 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp

List of Archived Logs in backup set 3
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 13 332298 16-MAY-19 354044 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4 Full 498.94M DISK 00:00:06 16-MAY-19 
BP Key: 4 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174603
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
List of Datafiles in backup set 4
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
5 3.50K DISK 00:00:00 16-MAY-19 
BP Key: 5 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

List of Archived Logs in backup set 5
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 14 354044 16-MAY-19 354066 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 6 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174612
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008438372_gfv4ynmv_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 354077 Ckp time: 16-MAY-19

Recovery Manager complete.

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To update and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
Reference Scripts
  1. 1_enable_archive_log_mode.sh
  2. 1_enable_archive_log_mode.sql
  3. 1_create_demo_table.sql
  4. 2_full_backup.sh
  5. 2_full_backup.cmd
  6. 3_full_backup_plus_archivelogs.sh
  7. 3_full_backup_plus_archivelogs.cmd

To download all 7 in one zip: 1_full_backup.zip

The rest of the series
  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

 

If you found this blog post useful, please like as well as follow me through my various social media avenues available on the sidebar, and/or subscribe to this Oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Categories: DBA Blogs

Oracle Merchandising Solution Tops Shopping List for Bosnian Grocers

Oracle Press Releases - Thu, 2019-05-23 07:00
Press Release
Oracle Merchandising Solution Tops Shopping List for Bosnian Grocers

Retail brands Konzum and Mercator gain operational agility with modern retail technology

Redwood Shores, Calif.—May 23, 2019

When customers enter a Mercator or Konzum supermarket, they discover a plethora of organic products, fresh chopped salads, and meat ripening chambers intermingled with everyday grocery needs. The dominant grocery brands in Bosnia and Herzegovina (B&H) employ a visual standard and layout that maximizes the display of products across various segments. By upgrading to the latest release of Oracle Retail, the trading department can automate the merchandising process and offer guided best practice to associates, allowing them to focus on strategic tasks that impact the bottom line and improve daily operational efficiency.

“The common user interface of our new retail merchandising system (RMS) simplifies the day to day activities while allowing for flexibility where the business demands it. Once our buyers got the hang of the new system we began to see a marked improvement in the speed and efficiency in which they could get their daily, previously tedious, tasks completed,” said Željka Mujezinović, IT manager, Mercator BH d.o.o. (B&H). The Mercator relationship with Oracle began in 2017.

“With the time saved they can spend more time on higher-value tasks and focusing on optimizing our merchandising strategy,” said Džemal Vejsil, IT manager, Konzum d.o.o. (B&H). The Konzum relationship began in 2016.

In the recent Topography of Retail Report, nearly 6,400 European consumers surveyed prioritized competitive pricing and general promotions (78 percent) as among the most compelling attributes of the shopping experience. With Oracle Retail Merchandising, Konzum and Mercator BH buyers will be able to implement differential pricing strategies and multi-buy and transaction promotions easily and pass those savings on to customers. In the future, consumers can take advantage of multi-buy transactions where the more you buy, the more you save. The buyers will be able to evaluate new promotion options to ensure more consistent pricing levels for store shoppers.

“Retailers need a world-class, modern platform and the ability to interact across all areas of the organization seamlessly. We believe that every retail process should start and end with analytics and business intelligence. Oracle collaborated with 50 retailers, such as Mercator BH and Konzum, to rewrite our merchandising solution and establish the persona and correlated activities to drive operational agility and better business insights,” said Mike Webster, senior vice president and general manager, Oracle Retail.

“We had mStart consultants and customer team members working closely with Oracle on this project. We aligned across the Trading and Purchasing departments to avoid disruption to the business, resulting in the implementation being completed on-time and on-budget for the customer in January 2019,” said Želimira Ivček, Director of Oracle Retail Systems at mStart Plus d.o.o.

mStart Plus d.o.o. is a Gold level member of Oracle PartnerNetwork (OPN).

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About mStart Plus d.o.o

mStart Plus is a business and technology company that offers complete solutions for infrastructure and optimization of business processes. With over ten years of experience in ERP business systems and with over 340 tech experts on our team mStart Plus has developed, implemented, integrated and supported some of the largest IT systems in the region. mStart Plus, certified for world-leading IT platforms (Oracle, SAP), offers custom software solutions in a large number of industries including agriculture, manufacturing, retail, wholesale, distribution and pharmaceutical industry.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Re-partitioning

Jonathan Lewis - Thu, 2019-05-23 05:45

I wrote a short note a little while ago demonstrating how flexible Oracle 12.2 can be about physically rebuilding a table online to introduce or change the partitioning while discarding data, and so on. But what do you do (as a recent question on ODC asked) if you want to upgrade a customer's database to meet the requirements of a new release of your application by changing a partitioned table into a composite partitioned table, and don't have enough room to do an online rebuild (which could require two copies of the data to exist at the same time)?

If you’ve got the down time (and not necessarily a lot is needed) you can fall back on “traditional methods” with some 12c enhancements. Let’s start with a range partitioned table:


rem
rem     Script:         pt_comp_from_pt.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem

create table pt_range (
        id              number(8,0)     not null,
        grp             varchar2(1)     not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range(id) (
        partition p200 values less than (200),
        partition p400 values less than (400),
        partition p600 values less than (600)
)
;

insert into pt_range
select
        rownum-1,
        mod(rownum,2),
        lpad(rownum,10,'0'),
        rpad('x',100,'x')
from
        all_objects
where
        rownum <= 600
;

commit;

So we’ve got a range-partitioned table with three partitions and some data in each partition. Let’s pretend we want to change this to range/list with the grp column as the subpartition key, allowing explicit use of values 0,1,2 and a bucket subpartition for anything else. First we create an empty version of the table with a suitable subpartition template, and a simple heap table to be used as an exchange medium:


create table pt_range_list (
        id              number(8,0)     not null,
        grp             varchar2(1)     not null,
        small_vc        varchar2(10),
        padding         varchar2(100)
)
partition by range(id)
subpartition by list (grp)
subpartition template (
        subpartition p_def      values(default)
)
(
        partition p200 values less than (200),
        partition p400 values less than (400),
        partition p600 values less than (600)
)
;

prompt  ===============================================
prompt  First nice 12.2 feature - "create for exchange"
prompt  ===============================================

create table t for exchange with table pt_range;

You’ll notice that our subpartition template identifies just a single subpartition that takes default values – i.e. anything for which no explicit subpartition has been identified. This means we have a one-to-one correspondence between the data segments of the original table and the copy table. So now we go through a tedious loop (which we could code up with a PL/SQL “execute immediate” approach) to do a double-exchange for each partition in turn. (Any PL/SQL code is left as an exercise to the interested reader.)
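
Although the PL/SQL is left as an exercise, purely as an illustration a minimal "execute immediate" sketch might look like this, assuming the partition and subpartition naming pattern used in this example:

begin
        for p in (
                select partition_name
                from   user_tab_partitions
                where  table_name = 'PT_RANGE'
                order by partition_position
        ) loop
                -- exchange the partition out to the heap table, then
                -- exchange it back in as the matching default subpartition
                execute immediate
                        'alter table pt_range exchange partition ' ||
                        p.partition_name || ' with table t';
                execute immediate
                        'alter table pt_range_list exchange subpartition ' ||
                        p.partition_name || '_P_DEF with table t';
        end loop;
end;
/

Spelled out explicitly for this three-partition example: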


alter table pt_range exchange partition p200 with table t;
alter table pt_range_list exchange subpartition p200_p_def with table t;

alter table pt_range exchange partition p400 with table t;
alter table pt_range_list exchange subpartition p400_p_def with table t;

alter table pt_range exchange partition p600 with table t;
alter table pt_range_list exchange subpartition p600_p_def with table t;

prompt  =====================================
prompt  Show that we've got the data in place
prompt  =====================================

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

break on partition_name skip 1

select  partition_name, subpartition_name, num_rows
from    user_tab_subpartitions
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;


PARTITION_NAME         SUBPARTITION_NAME        NUM_ROWS
---------------------- ---------------------- ----------
P200                   P200_P_DEF                    200

P400                   P400_P_DEF                    200

P600                   P600_P_DEF                    200


3 rows selected.

We now have to split the newly arrived subpartitions into the 4 pieces we want – but before we do that let’s make sure that any new partitions automatically have the correct subpartitions by changing the subpartition template:


alter table pt_range_list
set subpartition template(
        subpartition p_0 values (0),
        subpartition p_1 values (1),
        subpartition p_2 values (2),
        subpartition p_def values (default)
)
;

prompt  =========================================================
prompt  Second nice 12.2 feature - multiple splits in one command
prompt  We could do this online after allowing the users back on.
prompt  =========================================================

alter table pt_range_list split subpartition p200_p_def
        into (
                subpartition p200_p_0 values(0),
                subpartition p200_p_1 values(1),
                subpartition p200_p_2 values(2),
                subpartition p200_p_def
        )
;

alter table pt_range_list split subpartition p400_p_def
        into (
                subpartition p400_p_0 values(0),
                subpartition p400_p_1 values(1),
                subpartition p400_p_2 values(2),
                subpartition p400_p_def
        )
;

alter table pt_range_list split subpartition p600_p_def
        into (
                subpartition p600_p_0 values(0),
                subpartition p600_p_1 values(1),
                subpartition p600_p_2 values(2),
                subpartition p600_p_def
        )
;

Now, just to check that everything is behaving, let’s add a new partition, and check to see what partitions and subpartitions we end up with:


alter table pt_range_list add partition p800 values less than (800);

execute dbms_stats.gather_table_stats(user,'pt_range_list',granularity=>'ALL')

select  partition_name, subpartition_name, num_rows
from    user_tab_subpartitions
where   table_name = 'PT_RANGE_LIST'
order by
        partition_name, subpartition_name
;

PARTITION_NAME         SUBPARTITION_NAME        NUM_ROWS
---------------------- ---------------------- ----------
P200                   P200_P_0                      100
                       P200_P_1                      100
                       P200_P_2                        0
                       P200_P_DEF                      0

P400                   P400_P_0                      100
                       P400_P_1                      100
                       P400_P_2                        0
                       P400_P_DEF                      0

P600                   P600_P_0                      100
                       P600_P_1                      100
                       P600_P_2                        0
                       P600_P_DEF                      0

P800                   P800_P_0                        0
                       P800_P_1                        0
                       P800_P_2                        0
                       P800_P_DEF                      0


16 rows selected.

And as a final note – if we decide we want to put it all back we could merge four subpartitions down to one subpartition with a single command – then loop through every partition in turn:


alter table pt_range_list
        merge subpartitions  p200_p_0, p200_p_1, p200_p_2, p200_p_def
        into  subpartition  p200_p_def
;
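
Again just a sketch rather than a tested script - the merge could be driven through every partition with dynamic SQL, assuming the same naming pattern as above:

begin
        for p in (
                select partition_name
                from   user_tab_partitions
                where  table_name = 'PT_RANGE_LIST'
                order by partition_position
        ) loop
                -- collapse the four subpartitions of each partition
                -- back into the single default subpartition
                execute immediate
                        'alter table pt_range_list merge subpartitions ' ||
                        p.partition_name || '_P_0, ' ||
                        p.partition_name || '_P_1, ' ||
                        p.partition_name || '_P_2, ' ||
                        p.partition_name || '_P_DEF' ||
                        ' into subpartition ' || p.partition_name || '_P_DEF';
        end loop;
end;
/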

And now I feel like I’m turning into Tim Hall – writing potentially useful demonstrations instead of trying to baffle people with rocket science. But I hope to get over that quite soon. Of course I have left out some important “real-world” details – particularly how you choose to handle indexes while doing the double-exchange. My view would be to take the local indexes with you on the exchange, bypass the global indexes on the exchange out, and be choosy about which global indexes to maintain on the exchange back in; but it all depends on how much downtime you have, how many indexes there are, and the state they’re likely to start or end in.

As ever it’s possible to start with a simple idea like this, then discover there are real-world complications that have to be dealt with. So there’s another article in the pipeline to handle a slightly more complex case. I’ll also be publishing a short note about the easy way of getting the job done from 18c onwards – if you’ve got the spare resources.

 

WOLF Italian Street Food Beats Lunchtime Rush with Oracle

Oracle Press Releases - Thu, 2019-05-23 03:00
Press Release
WOLF Italian Street Food Beats Lunchtime Rush with Oracle

Cloud-based point-of-sale solution enables Italian street food chain to accelerate speed of service and deliver great guest experiences during its busiest moments

London, UK—May 23, 2019

Fast, fresh and healthy Italian street food restaurant WOLF has implemented Oracle Food & Beverage Simphony Cloud at point-of-sale to make the most of the lucrative lunchtime rush and support its growth ambitions. Aiming to improve the restaurant's processes, owner Tim Entwistle upgraded the POS solution to increase the speed of service, gain visibility into management information across its franchise network and ensure a solid platform for expected growth. WOLF is already seeing the benefits, with wait times reduced by 40 percent.

Some POS solutions cannot keep up with the pressures of a modern restaurant. Services can freeze due to a high volume of orders, or the system needs constant rebooting or manager intervention – a critical issue during the busiest time of the day for a restaurant. In addition, many systems don't offer data or analysis on menu and employee performance. Oracle Simphony Cloud, combined with Oracle's MICROS Compact Workstations, helps with these issues while arming WOLF with the tools needed for future growth.

“Our business outgrew our initial tablet POS solution presenting issues such as double charging and system freezes during peak hours, making it obvious the machines couldn’t take the volume of orders we experienced during the lunch rush hour.  We needed a solution that worked, always. It was important for us to think strategically and find a cloud solution that would keep up with the pace of the industry and our business. One that would match our growth ambitions. The solution from Oracle Food and Beverage gives us the reliability we need day to day and the confidence to scale quickly,” said Tim Entwistle, owner of WOLF.

In addition to boosting speed of service, Oracle Simphony Cloud provides powerful analytics delivering valuable operational insights across all areas of restaurant management for faster, smarter decision-making. This includes analysing menu and individual store performance – all critical insights for the growth of WOLF's business.

Oracle Food and Beverage solutions are assisting WOLF with its future franchise expansion, supporting its ambitions to grow across the UK. With all sites on the same cloud-based software and powerful insights available, WOLF can implement its plans for expansion.

Tim Brown, VP Global Sales Consulting of Oracle Food & Beverage, commented: “Restaurant owners are always looking for ways to increase their revenue, improve their services and expand and grow. With Simphony, WOLF is empowered to analyse each restaurant’s performance, improve its services and look at new expansion options.”

WOLF Italian Street Food currently operates in seven sites across the UK in London, Manchester, Reading, Nottingham and Leeds. They purchased the Oracle Food and Beverage solution in October of 2018 and are now live on the system.

Contact Info
Valerie Beaudett
Oracle
+1 650 400 7833
valerie.beaudett@oracle.com
About Oracle Food and Beverage

Oracle Food and Beverage, formerly MICROS, brings 40 years of experience in providing software and hardware solutions to restaurants, bars, pubs, clubs, coffee shops, cafes, stadiums, and theme parks. Thousands of operators, both large and small, around the world are using Oracle technology to deliver exceptional guest experiences, maximize sales, and reduce running costs.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Valerie Beaudett

  • +1 650 400 7833

See You at the Edge Conference in London

Anthony Shorten - Wed, 2019-05-22 23:01

For customers and partners attending the Oracle Utilities Customer Edge Conference in London next month, I will be presenting the Cloud and Technology sessions this year.

The cloud sessions will cover the Oracle Utilities SaaS offerings, including advice on migrating from an on-premises implementation to those offerings. The technical sessions will deep dive into our technology strategy and roadmap, the new version of the Utilities Testing Accelerator (including its roadmap), and our exciting plans for integrating machine learning into the Oracle Utilities product set. Some of these sessions will include short demonstrations of capability and prototypes of exciting new capabilities.

For those attending, feel free to introduce yourself while I am there. See you in London!!

Additionally, I will be presenting a subset of the same sessions late in July at the Oracle Utilities Customer Edge Conference in Melbourne.

Run SQL Developer in Oracle Cloud Infrastructure and Connect to Autonomous Database

OTN TechBlog - Wed, 2019-05-22 17:38

In a previous blog post, I described how to quickly create an Autonomous Database and connect to it via SQLcl. By using the most recent Cloud Developer Image (which includes SQLcl) I was able to save time on installation and configuration. The Cloud Developer Image also comes with Oracle SQL Developer pre-installed. In this post I describe how to run SQL Developer and connect it to an Autonomous Database.

Steps
  1. Launch Cloud Developer Image
  2. Set up OCI cli
  3. Create Autonomous Transaction Processing Database using CLI
  4. Download Wallet using CLI
  5. Configure VNC server and connect from a VNC Client
  6. Launch SQL Developer and add Wallet
  7. Connect to the database
Steps 1-4: See previous blog post

The steps to create an Autonomous Database and download the Wallet are covered in the previous blog post and apply to this tutorial as well.

5. Configure VNC server and connect from a VNC client

To access a GUI via VNC, do the following:

  • Install a VNC viewer on your local computer
    • On MacOS you can use the built-in VNC viewer in the Screen Sharing app
  • Use SSH to connect to the compute instance running the Oracle Cloud Developer Image: ssh -i <path to your ssh keys> opc@<IP address>
  • Configure a VNC password by typing vncpasswd
  • When prompted, enter a new password and verify it
  • Optionally, enter a view-only password
  • After the vncpasswd utility exits, start the VNC server by typing vncserver
  • This will start a VNC server with display number 1 for the opc user, and the VNC server will start automatically if your instance is rebooted
  • On your local computer, connect to your instance and create an ssh tunnel for port 5901 (for display number 1):
    • $ ssh -L 5901:localhost:5901 -i <path to your ssh keys> opc@<IP Address>
  • On your local computer, start a VNC viewer and establish a VNC connection to localhost:1
    • On MacOS, from Finder, hit Command-K to Connect to Server and enter vnc://localhost:5901
  • Enter the VNC password you set earlier
  • Acknowledge the welcome dialogs until you see the Oracle Linux desktop

 

1. Connecting to VNC Server using MacOS built-in VNC Viewer, Screen Sharing

 

 

2. Oracle Linux Desktop

 

Launch SQL Developer and add Wallet

Launch SQL Developer via the Applications > Programming menu. See figure 3.

 

3. Launching SQL Developer

 

Connect to the database

To create a database connection (See figure 4.):

  • In the connections panel, click the (+) icon to create a New Database Connection…
  • Name your connection
  • For Connection Type, choose Cloud Wallet
  • Browse for the wallet.zip you downloaded earlier
  • You can leave the default Service unless you have other Autonomous Databases in this tenancy

 

4. Creating the Database Connection

 

You are now ready to connect:

 

5. SQL Developer connected to Autonomous Database
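
Once connected, a quick query confirms the session is live against the Autonomous Database - any simple query will do; for example:

-- Confirm the connection and check the database version
select banner from v$version;

select sysdate from dual;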

 

Conclusion

The Oracle Linux-based Cloud Developer Image includes all the tools you need to get started with Autonomous Database and Oracle SQL Developer via VNC. In this blog post I explained the steps to create an Autonomous Database and access it via SQL Developer displayed over VNC.

How to Add in Excel?

VitalSoftTech - Wed, 2019-05-22 14:04
Most people know that Excel is a spreadsheet application. It automatically implies that the software can record and perform arithmetic calculations. Amongst the most commonly used functions in Excel, adding a column or a series of columns is the most popular operation. So how do you add in Excel? There are three different ways that you […]
Categories: DBA Blogs

Indexing Null Values - Part 2

Randolf Geist - Wed, 2019-05-22 10:57
In the previous post I've demonstrated that Oracle has some problems making efficient use of B*Tree indexes if an IS NULL condition is followed by IN / OR predicates also covered by the same index - the predicates that follow are not used to navigate the index structure efficiently but are applied as filters to all index entries identified by the IS NULL.

In this part I'll show what results I got when repeating the same exercise using Bitmap indexes - after all they include NULL values anyway, so no special tricks are required to use them for an IS NULL search. Let's start again with the same data set (actually not exactly the same but very similar) and an index on the single expression that gets searched for via IS NULL - results are again from 18.3.0:

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214500 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create bitmap index null_index_idx on null_index (pct_free);

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500
SQL>
SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1297049223

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 2342 (1)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 2342 (1)| 00:00:01 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
|* 3 | BITMAP INDEX SINGLE VALUE | NULL_INDEX_IDX | | | | |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
3 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2192 consistent gets
30 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

So indeed the Bitmap index was successfully used to identify the PCT_FREE IS NULL rows but the efficiency suffers from the same problem and to the same degree as the corresponding B*Tree index plan - too many rows have to be filtered on table level:

Plan hash value: 1297049223

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2342 (100)| 101 |00:00:00.01 | 2192 | 30 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 2342 (1)| 101 |00:00:00.01 | 2192 | 30 |
| 2 | BITMAP CONVERSION TO ROWIDS | | 1 | | | 13433 |00:00:00.01 | 3 | 30 |
|* 3 | BITMAP INDEX SINGLE VALUE | NULL_INDEX_IDX | 1 | | | 3 |00:00:00.01 | 3 | 30 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
3 - access("PCT_FREE" IS NULL)

Some interesting points to mention: The 13,000+ rows are identified in the Bitmap index using just three index row entries / bitmap fragments, so that's the special efficiency of Bitmap indexes where a single index row entry can cover many, many table rows, and it's also interesting to see that the costing is pretty different from the B*Tree index costing (2342 vs. 1028, in this case closer to the reality of 2,200 consistent gets, but we'll see in a moment how this can change) - and no cardinality estimate gets mentioned at the Bitmap index level - the B*Tree index plan showed a spot-on estimate of 13,433 rows.

So reproducing the B*Tree test case, let's add the OWNER column to the Bitmap index in an attempt to increase the efficiency. Note that I drop the previous index to prevent Oracle from making "proper" use of Bitmap indexes, as we'll see in a moment:

SQL> drop index null_index_idx;

Index dropped.

SQL> create bitmap index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1751956722

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 2343 (1)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 2343 (1)| 00:00:01 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
|* 3 | BITMAP INDEX RANGE SCAN | NULL_INDEX_IDX2 | | | | |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
3 - access("PCT_FREE" IS NULL)
filter("PCT_FREE" IS NULL AND ("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
105 consistent gets
30 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 1751956722

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2343 (100)| 101 |00:00:00.01 | 105 | 30 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 2343 (1)| 101 |00:00:00.01 | 105 | 30 |
| 2 | BITMAP CONVERSION TO ROWIDS | | 1 | | | 101 |00:00:00.01 | 4 | 30 |
|* 3 | BITMAP INDEX RANGE SCAN | NULL_INDEX_IDX2 | 1 | | | 1 |00:00:00.01 | 4 | 30 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
3 - access("PCT_FREE" IS NULL)
filter(("PCT_FREE" IS NULL AND ("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')))

So now we end up with a "Bitmap index range scan" operation, which in reality looks pretty efficient - just 105 consistent gets, so assuming 101 consistent gets for accessing the 101 table rows it required just 4 consistent gets on index level. But then look at the cost estimate: 2343, which is even greater than the cost estimate of the previous plan, and also check the "Predicate Information" section, which looks pretty weird, too - an access only for PCT_FREE IS NULL, a filter on index level repeating all the predicates including the PCT_FREE IS NULL and, most significantly, the predicates on OWNER repeated on table level.

Clearly what the optimizer assumes in terms of costing and required predicates doesn't correspond to what happens at runtime, which looks pretty efficient, but at least according to the predicates on index level it again doesn't look like the optimal strategy we would like to see: Why the additional filter instead of just access? We can also see that echoed in the Rowsource statistics: Only a single Bitmap index fragment gets produced by the "Bitmap index range scan" but it requires 4 consistent gets on index level, so three of them get "filtered" after access.

The costing seems to assume that only the PCT_FREE IS NULL rows are identified on index level, which clearly isn't the case at runtime...

Of course this is not proper usage of Bitmap indexes - typically you don't create a multi column Bitmap index but instead make use of the real power of Bitmap indexes, which is how Oracle can combine multiple of them for efficient usage and access.

Before doing so, let's just for the sake of completeness repeat the combined Bitmap index of the B*Tree variant that turned out to be most efficient for the B*Tree case:

SQL> drop index null_index_idx2;

Index dropped.

SQL> create bitmap index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1022155563

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 83 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
|* 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 83 (0)| 00:00:01 |
| 3 | BITMAP CONVERSION TO ROWIDS | | | | | |
|* 4 | BITMAP INDEX RANGE SCAN | NULL_INDEX_IDX3 | | | | |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("PCT_FREE" IS NULL)
4 - access("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
207 consistent gets
30 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 1022155563

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 83 (100)| 101 |00:00:00.01 | 207 | 30 |
| 1 | INLIST ITERATOR | | 1 | | | 101 |00:00:00.01 | 207 | 30 |
|* 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 2 | 19 | 83 (0)| 101 |00:00:00.01 | 207 | 30 |
| 3 | BITMAP CONVERSION TO ROWIDS | | 2 | | | 303 |00:00:00.01 | 5 | 30 |
|* 4 | BITMAP INDEX RANGE SCAN | NULL_INDEX_IDX3 | 2 | | | 2 |00:00:00.01 | 5 | 30 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("PCT_FREE" IS NULL)
4 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

While we now see again the desired "INLIST ITERATOR", this one looks weird for several reasons, in particular because we now have a much lower cost estimate (83) although in reality it is less efficient than the previous one (cost estimate 2343 but 105 consistent gets) due to the 207 consistent gets required. Why is this so? The "Predicate Information" section shows why: Only the predicate on OWNER is evaluated on index level (303 rows identified on index level) and therefore rows need to be filtered on table level, which again looks like an implementation limitation and pretty unnecessary - after all, the PCT_FREE IS NULL should somehow be treated on index level instead.

So finally let's see how things turn out when using Bitmap indexes the way they are designed - by creating multiple ones and let Oracle combine them:

SQL> create bitmap index null_index_idx on null_index (pct_free);

Index created.

SQL> create bitmap index null_index_idx4 on null_index (owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 704944303

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 8 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 8 (0)| 00:00:01 |
| 2 | BITMAP CONVERSION TO ROWIDS | | | | | |
| 3 | BITMAP AND | | | | | |
|* 4 | BITMAP INDEX SINGLE VALUE | NULL_INDEX_IDX | | | | |
| 5 | BITMAP OR | | | | | |
|* 6 | BITMAP INDEX SINGLE VALUE | NULL_INDEX_IDX4 | | | | |
|* 7 | BITMAP INDEX SINGLE VALUE | NULL_INDEX_IDX4 | | | | |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PCT_FREE" IS NULL)
   6 - access("OWNER"='AUDSYS')
   7 - access("OWNER"='CBO_TEST')


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
        108  consistent gets
         60  physical reads
          0  redo size
      33646  bytes sent via SQL*Net to client
        372  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
        101  rows processed

Plan hash value: 704944303

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name            | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                 |      1 |        |    8 (100) |    101 |00:00:00.01 |     108 |    60 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX       |      1 |     19 |    8   (0) |    101 |00:00:00.01 |     108 |    60 |
|   2 |   BITMAP CONVERSION TO ROWIDS       |                  |      1 |        |            |    101 |00:00:00.01 |       7 |    60 |
|   3 |    BITMAP AND                       |                  |      1 |        |            |      1 |00:00:00.01 |       7 |    60 |
|*  4 |     BITMAP INDEX SINGLE VALUE       | NULL_INDEX_IDX   |      1 |        |            |      3 |00:00:00.01 |       3 |    30 |
|   5 |     BITMAP OR                       |                  |      1 |        |            |      1 |00:00:00.01 |       4 |    30 |
|*  6 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4  |      1 |        |            |      1 |00:00:00.01 |       2 |    30 |
|*  7 |      BITMAP INDEX SINGLE VALUE      | NULL_INDEX_IDX4  |      1 |        |            |      1 |00:00:00.01 |       2 |     0 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - access("PCT_FREE" IS NULL)
   6 - access("OWNER"='AUDSYS')
   7 - access("OWNER"='CBO_TEST')

So now we see only access predicates, with Oracle making efficient use of the indexes by combining multiple Bitmap indexes. Nevertheless I find the range of costing amazing: this plan is assigned a cost of 8 but is actually less efficient at runtime (108 consistent gets) than the plan above that was assigned a cost of 2343 yet required just 105 consistent gets at runtime. Clearly the costing of Bitmap indexes is still - even in version 18.3 - full of surprises.

Summary

Repeating the same exercise as previously using Bitmap indexes shows several things:

- Oracle isn't necessarily good at costing and using multi-column Bitmap indexes properly
- The costing of Bitmap indexes is still questionable (the most important figure, the "Clustering Factor", is still meaningless for Bitmap indexes - see the query sketched below)
- For proper handling, use Bitmap indexes the way they are supposed to be used: create separate single-column ones and let Oracle combine them
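
To check the Clustering Factor point for yourself, the figure is exposed in the data dictionary. A minimal query sketch, assuming the NULL_INDEX table and the indexes created above are still in place:

-- CLUSTERING_FACTOR is populated for Bitmap indexes too, but it does not
-- describe the ordering of table rows the way it does for B*Tree indexes
select index_name, index_type, clustering_factor, leaf_blocks
from   user_indexes
where  table_name = 'NULL_INDEX'
order by index_name;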

Danger – Hints

Jonathan Lewis - Wed, 2019-05-22 08:56

It shouldn’t be possible to get the wrong results by using a hint – but hints are dangerous and the threat may be there if you don’t know exactly what a hint is supposed to do (and don’t check very carefully what has happened when you’ve used one that you’re not familiar with).

This post was inspired by a blog note from Connor McDonald titled “Being Generous to the Optimizer”. In his note Connor gives an example where the use of “flexible” SQL results in an execution plan that is always expensive to run when a more complex version of the query could produce a “conditional” plan which could be efficient some of the time and would be expensive only when there was no alternative. In his example he rewrote the first query below to produce the second query:


select data
from   address
where  ( :choice = 1 and street = :val )
or     ( :choice = 2 and suburb = :val )
;

select  data
from    address
where   ( :choice = 1 and street = :val )
union all
select  data
from    address
where   ( :choice = 2 and suburb = :val );

(We are assuming that bind variable :choice is constrained to be 1 or 2 and no other value.)

In its initial form the optimizer had to choose a tablescan for the query, in its final form the query can select which half of a UNION ALL plan to execute because the optimizer inserts a pair of FILTER operations that check the actual value of :choice at run-time.

When I started reading the example my first thought was to wonder why the optimizer hadn’t simply used “OR-expansion” (or concatenation if you’re running an older version), then I remembered that by the time the optimizer really gets going it has forgotten that “:choice” is the same bind variable in both cases, so doesn’t realise that it would use only one of two possible predicates. However, that doesn’t mean you can’t tell the optimizer to use concatenation. Here’s a model – modified slightly from Connor’s original:


drop table address purge;
create table address ( street int, suburb int, post_code int,  data char(100));

insert into address
select mod(rownum,1e4), mod(rownum,10), mod(rownum,1e2), rownum
from dual connect by level  <= 1e5 -- > comment to avoid WordPress format issue
;

commit;

exec dbms_stats.gather_table_stats('','ADDRESS')

create index ix1 on address ( street );
create index ix2 on address ( suburb );
create index ix3 on address ( post_code );

variable val number = 6
variable choice number = 1

alter session set statistics_level = all;
set serveroutput off
set linesize 180
set pagesize 60

select
        /*+ or_expand(@sel$1) */
        count(data)
from    address
where  ( :choice = 1 and street = :val )
or     ( :choice = 2 and suburb = :val )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

I’ve added one more column to the table and indexed it – I’ll explain why later. I’ve also modified the query to show the output but restricted the result set to a count of the data column rather than a (long) list of rows.

Here’s the execution plan output when hinted:


SQL_ID  6zsh2w6d9mddy, child number 0
-------------------------------------
select  /*+ or_expand(@sel$1) */  count(data) from    address where  (
:choice = 1 and street = :val ) or     ( :choice = 2 and suburb = :val )

Plan hash value: 3986461375

------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |      1 |        |      1 |00:00:00.01 |      12 |     27 |
|   1 |  SORT AGGREGATE                         |                 |      1 |      1 |      1 |00:00:00.01 |      12 |     27 |
|   2 |   VIEW                                  | VW_ORE_B7380F92 |      1 |  10010 |     10 |00:00:00.01 |      12 |     27 |
|   3 |    UNION-ALL                            |                 |      1 |        |     10 |00:00:00.01 |      12 |     27 |
|*  4 |     FILTER                              |                 |      1 |        |     10 |00:00:00.01 |      12 |     27 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| ADDRESS         |      1 |     10 |     10 |00:00:00.01 |      12 |     27 |
|*  6 |       INDEX RANGE SCAN                  | IX1             |      1 |     10 |     10 |00:00:00.01 |       2 |     27 |
|*  7 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |       0 |      0 |
|*  8 |      TABLE ACCESS FULL                  | ADDRESS         |      0 |  10000 |      0 |00:00:00.01 |       0 |      0 |
------------------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('18.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SET$9162BF3C_2")
      OUTLINE_LEAF(@"SET$9162BF3C_1")
      OUTLINE_LEAF(@"SET$9162BF3C")
      OR_EXPAND(@"SEL$1" (1) (2))
      OUTLINE_LEAF(@"SEL$B7380F92")
      OUTLINE(@"SEL$1")
      NO_ACCESS(@"SEL$B7380F92" "VW_ORE_B7380F92"@"SEL$B7380F92")
      INDEX_RS_ASC(@"SET$9162BF3C_1" "ADDRESS"@"SET$9162BF3C_1" ("ADDRESS"."STREET"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SET$9162BF3C_1" "ADDRESS"@"SET$9162BF3C_1")
      FULL(@"SET$9162BF3C_2" "ADDRESS"@"SET$9162BF3C_2")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(:CHOICE=1)
   6 - access("STREET"=:VAL)
   7 - filter(:CHOICE=2)
   8 - filter(("SUBURB"=:VAL AND (LNNVL(:CHOICE=1) OR LNNVL("STREET"=:VAL))))

As you can see we have a UNION ALL plan with two FILTER operations, and the filter operations allow one or other of the two branches of the UNION ALL to execute depending on the value for :choice. Since I’ve reported the rowsource execution statistics you can also see that the table access through index range scan (operations 5 and 6) has executed once (Starts = 1) but the tablescan (operation 8) has not been executed at all.

If you check the Predicate Information you will see that operation 8 has introduced two lnnvl() predicates. Since the optimizer has lost sight of the fact that :choice is the same variable in both cases, it has to assume that sometimes both branches will be relevant for a single execution, so it has to add predicates to the second branch to eliminate data that might have been found in the first branch. This is the (small) penalty we pay for avoiding a “fully-informed” manual rewrite. (If lnnvl() is unfamiliar, there’s a quick demonstration after this paragraph.)
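
For reference, lnnvl() takes a single condition and returns TRUE when that condition evaluates to FALSE or UNKNOWN, and FALSE when it evaluates to TRUE. A minimal sketch against dual covers the three cases:

-- lnnvl(c) is TRUE when c is FALSE or UNKNOWN (null), FALSE when c is TRUE
select count(*) from dual where lnnvl(1 = 2);     -- count = 1: condition is FALSE
select count(*) from dual where lnnvl(null = 2);  -- count = 1: condition is UNKNOWN
select count(*) from dual where lnnvl(1 = 1);     -- count = 0: condition is TRUE

So the lnnvl(:choice=1) predicate at operation 8 lets a row through only when :choice is not 1 (or is null), which is exactly what is needed to avoid returning rows that the first branch might already have produced.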

Take a look at the Outline Data – we can see our or_expand() hint repeated there, and we can discover that it’s been enhanced. The hint should have been or_expand(@sel$1 (1) (2)). This might prompt you to modify the original SQL to use the fully qualified hint rather than the bare-bones form we’ve got so far. So let’s assume we do that before shipping the code to production.

Now imagine that a couple of months later an enhancement request appears to allow queries on post_code and the front-end has been set up so that we can specify a post_code query by selecting choice number 3. The developer who happens to pick up the change request duly modifies the SQL as follows:


select
        /*+ or_expand(@sel$1 (1) (2)) */
        count(data)
from    address
where  ( :choice = 1 and street = :val )
or     ( :choice = 2 and suburb = :val )
or     ( :choice = 3 and post_code = :val)
;

Note that we’ve got the “complete” hint in place, but there’s now a 3rd predicate. Do you think the hint is still complete ? What do you think will happen when we run the query ? Here’s the execution plan when I set :choice to 3.


select  /*+ or_expand(@sel$1 (1) (2)) */  count(data) from    address
where  ( :choice = 1 and street = :val ) or     ( :choice = 2 and
suburb = :val ) or     ( :choice = 3 and post_code = :val)

Plan hash value: 3986461375

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Starts | E-Rows | A-Rows |   A-Time   |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |      1 |        |      1 |00:00:00.01 |
|   1 |  SORT AGGREGATE                         |                 |      1 |      1 |      1 |00:00:00.01 |
|   2 |   VIEW                                  | VW_ORE_B7380F92 |      1 |  10010 |      0 |00:00:00.01 |
|   3 |    UNION-ALL                            |                 |      1 |        |      0 |00:00:00.01 |
|*  4 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| ADDRESS         |      0 |     10 |      0 |00:00:00.01 |
|*  6 |       INDEX RANGE SCAN                  | IX1             |      0 |     10 |      0 |00:00:00.01 |
|*  7 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |
|*  8 |      TABLE ACCESS FULL                  | ADDRESS         |      0 |  10000 |      0 |00:00:00.01 |
-----------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('18.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SET$9162BF3C_2")
      OUTLINE_LEAF(@"SET$9162BF3C_1")
      OUTLINE_LEAF(@"SET$9162BF3C")
      OR_EXPAND(@"SEL$1" (1) (2))
      OUTLINE_LEAF(@"SEL$B7380F92")
      OUTLINE(@"SEL$1")
      NO_ACCESS(@"SEL$B7380F92" "VW_ORE_B7380F92"@"SEL$B7380F92")
      INDEX_RS_ASC(@"SET$9162BF3C_1" "ADDRESS"@"SET$9162BF3C_1" ("ADDRESS"."STREET"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SET$9162BF3C_1" "ADDRESS"@"SET$9162BF3C_1")
      FULL(@"SET$9162BF3C_2" "ADDRESS"@"SET$9162BF3C_2")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(:CHOICE=1)
   6 - access("STREET"=:VAL)
   7 - filter(:CHOICE=2)
   8 - filter(("SUBURB"=:VAL AND (LNNVL(:CHOICE=1) OR LNNVL("STREET"=:VAL))))

We get a UNION ALL with two branches, one for :choice = 1, one for :choice = 2 and both of them show zero starts – and we don’t have any part of the plan to handle :choice = 3. The query returns no rows – and if you check the table creation code you’ll see it should have returned 1000 rows. An incorrect (historically adequate) hint has given us wrong results.

If we want the full hint for this new query we need to specify the 3rd predicate, adding (3) to the existing hint to get the following plan (and correct results):


select  /*+ or_expand(@sel$1 (1) (2) (3)) */  count(data) from
address where  ( :choice = 1 and street = :val ) or     ( :choice = 2
and suburb = :val ) or     ( :choice = 3 and post_code = :val)

Plan hash value: 2153173029

---------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |      1 |        |      1 |00:00:00.01 |    1639 |
|   1 |  SORT AGGREGATE                         |                 |      1 |      1 |      1 |00:00:00.01 |    1639 |
|   2 |   VIEW                                  | VW_ORE_B7380F92 |      1 |  11009 |   1000 |00:00:00.01 |    1639 |
|   3 |    UNION-ALL                            |                 |      1 |        |   1000 |00:00:00.01 |    1639 |
|*  4 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |       0 |
|   5 |      TABLE ACCESS BY INDEX ROWID BATCHED| ADDRESS         |      0 |     10 |      0 |00:00:00.01 |       0 |
|*  6 |       INDEX RANGE SCAN                  | IX1             |      0 |     10 |      0 |00:00:00.01 |       0 |
|*  7 |     FILTER                              |                 |      1 |        |      0 |00:00:00.01 |       0 |
|*  8 |      TABLE ACCESS FULL                  | ADDRESS         |      0 |  10000 |      0 |00:00:00.01 |       0 |
|*  9 |     FILTER                              |                 |      1 |        |   1000 |00:00:00.01 |    1639 |
|* 10 |      TABLE ACCESS FULL                  | ADDRESS         |      1 |    999 |   1000 |00:00:00.01 |    1639 |
---------------------------------------------------------------------------------------------------------------------

Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      IGNORE_OPTIM_EMBEDDED_HINTS
      OPTIMIZER_FEATURES_ENABLE('18.1.0')
      DB_VERSION('18.1.0')
      ALL_ROWS
      OUTLINE_LEAF(@"SET$49E1C21B_3")
      OUTLINE_LEAF(@"SET$49E1C21B_2")
      OUTLINE_LEAF(@"SET$49E1C21B_1")
      OUTLINE_LEAF(@"SET$49E1C21B")
      OR_EXPAND(@"SEL$1" (1) (2) (3))
      OUTLINE_LEAF(@"SEL$B7380F92")
      OUTLINE(@"SEL$1")
      NO_ACCESS(@"SEL$B7380F92" "VW_ORE_B7380F92"@"SEL$B7380F92")
      INDEX_RS_ASC(@"SET$49E1C21B_1" "ADDRESS"@"SET$49E1C21B_1" ("ADDRESS"."STREET"))
      BATCH_TABLE_ACCESS_BY_ROWID(@"SET$49E1C21B_1" "ADDRESS"@"SET$49E1C21B_1")
      FULL(@"SET$49E1C21B_2" "ADDRESS"@"SET$49E1C21B_2")
      FULL(@"SET$49E1C21B_3" "ADDRESS"@"SET$49E1C21B_3")
      END_OUTLINE_DATA
  */

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(:CHOICE=1)
   6 - access("STREET"=:VAL)
   7 - filter(:CHOICE=2)
   8 - filter(("SUBURB"=:VAL AND (LNNVL(:CHOICE=1) OR LNNVL("STREET"=:VAL))))
   9 - filter(:CHOICE=3)
  10 - filter(("POST_CODE"=:VAL AND (LNNVL(:CHOICE=1) OR LNNVL("STREET"=:VAL)) AND (LNNVL(:CHOICE=2) OR
              LNNVL("SUBURB"=:VAL))))


We now have three branches to the UNION ALL, and the final branch (:choice = 3) ran, showing A-Rows = 1000 selected in the tablescan.

Conclusion

You shouldn’t mess about with hints unless you’re very confident that you know how they work and then test extremely carefully – especially if you’re modifying old code that already contains some hints.

 

Oracle Integration Cloud (OIC) For Beginners Overview

Online Apps DBA - Wed, 2019-05-22 04:00

Are you planning to Learn Oracle Integration Cloud (OIC) but not sure if it’s your ☕cup of tea? If YES, then our blog post at www.k21academy.com/oic11 will help you request a Trial Account of OIC before you actually go for it. The blog will help you: ✔ Understand the difference between OIC and SOA Cloud […]

The post Oracle Integration Cloud (OIC) For Beginners Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Don’t Be the Weak Link: Why Security is Everyone’s Job

Chris Warticki - Tue, 2019-05-21 18:58
CXOTalk with Oracle’s Mark Sunday

While security may start from the top, it doesn’t end there. In today’s business environment, it’s everyone’s job to remain vigilant and stay on top of the latest security trends. Oracle’s CIO, Mark Sunday, shares his wisdom and advice on security, what it means today, and why it matters more than ever in his latest CXOTalk interview with industry analyst and host, Michael Krigsman.

 

Here are a few quick takeaways from Sunday’s conversation with Krigsman:

 

Regardless of the Industry, Security Is Top of Mind and a Key Focus

Security has always been a major issue, but with high-profile breaches becoming more and more prevalent, security, now more than ever, should be top of mind. Every business, regardless of industry, needs to be concerned about security. You want to ensure that your customers’ and your employees’ data are protected.

 

It Begins at the Very Top

Security starts at the top. Sunday believes it’s the responsibility of the company’s leadership to create a culture of security and to ensure the right regulations are in place.  


 

Security Is Not Just the Role of the CIO, Not Just the Role of the CISO, but It’s Everyone's Responsibility

Even though security starts from the top, it should not stop there. Sunday stresses the importance of security at every layer: every layer of the organization and every layer of the business. Security should touch almost every role within the organization. One weak link can break the chain.

 

Watch the CXOTalk with Mark Sunday to hear more on his approach to security.

Mark Sunday

Oracle Senior Vice President and Chief Information Officer

Resources:

Growing Cyber Threats Drive Need for Advanced Security Defenses

You Won’t Believe What Today’s Biggest Security Threat Is

Insights into IT Governance and Security

How Can You Protect Your Business From Cybercrime?

What is Trusted Support? CXOTALK with Oracle VP Brennan Baybeck

Oracle GoldenGate Tokens

Online Apps DBA - Tue, 2019-05-21 08:22

Oracle GoldenGate Tokens This post covers Oracle GoldenGate tokens and their types (both pre-defined and user-defined). You’ll also learn how the built-in function @TOKEN, used within the parameter files, retrieves a token’s value. Check out our new blog here: https://k21academy.com/goldengate35 on Oracle GoldenGate Tokens in which you’ll learn: ✔ What is a Token? ✔ Types of […]
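
As a rough sketch of the mechanics (the table and token names below are invented for illustration, not taken from the post): a user-defined token is written in the Extract parameter file with the TOKENS clause and read back on the Replicat side with @TOKEN:

-- Extract parameter file: capture an environment value into a user token
TABLE src.orders, TOKENS (tk_host = @GETENV ('GGENVIRONMENT', 'HOSTNAME'));

-- Replicat parameter file: map the token value into a target column
MAP src.orders, TARGET tgt.orders,
    COLMAP (USEDEFAULTS, source_host = @TOKEN ('tk_host'));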

The post Oracle GoldenGate Tokens appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Support for Oracle Java SE now Included with Oracle Cloud Infrastructure

OTN TechBlog - Tue, 2019-05-21 08:00

Today we are excited to announce that support for Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost. This includes the ability to log bugs, to get regular stability, performance, and security updates, and more for Oracle Java 8, 11, and 12. With Oracle Java you can develop portable, high-performance applications for the widest range of computing platforms possible, including all of the major public cloud services. By making Oracle Java available as part of any Oracle Cloud Infrastructure subscription, we are dramatically reducing the time and cost to develop enterprise and consumer applications.

This is an important announcement, as Java is the #1 programming language and the #1 developer choice for the cloud. It’s used widely for embedded applications, games, web content, and enterprise software. Twelve million developers run Java worldwide, and its use is growing as options for cloud deployment of Java increase.

Oracle Java in Oracle Cloud helps developers write more secure applications, with convenient access to updates and a single vendor for support – for cloud and Oracle Java use – same subscription, no additional cost. We also ensure that you will have signed software from Oracle and the latest stability, performance, and security updates addressing critical vulnerabilities. 

All of this is supported on Oracle Linux and on other operating systems you run in your Oracle Cloud Infrastructure Virtual Machine or Bare Metal instance. Microsoft Windows? Of course. Ubuntu? Yep. Red Hat Enterprise Linux?  Sure!

Easy Peasy Cloud Developer Image

How can you get the Oracle Java bits? They are a breeze to install on Oracle Linux using Oracle Cloud Infrastructure yum repositories. But with the Oracle Cloud Developer Image available in the Oracle Cloud Marketplace, it’s even easier to get started. Simply click to launch the image on an Oracle Cloud Infrastructure compute instance. The Oracle Cloud Developer Image is a Swiss army knife for developers that includes Oracle Java and a whole bunch of other valuable tools to accelerate development of your next project. You can have this image installed and ready to go within minutes.

Get started with the Oracle Cloud Developer Image.

Misleading Execution Plan

Jonathan Lewis - Tue, 2019-05-21 05:48

A couple of weeks ago I published a note about an execution plan which showed the details of a scalar subquery in the wrong place (as far as the typical strategies for interpreting execution plans are concerned). In a footnote to the article I commented that Andy Sayer had produced a simple reproducible example of the anomaly based around the key features of the query supplied in the original posting and had emailed it to me.  With his permission (and with some minor modifications) I’ve reproduced it below:


rem
rem     Script:         misplaced_subq_plan.sql
rem     Author:         Andrew Sayer
rem     Dated:          May 2019
rem

drop table recursive_table;
drop table lookup_t;
drop table join_t;

@@setup

set linesize 180
set pagesize 60

create table recursive_table (
        my_id           number constraint rt_pk primary key,
        parent_id       number,
        fk_col          number
);

insert into recursive_table 
select 
        rownum, 
        nullif(rownum-1,0)      parent_id, 
        mod(rownum,10) 
from 
        dual 
connect by 
        rownum <=100
;

prompt  ==================================================
prompt  Note that fk_col will be zero for 1/10 of the rows
prompt  ==================================================

create table lookup_t(
        pk_col number  constraint lt_pk primary key,
        value varchar2(30 char)
)
;

insert into lookup_t 
select 
        rownum, 
        rpad('x',30,'x') 
from 
        dual 
connect by 
        rownum <=100
;

create table join_t(
        pk_col number primary key,
        col_10 number,
        value varchar2(100 char)
);

insert into join_t 
select 
        rownum, mod(rownum,10), rpad('x',30,'x') 
from 
        dual 
connect by 
        rownum <=1000 --> comment to avoid WordPress format problem.
;

execute dbms_stats.gather_table_stats(null,'recursive_table')
execute dbms_stats.gather_table_stats(null,'lookup_t')
execute dbms_stats.gather_table_stats(null,'join_t')

prompt	================================
prompt	note that pk_col will never be 0
prompt	================================

set serverout off
alter session set statistics_level=all;

var x number
exec :x := 1

spool misplaced_subq_plan

select  /* distinct */ 
        r.my_id, j.value, r.ssq_value
from    (
	select 
		my_id, parent_id, fk_col, 
		(select l.value from lookup_t l where l.pk_col = r.parent_id) ssq_value 
        from 
		recursive_table r 
	connect by 
		prior my_id = parent_id 
	start with 
		my_id = :x
	) r
join    join_t j
on	r.fk_col = j.pk_col
/

select * from table(dbms_xplan.display_cursor(format=>'allstats projection'));

set serveroutput on

spool off

The code generates, populates, and queries three tables:

  • recursive_table is used in a “connect by” query to generate some data.
  • lookup_t is used in a scalar subquery in the select list of the “connect by” query.
  • join_t is then joined to the result of the “connect by” query to eliminate some rows.

The construction allows us to see a difference between the number of rows returned and the number of times the scalar subquery is executed, and makes it easy to detect an anomaly in the presentation of the execution plan. And here is the execution plan from an 18.3 instance:

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |     90 |00:00:00.01 |     170 |       |       |          |
|   1 |  TABLE ACCESS BY INDEX ROWID              | LOOKUP_T        |    100 |      1 |     99 |00:00:00.01 |     102 |       |       |          |
|*  2 |   INDEX UNIQUE SCAN                       | LT_PK           |    100 |      1 |     99 |00:00:00.01 |       3 |       |       |          |
|*  3 |  HASH JOIN                                |                 |      1 |      2 |     90 |00:00:00.01 |     170 |  1123K|  1123K|     1/0/0|
|   4 |   VIEW                                    |                 |      1 |      2 |    100 |00:00:00.01 |     125 |       |       |          |
|*  5 |    CONNECT BY NO FILTERING WITH START-WITH|                 |      1 |        |    100 |00:00:00.01 |      23 |  9216 |  9216 |     2/0/0|
|   6 |     TABLE ACCESS FULL                     | RECURSIVE_TABLE |      1 |    197 |    100 |00:00:00.01 |      23 |       |       |          |
|   7 |   TABLE ACCESS FULL                       | JOIN_T          |      1 |      1 |   1000 |00:00:00.01 |      45 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("L"."PK_COL"=:B1)
   3 - access("R"."FK_COL"="J"."PK_COL")
   5 - access("PARENT_ID"=PRIOR NULL)
       filter("MY_ID"=:X)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "L"."VALUE"[VARCHAR2,120]
   2 - "L".ROWID[ROWID,10]
   3 - (#keys=1) "R"."MY_ID"[NUMBER,22], "J"."VALUE"[VARCHAR2,400], "R"."SSQ_VALUE"[VARCHAR2,120], "J"."VALUE"[VARCHAR2,400]
   4 - "R"."MY_ID"[NUMBER,22], "R"."FK_COL"[NUMBER,22], "R"."SSQ_VALUE"[VARCHAR2,120]
   5 - "PARENT_ID"[NUMBER,22], "MY_ID"[NUMBER,22], "FK_COL"[NUMBER,22], "R"."PARENT_ID"[NUMBER,22], PRIOR NULL[22], LEVEL[4]
   6 - "MY_ID"[NUMBER,22], "PARENT_ID"[NUMBER,22], "FK_COL"[NUMBER,22]
   7 - "J"."PK_COL"[NUMBER,22], "J"."VALUE"[VARCHAR2,400]

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - this is an adaptive plan


In a typical execution plan with scalar subqueries in the select list, the sub-plans for the scalar subqueries appear in the plan before the main query – and in this plan you can see the scalar subquery here at operations 1 and 2.

But the scalar subquery is in the select list of a non-mergeable view (operations 4, 5, 6). We can see that this view generates 100 rows (A-rows of operation 4) and the scalar subquery starts 100 times (Starts of operation 1) – so we can infer that the subquery ran for each row generated by the view.

The problem, though, is that the result set from the view is joined to another table, eliminating some rows and reducing the size of the result set; so if we don’t look carefully at all the details of the plan we appear to have a driving query that produces a result set of 90 rows (at operation 3), but manages to execute the scalar subquery just above it in the plan more times than there are rows in the result set.

It’s easy to unpick what’s really happening in this very simple query with a very short plan – but much harder to do so in the original case where the scalar subquery appeared “outside” the hash join when it actually executed inside a complex subplan that generated the second input (probe table) for the hash join.

As a further little note – if you look at the Column Projection Information you’ll see that operation 4 is where Oracle first projects ‘r.ssq_value[varchar2,120]’ which is the column created by the execution of the sub-plan at operation 1.

Arguably the execution plan should have looked more like:


Plan hash value: 2557600799

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |     90 |00:00:00.01 |     170 |       |       |          |
|*  1 |  HASH JOIN                                |                 |      1 |      2 |     90 |00:00:00.01 |     170 |  1123K|  1123K|     1/0/0|
|   2 |   VIEW                                    |                 |      1 |      2 |    100 |00:00:00.01 |     125 |       |       |          |
|   3 |    TABLE ACCESS BY INDEX ROWID            | LOOKUP_T        |    100 |      1 |     99 |00:00:00.01 |     102 |       |       |          |
|*  4 |     INDEX UNIQUE SCAN                     | LT_PK           |    100 |      1 |     99 |00:00:00.01 |       3 |       |       |          |
|*  5 |    CONNECT BY NO FILTERING WITH START-WITH|                 |      1 |        |    100 |00:00:00.01 |      23 |  9216 |  9216 |     2/0/0|
|   6 |     TABLE ACCESS FULL                     | RECURSIVE_TABLE |      1 |    100 |    100 |00:00:00.01 |      23 |       |       |          |
|   7 |   TABLE ACCESS FULL                       | JOIN_T          |      1 |      1 |   1000 |00:00:00.01 |      45 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

Inevitably, there are cases where the sub-plan for a scalar subquery appears much closer to its point of operation rather than being moved to the top of the execution plan. So any time you have scalar subqueries in select lists inside in-line views, keep a careful lookout for where they appear and how many times they run in the execution plan. And don’t forget that giving every query block a name (as sketched below) will help you track down your migrating subqueries.
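
As a sketch of that naming strategy applied to the demo query above (only the qb_name() hints are new, everything else is unchanged):

select  /*+ qb_name(main) */
        r.my_id, j.value, r.ssq_value
from    (
        select  /*+ qb_name(inline) */
                my_id, parent_id, fk_col,
                (select /*+ qb_name(ssq) */ l.value from lookup_t l where l.pk_col = r.parent_id) ssq_value
        from    recursive_table r
        connect by
                prior my_id = parent_id
        start with
                my_id = :x
        ) r
join    join_t j
on      r.fk_col = j.pk_col
/

Calling dbms_xplan.display_cursor() with the 'alias' format option will then report a "Query Block Name / Object Alias" section, making it obvious which query block each line of the plan belongs to, including the scalar subquery's lines.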

Footnote

If you’re wondering why the Column Projection Information reports r.ssq_value as varchar2(120) when I’ve declared the column as varchar2(30), my declaration is 30 CHAR, and the database (by default) is running with a multi-byte character set that allows a maximum of 4 bytes per character.
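
You can confirm the arithmetic from the data dictionary. A small sketch, assuming a database character set such as AL32UTF8:

create table char_demo ( v30 varchar2(30 char) );

select column_name, char_length, char_used, data_length
from   user_tab_columns
where  table_name = 'CHAR_DEMO';

-- expect CHAR_LENGTH = 30, CHAR_USED = 'C' (character semantics) and
-- DATA_LENGTH = 120 bytes, i.e. 30 characters at a maximum of 4 bytes each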

Update (22nd May 2019)

Following the comment from Iudith Mentzel below about clever optimisations, primary keys, and related inferences I thought it worth pointing out that it is possible to modify the demonstration query to get the same plan (shape) with different Start counts. We note that instead of putting the scalar subquery inside the inline view we would get the same result if we passed the parent_id to the outer query block and ran the scalar subquery there:

select  /* distinct */ 
        r.my_id, j.value,
        (select l.value from lookup_t l where l.pk_col = r.parent_id) ssq_value 
from    (
        select 
                my_id, parent_id, fk_col
        from 
                recursive_table r 
        connect by 
                prior my_id = parent_id 
        start with 
                my_id = :x
        ) r
join    join_t j
on      r.fk_col = j.pk_col
/

This gives us the following execution plan (with rowsource execution statistics):


Plan hash value: 2557600799

--------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem |  O/1/M   |
--------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |      1 |        |     90 |00:00:00.03 |      29 |       |       |          |
|   1 |  TABLE ACCESS BY INDEX ROWID              | LOOKUP_T        |     90 |      1 |     89 |00:00:00.01 |      97 |       |       |          |
|*  2 |   INDEX UNIQUE SCAN                       | LT_PK           |     90 |      1 |     89 |00:00:00.01 |       8 |       |       |          |
|*  3 |  HASH JOIN                                |                 |      1 |      2 |     90 |00:00:00.03 |      29 |  1695K|  1695K|     1/0/0|
|   4 |   VIEW                                    |                 |      1 |      2 |    100 |00:00:00.01 |       7 |       |       |          |
|*  5 |    CONNECT BY NO FILTERING WITH START-WITH|                 |      1 |        |    100 |00:00:00.01 |       7 |  6144 |  6144 |     2/0/0|
|   6 |     TABLE ACCESS FULL                     | RECURSIVE_TABLE |      1 |    100 |    100 |00:00:00.01 |       7 |       |       |          |
|   7 |   TABLE ACCESS FULL                       | JOIN_T          |      1 |      1 |   1000 |00:00:00.01 |      22 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------

Note that the plan hash values are the same even though (mechanically) the real order of activity is dramatically different. But now we can see that the scalar subquery (operations 1 and 2) starts 90 times – once for each row returned by the hash join at operation 3, and we have done slightly fewer buffer visits (97 compared to 102) for that part of the plan.

It’s a pity, though, that when you start poking at a plan and looking too closely there are always new buggy bits to see. With the scalar subquery now at its optimal position (and maybe it will eventually get there without a manual rewrite) the arithmetic of “summing up the plan” has gone wrong for (at least) the Buffers column. In the new plan the 97 buffer visits attributed to operation 1 (and its descendents) should have been added to the 29 buffer visits attributed to the hash join (and its descendents) at operation 3 to get a total of 126; instead the 97 have just disappeared from the query total.

By comparison, and reading the operations in the original plan in a suitable order, we see the view at operation 4 reporting 125 buffers, which comes from the 23 for its “obvious” descendents plus the 102 from operation 1 that actually happen inside the view. Then the hash join at operation 3 reports 170 buffers, which is the 125 from the view plus the 45 from the tablescan at operation 7, and that 170 buffers is the final figure for the query.

So, for this particular example, it doesn’t matter what you do: the execution plan and its stats try to confuse you.

Latest Blog Posts by Oracle ACE Directors - May 5-11, 2019

OTN TechBlog - Tue, 2019-05-21 05:00
Weighing in...

Given the extraordinary number of blog posts recently published between May 5 and May 11 by members of the Oracle ACE program, I'm publishing separate lists for posts by ACE Directors, ACEs, and ACE Associates. Today's list features recent posts by Oracle ACE Directors.

David Kurtz
Consultant, Accenture Enkitec Group
London, United Kingdom

 

Oren Nakdimon
Database Architect & Developer, Moovit
Tzurit, HaZafon (North) District, Israel

 

Edward Roske
CEO, interRel Consulting Partners
Arlington, Texas

 

Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

John Scott
Director, Apex Evangelists
Leeds, United Kingdom

 

Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 
Additional Resources

Oracle Database Upgrade made easy with AutoUpgrade utility

Syed Jaffar - Tue, 2019-05-21 03:59
Upgrading an Oracle database is undoubtedly a daunting task, one which requires careful study, planning and execution to prevent any potential post-upgrade shortcomings. Since Oracle is determined to release a new Oracle version every year, at some point in time we will all need to upgrade our databases rather more often than we used to.

Thankfully the AutoUpgrade tool (utility), available via MOS Doc ID 2485457.1, automates the whole upgrade procedure without much human intervention or input. For the latest AutoUpgrade version, always refer to the MOS note and download the utility from there; with 12.2 (DBJAN2019RU), 18.5 and 19.3, the AutoUpgrade utility is available by default under the Oracle home.

AutoUpgrade is a command-line tool which can be used to upgrade one or many Oracle databases with one command and a single configuration file. The utility automates the whole upgrade process: it runs the pre-upgrade tasks, performs automated fix-ups, carries out the database upgrade, and runs through the post-upgrade tasks as well. This saves a huge amount of time and money when upgrading hundreds of databases in any environment. A minimal sketch of the configuration file and the invocation follows.
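
As a rough sketch of how little is needed to drive it (the paths, SID and entry values below are illustrative only; see the MOS note for the full list of configuration parameters):

# config.cfg -- one global section plus one upgN block per database
global.autoupg_log_dir=/u01/app/oracle/autoupgrade
upg1.sid=ORCL
upg1.source_home=/u01/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1

# analyze first, then deploy (deploy runs prechecks, fixups, upgrade and postchecks)
java -jar autoupgrade.jar -config config.cfg -mode analyze
java -jar autoupgrade.jar -config config.cfg -mode deploy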



I am pretty sure AutoUpgrade will make DBAs' lives easier when it comes to Oracle database upgrades. Have fun and test the capabilities of the tool.



References:
MOS Doc ID : AutoUpgrade Tool (Doc ID 2485457.1)
https://mikedietrichde.com/2019/04/29/the-new-autoupgrade-utility-in-oracle-19c/
https://docs.oracle.com/en/database/oracle/oracle-database/19/upgrd/about-oracle-database-autoupgrade.html#GUID-3FCFB2A6-4617-4783-828A-41BD635FC88C

How to Manage Oracle 12c MultiTenant Database

VitalSoftTech - Tue, 2019-05-21 00:01
Managing Oracle 12c MultiTenant Database Oracle has drastically modified its architecture by introducing the concept of containers. The main component of this architecture is the CDB, a container which owns the memory structures, the background processes, and the SYSTEM and SYSAUX tablespaces, all of which are shared with the databases that are “plugged” into it. […]
Categories: DBA Blogs
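
As a taste of what managing that container hierarchy looks like in practice, a minimal sketch run as a suitably privileged common user (the PDB name is illustrative):

-- from the root container: list the PDBs plugged into this CDB
select con_id, name, open_mode from v$pdbs;

-- move the session into a PDB and back to the root
alter session set container = PDB1;
alter session set container = CDB$ROOT;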

The ‘Unprecedented Challenge’ of Cybersecurity in an Age of Burgeoning Threats

Oracle Press Releases - Mon, 2019-05-20 15:43

Barbara Darrow, Senior Director, Communications, Oracle—May 20, 2019

Technological and legal complexities abound in this age of heightened cybersecurity threats—including a rise in state-sponsored hacking. This “unprecedented challenge” was the topic of conversation between Dorian Daley, Oracle executive vice president and general counsel, and Edward Screven, Oracle’s chief corporate architect. Here are five key takeaways from their conversation:

1. Some good news: Businesses are aware of cybersecurity challenges in a way they were not even just a few years ago, when many considered security, generally, as a priority but didn’t go much beyond that thought, according to Screven. “It’s [become] a front-and-center kind of issue for our customers,” Daley agreed.

2. These same customers would like to make data security “someone else’s problem,” and are right to think that way, Screven added. In this context, that “someone else” is a tech vendor able to design technology that is inherently more secure than what non-tech businesses could design for themselves.

3. Regulations around data privacy are getting more complicated, starting with the European Union’s General Data Protection Regulation, Daley noted. The issues of data privacy and data security constitute slightly different sides to the same problem, she said, adding “what’s happening on the privacy side is really an explosion of regulatory frameworks around the world.”

4. There’s only so much that employees can do—no matter how skilled they may be. Recent research shows that while most companies cite human error as a leading cause of data insecurity, they also keep throwing more people at a problem that can’t really be solved without a level of automation commensurate with the sophistication and volume of attacks. “There is a lack of sufficient awareness about what technology can actually do for customers,” Daley noted.

Fast, “autonomous” or self-applying software patches and updates are a solid way to mitigate or even prevent data loss from cyber hacks. Many of the attacks and subsequent data leaks over the past few years could have been avoided had available software patches been applied in a timely fashion.

AI and machine learning tech can catch far more anomalies that might indicate a security problem, like unauthorized system access, and can do so much faster than human experts can, eliminating issues before they get serious.

5. Screven is skeptical that international treaties, if such things could be crafted, would eradicate state-sponsored cyber hacking because much of that activity happens under the covers by contractors that can be disavowed by the states.

Thus, “the same person who’s out stealing your credit card today is out trying to steal plans for [Hellfire] missiles tomorrow,” Screven said.

Full video of the talk can be found here.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle Certification 12c Notes – DBA Track

VitalSoftTech - Mon, 2019-05-20 10:58
Oracle 12c OCP Exam Notes (1z0-060) Oracle DBA 12c Certification Notes: Oracle 12c – Manage Multitenant CDB and PDBs with EM Express – quick and easy steps to create, set up and deploy EM Express. Oracle 12c Database: Create 12c CDB, PDB Databases Using OUI – when installing Oracle Software, […]
Categories: DBA Blogs

Pages

Subscribe to Oracle FAQ aggregator