Feed aggregator

Partitioning -- 16 : Hybrid Partitioning

Hemant K Chitale - Tue, 2019-05-07 02:55
Oracle 19c introduces Hybrid Partitioning, whereby external and internal Partitions can co-exist in the same table.  External Partitions are on storage (e.g. a filesystem) outside the database.

Let's say we have a List Partitioned table for the widgets that we manufacture. The table is Partitioned by WIDGET_CLASS_ID, based on an ISO standard.  So all companies that manufacture widgets adopt the same WIDGET_CLASS_ID:

SQL> desc widgets_list
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 WIDGET_CLASS_ID                                    VARCHAR2(5)
 WIDGET_ID                                          VARCHAR2(32)
 WIDGET_NAME                                        VARCHAR2(32)
 WIDGET_DESCRIPTION                                 VARCHAR2(128)

SQL>
SQL> l
1 select table_name, partitioning_type, partition_count
2 from user_part_tables
3* where table_name = 'WIDGETS_LIST'
SQL> /

TABLE_NAME                       PARTITION PARTITION_COUNT
-------------------------------- --------- ---------------
WIDGETS_LIST                     LIST                    3

SQL>
SQL> l
1 select partition_name,high_value, num_rows
2 from user_tab_partitions
3* where table_name = 'WIDGETS_LIST'
SQL> /

PARTITION_NAME   HIGH_VALUE         NUM_ROWS
---------------- ---------------- ----------
P_A              'A'                    1520
P_B              'B'                     520
P_C              'C'                     119

SQL>


Later, another widget manufacturer that manufactures widgets of CLASS_ID 'X' is acquired.  The acquired company's widgets list is in a non-Oracle database and is received as a CSV file, which we place in a filesystem location:

sh-4.2$ pwd
/home/oracle/ACQUIRED_COMPANY
sh-4.2$ cat AC_Widgets_List.CSV
'X','ABCXX2','The1','cddfdaxx'
'X','XXD2','The2','dda3'
'X','XRC34','The3','ff33355312'
sh-4.2$


So, we have a CSV file "AC_Widgets_List.CSV" listing the widgets manufactured by this company. We want to add it to our WIDGETS_LIST table.

Enter user-name: / as sysdba

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> create directory acquired_company as '/home/oracle/ACQUIRED_COMPANY';

Directory created.

SQL> grant read, write on directory acquired_company to hemant;

Grant succeeded.

SQL>
SQL> connect hemant/hemant@ORCLPDB1
Connected.
SQL>
SQL> l
1 alter table widgets_list
2 add external partition attributes (
3 type oracle_loader
4 default directory acquired_company
5 access parameters (
6 fields terminated by ','
7 (widget_class_id, widget_id, widget_name, widget_description)
8 )
9* )
SQL> /

Table altered.

SQL>
SQL> l
1 alter table widgets_list
2 add partition P_ACQ_CO values ('X')
3* external location ('AC_Widgets_List.CSV')
SQL> /

Table altered.

SQL>
SQL> exec dbms_stats.gather_table_stats('','WIDGETS_LIST');

PL/SQL procedure successfully completed.

SQL>
SQL> l
1 select partition_name, high_value, num_rows
2 from user_tab_partitions
3 where table_name = 'WIDGETS_LIST'
4* order by partition_position
SQL> /

PARTITION_NAME                   HIGH_VALUE         NUM_ROWS
-------------------------------- ---------------- ----------
P_A                              'A'                    1520
P_B                              'B'                     520
P_C                              'C'                     119
P_ACQ_CO                         'X'                       3

SQL>
SQL> l
1* select * from widgets_list partition (P_ACQ_CO)
SQL> /

WIDGET_CLASS WIDGET_ID WIDGET_NAME
------------ -------------------------------- --------------------------------
WIDGET_DESCRIPTION
--------------------------------------------------------------------------------
'X' 'ABCXX2' 'The1'
'cddfdaxx'

'X' 'XXD2' 'The2'
'dda3'

'X' 'XRC34' 'The3'
'ff33355312'


SQL>


The rows in the "AC_Widgets_List.CSV" file are now visible as rows in a *Partition* of our Oracle table WIDGETS_LIST.
Of course, being external, these rows cannot be modified by INSERT/UPDATE/DELETE DML.

The External Attribute Type I used is ORACLE_LOADER, the access driver that uses SQL*Loader functionality to read filesystem files.  Oracle 19c also supports ORACLE_DATAPUMP, ORACLE_HDFS and ORACLE_HIVE to reference files in other types of storage.

Hybrid Partitions are supported with single-level Range and List partitioning methods.  ALTER TABLE to ADD, DROP and RENAME Partitions is supported.

An External Partition can be Exchanged with an External Non-Partitioned Table only.
.
.
UPDATE :  Later, I update the CSV file (using an external editor) to remove the quotation marks :

sh-4.2$ cat AC_Widgets_List.CSV
X,ABCXX2,The1,cddfdaxx
X,XXD2,The2,dda3
X,XRC34,The3,ff33355312
sh-4.2$

SQL> l
1* select * from widgets_list partition (P_ACQ_CO)
SQL> /

WIDGET_CLASS_ID WIDGET_ID WIDGET_NAME
---------------- -------------------------------- --------------------------------
WIDGET_DESCRIPTION
------------------------------------------------------------------------------------
X ABCXX2 The1
cddfdaxx

X XXD2 The2
dda3

X XRC34 The3
ff33355312


SQL>


So, it is possible to edit the data behind an External Partition using other methods (here I used "vi" on Linux).
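The same edit can also be scripted instead of done by hand in vi. A minimal sketch that recreates the file as received and strips the quotes (the `-i.bak` flag works on both GNU and BSD sed):

```shell
# Recreate the CSV as received from the acquired company (quoted fields).
cat > AC_Widgets_List.CSV <<'EOF'
'X','ABCXX2','The1','cddfdaxx'
'X','XXD2','The2','dda3'
'X','XRC34','The3','ff33355312'
EOF

# Strip the single quotes in place, keeping a .bak copy of the original.
sed -i.bak "s/'//g" AC_Widgets_List.CSV
cat AC_Widgets_List.CSV
```

Since the partition reads the file on every query, the cleaned values are visible in WIDGETS_LIST immediately, with no reload step.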
.
.
.

Categories: DBA Blogs

Oracle Instant Client 19c for Linux is Available

Christopher Jones - Mon, 2019-05-06 21:11

Good news - Oracle Instant Client 19.3 is available for Linux on OTN and yum. A Dockerfile is also available on GitHub.

Oracle Instant Client comprises tools, libraries and SDKs for building and connecting applications to Oracle Database. It comes in a number of packages: the 'Basic' and 'Basic Light' packages provide the core libraries, and additional packages can be installed for extra functionality. Typical uses of Instant Client are with Oracle Database APIs like Python cx_Oracle and Node.js node-oracledb.

As the first yearly release (following on from 18c) in the new Oracle Database numbering system, the 19c development cycle seemed short. However, there have been useful changes and fixes; check out the documentation for the areas you are interested in. One thing I like is the enhanced Easy Connect Plus syntax, which removes the need for a sqlnet.ora file for some commonly used connection options.
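For illustration, a connection string using Easy Connect Plus might look like the sketch below. The host, port and service name are invented placeholders; `connect_timeout` and `retry_count` are examples of the query-string options the syntax accepts, so nothing needs to go into sqlnet.ora:

```shell
# Hypothetical database endpoint; protocol and options travel in the string.
CONNECT_STRING='tcps://db.example.com:1522/orclpdb1?connect_timeout=15&retry_count=3'

# The same string can be handed to any Oracle client tool or driver, e.g.:
echo "sqlplus hr@\"$CONNECT_STRING\""
```

Drivers like cx_Oracle and node-oracledb accept the same style of string in their connect calls.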

For RPM users, there are two things to note about Instant Client 19c package installation:

  • The global system library search path is now automatically set during installation of the (requisite) Basic and Basic Light packages. You no longer need to use ldconfig or set LD_LIBRARY_PATH.

  • You can only have one version of the packages installed by default. For example, the 19c RPMs will conflict with the 18c RPMs.

These two notes do not apply to the ZIP files: you still have to manually configure the library search path to suit the directory you installed into. And you can have as many ZIP versions installed concurrently as you like.
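A minimal sketch of that manual step for the ZIP files (the install directory below is an assumption; use whatever directory you unzipped into):

```shell
# Assumed unzip location for the 19.3 Basic package.
IC_DIR=/opt/oracle/instantclient_19_3

# Per-session alternative: prepend the directory to the library search path.
export LD_LIBRARY_PATH=$IC_DIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

# System-wide alternative (run as root), mirroring what the 19c RPMs now do:
#   echo "$IC_DIR" > /etc/ld.so.conf.d/oracle-instantclient.conf
#   ldconfig

echo "$LD_LIBRARY_PATH"
```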

Some old news, but worth repeating: the RPM packages on yum do not require a login or click-through to download.

Notes: Troubleshooting GRID opatchauto and Sample GRID OOP Log

Michael Dinh - Mon, 2019-05-06 16:44

Troubleshooting opatchauto Issues in Grid Infrastructure Environment (Doc ID 2467266.1)

Failure during prepatch execution
If there is any failure in the pre-patch step, verify the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log.

Failure during patching execution
If there is any failure during patching, review the opatch execution logs under the corresponding <oracle home>/cfgtoollogs/opatchauto location.

Failure during post-patch execution
If there is any failure in the post-patch step, review the logs under <oracle base>/crsdata/<hostname>/crsconfig/crspatch_<hostname>_<timestamp>.log.

Generally, the error is seen while starting the Clusterware. In that situation, troubleshoot Grid Infrastructure startup issues by referring to Doc ID 1050908.1.

How to debug opatchauto failures?
# export OPATCH_DEBUG=true
# opatchauto apply <patch location> -loglevel FINEST

ACTUAL: Grid_Infrastructure_Out_of_Place_12.2 (GI/DB SAME version)

[root@racnode-dc2-1 ~]# export PATCH_HOME=/u01/stage/patch/Apr2019/29301687
[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc2-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-29_05-24-36PM.log.
Session log file is /u01/app/12.2.0.1/grid/cfgtoollogs/opatchauto/opatchauto2019-04-29_05-26-26PM.log
Prepatch operation log file location: /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/crspatch_racnode-dc2-1_2019-04-29_05-33-30PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc2-1
RAC Home:/u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/oracle/12.2.0.1/db1_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-54PM_1.log


Host:racnode-dc2-1
CRS Home:/u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/stage/patch/Apr2019/29301687/26839277
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/28566910
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29301676
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314339
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log

Patch: /u01/stage/patch/Apr2019/29301687/29314424
Log: /u01/app/12.2.0.1/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-29_17-42-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc2-1
Actual Home : /u01/app/oracle/12.2.0.1/db1
Version:12.2.0.1.0
Clone Home Path : /u01/app/oracle/12.2.0.1/db1_2

Host : racnode-dc2-1
Actual Home : /u01/app/12.2.0.1/grid
Version:12.2.0.1.0
Clone Home Path : /u01/app/12.2.0.1/grid_2


OPatchauto session completed at Mon Apr 29 18:03:48 2019
Time taken to complete the session 39 minutes, 16 seconds
[root@racnode-dc2-1 ~]#

ACTUAL: Grid_Infrastructure_Out_of_Place_18.6 (GI/DB DIFFERENT version)

+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

[root@racnode-dc1-1 ~]# export PATCH_HOME=/u01/patch/Apr2019/29301682
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto apply $PATCH_HOME -prepare-clone -logLevel FINEST

System initialization log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-04-30_05-06-34PM.log.
Session log file is /u01/18.3.0.0/grid/cfgtoollogs/opatchauto/opatchauto2019-04-30_05-08-04PM.log

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:racnode-dc1-1
CRS Home:/u01/18.3.0.0/grid
Version:18.0.0.0.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/patch/Apr2019/29301682/28435192
Reason: This patch is already been applied, so not going to apply again.

Patch: /u01/patch/Apr2019/29301682/28547619
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/patch/Apr2019/29301682/29301631
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29301643
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log

Patch: /u01/patch/Apr2019/29301682/29302264
Log: /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/core/opatch/opatch2019-04-30_17-17-53PM_1.log


Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid_2


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Tue Apr 30 17:27:21 2019
Time taken to complete the session 20 minutes, 52 seconds
[root@racnode-dc1-1 ~]#

Fashion Retailer Achieves Over 98 Percent Inventory Accuracy with Oracle

Oracle Press Releases - Mon, 2019-05-06 07:00
Press Release
From inventory to in-store service, Kamal Osman Jamjoom Group LLC supports double-digit growth with modern retail technology

Redwood Shores, Calif. and Dubai, UAE —May 6, 2019

Kamal Osman Jamjoom Group (“KOJ”) believes that trust is the secret to success with customers, staff and business partners. One of the most established retail groups in the Gulf region, KOJ operates more than 700 stores with nine brands across seven countries. To meet service level expectations and optimize goods coming from three distribution centers, KOJ needed a more unified view of operational data. By modernizing with Oracle Retail, KOJ was able to improve store operations and drive higher levels of customer satisfaction through reliable, accurate merchandise availability. Today, a real-time view into inventory data has resulted in stock accuracy levels averaging 99.99 percent for online and 98.5 percent for in-store.

“KOJ is responsible for our own highly successful brands and multiple well-regarded international brands, with high growth rates. With the complexity of managing multiple brands and regions we needed greater control over products and inventory to support and exceed customer service levels,” said Ian Halliwell, deputy chief executive officer, Kamal Osman Jamjoom. “By leveraging the Oracle Retail platform, we have gained efficiencies and confidence fueled by an enterprise view of demand and our ability to fulfill inventory requirements across locations and touchpoints.”

KOJ began the work with Oracle in 2006 and recently upgraded to a modern version of the Oracle Retail Merchandise Operations suite, Warehouse Management and Store Inventory Management. By automating and optimizing retail process and workflow, KOJ gained efficiencies and saw a significant reduction in their labor costs.

The new implementation of Oracle Retail Merchandising Insights also improved visibility into inventory levels, item performance, sales trends, and customer demand. To achieve a single view of the customer across the business and gain a better understanding of customer behavior, the Group introduced Oracle Retail Customer Engagement and Oracle Retail Customer Insights. Now, store associates can access customer data in real-time, including purchasing and loyalty history. The modern Oracle Retail Xstore Point-of-Service completes the seamless in-store customer experience. KOJ is also currently implementing Oracle Retail Order Broker to enhance inventory visibility and enable dynamic fulfillment for a buy anywhere, fulfill anywhere customer experience.

“Oracle continues to make significant investments in innovations that help retailers like KOJ keep pace with market and consumer expectations. By leveraging the integrated capabilities of Oracle Retail, KOJ can continue to grow its business and refine strategies to meet performance, growth and customer objectives,” said Mike Webster, Senior Vice President and General Manager, Oracle Retail.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Cube Viewer - Designing Your Cube

Anthony Shorten - Sun, 2019-05-05 20:54

In the last Cube Viewer article we outlined a generic process for building a cube; the first step in that process is to design the data we want to analyze in the cube. A common misconception with Cube Viewer is that you can take any query and convert it into a cube for analysis. This is not exactly true: Cube Viewer is really designed for particular types of analysis and should be used for those types to take full advantage of the capability.

The easiest way of deciding which types of analysis are optimal for the Cube Viewer is to visualize your end result. This is not a new technique, as most designers work backwards from what they want in order to determine the best approach. The way I tend to visualize the Cube Viewer is as the data in a more analytical view. If you are familiar with the "Pivot" functionality that is popular in spreadsheet programs, that is the idea: a pivot allows different columns in a list to be combined in such a way as to provide more analytical information. A very simple example is shown below:

Pivot Table

In the above example we have three columns: two are considered dimensions (how we "cut" the data) and one is the value we want to analyze. The pivot relationship in the above example is between Column A and Column B.
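If the pivot idea is unfamiliar, the same cross-tabulation can be sketched outside any database. A toy example (the data values are made up) that aggregates a value column over two dimension columns:

```shell
# Rows are "dimension A, dimension B, value"; sum the value per (A,B) pair.
printf 'EAST,Q1,10\nEAST,Q1,5\nEAST,Q2,3\nWEST,Q1,7\n' |
  awk -F, '{ sum[$1","$2] += $3 } END { for (k in sum) print k "," sum[k] }' |
  sort
# prints:
#   EAST,Q1,15
#   EAST,Q2,3
#   WEST,Q1,7
```

Each (A,B) pair plays the role of a cell in the pivot, exactly as the dimensions and values do in a cube.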

In Cube Viewer there are three concepts:

  • Dimensions. These are the columns used in the analysis. Typically dimensions represent the different ways you want to view the data in relation to other dimensions.
  • Filters. These act on the master record set (the data you want to use in the analysis) to refine the subset to focus upon. For example, you might want to limit your analysis to specific date ranges. By their nature, Filters can also become dimensions in a cube.
  • Values. These are the numeric values (including any functions) that need to be analyzed.

Note: Filters and Values can be considered dimensions as well, due to the interactivity allowed in the Cube Viewer.

When designing your cube consider the following guidelines:

  • Dimensions (including filters) define the data to analyze. The dimensions and filters define the data to focus upon; the SQL will be designed around all of these concepts.
  • Interactivity means analysis is fluid. Whilst dimensions, filters and values are defined in the cube definition, their roles can be altered at runtime by the user, who can (within limits) interactively change how the data is represented.
  • Dimensions can be derived. It is possible to add ad-hoc dimensions that may not even be data in the database directly. The ConfigTools capability allows additional columns, not present directly in the SQL, to be added during configuration. For example, it is possible to pull in a value from a related object that is not in the SQL but is in the ConfigTools objects.

Note: For large amounts of data to be included or processed as part of the cube, it is highly recommended to build that logic into the cube query itself to improve performance.

  • Values need to be numeric. The values to be analyzed should be numeric so that they can be aggregated and analyzed correctly.

In the next series of articles we will explore building the SQL statement and then translating it into the ConfigTools objects to complete the cube.

Cat or Dog — Image Classification with Convolutional Neural Network

Andrejus Baranovski - Sun, 2019-05-05 11:42
The goal of this post is to show how convnet (CNN — Convolutional Neural Network) works. I will be using classical cat/dog classification example described in François Chollet book — Deep Learning with Python. Source code for this example is available on François Chollet GitHub. I’m using this source code to run my experiment.

Convnet works by abstracting image features from fine detail up to higher-level elements. An analogy can be drawn with the way humans think: each of us knows how an airplane looks, but most likely when thinking about an airplane we are not thinking about every little bit of its structure. In a similar way, a convnet learns to recognize higher-level elements in an image, and this helps to classify new images when they look similar to the ones used for training.

The image classification model should be trained using this notebook (you will find a description there of where to download the image dataset with cat and dog images). The model is used and the classification prediction invoked in this notebook. For convenience, I uploaded my own notebooks (based on the code from the Deep Learning with Python book) to GitHub.

Read more in my Towards Data Science article.

What’s My Cluster Configuration

Michael Dinh - Sun, 2019-05-05 09:15
[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ crsctl get cluster configuration
Name                : ol7-183-cluster
Configuration       : Cluster
Class               : Standalone Cluster
Type                : flex
The cluster is not extended.
--------------------------------------------------------------------------------
        MEMBER CLUSTER INFORMATION

      Name       Version        GUID                       Deployed Deconfigured
================================================================================
================================================================================

[grid@ol7-183-node1 ~]$ olsnodes -s -a -t
ol7-183-node1   Active  Hub     Unpinned
ol7-183-node2   Active  Hub     Unpinned

[grid@ol7-183-node1 ~]$ crsctl query crs releasepatch
Oracle Clusterware release patch level is [70732493] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28090564 28256701 ] have been applied on the local node. The release patch string is [18.3.0.0.0].

[grid@ol7-183-node1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [70732493].
[grid@ol7-183-node1 ~]$

Updating vagrant-boxes/OracleRAC

Michael Dinh - Sat, 2019-05-04 14:17

I have been playing with oracle/vagrant-boxes/OracleRAC and was finally able to complete an 18c RAC installation using it.

Honestly, I am still fond of Mikael Sandström's oravirt vagrant-boxes, but I was having some trouble with installations and thought I would try something new.

Here are the updates performed for oracle/vagrant-boxes/OracleRAC on all nodes, showing only one node as an example.

/etc/oratab is not updated:

[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   18223 18079  0 14:43 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2

[oracle@ol7-183-node2 ~]$ tail /etc/oratab
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#

Update /etc/oratab [my framework works :=)]

[oracle@ol7-183-node2 ~]$ cat /etc/oratab
+ASM2:/u01/app/18.0.0.0/grid:N
hawk2:/u01/app/oracle/product/18.0.0.0/dbhome_1:N

[oracle@ol7-183-node2 ~]$ /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance not alive for sid "+ASM2"

[oracle@ol7-183-node2 ~]$ /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[oracle@ol7-183-node2 ~]$

sudo for grid/oracle is not enabled:

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for oracle:
oracle is not in the sudoers file.  This incident will be reported.
[oracle@ol7-183-node2 ~]$ exit

Enable sudo for grid/oracle (example shown for oracle; the same applies to grid):

[vagrant@ol7-183-node2 ~]$ sudo su -
[root@ol7-183-node2 ~]# visudo
[root@ol7-183-node2 ~]# grep oracle /etc/sudoers
oracle  ALL=(ALL)       ALL
oracle  ALL=(ALL)       NOPASSWD: ALL
[root@ol7-183-node2 ~]# logout

[vagrant@ol7-183-node2 ~]$ sudo su - oracle
Last login: Sat May  4 14:43:46 -04 2019 on pts/0

[oracle@ol7-183-node2 ~]$ sudo /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
  1155 asm_pmon_+asm2       /u01/app/18.0.0.0/grid/
 31653 ora_pmon_hawk2       /u01/app/oracle/product/18.0.0.0/dbhome_1/
[oracle@ol7-183-node2 ~]$

Login banner:

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 14:43:40 2019 from 10.0.2.2

Welcome to Oracle Linux Server release 7.6 (GNU/Linux 4.14.35-1844.1.3.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

    * /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

    * http://yum.oracle.com/

[vagrant@ol7-183-node2 ~]$

Remove login banner:

[root@ol7-183-node2 ~]# cp -v /etc/motd /etc/motd.bak
‘/etc/motd’ -> ‘/etc/motd.bak’
[root@ol7-183-node2 ~]# cat /dev/null > /etc/motd
[root@ol7-183-node2 ~]# logout
[vagrant@ol7-183-node2 ~]$ logout
Connection to 127.0.0.1 closed.

dinh@CMWPHV1 MINGW64 /c/vagrant-boxes/OracleRAC (master)
$ vagrant ssh node2
Last login: Sat May  4 15:00:06 2019 from 10.0.2.2
[vagrant@ol7-183-node2 ~]$

Mandatory GIMR is not installed:

    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Make GI config command
    node1: -----------------------------------------------------------------
    node1: -----------------------------------------------------------------
    node1: INFO: 2019-05-04 14:01:02: Grid Infrastructure configuration as 'RAC'
    node1: INFO: 2019-05-04 14:01:02: - ASM library   : ASMLIB
    node1: INFO: 2019-05-04 14:01:02: - without MGMTDB: true
    node1: -----------------------------------------------------------------
    node1: Launching Oracle Grid Infrastructure Setup Wizard...

[oracle@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1
oracle   28642 28586  0 15:12 pts/0    00:00:00 grep --color=auto pmon

[oracle@ol7-183-node1 ~]$ ssh ol7-183-node2
Last login: Sat May  4 14:48:20 2019
[oracle@ol7-183-node2 ~]$ ps -ef|grep pmon
grid      1155     1  0 14:00 ?        00:00:00 asm_pmon_+ASM2
oracle   29820 29711  0 15:12 pts/0    00:00:00 grep --color=auto pmon
oracle   31653     1  0 14:29 ?        00:00:00 ora_pmon_hawk2
[oracle@ol7-183-node2 ~]$

Create GIMR:
How to Move/Recreate GI Management Repository (GIMR / MGMTDB) to Different Shared Storage (Diskgroup, CFS or NFS etc) (Doc ID 1589394.1)
MDBUtil: GI Management Repository configuration tool (Doc ID 2065175.1)

[grid@ol7-183-node1 ~]$ ps -ef|grep pmon
grid      2286 27832  0 16:35 pts/0    00:00:00 grep --color=auto pmon
grid      7294     1  0 13:53 ?        00:00:00 asm_pmon_+ASM1
oracle   10986     1  0 14:29 ?        00:00:00 ora_pmon_hawk1

[grid@ol7-183-node1 ~]$ ll /tmp/mdbutil.*
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:02 /tmp/mdbutil.pl

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 16:35:44: I Checking CHM status...
2019-05-04 16:35:46: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 16:35:49: W MGMTDB is not configured on ol7-183-node1!
2019-05-04 16:35:49: W Cluster Health Monitor (CHM) is configured and not running on ol7-183-node1!

[grid@ol7-183-node1 ~]$ . /media/patch/gi.env
The Oracle base remains unchanged with value /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[grid@ol7-183-node1 ~]$ asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512             512   4096  4194304     65520    63108                0           63108              0             Y  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304     16368    15260             4092            5584              0             N  RECO/

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --addmdb --target=+DATA -debug
mdbutil.pl version : 1.95
2019-05-04 16:36:57: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status diskgroup -g DATA
2019-05-04 16:36:58: D Exit code: 0
2019-05-04 16:36:58: D Output of last command execution:
Disk Group DATA is running on ol7-183-node1,ol7-183-node2
2019-05-04 16:36:58: I Starting To Configure MGMTDB at +DATA...
2019-05-04 16:36:58: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtlsnr
2019-05-04 16:36:59: D Exit code: 0
2019-05-04 16:36:59: D Output of last command execution:
Listener MGMTLSNR is enabled
2019-05-04 16:36:59: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:00: D Exit code: 1
2019-05-04 16:37:00: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:00: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl status mgmtdb
2019-05-04 16:37:01: D Exit code: 1
2019-05-04 16:37:01: D Output of last command execution:
PRCD-1120 : The resource for database _mgmtdb could not be found.
2019-05-04 16:37:01: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl stop mgmtlsnr
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/crsctl query crs activeversion
2019-05-04 16:37:05: D Exit code: 0
2019-05-04 16:37:05: D Output of last command execution:
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
2019-05-04 16:37:05: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl enable qosmserver
2019-05-04 16:37:06: D Exit code: 2
2019-05-04 16:37:06: D Output of last command execution:
PRKF-1321 : QoS Management Server is already enabled.
2019-05-04 16:37:06: D Executing: /u01/app/18.0.0.0/grid/bin/srvctl start qosmserver
2019-05-04 16:37:07: D Exit code: 2
2019-05-04 16:37:07: D Output of last command execution:
PRCC-1014 : qosmserver was already running
2019-05-04 16:37:07: I Container database creation in progress... for GI 18.0.0.0.0
2019-05-04 16:37:07: D Executing: /u01/app/18.0.0.0/grid/bin/dbca  -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATA -datafileJarLocation /u01/app/18.0.0.0/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
2019-05-04 16:55:03: D Exit code: 0
2019-05-04 16:55:03: D Output of last command execution:
Prepare for db operation
2019-05-04 16:55:03: I Plugable database creation in progress...
2019-05-04 16:55:03: D Executing: /u01/app/18.0.0.0/grid/bin/mgmtca -local
2019-05-04 16:59:32: D Exit code: 0
2019-05-04 16:59:32: D Output of last command execution:
2019-05-04 16:59:32: D Executing: scp /tmp/mdbutil.pl ol7-183-node1:/tmp/
2019-05-04 16:59:33: D Exit code: 0
2019-05-04 16:59:33: D Output of last command execution:
2019-05-04 16:59:33: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:33: D Executing: ssh root@ol7-183-node1 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node1's password:
2019-05-04 16:59:42: D Exit code: 1
2019-05-04 16:59:42: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:42: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node1 as root to configure CHM.
2019-05-04 16:59:42: D Executing: scp /tmp/mdbutil.pl ol7-183-node2:/tmp/
2019-05-04 16:59:43: D Exit code: 0
2019-05-04 16:59:43: D Output of last command execution:
2019-05-04 16:59:43: I Executing "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:43: D Executing: ssh root@ol7-183-node2 "/tmp/mdbutil.pl --addchm"
root@ol7-183-node2's password:
2019-05-04 16:59:51: D Exit code: 1
2019-05-04 16:59:51: D Output of last command execution:
mdbutil.pl version : 1.95
2019-05-04 16:59:51: W Not able to execute "/tmp/mdbutil.pl --addchm" on ol7-183-node2 as root to configure CHM.
2019-05-04 16:59:51: I MGMTDB & CHM configuration done!

[root@ol7-183-node1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM1"

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2501: Resource 'ora.crf' is disabled
CRS-4000: Command Start failed, or completed with errors.

[root@ol7-183-node1 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init

[root@ol7-183-node1 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node1'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node1' succeeded

[root@ol7-183-node1 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node1

[root@ol7-183-node1 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node1 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:02:54: I Starting To Configure CHM...
2019-05-04 17:02:55: I CHM has already been configured!
2019-05-04 17:02:57: I CHM Configure Successfully Completed!
[root@ol7-183-node1 ~]#

[root@ol7-183-node1 ~]# ssh ol7-183-node2
Last login: Sat May  4 16:28:28 2019
[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=OFFLINE
STATE=OFFLINE

[root@ol7-183-node2 ~]# crsctl modify res ora.crf -attr ENABLED=1 -init
[root@ol7-183-node2 ~]# crsctl start res ora.crf -init
CRS-2672: Attempting to start 'ora.crf' on 'ol7-183-node2'
CRS-2676: Start of 'ora.crf' on 'ol7-183-node2' succeeded
[root@ol7-183-node2 ~]# crsctl stat res ora.crf -init
NAME=ora.crf
TYPE=ora.crf.type
TARGET=ONLINE
STATE=ONLINE on ol7-183-node2

[root@ol7-183-node2 ~]# ll /tmp/mdbutil.pl
-rwxr-xr-x. 1 grid oinstall 67952 May  4 16:59 /tmp/mdbutil.pl
[root@ol7-183-node2 ~]# /tmp/mdbutil.pl --addchm
mdbutil.pl version : 1.95
2019-05-04 17:04:41: I Starting To Configure CHM...
2019-05-04 17:04:41: I CHM has already been configured!
2019-05-04 17:04:44: I CHM Configure Successfully Completed!

[root@ol7-183-node2 ~]# logout
Connection to ol7-183-node2 closed.
[root@ol7-183-node1 ~]# logout

[grid@ol7-183-node1 ~]$ /tmp/mdbutil.pl --status
mdbutil.pl version : 1.95
2019-05-04 17:04:54: I Checking CHM status...
2019-05-04 17:04:56: I Listener MGMTLSNR is configured and running on ol7-183-node1
2019-05-04 17:04:59: I Database MGMTDB is configured and running on ol7-183-node1
2019-05-04 17:05:00: I Cluster Health Monitor (CHM) is configured and running
--------------------------------------------------------------------------------
CHM Repository Path = +DATA/_MGMTDB/881717C3357B4146E0536538A8C05D2C/DATAFILE/sysmgmtdata.291.1007398657
MGMTDB space used on DG +DATA = 23628 Mb
--------------------------------------------------------------------------------
[grid@ol7-183-node1 ~]$

Due to role separation, the lspatches script is broken: OPatch cannot create/open its history file when run against the other role's home. Fix it by loosening permissions on cfgtoollogs.

[grid@ol7-183-node2 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches

====================================================================================================
OPatch could not create/open history file for writing.
====================================================================================================

27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[grid@ol7-183-node2 ~]$

====================================================================================================

[root@ol7-183-node2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/grid
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/grid
GRID_HOME=/u01/app/18.0.0.0/grid
ORACLE_HOME=/u01/app/18.0.0.0/grid
Oracle Instance alive for sid "+ASM2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs

[root@ol7-183-node2 ~]# . /media/patch/hawk.env
The Oracle base has been changed from /u01/app/grid to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/18.0.0.0/dbhome_1
Oracle Instance alive for sid "hawk2"
[root@ol7-183-node2 ~]# chmod 775 -R $ORACLE_HOME/cfgtoollogs
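The chmod fix above boils down to making the cfgtoollogs tree group-writable so OPatch, run by the other role's user in the same group, can create its history file. A minimal stand-alone sketch, using a throwaway directory instead of a real Oracle home:

```shell
#!/bin/sh
# Sketch of the permission fix: make the cfgtoollogs tree group-writable
# (mode 775) so a role-separated user in the same group (e.g. oinstall)
# can create OPatch's history file there.
# DEMO_HOME stands in for $ORACLE_HOME and is used only for illustration.
DEMO_HOME=/tmp/demo_oracle_home
mkdir -p "$DEMO_HOME/cfgtoollogs/opatch"
chmod -R 775 "$DEMO_HOME/cfgtoollogs"
stat -c '%a' "$DEMO_HOME/cfgtoollogs/opatch"   # prints 775
```

The recursive chmod mirrors what was run on both nodes above; on a real system you would point DEMO_HOME at the grid or database home.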

====================================================================================================

[vagrant@ol7-183-node2 ~]$ sudo su - grid /media/patch/lspatches.sh
Last login: Sat May  4 18:16:38 -04 2019
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$ sudo su - oracle /media/patch/lspatches.sh
Last login: Sat May  4 18:15:18 -04 2019 on pts/0
+ /u01/app/18.0.0.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch version
OPatch Version: 12.2.0.1.14

OPatch succeeded.
+ /u01/app/oracle/product/18.0.0.0/dbhome_1/OPatch/opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

OPatch succeeded.
+ exit
[vagrant@ol7-183-node2 ~]$

I will update the post as I progress.

Update statement with outer join

Tom Kyte - Sat, 2019-05-04 08:06
Hi, From the sql script, I would like to understand the difference in the behavior between executions of statement 8 and statement 11. While statement 8 updates the flag_1 to null, statement 11 updates it to 2. What difference it makes when I u...
Categories: DBA Blogs

Pass parameter to where clause in bulk collect statement

Tom Kyte - Sat, 2019-05-04 08:06
Hi I have a basic procedure which bulk collects the results of a select statement into a table array. I then print out one line to show that it has worked.. Code (SQL): <code>CREATE OR REPLACE PROCEDURE use_var IS TYPE r_tab IS TABLE OF msf010%...
Categories: DBA Blogs

Oracle 19C : Exporting and Importing Broker Configuration

Yann Neuhaus - Sat, 2019-05-04 06:57

Up to Oracle 19c, there was no built-in way to back up the broker configuration. One workaround was to manually copy all executed commands into a file.
With Oracle 19c, it is now possible to export and import the broker configuration. The new EXPORT CONFIGURATION command saves the metadata contained in the broker configuration file to a text file, which can be very useful if I have to recreate my configuration.
In this blog post I have tested this command with the following configuration:

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 47 seconds ago)

DGMGRL>

We can see the syntax of the EXPORT command with the help command:

DGMGRL> help export

Export Data Guard Broker configuration to a file.

Syntax:

  EXPORT CONFIGURATION [TO <file name>];

DGMGRL>

Now let’s export the configuration

DGMGRL> EXPORT CONFIGURATION TO db19c_config.txt
Succeeded.
DGMGRL>

The file is generated in the trace files directory.

[oracle@primaserver trace]$ pwd
/u01/app/oracle/diag/rdbms/db19c_site1/DB19C/trace
[oracle@primaserver trace]$ ls -l db19c_config.txt
-rw-r--r--. 1 oracle oinstall 8469 May  4 12:25 db19c_config.txt
[oracle@primaserver trace]$
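Since the export lands among ordinary trace files, one option (a suggestion of mine, not something the post does) is to copy it out with a timestamp before ADR housekeeping can purge it. All paths in this sketch are illustrative stand-ins:

```shell
#!/bin/sh
# Copy a broker export file out of the trace directory with a timestamp,
# so trace-file purging cannot remove the only copy.
# TRACE_DIR / BACKUP_DIR are placeholders for the real ADR trace path
# and a safe backup location.
TRACE_DIR=/tmp/demo_trace
BACKUP_DIR=/tmp/demo_broker_backup
mkdir -p "$TRACE_DIR" "$BACKUP_DIR"
# Stand-in for the file produced by EXPORT CONFIGURATION:
printf 'broker metadata\n' > "$TRACE_DIR/db19c_config.txt"
ts=$(date +%Y%m%d_%H%M%S)
cp "$TRACE_DIR/db19c_config.txt" "$BACKUP_DIR/db19c_config_${ts}.txt"
ls "$BACKUP_DIR"   # one timestamped copy of the export
```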

Let’s remove the configuration

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXPERFORMANCE;
Succeeded.
DGMGRL> remove configuration;
Removed configuration

DGMGRL> show configuration
ORA-16532: Oracle Data Guard broker configuration does not exist

Configuration details cannot be determined by DGMGRL
DGMGRL>

Now let’s use the IMPORT command to rebuild the configuration

DGMGRL> IMPORT CONFIGURATION FROM db19c_config.txt
Succeeded. Run ENABLE CONFIGURATION to enable the imported configuration.
DGMGRL>

As we can see, the configuration is disabled after the import:

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
DISABLED

DGMGRL>

So let’s enable it

DGMGRL> ENABLE CONFIGURATION
Enabled.
DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
WARNING   (status updated 6 seconds ago)

DGMGRL>

The warning is because the protection mode had to be set to MaxPerformance before the configuration could be removed.

DGMGRL> EDIT CONFIGURATION SET PROTECTION MODE AS MAXAVAILABILITY ;
Succeeded.
DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

Now let’s run the EXPORT command without specifying a file name

DGMGRL> EXPORT CONFIGURATION
Succeeded.
DGMGRL>

We can see in the trace directory that a default name is generated for the file

DB19C_dmon_5912_brkmeta_1.trc

If we run the export command again

DGMGRL> EXPORT CONFIGURATION
Succeeded.
DGMGRL>

A second default file is created

DB19C_dmon_5912_brkmeta_2.trc

The article Oracle 19C : Exporting and Importing Broker Configuration first appeared on the dbi services blog.

firefox extension do not work anymore....

Dietrich Schroff - Sat, 2019-05-04 02:39
Today some of my extensions stopped working, and reinstalling them failed with "Download failed. Please check your connection.":

There is an article about this issue, which says that it is due to an expired certificate:
https://www.bleepingcomputer.com/news/software/firefox-addons-being-disabled-due-to-an-expired-certificate/

The workarounds described there do not work for my extensions, so I have to wait for Firefox to ship a solution (and a new certificate).

For all others with this problem: do not uninstall your extensions (like I did) - just wait...

Edit: Here the statement from mozilla: https://bugzilla.mozilla.org/show_bug.cgi?id=1548973
On Twitter I found this nice comment:

Edit: For latest infos read https://blog.mozilla.org/addons/2019/05/04/update-regarding-add-ons-in-firefox/

Working with ArrayDataProviders in JavaScript Functions in Visual Builder

Shay Shmeltzer - Fri, 2019-05-03 18:15

Storing data in an ArrayDataProvider (ADP), rather than a ServiceDataProvider (SDP), is useful whenever you want to further modify the data on the client side - for example, if you are looking to create updatable tables in your UI. A common follow-up question is "how can I do additional processing/updates on all the records I'm storing in the ADP?" - and that is what this blog post is about.

Since the records are now stored on the client side, you can access them through JavaScript. You can, for example, write a page level module function to loop over the set of records and modify them. For example in the video below I'm using this little function to raise the salary of all the employees:

  PageModule.prototype.arrayModifier = function(array) {
    console.log(array.length + " is what we got");
    for (var i = 0; i < array.length; i++) {
      array[i].salary = array[i].salary + 2;
      console.log("salary after " + array[i].salary);
    }
    return array;
  }

Into this function you'll pass the array of data from the ArrayDataProvider - you can do this in the parameter mapping of the function pointing to the data object like this:

ADP Data as Parameter

Once your function finished its processing of the data, you'll want to update the ArrayDataProvider back to reflect the changes you did. To do this you can use the action called "Fire Data Provider Event". This function has the option to do mutate events (update, insert, delete). You can read the doc about this and other actions parameters here. For the update you simply need to provide the array of updated data like this:

ADP Update Mutation Parameter

That's it. You can see all the pieces of this process working together in the video below:

 

Categories: Development

Oracle 19C : Dynamic Change of Fast-Start Failover Target

Yann Neuhaus - Fri, 2019-05-03 15:04

Oracle 19C is now available on premise, and it brings a lot of new features. One for the Data Guard broker is that we can now dynamically change the fast-start failover target to a specified member in the target list without disabling fast-start failover.
I have tested this new feature and describe it in this blog post.
I am using 3 servers with Oracle Linux.
The Data Guard setup is already built and the broker is already configured.
To enable fast-start failover there are some requirements. Note that flashback database must be enabled for all the databases.
First we set the redo transport (LogXptMode) to SYNC for the 3 databases:

DGMGRL> edit database 'DB19C_SITE1' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'DB19C_SITE2' set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'DB19C_SITE3' set property LogXptMode='SYNC';
Property "logxptmode" updated

Then we change the protection mode to MaxAvailability:

DGMGRL>  EDIT CONFIGURATION SET PROTECTION MODE AS MaxAvailability;
Succeeded.

Then we enable fast-start failover:

DGMGRL> enable fast_start failover;
Enabled in Zero Data Loss Mode.

Below is the status of the configuration; we can see that DB19C_SITE2 is the fast-start failover target:

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE2 - (*) Physical standby database
    DB19C_SITE3 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 55 seconds ago)

DGMGRL>

The observer status also shows DB19C_SITE2 as the active target:

DGMGRL> show observer

Configuration - db19c

  Primary:            DB19C_SITE1
  Active Target:      DB19C_SITE2

Observer "standserver2" - Master

  Host Name:                    standserver2
  Last Ping to Primary:         2 seconds ago
  Last Ping to Target:          4 seconds ago

DGMGRL>

For example, let's say we want to switch over to DB19C_SITE3:

DGMGRL> switchover to 'DB19C_SITE3';
Performing switchover NOW, please wait...
Error: ORA-16655: specified standby database not the current fast-start failover target standby

Failed.
Unable to switchover, primary database is still "DB19C_SITE1"
DGMGRL>

As we can see, we cannot, because the current fast-start failover target is DB19C_SITE2. We have to change it to DB19C_SITE3.
To make this change dynamically, we use the SET FAST_START FAILOVER TARGET command:

DGMGRL> SET FAST_START FAILOVER TARGET TO 'DB19C_SITE3';
Waiting for Fast-Start Failover target to change to "DB19C_SITE3"...
Succeeded.
DGMGRL>

We can query the broker to verify the change

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE1 - Primary database
    DB19C_SITE3 - (*) Physical standby database
    DB19C_SITE2 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 22 seconds ago)

DGMGRL>

And now I can switch over to DB19C_SITE3:

DGMGRL> switchover to 'DB19C_SITE3';
Performing switchover NOW, please wait...
New primary database "DB19C_SITE3" is opening...
Operation requires start up of instance "DB19C" on database "DB19C_SITE1"
Starting instance "DB19C"...
Connected to an idle instance.
ORACLE instance started.
Connected to "DB19C_SITE1"
Database mounted.
Database opened.
Connected to "DB19C_SITE1"
Switchover succeeded, new primary is "DB19C_SITE3"
DGMGRL>

And the new status of the configuration

DGMGRL> show configuration

Configuration - db19c

  Protection Mode: MaxAvailability
  Members:
  DB19C_SITE3 - Primary database
    DB19C_SITE1 - (*) Physical standby database
    DB19C_SITE2 - Physical standby database

Fast-Start Failover: Enabled in Zero Data Loss Mode

Configuration Status:
SUCCESS   (status updated 51 seconds ago)

DGMGRL>

The article Oracle 19C : Dynamic Change of Fast-Start Failover Target first appeared on the dbi services blog.

GRID Out Of Place (OOP) Rollback Disaster

Michael Dinh - Fri, 2019-05-03 11:45

Now I understand the hesitation to use Oracle's new features, especially anything "auto".

It may just be simpler and less stressful to perform the task manually, keeping control and knowing what is being executed and validated.

GRID Out Of Place (OOP) patching completed successfully for 18.6.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of inventory after patching.

+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Running cluvfy was successful too:

[oracle@racnode-dc1-1 ~]$ cluvfy stage -post crsinst -n racnode-dc1-1,racnode-dc1-2 -verbose

Post-check for cluster services setup was successful.

CVU operation performed:      stage -post crsinst
Date:                         Apr 30, 2019 8:17:49 PM
CVU home:                     /u01/18.3.0.0/grid_2/
User:                         oracle
[oracle@racnode-dc1-1 ~]$

GRID OOP Rollback Patching completed successfully for node1.

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-1 ~]#
[root@racnode-dc1-1 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-1 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:06:47 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-06-50AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-08-00AM.log
The id for this session is R47N

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-1
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1

OPatchauto session completed at Fri May  3 01:14:25 2019
Time taken to complete the session 7 minutes, 38 seconds

[root@racnode-dc1-1 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[root@racnode-dc1-1 ~]# /media/patch/findhomes.sh
   PID NAME                 ORACLE_HOME
 10486 asm_pmon_+asm1       /u01/18.3.0.0/grid/
 10833 apx_pmon_+apx1       /u01/18.3.0.0/grid/

[root@racnode-dc1-1 ~]# cat /etc/oratab
#Backup file is  /u01/app/oracle/12.1.0.1/db1/srvm/admin/oratab.bak.racnode-dc1-1 line added by Agent
#+ASM1:/u01/18.3.0.0/grid:N
hawk1:/u01/app/oracle/12.1.0.1/db1:N
hawk:/u01/app/oracle/12.1.0.1/db1:N             # line added by Agent
[root@racnode-dc1-1 ~]#

GRID OOP Rollback Patching completed successfully for node2.

[root@racnode-dc1-2 ~]# crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[root@racnode-dc1-2 ~]#
[root@racnode-dc1-2 ~]# echo $GRID_HOME
/u01/18.3.0.0/grid_2
[root@racnode-dc1-2 ~]# $GRID_HOME/OPatch/opatchauto rollback -switch-clone -logLevel FINEST

OPatchauto session is initiated at Fri May  3 01:21:39 2019

System initialization log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchautodb/systemconfig2019-05-03_01-21-41AM.log.

Session log file is /u01/18.3.0.0/grid_2/cfgtoollogs/opatchauto/opatchauto2019-05-03_01-22-46AM.log
The id for this session is 9RAT

Update nodelist in the inventory for oracle home /u01/18.3.0.0/grid.
Update nodelist in the inventory is completed for oracle home /u01/18.3.0.0/grid.


Bringing down CRS service on home /u01/18.3.0.0/grid
CRS service brought down successfully on home /u01/18.3.0.0/grid


Starting CRS service on home /u01/18.3.0.0/grid
CRS service started successfully on home /u01/18.3.0.0/grid


Confirm that all resources have been started from home /u01/18.3.0.0/grid.
All resources have been started successfully from home /u01/18.3.0.0/grid.


OPatchAuto successful.

--------------------------------Summary--------------------------------
Out of place patching clone home(s) summary
____________________________________________
Host : racnode-dc1-2
Actual Home : /u01/18.3.0.0/grid_2
Version:18.0.0.0.0
Clone Home Path : /u01/18.3.0.0/grid


Following homes are skipped during patching as patches are not applicable:

/u01/app/oracle/12.1.0.1/db1


OPatchauto session completed at Fri May  3 01:40:51 2019
Time taken to complete the session 19 minutes, 12 seconds
[root@racnode-dc1-2 ~]#

GRID OOP Rollback completed successfully for 18.5.0.0.0.

GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1

Here is an example of inventory after rollback.

+ /u01/18.3.0.0/grid/OPatch/opatch lspatches
28864607;ACFS RELEASE UPDATE 18.5.0.0.0 (28864607)
28864593;OCW RELEASE UPDATE 18.5.0.0.0 (28864593)
28822489;Database Release Update : 18.5.0.0.190115 (28822489)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

Validation shows the database is OFFLINE:

+ crsctl stat res -w '((TARGET != ONLINE) or (STATE != ONLINE)' -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.CRS.GHCHKPT.advm
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.crs.ghchkpt.acfs
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            STABLE
ora.helper
               OFFLINE OFFLINE      racnode-dc1-1            STABLE
               OFFLINE OFFLINE      racnode-dc1-2            IDLE,STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.hawk.db
      1        ONLINE  OFFLINE                               Instance Shutdown,STABLE
      2        ONLINE  OFFLINE                               Instance Shutdown,STABLE

Starting the database FAILED:

[oracle@racnode-dc1-2 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk2"

[oracle@racnode-dc1-2 ~]$ srvctl status database -d $ORACLE_UNQNAME -v
Instance hawk1 is not running on node racnode-dc1-1
Instance hawk2 is not running on node racnode-dc1-2

[oracle@racnode-dc1-2 ~]$ srvctl start database -d $ORACLE_UNQNAME
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
[oracle@racnode-dc1-2 ~]$


[oracle@racnode-dc1-1 ~]$ . /media/patch/hawk.env
The Oracle base has been set to /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance not alive for sid "hawk1"

[oracle@racnode-dc1-1 ~]$ srvctl start database -d hawk
PRCR-1079 : Failed to start resource ora.hawk.db
CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-2/crs/trace/crsd_oraagent_oracle.trc".

CRS-5017: The resource action "ora.hawk.db start" encountered the following error:
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/hawk/spfilehawk.ora'
ORA-17503: ksfdopn:10 Failed to open file +DATA/hawk/spfilehawk.ora
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54322 (dba)
. For details refer to "(:CLSN00107:)" in "/u01/app/oracle/diag/crs/racnode-dc1-1/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-2' failed
CRS-2632: There are no more servers to try to place resource 'ora.hawk.db' on that would satisfy its placement policy
CRS-2674: Start of 'ora.hawk.db' on 'racnode-dc1-1' failed
[oracle@racnode-dc1-1 ~]$

Incorrect permissions on the oracle binary in the Grid home were the cause.
Changing the permissions on $GRID_HOME/bin/oracle (chmod 6751 $GRID_HOME/bin/oracle), then stopping and restarting CRS, resolved the failure.

[oracle@racnode-dc1-1 dbs]$ ls -lhrt $ORACLE_HOME/bin/oracle
-rwsr-s--x 1 oracle dba 314M Apr 20 16:06 /u01/app/oracle/12.1.0.1/db1/bin/oracle

[oracle@racnode-dc1-1 dbs]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[oracle@racnode-dc1-1 dbs]$ cd /u01/18.3.0.0/grid/bin/
[oracle@racnode-dc1-1 bin]$ chmod 6751 oracle
[oracle@racnode-dc1-1 bin]$ ls -lhrt /u01/18.3.0.0/grid/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 20 19:21 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-1 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM1"
[root@racnode-dc1-1 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid
ORACLE_HOME=/u01/18.3.0.0/grid
Oracle Instance alive for sid "+ASM2"

[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwxr-x--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# chmod 6751 $GRID_HOME/bin/oracle
[root@racnode-dc1-2 ~]# ls -lhrt $GRID_HOME/bin/oracle
-rwsr-s--x 1 oracle oinstall 396M Apr 21 01:44 /u01/18.3.0.0/grid/bin/oracle

[root@racnode-dc1-2 ~]# crsctl stop crs

====================================================================================================

[root@racnode-dc1-2 ~]# crsctl start crs
[root@racnode-dc1-1 ~]# crsctl start crs
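Before restarting CRS, the expected permission bits can be verified with a small helper like the sketch below. The two home paths are taken from the environment above and will differ on other systems; `check_oracle_perms` is a hypothetical name, not part of any Oracle tooling.

```shell
#!/bin/sh
# Sketch: report whether the oracle binary under a given home carries the
# setuid/setgid bits (mode 6751) that the fix above applies.
check_oracle_perms() {
    bin="$1/bin/oracle"
    if [ ! -e "$bin" ]; then
        echo "$bin: not found"
        return 0
    fi
    mode=$(stat -c '%a' "$bin")        # prints e.g. 6751 when setuid+setgid are set
    if [ "$mode" = "6751" ]; then
        echo "$bin: OK ($mode)"
    else
        echo "$bin: mode $mode - fix with: chmod 6751 $bin"
    fi
}

check_oracle_perms /u01/18.3.0.0/grid             # Grid home
check_oracle_perms /u01/app/oracle/12.1.0.1/db1   # DB home
```

Running it on both nodes after the chmod should report OK for both homes.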

Reference: RAC Database Can’t Start: ORA-01565, ORA-17503: ksfdopn:10 Failed to open file +DATA/BPBL/spfileBPBL.ora (Doc ID 2316088.1)

Occurence

Jonathan Lewis - Fri, 2019-05-03 07:34

Before you comment – I do know that the title has a spelling mistake in it. That’s because the Oracle code uses exactly this spelling in one of the little-used features of tracing.

I wrote a note a few years ago about enabling sql_trace (and other tracing events) system-wide for a single SQL statement. In that note I suggested that you could enable tracing for a few minutes then disable it, to minimise the impact on the system while still capturing a reasonable number of statement traces. A recent ODC thread, however, described a problem where a particular statement executed in the order of 1,000,000 times per hour – which is getting on for 300 executions per second – and you probably don’t want to leave a system-wide trace running for any length of time when things are operating at that rate. Fortunately we can refine the method with the occurence filter to capture a small and limited number of executions, spread over as many sessions as are running. Here’s an example of the syntax:

rem
rem     Script: trace_occur.sql
rem     Author: Jonathan Lewis
rem     Dated:  April 2019
rem

define m_sql_id = 'gu1s28n6y73dg'
define m_sql_id = 'fu0ftxk6jcyya'

alter system set events 
        '
        sql_trace[SQL:&m_sql_id] 
        {occurence: start_after 101, end_after 496}
        bind=true,
        wait=true
        '
;

pause   Run the test script here and press return when it ends

alter system set events 
        '
        sql_trace[SQL:&m_sql_id]
        off
        '
;

All I’ve done, compared to the earlier note, is include in curly brackets, just after identifying the SQL ID, the text: “{occurence: start_after 101, end_after 496}”. Roughly speaking this means that every session will start counting calls to the given statement and on the hundred and first it will start dumping the trace file, and for a total of 496 calls it will continue dumping the trace file. So it’s possible to make sure that a session does trace but doesn’t dump a huge volume of trace data. Of course I do still execute a call to switch tracing off for the statement otherwise every session that subsequently logs on will still start tracing and dump a few executions into their trace file.

There is, unfortunately, a catch. I don’t know how Oracle is counting for the start_after/end_after values – but it’s not executions of the statement, and it varies with working environment, and it changes as the trace is enabled, and it changes with version, and is probably dependent on the session_cached_cursors parameter, and it behaves differently when interacting with the PL/SQL cursor cache. It is perhaps easiest to show an example.

I have a table called test_lobs (id, bytes …) with a unique index on (id) for this test.


create table test_lobs (
        id        number(8,0) primary key,
        bytes     number(8,0)
);

insert into test_lobs values(-1,999);
commit;

execute dbms_stats.gather_table_stats(user,'test_lobs')

And one of my test scripts is as follows:

rem
rem     This generates a statement with SQL_ID = gu1s28n6y73dg
rem

declare
        m_result number;
begin
        for i in 1..1000 loop
                begin
                        select bytes into m_result from test_lobs where id = i;
                exception
                        when others then null;
                end;
        end loop;
end;
/

Running 18.3 I start the trace script from one session, then start the test script from another session. As it stands the SQL statement embedded in the PL/SQL loop will have the SQL_ID I am tracing, so the second session will start dumping a trace file. The big question is: which executions of the statement will it dump? Since I’ve enabled bind variable dumping and the bound value is a simple loop counter it will be easy (!) to find the answer to this question.

To stabilise the results I did the following:

  • Session 1: Create the table.
  • Session 1: Start the trace event
  • Session 2: Connect to the database and run the test
  • Session 1: End the trace event
  • Session 1: Start the trace event again
  • Session 2: Connect to the database again and run the test a second time
  • Session 1: End the trace event

I’ll explain the need for looking at the results of the second cycle in a moment.

The trace file I produced started with the first three lines below, then repeated the 10-line fragment between the separators a number of times:


PARSING IN CURSOR #140126713239784 len=43 dep=1 uid=104 oct=3 lid=104 tim=168304257545 hv=233016751 ad='63b8f0c0' sqlid='gu1s28n6y73dg'
SELECT BYTES FROM TEST_LOBS WHERE ID = :B1
END OF STMT

====================================================================================================
BINDS #140126713239784:

 Bind#0
  oacdty=02 mxl=22(22) mxlc=00 mal=00 scl=00 pre=00
  oacflg=03 fl2=1206001 frm=00 csi=00 siz=24 off=0
  kxsbbbfp=7f71cb0c67c0  bln=22  avl=02  flg=05
  value=50
EXEC #140126713239784:c=0,e=57,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=1,plh=2054437130,tim=168304262695
FETCH #140126713239784:c=0,e=3,p=0,cr=1,cu=0,mis=0,r=0,dep=1,og=1,plh=2054437130,tim=168304262729
CLOSE #140126713239784:c=0,e=1,dep=1,type=3,tim=168304262772
====================================================================================================

Notice the bind value report. A key feature that we are interested in is the first “value=” reported and the last “value=”. In my case the low/high were 26 and 87, for a total of 62 executions. A little arithmetic (and a few corroborating checks) highlight the significance of the following:

  • I started tracing after the 25th execution, and 25 * 4 + 1 = 101, my start_after value.
  • I traced 62 executions and 62 * 8 = 496, my end_after value.

Oracle is counting something whenever it hits the SQL_ID we’ve specified but (for SQL inside a PL/SQL loop) it’s counting something which happens 4 times for each execution; then when it hits the start_after and starts tracing whatever it counts happens twice as often each time around the loop.
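The two multipliers inferred above can be sanity-checked with a couple of lines of shell arithmetic. (The 4× and 8× factors are the values observed for this PL/SQL loop on 18.3, not documented constants.)

```shell
# Inferred counting model for SQL inside a PL/SQL loop on 18.3:
# each execution bumps the counter 4 times before tracing starts,
# and 8 times once tracing is active.
start_after=101
end_after=496
skipped=$(( (start_after - 1) / 4 ))   # executions skipped before tracing starts
traced=$(( end_after / 8 ))            # executions actually dumped
echo "skipped=$skipped traced=$traced"
# skipped=25 matches the first value=26; traced=62 matches the last value=87
```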

My general response to this type of detail is: “Argh!!!” – by the way. Maybe a call to Frits or Stefan asking them to trace C functions is in order.

By this time you may have guessed why I examined the trace file from the second run of the test. The counting seems to include counts of database calls that take place in the recursive SQL needed to optimise / hard parse the query – anything that relates to the SQL_ID we specify may be included in the counts. So on the first test I got a load of garbage in the trace file then saw a partial dump of the trace data for value=2 and the trace file ended partway through the trace data for value=17.

As a further test, I had a go with pure SQL calls in a test script:


set serveroutput off
variable b1 number;

exec :b1 := 100
select bytes from test_lobs where id = :b1;

exec :b1 := 101
select bytes from test_lobs where id = :b1;

...

exec :b1 := 129
select bytes from test_lobs where id = :b1;

After getting a stable result, versions 12.1.0.2 and 18.3.0.0 behaved differently:

  • 18.3.0.0 – counted 5 for every execution, so start_after = 16 skipped the first 3 executions and started tracing for value = 103
  • 12.1.0.2 – counted 7 for the first execution and 5 thereafter, so start_after=8 skipped one execution, start_after=13 skipped two and so on.
  • Both versions counted 10 for every execution while tracing was enabled, so end_after = 30 traced 3 executions in both cases.

It’s possible, of course, that some differences in the way session_cached_cursors works would account for the small difference – but I suspect I could have spent a couple of days trying to sort out minor variations due to slight parameter and implementation changes between versions. It’s also possible that some of my guesses are wrong and there is more method to the madness than I have spotted.

Conclusion

It is possible to enable tracing system-wide for a limited number of executions per session of a given statement; however the number of executions that might take place before tracing starts and the number of executions actually traced depends on a variety of details of which some may be outside your control.

As a baseline, it looks as if the number of executions before tracing starts is going to be about one-fifth of the value you set for start_after, and the number of executions traced will be about one-tenth of the end_after; however recursive SQL (perhaps even including dynamic sampling) can get caught up in the counts, potentially reducing the number of executions of the target statement that you see.

Wireguard: Installation & configuration

Dietrich Schroff - Fri, 2019-05-03 04:31
To install WireGuard I followed these instructions.
First step is to add the repository to your machine:

root@zerberus:~# add-apt-repository ppa:wireguard/wireguard
 WireGuard is a novel VPN that runs inside the Linux Kernel. This is the Ubuntu packaging for WireGuard. More info may be found at its website, listed below.

More info: https://www.wireguard.com/
Packages: wireguard wireguard-tools wireguard-dkms

Install with: $ apt install wireguard

For help, please contact
 More information: https://launchpad.net/~wireguard/+archive/ubuntu/wireguard
Press [ENTER] to continue or Ctrl-C to cancel adding the repository.

Hit:1 http://de.archive.ubuntu.com/ubuntu bionic InRelease
Hit:2 http://ppa.launchpad.net/wireguard/wireguard/ubuntu bionic InRelease
Hit:3 https://packages.microsoft.com/repos/azure-cli bionic InRelease
Hit:4 http://ppa.launchpad.net/yannubuntu/boot-repair/ubuntu bionic InRelease
Reading package lists... Done
Then the installation:

root@zerberus:~# apt install wireguard
Reading package lists... Done
Building dependency tree.
Reading state information... Done
The following packages were automatically installed and are no longer required:
  btrfs-tools geoip-database-extra libcryptui0a libjs-openlayers seahorse-daemon
Use 'apt autoremove' to remove them.
The following additional packages will be installed:
  wireguard-dkms wireguard-tools
The following NEW packages will be installed:
  wireguard wireguard-dkms wireguard-tools
0 upgraded, 3 newly installed, 0 to remove and 1 not upgraded.
Need to get 640 kB of archives.
After this operation, 4,814 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://ppa.launchpad.net/wireguard/wireguard/ubuntu bionic/main amd64 wireguard-dkms all 0.0.20190123-wg1~bionic [551 kB]
Get:2 http://ppa.launchpad.net/wireguard/wireguard/ubuntu bionic/main amd64 wireguard-tools amd64 0.0.20190123-wg1~bionic [85.2 kB]
Get:3 http://ppa.launchpad.net/wireguard/wireguard/ubuntu bionic/main amd64 wireguard all 0.0.20190123-wg1~bionic [4,136 B]
Fetched 640 kB in 0s (1,307 kB/s).
Selecting previously unselected package wireguard-dkms.
(Reading database ... 471444 files and directories currently installed.)
Preparing to unpack .../wireguard-dkms_0.0.20190123-wg1~bionic_all.deb ...
Unpacking wireguard-dkms (0.0.20190123-wg1~bionic) ...
Selecting previously unselected package wireguard-tools.
Preparing to unpack .../wireguard-tools_0.0.20190123-wg1~bionic_amd64.deb ...
Unpacking wireguard-tools (0.0.20190123-wg1~bionic) ...
Selecting previously unselected package wireguard.
Preparing to unpack .../wireguard_0.0.20190123-wg1~bionic_all.deb ...
Unpacking wireguard (0.0.20190123-wg1~bionic) ...
Setting up wireguard-dkms (0.0.20190123-wg1~bionic) ...
Loading new wireguard-0.0.20190123 DKMS files...
Building for 4.17.0-rc3
Building initial module for 4.17.0-rc3
Secure Boot not enabled on this system.
Done.

wireguard:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.17.0-rc3/updates/dkms/

depmod.....

DKMS: install completed.
Setting up wireguard-tools (0.0.20190123-wg1~bionic) ...
Setting up wireguard (0.0.20190123-wg1~bionic) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
root@zerberus:~#
And then the configuration:
(I extracted the steps from the video here)
root@zerberus:~# wg genkey > /root/private.wireguard
Warning: writing to world accessible file.
Consider setting the umask to 077 and trying again.

root@zerberus:~# ls -l /root/
total 4
-rw-r--r-- 1 root root 45 Apr 27 18:55 private.wireguard

root@zerberus:~# cat /root/private.wireguard 
XXXYYYY....=
root@zerberus:~# wg pubkey  < /root/private.wireguard
ZZZAAAA...=

root@zerberus:~# ip link add wg0 type wireguard
root@zerberus:~# ip addr add 10.0.0.1/24 dev wg0
root@zerberus:~# wg set wg0 private-key /root/private.wireguard
root@zerberus:~# ip link set wg0 up

root@zerberus:~# ifconfig
....
wg0: flags=209<UP,POINTOPOINT,RUNNING,NOARP>  mtu 1420
        inet 10.0.0.1  netmask 255.255.255.0  destination 10.0.0.1
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 1000  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
All the steps up to here have to be done on both servers.
My setup was
  • server 1: "public ip" 192.168.178.39, vpn ip 10.0.0.1
    public wireguard key XXX
    wireguard port: 46932 (how to get this number: just read on)
  • server 2: "public ip" 192.168.178.54, vpn ip 10.0.0.2
    public wireguard key YYY
    wireguard port: 35891
To get the public keys and the port number, use this command:
root@zerberus:~# wg

interface: wg0
  public key: XXX
  private key: (hidden)
  listening port: 46932
Then use the ip, port and public key from the secondary server:
root@zerberus:~# wg set wg0 peer 23P8GMzwpnpaw38wEERXev1jJIQlkhB/lZB35wwXVD4= allowed-ips 10.0.0.2/32 endpoint 192.168.178.54:35891
Do the same on the secondary machine with the IP, port and public key from the primary server.
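The two peer commands mirror each other, which the sketch below makes explicit. The `peer_cmd` helper is hypothetical, and XXX/YYY stand in for the public keys listed above (which were not shown in full):

```shell
#!/bin/sh
# Hypothetical helper: build the symmetric "wg set" peer command for one side.
# Each server adds the OTHER server's public key, VPN address and endpoint.
peer_cmd() {   # args: peer_public_key peer_vpn_ip peer_public_ip peer_port
    echo "wg set wg0 peer $1 allowed-ips $2/32 endpoint $3:$4"
}

peer_cmd YYY 10.0.0.2 192.168.178.54 35891   # run the printed command on server 1
peer_cmd XXX 10.0.0.1 192.168.178.39 46932   # run the printed command on server 2
```

The /32 in allowed-ips restricts each peer to its single VPN address, which is what the setup above uses for a two-node tunnel.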

And after that you can check with:
root@zerberus:~# ping 10.0.0.2

PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=47.0 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=63.8 ms
^C
--- 10.0.0.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 47.018/55.442/63.866/8.424 ms
I think this was much easier than setting up IPSec or OpenVPN.


CDB Fleet in Oracle Database 18c

Oracle in Action - Fri, 2019-05-03 03:21


Oracle Database 18c introduces a new CDB Fleet feature which allows many CDBs to be managed as one. A CDB fleet is a collection of CDBs and hosted PDBs that you can monitor and manage as one logical CDB from a centralized location.

There are two possible roles within a CDB Fleet:

  • Lead CDB: Only one CDB in the fleet may be designated as the lead CDB. The lead CDB is the central location for monitoring and managing all the CDBs in the fleet.
  • Member CDB: The CDBs registered with a lead CDB are called member CDBs. There can be one or more member CDBs in a CDB fleet.

For every member CDB, proxy PDBs for the member CDB and its PDB(s) are automatically created in the lead CDB. Consequently, all the member CDBs and their PDBs are "visible" in the lead CDB. This enables management and monitoring of the entire estate of PDBs in the fleet, physically distributed across various CDBs, from the lead CDB.
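Based on the 18c documentation linked below, the setup is a two-step DDL exchange; the sketch below is not from this post, and the link name, credentials and service name are placeholders:

```sql
-- On the CDB chosen as the lead (run in CDB$ROOT):
ALTER DATABASE SET lead_cdb = TRUE;

-- On each member CDB: create a database link pointing at the lead,
-- then register the member against it.
CREATE DATABASE LINK lead_link
  CONNECT TO c##fleet_admin IDENTIFIED BY fleet_pwd USING 'lead_cdb_service';
ALTER DATABASE SET lead_cdb_uri = 'dblink:LEAD_LINK';

-- Verify the fleet properties:
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name IN ('LEAD_CDB', 'LEAD_CDB_URI');
```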

Advantages

Reporting, monitoring, and management of the entire CDB fleet through a single interface:

  • Provides massive scalability of the underlying infrastructure
  • Reduces capital and operational costs
  • Provides greater efficiencies to the business.

Related Links:

‘STUB’ Status In DBA_PDBS

References:

https://docs.oracle.com/cd/E96517_01/newft/database-new-features-guide.pdf

https://docs.oracle.com/en/database/oracle/oracle-database/18/multi/administering-cdb-fleet.html#GUID-5951E81B-4351-4FA4-9F7B-52D2FEB0428D




Copyright © ORACLE IN ACTION [CDB Fleet in Oracle Database 18c], All Right Reserved. 2019.

The post CDB Fleet in Oracle Database 18c appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

View SYS permissions

Tom Kyte - Thu, 2019-05-02 21:46
A vendor would like me to run a script against a client's database. The script requires 'SYS' user access and makes the following grants: <code>grant select on sys.gv_$mystat to public;</code> <code>grant select on sys.gv_$vsession to public;</code>...
Categories: DBA Blogs

Ways to Name the Output Columns of a Pipelined Table Function

Tom Kyte - Thu, 2019-05-02 21:46
How many ways are there to name the output columns of a pipelined table function? My understanding is the only way is via a RECORD type like this: <code>CREATE PACKAGE blah_blah_blah IS TYPE abc_rec IS RECORD (ID NUMBER, ...
Categories: DBA Blogs
