Feed aggregator

Split Partition

Jonathan Lewis - Fri, 2019-08-09 07:02

This is a little case study on “pre-emptive trouble-shooting”, based on a recent question on the ODC database forum asking about splitting a range-based partition into two at a value above the current highest value recorded in a max_value partition.

The general principle for splitting (range-based) partitions is that if the split point is above the current high value Oracle will recognise that it can simply rename the existing partition and create a new, empty partition, leaving all the indexes (including the global and globally partitioned indexes) in a valid state. There are, however, three little wrinkles to this particular request:

  • first is that the question relates to a system running 10g
  • second is that there is a LOB column in the table
  • third is that the target is to have the new (higher value) partition(s) in a different tablespace

It’s quite possible that 10g won’t have all the partition-maintenance capabilities of newer versions; if anything is going to go wrong, LOBs are always a highly dependable point of failure; and since the examples in the manuals tend to be very simple, any attempt to introduce complications like tablespace specification may cause problems.

So, before you risk doing the job in production, what are you going to test?

In Oracle terms we want to check the following:

  • Will Oracle have silently copied/rebuilt some segments rather than simply renaming the old segments and creating new, empty segments?
  • Will the segments end up where we want them?
  • Will all the indexes stay valid?

To get things going, the OP had supplied a framework for the table and told us about two indexes, and had then given us two possible SQL statements to do the split, stating that he (or she) had tested them and they both worked. Here’s the SQL (with a few tweaks) that creates the table and indexes. I’ve also added some data – inserting one row into each partition.

rem
rem     Script:         split_pt_lob.sql
rem     Author:         Jonathan Lewis
rem     Dated:          July 2019
rem
rem     Last tested 
rem             12.2.0.1
rem             10.2.0.5
rem

define m_old_ts = 'test_8k'
define m_new_ts = 'assm_2'

drop table part_tab purge;

create table part_tab(
  pt_id            NUMBER,
  pt_name          VARCHAR2(30),
  pt_date          DATE default SYSDATE,
  pt_lob           CLOB,
  pt_status        VARCHAR2(2)
)
tablespace &m_old_ts
lob(pt_lob) store as (tablespace &m_old_ts)
partition by range (pt_date)
(
  partition PRT1 values less than (TO_DATE('2012-01-01', 'YYYY-MM-DD')),
  partition PRT2 values less than (TO_DATE('2014-09-01', 'YYYY-MM-DD')),
  partition PRT_MAX values less than (MAXVALUE)
)
/

alter table part_tab
add constraint pt_pk primary key(pt_id)
/

create index pt_i1 on part_tab(pt_date, pt_name) local
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    1,'one',to_date('01-Jan-2011'),rpad('x',4000),'X'
)
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    2,'two',to_date('01-Jan-2013'),rpad('x',4000),'X'
)
/

insert into part_tab(
    pt_id, pt_name, pt_date, pt_lob, pt_status
)
values(
    3,'three',to_date('01-Jan-2015'),rpad('x',4000),'X'
)
/

commit;

execute dbms_stats.gather_table_stats(null,'part_tab',cascade=>true,granularity=>'ALL')

We were told that

The table has
– Primary Key on pt_id column with unique index (1 Different table has FK constraint that refers to this PK)
– Composite index on pt_date and pt_name columns

This is why I’ve added a primary key constraint (which will generate a global index) and created an index on (pt_date,pt_name) – which I’ve created as a local index since it contains the partitioning column.

The description of the requirement was:

  • The Task is to split partition(PRT_MAX) to a different tablespace
  • New partition(s) won’t have data at the moment of creation

And the two “tested” strategies were:

alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
        PARTITION PRT3    tablespace &m_old_ts,
        PARTITION PRT_MAX tablespace &m_new_ts
);

alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
        PARTITION PRT3    tablespace &m_old_ts LOB (pt_lob) store as (TABLESPACE &m_old_ts), 
        PARTITION PRT_MAX tablespace &m_new_ts LOB (pt_lob) store as (TABLESPACE &m_new_ts)
)
;
 

If we’re going to test these strategies properly we will need queries similar to the following:


break on object_name skip 1
select object_name, subobject_name, object_id, data_object_id  from user_objects order by object_name, subobject_name;

break on index_name skip 1
select index_name, status from user_indexes;
select index_name, partition_name, status from user_ind_partitions order by index_name, partition_name;

break on segment_name skip 1
select segment_name, partition_name, tablespace_name from user_segments order by segment_name, partition_name;

First – what are the object_id and data_object_id for each object before and after the split? Have we created new “data objects” while splitting, or has an existing data (physical) object simply changed its name?

Secondly – are there any indexes or index partitions that are no longer valid?

Finally – which tablespaces do the physical objects reside in?

On a test run of the first, simpler, split statement here are the before and after results for the object_id and data_object_id, followed by the post-split results for index and segment details:


Before Split
============

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23677          23677
                                 PRT2                        23678          23678
                                 PRT_MAX                     23679          23679
                                                             23676

PT_I1                            PRT1                        23690          23690
                                 PRT2                        23691          23691
                                 PRT_MAX                     23692          23692
                                                             23689

PT_PK                                                        23688          23688

SYS_IL0000023676C00004$$         SYS_IL_P252                 23685          23685
                                 SYS_IL_P253                 23686          23686
                                 SYS_IL_P254                 23687          23687

SYS_LOB0000023676C00004$$        SYS_LOB_P249                23681          23681
                                 SYS_LOB_P250                23682          23682
                                 SYS_LOB_P251                23683          23683
                                                             23680          23680

After split
===========

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23677          23677
                                 PRT2                        23678          23678
                                 PRT3                        23693          23679
                                 PRT_MAX                     23679          23694
                                                             23676

PT_I1                            PRT1                        23690          23690
                                 PRT2                        23691          23691
                                 PRT3                        23700          23692
                                 PRT_MAX                     23699          23699
                                                             23689

PT_PK                                                        23688          23688

SYS_IL0000023676C00004$$         SYS_IL_P252                 23685          23685
                                 SYS_IL_P253                 23686          23686
                                 SYS_IL_P257                 23697          23687
                                 SYS_IL_P258                 23698          23698

SYS_LOB0000023676C00004$$        SYS_LOB_P249                23681          23681
                                 SYS_LOB_P250                23682          23682
                                 SYS_LOB_P255                23695          23683
                                 SYS_LOB_P256                23696          23696
                                                             23680          23680


INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            VALID
SYS_IL0000023676C00004$$         N/A


INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   USABLE
                                 PRT_MAX                USABLE

SYS_IL0000023676C00004$$         SYS_IL_P252            USABLE
                                 SYS_IL_P253            USABLE
                                 SYS_IL_P257            USABLE
                                 SYS_IL_P258            USABLE


SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023676C00004$$  SYS_IL_P252            TEST_8K
                          SYS_IL_P253            TEST_8K
                          SYS_IL_P257            TEST_8K
                          SYS_IL_P258            TEST_8K

SYS_LOB0000023676C00004$$ SYS_LOB_P249           TEST_8K
                          SYS_LOB_P250           TEST_8K
                          SYS_LOB_P255           TEST_8K
                          SYS_LOB_P256           TEST_8K

Before the split, partition PRT_MAX – with 4 segments: table, index, LOB, LOBINDEX – has object_id = data_object_id, with the values 23679 (table), 23692 (index), 23683 (LOB), 23687 (LOBINDEX); after the split these reappear as the data_object_id values for partition PRT3 (though the object_id values are larger than the data_object_id values) – so we infer that Oracle has simply renamed the various PRT_MAX objects to PRT3 and created new, empty PRT_MAX objects.

We can also see that all the indexes (including the global primary key index) have remained valid. We also note that the data_object_id of the primary key index has not changed, so Oracle didn’t have to rebuild it to ensure that it stayed valid.

There is a problem, though: the LOB and LOBINDEX segments for the new PRT_MAX partition are not in the desired target tablespace. So we need to check the effects of the second version of the split command, where we add the specification of the LOB tablespaces. This is what we get – after rerunning the entire test script from scratch:


OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23727          23727
                                 PRT2                        23728          23728
                                 PRT_MAX                     23729          23729
                                                             23726

PT_I1                            PRT1                        23740          23740
                                 PRT2                        23741          23741
                                 PRT_MAX                     23742          23742
                                                             23739

PT_PK                                                        23738          23738

SYS_IL0000023726C00004$$         SYS_IL_P272                 23735          23735
                                 SYS_IL_P273                 23736          23736
                                 SYS_IL_P274                 23737          23737

SYS_LOB0000023726C00004$$        SYS_LOB_P269                23731          23731
                                 SYS_LOB_P270                23732          23732
                                 SYS_LOB_P271                23733          23733
                                                             23730          23730


OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23727          23727
                                 PRT2                        23728          23728
                                 PRT3                        23743          23743
                                 PRT_MAX                     23729          23744
                                                             23726

PT_I1                            PRT1                        23740          23740
                                 PRT2                        23741          23741
                                 PRT3                        23750          23750
                                 PRT_MAX                     23749          23749
                                                             23739

PT_PK                                                        23738          23738

SYS_IL0000023726C00004$$         SYS_IL_P272                 23735          23735
                                 SYS_IL_P273                 23736          23736
                                 SYS_IL_P277                 23747          23747
                                 SYS_IL_P278                 23748          23748

SYS_LOB0000023726C00004$$        SYS_LOB_P269                23731          23731
                                 SYS_LOB_P270                23732          23732
                                 SYS_LOB_P275                23745          23745
                                 SYS_LOB_P276                23746          23746
                                                             23730          23730

INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            UNUSABLE
SYS_IL0000023726C00004$$         N/A

INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   UNUSABLE
                                 PRT_MAX                USABLE

SYS_IL0000023726C00004$$         SYS_IL_P272            USABLE
                                 SYS_IL_P273            USABLE
                                 SYS_IL_P277            USABLE
                                 SYS_IL_P278            USABLE

SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023726C00004$$  SYS_IL_P272            TEST_8K
                          SYS_IL_P273            TEST_8K
                          SYS_IL_P277            TEST_8K
                          SYS_IL_P278            ASSM_2

SYS_LOB0000023726C00004$$ SYS_LOB_P269           TEST_8K
                          SYS_LOB_P270           TEST_8K
                          SYS_LOB_P275           TEST_8K
                          SYS_LOB_P276           ASSM_2


Before looking at the more complex details, the first thing that leaps out to hit the eye is the word UNUSABLE – which appears as the status of the (global) primary key index and of the PRT3 partition of the local index. The (empty) PRT_MAX LOB and LOBINDEX partitions are where we wanted them, but by specifying the location we seem to have broken two index segments that will need to be rebuilt.

It gets worse, because if we check the data_object_id of the original PRT_MAX partition (23729) and its matching index partition (23742) we see that they don’t correspond to the (new) PRT3 data_object_id values, which are 23743 and 23750 respectively – the data has been physically copied from one data object to another completely unnecessarily. Moreover the same applies to the LOB and LOBINDEX segments – the data object ids for the PRT_MAX LOB and LOBINDEX partitions were 23733 and 23737, while the new PRT3 data object ids are 23745 and 23747.

If you did a test with only a tiny data set you might not notice the implicit threat that these changes in data_object_id tell you about – you’re going to be copying the whole LOB segment when you don’t need to.
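
If this second form of the split had been used, the two unusable index segments would then have to be rebuilt before they could be used again. A minimal sketch of the clean-up, using the index and partition names from the test case above (on a busy system you might prefer to add the online option), would be:

alter index pt_pk rebuild;

alter index pt_i1 rebuild partition prt3;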

Happy Ending (maybe)

A quick check with 12.2 suggested that Oracle had got much better at detecting that it didn’t need to copy LOB data and invalidate indexes with the second form of the code; but the OP was on 10g – so that’s not much help. However, it was the thought that Oracle might misbehave when you specified tablespaces that made me run up this test – in particular I had wondered if specifying a tablespace for the partition that would end up holding the existing data might trigger an accident. So here’s a third variant of the split statement I tested, with the results on the indexes, segments, and data objects. Note that I specify the tablespace only for the new (empty) segments:


alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
    PARTITION PRT3,
    PARTITION PRT_MAX tablespace &m_new_ts  LOB (pt_lob) store as (TABLESPACE &m_new_ts)
)
/

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23752          23752
                                 PRT2                        23753          23753
                                 PRT_MAX                     23754          23754
                                                             23751

PT_I1                            PRT1                        23765          23765
                                 PRT2                        23766          23766
                                 PRT_MAX                     23767          23767
                                                             23764

PT_PK                                                        23763          23763

SYS_IL0000023751C00004$$         SYS_IL_P282                 23760          23760
                                 SYS_IL_P283                 23761          23761
                                 SYS_IL_P284                 23762          23762

SYS_LOB0000023751C00004$$        SYS_LOB_P279                23756          23756
                                 SYS_LOB_P280                23757          23757
                                 SYS_LOB_P281                23758          23758
                                                             23755          23755

OBJECT_NAME                      SUBOBJECT_NAME          OBJECT_ID DATA_OBJECT_ID
-------------------------------- ---------------------- ---------- --------------
PART_TAB                         PRT1                        23752          23752
                                 PRT2                        23753          23753
                                 PRT3                        23768          23754
                                 PRT_MAX                     23754          23769
                                                             23751

PT_I1                            PRT1                        23765          23765
                                 PRT2                        23766          23766
                                 PRT3                        23775          23767
                                 PRT_MAX                     23774          23774
                                                             23764

PT_PK                                                        23763          23763

SYS_IL0000023751C00004$$         SYS_IL_P282                 23760          23760
                                 SYS_IL_P283                 23761          23761
                                 SYS_IL_P287                 23772          23762
                                 SYS_IL_P288                 23773          23773

SYS_LOB0000023751C00004$$        SYS_LOB_P279                23756          23756
                                 SYS_LOB_P280                23757          23757
                                 SYS_LOB_P285                23770          23758
                                 SYS_LOB_P286                23771          23771
                                                             23755          23755
INDEX_NAME                       STATUS
-------------------------------- --------
PT_I1                            N/A
PT_PK                            VALID
SYS_IL0000023751C00004$$         N/A

INDEX_NAME                       PARTITION_NAME         STATUS
-------------------------------- ---------------------- --------
PT_I1                            PRT1                   USABLE
                                 PRT2                   USABLE
                                 PRT3                   USABLE
                                 PRT_MAX                USABLE

SYS_IL0000023751C00004$$         SYS_IL_P282            USABLE
                                 SYS_IL_P283            USABLE
                                 SYS_IL_P287            USABLE
                                 SYS_IL_P288            USABLE

SEGMENT_NAME              PARTITION_NAME         TABLESPACE_NAME
------------------------- ---------------------- ------------------------------
PART_TAB                  PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_I1                     PRT1                   TEST_8K
                          PRT2                   TEST_8K
                          PRT3                   TEST_8K
                          PRT_MAX                ASSM_2

PT_PK                                            TEST_8K

SYS_IL0000023751C00004$$  SYS_IL_P282            TEST_8K
                          SYS_IL_P283            TEST_8K
                          SYS_IL_P287            TEST_8K
                          SYS_IL_P288            ASSM_2

SYS_LOB0000023751C00004$$ SYS_LOB_P279           TEST_8K
                          SYS_LOB_P280           TEST_8K
                          SYS_LOB_P285           TEST_8K
                          SYS_LOB_P286           ASSM_2

All the indexes and index partitions stay valid; the new empty segments all end up in the target tablespace; and all the data object ids for the old PRT_MAX partitions become the data object ids for the new PRT3 partitions. Everything we want, and no physical rebuilding of any data sets.

Moral:

When you’re testing, especially when you’re doing a small test while anticipating a big data set, don’t rely on the clock; check the data dictionary (and trace files, if necessary) carefully to find out what activity actually took place.
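
If, for example, you want to see how much physical work a split really did, one option (just a sketch, reusing the third form of the split shown above) is to wrap the DDL in extended SQL trace and then scan the resulting trace file for reads and writes against the LOB segments:

alter session set tracefile_identifier = 'split_test';
alter session set events '10046 trace name context forever, level 8';

alter table part_tab split partition PRT_MAX at(TO_DATE('2019-08-01', 'YYYY-MM-DD')) into (
    PARTITION PRT3,
    PARTITION PRT_MAX tablespace &m_new_ts LOB (pt_lob) store as (TABLESPACE &m_new_ts)
);

alter session set events '10046 trace name context off';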

Footnote:

It’s possible that there are ways to fiddle around with the various default attributes of the partitioned table to get the same effect – but since 12.2 is much better behaved anyway there’s no point in me spending more time looking for alternative solutions to a 10g problem.

 

Rolling out patched 19c home using gold image

Bobby Durrett's DBA Blog - Thu, 2019-08-08 18:49

For Oracle versions 11.2 through 18 I have been applying quarterly patches to a test database and then gathering the Oracle home into a tar file. When we want to roll out a new database server with the latest patches we untar the file and clone the Oracle home with a command like this:

$ORACLE_HOME/oui/bin/runInstaller -clone -silent ...

This command no longer works for 19c so now I am using a new command like this:

$ORACLE_HOME/runInstaller -silent -responseFile ...

Before running this command I had to unzip my gold image zip file into $ORACLE_HOME. I created the gold image zip file using a command like this:

./runInstaller -createGoldImage -destinationLocation ...

I ran this through MobaXterm to use their X server. I created the response file when I initially installed 19c on this test server. Then I patched the Oracle home with the July 2019 PSU and finally ran the above command to create the gold image.
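
Putting the pieces together, the end-to-end flow looks roughly like the sketch below. The directory paths, zip file name and response file name are placeholders (the real commands above had their arguments trimmed), so adjust them for your environment and check the 19c installation guide for the exact options:

# On the patched test server: package the patched home as a gold image
cd $ORACLE_HOME
./runInstaller -createGoldImage -destinationLocation /stage/gold        # hypothetical target directory

# On the new database server: unzip the gold image into the new home
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1              # hypothetical path
mkdir -p $ORACLE_HOME
unzip -q /stage/gold/db_home_2019_07.zip -d $ORACLE_HOME                # hypothetical zip name

# Register and configure the home using the response file from the original install
cd $ORACLE_HOME
./runInstaller -silent -responseFile /stage/db19c_install.rsp           # hypothetical response file
# ...then run any root scripts the installer asks for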

Some useful links that I ran into:

Franck Pachot’s post about doing a silent 18c install using the new runInstaller

Oracle support document that says the old Oracle home cloning does not work in 19c:

19.x:Clone.pl script is deprecated and how to clone using gold-image (Doc ID 2565006.1)

Oracle documentation about installing silent with response file

DBA Stackexchange post about how you have to use a response file because the command line options don’t work

This is kind of terse but it has some of the key information. I may update it later if I can.

Bobby

Categories: DBA Blogs

2million PageViews

Hemant K Chitale - Wed, 2019-08-07 23:53

This blog has now achieved 2 million PageViews:




(The "drop" at the end is the count for only the first week of August 2019).

Although this blog began in December 2006, the PageViews counts start with 8,176 in July 2010. So, effectively, this blog has had 2 million PageViews in 9 years.

The first 1 million PageViews were achieved in March 2015.

Unfortunately, the rate at which I have been publishing has declined since 2017 (36 posts in 2017, 30 in 2018 and only 8 so far this year).  I apologise for this.  Hopefully, I should be able to add more posts in the coming months.




Categories: DBA Blogs

Where is My Money Going? Checking the balance with Oracle Cloud Account Metering REST APIs

Rittman Mead Consulting - Wed, 2019-08-07 08:19
How much did I spend so far?

It's a typical question we ask ourselves daily and what do we do? Probably check the bank account status on our phone and yell at ourselves for all the money we trashed the previous night at the pub.

The Cloud

One of the great benefits of the cloud is that there is no big upfront cost required to start playing with the latest tool or technology: we just need to fill in a few forms, enter the credit card details and there we go! However, the cloud doesn't mean free: most of the time we pay based on resource and time consumption, and things can become pretty expensive if we don't manage our resources wisely.

The main Oracle Cloud Dashboard offers a preview of the Month to Date Cost and, by clicking on it, we can easily understand the cost per product. In the example below we have spent £322.80 month to date, of which £262.80 is on Oracle Analytics Classic.


We can go another step down: if, for example, we have multiple versions of the same product, we'll see a line for each version or licensing method. In our case, all the money comes from a single B88303 - OAC Enterprise Edition OCPU per Hour product with an overall 60 hours of uptime (OAC billing is per hour).


However, this requires a manual login into the Oracle Cloud to check the balance, which is not optimal if we want to display this information in external tools or automate part of the cost-checking procedures. Fortunately, we can retrieve the same information with Oracle Cloud Account Metering REST APIs.

Oracle Cloud Account Metering REST APIs

Oracle Cloud Account Metering REST APIs expose a lot of useful information about our Oracle Cloud account via REST APIs. We can, for example, check our subscription details, credit promotions, resource usage, cost and quotas. All we need to test the REST APIs is cURL, a command-line utility for sending HTTP requests. The syntax to retrieve the data is:

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/<BASE_PATH>/<RESOURCE_NAME>"

Where

  • <TENANT_ID> is the identity domain ID; you can find it under Oracle Analytics Cloud -> Overview
  • <BASE_PATH> is the base URI of the resource, e.g. /metering/api/v1
  • <RESOURCE_NAME> is the name of the specific resource we are requesting

Checking the Cost

If, as per the example below, we want to understand the cost, we simply need to call the usagecost resource passing the <ACCOUNT_ID> parameter which can be found in the Overview page of every service we already have in our account.


The basic cURL command to check the cost then becomes the following

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>?startTime=<START_TIME>&endTime=<ENDTIME>&timeZone=<TIMEZONE>" 

Where, on top of the parameters defined above, we have:

  • <START_TIME> and <END_TIME> with the format YYYY-MM-DDTHH:mm:ss.sssZ, e.g. 2019-08-01T00:00:00.000Z
  • <TIMEZONE> specifies which timezone to use for the date and time filter

So if, like before, we're aiming to understand the cost from the beginning of the month, our URL suffix becomes

<ACCOUNT_ID>?startTime=2019-08-01T00:00:00.000Z&endTime=2019-08-10T23:59:00.000Z&timeZone=Europe/Rome

The result is in JSON format, which we can easily parse with the command-line tool jq.

curl -X GET ... | jq '.'

The output is

{
  "accountId": "<ACCOUNT_ID>",
  "items": [
    ...
    {
      "subscriptionId": "...",
      "subscriptionType": "PRODUCTION",
      "serviceName": "ANALYTICS",
      "resourceName": "ANALYTICS_EE_PAAS_ANY_OCPU_HOUR",
      "currency": "GBP",
      "gsiProductId": "B88303",
      "startTimeUtc": "2019-08-01T00:00:00.000",
      "endTimeUtc": "2019-08-10T23:00:00.000",
      "serviceEntitlementId": "...",
      "costs": [
        {
          "computedQuantity": 60,
          "computedAmount": 262.8,
          "unitPrice": 4.38,
          "overagesFlag": "Y"
        }
      ]
    },
    ..,
  "canonicalLink": "/metering/api/v1/usagecost/<ACCOUNT_ID>?timeZone=Europe%2FRome&startTime=2019-08-01T00%3A00%3A00.000Z&endTime=2019-08-10T23%3A59%3A00.000Z"
}

As expected we get, within the items section, an entry for every product and license type we have used. In our case we have "serviceName": "ANALYTICS", with the Enterprise Edition option billed per hour ("resourceName": "ANALYTICS_EE_PAAS_ANY_OCPU_HOUR"); we used it for 60 hours with a unit price of £4.38 for a total amount of £262.80, perfectly in line with what we see in the webpage.

We can further filter our query using one of the following parameters (a combined example follows the list):

  • computeType: the nature of the cost (Usage, Overcharge...)
  • datacenter: the datacenter for which cost needs to be retrieved
  • dcAggEnabled: to roll up the cost by datacenter
  • resourceName: the type of resource billing (e.g. ANALYTICS_EE_PAAS_ANY_OCPU_HOUR)
  • serviceEntitlementId: the Id of the service, can be found in the Overview page
  • serviceName: the name of the service e.g. ANALYTICS
  • usageType: the type of usage we want reported: either TOTAL, HOURLY or DAILY
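
As an illustration, a daily breakdown restricted to the Analytics service could be requested by combining the serviceName and usageType filters (the placeholders are as before; whether every combination of filters is supported is worth checking against the API documentation):

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>?startTime=2019-08-01T00:00:00.000Z&endTime=2019-08-10T23:59:00.000Z&timeZone=UTC&serviceName=ANALYTICS&usageType=DAILY"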

Unfortunately, none of the above filters allows us to check the cost associated with a precise instance of the service. If, for example, we have two instances with the same edition and type of billing, we can't determine, with the above call, what the cost associated with each of the two instances is, since it's rolled up at instance-type level. But we're not alone! We can achieve more granularity in the billing metrics by using the /tagged REST API and properly performing instance tagging on our services.

Instance Tagging

We can group instances of various services with Tags. Tags are labels that we can attach to an instance to group them based on our company rules. Oracle allows two types of tagging for resources: free-form and defined.

With free-form tagging we can append any key-value label to our instances; e.g. we may want to tag an instance as Environment:Training, with Environment being the key and Training being the value.


The problem with free-form tagging is that we don't have control over which tag keys get associated with a certain resource, and it's an error-prone method since we have to type a key and a value every time (and free-form tags are not visible for cost-tracking).

If, instead, we want to use a more controlled approach to tagging, we can go for defined tagging: while with free-form tagging anyone can associate any key or value, with defined tagging we create a namespace which will contain a set of tag keys.


Once the namespace is created, we can then create the set of keys within it. In this case we create two pre-defined keys, Environment and Project; please note that we flagged the COST-TRACKING checkbox to be able to use the tags with the Oracle Cloud Account Metering APIs.


Please be aware that there are limits on the number of namespaces, tags per resource and cost-tracking tags; these are listed in the tagging documentation.

Now it's time to attach the defined tags to our instances; we can do so in the web UI during instance creation, or afterwards by selecting "Add Tags". More information is available in the related documentation.


After we add the tags marked for cost-tracking to our instances we may have to wait up to 5 hours to see them in "My Services" or via the REST APIs.

Querying Tagged Resources

There is an API within the Oracle Cloud Account Metering REST APIs which allows querying the cost associated with tagged resources. The call is very similar to the one we used above, with the additional tagged path element and the tags=... parameter. Taking the example above, if we want to see the consumption associated with instances tagged as Operations:Project=Training then the call is the following

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>/tagged?startTime=<START>&endTime=<END>&timeZone=<TZ>&tags=operations:Project=Training"

And the result is

{
  "accountId": "<ACCOUNT_ID>",
  "items": [
    {
      "subscriptionId": "...",
      "subscriptionType": "PRODUCTION",
      "serviceName": "ADWC",
      "resourceName": "ADWC_PAAS_BYOL_OCPU_HOUR",
      "currency": "GBP",
      "gsiProductId": "B89039",
      "startTimeUtc": "2019-08-01T00:00:00.000",
      "endTimeUtc": "2019-08-10T23:00:00.000",
      "serviceEntitlementId": "...",
      "costs": [
        {
          "computedQuantity": 23.0,
          "computedAmount": 8.06235468,
          "unitPrice": 0.35053716,
          "overagesFlag": "N"
        }
      ]
    }
  ],
  "canonicalLink": "/metering/api/v1/usagecost/<ACCOUNT_ID>/tagged?timeZone=UTC&startTime=2019-08-01T00%3A00%3A00.000Z&endTime=2019-08-10T23%3A59%3A00.000Z&usageType=TOTAL&tags=operations%3AProject%3DTraining"
}

A usage of ADWC for 23 hours for a total of £8.06, which is also visible from the My Services webpage.


Appending the following jq command to the cURL call displays the relevant information, like serviceName and the cost details, as separate columns

jq --raw-output '.items[] | "\(.serviceName)\t\(.subscriptionType)\t\(.resourceName)\t\(.currency)\t\(.costs[].computedAmount)\t\(.costs[].computedQuantity)\t\(.costs[].unitPrice)\t\(.costs[].overagesFlag)"' 

And the result is

ADWC	PRODUCTION	ADWC_PAAS_BYOL_OCPU_HOUR	GBP	8.06235468	23	0.35053716	N
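
If all we need is a single overall figure, jq can also aggregate across the items in the response. For example (an assumed pipeline, reusing the usagecost call shown earlier), the following sums every computedAmount into one total:

curl -X GET -u <USERNAME>:<PASSWORD> \
	-H "X-ID-TENANT-NAME:<TENANT_ID>" \
	"https://itra.oraclecloud.com/metering/api/v1/usagecost/<ACCOUNT_ID>?startTime=2019-08-01T00:00:00.000Z&endTime=2019-08-10T23:59:00.000Z&timeZone=UTC" |
	jq '[.items[].costs[].computedAmount] | add'
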
Summary

Oracle Cloud Account Metering REST APIs offer an easy way to expose Oracle Cloud usage and cost externally. Used smartly in conjunction with instance tagging, they provide a way to track cost and usage down to the single resource or project.

If, on the other hand, integration with REST APIs is not what you need, but you're looking for ways of getting notified when you're spending too much, check out the Alerts section of Managing and Monitoring Oracle Cloud.

Edit: You can also download your billing information as CSV from the Oracle Cloud web UI, as per the screenshot below, which is very handy for one-off analysis. If, on the other hand, you want to automate the export of billing information, then the Oracle Cloud Account Metering REST APIs are the way to go!

Categories: BI & Warehousing

[Solved] Oracle R12.2 AppsDBA: Adop Hotpatch Issue

Online Apps DBA - Wed, 2019-08-07 05:58

While applying patch 27254132, did you encounter the Adop Hotpatch issue? If YES!! Check this https://k21academy.com/appsdba60 which covers the root cause & fixes of the issue encountered while applying Patch 27254132 and things like: ✔ ADOP Overview ✔ What is Hotpatch? ✔ What is this Issue, that […]

The post [Solved] Oracle R12.2 AppsDBA: Adop Hotpatch Issue appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to Get Public IP of Database In Oracle Gen 2 Cloud (OCI)

Online Apps DBA - Wed, 2019-08-07 05:29

You have created a User-Managed Database in Oracle Gen 2 Cloud (OCI) and are not able to find the Public IP of the Database? No Worries!! Check this post at https://k21academy.com/clouddba47 covering: ✔ Database types in OCI ✔ Steps to get the Public IP of User-Managed […]

The post How to Get Public IP of Database In Oracle Gen 2 Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Feedback on the EMEA Red Hat Partner Conference 2019 in Prague

Yann Neuhaus - Wed, 2019-08-07 04:06

A few weeks ago I attended the Red Hat EMEA Partner Conference for the first time. The Red Hat Conference took place in Prague from June 25th to 27th. If you are interested in Open Source technologies and in Red Hat, feel free to read this personal feedback on trends at Red Hat and in the IT sector.

Stronger Together!

Representing dbi services at the Partner Conference in Prague was a great opportunity for us as a Red Hat Advanced Partner.

About 850 people attended this amusing event! Interactions with the Red Hat Community were very interesting and relaxed. Is it because of the Open Source atmosphere? The organization, catering, and location were also great! Many thanks to the organizers!

Also a sincere thank you to all Swiss Red Hat and Tech Data contacts at the event for welcoming and assisting Swiss Partners during the 3 days. Everything was handled extremely professionally thanks to Leonard Bodmer (Country Manager Red Hat Switzerland), Richard Zobrist (Head of Partner & Alliances Red Hat), Maja Zurovec (Account Manager Partner & Alliances Red Hat), Sandra Maria Sigrist (Tech Data), and Daria Stempkowski (Tech Data). Many thanks to all of you, also for the warm and relaxed evening at Villa Richter at the foot of Prague’s castle!

We are Stronger Together!

All about automation, integration, hybrid cloud, and multi-cloud

For this 3-day Partner Conference, Red Hat proposed a broad agenda of Breakouts, Exams, a Hackathon, Keynotes, Labs, and an Open Innovation Lab. I mostly attended sessions where Red Hat partners and customers had the opportunity to give feedback on their experience with Red Hat products. Some of the sessions and keynotes were remarkable.

Red Hat Middleware Roadmap

The “Red Hat Middleware Roadmap” sessions (Part 1 & 2) with Rich Sharpels were a good opportunity to learn more about productivity (automation, integration, runtimes), reliability (security, reduced complexity), and flexibility (containers for scaling, cloud, hybrid-cloud, multi-cloud) with OpenShift. These 2 presentations also covered the iPaaS, a new private integration Platform-as-a-Service offering that provides cloud-based services for application integration and messaging. The goal here is to strengthen collaboration within the business teams (DevOps) thanks to Managed Integration + OpenShift Dedicated. Rich Sharpels summarized the benefits of the iPaaS as: “cloud services and packages where customers don’t have to administrate anything!”

Ansible Partner Enablement Offerings

Günter Herold from Red Hat and Daniel Knözinger from Open Networks Austria held the session “Ansible Partner Enablement Offerings”. This session focused on the advantages of automating tasks with Ansible to reduce mistakes, errors, and complexity, because “Ansible is the universal language for the whole IT team”. With Ansible, “start small and develop”.

Best Practices for Working with Red Hat Support

Those who wanted to get informed on “Best Practices for Working with Red Hat Support” attended the session with Caroline Baillargeon, Leona Meeks, and Peter Jakobs from Red Hat. This presentation gave the opportunity to learn about and discuss:

  • The Customer Portal which is said to be “full of information and best practices that could be useful before opening and escalating issues”. For example, should you search for information on Ansible, have a look at this page
  • The TSAnet Connect where answers for multi IT solutions are centralized
  • The Case Management Tool for sharing the open source spirit and to be part of the community (example)
  • Tips to work efficiently with the Red Hat support:
    1. “Make sure the customer is registered on the correct time zone”
    2. “To get 7×24 support, a Premium Support subscription is needed”
    3. “in case the answer on an issue is not appropriate, use the escalation button”
    4. “contact Red Hat to get access to the trainings that also the Red Hat engineers follow for technical problem solving”

While keeping the end customer’s satisfaction in mind, this session could probably be best summarized with “why not contribute and share knowledge within the Red Hat community?”

Keynotes

Keynotes mainly concentrated on “marketing topics” that aim at boosting partner engagement, but they were still interesting, in particular:

  • “The Killer App in digital transformation is human connection” with Margaret Dawson on collaborating within the community
  • “Open Hybrid Cloud Ecosystems: Bold Goals for Tomorrow” with Lars Herrmann on innovations that have been considered impossible. In short, “if you think it is impossible, just do it”, so develop cloud, hybrid cloud and multi-cloud opportunities…

 

On Red Hat’s and Partner’s booths at the Congress Center

Besides sessions and presentations, lots of interesting technical booths at the Congress Center Prague promoted the work of the Red Hat engineers within the Open Source community. In particular I spent some time with Romain Pelisse (Red Hat Berlin) and Robert Zahradnícek (Red Hat Brno), who explained how they work and what the trends in their areas are. Of course we spoke about automation and integration, and about findings that are developed within the open source community first, before getting implemented in Red Hat solutions.

Last but not least, some Red Hat partners were present with a booth to promote their activities and products during the conference, among them Tech Data and Arrows ECS, which are well known at dbi services.

What to take from the Red Hat EMEA Conference?

On Red Hat and the Conference

At the end of the day, the keywords from the Red Hat EMEA Conference were probably not that far from the keywords you would get from other technology conferences. Concepts and products like “automation”, “integration”, “Ansible”, or “OpenShift” are means to get companies into the cloud. But why not? The trend towards the cloud is getting clearer and clearer, as it now makes sense for lots of projects to run at least Disaster Recovery, Test and Development in the cloud.

Whether it is private cloud, hybrid cloud, or multi-cloud is not the point for Red Hat: their solutions are agnostic. And Red Hat’s strategy is clearly based on a strong commitment to open source. It’s all about “products” (not projects), “collaboration”, “community”, and “customer success”.

On Open Source trends and strategy

Now you may ask: why subscribe to Red Hat’s products and support? Sure, with Ansible and other Open Source products you can easily “start small and develop”, so the community version may fit. But what if you go into production? The more numerous and the bigger the projects become, the more support you will need, and subscribing will probably make sense.

Then don’t forget that Open Source is not free. Whether you go for community or enterprise Open Source makes no difference: in the end you will need to invest at least in time and knowledge. And, depending on the situation, you may subscribe to products and support. If you don’t know where to start, ask dbi services for Open Source expertise.

Looking forward to reading your comments.

The post Feedback on the EMEA Red Hat Partner Conference 2019 in Prague appeared first on Blog dbi services.

19c Grid Dry-Run Upgrade

Michael Dinh - Tue, 2019-08-06 07:42

First test using GUI.

[oracle@racnode-dc2-1 grid]$ /u01/app/19.3.0.0/grid/gridSetup.sh -dryRunForUpgrade
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_00-20-31AM/gridSetupActions2019-08-06_00-20-31AM.log
[oracle@racnode-dc2-1 grid]$

Create dryRunForUpgradegrid.rsp from grid_2019-08-06_00-20-31AM.rsp (from the GUI test above):

[oracle@racnode-dc2-1 grid]$ grep -v "^#" /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_00-20-31AM.rsp | grep -v "=$" | awk 'NF' > /home/oracle/dryRunForUpgradegrid.rsp

[oracle@racnode-dc2-1 ~]$ cat /home/oracle/dryRunForUpgradegrid.rsp
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=vbox-rac-dc2
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
oracle.install.crs.config.clusterNodes=racnode-dc2-1:,racnode-dc2-2:
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.crs.config.sharedFileSystemStorage.ocrLocations=
oracle.install.crs.config.useIPMI=false
oracle.install.asm.diskGroup.name=CRS
oracle.install.asm.diskGroup.AUSize=0
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.config.omsPort=0
oracle.install.crs.rootconfig.executeRootScript=false
[oracle@racnode-dc2-1 ~]$

Create directory grid home for all nodes:

[root@racnode-dc2-1 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54318(asmdba),54322(dba),54323(backupdba),54324(oper),54325(dgdba),54326(kmdba)

[root@racnode-dc2-1 ~]# mkdir -p /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chown oracle:oinstall /u01/app/19.3.0.0/grid
[root@racnode-dc2-1 ~]# chmod 775 /u01/app/19.3.0.0/grid

[root@racnode-dc2-1 ~]# ll /u01/app/19.3.0.0/
total 4
drwxrwxr-x 2 oracle oinstall 4096 Aug  6 02:07 grid
[root@racnode-dc2-1 ~]#

Extract grid software for node1 ONLY:

[oracle@racnode-dc2-1 ~]$ unzip -qo /media/swrepo/LINUX.X64_193000_grid_home.zip -d /u01/app/19.3.0.0/grid/

[oracle@racnode-dc2-1 ~]$ ls /u01/app/19.3.0.0/grid/
addnode     clone  dbjava     diagnostics  gpnp          install        jdbc  lib      OPatch   ords  perl     qos       rhp            rootupgrade.sh  sqlpatch  tomcat  welcome.html  xdk
assistants  crs    dbs        dmu          gridSetup.sh  instantclient  jdk   md       opmn     oss   plsql    racg      root.sh        runcluvfy.sh    sqlplus   ucp     wlm
bin         css    deinstall  env.ora      has           inventory      jlib  network  oracore  oui   precomp  rdbms     root.sh.old    sdk             srvm      usm     wwg
cha         cv     demo       evm          hs            javavm         ldap  nls      ord      owm   QOpatch  relnotes  root.sh.old.1  slax            suptools  utl     xag

[oracle@racnode-dc2-1 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.0G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-1 ~]$

Run gridSetup.sh -silent -dryRunForUpgrade:

[oracle@racnode-dc2-1 ~]$ env|grep -i ora
USER=oracle
MAIL=/var/spool/mail/oracle
PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin
PWD=/home/oracle
HOME=/home/oracle
LOGNAME=oracle

[oracle@racnode-dc2-1 ~]$ date
Tue Aug  6 02:35:47 CEST 2019

[oracle@racnode-dc2-1 ~]$ /u01/app/19.3.0.0/grid/gridSetup.sh -silent -dryRunForUpgrade -responseFile /home/oracle/dryRunForUpgradegrid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/19.3.0.0/grid/install/response/grid_2019-08-06_02-35-52AM.rsp

You can find the log of this install session at:
 /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/gridSetupActions2019-08-06_02-35-52AM.log


As a root user, execute the following script(s):
        1. /u01/app/19.3.0.0/grid/rootupgrade.sh

Execute /u01/app/19.3.0.0/grid/rootupgrade.sh on the following nodes:
[racnode-dc2-1]

Run the script on the local node.

Successfully Setup Software with warning(s).
[oracle@racnode-dc2-1 ~]$

Run rootupgrade.sh for node1 ONLY and review log:

[root@racnode-dc2-1 ~]# /u01/app/19.3.0.0/grid/rootupgrade.sh
Check /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log for the output of root script

[root@racnode-dc2-1 ~]# cat /u01/app/19.3.0.0/grid/install/root_racnode-dc2-1_2019-08-06_02-44-59-241151038.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/19.3.0.0/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Performing Dry run of the Grid Infrastructure upgrade.
Using configuration parameter file: /u01/app/19.3.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/racnode-dc2-1/crsconfig/rootcrs_racnode-dc2-1_2019-08-06_02-45-31AM.log
2019/08/06 02:45:44 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/08/06 02:45:52 CLSRSC-729: Checking whether CRS entities are ready for upgrade, cluster upgrade will not be attempted now. This operation may take a few minutes.
2019/08/06 02:47:56 CLSRSC-693: CRS entities validation completed successfully.
[root@racnode-dc2-1 ~]#

Check grid home for node2:

[oracle@racnode-dc2-2 ~]$ du -sh /u01/app/19.3.0.0/grid/
6.6G    /u01/app/19.3.0.0/grid/
[oracle@racnode-dc2-2 ~]$

Check oraInventory for ALL nodes:

[oracle@racnode-dc2-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.7.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.2.0.1/grid" TYPE="O" IDX="1" CRS="true"/>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.2.0.1/db1" TYPE="O" IDX="2"/>
==========================================================================================
<HOME NAME="OraGI19Home1" LOC="/u01/app/19.3.0.0/grid" TYPE="O" IDX="3"/>
==========================================================================================
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc2-2 ~]$

Check crs activeversion: 12.2.0.1.0

[oracle@racnode-dc2-1 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"

[oracle@racnode-dc2-1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [927320293].
[oracle@racnode-dc2-1 ~]$

Check log location:

[oracle@racnode-dc2-1 ~]$ cd /u01/app/oraInventory/logs/GridSetupActions2019-08-06_02-35-52AM/

[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$ ls -alrt
total 17420
-rw-r-----  1 oracle oinstall     129 Aug  6 02:35 installerPatchActions_2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall       0 Aug  6 02:35 gridSetupActions2019-08-06_02-35-52AM.err
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:35 temp_ob
-rw-r-----  1 oracle oinstall       0 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.err
drwxrwx--- 17 oracle oinstall    4096 Aug  6 02:39 ..
-rw-r-----  1 oracle oinstall     157 Aug  6 02:39 oraInstall2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall       0 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.err.racnode-dc2-2
-rw-r-----  1 oracle oinstall     142 Aug  6 02:43 oraInstall2019-08-06_02-35-52AM.out.racnode-dc2-2
-rw-r-----  1 oracle oinstall 9341920 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.out
-rw-r-----  1 oracle oinstall   13419 Aug  6 02:43 time2019-08-06_02-35-52AM.log
-rw-r-----  1 oracle oinstall 8443087 Aug  6 02:43 gridSetupActions2019-08-06_02-35-52AM.log
drwxrwx---  3 oracle oinstall    4096 Aug  6 02:56 .
[oracle@racnode-dc2-1 GridSetupActions2019-08-06_02-35-52AM]$

Since I have not performed an actual upgrade, I don’t know whether the 19.3.0.0 grid home in oraInventory will be problematic.

It was problematic when performing a test in silent mode after an initial test with the GUI.

To resolve the issue, detach the 19.3.0.0 grid home:

export ORACLE_HOME=/u01/app/19.3.0.0/grid
$ORACLE_HOME/oui/bin/runInstaller -detachHome -silent ORACLE_HOME=$ORACLE_HOME
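
If the dry run is later followed by a real upgrade and the home has to be registered again, re-attaching it should work along these lines. This is only a sketch based on the generic OUI attachHome syntax: the home name is taken from the inventory.xml above, but the node list is an assumption, so verify both before running it.

export ORACLE_HOME=/u01/app/19.3.0.0/grid
# -attachHome re-registers the home in the central inventory (node list assumed)
$ORACLE_HOME/oui/bin/runInstaller -attachHome -silent \
  ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=OraGI19Home1 \
  "CLUSTER_NODES={racnode-dc2-1,racnode-dc2-2}"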

Cool Roblox Usernames

VitalSoftTech - Tue, 2019-08-06 02:02

Are you about to make a new Roblox account to play with your friends and are in a bind for a cool username? We’ll help you there. Roblox is an interactive, imaginative, online game that you can play with your friends. It is one of the largest online gaming platforms, with over 15 million games […]

The post Cool Roblox Usernames appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle JET Model ID Attribute

Andrejus Baranovski - Tue, 2019-08-06 01:20
When building an Oracle JET Model/Collection structure to render a table, you should be careful when defining the ID attribute. This attribute must contain unique values only; if there are duplicates in the ID, table rows will be rendered incorrectly.

Take as an example the structure below for an Employees REST endpoint. The EmployeeId attribute provides unique values and should be defined as the ID for the JET model:


Let's see what happens when the non-unique DepartmentId attribute is set as the key in the JET model:


In this case, during data fetch and UI table rendering (specifically when navigating to the next page of the table), the JET runtime for the table gets confused and fetches all of the records from the backend (definitely not what we want):


OK, change it to the proper one - use EmployeeId as the ID in the JET model:


The table works as expected:


Lesson learned - make sure to use an attribute with unique values for the JET model ID. If there is no single attribute with unique values, then concatenate multiple attributes on the backend to construct a single attribute with unique values.

Source code is available from my GitHub repo.

Announcing Fluid 3!

Jim Marion - Mon, 2019-08-05 12:43

At jsmpros, we teach Fluid training courses several times a month. PeopleTools Fluid is by far our most popular course. Through our Fluid 1 and 2 courses, we teach an incredible amount of material including:

  • Fluid navigation,
  • Fluid page development strategies,
  • Oracle-delivered style classes and layout
  • External CSS libraries,
  • Fluid grid layouts
  • Fluid Group Boxes
  • Dynamic Tiles
  • Responsive and adaptive mobile design concepts, etc.

The feedback from our Fluid 1 and 2 series is overwhelmingly positive. What we are announcing today is our next level Fluid course: Fluid 3. Through this course you will learn how to:

  • Use Master/Detail to build business process based solutions,
  • Build effective secondary headers similar to PeopleSoft's self-service headers (including Related Actions),
  • Use Scroll Areas and Fluid alternatives,
  • Extend Fluid with JavaScript libraries such as d3 and Oracle JET,
  • Leverage the DataGrid to create compelling solutions,
  • Add Fluid Related Content,
  • Convert Classic components to Fluid,
  • Extend Fluid Pages with 8.57+ Drop Zones,
  • Construct robust, business-centric dynamic tiles and Fluid navigation, and
  • Learn additional PeopleSoft CSS classes not covered in the Fluid 1 and 2 courses.

To register for our upcoming August Fluid 3 session or any of our live virtual training courses, please visit our Live Virtual Training online catalog.

Has it been a while since your last Fluid training course? Are your Fluid skills a little rusty? Use our Fluid 3 course as a refresher to get you back into shape.

Note: If you have taken Fluid from other trainers, feel free to start with our Fluid 2 course. Even though we do not monitor prerequisites, we do encourage attendees with Fluid experience to attend our Fluid 2 training before continuing to Fluid 3.

How To Get The Usage Reports In Oracle Cloud Infrastructure

Online Apps DBA - Mon, 2019-08-05 11:02

If you are working as a Cloud Administrator or Architect, then it is your day-to-day goal to track the usage & billing of cloud resources. To know how to get the usage reports in Oracle Cloud Infrastructure using the Console, check the blog […]

The post How To Get The Usage Reports In Oracle Cloud Infrastructure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Sparse OVM virtual disks on appliances

Yann Neuhaus - Mon, 2019-08-05 00:23

For some reason, you may need to sparse OVM virtual disks on an Oracle appliance. Even though that feature is present through the OVM Manager, most Oracle appliances don’t have any OVM Manager deployed on them. Therefore, if you un-sparse a virtual disk by mistake, you are on your own.

This is a note on how to sparse virtual disks which are currently un-sparsed.

Stop I/Os on the virtual disk

First, ensure the VM using the disk is stopped:
xm shutdown {VM_NAME}

For instance:
xm shutdown exac01db01.domain.local

Sparse disk


dd if={PATH_TO_DISK_TO_BE_SPARSED} of={PATH_TO_NEW_SPARSED_DISK} conv=sparse

For instance:

dd if=/EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img \
of=/staging/vdisk_root_01.img \
conv=sparse
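
Before moving the copy back, it is worth checking that the target file really ended up sparse. Comparing the apparent size with the blocks actually allocated shows the difference (paths reused from the example above):

ls -lh /staging/vdisk_root_01.img     # apparent size of the image
du -sh /staging/vdisk_root_01.img     # space actually allocated, should be much smaller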

Move disk to former location

After the sparsing operation has finished, copy the disks back to their former location:

# Retrieve the disks path:
cat /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg | grep disk
# Copy each disk back to its location:
mv /staging/{DISK_NAME}.img /EXAVMIMAGES/GuestImages/{VM_NAME}/{DISK_NAME}.img

For instance:

mv /staging/vdisk_root_01.img /EXAVMIMAGES/GuestImages/exac01db01.domain.local/vdisk_root_01.img

Start back the VM

Then you can start the VM back up, which will use the new disk:
xm create /EXAVMIMAGES/GuestImages/{VM_NAME}/vm.cfg

I hope this helps and please contact us or comment below should you need more details.

Cet article Sparse OVM virtual disks on appliances est apparu en premier sur Blog dbi services.

Oracle VM Server: Why the server uuid is important and why changes to this uuid are critical

Dietrich Schroff - Sun, 2019-08-04 15:02
After working for a while with Oracle VM Server, it turns out that a very important parameter is the UUID of an Oracle VM Server.

This UUID is used by the ovs-agent (take a look at the Oracle documentation). Here are a few excerpts from that chapter:
The Oracle VM Agent is a daemon that runs within dom0 on each Oracle VM Server instance. Its primary role is to facilitate communication between Oracle VM Server and Oracle VM Manager.
[...]
Oracle VM Agent is responsible for carrying out all of the configuration changes required on an Oracle VM Server instance, in accordance with the messages that are sent to it by Oracle VM Manager.
[...]
If you wish to allow another Oracle VM Manager instance to take ownership of the server, the original Oracle VM Manager instance must release ownership first.
[...]
Oracle VM Agent also maintains its own log files on the Oracle VM Server that can be used for debugging issues on a particular server instance or for auditing purposes. 
The Oracle VM Server gets identified at the Oracle VM Manager by its UUID. There is a very nice blog post from Bjorn Naessens:
https://bjornnaessens.wordpress.com/2012/08/10/best-practices-in-ovm-2-fakeuuid-it/
He made his way through the source code and came up with the following important things about this UUID:


From an architectural point of view, this is a really bad approach, because the UUID will change if you change the motherboard/SMBIOS or change a network MAC.
If you lose your UUID, the OVS agent will no longer communicate with your OVM Manager and therefore you cannot start/stop any VM on that host.
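
Since the UUID is derived from hardware identifiers, it can be worth noting it down from dom0 before any maintenance. Reading the SMBIOS system UUID (which, as far as I can tell, is one of the inputs the agent relies on) is a one-liner:

# run as root in dom0 on the Oracle VM Server
dmidecode -s system-uuid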

You can get the UUID of a server from the OVM Manager GUI:
(--> "Servers and VMs": select the server on the tree under "server pools" --> change the dropdown to "info")

How to fix a UUID change can be found here:
https://hadafq8.wordpress.com/2016/03/22/oracleovmovm-server-uuid-challenges/

Alfresco Clustering – Solr6

Yann Neuhaus - Sat, 2019-08-03 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, I talked about the Clustering setup for the Alfresco Repository, the Alfresco Share and for ActiveMQ. I also setup an Apache HTTPD as a Load Balancer. In this one, I will talk about the last layer that I wanted to present, which is Solr and more particularly Solr6 (Alfresco Search Services) Sharding. I planned on writing a blog related to Solr Sharding Concepts & Methods to explain what it brings concretely but unfortunately, it’s not ready yet. I will try to post it in the next few weeks, if I find the time.

 

I. Solr configuration modes

So, Solr supports/provides three configuration modes:

  • Master-Slave
  • SolrCloud
  • Standalone


Master-Slave: It’s a first specific configuration mode which is pretty old. In this one, the Master node is the only one to index the content and all the Slave nodes will replicate the Master’s index. This is a first step to provide a Clustering solution with Solr, and Alfresco supports it, but this solution has some important drawbacks. For example, and contrary to an ActiveMQ Master-Slave solution, Solr cannot change the Master. Therefore, if you lose your Master, there is no indexing happening anymore and you need to manually change the configuration file on each of the remaining nodes to specify a new Master and target all the remaining Slave nodes to use the new Master. This isn’t what I will be talking about in this blog.

SolrCloud: It’s another specific configuration mode which is a little bit more recent, introduced in Solr4 I believe. SolrCloud is a true Clustering solution using a ZooKeeper Server. It adds an additional layer on top of a Standalone Solr which slows it down a little bit, especially on infrastructures with a huge demand on indexing. But at some point, when you start having dozens of Solr nodes, you need a central place to organize and configure them and that’s what SolrCloud is very good at. This solution provides Fault Tolerance as well as High Availability. I’m not sure whether SolrCloud could be used by Alfresco: SolrCloud also has Shards and its behaviour is pretty similar to a Standalone Solr, but it doesn’t work entirely in the same way. Maybe it’s possible, however I have never seen it so far. Might be the subject of some testing later… In any case, using SolrCloud for Alfresco might not be that useful because it’s really easier to set up a Master-Master Solr mixed with Solr Sharding for pretty much the same benefits. So, I won’t talk about SolrCloud here either.

You guessed it: in this blog, I will only talk about Standalone Solr nodes, and only using Shards. Alfresco supports Solr Shards only since version 5.1. Before that, it wasn’t possible to use this feature, even though Solr4 already provided it. When using the two default cores (the famous “alfresco” & “archive” cores), with all Alfresco versions (all supporting Solr… so since Alfresco 4), it is possible to have a Highly Available Solr installation by setting up two Solr Standalone nodes and putting a Load Balancer in front of them, but in this case there is no communication between the Solr nodes so it’s only an HA solution, nothing more.

 

In the architectures that I presented in the first blog of this series, if you remember the schema N°5 (you probably don’t but no worry, I didn’t either), I put a link between the two Solr nodes and I mentioned the following related to this architecture:
“N°5: […]. Between the two Solr nodes, I put a Clustering link, that’s in case you are using Solr Sharding. If you are using the default cores (alfresco and archive), then there is no communication between distinct Solr nodes. If you are using Solr Sharding and if you want a HA architecture, then you will have the same Shards on both Solr nodes and in this case, there will be communications between the Solr nodes, it’s not really a Clustering so to speak, that’s how Solr Sharding is working but I still used the same representation.”

 

II. Solr Shards creation

As mentioned earlier in this blog, there are real Cluster solutions with Solr but in the case of Alfresco, because of the features that Alfresco adds like the Shard Registration, there is no real need to set up complex things like that. Having just a simple Master-Master installation of Solr6 with Sharding is already a very good and strong solution to provide Fault Tolerance, High Availability, Automatic Failover, Performance improvements, aso… So how can that be setup?

First, you will need to install at least two Solr Standalone nodes. You can use exactly the same setup for all nodes and it’s also exactly the same setup to use the default cores or Solr Sharding so just do what you are always doing. For the Tracking, you will need to use the Load Balancer URL so it can target all Repository nodes, if there are several.
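
As a reminder of what that means concretely, the tracking target is defined per core in solrcore.properties. A minimal sketch, assuming the usual Alfresco Search Services property names and a hypothetical Load Balancer alias, would look like this:

# solrcore.properties of each core (hypothetical host/port values)
alfresco.host=alfresco-lb.domain
alfresco.port=8080
alfresco.secureComms=none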

If you created the default cores, you can remove them easily:

[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=workspace://SpacesStore&coreName=alfresco HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">524</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ curl -v "http://localhost:8983/solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive"
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=removeCore&storeRef=archive://SpacesStore&coreName=archive HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 150
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">485</int></lst>
</response>
* Connection #0 to host localhost left intact
[alfresco@solr_n1 ~]$

 

A status of “0” means that it’s successful.

Once that’s done, you can then simply create the Shards. In this example, I will:

  • use the DB_ID_RANGE method
  • use two Solr nodes
  • for workspace://SpacesStore: create 2 Shards out of a maximum of 10 with a range of 20M
  • for archive://SpacesStore: create 1 Shard out of a maximum of 4 with a range of 50M

Since I will use only two Solr nodes and since I want a High Availability on each of the Shards, I will need to have them all on both nodes. With a simple loop, it’s pretty easy to create all the Shards:

[alfresco@solr_n1 ~]$ solr_host=localhost
[alfresco@solr_n1 ~]$ solr_node_id=1
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=19999999
[alfresco@solr_n1 ~]$ total_shards=10
[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 1`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=alfresco&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-19999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">254</int></lst><str name="core">alfresco-0</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°0 created with: 0-19999999


*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=workspace://SpacesStore&numShards=10&numNodes=10&nodeInstance=1&template=rerank&coreName=alfresco&shardIds=1&property.shard.method=DB_ID_RANGE&property.shard.range=20000000-39999999&property.shard.instance=1 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 182
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">228</int></lst><str name="core">alfresco-1</str>
</response>
* Connection #0 to host localhost left intact

  -->  Range N°1 created with: 20000000-39999999

[alfresco@solr_n1 ~]$
[alfresco@solr_n1 ~]$ begin_range=0
[alfresco@solr_n1 ~]$ range=49999999
[alfresco@solr_n1 ~]$ total_shards=4
[alfresco@solr_n1 ~]$ for shard_id in `seq 0 0`; do
>   end_range=$((${begin_range} + ${range}))
>   curl -v "http://${solr_host}:8983/solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=${total_shards}&numNodes=${total_shards}&nodeInstance=${solr_node_id}&template=rerank&coreName=archive&shardIds=${shard_id}&property.shard.method=DB_ID_RANGE&property.shard.range=${begin_range}-${end_range}&property.shard.instance=${shard_id}"
>   echo ""
>   echo "  -->  Range N°${shard_id} created with: ${begin_range}-${end_range}"
>   echo ""
>   sleep 2
>   begin_range=$((${end_range} + 1))
> done

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8983 (#0)
> GET /solr/admin/cores?action=newCore&storeRef=archive://SpacesStore&numShards=4&numNodes=4&nodeInstance=1&template=rerank&coreName=archive&shardIds=0&property.shard.method=DB_ID_RANGE&property.shard.range=0-49999999&property.shard.instance=0 HTTP/1.1
> Host: localhost:8983
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 181
<
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int name="QTime">231</int></lst><str name="core">archive-0</str>
</response>
* Connection #0 to host localhost left intact

-->  Range N°0 created with: 0-49999999

[alfresco@solr_n1 ~]$

 

On Solr node2, to create the same Shards (another Instance of each Shard) and therefore provide the expected setup, just re-execute the same commands but replace solr_node_id=1 with solr_node_id=2. That’s all there is to do on the Solr side; just creating the Shards is sufficient. On the Alfresco side, configure the Shards registration to use the Dynamic mode:

[alfresco@alf_n1 ~]$ cat $CATALINA_HOME/shared/classes/alfresco-global.properties
...
# Solr Sharding
solr.useDynamicShardRegistration=true
search.solrShardRegistry.purgeOnInit=true
search.solrShardRegistry.shardInstanceTimeoutInSeconds=60
search.solrShardRegistry.maxAllowedReplicaTxCountDifference=500
...
[alfresco@alf_n1 ~]$

 

After a quick restart, all the Shard’s Instances will register themselves to Alfresco and you should see that each Shard has its two Shard’s Instances. Thanks to the constant Tracking, Alfresco knows which Shard’s Instances are healthy (up-to-date) and which ones aren’t (either lagging behind or completely silent). When performing searches, Alfresco will make a request to any of the healthy Shard’s Instances. Solr will be aware of the healthy Shard’s Instances as well and it will start the distribution of the search request to all the Shards for the parallel query. This is the communication between the Solr nodes that I mentioned earlier: it’s not really Clustering but rather query distribution between all the healthy Shard’s Instances.

 

 

Other posts of this series on Alfresco HA/Clustering:

Cet article Alfresco Clustering – Solr6 est apparu en premier sur Blog dbi services.

AdminClient and Set Commands

DBASolved - Fri, 2019-08-02 13:32

AdminClient is the “new” command line utility that is used with Oracle GoldenGate Microservices. Initially, AdminClient was released with Oracle GoldenGate 12c (12.3.0.0.1) and has been enhanced in each release thereafter. This new command line tool can do a few things that make it a powerful tool for administering Oracle GoldenGate.

Reminder: This is only available in Oracle GoldenGate Microservices Editions.

Features that make this tool so nice:

  • Default command line tool for Microservices
  • Can be installed on a remote Linux machine or a Windows workstation/laptop
  • Can “set” advanced settings that provide a few nice features

The third bullet is what will be the focus of this post.

The “set” command within AdminClient provides you with options that allow you to extend the command line for Oracle GoldenGate. These settings are COLOR, DEBUG, EDITOR, PAGER and VERBOSE.

After starting the AdminClient, it is possible to see the current settings of these values by using the SHOW command:

Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : OFF
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

If you want to change any of these settings, you can simply run “set <option> <value>” at the command prompt. For example, to turn on the color option:

OGG (not connected) 2> set color on


OGG (not connected) 3> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 4>

 

Now that we can set these values and change how AdminClient responds, how can these settings be automated (to a degree)? To do this, you can write a wrapper around the execution of the AdminClient executable (similar to my post on resolving the OGG-01525 error). Within this wrapper, the setting you want to change has to be prefixed with ADMINCLIENT_. It would look like this:

export ADMINCLIENT_COLOR=<value>

Note: The <value> is case sensitive.

My shell script for AdminClient with the settings I like to have turned on is setup as follows:

#!/bin/bash


export OGG_VAR_HOME=/tmp
export ADMINCLIENT_COLOR=ON
export ADMINCLIENT_DEBUG=OFF


${OGG_HOME}/bin/adminclient

 

Now, when I start AdminClient, I have all the settings I want for my environment. Plus, the ones I do not set will take the default settings.

[oracle@ogg19c scripts]$ sh ./adminclient.sh
Oracle GoldenGate Administration Client for Oracle
Version 19.1.0.0.1 OGGCORE_19.1.0.0.0_PLATFORMS_190524.2201


Copyright (C) 1995, 2019, Oracle and/or its affiliates. All rights reserved.


Linux, x64, 64bit (optimized) on May 25 2019 02:00:23
Operating system character set identified as US-ASCII.


OGG (not connected) 1> show


Current directory: /home/oracle/software/scripts
COLOR            : ON
DEBUG            : OFF
EDITOR           : vi
PAGER            : more
VERBOSE          : OFF


OGG (not connected) 2>

 

Enjoy!!!

Categories: DBA Blogs

VirtualBox – running a Windows 10 Guest on an Ubuntu Host

The Anti-Kyte - Fri, 2019-08-02 10:02

Yes, you read that right. There are lots of guides out there on how to set up and run Ubuntu in VirtualBox on a Windows host.
These days, you even have access to an Ubuntu sub-system in Windows itself.
If, like me, your OS of choice is Ubuntu but you need to test how something behaves in Windows – is it possible to knock up an appropriate environment?
The answer is, of course, yes – otherwise this would be quite a short post.

The following steps will work for VirtualBox on any host – Linux, Mac, even Windows.

What I’m going to cover is :

  • Finding a Windows ISO
  • Configuring the VM in VirtualBox
  • Persuading VirtualBox to use a sensible screen size for your new VM

But first…

A quick word about versions

The Host OS I’m running is Ubuntu 16.04 LTS.
I’m using version 5.0 of VirtualBox.
NOTE – steps to install VirtualBox on a Debian-based host such as Ubuntu can be found here.
The Guest OS I’m installing is, as you’d expect, Windows 10.

Finding a Windows ISO

Depending on which Windows edition you are after, there are a couple of places you can look.
Microsoft provides an ISO for a 180-day evaluation version of Windows Server here.

In this case, I simply want to try Windows 10 so I need to go to this page.

Once here, I need to select an edition…

…and the language…

…before we’re presented with a choice of 32 or 64-bit :

I’ve chosen 64-bit. After the download, I am now the proud owner of :

-rw-rw-r-- 1 mike mike 4.7G Jul 10 17:10 Win10_1903_V1_English_x64.iso
Creating the VirtualBox VM

Fire up VirtualBox and click on the New button to start the Create Virtual Machine wizard :

…Next assign it some memory


I’m going to create a Virtual Hard Disk :

…using the default type…

…and being dynamically allocated…

…of the size recommended by VirtualBox :

I now have a new VM, which I need to point at the Windows ISO I downloaded so that I can install Windows itself :

All I have to do now is follow the Windows installation prompts, a process which I’ll not bore you with here.
However, you may be interested to learn that you don’t necessarily require a Product Key for this installation.
Chris Hoffman has produced an excellent guide on the subject.

Installing Guest Additions

Now I’ve configured Windows, I still need to install VirtualBox Guest Additions. Among other things, this will help to control the screen size of the VM so that I don’t need a magnifying glass !

First of all, we need to virtually eject the virtual cd containing the Windows ISO. To do this, we actually go to the VM’s VirtualBox menu and select Devices/Optical Drives/Remove disk from virtual drive :

Now, using the same menu (Devices), we select Insert Guest Additions CD Image :

When Windows prompts you, choose to install :

Accept the defaults when prompted and then reboot the VM.

If, by some chance you are still faced with a small viewport for your Windows VM, you can try the following…

Resizing the VM display

Go to the VirtualBox application itself and with the VM selected, go to the File/Preferences menu.

Click on Display, and set the Maximum Guest Screen Size to Automatic
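
If you prefer the command line, the same preference can apparently be set through a global extradata key; treat this key and value as an assumption and verify them against your VirtualBox version:

VBoxManage setextradata global GUI/MaxGuestResolution any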

When you next re-start the VM, the window should now be a more reasonable size.
In fact, with any luck, your desktop should now look something like this :

The best way to run Windows !

Taking Pivotal Build Service (PBS) for a test drive

Pas Apicella - Fri, 2019-08-02 05:13
Pivotal Build Service ALPHA was just released, and in this blog post let's take it for a test drive to see how it works. The Pivotal blog post about this release is below. In short, it Assembles and Updates Containers in Kubernetes.

https://content.pivotal.io/blog/pivotal-build-service-now-alpha-assembles-and-updates-containers-in-kubernetes

Steps:

1. Once you have deployed Pivotal Build Service, the pb CLI can be used to target it with the following command.

Note: Use the --skip-ssl-validation flag if the Pivotal Build Service targets a UAA that has a self-signed CA cert

$ pb api set https://pbs.picorivera.cf-app.com --skip-ssl-validation
Successfully set 'https://pbs.picorivera.cf-app.com' as the Build Service

2. Login using "pb login" as shown below

$ pb login
Target Build Server at: https://pbs.picorivera.cf-app.com

Username: papicella@gmail.com
Password: ******
Login successful

Using the Pivotal Build Service (PBS) requires us to create a TEAM and IMAGE. Both are explained below.

TEAM: A team is an entity on Pivotal Build Service that is used to manage authentication for the images built by Pivotal Build Service and to manage registry and git credentials for the images managed by the team

3. Create a TEAM yaml as per below and then apply that config using the pb cli

example-team.yaml

name: example-team-name
registries:
- registry: index.docker.io
  username: pasapples
  password: *****
repositories:
- domain: github.com
  username: papicella
  password: *****

$ pb team apply -f example-team.yaml
Successfully applied team 'example-team-name'

IMAGE: An image defines the specification that Pivotal Build Service uses to create images for a user.

4. Create a IMAGE yaml as per below and then apply that config using the pb cli. The PBS will automatically kick off a build.

example-image.yaml

team: example-team-name
source:
  git:
    url: https://github.com/papicella/pbs-demo
    revision: master
image:
  tag: pasapples/pbs-demo-image

$ pb image apply -f example-image.yaml
Successfully applied image configuration 'pasapples/pbs-demo-image'

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image    Started Time           Finished Time    Reason
-----    ------      -----    ------------           -------------    ------
    1    BUILDING    --       2019-08-02 09:43:06    --               CONFIG

5. You can view the logs of the build using its Build ID, as shown below

$ pb image logs pasapples/pbs-demo-image -b 1

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 1
[build-step-credential-initializer] {"level":"info","ts":1564739008.561973,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "c1aae50feaffcd61c521796cd675e6576e58bc64" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack

....


6. After a few minutes or so, we will see that we have built our initial image from the GitHub repo and that the OCI-compliant image, built using Cloud Native Buildpacks, has been created on our DockerHub account

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG



One of the PBS jobs is to keep this image updated as new successful commits occur on the master branch. Let's show how this works below.

7. Let's make a change to the code in our GitHub repo; here I do this in IntelliJ IDEA



8. Commit the changes as shown below



9. Let's check whether the PBS actually started a new build for us - and indeed, it is doing just that.

$ pb image builds pasapples/pbs-demo-image
Build    Status      Image       Started Time           Finished Time          Reason
-----    ------      -----       ------------           -------------          ------
    1    SUCCESS     98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    BUILDING    --          2019-08-02 09:57:11    --                     COMMIT

10. We can view the build logs as shown below, and tail them live using "-f"

papicella@papicella:~$ pb image logs pasapples/pbs-demo-image -b 2 -f
[build-step-credential-initializer] {"level":"info","ts":1564739850.5331886,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
[build-step-credential-initializer]
[build-step-git-source-0] git-init:main.go:81: Successfully cloned "https://github.com/papicella/pbs-demo" @ "0bb81c7523be7ada3ed956569d0241cda6b410d2" in path "/workspace"
[build-step-git-source-0]
[build-step-prepare]
[build-step-detect] Trying group 1 out of 3 with 27 buildpacks...
[build-step-detect] ======== Results ========
[build-step-detect] skip: Cloud Foundry Archive Expanding Buildpack
[build-step-detect] pass: Pivotal OpenJDK Buildpack
[build-step-detect] pass: Pivotal Build System Buildpack
[build-step-detect] pass: Cloud Foundry JVM Application Buildpack
[build-step-detect] pass: Cloud Foundry Spring Boot Buildpack
[build-step-detect] pass: Cloud Foundry Apache Tomcat Buildpack
[build-step-detect] pass: Cloud Foundry DistZip Buildpack
[build-step-detect] skip: Cloud Foundry Procfile Buildpack
[build-step-detect] skip: Pivotal AppDynamics Buildpack
[build-step-detect] skip: Pivotal AspectJ Buildpack
[build-step-detect] skip: Pivotal CA Introscope Buildpack
[build-step-detect] pass: Pivotal Client Certificate Mapper Buildpack
[build-step-detect] skip: Pivotal Elastic APM Buildpack
[build-step-detect] skip: Pivotal JaCoCo Buildpack
[build-step-detect] skip: Pivotal JProfiler Buildpack
[build-step-detect] skip: Pivotal JRebel Buildpack
[build-step-detect] skip: Pivotal New Relic Buildpack
[build-step-detect] skip: Pivotal OverOps Buildpack
[build-step-detect] skip: Pivotal Riverbed AppInternals Buildpack
[build-step-detect] skip: Pivotal SkyWalking Buildpack
[build-step-detect] skip: Pivotal YourKit Buildpack
[build-step-detect] skip: Cloud Foundry Azure Application Insights Buildpack
[build-step-detect] skip: Cloud Foundry Debug Buildpack
[build-step-detect] skip: Cloud Foundry Google Stackdriver Buildpack
[build-step-detect] skip: Cloud Foundry JDBC Buildpack
[build-step-detect] skip: Cloud Foundry JMX Buildpack
[build-step-detect] pass: Cloud Foundry Spring Auto-reconfiguration Buildpack
[build-step-detect]
[build-step-restore] Restoring cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-restore] Restoring cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-restore] Restoring cached layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-restore]
[build-step-analyze] Analyzing image 'index.docker.io/pasapples/pbs-demo-image@sha256:982391123b47cdbac534aaeed78c5e121d89d2064b53897c23f2248a7658fa50'
[build-step-analyze] Using cached layer 'io.pivotal.openjdk:openjdk-jdk'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:java-security-properties'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:jvmkill'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:link-local-dns'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:memory-calculator'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:openjdk-jre'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:security-provider-configurer'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.openjdk:class-counter'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-application'
[build-step-analyze] Using cached layer 'io.pivotal.buildsystem:build-system-cache'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.jvmapplication:executable-jar'
[build-step-analyze] Using cached launch layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Rewriting metadata for layer 'org.cloudfoundry.springboot:spring-boot'
[build-step-analyze] Writing metadata for uncached layer 'io.pivotal.clientcertificatemapper:client-certificate-mapper'
[build-step-analyze] Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'
[build-step-analyze]
[build-step-build]
[build-step-build] Pivotal OpenJDK Buildpack 1.0.0-M9
[build-step-build]   OpenJDK JDK 11.0.3: Reusing cached layer
[build-step-build]   OpenJDK JRE 11.0.3: Reusing cached layer
[build-step-build]   Java Security Properties 1.0.0-M9: Reusing cached layer
[build-step-build]   Security Provider Configurer 1.0.0-M9: Reusing cached layer
[build-step-build]   Link-Local DNS 1.0.0-M9: Reusing cached layer
[build-step-build]   JVMKill Agent 1.16.0: Reusing cached layer
[build-step-build]   Class Counter 1.0.0-M9: Reusing cached layer
[build-step-build]   Memory Calculator 4.0.0: Reusing cached layer
[build-step-build]
[build-step-build] Pivotal Build System Buildpack 1.0.0-M9
[build-step-build]     Using wrapper
[build-step-build]     Linking Cache to /home/vcap/.m2
[build-step-build]   Compiled Application (141 files): Contributing to layer
[build-step-build] [INFO] Scanning for projects...
[build-step-build] [INFO]
[build-step-build] [INFO] ------------------------< com.example:pbs-demo >------------------------
[build-step-build] [INFO] Building pbs-demo 0.0.1-SNAPSHOT
[build-step-build] [INFO] --------------------------------[ jar ]---------------------------------
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ pbs-demo ---
[build-step-build] [INFO] Using 'UTF-8' encoding to copy filtered resources.
[build-step-build] [INFO] Copying 1 resource
[build-step-build] [INFO] Copying 0 resource
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ pbs-demo ---
[build-step-build] [INFO] Changes detected - recompiling the module!
[build-step-build] [INFO] Compiling 9 source files to /workspace/target/classes
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ pbs-demo ---
[build-step-build] [INFO] Not copying test resources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ pbs-demo ---
[build-step-build] [INFO] Not compiling test sources
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ pbs-demo ---
[build-step-build] [INFO] Tests are skipped.
[build-step-build] [INFO]
[build-step-build] [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ pbs-demo ---
[build-step-build] [INFO] Building jar: /workspace/target/pbs-demo-0.0.1-SNAPSHOT.jar
[build-step-build] [INFO]
[build-step-build] [INFO] --- spring-boot-maven-plugin:2.1.6.RELEASE:repackage (repackage) @ pbs-demo ---
[build-step-build] [INFO] Replacing main artifact with repackaged archive
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] BUILD SUCCESS
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build] [INFO] Total time:  7.214 s
[build-step-build] [INFO] Finished at: 2019-08-02T09:57:52Z
[build-step-build] [INFO] ------------------------------------------------------------------------
[build-step-build]   Removing source code
[build-step-build]
[build-step-build] Cloud Foundry JVM Application Buildpack 1.0.0-M9
[build-step-build]   Executable JAR: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     executable-jar: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     task:           java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]     web:            java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[build-step-build]
[build-step-build] Cloud Foundry Spring Boot Buildpack 1.0.0-M9
[build-step-build]   Spring Boot 2.1.6.RELEASE: Reusing cached layer
[build-step-build]   Process types:
[build-step-build]     spring-boot: java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     task:        java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]     web:         java -cp $CLASSPATH $JAVA_OPTS com.example.pbsdemo.PbsDemoApplication
[build-step-build]
[build-step-build] Pivotal Client Certificate Mapper Buildpack 1.0.0-M9
[build-step-build]   Cloud Foundry Client Certificate Mapper 1.8.0: Reusing cached layer
[build-step-build]
[build-step-build] Cloud Foundry Spring Auto-reconfiguration Buildpack 1.0.0-M9
[build-step-build]   Spring Auto-reconfiguration 2.7.0: Reusing cached layer
[build-step-build] 

...


11. This time the build is faster: since we are using Cloud Native Buildpacks (a CNCF project), only the required layers are rebuilt rather than the whole image. You can see this from the time taken by build "2"

$ pb image builds pasapples/pbs-demo-image
Build    Status     Image       Started Time           Finished Time          Reason
-----    ------     -----       ------------           -------------          ------
    1    SUCCESS    98239112    2019-08-02 09:43:06    2019-08-02 09:44:34    CONFIG
    2    SUCCESS    1e4b63b1    2019-08-02 09:57:11    2019-08-02 09:58:15    COMMIT

Hopefully this demo shows what the PBS is all about and how it simplifies creating and keeping your OCI-compliant images up to date.

More Information:

1. Get started with Pivotal Build Service.
https://github.com/pivotal-cf/docs-build-service/blob/master/using.md

2. Request alpha access to Build Service via this form, or by reaching out to your account team. Once you’ve gained access, you’ll see the bits up on PivNet

3. Cloud Native buildpacks
https://buildpacks.io/


Categories: Fusion Middleware

Alfresco Clustering – Apache HTTPD as Load Balancer

Yann Neuhaus - Fri, 2019-08-02 01:00

In previous blogs, I talked about some basics and presented some possible architectures for Alfresco, I talked about the Clustering setup for the Alfresco Repository, the Alfresco Share and for ActiveMQ. In this one, I will talk about the Front-end layer, but in a very particular setup because it will also act as a Load Balancer. For an Alfresco solution, you can choose the front-end that you prefer and it can just act as a front-end to protect your Alfresco back-end components, to add SSL or whatever. There are no real preferences but you will obviously need to know how to configure it. I posted a blog some years ago for Apache HTTPD as a simple front-end (here) or you can check the Alfresco documentation which now includes a section for that as well, but there is no official documentation for a Load Balancer setup.

In an Alfresco architecture that includes HA/Clustering you will, at some point, need a Load Balancer. From time to time, you will come across companies that do not already have a Load Balancer available and you might therefore have to provide something to fill this gap. Since you will most probably (should?) already have a front-end to protect Alfresco, why not using it as well as a Load Balancer? In this blog, I choose Apache HTTPD because that’s the front-end I’m usually using and I know it’s working fine as a LB as well.

In the architectures that I described in the first blog of this series, there was always a front-end installed on each node with Alfresco Share and a LB above that. Here, these two boxes are actually together. There are multiple ways to set that up but I didn’t want to talk about that in my first blog because it’s not really related to Alfresco, it’s above that, so it would just have multiplied the possible architectures that I wanted to present and my blog would have been way too long. There were also no communications between the different front-end nodes because, technically speaking, we aren’t going to set up Apache HTTPD as a Cluster, we only need to provide a High Availability solution.

Alright so let’s say that you don’t have a Load Balancer available and you want to use Apache HTTPD as a front-end+LB for a two-node Cluster. There are several solutions so here are two possible ways to do that from an inbound communication point of view that will still provide redundancy:

  • Setup a Round Robin DNS that points to both Apache HTTPD node1 and node2. The DNS will redirect connections to either of the two Apache HTTPD (Active/Active) - see the zone-file sketch after this list
  • Setup a Failover DNS with a pretty low TimeToLive (TTL) which will point to a single Apache HTTPD node and redirect all traffic there. If this one isn’t available, it will failover to the second one (Active/Passive)
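
As a sketch of the first (Round Robin) option above, the zone simply carries one A record per Apache HTTPD node under the same name. A minimal BIND-style snippet, with a hypothetical name and IPs, would be:

; both records answer for the same name, resolvers rotate between them
alfresco   300   IN   A   10.0.0.11   ; Apache HTTPD node1
alfresco   300   IN   A   10.0.0.12   ; Apache HTTPD node2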

 

In both cases above, the Apache HTTPD configuration can be exactly the same, it will work. From an outbound communication point of view, Apache HTTPD will talk directly with all the Share nodes behind it. To avoid disconnections and loss of sessions in case an Apache HTTPD goes down, the solution will need to support session stickiness across all Apache HTTPD nodes. With that, all communications coming from a single browser will always be redirected to the same backend server, which ensures that the sessions stay intact even if you lose an Apache HTTPD. I mentioned previously that there won’t be any communications between the different front-ends, so this session stickiness must be based on something present inside the session (header or cookie) or inside the URL.

With Apache HTTPD, you can use the Proxy modules to provide both a front-end configuration as well as a Load Balancer but, in this blog, I will use the JK module. The JK module is provided by Apache for communications between Apache HTTPD and Apache Tomcat. It has been designed and optimized for this purpose and it also provides/supports a Load Balancer configuration.

 

I. Apache HTTPD setup for a single back-end node

For this example, I will use the package provided by Ubuntu for a simple installation. You can obviously build it from source to customize it, add your best practices, aso… This has nothing to do with the Clustering setup, it’s a simple front-end configuration for any installation. So let’s install a basic Apache HTTPD:

[alfresco@httpd_n1 ~]$ sudo apt-get install apache2 libapache2-mod-jk
[alfresco@httpd_n1 ~]$ sudo systemctl enable apache2.service
[alfresco@httpd_n1 ~]$ sudo systemctl daemon-reload
[alfresco@httpd_n1 ~]$ sudo a2enmod rewrite
[alfresco@httpd_n1 ~]$ sudo a2enmod ssl

 

Then to configure it for a single back-end Alfresco node (I’m just showing a minimal configuration again, there is much more to do to add security & restrictions around Alfresco and mod_jk):

[alfresco@httpd_n1 ~]$ cat /etc/apache2/sites-available/alfresco-ssl.conf
...
<VirtualHost *:80>
    RewriteRule ^/?(.*) https://%{HTTP_HOST}/$1 [R,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName            dns.domain
    ServerAlias           dns.domain dns
    ServerAdmin           email@domain
    SSLEngine             on
    SSLProtocol           -all +TLSv1.2
    SSLCipherSuite        EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:AES2
    SSLHonorCipherOrder   on
    SSLVerifyClient       none
    SSLCertificateFile    /etc/pki/tls/certs/dns.domain.crt
    SSLCertificateKeyFile /etc/pki/tls/private/dns.domain.key

    RewriteRule ^/$ https://%{HTTP_HOST}/share [R,L]

    JkMount /* alfworker
</VirtualHost>
...
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker
worker.alfworker.type=ajp13
worker.alfworker.port=8009
worker.alfworker.host=share_n1.domain
worker.alfworker.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo a2ensite alfresco-ssl
[alfresco@httpd_n1 ~]$ sudo a2dissite 000-default
[alfresco@httpd_n1 ~]$ sudo rm /etc/apache2/sites-enabled/000-default.conf
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 restart

 

That should do it for a single back-end Alfresco node. Again, this was just an example, I wouldn’t recommend using the configuration as is (inside the alfresco-ssl.conf file), there is much more to do for security reasons.

 

II. Adaptation for a Load Balancer configuration

If you want to configure your Apache HTTPD as a Load Balancer, then on top of the standard setup shown above, you just have to modify two things:

  • Modify the JK module configuration to use a Load Balancer
  • Modify the Apache Tomcat configuration to add an identifier for Apache HTTPD to be able to redirect the communication to the correct back-end node (session stickiness). This ID put in the Apache Tomcat configuration will extend the Session’s ID like that: <session_id>.<tomcat_id>

 

So on all the nodes hosting the Apache HTTPD, you should put the exact same configuration:

[alfresco@httpd_n1 ~]$ cat /etc/libapache2-mod-jk/workers.properties
worker.list=alfworker

worker.alfworker.type=lb
worker.alfworker.balance_workers=node1,node2
worker.alfworker.sticky_session=true
worker.alfworker.method=B

worker.node1.type=ajp13
worker.node1.port=8009
worker.node1.host=share_n1.domain
worker.node1.lbfactor=1

worker.node2.type=ajp13
worker.node2.port=8009
worker.node2.host=share_n2.domain
worker.node2.lbfactor=1
[alfresco@httpd_n1 ~]$
[alfresco@httpd_n1 ~]$ sudo service apache2 reload

 

With the above configuration, we keep the same JK Worker (alfworker) but instead of using an ajp13 type, we use a lb type (the worker.alfworker.type=lb line), which is an encapsulation. The alfworker will use 2 sub-workers named node1 and node2 (worker.alfworker.balance_workers), which are just generic names. The alfworker will also enable stickiness and use the method B (Busyness), which means that for new sessions, Apache HTTPD will choose the worker with the fewest requests being served, divided by the lbfactor value.

Each sub-worker (node1 and node2) defines its type, which is ajp13 this time, the port and host it should target (where the Share nodes are located) and the lbfactor. As mentioned above, increasing the lbfactor means that more requests are going to be sent to this worker (see the workers.properties sketch after the list below):

  • For the node2 to serve 100% more requests than the node1 (x2), then set worker.node1.lbfactor=1 and worker.node2.lbfactor=2
  • For the node2 to serve 50% more requests than the node1 (x1.5), then set worker.node1.lbfactor=2 and worker.node2.lbfactor=3
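
For example, the first ratio above translates to the following two lines in the workers.properties shown earlier:

worker.node1.lbfactor=1
worker.node2.lbfactor=2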

 

The second thing to do is to modify the Apache Tomcat configuration to add a specific ID. On the Share node1:

[alfresco@share_n1 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n1">
[alfresco@share_n1 ~]$

 

On the Share node2:

[alfresco@share_n2 ~]$ grep "<Engine" $CATALINA_HOME/conf/server.xml
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="share_n2">
[alfresco@share_n2 ~]$

 

The value to be put in the jvmRoute parameter is just a string so it can be anything but it must be unique across all Share nodes so that the Apache HTTPD JK module can find the correct back-end node that it should transfer the requests to.

It’s that simple to configure Apache HTTPD as a Load Balancer in front of Alfresco… To check which back-end server you are currently using, you can use the browser’s utilities and in particular the network recording which will display, in the headers/cookies section, the Session ID which will therefore display the value that you put in the jvmRoute.
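
If you prefer the command line, a quick way to see the same thing is to look at the JSESSIONID cookie returned by Share, since Apache Tomcat appends the jvmRoute to it. A sketch with curl (hypothetical host, output shortened):

curl -sk -D - -o /dev/null https://dns.domain/share/page | grep -i '^Set-Cookie'
# Set-Cookie: JSESSIONID=<session_id>.share_n1; Path=/share; ...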

 

 

Other posts of this series on Alfresco HA/Clustering:

Cet article Alfresco Clustering – Apache HTTPD as Load Balancer est apparu en premier sur Blog dbi services.

Too Old To Remember

Michael Dinh - Thu, 2019-08-01 12:01

Is it required to run datapatch after creating a database?

Why bother trying to remember versus running datapatch -prereq to find out?

Test case for 12.2.

Database July 2019 Release Update 12.2 applied:

[oracle@racnode-dc2-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.2.0.1/grid
ORACLE_HOME=/u01/app/12.2.0.1/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.2.0.1/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.2.0.1/grid/OPatch/opatch lspatches
29770090;ACFS JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770090)
29770040;OCW JUL 2019 RELEASE UPDATE 12.2.0.1.190716 (29770040)
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
28566910;TOMCAT RELEASE UPDATE 12.2.0.1.0(ID:180802.1448.S) (28566910)
26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

OPatch succeeded.
+ exit
[oracle@racnode-dc2-1 ~]$

Create 12.2 RAC database:

[oracle@racnode-dc2-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname hawkcdb -sid hawkcdb -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc2-1,racnode-dc2-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
21% complete
Creating and starting Oracle instance
35% complete
Creating cluster database views
50% complete
Completing Database Creation
57% complete
Creating Pluggable Databases
78% complete
Executing Post Configuration Actions
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/hawkcdb/hawkcdb.log" for further details.
[oracle@racnode-dc2-1 ~]$

Run datapatch -prereq for 12.2

[oracle@racnode-dc2-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.2.0.1.0 Production on Thu Aug  1 17:45:13 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    Nothing to apply
**********************************************************************

SQL Patching tool complete on Thu Aug  1 17:46:39 2019
[oracle@racnode-dc2-1 ~]$

Test case for 12.1.

Database July 2019 Bundle Patch 12.1 applied:

[oracle@racnode-dc1-1 ~]$ /media/patch/lspatches.sh
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/app/12.1.0.2/grid
ORACLE_HOME=/u01/app/12.1.0.2/grid
Oracle Instance alive for sid "+ASM1"
+ /u01/app/12.1.0.2/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/12.1.0.2/grid/OPatch/opatch lspatches
29509318;OCW PATCH SET UPDATE 12.1.0.2.190716 (29509318)
29496791;Database Bundle Patch : 12.1.0.2.190716 (29496791)
29423125;ACFS PATCH SET UPDATE 12.1.0.2.190716 (29423125)
26983807;WLM Patch Set Update: 12.1.0.2.180116 (26983807)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Create 12.1 RAC database:

[oracle@racnode-dc1-1 ~]$ dbca -silent -createDatabase -characterSet AL32UTF8 \
> -createAsContainerDatabase true \
> -templateName General_Purpose.dbc \
> -gdbname cdbhawk -sid cdbhawk -responseFile NO_VALUE \
> -sysPassword Oracle_4U! -systemPassword Oracle_4U! \
> -numberOfPDBs 1 -pdbName pdb01 -pdbAdminPassword Oracle_4U! \
> -databaseType MULTIPURPOSE \
> -automaticMemoryManagement false -totalMemory 3072 \
> -storageType ASM -diskGroupName DATA -recoveryGroupName FRA \
> -redoLogFileSize 100 \
> -emConfiguration NONE \
> -nodeinfo racnode-dc1-1,racnode-dc1-2 \
> -listeners LISTENER \
> -ignorePreReqs

Copying database files
23% complete
Creating and starting Oracle instance
38% complete
Creating cluster database views
54% complete
Completing Database Creation
77% complete
Creating Pluggable Databases
81% complete
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdbhawk/cdbhawk.log" for further details.
[oracle@racnode-dc1-1 ~]$

Run datapatch -prereq for 12.1

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -prereq
SQL Patching tool version 12.1.0.2.0 Production on Thu Aug  1 18:24:53 2019
Copyright (c) 2012, 2017, Oracle.  All rights reserved.

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Adding patches to installation queue and performing prereq checks...done

**********************************************************************
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB01
    Nothing to roll back
    The following patches will be applied:
      29496791 (DATABASE BUNDLE PATCH 12.1.0.2.190716)
**********************************************************************

SQL Patching tool complete on Thu Aug  1 18:26:26 2019
[oracle@racnode-dc1-1 ~]$

For 12.1, datapatch is required; for 12.2, it is not.
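
So for the 12.1 database, the follow-up is simply to run datapatch for real once the PDBs are open (output omitted here):

[oracle@racnode-dc1-1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose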
