Feed aggregator

Oracle ADF A Status Update

Andrejus Baranovski - Wed, 2019-07-03 14:11
Oracle posted information update for Oracle ADF - "With the continuous investment and usage of Oracle ADF inside Oracle we expect external customers will also continue to enjoy the benefits of Oracle ADF for many more years."

Read the complete post here: https://blogs.oracle.com/jdeveloperpm/oracle-adf-a-status-update

Happy to read the update, sounds positive. Thanks to Oracle for taking time and publishing this information. #adf #middleware #javascript #oracle #cloud #oraclefusion

Serving Prophet Model with Flask — Predicting Future

Andrejus Baranovski - Wed, 2019-07-03 08:23
A solution demonstrating how to serve a Prophet model API on the Web with Flask. Prophet is an open-source Python library developed by Facebook to predict time series data.

An accurate forecast and future prediction are crucial for almost any business. This is an obvious thing and it doesn’t need explanation. Time series data is data ordered by date, where each date is typically assigned one or more values specific to that date. Machine Learning powered models can generate forecasts based on time series data, and such forecasts can be an important source of information for business decisions.
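As a minimal sketch (not the code from the post), a Prophet model could be wrapped in a Flask endpoint along these lines; the CSV file name, route and parameter handling here are assumptions for illustration only:

from flask import Flask, jsonify, request
import pandas as pd
from fbprophet import Prophet

app = Flask(__name__)

# Fit the model once at startup from a hypothetical CSV with 'ds' and 'y' columns
df = pd.read_csv('sales.csv')
model = Prophet()
model.fit(df)

@app.route('/forecast', methods=['GET'])
def forecast():
    # Number of future periods to predict, e.g. /forecast?periods=30
    periods = int(request.args.get('periods', 30))
    future = model.make_future_dataframe(periods=periods)
    prediction = model.predict(future)
    result = prediction[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(periods)
    result = result.assign(ds=result['ds'].dt.strftime('%Y-%m-%d'))
    return jsonify(result.to_dict(orient='records'))

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)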

Read more in my Towards Data Science post.

Null Display Value on Read-only List Item

Jeff Kemp - Wed, 2019-07-03 04:00

The updated Universal Theme has added new “Floating” item templates which look great, e.g.:

I had a list item which I wanted to leave optional; if the user leaves it null, I wanted it to show a “default” display value (derived at runtime). To implement this, I added a hidden item (P10_DEPTNO_DEFAULT) and on the list item set Null Display Value to &P10_DEPTNO_DEFAULT..

If the page is shown in read-only mode, however, the list item is rendered as a Display Item, and the Null Display Value attribute is ignored:

To solve this, I added a Dynamic Action which injects the default value into the HTML for display (without affecting the value of the underlying item):

  • Event: Page Load
  • Server-side Condition: <page is readonly> AND :P10_DEPTNO IS NULL
  • Action: Execute JavaScript Code
  • Fire on Initialization: No
  • Code:
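A minimal sketch of what such code might look like (the item names come from the example above, but the exact markup of a display-only item is an assumption; the snippet relies on jQuery and the APEX $v() function, both available on APEX pages):

// Hypothetical sketch: copy the derived default into the read-only display
// span without changing the value of the underlying P10_DEPTNO item.
// Assumes the display-only item renders its value in a span whose id is
// the item name, and that P10_DEPTNO_DEFAULT holds the display value.
$("#P10_DEPTNO").text($v("P10_DEPTNO_DEFAULT"));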

This finds the span for the display-only item and injects the default display value for display:

If you want to see this in action for yourself, here is a demo: https://apex.oracle.com/pls/apex/f?p=APEXTEST:DISPSHOWDEFAULT&c=JK64

Snapchat Usernames that are Interesting and More You

VitalSoftTech - Tue, 2019-07-02 09:52

Snapchat is an application for Android and Apple devices by Evan Spiegel and Bobby Murphy. It is a social media messenger that allows users to share their photos and videos with their friends. It is essential to have a cool Snapchat username that will help you portray your personality and entertain your friends and family. One […]

The post Snapchat Usernames that are Interesting and More You appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Transparent Data Encryption and the world of Multitenant Database (Oracle 12c)

VitalSoftTech - Tue, 2019-07-02 09:45

Step-by-step instructions on how to secure the Oracle Database Datafiles and the Operating System Data Files using Oracle 12c Transparent Data Encryption. Learn more ..

The post Oracle Transparent Data Encryption and the world of Multitenant Database (Oracle 12c) appeared first on VitalSoftTech.

Categories: DBA Blogs

Using DbVisualizer to work with #Oracle, #PostgreSQL and #Exasol

The Oracle Instructor - Tue, 2019-07-02 09:01

As a Database Developer or Database Administrator, you are increasingly unlikely to work with only one platform.

It’s quite useful to have one single tool to handle multiple different database platforms. And that’s exactly the ambition of DbVisualizer.

As a hypothetical scenario, let’s assume you are a database admin who works on a project to migrate from Oracle to EDB Postgres and Exasol.

The goal might be to replace the corporate Oracle database landscape, moving the OLTP part to EDB Postgres and the DWH / Analytics part to Exasol.

Instead of having to switch constantly between, say, SQL Developer, psql and EXAplus, a more efficient approach would be to use DbVisualizer for all three.

I created one connection for each of the three databases here for my demo:

Now let’s see if statements I run in Oracle also work in EDB Postgres and in Exasol:

Oracle

EDB

Exasol

Works the same for all three! The convenient thing here is that I just had to select the Database Connection from the pull-down menu while leaving the statement as it is. There was no need even to copy & paste.
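As a hedged illustration of the kind of statement involved (this is not the actual demo script), a simple ANSI-style sequence like the following runs unchanged against the Oracle, EDB Postgres and Exasol connections:

-- Illustrative only: portable DDL and DML accepted by all three platforms
create table emp_demo (
  id        integer,
  name      varchar(100),
  hire_date date
);

insert into emp_demo values (1, 'Scott', date '2019-07-01');

select count(*) from emp_demo;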

What about schemas and tables?

Oracle

In EDB, I need to create a schema accordingly:

EDB

 

In Exasol, schema and table can be created in the same way:

Exasol

Notice that the data types got silently translated into the proper Exasol data types:

Exasol

There is no DBA_TABLES in Exasol, though:

Exasol

Of course, there’s much more to check and test upon migration, but I think you get an idea of how a universal SQL client like DbVisualizer might help for such purposes.

 

Categories: DBA Blogs

PeopleSoft ReConnect 2019

Jim Marion - Mon, 2019-07-01 09:37

It is about two weeks until PeopleSoft ReConnect, and definitely time to build your schedule. I'm looking forward to a great conference with partners such as Appsian, psadmin.io, SpearMC, Presence of IT, Gideon Taylor, PS Web Solutions, New Resources Consulting, Oracle, and colleagues such as Sasank Vemana. There are so many great sessions available. I personally have several overlapping sessions on my agenda. In fact, I am delivering sessions during timeslots that list sessions I would like to attend.

If you still have room in your schedule, here are the sessions I will be presenting at ReConnect 2019. I hope you aren't leaving early because both of my sessions are on Thursday, the final day of the conference.

See you there!

[Video] Exadata Deployment Options: Cloud Service (ExaCS) vs Cloud at Customer (ExaCC) vs Exadata Machine

Online Apps DBA - Mon, 2019-07-01 07:25

[Video] Exadata Deployment Options: ExaCS vs ExaCC vs Exadata Machine There are three deployment options available for Exadata, one On-Premise and two On-Cloud. 1. What are these 3 Deployment Options? 2. What is the role of Oracle Advanced Support Gateway in ExaCC? 3. How many Cloud Control Planes can there be in one Datacentre? 4. […]

The post [Video] Exadata Deployment Options: Cloud Service (ExaCS) vs Cloud at Customer (ExaCC) vs Exadata Machine appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

opt_estimate 4

Jonathan Lewis - Mon, 2019-07-01 07:18

In the previous article in this series on the opt_estimate() hint I mentioned the “query_block” option for the hint. If you can identify a specific query block that becomes an “outline_leaf” in an execution plan (perhaps because you’ve deliberately given a query block name to an inline subquery and applied the no_merge() hint to it) then you can use the opt_estimate() hint to tell the optimizer how many rows will be produced by that query block (each time it starts). The syntax of the hint is very simple:


opt_estimate(@{query block name}  query_block  rows={number of rows})

As with other options for the hint, you can use scale_rows=, min=, max= as alternatives (the last seems to be used in the code generated by Oracle for materialized view refreshes) but the simple “rows=N” is likely to be the most popular. In effect it does the same as the “non-specific” version of the cardinality() hint – which I’ve suggested from time to time as a way of telling the optimizer the size of a data set in a materialized CTE (“with” subquery), e.g.


set serveroutput off

with demo as (
        select  /*+
                        qb_name(mat_cte)
                        materialize
                        cardinality(@mat_cte 11)
--                      opt_estimate(@mat_cte query_block rows=11)
                */
                distinct trunc(created)    date_list
        from    all_objects
)
select  * from demo
;

select * from table(dbms_xplan.display_cursor);
    

Regardless of whether you use the opt_estimate() or cardinality() hint above, the materialized temporary table will be reported with 11 rows. (Note that in this case, where the hint is inside the query block it applies to, the “@mat_cte” isn’t necessary.)

In the previous article I generated some data with a script called opt_est_gby.sql to show you the effects of the group_by and having options of the opt_estimate() hint, and pointed out that there were cases where you might also want to include the query_block option as well. Here’s a final example query showing the effect with the scale_rows feature, after creating a table t2 as a copy of t1 but setting pctfree 75 (to make a tablescan more expensive) and creating an index on t2(id):


create table t2 pctfree 75 as select * from t1;
create index t2_i1 on t2(id);

select
        t2.n1, t1ct
from
        t2,
        (
        select  /*+
                        qb_name(main)
                        opt_estimate(@main group_by scale_rows=4)
                        opt_estimate(@main having scale_rows=0.4)
                        opt_estimate(@main query_block scale_rows=0.5)
                */
                mod(n1,10), count(*) t1ct
        from    t1
        group by
                mod(n1,10)
        having
                count(*) > 100
        ) v1
where
        t2.id = v1.t1ct
;

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |     8 |   168 |    27   (8)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |     8 |   168 |    27   (8)| 00:00:01 |
|   2 |   NESTED LOOPS               |       |     8 |   168 |    27   (8)| 00:00:01 |
|   3 |    VIEW                      |       |     8 |   104 |    10  (10)| 00:00:01 |
|*  4 |     FILTER                   |       |       |       |            |          |
|   5 |      HASH GROUP BY           |       |     8 |    32 |    10  (10)| 00:00:01 |
|   6 |       TABLE ACCESS FULL      | T1    |  3000 | 12000 |     9   (0)| 00:00:01 |
|*  7 |    INDEX RANGE SCAN          | T2_I1 |     1 |       |     1   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter(COUNT(*)>100)
   7 - access("T2"."ID"="V1"."T1CT")


I’ve inlined the last query (with the two opt_estimate() hints) that I used in the previous article, and added a third opt_estimate() hint to that inline view. In this case I didn’t have to add a no_merge() hint because the numbers worked in my favour but to be safe in a production environment that’s a hint that I should have included.

You may recall that the hash group by on its own resulted in a prediction of 200 rows, and with the having clause the prediction dropped to 10 rows (standard 5%). With my three opt_estimate() hints in place I should see the effects of the following arithmetic:


group by      200       * 4   = 800
having        5% of 800 * 0.4 =  16
query block   16        * 0.5 =   8

As you can see, the cardinality prediction for the VIEW operation is, indeed, 8 – so the combination of hints has worked. It’s just a shame that we can’t see the three individual steps in the arithmetic as we walk the plan.

A Warning

As always I can only repeat – hinting is not easy; and “not easy” usually translates to “not stable / not safe” (and, thanks to a Freudian slip while typing: “not sage”). You probably don’t know how to do it properly, except in the very simplest cases, and we don’t really know how Oracle is interpreting the hints (particularly the undocumented ones). Here’s an example of how puzzling even the opt_estimate(query_block) hint can be – as usual starting with some data:

rem
rem     Script:         opt_estimate_2.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Aug 2017
rem

create table t1
as
select * from all_objects;

create table t2
as
select * from all_objects;

As you can see, I’ve been a bit lazy with this example (which I wrote a couple of years ago) and it uses all_objects as a convenient source of data. Unfortunately this means you won’t necessarily be able to reproduce exactly the results I’m about to show you, which I did on a small instance of 12.2.0.1. I’m going to examine four versions of a simple query which

  • restricts the rows from t1,
  • finds the unique set of object_types in that subset of t1
  • then joins to t2 by object_type

select
        /*+ 
                qb_name(main)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                merge(@inline)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                opt_estimate(@inline query_block rows=14)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;


select
        /*+ 
                qb_name(main)
                merge(@inline)
                opt_estimate(@inline query_block rows=14)
        */
        t2.object_id, t2.object_name, created
from    (
        select  /*+ qb_name(inline) */
                distinct object_type
        from    t1 
        where 
                created >= date'2017-03-01' 
        )       v1,
        t2
where
        t2.object_type = v1.object_type
;

The first version is my unhinted baseline (where, in my case, Oracle doesn’t use complex view merging), the second forces complex view merging of the inline aggregate view, then queries 3 and 4 repeat queries 1 and 2 but tell the optimizer that the number of distinct object_type values is 14 (roughly half the actual in my case). But there is an oddity in the last query – I’ve told the optimizer how many rows it should estimate for the inline view but I’ve also told it to get rid of the inline view and merge it into the outer query block; so what effect is that going to have? My hope would be that the hint would have to be ignored because it’s going to apply to a query block that doesn’t exist in the final plan, which makes it irrelevant and unusable. Here are the four execution plans:


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      | 61776 |  4464K|   338   (7)| 00:00:01 |
|*  1 |  HASH JOIN           |      | 61776 |  4464K|   338   (7)| 00:00:01 |
|   2 |   VIEW               |      |    27 |   351 |   173   (9)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |    27 |   486 |   173   (9)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T1   | 59458 |  1045K|   164   (4)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T2   | 61776 |  3680K|   163   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."OBJECT_TYPE"="V1"."OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


--------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           | 61776 |  5308K|       |  1492   (2)| 00:00:01 |
|   1 |  VIEW                  | VM_NWVW_1 | 61776 |  5308K|       |  1492   (2)| 00:00:01 |
|   2 |   HASH UNIQUE          |           | 61776 |  5489K|  6112K|  1492   (2)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|           | 61776 |  5489K|       |   330   (5)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T1        | 59458 |  1045K|       |   164   (4)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2        | 61776 |  4403K|       |   163   (4)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."OBJECT_TYPE"="OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      | 32032 |  2314K|   338   (7)| 00:00:01 |
|*  1 |  HASH JOIN           |      | 32032 |  2314K|   338   (7)| 00:00:01 |
|   2 |   VIEW               |      |    14 |   182 |   173   (9)| 00:00:01 |
|   3 |    HASH UNIQUE       |      |    14 |   252 |   173   (9)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| T1   | 59458 |  1045K|   164   (4)| 00:00:01 |
|   5 |   TABLE ACCESS FULL  | T2   | 61776 |  3680K|   163   (4)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."OBJECT_TYPE"="V1"."OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


--------------------------------------------------------------------------------------------
| Id  | Operation              | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |           |    14 |  1232 |       |  1492   (2)| 00:00:01 |
|   1 |  VIEW                  | VM_NWVW_1 |    14 |  1232 |       |  1492   (2)| 00:00:01 |
|   2 |   HASH UNIQUE          |           |    14 |  1274 |  6112K|  1492   (2)| 00:00:01 |
|*  3 |    HASH JOIN RIGHT SEMI|           | 61776 |  5489K|       |   330   (5)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL  | T1        | 59458 |  1045K|       |   164   (4)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | T2        | 61776 |  4403K|       |   163   (4)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("T2"."OBJECT_TYPE"="OBJECT_TYPE")
   4 - filter("CREATED">=TO_DATE(' 2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

The first plan tells us that most of the rows in t1 have created > 1st March 2017 and there are (estimated) 27 distinct values for object_type; and there are 61,776 rows in t2 (which is basically the same as t1), and none of them are eliminated by the join on object_type from the inline view.

The second plan (with the forced complex view merging) shows Oracle changing the view with “distinct” into a (right) semi-join between t2 and t1 with the internal view name of VM_NWVW_1 – and the cardinality is correct.

The third plan shows that my hint telling the optimizer to assume the original inline view produces 14 rows has been accepted and, not surprisingly, when we claim that we have roughly half the number of object_type values the final estimate of rows in the join is roughly halved.

So what happens in the fourth plan when our hint applies to a view that no longer exists? I think the optimizer should have discarded the hint as irrelevant the moment it merged the view. Unfortunately it seems to have carried the hint up into the merged view and used it to produce a wildly inaccurate estimate for the final cardinality. If this had been a three-table join this is the sort of error that could make a sensible hash join into a third table become an unbelievably stupid nested loop join. If you had thought you were doing something incredibly clever with (just) the one opt_estimate() hint, the day might come when a small change in the statistics resulted in the optimizer using a view merge strategy you’d never seen before and producing a catastrophic execution plan in (say) an overnight batch that then ran “forever”.

Hinting is hard, you really have to be extremely thorough in your hints and make sure you cover all the options that might appear. And then you might still run into something that looks (as this does) like a bug.

Footnote

Here’s a closing thought: even if you manage to tell the optimizer exactly how many rows will come out of a query block to be joined to the next table in the query, you may still get a very bad plan unless you can also tell the optimizer how many distinct values of the join column(s) there are in that data set. Which means you may also have to learn all about the (even more undocumented) column_stats() hint.

 

Oracle the Creative Choice for Italian Stationery Brand

Oracle Press Releases - Mon, 2019-07-01 07:00
Press Release
Oracle the Creative Choice for Italian Stationery Brand Legami Milano applies cutting-edge retail technology to expand business while dazzling customers with a modern customer experience

Redwood Shores, Calif. and Milan, Italy—Jul 1, 2019

Legami Milano immerses and delights consumers in a creative workshop offering stationery products, accessories, jewelry, and lifestyle items online, in-store and through wholesale partners. Retailers, from iconic department stores to museums, around the world showcase the positive and inspiring Legami Milano collections. The chain is also expected to open 30 new boutique stores in the next three years. To support this new adventure in Italy and beyond, Legami partnered with Oracle to dazzle their customers with a connected and seamless shopping experience.  

“Legami Milano is committed to offering innovative new concepts, experiences, and products within our boutiques. With young, tech-savvy shoppers, our retail technology needs to support this mindset,” said Alberto Fassi, CEO, Legami. “We chose Oracle Retail to deliver the next-generation shopping experience needed to grow with our business. By adopting a modern Point-of-Service platform, we can speed up the checkout process while better engaging with our customers.” 

In the recent Topography of Retail Report, nearly 6,400 European consumers surveyed prioritized being able to see and touch items (67 percent) as one of the most compelling attributes of a shopping experience. The Legami brand attracts shoppers who like to browse for the perfect gift, discover innovative new items, or explore the world through merchandise.

In just six months, Legami Milano implemented Oracle Retail Xstore Point-of-Service (POS) and Oracle Retail Customer Engagement. Today, Legami Milano boutique employees have powerful tools at their disposal to access relevant customer information in real-time and manage payments safely to improve customer service in stores. The modern POS significantly streamlined operations and reduced errors during transactions.

“In-store experiences are more relevant than ever,” said Mike Webster, senior vice president, and general manager, Oracle Retail. “As Legami Milano expands to new regions, Oracle tools will provide associates the insights needed to deliver a lasting, positive impression on shoppers. It’s exciting to imagine where this dynamic brand will go next on its journey and we look forward to supporting them each step of the way.”

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
About Legami Milano

Legami Milano was founded April 29, 2003 in Bergamo, Italy by Alberto Fassi. At the beginning, the company focused on an iconic product: the book belt – reinterpreting the traditional 1970’s object, mainly popular among school pupils and university students in Italy. Soon, Legami Milano broke into the stationery market – its agendas, calendars and cards immediately became successful, not only in Italy but also abroad: from Europe to America, from the East to South Africa. Its range of products has continued to rapidly expand and get increasingly differentiated. Today, Legami produces and generates brands, collections and content to share positive feelings, joyful moods and good humor with everybody. Legami Milano recorded €27 million in inventory turnover in 2018, increasing its turnover by 40% in comparison to 2017.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Modifying pg_hba.conf from inside PostgreSQL

Yann Neuhaus - Sat, 2019-06-29 07:14

During one of the sessions at the last Swiss PGDay there was a question which could not be answered during the talk: Is it possible to modify pg_hba.conf from inside PostgreSQL without having access to the operating system? What everybody agreed on is that there currently is no built-in function for doing this.

When you are on a recent version of PostgreSQL there is a view you can use to display the rules in pg_hba.conf:

postgres=# select * from pg_hba_file_rules ;
 line_number | type  |   database    | user_name |  address  |                 netmask                 | auth_method | options | error 
-------------+-------+---------------+-----------+-----------+-----------------------------------------+-------------+---------+-------
          84 | local | {all}         | {all}     |           |                                         | trust       |         | 
          86 | host  | {all}         | {all}     | 127.0.0.1 | 255.255.255.255                         | trust       |         | 
          88 | host  | {all}         | {all}     | ::1       | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff | trust       |         | 
          91 | local | {replication} | {all}     |           |                                         | trust       |         | 
          92 | host  | {replication} | {all}     | 127.0.0.1 | 255.255.255.255                         | trust       |         | 
          93 | host  | {replication} | {all}     | ::1       | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff | trust       |         | 
          94 | host  | {all}         | {mydb}    | ::1       | ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff | trust       |         | 
(7 rows)

But there is nothing which allows you to directly modify that. When you are lucky and have enough permissions there is a way to do it, though. First, let’s check where pg_hba.conf is located:

postgres=# select setting from pg_settings where name like '%hba%';
           setting           
-----------------------------
 /u02/pgdata/DEV/pg_hba.conf

Having that information, we can load that file into a table:

postgres=# create table hba ( lines text ); 
CREATE TABLE
postgres=# copy hba from '/u02/pgdata/DEV/pg_hba.conf';
COPY 93

Once it is loaded we have the whole content in our table (skipping the comments and empty lines here):

postgres=# select * from hba where lines !~ '^#' and lines !~ '^$';
                                 lines                                 
-----------------------------------------------------------------------
 local   all             all                                     trust
 host    all             all             127.0.0.1/32            trust
 host    all             all             ::1/128                 trust
 local   replication     all                                     trust
 host    replication     all             127.0.0.1/32            trust
 host    replication     all             ::1/128                 trust
(6 rows)

As this is a normal table we can of course add a row:

postgres=# insert into hba (lines) values ('host  all mydb  ::1/128                 trust');
INSERT 0 1
postgres=# select * from hba where lines !~ '^#' and lines !~ '^$';
                                 lines                                 
-----------------------------------------------------------------------
 local   all             all                                     trust
 host    all             all             127.0.0.1/32            trust
 host    all             all             ::1/128                 trust
 local   replication     all                                     trust
 host    replication     all             127.0.0.1/32            trust
 host    replication     all             ::1/128                 trust
 host  all mydb  ::1/128                 trust
(7 rows)

And now we can write it back:

postgres=# copy hba to '/u02/pgdata/DEV/pg_hba.conf';
COPY 94

Reading the whole file confirms that our new rule is there:

postgres=# select pg_read_file('pg_hba.conf');
                               pg_read_file                               
--------------------------------------------------------------------------
 # PostgreSQL Client Authentication Configuration File                   +
 # ===================================================                   +
 #                                                                       +
 # Refer to the "Client Authentication" section in the PostgreSQL        +
 # documentation for a complete description of this file.  A short       +
 # synopsis follows.                                                     +
 #                                                                       +
 # This file controls: which hosts are allowed to connect, how clients   +
 # are authenticated, which PostgreSQL user names they can use, which    +
 # databases they can access.  Records take one of these forms:          +
 #                                                                       +
 # local      DATABASE  USER  METHOD  [OPTIONS]                          +
 # host       DATABASE  USER  ADDRESS  METHOD  [OPTIONS]                 +
 # hostssl    DATABASE  USER  ADDRESS  METHOD  [OPTIONS]                 +
 # hostnossl  DATABASE  USER  ADDRESS  METHOD  [OPTIONS]                 +
 #                                                                       +
 # (The uppercase items must be replaced by actual values.)              +
 #                                                                       +
 # The first field is the connection type: "local" is a Unix-domain      +
 # socket, "host" is either a plain or SSL-encrypted TCP/IP socket,      +
 # "hostssl" is an SSL-encrypted TCP/IP socket, and "hostnossl" is a     +
 # plain TCP/IP socket.                                                  +
 #                                                                       +
 # DATABASE can be "all", "sameuser", "samerole", "replication", a       +
 # database name, or a comma-separated list thereof. The "all"           +
 # keyword does not match "replication". Access to replication           +
 # must be enabled in a separate record (see example below).             +
 #                                                                       +
 # USER can be "all", a user name, a group name prefixed with "+", or a  +
 # comma-separated list thereof.  In both the DATABASE and USER fields   +
 # you can also write a file name prefixed with "@" to include names     +
 # from a separate file.                                                 +
 #                                                                       +
 # ADDRESS specifies the set of hosts the record matches.  It can be a   +
 # host name, or it is made up of an IP address and a CIDR mask that is  +
 # an integer (between 0 and 32 (IPv4) or 128 (IPv6) inclusive) that     +
 # specifies the number of significant bits in the mask.  A host name    +
 # that starts with a dot (.) matches a suffix of the actual host name.  +
 # Alternatively, you can write an IP address and netmask in separate    +
 # columns to specify the set of hosts.  Instead of a CIDR-address, you  +
 # can write "samehost" to match any of the server's own IP addresses,   +
 # or "samenet" to match any address in any subnet that the server is    +
 # directly connected to.                                                +
 #                                                                       +
 # METHOD can be "trust", "reject", "md5", "password", "scram-sha-256",  +
 # "gss", "sspi", "ident", "peer", "pam", "ldap", "radius" or "cert".    +
 # Note that "password" sends passwords in clear text; "md5" or          +
 # "scram-sha-256" are preferred since they send encrypted passwords.    +
 #                                                                       +
 # OPTIONS are a set of options for the authentication in the format     +
 # NAME=VALUE.  The available options depend on the different            +
 # authentication methods -- refer to the "Client Authentication"        +
 # section in the documentation for a list of which options are          +
 # available for which authentication methods.                           +
 #                                                                       +
 # Database and user names containing spaces, commas, quotes and other   +
 # special characters must be quoted.  Quoting one of the keywords       +
 # "all", "sameuser", "samerole" or "replication" makes the name lose    +
 # its special character, and just match a database or username with     +
 # that name.                                                            +
 #                                                                       +
 # This file is read on server startup and when the server receives a    +
 # SIGHUP signal.  If you edit the file on a running system, you have to +
 # SIGHUP the server for the changes to take effect, run "pg_ctl reload",+
 # or execute "SELECT pg_reload_conf()".                                 +
 #                                                                       +
 # Put your actual configuration here                                    +
 # ----------------------------------                                    +
 #                                                                       +
 # If you want to allow non-local connections, you need to add more      +
 # "host" records.  In that case you will also need to make PostgreSQL   +
 # listen on a non-local interface via the listen_addresses              +
 # configuration parameter, or via the -i or -h command line switches.   +
                                                                         +
 # CAUTION: Configuring the system for local "trust" authentication      +
 # allows any local user to connect as any PostgreSQL user, including    +
 # the database superuser.  If you do not trust all your local users,    +
 # use another authentication method.                                    +
                                                                         +
                                                                         +
 # TYPE  DATABASE        USER            ADDRESS                 METHOD  +
                                                                         +
 # "local" is for Unix domain socket connections only                    +
 local   all             all                                     trust   +
 # IPv4 local connections:                                               +
 host    all             all             127.0.0.1/32            trust   +
 # IPv6 local connections:                                               +
 host    all             all             ::1/128                 trust   +
 # Allow replication connections from localhost, by a user with the      +
 # replication privilege.                                                +
 local   replication     all                                     trust   +
 host    replication     all             127.0.0.1/32            trust   +
 host    replication     all             ::1/128                 trust   +
 host  all mydb  ::1/128                 trust                           +
(1 row)

All you need to do from now on is to reload the configuration and you’re done:

postgres=# select pg_reload_conf();
 pg_reload_conf 
----------------
 t
(1 row)

Of course: Use with caution!

The post Modifying pg_hba.conf from inside PostgreSQL appeared first on Blog dbi services.

[New Feature] Work requests for the Compute service: Oracle Cloud (OCI)

Online Apps DBA - Sat, 2019-06-29 03:19

[New Feature] Work requests for the Compute service: Oracle Cloud (OCI) Recent Oracle Update [May 29, 2019]: You can track the status of long-running operations in the Compute service (like Create Instance, Custom Image, Instance Pool, Updating & Deleting Instance Pool) using work requests. To know more about this new update, check our blog https://k21academy.com/oci44 […]

The post [New Feature] Work requests for the Compute service: Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

DGMGRL Using Help To Learn About New Validate Features

Michael Dinh - Fri, 2019-06-28 10:57

Wouldn’t it be nicer and much better if Oracle added (NF) for new features to the help syntax?

DGMGRL for Linux: Release 12.2.0.1.0

[oracle@db-fs-1 bin]$ ./dgmgrl /
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Jun 28 17:49:16 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "orclcdb"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

DGMGRL>

DGMGRL for Linux: Release 18.0.0.0.0

[oracle@ADC6160274 GDS]$ dgmgrl /
DGMGRL for Linux: Release 18.0.0.0.0 - Production on Fri Jun 28 15:54:36 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected to "chi"
Connected as SYSDG.
DGMGRL> help validate

Performs an exhaustive set of validations for a member

Syntax:

  VALIDATE DATABASE [VERBOSE] <database name>;

  VALIDATE DATABASE [VERBOSE] <database name> DATAFILE <datafile number>
    OUTPUT=<file name>;

  VALIDATE DATABASE [VERBOSE] <database name> SPFILE;

  VALIDATE FAR_SYNC [VERBOSE] <far_sync name>
    [WHEN PRIMARY IS <database name>];

  VALIDATE NETWORK CONFIGURATION FOR { ALL | <member name> }; [*** NF ***]

  VALIDATE STATIC CONNECT IDENTIFIER FOR { ALL | <database name> }; [*** NF ***]

DGMGRL>

validate network configuration

DGMGRL> validate network configuration for all;
Connecting to instance "sales" on database "sfo" ...
Connected to "sfo"
Checking connectivity from instance "sales" on database "sfo to instance "sales" on database "chi"...
Succeeded.
Connecting to instance "sales" on database "chi" ...
Connected to "chi"
Checking connectivity from instance "sales" on database "chi to instance "sales" on database "sfo"...
Succeeded.

Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

validate static connect identifier

DGMGRL> validate static connect identifier for all;
Oracle Clusterware is not configured on database "sfo".
Connecting to database "sfo" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=SLC02PNY.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sfo_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "sfo".

Oracle Clusterware is not configured on database "chi".
Connecting to database "chi" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ADC6160274.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=chi_DGMGRL)(INSTANCE_NAME=sales)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "chi".

DGMGRL>

opt_estimate 3

Jonathan Lewis - Fri, 2019-06-28 07:12

This is just a quick note to throw out a couple of the lesser-known options for the opt_estimate() hint – and they may be the variants that are likely to be most useful since they address a problem where the optimizer can produce consistently bad cardinality estimates. The first is the “group by” option – a hint that I once would have called a “strategic” hint but which more properly ought to be called a “query block” hint. Here’s the simplest possible example (tested under 12.2, 18.3 and 19.2):


rem
rem     Script:         opt_est_gby.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2019
rem 

create table t1
as
select
        rownum                  id,
        mod(rownum,200)         n1,
        lpad(rownum,10,'0')     v1,
        rpad('x',100)           padding
from
        dual
connect by
        level <= 3000
;

set autotrace on explain

prompt  =============================
prompt  Baseline cardinality estimate
prompt  (correct cardinality is 10)
prompt  Estimate will be 200
prompt  =============================

select  /*+
                qb_name(main)
        */
        mod(n1,10), count(*) 
from    t1
group by 
        mod(n1,10)
;

I’ve generated a table of 3,000 rows with a column n1 holding 15 rows each of 200 distinct values. The query then aggregates on mod(n1,10) so it has to return 10 rows, but the optimizer doesn’t have a mechanism for inferring this and produces the following plan – the Rows value from the HASH GROUP BY at operation 1 is the only thing we’re really interested in here:


---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   200 |   800 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |   200 |   800 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------

It looks as if the optimizer’s default position is to use num_distinct from the underlying column as the estimate for the aggregate. We can work around this in the usual two ways with an opt_estimate() hint. First, let’s tell the optimizer that it’s going to over-estimate the cardinality by a factor of 10:


select  /*+
                qb_name(main)
                opt_estimate(@main group_by, scale_rows = 0.1)
        */
        mod(n1,10), count(*) 
from    t1 
group by 
        mod(n1,10)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    20 |    80 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    20 |    80 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------

The hint uses group_by as the critical option parameter, and then I’ve used the standard scale_rows=nnn to set a scaling factor that should be used to adjust the result of the default calculation. At 10% (0.1) this gives us an estimate of 20 rows.

Alternatively, we could simply tell the optimizer how many rows we want it to believe will be generated for the aggregate – let’s just tell it that the result will be 10 rows.

select  /*+
                qb_name(main)
                opt_estimate(@main group_by, rows = 10)
        */
        mod(n1,10), count(*) 
from    t1 
group by 
        mod(n1,10)
;

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    10 |    40 |    10  (10)| 00:00:01 |
|   1 |  HASH GROUP BY     |      |    10 |    40 |    10  (10)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
---------------------------------------------------------------------------

We use the same group_by as the critical parameter, with rows=nnn.

Next steps

After an aggregation there’s often a “having” clause, so you might consider using the group_by option to fix up the cardinality of the having clause if you know what the normal effect of the having clause should be. For example: “having count(*) > NNN” will use the optimizer’s standard 5% “guess” and “having count(*) = NNN” will use the standard 1% guess. However, having seen the group_by option I took a guess that there might be a having option to the opt_estimate() hint as well, so I tried it – with autotrace enabled here are three queries: first the unhinted baseline (which uses the standard 5% on my having clause), then a couple of others with hints to tweak the cardinality:

select  /*+
                qb_name(main)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

select  /*+
                qb_name(main)
                opt_estimate(@main having scale_rows=0.4)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

select  /*+
                qb_name(main)
                opt_estimate(@main group_by scale_rows=2)
                opt_estimate(@main having scale_rows=0.3)
        */
        mod(n1,10), count(*)
from    t1
group by
        mod(n1,10)
having
        count(*) > 100
;

The first query gives us the baseline cardinality of 10 (5% of 200). The second query scales the having cardinality down by a factor of 0.4 (which means an estimate of 4). The final query first doubles the group by cardinality (to 400), then scales the having cardinality (which would have become 20) down by a factor of 0.3, with the nett effect of producing a cardinality of 6. Here are the plans.

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |    10 |    40 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |   --  10
|   2 |   HASH GROUP BY     |      |    10 |    40 |    10  (10)| 00:00:01 |   -- 200
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     4 |    16 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |    --   4
|   2 |   HASH GROUP BY     |      |     4 |    16 |    10  (10)| 00:00:01 |    -- 200
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |     6 |    24 |    10  (10)| 00:00:01 |
|*  1 |  FILTER             |      |       |       |            |          |    --   6
|   2 |   HASH GROUP BY     |      |     6 |    24 |    10  (10)| 00:00:01 |    -- 400
|   3 |    TABLE ACCESS FULL| T1   |  3000 | 12000 |     9   (0)| 00:00:01 |
----------------------------------------------------------------------------

It’s a little sad that the FILTER operation shows no estimate while the HASH GROUP BY operation shows the estimate after the application of the having clause. It would be nice to see the plan reporting the figures which I’ve added at the end of the lines for operations 1 and 2.

You may wonder why one would want to increase the estimate for the group by then reduce it for the having. While I’m not going to go to the trouble of creating a worked example it shouldn’t be too hard to appreciate the idea that the optimizer might use complex view merging to postpone a group by until after a join – so increasing the estimate for a group by might be necessary to ensure that that particular transformation doesn’t happen, while following this up with a reduction to the having might then ensure that the next join is a nested loop rather than a hash join. Of course, if you don’t need to be this subtle you might simply take advantage of yet another option to the opt_estimate() hint, the query_block option – but that will (probably) appear in the next article in this series.

 

Should the Oracle APEX Community Care About Autonomous Database?

Joel Kallman - Fri, 2019-06-28 06:14


This past week, Oracle announced the availability of Oracle APEX, SQL Developer Web and Oracle REST Data Services on Oracle Autonomous Database.  If you're in the APEX community, should you care?  I say "absolutely yes!", but not for the reasons you might suspect.

Autonomous Database is strategic to Oracle.  Just read the transcript from the recent Oracle quarterly earnings conference call and it will be obvious to you.  Autonomous is an advancement in technology that has significant investment from Oracle and very real benefits for customers.  It's a clear market differentiator - I do truly believe this, it's not merely my marketing spin.  And now, with the addition of Oracle APEX & SQL Developer Web & Oracle REST Data Services, I think this combination of technologies provides even more capabilities to this platform and even greater differentiation.  What other service provides elastic, autonomous capabilities, application design and proven low code application development, out-of-the-box?  Did I mention that this also happens to include the world's most popular database, Oracle Database?

The benefits of low code application development are real.  And Low Code + Autonomous Database is the ideal combination.  Low code is about reducing costs, delivering faster, with greater consistency, and being usable by a broader range of skill sets.  Some of the benefits of Autonomous Database are equivalent - less cost, instant availability, usable by others who may not be world-class experts.  It has been a long multi-year confluence of events that has brought us together here.

The APEX community is the envy of others at Oracle.  Even people who aren't APEX fans recognize the APEX community's passion.  But where did this come from?  Do people really get excited about a tool?  No.  They get excited about what they can do with a tool - how it helps them deliver a solution, and be successful.  A carpenter doesn't get passionate about his dual-slide compound miter saw because it's a cool tool.  He gets satisfaction about what he can actually do with that tool versus a hand saw.  When you get a pay raise or praise or a promotion because of what you've been able to deliver with APEX and ORDS and Oracle Database, that's a reason to get excited!  And I think that is ultimately the real story behind the enviable, tangible energy in the APEX community.  Countless people have had many great successes with this combination of technologies, and success begets success.

Let's say you're in the APEX community, you saw this announcement about APEX on Autonomous, but you're not interested in cloud.  Or, as Andre de Souza so eloquently stated on Twitter, "I know it’s big news, just does not affect 99,9% of current #orclapex developers I’m guessing."  Should you care?  I say yes, and here's why.  The great APEX community that I mention above, which has been so successful with APEX & ORDS & Oracle Database over the years, has become very large across the globe, and with not a lot of help from Oracle.  Make no mistake - Oracle does invest in APEX, millions of dollars every year.  But I still come across Oracle Database customers who have simply never heard of APEX.  This is because there has not been much promotion from Oracle marketing or public relations or even sales.  All of this is about to change.  Why?  Because APEX is on Autonomous Database, and Autonomous Database is strategic to Oracle.  You will probably see more communication and discussion from Oracle about APEX than probably the last 20 years combined.  Low code resonates with customers, APEX is proven, and everyone has application development needs.

How does this benefit someone in the APEX community?  Simple:

  1. Awareness and interest will rise by people who have never heard about APEX before, both existing on-premises customers and net new customers.
  2. There will be greater demand for APEX and database development talent.  If you have experience with APEX, with a proven track record of delivering solutions with APEX, you're a very attractive person.  Perhaps the rate you charge has now gotten a bit higher.  You'll certainly gain upward mobility.
  3. You'll no longer have to introduce someone to APEX for the very first time, or counter the claim that "it's not strategic."
  4. As our friends from Explorer UK say, with APEX, they "develop cloud ready applications".  And you've been doing this for years.  Don't be afraid to make this claim.  When and if you're ready for cloud, you're already out of the gate.  The same APEX apps you developed on-premises run and look and feel exactly the same in the cloud.  Who has been developing cloud-ready apps for years?  You!

So.  Even if you're not into "cloud" but into APEX, this announcement and these capabilities on Autonomous Database have a material impact on you and everyone else in the APEX community.  Your skills and experience will become more valued, and we should expect the market and interest and demand to grow.

Everything is not perfect, and we on the APEX team still have a lot of very hard work ahead of us.  But these are exciting times and it's what we've labored on for the past 20 years, to get to this point.  For those who have been with the APEX community for so many years, congratulations!  You've bet on the right horse.  Just fasten your seat belt.

Windocks and K8s support

Yann Neuhaus - Fri, 2019-06-28 00:33

I recently got the 4.08 update from the Windocks team and I was very excited to evaluate some of the new features. The first cool one I want to present in this blog concerns the Kubernetes support for deploying Windocks containers, which will definitely make my application deployments easier. Let’s say you want to deploy an application that is tied to a Windocks container for SQL Server. In a previous blog post I explained why we are using Windocks in our context. With previous versions of Windocks, we had to write custom scripts to deploy applications on K8s that are tied to a Windocks container. With the new version 4.08, this process is simplified because both the applications and their related Windocks containers are directly deployable on K8s by using a YAML deployment file.

In fact, the new way consists of deploying a Windocks SQL Server proxy on K8s that works in conjunction with a Windocks server. Once the SQL Server proxy is deployed, a corresponding Windocks container spins up with its specific parameters, as shown in the picture below:

 

First of all, in order to secure access between K8s and the Windocks server, authentication is required and we need to provide credential information that will be stored in the proxy-secrets secret in K8s. The SA password is also included in this secret and will be used to set up the SA account when the Windocks container spins up.

$ kubectl create secret generic proxy-secrets --from-literal=WINDOCKS_REQUIRED_USERNAME='clustadmin' --from-literal=WINDOCKS_REQUIRED_PASSWORD='StrongPassword' --from-literal=WINDOCKS_REQUIRED_CONTAINER_SAPASSWORD='sa_password'
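
To quickly verify that the three keys made it into the secret (kubectl shows only the key names and byte counts, not the values), a simple check is:

$ kubectl describe secret proxy-secrets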

 

The next step consists of deploying the Windocks SQL proxy with the specific environment variables, including WINDOCKS_REQUIRED_HOSTNAME (Windocks server name or IP address), WINDOCKS_REQUIRED_IMAGE_NAME (Windocks-based image used for the container) and WINDOCKS_SQL_PROXY_OPTIONAL_LISTENING_PORT (optional).

  • The Windocks SQL Proxy YAML file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: windocks-sql-proxy-secure 
  labels:
    app: sqlproxy-secure 
spec:
  replicas: 1 
  template:
    metadata:
      labels:
        app: sqlproxy-secure 
        tier: frontend
    spec:
      containers:
      - name: sqlproxy-secure-app 
        image: windocks/windocks-sql-server-proxy 
        imagePullPolicy: Always
        ports:
        - name: tcp-proxy
          containerPort: 3087
        - name: tls-proxy
          containerPort: 3088
        envFrom:
          - secretRef:
              name: proxy-secrets
        env:
          - name: PROJECT_ID
            value: project_id_for_GKE_deployment_optional
          - name: WINDOCKS_REQUIRED_HOSTNAME
            value: xx.xxx.xxx.xxx
          - name: WINDOCKS_REQUIRED_IMAGE_NAME
            value: 2012_ci
          - name: WINDOCKS_SQL_PROXY_OPTIONAL_LISTENING_PORT
            value: "3087"

 

If we want to make the SQL proxy pod accessible from outside, a service is needed, but this is not mandatory depending on the context. Note that you may also use a TLS connection to secure the network between K8s and the Windocks server.

  • The Windocks service YAML file
apiVersion: v1
kind: Service
metadata:
  name: windocks-sql-proxy-secure
  labels:
    app: sqlproxy-secure
    tier: frontend
spec:
  sessionAffinity: ClientIP
  type: LoadBalancer
  ports:
  - port: 3087
    name: tcp-proxy-secure-service
    targetPort: 3087
  - port: 3088
    name: tls-proxy-secure-service
    targetPort: 3088
  selector:
    app: sqlproxy-secure
    tier: frontend
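
With the two manifests saved locally (the file names below are mine, not imposed by Windocks), they could be applied directly with kubectl:

$ kubectl apply -f windocks-sql-proxy-deployment.yaml
$ kubectl apply -f windocks-sql-proxy-service.yaml

In my case, though, I preferred to package them into a Helm chart, as described next.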

 

Let’s give it a try on my Azure infrastructure, including an AKS cluster and a Windocks server installed in an Azure VM. I also took the opportunity to create my own Helm chart from the YAML files provided by the Windocks team; it will certainly make my deployments easier. Here is the command I used to deploy my Windocks Helm chart on my AKS cluster.

$ helm install --name windocks2012 --namespace dmk --set Windocks.Image=2012_ci --set Windocks.Port=3089 --set Windocks.PortSSL=3090 .
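
The chart itself is not reproduced in this post, so purely as an illustration of its structure, its values.yaml could look like the sketch below; the keys simply mirror the --set flags above, and the defaults are the proxy ports from the deployment manifest shown earlier.

# values.yaml (sketch, my own layout)
Windocks:
  Image: 2012_ci   # Windocks base image used to spin up the SQL Server container
  Port: 3087       # TCP port exposed by the SQL proxy service
  PortSSL: 3088    # TLS port exposed by the SQL proxy service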

 

Deployment is performed in a specific namespace named dmk and the 2012_ci image is used as the base image for my Windocks container. I will be able to connect to my Windocks container on port 3089 through the SQL proxy deployed on K8s. After a few seconds the following resources were deployed within my dmk namespace, including a Windocks SQL proxy pod and the Windocks SQL proxy service.

$ kubectl get all -n dmk
NAME                                                                  READY   STATUS    RESTARTS   AGE
pod/windocks2012-sqlproxy-securewindocks-sql-proxy-secure-56fb8694m   1/1     Running   0          13m

NAME                                                            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
                 AGE
service/backend                                                 ClusterIP      10.0.126.154   <none>          80/TCP
                 8d
service/windocks2012-sqlproxy-securewindocks-sql-proxy-secure   LoadBalancer   10.0.252.235   xx.xx.xxx.xxx   3089:30382/TCP,3090:30677/TCP   44m

NAME                                                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/windocks2012-sqlproxy-securewindocks-sql-proxy-secure   1         1         1            1           44m

NAME                                                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/windocks2012-sqlproxy-securewindocks-sql-proxy-secure-56fbdb5c96   1         1         1       44m

 

Once deployed, the SQL proxy will redirect all connections from port 3089 to the container port after spinning up the corresponding Windocks container on the Windocks server. We may get some details by taking a look at the SQL proxy logs on K8s. As a reminder, the container port is allocated dynamically by the Windocks server by default, and the SQL proxy gets it automatically for connection redirection.

…
Valid response for creating Windocks container
Container id is b1201aaaba3b4cd047953b624e541e26500024e42e6381936fc7b526b5596a99
Container port is 10001
Setting up tcp server
redirecting connections from 127.0.0.1:3089 to xx.xxx.xxx.xxx:10001 
…

 

Let’s try to connect by using mssql-cli with the external IP of the SQL proxy service and port 3089. The connection redirect is effective and I can interact with my Windocks container on local port 10001, as shown by the query further below.
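
The mssql-cli call itself is a plain SQL Server connection; as a sketch, with the SA password being the one stored in the proxy-secrets secret created earlier:

$ mssql-cli -S xx.xx.xxx.xxx,3089 -U sa -P 'sa_password'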

master> SELECT top 1 c.local_net_address, c.local_tcp_port
....... FROM sys.dm_exec_connections as c; 
+---------------------+------------------+
| local_net_address   | local_tcp_port   |
|---------------------+------------------|
| 172.18.0.5          | 10001            |
+---------------------+------------------+

 

The Windocks container for SQL Server came up with my 3 test databases as expected:

master> \ld+
+-------------------+-------------------------+-----------------------+------------------------------+
| name              | create_date             | compatibility_level   | collation_name               |
|-------------------+-------------------------+-----------------------+------------------------------|
| master            | 2003-04-08 09:13:36.390 | 110                   | SQL_Latin1_General_CP1_CI_AS |
| tempdb            | 2019-06-27 20:04:04.273 | 110                   | SQL_Latin1_General_CP1_CI_AS |
| model             | 2003-04-08 09:13:36.390 | 110                   | SQL_Latin1_General_CP1_CI_AS |
| msdb              | 2012-02-10 21:02:17.770 | 110                   | SQL_Latin1_General_CP1_CI_AS |
| AdventureWorksDbi | 2019-06-27 20:04:03.537 | 100                   | Latin1_General_100_CS_AS     |
| ApplixEnterprise  | 2019-06-27 20:04:04.477 | 90                    | SQL_Latin1_General_CP1_CI_AS |
| dbi_tools         | 2019-06-27 20:04:05.153 | 100                   | French_CS_AS                 |
+-------------------+-------------------------+-----------------------+------------------------------+

 

From the Windocks server, I can get a picture of the provisioned containers. The interesting one in our case is referenced by the name k8s-windocks2012/xxxx:

PS F:\WINDOCKS\SQL2012> docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
e9dbe5556b2f        2012_ci             ""                  29 minutes ago      Stopped             10002/              dab/Windocks-id:31432367-c744-4ae3-8248-cb3fb3d2792e
b1201aaaba3b        2012_ci             ""                  13 minutes ago      Started             10001/              k8s-windocks2012/Windocks-id:cfa58c38-d168-4c04-b4c8-12b0552b93ad

 

Well, in a nutshell, this is a feature we will certainly consider integrating into our Azure DevOps pipeline. Stay tuned, other blog posts will come later.

See you!

 

 

 

Cet article Windocks and K8s support est apparu en premier sur Blog dbi services.

Deploy Artifacts from JFrog Artifactory to PCF directly from IntelliJ IDEA using Spinnaker

Pas Apicella - Thu, 2019-06-27 19:18
Spinnaker is an open-source, multi-cloud continuous delivery platform that helps you release software changes with high velocity and confidence. In the demo below we will show how to publish a Spring Boot application to JFrog Artifactory, which Spinnaker watches for changes through a trigger and then deploys to PCF.

This demo assumes you have the following:
  • A JFrog Artifactory repository configured to accept a Maven artifact—the JAR for your application (Spring Boot JAR in this example)
  • Spinnaker 1.14.7 or later installed on a k8s cluster
Steps

Setup JFrog Artifactory

As a Maven repository, Artifactory is both a source for artifacts needed for a build, and a target to deploy artifacts generated in the build process. For more details on how to set that up, refer to this short video if required

https://jfrog.com/screencast/setting-maven-repository-jfrog-artifactory-less-one-minute/

The screenshot below shows that we are ready to use JFrog Artifactory as our Maven repository





Configure Spinnaker

1. Connect to the Halyard pod as shown below

$ kubectl exec --namespace default -it myspinnaker-spinnaker-halyard-0 bash

2. Enable the Artifactory repository

$ hal config repository artifactory enable

3. Add a search for your repository

$ hal config repository artifactory search add spring-artifactory --base-url http://10.195.75.147/artifactory --repo libs-release-local --groupId com.example --username admin --password

4. Enable the Maven artifact provider

$ hal config artifact maven enable

5. Add an artifact account for Maven

$ hal config artifact maven account add spring-artifactory-maven --repository-url http://10.195.75.147/artifactory/libs-release-local

6. Apply your changes

$ hal deploy apply
Configure Spring Boot Application

1. To deploy build artifacts through Artifactory you need to add a deployment element with the URL of a target local repository to which you want to deploy your artifacts. You can find this in the "Set Me Up" wizard for your Maven repository in JFrog Artifactory



2. Copy the pom.xml entry and place it into your local Maven project POM file as shown below
  
<distributionManagement>
<repository>
<id>central</id>
<name>artifactory-artifactory-0-releases</name>
<url>http://10.195.75.147:80/artifactory/libs-release-local</url>
</repository>
</distributionManagement>
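
For "mvn deploy" to authenticate against Artifactory, the repository id above ("central") also needs matching credentials in your local Maven settings. A minimal sketch of ~/.m2/settings.xml, assuming the same admin account used when configuring the Spinnaker search (replace with your own credentials):

<settings>
  <servers>
    <server>
      <!-- must match the <id> of the repository in distributionManagement -->
      <id>central</id>
      <username>admin</username>
      <password>your-artifactory-password</password>
    </server>
  </servers>
</settings>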

The basic Spring Boot project used for this demo is as follows:



Configure Spinnaker Pipeline

1. Create a new pipeline for your application. In the pipeline configuration, add an Artifactory trigger to the Automated Triggers, as shown below. We are referencing what we created above in Spinnaker




2. In the Artifact Constraints dropdown for the Artifactory trigger, select “Define a new artifact…” and populate the Expected Artifact form with the details of the artifact that will be published to the Artifactory repository


3. Add a “Deploy” stage to the pipeline, which will simply deploy our application to PCF.





Now we are ready to publish the application artifact to the Artifactory repository.

Kick it off

To verify our setup we just need to do a "mvn deploy" from the local Spring Boot Maven project as shown below

1. Deploy the artifact JAR using "mvn deploy" from your local Maven project as shown below, which will publish the artifact to your JFrog Artifactory Maven repo.

$ mvn deploy

Output from command above

[INFO] --- maven-deploy-plugin:2.8.2:deploy (default-deploy) @ spinnaker-artifactory-demo ---
Uploading to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/0.0.1-RELEASE/spinnaker-artifactory-demo-0.0.1-RELEASE.jar
Uploaded to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/0.0.1-RELEASE/spinnaker-artifactory-demo-0.0.1-RELEASE.jar (17 MB at 83 kB/s)
Uploading to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/0.0.1-RELEASE/spinnaker-artifactory-demo-0.0.1-RELEASE.pom
Uploaded to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/0.0.1-RELEASE/spinnaker-artifactory-demo-0.0.1-RELEASE.pom (1.8 kB at 886 B/s)
Downloading from central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/maven-metadata.xml
Downloaded from central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/maven-metadata.xml (424 B at 671 B/s)
Uploading to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/maven-metadata.xml
Uploaded to central: http://10.195.75.147:80/artifactory/libs-release-local/com/example/spinnaker-artifactory-demo/maven-metadata.xml (368 B at 218 B/s)
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  03:33 min
[INFO] Finished at: 2019-06-28T09:59:00+10:00
[INFO] ------------------------------------------------------------------------



2. Verify from Spinnaker that your Artifactory trigger has fired


3. From PCF Applications Manager we can see our deployed application, created with the server group settings defined in the pipeline and the JAR published to Artifactory



4. Once the pipeline is complete, you can view information about what was created and the various stages, as shown below



More Information

Deploy Artifacts from Artifactory to CF
https://www.spinnaker.io/guides/tutorials/codelabs/artifactory-to-cf/

Categories: Fusion Middleware

work Agile in a GxP-regulated environment

Yann Neuhaus - Thu, 2019-06-27 09:24

On 4 June 2019 I followed an invitation to

wega-it’s Know-how & Networking Breakfast 2 2019 on “Agile Validation in GxP Projects”.

So they were to discuss Agility in the context of GxP regulation.

I had some earlier exposure to various kinds of compliance roles and topics, and my current work environment is in the highly regulated Pharma industry. So I was really wondering (and hoping to learn) how you can possibly bring the two points of view of Agility and GxP regulation together. The Literaturhaus Basel was to see a very special type of literature that day: not a formal presentation, but a role-play performance between these two viewpoints, represented by Evelyne Daniel, an experienced GxP validation expert, and Mathias Fuchs, an equally experienced Scrum master, both from wega-IT. A very nice idea, very appropriate for the topic!

What is Compliance (GxP) and what is Agile?

Typically in GxP compliance we work along the so-called V-model. In its rigidity and intended plannability it of course corresponds largely to the waterfall model of traditional software development. Opposed to this, the Agile Manifesto (2001) criticizes the very basics of these traditional ways of working. Remember the iconic claims like “Working software over comprehensive Documentation” and “Responding to change over following a Plan”. But how would you ever get acceptance in the compliance world without full documentation and planning!?! When I quickly browsed the internet, I found a multitude of statements and proposals which would certainly merit a separate blog post. For now, I will give a quick summary of the wega breakfast presentation and some touch points with our current working environment in the Pharma industry.

Although in my current work environment we are not actually driving GxP Validation projects, we are still subject to the very tight GxP regulation. In the processes of Change and Release Management, this is reflected in the rigid rules of testing and documentation, to name just the most obvious examples. The background, of course, is the definition of Compliance and its Validation: the goal is to “establish documented evidence” to assure compliance, quality etc. These requirements hold independently of the quality, completeness or even up-to-date status of the pre-defined processes and rules! Call this inflexible and cumbersome! Any adaptation (update!) of the processes and rules has to go through complicated, formal administrative processes and is hence very slow. Consider this in our fast-moving (not only IT) world!

What is an MVP?

A nice play on words was interjected in the stage discussion: the acronym MVP has a very clear meaning as a basic concept for both sides, it just is not the same one: MVP = Master Validation Plan (in GxP Validation) versus Minimum Viable Product (in Agile or Lean Software Development).

How to bring them together?

Now how to bring the core aspirations of Agile Development like customer focus, flexibility and speed into the Compliance world? A first inevitable step in the V-model world: break up the (dead?) lock between a complete finalization of the User Requirements Specification and the setup of a complete Validation Plan prescribing all Qualification criteria (IQ, OQ, PQ). Definition of Done (DoD) plays a major role when trying to cut the end-to-end Development-Validation elephant into smaller pieces. Inclusion of Validation into the “daily” Development activities is another must, instead of adding Validation only at the end of Development phases. Yet another core principle from the Agile side is ensuring team maturity and mindset. The much-hailed diversity is opposed to purely compliance-oriented expert teams, striving for innovation and creativity in the team.

WEGA breakfast - Agile Validation in GxP projects

Some basic approaches

The final answer on how to – methodically – combine or maybe rather “emulsify” Agility and Compliance Validation comes as no surprise: there is no one-size-fits-all method. Rather three obvious basic approaches were presented.

  1. introducing Agility right between the left (Specifications) and the right (Qualifications) arms of the V-model, probably using some kind of piloting or prototyping
  2. including Validation into the Agile Development, almost doing Validation in each Agile sprint
  3. appending V-model Validation at the end of an Agile development.

The above-mentioned end-to-end Development-to-Validation elephant has to be broken into smaller, more manageable units. Each specific project situation will have its own possible and best way to do it.

Think innovative and creative!

Thanks to wega-informatik (www.wega-it.com)  for organizing this creative and informative event.

 

Cet article work Agile in a GxP-regulated environment est apparu en premier sur Blog dbi services.

Oracle Exadata X8 key capabilities summary

Syed Jaffar - Thu, 2019-06-27 03:26
Below is a summary of some of the key benefits of the Exadata X8 database machine, both software and hardware:

Extreme (unmatched) performance
According to Oracle, the new X8 database machine is capable of delivering up to 60% faster throughput compared to earlier Exadata database machines. Having said that, you can pull a 50GB database in under one second.

Cost effective extended storage 
Petabytes of cost-effective storage with an optional software license. This can significantly reduce the storage cost.

High Memory
Can accommodate up to 28.5 TB of system memory per rack. Good for heavy-workload systems, such as in-memory databases.

Increased storage capacity
In contrast to earlier models, the X8 comes with a 40% increase in disk capacity. Each X8 EF system comes with 50TB of raw flash capacity, while the X8 HC with the XT storage option comes with 160TB of raw capacity.

High-performance connectivity
The X8 also comes with significant improvements in connectivity. It supports up to 400Gb of client connectivity over multiple 25Gb Ethernet links.

OLTP Read/Write performance
A full X8 rack can typically do 6.5 million random reads and 5.7 million random writes per second for 8K database I/Os.
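
To put the read figure into perspective, a quick back-of-the-envelope calculation: 6.5 million 8K reads per second works out to roughly 6,500,000 x 8 KB, i.e. just over 50 GB/s of random read throughput.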

Automated features
In addition to the above hardware improvements, the X8 also incorporates autonomous database capability. With ML and AI capabilities, databases are auto-tuned and auto-maintained.

List of technical specifications:

  • Up to 912 CPU cores and 28.5 TB memory per rack for database processing
  • Up to 576 CPU cores per rack dedicated to SQL processing in storage
  • From 2 to 19 database servers per rack
  • From 3 to 18 storage servers per rack
  • Up to 920 TB of flash capacity (raw) per rack
  • Up to 3.0 PB of disk capacity (raw) per rack
  • Hybrid Columnar Compression often delivers 10X-15X compression ratios
  • 40 Gb/second (QDR) InfiniBand Network
  • Complete redundancy for high availability
