Feed aggregator

Running Oracle JET on Heroku with Node.js (JET Showcase)

Andrejus Baranovski - Fri, 2019-06-07 09:03
I have implemented a JET (more about Oracle JET) showcase app using data visualization components. The app shows historical weather data for the city of Boston; the dataset is taken from Kaggle. Switching years makes the data visualizations change and show new data - I love how the polar chart is updated. A calendar displays the temperature for each day of the year using the JET picto chart component:


The app is deployed on Heroku and available at this URL. Heroku offers a $7 per month account with analytics and better resources, but there is a free option too (it comes with sleep after 30 minutes of inactivity) - the free option is good for experimentation, as in this case.

Heroku dashboard for the deployed JET app:


The free deployment comes without the analytics option:


The app comes with two options - Dashboard and Histogram. The dashboard allows switching between years and shows a polar chart along with a daily temperature calendar:


The histogram displays the same data in a different view:


The app includes a Web Component implementation - yes, Web Components are a standard feature in JET. The toolbar, where you can switch years, is implemented as a Web Component:


The Web Component is used in both UIs - dashboard and histogram:


Visualization components get their data through Knockout.js observable variables:


Variables are initialized in JS functions:


Resources:

1. Heroku deployment guide for Node.js
2. Node.js app which is deployed on Heroku - GitHub. JET content is inside the public folder; it is copied from the JET app web folder after running ojet build --release
3. Oracle JET app - GitHub

Oracle Advances Safer, More Transparent Retail Supply Chain

Oracle Press Releases - Fri, 2019-06-07 07:00
Press Release
Oracle Advances Safer, More Transparent Retail Supply Chain
Smarter software gives consumers peace of mind on the goods they buy and retailers the agility to resolve product issues quickly

WORLD FOOD SAFETY DAY, Redwood Shores, Calif.—Jun 7, 2019

The retail industry is more complex than ever. Not only are consumers looking for high-quality goods at fair prices, but they also want assurance that their purchases are safe and ethically sourced. That requires retailers to maintain transparency across their global supply chain networks. With new advancements in reporting and analytics and continued extension of integrations, Oracle Retail Brand Compliance Management Cloud Service is helping retailers monitor the integrity of their materials and end products, to improve customer experiences and protect their brands. 

Oracle Retail Brand Compliance is specifically designed to enable retailers, restaurants, food service providers and manufacturers to source, develop, track and market products. As products are developed, the solution audits and manages all aspects of the process, creating accurate and certified labeling detail against local regulatory and industry policies. As such, brands can rapidly and nimbly respond to and rectify product and industry incidents.

In 2016, a multistate listeria outbreak in the U.S. impacted several name brand grocery chains. Tainted frozen vegetables and fruits sourced from one plant were included in approximately 358 consumer products sold under 42 separate brands. Using Oracle Retail Brand Compliance, one well-known grocer was able to quickly pull SKUs, identify where impacted product was being sold and communicate with customers, mitigating the situation, and protecting its shoppers.

“Delivering on your brand promise today is as much about quality and trust as it is about cost,” said Jeff Warren, vice president of strategy and solution management, Oracle Retail. “Customers expect retailers to know everything about the items they purchase, whether this is information on availability, ingredients or the manufacturing process. They expect transparency and greater access to information, in real time. The biggest names in grocery rely on Oracle Retail Brand Compliance to meet these expectations while protecting their customers and brands.” 

Driving Compliance and Safety Across the Retail Supply Chain

Oracle Retail Brand Compliance is one of the world’s most widely adopted brand management solutions, representing two-thirds of all private label compliance and technical portals in use today. The offering provides a single point of authentication of all audits, accreditations and certificates. With it, common data sets are entered once and shared amongst the community to report on quality, environment, freshness and sustainability metrics and track the movement of products to drive risk assessment and rapid response to incidents. Today, the Oracle Retail Brand Compliance community represents more than 250,000 suppliers offering 750,000 consumer products.

With new enhancements to the offering, retail supply chain professionals will be able to more easily provide transparency of product information across partners and channels, with enhanced:

  • KPI dashboards that deliver key insights, business intelligence and operational reporting on supply chain and product analysis.
  • Consumer product compositions to enable in-store formulations, labeling and digital dietary advice.
  • API integrations with key Oracle applications.
 

“For retailers, having full visibility across their entire supply chains is a game-changer. It can mean the difference between minutes or weeks when responding to incidents, tracking and removing contaminated food from store shelves and notifying consumers. Brand damage aside, that can mean the difference between life and death,” noted Paul Woodward, senior director of Oracle Retail supply chain solutions.

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kristin Reeves

  • +1.925.787.6744

[Oracle Integration Cloud] ICS, PCS, VBCS Console Walkthrough

Online Apps DBA - Thu, 2019-06-06 23:58

When you work with Oracle Integration Cloud (OIC), you will come across multiple consoles/dashboards like My Services, the OCI Console, the OIC Console, the ICS Design Console, etc. ▪ What are these various consoles in Oracle Integration Cloud? ▪ How do you access these consoles? ▪ What common tasks do these consoles perform, such as managing Adapters, Integrations, and Connections? Check all this […]

The post [Oracle Integration Cloud] ICS, PCS, VBCS Console Walkthrough appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Scalar Subquery Costing

Jonathan Lewis - Thu, 2019-06-06 13:54

A question came up on the Oracle-l list-server a few days ago about how Oracle calculates costs for a scalar subquery in the select list. The question included an example to illustrate the point. I’ve reproduced the test below, with the output from an 18.3 test system. The numbers don’t match the numbers produced in the original posting but they are consistent with the general appearance.

rem
rem     Script:         ssq_costing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem

create table t_1k ( n1 integer ) ;
create table t_100k ( n1 integer ) ;

insert into t_1k
  select
         level
    from dual
    connect by level <= 1e3;

insert into t_100k
  select level
    from dual
    connect by level <= 1e5;

commit ;

begin
  dbms_stats.gather_table_stats ( null, 'T_1K') ;
  dbms_stats.gather_table_stats ( null, 'T_100K') ;
end ;
/

explain plan for
select 
        /*+ qb_name(QB_MAIN) */
        (
        select /*+ qb_name(QB_SUBQ) */ count(*)
        from t_1k
        where t_1k.n1 = t_100k.n1
        )
from t_100k
;

select * from table(dbms_xplan.display);

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |   100K|   488K|  1533K  (2)| 00:01:00 |
|   1 |  SORT AGGREGATE    |        |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T_1K   |     1 |     4 |    17   (0)| 00:00:01 |
|   3 |  TABLE ACCESS FULL | T_100K |   100K|   488K|    36   (9)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T_1K"."N1"=:B1)

The key point to note is this – the scalar subquery has to execute 100,000 times because that’s the number of rows in the driving table. The cost for executing the scalar subquery once is 17 – so the total cost of the query should be 1,700,036 – not 1,533K (and for execution plans the K means x1000, not x1024). There’s always room for rounding errors, of course, but a check of the 10053 (CBO trace) file shows the numbers to be 17.216612 for the t_1k tablescan, 36.356072 for the t_100K tablescan, and 1533646.216412 for the whole query. So how is Oracle managing to get a cost that looks lower than it ought to be?
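Just to spell out the naive arithmetic (nothing new here, only the numbers quoted above):

select 36 + (100000 * 17) as naive_cost from dual;
-- returns 1700036 -- a long way above the optimizer's 1533646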

There’s plenty of scope for experimenting to see how the numbers change – and my first thought was simply to see what happens as you change the number of distinct values in the t_100K.n1 column. It would be rather tedious to go through the process of modifying the data a few hundred times to see what happens, so I took advantage of the get_column_stats() and set_column_stats() procedures in the dbms_stats package to create a PL/SQL loop that faked a number of different scenarios that lied about the actual table data.


delete from plan_table;
commit;

declare

        srec                    dbms_stats.statrec;
        n_array                 dbms_stats.numarray;

        m_distcnt               number;
        m_density               number;
        m_nullcnt               number;
        m_avgclen               number;


begin

        dbms_stats.get_column_stats(
                ownname         => user,
                tabname         => 't_100k',
                colname         => 'n1', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

        for i in 1 .. 20 loop

                m_distcnt := 1000 * i;
                m_density := 1/m_distcnt;

                dbms_stats.set_column_stats(
                        ownname         => user,
                        tabname         => 't_100k',
                        colname         => 'n1', 
                        distcnt         => m_distcnt,
                        density         => m_density,
                        nullcnt         => m_nullcnt,
                        srec            => srec,
                        avgclen         => m_avgclen
                ); 


        execute immediate
        '
                explain plan set statement_id = ''' || m_distcnt || 
        '''
                for
                select
                        /*+ qb_name(QB_MAIN) */
                        (
                        select /*+ qb_name(QB_SUBQ) */ count(*)
                        from t_1k
                        where t_1k.n1 = t_100k.n1
                        )
                from t_100k
        ';
        
        end loop;       

end;
/

The code is straightforward. I’ve declared a few variables to hold the column stats from the t_100k.n1 column, called get_column_stats(), then looped 20 times through a process that changes the number of distinct values (and the corresponding density) recorded in the column stats, then used execute immediate to call “explain plan” for the original query.

You’ll notice I’ve given each plan a separate statement_id that corresponds to the num_distinct that generated the plan. In the code above I’ve changed the num_distinct from 1,000 to 20,000 in steps of 1,000.

Once the PL/SQL block ends I’ll have a plan table with 20 execution plans stored in it and, rather than reporting those plans with calls to dbms_xplan.display(), I’m going to be selective about which rows and columns I report.

select
        statement_id, 
        io_cost,
        io_cost - lag(io_cost,1) over (order by to_number(statement_id)) io_diff,
        cpu_cost,
        cpu_cost - lag(cpu_cost,1) over (order by to_number(statement_id)) cpu_diff,
        cost
from 
        plan_table
where 
        id = 0
order by 
        to_number(statement_id)
;

I’ve picked id = 0 (the top line of the plan) for each statement_id and I’ve reported the cost column, which is made up of the io_cost column plus a scaled down value of the cpu_cost column. I’ve also used the analytic lag() function to calculate how much the io_cost and cpu_cost changed from the previous statement_id. Here are my results from 18c:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
1000                                17033            1099838920                 17253
2000                                34033      17000 2182897480 1083058560      34470
3000                                51033      17000 3265956040 1083058560      51686
4000                                68033      17000 4349014600 1083058560      68903
5000                                85033      17000 5432073160 1083058560      86119
6000                               102033      17000 6515131720 1083058560     103336
7000                               119033      17000 7598190280 1083058560     120553
8000                               136033      17000 8681248840 1083058560     137769
9000                               153033      17000 9764307400 1083058560     154986
10000                              170033      17000 1.0847E+10 1083058560     172202
11000                              197670      27637 1.2608E+10 1760725019     200191
12000                              338341     140671 2.1570E+10 8962036084     342655
13000                              457370     119029 2.9153E+10 7583261303     463200
14000                              559395     102025 3.5653E+10 6499938259     566525
15000                              647816      88421 4.1287E+10 5633279824     656073
16000                              725185      77369 4.6216E+10 4929119846     734428
17000                              793452      68267 5.0565E+10 4349223394     803565
18000                              854133      60681 5.4431E+10 3865976350     865019
19000                              908427      54294 5.7890E+10 3459031472     920005
20000                              957292      48865 6.1003E+10 3113128324     969492
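
As an aside – and this is an empirical observation from the figures above, not a documented formula – the COST column is very close to IO_COST + CPU_COST/5,000,000, which gives an idea of how heavily the CPU cost is scaled down on this system:

select 17033 + round(1099838920/5000000) as approx_cost from dual;  -- 17253
select 34033 + round(2182897480/5000000) as approx_cost from dual;  -- 34470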

The first pattern that hits the eye is the constant change of 17,000 in the io_cost in the first few lines of the output. For “small” numbers of distinct values the (IO) cost of the query is (33 + 17 * num_distinct) – in other words, the arithmetic seems to assume that it will execute the subquery once for each distinct value and then cache the results so that repeated executions for any given value will not be needed. This looks as if the optimizer is trying to match its arithmetic to the “scalar subquery caching” mechanism.
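
A quick sanity check of that formula against the table above:

select 33 + (17 * 5000) as io_cost from dual;
-- 85033 -- exactly the IO_COST reported for statement_id 5000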

But things change somewhere between 10,000 and 11,000 distinct values. The point comes where adding one more distinct value causes a much bigger jump in cost than 17, and that’s because Oracle assumes it’s reached a point where there’s a value that it won’t have room for in the cache and will have to re-run the subquery multiple times for that value as it scans the rest of the table. Let’s find the exact break point where that happens.

Changing my PL/SQL loop so that we calculate m_distcnt as “10910 + i”, this is the output from the final query:


-- m_distcnt := 10910 + i;

STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
10911                              185520            1.1834E+10                187887
10912                              185537         17 1.1835E+10    1083059     187904
10913                              185554         17 1.1836E+10    1083058     187921
10914                              185571         17 1.1837E+10    1083059     187938
10915                              185588         17 1.1838E+10    1083058     187956
10916                              185605         17 1.1839E+10    1083059     187973
10917                              185622         17 1.1841E+10    1083059     187990
10918                              185639         17 1.1842E+10    1083058     188007
10919                              185656         17 1.1843E+10    1083059     188025
10920                              185673         17 1.1844E+10    1083058     188042
10921                              185690         17 1.1845E+10    1083059     188059
10922                              185707         17 1.1846E+10    1083058     188076
10923                              185770         63 1.1850E+10    4027171     188140
10924                              185926        156 1.1860E+10    9914184     188298
10925                              186081        155 1.1870E+10    9912370     188455
10926                              186237        156 1.1880E+10    9910555     188613
10927                              186393        156 1.1890E+10    9908741     188770
10928                              186548        155 1.1900E+10    9906928     188928
10929                              186703        155 1.1909E+10    9905114     189085
10930                              186859        156 1.1919E+10    9903302     189243

If we have 10,922 distinct values in the column the optimizer calculates as if it will be able to cache them all; but if we have 10,923 distinct values the optimizer thinks that there’s going to be one value where it can’t cache the result and will have to run the subquery more than once.

Before looking at this in more detail let’s go to the other interesting point – when does the cost stop changing? We can see the cost increasing as the number of distinct values grows, but we saw at the start that the cost didn’t get as large as we expected, so there must be a point where it stops increasing before it “ought” to.

I’ll jump straight to the answer: here’s the output from the test when I start num_distinct off at slightly less than half the number of rows in the table:


 -- m_distcnt := (50000 - 10) + i;

STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
49991                             1514281            9.6488E+10               1533579
49992                             1514288          7 9.6489E+10     473357    1533586
49993                             1514296          8 9.6489E+10     473337    1533594
49994                             1514303          7 9.6490E+10     473319    1533601
49995                             1514311          8 9.6490E+10     473299    1533609
49996                             1514318          7 9.6491E+10     473281    1533616
49997                             1514325          7 9.6491E+10     473262    1533624
49998                             1514333          8 9.6492E+10     473243    1533631
49999                             1514340          7 9.6492E+10     473224    1533639
50000                             1514348          8 9.6493E+10     473205    1533646
50001                             1514348          0 9.6493E+10          0    1533646
50002                             1514348          0 9.6493E+10          0    1533646
50003                             1514348          0 9.6493E+10          0    1533646
50004                             1514348          0 9.6493E+10          0    1533646
50005                             1514348          0 9.6493E+10          0    1533646
50006                             1514348          0 9.6493E+10          0    1533646
50007                             1514348          0 9.6493E+10          0    1533646
50008                             1514348          0 9.6493E+10          0    1533646
50009                             1514348          0 9.6493E+10          0    1533646
50010                             1514348          0 9.6493E+10          0    1533646

The cost just stops changing when num_distinct = half the rows in the table.

Formulae

During the course of these experiments I had been exchanging email messages with Nenad Noveljic via the Oracle-L list-server (full monthly archive here) and he came up with the suggestion of a three-part formula that assumed a cache size and gave a cost of:

  • “tablescan cost + num_distinct * subquery unit cost” for values of num_distinct up to the cache size;
  • then, for values of num_distinct greater than the cache_size and up to half the size of the table added a marginal cost representing the probability that some values would not be cached;
  • then for values of num_distinct greater than half the number of rows in the table reported the cost associated with num_distinct = half the number of rows in the table.

Hence:

  • for 1 <= num_distinct <= 10,922, cost = (33 + num_distinct * 17)
  • for 10,923 <= num_distinct <= 50,000, cost = (33 + 10,922 * 17) + (1 – 10,922/num_distinct) * 100,000 * 17
  • for 50,000 <= num_distinct <= 100,000, cost = cost(50,000).

The middle line needs a little explanation: (1 – 10,922/num_distinct) is the probability that a value will not be in the cache; this has to be multiplied by 100,000 to give the expected number of rows that will not be cached, and then multiplied by 17 as the cost of running the subquery for those rows.

The middle line can be re-arranged as 33 + 17 * (10,922 + (1 – 10,922/num_distinct) * 100,000)
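
Plugging num_distinct = 20,000 (from the first set of results) into this version of the formula gives a prediction that is close, but not quite right – which is what prompts the tweaking below:

select round(33 + 17 * (10922 + (1 - 10922/20000) * 100000)) as io_cost from dual;
-- 957337 -- but the reported IO_COST for statement_id 20000 was 957292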

Tweaking

At this point I could modify my code loop to report the calculated value for the cost and compare it with the actual cost to show you that the two values didn’t quite match. Instead I’ll jump forward a little bit to a correction that needs to be made to the formula above. It revolves around how Oracle determines the cache size. There’s a hidden parameter (which I mentioned in CBO Fundamentals) that controls scalar subquery caching. In the book I think I only referenced it in the context of subqueries in the “where” clause. The parameter is “_query_execution_cache_max_size” and has a default value of 131072 (power(2,17)) – so when I found that the initial formula didn’t quite work I made the following observation:

  • 131072 / 10922 = 12.00073
  • 131072 / 12 = 10922.666…

So I put 10,922.66667 into the formula to see if that would improve things.
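
A spot check with the adjusted constant is encouraging – at num_distinct = 50,000 the re-arranged formula now reproduces the reported IO_COST exactly:

select round(33 + 17 * (10922.66667 + (1 - 10922.66667/50000) * 100000)) as io_cost from dual;
-- 1514348 -- matching the IO_COST reported for statement_id 50000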

For the code change I added a variable m_cost to the PL/SQL block, and set it inside the loop as follows:

m_cost := round(33 + 17 * (10922.66667 + 100000 * (1 - (10922.66667 / m_distcnt))));

Then in the “execute immediate” I changed the “explain plan” line to read:

explain plan set statement_id = ''' || lpad(m_distcnt,7) || ' - ' || lpad(m_cost,8) ||

This allowed me to show the formula’s prediction of (IO) cost in the final output, and here’s what I got for values of num_distinct in the region of 10,922:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
  10911 -   183901                 185520            1.1834E+10                187887
  10912 -   184057                 185537         17 1.1835E+10    1083059     187904
  10913 -   184212                 185554         17 1.1836E+10    1083058     187921
  10914 -   184368                 185571         17 1.1837E+10    1083059     187938
  10915 -   184524                 185588         17 1.1838E+10    1083058     187956
  10916 -   184680                 185605         17 1.1839E+10    1083059     187973
  10917 -   184836                 185622         17 1.1841E+10    1083059     187990
  10918 -   184992                 185639         17 1.1842E+10    1083058     188007
  10919 -   185147                 185656         17 1.1843E+10    1083059     188025
  10920 -   185303                 185673         17 1.1844E+10    1083058     188042
  10921 -   185459                 185690         17 1.1845E+10    1083059     188059
  10922 -   185615                 185707         17 1.1846E+10    1083058     188076
  10923 -   185770                 185770         63 1.1850E+10    4027171     188140
  10924 -   185926                 185926        156 1.1860E+10    9914184     188298
  10925 -   186081                 186081        155 1.1870E+10    9912370     188455
  10926 -   186237                 186237        156 1.1880E+10    9910555     188613
  10927 -   186393                 186393        156 1.1890E+10    9908741     188770
  10928 -   186548                 186548        155 1.1900E+10    9906928     188928
  10929 -   186703                 186703        155 1.1909E+10    9905114     189085
  10930 -   186859                 186859        156 1.1919E+10    9903302     189243

The formula is only supposed to work in the range 10923 – 50,000, so the first few results don’t match; but in the range 10,923 to 10,930 the match is exact. Then, in the region of 50,000 we get:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
  49991 -  1514281                1514281            9.6488E+10               1533579
  49992 -  1514288                1514288          7 9.6489E+10     473357    1533586
  49993 -  1514296                1514296          8 9.6489E+10     473337    1533594
  49994 -  1514303                1514303          7 9.6490E+10     473319    1533601
  49995 -  1514311                1514311          8 9.6490E+10     473299    1533609
  49996 -  1514318                1514318          7 9.6491E+10     473281    1533616
  49997 -  1514325                1514325          7 9.6491E+10     473262    1533624
  49998 -  1514333                1514333          8 9.6492E+10     473243    1533631
  49999 -  1514340                1514340          7 9.6492E+10     473224    1533639
  50000 -  1514348                1514348          8 9.6493E+10     473205    1533646
  50001 -  1514355                1514348          0 9.6493E+10          0    1533646
  50002 -  1514363                1514348          0 9.6493E+10          0    1533646
  50003 -  1514370                1514348          0 9.6493E+10          0    1533646
  50004 -  1514377                1514348          0 9.6493E+10          0    1533646
  50005 -  1514385                1514348          0 9.6493E+10          0    1533646
  50006 -  1514392                1514348          0 9.6493E+10          0    1533646
  50007 -  1514400                1514348          0 9.6493E+10          0    1533646
  50008 -  1514407                1514348          0 9.6493E+10          0    1533646
  50009 -  1514415                1514348          0 9.6493E+10          0    1533646
  50010 -  1514422                1514348          0 9.6493E+10          0    1533646

Again, the formula applies only in the range up to 50,000 (half the rows in the table) – and the match is perfect in that range.

Next steps

The work so far gives us some idea of the algorithm that the optimizer is using to derive a cost, but this is just one scenario and there are plenty of extra questions we might ask. The most pressing one: what is the significance of the number 12 in the calculation 131,072/12? From previous experience I guessed it was related to the length of the input and output values of the scalar subquery – as in “value X for n1 returns value Y for count(*)”.

To pursue this idea I recreated the data sets using varchar2(10) as the definition of n1 and lpad(rownum,10) as the value – the “breakpoint” dropped from 10,922 down to 5,461. Checking the arithmetic: 131,072 / 5,461 = 24.001456, then 131,072 / 24 = 5,461.333… And that’s the number that made the formula work perfectly for the modified data set.

Then I used set_column_stats() to hack the avg_col_len of t_100K.n1 to 15 and the break point dropped to 4,096. Again we do the two arithmetic steps: 131,072/4,096 = 32 (but then we don’t need to do the reverse step since the first result is integral).

Checking the original data set when n1 was a numeric the avg_col_len was 5, so we have three reference points:

  • avg_col_len = 5: “cache unit size” = 12
  • avg_col_len = 11: cache unit size = 24 (don’t forget the avg_col_len includes the length byte, so our padded varchar2(10) has a length of 11)
  • avg_col_len = 15: cache unit size = 32

There’s an obvious pattern here: “cache unit size” = (2 x avg_col_len + 2). Since I hadn’t been changing the t_1k.n1 column at the same time, that really does look like a deliberate factor of 2 (I’d thought initially that maybe the 12 was affected by the lengths of both columns in the predicate – but that doesn’t seem to be the case).

The scientific method says I should now make a prediction based on my hypothesis – so I set the avg_col_len for t_100K.n1 to 23 and guessed that the break point would be at 2,730 – and it was (131,072 / (2 * 23 + 2) = 2,730.6666…).

The next question, of course, is: where does the “spare 2” come from? Trying to minimize the change in the code I modified my subquery to select sum(to_number(n1)) rather than count(*), then avg(to_number(n1)) – remember I had changed n1 to a varchar2(10) that looked like a number left-padded with spaces. In each of these variants all I had to do to get an exact match between the basic formula and the optimizer’s cost calculation was to use “2 * avg_col_len + 22” as the cache unit size – and 22 is the nominal maximum length of an internally stored numeric column.

Bottom line: the cache unit size seems to be related to the input and output values, but I don’t know why there’s a factor of 2 applied to the input column length, and I don’t know why the length of count(*) is deemed to be 2 when other derived numeric outputs use the more intuitive 22 for their length.

tl;dr

The total cost calculation for a scalar subquery in the select list is largely affected by:

  • a fixed cache size (131,072 bytes), possibly set by the hidden parameter _query_execution_cache_max_size
  • the avg_col_len of the input (correlating) column(s) from the driving table
  • the nominal length of the output (select list) of the subquery

There is an unexplained factor of 2 used with the avg_col_len of the input, and a slightly surprising value of 2 if the output is simply count(*).

If the number N of distinct values for the driving column(s) is less than the number of possible cache entries, the effect of the scalar subquery is to add N * (estimated cost of executing the subquery once). As the number of distinct values for the driving column(s) goes above the limit, the incremental effect of the subquery is based on the expected number of times an input value will not be cached. When the number of distinct values in the driving column(s) exceeds half the number of rows in the driving table the cost stops increasing – there is no obvious reason why the algorithm does this.

There are many more cases that I could investigate at this point – but I think this model is enough as an indication of general method. If you come across a variation where you actually need to work out how the optimizer derived a cost then this framework will probably be enough to get you started in the right direction.

 

anytype from java for anydataset

Tom Kyte - Thu, 2019-06-06 10:06
Hello TOM :) I try to make an anytype in a Java stored procedure and use it to create an anydataset in PL/SQL. But I get the error ORA-22625. Why does this error occur and how can I fix it? I do it according to guides, for example https://docs.oracle.com/databas...
Categories: DBA Blogs

Best performance of Top N by X

Tom Kyte - Thu, 2019-06-06 10:06
I have the following 2 tables: <code>CREATE TABLE accounts( id NUMBER unique not null, account_name VARCHAR2(30) ); CREATE TABLE log_data( account_id NUMBER not null, log_type NUMBER, log_time TIMESTAMP, msg CLOB );...
Categories: DBA Blogs

PLS_INTEGER versus NUMBER versus "dynamic types"

Tom Kyte - Thu, 2019-06-06 10:06
It has been suggested to me that I use PL/SQL declarations like PROCEDURE foo ( p_id IN PLS_INTEGER )... instead of PROCEDURE foo (p_id IN NUMBER ) ... or PROCEDURE foo (p_id IN mytable.my_id%TYPE ) I've always preferred the last option, si...
Categories: DBA Blogs

Using MERGE to update data 2 times

Tom Kyte - Thu, 2019-06-06 10:06
I have a full dump of 1m records arriving every day. I need to load this set into STG. There are soft-deleted records, thus when I compare between the full dump (source) and STG (target), the record count in STG is always larger than the source. I have researc...
Categories: DBA Blogs

Function comparing dates in a range

Tom Kyte - Thu, 2019-06-06 10:06
I have a function that will extract records from a database with specific dates and times. One of the parameters passed is a date. This function runs twice a day (one of the parameters passed is AM or PM). My function works for morning (times are ...
Categories: DBA Blogs

Pivot with dynamic dates column

Tom Kyte - Thu, 2019-06-06 10:06
Hi Tom, I want to show the absence of my people in an APEX form / editable grid. The absence is shown for every day and the day column should be generated automatically. Current SQL code: <code>with rws as ( select person.name, absence....
Categories: DBA Blogs

Synchronizing database sequences during manual data replication

Tom Kyte - Thu, 2019-06-06 10:06
Hi Tom, Experts, I need to manually replicate all objects from one schema to another (manually, because my schemas, both source and target, reside in the cloud and I have no access to any OS level utility, nor sql*net access to the databas...
Categories: DBA Blogs

Filter Incremental Data in Oracle DB

Tom Kyte - Thu, 2019-06-06 10:06
Hi TOM, I have a requirement to filter differential/incremental data in an Oracle Database. The primary key that I use is a composite key (a combination of 2 columns). They are in such a shape that I am unable to prepare a 'where' condition that can help me to id...
Categories: DBA Blogs

Connection Storm with Inactive sessions on ORACLE RAC

Tom Kyte - Thu, 2019-06-06 10:06
Hi All, We recently migrated from Solaris to AIX and moved to Oracle RAC : Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production PL/SQL Release 11.2.0.4.0 - Production CORE 11.2.0.4.0 Production TNS for...
Categories: DBA Blogs

Using Oracle best way

Tom Kyte - Thu, 2019-06-06 10:06
Hi, We have a set of mainframe programs which connect to an Oracle database and query a set of tables. Our application and DB servers are in 2 different locations (which we cannot change at the moment), which I believe is causing considerable delays w...
Categories: DBA Blogs

PFCLATK - Audit Trail Toolkit - Checksums

Pete Finnigan - Thu, 2019-06-06 09:46
We have a toolkit called PFCLATK that is used in customer engagements to assist our customers in creating comprehensive and useful audit trails for their databases. The toolkit is used in consulting engagements at the moment but we will be adding....[Read More]

Posted by Pete On 06/06/19 At 03:08 PM

Categories: Security Blogs

Discover target database 18c with EM12c

Yann Neuhaus - Thu, 2019-06-06 08:34

Working with Enterprise Manager 12.1.0.4 at a client site, we wanted to know whether an Oracle Database 18c target could be discovered and monitored, even though Enterprise Manager 12.1.0.4 does not support Oracle 18c database targets.

Installing the 12c agent on the target host did not cause any problems, and the Oracle Database 18c discovery ran successfully, but the database was shown as down in the Enterprise Manager 12.1.0.4 console.

We tried several tricks without any positive results, but running the following command showed us that this was a connection problem:


oracle@em12c:/home/oracle/:> emctl getmetric agent DB18,oracle_database,Response
Oracle Enterprise Manager Cloud Control 12c Release 4
Copyright (c) 1996, 2014 Oracle Corporation.
All rights reserved.
Status,State,oraerr,Archiver,DatabaseStatus,ActiveState0,UNKNOWN,
Failed to connect: java.sql.SQLException: 
ORA-28040: No matching authentication protocol,UNKNOWN,UNKNOWN,UNKNOWN

With Oracle 18c, the default value of the SQLNET.ALLOWED_LOGON_VERSION parameter is 12, which means that database clients using pre-12c JDBC thin drivers cannot authenticate to 18c database servers.

The workaround is to add in the database server sqlnet.ora the following lines:

SQLNET.ALLOWED_LOGON_VERSION_SERVER=11
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=11

We restarted the database and the agent, and the Oracle Database 18c target is now displayed as up and running in Enterprise Manager 12.1.0.4:

Some more tests showed us that everything is working fine: incident management, performance pages, top activity, and so on.

Nevertheless, do not forget that Oracle Database 18c targets are not supported with Enterprise Manager 12.1.0.4. I would recommend using the latest Enterprise Manager 13.3 release to administer and monitor Oracle Database 18c.

The post Discover target database 18c with EM12c appeared first on Blog dbi services.

Get Up to Speed with Oracle ACEs on the Kscope Database Track

OTN TechBlog - Thu, 2019-06-06 08:03
All Aboard for Database Expertise...

This second post in a series on Kscope 2019 sessions presented by members of the Oracle ACE program focuses on the database track. Kscope 2019 arrives on time, June 23-27 in Seattle. Click here for information and registration.

Click the session titles below for time, date, and location information for each session.

We'll cover sessions in the other tracks in upcoming posts. Stay tuned!

 

Oracle ACE Directors

Alex Nuijten
Director, Senior Oracle Developer, allAPEX
Oosterhout, Netherlands

 

Debra Lilley
Associate Director, Accenture
Belfast, United Kingdom

 

Dimitri Gielis
Director, APEX R&D
Leuven, Belgium

 

Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia

 

Heli Helskyaho
CEO, Miracle Finland Oy
Finland

 

Jim Czuprynski
Senior Enterprise Data Architect, Viscosity North America
Bartlett, Illinois

 

Kim Berg Hansen
Senior Consultant, Trivadis
Funen, Denmark

 

Martin Giffy D’Souza
Director of Innovation, Insum Solutions
Calgary, Alberta, Canada

 

Mia Urman
CEO, AuraPlayer Ltd
Brookline, Massachusetts

 

Patrick Barel
Sr. Oracle Developer, Alliander via Qualogy
Haarlem, Netherlands

 

Peter Koletzke
Technical Director, Principal Instructor
Independent Consultant

 

Richard Niemiec
Chief Innovation Officer, Viscosity North America
Chicago, Illinois

 
Oracle ACEs

Dani Schnider
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Holger Friedrich
CTO, sumIT AG
Zurich, Switzerland

 

Liron Amitzi
Senior Database Consultant, Self Employed
Vancouver, Canada

 

Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

 
Oracle ACE Associates

Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

 

PostgreSQL partitioning (6): Attaching and detaching partitions

Yann Neuhaus - Thu, 2019-06-06 01:00

Having talked about partitioning strategies and partition pruning, this time we will have a look at how you can attach and detach partitions to and from an existing partitioned table. If you missed the last posts about partitioning in PostgreSQL, here they are:

  1. PostgreSQL partitioning (1): Preparing the data set
  2. PostgreSQL partitioning (2): Range partitioning
  3. PostgreSQL partitioning (3): List partitioning
  4. PostgreSQL partitioning (4) : Hash partitioning
  5. PostgreSQL partitioning (5): Partition pruning

From time to time you may need to attach a partition to an existing partitioned table. One reason might be that you partitioned by year and suddenly receive data for the next year, because your data is imported from an external source. Of course you could just create a new partition and then load the data, but there is a more elegant way of doing it.

Our range partitioned table currently has partitions for the years 2012 to 2020:

postgres=# \d+ traffic_violations_p
                                      Partitioned table "public.traffic_violations_p"
         Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 seqid                   | text                   |           |          |         | extended |              | 
 date_of_stop            | date                   |           |          |         | plain    |              | 
 time_of_stop            | time without time zone |           |          |         | plain    |              | 
 agency                  | text                   |           |          |         | extended |              | 
 subagency               | text                   |           |          |         | extended |              | 
 description             | text                   |           |          |         | extended |              | 
 location                | text                   |           |          |         | extended |              | 
 latitude                | numeric                |           |          |         | main     |              | 
 longitude               | numeric                |           |          |         | main     |              | 
 accident                | text                   |           |          |         | extended |              | 
 belts                   | boolean                |           |          |         | plain    |              | 
 personal_injury         | boolean                |           |          |         | plain    |              | 
 property_damage         | boolean                |           |          |         | plain    |              | 
 fatal                   | boolean                |           |          |         | plain    |              | 
 commercial_license      | boolean                |           |          |         | plain    |              | 
 hazmat                  | boolean                |           |          |         | plain    |              | 
 commercial_vehicle      | boolean                |           |          |         | plain    |              | 
 alcohol                 | boolean                |           |          |         | plain    |              | 
 workzone                | boolean                |           |          |         | plain    |              | 
 state                   | text                   |           |          |         | extended |              | 
 vehicletype             | text                   |           |          |         | extended |              | 
 year                    | smallint               |           |          |         | plain    |              | 
 make                    | text                   |           |          |         | extended |              | 
 model                   | text                   |           |          |         | extended |              | 
 color                   | text                   |           |          |         | extended |              | 
 violation_type          | text                   |           |          |         | extended |              | 
 charge                  | text                   |           |          |         | extended |              | 
 article                 | text                   |           |          |         | extended |              | 
 contributed_to_accident | boolean                |           |          |         | plain    |              | 
 race                    | text                   |           |          |         | extended |              | 
 gender                  | text                   |           |          |         | extended |              | 
 driver_city             | text                   |           |          |         | extended |              | 
 driver_state            | text                   |           |          |         | extended |              | 
 dl_state                | text                   |           |          |         | extended |              | 
 arrest_type             | text                   |           |          |         | extended |              | 
 geolocation             | point                  |           |          |         | plain    |              | 
 council_districts       | smallint               |           |          |         | plain    |              | 
 councils                | smallint               |           |          |         | plain    |              | 
 communities             | smallint               |           |          |         | plain    |              | 
 zip_codes               | smallint               |           |          |         | plain    |              | 
 municipalities          | smallint               |           |          |         | plain    |              | 
Partition key: RANGE (date_of_stop)
Partitions: traffic_violations_p_2012 FOR VALUES FROM ('2012-01-01') TO ('2013-01-01'),
            traffic_violations_p_2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
            traffic_violations_p_2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
            traffic_violations_p_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            traffic_violations_p_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            traffic_violations_p_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            traffic_violations_p_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            traffic_violations_p_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            traffic_violations_p_2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
            traffic_violations_p_default DEFAULT

Let’s assume we suddenly get data for the year 2021 and we want that data to be part of the range partitioned table. If we keep the partitioned table as it is, new data will go to the default partition, and that is not what we want. What we can do is create a regular table with exactly the same structure as the partitioned table. PostgreSQL’s CREATE TABLE command has the LIKE clause, which is very handy for this use case:

postgres=# create table traffic_violations_p_2021 ( like traffic_violations_p including all );
CREATE TABLE
postgres=# \d traffic_violations_p_2021
                     Table "public.traffic_violations_p_2021"
         Column          |          Type          | Collation | Nullable | Default 
-------------------------+------------------------+-----------+----------+---------
 seqid                   | text                   |           |          | 
 date_of_stop            | date                   |           |          | 
 time_of_stop            | time without time zone |           |          | 
 agency                  | text                   |           |          | 
 subagency               | text                   |           |          | 
 description             | text                   |           |          | 
 location                | text                   |           |          | 
 latitude                | numeric                |           |          | 
 longitude               | numeric                |           |          | 
 accident                | text                   |           |          | 
 belts                   | boolean                |           |          | 
 personal_injury         | boolean                |           |          | 
 property_damage         | boolean                |           |          | 
 fatal                   | boolean                |           |          | 
 commercial_license      | boolean                |           |          | 
 hazmat                  | boolean                |           |          | 
 commercial_vehicle      | boolean                |           |          | 
 alcohol                 | boolean                |           |          | 
 workzone                | boolean                |           |          | 
 state                   | text                   |           |          | 
 vehicletype             | text                   |           |          | 
 year                    | smallint               |           |          | 
 make                    | text                   |           |          | 
 model                   | text                   |           |          | 
 color                   | text                   |           |          | 
 violation_type          | text                   |           |          | 
 charge                  | text                   |           |          | 
 article                 | text                   |           |          | 
 contributed_to_accident | boolean                |           |          | 
 race                    | text                   |           |          | 
 gender                  | text                   |           |          | 
 driver_city             | text                   |           |          | 
 driver_state            | text                   |           |          | 
 dl_state                | text                   |           |          | 
 arrest_type             | text                   |           |          | 
 geolocation             | point                  |           |          | 
 council_districts       | smallint               |           |          | 
 councils                | smallint               |           |          | 
 communities             | smallint               |           |          | 
 zip_codes               | smallint               |           |          | 
 municipalities          | smallint               |           |          | 

For now this table has no relation at all to the partitioned table; it just has the very same structure. Let’s populate the new table with some data for 2021:

insert into traffic_violations_p_2021 (date_of_stop)
       select * from generate_series ( date('01.01.2021')
                                     , date('12.31.2021')
                                     , interval '1 day');
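
A quick sanity check on the row count (2021 is not a leap year, so we expect 365 rows):

postgres=# select count(*) from traffic_violations_p_2021;
 count 
-------
   365
(1 row)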

Now that we have 365 rows in the new table we can attach it to the partitioned table (of course you could also attach it before loading the data):

alter table traffic_violations_p
      attach partition traffic_violations_p_2021
      for values from ('2021-01-01') to ('2022-01-01');
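
A side note, not part of the workflow above: according to the PostgreSQL documentation, ATTACH PARTITION scans the table to validate that all rows satisfy the partition constraint unless a matching CHECK constraint already exists. For a large table it can therefore pay off to add such a constraint before attaching, and drop it again afterwards (the constraint name here is arbitrary):

-- optional: a CHECK constraint matching the partition bounds lets
-- PostgreSQL skip the validation scan during ATTACH PARTITION
alter table traffic_violations_p_2021
      add constraint traffic_violations_p_2021_bounds
      check ( date_of_stop is not null
          and date_of_stop >= date '2021-01-01'
          and date_of_stop <  date '2022-01-01' );

-- ... attach as shown above, then the constraint is no longer needed:
alter table traffic_violations_p_2021
      drop constraint traffic_violations_p_2021_bounds;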

So you can prepare the whole new data without touching the partitioned table. Once you are happy with it a simple attach command is enough and the partitioned table contains the new partition holding the data:

postgres=# \d+ traffic_violations_p
                                      Partitioned table "public.traffic_violations_p"
         Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 seqid                   | text                   |           |          |         | extended |              | 
 date_of_stop            | date                   |           |          |         | plain    |              | 
 time_of_stop            | time without time zone |           |          |         | plain    |              | 
 agency                  | text                   |           |          |         | extended |              | 
 subagency               | text                   |           |          |         | extended |              | 
 description             | text                   |           |          |         | extended |              | 
 location                | text                   |           |          |         | extended |              | 
 latitude                | numeric                |           |          |         | main     |              | 
 longitude               | numeric                |           |          |         | main     |              | 
 accident                | text                   |           |          |         | extended |              | 
 belts                   | boolean                |           |          |         | plain    |              | 
 personal_injury         | boolean                |           |          |         | plain    |              | 
 property_damage         | boolean                |           |          |         | plain    |              | 
 fatal                   | boolean                |           |          |         | plain    |              | 
 commercial_license      | boolean                |           |          |         | plain    |              | 
 hazmat                  | boolean                |           |          |         | plain    |              | 
 commercial_vehicle      | boolean                |           |          |         | plain    |              | 
 alcohol                 | boolean                |           |          |         | plain    |              | 
 workzone                | boolean                |           |          |         | plain    |              | 
 state                   | text                   |           |          |         | extended |              | 
 vehicletype             | text                   |           |          |         | extended |              | 
 year                    | smallint               |           |          |         | plain    |              | 
 make                    | text                   |           |          |         | extended |              | 
 model                   | text                   |           |          |         | extended |              | 
 color                   | text                   |           |          |         | extended |              | 
 violation_type          | text                   |           |          |         | extended |              | 
 charge                  | text                   |           |          |         | extended |              | 
 article                 | text                   |           |          |         | extended |              | 
 contributed_to_accident | boolean                |           |          |         | plain    |              | 
 race                    | text                   |           |          |         | extended |              | 
 gender                  | text                   |           |          |         | extended |              | 
 driver_city             | text                   |           |          |         | extended |              | 
 driver_state            | text                   |           |          |         | extended |              | 
 dl_state                | text                   |           |          |         | extended |              | 
 arrest_type             | text                   |           |          |         | extended |              | 
 geolocation             | point                  |           |          |         | plain    |              | 
 council_districts       | smallint               |           |          |         | plain    |              | 
 councils                | smallint               |           |          |         | plain    |              | 
 communities             | smallint               |           |          |         | plain    |              | 
 zip_codes               | smallint               |           |          |         | plain    |              | 
 municipalities          | smallint               |           |          |         | plain    |              | 
Partition key: RANGE (date_of_stop)
Partitions: traffic_violations_p_2012 FOR VALUES FROM ('2012-01-01') TO ('2013-01-01'),
            traffic_violations_p_2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
            traffic_violations_p_2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
            traffic_violations_p_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            traffic_violations_p_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            traffic_violations_p_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            traffic_violations_p_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            traffic_violations_p_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            traffic_violations_p_2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
            traffic_violations_p_2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'),
            traffic_violations_p_default DEFAULT

Having the data for 2021, it might no longer be necessary to keep the data for 2012. Without partitioning you would have to delete those rows from the table to get rid of them. With partitioning you can simply detach the partition from the partitioned table:

postgres=# alter table traffic_violations_p detach partition traffic_violations_p_2012;
ALTER TABLE
postgres=# \d+ traffic_violations_p
                                      Partitioned table "public.traffic_violations_p"
         Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 [... column definitions unchanged, identical to the \d+ output above ...]
Partition key: RANGE (date_of_stop)
Partitions: traffic_violations_p_2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
            traffic_violations_p_2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
            traffic_violations_p_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            traffic_violations_p_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            traffic_violations_p_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            traffic_violations_p_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            traffic_violations_p_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            traffic_violations_p_2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
            traffic_violations_p_2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'),
            traffic_violations_p_default DEFAULT

Whatever you want to do with the data from now on is up to you: the detached partition has become a regular table, so you can drop it, truncate it, or move it to low-cost storage for archiving. You can even attach it back later if the data is needed again, as sketched below.
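A minimal sketch of the two usual follow-ups, assuming the same table and partition names as above (you would of course do one or the other, not both):

postgres=# drop table traffic_violations_p_2012;
DROP TABLE
postgres=# alter table traffic_violations_p attach partition traffic_violations_p_2012 for values from ('2012-01-01') to ('2013-01-01');
ALTER TABLE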
That was attaching and detaching partitions. In the next post we'll look at indexing and constraints for partitioned tables.

The post PostgreSQL partitioning (6): Attaching and detaching partitions appeared first on Blog dbi services.

Local Install rlwrap for OEL 7.6

Michael Dinh - Wed, 2019-06-05 19:13

Installing rlwrap on OEL 7.6 requires a local install of python34.

yum install rlwrap

[root@SLC02PNY ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64

****************************************************************************************************
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
****************************************************************************************************

--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Processing Dependency: /usr/bin/python3.4 for package: rlwrap-0.43-2.el7.x86_64
Package python34 is obsoleted by python36, but obsoleting package does not provide for requirements
--> Finished Dependency Resolution

yum install python34

root@SLC02PNY ~]# yum install python34
Loaded plugins: ulninfo

****************************************************************************************************
Package python34 is obsoleted by python36, trying to install python36-3.6.8-1.el7.x86_64 instead
****************************************************************************************************

Resolving Dependencies
--> Running transaction check
---> Package python36.x86_64 0:3.6.8-1.el7 will be installed
--> Processing Dependency: python36-libs(x86-64) = 3.6.8-1.el7 for package: python36-3.6.8-1.el7.x86_64
--> Processing Dependency: libpython3.6m.so.1.0()(64bit) for package: python36-3.6.8-1.el7.x86_64
--> Running transaction check
---> Package python36-libs.x86_64 0:3.6.8-1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                         Arch                     Version                        Repository                            Size
====================================================================================================================================
Installing:
 python36                        x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                    66 k
Installing for dependencies:
 python36-libs                   x86_64                   3.6.8-1.el7                    ol7_developer_EPEL                   8.6 M

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 8.6 M
Installed size: 36 M
Is this ok [y/d/N]: n

cat /etc/system-release

[root@ADC6160274 ~]# cat /etc/system-release
Oracle Linux Server release 7.6
[root@ADC6160274 ~]#

Since yum insists on substituting python36 (declined above), download the python34 RPMs directly and install them locally:

yumdownloader python34-3.4.5-4.el7.x86_64

[root@ADC6160274 ~]# yumdownloader python34-3.4.5-4.el7.x86_64
python34-3.4.5-4.el7.x86_64.rpm                                                                              |  50 kB  00:00:00

yumdownloader python34-libs-3.4.5-4.el7.x86_64
[root@ADC6160274 ~]# yumdownloader python34-libs-3.4.5-4.el7.x86_64
python34-libs-3.4.5-4.el7.x86_64.rpm                                                                         | 8.2 MB  00:00:01

yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm

[root@ADC6160274 ~]# yum localinstall python34-libs-3.4.5-4.el7.x86_64.rpm python34-3.4.5-4.el7.x86_64.rpm
Loaded plugins: ulninfo
Examining python34-libs-3.4.5-4.el7.x86_64.rpm: python34-libs-3.4.5-4.el7.x86_64
Marking python34-libs-3.4.5-4.el7.x86_64.rpm to be installed
Examining python34-3.4.5-4.el7.x86_64.rpm: python34-3.4.5-4.el7.x86_64
Marking python34-3.4.5-4.el7.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package python34.x86_64 0:3.4.5-4.el7 will be installed
---> Package python34-libs.x86_64 0:3.4.5-4.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                     Arch                 Version                     Repository                                       Size
====================================================================================================================================
Installing:
 python34                    x86_64               3.4.5-4.el7                 /python34-3.4.5-4.el7.x86_64                     36 k
 python34-libs               x86_64               3.4.5-4.el7                 /python34-libs-3.4.5-4.el7.x86_64                29 M

Transaction Summary
====================================================================================================================================
Install  2 Packages

Total size: 29 M
Installed size: 29 M
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : python34-libs-3.4.5-4.el7.x86_64                                                                                 1/2
  Installing : python34-3.4.5-4.el7.x86_64                                                                                      2/2
  Verifying  : python34-3.4.5-4.el7.x86_64                                                                                      1/2
  Verifying  : python34-libs-3.4.5-4.el7.x86_64                                                                                 2/2

Installed:
  python34.x86_64 0:3.4.5-4.el7                                  python34-libs.x86_64 0:3.4.5-4.el7

Complete!

yum install rlwrap

[root@ADC6160274 ~]# yum install rlwrap
Loaded plugins: ulninfo
Resolving Dependencies
--> Running transaction check
---> Package rlwrap.x86_64 0:0.43-2.el7 will be installed
--> Processing Dependency: perl(Data::Dumper) for package: rlwrap-0.43-2.el7.x86_64
--> Running transaction check
---> Package perl-Data-Dumper.x86_64 0:2.145-3.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

====================================================================================================================================
 Package                           Arch                    Version                        Repository                           Size
====================================================================================================================================
Installing:
 rlwrap                            x86_64                  0.43-2.el7                     ol7_developer_EPEL                  118 k
Installing for dependencies:
 perl-Data-Dumper                  x86_64                  2.145-3.el7                    ol7_latest                           47 k

Transaction Summary
====================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 165 k
Installed size: 378 k
Is this ok [y/d/N]: y
Downloading packages:
(1/2): perl-Data-Dumper-2.145-3.el7.x86_64.rpm                                                               |  47 kB  00:00:00
(2/2): rlwrap-0.43-2.el7.x86_64.rpm                                                                          | 118 kB  00:00:00
------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                               311 kB/s | 165 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Installing : rlwrap-0.43-2.el7.x86_64                                                                                         2/2
  Verifying  : perl-Data-Dumper-2.145-3.el7.x86_64                                                                              1/2
  Verifying  : rlwrap-0.43-2.el7.x86_64                                                                                         2/2

Installed:
  rlwrap.x86_64 0:0.43-2.el7

Dependency Installed:
  perl-Data-Dumper.x86_64 0:2.145-3.el7

Complete!
[root@ADC6160274 ~]#
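With rlwrap installed, a typical use (an illustrative example, not part of the original install log) is to wrap Oracle command-line tools so they gain readline editing and history:

alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

Add the aliases to your shell profile to make them permanent.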

Simplifying Troubleshooting Process for Oracle Service Cloud Admins and Agents (BUI Version)

OTN TechBlog - Wed, 2019-06-05 11:39
What is the Troubleshoot Extension?

If you are an Oracle Service Cloud administrator, you are likely the first person in your company your agents will go to when they experience an error with Oracle Service Cloud. Troubleshooting can quickly grow into a bigger investigative effort to identify the root cause (e.g., a network issue, a configuration issue, training, a defect), but the process usually starts with getting the right details about the error from your agents. Your agents are already frustrated because an error could be impeding their work and impacting their metrics. And now you're asking them to provide detailed information on the error: steps to reproduce the issue, workstation information, or data traffic, much of which they aren't familiar with.

Recognizing the dynamics of this common scenario, we decided to walk a mile in your shoes, as the administrator, as well as in your agents' shoes. As a result, we came up with the idea of automating the process of gathering the needed information from agents instead of leaving administrators and agents to wrestle with these challenges on their own. We developed a code sample named "Troubleshoot Extension" for the Oracle Service Cloud Browser UI (similar to what we did before for Agent Desktop) to address this need.

The "Troubleshoot Extension" sample code was created to automatically capture information, such as the browser's navigator details, basic agent information, and console log, error, warn, and debug output, in one fell swoop, instead of requiring agents to install or use different tools outside of Oracle Service Cloud. All the agents need to do is press the "start" button (located in the status bar) and press "stop" once they have completed all the steps to reproduce the error.

The sample code is available for download at the bottom of the article, but before going further, I'd encourage you to understand the importance of the web console for troubleshooting a web-based application, so that the approach used here makes more sense.

Why are we sharing this sample code?
  • First, because we believe that by automating this process, we can simplify your communication with agents experiencing errors and accelerate the path to a solution.
  • Second, because agents don't need to understand how to gather technical information for troubleshooting.
  • Lastly, because through this sample code, we can achieve:
    • sharing a nontrivial code sample that you can apply and reuse for other needs;
    • sharing a starting point from which you can enhance this troubleshooting tool to fit your requirements.
How does the Troubleshoot Extension work?

Let's take a more in-depth look at what the sample code delivers. It implements an extension status bar with start and stop buttons, plus a timer that shows how long your agent has been capturing the steps to reproduce.

When the agent clicks the start button, the Troubleshoot Extension automatically sets the log level (see the developer tools logs for more information), and everything that is registered in the web console is captured.

Once you have finished capturing the steps to reproduce, click the stop button and the Troubleshoot Extension compiles all the web console information and presents it in a modal window.
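The core console-capture idea can be sketched in a few lines of JavaScript (an illustrative sketch only; the names below are made up and are not from the actual sample code):

// keep a reference to each original console method, then wrap it
var captured = [];
['log', 'warn', 'error', 'debug'].forEach(function (level) {
  var original = console[level];
  console[level] = function () {
    // record the level, a timestamp, and the arguments, then defer to the real console
    captured.push({
      level: level,
      time: new Date().toISOString(),
      args: Array.prototype.slice.call(arguments)
    });
    original.apply(console, arguments);
  };
});

When the agent clicks stop, the extension only needs to serialize an array like captured and hand it to the modal window.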

** This extension is not compatible with IE11.

The modal window presents a friendly version of the captured result, but you can also save it by clicking the download button at the top. The download button creates a text file with the captured information, which you can share with your technical team to help them with the investigation.
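Saving the result as a text file from the browser can be done along these lines (again, an illustrative sketch under the same caveat, not the actual sample code):

function downloadAsText(filename, text) {
  // wrap the captured text in a Blob and trigger a download via a temporary link
  var blob = new Blob([text], { type: 'text/plain' });
  var link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = filename;
  link.click();
  URL.revokeObjectURL(link.href);
}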

The captured information is read-only, but there is an additional field at the top where your agent can add more details, such as the steps to reproduce or any other comment.

How to install?

1. Upload the extension code as a BUI extension.

  1. Download the extension file.
  2. Open your Admin console and upload the zip file as Agent Browser UI Extensions.
  3. Name your extension. E.g.: Troubleshoot Extension. 
  4. Select "Console" as an extension type.
  5. Select TsExnteion/ViewModel/js/ts-statusbar.js as the Init File.
  6. Go to Profiles and assign this extension to the profile that should use it.

2. Create custom configuration settings.

  1. Go to Site Configuration > Configuration Settings.
  2. Click on New > Text to create a custom configuration setting.
  3. The Key name is "CUSTOM_CFG_TS".
  4. The site value is the following JSON: {"debugLevel":3, "performance": true}

Ultimately, this solution should simplify your communication with agents experiencing errors, accelerate troubleshooting by gathering the required information in one easy step, and save everyone the time and frustration that surround these issues.

The source code is available here for download; take advantage of it to build a better troubleshooting model integrated into your Oracle Service Cloud.

If you are a developer and want to contribute to this sample code, you are welcome to join us on GitHub.
