Bobby Durrett's DBA Blog

Oracle database performance

Tried Live SQL

Thu, 2019-05-09 18:13

Quick post. I tried https://livesql.oracle.com/ for the first time.

Looks like you can try out the latest version of Oracle for free. You can type in SQL statements in your web browser.

There seems to be a boatload of scripts and tutorials to try out. It might be good for someone who is new to SQL.

Bobby


Installed 19.3 on Linux on VirtualBox

Thu, 2019-05-02 15:14

I noticed that Oracle database 19.3 was available to download so I installed it on a Linux VM under VirtualBox.

I cloned my base Oracle Linux 7 VM and did a yum -y update to get it up to the latest Linux version.

I installed the preinstall RPM:

yum install oracle-database-preinstall-19c

Created the Oracle home, set some variables, and disabled the firewall.

mkdir -p /home/oracle/product/db/19.0.0
chgrp oinstall /home/oracle/product/db/19.0.0
cd /home/oracle/product/db/19.0.0

export ORACLE_BASE=/home/oracle/product
export ORACLE_HOME=/home/oracle/product/db/19.0.0

[root@ora19 ~]# systemctl stop firewalld
[root@ora19 ~]# systemctl disable firewalld

Edited my .bashrc

export ORACLE_BASE=/home/oracle/product
export ORACLE_HOME=/home/oracle/product/db/19.0.0
export ORAENV_ASK=NO
export ORACLE_SID=orcl
. oraenv

Unzipped the downloaded file LINUX.X64_193000_db_home.zip in my ORACLE_HOME. Then I ran runInstaller from the ORACLE_HOME through MobaXterm. For some reason this decided to put the install window overlapping my two screens so I couldn’t tell what I was typing, and it didn’t allow me to move it. Fun times.

It seemed to hang on the last step, but it eventually finished. The process named ora_mz00_orcl was spinning on the CPU for part of the time that it seemed hung. I guess it was 5 or 10 minutes, but I didn’t time it.

Seems to be up and working:

SQL*Plus: Release 19.0.0.0.0 - Production on Thu May 2 13:00:59 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Thu May 02 2019 12:51:54 -07:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL>

Bobby


May 17th AZORA Meetup – Last until September!

Tue, 2019-04-30 14:28
AZORA Meetup before summer heats up! Two presentations – Friday, May 17th

Friday, May 17, 2019, 12:00 PM

Republic Services – 2nd Floor Conference Rooms
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ


AZORA Meetup before the summer heats up! We will be featuring two presentations on Friday May 17th meeting. Logistics are the same at friendly Republic Services, an afternoon with other Oracle types between Noon till 4pm. We will follow typical agenda of Lunch – Learn – Learn some more – Let us call it a weekend! Here are the details: Meeting Agend…


This is our last Meetup until our Summer break. Come check out two great presentations.

Doug Hood from Oracle will talk about the Oracle In-Memory database feature. We appreciate Oracle providing us with this technical content to support AZORA.

AZORA’s own Stephen Andert will be sharing a non-technical presentation on Networking. He just gave the same talk at the national Oracle user group meeting called Collaborate 19 so it will be great to have him share with his local user group.

Looking forward to seeing you there.

Bobby

P.S. AZORA is the Arizona Oracle User Group, and we meet in the Phoenix, Arizona area.


Example of coe_xfr_sql_profile force_match TRUE

Wed, 2019-04-17 10:57

Monday, I used the coe_xfr_sql_profile.sql script from Oracle Support’s SQLT scripts to resolve a performance issue. I had to set the parameter force_match to TRUE so that the SQL Profile I created would apply to all SQL statements with the same FORCE_MATCHING_SIGNATURE value.

I had just gone off the on-call rotation at 8 am Monday, and around 4 pm on Monday a coworker came to me with a performance problem. A PeopleSoft Financials job was running longer than it normally did. Since it had run for several hours, I got an AWR report of the last hour, looked at the SQL ordered by Elapsed Time section, and found a number of similar INSERT statements with different SQL_IDs.

The inserts were the same except for certain constant values. So, I used my fmsstat2.sql script with ss.sql_id = '60dp9r760ja88' to get the FORCE_MATCHING_SIGNATURE value for these inserts. Here is the output:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 60dp9r760ja88         3334601 15-APR-19 05.00.34.061 PM                1         224414.511     224412.713         2.982                  0                      0                   .376             5785269                 40                   3707

Now that I had the FORCE_MATCHING_SIGNATURE value 5442820596869317879 I reran fmsstat2.sql with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 instead of ss.sql_id = '60dp9r760ja88' and got all of the insert statements and their PLAN_HASH_VALUE values. I needed these to use coe_xfr_sql_profile.sql to generate a script to create a SQL Profile to force a better plan onto the insert statements. Here is the beginning of the output of the fmsstat2.sql script:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 0yzz90wgcybuk      1314604389 14-APR-19 01.00.44.945 PM                1            558.798        558.258             0                  0                      0                      0               23571                  0                    812
     5442820596869317879 5a86b68g7714k      1314604389 14-APR-19 01.00.44.945 PM                1            571.158        571.158             0                  0                      0                      0               23245                  0                    681
     5442820596869317879 9u1a335s936z9      1314604389 14-APR-19 01.00.44.945 PM                1            536.886        536.886             0                  0                      0                      0               21851                  0                      2
     5442820596869317879 a922w6t6nt6ry      1314604389 14-APR-19 01.00.44.945 PM                1            607.943        607.943             0                  0                      0                      0               25948                  0                   1914
     5442820596869317879 d5cca46bzhdk3      1314604389 14-APR-19 01.00.44.945 PM                1            606.268         598.11             0                  0                      0                      0               25848                  0                   1763
     5442820596869317879 gwv75p0fyf9ys      1314604389 14-APR-19 01.00.44.945 PM                1            598.806        598.393             0                  0                      0                      0               24981                  0                   1525
     5442820596869317879 0u2rzwd08859s         3334601 15-APR-19 09.00.53.913 AM                1          18534.037      18531.635             0                  0                      0                      0              713757                  0                     59
     5442820596869317879 1spgv2h2sb8n5         3334601 15-APR-19 09.00.53.913 AM                1          30627.533      30627.533          .546                  0                      0                      0             1022484                 27                    487
     5442820596869317879 252dsf173mvc4         3334601 15-APR-19 09.00.53.913 AM                1          47872.361      47869.859          .085                  0                      0                      0             1457614                  2                    476
     5442820596869317879 25bw3269yx938         3334601 15-APR-19 09.00.53.913 AM                1         107915.183     107912.459         1.114                  0                      0                      0             2996363                 26                   2442
     5442820596869317879 2ktg1dvz8rndw         3334601 15-APR-19 09.00.53.913 AM                1          62178.512      62178.512          .077                  0                      0                      0             1789536                  3                   1111
     5442820596869317879 4500kk2dtkadn         3334601 15-APR-19 09.00.53.913 AM                1         106586.665     106586.665         7.624                  0                      0                      0             2894719                 20                   1660
     5442820596869317879 4jmj30ym5rrum         3334601 15-APR-19 09.00.53.913 AM                1          17638.067      17638.067             0                  0                      0                      0              699273                  0                    102
     5442820596869317879 657tp4jd07qn2         3334601 15-APR-19 09.00.53.913 AM                1          118948.54      118890.57             0                  0                      0                      0             3257090                  0                   2515
     5442820596869317879 6gpwwnbmch1nq         3334601 15-APR-19 09.00.53.913 AM                0          48685.816      48685.816          .487                  0                      0                  1.111             1433923                 12                      0
     5442820596869317879 6k1q5byga902a         3334601 15-APR-19 09.00.53.913 AM                1            2144.59        2144.59             0                  0                      0                      0              307369                  0                      2

The first few lines show the good plan that these inserts ran on earlier runs. The good plan has PLAN_HASH_VALUE 1314604389 and runs in about 600 milliseconds. The bad plan has PLAN_HASH_VALUE 3334601 and runs in 100 or so seconds. I took a look at the plans before doing the SQL Profile but did not really dig into why the plans changed. It was 4:30 pm or so and I was trying to get out the door since I was not on call and wanted to get home at a normal time and leave the problems to the on-call DBA. Here is the good plan:

Plan hash value: 1314604389

------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                |                    |       |       |  3090 (100)|          |
|   1 |  HASH JOIN RIGHT SEMI           |                    |  2311 |  3511K|  3090   (1)| 00:00:13 |
|   2 |   VIEW                          | VW_SQ_1            |   967 | 44482 |  1652   (1)| 00:00:07 |
|   3 |    HASH JOIN                    |                    |   967 | 52218 |  1652   (1)| 00:00:07 |
|   4 |     TABLE ACCESS FULL           | PS_PST_VCHR_TAO4   |    90 |  1980 |    92   (3)| 00:00:01 |
|   5 |     NESTED LOOPS                |                    | 77352 |  2417K|  1557   (1)| 00:00:07 |
|   6 |      INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   7 |      TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  | 77352 |  2039K|  1557   (1)| 00:00:07 |
|   8 |       INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  | 77352 |       |   756   (1)| 00:00:04 |
|   9 |   TABLE ACCESS BY INDEX ROWID   | PS_VCHR_TEMP_LN4   | 99664 |   143M|  1434   (1)| 00:00:06 |
|  10 |    INDEX RANGE SCAN             | PSAVCHR_TEMP_LN4   | 99664 |       |   630   (1)| 00:00:03 |
------------------------------------------------------------------------------------------------------

Here is the bad plan:

Plan hash value: 3334601

---------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                    |       |       |  1819 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID       | PS_VCHR_TEMP_LN4   |  2926 |  4314K|  1814   (1)| 00:00:08 |
|   2 |   NESTED LOOPS                     |                    |  2926 |  4446K|  1819   (1)| 00:00:08 |
|   3 |    VIEW                            | VW_SQ_1            |     1 |    46 |     4   (0)| 00:00:01 |
|   4 |     SORT UNIQUE                    |                    |     1 |    51 |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID   | PS_PST_VCHR_TAO4   |     1 |    23 |     1   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                    |     1 |    51 |     4   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                |                    |     1 |    28 |     3   (0)| 00:00:01 |
|   8 |         INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   9 |         TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  |     1 |    23 |     3   (0)| 00:00:01 |
|  10 |          INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  |     1 |       |     2   (0)| 00:00:01 |
|  11 |        INDEX RANGE SCAN            | PS_PST_VCHR_TAO4   |     1 |       |     1   (0)| 00:00:01 |
|  12 |    INDEX RANGE SCAN                | PSAVCHR_TEMP_LN4   |   126K|       |  1010   (1)| 00:00:05 |
---------------------------------------------------------------------------------------------------------

Notice that in the bad plan the Rows column has 1 in it on many of the lines, but in the good plan it has larger numbers. Something about the statistics and the values in the where clause caused the optimizer to build the bad plan as if no rows would be accessed from these tables even though many rows would be accessed. So, it made a plan based on wrong information. But I had no time to dig further. I did ask my coworker if anything had changed about this job and nothing had.

So, I created a SQL Profile script by going to the utl subdirectory under sqlt where it was installed on the database server. I generated the script by running coe_xfr_sql_profile.sql with SQL_ID gwv75p0fyf9ys and PLAN_HASH_VALUE 1314604389 as arguments. I edited the generated script, coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql, to change the setting force_match=>FALSE to force_match=>TRUE, and then ran it. The long running job finished shortly thereafter, and no new incidents have occurred in subsequent runs.
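
The force_match setting is an argument to the DBMS_SQLTUNE.IMPORT_SQL_PROFILE call inside the generated script. The relevant part looks roughly like this (approximate, not the exact generated code):

dbms_sqltune.import_sql_profile (
sql_text    => sql_txt,
profile     => h,
name        => 'coe_gwv75p0fyf9ys_1314604389',
replace     => TRUE,
force_match => TRUE /* changed from FALSE so it matches all literal values */ );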

The only thing that confuses me is that when I run fmsstat2.sql now with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 I do not see any runs with the good plan. Maybe future runs of the job have a different FORCE_MATCHING_SIGNATURE and the SQL Profile only helped the one job. If that is true, the future runs may have had the correct statistics and run the good plan on their own.

I wanted to post this to give an example of using force_match=>TRUE with coe_xfr_sql_profile. I had an earlier post about this subject, but I thought another example could not hurt. I also wanted to show how I use fmsstat2.sql to find multiple SQL statements by their FORCE_MATCHING_SIGNATURE value. I realize that SQL Profiles are a kind of band-aid rather than a solution to the real problem. But I got out the door by 5 pm on Monday and did not get woken up in the middle of the night, so sometimes a quick fix is what you need.
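
I did not include fmsstat2.sql here, but a minimal sketch of the same idea against DBA_HIST_SQLSTAT looks something like this (not the actual script):

select ss.force_matching_signature,
ss.sql_id,
ss.plan_hash_value,
sn.end_interval_time,
ss.executions_delta,
ss.elapsed_time_delta/1000 elapsed_ms
from dba_hist_sqlstat ss, dba_hist_snapshot sn
where ss.snap_id = sn.snap_id
and ss.dbid = sn.dbid
and ss.instance_number = sn.instance_number
and ss.force_matching_signature = 5442820596869317879
order by sn.end_interval_time, ss.sql_id;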

Bobby


Check your hints carefully

Tue, 2019-04-16 16:32

Back in 2017 I wrote about how I had to disable the result cache after upgrading a database to 11.2.0.4. This week I found one of our top queries and it looked like removing the result cache hints made it run 10 times faster. But this did not make sense because I disabled the result cache. Then I examined the hints closer. They looked like this:

/*+ RESULT CACHE */

There should be an underscore between the two words. I looked up hints in the manuals and found that CACHE is a real hint. So, I tried the query with these three additional combinations:

 
/*+ RESULT */
 
/*+ CACHE */
 
/*+ RESULT_CACHE */

It ran slowly with the original hint and with just the CACHE hint, but not with the others. So, the moral of the story is to check your hints carefully because they may not be what you think they are.
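
One way to check what a hint actually did is to look at the executed plan. Here is a minimal sketch, assuming a hypothetical table mytable:

select /*+ RESULT_CACHE */ count(*) from mytable;

select * from table(dbms_xplan.display_cursor(null,null,'BASIC'));

-- A plan that really uses the result cache includes a RESULT CACHE
-- operation; if the hint was misspelled or ignored that line is absent.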

Bobby


Huge Pages Speeds Startup of Web Servers

Tue, 2019-04-02 10:50

We had issues deploying changes to a large web farm a few weeks back. We had automated the startup of the 25 web servers using a DevOps tool but when it tried to bring them all up at once it took forever and the web servers got timeout errors. We resolved this by putting in huge pages on the metadata database.

I mentioned the unexpectedly large impact of huge pages on login time in a previous post. But we had not realized that a small metadata database with a 5 gigabyte SGA would need huge pages. I should mention that this is 11.2.0.3 Oracle on Linux. The web farm has 25 servers with 25 database connections each. I think the number of connections from each server should be 5 or fewer, but I have not convinced others to reduce the number. If you have a metadata database with 4 cores and 625 connections from 25 web servers, the web servers cannot use all of those connections. A 4-core database server can only process 10 or 20 SQL statements at once if they are I/O centric. If they are all CPU-bound you are looking at about 4 concurrent active SQL statements. If you kick off 625 SQL statements at once the load will go through the roof and everything will time out.

I thought about putting in shared servers on the metadata database to force the 625 sessions to funnel through 12 shared servers so the database server would not be overwhelmed should all 625 connections choose to run at once. But the only problem we had was with logins. All 625 sessions were logging in at the same time when the web farm was started by our DevOps tool. Huge pages resolved this issue by making the login time 10 to 20 times faster.

The database was in a 16 gigabyte Linux VM with 4 cores and a 5 gigabyte SGA. Without huge pages each login took about 2.5 seconds. With it the logins took about .15 seconds. Without huge pages the load on the database server went over 100 when all 625 sessions started at once. With huge pages the load never exceeded 1. Without huge pages the web farm never came up cleanly and with it the farm came up quickly without error. So, if you have a Linux VM with several gigabytes in your SGA you might want to use huge pages.
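
Here is a sketch of the kind of configuration involved, with example values only (size vm.nr_hugepages to cover the SGA; a 5 gigabyte SGA needs roughly 5120 MB / 2 MB = 2560 huge pages plus a little headroom):

-- Linux side, in /etc/sysctl.conf (example value, not from our system):
--   vm.nr_hugepages = 2600
-- Oracle side, to require huge pages for the whole SGA (11.2.0.2 and up):
alter system set use_large_pages=ONLY scope=spfile;
-- restart the instance, then verify with: grep Huge /proc/meminfo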

Bobby


High Processes Cause Semaphore Errors

Thu, 2019-03-28 10:38

I could not create a new Delphix virtual database because I kept getting semaphore limit errors. I ended up reducing the processes parameter value to resolve the issue.

I got these errors creating the new VDB:

ERROR : SQL> ORA-27154: post/wait create failed 
ERROR : ORA-27300: OS system dependent operation:semget failed with status: 28
ERROR : ORA-27301: OS failure message: No space left on device
ERROR : ORA-27302: failure occurred at: sskgpcreates

The processes parameter was set to 3000 on production, so Delphix tried to create the VDB with the same value. I reduced processes to 100 and the VDB came up cleanly.
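
My understanding is that Oracle requests semaphores at instance startup in proportion to the processes parameter, so a high value can exhaust the kernel's semaphore limits. A sketch of the kind of change involved (example values only):

-- OS semaphore limits (SEMMSL SEMMNS SEMOPM SEMMNI):
--   cat /proc/sys/kernel/sem
-- If the instance will not start, put a lower value in the pfile used to
-- start it, for example: processes=100
-- On a running instance the equivalent change is:
alter system set processes=100 scope=spfile;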

I have been living with this problem for a year or two but yesterday I really needed to resolve it and I finally figured it out. I got the idea of reducing processes from this Oracle support document:

Instance Startup Fails With Error ORA-27154,ORA-27300,ORA-27301,ORA-27302 (Doc ID 314179.1)

I looked these errors up on Oracle’s support site several times in the past and never saw the idea of reducing processes so I thought I would mention it here if only to remind myself.

Bobby


Python Practice Produced Pretty Pictures

Sat, 2019-03-23 18:57

I wrote a Python program that made some fun pictures so I thought I would share them even though this is not really a database post.

I practice Python programming by doing Rosetta Code programming tasks that no one has implemented in Python. This is a fun way of keeping up my Python skills. My most recent contribution made pretty pictures so I thought I would show them here. The code takes a cube and breaks up the faces into smaller and smaller pieces that change the cube into a rounder shape.

Here is the input:

Input to the program, a cube

Here is the output after one subdivision:

After one subdivision. Chunky.

Here is the output after two subdivisions:

Two subdivisions. Pretty round.

Note that it is getting rounder. Lastly, after four subdivisions it is remarkably round considering that it started as a cube:

Four subdivisions. Quite round.

The main point of this post was to show the pretty pictures. But, to be more serious, if someone is reading this blog and looking for a programming task to do for practice you can do what I do and find a Rosetta Code task for the language you are learning and you can get some good practice.

Bobby


Speed of Light

Wed, 2019-03-20 16:30

Looking at cloud databases has me thinking about the speed of light. Wikipedia says that the speed of light is about 186,000 miles per second. If my calculations are correct that is 5.37 microseconds per mile. The United States is about 2680 miles wide so it would take light about 14.4 milliseconds to cross the US. If I ping one of my favorite web sites it takes tens of milliseconds to ping so that kind of makes sense because those sites are in other cities and I am going through various routers. I did some tests with my company’s storage and found that reading from our storage when the data is cached in the storage server takes around 200 microseconds. That is 200 microseconds for a round trip. I’m sure that our database servers and storage are a lot less than a mile apart so most of that time has nothing to do with the speed of light. I heard about a cloud vendor whose fast network connection took 100 microseconds plus the speed of light. I guess 100 microseconds is the cost of getting your data to fiber and light does the rest. If your cloud database was on the other side of the country, I guess it could take 14 milliseconds each way at least for each SQL request. If the cloud database was in your own city and say 10 miles away that would only tack on about 53.7 microseconds each way to the 100 microseconds overhead. I guess it makes sense. Maybe 100 microseconds plus the speed of light is the cost of moving data in the best case?
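
Here is the arithmetic in one place:

1 second / 186,000 miles = about 5.37 microseconds per mile
2,680 miles x 5.37 microseconds per mile = about 14.4 milliseconds across the US
10 miles x 5.37 microseconds per mile = about 53.7 microseconds across a city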

Bobby


New utility Python scripts for DBAs

Tue, 2019-03-19 14:45

I pushed out three new Python scripts that might be helpful to Oracle DBAs. They are in my miscpython repository.

Might be helpful to some people.

Bobby


Slides from March AZORA meeting

Thu, 2019-03-14 16:27

Here are the slides from our March Arizona Oracle User Group (AZORA) meeting:

Daniel Morgan Security Master Class

We really appreciate Daniel Morgan taking the time to share this information about the increasingly important topic of database security.

Also, AZORA is always looking for people to present at future meetings. We have one more meeting in May before the blazing hot Arizona summer and then we start up again in September. Email me at bobby@bobbydurrettdba.com if you would like to speak at a future meeting.

Bobby


Oracle Database Security Master Class AZORA Meetup Thursday

Fri, 2019-03-01 16:33

Next Thursday Daniel Morgan from TidalScale will be giving a free Oracle Database Security Master Class for our March Arizona Oracle User Group meeting. See the details in our Meetup link:

Oracle Database Security Master Class

Thursday, Mar 7, 2019, 12:00 PM

Republic Services – 2nd Floor Conference Rooms
14400 N 87th St (AZ101 & Raintree) Scottsdale, AZ


Oracle Database Security Master Class: Year after year Oracle databases that should be secure are violated. In some cases data is stolen, in some cases data is altered, in some cases the database is not the target but rather is means to an end. In almost all cases there is no excuse. No excuse … but there is an explanation. Oracle professionals d…


Please RSVP so that we know how many people will be there.

Given the ever-increasing number of security threats to our databases this session on Oracle database security should be valuable to us all.

I hope to see you there!

Bobby


ORA-30926 on MERGE statement

Tue, 2019-02-26 15:19

I was on call last week and I got a ticket about a batch job that was failing on a MERGE statement with an ORA-30926 error. Our support team worked around it by deleting some duplicate rows in a table that was feeding into the MERGE. This week I wanted to go back and try to understand what would cause ORA-30926 errors on MERGE statements.

I read through some blog posts and Oracle support documents relating to ORA-30926 and merge. Then I tried building some simple test scripts to see when you get ORA-30926 on a MERGE. At the end of my search I came back to this simple statement from the 18c SQL Language Reference manual’s description of the MERGE statement:

MERGE is a deterministic statement. You cannot update the same row of the target table multiple times in the same MERGE statement.

I was puzzled by the term “deterministic” but “cannot update the same row…multiple times” seems simple enough. I created test scripts to show whether you can update the same row multiple times with a MERGE: zip

Here are the two test tables:

create table target (a number,b number);

create table source (a number,b number);

Here is data and a MERGE statement that causes the error:

SQL> insert into target values (1,1);
SQL> insert into source values (1,2);
SQL> insert into source values (1,3);

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b;
using source
      *
ERROR at line 2:
ORA-30926: unable to get a stable set of rows in the source tables 

This seems very simple. We join the source and target tables on column A with value 1. The merge statement tries to update column B on the target table row twice. It tries to set it to 2 and then to 3. I guess this is where the term “deterministic” comes in. If the merge statement updated B to 2 and 3 which would it do first and which second?

The example above with values 2 and 3 for B makes good sense but I saw some odd behavior when I used source rows with 1 and 2 for the B values of the two rows. With B values of 1 and 2 in this case the MERGE does not get an error:

SQL> insert into target values (1,1);
SQL> insert into source values (1,1);
SQL> insert into source values (1,2);

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b;

2 rows merged.

SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          2                                                           

This seems to contradict the idea that you cannot update the same row multiple times with a MERGE. In this case it seems like B is updated twice, once to 1 which is what it already was set to and then to 2. I guess this works because B was already set to 1 so the update of B to the same value does not count as an update. It seems like only one update took place and then B ends up set to 2.

This example does not work with a slightly different MERGE statement on Oracle 12.1 or earlier:

SQL> select banner from v$version;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production    

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b
  6  when not matched then
  7  insert values (source.a,source.b);
using source
      *
ERROR at line 2:
ORA-30926: unable to get a stable set of rows in the source tables 

SQL> select * from source;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           
         1          2                                                           

SQL> 
SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           

All that I can say based on these two tests is that sometimes an update of a column to the same value counts as an update and sometimes it does not. The preceding example works on 12.2:

SQL> select banner from v$version;

BANNER                                                                          
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production    

SQL> merge into target
  2  using source
  3  on (target.a = source.a)
  4  when matched then
  5  update set target.b=source.b
  6  when not matched then
  7  insert values (source.a,source.b);

2 rows merged.

SQL> select * from source;

         A          B                                                           
---------- ----------                                                           
         1          1                                                           
         1          2                                                           

SQL> 
SQL> select * from target;

         A          B                                                           
---------- ----------                                                           
         1          2                                                           

It all seemed so simple when I read the documentation. Maybe instead of saying:

You cannot update the same row of the target table multiple times in the same MERGE statement.

Instead it should say:

You may not be able to update the same row of the target table multiple times in the same MERGE statement.

We should not count on being able to update the same row multiple times with a single MERGE statement, but clearly there are some cases in which we can. If you are like me and you get paged on a job that fails with ORA-30926 on a MERGE, it makes sense to get rid of the multiple updates on a single row because you most likely have hit a case where it is not allowed.
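
A standard way to get rid of the multiple updates (my addition, not something from the original tests) is to collapse the source down to one row per join key inside the USING clause. Using the tables above:

merge into target
using (select a, max(b) b from source group by a) source_dedup
on (target.a = source_dedup.a)
when matched then
update set target.b=source_dedup.b
when not matched then
insert values (source_dedup.a,source_dedup.b);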

Bobby


One query in MySQL Performance Schema

Tue, 2019-02-12 16:48

I am learning about MySQL performance tuning. I read the Performance Schema chapter of the MySQL 5.7 manual and I have a MySQL 5.7.20 database running under Linux. I have a test table and I am running a simple query that does a full scan of the table and returns the sum of a couple of columns. The goal was to look at some Performance Schema tables to see what they would tell me about this simple query. I bounce the MySQL database service before running my test script so that the query pulls data from disk and not memory. A zip of my test script and its output is here: zip

I looked at the tables that had this pattern: events_*_history_long. These are the four tables: events_statements_history_long, events_transactions_history_long, events_stages_history_long, and events_waits_history_long.

There seems to be a hierarchical relationship among these tables something like this:

  • statements->transactions
  • statements->stages->waits->waits

There may be other relationships that my simple test did not capture. If you look at the zip you will see all the levels of the hierarchy that I explored but, in this case, the bottom level seems the most interesting. Here is the query that explores the bottom of the statements->stages->waits->waits path:

select * from events_waits_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_waits_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_stages_history_long
where
NESTING_EVENT_ID in 
(select EVENT_ID from events_statements_history_long
where SQL_TEXT like concat('%te','st%'))
and NESTING_EVENT_TYPE = 'STATEMENT')
and NESTING_EVENT_TYPE = 'STAGE')
and NESTING_EVENT_TYPE = 'WAIT'
order by EVENT_ID;

The output reminds me of a trace in Oracle. It shows the individual disk reads and the waits for internal locks and mutexes.

+-----------+----------+--------------+-------------------------------------------+-----------------+---------------+---------------+------------+-------+---------------+-------------------------------+------------+-------------+-----------------------+------------------+--------------------+----------------+-----------------+-------+
| THREAD_ID | EVENT_ID | END_EVENT_ID | EVENT_NAME                                | SOURCE          | TIMER_START   | TIMER_END     | TIMER_WAIT | SPINS | OBJECT_SCHEMA | OBJECT_NAME                   | INDEX_NAME | OBJECT_TYPE | OBJECT_INSTANCE_BEGIN | NESTING_EVENT_ID | NESTING_EVENT_TYPE | OPERATION      | NUMBER_OF_BYTES | FLAGS |
+-----------+----------+--------------+-------------------------------------------+-----------------+---------------+---------------+------------+-------+---------------+-------------------------------+------------+-------------+-----------------------+------------------+--------------------+----------------+-----------------+-------+
|        28 |    11162 |        11162 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744476160960 | 5744479752316 |    3591356 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
|        28 |    11163 |        11163 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0lru.cc:1320 | 5744479934370 | 5744479974470 |      40100 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11164 |        11164 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0buf.cc:5150 | 5744480619278 | 5744480646546 |      27268 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11165 |        11165 | wait/synch/sxlock/innodb/hash_table_locks | buf0buf.cc:5153 | 5744480749202 | 5744480858274 |     109072 |  NULL | NULL          | NULL                          | NULL       | NULL        |              70197752 |              203 | WAIT               | exclusive_lock |            NULL |  NULL |
|        28 |    11166 |        11166 | wait/synch/mutex/innodb/fil_system_mutex  | fil0fil.cc:1032 | 5744481202332 | 5744481236016 |      33684 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66654712 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11167 |        11167 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744481464586 | 5744485206718 |    3742132 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
|        28 |    11168 |        11168 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0lru.cc:1320 | 5744485374336 | 5744485415238 |      40902 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11169 |        11169 | wait/synch/mutex/innodb/buf_pool_mutex    | buf0buf.cc:5150 | 5744485590876 | 5744485618144 |      27268 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66655496 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11170 |        11170 | wait/synch/sxlock/innodb/hash_table_locks | buf0buf.cc:5153 | 5744485730424 | 5744485815436 |      85012 |  NULL | NULL          | NULL                          | NULL       | NULL        |              70197624 |              203 | WAIT               | exclusive_lock |            NULL |  NULL |
|        28 |    11171 |        11171 | wait/synch/mutex/innodb/fil_system_mutex  | fil0fil.cc:1032 | 5744486328716 | 5744486357588 |      28872 |  NULL | NULL          | NULL                          | NULL       | NULL        |              66654712 |              203 | WAIT               | lock           |            NULL |  NULL |
|        28 |    11172 |        11172 | wait/io/file/innodb/innodb_data_file      | fil0fil.cc:5778 | 5744486586960 | 5744490523176 |    3936216 |  NULL | NULL          | /var/lib/mysql/bobby/test.ibd | NULL       | FILE        |       139648405042752 |              203 | WAIT               | read           |           16384 |  NULL |
|
... edited for length...

Most of the time seems to be taken up on reads from a file, which is what I expected. If you look at the wait/io/file/innodb/innodb_data_file waits, they seem to be 16384-byte reads (one 16 KB InnoDB page each) from the file associated with the table. I could use this information to build a query to show a profile of the time spent by the query based on EVENT_NAME. It would be kind of like a tkprof of an Oracle trace.
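
A minimal sketch of that kind of profile query follows. TIMER_WAIT is in picoseconds, so dividing by 1,000,000,000 gives milliseconds; to limit it to one statement it would need the same NESTING_EVENT_ID filtering as the big query above:

select EVENT_NAME,
count(*) waits,
sum(TIMER_WAIT)/1000000000 total_ms
from events_waits_history_long
group by EVENT_NAME
order by total_ms desc;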

This post is just a simple first test. I am new to MySQL tuning so feel free to leave a comment or email me if you have suggestions or corrections related to this post.

Bobby


SQL Profile not used on slightly different query

Thu, 2019-01-31 15:09

Last week I was asked to help with a performance problem that looked a lot like a problem I fixed in July with a SQL Profile. The query whose plan I fixed back in July was modified by a minor application change over the weekend. A single column that was already in the select clause was added to another part of the select clause. As a result, the SQL_ID for the new query was different than the one for the July query. The SQL Profile from July associated SQL_ID 2w9nb7yvu91g0 with PLAN_HASH_VALUE 1178583502, but since the SQL_ID was now 43r1v8v6fc52q the SQL Profile was no longer used.

At first, I thought I would have to redo the work I did in July to create a SQL Profile for the new query. Then I realized that the plan I used in July would work with the new SQL_ID, so all I did was create a SQL Profile relating SQL_ID 43r1v8v6fc52q with PLAN_HASH_VALUE 1178583502 and the problem was solved. This is an 11.2.0.3 database running on the HP-UX Itanium platform. Here is a post from 2013 explaining how to create a SQL Profile: url.

I thought it would be helpful to use this post to go over the steps that I went through with the July incident and how I originally generated the good plan. Then I wanted to make some comments about the various ways I come up with good plans for SQL Profiles, by either generating a new better plan or by finding an older existing better one. Lastly, I wanted to talk about how a given good plan can be used for a variety of similar SQL statements.
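
For reference, reattaching an existing plan to a new SQL_ID with coe_xfr_sql_profile.sql looks roughly like this (a sketch, not the exact commands from the incident):

SQL> @coe_xfr_sql_profile.sql 43r1v8v6fc52q 1178583502
SQL> @coe_xfr_sql_profile_43r1v8v6fc52q_1178583502.sql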

The problem query that I worked on in July and many of the other SQL statements that I tune with SQL Profiles have bind variables in their where clauses. Usually the optimizer generates the plan for a query with bind variables once, based on the values of the bind variables at that time. Then, unless the plan is flushed out of the shared pool, the query continues to run on the same plan even if it is horribly inefficient for other bind variable values. There is a feature (adaptive cursor sharing) that will cause the optimizer to run different plans based on the bind variable values in some cases, but the SQL statements that I keep running into do not seem to use that feature. Since the query I worked on in July had bind variables I assumed that it was a typical case of a plan that worked well for one set of bind variables and that was terribly slow for another set. So, I had to find a set of bind variable values that made the query slow and figure out a better plan for those values. I used my bind2.sql script to extract the bind variable values for the problem query when I was working on the problem in July.
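
bind2.sql is not shown here, but a minimal sketch of the same idea, pulling captured bind values from the AWR history, could look like this:

select sn.end_interval_time,
sb.name,
sb.datatype_string,
sb.value_string
from dba_hist_sqlbind sb, dba_hist_snapshot sn
where sb.sql_id = '2w9nb7yvu91g0'
and sb.snap_id = sn.snap_id
and sb.dbid = sn.dbid
and sb.instance_number = sn.instance_number
order by sn.end_interval_time, sb.position;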

After extracting the bind variables, I used an AWR report to figure out which part of the plan contributed the most to the run time of the query so that I knew which bind variable value was causing the slowdown. Using an AWR report in this way only works if you do not have a bunch of slow SQL statements running at the same time. In this case the problem query 2w9nb7yvu91g0 was dominating the activity on the database with 62.19% of the total elapsed time. If there were a bunch of SQL Statements at the top of this list with similar percent of total values, it might be hard to use the AWR report to find information about this one query.

Since the activity for 2w9nb7yvu91g0 was 87.19% CPU I looked for the segments with the most logical reads. Logical reads are reads from memory, so they consume CPU and not disk I/O. The segment for the S_ACCNT_POSTN table had 88.60% of the logical reads, so most likely this segment caused the slowness of the query’s plan.

I looked at the plan for 2w9nb7yvu91g0 to see where the most heavily read table was used. This would probably be the source of the slow query performance. I found that it was doing a range scan of an index for the S_ACCNT_POSTN table that had the column POSITION_ID as its first column. This made me suspect that the plan was using the wrong index. If an index was used to retrieve many rows from the table that could take a long time. I did a count on all the rows in the table grouping by POSITION_ID and found that most rows had a specific value for that column. I replaced the actual POSITION_ID values with VALUE1, VALUE2, etc. below to hide the real values.
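
The count would have come from something like this (my reconstruction):

select position_id, count(*) cnt
from s_accnt_postn
group by position_id
order by cnt desc;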

POSITION_ID            CNT
--------------- ----------
VALUE1             2075039
VALUE2               17671
VALUE3                8965
VALUE4                5830
VALUE5                5502
VALUE6                5070
VALUE7                4907
VALUE8                4903

Next, I verified that the query had an equal condition that related a bind variable to the POSITION_ID column of the problem table. This made me suspect that the plan in the shared pool was generated with a bind variable value for POSITION_ID other than VALUE1. So, that plan would work well for whatever value was used to create it. POSITION_ID would be equal to that value for a small percentage of the rows in the table. But, running the query in SQL*Plus with POSITION_ID='VALUE1' caused the optimizer to choose a plan that made sense given that this condition was true for most of the rows in the table. The PLAN_HASH_VALUE for the new plan was 1178583502.

I tested 1178583502 against a variety of possible bind variable values by using an outline hint in SQL*Plus scripts to force that plan no matter which values I tested against. I extracted the outline hint by running the query with POSITION_ID='VALUE1' and using this dbms_xplan call:

select * from table(dbms_xplan.display_cursor(null,null,'OUTLINE'));

Then I just added the outline hint to a copy of the same SQL*Plus script and tried various combinations of bind variable values as constants in the where clause just as I had tried VALUE1 for POSITION_ID. I used the values that I had extracted using bind2.sql. After verifying that the new plan worked with a variety of possible bind variable values, I used a SQL Profile to force 2w9nb7yvu91g0 to use 1178583502 and the problem was resolved.

I have just described how I created the original July SQL Profile by running a version of the problem query replacing the bind variables with constants that I knew would cause the original plan to run for a long time. The optimizer chose a better plan for this set of constants than the one locked into the shared pool for the original query. I used the PLAN_HASH_VALUE for this plan to create a SQL Profile for the July query. This is similar to an approach that I documented in two earlier blog posts. In 2014 I talked about using a hint to get a faster plan in memory so I could use it in a SQL Profile. In 2017 I suggested using an outline hint in the same way. In both of those cases I ran the problem query with hints and verified that it was faster with the hints. Then I used a SQL Profile to force the better PLAN_HASH_VALUE onto the problem query. So, in all these cases the key is to generate a better plan in any way possible so that it is in memory and then create a SQL Profile based on it. A lot of times we have queries that have run on a better plan in the past and we just apply a SQL Profile that forces the better plan that is already in the system. My December 2018 post documents this type of situation. But the 2014 and 2017 blog posts that I mentioned above and the July 2018 example that I just described are all similar in that we had to come up with a new plan that the query had never used and then force it onto the SQL statement using a SQL Profile.

The incidents in January and July and the cases where I added hints all lead me to wonder how different one SQL statement can be from another and still share the same plan. The problem last week showed that two queries with slightly different select clauses could still use the same plan. The other cases show that you can add hints or run the statement with bind variables replaced with constants. In the January case I did not have to go back through the analysis that I did in July because I could quickly force the existing plan from the July query onto the January one. The January problem also shows the limits of SQL Profiles. The slightest change to a SQL statement causes a SQL Profile to be ignored, even though the plan would still work for the new SQL statement. But in the January case the ability to use the same plan for slightly different queries made it easy to create a new SQL Profile.

Bobby


Slides From January 2019 AZORA Meeting

Wed, 2019-01-30 10:20

Here are slides from the January 2019 AZORA Meeting with Charles Kim and Viscosity:

Oracle Database 12.2 New Features

Oracle Database 18c New Features

Thank you to Charles and Viscosity for making our meeting a big success, and thanks for sharing these slides.

Bobby


MIT’s Free Algorithms Track

Wed, 2019-01-23 17:58

MIT, the Massachusetts Institute of Technology, has published a free set of three undergraduate level Algorithms classes that I recommend to the readers of this blog. These are links to the three classes in the order that they should be taken.

Mathematics for Computer Science (Spring 2015)

Introduction to Algorithms (Fall 2011)

Design and Analysis of Algorithms (Spring 2015)

I took the 2010 version of the Mathematics for Computer Science class and I am almost done with the Introduction to Algorithms class. I am considering taking the third class next. I want to use this post to explain why I recommend this set of three classes.

All three of these classes have video lectures. These classes are part of a massive collection of free educational information that MIT publishes on its OCW or Open Courseware web site: ocw.mit.edu. I have focused on the Computer Science classes out of the many available. But only certain classes contain video. For example, the Database Systems class does not have video. It does have free lecture notes and other content that has real value, but the Algorithms track has three classes in a row with video and that makes them especially valuable.

In addition to the video content, all three classes have valuable static content such as lecture notes. The Mathematics for Computer Science class has a free online textbook. It looks like the 2015 version has all kinds of interesting extra content that was not in the 2010 class that I took, and I thought the 2010 class was great. The last two classes have solutions to the problem sets, quizzes, and exams, which is great when you are studying this on your own.

The Introduction to Algorithms and Design and Analysis of Algorithms classes have a highly regarded standard Algorithms textbook for their readings. It is Introduction to Algorithms, 3rd Edition by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. This textbook is commonly referred to as CLRS and seems to be the standard textbook for people who are serious about Algorithms.

The price is right. All three of these classes are free. I did buy CLRS, or actually I got it as a present, so my wife’s parents bought it, but $65 is a worthwhile investment especially since it is the text for two of the three classes and a valuable reference afterwards. The first class is completely free since it has a free textbook included. The three classes represent thousands of dollars’ worth of educational material for free.

These classes are not easy. I do not claim to have mastered this material. It is hard to really discipline yourself to study for a class by yourself without grades and deadlines and others to talk to about it. But I have learned a lot from the two classes that I have worked through. I am a little afraid to take on the third class which is the intermediate level Algorithms class. Yikes! But even if I struggle to fully grasp every assignment, I am learning something. Some of it is a reminder of things I learned long ago in college, but many things are new. So, I do not mind the classes being hard. Better hard than too easy.

So, if you want to learn about Algorithms this three-class sequence is a great way to go. But, why would readers of this blog want to study this subject? In my opinion, Algorithms is just one area of Computer Science that would be helpful for a database professional to learn. Obviously, it is great to learn about database theory if you work with databases. It is great to learn about other systems topics such as operating systems and compilers since your work uses this type of software. It is great to learn about hardware, software development, etc. I think that many aspects of my Computer Science background from long ago in school have helped me in my database job. So, I do not think that Algorithms is the most important or only thing within CS that a database person needs to know, but I do think that Algorithms is very helpful to database workers.

I think that all of the main topics within CS that a database person would want are covered on the list of OCW EE and CS classes. But it seems that Algorithms is unique in that it has a very nice three class sequence that is so fully developed with video and the other resources that I mentioned. Maybe there are other areas that I missed, but I know that Algorithms is well covered by OCW. So, that is why I am singling out this three-class sequence in this blog post. I believe that it is an especially well covered topic within Computer Science that is useful to database professionals and available for free. So, check it out!

Bobby


January 25th AZORA Meetup with Viscosity

Wed, 2019-01-09 17:36

If you are in the Phoenix, Arizona area on Friday, January 25th please come to the AZORA meetup with talks by Viscosity. Please RSVP on the meetup site so we know how much food to order: Meetup link. We have a great lineup of speakers with talks about new features of the latest Oracle database versions. I know that I have not kept up with all the latest features, especially with all the other technical training that is on my plate. So, I am looking forward to setting aside this dedicated time to learn about Oracle database features from highly qualified presenters.

Plus, it will be fun and includes lunch! Come join us for our first meeting of the new year.

Bobby

Categories: DBA Blogs

Long running queries and my DBA_HIST_SQLSTAT scripts

Mon, 2018-12-17 14:24

On December 2nd we ran into a problem with one of my scripts that uses the DBA_HIST_SQLSTAT view. It did not show the full run time of a SQL statement that ran for several hours, so it hid the fact that the plan the SQL statement was using was much less efficient than another plan it had run with before. This post will document the two DBA_HIST_SQLSTAT scripts that I have been using so far and will introduce a new script that should make it clearer when a SQL statement is running for several hours on one plan.

There are two scripts based on DBA_HIST_SQLSTAT that I use most often for tuning: sqlstat.sql and sqlstat2.sql. I use these scripts all the time to look at the performance of queries and the different execution plans that they use. But in this case the output of sqlstat.sql was misleading. Here is part of its output for the problem SQL statement am47wcwn336yj:

PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
--------------- ------------------------- ---------------- ------------------
     3771021445 04-NOV-18 05.00.15.814 AM                1            2250696
     2117190322 11-NOV-18 03.01.00.265 AM                6         223502.352
     1484468961 18-NOV-18 06.00.22.364 AM                1         2695964.96
     2117190322 25-NOV-18 02.00.34.302 AM                3         198192.968
     1139203276 02-DEC-18 07.00.21.784 AM                1          322110.53

We got a performance ticket at about 9 am on December 2nd about a session with SQL ID am47wcwn336yj running for longer than normal. Based on the sqlstat.sql output above you would think that the current plan is 1139203276 and that it is a little slower than the other plans but not terrible. The problem is that sqlstat.sql only shows you SQL executions that completed during the hour prior to the END_INTERVAL_TIME. So, one execution of am47wcwn336yj completed between 6 and 7 am on December 2nd, but the output makes it look like it only ran for 322110.53 milliseconds, which is about 5 minutes. Really it probably ran for several hours and completed around 6:05 am. So, we thought that 1139203276 was an ok plan when really 2117190322 is the much more efficient plan. The sqlstat.sql output confused us because it did not show the hours where am47wcwn336yj was running with plan 1139203276 but had not completed. It is easy to see why sqlstat.sql leaves out these hours if you look at this condition in its where clause:

and executions_delta > 0

This condition in the where clause prevents you from seeing hours where the SQL statement is still running. But the select clause divides by executions_delta, and since you cannot divide by zero, I excluded the rows where it is zero.
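For reference, the heart of sqlstat.sql is roughly the query below. This is a simplified sketch with the SQL ID hard-coded for this example; the real script selects more columns, such as the CPU and IO breakdown.

-- simplified sketch of the core query in sqlstat.sql
-- elapsed_time_delta is in microseconds, so dividing by
-- executions_delta*1000 gives average milliseconds per execution
select ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta/(ss.executions_delta*1000)
       "Elapsed Average ms"
from dba_hist_sqlstat ss, dba_hist_snapshot sn
where ss.sql_id = 'am47wcwn336yj'
and ss.snap_id = sn.snap_id
and ss.instance_number = sn.instance_number
and ss.dbid = sn.dbid
and executions_delta > 0
order by sn.end_interval_time;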

When I am working on a long-running SQL statement that spans two or more hours, I run sqlstat2.sql after I run sqlstat.sql. Sqlstat2.sql shows all the time spent per hour, regardless of whether a SQL statement's execution completed. It does not have the condition requiring executions_delta to be > 0 because it does not divide by that column. Here is the output for am47wcwn336yj:

PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA ELAPSED_TIME_DELTA
--------------- ------------------------- ---------------- ------------------
     3771021445 04-NOV-18 03.00.43.465 AM                0         3584030311
     3771021445 04-NOV-18 04.00.27.516 AM                0         3588264762
     3771021445 04-NOV-18 05.00.15.814 AM                1         2250695998
     2117190322 11-NOV-18 03.01.00.265 AM                6         1341014109
     1484468961 18-NOV-18 02.00.07.772 AM                0         3171408998
     1484468961 18-NOV-18 03.00.58.545 AM                0         3648125526
     1484468961 18-NOV-18 04.00.44.680 AM                0         3586687594
     1484468961 18-NOV-18 05.00.32.832 AM                0         3592026797
     1484468961 18-NOV-18 06.00.22.364 AM                1         2695964960
     2117190322 25-NOV-18 02.00.34.302 AM                3          594578904
     1139203276 02-DEC-18 02.00.14.082 AM                0          480039949
     1139203276 02-DEC-18 03.00.05.255 AM                0         3591464884
     1139203276 02-DEC-18 04.00.50.936 AM                0         3645014119
     1139203276 02-DEC-18 05.00.40.304 AM                0         3591817904
     1139203276 02-DEC-18 06.00.31.820 AM                0         3587861998
     1139203276 02-DEC-18 07.00.21.784 AM                1          322110530
     1139203276 02-DEC-18 09.00.58.127 AM                0         2028205794
     1139203276 02-DEC-18 10.00.37.793 AM                0         3582326580

Here you can see that plan 1139203276 ran between 1 and 7 am on 02-DEC-18 and completed, and then ran between 8 and 10 am without completing. We ended up killing the long-running session and reran the SQL another way, so there is no more history of it. The problem with sqlstat2.sql is that it does not show the average elapsed time per execution like sqlstat.sql does. It just shows the total elapsed time for all executions, if any, because it does not divide by executions_delta. Also, the units are microseconds instead of milliseconds, and I find milliseconds easier to work with.
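The heart of sqlstat2.sql is roughly the same query with the division and the executions_delta condition removed, again as a simplified sketch:

-- simplified sketch of the core query in sqlstat2.sql
-- no filter on executions_delta and no division, so hours where the
-- statement was still running show up with 0 executions; elapsed
-- time is left in microseconds
select ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta
from dba_hist_sqlstat ss, dba_hist_snapshot sn
where ss.sql_id = 'am47wcwn336yj'
and ss.snap_id = sn.snap_id
and ss.instance_number = sn.instance_number
and ss.dbid = sn.dbid
order by sn.end_interval_time;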

To me, looking at the output from sqlstat.sql and sqlstat2.sql together, 2117190322 is the best plan. It is clear to me because I wrote these two scripts and am used to the way they look in this situation. If you go back to the first output, you can see that plan 2117190322 averages less than 224 seconds per execution (223502.352 ms and 198192.968 ms). Plans 3771021445, 1484468961, and 1139203276 ran for multiple hours. You can tell this because there are multiple consecutive hours with those plans and 0 executions.

I have used sqlstat.sql and sqlstat2.sql a lot as they are, but today I got the idea of combining them into a script that shows average elapsed time per execution for a given hour and also includes hours where an execution did not complete. I have not used it yet in a real performance tuning situation, but here it is, cleverly named sqlstat3.sql. Here is its output for this situation:

PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms
--------------- ------------------------- ---------------- ------------------
     3771021445 04-NOV-18 03.00.43.465 AM                0         3584030.31
     3771021445 04-NOV-18 04.00.27.516 AM                0         3588264.76
     3771021445 04-NOV-18 05.00.15.814 AM                1            2250696
     2117190322 11-NOV-18 03.01.00.265 AM                6         223502.352
     1484468961 18-NOV-18 02.00.07.772 AM                0            3171409
     1484468961 18-NOV-18 03.00.58.545 AM                0         3648125.53
     1484468961 18-NOV-18 04.00.44.680 AM                0         3586687.59
     1484468961 18-NOV-18 05.00.32.832 AM                0          3592026.8
     1484468961 18-NOV-18 06.00.22.364 AM                1         2695964.96
     2117190322 25-NOV-18 02.00.34.302 AM                3         198192.968
     1139203276 02-DEC-18 02.00.14.082 AM                0         480039.949
     1139203276 02-DEC-18 03.00.05.255 AM                0         3591464.88
     1139203276 02-DEC-18 04.00.50.936 AM                0         3645014.12
     1139203276 02-DEC-18 05.00.40.304 AM                0          3591817.9
     1139203276 02-DEC-18 06.00.31.820 AM                0            3587862
     1139203276 02-DEC-18 07.00.21.784 AM                1          322110.53
     1139203276 02-DEC-18 09.00.58.127 AM                0         2028205.79
     1139203276 02-DEC-18 10.00.37.793 AM                0         3582326.58

The nice thing is that you can see that 2117190322 ran 6 times averaging 223502.352 milliseconds on November 11th and 3 times averaging 198192.968 milliseconds on November 25th. Plus, you can see that the other plans span multiple hours. Note that the plans that span a full hour show about 3600000 milliseconds, which is an hour, supporting the idea that the SQL statement was active throughout the entire hour. I have only shown some of the total output to fit on the screen, but the full output includes more information such as a CPU and IO breakdown of the run time.
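The key idea in sqlstat3.sql is to use decode to keep the division from blowing up on hours with zero completed executions. Roughly sketched (simplified; the full script also selects the CPU and IO columns):

-- simplified sketch of the core query in sqlstat3.sql
-- decode turns a 0 in executions_delta into a 1, so hours with no
-- completed executions show the total elapsed time for the hour in
-- milliseconds instead of causing a divide by zero
select ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta/
       (decode(ss.executions_delta,0,1,ss.executions_delta)*1000)
       "Elapsed Average ms"
from dba_hist_sqlstat ss, dba_hist_snapshot sn
where ss.sql_id = 'am47wcwn336yj'
and ss.snap_id = sn.snap_id
and ss.instance_number = sn.instance_number
and ss.dbid = sn.dbid
order by sn.end_interval_time;

Dividing by 1 when executions_delta is 0 reports the hour's total elapsed time in the same column, which is exactly what the output above shows for the multi-hour plans.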

Anyway, the point is that the main DBA_HIST_SQLSTAT query that I use, sqlstat.sql, does not clearly show SQL statements that run for several hours. You can figure it out by using sqlstat2.sql in addition to sqlstat.sql. The new sqlstat3.sql script may make it easier to diagnose multi-hour SQL statement executions in the future without having to use two scripts.

Bobby

Categories: DBA Blogs

To Blog, or Not to Blog?

Wed, 2018-12-12 17:42

As 2018 is ending, I am thinking about what I spend my time on. Do I have my priorities correct? I know that I should spend some part of my time interacting with people over the Internet. I have gotten so much benefit from talking with other people about Oracle database issues through web-based forums. I have also had some in-person interactions. I'm excited about the rebirth, of sorts, of AZORA, my local Arizona Oracle user group. I talk to human beings in person about Oracle at user group meetings and that has great value. But I noticed that it had been a while since my last blog post of any substance. So, I asked myself if I am losing interest in blogging. I have not lost interest in writing blog posts, but I have had many other things tugging at my time, including other ways to interact with people over the Internet. So, I need to think about how much time to allocate to this blog next year.

I blame part of my lack of blog verbosity on Python. I have written a number of Python-related posts, but Python has also drawn me into interacting with Rosetta Code and Stack Overflow. I'm trying to keep up my Python skills so that I have a general-purpose programming language available in my tool kit. But any time I might have devoted to my blog may have gotten used up by these Python-related sites. Even GitHub is a form of distraction because maintaining Python repositories there takes time. But I don't regret any time spent on Python because it is such a popular language now and it has really helped me in my work.

I guess the second time sink has been my PeopleSoft work. I don't really talk much about my PeopleSoft work on this blog, but I have done a lot more of this sort of work in 2018 than I ever would have expected. Despite the push to move to the cloud and to non-Oracle databases, I have been doing a bunch of old-fashioned on-premises ERP support: PeopleSoft applications on Oracle databases. I've been doing PeopleSoft on Oracle database for at least 24 years now, so if my employer needs this sort of work, I'm capable of doing it. But PeopleSoft doesn't excite me as much as database internals and performance tuning, so that's why I don't blog about it much.

Speaking of the cloud and non-Oracle databases, I have done some work in these areas in 2018 but not as much as I would have liked. I probably wouldn't blog about the basics of using AWS or MySQL RDS, but if I can dig into some MySQL internals and use or build some MySQL performance tools, I can see blogging about that. My experience is mostly with Oracle, but I think open source is neat. I like having the MySQL and PostgreSQL source code available even if I am unfamiliar with it. So, I guess I haven't blogged about the cloud and non-Oracle databases because I just haven't gotten very far. Maybe next year.

But, why have I not blogged about Oracle performance tuning? That really is the question. There have been several performance issues that I have worked on over the past few months, but in many cases, they were not anything new to me. Still, I think I may have missed out by not documenting the steps I went through in some of these real issues. I think it is easy to feel like what I am writing is not anything new and that there are better Oracle performance tuning bloggers out there. Also, I worry that I will make a mistake and confuse people or mislead them with wrong information. I would not intentionally mislead anyone, but I can certainly be wrong! I think going forward in 2019 I want to make the effort to write blog posts about Oracle performance issues that I have resolved even if they are not especially new territory. I probably can find some interesting angle from most issues. I think it might help people to see how I am using my SQL*Plus and Python scripts in different scenarios. Anyway, I hope to get back to blogging about Oracle performance.

Topics of my 15 blog posts in 2018 to this point:

User Group: 6
Oracle: 4
Python: 3
MySQL: 2

My Shakespeare inspiration (To be, or not to be):


To be, or not to be, that is the question:
Whether ’tis nobler in the mind to suffer
The slings and arrows of outrageous fortune,
Or to take Arms against a Sea of troubles…

Hamlet. Act III, Scene I

Categories: DBA Blogs
