Feed aggregator

Partner Webcast – Developing Internet of Things Applications with Oracle IoT Cloud Service

Oracle Internet of Things (IoT) Cloud Service is a managed Platform as a Service (PaaS) cloud-based offering that helps you make critical business decisions and strategies by allowing you to connect...

We share our skills to maximize your revenue!
Categories: DBA Blogs

utPLSQL integration with Oracle SQLcl

John Scott - Fri, 2019-05-10 03:42

I saw a tweet by Kris Rice with a very cool hint of something that may make it into Oracle SQLcl.


If you’re not familiar with utPLSQL, it’s an open source framework for PL/SQL developers to define PL/SQL and SQL tests.

In my opinion, anything that makes it easier to run tests means we (as developers) have fewer reasons not to incorporate testing into our workflow.

Looking forward to this a lot!

Tried Live SQL

Bobby Durrett's DBA Blog - Thu, 2019-05-09 18:13

Quick post. I tried https://livesql.oracle.com/ for the first time.

Looks like you can try out the latest version of Oracle for free. You can type in SQL statements in your web browser.

There seems to be a boatload of scripts and tutorials to try out. Might be good for someone who is new to SQL.

Bobby

Categories: DBA Blogs

APEX Connect 2019 – Day 3

Yann Neuhaus - Thu, 2019-05-09 11:20

For the last conference day, after the keynote “JavaScript, Why Should I Care?” by Dan McGhan, I decided to attend some JavaScript learning sessions to improve my skills, along with presentations on the following topics:
– How to hack your APEX App… (only for testing)
– What you need to know about APEX validations

I also got the chance to have a 1:1 talk with Anthony Rayner to share some wishes about APEX and talk about an issue with Interactive Grid search.

JavaScript programming language

Nowadays, being a good APEX developer means being a full-stack developer who masters different areas:
– Server side (database, data modeling, SQL, PL/SQL)
– Client side (HTML, CSS, JavaScript)
So even if JavaScript seemed weird in the beginning, you cannot avoid learning and mastering it. It’s simply the number one most used programming language (thanks to the web). Thinking that APEX Dynamic Actions can solve every problem by hiding the complexity of JavaScript just isn’t realistic anymore. Some statistics show that the APEX team is already putting a lot of effort into JavaScript: it makes up more than 50% of the APEX code base, well ahead of PL/SQL.
A couple of characteristics of JavaScript:
– It’s a dynamically typed language: the type is not attached to the variable but to the value assigned to it, which can somehow be seen as polymorphism.
– It’s case sensitive.
– Arrays are 0-based (PL/SQL collections are 1-based).
– There are no procedures, only functions.
– Functions can be passed to other functions as parameters.
– There is one convention: functions starting with an uppercase letter are meant to be used with the new operator.
While developing JavaScript, your best friends are the web browser developer tools, which let you do a lot locally and test it before moving code to the server and sharing it with other developers and users.
There are a lot of resources on the internet to support the copy/paste way of working of JavaScript developers, so there is a big chance that someone already did what you need. Just take care about licensing.
In APEX, JavaScript can be encapsulated in Dynamic Actions, but try to keep that code as short as possible.
Oracle also provides a very useful free open source JavaScript development toolkit: JET (JavaScript Extension Toolkit).
It’s already integrated in APEX through the charts.

How to hack your APEX App… (only for testing)

As APEX generates web applications, it’s exposed to the same dangers as any other web application, like SQL injection, XSS (cross-site scripting) and so on.
There is no excuse to ignore security issues because the application is only used on the intranet, or because you think no one will ever find the flaw…
… Security is part of the job of a developer. Danger can come from the outside, but also from the inside with social-engineering-based hacking.
It’s very easy to find hacking tools on the internet, like Kali Linux, based on Debian, which provides more than 600 tools for penetration testing, for example BeEF (Browser Exploitation Framework).
In APEX the golden rule says: “Don’t turn off escaping on your pages”.
Don’t forget: “Security is hard. If it’s easy you’re not doing it right”, so don’t forget it in your project planning.

What you need to know about APEX validations

There are 2 types of validations in web applications:
– Client side
– Server side
APEX makes use of both and sometimes even combines them, but server side is the most used.
Where possible, client-side validation should be used as well, as it’s lighter (less network traffic). But be careful: it can be bypassed with the developer tools, as it’s based on HTML attributes or JavaScript. That’s where server-side validation is your second line of defense, and database triggers and constraints are your very last line of defense.
Validations can make use of data patterns (regular expressions).
Interactive Grid validation can also be improved significantly with JavaScript and Dynamic Actions fired on value changes and/or on page submission.

There is always more to learn, and thanks to the community a lot of information is available. So keep sharing.
Enjoy APEX!

Cet article APEX Connect 2019 – Day 3 est apparu en premier sur Blog dbi services.

Ace Your Excel Test to Get Your Dream Job

VitalSoftTech - Thu, 2019-05-09 10:29
You aced your first interview and you couldn’t feel better about it – you answered all the questions correctly, made a great impression, and managed to have a professional yet very friendly conversation with someone you might be working with soon. That is when you found out that the next step of the recruitment process […]
Categories: DBA Blogs

Latest Blog Posts from Oracle ACEs: April 28 - May 4, 2019

OTN TechBlog - Thu, 2019-05-09 05:00

The chances of having a movie theater to yourself these days are slim. But while the rest of the world is focused on learning the fates of various Marvel characters in the latest Avengers epic, the members of the Oracle ACE program listed below demonstrated super willpower last week by devoting their screen time to hammering out these blog posts. The least you can do to reward that kind of effort is to take a look, right?

 

Oracle ACE Directors

Opal Alaphat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

Oracle ACEs

Ahmed Aboulnaga
Principal, Attain
Washington D.C.

Anju Garg
Corporate Trainer, Author, Speaker, Blogger
New Delhi, India

Bert Scalzo
Technical Product Manager: Databases
Flower Mound, Texas

Eduardo Legatti
Database Administrator - DBA, SYDLE
Belo Horizonte, Brazil

Fabio Prado
Instructor, Oramaster Treinamentos em Bancos de Dados
Sao Paulo, Brazil

Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil

Leonardo Gonzalez Cruz
SOA Architect, Services & Processes Solutions
Mexico

Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina

Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France

Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Malahide, Ireland

Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida

Wataru Morohashi
Solution Architect, Hewlett-Packard Japan, Ltd.
Tokyo, Japan

Oracle ACE Associates

Abigail Giles-Haigh
Chief Data Science Officer, Vertice
United Kingdom

Diana Robete
Team Lead/Senior Database Administrator, First4 Database Partners Inc
Calgary, Canada

Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia

Emiliano Fusaglia
Principal Oracle RAC DBA/Data Architect, Trivadis
Lausanne, Switzerland

Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

Flora Barriele
Oracle Database Administrator, Etat de Vaud
Lausanne, Switzerland

Heema Satapathy
Senior Principal Consultant, BIAS Corporation
United States

Lykle Thijssen
Principal Architect, eProseed
Utrecht, Netherlands

Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia

Roy Salazar
Senior Oracle Database Consultant, Pythian
Costa Rica

Mark Daynes
Managing Director, Beyond Systems Ltd
Manchester, United Kingdom

 
Additional Resources

Launching into the middle of a Fluid Navigation Collection

Jim Marion - Wed, 2019-05-08 21:18

One of the greatest challenges to a successful Fluid deployment is Classic navigation and one of the most common solutions to this challenge is the Tile Wizard-based Navigation Collection. Tile Wizard-based navigation collections allow us to build business process-based navigation for our back-office users (back-office pages are primarily Classic). Let's say you have built some amazing Navigation Collections for your back-office users and now you want to drill from one collection or page to a specific target page in another Navigation Collection. You know you can easily craft a URL to a Tile Wizard Navigation Collection, but can you load a specific target component from that Navigation Collection? This is a great question that my PeopleTools Fluid course students ask me often. The answer is Yes!

When using the Tile Wizard to publish a Navigation Collection, PeopleTools uses a special Activity Guide template to display the Navigation Collection. This means that every Tile Wizard-based Navigation Collection is viewed as an Activity Guide. Each link is considered a step in the Activity Guide. Fluid Activity Guide steps have an id attribute called ptgpid. Once we find the ptgpid of a step, we can use that ID in a URL. When the Activity Guide is a Navigation Collection, the step ID is the Navigation Collection CREF Link ID. Note: this is not the base CREF ID, but the CREF link that was created by the Navigation Collection utility. If we inspect the HTML for a Navigation Collection, we can find the ptgpid in the HTML. Alternatively, we can find it in the portal registry under Portal Objects > Navigation Collections. Here is an example from the HTML:

To launch the highlighted item, copy the ptgpid and add the following to the end of the URL:

&ptgpid=<the id goes here>

For example, in the Portal Navigation Collection that is part of the PeopleSoft Developer homepage, appending the following to the URL will navigate directly to the Find Object Navigation item:

&ptgpid=HC_S201604180146095689166800

At jsmpros we believe that navigation is a critical component of a successful Fluid implementation, which is why we devote the first day of our Fluid 1 course to Fluid navigation. To learn more or to schedule a course, visit us online at jsmpros.com.

APEX Connect 2019 – Day 2

Yann Neuhaus - Wed, 2019-05-08 18:17

The day first started with a 5K fun run.

After the Keynote about “APEX: Where we have come from and where we’re heading: Personal insights on Application Express from the Development Team” by John Snyders, Anthony Rayner and Carsten Czarski explaining their work on APEX and some upcoming features, I decided to attend presentations on following topics:
– Know your Browser Dev Tools!
– APEX Devops with Database tooling
– Klick, Klick, Test – Automated Testing for APEX
– Sponsored Session Pitss: Migration from Oracle Forms to APEX – Approaches compared
– APEX-Actions

Potential upcoming features in APEX 19.2:
– APEX_EXEC enabled Interactive Grids
– Enhanced LOV
– New popup LOV
– Filter reports

Know your Browser Dev Tools!

Every web browser has its own set of developer tools, but all of them mainly provide the following functionality:
– Manipulate HTML in DOM tree
– Execute JavaScript
– Apply CSS
– Client side debugging and logging
– Analyze network activity
– Simulate screensize
The most popular and most complete set of tools is provided by Google Chrome with:
– Elements
– Console
– Sources
– Network
– Performance
– Memory
– Application
– Security
– Audits
Note that if you want to see console output as well as details of Dynamic Actions from APEX, you need to activate debug mode in your application.

APEX Devops with Database tooling

One of the goals of DevOps is to bring development and operations closer together and make the deployment of applications smoother.
In order to achieve that, 100% automation of the following tasks helps a lot:
– Build
– Test
– Release
– Deploy
This is mainly supported by RESTful services within Oracle, with ORDS being the cornerstone.
Besides that, Jenkins has been replaced by GitLab, which has better web services support.
Database changes are tracked with Liquibase, which is integrated into and enhanced by SQLcl. Vault is also integrated in SQLcl to ease and automate password management.
Another target of DevOps is zero downtime. This can be supported with tools like consul.io and fabiolb, which permit dynamically adding ORDS servers covered by dynamic load balancing.

Klick, Klick, Test – Automated Testing for APEX

There are lots of automated testing tools on the market, but they are mostly restricted to specific web browsers.
The goal is to have a solution that fits most of them and allows testing of APEX web applications.
It needs a testing framework to abstract the scenario from the underlying tool: codecept.io.
As the code generated by the testing framework is standardized, it can be generated from APEX metadata analysis with the help of a templating tool: handlebars.
The process is then supported by an APEX application that can retrieve the applications from the workspace, manage the dedicated test scenarios and trigger them on Docker containers.

Sponsored Session Pitss: Migration from Oracle Forms to APEX – Approaches compared

Migrating Forms applications to APEX can be very cumbersome as they can be very large and complex.
The main phases of such a migration are:
– Forms application preparation and analysis
– Migration
– APEX application fine-tuning
– Rollout
The success of such a migration lies in the combination of skilled Forms developers and APEX developers.

APEX-Actions

Besides the well-known Dynamic Actions in APEX, there is a JavaScript library that was introduced in APEX 5.0: apex.actions.
Documentation for it came with version 5.1 in the APEX JavaScript API documentation.
It’s used by the APEX development team in the Page Designer and is now available to all developers.
Actions allow you to centrally encapsulate and define the rendering, associated function and shortcuts of objects on the web pages, all of it dynamically.
It uses an observer, which allows multiple objects of the same type on the same page to share the same behavior.

The day ended with Sponsor Pitches & Lightning Talks:
– APEX Competition Winner Plugin
– 5 things that make your life easier when using Docker
– Verwenden Sie kein PL/SQL!
– Improving Tech with Compassionate Coding

Cet article APEX Connect 2019 – Day 2 est apparu en premier sur Blog dbi services.

ISCSI-Attached Volumes Do Not Connect On Reboot (OCI)

Online Apps DBA - Wed, 2019-05-08 14:17

[Fixed] Troubleshooting: Attached Volumes missing in Cloud VM Instance in OCI Sometimes, after rebooting the Instance, the Attached Volumes don’t connect to the Instance. Well, www.k21academy.com/oci34 is the fix you’ve been searching around. The blog post explains: ✔ Block Volume ✔ iSCSI ✔ The Reason Behind the Issue ✔ Workaround & much more… [Fixed] Troubleshooting: […]

The post ISCSI-Attached Volumes Do Not Connect On Reboot (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring

David Kurtz - Wed, 2019-05-08 14:09
I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor.  There is also just a little about cursor sharing.
(3 May 2019) #183 – Effective Performance Monitoring You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Oracle Customer Support Services at Oracle OpenWorld 2019

Chris Warticki - Wed, 2019-05-08 13:58

Standing Room Only Last Year—Don't Worry—We Are Bringing More Chairs

We’re deep into planning our presence at Oracle OpenWorld in San Francisco, September 16-19. You will have the opportunity to talk to the Oracle Support experts at the Oracle Support Stars Bar. You can meet with Oracle Support Engineers to discuss topics that you need to understand better—from on premises to the cloud, software to hardware, and education to certification. Bring your questions or just come by to say hello in person.

Get to know the experts from Oracle Premier Support, Advanced Customer Services, Oracle Platinum Services, and Oracle University at the Oracle Support Stars Bar. They are ready to help you and will be delivering conference sessions throughout Oracle OpenWorld.

We will also have our Mini-Briefing Center in full swing with short sessions that focus on practical, actionable topics from the Oracle experts. Last year, the Mini-Briefing Center was standing room only for many of the sessions, so we know we need more chairs this year.

You will always find something new at the Oracle Support Stars Bar—we can’t wait to show you what we have planned.

Save with early bird pricing. Registration is open now and early bird pricing is available through 6 June.

Register now. We look forward to meeting you in September.

    Meet us at the Oracle Support Stars Bar Oracle OpenWorld 2019

September 16-19, San Francisco, California

 

 

Resources

Digital Marketing and Its Benefits

VitalSoftTech - Wed, 2019-05-08 10:12
Does your business make use of digital marketing services? Today, a lot of businesses utilize the online world to take their business to a higher level. They engage in digital marketing to share what they can offer. If you do, that’s really good! Now, all you need to do is to grow it or improve […]
Categories: DBA Blogs

Oracle Applies Cutting-Edge Tech to Enhance Fan Experiences

Oracle Press Releases - Wed, 2019-05-08 07:00
Blog
Oracle Applies Cutting-Edge Tech to Enhance Fan Experiences

By Barbara Darrow—May 8, 2019

Applying cloud computing, analytics, and machine learning to reams of data can modernize age-old activities like taking in a baseball game, watching a sailboat race, or tracking product inventories, Oracle executives say.

Oracle execs and officials from the SailGP yacht race series gathered at San Francisco’s Golden Gate Yacht Club for Oracle Media Days to talk about how technology can make work and leisure activities better.

Inside Oracle’s AI and Emerging Tech Strategy and Innovations

The Palma, Majorca-based Melia resort chain is rolling out a waterproof bracelet embedded with an NFC chip that lets guests unlock their rooms, order cocktails, book massages within the resort and buy things outside the property without a wallet or credit card.

Working with Melia, Oracle engineers came up with nearly two dozen possible uses for such a device in days. A first test of the bracelet, run in Majorca, went well and now the system is going live elsewhere, said Neil Sholay, vice president, Digital Innovation, Oracle.

Sholay’s “co-innovation teams”—comprised of developers, digital designers, business modelers, and consultants—work with customers on a real business problem that they have struggled to solve. “In 10 weeks, we’ll take that challenge, find ideas, build prototypes, validate it with client data, and then get it into execution,” Sholay said.

The idea of a smart “wearable” is to give consumers what they want without hassle and for the business to gain access—with customer opt-in—to useful information.

The availability of data and great ways to use it are obviously valuable in less exotic workplaces as well. Smart companies want to wring more out of their own “first-party” data, and relish the idea of making better use of that data to cut costs and grow revenue.

“We are focused on embedding machine learning in our business applications,” said Melissa Boxer, vice president of Adaptive Intelligence Applications for Oracle. 

These applications come “ready to go, with pre-tuned data models, what you need to feed the business requirement whether it’s next-best-action for a salesperson or expense auditing,” Boxer said.

By integrating machine learning into commonly used business applications customers can take advantage of the technology in a familiar context with the data they already have. They don’t necessarily need to know that machine learning is even in there.

In addition to helping organizations get the most out of their own data, Oracle provides “third party data” from other sources that can enrich their customer’s applications with context from the real world.

The combination of internal data with contextual “signals” about what’s happening at a company—credit rating changes or new product launches—can help businesses “better categorize suppliers and understand which might be the best to work with,” Boxer said.

Oracle’s Commitment to Sports, Fan Experience and Technology Innovation

Most people go to sporting events to see what’s happening on the field. Any time they spend standing in line trying to buy food or drinks detracts from that goal.

This is not a trivial problem. A ticket holder who doesn’t buy food equates to lost revenue for the venue, and less fun for the fan.

A recent survey found that most sports enthusiasts will embrace things that make it easier to get food and drinks although they don’t want to be distracted by the technology.

“Fans want convenience of mobile payments and kiosks but they don’t want tech for tech’s sake,” said Simon de Montfort Walker, senior vice president and general manager of Oracle’s Food and Beverage Global Business Unit.

Collecting data about stadium traffic flow helps alleviate soul-sapping congestion. A computer vision system that sees where lines are shortest can push alerts to redirect people thus alleviating long waits at food stands and restrooms.

SailGP: Powered by technology Innovation

Racing space-age yachts in San Francisco Bay (or anywhere) presents its own set of technological and athletic hurdles. SailGP catamarans are the fastest boats in the world, capable of reaching speeds of up to 50 knots (or 60 mph on land), said Sir Russell Coutts, CEO of SailGP and five-time America’s Cup winner.

Hans Henken, flight controller for the USA SailGP Team, described the experience for those who’ve never raced:

“It’s like driving your car down the freeway with the top down in the pouring rain. The exit is coming up fast, you change lanes and find your traction controls are off, you slam on the brakes and the brakes are out. You have to make quick decisions in a lot of spray, and it’s very, very noisy.”

In SailGP races, all of the boats are built to the same specifications, meaning that any edge the crew can get from 1,200 embedded sensors could be the difference between victory and defeat.

Sensor data is sent to the Oracle cloud in 200 msec, less than the time it takes to blink, said Edwin Upson, Oracle group vice president of Enterprise Cloud Architects. Then it is returned to the crew for analysis in near-real-time.

Sailors on the boat and off-boat crews parse the data, plus audio and video from shipboard cameras, to adjust to changing conditions.

The same technology will help spectators track action on their tablets with a live TV feed and information about boat location and performance. “You can customize your view, zoom in and out, change which boat you’re tracking and change the camera view on a given boat,” Upson said.

Oracle sees opportunity in the intersection of technology and sports, an area Judy Sim, chief of staff and senior vice president of marketing calls “fast entertainment.”

“This is all about building cool new fan experiences that extend way beyond their seats,” she said.

Alan Zeichick contributed to this story.

Retailers Tap Oracle AI to Better Serve Customers

Oracle Press Releases - Wed, 2019-05-08 07:00
Press Release
Retailers Tap Oracle AI to Better Serve Customers Notebook-based data science tools make it easier for retailers to turn their data into a powerful business asset

Redwood Shores, Calif.—May 8, 2019

The competition in retail is fierce and great products alone are no longer enough. Whether shopping online or in person, customers want seamless, personalized shopping experiences. Helping retailers compete by making better use of their data, Oracle continues to weave significant artificial intelligence (AI) and machine learning capabilities into its Retail Insights and Science Suite.

With new features, such as notebook-based data science tools, retailers can quickly deploy new data science-driven solutions to further engage and delight customers across channels. As an example, a retailer can build machine learning models within their Oracle Retail solutions to extract insights from images, sense demand from social media, or re-balance inventory to maximize productivity. 

“Notebook-based solutions offer open source without the hassles,” according to Forrester1. They leverage innovations in open source machine learning to enable data scientists to create models in their development languages of choice, such as Python or R. As such, they can speed new learnings that can both enhance the customer experience and bottom line results. 

A leading Saudi Arabia pharmacy retailer, Nahdi Medical Company, applied these principles when they needed to create a new AI and machine learning model that could correlate customer engagement with marketing campaigns to identify and reward loyal customers. 

“By taking the innovative predictive models provided by Oracle Retail Science Platform Cloud Service and coupling them with our analytics intellectual property, we’ll be able to understand and predict our guests’ behavior which will give us the opportunities to serve them better,” said Dr. Ayman Abdalazem, head of business intelligence and analytics, Nahdi Medical Company. “Selecting a cloud platform for data science has enabled us to quickly implement the solution while minimizing technical debt and accelerating the value we extract for Nahdi Medical Company.”

Standout Features of the Oracle Retail Insights and Science Suite

With the Retail Science and Insights Suite business leaders and data scientists can deliver faster decisions with greater precision and innovations that drive tangible results.

“Most machine learning projects fall short on delivering tangible business benefits. Not because the innovation is misaligned with a business objective but because it is difficult to operationalize innovation,” said Jeff Warren, vice president, Oracle Retail. “The Retail Science Platform delivers the standardization and controls that enterprises need to accelerate their new offerings and swiftly integrate them into their business workflows. With the addition of notebook-based tools, our solution is a force to be reckoned with in predictive analytics and machine learning for the retail industry.”

New advances include:

  • Notebook-based predictive analytics and machine learning solutions that give retailers the ability to bring innovations to market with faster speed and efficiency.
  • Customer Lifetime Value, RFM and Engagement Score are now available in the Retail Insights suite so that retailers can manage essential, modern retail KPIs associated with acquiring and retaining customers.
  • Affinity Analysis enhancements that enable retailers to drill down into critical information and answer questions such as, “How do affinities for a particular store compare to the region?” and “What were the halo effects of a particular event?”
 

“Retailers all know they need advanced analytics and retail science to drive their business forward, but they don’t all have the luxury of hiring on a data science team,” said Marc Koehler, solution director, Oracle Retail. “We continue to enhance Oracle Retail Insights and Science Suite to provide retailers with packaged insights and science applications, including Customer Insights Cloud Service and Offer Optimization Cloud Service, as well as the innovation workbench so that they can leverage best of both worlds.”

1 Forrester Research, Inc., “The Forrester Wave™: Notebook-Based Predictive Analytics And Machine Learning Solutions, Q3 2018,” September 5, 2018, by Kjell Carlsson, Ph.D., and Mike Gualtieri, with Srividya Sridharan, Aldila Yunus, and Robert Perdoni

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle provides retailers with a complete, open, and integrated suite of best-of-breed business applications, cloud services, and hardware that are engineered to work together and empower commerce. Leading fashion, grocery, and specialty retailers use Oracle solutions to anticipate market changes, simplify operations and inspire authentic brand interactions. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Spotlight on Oracle ACE Director Ruben Rodriguez

OTN TechBlog - Wed, 2019-05-08 05:00

Oracle ACE Director Ruben Rodriguez is a Cloud and Mobile Solution Specialist with avanttic Consultoría Tecnológica in Madrid, Spain. He graduated from the Universidad Alfonso X El Sabio in Madrid in 2011 with a degree in Computer Science, then made his way through a variety of IT jobs in Spain and the UK before landing at avantic in 2015. Ruben first entered the Oracle ACE program in October 2017 and was confirmed as an Oracle ACE Director in November 2018. Active in the community, Ruben is a blogger and frequent conference speaker. In December 2019 Packt Publishing will publish Professional Oracle Mobile, written by Ruben and co-author Soham Dasgupta.

Watch the video and get the story from Ruben himself.

Additional Resources

Pester script parameter passing not working

Matt Penny - Wed, 2019-05-08 04:22
Problem

I was trying to parameterize a Pester script. The script looked like this:


param (
    [string]$ComputerName,
    [string]$IPAddress
)

# ${} protects the variable name from the following colon, which
# PowerShell would otherwise parse as a scope/drive qualifier.
write-dbg "${ComputerName}: "
write-dbg "${IPAddress}: "

Describe "$ComputerName is visible" {

    It "It is ping-able" {
        {test-connection $ComputerName -count 1} | Should Not Throw

        $(test-connection $ComputerName -count 1 | Measure-Object).count | Should Be 1
    }

}

…but passing the parameters wasn’t working.

Solution

The problem was that I was calling the script as follows
$ Invoke-Pester @{Path = c:\pester\diagnostics\simple\StandardDomainContoller.tests.ps1; Parameters=@{ComputerName = "server1.here.co.uk"; IPAddress = "17.6.5.1"}}

…and the Path variable needs quotes:

$ Invoke-Pester @{Path = 'c:\pester\diagnostics\simple\StandardDomainContoller.tests.ps1'; Parameters=@{ComputerName = "server1.here.co.uk";IPAddress = "17.6.5.1"}}

Categories: DBA Blogs

APEX Connect 2019 – Day 1

Yann Neuhaus - Tue, 2019-05-07 17:33

This year again the APEX Connect conference spans three days with mixed topics around APEX, like JavaScript, PL/SQL and much more.
After the welcome speech and the very funny and interesting keynote about “PL/SQL: A Career Built On Top Of PL/SQL – 25 years of Coding, Sharing, and Learning” by Martin Widlake, I decided to attend presentations on the following topics:
– Oracle Text – A Powerful Tool for Oracle APEX Developers
– Make It Pretty! MIP your APEX application!
– REST Services without APEX – but with APEX_EXEC
– Microservices with APEX
– SQL Magic!
– The UX of forms

PL/SQL: A Career Built On Top Of PL/SQL – 25 years of Coding, Sharing, and Learning:

Martin Widlake shared the story of 25 years of development on Oracle, from version 6 to the newest 19c.
The most important lesson to take from his professional journey is that “Good developers are made by other developers” and “Everything you learn will have some return sometime in the future”. In other words, sharing is the key: stay curious and never stop learning, even things that are not yet obviously useful.

Oracle Text – A Powerful Tool for Oracle APEX Developers

That feature has been embedded as a standard in Oracle databases since 1997, when it was named ConText. In 1999 it became interMedia Text and finally Oracle Text in 2001. It allows indexing text-based columns of the database as well as files stored in BLOBs, enabling much faster and easier searching for text patterns (words, sentences, …). We went through aspects like syntax, fuzzy search, snippets and lexers.
As search combinations require specific operators and delimiters, which are cumbersome for end users, there is a useful package written by Roger Ford that converts simple “Google-like” requests into the right format for Oracle Text: PARSER download
His blog provides nice information about it, and the Oracle documentation covers all the details of using Oracle Text.
You can find further information on following Blog:
Oracle text at a glance
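As a rough illustration, an Oracle Text CONTEXT index and a CONTAINS query could look like this (the DOCS table and its columns are made up for the example):

```sql
-- Hypothetical DOCS table; index the CLOB column with a CONTEXT index
create table docs (id number primary key, body clob);

create index docs_body_ctx on docs (body)
  indextype is ctxsys.context;

-- CONTAINS uses the Oracle Text index; SCORE(1) returns the relevance
-- of each hit for the query labelled 1
select id, score(1)
  from docs
 where contains(body, 'oracle AND (apex OR text)', 1) > 0
 order by score(1) desc;
```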

Make It Pretty! MIP your APEX application!

The business logic is the heart of the application, but the UI is its face and what users will judge first.
There are some rules to follow to Make It Pretty (MIP).
First of all, the UI needs to fulfil user needs by one of the following:
– sticking to company brand rules
– sticking to the company webpage design
– sticking to user wishes (a draft can be made with Template Monster)
Technical and non-technical aspects need to be considered.
Following Design rules help to improve the UI:
– Be consistent
– Make it intuitive
– Be responsive (give feedback to users)
– Keep it simple (not crowded)
– Fonts: max 2 per page, 16px rule (verify on fontpair.co)
– Color rules (verify on contrast checker)
– Have imagination
– Know your APEX universal theme

REST Services without APEX – but with APEX_EXEC

APEX stores its definitions as metadata, and PL/SQL packages support the engine.
That means the APEX metadata can be used outside an APEX application in any PL/SQL code.
One particular APEX PL/SQL package is APEX_EXEC, introduced in APEX 18.1. It abstracts the data format (XML, JSON, …) of web sources so that the data can be used as if it came from a local table. It also takes care of the pagination of web services to make data retrieval transparent. But in order to use that package, an APEX session must first be created to initialize the needed metadata. Fortunately this is made easy since APEX 18.1 with the create_session procedure from the apex_session package.
The next version, APEX 19.2, might integrate web source modules with the interactive grid.
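A minimal sketch of bootstrapping such a session outside APEX might look like this (application id 100, page 1 and the ADMIN user are assumptions; adjust them to your workspace):

```sql
begin
  -- creates the APEX session context needed before APEX_EXEC can be called
  apex_session.create_session(
    p_app_id   => 100,
    p_page_id  => 1,
    p_username => 'ADMIN');

  -- from here on, APEX_EXEC calls (e.g. against a REST data / web source)
  -- run as if they were inside an APEX application
end;
/
```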

Microservices with APEX

APEX can be compared to microservices by looking at the characteristics:
– Scalability
– Fail-safe
– Maintainable
– Technology independent
– Database independent
– Small
And it mostly matches!
But APEX also avoids the usual microservices drawbacks:
– Complex architecture
– Complex testing
– Migration efforts
– Complex development
To behave like microservices, APEX applications have to use web services for data management and for interfacing with any other kind of service. This clearly separates the data management from the applications. ORDS allows REST enablement at schema and also at object level within APEX. Caching also needs to be considered, based on the data change frequency, to lower the lag time of data handling.
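As an illustration, REST-enabling a schema and a single table with ORDS might be sketched like this (the HR schema and EMPLOYEES table are assumptions for the example):

```sql
begin
  -- REST-enable the whole schema under the URL alias "hr"
  ords.enable_schema(
    p_enabled             => true,
    p_schema              => 'HR',
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'hr',
    p_auto_rest_auth      => false);

  -- expose a single table via AutoREST
  ords.enable_object(
    p_enabled     => true,
    p_schema      => 'HR',
    p_object      => 'EMPLOYEES',
    p_object_type => 'TABLE');

  commit;
end;
/
```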

SQL Magic!

Since Oracle 12c the database provides the JSON data guide, which allows JSON data to be manipulated as easily as any standard table data. This comes with new views like user_json_data_guide.
Oracle 12c introduced invisible columns, which are hidden from the table description as well as from a generic “select *”, but not from explicit select statements. This can be used to deprecate columns or to add new columns without breaking existing applications that use “select *” (even though “select *” should be avoided in applications anyway).
Oracle 18c introduced polymorphic table functions, which can be used with pipelined tables to create views allowing tables to be pivoted and transposed whatever number of columns and rows they have.
All those features are very useful and deserve to be used more widely.
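The invisible-column behaviour can be sketched like this (table T is made up for the example):

```sql
create table t (a number, b number invisible);

-- DESCRIBE and a generic "select *" do not show column B
select * from t;                      -- returns only A

-- ... but explicit references still work
select a, b from t;

insert into t (a, b) values (1, 2);   -- B must be named explicitly

-- make the column visible again once all applications are ready
alter table t modify (b visible);
```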

The UX of forms

User eXperience (UX) rules to be applied in forms go beyond APEX. They aim to:
– Reduce cognitive load
– Prevent errors
– Make forms user friendly
The rules are the following:
– One thing per page
– Put field labels above the fields rather than beside them
– Replace small dropdowns with radio buttons
– Use interactive data lists for long dropdowns
– For login pages, be specific about the username type (name, e-mail) and the password definition rules
– Avoid * for required fields; rather flag optional fields
– Adapt the field size to the expected data length
– Use smart default values
– Use entry masks
– Use date pickers
– Define a check-before-you-start pattern to guide users and reduce form length
All that will improve the UX.

The article APEX Connect 2019 – Day 1 first appeared on Blog dbi services.

Compression Restrictions - Update: Wide Table Compression In 12.1 And Later

Randolf Geist - Tue, 2019-05-07 17:03
I have a very old post here that used to be rather popular, listing various restrictions related to compression. One of the most obvious restrictions in older versions was that the basic / OLTP (Advanced Row) heap table compression based on symbol tables / de-duplication was limited to tables with 254 columns or less, or more generally to rows with a single row piece.

This limitation was lifted with Oracle 12c which started to support this compression type with tables having more than 254 columns / multiple row pieces - and there is a new internal parameter called "_widetab_comp_enabled" that defaults to TRUE.

So this is nothing really new - 12.1 was released a long time ago - but it looks like it wasn't mentioned in the official 12c "New Features Guide", only in some other documents as a side note, although it could be an important change for users that are not allowed to make use of HCC compression, have such "wide" tables and would like to save space and improve I/O performance.

Now the odd thing is that Oracle obviously found some problems with this change in 12.1.0.2 that seems to revolve around redo apply and hence decided to disable this feature partially in later PSUs of 12.1.0.2. There are a number of related notes on MyOracleSupport, specifically:

All About Advanced Table Compression (Overview, Usage, Examples, Restrictions) (Doc ID 882712.1)

Advanced Compression Restriction on 255 Columns Lifted For Oracle 12c (Doc ID 1612095.1)

Bug 25498473 : OLTP COMPRESSION DOESN'T WORK IN 12.1.0.2.160719 AND HIGHER WITH > 255 COLUMNS

So the summary of the situation seems to be this:

- In 12.1.0.2 with current PSUs applied you only get compression of tables with multiple row pieces when using Advanced Compression and conventional DML. Direct Path Load / CTAS / ALTER TABLE MOVE reorganisation does not compress, hence basic compression does not work for such "wide" tables

- Starting with Oracle 12.2 this (artificial) limitation is lifted again, and now the Direct Path Load / CTAS / ALTER TABLE MOVE reorganisation code paths support compression again, which seems to include basic compression, although above mentioned documents only refer to OLTP (Advanced Row) compression

Note that the code obviously - according to the mentioned documents - checks the COMPATIBLE setting of the database, so running 12.2+ with compatible set to 12.1 means behaviour like 12.1, which means no basic compression for "wide" tables.

So depending on the patch level of the database, a script populating a 1,000-column table with 10,000 rows of very repeatable content (you can find it at the end of this post) produces the following output in 12.1.0.2:

Unpatched 12.1.0.2:

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10003


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
55


12.1.0.2 with some recent PSU applied:

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10097


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10143


12.2 and later produce in principle again the same output as the unpatched 12.1.0.2, here as an example 18.3 (18.4.1 same behaviour):

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 );

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10003


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
55


If you wanted to make use of "wide" table compression in 12.1.0.2 with PSU applied, then the script would have to be changed to enable OLTP / Advanced Row compression before the initial population and then you'll get this output:

SQL> create table many_cols (
2 col001 varchar2(10),
3 col002 varchar2(10),
4 col003 varchar2(10),
.
.
.
1000 col999 varchar2(10),
1001 col1000 varchar2(10)
1002 )
1003 compress for oltp
1004 ;

Table created.

SQL>
SQL> declare
2 s_sql1 varchar2(32767);
3 s_sql2 varchar2(32767);
4 begin
5 for i in 1..1000 loop
6 s_sql1 := s_sql1 || 'col' || to_char(i, 'FM9000') || ',';
7 s_sql2 := s_sql2 || '''BLABLA'',';
8 end loop;
9
10 s_sql1 := rtrim(s_sql1, ',');
11 s_sql2 := rtrim(s_sql2, ',');
12
13 s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';
14
15 for i in 1..10000 loop
16 execute immediate s_sql1;
17 end loop;
18
19 commit;
20 end;
21 /

PL/SQL procedure successfully completed.

SQL> select count(*) from many_cols;

COUNT(*)
----------
10000

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
88


SQL> alter table many_cols move compress basic;

Table altered.

SQL> exec dbms_stats.gather_table_stats(null, 'many_cols')

PL/SQL procedure successfully completed.

SQL> select blocks from user_tab_statistics where table_name = 'MANY_COLS';

BLOCKS
----------
10143


which confirms that the "wide" table compression works for conventional inserts, but it requires the additional "Advanced Compression" license. Another downside of this approach is a potentially huge impact on performance and undo / redo generation, due to the repeated block compression, with each compression operation dumping the whole before-image block to undo / redo.

Script used:

set echo on

drop table many_cols purge;

create table many_cols (
col001 varchar2(10),
col002 varchar2(10),
col003 varchar2(10),
col004 varchar2(10),
col005 varchar2(10),
col006 varchar2(10),
col007 varchar2(10),
col008 varchar2(10),
col009 varchar2(10),
col010 varchar2(10),
col011 varchar2(10),
col012 varchar2(10),
col013 varchar2(10),
col014 varchar2(10),
col015 varchar2(10),
col016 varchar2(10),
col017 varchar2(10),
col018 varchar2(10),
col019 varchar2(10),
col020 varchar2(10),
col021 varchar2(10),
col022 varchar2(10),
col023 varchar2(10),
col024 varchar2(10),
col025 varchar2(10),
col026 varchar2(10),
col027 varchar2(10),
col028 varchar2(10),
col029 varchar2(10),
col030 varchar2(10),
col031 varchar2(10),
col032 varchar2(10),
col033 varchar2(10),
col034 varchar2(10),
col035 varchar2(10),
col036 varchar2(10),
col037 varchar2(10),
col038 varchar2(10),
col039 varchar2(10),
col040 varchar2(10),
col041 varchar2(10),
col042 varchar2(10),
col043 varchar2(10),
col044 varchar2(10),
col045 varchar2(10),
col046 varchar2(10),
col047 varchar2(10),
col048 varchar2(10),
col049 varchar2(10),
col050 varchar2(10),
col051 varchar2(10),
col052 varchar2(10),
col053 varchar2(10),
col054 varchar2(10),
col055 varchar2(10),
col056 varchar2(10),
col057 varchar2(10),
col058 varchar2(10),
col059 varchar2(10),
col060 varchar2(10),
col061 varchar2(10),
col062 varchar2(10),
col063 varchar2(10),
col064 varchar2(10),
col065 varchar2(10),
col066 varchar2(10),
col067 varchar2(10),
col068 varchar2(10),
col069 varchar2(10),
col070 varchar2(10),
col071 varchar2(10),
col072 varchar2(10),
col073 varchar2(10),
col074 varchar2(10),
col075 varchar2(10),
col076 varchar2(10),
col077 varchar2(10),
col078 varchar2(10),
col079 varchar2(10),
col080 varchar2(10),
col081 varchar2(10),
col082 varchar2(10),
col083 varchar2(10),
col084 varchar2(10),
col085 varchar2(10),
col086 varchar2(10),
col087 varchar2(10),
col088 varchar2(10),
col089 varchar2(10),
col090 varchar2(10),
col091 varchar2(10),
col092 varchar2(10),
col093 varchar2(10),
col094 varchar2(10),
col095 varchar2(10),
col096 varchar2(10),
col097 varchar2(10),
col098 varchar2(10),
col099 varchar2(10),
col100 varchar2(10),
col101 varchar2(10),
col102 varchar2(10),
col103 varchar2(10),
col104 varchar2(10),
col105 varchar2(10),
col106 varchar2(10),
col107 varchar2(10),
col108 varchar2(10),
col109 varchar2(10),
col110 varchar2(10),
col111 varchar2(10),
col112 varchar2(10),
col113 varchar2(10),
col114 varchar2(10),
col115 varchar2(10),
col116 varchar2(10),
col117 varchar2(10),
col118 varchar2(10),
col119 varchar2(10),
col120 varchar2(10),
col121 varchar2(10),
col122 varchar2(10),
col123 varchar2(10),
col124 varchar2(10),
col125 varchar2(10),
col126 varchar2(10),
col127 varchar2(10),
col128 varchar2(10),
col129 varchar2(10),
col130 varchar2(10),
col131 varchar2(10),
col132 varchar2(10),
col133 varchar2(10),
col134 varchar2(10),
col135 varchar2(10),
col136 varchar2(10),
col137 varchar2(10),
col138 varchar2(10),
col139 varchar2(10),
col140 varchar2(10),
col141 varchar2(10),
col142 varchar2(10),
col143 varchar2(10),
col144 varchar2(10),
col145 varchar2(10),
col146 varchar2(10),
col147 varchar2(10),
col148 varchar2(10),
col149 varchar2(10),
col150 varchar2(10),
col151 varchar2(10),
col152 varchar2(10),
col153 varchar2(10),
col154 varchar2(10),
col155 varchar2(10),
col156 varchar2(10),
col157 varchar2(10),
col158 varchar2(10),
col159 varchar2(10),
col160 varchar2(10),
col161 varchar2(10),
col162 varchar2(10),
col163 varchar2(10),
col164 varchar2(10),
col165 varchar2(10),
col166 varchar2(10),
col167 varchar2(10),
col168 varchar2(10),
col169 varchar2(10),
col170 varchar2(10),
col171 varchar2(10),
col172 varchar2(10),
col173 varchar2(10),
col174 varchar2(10),
col175 varchar2(10),
col176 varchar2(10),
col177 varchar2(10),
col178 varchar2(10),
col179 varchar2(10),
col180 varchar2(10),
col181 varchar2(10),
col182 varchar2(10),
col183 varchar2(10),
col184 varchar2(10),
col185 varchar2(10),
col186 varchar2(10),
col187 varchar2(10),
col188 varchar2(10),
col189 varchar2(10),
col190 varchar2(10),
col191 varchar2(10),
col192 varchar2(10),
col193 varchar2(10),
col194 varchar2(10),
col195 varchar2(10),
col196 varchar2(10),
col197 varchar2(10),
col198 varchar2(10),
col199 varchar2(10),
col200 varchar2(10),
col201 varchar2(10),
col202 varchar2(10),
col203 varchar2(10),
col204 varchar2(10),
col205 varchar2(10),
col206 varchar2(10),
col207 varchar2(10),
col208 varchar2(10),
col209 varchar2(10),
col210 varchar2(10),
col211 varchar2(10),
col212 varchar2(10),
col213 varchar2(10),
col214 varchar2(10),
col215 varchar2(10),
col216 varchar2(10),
col217 varchar2(10),
col218 varchar2(10),
col219 varchar2(10),
col220 varchar2(10),
col221 varchar2(10),
col222 varchar2(10),
col223 varchar2(10),
col224 varchar2(10),
col225 varchar2(10),
col226 varchar2(10),
col227 varchar2(10),
col228 varchar2(10),
col229 varchar2(10),
col230 varchar2(10),
col231 varchar2(10),
col232 varchar2(10),
col233 varchar2(10),
col234 varchar2(10),
col235 varchar2(10),
col236 varchar2(10),
col237 varchar2(10),
col238 varchar2(10),
col239 varchar2(10),
col240 varchar2(10),
col241 varchar2(10),
col242 varchar2(10),
col243 varchar2(10),
col244 varchar2(10),
col245 varchar2(10),
col246 varchar2(10),
col247 varchar2(10),
col248 varchar2(10),
col249 varchar2(10),
col250 varchar2(10),
col251 varchar2(10),
col252 varchar2(10),
col253 varchar2(10),
col254 varchar2(10),
col255 varchar2(10),
col256 varchar2(10),
col257 varchar2(10),
col258 varchar2(10),
col259 varchar2(10),
col260 varchar2(10),
col261 varchar2(10),
col262 varchar2(10),
col263 varchar2(10),
col264 varchar2(10),
col265 varchar2(10),
col266 varchar2(10),
col267 varchar2(10),
col268 varchar2(10),
col269 varchar2(10),
col270 varchar2(10),
col271 varchar2(10),
col272 varchar2(10),
col273 varchar2(10),
col274 varchar2(10),
col275 varchar2(10),
col276 varchar2(10),
col277 varchar2(10),
col278 varchar2(10),
col279 varchar2(10),
col280 varchar2(10),
col281 varchar2(10),
col282 varchar2(10),
col283 varchar2(10),
col284 varchar2(10),
col285 varchar2(10),
col286 varchar2(10),
col287 varchar2(10),
col288 varchar2(10),
col289 varchar2(10),
col290 varchar2(10),
col291 varchar2(10),
col292 varchar2(10),
col293 varchar2(10),
col294 varchar2(10),
col295 varchar2(10),
col296 varchar2(10),
col297 varchar2(10),
col298 varchar2(10),
col299 varchar2(10),
col300 varchar2(10),
col301 varchar2(10),
col302 varchar2(10),
col303 varchar2(10),
col304 varchar2(10),
col305 varchar2(10),
col306 varchar2(10),
col307 varchar2(10),
col308 varchar2(10),
col309 varchar2(10),
col310 varchar2(10),
col311 varchar2(10),
col312 varchar2(10),
col313 varchar2(10),
col314 varchar2(10),
col315 varchar2(10),
col316 varchar2(10),
col317 varchar2(10),
col318 varchar2(10),
col319 varchar2(10),
col320 varchar2(10),
col321 varchar2(10),
col322 varchar2(10),
col323 varchar2(10),
col324 varchar2(10),
col325 varchar2(10),
col326 varchar2(10),
col327 varchar2(10),
col328 varchar2(10),
col329 varchar2(10),
col330 varchar2(10),
col331 varchar2(10),
col332 varchar2(10),
col333 varchar2(10),
col334 varchar2(10),
col335 varchar2(10),
col336 varchar2(10),
col337 varchar2(10),
col338 varchar2(10),
col339 varchar2(10),
col340 varchar2(10),
col341 varchar2(10),
col342 varchar2(10),
col343 varchar2(10),
col344 varchar2(10),
col345 varchar2(10),
col346 varchar2(10),
col347 varchar2(10),
col348 varchar2(10),
col349 varchar2(10),
col350 varchar2(10),
col351 varchar2(10),
col352 varchar2(10),
col353 varchar2(10),
col354 varchar2(10),
col355 varchar2(10),
col356 varchar2(10),
col357 varchar2(10),
col358 varchar2(10),
col359 varchar2(10),
col360 varchar2(10),
col361 varchar2(10),
col362 varchar2(10),
col363 varchar2(10),
col364 varchar2(10),
col365 varchar2(10),
col366 varchar2(10),
col367 varchar2(10),
col368 varchar2(10),
col369 varchar2(10),
col370 varchar2(10),
col371 varchar2(10),
col372 varchar2(10),
col373 varchar2(10),
col374 varchar2(10),
col375 varchar2(10),
col376 varchar2(10),
col377 varchar2(10),
col378 varchar2(10),
col379 varchar2(10),
col380 varchar2(10),
col381 varchar2(10),
col382 varchar2(10),
col383 varchar2(10),
col384 varchar2(10),
col385 varchar2(10),
col386 varchar2(10),
col387 varchar2(10),
col388 varchar2(10),
col389 varchar2(10),
col390 varchar2(10),
col391 varchar2(10),
col392 varchar2(10),
col393 varchar2(10),
col394 varchar2(10),
col395 varchar2(10),
col396 varchar2(10),
col397 varchar2(10),
col398 varchar2(10),
col399 varchar2(10),
col400 varchar2(10),
col401 varchar2(10),
col402 varchar2(10),
col403 varchar2(10),
col404 varchar2(10),
col405 varchar2(10),
col406 varchar2(10),
col407 varchar2(10),
col408 varchar2(10),
col409 varchar2(10),
col410 varchar2(10),
col411 varchar2(10),
col412 varchar2(10),
col413 varchar2(10),
col414 varchar2(10),
col415 varchar2(10),
col416 varchar2(10),
col417 varchar2(10),
col418 varchar2(10),
col419 varchar2(10),
col420 varchar2(10),
col421 varchar2(10),
col422 varchar2(10),
col423 varchar2(10),
col424 varchar2(10),
col425 varchar2(10),
col426 varchar2(10),
col427 varchar2(10),
col428 varchar2(10),
col429 varchar2(10),
col430 varchar2(10),
col431 varchar2(10),
col432 varchar2(10),
col433 varchar2(10),
col434 varchar2(10),
col435 varchar2(10),
col436 varchar2(10),
col437 varchar2(10),
col438 varchar2(10),
col439 varchar2(10),
col440 varchar2(10),
col441 varchar2(10),
col442 varchar2(10),
col443 varchar2(10),
col444 varchar2(10),
col445 varchar2(10),
col446 varchar2(10),
col447 varchar2(10),
col448 varchar2(10),
col449 varchar2(10),
col450 varchar2(10),
col451 varchar2(10),
col452 varchar2(10),
col453 varchar2(10),
col454 varchar2(10),
col455 varchar2(10),
col456 varchar2(10),
col457 varchar2(10),
col458 varchar2(10),
col459 varchar2(10),
col460 varchar2(10),
col461 varchar2(10),
col462 varchar2(10),
col463 varchar2(10),
col464 varchar2(10),
col465 varchar2(10),
col466 varchar2(10),
col467 varchar2(10),
col468 varchar2(10),
col469 varchar2(10),
col470 varchar2(10),
col471 varchar2(10),
col472 varchar2(10),
col473 varchar2(10),
col474 varchar2(10),
col475 varchar2(10),
col476 varchar2(10),
col477 varchar2(10),
col478 varchar2(10),
col479 varchar2(10),
col480 varchar2(10),
col481 varchar2(10),
col482 varchar2(10),
col483 varchar2(10),
col484 varchar2(10),
col485 varchar2(10),
col486 varchar2(10),
col487 varchar2(10),
col488 varchar2(10),
col489 varchar2(10),
col490 varchar2(10),
col491 varchar2(10),
col492 varchar2(10),
col493 varchar2(10),
col494 varchar2(10),
col495 varchar2(10),
col496 varchar2(10),
col497 varchar2(10),
col498 varchar2(10),
col499 varchar2(10),
col500 varchar2(10),
col501 varchar2(10),
col502 varchar2(10),
col503 varchar2(10),
col504 varchar2(10),
col505 varchar2(10),
col506 varchar2(10),
col507 varchar2(10),
col508 varchar2(10),
col509 varchar2(10),
col510 varchar2(10),
col511 varchar2(10),
col512 varchar2(10),
col513 varchar2(10),
col514 varchar2(10),
col515 varchar2(10),
col516 varchar2(10),
col517 varchar2(10),
col518 varchar2(10),
col519 varchar2(10),
col520 varchar2(10),
col521 varchar2(10),
col522 varchar2(10),
col523 varchar2(10),
col524 varchar2(10),
col525 varchar2(10),
col526 varchar2(10),
col527 varchar2(10),
col528 varchar2(10),
col529 varchar2(10),
col530 varchar2(10),
col531 varchar2(10),
col532 varchar2(10),
col533 varchar2(10),
col534 varchar2(10),
col535 varchar2(10),
col536 varchar2(10),
col537 varchar2(10),
col538 varchar2(10),
col539 varchar2(10),
col540 varchar2(10),
col541 varchar2(10),
col542 varchar2(10),
col543 varchar2(10),
col544 varchar2(10),
col545 varchar2(10),
col546 varchar2(10),
col547 varchar2(10),
col548 varchar2(10),
col549 varchar2(10),
col550 varchar2(10),
col551 varchar2(10),
col552 varchar2(10),
col553 varchar2(10),
col554 varchar2(10),
col555 varchar2(10),
col556 varchar2(10),
col557 varchar2(10),
col558 varchar2(10),
col559 varchar2(10),
col560 varchar2(10),
col561 varchar2(10),
col562 varchar2(10),
col563 varchar2(10),
col564 varchar2(10),
col565 varchar2(10),
col566 varchar2(10),
col567 varchar2(10),
col568 varchar2(10),
col569 varchar2(10),
col570 varchar2(10),
col571 varchar2(10),
col572 varchar2(10),
col573 varchar2(10),
col574 varchar2(10),
col575 varchar2(10),
col576 varchar2(10),
col577 varchar2(10),
col578 varchar2(10),
col579 varchar2(10),
col580 varchar2(10),
col581 varchar2(10),
col582 varchar2(10),
col583 varchar2(10),
col584 varchar2(10),
col585 varchar2(10),
col586 varchar2(10),
col587 varchar2(10),
col588 varchar2(10),
col589 varchar2(10),
col590 varchar2(10),
col591 varchar2(10),
col592 varchar2(10),
col593 varchar2(10),
col594 varchar2(10),
col595 varchar2(10),
col596 varchar2(10),
col597 varchar2(10),
col598 varchar2(10),
col599 varchar2(10),
col600 varchar2(10),
col601 varchar2(10),
col602 varchar2(10),
col603 varchar2(10),
col604 varchar2(10),
col605 varchar2(10),
col606 varchar2(10),
col607 varchar2(10),
col608 varchar2(10),
col609 varchar2(10),
col610 varchar2(10),
col611 varchar2(10),
col612 varchar2(10),
col613 varchar2(10),
col614 varchar2(10),
col615 varchar2(10),
col616 varchar2(10),
col617 varchar2(10),
col618 varchar2(10),
col619 varchar2(10),
col620 varchar2(10),
col621 varchar2(10),
col622 varchar2(10),
col623 varchar2(10),
col624 varchar2(10),
col625 varchar2(10),
col626 varchar2(10),
col627 varchar2(10),
col628 varchar2(10),
col629 varchar2(10),
col630 varchar2(10),
col631 varchar2(10),
col632 varchar2(10),
col633 varchar2(10),
col634 varchar2(10),
col635 varchar2(10),
col636 varchar2(10),
col637 varchar2(10),
col638 varchar2(10),
col639 varchar2(10),
col640 varchar2(10),
col641 varchar2(10),
col642 varchar2(10),
col643 varchar2(10),
col644 varchar2(10),
col645 varchar2(10),
col646 varchar2(10),
col647 varchar2(10),
col648 varchar2(10),
col649 varchar2(10),
col650 varchar2(10),
col651 varchar2(10),
col652 varchar2(10),
col653 varchar2(10),
col654 varchar2(10),
col655 varchar2(10),
col656 varchar2(10),
col657 varchar2(10),
col658 varchar2(10),
col659 varchar2(10),
col660 varchar2(10),
col661 varchar2(10),
col662 varchar2(10),
col663 varchar2(10),
col664 varchar2(10),
col665 varchar2(10),
col666 varchar2(10),
col667 varchar2(10),
col668 varchar2(10),
col669 varchar2(10),
col670 varchar2(10),
col671 varchar2(10),
col672 varchar2(10),
col673 varchar2(10),
col674 varchar2(10),
col675 varchar2(10),
col676 varchar2(10),
col677 varchar2(10),
col678 varchar2(10),
col679 varchar2(10),
col680 varchar2(10),
col681 varchar2(10),
col682 varchar2(10),
col683 varchar2(10),
col684 varchar2(10),
col685 varchar2(10),
col686 varchar2(10),
col687 varchar2(10),
col688 varchar2(10),
col689 varchar2(10),
col690 varchar2(10),
col691 varchar2(10),
col692 varchar2(10),
col693 varchar2(10),
col694 varchar2(10),
col695 varchar2(10),
col696 varchar2(10),
col697 varchar2(10),
col698 varchar2(10),
col699 varchar2(10),
col700 varchar2(10),
col701 varchar2(10),
col702 varchar2(10),
col703 varchar2(10),
col704 varchar2(10),
col705 varchar2(10),
col706 varchar2(10),
col707 varchar2(10),
col708 varchar2(10),
col709 varchar2(10),
col710 varchar2(10),
col711 varchar2(10),
col712 varchar2(10),
col713 varchar2(10),
col714 varchar2(10),
col715 varchar2(10),
col716 varchar2(10),
col717 varchar2(10),
col718 varchar2(10),
col719 varchar2(10),
col720 varchar2(10),
col721 varchar2(10),
col722 varchar2(10),
col723 varchar2(10),
col724 varchar2(10),
col725 varchar2(10),
col726 varchar2(10),
col727 varchar2(10),
col728 varchar2(10),
col729 varchar2(10),
col730 varchar2(10),
col731 varchar2(10),
col732 varchar2(10),
col733 varchar2(10),
col734 varchar2(10),
col735 varchar2(10),
col736 varchar2(10),
col737 varchar2(10),
col738 varchar2(10),
col739 varchar2(10),
col740 varchar2(10),
col741 varchar2(10),
col742 varchar2(10),
col743 varchar2(10),
col744 varchar2(10),
col745 varchar2(10),
col746 varchar2(10),
col747 varchar2(10),
col748 varchar2(10),
col749 varchar2(10),
col750 varchar2(10),
col751 varchar2(10),
col752 varchar2(10),
col753 varchar2(10),
col754 varchar2(10),
col755 varchar2(10),
col756 varchar2(10),
col757 varchar2(10),
col758 varchar2(10),
col759 varchar2(10),
col760 varchar2(10),
col761 varchar2(10),
col762 varchar2(10),
col763 varchar2(10),
col764 varchar2(10),
col765 varchar2(10),
col766 varchar2(10),
col767 varchar2(10),
col768 varchar2(10),
col769 varchar2(10),
col770 varchar2(10),
col771 varchar2(10),
col772 varchar2(10),
col773 varchar2(10),
col774 varchar2(10),
col775 varchar2(10),
col776 varchar2(10),
col777 varchar2(10),
col778 varchar2(10),
col779 varchar2(10),
col780 varchar2(10),
col781 varchar2(10),
col782 varchar2(10),
col783 varchar2(10),
col784 varchar2(10),
col785 varchar2(10),
col786 varchar2(10),
col787 varchar2(10),
col788 varchar2(10),
col789 varchar2(10),
col790 varchar2(10),
col791 varchar2(10),
col792 varchar2(10),
col793 varchar2(10),
col794 varchar2(10),
col795 varchar2(10),
col796 varchar2(10),
col797 varchar2(10),
col798 varchar2(10),
col799 varchar2(10),
col800 varchar2(10),
col801 varchar2(10),
col802 varchar2(10),
col803 varchar2(10),
col804 varchar2(10),
col805 varchar2(10),
col806 varchar2(10),
col807 varchar2(10),
col808 varchar2(10),
col809 varchar2(10),
col810 varchar2(10),
col811 varchar2(10),
col812 varchar2(10),
col813 varchar2(10),
col814 varchar2(10),
col815 varchar2(10),
col816 varchar2(10),
col817 varchar2(10),
col818 varchar2(10),
col819 varchar2(10),
col820 varchar2(10),
col821 varchar2(10),
col822 varchar2(10),
col823 varchar2(10),
col824 varchar2(10),
col825 varchar2(10),
col826 varchar2(10),
col827 varchar2(10),
col828 varchar2(10),
col829 varchar2(10),
col830 varchar2(10),
col831 varchar2(10),
col832 varchar2(10),
col833 varchar2(10),
col834 varchar2(10),
col835 varchar2(10),
col836 varchar2(10),
col837 varchar2(10),
col838 varchar2(10),
col839 varchar2(10),
col840 varchar2(10),
col841 varchar2(10),
col842 varchar2(10),
col843 varchar2(10),
col844 varchar2(10),
col845 varchar2(10),
col846 varchar2(10),
col847 varchar2(10),
col848 varchar2(10),
col849 varchar2(10),
col850 varchar2(10),
col851 varchar2(10),
col852 varchar2(10),
col853 varchar2(10),
col854 varchar2(10),
col855 varchar2(10),
col856 varchar2(10),
col857 varchar2(10),
col858 varchar2(10),
col859 varchar2(10),
col860 varchar2(10),
col861 varchar2(10),
col862 varchar2(10),
col863 varchar2(10),
col864 varchar2(10),
col865 varchar2(10),
col866 varchar2(10),
col867 varchar2(10),
col868 varchar2(10),
col869 varchar2(10),
col870 varchar2(10),
col871 varchar2(10),
col872 varchar2(10),
col873 varchar2(10),
col874 varchar2(10),
col875 varchar2(10),
col876 varchar2(10),
col877 varchar2(10),
col878 varchar2(10),
col879 varchar2(10),
col880 varchar2(10),
col881 varchar2(10),
col882 varchar2(10),
col883 varchar2(10),
col884 varchar2(10),
col885 varchar2(10),
col886 varchar2(10),
col887 varchar2(10),
col888 varchar2(10),
col889 varchar2(10),
col890 varchar2(10),
col891 varchar2(10),
col892 varchar2(10),
col893 varchar2(10),
col894 varchar2(10),
col895 varchar2(10),
col896 varchar2(10),
col897 varchar2(10),
col898 varchar2(10),
col899 varchar2(10),
col900 varchar2(10),
col901 varchar2(10),
col902 varchar2(10),
col903 varchar2(10),
col904 varchar2(10),
col905 varchar2(10),
col906 varchar2(10),
col907 varchar2(10),
col908 varchar2(10),
col909 varchar2(10),
col910 varchar2(10),
col911 varchar2(10),
col912 varchar2(10),
col913 varchar2(10),
col914 varchar2(10),
col915 varchar2(10),
col916 varchar2(10),
col917 varchar2(10),
col918 varchar2(10),
col919 varchar2(10),
col920 varchar2(10),
col921 varchar2(10),
col922 varchar2(10),
col923 varchar2(10),
col924 varchar2(10),
col925 varchar2(10),
col926 varchar2(10),
col927 varchar2(10),
col928 varchar2(10),
col929 varchar2(10),
col930 varchar2(10),
col931 varchar2(10),
col932 varchar2(10),
col933 varchar2(10),
col934 varchar2(10),
col935 varchar2(10),
col936 varchar2(10),
col937 varchar2(10),
col938 varchar2(10),
col939 varchar2(10),
col940 varchar2(10),
col941 varchar2(10),
col942 varchar2(10),
col943 varchar2(10),
col944 varchar2(10),
col945 varchar2(10),
col946 varchar2(10),
col947 varchar2(10),
col948 varchar2(10),
col949 varchar2(10),
col950 varchar2(10),
col951 varchar2(10),
col952 varchar2(10),
col953 varchar2(10),
col954 varchar2(10),
col955 varchar2(10),
col956 varchar2(10),
col957 varchar2(10),
col958 varchar2(10),
col959 varchar2(10),
col960 varchar2(10),
col961 varchar2(10),
col962 varchar2(10),
col963 varchar2(10),
col964 varchar2(10),
col965 varchar2(10),
col966 varchar2(10),
col967 varchar2(10),
col968 varchar2(10),
col969 varchar2(10),
col970 varchar2(10),
col971 varchar2(10),
col972 varchar2(10),
col973 varchar2(10),
col974 varchar2(10),
col975 varchar2(10),
col976 varchar2(10),
col977 varchar2(10),
col978 varchar2(10),
col979 varchar2(10),
col980 varchar2(10),
col981 varchar2(10),
col982 varchar2(10),
col983 varchar2(10),
col984 varchar2(10),
col985 varchar2(10),
col986 varchar2(10),
col987 varchar2(10),
col988 varchar2(10),
col989 varchar2(10),
col990 varchar2(10),
col991 varchar2(10),
col992 varchar2(10),
col993 varchar2(10),
col994 varchar2(10),
col995 varchar2(10),
col996 varchar2(10),
col997 varchar2(10),
col998 varchar2(10),
col999 varchar2(10),
col1000 varchar2(10)
);
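The 1,000 repetitive column definitions above don't need to be typed by hand. As a sketch, a small shell loop can generate the same DDL into a script file (the file name many_cols.sql is arbitrary):

```shell
# Generate the many_cols DDL (col1 .. col1000, all varchar2(10))
# instead of writing out 1000 column definitions manually.
{
  echo "create table many_cols ("
  for i in $(seq 1 999); do
    echo "col${i} varchar2(10),"       # every column but the last gets a trailing comma
  done
  echo "col1000 varchar2(10)"
  echo ");"
} > many_cols.sql
```

The generated file can then be run from SQL*Plus or SQLcl with @many_cols.sql.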

declare
  s_sql1 varchar2(32767);
  s_sql2 varchar2(32767);
begin
  -- build the column list (col1 .. col1000) and a matching values list;
  -- plain concatenation of i matches the unpadded column names above
  for i in 1..1000 loop
    s_sql1 := s_sql1 || 'col' || i || ',';
    s_sql2 := s_sql2 || '''BLABLA'',';
  end loop;

  s_sql1 := rtrim(s_sql1, ',');
  s_sql2 := rtrim(s_sql2, ',');

  s_sql1 := 'insert into many_cols (' || s_sql1 || ') values (' || s_sql2 || ')';

  for i in 1..10000 loop
    execute immediate s_sql1;
  end loop;

  commit;
end;
/

select count(*) from many_cols;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';

-- alter session set "_widetab_comp_enabled" = false;

alter table many_cols move compress basic;

exec dbms_stats.gather_table_stats(null, 'many_cols')

select blocks from user_tab_statistics where table_name = 'MANY_COLS';
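The two BLOCKS figures reported by user_tab_statistics give the compression ratio directly. As a quick sketch (the block counts below are hypothetical placeholders; substitute the values from your own before/after queries):

```shell
blocks_before=20000   # hypothetical: BLOCKS before "alter table ... move compress basic"
blocks_after=5000     # hypothetical: BLOCKS after the move
# ratio of uncompressed to compressed segment size, to one decimal place
ratio=$(awk -v b="$blocks_before" -v a="$blocks_after" 'BEGIN { printf "%.1f", b / a }')
echo "compression ratio: ${ratio}x"
```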

Remove GRID Home After Upgrade

Michael Dinh - Tue, 2019-05-07 16:53

The environment started with a GRID 12.1.0.1 installation, was upgraded to 18.3.0.0, and then patched out-of-place (OOP) to 18.6.0.0.

As a result, there are three GRID homes, and the 12.1.0.1 home is to be removed.

This demonstration covers the last node of the cluster; the actions performed are the same on all nodes.

Review existing patch for Grid and Database homes:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/lspatches.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ /u01/18.3.0.0/grid_2/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/18.3.0.0/grid_2/OPatch/opatch lspatches
29302264;OCW RELEASE UPDATE 18.6.0.0.0 (29302264)
29301643;ACFS RELEASE UPDATE 18.6.0.0.0 (29301643)
29301631;Database Release Update : 18.6.0.0.190416 (29301631)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
28435192;DBWLM RELEASE UPDATE 18.0.0.0.0 (28435192)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

OPatch succeeded.
+ . /media/patch/hawk.env
++ set +x
The Oracle base remains unchanged with value /u01/app/oracle
ORACLE_UNQNAME=hawk
ORACLE_SID=hawk2
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/12.1.0.1/db1
Oracle Instance alive for sid "hawk2"
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch version
OPatch Version: 12.2.0.1.17

OPatch succeeded.
+ /u01/app/oracle/12.1.0.1/db1/OPatch/opatch lspatches
28731800;Database Bundle Patch : 12.1.0.2.190115 (28731800)
28729213;OCW PATCH SET UPDATE 12.1.0.2.190115 (28729213)

OPatch succeeded.
+ exit
[oracle@racnode-dc1-1 ~]$

Notice that the GRID home is /u01/18.3.0.0/grid_2 because this was the path suggested by the OOP process.
Based on experience, it might be better to name the GRID home after the actual version, i.e. /u01/18.6.0.0/grid.

Verify cluster state is [NORMAL]:

[oracle@racnode-dc1-1 ~]$ ssh racnode-dc1-2 "/media/patch/crs_Query.sh"
+ . /media/patch/gi.env
++ set +x
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
+ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [18.0.0.0.0]
+ crsctl query crs softwareversion
Oracle Clusterware version on node [racnode-dc1-2] is [18.0.0.0.0]
+ crsctl query crs softwarepatch
Oracle Clusterware patch level on node racnode-dc1-2 is [2056778364].
+ crsctl query crs releasepatch
Oracle Clusterware release patch level is [2056778364] and the complete list of patches [27908644 27923415 28090523 28090553 28090557 28256701 28435192 28547619 28822489 28864593 28864607 29301631 29301643 29302264 ] have been applied on the local node. The release patch string is [18.6.0.0.0].
+ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [18.0.0.0.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [2056778364].
+ exit
[oracle@racnode-dc1-1 ~]$

Check Oracle Inventory:

[oracle@racnode-dc1-2 ~]$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2019, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.2.0.4.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>

### GRID home (/u01/app/12.1.0.1/grid) to be removed.
========================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
========================================================================================

<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$
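The registered homes can also be pulled out of inventory.xml programmatically. A sketch (run here against an inline copy of the relevant lines so it is self-contained; point it at /u01/app/oraInventory/ContentsXML/inventory.xml on a real node):

```shell
# Extract NAME and LOC of every registered home from an inventory.xml.
# An inline sample is used here; replace inventory_sample.xml with the
# real inventory file on a cluster node.
cat > inventory_sample.xml <<'EOF'
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1">
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>
EOF
# capture the NAME and LOC attributes of each <HOME> element
sed -n 's/.*<HOME NAME="\([^"]*\)" LOC="\([^"]*\)".*/\1 \2/p' inventory_sample.xml > homes.txt
cat homes.txt
```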

Remove the GRID home (/u01/app/12.1.0.1/grid). Use the -local flag to avoid any bug issues.

[oracle@racnode-dc1-2 ~]$ export ORACLE_HOME=/u01/app/12.1.0.1/grid
[oracle@racnode-dc1-2 ~]$ $ORACLE_HOME/oui/bin/runInstaller -detachHome -silent -local ORACLE_HOME=$ORACLE_HOME
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 16040 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
'DetachHome' was successful.
[oracle@racnode-dc1-2 ~]$

Verify GRID home was removed:

[oracle@racnode-dc1-2 ~]$ cat /u01/app/oraInventory/ContentsXML/inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2014, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>12.1.0.2.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/12.1.0.1/db1" TYPE="O" IDX="2">
   <NODE_LIST>
      <NODE NAME="racnode-dc1-1"/>
      <NODE NAME="racnode-dc1-2"/>
   </NODE_LIST>
</HOME>
<HOME NAME="OraGI18Home1" LOC="/u01/18.3.0.0/grid" TYPE="O" IDX="3"/>
<HOME NAME="OraHome1" LOC="/u01/18.3.0.0/grid_2" TYPE="O" IDX="4" CRS="true"/>

### GRID home (/u01/app/12.1.0.1/grid) removed.
================================================================================
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0.1/grid" TYPE="O" IDX="1" REMOVED="T"/>
================================================================================

</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@racnode-dc1-2 ~]$

Remove 12.1.0.1 directory:

[oracle@racnode-dc1-2 ~]$ sudo su -
Last login: Thu May  2 23:38:22 CEST 2019
[root@racnode-dc1-2 ~]# cd /u01/app/
[root@racnode-dc1-2 app]# ll
total 12
drwxr-xr-x  3 root   oinstall 4096 Apr 17 23:36 12.1.0.1
drwxrwxr-x 12 oracle oinstall 4096 Apr 30 18:05 oracle
drwxrwx---  5 oracle oinstall 4096 May  2 23:54 oraInventory
[root@racnode-dc1-2 app]# rm -rf 12.1.0.1/
[root@racnode-dc1-2 app]#

Check the cluster:

[root@racnode-dc1-2 app]# logout
[oracle@racnode-dc1-2 ~]$ . /media/patch/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM2
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u01/18.3.0.0/grid_2
ORACLE_HOME=/u01/18.3.0.0/grid_2
Oracle Instance alive for sid "+ASM2"
[oracle@racnode-dc1-2 ~]$ crsctl check cluster -all
**************************************************************
racnode-dc1-1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
racnode-dc1-2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
[oracle@racnode-dc1-2 ~]$

Later, /u01/18.3.0.0/grid will be removed, too, if there are no issues with the most recent patch.
