Feed aggregator

Pivot with list of rows

Tom Kyte - Thu, 2019-09-26 06:46
We have a table which contains db_name and usernames. In the output we need a list of users per DB, i.e. the number of columns will equal the number of distinct db_name values. Sample output format: DB1 DB2 USER1 USER4 USER2 USER5 USER3 Database version:...
Categories: DBA Blogs

confuse at the order of execution plan table

Tom Kyte - Thu, 2019-09-26 06:46
As we were told that "The execution order in EXPLAIN PLAN output begins with the line that is the furthest indented to the right. The next step is the parent of that line. If two lines are indented equally, then the top line is executed first." howe...
Categories: DBA Blogs

Converting data types in where clause

Tom Kyte - Thu, 2019-09-26 06:46
Hi Tom, My question is regarding when a query does not have the right datatype in the where clause. Example: -- Create table CREATE TABLE mytable ( mynumber varchar2(20), primary key(mynumber)); -- Insert some rows ...
Categories: DBA Blogs

Resetting a live sequence

Tom Kyte - Thu, 2019-09-26 06:46
A sequence was about to finish, so I had to make it bigger. I work on database 11.2, so I had to use a workaround described here (https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1119633817597). The procedure shown in LiveSQL was ru...
Categories: DBA Blogs

Strange behavior in analytic functions with partitions

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I have met strange behavior that I can't understand. I have this table: CREATE TABLE test_table (id NUMBER(10,0) NOT NULL, register_date DATE DEFAULT sysdate NOT NULL, row...
Categories: DBA Blogs

Issue in generating Custom Reference ID and Mapping with Form's field

Tom Kyte - Thu, 2019-09-26 06:46
Building an app for blocking a demo calendar for particular product setup. I wanted to create a custom reference ID ( CURRENT_MONTH-CURRENT_YEAR-SEQUENCE like SEPT-2019-003) which will be the Primary Key (Column BOOKING_REF) for the table (table ...
Categories: DBA Blogs

Inserting without a full list for field names

Tom Kyte - Thu, 2019-09-26 06:46
We have an issue when we perform an insert like this: INSERT INTO STS_RESP_LOG(STS_REQ_LOG_SYSTEM_ID, HSTRY_FLG, SNGL_STS, FRQNCY, CRT_DATE) VALUES (?, ?, ?, ?, sysdate); ?...
Categories: DBA Blogs

RMAN MAXPIECESIZE VS SECTION SIZE

Tom Kyte - Thu, 2019-09-26 06:46
Hello, I've made some tests to try to best parallelize the backup of a big database, and I wanted to know if it's possible to parallelize the backup of a backup piece (multiple backup sets) across multiple channels with MAXPIECESIZE, because our SBT media is ...
Categories: DBA Blogs

Where is my StreamAdmin account?

DBASolved - Wed, 2019-09-25 15:10

One of the huge benefits of Oracle GoldenGate Microservices is the security framework which comes standard when you install GoldenGate. As you set up the ServiceManager and first deployment, you are prompted to build an administration account. As a best practice we recommend that the account be named “oggadmin” with the password you define. Although it […]

The post Where is my StreamAdmin account? appeared first on DBASolved.

Categories: DBA Blogs

Running Oracle JET in Oracle Cloud Free Tier

Andrejus Baranovski - Wed, 2019-09-25 13:09
OOW'19 stands out from recent years' OOW conferences with an important announcement - the Oracle Cloud Free Tier offering. This offering includes two free DB instances and two free compute VM instances. What more could you wish for your side and hobby projects? This is a strong move by Oracle and it should boost Oracle Cloud. Read more about it on the Oracle Cloud Free Tier page.



It was interesting to test how to deploy an Oracle JET app to an Oracle Always Free compute VM instance. I will not go through the initial steps related to creating the VM instance and enabling internet access (for port 80). You can read all about that in a nice write-up from Dimitri Gielis.

Let's assume you have already created an Oracle JET app and want to deploy it. One way would be to set up Node.js and Nginx on the compute VM and pull the app source code from Git. I prefer another way - going through a Docker container, with Nginx acting as an HTTP server that redirects requests to the Docker container port. But in this post, for simplicity, we are not going to look into the Nginx setup - we will focus only on JET deployment through a Docker container.

1. Create an empty Node application (follow these steps):

express --view=pug

2. Add dependencies, go into the Node app and run:

npm install

3. Copy the Oracle JET content from the web folder into the Node app's public folder (remove the existing files)

4. Inside the Node app, adjust the app.js file by commenting out these lines:

var usersRouter = require('./routes/users');

app.set('view engine', 'pug');

app.use('/users', usersRouter);

5. Keep only the index.js file in the routes folder

6. Remove the template files from the views folder

7. Update the index.js file to redirect to the Oracle JET index.html:

router.get('/', function(req, res, next) {
  //res.render('index', { title: 'Express' });
  res.sendFile('index.html', {root: './public/'});
});

8. Note the port 3000 setting in bin/www; this is the port the Node app will run on inside the Docker container

9. Create Dockerfile inside Node app folder (follow these steps). Content:

FROM node:10

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./

RUN npm install
# If you are building your code for production
# RUN npm ci --only=production

# Bundle app source
COPY . .

EXPOSE 3000
CMD [ "node", "./bin/www" ]

10. Create .dockerignore file. Content:

node_modules
npm-debug.log

11. Build the Docker image locally by running the command below inside the Node app folder:

docker build -t username/imagename -f ./Dockerfile .
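Optionally, sanity-check the image locally before pushing (the jet-test container name is just an example):

docker run -p 3000:3000 -d --name jet-test username/imagename
curl http://localhost:3000/index.html
docker rm -f jet-test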

12. Push the Docker image to Docker Hub. This way we will be able to pull the image from the Oracle compute VM in the cloud:

docker push username/imagename

---

The next steps are executed inside the Oracle compute VM. You should connect through SSH to run the commands below.

13. Install Docker (run sudo su):

yum install docker-engine

14. Enable Docker:

systemctl enable docker

15. Start Docker:

systemctl start docker

16. Check Docker status:

systemctl status docker.service

17. Check Docker version:

docker version

18. Log in to Docker Hub, to be able to pull the image with the Node app. If the login doesn't work (access permission issue), run this command: sudo usermod -a -G docker $USER

docker login

19. Run container:

docker run -p 80:3000 -d --name appname username/imagename

The Node app with the Oracle JET content can now be accessed on port 80, using the public IP of your Oracle compute VM: http://130.61.241.30/index.html
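To verify the container on the VM itself, a quick check (run as root or as a user in the docker group):

docker ps --filter name=appname
curl -I http://localhost/index.html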

The Oracle JET app now runs on an Oracle Cloud Free Tier compute VM.

Basic Replication -- 5 : REFRESH_METHOD : FAST or FORCE ?

Hemant K Chitale - Wed, 2019-09-25 10:14
In the previous blog post, I had a remark "We'll explore the implications of "REFRESH FAST" and just "REFRESH" alone in a subsequent blog post."

This is in the context of whether it is FORCE or FAST that shows up as the REFRESH_METHOD.  A FORCE attempts a FAST Refresh and, if it can't do so (e.g. the Materialized View Log is not accessible), attempts a COMPLETE Refresh from all the rows of the Source Table.

Other than an MV Log being a requirement, there are constraints on which types of Materialized Views can do a FAST Refresh.
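(For reference, a minimal sketch of creating such a log, assuming SOURCE_TABLE has a primary key -- by default the log captures primary key values:)

SQL> create materialized view log on source_table;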

SQL> create materialized view mv_fast_not_possible
2 refresh fast on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /
as select id, data_element_2, sysdate
*
ERROR at line 3:
ORA-12015: cannot create a fast refresh materialized view from a complex query


SQL> !oerr ora 12015
12015, 00000, "cannot create a fast refresh materialized view from a complex query"
// *Cause: Neither ROWIDs and nor primary key constraints are supported for
// complex queries.
// *Action: Reissue the command with the REFRESH FORCE or REFRESH COMPLETE
// option or create a simple materialized view.

SQL>


Thus, a "complex" query -- here one that adds a SYSDATE column -- cannot use a FAST Refresh.
(For all the restrictions, see Paragraph "5.3.8.4 General Restrictions on Fast Refresh" in the 19c documentation. )

SQL> create materialized view mv_fast_not_possible
2 refresh force on demand
3 as select id, data_element_2, sysdate
4 from source_table
5 /

Materialized view created.

SQL> select refresh_mode, refresh_method, last_refresh_type
2 from user_mviews
3 where mview_name = 'MV_FAST_NOT_POSSIBLE'
4 /

REFRESH_M REFRESH_ LAST_REF
--------- -------- --------
DEMAND FORCE COMPLETE

SQL>
SQL> insert into source_table
2 values (2000,'TwoThousand','NewTwoTh',sysdate);

1 row created.

SQL> select * from source_table order by date_col ;

ID DATA_ELEMENT_1 DATA_ELEMENT_2 DATE_COL
---------- --------------- --------------- ---------
101 First One 18-AUG-19
103 Third Three 18-AUG-19
104 Fourth Updated 09-SEP-19
5 Fifth Five 16-SEP-19
6 Sixth TwoHundred 19-SEP-19
7 Seventh ThreeHundred 19-SEP-19
2000 TwoThousand NewTwoTh 25-SEP-19

7 rows selected.

SQL>
SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_2');

PL/SQL procedure successfully completed.

SQL> exec dbms_mview.refresh('MV_FAST_NOT_POSSIBLE');

PL/SQL procedure successfully completed.

SQL>
SQL> select mview_name, refresh_mode,refresh_method,last_refresh_type, last_refresh_date
2 from user_mviews
3 order by last_refresh_date
4 /

MVIEW_NAME REFRESH_M REFRESH_ LAST_REF LAST_REFR
--------------------- --------- -------- -------- ---------
MV_OF_SOURCE DEMAND FAST FAST 25-SEP-19
MV_2 DEMAND FORCE FAST 25-SEP-19
MV_FAST_NOT_POSSIBLE DEMAND FORCE COMPLETE 25-SEP-19

SQL>


MV_FAST_NOT_POSSIBLE will always undergo a COMPLETE Refresh, using REFRESH_METHOD='FORCE'.

MV_2 has REFRESH_METHOD='FORCE' because it was created with "refresh on demand", with the "fast" keyword missing.  Nevertheless, it is a "simple" Materialized View, so it does a FAST Refresh.

MV_OF_SOURCE was created with "refresh fast on demand", so it is already configured as REFRESH_METHOD='FAST'.



Categories: DBA Blogs

Managing Licenses with AWS License Manager

Yann Neuhaus - Wed, 2019-09-25 06:51
Introduction

Computing environments have become more and more agile over the last few years. Companies need to provide solutions that help people quickly set up new resources, start and stop them, scale them according to need and, finally, remove them. In such environments, it can be tricky to stay license compliant when resources change on an hourly basis.

Having a look at AWS services, I saw that AWS provides a license managing tool named “AWS License Manager”. I took a few minutes in order to:

  • Understand which resources this service is able to monitor
  • Understand how it works
  • Test it with an on-premises Linux server running an Oracle database
License Manager Service

The first step in order to use License Manager is to select it in the list of AWS Services.

AWS Services List

After clicking on AWS License Manager, the AWS License Manager window will appear.

"<yoastmark

Now, we simply have to create a license configuration with the required license terms according to the software vendor. You can set up different kinds of metrics, such as:

  • vCPUs
  • Cores
  • Sockets
  • Instances

License Manager also provides the possibility to enforce the license limit, meaning that it prevents license usage after the available licenses are exhausted.
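The same configuration can also be created from the AWS CLI; a minimal sketch, where the name and the count are just examples:

aws license-manager create-license-configuration --name oracle-db-vcpu --license-counting-type vCPU --license-count 1 --license-count-hard-limit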

AWS Create License configuration

In the context of on-premises license monitoring, it is important to note that the Sockets and Cores license types are not accepted. Therefore, in this example I used vCPUs.

"<yoastmark

Error while trying to associate a Socket license with an on-premises host

AWS Systems Manager

Once the license configuration is created, it's now necessary to use another AWS service: AWS Systems Manager. This service allows you to view and control your infrastructure on AWS. AWS Systems Manager lets you view and control not only your Amazon EC2 instances, but also on-premises servers and virtual machines (including VMs in other cloud environments). Some Systems Manager capabilities are not free; however, in the context of this example, everything is free.

AWS Systems Manager Agent (SSM Agent)

In order to benefit from AWS Systems Manager, we need to install the AWS Systems Manager Agent (SSM Agent) on our on-premises host. SSM Agent is Amazon software that can be installed and configured on an Amazon EC2 instance, an on-premises server, or a virtual machine (VM), and provides a solution to update, manage, and configure these resources. SSM Agent is installed by default on instances created from Windows Server 2016 and Windows Server 2019, Amazon Linux, and Ubuntu Server AMIs. However, if you are running an on-premises server, you need to install it yourself. The process is really straightforward, as presented below.

[root@vmrefdba01 ~]# mkdir /tmp/ssm
[root@vmrefdba01 ~]# curl https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm -o /tmp/ssm/amazon-ssm-agent.rpm
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18.9M  100 18.9M    0     0  3325k      0  0:00:05  0:00:05 --:--:-- 4368k
[root@vmrefdba01 ~]# sudo yum install -y /tmp/ssm/amazon-ssm-agent.rpm
Loaded plugins: refresh-packagekit, ulninfo
Setting up Install Process
Examining /tmp/ssm/amazon-ssm-agent.rpm: amazon-ssm-agent-2.3.707.0-1.x86_64
Marking /tmp/ssm/amazon-ssm-agent.rpm to be installed
public_ol6_UEK_latest                                    | 2.5 kB     00:00
public_ol6_UEK_latest/primary_db                         |  64 MB     00:07
public_ol6_latest                                        | 2.7 kB     00:00
public_ol6_latest/primary_db                             |  18 MB     00:07
Resolving Dependencies
--> Running transaction check
---> Package amazon-ssm-agent.x86_64 0:2.3.707.0-1 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================
 Package                            Arch                     Version                        Repository                           Size
======================================================================================================================================
Installing:
 amazon-ssm-agent                   x86_64                   2.3.707.0-1                    /amazon-ssm-agent                    61 M

Transaction Summary
======================================================================================================================================
Install       1 Package(s)

Total size: 61 M
Installed size: 61 M
Downloading Packages:
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Installing : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1
amazon-ssm-agent start/running, process 3896
  Verifying  : amazon-ssm-agent-2.3.707.0-1.x86_64                                                                                1/1

Installed:
  amazon-ssm-agent.x86_64 0:2.3.707.0-1

Complete!

Creating an activation

Once the agent is installed, we have to create a new “Activation” in AWS Systems Manager by clicking on “Create activation“. At the end of the creation you will get an Activation Code and an Activation ID (in the green field below). You have to keep this information for the agent configuration.

AWS Systems Manager Activation

Agent Configuration

In order to register your on-premises instance with AWS, you simply have to execute the following command with the Activation Code and Activation ID provided by AWS Systems Manager:

sudo amazon-ssm-agent -register -code "<cf Activation Code>" -id "<cf Activation ID>" -region "us-east-2"

2019-09-19 13:53:05 INFO Successfully registered the instance with AWS SSM using Managed instance-id: mi-0756a9f0dc25be3cd

Once registered, the Managed Instance should appear in AWS Systems Manager as presented below.

AWS Systems Manager – Managed Instances

The platform type is detected, as well as the kernel version, IP address and computer name. AWS Systems Manager also provides a package inventory and many other kinds of inventory, such as network inventory, file inventory, and so on.
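As a quick check, the managed instance should also be visible from the AWS CLI (assuming the CLI is configured with appropriate credentials):

aws ssm describe-instance-information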

AWS Systems Manager – Application Inventory

Association between License Configuration and Resource ID

We now have to link the Managed Instance (resource) to the license configuration. The goal, of course, is to define which license configuration will be applied to which resource. To proceed, we go into AWS License Manager and select “Search Inventory” in the menu. Then we simply have to select the resource and click on “Associate license Configuration”. The same association can also be done from the CLI, as sketched below.
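The equivalent CLI call would look roughly as follows; both ARNs are placeholders:

aws license-manager update-license-specifications-for-resource --resource-arn <managed-instance-arn> --add-license-specifications LicenseConfigurationArn=<license-configuration-arn>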

"<yoastmark

AWS License Manager – Search Inventory

The following window will appear, allowing you to define which license configuration matches which resource:

"<yoastmark

Having a look at the AWS License Manager dashboard, you can see that 1 out of 1 licenses is consumed, since I dedicated one vCPU to my virtual machine and assigned a 1 vCPU license to this instance.

"<yoastmark

AWS License Manager – Dashboard

Core Messages
  • AWS License Manager offers more functionalities for EC2 instances than for on-premises servers.
  • AWS License Manager offers functionalities to monitor Sockets, vCPUs, Cores and Instances.
  • AWS License Manager definitely helps to manage licenses, but doesn't fit all requirements and license models.
  • AWS Systems Manager is a powerful tool providing several functionalities.
Strengths
  • AWS License Manager is free.
  • AWS License Manager offers possibilities to monitor on-premises resources.
  • AWS License Manager provides a solution to prevent an instance from running if it would break license compliance.
  • AWS License Manager and AWS Systems Manager are straightforward to install and configure.
  • AWS License Manager and AWS Systems Manager offer good documentation.
  • AWS Systems Manager offers many free functionalities (Patch Manager, Session Manager, Insights Dashboard, and so on).
  • AWS Systems Manager offers many functionalities and is the basis of several other AWS tools, such as AWS Config, which allows you to monitor instance compliance.
Weaknesses
  • AWS License Manager is not able, by default, to monitor option usage, such as Oracle Database options (Partitioning, Active Data Guard, and so on).
  • AWS License Manager is not able to calculate Oracle Processor counts, i.e. it does not take core factors into consideration.
  • AWS Systems Manager is not able to monitor sockets or cores on on-premises resources, only vCPUs.

The post Managing Licenses with AWS License Manager appeared first on Blog dbi services.

Free Oracle Cloud: 8. Setup APEX Office Print (AOP) to export to PDF, Excel, Word, Powerpoint, HTML and Text

Dimitri Gielis - Wed, 2019-09-25 06:00
This post is part of a series of blog posts on the Best and Cheapest Oracle APEX hosting: Free Oracle Cloud.

In the previous posts we set up our Always Free Oracle Cloud machine and an Autonomous Database with Oracle Application Express (APEX). In this post I want to show you how to get started with the popular printing and reporting engine, APEX Office Print (AOP). The AOP software makes it super easy to export your data into a nice-looking PDF, a custom Excel file, a fancy PowerPoint or other output formats of your choice, just the way you want it.
AOP is used by many customers, even Oracle internally, to export their data in the format they want. The data can come from the database, a REST or GraphQL web service, or even components like the Interactive Report/Grid in Oracle APEX. Although AOP works with any technology, it is best known in the Oracle APEX community, as it's the easiest and most integrated print engine for Oracle APEX. You create a template in DOCX, XLSX, PPTX, HTML or TEXT, specify the data source, and tell AOP which format you want the output in (PDF, Excel, Word, PowerPoint, HTML, Text), and AOP will do the rest! You can find more information in this presentation about AOP.

Christina Moore of Storm Petrel wrote to me a few days ago: "We have a client in one of our systems who generates a 1,888-page invoice monthly (about 2,000 pages). The most recent invoice was $1.3M USD and took 384MB. AOP handles it brilliantly. Well done. I can’t email it to you for confidentiality reasons, but know it has multiple sections that are merged with your tool too." I love feedback on the usage of AOP and am amazed how creative people are when developing with AOP!

I use AOP in every project, because exporting/printing is sooner or later a requirement and an essential part of my Oracle APEX apps. So I thought I'd write up how to use it in the Oracle Cloud :)

We have two options: let our Oracle Autonomous Database and APEX talk to the AOP Cloud, or install an on-premises version of AOP on our own compute VM.

OK, so let's get started... Open a browser, go to https://www.apexofficeprint.com and click the SIGN UP button:


Enter your email and hit Signup:


You will receive an email. Push the Confirm your email address button:


The browser will open where you can set a password for your account:


After hitting the Set Password button, you are logged in automatically and will see a Getting Started wizard:


Follow the wizard and you are all set! It should take less than 15 minutes :)

In short this is what the wizard will tell you:

  1. Download the AOP software and unzip the file
  2. Go to APEX > SQL Workshop > SQL Scripts > Upload and Run the file aop_db_pkg.sql which you find in the db folder. This will install the AOP PL/SQL API.
  3. Go to APEX > Your APP > Shared Components > Plug-ins and Import the APEX Plug-ins you find in the apex folder.
  4. Go to APEX > Your APP > Shared Components > Component Settings > APEX Office Print (AOP) and enter your API Key which you find in the Dashboard on the AOP site: 



The Component Settings in your APEX app:


When you look closely at the previous screenshot of the Component Settings, note the AOP URL.
This URL specifies where the AOP Server is running; it is what the AOP APEX plug-in and the AOP PL/SQL API communicate with. By default it is set to the AOP Cloud, so you don't have to set up an AOP Server in your own environment.

Although the AOP Cloud is really convenient, as it's maintained and supported by the APEX Office Print team, some customers prefer to run the AOP Server on their own machine, especially when data can't leave the datacenter.

So if you read on, I will walk you through setting up the AOP Server on your own compute VM in the Oracle Cloud. Just be sure you have already installed the AOP Sample Application, plug-ins, and database objects, if needed, as instructed in the Getting Started section above.

From a Terminal connect to your Oracle Cloud VM:

ssh -i ssh_key opc@public_ip

The first thing we do is change to the root user; as we want to install some supporting objects for AOP, it will be easier to do with the root user. Alternatively, you can add sudo in front of every command.

We logged in as the opc user; to become the root user we do:

sudo su

Unlike other reporting engines, the AOP software consists of just a couple of files and is installed in no time. We will download the software into the tmp folder on our machine and unpack it into /opt/aop:

cd /tmp

wget https://objectstorage.us-ashburn-1.oraclecloud.com/n/id9u4qbhnjxj/b/apexofficeprint/o/aop_free_oracle_cloud.zip

unzip aop_free_oracle_cloud.zip -d /opt/aop

That's it!! The AOP Server is installed!



To support PDF output, AOP relies on a 3rd party converter like MS Office or LibreOffice. Here are the steps to install LibreOffice:

yum install java-1.8.0-openjdk.x86_64

yum install cups.x86_64

wget http://ftp.rz.tu-bs.de/pub/mirror/tdf/tdf-pub/libreoffice/stable/6.2.7/rpm/x86_64/LibreOffice_6.2.7_Linux_x86-64_rpm.tar.gz

tar -xvf LibreOffice_6.2.7_Linux_x86-64_rpm.tar.gz

cd /tmp/LibreOffice_6.2.7.1_Linux_x86-64_rpm/RPMS/

yum localinstall *.rpm

ln -s /opt/libreoffice6.2/program/soffice /usr/sbin/soffice

LibreOffice is now installed. To check that everything is fine, you can run "soffice --version"; you should see something like this:



AOP comes with a built-in web server. When you start AOP you can define the port where AOP will listen for incoming requests. The default port is 8010. We will need to tell Linux this port can handle HTTP and HTTPS requests:

semanage port -a -t http_port_t  -p tcp 8010

To start AOP on the default port do:

cd /
./opt/aop/v19.2.3/server/APEXOfficePrintLinux64 --enable_printlog &

You should see something like this:



Yay!! AOP is running.

AOP comes with a cool Web Editor; we will make this editor available on our domain at dgielis.com/aop/. In order to do that, we will adapt Nginx to also act as a reverse proxy for the AOP Web Editor.
Here we go:

vi  /etc/nginx/conf.d/dgielis.com.conf

And add the following section:

  location /aop/ {
    proxy_pass http://127.0.0.1:8010/;
  }

The server part of the config file becomes:
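A minimal sketch -- the server_name and the other location blocks come from the earlier posts in this series, so treat them as assumptions:

server {
  server_name dgielis.com;

  # ... existing location blocks from the earlier posts ...

  location /aop/ {
    proxy_pass http://127.0.0.1:8010/;
  }
}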



We need to reload Nginx:

nginx -s reload

And now, when we go to dgielis.com/aop/ in a browser, we see the AOP Web Editor:



You can now, for example, load a sample by clicking the "Load sample" button and selecting PDF.
Scroll down a bit and click the Process button, and a PDF is generated :)



The Web Editor is built in React.js; you can drag and drop your template and add some data to test the features of AOP. There's also a Logging tab (toggle between Editor and Logging), so you can see incoming requests, results and debug output in case of errors.


Now, if we want to tell our Oracle APEX apps to use our own AOP Server, the only thing we have to do is change the AOP URL.

In your Oracle APEX app, go to Shared Components > Component Settings > APEX Office Print (AOP) and change the AOP URL to the URL of your own Compute VM:


That's it! You are all set to print and export data within your own environment :)

I would recommend looking at the AOP Sample App, which you installed in the last step if you followed the Getting Started wizard. It shows over 500 examples of how to use AOP and its features!

Now I hope you have enough knowledge to please your customers with nice-looking PDFs, Excel files and other documents in the format they want.


In the next post we will add Object Storage to our Always Free Oracle Cloud plan, so we have a place to store files and backups.

Categories: Development

OOW19 Review: Oracle Analytics Deep Dive

Rittman Mead Consulting - Wed, 2019-09-25 05:41

In my previous blog post I outlined the global news regarding Oracle, like the Always Free Tier, the new datacenter plan and the set of new tools for Data Science. Today's post is dedicated to all the news announced regarding Oracle Analytics in any of its versions: Cloud, Server or Applications.

Oracle Analytics Server

OAS is the long-awaited replacement for OBIEE 12c on-premises and promises functional parity with OAC. The current official ETA is Fiscal Year 2020, and it will be available to customers as a free upgrade. With OAS, all customers still on-premises will get the following benefits:

  • Almost 1-to-1 feature parity with OAC
  • Complete compatibility with OAC
  • Simplified cloud migration and better support for hybrid deployments

A related announcement for on-premises customers regards licensing: there is only a single license to purchase for OAS, which includes all features; no separate option for Mobile or Self-Service Data Visualization is needed!

Oracle Analytics for Applications

This represents the new incarnation of BIApps, completely redesigned specifically for Fusion Apps. Like its predecessor, OAX (this is the acronym) is a packaged, ready-to-use solution with pre-built ETL and analytics content like the RPD, dashboards, analyses and KPIs. Under the covers it uses Oracle Autonomous Data Warehouse and Oracle Data Integrator Cloud. OAX is also extensible, by bringing additional datasets into ADW and extending the semantic model and catalog.

Oracle Analytics Cloud

Several enhancements were announced, especially during Gabby Rubin's (VP of Oracle Analytics Product Management) Strategy & Roadmap session. New features will be available in most areas of the tool, including the core of centralized reporting: the RPD.

Data Preparation

New options will be available in the Data Preparation/Enrichment phase, such as:

  • Custom enrichments based on a pre-existing set of values, e.g. enriching PRODUCT_ID with fields coming from a standard product dimension. This is an interesting idea to enable standardization of dimensions across reporting without forcing people to write SQL or to know where the standard information comes from.
  • Forced enrichments/masking: as administrators, we could enforce transformations like obfuscating credit card fields that may contain sensitive data.
Natural Language Generation

The Natural Language view is already present in the current version of OAC; the plan is to enhance this visualization by adding more options in the settings panel for grouping and trend analysis.


Spatial Analytics in OAC

A few weeks ago I wrote about Oracle Spatial Studio, a tool designed to provide advanced visual spatial analytics. That tool will remain and progress over time; OAC will not cover all the specific use cases of Spatial Studio. However, OAC will enhance its spatial capabilities, like:

  • Providing accurate information about row geo-location: e.g. how many rows were correctly located, how many errors occurred, and menus to fix value-to-location associations.
  • Providing spatial functions in the front-end: an end user will easily be able to calculate the distance between points on a map by writing a simple Logical SQL statement. This option will probably appear in the RPD first (check the Twitter thread below).

Yeah!Now you can natively perform #spatialanalytics on #oac! #geospatial functions are available in the RPD and can be exposed to #OracleDataVisualization! pic.twitter.com/g5q3Lf9CiG

— Francesco Tisiot (@FTisiot) September 16, 2019

As you can see, calculating the distance will be just a matter of having the correct dataset and writing a GeometryDistance function.

Connectivity and Security

One of OAC's missions is to become the analytics platform on top of any type of datasource. The plan is to expand the list of connectors and security/configuration options, like SSL or Kerberos. There is also a roadmap to extend Data Gateway capabilities to query non-Oracle databases.

Modeling capabilities

In OAC we were used to either the classic RPD approach or self-service Data Sets. The future holds news for both approaches:

  • A new cloud web-based modeler with the objective of functional parity with the Admin Tool, so capable of handling more complex designs than the current light data modeler. I believe this will also be an effort to adapt the RPD development process to current standards of concurrent development, versioning and storage format.
  • A new self-service data model solution to build light self-service models, allowing end users to evolve datasets into proper models that are sharable and optimized for reporting.

I like the idea of allowing both a top-down (centralized) and a bottom-up (self-service) approach to data modeling. This gives clients flexibility in the analytical approach while still allowing them to enforce centralized rules (e.g. a unique source of truth) when needed.

Unified User Experience and Layout Customizations

As of now the old "Answers and Dashboards" and the new "Data Visualization Projects" were almost completely separated products with each one having its own home page and layout. In the next releases we'll see that the two worlds will start combining, with a unique home and a similar look and feel.

In other news, highly requested by end users: the possibility to customize almost any option of the layout, from font type and size to the colors of any object visible in a project.

Machine Learning Integration

As discussed in the previous OOW review post, in the future OAC will be able to use models built in other tools, like Oracle Machine Learning in Autonomous Data Warehouse or Oracle Data Science. This provides an end-to-end data science story, from data analyst to data scientist, all with a simple, secure, highly configurable and performant toolset.


As you can see, there is a lot of news coming in various aspects of the tool: on-premises functional parity, a new packaged solution for Fusion Apps, and many features enhancing OAC functionality and customization options.

What do you think? Is this the right direction? Do you feel there is something missing?

Categories: BI & Warehousing

MobaXterm 12.2

Tim Hall - Wed, 2019-09-25 02:06

In another “the rest of the world ceases to exist in the lead up to OpenWorld” moment, I missed the release of MobaxTerm 12.2.

The downloads and changelog are in the usual places.

For Windows users who, like me, spend most of the day connecting to machines via SSH, this is the best tool I’ve found.

Cheers

Tim…


500 blog posts (in 12 years)

Dietrich Schroff - Tue, 2019-09-24 15:30
Last week I published my 500th post - not really "run like hell", but I think better than nothing.
Here are some statistics:




I think I will continue for another 500. The exciting question is whether I will find enough topics that I am interested in...

The Product Concept: What does it mean in 2019?

VitalSoftTech - Tue, 2019-09-24 10:41

If you’ve ever taken a marketing course, you’ve likely heard about the “product concept.” However, there may be many who are unaware of what it is and what it entails. The advertising lexicon is getting broader by the day, and with each passing minute, more words for different concepts are being introduced. Which brings us […]

The post The Product Concept: What does it mean in 2019? appeared first on VitalSoftTech.

Categories: DBA Blogs

PFCLScan - Version 3.0

Pete Finnigan - Tue, 2019-09-24 09:26
We are very excited to announce that we are currently working on version 3.0 of PFCLScan, our flagship database security scanner for the Oracle database. We will be ready for sale in September and this development is going really....[Read More]

Posted by Pete On 11/07/19 At 03:33 PM

Categories: Security Blogs

Key Takeaways From Oracle OpenWorld 2019

Online Apps DBA - Tue, 2019-09-24 02:34

Oracle OpenWorld 2019 was amazing! OOW19 Keynote Highlights: ▪World’s First Autonomous Operating System ▪Always FREE Oracle Cloud Services ▪Oracle Data Safe ▪Exadata Cloud@Customer On Gen2 Cloud (OCI) ▪X8M: New Version of Exadata ▪Multi-Cloud Innovation [Oracle+Microsoft] ▪VMware Solution on Oracle Cloud ▪36 Cloud Region Planned by 2020 ▪Desupport of non-CDBs with 20c Check more in detail […]

The post Key Takeaways From Oracle OpenWorld 2019 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Basic VMware Harbor Registry usage for Pivotal Container Service (PKS)

Pas Apicella - Tue, 2019-09-24 01:25
VMware Harbor Registry is an enterprise-class registry server that stores and distributes container images. Harbor allows you to store and manage images for use with Enterprise Pivotal Container Service (Enterprise PKS).

In this simple example we show the minimum you need to get an image from Harbor deployed onto your PKS cluster. First, we need the following to be able to run this basic demo.

Required Steps

1. PKS installed with Harbor Registry tile added as shown below


2. VMware Harbor Registry integrated with Enterprise PKS as per the link below. The most important step is "Import the CA Certificate Used to Sign the Harbor Certificate and Key to BOSH". You must complete that prior to creating a PKS cluster:

https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html

3. A PKS cluster created. You must have completed step #2 before you create the cluster

https://docs.pivotal.io/pks/1-4/create-cluster.html

$ pks cluster oranges

Name:                     oranges
Plan Name:                small
UUID:                     21998d0d-b9f8-437c-850c-6ee0ed33d781
Last Action:              CREATE
Last Action State:        succeeded
Last Action Description:  Instance provisioning completed
Kubernetes Master Host:   oranges.run.yyyy.bbbb.pivotal.io
Kubernetes Master Port:   8443
Worker Nodes:             4
Kubernetes Master IP(s):  1.1.1.1
Network Profile Name:

4. Docker Desktop Installed on your local machine



Steps

1. First, let's log into Harbor and create a new project. Make sure you record the username and password you have assigned for the project. In this example I make the project public.




Details

  • Project Name: cto_apj
  • Username: pas
  • Password: ****

2. Next, in order to connect to our registry from our local laptop, we need to trust its self-signed certificate.

The VMware Harbor registry isn't running on a public domain and is using a self-signed certificate, so we need to access this registry with the self-signed certificate trusted on our Mac OS X clients, given I am using Docker for Mac. This link shows how to add self-signed registry certs to Linux and Mac clients:

https://blog.container-solutions.com/adding-self-signed-registry-certs-docker-mac

You can download the self-signed cert from Pivotal Ops Manager as shown below.


With all that in place, a command as follows is all I need to run:

$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ca.crt

3. Now let's log in to the registry using a command as follows:

$ docker login harbor.haas-bbb.yyyy.pivotal.io -u pas
Password:
Login Succeeded

4. I have an image sitting on Docker Hub, so let's tag it and push it to our VMware Harbor registry as shown below:

 $ docker tag pasapples/customer-api:latest harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest
 $ docker push harbor.haas-bbb.yyyy.io/cto_apj/customer-api:latest


5. Now let's create a new secret for accessing the container registry:

$ kubectl create secret docker-registry regcred --docker-server=harbor.haas-bbb.yyyy.io --docker-username=pas --docker-password=**** --docker-email=papicella@pivotal.io
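Note: because the cto_apj project was made public in step 1, the deployment below can pull the image without referencing this secret. For a private project you would add it to the pod spec, for example:

      imagePullSecrets:
        - name: regcred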

6. Now let's deploy this image to our PKS cluster using a deployment YAML file as follows:

customer-api.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: customer-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: customer-api
    spec:
      containers:
        - name: customer-api
          image: harbor.haas-206.pez.pivotal.io/cto_apj/customer-api:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: customer-api-service
  labels:
    name: customer-api-service
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: customer-api
  type: LoadBalancer

7. Deploy as follows

$ kubectl create -f customer-api.yaml

8. You should see the pod and service running as follows:

$ kubectl get pods | grep customer-api
customer-api-7b8fcd5778-czh46                    1/1     Running   0          58s

$ kubectl get svc | grep customer-api
customer-api-service            LoadBalancer   10.100.2.2    10.195.1.1.80.5   80:31156/TCP 
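Once the LoadBalancer external IP is assigned, the app should respond on port 80 (the exact endpoint depends on what customer-api exposes), e.g.:

$ curl http://<EXTERNAL-IP>/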


More Information

PKS Release Notes 1.4
https://docs.pivotal.io/pks/1-4/release-notes.html

VMware Harbor Registry
https://docs.vmware.com/en/VMware-Enterprise-PKS/1.4/vmware-harbor-registry/GUID-index.html

Categories: Fusion Middleware

Subscribe to Oracle FAQ aggregator