OTN TechBlog


Creating An ATP Instance With The OCI Service Broker

Mon, 2019-06-10 08:03

We recently announced the release of the OCI Service Broker for Kubernetes, an implementation of the Open Service Broker API that streamlines the process of provisioning and binding to services that your cloud native applications depend on.

The Kubernetes documentation lays out the following use case for the Service Catalog API:

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.

Put simply, the Service Catalog API lets you manage services within Kubernetes that are not deployed within Kubernetes. Things like message queues, object storage, and databases can be provisioned with a set of Kubernetes configuration files, without any knowledge of the underlying APIs or tools used to create those instances, thus simplifying the deployment and making it portable to virtually any Kubernetes cluster.

The OCI Service Broker adapters currently available include:

  • Autonomous Transaction Processing (ATP)
  • Autonomous Data Warehouse (ADW)
  • Object Storage
  • Streaming

I won't go into too much detail in this post about the feature, as the introduction post and GitHub documentation do a great job of explaining service brokers and the problems that they solve. Rather, I'll focus on using the OCI Service Broker to provision an ATP instance and deploy a container which has access to the ATP credentials and wallet.  

To get started, you'll first have to follow the installation instructions on GitHub. At a high level, the process involves:

  1. Deploy the Kubernetes Service Catalog client to the OKE cluster
  2. Install the svcat CLI tool
  3. Deploy the OCI Service Broker
  4. Create a Kubernetes Secret containing OCI credentials
  5. Configure Service Broker with TLS
  6. Configure RBAC (Role Based Access Control) permissions
  7. Register the OCI Service Broker

Once you've installed and registered the service broker, you're ready to use the ATP service plan to provision an ATP instance. I'll go into details below, but the overview of the process looks like so:

  1. Create a Kubernetes secret with a new admin and wallet password (in JSON format)
  2. Create a YAML configuration for the ATP Service Instance
  3. Deploy the Service Instance
  4. Create a YAML config for the ATP Service Binding
  5. Deploy the Service Binding, which results in the creation of a new Kubernetes secret containing the wallet contents
  6. Create a Kubernetes secret for Microservice deployment use containing the admin password and the wallet password (in plain text format)
  7. Create a YAML config for the Microservice deployment which uses an initContainer to decode the wallet secrets (due to a bug which double encodes them) and mounts the wallet contents as a volume

Following that overview, let's take a look at a detailed example. The first thing we'll have to do is make sure that the user we're using with the OCI Service Broker has the proper permissions.  If you're using a user that is a member of the group devops then you would make sure that you have a policy in place that looks like this:

Allow group devops to manage autonomous-database in compartment [COMPARTMENT_NAME]

The next step is to create a secret that will be used to set some passwords during ATP instance creation.  Create a file called atp-secret.yaml and populate it similarly to the example below.  The values for password and walletPassword must be in the format of a JSON object as shown in the comments inline below, and must be base64 encoded.  You can use an online tool for the base64 encoding, or use the command line if you're on a Unix system (echo '{"password":"Passw0rd123456"}' | base64).
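The original gist is not reproduced here, so the following is a sketch of what atp-secret.yaml might look like. The key names (password, walletPassword) follow the description above; the placeholders stand in for your own base64-encoded JSON values, so check the OCI Service Broker documentation for the exact schema of the version you install:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
data:
  # base64 of '{"password":"<your admin password>"}'
  password: <BASE64_ENCODED_JSON>
  # base64 of '{"walletPassword":"<your wallet password>"}'
  walletPassword: <BASE64_ENCODED_JSON>
```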

Now create the secret via: kubectl create -f atp-secret.yaml.

Next, create a file called atp-instance.yaml and populate it as follows (updating name, compartmentId, dbName, cpuCount, storageSizeTBs, and licenseType as necessary). The parameters are detailed in the full documentation (link below). Note that we're referring to the previously created secret in this YAML file.
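Since the embedded gist is missing, here is a hedged sketch of atp-instance.yaml. The service class and plan names (atp-service, standard) and the parameter keys are taken from the OCI Service Broker documentation and may differ by version; the compartment OCID is a placeholder:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: atp-demo
spec:
  clusterServiceClassExternalName: atp-service
  clusterServicePlanExternalName: standard
  parameters:
    name: atpdemo
    compartmentId: ocid1.compartment.oc1..[COMPARTMENT_OCID]
    dbName: atpdemo
    cpuCount: 1
    storageSizeTBs: 1
    licenseType: BYOL
  # Pull the admin password from the secret created earlier
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: password
```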

Create the instance with: kubectl create -f atp-instance.yaml. This will take a bit of time, but in about 15 minutes or less your instance will be up and running. You can check the status via the OCI console UI, or with the command: svcat get instances which will return a status of "ready" when the instance has been provisioned.

Now that the instance has been provisioned, we can create a binding.  Create a file called atp-binding.yaml and populate it as such:
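The binding gist is not shown above, so here is an illustrative atp-binding.yaml, assuming the instance was named atp-demo; the exact spec fields should be verified against the service broker documentation:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: atp-demo-binding
spec:
  instanceRef:
    name: atp-demo
  # Pass the wallet password from the secret created in step 1
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: walletPassword
```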

Note that we're once again using a value from the initial secret that we created in step 1. Apply the binding with: kubectl create -f atp-binding.yaml and check the binding status with svcat get bindings, looking again for a status of "ready". Once it's ready, you'll be able to view the secret that was created by the binding via: kubectl get secrets atp-demo-binding -o yaml where the secret name matches the 'name' value used in atp-binding.yaml. The secret will look similar to the following output:
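The actual output gist is not included above; as an illustrative sketch, the binding secret holds the standard ATP wallet files as base64-encoded data (keys and values will vary with your instance):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: atp-demo-binding
type: Opaque
data:
  cwallet.sso: <base64 data>
  ewallet.p12: <base64 data>
  keystore.jks: <base64 data>
  ojdbc.properties: <base64 data>
  sqlnet.ora: <base64 data>
  tnsnames.ora: <base64 data>
  truststore.jks: <base64 data>
  user_name: <base64 data>
```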

This secret contains the contents of your ATP instance wallet and next we'll mount these as a volume inside of the application deployment.  Let's create a final YAML file called atp-demo.yaml and populate it like below.  Note, there is currently a bug in the service broker that double encodes the secrets, so it's currently necessary to use an initContainer to get the values properly decoded.
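With the original gist unavailable, here is a hedged sketch of atp-demo.yaml based on the description above. The secret name atp-demo-secret (the plain-text secret from step 6) and the container image choices are assumptions for illustration; the initContainer works around the double-encoding bug by decoding each wallet file once more into a shared volume:

```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: atp-demo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: atp-demo
    spec:
      # Workaround for the double-encoding bug: decode each wallet file again
      initContainers:
      - name: decode-wallet
        image: oraclelinux:7-slim
        command: ["bash", "-c",
          "for f in /tmp/wallet-raw/*; do base64 --decode < $f > /wallet/$(basename $f); done"]
        volumeMounts:
        - name: wallet-raw
          mountPath: /tmp/wallet-raw
        - name: wallet
          mountPath: /wallet
      containers:
      - name: atp-demo
        image: alpine:latest
        command: ["/bin/sh", "-c", "sleep 3600"]
        env:
        - name: DB_ADMIN_USER
          value: admin
        - name: DB_ADMIN_PWD
          valueFrom:
            secretKeyRef:
              name: atp-demo-secret   # hypothetical plain-text secret from step 6
              key: password
        - name: WALLET_PWD
          valueFrom:
            secretKeyRef:
              name: atp-demo-secret
              key: walletPassword
        volumeMounts:
        - name: wallet
          mountPath: /db-demo/creds
      volumes:
      - name: wallet-raw
        secret:
          secretName: atp-demo-binding
      - name: wallet
        emptyDir: {}
```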

Here we're creating a basic Alpine Linux instance just to test the service instance. Your application deployment would use a Docker image with your application, but the format and premise would be nearly identical to this. Create the deployment with kubectl create -f atp-demo.yaml and, once the pod is in a "ready" state, we can launch a terminal and test things out a bit:

Note that we have 3 environment variables available in the instance:  DB_ADMIN_USER, DB_ADMIN_PWD and WALLET_PWD.  We also have a volume available at /db-demo/creds containing all of our wallet contents that we need to make a connection to the new ATP instance.

Check out the full instructions for more information or background on the ATP service broker. The ability to bind to an existing ATP instance is scheduled as an enhancement to the service broker in the near future, and some other exciting features are planned.

How to Install Oracle Java in Oracle Cloud Infrastructure

Fri, 2019-06-07 11:10
Oracle Java Support and Updates Included in Oracle Cloud Infrastructure

We recently announced that Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost.

In this blog post I show how to install Oracle Java on Oracle Linux running in an OCI compute shape, using RPMs from the yum servers available within OCI.

Installing Oracle Java

The Oracle Java RPMs are in the ol7_oci_included repository on Oracle Linux yum server accessible from within OCI.

To enable this repository:

$ sudo yum install -y --enablerepo=ol7_ociyum_config oci-included-release-el7

As of this writing, the repository contains Oracle Java 8, 11, and 12.

$ yum list jdk*
Loaded plugins: langpacks, ulninfo
Available Packages
jdk-11.0.3.x86_64    2000:11.0.3-ga      ol7_oci_included
jdk-12.0.1.x86_64    2000:12.0.1-ga      ol7_oci_included
jdk1.8.x86_64        2000:1.8.0_211-fcs  ol7_oci_included

To install Oracle Java 12, version 12.0.1:

$ sudo yum install jdk-12.0.1

To confirm the Java version:

$ java -version
java version "12.0.1" 2019-04-16
Java(TM) SE Runtime Environment (build 12.0.1+12)
Java HotSpot(TM) 64-Bit Server VM (build 12.0.1+12, mixed mode, sharing)

Multiple JDK versions and setting the default

If you install multiple versions of the JDK, you may want to set the default version using alternatives. For example, let’s first install Oracle Java 8:

$ sudo yum install -y jdk1.8

The alternatives command shows that two programs provide java:

$ sudo alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/java/jdk-12.0.1/bin/java
   2           /usr/java/jdk1.8.0_211-amd64/jre/bin/java

Choosing selection 2 sets the default to JDK 1.8 (Oracle Java 8):

$ java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Conclusion

Oracle Cloud Infrastructure includes Oracle Java, with support and updates, at no additional cost. By providing Oracle Java RPMs on OCI's yum servers, installation is greatly simplified.

Build and Deploy a Golang Application Using Oracle Developer Cloud

Fri, 2019-06-07 10:36

Golang recently became a trending programming language in the developer community. This blog will help you develop, build, and deploy your first Golang-based REST application using Docker and Kubernetes on Oracle Developer Cloud.

Before getting our first Golang application up and running, let’s examine Golang a little.

What is Golang?

Golang, or Go for short, is an open source, statically typed, compiled, general-purpose programming language. It is fast and supports concurrency and cross-platform compilation. To learn more about Go, visit the following link:

https://golang.org/

Let’s get set and Go

To develop, build, and deploy a Golang-based application, you’ll need to create the following files on your machine:

  • main.go - Contains the Go application code and the listener
  • Dockerfile - Builds the Docker image for the Go application code
  • gorest.yml – A YAML file that deploys the Docker image of the Go application on Oracle Container Engine for Kubernetes

Here are the code snippets for the files mentioned above.

main.go

This file imports the required packages and defines the handler() function for the request, which is called by the main() function, where the http listener port is defined. As the name itself suggests, the errorHandler() function comes into play when an error occurs.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello %s!", r.URL.Path[1:])
	fmt.Println("RESTfulServ. on:8093, Controller:", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Starting Restful services...")
	fmt.Println("Using port:8093")
	err := http.ListenAndServe(":8093", nil)
	log.Print(err)
	errorHandler(err)
}

func errorHandler(err error) {
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

Dockerfile

This Dockerfile pulls the latest Go Docker image from DockerHub, creates an app folder in the container, and then adds all the application files on the build machine (from the Git repository clone) to the app folder in the container. Next, it makes the app directory the working directory and runs the go build command to build the Go code; the CMD instruction executes the resulting main binary.

 

FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]

 

gorest.yml

The script shown below defines the Kubernetes service and deployment, including the respective names, ports, and Docker image that will be downloaded from the DockerHub registry and deployed on the Kubernetes cluster. In the script, we defined the service and deployment as gorest-se, the port as 8093, and the container image as <DockerHub username>/gorest:1.0.

kind: Service
apiVersion: v1
metadata:
  name: gorest-se
  labels:
    app: gorest-se
spec:
  type: NodePort
  selector:
    app: gorest-se
  ports:
  - port: 8093
    targetPort: 8093
    name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gorest-se
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gorest-se
        version: v1
    spec:
      containers:
      - name: gorest-se
        image: abhinavshroff/gorest:1.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8093
---

 

Create a Git repository in the Oracle Developer Cloud Project

To create a Git repository in the Developer Cloud project, navigate to the Project Home page and then click the +Create Repository button, found on the right-hand side of the page. In the New Repository dialog, enter GoREST for the repository Name and select Empty Repository for the Initial Content option, as shown. Then, click the Create button.

 

You should now see the GoREST.git repository created in the Repositories tab on the Project Home page. Click the Clone dropdown and then click the copy icon, as shown in the screen shot, to copy the Git repository HTTPS URL. Keep this URL handy.

 

Push the code to the Git Repository

Now, in your command prompt window, navigate to the GoREST application folder and execute the following Git commands to push the application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands. Also, you’ll be using the Git URL that you just copied from the Repositories tab, as previously mentioned.

git init

git add --all

git commit -m "First commit"

git remote add origin <git repository url>

git push origin master

Your GoREST.git repository should have the structure shown below.

 

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button. 

In the New Job dialog, enter BuildGoRESTAppl for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build the Docker image for the Go REST application code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/gorest:1.0

In the Docker push step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. Then click the Save button.

To create another build job for deployment, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployGoRESTAppl for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildGoRESTAppl build job to the Kubernetes cluster.

The first thing you’ll do to configure the DeployGoRESTAppl build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, select Unix Shell from the Add Step dropdown and, in the Unix Shell build step, enter the following script.

 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f gorest.yml
sleep 30
kubectl get services gorest-se
kubectl get pods
kubectl describe pods

 

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

In the Create Pipeline dialog, you can enter the Name as GoApplPipeline. Then click the Create button.

 

Drag and drop the BuildGoRESTAppl and DeployGoRESTAppl build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Then click on the Save button.

 

Click the Build button, as shown, to run the build pipeline. The BuildGoRESTAppl build job will be executed first and, if it is successful, then the DeployGoRESTAppl build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployGoRESTAppl build job.  Then click the Build Log icon for the executed build.

 

You should see messages that the service and deployment were successfully created.  Search the log for the gorest-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port to access the microservice, as shown below.

 

Enter the IP address and port that you retrieved from the log, into the browser using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/<your name>

You should see the “Hello <your name>!” message in your browser.

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Golang-based REST applications and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Get Up to Speed with Oracle ACEs on the Kscope Database Track

Thu, 2019-06-06 08:03
All Aboard for Database Expertise...

This second post in a series on Kscope 2019 sessions presented by members of the Oracle ACE program focuses on the database track. Kscope 2019 arrives on time, June 23-27 in Seattle. Click here for information and registration.

Click the session titles below for time, date, and location information for each session.

We'll cover sessions in the other tracks in upcoming posts. Stay tuned!

 

Oracle ACE Directors

Alex Nuijten
Director, Senior Oracle Developer, allAPEX
Oosterhout, Netherlands

 

Debra Lilley
Associate Director, Accenture
Belfast, United Kingdom

 

Dimitri Gielis
Director, APEX R&D
Leuven, Belgium

 

Francisco Munoz Alvarez
CEO, CloudDB
Sydney, Australia

 

Heli Helskyaho
CEO, Miracle Finland Oy
Finland

 

Jim Czuprynski
Senior Enterprise Data Architect, Viscosity North America
Bartlett, Illinois

 

Kim Berg Hansen
Senior Consultant, Trivadis
Funen, Denmark

 

Martin Giffy D’Souza
Director of Innovation, Insum Solutions
Calgary, Alberta, Canada

 

Mia Urman
CEO, AuraPlayer Ltd
Brookline, Massachusetts

 

Patrick Barel
Sr. Oracle Developer, Alliander via Qualogy
Haarlem, Netherlands

 

Peter Koletzke
Technical Director, Principal Instructor
Independent Consultant

 

Richard Niemiec
Chief Innovation Officer, Viscosity North America
Chicago, Illinois

 
Oracle ACEs

Dani Schnider
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Holger Friedrich
CTO, sumIT AG
Zurich, Switzerland

 

Liron Amitzi
Senior Database Consultant, Self Employed
Vancouver, Canada

 

Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zurich, Switzerland

 

Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany

 
Oracle ACE Associates

Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin

 
Additional Resources

Simplifying Troubleshooting Process for Oracle Service Cloud Admins and Agents (BUI Version)

Wed, 2019-06-05 11:39
What is Troubleshoot Extension?

If you are an Oracle Service Cloud administrator, you are likely the first person in your company your agents will go to when they are experiencing some error with their Oracle Service Cloud. Troubleshooting errors can quickly spawn into a bigger investigative effort to identify the root cause (e.g., network issue, configuration issue, training, defect), but the troubleshooting process usually starts with getting the right details about the error from your agents. Your agents are already frustrated because an error could be impeding their work and impacting their metrics. And now you’re asking them to provide detailed information on the error: steps to reproduce this issue, workstation information or data traffic, much of which they aren’t familiar with.

Recognizing the dynamics of this common scenario, we decided to walk a mile in both your shoes, as the administrator, and your agents’ shoes. As a result of this experience, we came up with the idea of automating the process for gathering the needed information from agents instead of requiring the administrators and agents to try to overcome the current challenges. We developed sample code named “Troubleshoot Extension” for the Oracle Service Cloud Browser UI (similar to what we did before for the Agent Desktop) to address this need.

The “Troubleshoot Extension Sample Code” was created to automatically capture information, such as the browser navigator, basic agent information, and console log, error, warn, and debug information in one fell swoop, instead of requiring agents to install or use different tools outside of Oracle Service Cloud. All the agents need to do is push the "start" button (located in the status bar) and push "stop" when they have completed all the steps to reproduce the error.

The sample code is available for download at the bottom of the article, but before going further, I'd like to encourage you to understand the importance of the web console for troubleshooting a web-based application, so that the approach used here makes more sense.

Why are we sharing this sample code?
  • First, because we believe that by automating this process, we can simplify your communication with agents experiencing errors and accelerate the path to a solution.
  • Second, because agents don't need to understand how to gather technical information for troubleshooting.
  • Lastly, because through this sample code, we can achieve:
    • sharing complex sample code that you can apply and reuse for other needs;
    • sharing a starting point that you can enhance to adapt this troubleshooting tool to your requirements.
How does Troubleshoot Extension work?

Let’s take a more in-depth look at what the sample code delivers. The sample code implements an extension StatusBar with a start and stop button, plus a timer that shows how long your agent has been capturing the steps to reproduce.

By clicking on the start button, the troubleshoot extension will automatically set the log level (see developer tools logs for more information). Everything that is registered in the web console will be captured.

Once you have finished capturing your steps to reproduce, click the stop button; the Troubleshoot Extension compiles all the web console information and presents it to you in a modal window.

**This extension is not compatible with IE11. 

The modal window presents a friendly version of the captured result, but you can save the result by clicking the download button at the top. The download button creates a text file with the captured information, which you can share with your technical team to help with the investigation.

The captured information is read-only, but there is an additional field at the top where your agent can add more information, such as steps to reproduce or any other comment.

How to install?

1. Upload the extension code as a BUI extension.

  1. Download the extension file.
  2. Open your Admin console and upload the zip file as Agent Browser UI Extensions.
  3. Name your extension. E.g.: Troubleshoot Extension. 
  4. Select "Console" as an extension type.
  5. Select TsExnteion/ViewModel/js/ts-statusbar.js as the Init File.
  6. Go to Profiles and assign this extension to the profiles that should use it.

2. Create custom configuration settings.

  1. Go to Site Configuration > Configuration Settings.
  2. Click on New > Text to create a custom configuration setting.
  3. The Key name is "CUSTOM_CFG_TS".
  4. The site value is the following JSON: {"debugLevel":3, "performance": true}

Ultimately, this solution should simplify your communication with agents experiencing errors, accelerate troubleshooting by having the required information in one easy step, and save everyone time and frustration that surrounds these issues.

The source code is available here for download, and you can take advantage of it to build a better troubleshooting model integrated into your Oracle Service Cloud.

If you are a developer and want to contribute to this sample code, you are welcome to join us on GitHub.

Simplifying Troubleshooting Process for Oracle Service Cloud Admins and Agents (.NET Add-In Version)

Wed, 2019-06-05 11:31

If you are an Oracle Service Cloud administrator, you are likely the first person in your company your agents will go to when they are experiencing some error with their agent desktop. Troubleshooting errors can quickly spawn into a bigger investigative effort to identify the root cause (e.g., network issue, configuration issue, training, defect), but the troubleshooting process usually starts with getting the right details about the error from your agents. Your agents are already frustrated because an error could be impeding their work and impacting their metrics. And now you’re asking them to provide detailed information on the error: steps to reproduce this issue, workstation information or data traffic, much of which they aren’t familiar with.

Recognizing the dynamics of this common scenario, we decided to walk a mile in both your shoes, as the administrator, and your agents’ shoes. As a result of this experience, we came up with the idea of automating the process for gathering the needed information from agents instead of requiring the administrators and agents to try to overcome the current challenges. We developed sample code named “Troubleshoot Add-In” for the Oracle Service Cloud Agent Desktop (a Browser UI version is covered in a separate post) to address this need.

The “Troubleshoot Add-In Sample Code” was created to automatically capture information, such as steps to reproduce and workstation information, in one fell swoop, instead of requiring agents to install or use different tools outside of Oracle Service Cloud. All the agents need to do is push the "start" button (located in the status bar) and push "stop" when they have completed all the steps to reproduce the error.

Take a look at these two scenarios and see how the “Troubleshoot Add-In” can improve communication between administrators and agents who are encountering an issue.

Let’s take a more in-depth look at what the sample code delivers. The sample code implements an extension StatusBar with a start and stop button, plus a timer that shows how long your agent has been capturing the steps to reproduce.

How Troubleshoot Add-In Sample Code Works

By clicking the start button, a friendly loading form shows up. You can personalize its message by changing a ServerConfigProperty used in this sample code. At this moment, the sample code starts a standard Windows application called PSR (Problem Steps Recorder). As sample code, it is limited to using a standard Windows application; from here, I would encourage you as a developer to expand it for your needs.

For instance, if you want to eliminate the need to install Fiddler, you can use FiddlerCore and embed its .dll to capture data traffic along with the steps to reproduce. Check out the Fiddler demo code and try it yourself; I am pretty sure it will be useful.

Once you have finished capturing your steps to reproduce, click the stop button; the sample code closes PSR and starts capturing workstation information such as the .NET version, Windows version, and capacity. You can also run OSCinfo.bat, as described in Answer 2412, if you need to capture more information such as pings and traces; see the ServerConfigProperty options.

Lastly, a message pops up to inform the agent where the results were saved. The sample code compiles and compresses all the files resulting from PSR and the workstation information into a local folder, or whatever folder you have specified in a ServerConfigProperty.

Okay, it's true: I like to use ServerConfigProperty, and there are more of them. With them, you can set up your add-in without changing your code.

Ultimately, this solution should simplify your communication with agents experiencing errors, accelerate troubleshooting by having the required information in one easy step, and save everyone time and frustration that surrounds these issues.

The source code is available here for download, and you can take advantage of it to build a better troubleshooting model integrated into your Oracle Service Cloud.

If you are a developer and want to contribute to this sample code, you are welcome to join us on GitHub.

Kscope 2019 APEX Track Sessions by Oracle ACE Program Members

Tue, 2019-06-04 05:00
A Towering Achievement...

On June 23rd ODTUG Kscope 2019 kicks off in Seattle. Among the vast army of session presenters for the event you'll find more than 50 active members of the Oracle ACE Program, and they'll be presenting more than 100 sessions across all the various tracks. This post is the first of a series that will highlight those ACE sessions. This post focuses on sessions on the APEX track.

For the details on each session, including time, location, and session descriptions, just click the title link.

Oracle ACE Directors

Dimitri Gielis
Director, APEX R&D
Leuven, Belgium

 

Francis Mignault
CTO/VP Technologies, Insum Solutions
Montreal, Canada

 

John Scott
Founder/Director, APEX Evangelists
Leeds, United Kingdom

 

Martin Giffy D’Souza
Director of Innovation, Insum Solutions
Calgary, Alberta, Canada

 

Niels de Bruijn
Business Unit Manager, MT AG
Cologne, Germany

 

Oracle ACE Director Peter Raganitsch
CEO, FOEX GmbH
Austria

 

Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands

 

Oracle ACE Director Scott Spendolini
Vice President, Viscosity North America
Austin, Texas

 
Oracle ACEs

Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands

 

Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda Area, Netherlands

 

Oracle ACE Jorge Rimblas
Senior APEX Consultant, Insum
Minneapolis-St. Paul, Minnesota

 

Oracle ACE Scott Wesley
Systems Consultant/Trainer, SAGE Computing Services
Perth, Australia

 

Oracle ACE Vincent Morneau
Front-End Lead, Insum
Montreal, Canada

 

Oracle ACE Niall Mc Phillips
Owner/CEO, Long Acre sàrl

 
Oracle ACE Associates

Oracle ACE Associate Kai Donato
Senior Consultant, MT AG
Cologne, Germany

 

Oracle ACE Associate Lino Schildenfeld
NZ/AU Director, APEX R&D
Australia/New Zealand

 

Oracle ACE Associate Moritz Klein
Principal APEX Consultant, MT AG
Frankfurt, Germany

 

Oracle ACE Associate Adrian Png
Senior Consultant/Database Administrator, Insum
Canada

 
Additional Resources

Latest Blog Posts from ACE Program Members - May 19-25, 2019

Thu, 2019-05-30 05:00
How it all stacks up...

...depends on who's doing the stacking. The blog posts listed below, published by members of the Oracle ACE Program between May 19th and 25th, reveal a bit about how these people get things done.

 

Oracle ACE Director

Oracle ACE Director Edward Whalen
Chief Technologist, Performance Tuning Corporation
Houston, Texas

 

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 

Oracle ACE Director Syed Jaffar Hussain
Freelance Consultant, Architect, Trainer
Saudi Arabia

 
Oracle ACE

Oracle ACE Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Satoshi Mitani
Database Platform Technical Lead, Yahoo! JAPAN
Tokyo, Japan

 

Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

 

Oracle ACE Stefan Oehrli
Platform Architect, Trivadis AG
Zurich, Switzerland

 
Oracle ACE Associate

Oracle ACE Associate Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden

 

Oracle ACE Associate Tercio Costa
Data Analyst, Unimed João Pessoa
João Pessoa, Paraíba, Brazil

 

Oracle ACE Associate Wadhah Daouehi
Manager/Consultant/Trainer, ORANUX
Riadh city, Tunisia

 

 

Additional Resources

Build and Deploy a Helidon Microservice Using Oracle Developer Cloud

Wed, 2019-05-29 19:53

Project Helidon was recently introduced by Oracle. It provides a new way to write microservices. This blog will help you understand how to use Oracle Developer Cloud to build and deploy your first Helidon-based microservice on Oracle Container Engine for Kubernetes.

Before we begin, let’s examine a few things:

What is Helidon?

Project Helidon is a set of Java libraries for writing microservices.  Helidon supports two programming models: Helidon MP, based on MicroProfile 1.2, and Helidon SE, a small, functional-style API.

Regardless of which model you choose, you’ll be writing an application that is a Java SE-based program. Helidon is open source and the code is available on GitHub, where you can read and learn more about Project Helidon.

Get Started

Helidon doesn’t offer standalone downloads. Instead, you consume its Maven releases, which means you’ll use a Maven archetype to get started with your Helidon microservice project. In this blog, we’ll be using the Helidon SE programming model.

The following basic prerequisites should be installed on your machine to develop with Helidon:

  • Maven
  • Java 8
  • Git CLI (for pushing code to the Oracle Developer Cloud Git repository)

Download the Sample Microservice Project for Helidon with Maven

Open a command prompt (or terminal), go to (or create) the directory or folder where you’d like to create the sample Helidon microservice project, and execute the following Maven command.

mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.1.0 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se \
    -Dpackage=io.helidon.examples.quickstart.se

 

When executed, this Maven command will create the helidon-quickstart-se folder.

The microservice application code, the build files, and the deployment files all reside in the helidon-quickstart-se folder.

 

These are the files and folder(s):

  • src folder –  Contains the microservice application source code
  • app.yaml – Describes the Kubernetes deployment
  • Dockerfile – Provides instructions for building the Docker image
  • Dockerfile.native – Provides instructions for building the Docker image using GraalVM
  • pom.xml – Project descriptor for the Maven build
  • README.md –  File that contains a description of the project
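For reference, the generated Dockerfile is roughly of the following shape. This is a hedged sketch, not the archetype's output verbatim; the base image and jar name depend on the archetype version, and the quickstart copies the runtime libraries alongside the application jar:

```dockerfile
# Sketch of the quickstart Dockerfile (details may differ per archetype version)
FROM openjdk:8-jre-slim

RUN mkdir /app
# The Maven build places the runtime dependencies in target/libs
COPY target/libs /app/libs
COPY target/helidon-quickstart-se.jar /app

CMD ["java", "-jar", "/app/helidon-quickstart-se.jar"]
```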

 

Now let’s create an Oracle Developer Cloud project with a Git repository. We’ll call the Git repository Helidon.git.

 

Navigate to the helidon-quickstart-se folder in your command prompt window and execute the following Git commands to push the Helidon microservice application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands.

git init
git add --all
git commit -m "First commit"
git remote add origin <git repository url>
git push origin master

 

Your Helidon.git repository should have the structure shown below.

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button.  

In the New Job dialog, enter BuildHelidon for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build a Docker image from the Helidon microservice code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select Helidon.git as the Git repository and, for the branch, select master.

 

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/helidonmicro:1.0

 

In the Docker push step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. Then click the Save button.

Before we create the build job that will deploy the Helidon microservice Docker container, you need to edit the app.yaml file and modify the Docker image name. To edit the file, go to the Git page, select the Helidon.git repository, and click the app.yaml file link.

 

Click the pencil icon to edit the file.

 

Replace the image name with <your DockerHub username>/helidonmicro:1.0, then click the Commit button to commit the code changes to the master branch.
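If you're curious what this deployment descriptor contains, it defines a Kubernetes Service and Deployment for the microservice. A simplified sketch is shown below; the field values are assumptions based on the quickstart and your generated file may differ:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: helidon-quickstart-se
  labels:
    app: helidon-quickstart-se
spec:
  type: NodePort                 # exposes the service on a node port
  selector:
    app: helidon-quickstart-se
  ports:
  - port: 8080
    targetPort: 8080
    name: http
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: helidon-quickstart-se
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helidon-quickstart-se
  template:
    metadata:
      labels:
        app: helidon-quickstart-se
    spec:
      containers:
      - name: helidon-quickstart-se
        image: <your DockerHub username>/helidonmicro:1.0   # the image name you just committed
        ports:
        - containerPort: 8080
```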

 

To create another build job, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployHelidon for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildHelidon build job to the Kubernetes cluster.

 

The first thing you’ll do to configure the DeployHelidon build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, select Git from the Add Git dropdown, select Helidon.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli and Unix Shell from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, in the Unix Shell build step, enter the following script, supplying your Kubernetes cluster OCID, which you can get from the Oracle Cloud Infrastructure console.

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <cluster OCID> --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f app.yaml
sleep 30
kubectl get services helidon-quickstart-se
kubectl get pods
kubectl describe pods

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

 

In the Create Pipeline dialog, enter HelidonPipeline for the Name. Then click the Create button.

Drag and drop the BuildHelidon and DeployHelidon build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Click the Build button, as shown, to run the build pipeline. The BuildHelidon build job will be executed first and, if it is successful, then the DeployHelidon build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployHelidon build job.  Then click the log icon for the executed build. You should see messages that the service and deployment were successfully created.  Now, for the helidon-quickstart-se service and deployment that were created on the Kubernetes cluster, search the log, and find the public IP address and port to access the microservice, as shown below.

 

 

Enter the IP address and port that you retrieved from the log into the browser, using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/greet

You should see the “Hello World!” message in your browser.

 

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Helidon-based microservices and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Latest Blog Posts from Members of the Oracle ACE Program - May 12-18, 2019

Tue, 2019-05-28 16:24
Getting into it...

Spring fever affects different people in different ways. For instance, while most of us were taking in nature or discussing the final episode of Game of Thrones, these members of the Oracle ACE Program chose to spend time pounding out more than 50 blog posts. Here's what they produced for the week of May 12-18, 2019.

 

ACE Director

Oracle ACE Director Clarisa Maman Orfali
Founder/System Engineer, ClarTech Solutions, Inc.
Irvine, CA

  

Oracle ACE Director Timo Hahn
Principal Software Architect, virtual 7 GmbH
Germany

 
ACE

Oracle ACE Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland

 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France

 

Oracle ACE Patrick Joliffe
Manager, Li & Fung Limited
Hong Kong

 

Oracle ACE Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom

 

Oracle ACE Philippe Fierens
Oracle DBA on Exadata with OVM
Brussels, Belgium

 

Oracle ACE Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Oracle ACE Satishbabu Gunukula
Sr. Database Architect, Intuitive Surgical
San Francisco, California

 

Oracle ACE Sven Weller
CEO, Syntegris Information Solutions Gmbh
Germany

 
ACE Associate

Oracle ACE Associate Adam Boliński
Oracle Certified Master
Warsaw, Poland

 

Oracle ACE Associate Adrian Ward
Owner, Addidici
Surrey, United Kingdom

 

Oracle ACE Associate Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden

 

Oracle ACE Associate Elise Valin-Raki
Solution Manager, Fennia IT
Finland

 

Oracle ACE Associate Fred Denis
Oracle/Exadata DBA, Pythian
Brisbane, Australia

 

Oracle ACE Associate Sayan Malakshinov
Oracle performance Tuning Expert, TMD (TransMedia Dynamics) Ltd.
Aylesbury, United Kingdom

 

Oracle ACE Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

 
Additional Resources

Latest Blog Posts by Oracle ACE Associates - May 5-11, 2019

Fri, 2019-05-24 05:00
Magic Beans?

This is the last of three posts that list blog posts published by members of the Oracle ACE Program, May 5-11, 2019. Normally I publish the complete list of all posts by ACE Program members during a given week. But during that particular week these folks must have been hyper-caffeinated or revved up on particularly effective nutritional supplements. Whatever the cause, they generated an unusually large number of posts -- more than 70 in a single week. Whatever it takes!

 

Oracle ACE Associate Alfredo Abate
Senior Oracle Systems Architect, Brake Parts Inc LLC
McHenry, Illinois

 

Oracle ACE Associate Omar Shubeilat
Cloud Solution Architect EPM, PrimeQ (ANZ)
Sydney, Australia

 

Oracle ACE Associate Robin Chatterjee
Head of Oracle Exadata Centre of Excellence, Tata Consultancy Services
Kolkata, India

 

Sandra Flores
Solutions Architect, DevX MX
Mexico City, Mexico

 

Oracle ACE Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

 
Additional Resources

Latest Blog Posts by Oracle ACEs - May 5-11, 2019

Thu, 2019-05-23 11:20
Skills on display...

As I explained in Tuesday's blog post, members of the Oracle ACE Program were extraordinarily busy blogging the week of May 5-11, 2019, so much so that I've had to break the list of the latest posts into separate lists for the three levels in the program: ACE Directors, ACEs, and ACE Associates. Tuesday's post featured the ACE Director posts. Today it's the ACEs' turn. Posts from ACE Associates will soon follow.

 

Oracle ACE Anoop Johny
Senior Developer, Wesfarmers Chemicals, Energy & Fertilisers
Perth, Australia

Oracle ACE Anton Els
VP Engineering, Dbvisit Software Limited
Auckland, New Zealand

 

Oracle ACE Atul Kumar
Founder & CTO, K21 Academy
London, United Kingdom

 

Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio IT Solutions
Brusque, Brazil

 

Oracle ACE Kyle Goodfriend
Vice President, Planning & Analytics, Accelytics Inc.
Columbus, Ohio

 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France

 

Oracle ACE Marcelo Ochoa
System Lab Manager, Facultad de Ciencias Exactas - UNICEN
Buenos Aires, Argentina

 

Oracle ACE Marco Mischke
Group Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany

 

Oracle ACE Martin Berger
Database Teamleader, Hutchison Drei Austria GmbH
Vienna, Austria

 

Oracle ACE Miguel Palacios
General Manager, Global Business Solutions Perú
Peru

 

Oracle ACE Peter Scott
Principal/Owner, Sandwich Analytics
Marcillé-la-Ville, France

 

Oracle ACE Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Oracle ACE Rodrigo De Souza
Solutions Architect, Innive Inc.
Tampa, Florida

 

 

Additional Resources

Run SQL Developer in Oracle Cloud Infrastructure and Connect to Autonomous Database

Wed, 2019-05-22 17:38

In a previous blog post, I described how to quickly create an Autonomous Database and connect to it via SQLcl. By using the most recent Cloud Developer Image, which includes SQLcl, I was able to save time on installation and configuration. The Cloud Developer Image also comes with Oracle SQL Developer pre-installed. In this post I describe how to run SQL Developer and connect it to an Autonomous Database.

Steps
  1. Launch Cloud Developer Image
  2. Set up OCI cli
  3. Create Autonomous Transaction Processing Database using CLI
  4. Download Wallet using CLI
  5. Configure VNC server and connect from a VNC Client
  6. Launch SQL Developer and add Wallet
  7. Connect to the database
Steps 1-4: See the previous blog post

The steps to create an Autonomous Database and download the Wallet are covered in the previous blog post and apply to this tutorial as well.

5. Configure VNC server and connect from a VNC client

To access a GUI via VNC, do the following:

  • Install a VNC viewer on your local computer
    • On MacOS you can use the built-in VNC viewer in the Screen Sharing app
  • Use SSH to connect to the compute instance running the Oracle Cloud Developer Image: ssh -i <path to your ssh keys> opc@<IP address>
  • Configure a VNC password by typing vncpasswd
  • When prompted, enter a new password and verify it
  • Optionally, enter a view-only password
  • After the vncpasswd utility exits, start the VNC server by typing vncserver
  • This will start a VNC server with display number 1 for the opc user, and the VNC server will start automatically if your instance is rebooted
  • On your local computer, connect to your instance and create an ssh tunnel for port 5901 (for display number 1):
    • $ ssh -L 5901:localhost:5901 -i <path to your ssh keys> opc@<IP Address>
  • On your local computer, start a VNC viewer and establish a VNC connection to localhost:1
    • On MacOS, from Finder, hit Command-K to Connect to Server and enter vnc://localhost:5901
  • Enter the VNC password you set earlier
  • Acknowledge the welcome dialogs until you see the Oracle Linux desktop
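If you'll be reconnecting often, the SSH connection and tunnel can be captured in an ~/.ssh/config entry on your local computer. The host alias below is hypothetical, and the IP address and key path are the same placeholders used above:

```
Host devimage
    HostName <IP Address>
    User opc
    IdentityFile <path to your ssh keys>
    LocalForward 5901 localhost:5901
```

With this in place, ssh devimage both logs you in and opens the tunnel for your VNC viewer.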

 

1. Connecting to VNC Server using MacOS built-in VNC Viewer, Screen Sharing

 

 

2. Oracle Linux Desktop

 

Launch SQL Developer and add Wallet

Launch SQL Developer via the Applications > Programming menu. See figure 3.

 

3. Launching SQL Developer

 

Connect to the database

To create a database connection (See figure 4.):

  • In the connections panel, click the (+) icon to create a New Database Connection…
  • Name your connection
  • For Connection Type, choose Cloud Wallet
  • Browse for the wallet.zip you downloaded earlier
  • You can leave the default Service unless you have other Autonomous Databases in this tenancy

 

4. Creating the Database Connection

 

You are now ready to connect:

 

5. SQL Developer connected to Autonomous Database

 

Conclusion

The Oracle Linux-based Cloud Developer Image includes all the tools you need to get started with Autonomous Database and Oracle SQL Developer via VNC. In this blog post I explained the steps to create an Autonomous Database and access it via SQL Developer displayed over VNC.

Support for Oracle Java SE now Included with Oracle Cloud Infrastructure

Tue, 2019-05-21 08:00

Today we are excited to announce that support for Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost. This includes the ability to log bugs, to get regular stability, performance, and security updates, and more for Oracle Java 8, 11, and 12. With Oracle Java you can develop portable, high-performance applications for the widest range of computing platforms possible, including all of the major public cloud services. By making Oracle Java available as part of any Oracle Cloud Infrastructure subscription, we are dramatically reducing the time and cost to develop enterprise and consumer applications.

This is an important announcement because Java is the #1 programming language and the #1 developer choice for the cloud. It’s used widely for embedded applications, games, web content, and enterprise software. Twelve million developers run Java worldwide, and its usage is growing as options for cloud deployment of Java increase.

Oracle Java in Oracle Cloud helps developers write more secure applications, with convenient access to updates and a single vendor for support – for cloud and Oracle Java use – same subscription, no additional cost. We also ensure that you will have signed software from Oracle and the latest stability, performance, and security updates addressing critical vulnerabilities. 

All of this is supported on Oracle Linux and on other operating systems you run in your Oracle Cloud Infrastructure Virtual Machine or Bare Metal instance. Microsoft Windows? Of course. Ubuntu? Yep. Red Hat Enterprise Linux?  Sure!

Easy Peasy Cloud Developer Image

How can you get the Oracle Java bits? They are a breeze to install on Oracle Linux using Oracle Cloud Infrastructure yum repositories. But with the Oracle Cloud Developer Image available in the Oracle Cloud Marketplace, it’s even easier to get started. Simply click to launch the image on an Oracle Cloud Infrastructure compute instance. The Oracle Cloud Developer Image is a Swiss army knife for developers that includes Oracle Java and a whole bunch of other valuable tools to accelerate development of your next project. You can have this image installed and ready to go within minutes.

Get started with the Oracle Cloud Developer Image.

Latest Blog Posts by Oracle ACE Directors - May 5-11, 2019

Tue, 2019-05-21 05:00
Weighing in...

Given the extraordinary number of blog posts published between May 5 and May 11 by members of the Oracle ACE Program, I'm publishing separate lists for posts by ACE Directors, ACEs, and ACE Associates. Today's list features recent posts by Oracle ACE Directors.

Oracle ACE Director David Kurtz
Consultant, Accenture Enkitec Group
London, United Kingdom

 

Oracle ACE Director Oren Nakdimon
Database Architect & Developer, Moovit
Tzurit, HaZafon (North) District, Israel

 

Oracle ACE Director Edward Roske
CEO, interRel Consulting Partners
Arlington, Texas

 

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

Oracle ACE Director John Scott
Director, Apex Evangelists
Leeds, United Kingdom

 

Oracle ACE Director Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 
Additional Resources

Using YAML for Build Configuration in Oracle Developer Cloud

Thu, 2019-05-16 16:34

In the April release, we introduced support for YAML-based build configuration in Oracle Developer Cloud. This blog will introduce you to scripting YAML-based build configurations in Developer Cloud.

Before I explain how to create your first build configuration using YAML on Developer Cloud, let’s take a look at a few things.

Why do we need YAML configuration?

A YAML-based build configuration allows you to define and configure build jobs by creating YAML files that you can push to the Git repository where the application code that the build job will be building resides.

This allows you to version your build configuration and keep the older versions, should you ever need to refer back to them.  This is different from user interface-based build job configuration, where, once changes are saved, there is no way to refer back to an older version.

Is YAML replacing the User Interface based build configuration in Developer Cloud?

No, we aren’t replacing the existing UI-based build configuration in Developer Cloud with YAML. In fact, YAML-based build configuration is an alternative to it. Both configuration methods will co-exist going forward.

Are YAML and User Interface-based build configurations interchangeable in Developer Cloud?

No, not at the moment. What this means for the user is that a build job configuration created as a YAML file will always exist, and can only be edited, as a YAML file. A build job created or defined through the user interface will not be available as a YAML file for editing.

Now let’s move on to the fun part, scripting our first YAML-based build job configuration to build and push a Docker container to Docker registry.

 

Set Up the Git Repository for a YAML-Based Build

To start, create a Git repository in your Developer Cloud project and then create a .ci-build folder in that repository. This is where the YAML build configuration file will reside. For this blog, I named the Git repository NodeJSDocker, but you can name it whatever you want.

In the Project Home page, under the Repositories tab, click the +Create button to create a new repository.

 

Enter the repository Name and a Description, leave the default values for everything else, and click the Create button.

 

 

In the NodeJSDocker Git repository root, use the +File button and create three new files: Main.js, package.json, and Dockerfile.  Take a look at my NodeJS Microservice for Docker blog for the code snippets that are required for these files.

Your Git repository should look like this.

 

Create a YAML file in the .ci-build folder in the Git repository. The .ci-build folder should always be in the root of the repository.

In the file name field, enter .ci-build/my_first_yaml_build.yml, where .ci-build is the folder and my_first_yaml_build.yml is the YAML file that defines the build job configuration. Then add the code snippet below and click the Commit button.

Notice that the structure of the YAML file is very similar to the tabs for the Build Job configuration. The root mapping in the build job configuration YAML is “job”, which consists of “name”, “vm-template”, “git”, “steps”, and “settings”. The following list describes each of these:

  • “name”: Identifies the name of the build job; it must be unique within the project.
  • “vm-template”: Identifies the VM template that is used for building this job.
  • “git”: Defines the Oracle Developer Cloud Git repository url, branch, and repo-name.
  • “steps”: Defines the build steps. In YAML, we support all the same build steps as we support in a UI-based build job.
  • “settings”: Defines job settings, such as how many old builds and artifacts to keep.

 

In the code snippet below, we define the build configuration to build and push the Docker container to DockerHub registry. To do this, we need to include the Docker Login, Docker Build, and Docker Push build steps in the steps mapping.

Note:

For the Docker Login step, you’ll need to include your password. However, storing your password in plain text in a readable file, such as a YAML file, is definitely not a good idea. The solution is to use the named password functionality in Oracle Developer Cloud.

To define a named password for the Docker registry, we’ll click the Project Administration tab in the left navigation bar and then the Build tile, as shown below.

 

In the Named Password section, click the +Create button.

 

Enter the Name and the Password for the Named Password. You’ll refer to it in the build job. Click the Create button and it will be stored.

You’ll be able to refer to this Named Password in the YAML build job configuration by using #{DOCKER_HUB}.

 

docker-build: Under source, put DOCKERFILE and, if the Dockerfile does not reside in the root of the Git repository, include the mapping that defines the path to it. Enter the image-name (required) and version-tag information.

docker-push: You do not need the registry-host entry if you plan to use DockerHub or Quay.io. Otherwise, provide the registry host. Enter the image-name (required) and version-tag information.

Similarly, for docker-login, you do not need the registry-host entry if you plan to use DockerHub or Quay.io.

job:
  name: MyFirstYAMLJob
  vm-template: Docker
  git:
  - url: "https://alex.admin@devinstance4wd8us2-wd4devcs8us2.uscom-central-1.oraclecloud.com/devinstance4wd8us2-wd4devcs8us2/s/devinstance4wd8us2-wd4devcs8us2_featuredemo_8401/scm/NodeJSDocker.git"
    branch: master
    repo-name: origin
  steps:
  - docker-login:
      username: "abhinavshroff" # required
      password: "#{DOCKER_HUB}" # required
  - docker-build:
      source: "DOCKERFILE"
      image:
        image-name: "abhinavshroff/nodejsmicroservice" # required
        version-tag: "latest"
  - docker-push:
      image:
        image-name: "abhinavshroff/nodejsmicroservice" # required
        version-tag: "latest"
  settings:
  - discard-old:
      days-to-keep-build: 5
      builds-to-keep: 10
      days-to-keep-artifacts: 5
      artifacts-to-keep: 10
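Because a malformed file or a missing job name only surfaces after you commit, it can help to parse the YAML locally first. Here is a minimal sanity check sketched in Python with the PyYAML package (assumed to be installed on your machine; it is not part of Developer Cloud):

```python
import yaml  # PyYAML; assumed installed locally (pip install pyyaml)

# A trimmed, inlined copy of the build configuration above.
config_text = """
job:
  name: MyFirstYAMLJob
  vm-template: Docker
  steps:
  - docker-login:
      username: "abhinavshroff"
      password: "#{DOCKER_HUB}"
"""

config = yaml.safe_load(config_text)
job = config["job"]

# The job name becomes the build job's name in Developer Cloud and must be
# unique within the project, so check that the required keys are present.
assert job.get("name"), "job.name is required"
assert job.get("vm-template"), "job.vm-template is required"
print("job name:", job["name"])  # → job name: MyFirstYAMLJob
```

Running this before committing catches indentation mistakes and missing required mappings at your desk rather than in the build system.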

Right after you commit the YAML file in the .ci-build folder of the repository, a job named MyFirstYAMLJob will be created in the Builds tab. Notice that the name of the job that is created matches the name of the job you defined in the my_first_yaml_build.yml file.

Click the MyFirstYAMLJob link and then, on the Builds page, click the Configure button. The Git tab will open, showing the my_first_yaml_build.yml file in the .ci-build folder of the NodeJSDocker.git repository. Click the Edit File button and edit the YAML file.

 

After you finish editing and commit the changes, return to the Builds tab and click the Build Job link. Then click the Build Now button.

 

When the build job executes, it builds the Docker image and then pushes it to DockerHub.

You’ll also be able to create and configure pipelines using YAML. To learn more about creating and configuring build jobs and pipelines using YAML, see the documentation link.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

ACE-Organized Meet-Ups: May 17-June 13, 2019

Thu, 2019-05-16 05:00
The meet-ups below were organized by the folks in the photos, but those people won't necessarily present the content. In many cases the events consist of multiple sessions. For additional detail on each event, please click the links provided.
 

Oracle ACE Christian Pfundtner
CEO, DB Masters GmbH
Austria


Host Organization: DB Masters
Friday, May 17, 2019
MA01 - Veranstaltungszentrum 
1220 Vienna, Stadlauerstraße 56 
 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France


Host Organization: Paris Province Oracle Meetup
Monday, May 20, 2019
6:30pm - 8:30pm
Easyteam
39 Rue du Faubourg Roubaix
Lille, France
 

Oracle ACE Associate Mathias Magnusson
CEO, Evil Ape
Nacka, Sweden

 
Host Organization: Stockholm Oracle
Thursday, May 23, 2019
6:00pm - 8:00pm
(See link for location details)
 

Oracle ACE Ahmed Aboulnaga
Principal, Attain
Washington D.C.

 
Host Organization: Oracle Fusion Middleware & Oracle PaaS of NOVA
Tuesday, May 28, 2019
4:00pm - 6:00pm
Reston Regional Library
11925 Bowman Towne Dr.
Reston, VA
 

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

 
Host Organization: ORCLAPEX-NL
Wednesday, May 29, 2019
5:30pm - 9:30pm
Oracle Netherlands
Hertogswetering 163-167,
Utrecht, Netherlands
 

Oracle ACE Associate José Rodrigues
Business Manager for BPM & WebCenter, Link Consulting
Lisbon, Portugal

 
Host Organization: Oracle Developer Meetup Lisbon
Thursday, May 30, 2019
6:30pm - 8:30pm
Auditorio Link Consulting
Avenida Duque Ávila, 23
Lisboa
 

Oracle ACE Director Rita Nuñez
Senior IT Consultant, Tecnix Solutions
Argentina

 
Host Organization: Oracle Users Group of Argentina (AROUG)
Thursday, June 13, 2019
Aula Magna UTN.BA - Medrano 951
 
Additional Resources

Podcast: Do Bloody Anything: The Changing Role of the DBA

Wed, 2019-05-15 05:00

In August of 2018 we did a program entitled Developer Evolution: What’s Rocking Roles in IT. That program focused primarily on the forces that are reshaping the role of the software developer. In this program we shift the focus to the DBA -- the Database Administrator -- and the evolve-or-perish choices that face those in that role.

Bringing their insight to the discussion is an international panel of experts who represent years of DBA experience, and some of the forces that are transforming that role.

The Panelists

In alphabetical order

Maria Colgan
Master Product Manager, Oracle Database
San Francisco, California


 “Security, especially as people move more towards cloud-based models, is something DBAs should get a deeper knowledge in.”

 

Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland

 

"Now that Autonomous Database is here, I see several database administrators being scared that somehow all their routine tasks will be replaced and they will have very little to do. As if doing the routine stuff is the biggest joy in their lives."

 

Oracle ACE Director Tim Hall
DBA, Developer, Author, and Trainer
Birmingham, United Kingdom


 “I never want to do something twice if I can help it. I want to find a way of automating it. If the database will do that for me, that’s awesome.”

 

Oracle ACE Director Lucas Jellema
CTO/Consulting IT Architect, AMIS
Rotterdam, Netherlands


 “By taking heed of what architects are coming up with, and how applications and application landscapes are organized and how the data plays a part in that, I think DBAs can prepare themselves and play a part in putting it all together in a meaningful way.”

 

Oracle ACE Director Brendan Tierney
Principal Consultant, Oralytics
Dublin, Ireland


"Look beyond what you're doing in your cubicles with your blinkers on. See what's going on across all IT departments. What are the business needs? How is data being used? Where can you contribute to that to deliver better business value?"

 

Gerald Venzl
Master Product Manager, Oracle Cloud, Database, and Server Technologies
San Francisco, California

 

"When you talk to anybody outside the administrative roles -- DBA or Unix Admin -- they will tell you that those people are essentially the folks that always say no. That's not very productive."

 

Additional Resources

ACEs at Riga DevDays - May 29-31

Tue, 2019-05-14 05:00

If you find yourself wandering the Baltic states late in May, why not make your way to Riga, Latvia and drop in on the Riga Dev Days? Held May 29-31 at the Cinema Kino Citadele in Riga, the 3-day DevDays event features 40 speakers, including these members of the Oracle ACE Program.

Oracle ACE Director Christian Antognini
Senior Principal Consultant and Partner, Trivadis AG
Monte Carasso, Switzerland

 

Oracle ACE Director Martin Bach
Principal Consultant, Accenture Enkitec Group
Germany

 

Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy
Finland

 

Oracle ACE Director Oren Nakdimon
Database Expert, Moovit
Acre, Israel

 

Oracle ACE Director Franck Pachot
Data Engineer, CERN
Lausanne, Switzerland

 

Oracle ACE Øyvind Isene
Consultant, Sysco AS
Oslo, Norway

 

Oracle ACE Piet De Visser
Independent Oracle Database Consultant
The Hague, Netherlands

 
Related Resources

Get Started with Autonomous Database and SQLcl in No Time Using Cloud Developer Image

Fri, 2019-05-10 22:46

In this blog post, I describe how to use a free trial for Oracle Cloud and the recently released Oracle Linux-based Cloud Developer Image to provision an Autonomous Transaction Processing Database and connect to it via SQLcl, all in a matter of minutes.

Think of the Cloud Developer Image as a Swiss army knife for Cloud developers. It has a ton of tools pre-installed, including:

Languages and Oracle Database Connectors
  • Java Platform Standard Edition (Java SE) 8
  • Python 3.6 and cx_Oracle 7
  • Node.js 10 and node-oracledb
  • Go 1.12
  • Oracle Instant Client 18.5
Oracle Cloud Infrastructure Client Tools
  • Oracle Cloud Infrastructure CLI
  • Python, Java, Go and Ruby Oracle Cloud Infrastructure SDKs
  • Terraform and Oracle Cloud Infrastructure Terraform Provider
  • Oracle Cloud Infrastructure Utilities
Other
  • Oracle Container Runtime for Docker
  • Extra Packages for Enterprise Linux (EPEL) via Yum
  • GUI Desktop with access via VNC Server

Here are the steps to provision a fresh Autonomous Transaction Processing Database and connect to it via SQLcl.

Steps
  1. Launch the Cloud Developer Image from the Console
  2. Log in to the instance running the Cloud Developer Image via ssh
  3. Set up the OCI CLI
  4. Create Autonomous Transaction Processing Database using CLI
  5. Download Wallet using CLI
  6. Install and configure SQLcl
  7. Connect to the database
1. Launch Cloud Developer Image

Log in to the Console. If you don't already have an ssh key pair, make sure you generate one first by following the documentation. Launch the image by choosing Marketplace under Solutions, Platforms and Edge via the hamburger menu and clicking on Oracle Cloud Developer Image.

Click Launch Instance. Review the terms and conditions, click Launch Instance again. Paste in your ssh public key and click Create. Once the image is running, make a note of the IP address.

2. Set up the OCI client tools

Connect to your newly launched image from your local computer via ssh:

ssh -i <path to your ssh keys> opc@<IP address>

Once logged in, run oci setup config and follow the directions, providing the necessary OCIDs as described in the documentation on Required Keys and OCIDs.

$ oci setup config

Remember to upload your API key by following the instructions in the same documentation. If you accepted all the defaults during the oci client setup, the public key to upload is the output of this:

$ cat /home/opc/.oci/oci_api_key_public.pem
3. Create Autonomous Transaction Processing Database using the OCI CLI

Several of the next commands require the compartment-id as input, so it's helpful to have a shorthand ready. Get its value and store it in an environment variable by calling the metadata service via oci-metadata:

$ export C=`oci-metadata -g compartmentid --value-only`

Next, create the Autonomous Database. Be sure to provide your own admin password.

$ oci db autonomous-database create --compartment-id $C --db-name myadb --cpu-core-count 1 --data-storage-size-in-tbs 1 --admin-password "<YOUR PASSWORD>"

You should see output similar to:

{
  "data": {
    "compartment-id": "ocid1.tenancy.oc1..aaaaaalskdjflsdkjflsdjflsdkflsjdflksjjfqntfkzizeeikohha4oa",
    "connection-strings": null,
    "cpu-core-count": 1,
    "data-storage-size-in-tbs": 1,
    "db-name": "myadb",
    "db-version": null,
    "db-workload": "OLTP",
    "defined-tags": {},
    "display-name": "autonomousdatabase20190511024732",
    "freeform-tags": {},
    "id": "ocid1.autonomousdatabase.oc1.iad.abuwcljrgx2kosiudoisdufoidsufoidsufodsfkdkdd3zprxjzsouzq",
    "license-model": "BRING_YOUR_OWN_LICENSE",
    "lifecycle-details": null,
    "lifecycle-state": "PROVISIONING",
    "service-console-url": null,
    "time-created": "2019-05-11T02:47:32.745000+00:00",
    "used-data-storage-size-in-tbs": null
  },
  "etag": "a133c7fa"
}

Export the Database ID in an environment variable as that will come in handy later.

export DB_ID=`oci db autonomous-database list --compartment-id $C | jq -r '.data[] | select( ."db-name" == "myadb" ).id'`

Wait for the database to be in the AVAILABLE state. You can check the database state with the following command; initially, it will return PROVISIONING:

$ oci db autonomous-database get --autonomous-database-id $DB_ID | jq -r '.data["lifecycle-state"]'
AVAILABLE

For me, it took about 6 minutes for the database to become available after executing the create command.
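If you'd rather not re-run the status command by hand, the wait can be scripted. Below is a minimal sketch of a polling helper — a hypothetical function, not from the original post — that assumes $DB_ID is exported as shown above; it parses the JSON with python3 (which the Cloud Developer Image ships) rather than jq:

```shell
# Hypothetical helper: poll until the Autonomous Database reports AVAILABLE.
# Assumes DB_ID is exported as shown above. Uses python3 (pre-installed on
# the Cloud Developer Image) to pull lifecycle-state out of the JSON.
wait_for_adb() {
  state=""
  while [ "$state" != "AVAILABLE" ]; do
    state=$(oci db autonomous-database get --autonomous-database-id "$DB_ID" \
      | python3 -c 'import json,sys; print(json.load(sys.stdin)["data"]["lifecycle-state"])')
    echo "lifecycle-state: $state"
    # back off between polls while the database is still provisioning
    [ "$state" = "AVAILABLE" ] || sleep 30
  done
}
```

Run wait_for_adb after the create command returns; it prints the state on each poll and exits once the database is available.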

4. Download Wallet using CLI

$ oci db autonomous-database generate-wallet --autonomous-database-id $DB_ID --password <YOUR PASSWORD> --file wallet.zip

Set TNS_ADMIN and extract wallet.zip

$ export TNS_ADMIN="`cat /etc/ld.so.conf.d/oracle-instantclient.conf`/network/admin"
$ sudo -E unzip ~/wallet.zip -d $TNS_ADMIN
5. Install and configure SQLcl

Install SQLcl by temporarily enabling the ol7_ociyum_config repo. Then, source the sqlcl.sh script installed in /etc/profile.d to add the sql command to your PATH.

$ sudo yum install -y --enablerepo=ol7_ociyum_config sqlcl
$ source /etc/profile.d/sqlcl.sh

Start SQLcl in /nolog mode and point it to the wallet.zip you downloaded earlier using the set cloudconfig command.

$ sql /nolog

SQLcl: Release 19.1 Production on Fri May 10 00:24:29 2019

Copyright (c) 1982, 2019, Oracle. All rights reserved.

SQL> set cloudconfig /home/opc/wallet.zip
Operation is successfully completed.
Operation is successfully completed.
Using temp directory:/tmp/oracle_cloud_config2842421108875448254
6. Connect to the database

Connect to your Autonomous Database as the admin user. For the service name, use one of the entries in $TNS_ADMIN/tnsnames.ora. Each ADB is created with a high, medium and low service.
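If you want to see which service names your wallet defines before connecting, you can read them straight out of tnsnames.ora. A small sketch (hypothetical helper, assuming TNS_ADMIN is set to the extracted wallet directory as above, and that each entry starts at the beginning of a line):

```shell
# Hypothetical helper: list the net service names defined in the wallet's
# tnsnames.ora. Entries look like "myadb_high = (description=...)", so the
# leading identifier on each entry line is the service name.
list_adb_services() {
  grep -oE '^[A-Za-z0-9_]+' "$TNS_ADMIN/tnsnames.ora"
}
```

For the database created above, this should print myadb_high, myadb_medium and myadb_low.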

SQL> connect admin/<YOUR PASSWORD>@myadb_high
Connected.
SQL> select sysdate from dual;

SYSDATE
---------
11-MAY-19

SQL>
Conclusion

The Oracle Linux-based Cloud Developer Image comes with a wealth of developer tools pre-installed, reducing the time it takes to get started with Oracle Cloud and Autonomous Database. In this blog post, I showed how you can provision an Autonomous Database and connect to it in a matter of minutes. The fact that the Cloud Developer Image already has the important bits pre-installed, including the OCI client tools and Oracle Instant Client, makes completing this task a breeze.

Pages