Feed aggregator

Using Cloud Native Buildpacks (CNB) on a local registry to speed up the building of images for test purposes

Pas Apicella - Tue, 2019-06-11 20:58
I previously blogged about the CNCF project known as Cloud Native Buildpacks in the blog entry below.

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)
http://theblasfrompas.blogspot.com/2019/06/building-pivotalmysqlweb-using-cloud.html

In the steps below I will show how to use a local Docker registry on your laptop or desktop to speed up building OCI-compliant images with CNBs, using the same application.

Pre Steps:

1. Ensure you have Docker CE installed; if not, use this link:

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. Start by running a local registry on your laptop. The guide below shows how to run a container that will act as our local registry, and then how to verify it's running.

https://docs.docker.com/registry/

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2

Verify it's running:

$ netstat -an | grep 5000
tcp6       0      0  ::1.5000               *.*                    LISTEN
tcp4       0      0  *.5000                 *.*                    LISTEN
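Alternatively, you can check the registry container and its API directly:

$ docker ps --filter "name=registry"
$ curl http://localhost:5000/v2/_catalog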

2. Then pull the CNB versions of the "official" build and run images from GCR as follows. Those images live here:

https://console.cloud.google.com/gcr/images/cncf-buildpacks-ci/GLOBAL/packs/run?gcrImageListsize=30

Here I am using the latest build/run images; at the time of this post the run image was "run:0.2.0-build.12".

papicella@papicella:~$ docker pull gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
0.2.0-build.12: Pulling from cncf-buildpacks-ci/packs/run
Digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9
Status: Downloaded newer image for gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12

papicella@papicella:~$ docker tag gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12 localhost:5000/run:0.2.0-build.12

papicella@papicella:~$ docker rmi gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run:0.2.0-build.12
Untagged: gcr.io:443/cncf-buildpacks-ci/packs/run@sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9

papicella@papicella:~$ docker push localhost:5000/run:0.2.0-build.12
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
0.2.0-build.12: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780

3. Now let's use our local registry and build/run images, which will be much faster for local development.

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker tag localhost:5000/run:0.2.0-build.12 localhost:5000/run

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker push localhost:5000/run:latest
The push refers to repository [localhost:5000/run]
1315c94f2536: Layer already exists
63696cbb6c17: Layer already exists
30ede08f8231: Layer already exists
b57c79f4a9f3: Layer already exists
d60e01b37e74: Layer already exists
e45cfbc98a50: Layer already exists
762d8e1a6054: Layer already exists
latest: digest: sha256:ebd42c0228f776804f2e99733076216592c5a1117f1b3dde7688cf3bd0bbe7b9 size: 1780

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build localhost:5000/pivotal-mysql-web --path ./PivotalMySQLWeb --no-pull --publish
Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-65bb470893c1.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] restoring cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
===> ANALYZING
[analyzer] using cached layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
[analyzer] writing metadata for uncached layer 'org.cloudfoundry.jvmapplication:main-class'
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Reusing cached layer
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Reusing cached layer
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Reusing layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Reusing layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Reusing layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Reusing layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Reusing layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: localhost:5000/pivotal-mysql-web:latest@sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
===> CACHING
[cacher] Reusing layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image localhost:5000/pivotal-mysql-web

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker pull localhost:5000/pivotal-mysql-web
Using default tag: latest
latest: Pulling from pivotal-mysql-web
410238d178d0: Already exists
a00e90b544bc: Already exists
9de264eecc08: Already exists
4acedf754175: Already exists
d5a72fc0c7a1: Already exists
4066d2d744ac: Already exists
dba1ef680b99: Already exists
Digest: sha256:f1d7a25fc5159ceb668c26b595dcffb00ef54ada31cbb52eaa8319dc143fc9d8
Status: Downloaded newer image for localhost:5000/pivotal-mysql-web:latest

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run -m 1G -p 8080:8080 localhost:5000/pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-12 01:02:16.174  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on a018f17d6121 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-12 01:02:16.179  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-12 01:02:18.336  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-12 01:02:18.374  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-12 01:02:18.375  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-12 01:02:18.391  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-12 01:02:18.512  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2270 ms
2019-06-12 01:02:19.019  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-12 01:02:19.020  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-12 01:02:19.021  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-12 01:02:19.022  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-12 01:02:19.374  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-12 01:02:19.918  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 42d4ec01-6459-4205-a66b-1b49d333121e

2019-06-12 01:02:20.043  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-12 01:02:20.092  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@47e4d9d0, org.springframework.security.web.context.SecurityContextPersistenceFilter@5e4fa1da, org.springframework.security.web.header.HeaderWriterFilter@4ae263bf, org.springframework.security.web.csrf.CsrfFilter@2788d0fe, org.springframework.security.web.authentication.logout.LogoutFilter@15fdd1f2, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@2d746ce4, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@70e02081, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49798e84, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1948ea69, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3f92c349, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@66ba7e45, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6ed06f69, org.springframework.security.web.session.SessionManagementFilter@19ccca5, org.springframework.security.web.access.ExceptionTranslationFilter@57aa341b, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@7c6442c2]
2019-06-12 01:02:20.138  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-12 01:02:20.259  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-12 01:02:20.265  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 4.841 seconds (JVM running for 5.646)



And that's it: a locally built OCI image (built very quickly, entirely on your machine) that you have run from your local image registry.

Here is how to view your local registry using HTTPie, showing the locally built "pivotal-mysql-web" OCI image we created above:

papicella@papicella:~$ http http://localhost:5000/v2/_catalog
HTTP/1.1 200 OK
Content-Length: 63
Content-Type: application/json; charset=utf-8
Date: Wed, 12 Jun 2019 01:53:40 GMT
Docker-Distribution-Api-Version: registry/2.0
X-Content-Type-Options: nosniff

{
    "repositories": [
        "pivotal-mysql-web",
        "run",
        "sample-java-app"
    ]
}
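You can also list the tags stored for a given repository using the standard registry v2 API; for example, to see the tags for the image we just built:

$ http http://localhost:5000/v2/pivotal-mysql-web/tags/list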


More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.
https://content.pivotal.io/blog/cloud-native-buildpacks-for-kubernetes-and-beyond

2. buildpacks.io Home Page
https://buildpacks.io/

Categories: Fusion Middleware

RAMBleed DRAM Vulnerabilities

Oracle Security Team - Tue, 2019-06-11 12:00

On June 11th, security researchers published a paper titled “RAMBleed: Reading Bits in Memory without Accessing Them”.  This paper describes attacks against Dynamic Random Access Memory (DRAM) modules that are already susceptible to Rowhammer-style attacks.

The new attack methods described in this paper are not microprocessor-specific; they leverage known issues in DRAM memory.  These attacks only impact DDR4 and DDR3 memory modules; older-generation DDR2 and DDR1 memory modules are not vulnerable to these attacks.

While the RAMBleed issues leverage RowHammer, RAMBleed is different in that confidentiality of data may be compromised: RAMBleed uses RowHammer as a side channel to discover the values of adjacent memory. 

Please note that successfully leveraging the RAMBleed exploits requires that the malicious attacker be able to locally execute malicious code against the targeted system. 

At this point in time, Oracle believes that:

  • All current and many older families of Oracle x86 (X5, X6, X7, X8, E1) and Oracle SPARC servers (S7, T7, T8, M7, M8) employing DDR4 DIMMs are not expected to be impacted by RAMBleed.  This is because Oracle only employs DDR4 DIMMs that have implemented the Target Row Refresh (TRR) defense mechanism against RowHammer.  Oracle’s memory suppliers have stated that these implementations have been designed to be effective against RowHammer. 
  • Older systems making use of DDR3 memory are also not expected to be impacted by RAMBleed because they are making use of a combination of other RowHammer mitigations (e.g., pseudo-TRR and increased DIMM refresh rates in addition to Error-Correcting Code (ECC)).  Oracle is currently not aware of any research that would indicate that the combination of these mechanisms would not be effective against RAMBleed. 
  • Oracle Cloud Infrastructure (OCI) is not impacted by the RAMBleed issues because OCI servers only use DDR4 memory with built-in defenses as previously described.  Exadata Engineered Systems use DDR4 memory (X5 family and newer) and DDR3 memory (X4 family and older).
  • Finally, Oracle does not believe that additional software patches will need to be produced to address the RAMBleed issues, as these memory issues can only be addressed through hardware configuration changes.  In other words, no additional security patches are expected for Oracle product distributions.
For more information about Oracle Corporate Security Practices, see https://www.oracle.com/corporate/security-practices/

[Part I] Oracle and Microsoft Cloud: Interconnect Overview

Online Apps DBA - Tue, 2019-06-11 02:15

Oracle and Microsoft Cloud: Interconnect Overview Oracle has announced a cloud interoperability partnership between Microsoft and Oracle Cloud. This cross-cloud interlink enables customers to migrate and run mission-critical enterprise workloads across Microsoft Azure and Oracle Cloud Infrastructure (OCI). Check our latest blog post at https://k21academy.com/oci38 to learn more about: ▪ Network Connectivity Between Oracle and Microsoft […]

The post [Part I] Oracle and Microsoft Cloud: Interconnect Overview appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Creating An ATP Instance With The OCI Service Broker

OTN TechBlog - Mon, 2019-06-10 08:03

We recently announced the release of the OCI Service Broker for Kubernetes, an implementation of the Open Service Broker API that streamlines the process of provisioning and binding to services that your cloud native applications depend on.

The Kubernetes documentation lays out the following use case for the Service Catalog API:

An application developer wants to use message queuing as part of their application running in a Kubernetes cluster. However, they do not want to deal with the overhead of setting such a service up and administering it themselves. Fortunately, there is a cloud provider that offers message queuing as a managed service through its service broker.

A cluster operator can setup Service Catalog and use it to communicate with the cloud provider’s service broker to provision an instance of the message queuing service and make it available to the application within the Kubernetes cluster. The application developer therefore does not need to be concerned with the implementation details or management of the message queue. The application can simply use it as a service.

Put simply, the Service Catalog API lets you manage services within Kubernetes that are not deployed within Kubernetes.  Things like messaging queues, object storage, and databases can be deployed with a set of Kubernetes configuration files, without needing knowledge of the underlying API or tools used to create those instances, thus simplifying the deployment and making it portable to virtually any Kubernetes cluster.

The current OCI Service Broker adapters that are available at this time include:

  • Autonomous Transaction Processing (ATP)
  • Autonomous Data Warehouse (ADW)
  • Object Storage
  • Streaming

I won't go into too much detail in this post about the feature, as the introduction post and GitHub documentation do a great job of explaining service brokers and the problems that they solve. Rather, I'll focus on using the OCI Service Broker to provision an ATP instance and deploy a container which has access to the ATP credentials and wallet.  

To get started, you'll first have to follow the installation instructions on GitHub. At a high level, the process involves:

  1. Deploy the Kubernetes Service Catalog client to the OKE cluster
  2. Install the svcat CLI tool
  3. Deploy the OCI Service Broker
  4. Create a Kubernetes Secret containing OCI credentials
  5. Configure Service Broker with TLS
  6. Configure RBAC (Role Based Access Control) permissions
  7. Register the OCI Service Broker

Once you've installed and registered the service broker, you're ready to use the ATP service plan to provision an ATP instance. I'll go into details below, but the overview of the process looks like so:

  1. Create a Kubernetes secret with a new admin and wallet password (in JSON format)
  2. Create a YAML configuration for the ATP Service Instance
  3. Deploy the Service Instance
  4. Create a YAML config for the ATP Service Binding
  5. Deploy the Service Binding, which results in the creation of a new Kubernetes secret containing the wallet contents
  6. Create a Kubernetes secret for Microservice deployment use containing the admin password and the wallet password (in plain text format)
  7. Create a YAML config for the Microservice deployment which uses an initContainer to decode the wallet secrets (due to a bug which double encodes them) and mounts the wallet contents as a volume

Following that overview, let's take a look at a detailed example. The first thing we'll have to do is make sure that the user we're using with the OCI Service Broker has the proper permissions.  If you're using a user that is a member of the group devops then you would make sure that you have a policy in place that looks like this:

Allow group devops to manage autonomous-database in compartment [COMPARTMENT_NAME]

The next step is to create a secret that will be used to set some passwords during ATP instance creation.  Create a file called atp-secret.yaml and populate it along the lines of the sketch below.  The values for password and walletPassword must be in the format of a JSON object and must be base64 encoded.  You can use an online tool for the base64 encoding, or use the command line if you're on a Unix system (echo '{"password":"Passw0rd123456"}' | base64).
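For illustration only, a minimal sketch of what atp-secret.yaml might look like. The secret name and the walletPassword key are assumptions; check the OCI Service Broker documentation for the exact keys it expects.

cat <<'EOF' > atp-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: atp-secret
data:
  password: <output of: echo '{"password":"Passw0rd123456"}' | base64>
  walletPassword: <output of: echo '{"walletPassword":"Welcome12345"}' | base64>   # assumed key name and JSON shape
EOF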

Now create the secret via: kubectl create -f atp-secret.yaml.

Next, create a file called atp-instance.yaml and populate it as follows (updating the name, compartmentId, dbName, cpuCount, storageSizeTBs, licenseType as necessary).  The parameters are detailed in the full documentation (link below).  Note, we're referring to the previously created secret in this YAML file.
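As a hedged illustration of the shape such a file might take (the class and plan names, instance name, and OCID below are placeholders, not values from the original post; the parameter names match those listed above):

cat <<'EOF' > atp-instance.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: atp-demo
spec:
  clusterServiceClassExternalName: atp-service   # assumed class name; check the broker catalog with svcat get classes
  clusterServicePlanExternalName: standard       # assumed plan name; check with svcat get plans
  parameters:
    name: atpdemo
    compartmentId: ocid1.compartment.oc1..<your compartment OCID>
    dbName: atpdemo
    cpuCount: 1
    storageSizeTBs: 1
    licenseType: BYOL
  parametersFrom:
    - secretKeyRef:
        name: atp-secret        # the secret created in the previous step
        key: password
EOF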

Create the instance with: kubectl create -f atp-instance.yaml. This will take a bit of time, but in about 15 minutes or less your instance will be up and running. You can check the status via the OCI console UI, or with the command: svcat get instances which will return a status of "ready" when the instance has been provisioned.

Now that the instance has been provisioned, we can create a binding.  Create a file called atp-binding.yaml and populate it as such:
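Again purely as a sketch (the instance name must match whatever you used in atp-instance.yaml; the binding name atp-demo-binding matches the secret name referenced later in this post):

cat <<'EOF' > atp-binding.yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: atp-demo-binding
spec:
  instanceRef:
    name: atp-demo              # assumed instance name from the sketch above
  parametersFrom:
    - secretKeyRef:
        name: atp-secret
        key: walletPassword
EOF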

Note that we're once again using a value from the initial secret that we created in step 1. Apply the binding with: kubectl create -f atp-binding.yaml and check the binding status with svcat get bindings, looking again for a status of "ready". Once it's ready, you'll be able to view the secret that was created by the binding via: kubectl get secrets atp-demo-binding -o yaml where the secret name matches the 'name' value used in atp-binding.yaml. The secret will look similar to the following output:

This secret contains the contents of your ATP instance wallet and next we'll mount these as a volume inside of the application deployment.  Let's create a final YAML file called atp-demo.yaml and populate it like below.  Note, there is currently a bug in the service broker that double encodes the secrets, so it's currently necessary to use an initContainer to get the values properly decoded.
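To make the shape of that workaround concrete, here is a rough sketch only: the plain-text secret name (atp-user-secret) and the exact decode loop are assumptions, while the binding secret name, the environment variables, and the /db-demo/creds mount path come from the description in this post. It is shown as a bare Pod for brevity; the post wraps the same pod spec in a Deployment.

cat <<'EOF' > atp-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: atp-demo
spec:
  volumes:
    - name: wallet-encoded
      secret:
        secretName: atp-demo-binding     # secret created by the service binding
    - name: wallet
      emptyDir: {}
  initContainers:
    - name: decode-wallet
      image: busybox
      command: ['sh', '-c', 'for f in /tmp/encoded/*; do base64 -d $f > /db-demo/creds/$(basename $f); done']
      volumeMounts:
        - name: wallet-encoded
          mountPath: /tmp/encoded
        - name: wallet
          mountPath: /db-demo/creds
  containers:
    - name: atp-demo
      image: alpine:latest
      command: ['sh', '-c', 'sleep 3600']
      env:
        - name: DB_ADMIN_USER
          value: 'ADMIN'
        - name: DB_ADMIN_PWD
          valueFrom:
            secretKeyRef:
              name: atp-user-secret      # hypothetical plain-text secret from step 6
              key: password
        - name: WALLET_PWD
          valueFrom:
            secretKeyRef:
              name: atp-user-secret      # hypothetical
              key: walletPassword
      volumeMounts:
        - name: wallet
          mountPath: /db-demo/creds
EOF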

Here we're creating a basic Alpine Linux container just to test the service instance. Your application deployment would use a Docker image with your application, but the format and premise would be nearly identical to this. Create the deployment with kubectl create -f atp-demo.yaml and once the pod is in a "ready" state we can launch a terminal and test things out a bit:

Note that we have 3 environment variables available in the instance:  DB_ADMIN_USER, DB_ADMIN_PWD and WALLET_PWD.  We also have a volume available at /db-demo/creds containing all of our wallet contents that we need to make a connection to the new ATP instance.

Check out the full instructions for more information or background on the ATP service broker. The ability to bind to an existing ATP instance is scheduled as an enhancement to the service broker in the near future, and some other exciting features are planned.

Dell Boomi Training: Day 8 Review/Introduction & Q/As

Online Apps DBA - Sun, 2019-06-09 09:00

[Q/A] Dell Boomi: Day 8: Cache & APIs   Dell Boomi is a business unit acquired by Dell in 2010 that specializes in cloud-based integration, API management, and Master Data Management. Dell Boomi AtomSphere is a multi-tenant, cloud-based integration platform that facilitates data and application integration.   It is a leading iPaaS which helps to […]

The post Dell Boomi Training: Day 8 Review/Introduction & Q/As appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups)

Online Apps DBA - Sun, 2019-06-09 01:14

[Q/A] 1Z0-932 Oracle Cloud Infra Architect Day 2: IAM (Compartments, Policies, Users, Groups) IAM in OCI carries only 10% weight in the 1Z0-932 certification exam, but it is one of the most important topics, as IAM controls who has access to which OCI resources (Network, Storage, Compute, Database, etc.). It is important that you understand what is […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 2: IAM (Compartments, Policies, Users, Groups) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Building PivotalMySQLWeb using Cloud Native Buildpacks (CNB)

Pas Apicella - Sun, 2019-06-09 00:28
Not heard of Cloud Native Buildpacks? If not, this post will show why you might want to take a look. If you want to make your developers more productive with Kubernetes, you're going to want to look at Cloud Native Buildpacks.

https://buildpacks.io/

Until cloud platforms such as Heroku and Pivotal Cloud Foundry incorporate the Buildpack v3 lifecycle, the fastest way to try Cloud Native Buildpacks is via the pack CLI, which integrates with your local Docker daemon. Here is an example below taking the Pivotal MySQLWeb application and creating an OCI-compliant image from it.

Pre Steps:

1. Install pack using this link

  https://buildpacks.io/docs/install-pack/

2. Ensure you have Docker CE installed; if not, use this link:

  https://hub.docker.com/search/?type=edition&offering=community

Steps:

1. I am using Pivotal MySQLWeb, which I have packaged using Maven and then exploded the JAR onto the file system to avoid compilation during the image build. You can still use plain source code and Cloud Native Buildpacks will work, but in this example I skip the Maven compilation step by using an already-compiled, exploded JAR, which is what a build service on a cloud platform would do in any case. A rough sketch of that preparation step is shown below.
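Something along these lines, assuming a standard Spring Boot Maven build; the source directory and JAR name below are placeholders:

$ cd PivotalMySQLWeb-source
$ mvn -DskipTests package
$ mkdir ../PivotalMySQLWeb && cd ../PivotalMySQLWeb
$ jar -xf ../PivotalMySQLWeb-source/target/PivotalMySQLWeb-1.0.jar   # placeholder JAR name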

Let's start by using "pack" to create our image as per below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pivotal-mysql-web --path ./PivotalMySQLWeb

Using default builder image cloudfoundry/cnb:cflinuxfs3
Pulling image index.docker.io/cloudfoundry/cnb:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb
18d7ea8d445c: Pull complete
18d0be9dc457: Pull complete
f5407c34df38: Pull complete
35c61e03e6bf: Pull complete
40d144c93ada: Pull complete
4f4fb700ef54: Pull complete
0432ec3bb9f8: Pull complete
3731e128636c: Pull complete
1bab066bbafe: Pull complete
4cc53e89f635: Pull complete
4fd62e90f994: Pull complete
dc9fa77b2cd2: Pull complete
3cd4ed6e9bbf: Pull complete
a525f8221dc8: Pull complete
f01bc40f59c5: Pull complete
1f9842b1696d: Pull complete
3e15eeb884d5: Pull complete
3c0f59c7956f: Pull complete
c3e6214340d9: Pull complete
6955f2c8bfad: Pull complete
5112994886a0: Pull complete
e19195f86112: Pull complete
07fb5cd454f2: Pull complete
Digest: sha256:197439e9ccc699daa6431bd7154c80b3b0ce75b072792a0e93edd6779756f3bc
Status: Downloaded newer image for cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Pulling image cloudfoundry/cnb-run:cflinuxfs3
cflinuxfs3: Pulling from cloudfoundry/cnb-run
0a25bf28c5eb: Pull complete
7216becd0525: Pull complete
Digest: sha256:f9605c5af04b2ba04918879f2bf9d37c55620ae28e73b94e9926cd97bbf8fe96
Status: Downloaded newer image for cloudfoundry/cnb-run:cflinuxfs3
Using build cache volume pack-cache-1f2556cf858e.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/library/pivotal-mysql-web:latest@8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pivotal-mysql-web

2. Inspect the docker image on your laptop as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker image inspect pivotal-mysql-web

[
    {
        "Id": "sha256:8957afa91f464e2c0adc24968c31613148b9905ff1fb90ec59ff84e165d939ac",
        "RepoTags": [
            "pivotal-mysql-web:latest"
        ],
        "RepoDigests": [],
        "Parent": "",
        "Comment": "",
        "Created": "2019-06-05T05:25:58Z",
        "Container": "",
        "ContainerConfig": {

...

3. Run the docker image as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ docker run --rm -p 8080:8080 pivotal-mysql-web

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v2.1.0.RELEASE)

2019-06-05 05:30:43.005  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Starting PivotalMySqlWebApplication on 5d21f8f32ba4 with PID 1 (/workspace/BOOT-INF/classes started by vcap in /workspace)
2019-06-05 05:30:43.009  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : No active profile set, falling back to default profiles: default
2019-06-05 05:30:44.662  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2019-06-05 05:30:44.686  INFO 1 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2019-06-05 05:30:44.687  INFO 1 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/9.0.12
2019-06-05 05:30:44.698  INFO 1 --- [           main] o.a.catalina.core.AprLifecycleListener   : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/layers/org.cloudfoundry.openjdk/openjdk-jre/lib:/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib]
2019-06-05 05:30:44.793  INFO 1 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2019-06-05 05:30:44.794  INFO 1 --- [           main] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 1736 ms
2019-06-05 05:30:45.130  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'characterEncodingFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'webMvcMetricsFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2019-06-05 05:30:45.131  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'formContentFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'requestContextFilter' to: [/*]
2019-06-05 05:30:45.132  INFO 1 --- [           main] .s.DelegatingFilterProxyRegistrationBean : Mapping filter: 'springSecurityFilterChain' to: [/*]
2019-06-05 05:30:45.133  INFO 1 --- [           main] o.s.b.w.servlet.FilterRegistrationBean   : Mapping filter: 'httpTraceFilter' to: [/*]
2019-06-05 05:30:45.134  INFO 1 --- [           main] o.s.b.w.servlet.ServletRegistrationBean  : Servlet dispatcherServlet mapped to [/]
2019-06-05 05:30:45.436  INFO 1 --- [           main] o.s.s.concurrent.ThreadPoolTaskExecutor  : Initializing ExecutorService 'applicationTaskExecutor'
2019-06-05 05:30:45.851  INFO 1 --- [           main] .s.s.UserDetailsServiceAutoConfiguration :

Using generated security password: 3823aef6-6f72-4f5f-939d-bbd3d57ec2fa

2019-06-05 05:30:45.931  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: Ant [pattern='/**'], []
2019-06-05 05:30:45.967  INFO 1 --- [           main] o.s.s.web.DefaultSecurityFilterChain     : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2e140e59, org.springframework.security.web.context.SecurityContextPersistenceFilter@26ae880a, org.springframework.security.web.header.HeaderWriterFilter@25a73de1, org.springframework.security.web.csrf.CsrfFilter@652ab8d9, org.springframework.security.web.authentication.logout.LogoutFilter@17814b1c, org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter@54f66455, org.springframework.security.web.authentication.ui.DefaultLoginPageGeneratingFilter@58399d82, org.springframework.security.web.authentication.ui.DefaultLogoutPageGeneratingFilter@49a71302, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4c03a37, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@3c017078, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@298d9a05, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5cd61783, org.springframework.security.web.session.SessionManagementFilter@771db12c, org.springframework.security.web.access.ExceptionTranslationFilter@5f303ecd, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@73ab3aac]
2019-06-05 05:30:46.000  INFO 1 --- [           main] o.s.b.a.e.web.EndpointLinksResolver      : Exposing 9 endpoint(s) beneath base path '/actuator'
2019-06-05 05:30:46.096  INFO 1 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2019-06-05 05:30:46.101  INFO 1 --- [           main] c.p.p.m.PivotalMySqlWebApplication       : Started PivotalMySqlWebApplication in 3.654 seconds (JVM running for 4.324)

4. Browse to localhost:8080 to invoke the application just to be sure it worked
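Alternatively, a quick smoke test from another terminal (depending on the application's security configuration you may be redirected to a login page rather than the home page):

$ curl -i http://localhost:8080/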



5. Publish the OCI compliant image to your registry. In this example I am using Docker Hub as shown below

papicella@papicella:~/pivotal/PCF/APJ/PPTX/CNCF/buildpacks.io/demos$ pack build pasapples/pivotal-mysql-web:latest --publish --path ./PivotalMySQLWeb --no-pull

Using default builder image cloudfoundry/cnb:cflinuxfs3
Selected run image cloudfoundry/cnb-run:cflinuxfs3 from builder
Using build cache volume pack-cache-a4a78257c7be.build
Executing lifecycle version 0.2.1
===> DETECTING
[detector] Trying group 1 out of 4 with 8 buildpacks...
[detector] ======== Results ========
[detector] pass: Cloud Foundry OpenJDK Buildpack
[detector] skip: Cloud Foundry Build System Buildpack
[detector] pass: Cloud Foundry JVM Application Buildpack
[detector] skip: Cloud Foundry Azure Application Insights Buildpack
[detector] skip: Cloud Foundry Debug Buildpack
[detector] skip: Cloud Foundry Google Stackdriver Buildpack
[detector] skip: Cloud Foundry JMX Buildpack
[detector] skip: Cloud Foundry Procfile Buildpack
===> RESTORING
[restorer] cache '/cache': metadata not found, nothing to restore
===> ANALYZING
===> BUILDING
[builder] -----> Cloud Foundry OpenJDK Buildpack 1.0.0-M8
[builder] -----> OpenJDK JRE 11.0.3: Contributing to layer
[builder]        Downloading from https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.3%2B7/OpenJDK11U-jre_x64_linux_hotspot_11.0.3_7.tar.gz
[builder]        Verifying checksum
[builder]        Expanding to /layers/org.cloudfoundry.openjdk/openjdk-jre
[builder]        Writing JAVA_HOME to shared
[builder]
[builder] -----> Cloud Foundry JVM Application Buildpack 1.0.0-M8
[builder] -----> Main-Class Classpath: Contributing to layer
[builder]        Writing CLASSPATH to launch
[builder] -----> Process types:
[builder]        task: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]        web:  java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
[builder]
===> EXPORTING
[exporter] Exporting layer 'app' with SHA sha256:b32618ed6b86fb496a4ce33db9df49fdd4ef16c5646b174b5643c8befcb7408a
[exporter] Exporting layer 'config' with SHA sha256:9538e967fa10f23b3415c382a3754ebf4c2645c20b6d76af519236c1181e7639
[exporter] Exporting layer 'launcher' with SHA sha256:04ca7957074763290a9abe6a067ce8c902a2ab51ed6c55102964e3f3294cdebd
[exporter] Exporting layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:e540f1464509ac673a25bd2f24c7dd6875f805c0dd35e9af84dd4669e2fd0c93
[exporter] Exporting layer 'org.cloudfoundry.jvmapplication:main-class' with SHA sha256:8537197b3f57d86a59397b89b4fbdd14900a602cc12961eae338b9ef2513cdc0
[exporter]
[exporter] *** Image: index.docker.io/pasapples/pivotal-mysql-web:latest@sha256:c862eda516289c2daa29580c95b74b4d72eca9caf941a3a6ac2bf2bd886057e5
===> CACHING
[cacher] Caching layer 'org.cloudfoundry.openjdk:d2df8bc799b09c8375f79bf646747afac3d933bb1f65de71d6c78e7466ff8fe4' with SHA sha256:11439713b023be71211cb83ecd56a1be63e0c0be3e4814a18cc4c71d2264dea5
Successfully built image pasapples/pivotal-mysql-web:latest


At this point you have an OCI-compliant image sitting in your registry, ready to be consumed for your Kubernetes application needs, all from just source code (or, as in this example, pre-compiled source code for a Java application). Let's not forget this supports a polyglot programming model, so it works for NodeJS, Python, and anything else that buildpacks support. A quick sketch of running the published image on a Kubernetes cluster follows below.
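For example, a minimal, hypothetical way to run the image pushed above on a cluster you are already connected to (kubectl context and any registry credentials are assumed to be in place):

$ kubectl create deployment pivotal-mysql-web --image=pasapples/pivotal-mysql-web:latest
$ kubectl expose deployment pivotal-mysql-web --port=8080 --type=LoadBalancer
$ kubectl get pods,svc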

More Information

1. Cloud Native Buildpacks: an Industry-Standard Build Process for Kubernetes and Beyond.

2. buildpacks.io Home Page

Categories: Fusion Middleware

Dell Boomi Training: How to check number of Connection Licenses in Boomi Account

Online Apps DBA - Sat, 2019-06-08 03:10

Connection Licenses in Boomi Account Our [New Blog] Dell Boomi: How to check the number of Connection Licenses in Boomi Account at https://k21academy.com/dellboomi21 is ready to explain: ✔ How to check the number of Connection Licenses in Boomi Account with, ✔ Step-Wise Guidance To perform the same. ✔ Types of Environments (PROD & TEST) & much […]

The post Dell Boomi Training: How to check number of Connection Licenses in Boomi Account appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

RAC Installation Logs

Michael Dinh - Fri, 2019-06-07 12:24

Note to self on the log locations for a two-node RAC installation and DB creation.

Oracle Universal Installer logs for GI/DB:

[oracle@racnode-dc1-1 logs]$ pwd; ls -lhrt
/u01/app/oraInventory/logs
total 2.3M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-13-PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall    0 Jun  7 16:57 oraInstall2019-06-07_04-57-35-PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-35-PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.8K Jun  7 16:58 UpdateNodeList2019-06-07_04-57-13-PM.log
-rw-r----- 1 oracle oinstall  153 Jun  7 17:06 oraInstall2019-06-07_04-57-13-PM.out
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
-rw-r----- 1 oracle oinstall   47 Jun  7 17:09 time2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:09 oraInstall2019-06-07_05-09-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:13 oraInstall2019-06-07_05-09-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall   29 Jun  7 17:14 oraInstall2019-06-07_05-09-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:14 AttachHome2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  507 Jun  7 17:14 silentInstall2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  14K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 9.5K Jun  7 17:15 UpdateNodeList2019-06-07_05-09-01PM.log
-rw-r----- 1 oracle oinstall  496 Jun  7 17:15 oraInstall2019-06-07_05-09-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 17:15 installActions2019-06-07_05-09-01PM.log
[oracle@racnode-dc1-1 logs]$

silentInstall*.log

[oracle@racnode-dc1-1 logs]$ grep successful silent*.log

silentInstall2019-06-07_04-39-01PM.log:The installation of Oracle Grid Infrastructure 12c was successful.

silentInstall2019-06-07_05-09-01PM.log:The installation of Oracle Database 12c was successful.

[oracle@racnode-dc1-1 logs]$

installActions*.log

[oracle@racnode-dc1-1 logs]$ grep "Using paramFile" install*.log

installActions2019-06-07_04-39-01PM.log:INFO: Using paramFile: /u01/stage/12.1.0.2/grid/install/oraparam.ini

installActions2019-06-07_05-09-01PM.log:Using paramFile: /u01/stage/12.1.0.2/database/install/oraparam.ini

[oracle@racnode-dc1-1 logs]$

Run root script after installation:
$GRID_HOME/root.sh

[oracle@racnode-dc1-1 install]$ pwd; ls -lhrt root*.log
/u01/app/12.1.0.2/grid/install
-rw------- 1 oracle oinstall 7.4K Jun  7 16:51 root_racnode-dc1-1_2019-06-07_16-44-37.log
[oracle@racnode-dc1-1 install]$

Run configToolAllCommands:
$GRID_HOME/cfgtoollogs/configToolAllCommands RESPONSE_FILE=/u01/stage/rsp/configtoolallcommands.rsp

[oracle@racnode-dc1-1 oui]$ pwd; ls -lhrt
/u01/app/12.1.0.2/grid/cfgtoollogs/oui
total 1.2M
-rw-r----- 1 oracle oinstall    0 Jun  7 16:39 oraInstall2019-06-07_04-39-01PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  121 Jun  7 16:43 oraInstall2019-06-07_04-39-01PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  11K Jun  7 16:43 AttachHome2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall  544 Jun  7 16:43 silentInstall2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall  12K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall 8.0K Jun  7 16:44 UpdateNodeList2019-06-07_04-39-01PM.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 16:44 oraInstall2019-06-07_04-39-01PM.out
-rw-r----- 1 oracle oinstall 1.1M Jun  7 16:44 installActions2019-06-07_04-39-01PM.log
-rw-r--r-- 1 oracle oinstall    0 Jun  7 16:57 configActions2019-06-07_04-57-10-PM.err
-rw-r--r-- 1 oracle oinstall  13K Jun  7 17:06 configActions2019-06-07_04-57-10-PM.log
-rw------- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err
-rw-r----- 1 oracle oinstall    0 Jun  7 17:06 oraInstall2019-06-07_05-06-42PM.err.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log.racnode-dc1-2
-rw-r----- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out.racnode-dc1-2
-rw-r----- 1 oracle oinstall  12K Jun  7 17:07 UpdateNodeList2019-06-07_05-06-42PM.log
-rw------- 1 oracle oinstall   33 Jun  7 17:07 oraInstall2019-06-07_05-06-42PM.out
[oracle@racnode-dc1-1 oui]$

dbca

[oracle@racnode-dc1-1 dbca]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca
total 116K
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:02 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM.lck
drwxrwxr-x 3 oracle oinstall 4.0K Jun  7 17:02 _mgmtdb
-rwxrwxr-x 1 oracle oinstall 105K Jun  7 17:03 trace.log_OraGI12Home1_2019-06-07_05-02-52-PM
drwxr-x--- 2 oracle oinstall 4.0K Jun  7 17:23 hawk
[oracle@racnode-dc1-1 dbca]$

dbca _mgmtdb

[oracle@racnode-dc1-1 _mgmtdb]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/_mgmtdb
total 19M
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 16:58 trace.log.lck
-rwxrwxr-x 1 oracle oinstall  18M Jun  7 16:59 tempControl.ctl
-rwxrwxr-x 1 oracle oinstall  349 Jun  7 16:59 CloneRmanRestore.log
-rwxrwxr-x 1 oracle oinstall  596 Jun  7 16:59 cloneDBCreation.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:00 rmanUtil
-rwxrwxr-x 1 oracle oinstall 2.1K Jun  7 17:00 plugDatabase.log
-rwxrwxr-x 1 oracle oinstall  428 Jun  7 17:01 dbmssml_catcon_12271.lst
-rwxrwxr-x 1 oracle oinstall 3.5K Jun  7 17:01 dbmssml0.log
-rwxrwxr-x 1 oracle oinstall  396 Jun  7 17:01 postScripts.log
-rwxrwxr-x 1 oracle oinstall    0 Jun  7 17:01 lockAccount.log
-rwxrwxr-x 1 oracle oinstall  442 Jun  7 17:01 catbundleapply_catcon_12348.lst
-rwxrwxr-x 1 oracle oinstall 3.9K Jun  7 17:01 catbundleapply0.log
-rwxrwxr-x 1 oracle oinstall  424 Jun  7 17:01 utlrp_catcon_12416.lst
-rwxrwxr-x 1 oracle oinstall 9.2K Jun  7 17:02 utlrp0.log
-rwxrwxr-x 1 oracle oinstall  964 Jun  7 17:02 postDBCreation.log
-rwxrwxr-x 1 oracle oinstall  737 Jun  7 17:02 OraGI12Home1__mgmtdb_creation_checkpoint.xml
-rwxrwxr-x 1 oracle oinstall  877 Jun  7 17:02 _mgmtdb.log
-rwxrwxr-x 1 oracle oinstall 1.1M Jun  7 17:02 trace.log
-rwxrwxr-x 1 oracle oinstall 1.3K Jun  7 17:02 DetectOption.log
drwxrwxr-x 2 oracle oinstall 4.0K Jun  7 17:03 vbox_rac_dc1

[oracle@racnode-dc1-1 _mgmtdb]$ tail _mgmtdb.log
Completing Database Creation
DBCA_PROGRESS : 68%
DBCA_PROGRESS : 79%
DBCA_PROGRESS : 89%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/_mgmtdb.
Database Information:
Global Database Name:_mgmtdb
System Identifier(SID):-MGMTDB
[oracle@racnode-dc1-1 _mgmtdb]$

dbca hawk

[oracle@racnode-dc1-1 hawk]$ pwd; ls -lhrt
/u01/app/oracle/cfgtoollogs/dbca/hawk
total 34M
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 trace.log.lck
-rw-r----- 1 oracle oinstall    0 Jun  7 17:16 rmanUtil
-rw-r----- 1 oracle oinstall  18M Jun  7 17:17 tempControl.ctl
-rw-r----- 1 oracle oinstall  384 Jun  7 17:17 CloneRmanRestore.log
-rw-r----- 1 oracle oinstall 2.8K Jun  7 17:20 cloneDBCreation.log
-rw-r----- 1 oracle oinstall    8 Jun  7 17:20 postScripts.log
-rw-r----- 1 oracle oinstall    0 Jun  7 17:21 CreateClustDBViews.log
-rw-r----- 1 oracle oinstall    6 Jun  7 17:21 lockAccount.log
-rw-r----- 1 oracle oinstall 1.3K Jun  7 17:22 postDBCreation.log
-rw-r----- 1 oracle oinstall  511 Jun  7 17:23 OraDB12Home1_hawk_creation_checkpoint.xml
-rw-r----- 1 oracle oinstall  24K Jun  7 17:23 hawk.log
-rw-r----- 1 oracle oinstall  16M Jun  7 17:23 trace.log

[oracle@racnode-dc1-1 hawk]$ tail hawk.log
DBCA_PROGRESS : 73%
DBCA_PROGRESS : 76%
DBCA_PROGRESS : 85%
DBCA_PROGRESS : 94%
DBCA_PROGRESS : 100%
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/cfgtoollogs/dbca/hawk.
Database Information:
Global Database Name:hawk
System Identifier(SID) Prefix:hawk
[oracle@racnode-dc1-1 hawk]$

How to Install Oracle Java in Oracle Cloud Infrastructure

OTN TechBlog - Fri, 2019-06-07 11:10
Oracle Java Support and Updates Included in Oracle Cloud Infrastructure

We recently announced that Oracle Java, Oracle’s widely adopted and proven Java Development Kit, is now included with Oracle Cloud Infrastructure subscriptions at no extra cost.

In this blog post I show how to install Oracle Java on Oracle Linux running in an OCI compute shape, using RPMs from the Oracle Linux yum servers available within OCI.

Installing Oracle Java

The Oracle Java RPMs are in the ol7_oci_included repository on Oracle Linux yum server accessible from within OCI.

To enable this repository:

$ sudo yum install -y --enablerepo=ol7_ociyum_config oci-included-release-el7

As of this writing, the repository contains Oracle Java 8, 11, and 12.

$ yum list jdk*
Loaded plugins: langpacks, ulninfo
Available Packages
jdk-11.0.3.x86_64      2000:11.0.3-ga        ol7_oci_included
jdk-12.0.1.x86_64      2000:12.0.1-ga        ol7_oci_included
jdk1.8.x86_64          2000:1.8.0_211-fcs    ol7_oci_included

To install Oracle Java 12, version 12.0.1:

$ sudo yum install jdk-12.0.1

To confirm the Java version:

$ java -version
java version "12.0.1" 2019-04-16
Java(TM) SE Runtime Environment (build 12.0.1+12)
Java HotSpot(TM) 64-Bit Server VM (build 12.0.1+12, mixed mode, sharing)

Multiple JDK versions and setting the default

If you install multiple versions of the JDK, you may want to set the default version using alternatives. For example, let’s first install Oracle Java 8:

$ sudo yum install -y jdk1.8

The alternatives command shows that two programs provide java:

$ sudo alternatives --config java

There are 2 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/java/jdk-12.0.1/bin/java
   2           /usr/java/jdk1.8.0_211-amd64/jre/bin/java

Choosing selection 2 sets the default to JDK 1.8 (Oracle Java 8):
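If you'd rather not answer the interactive prompt, alternatives can also set the default directly; for example, using the JDK 1.8 path from the listing above:

$ sudo alternatives --set java /usr/java/jdk1.8.0_211-amd64/jre/bin/java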

$ java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

Conclusion

Oracle Cloud Infrastructure includes Oracle Java, with support and updates, at no additional cost. By providing Oracle Java RPMs on OCI's yum servers, installation is greatly simplified.

Build and Deploy a Golang Application Using Oracle Developer Cloud

OTN TechBlog - Fri, 2019-06-07 10:36

Golang recently became a trending programming language in the developer community. This blog will help you develop, build, and deploy your first Golang-based REST application using Docker and Kubernetes on Oracle Developer Cloud.

Before getting our first Golang application up and running, let’s examine Golang a little.

What is Golang?

Golang, or Go for short, is an open source, statically typed, compiled, general-purpose programming language. It is fast and supports concurrency and cross-platform compilation. To learn more about Go, visit the following link:

https://golang.org/

Let’s get set and Go

To develop, build, and deploy a Golang-based application, you’ll need to create the following files on your machine:

  • main.go - Contains the Go application code and the listener
  • Dockerfile - Builds the Docker image for the Go application code
  • gorest.yml – A YAML file that deploys the Docker image of the Go application on Oracle Container Engine for Kubernetes

Here are the code snippets for the files mentioned above.

main.go

This file imports the required packages and defines the handler() function for incoming requests, which is registered in the main() function, where the HTTP listener port is defined. As the name suggests, the errorHandler() function comes into play when an error occurs.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello %s!", r.URL.Path[1:])
	fmt.Println("RESTfulServ. on:8093, Controller:", r.URL.Path[1:])
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Starting Restful services...")
	fmt.Println("Using port:8093")
	err := http.ListenAndServe(":8093", nil)
	log.Print(err)
	errorHandler(err)
}

func errorHandler(err error) {
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}
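To try the service locally before containerizing it, you can run it directly with the Go toolchain; the path segment after the port becomes the greeting:

$ go run main.go
Starting Restful services...
Using port:8093

Then, from another terminal:

$ curl http://localhost:8093/World
Hello World!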

Dockerfile

This Dockerfile pulls the latest Go Docker image from DockerHub, creates an app folder in the container, and then adds all the application files on the build machine (from the Git repository clone) to the app folder in the container. Next, it makes the app directory the working directory, runs the go build command to compile the Go code, and sets the resulting main binary as the container's command.

 

FROM golang:latest
RUN mkdir /app
ADD . /app/
WORKDIR /app
RUN go build -o main .
CMD ["/app/main"]
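If you want to build and smoke-test the image locally before wiring up the build jobs, something like the following should work (using the same image name the build job will push later):

$ docker build -t <DockerHub username>/gorest:1.0 .
$ docker run --rm -p 8093:8093 <DockerHub username>/gorest:1.0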

 

gorest.yml

The script shown below defines the Kubernetes service and deployment, including the respective names, ports, and Docker image that will be downloaded from the DockerHub registry and deployed on the Kubernetes cluster. In the script, we defined the service and deployment as gorest-se, the port as 8093, and the container image as <DockerHub username>/gorest:1.0

kind: Service
apiVersion: v1
metadata:
  name: gorest-se
  labels:
    app: gorest-se
spec:
  type: NodePort
  selector:
    app: gorest-se
  ports:
    - port: 8093
      targetPort: 8093
      name: http
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: gorest-se
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: gorest-se
        version: v1
    spec:
      containers:
        - name: gorest-se
          image: abhinavshroff/gorest:1.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8093
---

 

Create a Git repository in the Oracle Developer Cloud Project

To create a Git repository in the Developer Cloud project, navigate to the Project Home page and then click the +Create Repository button, found on the right-hand side of the page. In the New Repository dialog, enter GoREST for the repository Name and select Empty Repository for the Initial Content option, as shown. Then, click the Create button.

 

You should now see the GoREST.git repository created in the Repositories tab on the Project Home page. Click the Clone dropdown and then click the copy icon, as shown in the screen shot, to copy the Git repository HTTPS URL. Keep this URL handy.

 

Push the code to the Git Repository

Now, in your command prompt window, navigate to the GoREST application folder and execute the following Git commands to push the application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands. Also, you'll be using the Git URL that you just copied from the Repositories tab, as previously mentioned.

git init

git add --all

git commit -m "First commit"

git remote add origin <git repository url>

git push origin master

Your GoREST.git repository should have the structure shown below.

 

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button. 

In the New Job dialog, enter BuildGoRESTAppl for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build the Docker image for the Go REST application code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/gorest:1.0

In the Docker push step, enter <DockerHub Username>/gorest for the Image Name and 1.0 for the Version Tag. Then click the Save button.

To create another build job for deployment, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployGoRESTAppl for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildGoRESTAppl build job to the Kubernetes cluster.

The first thing you’ll do to configure the DeployGoRESTAppl build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select GoREST.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, select Unix Shell from the Add Step dropdown and, in the Unix Shell build step, enter the following script.

 

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f gorest.yml
sleep 30
kubectl get services gorest-se
kubectl get pods
kubectl describe pods

 

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

In the Create Pipeline dialog, you can enter the Name as GoApplPipeline. Then click the Create button.

 

Drag and drop the BuildGoRESTAppl and DeployGoRESTAppl build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Then click on the Save button.

 

Click the Build button, as shown, to run the build pipeline. The BuildGoRESTAppl build job will be executed first and, if it is successful, then the DeployGoRESTAppl build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployGoRESTAppl build job.  Then click the Build Log icon for the executed build.

 

You should see messages that the service and deployment were successfully created.  Search the log for the gorest-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port to access the microservice, as shown below.

 

Enter the IP address and port that you retrieved from the log, into the browser using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/<your name>

You should see the “Hello <your name>!” message in your browser.

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Golang-based REST applications and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

PS360 enhancement: Added report of DDL models

David Kurtz - Fri, 2019-06-07 09:08
I have written several blogs and presentations recently about how and how not to collect statistics in PeopleSoft.
  • Managing Cost-Based Optimizer Statistics for PeopleSoft recommends
    • If you are going to continue to use DBMS_STATS in the DDL model then
      • Do not specify ESTIMATE_PERCENT because it disables the hash-based number-of-distinct-values calculation, forcing it to go back to the COUNT(DISTINCT ...) method that requires a sort, and may not produce accurate values because it only samples data.
      • Do not specify METHOD_OPT='FOR ALL INDEXED COLUMNS SIZE 1' because this will not collect histograms on indexed columns, and will not update column statistics on unindexed columns.
      • Do specify FORCE=>TRUE so that you can lock and delete statistics on temporary records.
    • However, the alternative is to use the GFCSTATS11 package to collect these statistics.  This package is controlled by a metadata table, so you can define statistics-collection behaviour for specific records.
  • How Not to Collect Optimizer Statistics in an Application Engine Program
    • This blog explains why you should not explicitly code DBMS_STATS calls into Application Engine programs.
This has prompted me to add a new report to the PS360 utility that simply reports the various DDL models for Oracle.  Thus during a health check, I can see how statistics are collected during batch processes.
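For reference, a DDL model call that follows the recommendations above might look something like the sketch below. This is illustrative only – not the delivered PeopleSoft model – with [DBNAME] and [TBNAME] standing in for the usual DDL model parameters; ESTIMATE_PERCENT and METHOD_OPT are deliberately omitted so the DBMS_STATS defaults apply, and FORCE is specified.

DBMS_STATS.GATHER_TABLE_STATS(
   ownname => [DBNAME]
,  tabname => [TBNAME]
,  force   => TRUE  -- still gathers where statistics have been locked or deleted on temporary records
);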

PS360 can be downloaded from https://github.com/davidkurtz/ps360.




Oracle Advances Safer, More Transparent Retail Supply Chain

Oracle Press Releases - Fri, 2019-06-07 07:00
Press Release
Oracle Advances Safer, More Transparent Retail Supply Chain
Smarter software gives consumers peace of mind on the goods they buy and retailers the agility to resolve product issues quickly

WORLD FOOD SAFETY DAY, Redwood Shores, Calif.—Jun 7, 2019

The retail industry is more complex than ever. Not only are consumers looking for high-quality goods at fair prices, but they also want assurance that their purchases are safe and ethically sourced. That requires retailers to maintain transparency across their global supply chain networks. With new advancements in reporting and analytics and continued extension of integrations, Oracle Retail Brand Compliance Management Cloud Service is helping retailers monitor the integrity of their materials and end products, to improve customer experiences and protect their brands. 

Oracle Retail Brand Compliance is specifically designed to enable retailers, restaurants, food service providers and manufacturers to source, develop, track and market products. As products are developed, the solution audits and manages all aspects of the process, creating accurate and certified labeling detail against local regulatory and industry policies. As such, brands can rapidly and nimbly respond to and rectify product and industry incidents.

In 2016, a multistate listeria outbreak in the U.S. impacted several name brand grocery chains. Tainted frozen vegetables and fruits sourced from one plant were included in approximately 358 consumer products sold under 42 separate brands. Using Oracle Retail Brand Compliance, one well-known grocer was able to quickly pull SKUs, identify where impacted product was being sold and communicate with customers, mitigating the situation, and protecting its shoppers.

“Delivering on your brand promise today is as much about quality and trust as it is about cost,” said Jeff Warren, vice president of strategy and solution management, Oracle Retail. “Customers expect retailers to know everything about the items they purchase, whether this is information on availability, ingredients or the manufacturing process. They expect transparency and greater access to information, in real time. The biggest names in grocery rely on Oracle Retail Brand Compliance to meet these expectations while protecting their customers and brands.” 

Driving Compliance and Safety Across the Retail Supply Chain

Oracle Retail Brand Compliance is one of the world’s most widely adopted brand management solutions, representing two-thirds of all private label compliance and technical portals in use today. The offering provides a single point of authentication of all audits, accreditations and certificates. With it, common data sets are entered once and shared amongst the community to report on quality, environment, freshness and sustainability metrics and track the movement of products to drive risk assessment and rapid response to incidents. Today, the Oracle Retail Brand Compliance community represents more than 250,000 suppliers offering 750,000 consumer products.

With new enhancements to the offering, retail supply chain professionals will be able to more easily provide transparency of product information across partners and channels, with enhanced:

  • KPI dashboards that deliver key insights, business intelligence and operational reporting on supply chain and product analysis.
  • Consumer product compositions to enable in-store formulations, labeling and digital dietary advice.
  • API integrations with key Oracle applications.
 

“For retailers, having full visibility across their entire supply chains is a game-changer. It can mean the difference between minutes or weeks when responding to incidents, tracking and removing contaminated food from store shelves and notifying consumers. Brand damage aside, that can mean the difference between life and death,” noted Paul Woodward, senior director of Oracle Retail supply chain solutions.

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


[Oracle Integration Cloud] ICS, PCS, VBCS Console Walkthrough

Online Apps DBA - Thu, 2019-06-06 23:58

When you work on Oracle Integration Cloud (OIC), then you’ll come across multiple Consoles/Dashboards like My Services, OCI Console, OIC Console, ICS Design Console, etc. ▪What are these various consoles in Oracle Integration Cloud? ▪How to access these Consoles? ▪What are common tasks these various consoles perform like Adapters, Integrations, Connections, etc? Check all this […]

The post [Oracle Integration Cloud] ICS, PCS, VBCS Console Walkthrough appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Scalar Subquery Costing

Jonathan Lewis - Thu, 2019-06-06 13:54

A question came up on the Oracle-l list-server a few days ago about how Oracle calculates costs for a scalar subquery in the select list, and it included an example to illustrate the point. I’ve reproduced the test below, with the output from an 18.3 test system. The numbers don’t match the numbers produced in the original posting but they are consistent with the general appearance.

rem
rem     Script:         ssq_costing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2019
rem     Purpose:        
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem

create table t_1k ( n1 integer ) ;
create table t_100k ( n1 integer ) ;

insert into t_1k
  select
         level
    from dual
    connect by level <= 1e3;

insert into t_100k
  select level
    from dual
    connect by level <= 1e5;

commit ;

begin
  dbms_stats.gather_table_stats ( null, 'T_1K') ;
  dbms_stats.gather_table_stats ( null, 'T_100K') ;
end ;
/

explain plan for
select 
        /*+ qb_name(QB_MAIN) */
        (
        select /*+ qb_name(QB_SUBQ) */ count(*)
        from t_1k
        where t_1k.n1 = t_100k.n1
        )
from t_100k
;

select * from table(dbms_xplan.display);

-----------------------------------------------------------------------------
| Id  | Operation          | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |        |   100K|   488K|  1533K  (2)| 00:01:00 |
|   1 |  SORT AGGREGATE    |        |     1 |     4 |            |          |
|*  2 |   TABLE ACCESS FULL| T_1K   |     1 |     4 |    17   (0)| 00:00:01 |
|   3 |  TABLE ACCESS FULL | T_100K |   100K|   488K|    36   (9)| 00:00:01 |
-----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter("T_1K"."N1"=:B1)

The key point to note is this – the scalar subquery has to execute 100,000 times because that’s the number of rows in the driving table. The cost for executing the scalar subquery once is 17 – so the total cost of the query should be 1,700,036 – not 1,533K (and for execution plans the K means x1000, not x1024). There’s always room for rounding errors, of course, but a check of the 10053 (CBO trace) file shows the numbers to be 17.216612 for the t_1k tablescan, 36.356072 for the t_100K tablescan, and 1533646.216412 for the whole query. So how is Oracle managing to get a cost that looks lower than it ought to be?
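Spelling out the naive expectation against the reported figure (just restating the arithmetic above):

\[
36 + 100{,}000 \times 17 = 1{,}700{,}036 \qquad \text{vs. the reported } 1{,}533{,}646
\]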

There’s plenty of scope for experimenting to see how the numbers change – and my first thought was simply to see what happens as you change the number of distinct values in the t_100K.n1 column. It would be rather tedious to go through the process of modifying the data a few hundred times to see what happens, so I took advantage of the get_column_stats() and set_column_stats() procedures in the dbms_stats package to create a PL/SQL loop that faked a number of different scenarios that lied about the actual table data.


delete from plan_table;
commit;

declare

        srec                    dbms_stats.statrec;
        n_array                 dbms_stats.numarray;

        m_distcnt               number;
        m_density               number;
        m_nullcnt               number;
        m_avgclen               number;


begin

        dbms_stats.get_column_stats(
                ownname         => user,
                tabname         => 't_100k',
                colname         => 'n1', 
                distcnt         => m_distcnt,
                density         => m_density,
                nullcnt         => m_nullcnt,
                srec            => srec,
                avgclen         => m_avgclen
        ); 

        for i in 1 .. 20 loop

                m_distcnt := 1000 * i;
                m_density := 1/m_distcnt;

                dbms_stats.set_column_stats(
                        ownname         => user,
                        tabname         => 't_100k',
                        colname         => 'n1', 
                        distcnt         => m_distcnt,
                        density         => m_density,
                        nullcnt         => m_nullcnt,
                        srec            => srec,
                        avgclen         => m_avgclen
                ); 


        execute immediate
        '
                explain plan set statement_id = ''' || m_distcnt || 
        '''
                for
                select
                        /*+ qb_name(QB_MAIN) */
                        (
                        select /*+ qb_name(QB_SUBQ) */ count(*)
                        from t_1k
                        where t_1k.n1 = t_100k.n1
                        )
                from t_100k
        ';
        
        end loop;       

end;
/

The code is straightforward. I’ve declared a few variables to hold the column stats from the t_100k.n1 column, called get_column_stats(), then looped 20 times through a process that changes the number of distinct values (and corresponding density) recorded in the column stats, then used execute immediate to call “explain plan” for the original query.

You’ll notice I’ve given each plan a separate statement_id that corresponds to the num_distinct that generated the plan. In the code above I’ve changed the num_distinct from 1,000 to 20,000 in steps of 1,000.

Once the PL/SQL block ends I’ll have a plan table with 20 execution plans stored in it and, rather than reporting those plans with calls to dbms_xplan.display(), I’m going to be selective about which rows and columns I report.

select
        statement_id, 
        io_cost,
        io_cost - lag(io_cost,1) over (order by to_number(statement_id)) io_diff,
        cpu_cost,
        cpu_cost - lag(cpu_cost,1) over (order by to_number(statement_id)) cpu_diff,
        cost
from 
        plan_table
where 
        id = 0
order by 
        to_number(statement_id)
;

I’ve picked id = 0 (the top line of the plan) for each statement_id and I’ve reported the cost column, which is made up of the io_cost column plus a scaled down value of the cpu_cost column. I’ve also used the analytic lag() function to calculate how much the io_cost and cpu_cost changed from the previous statement_id. Here are my results from 18c:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
1000                                17033            1099838920                 17253
2000                                34033      17000 2182897480 1083058560      34470
3000                                51033      17000 3265956040 1083058560      51686
4000                                68033      17000 4349014600 1083058560      68903
5000                                85033      17000 5432073160 1083058560      86119
6000                               102033      17000 6515131720 1083058560     103336
7000                               119033      17000 7598190280 1083058560     120553
8000                               136033      17000 8681248840 1083058560     137769
9000                               153033      17000 9764307400 1083058560     154986
10000                              170033      17000 1.0847E+10 1083058560     172202
11000                              197670      27637 1.2608E+10 1760725019     200191
12000                              338341     140671 2.1570E+10 8962036084     342655
13000                              457370     119029 2.9153E+10 7583261303     463200
14000                              559395     102025 3.5653E+10 6499938259     566525
15000                              647816      88421 4.1287E+10 5633279824     656073
16000                              725185      77369 4.6216E+10 4929119846     734428
17000                              793452      68267 5.0565E+10 4349223394     803565
18000                              854133      60681 5.4431E+10 3865976350     865019
19000                              908427      54294 5.7890E+10 3459031472     920005
20000                              957292      48865 6.1003E+10 3113128324     969492

The first pattern that hits the eye is the constant change of 17,000 in the io_cost in the first few lines of the output. For “small” numbers of distinct values the (IO) cost of the query is (33 + 17 * num_distinct) – in other words, the arithmetic seems to assume that it will execute the subquery once for each distinct value and then cache the results so that repeated executions for any given value will not be needed. This looks as if the optimizer is trying to match its arithmetic to the “scalar subquery caching” mechanism.
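Restating that observation as a formula and checking it against one row of the output above – nothing new here, just the observed pattern:

\[
\text{io\_cost}(N) \;=\; 33 + 17N \qquad \text{e.g. } 33 + 17 \times 3{,}000 = 51{,}033
\]

which is exactly the io_cost reported for statement_id 3000.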

But things change somewhere between 10,000 and 11,000 distinct values. The point comes where adding one more distinct value causes a much bigger jump in cost than 17, and that’s because Oracle assumes it’s reached a point where there’s a value that it won’t have room for in the cache and will have to re-run the subquery multiple times for that value as it scans the rest of the table. Let’s find the exact break point where that happens.

Changing my PL/SQL loop so that it calculates m_distcnt as “10910 + i”, this is the output from the final query:


-- m_distcnt := 10910 + i;

STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
10911                              185520            1.1834E+10                187887
10912                              185537         17 1.1835E+10    1083059     187904
10913                              185554         17 1.1836E+10    1083058     187921
10914                              185571         17 1.1837E+10    1083059     187938
10915                              185588         17 1.1838E+10    1083058     187956
10916                              185605         17 1.1839E+10    1083059     187973
10917                              185622         17 1.1841E+10    1083059     187990
10918                              185639         17 1.1842E+10    1083058     188007
10919                              185656         17 1.1843E+10    1083059     188025
10920                              185673         17 1.1844E+10    1083058     188042
10921                              185690         17 1.1845E+10    1083059     188059
10922                              185707         17 1.1846E+10    1083058     188076
10923                              185770         63 1.1850E+10    4027171     188140
10924                              185926        156 1.1860E+10    9914184     188298
10925                              186081        155 1.1870E+10    9912370     188455
10926                              186237        156 1.1880E+10    9910555     188613
10927                              186393        156 1.1890E+10    9908741     188770
10928                              186548        155 1.1900E+10    9906928     188928
10929                              186703        155 1.1909E+10    9905114     189085
10930                              186859        156 1.1919E+10    9903302     189243

If we have 10,922 distinct values in the column the optimizer calculates as if it will be able to cache them all; but if we have 10,923 distinct values the optimizer thinks that there’s going to be one value where it can’t cache the result and will have to run the subquery more than once.

Before looking at this in more detail let’s go to the other interesting point – when does the cost stop changing? We can see the cost increasing as the number of distinct values grows, but we saw at the start that the cost didn’t seem to get as large as we expected, so there must be a point where it stops increasing before it “ought” to.

I’ll jump straight to the answer: here’s the output from the test when I start num_distinct off at slightly less than half the number of rows in the table:


 -- m_distcnt := (50000 - 10) + i;

STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
49991                             1514281            9.6488E+10               1533579
49992                             1514288          7 9.6489E+10     473357    1533586
49993                             1514296          8 9.6489E+10     473337    1533594
49994                             1514303          7 9.6490E+10     473319    1533601
49995                             1514311          8 9.6490E+10     473299    1533609
49996                             1514318          7 9.6491E+10     473281    1533616
49997                             1514325          7 9.6491E+10     473262    1533624
49998                             1514333          8 9.6492E+10     473243    1533631
49999                             1514340          7 9.6492E+10     473224    1533639
50000                             1514348          8 9.6493E+10     473205    1533646
50001                             1514348          0 9.6493E+10          0    1533646
50002                             1514348          0 9.6493E+10          0    1533646
50003                             1514348          0 9.6493E+10          0    1533646
50004                             1514348          0 9.6493E+10          0    1533646
50005                             1514348          0 9.6493E+10          0    1533646
50006                             1514348          0 9.6493E+10          0    1533646
50007                             1514348          0 9.6493E+10          0    1533646
50008                             1514348          0 9.6493E+10          0    1533646
50009                             1514348          0 9.6493E+10          0    1533646
50010                             1514348          0 9.6493E+10          0    1533646

The cost just stops changing when num_distinct = half the rows in the table.

Formulae

During the course of these experiments I had been exchanging email messages with Nenad Noveljic via the Oracle-L list-server (full monthly archive here) and he came up with the suggestion of a three-part formula that assumed a cache size and gave a cost of

  • “tablescan cost + num_distinct * subquery unit cost” for values of num_distinct up to the cache size;
  • then, for values of num_distinct greater than the cache size and up to half the number of rows in the table, added a marginal cost representing the probability that some values would not be cached;
  • then for values of num_distinct greater than half the number of rows in the table reported the cost associated with num_distinct = half the number of rows in the table.

Hence:

  • for 1 <= num_distinct <= 10922, cost = (33 + num_distinct * 17)
  • for 10,923 <= num_distinct <= 50,000, cost = (33 + 10,922 * 17) + (1 – 10,922/num_distinct) * 100,000 * 17
  • for 50,000 <= num_distinct <= 100,000, cost = cost(50,000).

The middle line needs a little explanation: ( 1 – 10,922 / num_distinct ) is the probability that a value will not be in the cache; this has to be multiplied by 100,000 (the number of rows in the driving table) to give the expected number of rows that will not be cached, and then by 17 as the cost of running the subquery for those rows.

The middle line can be re-arranged as 33 + 17 * (10,922 + (1 – 10,922/num_distinct) * 100,000)
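Putting the three ranges together as a single piecewise function – with N for num_distinct, C for the number of cache entries (10,922 here), R = 100,000 rows in the driving table, 33 for the driving tablescan and 17 for one execution of the subquery – the suggested model is:

\[
\operatorname{cost}(N) =
\begin{cases}
33 + 17N & 1 \le N \le C \\
33 + 17\left(C + \left(1 - \dfrac{C}{N}\right)R\right) & C < N \le R/2 \\
\operatorname{cost}(R/2) & N > R/2
\end{cases}
\]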

Tweaking

At this point I could modify my code loop to report the calculated value for the cost and compare it with the actual cost to show you that the two values didn’t quite match. Instead I’ll jump forward a little bit to a correction that needs to be made to the formula above. It revolves around how Oracle determines the cache size. There’s a hidden parameter (which I mentioned in CBO Fundamentals) that controls scalar subquery caching. In the book I think I only referenced it in the context of subqueries in the “where” clause. The parameter is “_query_execution_cache_max_size” and has a default value of 131072 (power(2,17)) – so when I found that the initial formula didn’t quite work I made the following observation:

  • 131072 / 10922 = 12.00073
  • 131072 / 12 = 10922.666…

So I put 10922.66667 into the formula to see if that would improve things.
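As an aside, if you want to check the value of the hidden parameter on your own system, a query along the following lines – run as SYS against the x$ fixed tables, with the usual caveats about hidden parameters – should show it. This is a generic sketch, not part of the original investigation.

select  x.ksppinm   parameter_name,
        y.ksppstvl  current_value,
        y.ksppstdf  is_default
from    x$ksppi  x,
        x$ksppcv y
where   x.indx = y.indx
and     x.ksppinm = '_query_execution_cache_max_size'
;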

For the code change I added a variable m_cost to the PL/SQL block, and set it inside the loop as follows:

m_cost := round(33 + 17 * (10922.66667 + 100000 * (1 - (10922.66667 / m_distcnt))));

Then in the “execute immediate” I changed the “explain plan” line to read:

explain plan set statement_id = ''' || lpad(m_distcnt,7) || ' - ' || lpad(m_cost,8) ||

This allowed me to show the formula’s prediction of (IO)cost in final output, and here’s what I got for values of num_distinct in the region of 10,922:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
  10911 -   183901                 185520            1.1834E+10                187887
  10912 -   184057                 185537         17 1.1835E+10    1083059     187904
  10913 -   184212                 185554         17 1.1836E+10    1083058     187921
  10914 -   184368                 185571         17 1.1837E+10    1083059     187938
  10915 -   184524                 185588         17 1.1838E+10    1083058     187956
  10916 -   184680                 185605         17 1.1839E+10    1083059     187973
  10917 -   184836                 185622         17 1.1841E+10    1083059     187990
  10918 -   184992                 185639         17 1.1842E+10    1083058     188007
  10919 -   185147                 185656         17 1.1843E+10    1083059     188025
  10920 -   185303                 185673         17 1.1844E+10    1083058     188042
  10921 -   185459                 185690         17 1.1845E+10    1083059     188059
  10922 -   185615                 185707         17 1.1846E+10    1083058     188076
  10923 -   185770                 185770         63 1.1850E+10    4027171     188140
  10924 -   185926                 185926        156 1.1860E+10    9914184     188298
  10925 -   186081                 186081        155 1.1870E+10    9912370     188455
  10926 -   186237                 186237        156 1.1880E+10    9910555     188613
  10927 -   186393                 186393        156 1.1890E+10    9908741     188770
  10928 -   186548                 186548        155 1.1900E+10    9906928     188928
  10929 -   186703                 186703        155 1.1909E+10    9905114     189085
  10930 -   186859                 186859        156 1.1919E+10    9903302     189243

The formula is only supposed to work in the range 10923 – 50,000, so the first few results don’t match; but in the range 10,923 to 10,930 the match is exact. Then, in the region of 50,000 we get:


STATEMENT_ID                      IO_COST    IO_DIFF   CPU_COST   CPU_DIFF       COST
------------------------------ ---------- ---------- ---------- ---------- ----------
  49991 -  1514281                1514281            9.6488E+10               1533579
  49992 -  1514288                1514288          7 9.6489E+10     473357    1533586
  49993 -  1514296                1514296          8 9.6489E+10     473337    1533594
  49994 -  1514303                1514303          7 9.6490E+10     473319    1533601
  49995 -  1514311                1514311          8 9.6490E+10     473299    1533609
  49996 -  1514318                1514318          7 9.6491E+10     473281    1533616
  49997 -  1514325                1514325          7 9.6491E+10     473262    1533624
  49998 -  1514333                1514333          8 9.6492E+10     473243    1533631
  49999 -  1514340                1514340          7 9.6492E+10     473224    1533639
  50000 -  1514348                1514348          8 9.6493E+10     473205    1533646
  50001 -  1514355                1514348          0 9.6493E+10          0    1533646
  50002 -  1514363                1514348          0 9.6493E+10          0    1533646
  50003 -  1514370                1514348          0 9.6493E+10          0    1533646
  50004 -  1514377                1514348          0 9.6493E+10          0    1533646
  50005 -  1514385                1514348          0 9.6493E+10          0    1533646
  50006 -  1514392                1514348          0 9.6493E+10          0    1533646
  50007 -  1514400                1514348          0 9.6493E+10          0    1533646
  50008 -  1514407                1514348          0 9.6493E+10          0    1533646
  50009 -  1514415                1514348          0 9.6493E+10          0    1533646
  50010 -  1514422                1514348          0 9.6493E+10          0    1533646

Again, the formula applies only in the range up to 50,000 (half the rows in the table) – and the match is perfect in that range.

Next steps

The work so far gives us some idea of the algorithm that the optimizer is using to derive a cost, but this is just one scenario and there are plenty of extra questions we might ask. The most pressing one: what is the significance of the number 12 in the calculation 131,072/12? From previous experience I guessed that it was related to the lengths of the input and output values of the scalar subquery – as in “value X for n1 returns value Y for count(*)”.

To pursue this idea I recreated the data sets using varchar2(10) as the definition of n1 and lpad(rownum,10) as the value – the “breakpoint” dropped from 10,922 down to 5,461. Checking the arithmetic: 131,072 / 5,461 = 24.001456, then 131,072 / 24 = 5,461.333… And that’s the number that made the formula work perfectly for the modified data set.

Then I used set_column_stats() to hack the avg_col_len of t_100K.n1 to 15 and the break point dropped to 4,096.  Again we do the arithmetic: 131,072 / 4,096 = 32 (and this time we don’t need the reverse step since the first result is integral).

Checking the original data set, where n1 was numeric, the avg_col_len was 5, so we have three reference points:

  • Avg_col_len = 5. “Cache unit size” = 12
  • Avg_col_len = 11. Cache unit size = 24 (don’t forget the avg_col_len includes the length byte, so our padded varchar2(10) has a length of 11).
  • Avg_col_len = 15, Cache unit size = 32

There’s an obvious pattern here: “Cache unit size” = (2 x avg_col_len + 2).  Since I hadn’t been changing the t_1k.n1 column at the same time, that really does look like a deliberate factor of 2 (I’d thought initially that maybe the 12 was affected by the lengths of both columns in the predicate – but that doesn’t seem to be the case).

The scientific method says I should now make a prediction based on my hypothesis – so I set the avg_col_len for t_100K.n1 to 23 and guessed that the break point would be at 2,730 – and it was.  (131,072 / (2 * 23 + 2) = 2,730.6666…).

The next question, of course, is: where does the “spare 2” come from? Trying to minimize the change in the code I modified my subquery to select sum(to_number(n1)) rather than count(*), then avg(to_number(n1)) – remember I had changed n1 to a varchar2(10) that looked like a number left-padded with spaces. In every variant of the tests I’d done so far, all I had to do to get an exact match between the basic formula and the optimizer’s cost calculation was to use “2 * avg_col_len + 22” as the cache unit size – and 22 is the nominal maximum length of an internally stored numeric column.
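For reference, the sum() variant of the query looked something like this (reconstructed from the description above, with n1 now a varchar2(10)):

select
        /*+ qb_name(QB_MAIN) */
        (
        select /*+ qb_name(QB_SUBQ) */ sum(to_number(n1))   -- avg(to_number(n1)) in the second variant
        from t_1k
        where t_1k.n1 = t_100k.n1
        )
from t_100k
;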

Bottom line: the cache unit size seems to be related to the input and output values, but I don’t know why there’s a factor of 2 applied to the input column length, and I don’t know why the length of count(*) is deemed to be 2 when other derived numeric outputs use the more intuitive 22 for their length.

tl;dr

The total cost calculation for a scalar subquery in the select list is largely affected by:

  • a fixed cache size (131,072 bytes) possibly set by hidden parameter _query_execution_cache_max_size
  • the avg_col_len of the input (correlating) column(s) from the driving table
  • the nominal length of the output (select list) of the subquery

There is an unexplained factor of 2 used with the avg_col_len of the input, and a slightly surprising value of 2 if the output is simply count(*).

If the number N of distinct values for the driving column(s) is less than the number of possible cache entries, the effect of the scalar subquery is to add N * estimated cost of executing the subquery once.  As the number of distinct values for the driving column(s) goes above the limit, the incremental effect of the subquery is based on the expected number of times an input value will not be cached. When the number of distinct values in the driving column(s) exceeds half the number of rows in the driving table the cost stops increasing – there is no obvious reason why the algorithm does this.

There are many more cases that I could investigate at this point – but I think this model is enough as an indication of general method. If you come across a variation where you actually need to work out how the optimizer derived a cost then this framework will probably be enough to get you started in the right direction.
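Finally, to pull the threads together, here is a minimal PL/SQL sketch (mine, not taken from the optimizer) of the model described above, using the figures from this test case – 100,000 rows in the driving table, an IO cost of 33 for the driving tablescan, 17 per execution of the subquery, and 131,072/12 cache entries:

set serveroutput on

declare
        cache_entries   constant number := 131072 / 12;    -- 10,922.666... for the numeric n1
        n_rows          constant number := 100000;         -- rows in t_100k
        scan_cost       constant number := 33;             -- IO cost of the t_100k tablescan (rounded)
        subq_cost       constant number := 17;             -- IO cost of one execution of the scalar subquery
        n_distinct      number := 20000;                   -- change this to test other values
        io_cost         number;
begin
        if n_distinct <= cache_entries then
                -- every distinct value assumed to be cached
                io_cost := scan_cost + subq_cost * n_distinct;
        elsif n_distinct <= n_rows / 2 then
                -- marginal cost for the rows whose value is expected not to be cached
                io_cost := scan_cost + subq_cost * (cache_entries + (1 - cache_entries / n_distinct) * n_rows);
        else
                -- the cost stops changing at num_distinct = half the rows in the table
                io_cost := scan_cost + subq_cost * (cache_entries + (1 - cache_entries / (n_rows / 2)) * n_rows);
        end if;
        dbms_output.put_line('Predicted IO cost for ' || n_distinct || ' distinct values: ' || round(io_cost));
end;
/

For n_distinct = 20,000 this predicts an io_cost of 957292, and for 50,000 it predicts 1514348 – matching the figures in the tables above.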

 

anytype from java for anydataset

Tom Kyte - Thu, 2019-06-06 10:06
Hello TOM :) I try to make an anytype in a Java stored procedure and use it to create an anydataset in PL/SQL. But I get error ORA-22625. Why does this error occur and how can I fix it? I do it according to guides, for example https://docs.oracle.com/databas...
Categories: DBA Blogs

Best performance of Top N by X

Tom Kyte - Thu, 2019-06-06 10:06
I have the following 2 tables: CREATE TABLE accounts( id NUMBER unique not null, account_name VARCHAR2(30) ); CREATE TABLE log_data( account_id NUMBER not null, log_type NUMBER, log_time TIMESTAMP, msg CLOB );...
Categories: DBA Blogs

PLS_INTEGER versus NUMBER versus "dynamic types"

Tom Kyte - Thu, 2019-06-06 10:06
It has been suggested to me that I use PL/SQL declarations like PROCEDURE foo ( p_id IN PLS_INTEGER )... instead of PROCEDURE foo (p_id IN NUMBER ) ... or PROCEDURE foo (p_id IN mytable.my_id%TYPE ) I've always preferred the last option, si...
Categories: DBA Blogs

Using MERGE to update data 2 times

Tom Kyte - Thu, 2019-06-06 10:06
I am having a Full dump of 1m records arriving every day.I need to obtain this set into STG. There is soft delete records thus when I compared between Full dump(source) and STG(target), the records in STG always larger than source. I have researc...
Categories: DBA Blogs

Function comparing dates in a range

Tom Kyte - Thu, 2019-06-06 10:06
I have a function that will extract records from a database with specific dates and times. One of the parameters passed is a date. This function runs twice a day (one of the parameters passed is AM or PM). My function works for morning (times are ...
Categories: DBA Blogs
