Feed aggregator

Multi-Factor Authentication (MFA) using OKTA with Spring Boot and Tanzu Application Service

Pas Apicella - Thu, 2020-07-09 23:22
Recently I was asked to build a quick demo showing how to use MFA with OKTA and a Spring Boot application running on Tanzu Application Service. Here is the demo application, plus how to set it up and run it yourself.

Steps

1. Clone the existing repo as shown below

$ git clone https://github.com/papicella/mfa-boot-fsi
Cloning into 'mfa-boot-fsi'...
remote: Enumerating objects: 47, done.
remote: Counting objects: 100% (47/47), done.
remote: Compressing objects: 100% (31/31), done.
remote: Total 47 (delta 2), reused 47 (delta 2), pack-reused 0
Unpacking objects: 100% (47/47), done.



2. Create a free account at https://developer.okta.com/

Once created, log in to the dev account. Your account URL will look something like this:

https://dev-{ID}-admin.okta.com



3. You will need your default authorization server settings. From the top menu in the developer.okta.com dashboard, go to API -> Authorization Servers and click on the default server.


You will need this data shortly. The image above is an example; those details won't work for your own setup.

4. From the top menu, go to Applications and click the Add Application button. Click on the Web button and click Next. Name your app whatever you like. I named mine "pas-okta-springapp". Otherwise the default settings are fine. Click Done.

From this screenshot you can see that the defaults refer to localhost, which is fine for DEV purposes.


You will need the Client ID and Client secret from the final screen, so make a note of these.

5. Edit the "./mfa-boot-fsi/src/main/resources/application-DEV.yml" to include the details as per #3 and #4 above.

You will need to edit

  • issuer
  • client-id
  • client-secret


application-DEV.yml

spring:
  security:
    oauth2:
      client:
        provider:
          okta:
            user-name-attribute: email

okta:
  oauth2:
    issuer: https://dev-213269.okta.com/oauth2/default
    redirect-uri: /authorization-code/callback
    scopes:
      - profile
      - email
      - openid
    client-id: ....
    client-secret: ....

6. In order to pick up this application-DEV.yml file we have to set the Spring profile correctly. That can be done using a JVM property as follows.

-Dspring.profiles.active=DEV
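
For example, running the packaged jar from a terminal (a sketch; the jar path assumes the default Maven build output used later in this post):

$ java -Dspring.profiles.active=DEV -jar target/demo-0.0.1-SNAPSHOT.jar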

In my example I use IntelliJ IDEA, so I set it in the run configuration dialog as follows



7. Finally, let's set up MFA, which we do by switching to the Classic UI as shown below



8. Click on Security -> Multifactor and set up another Multifactor policy. In the screenshot below I select "Email Policy" and make sure it is "Required" along with the default policy



9. Now run the application, making sure you set the Spring active profile to DEV.

...
2020-07-10 13:34:57.528  INFO 55990 --- [  restartedMain] pas.apa.apj.mfa.demo.DemoApplication     : The following profiles are active: DEV
...

10. Navigate to http://localhost:8080/



11. Click on the "Login" button

Verify you are taken to the default OKTA login page


12. Once logged in, the second factor should ask for a verification code to be sent to your email. Press the "Send me the code" button




13. Once you enter the code sent to your email, you will be granted access to the application endpoints







14. Finally, to deploy the application to Tanzu Application Service, perform the steps below:

- Create a manifest.yaml as follows

---
applications:
- name: pas-okta-boot-app 
  memory: 1024M
  buildpack: https://github.com/cloudfoundry/java-buildpack.git#v4.16
  instances: 2
  path: ./target/demo-0.0.1-SNAPSHOT.jar
  env:
    JBP_CONFIG_OPEN_JDK_JRE: '{ jre: { version: 11.+}}'

- Package the application as follows

$ ./mvnw -DskipTests package

- In the DEV OKTA console, create a second application for the deployed application on Tanzu Application Service, referring to its FQDN rather than localhost as shown below



- Edit "application.yml" to ensure you set the following correctly for the new "Application" we created above.

You will need to edit

  • issuer
  • client-id
  • client-secret

- Push the application using "cf push -f manifest.yaml"

$ cf apps
Getting apps in org papicella-org / space apple as papicella@pivotal.io...
OK

name                requested state   instances   memory   disk   urls
pas-okta-boot-app   started           1/1         1G       1G     pas-okta-boot-app.cfapps.io


That's It!!!!

Categories: Fusion Middleware

Engage!

Scott Spendolini - Thu, 2020-07-09 21:26
One month in at Oracle, and I’ve been moving at what feels like warp speed – both literally and figuratively.  My team has been hard at work building the intake survey that will solicit volunteers for the Phase 3 efficacy trials.  We’re also building all of the backend applications that will be used to run analytics to select qualified participants and assign them to a local clinic to receive the actual vaccine.  This project is actually a part of the US Government’s Operation Warp Speed program – a program whose goal is to deliver 300 million doses of a safe and effective vaccine by January 2021.

It should come as no surprise that we’re building 100% of this solution with Oracle’s low code platform, Oracle APEX.  Oracle APEX was the natural choice here, given the rapidly changing requirements, the need for a 100% responsive and custom UI, and the ability to deliver something substantial in just a couple of weeks.  The entire infrastructure – and it’s a lot more than just APEX & ORDS – all runs on the Oracle Cloud.

This is just one of four active projects that we have going on.  And thus the reason for this post – we’re growing.  Fast.  Each of the four current projects has multiple phases, many of which have yet to even start.  And there’s a bunch of new and exciting work on the not-too-distant horizon as we continue to reach out and assist those in need during this global pandemic.  Pandemic systems are also just the beginning.  Look for more exciting announcements about what we’re up to as we can share them.

We’re already well on our way to putting together a world-class team to work on these projects.  You may have already seen Tweets from Doug Gault and Dan McGhan about their roles on this new team.  These are just two of the several new team members that we’ve already recruited for this team.

Despite this star-studded cast, we still need more help.  This is where you come in.  If you’re a seasoned APEX developer looking for a chance to make an actual difference in the world, we should talk.  We have a number of open slots that we need filled ASAP.  Know any good DevOps, Infrastructure or even DBA-type folks?  We’ll need them, too.

Interested in learning more or applying?  Drop me a note, hit me up on Twitter or hop on over here to fill out an application to get things started.

Execution Plans

Jonathan Lewis - Thu, 2020-07-09 10:54

This is an example from the Oracle Developer Community of using the output of SQL Monitor to detect a problem with object statistics that resulted in an extremely poor choice of execution plan.

A short time after posting the original statement of the problem the OP identified where he thought the problem was and the general principle of why he thought he had a problem – so I didn’t have to read the entire execution plan to work out a strategy that would be (at least) a step in the right direction of solving the performance problem.

This note, then, is just a summary of the five minutes that I spent confirming the OP’s hypothesis and explaining how to work around the problem he had identified. It does, however, give a little lead-in to the comments I made to the OP in order to give a more rounded picture of what his execution plan was telling us.

So here’s the top half of the plan (which is the first subquery of a “concatenation”) with the first few predicates:

===============================================================================================================================================================================================================================  
| Id |                 Operation                  |            Name            |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Read | Read  |  Cell   |  Mem  | Activity |             Activity Detail             |  
|    |                                            |                            | (Estim) |      | Active(s) | Active |       | (Actual) | Reqs | Bytes | Offload | (Max) |   (%)    |               (# samples)               |  
===============================================================================================================================================================================================================================  
|  0 | SELECT STATEMENT                           |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  1 |   CONCATENATION                            |                            |         |      |     12086 |     +5 |     1 |     864K |      |       |         |       |          |                                         |  
|  2 |    FILTER                                  |                            |         |      |     12191 |     +4 |     1 |     864K |      |       |         |       |     0.03 | Cpu (4)                                 |  
|  3 |     FILTER                                 |                            |         |      |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.01 | Cpu (1)                                 |  
|  4 |      NESTED LOOPS                          |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.02 | Cpu (3)                                 |  
|  5 |       NESTED LOOPS                         |                            |     241 | 251K |     12191 |     +4 |     1 |      26M |      |       |         |       |     0.07 | Cpu (8)                                 |  
|  6 |        NESTED LOOPS                        |                            |     241 | 251K |     12232 |     +4 |     1 |      26M |      |       |         |       |     0.05 | Cpu (6)                                 |  
|  7 |         NESTED LOOPS                       |                            |    5407 | 233K |     12242 |     +4 |     1 |      86M |      |       |         |       |          |                                         |  
|  8 |          MERGE JOIN CARTESIAN              |                            |       1 |   35 |     12242 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
|  9 |           TABLE ACCESS BY INDEX ROWID      | REF1                       |       1 |    3 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 10 |            INDEX RANGE SCAN                | REF1_PK                    |       1 |    2 |     12242 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 11 |           BUFFER SORT                      |                            |      84 |   32 |     12242 |     +4 |     1 |     1000 |      |       |         |  104K |          |                                         |  
| 12 |            TABLE ACCESS BY INDEX ROWID     | STAGE                      |      84 |   32 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 13 |             INDEX RANGE SCAN               | STAGE_IDX1                 |      84 |    4 |         1 |     +4 |     1 |     1000 |      |       |         |       |          |                                         |  
| 14 |          PARTITION RANGE ITERATOR          |                            |    8292 | 232K |     12232 |     +4 |  1000 |      86M |      |       |         |       |          |                                         |  
| 15 |           TABLE ACCESS STORAGE FULL        | TAB1                       |    8292 | 232K |     12245 |     +1 |  1000 |      86M | 103M | 521GB |   1.96% |    7M |    51.81 | gc buffer busy acquire (1)              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: cache buffers chains (1)         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (1196)                              |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (2) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | reliable message (5)                    |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (2827)  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell smart table scan (1977)            |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | read by other session (304)             |  
| 16 |         PARTITION RANGE ITERATOR           |                            |       1 |   12 |     12191 |     +4 |   86M |      26M |      |       |         |       |     0.42 | Cpu (51)                                |  
| 17 |          TABLE ACCESS BY LOCAL INDEX ROWID | TAB2                       |       1 |   12 |     12191 |     +4 |   86M |      26M |   4M |  28GB |         |       |    32.14 | gc cr grant 2-way (20)                  |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc cr request (2)                       |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gc remaster (6)                         |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | Cpu (319)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | gcs drm freeze in enter server mode (4) |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | latch: gc element (2)                   |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (3563)  |  
| 18 |           INDEX RANGE SCAN                 | TAB2_IX1                   |     166 |    3 |     12210 |     +2 |   86M |      26M |   1M |  11GB |         |       |    15.17 | Cpu (292)                               |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1557)  |  
| 19 |        INDEX UNIQUE SCAN                   | MTD_PK                     |       1 |    1 |     12242 |     +4 |   26M |      26M |  292 |   2MB |         |       |     0.17 | Cpu (20)                                |  
|    |                                            |                            |         |      |           |        |       |          |      |       |         |       |          | cell single block physical read (1)     |  
| 20 |       TABLE ACCESS BY INDEX ROWID          | REF2                       |       1 |    2 |     12191 |     +4 |   26M |      26M |    7 | 57344 |         |       |     0.11 | Cpu (13)                                |  
| 21 |      TABLE ACCESS BY INDEX ROWID           | CONTROLTAB                 |       1 |    1 |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 22 |       INDEX UNIQUE SCAN                    | CONTROLTAB_PK              |       1 |      |         1 |     +4 |     1 |        1 |      |       |         |       |          |                                         |  
| 23 |     MINUS                                  |                            |         |      |       102 |     +4 |    25 |        3 |      |       |         |       |          |                                         |  
| 24 |      TABLE ACCESS BY INDEX ROWID           | CUST_ORG_PK                |       1 |    3 |       942 |     +4 |    25 |       10 |      |       |         |       |          |                                         |  
| 25 |       INDEX UNIQUE SCAN                    | MC_PK                      |       1 |    2 |       942 |     +4 |    25 |       25 |      |       |         |       |          |                                         |  
| 26 |      SORT UNIQUE NOSORT                    |                            |       1 |    4 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 27 |       TABLE ACCESS BY INDEX ROWID          | REF1                       |       1 |    3 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  
| 28 |        INDEX RANGE SCAN                    | REF1_PK                    |       1 |    2 |         8 |     +4 |    25 |        9 |      |       |         |       |          |                                         |  


Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter( EXISTS ( (SELECT /*+ INDEX_RS_ASC ("CUST_ORG_PK" "MC_PK") */ "CUST_ID" FROM "USER1"."CUST_ORG_PK" "CUST_ORG_PK" WHERE
              "CUST_ID"=:B1 AND "CUST_ORG_PK"."INDICATR"='Y') MINUS (SELECT /*+ INDEX_RS_ASC ("REF1" "REF1_PK") */ TO_NUMBER("VAL")
              FROM "USER1"."REF1" "REF1" WHERE "PUSER"='ZZZ' AND "EDATE">=TRUNC(SYSDATE@!) AND TO_NUMBER("VAL")=:B2
              AND "SDATE"<=TRUNC(SYSDATE@!))))
   3 - filter( EXISTS (SELECT /*+ INDEX_RS_ASC ("CONTROLTAB" "CONTROLTAB_PK") */ 0 FROM "USER2"."CONTROLTAB" "CONTROLTAB" WHERE
              "CONTROLTAB"."CNTRLID"=9999 AND NVL("CONTROLTAB"."STATUS",'F')='S'))
   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))
  13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')

The first thing to note is that the “Time (active)” shown at the top line is about 12,000 seconds – so it’s a long running query. Your next observation – before you look at the shape of the plan – might be to note that operations 15, 17 and 18 between them record thousands of seconds of time, mostly I/O time but including 1,200 seconds of CPU time. This might draw your eye to the part of the plan that tells you what you are doing at these operations and why you are doing it.

Looking at the detail – operation 15 is a full tablescan that is the child of a partition range iterator (operation 14), and that iterator is the second child of a nested loop join (operation 7). Why is the optimizer so out of balance that it thinks a table scan of (possibly) multiple partitions of a partitioned table is a good candidate for the second child of a nested loop?! The answer comes from the first child – the  Merge Join Cartesian at operation 8 has been given a cardinality estimate of just one row. When the cardinality estimate is one for the first table in a join then it doesn’t matter whether Oracle uses a nested loop join or a hash join, whatever happens next is only supposed to happen once.

Unfortunately when we compare “Rows (Estim)” and “Rows (Actual)” for the operation we see that the Merge Join Cartesian produced 1,000 rows – so the partition tablescan was going to happen 1,000 times (which we can confirm from the “Execs” column of operation 14). As a first step, then, we need to ensure that the optimizer gets a better estimate of the number of rows produced by the Cartesian join. So let’s look at its child operations.

  • Operation 9 (Table Access of REF1) is predicted to return one row – with “Rows (actual)” = 1.
  • Operation 11 (Buffer Sort of data from STAGE) is predicted to return 84 rows – with “Rows (actual)” = 1,000

Since the number of rows produced by a Cartesian join should be the product of the number of rows of the two inputs, this tells us that the optimizer’s estimate of the number of rows from REF1 has been rounded up to 1 from a very small fraction (less than 1/84). If we can correct this then we may get Oracle to change the awful nested loop to an acceptable hash join. Even if we can’t fix this mis-estimate we may be able to do something that improves the estimate for STAGE to something sufficiently large that it will trigger the switch to a hash join. So let’s look at the predicates for these two tables.

REF1 predicates
   9 - filter("REF1"."EDATE">=TRUNC(SYSDATE@!))  
  10 - access("REF1"."PUSER"='XXX' AND  "REF1"."NAME"='CODE' AND "REF1"."SDATE"<=TRUNC(SYSDATE@!))  

The index range scan is based on an access predicate (with no filter predicate), so it looks like there’s a nearly perfect three-column index on REF1, but the optimizer can’t use the number of distinct keys in the index to get a good estimate of cardinality because one of the predicates is range-based. So the arithmetic will look at the three predicates separately and multiply up their selectivities. (It’s possible, of course, that this might be the first three columns of a 4, or more, column index.)
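A quick dictionary query (hypothetical, but using the owner and index name visible in the plan and its predicates) would settle how many columns REF1_PK really has:

-- How many columns are in REF1_PK, and in what order?
select index_name, column_position, column_name
from   dba_ind_columns
where  index_owner = 'USER1'
and    index_name  = 'REF1_PK'
order  by column_position;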

It’s a reasonable guess that the number of distinct combinations of (puser, name) will be much smaller than num_distinct(puser) * num_distinct(name) – so one strategy that might help increase the table’s cardinality estimate is to create extended statistics on the column group (puser, name).

Another reasonable guess is that the number of distinct values for the two columns is (relatively) small, with some skew to the distribution (name = ‘CODE’ looks particularly susceptible to being a commonly occurring value) – so perhaps we need a histogram on one or both of the columns (which would then require a histogram to be created on the column group as well if we wanted the optimizer to use the column group). We’d also have to make sure that the queried values didn’t fall outside the known low/high values for the columns if we wanted the column group to be used.
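As a sketch of that strategy (the schema name and the 254-bucket size are assumptions, not details from the OP’s system):

-- Create a column group (extended statistics) on REF1(PUSER, NAME), then
-- gather histograms on the individual columns and on the group itself.
select dbms_stats.create_extended_stats(
           ownname   => 'USER1',
           tabname   => 'REF1',
           extension => '(PUSER, NAME)'
       ) as extension_name
from   dual;

begin
    dbms_stats.gather_table_stats(
        ownname    => 'USER1',
        tabname    => 'REF1',
        method_opt => 'for columns size 254 PUSER, NAME, (PUSER, NAME)'
    );
end;
/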

STAGE Predicates
13 - access("STAGE"."NAME"='XXX' AND "STAGE"."STAT"='I')

This is the access(-only) predicate for the index stage_idx1, and there are no filter predicates when we reach the table. So stage_idx1 might be a two-column index on the table that we are using completely, or it may be an index with more columns that we are using only incompletely. We can see that the cardinality estimate is out by a factor of 12 (84 predicted, 1,000 actual) so if this is the complete index (which would allow Oracle to use the distinct_keys value to estimate cardinality) there must be an uneven data distribution in the values; but if this is just the first two columns of a longer index then we might benefit from extended stats (viz: another column group) on this pair of columns.

Again, even if we create a column group, or take automatic advantage of the distinct_keys figure, the predicate STAT=’I’ (is that state, status?) looks a little dangerous – status columns tend to have a small number of distinct values with a significant skew to the distribution of values – so we may want to create a histogram on the STAT column, which would then require us to create a histogram on the column group if we also wanted the column group stats to have an effect.
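The same treatment could apply to STAGE (again just a sketch; note that listing an extension such as (NAME, STAT) in method_opt makes dbms_stats create the column group automatically in recent versions):

-- Column group on STAGE(NAME, STAT), with histograms on the skewed
-- individual columns and on the group itself.
begin
    dbms_stats.gather_table_stats(
        ownname    => 'USER1',
        tabname    => 'STAGE',
        method_opt => 'for columns size 254 NAME, STAT, (NAME, STAT)'
    );
end;
/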

What Happened Next?

I made the suggestions about column groups and histograms to the OP – without the level of detail that I’ve given in the explanations above – and got the following response:

You are spot on.

There does exists frequency histogram on column NAME(having 14 distinct value) and STAT(having 7 distinct values) of table STAGE. Also there already exists a frequency histogram on column PUSER and height balanced histogram on column NAME of table REF1. But still they were not helping the existing query.

By creating a column group on both on ref1 (puser, name) and stage(name, stat) with histogram for both i.e. ‘FOR COLUMNS SIZE 254’. The merge Cartesian removed automatically from the path by optimizer and its doing a hash join now with TAB1 finishing the query in quick time.

Summary

When the cardinality (rows) estimate for an operation drops to one (which might mean much less than one and rounded up) then all hell can break loose and the choice of parent operation – and its cardinality estimate – might be insanely stupid, leading to a catastrophically poor execution plan.

Very low cardinality estimates are often the result of multiplying individual column selectivities to produce an estimated selectivity that is unrealistic (much too small) when compared with the actual number of distinct combinations that exist in the table. In such cases creating a column group (possibly with a histogram) may be all you need to do to get a better cardinality estimate and a vastly improved execution plan.

 

Adding a Function-Based Index

Hemant K Chitale - Thu, 2020-07-09 10:51
What happens when you create a Function-Based Index?

{This demo is in 12.2}

I start with my "large" table.

SQL> create table my_data_table 
2 as select * from dba_objects
3 union all select * from dba_objects;

Table created.

SQL> select count(*) from my_data_table;

COUNT(*)
----------
157408

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 132

SQL>


I then add a Function Based Index on it.

SQL> create index my_data_fbi   
2 on my_data_table(lower(OBJECT_NAME))
3 /

Index created.

SQL>


I now look at Table and Column statistics

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> desc my_data_table
Name Null? Type
------------------------------------------------------------------------ -------- -------------------------------------------------
OWNER VARCHAR2(128)
OBJECT_NAME VARCHAR2(128)
SUBOBJECT_NAME VARCHAR2(128)
OBJECT_ID NUMBER
DATA_OBJECT_ID NUMBER
OBJECT_TYPE VARCHAR2(23)
CREATED DATE
LAST_DDL_TIME DATE
TIMESTAMP VARCHAR2(19)
STATUS VARCHAR2(7)
TEMPORARY VARCHAR2(1)
GENERATED VARCHAR2(1)
SECONDARY VARCHAR2(1)
NAMESPACE NUMBER
EDITION_NAME VARCHAR2(128)
SHARING VARCHAR2(18)
EDITIONABLE VARCHAR2(1)
ORACLE_MAINTAINED VARCHAR2(1)
APPLICATION VARCHAR2(1)
DEFAULT_COLLATION VARCHAR2(100)
DUPLICATED VARCHAR2(1)
SHARDED VARCHAR2(1)
CREATED_APPID NUMBER
CREATED_VSNID NUMBER
MODIFIED_APPID NUMBER
MODIFIED_VSNID NUMBER

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43

26 rows selected.

SQL>
SQL> l
1 select column_name, num_nulls,to_char(last_analyzed,'DD-MON HH24:MI')
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME NUM_NULLS TO_CHAR(LAST_ANALYZED
------------------------------ ---------- ---------------------
OWNER 0 09-JUL 22:43
OBJECT_NAME 0 09-JUL 22:43
SUBOBJECT_NAME 155954 09-JUL 22:43
OBJECT_ID 2 09-JUL 22:43
DATA_OBJECT_ID 139120 09-JUL 22:43
OBJECT_TYPE 0 09-JUL 22:43
CREATED 0 09-JUL 22:43
LAST_DDL_TIME 2 09-JUL 22:43
TIMESTAMP 2 09-JUL 22:43
STATUS 0 09-JUL 22:43
TEMPORARY 0 09-JUL 22:43
GENERATED 0 09-JUL 22:43
SECONDARY 0 09-JUL 22:43
NAMESPACE 2 09-JUL 22:43
EDITION_NAME 157408 09-JUL 22:43
SHARING 0 09-JUL 22:43
EDITIONABLE 101970 09-JUL 22:43
ORACLE_MAINTAINED 0 09-JUL 22:43
APPLICATION 0 09-JUL 22:43
DEFAULT_COLLATION 125494 09-JUL 22:43
DUPLICATED 0 09-JUL 22:43
SHARDED 0 09-JUL 22:43
CREATED_APPID 157408 09-JUL 22:43
CREATED_VSNID 157408 09-JUL 22:43
MODIFIED_APPID 157408 09-JUL 22:43
MODIFIED_VSNID 157408 09-JUL 22:43
SYS_NC00027$ 0 09-JUL 22:43

27 rows selected.

SQL>

SQL> l
1 select column_name, avg_col_len
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4* order by column_id
SQL> /

COLUMN_NAME AVG_COL_LEN
------------------------------ -----------
OWNER 6
OBJECT_NAME 34
SUBOBJECT_NAME 2
OBJECT_ID 5
DATA_OBJECT_ID 2
OBJECT_TYPE 10
CREATED 8
LAST_DDL_TIME 8
TIMESTAMP 20
STATUS 7
TEMPORARY 2
GENERATED 2
SECONDARY 2
NAMESPACE 3
EDITION_NAME 0
SHARING 13
EDITIONABLE 2
ORACLE_MAINTAINED 2
APPLICATION 2
DEFAULT_COLLATION 4
DUPLICATED 2
SHARDED 2
CREATED_APPID 0
CREATED_VSNID 0
MODIFIED_APPID 0
MODIFIED_VSNID 0
SYS_NC00027$ 34

27 rows selected.

SQL>
SQL> select num_rows, avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

NUM_ROWS AVG_ROW_LEN
---------- -----------
157408 166

SQL>


A new column "SYS_NC00027$" appears in USER_TAB_COLS but not in USER_TAB_COLUMNS.  The new column is not also not visible when I run a "DESCRIBE" command.

Also, the AVG_ROW_LEN has increased by 34  (seeing as I do not have any objects with long names) to reflect the addition of the new virtual column.
But did all the blocks actually get re-written?  Are the "lower(OBJECT_NAME)" values written into each table block as well?

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> l
1 select obj#, savtime, rowcnt, blkcnt, avgrln,samplesize,to_char(analyzetime,'DD-MON-RR HH24:MI:SS')
2 from sys.WRI$_OPTSTAT_TAB_HISTORY
3 where obj# =
4 (select object_id
5 from dba_objects
6 where owner = 'HEMANT'
7 and object_type = 'TABLE'
8 and object_name = 'MY_DATA_TABLE')
9* order by 1,2
SQL> /

OBJ# SAVTIME ROWCNT BLKCNT AVGRLN SAMPLESIZE TO_CHAR(ANALYZETIME,'DD-MON
---------- ---------------------------- ---------- ---------- ---------- ---------- ---------------------------
79843 09-JUL-20 10.39.03.789392 PM 157408 3106 132 157408 09-JUL-20 22:37:53
79843 09-JUL-20 10.43.59.424420 PM 157408 3106 132 157408 09-JUL-20 22:39:03
79843 09-JUL-20 11.02.35.088733 PM 157408 3106 166 157408 09-JUL-20 22:43:59

SQL>


Apparently, the actual number of blocks did not increase.

SQL> l
1 select segment_name, segment_type, bytes/1024
2 from user_segments
3 where segment_name like 'MY_DATA%'
4* order by 1,2
SQL> /

SEGMENT_NAME SEGMENT_TYPE BYTES/1024
------------------------------ ------------------ ----------
MY_DATA_FBI INDEX 8192
MY_DATA_TABLE TABLE 25600

SQL>
SQL>
SQL> l
1 select index_name, column_name
2 from user_ind_columns
3* where index_name = 'MY_DATA_FBI'
SQL> /

INDEX_NAME COLUMN_NAME
------------------------------ ------------------------------
MY_DATA_FBI SYS_NC00027$

SQL>


So, the Index does take up space as an Index segment but the Virtual Column ("lower(OBJECT_NAME)") created on the Table does not grow the table.  What seems to happen is that the "computed" value ("lower(OBJECT_NAME)") is stored in the Index segment (leaf blocks) but not in the Table segment (blocks).
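One way to see the index (and its hidden column) at work is to explain a query that filters on the indexed expression. This check is hypothetical, not part of the original demo:

explain plan for
select owner, object_name
from my_data_table
where lower(object_name) = 'dual';

select * from table(dbms_xplan.display);

With the index in place (and statistics on the hidden column), the plan should show an INDEX RANGE SCAN of MY_DATA_FBI rather than a full scan of MY_DATA_TABLE.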


What happens if I drop the Function Based Index ?

SQL> drop index my_data_fbi;

Index dropped.

SQL> exec dbms_stats.gather_table_stats('','MY_DATA_TABLE',degree=>4);

PL/SQL procedure successfully completed.

SQL>
SQL> select count(*)
2 from user_tab_columns
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select count(*)
2 from user_tab_cols
3 where table_name = 'MY_DATA_TABLE'
4 /

COUNT(*)
----------
26

SQL> select avg_row_len
2 from user_tables
3 where table_name = 'MY_DATA_TABLE'
4 /

AVG_ROW_LEN
-----------
132

SQL>


The additional column, and the 34 bytes it represents, have disappeared from the Table and Column statistics.



Categories: DBA Blogs

Oracle Awarded Framework Agreement from Government of Canada

Oracle Press Releases - Thu, 2020-07-09 07:00
Press Release
Oracle Awarded Framework Agreement from Government of Canada New initiative expands cloud options for Oracle-based workloads including Oracle Database, WebLogic, Oracle E-Business Suite, PeopleSoft, and Siebel

Redwood Shores, Calif.—Jul 9, 2020

Oracle announced today the award of a Framework Agreement for the provision of secure cloud services to the Government of Canada. This agreement gives Oracle's Canadian federal government customers the ability to securely store, manage and process sensitive data and applications in Oracle Cloud Infrastructure.

Focused on a “cloud-first” strategy, the Government of Canada uses cloud services as its preferred delivery option when initiating information technology investments, initiatives, strategies and projects. The access to Oracle Cloud regions in Toronto and Montreal afforded by this agreement will help support Canadian government and affiliated public sector entities in their digital transformation efforts.

Oracle Cloud is ideal for mission-critical workloads like financials and other back office applications, as well as newer workloads like simulations, machine learning, and cloud native applications. Oracle’s Canadian Cloud regions today offer Oracle Cloud services that include Compute (CPU and GPU virtual machine and bare metal), Storage (block, object, and archive storage, including new immutable object storage capabilities), Oracle Database and Oracle Exadata Cloud Services, FastConnect, Identity and Access Management, Key Management Service, and Load Balancer.

“This Framework Agreement for Oracle Cloud Infrastructure is a critical step in meeting the growing demands and compliance requirements of our public sector customers,” said Walter Dann, Area Vice President, Oracle Public Sector. “By achieving this milestone, we are able to help the federal government and Canadian public sector tap into our second generation cloud to not just keep pace, but also stay ahead of the evolving business and technology landscape. Oracle’s outstanding price-performance, Oracle Autonomous Database, machine learning and AI innovations along with Oracle Cloud Infrastructure secure by design will be key to enabling our customers’ enterprise workloads in the cloud.”

Oracle is pleased to work with the Government of Canada to support the evaluation of Oracle Cloud. Working with the Government of Canada, Oracle Cloud was assessed across a robust set of controls and global certifications including SOC and ISO. Oracle’s support of these controls and certifications assures customers that all workloads across the government and private sector meet a high level of assurance.  

Oracle Cloud was developed with a particular focus on addressing government cloud needs with capabilities including strong encryption, in-depth auditing, and robust security controls and tools.  With this new agreement, the Government of Canada now has access to a compelling array of advanced technology, including machine learning-enabled applications, integrated security tools, automated analytics, and Oracle Autonomous Database.

Oracle Cloud is a second generation enterprise cloud platform that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. With this new agreement, the Government of Canada now has access to the only second generation Cloud built to run Oracle Autonomous Database, the industry’s first and only self-driving database with performance SLAs.

Today, thousands of public sector customers worldwide use Oracle’s digital solutions to transform how their constituents engage with government. Now, these cloud services will be extended to Oracle’s largest customer in the Canadian region. For more information, go to https://www.oracle.com/communitydevelopment.

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Nicole Maloney
Oracle
+1.650.606.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Nicole Maloney

  • +1.650.606.0806

Lawrence J. Ellison Institute for Transformative Medicine of USC Explores Breakthrough Tissue "Fingerprints" Concept for Cancer Diagnostics Using Oracle Cloud Technology

Oracle Press Releases - Thu, 2020-07-09 07:00
Press Release
Lawrence J. Ellison Institute for Transformative Medicine of USC Explores Breakthrough Tissue "Fingerprints" Concept for Cancer Diagnostics Using Oracle Cloud Technology Tissue “fingerprinting” enables deep-learning algorithm to predict biomarkers, prognosis and theragnosis, set to improve understanding of cancer biology and accelerate clinical workflows, new research finds.

Los Angeles and Redwood Shores, CA—Jul 9, 2020

The Lawrence J. Ellison Institute for Transformative Medicine of USC (“Ellison Institute”) and Oracle reveal a promising two-step technique to train a high-confidence predictive algorithm for enhanced cancer diagnostics. The study uses novel tissue “fingerprints”—discriminating microscopic hematoxylin and eosin (H&E) histologic features—of tumors paired with correct diagnoses to facilitate deep learning in the classification of breast cancer ER/PR/HER2 status.

The approach was able to achieve unprecedented diagnostic accuracy for an algorithm of its type and purpose, while using less than a thousand annotated breast cancer pathology slides. The findings suggest that the algorithm’s ability to make correlations between a tumor’s architectural pattern and a correct diagnosis can ultimately help clinicians determine how a tumor will behave to a given treatment. 

The study was facilitated by Oracle for Research, Oracle’s global program that provides selected researchers with access to Oracle Cloud technology, consulting and support, and participation in the Oracle research user community.

The research appears in Scientific Reports.

Challenges of medical machine learning

The challenge of developing artificial intelligence (AI) tools to diagnose cancer is that machine learning algorithms require clinically annotated data from tens of thousands of patients to analyze before they can recognize meaningful relationships in the data with consistency and high confidence. An ideal size dataset is nearly impossible to gather in cancer pathology.  Researchers training computers to diagnose cancer typically only have access to hundreds or low thousands of pathology slides annotated with correct diagnoses.

To overcome this limitation, the Ellison Institute scientists introduced a two-step process of priming the algorithm to identify unique patterns in cancerous tissue before teaching it the correct diagnoses.

“If you train a computer to reproduce what a person knows how to do, it’s never going to get far beyond human performance,” said lead author Rishi Rawat, PhD. “But if you train it on a task 10 times harder than anything a person could do you give it a chance to go beyond human capability. With tissue fingerprinting, we can train a computer to look through thousands of tumor images and recognize the visual features to identify an individual tumor. Through training, we have essentially evolved a computer eye that’s optimized to look at cancer patterns.”

The first step in the process introduces the concept of tissue “fingerprints,” or distinguishing architectural patterns in a tumor’s tissue, that an algorithm can use to discriminate between samples because no two patients’ tumors are identical.  These fingerprints are the result of biological variations such as the presence of signaling molecules and receptors that influence the 3D organization of a tumor. The study shows that AI spotted these fine, structural differentiations on pathology slides with greater accuracy and reliability than the human eye, and was able to recognize these variations without human guidance.

In this study, the research team took digital pathology images, split them in half and prompted a machine learning algorithm to pair them back together based on their molecular fingerprints.  This practice showcased the algorithm’s ability to group “same” and “different” pathology slides without paired diagnoses, which allowed the team to train the algorithm on large, unannotated datasets (a technique known as self-supervised learning).

“With clinically annotated pathology data in short supply, we must use it wisely when building classifiers,” said corresponding author Dan Ruderman, PhD., director of analytics and machine learning at the Ellison Institute. “Our work leveraged abundant unannotated data to find a reduced set of tumor features that can represent unique biology. Building classifiers upon the biology that these features represent enables us to efficiently focus the precious annotated data on clinical aspects.”

Once the model was trained to identify breast cancer tissue structure that distinguishes patients, the second step called upon its established grouping ability to learn which of those known patterns correlated to a particular diagnosis.  The discovery training set of 939 cases obtained from The Cancer Genome Atlas enabled the algorithm to accurately assign diagnostic categories of ER, PR, and Her2 status to whole slide H&E images with 0.89 AUC (ER), 0.81 AUC (PR), and 0.79 AUC (HER2) on a large independent test set of 2531 breast cancer cases from the Australian Breast Cancer Tissue Bank.

While using Oracle Cloud technology, the study’s groundbreaking technique creates a new paradigm in medical machine learning, which may allow the future use of machine learning to process unannotated or unlabeled tissue specimens, as well as variably processed tissue samples, to assist pathologists in cancer diagnostics.

“Oracle for Research is thrilled to support and accelerate the Ellison Institute’s trailblazing discoveries through advanced cloud technology,” said Mamei Sun, Vice President, Oracle. “The Ellison Institute’s innovative use of machine learning and AI can revolutionize cancer research, treatment, and patient care – and ultimately improve the lives of many.”

Technique democratizes cancer diagnosis

In breast cancer, tumors that express a molecule called estrogen receptor look unique at the cellular level and fall into their own diagnostic category because they typically respond to anti-estrogen therapies.  Currently, pathologists must use chemical stains to probe biopsy samples for the presence of the estrogen receptor to make that diagnosis, and the process is time-consuming, expensive and variable. 

The established algorithm aims to improve pathologists’ accuracy and efficiency in a digital pathology workflow by directly analyzing tumor images to diagnose them as “estrogen receptor positive” without staining specifically for estrogen receptor. The study’s results support the notion that the use of tissue “fingerprints” may allow for a direct treatment response prediction, potentially obviating the need for molecular staining approaches currently utilized in cancer theragnosis.

An exciting application of this technology lies in the possibility of deploying computer-assisted diagnostics in medically underserved regions and developing nations that lack expert pathologists, specialists and the laboratory infrastructure to stain for molecular markers.

While the study suggests additional investigation is warranted to gain a deeper understanding of AI’s ability to determine molecular status based on tissue architecture, it sets the stage for future applications where the technique could potentially aid in troubleshooting challenging tumor classification issues and enhance human pathologists’ abilities to arrive at correct diagnoses and better inform treatment decisions.

About this study

In addition to Rawat and Ruderman, other study authors include Itzel Ortega, Preeyam Roy and David Agus of the Ellison Institute; along with Ellison Institute affiliate Fei Sha of USC Michelson Center for Convergent Bioscience; and USC collaborator Darryl Shibata of the Norris Comprehensive Cancer Center at Keck School of Medicine.

The study’s computing resources were facilitated by Oracle Cloud Infrastructure through Oracle for Research, Oracle’s global program providing free cloud credits and technical support to researchers, and was supported in part by the Breast Cancer Research Foundation grant BCRF-18-002.  

In addition to his appointment at the Ellison Institute, Ruderman is an assistant professor of research medicine at USC’s Keck School of Medicine.

Contact Info
Julie Sugishita Cantor
Oracle Corporate Communications
(650) 506-0076
julie.sugishita@oracle.com
Ballantines PR
(310) 454-3080
ellison@ballantinespr.com
About Lawrence J. Ellison Institute for Transformative Medicine of USC

The Lawrence J. Ellison Institute for Transformative Medicine was founded in 2016 to leverage technology, spark innovation, and drive interdisciplinary, evidence-based research to reimagine and redefine cancer treatment, enhance health, and transform lives. Under the leadership of Dr. David B. Agus, MD, the Ellison Institute was designed to tackle the difficult questions in health care and research to push the boundaries of medicine forward.  The objective of the Ellison Institute is the rigorous and rapid translation of novel technologies into practice for use in clinical, diagnostic, and laboratory settings. The Institute is comprised of dedicated clinicians, experts and thought-leaders from disparate backgrounds who have come together to make a meaningful, positive impact on the lives of patients. This one-of-a-kind Institute hopes to serve as a powerful catalyst for innovation and reimagining the status quo in medical research and cancer treatments. For more information, visit Ellison.usc.edu.

About Oracle for Research

Oracle for Research is a global community that is working to address complex problems and drive meaningful change in the world. The program provides scientists, researchers, and university innovators with high-value, cost-effective Cloud technologies, participation in Oracle research user community, and access to Oracle’s technical support network. Through the program’s free cloud credits, users can leverage Oracle’s proven technology and infrastructure while keeping research-developed IP private and secure. Learn more at https://www.oracle.com/oracle-for-research/.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Julie Sugishita Cantor

  • (650) 506-0076

Ballantines PR

  • (310) 454-3080

Oracle Announces Oracle Dedicated Region Cloud@Customer

Oracle Press Releases - Wed, 2020-07-08 02:04
Press Release
Oracle Announces Oracle Dedicated Region Cloud@Customer Industry’s first fully managed cloud region brings 100% of Oracle’s public cloud services, including Oracle Autonomous Database and Cloud applications, to customer datacenters

Pricing starts at only $500K per month, a small fraction of what customers would have to pay for a one-off region from other cloud providers

Customers include Nomura Research Institute (NRI) in Japan and Oman Information and Communications Technology Group

Redwood Shores, Calif.—Jul 8, 2020

Driven by strong customer demand, Oracle today announced Oracle Dedicated Region Cloud@Customer, the industry’s first fully-managed cloud region that brings all of Oracle’s second-generation cloud services, including Autonomous Database and Oracle Cloud applications, to customer datacenters, starting at only $500K a month. Get started here.

With this offering, enterprises get the exact same complete set of modern cloud services, APIs, industry-leading SLAs, superior price-performance, and highest levels of security available from Oracle’s public cloud regions in their own datacenters. This is ideal for highly regulated or security-focused businesses needing to meet demanding latency and data residency requirements, reduce operational costs, and modernize legacy applications.

Over the past few years, enterprise adoption of public clouds has gone mainstream as companies took advantage of the pay-as-you-go economics, scale, and agility of cloud computing. However, most enterprises expect to continue to run a portion of their workloads in on-premises datacenters for the foreseeable future. This has resulted in strong demand from customers for a hybrid architecture where the same services, same functionality, and easy portability of applications exists between their public and on-premises cloud environments. But until today, no solution was able to bridge the gap between cloud and on-premises environments. On-premises offerings from other cloud providers offer a very small subset of the services available in their public cloud regions. With today’s announcement, Oracle is making all of its cloud services—more than 50 services—available on-premises so enterprises can use Oracle’s cloud services wherever they need them—in the cloud or on-premises via Cloud@Customer.

“Enterprise customers have told us that they want the full experience of a public cloud on-premises, including access to all of Oracle’s cloud services, to run their most important workloads,” said Clay Magouyrk, executive vice president of engineering, Oracle Cloud Infrastructure. “With Oracle Dedicated Region Cloud@Customer, enterprises get all of our second-generation cloud services, including Autonomous Database, in their datacenters. Our major competitors can’t offer customers a comparable dedicated cloud region running on-premises.”

Oracle Dedicated Region Cloud@Customer includes full management capabilities and access to new features and functions the moment they become available in Oracle’s public cloud. It provides strong isolation of customer data, including all API operations, which remain local to customer datacenters and provide the highest levels of security. Additionally, Oracle Dedicated Region Cloud@Customer is certified to seamlessly run Oracle Cloud applications, including Oracle Fusion Cloud Applications (Cloud ERP, Cloud HCM, Cloud SCM, and Cloud CX), making it a completely integrated cloud experience on-premises. Customers only pay for services they consume using the same predictable low pricing offered in Oracle’s public cloud regions.

“With Dedicated Region Cloud@Customer, Oracle delivers a slice of its public cloud experience into customer datacenters, with no changes in pricing or capabilities,” said Deepak Mohan, Research Director at IDC. “This represents a new direction for public cloud providers, who have historically offered only limited versions of their services to customer premises. Oracle Dedicated Region Cloud@Customer brings the full capabilities of Oracle Cloud Infrastructure and Oracle Fusion Cloud Applications, including over 50 services, to customer premises. This brings together public cloud service capability with the compliance, latency and co-location benefits of on premises—which can be a game changer for large scale digital transformation efforts at enterprises.”

Global Organizations Adopt Oracle Dedicated Region Cloud@Customer

Nomura Research Institute (NRI), Ltd. is the largest consulting firm and IT solutions provider in Japan. “With Oracle Dedicated Region Cloud@Customer, we can use Oracle Exadata as a cloud service and achieve greater agility, such as seamless expansion, while maintaining high availability at the same level as on-premises,” said Tomoshiro Takemoto, Senior Corporate Managing Director, NRI. “Built in our own datacenter, it also enables us to not only provide SOC2 reports based on Japanese security standards in financial industries, but it also allows us to access broader cloud services and tools provided by Oracle and further increase our business value for our customers.”

Oman Information and Communications Technology Group is an entity owned by State General Reserve Fund (SGRF) of the Government of Oman. “Oman Information and Communications Technology Group (OICTG), is committed to propel and shape the Sultanate’s ICT sector. Our investments, focused initiatives and bespoke partnerships aim to unlock the full potential of ICT. So, by fostering Omani talents, we will actively support the ongoing diversification of Oman’s economic development as outlined in His Majesty Sultan Haitham bin Tarik’s Oman 2040 Vision,” said HH Sayyid Kamil Fahad Al-Said; Chairman of Oman ICT Group. “Furthermore, and to meet the Sultanate’s needs of digital transformation, our centralized, innovatively structured digital framework will support the ownership and management of cloud-ready IT services. The OICTG’s sustainable, economically sensitive budget and attainable, realistic time-bound goals will guarantee maximum operational performance, flexible scalability and a secure data residency file under the accountable canopy of the Oman Investment Authority.”

“Oracle Dedicated Region Cloud@Customer enables a variety of use cases, from migrating Oracle ERP and CRM applications to deploying custom developed applications using Oracle Database, as well as implementing Digital Innovation Services (Blockchain, AI, Big Data) and High-Performance Computing (HPC), all while following the country regulations regarding data sovereignty,” said Said Al-Mandhari, CEO, Oman ICT Group.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

Oracle Autonomous Database Now Available in Customer Datacenters

Oracle Press Releases - Wed, 2020-07-08 02:02
Press Release
Oracle Autonomous Database Now Available in Customer Datacenters
Addresses data sovereignty, security, and performance concerns that prevent some enterprise workloads from moving to the public cloud

Crédit Agricole, Entel, and Samsung SDS welcome Autonomous Database on Exadata Cloud@Customer

Redwood Shores, Calif.—Jul 8, 2020

Building on the success of Oracle’s Exadata Cloud@Customer service over the last three years, Oracle announced the availability of Oracle Autonomous Database on Exadata Cloud@Customer. This new offering combines the latest Oracle Database with the fastest Oracle Database platform—Exadata—delivered as a cloud service in customer datacenters. It eliminates database management and capital expenditures while enabling pay-per-use and elastic consumption of database cloud resources. Now, Autonomous Database is available to run in customer datacenters both as a standalone offering and as part of Oracle Dedicated Region Cloud@Customer, the industry’s first on-premises cloud region, which was also announced today.

Oracle Autonomous Database on Exadata Cloud@Customer is the simplest and fastest transition to a cloud model with typical deployments taking less than a week. Existing applications in a datacenter can simply connect and run without requiring any application changes—while data never leaves the customer’s datacenter. This is ideal for enterprises that find it challenging to move their mission-critical database workloads to the public cloud due to data sovereignty and regulatory requirements, security and performance concerns, or because their on-premises applications and databases are tightly coupled. 

“Exadata Cloud@Customer has been successfully deployed at hundreds of customers, including large financial services companies, telecoms, healthcare providers, insurers, and pharmaceutical companies worldwide to modernize their infrastructure and lower costs by up to 50 percent,” said Juan Loaiza, executive vice president, mission-critical database technologies, Oracle. “We are now bringing Oracle Autonomous Database to customer datacenters—freeing DBAs and developers from mundane maintenance tasks and enabling them to innovate and create more business value.”

Oracle Autonomous Database on Exadata Cloud@Customer enables organizations to move to an environment where everything is automated and managed by Oracle. Autonomous operations include database provisioning, tuning, clustering, disaster protection, elastic scaling, securing, and patching. This eliminates manual processes and human error while reducing costs and increasing performance, security, and availability. The serverless architecture automatically scales to match changing workloads, providing true pay-per-use.

“Oracle Autonomous Database on Exadata Cloud@Customer combines the game-changing capabilities of the revolutionary Exadata X8M platform with Oracle’s most advanced machine-learning-powered database and its second-generation cloud control plane for a true enterprise-grade database cloud experience on-premises,” said Carl Olofson, Research Vice President, Data Management Software, IDC. “Every business has a set of ISV and homegrown applications that they depend on to run all aspects of their business, from finance to manufacturing, HR, orders, procurement, and operations. For companies serious about running these types of critical Oracle-based applications in an on-premises enterprise database cloud, Oracle Autonomous Database on Exadata Cloud@Customer is the most advanced offering in the market today.”

Customers can leverage Oracle Autonomous Database on Exadata Cloud@Customer to consolidate thousands of databases and run the converged, open Oracle Database for multiple data types and workloads, including machine learning, JSON, graph, spatial, IoT, and in-memory, instead of deploying fragmented special-purpose databases. With Oracle Autonomous Database on Oracle Exadata Cloud@Customer, organizations can work with up to 7x larger databases, achieve greater database consolidation, and improve performance with up to 12x more SQL IOPS, 10x more SQL throughput, and 98 percent lower SQL latency than RDS on AWS Outposts. Oracle Autonomous Database on Exadata Cloud@Customer reduces customers’ infrastructure and database management effort by up to 90 percent because they only have to focus on the schemas and data inside their databases, not on running the underlying database infrastructure.

In addition to the new Cloud@Customer offerings, Oracle continues to enhance the capabilities of the Autonomous Database. Oracle today announced the certification of Oracle’s Siebel, PeopleSoft, and JD Edwards running on Oracle Autonomous Database. By using Autonomous Database, Oracle’s Siebel, PeopleSoft, and JD Edwards customers will lower their costs while improving security, performance, and availability. The company also announced Oracle Autonomous Data Guard, which delivers an autonomously managed high availability and disaster recovery solution protecting against database and site failures. Oracle Autonomous Data Guard provides near-zero recovery point (RPO) and recovery time (RTO) objectives in the face of catastrophic failures.

Global Organizations Welcome New Cloud@Customer Offerings

Samsung SDS is the largest enterprise cloud solutions provider in Korea, delivering data-driven digital innovations to customers in 41 countries worldwide. “Back in 2010, we adopted the first Exadata platform to improve a display manufacturing system,” said Dr. WP Hong, CEO, Samsung SDS. “Now 10 years later, we have implemented nearly 300 Exadata systems for our customers in manufacturing, financial services, construction and engineering, and public and private sector services. Aligning with our digital innovation strategy and our journey to enterprise cloud, we have now adopted the first Exadata Cloud@Customer in one of our datacenters and look forward to deploying Autonomous Database.”

NTT DoCoMo is the number one mobile carrier in Japan with the largest customer base. “Oracle Exadata is implemented as our core engine to process the call, communication, and billing information of 80M users in real-time,” said Taku Hasegawa, Senior Vice President, General Manager of Information Systems Department, NTT DoCoMo. “Thanks to Exadata, we could cut operation and maintenance costs in half, while realizing 10x performance. As the core infrastructure for DoCoMo’s digital transformation and further business growth, I look forward to the continuous evolution of Oracle Exadata and the novel technology innovation driven by Autonomous Database on Exadata Cloud@Customer.”

Crédit Agricole CIB is the Corporate and Investment Banking arm of the Crédit Agricole Group, one of the world’s largest banks. “Moving to Exadata Cloud@Customer has significantly improved our accounting information systems performance, which has enabled us to carry out our accounting closing process with much greater agility and to reduce our operational costs,” said Pierre-Yves Bollard, Global Head of Finance IT, Crédit Agricole Corporate & Investment Bank. “The high value provided by the Exadata Cloud@Customer infrastructure has been recognized by all IT and business teams.”

Entel is the largest telecom provider in Chile and the third largest in Peru. “We have used Exadata systems for the past five years to support many applications across dozens of lines of business, including crucial billing and network management systems,” said Helder Branco, Head of IT Operations, Entel. “By using Exadata, we improved mission-critical Oracle Database performance by up to 3x, and reduced our security exposure. We are taking our digital transformations to the next level by moving over 30 databases to Oracle Autonomous Database on Exadata Cloud@Customer and improving their security with its self-securing capabilities.”

RKK Computer Service is an IT consultancy based in Japan, focusing on local governments and financial institutions. “RKK Computer Service selected Oracle Exadata Cloud@Customer to host our shared platform that runs core business systems for 100 municipalities,” said Chihiro Sato, Deputy General Manager, Public Sector Planning and Development Division, RKK Computer Service. “Compared to our previous on-premises solution, we have 24 percent cost savings and more than 70 percent IO performance improvement, which enables us to run concurrent batch processes for multiple municipalities. High availability is achieved with RAC and Data Guard. We believe that Oracle’s second-generation Exadata Cloud@Customer is a promising cloud platform for municipalities. RKKCS will continuously enhance our cloud infrastructure for municipalities by exploring Autonomous Database on Exadata Cloud@Customer to improve operational efficiency.”

The State of Queretaro is located in central Mexico. “Based on a directive from the state governor and state secretary to address the COVID-19 crisis, we were asked to develop an application that would allow the citizens and patients of the State of Querétaro, Mexico, to carry out a self-diagnosis to help avoid the spread of infections,” said Pedro Gonzalez, Director CIAS, Queretaro State Government, Mexico. “With Oracle Database on Exadata Cloud@Customer, we were able to react quickly and develop a mobile application in less than three weeks — plus we were able to adhere to state regulations to maintain the sensitive data of citizens and patients in our facilities. We look forward to investing in Oracle Autonomous Database this year, which will free up our staff and resources to focus on developing new business applications without spending any time on patching, tuning, and maintaining the database.” 

Siav is an enterprise content management software and IT services company based in Italy. “We chose Oracle Exadata Cloud@Customer to help us manage the constant growth of our business in cloud services and solutions,” said Nicola Voltan, CEO, Siav S.p.A. “Exadata Cloud@Customer provides the performance, scalability and security we need to offer the highest quality service to our customers. It’s managed by Oracle in our datacenter, enabling us to comply with the Italian legislation related to the geographical location of the service provided.”

New Exadata Cloud@Customer Enhancements

In addition to the Autonomous Database, Oracle is announcing the following Exadata Cloud@Customer enhancements:

  • Oracle Exadata Database Machine X8M Technology, which combines Intel® Optane™ DC Persistent Memory and 100 gigabit remote direct memory access (RDMA) over Converged Ethernet (RoCE) to remove storage bottlenecks and dramatically increase performance for the most demanding workloads such as Online Transaction Processing (OLTP), IoT, fraud detection, and high frequency trading. Direct database access to shared persistent memory increases peak performance to 12 million SQL read IOPS, 2.5X greater than the prior generation offering powered by Exadata Database Machine X8. Additionally, Exadata X8M dramatically reduces the latency of critical database IOs by enabling remote IO latencies below 19 microseconds—more than 10X faster than the prior generation offering. These ultra-low latencies are achieved even for workloads requiring millions of IOs per second.

  • Multiple VM Clusters per Exadata Rack, which enables organizations to share an Exadata system for production, DR and dev/test and provide isolation across departments and use cases.

  • PCI-DSS Certification: Exadata Cloud@Customer now supports and meets Payment Card Industry Data Security Standard requirements and can be implemented as part of a highly secure financial processing environment.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Future Product Disclaimer

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

NRI Selects Oracle Dedicated Region Cloud@Customer

Oracle Press Releases - Wed, 2020-07-08 02:00
Press Release
NRI Selects Oracle Dedicated Region Cloud@Customer
First customer to adopt new Oracle Cloud offering

Will be used for mission-critical, high-volume business platform to accelerate digital transformation

Redwood Shores, Calif.—Jul 8, 2020

Oracle today announced that Nomura Research Institute (NRI) has selected Oracle Dedicated Region Cloud@Customer to run its critical, high-volume business platform used in various industries, including financial, retail, and distribution. NRI will start by moving its SaaS applications for the financial industry, which are used by about 70 percent of the capital markets firms in Japan, from on-premises to an Oracle Dedicated Region Cloud@Customer in its own datacenter. This will provide NRI with access to all of Oracle’s second-generation public cloud services while ensuring control and governance because the cloud services are in NRI’s own datacenter. During NRI and Oracle’s 30-year partnership, NRI has been an early adopter of Oracle technology and is now the first global customer to move to Oracle Dedicated Region Cloud@Customer. 

“With Oracle Dedicated Region Cloud@Customer, we can use Oracle Exadata as a cloud service and achieve greater agility, such as seamless expansion, while maintaining high availability at the same level as on-premises,” said Tomoshiro Takemoto, Senior Corporate Managing Director, NRI. “Built in our own datacenter, it enables us to not only provide SOC2 reports based on Japanese security standards in financial industries, but it also allows us to access broader cloud services and tools provided by Oracle and further increase our business value for our customers. With Oracle’s Dedicated Region, we can significantly reduce our on-premises costs and invest more in our digital transformation.”

“Oracle is excited to extend our strategic relationship with NRI to modernize and support the financial services market in Japan and globally,” said Clay Magouyrk, executive vice president of engineering, Oracle Cloud Infrastructure. “Only Oracle Cloud enables customers to utilize our full-featured cloud services anywhere, whether it’s our global public cloud region portfolio or the new dedicated regions for customers with data sovereignty, latency, regulatory, security and compliance requirements. With the differentiated model of our Dedicated Region, NRI will be able to modernize and truly deliver a first-class, cloud-native experience to their customers while meeting all of their requirements.”

Oracle Dedicated Region Cloud@Customer enables customers to build Oracle’s public cloud regions in customers’ own datacenters. Customers can access all of Oracle’s second-generation cloud services, including Bare Metal Compute, VMs and GPUs, Autonomous Database, and Exadata Cloud Service; container-based services like Container Engine for Kubernetes; and analytics services like Data Flow, while also having control and governance of their systems and services. With Oracle Cloud@Customer, enterprises can seamlessly lift and shift legacy workloads and consolidate on a single platform, enabling them to dramatically improve TCO without requiring re-architecture. Customers also have access to a full set of development services, like API Gateway and Events Services that will help them incrementally modernize their stack in-place, reducing the risk and expense of adopting new technologies.

Contact Info
Nicole Maloney
Oracle PR
+1.650.506.0806
nicole.maloney@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Nicole Maloney

  • +1.650.506.0806

PL/SQL to GetDDL for all user accounts except system accounts using a cursor and loop

Tom Kyte - Tue, 2020-07-07 16:06
I am not sure what I am doing wrong with this statement:

Declare
  cursor cur_dba_users IS
    select username from dba_users
    where username not in ('ANONYMOUS','APEX_040200','APEX_PUBLIC_USER','APPQOSSYS','AUDSYS','BI','CTXSYS',
      'DBSNMP','DIP','DVF','DVSYS','EXFSYS','FLOWS_FILES','GSMADMIN_INTERNAL','GSMCATUSER','GSMUSER','HR','IX',
      'LBACSYS','MDDATA','MDSYS','OE','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','PM','SCOTT','SH',
      'SI_INFORMTN_SCHEMA','SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR','SYS','SYSBACKUP','SYSDG','SYSKM',
      'SYSTEM','WMSYS','XDB','SYSMAN','RMAN','RMAN_BACKUP','OWBSYS','OWBSYS_AUDIT','APEX_030200','MGMT_VIEW','OJVMSYS');
Begin
  For cur_username in cur_dba_users Loop
    SELECT dbms_metadata.get_ddl('USER',cur_username) FROM dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',cur_username) from dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',cur_username) from dual;
    SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',cur_username) from dual;
  END LOOP;
  CLOSE cur_dba_users;
End;

Error report -
ORA-06550: line 6, column 39: PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 6, column 1: PL/SQL: SQL Statement ignored
ORA-06550: line 7, column 55: PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 7, column 1: PL/SQL: SQL Statement ignored
ORA-06550: line 8, column 53: PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 8, column 1: PL/SQL: SQL Statement ignored
ORA-06550: line 9, column 55: PL/SQL: ORA-00904: "CUR_USERNAME": invalid identifier
ORA-06550: line 9, column 1: PL/SQL: SQL Statement ignored
06550. 00000 - "line %s, column %s:\n%s"
*Cause: Usually a PL/SQL compilation error.
*Action:
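Three separate issues explain the errors here: the cursor FOR loop variable is a record, so the column must be referenced as cur_username.username; a bare SELECT inside PL/SQL needs an INTO clause (or the function result must be assigned/printed instead); and a cursor opened implicitly by a FOR loop must not be closed manually. A corrected sketch along those lines follows — the two-name exclusion list is a placeholder for the full list above, SET SERVEROUTPUT ON is assumed, and in my experience GET_GRANTED_DDL raises an error for users with no grants of the requested type, hence the wrappers:

set serveroutput on size unlimited

declare
  l_ddl clob;
begin
  for rec in (select username
              from   dba_users
              where  username not in ('SYS','SYSTEM')  -- placeholder: use the full exclusion list above
             )
  loop
    -- The loop variable is a record, so reference the column as rec.username.
    -- Assigning the function result avoids the SELECT-without-INTO problem.
    l_ddl := dbms_metadata.get_ddl('USER', rec.username);
    dbms_output.put_line(dbms_lob.substr(l_ddl, 4000));  -- first 4000 chars; spool to a file for full output
    -- GET_GRANTED_DDL raises an error when the user has no grants of the
    -- requested type, so each call is wrapped to let the loop continue.
    begin
      dbms_output.put_line(dbms_lob.substr(dbms_metadata.get_granted_ddl('SYSTEM_GRANT', rec.username), 4000));
    exception when others then null;
    end;
    begin
      dbms_output.put_line(dbms_lob.substr(dbms_metadata.get_granted_ddl('ROLE_GRANT', rec.username), 4000));
    exception when others then null;
    end;
    begin
      dbms_output.put_line(dbms_lob.substr(dbms_metadata.get_granted_ddl('OBJECT_GRANT', rec.username), 4000));
    exception when others then null;
    end;
  end loop;
  -- No CLOSE here: a cursor FOR loop opens and closes its cursor implicitly.
end;
/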
Categories: DBA Blogs

Redo Log query

Tom Kyte - Tue, 2020-07-07 16:06
I understand that instead of the log generation rate we should consider the amount of redo generated. My question is: how can we identify a database bottleneck using the amount of redo? Is there any way to compare the amount of redo generated between two days, or two months, and see whether there is a need for tuning or re-sizing?
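One common way to make that comparison, sketched below on the assumption that AWR (Diagnostics Pack) is licensed: DBA_HIST_SYSSTAT keeps the cumulative "redo size" counter per snapshot, so the delta between consecutive snapshots gives redo volume per interval, which can then be aggregated by day or month. A sustained jump in redo volume is a cue to investigate what changed (new batch jobs, logging settings, row-by-row updates) rather than a bottleneck indicator by itself.

-- Redo (MB) generated per AWR snapshot interval over the last 7 days.
-- Single instance assumed; deltas can show as negative across an instance
-- restart because the underlying counter resets, so ignore those rows.
select sn.begin_interval_time,
       round((st.value - lag(st.value) over (order by sn.snap_id)) / 1024 / 1024) as redo_mb
from   dba_hist_sysstat  st
join   dba_hist_snapshot sn
       on  st.snap_id         = sn.snap_id
       and st.dbid            = sn.dbid
       and st.instance_number = sn.instance_number
where  st.stat_name = 'redo size'
and    sn.begin_interval_time > sysdate - 7
order  by sn.snap_id;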
Categories: DBA Blogs

Need information on tuning a join

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I need to know how the where clause is affecting my query. I have a table ABC with a batch_id column. It has 100,000 (1 lakh) rows. It is truncated daily, so it holds only one batch_id at a time. table1 = ABC, table2 = XYZ.

select * from ABC t1, XYZ t2 where batch_id = 111 and t1.col1 = t2.col2

The above query takes 20+ minutes to retrieve data, while the query below takes barely 6 seconds:

select * from ABC t1, XYZ t2 where t1.col1 = t2.col2;

I just want to understand why it takes so long with only batch_id added to the where clause.
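Without the actual plans it is hard to say, but one likely suspect on a daily-truncated table is stale or missing optimizer statistics: right after the truncate-and-load, the stats may still describe yesterday's (or an empty) table, so adding the batch_id predicate can flip the plan on a wildly wrong cardinality estimate. A sketch of how to compare the two executions, using the table names from the question:

-- Run the slow query once with runtime statistics collection enabled ...
select /*+ GATHER_PLAN_STATISTICS */ *
from   ABC t1, XYZ t2
where  t1.batch_id = 111
and    t1.col1 = t2.col2;

-- ... then pull its actual plan: comparing E-Rows with A-Rows shows where
-- the optimizer's estimate went wrong.
select *
from   table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

-- If the estimates are badly off, gathering stats after the daily load often helps:
exec dbms_stats.gather_table_stats(user, 'ABC');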
Categories: DBA Blogs

COLLECTION ITERATOR PICKLER FETCH

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I am using DBMS_XPLAN.DISPLAY to display the execution plan of a query. After using EXPLAIN PLAN SET STATEMENT_ID ..., I run the following:

SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE','SAM','ALL'));

Execution Plan
----------------------------------------------------------
Plan hash value: 3013799171

---------------------------------------------------------------------------------------------
| Id  | Operation                         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |         | 32744 | 65488 |    49   (3)| 00:00:01 |
|   1 |  COLLECTION ITERATOR PICKLER FETCH| DISPLAY |       |       |            |          |
---------------------------------------------------------------------------------------------

Can you please help me understand how to avoid this "COLLECTION ITERATOR PICKLER FETCH" and get the proper explain plan? Thank you very much. Sarayu K.S.
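The output shown looks like SQL*Plus AUTOTRACE output: with autotrace enabled, SQL*Plus explains the statement just executed, which here is the SELECT against the DBMS_XPLAN.DISPLAY table function itself, hence the COLLECTION ITERATOR PICKLER FETCH on DISPLAY. Turning autotrace off before querying DBMS_XPLAN should show the stored plan for statement 'SAM' instead; a sketch (the SELECT FROM dual is a stand-in for the statement actually being tuned):

SET AUTOTRACE OFF
SET PAGESIZE 0 LINESIZE 200

EXPLAIN PLAN SET STATEMENT_ID = 'SAM' FOR
  SELECT * FROM dual;  -- stand-in for the statement actually being tuned

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'SAM', 'ALL'));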
Categories: DBA Blogs

SQL TO_CHAR returns differently between 2 Servers

Tom Kyte - Tue, 2020-07-07 16:06
Hello, I have two servers, both version 12.2.0.1.0. When I run the same SQL on each, I do not get the same result for dates formatted with TO_CHAR. In SQL*Plus:

SELECT CONTRACTDATE, TO_CHAR(CONTRACTDATE,'YYYY/MM/DD')
FROM MYTABLES
WHERE CONTRACTNO=101010;

The column CONTRACTDATE is a DATE.

SERVER1

CONTRACTD TO_CHAR(CO
--------- ----------
05-DEC-19 2019/12/05

SERVER2

CONTRACTD TO_CHAR(CO
--------- ----------
05-DEC-19 0019/12/05

As you can see, the second server shows 0019 for the year instead of the expected 2019. Any idea why I get this behaviour? Thank you.

PS: I have checked the NLS settings and they are identical:

SELECT DB.PARAMETER, DB.VALUE "DATABASE", I.VALUE "INSTANCE", S.VALUE "SESSION"
FROM NLS_DATABASE_PARAMETERS DB, NLS_INSTANCE_PARAMETERS I, NLS_SESSION_PARAMETERS S
WHERE DB.PARAMETER=I.PARAMETER(+)
AND DB.PARAMETER=S.PARAMETER(+)
ORDER BY 1;
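Since TO_CHAR only formats what is stored, 0019 almost always means the row on SERVER2 really contains year 19, typically the result of an insert that mixed a two-digit year value with a four-digit format mask at load time; the default DD-MON-YY display hides the difference. DUMP makes the stored century visible. A sketch, using the table from the question (in Oracle's internal DATE format the first two bytes are century+100 and year-of-century+100, so 2019 dumps starting 120,119 while year 0019 dumps starting 100,119):

SELECT CONTRACTDATE,
       TO_CHAR(CONTRACTDATE, 'SYYYY/MM/DD') AS full_year,
       DUMP(CONTRACTDATE)                   AS internal_bytes
FROM   MYTABLES
WHERE  CONTRACTNO = 101010;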
Categories: DBA Blogs

Hidden parameter: No_Invalidate

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, for the past couple of months some queries in our production environment have intermittently been producing two plan hash values. Obviously one is better and the other is bad. We run a stats gathering job daily in the morning. I have checked many other platforms and understand that there are several possible reasons for this behaviour. Some of the reasons I have already checked are: 1. Parsing a SQL statement with a bind variable defined as the wrong type. 2. Not running the stats gathering job daily. Neither of these applies in my case. There are some more possibilities I wanted to ask you about. a. Invalidation of cursors: I understand this is handled by the parameter NO_INVALIDATE, which is set to AUTO_INVALIDATE for us. So even if my stats gathering job produces a bad plan hash value, it will not take effect immediately. b. Adaptive cursor sharing: how can I confirm whether this feature is why my SQL is producing multiple plan hash values? Could you please help me with the above understanding and advise on points a and b? Thanks a lot.
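For what it is worth, NO_INVALIDATE is a DBMS_STATS preference rather than an init.ora parameter, and both points can be checked from the dictionary. A sketch (the V$SQL flag columns exist from 11g onward; &sql_id is a SQL*Plus substitution variable for the statement in question):

-- (a) How stats gathering treats dependent cursors. With AUTO_INVALIDATE,
--     cursors are invalidated gradually over a rolling window after the
--     gather rather than immediately, so a bad plan can linger for hours.
select dbms_stats.get_prefs('NO_INVALIDATE') from dual;

-- (b) Whether adaptive cursor sharing produced the extra plans: multiple
--     children with IS_BIND_AWARE = 'Y' and different plan hash values
--     point to ACS; identical flags with different plans point elsewhere.
select sql_id, child_number, plan_hash_value,
       is_bind_sensitive, is_bind_aware, executions
from   v$sql
where  sql_id = '&sql_id'
order  by child_number;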
Categories: DBA Blogs

Special character constant package after deployment

Tom Kyte - Tue, 2020-07-07 16:06
Hi Tom, I have a constants package in which I define a constant whose value is '#' (pound). Running the package script in the dev environment, the package compiles with no errors and the constant shows no special character. Running the same script in the UT environment, the package also compiles with no errors, but the constant's value becomes 'A#'. How can I resolve this issue? I understand it is a character set difference. Thanks in advance. Code:

create or replace package pk_constants is
  lv_pound constant char(1) := '#'; -- for Germany and European countries
  ......
  .....
end pk_constants;
/
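Assuming the intended character is the pound sterling sign £ (U+00A3), which the symptom suggests, since a plain ASCII '#' would survive any character-set conversion while £ saved as two UTF-8 bytes and read back under a single-byte client NLS_LANG renders as a two-character sequence, one fix is to keep the script file pure ASCII and build the character with UNISTR so the result no longer depends on the file's encoding or the client's NLS_LANG. A sketch:

create or replace package pk_constants is
  -- U+00A3 = pound sterling. UNISTR keeps the script file pure ASCII, so the
  -- constant no longer depends on NLS_LANG or the file's encoding.
  -- (The database character set must still be able to represent the character.)
  lv_pound constant varchar2(1 char) := unistr('\00A3');
end pk_constants;
/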
Categories: DBA Blogs

Simple Performance Tuning Example

Bobby Durrett's DBA Blog - Tue, 2020-07-07 10:41

I helped resolve an Oracle database performance problem yesterday and I thought I would capture what I did in this blog post. It is a “simple” problem in that the solution was easy once I understood the issue and because this sort of problem happens often. But the process to get to the solution is not completely simple. I think documenting the process may have more value than writing down the details of this specific problem.

Today is Tuesday. Thursday last week our development team was doing some testing, and the performance of the system was terrible. I was busy working on another project that is my primary assignment right now, so I did not look at it. It was also the Thursday before the July 4th holiday weekend, and we were all mentally and physically checking out a bit early. Some key people were already off on vacation. So, we shut it all down until Monday (yesterday). Also, late Thursday there was some thought that there was a system issue, maybe with our virtual machines’ host or networking. So, we let it go. Monday our dev team said they thought the slowness was caused by bad data, including duplicates that were throwing errors on a target (non-Oracle) system in the cloud. So, they cleaned up the data and started a new run. But it was slow as molasses, and the application front end was also super slow.

In retrospect, looking at the paragraph above, I bet a lot of DBAs with performance tuning experience would jump to the solution. I can think off the top of my head of a few types of things that might cause these symptoms. But I did not immediately think of it so I went through a process to get there. No leap of intuition was required except maybe at the very end.

Since the problem query was currently running, the first step was to look at the session monitor in Toad. I may have looked at HP Unix’s Glance tool, which is like Top, to see the CPU usage and top processes at an OS level. What I saw was a parallel query running with 8 parallel child processes; the query had been running about 30 seconds.

Step two was to take an AWR report. First, I ran a command like this to get a snapshot after the current activity:

exec dbms_workload_repository.create_snapshot;

Then I ran an AWR report for the last two snapshots.
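For reference, a minimal SQL*Plus sketch of that sequence, using the stock awrrpt.sql report script that ships under $ORACLE_HOME:

-- End the current AWR interval with a fresh snapshot, then build the report.
-- awrrpt.sql prompts for report type (html/text), days of snapshots to list,
-- and the begin/end snapshot IDs.
exec dbms_workload_repository.create_snapshot;
@?/rdbms/admin/awrrpt.sql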

The “SQL ordered by Elapsed Time” part of the report looked like this:

[Screenshot: SQL ordered by Elapsed Time]

The top query azrtxpx1g6mxy was the one I saw in Toad. The second query is a subselect of the top one. The top query averaged 51.23 seconds per execution and spent 46.77% of its time on I/O, so I eventually went where I usually go with AWR reports: the “Segments by Logical Reads” section. Here is part of that section:

[Screenshot: Segments by Logical Reads]

I hid the table names, but the point is that the top segment was doing 99.67% of the logical I/Os. So, the issue had to do with I/O. But looking at the “Top 5 Timed Foreground Events” section, I/O seemed very efficient.

[Screenshot: Top 5 Timed Foreground Events]

The top wait was the I/O event “direct path read”, which can be deceptively low on systems that use asynchronous I/O. But this system uses HP Unix file system I/O, which does not support asynchronous I/O, so the direct path read number is correct. One of the possible causes on my list before I started was some sort of I/O contention, but this report pointed away from the I/O hardware.

So, the AWR report and my look at the Toad session monitor pointed to a SQL statement doing parallel query full scans of a table, with that statement running tens of seconds per execution. So, I looked at the execution plan next. I like to use my getplans.sql SQL*Plus script to extract a query’s plan from the AWR using its SQL_ID. I ran this in SQL*Plus:

@getplans azrtxpx1g6mxy

Sure enough, the same top table was running a parallel full table scan in the plan.

Somewhere in this process I did some counts. Count(*) queries may seem too simple to be worth much, but they are one of the biggest helps to me when tuning a SQL statement. I did two types of counts for this case. I did a select count(*) on the top table and found out that it had 160 million rows. The count took 45 seconds even though it was doing a fast full scan of an index and not the full table. Then I queried grouping by the column that joins the table to its parent table. This is a typical header-detail join where the child table joins the parent on the parent’s unique key. Then I got the idea that the query might run faster if there were an index on the joining column.
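A sketch of those two counts, with hypothetical names (detail_tab for the 160-million-row table, header_id for the column that joins it to its parent):

-- How big is the table, really?
select count(*) from detail_tab;

-- How many detail rows per header value? Reasonably selective counts here
-- suggested that an index on the joining column would pay off for this join.
select header_id, count(*) as cnt
from   detail_tab
group  by header_id
order  by cnt desc;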

I ran my optimizer statistics scripts to check the index columns and to look at the column statistics to see how selective the joining column is. I found that there was no index on the column that the header table used to join to the problem detail table. So, I recommended that the dev team add an index on that column and including the other columns in the where clause for good measure. They might have to try two or three different column choices and orders to get the best index.
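The comparison that eventually cracked the case can be as simple as running a query like this on both test and production and diffing the output (owner and table names are hypothetical):

-- List every index and its columns on the problem detail table.
select index_name, column_name, column_position
from   dba_ind_columns
where  table_owner = 'APP'          -- hypothetical schema
and    table_name  = 'DETAIL_TAB'   -- hypothetical table
order  by index_name, column_position;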

But the dev team was skeptical. This same query runs fine in production. It is not new, so what changed? I tried running my sqlstat.sql script to get a history of the execution of SQL_ID azrtxpx1g6mxy, but there was no history on our test system going back before July 2nd. I ran it on production and there was no history at all. I found out that this SQL had not run for two months on production, and since we only keep 6 weeks of history in our AWR, any execution history of azrtxpx1g6mxy had fallen off the list.

Finally, I had a moment of insight. If I could have seen a faster plan on production, no insight would have been required. It would have been an almost mechanical process of comparing the good and bad plans. But I finally got the idea of running my same optimizer statistics scripts against production and looking at the indexes. Sure enough, production had an index on the joining column. Duh! I probably should have checked for differences between the tables and indexes in test and prod first, but I had no reason to suspect a difference. So, a coworker put the indexes back in test and everything was fine.

So, it is a simple thing. Indexes were dropped on a test system. Happens all the time. Not profound. But maybe the process of driving from problem statement through the different kinds of evidence and tools to the solution is more profound. It gets results. What does the AWR report point to? The top segments pointed to a table. The counts and column statistics suggested the need for an index. Finally, a compare of prod and test showed that an index was missing.

Bobby

Categories: DBA Blogs

A Correlated SubQuery

Hemant K Chitale - Tue, 2020-07-07 10:16
I take this simple Correlated Subquery that identifies all employees whose Salary is greater than the department's average Salary and is also greater than $9,000:

SQL> l
1 select emp.department_id, emp.employee_id, emp.salary
2 from hr.employees emp
3 where emp.salary >
4 (select avg(salary)
5 from hr.employees emp_inner
6 where emp.department_id = emp_inner.department_id)
7 and emp.salary > 9000
8* order by 1,2
SQL> /

DEPARTMENT_ID EMPLOYEE_ID SALARY
------------- ----------- ----------
20 201 13000
30 114 11000
80 145 14000
80 146 13500
80 147 12000
80 148 11000
80 149 10500
80 150 10000
80 151 9500
80 156 10000
80 157 9500
80 162 10500
80 163 9500
80 168 11500
80 169 10000
80 170 9600
80 174 11000
90 100 24000
100 108 12008
110 205 12008

20 rows selected.

SQL>


The Row Source Statistics for the actual query execution are :

select emp.department_id, emp.employee_id, emp.salary
from hr.employees emp
where emp.salary >
(select avg(salary)
from hr.employees emp_inner
where emp.department_id = emp_inner.department_id)
and emp.salary > 9000
order by 1,2

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch        3      0.00       0.00          0         12          0         20
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total        5      0.00       0.00          0         12          0         20

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 106
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
        20         20         20  FILTER  (cr=12 pr=0 pw=0 time=1027 us starts=1)
        23         23         23   SORT GROUP BY (cr=12 pr=0 pw=0 time=1020 us starts=1 cost=7 size=3552 card=111)
       536        536        536    HASH JOIN  (cr=12 pr=0 pw=0 time=917 us starts=1 cost=6 size=71648 card=2239)
        23         23         23     TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=30 us starts=1 cost=3 size=1752 card=73)
       108        108        108     TABLE ACCESS FULL EMPLOYEES (cr=6 pr=0 pw=0 time=7 us starts=1 cost=3 size=864 card=108)

********************************************************************************


The Execution Plan is

----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 111 | 3552 | 7 (15)| 00:00:01 |
|* 1 | FILTER | | | | | |
| 2 | SORT GROUP BY | | 111 | 3552 | 7 (15)| 00:00:01 |
|* 3 | HASH JOIN | | 2239 | 71648 | 6 (0)| 00:00:01 |
|* 4 | TABLE ACCESS FULL| EMPLOYEES | 73 | 1752 | 3 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL| EMPLOYEES | 108 | 864 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("EMP"."SALARY">SUM("SALARY")/COUNT("SALARY"))
3 - access("EMP"."DEPARTMENT_ID"="EMP_INNER"."DEPARTMENT_ID")
4 - filter("EMP"."SALARY">9000)


(Since I had only one execution of the query while tracing was enabled, "Rows (1st)", "Rows (avg)" and "Rows (max)" report on the same, single execution.)

The EMPLOYEES table underwent two Full Table Scans, each consisting of 6 "consistent reads"  ("cr=6"). 

The execution at line Id=4 expected to return 73 rows ("card=73") but actually returned 23 rows.  This is the operation that applied the filter for SALARY greater than 9000.

The execution at line Id=5 expected to return 108 rows ("card=108") and did actually return 108 rows.  There is no predicate information, meaning that all 108 rows of the table were returned, none filtered.

The join of the outer query and the inner query on DEPARTMENT_ID was the Hash Join at operation Id=3.

Do you note how the FILTER at operation Id=1 appears?  It is filtering for SALARY greater than the AVERAGE SALARY (where the AVERAGE is computed as the SUM of SALARY values divided by the count of non-null SALARY values).
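Note also what the optimizer has done with the correlated subquery: instead of running the inner query once per outer row, it unnested it, computing all the department averages with a GROUP BY and joining them back with a Hash Join. To see the textbook row-by-row execution for comparison, a NO_UNNEST hint in the subquery should force a FILTER plan that executes the subquery per department; a sketch:

select emp.department_id, emp.employee_id, emp.salary
from hr.employees emp
where emp.salary >
      (select /*+ NO_UNNEST */ avg(salary)
       from hr.employees emp_inner
       where emp.department_id = emp_inner.department_id)
and emp.salary > 9000
order by 1,2;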



Categories: DBA Blogs

Survey: Consumers Eager to Travel with Certain Conditions for Hotels

Oracle Press Releases - Tue, 2020-07-07 07:00
Press Release
Survey: Consumers Eager to Travel with Certain Conditions for Hotels
Global Oracle and Skift study highlights what consumers expect from hotels including increased cleaning and more contactless technology

Redwood Shores, Calif.—Jul 7, 2020

Following a long stretch of stay-at-home orders, consumers are eager to get out and travel, according to a new research study from Oracle and Skift. Just over half (51%) of people surveyed in both North and Latin America plan to book trips in the next six months, while 38% of those in Asia-Pacific and Europe are planning getaways. People, however, are opting to stay closer to home, favoring trips within driving distance (47%) or domestic travel (44%). With so much uncertainty ahead, consumers are also demanding flexible cancellation and refund policies (76%) and are more open to considering hotels offering discounted rates (65%).

Consumers’ willingness to travel also comes with heavy caveats for hotels in terms of advancements in cleaning and technology. In response, 70% of hotels have already adopted or are planning to adopt contactless technology for check-in, food ordering, concierge services and more. Ninety percent have increased or plan to increase cleaning and disinfecting frequency, and 89% have done the same for staff training on these procedures and on safer guest interactions.

“As the hospitality industry turns a corner on the road to recovery, technology will be critical to protecting travelers and workers alike,” said Alex Alt, senior vice president and general manager, Oracle Hospitality. “Safety rightly remains consumers’ top priority when they consider traveling, and the hospitality industry is doubling down on technology to facilitate social distancing and reduce face to face interaction for guests’ protection.”

The global study, A Data-Driven Look at Hospitality’s Recovery, surveyed 4,600 consumers and 1,800 hotel executives about their attitudes and outlook on travel, including changes to the guest and employee experience, booking policies and distribution, and the industry’s recovery outlook.

Shifting Demand, Changing Operations

With international border restriction and flying concerns, hotels will be welcoming new types of travelers, making the stakes high to leave a positive impression that will lead to repeat business and word-of-mouth recommendations. The survey found that:

  • Border closures reshaping guest profile: More than 30% of executives surveyed said they expected to see a few more or significantly more domestic travelers, and over 60% were expecting fewer or significantly fewer international travelers. This may offer a silver lining for North American hoteliers, 57% of whom agreed that the greatest majority of their guests were domestic travelers before the health crisis, signaling a more limited impact on demand.
  • New policies court travelers: Hotels are moving swiftly to abide by consumer demands with more than 80% of executives reporting that they were considering or have already made changes to allow for more flexible cancellation and refund policies.
  • Safety will be the new gold standard: Unsurprisingly, space and cleanliness will remain critical, with 84% of travelers agreeing that ‘creating social distancing’ rules for hotel public spaces will be the most influential factor in their decision about which travel brands to choose for upcoming trips. Hotel executives appear ready to meet these expectations, with 82% either considering or already altering public spaces to enable social distancing.

Technology Paramount to a Positive Safer Guest Experience

Technology is enabling social distancing and cleanliness by reducing the need for in-person interactions and empowering hotels to deliver ‘high-touch’ service without the ‘touch.’ Specifically:

  • Hotels go contactless: More than 70% of executives said they were considering or are already using contactless payment and digital messaging services, and close to 60% were considering/already using room keys activated by smartphone. Consumers concur, indicating that contactless payments (35%), digital room keys (26%) and digital messaging services (20%) were among the top three changes that would make them feel more comfortable staying in a hotel.
  • Self-service tools let guests skip the front desk: Over 70% of executives agreed or strongly agreed that self-service technology will be important to assisting guests while minimizing unnecessary contact, with two thirds (67%) reporting they were considering or are already using self-service check-in procedures. A similar portion of consumers (70%) agreed or strongly agreed they’d be more willing to stay at hospitality businesses that implemented these types of services, with 23% citing self-service check-in via kiosk as a change that would increase their comfort.
  • Ample amenities, less interaction: More than 60% of executive respondents reported they were considering or already making changes to expand room service options, and 50% are looking into expanding meal takeout/delivery options. One-fifth of guests ranked expanded room service options as a top factor for alleviating concerns, allowing them to avoid shared spaces such as hotel restaurants.

Methodology

The Oracle and Skift survey queried more than 1,800 hospitality executives, from a mix of businesses including hotels, short-term rental operators, cruise companies, boutique hotels and casinos, among others, to understand their opinions about an altered marketplace. Additionally, 4,600 consumers across North America, Europe, Asia-Pacific and Latin America were surveyed to understand COVID-19’s impact on travel plans and expectations for hotel stays.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our cloud-based, mobile-enabled solutions for property management, point-of-sale, distribution, and reporting and analytics lower IT costs and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and offers open APIs to serve as a platform for industry innovation. For more information about Oracle Hospitality, please visit www.oracle.com/Hospitality.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

VAT Filter for WooCommerce Reports

Nilesh Jethwa - Mon, 2020-07-06 23:04

Recently, at the request of one of our customers, we updated the Sales by Country World Map Report in the WooCommerce Reporting plugin to include an option to filter by VAT number. Now you can easily find and filter sales that have a VAT number and ones that do not. VAT Metadata Filter … Read more

Source: InfoCaptor BI
