Feed aggregator

Opening a Lagging Standby Database (to verify data ?)

Hemant K Chitale - 12 hours 24 min ago

 As shown in my previous blog post, you can create a Standby Database that lags the Primary by not applying Redo immediately but "waiting" for a specified interval.  It continues to receive ArchiveLogs but applies each one only after the "wait interval" has elapsed.


So, first, the status at the Primary:

oracle19c>echo $ORACLE_SID
ORCLCDB
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:03:12 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select systimestamp, database_role, current_scn from v$database;

SYSTIMESTAMP DATABASE_ROLE CURRENT_SCN
-------------------------------------------------------------- ---------------- -----------
24-JUL-21 06.03.32.106863 PM +08:00 PRIMARY 13258062

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> select * from hemant.job_tracking_tbl order by 1;

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182
24-JUL-21 06.00.27.037797 PM 7 13257658

7 rows selected.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/oracle/archivelog/ORCLCDB
Oldest online log sequence 310
Next log sequence to archive 312
Current log sequence 312
SQL>


Now, the status at STDB2 (the Standby that is lagging with an enforced 60-minute delay):

From the alert.log :
2021-07-24T17:51:15.716720+08:00
PR00 (PID:2924): Media Recovery Log /opt/oracle/archivelog/STDB21_303_1036108814.dbf
PR00 (PID:2924): Media Recovery Delayed for 59 minute(s) T-1.S-304
2021-07-24T17:57:26.299295+08:00
PR00 (PID:2924): Media Recovery Log /opt/oracle/archivelog/STDB21_304_1036108814.dbf
2021-07-24T17:57:44.580258+08:00
PR00 (PID:2924): Media Recovery Delayed for 60 minute(s) T-1.S-305
2021-07-24T18:00:32.550708+08:00
rfs (PID:3452): Archived Log entry 52 added for B-1036108814.T-1.S-311 ID 0xa7521ccd LAD:3
2021-07-24T18:00:33.444329+08:00
rfs (PID:3452): Selected LNO:4 for T-1.S-312 dbid 2778483057 branch 1036108814


oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:05:53 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> select systimestamp, database_role, open_mode, current_scn from v$database;

SYSTIMESTAMP DATABASE_ROLE OPEN_MODE CURRENT_SCN
--------------------------------------------------- ---------------- ------------ -----------
24-JUL-21 06.06.51.313616 PM +08:00 PHYSICAL STANDBY READ ONLY 13239390

SQL>
SQL> alter pluggable database orclpdb1 open read only;

Pluggable database altered.

SQL> alter session set container=ORCLPDB1;

Session altered.

SQL> select * from hemant.job_tracking_tbl order by 1;

CURRENT_TIMESTAMP JOB_ID CURRENT_SCN
-------------------------------------------------------------- ---------- -----------
24-JUL-21 04.57.21.676949 PM 1 13239134

SQL>


The Primary database is at Log Sequence#312. This Standby has applied only Sequence#304. Let me resume Recovery for some more time and then check the Standby again.
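(As an aside, not part of the original demo: one hedged way to compare the last ArchiveLog sequence received with the last one applied on the Standby is a query like the one below; it assumes a single-thread database and the current incarnation.)

select  max(sequence#) as last_received,
        max(case when applied = 'YES' then sequence# end) as last_applied
from    v$archived_log
where   resetlogs_change# = (select resetlogs_change# from v$database);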

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>echo $ORACLE_SID
STDB2
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:11:12 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 318767104 bytes
Database Buffers 872415232 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database using archived logfile disconnect from session;

Database altered.

SQL>
SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
oracle19c>


Later ... from the STDB2 alert log :

2021-07-24T18:40:22.531574+08:00
PR00 (PID:29990): Media Recovery Log /opt/oracle/archivelog/STDB21_309_1036108814.dbf
PR00 (PID:29990): Media Recovery Delayed for 59 minute(s) T-1.S-310
2021-07-24T18:40:43.574486+08:00
rfs (PID:531): No SRLs available for T-1
2021-07-24T18:40:43.743506+08:00
rfs (PID:531): Opened log for T-1.S-317 dbid 2778483057 branch 1036108814
2021-07-24T18:40:43.762715+08:00
ARC3 (PID:29836): Archived Log entry 57 added for T-1.S-316 ID 0xa7521ccd LAD:1
2021-07-24T18:40:43.762785+08:00
ARC3 (PID:29836): Archive log for T-1.S-316 available in 60 minute(s)
2021-07-24T18:49:27.636427+08:00
PR00 (PID:29990): Media Recovery Log /opt/oracle/archivelog/STDB21_310_1036108814.dbf
PR00 (PID:29990): Media Recovery Delayed for 60 minute(s) T-1.S-311
2021-07-24T18:50:45.257290+08:00
rfs (PID:531): Archived Log entry 58 added for B-1036108814.T-1.S-317 ID 0xa7521ccd LAD:3
2021-07-24T18:50:46.045279+08:00
rfs (PID:531): Selected LNO:4 for T-1.S-318 dbid 2778483057 branch 1036108814


oracle19c>echo $ORACLE_SID
STDB2
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Jul 24 18:51:27 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open read only;

Database altered.

SQL> alter pluggable database orclpdb1 open read only;

Pluggable database altered.

SQL> alter session set container=orclpdb1;

Session altered.

SQL>
SQL> select * from hemant.job_tracking_tbl order by 1;

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182

6 rows selected.

SQL>
SQL> select systimestamp from dual;

SYSTIMESTAMP
---------------------------------------------------------------------------
24-JUL-21 06.53.31.159094 PM +08:00

SQL>


So, now at 18:47, STDB2 has applied Sequence#310 and the database now shows data that came through that ArchiveLog, up to JOB_ID=6 (JOB_END_TIME=05:50:23 PM).
The Primary has already progressed further:

SQL> l
1* select * from hemant.job_tracking_tbl order by 1
SQL> /

JOB_END_TIME JOB_ID DB_SCN
-------------------------------------------------------------- ---------- ----------
24-JUL-21 04.57.21.676949 PM 1 13239134
24-JUL-21 05.04.29.870877 PM 2 13241261
24-JUL-21 05.30.17.962275 PM 3 13246616
24-JUL-21 05.39.10.912969 PM 4 13247859
24-JUL-21 05.40.20.865467 PM 5 13248159
24-JUL-21 05.50.23.930352 PM 6 13252182
24-JUL-21 06.00.27.037797 PM 7 13257658
24-JUL-21 06.10.33.163203 PM 8 13259223
24-JUL-21 06.20.36.839944 PM 9 13261275
24-JUL-21 06.22.46.972310 PM 10 13261560
24-JUL-21 06.30.39.787880 PM 11 13262799
24-JUL-21 06.37.18.623659 PM 12 13263658
24-JUL-21 06.40.41.713016 PM 13 13264263
24-JUL-21 06.50.43.755835 PM 14 13265798

14 rows selected.

SQL>


So, the operative methods at the Standby are:

For Recovery:
1. alter database recover managed standby database USING ARCHIVED LOGFILE disconnect from session

To Open and Query:
1. alter database recover managed standby database CANCEL
2. alter database OPEN READ ONLY
3. alter pluggable database <pdbname> OPEN READ ONLY

To resume Recovery:
1. shutdown immediate
2. startup mount
3. alter database recover managed standby database USING ARCHIVED LOGFILE disconnect from session

Meanwhile, the Primary must specify a DELAY value in the log_archive_dest_n parameter for the destination that points to this Standby (a sketch follows below).
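This is a minimal sketch only; the destination number, service name and 60-minute delay are assumptions for illustration, not values taken from this post. DELAY is specified in minutes and is honoured only when the Standby is not using real-time apply, which is why Recovery above is started with "using archived logfile".

alter system set log_archive_dest_3 =
  'SERVICE=stdb2 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=stdb2 DELAY=60'
  scope=both;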




Categories: DBA Blogs

Security steps to protect a database

Tom Kyte - Fri, 2021-07-23 17:06
Which are the important security steps to protect a database whose application will be in internet? Which are the differences between Connection Manager and Database Firewall? If I have native network encryption configured do I need to do something on application server level? I mean, for example, configuring the thin jdbc client network?
Categories: DBA Blogs

how to use dbms_stats.gather_database_stats

Tom Kyte - Fri, 2021-07-23 17:06
Tom: i try to use dbms_stats to give me a report of what statistics is missing, what i do is: declare a dbms_stats.objecttab; begin dbms_stats.gather_database_stats(OPTIONS=>'LIST EMPTY',OBJLIST=>a); end; after that how can i know the content of a?
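A minimal illustrative sketch (not the AskTOM answer): with OPTIONS=>'LIST EMPTY' the call gathers nothing and simply fills the output collection, which can then be read by looping over its elements (ownname, objname and objtype are fields of the dbms_stats.ObjectTab element record):

set serveroutput on
declare
    a  dbms_stats.objecttab;
begin
    dbms_stats.gather_database_stats(options => 'LIST EMPTY', objlist => a);
    for i in 1 .. a.count loop
        dbms_output.put_line(a(i).ownname || '.' || a(i).objname || ' (' || a(i).objtype || ')');
    end loop;
end;
/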
Categories: DBA Blogs

Getting Value from JSON array using PL/SQL

Tom Kyte - Fri, 2021-07-23 17:06
I need help. How do I get data from the stats object in the JSON array using PL/SQL? <code>{ "items": [ { " stats": { "m_date": "2019-05-31T00:00:00", "v_num": "0040012", "pk_num": "0562", "amt": 94, "bal": 75, "disc": 13 } } }</code> Thanks
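A minimal illustrative sketch (not the AskTOM answer), assuming the PL/SQL JSON object types available from 12.2 onwards, and assuming the " stats" key with a leading space in the question is really "stats":

declare
    l_doc   varchar2(4000) :=
        '{"items":[{"stats":{"m_date":"2019-05-31T00:00:00","v_num":"0040012","pk_num":"0562","amt":94,"bal":75,"disc":13}}]}';
    l_obj   json_object_t;
    l_items json_array_t;
    l_stats json_object_t;
begin
    l_obj   := json_object_t.parse(l_doc);
    l_items := l_obj.get_array('items');
    -- each array element is a JSON_ELEMENT_T; cast it to an object and pick out "stats"
    l_stats := treat(l_items.get(0) as json_object_t).get_object('stats');
    dbms_output.put_line(l_stats.get_string('v_num') || ' amt=' || l_stats.get_number('amt'));
end;
/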
Categories: DBA Blogs

Invalid XML character Error - How to find the invalid character from a VARCHAR2 database column?

Tom Kyte - Fri, 2021-07-23 17:06
Hello, Oracle newbie here. I am getting this error "Character reference "&#56256" is an invalid XML character" for XML data that is printed onto a report. The XML data that is causing the issue is from a VARCHAR2 data column in the database. I have filtered out the column to a separate backup table I have created and I want to go through the records in order to find where this invalid character is in. Then I need to write an update statement replacing the invalid character with a valid one. I am not sure how to do this. Is there a regex I can write with a SELECT statement for this? I tried below, but they didn't bring up any results: <code>select * from tabname where instr(colname,chr(56256)) > 0; </code> <code>select * from tabname where colname like unistr('%\dbc0%'); </code> Glad if someone one can help. Thank you!
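A hedged suggestion (not the AskTOM answer): because the LIKE/CHR attempts above work in the database character set, it is often easier to convert the stored data itself into escapes with asciistr(), which renders non-ASCII content as \XXXX UTF-16 code units (56256 decimal = DBC0 hex), for example:

select rowid, asciistr(colname)
from   tabname
where  asciistr(colname) like '%\DBC0%';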
Categories: DBA Blogs

Usage Flashback Query with Views

Tom Kyte - Fri, 2021-07-23 17:06
Dear TOM! Is it safe to use Flashback Query with views? <b>Setup:</b> <code>CREATE VIEW my_view AS SELECT a.*, b.* FROM table_a a JOIN table_b b ON (a.ID = b.ID_A);</code> <b>Flashback Query at view level:</b> <code>SELECT v.* FROM my_view AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND v; </code> <b>Flashback Query at table level:</b> <code>SELECT a.*, b.* FROM table_a AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND a JOIN table_b AS OF TIMESTAMP systimestamp - INTERVAL '30' SECOND b ON (a.ID = b.ID_A);</code> <b>Will both queries <u>reliably</u> deliver the same result?</b> <b>Will the result of both queries be <u>consistent</u> over all affected tables?</b> Thank you, Matthias
Categories: DBA Blogs

Help with trying to decide with authentication approach should be setup

Tom Kyte - Fri, 2021-07-23 17:06
Please help me keep my sanity by pointing me in the right direction when deciding the authentication approach to use with Oracle 19c databases. This is a very confusing topic since it deals with a few areas that require experience with Microsoft Active Directory or other nonRDBMS software. Also, each one of them isn't simple to use and Oracle training didn't go into implementing each of these when I attended in early 2000. This is driving me insane especially since I thought that database authentication was already secure enough. Here are all of the Oracle authentication methods that I know exist: <code>- Oracle database authentication ( create user identified by password ) - Operating System authentication ( create OPS$user identified externally ) - Kerberos / Radius (create user identified externally as 'kerberos_name' ) - certificate_DN ( create user identified externally as 'certificat_DN' ) (is this SSL authentication?) - Globally as 'directory_DN' ( create user identified GLOBALLY as 'directory_DN') ( Sigh ... this sounds so much like other authentication options.)</code> I'm not sure if these are authentication approaches, but I know they mingle with authentication and add to the confusion: - Centrally Managed Users - Enterprise User Security Also, knowning when Microsoft Active Directory can be used is confusing. I think these require Microsoft Active Directory: - Kerberos - Centrally Managed Users To muddy the water more, based on what I have seen, Kerberos can be used with Centrally Managed Users which is confusing since it seems like Kerberos with AD is enough. Finally, I keep seeing that Oracle Internet Directory is needed in some cases. The only one that seems to need is "Enterprise User Security" which seems like if we have Microsoft Active Directory, we would use "Centrally Managed Users" setup. I know i've mentioned a lot above. It would be nice if you can at a minimum tell me which one I should focus on to setup a secure authentication approach without going overboard. Which approach would recommend to use for the most secure authentication with the following in our infrastructure: <code>- Enterprise Edition Oracle 19c on Linux with April 2021 RU applied - SQLNET.TCP.INVITED_NODES - FAILED_LOGIN_ATTEMPTS=3 - orc12c_verify_function - We don't allow use of password file - Limit access through Oracle "grants" - We have changed all default passwords - We use profiles to expire passwords regularly - Microsoft active directory which we aren't using. - We use CA signed SSL certificates with strong encryption algorithms with FIPS-140-2 configured between database server and clients so we could use "Authentication with Public Key Infrastructure". - Our databases are only accessed through the applications not by individual users</code> Why isn't the above good enough? The only thing we aren't using is Microsoft Active directory or SSL Client Authentication. I thought that having Oracle database authentication with a complex password with the use of CA signed certificates would be a secure authentication approach. Why would Oracle feel the need to add more authentication approaches and confuse most of us? With this approach, a client needs to know the password. A client needs to have been given the CA signed certificate in order to be allowed to connect to the database. 
A client is forced to use a complex password, is only given limited password attempts with FAILED_LOGIN_ATTEMPTS=3, Finally, we have TCP.INVITED_NODES setup so only those clients with IPs in that list are allowed to connect. Geezzz, why is more needed? Thanks for your help, John
Categories: DBA Blogs

Transaction after SELECT statement

Tom Kyte - Fri, 2021-07-23 17:06
Hi, Tom. I am investigating a process of creating a transaction after a SELECT statement as a DML operator. Here is a steps, <code> COMMIT; ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE; SELECT * FROM V$TRANSACTION; --Wait one minute to remember time of previous select execution SELECT DBMS_TRANSACTION.LOCAL_TRANSACTION_ID FROM DUAL; SELECT * FROM V$TRANSACTION; --the transaction create time is before call DBMS_TRANSACTION.LOCAL_TRANSACTION_ID </code> The questions are, We don't have any TX locks but the transaction exists. Why? After call DBMS_TRANSACTION.LOCAL_TRANSACTION_ID we can see a new transaction in V$TRANSACTION but before don't. Why? Also we don't see any transaction if ISOLATION_LEVEL = READ COMMITTED. Why? According to your excellent book Expert Oracle Chapter 8, "A transaction implicitly begins with the first statement that modifies data (the first statement that gets a TX lock)."
Categories: DBA Blogs

Snyk provides native integration for Atlassian Bitbucket Cloud security - Here is how!!!

Pas Apicella - Thu, 2021-07-22 23:56

The Snyk security integration is free and easy to set up with just a few clicks inside the Bitbucket Cloud product. For the first time, developers can consume information that was previously only available inside Snyk now within Bitbucket Cloud. Snyk enables developers to see new vulnerabilities as they emerge and implement fixes early and quickly in the process. 

In this post we show how easily you can integrate Snyk into Bitbucket Cloud.

Steps
Note: For the following to work you have to integrate Bitbucket Cloud with the Snyk App as per this link 


1. Once logged into Bitbucket Cloud navigate to your "Workspace Settings" and authenticate with Snyk as shown below.





2. Next select the repository you wish to use as shown below.





3. You should see a "Snyk" tab on the left-hand side; click on this and then click on "Import this repository" as shown below.




4. In a couple of minutes or less you should see a "Snyk" overview report as per below. 




5. Click on "pom.xml" to get more information as shown below. Here we get a list of all Vulnerabilities as per a scan of the package manifest file "pom.xml" in this example.





6. On the top of this page you can directly go to the project page on "Snyk App" by clicking on "Visit Snyk" as shown below.




It's as simple as that!
 
More Information
Demo Video

Categories: Fusion Middleware

SQL Macro

Jonathan Lewis - Thu, 2021-07-22 04:18

A question came up recently on the Oracle Developer forum that tempted me into writing a short note about SQL Macro functions – a feature that was touted for 20c but which has been back-ported to the more recent releases of 19c. Specifically I set up this demo using 19.11.0.0.

The OP supplied a script to prepare some data. I’ll postpone that to the end of this note and start with variations of the query that could be used against that data set. I’ll be looking at the original query, a variant of the query that uses a pipelined function, then a variant that uses an SQL Macro function.

The requirement starts with a query to turn a pair of dates into a date range – which can be done in many ways but the OP had used a recursive “with subquery” (CTE/common table expression).

with calendar ( start_date, end_date ) as (
        select date '2021-07-01', date '2021-07-30' from dual
        union all
        select start_date + 1, end_date
        from   calendar
        where  start_date + 1 <= end_date
)
select start_date as day
from   calendar
;

Getting on to the full requirement we can use this subquery as if it were a table (or inline view) and join it to any other tables where we want data from a date range, for example:

select
        e.employee_id, c.day
from
        employees e
inner join
        (
                with calendar ( start_date, end_date ) as (
                        select date '2021-07-01', date '2021-07-30' from dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
        ) c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day

If we want a report for a different month we just have to supply a different pair of dates, and we can probably work out a way of making it easy for the end-users to supply those dates as parameters to a report.

The pipelined function

However, we may want to use the same little “recursive CTE” (or similar) pattern in many different reports, and in ad hoc queries that users might want to write for themselves. To avoid wasting time on logic, or basic typing errors, is it possible to hide some of the complexity of the subquery structure? The answer is yes, and for a long time we could have used a “pipelined function” to do this – though we have to create a simple object type and an object table type to do so. For example:

create or replace type obj_date is object (day date);
/

create or replace type nt_date is table of obj_date;
/

create or replace function generate_dates_pipelined(
        p_from  in date,
        p_to    in date
)
return nt_date 
pipelined
is
begin
        for c1 in (
                with calendar (start_date, end_date ) as (
                        select trunc(p_from), trunc(p_to) from dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
        ) loop
                pipe row (obj_date(c1.day));
        end loop;

        return;

end generate_dates_pipelined;
/

I’ve started by creating an object type with a single attribute called day of type date, and an object table type of that object type. This means I can use the object type and the object table type to pass data between SQL and PL/SQL. Then I’ve created a pl/sql function that returns the object table type, but in a pipelined fashion using the pipe row() mechanism to supply the data one object at a time.

In my final SQL I can now use the table() operator to cast the result of the function call from an object table to a relational table, implicitly mapping the object attributes to their basic Oracle data types.

select
        e.employee_id, c.day
from
        employees e
inner join
        table(generate_dates_pipelined(date '2021-07-01', date '2021-07-30')) c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day
;

I’ve replaced the 9 lines of the inline “with subquery” by a single line call:

        table(generate_dates_pipelined(date '2021-07-01', date '2021-07-30')) c

In fact the table() operator hasn’t been needed since some time in the 12c timeline, but it might be useful as a little reminder of what’s going on behind the scenes. It’s also a reminder that the data really will behave as if it’s coming from a relational table rather than a pl/sql loop.

Although this pipelined function approach can be very effective another member of the forum pointed out that behind the scenes it is depending on a pl/sql loop walking through a cursor which, in this example, was row by row processing (though it could be changed to bulk collect with a limit to improve performance a little). So we might want to look at options for doing things differently.

The SQL Macro function

In many programming languages a “macro” is a symbol that is used as a short-hand for a longer piece of code. Even in environments like your favourite shell environment you can usually set up shorthand for longer texts that you use frequently, for example:

alias otr="cd /u01/app/oracle/diag/rdbms/or19/or19/trace"

The Oracle equivalent is a PL/SQL function (declared as a “SQL_Macro” function) that you include in your SQL statement, and at run-time Oracle will execute the function and use the text it returns to modify your statement. Here’s the macro strategy applied to the date range generation:

create or replace function generate_dates_macro(
        p_from  in date,
        p_to    in date
)
return varchar2
sql_macro
is
        v_sql varchar2(4000) := q'{
                with calendar (start_date, end_date ) as (
                        select
                                to_date('xxxx-xx-xx','yyyy-mm-dd'),
                                to_date('yyyy-yy-yy','yyyy-mm-dd')
                        from    dual
                        union all
                        select start_date + 1, end_date
                        from   calendar
                        where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar
                }'
        ;

begin
        v_sql := replace(v_sql,'xxxx-xx-xx',to_char(p_from,'yyyy-mm-dd'));
        v_sql := replace(v_sql,'yyyy-yy-yy',to_char(p_to  ,'yyyy-mm-dd'));

--      dbms_output.put_line(v_sql);
        return v_sql;

end generate_dates_macro;
/

I’ve created a function, flagged as a sql_macro, that returns a varchar2. It has two input parameters which are declared as dates. The initial value of the variable v_sql looks very similar to the CTE I used in the original query except the two “dates” it uses are “xxxx-xx-xx” and “yyyy-yy-yy”, but in the body of the function I’ve replaced those with the text forms of the two incoming date parameters. There’s a call to dbms_output.put_line() that I’ve commented out that will show you that the final text returned by the function is:

                with calendar (start_date, end_date ) as (
                        select
                                to_date('2021-07-01','yyyy-mm-dd'),
                                to_date('2021-07-30','yyyy-mm-dd')
                        from    dual
                        union all
                        select start_date + 1, end_date
                        from   calendar

                 where  start_date + 1 <= end_date
                )
                select start_date as day
                from   calendar

So now we can rewrite the original statement as follows (with just a minor change from the pipelined version):

select
        e.employee_id, c.day
from
        employees e
inner join
        generate_dates_macro(date '2021-07-01', date '2021-07-30') c
partition by
        (e.employee_id)
on      (substr(e.work_days, trunc(c.day) - trunc(c.day, 'IW') + 1, 1) = 'Y')
where
        not exists (
                select  1
                from    holidays h
                where   c.day = h.holiday_date
        )
and     not exists(
                select  1
                from    timeoff t
                where   e.employee_id = t.employee_id
                and     t.timeoff_date = c.day
        )
order by
        e.employee_id,
        c.day
;

When we execute this statement Oracle evaluates the function, slots the generated text in place, then optimises and executes the resulting text. Interestingly the text reported by a call to dbms_xplan.display_cursor() shows the original text even though the plan clearly includes references to the table(s) in the SQL macro – a search of the library cache shows the same text, but also reveals an anonymous pl/sql block calling the SQL Macro function (in a style reminiscent of the way that row-level security (RLS, FGAC, VPD) calls a security predicate function) that is invisibly folded into a query.

declare
begin 
        :macro_text := "GENERATE_DATES_MACRO"(
                TO_DATE(' 2021-07-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'),
                TO_DATE(' 2021-07-30 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
        );
end;

Here’s the execution plan for the query using the SQL Macro:

--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                     | Name       | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                              |            |      1 |        |    41 (100)|     83 |00:00:00.01 |     130 |       |       |          |
|   1 |  SORT ORDER BY                                |            |      1 |      1 |    41   (5)|     83 |00:00:00.01 |     130 |  9216 |  9216 | 8192  (0)|
|*  2 |   FILTER                                      |            |      1 |        |            |     83 |00:00:00.01 |     130 |       |       |          |
|*  3 |    HASH JOIN ANTI                             |            |      1 |      1 |    39   (3)|     84 |00:00:00.01 |      46 |  1744K|  1744K| 1542K (0)|
|   4 |     NESTED LOOPS                              |            |      1 |      1 |    21   (0)|     88 |00:00:00.01 |      23 |       |       |          |
|   5 |      TABLE ACCESS FULL                        | EMPLOYEES  |      1 |      1 |    17   (0)|      4 |00:00:00.01 |      23 |       |       |          |
|*  6 |      VIEW                                     |            |      4 |      1 |     4   (0)|     88 |00:00:00.01 |       0 |       |       |          |
|   7 |       UNION ALL (RECURSIVE WITH) BREADTH FIRST|            |      4 |        |            |    120 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   8 |        FAST DUAL                              |            |      4 |      1 |     2   (0)|      4 |00:00:00.01 |       0 |       |       |          |
|   9 |        RECURSIVE WITH PUMP                    |            |    120 |        |            |    116 |00:00:00.01 |       0 |       |       |          |
|  10 |     TABLE ACCESS FULL                         | HOLIDAYS   |      1 |      2 |    17   (0)|      1 |00:00:00.01 |      23 |       |       |          |
|* 11 |    INDEX UNIQUE SCAN                          | TIMEOFF_PK |     84 |      1 |     1   (0)|      1 |00:00:00.01 |      84 |       |       |          |
--------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter( IS NULL)
   3 - access("START_DATE"="H"."HOLIDAY_DATE")
   6 - filter(SUBSTR("E"."WORK_DAYS",TRUNC(INTERNAL_FUNCTION("START_DATE"))-TRUNC(INTERNAL_FUNCTION("START_DATE"),'fmiw')+1,1)='Y')
  11 - access("T"."EMPLOYEE_ID"=:B1 AND "T"."TIMEOFF_DATE"=:B2)

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

As you can see, even though the query as written didn’t include the recursive CTE, the recursive query against DUAL appears in the plan. In fact the plan is exactly the same as the plan for the original query with the embedded CTE, though there is one interesting little difference – the generated query block names differ between plans.

Pros and Cons

Given that this is a lightweight example of a simple use of the SQL macro there’s not really a lot that can be said when comparing pipelined functions with macro functions. Both hide complexity and give you the opportunity to optimise an awkward piece of the code that might be (in effect) a common sub-routine.

The pipelined function does have to deal with the PL/SQL to SQL interchange – but that’s not a significant feature here. The main benefits, perhaps, of the macro are that the plan shows you the table(s) that would be hidden by the pipelined function, and may allow the optimizer to get better estimates of data sizes because it will be examining real tables with real statistics rather than taking a guess at a “pickler fetch” from a collection with a black box function.

Update (pre-publication)

There is some pleasure to be had by making mistakes in public, because that’s when you can learn something new. In my example to the OP on the Developer forum I used a much messier piece of code to embed the date values into the macro string, with lots of doubled and trebled quotes, to_char() functions, and concatenation all over the place.

Alex Nuijten replied to my suggestion pointing out that this degree of complexity was not necessary, and you could reference the function's parameters to construct the string. The only problem with that was that it hadn’t worked when I had tried it. Alex’s comment, however, also mentioned the problem and supplied the explanation: Bug 32212976: USING SCALAR ARGUMENTS IN WITH CLAUSE IN SQL TABLE MACRO RAISES ORA-06553 PLS-306. This was exactly the problem that I had been getting (the error message was “wrong number or types of arguments in call to ‘GENERATE_DATES_MACRO’”) and I hadn’t thought about searching for known bugs or patches, I just hacked my way around the problem.

Here’s an alternative macro function supplied by Alex (edited slightly to be consistent with the function and column names in my example):

create or replace function generate_dates_macro(
    p_from in date,
    p_to  in date
)
return varchar2
sql_macro
is
    v_sql varchar2(4000);
begin
  v_sql := 'select trunc (generate_dates_macro.p_from) - 1 + level as day
       from dual
       connect by level <= (generate_dates_macro.p_to - generate_dates_macro.p_from) + 1';

--  dbms_output.put_line(v_sql);
    return v_sql;

end generate_dates_macro;
/

Test Code

If you want to experiment further, here’s the code to create the tables used in this demo:

rem
rem     Script:         19c_macro_2.sql
rem     Author:         Jonathan Lewis / "BeefStu"
rem     Dated:          July 2021
rem     Purpose:        
rem
rem     Last tested 
rem             19.11.0.0
rem
rem     Notes:
rem     A Macro solution to a problem that might
rem     otherwise be solved with a pipelined function
rem


drop table holidays;
drop table employees;
drop table timeoff;
drop table  emp_attendance;    
drop table absences;

drop function generate_dates_pipelined;
drop type nt_date;
drop type obj_date;

drop function generate_dates_macro;

-- @@setup

create table holidays(
        holiday_date    date,
        holiday_name    varchar2(20)
)
;

insert into holidays (holiday_date, holiday_name)
values ( to_date('2021/07/21 00:00:00', 'yyyy/mm/dd hh24:mi:ss'), 'July 21 2021') ;

create table employees(
        employee_id     number(6), 
        first_name      varchar2(20),
        last_name       varchar2(20),
        card_num        varchar2(10),
        work_days       varchar2(7)
)
;

alter table employees
        add constraint employees_pk primary key (employee_id)
;

insert into employees(employee_id, first_name, last_name, card_num, work_days)
with names as ( 
select 1, 'Jane', 'Doe', 'f123456', 'NYYYYYN' from dual 
union all 
select 2, 'Madison', 'Smith', 'r33432','NYYYYYN' from dual 
union all 
select 3, 'Justin', 'Case', 'c765341','NYYYYYN' from dual 
union all 
select 4, 'Mike', 'Jones', 'd564311','NYYYYYN' from dual 
) 
select * from names
;

create table timeoff(
        seq_num         integer generated by default as identity (start with 1) not null,
        employee_id     number(6),
        timeoff_date    date,
        timeoff_type    varchar2(1),
        constraint timeoff_chk check (timeoff_date = trunc(timeoff_date, 'dd')),
        constraint timeoff_pk primary key (employee_id, timeoff_date)
)
;

insert into timeoff (employee_id,timeoff_date,timeoff_type) 
with dts as ( 
select 1, to_date('20210726 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual union all 
select 2, to_date('20210726 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual union all 
select 2, to_date('20210727 00:00:00','yyyymmdd hh24:mi:ss'),'V'    from dual  
) 
select * from dts
;

create table  emp_attendance(    
        seq_num         integer  generated by default as identity (start with 1) not null,
        employee_id     number(6),
        start_date      date,
        end_date        date,
        week_number     number(2),
        create_date     date default sysdate
)
;

create table absences(
        seq_num         integer  generated by default as identity (start with 1) not null,
        employee_id     number(6),
        absent_date     date,
        constraint absence_chk check (absent_date=trunc(absent_date, 'dd')),
        constraint absence_pk primary key (employee_id, absent_date)
)
;

insert into emp_attendance (employee_id, start_date,end_date,week_number)
with dts as ( 
select 1, to_date('20210728 13:10:00','yyyymmdd hh24:mi:ss'), to_date('20210728 23:15:00','yyyymmdd hh24:mi:ss'), 30  from dual 
union all 
select 2, to_date('20210728 12:10:10','yyyymmdd hh24:mi:ss'), to_date('20210728 20:15:01','yyyymmdd hh24:mi:ss'), 30  from dual
)
select * from dts
;


Can Bad Teeth Affect Your Ears?

OraQA - Thu, 2021-07-22 00:26

Ear problems that may include ear pain, a feeling of pressure or fullness of the ear, or ringing of the ear might have multifactorial causes. Surprisingly, they can also be dentally related. If you have bad teeth or dental problems such as tooth impaction and cavities, they may lead to ear and tooth pain. You are advised to meet your doctor if you are experiencing such conditions.

Ear related problems can cause many symptoms and a huge range of severity. The doctor who specializes in ears is commonly called the Ear, Nose and Throat specialist. If the symptoms are suspected to be dentally related, dentists and ENTs must work in close collaboration to diagnose and treat each case  appropriately.

Generally, pain from problems that affect the roots of the tooth can manifest in the mouth, up the jawline to the ear, as well as over the entire side of the face. In addition, wisdom teeth located at the back of the mouth often become affected.

Bad teeth can affect your ears as the human jaw is actually related to the ears. The question is, how do the jaw and ears relate? The jaw and ear bones start out as one and the same during the development of an embryo. These bones eventually separate as the embryo grows. The tiny bones of the ears are the smallest bones in the human body and appropriately named for their shape which are the hammer (malleus), anvil (incus), and stirrup (stapes). Besides, the movement of the hammer is controlled by a muscle called the tensor tympani. Amazingly, this little muscle is also responsible for controlling the vibrations of the human eardrum and protects the inner ear from loud sounds.

Therefore, since the ears and the jaw were so closely related in our embryonic development, the nerve

that controls the tensor tympani muscle happens to be the same nerve which controls human’s chewing muscles. Concurrently, any signals sent through this nerve can affect both the muscles of the ears and jaw joint.

On top of that, physically, both of our ears and jaw joints, which include our teeth, are located very closely to one another. Based on anatomy, the human jaw is attached to the skull by two joints just in front of the ears. The part of the skull bone which separates the jaw joints from ear canals is just a paper thin.

You can try to put both of your little fingers inside your ears and clench your teeth. You are most likely to feel the movement of your jaw joints as you clench or bite down. Hence, just imagine that if your jaw joints and teeth have some problems, they could disrupt the inner workings of your ear.

Furthermore, dental related ear pain can be detected by symptoms and usually felt in just one ear and feels very “deep.” The pain may travel down to your neck, temple, or to the back of your head. It is also usually triggered by any jaw movement. Some of the symptoms are as follows:

  • Pain in the ears
  • Ringing of the ears
  • Ear pressure or sense of fullness
  • Difficulty clearing ears, especially at high altitudes
  • Dizziness
  • Excessive ear wax

Other than the symptoms above, other clues are:

  • Sore or tender muscles around jaw joints
  • Clicking sound in jaw joints
  • Jaw shifts when opening mouth too wide

If you have any of the symptoms described, you are advised to meet your doctor. Further examination is needed in order to determine if bad teeth or dental problems are the source of your conditions.  Take note that these symptoms could be signs of many types of ear problems.

The post Can Bad Teeth Affect Your Ears? appeared first on ORA QA.

oracle update-dcscomponents failure during ODA patching to 19.11

Yann Neuhaus - Wed, 2021-07-21 10:03

You might know that since ODA 19.10, the ODA metadata is stored in a MySQL database and no longer in the Apache Derby database. When patching from a release earlier than 19.10 to 19.10 or 19.11, the Apache Derby database is migrated to a MySQL database. This is performed during the odacli update-dcscomponents command. Starting with 19.10, the DCS Agent uses the MySQL server for any metadata information, for example when running the odacli list-databases command. The MySQL server is automatically installed and configured during the DCS components update. It does not run as the root user; a new operating system user, called odamysql, is created. This user is not allowed to run any shell and thus cannot be used to log in. The oda-mysql systemctl service can be used to stop and start the MySQL server. Stopping the MySQL server will stop the DCS Agent as well.


I recently patched an ODA from version 19.9 to 19.11. During the patching, update-dcscomponents failed on the metadata migration, which blocked the entire patching project. I found a workaround that I wanted to share with you.

Failure during odacli update-dcscomponents

Running the update of all components failed with the following error:

[root@ODA01 patch]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.11.0.0.0
DCS-10008:Failed to update DCScomponents: 19.11.0.0.0
 
Internal error while patching the DCS components :
DCS-10001:Internal error encountered: DCS-10001:Internal error encountered: Failed to configure MySQL for ODA...DCS-Agent shutdown is successful.
[root@ODA01 patch]# 
Failure in DCS Admin log file

During the troubleshooting, I could see that the MySQL server package had been installed successfully, but I found the following errors related to the MySQL configuration in the dcs-admin.log file, stored in the /opt/oracle/dcs/log directory:

2021-07-20 09:18:45,177 DEBUG [dw-24 - PUT /dcscomponents] [] c.o.d.a.u.MySQLTask: Configure MySQL for ODA
2021-07-20 09:18:45,177 DEBUG [dw-24 - PUT /dcscomponents] [] c.o.d.c.u.CommonsUtils: 
run: cmd= '[/opt/oracle/dcs/mysql/etc/configodamysql.sh]'
2021-07-20 09:18:50,993 DEBUG [Thread-30] [] c.o.d.c.u.CommonsUtils: Output :
Initialize MySQL - Failed
2021-07-20 09:18:50,993 DEBUG [dw-24 - PUT /dcscomponents] [] c.o.d.c.u.c.DCSProcessBuilder: Return code: 1
2021-07-20 09:18:50,993 DEBUG [dw-24 - PUT /dcscomponents] [] c.o.d.c.u.c.CommandExecutor: Return code: 1
2021-07-20 09:18:50,993 ERROR [dw-24 - PUT /dcscomponents] [] c.o.d.a.u.MySQLTask: Exception : DCS-10001:Internal error encountered: Failed to configure MySQL for ODA..
ncountered: Failed to configure MySQL for ODA..
2021-07-20 09:18:50,994 ERROR [dw-24 - PUT /dcscomponents] [] c.o.d.a.r.DcsComponentServiceTaskFactory: Current status - DCS-Agent shutdown is successful. 
2021-07-20 09:18:50,997 INFO [dw-24 - PUT /dcscomponents] [] c.o.d.a.r.DcsComponentServiceTaskFactory: Components {zookeeper ,MySQL, Agent, Dcs-Cli, Dcs-Controller} upgrade operation completed.
2021-07-20 09:18:50,997 ERROR [dw-24 - PUT /dcscomponents] [] c.o.d.a.r.DcsComponentLifecycleApi: Internal error while patching the DCS components  


Failure in MySQL log files

Going further with the troubleshooting, I could find the following in the MySQL log file:

[root@ODA01 patch]# cd /opt/oracle/dcs/mysql/log/
  
[root@ODA01 log]# ls -ltrh
total 4.0K
-rw-r----- 1 odamysql odamysql 2.3K Jul 20 09:18 mysqldb.log
  
[root@ODA01 log]# more mysqldb.log
2021-07-20T07:18:45.946735Z 0 [System] [MY-013169] [Server] /opt/oracle/dcs/mysql/bin/mysqld (mysqld 8.0.23-commercial) initializing of server in progress as process 66381
2021-07-20T07:18:45.952983Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2021-07-20T07:18:45.959155Z 1 [Warning] [MY-012582] [InnoDB] io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
2021-07-20T07:18:45.959220Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 1.
2021-07-20T07:18:46.459426Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 2.
2021-07-20T07:18:46.959669Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 3.
2021-07-20T07:18:47.459946Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 4.
2021-07-20T07:18:47.960193Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 5.
2021-07-20T07:18:48.460487Z 1 [Warning] [MY-012582] [InnoDB] io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
2021-07-20T07:18:48.460594Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 1.
2021-07-20T07:18:48.960802Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 2.
2021-07-20T07:18:49.461068Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 3.
2021-07-20T07:18:49.961352Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 4.
2021-07-20T07:18:50.461608Z 1 [Warning] [MY-012583] [InnoDB] io_setup() attempt 5.
2021-07-20T07:18:50.961904Z 1 [ERROR] [MY-012584] [InnoDB] io_setup() failed with EAGAIN after 5 attempts.
2021-07-20T07:18:50.962203Z 1 [ERROR] [MY-012954] [InnoDB] Cannot initialize AIO sub-system
2021-07-20T07:18:50.962314Z 1 [ERROR] [MY-012929] [InnoDB] InnoDB Database creation was aborted with error Generic error. You may need to delete the ibdata1 file before trying to start up again.
2021-07-20T07:18:50.962557Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2021-07-20T07:18:50.962716Z 0 [ERROR] [MY-013236] [Server] The designated data directory /opt/oracle/dcs/mysql/data/ is unusable. You can remove all files that the server added to it.
2021-07-20T07:18:50.962890Z 0 [ERROR] [MY-010119] [Server] Aborting
2021-07-20T07:18:50.963406Z 0 [System] [MY-010910] [Server] /opt/oracle/dcs/mysql/bin/mysqld: Shutdown complete (mysqld 8.0.23-commercial)  MySQL Enterprise Server - Commercial.


It seems InnoDB ran into trouble because no asynchronous I/O was available on the ODA. Why that is the case is another question, and I opened an SR on the Oracle support portal to get an explanation and to make sure the workaround below is adequate.

Workaround

The workaround is to add the innodb_use_native_aio = 0 parameter to both MySQL cnf files (mysqldb_ssl.cnf and mysqldb.cnf) and to run configodamysql.sh manually before executing the odacli update-dcscomponents command again.

Adding innodb_use_native_aio = 0 parameter to both mysql cnf files
[root@ODA01 /u01]# cd /opt/oracle/dcs/mysql/etc
  
[root@ODA01 etc]# ls -ltrh
total 40K
-rwxr-x--- 1 odamysql odamysql  892 May 31 04:05 oda-mysql.service
-rwxr-x--- 1 odamysql odamysql  596 May 31 04:05 mysqldb_ssl.cnf.bak
-rwxr-x--- 1 odamysql odamysql  399 May 31 04:05 mysqldb.cnf.bak
-rwxr-x--- 1 odamysql odamysql 1.9K May 31 04:05 getodamysqlport.sh
-rwxr-x--- 1 odamysql odamysql   55 May 31 04:05 enable_auth_socket.sql
-rwxr-x--- 1 odamysql odamysql 6.6K May 31 04:05 configodamysql.sh
-rwxr-x--- 1 odamysql odamysql 2.4K May 31 04:05 cleanodamysql.sh
-rwxr-x--- 1 odamysql odamysql  399 Jul 20 11:57 mysqldb.cnf
-rwxr-x--- 1 odamysql odamysql  596 Jul 20 11:57 mysqldb_ssl.cnf
  
[root@ODA01 etc]# cp -p mysqldb_ssl.cnf mysqldb_ssl.cnf.bak.20210720
  
[root@ODA01 etc]# vi mysqldb_ssl.cnf
  
[root@ODA01 etc]# diff mysqldb_ssl.cnf mysqldb_ssl.cnf.bak.20210720
16d15
< innodb_use_native_aio = 0
  
[root@ODA01 etc]# cp -p mysqldb.cnf mysqldb.cnf.bak.20210720
  
[root@ODA01 etc]# vi mysqldb.cnf
  
[root@ODA01 etc]# diff mysqldb.cnf mysqldb.cnf.bak.20210720
12d11
< innodb_use_native_aio = 0
  
[root@ODA01 etc]# grep innodb_use_native_aio *
mysqldb.cnf:innodb_use_native_aio = 0
mysqldb_ssl.cnf:innodb_use_native_aio = 0


Start DCS Agent again

The DCS Agent was stopped during the DCS components update and needs to be restarted.

[root@ODA01 etc]# ps -ef | grep dcs-agent
root     90513 25271  0 11:58 pts/3    00:00:00 grep --color=auto dcs-agent
  
[root@ODA01 etc]# systemctl start initdcsagent
  
[root@ODA01 etc]# ps -ef | grep dcs-agent
root     90893     1  0 11:58 ?        00:00:00 /bin/sh -c . /opt/oracle/dcs/bin/setupJreAgent.sh;$JAVA -Xms128m -Xmx256m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/opt/oracle/dcs/log/gc-dcs-agent-%t-%p.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Doracle.security.jps.config=/opt/oracle/dcs/agent/jps-config.xml -jar  /opt/oracle/dcs/bin/dcs-agent*.jar server /opt/oracle/dcs/conf/dcs-agent.json >/opt/oracle/dcs/log/dcsagent-stdout_$(date +%Y%m%d-%H%M).log 2>/opt/oracle/dcs/log/dcsagent-stderr_$(date +%Y%m%d-%H%M).log
root     90894 90893 99 11:58 ?        00:00:04 /opt/oracle/dcs/java/1.8.0_281/bin/java -Xms128m -Xmx256m -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m -XX:+DisableExplicitGC -XX:ParallelGCThreads=4 -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/opt/oracle/dcs/log/gc-dcs-agent-%t-%p.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Doracle.security.jps.config=/opt/oracle/dcs/agent/jps-config.xml -jar /opt/oracle/dcs/bin/dcs-agent-19.9.0.0.0.jar server /opt/oracle/dcs/conf/dcs-agent.json
root     91038 25271  0 11:59 pts/3    00:00:00 grep --color=auto dcs-agent
[root@ODA01 etc]# 


Manually execute the configodamysql.sh script
[root@ODA01 /]# cd /opt/oracle/dcs/mysql/etc/
  
[root@ODA01 etc]# ls -l
total 48
-rwxr-x--- 1 odamysql odamysql 2393 May 31 04:05 cleanodamysql.sh
-rwxr-x--- 1 odamysql odamysql 6731 May 31 04:05 configodamysql.sh
-rwxr-x--- 1 odamysql odamysql   55 May 31 04:05 enable_auth_socket.sql
-rwxr-x--- 1 odamysql odamysql 1923 May 31 04:05 getodamysqlport.sh
-rwxr-x--- 1 odamysql odamysql  425 Jul 20 12:00 mysqldb.cnf
-rwxr-x--- 1 odamysql odamysql  399 May 31 04:05 mysqldb.cnf.bak
-rwxr-x--- 1 odamysql odamysql  399 Jul 20 11:57 mysqldb.cnf.bak.20210720
-rwxr-x--- 1 odamysql odamysql  622 Jul 20 11:58 mysqldb_ssl.cnf
-rwxr-x--- 1 odamysql odamysql  596 May 31 04:05 mysqldb_ssl.cnf.bak
-rwxr-x--- 1 odamysql odamysql  596 Jul 20 11:57 mysqldb_ssl.cnf.bak.20210720
-rwxr-x--- 1 odamysql odamysql  892 May 31 04:05 oda-mysql.service
  
[root@ODA01 etc]# ./configodamysql.sh
/opt/oracle/dcs/mysql/data directory should be empty to proceed further with ODA MySQL post install configuration
  
[root@ODA01 etc]# cd ../data
  
[root@ODA01 data]# ls
binlog.index
  
[root@ODA01 data]# rm binlog.index -f
  
[root@ODA01 data]# cd ../etc
  
[root@ODA01 etc]# ./configodamysql.sh
Initialize MySQL - Done
Starting MySQL for ODA - Done
mysql: [Warning] Using a password on the command line interface can be insecure.
Enable server-side auth_socket authentication plugin for MySQL root user - Done
Generating RSA private key, 2048 bit long modulus
................+++
.................................................................................................................................................................................................+++
e is 65537 (0x10001)
Generating a 2048 bit RSA private key
...................................................................................+++
.........................................+++
writing new private key to 'server-key.pem'
-----
writing RSA key
Signature ok
subject=/CN=ODA01
Getting CA Private Key
Generating a 2048 bit RSA private key
....................................................+++
...+++
writing new private key to 'dcsagent-client-key.pem'
-----
writing RSA key
Signature ok
subject=/CN=dcsagent
Getting CA Private Key
Generating a 2048 bit RSA private key
.................................+++
.+++
writing new private key to 'rhp-client-key.pem'
-----
writing RSA key
Signature ok
subject=/CN=rhp user
Getting CA Private Key
All certificates : OK
mkdir: cannot create directory ‘/opt/oracle/dcs/odamysqlcert/’: File exists
Generate certificates for CA, MySQL Server and Clients (DCS Agent & RHP) - Done
Enable SSL and Restart MySQL - Done
Create database dcsagentdb - Done
Create user dcsagent - Done
Grant privileges to user dcsagent - Done
Create database GHSUSER21 - Done
Create user GHSUSER21 - Done
Grant privileges to user GHSUSER21 - Done
Create schemas, users for CS Agent & RHP - Done
Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
Load the timezone tables in mysql database - Done
MySQL installed version details :
/opt/oracle/dcs/mysql/bin/mysql  Ver 8.0.23-commercial for Linux on x86_64 (MySQL Enterprise Server - Commercial)
MySQL installation and configuration.....Completed


We can check that the MySQL server daemon is now running:

[root@ODA01 etc]# ps -ef | grep mysql
odamysql 26130     1 11 12:07 ?        00:00:01 /opt/oracle/dcs/mysql/bin/mysqld --defaults-file=/opt/oracle/dcs/mysql/etc/mysqldb.cnf
root     27198 35360  0 12:08 pts/4    00:00:00 grep --color=auto mysql
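As an additional check (not shown in the original troubleshooting), once the MySQL server is up again the new setting can be confirmed from any MySQL client session against this instance:

SHOW VARIABLES LIKE 'innodb_use_native_aio';   -- expected value: OFF after the change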


Execute odacli update-dcscomponents again
[root@ODA01 etc]# /opt/oracle/dcs/bin/odacli update-dcscomponents -v 19.11.0.0.0
{
  "jobId" : "ff4d652d-b394-4c43-9195-a0740d1a08e6",
  "status" : "Success",
  "message" : "Update-dcscomponents is successful on all the node(s):DCS-Agent shutdown is successful. MySQL already on required version. Metadata migration is successful. Metadata schema update is done. dcsagent RPM upgrade is successful.  dcscli RPM upgrade is successful.  dcscontroller RPM upgrade is successful.  Successfully ran setupAgentAuth.sh zookeeper RPM upgrade is successful.  DCS-Agent restart is successful. ",
  "reports" : null,
  "createTimestamp" : "July 20, 2021 12:09:47 PM CEST",
  "description" : "Update-dcscomponents job completed and is not part of Agent job list",
  "updatedTime" : "July 20, 2021 12:11:48 PM CEST"
}


And we can see that the dcscomponents update is now successful. I could also verify that the Apache Derby database repository no longer exists:

[root@ODA01 etc]# cd /opt/oracle/dcs/
[root@ODA01 dcs]# ls
agent  bin  clients  commonstore  conf  configuredcs.pl  dcsagent_wallet  dcscli  dcs-ui  docs  ft  Inventory  java  log  mesg  mysql  odajavasdk  odamysqlcert  oracle.ahf  rdbaas  sample
[root@ODA01 dcs]# 


And the new MySQL metadata database runs fine when executing commands such as odacli list-databases.

As explained previously, I opened an SR with Oracle support to learn the root cause, to get feedback on this workaround, and to find out whether there is any other resolution. I will update this post accordingly as soon as I receive feedback.

This article, oracle update-dcscomponents failure during ODA patching to 19.11, appeared first on Blog dbi services.

odabr tool : how to resolve efi device backup failure?

Yann Neuhaus - Wed, 2021-07-21 08:54

During all my ODA projects where I need to perform patching operations, I use the odabr tool. This tool is really easy to use and well designed. It offers the possibility to create snapshots of the /, /u01 and /opt LVM file systems, as well as a physical copy (rsync) of the files to an external destination such as NFS storage. This is powerful in case you would like to get just one file back. The tool, as well as the user guide, can be found in Oracle Doc ID 2466177.1. I recently faced some errors while the tool was performing the EFI device backup, which made it fail without being able to create any snapshot. I wanted to share my findings here, which may save you some troubleshooting time.

How to install the tool?

Easy to install. You just need to download the odabr-2.0.1-66.noarch.rpm package referenced in the Doc ID.

Check that the package is not already installed:

[root@ODA01 patch]# rpm -qa | grep -i odabr
[root@ODA01 patch]#

Install the tool:
[root@ODA01 patch]# ls -ltrh
total 5.2G
-rwxrwxrwx 1 root root 5.2G Jul 19 15:00 p30403673_1911000_Linux-x86-64.zip
-rw-r--r-- 1 root root 27K Jul 21 09:00 odabr-2.0.1-66.noarch.rpm
 
[root@ODA01 patch]# rpm -ivh odabr-2.0.1-66.noarch.rpm
warning: odabr-2.0.1-66.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID 939112d6: NOKEY
Preparing... ################################# [100%] Updating / installing...
1:odabr-2.0.1-66 ################################# [100%]  
odabr-2.0.1.66 has been installed on /opt/odabr succesfully!
 
[root@ODA01 patch]# rpm -qa | grep -i odabr
odabr-2.0.1-66.noarch

Making a snapshot backup

First we need to check whether there is enough free space in the physical volume.

[root@ODA01 patch]# pvdisplay
--- Physical volume ---
PV Name /dev/md126p3
VG Name VolGroupSys
PV Size <446.15 GiB / not usable 23.00 MiB
Allocatable yes
PE Size 32.00 MiB
Total PE 14276
Free PE 5444
Allocated PE 8832
PV UUID VyihMy-xEoI-SgKi-Apku-PBIS-BTnf-v2tsfU

In our case we have 5444 free PEs and each PE is 32 MiB, which gives about 170 GiB available for snapshots (5444 x 32 MiB = 174,208 MiB, roughly 170 GiB).

By default the tool assigns 190 GB of snapshot space across the 3 file systems (/, /u01 and /opt). If we have enough free space, we can simply run the following command to create the snapshot backups:
[root@ODA01 patch]# /opt/odabr/odabr backup -snap

In our case, knowing we only have about 170 GiB free, we need to adjust the snapshot sizes.

With a df command we first check how much space is needed:
[root@ODA01 patch]# df -h / /u01 /opt
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroupSys-LogVolRoot 30G 7.1G 21G 26% /
/dev/mapper/VolGroupSys-LogVolU01 118G 77G 36G 69% /u01
/dev/mapper/VolGroupSys-LogVolOpt 99G 53G 41G 57% /opt

From there we size each snapshot (rsize for /, usize for /u01 and osize for /opt):
[root@ODA01 patch]# /opt/odabr/odabr backup -snap -rsize 15 -usize 90 -osize 60

EFI device backup issue

Running it could fail with an EFI device backup error:
[root@ODA01 patch]# /opt/odabr/odabr backup -snap -rsize 15 -usize 90 -osize 60
INFO: 2021-07-21 09:08:08: Please check the logfile '/opt/odabr/out/log/odabr_84675.log' for more details
 
 
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.1-66
Copyright Oracle, Inc. 2013, 2021
--------------------------------------------------------
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
 
INFO: 2021-07-21 09:08:08: Checking superuser
INFO: 2021-07-21 09:08:08: Checking Bare Metal
 
INFO: 2021-07-21 09:08:08: Removing existing LVM snapshots
WARNING: 2021-07-21 09:08:08: LVM snapshot for 'opt' does not exist
WARNING: 2021-07-21 09:08:08: LVM snapshot for 'u01' does not exist
WARNING: 2021-07-21 09:08:08: LVM snapshot for 'root' does not exist
 
INFO: 2021-07-21 09:08:08: Checking current OS version
INFO: 2021-07-21 09:08:08: Checking LVM restore backgroud process
INFO: 2021-07-21 09:08:08: Checking LVM size
 
INFO: 2021-07-21 09:08:08: Boot device backup
INFO: 2021-07-21 09:08:08: Getting EFI device
INFO: 2021-07-21 09:08:08: ...step1 - unmounting EFI
INFO: 2021-07-21 09:08:08: ...step2 - making efi device backup
SUCCESS: 2021-07-21 09:08:10: ...EFI device backup saved as '/opt/odabr/out/hbi/efi.img'
INFO: 2021-07-21 09:08:10: ...step3 - checking EFI device backup
ERROR: 2021-07-21 09:08:10: Error running fsck over /opt/odabr/out/hbi/efi.img
ERROR: 2021-07-21 09:08:10: Command: 'fsck -a /opt/odabr/out/hbi/efi.img' failed as fsck from util-linux 2.23.2 fsck.fat 3.0.20 (12 Jun 2013) 0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt. Automatically removing dirty bit. Performing changes. /opt/odabr/out/hbi/efi.img: 20 files, 1100/63965 clusters
INFO: 2021-07-21 09:08:10: Mounting EFI back
ERROR: 2021-07-21 09:08:10: Backup not completed, exiting...

See the error:
ERROR: 2021-07-21 09:08:10: Error running fsck over /opt/odabr/out/hbi/efi.img
ERROR: 2021-07-21 09:08:10: Command: 'fsck -a /opt/odabr/out/hbi/efi.img' failed as fsck from util-linux 2.23.2 fsck.fat 3.0.20 (12 Jun 2013) 0x25: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt. Automatically removing dirty bit. Performing changes. /opt/odabr/out/hbi/efi.img: 20 files, 1100/63965 clusters

This means that the image being checked was not properly unmounted and has the dirty bit set. This is also explained in Doc ID 2679511.1.

Solution

To resolve this issue, simply unmount /boot/efi before running the tool.

[root@ODA01 hbi]# df -h | grep /boot/efi
/dev/md126p1 500M 8.6M 492M 2% /boot/efi
 
[root@ODA01 hbi]# umount /boot/efi
 
[root@ODA01 hbi]# df -h | grep /boot/efi
[root@ODA01 hbi]#

Running the odabr tool again now completes the snapshot creation successfully:

[root@ODA01 hbi]# /opt/odabr/odabr backup -snap -rsize 15 -usize 90 -osize 60
INFO: 2021-07-21 09:17:27: Please check the logfile '/opt/odabr/out/log/odabr_18986.log' for more details
 
 
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.1-66
Copyright Oracle, Inc. 2013, 2021
--------------------------------------------------------
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
 
INFO: 2021-07-21 09:17:27: Checking superuser
INFO: 2021-07-21 09:17:27: Checking Bare Metal
 
INFO: 2021-07-21 09:17:27: Removing existing LVM snapshots
WARNING: 2021-07-21 09:17:27: LVM snapshot for 'opt' does not exist
WARNING: 2021-07-21 09:17:27: LVM snapshot for 'u01' does not exist
WARNING: 2021-07-21 09:17:27: LVM snapshot for 'root' does not exist
 
INFO: 2021-07-21 09:17:27: Checking current OS version
INFO: 2021-07-21 09:17:27: Checking LVM restore backgroud process
INFO: 2021-07-21 09:17:27: Checking LVM size
 
INFO: 2021-07-21 09:17:27: Boot device backup
INFO: 2021-07-21 09:17:27: Getting EFI device
WARNING: 2021-07-21 09:17:27: Failing checking efi device
INFO: 2021-07-21 09:17:27: ...no EFI device found
INFO: 2021-07-21 09:17:27: Getting boot device
INFO: 2021-07-21 09:17:27: ...step1 - making boot device backup using tar
SUCCESS: 2021-07-21 09:17:31: ...boot content saved as '/opt/odabr/out/hbi/boot.tar.gz'
INFO: 2021-07-21 09:17:31: ...step2 - unmounting boot
INFO: 2021-07-21 09:17:31: ...step3 - making boot device backup using dd
SUCCESS: 2021-07-21 09:17:33: ...boot device backup saved as '/opt/odabr/out/hbi/boot.img'
INFO: 2021-07-21 09:17:33: ...step4 - mounting boot
INFO: 2021-07-21 09:17:33: ...step5 - checking boot device backup
 
INFO: 2021-07-21 09:17:33: Making OCR physical backup
INFO: 2021-07-21 09:17:38: ...ocr backup saved as '/opt/odabr/out/hbi/ocrbackup_18986.bck'
SUCCESS: 2021-07-21 09:17:38: OCR physical backup created successfully
INFO: 2021-07-21 09:17:38: OCR export backup
INFO: 2021-07-21 09:17:40: ...ocr export saved as '/opt/odabr/out/hbi/ocrexport_18986.bck'
SUCCESS: 2021-07-21 09:17:40: OCR export backup created successfully
 
INFO: 2021-07-21 09:17:40: Making LVM snapshot backup
SUCCESS: 2021-07-21 09:17:40: ...snapshot backup for 'opt' created successfully
SUCCESS: 2021-07-21 09:17:41: ...snapshot backup for 'u01' created successfully
SUCCESS: 2021-07-21 09:17:42: ...snapshot backup for 'root' created successfully
SUCCESS: 2021-07-21 09:17:42: LVM snapshots backup done successfully

Once the snapshots have been created, we can mount the /boot/efi file system again:
[root@ODA01 hbi]# df -h | grep /boot/efi
[root@ODA01 hbi]# mount /boot/efi
[root@ODA01 hbi]# df -h | grep /boot/efi
/dev/md126p1 500M 8.6M 492M 2% /boot/efi

Checking snapshot backups

To display the currently active snapshots, use the following command:
[root@ODA01 hbi]# /opt/odabr/odabr infosnap
 
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
odabr - ODA node Backup Restore - Version: 2.0.1-66
Copyright Oracle, Inc. 2013, 2021
--------------------------------------------------------
Author: Ruggero Citton
RAC Pack, Cloud Innovation and Solution Engineering Team
│▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒│
 
 
LVM snap name Status COW Size Data%
------------- ---------- ---------- ------
root_snap active 15.00 GiB 0.01%
opt_snap active 60.00 GiB 0.01%
u01_snap active 90.00 GiB 0.01%
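
The same snapshots can also be cross-checked directly with LVM, independently of odabr (plain lvs against the volume group used above):

[root@ODA01 hbi]# lvs VolGroupSys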

Deleting snapshot backups

To delete the snapshot backups, run:
[root@ODA01 ~]# /opt/odabr/odabr delsnap
INFO: 2021-07-21 11:12:18: Please check the logfile '/opt/odabr/out/log/odabr_35232.log' for more details
 
INFO: 2021-07-21 11:12:18: Removing LVM snapshots
INFO: 2021-07-21 11:12:18: ...removing LVM snapshot for 'opt'
SUCCESS: 2021-07-21 11:12:18: ...snapshot for 'opt' removed successfully
INFO: 2021-07-21 11:12:18: ...removing LVM snapshot for 'u01'
SUCCESS: 2021-07-21 11:12:19: ...snapshot for 'u01' removed successfully
INFO: 2021-07-21 11:12:19: ...removing LVM snapshot for 'root'
SUCCESS: 2021-07-21 11:12:19: ...snapshot for 'root' removed successfully
SUCCESS: 2021-07-21 11:12:19: Remove LVM snapshots done successfully

The post odabr tool: how to resolve EFI device backup failure? appeared first on Blog dbi services.

Hex tip

Jonathan Lewis - Tue, 2021-07-20 11:40

A surprising amount of the work I do (or used to do) revolves around numbers; and once I’m outside the realm of the optimizer (i.e. getting away from simple arithmetic), one of the bits of playing with numbers that I do most often is conversion – usually decimal to hexadecimal, sometimes decimal to binary.

Here’s an example of how this helped me debug an Oracle error a few days ago. We start with someone trying to purge data from aud$ using the official dbms_audit_mgmt package, first setting the package’s db_delete_batch_size parameter to the value 100,000 then calling dbms_audit_mgmt.clean_audit_trail.

In theory this should have deleted (up to) 100,000 rows from aud$ starting from the oldest data. In practice it tried to delete far more rows, generating vast amounts of undo and redo, and locking up resources in the undo tablespace for ages. The SQL statement doing all the work looked like the following (after a little cosmetic work):

DELETE FROM SYS.AUD$ 
WHERE  DBID = 382813123 
AND    NTIMESTAMP# < to_timestamp('2020-12-17 00:00:00', 'YYYY-MM-DD HH24:MI:SS.FF')
AND    ROWNUM <= 140724603553440

That’s a rather large number in the rownum predicate, much larger than the expected 100,000. Whenever I am puzzled by very large numbers in places I’m not expecting to see them one of the first things I do to poke it around is to convert it to hexadecimal. (Although it seems a fairly random thing to do it doesn’t take very long and it produces an interesting result fairly frequently.)

140724603553440 (dec) = 0x7FFD000186A0

You may not think that the resulting hex number is very interesting – but there's a string of zeros in the middle that is asking for a little extra poking. So let's convert the last 8 digits (starting with those 3 zeros) back to decimal.

0x000186A0 = 100,000 (dec)
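
If you want to reproduce the conversions quickly, one option (my own shortcut, not the author's method) is to let the database do them:

SQL> select to_char(140724603553440, 'XXXXXXXXXXXXXXX') as dec_to_hex from dual;
SQL> select to_number('000186A0', 'XXXXXXXX') as hex_to_dec from dual;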

There’s an interesting coincidence – we’ve got back to the 100,000 that the OP had set as the db_delete_batch_size. Is this really a coincidence or does it tell us something about a bug? That’s easy enough to test, just try setting a couple of different values for the parameter and see if this affects the rownum predicate in a consistent fashion. Here are the results from two more test values:

1,000,000 ==> 140733194388032 (dec) = 0x7FFF000F4240 .... 0x000F4240 = 1,000,000 (dec)
   50,000 ==> 140728898470736 (dec) = 0x7FFE0000C350 .... 0x0000C350 =    50,000 (dec)

The top 4 digits (2 bytes) have changed, but the bottom 8 digits (4 bytes) do seem to hold the db_delete_batch_size requested. At this point I felt that we were probably seeing some sort of pointer error in a C library routine. If you examine the file $ORACLE_HOME/rdbms/admin/prvtamgt.plb you'll find that one of the few readable lines says:

CREATE OR REPLACE LIBRARY audsys.dbms_audit_mgmt_lib wrapped

My guess was that there were probably a couple of external C routines involved, with PL/SQL wrappers in the public package; and that there was a mismatch between the declarations in C and the declarations in the PL/SQL.

It turns out that I wasn’t quite right, but I was in the right olympic stadium. This is now (unpublished) bug 33136016, and if you’ve been seeing unexpected work patterns when purging the audit trail after upgrading to 19c or later then there may be a patch for you in the not too distant future.

A Standby that lags the Primary by a deliberate Delay

Hemant K Chitale - Tue, 2021-07-20 06:24

As I noted in my previous blog post, with multiple Standby databases you can have one or more of them lagging the Primary. This gives the organisation a database that can be quickly opened for data recovery in case someone makes a mistake and deletes data or drops tables/objects from the Primary and the delete/drop has already been replicated to the first Standby.

Here is a quick demo.

At the Primary I have :

SQL> alter system archive log current;

System altered.

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /opt/oracle/archivelog/ORCLCDB
Oldest online log sequence 280
Next log sequence to archive 282
Current log sequence 282
SQL>


At the first Standby "STDBYDB", I have :

2021-07-20T17:50:05.870763+08:00
PR00 (PID:2912): Media Recovery Waiting for T-1.S-282 (in transit)
2021-07-20T17:50:06.354006+08:00
ARC3 (PID:2736): Archived Log entry 35 added for T-1.S-281 ID 0xa7521ccd LAD:1
2021-07-20T17:50:06.396543+08:00
rfs (PID:3263): Archival of T-1.S-281 complete
2021-07-20T17:50:06.527483+08:00
rfs (PID:3263): Selected LNO:4 for T-1.S-282 dbid 2778483057 branch 1036108814
2021-07-20T17:50:07.008298+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 282 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log


At the second Standby "STDB2", I have :

2021-07-20T17:50:09.484608+08:00
PR00 (PID:2975): Media Recovery Waiting for T-1.S-282 (in transit)
2021-07-20T17:50:09.500278+08:00
rfs (PID:3216): Opened log for T-1.S-282 dbid 2778483057 branch 1036108814
2021-07-20T17:50:09.527462+08:00
ARC3 (PID:2867): Archived Log entry 22 added for T-1.S-281 ID 0xa7521ccd LAD:1


To introduce a delay in applying ArchiveLogs at STDB2, I specify the DELAY attribute at the Primary database:

SQL> show parameter log_archive_dest_3

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_3 string SERVICE=STDB2 ASYNC VALID_FOR=
(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STDB2
log_archive_dest_30 string
log_archive_dest_31 string
SQL> alter system set log_archive_dest_3='SERVICE=STDB2 DELAY=60 ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STDB2';

System altered.

SQL> show parameter log_archive_dest_3

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_3 string SERVICE=STDB2 DELAY=60 ASYNC V
ALID_FOR=(ONLINE_LOGFILES,PRIM
ARY_ROLE) DB_UNIQUE_NAME=STDB2
log_archive_dest_30 string
log_archive_dest_31 string
SQL>


So, I have introduced a lag of 60 minutes for STDB2. Over the next 60 minutes, the Primary will continue generating Redo and ArchiveLogs and both Standbys will continue receiving them, but STDB2 will apply each one only after 60 minutes.
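
To double-check what the destination now carries, the DELAY value is visible in v$archive_dest on the Primary (an extra check of my own, not part of the original demo):

SQL> select dest_id, delay_mins, destination from v$archive_dest where dest_id = 3;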

However, to enforce this, I must also ensure that the Standby does *not* use Real Time Apply, so I must change the RECOVER command at the Standby. Note that this must be done on the Standby first! The Standby must start its recovery with "USING ARCHIVED LOGFILE" *before* the Primary sets a DELAY value for the target log_archive_dest_n.

oracle19c>echo $ORACLE_SID
STDB2
oracle19c>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Tue Jul 20 18:14:26 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database recover managed standby database using archived logfile disconnect from session;

Database altered.

SQL>


If I do not change the RECOVER command at the Standby to explicitly specify "USING ARCHIVED LOGFILE", it still defaults to Real Time Apply and ignores the DELAY specified by the Primary, and I get this message in the STDB2 alert log:

 rfs (PID:13712): WARN: Managed Standby Recovery started with REAL TIME APPLY
rfs (PID:13712): WARN: DELAY 60 minutes specified at primary ignored
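
To confirm from the Primary which apply mode each standby destination is actually using (again, an extra check of my own), v$archive_dest_status reports it; a destination running Real Time Apply shows 'MANAGED REAL TIME APPLY':

SQL> select dest_id, db_unique_name, recovery_mode from v$archive_dest_status where dest_id in (2,3);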


I review the alert log files for all 3 databases about an hour later.

This is the first Standby (STDBYDB)

2021-07-20T19:10:57.161885+08:00
PR00 (PID:2912): Media Recovery Waiting for T-1.S-297 (in transit)
2021-07-20T19:10:57.324337+08:00
rfs (PID:3263): Selected LNO:4 for T-1.S-297 dbid 2778483057 branch 1036108814
2021-07-20T19:10:58.401720+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 297 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log
2021-07-20T19:14:39.910894+08:00
ARC3 (PID:2736): Archived Log entry 51 added for T-1.S-297 ID 0xa7521ccd LAD:1
2021-07-20T19:14:39.943728+08:00
rfs (PID:3263): Standby controlfile consistent with primary
2021-07-20T19:14:40.136187+08:00
rfs (PID:3263): Selected LNO:4 for T-1.S-298 dbid 2778483057 branch 1036108814
2021-07-20T19:14:40.136811+08:00
PR00 (PID:2912): Media Recovery Waiting for T-1.S-298 (in transit)
2021-07-20T19:14:41.180355+08:00
Recovery of Online Redo Log: Thread 1 Group 4 Seq 298 Reading mem 0
Mem# 0: /opt/oracle/oradata/STDBYDB/stdbredo01.log

SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string STDBYDB
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
13123569

SQL>


And this is the Second Standby (STDB2) :

2021-07-20T19:10:58.180405+08:00
rfs (PID:21331): No SRLs available for T-1
2021-07-20T19:10:58.319036+08:00
ARC0 (PID:2857): Archived Log entry 37 added for T-1.S-296 ID 0xa7521ccd LAD:1
2021-07-20T19:10:58.319358+08:00
ARC0 (PID:2857): Archive log for T-1.S-296 available in 60 minute(s)
2021-07-20T19:10:58.320321+08:00
rfs (PID:21331): Opened log for T-1.S-297 dbid 2778483057 branch 1036108814
2021-07-20T19:14:40.363888+08:00
rfs (PID:21331): Archived Log entry 38 added for B-1036108814.T-1.S-297 ID 0xa7521ccd LAD:3
2021-07-20T19:14:40.782081+08:00
rfs (PID:21331): Selected LNO:4 for T-1.S-298 dbid 2778483057 branch 1036108814
2021-07-20T19:15:12.430015+08:00
PR00 (PID:26793): Media Recovery Log /opt/oracle/archivelog/STDB21_286_1036108814.dbf
PR00 (PID:26793): Media Recovery Delayed for 60 minute(s) T-1.S-287

SQL> show parameter db_unique_name

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_unique_name string STDB2
SQL> select current_scn from v$database;

CURRENT_SCN
-----------
13108691

SQL>


So, while the first Standby (STDBYDB) has already applied and archived Sequence#297 and is currently applying Sequence#298 from the Standby logfile, the second Standby (STDB2) has archived Sequence#297 and received Sequence#298, but reports that Sequence#286 is currently being applied and that the apply of Sequence#287 onwards is delayed by 60 minutes.
I can also run an SCN_TO_TIMESTAMP query on the Primary (this cannot be executed on a Standby that is not OPEN):

SQL> select scn_to_timestamp(13123569) At_STDBYDB from dual;

AT_STDBYDB
---------------------------------------------------------------------------
20-JUL-21 07.13.39.000000000 PM

SQL> select scn_to_timestamp(13108691) At_STDB2 from dual;

AT_STDB2
---------------------------------------------------------------------------
20-JUL-21 06.15.07.000000000 PM

SQL>


This shows that STDB2 is lagging by about 60 minutes.
So, if any "bad action" (deletion of data or dropping of objects) is detected at the Primary (and a Flashback option is not available), the Standby can be OPENed Read Only to view the data as it was 1 hour ago.
I'll show that option in my next blog post.



Categories: DBA Blogs

Manage ODA patching with Data Guard or Dbvisit Standby

Yann Neuhaus - Mon, 2021-07-19 11:33
Introduction

Building an Oracle infrastructure today without thinking about a Disaster Recovery (DR) solution is quite rare. It has become obvious that a backup or a dump will not help if you have nowhere to restore or import it once your production server is down. And restoring a backup is definitely not the fastest way to bring your database back to life. As a consequence, Data Guard or Dbvisit Standby, depending on which edition you're running, is a must-have. And these tools are much more than Disaster Recovery solutions: you can use them for planned maintenance as well, or if you need to move your server to another datacenter, for example.

Oracle Database Appliance does not deviate from that, and the bare minimum configuration is composed of 2 ODAs, 1 for production, and 1 for DR and test/dev, the DR feature being implemented with Data Guard or Dbvisit Standby.

Using Data Guard or Dbvisit Standby also helps when it comes to patching, because it's good practice to patch your ODAs from time to time. You may wonder how to proceed when using Data Guard or Dbvisit Standby; here is how I have been doing it for years.

The 3-step patch on ODA

You apply an ODA patch step by step (a sketch of the corresponding odacli commands follows the list):

  • a few pre-patches for updating the dcs components
  • a system patch for updating BIOS, ILOM, Operating System and Grid Infrastructure
  • a storage patch for updating data disk firmwares
  • a DB patch for updating DB homes, this one being applied multiple times if you have multiple DB homes
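
On a recent ODA this maps, roughly, to the following odacli sequence (a sketch only: the version string is an example, <dbhome_id> is a placeholder, and the exact pre-patch steps depend on your source release, so always follow the patch README):

odacli update-repository -f /tmp/oda-sm-19.11.0.0.0-<date>-server.zip
odacli update-dcsadmin -v 19.11.0.0.0
odacli update-dcscomponents -v 19.11.0.0.0
odacli update-dcsagent -v 19.11.0.0.0
odacli update-server -v 19.11.0.0.0
odacli update-storage -v 19.11.0.0.0
odacli list-dbhomes
odacli update-dbhome -i <dbhome_id> -v 19.11.0.0.0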

The patch can take quite a long time: plan a minimum of 3 to 4 hours for a single-node ODA. If you add preparing the ODA before patching, troubleshooting and running sanity checks after applying the patch, you most likely need one complete day for patching. Downtime may vary, but as at least one or two reboots are needed, I usually plan for a full day of downtime. Yes, this is huge, but this is real life.

Furthermore, if you don’t patch often enough, a single patch will probably not do the job. Patches are not always cumulative, and you sometimes need to apply 3 or 4 patches to upgrade to the latest version, significantly increasing the time to patch and the associated downtime.

As if it were not already complicated enough, you can encounter problems when patching and get stuck for a while before finding a solution or a workaround. But don't blame Oracle for that: who else can bundle such a variety of updates (OS, BIOS, firmware, ILOM, GI, DB) in just one patch? Oracle Database has always been a powerful RDBMS, but with a high degree of complexity. Adding the GI layer, Linux OS and hardware updates definitely makes patching a tough task.

Patching strategy when using a DR solution

Patching can be your nightmare… or not. It totally depends on how you manage these patches.

First of all, I would recommend patching only an ODA where no primary is running. And this is only possible if you use Data Guard or Dbvisit Standby: plan a switchover of the primaries to the other ODA before patching. If something goes wrong during patching, or if it takes more time than planned, it won't have any impact on your business. You may just miss your standby databases for a few hours, but this is normally something you can manage. Highly critical databases may use multiple standbys in order to keep maximum safety during patches.
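
With Data Guard and the broker in place, the switchover itself is a one-liner (a generic sketch; PROD_SITE1/PROD_SITE2 are placeholders for your DB_UNIQUE_NAMEs, PROD_SITE2 being the database on the ODA that is not being patched):

[oracle@ODA01 ~]$ dgmgrl sys@PROD_SITE1
DGMGRL> switchover to 'PROD_SITE2';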

I would also recommend keeping one test primary for each DB home on each ODA (I usually create a DBTEST on each ODA when deploying and keep it for testing purposes). This primary will be patched to validate the complete patching process.

When you apply the patch on this first ODA, you can either stop the patching process before patching the DB homes in order to keep them at the same version as the other ODA, or you can decide to apply the DB home patch: it will update the binaries, but it will not be able to apply the datapatch on the databases themselves. It doesn't matter AS LONG AS YOU DO NOT SWITCH BACK THE PRIMARIES TO THIS PATCHED ODA. If you decide to apply the DB home patches and then do the switchover to this updated ODA, your binaries will no longer be in sync with the catalog, and that could lead to several problems (especially regarding Java code inside the DB). So applying the complete patch is OK if you keep only standby databases on this ODA, until you patch the other ODA.

It’s not recommended to wait more than few weeks for patching the other ODA. So once you successfully patched the first one and you have waited few days to make sure everything is OK with this patch, switchover all the primaries to the patched ODA. Once done, you now need to first apply the DB home patch with update-dbhome on these primaries. You can also use a manual datapatch. This is mandatory to update the catalog to match the binaries’ version. Then apply the complete patch to your other ODA. Once done, both ODAs are up to date and you can dispatch your databases again on both servers.

I would highly recommend verifying on each database that the patch has been applied correctly. Sometimes a manual datapatch is needed, so don't hesitate to run one if something went wrong with odacli update-dbhome:

set serverout on
exec dbms_qopatch.get_sqlpatch_status;
...
Patch Id : 29972716
Action : APPLY
Action Time : 14-OCT-2020 10:59:29
Description : DATABASE BUNDLE PATCH 12.1.0.2.191015
Logfile :
/u01/app/oracle/cfgtoollogs/sqlpatch/29972716/23132651/29972716_apply_DEV121_202
0Oct14_10_58_32.log
Status : SUCCESS
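
If the status is not SUCCESS, a manual datapatch run from the patched home usually fixes it (standard Oracle tooling, shown here as a generic sketch; the database and its PDBs must be open):

[oracle@ODA01 ~]$ . oraenv          # select the database running from the patched home
[oracle@ODA01 ~]$ cd $ORACLE_HOME/OPatch
[oracle@ODA01 OPatch]$ ./datapatch -verbose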

And when reimaging?

Sometimes reimaging is a better idea than applying patches. Reimaging may be faster than applying several patches, and there is much less troubleshooting as you wipe everything and restart from scratch. Reimaging can only be considered if you can switch all your primaries to another ODA for several days. You then need to remove the standby configuration from Data Guard (removing only the standby is OK), because your standby database will not be available for hours/days (actually you will rebuild it).

When you do the reimage, you cannot choose which patch is applied on top of the DB home: it's the one that comes with the global version. So you won't be able to immediately switch back your primaries after patching. For example, patching from 18.5 to 19.11 with a reimaging (it makes sense because the version gap is large) will bring an 18.14 DB home that can cause problems with an 18.5 database's catalog.

Before reimaging, I would recommend backing up the controlfile autobackup of the standby to an external filesystem; it also includes the latest spfile. Then you don't need to recreate the spfile from a primary pfile, and you will not forget to restore a STANDBY controlfile, because your controlfile backup is already a standby controlfile.
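
As a sketch of that pre-reimage backup (the NFS path below is a placeholder), taking a fresh controlfile and spfile backup from the mounted standby works just as well as copying the latest autobackup piece, and a controlfile backup taken on a standby is already a standby controlfile:

RMAN> backup current controlfile format '/mnt/nfsbackup/STDB_ctl_%U';
RMAN> backup spfile format '/mnt/nfsbackup/STDB_spfile_%U';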

And restoring the standby database from a primary backup is OK if you don't want to use an RMAN DUPLICATE.

Why can I safely run a mounted standby with different binaries but not using it as a primary?

Normally you should only run binaries and a database's catalog with the same version. But running a standby with a different binaries' version is actually OK. This is because the catalog is not opened on a standby: the catalog resides in the SYSTEM tablespace, and a mounted database does not open any datafile. You may also notice that applying the datapatch on a primary is only possible because you can open your old database with newer binaries; otherwise you would never be able to update the catalog.

There is no risk opening a Standby in READ ONLY mode with different binaries, or if you're using the Active Data Guard option, but some advanced features may not work correctly.

Data Guard vs Dbvisit Standby

Data Guard being included with Enterprise Edition, there is no cost to have (at least) one standby for each primary (apart from the cost of the storage). Don't hesitate to give each primary a standby, for Disaster Recovery but also for being able to patch with this method.

The Dbvisit licensing metric is per database, so you may consider using standbys only for production. But as Standard Edition and Dbvisit Standby are quite inexpensive (compared to Enterprise Edition), buying extra licenses for test and dev databases is definitely a brilliant idea in order to free your ODA from primaries when it comes to patching.

Conclusion

Data Guard and Dbvisit Standby are much more than DR solutions: they simplify the patching of your ODAs and make reimaging possible. This definitely improves your ODAs' lifecycle management.

The post Manage ODA patching with Data Guard or Dbvisit Standby appeared first on Blog dbi services.

Azure Data Engineer [DP203] Q/A | Day 5 Live Session Review

Online Apps DBA - Mon, 2021-07-19 05:40

The blog post – https://k21academy.com/dp203day5 will cover the Q/A’s from Day 5 of Microsoft Azure Data Engineering [Dp-203] Certification in which we have covered Module 7: Ingest And Load Data Into The Data Warehouse and Module 8: Transform Data With Azure Data Factory Or Azure Synapse Pipelines FAQs. This blog will help you to get […]

The post Azure Data Engineer [DP203] Q/A | Day 5 Live Session Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Power BI Gateway – How to Install & Configure

Online Apps DBA - Mon, 2021-07-19 05:10

➤ What is Power BI Gateway? Power BI Gateway is the software required to access data from the on-premises network. The request always goes through a Gateway to access the on-premises data from a cloud. For fast and secure data transfer between on-premises, it is preferred to install a gateway on servers where data is […]

The post Power BI Gateway – How to Install & Configure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Key Performance Indicator in Power BI

Online Apps DBA - Mon, 2021-07-19 04:56

➤ What is KPI? KPI is a Key Performance Indicator that helps in determining the amount of progress made towards a goal. It helps a business keep an eye on the overall growth and performance using certain parameters that focus on a particular result. In simple words, KPI indicates the current performance of a business […]

The post Key Performance Indicator in Power BI appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Serving ML Model with Docker, RabbitMQ, FastAPI and Nginx

Andrejus Baranovski - Mon, 2021-07-19 01:53
In this tutorial I explain how to serve an ML model using tools such as Docker, RabbitMQ, FastAPI and Nginx. The solution is based on our open-source product Katana ML Skipper (or just Skipper). It allows running an ML workflow using a group of microservices. It is not limited to ML; you can run any workload using Skipper and plug in your own services. You can reach out to me if you have any questions.

 
