Feed aggregator

Dbvisit 9: Adding datafiles and/or tempfiles

Yann Neuhaus - Fri, 2020-01-17 11:57

One question I had was whether the standby_file_management parameter is relevant in a Dbvisit environment with Oracle Standard Edition. I ran some tests, and in this post I describe what I did.
We assume that Dbvisit is already set up and that the replication is working fine.

[oracle@dbvisit1 trace]$ /u01/app/dbvisit/standby/dbvctl -d dbstd -i
=============================================================
Dbvisit Standby Database Technology (9.0.08_0_g99a272b) (pid 19567)
dbvctl started on dbvisit1: Fri Jan 17 16:48:16 2020
=============================================================

Dbvisit Standby log gap report for dbstd at 202001171648:
-------------------------------------------------------------
Description       | SCN          | Timestamp
-------------------------------------------------------------
Source              2041731        2020-01-17:16:48:18 +01:00
Destination         2041718        2020-01-17:16:48:01 +01:00

Standby database time lag (DAYS-HH:MI:SS): +00:00:17

Report for Thread 1
-------------------
SOURCE
Current Sequence 8
Last Archived Sequence 7
Last Transferred Sequence 7
Last Transferred Timestamp 2020-01-17 16:48:07

DESTINATION
Recovery Sequence 8

Transfer Log Gap 0
Apply Log Gap 0

=============================================================
dbvctl ended on dbvisit1: Fri Jan 17 16:48:23 2020
=============================================================

[oracle@dbvisit1 trace]$

At this point, standby_file_management is set to MANUAL on both servers:

[oracle@dbvisit1 trace]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:50:50 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>


[oracle@dbvisit2 back_dbvisit]$ sqlplus / as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jan 17 16:51:15 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.


Connected to:
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
MOUNTED

SQL>  show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL>

Let’s create a tablespace MYTAB on the primary database

SQL> create tablespace mytab datafile '/u01/app/oracle/oradata/DBSTD/mytab01.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

A few moments later we can see that the new datafile has been replicated on the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB

6 rows selected.

SQL>  select name from v$datafile
  2  ;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

SQL>

Now let’s repeat the tablespace creation while the parameter is set to AUTO on both sides.

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO
SQL>  create tablespace mytab2 datafile '/u01/app/oracle/oradata/DBSTD/mytab201.dbf' size 10M;

Tablespace created.

SQL>

SQL> alter system switch logfile;

System altered.

SQL>

A few moments later the tablespace MYTAB2 was also replicated on the standby:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/system01.dbf
/u01/app/oracle/oradata/DBSTD/mytab201.dbf
/u01/app/oracle/oradata/DBSTD/sysaux01.dbf
/u01/app/oracle/oradata/DBSTD/undotbs01.dbf
/u01/app/oracle/oradata/DBSTD/mytab01.dbf
/u01/app/oracle/oradata/DBSTD/users01.dbf

6 rows selected.

In the Dbvisit documentation we can find the following:
Note 2: This feature is independent of the Oracle parameter STANDBY_FILE_MANAGEMENT. Dbvisit Standby will detect if STANDBY_FILE_MANAGEMENT has added the datafile to the standby database, and if so, Dbvisit Standby will not add the datafile.
Note 3: STANDBY_FILE_MANAGEMENT can only be used in Enterprise Edition and should not be set in Standard Edition.

Dbvisit does not use STANDBY_FILE_MANAGEMENT for datafile replication, so I decided to set the parameter back to its default value, which is MANUAL.
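Since the parameter is dynamic, setting it back explicitly is a one-liner; here is a minimal sketch, assuming an spfile is in use and that you run it on both the primary and the standby:

-- sketch: run on both primary and standby; assumes an spfile (scope=both)
alter system set standby_file_management='MANUAL' scope=both;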

What about adding a tempfile in a Dbvisit environment? On the primary I create a new temporary tablespace:

SQL> show parameter standby_file_management;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL
SQL> create temporary tablespace temp2 tempfile '/u01/app/oracle/oradata/DBSTD/temp2_01.dbf' size 10M;

Tablespace created.

SQL> alter system switch logfile;

System altered.

SQL>

We can see on the primary that we now have two tempfiles.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

On the standby side, the new temporary tablespace was replicated:

SQL> select name from v$tablespace;

NAME
------------------------------
SYSAUX
SYSTEM
UNDOTBS1
USERS
TEMP
MYTAB
MYTAB2
TEMP2

8 rows selected.

But the new tempfile is not listed on the standby

SQL>  select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf

SQL>

In fact this is the expected behavior. In the documentation we can find the following:
If your preference is to have exactly the same number of temp files referenced in the standby control file as your current primary database, then once a new temp file has been added on the primary, you need to recreate a standby control file by running the following command from the primary server:
dbvctl -f create_standby_ctl -d DDC

So let’s recreate the standby control file

[oracle@dbvisit1 ~]$ /u01/app/dbvisit/standby/dbvctl -f create_standby_ctl -d dbstd
=>Replace current standby controfiles on dbvisit2 with new standby control
file?  [No]: yes

>>> Create standby control file... done

>>> Copy standby control file to dbvisit2... done

>>> Recreate standby control file... done

>>> Standby controfile(s) on dbvisit2 recreated. To complete please run dbvctl on the
    primary, then on the standby.
[oracle@dbvisit1 ~]$

Afterwards we can verify that the new tempfile is now visible on the standby side:

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/DBSTD/temp01.dbf
/u01/app/oracle/oradata/DBSTD/temp2_01.dbf

SQL>

The post Dbvisit 9: Adding datafiles and/or tempfiles appeared first on Blog dbi services.

Deploying your own PostgreSQL image on Nutanix Era

Yann Neuhaus - Fri, 2020-01-17 11:03

A few days ago we had a very good training on Nutanix. Nutanix is a hyper-converged infrastructure, which means that everything is software driven and the system can be deployed on many hardware configurations. I will not go into the details of the system itself but rather look at one component/module called Era. Era promises to simplify database deployments by providing a clean and simple user interface (and an API) with deployment procedures for PostgreSQL, MS SQL, MySQL, MariaDB and Oracle. There are predefined templates you can use, but in this post I’ll look at how you can use Era to deploy your own PostgreSQL image.

Before you can register a software profile with Era there needs to be a VM up and running which already has PostgreSQL installed. For that I’ll import the latest CentOS 7 ISO with Prism (CentOS 8 is not yet supported).

Importing images is done in the “Images Configuration” section under “Settings” of Prism:


Once you start the upload a new task is generated which can be monitored in the tasks section:

Now that the image is ready we need to deploy a new virtual machine which will use the image as installation source:








As the virtual machine is now defined we need to power it on and then launch the console:


Follow your preferred way of doing the CentOS installation and, once it is done, power off the virtual machine to remove the ISO. Otherwise you will always land in the installation procedure when the virtual machine is started:


After you have powered the virtual machine on again you should be able to connect with ssh:

The next step is to install PostgreSQL in whatever way you prefer, for example from source code. We will not create a PostgreSQL instance; the binaries are enough. In my case everything was installed here:

 postgres@centos7postgres12:/home/postgres/ [pg121] echo $PGHOME
/u01/app/postgres/product/12/db_1/
postgres@centos7postgres12:/home/postgres/ [pg121] ls $PGHOME
bin  include  lib  share

Now that we have our PostgreSQL server we need to register it in Era. Before doing that you should download and execute the pre-check script on the new database server:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh

----------------------------------------------------------------------------------
   Error: Database type not specified
   Syntax: $ ./era_linux_prechecks.sh -t|--database_type  [-c|--cluster_ip ] [-p|--cluster_port] [-d|--detailed]
   Database type can be: oracle_database, postgres_database, mariadb_database, mysql_database
----------------------------------------------------------------------------------

postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : NO
         15] unzip                : YES
         16] rsync                : NO

     Summary:
     --------
         This machine does not satisfy all of the dependencies required by Era.
         It can not be onboarded to Era unless all of these are satified.

     **WARNING: Cluster API was not provided. Couldn't go ahead with the Prism API connectivity check.
     Please ensure Prism APIs are callable from the host.
====================================================================
1postgres@centos7postgres12:/home/postgres/ [pg121]

In my case only “zip” and “rsync” are missing, which of course is easy to fix:

postgres@centos7postgres12:/home/postgres/ [pg121] sudo yum install -y zip rsync
...
postgres@centos7postgres12:/home/postgres/ [pg121] sudo ./era_linux_prechecks.sh -t postgres_database
libselinux-python-2.5-14.1.el7.x86_64


--------------------------------------------------------------------
|              Era Pre-requirements Validation Report              |
--------------------------------------------------------------------

     General Checks:
     ---------------
         1] Username           : root
         2] Package manager    : yum
         2] Database type      : postgres_database

     Era Configuration Dependencies:
     -------------------------------
         1] User has sudo access                         : YES
         2] User has sudo with NOPASS access             : YES
         3] Crontab configured for user                  : YES
         4] Secure paths configured in /etc/sudoers file : YES

     Era Software Dependencies:
     --------------------------
          1] GCC                  : N/A
          2] readline             : YES
          3] libselinux-python    : YES
          4] crontab              : YES
          5] lvcreate             : YES
          6] lvscan               : YES
          7] lvdisplay            : YES
          8] vgcreate             : YES
          9] vgscan               : YES
         10] vgdisplay            : YES
         11] pvcreate             : YES
         12] pvscan               : YES
         13] pvdisplay            : YES
         14] zip                  : YES
         15] unzip                : YES
         16] rsync                : YES

     Summary:
     --------
         This machine satisfies dependencies required by Era, it can be onboarded.

Looks good and the database server can now be registered:




Era also has a task list which can be monitored:

… and then it fails because PostgreSQL 12.1 is not supported. That is fine but I would have expected the pre-check script to tell me that. Same procedure again, this time with PostgreSQL 11.6 and that succeeds:

This database server is now the source for a new “Software profile”:




And that’s it: Our new PostgreSQL software profile is ready to use. In the next post we’ll try to deploy a new virtual machine from that profile.

The post Deploying your own PostgreSQL image on Nutanix Era appeared first on Blog dbi services.

Group by Elimination

Jonathan Lewis - Fri, 2020-01-17 06:57

Here’s a bug that was highlighted a couple of days ago on the Oracle Developer Community forum; it may be particularly worth thinking about if you haven’t yet got up to Oracle 12c, as it relates to an optimizer feature that appeared in 12.2 and hasn’t been completely fixed even in the latest release of 19c (currently 19.6).

Oracle introduced “aggregate group by elimination” in 12.2, protected by the hidden parameter “_optimizer_aggr_groupby_elim”. The notes on MOS about the feature tell us that Oracle can eliminate a group by operation from a query block if a unique key from every table in the query block appears in the group by clause. Unfortunately there were a couple of gaps in the implementation in 12.2 that can produce wrong results. Here’s some code to model the problem.

rem
rem     Script:         group_by_elim_bug.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table ref_clearing_calendar(
        calendar_name   char(17),
        business_date   date,
        update_ts       timestamp (6) default systimestamp,
        constraint pk_ref_clearing_calendar 
                        primary key (business_date)
)
/

insert into ref_clearing_calendar (business_date)
select
        sysdate + 10 * rownum
from 
        all_objects 
where 
        rownum <= 40 -- > comment to avoid wordpress format issue
/

commit;

execute dbms_stats.gather_table_stats(null,'ref_clearing_calendar',cascade=>true)

set autotrace on explain

select
        to_char(business_date,'YYYY') , count(*)
from
        ref_clearing_calendar
group by 
        to_char(business_date,'YYYY')
order by 
        to_char(business_date,'YYYY')
/

set autotrace off

I’ve created a table with a primary key on a date column, and then inserted 40 rows which are spaced every ten days from the current date; this ensures that I will have a few dates in each of two consecutive years (future proofing the example!). Then I’ve aggregated to count the rows per year using the to_char({date column},’YYYY’) conversion option to extract the year from the date. (Side note: the table definition doesn’t follow my normal pattern as the example started life in the ODC thread.)

If you run this query on Oracle 12.2 you will find that it returns 40 (non-unique) rows and displays the following execution plan:


---------------------------------------------------------------------------------------------
| Id  | Operation        | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   1 |  SORT ORDER BY   |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   2 |   INDEX FULL SCAN| PK_REF_CLEARING_CALENDAR |    40 |   320 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

The optimizer has applied “aggregate group by elimination” because it hasn’t detected that the primary key column that appears in the group by clause has been massaged in a way that means the resulting value is no longer unique.

Fortunately this problem with to_char() is fixed in Oracle 18.1 where the query returns two rows using the following execution plan (which I’ve reported from an instance of 19.5):

---------------------------------------------------------------------------------------------
| Id  | Operation        | Name                     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   1 |  SORT GROUP BY   |                          |    40 |   320 |     2  (50)| 00:00:01 |
|   2 |   INDEX FULL SCAN| PK_REF_CLEARING_CALENDAR |    40 |   320 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Unfortunately there is still at least one gap in the implementation. Change the to_char(business_date) to extract(year from business_date) at all three points in the query, and even in 19.6 you’re back to the wrong results – inappropriate aggregate group by elimination and 40 rows returned.

There are a couple of workarounds: one is to set the hidden parameter _optimizer_aggr_groupby_elim to false at the system or session level, or through an opt_param() hint at the statement level (possibly injected through an SQL_Patch). The other option is to set a fix_control, again at the system, session, or statement level – but there seems to be little point in using the fix_control approach (which might be a little obscure for the next developer to see in the code) when it seems to do the same as the explicitly named hidden parameter. The statement-level fragments are shown below, followed by a fuller sketch.

select
        /*+ opt_param('_optimizer_aggr_groupby_elim','false') */
        extract(year from business_date) , count(*)
from ...

select
        /*+ opt_param('_fix_control','23210039:0') */
        extract(year from business_date) , count(*)
from ...
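For completeness, here is a sketch of what the session-level workarounds and a complete hinted statement might look like against the demo table above (the underscore parameter and the fix control are the ones named earlier; as always, treat hidden parameters and fix controls with care before using them in production):

-- session-level versions of the two workarounds (sketch)
alter session set "_optimizer_aggr_groupby_elim" = false;
alter session set "_fix_control" = '23210039:0';

-- statement-level version: the opt_param() hint applied to the extract() variant of the query
select
        /*+ opt_param('_optimizer_aggr_groupby_elim','false') */
        extract(year from business_date), count(*)
from
        ref_clearing_calendar
group by
        extract(year from business_date)
order by
        extract(year from business_date)
/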

One final thought about this “not quite fixed” bug. It’s the type of “oversight” error that gives you the feeling that there may be other special cases that might have been overlooked. The key question would be: are there any other functions (and not necessarily datetime functions) that might be applied (perhaps implicitly) to a primary or unique key and produce duplicate results from distinct inputs – and, if so, has the code that checks the validity of eliminating the aggregate operation been written to notice the threat?

Footnote

The problem with extract() has been raised as a bug on MOS, but it was not public at the time of writing this note.

Update (about 60 seconds after publication)

Re-reading my comment about “other functions” it occurred to me that to_nchar() might, or might not, behave the same way as to_char() in 19c – so I tested it … and got the wrong results in 19c.
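Presumably that test was simply the original query with the conversion function swapped, something along these lines:

-- sketch of the to_nchar() variant (assumed, not taken from the original post)
select
        to_nchar(business_date,'YYYY') , count(*)
from
        ref_clearing_calendar
group by
        to_nchar(business_date,'YYYY')
order by
        to_nchar(business_date,'YYYY')
/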


Dbvisit Standby 9 : Do you know the new snapshot feature?

Yann Neuhaus - Thu, 2020-01-16 10:23

The Dbvisit snapshot option is a new feature available starting from version 9.0.06. I have tested this option and in this blog I describe the tasks I performed.
The configuration I am using is the following:
dbvisit1 : primary server
dbvisit2 : standby server
orcl : Oracle 19c database
We assume that the Dbvisit environment is already set up and that the replication is working fine. See a previous blog for setting up Dbvisit Standby.
There are two options:
-Snapshot Groups
-Single Snapshots

Snapshot Groups
This option is ideal for companies that are using Oracle Standard Edition (SE, SE1, SE2) and would like to have a “database that is read-only, but also being kept up to date”. It allows you to create 2 or more read-only snapshots of the standby at a given time interval. Each snapshot is opened in read-only mode after its creation. A new service which points to the latest snapshot is created and is automatically managed by the listener.
The feature is very easy to use. As it requires managing filesystems (mount and umount), the user that creates the snapshots needs some admin privileges.
In our case we have configured the user with full sudo privileges, but you can find all required privileges in the documentation.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] sudo grep oracle /etc/sudoers
oracle ALL=(ALL) NOPASSWD:ALL
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

The option can be used from the command line or from the graphical dbvserver console, but it is highly recommended to use the graphical tool.
When the snapshot option is available with your license, you will see the following tab in the dbvserver console:

To create a snapshot group we just have to choose the option NEW SNAPSHOT GROUP

And then fill in the required information. In this example:
-We will create a service named snap_service
-We will generate a snapshot every 15 minutes (900 seconds)
-There will be a maximum of 2 snapshots. When the limit is reached, the oldest snapshot is deleted. Just note that it is deleted only once the new snapshot has been successfully created, which is why we can sometimes have 3 snapshots
-The snapshots will be prefixed by MySna

Once the snapshot group is created without errors, we have to start the snapshot generation by clicking on the green button

And then we can see the status

At OS level we can verify that the first snapshot has been created and that the corresponding instance has started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle    7794  1892  0 10:00 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also verify that the service snap_service is registered in the listener. This service automatically points to the latest snapshot of the group.

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status

LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 16-JAN-2020 10:01:49

Copyright (c) 1991, 2019, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbvisit2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                16-JAN-2020 09:05:07
Uptime                    0 days 0 hr. 56 min. 42 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/dbvisit2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbvisit2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
…
…
…
Service "snap_service" has 1 instance(s).
  Instance "MySna001", status READY, has 1 handler(s) for this service...
The command completed successfully
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

To connect to this service, we just have to create an alias like

snapgroup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = snap_service)
    )
  )

15 minutes later we can see that a new snapshot was generated

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    2300     1  0 09:05 ?        00:00:00 ora_pmon_orcl
oracle    6765     1  0 09:57 ?        00:00:00 ora_pmon_MySna001
oracle   11355     1  0 10:11 ?        00:00:00 ora_pmon_MySna002
oracle   11866  1892  0 10:13 pts/0    00:00:00 grep --color=auto pmon
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

Note that we can only open the snapshot in read-only mode

oracle@dbvisit1:/home/oracle/ [orcl (CDB$ROOT)] sqlplus sys/*****@snapgroup as sysdba

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDB1                           MOUNTED
         4 PDB2                           MOUNTED
SQL> alter pluggable database all open;
alter pluggable database all open
*
ERROR at line 1:
ORA-65054: Cannot open a pluggable database in the desired mode.


SQL>  alter pluggable database all open read only;

Pluggable database altered.

Of course you have probably seen that you can stop, start, pause and remove snapshots from the GUI.

Single Snapshot
This option allows you to create read-only as well as read-write snapshots of the database.
To create a single snapshot, we just have to choose NEW SINGLE SNAPSHOT.
In the following example the snapshot will be opened in read-write mode

At the end of the creation we can see the status

We can verify that a service SingleSn was also created

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] lsnrctl status | grep Singl
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
Service "SingleSn" has 1 instance(s).
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service...
  Instance "SingleSn", status READY, has 1 handler(s) for this service

And that the instance SingleSn is started

oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)] ps -ef | grep pmon
oracle    3294     1  0 16:04 ?        00:00:00 ora_pmon_SingleSn
oracle    3966  1748  0 16:08 pts/0    00:00:00 grep --color=auto pmon
oracle   14349     1  0 13:57 ?        00:00:00 ora_pmon_orcl
oracle@dbvisit2:/home/oracle/ [orcl (CDB$ROOT)]

We can also note that at OS level a filesystem is mounted for the snapshot, so there must be enough free space in the LVM volume that hosts the database.

oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)] df -h | grep snap
/dev/mapper/ora_data-SingleSn   25G   18G  6.3G  74% /u01/app/dbvisit/standby/snap/orcl/SingleSn
oracle@dbvisit2:/home/oracle/ [MySna004 (CDB$ROOT)]

Using the alias

singlesnap =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbvisit2)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = SingleSn)
    )
  )

We can see that the new snapshot is opened in read-write mode

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
SINGLESN  READ WRITE

SQL>
Conclusion

What we will retain is that a snapshot group is a group of snapshots taken at a regular time interval. These snapshots can only be opened in read-only mode.
A single snapshot can be created in read-only or read-write mode. When opened in read-write mode, it can perhaps be compared to an Oracle snapshot standby.
The Dbvisit snapshot option requires a license and is only supported on Linux.

The post Dbvisit Standby 9 : Do you know the new snapshot feature? appeared first on Blog dbi services.

Why Businesses Need Gen 2 Cloud

Oracle Press Releases - Thu, 2020-01-16 09:46
Blog
Why Businesses Need Gen 2 Cloud

By Barbara Darrow—Jan 16, 2020

As we move into the third decade of cloud computing, it’s important to remember that not all clouds are alike.

Back in, say 2006, cloud computing services were a revelation to software developers. These rentable storage and compute services provided a handy virtual holding cell that developers could use for designing and prototyping new applications without breaking the bank, or requiring scarce cooperation from in-house resources. And that was all well and good for that time.

Fast forward more than a decade and the status quo is a lot more complicated. As businesses of all sizes weigh the use of cloud platforms to run their critical applications, one part of the equation is that these applications cannot fail without serious financial and reputational consequences. Put it this way: If a computer game glitches, your point total may take a hit. If your manufacturing system hiccups, you can lose real money. And/or your job, depending on your role at the company.

For these types of workloads, businesses require highly secure cloud services that can work in tandem with other cloud services, with services that remain on-premise, and are modern enough for emerging competitive and technological needs.

Wanted:  Gen 2 Cloud

This shift in how companies look at cloud services affects tech suppliers and their customers alike, said Edward Screven, Oracle’s chief corporate architect, in an address earlier this month to Oracle customers attending Oracle Cloud Day in New York City.

“Gen 1 clouds are not good enough for many uses,” he said. Oracle built its Gen 2 cloud to support its cloud business applications, and for its business customers. “Your critical apps drove how we built our cloud,” Screven said.

Early clouds were built so that processors were shared by many customers. The advantage was cost efficiency; the downside was that customer code can, in some circumstances, access the very computers that are used to run the cloud service. In this case, if that customer code were to be infected by malware, the threat may not be contained to that customer’s workload, and can spread.

Oracle’s Gen 2 cloud, by contrast, isolates customer code so it cannot access the cloud’s control computers—or vice versa. A separate processor, which Screven called a perimeter control computer, filters incoming network packets to prevent bad code from entering the customer’s work zone. Isolating the controls of the cloud operations from customer’s tasks helps ensure better security all around.

Gen 2 cloud also utilizes autonomous technology to patch, update, and configure cloud resources without human intervention. In complex environments, it is expensive and time consuming for human administrators to keep up with the latest security threats or configuration wrinkles—provided you can even find them in a tight labor market.

Oracle’s Gen 2 cloud supports online patching. During the Spectre/Meltdown bugs that afflicted Intel processors, online patching averted downtime for Oracle customers by applying 150 million fixes across 1.5 million computer cores. That massive operation took just four hours, Screven said. Perhaps more impressively, online patching, based on Oracle’s Ksplice technology, meant no servers had to be rebooted at all. “Our customers didn’t realize we’d even done it,” Screven said.

Autonomous capabilities give Oracle’s Gen 2 cloud 99.995% availability—which nets out to less than 2.5 minutes of downtime per month.

Bolstering Hybrid

While it is true that many business workloads are moving to outside public cloud infrastructure, it is also true that most businesses will keep running some mission-critical applications on premise.

According to Forrester VP and principal analyst James Staten, a healthy 74% of businesses surveyed describe their IT strategy as hybrid, meaning they use cloud in parallel with non-cloud technologies.

Those companies need a comprehensive way to manage their data and applications across deployment models. 

Oracle’s latest release of Enterprise Manager is intended to help companies migrate workloads to Oracle’s Gen 2 cloud—as well as manage their databases from a single dashboard, whether they are running on premise or in the cloud.

New World Order

Businesses are quickly learning that Gen 1 clouds relying on commodity hardware and software, and a patchwork quilt of disparate services, are not up to the mission-critical task of running their most important applications and safeguarding their data in a secure, scalable, reliable way.

Most likely they will want to avail themselves of a Gen 2 cloud built from its foundations to handle important business applications.

Disable fast_start failover

Michael Dinh - Wed, 2020-01-15 18:54

Data Guard Fast-Start Failover Test

If you recall, the observer was started from ol7-121-dg1, which was the standby at the time and is now the primary after the failover.

[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Kill observer process from OS:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Thu Jan 16 00:32:04 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg1 sql]$

[oracle@ol7-121-dg1 sql]$ ps -ef|grep dgmgrl
oracle 10381 32397 0 00:32 pts/1 00:00:00 grep --color=auto dgmgrl
oracle 31831 30778 0 Jan15 pts/0 00:00:01 dgmgrl

[oracle@ol7-121-dg1 sql]$ kill -9 31831
Output from Observer session:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020

Killed *****
[oracle@ol7-121-dg1 sql]$
Disable fast_start failover:
[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Error: ORA-16820: fast-start failover observer is no longer observing this database

cdb1_stby - (*) Physical standby database
Error: ORA-16820: fast-start failover observer is no longer observing this database

Fast-Start Failover: ENABLED

Configuration Status:
ERROR (status updated 31 seconds ago)

DGMGRL> disable fast_start failover
Disabled.

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS (status updated 12 seconds ago)

DGMGRL> exit
[oracle@ol7-121-dg1 sql]$

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Michael Dinh - Wed, 2020-01-15 16:37

Note: The primary database is cdb1_stby because a failover was previously performed.

This also demonstrates why it may not be a good idea to use an stby suffix in the name of a standby database.
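A quick way to see the resulting mismatch between name and role on each host is to compare db_unique_name with the actual role (a simple sanity check, not part of the original output):

-- run on each instance; both columns exist in v$database
select db_unique_name, database_role from v$database;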

Review Data Guard using sqlplus:
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      CDB1_STBY
pdb_file_name_convert                string
OL7-121-DG2:(SYS@cdb1):PRIMARY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1
OL7-121-DG2:(SYS@cdb1):PRIMARY>

********************************************************************************

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter db%name

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_name_convert                 string
db_name                              string      cdb1
db_unique_name                       string      cdb1
pdb_file_name_convert                string
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> show parameter fal

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
fal_client                           string
fal_server                           string      cdb1_stby
OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY>
Review Data Guard configuration:
DGMGRL> show configuration verbose
Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
cdb1 - Physical standby database

Properties:
FastStartFailoverThreshold = '30'
OperationTimeout = '30'
TraceLevel = 'USER'
FastStartFailoverLagLimit = '30'
CommunicationTimeout = '180'
ObserverReconnect = '0'
FastStartFailoverAutoReinstate = 'TRUE'
FastStartFailoverPmyShutdown = 'TRUE'
BystandersFollowRoleChange = 'ALL'
ObserverOverride = 'FALSE'
ExternalDestination1 = ''
ExternalDestination2 = ''
PrimaryLostWriteAction = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1_stby

Database - cdb1_stby

Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1_stby'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_stby_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

********************************************************************************

DGMGRL> show database verbose cdb1

Database - cdb1

Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 2.00 KByte/s
Active Apply Rate: 0 Byte/s
Maximum Apply Rate: 0 Byte/s
Real Time Query: ON
Instance(s):
cdb1

Properties:
DGConnectIdentifier = 'cdb1'
ObserverConnectIdentifier = ''
LogXptMode = 'ASYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '0'
TransportLagThreshold = '0'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = ''
LogFileNameConvert = ''
FastStartFailoverTarget = 'cdb1_stby'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg1)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_DGMGRL)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)))'
StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
TopWaitEvents = '(monitor)'

Database Status:
SUCCESS

DGMGRL>
Validate Data Guard configuration:
DGMGRL> validate database verbose cdb1_stby

Database Role: Primary database

Ready for Switchover: Yes

Capacity Information:
Database Instances Threads
cdb1_stby 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 1

Flashback Database Status:
cdb1_stby: On

Data file Online Move in Progress:
cdb1_stby: No

Transport-Related Information:
Transport On: Yes

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared

Automatic Diagnostic Repository Errors:
Error cdb1_stby
No logging operation NO
Control file corruptions NO
System data file missing NO
System data file corrupted NO
System data file offline NO
User data file missing NO
User data file corrupted NO
User data file offline NO
Block Corruptions found NO

********************************************************************************

DGMGRL> validate database verbose cdb1

Database Role: Physical standby database
Primary Database: cdb1_stby

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

Capacity Information:
Database Instances Threads
cdb1_stby 1 1
cdb1 1 1

Temporary Tablespace File Information:
cdb1_stby TEMP Files: 3
cdb1 TEMP Files: 3

Flashback Database Status:
cdb1_stby: On
cdb1: On

Data file Online Move in Progress:
cdb1_stby: No
cdb1: No

Standby Apply-Related Information:
Apply State: Running
Apply Lag: 0 seconds (computed 1 second ago)
Apply Delay: 0 minutes

Transport-Related Information:
Transport On: Yes
Gap Status: No Gap
Transport Lag: 0 seconds (computed 1 second ago)
Transport Status: Success

Log Files Cleared:
cdb1_stby Standby Redo Log Files: Cleared
cdb1 Online Redo Log Files: Cleared
cdb1 Standby Redo Log Files: Available

Current Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1_stby) (cdb1)
1 3 4 Sufficient SRLs

Future Log File Groups Configuration:
Thread # Online Redo Log Groups Standby Redo Log Groups Status
(cdb1) (cdb1_stby)
1 3 4 Sufficient SRLs

Current Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1_stby) (cdb1)
1 50 MBytes 50 MBytes

Future Configuration Log File Sizes:
Thread # Smallest Online Redo Smallest Standby Redo
Log File Size Log File Size
(cdb1) (cdb1_stby)
1 50 MBytes 50 MBytes

Apply-Related Property Settings:
Property cdb1_stby Value cdb1 Value
DelayMins 0 0
ApplyParallel AUTO AUTO

Transport-Related Property Settings:
Property cdb1_stby Value cdb1 Value
LogXptMode ASYNC ASYNC
RedoRoutes
Dependency
DelayMins 0 0
Binding optional optional
MaxFailure 0 0
MaxConnections 1 1
ReopenSecs 300 300
NetTimeout 30 30
RedoCompression DISABLE DISABLE
LogShipping ON ON

Automatic Diagnostic Repository Errors:
Error cdb1_stby cdb1
No logging operation NO NO
Control file corruptions NO NO
SRL Group Unavailable NO NO
System data file missing NO NO
System data file corrupted NO NO
System data file offline NO NO
User data file missing NO NO
User data file corrupted NO NO
User data file offline NO NO
Block Corruptions found NO NO

DGMGRL>
Validate Data Guard connectivity from all hosts:
[oracle@ol7-121-dg2 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg2 sql]$

********************************************************************************

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> exit
[oracle@ol7-121-dg1 sql]$
Start Data Guard observer from standby:

Note: this is not good practice for the real world and is for testing purposes only.

oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:21:28 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys@cdb1_stby
Password:
Connected as SYSDBA.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1_stby - Primary database
Warning: ORA-16819: fast-start failover observer not started

cdb1 - (*) Physical standby database
Warning: ORA-16819: fast-start failover observer not started

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 16 seconds ago)

DGMGRL> start observer
Observer started
DGMGRL>
Shutdown primary host:
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:26:51 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$ logout

[vagrant@ol7-121-dg2 ~]$ logout
Connection to 127.0.0.1 closed.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant halt
==> default: Attempting graceful shutdown of VM...

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Failover succeeded:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020
Review Data Guard configuration:
[oracle@ol7-121-dg1 ~]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
Warning: ORA-16829: fast-start failover configuration is lagging

cdb1_stby - (*) Physical standby database (disabled)
ORA-16661: the standby database needs to be reinstated

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 41 seconds ago)

DGMGRL>
Start primary host:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master) 
$ vagrant status
Current machine states:

default running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$
Start listener:
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 15-JAN-2020 21:33:11

Copyright (c) 1991, 2014, Oracle. All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date 15-JAN-2020 21:33:12
Uptime 0 days 0 hr. 0 min. 0 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$
Startup mount database:
[oracle@ol7-121-dg2 sql]$ ps -ef|grep pmon
oracle 17762 17567 0 21:34 pts/0 00:00:00 grep --color=auto pmon

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jan 15 21:34:09 2020

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size 2924928 bytes
Variable Size 520097408 bytes
Database Buffers 1073741824 bytes
Redo Buffers 13848576 bytes
Database mounted.
SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Review Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration
ORA-16795: the standby database needs to be re-created

Configuration details cannot be determined by DGMGRL
DGMGRL>
Review Observer:
DGMGRL> start observer
Observer started

21:27:46.27 Wednesday, January 15, 2020
Initiating Fast-Start Failover to database "cdb1"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "cdb1"
21:27:51.99 Wednesday, January 15, 2020

21:34:56.44 Wednesday, January 15, 2020
Initiating reinstatement for database "cdb1_stby"...
Reinstating database "cdb1_stby", please wait...
Reinstatement of database "cdb1_stby" succeeded
21:35:15.40 Wednesday, January 15, 2020
Review and Validate Data Guard configuration:
DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database
Warning: ORA-16829: fast-start failover configuration is lagging

Fast-Start Failover: ENABLED

Configuration Status:
WARNING (status updated 54 seconds ago)

DGMGRL> show configuration

Configuration - my_dg_config

Protection Mode: MaxPerformance
Members:
cdb1 - Primary database
cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS (status updated 24 seconds ago)

DGMGRL> validate database cdb1

Database Role: Primary database

Ready for Switchover: Yes

DGMGRL> validate database cdb1_stby

Database Role: Physical standby database
Primary Database: cdb1

Ready for Switchover: Yes
Ready for Failover: Yes (Primary Running)

DGMGRL>

Installing Oracle GoldenGate 19c Microservices – Binaries Only

DBASolved - Wed, 2020-01-15 12:43

It has taken me some time to get around to doing more videos, mostly because I don’t like the way that I sound when I hear my voice. Maybe next time I’ll just do a silent video..lol.  Moving forward, I’ll make more of an effort to produce more videos that explain a few of […]

The post Installing Oracle GoldenGate 19c Microservices – Binaries Only appeared first on DBASolved.

Categories: DBA Blogs

Oracle Helps Customers Easily Move to Autonomous Cloud

Oracle Press Releases - Wed, 2020-01-15 07:00
Press Release
Oracle Helps Customers Easily Move to Autonomous Cloud New Oracle Enterprise Manager automates database migration and simplifies complex hybrid cloud environments

Redwood City, Calif.—Jan 15, 2020

To help customers easily move Oracle Databases to the cloud and simplify management of hybrid cloud environments, Oracle announced significant enhancements to its enterprise management platform, Oracle Enterprise Manager. The new release adds functionality that automates database migration and provides a single dashboard that improves visibility, control, and management for hybrid IT environments. 

Oracle Enterprise Manager’s migration capabilities provide unprecedented flexibility and ease of use to accelerate and simplify the transition to the cloud. Since most large organizations have to move multiple databases to the cloud over an extended period of time, it’s critical to have a cloud migration solution that eliminates the timing and pricing pressures typical of other vendors’ rigid migration solutions.

“As organizations move to the cloud, they are faced with complex, time-consuming, manual, error-prone migration tasks,” said Wim Coekaerts, senior vice president, Software Development, Oracle. “Only Oracle provides Autonomous Cloud services, as well as the tools and migration services to help customers easily move to the cloud. Oracle Enterprise Manager removes the complexity with highly automated, guided migrations and provides a single dashboard for easily managing hybrid cloud environments.”

“IDC’s research shows that well over 90 percent of major enterprises rely on a mix of on-premises IT, dedicated cloud environments and public cloud services, and are seeking efficient ways to onboard, monitor, and manage across these hybrid environments,” said Mary Johnston Turner, IDC Research vice president for Cloud Management. “Enterprise cloud management leaders tell us that analytics is their number one priority, since the scale and complexity of hybrid and multi-cloud operations requires robust automation that is informed by deep performance and optimization intelligence.”

Built for Hybrid Environments

Oracle Enterprise Manager provides enhancements in three key areas to help enterprises more easily manage hybrid database environments, including:

  • Intelligent Analytics: New intelligent analytics provided by the Exadata Warehouse enable users to maximize performance and utilization of Oracle Database and Exadata environments on-premises or in the cloud via improved capacity planning and forecasting. Additionally, the new version improves visibility of the entire hybrid estate through comprehensive monitoring and management for Oracle’s latest technology, including Autonomous Database and Exadata Cloud Service.

  • Comprehensive Lifecycle Automation and Control: Advancements in lifecycle automation and control enable enterprises to easily adopt Autonomous Database and Exadata Cloud Service and improve their security posture.

  • Mobility and Security: New comprehensive security controls include fleet maintenance support for Transparent Data Encryption, improved compliance monitoring, fine-grained control of on-premises fleets, and new security standards for Oracle Database 18c and 19c.  The new functionality also provides access to a new mobile app and new Grafana plug-in for rich visualization of Oracle Enterprise Manager data.

Additionally, Oracle is expanding deployment and access choices for Oracle Enterprise Manager. DBAs can now deploy Oracle Enterprise Manager on Oracle Cloud Infrastructure using Oracle best-practices for high availability, capitalizing on Oracle Enterprise Manager features while enjoying the benefits of a cloud deployment.

In addition, Oracle announced today that Oracle Enterprise Manager has been certified by the Center for Internet Security Benchmarks to compare the configuration status of Oracle Databases against the consensus-based best practice standards contained in the Oracle 12c Benchmark v2.1.0, Level 1- RDBMS. Organizations that use Oracle Enterprise Manager can now ensure that the configurations of their critical assets align with the CIS Benchmarks consensus-based practice standards.

Organizations Benefit from Highly Automated Capabilities, Easy Migration to Cloud

“We depend on Oracle Enterprise Manager to optimize our Oracle Database and Exadata fleet, which provides a mission-critical shared service for all of our most important business functions,” said Jones John, Database Services Manager, Technology and Innovation Division at Link Group. “The latest release of Oracle Enterprise Manager allows us to adopt the newest Exadata X8 environments without delay, and to continue to use Oracle Enterprise Manager’s comprehensive management automation capabilities across our entire hybrid database fleet.”

“Our key public sector and commercial customers and our own experts use Oracle Enterprise Manager every day to manage their Oracle Database and Exadata fleet,” said Erik Benner, Vice President, Transformation at Mythics, an Oracle Cloud partner. “The new Oracle Enterprise Manager functionality to ease migration to Autonomous Database and to apply machine learning analytics to their Oracle Enterprise Manager data is precisely what is needed to help ensure they can continue to operate seamlessly across their entire Database fleet.”

Contact Info
Victoria Brown
Oracle
+1.650.850.2009
victoria.brown@oracle.com
Sara Zick
The Hatch Agency for Oracle
+1.646.209.8726
szick@thehatchagency.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Victoria Brown

  • +1.650.850.2009

Sara Zick

  • +1.646.209.8726

Javascript in ANT

Darwin IT - Wed, 2020-01-15 04:37
Earlier I wrote about an ANT script to scan JCA adapter files in your project's home, Subversion working copy or GitHub local repo.

In my current project we use sensors to kick off message-archiving processes, without cluttering the BPEL process. I'm not sure I would do it like that on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. If you have found the project folders, like I did in the JCA scan script, you can search for the *_sensor.xml files.

So in a few hours I had a basic script. Now, in a second iteration, I would like to know what sensorActions the sensors trigger. For that I need to interpret the accompanying *_sensorAction.xml file. Therefore, based on the found sensor filename, I need to determine the name of the sensor action file.

The first step to that is to figure out how to do a substring in ANT. With a quick Google on "ant property substring", I found a nice Stack Overflow thread with a nice example of an ANT script definition based on JavaScript:
  <scriptdef name="substring" language="javascript">
<attribute name="text"/>
<attribute name="start"/>
<attribute name="end"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var start = attributes.get("start");
var end = attributes.get("end") || text.length();
project.setProperty(attributes.get("property"), text.substring(start, end));
]]>
</scriptdef>

And that can be called like:
    <substring text="${sensor.file.name}" start="0" end="20"   property="sensorAction.file.name"/>
<echo message="Sensor Action file: ${sensorAction.file.name1}"></echo>

The JavaScript substring() function is zero-based, so the first character is indexed by 0.
Not every sensor file name has the same length: the file is named after the BPEL file that it is tied to. So to get the base name, the part without the "_sensor.xml" postfix, we need to determine the length of the filename. A script that determines that can easily be extracted from the script above:
  <scriptdef name="getlength" language="javascript">
<attribute name="text"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var length = text.length();
project.setProperty(attributes.get("property"), length);
]]>
</scriptdef>

Perfect! Using this I could create the logic in ANT to determine the sensorAction file name. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strength of the more appropriate language at hand:
  <!-- Script to get the sensorAction filename based on the sensor filename.
       1. Cut the extension "_sensor.xml" from the filename.
       2. Add "_sensorAction.xml" to the base filename.
  -->
  <scriptdef name="getsensoractionfilename" language="javascript">
    <attribute name="sensorfilename"/>
    <attribute name="property"/>
    <![CDATA[
      var sensorFilename = attributes.get("sensorfilename");
      var sensorFilenameLength = sensorFilename.length();
      var postfixLength = "_sensor.xml".length();
      var sensorFilenameBaseLength = sensorFilenameLength - postfixLength;
      var sensorActionFilename = sensorFilename.substring(0, sensorFilenameBaseLength) + "_sensorAction.xml";
      project.setProperty(attributes.get("property"), sensorActionFilename);
    ]]>
  </scriptdef>
And then I can get the sensorAction filename as follows:
    <getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
    <echo message="Sensor Action file: ${sensorAction.file.name}"></echo>

Superb! I found ANT a powerful language/tool already, but with a few simple JavaScript snippets you can extend it easily.
Notice, by the way, the use of XSLT in the Scan JCA adapters files article. You can read XML files as properties, but to do that conveniently you need to transform a file like the sensors.xml in a way that lets you easily reference the properties following the element hierarchy. This is also explained in the Scan JCA adapters files article.
I'll go further with my sensors scan script. Maybe I'll write about it when done.

@DATE, @DATENOW … Date functions in GoldenGate

DBASolved - Tue, 2020-01-14 11:59

Dates are always fun to play with when it comes to the Oracle Database, much less any other relational database. Dates are used for many things in a wide range of applications and schemas. You have birthdays, ship dates, order dates, registration dates, etc…. You get the picture. In the Oracle Database you can […]

The post @DATE, @DATENOW … Date functions in GoldenGate appeared first on DBASolved.

Categories: DBA Blogs

Largest Russian Footwear Retailer Kicks Manual Assortment Planning to the Curb with Oracle

Oracle Press Releases - Tue, 2020-01-14 11:00
Press Release
Largest Russian Footwear Retailer Kicks Manual Assortment Planning to the Curb with Oracle ZENDEN Group plans to increase business margin by 3.5 percent while reducing cross-store replenishment by 50 percent

NRF 2020 Retail’s Big Show, New York City—Jan 14, 2020

Popular Russian footwear chain, ZENDEN Group, deployed Oracle Retail to drive inventory productivity and elevate the customer experience with better inventory placement. Today, ZENDEN leads the Russian footwear market on profit per square meter, engaging about 3 million loyalty cardholders and having more than 74.4 million people visiting its stores. In the last five years, ZENDEN has sold 42 million pairs of shoes. With Oracle Retail, ZENDEN Group shifted from a manual planning process to adopt automated demand-driven assortment planning. The new process helps to ensure the right shoes are always in stock for its customers.

“With the rapid growth of the brand, we required a modern solution that could optimize the process of forecasting and planning assortment. For us, those were the most important factors that influenced our decision. The new Oracle Retail solution provides the common environment for all participants of the process and also allows us to make the top-down and bottom-up plans,” said Vitaly Fomin, project director, ZENDEN Group. “Due to high accuracy and the new forecasting tool which uses machine learning, we can more effectively meet consumer demand in our stores by providing better assortment availability. In the long term this helps to reduce the volume of clearances and to increase the overall margin of the business.”

“Before the implementation, our team was responsible for shipments into over 400 stores that we managed through a highly manual and time-consuming process using big and unsynchronized spreadsheets. Now we have a universal solution with a single user interface and seamless automation of our best practices, which gives us the capacity to think more strategically,” said Victoria Rakhimzhanova, planning department director, ZENDEN Group.

With Oracle Retail, the teams have standardized 90 percent of the planning process in one central system. ZENDEN Group implemented the assortment planning solution Oracle Retail Assortment Planning; the demand forecasting solution Oracle Retail Demand Forecasting; and Veltio’s new Product Forecasting on the Oracle Retail Predictive Application Server platform. Veltio, a Gold level member of Oracle PartnerNetwork (OPN), provided implementation services on the project.

“Our recent global consumer research showed that it’s no longer good enough for retailers to simply sell products. Retailers need to get better at acquiring the right customers and placing assortment and inventory where the customer expects to engage with the brand,” said Mike Webster, senior vice president and general manager, Oracle Retail. “By leveraging the machine learning embedded in our Planning and Optimization solutions, ZENDEN gains a single view of the assortment and can now orchestrate the supply chain to continue to drive margin and reduce costs.”

Contact Info
Kaitlin Ambrogio
Oracle Retail PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Zenden

Zenden Group is a diversified, multidisciplinary Russian holding company operating on a federal scale, with 22 years of history. The company manages retail chains, modern shopping centers, foreign representative offices and a number of social projects. Thanks to the active use of the world’s best practices and the latest technologies in footwear production, and to highly qualified management and staff, the holding annually delivers strong production and financial results, and strives to become the most dynamic, efficient and innovative company in Russian shoe retail.

About Veltio

Veltio is one of the world leaders in the field of analytics and optimization of retail processes, solving complex business problems using machine learning systems, data mining and decision support based on Oracle Retail Planning and Optimization solutions. For more information, visit our website http://www.veltio.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Who are Dynamic Search Ads most useful for?

VitalSoftTech - Tue, 2020-01-14 09:53

Here is a question for you. Who do you think, from among the given options, has the most use for Dynamic Search Ads? A website that positions its ads strategically in the way its audiences’ eyes move. An international clothing retailer that operates online with dynamically changing inventory every season. A local eatery with […]

The post Who are Dynamic Search Ads most useful for? appeared first on VitalSoftTech.

Categories: DBA Blogs

Drop Column bug

Jonathan Lewis - Tue, 2020-01-14 07:22

When I was a child I could get lost for hours in an encyclopedia because I’d be looking for one topic, and something in it would make me want to read another, and another, and …

The same thing happens with MOS (My Oracle Support) – I search for something and the search result throws up a completely irrelevant item that looks much more interesting, so I follow a hyperlink, which mentions a couple of other notes, and a couple of hours later I can’t remember what I had started looking for.

Today’s note is a side effect of that process. A comment made yesterday about count(*)/count(1) referenced Oracle bug “19450314: UNNECESSARY INVALIDATIONS IN 12C”, and when I searched MOS for more information on this bug I discovered bug 30404639 : TRIGGER DOES NOT WORK CORRECTLY AFTER ALTER TABLE DROP UNUSED COLUMN. The impact of this bug is easy to demonstrate, and the ramifications are as follows:

Exercise extreme care with the “alter table drop column” command in 18c and above.

The problem is easy to work around, but the impact of not knowing about it could be catastrophic if your pre-production testing wasn’t quite good enough. Here’s a little demonstration script – the bug note says the problem appeared in 18.3 but I ran this test against 19.3. The script is a modified version of the SQL in the bug note:


create table t1 (c0 varchar2(30), c1 varchar2(30), c2 varchar2(30), c3 varchar2(30), c4 varchar2(30));
create table t2 (c_log varchar2(30));

create or replace trigger t1_ariu
after insert or update on t1
for each row
begin
        IF :new.c3 is not null then
                insert into t2 values (:new.c3);
        end if;
end;
/

spool drop_col_bug_18c.lst

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;

prompt  ===================================
prompt  Drop some columns in two steps then
prompt  truncate t2 and repeat the test
prompt  ===================================
 
alter table t1 set unused (c1, c2);
alter table t1 drop unused columns;

truncate table t2;

insert into t1(c3) values ('Inserting c3 - should log'); 
select * from t2;
 
insert into t1(c4) values ('Inserting c4 - should not log'); 
select * from t2;
 

The code is very simple. It creates a couple of tables and an “after row” trigger on one of them to copy one column value across to the other table on an insert or update, provided the new column value is not null.

To check that the trigger is (at least in part) behaving the code does two inserts – one which should copy a value and one which should not – and we see that the copy takes place as expected.

Now comes the critical part. We mark two of the columns in the table as unused, then drop all unused columns, truncate the second table and repeat the inserts.

If you run the test on 12.2.0.1 then you should find that the second run behaves just like the first run. If you’re running 18c or 19c be prepared for the following:


insert into t1(c3) values ('Inserting c3 - should log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

no rows selected

insert into t1(c4) values ('Inserting c4 - should not log')
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [insChkBuffering_1], [4], [4], [], [], [], [], [], [], [], [], []

This is not good – but it gets worse. If your application starts consistently breaking with an ORA-00600 error that’s going to annoy a lot of users for (we hope) a brief interval, but if your application keeps running and corrupting your data that’s a much bigger problem. Re-run the whole script (dropping the two tables first) but change it to mark just one of the two columns as unused, and you’ll get results for the second pass that look like this:


Table truncated.


1 row created.


no rows selected


1 row created.


C_LOG
------------------------------
Inserting c4 - should not log

1 row selected.

The trigger seems to “lose count” of the columns in the table (presumably it’s compiled to refer to something like “column_position = 3” and doesn’t adjust on the “drop column” – the linked bug notes on MOS refer to the problem being associated with the project to increase fine-grained dependencies) so it manages to survive with one column dropped because there’s still a “column 3” which happens now to be the column that used to be “column 4”.

Workaround

There is a simple workaround if you run into this problem after modifying your production system (and before you’ve corrupted a huge amount of data): recompile the trigger manually immediately after the drop completes: “alter trigger t1_ariu compile”.
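
A minimal sketch of the safe sequence, using the object names from the test script above:

alter table t1 set unused (c1, c2);
alter table t1 drop unused columns;

-- recompile any row trigger on the table immediately, before any new DML arrives
alter trigger t1_ariu compile;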

Refinement

The problem seems to appear only if the following two conditions are true:

  • you use a two-step approach to dropping a column, viz: set unused then drop. If you simply issue “alter table t1 drop column c1” (with or without a “checkpoint NNN”) then the problem does not appear; the two forms are contrasted in the sketch just after this list. It’s a great shame that in the past I’ve given advice that setting columns unused and dropping them later is a better option than doing an immediate drop.
  • you drop columns that appear earlier in the table than the highest-position column mentioned in the trigger. But this isn’t something you should gamble on, particularly since the workaround is so easy to implement, because the order the columns appear in the table declaration isn’t necessarily the internal column ordering, so you might get it wrong (not that I’ve tried to test for that threat) – and what about any selective materialized view logs where you don’t explicitly create triggers and might forget to cater for them?
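
To make the first point concrete, here is a minimal contrast of the two forms, using the test table and trigger from above (only the two-step form exposed the problem, as described above):

-- single-step drop: the trigger keeps working correctly
alter table t1 drop column c1;

-- two-step drop: the form that hits the bug, so recompile the trigger afterwards
alter table t1 set unused (c1);
alter table t1 drop unused columns;
alter trigger t1_ariu compile;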

I don’t expect anyone to be dropping columns in production systems with any great frequency, and you would expect such a significant operation to be tested quite carefully, but it’s easy to envisage a scenario where the testing might be split into two pieces viz:

  1. test the application on a pre-prod version of the database where a table has been created as a subset of the production data without the column that’s due to be dropped
  2. test how long it takes to execute the actual drop on a (minimal) recovered backup of production, but don’t test the new production code on the resulting table.

Sometimes it’s easy to overlook things that “obviously” won’t need testing, especially when it’s something that has always worked in the past with no special treatment required.

Footnote

If you try running this model on LiveSQL you’ll find that the code stops and the web page reports “Error: Internal Server Error” so you can’t tell that the problem is exactly the same there – but it seems quite likely that it is.

Given how easy it is to bypass the problem I haven’t bothered to do any further research on the issue – is it only related to insert and update triggers, do they have to be “after row” for the update, and what about “before row” delete triggers (with materialized view logs in mind)?

 

A Refresh-able URL

Jim Marion - Mon, 2020-01-13 22:41

I find development to be an iterative process. I often create a shell of a program (module, component, page, etc.), just enough to test, and then run my first test. I then iteratively enhance the program (module, component, page, etc.), verifying and validating each change. For online PeopleSoft pages and components, this often means pressing the browser refresh button a LOT! But here is the problem: each time I refresh a PeopleSoft component, PeopleSoft reverts to the search page. After selecting a search value, I'm then directed to the first page in the component. What if I'm testing a different page? Is it possible to craft a URL to bypass component search and open a different page within the component? Yes! We can bypass component search by placing level 0 search key field names and values in the URL. Likewise, we can specify the starting page by placing the Page=... parameter in our URL. But if you don't know the component's search keys and the target page's name, do you have to find them in Application Designer? Fortunately, no. Every PeopleSoft page contains a JavaScript variable named strCurrUrl. This variable contains the current full URL, including search keys and Page parameter. Here is an example of a URL we might use to open the Job Earnings Distribution page of the JOB_DATA component for an employee with the ID KU0001:

http://hr92u030.example.com:8000/psp/ps/EMPLOYEE/HRMS/c/ADMINISTER_WORKFORCE_(GBL).JOB_DATA.GBL?EMPLID=KU0001&EMPL_RCD=0&PAGE=JOB_DATA_ERNDIST

For the last several years, we have been showing customers a simple JavaScript console trick to move the strCurrUrl JavaScript variable into the browser's address bar, allowing us to bypass search and navigation when testing PeopleSoft pages. The real trick to this, though, is that Fluid and Classic behave a little differently, requiring different JavaScript. So today, I want to share a simple JavaScript fragment that you may use across Classic and Fluid. This small snippet is "smart" enough to apply the correct rule on Fluid or Classic. It also has a little logic to detect Classic content in Fluid Activity Guides and Fluid WorkCenters. Please note that using this script with a Fluid Activity Guide/WorkCenter will open Classic content outside of the WorkCenter/Activity Guide frame, but then for testing purposes, that is often desirable.

OK... so... wait, it gets better. Why copy and paste that script into the JavaScript console every time you want to invoke it? Why not just create a browser bookmark or favorite to accomplish this task (also known as a bookmarklet)? Here is the same script in bookmark form. Simply drag this hyperlink into your browser's bookmarks bar and you have a bookmark you may use on almost any PeopleSoft component to get a refresh-able URL.

PS Refresh-able

Are you interested in more tips like this? Jim Marion shares tons of them in his training classes. Register today for one of our upcoming classes!

Oracle Delivers Modern Retail in the Cloud

Oracle Press Releases - Mon, 2020-01-13 11:00
Press Release
Oracle Delivers Modern Retail in the Cloud Powered by Oracle Cloud Infrastructure, Oracle Retail provides customers the ability to innovate faster and more securely

NRF 2020 Retail’s Big Show, New York City—Jan 13, 2020

An increasingly fierce retail market requires new approaches to engaging customers and satisfying their demands with efficiency and elegance. Retailers need the transparency and flexibility to shift with the needs of their customers and business—whether those interactions are happening online, in-store or in the spaces in-between, such as buying online and picking up in-store (BOPIS).

Everything from inventory to experiences must live in harmony and that requires brands to innovate faster, have intelligence at the tips of their fingers, and use data science to put customers at the center of decision making. Yesterday’s cloud infrastructure can’t meet the demands of today’s customers. Providing retailers the modern platform they need to deliver on their brand promises, Oracle’s core retail operations portfolio is now powered by Oracle's Gen 2 cloud.

Oracle Cloud Infrastructure is a purpose-built, best-in-class platform for running the most important enterprise applications with unmatched performance, security, and cost. Only Oracle’s Gen 2 Cloud runs Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud provides a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, artificial intelligence (AI), and blockchain.

“The retail enterprise is becoming increasingly complex with a slim margin for error when it comes to maintaining customer loyalty,” said Jeff Warren, vice president, Oracle Retail. “It’s more critical than ever that retailers can operate with agility and next-level intelligence on everything from planning to inventory management to customer acquisition. This can only be achieved with a next-level cloud platform, like Oracle Cloud Infrastructure.”

Customers are already seeing the advantages. For example, Gap Inc. partnered with Oracle to deploy Oracle Retail Merchandising Cloud Service and Oracle Retail Integration Cloud Service, powered by Oracle Cloud Infrastructure, to drive operational agility and furnish the Banana Republic business teams with better intelligence.

Oracle Retail applications leverage Oracle Cloud’s highly fault-tolerant regions to protect customers from unexpected outages. Oracle’s cloud applications can tolerate network, power, or hardware outages and continue to provide service without missing a beat. Additionally, the applications are architected with redundancy at all levels. If a particular component fails, another is there to take its place without disruption to the services. Oracle’s focus on continuous service versus eventual recovery helps reduce the risk and magnitude of lost business and customers.

Contact Info
Kaitlin Ambrogio
Oracle Retail PR
+1.781.434.9555
kaitlin.ambrogio@oracle.com
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kaitlin Ambrogio

  • +1.781.434.9555

Collections

Jonathan Lewis - Mon, 2020-01-13 08:31

This is a note I drafted in September 2015 and only rediscovered a couple of days ago while searching for something I was sure I’d written about collections and/or table functions. The intention of collections and table functions is that they should behave like tables when you use them in a query – but there are cases where a real table and something cast to a table() aren’t treated the same way by the optimizer – and this 4-year old note (which is still valid in 2020 for 19c) is one of those cases.

 

There was a question – with test case – on Oracle-L recently [ed: now more than 4 years ago] about the behaviour of a query that changed plans as you switched from using a global temporary table to a collection – why was Oracle doing something inefficient with the collection. The answer was: “Bad luck, it’s a limitation in the optimizer”.  (Sub-text: collections are a pain).

The test case was short and simple so I thought I’d post it – with an h/t to Patrick Jolliffe, who presented the problem, and to Timur Akhmadeev and Stefan Koehler, who explained the problems.

Here’s the script (with a little cosmetic editing) to create the necessary objects and data:

rem
rem     Script:         collections.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Sep 2015
rem
rem     Last tested 
rem             19.3.0.0
rem             12.2.0.1
rem             12.1.0.2
rem

create or replace type number_table is table of number;
/

create table test_objects as select * from all_objects;
create /* unique */ index test_objects_idx on test_objects(object_id);

exec dbms_stats.gather_table_stats(null, 'test_objects');

create global temporary table gtt_test_objects (object_id number);
insert into gtt_test_objects values (1);


In this example I’ve created a type which is a simple table of number. In a more general case you might create a simple object type, and then a type that was a table of that object type, then you might create a function that returned a variable of that table type, or a function that was declared to return the table type “pipelined” and uses the “pipe row” instruction in the code to return one value of the simple object type at a time. Whichever variation you used you could then use the table() operator to tell Oracle to treat the content of the table type as if it were a relational table. (In recent versions of Oracle the table() operator is redundant).
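
For illustration only – this function is not part of the test case that follows, and the name is made up – a pipelined variant over the number_table type created above might look something like this:

create or replace function f_numbers(p_limit in number)
return number_table pipelined
as
begin
        for i in 1 .. p_limit loop
                pipe row (i);
        end loop;
        return;
end;
/

select column_value from table(f_numbers(3));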

Here’s the first query, which uses the global temporary table in an “IN” subquery, followed by its execution plan – again with a little cosmetic editing and the addition of query block names across the board:


prompt  ==================================
prompt  Query using global temporary table
prompt  ==================================

select  
        /*+ qb_name(main) */ 
        null 
from    (
        select
                /*+ qb_name(inline) */
                distinct object_id 
        from    test_objects
        ) 
where   object_id in (
                select 
                        /*+
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        gtt_test_objects.object_id
                from
                        gtt_test_objects 
        )
;


-------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                  |      1 |        |      0 |00:00:00.01 |       5 |       |       |          |
|   1 |  VIEW                 | VM_NWVW_1        |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   2 |   SORT UNIQUE NOSORT  |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   3 |    NESTED LOOPS       |                  |      1 |      1 |      0 |00:00:00.01 |       5 |       |       |          |
|   4 |     SORT UNIQUE       |                  |      1 |      1 |      1 |00:00:00.01 |       3 |  2048 |  2048 | 2048  (0)|
|   5 |      TABLE ACCESS FULL| GTT_TEST_OBJECTS |      1 |      1 |      1 |00:00:00.01 |       3 |       |       |          |
|*  6 |     INDEX RANGE SCAN  | TEST_OBJECTS_IDX |      1 |      1 |      0 |00:00:00.01 |       2 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"="GTT_TEST_OBJECTS"."OBJECT_ID")

As you can see I’ve set statistics_level to all, and used dbms_xplan.display_cursor() to pull the actual execution plan from memory. This plan tells us that the optimizer unnested the IN subquery to generate a unique set of values and used that unique set to drive a nested loop join into the test_objects table (with an index-only probe). Moreover, before this step, the optimizer used complex view merging and cost-based query transformation to postpone the “distinct” from the original query to do the join before distinct. The E-rows at operation 5 also tells us that the optimizer “knew” that there was only one row in the GTT – it took note of my cardinality() hint.

Now we replace the gtt_test_objects table with the collection – casting it to a table() and giving Oracle the same cardinality() hint – as follows:


select 
        /*+ 
                qb_name(main)
--              no_use_hash_aggregation(@sel$1)
        */ 
        null
from    (
        select  
                /*+ inline */
                distinct object_id 
        from    test_objects
        )
where   object_id in (
                select 
                        /*+ 
                                qb_name(subq)
                                cardinality(gtt_test_objects 1) 
                        */ 
                        column_value object_id
                from
                        table(number_table(1)) gtt_test_objects
        )
;

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.08 |     132 |       |       |          |
|   1 |  MERGE JOIN SEMI                        |                  |      1 |      1 |      0 |00:00:00.08 |     132 |       |       |          |
|   2 |   SORT JOIN                             |                  |      1 |  56762 |      1 |00:00:00.08 |     132 |  1470K|   606K| 1306K (0)|
|   3 |    VIEW                                 |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |       |       |          |
|   4 |     HASH UNIQUE                         |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  4122K|  2749K| 3418K (0)|
|   5 |      INDEX FAST FULL SCAN               | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
|*  6 |   SORT UNIQUE                           |                  |      1 |      1 |      0 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   7 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("OBJECT_ID"=VALUE(KOKBF$))
       filter("OBJECT_ID"=VALUE(KOKBF$))

The second plan is completely different. The optimizer has unnested the subquery to produce a join, but instead of using the unique set of values that it generated from the collection to drive a nested loop, it has decided to do a merge semi-join, which has entailed an expensive fast full scan of the test_objects_idx index to acquire all the key values first.

I tried to make the optimizer use the collection to drive a nested loop, adding some carefully targeted hints to force the join order and dictate a nested loop join with pushed predicate – but the optimizer wouldn’t push the “obvious” join predicate and continued to do an index fast full scan and sort of the test_objects_idx. If you’re interested, here are the hints and the resulting plan:

/*+
        qb_name(main)
        leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
        use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)
        push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
*/

-------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name             | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                  |      1 |        |      0 |00:00:00.03 |     132 |       |       |          |
|   1 |  NESTED LOOPS                           |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   2 |   SORT UNIQUE                           |                  |      1 |      1 |      1 |00:00:00.01 |       0 |  2048 |  2048 | 2048  (0)|
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                  |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|*  4 |   VIEW                                  |                  |      1 |      1 |      0 |00:00:00.03 |     132 |       |       |          |
|   5 |    SORT UNIQUE                          |                  |      1 |  56762 |  56762 |00:00:00.03 |     132 |  2604K|   728K| 2314K (0)|
|   6 |     INDEX FAST FULL SCAN                | TEST_OBJECTS_IDX |      1 |  56762 |  56762 |00:00:00.01 |     132 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("OBJECT_ID"=VALUE(KOKBF$))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 6 (U - Unused (1))
---------------------------------------------------------------------------
0 - SEL$102722C0
- qb_name(subq)

1 - SEL$8969F1C9
- leading( @sel$8969f1c9 kokbf$0@sel$2 "from$_subquery$_001"@main)
- qb_name(main)

1 - SEL$8969F1C9 / from$_subquery$_001@MAIN
U - push_pred(@sel$8969f1c9 "from$_subquery$_001"@main)
- use_nl( @sel$8969f1c9 "from$_subquery$_001"@main)

5 - SEL$1
- inline

In the previous post we had a “NOT IN” subquery against a collection/pipelined table function that couldn’t even be unnested (even in 19c); in this example we have an IN subquery that does unnest but then can’t drive a nested loop efficiently because the optimizer won’t push the collection values into the distinct view, and won’t do complex view merging to avoid having to do that join predicate pushdown. Collections and table() functions just don’t play nicely with the optimizer!

In fact this plan also shows one of those “generic” approaches in the optimizer that allows a human operator to see a special case that could have been further optimized: if the optimizer had used a sort unique rather than a hash unique at operation 4 then the sort join at operation 2 would have been redundant – with an overall reduction in memory and CPU usage that I managed to get in a separate test by adding the hint /*+ no_use_hash_aggregation(@sel$1) */ to the query. (Since operation 6 is also a sort unique the merge join semi could, in principle, have become a merge join with no risk of producing duplicates – but the semi-join code path is probably a little more efficient, anyway, and a balance has to be struck between the risk of introducing complexity for a special case and the potential frequency and scale of the benefit it might produce.)
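
For reference, the no_use_hash_aggregation() hint is already present – commented out – in the query text above, so the separate test simply amounted to uncommenting it, along the lines of:

select
        /*+
                qb_name(main)
                no_use_hash_aggregation(@sel$1)
        */
        null
from    (
        select  /*+ inline */ distinct object_id
        from    test_objects
        )
where   object_id in (
                select  /*+ qb_name(subq) cardinality(gtt_test_objects 1) */
                        column_value object_id
                from    table(number_table(1)) gtt_test_objects
        );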

Conclusion

You can often see collections and table functions behaving very like tables when you use them in the from clause of queries – but there are some restrictions on the transformations that the optimizer can use when your query isn’t using “real” tables.

Footnote

There are many ways that you can play around with this starting model to investigate where the boundaries might be. For example, if I make the index on test_objects unique the plan changes to a simple nested loop driven by the unnested collection (there’s no longer a non-mergeable view in the way). If I eliminate the distinct from the original query the same thing happens (for the same reason). If I force the join order to start with the collection (using the leading() hint) but don’t hint a nested loop, Oracle produces (at least in my case) a hash join with a Bloom filter that minimised the memory and CPU requirement.
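
For example, the first variation (the unique index) only needs the comment markers removed from the original “create index” statement in the set-up script – a minimal sketch:

drop index test_objects_idx;
create unique index test_objects_idx on test_objects(object_id);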

I mentioned at the start that Timur Akhmadeev and Stefan Koehler supplied explanations for what was going on behind the scenes. Critically Stefan also referenced one of two posts from the Oracle blog on complex view merging and its restrictions: part 1, part 2.

The related problem that led me to re-discover and complete this note is at this URL (published a couple of days ago).

12.2.0.1 And Later Support (Limited) Extended Stats On Virtual columns / Column Groups of Expressions

Randolf Geist - Mon, 2020-01-13 05:14
I do have a demo as part of my optimizer-related workshops that shows the restriction / limitation of DBMS_STATS not supporting extended statistics on virtual columns / groups of expressions, for example the combination of both expressions and column groups, like ((TRUNC(COL1)), (TRUNC(COL2))).

Surprisingly, when following a certain sequence of operation, this starts working (to some degree) from 12.2.0.1 on.

The official documentation up to and including 19c still mentions this as a restriction, and since it doesn't work when explicitly referencing virtual columns (see the test case for what exactly I mean by this) I assume this is more like a side effect / unintended feature.

Nevertheless, the optimizer happily picks up this additional information and comes up with improved estimates when having a combination of skew and correlation on expressions, for example.

The following test case shows the change in behaviour from 12.2.0.1 on:

set echo on linesize 200 trimspool on trimout on tab off pagesize 999 timing on

alter session set nls_language = american;

drop table t1;

purge table t1;

-- Initialize the random generator for "reproducible" pseudo-randomness
exec dbms_random.seed(0)

-- ATTR1 and ATTR2 are both skewed and correlated
create table t1
as
select
        rownum as id
      , trunc(dbms_random.value(1, 1000000000000)) as fk
      , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
      , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
      , rpad('x', 100) as filler
from
        dual
connect by
        level <= 1000000
;

-- Histograms on ATTR1 and ATTR2 for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

-- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

-- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

-- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
-- Officially this isn't allowed and errors out up to and including 12.1.0.2
-- ORA-20001: Error when processing extension - missing right parenthesis
-- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
-- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
-- Remove the previous step and it will error out:
-- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

-- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
-- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
-- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
explain plan for
select
count(*)
from
t1 a
where
trunc(attr1) = 1
and trunc(attr2) = 1;

select * from table(dbms_xplan.display(format => 'TYPICAL'));

-- But: Explicitly referencing a virtual column doesn't work
-- This will error out:
-- ORA-20001: Error when processing extension - virtual column is referenced in a column expression

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

-- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')


alter table t1 add (trunc_attr1 as (trunc(attr1)));

alter table t1 add (trunc_attr2 as (trunc(attr2)));

-- This works and is supported in all versions supporting virtual columns
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

-- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
-- even in 12.2.0.1 and later
-- But: Works in 12.2.0.1 and later if the call to
-- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
-- above is removed, because the extension then already exists (!)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')

-- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

And here is the output I get when using 12.1.0.2:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:39:00 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl121 DELLXPS13 368 46472 12.1.0.2.0 20200110 6908 59 15536:4996 00007FFA110E9B88 00007FFA12B6F1E8



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.06
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:13.07
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.43
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.08
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.51
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this is isn't allowed and errors out up to and including 12.1.0.2
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but documentation still lists this as restriction, including 19c
SQL> -- But: It's only supported when there is a corresponding virtual column already existing for each expression used - so Oracle can map this expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

SQL> -- Although it doesn't look like officially supported / possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Pre 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expression in place show approx. 900K rows as estimate which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.02
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 8 | 4777 (1)| 00:00:01 |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
|* 2 | TABLE ACCESS FULL| T1 | 807K| 6305K| 4777 (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.11
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.13
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.09
SQL>
SQL> -- Remove this and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from above call)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis



Elapsed: 00:00:00.02
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.57
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 34634
ORA-06512: at line 1


Elapsed: 00:00:00.06
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - missing right parenthesis


Elapsed: 00:00:00.04
SQL>

And that is what I get from 12.2.0.1 on, here using 19.3:

SQL*Plus: Release 11.2.0.1.0 Production on Fri Jan 10 12:35:18 2020

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production


USERNAME INST_NAME HOST_NAME SID SERIAL# VERSION STARTED SPID OPID CPID SADDR PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
CBO_TEST orcl19c DELLXPS13 146 48961 19.0.0.0.0 20200110 5648 53 7260:13644 00007FF91687B3D8 00007FF91656B858



SQL>
SQL> alter session set nls_language = american;

Session altered.

Elapsed: 00:00:00.00
SQL>
SQL> drop table t1;

Table dropped.

Elapsed: 00:00:00.02
SQL>
SQL> purge table t1;

Table purged.

Elapsed: 00:00:00.10
SQL>
SQL> -- Initialize the random generator for "reproducible" pseudo-randomness
SQL> exec dbms_random.seed(0)

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00
SQL>
SQL> -- ATTR1 and ATTR2 are both skewed and correlated
SQL> create table t1
2 as
3 select
4 rownum as id
5 , trunc(dbms_random.value(1, 1000000000000)) as fk
6 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr1
7 , case when rownum <= 900000 then 1 else ceil(rownum / 10) end as attr2
8 , rpad('x', 100) as filler
9 from
10 dual
11 connect by
12 level <= 1000000
13 ;

Table created.

Elapsed: 00:00:11.89
SQL>
SQL> -- Histograms on ATTR1 and ATTR2 for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.68
SQL>
SQL> -- In addition histograms / extended stats on (ATTR1, ATTR2) for covering both skew and correlation
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (attr1, attr2)')

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.75
SQL>
SQL> -- Assuming a query expression TRUNC(ATTR1) = x and TRUNC(ATTR2) = x create a histogram on each expression separately for representing skew properly
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.43
SQL>
SQL> -- In addition histogram / extended stats on (TRUNC(ATTR1), TRUNC(ATTR2)) for covering both skew and correlation on the expressions
SQL> -- Officially this isn't allowed and errors out up to and including 12.1.0.2:
SQL> -- ORA-20001: Error when processing extension - missing right parenthesis
SQL> -- Starting with 12.2.0.1 this is successful, but the documentation still lists this as a restriction, including 19c
SQL> -- But: It's only supported when a corresponding virtual column already exists for each expression used - so Oracle can map each expression back to a virtual column
SQL> -- Remove the previous step and it will error out:
SQL> -- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:05.24
SQL>
SQL> -- Although it doesn't look officially supported / is possibly just a side effect of some bug fix, the optimizer picks up the information properly
SQL> -- Versions before 12.2.0.1 don't recognize the correlation and show approx. 800K rows as estimate
SQL> -- 12.2.0.1 and later with the column group on the expressions in place show approx. 900K rows as estimate, which is (almost) correct
SQL> explain plan for
2 select
3 count(*)
4 from
5 t1 a
6 where
7 trunc(attr1) = 1
8 and trunc(attr2) = 1;

Explained.

Elapsed: 00:00:00.01
SQL>
SQL> select * from table(dbms_xplan.display(format => 'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |     8 |  4797   (1)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |     8 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |   893K|  6977K|  4797   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter(TRUNC("ATTR1")=1 AND TRUNC("ATTR2")=1)

14 rows selected.

Elapsed: 00:00:00.10
SQL>
SQL> -- But: Explicitly referencing a virtual column doesn't work
SQL> -- This will error out:
SQL> -- ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr1))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '(trunc(attr2))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.33
SQL>
SQL> -- Remove this call and the extended stats on the virtual columns below will work in 12.2.0.1 and later (because the extension still exists from the call above)
SQL> exec dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.11
SQL>
SQL>
SQL> alter table t1 add (trunc_attr1 as (trunc(attr1)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> alter table t1 add (trunc_attr2 as (trunc(attr2)));

Table altered.

Elapsed: 00:00:00.01
SQL>
SQL> -- This works and is supported in all versions supporting virtual columns
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, trunc_attr1, trunc_attr2')

PL/SQL procedure successfully completed.

Elapsed: 00:00:03.49
SQL>
SQL> -- This errors out: ORA-20001: Error when processing extension - virtual column is referenced in a column expression
SQL> -- even in 12.2.0.1 and later
SQL> -- But: Works in 12.2.0.1 and later if the call to
SQL> -- dbms_stats.drop_extended_stats(null, 't1', '((trunc(attr1)), (trunc(attr2)))')
SQL> -- above is removed, because the extension then already exists (!)
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)')
BEGIN dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc_attr1, trunc_attr2)'); END;

*
ERROR at line 1:
ORA-20001: Error when processing extension - virtual column is referenced in a column expression
ORA-06512: at "SYS.DBMS_STATS", line 40751
ORA-06512: at "SYS.DBMS_STATS", line 40035
ORA-06512: at "SYS.DBMS_STATS", line 38912
ORA-06512: at "SYS.DBMS_STATS", line 15726
ORA-06512: at "SYS.DBMS_STATS", line 22064
ORA-06512: at "SYS.DBMS_STATS", line 22162
ORA-06512: at "SYS.DBMS_STATS", line 22232
ORA-06512: at "SYS.DBMS_STATS", line 22864
ORA-06512: at "SYS.DBMS_STATS", line 38313
ORA-06512: at "SYS.DBMS_STATS", line 39738
ORA-06512: at "SYS.DBMS_STATS", line 40183
ORA-06512: at "SYS.DBMS_STATS", line 40732
ORA-06512: at line 1


Elapsed: 00:00:00.09
SQL>
SQL> -- But: This works again from 12.2.0.1 on, not explicitly referencing the virtual columns but using the expressions covered by the virtual columns instead
SQL> exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

PL/SQL procedure successfully completed.


Elapsed: 00:00:05.32
SQL>

So, from 12.2.0.1 on it looks like extended statistics on virtual columns are supported to some degree, provided a certain sequence of operations is followed: first create extended statistics on each of the expressions used, which creates corresponding virtual columns under the covers, and afterwards create extended statistics using those expressions as part of the column group expression.

Strangely enough, when virtual columns are created explicitly for those expressions, creating a column group that explicitly references those virtual columns doesn't work - but using the expressions covered by the virtual columns instead works, even with those virtual columns created explicitly.
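
As a condensed sketch of that working sequence (reusing the same T1 table and TRUNC expressions as above - meant only to illustrate the order of the calls, not an officially documented approach):

-- 1. Extended stats / histograms on each expression separately
--    (each call creates a hidden virtual column for its expression)
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, (trunc(attr1)), (trunc(attr2))')

-- 2. Only now is the column group on the two expressions accepted in 12.2.0.1 and later,
--    because each expression can be mapped back to an already existing virtual column
exec dbms_stats.gather_table_stats(null, 't1', method_opt => 'for all columns size 1 for columns size 254 attr1, attr2, ((trunc(attr1)), (trunc(attr2)))')

-- Skipping step 1 (so no corresponding virtual column exists for the expressions) makes
-- the second call error out with
-- ORA-20001: Invalid Extension: Column group can contain only columns seperated by comma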

WooCommerce Sales Analysis Report

Nilesh Jethwa - Sun, 2020-01-12 22:04

Advanced Sales Analysis for WooCommerce: An in-depth understanding of your eCommerce data will help you propel the sales of your WooCommerce store. The Sales Analysis (Sales Summary) Dashboard is part of the WooCommerce Reporting Pro package. Summary: This is one of the most useful and necessary reports for all WooCommerce store owners. So let … Read more

Hat Tip To: InfoCaptor WooCommerce Plugins

kubectl tree - A kubectl plugin to explore ownership relationships between Kubernetes objects through ownerReferences

Pas Apicella - Sun, 2020-01-12 18:51
A kubectl plugin to explore ownership relationships between Kubernetes objects through the ownerReferences on them. To get started and install the plugin, visit this page:

https://github.com/ahmetb/kubectl-tree
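
For context, the ownerReferences that the plugin walks are ordinary metadata on every Kubernetes object, so you can inspect them directly. As a rough illustration, using one of the ReplicaSets from the output further below (exact names will differ per cluster), the following should print something like Deployment/networking-istio:

$ kubectl get replicaset networking-istio-7fcd97cbf7 -n knative-serving \
    -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'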

Install Steps

Install as follows

1. Create a script as follows

install-krew.sh

(
  set -x; cd "$(mktemp -d)" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}" &&
  tar zxvf krew.tar.gz &&
  KREW=./krew-"$(uname | tr '[:upper:]' '[:lower:]')_amd64" &&
  "$KREW" install --manifest=krew.yaml --archive=krew.tar.gz &&
  "$KREW" update
)

2. Install as follows

papicella@papicella:~/pivotal/software/krew$ ./install-krew.sh
+++ mktemp -d
++ cd /var/folders/mb/93td1r4s7mz3ptq6cmpdvc6m0000gp/T/tmp.kliHlfYB
++ curl -fsSLO 'https://github.com/kubernetes-sigs/krew/releases/download/v0.3.3/krew.{tar.gz,yaml}'
++ tar zxvf krew.tar.gz
x ./krew-darwin_amd64
x ./krew-linux_amd64
x ./krew-linux_arm
x ./krew-windows_amd64.exe
x ./LICENSE
+++ uname
+++ tr '[:upper:]' '[:lower:]'
++ KREW=./krew-darwin_amd64
++ ./krew-darwin_amd64 install --manifest=krew.yaml --archive=krew.tar.gz
Installing plugin: krew
Installed plugin: krew

...

3. On a Mac, add the following to your PATH, then source your profile file or start a new shell

export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
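
Note: the tree plugin itself still needs to be installed through krew. That step isn't shown above, but per the plugin's README it is simply:

kubectl krew install tree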

4. Check plugin is installed

$ kubectl plugin list
The following compatible plugins are available:

/Users/papicella/.krew/bin/kubectl-krew
/Users/papicella/.krew/bin/kubectl-tree

5. You can also check the plugin's usage:

$ kubectl tree --help
Show sub-resources of the Kubernetes object

Usage:
  kubectl tree KIND NAME [flags]

Examples:
  kubectl tree deployment my-app
  kubectl tree kservice.v1.serving.knative.dev my-app

6. Now that it's installed, let's see what information it displays about k8s objects and their relationships on my cluster, which has riff and knative installed

$ kubectl tree deployment --namespace=knative-serving networking-istio
NAMESPACE        NAME                                       READY  REASON  AGE
knative-serving  Deployment/networking-istio                -              8d
knative-serving  └─ReplicaSet/networking-istio-7fcd97cbf7   -              8d
knative-serving    └─Pod/networking-istio-7fcd97cbf7-z4dc9  True           8d

$ kubectl tree deployment --namespace=riff-system riff-build-controller-manager
NAMESPACE    NAME                                                    READY  REASON  AGE
riff-system  Deployment/riff-build-controller-manager                -              8d
riff-system  └─ReplicaSet/riff-build-controller-manager-5d484d5fc4   -              8d
riff-system    └─Pod/riff-build-controller-manager-5d484d5fc4-7rhbr  True           8d


More Information

GitHub Tree Plugin
https://github.com/ahmetb/kubectl-tree

Categories: Fusion Middleware
