Feed aggregator

Taking VMware Tanzu Mission Control for a test drive

Pas Apicella - Mon, 2020-02-10 19:53
You may or may not have heard of Tanzu Mission Control (TMC), part of the new VMware Tanzu offering, which helps you build, run, and manage modern apps. To find out more about Tanzu Mission Control, here is the blog link:

https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

In this blog I show how easily you can use TMC to monitor your existing k8s clusters. Keep in mind that TMC can also create k8s clusters for you, but here we will use the "Attach Cluster" part of TMC. The demo is as follows.

1. Of course you will need an account on TMC, which for this demo I already have. Once logged in you will see a home screen as follows.



2. In the right-hand corner there is an "Attach Cluster" button; click it to attach an existing cluster to TMC. Enter some cluster details. In this case I am attaching a k8s cluster on GKE and giving it the name "pas-gke-cluster".


3. Click the "Register" button which takes you to a screen which allows you to install the VMware Tanzu Mission Control agent. This is simply done by using "kubectl apply ..." on your k8s cluster which allows an agent to communicate back to TMC itself. Everything is created in a namespace called "vmware-system-tmc"



4. Once you have run the "kubectl apply .." on your cluster you can verify the status of the pods and other components installed as follows

$ kubectl get all --namespace=vmware-system-tmc

Or you could just check the status of the various pods as shown below and assume everything else was created ok
  
$ kubectl get pods --namespace=vmware-system-tmc
NAME                                     READY   STATUS      RESTARTS   AGE
agent-updater-67bb5bb9c6-khfwh           1/1     Running     0          74m
agentupdater-workload-1581383460-5dsx9   0/1     Completed   0          59s
data-protection-657d8bf96c-v627g         1/1     Running     0          73m
extension-manager-857d46c6c-zfzbj        1/1     Running     0          74m
extension-updater-6ddd9858cf-lr88r       1/1     Running     0          74m
inspection-extension-789bb48b6-mnlqj     1/1     Running     0          73m
intent-agent-cfb49d788-cq8tk             1/1     Running     0          73m
policy-sync-extension-686c757989-jftjc   1/1     Running     0          73m
policy-webhook-5cdc7b87dd-8shlp          1/1     Running     0          73m
policy-webhook-5cdc7b87dd-fzz6s          1/1     Running     0          73m
sync-agent-84bd6c7bf7-rtzcn              1/1     Running     0          73m

5. At this point, click the "Verify Connection" button to confirm the agent in your k8s cluster is able to communicate with TMC.

6. Now let's search for our cluster on the "Clusters" page as shown below.



7. Click on "pas-gke-cluster" and you will be taken to an Overview page as shown below. Ensure all green tick boxes are in place this may take a few minutes so refresh the page as needed



8. This being an empty cluster, I will create a deployment with 2 pods so we can see how TMC shows this workload in the UI. These kubectl commands should work on any cluster, as the image is on Docker Hub.

$ kubectl run pbs-deploy --image=pasapples/pbs-demo-image --replicas=2 --port=8080
$ kubectl expose deployment pbs-deploy --type=LoadBalancer --port=80 --target-port=8080 --name=pbs-demo-service
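
If you want to confirm the rollout before going back to the UI, something like this should work (the "run=pbs-deploy" label is an assumption based on what "kubectl run" generates for a deployment):

$ kubectl rollout status deployment/pbs-deploy
$ kubectl get pods -l run=pbs-deploy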

9. Test the workload (although this isn't really required).

$ echo "http://`kubectl get svc pbs-demo-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`/customers/1"
http://104.197.202.165/customers/1

$ http http://104.197.202.165/customers/1
HTTP/1.1 200
Content-Type: application/hal+json;charset=UTF-8
Date: Tue, 11 Feb 2020 01:43:26 GMT
Transfer-Encoding: chunked

{
    "_links": {
        "customer": {
            "href": "http://104.197.202.165/customers/1"
        },
        "self": {
            "href": "http://104.197.202.165/customers/1"
        }
    },
    "name": "pas",
    "status": "active"
}

10. Back in the TMC UI, click on "Workloads". You should see our deployment as shown below.


11. Click on the deployment "pbs-deploy" to see the status of the pods created as part of the deployment's replica set, plus the YAML of the deployment itself.


12. Of course this is just scratching the surface, but from the other tabs you can see the cluster nodes, namespaces, and other information as required, not just for your workloads but also for the cluster itself.




One thing to note here is that when I attach a cluster as shown in this demo, the life cycle of the cluster (for example, upgrades) can't be managed or performed by TMC. In the next post I will show how "Create Cluster" is able to control the life cycle of the cluster as well, since this time TMC will actually create the cluster for us.

Stay tuned!!!

More Information

Introducing VMware Tanzu Mission Control to Bring Order to Cluster Chaos
https://blogs.vmware.com/cloudnative/2019/08/26/vmware-tanzu-mission-control/

VMware Tanzu Mission Control
https://cloud.vmware.com/tanzu-mission-control
Categories: Fusion Middleware

AZ-104 | Microsoft Azure Certification Exam

Online Apps DBA - Mon, 2020-02-10 07:18

New Version of AZ-103 Is Now AZ-104: Microsoft has recently announced a new version of the AZ-103 exam. This new version, known as AZ-104, will apparently be introduced in late March/early April. For people who have already started preparing for AZ-103, the good news is that they will still be able to take this […]

The post AZ-104 | Microsoft Azure Certification Exam appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How SQL Server MVCC compares to Oracle and PostgreSQL

Yann Neuhaus - Sun, 2020-02-09 13:42
By Franck Pachot

.
Microsoft SQL Server implemented MVCC in 2005; it has proven to be the best approach to transaction isolation (the I in ACID) in OLTP. But are you sure that writers do not block readers with READ_COMMITTED_SNAPSHOT? I'll show here that some reads are still blocked by locked rows, contrary to the precursors of MVCC like PostgreSQL and Oracle.

For this demo, I run SQL Server 2019 RHEL image on docker in an Oracle Cloud compute running OEL7.7 as explained in the previous post. If you don’t have the memory limit mentioned, you can simply run:

docker run -d -e "ACCEPT_EULA=Y" -e 'MSSQL_PID=Express' -p 1433:1433 -e 'SA_PASSWORD=**P455w0rd**' --name mssql mcr.microsoft.com/mssql/rhel/server:2019-latest
time until docker logs mssql | grep -C10 "Recovery is complete." ; do sleep 1 ; done

Test scenario description

Here is what I’ll run in a first session:

  1. create a DEMO database
  2. (optional) set MVCC with Read Committed Snapshot isolation level
  3. create a DEMO table with two rows. One with “a”=1 and one with “a”=2
  4. (optional) build an index on column “a”
  5. update the first line where “a”=1


cat > session1.sql <<'SQL'
drop database if exists DEMO;
create database DEMO;
go
use DEMO;
go
-- set MVCC to read snapshot rather than locked current --
-- alter database DEMO set READ_COMMITTED_SNAPSHOT on;
go
drop table if exists DEMO;
create table DEMO(id int primary key, a int not null, b int);
begin transaction;
insert into DEMO values(1,1,1);
insert into DEMO values(2,2,2);
commit;
go
select * from DEMO;
go
-- index to read only rows that we want to modify --
-- create index DEMO_A on DEMO(a);
go
begin transaction;
update DEMO set b=b+1 where a=1;
go
SQL

I’ll run it in the background (you can also run it in another terminal) where it waits 60 seconds before quitting:

( cat session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( cat session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 27759
[root@instance-20200208-1719 ~]# Feb 09 17:05:43 drop database if exists DEMO;
Feb 09 17:05:43 create database DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 use DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 Changed database context to 'DEMO'.
Feb 09 17:05:43 -- set MVCC to read snapshot rather than locked current --
Feb 09 17:05:43 -- alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 17:05:43
Feb 09 17:05:43 drop table if exists DEMO;
Feb 09 17:05:43 create table DEMO(id int primary key, a int not null, b int);
Feb 09 17:05:43 begin transaction;
Feb 09 17:05:43 insert into DEMO values(1,1,1);
Feb 09 17:05:43 insert into DEMO values(2,2,2);
Feb 09 17:05:43 commit;
Feb 09 17:05:43
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)
Feb 09 17:05:43 select * from DEMO;
Feb 09 17:05:43
Feb 09 17:05:43 id          a           b
Feb 09 17:05:43 ----------- ----------- -----------
Feb 09 17:05:43           1           1           1
Feb 09 17:05:43           2           2           2
Feb 09 17:05:43
Feb 09 17:05:43 (2 rows affected)
Feb 09 17:05:43 -- index to read only rows that we want to modify --
Feb 09 17:05:43 -- create index DEMO_A on DEMO(a);
Feb 09 17:05:43
Feb 09 17:05:43 begin transaction;
Feb 09 17:05:43 update DEMO set b=b+1 where a=1;
Feb 09 17:05:43
Feb 09 17:05:43
Feb 09 17:05:43 (1 rows affected)
SQL Server default

While this session has locked the first row I’ll run the following, reading the same row that is currently locked by the other transaction:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go

This hangs until the first transaction is canceled:

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO | ts
-- read access the row that is not locked
 select * from DEMO where a=2;
go
Feb 09 17:06:42
Feb 09 17:06:42
Feb 09 17:06:42
Feb 09 17:06:42 Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Feb 09 17:06:42
Feb 09 17:06:42 -- read access the row that is not locked
Feb 09 17:06:42  select * from DEMO where a=2;
Feb 09 17:06:42
Feb 09 17:06:42 id          a           b
Feb 09 17:06:42 ----------- ----------- -----------
Feb 09 17:06:42           2           2           2
Feb 09 17:06:42
Feb 09 17:06:42 (1 rows affected)

The “Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C” message is from the first session, and only then was my foreground session able to continue. This is the worst you can encounter with the default isolation level in SQL Server, where writes and reads block each other even when not touching the same row (I read the a=2 row and only the a=1 row was locked). The reason is that I have no index for this predicate, so I have to read all rows in order to find mine:

set showplan_text on ;
go
select * from DEMO where a=2;
go

go
Feb 09 17:07:24 set showplan_text on ;
Feb 09 17:07:24
select * from DEMO where a=2;
go
Feb 09 17:07:30 select * from DEMO where a=2;
Feb 09 17:07:30
Feb 09 17:07:30 StmtText
Feb 09 17:07:30 -------------------------------
Feb 09 17:07:30 select * from DEMO where a=2;
Feb 09 17:07:30
Feb 09 17:07:30 (1 rows affected)
Feb 09 17:07:30 StmtText
Feb 09 17:07:30 ---------------------------------------------------------------------------------------------------------------------------------------------------
Feb 09 17:07:30   |--Clustered Index Scan(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F2AD8547F]), WHERE:([DEMO].[dbo].[DEMO].[a]=CONVERT_IMPLICIT(int,[@1],0)))
Feb 09 17:07:30
Feb 09 17:07:30 (1 rows affected)

Now, in order to avoid this situation, I’ll run the same but with an index on column “a”.
It was commented out in the session1.sql script, so I just re-ran everything without that comment:

( sed -e '/create index/s/--//' session1.sql ; sleep 60 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

I’m running the same, now with a 3 seconds timeout so that I don’t have to wait for my background session to terminate:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
 select * from DEMO where a=2;
 go
Feb 09 17:29:25 -- read access the row that is not locked
Feb 09 17:29:25  select * from DEMO where a=2;
Feb 09 17:29:25
Feb 09 17:29:25 Timeout expired

Here I’m blocked again like in the previous scenario because the index was not used.
I can force the index access with a hint:

-- read access the row that is not locked forcing index access
select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
go

-- read access the row that is not locked forcing index access
 select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
 go
Feb 09 17:29:30 -- read access the row that is not locked forcing index access
Feb 09 17:29:30  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:30
Feb 09 17:29:30 id          a           b
Feb 09 17:29:30 ----------- ----------- -----------
Feb 09 17:29:30           2           2           2
Feb 09 17:29:30
Feb 09 17:29:30 (1 rows affected)

This didn’t wait because the index access didn’t have to go to the locked row.

However, when I read the same row that is concurrently locked, I have to wait:

-- read access the row that is locked
select * from DEMO where a=1;
go

 -- read access the row that is locked
 select * from DEMO where a=1;
 go
Feb 09 17:29:34  -- read access the row that is locked
Feb 09 17:29:34  select * from DEMO where a=1;
Feb 09 17:29:34
Feb 09 17:29:34 Timeout expired

Here is the confirmation that the index was used only with the hint:

set showplan_text on ;
go
select * from DEMO where a=2;
go
select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
go

Feb 09 17:29:50 set showplan_text on ;
Feb 09 17:29:50
 select * from DEMO where a=2;
 go
Feb 09 17:29:50  select * from DEMO where a=2;
Feb 09 17:29:50
Feb 09 17:29:50 StmtText
Feb 09 17:29:50 --------------------------------
Feb 09 17:29:50  select * from DEMO where a=2;
Feb 09 17:29:50
Feb 09 17:29:50 (1 rows affected)
Feb 09 17:29:50 StmtText
Feb 09 17:29:50 --------------------------------------------------------------------------------------------------------------------------
Feb 09 17:29:50   |--Clustered Index Scan(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F102B4054]), WHERE:([DEMO].[dbo].[DEMO].[a]=(2)))
Feb 09 17:29:50
Feb 09 17:29:50 (1 rows affected)
 select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
 go
Feb 09 17:29:52  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:52
Feb 09 17:29:52 StmtText
Feb 09 17:29:52 -----------------------------------------------------
Feb 09 17:29:52  select * from DEMO WITH (INDEX(DEMO_A)) where a=2;
Feb 09 17:29:52
Feb 09 17:29:52 (1 rows affected)
Feb 09 17:29:52 StmtText                                                                                                                                                
Feb 09 17:29:52 ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Feb 09 17:29:52   |--Nested Loops(Inner Join, OUTER REFERENCES:([DEMO].[dbo].[DEMO].[id]))                                                                              
Feb 09 17:29:52        |--Index Seek(OBJECT:([DEMO].[dbo].[DEMO].[DEMO_A]), SEEK:([DEMO].[dbo].[DEMO].[a]=(2)) ORDERED FORWARD)                                         
Feb 09 17:29:52        |--Clustered Index Seek(OBJECT:([DEMO].[dbo].[DEMO].[PK__DEMO__3213E83F102B4054]), SEEK:([DEMO].[dbo].[DEMO].[id]=[DEMO].[dbo].[DEMO].[id]) LOOKUP ORDERED FORWARD)
Feb 09 17:29:52
Feb 09 17:29:52 (3 rows affected)

So, with the default isolation level and index access, we can read a row that is not locked. The SELECT * FROM DEMO WHERE A=1 query was still blocked because we are in the legacy, and default, mode where readers are blocked by writers.

SQL Server MVCC

In order to improve this situation, Microsoft has implemented MVCC. With it, we do not need to read the current version of the rows (which requires waiting when they are concurrently modified) because past versions of the rows are stored in TEMPDB and we can read a past snapshot. Typically, with the READ COMMITTED SNAPSHOT isolation level, we read a snapshot as of the point in time our query began.
In general, we need to read all rows from a consistent point in time. This can be the one where our query started, and then while the query is running, a past version may be reconstructed to remove concurrent changes. Or, when there is no MVCC to rebuild this snapshot, this consistent point can only be the one when our query is completed. This means that while we read rows, we must lock them to be sure that they stay the same until the end of our query. Of course, even with MVCC there are cases where we want to read the latest value and then we will lock with something like a SELECT FOR UPDATE. But that’s not the topic here.
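
Before running the next test, you can verify whether the option is enabled for a database with a quick check against sys.databases (run from any sqlcmd session):

select name, is_read_committed_snapshot_on from sys.databases where name='DEMO';
go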

I’ll run the same test as the first one, but now have the database with READ_COMMITTED_SNAPSHOT on:

( sed -e '/READ_COMMITTED/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( sed -e '/READ_COMMITTED/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 38943
[root@instance-20200208-1719 ~]# Feb 09 18:21:19 drop database if exists DEMO;
Feb 09 18:21:19 create database DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 use DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 Changed database context to 'DEMO'.
Feb 09 18:21:19 -- set MVCC to read snapshot rather than locked current --
Feb 09 18:21:19  alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 18:21:19
Feb 09 18:21:19 drop table if exists DEMO;
Feb 09 18:21:19 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:21:19 begin transaction;
Feb 09 18:21:19 insert into DEMO values(1,1,1);
Feb 09 18:21:19 insert into DEMO values(2,2,2);
Feb 09 18:21:19 commit;
Feb 09 18:21:19
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)
Feb 09 18:21:19 select * from DEMO;
Feb 09 18:21:19
Feb 09 18:21:19 id          a           b
Feb 09 18:21:19 ----------- ----------- -----------
Feb 09 18:21:19           1           1           1
Feb 09 18:21:19           2           2           2
Feb 09 18:21:19
Feb 09 18:21:19 (2 rows affected)
Feb 09 18:21:19 -- index to read only rows that we want to modify --
Feb 09 18:21:19 -- create index DEMO_A on DEMO(a);
Feb 09 18:21:19
Feb 09 18:21:19 begin transaction;
Feb 09 18:21:19 update DEMO set b=b+1 where a=1;
Feb 09 18:21:19
Feb 09 18:21:19
Feb 09 18:21:19 (1 rows affected)

And then running the same scenario:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is locked
select * from DEMO where a=1;
go
-- write access on the row that is not locked
delete from DEMO where a=2;
go

[root@instance-20200208-1719 ~]# docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3 | ts

-- read access the row that is not locked
select * from DEMO where a=2;
go
Feb 09 18:21:36 -- read access the row that is not locked
Feb 09 18:21:36 select * from DEMO where a=2;
Feb 09 18:21:36
Feb 09 18:21:36 id          a           b
Feb 09 18:21:36 ----------- ----------- -----------
Feb 09 18:21:36           2           2           2
Feb 09 18:21:36
Feb 09 18:21:36 (1 rows affected)

-- read access the row that is locked
select * from DEMO where a=1;
go
Feb 09 18:21:47 -- read access the row that is locked
Feb 09 18:21:47 select * from DEMO where a=1;
Feb 09 18:21:47
Feb 09 18:21:47 id          a           b
Feb 09 18:21:47 ----------- ----------- -----------
Feb 09 18:21:47           1           1           1
Feb 09 18:21:47
Feb 09 18:21:47 (1 rows affected)

-- write access on the row that is not locked
delete from DEMO where a=2;
go
Feb 09 18:22:01 -- write access on the row that is not locked
Feb 09 18:22:01 delete from DEMO where a=2;
Feb 09 18:22:01
Feb 09 18:22:01 Timeout expired

Ok, that’s better. I confirm that readers are not blocked by writers. But the modification on “A”=2 was blocked. This is not a writer-writer situation, because we are not modifying the row that is locked by the other session. Here, I have no index on “A”, so the delete statement must first read the table, and it had to read this locked row. And obviously, this read is blocked. It seems that DML must read the current version of the row even when MVCC is available. That means that reads can be blocked by writes when those reads are done by a writing transaction.
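
As a side note, while the delete waits, a third session could inspect the lock queue to see the granted and waiting requests. A minimal sketch with the sys.dm_tran_locks DMV (not part of the original demo):

select request_session_id, resource_type, request_mode, request_status
from sys.dm_tran_locks
where resource_database_id = db_id('DEMO');
go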

Last test on SQL Server: the same, with MVCC, and the index on “A”

( sed -e '/READ_COMMITTED/s/--//' -e '/create index/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &

[root@instance-20200208-1719 ~]# ( sed -e '/READ_COMMITTED/s/--//' -e '/create index/s/--//' session1.sql ; sleep 120 ) | docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e | ts &
[1] 40320
[root@instance-20200208-1719 ~]#
[root@instance-20200208-1719 ~]# Feb 09 18:30:15 drop database if exists DEMO;
Feb 09 18:30:15 create database DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 use DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 Changed database context to 'DEMO'.
Feb 09 18:30:15 -- set MVCC to read snapshot rather than locked current --
Feb 09 18:30:15  alter database DEMO set READ_COMMITTED_SNAPSHOT on;
Feb 09 18:30:15
Feb 09 18:30:15 drop table if exists DEMO;
Feb 09 18:30:15 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:30:15 begin transaction;
Feb 09 18:30:15 insert into DEMO values(1,1,1);
Feb 09 18:30:15 insert into DEMO values(2,2,2);
Feb 09 18:30:15 commit;
Feb 09 18:30:15
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)
Feb 09 18:30:15 select * from DEMO;
Feb 09 18:30:15
Feb 09 18:30:15 id          a           b
Feb 09 18:30:15 ----------- ----------- -----------
Feb 09 18:30:15           1           1           1
Feb 09 18:30:15           2           2           2
Feb 09 18:30:15
Feb 09 18:30:15 (2 rows affected)
Feb 09 18:30:15 -- index to read only rows that we want to modify --
Feb 09 18:30:15  create index DEMO_A on DEMO(a);
Feb 09 18:30:15
Feb 09 18:30:15 begin transaction;
Feb 09 18:30:15 update DEMO set b=b+1 where a=1;
Feb 09 18:30:15
Feb 09 18:30:15
Feb 09 18:30:15 (1 rows affected)

Here is my full scenario to see where it blocks:

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is locked
select * from DEMO where a=1;
go
-- write access on the row that is not locked
delete from DEMO where a=2;
go
-- write access on the row that is locked
delete from DEMO where a=1;
go

docker exec -i mssql /opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P "**P455w0rd**" -e -d DEMO -t 3
-- read access the row that is not locked
select * from DEMO where a=2;
go
-- read access the row that is not locked
select * from DEMO where a=2;

id          a           b
----------- ----------- -----------
          2           2           2

(1 rows affected)

-- read access the row that is locked
select * from DEMO where a=1;
go
-- read access the row that is locked
select * from DEMO where a=1;

id          a           b
----------- ----------- -----------
          1           1           1

(1 rows affected)

-- write access on the row that is not locked
delete from DEMO where a=2;
go
-- write access on the row that is not locked
delete from DEMO where a=2;


(1 rows affected)

-- write access on the row that is locked
delete from DEMO where a=1;
go
-- write access on the row that is locked
delete from DEMO where a=1;

Timeout expired

Finally, the only blocking situation here is when I want to write on the same row. The index access reduces the risk of being blocked.

In summary, we can achieve the best concurrency with the READ_COMMITTED_SNAPSHOT isolation level, while ensuring that we read only the rows we will update, with proper indexing and maybe hinting. This is, in my opinion, very important to know because we rarely cover those situations during integration tests. But they can happen quickly in production under high load.

PostgreSQL

Let’s do the same with PostgreSQL which is natively MVCC:

cat > session1.sql <<'SQL'
drop database if exists DEMO;
create database DEMO;
\c demo
drop table if exists DEMO;
create table DEMO(id int primary key, a int not null, b int);
begin transaction;
insert into DEMO values(1,1,1);
insert into DEMO values(2,2,2);
commit;
select * from DEMO;
begin transaction;
update DEMO set b=b+1 where a=1;
SQL

No specific settings, and no index created here.

( cat session1.sql ; sleep 120 ; echo "commit;") | psql -e | ts &

-bash-4.2$ ( cat session1.sql ; sleep 120 ; echo "commit;") | psql -e | ts &
[1] 31125
-bash-4.2$
-bash-4.2$ Feb 09 18:42:48 drop database if exists DEMO;
Feb 09 18:42:48 DROP DATABASE
Feb 09 18:42:48 create database DEMO;
Feb 09 18:42:49 CREATE DATABASE
Feb 09 18:42:49 You are now connected to database "demo" as user "postgres".
Feb 09 18:42:49 drop table if exists DEMO;
NOTICE:  table "demo" does not exist, skipping
Feb 09 18:42:49 DROP TABLE
Feb 09 18:42:49 create table DEMO(id int primary key, a int not null, b int);
Feb 09 18:42:49 CREATE TABLE
Feb 09 18:42:49 begin transaction;
Feb 09 18:42:49 BEGIN
Feb 09 18:42:49 insert into DEMO values(1,1,1);
Feb 09 18:42:49 INSERT 0 1
Feb 09 18:42:49 insert into DEMO values(2,2,2);
Feb 09 18:42:49 INSERT 0 1
Feb 09 18:42:49 commit;
Feb 09 18:42:49 COMMIT
Feb 09 18:42:49 select * from DEMO;
Feb 09 18:42:49  id | a | b
Feb 09 18:42:49 ----+---+---
Feb 09 18:42:49   1 | 1 | 1
Feb 09 18:42:49   2 | 2 | 2
Feb 09 18:42:49 (2 rows)
Feb 09 18:42:49
Feb 09 18:42:49 begin transaction;
Feb 09 18:42:49 BEGIN
Feb 09 18:42:49 update DEMO set b=b+1 where a=1;
Feb 09 18:42:49 UPDATE 1

While the transaction updating the first row is in the background, I run the following readers and writers:

psql demo | ts
set statement_timeout=3000;
-- read access the row that is not locked
select * from DEMO where a=2;
-- read access the row that is locked
select * from DEMO where a=1;
-- write access on the row that is not locked
delete from DEMO where a=2;
-- write access on the row that is locked
delete from DEMO where a=1;

-bash-4.2$ psql demo | ts
set statement_timeout=3000;
Feb 09 18:43:00 SET
-- read access the row that is not locked
select * from DEMO where a=2;
Feb 09 18:43:08  id | a | b
Feb 09 18:43:08 ----+---+---
Feb 09 18:43:08   2 | 2 | 2
Feb 09 18:43:08 (1 row)
Feb 09 18:43:08
-- read access the row that is locked
select * from DEMO where a=1;
Feb 09 18:43:16  id | a | b
Feb 09 18:43:16 ----+---+---
Feb 09 18:43:16   1 | 1 | 1
Feb 09 18:43:16 (1 row)
Feb 09 18:43:16
-- write access on the row that is not locked
delete from DEMO where a=2;
Feb 09 18:43:24 DELETE 1
-- write access on the row that is locked
delete from DEMO where a=1;

ERROR:  canceling statement due to statement timeout
CONTEXT:  while deleting tuple (0,1) in relation "demo"

Nothing is blocked except, of course, when modifying the row that is locked.

Oracle Database

One of the many things I’ve learned from Tom Kyte when I was reading AskTom regularly is how to build the simplest test cases. And with Oracle there is no need to run multiple sessions to observe concurrency between multiple transactions. I can do it with an autonomous transaction in one session, and one advantage is that I can share a dbfiddle example:

Here, the deadlock at line 14 means that only the “delete where a=1” encountered a blocking situation with “update where a=1”. All previous statements, the select on any row and the update of other rows, were executed without conflict.

A DML statement has two phases: one to find the rows and a second one to modify them. A DELETE or UPDATE in Oracle and Postgres runs the first in snapshot mode: non-blocking MVCC. The second must, of course, modify the current version. This is a very complex mechanism because it may require a retry (restart) when the current version does not match the consistent snapshot that was used for filtering. Both PostgreSQL and Oracle can ensure this write consistency without the need to block the reads. SQL Server has implemented MVCC more recently and provides non-blocking reads only for SELECT reads. But a read can still be in a blocking situation during the query phase of an update statement.

The article How SQL Server MVCC compares to Oracle and PostgreSQL first appeared on Blog dbi services.

Basic Replication -- 13 : Some Interesting SYS tables

Hemant K Chitale - Sun, 2020-02-09 08:45
I found an interesting SQL in the AWR report from my previous blog post.

What do you think this SQL statement does ?

DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

Here are some interesting objects (I don't know which Oracle release they started appearing in) :

SQL> l
1 select object_name, object_type
2 from dba_objects
3 where owner = 'SYS'
4 and object_name like 'MVREF$%'
5* order by 2,1
SQL> /

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
MVREF$_STATS_SEQ               SEQUENCE
MVREF$_CHANGE_STATS            TABLE
MVREF$_RUN_STATS               TABLE
MVREF$_STATS                   TABLE
MVREF$_STATS_PARAMS            TABLE
MVREF$_STATS_SYS_DEFAULTS      TABLE
MVREF$_STMT_STATS              TABLE

7 rows selected.

SQL>


Right now, the SYS.MVREF$_STMT_STATS table appears to be empty.
SQL> desc SYS.MVREF$_STMT_STATS
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 MV_OBJ#                                   NOT NULL NUMBER
 REFRESH_ID                                NOT NULL NUMBER
 STEP                                      NOT NULL NUMBER
 SQLID                                     NOT NULL VARCHAR2(14)
 STMT                                      NOT NULL CLOB
 EXECUTION_TIME                            NOT NULL NUMBER
 EXECUTION_PLAN                                     SYS.XMLTYPE STORAGE BINARY

SQL>


It would be interesting to know how Oracle is using this and the other MVREF$% tables.
SYS.MVREF$_CHANGE_STATS obviously captures DML operations.

SYS.MVREF$_RUN_STATS captures the last refresh operation (does it capture only the last operation?). And what does SYS.MVREF$_STATS capture?

SQL> l
1 select *
2 from SYS.MVREF$_RUN_STATS
3* where MVIEWS='"HEMANT"."MV_1"'
SQL> /

RUN_OWNER_USER# REFRESH_ID NUM_MVS_TOTAL NUM_MVS_CURRENT MVIEWS BASE_TABLES METHOD ROLLBACK P R PURGE_OPTION
--------------- ---------- ------------- --------------- ------------------ ------------ ------ -------- - - ------------
PARALLELISM HEAP_SIZE A N O NUMBER_OF_FAILURES START_TIME END_TIME ELAPSED_TIME LOG_SETUP_TIME
----------- ---------- - - - ------------------ -------------------------- -------------------------- ------------ --------------
LOG_PURGE_TIME C TXNFLAG ON_COMMIT_FLAG
-------------- - ---------- --------------
106 245 1 1 "HEMANT"."MV_1" Y N 1
0 0 Y N N 0 09-FEB-20 09.55.33.000000 09-FEB-20 09.55.49.000000 16 1
PM PM
9 Y 0 0


SQL>
SQL> l
1 select mviews, count(*) from sys.mvref$_run_Stats group by mviews
2* order by 1
SQL> /

MVIEWS                                       COUNT(*)
------------------------------------------ ----------
"HEMANT"."MV_1"                                      1
"HEMANT"."MV_2"                                      8
"HEMANT"."MV_DEPT", "HEMANT"."MV_EMP"                1
"HEMANT"."MV_FAST_NOT_POSSIBLE"                      1
"HEMANT"."MV_OF_SOURCE"                              1
"HEMANT"."NEW_MV"                                    2
"HEMANT"."NEW_MV_2_1"                                1
"HEMANT"."NEW_MV_2_2"                                2
"HR"."HR_MV_ON_COMMIT"                               1
"HR"."MY_LARGE_REPLICA"                              1

10 rows selected.

SQL>
SQL> desc sys.mvref$_run_stats
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 RUN_OWNER_USER#                           NOT NULL NUMBER
 REFRESH_ID                                NOT NULL NUMBER
 NUM_MVS_TOTAL                             NOT NULL NUMBER
 NUM_MVS_CURRENT                           NOT NULL NUMBER
 MVIEWS                                             VARCHAR2(4000)
 BASE_TABLES                                        VARCHAR2(4000)
 METHOD                                             VARCHAR2(4000)
 ROLLBACK_SEG                                       VARCHAR2(4000)
 PUSH_DEFERRED_RPC                                  CHAR(1)
 REFRESH_AFTER_ERRORS                               CHAR(1)
 PURGE_OPTION                                       NUMBER
 PARALLELISM                                        NUMBER
 HEAP_SIZE                                          NUMBER
 ATOMIC_REFRESH                                     CHAR(1)
 NESTED                                             CHAR(1)
 OUT_OF_PLACE                                       CHAR(1)
 NUMBER_OF_FAILURES                                 NUMBER
 START_TIME                                         TIMESTAMP(6)
 END_TIME                                           TIMESTAMP(6)
 ELAPSED_TIME                                       NUMBER
 LOG_SETUP_TIME                                     NUMBER
 LOG_PURGE_TIME                                     NUMBER
 COMPLETE_STATS_AVAILABLE                           CHAR(1)
 TXNFLAG                                            NUMBER
 ON_COMMIT_FLAG                                     NUMBER

SQL> desc sys.mvref$_stats
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 MV_OBJ#                                   NOT NULL NUMBER
 REFRESH_ID                                NOT NULL NUMBER
 ATOMIC_REFRESH                            NOT NULL CHAR(1)
 REFRESH_METHOD                                     VARCHAR2(30)
 REFRESH_OPTIMIZATIONS                              VARCHAR2(4000)
 ADDITIONAL_EXECUTIONS                              VARCHAR2(4000)
 START_TIME                                         TIMESTAMP(6)
 END_TIME                                           TIMESTAMP(6)
 ELAPSED_TIME                                       NUMBER
 LOG_SETUP_TIME                                     NUMBER
 LOG_PURGE_TIME                                     NUMBER
 INITIAL_NUM_ROWS                                   NUMBER
 FINAL_NUM_ROWS                                     NUMBER
 NUM_STEPS                                          NUMBER
 REFMET                                             NUMBER
 REFFLG                                             NUMBER

SQL>
SQL> select mv_obj#, count(*)
2 from sys.mvref$_stats
3 group by mv_obj#
4 /

   MV_OBJ#   COUNT(*)
---------- ----------
     73223          1
     73170          1
     73065          1
     73244          1
     73079          8
     73094          1
     73197          2
     73113          2
     73188          1
     73167          1
     73110          1

11 rows selected.

SQL>
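
The MV_OBJ# values are just object numbers, so they can be resolved to materialized view names through DBA_OBJECTS. A quick sketch (assuming the usual OBJECT_ID mapping):

select o.owner, o.object_name, s.cnt
from dba_objects o,
     (select mv_obj#, count(*) cnt from sys.mvref$_stats group by mv_obj#) s
where o.object_id = s.mv_obj#
order by o.owner, o.object_name;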
SQL> desc sys.mvref$_stats_params
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 MV_OWNER                                  NOT NULL VARCHAR2(128)
 MV_NAME                                   NOT NULL VARCHAR2(128)
 COLLECTION_LEVEL                          NOT NULL NUMBER
 RETENTION_PERIOD                          NOT NULL NUMBER

SQL> select count(*)
2 from sys.mvref$_stats_params;

COUNT(*)
----------
0

SQL> desc sys.mvref$_stats_sys_defaults
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 COLLECTION_LEVEL                          NOT NULL NUMBER
 RETENTION_PERIOD                          NOT NULL NUMBER

SQL> select * from sys.mvref$_stats_sys_defaults
2 /

COLLECTION_LEVEL RETENTION_PERIOD
---------------- ----------------
1 31

SQL>



Oracle has been introducing some more "internal" tables to trace MView Refresh operations.
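
If I am not mistaken, these tables underpin the 12.2 refresh statistics infrastructure (the DBA_MVREF_% views and the DBMS_MVIEW_STATS package), so the defaults seen in MVREF$_STATS_SYS_DEFAULTS should be adjustable with something like this (a sketch; the values are only examples):

exec dbms_mview_stats.set_system_default('COLLECTION_LEVEL','ADVANCED')
exec dbms_mview_stats.set_system_default('RETENTION_PERIOD',45)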


Categories: DBA Blogs

Basic Replication -- 12 : MV Refresh Captured in AWR

Hemant K Chitale - Sun, 2020-02-09 08:40
Building on the example of an Index having been created on a Materialized View  in my previous blog post in this series, I've captured some information from the AWR report in 19c when this code is executed :

SQL> delete source_table_1;

72454 rows deleted.

SQL> insert into source_table_1 select object_id, owner, object_name from source_table_2;

72366 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_mview.refresh('MV_OF_SOURCE');

PL/SQL procedure successfully completed.

SQL>
SQL> exec dbms_mview.refresh('MV_1');

PL/SQL procedure successfully completed.


(Note that "MV_OF_SOURCE" is not dependent on SOURCE_TABLE_1 and as really had no rows to refresh, did not cause any load).

Some information in the AWR Report (note that this is 19.3) :

SQL ordered by Elapsed Time             DB/Inst: ORCLCDB/ORCLCDB  Snaps: 54-55
-> Resources reported for PL/SQL code includes the resources used by all SQL
statements called by the code.
-> % Total DB Time is the Elapsed Time of the SQL statement divided
into the Total Database Time multiplied by 100
-> %Total - Elapsed Time as a percentage of Total DB time
-> %CPU - CPU Time as a percentage of Elapsed Time
-> %IO - User I/O Time as a percentage of Elapsed Time
-> Captured SQL account for 108.1% of Total DB Time (s): 30
-> Captured PL/SQL account for 85.2% of Total DB Time (s): 30

Elapsed Elapsed Time
Time (s) Executions per Exec (s) %Total %CPU %IO SQL Id
---------------- -------------- ------------- ------ ------ ------ -------------
16.1 1 16.09 53.5 12.8 21.6 2uusn1kyhm9h8
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_1'); END;

8.7 1 8.66 28.8 5.3 13.6 8chh7ksnytb52
PDB: ORCLPDB1
delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

4.5 1 4.55 15.1 17.3 75.6 57ctmbtabx1rw
Module: SQL*Plus
PDB: ORCLPDB1
BEGIN dbms_mview.refresh('MV_OF_SOURCE'); END;

4.0 1 3.96 13.2 37.2 26.1 dsyxhpb9annru
Module: SQL*Plus
PDB: ORCLPDB1
delete source_table_1

3.7 144,820 0.00 12.3 36.7 8.3 9ucb4uxnvzxc8
Module: SQL*Plus
PDB: ORCLPDB1
INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

3.5 1 3.52 11.7 19.7 45.9 dxnyhyq7sqf8j
PDB: ORCLPDB1
DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

3.3 1 3.25 10.8 45.2 .6 9n1gw9vpj9248
Module: SQL*Plus
PDB: ORCLPDB1
insert into source_table_1 select object_id, owner, object_name from source_tabl
e_2

2.3 2 1.14 7.6 18.4 77.4 94z4z19ygx34a
Module: SQL*Plus
PDB: ORCLPDB1
begin sys.dbms_irefstats.run_sa(:1, :2, :3, :4, :5, :6); end;

2.1 1 2.11 7.0 19.1 21.6 a2sctn32qtwnf
PDB: ORCLPDB1
/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE

1.7 1 1.67 5.6 50.3 43.5 btqubgr940awu
Module: sqlplus@oracle-19c-vagrant (TNS V1-V3)
PDB: CDB$ROOT
BEGIN dbms_workload_repository.create_snapshot(); END;

1.3 1 1.33 4.4 27.3 .0 ggaxdw7tpmqjb
PDB: ORCLPDB1
update "HEMANT"."MLOG$_SOURCE_TABLE_1" set snaptime$$ = :1 where snaptime$$ > t
o_date('2100-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')

0.9 89 0.01 3.1 1.7 98.6 3un99a0zwp4vd
PDB: ORCLPDB1
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(proper
ty,0),subname,type#,flags,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and
p_obj#=obj#(+) order by order#

0.5 183 0.00 1.6 6.0 98.3 2sxqgx5hx76qr
PDB: ORCLPDB1
select /*+ rule */ bucket, endpoint, col#, epvalue, epvalue_raw, ep_repeat_count
, endpoint_enc from histgrm$ where obj#=:1 and intcol#=:2 and row#=:3 order by b
ucket

0.5 2 0.23 1.5 15.0 70.0 6tbg6ydrx9jmm
Module: SQL*Plus
PDB: ORCLPDB1
begin dbms_irefstats.purge_stats_mv_rp(in_time => :1, in_objnum => :2, in_r
etention_period => :3); end;

0.4 9 0.04 1.3 15.4 69.2 g1s379sraujaq
Module: SQL*Plus
PDB: ORCLPDB1
DELETE FROM SYS.MVREF$_STMT_STATS WHERE REFRESH_ID = :B2 AND MV_OBJ# = :B1

0.4 2 0.20 1.3 16.4 76.8 8szmwam7fysa3
Module: SQL*Plus
PDB: ORCLPDB1
insert into wri$_adv_objspace_trend_data select timepoint, space_usage, space_a
lloc, quality from table(dbms_space.object_growth_trend(:1, :2, :3, :4, NULL, N
ULL, NULL, 'FALSE', :5, 'FALSE'))

0.4 59 0.01 1.3 9.5 97.3 03guhbfpak0w7
PDB: CDB$ROOT
select /*+ index(idl_ub1$ i_idl_ub11) */ piece#,length,piece from idl_ub1$ where
obj#=:1 and part=:2 and version=:3 order by piece#

0.3 2 0.15 1.0 11.0 .0 a8xypykqc348c
PDB: ORCLPDB1
BEGIN dbms_stats_internal.advisor_setup_obj_filter(:tid, :rid, 'EXECUTE', FAL
SE); END;

0.3 2 0.15 1.0 8.7 .0 avf5k3k0x0cxn
PDB: ORCLPDB1
insert into stats_advisor_filter_obj$ (rule_id, obj#, flag
s, type) select :rule_id, obj#, :flag_include, :type_expanded
from stats_advisor_filter_obj$ where type = :type_priv
and (bitand(flags, :flag_orcl_owned) = 0 or :get_orcl_objects = 'T')


It is quite interesting that such a large number of operations occur.

Unlike a Trace File, the AWR does not report SQL operations as a chronologically-ordered sequence.  In this case, they are ordered by Elapsed Time per operation.

Also, remember that PL/SQL calls will include the time for "child" SQL calls, so you will encounter double-counting if you add up the figures (e.g. the "dbms_mview.refresh('MV_1');" call included a number of SQL calls; technically you can identify them only if you *trace* the session making this PL/SQL call). However, since there was no other activity in this database, almost everything that happened appears in this AWR extract.
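
For reference, such a trace could be enabled around the refresh along these lines (a sketch with DBMS_MONITOR; I did not trace here):

exec dbms_monitor.session_trace_enable(waits=>TRUE, binds=>FALSE);
exec dbms_mview.refresh('MV_1');
exec dbms_monitor.session_trace_disable;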

The actual calls "delete source_table_1;" and "insert into source_table_1 select object_id, owner, object_name from source_table_2;" were issued *before* the "exec dbms_mview.refresh('MV_1');" and are not "child" calls. The child calls that do appear in the AWR are not necessarily in the same chronological order as their execution.

The interesting "child" calls from the "dbms_mview.refresh" call that I would like to point out are :

delete from "HEMANT"."MLOG$_SOURCE_TABLE_1" where snaptime$$ <= :1

INSERT /*+ NO_DST_UPGRADE_INSERT_CONV IDX(0) */ INTO "HEMANT"."MLOG$_SOURCE_TABL
E_1" (dmltype$$,old_new$$,snaptime$$,change_vector$$,xid$$,"OBJECT_ID") VALUES (
:d,:o,to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS'),:c,:x,:1)

DELETE FROM "HEMANT"."MV_1" SNAP$ WHERE "OBJ_ID" IN (SELECT * FROM (SELECT MLOG$
."OBJECT_ID" "OBJ_ID" FROM "HEMANT"."MLOG$_SOURCE_TABLE_1" MLOG$ WHERE "SNAPTIME
$$" > :1 AND ("DMLTYPE$$" != 'I')) AS OF SNAPSHOT(:B_SCN) )

/* MV_REFRESH (MRG) */ MERGE INTO "HEMANT"."MV_1" "SNA$" USING (SELECT * FROM (S
ELECT CURRENT$."OBJ_ID",CURRENT$."OBJ_OWNER",CURRENT$."OBJ_NAME" FROM (SELECT "S
OURCE_TABLE_1"."OBJECT_ID" "OBJ_ID","SOURCE_TABLE_1"."OWNER" "OBJ_OWNER","SOURCE
_TABLE_1"."OBJECT_NAME" "OBJ_NAME" FROM "SOURCE_TABLE_1" "SOURCE_TABLE_1") CURRE


In my next post, I'll share some other findings after I found something interesting in the AWR report.


Categories: DBA Blogs

Zero Downtime Migration: Database & Applications (EBS R12) To Oracle Cloud Using Hybrid Data Guard

Online Apps DBA - Sun, 2020-02-09 07:37

Most Customers, when migrating Application and Database, want minimal downtime. Hybrid Data Guard for the DB Tier and RSYNC for the Apps Tier lets you sync both database & application constantly from On-Premise to Oracle Cloud. When you are ready to move to the Cloud, all you need is to failover Database to Oracle Cloud, configure […]

The post Zero Downtime Migration: Database & Applications (EBS R12) To Oracle Cloud Using Hybrid Data Guard appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[1Z0-997] Traffic Management In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Sun, 2020-02-09 06:20

Traffic management in OCI is a critical component of the Domain Name System (DNS) that lets us configure Routing policies for serving intelligent responses to DNS queries. Check out K21 Academy’s blog post at https://k21academy.com/1z099712 that covers: • Overview On Traffic Management In OCI • Traffic Steering Use Cases • OCI Traffic Management Policies • […]

The post [1Z0-997] Traffic Management In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Oracle Integration Cloud Services (OIC)1Z0-1042 Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Sun, 2020-02-09 05:43

Find out how to become an Oracle Certified Integration Expert [1Z0-1042] by performing the Hands-On Lab Exercises at https://k21academy.com/oic05 & get in-depth, Stepwise Study for: • Hands-On Activity Guides that you must perform in order to learn Oracle Integration Cloud Service (OIC) • How to bring you and your team for integration with on-premise & […]

The post Oracle Integration Cloud Services (OIC)1Z0-1042 Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[1Z0-997]Oracle Cloud Infrastructure (OCI) Architect Professional Certification: Step by Step Hands-On Lab To Clear Exam

Online Apps DBA - Sun, 2020-02-09 04:10

1Z0-997 is next level (Professional) Architect Certification after Associate (1Z0-1072) Check Hands-on Activity Guides/ Labs you must perform to learn, implement and clear Oracle OCI professional exam [1Z0-997]. Check out k21 Academy’s blog post at https://k21academy.com/1z099705 to know our Hands-On Labs in depth. The blog post comes with: •The Learning Path for Oracle OCI Architect […]

The post [1Z0-997]Oracle Cloud Infrastructure (OCI) Architect Professional Certification: Step by Step Hands-On Lab To Clear Exam appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[AZ-103] Microsoft Azure Administrator Training: Step By Step Activity Guides/Hands-On Lab Exercise

Online Apps DBA - Sun, 2020-02-09 00:04

To clear any certification, it is important to perform the right Hands-On Labs. Check out K21Academy’s blog post at https://k21academy.com/az10305, which tells you about the Hands-On Labs you must perform to clear the [AZ-103] exam with a high score & less trouble. Begin your journey towards becoming a [AZ-103] Microsoft Azure Administrator and earn a lot […]

The post [AZ-103] Microsoft Azure Administrator Training: Step By Step Activity Guides/Hands-On Lab Exercise appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Running SQL Server on the Oracle Free tier

Yann Neuhaus - Sat, 2020-02-08 16:25
By Franck Pachot

The Oracle Cloud is not only for Oracle Database. You can create a VM running Oracle Linux with full root access to it, even in the free tier: a free VM that will always be up, never expires, with full ssh connectivity to a sudoer user, where you are able to tunnel any port. Of course, there are some limits that I've detailed in a previous post. But that is sufficient to run a database, given that you configure a low memory usage. For Oracle Database XE, Kamil Stawiarski mentions that you can just hack the memory test in the RPM shell script.
But for Microsoft SQL Server, that's a bit more complex because this test is hardcoded in the sqlservr binary, and the solution I propose here is to intercept the sysinfo() system call.

Creating a VM in the Oracle Cloud is very easy, here are the steps in one picture:

I’m connecting to the public IP address with ssh (the public key is uploaded when creating the VM) and I will run everything as root:

ssh opc@129.213.138.34
sudo su -
cat /etc/oracle-release

I install docker engine (version 19.3 there)
yum install -y docker-engine

I start docker

systemctl start docker
docker info


I’ll use the latest SQL Server 2019 image built on RHEL
docker pull mcr.microsoft.com/mssql/rhel/server:2019-latest
docker images

5 minutes to download a 1.5GB image. Now trying to start it.
The nice thing (when I compare to Oracle) is that we don’t have to manually accept the license terms with a click-through process. I just mention that I have read and accepted them with: ACCEPT_EULA=Y 

I try to run it:
docker run \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mcr.microsoft.com/mssql/rhel/server:2019-latest

There’s a hardcoded prerequisite verification to check that the system has at least 2000 MB of RAM. And I have less than one GB here in this free tier:


awk '/^Mem/{print $0,$2/1024" MB"}' /proc/meminfo

Fortunately, there’s always a nice geek on the internet with an awesome solution: hack the sysinfo() system call with an LD_PRELOAD’ed wrapper: A Slightly Liberated Microsoft SQL Server Docker image

Let’s get it:
git clone https://github.com/justin2004/mssql_server_tiny.git
cd mssql_server_tiny

I changed the FROM to build from the 2019 RHEL image, and I preferred to use /etc/ld.so.preload rather than overriding the CMD command with LD_PRELOAD:


FROM oraclelinux:7-slim AS build0
WORKDIR /root
RUN yum update -y && yum install -y binutils gcc
ADD wrapper.c /root/
RUN gcc -shared -ldl -fPIC -o wrapper.so wrapper.c
FROM mcr.microsoft.com/mssql/rhel/server:2019-latest
COPY --from=build0 /root/wrapper.so /root/
ADD wrapper.c /root/
USER root
RUN echo "/root/wrapper.so" > /etc/ld.so.preload
USER mssql

I didn’t change the wrapper for the sysinfo function:
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
#include <sys/sysinfo.h>
int sysinfo(struct sysinfo *info){
   // clear it
   //dlerror();
   void *pt=NULL;
   typedef int (*real_sysinfo)(struct sysinfo *info);
   // we need the real sysinfo function address
   pt = dlsym(RTLD_NEXT,"sysinfo");
   //printf("pt: %x\n", *(char *)pt);
   // call the real sysinfo system call
   int real_return_val=((real_sysinfo)pt)(info);
   // but then modify its returned totalram field if necessary
   // because sqlserver needs to believe it has "2000 megabytes"
   // physical memory
   if( info->totalram < 1000l * 1000l * 1000l * 2l ){
      info->totalram = 1000l * 1000l * 1000l * 2l ;
   }
   return real_return_val;
}

I build the image from there:

docker build -t mssql .


I run it:

docker run -d \
-e "ACCEPT_EULA=Y" \
-e 'MSSQL_PID=Express' \
-p 1433:1433 \
-e 'SA_PASSWORD=**P455w0rd**' \
--name mssql \
mssql

I wait until it is ready:

until docker logs mssql | grep -C10 "Recovery is complete." ; do sleep 1 ; done

All is ok and I connect and check the version:

Well… as you can see, my first attempt failed. I am running with very low memory here, so many memory allocation problems can be expected. If you look at the logs after a while, many automatic system tasks fail. But that's sufficient for a minimal lab, and you can tweak some Linux and SQL Server parameters if you need to. Comments are welcome here for feedback and ideas…

The port 1433 is exposed locally here, and it can be tunneled through ssh. This is a free lab environment, always accessible from everywhere, to do small tests in MS SQL, running on the Oracle free tier. Here is how I connect with DBeaver from my laptop, just mentioning the public IP address, private ssh key and connection information:
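
For example, a tunnel like the following (with your own private key and the VM's public IP) makes the instance reachable as localhost:1433 on the laptop:

ssh -i ~/.ssh/id_rsa -N -L 1433:localhost:1433 opc@129.213.138.34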

The article Running SQL Server on the Oracle Free tier first appeared on Blog dbi services.

Tagging Overview & Concepts In Oracle Cloud (OCI)

Online Apps DBA - Sat, 2020-02-08 09:31

Tagging in Oracle Cloud Infrastructure (OCI) is used to provide metadata to the resources so that it is easier to list and organize resources present in multiple Compartments and for billing purposes (Cost tracking tags). Check out k21 Academy’s post at https://k21academy.com/oci60 that covers: • Overview of Tagging in OCI • Benefits of Using Tags […]

The post Tagging Overview & Concepts In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Fixed] opmnctl startall fails In EBS R12.2

Online Apps DBA - Sat, 2020-02-08 06:53

Did you encounter the “opmnctl startall fails In EBS R12” error after upgrading from EBS 12.2.0 to 12.2.8, while starting the Application Tier? If YES, then check out K21 Academy’s blog post at https://k21academy.com/appsdba64 that covers: • Issue Encountered • Root Cause • Fix Begin your journey towards becoming an Apps DBA Expert by joining […]

The post [Fixed] opmnctl startall fails In EBS R12.2 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Zero Downtime Migration In Oracle Cloud Infrastructure (OCI)

Online Apps DBA - Sat, 2020-02-08 05:56

Oracle has recently launched a cloud migration utility named Zero Downtime Migration (ZDM) that will help you in migrating your On-Prem Oracle Database to Oracle Cloud Database (VM DB, ExaCS, Bare Metal, Exadata Cloud at Customer), with the aim of keeping the migration process as simple as possible and minimal downtime on production workloads. If […]

The post Zero Downtime Migration In Oracle Cloud Infrastructure (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

New Feature: Exadata (ExaCs) DB System: Shared Database Homes

Online Apps DBA - Sat, 2020-02-08 04:11

Oracle recently announced that now you can add Oracle Database Homes to an existing Exadata DB system by using the Oracle Cloud Infrastructure (OCI)Console, the API (Rest API), or the CLI (dbaascli). Check out K21 Academy’s blog post at https://k21academy.com/clouddba53 that covers: • What Is Oracle Database Home? • How To Manage DB Home In […]

The post New Feature: Exadata (ExaCs) DB System: Shared Database Homes appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

How to load an XML file into an Oracle table and use an XML query to get the result

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I have an XML file from a Web Report. There are some error records in the file and it is difficult to find the errors in the XML file. I want to load this file into an Oracle table, then use an XML query. Below is a sample file and the expected result. This is fir...
Categories: DBA Blogs

ORA-29279: SMTP permanent error: 550 XXXsmtpXX: Host xxx.xxx.xxx.xxx: No unauthenticated relaying permitted

Tom Kyte - Fri, 2020-02-07 18:03
Hi Tom, I want to send email through PL/SQL by using the utl_mail package. I have completed the steps below: 1. Installed scripts @$ORACLE_HOME/rdbms/admin/utlmail.sql and @$ORACLE_HOME/rdbms/admin/prvtmail.plb 2. granted execute to us...
Categories: DBA Blogs

Should I go for ODA 19.5 or should I wait until 19.6?

Yann Neuhaus - Fri, 2020-02-07 05:25
Introduction

As you may know, Oracle Database 19c has been available for new (X8-2) or older Oracle Database Appliances for several weeks. The current version is 19.5. But when you go to the official ODA documentation, it still first proposes version 18.7, which is not compatible with 19c databases. Here is why.

19c database is the final 12.2

First of all, 19c is an important release because it's the terminal release of 12.2, as 11.2.0.4 was for 11.2. Please refer to my other blog to understand the new Oracle versioning. ODA always supports new releases a few months after they are available on Linux, and that's why it's only available now.

Drawbacks of 19.5

19.5 is available on your ODA, but you will not be able to patch to this version. The reason is quite simple: it's not a complete patch; you can only download the ISO for reimaging, plus the 19c grid and database software, and that's it. The reason for not yet having a patch resides in the difficulty of updating the OS part. 19.5 runs on Linux 7.7, and all previous releases are stuck with Linux 6.10, meaning that the patch would have to include the OS upgrade, and this jump is not so easy. That's the first drawback.

The second drawback is that you cannot run another database version. If you still need 18c, 12.2, 12.1 or 11.2, this 19.5 is not for you.

The third drawback is that you will not be able to patch from 19.5 to 19.6 or a newer version, simply because 19.5 is an out-of-the-way release.

Another drawback concerns the documentation, which is not yet complete: many parts are copied and pasted from 18.7. For example, the documented initctl command to restart the dcs agent is not a command that actually exists on Linux 7.

Moreover, my first tests on this version show annoying bugs related to database creation, which are under investigation by Oracle.

When will 19.6 be ready?

19.6 is planned for 2020, yes, but for which month? There is no official date; it could come in March, or during the summer, nobody knows. As a result, you will have to wait for this patch to be released to start your migration to 19c on ODA.

So, what to do?

3 solutions are possible:

  • You can deal with your old databases until the patch is released: buy extended support for 11gR2/12cR1. Premier support is still OK for 12.2.0.1 and 18c
  • Migrate your old 11gR2 and 12cR1 databases to 18c to be prepared for 19c and avoid buying extended support; the differences between 18c and 19c should be minimal
  • Deploy 19.5 for testing purposes on a test ODA and start your migration project to get prepared for 19.6. Once it is available, patch or redeploy your ODAs and migrate all your databases
Conclusion

Not having 19.6 now is really annoying. After all, we choose ODA because it's easier to get updates. But you can still prepare everything for the 19c migration: first migrate to 18c, or give 19c a try with this 19.5 release.

The article Should I go for ODA 19.5 or should I wait until 19.6? first appeared on Blog dbi services.

Antiques

Frank van Bortel - Wed, 2020-02-05 04:08
Apart from kind of being in the same position as when I started blogging (recovering from surgery), I have also ended my "productive" life and am enjoying a new status: pensionado. Cleaning up: that status comes with a task of cleaning up, and I found some Oracle documentation that might have antique value to some. Leave a comment when you are interested in one of the following items, and let me know. Frank

ROLLBACK TO SAVEPOINT;

Yann Neuhaus - Tue, 2020-02-04 14:07
By Franck Pachot

.
I love databases and, rather than trying to compare and rank them, I like to understand their differences. Sometimes, you make a mistake and encounter an error. Let's take the following example:
create table DEMO (n int);
begin transaction;
insert into DEMO values (0);
select n "after insert" from DEMO;
update DEMO set n=1/n;
select n "after error" from DEMO;
commit;
select n "after commit" from DEMO;

The “begin transaction” is not valid syntax in all databases because transactions may be started implicitly, but the other statements are valid syntax in all the common SQL databases. They all raise an error on the update execution because there's one row with N=0, and we cannot calculate 1/N as it is a math error. But what about the result of the last select?

If I run this with Oracle, DB2, MS SQL Server, MySQL (links go to examples in db<>fiddle), the row added by the insert is always visible to my session: after the insert, of course, after the update error, and after the commit (then visible to everybody).

The same statements run with PostgreSQL give a different result. You cannot do anything after the error except roll back the transaction. Even if you “commit”, it will roll back.

Yes, no rows remain there! Same code but a different result.

You can have the same behavior as the other databases by defining a savepoint before the statement and rolling back to that savepoint after the error. Here is the db<>fiddle. With PostgreSQL you have to define an explicit savepoint if you want to continue in your transaction after the error; other databases take an implicit savepoint. By the way, I said “statement”, but here is Tanel Poder showing that in Oracle it is actually not related to the statement but to the user call: Oracle State Objects and Reading System State Dumps Hacking Session Video – Tanel Poder’s blog
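
Here is, in short, what that fix looks like in PostgreSQL (a minimal sketch of the db<>fiddle linked above):

begin transaction;
insert into DEMO values (0);
savepoint before_update;
update DEMO set n=1/n;                  -- fails with division by zero
rollback to savepoint before_update;    -- back to a usable transaction
commit;                                 -- the inserted row is persisted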

In Oracle, you can run multiple statements in a user call with a PL/SQL block. With PostgreSQL, you can group multiple statements in one command, but you can also run a PL/pgSQL block. And with both, you can catch errors in the exception block. And then it is PostgreSQL that takes an implicit savepoint, as I explained in a previous post: PostgreSQL subtransactions, savepoints, and exception blocks
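
As a quick illustration of that implicit savepoint, an exception block catches the error and the transaction stays usable (a sketch in PL/pgSQL):

do $$
begin
  update DEMO set n=1/n;
exception when division_by_zero then
  raise notice 'error caught, transaction still alive';
end $$;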

That previous post was on Medium (you can read https://www.linkedin.com/pulse/technology-advocacy-why-i-am-still-nomad-blogger-franck-pachot/ where I explain my blog "nomadism"), but as you can see I'm back on the dbi-services blog for my 500th post there.

My last post here was called "COMMIT" (https://blog.dbi-services.com/commit/), where I explained that I was quitting consulting for CERN to start something new. But even though I decided to change, I was really happy at dbi-services (as I mentioned in a LinkedIn post about great places to work). And when people like to work together, it creates an implicit SAVEPOINT where you can come back if you encounter some unexpected results. Yes… this far-fetched analogy is just to mention that I'm happy to come back to dbi services, and this is where I'll blog again.

As with many analogies, it reaches the limits of the comparison very quickly. You do not ROLLBACK a COMMIT, and it is not a real rollback because this year at CERN was a good experience. I've met great people there, learned interesting things about matter and anti-matter, and went out of my comfort zone, like co-organizing a PostgreSQL meetup and inviting external people (https://www.linkedin.com/pulse/working-consultants-only-externalization-franck-pachot/) for visits and conferences.

This “rollback” is actually a step further, but back in the context I like: solving customer problems in a company that cares about its employees and customers. And I'm not exactly coming back to the same “savepoint”. I was mostly focused on Oracle, and I'm now covering more technologies in the database ecosystem. Of course, consulting on Oracle Database will still be a major activity. But today, many other databases are rising: NoSQL, NewSQL… Open Source is more and more relevant. And in this jungle, replication and federation technologies are rising too. I'll continue to share in these areas and you can follow this blog, the RSS feed, and/or my twitter account.

The article ROLLBACK TO SAVEPOINT; first appeared on Blog dbi services.

Pages

Subscribe to Oracle FAQ aggregator