Feed aggregator

Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002)

Online Apps DBA - Sat, 2020-01-25 06:59

Fixed: Restore Of Standby Control File On Standby DB In ExaCS Failing With Error (ORA-600, RMAN-03002) Are you looking for a fix to troubleshoot the “PSDRPC returns significant error 600 & RMAN-03002: failure of restore command” error while performing a restore of Standby Control File using RMAN on the Standby Database on Exadata Cloud Service […]

The post Solved: Restore Of Standby Control File On Standby DB In ExaCS Failing With ERROR (ORA-600, RMAN-03002) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

WITH Subquery

Jonathan Lewis - Thu, 2020-01-23 14:37

Here’s another anomaly that appears when you mix and match Oracle features. In this case it’s “With” subqueries (common table expressions / CTEs) and Active Dataguard (ADG) Standby databases. The problem appeared on the Oracle-l listserver and luckily for the OP another member of the list had seen it before and could point to a relevant MOS document id which explained the issue and supplied a workaround.

The OP had their standby database opened read-only for reporting and found the following oddity in the extended SQL trace file for one of their reports:


WAIT #140196872952648: nam='db file scattered read' ela= 1588 file#=4097 block#=579715946 blocks=128 obj#=-39778567 tim=17263910670242
WAIT #140196872952648: nam='db file scattered read' ela= 1495 file#=4097 block#=579715947 blocks=128 obj#=-39778567 tim=17263910672065
WAIT #140196872952648: nam='db file scattered read' ela= 1671 file#=4097 block#=579715948 blocks=128 obj#=-39778567 tim=17263910674042
WAIT #140196872952648: nam='db file scattered read' ela= 1094 file#=4097 block#=579715949 blocks=128 obj#=-39778567 tim=17263910675443

Before pointing out the oddity (if you haven’t spotted it already) I’ll just explain a few of the numbers that are a little unusual.

  • File# = 4097: the user has parameter db_files = 4096, so this is the first Temp file.
  • Block# = 579,715,946: the database is 120TB, and the temporary tablespace is a “bigfile” tablespace so it’s okay for the file to hold more than 579M blocks.
  • Obj# < 0: Negative object numbers are a characteristic of materialized CTEs: if you look at the execution plan a materialized CTE will be reported as a table with a name like SYS_TEMP_FDA106F9_E259E68. If you take the first hexadecimal number and treat it as a 32-bit signed integer you get the value that would be reported as the obj# in the trace file. (Converting to decimal and subtracting power(2,32) is one way of doing the arithmetic – see the quick check just after this list.)
  • tim= nnnnnnnn:  this is the timestamp (usually in microseconds), and we can see intervals of roughly 1,400 to 2,000 microseconds between these lines.
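
As a quick sanity check on that arithmetic, the following one-liner (using the temp segment name quoted above) returns -39778567, which matches the obj# in the trace lines:

select to_number('FDA106F9','XXXXXXXX') - power(2,32) obj_no from dual;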

So here’s the oddity: in this set of 4 consecutive waits we’re waiting for multiblock reads of 128 blocks – but each read starts one block after the previous read. It’s as if Oracle is reading 128 blocks and forgetting everything after the first one. And the timestamps are significant because they tell us that this isn’t a case of Oracle spending so much time between reads that the other blocks fall off the end of the buffer cache before the query reaches them.

I think I’ve seen a pattern like this once before but it would have been quite a long time ago and I can’t find any notes I might have made about it (and it turns out that my previous experience was not relevant to this case). Fortunately another member of Oracle-l had also seen the pattern and supplied the solution through a reference to a MOS document that led to: Doc ID 2251339.1 With Subquery Factorization Temp Table Does Not Cache in Standby in 12.1.0.2.

It’s not a bug – Oracle is supposed to do this if you manage to materialize a CTE in a read-only Standby database. I don’t understand exactly why there’s a problem, but thanks to some interaction between the way consistent reads operate and the way block SCNs are generated when you populate the blocks of the global temporary table (GTT) that is your materialized CTE, it’s possible for Oracle to produce the wrong results if it re-visits blocks that have been read into the cache from the GTT. So when you do a multiblock read during a tablescan of the GTT, Oracle can use the first block it has read (presumably because it’s immediately pinned) but can’t use the remaining 127 – and so you get the odd pattern of consecutive blocks appearing at the start of consecutive multiblock reads.

This raises a couple of interesting (and nasty) questions.

  • First – does every 128-block read go to the mid-point of the buffer cache, pushing another 128 blocks out of the buffer cache, or does Oracle automatically read the blocks to the “cold” end of the LRU, minimising the impact on the rest of the cache? We hope it’s the latter.
  • Second – if I use a small fetch size while running my query, might I find that I have to re-read the same block (with its 127 neighbours) many times because Oracle releases any pinned blocks at the end of each fetch and has to re-acquire the blocks on the next fetch?

If anyone wants to test the second question by running a query from SQL*Plus with extended trace enabled, the following simple script should answer it:

alter session set events '10046 trace name context forever, level 8';
set arraysize 2

with v1 as (select /*+ materialize */ * from all_objects)
select object_name from v1;

Workarounds

There is a workaround to the issue – you can add the hint /*+ inline */ to the query to ensure that the CTE is not materialized. There is a bit of a catch to this, though (on top of the fact that you might then need to have two slightly different versions of the code if you want to run the query on production and standby) – if Oracle places the subquery text inline the optimizer may manage to merge it into the rest of the query and come up with a bad execution plan. Again you can probably work around this threat by extending the hint to read: /*+ inline no_merge */. Even then the optimizer could decide it has better statistics about the “real” table columns that it might have lost when it materialized the subquery, so it could still produce a different execution plan from the materialized plan.

As an alternative (and somewhat more brutal) workaround you could set the hidden parameter “_with_subquery” to inline either at the session or system level, or in the startup parameter file.
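
For anyone who wants to try either workaround, here’s a minimal sketch of both (the hidden parameter is undocumented, so check with Oracle Support and test on a non-production system before relying on it):

with v1 as (select /*+ inline no_merge */ * from all_objects)
select object_name from v1;

alter session set "_with_subquery" = inline;
-- or, more brutally, at the system level:
-- alter system set "_with_subquery" = inline scope=both;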

 

Time-Series Prediction Beyond Test Data

Andrejus Baranovski - Thu, 2020-01-23 12:13
I was working on an assignment to build a large-scale time-series prediction solution. I ended up using a combination of approaches in a single solution — Prophet, ARIMA and an LSTM Neural Network (running on top of Keras/TensorFlow). With Prophet (Serving Prophet Model with Flask — Predicting Future) and ARIMA it is straightforward to calculate a prediction for future dates: both provide a function that returns a prediction for a given future horizon. The same is not obvious with LSTM; if you are new to it, it will take a significant amount of time to research how to forecast true future dates (most of the examples only show how to predict against the test dataset).

I found one good example, though, which I followed and which helped me to solve my task — A Quick Example of Time-Series Prediction Using Long Short-Term Memory (LSTM) Networks. In this post, I will show how to predict shampoo sales monthly data, based mainly on the code from the above example.

Read more - Time-Series Prediction Beyond Test Data.


PostgreSQL Meetup at CERN, Geneva, January 17, 2020

Yann Neuhaus - Thu, 2020-01-23 10:37
Registration

Last Friday (17/01/20) I had the opportunity to go to CERN in Geneva to attend a PostgreSQL meetup.
I’m really happy to write a blog about it, for a lot of reasons. First of all, because it was about PostgreSQL (essential these days), then because of contributors such as Oleg Bartunov, and finally because of the organizers: Franck Pachot (an old and soon to be new colleague at dbi services) & Laetitia Avrot, whom I met at the SOUG Day in Lausanne.


After the registration process, where a lot of people were waiting to get their badge, I followed the instructions given to reach Building 503. I rapidly got lost in this labyrinth and had to ask several times to find the conference room.

Unfortunately I was late and missed the “Welcome & opening speech” at 02pm. Worse, I also missed the first minutes of Gülçin Yıldırım’s session.
I discreetly took a seat close to the door of the conference room. I was immediately impressed, as more than 100 people were already in.
There was clearly a huge interest in this event.

So the first talk was about the evolution of Fault Tolerance in PostgreSQL.
Gülçin mainly focused on the evolution of fault tolerance capabilities in PostgreSQL throughout its versions.
Robustness, replication methods, failover & switchover were the main topics.

The second talk was from Pavlo Golub, a PostgreSQL expert and developer at Cybertec (a software & database partner in Austria).
After some words about the existing scheduling tools in the Postgres community, he introduced a new Open Source tool from Cybertec, called pg_timetable.
He went deep into the details & discussed some advanced topics like transaction support and cross-platform tasks.
At the end we got a successful demo.

Break

After this session, we had a break, as it was 04pm. A great kings’ cake party (Galette des Rois), accompanied by excellent cider, was offered.
We could even choose between the one from Normandy and the one from Brittany.
I tasted both and appreciated them.
By the way, I found the bean twice. I have to admit that this was a really good idea, as the atmosphere was relaxed and friendly.

At around 04:45pm, Anastasia Lubennikova started her session, a comparison of various backup/recovery solutions for PostgreSQL.
The goal was to help us choose the most appropriate tool. She covered several advanced topics in that area, such as incremental backups, archive management, backup validation and retention policies.

The next session, from Vik Fearing, was a short one – Advent of Code Using PostgreSQL.
During his talk, Vik showed some techniques to solve general-purpose programming challenges using just a single query with lateral joins.

Romuald Thion, a PostgreSQL teacher at the university of Lyon (France), followed: his talk was about some of the lessons learned since he progressively introduced PostgreSQL into his courses in 2018.
Since then, 400 students enrolled from the second to the fifth year have used PostgreSQL in databases, web, security or system administration courses.

Finally, as a kind of apotheosis for the last session, Oleg Bartunov, a PostgreSQL Major Contributor & developer, came on stage.
“All You Need Is Postgres” was the title of his session. It could soon become a hymn.
Oleg talked a lot about his long Postgres professional experience since Postgres95, about all the projects he has participated in, such as GiST, GIN and SP-GiST, and also about full text search and the NoSQL features (hstore and jsonb). He also talked about astronomy and Supernovae.
This was really fascinating and we could have listened to him for hours.

Conclusion

A Drinks & Food party concluded this meetup. It was a nice opportunity to meet all the speakers and to talk with them, as well as with some customers and old colleagues (Pierre Boizot).
Learn and share: this is part of the dbi services spirit and matches our values!
One last thing: I would like to address special thanks to Franck Pachot & Laetitia Avrot for the good organization and, above all, the cool atmosphere during this meetup.
Hoping for the next one…

This article PostgreSQL Meetup at CERN, Geneva, January 17, 2020 appeared first on Blog dbi services.

New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI)

Online Apps DBA - Thu, 2020-01-23 07:19

New feature: Restoring Deleted Compartments In Oracle Cloud (OCI) Previously, once the Compartments were deleted in OCI, they couldn’t be restored. With the recent update, now you can restore the deleted Compartments. Check out K21Academy’s post at https://k21academy.com/oci53 that covers: ▪Steps For Restoring The Deleted Compartments ▪What Changes Take Place In The Compartments After Restoring? […]

The post New Feature: Restoring Deleted Compartments In Oracle Cloud (OCI) appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI

Online Apps DBA - Thu, 2020-01-23 03:06

Fixed: WebLogic Console For E-Business (EBS) R12 on Oracle Cloud (OCI) Socket Blocked Did you encounter the “The Server is not able to service this request: blocked Socket, weblogic.security.net.FilterException:” error while accessing the Weblogic console from the Oracle EBS environment created from the Oracle EBS Cloud Manager? If YES, then the blog post at https://k21academy.com/ebscloud36 […]

The post [Troubleshoot Socket:000445] Connection Rejected, Filter Blocked Socket While Accessing EBS R12 WebLogic Console On OCI appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Adding PostgreSQL extensions without being super user?

Yann Neuhaus - Wed, 2020-01-22 09:33

Usually, when you need to install a PostgreSQL extension you do this as superuser (or at least I am doing it like this). The downside of that is, of course, that a superuser must be available once a new extension is required, or that all the extensions are installed by default (e.g. in template1). Recently the question popped up internally whether you can install extensions without being superuser. The answer is: yes and no. It depends on the objects that get created by the extension and it depends on the permissions of the user that wants to install the extension. As always, let's do some demos, as this is the easiest way to understand how things work.

Currently there are no extensions installed in my database except for the standard PL/pgSQL extension:

postgres=# \dx
                 List of installed extensions
  Name   | Version |   Schema   |         Description          
---------+---------+------------+------------------------------
 plpgsql | 1.0     | pg_catalog | PL/pgSQL procedural language
(1 row)

For the purpose of this post let's create a new user without any special permissions other than login:

postgres=# create user a with login password 'a';
CREATE ROLE
postgres=# \du a
           List of roles
 Role name | Attributes | Member of 
-----------+------------+-----------
 a         |            | {}

This user, even if not granted anything, by default has permissions to create objects in the public schema (you should definitely avoid that, check here):

postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select current_database();
 current_database 
------------------
 postgres
(1 row)
postgres=> create table tab1 ( a int );
CREATE TABLE
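
As an aside, a minimal sketch of how to remove that default privilege (run as superuser):

postgres=# revoke create on schema public from public;
REVOKE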

What this user is not able to do is to use “create extension” to install a new extension into the database:

postgres=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
postgres=> 

Why is that? If we take a look at the SQL file of one of these extensions (here: ltree), the first statement is this:

CREATE FUNCTION ltree_in(cstring)
RETURNS ltree
AS 'MODULE_PATHNAME'
LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;

“CREATE FUNCTION” does work as we are able to create objects in the public schema. The issue is this:

postgres=> CREATE FUNCTION ltree_in(cstring)
postgres-> RETURNS ltree
postgres-> AS 'MODULE_PATHNAME'
postgres-> LANGUAGE C STRICT IMMUTABLE PARALLEL SAFE;
ERROR:  permission denied for language c
postgres=> 

We do not have access to the language. Let's try to grant the required privilege for using the language:

postgres=# grant USAGE on LANGUAGE c to a;
ERROR:  language "c" is not trusted
DETAIL:  GRANT and REVOKE are not allowed on untrusted languages, because only superusers can use untrusted languages.

That does not work either, as C is untrusted. The same is true for “language internal”, as in intagg--1.1.sql:

postgres=> CREATE FUNCTION int_agg_state (internal, int4)
postgres-> RETURNS internal
postgres-> AS 'array_agg_transfn'
postgres-> PARALLEL SAFE
postgres-> LANGUAGE INTERNAL;
ERROR:  permission denied for language internal

As all the extensions in the standard PostgreSQL community distribution reference either “c” or “internal” somehow, we do not have any chance to get an extension installed as user “a”. Let's do another test and create a new database with user “a” as its owner:

postgres=> \c postgres postgres
You are now connected to database "postgres" as user "postgres".
postgres=# create database dba with owner a;
CREATE DATABASE
postgres=# \c dba a
You are now connected to database "dba" as user "a".
dba=> 

Can we install extensions now?

dba=> create extension lo;
ERROR:  permission denied to create extension "lo"
HINT:  Must be superuser to create this extension.
dba=> 

Another error message but it still does not work. We can, however, install PL/Perl as this is a trusted language:

dba=> create extension plperl;
CREATE EXTENSION
dba=> 

Actually PL/Perl is the only extension that can be installed in this scenario, which can be confirmed by a simple PL/pgSQL code block:

dba=> drop extension plperl;
DROP EXTENSION
dba=> do $$
dba$> declare
dba$>   ext record;
dba$>   exception_text text;
dba$> begin
dba$>   for ext in
dba$>       select name
dba$>         from pg_available_extensions
dba$>        order by name
dba$>   loop
dba$>     begin
dba$>        execute 'create extension '||ext.name;
dba$>        raise notice 'SUCCESS for %', ext.name;
dba$>     exception
dba$>        when others
dba$>          then get stacked diagnostics exception_text = MESSAGE_TEXT;
dba$>          raise notice '% failed with %', ext.name, exception_text;
dba$>     end;
dba$>   end loop;
dba$> end $$;
NOTICE:  adminpack failed with permission denied to create extension "adminpack"
NOTICE:  amcheck failed with permission denied to create extension "amcheck"
NOTICE:  autoinc failed with permission denied to create extension "autoinc"
NOTICE:  bloom failed with permission denied to create extension "bloom"
NOTICE:  btree_gin failed with permission denied to create extension "btree_gin"
NOTICE:  btree_gist failed with permission denied to create extension "btree_gist"
NOTICE:  citext failed with permission denied to create extension "citext"
NOTICE:  cube failed with permission denied to create extension "cube"
NOTICE:  dblink failed with permission denied to create extension "dblink"
NOTICE:  dict_int failed with permission denied to create extension "dict_int"
NOTICE:  dict_xsyn failed with permission denied to create extension "dict_xsyn"
NOTICE:  earthdistance failed with required extension "cube" is not installed
NOTICE:  file_fdw failed with permission denied to create extension "file_fdw"
NOTICE:  fuzzystrmatch failed with permission denied to create extension "fuzzystrmatch"
NOTICE:  hstore failed with permission denied to create extension "hstore"
NOTICE:  hstore_plperl failed with required extension "hstore" is not installed
NOTICE:  hstore_plperlu failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython2u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpython3u failed with required extension "hstore" is not installed
NOTICE:  hstore_plpythonu failed with required extension "hstore" is not installed
NOTICE:  insert_username failed with permission denied to create extension "insert_username"
NOTICE:  intagg failed with permission denied to create extension "intagg"
NOTICE:  intarray failed with permission denied to create extension "intarray"
NOTICE:  isn failed with permission denied to create extension "isn"
NOTICE:  jsonb_plperl failed with required extension "plperl" is not installed
NOTICE:  jsonb_plperlu failed with required extension "plperlu" is not installed
NOTICE:  jsonb_plpython2u failed with required extension "plpython2u" is not installed
NOTICE:  jsonb_plpython3u failed with required extension "plpython3u" is not installed
NOTICE:  jsonb_plpythonu failed with required extension "plpythonu" is not installed
NOTICE:  lo failed with permission denied to create extension "lo"
NOTICE:  ltree failed with permission denied to create extension "ltree"
NOTICE:  ltree_plpython2u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpython3u failed with required extension "ltree" is not installed
NOTICE:  ltree_plpythonu failed with required extension "ltree" is not installed
NOTICE:  moddatetime failed with permission denied to create extension "moddatetime"
NOTICE:  pageinspect failed with permission denied to create extension "pageinspect"
NOTICE:  pg_buffercache failed with permission denied to create extension "pg_buffercache"
NOTICE:  pg_freespacemap failed with permission denied to create extension "pg_freespacemap"
NOTICE:  pg_prewarm failed with permission denied to create extension "pg_prewarm"
NOTICE:  pg_stat_statements failed with permission denied to create extension "pg_stat_statements"
NOTICE:  pg_trgm failed with permission denied to create extension "pg_trgm"
NOTICE:  pg_visibility failed with permission denied to create extension "pg_visibility"
NOTICE:  pgcrypto failed with permission denied to create extension "pgcrypto"
NOTICE:  pgrowlocks failed with permission denied to create extension "pgrowlocks"
NOTICE:  pgstattuple failed with permission denied to create extension "pgstattuple"
NOTICE:  SUCCESS for plperl
NOTICE:  plperlu failed with permission denied to create extension "plperlu"
NOTICE:  plpgsql failed with extension "plpgsql" already exists
NOTICE:  plpython3u failed with permission denied to create extension "plpython3u"
NOTICE:  postgres_fdw failed with permission denied to create extension "postgres_fdw"
NOTICE:  refint failed with permission denied to create extension "refint"
NOTICE:  seg failed with permission denied to create extension "seg"
NOTICE:  sslinfo failed with permission denied to create extension "sslinfo"
NOTICE:  tablefunc failed with permission denied to create extension "tablefunc"
NOTICE:  tcn failed with permission denied to create extension "tcn"
NOTICE:  tsm_system_rows failed with permission denied to create extension "tsm_system_rows"
NOTICE:  tsm_system_time failed with permission denied to create extension "tsm_system_time"
NOTICE:  unaccent failed with permission denied to create extension "unaccent"
NOTICE:  xml2 failed with permission denied to create extension "xml2"
DO

If you want to install an extension that only creates objects you are allowed to create anyway, that would succeed. Probably the best way to handle extensions, though, is to install all the required ones by default and then grant permissions to the users who need them.
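
A minimal sketch of that approach, installing a couple of example extensions into template1 so that every new database inherits them (the extension list is just an illustration, adjust it to your requirements):

postgres=# \c template1
You are now connected to database "template1" as user "postgres".
template1=# create extension if not exists hstore;
CREATE EXTENSION
template1=# create extension if not exists pgcrypto;
CREATE EXTENSION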

This article Adding PostgreSQL extensions without being super user? appeared first on Blog dbi services.

Make Oracle database simple again!

Yann Neuhaus - Wed, 2020-01-22 09:30
Introduction

Let’s have a look at how to make Oracle database as simple as it was before.

Oracle database is a great piece of software. Yes, it's quite expensive, but it's still the reference, and most companies can find a configuration that fits their needs according to their budget. Another complaint about Oracle is the complexity: nothing is really simple, and you'll need skillful DBAs to deploy, manage, upgrade and troubleshoot your databases. But complexity is sometimes caused by wrong decisions made without the necessary knowledge, mainly because some choices add significant complexity compared to others.

The goal

Why do things need to be simple?

Obviously, simplification means:

  • easier troubleshooting
  • more “understandable by the others”
  • reinstallation made possible in case of big troubles
  • avoiding bugs related to the mix of multiple components
  • less work, because you probably have enough work with migration, patching, performance, …
  • more reliability, because fewer components mean fewer problems
On the hardware side

Rules for simplifying on the hardware side are:

  • Choose the same hardware for all your environments (DEV/TEST/QUAL/PROD/…): same server models, same CPU family, same revision. Make only slight variations on memory amount, number of disks and processor cores configuration if needed. Order all the servers at the same time. If a problem is related to hardware, you will be able to test the fix on a less important environment before going on production
  • Don’t use SAN: SAN is very nice, but SAN is not the performance guarantee you expect. Adopt local SSD disks: NVMe type SSDs have amazing speed, they are a true game changer in today’s database performance. Getting rid of the SAN is also getting rid of multipathing, resource sharing, complex troubleshooting, external dependencies and so on
  • Provision very large volumes for data: dealing with space pressure is not the most interesting part of your job, and it's time consuming. You need 4TB of disks? Order 12TB and you'll be ready for every situation, even those not planned. For sure it's more expensive, but adding disks afterwards is not always that simple. It makes me think about a customer case where trying to add a single disk led to a nightmare (production down for several days)
  • Consider ODA appliances (Small or Medium): even if it’s not simplifying everything, at least hardware is all that you need and is dedicated to Oracle database software
  • Think consolidation: Oracle database has a strong database isolation thus leading to easy consolidation. Consolidate to limit the number of servers you need to manage is also simplifying your environment
  • Avoid virtualization: without even talking about the license aspect, virtualization for sure adds an underlying layer of complexity
On the system side

Some rules are also good to know regarding the system:

  • Go for Redhat or Oracle Linux: mainly because it’s the most common OS for Oracle databases. Releases and patches are always available for Linux first. UNIX and Windows are decreasing in popularity for Oracle Databases these past 10 years
  • Same OS: please keep your operating systems strictly identical from development to production. If you decide to upgrade the OS, do that first on TEST/DEV and finish with the production servers without waiting months. And never update straight from the internet repositories, as the packages can be different each time you update a system
  • Limit the number of filesystems for your databases: 1 big oradata and 1 big FRA are enough on SSD. You don't need to slice everything as we did before, and slicing always wastes space
On the software side

A lot of things should be done, or not done regarding software:

  • Install the same Oracle version (release + patch) and use the same tree everywhere. Use OFA (/u01/…) or not but be consistent
  • Limit the Oracle versions in use: inform your software vendors that your platform cannot support versions that are too old, especially non-terminal releases like 11.2.0.3. 19c, 12.1.0.2 and possibly 11.2.0.4 are the only recommended versions to deploy
  • Don’t use ASM: because ext4 is fine and SSDs now bring you maximum performance even on a classic filesystem. ASM will always be linked to Grid Infrastructure making dependencies between the DB Homes and that Grid stack, making patching much more complex
  • Don’t use RAC: because most of your applications cannot correctly manage high availability. RAC is much more complex compared to single instance databases. Not choosing RAC is getting rid of interconnect, cluster repository, fusion cache for SGA sharing, SAN or NAS technologies, split brains, scan listeners and complex troubleshooting. Replacing RAC with Data Guard or Dbvisit standby is the new way of doing sort of high availability without high complexity
Simplify backup configuration

For sure you will use RMAN, but how to simplify backups with RMAN?

  • Use the same backup script for all the databases
  • Use the same backup frequency for each database because when you need to restore, you’d better have a fresh backup
  • Vary only the retention from one database to another
  • Backup to disk (the most convenient being a big NFS share) and without any specific library (backup your /backup filesystem later with your enterprise backup tool if needed)
  • Provision a filesystem large enough that you never need to delete backups that are still within the retention period (a sketch of such a shared configuration follows this list)
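
To make that concrete, here's a minimal sketch of the kind of one-size-fits-all RMAN configuration such a shared script could rely on (paths and retention values are only examples):

RMAN> configure controlfile autobackup on;
RMAN> configure retention policy to recovery window of 14 days;
RMAN> configure channel device type disk format '/backup/%d/%d_%T_%U.bkp';
RMAN> backup database plus archivelog delete input;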

Using the same backup strategy means being able to use the same restore procedure on all databases because you always need a quick restore of a broken database.

Always backup the controlfile and spfile on the standby databases: the resulting backupset has a very small footprint and makes it easier to restore the standby using database backupsets from the primary, without the need for duplication.
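
A minimal sketch of that standby-side backup (the format strings are only illustrations):

RMAN> backup current controlfile format '/backup/%d/%d_ctl_%T_%U.bkp';
RMAN> backup spfile format '/backup/%d/%d_spf_%T_%U.bkp';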

Consider an RMAN catalog only if you have enough databases to justify managing one.

Simplify database management and configuration
  • Create scripts for database configuration and tablespace creation (for example: configure_SID.sql and tablespaces_SID.sql) to be able to reconfigure the same database elsewhere
  • Don't create separate grid and oracle users if you plan to use Grid Infrastructure/ASM: as a DBA you probably manage both the ASM and the database instances. Instead of losing time switching between these 2 users, configure only one oracle user for both
  • Never use graphical tools to create a database, deploy an appliance or configure something: screenshots are far less convenient than pure text commands, which are easily repeatable and scriptable
  • Use OMF: configure only db_create_file_dest and db_recovery_file_dest and Oracle will multiplex the controlfile and the redologs in these areas. OMF is also naming datafiles for you: there is no need for manual naming, who really cares about the name of the datafiles?
  • Don’t use multitenant: multitenant is fine but it’s been years we’re living with non-CDB databases and it works like a charm. You can still use non-CDB architecture in 19c, so multitenant is not mandatory even on this latest version. Later migration from non-CDB to pluggable database is quite easy, you will be able to use multitenant later
  • Keep your spfile clean: don't specify unused parameters or parameters that are explicitly set to their default value. Remove these parameters from the spfile using ALTER SYSTEM RESET parameter SCOPE=spfile SID='*'; (a sketch follows this list)
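
A minimal sketch combining the OMF and spfile-cleanup advice above (the directories and the parameter being reset are hypothetical examples):

alter system set db_create_file_dest = '/u02/oradata' scope=both;
alter system set db_recovery_file_dest_size = 2048G scope=both;
alter system set db_recovery_file_dest = '/u03/fra' scope=both;
alter system reset optimizer_index_cost_adj scope=spfile sid='*';
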
Simplify patching

Patching can also be simplified:

  • Patch once a year, because you need to patch, but you don’t need to spend all your time applying each PSU every 3 months
  • Start with test/dev databases and take the time to test from the application
  • Don't wait too long to patch the other environments: production should be patched a few weeks after the first patched environment
Simplify Oracle*Net configuration

Simplifying also concerns Oracle*Net configuration:

  • Avoid configuring multiple listeners on a system because one is enough for all your databases
  • Put your Oracle*Net configuration files in /etc, because you don't want multiple files in multiple homes (a sketch follows this list)
  • Keep your Oracle*Net configuration files clean and organized for increased readability
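
A minimal sketch of that idea, using the TNS_ADMIN environment variable so that every Oracle home and client on the server picks up the same files (set it in the profile of the oracle user):

export TNS_ADMIN=/etc
# /etc then holds a single listener.ora, sqlnet.ora and tnsnames.ora for the whole server
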
Make your database movable

Last but not least, one of the biggest mistakes is to create a strong dependency between a database and a system. How do you make your database easily movable? By configuring a standby database and using Data Guard or Dbvisit standby. Moving your database to another server is then done within a few minutes with a single SWITCHOVER operation.
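
With a Data Guard broker configuration in place, for example, the move really is a one-liner (the database name below is hypothetical):

DGMGRL> switchover to mydb_site2;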

Using standby databases make your life easier for all of these purposes:

  • you need to move your server to another datacenter
  • a power outage happened on one site
  • you need to update hardware or software
  • you suspect a hardware problem impacting the database

Don’t only create standbys for production databases: even development databases are some kind of production for developers. If a database cannot be down for 1 day, you need a standby database.

The finest configuration is not to dedicate one server to the primaries and one server to the standbys, but to dispatch the primaries between 2 identical servers on 2 sites, each database having a preferred server for its primary and the standby being on the opposite server.

Conclusion

It’s so easy to increase complexity without any good reason. Simplifying is the power of saying NO. No to interesting features and configurations that are not absolutely necessary. All you need for your databases is reliability, safety, availability, performance. Simplicity helps you in that way.

This article Make Oracle database simple again! appeared first on Blog dbi services.

Philosophy 23

Jonathan Lewis - Wed, 2020-01-22 08:46

It’s a long time since I wrote a note with the “philosophy” theme, but a recent complaint about Oracle prompted me to suggest the following thought:

“If you write SQL that is technically incorrect it’s not Oracle’s fault if sometimes the SQL completes without an error.”

Consider the following fragment of code:

drop table t1 purge;

create table t1 (n1 number, n2 number);
insert into t1 values (1,1);
commit;

select n1 from t1 where n2 = to_number('x');
select n1 from t1 where n2 = to_number('x') and n1 = 2;

select 
        case 
                when 
                        0 in  (select n1 from t1 where n2 = to_number('x'))
                then
                        1
                else
                        0
        end
from
        dual
;

Clearly the first query must raise an error because ‘x’ can’t be converted to a number (until Oracle supplies a format option to read it as a Roman Numeral).

Clearly the second query must raise an error because it’s just the first query with an extra predicate on the end.

Clearly the third query must raise an error because it’s going to execute a subquery that must raise an error.

Here’s the output from running the code from SQL*Plus (on 12.2.0.1).

Table dropped.


Table created.


1 row created.


Commit complete.

select n1 from t1 where n2 = to_number('x')
                                       *
ERROR at line 1:
ORA-01722: invalid number



no rows selected


CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END
--------------------------------------------------------------
                                                             0

1 row selected.

The first query fails with the expected conversion error: ORA-01722: invalid number. The second and third queries run to completion because SQL is a declarative language, not a procedural language, and the optimizer is allowed to transform your query in any way that it thinks might make it complete more quickly.

Here’s the execution plan – with predicate information – for the second query:

-------------------------------------+-----------------------------------+
| Id  | Operation          | Name    | Rows  | Bytes | Cost  | Time      |
-------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT   |         |       |       |     2 |           |
| 1   |  TABLE ACCESS FULL | T1      |     1 |    26 |     2 |  00:00:01 |
-------------------------------------+-----------------------------------+

Predicate Information:
----------------------
1 - filter(("N1"=2 AND "N2"=TO_NUMBER('x')))

The optimizer has decided that it's more efficient to test for the constant 2 than it is to call the to_number() function, so it evaluated n1 = 2 first for each row and never had to check the second predicate because nothing got past the first.

The explanation for the successful completion of the third query is slightly different, but again it revolves around transforming for efficiency. Oracle will (very often) convert an IN subquery to an EXISTS subquery. In my example the resulting SQL looks like this (taken from the CBO (10053) trace file, with some cosmetic enhancement):


SELECT 
        CASE
                WHEN 
                        EXISTS (SELECT 0 FROM T1 WHERE N1=0 AND N2=TO_NUMBER('x')) 
                THEN 1 
                ELSE 0 
        END  "CASEWHEN0IN(SELECTN1FROMT1WHEREN2=TO_NUMBER('X'))THEN1ELSE0END" 
FROM
        DUAL

Note how the “guaranteed to fail” subquery has an extra predicate added as the IN subquery is transformed into an EXISTS subquery and, just like the previous example, the extra predicate is applied before the to_number() predicate, and there’s no data to match the first predicate so the to_number() function never gets called and never gets a chance to raise an exception.

You could argue, of course, that the optimizer should spot the attempt to generate a constant and evaluate it (if possible) at parse time and raise the error before the runtime engine even sees the query – and it wouldn't be too hard to make a case for that – but it would only take a few seconds of thought to create slightly more complex examples to demonstrate the point I've been making.

Bottom line(s):

1) Use the correct datatypes for your data.

2) Don’t assume that a statement that raises an exception in some circumstances will result in an exception if it is subsequently embedded in a more complex statement. The optimizer is non-procedural and may transform your statement in a way that bypasses your bad design.

 

Footnote:

I probably shouldn’t say this – it will only encourage someone to do the wrong thing – but for the very simplest examples you can show that the ordered_predicates hint still works:


SQL> select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2;
select /*+ ordered_predicates */  n1 from t1 where n2 = to_number('x') and n1 = 2
                                                                  *
ERROR at line 1:
ORA-01722: invalid number

[AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator

Online Apps DBA - Wed, 2020-01-22 07:07

AZ-103 | Roles and Responsibilities as a Microsoft Azure Administrator Want to know the Roles and Responsibilities of a Microsoft Azure Administrator or knowledge and role-specific skills he/she should possess for bigger & better job opportunities? Check out our blog at https://k21academy.com/az10312 which covers: ▪ What Is Microsoft Azure Certification? ▪ Why Is Azure Certification […]

The post [AZ-103] Roles And Responsibilities Of A Microsoft Azure Administrator appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle

Oracle Press Releases - Wed, 2020-01-22 07:00
Press Release
Manquehue Hotels Expands Properties, Personalizes Guest Experiences with Oracle Leading hospitality technology fuels hotel chain growth, while fostering guest loyalty

Redwood Shores, Calif.—Jan 22, 2020

For 36 years, Manquehue Hotels has been delighting guests with unique properties and top-notch service. When the group looked to expand its footprint in Chile with three new hotels, they turned to Oracle Hospitality to help support this rapid growth. With Oracle, Manquehue was able to efficiently establish internal operations at the new hotels and enhance the overall guest experience for its customers.

“As we continue to grow our hotels and chain, we need a system in place that provides us with standardized, timely and online information to better serve our customers,” said Abner Cayul, Hotel Manquehue corporate general manager. “Oracle Hospitality was the only solution that could help us expand to be one of the most prominent hotel chains in Chile. We look forward to utilizing their offerings to fuel our continued growth.”

Oracle Hospitality OPERA Property features a range of innovations, including deeper customer insights and comprehensive functionality for all areas of hotel management. With it, Hotel Manquehue can create in-depth customer profiles to tailor experiences to each customer’s specific preferences, such as suggesting certain dining venues and luxury items. A personal touch that has helped the group increase customer loyalty. Hotel Manquehue also relies on OPERA for improved internal operations for employees, offering them immediate access to 24/7 support and connection to electronic invoices for easy back-office reporting.

The chain also uses the Oracle MICROS Simphony Cloud Point of Sale system to support a wide range of food and beverage operations within the hotel, including restaurant and room service amenities. Not only does MICROS Simphony streamline employee ordering and delivery, it also provides key data on what items are most popular and when. Conversely, it provides insight into which items are not selling, providing more informed decisions on menu planning to better serve guests while improving inventory management and reducing waste.

With hoteliers increasingly focusing on restaurant operations for revenue growth, it’s imperative for them to tap the advantages of an integrated property management and point-of-sale platform. The combination of OPERA and MICROS Simphony provides hoteliers with a comprehensive view of all their operations – including food and beverage – and insightful analysis of the contributions of each. Such integration also yields one-stop access to 360-degree guest profiles, giving staff detailed information about guests’ preferences and behavior anywhere on property.

“From their favorite dining options to customized rooms, today’s guests want an unprecedented level of personalization when enjoying their visit,” said David Meltzer, Group Vice President, Oracle Hospitality. “Hotel Manquehue understands this and uses the latest technology to better understand and serve the needs and desires of their customers every step of the way. As they expand throughout Chile, Oracle will continue to provide the intelligence and tools both management and front-line employees will need to ensure seamless, unforgettable guest experiences that create loyalty and bring guests back to their properties.”

Watch the Manquehue Hotels Video

In this video, Abner Cayul, corporate general manager of Manquehue Hotels shares how Oracle Hospitality Solutions is helping to support the hotel chain’s strategic growth.

Contact Info
Christine Allen
Oracle
+1.603.743.4534
christine.allen@oracle.com
About Oracle Hospitality

Oracle Hospitality brings over 40 years of experience in providing technology solutions to independent hoteliers, global and regional chains, gaming, and cruise lines. We provide hardware, software, and services that allow our customers to act on rich data insights that deliver personalized guest experiences, maximize profitability and encourage long-term loyalty. Our solutions include platforms for property management, point-of-sale, distribution, reporting and analytics all delivered from the cloud to lower IT cost and maximize business agility. Oracle Hospitality’s OPERA is recognized globally as the leading property management platform and continues to serve as a foundation for industry innovation. Learn more about Oracle’s Hospitality Solutions here.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Christine Allen

  • +1.603.743.4534

Add a deployment to Oracle GoldenGate 19c Microservices

DBASolved - Tue, 2020-01-21 13:33

Once you have an up and running Oracle GoldenGate Microservices environment, there may come a time when you need to add another deployment to the environment.  Adding deployments is easily done using the Oracle GoldenGate Configuration Assistant (OGGCA).  In the video below, I show you how to add a deployment to an existing ServiceManager.  Additionally, I […]

The post Add a deployment to Oracle GoldenGate 19c Microservices appeared first on DBASolved.

Categories: DBA Blogs

If you’d like your ads to show on certain sites across the Internet, you can add these websites as:

VitalSoftTech - Tue, 2020-01-21 09:53

There are certain guesses you make about what would be the right course of action while displaying an ad. If you’d like to show your ads on certain sites across the Internet what should you add them as?  The obvious way is to include those websites in your ad. The question of how you should […]

The post If you’d like your ads to show on certain sites across the Internet, you can add these websites as: appeared first on VitalSoftTech.

Categories: DBA Blogs

Index Engineering

Jonathan Lewis - Mon, 2020-01-20 10:53

This is a case study based on a question that appeared on the Oracle Developer Community forum a few days ago.

What I’m aiming to present in this note is the pattern of thinking that you should adopt in cases like this. The final suggestion in this note isn’t necessarily the best answer to the question posed (at the time of writing the OP hadn’t supplied enough information to allow anyone to come up with a best solution), but the point of the exercise is to talk about the journey and (perhaps) remind you of some of the extreme engineering you can do with indexes.

The (massaged) problem statement is as follows:

I have a table of more than 200 million rows that is used for inserts, updates and queries. I have a query on this table and want to know what index I could create to speed up the query.

The supplied definition of the table was not consistent with the names used in the query, so I’ve had to do a little editing, but table, current indexes, and query were as follows:

rem
rem     Script:         extreme_indexing.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Jan 2020
rem

create table tbl (
        r_id                    varchar2(30) not null,
        c_id                    number,
        n_id                    varchar2(40),
        created_by              varchar2(30) not null,
        last_modified_by        varchar2(30),
        c_status                char(1),
        a_action                char(1),
        r_creation_dt           timestamp(6),
        cnt                     number(38)
)
;

create        index tbl_1 on tbl(cnt, r_creation_dt, c_id, a_action, last_modified_by);  
create        index tbl_2 on tbl(cnt, c_status, r_creation_dt);  
create bitmap index tbl_3 on tbl(c_status); 

select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        c_status in(
            'N',
            'F'
        )
and     cnt <= 5 -- > comment to avoid wordpress format issue
and     r_creation_dt is not null
group by
        cnt,
        r_creation_dt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        r_creation_dt
fetch 
        first 1000 rows only
;


The first thing to point out is that the bitmap index tbl_3 is almost certainly a bad idea – bitmaps and transactional activity do not mix. It seems quite likely that the OP in this case had read one of the many Internet notes that make the “not totally wrong” but very misleading statement “bitmap indexes are good when you have a small number of distinct values”, and applied the principle to a column that looks like a “status” column holding only a few distinct values.

Having got that error out of the way we can start to think about the query.  It’s using the (fairly new) “Fetch first N rows” syntax, which means we may have to find a lot of data and sort it before returning a subset: performance issues can be very deceptive in cases like this because we might want a small result set but have to do a large amount of work to get it.

In this case we’re after the first 1,000 rows – which makes you think that maybe there will be a lot of data satisfying the query. So we have two targets to meet to optimise the query:

  • acquire the data we need as efficiently as possible
  • post-process the data we acquire to derive the 1,000 rows as efficiently as possible

The query is just a single table access – which means we’re either going to do a full tablescan or find a good indexed access path, we don’t have to worry about join strategies.  So the first thing to consider is the volume (and scatter) of data that matches the predicates. If there’s only a “small” amount of data where “c_status in (‘N’,’F’) and cnt <= 5” then an index on – or starting with – (c_status, cnt) may be very helpful. (Note how I’ve specified the column with the equality predicate first – that’s part of a generic strategy for creating multi-column indexes.)

This, though, raises several questions that need to be answered:

  • How small is “small” ? In the context of 200 million rows, 100,000 is small; but if you had to visit 100,000 different blocks in the table and do 100,000 real single block reads from disc that might still be a very bad thing.
  • How many rows have status ‘N’, how many have status ‘F’, how many have cnt <= 5 ? Maybe a really tiny number of rows have cnt<=5 and lots have c_status in (‘N’,’F’) which could make this a case where ignoring the generic column-ordering strategy would be very effective.  Maybe the number of rows satisfying the individual conditions is high but the number satisfying the combination is very low.
  • Is this the ONLY combination of c_status and cnt that is of interest, or (for example) was 5 just a number picked as an example? Would different c_status values be of interest? Would some required combinations of c_status and cnt have to use completely different execution paths for the best performance?

I’m going to make some decisions in order to proceed – they may be totally wrong as far as the OP is concerned – so remember that this note is just for discussion purposes. Let’s assume that the common query is always exactly as stated. Perhaps it’s a query that runs every few minutes to clear up some outstanding work with the expectation that new rows matching the query keep appearing while older rows are processed, change status, and disappear from the result set. Let’s also assume that the result set is always “small”, and that it’s small because ‘N’ and ‘F’ are rare (even if the total number of rows with cnt <= 5 is large).

With these assumptions we could start by creating an index on (c_status, cnt), which gets us to exactly the rows we want from the table with no “throwaway” after visiting the table. Here’s the execution plan if that’s our choice of index (running on 12.2.0.1, and with an index() hint to force the use of the index when necessary); a sketch of the assumed index DDL follows the plan:

----------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |        |      1 |        |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  1 |  VIEW                           |        |      1 |   1000 |   1000 |00:00:00.03 |    1573 |     34 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY         |        |      1 |   6451 |   1000 |00:00:00.03 |    1573 |     34 |   219K|   219K|          |
|   3 |    SORT GROUP BY                |        |      1 |   6451 |   1001 |00:00:00.03 |    1573 |     34 |  1186K|   567K| 1054K (0)|
|   4 |     INLIST ITERATOR             |        |      1 |        |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  5 |      TABLE ACCESS BY INDEX ROWID| TBL    |      2 |  13743 |  13142 |00:00:00.02 |    1573 |     34 |       |       |          |
|*  6 |       INDEX RANGE SCAN          | TBL_I1 |      2 |  13743 |  13142 |00:00:00.01 |      33 |     34 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "R_CREATION_DT")<=1000)
   5 - filter("R_CREATION_DT" IS NOT NULL)
   6 - access((("C_STATUS"='F' OR "C_STATUS"='N')) AND "CNT"<=5)
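
The exact create index statement isn't shown above; a minimal sketch of the assumed DDL, following the “equality predicate first” strategy mentioned earlier, would be:

create index tbl_i1 on tbl(c_status, cnt);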

I’ve enabled rowsource_execution_statistics (alter session set statistics_level = all) and pulled my execution plan from memory. As you can see from the A-rows for the index range scan and table access by index rowid, I’ve identified and acquired exactly the rows from the table that might be relevant (all 13,142 of them), then I’ve done a sort group by of all that data, sorting in a way that means the rows will be produced in exactly the order I need for the windowing function that Oracle will use to select the 1,000 rows I want.

If you’re curious, here (courtesy of dbms_utility.expand_sql_text() but cosmetically enhanced) is the transformed SQL that was actually optimised and executed:

SELECT 
        A1.C_ID C_ID,A1.A_ACTION A_ACTION,A1.CNT CNT,A1.LAST_MODIFIED_BY LAST_MODIFIED_BY 
FROM  (
        SELECT 
                /*+ INDEX (A2) */ 
                A2.C_ID C_ID,
                A2.A_ACTION A_ACTION,
                A2.CNT CNT,
                A2.LAST_MODIFIED_BY LAST_MODIFIED_BY,
                A2.R_CREATION_DT rowlimit_$_0,
                ROW_NUMBER() OVER ( ORDER BY A2.R_CREATION_DT) rowlimit_$$_rownumber 
        FROM 
                TEST_USER.TBL A2 
        WHERE 
                (A2.C_STATUS='N' OR A2.C_STATUS='F') 
        AND     A2.CNT<=5 
        AND     A2.R_CREATION_DT IS NOT NULL 
        GROUP BY 
                A2.CNT,A2.R_CREATION_DT,A2.C_ID,A2.A_ACTION,A2.LAST_MODIFIED_BY,A2.C_STATUS
        ) A1 
WHERE 
        A1.rowlimit_$$_rownumber<=1000 
ORDER BY 
        A1.rowlimit_$_0

There are three main drawbacks to this choice of index.

  • I’ve acquired all the rows in the table that match the predicate even though I only really needed a subset
  • I’ve done a massive sort
  • I’ve created an index that includes every row in the table

Remember that the OP has a table of 200M rows, and we are assuming (pretending) that only a very small fraction of them match the initial predicates. Creating an index on 200M rows because we’re interested in only a few tens of thousands is wasteful of space and (given we have a “status” column) probably wasteful of processing resources as the status moves through several values. So I’m going to address that issue first. Let’s create a “function-based” index that ignores most of the data, and change the code to take advantage of that index – but since this is 12c, let’s do it by adding a virtual column and indexing that column.


alter table tbl add nf_r_creation_dt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then r_creation_dt
                end
        ) virtual
/

create index tbl_i2 on tbl(nf_r_creation_dt)
/

I’ve introduced an invisible virtual column called nf_r_creation_dt (nf_ for status N/F) which uses a CASE expression matching the original predicate to return the r_creation_dt for rows that match and null for all the other (ca. 200M) rows. So when I create an index on the column the only entries in the index are for rows that I might want to see.

I have to edit the SQL to match – which simply means changing every appearance of r_creation_dt to nf_r_creation_dt, and eliminating the original predicate giving the following text and execution plan:


select
        /*+ index(tbl) */
        c_id,
        a_action,
        cnt,
        last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        cnt,
        c_id,
        a_action,
        last_modified_by,
        c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |      1 |        |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  1 |  VIEW                          |        |      1 |   1000 |   1000 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  2 |   WINDOW NOSORT STOPKEY        |        |      1 |     48 |   1000 |00:00:00.02 |   13139 |     35 | 73728 | 73728 |          |
|   3 |    SORT GROUP BY               |        |      1 |     48 |   1001 |00:00:00.02 |   13139 |     35 |  1116K|   556K|  991K (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID| TBL    |      1 |   2500 |  13142 |00:00:00.02 |   13139 |     35 |       |       |          |
|*  5 |      INDEX FULL SCAN           | TBL_I2 |      1 |  13142 |  13142 |00:00:00.01 |      36 |     35 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   5 - filter("NF_R_CREATION_DT" IS NOT NULL)

The plan shows an index full scan on the new index. Since the index holds only those rows that might be interesting this isn’t a threat. However we still have to visit all the matching rows in the table – and that might result in more random I/O than we like. So the next step in enhancing performance is to consider adding all the columns we want to the index. There’s a little problem with that: if we add the columns as they are we will go back to having an index entry for every single row in the table, so we need to use the same CASE mechanism to create more virtual columns:

alter table tbl add nf_c_status invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_status
                end
        ) virtual
/

alter table tbl add nf_last_modified_by invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then last_modified_by
                end
        ) virtual
/

alter table tbl add nf_a_action invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then a_action
                end
        ) virtual
/

alter table tbl add nf_c_id invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then c_id
                end
        ) virtual
/

alter table tbl add nf_cnt invisible 
        generated always as (
                case
                        when c_status in ('N','F') and cnt <= 5
                        then cnt
                end
        ) virtual
/

create index tbl_i3 on tbl(
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
)
;

It looks like a bit of a pain to go through all this rigmarole to get all those columns that are null most of the time but echo the original values when the rows match our original predicate; and then we have to modify the query to match:


select
        /*+ index(tbl) */
        nf_c_id,
        nf_a_action,
        nf_cnt,
        nf_last_modified_by
from
        tbl
where
        nf_r_creation_dt is not null
group by
        nf_r_creation_dt,
        nf_cnt,
        nf_c_id,
        nf_a_action,
        nf_last_modified_by,
        nf_c_status
order by
        nf_r_creation_dt
fetch 
        first 1000 rows only    -- 1,000 rows in the original
/

But the big payoff comes from the execution plan:


----------------------------------------------------------------------------------------------------
| Id  | Operation              | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |        |      1 |        |   1000 |00:00:00.01 |      74 |     12 |
|*  1 |  VIEW                  |        |      1 |   1000 |   1000 |00:00:00.01 |      74 |     12 |
|*  2 |   WINDOW NOSORT STOPKEY|        |      1 |   2500 |   1000 |00:00:00.01 |      74 |     12 |
|   3 |    SORT GROUP BY NOSORT|        |      1 |   2500 |   1001 |00:00:00.01 |      74 |     12 |
|*  4 |     INDEX FULL SCAN    | TBL_I3 |      1 |   2500 |   1003 |00:00:00.01 |      74 |     12 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=1000)
   2 - filter(ROW_NUMBER() OVER ( ORDER BY "NF_R_CREATION_DT")<=1000)
   4 - filter("NF_R_CREATION_DT" IS NOT NULL)

Notice how the SORT GROUP BY operation is a NOSORT, and the WINDOW operation is both NOSORT and STOPKEY?

We’ve got the smallest index possible that only gets modified as rows move into, or out of, the interesting state, and when we run the query Oracle does a full scan of the index, maintaining “running totals”, but stops as soon as it has aggregated enough results.
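
If you want to confirm just how small that index really is, here is a quick sketch (note that a validate structure takes a lock on the underlying table, so this is not something to run casually on a busy production system):

analyze index tbl_i3 validate structure;

select
        name, height, lf_rows, lf_blks, btree_space
from
        index_stats
/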

tl;dr

For very special cases it’s really amazing what you can (sometimes) do – if you can modify the code – with carefully engineered indexes to minimise the work done by a query AND the work done maintaining the infrastructure needed for that query. Virtual columns are a fantastic aid, especially now that 12c allows them to be invisible.

Video : Schema Only Accounts in Oracle Database 18c Onward

Tim Hall - Mon, 2020-01-20 01:53

Today’s video is a demonstration of schema only accounts, introduced in Oracle Database 18c.

This is based on the following articles.

The star of today’s video is Paul Vallee, of Pythian and Tehama fame.

Cheers

Tim…

Video : Schema Only Accounts in Oracle Database 18c Onward was first posted on January 20, 2020 at 8:53 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

PostgreSQL 13: parallel vacuum for indexes

Yann Neuhaus - Mon, 2020-01-20 01:06

Because of its implementation of MVCC, PostgreSQL needs a way to clean up old/dead rows, and this is the responsibility of vacuum. Up to PostgreSQL 12 this is done table by table and index by index. There are a lot of parameters to fine-tune autovacuum, but none of them allowed vacuum to run in parallel against a relation. The only option you had to let autovacuum do more work in parallel was to increase autovacuum_max_workers so that more relations could be worked on at the same time; working against multiple indexes of the same table was not possible. Once PostgreSQL 13 is released this will change.
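
Just as a reminder, this is the only knob you had up to PostgreSQL 12 (a minimal sketch; the default is 3 and changing it requires a restart of the instance):

postgres=# show autovacuum_max_workers;
postgres=# alter system set autovacuum_max_workers = 6;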

When you have a look at the help of vacuum you will notice a new option:

postgres=# \h vacuum
Command:     VACUUM
Description: garbage-collect and optionally analyze a database
Syntax:
VACUUM [ ( option [, ...] ) ] [ table_and_columns [, ...] ]
VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ table_and_columns [, ...] ]

where option can be one of:

    FULL [ boolean ]
    FREEZE [ boolean ]
    VERBOSE [ boolean ]
    ANALYZE [ boolean ]
    DISABLE_PAGE_SKIPPING [ boolean ]
    SKIP_LOCKED [ boolean ]
    INDEX_CLEANUP [ boolean ]
    TRUNCATE [ boolean ]
    PARALLEL integer

and table_and_columns is:

    table_name [ ( column_name [, ...] ) ]

URL: https://www.postgresql.org/docs/devel/sql-vacuum.html

By providing a positive integer to the “PARALLEL” option you tell vacuum how many background workers should be used to vacuum indexes for a given table (0 will disable parallel processing). Let’s do a small test setup to demonstrate this:

postgres=# create table t1 as select i as a, i::text as b, now() as c from generate_series(1,3000000) i;
SELECT 3000000
postgres=# create index i1 on t1(a);
CREATE INDEX
postgres=# 
postgres=# create index i2 on t1(b);
CREATE INDEX
postgres=# create index i3 on t1(c);
CREATE INDEX

One table, four indexes. If we go for parallel 4 we should see four background workers doing the work against the indexes:

postgres=# update t1 set a=5,b='ccc',c=now() where mod(a,5)=0;
UPDATE 600000
postgres=# vacuum (parallel 4) t1;
VACUUM

As the table and the indexes are quite small we need to be quick to catch them, but at least two parallel workers show up in the process list for the vacuum operation:

postgres 16688 15925 13 07:30 ?        00:01:07 postgres: postgres postgres [local] VACUUM
postgres 19184 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   
postgres 19185 15925  0 07:39 ?        00:00:00 postgres: parallel worker for PID 16688   

Nice. Please note that indexes are only considered for parallel vacuum when they meet the min_parallel_index_scan_size criterion. There is no parallel processing for a FULL vacuum either.
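
If you want to check which of your indexes are big enough to be considered, a quick sketch like this compares the current setting (512kB is the default) with the size of the indexes:

postgres=# show min_parallel_index_scan_size;
postgres=# select relname, pg_size_pretty(pg_relation_size(oid)) as index_size from pg_class where relname in ('i1','i2','i3','i4');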

Btw: the maximum value currently is 1024, and the error message tells you the allowed range if you step outside it:

postgres=# vacuum (parallel -4) t1;
ERROR:  parallel vacuum degree must be between 0 and 1024
LINE 1: vacuum (parallel -4) t1;

You can also see the parallel work in the verbose output:

postgres=# vacuum (parallel 4, verbose true) t1;
INFO:  vacuuming "public.t1"
INFO:  launched 2 parallel vacuum workers for index vacuuming (planned: 2)
INFO:  scanned index "i2" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.24 s, system: 0.06 s, elapsed: 0.89 s
INFO:  scanned index "i1" to remove 600000 row versions
DETAIL:  CPU: user: 0.17 s, system: 0.10 s, elapsed: 1.83 s
INFO:  scanned index "i3" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.16 s, system: 0.14 s, elapsed: 1.69 s
INFO:  scanned index "i4" to remove 600000 row versions by parallel vacuum worker
DETAIL:  CPU: user: 0.25 s, system: 0.09 s, elapsed: 1.17 s
INFO:  "t1": removed 600000 row versions in 20452 pages
DETAIL:  CPU: user: 0.17 s, system: 0.16 s, elapsed: 1.43 s
INFO:  index "i1" now contains 3000000 row versions in 14308 pages
DETAIL:  600000 index row versions were removed.
1852 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i2" now contains 3000000 row versions in 14305 pages
DETAIL:  600000 index row versions were removed.
1851 index pages have been deleted, 640 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i3" now contains 3000000 row versions in 14326 pages
DETAIL:  600000 index row versions were removed.
3941 index pages have been deleted, 1603 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  index "i4" now contains 3000000 row versions in 23391 pages
DETAIL:  600000 index row versions were removed.
5527 index pages have been deleted, 2246 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "t1": found 600000 removable, 3000000 nonremovable row versions in 21835 out of 22072 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 132 unused item identifiers.
Skipped 0 pages due to buffer pins, 237 frozen pages.
0 pages are entirely empty.
CPU: user: 0.75 s, system: 0.36 s, elapsed: 5.07 s.
INFO:  vacuuming "pg_toast.pg_toast_16392"
INFO:  index "pg_toast_16392_index" now contains 0 row versions in 1 pages
DETAIL:  0 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_toast_16392": found 0 removable, 0 nonremovable row versions in 0 out of 0 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 499
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM

The post PostgreSQL 13: parallel vacuum for indexes appeared first on Blog dbi services.

Running the (Segment) Space Advisor - on a Partitioned Table

Hemant K Chitale - Sat, 2020-01-18 08:30
Here is a quick demo on running the Segment Space Advisor manually

I need to start with the ADVISOR privilege

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:10 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: system
Enter password:
Last Successful login time: Sat Jan 18 2020 22:00:32 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> grant advisor to hemant;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production


I can then connect with my account to run the Advisor

$sqlplus

SQL*Plus: Release 12.2.0.1.0 Production on Sat Jan 18 22:02:35 2020

Copyright (c) 1982, 2016, Oracle. All rights reserved.

Enter user-name: hemant
Enter password:
Last Successful login time: Sat Jan 18 2020 21:50:05 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL>
SQL> DECLARE
l_object_id NUMBER;
l_task_name VARCHAR2(32767) := 'Advice on My SALES_DATA Table';

BEGIN
DBMS_ADVISOR.create_task (
advisor_name => 'Segment Advisor',
task_name => l_task_name
);

DBMS_ADVISOR.create_object (
task_name => l_task_name,
object_type => 'TABLE',
attr1 => 'HEMANT',
attr2 => 'SALES_DATA',
attr3 => NULL,
attr4 => NULL,
attr5 => NULL,
object_id => l_object_id
);

DBMS_ADVISOR.set_task_parameter (
task_name => l_task_name,
parameter => 'RECOMMEND_ALL',
value => 'TRUE');

DBMS_ADVISOR.execute_task(task_name => l_task_name);
end;
/

2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29
PL/SQL procedure successfully completed.

SQL>


I can then review the advice :

SQL> set serveroutput on
begin
FOR cur_rec IN (SELECT f.impact,
o.type,
o.attr1,
o.attr2,
o.attr3,
o.attr4,
f.message,
f.more_info
FROM dba_advisor_findings f, dba_advisor_objects o
WHERE f.object_id = o.object_id
AND f.task_name = o.task_name
AND f.task_name = 'Advice on My SALES_DATA Table'
ORDER BY f.impact DESC)
LOOP
DBMS_OUTPUT.put_line('..');
DBMS_OUTPUT.put_line('Type : ' || cur_rec.type);
DBMS_OUTPUT.put_line('Schema : ' || cur_rec.attr1);
DBMS_OUTPUT.put_line('Table Name : ' || cur_rec.attr2);
DBMS_OUTPUT.put_line('Partition Name : ' || cur_rec.attr3);
DBMS_OUTPUT.put_line('Tablespace Name : ' || cur_rec.attr4);
DBMS_OUTPUT.put_line('Message : ' || cur_rec.message);
DBMS_OUTPUT.put_line('More info : ' || cur_rec.more_info);
END LOOP;
end;
/

SQL> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 ..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2015
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2016
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2017
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:1016: Reclaimable Space :64520:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2018
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_2019
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:
..
Type : TABLE PARTITION
Schema : HEMANT
Table Name : SALES_DATA
Partition Name : P_MAXVALUE
Tablespace Name : USERS
Message : The free space in the object is less than 10MB.
More info : Allocated Space:65536: Used Space:8192: Reclaimable Space :57344:

PL/SQL procedure successfully completed.

SQL>


Thus, it actually reports for each Partition in the table.
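
If I decide to act on the Advisor's findings, reclaiming the space for one of the reported Partitions is straightforward (only a sketch here : Row Movement must be enabled for the Shrink, and I may want to disable it again afterwards) :

SQL> alter table hemant.sales_data enable row movement;
SQL> alter table hemant.sales_data modify partition p_2016 shrink space;
SQL> alter table hemant.sales_data disable row movement;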


Note : Script based on script by Tim Hall  (@oraclebase)  at https://oracle-base.com/dba/script?category=10g&file=segment_advisor.sql


Categories: DBA Blogs

Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM

Yann Neuhaus - Sat, 2020-01-18 00:00

In the last post I described how you can create your own PostgreSQL image in Nutanix Era. In Nutanix wording this is a “Software profile”. This profile can now be used to deploy PostgreSQL VMs with just a few clicks or by using the API. In this post we’ll look at how this can be done and if it is really as easy as Nutanix promises. We’ll be doing it by using the graphical console first and then by using the API. Let’s go.

For deploying new databases we obviously need to go to the database section of Era:

For now we are going to deploy a single instance:

Our new software profile is available and this is what we use. Additionally a public ssh key should be provided for accessing the node:

Provide the details for the new instance:

Time machine allows you to go back in time and creates automatic storage snapshots according to the schedule you define here:

SLAs define retention policies for the snapshots. I will not cover that here.

Kick it off and wait for the task to finish:

A few minutes later it is done:

Our new database is ready:

From a deployment point of view this is super simple: A few clicks and a new PostgreSQL server is ready to use in minutes. The API call to do the same on the command line would be this:

curl -k -X POST \
	https://10.38.11.9/era/v0.8/databases/provision \
	-H 'Content-Type: application/json' \
	-H 'Authorization: Basic YWRtaW46bngyVGVjaDAwNyE=' \
	-d \
	'{"databaseType":"postgres_database","databaseName":"db2","databaseDescription":"db2","clusterId":"ccdab636-2a11-477b-a358-2b8db0382421","softwareProfileId":"aa099964-e17c-4264-b047-00881dc5e2f8","softwareProfileVersionId":"7ae7a44c-d7c5-46bc-bf0f-f9fe7665f537","computeProfileId":"fb1d20bb-38e4-4964-852f-6209990402c6","networkProfileId":"8ed5214c-4e2a-4ce5-a899-a07103bc60cc","dbParameterProfileId":"3679ab5b-7688-41c4-bcf3-9fb4491544ea","newDbServerTimeZone":"Europe/Zurich","timeMachineInfo":{"name":"db2_TM","description":"","slaId":"4d9dcd6d-b6f8-47f0-8015-9e691c1d3cf4","schedule":{"snapshotTimeOfDay":{"hours":1,"minutes":0,"seconds":0},"continuousSchedule":{"enabled":true,"logBackupInterval":30,"snapshotsPerDay":1},"weeklySchedule":{"enabled":true,"dayOfWeek":"FRIDAY"},"monthlySchedule":{"enabled":true,"dayOfMonth":"17"},"quartelySchedule":{"enabled":true,"startMonth":"JANUARY","dayOfMonth":"17"},"yearlySchedule":{"enabled":false,"dayOfMonth":31,"month":"DECEMBER"}},"tags":[],"autoTuneLogDrive":true},"provisionInfo":[{"name":"application_type","value":"postgres_database"},{"name":"nodes","value":"1"},{"name":"listener_port","value":"5432"},{"name":"proxy_read_port","value":"5001"},{"name":"proxy_write_port","value":"5000"},{"name":"database_size","value":10},{"name":"working_dir","value":"/tmp"},{"name":"auto_tune_staging_drive","value":true},{"name":"enable_synchronous_mode","value":false},{"name":"dbserver_name","value":"postgres-2"},{"name":"dbserver_description","value":"postgres-2"},{"name":"ssh_public_key","value":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDgJb4co+aaRuyAJv8F6fsz2YgO0ZHldX6qS22c813iDeUGWP/1bGhF598uODlgK5nx29sW2WTlcmorZuCgHC2BJfzhL1xsL5Ca94uAEg48MbA+1+Woe49mF6bPDJWLXqC0YUpCDBZI9VHnrij4QhX2r3G87W0WNnNww1POwIJN3xpVq/DnDWye+9ZL3s+eGR8d5f71iKnBo0BkcvY5gliOtM/DuLT7Cs7KkjPgY+S6l5Cztvcj6hGO96zwlOVN3kjB18RH/4q6B7YkhSxgTDMXCdoOf9F6BIhAjZga4lodHGbkMEBW+BJOnYnFTl+jVB/aY1dALECnh4V+rr0Pd8EL daw@ltdaw"},{"name":"db_password","value":"postgres"}]}'

As said, provisioning is easy. Now let’s see how the instance got created. Using the IP address that was assigned, the postgres user and the public ssh key we provided, we can connect to the VM:

ssh postgres@10.38.11.40
The authenticity of host '10.38.11.40 (10.38.11.40)' can't be established.
ECDSA key fingerprint is SHA256:rN4QzdL2y55Lv8oALnW6pBQzBKOInP+X4obiJrqvK8o.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.38.11.40' (ECDSA) to the list of known hosts.
Last login: Fri Jan 17 18:48:26 2020 from 10.38.11.9

The first thing I would try is to connect to PostgreSQL:

-bash-4.2$ psql postgres
psql (11.6 dbi services build)
Type "help" for help.

postgres=# select version();
                                                          version
----------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.6 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-39), 64-bit
(1 row)

postgres=#

Ok, that works. What I did before I created the Software profile in the last post was to install our DMK, which gets started automatically because the required settings are added to “.bash_profile” (you can see that in the last post because PS1 is not the default). But here I did not get our environment because Era has overwritten “.bash_profile”:

-bash-4.2$ cat .bash_profile
export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/u01/app/postgres/product/11/db_6/bin:/u01/app/postgres/product/11/db_6/bin
export PGDATA=/pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
export PGDATABASE=postgres
export PGUSER=postgres
export PGPORT=5432
export ANSIBLE_LIBRARY=/opt/era_base/era_engine/drivers/nutanix_era/era_drivers/AnsibleModules/bin

It would be great if the original “.bash_profile” were still there so custom adjustments are not lost.

Looking at the mountpoints:

-bash-4.2$ df -h
Filesystem                                                                                                                          Size  Used Avail Use% Mounted on
devtmpfs                                                                                                                            7.8G     0  7.8G   0% /dev
tmpfs                                                                                                                               7.8G  8.0K  7.8G   1% /dev/shm
tmpfs                                                                                                                               7.8G  9.7M  7.8G   1% /run
tmpfs                                                                                                                               7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/mapper/centos_centos7postgres12-root                                                                                            26G  2.4G   24G   9% /
/dev/sda1                                                                                                                          1014M  149M  866M  15% /boot
/dev/sdb                                                                                                                             27G   74M   26G   1% /u01/app/postgres/product/11/db_6
tmpfs                                                                                                                               1.6G     0  1.6G   0% /run/user/1000
/dev/mapper/VG_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_544474d1_822c_4912_8db4_7c1c7f45df54                          50G  108M   47G   1% /pg_data_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/VG_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54-LV_PG_DATA_TBLSPC_544474d1_822c_4912_8db4_7c1c7f45df54           9.8G   44M  9.2G   1% /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_software_1700f784395111eab53f506b8d241a39      1.5G  534M  821M  40% /opt/era_base/era_engine
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_logs_1700f784395111eab53f506b8d241a39          3.9G   18M  3.6G   1% /opt/era_base/logs
/dev/mapper/ntnx_era_agent_vg_1700f784395111eab53f506b8d241a39-ntnx_era_agent_lv_era_config_1700f784395111eab53f506b8d241a39         47M  1.1M   42M   3% /opt/era_base/cfg
/dev/mapper/ntnx_era_agent_vg_5c3a7120261b4354a3b38ee168726298-ntnx_era_agent_lv_db_stagging_logs_5c3a7120261b4354a3b38ee168726298   99G   93M   94G   1% /opt/era_base/db_logs

The PostgreSQL binaries got their own mountpoint. PGDATA is on its own mountpoint and it seems we have a tablespace in “/pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54”:

-bash-4.2$ psql -c "\db" postgres
                            List of tablespaces
     Name      |  Owner   |                    Location
---------------+----------+-------------------------------------------------
 pg_default    | postgres |
 pg_global     | postgres |
 tblspc_dbidb1 | postgres | /pg_dbidb1_544474d1_822c_4912_8db4_7c1c7f45df54
(3 rows)

-bash-4.2$ psql -c "\l+" postgres
                                                                     List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   |  Size   |  Tablespace   |                Description
-----------+----------+----------+-------------+-------------+-----------------------+---------+---------------+--------------------------------------------
 dbidb1    | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | tblspc_dbidb1 |
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |                       | 7717 kB | pg_default    | default administrative connection database
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | unmodifiable empty database
           |          |          |             |             | postgres=CTc/postgres |         |               |
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +| 7577 kB | pg_default    | default template for new databases
           |          |          |             |             | postgres=CTc/postgres |         |               |
(4 rows)

This confirms that our database “dbidb1” was placed into a separate tablespace. There is another mountpoint which seems to be there for the archived WAL files, and we can confirm that as well:

-bash-4.2$ psql
psql (11.6 dbi services build)
Type "help" for help.

postgres=# show archive_command ;
                      archive_command
-----------------------------------------------------------
  sh /opt/era_base/cfg/postgres/archive_command.sh %p  %f
(1 row)

postgres=# \! cat /opt/era_base/cfg/postgres/archive_command.sh
test ! -f /opt/era_base/db_logs/dbidb1/$2 &&  cp -p $1 /opt/era_base/db_logs/dbidb1//$2
postgres=#

So archiving is enabled and this is what I expected. The costing parameters seem to be the default:

postgres=# select name,setting from pg_settings where name like '%cost%';
             name             | setting
------------------------------+---------
 autovacuum_vacuum_cost_delay | 20
 autovacuum_vacuum_cost_limit | -1
 cpu_index_tuple_cost         | 0.005
 cpu_operator_cost            | 0.0025
 cpu_tuple_cost               | 0.01
 jit_above_cost               | 100000
 jit_inline_above_cost        | 500000
 jit_optimize_above_cost      | 500000
 parallel_setup_cost          | 1000
 parallel_tuple_cost          | 0.1
 random_page_cost             | 4
 seq_page_cost                | 1
 vacuum_cost_delay            | 0
 vacuum_cost_limit            | 200
 vacuum_cost_page_dirty       | 20
 vacuum_cost_page_hit         | 1
 vacuum_cost_page_miss        | 10
(17 rows)

Memory parameters seem to be the default as well:

postgres=# show shared_buffers ;
 shared_buffers
----------------
 128MB
(1 row)

postgres=# show work_mem;
 work_mem
----------
 4MB
(1 row)

postgres=# show maintenance_work_mem ;
 maintenance_work_mem
----------------------
 64MB
(1 row)

postgres=# show effective_cache_size ;
 effective_cache_size
----------------------
 4GB
(1 row)

There are “Parameter profiles” you can use and create but currently there are not that many parameters that can be adjusted:

That’s it for now. Deploying new PostgreSQL instances is really easy. I am currently not sure why there needs to be a tablespace, but maybe that will become clear in a future post when we look at backup and restore. From a PostgreSQL configuration perspective you will certainly need to make some adjustments once the databases get bigger and the load increases.
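
As a minimal sketch of the kind of adjustments I mean (the values are only examples and depend on the memory assigned to the VM and on the workload; shared_buffers only takes effect after a restart):

postgres=# alter system set shared_buffers='2GB';
postgres=# alter system set work_mem='32MB';
postgres=# alter system set effective_cache_size='6GB';
postgres=# select pg_reload_conf();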

The post Deploying your own PostgreSQL image on Nutanix Era – 2 – Deploying a new PostgreSQL VM appeared first on Blog dbi services.

RDS Resource in Terraform

Pakistan's First Oracle Blog - Fri, 2020-01-17 21:19
Just marveling again how easy and clean it is to create an RDS resource in Terraform.

# Create the RDS instance
resource "aws_db_instance" "databaseterra" {
  allocated_storage    = "${var.database_size}"
  engine               = "${var.DbEngineType}"
  engine_version       = "${var.DbMajorEngineVersion}"
  instance_class       = "${var.DBInstanceClass}"
  identifier             = "testdbterra"
  username             = "${var.database_user}"
  password             = "${var.database_password}"
  db_subnet_group_name = "${aws_db_subnet_group.rdssubnetgroupterra.id}"
  parameter_group_name = "${aws_db_parameter_group.rdsparametergroupterra.id}"
  multi_az           = "${var.db_multiaz}"
  vpc_security_group_ids = ["${aws_security_group.RdsSecurityGroupterra.id}"]
  publicly_accessible    = "false"
  backup_retention_period = "2"
  license_model           = "license-included"
  apply_immediately = "true"
  tags = {
    Env = "Non-Prod"
  }
}

Just have your vars ready and that's it.
Categories: DBA Blogs

Data Guard Fast-Start Failover Test – Shutdown Standby Host

Michael Dinh - Fri, 2020-01-17 15:41

Data Guard Fast-Start Failover Test – Shutdown Primary Host

Review primary host and start observer:
[oracle@ol7-121-dg1 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:42:54 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG1:(SYS@cdb1):PRIMARY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg1 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 13 seconds ago)

DGMGRL> enable fast_start failover
Enabled.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 12 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> show database cdb1

Database - cdb1

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 2.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

  Database Error(s):
    ORA-16820: fast-start failover observer is no longer observing this database

Database Status:
ERROR

DGMGRL> start observer
[P001 01/17 20:46:01.38] Authentication failed.
DGM-16979: Unable to log on to the primary or standby database as SYSDBA
Failed.
DGMGRL> connect sys@cdb1
Password:
Connected as SYSDBA.
DGMGRL> start observer
Observer started
Restart standby host, listener, and database:
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   poweroff (virtualbox)

The VM is powered off. To restart the VM, simply run `vagrant up`

resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant status
Current machine states:

default                   running (virtualbox)

The VM is running. To stop this VM, you can run `vagrant halt` to
shut it down forcefully, or you can run `vagrant suspend` to simply
suspend the virtual machine. In either case, to restart it again,
simply run `vagrant up`.

====================================================================
resetlogs@ghost MINGW64 /g/oraclebase/vagrant/dataguard/ol7_121/node2 (master)
$ vagrant ssh
Last login: Fri Jan 17 20:11:35 2020 from 10.0.2.2
[vagrant@ol7-121-dg2 ~]$ sudo su - oracle
Last login: Fri Jan 17 20:11:44 UTC 2020 on pts/0
[oracle@ol7-121-dg2 ~]$ . oraenv <<< cdb1
ORACLE_SID = [cdb1] ? The Oracle base remains unchanged with value /u01/app/oracle
[oracle@ol7-121-dg2 ~]$ lsnrctl start

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 17-JAN-2020 20:53:20

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

Starting /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 12.1.0.2.0 - Production
System parameter file is /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Log messages written to /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=ol7-121-dg2)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production
Start Date                17-JAN-2020 20:53:22
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /u01/app/oracle/product/12.1.0.2/dbhome_1/network/admin/listener.ora
Listener Log File         /u01/app/oracle/diag/tnslsnr/ol7-121-dg2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=ol7-121-dg2.localdomain)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1_stby_DGMGRL" has 1 instance(s).
  Instance "cdb1", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
[oracle@ol7-121-dg2 ~]$ cd /sf_working/sql
[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 20:53:38 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@cdb1> startup mount;
ORACLE instance started.

Total System Global Area 1610612736 bytes
Fixed Size                  2924928 bytes
Variable Size             520097408 bytes
Database Buffers         1073741824 bytes
Redo Buffers               13848576 bytes
Database mounted.
SYS@cdb1> @stby.sql

Session altered.

*** v$database ***

DB          OPEN                   DATABASE           REMOTE     SWITCHOVER      DATAGUARD  PRIMARY_DB
UNIQUE_NAME MODE                   ROLE               ARCHIVE    STATUS          BROKER     UNIQUE_NAME
----------- ---------------------- ------------------ ---------- --------------- ---------- ---------------
cdb1_stby   MOUNTED                PHYSICAL STANDBY   ENABLED    NOT ALLOWED     ENABLED    cdb1

*** gv$archive_dest ***

                                                                                              MOUNT
 THREAD#  DEST_ID DESTINATION               STATUS       TARGET           SCHEDULE PROCESS       ID
-------- -------- ------------------------- ------------ ---------------- -------- ---------- -----
       1        1 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   ARCH           0
       1       32 USE_DB_RECOVERY_FILE_DEST VALID        LOCAL            ACTIVE   RFS            0

*** gv$archive_dest_status ***

                               DATABASE        RECOVERY
 INST_ID  DEST_ID STATUS       MODE            MODE                    GAP_STATUS      ERROR
-------- -------- ------------ --------------- ----------------------- --------------- --------------------------------------------------
       1        1 VALID        MOUNTED-STANDBY IDLE                                    NONE
       1       32 VALID        UNKNOWN         IDLE                                    NONE

*** v$thread ***

 THREAD# CURRENT LOG SEQUENCE STATUS
-------- -------------------- ------------
       1                   26 OPEN

*** gv$archived_log ***

 DEST_ID  THREAD# APPLIED    MAX_SEQ MAX_TIME             DELTA_SEQ DETA_MIN
-------- -------- --------- -------- -------------------- --------- --------
       1        1 NO              25 17-JAN-2020 20:53:53         2 41.68333
       1        1 YES             23 17-JAN-2020 20:12:12

*** v$archive_gap ***

no rows selected

*** GAP can also be verified using RMAN from STANDBY ***

RMAN1
------------------------------------------------------------
list archivelog from sequence 24 thread 1;

*** v$dataguard_stats ***

NAME                      VALUE              UNIT
------------------------- ------------------ ------------------------------
transport lag             +00 00:00:00       day(2) to second(0) interval
apply lag                                    day(2) to second(0) interval

*** gv$managed_standby ***

no rows selected

SYS@cdb1> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@ol7-121-dg2 sql]$
Screen output from observer:
DGMGRL> start observer
Observer started
[W000 01/17 20:48:58.27] The primary database has requested a transition to the UNSYNC/LAGGING state.
[W000 01/17 20:48:58.28] Permission granted to the primary database to transition to UNSYNC/LAGGING state.
[W000 01/17 20:50:01.29] The primary database has been in UNSYNC/LAGGING state for 63 seconds.
[W000 01/17 20:51:04.31] The primary database has been in UNSYNC/LAGGING state for 126 seconds.
[W000 01/17 20:52:07.33] The primary database has been in UNSYNC/LAGGING state for 189 seconds.
[W000 01/17 20:53:10.36] The primary database has been in UNSYNC/LAGGING state for 252 seconds.
[W000 01/17 20:54:13.39] The primary database has been in UNSYNC/LAGGING state for 315 seconds.
[W000 01/17 20:54:16.39] The primary database returned to SYNC/NOT LAGGING state.
Validate Data Guard configuration:
[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show configuration

Configuration - my_dg_config

  Protection Mode: MaxPerformance
  Members:
  cdb1      - Primary database
    cdb1_stby - (*) Physical standby database

Fast-Start Failover: ENABLED

Configuration Status:
SUCCESS   (status updated 10 seconds ago)

DGMGRL> validate database cdb1

  Database Role:    Primary database

  Ready for Switchover:  Yes

DGMGRL> validate database cdb1_stby

  Database Role:     Physical standby database
  Primary Database:  cdb1

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$
Open database read only:

This is required because the database is not registered as a cluster resource.

[oracle@ol7-121-dg2 sql]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jan 17 21:33:07 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> alter database open read only;

Database altered.

OL7-121-DG2:(SYS@cdb1):PHYSICAL STANDBY> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

[oracle@ol7-121-dg2 sql]$ dgmgrl /
DGMGRL for Linux: Version 12.1.0.2.0 - 64bit Production

Copyright (c) 2000, 2013, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected as SYSDG.
DGMGRL> show database cdb1_stby

Database - cdb1_stby

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    cdb1

Database Status:
SUCCESS

DGMGRL> exit
[oracle@ol7-121-dg2 sql]$
