Relational Database Design – Normalization

It’s common for people new to relational databases to approach table design as they would a flat, Excel-like spreadsheet: designing each table as if it were isolated, with all the data needed for a particular feature crammed into a single table.

This is not how relational databases are meant to be used. Tables in a relational database are linked together, forming a web of sorts, where all the data needed for a single query can be spread over a great many tables, and a change in one table can have an effect on every other table in the database.

It is therefore extremely important that each and every table in the database be properly designed, not only in regard to the data each table is meant to store, but also in respect to every other table in the “web”. If you ever need to expand your database, or refactor it to fix or improve your existing applications, a proper design can save you an enormous amount of time and effort.

Key concepts
Let me start by naming a few of the most important concepts in relational database design. A basic understanding of these will be required to understand the rest of the article.

  • Primary Key (PK)
    A column with a unique value for each row. Although not all database management systems (DBMS) require you to put a PK into each table, from a design perspective a PK is a requirement. No table should be without one.
  • Foreign Key (FK)
    These define relationships between tables. When you want a row in one table to be linked to a row in another table, you place a FK column in the child table and use the value of the parent row’s PK as the value of the FK field.
  • Composite Key
    This is a key made up of more than one column. It is typically used to prevent a table from storing the same combination of values twice. For example, in a table that lists item prices for shops, you would want each shop to have only a single price for each item. So you create a FK for the shop and a FK for the item, and then create a composite PK out of those two columns. The DBMS will then reject any entry where the combined values of these fields would be duplicated. – This type of key is commonly used in N:M relationships. (Explained below… with visual aids.)
  • One-To-One (1:1) relationship
    A relationship between two tables, where a single row in one table is linked to a single row in another table.

    +------------+     +----------------+
    | person     |     | person_contact |
    +------------+     +----------------+
    | person_id  |1---1| person_id      |
    | first_name |     | email          |
    | last_name  |     | phone          |
    +------------+     +----------------+
    

    This type of relationship is practically non-existent in normalized relational designs. It exists mostly to get around limitations in databases like Access, where the number of columns was limited, creating the need to split tables up. It is also sometimes used to optimize the performance of the database.

  • One-To-Many (1:N) relationship
    A relationship between two tables, where multiple rows in a child table can be linked to a single row in a parent table. For example:

    +------------+     +------------+
    | person     |     | country    |
    +------------+     +------------+
    | person_id  |  |-1| country_id |
    | name       |  |  | name       |
    | country_id |*-|  +------------+
    +------------+
    

    This is in fact the only “real” type of relationship in a relational database. (See the next point for the reasoning behind that assertion.)

  • Many-To-Many (N:M) relationship
    A relationship between two tables, where multiple rows in one table can be linked to multiple rows in another table. This type is “artificial” in a way, because this kind of relationship cannot be created directly between two tables. To accomplish it you need to create a third table: an intermediary table that contains FKs to both parents, linked via a set of 1:N relationships.

    +-----------+     +--------------+     +--------------+
    | shop      |     | prices       |     | product      |
    +-----------+     +--------------+     +--------------+
    | shop_id   |1-|  | product_id   |*---1| product_id   |
    | shop_name |  |-*| shop_id      |     | product_name |
    +-----------+     | price        |     +--------------+
                      +--------------+
    
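The relationship types above can be sketched as actual DDL. Here is a minimal sketch using Python’s built-in sqlite3 module (table and column names taken from the shop/prices/product diagram; any DBMS with foreign-key support behaves the same way), showing how the composite PK in the intermediary table rejects a duplicate shop/product pair:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when asked

conn.executescript("""
CREATE TABLE shop (
    shop_id   INTEGER PRIMARY KEY,
    shop_name TEXT
);
CREATE TABLE product (
    product_id   INTEGER PRIMARY KEY,
    product_name TEXT
);
-- Intermediary table for the N:M relationship: the composite PK
-- (shop_id, product_id) allows each shop/product pair only once.
CREATE TABLE prices (
    shop_id    INTEGER REFERENCES shop (shop_id),
    product_id INTEGER REFERENCES product (product_id),
    price      REAL,
    PRIMARY KEY (shop_id, product_id)
);
""")

conn.execute("INSERT INTO shop VALUES (1, 'Bob''s')")
conn.execute("INSERT INTO product VALUES (1, 'Beer')")
conn.execute("INSERT INTO prices VALUES (1, 1, 9.50)")

# A second price for the same shop/product pair violates the composite PK:
try:
    conn.execute("INSERT INTO prices VALUES (1, 1, 8.00)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
```

The same pattern (two FKs together forming a composite PK) is what turns two 1:N relationships into one N:M relationship.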

Normalization
To help us properly design our tables we have a set of guidelines which, if followed properly, will help reduce the redundancy and chance of data corruption. We call this “Normalization”.

There are several steps involved in normalizing a database. The steps are referred to as “Normal Forms” (abbreviated: NF). There are at least seven NF ranging from 1NF to 6NF. Each NF requires that the NF before it has also been satisfied. The spot between 3NF and 4NF is reserved for the BCNF (Boyce-Codd normal form), which was developed later as a slightly stronger version of the 3NF, to address certain shortcomings.

Tables that have reached the 3NF are generally considered “normalized”. Specifically aiming for a higher level is unusual, but a table that is designed to be in 3NF is very likely also in the 5NF.

The first three Normal Forms aren’t complex at all. They can take some getting used to, but after working with them for a while you may well find yourself developing 3NF compliant tables without even thinking about it.

The First Normal Form (1NF)
The first normal form is both the simplest and the most important of the three steps. It requires that tables must not contain repeating groups of data. This means that if you need to store multiple values of the same kind for a single entry (row), you can neither serialize them into a single field nor create multiple identical columns.

Consider the following example. It’s a simple list of persons, where each person is listed with his/her name and phone numbers.

+----+------+--------------------+
| id | name | phone              |
+----+------+--------------------+
|  1 | Joe  | 588-5522,789-85522 | 
|  2 | Anna | 589-4567,987-12354 |
+----+------+--------------------+

Note that both entries store multiple phone numbers in a single field, separated by a comma. There are two major problems with this approach:

  • Your database management system (DBMS) regards each field as a single value, so it cannot differentiate between the individual phone numbers. From the DBMS’s perspective, the phone field is just a normal string containing a single value. – This complicates the use of that data, especially when you need to do things like search for or extract a single value, or filter a data-set based on a single value in the field.
  • It also means you need to manage the data manually: to write the code that separates the values when they are retrieved, and that constructs the string when they are inserted. This makes it FAR more likely that your data will become corrupt or incompatible between applications. The DBMS has NO control over how the data is stored; it simply considers it a single string and stores it as such. The internal structure and integrity of the data is completely up to you.
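To make the first problem concrete, here is a small sketch (using Python’s sqlite3; the searched-for number is invented for illustration) showing how a substring search against a comma-separated field returns a false positive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO person VALUES (1, 'Joe',  '588-5522,789-85522')")
conn.execute("INSERT INTO person VALUES (2, 'Anna', '589-4567,987-12354')")

# Who has the number 789-8552? Nobody -- but a LIKE search matches Joe,
# because '789-8552' is a substring of his second number '789-85522'.
rows = conn.execute(
    "SELECT name FROM person WHERE phone LIKE ?", ("%789-8552%",)
).fetchall()
false_positives = [name for (name,) in rows]
```

A LIKE search cannot tell where one value ends and the next begins; with one number per row, an exact match (`WHERE number = '789-8552'`) would correctly return nothing.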

To remedy the situation, you would need to separate the data into individual fields. Your first instinct might be to simply create multiple “phone” columns, like so:

+----+------+----------+-----------+
| id | name | phone1   | phone2    |
+----+------+----------+-----------+
|  1 | Joe  | 588-5522 | 789-85522 |
|  2 | Anna | 589-4567 | 987-12354 |
+----+------+----------+-----------+

But this is NOT an acceptable solution. It does solve both of the problems I listed above, but it creates a new problem.

Namely that now we have restricted each person to two phone numbers and ONLY two. What if a person needs to store a third number? Using this table, the only way would be to add a third “phone” column, which would also add that column to ALL other persons in the table, whether they need it or not. (And what if we need to store a hundred phone numbers? Or a thousand?)

What we want is to allow each person to supply as many phone numbers as that person needs. To allow for that, we need to extract the phone numbers from the table altogether and put them into a new table dedicated to listing phone numbers. The design for that may look something like this:

+-----------+     +--------------+
| person    |     | phone_number |
+-----------+     +--------------+
| person_id |1-|  | phone_id     |
| name      |  |-*| person_id    |
+-----------+     | number       |
                  +--------------+

Here each row in the phone_number table contains a column with the ID of a person, identifying the person whose number it is. A relationship like this is referred to as a One-To-Many (1:N) relationship, because each row in the parent table (person) can be linked to multiple rows in the child table (phone_number), but not the other way around.

The person_id column in the phone_number table is what we call a Foreign Key. It is an indication that the value in the column is meant to reference the value of another table. In many cases the DBMS will enforce this link and reject phone_number entries that do not provide a person_id that exists in the person table.
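A minimal sketch of this design using Python’s sqlite3 (SQLite needs `PRAGMA foreign_keys = ON` before it enforces FKs; many other DBMSs enforce them by default):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
CREATE TABLE person (
    person_id INTEGER PRIMARY KEY,
    name      TEXT
);
CREATE TABLE phone_number (
    phone_id  INTEGER PRIMARY KEY,
    person_id INTEGER NOT NULL REFERENCES person (person_id),
    number    TEXT
);
""")

conn.execute("INSERT INTO person VALUES (1, 'Joe')")
# Joe can have as many numbers as he needs -- one row each:
conn.execute("INSERT INTO phone_number VALUES (1, 1, '588-5522')")
conn.execute("INSERT INTO phone_number VALUES (2, 1, '789-85522')")
conn.execute("INSERT INTO phone_number VALUES (3, 1, '555-0000')")

# The DBMS rejects a phone number pointing at a person that does not exist:
try:
    conn.execute("INSERT INTO phone_number VALUES (4, 99, '123-4567')")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

joe_numbers = conn.execute(
    "SELECT COUNT(*) FROM phone_number WHERE person_id = 1"
).fetchone()[0]
```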

The Second Normal Form (2NF)
The second normal form requires that no field be only partially dependent on any candidate key of the table. A candidate key is not just the PK, but any combination of fields that would uniquely identify a row.

Consider this design:

+---------------+
| prices        |
+---------------+
| price_id (PK) |
| product       |
| shop          |
| unit_price    |
| qty           |
| shop_address  |
| unit_weight   |
+---------------+

The price_id column is the PK there, but because the combined values of the product and shop columns could also act as a composite PK, together they are considered a “candidate key”.

Let’s look at a few example entries in that table.

+----------+---------+--------+---------+------+--------------+-------------+
| price_id | product | shop   | u_price | qty  | shop_address | unit_weight |
+----------+---------+--------+---------+------+--------------+-------------+
|        1 | Beer    | Bob's  |    9.50 | 12.0 | Main Road 5  |       15.00 |
|        2 | Pepper  | Bob's  |   19.50 |  2.5 | Side Road 10 |        2.00 |
|        3 | Beer    | Jill's |    3.50 |  6.0 | Main Street 1|        1.50 |
|        4 | Pepper  | Jill's |    8.50 | 30.0 | Main Road 1  |       20.00 |
|        5 | Salt    | Jill's |   27.50 | 3.14 | Main Road 10 |      250.00 |
+----------+---------+--------+---------+------+--------------+-------------+

The problem becomes apparent when we examine the values of the shop_address and unit_weight fields against the candidate key mentioned above. The shop_address should be identical in all rows with the same shop value, and the unit_weight should be identical in all rows with the same product value. However, this design allows us to specify different shop addresses and unit weights for the same shops and products, which in reality makes no sense.

So, to turn this into a 2NF design, you move these fields out of the table into their own tables, creating FKs to link the new tables to the main prices table.

+-----------+     +--------------+     +--------------+
| shop      |     | prices       |     | product      |
+-----------+     +--------------+     +--------------+
| name (PK) |1-|  | price_id     |  |-1| name (PK)    |
| address   |  |  | product      |*-|  | weight       |
+-----------+  |-*| shop         |     +--------------+
                  | unit_price   |
                  | qty          |
                  +--------------+

This ensures that all price entries are linked to proper address and weight values.
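A sketch of the decomposed design using Python’s sqlite3 (names taken from the diagram; sample rows invented): because the address now lives in exactly one shop row, every price entry for a shop necessarily reports the same address:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.executescript("""
CREATE TABLE shop (
    name    TEXT PRIMARY KEY,
    address TEXT
);
CREATE TABLE product (
    name   TEXT PRIMARY KEY,
    weight REAL
);
CREATE TABLE prices (
    price_id   INTEGER PRIMARY KEY,
    product    TEXT REFERENCES product (name),
    shop       TEXT REFERENCES shop (name),
    unit_price REAL,
    qty        REAL
);
""")

conn.execute("INSERT INTO shop VALUES ('Bob''s', 'Main Road 5')")
conn.execute("INSERT INTO product VALUES ('Beer', 15.0)")
conn.execute("INSERT INTO prices VALUES (1, 'Beer', 'Bob''s', 9.50, 12.0)")
conn.execute("INSERT INTO prices VALUES (2, 'Beer', 'Bob''s', 8.00, 24.0)")

# However many price rows Bob's has, they all join to the same single address:
addresses = conn.execute("""
    SELECT DISTINCT s.address
    FROM prices p JOIN shop s ON p.shop = s.name
    WHERE p.shop = 'Bob''s'
""").fetchall()
```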

The Third Normal Form (3NF)
The last of the forms needed for a database to be considered normalized. It requires that every column depend only upon the primary key of the table; any column that does not depend solely on the primary key should be moved out of the table.

Let’s look at two examples:

First, suppose we add a “country” column to the “person” table we created for the 1NF.

+-----------+------+----------+
| person_id | name | country  |
+-----------+------+----------+
|         1 | Joe  | France   |
|         2 | Anna | England  |
+-----------+------+----------+

This looks fine at first glance, but if you look closer, the name of the country does not belong in this table.

Consider what happens if you are asked to list ALL possible countries (e.g. for a drop-down box on a website). Because a country can only exist in this design as an attribute of a person, we can only list the countries of people that exist in the database. No country without a representative in the person table can exist in the database.

The solution is to move the country out of the table into its own table, and add a Foreign Key to the person table that references the country the person belongs to:

+------------+     +------------+
| person     |     | country    |
+------------+     +------------+
| person_id  |  |-1| country_id |
| name       |  |  | name       |
| country_id |*-|  +------------+
+------------+

Now you can list all the countries in the country table, and just link the persons to their respective countries.

Second, to address another common 3NF conformance issue, suppose we want to store more than just the country name for each person. If we add the city and address info as well, it might look something like this:

+-----------+------+----------+--------+------------------+
| person_id | name | country  | city   | address          |
+-----------+------+----------+--------+------------------+
|         1 | Joe  | France   | Paris  | Eiffel blwd. 101 |
|         2 | Anna | England  | London | English road 302 |
+-----------+------+----------+--------+------------------+

This is basically the same problem we had in the previous example, but with two more columns, neither of which belongs in this table either. So let’s try to apply the same solution as last time, but this time move all three location items into a location table, and link the location to the person:

+-------------+     +-------------+
| person      |     | location    |
+-------------+     +-------------+
| person_id   |  |-1| location_id |
| name        |  |  | country     |
| location_id |*-|  | city        |
+-------------+     | address     |
                    +-------------+

This is better, but can you spot the problem?

As you remember, to reach 3NF no field can depend on anything but the PK of the table. But if you look at the location table, notice that “address” depends not only on the PK, but on the “city” field as well. (If the city field changes, so must the address field). The same applies to the city field; it depends on the country field.

This means that, in order to reach 3NF, you need to move country, city and address all into their own tables, each linked to each other. The address table should be linked to the person, the city table linked to the address table, and finally the country table linked to the city table.

+-------------+     +------------+
| person      |     | address    |
+-------------+     +------------+
| person_id   |  |-1| address_id |
| address_id  |*-|  | city_id    |*-|
| name        |     | name       |  |
+-------------+     +------------+  |
                                    |
+------------+     +------------+   |
| country    |     | city       |   |
+------------+     +------------+   |
| country_id |1-|  | city_id    |1--|
| name       |  |-*| country_id |
+------------+     | name       |
                   +------------+

Now this design is in 3NF, and could be considered “normalized”.
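Here is the normalized design sketched with Python’s sqlite3 (sample data taken from the earlier examples; I use address_id as the person’s FK column). Reassembling a person’s full location now takes joins through address, city and country:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE country (country_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE city    (city_id    INTEGER PRIMARY KEY,
                      country_id INTEGER REFERENCES country (country_id),
                      name       TEXT);
CREATE TABLE address (address_id INTEGER PRIMARY KEY,
                      city_id    INTEGER REFERENCES city (city_id),
                      name       TEXT);
CREATE TABLE person  (person_id  INTEGER PRIMARY KEY,
                      address_id INTEGER REFERENCES address (address_id),
                      name       TEXT);

INSERT INTO country VALUES (1, 'France');
INSERT INTO city    VALUES (1, 1, 'Paris');
INSERT INTO address VALUES (1, 1, 'Eiffel blwd. 101');
INSERT INTO person  VALUES (1, 1, 'Joe');
""")

# What was one SELECT against the flat table is now three JOINs:
row = conn.execute("""
    SELECT p.name, co.name, ci.name, a.name
    FROM person p
    JOIN address a  ON p.address_id  = a.address_id
    JOIN city    ci ON a.city_id     = ci.city_id
    JOIN country co ON ci.country_id = co.country_id
""").fetchone()
```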

Epilogue
It should be mentioned that in some cases, putting your tables into 3NF can have a negative effect on the performance and usability of the database. Take the last example above: before normalizing, you could query all the data from a single table with a simple SELECT query. After normalizing, fetching the exact same result requires three JOINs.

My point is this: normalization is a design goal. It may not always be the practical solution, and it may in fact be impractical in certain situations to design fully normalized tables. If performance and resource conservation matter a great deal more to you than data integrity, then you may well be better off aiming for a lower NF level. I would advise you to ALWAYS make sure your tables are in 1NF, but beyond that you must judge the situation for yourself.

I can only say that in 99% of cases you are better off at least trying for 3NF. It may spare you some otherwise wasted effort later on.

In any case, I hope this has been helpful.


How to Connect to Oracle When You Forgot Your Password

Reset Oracle Password

 
Step 1: Open the SQL command line and enter the following command:

SQL> conn sys/manager as sysdba;
Connected.

Step 2: To reset the SYS password (or any other user’s password), run the following statement:

SQL> alter user sys identified by manager;
User altered.

Or

Your password file should be under <orahome>\database\PWD<SID>.ora.

Delete it and run the Oracle password utility from the command prompt:

C:\Oracle\ora92\database> ORAPWD file=PWD<SID>.ora password={password} entries={however many}

Here {password} is your new SYS password. After you log in as SYS you can change it and create new passwords for SYSTEM.

Oracle Enterprise Manager (OEM)

Oracle Enterprise Manager is Oracle’s GUI-based database management tool. It is possible to manage the Oracle environment using SQL and PL/SQL, but OEM gives you a console-based framework to help you with almost all of your day-to-day management activities, including tasks such as backup, recovery, export, import and data loading. It also makes it easier to use the Oracle advisors.

There are two versions of OEM: a single-instance version called Database Control, and a system-wide version that lets you manage all your systems, called Grid Control. Many DBAs do not like using GUIs, but it makes sense to use the GUI for the easier day-to-day administration tasks like adding users, resizing tablespaces, etc. I think that knowing both SQL and PL/SQL and the GUI will make you a better DBA in the long run.

The benefits of using the OEM

Out-of-the-box Management: Installing and configuring OEM (both Database Control and Grid Control) is a breeze. If you install Oracle using the DBCA it will automatically be installed for you, and configuring OEM is a trivial task.
Web-based Management: OEM can be configured to use either HTTP or HTTPS, so security can be tightened if required. Being web based, all you need to access OEM is a web browser.
Real-Time Monitoring: Normally you have to write scripts or use a 3rd-party tool to perform real-time monitoring, which costs additional money or time; OEM performs real-time monitoring right out of the box. The hang analysis feature introduced in 10.2 helps during a database hang, when SQL scripts cannot get information back to you quickly: by attaching directly to the SGA it bypasses SQL and can help you out when normal SQL cannot.
Complete Environment Monitoring: A poorly performing database could be the result of a problem anywhere in the application stack (web server, app server, etc). OEM can monitor the performance of all the components in the stack, not just the Oracle database. It can monitor:

  • The entire Platform
  • End user experience
  • Systems and web applications availability
  • The extended network components
  • Business transactions
  • Historical data analysis and change tracking
Application Performance Monitoring: Oracle uses alert and notification systems to monitor the database; these alerts can identify problems quickly, enabling a pro-active approach, and thresholds can be easily adjusted in OEM. It:

  • Monitors the performance and availability of the system
  • Indicates outages and bottlenecks throughout the application system
  • Analyzes the root cause of performance with drill-downs
  • Minimizes application downtime through the use of efficient problem-resolution tools.
Scalable Management Capability: OEM is highly scalable; the only requirement is that any new server needs the management agent installed and started.
Consolidated Database Management: The OEM home page contains everything you need for a quick overview of your environment, which means any problems can be identified very quickly; a few more clicks and you have advanced performance charts available.
Integrated System Management: OEM uses Smart Plug-Ins (SPI) to integrate with system-wide monitoring tools such as HP OpenView.

OEM Database Control

You can install OEM automatically when installing Oracle via the DBCA; just select the “use Database Control for database management” option.

To configure OEM manually:

unix:    export ORACLE_SID=D01
windows: set ORACLE_SID=D01

emca -config dbcontrol db

Start dbconsole:          emctl start dbconsole
Stop dbconsole:           emctl stop dbconsole
dbconsole status:         emctl status dbconsole
Start management agent:   emctl start agent
Stop management agent:    emctl stop agent
Management agent status:  emctl status agent

Main OEM Page

Please refer to the Oracle web site for how to navigate the main OEM page and what is available on it.

OEM Grid Control

OEM Grid Control can manage a number of systems across the environment, which means easier management. Grid Control has a number of components:

  • Grid Control – the main OEM web-based interface for monitoring the enterprise
  • Management Agent – monitors the host and sends information to the OEM management Service
  • Management Service – This is the middle tier (OMS) and stores the data collected in a repository, which is used by the Grid Control

Note that Grid Control is a licensed product which you can obtain from Oracle.

If possible it is best to install Grid Control on a server of its own; the number of servers you wish to monitor determines the performance and disk space required for that server.

There are a number of commands that can be used to control the Oracle Management Service (OMS):

Status:  # emctl status oms
Start:   # emctl start oms
Stop:    # emctl stop oms

To install Grid Control, follow the steps below.

Grid Control Installation
  1. Make sure that the Oracle CD is in the CD Drive and run the following

    ./runInstaller

  2. At the Welcome screen click Next
  3. Accept the default Operating System group Name, click Next
  4. You now need to run a script called orainstRoot.sh as user root in another terminal session.

    # cd $ORACLE_BASE/oraInventory
    # ./orainstRoot.sh

  5. Now click continue back in the Installer window
  6. Now choose the directory for the OEM files and click Next
  7. If using a new server select the option that creates a new Oracle database, click Next
  8. It will now perform prerequisite check, click Next
  9. Choose a password for SYSMAN and click Next
  10. Select passwords for SYS, SYSTEM and DBSNMP, click Next
  11. You can alter the Metalink and proxy information if so desired, click Next
  12. Choose the SID and the global database name for the database, click Next
  13. Specify the location for all the database files, click Next
  14. Review the components and then click Next
  15. Grid Control will now be installed; you will be asked to run root.sh as user root
  16. A window will be displayed showing the various configuration assistants, click OK
  17. Run the orainstRoot.sh script as user root once more, then click Next
  18. This screen is important as it details all port numbers and links to the web page (make a note of them)
  19. Click Exit to end the installation

To install the OEM Management agent only

Management Agent Installation: first use the preceding seven steps, then:

  1. In the Select a Product window, choose the additional management agent installation type and click Next
  2. Specify the hostname and click Next
  3. Click Next after reading the security page
  4. Review the summary and click Next to start the install
  5. Click Exit to finish the install
  6. In Grid Control click on the Targets tab; you will see all the discovered hosts

Move datafile to NEW Location

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/opt/oracle/data/DEV3/system01.dbf
/opt/oracle/data/DEV3/sysaux01.dbf
/opt/oracle/data/TEST01.dbf

Here my TEST tablespace’s datafile is not in the DEV3 location, so I want to move it into the DEV3 directory.

I’m testing this in my DEV database; if you do this in a production database, make sure no one is using the tablespace.

1) Make the corresponding tablespace OFFLINE:

        SQL> ALTER TABLESPACE TEST OFFLINE;

Tablespace altered.

2) Copy or move the datafile to the new location at the OS level, then update the data dictionary to point to it (RENAME FILE only updates the control file; it does not move the physical file):

        SQL> ALTER DATABASE RENAME FILE '/opt/oracle/data/TEST01.dbf' TO '/opt/oracle/data/DEV3/TEST01.dbf';

Database altered.

3) Now, recover the datafile in its new location:

        SQL> RECOVER DATAFILE '/opt/oracle/data/DEV3/TEST01.dbf';

Media recovery complete.

4) Bring the TEST tablespace back ONLINE:

        SQL> ALTER TABLESPACE TEST ONLINE;

Tablespace altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/opt/oracle/data/DEV3/system01.dbf
/opt/oracle/data/DEV3/sysaux01.dbf
/opt/oracle/data/DEV3/TEST01.dbf

Now my TEST datafile is in the new location.

Add database to 11g OEM Grid control

Here I have an 11g Grid Control running on Linux x86_64.

To add a database to Grid Control, open the OEM Grid Control page –> Targets –> Databases.

Click on the ADD button.

Select the hostname from the list shown and click continue.

NOTE: Hostnames are displayed only for servers that are already configured with the Grid server; if a server is not listed, configure it first.

It will take some time to find and display the targets and the listeners found on that host (server).

Select the target you want to add and the listener associated with it, then click on the Configure button for the selected target.

Enter the ‘dbsnmp’ password, fill in all the fields correctly, then click Next and OK.

Done. Now you’ll see your target (database) in the OEM Grid database list. Enjoy!

spool command

What is SPOOL?
The SPOOL command in Oracle is used to transfer data from Oracle tables into other files.

How to SPOOL from Oracle in CSV format?

set echo off
set heading off
set define off
set feedback off
set verify off
set serveroutput on
set pagesize 5000
set linesize 120

SQL> spool c:\file.csv     (Windows)
SQL> SELECT EMPNAME||','||EMPID||','||EMPSAL FROM EMPLOYEE;
SQL> spool off

set define on
set feedback on
set heading on
set verify on

Ex: Recently I wrote a spool script to set the max extent size of all tables and indexes to UNLIMITED, because a lot of tables and indexes had a NULL max extent size.

set echo off
set heading off
set define off
set feedback off
set verify off
set serveroutput on
set pagesize 5000
set linesize 120

SQL> spool extent.sql

SQL> select 'alter '||object_type||' '||object_name||' storage (maxextents unlimited);'
     from dba_objects where object_type in ('TABLE','INDEX') and owner = 'XXX';

SQL> spool off

SQL> @extent.sql     (to execute the generated script)

If you don’t specify an extension after the file name (e.g. extent instead of extent.sql), Oracle generates the output file with a .log extension by default (i.e., extent.log).

If we have only a few tables in the database, instead of writing a spool script we can alter them manually one after another:

SQL> alter table tab_name storage (maxextents unlimited);
Table altered.

Or

SQL> alter index ind_name storage (maxextents unlimited);

Index altered.

With the spool approach, a single command writes a dynamic SQL script that makes the same change for all the objects.
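The generate-DDL-from-the-data-dictionary trick is not Oracle-specific. Here is a sketch of the same idea in Python’s sqlite3, querying sqlite_master in place of dba_objects (the generated ALTER statements are Oracle syntax, so they are only built as strings here, not executed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp  (id INTEGER)")
conn.execute("CREATE TABLE dept (id INTEGER)")

# Query the catalog (sqlite_master here, dba_objects in Oracle) and build
# one ALTER statement per object -- the same idea as spooling the SELECT
# above into extent.sql and then running that script.
stmts = [
    "alter {} {} storage (maxextents unlimited);".format(obj_type.upper(), name)
    for (obj_type, name) in conn.execute(
        "SELECT type, name FROM sqlite_master WHERE type IN ('table','index')"
    )
]
```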

NOTE:

In Linux the output file is created in the directory from which you started SQL*Plus.

In Windows the output file is located at the path you specified in the spool command.

Tablespace Management FAQs

1) What is Block or Database Block?

Storage area for storing rows

2) What is the unit of Block?

Bytes

3) What is the default Standard Block Size in Oracle 10g?

8KB

3) What is Non Standard Block Size?

Other than 8KB

4) How many Non-Standard Block Sizes are supported by Oracle? What are they?

Four: 2KB, 4KB, 16KB, 32KB

5) What is Extent?

A Collection of Blocks

6) What is Segment?

A Collection of Extents

7) What are the types of Segments?

Table, Index, Undo & Temporary

8) What is Table Segment?

It stores actual committed data

9) What is Index Segment?

It stores Indexes information which are created on tables

10) What is Undo Segment?

It stores pre-image value

11) What are the advantages of Undo Segment?

It provides read consistency, and is used by SMON to roll back uncommitted transactions (roll backward) during instance recovery

12) What is Temporary Segment?

Used during sorting, join, HAVING and GROUP BY operations

13) What is Tablespace?

It is a logical collection of segments, physically associated with data files

14) What are the types of Tablespaces?

Permanent, Temporary

15) What is Logical Structure?

Tablespace

16) What is the hierarchy of the Logical Structures?

Tablespace contains Segments, which contain Extents, which contain Blocks

17) What are the Mandatory Tablespaces during Database Creation?

SYSTEM & SYSAUX

18) What are the Mandatory Tablespaces during Database Usage?

TEMPORARY, UNDO & Application (User) Tablespaces

19) What is SYSTEM Tablespace?

It contains Metadata Information (Data Dictionary)

20) What is SYSAUX Tablespace?

It contains Database Workload Information

21) When SYSAUX Tablespace is introduced?

Oracle 10g

21) What is Undo Tablespace?

It stores Pre-Image value

22) What is Permanent Tablespace?

It stores the data permanently

23) What are the types of Permanent Tablespace?

SYSTEM, SYSAUX, User defined tablespaces

24) What is Temporary Tablespace?

It stores the data temporarily during Sorting Operations, Join Operations

25) What is Temporary Tablespace Group?

A logical group which contains more than one temporary tablespace at a time

26) What is Extent Management?

Allocating & deallocating extents

27) What are the types of Extent Management?

Locally Managed, Dictionary Managed

28) What type of Extent Management is preferred?

Locally Managed Tablespace (LMT)

29) What is Locally Managed Tablespace (LMT)?

In this the Extent Information will be stored in the form of Bitmaps in Data File Header

This Bitmap indicates whether free space is available in the block or not

30) What is Dictionary Managed Tablespace (DMT)?

In this the Extent Information will be stored in the Data Dictionary Tables

31) What is Recursive SQL?

Oracle writes internal queries to get the data from the Data Dictionary Tables

32) What are the drawbacks of Dictionary Managed Tablespace?

It results in contention on the data dictionary tables and in recursive space management operations

(By contrast, in a Locally Managed Tablespace extent sizes are managed automatically by the system, changes to the extent bitmaps do not generate undo information, and recursive space management operations are avoided)

32) What is Segment Space Management?

Managing the space in the segments

33) What are the types of Segment Space Management?

Manual Segment Space Management, Automatic Segment Space Management

34) What type of Segment Space Management is preferred? Why?

Automatic, because Oracle tracks free space in each block itself using bitmaps, instead of relying on manually tuned PCTFREE/PCTUSED and free lists

35) What are the drawbacks of Manual Segment Space Management?

We have to specify and tune PCTFREE and PCTUSED manually

36) What is PCTFREE?

The percentage of each block reserved as free space for future UPDATEs of existing rows; once a block is filled to (100 - PCTFREE) percent, no new rows are inserted into it

37) What is PCTUSED?

The threshold of used space a block must fall below (through DELETEs) before it becomes available for new INSERTs again
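
Both parameters are set per segment. A sketch (the table name and values are illustrative):

```sql
CREATE TABLE orders (
  order_id NUMBER,
  status   VARCHAR2(20)
)
PCTFREE 20   -- keep 20% of each block free for future updates
PCTUSED 40;  -- block rejoins the free list once usage drops below 40%
```

Note that in a tablespace with Automatic Segment Space Management, PCTUSED is ignored.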

38) What is Auto Extend on with Tablespace?

When a data file in the tablespace fills up, Oracle automatically extends it by the specified increment, up to its MAXSIZE
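
AUTOEXTEND is a property of the data file, enabled at creation time or afterwards. A sketch (path and sizes are illustrative):

```sql
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/app_data01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 10G;
```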

39) How to make a tablespace OFFLINE?

ALTER TABLESPACE <tablespace_name> OFFLINE;

40) What are the different options available when we make a tablespace OFFLINE?

NORMAL, TEMPORARY, TRANSACTIONAL, FOR RECOVERY

41) What is Tablespace OFFLINE NORMAL?

It performs Checkpoint

It can be made OFFLINE only when there are no error conditions in data files

42) What is Tablespace OFFLINE TEMPORARY?

It performs Checkpoint

A tablespace can be taken OFFLINE even if there are error conditions on data files

43) What is Tablespace OFFLINE IMMEDIATE?

It does not perform any Checkpoint

It requires Media Recovery before making tablespace ONLINE

44) Is it possible to make a Tablespace OFFLINE when the database is running in NOARCHIVELOG mode?

OFFLINE NORMAL is still allowed, but OFFLINE IMMEDIATE is not, because it would require media recovery and archived redo logs are not available in NOARCHIVELOG mode

45) What is Tablespace OFFLINE FOR RECOVER?

It is deprecated and retained only for backward compatibility

46) What is Physical Structure?

Data File

47) What does a Tablespace logically contain?

Segments, Extents & Blocks

48) What does a Tablespace physically contain?

Data Files

49) What are the types of Tablespaces with respect to the data files?

Small File Tablespace & Big File Tablespace

50) What is a small file tablespace?

It is the traditional (and default) type of permanent tablespace, which can consist of multiple data files

51) How many data files can exist in small file tablespace?

1022 per smallfile tablespace (the database as a whole supports up to about 65,533 data files)

52) What is big file tablespace?

It is a tablespace with a single data file that can store up to 128TB (with a 32KB block size; about 32TB with 8KB blocks)

53) How many data files can exist in big file tablespace?

Single
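
Creating one is just a matter of the BIGFILE keyword. A sketch (name, path, and sizes are illustrative):

```sql
CREATE BIGFILE TABLESPACE big_data
  DATAFILE '/u01/app/oracle/oradata/orcl/big_data01.dbf' SIZE 10G
  AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```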

54) What are the advantages of Big File Tablespace?

One Tablespace has only one data file

No need to constantly add data files to Tablespace

Data File Management in Large Databases is simplified

Storage capacity is increased

55) What is LOGGING with respect to the tablespace?

Changes are tracked in Online Redo Log Files

56) What is NOLOGGING with respect to the tablespace?

Changes are not tracked in Online Redo Log Files

57) What is UNIFORM with respect to the tablespace?

Extents are allocated with a uniform size, which is specified by the user

58) What is AUTO ALLOCATE with respect to the tablespace?

Extent sizes are chosen automatically by Oracle (64KB, 1MB, 8MB, 64MB as the segment grows)

59) Which is the default option for allocating extents, UNIFORM or AUTOALLOCATE?

AUTOALLOCATE
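
The choice is made in the CREATE TABLESPACE statement. A sketch (names, paths, and sizes are illustrative):

```sql
-- Fixed 1 MB extents, chosen by the user
CREATE TABLESPACE ts_uniform
  DATAFILE '/u01/app/oracle/oradata/orcl/ts_uniform01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- Extent sizes chosen by Oracle (the default)
CREATE TABLESPACE ts_auto
  DATAFILE '/u01/app/oracle/oradata/orcl/ts_auto01.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
```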

60) Is it possible to Drop/Rename SYSTEM/SYSAUX Tablespace?

No

61) What is Default Permanent Tablespace?

It is the tablespace assigned to a user who has not been explicitly assigned a permanent (application) tablespace

62) What is Default Temporary Tablespace?

It is the tablespace assigned to a user who has not been explicitly assigned a temporary tablespace
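
Both database-wide defaults can be changed and inspected. A sketch (the tablespace names are illustrative):

```sql
ALTER DATABASE DEFAULT TABLESPACE users;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

-- Check the current defaults
SELECT property_name, property_value
FROM   database_properties
WHERE  property_name IN ('DEFAULT_PERMANENT_TABLESPACE',
                         'DEFAULT_TEMP_TABLESPACE');
```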

63) How to move a table from one tablespace to another tablespace?

ALTER TABLE <table_name> MOVE TABLESPACE <tablespace_name>;

64) How to move an index from one tablespace to another tablespace?

ALTER INDEX <index_name> REBUILD TABLESPACE <tablespace_name>;

65) What are storage parameters in Tablespace (DMT)?

Free Lists, PCT FREE, PCT USED

66) What are free lists?

Space available for INSERT

67) What is PCTFREE?

Space available for UPDATE

68) What is PCTUSED?

The threshold of used space below which a block becomes available for INSERTs again

69) How to DE allocate the unused Extents?

ALTER TABLE <table_name> DEALLOCATE UNUSED;

70) Create a tablespace of BLOCKSIZE 16K?

CREATE TABLESPACE <tablespace_name> DATAFILE <path> SIZE <size> BLOCKSIZE 16K;

(A DB_16K_CACHE_SIZE buffer cache must be configured first if the database default block size is different)

71) What is DROP TABLESPACE <tablespace_name>?

It drops only the tablespace definition, and succeeds only if the tablespace is empty; the physical data files remain on disk

72) What is DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS?

It will drop the tablespace along with the contents (Segments, Extents & Blocks)

73) What is DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS AND DATAFILES?

It will drop the tablespace along with the contents & Physical Data Files

74) How to rename a tablespace?

ALTER TABLESPACE <tablespace_name> RENAME TO <new_tablespace_name>;

75) How to make a tablespace READ ONLY?

ALTER TABLESPACE <tablespace_name> READ ONLY;

76) How to make a tablespace READ WRITE?

ALTER TABLESPACE <tablespace_name> READ WRITE;

77) How to make a tablespace ONLINE?

ALTER TABLESPACE <tablespace_name> ONLINE;

78) How to make a tablespace OFFLINE?

ALTER TABLESPACE <tablespace_name> OFFLINE;

79) Which Data Dictionary views give information about Tablespaces?

DBA_TABLESPACES, V$TABLESPACE

83) Which Data Dictionary view gives information about Segments?

DBA_SEGMENTS

84) Which Data Dictionary view gives information about Extents?

DBA_EXTENTS

85) Which Data Dictionary views give information about data files?

DBA_DATA_FILES, V$DATAFILE

86) Which Data Dictionary view gives information about Temporary files?

DBA_TEMP_FILES

87) Which Data Dictionary view gives information about Temporary Tablespace Groups?

DBA_TABLESPACE_GROUPS
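
A typical use of these views is checking total and free space per tablespace. A common query pattern along these lines (not from the original text) is:

```sql
SELECT df.tablespace_name,
       ROUND(df.total_mb)        AS total_mb,
       ROUND(NVL(fs.free_mb, 0)) AS free_mb
FROM   (SELECT tablespace_name, SUM(bytes)/1024/1024 AS total_mb
        FROM   dba_data_files
        GROUP  BY tablespace_name) df
LEFT JOIN
       (SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
        FROM   dba_free_space
        GROUP  BY tablespace_name) fs
ON     df.tablespace_name = fs.tablespace_name
ORDER  BY df.tablespace_name;
```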

Alert log & Trace file location in Oracle 11gR2

Oracle 11gR2 is a major release upgrade from both 10gR2 and 11gR1. Many previous parameters, such as user_dump_dest, have been deprecated; there is now a single parameter called diagnostic_dest. Many DBAs are so used to the old locations of the alert log and trace files that it becomes quite inconvenient when they start using Oracle 11g. But Oracle 11g has a powerful new utility called ADRCI to navigate the alert log and other trace files. I will demonstrate a few ways to check the alert log and trace files using the "adrci" command.
C:\Documents and Settings\test>adrci
ADRCI: Release 11.2.0.1.0 – Production on Thu Mar 31 11:56:54 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
ADR base = "e:\app\testdb"
adrci> help
HELP [topic]
Available Topics:
CREATE REPORT
ECHO
EXIT
HELP
HOST
IPS
PURGE
RUN
SET BASE
SET BROWSER
SET CONTROL
SET ECHO
SET EDITOR
SET HOMES | HOME | HOMEPATH
SET TERMOUT
SHOW ALERT
SHOW BASE
SHOW CONTROL
SHOW HM_RUN
SHOW HOMES | HOME | HOMEPATH
SHOW INCDIR
SHOW INCIDENT
SHOW PROBLEM
SHOW REPORT
SHOW TRACEFILE
SPOOL
There are other commands intended to be used directly by Oracle, type
"HELP EXTENDED" to see the list
adrci> show alert
1: diag\clients\user_system\host_3726408541_76
2: diag\rdbms\testdb\testdb
3: diag\tnslsnr\test\listener
Q: to quit
Please select option:
Many of the adrci options resemble standard Unix commands such as tail. For example:
SHOW ALERT -TAIL (to view the last 10 entries)
SHOW ALERT -TAIL 50 (to view the last 50 entries)
SHOW ALERT -TAIL -F (keeps the alert log open to view as entries arrive until Control C is pressed)
show alert -p "message_text like '%ORA-07445%'" (to show all occurrences of the string ORA-07445 in the alert log)
Purging Alert Log Content
The adrci command ‘purge’ can be used to purge entries from the alert log. Note that this purge will only apply to the XML based alert log and not the text file based alert log which still has to be maintained using OS commands.
The purge command takes an age in minutes; records older than that many minutes are purged, newer ones are retained.
So to purge all alert log entries older than 7 days the following command will be used:
adrci > purge -age 10080 -type ALERT
It is a powerful utility with much more functionality.
P.S: The general location of alert log on windows is : E:\app\<hostname>\diag\rdbms\<sid>\<sid>\trace

Finding Trace Files

Trace files are stored in the Automatic Diagnostic Repository (ADR), in the trace directory under each ADR home. To help you locate individual trace files within this directory, you can use data dictionary views. For example, you can find the path to your current session’s trace file or to the trace file for each Oracle Database process.

To find the trace file for your current session:
  • Submit the following query:
    SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
    

    The full path to the trace file is returned.

To find all trace files for the current instance:
  • Submit the following query:
    SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
    

    The path to the ADR trace directory for the current instance is returned.

To determine the trace file for each Oracle Database process:
  • Submit the following query:
    SELECT PID, PROGRAM, TRACEFILE FROM V$PROCESS;

Step by step understanding of ADRCI can be found here : http://uhesse.wordpress.com/2011/06/01/adrci-a-survival-guide-for-the-dba/

See also :   11g NEW FEATURES

Changing Oracle Database Id (DBID)

Shut down and open the database in MOUNT state

NOTE: This is tested on my Standalone Development database, for RAC please see [Doc ID 464922.1]


SQL> startup mount
ORACLE instance started.

Total System Global Area 481267712 bytes
Fixed Size 1300716 bytes
Variable Size 226494228 bytes
Database Buffers 247463936 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 – Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

[oracle@rac1 ~]$ . oraenv
ORACLE_SID = [demo2] ?
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/11.1.0/db_1 is /u01/app/oracle

[oracle@rac1 ~]$ nid TARGET=/

DBNEWID: Release 11.1.0.6.0 – Production on Thu Dec 24 20:05:44 2009

Copyright (c) 1982, 2007, Oracle. All rights reserved.

Connected to database DEMO2 (DBID=3682169720)

Connected to server version 11.1.0

Control Files in database:
/u01/app/oracle/oradata/demo2/control01.ctl
/u01/app/oracle/oradata/demo2/control02.ctl
/u01/app/oracle/oradata/demo2/control03.ctl

Change database ID of database DEMO2? (Y/[N]) => y

Proceeding with operation
Changing database ID from 3682169720 to 3682222232
Control File /u01/app/oracle/oradata/demo2/control01.ctl – modified
Control File /u01/app/oracle/oradata/demo2/control02.ctl – modified
Control File /u01/app/oracle/oradata/demo2/control03.ctl – modified
Datafile /u01/app/oracle/oradata/demo2/system01.dbf – dbid changed
Datafile /u01/app/oracle/oradata/demo2/sysaux01.dbf – dbid changed
Datafile /u01/app/oracle/oradata/demo2/undotbs01.dbf – dbid changed
Datafile /u01/app/oracle/oradata/demo2/users01.dbf – dbid changed
Datafile /u01/app/oracle/oradata/demo2/temp01.dbf – dbid changed
Control File /u01/app/oracle/oradata/demo2/control01.ctl – dbid changed
Control File /u01/app/oracle/oradata/demo2/control02.ctl – dbid changed
Control File /u01/app/oracle/oradata/demo2/control03.ctl – dbid changed
Instance shut down

Database ID for database DEMO2 changed to 3682222232.
All previous backups and archived redo logs for this database are unusable.
Database is not aware of previous backups and archived logs in Recovery Area.
Database has been shutdown, open database with RESETLOGS option.
Succesfully changed database ID.
DBNEWID – Completed succesfully.

[oracle@rac1 ~]$

SQL> startup mount
SQL> alter database open resetlogs;
Database altered.

SQL> select dbid from v$database;

DBID
———-
3682222232
Also see:  

Steps to change database name and DBID

Steps to change database name only

How to Change the DBID, DBNAME Using NID Utility in version 10gR2 onwards (Doc ID 863800.1)

ASM

ASM Basics

  • The smallest unit of storage written to disk is called an “allocation unit” (AU) and is usually 1MB (4MB recommended for Exadata)
  • Very simply, ASM is organized around storing files
  • Files are divided into pieces called “extents”
  • Extent sizes are typically equal to 1 AU, except in 11.1 and 11.2, which use variable extent sizes that can be 1, 8, 64 AUs or 1, 4, 16 AUs respectively
  • File extent locations are maintained by ASM using file extent maps.
  • ASM maintains file metadata in headers on the disks rather than in a data dictionary
  • The file extent maps are cached in the RDBMS shared pool; these are consulted when an RDBMS process does I/O
  • ASM is very crash resilient since it uses instance / crash recovery similar to a normal RDBMS (similar to using undo and redo logging)

ASM Diskgroups 

Creating Diskgroup:

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;

NOTE: The disk group above uses NORMAL redundancy, i.e. two-way mirroring: each extent is stored in two failure groups. With HIGH redundancy each extent is kept in three copies (one primary and two mirrors), so the disk group needs at least three failure groups.
Drop disk groups

DROP DISKGROUP DATA INCLUDING CONTENTS;

Add disks

ALTER DISKGROUP DATA ADD DISK '/dev/sda3';

Drop a disk

ALTER DISKGROUP DATA DROP DISK DATA_0001;

Resize all disks in a disk group

ALTER DISKGROUP DATA RESIZE ALL SIZE 100G;

UNDROP DISKS clause of the ALTER DISKGROUP

ALTER DISKGROUP DATA UNDROP DISKS;

Rebalance diskgroup

ALTER DISKGROUP DATA REBALANCE POWER 5;

Check Diskgroup

ALTER DISKGROUP DATA CHECK;

ALTER DISKGROUP DATA CHECK NOREPAIR;

Diskgroup Metadata Backup

md_backup -b asm_backup.mdb.txt -g data,fra

ASM Specific Init.ora Parameters

*.cluster_database=true

*.asm_diskstring='/dev/sd*1'

*.instance_type=asm

*.shared_pool_size=100M

*.large_pool_size=80M

*.db_cache_size=60M

*.asm_diskgroups='DATA','FRA'

Initialize ASM for non-RAC

./localconfig add

Manually start CSSD (non-RAC)

/etc/init.d/init.cssd start

Manually stop CSSD ( non-RAC)

/etc/init.d/init.cssd stop

Resetting CSS to new Oracle Home

localconfig reset /apps/oracle/product/11.1.0/ASM

ASM Dictionary Views

v$asm_alias  —list all aliases in all currently mounted diskgroups

v$asm_client —list all the databases currently accessing the diskgroups

v$asm_disk —-lists all the disks discovered by the ASM instance.

v$asm_diskgroup —Lists all the diskgroups discovered by the ASM instance.

v$asm_file —Lists all files that belong to diskgroups mounted by the ASM instance.

v$asm_operation —Reports information about current active operations. Rebalance activity is reported in this view.

v$asm_template —Lists all the templates currently mounted by the ASM instance.

v$asm_diskgroup_stat —same as v$asm_diskgroup but does not discover new diskgroups. Use this view instead of v$asm_diskgroup.

v$asm_disk_stat —same as v$asm_disk but does not discover new disks. Use this view instead of v$asm_disk.
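
A quick health check of the mounted disk groups using these views might look like this (an illustrative query, not from the original text):

```sql
SELECT name, state, type,
       total_mb, free_mb,
       ROUND(free_mb / NULLIF(total_mb, 0) * 100, 1) AS pct_free
FROM   v$asm_diskgroup_stat
ORDER  BY name;
```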

srvctl commands

ADD

srvctl add asm -n rac3 -i +ASM3 -o /opt/oracle/app/product/10.2.0/asm

ENABLE

srvctl enable asm -n rac3 -i +ASM3

DISABLE

srvctl disable asm -n rac3 -i +ASM3

START

srvctl start asm -n rac3

STOP

srvctl stop asm -n rac3

CONFIG

srvctl config asm -n rac1

REMOVE

srvctl remove asm -n rac1

STATUS

srvctl status asm

srvctl status asm -n rac1

MODIFY

srvctl modify asm -o -n rac1

ASMLIB commands ( as root)

/etc/init.d/oracleasm start

/etc/init.d/oracleasm stop

/etc/init.d/oracleasm restart

/etc/init.d/oracleasm configure

/etc/init.d/oracleasm status

/etc/init.d/oracleasm enable

/etc/init.d/oracleasm disable

/etc/init.d/oracleasm listdisks

/etc/init.d/oracleasm deletedisk

/etc/init.d/oracleasm scandisks

/etc/init.d/oracleasm querydisk /dev/sdb1

/etc/init.d/oracleasm createdisk /dev/sdb1 VOL1

/etc/init.d/oracleasm renamedisk /dev/sdb1 VOL1

asmcmd Commands

cd —–changes the current directory to  the specified directory

du —–Displays the total disk space occupied by ASM files in the specified

ASM directory and all its subdirectories, recursively.

find —–Lists the paths of all occurrences of the specified name ( with wildcards) under the specified directory.

ls +data/testdb —-Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.

lsct —–Lists information about current ASM clients.

lsdg —-Lists all disk groups and their attributes

mkalias —-Creates an alias for a system generated filename.

mkdir —–Creates ASM directories.

pwd ——–Displays the path of the current ASM directory.

rm        ——-Deletes the specified ASM Files or directories.

rm -f  ——-Deletes without prompting for confirmation.

rmalias ———Deletes the specified alias, retaining the file that the alias points to

lsdsk ———-Lists disks visible to ASM.

md_backup ——Creates a backup of all of the mounted disk groups.

md_restore ——Restores disk groups from a backup.

remap —-Repairs a range of physical blocks on a disk.

cp ——copies files into and out of ASM.

SYSASM Role (Starting in Oracle Database 11g)

SQL> GRANT SYSASM TO sys;  -- using SYSDBA for ASM administration is deprecated; connect with: sqlplus / as sysasm

ASM Rolling Upgrades START

ALTER SYSTEM START ROLLING MIGRATION TO '11.2.0.2';

STOP

alter system stop rolling migration;

Database INIT parameters for ASM.

*.control_files='+DATA/orcl/controlfile/control1.ctl','+FRA/orcl/controlfile/control2.ctl'

*.db_create_file_dest='+DATA'

*.db_create_online_log_dest_1='+DATA'

*.db_recovery_file_dest='+DATA'

*.log_archive_dest_1='LOCATION=+DATA'

*.log_file_name_convert='+DATA/VISKDR','+DATA/VISK' ## added for DG

Removing disks from an ASM disk group

SQL> select group_number,name from v$asm_disk  order by group_number,name;
GROUP_NUMBER   NAME
———— ——————————
1 ASM_ORADATA31_0000
1 ASM_ORADATA31_0001
1 ASM_ORADATA31_0002
1 ASM_ORADATA31_0003
1 ASM_ORADATA31_0004
1 ASM_ORADATA31_0005
2 ASM_ORADATA32_0000
2 ASM_ORADATA32_0001
2 ASM_ORADATA32_0002
2 ASM_ORADATA32_0003
2 ASM_ORADATA32_0004
2 ASM_ORADATA32_0005

12 rows selected.

SQL> ALTER DISKGROUP ASM_ORADATA31 DROP DISK ASM_ORADATA31_0005;
Diskgroup altered.

SQL> ALTER DISKGROUP ASM_ORADATA32 DROP DISK ASM_ORADATA32_0005;
Diskgroup altered.

SQL> select est_minutes from V$ASM_OPERATION;

EST_MINUTES
———–
6

see:    Adding Disks to ASM diskgroup Manually

MIGRATE to ASM using RMAN

run

{

backup as copy database format '+DATA';

switch database to copy;

#For each logfile

sql "alter database rename file ''/data/oracle/VISK/redo1a.rdo'' to ''+DATA''";

alter database open resetlogs;

#For each tempfile

sql “alter tablespace TEMP add tempfile” ;

}

Restore Database to ASM using SET NEWNAME 

run

{

allocate channel d1 type disk;

#For each datafile

set newname for datafile 1 to '+DATA';

restore database;

switch datafile all;

release channel d1;

}

For more detail, refer to the Master Note for Automatic Storage Management (ASM) (Doc ID 1187723.1)
Information Center: Install and Configure Oracle Automatic Storage Management (Doc ID 1522675.2)