High Availability

OS & Virtualization

Wednesday, January 30, 2013

Cascading Data Guard


You have a primary database in your corporate offices, and you want to create a standby database in another building on your local area network (LAN). In addition, you have a legal insurance requirement to keep the redo data and backup copies off site at a geographically distant location outside of your LAN but on your wide area network (WAN).

You could define two destinations on your primary database so that redo data is transmitted to both sites, but this would add load to your primary database because of the network latency of sending the redo data over the WAN.

To solve this problem, you could define a tight connection between your primary and physical standby databases in your LAN using the LGWR and SYNC network transports and standby redo log files. This would protect against losing access to the primary database, and it provides an alternate site for production when maintenance is required on the primary database. The secondary location on the WAN could be serviced by the physical standby database, ensuring that the redo data is stored off site. Nightly backups on the production database could then be moved to the WAN remote standby database, which removes the requirement to ship tapes to the off-site storage area.

Finally, in a worst-case scenario where you lose access to both the primary database and the physical standby database on the LAN, you could fail over to the remote standby database with minimal data loss. If you can still reach the original standby database and salvage its last standby redo log file, you could apply it on the remote standby database and incur no data loss.

The only time you would incur problems by sending the information over the WAN is during a switchover or failover, when the physical standby database has transitioned to the primary role. However, this configuration would still meet your insurance requirements.


To reduce the load on your primary system, you can implement cascaded destinations, whereby a standby database receives its redo data from another standby database instead of directly from the primary database. You can configure a physical standby database to retransmit the incoming redo data it receives from the primary database to other remote destinations, in the same manner as the primary database.
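
As a sketch of what this looks like, assuming hypothetical DB_UNIQUE_NAMEs boston (the LAN standby) and chicago (the WAN standby); adjust the service names to your environment:

-- On the primary: ship redo synchronously to the nearby LAN standby.
SQL> alter system set log_archive_dest_2 =
  'SERVICE=boston SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=boston';

-- On the LAN standby: cascade the received redo on to the WAN standby.
SQL> alter system set log_archive_dest_2 =
  'SERVICE=chicago ASYNC VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=chicago';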


Tuesday, January 29, 2013

Column Masking using Virtual Private Database (VPD)

Column masking is a simple way to hide valuable data from certain users without having to apply encryption/decryption techniques. In a conventional Virtual Private Database configuration, the VPD policy is applied to the whole row. By default, a column-level VPD policy restricts the rows displayed only when the specified columns are accessed.

Column masking behaviour is implemented by using the "sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS" parameter. This allows you to display all rows but mask the values of the specified columns for the restricted rows.

There are three steps to accomplish column masking:
  1. Create a function to be used by the policy (the policy function) added in the next step.
  2. Use the DBMS_RLS package to create the policy.
  3. Grant "exempt access policy" to users to be excluded from the policy; these users can see all data with no masking (see the example after the policy is created below).
Example


CONN sys/password@db10g AS SYSDBA
GRANT EXECUTE ON dbms_rls TO scott;

-- Create the policy function to restrict access to the SAL and COMM
-- columns if the employee is not part of department 20.
CREATE OR REPLACE FUNCTION pf_job (oowner IN VARCHAR2, ojname IN VARCHAR2)
RETURN VARCHAR2 AS
  con VARCHAR2 (200);
BEGIN
  con := 'deptno = 20';
  RETURN (con);
END pf_job;
/

 -- Add policy
BEGIN
  DBMS_RLS.ADD_POLICY (object_schema         => 'scott',
                       object_name           => 'emp',
                       policy_name           => 'sp_job',
                       function_schema       => 'scott',
                       policy_function       => 'pf_job',
                       sec_relevant_cols     => 'sal,comm',
                       sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
END;
/

-- All rows are returned but the SAL and COMM values are only
-- shown for employees in department 20.
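
For step 3, users who need to bypass the masking can be granted the EXEMPT ACCESS POLICY system privilege. A minimal sketch, using a hypothetical HR user purely for illustration:

CONN sys/password@db10g AS SYSDBA
GRANT EXEMPT ACCESS POLICY TO hr;

-- HR now sees SAL and COMM unmasked for all departments.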

-- Remove the policy function from the table.

BEGIN
  DBMS_RLS.DROP_POLICY (object_schema => 'scott', object_name => 'emp', policy_name => 'sp_job');
END;
/

Monday, January 28, 2013

Are you encrypting database traffic?

Encrypting Oracle network traffic safeguards sensitive data such as social security numbers, credit card numbers and other personally identifiable information against packet sniffing, where an attacker captures unencrypted data off the wire using a network sniffer. This sniffing takes place without the knowledge of either the client machine or the database server.

With Oracle Advanced Security, you can set up network encryption to your database in a matter of hours. You can also configure your Oracle databases to only accept mutually authenticated and encrypted connections. This means that in addition to protecting against network eavesdropping, you can also protect against unauthorized connections to your database.


Oracle Net Native Encryption


These lines were added to sqlnet.ora on the database server:

SQLNET.ENCRYPTION_TYPES_SERVER = RC4_256
SQLNET.ENCRYPTION_SERVER = required


You can also use Oracle Net Manager to achieve the same result (do this on both the client and the server):

1. Go to Local -> Profile.
2. Select Oracle Advanced Security -> Encryption.
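
Alternatively, if you prefer to edit sqlnet.ora directly on the client as well, the client-side counterparts of the server settings above look like this (a sketch; with the server set to required, the client-side defaults will also negotiate encryption):

SQLNET.ENCRYPTION_TYPES_CLIENT = RC4_256
SQLNET.ENCRYPTION_CLIENT = required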

No additional configuration was necessary. We connected to the database and retrieved the same data.
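
One quick way to verify that the connection really is encrypted is to look at the service banner for your own session (assuming you have access to V$SESSION_CONNECT_INFO):

SQL> select network_service_banner
     from v$session_connect_info
     where sid = sys_context('USERENV', 'SID');

-- With encryption active, one of the banner lines reports the
-- encryption service (e.g. an RC4 256-bit encryption adapter).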

 http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/security/network_encrypt/ntwrkencrypt.htm

Client Access Control


Oracle Net valid node checking lets you allow or deny access to an Oracle database server based on the IP address (or host name) of the client machine making the request. You can control access to the database server by specifying either which machines are allowed access or which machines are denied access.

To use the node validation feature, set the following sqlnet.ora (protocol.ora for Oracle 8) parameters on the database server:

# Enable node validation
tcp.validnode_checking = YES

# Prevent these client IP addresses from
# making connections to the Oracle listener.
tcp.excluded_nodes = {list of IP addresses}

# Allow these IP addresses to connect.
tcp.invited_nodes = {list of IP addresses}
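
For example, with hypothetical addresses (note that tcp.invited_nodes takes precedence over tcp.excluded_nodes if both are set):

tcp.validnode_checking = YES
tcp.invited_nodes = (192.168.1.10, 192.168.1.11)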

Protecting Oracle Network Traffic with SSH Tunnelling


SSH provides a secure encrypted communications channel between two machines over an insecure network. A client machine can connect to an Oracle database over a secure SSH connection by using port forwarding. SSH port forwarding provides another way to protect data privacy through encryption and safeguard against data interception and alteration.


Creating an SSH tunnel between a client machine and an Oracle database server requires an SSH client to be present on the client machine and an SSH server to be present on the database server. No Oracle-side configuration is necessary on the database server.

On the Server

1. Install CopSSH: http://www.itefix.no/i2/copssh
2. Create a Windows user and activate this user.


On the Client

1. Install PuTTY.
2. Under Connection -> SSH -> Tunnels:
   a. Enter a source port (can be any free port, e.g. 8080).
   b. Enter the destination: the database server IP address and listener port (e.g. 192.168.1.1:1521).
   c. Check "Local ports accept connections from other hosts".
Finally, modify tnsnames.ora on the client to point at localhost and the source port.
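
A sketch of the resulting tnsnames.ora entry, assuming source port 8080 and a service name of ORCL (both placeholders):

ORCL_TUNNEL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
    (CONNECT_DATA = (SERVICE_NAME = ORCL))
  )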

In addition to being encrypted, data passed through an SSH tunnel is automatically integrity checked and authenticated by using SSH credentials.

Thursday, January 24, 2013

Extracting DDL using DBMS_METADATA and DATAPUMP

How do you extract a table definition (DDL statement) from an Oracle database?
DBMS_METADATA can be used to extract DDL definitions from a database. You can also use Data Pump to extract the DDL.


DBMS_METADATA


set pagesize 0
set long 90000
set feedback off

select DBMS_METADATA.GET_DDL('TABLE', '<TABLE_NAME>') from DUAL;
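
For example, to extract the DDL for the EMP table in the standard SCOTT demo schema (used here purely for illustration), pass the schema as the optional third argument:

select DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') from DUAL;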


DATAPUMP

In this example we export only the package "IFC". Since no DUMPFILE parameter is specified, both commands use the default dump file (expdat.dmp) in data_pump_dir. The DDL output is written to myoutput.txt; the impdp keyword that produces it is SQLFILE.

Create a parameter file "parfile.par":

schemas=volors
directory=data_pump_dir
include=package:"= 'IFC'"


> expdp system/password parfile=c:\parfile.par

> impdp system/password directory=data_pump_dir sqlfile=myoutput.txt

Network Import using Data Pump

With network mode imports, one doesn't need any intermediate dump files (GREAT, no more FTP'ing of dump files). Data is exported across a database link and imported directly into the target database.



Example:

SQL> create user new_scott identified by tiger;
User created.

SQL> grant connect, resource to new_scott;
Grant succeeded.

SQL> grant read, write on directory dmpdir to new_scott;
Grant succeeded.

SQL> grant create database link to new_scott;
Grant succeeded.
SQL> conn new_scott/tiger
Connected.

SQL> create database link old_scott connect to scott identified by tiger using 'orcl.oracle.com';
Database link created.

impdp new_scott/tiger DIRECTORY=dmpdir NETWORK_LINK=old_scott remap_schema=scott:new_scott
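
A quick sanity check after the import (assuming the standard SCOTT demo objects were copied across):

SQL> select count(*) from new_scott.emp;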

Wednesday, January 23, 2013

How to Migrate ASM Diskgroups To Another SAN

If you plan to replace the current disks associated with your diskgroups with new storage, this operation can be accomplished without any downtime by following these steps:


1) Back up all your databases and validate the backups (always required to protect your data).
2) Add the new path (the new disks from the new storage) to your asm_diskstring so they are recognized by ASM:


Example:

SQL> alter system set asm_diskstring = '/dev/emcpowerc*', '/dev/emcpowerh*';

where '/dev/emcpowerc*' matches the current disks and '/dev/emcpowerh*' matches the new disks.

3) Confirm that the new disks are being detected by ASM:

SQL> select path from v$asm_disk;





4) Add the new disks to your desired diskgroup:


SQL> alter diskgroup <diskgroup_name> add disk
'<new_disk_1>',
'<new_disk_2>',
...
'<new_disk_n>'
rebalance power <#>;
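
For example, with a hypothetical diskgroup named DATA and EMC device paths:

SQL> alter diskgroup data add disk
'/dev/emcpowerh1',
'/dev/emcpowerh2'
rebalance power 4;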





5) Then wait until the rebalance operation completes (the rebalance is finished when these views return no rows):

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;





6) Finally, remove the old disks. Note that drop disk takes the ASM disk names (as shown in v$asm_disk.name), not the device paths:

SQL> alter diskgroup <diskgroup_name> drop disk
<old_disk_name_1>,
<old_disk_name_2>,
...
<old_disk_name_n>
rebalance power <#>;






7) Then wait again until the rebalance operation completes:

SQL> select * from v$asm_operation;
SQL> select * from gv$asm_operation;





8) Done; your ASM diskgroups and database have been migrated to the new storage.
Note: Alternatively, you can execute the add disk and drop disk statements in a single operation, so that only one rebalance is started, as follows:

SQL> alter diskgroup <diskgroup_name>
add disk '<new_disk_1>', ..., '<new_disk_n>'
drop disk <old_disk_name_1>, ..., <old_disk_name_n>
rebalance power <#>;
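
For example, with the same hypothetical names as above:

SQL> alter diskgroup data
add disk '/dev/emcpowerh1', '/dev/emcpowerh2'
drop disk data_0001, data_0002
rebalance power 4;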





This is more efficient than issuing separate add disk and drop disk commands, because only one rebalance operation is performed.

Installing Oracle VM Manager 3.0.3 under Dom0 host

Check out how you can install both Oracle VM Manager and Oracle VM Server on the same machine. I have done it successfully.

http://www.pythian.com/blog/installing-oracle-vm-manager-3-0-3-under-dom0-host-or-how-to-save-resources-on-your-sandbox/