High Availability

OS & Virtualization

Tuesday, October 22, 2013

Accessing SSL encrypted websites using UTL_HTTP and Oracle Wallet Manager

If you have used the UTL_HTTP package in PL/SQL to call external web pages or services, you might have seen the following error message:

SELECT utl_http.request('https://localhost/Opera.cfg') FROM dual;
 ORA-29273: HTTP request failed
 ORA-06512: at "SYS.UTL_HTTP", line 1130
 ORA-29024: Certificate validation failure


From Opera SQL, run the following to determine the directory where the database expects to find the wallet:

select o_http_client.get_wallet_directory from dual;

BEGIN
  utl_http.set_wallet('file:'||o_http_client.get_wallet_directory);
END;
/
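
If the wallet does not exist yet, the outline below is a minimal sketch (the wallet directory, password and certificate file are illustrative placeholders): create an auto-login wallet with orapki, load the site's root CA certificate, then point UTL_HTTP at it and retry the request.

 orapki wallet create -wallet /u01/app/oracle/wallet -pwd WalletPasswd123 -auto_login
 orapki wallet add -wallet /u01/app/oracle/wallet -trusted_cert -cert /tmp/rootca.cer -pwd WalletPasswd123

-- Back in SQL, point UTL_HTTP at the (placeholder) wallet and retry
BEGIN
  utl_http.set_wallet('file:/u01/app/oracle/wallet');
END;
/

SELECT utl_http.request('https://localhost/Opera.cfg') FROM dual;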

Tuesday, October 01, 2013

SQL Server commands 2

Querying dynamic management views:
  • You can query sys.dm_exec_requests to find blocking queries.
  • You can query sys.dm_os_memory_cache_counters to check the health of the system memory cache.
  • You can query sys.dm_exec_sessions for information about active sessions.
  • You can use sys.dm_db_index_physical_stats to check index fragmentation.
Running basic DBCC commands:
  • You can use DBCC FREEPROCCACHE to remove all elements from the procedure cache.
  • You can use DBCC FREESYSTEMCACHE to remove all unused entries from all caches.
  • You can use DBCC DROPCLEANBUFFERS to remove all clean buffers from the buffer pool.
  • You can use DBCC SQLPERF to retrieve statistics about how the transaction log space is used in all databases.
  • You can use DBCC SHOWCONTIG to show index fragmentation.
Using the KILL command to end an errant session:
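A minimal sketch (53 is only an illustrative value; use the session_id of the errant or blocking session you identified with the DMVs above):

KILL 53;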

Identifying and Rectifying the Cause of a Block
SELECT session_id, status, blocking_session_id
 FROM sys.dm_exec_requests
 WHERE blocking_session_id > 0
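
To also see the statement each blocked request is running, one option (a sketch, not part of the original note) is to join to sys.dm_exec_sql_text:

SELECT r.session_id, r.blocking_session_id, t.text
 FROM sys.dm_exec_requests AS r
 CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
 WHERE r.blocking_session_id > 0;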
 
Finding Last Backup Time for All Databases
SELECT sdb.Name AS DatabaseName,
COALESCE(CONVERT(VARCHAR(12), MAX(bus.backup_finish_date), 101),'-') AS LastBackUpTime
FROM sys.sysdatabases sdb
LEFT OUTER JOIN msdb.dbo.backupset bus ON bus.database_name = sdb.name
GROUP BY sdb.Name
 
Duration of backup
DECLARE @dbname sysname
SET @dbname = NULL --set this to be whatever dbname you want
SELECT bup.user_name AS [User],
 bup.database_name AS [Database],
 bup.server_name AS [Server],
 bup.backup_start_date AS [Backup Started],
 bup.backup_finish_date AS [Backup Finished]
 ,CAST((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))/3600 AS varchar) + ' hours, '
 + CAST(((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))/60)%60 AS varchar)+ ' minutes, '
 + CAST((CAST(DATEDIFF(s, bup.backup_start_date, bup.backup_finish_date) AS int))%60 AS varchar)+ ' seconds'
 AS [Total Time]
FROM msdb.dbo.backupset bup
WHERE bup.backup_set_id IN
  (SELECT MAX(backup_set_id) FROM msdb.dbo.backupset
  WHERE database_name = ISNULL(@dbname, database_name) --if no dbname, then return all
  AND type = 'D' --only interested in the time of last full backup
  GROUP BY database_name)
/* COMMENT THE NEXT LINE IF YOU WANT ALL BACKUP HISTORY */
AND bup.database_name IN (SELECT name FROM master.dbo.sysdatabases)
ORDER BY bup.database_name


  

Thursday, August 15, 2013

How to relocate Oracle RAC Service

Starting/Stopping Database instance

The database (all instances) can be shut down and started by running the commands below from a command prompt window on the database servers.

 srvctl stop database -d opera -o immediate

srvctl start database -d opera

srvctl start database -d opera -o mount
srvctl stop instance -d opera -i opera3 -o immediate
or
srvctl stop instance -d opera -i "opera1,opera2" -o immediate
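
To check the current state of the instances:

 srvctl status database -d opera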
 
Starting/Stopping Services

 srvctl start service -s "volors,oxihub" -d opera
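
The matching stop and status commands (same service and database names as above):

 srvctl stop service -s "volors,oxihub" -d opera
 srvctl status service -d opera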



How to set auto start resources in 11G RAC

https://oracleracdba1.wordpress.com/2013/01/29/how-to-set-auto-start-resources-in-11g-rac/ 
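
One common way to do this (a sketch, not taken from the linked post; the resource name ora.opera.db is illustrative) is to set the resource's AUTO_START attribute with crsctl:

 crsctl stat res -t
 crsctl modify resource ora.opera.db -attr "AUTO_START=always"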
 

How to relocate Oracle RAC Service

All cluster services related to node 1 will be in OFFLINE state.

Take note of the slhors service. This service will fail over to node 3 when either node 1 or node 2 is down.

When we check using the srvctl command, we see this:


 d:\oracle\1020\crs\bin> srvctl status service -d opera -s slhors
service slhors is running on instance opera2,opera3



It is now handled by the opera2 and opera3 instances because node 1 is down.

However, even after node 1 is back up, the service will not be moved back to opera1 automatically, so we have to manually relocate the slhors service from opera3 back to opera1 using the following statement:

d:\oracle\1020\crs\bin> srvctl relocate service -d opera -s slhors -i opera3 -t opera1
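
You can confirm the move with the same status command as before; it should now report that the service is running on opera1 and opera2:

d:\oracle\1020\crs\bin> srvctl status service -d opera -s slhors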

Note that slhors is now handled by opera1 and opera2 again, as it should be.

Saturday, July 27, 2013

Using datapump to extract DDL

To export

 expdp system/******** directory=data_pump_dir schemas=opera dumpfile=opera.dmp include=package

To import

 impdp system/******** directory=data_pump_dir dumpfile=opera.dmp sqlfile=sqlfile.log include=package:>'RES'

Eg :
  • PACKAGE:"LIKE '%API'" -- names ending with API
  • package:>'RES' -- names sorting after 'RES' (use LIKE 'RES%' to match names starting with RES)
  • include=table,view,procedure -- you can include tables, views, procedures, functions, packages, indexes
  • EXCLUDE=SCHEMA:"IN ('OUTLN','SYSTEM')" -- exclude schemas
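
Because the quotes in these filters are easily mangled by the operating system shell, it is often easier to put them in a parameter file. A small sketch, say ddl_extract.par (the file name and filter are illustrative):

directory=data_pump_dir
dumpfile=opera.dmp
sqlfile=opera_packages.sql
include=PACKAGE:"LIKE '%API'"

 impdp system/******** parfile=ddl_extract.par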

Tuesday, June 18, 2013

Oracle 11g R2 response file example

After installing the Operating System and configuring all necessary parameters, one has to install the Oracle software. It is usually a good idea to use a response file to do this.
There are a few reasons to use a response file:
  • The installation is reproducible (the most important point)
  • No X server is necessary when using a response file with the Oracle Universal Installer (OUI)
  • The installation is easily scriptable
  • Strictly enforcing the OFA or other policies on all hosts is much easier
After extracting the archive with the software downloaded from the Oracle website, we usually find an example response file in the "response/" folder of the software package. Here is an example of a response file:

oracle.install.option=INSTALL_DB_SWONLY
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/home/oracle/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true


Note that this is a very minimalistic response file, where only the software is installed (no database is created). Please refer to the Oracle documentation and the response file that Oracle provides as part of their software delivery package.

To install the software, execute runInstaller with the -silent and -responseFile options.
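
A sketch, assuming the response file above was saved as /home/oracle/db_swonly.rsp (the path and file name are illustrative):

./runInstaller -silent -responseFile /home/oracle/db_swonly.rsp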

More on silent installs:
http://www.pythian.com/blog/oracle-silent-mode-part-110-installation-of-102-and-111-databases/

Tuesday, June 04, 2013

Howto Setup Yum repositories from an ISO / CDROM

Q : How do you use yum to update / install packages from an ISO of CentOS / FC / RHEL CD?

Solution 1 : Use your DVD directly without creating any repo
  1. Mount the ISO file
    # mkdir /media/cdrom
    # mount /dev/sr0 /media/cdrom
  2. Create config file
    # vi /etc/yum.repos.d/iso.repo

    [dvd]
    baseurl=file:///media/cdrom
    enabled=1
    gpgcheck=0
  3. Run the command
    # yum install --enablerepo=dvd packagename
Solution 2 : Creation of yum repositories is handled by a separate tool called createrepo, which generates the necessary XML metadata. If you have a slow internet connection or a collection of downloaded ISO images, use this hack to install RPMs from ISO images.

  1. Step # 1: Mount an ISO file
    # rpm -i createrepo*
    # mkdir /media/cdrom
    # mount /dev/sr0 /media/cdrom
  2. Step # 2: Create a repository (needed if the ISO does not already contain a repodata directory; copy the RPMs to a writable location and index them there)
    # mkdir /tmp/repo
    # cp -a /media/cdrom/* /tmp/repo/
    # createrepo /tmp/repo
  3. Step # 3: Create config file
    # vi /etc/yum.repos.d/iso.repo

    Append following text:
    [iso-repo]
    name=ISO Repository
    baseurl=file:///tmp/repo
    enabled=1
    gpgcheck=0
Now use yum command to install packages from ISO images:
# yum install package-name
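
To verify that the new repository is visible before installing anything (not in the original write-up):
# yum repolist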

Thursday, March 28, 2013

How to Set Up SQL Server 2012 AlwaysOn Availability Groups

Advantages
  • Automatic failover
  • Ability to run reports against a readable secondary (standby) replica
  • Run backups on the standby server
Prerequisites
  • 2 Windows Server 2008 R2 Enterprise servers
  • .NET Framework 3.5.1 feature
  • Failover Clustering feature

Steps
  • Configure Failover Cluster Manager.
  • Join the 2 servers to the same domain
  • Create Cluster Wizard
  • Install SQL Server stand-alone installation
  • In SQL Server Configuration Manager, enable AlwaysOn under SQL Server Services (right-click the instance service, Properties, AlwaysOn High Availability tab); a quick check follows below
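
After restarting the SQL Server service, you can confirm the feature is enabled from T-SQL (a quick check, not part of the original steps):

-- Returns 1 when AlwaysOn Availability Groups is enabled on the instance
SELECT SERVERPROPERTY('IsHadrEnabled') AS IsHadrEnabled;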



Monday, March 18, 2013

Diagnose RAC Problems

How to display the top-level view of the cluster:
# cd /u01/app/11.2.0/grid/bin
# ./crsctl check cluster -all


How to get information about the individual resources:
# ./crsctl stat res -t
Once the Oracle software is installed, the cluvfy utility is available to provide useful post-installation information. Use the "-help" flag for usage information.

$ cluvfy stage -help
$ cluvfy stage -post crsinst -n ol6-112-rac1,ol6-112-rac2
Oracle provides the RACcheck tool (MOS [ID 1268927.1]) to audit the configuration of RAC, CRS, ASM, GI etc. It supports database versions from 10.2 to 11.2, making it a useful starting point for most analysis. The MOS note includes the download and setup details.

$ unzip raccheck.zip
$ cd raccheck
$ chmod 755 raccheck
$ ./raccheck -a

Wednesday, February 27, 2013

Installing Oracle on Solaris

How to Add Access to CD or DVD Media in a Non-Global Zone


7.Loopback mount the file system with the options ro,nodevices (read-only and no devices) in the non-global zone.
global# zonecfg -z my-zone
zonecfg:my-zone> add fs
zonecfg:my-zone:fs> set dir=/cdrom
zonecfg:my-zone:fs> set special=/cdrom
zonecfg:my-zone:fs> set type=lofs
zonecfg:my-zone:fs> add options [ro,nodevices]
zonecfg:my-zone:fs> end
zonecfg:my-zone> commit
zonecfg:my-zone> exit

8.Reboot the non-global zone.
global# zoneadm -z my-zone reboot

9.Use the zoneadm list command with the -v option to verify the status.
global# zoneadm list -v


Error Checking


ERROR Checking monitor: must be configured to display at least 256 colors >>> Could not execute auto check for
 display colors using command /usr/openwin/bin/xdpyinfo. Check if the DISPLAY variable is set. Failed <<<<
 Some requirement checks failed. You must fulfill these requirements before continuing with the installation, at which time they will be rechecked.

Solution(s):
 1. Install SUNWxwplt package
 2. Set DISPLAY variable
 3. Execute xhost + on target (set in DISPLAY) computer
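
For example, for steps 2 and 3 above (the host names are illustrative):

 workstation$ xhost +dbserver          # run on the machine whose X server will display the installer
 dbserver$ export DISPLAY=workstation:0.0
 dbserver$ /usr/openwin/bin/xdpyinfo   # should now print display information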


Error : the libXm.so.4 library might be missing. Check the status of the Motif package and install it if needed, as shown in the previous example.
# pkg info -r motif
# pkg install motif

Monday, February 18, 2013

Copying & Moving Files efficiently with xargs

How to move and/or copy a subset of files from one directory to another

 
eg 1 : compress .trc trace files older than 10 days into a tar.gz archive


 find . -type f -name '*.trc' -mtime +10 | xargs tar -cvzf bdump_$(date +%Y%m%d).gz.tar --remove-files

From time to time I need to move and/or copy a subset of files from one directory to another


 #-- delete
find . -type f -ctime -1 | xargs -i rm {}

 #-- COPY
find . -type f -ctime -1 | xargs -I '{}' cp {} /some/other/directory

 #-- MOVE
find . -type f -ctime -1 | xargs -I '{}' mv {} /some/other/directory

 

It is very easy to compress a whole Linux/UNIX directory. It is useful to back up files, email all files, or even to send software you have created to friends. Technically, it is called a compressed archive. The GNU tar command is best for this work. It can be used on a remote Linux or UNIX server. It does two things for you:
=> Create the archive
=> Compress the archive

You need to use tar command as follows (syntax of tar command):

 tar -zcvf archive-name.tar.gz directory-name
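
To extract the archive later:

 tar -zxvf archive-name.tar.gz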

Friday, February 15, 2013

Installing Oracle Database 11g Release 2 on Oracle Solaris 11

I just finished installing Oracle Database 11g Release 2 (11.2.0.3) on Oracle Solaris 11, and from what I experienced, Oracle Solaris 11 has minimal package prerequisites for this installation. Most of the requirements from Solaris 10 have been integrated or consolidated into other packages, which has reduced the number of prerequisite packages needed to install Oracle Database 11g Release 2.



In order for remote DISPLAY to work on S11, you will need the following package: SUNWxwplt

root@s11_sparc# pkg install pkg://solaris/compatibility/packages/SUNWxwplt

You can now set the DISPLAY variable for the user installing the Oracle database.
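
For example (the workstation name is illustrative):

$ export DISPLAY=workstation:0.0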

Regarding dependency on Motif and Xt libraries, you can set the toolkit as environment variable for the user installing the oracle database:

$ export AWT_TOOLKIT=XToolkit

(For more info on XToolkit, refer: http://docs.oracle.com/javase/1.5.0/docs/guide/awt/1.5/xawt.html)

That's it! Now you should be able to proceed with the Oracle Database 11g Release 2 (11.2.0.3) installation on the Solaris 11 server by running runInstaller.

Thursday, February 14, 2013

Securing the Oracle Listener

The Oracle Database Listener is the database server software component that manages the network traffic between the Oracle Database and the client. The Oracle Database Listener listens on a specific network port (default 1521) and forwards network connections to the Database.

The listener is one of the most critical components of database operations:


  • It is responsible for the ability to have a client/server communication
  • In dedicated mode it is responsible for creating a new process (or thread on Windows) on behalf of the client and setting up the communications
  • On Windows each such server process actually speaks on a new TCP/IP port and the listener redirects the client to this port
  • On Unix streaming continues on the original port
    • The listener forks a new process
    • The listener then closes its own fd-s; the new process continues to speak on the fd-s
  • In MTS (shared server) the listener is responsible for assigning and setting up the connection with the least loaded dispatcher. The dispatchers get requests from the client and place them on the request queues for the shared server processes, and read responses from the response queues to send to the client
  • How to set listener password


    Set the listener password to block most remote listener attacks and security issues. Note that setting the password manually in listener.ora using the PASSWORDS_<listener_name> parameter will result in the password being stored in cleartext; use change_password from lsnrctl instead, as shown below.


    LSNRCTL> set current_listener <listener_name>
    LSNRCTL> change_password
    Old password:
    New password:
    Reenter new password:

    LSNRCTL> set password
    Password:
    LSNRCTL> save_config


    Thursday, February 07, 2013

    MYSQL basic command

    MySQL Commands List


    Here you will find a collection of basic MySQL statements

    General Commands

    • USE database_name - Change to this database. You need to change to some database when you first connect to MySQL.
    • SELECT DATABASE(); - Shows which database is currently selected.
    • DESCRIBE table_name
    • SET PASSWORD=PASSWORD('new_password')
    • OPTIMIZE TABLE table_name
    • SHOW VARIABLES
    • SHOW DATABASES - Lists all MySQL databases on the system. To find out which database is currently selected, use the DATABASE() function.
    • SHOW TABLES [FROM database_name] - Lists all tables from the current database or from the database given in the command.
    • SHOW FIELDS FROM table_name / SHOW COLUMNS FROM table_name - Both list all columns (fields) from the given table, along with column type and other info.
    • SHOW INDEX FROM table_name - Lists all indexes from the table.
    • SHOW OPEN TABLES
    • SHOW PROCEDURE STATUS
    • SHOW FUNCTION STATUS
    • SHOW BINARY LOGS - Lists the binary log files on the server.
    • SHOW MASTER LOGS - Also lists the binary log files on the server.


    How to troubleshoot MySQL database server high CPU usage / slowness

    1. First find out what is causing the high CPU usage on the server:
      • vmstat 2 20
      • top -b -n 5
    2. Check the MySQL error log, slow query log and memory settings (for example innodb_buffer_pool_size) in /etc/my.cnf:
      • innodb_buffer_pool_size=20000M
    3. mysql> show engine innodb status\G

    Other administration commands

    • display in vertical : -E, --vertical
    • variables : --print-defaults
    • save output : --tee=
    • save html : -H
    • alive : mysqladmin -p ping
    • status : mysqladmin -p status
    • How to Find out current Status of MySQL server?
      • mysqladmin -u root -p extended-status
    • How to check MySQL version?
      • : mysqladmin -u root -p version
    • How to stop MYSQL?
      • : mysqladmin -p shutdown
    • How to set root password?
      • : mysqladmin -u root password YOURNEWPASSWORD
    • How to check all the running Process of MySQL server?
      • mysqladmin -u root -p processlist
    • How to kill a process

      • : mysqladmin kill id
    • How to create a job

    Configuration Files

    • my.cnf - usually located in /etc/my.cnf, /etc/mysql/my.cnf

    Directories

    • basedir ($MYSQL_HOME)
      - e.g. /opt/mysql-5.1.16-beta-linux-i686-glib23
    • datadir (defaults to $MYSQL_HOME/data)
    • tmpdir (important as mysql behaves unpredictably if full)
    • innodb_[...]_home_dir
      - mysql> SHOW GLOBAL VARIABLES LIKE '%dir' 

    Wednesday, January 30, 2013

    Cascading Dataguard


    You have a primary database in your corporate offices, and you want to create a standby database in another building on your local area network (LAN). In addition, you have a legal insurance requirement to keep the redo data and backup copies off site at a geographically distant location outside of your LAN but on your wide area network (WAN).

    You could define two destinations on your primary database so that redo data could be transmitted to both of these sites, but this would put an extra workload on your primary database throughput due to the network latency of sending the redo data over the WAN.

    To solve this problem, you could define a tight connection between your primary and physical standby databases in your LAN using the LGWR and SYNC network transports and standby redo log files. This would protect against losing access to the primary database, and it provides an alternate site for production when maintenance is required on the primary database. The secondary location on the WAN could be serviced by the physical standby database, ensuring that the redo data is stored off site. Nightly backups on the production database could then be moved to the WAN remote standby database, which removes the requirement to ship tapes to the off-site storage area.

    Finally, in a worst case scenario, where you lose access to both the primary database and the physical standby database on the LAN, you could fail over to the remote standby database with minimal data loss. If you can gain access to the redo log file of the last standby database from the original standby database, you could recover it on the remote standby database, incurring no data loss.

    The only time you would incur problems by sending the information over the WAN is during a switchover or failover, when the physical standby database has transitioned to the primary role. However, this configuration would still meet your insurance requirements.


    To reduce the load on your primary system, you can implement cascaded destinations, whereby a standby database receives its redo data from another standby database, instead of directly from the primary database. You can configure a physical standby database to retransmit the incoming redo data it receives from the primary database to other remote destinations in the same manner as the primary database.


    Tuesday, January 29, 2013

    Column Masking using Virtual Private Database (VPD)

    Column masking is a simple way to hide your valuable data from certain users without having to apply encrypt/decrypt techniques. In conventional Virtual Private Database, the VPD policy is applied to the whole row. By default a column-level VPD policy restricts the rows displayed only if the specified columns are accessed.

    Column masking behaviour is implemented by using the "sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS" parameter. This allows you to display all rows but mask the values of the specified columns for the restricted rows

    There are 3 steps to accomplish column masking:
    1. A function to be used by the policy (the policy function), created in the next step.
    2. Use the dbms_rls package to create the policy.
    3. Grant "exempt access policy" to users to be excluded from the policy. These users can see all data with no masking.
    Example


    CONN sys/password@db10g AS SYSDBA
    GRANT EXECUTE ON dbms_rls TO scott;

    -- Create the policy function to restrict access to the SAL and COMM columns
    -- if the employee is not part of department 20.
    CREATE OR REPLACE FUNCTION pf_job (oowner IN VARCHAR2, ojname IN VARCHAR2)
    RETURN VARCHAR2 AS
      con VARCHAR2 (200);
    BEGIN
      con := 'deptno = 20';
      RETURN (con);
    END pf_job;
    /

     -- Add policy
    BEGIN
      DBMS_RLS.ADD_POLICY (object_schema         => 'scott',
                           object_name           => 'emp',
                           policy_name           => 'sp_job',
                           function_schema       => 'scott',
                           policy_function       => 'pf_job',
                           sec_relevant_cols     => 'sal,comm',
                           sec_relevant_cols_opt => DBMS_RLS.ALL_ROWS);
    END;
    /

    -- All rows are returned but the SAL and COMM values are only
    -- shown for employees in department 20.
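
    To verify the masking effect (a quick check, not part of the original example; it assumes the standard SCOTT.EMP demo table):

    CONN scott/tiger
    SELECT empno, ename, deptno, sal, comm FROM emp ORDER BY deptno;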

    -- Remove the policy function from the table.


    BEGIN
      DBMS_RLS.DROP_POLICY (object_schema => 'scott', object_name => 'emp', policy_name => 'sp_job');
    END;
    /

    Monday, January 28, 2013

    Are you encrypting database traffic?

    Encrypting Oracle network traffic safeguards sensitive data such as social security numbers, credit card numbers and other personally identifiable information against packet sniffing. Packet sniffing is where an attacker tries to capture unencrypted data by using a network sniffer. This sniffing takes place without the knowledge of either the client machine or database server.

    With Oracle Advanced Security, you can set up network encryption to your database in a matter of hours. You can also configure your Oracle databases to only accept mutually authenticated and encrypted connections. This means that in addition to protecting against network eavesdropping, you can also protect against unauthorized connections to your database.

     

    Oracle Net Native Encryption


    These lines were added to sqlnet.ora on the database server:

    SQLNET.ENCRYPTION_TYPES_SERVER = RC4_256
    SQLNET.ENCRYPTION_SERVER = required
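
    On the client, the matching parameters can also be set in sqlnet.ora (optional in this case: with ENCRYPTION_SERVER = required, the default client setting of ACCEPTED is enough, which is why no client configuration was needed in the test below; the algorithm list must overlap with the server's):

    SQLNET.ENCRYPTION_CLIENT = required
    SQLNET.ENCRYPTION_TYPES_CLIENT = RC4_256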


    You can also use Oracle Net Manager to achieve the same result (do this on both client and server):

    1.       Go to Local -> Profile
    2.       Select Oracle Advanced Security -> Encryption
     

    No additional configuration was necessary. We connected to the database and retrieved the same data.

     http://www.oracle.com/webfolder/technetwork/tutorials/obe/db/11g/r2/prod/security/network_encrypt/ntwrkencrypt.htm

    Client Access Control


    Oracle Net valid node checking lets you allow or deny access to an Oracle database server based on the IP address (or host name) of the client machine making the request. You can control access to the database server by specifying either which machines are allowed access or which machines are denied access.

    To use the node validation feature, set the following sqlnet.ora (protocol.ora for Oracle 8) parameters on the database server:

    # Enable node validation
    tcp.validnode_checking = YES

    # Prevent these client IP addresses from
    # making connections to the Oracle listener.
    tcp.excluded_nodes = {list of IP addresses}

    # Allow these IP addresses to connect.
    tcp.invited_nodes = {list of IP addresses}
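
    For example (the addresses are illustrative):

    tcp.validnode_checking = YES
    tcp.invited_nodes = (192.168.1.10, 192.168.1.20)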

    Protecting Oracle Network Traffic with SSH Tunnelling


    SSH provides a secure encrypted communications channel between two machines over an insecure network. A client machine can connect to an Oracle database over a secure SSH connection by using port forwarding. SSH port forwarding provides another way to protect data privacy through encryption and safeguard against data interception and alteration.


    Creating an SSH tunnel between a client machine and an Oracle database server requires an SSH client to be present on the client machine and an SSH server to be present on the database server. No configuration is necessary on the database server.

    On the Server

    1.       Install CopSSH http://www.itefix.no/i2/copssh
    2.       Create a windows user and activate this user


    On the Client

    1.       Install Putty
    2.       Under connection ->SSH -> Tunnels.
    a.       Enter a source port (can be any free port eg 8080)
    b.      Enter the Destination: the database server IP address and listener port (eg 192.168.1.1:1521)
    c.       Check the local ports accept connections from other hosts
     
     
    Modify the tnsnames.ora on the client to use localhost and the source port.
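
    A sketch of such an entry (the alias, service name and the forwarded port 8080 from the Putty example above are illustrative):

    OPERA_TUNNEL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
        (CONNECT_DATA = (SERVICE_NAME = opera))
      )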

    In addition to being encrypted, data passed through an SSH tunnel is automatically integrity checked and authenticated by using SSH credentials.

    Thursday, January 24, 2013

    Extracting DDL using DBMS_METADATA and DATAPUMP

    How to extract the table definition (DDL statement) from an Oracle database ?
    DBMS_METADATA can be used to extract DDL definitions from a database. You can also use DataPump to extract the DDL.

     

    DBMS_METADATA


    set pagesize 0
    set long 90000
    set feedback off

    select DBMS_METADATA.GET_DDL('TABLE','<table_name>') from DUAL;

     

    DATAPUMP

    In this example we are exporting only the package "IFC". The DDL output is written to myoutput.txt. The impdp keyword used here is SQLFILE.

    Create a parameter file "parfile.par":
    schemas=volors
    directory=data_pump_dir
    include=package:"= 'IFC'"


    > expdp system/password parfile=c:\parfile.par

    > impdp system/password directory=data_pump_dir sqlfile=myoutput.txt

    Network Import using datapump

    With network mode imports, one doesn't need any intermediate dump files (GREAT, no more FTP'ing of dump files). Data is exported across a database link and imported directly into the target database.



    Example:



     
    SQL> create user new_scott identified by tiger;
    User created.
    
    SQL> grant connect, resource to new_scott;
    Grant succeeded.
    
    SQL> grant read, write on directory dmpdir to new_scott;
    Grant succeeded.
    
    SQL> grant create database link to new_scott;
    Grant succeeded.
    
    SQL> conn new_Scott/tiger
    Connected.
    
    SQL> create database link old_scott connect to scott identified by tiger  using 'orcl.oracle.com';
    Database link created.
    

    impdp new_scott/tiger DIRECTORY=dmpdir NETWORK_LINK=old_scott remap_schema=scott:new_scott

    Wednesday, January 23, 2013

    How to Migrate ASM Diskgroups To Another SAN

    If you plan to replace the disks currently associated with your diskgroups with new storage, this operation can be accomplished without any downtime by following these steps:


    1) Backup all your databases and validate the backup (always required to protect your data).
    2) Add the new path (new disks from the new storage) to your asm_diskstring so they can be recognized by ASM:


    Example:


     SQL> alter system set asm_diskstring = '/dev/emcpowerc*', '/dev/emcpowerh*';

    Where: '/dev/emcpowerc*' are the current disks.
    Where: '/dev/emcpowerh*' are the new disks.

    3) Confirm that the new disks are being detected by ASM:


     SQL> select path from v$asm_disk;


    4) Add the new disks to your desired diskgroup:



     SQL> alter diskgroup <diskgroup_name> add disk
    '<new disk 1>',
    '<new disk 2>',
    '<new disk 3>',
    .
    .
    .
    '<new disk n>'
    rebalance power <#>;

    5) Then wait until the rebalance operation completes:


     SQL> select * from v$asm_operation;
    SQL> select * from gv$asm_operation;

    6) Finally, remove the old disks:

    SQL> alter diskgroup <diskgroup_name> drop disk
    <old disk name 1>,
    <old disk name 2>,
    .
    .
    .
    <old disk name n>
    rebalance power <#>;

    7) Then wait until the rebalance operation completes:


     SQL> select * from v$asm_operation;
    SQL> select * from gv$asm_operation;

    8) Done, your ASM diskgroups and database have been migrated to the new storage.
    Note: Alternatively, we can execute the add disk and drop disk statements in one operation; that way only one rebalance operation is started, as follows:


     SQL> alter diskgroup <diskgroup_name>
    add disk '<new disk 1>', ..., '<new disk n>'
    drop disk <old disk name 1>, <old disk name 2>, ..., <old disk name n>
    rebalance power <#>;

    This is more efficient than running separate add disk and drop disk statements.