High Availability

OS & Virtualization

Thursday, October 26, 2017

MySQL - How to perform basic admin


How to control the output format

  display output vertically : -E, --vertical
  show the options the client will use (read from option files) : --print-defaults
  save a copy of the output to a file : --tee=filename
  produce HTML output : -H, --html
 
How to check if the MySQL server is alive?
mysqladmin -u root -p ping
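
mysqladmin ping returns exit status 0 when the server is running, so it can also be used in a script. A minimal sketch, assuming a credentials file /root/.my.cnf and a host named db01 (both illustrative):

$ mysqladmin --defaults-extra-file=/root/.my.cnf -h db01 ping > /dev/null 2>&1
$ if [ $? -eq 0 ]; then echo "mysqld is alive"; else echo "mysqld is down"; fi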
 

How to find out the current status of the MySQL server?

mysqladmin -u root -p extended-status
mysqladmin -u root -p status

How to check the MySQL version?

mysqladmin -u root -p version

How to stop MySQL?

mysqladmin -p shutdown

How to set the root password?

mysqladmin -u root password YOURNEWPASSWORD

How to check all the running processes of the MySQL server?

mysqladmin -u root -p processlist

How to kill a process?

mysql> SHOW PROCESSLIST;
or
mysql> SELECT * FROM information_schema.processlist ORDER BY id;

mysql> KILL thread_id;
or
mysqladmin -u root -p kill thread_id
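
For example, to spot and kill queries that have been running too long, you can filter information_schema.processlist on the TIME column. A hedged sketch; the 300-second threshold and the thread id 12345 are illustrative only:

mysql> SELECT id, user, host, db, time, info
         FROM information_schema.processlist
        WHERE command <> 'Sleep' AND time > 300
        ORDER BY time DESC;
mysql> KILL 12345;   -- sample thread id taken from the output above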

How to create a scheduled job (event)?

mysql> CREATE EVENT IF NOT EXISTS test_event_01
ON SCHEDULE AT CURRENT_TIMESTAMP
DO …
mysql> SHOW EVENTS FROM …
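
A fuller, hedged sketch of a recurring job: the event scheduler must be enabled first, and the database mydb, table app_log, and 7-day retention are assumptions for illustration only:

mysql> SET GLOBAL event_scheduler = ON;
mysql> CREATE EVENT IF NOT EXISTS purge_app_log
       ON SCHEDULE EVERY 1 DAY
       STARTS CURRENT_TIMESTAMP
       DO DELETE FROM app_log WHERE created_at < NOW() - INTERVAL 7 DAY;
mysql> SHOW EVENTS FROM mydb;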


MySQL Backup and Recovery




This article provides a quick guide to performing backup and recovery of MySQL databases.


Logical Backup (mysqldump)

Backup database



mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql



Backup multiple databases


mysqldump -u root -ptmppassword --databases bugs sugarcrm > bugs_sugarcrm.sql
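
To back up every database on the server, or to compress the dump on the fly, the following variants also work (the output file names are illustrative):

mysqldump -u root -p --all-databases > all_databases.sql
mysqldump -u root -p sugarcrm | gzip > /tmp/sugarcrm.sql.gz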
 

 
Restore a database


In this example, to restore the sugarcrm database, execute mysql with < as shown below. When you are restoring the dumpfilename.sql on a remote database, make sure to create the sugarcrm database before performing the restore.



mysql -u root -ptmppassword
mysql> create database sugarcrm;
mysql -u root -ptmppassword sugarcrm < /tmp/sugarcrm.sql
mysql -u root -p[root_password] [database_name] < dumpfilename.sql
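
If the dump was compressed with gzip as in the sketch above, it can be restored in one pipeline, assuming the sugarcrm database already exists:

gunzip < /tmp/sugarcrm.sql.gz | mysql -u root -ptmppassword sugarcrm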



Tuesday, September 05, 2017

Exadata - tools for administration



Purpose : Command

Conduct a comprehensive Exadata health check on your Exadata Database Machine to validate your hardware and firmware :
  ./exachk -a

Collect RAID storage information :
  /opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL

Administer the storage cell network :
  ipconf -verify

Diagnose your InfiniBand network :
  /usr/bin/ibdiagnet
  /usr/sbin/ibqueryerrors.pl

Check InfiniBand network performance :
  /opt/oracle.SupportTools/ibdiagtools/infinicheck

Check the Exadata image version and history :
  imageinfo
  imagehistory

Collect OS-level performance and diagnostic data :
  oswatcher
  exawatcher


 

Thursday, August 31, 2017

Exadata - the key software components and their location


Directory/Executable/File : Purpose

/opt/oracle : Top-level directory containing Oracle storage server software
/opt/oracle.cellos : Directory containing Exadata cell software and utilities
/opt/oracle.cellos/cell.conf : Cell configuration file
/opt/oracle.cellos/CheckHWnFWProfile : Utility to validate the hardware profile
/opt/oracle.cellos/ExadataDiagCollector.sh : Utility to collect cell diagnostics data, valuable for SRs
/opt/oracle.cellos/functions_cellos : Contains various Cell OS function calls
/opt/oracle.cellos/imageinfo : Shows current image information
/opt/oracle.cellos/imagehistory : Shows image history
/opt/oracle.cellos/ipconf[.pl] : Displays or configures the cell network environment
/opt/oracle.cellos/iso : Contains kernel ISO images
/opt/oracle.cellos/make_cellboot_usb.sh : Creates a USB rescue image
/opt/oracle.cellos/MegaCli64 : MegaCLI, also in /opt/MegaRAID/MegaCli
/opt/oracle.cellos/patch : Directory for staged patches
/opt/oracle.cellos/restore_cellboot.sh : Restores from a USB rescue image
/opt/oracle.cellos/validations_cell : Directory containing output from cell server validations
/opt/oracle.cellos/vldconfig : Configures cell validation
/opt/oracle.cellos/vldrun : Runs cell validation scripts and logs to /opt/oracle.cellos/validations_cell
/opt/oracle/cell : Symlink to /opt/oracle/cell[VERSION]
/opt/oracle/cell[VERSION] : Directory containing the current cell software, for example /opt/oracle/cell11.2.2.4.2_LINUX.X64_111221
/opt/oracle/cell[VERSION]/cellsrv : Directory containing cellsrv software
/opt/oracle/cell[VERSION]/cellsrv/deploy/config : Configuration deployment files for the active cell image
/opt/oracle/cell[VERSION]/cellsrv/deploy/config/cellinit.ora : Cell initialization parameter file
/var/log/oracle : Directory containing cell server log files, alerts, and trace files
/var/log/oracle/cellos : Directory containing log and trace files for Cell Services utilities, the validation framework, and cell server startup/shutdown events
/var/log/oracle/diag/asm : Directory containing log and trace files for cell storage-related events in your cell

Saturday, August 19, 2017

5 key features that empower Oracle Exadata

5 key features that empower this engineered system:



1. Smart Flash Cache
The Exadata Storage Server layer includes flash storage as a hardware component, implemented as a set of PCI flash cards. The main benefit is, of course, faster access than standard disk-based access. In some cases you can instruct Exadata to KEEP a whole table in the Smart Flash Cache, which speeds up database-layer access for Full Table Scans. In Write-Back mode, on the other hand, you can improve DBWR or LGWR performance by placing written data on the flash cache first and syncing it with the regular hard drives afterwards, which is transparent to the database engine.
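
Pinning a segment in the Smart Flash Cache is done with the CELL_FLASH_CACHE storage attribute. A minimal sketch; the table name sales is an assumption:

SQL> ALTER TABLE sales STORAGE (CELL_FLASH_CACHE KEEP);
SQL> -- revert to the default caching policy
SQL> ALTER TABLE sales STORAGE (CELL_FLASH_CACHE DEFAULT);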


2. Storage Indexes
This feature is absolutely unique to Exadata. Storage Indexes are an intelligent storage implementation. Generally speaking, a classic database index is, by definition, created to efficiently provide the location of a certain data key. Storage Indexes in Exadata, by contrast, are focused on eliminating storage regions as possible places where data might exist. These data maps, which are completely transparent to the database layer, are maintained in memory on the Exadata Storage Servers. To make a long story short, when the Exadata Storage Server scans the Storage Index and identifies the regions where the predicate value falls within the MIN/MAX for the region, physical I/O occurs only for the identified regions. As a consequence, even when a Full Table Scan has been planned by the CBO on the database server layer, only limited I/O operations are performed.
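
Whether Storage Indexes are actually saving I/O for your session can be checked from the database layer; a hedged sketch using the standard session statistic:

SQL> SELECT s.name, m.value
       FROM v$statname s, v$mystat m
      WHERE s.statistic# = m.statistic#
        AND s.name = 'cell physical IO bytes saved by storage index';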


3. Smart Scans and Cell Offloading
Offloading is often called the secret sauce of Oracle Exadata. The main concept of Offloading is to move processing from the DB nodes (database servers) to the intelligent storage layer. What is even more important, Offloading means a reduction in the volume of data returned to the database server, which is one of the major bottlenecks in multi-terabyte and larger databases. Eliminating the time spent transferring completely unnecessary data between the storage and database tiers is the main problem Oracle Exadata was built to solve. Keep in mind that the terms Offloading and Smart Scan are used somewhat interchangeably.
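
To see how much data was eligible for offload versus what actually came back over the interconnect from Smart Scans, the system statistics can be compared; a hedged sketch:

SQL> SELECT name, value
       FROM v$sysstat
      WHERE name IN ('cell physical IO bytes eligible for predicate offload',
                     'cell physical IO interconnect bytes returned by smart scan');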


4. Hybrid Columnar Compression
Also known as HCC, Hybrid Columnar Compression is one of the key features of Oracle Exadata, and it is only available on this engineered system. The HCC compression format is used only when data arrives via direct path loads. There are four levels of compression: QUERY LOW (LZO, ~4x), QUERY HIGH (ZLIB, ~6x), ARCHIVE LOW (ZLIB, ~7x) and ARCHIVE HIGH (Bzip2, ~12x). Keep in mind that HCC is not a good option for OLTP systems. Mechanically, HCC stores data in a nontraditional format: although the data still resides in Oracle blocks, each with its own block header, storage is organized into logical structures called compression units (CUs), and each CU consists of multiple Oracle blocks.
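
HCC is chosen at the segment level with the COMPRESS FOR clause (11.2-style syntax shown) and only kicks in for direct-path loads. A hedged sketch; the table names sales, sales_archive, and sales_staging are assumptions:

SQL> CREATE TABLE sales_archive COMPRESS FOR ARCHIVE HIGH
     AS SELECT * FROM sales;                    -- CTAS is a direct-path load, so HCC applies
SQL> INSERT /*+ APPEND */ INTO sales_archive
     SELECT * FROM sales_staging;               -- later loads must also be direct-path to stay in HCC
SQL> SELECT table_name, compression, compress_for
       FROM user_tables WHERE table_name = 'SALES_ARCHIVE';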


5. IORM
I/O Resource Manager (IORM) is an Oracle Exadata feature that extends Oracle Resource Manager from the database layer to the storage layer. IORM only actively manages I/O requests when needed; when a Storage Server is not fully utilized, it serves data immediately. But when a disk is under heavy utilization, the Storage Server software redirects I/O requests to the appropriate IORM queue and schedules I/O from there according to the policies defined in your IORM plans. In general, IORM policies make it possible to prioritize databases at the intelligent storage layer, which enables workload optimization.
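
IORM plans are defined per cell with CellCLI. A hedged sketch of an inter-database plan; the database names prod and dev are assumptions, and the exact attributes vary by Exadata software version:

CellCLI> ALTER IORMPLAN objective = auto
CellCLI> ALTER IORMPLAN dbplan = ((name = prod, level = 1, allocation = 70), -
                                  (name = dev, level = 2, allocation = 30), -
                                  (name = other, level = 3, allocation = 100))
CellCLI> LIST IORMPLAN DETAIL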

Wednesday, July 19, 2017

Oracle Database Backup To Cloud (to Amazon S3)

Oracle Database Backup To Cloud:
Amazon Simple Storage Service (S3)



Amazon Web Services (AWS) is the first Cloud vendor that Oracle has partnered with to enable database backup in the Cloud.


The process is:
  1. Create an AWS S3 account and set up the necessary credentials.
  2. Install an AWS specific Oracle Secure Backup library into your Oracle Home.
  3. Run an RMAN backup using the SBT_TAPE device type.



Creating an AWS S3 Account

Go to -> aws.amazon.com
Under IAM (Identity & Access Management), select Users -> Add user






 




Installing the Oracle Secure Backup Cloud Module Jar file






The Oracle Secure Backup (OSB) Cloud Module enables an Oracle Database to send its backups to Amazon S3. It is compatible with Oracle Database versions 9i Release 2 and above, and it requires a network connection to the Internet.




From OTN, download the installer Java JAR file, then copy and extract the zip to your database server. When run, the installer determines the proper database version and OS platform, and downloads the appropriate library file to your Oracle home or other specified directory.







$ java -jar osbws_install.jar \
>    -AWSID AKI***************QA \
>    -AWSKey no/MD*******************************upxK \
>    -otnUser vincent.ng@xxxx.com \
>    -walletDir $ORACLE_HOME/dbs/osbws_wallet \
>    -libDir $ORACLE_HOME/lib




RMAN> run {
allocate channel aws_s3 type sbt
parms='SBT_LIBRARY=libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
backup tablespace users;
}
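
Instead of allocating the channel in every run block, the SBT channel can be configured once as the default, so that plain BACKUP commands go to S3. A hedged sketch reusing the paths from the example above; they would differ in your environment:

RMAN> CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=libosbws.so,SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;
RMAN> BACKUP DATABASE;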



Tuesday, June 27, 2017

Getting X11 forwarding through ssh working after running su




$ xauth list $DISPLAY
You'll get something like

somehost.somedomain:10 mit-magic-cookie-1 4d22408a71a55b41ccd1657d377923ae

Then, after having done su, tell the new user what the cookie is:

$ xauth add somehost.somedomain:10 MIT-MAGIC-COOKIE-1 4d22408a71a55b41ccd1657d377923ae
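
Putting the whole flow together, a hedged sketch; the user oracle and display number 10 are assumptions, and since su - may not carry DISPLAY over, it is exported explicitly:

$ echo $DISPLAY                    # note the value, e.g. localhost:10.0
$ xauth list $DISPLAY
$ su - oracle
$ export DISPLAY=localhost:10.0    # same value as before the su
$ xauth add somehost.somedomain:10 MIT-MAGIC-COOKIE-1 4d22408a71a55b41ccd1657d377923ae
$ xclock                           # quick test that forwarding works for the new user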

Thursday, May 11, 2017

Oracle Clusterware Main Log Files



Oracle Clusterware uses a unified log directory structure to consolidate the Oracle Clusterware
component log files. This consolidated structure simplifies diagnostic information collection and
assists during data retrieval and problem analysis.



The main directories used by Oracle Clusterware to store its log files:
  • CRS logs are in $ORA_CRS_HOME/log/<hostname>/crsd/. The crsd.log file is archived every 10 MB (crsd.l01, crsd.l02, …).
  • CSS logs are in $ORA_CRS_HOME/log/<hostname>/cssd/. The cssd.log file is archived every 20 MB (cssd.l01, cssd.l02, …).
  • EVM logs are in $ORA_CRS_HOME/log/<hostname>/evmd.
  • Depending on the resource, specific logs are in $ORA_CRS_HOME/log/<hostname>/racg and in $ORACLE_HOME/log/<hostname>/racg. In the latter directory, imon_<service>.log is archived every 10 MB for each service. Each RACG executable has a subdirectory assigned exclusively for that executable; the name of the RACG executable subdirectory is the same as the name of the executable.
  • SRVM (srvctl) and OCR (ocrdump, ocrconfig, ocrcheck) logs are in $ORA_CRS_HOME/log/<hostname>/client/ and in $ORACLE_HOME/log/<hostname>/client/.
  • Important Oracle Clusterware alerts can be found in alert<hostname>.log in the $ORA_CRS_HOME/log/<hostname> directory.

Wednesday, January 11, 2017

Oracle dbstart/dbshut does not do anything

Steps
  1. In /etc/oratab, make sure the last character of the entry is Y and not N.
  2. E.g. change this:

      orcl:/u01/app/oracle/product/11.2.0/dbhome_1:N

     to this:

      orcl:/u01/app/oracle/product/11.2.0/dbhome_1:Y
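
With the oratab entry set to Y, dbstart and dbshut will act on that database. A minimal sketch of how they are typically invoked in 11.2, where the Oracle home passed as an argument is used for the listener:

$ $ORACLE_HOME/bin/dbstart $ORACLE_HOME
$ $ORACLE_HOME/bin/dbshut $ORACLE_HOME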