
Yearly Archives: 2011

How to find indexes which are candidates for rebuild?

I know it’s a very debatable question, but let me put my view forward and then you can comment.

I follow two approaches to identify indexes that are candidates for rebuild:

– Find indexes whose height (blevel+1) is greater than 4, i.e. indexes with BLEVEL > 3:
SQL> select owner,index_name,table_name,blevel from dba_indexes where blevel > 3;

– Analyze the index and check whether the ratio (DEL_LF_ROWS/LF_ROWS)*100 is greater than 20.
First analyze the index with the VALIDATE STRUCTURE option, and then:

SQL> SELECT name,height,lf_rows,del_lf_rows,(del_lf_rows/lf_rows)*100 as ratio FROM INDEX_STATS;

Note that INDEX_STATS holds statistics only for the index most recently analyzed in your session.

But (a big but), the real reason to rebuild should be poor performance of the queries that use the index. You should not rebuild an index merely because both of the above conditions are true; the decision must be coupled with poor SQL performance.
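Putting the two checks together, a sketch looks like this (the SCOTT schema and index name are illustrative; since INDEX_STATS holds rows only for the last index analyzed in your session, you validate and check one index at a time):

SQL> -- step 1: shortlist indexes with BLEVEL > 3, skipping the dictionary
SQL> select owner, index_name, table_name, blevel
  2  from dba_indexes
  3  where blevel > 3
  4  and owner not in ('SYS','SYSTEM');

SQL> -- step 2: validate one suspect index, then check the deleted-rows ratio
SQL> analyze index SCOTT.TEST_INDX validate structure;
SQL> select name, height, lf_rows, del_lf_rows,
  2         round(del_lf_rows/nullif(lf_rows,0)*100, 2) as ratio
  3  from index_stats
  4  where del_lf_rows/nullif(lf_rows,0)*100 > 20;

NULLIF guards against a divide-by-zero on an empty index; remember that even a high ratio only makes the index a candidate, not a certainty.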

See this example:

SQL> analyze index TEST_INDX validate structure;  -- first analyze the suspect index

Index analyzed.

SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows FROM INDEX_STATS;

NAME                             HEIGHT    LF_ROWS    LF_BLKS DEL_LF_ROWS
------------------------------ -------- ---------- ---------- -----------
TEST_INDX                             8     938752      29575       73342

You can see that the height of the index is 8, along with a high number of DEL_LF_ROWS.

SQL> set autotrace on
SQL> set timing on
SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';

Elapsed: 00:00:27.25

Execution Plan
Plan hash value: 870163320

-----------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |     1 |   117 |    10  (0) | 00:00:01 |
|*  2 |  INDEX RANGE SCAN | TEST_INDX |   115 | 13455 |    10  (0) | 00:00:01 |
-----------------------------------------------------------------------------

1 recursive calls
0 db block gets
764 consistent gets
757 physical reads
0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed

Now rebuild the index:

SQL> alter index TEST_INDX rebuild;

Index altered.

SQL> select count(*) from TEST_TABLE where TEST_COL like 'http://www.hots%';


Elapsed: 00:00:06.18

Execution Plan
Plan hash value: 870163320 -- note that it uses the same plan, yet it is faster

-----------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |     1 |   117 |     6  (0) | 00:00:01 |
|*  2 |  INDEX RANGE SCAN | TEST_INDX |   115 | 13455 |     6  (0) | 00:00:01 |
-----------------------------------------------------------------------------

15 recursive calls
0 db block gets
592 consistent gets
588 physical reads
0 redo size
516 bytes sent via SQL*Net to client
468 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)

SQL> SELECT name,height,lf_rows,lf_blks,del_lf_rows,distinct_keys,used_space FROM INDEX_STATS;

NAME                             HEIGHT    LF_ROWS    LF_BLKS DEL_LF_ROWS
------------------------------ -------- ---------- ---------- -----------
TEST_INDX                             4     865410      15434           0

This clearly shows that the rebuild helped my query performance: the height of the index dropped to 4 and DEL_LF_ROWS is now 0.

Now, coming to the second part of Harvinder’s comment.

Possible ways of Rebuilding.

– Online/Offline.

ONLINE Rebuild (8i onwards)

SQL> alter index <index_name> rebuild online;

This allows DML against the table to continue while the index is being rebuilt. Remember, an online rebuild requires extra space, because it creates the new index before dropping the old one.

An index rebuild is primarily a three-step process:

Prepare phase: Oracle locks the table for a fraction of a second (barely noticeable) to create the index structure and populate the data dictionary.

Build phase: most of the work is done in this phase. Oracle populates the new index from the table while still allowing DML; concurrent DML entries are recorded in a temporary journal table (a B-tree-like structure) while the new index is being populated.

Merge phase: in the final stage, Oracle merges the journal table into the new index and drops the old index. Even during the merge phase, any changes to the table keep being recorded in the journal table, and they are merged towards the end of this phase.
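The prepare and merge phases are quick, but the build phase can run for a while on a big index. It can be watched from a second session via V$SESSION_LONGOPS (a sketch, assuming you have privileges on that view; the index name is illustrative):

SQL> -- session 1: start the online rebuild
SQL> alter index TEST_INDX rebuild online;

SQL> -- session 2: watch the long-running build phase
SQL> select sid, opname, sofar, totalwork,
  2         round(sofar/totalwork*100, 1) as pct_done
  3  from v$session_longops
  4  where totalwork > 0 and sofar < totalwork;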

From 9i onwards, the online rebuild feature also supports reverse-key, function-based, and key-compressed indexes.

Offline Index rebuild.

SQL> alter index <index_name> rebuild;

This is the conventional rebuild, which was used (and is still available) before 8i. In this process, Oracle builds the new index from the old one and then drops the old index, so space for both copies is briefly needed, and DML against the table is blocked while the rebuild runs.
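As a side note, an offline rebuild can often be sped up with the PARALLEL and NOLOGGING clauses (a sketch; the degree of parallelism is illustrative, and since NOLOGGING skips redo generation, the rebuilt index is not recoverable until the next backup):

SQL> alter index TEST_INDX rebuild parallel 4 nologging;

SQL> -- reset the attributes afterwards so later DML behaves normally
SQL> alter index TEST_INDX noparallel;
SQL> alter index TEST_INDX logging;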

Just for a refresher: steps to upgrade Oracle


Using WinSCP or any other tool, copy the patch file p6890831_111070_Linux-x86-64.zip from the machine holding the patch file to the target environment.

login: oracle
Uncompress the binary source:
$ unzip p6890831_111070_Linux-x86-64.zip
After unzipping, you will find the folder Disk1:
c111ptqhdd009: /home/oracle
$ cd /oracle/app/patches/
c111ptqhdd009: /oracle/app/patches
$ ls
Disk1  p6890831_111070_Linux-x86-64.zip

Bring down the application

login: oracle
Bring down the database:
$ sqlplus '/ as sysdba'
SQL> shutdown immediate;
$ ps -ef | grep pmon

Make sure that Oracle is not running.
$ lsnrctl stop
$ lsnrctl status
Make sure that the listener is not running.
Take a cold database backup.
Control files, datafiles, redo logs, archived logs, and tempfiles must all be included in the backup. If the files are in different paths, each path has to be backed up.
$ mkdir -p /backup/backup_date
$ cp /App1/oracle/oradata/orcl/*.* /backup/backup_date
$ cd $ORACLE_HOME/dbs
$ cp inittest.ora inittest.ora_bkpdate
$ cp spfiletest.ora spfiletest.ora_bkpdate
$ cp orapwtest.ora orapwtest.ora_bkpdate

Oracle : Move datafile to different location

From time to time a DBA might need to move an Oracle database datafile from one location to another. Reasons for this include I/O balancing for performance, or rectifying incorrect naming standards.

Choose one of the following procedures based on the log mode of your database (select log_mode from sys.v_$database):

Database is in ARCHIVELOG mode

  • Take the datafile offline with the "ALTER DATABASE DATAFILE '/old/location' OFFLINE;" command.
  • Copy or move the datafile to its new location. On Unix this can be done with the "dd" command:

dd if=/old/location of=/new/location bs=4096

  • Rename the file inside the database, recover it, and bring it back online:

SQL> ALTER DATABASE RENAME FILE '/old/location' TO '/new/location';
SQL> RECOVER DATAFILE '/new/location';
SQL> ALTER DATABASE DATAFILE '/new/location' ONLINE;
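Putting the ARCHIVELOG-mode steps together, a worked session might look like this (the paths and filename are illustrative; "!" is the SQL*Plus shell escape on Unix):

SQL> alter database datafile '/u01/oradata/orcl/users01.dbf' offline;
SQL> !dd if=/u01/oradata/orcl/users01.dbf of=/u02/oradata/orcl/users01.dbf bs=4096
SQL> alter database rename file '/u01/oradata/orcl/users01.dbf'
  2  to '/u02/oradata/orcl/users01.dbf';
SQL> recover datafile '/u02/oradata/orcl/users01.dbf';
SQL> alter database datafile '/u02/oradata/orcl/users01.dbf' online;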

Database is in NOARCHIVELOG mode

  • Shutdown the database.
  • Copy or move the datafile to its new location. On Unix this can be done with the "dd" command. Example:

dd if=/old/location of=/new/location bs=4096

  • Start SQL*Plus, do a "STARTUP MOUNT", rename the file, and open the database:

SQL> ALTER DATABASE RENAME FILE '/old/location' TO '/new/location';
SQL> ALTER DATABASE OPEN;

Virtualization on IBM

IBM has a 40-plus year history of virtualization. No other vendor can come close to making this claim. The fact is that virtually (pardon the pun) everything they have implemented on the mid-range had already been done on the mainframe. They offer one virtualization strategy, PowerVM, unlike the myriad of solutions available from Sun or HP. The technology itself uses a hypervisor-based solution (which IBM has finally implemented through Xen, but only on their x86 platform), which sits between the operating system and the hardware. PowerVM is a combination of hardware and software, which makes up the IBM System p virtualization strategy.

The technology that makes up PowerVM includes:

  • Micropartitioning and shared processor pools — Micropartitioning lets you slice your CPUs into virtual partitions. At the same time, it allows for the sharing of CPU, RAM, and I/O. You can carve a partition with as little as 1/10 of a CPU. Using a feature called uncapped partitions, you can even exceed the amount of hardware that you configure on your partition, your entitled capacity (EC), which is an important feature in environments where activity fluctuates. This workload management is all done automatically, without requiring the use of specialty workload management tools. IBM has since retired their workload management tool, Partition Load Manager (PLM), recognizing that it was the automation inherent in its shared processor pool strategy that really captivated the audience.
  • VIO Servers — These are special partitions that let you provide resources to VIO clients. The servers own the actual resources, which are network adapters or disk I/O. These partitions save money and provide flexibility by allowing partitions to share I/O resources. Shared Ethernet and virtual SCSI are the solutions that allow for sharing network and disk I/O.

During the past year, IBM has changed their virtualization nomenclature from Advanced Power Virtualization to PowerVM. In the process, IBM added several significant features to their virtualization product line:

  • Live Partition Mobility — This feature, introduced with the Power6 architecture, lets you move running AIX or Linux partitions from one physical server to another.
  • Lx86 — This recent innovation lets you run x86 Linux applications that were not specifically ported to the Power Architecture directly on a Linux partition, without a recompile.
  • Shared Dedicated Capacity — This feature lets you donate spare CPU cycles from dedicated-processor partitions to the shared processor pool.
  • Integrated Virtualization Manager (IVM) — This is a browser-based tool that provides the option of not having to have a dedicated hardware management console (HMC) from which to manage virtualization resources.
Active Memory Sharing

Active Memory Sharing (AMS) is a new feature introduced in 2009. This feature, available on POWER6, is the final piece of the puzzle for PowerVM on POWER6. It allows for the sharing of RAM, similar to how users have been able to share and micropartition CPUs, which, in turn, allows for increased use of memory through the Power hypervisor without having to do a DLPAR operation. AMS makes it possible to use idle memory that other LPARs are not using. This empowers customers to optimize their RAM configuration and ensures that memory does not sit idle while other LPARs may be in dire need of it. The next section introduces the new POWER7 system.

How does AMS accomplish this? It does so by allowing for a shared memory pool, which is virtualized through the Power hypervisor. There is no binding of physical memory to a partition’s logical memory in this environment. In fact, the total logical memory of all shared memory partitions is allowed to exceed the real physical memory allocated to the pool. This results in logical memory being oversubscribed, which is okay because the hypervisor takes care of all the logistics. Essentially, the hypervisor backs excess logical memory using paging devices. It does this through a special paging VIOS partition. AIX still manages its own paging devices, which helps the hypervisor manage the oversubscription. The hypervisor actually asks the operating system for help, and the OS steals aged pages, saves their contents to paging devices, and loans them out. This is enabled through a framework called collaborative memory management, a new feature of the OS that lets it page out contents and loan pages to the hypervisor. AMS also provides a great deal of flexibility because it can be disabled, making the hypervisor paging device the only device that needs to be optimized. This is only available through AIX and System i, not Linux. It should also be said that neither HP nor Sun offers anything remotely close to AMS.


What’s on the horizon for 2010?

On the same day in February when IBM chose to announce their new POWER7 product line, HP also announced new Integrity servers, which will use Intel’s Itanium 9300 processor, codenamed Tukwila. While there are currently no available HP products powered by the new Tukwila chips, they are rumored to arrive in May. Interestingly, the sub-2 GHz Tukwila Itanium is manufactured using a 65 nm process, while IBM’s new POWER7 is manufactured using a 45 nm process, which means that the new IBM chip should run faster, with lower power, than HP’s new chip. While Tukwila is certainly an important innovation, at the time of this writing, no roadmap for HP products containing these chips had been published. IBM, on the other hand, has the roadmap, servers, and virtualization enhancements. They are all real, not vaporware.

In addition to the obvious chip advantages just mentioned, the POWER7 architecture also introduces:

  • Modular systems with linear scalability
  • Physical and virtual management
  • Binary compatibility
  • Energy Thermal Management
  • PowerVM feature improvements

This article focuses on the PowerVM virtualization enhancements around the POWER7. What is most impressive is that benchmarks have shown that IBM’s new servers will deliver up to twice the performance and four times the virtualization capability of the powerful POWER6, for the same price and energy usage. It does this through a new advanced modular design that can support up to 64 POWER7 cores. In its announcement, IBM rolled out four servers in the mid-range and enterprise classes, leaving blade, entry-level, and the big-iron 590 replacements to come out later this year. It did release one enterprise-level box, the Power 780, which is a cross between the 570 and 595. One major feature of the 780 is TurboCore technology, which gives you the option of running the system with fewer active cores so the remaining cores can exploit the extra cache on the chip, offering the highest clock speed of any POWER7 system: 4.1 GHz in TurboCore mode. That is more than twice the clock speed of HP’s new Tukwila chip, at lower energy.

POWER7 also enables workload optimization through intelligent threading, dynamically selecting the most suitable threading mode: single thread, standard Simultaneous Multi-Threading (two threads), or SMT with four threads per core. It also comes with Active Memory Expansion, which allows the effective memory capacity of the system to be much larger than the actual physical memory of the box, by up to 100%. POWER7 systems can take advantage of Live Partition Mobility as well, which helps migrate users from POWER6-based systems.

In the IBM world, it’s all about “virtualization without limits.” The new IBM Power 780 can support up to 640 virtual machines because the server itself can support up to 64 cores with 10 virtual machines per core. Later systems will support up to 100 cores and 1000 virtual machines!

How is all of this managed? It’s done through VMControl, a virtualization plug-in for IBM Systems Director that can be used to manage virtualization on POWER7 servers. It is available in Express, Standard, and Enterprise editions. This software lets you create and store ready-to-run virtual images in a shared repository. The Standard edition even lets you create and manage system pools, which are collections of virtual images running on multiple servers, in a way that is as easy as managing a single LPAR.

Virtualization on Solaris

Take a closer look at Sun’s xVM, along with containers, hardware partitioning, and logical domains.

Sun made some bold changes in 2008, the most important being the release of xVM. Sun’s xVM is actually a mix of four separate technologies, the first of which, xVM Ops Center, was introduced in February 2008. xVM Ops Center’s most important function is that it provides a single console for the management of all devices in a virtualized environment. It further allows for the discovery and management of all physical and virtual assets. The other three technologies are the xVM server, VirtualBox, and VDI software. The xVM server is a hypervisor-based solution, based on Xen, running under Solaris on x86 computers. On SPARC it is still based on logical domains. Containers and LDoms are now part of the xVM umbrella.

New to Solaris in 2009

One new feature, introduced in the October 2008 update of Solaris, lets users migrate workloads among Solaris systems and reduces the administrative overhead of moving containers between unlike configurations. Solaris 8 and 9 containers let you run multiple Solaris 8 or 9 environments on one SPARC system. In the October 2009 release, their latest update, a new feature allows for parallel patch installation in virtual Solaris containers.

Sun is also claiming features such as predictive self-healing, which has long been available on the System p. VirtualBox is desktop virtualization software geared toward developers, allowing many different types of operating systems to run on top of an existing desktop operating system. It supports Windows, Linux, Mac, and Solaris hosts. Sun did not develop this product, but acquired open source desktop virtualization vendor Innotek, which develops the product.

Sun also offers hardware partitioning, which allows their high-end servers to be divided into hardware partitions, referred to as Sun DSDs (Dynamic System Domains). In many ways, this technology is similar to the logical partitioning IBM introduced in 2001, but with no real virtualization capabilities. It is also similar to HP’s hardware partitioning in that only high-end and mid-range servers support this technology. You cannot share resources between partitions, nor can you dynamically allocate processing resources between partitions. You also cannot share any I/O. It’s the LDOMs that actually allow virtualization. Introduced in 2007 on their SunFire line of servers, LDOMs enable customers to run multiple operating systems simultaneously. While LDOMs solved a huge deficiency in Sun’s virtualization strategy, they have many inherent flaws:

  • Scalability — Only eight CPUs and 64 GB RAM on one machine
  • Server-line — Only low-end Sparc servers are supported
  • Limited micro-partitioning — four partitions on the T1, eight on the T2
  • No dynamic allocation between partitions

For years, Sun’s answer to everything was containers or zones. Containers and zones give you the ability to run multiple virtual operating system instances inside only one kernel. They are used to provide an isolated and secure environment for running applications, which are created from a single instance of Solaris.

Simply put, they had it and IBM did not. Sun can no longer make this claim. IBM now offers AIX workload partitions (WPARs), which are their answer to containers. IBM WPARs have all the features of containers plus additional innovations:

  • Application WPARs — A workload partition that runs a single process or application inside it, like a wrapper. Unlike a standard WPAR, it is temporary and stops when the application ends.
  • Live Application Mobility — This feature lets you move running WPARs to other partitions without any user disruption. With Solaris, you need to shut off the zone first. The feature also lets you perform multi-system workload balancing, which can be used to conserve data center costs.

While Sun appears to be moving in the right direction with xVM, it still needs to standardize its offerings. There is still too much confusion around their offerings and virtualization roadmap. Nothing new has been announced in 2010 in the Sun world, unless you want to count Sun ending development of its 16-core UltraSPARC "Rock" processor.

Virtualization on HP

This section discusses HP’s Virtual Server Environment (VSE) as well as vPars, nPartitions, and IVMs.

HP’s VSE is the front-end for HP’s overall virtualization strategy. VSE itself contains several elements, including a workload management tool and advanced manageability software. The workload management feature lets you draw from spare capacity, which is available on a pay-per-use basis. This is similar in many respects to IBM’s Capacity on Demand. Further, HP Global Workload Manager (gWLM) provides intelligent policy engines that automatically adjust workloads to increase server utilization. It also comes with a product called HP Capacity Advisor, which helps you simulate various workload scenarios and is similar in many ways to IBM’s System Workload Estimator (WLE), which ships with their System Planning Tool. The VSE also lets you partition in several ways, with hard and soft partitions, HP Virtual Machines, and secure resource partitions. HP describes their virtualization/partitioning solutions as their partitioning continuum. Available partitioning includes:

  • nPartitions offer true electrical isolation as well as cell granularity. nPartitions are based on hard partitions, which were first introduced by HP in 2000 and offer greater fault isolation than soft partitions. nPartitions let you service one partition while others are online, which is similar to IBM’s logical partitioning, though systems require a reboot when moving cells from one partition to another. It’s important to note that while nPartitions support HP-UX, Windows®, VMS, and Linux, they only do so on the Itanium processor, not on the HP 9000 PA-RISC architecture. Partition scalability also depends on the operating system running in the nPartition. Another downside is that entry-level servers do not support this technology — only HP 9000 and Integrity high-end and mid-range servers. They also do not support moving resources to and from other partitions without a reboot.
  • vPars are separate operating system instances on the same nPartition or server. This offering lets you dynamically move CPU or memory resources between partitions as workload requirements change. They also give you the ability to run multiple copies of HP-UX on the same hardware. Using vPars, you can move CPUs to other running partitions, similar to PowerVM and the System p. What you can’t do with vPars is share resources, because there is no virtualized layer managing the interface between the hardware and the operating systems. This is one reason why performance overhead is low, a point that HP markets without discussing its clear limitations. Scalability is also restricted by the nPartition that the vPar is created on, the maximum being eight cells. There is also limited workload support, and vPars don’t let you share resources between partitions, nor can you dynamically allocate processing resources between them.
  • Integrity Virtual Machines (IVMs) are separate guest instances on the same nPartition with different operating system versions and users in a fully isolated environment. First introduced in 2005, they allow a partition to have its own full copy of the operating system. Within this copy, the virtual machines share the resources. This is similar in many ways to IBM’s PowerVM, as there is granularity for CPUs and I/O device sharing. The granularity actually beats PowerVM, because you can allocate as little as 1/20 of a CPU; the System p allows only 1/10 of a CPU. The downside here is scalability. HP’s virtual machines have a 4-CPU limitation and a RAM limitation of 64 GB. Reboots are also required to add processors or memory. There is no support for features such as uncapped partitions or shared processor pools. Finally, it’s important to note that HP PA-RISC servers are not supported; only Integrity servers are. Virtual storage adapters also cannot be moved unless the virtual machines are shut down. You also cannot dedicate processing resources to a single partition.
  • Resource Partitions are created with HP Process Resource Manager and allocate resources to specific applications within a single operating system. This is also a resource management tool, which lets you manage CPU, memory, and disk bandwidth. It allows minimum allocations of CPUs, and even lets you cap CPU by group. In many ways, this is similar to a Solaris container or an AIX WPAR in that it lets you have several applications residing in one copy of HP-UX. This feature has been available since HP-UX 9.0.

New features for HP-UX in 2009

With release 4.1, HP-UX now offers online workload migration, available on Integrity Virtual Machines only. This is similar to AIX’s Live Application Mobility. The March 2009 release, HP-UX 11i v3, also provides several enhancements, including the new parconfig command, which helps with the recommended configuration of nPartitions. It also provides support for accelerated virtual I/O (AVIO) for networking on Windows and Linux guests, which gives clients up to a two-fold improvement in throughput over older virtualized storage solutions, as well as a 60% reduction in service demand. It consists of two components: an Integrity VM host component and a VM guest component.

For more info, go to the HP website (http://www.hp.com).