This entry is the first of a two-part review of the
book Oracle Streams, High Speed Replication and Data
Sharing by Madhu Tumma. In this review, I will list
the topics covered by this book, give a chapter-by-chapter
overview, and end with my comments and
opinion about the book. Part 1 includes the
topics covered and the chapter-by-chapter overview
of the preface and chapters 1-3. Part 2 will
conclude with chapters 4-10 and my opinion. I will
include the page count of each chapter to give
you an idea of the depth of that chapter's topic.
Here are the publishing particulars:
Title: Oracle Streams, High Speed Replication and Data Sharing
Author: Madhu Tumma
Publisher: Rampant Press
Publish Date: Feb 2005
Price: US $16.95
This book covers all aspects of Oracle Streams
including how to configure, monitor and use it.
The book includes a preface, 10 chapters,
references and an index.
The chapters are:
- Preface -- Data & What is Streams
- Chapter 1: What is Streams? -- Introduction
- Chapter 2: Streams Components and Processes
-- The Architecture of Streams
- Chapter 3: Streams Replication -- The OUT
When, What & How
- Chapter 4: Capture and Propagate
Configuration -- Database Nitty Gritty
- Chapter 5: Apply Process Configuration --
The IN When,What and How
- Chapter 6: Apply Handlers -- Code Time
- Chapter 7: Monitoring and Troubleshooting
Streams -- SQL, Views and Errors
- Chapter 8: Down Streams Capture -- Remote
- Chapter 9: Streams and Real Application
Clusters -- Streams & RAC Overview
- Chapter 10: Streams for Heterogeneous
Replication -- Oracle and Non-oracle Data
I wouldn't normally include the preface in a
review. In most cases it's just a description of the
book and the way it's laid out with the occasional
discussion of philosophy by the author. In Oracle
Streams, Madhu Tumma opens with a really decent
definition of Streams and how it differs from
Data Guard and RAC.
Chapter 1: What is Streams?
Chapter 1 is the introduction chapter. The author
covers data sharing and synchronization concepts and
how streams fits into those concepts. He covers why
data sharing is needed and how data sharing is
impacted by very large databases (VLDB).
The need for data transformation is discussed
briefly and an example scenario is presented.
He also discusses just what data replication is
and why it's needed, including: to support global
operations, site autonomy, enhanced performance, and
data availability and protection (failover).
This chapter explains the difference between
synchronous and asynchronous replication. This
section also describes two-phase commit (2PC),
the issues with 2PC, and how Streams offers a
simpler alternative.
The next section in this chapter explains what
Oracle Streams is, including a brief intro to the
Streams architecture and where to use Streams. This
is a really good discussion that gets a bit more
into the differences between Streams, Data Guard and
RAC. The author points out that while Data Guard,
RAC and Streams are different, Streams has
incorporated some of the strengths of RAC and Data
Guard. Streams also allows PL/SQL user exits (Apply
Handlers) which is not available in Data Guard (and
doesn't really make sense for RAC).
The author explains that Streams technology is
used in Message Queuing (via AQ), Event Messaging
and Notification, Oracle Replication, and Data
Warehouse loading (via Change Data Capture).
Chapter 1 also provides a history of Streams
evolution, Streams 10g new features and a little bit
more information on Streams and AQ.
The chapter ends with a discussion of two other
replication products: GoldenGate Data
Synchronization Platform by GoldenGate Software and
Shareplex Data Replication by Quest Software. Each
of these technologies gets about a page of detail.
By itself, this would be a great overview for
anyone in your organization interested in Streams,
data sharing or replication. If Rampant Press
provided this chapter for free as a PDF, I bet they
would sell many more copies of this book and others
in the series.
Chapter 2: Streams Components and Processes
Chapter 2 is an introduction to the architecture
of Oracle Streams. The first part covers the
Producer/Consumer model. That is, there is a
producer database providing data and one or more
consumer databases consuming that data. The
Producer/Consumer model was introduced into Oracle
with Advanced Queuing (AQ).
The author provides good detail on the flow of
data in Streams and the Streams Clients. These
clients are the entities that capture data, move
data around, and store/manipulate the data.
My favorite part of this chapter is the
discussion of queues. Queues are a key component of
Streams. This chapter explains HOW those queues are
used. It also covers secure queues, the typed queue
and the ANYDATA queue; user applications would
typically use typed queues, while Streams clients
use ANYDATA queues. This section also defines
enqueuing and dequeuing and explains how and when
they occur.
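For readers who haven't touched AQ before, here is a minimal sketch (not from the book) of what enqueuing a message into an ANYDATA queue looks like in PL/SQL. The queue name strmadmin.streams_queue and the message text are placeholders:

```sql
-- Illustrative only: queue name and payload are placeholders.
DECLARE
  enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;    -- default enqueue options
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T; -- default message properties
  msg_id    RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'strmadmin.streams_queue',
    enqueue_options    => enq_opts,
    message_properties => msg_props,
    -- ANYDATA lets one queue carry payloads of different types:
    payload            => SYS.ANYDATA.ConvertVarchar2('hello, streams'),
    msgid              => msg_id);
  COMMIT;
END;
/
```

Dequeuing is the mirror image via DBMS_AQ.DEQUEUE.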
The capture process is covered in some detail
including almost a full page about buffered queues
and how they help performance. The creation of
Logical Change Records (LCRs) from the redo logs
is discussed.
The author also explains how LogMiner is used
in the capture process and what the differences are
between Hot Mining and Cold Mining.
Since Streams uses the redo logs for capture,
additional information called "supplemental logging"
is required. This chapter provides very detailed
information about this additional logging as well as
configuration of this logging.
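To give a flavor of what that configuration looks like (this sketch is mine, not the book's; the table name is illustrative), supplemental logging is enabled at the database and table level roughly like this:

```sql
-- Database-wide minimal supplemental logging, required for capture:
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Table-level logging of primary-key columns (table name is a placeholder):
ALTER TABLE hr.employees
  ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```

The extra logged columns are what let the apply process find the matching row at the destination.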
The latter half of this chapter is an
introduction to propagation, propagation rules and
the apply process. The main features of the apply
process are discussed, as are the four custom apply
handlers: the DML Handler, DDL Handler, Message
Handler and Pre-Commit Handler.
The chapter ends with an overview of the Rules
Engine and Rule Based Transformations.
Chapter 3: Streams Replication
This chapter covers the specifics of data
replication, rather than Streams in general.
The author explains what database replication is
and how DDL and DML differ. Streams can handle both
kinds of replication. This chapter explains how
background processes in the source database capture
DML and DDL from the redo log. These changes are
propagated to, and applied in, the destination
databases.
A fairly detailed explanation of "Downstream
Capture" is provided. Downstream capture is the
process of copying redo logs to a non-critical
database so that the capture process will not impact
performance. The author provides an excellent
explanation of the requirements and configuration
of downstream capture.
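As a rough sketch of what that setup involves (service names, database names and settings here are illustrative, not taken from the book):

```sql
-- On the source database: ship redo to the downstream host.
-- 'downstrm' is a placeholder TNS service name.
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=downstrm ASYNC NOREGISTER
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)'
  SCOPE=BOTH;

-- On the downstream database: create the capture process there,
-- so the LogMiner work stays off the source (names are placeholders).
BEGIN
  DBMS_CAPTURE_ADM.CREATE_CAPTURE(
    queue_name      => 'strmadmin.streams_queue',
    capture_name    => 'downstream_capture',
    source_database => 'PRODDB');
END;
/
```

The key point is that the capture process runs on the downstream database, so the source only pays the (cheap) cost of shipping redo.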
The author lists the types of DML that are
replicated (Insert, Update, Delete, Merge and
updates of LOBs) and covers the types of DDL
activity that are not replicated. He makes an
important note here:
A Capture process can capture DDL statements,
but not the result of DDL statements, unless the
DDL statement is a CREATE TABLE AS SELECT
He goes on to use ANALYZE as an example. The
analyze itself can be captured but the statistics
generated would not be.
He also makes the point that by using nologging
(for SQL) and unrecoverable (for SQL*Loader), the
capture process will not see those changes. DBAs use
these keywords to improve performance.
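A quick sketch of my own (table and data are illustrative) shows how easy it is to create changes that capture never sees:

```sql
-- Mark the table NOLOGGING, then do a direct-path insert.
-- The minimal redo generated is invisible to the capture process.
ALTER TABLE hr.emp_archive NOLOGGING;

INSERT /*+ APPEND */ INTO hr.emp_archive
  SELECT * FROM hr.employees;
COMMIT;
```

If a replicated table is loaded this way, the source and destination silently diverge, which is exactly why the author flags it.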
Additional configuration of supplemental logging
and object instantiation are covered.
Streams has a feature called tags. The author
explains what tags are and how they help identify
the session that generated a change.
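In practice (my sketch, not the book's; the tag value is arbitrary), a session sets a tag so that capture and apply rules can recognize and, if desired, ignore its changes:

```sql
-- Every redo record generated by this session afterwards carries the tag:
BEGIN
  DBMS_STREAMS.SET_TAG(tag => HEXTORAW('1D'));
END;
/

-- Clear the tag when done:
BEGIN
  DBMS_STREAMS.SET_TAG(tag => NULL);
END;
/
```

This is also how Streams avoids re-capturing changes made by its own apply processes in multi-way replication.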
Chapter 3 ends with a discussion of multi-way
replication and conflict resolution. He details the
four types of conflict: Update, Uniqueness, Delete
and Foreign Key. The author notes that each of
these conflicts is automatically handled by placing
the errors in an error queue unless a custom error
handler has been written. He also makes the point
that good design can alleviate some conflict issues
and he points out the pre-built conflict handlers.
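To show what wiring up one of those pre-built handlers looks like (my sketch; table and column names are placeholders), a "latest timestamp wins" update-conflict handler is registered roughly like this:

```sql
-- Resolve update conflicts on an illustrative table by letting the
-- row with the greater time_stamp value win (MAXIMUM method).
DECLARE
  cols DBMS_UTILITY.NAME_ARRAY;
BEGIN
  cols(1) := 'salary';
  cols(2) := 'time_stamp';  -- resolution column must be in the list
  DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
    object_name       => 'hr.employees',
    method_name       => 'MAXIMUM',
    resolution_column => 'time_stamp',
    column_list       => cols);
END;
/
```

Other pre-built methods include MINIMUM, OVERWRITE and DISCARD; anything fancier needs a custom handler, which the book covers later.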
That's it for today. I'll finish up with Chapters
4-10 later this week.