Christian Bauer
Gavin King
M A N N I N G
HIBERNATE
IN ACTION
The ultimate Hibernate reference
Hibernate in Action
Hibernate in Action
CHRISTIAN BAUER
GAVIN KING
MANNING
Greenwich
(74° w. long.)
For online information and ordering of this and other Manning books, please visit
www.manning.com. The publisher offers discounts on this book when ordered in
quantity. For more information, please contact:
Special Sales Department
Manning Publications Co.
209 Bruce Park Avenue
Greenwich, CT 06830
Fax: (203) 661-9018
Email: manning@manning.com
©2005 by Manning Publications Co. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted,
in any form or by means electronic, mechanical, photocopying, or otherwise, without
prior written permission of the publisher.
Many of the designations used by manufacturers and sellers to distinguish their products
are claimed as trademarks. Where those designations appear in the book, and Manning
Publications was aware of a trademark claim, the designations have been printed in initial
caps or all caps.
Recognizing the importance of preserving what has been written, it is Manning's policy to have
the books they publish printed on acid-free paper, and we exert our best efforts to that end.
Manning Publications Co.
209 Bruce Park Avenue
Greenwich, CT 06830

Copyeditor: Tiffany Taylor
Typesetter: Dottie Marsico
Cover designer: Leslie Haimes
ISBN 1-932394-15-X
Printed in the United States of America
1 2 3 4 5 6 7 8 9 10 VHG 07 06 05 04
contents
foreword xi
preface xiii
acknowledgments xv
about this book xvi
about Hibernate3 and EJB 3 xx
author online xxi
about the title and cover xxii
1 Understanding object/relational persistence 1
1.1 What is persistence? 3
Relational databases 3 ¦ Understanding SQL 4 ¦ Using SQL
in Java 5 ¦ Persistence in object-oriented applications 5
1.2 The paradigm mismatch 7
The problem of granularity 9 ¦ The problem of subtypes 10
The problem of identity 11 ¦ Problems relating to associations 13
The problem of object graph navigation 14 ¦ The cost of the
mismatch 15
1.3 Persistence layers and alternatives 16
Layered architecture 17 ¦ Hand-coding a persistence layer with
SQL/JDBC 18 ¦ Using serialization 19 ¦ Considering EJB
entity beans 20 ¦ Object-oriented database systems 21
Other options 22
1.4 Object/relational mapping 22
What is ORM? 23 ¦ Generic ORM problems 25
Why ORM? 26
1.5 Summary 29
2 Introducing and integrating Hibernate 30
2.1 Hello World with Hibernate 31
2.2 Understanding the architecture 36
The core interfaces 38 ¦ Callback interfaces 40
Types 40 ¦ Extension interfaces 41
2.3 Basic configuration 41
Creating a SessionFactory 42 ¦ Configuration in
non-managed environments 45 ¦ Configuration in
managed environments 48
2.4 Advanced configuration settings 51
Using XML-based configuration 51 ¦ JNDI-bound
SessionFactory 53 ¦ Logging 54 ¦ Java Management
Extensions (JMX) 55
2.5 Summary 58
3 Mapping persistent classes 59
3.1 The CaveatEmptor application 60
Analyzing the business domain 61
The CaveatEmptor domain model 61
3.2 Implementing the domain model 64
Addressing leakage of concerns 64 ¦ Transparent and
automated persistence 65 ¦ Writing POJOs 67
Implementing POJO associations 69 ¦ Adding logic to
accessor methods 73
3.3 Defining the mapping metadata 75
Metadata in XML 75 ¦ Basic property and class
mappings 78 ¦ Attribute-oriented programming 84
Manipulating metadata at runtime 86
3.4 Understanding object identity 87
Identity versus equality 87 ¦ Database identity with
Hibernate 88 ¦ Choosing primary keys 90
3.5 Fine-grained object models 92
Entity and value types 93 ¦ Using components 93
3.6 Mapping class inheritance 97
Table per concrete class 97 ¦ Table per class hierarchy 99
Table per subclass 101 ¦ Choosing a strategy 104
3.7 Introducing associations 105
Managed associations? 106 ¦ Multiplicity 106
The simplest possible association 107 ¦ Making the association
bidirectional 108 ¦ A parent/child relationship 111
3.8 Summary 112
4 Working with persistent objects 114
4.1 The persistence lifecycle 115
Transient objects 116 ¦ Persistent objects 117 ¦ Detached
objects 118 ¦ The scope of object identity 119 ¦ Outside the
identity scope 121 ¦ Implementing equals() and hashCode() 122
4.2 The persistence manager 126
Making an object persistent 126 ¦ Updating the persistent state
of a detached instance 127 ¦ Retrieving a persistent object 129
Updating a persistent object 129 ¦ Making a persistent object
transient 129 ¦ Making a detached object transient 130
4.3 Using transitive persistence in Hibernate 131
Persistence by reachability 131 ¦ Cascading persistence with
Hibernate 133 ¦ Managing auction categories 134
Distinguishing between transient and detached instances 138
4.4 Retrieving objects 139
Retrieving objects by identifier 140 ¦ Introducing HQL 141
Query by criteria 142 ¦ Query by example 143 ¦ Fetching
strategies 143 ¦ Selecting a fetching strategy in mappings 146
Tuning object retrieval 151
4.5 Summary 152
5 Transactions, concurrency, and caching 154
5.1 Transactions, concurrency, and caching 154
5.2 Understanding database transactions 156
JDBC and JTA transactions 157 ¦ The Hibernate Transaction
API 158 ¦ Flushing the Session 160 ¦ Understanding isolation
levels 161 ¦ Choosing an isolation level 163 ¦ Setting an
isolation level 165 ¦ Using pessimistic locking 165
5.3 Working with application transactions 168
Using managed versioning 169 ¦ Granularity of a
Session 172 ¦ Other ways to implement optimistic locking 174
5.4 Caching theory and practice 175
Caching strategies and scopes 176 ¦ The Hibernate cache
architecture 179 ¦ Caching in practice 185
5.5 Summary 194
6 Advanced mapping concepts 195
6.1 Understanding the Hibernate type system 196
Built-in mapping types 198 ¦ Using mapping types 200
6.2 Mapping collections of value types 211
Sets, bags, lists, and maps 211
6.3 Mapping entity associations 220
One-to-one associations 220 ¦ Many-to-many associations 225
6.4 Mapping polymorphic associations 234
Polymorphic many-to-one associations 234 ¦ Polymorphic
collections 236 ¦ Polymorphic associations and
table-per-concrete-class 237
6.5 Summary 239
7 Retrieving objects efficiently 241
7.1 Executing queries 243
The query interfaces 243 ¦ Binding parameters 245
Using named queries 249
7.2 Basic queries for objects 250
The simplest query 250 ¦ Using aliases 251 ¦ Polymorphic
queries 251 ¦ Restriction 252 ¦ Comparison operators 253
String matching 255 ¦ Logical operators 256 ¦ Ordering query
results 257
7.3 Joining associations 258
Hibernate join options 259 ¦ Fetching associations 260
Using aliases with joins 262 ¦ Using implicit joins 265
Theta-style joins 267 ¦ Comparing identifiers 268
7.4 Writing report queries 269
Projection 270 ¦ Using aggregation 272 ¦ Grouping 273
Restricting groups with having 274 ¦ Improving performance
with report queries 275
7.5 Advanced query techniques 276
Dynamic queries 276 ¦ Collection filters 279
Subqueries 281 ¦ Native SQL queries 283
7.6 Optimizing object retrieval 286
Solving the n+1 selects problem 286 ¦ Using iterate()
queries 289 ¦ Caching queries 290
7.7 Summary 292
8 Writing Hibernate applications 294
8.1 Designing layered applications 295
Using Hibernate in a servlet engine 296
Using Hibernate in an EJB container 311
8.2 Implementing application transactions 320
Approving a new auction 321 ¦ Doing it the hard way 322
Using detached persistent objects 324 ¦ Using a long session 325
Choosing an approach to application transactions 329
8.3 Handling special kinds of data 330
Legacy schemas and composite keys 330 ¦ Audit logging 340
8.4 Summary 347
9 Using the toolset 348
9.1 Development processes 349
Top down 350 ¦ Bottom up 350 ¦ Middle out (metadata
oriented) 350 ¦ Meet in the middle 350
Roundtripping 351
9.2 Automatic schema generation 351
Preparing the mapping metadata 352 ¦ Creating the
schema 355 ¦ Updating the schema 357
9.3 Generating POJO code 358
Adding meta-attributes 358 ¦ Generating finders 360
Configuring hbm2java 362 ¦ Running hbm2java 363
9.4 Existing schemas and Middlegen 364
Starting Middlegen 364 ¦ Restricting tables and
relationships 366 ¦ Customizing the metadata generation 368
Generating hbm2java and XDoclet metadata 370
9.5 XDoclet 372
Setting value type attributes 372 ¦ Mapping entity
associations 374 ¦ Running XDoclet 375
9.6 Summary 376
appendix A: SQL fundamentals 378
appendix B: ORM implementation strategies 382
B.1 Properties or fields? 383
B.2 Dirty-checking strategies 384
appendix C: Back in the real world 388
C.1 The strange copy 389
C.2 The more the better 390
C.3 We don't need primary keys 390
C.4 Time isn't linear 391
C.5 Dynamically unsafe 391
C.6 To synchronize or not? 392
C.7 Really fat client 393
C.8 Resuming Hibernate 394
references 395
index 397
foreword
Relational databases are indisputably at the core of the modern enterprise.
While modern programming languages, including Java™, provide an intuitive,
object-oriented view of application-level business entities, the enterprise data
underlying these entities is heavily relational in nature. Further, the main strength
of the relational model, over earlier navigational models as well as over later
OODB models, is that by design it is intrinsically agnostic to the programmatic
manipulation and application-level view of the data that it serves up.
Many attempts have been made to bridge relational and object-oriented technologies,
or to replace one with the other, but the gap between the two is one of
the hard facts of enterprise computing today. It is this challenge, to provide a
bridge between relational data and Java™ objects, that Hibernate takes on
through its object/relational mapping (ORM) approach. Hibernate meets this
challenge in a very pragmatic, direct, and realistic way.
As Christian Bauer and Gavin King demonstrate in this book, the effective use
of ORM technology in all but the simplest of enterprise environments requires
understanding and configuring how the mediation between relational data and
objects is performed. This demands that the developer be aware and knowledgeable
both of the application and its data requirements, and of the SQL query language,
relational storage structures, and the potential for optimization that
relational technology offers.
Not only does Hibernate provide a full-function solution that meets these
requirements head on, it is also a flexible and configurable architecture. Hibernate's
developers designed it with modularity, pluggability, extensibility, and user
customization in mind. As a result, in the few years since its initial release,
Hibernate has rapidly become one of the leading ORM technologies for enterprise
developers, and deservedly so.
This book provides a comprehensive overview of Hibernate. It covers how to
use its type mapping capabilities and facilities for modeling associations and
inheritance; how to retrieve objects efficiently using the Hibernate query language;
how to configure Hibernate for use in both managed and unmanaged
environments; and how to use its tools. In addition, throughout the book the
authors provide insight into the underlying issues of ORM and into the design
choices behind Hibernate. These insights give the reader a deep understanding
of the effective use of ORM as an enterprise technology.
Hibernate in Action is the definitive guide to using Hibernate and to object/relational
mapping in enterprise computing today.
LINDA DEMICHIEL
Lead Architect, Enterprise JavaBeans
Sun Microsystems
preface
Just because it is possible to push twigs along the ground with one's nose does
not necessarily mean that that is the best way to collect firewood.
Anthony Berglas
Today, many software developers work with Enterprise Information Systems (EIS).
This kind of application creates, manages, and stores structured information and
shares this information between many users in multiple physical locations.
The storage of EIS data involves massive usage of SQL-based database management
systems. Every company we've met during our careers uses at least one SQL
database; most are completely dependent on relational database technology at
the core of their business.
In the past five years, broad adoption of the Java programming language has
brought about the ascendancy of the object-oriented paradigm for software development.
Developers are now sold on the benefits of object orientation. However,
the vast majority of businesses are also tied to long-term investments in expensive
relational database systems. Not only are particular vendor products entrenched,
but existing legacy data must be made available to (and via) the shiny new
object-oriented web applications.
However, the tabular representation of data in a relational system is fundamentally
different than the networks of objects used in object-oriented Java applications.
This difference has led to the so-called object/relational paradigm mismatch.
Traditionally, the importance and cost of this mismatch have been underestimated,
and tools for solving the mismatch have been insufficient. Meanwhile, Java
developers blame relational technology for the mismatch; data professionals
blame object technology.
Object/relational mapping (ORM) is the name given to automated solutions to the
mismatch problem. For developers weary of tedious data access code, the good
news is that ORM has come of age. Applications built with ORM middleware can be
expected to be cheaper, more performant, less vendor-specific, and more able to
cope with changes to the internal object or underlying SQL schema. The astonishing
thing is that these benefits are now available to Java developers for free.
Gavin King began developing Hibernate in late 2001 when he found that the
popular persistence solution at the time, CMP entity beans, didn't scale to
nontrivial applications with complex data models. Hibernate began life as an independent,
noncommercial open source project.
The Hibernate team (including the authors) has learned ORM the hard way,
that is, by listening to user requests and implementing what was needed to satisfy
those requests. The result, Hibernate, is a practical solution, emphasizing developer
productivity and technical leadership. Hibernate has been used by tens of
thousands of users and in many thousands of production applications.
When the demands on their time became overwhelming, the Hibernate team
concluded that the future success of the project (and Gavin's continued sanity)
demanded professional developers dedicated full-time to Hibernate. Hibernate
joined jboss.org in late 2003 and now has a commercial aspect; you can purchase
commercial support and training from JBoss Inc. But commercial training
shouldn't be the only way to learn about Hibernate.
It's obvious that many, perhaps even most, Java projects benefit from the use of
an ORM solution like Hibernate, although this wasn't obvious a couple of years
ago! As ORM technology becomes increasingly mainstream, product documentation
such as Hibernate's free user manual is no longer sufficient. We realized that
the Hibernate community and new Hibernate users needed a full-length book,
not only to learn about developing software with Hibernate, but also to understand
and appreciate the object/relational mismatch and the motivations behind
Hibernates design.
The book you're holding was an enormous effort that occupied most of our
spare time for more than a year. It was also the source of many heated disputes
and learning experiences. We hope this book is an excellent guide to Hibernate
(or, "the Hibernate bible," as one of our reviewers put it) and also the first comprehensive
documentation of the object/relational mismatch and ORM in general.
We hope you find it helpful and enjoy working with Hibernate.
acknowledgments
Writing (in fact, creating) a book wouldn't be possible without help. We'd first
like to thank the Hibernate community for keeping us on our toes; without your
requests for the book, we probably would have given up early on.
A book is only as good as its reviewers, and we had the best: J. B. Rainsberger,
Matt Scarpino, Ara Abrahamian, Mark Eagle, Glen Smith, Patrick Peak, Max
Rydahl Andersen, Peter Eisentraut, Matt Raible, and Michael A. Koziarski. Thanks
for your endless hours of reading our half-finished and raw manuscript. We'd like
to thank Emmanuel Bernard for his technical review and Nick Heudecker for his
help with the first chapters.
Our team at Manning was invaluable. Clay Andres got this project started,
Jackie Carter stayed with us in good and bad times and taught us how to write.
Marjan Bace provided the necessary confidence that kept us going. Tiffany Taylor
and Liz Welch found all the many mistakes we made in grammar and style. Mary
Piergies organized the production of this book. Many thanks for your hard work.
Any others at Manning whom we've forgotten: You made it possible.
about this book
We introduce the object/relational paradigm mismatch in this book and give you
a high-level overview of current solutions for this time-consuming problem. You'll
learn how to use Hibernate as a persistence layer with a richly typed domain
object model in a single, continuing example application. This persistence layer
implementation covers all entity association, class inheritance, and special type
mapping strategies.
We teach you how to tune the Hibernate object query and transaction system
for the best performance in highly concurrent multiuser applications. The flexible
Hibernate dual-layer caching system is also an important topic in this book. We discuss
Hibernate integration in different scenarios and also show you typical architectural
problems in two- and three-tiered Java database applications. If you have
to work with an existing SQL database, you'll also be interested in Hibernate's legacy
database integration features and the Hibernate development toolset.
Roadmap
Chapter 1 defines object persistence. We discuss why a relational database with a
SQL interface is the system for persistent data in today's applications, and why
hand-coded Java persistence layers with JDBC and SQL code are time-consuming
and error-prone. After looking at alternative solutions for this problem, we introduce
object/relational mapping and talk about the advantages and downsides of
this approach.
Chapter 2 gives an architectural overview of Hibernate and shows you the
most important application-programming interfaces. We demonstrate Hibernate
configuration in managed (and non-managed) J2EE and J2SE environments after
looking at a simple Hello World application.
Chapter 3 introduces the example application and all kinds of entity and relationship
mappings to a database schema, including uni- and bidirectional associations,
class inheritance, and composition. You'll learn how to write Hibernate
mapping files and how to design persistent classes.
Chapter 4 teaches you the Hibernate interfaces for read and save operations;
we also show you how transitive persistence (persistence by reachability) works in
Hibernate. This chapter is focused on loading and storing objects in the most efficient
way.
Chapter 5 discusses concurrent data access, with database and long-running
application transactions. We introduce the concepts of locking and versioning of
data. We also cover caching in general and the Hibernate caching system, which
are closely related to concurrent data access.
Chapter 6 completes your understanding of Hibernate mapping techniques
with more advanced mapping concepts, such as custom user types, collections of
values, and mappings for one-to-one and many-to-many associations. We briefly
discuss Hibernate's fully polymorphic behavior as well.
Chapter 7 introduces the Hibernate Query Language (HQL) and other
object-retrieval methods such as the query by criteria (QBC) API, which is a typesafe way
to express an object query. We show you how to translate complex search dialogs
in your application to a query by example (QBE) query. You'll get the full power of
Hibernate queries by combining these three features; we also show you how to use
direct SQL calls for the special cases and how to best optimize query performance.
Chapter 8 describes some basic practices of Hibernate application architecture.
This includes handling the SessionFactory, the popular ThreadLocal Session pattern,
and encapsulation of the persistence layer functionality in data access objects
(DAO) and J2EE commands. We show you how to design long-running application
transactions and how to use the innovative detached object support in Hibernate.
We also talk about audit logging and legacy database schemas.
Chapter 9 introduces several different development scenarios and tools that
may be used in each case. We show you the common technical pitfalls with each
approach and discuss the Hibernate toolset (hbm2ddl, hbm2java) and the integration
with popular open source tools such as XDoclet and Middlegen.
Who should read this book?
Readers of this book should have basic knowledge of object-oriented software
development and should have used this knowledge in practice. To understand the
application examples, you should be familiar with the Java programming language
and the Unified Modeling Language.
Our primary target audience consists of Java developers who work with
SQL-based database systems. We'll show you how to substantially increase your productivity
by leveraging ORM.
If you're a database developer, the book could be part of your introduction to
object-oriented software development.
If you're a database administrator, you'll be interested in how ORM affects performance
and how you can tune the performance of the SQL database management
system and persistence layer to achieve performance targets. Since data
access is the bottleneck in most Java applications, this book pays close attention to
performance issues. Many DBAs are understandably nervous about entrusting performance
to tool-generated SQL code; we seek to allay those fears and also to
highlight cases where applications should not use tool-managed data access. You
may be relieved to discover that we don't claim that ORM is the best solution to
every problem.
Code conventions and downloads
This book provides copious examples, which include all the Hibernate application
artifacts: Java code, Hibernate configuration files, and XML mapping metadata
files. Source code in listings or in text is in a fixed-width font like this to
separate it from ordinary text. Additionally, Java method names, component
parameters, object properties, and XML elements and attributes in text are also
presented using fixed-width font.
Java, HTML, and XML can all be verbose. In many cases, the original source code
(available online) has been reformatted; we've added line breaks and reworked
indentation to accommodate the available page space in the book. In rare cases,
even this was not enough, and listings include line-continuation markers. Additionally,
comments in the source code have been removed from the listings.
Code annotations accompany many of the source code listings, highlighting
important concepts. In some cases, numbered bullets link to explanations that follow
the listing.
Hibernate is an open source project released under the GNU Lesser General
Public License (LGPL). Directions for downloading Hibernate, in source or binary form, are
available from the Hibernate web site: www.hibernate.org/.
The source code for all CaveatEmptor examples in this book is available from
http://caveatemptor.hibernate.org/. The CaveatEmptor example application
code is available on this web site in different flavors: for example, for servlet and for
EJB deployment, with or without a presentation layer. However, only the standalone
persistence layer source package is the recommended companion to this book.
About the authors
Christian Bauer is a member of the Hibernate developer team and is also responsible
for the Hibernate web site and documentation. Christian is interested in relational
database systems and sound data management in Java applications. He
works as a developer and consultant for JBoss Inc. and lives in Frankfurt, Germany.
Gavin King is the founder of the Hibernate project and lead developer. He is
an enthusiastic proponent of agile development and open source software. Gavin
is helping integrate ORM technology into the J2EE standard as a member of the
EJB 3 Expert Group. He is a developer and consultant for JBoss Inc., based in Melbourne,
Australia.
about Hibernate3 and EJB 3
The world doesn't stop turning when you finish writing a book, and getting the
book into production takes more time than you could believe. Therefore, some of
the information in any technical book becomes quickly outdated, especially when
new standards and product versions are already on the horizon.
Hibernate3, an evolutionary new version of Hibernate, was in the early stages
of planning and design while this book was being written. By the time the book
hits the shelves, there may be an alpha release available. However, the information
in this book is valid for Hibernate3; in fact, we consider it to be an essential
reference even for the new version. We discuss fundamental concepts that will be
found in Hibernate3 and in most ORM solutions. Furthermore, Hibernate3 will
be mostly backward compatible with Hibernate 2.1. New features will be added, of
course, but you won't have problems picking them up after reading this book.
Inspired by the success of Hibernate, the EJB 3 Expert Group used several key
concepts and APIs from Hibernate in its redesign of entity beans. At the time of writing,
only an early draft of the new EJB specification was available; hence we don't
discuss it in this book. However, after reading Hibernate in Action, you'll know all the
fundamentals that will let you quickly understand entity beans in EJB 3.
For more up-to-date information, see the Hibernate road map:
www.hibernate.org/About/RoadMap.
author online
Purchase of Hibernate in Action includes free access to a private web forum run by
Manning Publications where you can make comments about the book, ask technical
questions, and receive help from the author and from other users. To access the
forum and subscribe to it, point your web browser to www.manning.com/bauer.
This page provides information on how to get on the forum once you are registered,
what kind of help is available, and the rules of conduct on the forum. It also
provides links to the source code for the examples in the book, errata, and other
downloads.
Manning's commitment to our readers is to provide a venue where a meaningful
dialog between individual readers and between readers and the authors
can take place. It is not a commitment to any specific amount of participation on
the part of the authors, whose contribution to the AO remains voluntary (and
unpaid). We suggest you try asking the authors some challenging questions lest
their interest stray!
The Author Online forum and the archives of previous discussions will be
accessible from the publisher's web site as long as the book is in print.
about the title and cover
By combining introductions, overviews, and how-to examples, Manning's In Action
books are designed to help learning and remembering. According to research in
cognitive science, the things people remember are things they discover during
self-motivated exploration.
Although no one at Manning is a cognitive scientist, we are convinced that for
learning to become permanent it must pass through stages of exploration, play,
and, interestingly, re-telling of what is being learned. People understand and
remember new things, which is to say they master them, only after actively exploring
them. Humans learn in action. An essential part of an In Action guide is that it
is example-driven. It encourages the reader to try things out, to play with new
code, and explore new ideas.
There is another, more mundane, reason for the title of this book: our readers
are busy. They use books to do a job or solve a problem. They need books that
allow them to jump in and jump out easily and learn just what they want, just when
they want it. They need books that aid them in action. The books in this series are
designed for such readers.
About the cover illustration
The figure on the cover of Hibernate in Action is a peasant woman from a village in
Switzerland, "Paysanne de Schwatzenbourg en Suisse." The illustration is taken
from a French travel book, Encyclopédie des Voyages by J. G. St. Saveur, published in
1796. Travel for pleasure was a relatively new phenomenon at the time and travel
guides such as this one were popular, introducing both the tourist and the
armchair traveler to the inhabitants of other regions of France and abroad.
The diversity of the drawings in the Encyclopédie des Voyages speaks vividly of the
uniqueness and individuality of the world's towns and provinces just 200 years
ago. This was a time when the dress codes of two regions separated by a few dozen
miles identified people uniquely as belonging to one or the other. The travel
guide brings to life a sense of isolation and distance of that period and of every
other historic period except our own hyperkinetic present.
Dress codes have changed since then and the diversity by region, so rich at the
time, has faded away. It is now often hard to tell the inhabitant of one continent
from another. Perhaps, trying to view it optimistically, we have traded a cultural
and visual diversity for a more varied personal life. Or a more varied and interesting
intellectual and technical life.
We at Manning celebrate the inventiveness, the initiative, and the fun of the
computer business with book covers based on the rich diversity of regional life two
centuries ago brought back to life by the pictures from this travel book.
1
Understanding
object/relational persistence
This chapter covers
¦ Object persistence with SQL databases
¦ The object/relational paradigm mismatch
¦ Persistence layers in object-oriented
applications
¦ Object/relational mapping basics
The approach to managing persistent data has been a key design decision in every
software project we've worked on. Given that persistent data isn't a new or unusual
requirement for Java applications, you'd expect to be able to make a simple choice
among similar, well-established persistence solutions. Think of web application
frameworks (Jakarta Struts versus WebWork), GUI component frameworks (Swing
versus SWT), or template engines (JSP versus Velocity). Each of the competing
solutions has advantages and disadvantages, but they at least share the same scope
and overall approach. Unfortunately, this isn't yet the case with persistence technologies,
where we see some wildly differing solutions to the same problem.
For several years, persistence has been a hot topic of debate in the Java community.
Many developers don't even agree on the scope of the problem. Is persistence
a problem that is already solved by relational technology and extensions
such as stored procedures, or is it a more pervasive problem that must be
addressed by special Java component models such as EJB entity beans? Should we
hand-code even the most primitive CRUD (create, read, update, delete) operations
in SQL and JDBC, or should this work be automated? How do we achieve
portability if every database management system has its own SQL dialect? Should
we abandon SQL completely and adopt a new database technology, such as object
database systems? Debate continues, but recently a solution called object/relational
mapping (ORM) has met with increasing acceptance. Hibernate is an open source
ORM implementation.
Hibernate is an ambitious project that aims to be a complete solution to the
problem of managing persistent data in Java. It mediates the application's interaction
with a relational database, leaving the developer free to concentrate on the
business problem at hand. Hibernate is a non-intrusive solution. By this we mean
you aren't required to follow many Hibernate-specific rules and design patterns
when writing your business logic and persistent classes; thus, Hibernate integrates
smoothly with most new and existing applications and doesn't require disruptive
changes to the rest of the application.
This book is about Hibernate. We'll cover basic and advanced features and
describe some recommended ways to develop new applications using Hibernate.
Often, these recommendations won't be specific to Hibernate; sometimes they
will be our ideas about the best ways to do things when working with persistent
data, explained in the context of Hibernate. Before we can get started with Hibernate,
however, you need to understand the core problems of object persistence
and object/relational mapping. This chapter explains why tools like Hibernate
are needed.
First, we define persistent data management in the context of object-oriented
applications and discuss the relationship of SQL, JDBC, and Java, the underlying
technologies and standards that Hibernate is built on. We then discuss the
so-called object/relational paradigm mismatch and the generic problems we encounter in
object-oriented software development with relational databases. As this list of problems
grows, it becomes apparent that we need tools and patterns to minimize the
time we have to spend on the persistence-related code of our applications. After we
look at alternative tools and persistence mechanisms, you'll see that ORM is the
best available solution for many scenarios. Our discussion of the advantages and
drawbacks of ORM gives you the full background to make the best decision when
picking a persistence solution for your own project.
The best way to learn isn't necessarily linear. We understand that you probably
want to try Hibernate right away. If this is how you'd like to proceed, skip to
chapter 2, section 2.1, "Getting started," where we jump in and start coding a
(small) Hibernate application. You'll be able to understand chapter 2 without
reading this chapter, but we also recommend that you return here at some point
as you circle through the book. That way, you'll be prepared and have all the background
concepts you need for the rest of the material.
1.1 What is persistence?
Almost all applications require persistent data. Persistence is one of the fundamental
concepts in application development. If an information system didn't preserve
data entered by users when the host machine was powered off, the system
would be of little practical use. When we talk about persistence in Java, we're normally
talking about storing data in a relational database using SQL. We start by taking
a brief look at the technology and how we use it with Java. Armed with that
information, we then continue our discussion of persistence and how it's implemented
in object-oriented applications.
1.1.1 Relational databases
You, like most other developers, have probably worked with a relational database.
In fact, most of us use a relational database every day. Relational technology is a
known quantity. This alone is sufficient reason for many organizations to choose
it. But to say only this is to pay less respect than is due. Relational databases are so
entrenched not by accident but because they're an incredibly flexible and robust
approach to data management.
A relational database management system isn't specific to Java, and a relational
database isn't specific to a particular application. Relational technology provides a
way of sharing data among different applications or among different technologies
that form part of the same application (the transactional engine and the reporting
engine, for example). Relational technology is a common denominator of many
disparate systems and technology platforms. Hence, the relational data model is
often the common enterprise-wide representation of business entities.
Relational database management systems have SQL-based application programming
interfaces; hence we call today's relational database products SQL database
management systems or, when we're talking about particular systems, SQL databases.
1.1.2 Understanding SQL
To use Hibernate effectively, a solid understanding of the relational model and
SQL is a prerequisite. You'll need to use your knowledge of SQL to tune the performance
of your Hibernate application. Hibernate will automate many repetitive
coding tasks, but your knowledge of persistence technology must extend beyond
Hibernate itself if you want to take advantage of the full power of modern SQL databases.
Remember that the underlying goal is robust, efficient management of persistent
data.
Let's review some of the SQL terms used in this book. You use SQL as a data definition
language (DDL) to create a database schema with CREATE and ALTER statements.
After creating tables (and indexes, sequences, and so on), you use SQL as a
data manipulation language (DML). With DML, you execute SQL operations that
manipulate and retrieve data. The manipulation operations include insertion,
update, and deletion. You retrieve data by executing queries with restriction, projection,
and join operations (including the Cartesian product). For efficient reporting, you
use SQL to group, order, and aggregate data in arbitrary ways. You can even nest SQL
statements inside each other; this technique is called subselecting. You have probably
used SQL for many years and are familiar with the basic operations and statements
written in this language. Still, we know from our own experience that SQL is
sometimes hard to remember and that some terms vary in usage. To understand
this book, we have to use the same terms and concepts; so, we advise you to read
appendix A if any of the terms we've mentioned are new or unclear.
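To keep these terms concrete, here is a small SQL sketch of our own (it isn't from the book; table and column names are invented for illustration, and exact syntax varies by vendor):

-- DDL: define schema objects
create table ITEM (
    ITEM_ID BIGINT NOT NULL PRIMARY KEY,
    NAME VARCHAR(255) NOT NULL,
    SELLER_ID BIGINT NOT NULL
)

-- DML: manipulate data
insert into ITEM (ITEM_ID, NAME, SELLER_ID) values (1, 'Antique clock', 42)
update ITEM set NAME = 'Antique wall clock' where ITEM_ID = 1
delete from ITEM where ITEM_ID = 1

-- A query with projection (the select list), restriction (the where
-- clause), a join, and a nested subselect
select i.NAME, u.USERNAME
from ITEM i join USER u on i.SELLER_ID = u.USER_ID
where i.NAME like 'Antique%'
  and u.USER_ID in (select USER_ID from USER where RANKING > 10)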
SQL knowledge is mandatory for sound Java database application development.
If you need more material, get a copy of the excellent book SQL Tuning by Dan Tow
[Tow 2003]. Also read An Introduction to Database Systems [Date 2004] for the theory,
concepts, and ideals of (relational) database systems. Although the relational
database is one part of ORM, the other part, of course, consists of the objects in
your Java application that need to be persisted to the database using SQL.
1.1.3 Using SQL in Java
When you work with an SQL database in a Java application, the Java code issues
SQL statements to the database via the Java DataBase Connectivity (JDBC) API. The
SQL itself might have been written by hand and embedded in the Java code, or it
might have been generated on the fly by Java code. You use the JDBC API to bind
arguments to query parameters, initiate execution of the query, scroll through the
query result table, retrieve values from the result set, and so on. These are
low-level data access tasks; as application developers, we're more interested in the
business problem that requires this data access. It isn't clear that we should be
concerning ourselves with such tedious, mechanical details.
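For instance, loading a single value with raw JDBC involves every one of those steps by hand; the following sketch is our illustration (not a listing from the book), using the USER table introduced in the next section:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserFinder {
    // Each low-level JDBC task appears explicitly: bind a parameter,
    // execute the query, scroll the result table, read a column value.
    public String findName(Connection connection, String username)
            throws SQLException {
        PreparedStatement ps = connection.prepareStatement(
            "select NAME from USER where USERNAME = ?");
        try {
            ps.setString(1, username);        // bind argument to parameter
            ResultSet rs = ps.executeQuery(); // initiate execution
            try {
                if (rs.next()) {                 // scroll through the result
                    return rs.getString("NAME"); // retrieve a value
                }
                return null;
            } finally {
                rs.close();
            }
        } finally {
            ps.close();
        }
    }
}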
What we'd really like to be able to do is write code that saves and retrieves complex
objects, the instances of our classes, to and from the database, relieving us
of this low-level drudgery.
Since the data access tasks are often so tedious, we have to ask: Are the relational
data model and (especially) SQL the right choices for persistence in
object-oriented applications? We answer this question immediately: Yes! There are many
reasons why SQL databases dominate the computing industry. Relational database
management systems are the only proven data management technology and are
almost always a requirement in any Java project.
However, for the last 15 years, developers have spoken of a paradigm mismatch.
This mismatch explains why so much effort is expended on persistence-related
concerns in every enterprise project. The paradigms referred to are object modeling
and relational modeling, or perhaps object-oriented programming and SQL.
Let's begin our exploration of the mismatch problem by asking what persistence
means in the context of object-oriented application development. First we'll widen
the simplistic definition of persistence stated at the beginning of this section to a
broader, more mature understanding of what is involved in maintaining and using
persistent data.
1.1.4 Persistence in object-oriented applications
In an object-oriented application, persistence allows an object to outlive the process
that created it. The state of the object may be stored to disk and an object
with the same state re-created at some point in the future.
This isn't limited to single objects: entire graphs of interconnected
objects may be made persistent and later re-created in a new process. Most objects
aren't persistent; a transient object has a limited lifetime that is bounded by the life
of the process that instantiated it. Almost all Java applications contain a mix of persistent
and transient objects; hence we need a subsystem that manages our persistent
data.
Modern relational databases provide a structured representation of persistent
data, enabling sorting, searching, and aggregation of data. Database management
systems are responsible for managing concurrency and data integrity; they're
responsible for sharing data between multiple users and multiple applications. A
database management system also provides data-level security. When we discuss
persistence in this book, we're thinking of all these things:
¦ Storage, organization, and retrieval of structured data
¦ Concurrency and data integrity
¦ Data sharing
In particular, we're thinking of these problems in the context of an object-oriented
application that uses a domain model.
An application with a domain model doesn't work directly with the tabular representation
of the business entities; the application has its own, object-oriented
model of the business entities. If the database has ITEM and BID tables, the Java
application defines Item and Bid classes.
Then, instead of directly working with the rows and columns of an SQL result
set, the business logic interacts with this object-oriented domain model and its
runtime realization as a graph of interconnected objects. The business logic is
never executed in the database (as an SQL stored procedure); it's implemented in
Java. This allows business logic to make use of sophisticated object-oriented concepts
such as inheritance and polymorphism. For example, we could use well-known
design patterns such as Strategy, Mediator, and Composite [GOF 1995], all of
which depend on polymorphic method calls. Now a caveat: Not all Java applications
are designed this way, nor should they be. Simple applications might be much
better off without a domain model. SQL and the JDBC API are perfectly serviceable
for dealing with pure tabular data, and the new JDBC RowSet (Sun JCP, JSR 114)
makes CRUD operations even easier. Working with a tabular representation of persistent
data is straightforward and well understood.
However, in the case of applications with nontrivial business logic, the domain
model helps to improve code reuse and maintainability significantly. We focus on
applications with a domain model in this book, since Hibernate and ORM in general
are most relevant to this kind of application.
If we consider SQL and relational databases again, we finally observe the mismatch
between the two paradigms.
SQL operations such as projection and join always result in a tabular representation
of the resulting data. This is quite different than the graph of interconnected
objects used to execute the business logic in a Java application! These are fundamentally
different models, not just different ways of visualizing the same model.
With this realization, we can begin to see the problems, some well understood
and some less well understood, that must be solved by an application that combines
both data representations: an object-oriented domain model and a persistent
relational model. Lets take a closer look.
1.2 The paradigm mismatch
The paradigm mismatch can be broken down into several parts, which we'll examine one at a time. Let's start our exploration with a simple example that is problem-free. Then, as we build on it, you'll begin to see the mismatch appear.
Suppose you have to design and implement an online e-commerce application. In
this application, you'd need a class to represent information about a user of the
system, and another class to represent information about the user's billing details,
as shown in figure 1.1.

Figure 1.1 A simple UML class diagram of the user and billing details entities
Looking at this diagram, you see that a User has many BillingDetails. You can
navigate the relationship between the classes in both directions. To begin with, the
classes representing these entities might be extremely simple:
public class User {
    private String userName;
    private String name;
    private String address;
    private Set billingDetails;

    // accessor methods (get/set pairs), business methods, etc.
    ...
}
public class BillingDetails {
    private String accountNumber;
    private String accountName;
    private String accountType;
    private User user;

    // methods, get/set pairs...
    ...
}
Note that we're only interested in the state of the entities with regard to persistence,
so we've omitted the implementation of property accessors and business
methods (such as getUserName() or billAuction()). It's quite easy to come up
with a good SQL schema design for this case:
create table USER (
    USERNAME VARCHAR(15) NOT NULL PRIMARY KEY,
    NAME VARCHAR(50) NOT NULL,
    ADDRESS VARCHAR(100)
)

create table BILLING_DETAILS (
    ACCOUNT_NUMBER VARCHAR(10) NOT NULL PRIMARY KEY,
    ACCOUNT_NAME VARCHAR(50) NOT NULL,
    ACCOUNT_TYPE VARCHAR(2) NOT NULL,
    USERNAME VARCHAR(15) FOREIGN KEY REFERENCES USER
)
The relationship between the two entities is represented as the foreign key,
USERNAME, in BILLING_DETAILS. For this simple object model, the object/relational
mismatch is barely in evidence; it's straightforward to write JDBC code to insert,
update, and delete information about users and billing details.
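For example, a hand-coded JDBC insert for this schema might look like the following sketch (our illustration; it assumes the accessor methods mentioned above and omits connection handling):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class UserPersister {
    // Insert one USER row, binding each column by hand.
    public void insertUser(Connection connection, User user)
            throws SQLException {
        PreparedStatement ps = connection.prepareStatement(
            "insert into USER (USERNAME, NAME, ADDRESS) values (?, ?, ?)");
        try {
            ps.setString(1, user.getUserName());
            ps.setString(2, user.getName());
            ps.setString(3, user.getAddress());
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}

Multiply this by every persistent class and every operation, and the appeal of automating the work becomes clear.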
Now, let's see what happens when we consider something a little more realistic.
The paradigm mismatch will be visible when we add more entities and entity relationships
to our application.
The most glaringly obvious problem with our current implementation is that
we've modeled an address as a simple String value. In most systems, it's necessary
to store street, city, state, country, and ZIP code information separately. Of
course, we could add these properties directly to the User class, but since it's
highly likely that other classes in the system will also carry address information, it
makes more sense to create a separate Address class. The updated object model is
shown in figure 1.2.
Figure 1.2 The User has an Address.

Should we also add an ADDRESS table? Not necessarily. It's common to keep
address information in the USER table, in individual columns. This design is likely
to perform better, since we don't require a table join to retrieve the user and
address in a single query. The nicest solution might even be to create a user-defined
SQL data type to represent addresses and to use a single column of that new type
in the USER table instead of several new columns.
Basically, we have the choice of adding either several columns or a single column
(of a new SQL data type). This is clearly a problem of granularity.
1.2.1 The problem of granularity
Granularity refers to the relative size of the objects you're working with. When
we're talking about Java objects and database tables, the granularity problem
means persisting objects that can have various kinds of granularity to tables and
columns that are inherently limited in granularity.
Let's return to our example. Adding a new data type to store Address Java
objects in a single column to our database catalog sounds like the best approach.
After all, a new Address type (class) in Java and a new ADDRESS SQL data type should
guarantee interoperability. However, you'll find various problems if you check the
support for user-defined column types (UDT) in today's SQL database management
systems.
UDT support is one of a number of so-called object-relational extensions to traditional
SQL. Unfortunately, UDT support is a somewhat obscure feature of most SQL
database management systems and certainly isn't portable between different systems.
The SQL standard supports user-defined data types, but very poorly. For this
reason and (whatever) other reasons, use of UDTs isn't common practice in the
industry at this time, and it's unlikely that you'll encounter a legacy schema that
makes extensive use of UDTs. We therefore can't store objects of our new Address
class in a single new column of an equivalent user-defined SQL data type. Our solution
for this problem has several columns of vendor-defined SQL types (such as
boolean, numeric, and string data types). Considering the granularity of our tables
again, the USER table is usually defined as follows:
create table USER (
    USERNAME VARCHAR(15) NOT NULL PRIMARY KEY,
    NAME VARCHAR(50) NOT NULL,
    ADDRESS_STREET VARCHAR(50),
    ADDRESS_CITY VARCHAR(15),
    ADDRESS_STATE VARCHAR(15),
    ADDRESS_ZIPCODE VARCHAR(5),
    ADDRESS_COUNTRY VARCHAR(15)
)
This leads to the following observation: Classes in our domain object model come
in a range of different levels of granularity: from coarse-grained entity classes like
User, to finer-grained classes like Address, right down to simple String-valued
properties such as zipcode.
In contrast, just two levels of granularity are visible at the level of the database:
tables such as USER, along with scalar columns such as ADDRESS_ZIPCODE. This obviously
isn't as flexible as our Java type system. Many simple persistence mechanisms
fail to recognize this mismatch and so end up forcing the less flexible representation
upon the object model. We've seen countless User classes with properties
named zipcode!
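Nothing in Java forces this flattening; a fine-grained model is trivial to write. The following sketch (field names are our assumption) shows the Address class the text argues for:

public class Address {
    private String street;
    private String city;
    private String state;
    private String zipcode;
    private String country;
    // accessor methods (get/set pairs)...
}

public class User {
    private String userName;
    private String name;
    private Address address;  // a fine-grained object, not a flat String
    private Set billingDetails;
    // ...
}

The mapping problem is then to persist this single Address object to the several ADDRESS_* columns of the USER table shown above.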
It turns out that the granularity problem isn't especially difficult to solve.
Indeed, we probably wouldn't even list it, were it not for the fact that it's visible in
so many existing systems. We describe the solution to this problem in chapter 3,
section 3.5, "Fine-grained object models."
A much more difficult and interesting problem arises when we consider domain
object models that use inheritance, a feature of object-oriented design we might use
to bill the users of our e-commerce application in new and interesting ways.
1.2.2 The problem of subtypes
In Java, we implement inheritance using super- and subclasses. To illustrate why
this can present a mismatch problem, let's continue to build our example. Let's
add to our e-commerce application so that we now can accept not only bank
account billing, but also credit and debit cards. We therefore have several methods
to bill a user account. The most natural way to reflect this change in our
object model is to use inheritance for the BillingDetails class.
We might have an abstract BillingDetails superclass along with several concrete
subclasses: CreditCard, DirectDebit, Cheque, and so on. Each of these subclasses
will define slightly different data (and completely different functionality
that acts upon that data). The UML class diagram in figure 1.3 illustrates this
object model.
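In Java code, the hierarchy might be sketched as follows (the subclass fields are our assumption, chosen only to show that each subtype carries different data):

public abstract class BillingDetails {
    private String accountNumber;
    private String accountName;
    private User user;
    // common behavior...
}

public class CreditCard extends BillingDetails {
    private String cardType;
    private String expiryDate;
    // credit-card-specific data and behavior...
}

public class DirectDebit extends BillingDetails {
    private String bankName;
    private String bankAccount;
    // direct-debit-specific data and behavior...
}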
We notice immediately that SQL provides no direct support for inheritance. We
cant declare that a CREDIT_CARD_DETAILS table is a subtype of BILLING_DETAILS by
writing, say, CREATE TABLE CREDIT_CARD_DETAILS EXTENDS BILLING_DETAILS (...).
Figure 1.3 Using inheritance for different billing strategies
In chapter 3, section 3.6, "Mapping class inheritance," we discuss how
object/relational mapping solutions such as Hibernate solve the problem of persisting a
class hierarchy to a database table or tables. This problem is now quite well understood
in the community, and most solutions support approximately the same functionality.
But we arent quite finished with inheritanceas soon as we introduce
inheritance into the object model, we have the possibility of polymorphism.
The User class has an association to the BillingDetails superclass. This is a polymorphic
association. At runtime, a User object might be associated with an instance
of any of the subclasses of BillingDetails. Similarly, we'd like to be able to write
queries that refer to the BillingDetails class and have the query return instances
of its subclasses. This feature is called polymorphic queries.
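In the object model this polymorphism costs nothing; calling code works against the superclass alone. A sketch (getBillingDetails() is the obvious accessor, and bill() is a hypothetical polymorphic method):

import java.util.Iterator;

public class BillingService {
    // At runtime each element may be a CreditCard, DirectDebit, or
    // Cheque; this code neither knows nor cares which subclass it gets.
    public void billUser(User user, double amount) {
        for (Iterator it = user.getBillingDetails().iterator(); it.hasNext();) {
            BillingDetails details = (BillingDetails) it.next();
            details.bill(amount);  // hypothetical polymorphic call
        }
    }
}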
Since SQL databases don't provide a notion of inheritance, it's hardly surprising
that they also lack an obvious way to represent a polymorphic association. A standard
foreign key constraint refers to exactly one table; it isn't straightforward to
define a foreign key that refers to multiple tables. We might explain this by saying
that Java (and other object-oriented languages) is less strictly typed than SQL. Fortunately,
two of the inheritance mapping solutions we show in chapter 3 are
designed to accommodate the representation of polymorphic associations and efficient
execution of polymorphic queries.
So, the mismatch of subtypes is one in which the inheritance structure in your
Java model must be persisted in an SQL database that doesn't offer an inheritance
strategy. The next aspect of the mismatch problem is the issue of object identity.
You probably noticed that we defined USERNAME as the primary key of our USER
table. Was that a good choice? Not really, as you'll see next.
1.2.3 The problem of identity
Although the problem of object identity might not be obvious at first, we'll encounter
it often in our growing and expanding example e-commerce system. This
problem can be seen when we consider two objects (for example, two Users) and
check if they're identical. There are three ways to tackle this problem, two in the
Java world and one in our SQL database. As expected, they work together only
with some help.
Java objects define two different notions of sameness:
¦ Object identity (roughly equivalent to memory location, checked with a==b)
¦ Equality as determined by the implementation of the equals() method
(also called equality by value)
On the other hand, the identity of a database row is expressed as the primary key
value. As you'll see in section 3.4, "Understanding object identity," neither
equals() nor == is naturally equivalent to the primary key value. It's common for
several (nonidentical) objects to simultaneously represent the same row of the
database. Furthermore, some subtle difficulties are involved in implementing
equals() correctly for a persistent class.
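To see the two notions diverge, imagine loading the same row twice (a sketch; loadUser() is a hypothetical helper that reads one USER row into a new object):

User a = loadUser("johndoe");  // copies the row into one new object
User b = loadUser("johndoe");  // copies the same row into another object

System.out.println(a == b);       // false: two distinct objects in memory
System.out.println(a.equals(b));  // also false unless equals() is overridden,
                                  // yet both objects represent the same row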
Let's discuss another problem related to database identity with an example. In
our table definition for USER, we've used USERNAME as a primary key. Unfortunately,
this decision makes it difficult to change a username: We'd need to update not only
the USERNAME column in USER, but also the foreign key column in BILLING_DETAILS.
So, later in the book, we'll recommend that you use surrogate keys wherever possible.
A surrogate key is a primary key column with no meaning to the user. For example,
we might change our table definitions to look like this:
create table USER (
    USER_ID BIGINT NOT NULL PRIMARY KEY,
    USERNAME VARCHAR(15) NOT NULL UNIQUE,
    NAME VARCHAR(50) NOT NULL,
    ...
)

create table BILLING_DETAILS (
    BILLING_DETAILS_ID BIGINT NOT NULL PRIMARY KEY,
    ACCOUNT_NUMBER VARCHAR(10) NOT NULL UNIQUE,
    ACCOUNT_NAME VARCHAR(50) NOT NULL,
    ACCOUNT_TYPE VARCHAR(2) NOT NULL,
    USER_ID BIGINT FOREIGN KEY REFERENCES USER
)
The USER_ID and BILLING_DETAILS_ID columns contain system-generated values.
These columns were introduced purely for the benefit of the relational data model.
How (if at all) should they be represented in the object model? We'll discuss this
question in section 3.4 and find a solution with object/relational mapping.
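To anticipate the answer with a minimal sketch (our own illustration; the full discussion is in section 3.4), the surrogate key usually appears in the object model as an identifier property whose value is assigned by the persistence layer:

public class User {
    private Long id; // surrogate key value, set by the persistence layer
    public Long getId() { return id; }
    private void setId(Long id) { this.id = id; }
}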
In the context of persistence, identity is closely related to how the system handles
caching and transactions. Different persistence solutions have chosen various
strategies, and this has been an area of confusion. We cover all these interesting
topics, and show how they're related, in chapter 5.
The skeleton e-commerce application we've designed and implemented has
served our purpose well. We've identified the mismatch problems with mapping
granularity, subtypes, and object identity. We're almost ready to move on to other
parts of the application. But first, we need to discuss the important concept of associations:
that is, how the relationships between our classes are mapped and handled.
Is the foreign key in the database all we need?
1.2.4 Problems relating to associations
In our object model, associations represent the relationships between entities.
You remember that the User, Address, and BillingDetails classes are all associated.
Unlike Address, BillingDetails stands on its own. BillingDetails objects
are stored in their own table. Association mapping and the management of entity
associations are central concepts of any object persistence solution.
Object-oriented languages represent associations using object references and collections
of object references. In the relational world, an association is represented
as a foreign key column, with copies of key values in several tables. There are subtle
differences between the two representations.
Object references are inherently directional; the association is from one object
to the other. If an association between objects should be navigable in both directions,
you must define the association twice, once in each of the associated classes.
You've already seen this in our object model classes:
public class User {
    private Set billingDetails;
    ...
}

public class BillingDetails {
    private User user;
    ...
}
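Because the two directions are separate properties, keeping them consistent is the application's responsibility. A common idiom, sketched here as our own illustration (it assumes BillingDetails exposes a setUser() method), is a convenience method that sets both ends at once:

import java.util.HashSet;
import java.util.Set;

public class User {
    private Set billingDetails = new HashSet();
    // Sets both ends of the bidirectional association in one call.
    public void addBillingDetails(BillingDetails details) {
        billingDetails.add(details);
        details.setUser(this); // assumes a setUser() method exists
    }
}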
On the other hand, foreign key associations aren't by nature directional. In fact,
navigation has no meaning for a relational data model, because you can create
arbitrary data associations with table joins and projection.
Actually, it isn't possible to determine the multiplicity of a unidirectional association
by looking only at the Java classes. Java associations may have many-to-many
multiplicity. For example, our object model might have looked like this:
public class User {
    private Set billingDetails;
    ...
}

public class BillingDetails {
    private Set users;
    ...
}
Table associations, on the other hand, are always one-to-many or one-to-one. You can
see the multiplicity immediately by looking at the foreign key definition. The following
is a one-to-many association (or, if read in that direction, a many-to-one):
USER_ID BIGINT FOREIGN KEY REFERENCES USER
These are one-to-one associations:
USER_ID BIGINT UNIQUE FOREIGN KEY REFERENCES USER
BILLING_DETAILS_ID BIGINT PRIMARY KEY FOREIGN KEY REFERENCES USER
If you wish to represent a many-to-many association in a relational database, you
must introduce a new table, called a link table. This table doesn't appear anywhere
in the object model. For our example, if we consider the relationship between a
user and the user's billing information to be many-to-many, the link table is
defined as follows:
CREATE TABLE USER_BILLING_DETAILS (
    USER_ID BIGINT FOREIGN KEY REFERENCES USER,
    BILLING_DETAILS_ID BIGINT FOREIGN KEY REFERENCES BILLING_DETAILS,
    PRIMARY KEY (USER_ID, BILLING_DETAILS_ID)
)
We'll discuss association mappings in great detail in chapters 3 and 6.
So far, the issues we've considered are mainly structural. We can see them by
considering a purely static view of the system. Perhaps the most difficult problem
in object persistence is a dynamic one. It concerns associations, and we've already
hinted at it when we drew a distinction between object graph navigation and table joins
in section 1.1.4, "Persistence in object-oriented applications." Let's explore this significant
mismatch problem in more depth.
1.2.5 The problem of object graph navigation
There is a fundamental difference in the way you access objects in Java and in a
relational database. In Java, when you access the billing information of a user, you
call aUser.getBillingDetails().getAccountNumber(). This is the most natural
way to access object-oriented data and is often described as walking the object graph.
You navigate from one object to another, following associations between instances.
Unfortunately, this isn't an efficient way to retrieve data from an SQL database.
The single most important thing to do to improve performance of data access
code is to minimize the number of requests to the database. The most obvious way to do
this is to minimize the number of SQL queries. (Other ways include using stored
procedures or the JDBC batch API.)
Therefore, efficient access to relational data using SQL usually requires the use
of joins between the tables of interest. The number of tables included in the join
determines the depth of the object graph you can navigate. For example, if we
need to retrieve a User and aren't interested in the user's BillingDetails, we use
this simple query:
select * from USER u where u.USER_ID = 123
On the other hand, if we need to retrieve the same User and then subsequently
visit each of the associated BillingDetails instances, we use a different query:
select *
from USER u
left outer join BILLING_DETAILS bd on bd.USER_ID = u.USER_ID
where u.USER_ID = 123
As you can see, we need to know what portion of the object graph we plan to
access when we retrieve the initial User, before we start navigating the object graph!
On the other hand, any object persistence solution provides functionality for
fetching the data of associated objects only when the object is first accessed. However,
this piecemeal style of data access is fundamentally inefficient in the context
of a relational database, because it requires execution of one select statement for
each node of the object graph. This is the dreaded n+1 selects problem.
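To make the problem concrete, here is a hedged JDBC sketch (the table and column names follow our earlier schema): one select retrieves the users, and every iteration of the loop triggers one more select.

import java.sql.*;

public class NPlusOneSelects {
    // One select for the users, plus one select per user: n+1 in total.
    public static void printAccountNumbers(Connection conn)
            throws SQLException {
        Statement stmt = conn.createStatement();
        ResultSet users = stmt.executeQuery("select USER_ID from USER"); // 1 select
        while ( users.next() ) {
            PreparedStatement ps = conn.prepareStatement(
                "select ACCOUNT_NUMBER from BILLING_DETAILS where USER_ID = ?");
            ps.setLong( 1, users.getLong("USER_ID") ); // +1 select per user
            ResultSet details = ps.executeQuery();
            while ( details.next() )
                System.out.println( details.getString("ACCOUNT_NUMBER") );
            details.close();
            ps.close();
        }
        users.close();
        stmt.close();
    }
}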
This mismatch in the way we access objects in Java and in a relational database
is perhaps the single most common source of performance problems in Java applications.
Yet, although we've been blessed with innumerable books and magazine
articles advising us to use StringBuffer for string concatenation, it seems impossible
to find any advice about strategies for avoiding the n+1 selects problem. Fortunately,
Hibernate provides sophisticated features for efficiently fetching graphs of
objects from the database, transparently to the application accessing the graph. We
discuss these features in chapters 4 and 7.
We now have a quite elaborate list of object/relational mismatch problems,
and it will be costly to find solutions, as you might know from experience. This
cost is often underestimated, and we think this is a major reason for many failed
software projects.
1.2.6 The cost of the mismatch
The overall solution for the list of mismatch problems can require a significant
outlay of time and effort. In our experience, the main purpose of up to 30 percent
of the Java application code written is to handle the tedious SQL/JDBC and
the manual bridging of the object/relational paradigm mismatch. Despite all this
effort, the end result still doesn't feel quite right. We've seen projects nearly sink
due to the complexity and inflexibility of their database abstraction layers.
One of the major costs is in the area of modeling. The relational and object models
must both encompass the same business entities. But an object-oriented purist
will model these entities in a very different way than an experienced relational data
modeler. The usual solution to this problem is to bend and twist the object model
until it matches the underlying relational technology.
This can be done successfully, but only at the cost of losing some of the advantages
of object orientation. Keep in mind that relational modeling is underpinned
by relational theory. Object orientation has no such rigorous mathematical definition
or body of theoretical work. So, we can't look to mathematics to explain how
we should bridge the gap between the two paradigms; there is no elegant transformation
waiting to be discovered. (Doing away with Java and SQL and starting
from scratch isn't considered elegant.)
The domain modeling mismatch problem isn't the only source of the inflexibility
and lost productivity that lead to higher costs. A further cause is the JDBC API
itself. JDBC and SQL provide a statement- (that is, command-) oriented approach to
moving data to and from an SQL database. A structural relationship must be specified
at least three times (Insert, Update, Select), adding to the time required for
design and implementation. The unique dialect for every SQL database doesn't
improve the situation.
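For example (a sketch of our own, reusing the USER table from earlier), the same structural information must be spelled out once per statement type:

// The same columns, repeated for each kind of statement:
String insertUser =
    "insert into USER (USER_ID, USERNAME, NAME) values (?, ?, ?)";
String updateUser =
    "update USER set USERNAME = ?, NAME = ? where USER_ID = ?";
String selectUser =
    "select USER_ID, USERNAME, NAME from USER where USER_ID = ?";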
Recently, it has been fashionable to regard architectural or pattern-based models
as a partial solution to the mismatch problem. Hence, we have the entity bean
component model, the data access object (DAO) pattern, and other practices to
implement data access. These approaches leave most or all of the problems listed
earlier to the application developer. To round out your understanding of object
persistence, we need to discuss application architecture and the role of a persistence
layer in typical application design.
1.3 Persistence layers and alternatives
In a medium- or large-sized application, it usually makes sense to organize classes
by concern. Persistence is one concern. Other concerns are presentation, workflow,
and business logic. There are also the so-called cross-cutting concerns, which
may be implemented generically, by framework code, for example. Typical cross-cutting
concerns include logging, authorization, and transaction demarcation.
A typical object-oriented architecture comprises layers that represent the
concerns. It's normal, and certainly best practice, to group all classes and
components responsible for persistence into a separate persistence layer in a layered
system architecture.
In this section, we first look at the layers of this type of architecture and why we
use them. After that, we focus on the layer we're most interested in, the persistence
layer, and some of the ways it can be implemented.
1.3.1 Layered architecture
A layered architecture defines interfaces between code that implements the various
concerns, allowing a change to the way one concern is implemented without significant
disruption to code in the other layers. Layering also determines the kinds
of interlayer dependencies that occur. The rules are as follows:
¦ Layers communicate top to bottom. A layer is dependent only on the layer
directly below it.
¦ Each layer is unaware of any other layers except for the layer just below it.
Different applications group concerns differently, so they define different layers.
A typical, proven, high-level application architecture uses three layers, one each
for presentation, business logic, and persistence, as shown in figure 1.4.
Let's take a closer look at the layers and elements in the diagram:
¦ Presentation layer: The user interface logic is topmost. Code responsible for
the presentation and control of page and screen navigation forms the presentation
layer.
¦ Business layer: The exact form of the next layer varies widely between applications.
It's generally agreed, however, that this business layer is responsible
for implementing any business rules or system requirements that would be
understood by users as part of the problem domain. In some systems, this
layer has its own internal representation of the business domain entities. In
others, it reuses the model defined by the persistence layer. We revisit this
issue in chapter 3.
Figure 1.4 A persistence layer is the basis in a layered architecture. (The diagram shows the presentation, business, and persistence layers stacked above the database, with utility and helper classes available to all layers.)
¦ Persistence layer: The persistence layer is a group of classes and components
responsible for data storage to, and retrieval from, one or more data stores.
This layer necessarily includes a model of the business domain entities
(even if it's only a metadata model).
¦ Database: The database exists outside the Java application. It's the actual,
persistent representation of the system state. If an SQL database is used, the
database includes the relational schema and possibly stored procedures.
¦ Helper/utility classes: Every application has a set of infrastructural helper or
utility classes that are used in every layer of the application (for example,
Exception classes for error handling). These infrastructural elements don't
form a layer, since they don't obey the rules for interlayer dependency in a
layered architecture.
Let's now take a brief look at the various ways the persistence layer can be implemented
by Java applications. Don't worry; we'll get to ORM and Hibernate soon.
There is much to be learned by looking at other approaches.
1.3.2 Hand-coding a persistence layer with SQL/JDBC
The most common approach to Java persistence is for application programmers
to work directly with SQL and JDBC. After all, developers are familiar with relational
database management systems, understand SQL, and know how to work
with tables and foreign keys. Moreover, they can always use the well-known and
widely used DAO design pattern to hide complex JDBC code and nonportable SQL
from the business logic.
The DAO pattern is a good one, so good that we recommend its use even with
ORM (see chapter 8). However, the work involved in manually coding persistence
for each domain class is considerable, particularly when multiple SQL dialects are
supported. This work usually ends up consuming a large portion of the development
effort. Furthermore, when requirements change, a hand-coded solution
always requires more attention and maintenance effort.
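As a hedged sketch of what the public face of such a hand-coded layer might look like (the interface and method names are illustrative, not taken from the example application):

import java.util.List;

// The business logic depends only on this interface; the JDBC code and
// the nonportable SQL live in an implementation class behind it.
public interface UserDao {
    User findById(Long userId);
    List findByUsername(String username);
    void save(User user);
    void delete(User user);
}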
So why not implement a simple ORM framework to fit the specific requirements
of your project? The result of such an effort could even be reused in future
projects. Many developers have taken this approach; numerous homegrown
object/relational persistence layers are in production systems today. However, we
don't recommend this approach. Excellent solutions already exist, not only the
(mostly expensive) tools sold by commercial vendors but also open source projects
with free licenses. We're certain you'll be able to find a solution that meets your
requirements, both business and technical. It's likely that such a solution will do a
great deal more, and do it better, than a solution you could build in a limited time.
Development of a reasonably full-featured ORM may take many developers
months. For example, Hibernate is 43,000 lines of code (some of which is much
more difficult than typical application code), along with 12,000 lines of unit test
code. This might be more than your application. A great many details can easily be
overlooked, as both the authors know from experience! Even if an existing tool
doesn't fully implement two or three of your more exotic requirements, it's still
probably not worth creating your own. Any ORM will handle the tedious common
cases, the ones that really kill productivity. It's okay that you might need to hand-code
certain special cases; few applications are composed primarily of special cases.
Don't fall for the Not Invented Here syndrome and start your own object/relational
mapping effort just to avoid the learning curve associated with third-party
software. Even if you decide that all this ORM stuff is crazy, and you want to work
as close to the SQL database as possible, other persistence frameworks exist that
don't implement full ORM. For example, the iBATIS database layer is an open
source persistence layer that handles some of the more tedious JDBC code while
letting developers handcraft the SQL.
1.3.3 Using serialization
Java has a built-in persistence mechanism: Serialization provides the ability to write
a graph of objects (the state of the application) to a byte-stream, which may then
be persisted to a file or database. Serialization is also used by Java's Remote
Method Invocation (RMI) to achieve pass-by-value semantics for complex objects.
Another usage of serialization is to replicate application state across nodes in a
cluster of machines.
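A minimal sketch of serialization used as persistence, assuming a User class that implements java.io.Serializable, looks like this:

import java.io.*;

public class SerializationExample {
    // Writes the entire object graph reachable from 'user' as one byte-stream.
    public static void saveUser(User user) throws IOException {
        ObjectOutputStream out =
            new ObjectOutputStream( new FileOutputStream("users.ser") );
        out.writeObject(user); // user and everything it references
        out.close();
    }
}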
Why not use serialization for the persistence layer? Unfortunately, a serialized
graph of interconnected objects can only be accessed as a whole; it's impossible to
retrieve any data from the stream without deserializing the entire stream. Thus, the
resulting byte-stream must be considered unsuitable for arbitrary search or aggregation.
It isn't even possible to access or update a single object or subgraph independently.
Loading and overwriting an entire object graph in each transaction is
not an option for systems designed to support high concurrency.
Clearly, given current technology, serialization is inadequate as a persistence
mechanism for high concurrency web and enterprise applications. It has a particular
niche as a suitable persistence mechanism for desktop applications.
1.3.4 Considering EJB entity beans
In recent years, Enterprise JavaBeans (EJBs) have been a recommended way of
persisting data. If you've been working in the field of Java enterprise applications,
you've probably worked with EJBs and entity beans in particular. If you haven't,
don't worry; entity beans are rapidly declining in popularity. (Many of the developer
concerns will be addressed in the new EJB 3.0 specification, however.)
Entity beans (in the current EJB 2.1 specification) are interesting because, in
contrast to the other solutions mentioned here, they were created entirely by
committee. The other solutions (the DAO pattern, serialization, and ORM) were
distilled from many years of experience; they represent approaches that have
stood the test of time. Unsurprisingly, perhaps, EJB 2.1 entity beans have been a
disaster in practice. Design flaws in the EJB specification prevent bean-managed
persistence (BMP) entity beans from performing efficiently. A marginally more
acceptable solution is container-managed persistence (CMP), at least since some glaring
deficiencies of the EJB 1.1 specification were rectified.
Nevertheless, CMP doesn't represent a solution to the object/relational mismatch.
Here are six reasons why:
¦ CMP beans are defined in one-to-one correspondence to the tables of the
relational model. Thus, they're too coarse grained; they may not take full
advantage of Java's rich typing. In a sense, CMP forces your domain model
into first normal form.
¦ On the other hand, CMP beans are also too fine grained to realize the stated
goal of EJB: the definition of reusable software components. A reusable
component should be a very coarse-grained object, with an external interface
that is stable in the face of small changes to the database schema. (Yes,
we really did just claim that CMP entity beans are both too fine grained and
too coarse grained!)
¦ Although EJBs may take advantage of implementation inheritance, entity
beans don't support polymorphic associations and queries, one of the defining
features of true ORM.
¦ Entity beans, despite the stated goal of the EJB specification, aren't portable
in practice. Capabilities of CMP engines vary widely between vendors, and
the mapping metadata is highly vendor-specific. Some projects have chosen
Hibernate for the simple reason that Hibernate applications are much
more portable between application servers.
¦ Entity beans aren't serializable. We find that we must define additional data
transfer objects (DTOs, also called value objects) when we need to transport
data to a remote client tier. The use of fine-grained method calls from the
client to a remote entity bean instance is not scalable; DTOs provide a way of
batching remote data access. The DTO pattern results in the growth of parallel
class hierarchies, where each entity of the domain model is represented
as both an entity bean and a DTO.
¦ EJB is an intrusive model; it mandates an unnatural Java style and makes
reuse of code outside a specific container extremely difficult. This is a huge
barrier to test-driven development (TDD). It even causes problems in
applications that require batch processing or other offline functions.
We won't spend more time discussing the pros and cons of EJB 2.1 entity beans.
After looking at their persistence capabilities, we've come to the conclusion that
they aren't suitable for a full object mapping. We'll see what the new EJB 3.0 specification
can improve. Let's turn to another object persistence solution that
deserves some attention.
1.3.5 Object-oriented database systems
Since we work with objects in Java, it would be ideal if there were a way to store
those objects in a database without having to bend and twist the object model at
all. In the mid-1990s, new object-oriented database systems gained attention.
An object-oriented database management system (OODBMS) is more like an
extension to the application environment than an external data store. An OODBMS
usually features a multitiered implementation, with the backend data store, object
cache, and client application coupled tightly together and interacting via a proprietary
network protocol.
Object-oriented database development begins with the top-down definition of
host language bindings that add persistence capabilities to the programming language.
Hence, object databases offer seamless integration into the object-oriented
application environment. This is different from the model used by today's relational
databases, where interaction with the database occurs via an intermediate
language (SQL).
Analogously to ANSI SQL, the standard query interface for relational databases,
there is a standard for object database products. The Object Data Management
Group (ODMG) specification defines an API, a query language, a metadata language,
and host language bindings for C++, Smalltalk, and Java. Most object-oriented
database systems provide some level of support for the ODMG standard,
but to the best of our knowledge, there is no complete implementation.
Furthermore, a number of years after its release, and even in version 3.0, the specification
feels immature and lacks a number of useful features, especially in a Java-based
environment. The ODMG is also no longer active. More recently, the Java
Data Objects (JDO) specification (published in April 2002) opened up new possibilities.
JDO was driven by members of the object-oriented database community
and is now being adopted by object-oriented database products as the primary API,
often in addition to the existing ODMG support. It remains to be seen if this new
effort will see object-oriented databases penetrate beyond CAD/CAM (computer-aided
design/modeling), scientific computing, and other niche markets.
We won't bother looking too closely into why object-oriented database technology
hasn't been more popular; we'll simply observe that object databases haven't
been widely adopted and that it doesn't appear likely that they will be in the near
future. We're confident that the overwhelming majority of developers will have far
more opportunity to work with relational technology, given the current political
realities (predefined deployment environments).
1.3.6 Other options
Of course, there are other kinds of persistence layers. XML persistence is a variation
on the serialization theme; this approach addresses some of the limitations
of byte-stream serialization by allowing tools to access the data structure easily
(but is itself subject to an object/hierarchical impedance mismatch). Furthermore,
there is no additional benefit from the XML, because it's just another text
file format. You can use stored procedures (even write them in Java using SQLJ)
and move the problem into the database tier. We're sure there are plenty of
other examples, but none of them are likely to become popular in the immediate
future.
Political constraints (long-term investments in SQL databases) and the requirement
for access to valuable legacy data call for a different approach. ORM may be
the most practical solution to our problems.
1.4 Object/relational mapping
Now that we've looked at the alternative techniques for object persistence, it's
time to introduce the solution we feel is the best, and the one we use with Hibernate:
ORM. Despite its long history (the first research papers were published in
the late 1980s), the terms for ORM used by developers vary. Some call it object
relational mapping, others prefer the simple object mapping. We exclusively use the
term object/relational mapping and its acronym, ORM. The slash stresses the mismatch
problem that occurs when the two worlds collide.
In this section, we first look at what ORM is. Then we enumerate the problems
that a good ORM solution needs to solve. Finally, we discuss the general benefits
that ORM provides and why we recommend this solution.
1.4.1 What is ORM?
In a nutshell, object/relational mapping is the automated (and transparent) persistence
of objects in a Java application to the tables in a relational database,
using metadata that describes the mapping between the objects and the database.
ORM, in essence, works by (reversibly) transforming data from one representation
to another.
This implies certain performance penalties. However, if ORM is implemented as
middleware, there are many opportunities for optimization that wouldn't exist for
a hand-coded persistence layer. A further overhead (at development time) is the
provision and management of metadata that governs the transformation. But
again, the cost is less than the equivalent costs involved in maintaining a hand-coded
solution. And even ODMG-compliant object databases require significant class-level
metadata.
FAQ Isn't ORM a Visio plugin? The acronym ORM can also mean object role modeling,
and this term was invented before object/relational mapping
became relevant. It describes a method for information analysis, used in
database modeling, and is primarily supported by Microsoft Visio, a
graphical modeling tool. Database specialists use it as a replacement or as
an addition to the more popular entity-relationship modeling. However, if
you talk to Java developers about ORM, it's usually in the context of
object/relational mapping.
An ORM solution consists of the following four pieces:
¦ An API for performing basic CRUD operations on objects of persistent
classes
¦ A language or API for specifying queries that refer to classes and properties
of classes
¦ A facility for specifying mapping metadata
¦ A technique for the ORM implementation to interact with transactional
objects to perform dirty checking, lazy association fetching, and other optimization
functions
We're using the term ORM to include any persistence layer where SQL is autogenerated
from a metadata-based description. We aren't including persistence layers
where the object/relational mapping problem is solved manually by developers
hand-coding SQL and using JDBC. With ORM, the application interacts with the
ORM APIs and the domain model classes and is abstracted from the underlying
SQL/JDBC. Depending on the features or the particular implementation, the
ORM runtime may also take on responsibility for issues such as optimistic locking
and caching, relieving the application of these concerns entirely.
Let's look at the various ways ORM can be implemented. Mark Fussel
[Fussel 1997], a researcher in the field of ORM, defined the following four levels of
ORM quality.
Pure relational
The whole application, including the user interface, is designed around the relational
model and SQL-based relational operations. This approach, despite its deficiencies
for large systems, can be an excellent solution for simple applications
where a low level of code reuse is tolerable. Direct SQL can be fine-tuned in every
aspect, but the drawbacks, such as lack of portability and maintainability, are significant,
especially in the long run. Applications in this category often make heavy
use of stored procedures, shifting some of the work out of the business layer and
into the database.
Light object mapping
Entities are represented as classes that are mapped manually to the relational
tables. Hand-coded SQL/JDBC is hidden from the business logic using well-known
design patterns. This approach is extremely widespread and is successful
for applications with a small number of entities, or applications with generic,
metadata-driven data models. Stored procedures might have a place in this kind
of application.
Medium object mapping
The application is designed around an object model. SQL is generated at build
time using a code generation tool, or at runtime by framework code. Associations
between objects are supported by the persistence mechanism, and queries may be
specified using an object-oriented expression language. Objects are cached by the
persistence layer. A great many ORM products and homegrown persistence layers
support at least this level of functionality. It's well suited to medium-sized applications
with some complex transactions, particularly when portability between
different database products is important. These applications usually don't use
stored procedures.
Full object mapping
Full object mapping supports sophisticated object modeling: composition, inheritance,
polymorphism, and persistence by reachability. The persistence layer
implements transparent persistence; persistent classes do not inherit any special
base class or have to implement a special interface. Efficient fetching strategies
(lazy and eager fetching) and caching strategies are implemented transparently to
the application. This level of functionality can hardly be achieved by a homegrown
persistence layer; it's equivalent to months or years of development time. A number
of commercial and open source Java ORM tools have achieved this level of
quality. This level meets the definition of ORM we're using in this book. Let's look
at the problems we expect to be solved by a tool that achieves full object mapping.
1.4.2 Generic ORM problems
The following list of issues, which we'll call the O/R mapping problems, covers the fundamental
problems solved by a full object/relational mapping tool in a Java environment.
Particular ORM tools may provide extra functionality (for example,
aggressive caching), but this is a reasonably exhaustive list of the conceptual issues
that are specific to object/relational mapping:
1 What do persistent classes look like? Are they fine-grained JavaBeans? Or are
they instances of some (coarser granularity) component model like EJB?
How transparent is the persistence tool? Do we have to adopt a programming
model and conventions for classes of the business domain?
2 How is mapping metadata defined? Since the object/relational transformation
is governed entirely by metadata, the format and definition of this
metadata is a centrally important issue. Should an ORM tool provide a GUI
to manipulate the metadata graphically? Or are there better approaches
to metadata definition?
3 How should we map class inheritance hierarchies? There are several standard
strategies. What about polymorphic associations, abstract classes, and
interfaces?
4 How do object identity and equality relate to database (primary key)
identity? How do we map instances of particular classes to particular
table rows?
5 How does the persistence logic interact at runtime with the objects of the business
domain? This is a problem of generic programming, and there are a
number of solutions including source generation, runtime reflection,
runtime bytecode generation, and build-time bytecode enhancement. The
solution to this problem might affect your build process (but, preferably,
shouldnt otherwise affect you as a user).
6 What is the lifecycle of a persistent object? Does the lifecycle of some objects
depend upon the lifecycle of other associated objects? How do we translate
the lifecycle of an object to the lifecycle of a database row?
7 What facilities are provided for sorting, searching, and aggregating? The
application could do some of these things in memory. But efficient use
of relational technology requires that this work sometimes be performed
by the database.
8 How do we efficiently retrieve data with associations? Efficient access to relational
data is usually accomplished via table joins. Object-oriented applications
usually access data by navigating an object graph. Two data access
patterns should be avoided when possible: the n+1 selects problem, and its
complement, the Cartesian product problem (fetching too much data in a
single select).
In addition, two issues are common to any data-access technology. They also
impose fundamental constraints on the design and architecture of an ORM:
¦ Transactions and concurrency
¦ Cache management (and concurrency)
As you can see, a full object-mapping tool needs to address quite a long list of
issues. We discuss the way Hibernate manages these problems and data-access
issues in chapters 3, 4, and 5, and we broaden the subject later in the book.
By now, you should be starting to see the value of ORM. In the next section, we
look at some of the other benefits you gain when you use an ORM solution.
1.4.3 Why ORM?
An ORM implementation is a complex beast: less complex than an application
server, but more complex than a web application framework like Struts or Tapestry.
Why should we introduce another new complex infrastructural element into
our system? Will it be worth it?
It will take us most of this book to provide a complete answer to those questions.
For the impatient, this section provides a quick summary of the most compelling
benefits. But first, let's quickly dispose of a non-benefit.
A supposed advantage of ORM is that it shields developers from messy SQL.
This view holds that object-oriented developers can't be expected to understand
SQL or relational databases well and that they find SQL somehow offensive. On
the contrary, we believe that Java developers must have a sufficient level of familiarity
with, and appreciation of, relational modeling and SQL in order to work
with ORM. ORM is an advanced technique to be used by developers who have
already done it the hard way. To use Hibernate effectively, you must be able to
view and interpret the SQL statements it issues and understand the implications
for performance.
Let's look at some of the benefits of ORM and Hibernate.
Productivity
Persistence-related code can be perhaps the most tedious code in a Java application.
Hibernate eliminates much of the grunt work (more than you'd expect) and
lets you concentrate on the business problem. No matter which application development
strategy you prefer (top-down, starting with a domain model, or bottom-up,
starting with an existing database schema), Hibernate used together with the
appropriate tools will significantly reduce development time.
Maintainability
Fewer lines of code (LOC) make the system more understandable, since the code emphasizes
business logic rather than plumbing. Most important, a system with less code
is easier to refactor. Automated object/relational persistence substantially reduces
LOC. Of course, counting lines of code is a debatable way of measuring application
complexity.
However, there are other reasons that a Hibernate application is more maintainable.
In systems with hand-coded persistence, an inevitable tension exists between
the relational representation and the object model implementing the domain.
Changes to one almost always involve changes to the other. And often the design
of one representation is compromised to accommodate the existence of the other.
(What almost always happens in practice is that the object model of the domain is
compromised.) ORM provides a buffer between the two models, allowing more elegant
use of object orientation on the Java side, and insulating each model from
minor changes to the other.
Performance
A common claim is that hand-coded persistence can always be at least as fast, and
can often be faster, than automated persistence. This is true in the same sense that
it's true that assembly code can always be at least as fast as Java code, or a handwritten
parser can always be at least as fast as a parser generated by YACC or
ANTLR; in other words, it's beside the point. The unspoken implication of the
claim is that hand-coded persistence will perform at least as well in an actual application.
But this implication will be true only if the effort required to implement
at-least-as-fast hand-coded persistence is similar to the amount of effort involved
in utilizing an automated solution. The really interesting question is, what happens
when we consider time and budget constraints?
Given a persistence task, many optimizations are possible. Some (such as
query hints) are much easier to achieve with hand-coded SQL/JDBC. Most optimizations,
however, are much easier to achieve with automated ORM. In a
project with time constraints, hand-coded persistence usually allows you to make
some optimizations, some of the time. Hibernate allows many more optimizations
to be used all the time. Furthermore, automated persistence improves
developer productivity so much that you can spend more time hand-optimizing
the few remaining bottlenecks.
Finally, the people who implemented your ORM software probably had much
more time to investigate performance optimizations than you have. Did you
know, for instance, that pooling PreparedStatement instances results in a significant
performance increase for the DB2 JDBC driver but breaks the InterBase JDBC
driver? Did you realize that updating only the changed columns of a table can be
significantly faster for some databases but potentially slower for others? In your
handcrafted solution, how easy is it to experiment with the impact of these various
strategies?
Vendor independence
An ORM abstracts your application away from the underlying SQL database and
SQL dialect. If the tool supports a number of different databases (most do), then
this confers a certain level of portability on your application. You shouldn't necessarily
expect "write once/run anywhere," since the capabilities of databases differ
and achieving full portability would require sacrificing some of the strength of the
more powerful platforms. Nevertheless, it's usually much easier to develop a cross-platform
application using ORM. Even if you don't require cross-platform operation,
an ORM can still help mitigate some of the risks associated with vendor lock-in.
In addition, database independence helps in development scenarios where
developers use a lightweight local database but deploy for production on a different
database.
1.5 Summary
In this chapter, we've discussed the concept of object persistence and the importance
of ORM as an implementation technique. Object persistence means that
individual objects can outlive the application process; they can be saved to a data
store and be re-created at a later point in time. The object/relational mismatch
comes into play when the data store is an SQL-based relational database management
system. For instance, a graph of objects can't simply be saved to a database
table; it must be disassembled and persisted to columns of portable SQL data
types. A good solution for this problem is ORM, which is especially helpful if we
consider richly typed Java domain models.
A domain model represents the business entities used in a Java application. In a
layered system architecture, the domain model is used to execute business logic in
the business layer (in Java, not in the database). This business layer communicates
with the persistence layer beneath in order to load and store the persistent objects
of the domain model. ORM is the middleware in the persistence layer that manages
the persistence.
ORM isn't a silver bullet for all persistence tasks; its job is to relieve the developer
of 95 percent of object persistence work, such as writing complex SQL statements
with many table joins and copying values from JDBC result sets to objects or graphs
of objects. A full-featured ORM middleware might provide database portability, certain
optimization techniques like caching, and other viable functions that aren't
easy to hand-code in a limited time with SQL and JDBC.
It's likely that a better solution than ORM will exist some day. We (and many others)
may have to rethink everything we know about SQL, persistence API standards,
and application integration. The evolution of today's systems into true relational
database systems with seamless object-oriented integration remains pure speculation.
But we can't wait, and there is no sign that any of these issues will improve
soon (a multibillion-dollar industry isn't very agile). ORM is the best solution
currently available, and it's a timesaver for developers facing the object/relational
mismatch every day.
2 Introducing and integrating Hibernate
This chapter covers
¦ Hibernate in action with Hello World
¦ The Hibernate core programming interfaces
¦ Integration with managed
and non-managed environments
¦ Advanced configuration options
It's good to understand the need for object/relational mapping in Java applications,
but you're probably eager to see Hibernate in action. We'll start by showing
you a simple example that demonstrates some of its power.
As you're probably aware, it's traditional for a programming book to start with
a Hello World example. In this chapter, we follow that tradition by introducing
Hibernate with a relatively simple Hello World program. However, simply printing
a message to a console window won't be enough to really demonstrate Hibernate.
Instead, our program will store newly created objects in the database, update
them, and perform queries to retrieve them from the database.
This chapter will form the basis for the subsequent chapters. In addition to the
canonical Hello World example, we introduce the core Hibernate APIs and
explain how to configure Hibernate in various runtime environments, such as J2EE
application servers and stand-alone applications.
2.1 Hello World with Hibernate
Hibernate applications define persistent classes that are mapped to database tables.
Our Hello World example consists of one class and one mapping file. Let's see
what a simple persistent class looks like, how the mapping is specified, and some of
the things we can do with instances of the persistent class using Hibernate.
The objective of our sample application is to store messages in a database and
to retrieve them for display. The application has a simple persistent class, Message,
which represents these printable messages. Our Message class is shown in listing 2.1.
Listing 2.1 Message.java: A simple persistent class

package hello;

public class Message {
    private Long id;                 // identifier attribute
    private String text;             // message text
    private Message nextMessage;     // reference to another Message

    private Message() {}

    public Message(String text) {
        this.text = text;
    }

    public Long getId() {
        return id;
    }
    private void setId(Long id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }
    public void setText(String text) {
        this.text = text;
    }

    public Message getNextMessage() {
        return nextMessage;
    }
    public void setNextMessage(Message nextMessage) {
        this.nextMessage = nextMessage;
    }
}
Our Message class has three attributes: the identifier attribute, the text of the message,
and a reference to another Message. The identifier attribute allows the application
to access the database identity (the primary key value) of a persistent
object. If two instances of Message have the same identifier value, they represent
the same row in the database. We've chosen Long for the type of our identifier
attribute, but this isn't a requirement. Hibernate allows virtually anything for the
identifier type, as you'll see later.
You may have noticed that all attributes of the Message class have JavaBean-style
property accessor methods. The class also has a constructor with no parameters.
The persistent classes we use in our examples will almost always look something
like this.
Instances of the Message class may be managed (made persistent) by Hibernate,
but they don't have to be. Since the Message object doesn't implement any
Hibernate-specific classes or interfaces, we can use it like any other Java class:
Message message = new Message("Hello World");
System.out.println( message.getText() );
This code fragment does exactly what we've come to expect from Hello World
applications: It prints "Hello World" to the console. It might look like we're trying
to be cute here; in fact, we're demonstrating an important feature that distinguishes
Hibernate from some other persistence solutions, such as EJB entity
beans. Our persistent class can be used in any execution context at all; no special
container is needed. Of course, you came here to see Hibernate itself, so let's save
a new Message to the database:
Session session = getSessionFactory().openSession();
Transaction tx = session.beginTransaction();
Message message = new Message("Hello World");
session.save(message);
tx.commit();
session.close();
This code calls the Hibernate Session and Transaction interfaces. (We'll get to
that getSessionFactory() call soon.) It results in the execution of something similar
to the following SQL:
insert into MESSAGES (MESSAGE_ID, MESSAGE_TEXT, NEXT_MESSAGE_ID)
values (1, 'Hello World', null)
Hold on: the MESSAGE_ID column is being initialized to a strange value. We didn't
set the id property of message anywhere, so we would expect it to be null, right?
Actually, the id property is special: It's an identifier property; it holds a generated
unique value. (We'll discuss how the value is generated later.) The value is
assigned to the Message instance by Hibernate when save() is called.
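The getSessionFactory() call refers to a helper method we haven't implemented yet. As a minimal sketch, assuming the Hibernate 2.x Configuration API and a hello/Message.hbm.xml mapping file on the classpath (the HibernateUtil class name is our own, not part of Hibernate), it might look like this:

import net.sf.hibernate.HibernateException;
import net.sf.hibernate.SessionFactory;
import net.sf.hibernate.cfg.Configuration;

public class HibernateUtil {
    private static SessionFactory sessionFactory;
    // Builds the (expensive) SessionFactory once and reuses it afterward.
    public static synchronized SessionFactory getSessionFactory()
            throws HibernateException {
        if (sessionFactory == null) {
            sessionFactory = new Configuration()
                .addClass(hello.Message.class) // reads hello/Message.hbm.xml
                .buildSessionFactory();
        }
        return sessionFactory;
    }
}

We return to configuration in detail in section 2.3.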
For this example, we assume that the MESSAGES table already exists. In chapter 9,
we'll show you how to use Hibernate to automatically create the tables your application
needs, using just the information in the mapping files. (There's some more
SQL you won't need to write by hand!) Of course, we want our Hello World program
to print the message to the console. Now that we have a message in the database,
we're ready to demonstrate this. The next example retrieves all messages
from the database, in alphabetical order, and prints them:
Session newSession = getSessionFactory().openSession();
Transaction newTransaction = newSession.beginTransaction();
List messages =
    newSession.find("from Message as m order by m.text asc");
System.out.println( messages.size() + " message(s) found:" );
for ( Iterator iter = messages.iterator(); iter.hasNext(); ) {
    Message message = (Message) iter.next();
    System.out.println( message.getText() );
}
newTransaction.commit();
newSession.close();
The literal string "from Message as m order by m.text asc" is a Hibernate query,
expressed in Hibernate's own object-oriented Hibernate Query Language (HQL).
This query is internally translated into the following SQL when find() is called:
select m.MESSAGE_ID, m.MESSAGE_TEXT, m.NEXT_MESSAGE_ID
from MESSAGES m
order by m.MESSAGE_TEXT asc
The code fragment prints
1 message(s) found:
Hello World
If you've never used an ORM tool like Hibernate before, you were probably
expecting to see the SQL statements somewhere in the code or metadata. They
aren't there. All SQL is generated at runtime (actually at startup, for all reusable
SQL statements).
To allow this magic to occur, Hibernate needs more information about how the
Message class should be made persistent. This information is usually provided in an
XML mapping document. The mapping document defines, among other things, how
properties of the Message class map to columns of the MESSAGES table. Let's look at
the mapping document in listing 2.2.
"-//Hibernate/Hibernate Mapping DTD//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
name="hello.Message"
table="MESSAGES">
name="id"
column="MESSAGE_ID">
name="text"
column="MESSAGE_TEXT"/>
name="nextMessage"
cascade="all"
column="NEXT_MESSAGE_ID"/>
The mapping document tells Hibernate that the Message class is to be persisted to
the MESSAGES table, that the identifier property maps to a column named
MESSAGE_ID, that the text property maps to a column named MESSAGE_TEXT, and
that the property named nextMessage is an association with many-to-one multiplicity
that maps to a column named NEXT_MESSAGE_ID. (Don't worry about the other
details for now.)
As you can see, the XML document isn't difficult to understand. You can easily
write and maintain it by hand. In chapter 3, we discuss a way of generating the
XML file from comments embedded in the source code. Whichever method you
choose, Hibernate has enough information to completely generate all the SQL
statements that would be needed to insert, update, delete, and retrieve instances
of the Message class. You no longer need to write these SQL statements by hand.
NOTE Many Java developers have complained of the metadata hell that
accompanies J2EE development. Some have suggested a movement away
from XML metadata, back to plain Java code. Although we applaud this
suggestion for some problems, ORM represents a case where text-based
metadata really is necessary. Hibernate has sensible defaults that minimize
typing and a mature document type definition that can be used for
auto-completion or validation in editors. You can even automatically generate
metadata with various tools.
Now, let's change our first message and, while we're at it, create a new message
associated with the first, as shown in listing 2.3.
Listing 2.3 Updating a message

Session session = getSessionFactory().openSession();
Transaction tx = session.beginTransaction();
// 1 is the generated id of the first message
Message message =
(Message) session.load( Message.class, new Long(1) );
message.setText("Greetings Earthling");
Message nextMessage = new Message("Take me to your leader (please)");
message.setNextMessage( nextMessage );
tx.commit();
session.close();
This code calls three SQL statements inside the same transaction:
select m.MESSAGE_ID, m.MESSAGE_TEXT, m.NEXT_MESSAGE_ID
from MESSAGES m
where m.MESSAGE_ID = 1
insert into MESSAGES (MESSAGE_ID, MESSAGE_TEXT, NEXT_MESSAGE_ID)
values (2, 'Take me to your leader (please)', null)
update MESSAGES
set MESSAGE_TEXT = 'Greetings Earthling', NEXT_MESSAGE_ID = 2
where MESSAGE_ID = 1
Notice how Hibernate detected the modification to the text and nextMessage
properties of the first message and automatically updated the database. We've
taken advantage of a Hibernate feature called automatic dirty checking: This feature
saves us the effort of explicitly asking Hibernate to update the database when we
modify the state of an object inside a transaction. Similarly, you can see that the
new message was made persistent when a reference was created from the first message.
This feature is called cascading save: It saves us the effort of explicitly making
the new object persistent by calling save(), as long as it's reachable by an already-persistent
instance. Also notice that the ordering of the SQL statements isn't the
same as the order in which we set property values. Hibernate uses a sophisticated
algorithm to determine an efficient ordering that avoids database foreign key constraint
violations but is still sufficiently predictable to the user. This feature is
called transactional write-behind.
If we run Hello World again, it prints
2 message(s) found:
Greetings Earthling
Take me to your leader (please)
This is as far as we'll take the Hello World application. Now that we finally have
some code under our belt, we'll take a step back and present an overview of
Hibernate's main APIs.
2.2 Understanding the architecture
The programming interfaces are the first thing you have to learn about Hibernate
in order to use it in the persistence layer of your application. A major objective
of API design is to keep the interfaces between software components as
narrow as possible. In practice, however, ORM APIs aren't especially small. Don't
worry, though; you don't have to understand all the Hibernate interfaces at once.
Figure 2.1 illustrates the roles of the most important Hibernate interfaces in the
business and persistence layers. We show the business layer above the persistence
layer, since the business layer acts as a client of the persistence layer in a traditionally
layered application. Note that some simple applications might not
cleanly separate business logic from persistence logic; that's okay (it merely simplifies
the diagram).
The Hibernate interfaces shown in figure 2.1 may be approximately classified as
follows:
¦ Interfaces called by applications to perform basic CRUD and querying operations.
These interfaces are the main point of dependency of application
business/control logic on Hibernate. They include Session, Transaction,
and Query.
¦ Interfaces called by application infrastructure code to configure Hibernate,
most importantly the Configuration class.
¦ Callback interfaces that allow the application to react to events occurring
inside Hibernate, such as Interceptor, Lifecycle, and Validatable.
¦ Interfaces that allow extension of Hibernates powerful mapping functionality,
such as UserType, CompositeUserType, and IdentifierGenerator.
These interfaces are implemented by application infrastructure code (if
necessary).
Hibernate makes use of existing Java APIs, including JDBC, Java Transaction API
(JTA), and Java Naming and Directory Interface (JNDI). JDBC provides a rudimentary
level of abstraction of functionality common to relational databases, allowing
almost any database with a JDBC driver to be supported by Hibernate. JNDI and
JTA allow Hibernate to be integrated with J2EE application servers.
In this section, we don't cover the detailed semantics of Hibernate API methods,
just the role of each of the primary interfaces. You can find most of these interfaces
in the package net.sf.hibernate. Let's take a brief look at each interface in turn.
Figure 2.1 High-level overview of the Hibernate API in a layered architecture
2.2.1 The core interfaces
The five core interfaces are used in just about every Hibernate application.
Using these interfaces, you can store and retrieve persistent objects and control
transactions.
Session interface
The Session interface is the primary interface used by Hibernate applications. An
instance of Session is lightweight and is inexpensive to create and destroy. This is
important because your application will need to create and destroy sessions all the
time, perhaps on every request. Hibernate sessions are not thread-safe and should
by design be used by only one thread at a time.
The Hibernate notion of a session is something between connection and transaction.
It may be easier to think of a session as a cache or collection of loaded objects
relating to a single unit of work. Hibernate can detect changes to the objects in this
unit of work. We sometimes call the Session a persistence manager because it's also
the interface for persistence-related operations such as storing and retrieving
objects. Note that a Hibernate session has nothing to do with the web-tier HttpSession.
When we use the word session in this book, we mean the Hibernate session.
We sometimes use user session to refer to the HttpSession object.
We describe the Session interface in detail in chapter 4, section 4.2, The persistence
manager.
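As a quick preview, here is a minimal sketch of a session's lifecycle; Message is the Hello World class from the beginning of this chapter, and the sessions factory and messageId identifier are assumed to be available:

Session session = sessions.openSession();
try {
    Message message = (Message) session.load(Message.class, messageId);
    message.setText("Greetings Earthling");
    session.flush();  // synchronize this unit of work with the database
} finally {
    session.close();  // open and close one session per request or unit of work
}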
SessionFactory interface
The application obtains Session instances from a SessionFactory. Compared to
the Session interface, this object is much less exciting.
The SessionFactory is certainly not lightweight! It's intended to be shared among many application threads. There is typically a single SessionFactory for the whole application, created during application initialization, for example. However, if your application accesses multiple databases using Hibernate, you'll need a SessionFactory for each database.
The SessionFactory caches generated SQL statements and other mapping
metadata that Hibernate uses at runtime. It also holds cached data that has been
read in one unit of work and may be reused in a future unit of work (only if class
and collection mappings specify that this second-level cache is desirable).
Configuration interface
The Configuration object is used to configure and bootstrap Hibernate. The
application uses a Configuration instance to specify the location of mapping documents
and Hibernate-specific properties and then create the SessionFactory.
Even though the Configuration interface plays a relatively small part in the total scope of a Hibernate application, it's the first object you'll meet when you begin using Hibernate. Section 2.3 covers the problem of configuring Hibernate
in some detail.
Transaction interface
The Transaction interface is an optional API. Hibernate applications may choose
not to use this interface, instead managing transactions in their own infrastructure
code. A Transaction abstracts application code from the underlying transaction implementation (which might be a JDBC transaction, a JTA UserTransaction, or even a Common Object Request Broker Architecture (CORBA) transaction), allowing the application to control transaction boundaries via a consistent API.
This helps to keep Hibernate applications portable between different kinds of
execution environments and containers.
We use the Hibernate Transaction API throughout this book. Transactions and
the Transaction interface are explained in chapter 5.
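The idiom we use looks like this sketch (newItem is an assumed, already-populated object):

Session session = sessions.openSession();
Transaction tx = null;
try {
    tx = session.beginTransaction();  // net.sf.hibernate.Transaction
    session.save(newItem);
    tx.commit();  // flushes the Session and commits the underlying transaction
} catch (HibernateException ex) {
    if (tx != null) tx.rollback();  // undo all work done in this transaction
    throw ex;
} finally {
    session.close();
}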
Query and Criteria interfaces
The Query interface allows you to perform queries against the database and control
how the query is executed. Queries are written in HQL or in the native SQL
dialect of your database. A Query instance is used to bind query parameters, limit
the number of results returned by the query, and finally to execute the query.
The Criteria interface is very similar; it allows you to create and execute object-oriented criteria queries.
To help make application code less verbose, Hibernate provides some shortcut methods on the Session interface that let you invoke a query in one line of code. We won't use these shortcuts in the book; instead, we'll always use the Query interface.
A Query instance is lightweight and can't be used outside the Session that created it. We describe the features of the Query interface in chapter 7.
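As a preview, here is a sketch of both styles; the Item class is from the auction application introduced in chapter 3, and the query values are illustrative:

Query q = session.createQuery("from Item item where item.name = :name");
q.setString("name", "Warhol print");  // bind the named parameter
q.setMaxResults(10);                  // limit the result set
List items = q.list();

Criteria crit = session.createCriteria(Item.class);
crit.add( Expression.eq("name", "Warhol print") );  // net.sf.hibernate.expression.Expression
crit.setMaxResults(10);
List sameItems = crit.list();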
2.2.2 Callback interfaces
Callback interfaces allow the application to receive a notification when something interesting happens to an object: for example, when an object is loaded, saved, or deleted. Hibernate applications don't need to implement these callbacks, but they're useful for implementing certain kinds of generic functionality, such as creating audit records.
The Lifecycle and Validatable interfaces allow a persistent object to react to events relating to its own persistence lifecycle. The persistence lifecycle is encompassed by an object's CRUD operations. The Hibernate team was heavily influenced by other ORM solutions that have similar callback interfaces. Later, they realized that having the persistent classes implement Hibernate-specific interfaces probably isn't a good idea, because doing so pollutes our persistent classes with nonportable code. Since these approaches are no longer favored, we don't discuss them in this book.
The Interceptor interface was introduced to allow the application to process
callbacks without forcing the persistent classes to implement Hibernate-specific
APIs. An implementation of the Interceptor interface is supplied to the Session when the session is opened. We'll discuss an example in chapter 8.
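For example, an interceptor is supplied when a session is opened; AuditInterceptor here is a hypothetical application class implementing Interceptor:

Interceptor auditor = new AuditInterceptor();
Session session = sessions.openSession(auditor);  // callbacks are routed to auditor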
2.2.3 Types
A fundamental and very powerful element of the architecture is Hibernate's notion of a Type. A Hibernate Type object maps a Java type to a database column
type (actually, the type may span multiple columns). All persistent properties of
persistent classes, including associations, have a corresponding Hibernate type.
This design makes Hibernate extremely flexible and extensible.
There is a rich range of built-in types, covering all Java primitives and many JDK
classes, including types for java.util.Currency, java.util.Calendar, byte[], and
java.io.Serializable.
Even better, Hibernate supports user-defined custom types. The interfaces
UserType and CompositeUserType are provided to allow you to add your own types.
You can use this feature to allow commonly used application classes such as
Address, Name, or MonetaryAmount to be handled conveniently and elegantly. Custom
types are considered a central feature of Hibernate, and you're encouraged to put them to new and creative uses!
We explain Hibernate types and user-defined types in chapter 6, section 6.1,
Understanding the Hibernate type system.
2.2.4 Extension interfaces
Much of the functionality that Hibernate provides is configurable, allowing you to
choose between certain built-in strategies. When the built-in strategies are insufficient,
Hibernate will usually let you plug in your own custom implementation by
implementing an interface. Extension points include:
¦ Primary key generation (IdentifierGenerator interface)
¦ SQL dialect support (Dialect abstract class)
¦ Caching strategies (Cache and CacheProvider interfaces)
¦ JDBC connection management (ConnectionProvider interface)
¦ Transaction management (TransactionFactory, Transaction, and TransactionManagerLookup
interfaces)
¦ ORM strategies (ClassPersister interface hierarchy)
¦ Property access strategies (PropertyAccessor interface)
¦ Proxy creation (ProxyFactory interface)
Hibernate ships with at least one implementation of each of the listed interfaces,
so you don't usually need to start from scratch if you wish to extend the built-in
functionality. The source code is available for you to use as an example for your
own implementation.
By now you can see that before we can start writing any code that uses Hibernate,
we must answer this question: How do we get a Session to work with?
2.3 Basic configuration
We've looked at an example application and examined Hibernate's core interfaces. To use Hibernate in an application, you need to know how to configure it. Hibernate can be configured to run in almost any Java application and development environment. Generally, Hibernate is used in two- and three-tiered client/server applications, with Hibernate deployed only on the server. The client application is usually a web browser, but Swing and SWT client applications aren't uncommon. Although we concentrate on multitiered web applications in this book, our explanations apply equally to other architectures, such as command-line applications. It's important to understand the difference in configuring Hibernate for managed and non-managed environments:
¦ Managed environment: pools resources such as database connections and allows transaction boundaries and security to be specified declaratively (that is, in metadata). A J2EE application server such as JBoss, BEA WebLogic, or IBM WebSphere implements the standard (J2EE-specific) managed environment for Java.
¦ Non-managed environment: provides basic concurrency management via thread pooling. A servlet container like Jetty or Tomcat provides a non-managed server environment for Java web applications. A stand-alone desktop or command-line application is also considered non-managed. Non-managed environments don't provide automatic transaction or resource management or security infrastructure. The application itself manages database connections and demarcates transaction boundaries.
Hibernate attempts to abstract the environment in which it's deployed. In the case of a non-managed environment, Hibernate handles transactions and JDBC connections (or delegates to application code that handles these concerns). In managed environments, Hibernate integrates with container-managed transactions and datasources. Hibernate can be configured for deployment in both environments.
In both managed and non-managed environments, the first thing you must do is start Hibernate. In practice, doing so is very easy: You have to create a SessionFactory from a Configuration.
2.3.1 Creating a SessionFactory
In order to create a SessionFactory, you first create a single instance of Configuration
during application initialization and use it to set the location of the mapping
files. Once configured, the Configuration instance is used to create the
SessionFactory. After the SessionFactory is created, you can discard the Configuration object.
The following code starts Hibernate:
Configuration cfg = new Configuration();
cfg.addResource("hello/Message.hbm.xml");
cfg.setProperties( System.getProperties() );
SessionFactory sessions = cfg.buildSessionFactory();
The location of the mapping file, Message.hbm.xml, is relative to the root of the
application classpath. For example, if the classpath is the current directory, the
Message.hbm.xml file must be in the hello directory. XML mapping files must be
placed in the classpath. In this example, we also use the system properties of the
virtual machine to set all other configuration options (which might have been set
before by application code or as startup options).
METHOD CHAINING Method chaining is a programming style supported by many Hibernate interfaces. This style is more popular in Smalltalk than in Java and is considered by some people to be less readable and more difficult to debug than the more accepted Java style. However, it's very convenient in most cases.
Most Java developers declare setter or adder methods to be of type
void, meaning they return no value. In Smalltalk, which has no void
type, setter or adder methods usually return the receiving object. This
would allow us to rewrite the previous code example as follows:
SessionFactory sessions = new Configuration()
.addResource("hello/Message.hbm.xml")
.setProperties( System.getProperties() )
.buildSessionFactory();
Notice that we didn't need to declare a local variable for the Configuration. We use this style in some code examples; but if you don't like it, you don't need to use it yourself. If you do use this coding style, it's better to write each method invocation on a different line. Otherwise, it might be difficult to step through the code in your debugger.
By convention, Hibernate XML mapping files are named with the .hbm.xml extension.
Another convention is to have one mapping file per class, rather than have
all your mappings listed in one file (which is possible but considered bad style).
Our Hello World example had only one persistent class, but let's assume we
have multiple persistent classes, with an XML mapping file for each. Where should
we put these mapping files?
The Hibernate documentation recommends that the mapping file for each persistent
class be placed in the same directory as that class. For instance, the mapping
file for the Message class would be placed in the hello directory in a file named
Message.hbm.xml. If we had another persistent class, it would be defined in its own
mapping file. We suggest that you follow this practice. The monolithic metadata
files encouraged by some frameworks, such as the struts-config.xml found in
Struts, are a major contributor to metadata hell. You load multiple mapping files
by calling addResource() as often as you have to. Alternatively, if you follow the convention
just described, you can use the method addClass(), passing a persistent
class as the parameter:
SessionFactory sessions = new Configuration()
.addClass(org.hibernate.auction.model.Item.class)
.addClass(org.hibernate.auction.model.Category.class)
.addClass(org.hibernate.auction.model.Bid.class)
.setProperties( System.getProperties() )
.buildSessionFactory();
The addClass() method assumes that the name of the mapping file ends with the
.hbm.xml extension and is deployed along with the mapped class file.
We've demonstrated the creation of a single SessionFactory, which is all that most applications need. If another SessionFactory is needed (if there are multiple databases, for example), you repeat the process. Each SessionFactory is then available for one database and ready to produce Sessions to work with that particular database and a set of class mappings.
Of course, there is more to configuring Hibernate than just pointing to mapping
documents. You also need to specify how database connections are to be
obtained, along with various other settings that affect the behavior of Hibernate at
runtime. The multitude of configuration properties may appear overwhelming (a complete list appears in the Hibernate documentation), but don't worry; most define reasonable default values, and only a handful are commonly required.
To specify configuration options, you may use any of the following techniques:
¦ Pass an instance of java.util.Properties to Configuration.setProperties().
¦ Set system properties using java -Dproperty=value.
¦ Place a file called hibernate.properties in the classpath.
¦ Include <property> elements in a file called hibernate.cfg.xml in the classpath.
The first and second options are rarely used except for quick testing and prototypes, but most applications need a fixed configuration file. Both the hibernate.properties and the hibernate.cfg.xml files provide the same function: to configure Hibernate. Which file you choose to use depends on your syntax preference. It's even possible to mix both options and have different settings for development and deployment, as you'll see later in this chapter.
A rarely used alternative option is to allow the application to provide a JDBC Connection when it opens a Hibernate Session from the SessionFactory (for example, by calling sessions.openSession(myConnection)). Using this option means that you don't have to specify any database connection properties. We don't recommend this approach for new applications that can be configured to use the environment's database connection infrastructure (for example, a JDBC connection pool or an application server datasource).
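The call looks like this sketch, where getConnection() stands in for whatever connection-acquisition code your application already has:

java.sql.Connection conn = getConnection();  // application-managed connection
Session session = sessions.openSession(conn);  // Hibernate uses, but doesn't own, conn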
Of all the configuration options, database connection settings are the most
important. They differ in managed and non-managed environments, so we deal
with the two cases separately. Let's start with non-managed.
2.3.2 Configuration in non-managed environments
In a non-managed environment, such as a servlet container, the application is responsible for obtaining JDBC connections. Hibernate is part of the application, so it's responsible for getting these connections. You tell Hibernate how to get (or create new) JDBC connections. Generally, it isn't advisable to create a connection each time you want to interact with the database. Instead, Java applications should use a pool of JDBC connections. There are three reasons for using a pool:
¦ Acquiring a new connection is expensive.
¦ Maintaining many idle connections is expensive.
¦ Creating prepared statements is also expensive for some drivers.
Figure 2.2 shows the role of a JDBC connection pool in a web application runtime environment. Since this non-managed environment doesn't implement connection pooling, the application must implement its own pooling algorithm or rely upon a third-party library such as the open source C3P0 connection pool. Without Hibernate, the application code usually calls the connection pool to obtain JDBC connections and execute SQL statements.
With Hibernate, the picture changes: It acts as a client of the JDBC connection
pool, as shown in figure 2.3. The application code uses the Hibernate Session and
Query APIs for persistence operations and only has to manage database transactions,
ideally using the Hibernate Transaction API.
Using a connection pool
Hibernate defines a plugin architecture that allows integration with any connection pool. However, support for C3P0 is built in, so we'll use that. Hibernate will set up the connection pool for you with the given properties. An example of a hibernate.properties file using C3P0 is shown in listing 2.4.
Figure 2.2 JDBC connection pooling in a non-managed environment
Listing 2.4 Using hibernate.properties for C3P0 connection pool settings

hibernate.connection.driver_class = org.postgresql.Driver
hibernate.connection.url = jdbc:postgresql://localhost/auctiondb
hibernate.connection.username = auctionuser
hibernate.connection.password = secret
hibernate.dialect = net.sf.hibernate.dialect.PostgreSQLDialect
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
hibernate.c3p0.timeout=300
hibernate.c3p0.max_statements=50
hibernate.c3p0.idle_test_period=3000
These lines specify the following information, beginning with the first line:
¦ The name of the Java class implementing the JDBC Driver (the driver JAR
file must be placed in the applications classpath).
¦ A JDBC URL that specifies the host and database name for JDBC connections.
¦ The database user name.
¦ The database password for the specified user.
¦ A Dialect for the database. Despite the ANSI standardization effort, SQL is implemented differently by various database vendors. So, you must specify a Dialect. Hibernate includes built-in support for all popular SQL databases, and new dialects may be defined easily.
¦ The minimum number of JDBC connections that C3P0 will keep ready.
Figure 2.3 Hibernate with a connection pool in a non-managed environment
¦ The maximum number of connections in the pool. An exception will be
thrown at runtime if this number is exhausted.
¦ The timeout period (in this case, 5 minutes or 300 seconds) after which an
idle connection will be removed from the pool.
¦ The maximum number of prepared statements that will be cached. Caching
of prepared statements is essential for best performance with Hibernate.
¦ The idle time in seconds before a connection is automatically validated.
Specifying properties of the form hibernate.c3p0.* selects C3P0 as Hibernate's connection pool (you don't need any other switch to enable C3P0 support). C3P0 has even more features than we've shown in the previous example, so we refer you to the Hibernate API documentation. The Javadoc for the class net.sf.hibernate.cfg.Environment documents every Hibernate configuration property, including all C3P0-related settings and settings for other third-party connection pools directly supported by Hibernate.
The other supported connection pools are Apache DBCP and Proxool. You
should try each pool in your own environment before deciding between them. The
Hibernate community tends to prefer C3P0 and Proxool.
Hibernate also ships with a default connection pooling mechanism. This connection
pool is only suitable for testing and experimenting with Hibernate: You
should not use this built-in pool in production systems. It isn't designed to scale to
an environment with many concurrent requests, and it lacks the fault tolerance features
found in specialized connection pools.
Starting Hibernate
How do you start Hibernate with these properties? You declared the properties in
a file named hibernate.properties, so you need only place this file in the application
classpath. It will be automatically detected and read when Hibernate is first initialized, that is, when you create a Configuration object.
Let's summarize the configuration steps you've learned so far (this is a good time to download and install Hibernate, if you'd like to continue in a non-managed environment):
1 Download and unpack the JDBC driver for your database, which is usually
available from the database vendor web site. Place the JAR files in the application
classpath; do the same with hibernate2.jar.
2 Add Hibernate's dependencies to the classpath; they're distributed along with Hibernate in the lib/ directory. See also the text file lib/README.txt
for a list of required and optional libraries.
3 Choose a JDBC connection pool supported by Hibernate and configure it with a properties file. Don't forget to specify the SQL dialect.
4 Let the Configuration know about these properties by placing them in a hibernate.properties file in the classpath.
5 Create an instance of Configuration in your application and load the XML mapping files using either addResource() or addClass(). Build a SessionFactory from the Configuration by calling buildSessionFactory() (see the sketch after this list).
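Steps 3 through 5 amount to very little code. A minimal sketch, assuming a hibernate.properties file on the classpath and the Message mapping from the Hello World example:

// hibernate.properties is detected and read automatically when the
// Configuration is instantiated; only the mappings are added in code.
SessionFactory sessions = new Configuration()
    .addClass(hello.Message.class)  // expects hello/Message.hbm.xml on the classpath
    .buildSessionFactory();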
Unfortunately, you don't have any mapping files yet. If you like, you can run the Hello World example or skip the rest of this chapter and start learning about persistent classes and mappings in chapter 3. Or, if you want to know more about using Hibernate in a managed environment, read on.
2.3.3 Configuration in managed environments
A managed environment handles certain cross-cutting concerns, such as application
security (authorization and authentication), connection pooling, and transaction
management. J2EE application servers are typical managed environments.
Although application servers are generally designed to support EJBs, you can still take advantage of the other managed services provided, even if you don't use EJB entity beans.
Hibernate is often used with session or message-driven EJBs, as shown in
figure 2.4. EJBs call the same Hibernate APIs as servlets, JSPs, or stand-alone applications:
Session, Transaction, and Query. The Hibernate-related code is fully portable
between non-managed and managed environments. Hibernate handles the
different connection and transaction strategies transparently.
Figure 2.4 Hibernate in a managed environment with an application server
An application server exposes a connection pool as a JNDI-bound datasource, an instance of javax.sql.DataSource. You need to tell Hibernate where to find the datasource in JNDI, by supplying a fully qualified JNDI name. An example Hibernate configuration file for this scenario is shown in listing 2.5.
Listing 2.5 Sample hibernate.properties for a container-provided datasource

hibernate.connection.datasource = java:/comp/env/jdbc/AuctionDB
hibernate.transaction.factory_class = \
net.sf.hibernate.transaction.JTATransactionFactory
hibernate.transaction.manager_lookup_class = \
net.sf.hibernate.transaction.JBossTransactionManagerLookup
hibernate.dialect = net.sf.hibernate.dialect.PostgreSQLDialect
This file first gives the JNDI name of the datasource. The datasource must be configured in the J2EE enterprise application deployment descriptor; this is a vendor-specific setting. Next, you enable Hibernate integration with JTA. Now Hibernate needs to locate the application server's TransactionManager in order to integrate fully with the container transactions. No standard approach is defined by the J2EE specification, but Hibernate includes support for all popular application servers. Finally, of course, the Hibernate SQL dialect is required.
Now that you've configured everything correctly, using Hibernate in a managed environment isn't much different than using it in a non-managed environment: Just create a Configuration with mappings and build a SessionFactory. However, some of the transaction environment-related settings deserve some extra consideration.
Java already has a standard transaction API, JTA, which is used to control transactions
in a managed environment with J2EE. This is called container-managed transactions
(CMT). If a JTA transaction manager is present, JDBC connections are
enlisted with this manager and under its full control. This isn't the case in a non-managed environment, where an application (or the pool) manages the JDBC connections and JDBC transactions directly.
Therefore, managed and non-managed environments can use different transaction
methods. Since Hibernate needs to be portable across these environments, it defines an API for controlling transactions. The Hibernate Transaction interface abstracts the underlying JTA or JDBC transaction (or, potentially, even a CORBA transaction). This underlying transaction strategy is set with the property hibernate.transaction.factory_class, and it can take one of the following two values:
¦ net.sf.hibernate.transaction.JDBCTransactionFactory delegates to direct
JDBC transactions. This strategy should be used with a connection pool in a
non-managed environment and is the default if no strategy is specified.
¦ net.sf.hibernate.transaction.JTATransactionFactory delegates to JTA.
This is the correct strategy for CMT, where connections are enlisted with JTA.
Note that if a JTA transaction is already in progress when beginTransaction()
is called, subsequent work takes place in the context of that transaction
(otherwise a new JTA transaction is started).
For a more detailed introduction to Hibernate's Transaction API and the effects on your specific application scenario, see chapter 5, section 5.1, Transactions. Just remember the two steps that are necessary if you work with a J2EE application server: Set the factory class for the Hibernate Transaction API to JTA as described earlier, and declare the transaction manager lookup specific to your application server. The lookup strategy is required only if you use the second-level caching system in Hibernate, but it doesn't hurt to set it even without using the cache.
HIBERNATE WITH TOMCAT Tomcat isn't a full application server; it's just a servlet container, albeit a servlet container with some features usually found only in application servers. One of these features may be used with Hibernate: the Tomcat connection pool. Tomcat uses the DBCP connection pool internally but exposes it as a JNDI datasource, just like a real application server. To configure the Tomcat datasource, you'll need to edit server.xml according to instructions in the Tomcat JNDI/JDBC documentation. You can configure Hibernate to use this datasource by setting hibernate.connection.datasource. Keep in mind that Tomcat doesn't ship with a transaction manager, so this situation is still more like a non-managed environment as described earlier.
You should now have a running Hibernate system, whether you use a simple servlet
container or an application server. Create and compile a persistent class (the
initial Message, for example), copy Hibernate and its required libraries to the
classpath together with a hibernate.properties file, and build a SessionFactory.
The next section covers advanced Hibernate configuration options. Some of
them are recommended, such as logging executed SQL statements for debugging
or using the convenient XML configuration file instead of plain properties. However,
you may safely skip this section and come back later once you have read more
about persistent classes in chapter 3.
2.4 Advanced configuration settings
When you finally have a Hibernate application running, it's well worth getting to know all the Hibernate configuration parameters. These parameters let you optimize the runtime behavior of Hibernate, especially by tuning the JDBC interaction (for example, using JDBC batch updates).
We won't bore you with these details now; the best source of information about configuration options is the Hibernate reference documentation. In the previous section, we showed you the options you'll need to get started.
However, there is one parameter that we must emphasize at this point. You'll need it continually whenever you develop software with Hibernate. Setting the property hibernate.show_sql to the value true enables logging of all generated SQL to the console. You'll use it for troubleshooting, performance tuning, and just to see what's going on. It pays to be aware of what your ORM layer is doing; that's why ORM doesn't hide SQL from developers.
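You can also set it programmatically; a sketch that augments the system properties used earlier:

Properties props = new Properties();
props.putAll( System.getProperties() );
props.setProperty("hibernate.show_sql", "true");  // same effect as the properties file
Configuration cfg = new Configuration();
cfg.setProperties(props);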
So far, we've assumed that you specify configuration parameters using a hibernate.properties file or an instance of java.util.Properties programmatically. There is a third option you'll probably like: using an XML configuration file.
2.4.1 Using XML-based configuration
You can use an XML configuration file (as demonstrated in listing 2.6) to fully
configure a SessionFactory. Unlike hibernate.properties, which contains only
configuration parameters, the hibernate.cfg.xml file may also specify the location
of mapping documents. Many users prefer to centralize the configuration of
Hibernate in this way instead of adding parameters to the Configuration in application
code.
Listing 2.6 Sample hibernate.cfg.xml configuration file

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration
    PUBLIC "-//Hibernate/Hibernate Configuration DTD//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-2.0.dtd">

<hibernate-configuration>
  <session-factory name="java:/hibernate/HibernateFactory">
    <property name="show_sql">true</property>
    <property name="connection.datasource">
      java:/comp/env/jdbc/AuctionDB
    </property>
    <property name="dialect">
      net.sf.hibernate.dialect.PostgreSQLDialect
    </property>
    <property name="transaction.manager_lookup_class">
      net.sf.hibernate.transaction.JBossTransactionManagerLookup
    </property>
    <mapping resource="auction/Item.hbm.xml"/>
    <mapping resource="auction/Bid.hbm.xml"/>
  </session-factory>
</hibernate-configuration>
The document type declaration is used by the XML parser to validate this document against the Hibernate configuration DTD.
The optional name attribute is equivalent to the property hibernate.session_factory_name and is used for JNDI binding of the SessionFactory, discussed in the next section.
Hibernate properties may be specified without the hibernate prefix; property names and values are otherwise identical to programmatic configuration properties.
Mapping documents may be specified as application resources or even as hardcoded filenames. The files used here are from our online auction application, which we'll introduce in chapter 3.
Now you can initialize Hibernate using
SessionFactory sessions = new Configuration()
.configure().buildSessionFactory();
Wait: how did Hibernate know where the configuration file was located?
When configure() was called, Hibernate searched for a file named hibernate.cfg.xml in the classpath. If you wish to use a different filename or have Hibernate look in a subdirectory, you must pass a path to the configure() method:
SessionFactory sessions = new Configuration()
.configure("/hibernate-config/auction.cfg.xml")
.buildSessionFactory();
Using an XML configuration file is certainly more comfortable than a properties file or even programmatic property configuration. The fact that you can have the class mapping files externalized from the application's source (even if it would be only in a startup helper class) is a major benefit of this approach. You can, for example, use different sets of mapping files (and different configuration options), depending on your database and environment (development or production), and switch them programmatically.
If you have both hibernate.properties and hibernate.cfg.xml in the classpath,
the settings of the XML configuration file will override the settings used in the
properties. This is useful if you keep some base settings in properties and override
them for each deployment with an XML configuration file.
You may have noticed that the SessionFactory was also given a name in the XML
configuration file. Hibernate uses this name to automatically bind the SessionFactory
to JNDI after creation.
2.4.2 JNDI-bound SessionFactory
In most Hibernate applications, the SessionFactory should be instantiated once
during application initialization. The single instance should then be used by all
code in a particular process, and any Sessions should be created using this single
SessionFactory. A frequently asked question is where this factory must be placed
and how it can be accessed without much hassle.
In a J2EE environment, a SessionFactory bound to JNDI is easily shared between different threads and between various Hibernate-aware components. Of course, JNDI isn't the only way that application components might obtain a SessionFactory. There are many possible implementations of this Registry pattern, including use of the ServletContext or a static final variable in a singleton. A particularly elegant approach is to use an application-scope IoC (Inversion of Control) framework component. However, JNDI is a popular approach (and is exposed as a JMX service, as you'll see later). We discuss some of the alternatives in chapter 8, section 8.1, Designing layered applications.
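As an illustration of the static-singleton alternative, consider this minimal sketch; HibernateUtil is our illustrative name, not a Hibernate API class:

public class HibernateUtil {
    private static final SessionFactory sessionFactory;
    static {
        try {
            // reads hibernate.cfg.xml and builds the one factory for the process
            sessionFactory = new Configuration().configure().buildSessionFactory();
        } catch (HibernateException ex) {
            throw new RuntimeException("SessionFactory creation failed", ex);
        }
    }
    public static Session openSession() throws HibernateException {
        return sessionFactory.openSession();
    }
}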
NOTE The Java Naming and Directory Interface (JNDI) API allows objects to be
stored to and retrieved from a hierarchical structure (directory tree).
JNDI implements the Registry pattern. Infrastructural objects (transaction
contexts, datasources), configuration settings (environment settings,
user registries), and even application objects (EJB references, object factories)
may all be bound to JNDI.
The SessionFactory will automatically bind itself to JNDI if the property hibernate.session_factory_name is set to the name of the directory node. If your runtime environment doesn't provide a default JNDI context (or if the default JNDI implementation doesn't support instances of Referenceable), you need to specify a JNDI initial context using the properties hibernate.jndi.url and hibernate.jndi.class.
Here is an example Hibernate configuration that binds the SessionFactory to the name hibernate/HibernateFactory using Sun's (free) file system-based JNDI implementation, fscontext.jar:
hibernate.connection.datasource = java:/comp/env/jdbc/AuctionDB
hibernate.transaction.factory_class = \
net.sf.hibernate.transaction.JTATransactionFactory
hibernate.transaction.manager_lookup_class = \
net.sf.hibernate.transaction.JBossTransactionManagerLookup
hibernate.dialect = net.sf.hibernate.dialect.PostgreSQLDialect
hibernate.session_factory_name = hibernate/HibernateFactory
hibernate.jndi.class = com.sun.jndi.fscontext.RefFSContextFactory
hibernate.jndi.url = file:/auction/jndi
Of course, you can also use the XML-based configuration for this task. This example also isn't realistic, since most application servers that provide a connection pool through JNDI also have a JNDI implementation with a writable default context. JBoss certainly has, so you can skip the last two properties and just specify a name for the SessionFactory. All you have to do now is call Configuration.configure().buildSessionFactory() once to initialize the binding.
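Any component in the same process can then look up the factory; a sketch using the standard JNDI API:

Context ctx = new InitialContext();  // javax.naming
SessionFactory sessions =
    (SessionFactory) ctx.lookup("hibernate/HibernateFactory");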
NOTE Tomcat comes bundled with a read-only JNDI context, which isn't writable from application-level code after the startup of the servlet container. Hibernate can't bind to this context; you have to either use a full context implementation (like the Sun FS context) or disable JNDI binding of the SessionFactory by omitting the session_factory_name property in the configuration.
Let's look at some other very important configuration settings, those that control how Hibernate logs its operations.
2.4.3 Logging
Hibernate (and many other ORM implementations) executes SQL statements
asynchronously. An INSERT statement isn't usually executed when the application calls Session.save(); an UPDATE isn't immediately issued when the application calls Item.addBid(). Instead, the SQL statements are usually issued at the end of a transaction. This behavior is called write-behind, as we mentioned earlier.
One consequence is that tracing and debugging ORM code is sometimes nontrivial. In theory, it's possible for the application to treat Hibernate as a black box and ignore this behavior. Certainly the Hibernate application can't detect this asynchronicity (at least, not without resorting to direct JDBC calls). However, when you find yourself troubleshooting a difficult problem, you need to be able to see exactly what's going on inside Hibernate. Since Hibernate is open source, you can easily step into the Hibernate code. Occasionally, doing so helps a great deal! But, especially in the face of asynchronous behavior, debugging Hibernate can quickly get you lost. You can use logging to get a view of Hibernate's internals.
We've already mentioned the hibernate.show_sql configuration parameter, which is usually the first port of call when troubleshooting. Sometimes the SQL alone is insufficient; in that case, you must dig a little deeper.
Hibernate logs all interesting events using Apache commons-logging, a thin abstraction layer that directs output to either Apache log4j (if you put log4j.jar in your classpath) or JDK 1.4 logging (if you're running under JDK 1.4 or above and log4j isn't present). We recommend log4j, since it's more mature, more popular, and under more active development.
To see any output from log4j, you'll need a file named log4j.properties in your classpath (right next to hibernate.properties or hibernate.cfg.xml). This example directs all log messages to the console:
### direct log messages to stdout ###
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
### root logger option ###
log4j.rootLogger=warn, stdout
### Hibernate logging options ###
log4j.logger.net.sf.hibernate=info
### log JDBC bind parameters ###
log4j.logger.net.sf.hibernate.type=info
### log PreparedStatement cache activity ###
log4j.logger.net.sf.hibernate.ps.PreparedStatementCache=info
With this configuration, you won't see many log messages at runtime. Replacing info with debug for the log4j.logger.net.sf.hibernate category will reveal the inner workings of Hibernate. Make sure you don't do this in a production environment; writing the log will be much slower than the actual database access.
Finally, you have the hibernate.properties, hibernate.cfg.xml, and log4j.properties configuration files.
There is another way to configure Hibernate, if your application server supports
the Java Management Extensions.
2.4.4 Java Management Extensions (JMX)
The Java world is full of specifications, standards, and, of course, implementations of these. A relatively new but important standard is in its first version: the Java Management Extensions (JMX). JMX is about the management of systems components or, better, of system services.
Where does Hibernate fit into this new picture? Hibernate, when deployed in an application server, makes use of other services like managed transactions and pooled database connections. But why not make Hibernate a managed service itself, which others can depend on and use? This is possible with the Hibernate JMX integration, making Hibernate a managed JMX component.
The JMX specification defines the following components:
¦ The JMX MBean: a reusable component (usually infrastructural) that exposes an interface for management (administration)
¦ The JMX container: mediates generic access (local or remote) to the MBean
¦ The (usually generic) JMX client: may be used to administer any MBean via the JMX container
An application server with support for JMX (such as JBoss) acts as a JMX container and allows an MBean to be configured and initialized as part of the application server startup process. It's possible to monitor and administer the MBean using the application server's administration console (which acts as the JMX client).
An MBean may be packaged as a JMX service, which is not only portable between application servers with JMX support but also deployable to a running system (a hot deploy).
Hibernate may be packaged and administered as a JMX MBean. The Hibernate JMX service allows Hibernate to be initialized at application server startup and controlled (configured) via a JMX client. However, JMX components aren't automatically integrated with container-managed transactions. So, the configuration options in listing 2.7 (a JBoss service deployment descriptor) look similar to the usual Hibernate settings in a managed environment.
code="net.sf.hibernate.jmx.HibernateService"
name="jboss.jca:service=HibernateFactory, name=HibernateFactory">
jboss.jca:service=RARDeployer
jboss.jca:service=LocalTxCM,name=DataSource
auction/Item.hbm.xml, auction/Bid.hbm.xml
Listing 2.7 Hibernate jboss-service.xml JMX deployment descriptor
Licensed to Lathika
Advanced configuration settings 57
java:/hibernate/HibernateFactory
java:/comp/env/jdbc/AuctionDB
net.sf.hibernate.dialect.PostgreSQLDialect
net.sf.hibernate.transaction.JTATransactionFactory
net.sf.hibernate.transaction.JBossTransactionManagerLookup
java:/UserTransaction
The HibernateService depends on two other JMX services: service=RARDeployer
and service=LocalTxCM,name=DataSource, both in the jboss.jca service domain
name.
The Hibernate MBean may be found in the package net.sf.hibernate.jmx.
Unfortunately, lifecycle management methods like starting and stopping the JMX service aren't part of the JMX 1.0 specification. The methods start() and stop() of the HibernateService are therefore specific to the JBoss application server.
NOTE If you're interested in the advanced usage of JMX, JBoss is a good open source starting point: All services (even the EJB container) in JBoss are implemented as MBeans and can be managed via a supplied console interface.
We recommend that you try to configure Hibernate programmatically (using the
Configuration object) before you try to run Hibernate as a JMX service. However,
some features (like hot-redeployment of Hibernate applications) may be possible
only with JMX, once they become available in Hibernate. Right now, the biggest
advantage of Hibernate with JMX is the automatic startup; it means you no longer
have to create a Configuration and build a SessionFactory in your application
code, but can simply access the SessionFactory through JNDI once the
HibernateService has been deployed and started.
2.5 Summary
In this chapter, we took a high-level look at Hibernate and its architecture after
running a simple Hello World example. You also saw how to configure Hibernate
in various environments and with various techniques, even including JMX.
The Configuration and SessionFactory interfaces are the entry points to
Hibernate for applications running in both managed and non-managed environments.
Hibernate provides additional APIs, such as the Transaction interface, to
bridge the differences between environments and allow you to keep your persistence
code portable.
Hibernate can be integrated into almost every Java environment, be it a servlet,
an applet, or a fully managed three-tiered client/server application. The most
important elements of a Hibernate configuration are the database resources (connection
configuration), the transaction strategies, and, of course, the XML-based
mapping metadata.
Hibernate's configuration interfaces have been designed to cover as many usage scenarios as possible while still being easy to understand. Usually, a single file named hibernate.cfg.xml and one line of code are enough to get Hibernate up and running.
None of this is much use without some persistent classes and their XML mapping documents. The next chapter is dedicated to writing and mapping persistent classes. You'll soon be able to store and retrieve persistent objects in a real application with a nontrivial object/relational mapping.
Mapping persistent classes
This chapter covers
¦ POJO basics for rich domain models
¦ Mapping POJOs with Hibernate metadata
¦ Mapping class inheritance and
fine-grained models
¦ An introduction to class association mappings
The Hello World example in chapter 2 introduced you to Hibernate; however, it isn't very useful for understanding the requirements of real-world applications with complex data models. For the rest of the book, we'll use a much more sophisticated example application, an online auction system, to demonstrate Hibernate.
In this chapter, we start our discussion of the application by introducing a programming model for persistent classes. Designing and implementing the persistent classes is a multistep process that we'll examine in detail.
First, you'll learn how to identify the business entities of a problem domain. We create a conceptual model of these entities and their attributes, called a domain model. We implement this domain model in Java by creating a persistent class for each entity. (We'll spend some time exploring exactly what these Java classes should look like.)
We then define mapping metadata to tell Hibernate how these classes and their
properties relate to database tables and columns. This involves writing or generating
XML documents that are eventually deployed along with the compiled Java
classes and used by Hibernate at runtime. This discussion of mapping metadata is
the core of this chapter, along with the in-depth exploration of the mapping techniques
for fine-grained classes, object identity, inheritance, and associations. This
chapter therefore provides the beginnings of a solution to the first four generic
problems of ORM listed in section 1.4.2, Generic ORM problems.
We'll start by introducing the example application.
3.1 The CaveatEmptor application
The CaveatEmptor online auction application demonstrates ORM techniques and
Hibernate functionality; you can download the source code for the entire working
application from the web site http://caveatemptor.hibernate.org. The application
will have a web-based user interface and run inside a servlet engine like Tomcat.
We won't pay much attention to the user interface; we'll concentrate on the data access code. In chapter 8, we discuss the changes that would be necessary if we were to perform all business logic and data access from a separate business tier implemented as EJB session beans.
But, let's start at the beginning. In order to understand the design issues involved in ORM, let's pretend the CaveatEmptor application doesn't yet exist, and that we're building it from scratch. Our first task would be analysis.
3.1.1 Analyzing the business domain
A software development effort begins with analysis of the problem domain
(assuming that no legacy code or legacy database already exist).
At this stage, you, with the help of problem domain experts, identify the main
entities that are relevant to the software system. Entities are usually notions understood
by users of the system: Payment, Customer, Order, Item, Bid, and so forth.
Some entities might be abstractions of less concrete things the user thinks about
(for example, PricingAlgorithm), but even these would usually be understandable
to the user. All these entities are found in the conceptual view of the business,
which we sometimes call a business model. Developers of object-oriented software
analyze the business model and create an object model, still at the conceptual level (no Java code). This object model may be as simple as a mental image existing only in the mind of the developer, or it may be as elaborate as a UML class diagram (as in figure 3.1) created by a CASE (Computer-Aided Software Engineering) tool like ArgoUML or TogetherJ.
This simple model contains entities that you're bound to find in any typical auction system: Category, Item, and User. The entities and their relationships (and perhaps their attributes) are all represented by this model of the problem domain. We call this kind of model (an object-oriented model of entities from the problem domain, encompassing only those entities that are of interest to the user) a domain model. It's an abstract view of the real world. We refer to this model when we implement our persistent Java classes.
Let's examine the outcome of our analysis of the problem domain of the CaveatEmptor application.
3.1.2 The CaveatEmptor domain model
The CaveatEmptor site auctions many different kinds of items, from electronic
equipment to airline tickets. Auctions proceed according to the English auction
model: Users continue to place bids on an item until the bid period for that item
expires, and the highest bidder wins.
In any store, goods are categorized by type and grouped with similar goods into
sections and onto shelves. Clearly, our auction catalog requires some kind of hierarchy
of item categories. A buyer may browse these categories or arbitrarily search
by category and item attributes. Lists of items appear in the category browser and
Figure 3.1 A class diagram of a typical online auction object model
search result screens. Selecting an item from a list will take the buyer to an item
detail view.
An auction consists of a sequence of bids. One particular bid is the winning bid.
User details include name, login, address, email address, and billing information.
A web of trust is an essential feature of an online auction site. The web of trust
allows users to build a reputation for trustworthiness (or untrustworthiness). Buyers
may create comments about sellers (and vice versa), and the comments are visible
to all other users.
A high-level overview of our domain model is shown in figure 3.2. Let's briefly discuss some interesting features of this model.
Each item may be auctioned only once, so we don't need to make Item distinct from the Auction entities. Instead, we have a single auction item entity named Item. Thus, Bid is associated directly with Item. Users can write Comments about
other users only in the context of an auction; hence the association between Item
and Comment. The Address information of a User is modeled as a separate class,
even though the User may have only one Address. We do allow the user to have
multiple BillingDetails. The various billing strategies are represented as subclasses
of an abstract class (allowing future extension).
A Category might be nested inside another Category. This is expressed by a
recursive association, from the Category entity to itself. Note that a single Category
may have multiple child categories but at most one parent category. Each Item
belongs to at least one Category.
The entities in a domain model should encapsulate state and behavior. For
example, the User entity should define the name and address of a customer and
the logic required to calculate the shipping costs for items (to this particular customer).
Our domain model is a rich object model, with complex associations,
interactions, and inheritance relationships. An interesting and detailed discussion
of object-oriented techniques for working with domain models can be found in
Patterns of Enterprise Application Architecture [Fowler 2003] or in Domain-Driven
Design [Evans 2004].
However, in this book, we won't have much to say about business rules or about the behavior of our domain model. This is certainly not because we consider this an unimportant concern; rather, this concern is mostly orthogonal to the problem of persistence. It's the state of our entities that is persistent. So, we concentrate our discussion on how to best represent state in our domain model, not on how to represent behavior. For example, in this book, we aren't interested in how tax for sold items is calculated or how the system might approve a new user account. We're more interested in how the relationship between users and the items they sell is represented and made persistent.

Figure 3.2 Persistent classes of the CaveatEmptor object model and their relationships
FAQ Can you use ORM without a domain model? We stress that object persistence with full ORM is most suitable for applications based on a rich domain model. If your application doesn't implement complex business rules or complex interactions between entities (or if you have few entities), you may not need a domain model. Many simple and some not-so-simple problems are perfectly suited to table-oriented solutions, where the application is designed around the database data model instead of around an object-oriented domain model, often with logic executed in the database (stored procedures). However, the more complex and expressive your domain model, the more you will benefit from using Hibernate; it shines when dealing with the full complexity of object/relational persistence.
Now that we have a domain model, our next step is to implement it in Java. Let's look at some of the things we need to consider.
3.2 Implementing the domain model
Several issues typically must be addressed when you implement a domain model in Java. For instance, how do you separate the business concerns from the cross-cutting concerns (such as transactions and even persistence)? What kind of persistence is needed: Do you need automated or transparent persistence? Do you have to use a specific programming model to achieve this? In this section, we examine these types of issues and how to address them in a typical Hibernate application.
Let's start with an issue that any implementation must deal with: the separation of concerns. The domain model implementation is usually a central, organizing component; it's reused heavily whenever you implement new application functionality. For this reason, you should be prepared to go to some lengths to ensure that concerns other than business aspects don't leak into the domain model implementation.
3.2.1 Addressing leakage of concerns
The domain model implementation is such an important piece of code that it shouldn't depend on other Java APIs. For example, code in the domain model shouldn't perform JNDI lookups or call the database via the JDBC API. This allows you to reuse the domain model implementation virtually anywhere. Most importantly, it makes it easy to unit test the domain model (in JUnit, for example) outside of any application server or other managed environment.
We say that the domain model should be concerned only with modeling the business domain. However, there are other concerns, such as persistence, transaction management, and authorization. You shouldn't put code that addresses these cross-cutting concerns in the classes that implement the domain model. When these concerns start to appear in the domain model classes, we call this an example of leakage of concerns.
The EJB standard tries to solve the problem of leaky concerns. Indeed, if we implemented our domain model using entity beans, the container would take care of some concerns for us (or at least externalize those concerns to the deployment descriptor). The EJB container prevents leakage of certain cross-cutting concerns using interception. An EJB is a managed component, always executed inside the EJB container. The container intercepts calls to your beans and executes its own functionality. For example, it might pass control to the CMP engine, which takes care of persistence. This approach allows the container to implement the predefined cross-cutting concerns (security, concurrency, persistence, transactions, and remoteness) in a generic way.
Unfortunately, the EJB specification imposes many rules and restrictions on how you must implement a domain model. This in itself is a kind of leakage of concerns; in this case, the concerns of the container implementor have leaked! Hibernate isn't an application server, and it doesn't try to implement all the cross-cutting concerns mentioned in the EJB specification. Hibernate is a solution for just one of these concerns: persistence. If you require declarative security and transaction management, you should still access your domain model via a session bean, taking advantage of the EJB container's implementation of these concerns. Hibernate is commonly used together with the well-known session façade J2EE pattern.
Much discussion has gone into the topic of persistence, and both Hibernate and
EJB entity beans take care of that concern. However, Hibernate offers something
that entity beans don't: transparent persistence.
3.2.2 Transparent and automated persistence
Your application server's CMP engine implements automated persistence. It takes
care of the tedious details of JDBC ResultSet and PreparedStatement handling. So
does Hibernate; indeed, Hibernate is a great deal more sophisticated in this
respect. But Hibernate does this in a way that is transparent to your domain model.
We use transparent to mean a complete separation of concerns between the
persistent classes of the domain model and the persistence logic itself, where
the persistent classes are unaware of, and have no dependency on, the persistence
mechanism.
Our Item class, for example, will not have any code-level dependency on any
Hibernate API. Furthermore:
¦ Hibernate doesn't require that any special superclasses or interfaces be
inherited or implemented by persistent classes. Nor are any special classes
used to implement properties or associations. Thus, transparent persistence
improves code readability, as you'll soon see.
¦ Persistent classes may be reused outside the context of persistence, in unit
tests or in the user interface (UI) tier, for example. Testability is a basic
requirement for applications with rich domain models.
¦ In a system with transparent persistence, objects aren't aware of the underlying
data store; they need not even be aware that they are being persisted or
retrieved. Persistence concerns are externalized to a generic persistence manager
interface; in the case of Hibernate, the Session and Query interfaces.
Transparent persistence fosters a degree of portability; without special interfaces,
the persistent classes are decoupled from any particular persistence solution. Our
business logic is fully reusable in any other application context. We could easily
change to another transparent persistence mechanism.
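To make this concrete, here is a minimal sketch of what persisting a plain POJO looks
like in application code (the User class and its username property are illustrative;
the Session and Transaction usage follows the configuration pattern from chapter 2):

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

User user = new User();        // a plain POJO; no Hibernate interfaces or superclasses
user.setUsername("johndoe");
session.save(user);            // persistence is handled entirely by the Session

tx.commit();
session.close();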
By this definition of transparent persistence, you see that certain non-automated
persistence layers are transparent (for example, the DAO pattern) because they
decouple the persistence-related code from the business logic behind abstract
programming interfaces. Only plain Java classes without dependencies are exposed
to the business logic. Conversely, some automated persistence layers (including
entity beans and some ORM solutions) are non-transparent, because they require
special interfaces or intrusive programming models.
We regard transparency as required. In fact, transparent persistence should be
one of the primary goals of any ORM solution. However, no automated persistence
solution is completely transparent: Every automated persistence layer, including
Hibernate, imposes some requirements on the persistent classes. For example,
Hibernate requires that collection-valued properties be typed to an interface such
as java.util.Set or java.util.List and not to an actual implementation such as
java.util.HashSet (this is a good practice anyway). (We discuss the reasons for
this requirement in appendix B, ORM implementation strategies.)
You now know why the persistence mechanism should have minimal impact on
how you implement a domain model and that transparent and automated persistence
are required. EJB isn't transparent, so what kind of programming model
should you use? Do you need a special programming model at all? In theory, no;
in practice, you should adopt a disciplined, consistent programming model that is
well accepted by the Java community. Let's discuss this programming model and
see how it works with Hibernate.
3.2.3 Writing POJOs
Developers have found entity beans to be tedious, unnatural, and unproductive. As
a reaction against entity beans, many developers started talking about Plain Old
Java Objects (POJOs), a back-to-basics approach that essentially revives JavaBeans, a
component model for UI development, and reapplies it to the business layer. (Most
developers are now using the terms POJO and JavaBean almost synonymously.)1
Hibernate works best with a domain model implemented as POJOs. The few
requirements that Hibernate imposes on your domain model are also best practices
for the POJO programming model. So, most POJOs are Hibernate-compatible
without any changes. The programming model we'll introduce is a non-intrusive
mix of JavaBean specification details, POJO best practices, and Hibernate requirements.
A POJO declares business methods, which define behavior, and properties,
which represent state. Some properties represent associations to other POJOs.
Listing 3.1 shows a simple POJO class; it's an implementation of the User entity
of our domain model.
Listing 3.1 POJO implementation of the User class

public class User
        implements Serializable {            // B: implementation of Serializable

    private String username;
    private Address address;

    public User() {}                         // C: class constructor

    public String getUsername() {            // D: accessor methods
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public Address getAddress() {
        return address;
    }

    public void setAddress(Address address) {
        this.address = address;
    }

    public MonetaryAmount calcShippingCosts(Address fromLocation) {   // E: business method
        ...
    }
}

1 POJO is sometimes also written as Plain Ordinary Java Objects; this term was coined in 2002 by Martin
Fowler, Rebecca Parsons, and Josh Mackenzie.
B Hibernate doesn't require that persistent classes implement Serializable. However,
when objects are stored in an HttpSession or passed by value using RMI, serialization
is necessary. (This is very likely to happen in a Hibernate application.)
C Unlike the JavaBeans specification, which requires no specific constructor, Hibernate
requires a constructor with no arguments for every persistent class. Hibernate
instantiates persistent classes using Constructor.newInstance(), a feature of
the Java reflection API. The constructor may be non-public, but it should be at
least package-visible if runtime-generated proxies will be used for performance
optimization (see chapter 4).
D The properties of the POJO implement the attributes of our business entities; for
example, the username of User. Properties are usually implemented as instance
variables, together with property accessor methods: a method for retrieving the value
of the instance variable and a method for changing its value. These methods are
known as the getter and setter, respectively. Our example POJO declares getter and
setter methods for the private username instance variable and also for address.
The JavaBean specification defines the guidelines for naming these methods.
The guidelines allow generic tools like Hibernate to easily discover and manipulate
the property value. A getter method name begins with get, followed by the
name of the property (the first letter in uppercase); a setter method name begins
with set. Getter methods for Boolean properties may begin with is instead of get.
Hibernate doesn't require that accessor methods be declared public; it can easily
use private accessors for property management.
Some getter and setter methods do something more sophisticated than simple
instance variable access (validation, for example). Trivial accessor methods are
common, however.
E This POJO also defines a business method that calculates the cost of shipping an
item to a particular user (we left out the implementation of this method).
Now that you understand the value of using POJO persistent classes as the programming
model, let's see how you handle the associations between those classes.
3.2.4 Implementing POJO associations
You use properties to express associations between POJO
classes, and you use accessor methods to navigate the object
graph at runtime. Let's consider the associations defined by
the Category class. The first association is shown in
figure 3.3.

[Figure 3.3 Diagram of the Category class with an association]

As with all our diagrams, we left out the association-related
attributes (parentCategory and childCategories)
because they would clutter the illustration. These attributes
and the methods that manipulate their values are called scaffolding code.
Let's implement the scaffolding code for the one-to-many self-association of
Category:
public class Category implements Serializable {
private String name;
private Category parentCategory;
private Set childCategories = new HashSet();
public Category() { }
...
}
To allow bidirectional navigation of the association, we require two attributes. The
parentCategory attribute implements the single-valued end of the association and is
declared to be of type Category. The many-valued end, implemented by the
childCategories attribute, must be of collection type. We choose a Set, since duplicates
are disallowed, and initialize the instance variable to a new instance of HashSet.
Hibernate requires interfaces for collection-typed attributes. You must use
java.util.Set rather than HashSet, for example. At runtime, Hibernate wraps the
HashSet instance with an instance of one of Hibernate's own classes. (This special
class isn't visible to the application code.) It is good practice to program to collection
interfaces, rather than concrete implementations, so this restriction shouldn't
bother you.
We now have some private instance variables but no public interface to allow
access from business code or property management by Hibernate. Let's add some
accessor methods to the Category class:
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public Set getChildCategories() {
return childCategories;
}
public void setChildCategories(Set childCategories) {
this.childCategories = childCategories;
}
public Category getParentCategory() {
return parentCategory;
}
public void setParentCategory(Category parentCategory) {
this.parentCategory = parentCategory;
}
Again, these accessor methods need to be declared public only if they're part of
the external interface of the persistent class, the public interface used by the
application logic.
The basic procedure for adding a child Category to a parent Category looks like
this:
Category aParent = new Category();
Category aChild = new Category();
aChild.setParentCategory(aParent);
aParent.getChildCategories().add(aChild);
Whenever an association is created between a parent Category and a child Category,
two actions are required:
¦ The parentCategory of the child must be set, effectively breaking the association
between the child and its old parent (there can be only one parent for
any child).
¦ The child must be added to the childCategories collection of the new parent
Category.
MANAGED RELATIONSHIPS IN HIBERNATE
Hibernate doesn't manage persistent associations. If you want to manipulate
an association, you must write exactly the same code you would write
without Hibernate. If an association is bidirectional, both sides of the relationship
must be considered. Programming models like EJB entity beans
muddle this behavior by introducing container-managed relationships: the
container automatically changes the other side of a relationship if one
side is modified by the application. This is one of the reasons why code
that uses entity beans can't be reused outside the container.
If you ever have problems understanding the behavior of associations in Hibernate,
just ask yourself, "What would I do without Hibernate?" Hibernate doesn't
change the usual Java semantics.
It's a good idea to add a convenience method to the Category class that groups
these operations, allowing reuse and helping ensure correctness:
public void addChildCategory(Category childCategory) {
if (childCategory == null)
throw new IllegalArgumentException("Null child category!");
if (childCategory.getParentCategory() != null)
childCategory.getParentCategory().getChildCategories()
.remove(childCategory);
childCategory.setParentCategory(this);
childCategories.add(childCategory);
}
The addChildCategory() method not only reduces the lines of code when dealing
with Category objects, but also enforces the cardinality of the association. Errors
that arise from leaving out one of the two required actions are avoided. This kind
of grouping of operations should always be provided for associations, if possible.
Because we would like the addChildCategory() method to be the only externally visible
mutator method for the child categories, we make the setChildCategories()
method private. Hibernate doesn't care if property accessor methods are private
or public, so we can focus on good API design.
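Client code now needs only a single call; a brief usage sketch (the category names are
illustrative):

Category computers = new Category();
Category laptops = new Category();
computers.addChildCategory(laptops);   // sets both sides of the association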
A different kind of relationship exists between Category and Item: a bidirectional
many-to-many association (see figure 3.4). In the case of a many-to-many association,
both sides are implemented with collection-valued attributes. Let's add the new
attributes and methods for accessing the associated items to our Category class, as
shown in listing 3.2.
[Figure 3.4 Category and the associated Item]

Listing 3.2 Category to Item scaffolding code
public class Category {
...
private Set items = new HashSet();
...
public Set getItems() {
return items;
}
public void setItems(Set items) {
this.items = items;
}
}
The code for the Item class (the other end of the many-to-many association) is
similar to the code for the Category class. We add the collection attribute, the
standard accessor methods, and a method that simplifies relationship management
(you can also add this to the Category class), as shown in listing 3.3.

Listing 3.3 Item to Category scaffolding code
public class Item {
private String name;
private String description;
...
private Set categories = new HashSet();
...
public Set getCategories() {
return categories;
}
private void setCategories(Set categories) {
this.categories = categories;
}
public void addCategory(Category category) {
if (category == null)
throw new IllegalArgumentException("Null category");
category.getItems().add(this);
categories.add(category);
}
}
The addCategory() method of the Item class is similar to the addChildCategory()
convenience method of the Category class. It's used by a client to manipulate the
relationship between an Item and a Category. For the sake of readability, we won't show
convenience methods in future code samples and assume you'll add them according
to your own taste.
Convenience methods for association handling are, however, not the only way to
improve a domain model implementation. You can also add logic to your accessor
methods.
3.2.5 Adding logic to accessor methods
One of the reasons we like to use JavaBeans-style accessor methods is that they
provide encapsulation: The hidden internal implementation of a property can be
changed without any changes to the public interface. This allows you to abstract
the internal data structure of a class (the instance variables) from the design of
the database.
For example, if your database stores the name of the user in a single NAME column,
but your User class has firstname and lastname properties, you can add the following
persistent name property to your class:
public class User {
    private String firstname;
    private String lastname;
    ...
    public String getName() {
        return firstname + ' ' + lastname;
    }
    public void setName(String name) {
        StringTokenizer t = new StringTokenizer(name);
        firstname = t.nextToken();
        lastname = t.nextToken();
    }
    ...
}
Later, you'll see that a Hibernate custom type is probably a better way to handle
many of these kinds of situations. However, it helps to have several options.
Accessor methods can also perform validation. For instance, in the following
example, the setFirstname() method verifies that the name is capitalized:
public class User {
    private String firstname;
    ...
    public String getFirstname() {
        return firstname;
    }
    public void setFirstname(String firstname)
            throws InvalidNameException {
        if ( !StringUtil.isCapitalizedName(firstname) )
            throw new InvalidNameException(firstname);
        this.firstname = firstname;
    }
    ...
}
However, Hibernate will later use our accessor methods to populate the state of
an object when loading the object from the database. Sometimes we would prefer
that this validation not occur when Hibernate is initializing a newly loaded object.
In that case, it might make sense to tell Hibernate to directly access the instance
variables (we map the property with access="field" in Hibernate metadata),
forcing Hibernate to bypass the setter method and access the instance variable
directly. Another issue to consider is dirty checking. Hibernate automatically detects
object state changes in order to synchronize the updated state with the database.
It's usually completely safe to return a different object from the getter method than
the object passed by Hibernate to the setter. Hibernate will compare the objects
by value, not by object identity, to determine whether the property's persistent state
needs to be updated. For example, the following getter method won't result in
unnecessary SQL UPDATEs:
public String getFirstname() {
return new String(firstname);
}
However, there is one very important exception. Collections are compared by
identity!
For a property mapped as a persistent collection, you should return exactly the
same collection instance from the getter method as Hibernate passed to the setter
method. If you don't, Hibernate will update the database, even if no update is necessary,
every time the session synchronizes state held in memory with the database.
This kind of code should almost always be avoided in accessor methods:
This kind of code should almost always be avoided in accessor methods:
public void setNames(List namesList) {
    names = (String[]) namesList.toArray( new String[namesList.size()] );
}
public List getNames() {
return Arrays.asList(names);
}
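A sketch of the safer alternative, assuming the property can simply be stored as the
collection type Hibernate passes in:

private List names = new ArrayList();

public void setNames(List namesList) {
    this.names = namesList;    // keep the instance Hibernate passed to the setter
}

public List getNames() {
    return names;              // return exactly the same instance
}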
You can see that Hibernate doesn't unnecessarily restrict the JavaBeans (POJO)
programming model. You're free to implement whatever logic you need in accessor
methods (as long as you keep the same collection instance in both getter and
setter). If absolutely necessary, you can tell Hibernate to use a different access
strategy to read and set the state of a property (for example, direct instance field
access), as you'll see later. This kind of transparency guarantees an independent
and reusable domain model implementation.
Now that we've implemented some persistent classes of our domain model, we
need to define the ORM.
3.3 Defining the mapping metadata
ORM tools require a metadata format for the application to specify the mapping
between classes and tables, properties and columns, associations and foreign keys,
Java types and SQL types. This information is called the object/relational mapping
metadata. It defines the transformation between the different data type systems
and relationship representations.
It's our job as developers to define and maintain this metadata. We discuss various
approaches in this section.
3.3.1 Metadata in XML
Any ORM solution should provide a human-readable, easily hand-editable mapping
format, not only a GUI mapping tool. Currently, the most popular object/
relational metadata format is XML. Mapping documents written in XML
are lightweight, are human readable, are easily manipulated by version-control
systems and text editors, and may be customized at deployment time (or even at
runtime, with programmatic XML generation).
But is XML-based metadata really the best approach? A certain backlash against
the overuse of XML can be seen in the Java community. Every framework and application
server seems to require its own XML descriptors.
In our view, there are three main reasons for this backlash:
¦ Many existing metadata formats weren't designed to be readable and easy
to edit by hand. In particular, a major cause of pain is the lack of sensible
defaults for attribute and element values, requiring significantly more typing
than should be necessary.
¦ Metadata-based solutions were often used inappropriately. Metadata is not,
by nature, more flexible or maintainable than plain Java code.
¦ Good XML editors, especially in IDEs, aren't as common as good Java
coding environments. Worse, and most easily fixable, a document type
declaration (DTD) often isn't provided, preventing auto-completion and
validation. Another problem is DTDs that are too generic, where every
declaration is wrapped in a generic extension or meta element.
There is no getting around the need for text-based metadata in ORM. However,
Hibernate was designed with full awareness of the typical metadata problems. The
metadata format is extremely readable and defines useful default values. When
attribute values are missing, Hibernate uses reflection on the mapped class to
help determine the defaults. Hibernate comes with a documented and complete
DTD. Finally, IDE support for XML has improved lately, and modern IDEs provide
dynamic XML validation and even an auto-complete feature. If that's not enough
for you, in chapter 9 we demonstrate some tools that may be used to generate
Hibernate XML mappings.
Let's look at the way you can use XML metadata in Hibernate. We created the
Category class in the previous section; now we need to map it to the CATEGORY table
in the database. To do that, we use the XML mapping document in listing 3.4.
PUBLIC "-//Hibernate/Hibernate Mapping DTD//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
name="org.hibernate.auction.model.Category"
table="CATEGORY">
name="id"
column="CATEGORY_ID"
type="long">
name="name"
column="NAME"
type="string"/>
Listing 3.4 Hibernate XML mapping of the Category class
DTD declaration B
Mapping
declaration
C
Category class mapped
to table CATEGORY
D
Identifier
mapping
E
Name property mapped
to NAME column
F
B The Hibernate mapping DTD should be declared in every mapping file; it's
required for syntactic validation of the XML.
C Mappings are declared inside a <hibernate-mapping> element. You can include as
many class mappings as you like, along with certain other special declarations that
we'll mention later in the book.
D The class Category (in the package org.hibernate.auction.model) is mapped to
the table CATEGORY. Every row in this table represents one instance of type
Category.
E We haven't discussed the concept of object identity, so you may be surprised by this
mapping element. This complex topic is covered in section 3.4. To understand
this mapping, it's sufficient to know that every record in the CATEGORY table will
have a primary key value that matches the object identity of the instance in memory.
The <id> mapping element is used to define the details of object identity.
F The property name of type String is mapped to a database column NAME. Note
that the type declared in the mapping is a built-in Hibernate type (string), not
the type of the Java property or the SQL column type. Think about this as the
mapping data type. We take a closer look at these types in chapter 6, section 6.1,
"Understanding the Hibernate type system."
We've intentionally left the association mappings out of this example. Association
mappings are more complex, so we'll return to them in section 3.7.
TRY IT Starting Hibernate with your first persistent class: After you've written the
POJO code for the Category and saved its Hibernate mapping to an XML
file, you can start up Hibernate with this mapping and try some operations.
However, the POJO code for Category shown earlier wasn't complete:
You have to add an additional property named id of type
java.lang.Long and its accessor methods to enable Hibernate identity
management, as discussed later in this chapter. Creating the database
schema with its tables for such a simple class should be no problem for
you. Observe the log of your application to check for a successful startup
and creation of a new SessionFactory from the Configuration shown
in chapter 2.
If you can't wait any longer, check out the save(), load(), and
delete() methods of the Session you can obtain from the SessionFactory.
Make sure you correctly deal with transactions; the easiest way is to
get a new Transaction object with Session.beginTransaction() and
commit it with its commit() method after you've made your calls. See the
code in section 2.1, "Hello World with Hibernate," if you'd like to copy
some example code for your first test.
Although it's possible to declare mappings for multiple classes in one mapping
file by using multiple <class> elements, the recommended practice (and the
practice expected by some Hibernate tools) is to use one mapping file per persistent
class. The convention is to give the file the same name as the mapped class,
appending an hbm suffix: for example, Category.hbm.xml.
Let's discuss basic class and property mappings in Hibernate. Keep in mind that
we still need to come back later in this chapter to the problem of mapping associations
between persistent classes.
3.3.2 Basic property and class mappings
A typical Hibernate property mapping defines a JavaBeans property name, a database
column name, and the name of a Hibernate type. It maps a JavaBean-style
property to a table column. The basic declaration provides many variations and
optional settings. It's often possible to omit the type name. So, if description is a
property of (Java) type java.lang.String, Hibernate will use the Hibernate type
string by default (we discuss the Hibernate type system in chapter 6). Hibernate
uses reflection to determine the Java type of the property. Thus, the following
mappings are equivalent:
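For example, assuming the description property just mentioned, the two equivalent
declarations would look like this:

<property name="description" column="DESCRIPTION" type="string"/>

<property name="description" column="DESCRIPTION"/>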
You can even omit the column name if it's the same as the property name, ignoring
case. (This is one of the sensible defaults we mentioned earlier.)
For some cases you might need to use a <column> element instead of the column
attribute. The <column> element provides more flexibility; it has more optional
attributes and may appear more than once. The following two property mappings
are equivalent:
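Again assuming the description property, a sketch of the two equivalent forms:

<property name="description" column="DESCRIPTION" type="string"/>

<property name="description" type="string">
    <column name="DESCRIPTION"/>
</property>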
The <property> element (and especially the <column> element) also defines certain
attributes that apply mainly to automatic database schema generation. If you
aren't using the hbm2ddl tool (see section 9.2, "Automatic schema generation") to
generate the database schema, you can safely omit these. However, it's still preferable
to include at least the not-null attribute, since Hibernate will then be able to
report illegal null property values without going to the database:
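For instance (the initialPrice property here is an illustrative example):

<property
    name="initialPrice"
    column="INITIAL_PRICE"
    not-null="true"/>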
Detection of illegal null values is mainly useful for providing sensible exceptions
at development time. It isn't intended for true data validation, which is outside
the scope of Hibernate.
Some properties don't map to a column at all. In particular, a derived property
takes its value from an SQL expression.
Using derived properties
The value of a derived property is calculated at runtime by evaluation of an
expression. You define the expression using the formula attribute. For example,
we might map a totalIncludingTax property without having a single column with
the total price in the database:
formula="TOTAL + TAX_RATE * TOTAL"
type="big_decimal"/>
The given SQL formula is evaluated every time the entity is retrieved from the
database. The property doesnt have a column attribute (or sub-element) and
never appears in an SQL INSERT or UPDATE, only in SELECTs. Formulas may refer
to columns of the database table, call SQL functions, and include SQL subselects.
This example, mapping a derived property of item, uses a correlated subselect
to calculate the average amount of all bids for an item:
name="averageBidAmount"
formula="( select AVG(b.AMOUNT) from BID b
?where b.ITEM_ID = ITEM_ID )"
type="big_decimal"/>
Notice that unqualified column names refer to table columns of the class to which
the derived property belongs.
As we mentioned earlier, Hibernate doesn't require property accessor methods
on POJO classes, if you define a new property access strategy.
Property access strategies
The access attribute allows you to specify how Hibernate should access property
values of the POJO. The default strategy, property, uses the property accessors
(get/set method pair). The field strategy uses reflection to access the instance
variable directly. The following property mapping doesn't require a get/set pair:

<property
    name="name"
    column="NAME"
    type="string"
    access="field"/>
Access to properties via accessor methods is considered best practice by the Hibernate
community. It provides an extra level of abstraction between the Java domain
model and the data model, beyond what is already provided by Hibernate. Properties
are more flexible; for example, property definitions may be overridden by
persistent subclasses.
If neither accessor methods nor direct instance variable access is appropriate,
you can define your own customized property access strategy by implementing
the interface net.sf.hibernate.property.PropertyAccessor and name it in the
access attribute.
Controlling insertion and updates
For properties that map to columns, you can control whether they appear in the
INSERT statement by using the insert attribute and whether they appear in the
UPDATE statement by using the update attribute.
The following property never has its state written to the database:
column="NAME"
type="string"
insert="false"
update="false"/>
The property name of the JavaBean is therefore immutable and can be read from
the database but not modified in any way. If the complete class is immutable, set
mutable="false" in the class mapping.
In addition, the dynamic-insert attribute tells Hibernate whether to include
null property values in an SQL INSERT, and the dynamic-update attribute
tells Hibernate whether to include unmodified properties in the SQL UPDATE:

<class
    name="User"
    dynamic-insert="true"
    dynamic-update="true">
    ...
</class>
These are both class-level settings. Enabling either of these settings will cause
Hibernate to generate some SQL at runtime, instead of using the SQL cached at
startup time. The performance cost is usually small. Furthermore, leaving out
columns in an insert (and especially in an update) can occasionally improve
performance if your tables define many columns.
Using quoted SQL identifiers
By default, Hibernate doesn't quote table and column names in the generated
SQL. This makes the SQL slightly more readable and also allows us to take advantage
of the fact that most SQL databases are case insensitive when comparing
unquoted identifiers. From time to time, especially in legacy databases, you'll
encounter identifiers with strange characters or whitespace, or you may wish to
force case sensitivity.
If you quote a table or column name with backticks in the mapping document,
Hibernate will always quote this identifier in the generated SQL. The following
property declaration forces Hibernate to generate SQL with the quoted
column name "Item Description". Hibernate will also know that Microsoft SQL
Server needs the variation [Item Description] and that MySQL requires `Item
Description`.
column="`Item Description`"/>
There is no way, apart from quoting all table and column names in backticks, to
force Hibernate to use quoted identifiers everywhere.
Naming conventions
Youll often encounter organizations with strict conventions for database table
and column names. Hibernate provides a feature that allows you to enforce naming
standards automatically.
Suppose that all table names in CaveatEmptor should follow the pattern
CE_<table name>. One solution is to manually specify a table attribute on all
<class> and collection elements in our mapping files. This approach is time-consuming
and easily forgotten. Instead, we can implement Hibernate's NamingStrategy
interface, as in listing 3.5.

Listing 3.5 NamingStrategy implementation
public class CENamingStrategy implements NamingStrategy {

    public String classToTableName(String className) {
        return tableName(
            StringHelper.unqualify(className).toUpperCase() );
    }

    public String propertyToColumnName(String propertyName) {
        return propertyName.toUpperCase();
    }

    public String tableName(String tableName) {
        return "CE_" + tableName;
    }

    public String columnName(String columnName) {
        return columnName;
    }

    public String propertyToTableName(String className,
                                      String propertyName) {
        return classToTableName(className) + '_' +
               propertyToColumnName(propertyName);
    }
}
The classToTableName() method is called only if a mapping doesn't specify
an explicit table name. The propertyToColumnName() method is called if a
property has no explicit column name. The tableName() and columnName() methods
are called when an explicit name is declared.
If we enable our CENamingStrategy, the class mapping declaration

<class name="BankAccount">

will result in CE_BANKACCOUNT as the name of the table. The classToTableName()
method was called with the fully qualified class name as the argument.
However, if a table name is specified, as in

<class name="BankAccount" table="BANK_ACCOUNT">

then CE_BANK_ACCOUNT will be the name of the table. In this case, BANK_ACCOUNT was
passed to the tableName() method.
The best feature of the NamingStrategy is the potential for dynamic behavior.
To activate a specific naming strategy, we can pass an instance to the Hibernate
Configuration at runtime:
Configuration cfg = new Configuration();
cfg.setNamingStrategy( new CENamingStrategy() );
SessionFactory sessionFactory =
cfg.configure().buildSessionFactory();
This will allow us to have multiple SessionFactory instances based on the same
mapping documents, each using a different NamingStrategy. This is extremely
useful in a multiclient installation where unique table names (but the same data
model) are required for each client.
However, a better way to handle this kind of requirement is to use the concept
of an SQL schema (a kind of namespace).
SQL schemas
You can specify a default schema using the hibernate.default_schema configuration
option. Alternatively, you can specify a schema in the mapping document. A
schema may be specified for a particular class or collection mapping:
name="org.hibernate.auction.model.Category"
table="CATEGORY"
schema="AUCTION">
...
It can even be declared for the whole document:

<hibernate-mapping
    default-schema="AUCTION">
    ...
</hibernate-mapping>
This isn't the only thing the <hibernate-mapping> root element is useful for.
Declaring class names
All the persistent classes of the CaveatEmptor application are declared in the Java
package org.hibernate.auction.model. It would become tedious to specify this
package name every time we named a class in our mapping documents.
Let's reconsider our mapping for the Category class (the file Category.hbm.xml):
PUBLIC "-//Hibernate/Hibernate Mapping DTD//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
name="org.hibernate.auction.model.Category"
table="CATEGORY">
...
We don't want to repeat the full package name whenever this or any other class is
named in an association, subclass, or component mapping. So, instead, we'll specify
a package:
PUBLIC "-//Hibernate/Hibernate Mapping DTD//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">
package="org.hibernate.auction.model">
name="Category"
table="CATEGORY">
...
Now all unqualified class names that appear in this mapping document will be
prefixed with the declared package name. We assume this setting in all mapping
examples in this book.
If writing XML files by hand (using the DTD for auto-completion, of course) still
seems like too much work, attribute-oriented programming might be a good choice.
Hibernate mapping files can be automatically generated from attributes directly
embedded in the Java source code.
3.3.3 Attribute-oriented programming
The innovative XDoclet project has brought the notion of attribute-oriented programming
to Java. Until JDK 1.5, the Java language had no support for annotations;
so XDoclet leverages the Javadoc tag format (@attribute) to specify class-,
field-, or method-level metadata attributes. (There is a book about XDoclet from
Manning Publications: XDoclet in Action [Walls/Richards, 2004].)
XDoclet is implemented as an Ant task that generates code or XML metadata as
part of the build process. Creating the Hibernate XML mapping document with
XDoclet is straightforward; instead of writing it by hand, we mark up the Java
source code of our persistent class with custom Javadoc tags, as shown in listing 3.6.
Listing 3.6 Using XDoclet tags to mark up Java properties with mapping metadata

/**
 * The Category class of the CaveatEmptor auction site domain model.
 *
 * @hibernate.class
 *    table="CATEGORY"
 */
public class Category {
    ...
    /**
     * @hibernate.id
     *    generator-class="native"
     *    column="CATEGORY_ID"
     */
    public Long getId() {
        return id;
    }
    ...
    /**
     * @hibernate.property
     */
    public String getName() {
        return name;
    }
    ...
}
With the annotated class in place and an Ant task ready, we can automatically generate
the same XML document shown in the previous section (listing 3.4).
The downside to XDoclet is the requirement for another build step. Most large
Java projects are using Ant already, so this is usually a non-issue. Arguably, XDoclet
mappings are less configurable at deployment time. However, nothing is stopping
you from hand-editing the generated XML before deployment, so this probably
isn't a significant objection. Finally, support for XDoclet tag validation may not be
available in your development environment. However, JetBrains IntelliJ IDEA and
Eclipse both support at least auto-completion of tag names. (We look at the use of
XDoclet with Hibernate in chapter 9, section 9.5, "XDoclet.")
NOTE XDoclet isn't a standard approach to attribute-oriented metadata. A new
Java specification, JSR 175, defines annotations as extensions to the Java
language. JSR 175 is already implemented in JDK 1.5, so projects like
XDoclet and Hibernate will probably provide support for JSR 175 annotations
in the near future.
Both of the approaches we have described so far, XML and XDoclet attributes,
assume that all mapping information is known at deployment time. Suppose that
some information isn't known before the application starts. Can you programmatically
manipulate the mapping metadata at runtime?
3.3.4 Manipulating metadata at runtime
It's sometimes useful for an application to browse, manipulate, or build new mappings
at runtime. XML APIs like DOM, dom4j, and JDOM allow direct runtime
manipulation of XML documents. So, you could create or manipulate an XML
document at runtime, before feeding it to the Configuration object.
However, Hibernate also exposes a configuration-time metamodel. The metamodel
contains all the information declared in your XML mapping documents.
Direct programmatic manipulation of this metamodel is sometimes useful, especially
for applications that allow for extension by user-written code.
For example, the following code adds a new property, motto, to the User class
mapping:
// Get the existing mapping for User from Configuration
PersistentClass userMapping = cfg.getClassMapping(User.class);
// Define a new column for the USER table
Column column = new Column();
column.setType(Hibernate.STRING);
column.setName("MOTTO");
column.setNullable(false);
column.setUnique(true);
userMapping.getTable().addColumn(column);
// Wrap the column in a Value
SimpleValue value = new SimpleValue();
value.setTable( userMapping.getTable() );
value.addColumn(column);
value.setType(Hibernate.STRING);
// Define a new property of the User class
Property prop = new Property();
prop.setValue(value);
prop.setName("motto");
userMapping.addProperty(prop);
// Build a new session factory, using the new mapping
SessionFactory sf = cfg.buildSessionFactory();
A PersistentClass object represents the metamodel for a single persistent class;
we retrieve it from the Configuration. Column, SimpleValue, and Property are all
classes of the Hibernate metamodel and are available in the package
net.sf.hibernate.mapping. Keep in mind that adding a property to an existing
persistent class mapping as shown here is easy, but programmatically creating a
new mapping for a previously unmapped class is quite a bit more involved.
Once a SessionFactory is created, its mappings are immutable. In fact, the SessionFactory
uses a different metamodel internally than the one used at configuration
time. There is no way to get back to the original Configuration from the
SessionFactory or Session. However, the application may read the SessionFactory's
metamodel by calling getClassMetadata() or getCollectionMetadata().
For example:
Category category = ...;
ClassMetadata meta = sessionFactory.getClassMetadata(Category.class);
String[] metaPropertyNames = meta.getPropertyNames();
Object[] propertyValues = meta.getPropertyValues(category);
This code snippet retrieves the names of persistent properties of the Category
class and the values of those properties for a particular instance. This helps you
write generic code. For example, you might use this feature to label UI components
or improve log output.
Now let's turn to a special mapping element you've seen in most of our previous
examples: the identifier property mapping. We'll begin by discussing the notion of
object identity.
3.4 Understanding object identity
It's vital to understand the difference between object identity and object equality
before we discuss terms like database identity and how Hibernate manages identity.
We need these concepts if we want to finish mapping our CaveatEmptor persistent
classes and their associations with Hibernate.
3.4.1 Identity versus equality
Java developers understand the difference between Java object identity and equality.
Object identity, ==, is a notion defined by the Java virtual machine. Two object references
are identical if they point to the same memory location.
On the other hand, object equality is a notion defined by classes that implement
the equals() method, sometimes also referred to as equivalence. Equivalence means
that two different (non-identical) objects have the same value. Two different
instances of String are equal if they represent the same sequence of characters,
even though they each have their own location in the memory space of the virtual
machine. (We admit that this is not entirely true for Strings, but you get the idea.)
Persistence complicates this picture. With object/relational persistence, a persistent
object is an in-memory representation of a particular row of a database
table. So, along with Java identity (memory location) and object equality, we pick
up database identity (location in the persistent data store). We now have three methods
for identifying objects:
¦ Object identity: Objects are identical if they occupy the same memory location
in the JVM. This can be checked by using the == operator.
¦ Object equality: Objects are equal if they have the same value, as defined by the
equals(Object o) method. Classes that don't explicitly override this method
inherit the implementation defined by java.lang.Object, which compares
object identity.
¦ Database identity: Objects stored in a relational database are identical if they
represent the same row or, equivalently, share the same table and primary key
value.
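A tiny illustration of the first two notions (the string values here are illustrative):

String a = new String("Hibernate");
String b = new String("Hibernate");
// a == b       -> false: two distinct objects in memory
// a.equals(b)  -> true:  the same character sequence, hence equal by value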
You need to understand how database identity relates to object identity in Hibernate.
3.4.2 Database identity with Hibernate
Hibernate exposes database identity to the application in two ways:
¦ The value of the identifier property of a persistent instance
¦ The value returned by Session.getIdentifier(Object o)
The identifier property is special: Its value is the primary key value of the database
row represented by the persistent instance. We don't usually show the identifier
property in our domain model; it's a persistence-related concern, not part of our
business problem. In our examples, the identifier property is always named id. So
if myCategory is an instance of Category, calling myCategory.getId() returns the
primary key value of the row represented by myCategory in the database.
Should you make the accessor methods for the identifier property private scope
or public? Well, database identifiers are often used by the application as a convenient
handle to a particular instance, even outside the persistence layer. For example,
web applications often display the results of a search screen to the user as a list
of summary information. When the user selects a particular element, the application
might need to retrieve the selected object. It's common to use a lookup by
identifier for this purpose; you've probably already used identifiers this way, even
in applications using direct JDBC. It's therefore usually appropriate to fully expose
the database identity with a public identifier property accessor.
On the other hand, we usually declare the setId() method private and let
Hibernate generate and set the identifier value. The exceptions to this rule are
classes with natural keys, where the value of the identifier is assigned by the application
before the object is made persistent, instead of being generated by Hibernate.
(We discuss natural keys in the next section.) Hibernate doesn't allow you to
change the identifier value of a persistent instance after it's first assigned.
Remember, part of the definition of a primary key is that its value should never
change. Let's implement an identifier property for the Category class:
public class Category {
private Long id;
...
public Long getId() {
return this.id;
}
private void setId(Long id) {
this.id = id;
}
...
}
The property type depends on the primary key type of the CATEGORY table and the
Hibernate mapping type. This information is determined by the <id> element in
the mapping document:

<id
    name="id"
    column="CATEGORY_ID"
    type="long">
    <generator class="native"/>
</id>
The identifier property is mapped to the primary key column CATEGORY_ID of the
table CATEGORY. The Hibernate type for this property is long, which maps to a BIGINT
column type in most databases and which has also been chosen to match the
type of the identity value produced by the native identifier generator. (We discuss
identifier generation strategies in the next section.) So, in addition to operations
for testing Java object identity (a == b) and object equality ( a.equals(b) ), you
may now use a.getId().equals( b.getId() ) to test database identity.
An alternative approach to handling database identity is to not implement any
identifier property, and let Hibernate manage database identity internally. In this
case, you omit the name attribute in the mapping declaration:
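<!-- a representative <id> mapping without a name attribute -->
<id column="CATEGORY_ID" type="long">
    <generator class="native"/>
</id>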
Hibernate will now manage the identifier values internally. You may obtain the
identifier value of a persistent instance as follows:
Long catId = (Long) session.getIdentifier(category);
This technique has a serious drawback: You can no longer use Hibernate to
manipulate detached objects effectively (see chapter 4, section 4.1.6, "Outside the
identity scope"). So, you should always use identifier properties in Hibernate. (If
you don't like them being visible to the rest of your application, make the accessor
methods private.)
Using database identifiers in Hibernate is easy and straightforward. Choosing a
good primary key (and key generation strategy) might be more difficult. We discuss
these issues next.
3.4.3 Choosing primary keys
You have to tell Hibernate about your preferred primary key generation strategy.
But first, let's define primary key.
A candidate key is a column or set of columns that uniquely identifies a specific
row of the table. A candidate key must satisfy the following properties:
¦ The value or values are never null.
¦ Each row has a unique value or values.
¦ The value or values of a particular row never change.
For a given table, several columns or combinations of columns might satisfy these
properties. If a table has only one identifying attribute, it is by definition the primary
key. If there are multiple candidate keys, you need to choose between them
(candidate keys not chosen as the primary key should be declared as unique keys
in the database). If there are no unique columns or unique combinations of columns,
and hence no candidate keys, then the table is by definition not a relation
as defined by the relational model (it permits duplicate rows), and you should
rethink your data model.
Many legacy SQL data models use natural primary keys. A natural key is a key with
business meaning: an attribute or combination of attributes that is unique by virtue
of its business semantics. Examples of natural keys might be a U.S. Social Security
Number or Australian Tax File Number. Distinguishing natural keys is simple: If a
candidate key attribute has meaning outside the database context, it's a natural
key, whether or not it's automatically generated.
Experience has shown that natural keys almost always cause problems in the
long run. A good primary key must be unique, constant, and required (never null
or unknown). Very few entity attributes satisfy these requirements, and some that
do arent efficiently indexable by SQL databases. In addition, you should make
absolutely certain that a candidate key definition could never change throughout
the lifetime of the database before promoting it to a primary key. Changing the
definition of a primary key and all foreign keys that refer to it is a frustrating task.
For these reasons, we strongly recommend that new applications use synthetic
identifiers (also called surrogate keys). Surrogate keys have no business meaning;
they are unique values generated by the database or application. There are a number
of well-known approaches to surrogate key generation.
Hibernate has several built-in identifier generation strategies. We list the most
useful options in table 3.1.
You aren't limited to these built-in strategies; you may create your own identifier
generator by implementing Hibernate's IdentifierGenerator interface. It's even
possible to mix identifier generators for persistent classes in a single domain model,
but for non-legacy data we recommend using the same generator for all classes.
Table 3.1 Hibernate's built-in identifier generator modules
Generator name Description
native The native identity generator picks other identity generators like identity,
sequence, or hilo depending on the capabilities of the underlying database.
identity This generator supports identity columns in DB2, MySQL, MS SQL Server, Sybase,
HSQLDB, Informix, and HypersonicSQL. The returned identifier is of type long,
short, or int.
sequence A sequence in DB2, PostgreSQL, Oracle, SAP DB, McKoi, Firebird, or a generator in
InterBase is used. The returned identifier is of type long, short, or int.
increment At Hibernate startup, this generator reads the maximum primary key column value
of the table and increments the value by one each time a new row is inserted. The
generated identifier is of type long, short, or int. This generator is especially
efficient if the single-server Hibernate application has exclusive access to the
database but shouldn't be used in any other scenario.
hilo A high/low algorithm is an efficient way to generate identifiers of type long,
short, or int, given a table and column (by default hibernate_unique_key
and next_hi, respectively) as a source of hi values. The high/low algorithm generates
identifiers that are unique only for a particular database. See [Ambler
2002] for more information about the high/low approach to unique identifiers.
uuid.hex This generator uses a 128-bit UUID (an algorithm that generates identifiers of type
string, unique within a network). The IP address is used in combination with a
unique timestamp. The UUID is encoded as a string of hexadecimal digits of length
32. This generation strategy isn't popular, since CHAR primary keys consume more
database space than numeric keys and are marginally slower.
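A generator is selected inside the <id> element of a class mapping; for example, a
sketch using the hilo strategy with the default table and column names from table 3.1:

<id name="id" column="CATEGORY_ID" type="long">
    <generator class="hilo">
        <param name="table">hibernate_unique_key</param>
        <param name="column">next_hi</param>
    </generator>
</id>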
The special assigned identifier generator strategy is most useful for entities with
natural primary keys. This strategy lets the application assign identifier values by
setting the identifier property before making the object persistent by calling
save(). This strategy has some serious disadvantages when you're working with
detached objects and transitive persistence (both of these concepts are discussed
in the next chapter). Don't use assigned identifiers if you can avoid them; it's
much easier to use a surrogate primary key generated by one of the strategies listed
in table 3.1.
For legacy data, the picture is more complicated. In this case, we're often stuck
with natural keys and especially composite keys (natural keys composed of multiple
table columns). Because composite identifiers can be more difficult to work with,
we only discuss them in the context of chapter 8, section 8.3.1, "Legacy schemas
and composite keys."
The next step is to add identifier properties to the classes of the CaveatEmptor
application. Do all persistent classes have their own database identity? To answer
this question, we must explore the distinction between entities and value types in
Hibernate. These concepts are required for fine-grained object modeling.
3.5 Fine-grained object models
A major objective of the Hibernate project is support for fine-grained object models,
which we isolated as the most important requirement for a rich domain
model. It's one reason we've chosen POJOs.
In crude terms, fine-grained means more classes than tables. For example, a
user might have both a billing address and a home address. In the database, we
might have a single USER table with the columns BILLING_STREET, BILLING_CITY,
and BILLING_ZIPCODE along with HOME_STREET, HOME_CITY, and HOME_ZIPCODE.
There are good reasons to use this somewhat denormalized relational model (performance,
for one).
In our object model, we could use the same approach, representing the two
addresses as six string-valued properties of the User class. But we would much
rather model this using an Address class, where User has the billingAddress and
homeAddress properties.
This object model achieves improved cohesion and greater code reuse and is
more understandable. In the past, many ORM solutions haven't provided good support
for this kind of mapping.
Hibernate emphasizes the usefulness of fine-grained classes for implementing
type-safety and behavior. For example, many people would model an email address
as a string-valued property of User. We suggest that a more sophisticated approach
is to define an actual EmailAddress class that could add higher-level semantics and
behavior. For example, it might provide a sendEmail() method.
3.5.1 Entity and value types
This leads us to a distinction of central importance in ORM. In Java, all classes are
of equal standing: All objects have their own identity and lifecycle, and all class
instances are passed by reference. Only primitive types are passed by value.
We're advocating a design in which there are more persistent classes than tables.
One row represents multiple objects. Because database identity is implemented by
primary key value, some persistent objects won't have their own identity. In effect,
the persistence mechanism implements pass-by-value semantics for some classes.
One of the objects represented in the row has its own identity, and others depend
on that.
Hibernate makes the following essential distinction:
¦ An object of entity type has its own database identity (primary key value). An
object reference to an entity is persisted as a reference in the database (a
foreign key value). An entity has its own lifecycle; it may exist independently
of any other entity.
¦ An object of value type has no database identity; it belongs to an entity, and
its persistent state is embedded in the table row of the owning entity (except
in the case of collections, which are also considered value types, as you'll see
in chapter 6). Value types don't have identifiers or identifier properties.
The lifespan of a value-type instance is bounded by the lifespan of the owning
entity.
The most obvious value types are simple objects like Strings and Integers. Hibernate
also lets you treat a user-defined class as a value type, as you'll see next. (We
also come back to this important concept in chapter 6, section 6.1, "Understanding
the Hibernate type system.")
3.5.2 Using components
So far, the classes of our object model have all been entity classes with their own
lifecycle and identity. The User class, however, has a special kind of association
with the Address class, as shown in figure 3.5.
In object modeling terms, this association is a kind of aggregation, a "part of"
relationship. Aggregation is a strong form of association: It has additional semantics
with regard to the lifecycle of objects. In our case, we have an even stronger
form, composition, where the lifecycle of the part is dependent on the lifecycle of
the whole.
Object modeling experts and UML designers will claim that there is no difference
between this composition and other weaker styles of association when it
comes to the Java implementation. But in the context of ORM, there is a big difference:
a composed class is often a candidate value type.
We now map Address as a value type and User as an entity. Does this affect the
implementation of our POJO classes?
Java itself has no concept of composition: a class or attribute can't be marked
as a component or composition. The only difference is the object identifier: A component
has no identity, hence the persistent component class requires no identifier
property or identifier mapping. The composition between User and Address is
a metadata-level notion; we only have to tell Hibernate that the Address is a value
type in the mapping document.
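For illustration, here is a minimal sketch of the Address class as a POJO; note the absence of an identifier property. The property names follow figure 3.5, and the no-argument constructor and accessor conventions follow the POJO guidelines of section 3.2; everything else is an assumption:

// Address is a candidate value type: it has no identifier property,
// because its persistent state will be embedded in the USER table.
public class Address {

    private String street;
    private String city;
    private String zipCode;

    Address() {} // no-argument constructor for Hibernate

    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    public String getZipCode() { return zipCode; }
    public void setZipCode(String zipCode) { this.zipCode = zipCode; }
}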
Hibernate uses the term component for a user-defined class that is persisted to
the same table as the owning entity, as shown in listing 3.7. (The use of the word
component here has nothing to do with the architecture-level concept, as in "software
component.")
name="User"
table="USER">
name="id"
column="USER_ID"
type="long">
Listing 3.7 Mapping the User class with a component Address
Address
street : String
zipCode : String
city : String
User
firstname : String
lastname : String
username : String
password : String
email : String
ranking : int
created : Date
billing
home
Figure 3.5
Relationships between User and
Address using composition
Licensed to Lathika
Fine-grained object models 95
name="username"
column="USERNAME"
type="string"/>
name="homeAddress"
class="Address">
type="string"
column="HOME_STREET"
notnull="true"/>
type="string"
column="HOME_CITY"
not-null="true"/>
type="short"
column="HOME_ZIPCODE"
not-null="true"/>
name="billingAddress"
class="Address">
type="string"
column="BILLING_STREET"
notnull="true"/>
type="string"
column="BILLING_CITY"
not-null="true"/>
type="short"
column="BILLING_ZIPCODE"
not-null="true"/>
...
We declare the persistent attributes of Address inside the element.
The property of the User class is named homeAddress.
We reuse the same component class to map another property of this type to the
same table.
Declare persistent
attributes B
Reuse
component class C
B
C
Figure 3.6 shows how the attributes of the
Address class are persisted to the same table as
the User entity.
Notice that in this example, we have modeled
the composition association as unidirectional: We
can't navigate from Address to User. Hibernate
supports both unidirectional and bidirectional
compositions; however, unidirectional composition
is far more common. Here's an example of a
bidirectional mapping:
name="homeAddress"
class="Address">
The <parent> element maps a property of type User to the owning entity; in this
example, the property is named user. We can then call Address.getUser() to navigate
in the other direction.
A Hibernate component may own other components and even associations to
other entities. This flexibility is the foundation of Hibernate's support for fine-grained
object models. (We'll discuss various component mappings in chapter 6.)
However, there are two important limitations to classes mapped as components:
¦ Shared references aren't possible. The component Address doesn't have its
own database identity (primary key), and so a particular Address object can't
be referred to by any object other than the containing instance of User.
¦ There is no elegant way to represent a null reference to an Address. In lieu
of an elegant approach, Hibernate represents a null component as null values
in all mapped columns of the component. This means that if you store a
component object with all null property values, Hibernate will return a null
component when the owning entity object is retrieved from the database, as
sketched in the fragment after this list.
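A brief sketch of this null-component behavior, assuming the User/Address mapping of listing 3.7 and two separate Sessions (the variable names and identifier are illustrative):

// First unit of work: store a component with only null properties
User user = (User) session1.load(User.class, userId);
user.setHomeAddress( new Address() );   // street, city, zipCode all null
session1.flush();                       // all HOME_* columns are set to null

// Later unit of work: Hibernate returns a null component,
// not an Address instance with null properties
User sameUser = (User) session2.load(User.class, userId);
if ( sameUser.getHomeAddress() == null ) {
    // this branch is taken
}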
Support for fine-grained classes isn't the only ingredient of a rich domain model.
Class inheritance and polymorphism are defining features of object-oriented
models.
Figure 3.6 Table attributes of User with Address component
3.6 Mapping class inheritance
A simple strategy for mapping classes to database tables might be "one table for
every class." This approach sounds simple, and it works well until you encounter
inheritance.
Inheritance is the most visible feature of the structural mismatch between the
object-oriented and relational worlds. Object-oriented systems model both "is a"
and "has a" relationships. SQL-based models provide only "has a" relationships
between entities.
There are three different approaches to representing an inheritance hierarchy.
These were catalogued by Scott Ambler [Ambler 2002] in his widely read paper
"Mapping Objects to Relational Databases":
¦ Table per concrete class: Discard polymorphism and inheritance relationships
completely from the relational model.
¦ Table per class hierarchy: Enable polymorphism by denormalizing the relational
model and using a type discriminator column to hold type information.
¦ Table per subclass: Represent "is a" (inheritance) relationships as "has a"
(foreign key) relationships.
This section takes a top-down approach; it assumes that we're starting with a
domain model and trying to derive a new SQL schema. However, the mapping
strategies described are just as relevant if we're working bottom-up, starting with
existing database tables.
3.6.1 Table per concrete class
Suppose we stick with the simplest approach: We could use exactly one table for
each (non-abstract) class. All properties of a class, including inherited properties,
could be mapped to columns of this table, as shown in figure 3.7.
The main problem with this approach is that it doesn't support polymorphic
associations very well. In the database, associations are usually represented as foreign
key relationships. In figure 3.7, if the subclasses are all mapped to different
tables, a polymorphic association to their superclass (abstract BillingDetails in
this example) can't be represented as a simple foreign key relationship. This would
be problematic in our domain model, because BillingDetails is associated with
User; hence both tables would need a foreign key reference to the USER table.
Polymorphic queries (queries that return objects of all classes that match the interface
of the queried class) are also problematic. A query against the superclass must
be executed as several SQL SELECTs, one for each concrete subclass. We might be
able to use an SQL UNION to improve performance by avoiding multiple round trips
to the database. However, unions are somewhat nonportable and otherwise difficult
to work with. Hibernate doesn't support the use of unions at the time of writing,
and will always use multiple SQL queries. For a query against the
BillingDetails class (for example, restricting to a certain date of creation), Hibernate
would use the following SQL:
select CREDIT_CARD_ID, OWNER, NUMBER, CREATED, TYPE, ...
from CREDIT_CARD
where CREATED = ?
select BANK_ACCOUNT_ID, OWNER, NUMBER, CREATED, BANK_NAME, ...
from BANK_ACCOUNT
where CREATED = ?
Notice that a separate query is needed for each concrete subclass.
On the other hand, queries against the concrete classes are trivial and perform
well:
select CREDIT_CARD_ID, TYPE, EXP_MONTH, EXP_YEAR
from CREDIT_CARD where CREATED = ?
(Note that here, and in other places in this book, we show SQL that is conceptually
identical to the SQL executed by Hibernate. The actual SQL might look superficially
different.)
Figure 3.7 Table per concrete class mapping

A further conceptual problem with this mapping strategy is that several different
columns of different tables share the same semantics. This makes schema evolution
more complex. For example, a change to a superclass property type results in
changes to multiple columns. It also makes it much more difficult to implement
database integrity constraints that apply to all subclasses.
This mapping strategy doesn't require any special Hibernate mapping declaration:
Simply create a new <class> declaration for each concrete class, specifying a
different table attribute for each. We recommend this approach (only) for the top
level of your class hierarchy, where polymorphism isn't usually required.
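As a sketch, the two independent mappings for our example hierarchy might look as follows; the identifier and generator details are assumptions, and the property lists are abbreviated:

<class name="CreditCard" table="CREDIT_CARD">
    <id name="id" column="CREDIT_CARD_ID" type="long">
        <generator class="native"/>
    </id>
    <!-- Properties inherited from BillingDetails are mapped
         to this table, alongside the subclass properties. -->
    <property name="owner" column="OWNER" type="string"/>
    ...
</class>

<class name="BankAccount" table="BANK_ACCOUNT">
    <id name="id" column="BANK_ACCOUNT_ID" type="long">
        <generator class="native"/>
    </id>
    <property name="owner" column="OWNER" type="string"/>
    ...
</class>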
3.6.2 Table per class hierarchy
Alternatively, an entire class hierarchy could be mapped to a single table. This
table would include columns for all properties of all classes in the hierarchy. The
concrete subclass represented by a particular row is identified by the value of a
type discriminator column. This approach is shown in figure 3.8.
This mapping strategy is a winner in terms of both performance and simplicity.
It's the best-performing way to represent polymorphism (both polymorphic and
nonpolymorphic queries perform well), and it's even easy to implement by hand.
Ad hoc reporting is possible without complex joins or unions, and schema evolution
is straightforward.
There is one major problem: Columns for properties declared by subclasses
must be declared to be nullable. If your subclasses each define several non-nullable
properties, the loss of NOT NULL constraints could be a serious problem from the
point of view of data integrity.
In Hibernate, we use the <subclass> element to indicate a table-per-class hierarchy
mapping, as in listing 3.8.

Figure 3.8 Table per class hierarchy mapping: the single BILLING_DETAILS table
holds the primary key (BILLING_DETAILS_ID), the discriminator column
(BILLING_DETAILS_TYPE), the superclass columns (OWNER, NUMBER, CREATED),
and nullable columns for all subclass properties (CREDIT_CARD_TYPE,
CREDIT_CARD_EXP_MONTH, CREDIT_CARD_EXP_YEAR, BANK_ACCOUNT_BANK_NAME,
BANK_ACCOUNT_BANK_SWIFT).
name="BillingDetails"
table="BILLING_DETAILS" discriminator-value="BD">
name="id"
column="BILLING_DETAILS_ID"
type="long">
column="BILLING_DETAILS_TYPE"
type="string"/>
name="name"
column="OWNER"
type="string"/>
...
name="CreditCard"
discriminator-value="CC">
name="type"
column="CREDIT_CARD_TYPE"/>
...
...
The root class BusinessDetails of the inheritance hierarchy is mapped to the
table BUSINESS_DETAILS.
We have to use a special column to distinguish between persistent classes: the discriminator.
This isnt a property of the persistent class; its used internally by Hibernate.
The column name is BILLING_DETAILS_TYPE, and the values will be strings
in this case, "CC" or "BA". Hibernate will automatically set and retrieve the discriminator
values.
Properties of the superclass are mapped as always, with a element.
Listing 3.8 Hibernate mapping
Root class, mapped to table B
Discriminator column C
Property mappings D
CreditCard subclass E
B
C
D
Every subclass has its own <subclass> element. Properties of a subclass are
mapped to columns in the BILLING_DETAILS table. Remember that NOT NULL constraints
aren't allowed, because a CreditCard instance won't have a bankSwift
property, and the BANK_ACCOUNT_BANK_SWIFT field must be null for that row.
The <subclass> element can in turn contain other <subclass> elements, until
the whole hierarchy is mapped to the table. A <subclass> element can't contain a
<joined-subclass> element. (The <joined-subclass> element is used in the specification
of the third mapping option: one table per subclass. This option is discussed
in the next section.) The mapping strategy can't be switched anymore at
this point.
Hibernate would use the following SQL when querying the BillingDetails class:
select BILLING_DETAILS_ID, BILLING_DETAILS_TYPE,
       OWNER, ..., CREDIT_CARD_TYPE, ...
from BILLING_DETAILS
where CREATED = ?
To query the CreditCard subclass, Hibernate would use a condition on the discriminator:
select BILLING_DETAILS_ID,
CREDIT_CARD_TYPE, CREDIT_CARD_EXP_MONTH, ...
from BILLING_DETAILS
where BILLING_DETAILS_TYPE='CC' and CREATED = ?
How could it be any simpler than that?
3.6.3 Table per subclass
The third option is to represent inheritance relationships as relational foreign key
associations. Every subclass that declares persistent properties (including abstract
classes and even interfaces) has its own table.
Unlike the strategy that uses a table per concrete class, the table here contains
columns only for each non-inherited property (each property declared by the subclass
itself), along with a primary key that is also a foreign key of the superclass table.
This approach is shown in figure 3.9.
If an instance of the CreditCard subclass is made persistent, the values of properties
declared by the BillingDetails superclass are persisted to a new row of the
BILLING_DETAILS table. Only the values of properties declared by the subclass are
persisted to the new row of the CREDIT_CARD table. The two rows are linked together
by their shared primary key value. Later, the subclass instance may be retrieved
from the database by joining the subclass table with the superclass table.
The primary advantage of this strategy is that the relational model is completely
normalized. Schema evolution and integrity constraint definition are straightforward.
A polymorphic association to a particular subclass may be represented as a
foreign key pointing to the table of that subclass.
In Hibernate, we use the <joined-subclass> element to indicate a table-per-subclass
mapping (see listing 3.9).

Figure 3.9 Table per subclass mapping: BillingDetails (owner, number, created)
and its subclasses CreditCard (type, expMonth, expYear) and BankAccount
(bankName, bankSwift) are each mapped to their own table; CREDIT_CARD_ID and
BANK_ACCOUNT_ID are both primary keys and foreign keys referencing
BILLING_DETAILS_ID.

Listing 3.9 Hibernate <joined-subclass> mapping

<class
    name="BillingDetails"
    table="BILLING_DETAILS">

    <id
        name="id"
        column="BILLING_DETAILS_ID"
        type="long">
        <generator class="native"/>
    </id>

    <property
        name="owner"
        column="OWNER"
        type="string"/>
    ...
    <joined-subclass
        name="CreditCard"
        table="CREDIT_CARD">

        <key column="CREDIT_CARD_ID"/>

        <property
            name="type"
            column="TYPE"/>
        ...
    </joined-subclass>
    ...
</class>
Again, the root class BillingDetails is mapped to the table BILLING_DETAILS.
Note that no discriminator is required with this strategy.
The new <joined-subclass> element is used to map a subclass to a new table (in
this example, CREDIT_CARD). All properties declared in the joined subclass will be
mapped to this table. Note that we intentionally left out the mapping example for
BankAccount, which is similar to CreditCard.
A primary key is required for the CREDIT_CARD table, declared with the <key>
element; it will also have a foreign key constraint to the primary key of the
BILLING_DETAILS table. A CreditCard object lookup will require a join of both tables.
A <joined-subclass> element may contain other <joined-subclass> elements
but not a <subclass> element. Hibernate doesn't support mixing of these two
mapping strategies.
Hibernate will use an outer join when querying the BillingDetails class:
select BD.BILLING_DETAILS_ID, BD.OWNER, BD.NUMBER, BD.CREATED,
       CC.TYPE, ..., BA.BANK_SWIFT, ...,
       case
           when CC.CREDIT_CARD_ID is not null then 1
           when BA.BANK_ACCOUNT_ID is not null then 2
           when BD.BILLING_DETAILS_ID is not null then 0
       end as TYPE
from BILLING_DETAILS BD
left join CREDIT_CARD CC on
BD.BILLING_DETAILS_ID = CC.CREDIT_CARD_ID
left join BANK_ACCOUNT BA on
BD.BILLING_DETAILS_ID = BA.BANK_ACCOUNT_ID
where BD.CREATED = ?
The SQL case statement uses the existence (or nonexistence) of rows in the subclass
tables CREDIT_CARD and BANK_ACCOUNT to determine the concrete subclass for
a particular row of the BILLING_DETAILS table.
To narrow the query to the subclass, Hibernate uses an inner join instead:
select BD.BILLING_DETAILS_ID, BD.OWNER, BD.CREATED, CC.TYPE, ...
from CREDIT_CARD CC
inner join BILLING_DETAILS BD on
BD.BILLING_DETAILS_ID = CC.CREDIT_CARD_ID
where CC.CREATED = ?
As you can see, this mapping strategy is more difficult to implement by hand;
even ad hoc reporting will be more complex. This is an important consideration if
you plan to mix Hibernate code with handwritten SQL/JDBC. (For ad hoc reporting,
database views provide a way to offset the complexity of the table-per-subclass
strategy. A view may be used to transform the table-per-subclass model into the
much simpler table-per-hierarchy model.)
Furthermore, even though this mapping strategy is deceptively simple, our
experience is that performance may be unacceptable for complex class hierarchies.
Queries always require either a join across many tables or many sequential
reads. Our problem should instead be recast as how to choose an appropriate combination
of mapping strategies for our application's class hierarchies. A typical domain
model design has a mix of interfaces and abstract classes.
3.6.4 Choosing a strategy
You can apply all mapping strategies to abstract classes and interfaces. Interfaces
may have no state but may contain accessor method declarations, so they can be
treated like abstract classes. You can map an interface using <class>, <subclass>,
or <joined-subclass>; and you can map any declared or inherited property using
<property>. Hibernate won't try to instantiate an abstract class, however, even if
you query or load it.
Here are some rules of thumb:
¦ If you don't require polymorphic associations or queries, lean toward the
table-per-concrete-class strategy. If you require polymorphic associations
(an association to a superclass, hence to all classes in the hierarchy with
dynamic resolution of the concrete class at runtime) or queries, and subclasses
declare relatively few properties (particularly if the main difference
between subclasses is in their behavior), lean toward the table-per-class-hierarchy
model.
¦ If you require polymorphic associations or queries, and subclasses declare
many properties (subclasses differ mainly by the data they hold), lean
toward the table-per-subclass approach.
By default, choose table-per-class-hierarchy for simple problems. For more complex
cases (or when you're overruled by a data modeler insisting on the importance
of nullability constraints), you should consider the table-per-subclass
strategy. But at that point, ask yourself whether it might be better to remodel
inheritance as delegation in the object model. Complex inheritance is often best
avoided for all sorts of reasons unrelated to persistence or ORM. Hibernate acts as
a buffer between the object and relational models, but that doesn't mean you can
completely ignore persistence concerns when designing your object model.
Note that you may also use <subclass> and <joined-subclass> mapping elements
in a separate mapping file (as a top-level element, instead of <class>). You
then have to declare the class that is extended (for example, <subclass
name="CreditCard" extends="BillingDetails">), and the superclass mapping
must be loaded before the subclass mapping file. This technique allows you to
extend a class hierarchy without modifying the mapping file of the superclass, as
the following sketch shows.
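A minimal sketch of such a standalone mapping file; the file name is hypothetical, and the usual DOCTYPE declaration is omitted:

<!-- CreditCard.hbm.xml: extends a class mapped in another file;
     the BillingDetails mapping must be loaded first. -->
<hibernate-mapping>
    <subclass
        name="CreditCard"
        extends="BillingDetails"
        discriminator-value="CC">
        <property name="type" column="CREDIT_CARD_TYPE"/>
        ...
    </subclass>
</hibernate-mapping>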
You have now seen the intricacies of mapping an entity in isolation. In the next
section, we turn to the problem of mapping associations between entities, which is
another major issue arising from the object/relational paradigm mismatch.
3.7 Introducing associations
Managing the associations between classes and the relationships between tables is
the soul of ORM. Most of the difficult problems involved in implementing an ORM
solution relate to association management.
The Hibernate association model is extremely rich but is not without pitfalls,
especially for new users. In this section, we won't try to cover all the possible
combinations. What we'll do is examine certain cases that are extremely common.
We return to the subject of association mappings in chapter 6, for a more
complete treatment.
But first, there's something we need to explain up front.
3.7.1 Managed associations?
If you've used CMP 2.0/2.1, you're familiar with the concept of a managed association
(or managed relationship). CMP associations are called container-managed
relationships (CMRs) for a reason. Associations in CMP are inherently bidirectional:
A change made to one side of an association is instantly reflected at the
other side. For example, if we call bid.setItem(item), the container automatically
calls item.getBids().add(bid).
Transparent POJO-oriented persistence implementations such as Hibernate do
not implement managed associations. Contrary to CMR, Hibernate associations are
all inherently unidirectional. As far as Hibernate is concerned, the association from
Bid to Item is a different association than the association from Item to Bid.
To some people, this seems strange; to others, it feels completely natural. After
all, associations at the Java language level are always unidirectional, and Hibernate
claims to implement persistence for plain Java objects. We'll merely observe
that this decision was made because Hibernate objects, unlike entity beans, are
not assumed to be always under the control of a container. In Hibernate applications,
the behavior of a non-persistent instance is the same as the behavior of a
persistent instance.
Because associations are so important, we need a very precise language for classifying
them.
3.7.2 Multiplicity
In describing and classifying associations, we'll almost always use the association
multiplicity. Look at figure 3.10.
For us, the multiplicity is just two bits of information:
¦ Can there be more than one Bid for a particular Item?
¦ Can there be more than one Item for a particular Bid?
Figure 3.10 Relationship between Item and Bid: an Item has zero or more
(0..*) Bids; a Bid refers to exactly one (1..1) Item
After glancing at the object model, we conclude that the association from Bid to
Item is a many-to-one association. Recalling that associations are directional, we
would also call the inverse association from Item to Bid a one-to-many association.
(Clearly, there are two more possibilities: many-to-many and one-to-one; we'll get
back to these possibilities in chapter 6.)
In the context of object persistence, we aren't interested in whether "many"
really means "two," "maximum of five," or "unrestricted."
3.7.3 The simplest possible association
The association from Bid to Item is an example of the simplest possible kind of
association in ORM. The object reference returned by getItem() is easily mapped
to a foreign key column in the BID table. First, here's the Java class implementation
of Bid:
public class Bid {
...
private Item item;
public void setItem(Item item) {
this.item = item;
}
public Item getItem() {
return item;
}
...
}
Next, here's the Hibernate mapping for this association:
name="Bid"
table="BID">
...
name="item"
column="ITEM_ID"
class="Item"
not-null="true"/>
This mapping is called a unidirectional many-to-one association. The column ITEM_ID
in the BID table is a foreign key to the primary key of the ITEM table.
We have explicitly specified the class, Item, that the association refers to. This
specification is usually optional, since Hibernate can determine it using
reflection.
We specified the not-null attribute because we can't have a bid without an
item. The not-null attribute doesn't affect the runtime behavior of Hibernate; it
exists mainly to control automatic data definition language (DDL) generation
(see chapter 9).
3.7.4 Making the association bidirectional
So far so good. But we also need to be able to easily fetch all the bids for a particular
item. We need a bidirectional association here, so we have to add scaffolding
code to the Item class:
public class Item {
...
private Set bids = new HashSet();
public void setBids(Set bids) {
this.bids = bids;
}
public Set getBids() {
return bids;
}
public void addBid(Bid bid) {
bid.setItem(this);
bids.add(bid);
}
...
}
You can think of the code in addBid() (a convenience method) as implementing
a managed association in the object model.
A basic mapping for this one-to-many association would look like this:
name="Item"
table="ITEM">
...
The column mapping defined by the <key> element is a foreign key column of the
associated BID table. Notice that we specify the same foreign key column in this
collection mapping that we specified in the mapping for the many-to-one association.
The table structure for this association mapping is shown in figure 3.11.
Now we have two different unidirectional associations mapped to the same foreign
key, which poses a problem. At runtime, there are two different in-memory
representations of the same foreign key value: the item property of Bid and an element
of the bids collection held by an Item. Suppose our application modifies the
association by, for example, adding a bid to an item in this fragment of the
addBid() method:
bid.setItem(item);
bids.add(bid);
This code is fine, but in this situation, Hibernate detects two different changes to
the in-memory persistent instances. From the point of view of the database, just
one value must be updated to reflect these changes: the ITEM_ID column of the
BID table. Hibernate doesn't transparently detect the fact that the two changes refer to the
same database column, since at this point we've done nothing to indicate that this is a bidirectional
association.
We need one more thing in our association mapping to tell Hibernate to treat
this as a bidirectional association: The inverse attribute tells Hibernate that the
collection is a mirror image of the many-to-one association on the other side:
name="Item"
table="ITEM">
...
name="bids"
inverse="true">
ITEM_ID <>
NAME
DESCRIPTION
INITIAL_PRICE
...
BID_ID <>
ITEM_ID <>
AMOUNT
...
<>
ITEM <>
BID
Figure 3.11
Table relationships and keys for a
one-to-many/many-to-one mapping
Without the inverse attribute, Hibernate would try to execute two different SQL
statements, both updating the same foreign key column, when we manipulate the
association between the two instances. By specifying inverse="true", we explicitly
tell Hibernate which end of the association it should synchronize with the database.
In this example, we tell Hibernate that it should propagate changes made
at the Bid end of the association to the database, ignoring changes made only to
the bids collection. Thus if we only call item.getBids().add(bid), no changes
will be made persistent. This is consistent with the behavior in Java without
Hibernate: If an association is bidirectional, you have to create the link on both
sides, not just one.
We now have a working bidirectional many-to-one association (which could also be
called a bidirectional one-to-many association, of course).
One final piece is missing. We explore the notion of transitive persistence in
much greater detail in the next chapter. For now, we'll introduce the concepts of
cascading save and cascading delete, which we need in order to finish our mapping
of this association.
When we instantiate a new Bid and add it to an Item, the bid should become persistent
immediately. We would like to avoid the need to explicitly make a Bid persistent
by calling save() on the Session interface.
We make one final tweak to the mapping document to enable cascading save:
name="Item"
table="ITEM">
...
name="bids"
inverse="true"
cascade="save-update">
The cascade attribute tells Hibernate to make any new Bid instance persistent
(that is, save it in the database) if the Bid is referenced by a persistent Item.
The cascade attribute is directional: It applies to only one end of the association.
We could also specify cascade="save-update" for the many-to-one association
declared in the mapping for Bid, but doing so would make no sense in this case
because Bids are created after Items.
Are we finished? Not quite. We still need to define the lifecycle for both entities
in our association.
3.7.5 A parent/child relationship
With the previous mapping, the association between Bid and Item is fairly loose.
We would use this mapping in a real system if both entities had their own lifecycle
and were created and removed in unrelated business processes. Certain associations
are much stronger than this; some entities are bound together so that their
lifecycles aren't truly independent. In our example, it seems reasonable that deletion
of an item implies deletion of all bids for the item. A particular bid instance
references only one item instance for its entire lifetime. In this case, cascading
both saves and deletions makes sense.
If we enable cascading delete, the association between Item and Bid is called a
parent/child relationship. In a parent/child relationship, the parent entity is responsible
for the lifecycle of its associated child entities. This is the same semantics as a
composition (using Hibernate components), but in this case only entities are
involved; Bid isn't a value type. The advantage of using a parent/child relationship
is that the child may be loaded individually or referenced directly by another entity.
A bid, for example, may be loaded and manipulated without retrieving the owning
item. It may be stored without storing the owning item at the same time. Furthermore,
we reference the same Bid instance in a second property of Item, the single
successfulBid (see figure 3.2, page 63). Objects of value type can't be shared.
To remodel the Item to Bid association as a parent/child relationship, the only
change we need to make is to the cascade attribute:
name="Item"
table="ITEM">
...
name="bids"
inverse="true"
cascade="all-delete-orphan">
We used cascade="all-delete-orphan" to indicate the following:
¦ Any newly instantiated Bid becomes persistent if the Bid is referenced by a
persistent Item (as was also the case with cascade="save-update"). Any persistent
Bid should be deleted if it's referenced by an Item when the item is
deleted.
¦ Any persistent Bid should be deleted if it's removed from the bids collection
of a persistent Item. (Hibernate will assume that it was only referenced
by this item and consider it an orphan.)
We have achieved the following with this mapping: A Bid is removed from
the database if it's removed from the collection of Bids of the Item (or if the
Item itself is removed), as the sketch after this list illustrates.
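For example, a minimal sketch of the orphan-delete behavior in application code, assuming an open Session, the mapping above, and an illustrative identifier value:

Transaction tx = session.beginTransaction();

Item item = (Item) session.load(Item.class, new Long(1234));
Bid bid = (Bid) item.getBids().iterator().next();

// Removing the child from the collection is enough: with
// cascade="all-delete-orphan", Hibernate deletes the orphaned
// BID row when the unit of work is flushed.
item.getBids().remove(bid);

tx.commit();   // flush: executes the SQL DELETE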
The cascading of operations to associated entities is Hibernate's implementation
of transitive persistence. We look more closely at this concept in chapter 4, section
4.3, "Using transitive persistence in Hibernate."
We have covered only a tiny subset of the association options available in Hibernate.
However, you already have enough knowledge to be able to build entire
applications. The remaining options are either rare or are variations of the associations
we have described.
We recommend keeping your association mappings simple, using Hibernate
queries for more complex tasks.
3.8 Summary
In this chapter, we have focused on the structural aspect of the object/relational
paradigm mismatch and have discussed the first four generic ORM problems. We
discussed the programming model for persistent classes and the Hibernate ORM
metadata for fine-grained classes, object identity, inheritance, and associations.
You now understand that persistent classes in a domain model should be free of
cross-cutting concerns such as transactions and security. Even persistence-related
concerns shouldn't leak into the domain model. We no longer entertain the use
of restrictive programming models such as EJB entity beans for our domain model.
Instead, we use transparent persistence, together with the unrestrictive POJO programming
model, which is really a set of best practices for the creation of properly
encapsulated Java types.
Hibernate requires you to provide metadata in XML text format. You use this
metadata to define the mapping strategy for all your persistent classes (and tables).
We created mappings for classes and properties and looked at class association
mappings. You saw how to implement the three well-known inheritance-mapping
strategies in Hibernate.
You also learned about the important differences between entities and value-typed
objects in Hibernate. Entities have their own identity and lifecycle, whereas
value-typed objects are dependent on an entity and are persisted with by-value
semantics. Hibernate allows fine-grained object models with fewer tables than
persistent classes.
Finally, we have implemented and mapped our first parent/child association
between persistent classes, using database foreign key fields and the cascading of
operations.
In the next chapter, we investigate the dynamic aspects of the object/relational
mismatch, including a much deeper study of the cascaded operations we introduced
and the lifecycle of persistent objects.
4
Working with persistent objects
This chapter covers
¦ The lifecycle of objects in a
Hibernate application
¦ Using the session persistence manager
¦ Transitive persistence
¦ Efficient fetching strategies
You now have an understanding of how Hibernate and ORM solve the static aspects
of the object/relational mismatch. With what you know so far, it's possible to solve
the structural mismatch problem, but an efficient solution to the problem requires
something more. We must investigate strategies for runtime data access, since
they're crucial to the performance of our applications. You need to learn how to
efficiently store and load objects.
This chapter covers the behavioral aspect of the object/relational mismatch,
listed in chapter 1 as the last four O/R mapping problems described in
section 1.4.2. We consider these problems to be at least as important as the structural
problems discussed in chapter 3. In our experience, many developers are
only aware of the structural mismatch and rarely pay attention to the more
dynamic behavioral aspects of the mismatch.
In this chapter, we discuss the lifecycle of objects (how an object becomes persistent,
and how it stops being considered persistent) and the method calls and
other actions that trigger these transitions. The Hibernate persistence manager,
the Session, is responsible for managing object state, so you'll learn how to use this
important API.
Retrieving object graphs efficiently is another central concern, so we introduce
the basic strategies in this chapter. Hibernate provides several ways to specify queries
that return objects without losing much of the power inherent to SQL. Because
network latency caused by remote access to the database can be an important limiting
factor in the overall performance of Java applications, you must learn how to
retrieve a graph of objects with a minimal number of database hits.
Let's start by discussing objects, their lifecycle, and the events that trigger a
change of persistent state. These basics will give you the background you need
when working with your object graph, so you'll know when and how to load and
save your objects. The material might be formal, but a solid understanding of the
persistence lifecycle is essential.
4.1 The persistence lifecycle
Since Hibernate is a transparent persistence mechanism (classes are unaware of
their own persistence capability), it's possible to write application logic that is
unaware of whether the objects it operates on represent persistent state or temporary
state that exists only in memory. The application shouldn't necessarily need to
care that an object is persistent when invoking its methods.
However, in any application with persistent state, the application must interact
with the persistence layer whenever it needs to propagate state held in memory to
the database (or vice versa). To do this, you call Hibernate's persistence manager
and query interfaces. When interacting with the persistence mechanism that way,
it's necessary for the application to concern itself with the state and lifecycle of an
object with respect to persistence. We'll refer to this as the persistence lifecycle.
Different ORM implementations use different terminology and define different
states and state transitions for the persistence lifecycle. Moreover, the object states
used internally might be different from those exposed to the client application.
Hibernate defines only three states, hiding the complexity of its internal implementation
from the client code. In this section, we explain these three states: transient,
persistent, and detached.
Let's look at these states and their transitions in a state chart, shown in
figure 4.1. You can also see the method calls to the persistence manager that trigger
transitions. We discuss this chart in this section; refer to it later whenever you
need an overview.

Figure 4.1 States of an object and transitions in a Hibernate application:
save() and saveOrUpdate() make a transient instance persistent; get(),
load(), find(), and iterate() return persistent instances; delete() makes a
persistent instance transient; evict(), close(), and clear() detach instances
(the latter two affect all instances in a Session); and update(),
saveOrUpdate(), and lock() reattach detached instances.
In its lifecycle, an object can transition from a transient object to a persistent
object to a detached object. Let's take a closer look at each of these states.
4.1.1 Transient objects
In Hibernate, objects instantiated using the new operator aren't immediately persistent.
Their state is transient, which means they aren't associated with any database
table row, and so their state is lost as soon as they're dereferenced (no longer referenced
by any other object) by the application. These objects have a lifespan that
effectively ends at that time, and they become inaccessible and available for garbage
collection.
Hibernate considers all transient instances to be nontransactional; a modification
to the state of a transient instance isn't made in the context of any transaction.
This means Hibernate doesn't provide any rollback functionality for transient
objects. (In fact, Hibernate doesn't roll back any object changes, as you'll see later.)
Objects that are referenced only by other transient instances are, by default, also
transient. For an instance to transition from transient to persistent state requires
either a save() call to the persistence manager or the creation of a reference from
an already persistent instance.
4.1.2 Persistent objects
A persistent instance is any instance with a database identity, as defined in chapter 3,
section 3.4, "Understanding object identity." That means a persistent instance has
a primary key value set as its database identifier.
Persistent instances might be objects instantiated by the application and then
made persistent by calling the save() method of the persistence manager (the
Hibernate Session, discussed in more detail later in this chapter). Persistent
instances are then associated with the persistence manager. They might even be
objects that became persistent when a reference was created from another persistent
object already associated with a persistence manager. Alternatively, a persistent
instance might be an instance retrieved from the database by execution of a query,
by an identifier lookup, or by navigating the object graph starting from another
persistent instance. In other words, persistent instances are always associated with
a Session and are transactional.
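For example, a minimal sketch of the first of these cases, making a transient instance persistent with save(); the SessionFactory is assumed to be configured as in chapter 2, and the User usage is illustrative:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

User user = new User();           // transient: no database identity
user.setUsername("johndoe");

Serializable id = session.save(user);   // user is now persistent and
                                        // has an identifier assigned
tx.commit();
session.close();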
Persistent instances participate in transactions: their state is synchronized
with the database at the end of the transaction. When a transaction commits,
state held in memory is propagated to the database by the execution of SQL
INSERT, UPDATE, and DELETE statements. This procedure might also occur at other
times. For example, Hibernate might synchronize with the database before execution
of a query. This ensures that queries will be aware of changes made earlier
during the transaction.
We call a persistent instance new if it has been allocated a primary key value but
has not yet been inserted into the database. The new persistent instance will
remain new until synchronization occurs.
Of course, you don't update the database row of every persistent object in memory
at the end of the transaction. ORM software must have a strategy for detecting
which persistent objects have been modified by the application in the transaction.
We call this automatic dirty checking (an object with modifications that haven't yet
been propagated to the database is considered dirty). Again, this state isn't visible
to the application. We call this feature transparent transaction-level write-behind, meaning
that Hibernate propagates state changes to the database as late as possible but
hides this detail from the application.
Hibernate can detect exactly which attributes have been modified, so it's possible
to include only the columns that need updating in the SQL UPDATE statement.
This might bring performance gains, particularly with certain databases. However,
it isn't usually a significant difference, and, in theory, it could harm performance
in some environments. So, by default, Hibernate includes all columns in the SQL
UPDATE statement (hence, Hibernate can generate this basic SQL at startup, not at
runtime). If you only want to update modified columns, you can enable dynamic
SQL generation by setting dynamic-update="true" in a class mapping, as shown in
the sketch that follows. (Note that this feature is extremely difficult to implement
in a handcoded persistence layer.)
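Declared in the mapping metadata, this option is an attribute of the class mapping; the Item mapping is used here purely for illustration:

<class
    name="Item"
    table="ITEM"
    dynamic-update="true">   <!-- UPDATE includes only dirty columns -->
    ...
</class>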
We talk about Hibernate's transaction semantics and the synchronization process
(known as flushing) in more detail in the next chapter.
Finally, a persistent instance may be made transient via a delete() call to the persistence
manager API, resulting in deletion of the corresponding row of the database
table.
4.1.3 Detached objects
When a transaction completes, the persistent instances associated with the persistence
manager still exist. (If the transaction was successful, their in-memory state
will have been synchronized with the database.) In ORM implementations with
process-scoped identity (see the following sections), the instances retain their association
to the persistence manager and are still considered persistent.
In the case of Hibernate, however, these instances lose their association with the
persistence manager when you close() the Session. We refer to these objects as
detached, indicating that their state is no longer guaranteed to be synchronized with
database state; they're no longer under the management of Hibernate. However,
they still contain persistent data (which may soon be stale). It's possible (and
common) for the application to retain a reference to a detached object outside of
a transaction (and persistence manager). Hibernate lets you reuse these instances
in a new transaction by reassociating them with a new persistence manager. (After
reassociation, they're considered persistent.) This feature has a deep impact on
how multitiered applications may be designed. The ability to return objects from
one transaction to the presentation layer and later reuse them in a new transaction
is one of Hibernate's main selling points. We discuss this usage in the next chapter
as an implementation technique for long-running application transactions. We also
show you how to avoid the DTO (anti-)pattern by using detached objects in chapter
8, in the section "Rethinking data transfer objects."
Hibernate also provides an explicit detachment operation: the evict() method
of the Session. However, this method is typically used only for cache management
(a performance consideration). It's not normal to perform detachment explicitly.
Rather, all objects retrieved in a transaction become detached when the Session is
closed or when they're serialized (if they're passed remotely, for example). So,
Hibernate doesn't need to provide functionality for controlling detachment of subgraphs.
Instead, the application can control the depth of the fetched subgraph (the
instances that are currently loaded in memory) using the query language or
explicit graph navigation. Then, when the Session is closed, this entire subgraph
(all objects associated with a persistence manager) becomes detached.
Let's look at the different states again, but this time consider the scope of object
identity.
4.1.4 The scope of object identity
As application developers, we identify an object using Java object identity (a==b).
So, if an object changes state, is its Java identity guaranteed to be the same in the
new state? In a layered application, that might not be the case.
In order to explore this topic, it's important to understand the relationship
between Java identity, a==b, and database identity, a.getId().equals( b.getId() ).
Sometimes both are equivalent; sometimes they aren't. We refer to the conditions
under which Java identity is equivalent to database identity as the scope of object identity.
For this scope, there are three common choices:
¦ A primitive persistence layer with no identity scope makes no guarantees that
if a row is accessed twice, the same Java object instance will be returned to
the application. This becomes problematic if the application modifies two
different instances that both represent the same row in a single transaction
(how do you decide which state should be propagated to the database?).
¦ A persistence layer using transaction-scoped identity guarantees that, in the
context of a single transaction, there is only one object instance that represents
a particular database row. This avoids the previous problem and also
allows for some caching to be done at the transaction level.
¦ Process-scoped identity goes one step further and guarantees that there is only
one object instance representing the row in the whole process (JVM).
For a typical web or enterprise application, transaction-scoped identity is preferred.
Process-scoped identity offers some potential advantages in terms of cache
utilization and the programming model for reuse of instances across multiple
transactions; however, in a pervasively multithreaded application, the cost of always
synchronizing shared access to persistent objects in the global identity map is too
high a price to pay. It's simpler, and more scalable, to have each thread work with
a distinct set of persistent instances in each transaction scope.
Speaking loosely, we would say that Hibernate implements transaction-scoped
identity. Actually, the Hibernate identity scope is the Session instance, so identical
objects are guaranteed if the same persistence manager (the Session) is used for
several operations. But a Session isn't the same as a (database) transaction; it's a
much more flexible element. We'll explore the differences and the consequences
of this concept in the next chapter. Let's focus on the persistence lifecycle and
identity scope again.
If you request two objects using the same database identifier value in the
same Session, the result will be two references to the same in-memory object.
The following code example demonstrates this behavior, with several load()
operations in two Sessions:
Session session1 = sessions.openSession();
Transaction tx1 = session1.beginTransaction();
// Load Category with identifier value "1234"
Object a = session1.load(Category.class, new Long(1234) );
Object b = session1.load(Category.class, new Long(1234) );
if ( a==b ) {
System.out.println("a and b are identical.");
}
tx1.commit();
session1.close();
Session session2 = sessions.openSession();
Transaction tx2 = session2.beginTransaction();
Object b2 = session2.load(Category.class, new Long(1234) );
if ( a!=b2 ) {
System.out.println("a and b2 are not identical.");
}
tx2.commit();
session2.close();
Object references a and b not only have the same database identity, they also have
the same Java identity, since they were loaded in the same Session. Once outside
this boundary, however, Hibernate doesn't guarantee Java identity, so a and b2
aren't identical, and the message is printed on the console. Of course, a test for
database identity, a.getId().equals( b2.getId() ), would still return true.
To further complicate our discussion of identity scopes, we need to consider
how the persistence layer handles a reference to an object outside its identity
scope. For example, for a persistence layer with transaction-scoped identity such as
Hibernate, is a reference to a detached object (that is, an instance persisted or
loaded in a previous, completed session) tolerated?
4.1.5 Outside the identity scope
If an object reference leaves the scope of guaranteed identity, we call it a reference to
a detached object. Why is this concept useful?
In web applications, you usually don't maintain a database transaction across a
user interaction. Users take a long time to think about modifications, but for scalability
reasons, you must keep database transactions short and release database
resources as soon as possible. In this environment, it's useful to be able to reuse a
reference to a detached instance. For example, you might want to send an object
retrieved in one unit of work to the presentation tier and later reuse it in a second
unit of work, after it's been modified by the user.
You don't usually wish to reattach the entire object graph in the second unit
of work; for performance (and other) reasons, it's important that reassociation of
detached instances be selective. Hibernate supports selective reassociation of detached
instances. This means the application can efficiently reattach a subgraph of a graph
of detached objects with the current (second) Hibernate Session. Once a
detached object has been reattached to a new Hibernate persistence manager, it
may be considered a persistent instance, and its state will be synchronized with the
database at the end of the transaction (due to Hibernate's automatic dirty checking
of persistent instances).
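A sketch of this pattern across two units of work; update() is the Session method used for reattachment, discussed later in this chapter, and the Item usage and identifier are illustrative:

// First unit of work: retrieve an object, then close the Session
Session session1 = sessionFactory.openSession();
Transaction tx1 = session1.beginTransaction();
Item item = (Item) session1.get(Item.class, new Long(1234));
tx1.commit();
session1.close();                 // item is now detached

// The user edits the detached instance on the presentation tier
item.setDescription("Hardly used, almost new!");

// Second unit of work: reattach and let dirty checking do the rest
Session session2 = sessionFactory.openSession();
Transaction tx2 = session2.beginTransaction();
session2.update(item);            // item is persistent again
tx2.commit();                     // changes are synchronized
session2.close();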
Reattachment might result in the creation of new rows in the database when a
reference is created from a detached instance to a new transient instance. For example,
a new Bid might have been added to a detached Item while it was on the presentation
tier. Hibernate can detect that the Bid is new and must be inserted in the
database. For this to work, Hibernate must be able to distinguish between a new
transient instance and an old detached instance. Transient instances (such as the
Bid) might need to be saved; detached instances (such as the Item) might need to
be reattached (and later updated in the database). There are several ways to distinguish
between transient and detached instances, but the nicest approach is to look
at the value of the identifier property. Hibernate can examine the identifier of a
transient or detached object on reattachment and treat the object (and the
associated graph of objects) appropriately. We discuss this important issue further
in section 4.3.4, "Distinguishing between transient and detached instances."
If you want to take advantage of Hibernate's support for reassociation of
detached instances in your own applications, you need to be aware of Hibernate's
identity scope when designing your application; that is, the Session scope that
guarantees identical instances. As soon as you leave that scope and have detached
instances, another interesting concept comes into play.
We need to discuss the relationship between Java equality (see chapter 3,
section 3.4.1, "Identity versus equality") and database identity. Equality is an identity
concept that you, as a class developer, control and that you can (and sometimes
have to) use for classes that have detached instances. Java equality is defined by the
implementation of the equals() and hashCode() methods in the persistent classes
of the domain model.
4.1.6 Implementing equals() and hashCode()
The equals() method is called by application code or, more importantly, by the
Java collections. A Set collection, for example, calls equals() on each object you
put in the Set, to determine (and prevent) duplicate elements.
First let's consider the default implementation of equals(), defined by
java.lang.Object, which uses a comparison by Java identity. Hibernate guarantees
that there is a unique instance for each row of the database inside a Session. Therefore,
the default identity equals() is appropriate if you never mix instances; that
is, if you never put detached instances from different sessions into the same Set.
(Actually, the issue we're exploring is also visible if detached instances are from the
same session but have been serialized and deserialized in different scopes.) As soon
as you have instances from multiple sessions, however, it becomes possible to have
a Set containing two Items that each represent the same row of the database table
but don't have the same Java identity. This would almost always be semantically
wrong. Nevertheless, it's possible to build a complex application with identity
(default) equals as long as you exercise discipline when dealing with detached
objects from different sessions (and keep an eye on serialization and deserialization).
One nice thing about this approach is that you don't have to write extra code
to implement your own notion of equality.
However, if this concept of equality isn't what you want, you have to override
equals() in your persistent classes. Keep in mind that when you override equals(),
you always need to also override hashCode() so the two methods are consistent (if
two objects are equal, they must have the same hash code). Let's look at some of the
ways you can override equals() and hashCode() in persistent classes.
Using database identifier equality
A clever approach is to implement equals() to compare just the database identifier
property (usually a surrogate primary key) value:
public class User {
...
public boolean equals(Object other) {
if (this==other) return true;
if (id==null) return false;
if ( !(other instanceof User) ) return false;
final User that = (User) other;
return this.id.equals( that.getId() );
}
public int hashCode() {
return id==null ?
System.identityHashCode(this) :
id.hashCode();
}
}
Notice how this equals() method falls back to Java identity for transient instances
(if id==null) that don't have a database identifier value assigned yet. This is reasonable,
since they can't have the same persistent identity as another instance.
Unfortunately, this solution has one huge problem: Hibernate doesn't assign
identifier values until an entity is saved. So, if the object is added to a Set before
being saved, its hash code changes while it's contained by the Set, contrary to the
contract of java.util.Set, as the following sketch demonstrates. In particular, this
problem makes cascaded save (discussed later in this chapter) useless for sets. We
strongly discourage this solution (database identifier equality).
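A short sketch of the pitfall, using the implementation above; an open Session is assumed:

Set users = new HashSet();

User user = new User();     // transient: id is null
users.add(user);            // hashCode() falls back to the identity hash

session.save(user);         // Hibernate assigns the identifier, so
                            // hashCode() now returns id.hashCode()

// The Set looks in the wrong hash bucket and can't find the element:
boolean found = users.contains(user);   // false!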
Comparing by value
A better way is to include all persistent properties of the persistent class, apart from
any database identifier property, in the equals() comparison. This is how most
people perceive the meaning of equals(); we call it by value equality.
When we say "all properties," we don't mean to include collections. Collection
state is associated with a different table, so it seems wrong to include it. More
important, you don't want to force the entire object graph to be retrieved just to
perform equals(). In the case of User, this means you shouldn't include the items
collection (the items sold by this user) in the comparison. So, this is the implementation
you could use:
public class User {
...
public boolean equals(Object other) {
if (this==other) return true;
if ( !(other instanceof User) ) return false;
final User that = (User) other;
if ( !this.getUsername().equals( that.getUsername() ) )
return false;
if ( !this.getPassword().equals( that.getPassword() ) )
return false;
return true;
}
public int hashCode() {
int result = 14;
result = 29 * result + getUsername().hashCode();
result = 29 * result + getPassword().hashCode();
return result;
}
}
However, there are again two problems with this approach:
¦ Instances from different sessions are no longer equal if one is modified (for
example, if the user changes his password).
¦ Instances with different database identity (instances that represent different
rows of the database table) could be considered equal, unless there is some
combination of properties that are guaranteed to be unique (the database
columns have a unique constraint). In the case of User, there is a unique
property: username.
To get to the solution we recommend, you need to understand the notion of a business
key.
Using business key equality
A business key is a property, or some combination of properties, that is unique for
each instance with the same database identity. Essentially, it's the natural key you'd
use if you weren't using a surrogate key. Unlike a natural primary key, it isn't an
absolute requirement that the business key never change; as long as it changes
rarely, that's enough.
We argue that every entity should have a business key, even if it includes all properties
of the class (this would be appropriate for some immutable classes). The
business key is what the user thinks of as uniquely identifying a particular record,
whereas the surrogate key is what the application and database use.
Business key equality means that the equals() method compares only the properties
that form the business key. This is a perfect solution that avoids all the problems
described earlier. The only downside is that it requires extra thought to
identify the correct business key in the first place. But this effort is required anyway;
it's important to identify any unique keys if you want your database to help ensure
data integrity via constraint checking.
For the User class, username is a great candidate business key. It's never null, it's
unique, and it changes rarely (if ever):
public class User {
    ...
    public boolean equals(Object other) {
        if (this==other) return true;
        if ( !(other instanceof User) ) return false;
        final User that = (User) other;
        return this.username.equals( that.getUsername() );
    }

    public int hashCode() {
        return username.hashCode();
    }
}
For some other classes, the business key might be more complex, consisting of a
combination of properties. For example, candidate business keys for the Bid class
are the item ID together with the bid amount, or the item ID together with the date
and time of the bid. A good business key for the BillingDetails abstract class is
the number together with the type (subclass) of billing details. Notice that it's almost
never correct to override equals() on a subclass and include another property in
the comparison. It's tricky to satisfy the requirements that equality be both symmetric
and transitive in this case; and, more important, the business key wouldn't correspond
to any well-defined candidate natural key in the database (subclass
properties may be mapped to a different table).
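As an illustration (a sketch of ours, not the example application's source; it
assumes Bid exposes getItem() and a getCreated() timestamp, and that both are
never null), a composite business key for Bid might look like this:

public class Bid {
    ...
    public boolean equals(Object other) {
        if (this==other) return true;
        if ( !(other instanceof Bid) ) return false;
        final Bid that = (Bid) other;
        // Business key: the item bid on, plus the date and time of the bid
        if ( !this.getItem().equals( that.getItem() ) )
            return false;
        return this.getCreated().equals( that.getCreated() );
    }

    public int hashCode() {
        int result = getItem().hashCode();
        result = 29 * result + getCreated().hashCode();
        return result;
    }
}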
You might have noticed that the equals() and hashCode() methods always access
the properties of the other object via the getter methods. This is important, since
the object instance passed as other might be a proxy object, not the actual instance
that holds the persistent state. This is one point where Hibernate isn't completely
transparent, but it's a good practice to use accessor methods instead of direct
instance variable access anyway.
Finally, take care when modifying the value of the business key properties; don't
change the value while the domain object is in a set.
We've talked about the persistence manager in this section. It's time to take a
closer look at the persistence manager and explore the Hibernate Session API
in greater detail. (We'll come back to detached objects with more details in the
next chapter.)
4.2 The persistence manager
Any transparent persistence tool includes a persistence manager API, which usually
provides services for
¦ Basic CRUD operations
¦ Query execution
¦ Control of transactions
¦ Management of the transaction-level cache
The persistence manager can be exposed by several different interfaces (in the
case of Hibernate, Session, Query, Criteria, and Transaction). Under the covers,
the implementations of these interfaces are coupled tightly.
The central interface between the application and Hibernate is Session; it's
your starting point for all the operations just listed. For most of the rest of this
book, we'll refer to the persistence manager and the session interchangeably; this is
consistent with usage in the Hibernate community.
So, how do you start using the session? At the beginning of a unit of work, a
thread obtains an instance of Session from the application's SessionFactory. The
application might have multiple SessionFactorys if it accesses multiple datasources.
But you should never create a new SessionFactory just to service a particular
request; creation of a SessionFactory is extremely expensive. On the other
hand, Session creation is extremely inexpensive; the Session doesn't even obtain a
JDBC Connection until a connection is required.
After opening a new session, you use it to load and save objects.
4.2.1 Making an object persistent
The first thing you want to do with a Session is make a new transient object persistent.
To do so, you use the save() method:
User user = new User();
user.getName().setFirstname("John");
user.getName().setLastname("Doe");
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
session.save(user);
tx.commit();
session.close();
First, we instantiate a new transient object user as usual. Of course, we might also
instantiate it after opening a Session; they aren't related yet. We open a new Session
using the SessionFactory referred to by sessions, and then we start a new
database transaction.
A call to save() makes the transient instance of User persistent. It's now associated
with the current Session. However, no SQL INSERT has yet been executed. The
Hibernate Session never executes any SQL statement until absolutely necessary.
The changes made to persistent objects have to be synchronized with the database
at some point. This happens when we commit() the Hibernate Transaction.
In this case, Hibernate obtains a JDBC connection and issues a single SQL INSERT
statement. Finally, the Session is closed and the JDBC connection is released.
Note that it's better (but not required) to fully initialize the User instance before
associating it with the Session. The SQL INSERT statement contains the values that
were held by the object at the point when save() was called. You can, of course, modify
the object after calling save(), and your changes will be propagated to the database
as an SQL UPDATE.
Everything between session.beginTransaction() and tx.commit() occurs in
one database transaction. We haven't discussed transactions in detail yet; we'll
leave that topic for the next chapter. But keep in mind that all database operations
in a transaction scope either completely succeed or completely fail. If one of the
UPDATE or INSERT statements made on tx.commit() fails, all changes made to persistent
objects in this transaction will be rolled back at the database level. However,
Hibernate does not roll back in-memory changes to persistent objects; this is reasonable,
since a failure of a database transaction is normally nonrecoverable and
you have to discard the failed Session immediately.
4.2.2 Updating the persistent state of a detached instance
Modifying the user after the session is closed will have no effect on its persistent
representation in the database. When the session is closed, user becomes a
detached instance. It may be reassociated with a new Session by calling update()
or lock().
The update() method forces an update to the persistent state of the object in
the database, scheduling an SQL UPDATE. Here's an example of detached object
handling:
user.setPassword("secret");
Session sessionTwo = sessions.openSession();
Transaction tx = sessionTwo.beginTransaction();
sessionTwo.update(user);
user.setUsername("jonny");
tx.commit();
sessionTwo.close();
It doesn't matter if the object is modified before or after it's passed to update().
The important thing is that the call to update() is used to reassociate the detached
instance with the new Session (and current transaction) and tells Hibernate to treat
the object as dirty (unless select-before-update is enabled for the persistent class
mapping, in which case Hibernate will determine if the object is dirty by executing
a SELECT statement and comparing the object's current state to the current database
state).
A call to lock() associates the object with the Session without forcing an update,
as shown here:
Session sessionTwo = sessions.openSession();
Transaction tx = sessionTwo.beginTransaction();
sessionTwo.lock(user, LockMode.NONE);
user.setPassword("secret");
user.setUsername("jonny");
tx.commit();
sessionTwo.close();
In this case, it does matter whether changes are made before or after the object is
associated with the session. Changes made before the call to lock() aren't propagated
to the database; you use lock() only if you're sure that the detached instance
hasn't been modified.
We discuss Hibernate lock modes in the next chapter. By specifying
LockMode.NONE here, we tell Hibernate not to perform a version check or obtain any
database-level locks when reassociating the object with the Session. If we specified
LockMode.READ or LockMode.UPGRADE, Hibernate would execute a SELECT statement
in order to perform a version check (and to set an upgrade lock).
4.2.3 Retrieving a persistent object
The Session is also used to query the database and retrieve existing persistent
objects. Hibernate is especially powerful in this area, as youll see later in this chapter
and in chapter 7. However, special methods are provided on the Session API
for the simplest kind of query: retrieval by identifier. One of these methods is
get(), demonstrated here:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
int userID = 1234;
User user = (User) session.get(User.class, new Long(userID));
tx.commit();
session.close();
The retrieved object user may now be passed to the presentation layer for use outside
the transaction as a detached instance (after the session has been closed). If
no row with the given identifier value exists in the database, get() returns null.
4.2.4 Updating a persistent object
Any persistent object returned by get() or any other kind of query is already associated
with the current Session and transaction context. It can be modified, and
its state will be synchronized with the database. This mechanism is called automatic
dirty checking, which means Hibernate will track and save the changes you make to
an object inside a session:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
int userID = 1234;
User user = (User) session.get(User.class, new Long(userID));
user.setPassword("secret");
tx.commit();
session.close();
First we retrieve the object from the database with the given identifier. We modify
the object, and these modifications are propagated to the database when tx.commit()
is called. Of course, as soon as we close the Session, the instance is considered
detached.
4.2.5 Making a persistent object transient
You can easily make a persistent object transient, removing its persistent state from
the database, using the delete() method:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
int userID = 1234;
User user = (User) session.get(User.class, new Long(userID));
session.delete(user);
tx.commit();
session.close();
The SQL DELETE will be executed only when the Session is synchronized with the
database at the end of the transaction.
After the Session is closed, the user object is considered an ordinary transient
instance. The transient instance will be destroyed by the garbage collector if it's no
longer referenced by any other object. Both the in-memory object instance and the
persistent database row will have been removed.
4.2.6 Making a detached object transient
Finally, you can make a detached instance transient, deleting its persistent state
from the database. This means you don't have to reattach (with update() or
lock()) a detached instance to delete it from the database; you can directly delete
a detached instance:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
session.delete(user);
tx.commit();
session.close();
In this case, the call to delete() does two things: It associates the object with the
Session and then schedules the object for deletion, executed on tx.commit().
You now know the persistence lifecycle and the basic operations of the persistence
manager. Together with the persistent class mappings we discussed in chapter
3, you can create your own small Hibernate application. (If you like, you can
jump to chapter 8 and read about a handy Hibernate helper class for SessionFactory
and Session management.) Keep in mind that we didn't show you any
exception-handling code so far, but you should be able to figure out the try/catch
blocks yourself; a sketch follows at the end of this section. Map some simple entity
classes and components, and then store and load objects in a stand-alone application
(you don't need a web container or application server; just write a main method).
However, as soon as you try to store associated entity objects (that is, when you
deal with a more complex object
graph), you'll see that calling save() or delete() on each object of the graph isn't
an efficient way to write applications.
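Here is one possible shape for the try/catch blocks we just mentioned (our
sketch, not the helper class from chapter 8; sessions again refers to an existing
SessionFactory):

public void saveUser(User user) throws HibernateException {
    Session session = sessions.openSession();
    Transaction tx = null;
    try {
        tx = session.beginTransaction();
        session.save(user);
        tx.commit();
    } catch (HibernateException ex) {
        if (tx != null) tx.rollback(); // undo any database changes
        throw ex;                      // the failed Session must be discarded
    } finally {
        session.close();               // always release the JDBC connection
    }
}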
You'd like to make as few calls to the Session as possible. Transitive persistence provides
a more natural way to force object state changes and to control the persistence
lifecycle.
4.3 Using transitive persistence in Hibernate
Real, nontrivial applications work not with single objects but rather with graphs of
objects. When the application manipulates a graph of persistent objects, the result
may be an object graph consisting of persistent, detached, and transient instances.
Transitive persistence is a technique that allows you to propagate persistence to transient
and detached subgraphs automatically.
For example, if we add a newly instantiated Category to the already persistent
hierarchy of categories, it should automatically become persistent without a call to
Session.save(). We gave a slightly different example in chapter 3 when we
mapped a parent/child relationship between Bid and Item. In that case, not only
were bids automatically made persistent when they were added to an item, but they
were also automatically deleted when the owning item was deleted.
There is more than one model for transitive persistence. The best known is persistence
by reachability, which we'll discuss first. Although some basic principles are
the same, Hibernate uses its own, more powerful model, as you'll see later.
4.3.1 Persistence by reachability
An object persistence layer is said to implement persistence by reachability if any
instance becomes persistent when the application creates an object reference to
the instance from another instance that is already persistent. This behavior is illustrated
by the object diagram (note that this isnt a class diagram) in figure 4.2.
[Figure 4.2 Persistence by reachability with a root persistent object. The object
diagram shows Computer : Category as persistent; Desktop PCs : Category and
Monitors : Category as persistent by reachability; and Electronics : Category and
Cell Phones : Category as transient.]
In this example, Computer is a persistent object. The objects Desktop PCs
and Monitors are also persistent; they're reachable from the Computer Category
instance. Electronics and Cell Phones are transient. Note that we assume
navigation is only possible to child categories, and not to the parent; for example,
we can call computer.getChildCategories(). Persistence by reachability is a recursive
algorithm: All objects reachable from a persistent instance become persistent
either when the original instance is made persistent or just before in-memory state
is synchronized with the data store.
Persistence by reachability guarantees referential integrity; any object graph can
be completely re-created by loading the persistent root object. An application may
walk the object graph from association to association without worrying about the
persistent state of the instances. (SQL databases have a different approach to referential
integrity, relying on foreign key and other constraints to detect a misbehaving
application.)
In the purest form of persistence by reachability, the database has some top-level,
or root, object from which all persistent objects are reachable. Ideally, an
instance should become transient and be deleted from the database if it isn't reachable
via references from the root persistent object.
Neither Hibernate nor other ORM solutions implement this form; there is no
analog of the root persistent object in an SQL database and no persistent garbage
collector that can detect unreferenced instances. Object-oriented data stores
might implement a garbage-collection algorithm similar to the one implemented
for in-memory objects by the JVM, but this option isn't available in the ORM world;
scanning all tables for unreferenced rows won't perform acceptably.
So, persistence by reachability is at best a halfway solution. It helps you make
transient objects persistent and propagate their state to the database without many
calls to the persistence manager. But (at least, in the context of SQL databases and
ORM) it isn't a full solution to the problem of making persistent objects transient
and removing their state from the database. This turns out to be a much more difficult
problem. You can't simply remove all reachable instances when you remove
an object; other persistent instances may hold references to them (remember that
entities can be shared). You can't even safely remove instances that aren't referenced
by any persistent object in memory; the instances in memory are only a small
subset of all objects represented in the database. Let's look at Hibernate's more
flexible transitive persistence model.
4.3.2 Cascading persistence with Hibernate
Hibernates transitive persistence model uses the same basic concept as persistence
by reachabilitythat is, object associations are examined to determine transitive
state. However, Hibernate allows you to specify a cascade style for each association
mapping, which offers more flexibility and fine-grained control for all state transitions.
Hibernate reads the declared style and cascades operations to associated
objects automatically.
By default, Hibernate does not navigate an association when searching for transient
or detached objects, so saving, deleting, or reattaching a Category wont affect
the child category objects. This is the opposite of the persistence-by-reachability
default behavior. If, for a particular association, you wish to enable transitive persistence,
you must override this default in the mapping metadata.
You can map entity associations in metadata with the following attributes:
¦ cascade="none", the default, tells Hibernate to ignore the association.
¦ cascade="save-update" tells Hibernate to navigate the association when the
transaction is committed and when an object is passed to save() or
update() and save newly instantiated transient instances and persist changes to
detached instances.
¦ cascade="delete" tells Hibernate to navigate the association and delete persistent
instances when an object is passed to delete().
¦ cascade="all" means to cascade both save-update and delete, as well as
calls to evict and lock.
¦ cascade="all-delete-orphan" means the same as cascade="all" but, in addition,
Hibernate deletes any persistent entity instance that has been removed
(dereferenced) from the association (for example, from a collection).
¦ cascade="delete-orphan" Hibernate will delete any persistent entity
instance that has been removed (dereferenced) from the association (for
example, from a collection).
This association-level cascade style model is both richer and less safe than persistence
by reachability. Hibernate doesn't make the same strong guarantees of referential
integrity that persistence by reachability provides. Instead, Hibernate partially delegates
referential integrity concerns to the foreign key constraints of the underlying
relational database. Of course, there is a good reason for this design decision:
It allows Hibernate applications to use detached objects efficiently, because you can
control reattachment of a detached object graph at the association level.
Let's elaborate on the cascading concept with some example association mappings.
We recommend that you read the next section straight through, because each
example builds on the previous one. Our first example is straightforward; it lets
you save newly added categories efficiently.
4.3.3 Managing auction categories
System administrators can create new categories, rename categories,
and move subcategories around in the category hierarchy.
This structure can be seen in figure 4.3.
Now, we map this class and the association:

...
<many-to-one name="parentCategory"
    class="Category"
    column="PARENT_CATEGORY_ID"
    cascade="none"/>

<set name="childCategories"
    table="CATEGORY"
    cascade="save-update"
    inverse="true">
    <key column="PARENT_CATEGORY_ID"/>
    <one-to-many class="Category"/>
</set>
...
This is a recursive, bidirectional, one-to-many association, as briefly discussed in
chapter 3. The one-valued end is mapped with the <many-to-one> element and the
Set-typed property with the <set> element. Both refer to the same foreign key column:
PARENT_CATEGORY_ID.
Suppose we create a new Category as a child category of Computer (see
figure 4.4).
We have several ways to create this new Laptops object and save it in the database.
We could go back to the database and retrieve the Computer category to
which our new Laptops category will belong, add the new category, and commit
the transaction:
[Figure 4.3 Category class with an association to itself: a class box for Category
(name : String) with a 0..* self-association.]
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
Category computer = (Category) session.get(Category.class, computerId);
Category laptops = new Category("Laptops");
computer.getChildCategories().add(laptops);
laptops.setParentCategory(computer);
tx.commit();
session.close();
The computer instance is persistent (attached to a session), and the childCategories
association has cascade save enabled. Hence, this code results in the new
laptops category becoming persistent when tx.commit() is called, because Hibernate
cascades the dirty-checking operation to the children of computer. Hibernate
executes an INSERT statement.
Let's do the same thing again, but this time create the link between Computer
and Laptops outside of any transaction (in a real application, it's useful to manipulate
an object graph in a presentation tier, for example, before passing the
graph back to the persistence layer to make the changes persistent):
Category computer = ... // Loaded in a previous session
Category laptops = new Category("Laptops");
computer.getChildCategories().add(laptops);
laptops.setParentCategory(computer);
[Figure 4.4 Adding a new Category to the object graph: the diagram from figure 4.2,
with a new Laptops : Category object attached as a child of Computer : Category.]
The detached computer object and any other detached objects it refers to are now
associated with the new transient laptops object (and vice versa). We make this
change to the object graph persistent by saving the new object in a second Hibernate
session:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
// Persist one new category and the link to its parent category
session.save(laptops);
tx.commit();
session.close();
Hibernate will inspect the database identifier property of the parent category of
laptops and correctly create the relationship to the Computer category in the
database. Hibernate inserts the identifier value of the parent into the foreign key
field of the new Laptops row in CATEGORY.
Since cascade="none" is defined for the parentCategory association, Hibernate
ignores changes to any of the other categories in the hierarchy (Computer,
Electronics). It doesnt cascade the call to save() to entities referred to by this
association. If we had enabled cascade="save-update" on the mapping
of parentCategory, Hibernate would have had to navigate the whole graph of
objects in memory, synchronizing all instances with the database. This process
would perform badly, because a lot of useless data access would be required. In
this case, we neither needed nor wanted transitive persistence for the parentCategory
association.
Why do we have cascading operations? We could have saved the laptop object,
as shown in the previous example, without any cascade mapping being used. Well,
consider the following case:
Category computer = ... // Loaded in a previous Session
Category laptops = new Category("Laptops");
Category laptopAccessories = new Category("Laptop Accessories");
Category laptopTabletPCs = new Category("Tablet PCs");
laptops.addChildCategory(laptopAccessories);
laptops.addChildCategory(laptopTabletPCs);
computer.addChildCategory(laptops);
(Notice that we use the convenience method addChildCategory() to set both ends
of the association link in one call, as described in chapter 3.)
It would be undesirable to have to save each of the three new categories individually.
Fortunately, because we mapped the childCategories association with
cascade="save-update", we dont need to. The same code we used before to save
the single Laptops category will save all three new categories in a new session:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
// Persist all three new Category instances
session.save(laptops);
tx.commit();
session.close();
You're probably wondering why the cascade style is called cascade="save-update"
rather than cascade="save". Having just made all three categories persistent previously,
suppose we made the following changes to the category hierarchy in a subsequent
request (outside of a session and transaction):
laptops.setName("Laptop Computers");
laptopAccessories.setName("Accessories & Parts");
laptopTabletPCs.setName("Tablet Computers");
Category laptopBags = new Category("Laptop Bags");
laptops.addChildCategory(laptopBags);
We have added a new category as a child of the Laptops category and modified
all three existing categories. The following code propagates these changes
to the database:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
// Update three old Category instances and insert the new one
session.update(laptops);
tx.commit();
session.close();
Specifying cascade="save-update" on the childCategories association accurately
reflects the fact that Hibernate determines what is needed to persist the objects to
the database. In this case, it will reattach/update the three detached categories
(laptops, laptopAccessories, and laptopTabletPCs) and save the new child category
(laptopBags).
Notice that the last code example differs from the previous two session examples
only in a single method call. The last example uses update() instead of save()
because laptops was already persistent.
We can rewrite all the examples to use the saveOrUpdate() method. Then the
three code snippets are identical:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
// Let Hibernate decide what's new and what's detached
session.saveOrUpdate(laptops);
tx.commit();
session.close();
The saveOrUpdate() method tells Hibernate to propagate the state of an instance
to the database by creating a new database row if the instance is a new transient
instance or updating the existing row if the instance is a detached instance. In
other words, it does exactly the same thing with the laptops category as
cascade="save-update" did with the child categories of laptops.
One final question: How did Hibernate know which children were detached and
which were new transient instances?
4.3.4 Distinguishing between transient and detached instances
Since Hibernate doesn't keep a reference to a detached instance, you have to let
Hibernate know how to distinguish between a detached instance like laptops (if it
was created in a previous session) and a new transient instance like laptopBags.
A range of options is available. Hibernate will assume that an instance is an
unsaved transient instance if:
¦ The identifier property (if it exists) is null.
¦ The version property (if it exists) is null.
¦ You supply an unsaved-value in the mapping document for the class, and
the value of the identifier property matches.
¦ You supply an unsaved-value in the mapping document for the version
property, and the value of the version property matches.
¦ You supply a Hibernate Interceptor and return Boolean.TRUE from
Interceptor.isUnsaved() after checking the instance in your code (a sketch
follows this list).
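As an illustration, here is a minimal sketch of the last option (ours, not from the
example application; the Auditable interface and its getCreated() property are
hypothetical, and the remaining Interceptor methods are omitted):

public class AuditInterceptor implements Interceptor {
    ...
    public Boolean isUnsaved(Object entity) {
        if (entity instanceof Auditable) {
            // Hypothetical rule: no creation timestamp means not yet saved
            return ((Auditable) entity).getCreated() == null
                ? Boolean.TRUE
                : Boolean.FALSE;
        }
        return null; // null tells Hibernate to fall back to the other checks
    }
}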
In our domain model, we have used the nullable type java.lang.Long as our identifier
property type everywhere. Since we're using generated, synthetic identifiers,
this solves the problem. New instances have a null identifier property value, so
Hibernate treats them as transient. Detached instances have a non-null identifier
value, so Hibernate treats them properly too.
However, if we had used the primitive type long in our persistent classes, we
would have needed to use the following identifier mapping in all our classes:
<id name="id" type="long" column="CATEGORY_ID" unsaved-value="0">
    <generator class="native"/>
</id>
The unsaved-value attribute tells Hibernate to treat instances of Category with an
identifier value of 0 as newly instantiated transient instances. The default value for
the attribute unsaved-value is null; so, since we've chosen Long as our identifier
property type, we can omit the unsaved-value attribute in our auction application
classes (we use the same identifier type everywhere).
UNSAVED ASSIGNED IDENTIFIERS
This approach works nicely for synthetic identifiers, but it breaks down in
the case of keys assigned by the application, including composite keys in
legacy systems. We discuss this issue in chapter 8, section 8.3.1, Legacy
schemas and composite keys. Avoid application-assigned (and composite)
keys in new applications if possible.
You now have the knowledge to optimize your Hibernate application and reduce
the number of calls to the persistence manager if you want to save and delete
objects. Check the unsaved-value attributes of all your classes and experiment with
detached objects to get a feeling for the Hibernate transitive persistence model.
We'll now switch perspectives and look at another important concept: how to get
a graph of persistent objects out of the database (that is, how to load objects).
4.4 Retrieving objects
Retrieving persistent objects from the database is one of the most interesting (and
complex) parts of working with Hibernate. Hibernate provides the following ways
to get objects out of the database:
¦ Navigating the object graph, starting from an already loaded object, by
accessing the associated objects through property accessor methods such as
aUser.getAddress().getCity(). Hibernate will automatically load (or preload)
nodes of the graph while you navigate the graph if the Session is open.
¦ Retrieving by identifier, which is the most convenient and performant
method when the unique identifier value of an object is known.
¦ Using the Hibernate Query Language (HQL), which is a full object-oriented
query language.
¦ Using the Hibernate Criteria API, which provides a type-safe and object-oriented
way to perform queries without the need for string manipulation.
This facility includes queries based on an example object.
¦ Using native SQL queries, where Hibernate takes care of mapping the JDBC
result sets to graphs of persistent objects.
In your Hibernate applications, you'll use a combination of these techniques.
Each retrieval method may use a different fetching strategy; that is, a strategy
that defines what part of the persistent object graph should be retrieved. The goal
is to find the best retrieval method and fetching strategy for every use case in your
application while at the same time minimizing the number of SQL queries for
best performance.
We won't discuss each retrieval method in much detail in this section; instead
we'll focus on the basic fetching strategies and how to tune Hibernate mapping
files for best default fetching performance for all methods. Before we look at the
fetching strategies, we'll give an overview of the retrieval methods. (We mention
the Hibernate caching system but fully explore it in the next chapter.)
Let's start with the simplest case, retrieval of an object by giving its identifier
value (navigating the object graph should be self-explanatory). You saw a simple
retrieval by identifier earlier in this chapter, but there is more to know about it.
4.4.1 Retrieving objects by identifier
The following Hibernate code snippet retrieves a User object from the database:
User user = (User) session.get(User.class, userID);
The get() method is special because the identifier uniquely identifies a single
instance of a class. Hence it's common for applications to use the identifier as a
convenient handle to a persistent object. Retrieval by identifier can use the cache
when retrieving an object, avoiding a database hit if the object is already cached.
Hibernate also provides a load() method:
User user = (User) session.load(User.class, userID);
The load() method is older; get() was added to Hibernate's API due to user
request. The difference is trivial:
¦ If load() can't find the object in the cache or database, an exception is
thrown. The load() method never returns null. The get() method returns
null if the object can't be found.
¦ The load() method may return a proxy instead of a real persistent instance.
A proxy is a placeholder that triggers the loading of the real object when it's
accessed for the first time; we discuss proxies later in this section. On the
other hand, get() never returns a proxy.
Choosing between get() and load() is easy: If you're certain the persistent
object exists, and nonexistence would be considered exceptional, load() is a
good option. If you aren't certain there is a persistent instance with the given
identifier, use get() and test the return value to see if it's null. Using load() has
a further implication: The application may retrieve a valid reference (a proxy) to a
persistent instance without hitting the database to retrieve its persistent state. So
load() might not throw an exception when it doesn't find the persistent object
in the cache or database; the exception would be thrown later, when the proxy
is accessed.
Of course, retrieving an object by identifier isn't as flexible as using arbitrary
queries.
4.4.2 Introducing HQL
The Hibernate Query Language is an object-oriented dialect of the familiar relational
query language SQL. HQL bears a close resemblance to ODMG OQL and
EJB-QL; but unlike OQL, it's adapted for use with SQL databases, and it's much
more powerful and elegant than EJB-QL. (However, EJB-QL 3.0 will be very similar
to HQL.) HQL is easy to learn with basic knowledge of SQL.
HQL isn't a data-manipulation language like SQL. It's used only for object
retrieval, not for updating, inserting, or deleting data. Object state synchronization
is the job of the persistence manager, not the developer.
Most of the time, you'll only need to retrieve objects of a particular class and
restrict by the properties of that class. For example, the following query retrieves a
user by first name:
Query q = session.createQuery("from User u where u.firstname = :fname");
q.setString("fname", "Max");
List result = q.list();
After preparing query q, we bind the value "Max" to the named parameter fname.
The result is returned as a List of User objects.
HQL is powerful, and even though you may not use the advanced features all the
time, you'll need them for some difficult problems. For example, HQL supports
the following:
¦ The ability to apply restrictions to properties of associated objects related
by reference or held in collections (to navigate the object graph using
query language).
¦ The ability to retrieve only properties of an entity or entities, without the
overhead of loading the entity itself in a transactional scope. This is sometimes
called a report query; it's more correctly called projection.
¦ The ability to order the results of the query.
¦ The ability to paginate the results.
¦ Aggregation with group by, having, and aggregate functions like sum, min,
and max.
¦ Outer joins when retrieving multiple objects per row.
¦ The ability to call user-defined SQL functions.
¦ Subqueries (nested queries).
We discuss all these features in chapter 7, together with the optional native SQL
query mechanism.
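As a small taste of these features, the following query restricts, orders, and paginates
in one shot (our illustration; the property and parameter names are assumed):

Query q = session.createQuery(
    "from User u where u.firstname like :fname order by u.username"
);
q.setString("fname", "M%");
q.setFirstResult(0);    // pagination: index of the first result
q.setMaxResults(10);    // pagination: page size
List result = q.list();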
4.4.3 Query by criteria
The Hibernate query by criteria (QBC) API lets you build a query by manipulating criteria
objects at runtime. This approach lets you specify constraints dynamically
without direct string manipulations, but it doesnt lose much of the flexibility or
power of HQL. On the other hand, queries expressed as criteria are often less readable
than queries expressed in HQL.
Retrieving a user by first name is easy using a Criteria object:
Criteria criteria = session.createCriteria(User.class);
criteria.add( Expression.like("firstname", "Max") );
List result = criteria.list();
A Criteria is a tree of Criterion instances. The Expression class provides static factory
methods that return Criterion instances. Once the desired criteria tree is
built, it's executed against the database.
Many developers prefer QBC, considering it a more object-oriented approach.
They also like the fact that the query syntax may be parsed and validated at compile
time, whereas HQL expressions aren't parsed until runtime.
The nice thing about the Hibernate Criteria API is the Criterion framework.
This framework allows extension by the user, which is difficult in the case of a query
language like HQL.
4.4.4 Query by example
As part of the QBC facility, Hibernate supports query by example (QBE). The idea
behind QBE is that the application supplies an instance of the queried class with
certain property values set (to nondefault values). The query returns all persistent
instances with matching property values. QBE isnt a particularly powerful
approach, but it can be convenient for some applications. The following code snippet
demonstrates a Hibernate QBE:
User exampleUser = new User();
exampleUser.setFirstname("Max");
Criteria criteria = session.createCriteria(User.class);
criteria.add( Example.create(exampleUser) );
List result = criteria.list();
A typical use case for QBE is a search screen that allows users to specify a range of
property values to be matched by the returned result set. This kind of functionality
can be difficult to express cleanly in a query language; string manipulations would
be required to specify a dynamic set of constraints.
Both the QBC API and the example query mechanism are discussed in more
detail in chapter 7.
You now know the basic retrieval options in Hibernate. We focus on the strategies
for fetching object graphs in the rest of this section. A fetching strategy
defines what part of the object graph (that is, what subgraph) is retrieved with a query
or load operation.
4.4.5 Fetching strategies
In traditional relational data access, youd fetch all the data required for a particular
computation with a single SQL query, taking advantage of inner and outer joins
to retrieve related entities. Some primitive ORM implementations fetch data piecemeal,
with many requests for small chunks of data in response to the applications
navigating a graph of persistent objects. This approach doesnt make efficient use
of the relational databases join capabilities. In fact, this data access strategy scales
poorly by nature. One of the most difficult problems in ORMprobably the most
difficultis providing for efficient access to relational data, given an application
that prefers to treat the data as a graph of objects.
For the kinds of applications weve often worked with (multi-user, distributed,
web, and enterprise applications), object retrieval using many round trips to/from
the database is unacceptable. Hence we argue that tools should emphasize the R in
ORM to a much greater extent than has been traditional.
The problem of fetching object graphs efficiently (with minimal access to the
database) has often been addressed by providing association-level fetching strategies
specified in metadata of the association mapping. The trouble with this
approach is that each piece of code that uses an entity requires a different set of
associated objects. But this isn't enough. We argue that what is needed is support
for fine-grained runtime association fetching strategies. Hibernate supports
both: it lets you specify a default fetching strategy in the mapping file and then
override it at runtime in code.
Hibernate allows you to choose among four fetching strategies for any association,
in association metadata and at runtime:
¦ Immediate fetching: The associated object is fetched immediately, using a
sequential database read (or cache lookup).
¦ Lazy fetching: The associated object or collection is fetched lazily, when
it's first accessed. This results in a new request to the database (unless the
associated object is cached).
¦ Eager fetching: The associated object or collection is fetched together with
the owning object, using an SQL outer join, and no further database request
is required.
¦ Batch fetching: This approach may be used to improve the performance of
lazy fetching by retrieving a batch of objects or collections when a lazy association
is accessed. (Batch fetching may also be used to improve the performance
of immediate fetching.)
Let's look more closely at each fetching strategy.
Immediate fetching
Immediate association fetching occurs when you retrieve an entity from the database
and then immediately retrieve another associated entity or entities in a further
request to the database or cache. Immediate fetching isn't usually an efficient
fetching strategy unless you expect the associated entities to almost always be
cached already.
Lazy fetching
When a client requests an entity and its associated graph of objects from the database,
it isn't usually necessary to retrieve the whole graph of every (indirectly) associated
object. You wouldn't want to load the whole database into memory at once;
for example, loading a single Category shouldn't trigger the loading of all Items in
that category.
Lazy fetching lets you decide how much of the object graph is loaded in the first
database hit and which associations should be loaded only when they're first
accessed. Lazy fetching is a foundational concept in object persistence and the
first step to attaining acceptable performance.
We recommend that, to start with, all associations be configured for lazy (or perhaps
batched lazy) fetching in the mapping file. This strategy may then be overridden
at runtime by queries that force eager fetching to occur.
Eager (outer join) fetching
Lazy association fetching can help reduce database load and is often a good
default strategy. However, it's a bit like a blind guess as far as performance optimization
goes.
Eager fetching lets you explicitly specify which associated objects should be loaded
together with the referencing object. Hibernate can then return the associated
objects in a single database request, utilizing an SQL OUTER JOIN. Performance optimization
in Hibernate often involves judicious use of eager fetching for particular
transactions. Hence, even though default eager fetching may be declared in the
mapping file, it's more common to specify the use of this strategy at runtime for a
particular HQL or criteria query.
Batch fetching
Batch fetching isn't strictly an association fetching strategy; it's a technique that may
help improve the performance of lazy (or immediate) fetching. Usually, when you
load an object or collection, your SQL WHERE clause specifies the identifier of the
object or of the object that owns the collection. If batch fetching is enabled, Hibernate
looks to see what other proxied instances or uninitialized collections are referenced
in the current session and tries to load them at the same time by specifying
multiple identifier values in the WHERE clause.
We aren't great fans of this approach; eager fetching is almost always faster.
Batch fetching is useful for inexperienced users who wish to achieve acceptable
performance in Hibernate without having to think too hard about the SQL that will
be executed. (Note that batch fetching may be familiar to you, since it's used by
many EJB2 engines.)
We'll now declare the fetching strategy for some associations in our mapping
metadata.
4.4.6 Selecting a fetching strategy in mappings
Hibernate lets you select default association fetching strategies by specifying
attributes in the mapping metadata. You can override the default strategy using features
of Hibernate's query methods, as you'll see in chapter 7. A minor caveat: You
don't have to understand every option presented in this section immediately; we
recommend that you get an overview first and use this section as a reference when
you're optimizing the default fetching strategies in your application.
A wrinkle in Hibernate's mapping format means that collection mappings function
slightly differently than single-point associations; so, we'll cover the two cases
separately. Let's first consider both ends of the bidirectional association between
Bid and Item.
Single-point associations
For a <many-to-one> or <one-to-one> association, lazy fetching is possible only if
the associated class mapping enables proxying. For the Item class, we enable proxying
by specifying lazy="true":
Now, remember the association from Bid to Item:
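<many-to-one name="item" class="Item" column="ITEM_ID"/>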
When we retrieve a Bid from the database, the association property may hold an
instance of a Hibernate-generated subclass of Item that delegates all method invocations
to a different instance of Item that is fetched lazily from the database (this is
the more elaborate definition of a Hibernate proxy).
Hibernate uses two different instances so that even polymorphic associations
can be proxied; when the proxied object is fetched, it may be an instance of a
mapped subclass of Item (if there were any subclasses of Item, that is). We can even
choose any interface implemented by the Item class as the type of the proxy. To do
so, we declare it using the proxy attribute, instead of specifying lazy="true":
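<class name="Item" table="ITEM" proxy="ItemInterface">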
As soon as we declare the proxy or lazy attribute on Item, any single-point association
to Item is proxied and fetched lazily, unless that association overrides the
fetching strategy by declaring the outer-join attribute.
There are three possible values for outer-join:
¦ outer-join="auto"The default. When the attribute isnt specified; Hibernate
fetches the associated object lazily if the associated class has proxying
enabled, or eagerly using an outer join if proxying is disabled (default).
¦ outer-join="true"Hibernate always fetches the association eagerly using
an outer join, even if proxying is enabled. This allows you to choose different
fetching strategies for different associations to the same proxied class.
¦ outer-join="false"Hibernate never fetches the association using an
outer join, even if proxying is disabled. This is useful if you expect the associated
object to exist in the second-level cache (see chapter 5). If it isnt
available in the second-level cache, the object is fetched immediately using
an extra SQL SELECT.
So, if we wanted to reenable eager fetching for the association, now that proxying
is enabled, we would specify:
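<many-to-one name="item" class="Item" column="ITEM_ID" outer-join="true"/>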
For a one-to-one association (discussed in more detail in chapter 6), lazy fetching
is conceptually possible only when the associated object always exists. We indicate
this by specifying constrained="true". For example, if an item can have only one
bid, the mapping for the Bid is:
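<one-to-one name="item" class="Item" constrained="true"/>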
The constrained attribute has a slightly similar interpretation to the not-null
attribute of a <many-to-one> mapping. It tells Hibernate that the associated object
is required and thus cannot be null.
To enable batch fetching, we specify the batch-size in the mapping for Item:
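<class name="Item" table="ITEM" lazy="true" batch-size="10">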
The batch size limits the number of items that may be retrieved in a single batch.
Choose a reasonably small number here.
You'll meet the same attributes (outer-join, batch-size, and lazy) when we
consider collections, but the interpretation is slightly different.
Collections
In the case of collections, fetching strategies apply not just to entity associations,
but also to collections of values (for example, a collection of strings could be
fetched by outer join).
Just like classes, collections have their own proxies, which we usually call collection
wrappers. Unlike classes, the collection wrapper is always there, even if lazy fetching
is disabled (Hibernate needs the wrapper to detect collection modifications).
Collection mappings may declare a lazy attribute, an outer-join attribute,
neither, or both (specifying both isn't meaningful). The meaningful options are
as follows:
¦ Neither attribute specified: This option is equivalent to outer-join="false"
lazy="false". The collection is fetched from the second-level cache or by
an immediate extra SQL SELECT. This option is the default and is most useful
when the second-level cache is enabled for this collection.
¦ outer-join="true": Hibernate fetches the association eagerly using an
outer join. At the time of this writing, Hibernate is able to fetch only one
collection per SQL SELECT, so it isn't possible to declare multiple collections
belonging to the same persistent class with outer-join="true".
¦ lazy="true": Hibernate fetches the collection lazily, when it's first
accessed.
We don't recommend eager fetching for collections, so we'll map the Item's collection
of bids with lazy="true". This option is almost always used for collection mappings
(it should be the default, and we recommend that you consider it as a default
for all your collection mappings):
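<set name="bids" lazy="true">
    <key column="ITEM_ID"/>
    <one-to-many class="Bid"/>
</set>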
We can even enable batch fetching for the collection. In this case, the batch size
doesn't refer to the number of bids in the batch; it refers to the number of collections
of bids:
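<set name="bids" lazy="true" batch-size="9">
    <key column="ITEM_ID"/>
    <one-to-many class="Bid"/>
</set>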
This mapping tells Hibernate to load up to nine collections of bids in one batch,
depending on how many uninitialized collections of bids are currently present in
the items associated with the session. In other words, if there are five Item instances
with persistent state in a Session, and all have an uninitialized bids collection,
Hibernate will automatically load all five collections in a single SQL query if one is
accessed. If there are 11 items, only 9 collections will be fetched. Batch fetching
can significantly reduce the number of queries required for hierarchies of objects
(for example, when loading the tree of parent and child Category objects).
Let's talk about a special case: many-to-many associations (we discuss this mapping
in more detail in chapter 6). You usually use a link table (some developers also
call it a relationship table or association table) that holds only the key values of the two
associated tables and therefore allows a many-to-many multiplicity. This additional
table has to be considered if you decide to use eager fetching. Look at the following
straightforward many-to-many example, which maps the association from Category
to Item:
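<set name="items" table="CATEGORY_ITEM" outer-join="true">
    <key column="CATEGORY_ID"/>
    <many-to-many class="Item" column="ITEM_ID"/>
</set>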
In this case, the eager fetching strategy refers only to the association table
CATEGORY_ITEM. If we load a Category with this fetching strategy, Hibernate will
automatically fetch all link entries from CATEGORY_ITEM in a single outer join SQL
query, but not the item instances from ITEM!
The entities contained in the many-to-many association can of course also be
fetched eagerly with the same SQL query. The <many-to-many> element allows this
behavior to be customized:
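<set name="items" table="CATEGORY_ITEM" outer-join="true">
    <key column="CATEGORY_ID"/>
    <many-to-many class="Item" column="ITEM_ID" outer-join="true"/>
</set>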
Hibernate will now fetch all Items in a Category with a single outer join query when
the Category is loaded. However, keep in mind that we usually recommend lazy
loading as the default fetching strategy and that Hibernate is limited to one eagerly
fetched collection per mapped persistent class.
Setting the fetch depth
We'll now discuss a global fetching strategy setting: the maximum fetch depth. This
setting controls the number of outer-joined tables Hibernate will use in a single
SQL query. Consider the complete association chain from Category to Item, and
from Item to Bid. The first is a many-to-many association and the second is a
one-to-many; hence both associations are mapped with collection elements. If we
declare outer-join="true" for both associations (don't forget the special
<many-to-many> declaration) and load a single Category, how many queries will Hibernate
execute? Will only the Items be eagerly fetched, or also all the Bids of each Item?
You probably expect a single query, with an outer join operation including
the CATEGORY, CATEGORY_ITEM, ITEM, and BID tables. However, this isn't the case
by default.
Hibernate's outer join fetch behavior is controlled with the global configuration
option hibernate.max_fetch_depth. If you set this to 1 (also the default), Hibernate
will fetch only the Category and the link entries from the CATEGORY_ITEM association
table. If you set it to 2, Hibernate executes an outer join that also includes
the Items in the same SQL query. Setting this option to 3 joins all four tables in one
SQL statement and also loads all Bids.
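For example, to allow joins across three levels, you could set the following in
hibernate.properties (or as an equivalent <property> element in hibernate.cfg.xml):

hibernate.max_fetch_depth=3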
Recommended values for the fetch depth depend on the join performance and
the size of the database tables; test your applications with low values (less than 4)
first, and decrease or increase the number while tuning your application. The global
maximum fetch depth also applies to single-ended associations (<many-to-one>,
<one-to-one>) mapped with an eager fetching strategy.
Keep in mind that eager fetching strategies declared in the mapping metadata
are effective only if you use retrieval by identifier, use the criteria query API, or
navigate through the object graph manually. Any HQL query may specify its own
fetching strategy at runtime, thus ignoring the mapping defaults. You can also
override the defaults (that is, not ignore them) with criteria queries. This is an
important difference, and we cover it in more detail in chapter 7, section 7.3.2,
Fetching associations.
However, you may sometimes simply want to initialize a proxy or a collection
wrapper manually with a simple API call.
Initializing lazy associations
A proxy or collection wrapper is automatically initialized when any of its methods
are invoked (except the identifier property getter, which may return the identifier
value without fetching the underlying persistent object). However, it's only possible
to initialize a proxy or collection wrapper if it's currently associated with an
open Session. If you close the session and try to access an uninitialized proxy or
collection, Hibernate throws a runtime exception.
Because of this behavior, it's sometimes useful to explicitly initialize an object
before closing the session. This approach isn't as flexible as retrieving the complete
required object subgraph with an HQL query, using arbitrary fetching strategies
at runtime.
We use the static method Hibernate.initialize() for manual initialization:
Session session = sessions.openSession();
Transaction tx = session.beginTransaction();
Category cat = (Category) session.get(Category.class, id);
Hibernate.initialize( cat.getItems() );
tx.commit();
session.close();
Iterator iter = cat.getItems().iterator();
...
Hibernate.initialize() may be passed a collection wrapper, as in this example, or
a proxy. You may also, in similar rare cases, check the current state of a property by
calling Hibernate.isInitialized(). (Note that initialize() doesn't cascade to
any associated objects.)
Another solution for this problem is to keep the session open until the application
thread finishes, so you can navigate the object graph whenever you like and
have Hibernate automatically initialize all lazy references. This is a problem of
application design and transaction demarcation; we discuss it again in chapter 8,
section 8.1, Designing layered applications. However, your first choice should be
to fetch the complete required graph in the first place, using HQL or criteria queries,
with a sensible and optimized default fetching strategy in the mapping metadata
for all other cases.
4.4.7 Tuning object retrieval
Let's look at the steps involved when you're tuning the object retrieval operations
in your application:
1 Enable the Hibernate SQL log, as described in chapter 2. You should also be
prepared to read, understand, and evaluate SQL queries and their performance
characteristics for your specific relational model: Will a single join
operation be faster than two selects? Are all the indexes used properly, and
what is the cache hit ratio inside the database? Get your DBA to help you
with the performance evaluation; only she will have the knowledge to
decide which SQL execution plan is the best.
2 Step through your application use case by use case and note how many
and what SQL statements Hibernate executes. A use case can be a single
screen in your web application or a sequence of user dialogs. This step
also involves collecting the object-retrieval methods you use in each use
case: walking the graph, retrieval by identifier, HQL, and criteria queries.
Your goal is to bring down the number (and complexity) of SQL queries
for each use case by tuning the default fetching strategies in metadata.
3 You may encounter two common issues:
¦ If the SQL statements use join operations that are too complex and
slow, set outer-join="false" for <many-to-one> associations (this is
enabled by default). Also try to tune with the global
hibernate.max_fetch_depth configuration option, but keep in mind that this
is best left at a value between 1 and 4.
¦ If too many SQL statements are executed, use lazy="true" for all collection
mappings; by default, Hibernate will execute an immediate
additional fetch for the collection elements (which, if they're entities,
can cascade further into the graph). In rare cases, if you're sure, enable
outer-join="true" and disable lazy loading for particular collections.
Keep in mind that only one collection property per persistent class may
be fetched eagerly. Use batch fetching with values between 3 and 10 to
further optimize collection fetching if the given unit of work involves
several of the same collections or if you're accessing a tree of parent
and child objects (see the mapping sketch following this list).
4 After you set a new fetching strategy, rerun the use case and check the generated
SQL again. Note the SQL statements, and go to the next use case.
5 After you optimize all use cases, check every one again and see if any optimizations
had side effects for others. With some experience, you'll be able
to avoid any negative effects and get it right the first time.
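A batch fetching hint in the mapping metadata might look like the following
sketch; apart from the batch-size attribute, the names shown are illustrative:

<set name="bids" lazy="true" batch-size="6">
    <key column="ITEM_ID"/>
    <one-to-many class="Bid"/>
</set>

With this setting, when one uninitialized bids collection is accessed, Hibernate
initializes up to six collections of this role in a single SELECT.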
This optimization technique isn't only practical for the default fetching strategies;
you can also use it to tune HQL and criteria queries, which can ignore and override
the default fetching for specific use cases and units of work. We discuss runtime
fetching in chapter 7.
In this section, we've started to think about performance issues, especially issues
related to association fetching. Of course, the quickest way to fetch a graph of
objects is to fetch it from the cache in memory, as shown in the next chapter.
4.5 Summary
The dynamic aspects of the object/relational mismatch are just as important as the
better known and better understood structural mismatch problems. In this chapter,
we were primarily concerned with the lifecycle of objects with respect to the
persistence mechanism. Now you understand the three object states defined by
Hibernate: persistent, detached, and transient. Objects transition between states
when you invoke methods of the Session interface or create and remove references
from a graph of already persistent instances. This latter behavior is governed
by the configurable cascade styles, Hibernate's model for transitive persistence.
This model lets you declare the cascading of operations (such as saving or deletion)
on an association basis, which is more powerful and flexible than the traditional
persistence by reachability model. Your goal is to find the best cascading style for
each association and therefore minimize the number of persistence manager calls
you have to make when storing objects.
Retrieving objects from the database is equally important: You can walk the
graph of domain objects by accessing properties and let Hibernate transparently
fetch objects. You can also load objects by identifier, write arbitrary queries in the
HQL, or create an object-oriented representation of your query using the query by
criteria API. In addition, you can use native SQL queries in special cases.
Most of these object-retrieval methods use the default fetching strategies we
defined in mapping metadata (HQL ignores them; criteria queries can override
them). The correct fetching strategy minimizes the number of SQL statements that
have to be executed by lazily, eagerly, or batch-fetching objects. You optimize your
Hibernate application by analyzing the SQL executed in each use case and tuning
the default and runtime fetching strategies.
Next we explore the closely related topics of transactions and caching.
5 Transactions, concurrency, and caching
This chapter covers
¦ Database transactions and locking
¦ Long-running application transactions
¦ The Hibernate first- and second-level caches
¦ The caching system in practice with
CaveatEmptor
Now that you understand the basics of object/relational mapping with Hibernate,
let's take a closer look at one of the core issues in database application design:
transaction management. In this chapter, we examine how you use Hibernate to manage
transactions, how concurrency is handled, and how caching is related to both
aspects. Let's look at our example application.
Some application functionality requires that several different things be done
together. For example, when an auction finishes, our CaveatEmptor application
has to perform four different tasks:
1 Mark the winning (highest amount) bid.
2 Charge the seller the cost of the auction.
3 Charge the successful bidder the price of the winning bid.
4 Notify the seller and the successful bidder.
What happens if we can't bill the auction costs because of a failure in the external
credit card system? Our business requirements might state that either all listed
actions must succeed or none must succeed. If so, we call these steps collectively a
transaction or unit of work. If only one step fails, the whole unit of work must fail. We
say that the transaction is atomic: Several operations are grouped together as a single
indivisible unit.
Furthermore, transactions allow multiple users to work concurrently with the
same data without compromising the integrity and correctness of the data; a particular
transaction shouldn't be visible to and shouldn't influence other concurrently
running transactions. Several different strategies are used to implement this
behavior, which is called isolation. We'll explore them in this chapter.
Transactions are also said to exhibit consistency and durability. Consistency means
that any transaction works with a consistent set of data and leaves the data in a consistent
state when the transaction completes. Durability guarantees that once a
transaction completes, all changes made during that transaction become persistent
and arent lost even if the system subsequently fails. Atomicity, consistency, isolation,
and durability are together known as the ACID criteria.
We begin this chapter with a discussion of system-level database transactions,
where the database guarantees ACID behavior. We'll look at the JDBC and JTA APIs
and see how Hibernate, working as a client of these APIs, is used to control database
transactions.
In an online application, database transactions must have extremely short
lifespans. A database transaction should span a single batch of database operations,
interleaved with business logic. It should certainly not span interaction with the
user. We'll augment your understanding of transactions with the notion of a long-running
application transaction, where database operations occur in several batches,
alternating with user interaction. There are several ways to implement application
transactions in Hibernate applications, all of which are discussed in this chapter.
Finally, the subject of caching is much more closely related to transactions than it
might appear at first sight. In the second half of this chapter, armed with an understanding
of transactions, we explore Hibernate's sophisticated cache architecture.
You'll learn which data is a good candidate for caching and how to handle concurrency
of the cache. We'll then enable caching in the CaveatEmptor application.
Let's begin with the basics and see how transactions work at the lowest level,
the database.
5.1 Understanding database transactions
Databases implement the notion of a unit of work as a database transaction (sometimes
called a system transaction).
A database transaction groups data-access operations. A transaction is guaranteed
to end in one of two ways: it's either committed or rolled back. Hence, database
transactions are always truly atomic. In figure 5.1, you can see this graphically.
If several database operations should be executed inside a transaction, you must
mark the boundaries of the unit of work. You must start the transaction and, at
some point, commit the changes. If an error occurs (either while executing operations
or when committing the changes), you have to roll back the transaction to
leave the data in a consistent state. This is known as transaction demarcation, and
(depending on the API you use) it involves more or less manual intervention.
Figure 5.1 System states during a transaction
You may already have experience with two transaction-handling programming
interfaces: the JDBC API and the JTA.
5.1.1 JDBC and JTA transactions
In a non-managed environment, the JDBC API is used to mark transaction boundaries.
You begin a transaction by calling setAutoCommit(false) on a JDBC connection
and end it by calling commit(). You may, at any time, force an immediate
rollback by calling rollback(). (Easy, huh?)
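In plain JDBC, the demarcation might look like the following sketch; how the
Connection is obtained is application-specific, and myDataSource is an assumed
name:

Connection conn = myDataSource.getConnection();
try {
    conn.setAutoCommit(false); // begin the unit of work
    // ... execute SQL statements ...
    conn.commit();             // make all changes permanent
} catch (SQLException e) {
    conn.rollback();           // undo everything on failure
    throw e;
} finally {
    conn.close();              // return the connection to the pool
}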
FAQ What auto commit mode should you use? A magical setting that is often a
source of confusion is the JDBC connection's auto commit mode. If a database
connection is in auto commit mode, the database transaction will be
committed immediately after each SQL statement, and a new transaction
will be started. This can be useful for ad hoc database queries and ad hoc
data updates.
Auto commit mode is almost always inappropriate in an application,
however. An application doesn't perform ad hoc or any unplanned queries;
instead, it executes a preplanned sequence of related operations
(which are, by definition, never ad hoc). Therefore, Hibernate automatically
disables auto commit mode as soon as it fetches a connection (from
a connection provider, that is, a connection pool). If you supply your
own connection when you open the Session, it's your responsibility to
turn off auto commit!
Note that some database systems enable auto commit by default for each
new connection, but others don't. You might want to disable auto commit
in your global database system configuration to ensure that you never run
into any problems. You may then enable auto commit only when you execute
ad hoc queries (for example, in your database SQL query tool).
In a system that stores data in multiple databases, a particular unit of work may
involve access to more than one data store. In this case, you can't achieve atomicity
using JDBC alone. You require a transaction manager with support for distributed
transactions (two-phase commit). You communicate with the transaction manager
using the JTA.
In a managed environment, JTA is used not only for distributed transactions, but
also for declarative container-managed transactions (CMT). CMT allows you to avoid
explicit transaction demarcation calls in your application source code; rather,
transaction demarcation is controlled by a deployment-specific descriptor. This
descriptor defines how a transaction context propagates when a single thread passes
through several different EJBs.
We aren't interested in the details of direct JDBC or JTA transaction demarcation.
You'll be using these APIs only indirectly.
Hibernate communicates with the database via a JDBC Connection; hence it must
support both APIs. In a stand-alone (or web-based) application, only the JDBC
transaction handling is available; in an application server, Hibernate can use JTA.
Since we would like Hibernate application code to look the same in both managed
and non-managed environments, Hibernate provides its own abstraction layer, hiding
the underlying transaction API. Hibernate allows user extension, so you could
even plug in an adaptor for the CORBA transaction service.
Transaction management is exposed to the application developer via the Hibernate
Transaction interface. You aren't forced to use this API; Hibernate lets you
control JTA or JDBC transactions directly, but this usage is discouraged, and we
won't discuss this option.
5.1.2 The Hibernate Transaction API
The Transaction interface provides methods for declaring the boundaries of a database
transaction. See listing 5.1 for an example of the basic usage of Transaction.

Listing 5.1 Using the Hibernate Transaction API
Session session = sessions.openSession();
Transaction tx = null;
try {
tx = session.beginTransaction();
concludeAuction();
tx.commit();
} catch (Exception e) {
if (tx != null) {
try {
tx.rollback();
} catch (HibernateException he) {
//log he and rethrow e
}
}
throw e;
} finally {
try {
session.close();
} catch (HibernateException he) {
throw he;
}
}
The call to session.beginTransaction() marks the beginning of a database transaction.
In the case of a non-managed environment, this starts a JDBC transaction
on the JDBC connection. In the case of a managed environment, it starts a new JTA
transaction if there is no current JTA transaction, or joins the existing current JTA
transaction. This is all handled by Hibernate; you shouldn't need to care about
the implementation.
The call to tx.commit() synchronizes the Session state with the database. Hibernate
then commits the underlying transaction if and only if beginTransaction()
started a new transaction (in both managed and non-managed cases). If
beginTransaction() did not start an underlying database transaction, commit() only synchronizes
the Session state with the database; its left to the responsible party (the
code that started the transaction in the first place) to end the transaction. This is
consistent with the behavior defined by JTA.
If concludeAuction() threw an exception, we must force the transaction to roll
back by calling tx.rollback(). This method either rolls back the transaction
immediately or marks the transaction for rollback only (if you're using CMTs).
FAQ Is it faster to roll back read-only transactions? If code in a transaction reads
data but doesn't modify it, should you roll back the transaction instead of
committing it? Would this be faster?
Apparently some developers found this approach to be faster in some
special circumstances, and this belief has now spread through the community.
We tested this with the more popular database systems and
found no difference. We also failed to discover any source of real numbers
showing a performance difference. There is also no reason why a
database system should be implemented suboptimally, that is, why it
shouldn't use the fastest transaction cleanup algorithm internally. Always
commit your transaction and roll back if the commit fails.
It's critically important to close the Session in a finally block in order to ensure that
the JDBC connection is released and returned to the connection pool. (This step
is the responsibility of the application, even in a managed environment.)
NOTE The example in listing 5.1 is the standard idiom for a Hibernate unit of
work; therefore, it includes all exception-handling code for the checked
HibernateException. As you can see, even rolling back a Transaction
and closing the Session can throw an exception. You don't want to use this
example as a template in your own application, since you'd rather hide the
exception handling with generic infrastructure code. You can, for example,
use a utility class to convert the HibernateException to an unchecked
runtime exception and hide the details of rolling back a transaction and
closing the session. We discuss this question of application design in more
detail in chapter 8, section 8.1, Designing layered applications.
However, there is one important aspect you must be aware of: the Session
has to be immediately closed and discarded (not reused) when an
exception occurs. Hibernate can't retry failed transactions. This is no
problem in practice, because database exceptions are usually fatal (constraint
violations, for example) and there is no well-defined state to continue
after a failed transaction. An application in production shouldn't
throw any database exceptions either.
We've noted that the call to commit() synchronizes the Session state with the database.
This is called flushing, a process you automatically trigger when you use the
Hibernate Transaction API.
5.1.3 Flushing the Session
The Hibernate Session implements transparent write behind. Changes to the domain
model made in the scope of a Session aren't immediately propagated to the database.
This allows Hibernate to coalesce many changes into a minimal number of
database requests, helping minimize the impact of network latency.
For example, if a single property of an object is changed twice in the same
Transaction, Hibernate only needs to execute one SQL UPDATE. Another example
of the usefulness of transparent write behind is that Hibernate can take
advantage of the JDBC batch API when executing multiple UPDATE, INSERT, or
DELETE statements.
Hibernate flushes occur only at the following times:
¦ When a Transaction is committed
¦ Sometimes before a query is executed
¦ When the application calls Session.flush() explicitly
Flushing the Session state to the database at the end of a database transaction is
required in order to make the changes durable and is the common case. Hibernate
doesnt flush before every query. However, if there are changes held in memory that
would affect the results of the query, Hibernate will, by default, synchronize first.
You can control this behavior by explicitly setting the Hibernate FlushMode via a
call to session.setFlushMode(). The flush modes are as follows:
¦ FlushMode.AUTO: The default. Enables the behavior just described.
¦ FlushMode.COMMIT: Specifies that the session won't be flushed before query
execution (it will be flushed only at the end of the database transaction). Be
aware that this setting may expose you to stale data: modifications you made
to objects only in memory may conflict with the results of the query.
¦ FlushMode.NEVER: Lets you specify that only explicit calls to flush() result
in synchronization of session state with the database.
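A session could be switched to flush-on-commit along these lines (a sketch):

Session session = sessions.openSession();
session.setFlushMode(FlushMode.COMMIT); // no flush before queries
Transaction tx = session.beginTransaction();
// ... queries here may not see pending in-memory modifications ...
tx.commit(); // session state is synchronized at commit
session.close();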
We don't recommend that you change this setting from the default. It's provided
to allow performance optimization in rare cases. Likewise, most applications rarely
need to call flush() explicitly. This functionality is useful when you're working
with triggers, mixing Hibernate with direct JDBC, or working with buggy JDBC drivers.
You should be aware of the option but not necessarily look out for use cases.
Now that you understand the basic usage of database transactions with the
Hibernate Transaction interface, let's turn our attention more closely to the subject
of concurrent data access.
It seems as though you shouldn't have to care about transaction isolation; the
term implies that something either is or is not isolated. This is misleading. Complete
isolation of concurrent transactions is extremely expensive in terms of application
scalability, so databases provide several degrees of isolation. For most applications,
incomplete transaction isolation is acceptable. It's important to understand the
degree of isolation you should choose for an application that uses Hibernate and
how Hibernate integrates with the transaction capabilities of the database.
5.1.4 Understanding isolation levels
Databases (and other transactional systems) attempt to ensure transaction isolation,
meaning that, from the point of view of each concurrent transaction, it appears
that no other transactions are in progress.
Traditionally, this has been implemented using locking. A transaction may place
a lock on a particular item of data, temporarily preventing access to that item by
other transactions. Some modern databases such as Oracle and PostgreSQL implement
transaction isolation using multiversion concurrency control, which is generally
considered more scalable. We'll discuss isolation assuming a locking model (most
of our observations are also applicable to multiversion concurrency).
This discussion is about database transactions and the isolation level provided
by the database. Hibernate doesn't add additional semantics; it uses whatever is
available with a given database. If you consider the many years of experience that
database vendors have had with implementing concurrency control, you'll clearly
see the advantage of this approach. Your part, as a Hibernate application developer,
is to understand the capabilities of your database and how to change the database
isolation behavior if needed in your particular scenario (and by your data
integrity requirements).
Isolation issues
First, let's look at several phenomena that break full transaction isolation. The
ANSI SQL standard defines the standard transaction isolation levels in terms of
which of these phenomena are permissible:
¦ Lost update: Two transactions both update a row and then the second transaction
aborts, causing both changes to be lost. This occurs in systems that
don't implement any locking. The concurrent transactions aren't isolated.
¦ Dirty read: One transaction reads changes made by another transaction that
hasn't yet been committed. This is very dangerous, because those changes
might later be rolled back.
¦ Unrepeatable read: A transaction reads a row twice and reads different state
each time. For example, another transaction may have written to the row,
and committed, between the two reads.
¦ Second lost updates problem: A special case of an unrepeatable read. Imagine
that two concurrent transactions both read a row, one writes to it and commits,
and then the second writes to it and commits. The changes made by
the first writer are lost.
¦ Phantom read: A transaction executes a query twice, and the second result
set includes rows that weren't visible in the first result set. (It need not necessarily
be exactly the same query.) This situation is caused by another transaction
inserting new rows between the execution of the two queries.
Now that you understand all the bad things that could occur, we can define the various
transaction isolation levels and see what problems they prevent.
Isolation levels
The standard isolation levels are defined by the ANSI SQL standard but aren't particular
to SQL databases. JTA defines the same isolation levels, and you'll use these
levels to declare your desired transaction isolation later:
¦ Read uncommitted: Permits dirty reads but not lost updates. One transaction
may not write to a row if another uncommitted transaction has already written
to it. Any transaction may read any row, however. This isolation level
may be implemented using exclusive write locks.
¦ Read committed: Permits unrepeatable reads but not dirty reads. This may
be achieved using momentary shared read locks and exclusive write locks.
Reading transactions don't block other transactions from accessing a row.
However, an uncommitted writing transaction blocks all other transactions
from accessing the row.
¦ Repeatable read: Permits neither unrepeatable reads nor dirty reads. Phantom
reads may occur. This may be achieved using shared read locks and exclusive
write locks. Reading transactions block writing transactions (but not other
reading transactions), and writing transactions block all other transactions.
¦ Serializable: Provides the strictest transaction isolation. It emulates serial
transaction execution, as if transactions had been executed one after
another, serially, rather than concurrently. Serializability may not be implemented
using only row-level locks; there must be another mechanism that
prevents a newly inserted row from becoming visible to a transaction that
has already executed a query that would return the row.
It's nice to know how all these technical terms are defined, but how does that help
you choose an isolation level for your application?
5.1.5 Choosing an isolation level
Developers (ourselves included) are often unsure about what transaction isolation
level to use in a production application. Too great a degree of isolation will
harm performance of a highly concurrent application. Insufficient isolation may
cause subtle bugs in our application that can't be reproduced and that we'll
never find out about until the system is working under heavy load in the
deployed environment.
Note that we refer to caching and optimistic locking (using versioning) in the following
explanation, two concepts explained later in this chapter. You might want
to skip this section and come back when it's time to make the decision for an
isolation level in your application. Picking the right isolation level is, after all,
highly dependent on your particular scenario. The following discussion contains
recommendations; nothing is carved in stone.
Hibernate tries hard to be as transparent as possible regarding the transactional
semantics of the database. Nevertheless, caching and optimistic locking affect
these semantics. So, what is a sensible database isolation level to choose in a Hibernate
application?
First, you eliminate the read uncommitted isolation level. It's extremely dangerous
to use one transaction's uncommitted changes in a different transaction. The rollback
or failure of one transaction would affect other concurrent transactions. Rollback
of the first transaction could bring other transactions down with it, or perhaps
even cause them to leave the database in an inconsistent state. It's possible that
changes made by a transaction that ends up being rolled back could be committed
anyway, since they could be read and then propagated by another transaction that
is successful!
Second, most applications don't need serializable isolation (phantom reads
aren't usually a problem), and this isolation level tends to scale poorly. Few existing
applications use serializable isolation in production; rather, they use pessimistic
locks (see section 5.1.7, Using pessimistic locking), which effectively forces a serialized
execution of operations in certain situations.
This leaves you a choice between read committed and repeatable read. Let's first
consider repeatable read. This isolation level eliminates the possibility that one
transaction could overwrite changes made by another concurrent transaction (the
second lost updates problem) if all data access is performed in a single atomic database
transaction. This is an important issue, but using repeatable read isn't the only
way to resolve it.
Let's assume you're using versioned data, something that Hibernate can do for
you automatically. The combination of the (mandatory) Hibernate first-level session
cache and versioning already gives you most of the features of repeatable read
isolation. In particular, versioning prevents the second lost update problem, and
the first-level session cache ensures that the state of the persistent instances loaded
by one transaction is isolated from changes made by other transactions. So, read
committed isolation for all database transactions would be acceptable if you use
versioned data.
Repeatable read provides a bit more reproducibility for query result sets (only
for the duration of the database transaction), but since phantom reads are still possible,
there isn't much value in that. (It's also not common for web applications to
query the same table twice in a single database transaction.)
You also have to consider the (optional) second-level Hibernate cache. It can
provide the same transaction isolation as the underlying database transaction, but
it might even weaken isolation. If you're heavily using a cache concurrency strategy
for the second-level cache that doesnt preserve repeatable read semantics (for
example, the read-write and especially the nonstrict-read-write strategies, both discussed
later in this chapter), the choice for a default isolation level is easy: You can't
achieve repeatable read anyway, so there's no point slowing down the database. On
the other hand, you might not be using second-level caching for critical classes, or
you might be using a fully transactional cache that provides repeatable read isolation.
Should you use repeatable read in this case? You can if you like, but it's probably
not worth the performance cost.
Setting the transaction isolation level allows you to choose a good default locking
strategy for all your database transactions. How do you set the isolation level?
5.1.6 Setting an isolation level
Every JDBC connection to a database uses the database's default isolation level, usually
read committed or repeatable read. This default can be changed in the database
configuration. You may also set the transaction isolation for JDBC connections
using a Hibernate configuration option:
hibernate.connection.isolation = 4
Hibernate will then set this isolation level on every JDBC connection obtained from
a connection pool before starting a transaction. The sensible values for this option
are as follows (you can also find them as constants in java.sql.Connection):
¦ 1: Read uncommitted isolation
¦ 2: Read committed isolation
¦ 4: Repeatable read isolation
¦ 8: Serializable isolation
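These values correspond to the isolation constants in java.sql.Connection; for
example, read committed isolation could be requested like this:

hibernate.connection.isolation = 2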
Note that Hibernate never changes the isolation level of connections obtained
from a datasource provided by the application server in a managed environment.
You may change the default isolation using the configuration of your
application server.
As you can see, setting the isolation level is a global option that affects all connections
and transactions. From time to time, it's useful to specify a more restrictive
lock for a particular transaction. Hibernate allows you to explicitly specify the
use of a pessimistic lock.
5.1.7 Using pessimistic locking
Locking is a mechanism that prevents concurrent access to a particular item of data.
When one transaction holds a lock on an item, no concurrent transaction can read
and/or modify this item. A lock might be just a momentary lock, held while the
item is being read, or it might be held until the completion of the transaction. A
pessimistic lock is a lock that is acquired when an item of data is read and that is held
until transaction completion.
In read-committed mode (our preferred transaction isolation level), the database
never acquires pessimistic locks unless explicitly requested by the application. Usually,
pessimistic locks aren't the most scalable approach to concurrency. However,
in certain special circumstances, they may be used to prevent database-level deadlocks,
which result in transaction failure. Some databases (Oracle and PostgreSQL,
for example) provide the SQL SELECT...FOR UPDATE syntax to allow the use of explicit
pessimistic locks. You can check the Hibernate Dialects to find out if your database
supports this feature. If your database isn't supported, Hibernate will always execute
a normal SELECT without the FOR UPDATE clause.
The Hibernate LockMode class lets you request a pessimistic lock on a particular
item. In addition, you can use the LockMode to force Hibernate to bypass the cache
layer or to execute a simple version check. You'll see the benefit of these operations
when we discuss versioning and caching.
Let's see how to use LockMode. If you have a transaction that looks like this
Transaction tx = session.beginTransaction();
Category cat = (Category) session.get(Category.class, catId);
cat.setName("New Name");
tx.commit();
then you can obtain a pessimistic lock as follows:
Transaction tx = session.beginTransaction();
Category cat =
(Category) session.get(Category.class, catId, LockMode.UPGRADE);
cat.setName("New Name");
tx.commit();
With this mode, Hibernate will load the Category using a SELECT...FOR UPDATE,
thus locking the retrieved rows in the database until they're released when the
transaction ends.
Hibernate defines several lock modes:
¦ LockMode.NONE: Don't go to the database unless the object isn't in either
cache.
¦ LockMode.READ: Bypass both levels of the cache, and perform a version
check to verify that the object in memory is the same version that currently
exists in the database.
¦ LockMode.UPGRADE: Bypass both levels of the cache, do a version check
(if applicable), and obtain a database-level pessimistic upgrade lock, if
that is supported.
¦ LockMode.UPGRADE_NOWAIT: The same as UPGRADE, but use a SELECT...FOR
UPDATE NOWAIT on Oracle. This disables waiting for concurrent lock releases,
thus throwing a locking exception immediately if the lock can't be obtained.
¦ LockMode.WRITE: Obtained automatically when Hibernate has written to
a row in the current transaction (this is an internal mode; you can't specify
it explicitly).
By default, load() and get() use LockMode.NONE. LockMode.READ is most useful with
Session.lock() and a detached object. For example:
Item item = ... ;
Bid bid = new Bid();
item.addBid(bid);
...
Transaction tx = session.beginTransaction();
session.lock(item, LockMode.READ);
tx.commit();
This code performs a version check on the detached Item instance to verify that
the database row wasn't updated by another transaction since it was retrieved,
before saving the new Bid by cascade (assuming that the association from Item to
Bid has cascading enabled).
By specifying an explicit LockMode other than LockMode.NONE, you force Hibernate
to bypass both levels of the cache and go all the way to the database. We think
that most of the time caching is more useful than pessimistic locking, so we don't
use an explicit LockMode unless we really need it. Our advice is that if you have a
professional DBA on your project, let the DBA decide which transactions require
pessimistic locking once the application is up and running. This decision should
depend on subtle details of the interactions between different transactions and
can't be guessed up front.
Let's consider another aspect of concurrent data access. We think that most Java
developers are familiar with the notion of a database transaction and that is what
they usually mean by transaction. In this book, we consider this to be a fine-grained
transaction, but we also consider a more coarse-grained notion. Our coarse-grained
transactions will correspond to what the user of the application considers a
single unit of work. Why should this be any different than the fine-grained database
transaction?
The database isolates the effects of concurrent database transactions. It should
appear to the application that each transaction is the only transaction currently
accessing the database (even when it isn't). Isolation is expensive. The database
must allocate significant resources to each transaction for the duration of the
transaction. In particular, as we've discussed, many databases lock rows that have
been read or updated by a transaction, preventing access by any other transaction,
until the first transaction completes. In highly concurrent systems, these
locks can prevent scalability if they're held for longer than absolutely necessary.
For this reason, you shouldn't hold the database transaction (or even the JDBC
connection) open while waiting for user input. (All this, of course, also applies to
a Hibernate Transaction, since it's merely an adaptor to the underlying database
transaction mechanism.)
If you want to handle long user think time while still taking advantage of the
ACID attributes of transactions, simple database transactions arent sufficient. You
need a new concept, long-running application transactions.
5.2 Working with application transactions
Business processes, which might be considered a single unit of work from the point
of view of the user, necessarily span multiple user client requests. This is especially
true when a user makes a decision to update data on the basis of the current state
of that data.
In an extreme example, suppose you collect data entered by the user on multiple
screens, perhaps using wizard-style step-by-step navigation. You must read and
write related items of data in several requests (hence several database transactions)
until the user clicks Finish on the last screen. Throughout this process, the data
must remain consistent and the user must be informed of any change to the data
made by any concurrent transaction. We call this coarse-grained transaction concept
an application transaction, a broader notion of the unit of work.
We'll now restate this definition more precisely. Most web applications include
several examples of the following type of functionality:
1 Data is retrieved and displayed on the screen in a first database transaction.
2 The user has an opportunity to view and then modify the data, outside of
any database transaction.
3 The modifications are made persistent in a second database transaction.
In more complicated applications, there may be several such interactions with the
user before a particular business process is complete. This leads to the notion of
an application transaction (sometimes called a long transaction, user transaction, or
business transaction). We prefer application transaction or user transaction, since
these terms are less vague and emphasize the transaction aspect from the point of
view of the user.
Since you cant rely on the database to enforce isolation (or even atomicity) of
concurrent application transactions, isolation becomes a concern of the application
itself, perhaps even a concern of the user.
Let's discuss application transactions with an example.
In our CaveatEmptor application, both the user who posted a comment and any
system administrator can open an Edit Comment screen to delete or edit the text
of a comment. Suppose two different administrators open the edit screen to view
the same comment simultaneously. Both edit the comment text and submit their
changes. At this point, we have three ways to handle the concurrent attempts to
write to the database:
¦ Last commit wins: Both updates succeed, and the second update overwrites
the changes of the first. No error message is shown.
¦ First commit wins: The first modification is persisted, and the user submitting
the second change receives an error message. The user must restart the
business process by retrieving the updated comment. This option is often
called optimistic locking.
¦ Merge conflicting updates: The first modification is persisted, and the second
modification may be applied selectively by the user.
The first option, last commit wins, is problematic; the second user overwrites the
changes of the first user without seeing the changes made by the first user or even
knowing that they existed. In our example, this probably wouldn't matter, but it
would be unacceptable for some other kinds of data. The second and third options
are usually acceptable for most kinds of data. From our point of view, the third
option is just a variation of the second; instead of showing an error message, we
show the message and then allow the user to manually merge changes. There is no
single best solution. You must investigate your own business requirements to
decide among these three options.
The first option happens by default if you don't do anything special in your
application; so, this option requires no work on your part (or on the part of Hibernate).
You'll have two database transactions: The comment data is loaded in the
first database transaction, and the second database transaction saves the changes
without checking for updates that could have happened in between.
On the other hand, Hibernate can help you implement the second and third
strategies, using managed versioning for optimistic locking.
5.2.1 Using managed versioning
Managed versioning relies on either a version number that is incremented or a
timestamp that is updated to the current time, every time an object is modified. For
Hibernate managed versioning, we must add a new property to our Comment class
and map it as a version number using the <version> tag. First, let's look at the
changes to the Comment class:
public class Comment {
...
private int version;
...
void setVersion(int version) {
this.version = version;
}
int getVersion() {
return version;
}
}
You can also use a public scope for the setter and getter methods. The <version>
property mapping must come immediately after the identifier property mapping
in the mapping file for the Comment class:
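A sketch of the mapping element, using the VERSION column that appears in the
UPDATE statement shown later in this section:

<version name="version" column="VERSION"/>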
The version number is just a counter value; it doesn't have any useful semantic
value. Some people prefer to use a timestamp instead:
public class Comment {
...
private Date lastUpdatedDatetime;
...
void setLastUpdatedDatetime(Date lastUpdatedDatetime) {
this.lastUpdatedDatetime = lastUpdatedDatetime;
}
public Date getLastUpdatedDatetime() {
return lastUpdatedDatetime;
}
}
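The corresponding mapping element might look like this (a sketch; the column
name is an assumption):

<timestamp name="lastUpdatedDatetime" column="LAST_UPDATED"/>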
In theory, a timestamp is slightly less safe, since two concurrent transactions might
both load and update the same item all in the same millisecond; in practice, this is
unlikely to occur. However, we recommend that new projects use a numeric version
and not a timestamp.
You don't need to set the value of the version or timestamp property yourself;
Hibernate will initialize the value when you first save a Comment, and increment or
reset it whenever the object is modified.
FAQ Is the version of the parent updated if a child is modified? For example, if a
single bid in the collection bids of an Item is modified, is the version
number of the Item also increased by one or not? The answer to that and
similar questions is simple: Hibernate will increment the version number
whenever an object is dirty. This includes all dirty properties, whether
they're single-valued or collections. Think about the relationship
between Item and Bid: If a Bid is modified, the version of the related
Item isn't incremented. If we add or remove a Bid from the collection of
bids, the version of the Item will be updated. (Of course, we would make
Bid an immutable class, since it doesn't make sense to modify bids.)
Whenever Hibernate updates a comment, it uses the version column in the SQL
WHERE clause:
update COMMENTS set COMMENT_TEXT='New comment text', VERSION=3
where COMMENT_ID=123 and VERSION=2
If another application transaction would have updated the same item since it was
read by the current application transaction, the VERSION column would not contain
the value 2, and the row would not be updated. Hibernate would check the row
count returned by the JDBC driver (which in this case would be zero rows
updated) and throw a StaleObjectStateException.
Using this exception, we might show the user of the second application transaction
an error message (You have been working with stale data because another
user modified it!) and let the first commit win. Alternatively, we could catch the
exception and show the second user a new screen, allowing the user to manually
merge changes between the two versions.
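A first-commit-wins handler might look like the following sketch; how the
message reaches the user is application-specific:

try {
    tx.commit();
} catch (StaleObjectStateException e) {
    tx.rollback();
    // Tell the user the comment was modified concurrently and
    // that the edit must be restarted with fresh data.
}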
As you can see, Hibernate makes it easy to use managed versioning to implement
optimistic locking. Can you use optimistic locking and pessimistic locking
together, or do you have to make a decision for one? And why is it called optimistic?
An optimistic approach always assumes that everything will be OK and that conflicting
data modifications are rare. Instead of being pessimistic and blocking concurrent
data access immediately (and forcing execution to be serialized),
optimistic concurrency control will only block at the end of a unit of work and raise
an error.
Both strategies have their place and uses, of course. Multiuser applications usually
default to optimistic concurrency control and use pessimistic locks when
appropriate. Note that the duration of a pessimistic lock in Hibernate is a single
database transaction! This means you can't use an exclusive lock to block concurrent
access longer than a single database transaction. We consider this a good
thing, because the only solution would be an extremely expensive lock held in
memory (or a so-called lock table in the database) for the duration of, for example,
an application transaction. This is almost always a performance bottleneck; every
data access involves additional lock checks to a synchronized lock manager. You
may, if absolutely required in your particular application, implement a simple long
pessimistic lock yourself, using Hibernate to manage the lock table. Patterns for
this can be found on the Hibernate website; however, we definitely don't recommend
this approach. You have to carefully examine the performance implications
of this exceptional case.
Let's get back to application transactions. You now know the basics of managed
versioning and optimistic locking. In previous chapters (and earlier in this chapter),
we have talked about the Hibernate Session as not being the same as a transaction.
In fact, a Session has a flexible scope, and you can use it in different ways
with database and application transactions. This means that the granularity of a
Session is flexible; it can be any unit of work you want it to be.
5.2.2 Granularity of a Session
To understand how you can use the Hibernate Session, let's consider its relationship
with transactions. Previously, we have discussed two related concepts:
¦ The scope of object identity (see section 4.1.4)
¦ The granularity of database and application transactions
The Hibernate Session instance defines the scope of object identity. The Hibernate
Transaction instance matches the scope of a database transaction.
What is the relationship between a Session and an application transaction? Let's
start this discussion with the most common usage of the Session.
Usually, we open a new Session for each client
request (for example, a web browser request) and
begin a new Transaction. After executing the business
logic, we commit the database transaction and
close the Session, before sending the response to
the client (see figure 5.2).
Figure 5.2 Using one Session and Transaction per request/response cycle
The session (S1) and the database transaction (T1) therefore have the same
granularity. If you're not working with the concept of application transactions, this
simple approach is all you need in your application. We also like to call this
approach session-per-request.
If you need a long-running application transaction, you might, thanks to
detached objects (and Hibernate's support for optimistic locking as discussed in
the previous section), implement it using the same approach (see figure 5.3).
Suppose your application transaction spans two client request/response
cycles, for example, two HTTP requests in a web application. You could load the
interesting objects in a first Session and later reattach them to a new Session after
they've been modified by the user. Hibernate will automatically perform a version
check. The time between (S1, T1) and (S2, T2) can be long, as long as your user
needs to make his changes. This approach is also known as
session-per-request-with-detached-objects.

Figure 5.3 Implementing application transactions with multiple Sessions, one for each request/response cycle
Alternatively, you might prefer to use a single Session that spans multiple
requests to implement your application transaction. In this case, you don't need to
worry about reattaching detached objects, since the objects remain persistent
within the context of the one long-running Session (see figure 5.4). Of course,
Hibernate is still responsible for performing optimistic locking.

Figure 5.4 Implementing application transactions with a long Session using disconnection
A Session is serializable and may be safely stored in the servlet HttpSession, for
example. The underlying JDBC connection has to be closed, of course, and a new
connection must be obtained on a subsequent request. You use the disconnect()
and reconnect() methods of the Session interface to release the connection and
later obtain a new connection. This approach is known as
session-per-application-transaction or long Session.
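Between requests, this pattern might look like the following sketch; httpSession
is the servlet HttpSession mentioned above, and the attribute key is an arbitrary
choice:

// At the end of the first request:
session.disconnect(); // release the JDBC connection
httpSession.setAttribute("hibernateSession", session);

// At the beginning of a subsequent request:
Session session =
    (Session) httpSession.getAttribute("hibernateSession");
session.reconnect(); // obtain a new JDBC connection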
Usually, your first choice should be to keep the Hibernate Session open no
longer than a single database transaction (session-per-request). Once the initial
database transaction is complete, the longer the session remains open, the greater
the chance that it holds stale data in its cache of persistent objects (the session is
the mandatory first-level cache). Certainly, you should never reuse a single session
for longer than it takes to complete a single application transaction.
The question of application transactions and the scope of the Session is a matter
of application design. We discuss implementation strategies with examples in
chapter 8, section 8.2, Implementing application transactions.
Finally, there is an important issue you might be concerned about. If you work
with a legacy database schema, you probably can't add version or timestamp columns
for Hibernate's optimistic locking.
5.2.3 Other ways to implement optimistic locking
If you don't have version or timestamp columns, Hibernate can still perform optimistic
locking, but only for objects that are retrieved and modified in the same
Session. If you need optimistic locking for detached objects, you must use a version
number or timestamp.
This alternative implementation of optimistic locking checks the current database
state against the unmodified values of persistent properties at the time the
object was retrieved (or the last time the session was flushed). You can enable this
functionality by setting the optimistic-lock attribute on the class mapping:
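A sketch of such a class mapping, using the table name from the UPDATE statement
below; the omitted property mappings are unchanged:

<class name="Comment" table="COMMENTS" optimistic-lock="all">
    ...
</class>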
Now, Hibernate will include all properties in the WHERE clause:
update COMMENTS set COMMENT_TEXT='New text'
where COMMENT_ID=123
and COMMENT_TEXT='Old Text'
and RATING=5
and ITEM_ID=3
and FROM_USER_ID=45
Alternatively, Hibernate will include only the modified properties (only
COMMENT_TEXT, in this example) if you set optimistic-lock="dirty". (Note that this
setting also requires you to set the class mapping to dynamic-update="true".)
We don't recommend this approach; it's slower, more complex, and less reliable
than version numbers and doesn't work if your application transaction spans multiple
sessions (which is the case if you're using detached objects).
We'll now again switch perspective and consider a new Hibernate aspect. We
already mentioned the close relationship between transactions and caching in the
introduction of this chapter. The fundamentals of transactions and locking, and
also the session granularity concepts, are of central importance when we consider
caching data in the application tier.
5.3 Caching theory and practice
A major justification for our claim that applications using an object/relational persistence
layer are expected to outperform applications built using direct JDBC is
the potential for caching. Although we'll argue passionately that most applications
should be designed so that it's possible to achieve acceptable performance without
the use of a cache, there is no doubt that for some kinds of applications, especially
read-mostly applications or applications that keep significant metadata in the
database, caching can have an enormous impact on performance.
We start our exploration of caching with some background information. This
includes an explanation of the different caching and identity scopes and the
impact of caching on transaction isolation. This information and these rules can
be applied to caching in general; they aren't only valid for Hibernate applications.
This discussion gives you the background to understand why the Hibernate
caching system is like it is. We'll then introduce the Hibernate caching system and
show you how to enable, tune, and manage the first- and second-level Hibernate
cache. We recommend that you carefully study the fundamentals laid out in this
section before you start using the cache. Without the basics, you might quickly run
into hard-to-debug concurrency problems and risk the integrity of your data.
A cache keeps a representation of current database state close to the application,
either in memory or on disk of the application server machine. The cache is
a local copy of the data. The cache sits between your application and the database.
The cache may be used to avoid a database hit whenever
¦ The application performs a lookup by identifier (primary key)
¦ The persistence layer resolves an association lazily
It's also possible to cache the results of queries. As you'll see in chapter 7, the performance
gain of caching query results is minimal in most cases, so this functionality
is used much less often.
Before we look at how Hibernate's cache works, let's walk through the different
caching options and see how they're related to identity and concurrency.
5.3.1 Caching strategies and scopes
Caching is such a fundamental concept in object/relational persistence that you
can't understand the performance, scalability, or transactional semantics of an
ORM implementation without first knowing what kind of caching strategy (or strategies)
it uses. There are three main types of cache:
¦ Transaction scope: Attached to the current unit of work, which may be an
actual database transaction or an application transaction. It's valid and used
as long as the unit of work runs. Every unit of work has its own cache.
¦ Process scope: Shared among many (possibly concurrent) units of work or
transactions. This means that data in the process scope cache is accessed by
concurrently running transactions, obviously with implications on transaction
isolation. A process scope cache might store the persistent instances
themselves in the cache, or it might store just their persistent state in a disassembled
format.
¦ Cluster scope: Shared among multiple processes on the same machine or
among multiple machines in a cluster. It requires some kind of remote process
communication to maintain consistency. Caching information has to be replicated
to all nodes in the cluster. For many (not all) applications, cluster
scope caching is of dubious value, since reading and updating the cache
might be only marginally faster than going straight to the database.
Persistence layers might provide multiple levels of caching. For example, a cache
miss (a cache lookup for an item that isn't contained in the cache) at the transaction
scope might be followed by a lookup at the process scope. A database request
would be the last resort.
The type of cache used by a persistence layer affects the scope of object identity
(the relationship between Java object identity and database identity).
Caching and object identity
Consider a transaction scope cache. It seems natural that this cache is also used as
the identity scope of persistent objects. This means the transaction scope cache
implements identity handling: two lookups for objects using the same database
identifier return the same actual Java instance in a particular unit of work. A transaction
scope cache is therefore ideal if a persistence mechanism also provides
transaction-scoped object identity.
Persistence mechanisms with a process scope cache might choose to implement
process-scoped identity. In this case, object identity is equivalent to database
identity for the whole process. Two lookups using the same database identifier in
two concurrently running units of work result in the same Java instance. Alternatively,
objects retrieved from the process scope cache might be returned by value.
The cache contains tuples of data, not persistent instances. In this case, each unit
of work retrieves its own copy of the state (a tuple) and constructs its own persistent
instance. The scope of the cache and the scope of object identity are no
longer the same.
A cluster scope cache always requires remote communication, and in the case of
POJO-oriented persistence solutions like Hibernate, objects are always passed
remotely by value. A cluster scope cache can't guarantee identity across a cluster.
You have to choose between transaction- or process-scoped object identity.
For typical web or enterprise application architectures, it's most convenient that
the scope of object identity be limited to a single unit of work. In other words, it's
neither necessary nor desirable to have identical objects in two concurrent
threads. There are other kinds of applications (including some desktop or fat-client
architectures) where it might be appropriate to use process-scoped object
identity. This is particularly true where memory is extremely limited: the memory
consumption of a transaction scope cache is proportional to the number of concurrent
units of work.
The real downside to process-scoped identity is the need to synchronize access
to persistent instances in the cache, resulting in a high likelihood of deadlocks.
Caching and concurrency
Any ORM implementation that allows multiple units of work to share the same persistent
instances must provide some form of object-level locking to ensure synchronization
of concurrent access. Usually this is implemented using read and write
locks (held in memory) together with deadlock detection. Implementations like
Hibernate, which maintain a distinct set of instances for each unit of work (transaction-
scoped identity), avoid these issues to a great extent.
It's our opinion that locks held in memory are to be avoided, at least for web and
enterprise applications where multiuser scalability is an overriding concern. In
these applications, it's usually not required to compare object identity across concurrent
units of work; each user should be completely isolated from other users.
There is quite a strong case for this view when the underlying relational database
implements a multiversion concurrency model (Oracle or PostgreSQL, for example).
It's somewhat undesirable for the object/relational persistence cache to redefine
the transactional semantics or concurrency model of the underlying database.
Let's consider the options again. A transaction scope cache is preferred if you
also use transaction-scoped object identity and is the best strategy for highly concurrent
multiuser systems. This first-level cache would be mandatory, because it
also guarantees identical objects. However, this isn't the only cache you can use.
For some data, a second-level cache scoped to the process (or cluster) that returns
data by value can be useful. This scenario therefore has two cache layers; you'll
later see that Hibernate uses this approach.
Let's discuss which data benefits from second-level caching; in other words,
when to turn on the process (or cluster) scope second-level cache in addition to
the mandatory first-level transaction scope cache.
Caching and transaction isolation
A process or cluster scope cache makes data retrieved from the database in one
unit of work visible to another unit of work. This may have some very nasty side
effects on transaction isolation.
First, if an application has non-exclusive access to the database, process scope
caching shouldn't be used, except for data that changes rarely and may be safely
refreshed by a cache expiry. This type of data occurs frequently in content
management-type applications but rarely in financial applications.
You need to look out for two main scenarios involving non-exclusive access:
¦ Clustered applications
¦ Shared legacy data
Any application that is designed to scale must support clustered operation. A process
scope cache doesn't maintain consistency between the different caches on different
machines in the cluster. In this case, you should use a cluster scope
(distributed) cache instead of the process scope cache.
Many Java applications share access to their database with other (legacy) applications.
In this case, you shouldn't use any kind of cache beyond a transaction
scope cache. There is no way for a cache system to know when the legacy application
updated the shared data. Actually, it's possible to implement application-level
functionality to trigger an invalidation of the process (or cluster) scope cache
when changes are made to the database, but we don't know of any standard or best
way to achieve this. Certainly, it will never be a built-in feature of Hibernate. If you
implement such a solution, you'll most likely be on your own, because it's
extremely specific to the environment and products used.
After considering non-exclusive data access, you should establish what isolation
level is required for the application data. Not every cache implementation respects
all transaction isolation levels, and it's critical to find out what is required. Let's
look at data that benefits most from a process (or cluster) scoped cache.
A full ORM solution will let you configure second-level caching separately for
each class. Good candidate classes for caching are classes that represent
¦ Data that changes rarely
¦ Non-critical data (for example, content-management data)
¦ Data that is local to the application and not shared
Bad candidates for second-level caching are
¦ Data that is updated often
¦ Financial data
¦ Data that is shared with a legacy application
However, these aren't the only rules we usually apply. Many applications have a
number of classes with the following properties:
¦ A small number of instances
¦ Each instance referenced by many instances of another class or classes
¦ Instances rarely (or never) updated
This kind of data is sometimes called reference data. Reference data is an excellent
candidate for caching with a process or cluster scope, and any application that uses
reference data heavily will benefit greatly if that data is cached. You allow the data
to be refreshed when the cache timeout period expires.
In the previous sections, we've shaped a picture of a dual-layer caching system,
with a transaction scope first-level cache and an optional process or cluster
scope second-level cache. This is close to the Hibernate caching system.
5.3.2 The Hibernate cache architecture
As we said earlier, Hibernate has a two-level cache architecture. The various elements
of this system can be seen in figure 5.5.
The first-level cache is the Session itself. A session's lifespan corresponds to either a
database transaction or an application transaction (as explained earlier in this
chapter). We consider the cache associated with the Session to be a transaction
scope cache. The first-level cache is mandatory and can't be turned off; it also guarantees
object identity inside a transaction.
The second-level cache in Hibernate is pluggable and might be scoped to the
process or cluster. This is a cache of state (returned by value), not of persistent
instances. A cache concurrency strategy defines the transaction isolation details for
a particular item of data, whereas the cache provider represents the physical, actual
cache implementation. Use of the second-level cache is optional and can be configured
on a per-class and per-association basis.
Hibernate also implements a cache for query result sets that integrates closely
with the second-level cache. This is an optional feature. We discuss the query cache
in chapter 7, since its usage is closely tied to the actual query being executed.
Let's start with the first-level cache, also called the session cache.
Using the first-level cache
The session cache ensures that when the application requests the same persistent
object twice in a particular session, it gets back the same (identical) Java instance.
This sometimes helps avoid unnecessary database traffic. More important, it
ensures the following:
[Figure 5.5 Hibernate's two-level cache architecture. The Session is the first-level cache; the second-level cache consists of a cache concurrency strategy in front of a cache provider (the physical cache regions), with a query cache alongside.]
¦ The persistence layer isn't vulnerable to stack overflows in the case of circular
references in a graph of objects.
¦ There can never be conflicting representations of the same database row at
the end of a database transaction. There is at most a single object representing
any database row. All changes made to that object may be safely written
to the database (flushed).
¦ Changes made in a particular unit of work are always immediately visible to
all other code executed inside that unit of work.
You don't have to do anything special to enable the session cache. It's always on
and, for the reasons shown, can't be turned off.
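To illustrate the identity guarantee, consider this sketch (assuming a persistent Category with identifier 123 exists):

Session session = sessionFactory.openSession();
Category c1 = (Category) session.load(Category.class, new Long(123));
Category c2 = (Category) session.load(Category.class, new Long(123));
// Two lookups with the same identifier in the same Session
// return the same Java instance:
System.out.println( c1 == c2 ); // prints "true"
session.close();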
Whenever you pass an object to save(), update(), or saveOrUpdate(), and whenever
you retrieve an object using load(), find(), list(), iterate(), or filter(),
that object is added to the session cache. When flush() is subsequently called, the
state of that object will be synchronized with the database.
If you don't want this synchronization to occur, or if you're processing a huge
number of objects and need to manage memory efficiently, you can use the
evict() method of the Session to remove the object and its collections from the
first-level cache. There are several scenarios where this can be useful.
Managing the first-level cache
Consider this frequently asked question: "I get an OutOfMemoryException when I try
to load 100,000 objects and manipulate all of them. How can I do mass updates
with Hibernate?"
It's our view that ORM isn't suitable for mass update (or mass delete) operations.
If you have a use case like this, a different strategy is almost always better: call a
stored procedure in the database, or use direct SQL UPDATE and DELETE statements.
Don't transfer all the data to main memory for a simple operation if it can be performed
more efficiently by the database. If your application consists mostly of mass operation
use cases, ORM isn't the right tool for the job!
If you insist on using Hibernate even for mass operations, you can immediately
evict() each object after it has been processed (while iterating through a query
result), and thus prevent memory exhaustion.
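The pattern looks like this (a sketch; the processing step is hypothetical):

Iterator items = session.iterate("from Item item");
while ( items.hasNext() ) {
    Item item = (Item) items.next();
    // ... process the item (hypothetical business logic) ...
    session.flush();     // synchronize any changes with the database
    session.evict(item); // then release the instance from the session cache
}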
To completely evict all objects from the session cache, call Session.clear(). We
aren't trying to convince you that evicting objects from the first-level cache is a bad
thing in general, but good use cases are rare. Sometimes, using projection and
a report query, as discussed in chapter 7, section 7.4.5, "Improving performance
with report queries," might be a better solution.
Note that eviction, like save or delete operations, can be automatically applied
to associated objects. Hibernate will evict associated instances from the Session
if the mapping attribute cascade is set to all or all-delete-orphan for a particular
association.
When a first-level cache miss occurs, Hibernate tries again with the second-level
cache if it's enabled for a particular class or association.
The Hibernate second-level cache
The Hibernate second-level cache has process or cluster scope; all sessions share
the same second-level cache. The second-level cache actually has the scope of a
SessionFactory.
Persistent instances are stored in the second-level cache in a disassembled form.
Think of disassembly as a process a bit like serialization (the algorithm is much,
much faster than Java serialization, however).
The internal implementation of this process/cluster scope cache isn't of much
interest; more important is the correct usage of the cache policies, that is, caching
strategies and physical cache providers.
Different kinds of data require different cache policies: the ratio of reads to
writes varies, the size of the database tables varies, and some tables are shared with
other external applications. So the second-level cache is configurable at the
granularity of an individual class or collection role. This lets you, for example,
enable the second-level cache for reference data classes and disable it for classes
that represent financial records. The cache policy involves setting the following:
¦ Whether the second-level cache is enabled
¦ The Hibernate concurrency strategy
¦ The cache expiration policies (such as timeout, LRU, memory-sensitive)
¦ The physical format of the cache (memory, indexed files, cluster-replicated)
Not all classes benefit from caching, so it's extremely important to be able to disable
the second-level cache. To repeat, the cache is usually useful only for read-mostly
classes. If you have data that is updated more often than it's read, don't
enable the second-level cache, even if all other conditions for caching are true!
Furthermore, the second-level cache can be dangerous in systems that share the
database with other writing applications. As we explained in earlier sections, you
must exercise careful judgment here.
The Hibernate second-level cache is set up in two steps. First, you have to decide
which concurrency strategy to use. After that, you configure cache expiration and
physical cache attributes using the cache provider.
Built-in concurrency strategies
A concurrency strategy is a mediator; it's responsible for storing items of data in
the cache and retrieving them from the cache. This is an important role, because
it also defines the transaction isolation semantics for that particular item. You'll
have to decide, for each persistent class, which cache concurrency strategy to use,
if you want to enable the second-level cache.
There are four built-in concurrency strategies, representing decreasing levels of
strictness in terms of transaction isolation:
¦ transactional: Available in a managed environment only. It guarantees full
transactional isolation up to repeatable read, if required. Use this strategy for
read-mostly data where it's critical to prevent stale data in concurrent transactions,
in the rare case of an update.
¦ read-write: Maintains read committed isolation, using a timestamping mechanism.
It's available only in non-clustered environments. Again, use this strategy
for read-mostly data where it's critical to prevent stale data in
concurrent transactions, in the rare case of an update.
¦ nonstrict-read-write: Makes no guarantee of consistency between the cache
and the database. If there is a possibility of concurrent access to the same
entity, you should configure a sufficiently short expiry timeout. Otherwise,
you may read stale data from the cache. Use this strategy if data changes rarely
(on the order of many hours, days, or even a week) and a small likelihood of stale data isn't
of critical concern. Hibernate invalidates the cached element if a modified
object is flushed, but this is an asynchronous operation, without any cache
locking or guarantee that the retrieved data is the latest version.
¦ read-only: A concurrency strategy suitable for data that never changes.
Use it for reference data only.
Note that with decreasing strictness comes increasing performance. You have to
carefully evaluate the performance of a clustered cache with full transaction isolation
before using it in production. In many cases, you might be better off disabling
the second-level cache for a particular class if stale data isn't an option. First, benchmark
your application with the second-level cache disabled. Then enable it for
good candidate classes, one at a time, while continuously testing the performance
of your system and evaluating concurrency strategies.
It's possible to define your own concurrency strategy by implementing
net.sf.hibernate.cache.CacheConcurrencyStrategy, but this is a relatively difficult
task and only appropriate for extremely rare cases of optimization.
Your next step after considering the concurrency strategies youll use for your
cache candidate classes is to pick a cache provider. The provider is a plugin, the physical
implementation of a cache system.
Choosing a cache provider
For now, Hibernate forces you to choose a single cache provider for the whole
application. Providers for the following products are built into Hibernate:
¦ EHCache is intended for a simple process scope cache in a single JVM. It can
cache in memory or on disk, and it supports the optional Hibernate query
result cache.
¦ OpenSymphony OSCache is a library that supports caching to memory and disk
in a single JVM, with a rich set of expiration policies and query cache support.
¦ SwarmCache is a cluster cache based on JGroups. It uses clustered invalidation
but doesn't support the Hibernate query cache.
¦ JBossCache is a fully transactional replicated clustered cache also based on
the JGroups multicast library. The Hibernate query cache is supported,
assuming that clocks are synchronized in the cluster.
It's easy to write an adaptor for other products by implementing
net.sf.hibernate.cache.CacheProvider.
Not every cache provider is compatible with every concurrency strategy. The
compatibility matrix in table 5.1 will help you choose an appropriate combination.
Table 5.1 Cache concurrency strategy support

Cache Provider   read-only   nonstrict-read-write   read-write   transactional
EHCache          X           X                      X
OSCache          X           X                      X
SwarmCache       X           X
JBossCache       X                                               X
Setting up caching therefore involves two steps:
1 Look at the mapping files for your persistent classes and decide which cache
concurrency strategy you'd like to use for each class and each association.
2 Enable your preferred cache provider in the global Hibernate configuration
and customize the provider-specific settings.
For example, if youre using OSCache, you should edit oscache.properties, or for
EHCache, ehcache.xml in your classpath.
Let's add caching to our CaveatEmptor Category and Item classes.
5.3.3 Caching in practice
Remember that you don't have to explicitly enable the first-level cache. So, let's
declare caching policies and set up cache providers for the second-level cache in
our CaveatEmptor application.
The Category has a small number of instances and is updated rarely, and
instances are shared among many users, so it's a great candidate for use of the
second-level cache. We start by adding the mapping element required to tell Hibernate
to cache Category instances:

<class
    name="Category"
    table="CATEGORY">

    <cache usage="read-write"/>
    ...
</class>
The usage="read-write" attribute tells Hibernate to use a read-write concurrency
strategy for the Category cache. Hibernate will now try the second-level cache
whenever we navigate to a Category or when we load a Category by identifier.
We have chosen read-write instead of nonstrict-read-write, since Category is
a highly concurrent class, shared among many concurrent transactions, and it's
clear that read-committed isolation is good enough. However, nonstrict-read-write
would probably be an acceptable alternative, since a small probability
of inconsistency between the cache and database is acceptable (the category hierarchy
has little financial significance).
This mapping is enough to tell Hibernate to cache all simple Category property
values, but not the state of associated entities or collections. Collections require
their own <cache> element. For the items collection, we'll use a read-write concurrency
strategy:
name="Category"
table="CATEGORY">
This cache will be used when we call category.getItems().iterate(), for example.
Now, a collection cache holds only the identifiers of the associated item
instances. So, if we require the instances themselves to be cached, we must enable
caching of the Item class. A read-write strategy is especially appropriate here. Our
users don't want to make decisions (placing a Bid) based on possibly stale data.
Let's go a step further and consider the collection of Bids. A particular Bid in the
bids collection is immutable, but we have to map the collection using read-write,
since new bids may be made at any time (and it's critical that we be immediately
aware of new bids):
name="Item"
table="ITEM">
To the immutable Bid class, we apply a read-only strategy:
name="Bid"
table="BID">
Cached Bid data is valid indefinitely, because bids are never updated. No cache
invalidation is required. (Instances may still be evicted by the cache provider, for
example, if the maximum number of objects in the cache is reached.)
User is an example of a class that could be cached with the nonstrict-read-write
strategy, but we aren't certain that it makes sense to cache users at all.
Let's set the cache provider, expiration policies, and physical properties of our
cache. We use cache regions to configure class and collection caching individually.
Understanding cache regions
Hibernate keeps different classes/collections in different cache regions. A region
is a named cache: a handle by which you can reference classes and collections in
the cache provider configuration and set the expiration policies applicable to
that region.
The name of the region is the class name, in the case of a class cache; or the class
name together with the property name, in the case of a collection cache. Category
instances are cached in a region named org.hibernate.auction.Category, and the
items collection is cached in a region named org.hibernate.auction.Category.items.
You can use the Hibernate configuration property hibernate.cache.region_prefix
to specify a root region name for a particular SessionFactory. For example,
if the prefix was set to node1, Category would be cached in a region named
node1.org.hibernate.auction.Category. This setting is useful if your application
includes multiple SessionFactory instances.
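For example, in hibernate.properties (a minimal sketch):

hibernate.cache.region_prefix=node1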
Now that you know about cache regions, let's configure the expiry policies for
the Category cache. First we'll choose a cache provider. Assume that we're running
our auction application in a single JVM, so we don't need a cluster-safe implementation
(which would limit our options).
Setting up a local cache provider
We need to set the property that selects a cache provider:
hibernate.cache.provider_class=net.sf.ehcache.hibernate.Provider
We've chosen EHCache as our second-level cache.
Now, we need to specify the expiry policies for the cache regions. EHCache
has its own configuration file, ehcache.xml, in the classpath of the application.
The Hibernate distribution comes bundled with example configuration files for
all built-in cache providers, so we refer you to the usage comments in those files
for detailed configuration; we assume the defaults for all options we don't mention
explicitly.
A cache configuration in ehcache.xml for the Category class might look like this:

<cache name="org.hibernate.auction.Category"
    maxElementsInMemory="500"
    eternal="true"
    timeToIdleSeconds="0"
    timeToLiveSeconds="0"
    overflowToDisk="false"
/>
There are a small number of Category instances, and they're all shared among
many concurrent transactions. We therefore disable eviction by timeout by choosing
a cache size limit greater than the number of categories in our system and setting
eternal="true". There is no need to expire cached data by timeout, because
the Category cache concurrency strategy is read-write and because there are no
other applications changing category data. We also disable disk-based caching,
since we know that there are few instances of Category, so memory consumption
won't be a problem.
Bids, on the other hand, are small and immutable, but there are many of them,
so we must configure EHCache to manage the cache memory consumption carefully.
We use both an expiry timeout and a maximum cache size limit:

<cache name="org.hibernate.auction.Bid"
    maxElementsInMemory="5000"
    eternal="false"
    timeToIdleSeconds="1800"
    timeToLiveSeconds="100000"
    overflowToDisk="false"
/>
The timeToIdleSeconds attribute defines the expiry time in seconds since an element
was last accessed in the cache. We must set a sensible value here, since we
don't want unused bids to consume memory. The timeToLiveSeconds attribute
defines the maximum expiry time in seconds since the element was added to the
cache. Since bids are immutable, we don't need them to be removed from the
cache if they're being accessed regularly. Hence, timeToLiveSeconds is set to a
high number.
The result is that cached bids are removed from the cache if they haven't been
used in the past 30 minutes or if they're the least recently used item when the total
size of the cache has reached its maximum limit of 5,000 elements.
We've disabled the disk-based cache in this example, since we anticipate that
the application server will be deployed to the same machine as the database. If
the expected physical architecture were different, we might enable the disk-based
cache.
Optimal cache eviction policies are, as you can see, specific to the particular data
and particular application. You must consider many external factors, including
available memory on the application server machine, expected load on the database
machine, network latency, existence of legacy applications, and so on. Some
of these factors can't possibly be known at development time, so you'll often need
to iteratively test the performance impact of different settings in the production
environment or a simulation of it.
This is especially true in a more complex scenario, with a replicated cache
deployed to a cluster of server machines.
Setting up a replicated cache
EHCache is an excellent cache provider if your application is deployed on a single
virtual machine. However, enterprise applications supporting thousands of concurrent
users might require more computing power, and scaling your application
might be critical to the success of your project. Hibernate applications are naturally
scalable; that is, Hibernate behaves the same whether it's deployed to a single
machine or to many machines. The only feature of Hibernate that must be configured
specifically for clustered operation is the second-level cache. With a few
changes to our cache configuration, we're able to use a clustered caching system.
It isn't necessarily wrong to use a purely local (noncluster-aware) cache provider
in a cluster. Some data, especially immutable data or data that can be
refreshed by cache timeout, doesn't require clustered invalidation and may safely
be cached locally, even in a clustered environment. We might be able to have each
node in the cluster use a local instance of EHCache, and carefully choose sufficiently
short timeToLiveSeconds timeouts.
However, if you require strict cache consistency in a clustered environment, you
must use a more sophisticated cache provider. We recommend JBossCache, a fully
transactional, cluster-safe caching system based on the JGroups multicast library.
JBossCache is extremely performant, and cluster communication may be tuned in
almost any way imaginable.
We'll now step through a setup of JBossCache for CaveatEmptor, for a small cluster
of two nodes: node A and node B. However, we only scratch the surface of the
topic; cluster configurations are by nature complex, and many settings depend on
the particular scenario.
First, we have to check that all our mapping files use read-only or transactional
as a cache concurrency strategy. These are the only strategies supported by the
JBossCache provider. A nice trick can help us avoid this search-and-replace problem
in the future: instead of placing <cache> elements in our mapping files, we can
centralize cache configuration in hibernate.cfg.xml:
class="org.hibernate.auction.model.Item"
usage="transactional"/>
collection="org.hibernate.auction.model.Item.bids"
usage="transactional"/>
We enabled transactional caching for Item and the bids collection in this example.
However, there is one important caveat: at the time of this writing, Hibernate will
run into a conflict if we also have <cache> elements in the mapping file for Item.
We therefore can't use the global configuration to override the mapping file settings.
We recommend that you use the centralized cache configuration from the
start, especially if you aren't sure how your application might be deployed. It's also
easier to tune cache settings with a centralized configuration.
The next step in our cluster setup is the configuration of the JBossCache provider.
First, we enable it in the Hibernate configuration, for example, if we aren't
using properties, in hibernate.cfg.xml:

<property name="hibernate.cache.provider_class">
    net.sf.hibernate.cache.TreeCacheProvider
</property>
JBossCache has its own configuration file, treecache.xml, which is expected in the
classpath of your application. In most scenarios, you need a different configuration
for each node in your cluster, and you have to make sure the correct file is copied
to the classpath on deployment. Let's look at a typical configuration file. In our
two-node cluster (named MyCluster), this file is used on node A:
archives="jboss-cache.jar, jgroups.jar"/>
name="jboss.cache:service=TreeCache">
jboss:service=Naming
jboss:service=TransactionManager
Licensed to Lathika
Caching theory and practice 191
MyCluster
REPL_SYNC
10000
15000
true
org.jboss.cache.eviction.LRUPolicy
5
5000
1000
500
5000
5000
1800
ip_mcast="true"
loopback="false"/>
num_initial_members="3"
up_thread="false"
down_thread="false"/>
retransmit_timeout="600,1200,2400,4800"
max_xmit_size="8192"
up_thread="false" down_thread="false"/>
window_size="100"
min_threshold="10"
down_thread="false"/>
Licensed to Lathika
192 CHAPTER 5
Transactions, concurrency, and caching
up_thread="false"
down_thread="false"/>
down_thread="false"
up_thread="false"/>
join_retry_timeout="2000"
shun="true" print_local_addr="true"/>
down_thread="true"/>
Granted, this configuration file might look scary at first, but it's easy to understand.
You have to know that it isn't only a configuration file for JBossCache; it's many
things in one: a JMX service configuration for JBoss deployment, a configuration
file for TreeCache, and a fine-grained configuration of JGroups, the communication
library.
Let's ignore the first few lines relating to JBoss deployment (they will be ignored
when running JBossCache outside a JBoss application server) and look at the
TreeCache configuration attributes. These settings define a replicated cache that uses
synchronous communication. This means that a node sending a replication message
waits until all nodes in the group acknowledge the message. This is a good choice
for a true replicated cache. Asynchronous non-blocking communication
might be more appropriate if node B were a hot standby (a node that immediately
takes over if node A fails) instead of a live partner. A hot standby is used when the
purpose of the cluster is failover rather than throughput. The other configuration
attributes are self-explanatory, dealing with issues such as timeouts and population
of the cache when a new node joins the cluster.
JBossCache provides pluggable eviction policies. In this case, we've selected the
built-in policy, org.jboss.cache.eviction.LRUPolicy. We then configure eviction
for each cache region, just as we did with EHCache.
Finally, let's look at the JGroups cluster communication configuration. The
order of communication protocols is extremely important, so don't change or
add lines randomly. Most interesting is the first protocol, <UDP>. We declare a
binding of the communication socket to the IP interface 192.168.0.1 (the IP
address of node A in our network) and enable multicast communication. The
loopback attribute has to be set to true if node A were a Microsoft Windows
machine (it isn't).
The other JGroups attributes are more complex and can be found in the
JGroups documentation. They deal with the discovery algorithms used to detect
new nodes in a group, failure detection, and in general, the management of the
group communication.
So, after changing the cache concurrency strategy of your persistent classes to
transactional (or read-only) and creating a treecache.xml file for node A, you can
start up your application and check the log output. We recommend enabling DEBUG
logging for the org.jboss.cache class; you'll see how JBossCache reads the configuration
and how node A is reported as the first node in the cluster. To deploy node B,
change the IP address in the configuration file and repeat the deployment procedure
with this new file. You should see join messages on both nodes as soon as the
cache is started. Your Hibernate application will now use fully transactional
caching in a cluster: each element put into the cache will be replicated, and
updated elements will be invalidated.
There is one final optional setting to consider. For cluster cache providers, it
might be better to set the Hibernate configuration option
hibernate.cache.use_minimal_puts to true. When this setting is enabled, Hibernate adds
an item to the cache only after checking that the item isn't already
cached. This strategy performs better if cache writes (puts) are much more expensive
than cache reads (gets), which is the case for a replicated cache in a cluster but
not for a local cache (the default is false, optimized for a local cache).
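For example, in hibernate.properties (a minimal sketch; the setting can equally go in hibernate.cfg.xml):

hibernate.cache.use_minimal_puts=true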
Whether you're using a cluster or a local cache, you sometimes need to control it
programmatically for testing or tuning purposes.
Controlling the second-level cache
Hibernate has some useful methods that will help you test and tune your cache.
You may wonder how to disable the second-level cache completely. Hibernate will
only load the cache provider and start using the second-level cache if you have any
cache declarations in your mapping files or XML configuration file. If you comment
them out, the cache is disabled. This is another good reason to prefer centralized
cache configuration in hibernate.cfg.xml.
Just as the Session provides methods for controlling the first-level cache programmatically,
so does the SessionFactory for the second-level cache.
You can call evict() to remove an element from the cache by specifying the
class and the object identifier value:
sessionFactory.evict( Category.class, new Long(123) );
You can also evict all elements of a certain class, or only evict a particular collection
role:

sessionFactory.evict(Category.class);
sessionFactory.evictCollection("org.hibernate.auction.model.Category.items");
You'll rarely need these control mechanisms.
5.4 Summary
This chapter was dedicated to concurrency control and data caching.
You learned that for a single unit of work, either all operations should be completely
successful or the whole unit of work should fail (and changes made to persistent
state should be rolled back). This led us to the notion of a transaction and
the ACID attributes. A transaction is atomic, leaves data in a consistent state, and is
isolated from concurrently running transactions, and you have the guarantee that
data changed by a transaction is durable.
You use two transaction concepts in Hibernate applications: short database
transactions and long-running application transactions. Usually, you use read committed
isolation for database transactions, together with optimistic concurrency
control (version and timestamp checking) for long application transactions.
Hibernate greatly simplifies the implementation of application transactions
because it manages version numbers and timestamps for you.
Finally, we discussed the fundamentals of caching, and you learned how to use
caching effectively in Hibernate applications.
Hibernate provides a dual-layer caching system with a first-level object cache
(the Session) and a pluggable second-level data cache. The first-level cache is
always active; it's used to resolve circular references in your object graph and to
optimize performance in a single unit of work. The (process or cluster scope)
second-level cache, on the other hand, is optional and works best for read-mostly candidate
classes. You can configure a non-volatile second-level cache for reference
(read-only) data or even a second-level cache with full transaction isolation for critical
data. However, you have to carefully examine whether the performance gain is
worth the effort. The second-level cache can be customized at a fine-grained level, for each
persistent class and even for each collection and class association. Used correctly
and thoroughly tested, caching in Hibernate gives you a level of performance that
is almost unachievable in a hand-coded data access layer.
Advanced mapping concepts
This chapter covers
¦ The Hibernate type system
¦ Custom mapping types
¦ Collection mappings
¦ One-to-one and many-to-many associations
In chapter 3, we introduced the most important ORM features provided by Hibernate.
You've met basic class and property mappings, inheritance mappings, component
mappings, and one-to-many association mappings. We now continue
exploring these topics by turning to the more exotic collection and association
mappings. At various places, we'll warn you against using a feature without careful
consideration. For example, it's usually possible to implement any domain model
using only component mappings and one-to-many (occasionally one-to-one) associations.
The exotic mapping features should be used with care, perhaps even
avoided most of the time.
Before we start to talk about the exotic features, you need a more rigorous
understanding of Hibernate's type system, particularly of the distinction between
entity and value types.
6.1 Understanding the Hibernate type system
In chapter 3, section 3.5.1, "Entity and value types," we first distinguished between
entity and value types, a central concept of ORM in Java. We must elaborate on that
distinction in order for you to fully understand the Hibernate type system of entities,
value types, and mapping types.
Entities are the coarse-grained classes in a system. You usually define the features
of a system in terms of the entities involved: "the user places a bid for an item" is a
typical feature definition that mentions three entities. Classes of value type often
don't appear in the business requirements; they're usually the fine-grained classes
representing strings, numbers, and monetary amounts. Occasionally, value types do
appear in feature definitions: "the user changes billing address" is one example,
assuming that Address is a value type, but this is atypical.
More formally, an entity is any class whose instances have their own persistent
identity. A value type is a class that doesn't define some kind of persistent identity.
In practice, this means entity types are classes with identifier properties, and value-type
classes depend on an entity.
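As a quick illustration (a sketch; the real CaveatEmptor classes are more elaborate):

// Entity: has its own persistent identity (an identifier property)
public class User {
    private Long id; // identifier property
    private String username;
    private Address address; // value type instance, owned by this User
    // ...
}

// Value type: no identifier property; its persistence lifecycle
// is bound to the owning User
public class Address {
    private String street;
    private String city;
    // ...
}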
At runtime, you have a graph of entity instances interleaved with value type
instances. The entity instances may be in any of the three persistent lifecycle states:
transient, detached, or persistent. We don't consider these lifecycle states to apply
to the value type instances.
Therefore, entities have their own lifecycle. The save() and delete() methods
of the Hibernate Session interface apply to instances of entity classes, never to
value type instances. The persistence lifecycle of a value type instance is completely
tied to the lifecycle of the owning entity instance. For example, the username
becomes persistent when the user is saved; it never becomes persistent independently
of the user.
In Hibernate, a value type may define associations; it's possible to navigate from
a value type instance to some other entity. However, it's never possible to navigate
from the other entity back to the value type instance. Associations always point to
entities. This means that a value type instance is owned by exactly one entity when
it's retrieved from the database; it's never shared.
At the level of the database, any table is considered an entity. However, Hibernate
provides certain constructs to hide the existence of a database-level entity
from the Java code. For example, a many-to-many association mapping hides the
intermediate association table from the application. A collection of strings (more
accurately, a collection of value-typed instances) behaves like a value type from the
point of view of the application; however, it's mapped to its own table. Although
these features seem nice at first (they simplify the Java code), we have over time
become suspicious of them. Inevitably, these hidden entities end up needing to be
exposed to the application as business requirements evolve. The many-to-many
association table, for example, often gains additional columns as
the application matures. We're almost prepared to recommend that every
database-level entity be exposed to the application as an entity class. For example, we'd
be inclined to model the many-to-many association as two one-to-many associations
to an intervening entity class. We'll leave the final decision to you, however, and
return to the topic of many-to-many entity associations later in this chapter.
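For illustration, such an intervening entity might be mapped like this (a sketch using a hypothetical CategorizedItem class; the names are assumptions):

<class name="CategorizedItem" table="CATEGORY_ITEM">
    <id name="id" column="CATEGORY_ITEM_ID" type="long">
        <generator class="native"/>
    </id>
    <many-to-one name="category" column="CATEGORY_ID" not-null="true"/>
    <many-to-one name="item" column="ITEM_ID" not-null="true"/>
</class>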
So, entity classes are always mapped to the database using <class>, <subclass>,
and <joined-subclass> mapping elements. How are value types mapped?
Consider this mapping of the CaveatEmptor User and email address:

<property
    name="email"
    column="EMAIL"
    type="string"/>
Let's focus on the type="string" attribute. In ORM, you have to deal with Java
types and SQL data types. The two different type systems must be bridged. This is
the job of the Hibernate mapping types, and string is the name of a built-in Hibernate
mapping type.
The string mapping type isnt the only one built into Hibernate; Hibernate
comes with various mapping types that define default persistence strategies for
primitive Java types and certain JDK classes.
6.1.1 Built-in mapping types
Hibernate's built-in mapping types usually share the name of the Java type they
map; however, there may be more than one Hibernate mapping type for a particular
Java type. Furthermore, the built-in types may not be used to perform arbitrary
conversions, such as mapping a VARCHAR field value to a Java Integer property
value. You may define your own custom value types to do this kind of thing, as discussed
later in this chapter.
We'll now discuss the basic, date and time, large object, and various other built-in
mapping types and show you what Java and SQL data types they handle.
Java primitive mapping types
The basic mapping types in table 6.1 map Java primitive types (or their wrapper
types) to appropriate built-in SQL standard types.

Table 6.1 Primitive types

Mapping type   Java type                      Standard SQL built-in type
integer        int or java.lang.Integer      INTEGER
long           long or java.lang.Long        BIGINT
short          short or java.lang.Short      SMALLINT
float          float or java.lang.Float      FLOAT
double         double or java.lang.Double    DOUBLE
big_decimal    java.math.BigDecimal          NUMERIC
character      java.lang.String              CHAR(1)
string         java.lang.String              VARCHAR
byte           byte or java.lang.Byte        TINYINT
boolean        boolean or java.lang.Boolean  BIT
yes_no         boolean or java.lang.Boolean  CHAR(1) ('Y' or 'N')
true_false     boolean or java.lang.Boolean  CHAR(1) ('T' or 'F')

You've probably noticed that your database doesn't support some of the SQL types
listed in table 6.1. The listed names are ANSI-standard data types. Most database
vendors ignore this part of the SQL standard (because their type systems sometimes
predate the standard). However, the JDBC driver provides a partial abstraction of
vendor-specific SQL data types, allowing Hibernate to work with ANSI-standard
types when executing data manipulation language (DML). For database-specific
DDL generation, Hibernate translates from the ANSI-standard type to an appropriate
vendor-specific type, using the built-in support for specific SQL dialects. (You
usually don't have to worry about SQL data types if you're using Hibernate for data
access and data schema definition.)
Date and time mapping types
Table 6.2 lists Hibernate types associated with dates, times, and timestamps. In your
domain model, you may choose to represent date and time data using either
java.util.Date, java.util.Calendar, or the subclasses of java.util.Date defined
in the java.sql package. This is a matter of taste, and we leave the decision to
you; make sure you're consistent, however!

Table 6.2 Date and time types

Mapping type    Java type                              Standard SQL built-in type
date            java.util.Date or java.sql.Date        DATE
time            java.util.Date or java.sql.Time        TIME
timestamp       java.util.Date or java.sql.Timestamp   TIMESTAMP
calendar        java.util.Calendar                     TIMESTAMP
calendar_date   java.util.Calendar                     DATE

Large object mapping types
Table 6.3 lists Hibernate types for handling binary data and large objects. Note that
none of these types may be used as the type of an identifier property.

Table 6.3 Binary and large object types

Mapping type   Java type                        Standard SQL built-in type
binary         byte[]                           VARBINARY (or BLOB)
text           java.lang.String                 CLOB
serializable   any Java class that implements   VARBINARY (or BLOB)
               java.io.Serializable
clob           java.sql.Clob                    CLOB
blob           java.sql.Blob                    BLOB
java.sql.Blob and java.sql.Clob are the most efficient way to handle large
objects in Java. Unfortunately, an instance of Blob or Clob is only usable until the
JDBC transaction completes. So if your persistent class defines a property of
java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in
how instances of the class may be used. In particular, you won't be able to use
instances of that class as detached objects. Furthermore, many JDBC drivers don't
feature working support for java.sql.Blob and java.sql.Clob. Therefore, it
makes more sense to map large objects using the binary or text mapping type,
assuming retrieval of the entire large object into memory isn't a performance killer.
Note that you can find up-to-date design patterns and tips for large object usage on
the Hibernate website, with tricks for particular platforms.
Various JDK mapping types
Table 6.4 lists Hibernate types for various other Java types of the JDK that may be
represented as VARCHARs in the database.

Table 6.4 Other JDK-related types

Mapping type   Java type            Standard SQL built-in type
class          java.lang.Class      VARCHAR
locale         java.util.Locale     VARCHAR
timezone       java.util.TimeZone   VARCHAR
currency       java.util.Currency   VARCHAR

Certainly, <property> isn't the only Hibernate mapping element that has a type
attribute.
6.1.2 Using mapping types
All of the basic mapping types may appear almost anywhere in the Hibernate
mapping document, on normal property, identifier property, and other mapping
elements.
The <id>, <property>, <version>, <discriminator>, <index>, and <element> elements
all define an attribute named type. (There are certain limitations on which
basic mapping types may function as an identifier or discriminator type, however.)
You can see how useful the built-in mapping types are in this mapping for the
BillingDetails class:

<class
    name="BillingDetails"
    table="BILLING_DETAILS"
    discriminator-value="null">

    <id name="id" column="BILLING_DETAILS_ID" type="long">
        <generator class="native"/>
    </id>
    <discriminator column="TYPE" type="string"/>
    <property name="number" column="NUMBER" type="string"/>
    ...
</class>
The BillingDetails class is mapped as an entity. Its discriminator, identifier, and
number properties are value typed, and we use the built-in Hibernate mapping types
to specify the conversion strategy.
It's often not necessary to explicitly specify a built-in mapping type in the XML
mapping document. For instance, if you have a property of Java type
java.lang.String, Hibernate will discover this using reflection and select string
by default. We can easily simplify the previous mapping example:

<class
    name="BillingDetails"
    table="BILLING_DETAILS"
    discriminator-value="null">

    <id name="id" column="BILLING_DETAILS_ID">
        <generator class="native"/>
    </id>
    <discriminator column="TYPE"/>
    <property name="number" column="NUMBER"/>
    ....
</class>
The most important case where this approach doesn't work well is a
java.util.Date property. By default, Hibernate interprets a Date as a timestamp
mapping. You'd need to explicitly specify type="time" or type="date" if you didn't
wish to persist both date and time information.
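For example (a sketch with a hypothetical startDate property):

<property name="startDate" column="START_DATE" type="date"/>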
For each of the built-in mapping types, a constant is defined by the class
net.sf.hibernate.Hibernate. For example, Hibernate.STRING represents the
string mapping type. These constants are useful for query parameter binding, as
discussed in more detail in chapter 7:
session.createQuery("from Item i where i.description like :desc")
.setParameter("desc", desc, Hibernate.STRING)
.list();
These constants are also useful for programmatic manipulation of the Hibernate
mapping metamodel, as discussed in chapter 3.
Of course, Hibernate isn't limited to the built-in mapping types. We consider the
extensible mapping type system one of the core features and an important aspect
that makes Hibernate so flexible.
Creating custom mapping types
Object-oriented languages like Java make it easy to define new types by writing new
classes. Indeed, this is a fundamental part of the definition of object orientation. If
you were limited to the predefined built-in Hibernate mapping types when declaring
properties of persistent classes, you'd lose much of Java's expressiveness. Furthermore,
your domain model implementation would be tightly coupled to the
physical data model, since new type conversions would be impossible.
Most ORM solutions that we've seen provide some kind of support for user-defined
strategies for performing type conversions. These are often called converters.
For example, the user would be able to create a new strategy for persisting a
property of JDK type Integer to a VARCHAR column. Hibernate provides a similar,
much more powerful, feature called custom mapping types.
Hibernate provides two user-friendly interfaces that applications may use when
defining new mapping types. These interfaces reduce the work involved in defining
custom mapping types and insulate the custom type from changes to the Hibernate
core. This allows you to easily upgrade Hibernate and keep your existing
custom mapping types. You can find many examples of useful Hibernate mapping
types on the Hibernate community website.
The first of the programming interfaces is net.sf.hibernate.UserType.
UserType is suitable for most simple cases and even for some more complex problems.
Let's use it in a simple scenario.
Our Bid class defines an amount property; our Item class defines an
initialPrice property, both monetary values. So far, we've used a simple BigDecimal
to represent the value, mapped with big_decimal to a single NUMERIC column.
Suppose we wanted to support multiple currencies in our auction application
and that we had to refactor the existing domain model for this (customer-driven)
change. One way to implement this change would be to add new properties to Bid
and Item: amountCurrency and initialPriceCurrency. We would then map these
new properties to additional VARCHAR columns with the built-in currency mapping
type. We hope you never use this approach!
Creating a UserType
Instead, we should create a MonetaryAmount class that encapsulates both currency
and amount. Note that this is a class of the domain model; it doesn't have any
dependency on Hibernate interfaces:
public class MonetaryAmount implements Serializable {

    private final BigDecimal value;
    private final Currency currency;

    public MonetaryAmount(BigDecimal value, Currency currency) {
        this.value = value;
        this.currency = currency;
    }

    public BigDecimal getValue() { return value; }
    public Currency getCurrency() { return currency; }

    public boolean equals(Object o) { ... }
    public int hashCode() { ... }
}
We've made MonetaryAmount an immutable class. This is a good practice in Java.
Note that we have to implement equals() and hashCode() to finish the class (there
is nothing special to consider here). We use this new MonetaryAmount to replace the
BigDecimal of the initialPrice property in Item. Of course, we can, and should,
use it for all other BigDecimal prices in our persistent classes (such as
Bid.amount) and even in business logic (for example, in the billing system).
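For illustration, a straightforward value-based implementation of the two elided methods might look like this (a sketch, not part of the original listing):

public boolean equals(Object o) {
    if (this == o) return true;
    if ( !(o instanceof MonetaryAmount) ) return false;
    MonetaryAmount other = (MonetaryAmount) o;
    return value.equals(other.value)
        && currency.equals(other.currency);
}

public int hashCode() {
    return 29 * value.hashCode() + currency.hashCode();
}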
Let's map this refactored property of Item to the database. Suppose we're
working with a legacy database that contains all monetary amounts in USD. Our
application is no longer restricted to a single currency (the point of the refactoring),
but it takes time to get the changes done by the database team. So we need to
convert any amount to USD when we persist a MonetaryAmount, and amounts we
load come back in USD.
For this, we create a MonetaryAmountUserType class that implements the Hibernate
interface UserType. Our custom mapping type is shown in listing 6.1.
Listing 6.1 Custom mapping type for monetary amounts in USD

package auction.customtypes;

import ...;

public class MonetaryAmountUserType implements UserType {

    private static final int[] SQL_TYPES = {Types.NUMERIC};
    public int[] sqlTypes() { return SQL_TYPES; }

    public Class returnedClass() { return MonetaryAmount.class; }

    public boolean equals(Object x, Object y) {
        if (x == y) return true;
        if (x == null || y == null) return false;
        return x.equals(y);
    }

    public Object deepCopy(Object value) { return value; }

    public boolean isMutable() { return false; }

    public Object nullSafeGet(ResultSet resultSet,
                              String[] names,
                              Object owner)
            throws HibernateException, SQLException {

        // Read the column first; wasNull() reports on the last column read
        BigDecimal valueInUSD = resultSet.getBigDecimal(names[0]);
        if (resultSet.wasNull()) return null;
        return new MonetaryAmount(valueInUSD,
                                  Currency.getInstance("USD"));
    }

    public void nullSafeSet(PreparedStatement statement,
                            Object value,
                            int index)
            throws HibernateException, SQLException {

        if (value == null) {
            statement.setNull(index, Types.NUMERIC);
        } else {
            MonetaryAmount anyCurrency = (MonetaryAmount) value;
            MonetaryAmount amountInUSD =
                MonetaryAmount.convert( anyCurrency,
                                        Currency.getInstance("USD") );
            // The convert() method isn't shown in our examples
            statement.setBigDecimal(index, amountInUSD.getValue());
        }
    }
}
The sqlTypes() method tells Hibernate what SQL column types to use for DDL
schema generation. The type codes are defined by java.sql.Types. Notice that
this method returns an array of type codes: a UserType may map a single property
to multiple columns, but our legacy data model has only a single NUMERIC column.
The returnedClass() method tells Hibernate what Java type is mapped by this UserType.
The UserType is responsible for dirty-checking property values. The equals()
method compares the current property value to a previous snapshot and determines
whether the property is dirty and must be saved to the database.

The UserType is also partially responsible for creating that snapshot in the first
place. Since MonetaryAmount is an immutable class, the deepCopy() method
returns its argument. In the case of a mutable type, it would need to return a copy
of the argument to be used as the snapshot value. This method is also called when
an instance of the type is written to or read from the second-level cache.

Hibernate can make some minor performance optimizations for immutable types
like this one; the isMutable() method tells Hibernate that this type is immutable.

The nullSafeGet() method retrieves the property value from the JDBC ResultSet.
You can also access the owner of the component if you need it for the conversion.
All database values are in USD, so you have to convert the MonetaryAmount
returned by this method before you show it to the user.

The nullSafeSet() method writes the property value to the JDBC PreparedStatement.
This method takes whatever currency is set and converts it to a simple
BigDecimal USD value before saving.
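The convert() method itself is left unimplemented in our examples; one
hypothetical shape for it, assuming a simple rate lookup (the getExchangeRate()
helper is our invention, and a real system would consult a rate service), is:

// Hypothetical helper, not part of the listings above: converts an amount
// to the target currency using a looked-up exchange rate
public static MonetaryAmount convert(MonetaryAmount amount,
                                     Currency toCurrency) {
    if (amount.getCurrency().equals(toCurrency)) return amount;
    // Assumption: getExchangeRate() returns a BigDecimal rate, e.g. from
    // a static table or an external service
    BigDecimal rate = getExchangeRate(amount.getCurrency(), toCurrency);
    return new MonetaryAmount(amount.getValue().multiply(rate), toCurrency);
}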
We now map the initialPrice property of Item as follows:

<property name="initialPrice"
          column="INITIAL_PRICE"
          type="auction.customtypes.MonetaryAmountUserType"/>
This is the simplest kind of transformation that a UserType could perform, but much
more sophisticated things are possible. A custom mapping type could perform validation;
it could read and write data to and from an LDAP directory; it could even
retrieve persistent objects from a different Hibernate Session for a different database.
You're limited mainly by your imagination!
We'd prefer to represent both the amount and the currency of our monetary
amounts in the database, especially if the schema isn't legacy but can be defined
(or updated quickly). We could still use a UserType, but then we wouldn't be able
to use the amount (or currency) in object queries. The Hibernate query engine
(discussed in more detail in the next chapter) wouldn't know anything about the
individual properties of MonetaryAmount. You can access the properties in your Java
code (MonetaryAmount is just a regular class of the domain model, after all), but not
in Hibernate queries.
Instead, we should use a CompositeUserType if we need the full power of Hibernate
queries. This (slightly more complex) interface exposes the properties of our
MonetaryAmount to Hibernate.
Creating a CompositeUserType
To demonstrate the flexibility of custom mapping types, we don't change our
MonetaryAmount class (or other persistent classes) at all; we change only the custom
mapping type, as shown in listing 6.2.
Listing 6.2 Custom mapping type for monetary amounts in new database schemas

package auction.customtypes;

import ...;

public class MonetaryAmountCompositeUserType
        implements CompositeUserType {

    public Class returnedClass() { return MonetaryAmount.class; }

    public boolean equals(Object x, Object y) {
        if (x == y) return true;
        if (x == null || y == null) return false;
        return x.equals(y);
    }

    public Object deepCopy(Object value) {
        return value; // MonetaryAmount is immutable
    }

    public boolean isMutable() { return false; }

    public Object nullSafeGet(ResultSet resultSet,
                              String[] names,
                              SessionImplementor session,
                              Object owner)
            throws HibernateException, SQLException {

        // Read the columns first; wasNull() reports on the last column read
        BigDecimal value = resultSet.getBigDecimal( names[0] );
        if (resultSet.wasNull()) return null;
        Currency currency =
            Currency.getInstance(resultSet.getString( names[1] ));
        return new MonetaryAmount(value, currency);
    }

    public void nullSafeSet(PreparedStatement statement,
                            Object value,
                            int index,
                            SessionImplementor session)
            throws HibernateException, SQLException {
        if (value == null) {
            statement.setNull(index, Types.NUMERIC);
            statement.setNull(index+1, Types.VARCHAR);
        } else {
            MonetaryAmount amount = (MonetaryAmount) value;
            String currencyCode =
                amount.getCurrency().getCurrencyCode();
            statement.setBigDecimal( index, amount.getValue() );
            statement.setString( index+1, currencyCode );
        }
    }

    public String[] getPropertyNames() {
        return new String[] { "value", "currency" };
    }

    public Type[] getPropertyTypes() {
        return new Type[] { Hibernate.BIG_DECIMAL, Hibernate.CURRENCY };
    }

    public Object getPropertyValue(Object component,
                                   int property)
            throws HibernateException {

        MonetaryAmount amount = (MonetaryAmount) component;
        if (property == 0)
            return amount.getValue();
        else
            return amount.getCurrency();
    }

    public void setPropertyValue(Object component,
                                 int property,
                                 Object value) throws HibernateException {
        throw new UnsupportedOperationException("Immutable!");
    }

    public Object assemble(Serializable cached,
                           SessionImplementor session,
                           Object owner)
            throws HibernateException {
        return cached;
    }

    public Serializable disassemble(Object value,
                                    SessionImplementor session)
            throws HibernateException {
        return (Serializable) value;
    }
}
A CompositeUserType has its own properties, defined by getPropertyNames().
The properties each have their own type, as defined by getPropertyTypes().
The getPropertyValue() method returns the value of an individual property of
the MonetaryAmount. Since MonetaryAmount is immutable, we can't set property
values individually (no problem; this method is optional). The assemble()
method is called when an instance of the type is read from the second-level
cache, and the disassemble() method is called when an instance of the type is
written to the second-level cache.

The order of properties must be the same in the getPropertyNames(),
getPropertyTypes(), and getPropertyValue() methods. The initialPrice property now
maps to two columns, so we declare both in the mapping file. The first column
stores the value; the second stores the currency of the MonetaryAmount (the order
of columns must match the order of properties in your type implementation):
type="auction.customtypes.MonetaryAmountCompositeUserType">
In a query, we can now refer to the value and currency properties of the custom
type, even though they don't appear anywhere in the mapping document as individual
properties:

from Item i
where i.initialPrice.value > 100.0
  and i.initialPrice.currency = 'AUD'
We've expanded the buffer between the Java object model and the SQL database
schema with our custom composite type; both representations can now handle
changes more robustly.

If implementing custom types seems complex, relax; you rarely need to use a
custom mapping type. An alternative way to represent the MonetaryAmount class is
to use a component mapping, as in section 3.5.2, "Using components." The decision
to use a custom mapping type is often a matter of taste.
Let's look at an extremely important application of custom mapping types: the
type-safe enumeration design pattern, which is found in almost all enterprise
applications.
Using enumerated types
An enumerated type is a common Java idiom: a class with a constant (small)
number of immutable instances.

For example, the Comment class (users giving comments about other users in
CaveatEmptor) defines a rating. In our current model, we have a simple int property.
A type-safe (and much better) way to implement different ratings (after all, we
probably don't want arbitrary integer values) is to create a Rating class as follows:
package auction;

public class Rating implements Serializable {

    private String name;

    public static final Rating EXCELLENT = new Rating("Excellent");
    public static final Rating OK = new Rating("OK");
    public static final Rating LOW = new Rating("Low");

    private static final Map INSTANCES = new HashMap();
    static {
        INSTANCES.put(EXCELLENT.toString(), EXCELLENT);
        INSTANCES.put(OK.toString(), OK);
        INSTANCES.put(LOW.toString(), LOW);
    }

    private Rating(String name) {
        this.name = name;
    }

    public String toString() {
        return name;
    }

    // Called on deserialization; preserves the one-instance-per-value invariant
    Object readResolve() {
        return getInstance(name);
    }

    public static Rating getInstance(String name) {
        return (Rating) INSTANCES.get(name);
    }
}
We then change the rating property of our Comment class to use this new type. In
the database, ratings are represented as VARCHAR values. Creating a UserType
for Rating-valued properties is straightforward:
package auction.customtypes;

import ...;

public class RatingUserType implements UserType {

    private static final int[] SQL_TYPES = {Types.VARCHAR};

    public int[] sqlTypes() { return SQL_TYPES; }

    public Class returnedClass() { return Rating.class; }

    public boolean equals(Object x, Object y) { return x == y; }

    public Object deepCopy(Object value) { return value; }

    public boolean isMutable() { return false; }

    public Object nullSafeGet(ResultSet resultSet,
                              String[] names,
                              Object owner)
            throws HibernateException, SQLException {

        String name = resultSet.getString(names[0]);
        return resultSet.wasNull() ? null : Rating.getInstance(name);
    }

    public void nullSafeSet(PreparedStatement statement,
                            Object value,
                            int index)
            throws HibernateException, SQLException {

        if (value == null) {
            statement.setNull(index, Types.VARCHAR);
        } else {
            statement.setString(index, value.toString());
        }
    }
}
This code is basically the same as the UserType implemented earlier. The implementation
of nullSafeGet() and nullSafeSet() is again the most interesting part,
containing the logic for the conversion.
One problem you might run into is using enumerated types in Hibernate queries.
Consider the following query in HQL that retrieves all comments rated Low:
Query q =
session.createQuery("from Comment c where c.rating = Rating.LOW");
This query doesn't work, because Hibernate doesn't know what to do with
Rating.LOW and will try to use it as a literal. We have to use a bind parameter and
set the rating value for the comparison dynamically (which is what we need anyway
most of the time, for other reasons):
Query q =
session.createQuery("from Comment c where c.rating = :rating");
q.setParameter("rating",
Rating.LOW,
Hibernate.custom(RatingUserType.class));
The last line in this example uses the static helper method Hibernate.custom() to
convert the custom mapping type to a Hibernate Type: a simple way to tell Hibernate
about our enumeration mapping and how to deal with the Rating.LOW value.

If you use enumerated types in many places in your application, you may want
to take this example UserType and make it more generic. JDK 1.5 introduces a
new language feature for defining enumerated types, and we recommend using a
custom mapping type until Hibernate gets native support for JDK 1.5 features.
(Note that the Hibernate2 PersistentEnum is considered deprecated and
shouldn't be used.)
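A sketch of one possible generalization (our own; the abstract class, its name,
and the reflective lookup are assumptions, not part of Hibernate): subclasses
name the enumerated class, and values are resolved through the same static
getInstance(String) convention that Rating follows.

package auction.customtypes;

import ...;

// Hypothetical reusable base class for string-backed enumerated types
public abstract class StringEnumUserType implements UserType {

    private static final int[] SQL_TYPES = {Types.VARCHAR};

    // Subclasses return the enumeration class, e.g. Rating.class
    protected abstract Class enumClass();

    public int[] sqlTypes() { return SQL_TYPES; }
    public Class returnedClass() { return enumClass(); }
    public boolean equals(Object x, Object y) { return x == y; }
    public Object deepCopy(Object value) { return value; }
    public boolean isMutable() { return false; }

    public Object nullSafeGet(ResultSet resultSet,
                              String[] names,
                              Object owner)
            throws HibernateException, SQLException {
        String name = resultSet.getString(names[0]);
        if (resultSet.wasNull()) return null;
        try {
            // Assumes the enumeration provides a static getInstance(String)
            // factory, as Rating does
            Method factory =
                enumClass().getMethod("getInstance", new Class[] { String.class });
            return factory.invoke(null, new Object[] { name });
        } catch (Exception e) {
            throw new HibernateException("Couldn't resolve enum value: " + name);
        }
    }

    public void nullSafeSet(PreparedStatement statement,
                            Object value,
                            int index)
            throws HibernateException, SQLException {
        if (value == null) {
            statement.setNull(index, Types.VARCHAR);
        } else {
            statement.setString(index, value.toString());
        }
    }
}

A RatingUserType would then shrink to a trivial subclass that returns
Rating.class from enumClass().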
We've now discussed all kinds of Hibernate mapping types: built-in mapping
types, user-defined custom types, and even components (chapter 3). They're all
considered value types, because they map objects of value type (not entities) to the
database. We're now ready to explore collections of value-typed instances.
6.2 Mapping collections of value types
You've already seen collections in the context of entity relationships in chapter 3.
In this section, we discuss collections that contain instances of a value type, including
collections of components. Along the way, you'll meet some of the more
advanced features of Hibernate collection mappings, which can also be used for
collections that represent entity associations, as discussed later in this chapter.
6.2.1 Sets, bags, lists, and maps
Suppose that our sellers can attach images to Items. An image is accessible only via
the containing item; it doesn't need to support associations to any other entity in
our system. In this case, it isn't unreasonable to model the image as a value type.
Item would have a collection of images that Hibernate would consider to be part
of the Item, without its own lifecycle.

We'll run through several ways to implement this behavior using Hibernate. For
now, let's assume that the image is stored somewhere on the filesystem and that we
keep just the filename in the database. How images are stored and loaded with this
approach isn't discussed here.
Using a set
The simplest implementation is a Set of String filenames. We add a collection
property to the Item class:
private Set images = new HashSet();
...
public Set getImages() {
    return this.images;
}
public void setImages(Set images) {
    this.images = images;
}
We use the following mapping in the Item:

<set name="images"
     lazy="true"
     table="ITEM_IMAGE">
    <key column="ITEM_ID"/>
    <element type="string" column="FILENAME" not-null="true"/>
</set>

The image filenames are stored in a table named ITEM_IMAGE. From the database's
point of view, this table is separate from the ITEM table; but Hibernate hides this
fact from us, creating the illusion that there is a single entity. The <key> element
declares the foreign key, ITEM_ID, of the parent entity. The <element> tag declares
this collection as a collection of value type instances: in this case, of strings.

A set can't contain duplicate elements, so the primary key of the ITEM_IMAGE
table consists of both columns in the <set> declaration: ITEM_ID and FILENAME. See
figure 6.1 for a table schema example.
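To see the mapping in action, here is a minimal usage sketch (assuming an open
Session and an existing item identifier); Hibernate inserts one ITEM_IMAGE row
per element at commit:

Transaction tx = session.beginTransaction();
Item item = (Item) session.get(Item.class, itemId);
// Plain strings; the collection table is invisible to the Java code
item.getImages().add("fooimage1.jpg");
item.getImages().add("fooimage2.jpg");
tx.commit();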
It doesn't seem likely that we would allow the user to attach the same image
more than once, but suppose we did. What kind of mapping would be appropriate?
Using a bag
An unordered collection that permits duplicate elements is called a bag. Curiously,
the Java Collections framework doesn't define a Bag interface. Hibernate lets you
use a List in Java to simulate bag behavior; this is consistent with common usage
in the Java community. Note, however, that the List contract specifies that a list is
an ordered collection; Hibernate won't preserve the ordering when persisting a
List with bag semantics. To use a bag, change the type of images in Item from Set
to List, probably using ArrayList as an implementation. (You could also use a
Collection as the type of the property.)
Figure 6.1 Table structure and example data for a collection of strings:

ITEM                      ITEM_IMAGE
ITEM_ID  NAME             ITEM_ID  FILENAME
1        Foo              1        fooimage1.jpg
2        Bar              1        fooimage2.jpg
3        Baz              2        barimage1.jpg
Changing the table definition from the previous section to permit duplicate FILENAMEs
requires another primary key. An <idbag> mapping lets us attach a surrogate
key column to the collection table, much like the synthetic identifiers we use for
entity classes:

<idbag name="images"
       lazy="true"
       table="ITEM_IMAGE">
    <collection-id type="long" column="ITEM_IMAGE_ID">
        <generator class="sequence"/>
    </collection-id>
    <key column="ITEM_ID"/>
    <element type="string" column="FILENAME" not-null="true"/>
</idbag>

In this case, the primary key is the generated ITEM_IMAGE_ID. You can see a graphical
view of the database tables in figure 6.2.

Figure 6.2 Table structure using a bag with a surrogate primary key:

ITEM_IMAGE
ITEM_IMAGE_ID  ITEM_ID  FILENAME
1              1        fooimage1.jpg
2              1        fooimage1.jpg
3              2        barimage1.jpg
You might be wondering why the Hibernate mapping element is named <idbag> and
if there is also a <bag> mapping. You'll soon learn more about bags, but a more likely
scenario involves preserving the order in which images were attached to the Item.
There are a number of good ways to do this; one way is to use a real list instead of
a bag.
Using a list
A <list> mapping requires the addition of an index column to the database table.
The index column defines the position of the element in the collection. Thus,
Hibernate can preserve the ordering of the collection elements when retrieving
the collection from the database if we map the collection as a <list>:

<list name="images"
      lazy="true"
      table="ITEM_IMAGE">
    <key column="ITEM_ID"/>
    <index column="POSITION"/>
    <element type="string" column="FILENAME" not-null="true"/>
</list>
The primary key consists of the ITEM_ID and POSITION columns. Notice that duplicate
elements (FILENAME) are allowed, which is consistent with the semantics of a
list. (We don't have to change the Item class; the types we used earlier for the bag
are the same.)
If the collection is [fooimage1.jpg, fooimage1.jpg, fooimage2.jpg], the POSITION
column contains the values 0, 1, and 2, as shown in figure 6.3.
Alternatively, we could use a Java array instead of a list. Hibernate supports this
usage; indeed, the details of an array mapping are virtually identical to those of a
list. However, we very strongly recommend against the use of arrays, since arrays
can't be lazily initialized (there is no way to proxy an array at the virtual machine
level).
Now, suppose that our images have user-entered names in addition to the filenames.
One way to model this in Java would be to use a Map, with names as keys and
filenames as values.
Using a map
Mapping a <map> is similar to mapping a list; the map key (here, the user-entered
image name) is declared with an <index> element:

<map name="images"
     lazy="true"
     table="ITEM_IMAGE">
    <key column="ITEM_ID"/>
    <index column="IMAGE_NAME" type="string"/>
    <element type="string" column="FILENAME" not-null="true"/>
</map>

The primary key of the collection table consists of the ITEM_ID and IMAGE_NAME
columns; the IMAGE_NAME column stores the keys of the map.

Hibernate also distinguishes sorted collections, which are sorted in memory by
Hibernate, from ordered collections, which are ordered by the database with an
SQL order by clause when they're retrieved. A sorted map uses the sort attribute:

<map name="images"
     lazy="true"
     table="ITEM_IMAGE"
     sort="natural">
    <key column="ITEM_ID"/>
    <index column="IMAGE_NAME" type="string"/>
    <element type="string" column="FILENAME" not-null="true"/>
</map>
By specifying sort="natural", we tell Hibernate to use a SortedMap, sorting the
image names according to the compareTo() method of java.lang.String. If you
want some other sorted orderfor example, reverse alphabetical orderyou can
specify the name of a class that implements java.util.Comparator in the sort
attribute. For example:
lazy="true"
table="ITEM_IMAGE"
sort="auction.util.comparator.ReverseStringComparator">
The behavior of a Hibernate sorted map is identical to java.util.TreeMap. A
sorted set (which behaves like java.util.TreeSet) is mapped in a similar way:

<set name="images"
     lazy="true"
     table="ITEM_IMAGE"
     sort="natural">
    <key column="ITEM_ID"/>
    <element type="string" column="FILENAME" not-null="true"/>
</set>
Bags can't be sorted (there is no TreeBag, unfortunately), nor can lists; the order
of list elements is defined by the list index.
Alternatively, you might choose to use an ordered map, using the sorting capabilities
of the database instead of (probably less efficient) in-memory sorting:

<map name="images"
     lazy="true"
     table="ITEM_IMAGE"
     order-by="IMAGE_NAME asc">
    <key column="ITEM_ID"/>
    <index column="IMAGE_NAME" type="string"/>
    <element type="string" column="FILENAME" not-null="true"/>
</map>
The expression in the order-by attribute is a fragment of an SQL order by clause.
In this case, we order by the IMAGE_NAME column, in ascending order. You can even
write SQL function calls in the order-by attribute:

<map name="images"
     lazy="true"
     table="ITEM_IMAGE"
     order-by="lower(FILENAME) asc">
    <key column="ITEM_ID"/>
    <index column="IMAGE_NAME" type="string"/>
    <element type="string" column="FILENAME" not-null="true"/>
</map>
Notice that you can order by any column of the collection table. Both sets and bags
accept the order-by attribute; but again, lists don't. This example uses a bag:

<idbag name="images"
       lazy="true"
       table="ITEM_IMAGE"
       order-by="ITEM_IMAGE_ID desc">
    <collection-id type="long" column="ITEM_IMAGE_ID">
        <generator class="sequence"/>
    </collection-id>
    <key column="ITEM_ID"/>
    <element type="string" column="FILENAME" not-null="true"/>
</idbag>
Under the covers, Hibernate uses a LinkedHashSet and a LinkedHashMap to implement
ordered sets and maps, so this functionality is only available in JDK 1.4 or
later. Ordered bags are possible in all JDK versions.
In a real system, it's likely that we'd need to keep more than just the image name
and filename; we'd probably need to create an Image class for this extra information.
We could map Image as an entity class; but since we've already concluded that
this isn't absolutely necessary, let's see how much further we can get without an
Image entity (which would require an association mapping and more complex lifecycle
handling).
In chapter 3, you saw that Hibernate lets you map user-defined classes as components,
which are considered to be value types. This is still true even when component
instances are collection elements.
Collections of components
Our Image class defines the properties name, filename, sizeX, and sizeY. It has a single
association, with its parent Item class, as shown in figure 6.5 (Collection of
Image components in Item).

As you can see from the aggregation association style (the black diamond),
Image is a component of Item, and Item is the entity that is responsible for the lifecycle
of Image. References to images aren't shared, so our first choice is a Hibernate
component mapping. The multiplicity of the association further declares this association
as many-valued; that is, many (or zero) Images for the same Item.
Writing the component class
First, we implement the Image class. This is just a POJO, with nothing special to consider.
As you know from chapter 3, component classes don't have an identifier
property. However, we must implement equals() (and hashCode()) to compare the
name, filename, sizeX, and sizeY properties, to allow Hibernate's dirty checking to
function correctly. Strictly speaking, implementing equals() and hashCode() isn't
required for all component classes. However, we recommend it for any component
class, because the implementation is straightforward and "better safe than sorry" is
a good motto.
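The class itself isn't printed here; a minimal sketch consistent with the four
properties named above might look like this:

public class Image implements Serializable {

    private String name;
    private String filename;
    private int sizeX;
    private int sizeY;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getFilename() { return filename; }
    public void setFilename(String filename) { this.filename = filename; }

    public int getSizeX() { return sizeX; }
    public void setSizeX(int sizeX) { this.sizeX = sizeX; }

    public int getSizeY() { return sizeY; }
    public void setSizeY(int sizeY) { this.sizeY = sizeY; }

    // Value-based equality over all four properties, as the text requires
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Image)) return false;
        Image other = (Image) o;
        return sizeX == other.sizeX
            && sizeY == other.sizeY
            && name.equals(other.name)
            && filename.equals(other.filename);
    }

    public int hashCode() {
        int result = name.hashCode();
        result = 29 * result + filename.hashCode();
        result = 29 * result + sizeX;
        result = 29 * result + sizeY;
        return result;
    }
}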
The Item class hasn't changed: it still has a Set of images. Of course, the objects
in this collection are no longer Strings. Let's map this to the database.
Mapping the collection
Collections of components are mapped similarly to other collections of value type
instances. The only difference is the use of <composite-element> in place of the
familiar <element> tag. An ordered set of images could be mapped like this:

<set name="images"
     lazy="true"
     table="ITEM_IMAGE"
     order-by="IMAGE_NAME asc">
    <key column="ITEM_ID"/>
    <composite-element class="Image">
        <property name="name" column="IMAGE_NAME" not-null="true"/>
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX" not-null="true"/>
        <property name="sizeY" column="SIZEY" not-null="true"/>
    </composite-element>
</set>
This is a set, so the primary key consists of the key column and all element columns:
ITEM_ID, IMAGE_NAME, FILENAME, SIZEX, and SIZEY. Since these columns all appear
in the primary key, we declare them with not-null="true". (This is clearly a disadvantage
of this particular mapping.)
Bidirectional navigation
The association from Item to Image is unidirectional. If the Image class also
declared a property named item, holding a reference back to the owning Item,
we'd add a <parent> tag to the mapping:

<set name="images"
     lazy="true"
     table="ITEM_IMAGE"
     order-by="IMAGE_NAME asc">
    <key column="ITEM_ID"/>
    <composite-element class="Image">
        <parent name="item"/>
        <property name="name" column="IMAGE_NAME" not-null="true"/>
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX" not-null="true"/>
        <property name="sizeY" column="SIZEY" not-null="true"/>
    </composite-element>
</set>
True bidirectional navigation is impossible, however. You can't retrieve an Image
independently and then navigate back to its parent Item. This is an important
issue: you'll be able to load Image instances by querying for them, but components,
like all value types, are retrieved by value. The Image objects won't have a reference
to the parent (the property is null). You should use a full parent/child entity association,
as described in chapter 3, if you need this kind of functionality.
Still, declaring all properties as not-null is something you should probably
avoid. We need a better primary key for the ITEM_IMAGE table.
Avoiding not-null columns
If a set of Images isn't what we need, other collection styles are possible. For example,
an <idbag> offers a surrogate collection key:

<idbag name="images"
       lazy="true"
       table="ITEM_IMAGE"
       order-by="IMAGE_NAME asc">
    <collection-id type="long" column="ITEM_IMAGE_ID">
        <generator class="sequence"/>
    </collection-id>
    <key column="ITEM_ID"/>
    <composite-element class="Image">
        <property name="name" column="IMAGE_NAME"/>
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</idbag>
This time, the primary key is the ITEM_IMAGE_ID column, and it isn't important that
we implement equals() and hashCode() (at least, Hibernate doesn't require it).
Nor do we need to declare the properties with not-null="true"; they may be nullable
in the case of an idbag, as shown in figure 6.6.
We should point out that there isn't a great deal of difference between this bag
mapping and a standard parent/child entity relationship. The tables are identical,
and even the Java code is extremely similar; the choice is mainly a matter of taste.
Of course, a parent/child relationship supports shared references to the child
entity and true bidirectional navigation.
We could even remove the name property from the Image class and again use the
image name as the key of a map:

<map name="images"
     lazy="true"
     table="ITEM_IMAGE"
     order-by="IMAGE_NAME asc">
    <key column="ITEM_ID"/>
    <index column="IMAGE_NAME" type="string"/>
    <composite-element class="Image">
        <property name="filename" column="FILENAME" not-null="true"/>
        <property name="sizeX" column="SIZEX"/>
        <property name="sizeY" column="SIZEY"/>
    </composite-element>
</map>
Figure 6.6 Collection of Image components using a bag with a surrogate key:

ITEM_IMAGE
ITEM_IMAGE_ID  ITEM_ID  IMAGE_NAME   FILENAME
1              1        Foo Image 1  fooimage1.jpg
2              1        Foo Image 1  fooimage1.jpg
3              2        Bar Image 1  barimage1.jpg
As before, the primary key is composed of ITEM_ID and IMAGE_NAME.
A composite element class like Image isn't limited to simple properties of basic
type like filename. It may contain other components, declared with the
<nested-composite-element> declaration, and even <many-to-one> associations to
entities. It may not own collections, however. A composite element with a many-to-one
association is useful, and we'll come back to this kind of mapping later in this chapter.
We're finally finished with value types; we'll continue with entity association
mapping techniques. The simple parent/child association we mapped in chapter
3 is just one of many possible association mapping styles. Most of them are considered
exotic and are rare in practice.
6.3 Mapping entity associations
When we use the word associations, we're always referring to relationships between
entities. In chapter 3, we demonstrated a unidirectional many-to-one association,
made it bidirectional, and finally turned it into a parent/child relationship
(one-to-many and many-to-one).
One-to-many associations are easily the most important kind of association. In
fact, we go so far as to discourage the use of more exotic association styles when a
simple bidirectional many-to-one/one-to-many will do the job. In particular, a
many-to-many association may always be represented as two many-to-one associations
to an intervening class. This model is usually more easily extensible, so we
tend not to use many-to-many associations in our applications.
Armed with this disclaimer, let's investigate Hibernate's rich association mappings,
starting with one-to-one associations.
6.3.1 One-to-one associations
We argued in chapter 3 that the relationships between User and Address (the user
has both a billingAddress and a homeAddress) were best represented using
<component> mappings. This is usually the simplest way to represent one-to-one
relationships, since the lifecycle of one class is almost always dependent on the lifecycle
of the other class, and the association is a composition.
But what if we want a dedicated table for Address and to map both User and
Address as entities? Then the classes have a true one-to-one association. In this
case, we start with the following mapping for Address:

<class name="Address" table="ADDRESS">
    <id name="id" column="ADDRESS_ID">
        <generator class="native"/>
    </id>
    <property name="street"/>
    <property name="zipcode"/>
    <property name="city"/>
</class>

Note that Address now requires an identifier property; it's no longer a component
class. There are two different ways to represent a one-to-one association to
this Address in Hibernate. The first approach adds a foreign key column to the
USER table.
Using a foreign key association
The easiest way to represent the association from User to its billingAddress is to
use a <many-to-one> mapping with a unique constraint on the foreign key. This may
surprise you, since many doesn't seem to be a good description of either end of a
one-to-one association! However, from Hibernate's point of view, there isn't much
difference between the two kinds of foreign key associations. So, we add a foreign
key column named BILLING_ADDRESS_ID to the USER table and map it as follows:

<many-to-one name="billingAddress"
             class="Address"
             column="BILLING_ADDRESS_ID"
             cascade="save-update"/>
Note that we've chosen save-update as the cascade style. This means the Address
will become persistent when we create an association from a persistent User. Probably,
cascade="all" makes sense for this association, since deletion of the User
should result in deletion of the Address. (Remember that Address now has its own
entity lifecycle.)
Our database schema still allows duplicate values in the BILLING_ADDRESS_ID column
of the USER table, so two users could have a reference to the same address. To
make this association truly one-to-one, we add unique="true" to the <many-to-one>
element, constraining the relational model so that there can be only one user
per address:

<many-to-one name="billingAddress"
             class="Address"
             column="BILLING_ADDRESS_ID"
             cascade="all"
             unique="true"/>
This change adds a unique constraint to the BILLING_ADDRESS_ID column in the
DDL generated by Hibernate, resulting in the table structure illustrated by
figure 6.7.
But what if we want this association to be navigable from Address to User in Java?
From chapter 3, you know how to turn it into a bidirectional one-to-many collection;
but we've decided that each Address has just one User, so this can't be the
right solution. We don't want a collection of users in the Address class. Instead, we
add a property named user (of type User) to the Address class, and map it like so
in the mapping of Address:

<one-to-one name="user"
            class="User"
            property-ref="billingAddress"/>
This mapping tells Hibernate that the user association in Address is the reverse
direction of the billingAddress association in User.
In code, we create the association between the two objects as follows:
Address address = new Address();
address.setStreet("646 Toorak Rd");
address.setCity("Toorak");
address.setZipcode("3000");
Transaction tx = session.beginTransaction();
User user = (User) session.get(User.class, userId);
address.setUser(user);
user.setBillingAddress(address);
tx.commit();
Figure 6.7 A one-to-one association with an extra foreign key column:

ADDRESS                        USER
ADDRESS_ID <<PK>>              USER_ID <<PK>>
STREET                         BILLING_ADDRESS_ID <<FK>>
ZIPCODE                        FIRSTNAME
CITY                           LASTNAME
                               USERNAME
                               PASSWORD
                               EMAIL
                               RANKING
                               CREATED
To finish the mapping, we have to map the homeAddress property of User. This is
easy enough: we add another <many-to-one> element to the User metadata, mapping
a new foreign key column, HOME_ADDRESS_ID:

<many-to-one name="homeAddress"
             class="Address"
             column="HOME_ADDRESS_ID"
             cascade="save-update"
             unique="true"/>
The USER table now defines two foreign keys referencing the primary key of the
ADDRESS table: HOME_ADDRESS_ID and BILLING_ADDRESS_ID.
Unfortunately, we can't make both the billingAddress and homeAddress associations
bidirectional, since we don't know whether a particular address is a billing address
or a home address. (We can't decide which property name, billingAddress or
homeAddress, to use for the property-ref attribute in the mapping of the user
property.) We could try making Address an abstract class with subclasses HomeAddress
and BillingAddress and mapping the associations to the subclasses. This
approach would work, but it's complex and probably not sensible in this case.
Our advice is to avoid defining more than one one-to-one association between
any two classes. If you must, leave the associations unidirectional. If you don't have
more than one (if there really is exactly one instance of Address per User), there
is an alternative approach to the one we've just shown. Instead of defining a foreign
key column in the USER table, you can use a primary key association.
Using a primary key association
Two tables related by a primary key association share the same primary key values:
the primary key of one table is also a foreign key of the other. The main difficulty
with this approach is ensuring that associated instances are assigned the same primary
key value when the objects are saved. Before we try to solve this problem, let's
see how we would map the primary key association.
For a primary key association, both ends of the association are mapped using the
<one-to-one> declaration. This also means that we can no longer map both the billing
and the home address, only one address property. Each row in the USER table has a
corresponding row in the ADDRESS table. Two addresses would require an additional
table, and this mapping style therefore wouldn't be adequate. Let's call this
single address property address and map it in the User metadata:

<one-to-one name="address"
            class="Address"
            cascade="save-update"/>
Next, here's the user property of Address:

<one-to-one name="user"
            class="User"
            constrained="true"/>
The most interesting thing here is the use of constrained="true". It tells Hibernate
that there is a foreign key constraint on the primary key of ADDRESS that refers
to the primary key of USER.
Now we must ensure that newly saved instances of Address are assigned the same
identifier value as their User. We use a special Hibernate identifier-generation strategy
called foreign:

<class name="Address" table="ADDRESS">
    <id name="id" column="ADDRESS_ID">
        <generator class="foreign">
            <param name="property">user</param>
        </generator>
    </id>
    ...
    <one-to-one name="user"
                class="User"
                constrained="true"/>
</class>
The property parameter of the foreign generator names a one-to-one
association of the Address class; in this case, the user association. The foreign
generator inspects the associated object (the User) and uses its identifier as the
identifier of the new Address. Look at the table structure in figure 6.8.
The code to create the object association is unchanged for a primary key association;
it's the same code we used earlier for the many-to-one mapping style.
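For completeness, here is a sketch of that unchanged code (assuming the renamed
address property and the save-update cascade declared above); the foreign
generator copies the User's identifier onto the Address when the cascade saves it:

Address address = new Address();
address.setStreet("646 Toorak Rd");
address.setCity("Toorak");
address.setZipcode("3000");

Transaction tx = session.beginTransaction();
User user = (User) session.get(User.class, userId);
address.setUser(user);
user.setAddress(address);   // cascade="save-update" saves the Address
tx.commit();                // ADDRESS_ID now equals USER_ID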
Figure 6.8 The tables for a one-to-one association with shared primary key values:

ADDRESS                        USER
ADDRESS_ID <<PK>> <<FK>>       USER_ID <<PK>>
STREET                         FIRSTNAME
ZIPCODE                        LASTNAME
CITY                           USERNAME
                               PASSWORD
                               EMAIL
                               RANKING
                               CREATED
There is now just one remaining entity association multiplicity we haven't discussed:
many-to-many.
6.3.2 Many-to-many associations
The association between Category and Item is a many-to-many association, as you
can see in figure 6.9.
In a real system, we might not use a many-to-many association. In our experience,
there is almost always other information that must be attached to each link
between associated instances (for example, the date and time when an item was set
in a category), and the best way to represent this information is via an intermediate
association class. In Hibernate, we could map the association class as an entity and
use two one-to-many associations for either side. Perhaps more conveniently, we
could also use a composite element class, a technique we'll show you later.

Nevertheless, it's the purpose of this section to implement a real many-to-many
entity association. Let's start with a unidirectional example.
A unidirectional many-to-many association
If you only require unidirectional navigation, the mapping is straightforward. Unidirectional
many-to-many associations are no more difficult than the collections of
value type instances we covered previously. For example, if the Category has a set
of Items, we can use this mapping:

<set name="items"
     table="CATEGORY_ITEM"
     lazy="true"
     cascade="save-update">
    <key column="CATEGORY_ID"/>
    <many-to-many class="Item" column="ITEM_ID"/>
</set>

Figure 6.9 A many-to-many valued association between Category and Item
Just like a collection of value type instances, a many-to-many association has its own
table, the link table or association table. In this case, the link table has two columns:
the foreign keys of the CATEGORY and ITEM tables. The primary key is composed of
both columns. The full table structure is shown in figure 6.10.
We can also use a bag with a separate primary key column:

<idbag name="items"
       table="CATEGORY_ITEM"
       lazy="true"
       cascade="save-update">
    <collection-id type="long" column="CATEGORY_ITEM_ID">
        <generator class="sequence"/>
    </collection-id>
    <key column="CATEGORY_ID"/>
    <many-to-many class="Item" column="ITEM_ID"/>
</idbag>
As usual with an <idbag> mapping, the primary key is a surrogate key column,
CATEGORY_ITEM_ID. Duplicate links are therefore allowed; the same Item can be
added twice to a particular Category. (This doesn't seem to be a very useful feature.)
We can even use an indexed collection (a map or list). The following example
uses a list:

<list name="items"
      table="CATEGORY_ITEM"
      lazy="true"
      cascade="save-update">
    <key column="CATEGORY_ID"/>
    <index column="DISPLAY_POSITION"/>
    <many-to-many class="Item" column="ITEM_ID"/>
</list>
Figure 6.10 Many-to-many entity association mapped to an association table:

CATEGORY                       CATEGORY_ITEM                   ITEM
CATEGORY_ID <<PK>>             CATEGORY_ID <<PK>> <<FK>>       ITEM_ID <<PK>>
PARENT_CATEGORY_ID <<FK>>      ITEM_ID <<PK>> <<FK>>           NAME
NAME                                                           DESCRIPTION
CREATED                                                        INITIAL_PRICE
                                                               ...
The primary key consists of the CATEGORY_ID and DISPLAY_POSITION columns. This
mapping guarantees that every Item knows its position in the Category.
Creating an object association is easy:
Transaction tx = session.beginTransaction();
Category cat = (Category) session.get(Category.class, categoryId);
Item item = (Item) session.get(Item.class, itemId);
cat.getItems().add(item);
tx.commit();
Bidirectional many-to-many associations are slightly more difficult.
A bidirectional many-to-many association
When we mapped a bidirectional one-to-many association in chapter 3 (section 3.7,
"Introducing associations"), we explained why one end of the association must be
mapped with inverse="true". We encourage you to review that explanation now.
The same principle applies to bidirectional many-to-many associations: each row
of the link table is represented by two collection elements, one element at each
end of the association. An association between an Item and a Category is represented
in memory by the Item instance belonging to the items collection of the
Category but also by the Category instance belonging to the categories collection
of the Item.
Before we discuss the mapping of this bidirectional case, you must be aware that
the code to create the object association also changes:

cat.getItems().add(item);
item.getCategories().add(cat);
As always, a bidirectional association (no matter what the multiplicity) requires that
you set both ends of the association.
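A common way to avoid forgetting one side is a convenience method on one of
the classes; a sketch (our own addition, not from the listings in this chapter):

// On Category: sets both ends of the bidirectional association in one call
public void addItem(Item item) {
    getItems().add(item);
    item.getCategories().add(this);
}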
When you map a bidirectional many-to-many association, you must declare one
end of the association using inverse="true" to define which side's state is used to
update the link table. You can choose for yourself which end that should be.
Recall this mapping for the items collection from the previous section:

<set name="items"
     table="CATEGORY_ITEM"
     lazy="true"
     cascade="save-update">
    <key column="CATEGORY_ID"/>
    <many-to-many class="Item" column="ITEM_ID"/>
</set>
We can reuse this mapping for the Category end of the bidirectional association.
We map the Item end as follows:

<set name="categories"
     table="CATEGORY_ITEM"
     lazy="true"
     inverse="true"
     cascade="save-update">
    <key column="ITEM_ID"/>
    <many-to-many class="Category" column="CATEGORY_ID"/>
</set>
Note the use of inverse="true". Once again, this setting tells Hibernate to ignore
changes made to the categories collection and use the other end of the association
(the items collection) as the representation that should be synchronized with
the database if we manipulate the association in Java code.
We've chosen cascade="save-update" for both ends of the collection; this isn't
unreasonable. On the other hand, cascade="all", cascade="delete", and
cascade="all-delete-orphan" aren't meaningful for many-to-many associations,
since an instance with potentially many parents shouldn't be deleted when just one
parent is deleted.
What kinds of collections may be used for bidirectional many-to-many associations?
Do you need to use the same type of collection at each end? It's reasonable
to use, for example, a list at the end not marked inverse="true" (or explicitly set
to false) and a bag at the end that is marked inverse="true".
You can use any of the mappings we've shown for unidirectional many-to-many
associations for the noninverse end of the bidirectional association: <set>,
<idbag>, <list>, and <map> are all possible, and the mappings are identical to those
shown previously.
For the inverse end, <set> is acceptable, as is the following bag mapping:

<bag name="categories"
     table="CATEGORY_ITEM"
     lazy="true"
     inverse="true"
     cascade="save-update">
    <key column="ITEM_ID"/>
    <many-to-many class="Category" column="CATEGORY_ID"/>
</bag>
This is the first time we've shown the <bag> declaration. It's similar to an <idbag>
mapping, but it doesn't involve a surrogate key column. It lets you use a List (with
bag semantics) in a persistent class instead of a Set. Thus it's preferred if the noninverse
side of a many-to-many association mapping uses a map, list, or bag
(which all permit duplicates). Remember that a bag doesn't preserve the order of
elements, despite the List type in the Java property definition.
No other mappings should be used for the inverse end of a many-to-many association.
Indexed collections (lists and maps) can't be used, since Hibernate won't
initialize or maintain the index column if inverse="true". This is also true, and
important to remember, for all other association mappings involving collections:
an indexed collection (or even an array) can't be set to inverse="true".
We already frowned at the use of a many-to-many association and suggested the
use of composite element mappings as an alternative. Let's see how this works.
Using a collection of components for a many-to-many association
Suppose we need to record some information each time we add an Item to a Category.
For example, we might need to store the date and the name of the user who
added the item to this category. We need a Java class to represent this information:

public class CategorizedItem {
    private String username;
    private Date dateAdded;
    private Item item;
    private Category category;
    ...
}
(We omitted the accessors and equals() and hashCode() methods, but they would
be necessary for this component class.)
We map the items collection on Category as follows:

<set name="items" lazy="true" table="CATEGORY_ITEM">
    <key column="CATEGORY_ID"/>
    <composite-element class="CategorizedItem">
        <property name="username" column="USERNAME" not-null="true"/>
        <property name="dateAdded" column="DATE_ADDED" not-null="true"/>
        <many-to-one name="item"
                     class="Item"
                     column="ITEM_ID"
                     not-null="true"/>
    </composite-element>
</set>
We use the <many-to-one> element to declare the association to Item, and we use
<property> mappings to declare the extra association-related information. The
link table now has four columns: CATEGORY_ID, ITEM_ID, USERNAME, and DATE_ADDED.
The columns of the CategorizedItem properties should never be null: otherwise
we can't identify a single link entry, because they're all part of the table's primary
key. You can see the table structure in figure 6.11.
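Creating a link now means instantiating the component; a usage sketch
(assuming the accessors omitted from the class above):

Transaction tx = session.beginTransaction();
Category cat = (Category) session.get(Category.class, categoryId);
Item item = (Item) session.get(Item.class, itemId);

// The component carries the extra link information
CategorizedItem link = new CategorizedItem();
link.setItem(item);
link.setUsername("johndoe");
link.setDateAdded(new Date());

cat.getItems().add(link);
tx.commit();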
In fact, rather than mapping just the username, we might like to keep an
actual reference to the User object. In this case, we have the following ternary
association mapping:

<set name="items" lazy="true" table="CATEGORY_ITEM">
    <key column="CATEGORY_ID"/>
    <composite-element class="CategorizedItem">
        <property name="dateAdded" column="DATE_ADDED" not-null="true"/>
        <many-to-one name="item"
                     class="Item"
                     column="ITEM_ID"
                     not-null="true"/>
        <many-to-one name="user"
                     class="User"
                     column="USER_ID"
                     not-null="true"/>
    </composite-element>
</set>
Figure 6.11 Many-to-many entity association table using a component:

CATEGORY                       CATEGORY_ITEM                   ITEM
CATEGORY_ID <<PK>>             CATEGORY_ID <<PK>> <<FK>>       ITEM_ID <<PK>>
PARENT_CATEGORY_ID <<FK>>      ITEM_ID <<PK>> <<FK>>           NAME
NAME                           USERNAME <<PK>>                 DESCRIPTION
CREATED                        DATE_ADDED <<PK>>               INITIAL_PRICE
                                                               ...
This is a fairly exotic beast! If you find yourself with a mapping like this, you should
ask whether it might be better to map CategorizedItem as an entity class and use
two one-to-many associations. Furthermore, there is no way to make this mapping
bidirectional: a component (such as CategorizedItem) can't, by definition, have
shared references. You can't navigate from Item to CategorizedItem.
We talked about some limitations of many-to-many mappings in the previous
section. One of them, the restriction to nonindexed collections for the inverse end
of an association, also applies to one-to-many associations, if they're bidirectional.
Let's take a closer look at one-to-many and many-to-one again, to refresh your
memory and to elaborate on what we discussed in chapter 3.
One-to-many associations
You already know most of what you need to know about one-to-many associations
from chapter 3. We mapped a typical parent/child relationship between two entity
persistent classes, Item and Bid. This was a bidirectional association, using a
<one-to-many> and a <many-to-one> mapping. The "many" end of this association was
implemented in Java with a Set; we had a collection of bids in the Item class. Let's
reconsider this mapping and walk through some special cases.
Using a bag with set semantics
For example, if you absolutely need a List of children in your parent Java class,
it's possible to use a <bag> mapping in place of a <set>. In our example, first we
have to replace the type of the bids collection in the Item persistent class with a
List. The mapping for the association between Item and Bid is then left essentially
unchanged:

<class name="Bid"
       table="BID">
    ...
    <many-to-one name="item"
                 column="ITEM_ID"
                 class="Item"
                 not-null="true"/>
</class>

<class name="Item"
       table="ITEM">
    ...
    <bag name="bids"
         inverse="true"
         cascade="all-delete-orphan">
        <key column="ITEM_ID"/>
        <one-to-many class="Bid"/>
    </bag>
</class>
We renamed the <set> element to <bag>, making no other changes. Note, however,
that this change isn't useful: the underlying table structure doesn't support duplicates,
so the <bag> mapping results in an association with set semantics. Some tastes
prefer the use of Lists even for associations with set semantics, but ours doesn't, so
we recommend using <set> mappings for typical parent/child relationships.
The obvious (and wrong) solution would be to use a real <list> mapping for
the bids, with an additional column holding the position of the elements. Remember
the Hibernate limitation we introduced earlier in this chapter: you can't use
indexed collections on the inverse side of an association. The inverse="true" side
of the association isn't considered when Hibernate saves the object state, so Hibernate
will ignore the index of the elements and not update the position column.
However, if your parent/child relationship will only be unidirectional (navigation
is only possible from parent to child), you could even use an indexed collection
type (because the many end would no longer be inverse). Good uses for
unidirectional one-to-many associations are uncommon in practice, and we don't
have one in our auction application. You may remember that we started with the
Item and Bid mapping in chapter 3, making it first unidirectional, but we quickly
introduced the other side of the mapping.
Let's find a different example to implement a unidirectional one-to-many association
with an indexed collection.
Unidirectional mapping
For the sake of this section, we now suppose that the association between Category
and Item is to be remodeled as a one-to-many association (an item now belongs to
at most one category) and, further, that the Item doesn't own a reference to its current
category. In Java code, we model this as a collection named items in the Category
class; we don't have to change anything if we don't use an indexed collection.
If items is implemented as a Set, we use the following mapping:

<set name="items" lazy="true">
    <key column="CATEGORY_ID"/>
    <one-to-many class="Item"/>
</set>
Remember that one-to-many association mappings don't need to declare a table
name. Hibernate already knows that the column names in the collection mapping
(in this case, only CATEGORY_ID) belong to the ITEM table. The table structure is
shown in figure 6.12.
The other side of the association, the Item class, has no mapping reference to
Category. We can now also use an indexed collection in the Category; for example,
after we change the items property to List: