
Oracle Sponge -- Now Moved To Wordpress

Please use http://oraclesponge.wordpress.com

Tuesday, July 12, 2005


My imagination has been grabbed by the idea of the TimesTen in-memory database technology recently brought in house by Oracle, and how it might fit into a data warehousing architecture.

At the moment we have repositories for Business Objects and Informatica in the same database as our data warehouse. This has long struck me as a suboptimal architecture, as rapid application interface response can be seriously harmed by the simultaneous execution of reporting or ETL queries. Isolating the two types of database access in some way seems like a beneficial move for application usability.

So given that TimesTen allows in-memory caching of a subset of Oracle database tables, I'm now wondering how practical it would be to cache a subset of the repositories, and also to cache a subset of the data warehouse dimension tables to allow more rapid retrieval of lists of values (either for display through the Web Intelligence interface or for use by the ETL process). Much depends on how transparent the technology is to the client. I do not think that it would be a trivial matter to re-point the List Of Values queries to a different DSN, although with Informatica's lookups it could be done pretty easily.
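By way of a sketch of what that caching might look like: TimesTen's Cache Connect feature lets you define a read-only cache group over an Oracle table that refreshes itself on an interval, which seems a natural fit for slowly changing dimension data feeding lists of values. The table and column names below are purely illustrative, and the exact syntax should be checked against the TimesTen documentation:

```sql
-- Illustrative only: a read-only cache group over a hypothetical
-- dimension table, auto-refreshed from the Oracle source every
-- five minutes. Names (dw.product_dim etc.) are made up.
CREATE READONLY CACHE GROUP lov_product_dim
AUTOREFRESH INTERVAL 5 MINUTES
FROM dw.product_dim
  (product_id   NUMBER NOT NULL,
   product_name VARCHAR2(100),
   PRIMARY KEY (product_id));
```

Queries for lists of values could then be pointed at the TimesTen DSN and served from memory on the BI server, while refreshes trickle through from the warehouse in the background.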

Hmmm. Well, since the software is available for download I'll have a play around with it and see how it looks.


At 7:57 AM, Blogger Jan said...

Interesting David, thanks for pointing that out! An ex-coworker who works at JPMorgan asked me about TimesTen about half a year ago. I had never heard of it. My first reaction was 'in-memory database? What about the buffer cache'? Obviously Oracle has given it some more thought...

At 10:09 AM, Blogger David Aldridge said...

They do seem to have a pretty high-class set of customers, so I guess that the technology is pretty robust. Reading through the documents they do seem to be paying good attention to such issues as replication, checkpointing, access control, optimization, and other "grown-up" subjects, so I have quite high hopes of it for my own uses.

At 6:22 AM, Blogger Erik Ykema said...

Hi, did you consider using the KEEP POOL? (see e.g. http://www.adp-gmbh.ch/ora/concepts/cache.html and http://www.quest-pipelines.com/newsletter-v4/1203_A.htm). This combination of server initialization settings and object properties allows one to _keep_ certain objects in a separately sized part of the SGA. Hence I see no need for an in-memory database in this area.
By using resource profiles (though quite tedious) one can manage and separate the performance of metadata and BI data in the same instance, though using LPARs and separate databases would be a more preferable architecture.
Interesting blog you are keeping here.
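For readers unfamiliar with the KEEP pool approach Erik mentions, the idea is to size a dedicated buffer pool and then assign the hot objects to it. A minimal sketch, assuming a 9i-or-later instance with an spfile (the sizes and object names here are invented for illustration):

```sql
-- Illustrative only: allocate a KEEP pool (hypothetical 256M size),
-- effective after restart when set via SPFILE.
ALTER SYSTEM SET db_keep_cache_size = 256M SCOPE=SPFILE;

-- Assign a hypothetical dimension table and its primary key index
-- to the KEEP pool so their blocks tend to stay cached.
ALTER TABLE dw.customer_dim   STORAGE (BUFFER_POOL KEEP);
ALTER INDEX dw.customer_dim_pk STORAGE (BUFFER_POOL KEEP);
```

The trade-off relative to TimesTen is that the KEEP pool still lives in the warehouse instance's SGA, so it shares that server's memory and CPU rather than offloading the work elsewhere.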

At 2:43 PM, Blogger David Aldridge said...


Part of the attraction of the in-memory db is that it can run on a different server to that of the database, so the execution of metadata and reporting queries is "de-coupled" not only in terms of i/o but in terms of cpu usage also. I thought hard about isolating memory and managing CPU resourcing through resource manager, but the in-memory caching of the metadata on the BI server itself seems to hold promise.

Which isn't to say that a similar effect couldn't be achieved by keeping metadata on the BI server in its own Oracle database ... it's difficult to evaluate the pros and cons until you've tried them though.

