
Persisting In-memory: When the only constant is change



Posted on January 29, 2011, by Mike Pilcher

SAND’s in-memory database optimizer is built for the dynamic requirements of today’s environments. We see customers dealing with exponentially growing data volumes, new data types arriving every day (from social feeds to mobile data to device logs), and all of it moving at ever greater velocity. To solve this, in-memory database vendors have one answer for their customers: “buy more memory.” Yet customers are asking: “isn’t there somewhere to go other than cap in hand to the server vendor for more hardware to make the database fit?” Are they simply jumping from the row-store frying pan into the in-memory fire?

To paraphrase the Clinton campaign: “It’s the agility, stupid.” Current competitive in-memory data models are more inflexible and fragile than a piece of Waterford crystal. They cannot absorb new data at the rate the business needs, and so users are made to wait. And wait. And wait some more. There is a way to solve this. You can become the hardware vendor’s best friend (and the software vendor’s, since licensing is by blade, CPU, core or memory capacity). You can buy much, much, much more memory than you need. You can increase the size of the memory bucket so you can keep adding water and none spills out. You will have to suck up resources and cash that would otherwise go to more strategically useful projects, but your colleagues probably will never know what you did, especially if you mutter “priority infrastructure” a hundred times. You can keep patching the problem; that’s certainly one way to go. Or you can fix it. You can solve it through innovation.

There is an alternative approach: add and change data without worrying about the size of the bucket. At SAND we deliver an in-memory database built on Persistent Virtual Memory. When you need to load more data, more quickly, than memory can hold (in effect over-filling the bucket), SAND identifies the high-value data users access regularly and ensures it persists in memory. SAND then moves lower-value data into a holding pattern, keeping it online and accessible while the in-memory database runs at full utilization.

(See our SAND Analytic Database Performance white paper for more.)
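To make the idea concrete, here is a minimal sketch of that hot/cold tiering pattern in Python. The class name, the LRU-style eviction policy and the disk-backed shelve used for the cold tier are illustrative assumptions for the sake of the example; this is not SAND’s implementation, which is described in the white paper above.

    import shelve
    from collections import OrderedDict

    class TieredStore:
        """Illustrative two-tier store: hot rows stay in memory, colder rows
        spill to disk but remain online and queryable. A sketch of the general
        hot/cold tiering idea, not SAND's actual engine."""

        def __init__(self, memory_capacity, spill_path="cold_tier.db"):
            self.memory_capacity = memory_capacity   # max rows held in RAM
            self.hot = OrderedDict()                  # in-memory tier, LRU-ordered
            self.cold = shelve.open(spill_path)       # disk tier, still accessible

        def put(self, key, row):
            self.hot[key] = row
            self.hot.move_to_end(key)                 # mark as most recently used
            # If the memory "bucket" overflows, demote the least recently used row.
            while len(self.hot) > self.memory_capacity:
                cold_key, cold_row = self.hot.popitem(last=False)
                self.cold[cold_key] = cold_row

        def get(self, key):
            if key in self.hot:                       # hot hit: served from memory
                self.hot.move_to_end(key)
                return self.hot[key]
            if key in self.cold:                      # cold hit: promote back to memory
                row = self.cold[key]
                del self.cold[key]
                self.put(key, row)
                return row
            raise KeyError(key)

        def close(self):
            self.cold.close()

    # Usage: the store never rejects a write just because memory is full.
    store = TieredStore(memory_capacity=1000)
    store.put("order:42", {"customer": "acme", "total": 99.5})
    print(store.get("order:42"))
    store.close()

The point of the sketch is simply that the memory bucket never overflows: rows that fall out of the hot tier stay online on disk and are promoted back into memory the next time users ask for them.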

Anyone who claims to know exactly how much data they will have, now and in the future, and therefore exactly how much memory they need, has never managed data directly. When you face a known unknown, when your only constant is change, you have to have an agile in-memory database approach.

Persistent Virtual Memory is it.
