Issue 3, 2012
The age of Big Data is here. Exploding data volumes from many different sources and formats are overwhelming "the ability of typical database software tools to capture, store, manage, and analyze"1 that data. With BigMemory, you can max out the biggest servers on the market by moving terabytes of high-value data out of slow, expensive databases and mainframes and into memory, where applications can use it effectively.
Cheap RAM + Big Data = Business Power
Exploding data creates business opportunities and problems that prevailing technology platforms cannot address. To handle data at this volume and scale, as much of it as possible must be moved out of slow, disk-based storage and into high-speed memory, where it can be accessed in real time.
How It Works
BigMemory is packaged in several ways, depending on the needs of the application. In its simplest form, as shown in Figure 1, BigMemory is an in-memory Java data store with a backing disk store for restarts. It exposes a simple put/get/search API that can be added to an application with a Java Archive (JAR) library and a few lines of configuration.
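As a concrete illustration of that API, the following is a minimal sketch of put and get against the open-source Ehcache 2.x interface that BigMemory snaps into. The cache name "bigMemoryStore" and the ehcache.xml on the classpath are illustrative assumptions rather than a prescribed configuration.

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.Element;

    public class BigMemoryPutGet {
        public static void main(String[] args) {
            // Loads ehcache.xml from the classpath; the cache and its
            // memory tiers are defined there, not in application code.
            CacheManager cacheManager = CacheManager.create();
            Cache cache = cacheManager.getCache("bigMemoryStore");

            // put: store a value under a key
            cache.put(new Element("customer:42", "Jane Doe"));

            // get: read it back
            Element element = cache.get("customer:42");
            if (element != null) {
                System.out.println(element.getObjectValue());
            }

            cacheManager.shutdown();
        }
    }

Because the API is a library call, nothing about the application's data model or deployment changes beyond adding the JAR and the configuration file.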
Understanding the Performance Gains
Data systems traditionally use a tiered store organization to automatically move data from faster, high-cost storage mechanisms to slower, low-cost (and, generally, much larger) ones. BigMemory uses a similar tiered store organization, as shown in Figure 2, to automatically move data between the different tiers as needed. The top two tiers, the Java Virtual Machine (JVM) heap memory and the in-process, off-heap BigMemory store, use the RAM on the application server host. Since application server hardware typically ships with tens of gigabytes of RAM and may be inexpensively fitted with hundreds of gigabytes or more, BigMemory efficiently stores terabytes of data in RAM where your application can most readily use it.
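The off-heap tier is what lets the store grow far beyond normal JVM heap sizes without long garbage-collection pauses. The toy sketch below is not BigMemory's implementation; it only illustrates the underlying Java mechanism of moving bytes out of garbage-collected heap objects into directly allocated native memory and back.

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    public class OffHeapSketch {
        public static void main(String[] args) {
            byte[] onHeapValue = "order:1001 -> 3 items".getBytes(StandardCharsets.UTF_8);

            // Allocate native memory outside the JVM heap; the garbage
            // collector never scans these bytes.
            ByteBuffer offHeap = ByteBuffer.allocateDirect(onHeapValue.length);
            offHeap.put(onHeapValue);

            // Copy the bytes back onto the heap only when the application
            // actually needs the value.
            offHeap.flip();
            byte[] back = new byte[offHeap.remaining()];
            offHeap.get(back);
            System.out.println(new String(back, StandardCharsets.UTF_8));
        }
    }
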
Scale Up And Out
BigMemory's tiered store organization keeps data where applications need it, available for fast, predictable access at the moment it's needed. And, because local memory is fast and increasingly cheap and abundant, BigMemory keeps as much data locally as available RAM permits.
Snaps into Any Application
BigMemory's data access API, the de facto Java-standard Ehcache interface, combines the simple get/put methods of a key/value store with powerful query, search, and analysis capabilities, giving applications unprecedented access to data that is otherwise locked away in slow, expensive databases.
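As a sketch of what that combination looks like in code, the example below runs a query over an in-memory cache using the Ehcache Search API. It assumes a cache named "customers" that has been declared searchable in ehcache.xml with an indexed "age" attribute; those names are illustrative, not part of any fixed schema.

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;
    import net.sf.ehcache.search.Attribute;
    import net.sf.ehcache.search.Query;
    import net.sf.ehcache.search.Result;
    import net.sf.ehcache.search.Results;

    public class SearchSketch {
        public static void main(String[] args) {
            CacheManager cacheManager = CacheManager.create();
            Cache customers = cacheManager.getCache("customers");

            // Build a query against the indexed "age" attribute.
            Attribute<Integer> age = customers.getSearchAttribute("age");
            Query query = customers.createQuery()
                    .includeKeys()
                    .includeValues()
                    .addCriteria(age.gt(30));

            // Execute entirely in memory and walk the results.
            Results results = query.execute();
            for (Result result : results.all()) {
                System.out.println(result.getKey() + " -> " + result.getValue());
            }

            results.discard();
            cacheManager.shutdown();
        }
    }
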
Conclusion
BigMemory makes existing applications orders of magnitude faster and more affordable. But, because terabyte in-memory scale is unprecedented, BigMemory also changes what applications can do. Terabytes of data available at microsecond speed make it possible to extract value from that data in new ways and enable entirely new applications that weren't possible before.
Congratulations Terracotta!
BigMemory wins DataWeek's 2012 Big Data Technology Top Innovator award!