SAND

Category Archives: Richard Grondin


SAND listed on the Data Mining Group’s PMML product page

On November 25, 2010, in Richard Grondin

Just a quick note to let everyone know SAND is now listed on the Data Mining Group’s PMML (Predictive Model Markup Language) product page. Our team worked very hard to deliver PMML functionality to SAND, which enables analytics on Extreme Data with even greater flexibility. Complex mathematical and statistical models,…
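Since the post is truncated here, a minimal sketch of what PMML looks like in practice may help: a tiny hand-written PMML 4.1 linear regression model, parsed and scored with Python's standard library. The field names and coefficients are invented for illustration and are not from SAND's implementation.

```python
import xml.etree.ElementTree as ET

# A minimal PMML 4.1 document describing y = 1.5 + 2.0 * x.
# Field names and coefficients are illustrative only.
PMML_DOC = """<?xml version="1.0" encoding="UTF-8"?>
<PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1">
  <DataDictionary numberOfFields="2">
    <DataField name="x" optype="continuous" dataType="double"/>
    <DataField name="y" optype="continuous" dataType="double"/>
  </DataDictionary>
  <RegressionModel functionName="regression" targetFieldName="y">
    <MiningSchema>
      <MiningField name="x"/>
      <MiningField name="y" usageType="target"/>
    </MiningSchema>
    <RegressionTable intercept="1.5">
      <NumericPredictor name="x" coefficient="2.0"/>
    </RegressionTable>
  </RegressionModel>
</PMML>"""

NS = {"pmml": "http://www.dmg.org/PMML-4_1"}

def score(pmml_text: str, x: float) -> float:
    """Evaluate the single-predictor regression model on one input."""
    root = ET.fromstring(pmml_text)
    table = root.find(".//pmml:RegressionTable", NS)
    intercept = float(table.get("intercept"))
    coeff = float(table.find("pmml:NumericPredictor", NS).get("coefficient"))
    return intercept + coeff * x

print(score(PMML_DOC, 3.0))  # 1.5 + 2.0 * 3.0 = 7.5
```

The point of the standard is exactly this portability: any consumer that understands the PMML schema can evaluate a model exported by any producer, which is what makes a database-side listing on the Data Mining Group's product page meaningful.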

Top 5 signs your SAS environment needs SAND CDBMS Nearline

On June 25, 2010, in Richard Grondin

There are several signs that your SAS environment needs SAND CDBMS Nearline for SAS. In my experience, these are the 5 most common: 1. Extreme data. When the size of your SAS environment starts to grow, it’s a sign you need SAND CDBMS Nearline. Data growth can creep up…

Storage is getting cheaper but bandwidth isn’t keeping up: how SAND CDBMS Nearline for SAS lightened dunnhumby’s load

On June 23, 2010, in Richard Grondin

Dunnhumby handles retail analytics for over 350 million customers on behalf of companies like Tesco, Home Depot, and Kroger. When they first approached SAND, they were looking for a solution to help with the data recovery architecture for their SAS implementation. The main problem was the sheer size of…
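The title’s claim is easy to quantify. A back-of-envelope sketch (the sizes and link speed below are made up, since the excerpt is truncated and dunnhumby’s actual figures aren’t given here) shows why moving a growing SAS data store across a fixed link becomes the bottleneck long before storage cost does:

```python
# Illustrative numbers only, not dunnhumby's: if data grows faster than
# link bandwidth, the time to move it grows without bound.

def transfer_hours(size_tb: float, link_gbps: float) -> float:
    """Hours to move size_tb terabytes over a link_gbps link at full rate."""
    size_bits = size_tb * 1e12 * 8           # terabytes -> bits
    seconds = size_bits / (link_gbps * 1e9)  # bits / (bits per second)
    return seconds / 3600

for size in (10, 50, 100):  # TB
    print(f"{size} TB over 1 Gbps: {transfer_hours(size, 1.0):.0f} h")
# 10 TB -> ~22 h, 50 TB -> ~111 h, 100 TB -> ~222 h
```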

Using XAM with Nearline 2.0 to Ensure Data Compliance

On January 14, 2009, in Richard Grondin

Recently, SAND has been conducting tests on our SAND/DNA Access product to benchmark its support for the XAM (eXtensible Access Method) API. These tests, executed at EMC’s lab in Hopkinton, Mass. using the latest version of EMC’s Centera solution, were a great success in a number of respects, and the…
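For readers unfamiliar with fixed-content stores like Centera, the idea that XAM standardizes access to can be sketched in a few lines: objects are addressed by their content rather than by a file path. The toy store below illustrates only the content-addressing concept; it is not the SNIA XAM API, whose actual interface is considerably richer.

```python
import hashlib

# Toy content-addressed store: the address of an object is derived from
# the object itself (here, a SHA-256 digest), so identical content always
# resolves to the same address. Concept sketch only, not the XAM API.

class FixedContentStore:
    def __init__(self):
        self._objects = {}

    def store(self, data: bytes) -> str:
        """Persist data and return its content address."""
        address = hashlib.sha256(data).hexdigest()
        self._objects[address] = data
        return address

    def retrieve(self, address: str) -> bytes:
        return self._objects[address]

store = FixedContentStore()
addr = store.store(b"archived nearline record")
assert store.retrieve(addr) == b"archived nearline record"
print(addr[:16])
```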

Nearline 2.0 and the Data Management Framework

On November 10, 2008, in Richard Grondin

In my last post, I outlined some of the advanced data modeling options that have been made possible by the advent of Nearline 2.0. Today, I want to discuss how Nearline 2.0 can act as an essential component in a data management framework. The data management framework, which can be…

Nearline 2.0 and Advanced Data Modeling

On October 27, 2008, in Richard Grondin

In my last post, I discussed the “Quick Check” method for identifying the benefits that a Nearline 2.0 implementation can deliver in the areas of operational performance, SLAs and TCO. Certainly, it is easy to see how it would be preferable to manage a database that is 1 TB rather…

Starting Nearline 2.0: The Quick Check Approach

On October 20, 2008, in Richard Grondin

In previous posts, I introduced the concept of “Best Practices” for Nearline 2.0. Today, I will get down to the details of how and where to start with a Nearline 2.0 solution, beginning with a Best Practices approach designed to quickly identify the benefits of such an implementation in a…

Nearline 2.0 Best Practices

On October 13, 2008, in Richard Grondin

In previous posts, we introduced the concept of Nearline 2.0, showed how it represented a significant step forward from traditional archiving practices, and discussed how Nearline 2.0 could help your business. To recapitulate: the major advantage of Nearline 2.0 is its superior data access performance, which enables a more aggressive…

How Can a Nearline 2.0 Solution Help Your Business?

On October 6, 2008, in Richard Grondin

In my last post, I discussed how a Nearline 2.0 solution allows vast amounts of detail data to be accessed at speeds that rival the performance of online systems, which in turn gives business analysts the power to assess and fine-tune important business initiatives on the basis of actual historical…

Introducing Nearline 2.0

On September 23, 2008, in Richard Grondin

In today’s post, I want to introduce the notion of “Nearline 2.0”. While the name might seem esoteric, this concept represents the logical evolution of older data warehouse and information lifecycle approaches that have struggled to maintain acceptable performance levels in the face of the increasingly intense “data tsunami” that…