Hewlett-Packard has devised a way to run programs written in the R statistical programming language against data sets that span more than one server, potentially paving the way for large-scale, real-time predictive analytics.
“Historically, big data has been focused on the past,” said Jeff Veis, HP vice president of marketing for the company’s big data business group. The new software will allow organizations to “anticipate breaking trends” by using very large data sets, he said.
While various commercial packages offer ways to run R on computer clusters, HP’s new Distributed R is the first to offer this capability in an open source package, Veis said.
With millions of users worldwide, the open-source R language is one of the most widely used tools specifically designed for statistical computing and predictive analytics, alongside SAS, MATLAB, Mathematica and a number of Python libraries.
R has been a challenge to use with large data sets, though, because the standard R interpreter runs single-threaded on a single machine, which limits the amount of data that can be analyzed at once. As a result, data scientists often analyze samples drawn from a very large data set, rather than the entire data set, potentially reducing the accuracy of the results.
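The accuracy trade-off can be seen with a toy calculation. As a minimal sketch in Python (the data set and the every-10th-row sampling scheme are invented purely for illustration), a statistic computed on a sample drifts from the full-data answer:

```python
# Hypothetical skewed data set: mostly small values, plus some large outliers.
data = list(range(900)) + [10_000] * 100  # 1,000 rows in total

# Analyzing the entire data set gives the exact answer.
full_mean = sum(data) / len(data)

# Analyzing only every 10th row, as one might when the full set
# does not fit on one machine, gives only an estimate.
sample = data[::10]
sample_mean = sum(sample) / len(sample)

print(full_mean, sample_mean)  # the two values differ
```

Here the sample estimate is close but not equal to the true mean; with less uniform data or less careful sampling, the gap can be much larger.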
The new HP package includes a set of algorithms, created by HP Labs, for running an R program across multiple computers simultaneously, allowing entire data sets of billions of rows to be analyzed rather than just samples.
HP primarily created Distributed R to run on its Vertica column-oriented analytic database system, which was created to facilitate analysis of terabytes of data.
Distributed R, released under the GPL version 2 open source license, can work with other databases and data processing platforms in addition to Vertica, such as Hadoop. It is fully compatible with the RStudio and R console developer tools.
Microsoft recently announced plans to acquire one of the leading commercial R vendors, Revolution Analytics.