Sunday, November 15, 2015
Facebook goes open source with query engine for big data
Potentially raising the bar on SQL scalability, Facebook has released as open source Presto, a SQL query engine it developed to work with petabyte-sized data warehouses.
Currently, more than 1,000 Facebook employees use Presto daily to run 30,000 interactive queries, involving over a petabyte of processing, according to a post authored by Facebook software engineer Martin Traverso. The company has scaled the software to run on a 1,000 node cluster.
Now Facebook wants other data-driven organizations to use Presto and, it hopes, refine it. The company has posted the software's source code and is encouraging contributions from other parties. The software is already being tested by a number of other large Internet services, including Airbnb and Dropbox.
Standard data warehouses would be hard-pressed to offer the responsiveness of Presto given the amount of data Facebook collects, according to engineers at the company. Facebook's data warehouse holds more than 300 petabytes of material from its users, stored on Hadoop clusters. Presto provides interactive analysis over this data, which is also used for machine-learning algorithms and standard batch processing.
To analyze this data, Facebook originally used Hadoop MapReduce along with Hive. But as the data warehouse grew, this approach proved to be far too slow.
The Facebook Data Infrastructure group first looked for other software for running faster queries, but didn't find anything that was both mature enough and capable of scaling to the required levels. Instead, the group built its own distributed SQL query engine, using Java.
Presto can do many of the tasks that standard SQL engines can, including complex queries, aggregations, left/right outer joins, subqueries, and most of the common aggregate and scalar functions. It lacks the ability to write results back to data tables and cannot create table joins beyond a certain size.
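To give a sense of the kind of query this covers, here is a minimal sketch that submits an aggregation with a left outer join and a subquery from Python. It assumes the community presto-python-client package, a placeholder coordinator address, and hypothetical table names (web_events, users); none of these come from the article or from Facebook's deployment.

```python
import prestodb  # pip install presto-python-client (community client; assumed, not part of the article)

# Connection details are placeholders for an arbitrary Presto coordinator.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)
cur = conn.cursor()

# A query combining the features listed above: an aggregate function,
# a LEFT OUTER JOIN, and a subquery. Table and column names are hypothetical.
cur.execute("""
    SELECT u.country, COUNT(*) AS events
    FROM web_events e
    LEFT OUTER JOIN users u ON e.user_id = u.id
    WHERE e.event_date IN (SELECT MAX(event_date) FROM web_events)
    GROUP BY u.country
    ORDER BY events DESC
    LIMIT 10
""")

for country, events in cur.fetchall():
    print(country, events)
```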
Unlike Hive, Presto does not use MapReduce, which involves writing results back to disk. Instead, Presto compiles parts of the query on the fly and does all of its processing in memory. As a result, Facebook claims Presto is 10 times better in terms of CPU efficiency and latency than the Hive and MapReduce combo.
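To make that contrast concrete, here is a deliberately simplified Python sketch (not Presto or Hive code, and not taken from the article) of the two execution styles: a stage-by-stage pipeline that materializes intermediate results to disk, versus generator-based operators that stream rows in memory.

```python
import json
import os
import tempfile

# A toy data set standing in for rows scanned from a data warehouse.
rows = [{"user": "a", "ms": 120}, {"user": "b", "ms": 450}, {"user": "a", "ms": 300}]

# MapReduce-style execution: each stage writes its full output to disk,
# and the next stage has to read it back before doing any work.
def filter_stage_to_disk(rows, path):
    with open(path, "w") as f:
        for r in rows:
            if r["ms"] > 200:
                f.write(json.dumps(r) + "\n")

def sum_stage_from_disk(path):
    total = 0
    with open(path) as f:
        for line in f:
            total += json.loads(line)["ms"]
    return total

fd, path = tempfile.mkstemp()
os.close(fd)
filter_stage_to_disk(rows, path)
print(sum_stage_from_disk(path))  # 750
os.remove(path)

# In-memory pipelining: operators are chained as generators, so each row
# flows straight from the filter into the aggregation without touching disk.
def filter_op(rows):
    for r in rows:
        if r["ms"] > 200:
            yield r

print(sum(r["ms"] for r in filter_op(rows)))  # 750
```

The point of the sketch is only the shape of the data flow: the first version pays for serialization and disk I/O between every pair of stages, while the second keeps each row in memory from source to result, which is the property Facebook credits for Presto's efficiency gains.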
Presto is one of a number of newly emerging SQL query engines that tackle the problem of offering speedy results for queries run against large Hadoop data sets. Hadoop distributor Pivotal has developed Hawq for this purpose, and fellow Hadoop distributor Cloudera is working on its own software called Impala.