Facebook's System Architecture

I recently read the well-known Quora answer on Facebook's system architecture. Although it was written two years ago, it is still a valuable reference for today's Web architectures, so I will summarize its main technical keywords here.

PHP, JavaScript, C++, Tornado, MySQL, Memcached, Hadoop, Hive, Varnish

Facebook's architecture is built largely on excellent open-source frameworks and software. The company has also developed general-purpose software for its own business needs and has open-sourced much of it. To explore Facebook's open-source work, visit Facebook's open-source home page and its projects on GitHub. Taobao in China takes a similar approach, building on open-source software and releasing the source of some of the systems it has developed, so Taobao's links are included as well.

http://developers.facebook.com/opensource/

https://github.com/facebook

http://code.taobao.org/

https://github.com/taobao

The article on Facebook's architecture is reproduced below:

Source: http://www.quora.com/What-is-Facebooks-architecture (answered by Michaël Figuière)

From various readings and conversations I had, my understanding of Facebook's current architecture is:

  • Web front-end written in PHP. Facebook's HipHop Compiler [1] then converts it to C++ and compiles it using g++, thus providing a high performance templating and Web logic execution layer.
  • Because of the limitations of relying entirely on static compilation, Facebook has started to work on a HipHop Interpreter [2] as well as a HipHop Virtual Machine which translates PHP code to HipHop ByteCode [3].
  • Business logic is exposed as services using Thrift [4]. Some of these services are implemented in PHP, C++ or Java depending on service requirements (some other languages are probably used...)
  • Services implemented in Java don't use any of the usual enterprise application servers but rather Facebook's own custom application server. At first this can look like reinventing the wheel, but since these services are exposed and consumed only (or mostly) via Thrift, the overhead of Tomcat, or even Jetty, was probably too high with no significant added value for their needs.
  • Persistence is done using MySQL, Memcached [5], and Hadoop's HBase [6]. Memcached is used as a cache for MySQL as well as a general-purpose cache (a minimal look-aside caching sketch in Python follows this list).
  • Offline processing is done using Hadoop and Hive.
  • Data such as logging, clicks and feeds transit through Scribe [7] and are aggregated and stored in HDFS using Scribe-HDFS [8], thus allowing extended analysis using MapReduce.
  • BigPipe [9] is their custom technology to accelerate page rendering using a pipelining logic.
  • Varnish Cache [10] is used for HTTP proxying. They've preferred it for its high performance and efficiency [11].
  • The storage of the billions of photos posted by the users is handled by Haystack, an ad-hoc storage solution developed by Facebook which brings low level optimizations and append-only writes [12].
  • Facebook Messages uses its own architecture, notably based on infrastructure sharding and dynamic cluster management. Business logic and persistence are encapsulated in so-called 'Cells'. Each Cell handles a subset of users; new Cells can be added as popularity grows [13]. Persistence is achieved using HBase [14].
  • Facebook Messages' search engine is built with an inverted index stored in HBase [15].
  • Facebook Search Engine's implementation details are unknown as far as I know
  • The typeahead search uses a custom storage and retrieval logic [16]
  • Chat is based on an epoll server developed in Erlang and accessed using Thrift [17].
  • They've built an automated system that responds to monitoring alerts by launching the appropriate repair workflow, or escalates to humans if the outage can't be resolved automatically [18].
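
To make the caching point above more concrete, here is a minimal look-aside caching sketch in Python. It assumes a local memcached instance and the python-memcached client; get_user_from_mysql(), the key format and the TTL are hypothetical stand-ins for illustration, not Facebook's actual code.

    # Look-aside caching: try Memcached first, fall back to MySQL on a miss,
    # and invalidate the cached entry on writes so the next read repopulates it.
    import json
    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])   # assumed local memcached instance
    CACHE_TTL = 300                             # seconds; an arbitrary value for this sketch

    def get_user_from_mysql(user_id):
        # Placeholder for a real MySQL query (e.g. via mysql-connector or MySQLdb).
        return {"id": user_id, "name": "example"}

    def get_user(user_id):
        key = "user:%d" % user_id
        cached = mc.get(key)                    # 1. check the cache
        if cached is not None:
            return json.loads(cached)
        user = get_user_from_mysql(user_id)     # 2. miss: query the database
        mc.set(key, json.dumps(user), time=CACHE_TTL)   # 3. populate the cache
        return user

    def invalidate_user(user_id):
        # Writes go to MySQL (omitted here); the cache entry is deleted rather
        # than updated in place, so the next read repopulates it with fresh data.
        mc.delete("user:%d" % user_id)

The same pattern covers the "general purpose cache" use mentioned above: any expensive result can be keyed, stored with a TTL, and invalidated on write in exactly the same way.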

About the resources provisioned for each of these components, some information and numbers are known:

  • Facebook is estimated to own more than 60,000 servers [18]. Their recent datacenter in Prineville, Oregon is based on entirely self-designed hardware [19] that was recently unveiled as the Open Compute Project [20].
  • 300 TB of data is stored in Memcached processes [21].
  • Their Hadoop and Hive cluster is made of 3000 servers, each with 8 cores, 32 GB of RAM and 12 TB of disk, for a total of 24k cores, 96 TB of RAM and 36 PB of disk [22] (a quick arithmetic check follows this list).
  • 100 billion hits per day, 50 billion photos, 3 trillion objects cached, 130 TB of logs per day as of July 2010 [22].
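
As a quick sanity check on the cluster totals quoted above, the per-server figures multiply out as stated; the sketch below simply redoes the arithmetic with the numbers from the answer.

    # Redo the Hadoop/Hive cluster arithmetic with the per-server figures above.
    servers = 3000
    total_cores = servers * 8              # 24,000 cores
    total_ram_tb = servers * 32 / 1000.0   # 96 TB of RAM (taking 1 TB = 1000 GB)
    total_disk_pb = servers * 12 / 1000.0  # 36 PB of disk (taking 1 PB = 1000 TB)
    print(total_cores, total_ram_tb, total_disk_pb)   # -> 24000 96.0 36.0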


References

[1] HipHop for PHP: http://developers.facebook.com/blog/post/358
[2] Thrift: http://thrift.apache.org/
[3] Memcached: http://memcached.org/
[4] Cassandra: http://cassandra.apache.org/
[5] HBase: http://hbase.apache.org/
[6] Scribe: https://github.com/facebook/scribe
[7] Scribe-HDFS: http://hadoopblog.blogspot.com/2009/06/hdfs-scribe-integration.html
[8] BigPipe: http://www.facebook.com/notes/facebook-engineering/bigpipe-pipelining-web-pages-for-high-performance/389414033919
[9] Varnish Cache: http://www.varnish-cache.org/
[10] Facebook goes for Varnish: http://www.varnish-software.com/customers/facebook
[11] Needle in a haystack: efficient storage of billions of photos: http://www.facebook.com/note.php?note_id=76191543919
[12] Scaling the Messages Application Back End: http://www.facebook.com/note.php?note_id=10150148835363920
[13] The Underlying Technology of Messages: https://www.facebook.com/note.php?note_id=454991608919
[14] The Underlying Technology of Messages Tech Talk: http://www.facebook.com/video/video.php?v=690851516105
[15] Facebook’s typeahead search architecture: http://www.facebook.com/video/video.php?v=432864835468
[16] Facebook Chat: http://www.facebook.com/note.php?note_id=14218138919
[17] Who has the most Web Servers?: http://www.datacenterknowledge.com/archives/2009/05/14/whos-got-the-most-web-servers/
[18] Building Efficient Data Centers with the Open Compute Project: http://www.facebook.com/note.php?note_id=10150144039563920
[19] Open Compute Project: http://opencompute.org/
[20] Facebook’s architecture presentation at Devoxx 2010: http://www.devoxx.com
[21] Scaling Facebook to 500 millions users and beyond: http://www.facebook.com/note.php?note_id=409881258919
