Facebook may treasure the data it has on its one billion-plus users for its advertising returns, but the analysis the site performs on that data is expected to continue to pose numerous challenges over the coming year, an engineer said.
The problems, which Facebook has been forced to grapple with “much sooner than the broader industry,” include finding more efficient ways to process user behavior on the site, better accessing and consolidating different types of data across Facebook’s multiple data centers, and devising new open source software systems to process that data, Ravi Murthy, who manages Facebook’s analytics infrastructure, said Tuesday.
“Facebook is a data company, and the most obvious thing people think of on that front is ads targeting,” he said at an industry conference in San Francisco, during a talk on Facebook’s back-end infrastructure, data analytics and open source projects.
“But it goes deeper than this,” he said.
One major area of behind-the-scenes work relates to Facebook’s analytics infrastructure, which is designed to accelerate product development and improve the user experience through deep analysis of all the available data, whether that data consists of actions users take on the site, such as posting status updates, or the applications they use within Facebook on different devices.
Facebook currently uses several open source software systems, known as Hadoop, Corona and Prism, to process and analyze its data. The company will focus on making those systems faster and more efficient over the next six to twelve months, Murthy said.
Many of the company’s challenges are tied to what Facebook refers to as its data warehouse, which combines data from multiple sources into a database where user activity can be analyzed in the aggregate, such as by giving a daily report on the number of photos that have been tagged in a specific country, or by looking at how many users in a certain area have engaged with pages that were recommended to them.
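As a rough illustration of the kind of aggregation such a warehouse report involves, the Python sketch below counts photo-tag events by country and day; the event fields and values are hypothetical and are not drawn from Facebook’s actual schema.

```python
from collections import Counter
from datetime import date

# Hypothetical photo-tag events; a real warehouse would read these from
# logs spread across a cluster, not from an in-memory list.
tag_events = [
    {"photo_id": 1, "country": "US", "day": date(2013, 3, 5)},
    {"photo_id": 2, "country": "BR", "day": date(2013, 3, 5)},
    {"photo_id": 3, "country": "US", "day": date(2013, 3, 6)},
]

# Daily report: number of photos tagged, grouped by country and day.
daily_tag_counts = Counter((event["country"], event["day"]) for event in tag_events)

for (country, day), count in sorted(daily_tag_counts.items()):
    print(f"{day} {country}: {count} photos tagged")
```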
The analysis is designed to optimize the user experience and find out what users like and don’t like, but it is also becoming more taxing as Facebook gains access to more and more data about its users, Murthy said. Currently, the Facebook warehouse takes in 500 terabytes of new data every day, or 500,000 gigabytes. The warehouse has grown nearly 4,000-fold over the last four years, “way ahead of Facebook’s user growth,” Murthy said.
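Taking those reported figures at face value, the growth rate they imply can be worked out with simple arithmetic; this is a back-of-the-envelope calculation on the article’s numbers, not data from Facebook.

```python
# Back-of-the-envelope arithmetic on the figures quoted above.
total_growth = 4000        # warehouse grew roughly 4,000-fold ...
years = 4                  # ... over about four years
annual_factor = total_growth ** (1 / years)
print(f"implied average growth per year: ~{annual_factor:.1f}x")  # ~8.0x

terabytes_per_day = 500
print(f"daily intake: {terabytes_per_day * 1000} gigabytes")      # 500000 gigabytes
```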
To deal with these issues, Facebook has developed its Prism software system, which is designed to perform key analysis functions across the company’s data centers worldwide, and split up the analyses into “chunks,” Murthy said. That way, performing an analysis on, say, some metric related to users’ news feeds won’t clog up the warehouse more generally.
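A minimal sketch of that idea, assuming each data center aggregates its own chunk and a final step merges the partial results; the data-center names, event types and analyze_chunk function are illustrative and do not reflect Prism’s actual design.

```python
from collections import Counter

# Hypothetical per-data-center event logs; in practice each chunk would be
# processed where the data lives rather than on a single machine.
partitions = {
    "dc_east": ["news_feed_view", "news_feed_view", "photo_tag"],
    "dc_west": ["news_feed_view", "status_update"],
}

def analyze_chunk(events):
    """Partial aggregation run independently over one chunk of the warehouse."""
    return Counter(events)

# Each data center produces a partial result...
partial_results = [analyze_chunk(events) for events in partitions.values()]

# ...and a final merge step combines them, so one analysis does not have to
# scan the whole warehouse in one place.
combined = sum(partial_results, Counter())
print(combined["news_feed_view"])  # 3
```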
“We’re increasingly thinking about how to capture this data,” he said.
The company is also working on a system that takes a completely different approach to querying the warehouse, delivering responses within a matter of seconds, Murthy said.
Another area Facebook is continually looking to improve is its “transactional infrastructure,” which handles the more basic, day-to-day data processing of, say, likes, comments and status updates to keep the social network running smoothly. Some of the questions the company’s engineers and analysts are looking at include how to forecast the actual growth in this type of data, and how much computing capacity Facebook should allot for it, Murthy said.
“Can we predict what it’s going to be six months from now?” he said.
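As a toy illustration of that forecasting question, the sketch below fits a constant month-over-month growth factor to made-up volume figures and projects six months ahead; real capacity planning would rely on actual measurements and far richer models.

```python
# Hypothetical monthly write volumes (arbitrary units) for the transactional tier.
history = [100, 115, 131, 152, 174, 200]

# Assume volume grows by a roughly constant factor each month, estimated
# from the average month-over-month ratio.
ratios = [later / earlier for earlier, later in zip(history, history[1:])]
monthly_factor = sum(ratios) / len(ratios)

# Project six months beyond the last observation.
forecast = history[-1] * monthly_factor ** 6
print(f"estimated growth factor per month: {monthly_factor:.3f}")
print(f"projected volume six months out: {forecast:.0f}")
```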
Meanwhile, Facebook is also involved in a long-term effort to make its physical servers more efficient. The company began its Open Compute Project in 2011 with the goal of designing modularized servers that give customers greater control over the networking, memory, power supplies and other components that go into their servers. The project was expanded to incorporate ARM processors in January.