Capturing public conversations around the world in real time, Twitter could be a valuable source of intelligence for the business world, so IBM is creating new ways to derive potentially valuable information from this massive, sprawling data set.
At the IBM InterConnect conference, held this week in Las Vegas, company executives detailed how IBM is repackaging Twitter data for reuse and analysis, capitalizing on a deal struck with Twitter in October to access all the messages posted on the service.
“Developers now have the ability to get collective insights and intelligence from hundreds of millions of people,” said Linda Hunt, IBM business leader for the company’s Watson analytics services, during the presentation.
The curated Twitter data also provides a handy example of how enterprises could use IBM’s new cloud analysis services to get more from other large data sets, both their own and from others.
With the Twitter partnership, IBM can “take this huge amount of information and offer it to developers as a drink,” said Damion Heredia, IBM vice president of cloud platform services product management, in a follow-up interview. “You can sample the data, decorate it, plug it into Watson, push it onto mobile devices.”
Of course, developers could access Twitter’s APIs (application programming interfaces) directly, but IBM has done considerable work to make it easier to analyze the data and pipe it into other applications.
Twitter users create anywhere between 1 billion and 5 billion messages a month. Rather than store all of these dispatches, IBM saves and indexes a representative sample of around 10 percent of the tweets. Each message is annotated with additional information, such as the location and gender of the user. The company will keep a two-year backlog of these selected Twitter messages, said Thomas Schaeck, IBM distinguished engineer, during the presentation.
IBM provides a set of API calls for querying this dataset on Bluemix, a set of platform services for building cloud applications. A rich set of Boolean operators can parse data in myriad ways. A set of Twitter messages, for instance, could be filtered to specific periods of time, or by particular geographies.
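A filter of that kind can be sketched in a few lines. The snippet below is only an illustration of combining keyword, geography, and time-window conditions — the sample records, field names, and `query` helper are assumptions for this example, not IBM's actual Bluemix API:

```python
from datetime import datetime

# Illustrative sample records; the real curated Twitter data carries
# many more annotated fields (location, gender, and so on).
tweets = [
    {"text": "Loved the new movie", "country": "US", "posted": datetime(2015, 2, 20)},
    {"text": "Conference keynote was great", "country": "DE", "posted": datetime(2015, 2, 23)},
    {"text": "The new movie was dull", "country": "US", "posted": datetime(2015, 3, 1)},
]

def query(records, *, contains=None, country=None, since=None, until=None):
    """Boolean-style filter: every supplied condition must hold (AND)."""
    results = []
    for r in records:
        if contains and contains.lower() not in r["text"].lower():
            continue
        if country and r["country"] != country:
            continue
        if since and r["posted"] < since:
            continue
        if until and r["posted"] >= until:
            continue
        results.append(r)
    return results

# Tweets mentioning "movie", posted from the US during February 2015.
hits = query(tweets, contains="movie", country="US",
             since=datetime(2015, 2, 1), until=datetime(2015, 3, 1))
print(len(hits))  # 1
```

Richer Boolean expressions (OR, NOT, nested groups) would compose the same way, which is what makes such a query API flexible.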
Schaeck showed how to pipe the results of a Twitter query about the popularity of current movies to the Bluemix data warehousing service, dashDB. Using dashDB, Twitter messages about the movies could then be categorized by the U.S. state in which they originated.
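Once the messages land in a SQL warehouse, categorizing them by state is a plain GROUP BY. The sketch below uses Python's built-in sqlite3 as a stand-in for dashDB, and the table layout and column names are assumptions for illustration:

```python
import sqlite3

# sqlite3 stands in for the dashDB warehouse in this sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tweets (movie TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO tweets VALUES (?, ?)",
    [("Movie A", "CA"), ("Movie A", "CA"), ("Movie A", "NY"), ("Movie B", "TX")],
)

# Count mentions of each movie per originating U.S. state.
rows = conn.execute(
    "SELECT movie, state, COUNT(*) AS mentions "
    "FROM tweets GROUP BY movie, state ORDER BY mentions DESC"
).fetchall()
for movie, state, mentions in rows:
    print(movie, state, mentions)
```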
A movie distributor could use such data, Schaeck theorized, to determine which states should get additional advertising to promote its movie.
Other Bluemix services could also be used with the Twitter data, Schaeck said. It could be analyzed with the R statistical programming language, and the results could be presented on a Web page using the Node.js runtime and the D3.js visualization library.
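The last step of such a pipeline — shaping analysis results so a browser-side library like D3.js can chart them — usually amounts to serializing a summary as JSON. A minimal Python sketch, with the per-state data invented for illustration:

```python
import json
from collections import Counter

# Hypothetical originating states for a batch of matched tweets.
mentions = ["CA", "CA", "NY", "TX", "CA", "NY"]

counts = Counter(mentions)
# A list of {name, value} objects is a shape D3.js charts commonly consume.
payload = json.dumps(
    [{"name": s, "value": n} for s, n in counts.most_common()]
)
print(payload)
```

A Node.js endpoint would serve this payload to the Web page, where D3.js binds each object to a bar or map region.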
IBM has also incorporated Twitter data into Watson Analytics. This cloud-based analysis service could, for instance, determine whether the public regards a user or a topic on Twitter largely positively, largely negatively, or with ambivalence.
A company could use such sentiment analysis, as it is called, to monitor the popularity and likability of its brands, Hunt said.
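To make the idea concrete, here is a toy lexicon-based classifier that labels text as positive, negative, or ambivalent. Watson's actual models are far more sophisticated; the word lists and scoring rule below are assumptions purely for illustration:

```python
# Tiny illustrative sentiment lexicons -- not Watson's vocabulary.
POSITIVE = {"love", "great", "excellent", "like"}
NEGATIVE = {"hate", "awful", "dull", "boring"}

def sentiment(text):
    """Label text by comparing counts of positive and negative words."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "ambivalent"

print(sentiment("I love this brand, great service"))  # positive
print(sentiment("awful product, I hate it"))          # negative
print(sentiment("I love it but it is awful"))         # ambivalent
```

Run over a stream of brand mentions, even a crude scorer like this yields a rough popularity trend line; production systems add negation handling, sarcasm detection, and trained models.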
Organizations could derive much useful information from Twitter, said Donnie Berkholz, senior analyst for IT research firm RedMonk, who was in the audience.
Berkholz himself analyzes Twitter data for work, often looking for trends around IT conferences and product announcements.
Analyzing Twitter messages emanating from the 2013 VMworld conference, Berkholz found that the IT practitioners attending the conference were more interested in current product details, whereas the IT “pundits”—those not directly involved in the maintenance of IT—were more interested in product roadmaps and other abstract concerns.
Using Twitter data, he said, “could be useful for understanding consumer sentiment.”