But perhaps the field with the most to gain is healthcare, where the benefits of efficiently assembling large quantities of data can be life-altering for countless people.
David Shaywitz, a physician, scientist and management consultant, suggests that the next great quest in applied science is the assembly of a unified health database, a big data project that would collect in one searchable repository all the parameters that could measure or conceivably reflect human well-being.
But while many companies and academic researchers are focusing their efforts on defined subsets of the big data challenge, the one exception seems to be large pharmaceutical companies, he says. That’s because many large drug companies have opted to outsource big data analytics instead of treating it as a core competency.
“If you were going to create the health solutions provider of the future, arguably your first move would be to recruit a cutting-edge analytics team,” he adds. “The question of core competencies is more than just semantics – it is perhaps the most important strategic question facing biopharma companies as they peer into a frightening and uncertain future.”
Shaywitz says biopharma companies should embrace big data because:
- A company’s view of its core competencies translates directly into how it performs its mission, as well as into the quality of talent it’s able to recruit and retain.
- Unless a pharma company is deliberately built around big data analytics – or this function is developed and nurtured in a fenced-off, skunkworks fashion – it’s unlikely to get adequate traction, and will be vulnerable.
- Given both the overwhelming amount of available data and the fact that traditional pharma approaches to innovation seem to have largely run out of steam, a bet on big data analytics might make a lot of sense now.
Consider, for example, how academia is tapping into big data analytics as evidence of what big pharma could do.
“We try and leverage very, very large-scale data of very, very deep complexity to probe questions about how biology works and that can be used to help patients,” says Andrew Kasarskis, co-director of Mount Sinai’s Institute for Genomics and Multiscale Biology.
Continued efforts to use big data in healthcare and make it more widely accessible could play a significant role in lowering overall costs, according to a report published early this year.
The report by the Ewing Marion Kauffman Foundation suggests that all of the nonprofit organizations that study disease should collaborate to build a national health database.
“Using proper safeguards, we need to open the information that is locked in medical offices, hospitals and the files of pharmaceutical and insurance companies,” Kauffman senior fellow John Wilbanks, one of the report’s authors, notes. “For example, combining larger datasets on drug response with genomic data on patients could steer therapies to the people they are most likely to help. This could substantially reduce the need for trial-and-error medicine, with all its discomforts, high costs and sometimes tragically wrong guesses.”
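Wilbanks’ example of combining drug-response data with genomic data can be sketched in a few lines of code. This is a purely hypothetical illustration, not based on any real dataset from the report: the patient IDs, genotype labels, and response scores below are invented, and the joining logic is a minimal stand-in for what a real analysis pipeline would do.

```python
from collections import defaultdict

# Hypothetical genomic data: patient ID -> genotype (invented labels).
genomes = {"p1": "variant-A", "p2": "variant-B", "p3": "variant-A", "p4": "variant-B"}

# Hypothetical drug-response data: patient ID -> improvement score (invented values).
responses = {"p1": 0.9, "p2": 0.2, "p3": 0.8, "p4": 0.3}

# Join the two datasets on patient ID, grouping response scores by genotype.
by_genotype = defaultdict(list)
for patient, genotype in genomes.items():
    if patient in responses:
        by_genotype[genotype].append(responses[patient])

# Average response per genotype: steer the therapy toward the group it helps most.
avg = {g: sum(scores) / len(scores) for g, scores in by_genotype.items()}
best = max(avg, key=avg.get)
print(best)  # in this toy data, variant-A carriers respond best
```

In a real setting the join would run over de-identified records under the safeguards Wilbanks describes, but the core idea is the same: linking datasets on a common key reveals which patient subgroups a therapy actually helps, reducing trial-and-error prescribing.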
- Subscribe to our blog to stay up to date on the latest insights and trends in big data and data analytics
- Join us on August 23 at 1 p.m. EDT for our complimentary webcast, “In-Memory Computing: Lifting the Burden of Big Data,” presented by Nathaniel Rowe, Research Analyst, Aberdeen Group and Michael O’Connell, PhD, Sr. Director, Analytics, TIBCO Spotfire. In this webcast, Rowe will discuss recent findings from Aberdeen Group’s December 2011 study on the current state of big data, which shows that organizations that have adopted in-memory computing are not only able to analyze larger amounts of data in less time than their competitors – they do it much, much faster. TIBCO Spotfire’s Michael O’Connell will follow with a discussion of Spotfire’s big data analytics capabilities.
- Download a copy of the Aberdeen In-Memory Big Data whitepaper here.