Businesses adopt new technologies and data strategies to increase shareholder value, and big data analytics is no different.
We’ve seen this enthusiasm before – very large databases, decision support systems, and executive dashboards – just to name a few.
Periodically new terms and concepts are introduced to describe technological innovations that make working with data easier and more efficient, lower the cost of collection and storage, and build on already existing knowledge and appliances.
Now we have big data and, according to a recently published article by Jim Manzi, “Big data is very likely to be a big part of running almost any large corporation in the future.”
Historically, some organizations have derived tremendous value by becoming early adopters of technology innovations, while others have simply failed. The article outlines three characteristics that distinguish organizations that have succeeded from those that have failed.
First, companies that succeed are not limited by existing company knowledge; they encourage training or bring in specialized skills. Next, they exploit the benefits of the new technology as quickly as possible, before the competition learns how to use it effectively. And finally, companies that succeed measure successful utilization of the new technology by its impact on net profits.
So, what are the steps to achieve success with big data, gain a competitive advantage, and increase shareholder value?
The author lists three:
- Exploit Faster Clock Speeds First. As the cost of data storage declines, so will the cost of intensive analytical processing. Specialized tools are being developed, including low-cost and open source appliances. Analytical processing of big data will become feasible, and as a first step companies should seek to deploy these faster, specialized, low-cost appliances.
- Integrate and Use New Data Types. The article suggests that big data in isolation is not as valuable as big data integrated with other sources. Integrating relevant data from new sources (social media, mobile devices, sensors, etc.) with data from traditional sources (transactional databases, historical data sets, demographic data, data purchased from third parties, etc.) helps decision makers derive value. Companies should take advantage of these integrated data sets, build meaningful data abstractions, and incorporate the data into the appropriate schemas (data warehouse, relational database, etc.), providing decision makers with the data they need, when they need it.
- Test and Learn to Improve Faster. Citing a number of sources, the author suggests that greater value can be gained through testing. Tests should be designed to determine how best to take advantage of big data, learning what works and what doesn’t. Companies should then use the test results to make quick adjustments and jump ahead of the competition for that competitive advantage.
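The integrate-then-test workflow above can be sketched in a few lines of code. This is a minimal illustration, not a prescription from the article: the field names, data values, and campaign variants are all hypothetical, and a real deployment would use proper databases and statistical significance testing rather than in-memory lists.

```python
# Sketch: join a "traditional" transactional extract with a "new"
# social-media extract on a shared customer id, then run a simple
# test-and-learn comparison of two campaign variants.
# All identifiers and data are hypothetical illustrations.

transactions = [  # traditional source: purchase history
    {"customer_id": 1, "spend": 120.0},
    {"customer_id": 2, "spend": 45.0},
    {"customer_id": 3, "spend": 300.0},
]

social = [  # new source: engagement pulled from social media
    {"customer_id": 1, "mentions": 4},
    {"customer_id": 3, "mentions": 1},
]

# Integrate: left join on customer_id, defaulting mentions to 0
# for customers with no social-media presence.
mentions_by_id = {row["customer_id"]: row["mentions"] for row in social}
integrated = [
    {**t, "mentions": mentions_by_id.get(t["customer_id"], 0)}
    for t in transactions
]

# Test and learn: compare conversion rates of two campaign variants
# (1 = converted, 0 = did not), then act on the measured lift.
def conversion_rate(outcomes):
    return sum(outcomes) / len(outcomes)

variant_a = [1, 0, 1, 1, 0, 1]
variant_b = [0, 0, 1, 0, 0, 1]

lift = conversion_rate(variant_a) - conversion_rate(variant_b)
print(integrated[1])   # customer 2 carries mentions=0 after the join
print(round(lift, 2))  # positive lift favors variant A
```

The point of the sketch is the shape of the loop, not the arithmetic: enrich familiar data with a new source, run a small controlled comparison, and feed the result back into the next decision quickly.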
There are three components the author recommends for this process to work. First, there must be executive commitment; a chief data officer (CDO) would be a good candidate as sponsor. Next, the author suggests a distinct organizational entity: a data management office (DMO) established under the control of the CDO. And finally, the test should be a repeatable, experimental process. The data scientists in the DMO should have the skills and knowledge to develop the appropriate algorithms to ensure data validation and process repeatability.
To take this all a step further, in a recent blog post R “Ray” Wang delineates the following questions organizations should answer before starting a big data project:
- What are the questions that need to be asked?
- What are the answers that help us move from data to decisions?
- Can we shift insight into action?
- How do we tie information to business process?
- Who needs what information, and at what time?
- How often should this information be updated, delivered, and shared?
When new technological concepts are introduced, companies that quickly determine how to deploy them successfully are better positioned to realize financial benefits. Success with big data analytics will be no different.
- Subscribe to our blog to stay up to date on the latest insights and trends in big data, data analytics and data visualization.
- Join us on August 23 at 1 p.m. EDT for our complimentary webcast, “In-Memory Computing: Lifting the Burden of Big Data,” presented by Nathaniel Rowe, Research Analyst, Aberdeen Group and Michael O’Connell, PhD, Sr. Director, Analytics, TIBCO Spotfire. In this webcast, Rowe will discuss recent findings from Aberdeen Group’s December 2011 study on the current state of big data, which shows that organizations that have adopted in-memory computing are not only able to analyze larger amounts of data in less time than their competitors – they do it much, much faster. TIBCO Spotfire’s Michael O’Connell will follow with a discussion of Spotfire’s big data analytics capabilities.
- Download a copy of the Aberdeen In-Memory Big Data whitepaper here.
Spotfire Blogging Team