With 2013 fast approaching, IDG Connect is serializing commentary from industry experts on Gartner's 2013 predictions throughout this week. Read our full series of pieces dedicated to the technology trends of 2013.
Over the past four years, we have seen a dramatic shift in the way enterprises store and analyze information. It seems that everywhere you turn these days, someone is talking about Big Data - a topic on every CIO's mind right now, and for good reason. At the Gartner Symposium/ITxpo in October 2012, the IT analyst firm indicated that Big Data will create 4.4 million new IT jobs globally. By 2016, analysts predict, Big Data will have driven $232 billion in global IT spending.
While we have developed cost-effective databases capable of handling the gigantic amounts of data being generated today, there is more work to be done in Big Data tooling. As costs have decreased, organizations are increasingly striving to utilize as much of their data as possible. In 2013, we are bound to see significant growth in both the number and quality of Big Data tooling applications. We will look back on 2013 as the year Big Data matured.
For over a decade, success has meant Big Data, and Big Data has meant building your own proprietary system. When industry leaders including Google, Amazon and Facebook were faced with Big Data needs, each developed a proprietary system in-house. For everyone else, the prohibitive licensing costs of proprietary systems meant enterprise IT needed to be extraordinarily careful about what it stored, as there was a significant cost for each gigabyte. In recent years, the emergence of scalable open source databases has revolutionized the industry. These tools, engineered to handle the wealth of both unstructured and structured data generated by today's social world, are enabling companies, municipalities and startups to do amazing things.
One high-profile example is the City of Chicago, which is revolutionizing government with its real-time analysis tool. Gone is the stereotype of slow-responding government: the tool enables near-real-time, data-driven decision-making, including groundbreaking predictive analytics. It was developed using open source technologies for a fraction of what merely licensing a traditional tool would cost.
There is still quite a bit of room for growth in Big Data tooling, especially on the processing side. While MongoDB has emerged as the clear leader for Big Data storage and operations, companies today are often opting for custom-built processing solutions; even when a data processing framework like Hadoop is used, it typically supplements these custom-built systems. Analytics and analysis tools, meanwhile, are still in the early stages of coping with the large amounts of unstructured data they have to deal with. We see the tooling space as an area of heavy growth over the next couple of years, and specifically in 2013.
As an open source database community, we are at an interesting point in the development of Big Data. We have solved the storage problem but, in many ways, the ability to store this data has progressed faster than the ability to process it. Whenever there's a need, someone steps in to fulfill it - that's one of the magical things about open source.
By Steve Francia, Chief Evangelist at 10gen