Energy Efficiency

Dr John Busch (US) - Mission Critical Databases: What to do if you're not China or Google

With the explosive growth of Web-based businesses and applications, the business-critical importance of providing high service availability with excellent performance has increased exponentially. IT managers are finding it difficult to meet the accelerating demands for high availability (HA), performance, and scalability, while at the same time meeting budgets.

In the United States, companies with billions of dollars to spend like Google and Facebook are building custom software (including custom databases specialized to their applications) and deploying large scale dedicated data centers to optimize their availability, performance, scalability and cost.

In China, where labor is inexpensive and paying for software is an unnatural act, the current best practice is to use free open source software and throw throngs of programmers and administrators into the mix to address the availability and performance scalability limits. This involves constantly splitting up databases into small partitions (sharding) and recoding applications to achieve acceptable performance and availability for the current customer load. The government controls the data centers, and leasing costs are high.
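The sharding practice described above can be sketched in a few lines. This is a generic illustration of hash-based partition routing, not any particular company's implementation; the shard count and key format are assumptions.

```python
import hashlib

# Assumption for illustration: the data set is split across 4 shards,
# and rows are routed by a stable hash of the user ID.
NUM_SHARDS = 4

def shard_for(user_id: str) -> int:
    """Map a user ID to one of NUM_SHARDS database partitions."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SHARDS

# The pain point: when a shard outgrows its server, NUM_SHARDS must
# change, most keys remap to different shards, and data migration plus
# application recoding follow -- the constant re-splitting described above.
```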

What is the best solution for the regular Joe - a U.S. mid-sized enterprise? Here are the options for a growing company that needs great web site availability, performance, and scalability, but wants to focus its resources on the company's unique added value, not on inventing infrastructure software or constantly employing more people to keep it going.

Let's look first at the requirements and then step through the options.

Mission critical service deployments require:

• high data integrity (including no data loss and high data consistency);
• high availability (including automated fail-over for both planned and unplanned downtime, as well as geographically distributed disaster recovery);
• high performance and high scalability;
• capital and operational cost-effectiveness;
• ease of management;
• and, ideally, a basis in standards that provide long-term application and data compatibility.

How can these requirements be best achieved? The broad options open to IT managers today are to:

• deploy in the cloud;
• use an open source NoSQL database;
• or deploy with a mission critical SQL database product.

Exploiting the cloud is attractive for its elasticity and potential cost savings. However, less than 10% of databases run virtualized today, and most large web-facing businesses do not run virtualized. Only highly partitionable database workloads, where each partition fits in the DRAM of a virtualized server, perform adequately in the cloud, and fail-over latency and reliability are uncertain. Dedicated servers are required to assure adequate performance stability and availability, which drives up costs. Innovations in cloud infrastructure and virtualization technology are needed to improve large-scale database performance and availability before broad mission critical deployment of scaled services becomes practical.

Open source NoSQL databases were created to solve very specific problems for which they are especially well suited. They provide unlimited scaling and dynamic schemas for very large data sets with unstructured data. But they introduce new APIs offering restricted query capabilities, ranging from simple key-value stores to SQL-like query languages that omit joins and range queries. They are a potential option for new applications that require very large volumes of unstructured data. But there are over 100 companies offering NoSQL products, and there is risk from an API longevity perspective. Also, NoSQL databases are typically ineffective in exploiting modern commodity technology, exhibiting poor scaling with multi-core processors and limited benefit from flash memory, resulting in low server utilization and server sprawl, which drives up costs.
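The restricted-query point above can be made concrete with a minimal sketch. The class and method names below are illustrative, not any specific product's API; the sketch contrasts a bare key-value interface with a SQL query it cannot express directly.

```python
# A minimal in-memory key-value store, the simplest end of the NoSQL
# API spectrum described above. All names are hypothetical.

class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

kv = KeyValueStore()
kv.put("user:42", {"name": "Ann", "city": "Austin"})

# A point lookup by key works fine:
profile = kv.get("user:42")   # {"name": "Ann", "city": "Austin"}

# But a query such as
#   SELECT * FROM users WHERE city = 'Austin';
# (a secondary-attribute or range query, let alone a join) has no
# direct equivalent: the application must scan every key or build
# and maintain its own secondary indexes.
```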

Mission critical SQL databases offering high availability and high performance and scalability represent a compelling option for broader classes of applications and data.

There is a new class of mission critical SQL databases that fuse together advances in database architecture and commodity server and storage technology to achieve high availability with high performance and unlimited scalability in a cost effective manner. They achieve this without sacrificing existing application/data SQL compatibility.

These database architectures utilize parallel synchronous replication to achieve 99.999% availability with full data integrity, including instantaneous, automatic fail-over and on-line scaling and upgrades. They have a very high degree of parallelism and concurrency control, enabling effective exploitation of flash memory and multi-core servers to achieve high vertical scaling on low cost commodity servers, thereby enabling capital expense reduction (based on consolidation) and operating expense reduction (based on reduced power and space requirements). They achieve unlimited scaling through transparent partitioning, and integrate WAN geographic scaling and disaster recovery. These databases can yield major improvements in data center QoS and TCO for most scaled production services in a very cost-effective manner.
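The synchronous-replication idea above can be sketched as follows. This is a simplified illustration of the general technique, not Schooner's implementation: a write is committed only after every replica has durably acknowledged it, so an acknowledged write survives fail-over.

```python
# Sketch of synchronous replication: commit only after all replicas ack.
# Class names and structure are hypothetical, for illustration only.

class Replica:
    def __init__(self, name):
        self.name = name
        self.log = []          # durable log of applied writes

    def apply(self, record):
        self.log.append(record)
        return True            # acknowledge once durable

class SyncReplicatedDB:
    def __init__(self, replicas):
        self.replicas = replicas
        self.committed = []

    def write(self, record):
        # In a real system the replicas are contacted in parallel;
        # sequential here for clarity.
        acks = [r.apply(record) for r in self.replicas]
        if all(acks):
            self.committed.append(record)
            return True
        return False  # real systems retry, or evict the failed replica

db = SyncReplicatedDB([Replica("primary"), Replica("standby")])
db.write({"key": "balance:42", "value": 100})
# After a primary failure, the standby's log already holds every
# committed write, which is what enables instantaneous fail-over
# with no data loss.
```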

By Dr. John Busch, CTO and Founder of Schooner Information Technology


