Publicity August 2012

Viewing the Cloud Holistically

Gregor B. Maas, Managing Director of T-Systems in Japan

Cloud infrastructures must meet the most stringent requirements for data protection and availability. But how can these quality requirements be met and data security guaranteed? Service providers differ significantly here. What should companies look out for when selecting a provider?

Cloud computing remains an important technology trend in 2012; the market research company Gartner confirms that cloud computing is “an unbelievable force for change”. The cloud therefore stays at the very top of the agenda for IT managers. One reason for this is the constantly growing and ever more complex volume of corporate information (keyword: big data). At the same time, the cloud repeatedly confronts companies with challenges, because the offerings on the market are frequently not transparent and many companies are still sceptical about cloud computing when it comes to data security and availability.

Greater dynamism from the cloud
Cloud computing offers users economic advantages above all. They can avoid unnecessary IT investments and use capacity only as it is actually required, which turns fixed costs into variable costs (CAPEX becomes OPEX). Companies also benefit from the quick implementation and modification of IT infrastructures, increased scalability and location-independent access to applications such as SAP solutions or to other corporate data. The latter supports companies not only with regard to the growing number of mobile employees; it also enables better cross-team collaboration, so employees at different locations can work on documents together.
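To make the CAPEX-to-OPEX point concrete, the short sketch below compares buying capacity for the yearly peak with paying only for actual usage. All prices and demand figures are hypothetical assumptions for illustration, not real market rates.

    # Illustrative comparison of fixed peak capacity (CAPEX) vs. pay-per-use (OPEX).
    # All prices and demand figures are hypothetical assumptions.
    PEAK_SERVERS = 100        # capacity bought up front for the yearly peak
    SERVER_COST = 3_000.0     # assumed cost per server, amortized over one year
    HOURLY_RATE = 0.40        # assumed cloud price per server-hour

    monthly_demand = [20, 20, 25, 30, 35, 40, 60, 100, 45, 30, 25, 20]  # servers in use

    capex = PEAK_SERVERS * SERVER_COST
    opex = sum(servers * HOURLY_RATE * 730 for servers in monthly_demand)  # ~730 h/month

    print(f"Fixed provisioning for peak load: {capex:,.0f}")   # 300,000
    print(f"Pay-per-use for actual load:      {opex:,.0f}")    # 131,400

The point is not the exact figures but the structure: with pay-per-use, cost follows demand instead of the peak.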

Zero outage is a question of expertise
All service providers promise high quality in their services. But it is precisely here that companies should keep their eyes open when selecting the right partner; even the 99.95% availability guaranteed in many service-level agreements still allows the provider potential downtime of more than four hours per year. The technological base must therefore be checked with particular care, because it makes the key contribution to keeping data and networks highly available and protected. In particular, components that occur only once in a system, the single points of failure (SPOF), pose a serious risk of failure. Fully redundant component design is therefore decisive.
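The arithmetic behind such guarantees is simple; this short sketch computes the annual downtime budget implied by an SLA percentage, based on an 8,760-hour year:

    # Downtime budget implied by an availability guarantee.
    HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

    def downtime_per_year(availability_percent: float) -> float:
        """Maximum downtime in hours per year permitted by an SLA."""
        return HOURS_PER_YEAR * (1 - availability_percent / 100)

    for sla in (99.5, 99.95, 99.999):
        hours = downtime_per_year(sla)
        print(f"{sla}% availability allows {hours:.2f} h/year ({hours * 60:.0f} minutes)")

    # 99.5%   allows 43.80 h/year (2628 minutes)
    # 99.95%  allows  4.38 h/year  (263 minutes)
    # 99.999% allows  0.09 h/year    (5 minutes)

Note how the last line matches the “five minutes per year” cited below for 99.999% availability.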

A twin-core approach is considered particularly reliable; all critical systems and data are mirrored in a second data center. “If one of the data centers fails, the twin takes over automatically and operation is not interrupted. Business continuity is therefore maintained,” explains Gregor B. Maas, Managing Director of T-Systems in Japan. T-Systems itself now runs 22 such fail-safe twin-core data centers and has more than doubled their number since 2008. But according to Maas, this alone does not do the job: “Another basic requirement is of course a comprehensive disaster-recovery plan and secure links from the data centers to the outside world. Otherwise a provider cannot guarantee high availability.” A precise check of the service provider’s technological expertise will pay off for the company. “If the right requirements are met, it is even possible to provide real availability of 99.999%. This represents downtime of just five minutes per year and means real money for companies,” adds Maas. “Zero outage computing” is therefore no longer simply an ideal.
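To illustrate the failover principle, here is a conceptual sketch only, not T-Systems’ actual mechanism; the endpoint names and health check are hypothetical assumptions:

    # Conceptual twin-core failover: route traffic to the primary data center
    # and switch to the mirrored twin if its health check fails.
    # Endpoint names and check logic are illustrative assumptions only.
    import urllib.request

    DATA_CENTERS = [
        "https://dc-primary.example.com/health",  # hypothetical primary site
        "https://dc-twin.example.com/health",     # hypothetical mirrored twin
    ]

    def healthy(url: str, timeout: float = 2.0) -> bool:
        """Return True if the data center answers its health check."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except OSError:
            return False

    def active_data_center() -> str:
        """Pick the first healthy site; the twin takes over automatically."""
        for url in DATA_CENTERS:
            if healthy(url):
                return url
        raise RuntimeError("no data center reachable; invoke the disaster-recovery plan")

In a real deployment this decision sits in redundant load balancers or DNS failover rather than in application code, which is exactly why single points of failure in that path matter.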

Cloud computing: more than just technology
In addition to the technological level, other criteria must be considered to ensure the protection and availability of information, because an end-to-end strategy is the foundation of an economical cloud. It is not enough to rely on the technical components alone. A provider must, for example, also take appropriate precautions at the physical level; these include protection against power failure and strict security measures governing access to the data center. The company’s organizational aspects must also be considered when planning a cloud strategy. Comprehensive role, rights and identity management must be as much a matter of course as encryption and authentication technologies, e.g., one-time passwords for access to corporate information. After all, cloud computing only delivers greater dynamism and flexibility if a company’s specific requirements and conditions are viewed holistically.
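As a concrete example of such a measure, a time-based one-time password (TOTP, RFC 6238) can be generated with nothing but standard library tools; the shared secret below is a placeholder, and a production system would use a vetted library and proper secret storage:

    # Minimal time-based one-time password (TOTP, RFC 6238) sketch.
    # The shared secret is a placeholder; real deployments use vetted libraries.
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
        """Derive the current one-time password from a shared secret."""
        counter = int(time.time()) // step              # 30-second time window
        message = struct.pack(">Q", counter)            # 8-byte big-endian counter
        digest = hmac.new(secret, message, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp(b"placeholder-shared-secret"))          # e.g. "492039"

Because the code changes every 30 seconds, a captured password becomes useless moments later, which is the property that makes one-time passwords attractive for access to corporate information in the cloud.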

Quality management engenders trust
Another very important element in the chain is quality management. A provider’s services can only be assured at a high level if appropriate quality management continuously checks those services and optimizes them in an ongoing process. “The awareness of high quality in the delivery of a cloud must be firmly anchored throughout the companies providing these services, both in the individual process steps and among all employees. Ultimately, thorough quality management makes the key difference,” explains Maas.

Conclusion
Not all clouds are alike. Users are therefore well advised to choose a partner with proven experience in the marketplace and in large-scale projects. Nor should the company’s own objectives be neglected. This starts with a comprehensive analysis, and in the end the quality must be right. Only then can the cloud be a driver of growth.