A brief introduction to cloud computing

The first large-scale electronic general-purpose computer was the Electronic Numerical Integrator and Computer (ENIAC), built between 1943 and 1945. Its design was proposed by the physicist John Mauchly. Until it was decommissioned in 1955, the ENIAC and the engineering team supporting it most likely performed more calculations than had been performed in all of human history up to that point. As momentous as that achievement was, we have certainly come a long way since then.

The term paradigm shift was first introduced in 1962 by the American philosopher of science Thomas Kuhn in his influential book The Structure of Scientific Revolutions. Kuhn defines a paradigm as the set of formal theories, classic experiments, and trusted research methods that a scientific community accepts. Kuhn posited that scientists accept the predominant paradigm yet continuously test it: questioning it, refining its theories, and devising experiments to validate it. Sometimes the paradigm becomes inadequate to explain the behavior observed through experimentation. When this happens, a paradigm shift occurs, and a new theory or methodology replaces the old one. Kuhn walks us through the example of how the heliocentric model of the solar system eventually replaced the geocentric one because the evidence supporting heliocentrism became overwhelming.

Similarly, in computing, we have seen a few occasions when a better and more efficient method replaced the accepted way of doing things. Several tectonic shifts have occurred since the ENIAC days. Determining which ones matter most is a subjective exercise, but in our view, these are the most important:

  • The creation of the first electronic computers, starting with ENIAC
  • The advent of mainframes: An ENIAC in every company’s backroom
  • The PC revolution: A mainframe on everyone’s desk
  • The emergence of the internet: PCs being connected
  • The cloud tsunami: Turning computing into a utility service

As mentioned, this list is by no means final. You could easily argue that there have been other shifts. Does IoT belong on the list? What about blockchain? I don't think we are quite there yet, but the next paradigm shift will be the pervasive implementation of artificial intelligence. Also, not all of these paradigm shifts killed the previous paradigm: many corporations still trust their mission-critical operations to mainframes, PCs are still around, and the internet has a synergistic relationship with the cloud. Let's focus now on the last paradigm shift, since that is the topic of this chapter.

What exactly is cloud computing? It is a term often thrown around by people who don't fully understand what it means. Having your infrastructure in the cloud does not mean your servers are up in the sky. Put as plainly as possible, cloud computing is outsourcing a company's hardware and software infrastructure to a third party. Instead of running their own data centers, enterprises rent someone else's. This approach has many advantages:

  • Economies of scale: the cloud provider buys equipment in bulk and can pass those savings on to customers.
  • You only pay for the time you use the equipment in increments of minutes.
  • Arguably one of the most important benefits: the ability to scale up, out, down, and in as demand changes (see the sketch after this list).
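
To illustrate the pay-as-you-go and scaling ideas in practice, here is a minimal sketch using boto3, the AWS SDK for Python. It assumes AWS credentials are already configured; the AMI ID is a hypothetical placeholder, not a real image.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single on-demand instance; the meter starts when it runs.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... use the instance for a few minutes or hours ...

# Terminate it when finished; billing stops shortly afterward.
ec2.terminate_instances(InstanceIds=[instance_id])
```

The same two API calls work whether you need one instance or a thousand, and no purchase order or data center build-out is involved, which is what makes the scaling benefit above practical.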

When using cloud computing, you are not buying the equipment; you are leasing it. Equipment leasing has been around for a long time, but never at the speed that cloud computing provides. Cloud computing makes it possible to spin up a resource within minutes, use it for a few hours, minutes, or even seconds, and then shut it down, paying only for the time you used it. Furthermore, with the advent of serverless computing, such as AWS Lambda, we don't even need to provision servers: we can simply call a function and pay by the invocation.

One example drives the point home: the latest P3 instances available with the Amazon SageMaker machine learning service can be used for roughly $3.00 per hour (2022 on-demand pricing). That might sound like a high price for renting one computer, but a few years ago, we would have had to spend millions of dollars on a supercomputer with similar capabilities. Just as importantly, once a model is trained, the instance can be shut down and the inference engine deployed onto more appropriate hardware.

The ability to scale out and, notably, to scale back in is often referred to as elasticity or elastic computing. Elasticity allows companies to treat their computing resources as just another utility bill and pay only for what they need at any given moment.
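
To make the serverless model concrete, here is a minimal sketch of an AWS Lambda handler in Python. The (event, context) signature is Lambda's standard entry point; the payload field and the response shape are illustrative assumptions, not part of any particular application.

```python
import json

def lambda_handler(event, context):
    # Standard entry point invoked by AWS Lambda on each request.
    # We are billed per invocation and per unit of execution time;
    # no server is provisioned or managed by us.
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

While this function sits idle, it costs nothing; the provider allocates capacity only when a call arrives, which is precisely the pay-per-call model described above.

Next, we will learn about terms commonly used to specify how much of your infrastructure will live in the cloud versus how much will stay on-premises.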
