How I passed Google Certified Professional Cloud Architect Exam

After a few months of heavy prep, I managed to pass the exam. I got the electronic certificate, and supposedly I'll get a Cloud Architect hoodie! Yeah, I'm gonna wear it :)

The exam is every bit as difficult as advertised. I did A LOT of hands-on work in the Google Cloud Platform (the $300 of free credit that Google gives you to play around with comes in quite handy); without it, I don't think it's possible to pass. A bunch of questions have commands to choose from, and there's a heavy focus on App Development and Linux commands. If you want to know how I prepared, check out my previous posts:

  1. Why I decided to become a Certified Cloud Architect, why Google Cloud, and how I want to prepare
  2. Introduction to Big Data and Hadoop
  3. Google Cloud - Compute Options (IaaS, PaaS, CaaS/KaaS)
  4. Google Cloud - Storage and Big Data Options
  5. Google Cloud - Networking and Security Options

Stay tuned, my Cloud is about to get much more DevOps-y in 2018!

Public Cloud Networking and Security: VPCs, Interconnection to Cloud, Load Balancing

I'm so happy to finally be here, at the Networking part of the Public Cloud!!! I know, there are more important parts of the Cloud than Networks, but SDN is my true love, and we should give it all the attention it deserves.

IMPORTANT: In this post I will be heavily focusing on Google Cloud Platform. The concepts described here apply to ANY Public Cloud. Yes, specifics may vary, and in my opinion GCP is a bit superior to AWS and Azure at this moment, but if you understand how this one works - you'll easily get all the others.

Virtual Private Cloud (VPC)

A VPC (Virtual Private Cloud) provides global, scalable, and flexible networking - an actual Software Defined Network provided by Google. A project can have up to 5 VPCs (Virtual Private Cloud networks). A VPC is global, contains subnets, and uses a private IP space. Subnets are regional. The network that a VPC provides is:

  • Private
  • Secure
  • Managed
  • Scalable
  • Can contain up to 7000 VMs

Once you create the VPC, you have a cross-region RFC 1918 IP space network, using Google's private network underneath. It uses the global internal DNS, load balancing, firewalls and routes, and you can scale rapidly with global L7 Load Balancers. A subnet within a VPC can only exist within a single region; you can't extend a subnet over your entire VPC.

VPC Networks can be provisioned in:
  • Auto mode, where a subnet is automatically created in every region. Firewall rules and routes are preconfigured.
  • Custom mode, where we have to manually configure the subnets (see the sketch below).
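For illustration, creating a custom-mode VPC plus one regional subnet with the gcloud CLI might look roughly like this (names, region and IP range are placeholders):

  # Create a custom-mode VPC (no auto-created subnets)
  gcloud compute networks create my-vpc --subnet-mode custom
  # Add a regional subnet with an RFC 1918 range
  gcloud compute networks subnets create my-subnet \
      --network my-vpc --region europe-west1 --range 10.132.0.0/20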

IP Routing and Firewalling

Routes are defined for the networks to which they apply, and you can use an "instance tag" if you want a route to apply only to the Instances carrying that tag (if you don't specify the tag, the route applies to all instances).
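As a sketch, a tag-scoped route could look like this (the NAT instance, tag and priority are made-up examples; the next-hop instance must already exist):

  # Route Internet-bound traffic from instances tagged "no-ip"
  # through a NAT instance instead of the default gateway
  gcloud compute routes create no-ip-internet-route \
      --network my-vpc --destination-range 0.0.0.0/0 \
      --next-hop-instance nat-gateway \
      --next-hop-instance-zone europe-west1-b \
      --tags no-ip --priority 800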

When you route traffic to/from the Internet, you have 2 options: the default route via the default Internet gateway, or a custom route pointing to a NAT instance (for instances without external IPs).

A project can contain multiple VPCs (Google allows you to create up to 5 VPCs per project), and VPCs support multi-tenancy. All resources in GCP belong to some VPC. Routing and forwarding must be configured to allow traffic within the VPC and with the outside world, and you also need to configure the Firewall Rules.

VPCs are GLOBAL, meaning the resources can span anywhere around the world. Even so, instances from different regions CANNOT BE IN THE SAME SUBNET. An instance needs to be in the same region as a reserved static IP address; the zone within the region doesn't matter.

Firewall Rules can be based on the Source IP (ingress) or Destination IP (egress). There are DEFAULT "allow egress" and "deny ingress" rules, pre-configured for you with the minimum priority (65535). This means that if you configure new FW rules with a lower number (higher priority), those will be taken into account instead of the default ones. GCP Firewall rules are STATEFUL. You can also use TAGs and Service Accounts to configure the Firewall rules, and this is probably THE BIGGEST advantage of the Cloud Firewall, because you can do Micro-Segmentation in a native way. Once you create a Firewall Rule with a tag, the next time you create an instance and apply that rule, it will not be created again - the TAG is simply attached to your instance.
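Here's a hedged example of such a tag-based rule (all values are illustrative):

  # Allow SSH only to instances carrying the "ssh-allowed" tag,
  # at priority 1000 (overriding the default deny-ingress at 65535)
  gcloud compute firewall-rules create allow-ssh \
      --network my-vpc --direction INGRESS --action ALLOW \
      --rules tcp:22 --source-ranges 203.0.113.0/24 \
      --target-tags ssh-allowed --priority 1000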

There are 2 types of IP addresses in a VPC:
  • External, in the Public IP space
  • Internal, in the Private IP space

VPCs can communicate with each other using the Public IP space (external networks visible on the Internet). An External IP can be ephemeral (it can change, e.g. when the instance is stopped and restarted) or static. VMs don't know what their own external IP is. IMPORTANT: If you RESERVE an External IP in order to configure it as STATIC, and don't use it for an Instance or an LB - you will be charged for it! Once you assign it - it's free.

When you work with Containers, the containers need to focus on the Application or Service. They don't need to do their own routing, which simplifies traffic management.

Can I use a single RFC 1918 space across multiple GCP Projects?

Yes, using a Shared VPC - networks can be shared across regions, projects etc. If you have different departments that need to work on the same network resources, you'd create a separate project for each, give each department access only to the project they work on, and use a single Shared VPC for the network resources they all need to access.
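If I recall the CLI correctly, enabling Shared VPC looks roughly like this (project IDs are placeholders):

  # Mark the host project that owns the Shared VPC
  gcloud compute shared-vpc enable my-host-project
  # Attach a department's service project to it
  gcloud compute shared-vpc associated-projects add my-dept-project \
      --host-project my-host-project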

Google Infrastructure

Google's network infrastructure has three distinct elements:
  • Core data centers, used for computation and backend storage.
  • Edge Points of Presence (PoPs), where Google's network connects to the rest of the internet via peering. Google is present on over 90 internet exchanges and at over 100 interconnection facilities around the world.
  • Edge caching and services nodes (Google Global Cache, or GGC), the tier of Google's infrastructure closest to the users. With edge nodes, network operators and internet service providers deploy Google-supplied servers inside their own networks.

CDN (Content Delivery Network) is also worth mentioning. It's enabled by Edge Cache Sites (Edge PoPs), the places where online content can be delivered closer to the users for faster response times. It works with Load Balancing, and the content is CACHED in 80+ Edge Cache Sites around the globe. Unlike with most CDNs, your site gets a single IP address that works everywhere, combining global performance with easy management - no regional DNS required. For more information check out the official Google docs.

Connecting your environment to GCP (Cloud Interconnect)

While this may change in the future, a VPN hosted on GCP does not allow for client connections. However, connecting a VPC to an on-premises VPN (not hosted on GCP) is not an issue.

There are 3 ways you can connect your Data Center to GCP:
  • Cloud VPN/IPsec VPN, a standard site-to-site IPsec VPN session (supports IKEv1 and v2). It supports up to 1.5-3 Gbps per tunnel, but you can set up multiple tunnels to increase performance. You can also use this option to connect different VPCs to each other, or your VPC to another Public Cloud. A Cloud Router is not required for Cloud VPN, but it does make things a lot easier by introducing dynamic routing (BGP) between your DC and GCP. When using static routes, any new subnet on the peer end must be added to the tunnel options on the Cloud VPN gateway (a sketch follows this list).
  • Dedicated Interconnect, used if you don't want to go via the Internet and you can meet Google in one of the Dedicated Interconnect points of presence. You would be using a Google Edge Location (you can connect into it directly, or via a carrier), with a Google Peering Edge (PE) device to which your physical router (CE) connects [you need to be in a supported location - Madrid is included]. This is not cheap, currently around $1,700 per 10 Gbps link, 8 links (80 Gbps) max!
  • Direct Peering/Carrier Peering, which Google does not charge for, but there is also no SLA. Peering is a private connection directly into Google Cloud. It's available in more locations than Dedicated Interconnect, and it can be done directly with Google (Direct Peering) if you can meet Google's direct peering requirements (you need a connection in a colocation facility, either directly or through a carrier-provided wave service), or via a carrier (Carrier Peering).
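To make the Cloud VPN option concrete, here's a rough sketch of a classic route-based IPsec tunnel with static routing (all names, IPs and the secret are placeholders; the ESP/UDP 500/4500 forwarding rules are omitted for brevity):

  gcloud compute target-vpn-gateways create my-vpn-gw \
      --network my-vpc --region europe-west1
  gcloud compute vpn-tunnels create tunnel-to-dc \
      --region europe-west1 --target-vpn-gateway my-vpn-gw \
      --peer-address 203.0.113.10 --shared-secret MY_SECRET \
      --ike-version 2 --local-traffic-selector 0.0.0.0/0
  # With static routing, each on-prem subnet needs a route to the tunnel
  gcloud compute routes create route-to-dc \
      --network my-vpc --destination-range 192.168.0.0/16 \
      --next-hop-vpn-tunnel tunnel-to-dc \
      --next-hop-vpn-tunnel-region europe-west1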

And, as always, Google provides a Choice Chart if you're not sure which option is for you:

How do I transfer my data from my Data Center to GCP?

When transferring your content into the cloud, you would use the "gsutil" command-line tool, keeping in mind (examples follow the list):
  • Parallel composite uploads (the -o option, plus the parameters you need to set) are for breaking up larger files into pieces for faster uploads.
  • Multi-threaded uploads (-m) are for large numbers of smaller files. If you have a bunch of small files, you should group them together and compress them.
  • You can add multiple Cloud VPN tunnels to reduce the transfer time.
  • gsutil will by default occupy the entire bandwidth; there are parameters to limit this. When a transfer fails, gsutil will retry by default.
  • For ongoing automated transfers, use a cron job.
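Here are the two flags in practice - a hedged sketch, with bucket and paths made up:

  # Multi-threaded copy of many small files
  gsutil -m cp -r ./logs gs://my-bucket/logs/
  # Parallel composite upload: split files larger than 150MB into pieces
  gsutil -o GSUtil:parallel_composite_upload_threshold=150M \
      cp backup.tar gs://my-bucket/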

The Google Transfer Appliance is a new thing (probably not in the exam): it allows you to copy all your data onto an appliance, ship it to Google, and they will load it into the Cloud for you.

Load Balancing in GCP

This is one of the most important parts of Google Cloud, because it enables the elasticity much needed in the cloud by providing Auto Scaling for Managed Instance Groups.

Have in mind that the Load Balancing services for GCE and GKE work in different ways, but they basically achieve the same thing - Auto Scaling. Here is how this works:
  • In GCE there is a managed group of instances generated from the same template (a Managed Instance Group). By enabling a Load Balancing service, you're getting a global URL for your Instance Group, which includes the health-check service launched from the Balancer towards the Instances - the base trigger for Auto Scaling.
  • In GKE you'd have a Kubernetes Cluster, and the entire elastic operation of your containers is one of the signature functionalities of Kubernetes, so you don't have to worry about configuring any of this manually.

Let's get deeper into the types of Load Balancing (LB) services in GCP. Have in mind the ISO-OSI model: if you can provide the LB service at a higher layer - go for it! This means that if you can do HTTPS balancing, go for that rather than SSL proxy. If you can't go HTTPS - go for SSL proxy. If your traffic is not encrypted - sure, go for TCP proxy. Only if NONE of these works for you should you settle for the simple Network LB service.

IMPORTANT: Whenever you are using one of the encrypted LB services (HTTPS, SSL/TLS), the encryption terminates on the Load Balancer, and the Load Balancer then establishes a separate encrypted tunnel to each of the active Instances.

There are 2 types of Load Balancing on GCP (a gcloud sketch of the HTTP variant follows the list):
  1. EXTERNAL Load Balancing, for access from the OUTSIDE (Internet)
    • GLOBAL Load Balancing:
      - HTTP/HTTPS Load Balancing
      - SSL Proxy Load Balancing
      - TCP Proxy Load Balancing
    • REGIONAL Load Balancing:
      - Network Load Balancer (notice that the Network Load Balancer is NOT Global, only available in a single region)
  2. INTERNAL Load Balancing, for inter-tier access (example - web servers accessing databases)
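To give you a feel for how the global HTTP LB is assembled from smaller objects, here's a rough gcloud sketch (it assumes a managed instance group called my-mig already exists):

  gcloud compute health-checks create http my-hc --port 80
  gcloud compute backend-services create my-backend \
      --protocol HTTP --health-checks my-hc --global
  gcloud compute backend-services add-backend my-backend --global \
      --instance-group my-mig --instance-group-zone europe-west1-b
  gcloud compute url-maps create my-map --default-service my-backend
  gcloud compute target-http-proxies create my-proxy --url-map my-map
  # The global forwarding rule is the single anycast entry point
  gcloud compute forwarding-rules create my-fr \
      --global --target-http-proxy my-proxy --ports 80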

Google Cloud Platform (GCP) - How do I choose among the Storage and Big Data options?

Storage options are extremely important when using GCP, both performance- and price-wise. I will take a bit of a non-standard approach in this post: I will first cover the potential use cases, explain the Hadoop/standard DB you would use in each case, and then the GCP option for the same use case. Once that part is done, I will go a bit deeper into each of the GCP Storage and Big Data technologies. This post will therefore have 2 parts, plus an "added value" Annex:
  1. Which option fits to my use case?
  2. Technical details on GCP Storage and Big Data technologies
  3. Added Value: Object Versioning and Life Cycle management

1. Which option fits to my use case?

Before we get into the use cases, let's make sure we understand the layers of abstraction of storage. Block Storage is the typical storage consumed by applications: data stored in raw blocks (cylinders on a disk), UNSTRUCTURED DATA WITH NO ABSTRACTION. When you refer to data using a physical address - you're using Block Storage. You would normally want some abstraction on top of it, because it would be rather difficult to reference your data by blocks. File Storage is one possible abstraction, and it means you refer to data using a logical address. For this we need some kind of layer on top of our blocks, an intelligence that makes sure the blocks underneath are properly organized and stored on the disks, so that we don't get corrupt data.

Let's now focus on the use cases, and a single question - what kind of data do you need to store?

If you're building for Mobile, you will be using slightly different data structures.

Let's now get a bit deeper into each of the Use Cases, and see what Google Cloud can offer.
  1. If you need Block Storage for your compute VMs/instances, you would obviously be using Google's IaaS option, Compute Engine (GCE), and you would create the disks using:
    • Persistent disks (Standard or SSD)
    • Local SSD
  2. If you need to store unstructured data, or "Blobs" as Azure calls them, such as video, images and similar multimedia files - what you need is Cloud Storage.
  3. If you need your BI folks to access your Big Data using an SQL-like interface, you'll use BigQuery, a Hive-like Google product. This applies to cases 3 (SQL interface required) and 7 (OLAP/Data Warehouse).
  4. To store NoSQL documents like HTML/XML that have a characteristic pattern, you should use Datastore.
  5. For columnar NoSQL data that requires fast scanning, use Bigtable (the GCP equivalent of HBase).
  6. For Transactional Processing, or OLTP, you should use Cloud SQL (if you prefer open source) or Cloud Spanner (if you need lower latency and horizontal scaling).
  7. Same as 3.
  8. Cloud Storage for Firebase is great for security when you are doing Mobile.
  9. Firebase Realtime DB is great for fast random access with the mobile SDK. This is a NoSQL database, and it remains available even when you're offline.

2. Technical details on GCP Storage and Big Data technologies

Storage - Google Cloud Storage

Google Cloud Storage is organized in the form of BUCKETS, which are globally unique and identified by NAME, more or less like DNS. Buckets are STANDALONE, not tied to any compute or other resources.

TIP: If you want to use Cloud Storage with a website, have in mind that you need Domain Verification (adding a meta-tag, uploading a special HTML file, or directly via the Search Console).

There are 4 Bucket Storage Classes. You need to be really careful to choose the optimal class for your use case, because the classes designed for infrequent access are the ones where you'll be charged per access. You CAN CHANGE a Bucket's storage class. The files stored in a Bucket are called OBJECTS; Objects can have a class that's the same as or "lower" than the Bucket's, and if you change the Bucket's storage class - the Objects retain their own storage class. The Bucket Storage Classes are (a gsutil example follows the list):
  • Multi-regional, for frequent access from anywhere around the world. It's used for "hot objects", such as web content; it has 99.95% availability and it's geo-redundant. It's pretty expensive, at $0.026/GB/month.
  • Regional, for frequent access from one region, with 99.9% availability, appropriate for storing data used by Compute Engine instances. The Regional class has better performance for data-intensive computations, unlike Multi-regional.
  • Nearline - access once a month at most, with 99% availability, costing $0.01/GB/month with a 30-day minimum duration, but it has ACCESS CHARGES. It can be used for data backup, DR and similar.
  • Coldline - access once a year at most, with the same throughput and latency, for $0.007/GB/month with a 90-day minimum duration. You would still be able to retrieve your backup super fast, just with a somewhat higher bill. At least your business wouldn't suffer.
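As an illustration, creating a Nearline bucket and later rewriting its objects into Coldline might look like this (bucket name is a placeholder):

  # Create a Nearline bucket in a specific region
  gsutil mb -c nearline -l europe-west1 gs://my-backup-bucket/
  # Rewrite existing objects into the Coldline class
  gsutil rewrite -s coldline gs://my-backup-bucket/**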

We can get data IN and OUT of Cloud Storage using:
  • XML and JSON APIs
  • Command Line (gsutil - a command-line tool for storage manipulation)
  • GCP Console (web)
  • Client SDK

You can use the TRANSFER SERVICE to get your data INTO Cloud Storage (not out!) from AWS S3, http/https sources, etc. This tool won't let you get the data out. Basically you would use:
  • gsutil when copying files for the first time from on-premises.
  • Transfer Service when transferring from AWS etc.

Cloud Storage is not like Hadoop in the architectural sense, mostly because an HDFS architecture requires a Name Node, which you need to access A LOT, and this would increase your bill. You can read more about Hadoop and its ecosystem in my previous post, here.

When should I use it?

When you want to store UNSTRUCTURED data.

Storage - Cloud SQL and Google Spanner

These are both relational databases, for super-structured data. Cloud Spanner offers ACID++, meaning it's perfect for OLTP. It would, however, be too slow, with too many checks, for Analytics/BI (OLAP), because OLTP needs strict write consistency and OLAP does not. Cloud Spanner is Google proprietary, and it offers horizontal scaling, i.e. bigger data sets.

*ACID (Atomicity, Consistency, Isolation, Durability) is a set of properties of database transactions intended to guarantee validity even in the event of errors, power failures, etc.

When should I use it?

OLTP (Transactional) Applications.

Storage - Bigtable (HBase equivalent)

Bigtable is used for FAST scanning of SEQUENTIAL key values with LOW latency (unlike Datastore, which would be used for non-sequential data). Bigtable is a columnar database, good for sparse data (meaning - missing fields in the table), because similar data is stored next to each other. ACID properties apply only at the ROW level.

What is a columnar database? Unlike an RDBMS, it is not normalized, and it is perfect for sparse data (tables with a bunch of missing values), because columns are converted into rows in the columnar data store, and null-value columns are simply not converted. Easy. Columnar DBs are also great for data structures with dynamic attributes, because we can add new columns without changing the schema.

Bigtable is sensitive to hot-spotting: if row keys are sequential with a monotonically increasing prefix (like a raw timestamp), writes pile up on a single node, so design your row keys to spread the load.

When should I use it?

Low Latency, SEQUENTIAL data.

Storage - Cloud Datastore (has similarities to MongoDB)

This is a much simpler data store than Bigtable, similar to MongoDB and CouchDB. It's a key-value structure for structured data, designed to store documents, and it should not be used for OLTP or OLAP but for fast lookups on keys (a needle-in-the-haystack type of situation, looking up non-sequential keys). Datastore is similar to an RDBMS in that they both use indexes for fast lookups. The difference is that Datastore query execution time depends on the size of the returned result, so a query will take the same time whether it runs against a dataset of 10 rows or 10,000 rows.

IMPORTANT: Don't use Datastore for write-intensive data, because the indexes are fast to read but slow to write.

When should I use it?

Low Latency, NON-SEQUENTIAL data (mostly documents that need to be searched really quickly, like XML or HTML, which have characteristic patterns that Datastore INDEXES). It's perfect for SCALING of HIERARCHICAL documents with key/value data. Don't use Datastore for OLTP (Cloud Spanner is a better choice) or OLAP/Warehousing (BigQuery is a better choice). Don't use it for unstructured data (Cloud Storage is better here). It's good for multi-tenancy (think of HTML, and how the schema can be used to separate data).

Big Data - Dataproc

Dataproc is GCP's managed Hadoop + Spark (every machine in the cluster includes Hadoop, Hive, Spark and Pig; you need at least 1 master and 2 workers, and additional workers can be preemptible VMs). Dataproc uses Google Cloud Storage instead of HDFS, simply because a Hadoop Name Node would consume a lot of GCE resources.
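Spinning up a cluster is basically a one-liner; a hedged sketch (names and worker counts are examples, and the preemptible-workers flag may differ between gcloud versions):

  gcloud dataproc clusters create my-cluster \
      --region europe-west1 --num-workers 2 \
      --num-preemptible-workers 2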

When should I use it?

Dataproc allows you to move your existing Hadoop to the Cloud seamlessly.

Big Data - Dataflow

Dataflow is in charge of data transformation, similar to Apache Spark in the Hadoop ecosystem. Dataflow is based on Apache Beam, and it models the flow (PIPELINE) of data and transforms it as needed. A Transform takes one or more PCollections as input and produces an output PCollection.

Apache Beam uses the I/O Source and Sink terminology to represent the original data and the data after the transformation.

When should I use it?

Whenever you have one data format at the Source and you need to deliver it in a different format, you would use something like Apache Spark or Dataflow as the backend.

Big Data - BigQuery

BigQuery is not designed for low-latency use, but it is VERY fast compared to Hive. It's not as fast as Bigtable and Datastore, which are preferred for low latency. BigQuery is great for OLAP, but it cannot be used for transactional processing (OLTP).

When should I use it?

If you need a Data Warehouse, if your application is OLAP/BI, or if you require an SQL interface on top of Big Data.
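To give a taste of that SQL interface, here's a hedged bq CLI example against one of the public sample datasets (dataset and field names as I recall them):

  bq query --use_legacy_sql=false \
      'SELECT name, SUM(number) AS total
       FROM `bigquery-public-data.usa_names.usa_1910_2013`
       GROUP BY name ORDER BY total DESC LIMIT 5'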

Big Data - Pub/Sub

Pub/Sub (Publisher/Subscriber) is a messaging transport system; it can be described as messaging middleware. Subscribers subscribe to the TOPIC that the publisher publishes to; the Subscriber then sends an ACK to the "Subscription", and the message is deleted from the queue. This message stream is called the QUEUE. Message = Data + Attributes (key-value pairs). There are two types of subscribers (a CLI sketch follows):
  • PULL subscribers, where the app makes HTTPS requests to the Pub/Sub API to fetch messages.
  • PUSH subscribers, which are webhook endpoints able to accept POST requests over HTTPS.
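To make this concrete, a minimal round trip with the gcloud CLI might look like this (topic name and message are made up; on older gcloud versions these commands lived under "gcloud beta pubsub"):

  gcloud pubsub topics create orders
  gcloud pubsub subscriptions create orders-sub --topic orders
  gcloud pubsub topics publish orders --message '{"order_id": 42}'
  # A PULL subscriber fetching and acknowledging the message
  gcloud pubsub subscriptions pull orders-sub --auto-ack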

When should I use it?

Perfect for applications such as Order Processing, Event Notifications, Logging to multiple systems, or Streaming data from various sensors (typical for IoT).

Big Data - Datalab

Datalab is an environment where you can execute notebooks. It's basically Jupyter/IPython for running code in notebooks. Notebooks are better than text files for code, because they combine Code, Documentation (markdown) and Results. Notebooks are stored in Google Cloud Storage.

When should I use it?

When you want to use Notebooks for your code.

Need some help choosing?

If it's still not clear which is the best option for you, Google also made a complete Decision Tree, exactly like in the case of "Compute".

3. Added Value: Object Versioning and Lifecycle Management

Object Versioning

By default in Google Cloud Storage, if you overwrite or delete a file in a Bucket, the older file is gone, and you can't get it back. When you ENABLE Object Versioning on a Bucket (it can only be enabled per bucket), the previous versions are ARCHIVED and can be RETRIEVED later.

When versioning is enabled, you can perform different actions - for example, use an older file to override the LIVE version (see the gsutil example below).
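With gsutil, the workflow might look roughly like this (bucket/object names are placeholders; the generation number comes from the ls -a output):

  gsutil versioning set on gs://my-bucket
  # List all versions, archived ones included, with generation numbers
  gsutil ls -a gs://my-bucket/report.txt
  # Restore an archived version over the LIVE one
  gsutil cp gs://my-bucket/report.txt#1234567890123456 \
      gs://my-bucket/report.txt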

Object Lifecycle Management

To avoid the archived versions creating chaos at some point in time, it's advisable to implement some kind of Lifecycle Management. Note that the previous versions of a file maintain their own ACL permissions, which may differ from the LIVE version's.

Object Lifecycle Management effectively gives you a TTL on objects. You create RULES, which combine CONDITIONS with ACTIONS to manage your objects:
  • Conditions are criteria that must be met before the action is taken. These are: object Age, Date of creation, whether the object is currently LIVE, Matching a Storage Class, and the Number of Newer Versions.
  • Actions: you can DELETE the object or Set another Storage Class.

This way you can get pretty imaginative: for example, delete all objects older than 1 year, or, when a rule's conditions are met, change the class of the object from, say, Regional to Nearline.
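That "Regional to Nearline after a year" example could be expressed roughly like this (lifecycle.json is a local file you write yourself, then apply with gsutil):

  # lifecycle.json
  {
    "rule": [
      {
        "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
        "condition": {"age": 365, "matchesStorageClass": ["REGIONAL"]}
      }
    ]
  }

  # Apply it to the bucket
  gsutil lifecycle set lifecycle.json gs://my-bucket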

Google Cloud Platform (GCP) - How do I choose among the Compute options? IaaS, PaaS, CaaS/KaaS?

Google has made their Cloud Platform (GCP) so that you can host your application any way your business requires. When we talk about a traditional Data Center, we tend to distinguish 3 types of "resources":
  • Compute
  • Storage
  • Networking and Security

In each of these 3 areas, GCP offers you plenty of options. Don't be naive though: you will need to know the options quite well in order to optimize your performance and costs. In this 3-part blog post I will go into each of these 3 areas in detail, and hopefully help you with your decision.

Let's start with the Compute options. There are a few options to choose from: you can go for Google App Engine (the PaaS option), focus on the code, and let Google handle everything else; use GCE to simply deploy your VMs (or Instances, as they call them) the way you like; or you can choose Containers. My idea is to explain each of the options in a bit more detail. If this is something you'd be interested in - keep reading.

What are IaaS and PaaS?

Let's start with a simple question: what are IaaS (Infrastructure as a Service) and PaaS (Platform as a Service), and how are they different from the traditional On-premise/Data Center model? Back to basics - what does our application need in order to run? Let's start from the bottom of the application stack:
  • Networking, to be reached, and to reach data it requires to operate. We need Switches, Routers, etc.
  • Storage, to store data. We need Disks, Storage Cabins, SAN switches.
  • Servers, to run the compute loads. Physical Servers, with RAM, CPUs etc.
  • Virtualization, to optimize the usage of the Physical Resources by using the VMs (Virtual Machines).
  • Operating System
  • Middleware
  • Runtime
  • Data
  • Applications

In the On-Premise architecture, it is on us to (over-)provision and manage all these resources. Wouldn't it be great if someone provisioned and managed some of the "basic" layers for us, so that we could focus on the part that actually matters to our business? THIS is what it's all about. I like how AWS defines this: in IaaS, the Cloud Provider takes care of all the heavy lifting, or as they call it - the Undifferentiated Services, while you handle the services on top, the ones that make your business different from your competitors.

Now, check out the following diagram, to see what exactly is managed by the Cloud Provider, and what is managed by You, in the case of IaaS, PaaS and SaaS.

*In some examples out there, in IaaS the OS is partially managed by You. This pretty much depends on the model the Cloud Provider is offering.

What Compute options does GCP offer?

There are 4 Compute options for hosting your applications in Google Cloud. You can use one of them, or mix and match:
  1. Google Cloud Functions (currently in Beta). A serverless environment for building and connecting other cloud services: very simple, very single-purpose functions, written in JavaScript and executed in Node.js. A Cloud Function executes in response to a TRIGGER event. Functions are not exactly a Compute option, but they do match this use case, so I'll keep them here.
  2. Google App Engine (GAE) is the PaaS option: a flexible, zero-ops, serverless platform for highly available apps. You would choose GAE if you ONLY want to focus on writing code. It can be used for websites, mobile apps, gaming backends, and also for IoT. Google App Engine is a MANAGED SERVICE, meaning you NEVER need to worry about the infrastructure - it's invisible to you. There are 2 available environments: Standard (predefined runtime) and Flexible (configurable runtime). We will get into these in more detail.
  3. Google Kubernetes Engine (GKE) is the CaaS/KaaS (Containers/Kubernetes as a Service) option: clusters of machines running Kubernetes and hosting containers. Containers are self-contained services, packaging all the libraries and dependencies, so that you don't have to worry about the Operating System at all. GKE allows you to focus on the applications, not on the OS. You should use it to increase velocity and improve operability by separating the application from the OS. Ideal for hybrid applications.
  4. Google Compute Engine (GCE) is the IaaS option, fully controllable down to the OS. We're talking about VM Instances. You should use it if you have very specific requirements for your operating system, or if you need GPUs (yes, this is the only option that lets you add Graphical Processing Units for compute-intensive tasks).

There is also a fifth option called Firebase, which is specific to Mobile, but I won't go into that right now. Instead, let's focus on the four options mentioned above. Each of them can be used for any application, each has its pros and cons, and it's on you to choose the one that fits best. Yes, you can mix them in the same application! Check out the following diagram to get a clearer picture:

Google Cloud Launcher

Before we get into more detail about the Compute options, I'd like to cover Cloud Launcher, one of my favorite tools in GCP. Google Cloud Launcher can help you set up a simple app, such as WordPress or a LAMP stack, in a few minutes. You can customize your application, because you have full control of your instances. You will also know more or less how much everything will cost before you deploy it. Remember this for now, because I will be mentioning the Launcher later.

Google Cloud Functions

Floating, serverless execution environments for building and connecting cloud services. You write simple, single-purpose functions; when a watched event fires, the Cloud Function is triggered. The code runs in a standard Node.js runtime. This is a perfect option for coders who like to structure their applications as functions.

Google App Engine (GAE) - PaaS

The PaaS option is perfect if you just want to focus on your code and you trust Google to manage your entire infrastructure, including the Operating System. It tends to be very popular with software, mobile and web developers. If you prefer to pay per use rather than per allocation, you might prefer the PaaS (No-Ops) option to the IaaS (DevOps) option. Also, there's no vendor lock-in: you can easily move your apps to another platform, because everything is built on open-source tools. App Engine is REGIONAL, and Google makes sure you have HA using different (availability) zones within the region.

Can you use GAE in multiple regions? No - you cannot change the region. Your app will be served from the region you chose when creating it. Anyone can use the app, but users closer to the selected region will have lower latency.

App Engine supports ONLY HTTP/S.

GAE is super easy to use. You basically create a new folder, store your files in there, and execute the command "gcloud app deploy". That's it!
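To show how little is needed, here's a minimal sketch for the Standard environment with the Python 2.7 runtime (file contents and handler names are illustrative):

  # app.yaml
  runtime: python27
  api_version: 1
  threadsafe: true
  handlers:
  - url: /.*
    script: main.app

  # Deploy from the folder containing app.yaml and main.py
  gcloud app deploy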

There are two environments, depending on whether you need to customize the OS:
  1. Standard (deployed in containers), preconfigured with one of several available runtimes (specific versions of Java 7, Python 2, Go, PHP). Each runtime includes the standard libraries. Basically this is a container - serverless. Your code runs in a sandboxed environment.
  2. Flexible (deployed in VM instances, based on GCE), which you can customize into a non-standard environment; you can use Java 8, Python 3.x, .NET, with support also for Node.js, Ruby, C#. This is not a container - it's a VM compute instance, and you are charged based on the usage of the VM instance (CPU, memory, disk usage) that's been provisioned for you. Unlike on GCE, the instances are automatically managed for you, meaning regional placement, updates, patches and all (root SSH is disabled by default, but can be enabled).

IMPORTANT: Scale-up time is measured in seconds in the Standard environment and in minutes in the Flexible environment, simply because containers start much faster than VM instances.

Google Compute Engine (GCE) - IaaS

Google Compute Engine should be used when you need IaaS - for example, when you need to tune your Load Balancing and Scaling. When you create a VM instance (each instance needs to belong to a Project, and a Project can have up to 5 VPCs), you need to choose a Machine Type, a Zone, and an Operating System (Linux and Windows Server are available; you get root access, with SSH/RDP enabled). You can choose one of the following Machine Types, but have in mind that in order to change it later you need to stop the instance, change the type, and then turn it back on:
  • Standard
  • High memory
  • High CPU
  • Shared core (small, non resource intensive)

Compute Engine instances are pay-per-allocation: when an instance is running, it is charged at a per-second rate whether it is being used or not. I'd also like to use this section to clarify a few important concepts related to GCE:
  • What is a PREEMPTIBLE instance?
  • How does Google Maintenance affect your workloads?
  • How do I automate instance creation?
  • What Disks can I assign to my Instance?
  • Which VMs and Images are available for me, and can I qualify for discounts?

What's a Preemptible VM instance?

A type of VM instance that can be deleted with a 30-second notification, once the SOFT OFF signal is sent (best practice: have a SHUTDOWN SCRIPT able to shut the instance down and do all the clean-up in less than 30 seconds). It's much cheaper, of course, because it can be deleted AT ANY TIME (and at least once every 24 hours). It can, for example, be used for fault-tolerant applications.
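Creating one with a clean-up script attached could look roughly like this (zone and script name are placeholders):

  # cleanup.sh must finish in under 30 seconds
  gcloud compute instances create batch-worker-1 \
      --zone europe-west1-b --preemptible \
      --metadata-from-file shutdown-script=cleanup.sh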

How does Google Maintenance affect your workloads?

Google can shut down your machine for maintenance. You can configure what happens in this case: migrate or terminate. This is your call, as it directly depends on the nature of your applications, and whether they are Cloud Native (instances treated as cattle rather than as pets. Confused? Read my previous post for clarification).

Live Migration allows an instance to keep running even during maintenance, a HW or SW update, failed HW, network or power-grid maintenance, etc. The instance is moved to another host in the same zone. The VM gets a notification that it needs to be evicted; a new VM is selected as the migration target, and the connection between the old and the new VM is AUTHENTICATED.

When a Live Migration is executed, there are 3 stages:
  1. Pre-migration brownout: the VM keeps executing on the source while most of its state is sent from source to target. The duration depends on the amount of memory to be copied and similar factors.
  2. Blackout: a brief moment when neither VM is running.
  3. Post-migration brownout: the VM is running on the destination/target host, but the source VM is not yet killed, ready to provide support if needed.

  • Preemptible instances cannot be live migrated.
  • Live migration cannot be used for VMs with GPUs.
  • Instances with a local SSD can be live migrated.


How do I automate instance creation?

To AUTOMATE instance creation, you can use the gcloud command line. One option, for example, is to assign a LABEL to the instances you want to group (into an Instance Group) in order to monitor or automate them. You can get the exact script to, for example, create an instance from the graphical interface - just look for the "API and command line equivalents". Yes, this is awesome: you can literally get the API call for any graphical-interface action you take. Automation made easy; good job, Google!
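A hedged one-liner combining both ideas - scripted creation plus labels (machine type, zone and labels are examples):

  gcloud compute instances create my-vm \
      --zone europe-west1-b --machine-type n1-standard-1 \
      --labels env=test,team=networking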

DevOps tools are also available (with GCP equivalents for some), which is great if you have strong DevOps skills in house:
  • Compute Engine Management using Puppet, Chef, Ansible.
  • Automated Image Builds with Jenkins, Packer and Kubernetes.
  • Distributed Load Testing with Kubernetes.
  • Continuous Delivery with Travis CI.
  • Managing Deployments using Spinnaker.

What Disks can I assign to my Instance?

You also have loads of storage options for your instances. I won't go into them in detail here, but to create a disk for your VM instance you have 4 options:
  • Cloud Storage Bucket, as the cheapest option.
  • Standard persistent disks (64 TB).
  • SSD persistent disks (64 TB).
  • Local SSD (3 TB), actually attached to the instance, in the same Server.

Which VMs and Images are available for me, and how do I qualify for discounts?

Images help you instantiate new VMs with the OS already installed. There are Standard and Premium images, depending on whether you need some kind of license, like for Red Hat Enterprise Linux or MS Windows. Have in mind that you have 2 possibilities for getting your image ready to launch (a baking sketch follows the list):
  • A Startup Script, which you need to write so that it downloads your dependencies and prepares everything. It needs to always bring the VM to the same state, regardless of how many times you execute it.
  • Baking, a more efficient way to create an image for faster instance provisioning. You would start from a Premium image and create a custom image (a sort of template, if you will). Baking provisions an instance much faster than a startup script, because everything is included in the "baked" image. Version management and rollbacks are also much easier - you can just roll back the image as a whole.
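Baking itself boils down to creating an image from a disk you've already configured; a rough sketch (disk, zone and family names are examples):

  # Create a reusable "baked" image from a configured instance's disk
  gcloud compute images create my-app-image-v2 \
      --source-disk my-configured-disk \
      --source-disk-zone europe-west1-b \
      --family my-app
  # New instances then launch from the image family
  gcloud compute instances create web-1 \
      --zone europe-west1-b --image-family my-app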

Check out this link about Google Cloud pricing for more details.

In the image lifecycle, the possible statuses are: CURRENT, DEPRECATED (can still be used and launched), OBSOLETE (cannot be launched) and DELETED (cannot be used). This should give you an idea of how you would manage your image versions.

  • Snapshots can only be accessed within the same project.
  • All machines are charged for at least 1 minute. After that, a per-second payment is applied. The more you use the VM, the more discount you get.

Before we get to the possible discounts, you first need to choose your machine type correctly, to optimize cost and performance (a custom-type example follows the list):
  1. Pre-defined machine types, which come in several flavors:
    • Standard
    • High Memory: more memory per vCPU (6.5 GB per core)
    • High CPU: more vCPUs per unit of memory (0.9 GB per core)
  2. Custom machine types: you specify the number of vCPUs and the memory. You would start with one of the pre-defined types, and if you see that your CPU or memory is under-utilized, customize it.
  3. Shared-core, meant for small, non-resource-intensive applications that require BURSTING.
Google offers a few types of discounts/price optimizations, among others:
  1. Sustained use discounts, applied automatically when you run a VM for a large part of the month.
  2. Committed use discounts, which you purchase as a 1-year or 3-year contract in exchange for a good price.
  3. Rightsizing, a feature that recommends which size of VM to run after analyzing your application's behavior. This is a brand new feature, and it relies on Stackdriver information collected over the past 8 days.

Google Containers/Kubernetes Engine (GKE) - CaaS/KaaS

If you have lots of dependencies, you would of course benefit most from using Containers. A container is a lightweight, standalone executable package that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Containers decouple the application from the Operating System, and they can reliably run on different environments. Different containers run on the same kernel, as presented in the picture below, taken from the Docker web page:

Container vs VM

A VM contains an entire operating system packaged along with the application. A container doesn't bring its own OS: it shares the host's kernel and contains only the application and the essential libraries, binaries, etc., and it can easily be moved from one physical or virtual machine running a container engine to another. Containers are much faster, as there is no OS to boot, and they are much smaller in size.

To be precise, using Containers/Dockers we can achieve:
  • Process isolation
  • Namespace isolation
  • Own Network Interface
  • Own Filesystem

Meanwhile, when we say "microservice", that simply means one container = one process.

What is Kubernetes?

Kubernetes is an open-source container manager (orchestrator), originally created by Google for its internal use. Kubernetes automates deployment, scaling and management. This means that using Kubernetes you can:
  • Rollout new features seamlessly
  • Auto scale your application
  • Run your application in a hybrid environment, as long as your VMs run Kubernetes.

Why is Kubernetes so important here? Because Google Kubernetes Engine uses Kubernetes as its container management engine.

Let's first check out the important components of the Kubernetes architecture:
  • A Container Cluster has one supervising machine running Kubernetes (the Master Endpoint; the Master Instance works like a Hadoop Cluster Manager). The Kubernetes Master manages the cluster and is your single point of management for it.
  • The Master Instance stays in touch with a number of individual machines via an agent called the Kubelet; each of these machines also runs Docker.
  • Each individual machine running a Kubelet is known as a Node Instance.
  • A Pod is the smallest deployable unit: a group of 1 or more containers on a Node. The containers run inside Pods on every Node Instance. A Pod's settings are kept in a Template.
  • The Replication Controller ensures that a specific number of Pod replicas are running across Nodes.
  • Services are the abstraction layer that decouples frontend clients from the backend Pods. They define a LOGICAL set of Pods across Nodes and the way of accessing them. Load Balancing is one such Service, creating an IP and a port as a connection point to our Pods.
  • A Label is METADATA with semantic meaning, used for selecting and grouping objects.
  • The Scheduler is in charge of scheduling Pods onto Nodes. Basically it works like this: you create a Pod, the Scheduler notices that the new Pod doesn't have a Node assigned to it, and assigns one. It's not responsible for actually running the Pod - that's the Kubelet's job. It basically just makes sure every Pod has a Node assigned to it.
  • kubectl is the CLI tool for Kubernetes (a mini-workflow follows).
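To tie these components together, here's a hedged mini-workflow: create a GKE cluster, run a deployment of 3 Pod replicas, and expose it via a LoadBalancer Service (names and image are examples; kubectl run behavior varies across Kubernetes versions):

  gcloud container clusters create my-cluster \
      --zone europe-west1-b --num-nodes 3
  # Create a deployment of 3 Pod replicas from a container image
  kubectl run web --image=gcr.io/my-project/web:v1 --replicas=3 --port=8080
  # Expose it through a LoadBalancer Service with a single IP
  kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=8080
  kubectl scale deployment web --replicas=5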

Google Kubernetes Engine includes the following components, most of them already clarified in the Kubernetes architecture above:
  • Container Cluster, which includes a Kubernetes Master and the Compute Engine instances your containers run on, all managed through the Kubernetes Master.
  • Kubernetes Master, as the single point of management of the cluster.
  • Pods, as groups of containers.
  • Nodes, as individual Compute Engine instances.
  • Replication Controller, ensuring the defined number of Pods is always available.
  • Services, decoupling frontend clients from the backend Pods, providing a Load Balancer with a single URL to access your backend.
  • Container Registry, the image repository from which you deploy container images.

Why GKE, and not Kubernetes on GCE?

This all depends on what exactly your needs are. You can use CaaS by Google (GKE), which is easier out of the box - Google manages the entire "undifferentiated" application stack, up to the containers. Or you can build your own container management on top of Google's IaaS (GCE), for example if you need GPUs, have specific OS needs, want a non-Kubernetes container solution, or are migrating an existing on-premises container setup.

Before you decide to, for example, run Kubernetes directly, without something like GKE on top of it, I strongly recommend you investigate the following GitHub link on implementing Kubernetes without the pre-defined scripts:

If you use containers, the best approach is the DevOps methodology, with Jenkins for CI/CD. You can use Stackdriver for logging and monitoring.

Storage options for GKE are the same as with GCE, but container disks are ephemeral (they last only as long as the container), so if you want your data to survive, you need to use an abstraction called a gcePersistentDisk.

When would you use GKE instead of GAE?

GAE only supports HTTP/HTTPS, so if you need any other protocol - you would go for CaaS rather than App Engine. Also, if you are using a multi-cloud environment, note that GAE only works on GCP. App Engine doesn't use Kubernetes, so if you want Kubernetes - you would also rather go for GKE.

Interesting fact: Pokémon GO was deployed on GKE (with 50x more users connecting than expected), while Super Mario Run (launched in 150 countries at the same time) was deployed on GAE.

Need some help choosing?

If it's still not clear which is the best option for you, Google also made a complete Decision Tree.
