You can use Google App Engine to host a static website. Static web pages can contain client-side technologies such as HTML,
CSS, and JavaScript. Hosting your static site on App Engine can cost less than using a traditional hosting provider, as App
Engine provides a free tier.
Sites hosted on App Engine are hosted on the appspot.com subdomain, such as [my-project-id].appspot.com . After you
deploy your site, you can map your own domain name to your App Engine-hosted website.
1. Create a new GCP Console project or retrieve the project ID of an existing project to use:
Tip: You can retrieve a list of your existing project IDs with the gcloud command line tool.
2. Install and then initialize the Google Cloud SDK:
The app.yaml file is a configuration file that tells App Engine how to map URLs to your static files. In the following steps, you will
add handlers that will load www/index.html when someone visits your website, and all static files will be stored in and called from
the www directory.
1. Create a directory that has the same name as your project ID. You can find your project ID in the Console.
2. In the directory that you just created, create a file named app.yaml .
3. Edit the app.yaml file and add the following code to the file:
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
  static_files: www/index.html
  upload: www/index.html

- url: /(.*)
  static_files: www/\1
  upload: www/(.*)
More reference information about the app.yaml file can be found in the app.yaml reference documentation.
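To see how those handlers behave, here is a small sketch in Python. This is only an illustration of the pattern matching, not App Engine's actual matcher:

```python
import re

def resolve(path):
    # Mimic the handlers above: "/" serves www/index.html,
    # and "/(.*)" maps the captured group to a file under www/.
    if path == "/":
        return "www/index.html"
    match = re.match(r"/(.*)", path)
    return "www/" + match.group(1)

print(resolve("/"))               # www/index.html
print(resolve("/css/style.css"))  # www/css/style.css
```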
Create an HTML file that will be served when someone navigates to the root page of your website. Store this file in
your www directory.
<html>
<head>
<title>Hello, world!</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello, world!</h1>
<p>
This is a simple static HTML file that will be served from Google App
Engine.
</p>
</body>
</html>
When you deploy your application files, your website will be uploaded to App Engine. To deploy your app, run the following
command from within the root directory of your application where the app.yaml file is located:
Optional flags:
Include the --project flag to specify an alternate GCP Console project ID from the one you initialized as the default in
the gcloud tool. Example: --project [YOUR_PROJECT_ID]
Include the -v flag to specify a version ID; otherwise, one is generated for you. Example: -v [YOUR_VERSION_ID]
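Putting the command and optional flags together, the deploy invocation presumably looks like this (the project and version IDs are placeholders):

```
gcloud app deploy app.yaml --project [YOUR_PROJECT_ID] -v [YOUR_VERSION_ID]
```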
To learn more about deploying your app from the command line, see Deploying a Python App.
To launch your browser and view the app at https://[YOUR_PROJECT_ID].appspot.com , run the following command:
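The command in question is most likely:

```
gcloud app browse
```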
What’s next
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
App Engine allows applications to be served via a custom domain, such as example.com , instead of the
default appspot.com address. You can create a domain mapping for your App Engine app so that it uses a custom domain.
By default, when you map your custom domain to your app, App Engine issues a managed certificate for SSL for HTTPS
connections. For more information on using SSL with your custom domain, including how to use your own SSL certificates,
see Securing your custom domains with SSL.
Use this page to learn how to create a domain mapping for your app that is running on App Engine.
Using custom domains in the following regions might add noticeable latency to responses: northamerica-northeast1 (Montréal), southamerica-east1 (São Paulo), asia-south1 (Mumbai), and australia-southeast1 (Sydney).
1. Purchase a new domain, unless you already have one that you want to use. You can use any domain name registrar,
including Google Domains.
2. If you choose to use the gcloud tool commands:
a. Install and initialize the Cloud SDK:
3. If you choose to use the Admin API, see the prerequisite information in Accessing the Admin API.
Note: Some of the gcloud commands and Admin API methods that are used in this topic are beta-level features.
1. Verify that you are the owner of your domain through Webmaster Central:
a. In the Google Cloud Platform Console, go to App Engine > Settings > Custom Domains:
b. Click Add a custom domain to display the Add a new custom domain form:
c. In the Select the domain you want to use section, enter the name of the domain that you want to use, for example example.com ,
and then click Continue to open a new tab to the Webmaster Central page.
i. Use Webmaster Central to verify ownership of your domain.
Important: Verifying domain ownership by using a CNAME record is the preferred option for App Engine. If you choose to use
a TXT record, you must avoid configuring your domain's DNS with a CNAME record because the CNAME record overrides
the TXT record and causes your domain to appear unverified.
If the verification methods for your domain do not offer the CNAME record option, you can select Other as your domain
provider and then choose Add a CNAME record:
i. Click Alternate methods and then Domain name provider.
ii. In the menu, select Other.
iii. In the Having trouble section, click Add a CNAME record and then follow the instructions to verify ownership of
your domain.
Remember: It might take a minute before your CNAME is set at your domain registrar.
ii. Return to the Add new custom domain form in the GCP Console.
2. Ensure that your domain has been verified, otherwise you will not be able to proceed with the following steps. Note
that only verified domains will be displayed.
Important: The domain verification is automatically re-confirmed about every 30 days. So if you remove the verification string
from your DNS settings, you will lose the ability to change the configuration within the GCP Console. However, if this happens,
the serving setup for the domain does not change and the app continues to serve over the custom domain.
3. If you need to delegate the ownership of your domain to other users or service accounts, you can add permission through
the Webmaster Central page:
a. Open the following address in your web browser:
https://www.google.com/webmasters/verification/home
b. Under Properties, click the domain for which you want to add a user or service account.
c. Scroll down to the Verified owners list, click Add an owner, and then enter a Google Account email address or
service account ID.
To view a list of your service accounts, open the Service Accounts page in the GCP Console:
4. After you verify ownership of your domain, you can map that domain to your App Engine app:
Continue to the next step of the Add new custom domain form to select the domain that you want to map to your App Engine app:
a. Specify the domain and subdomains that you want to map. The naked domain and www subdomain are pre-populated in the form.
A naked domain, such as example.com , maps to http://example.com .
A subdomain, such as www , maps to http://www.example.com .
b. Click Save mappings to create the desired mapping.
c. In the final step of the Add new custom domain form, note the resource records that are listed, including their type and canonical
name ( CNAME ), because you need to add these details to the DNS configuration of your domain.
In the example below, CNAME is one of the types listed, and ghs.googlehosted.com is its canonical name.
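For reference, an equivalent mapping can be created from the command line with gcloud (the domain below is a placeholder):

```
gcloud app domain-mappings create 'www.example.com'
```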
5. Add the resource records that you receive to the DNS configuration of your domain registrar:
a. Log in to your account at your domain registrar and then open the DNS configuration page.
b. Locate the host records section of your domain's configuration page and then add each of the resource records that
you received when you mapped your domain to your App Engine app.
Typically, you list the host name along with the canonical name as the address. For example, if you registered a
Google Domain, then one of the records that you add to your DNS configuration is the www host name along with
the ghs.googlehosted.com address. To specify a naked domain, you would instead use @ with
the ghs.googlehosted.com address.
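As a sketch, the resulting host records at a registrar might look like the following (illustrative values; use the records you were actually given). Note that many registrars do not allow a CNAME on the naked domain; in that case, use the A and AAAA records that App Engine provides instead.

```
www  CNAME  ghs.googlehosted.com.
@    CNAME  ghs.googlehosted.com.
```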
If you are migrating from another provider, make sure all DNS records point to your App Engine app.
For more information about mapping your domain, see the following Using subdomains and Wildcard
mappings sections.
c. Save your changes in the DNS configuration page of your domain's account. It can take a while for these changes to
take effect.
6. Test for success by browsing to your app via its new domain URL, for example www.example.com .
Using subdomains
If you set up a wildcard subdomain mapping for your custom domain, then your application serves requests for any matching
subdomain.
If the user browses a domain that matches an application version name, the application serves that version.
If the user browses a domain that matches a service name, the application serves that service.
There is a limit of 20 managed SSL certificates per week for each base domain. If you encounter the limit, App Engine
keeps trying to issue managed certificates until all requests have been fulfilled.
Wildcard mappings
You can use wildcards to map subdomains at any level, starting at third-level subdomains. For example, if your domain
is example.com, you can enter a wildcard such as * in the web address field to map every subdomain of example.com to your app.
You can use wildcard mappings with services in App Engine by using the dispatch.yaml file to define request routing to specific
services.
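A minimal dispatch.yaml sketch (the service name is hypothetical):

```
dispatch:
- url: "api.example.com/*"
  service: api-service
```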
Note: Wildcard mappings are not supported for managed SSL certificates.
If you use G Suite with other subdomains on your domain, such as sites and mail , those mappings have higher priority and
are matched first, before any wildcard mapping takes place. In addition, if you have other App Engine apps mapped to other
subdomains, those mappings also have higher priority than any wildcard mapping.
Some DNS providers might not work with wildcard subdomain mapping. In particular, a DNS provider must permit wildcards
in CNAME host entries.
Wildcard routing rules apply to URLs that contain components for services, versions, and instances, following the service routing
rules for App Engine.
What's next
At the beginning of 2015, I moved all of the deciphertools.com website to Google App Engine. Most of the content on our website is static
and our traffic is moderate but bursty, so running our own virtual server on Rackspace to host the website seemed wasteful (of administration
time and money). Our virtual server was also performing poorly: during those bursty times we would have poor latency, and our website was
very open to (extremely lame) DoS attacks.
After the move, our website is cheaper to run, and performs beautifully. I can rest easy at night knowing that if we have spikes in traffic (which
should be cause for celebration), our infrastructure will scale to handle the load. To this day our static site costs have been free. We do pay for
some other dynamic site projects as well as Google Cloud Storage to host our larger software downloads, at rates comparable to Rackspace
Cloud Files. The entirety is still MUCH cheaper than spinning up a Rackspace cloud server ourselves and running lighttpd or apache.
Finding simple directions to host static pages on the Google App Engine was difficult, so this is my contribution of instructions. Please feel free
to post comments with questions — questions usually make me learn something new!
- my_project
  |- app_engine
     |- app.yaml
     |- public
        |- (all of my static site files, examples here below...)
        |- index.html
        |- favicon.ico
        |- images
           |- image1.png
           |- image2.png
        |- js
           |- bootstrap.min.js
        |- css
        |- ... you get the idea
I make that app_engine folder to house the Google App Engine project in case I have external resources to generate pieces of the website.
For example, I use blogofile to generate our website and blog, so in addition to the app_engine folder, I also have a blogofile project that
houses the blogofile templates. (I generate the site files from my blogofile templates, then copy the results into app_engine/public when I am
happy with them.)
application: your-application-name-here
version: 1
runtime: php
api_version: 1
threadsafe: yes

handlers:
# Handle the main page by serving the index page.
# Note the $ to specify the end of the path, since app.yaml does prefix matching.
- url: /$
  static_files: public/index.html
  upload: public/index.html
<?php
// REMINDER: ALL OF THESE NEED TO BE IN app.yaml too
$direct_redirects = array(
    "/blog" => "https://yoururl.com/blog/",
    "/products.html" => "https://yoururl.com/index.html",
    // ... many many MANY... MANY other mappings...
);
(Yep, you have to map every url you want to redirect. Get fancier if you need, but I just use this to map crawl errors and moved pages, so it
works for me.)
As the script so nicely reminds me, the urls that need redirection need to be in your app.yaml file. Add your paths to the handlers section
ABOVE the other rules, since those rules match many things.
# Note the $ to specify the end of the path, since app.yaml does prefix matching.
- url: /blog$|/products.html$
  script: redirector.php
Step 4: Installing SSL and Setting Up Your Domain with Google Apps
If you want to serve your site using HTTPS, then you'll need to install SSL certificates in GAE.
Update August 24, 2018: If you don't need special SSL certificates, like EV (Extended Validation, for the green bar in the web browser), then
using Google's free managed SSL for Google App Engine may be just what you need.
To enable managed SSL in App Engine:
1. Go to your Google App Engine dashboard.
2. Click the menu icon in the upper left corner, and under Compute, click App Engine > Settings .
3. Select the Custom domains section.
4. Check the domains you want to secure with managed SSL and then click the Enable managed security button.
5. Take this time you'd usually spend banging on SSL setup to get a coffee or go for a walk.
Or, if you prefer to set up your own SSL certificates, keep reading.
If you need to install SSL on your App Engine app, you will need to set up your domain with Google Apps. If you want to support us, you can
use our Google Apps referral link to sign up for Google Apps. Update 2016: You no longer need a Google Apps account to install SSL
certificates on your App Engine site. Refer to the "Adding SSL to your custom domain" section of the instructions from Google in the next
paragraph.
I don't know if there is something wrong with me, but I never remember how to do HTTPS/SSL setup. There are copious outdated
documentation pages lurking around, along with poor instructions from third parties. I HIGHLY recommend these instructions from
Google augmented with these instructions about the actual SSL installation from the Neutron Drive Blog. Update December 18, 2017: My
favorite SSL blog post is no longer maintained, so here is my version of those same instructions.
App Engine in 5 minutes (2014-02-17)
There are many services for hosting static pages: Google App Engine, Google Compute Engine, Amazon EC2, Heroku, Nodejitsu, and many more. All the services
have their advantages and disadvantages over the others. Generally, we do not want a complex infrastructure or many steps to deploy our static pages. Recently, I found that Google App Engine
is a great platform for hosting static web pages, with a decent free plan compared to other services. In this article, we will discuss the steps to
host your static pages, which can be a personal blog, a company site, or even your clients' sites.
Visit Google App Engine and then create an application. When creating the App Engine application, the application ID is very important, because it
acts as the subdomain for your site. Let's say the application ID is coolmoon ; the site will be at coolmoon.appspot.com .
Since Python is one of the best-supported languages on App Engine, download and install the App Engine SDK for Python. Not a Python
developer (like me)? Do not worry: you do not need to write a single piece of Python code.
You have to create an application folder that contains the static files and the configuration file to be deployed. The structure of the folder may be as
follows:
application_folder/
- app.yaml   # configuration file; we will see it in the next section
- public/    # the public folder contains the static files
  - index.html
  - js/
  - css/
  - img/
application: coolmoon
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /(.+)
  static_files: public/\1
  upload: public/(.*)

- url: /
  static_files: public/index.html
  upload: public/index.html

skip_files:
- ^(.*/)?app\.yaml
- ^(.*/)?app\.yml
- ^(.*/)?#.*#
- ^(.*/)?.*~
- ^(.*/)?.*\.py[co]
- ^(.*/)?.*/RCS/.*
- ^(.*/)?\..*
- ^(.*/)?tests$
- ^(.*/)?test$
- ^test/(.*/)?
- ^COPYING.LESSER
- ^README\..*
- \.gitignore
- ^\.git/.*
- \.*\.lint$
- ^fabfile\.py
- ^testrunner\.py
- ^grunt\.js
- ^node_modules/(.*/)?
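The skip_files entries are regular expressions matched against file paths. As a quick sketch of what one of them excludes:

```python
import re

# ^(.*/)?\..* is the entry above that excludes dotfiles at any depth
pattern = re.compile(r"^(.*/)?\..*")

print(bool(pattern.match(".gitignore")))         # True
print(bool(pattern.match("css/.DS_Store")))      # True
print(bool(pattern.match("public/index.html")))  # False
```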
You can run the development server locally and check your static pages with the following command:
dev_appserver.py ./
Deploy
Once everything looks good, deploy the static pages. The appcfg.py command is used to deploy the application to Google App Engine:
appcfg.py update .
It will ask for the email and password of your Google account. The password must be an application-specific password. To learn how to generate an
application-specific password, please refer to Application-specific passwords.
You've made it
Finally, you have your site hosted at <application-id>.appspot.com . Static hosting is super easy with App Engine. Moreover, it is faster than
other static hosting services because it runs on Google infrastructure.
Google App Engine is a powerful platform that lets you build and run
applications on Google’s infrastructure — whether you need to build a
multi-tiered web application from scratch or host a static website.
Here's a step-by-step guide to hosting your website on Google App
Engine.
1. Go to the App Engine dashboard on the Google Cloud Platform Console and press
the Create button.
2. If you've not created a project before, you'll need to select whether you want to receive
email updates or not, agree to the Terms of Service, and then you should be able to
continue.
3. Enter a name for the project, edit your project ID and note it down. For this tutorial, the
following values are used:
Project Name: GAE Sample Site
Project ID: gaesamplesite
4. Click the Create button to create your project.
Creating an application
Each Cloud Platform project can contain one App Engine application. Let's prepare an app for
our project.
1. We'll need a sample application to publish. If you've not got one to use, download and
unzip this sample app.
2. Have a look at the sample application's structure — the website folder contains your
website content and app.yaml is your application configuration file.
Your website content must go inside the website folder, and its landing page must
be called index.html , but apart from that it can take whatever form you like.
The app.yaml file is a configuration file that tells App Engine how to map URLs to
your static files. You don't need to edit it.
cd sample-app
5. You are now ready to deploy your application, i.e. upload your app to App Engine:
6. Enter a number to choose the region where you want your application located.
7. Enter Y to confirm.
8. Now navigate your browser to your-project-id.appspot.com to see your website online.
For example, for the project ID gaesamplesite, go to gaesamplesite.appspot.com.
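The deployment commands elided from the steps above are presumably the standard gcloud ones:

```
cd sample-app
gcloud app deploy
```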
How to Host a Static Website on Google Cloud Storage
There is no minimum object size, and you pay only for what you use.
The following instructions will help you to host the static website
on Cloud Storage in less than 15 minutes.
Prerequisite
This assumes you have a domain name registered and an account created
with Google Cloud.
So you need to ensure that the domain name you've entered as the
bucket name is verified.
Configuring a Storage Bucket
It’s necessary to set up your bucket for your site to be accessible over the
Internet.
I wanted to try Google Cloud Storage, but SSL support comes with an additional service and cost.
S3 + Cloudflare seems popular, but the task of setting up both things seems daunting.
App Engine seems fairly simple and free for a low-traffic site (static files shouldn't count towards instance cost; you only
need to pay for bandwidth). Static files seem to be distributed on multiple nodes with good performance (whether
they are edge-cache nodes is debatable). If I ever need to write some server-side code, it could be easily done.
Prerequisite
Create a project on Google Cloud Platform.
Language: Python (pick a language you are familiar with, though it doesn't matter for a static website)
Region: us-central (depending on your audience)
You don't have to proceed with the Quickstart Tutorial.
./google-cloud-sdk/install.sh
# Output
Modify profile to update your $PATH and enable shell command
completion? [Y]
Enter a path to an rc file to update, or leave blank to use
[ENTER]
Initialize the SDK (enter your Google credential and select Project ID)
./google-cloud-sdk/bin/gcloud init
mkdir hello-world-app
cd hello-world-app
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /robots.txt
  static_files: www/robots.txt
  upload: www/robots.txt
  secure: always

- url: /
  static_files: www/index.html
  upload: www/index.html
  secure: always
NOTE: To use secure: always, remember to enable managed SSL certificates for your website.
mkdir www
cd www
<html>
<head>
<title>Hello World</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello World</h1>
<p class="red">I am Red</p>
</body>
</html>
mkdir css
cd css
.red {
color: #FF0000;
}
hello-world-app
├── app.yaml
└── www
├── css
│ └── style.css
└── index.html
Deploy the app and make sure the source and Project ID are correct
Deployment
Deploy local files to App Engine server.
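The confirmation output below is what gcloud prints for a deploy command along these lines:

```
gcloud app deploy -v 1
```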
descriptor: [/hello-world-app/app.yaml]
source: [/hello-world-app]
target project: [hello-world-project-id]
target service: [default]
target version: [1]
target url: [https://hello-world-project-id.appspot.com]
I prefer to include the version (-v 1); otherwise a new version will be created for every upload.
For development purposes, you can add a cache-busting query string to the end of the URL (e.g. https://hello-world-project-id.appspot.com?r=1)
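That cache-busting trick can be sketched as a tiny helper (the function name is made up):

```python
import time

def bust_cache(url):
    # Append a changing query string so the browser refetches the page.
    return f"{url}?r={int(time.time())}"

print(bust_cache("https://hello-world-project-id.appspot.com"))
```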
Bitcatcha Result
Location Response Times
US (W) 1 ms
US (E) 3 ms
London 25 ms
Singapore 12 ms
Sao Paulo 52 ms
Bangalore 93 ms
Sydney 335 ms
Japan 70 ms
By Desmond Lua
Introducing managed SSL for Google App Engine
Lorne Kligerman
Product Manager
We’re excited to announce the beta release of managed SSL certificates at no charge for
applications built on Google App Engine. This service automatically encrypts server-to-client
communication — an essential part of safeguarding sensitive information over the web.
Manually managing SSL certificates to ensure a secure connection is a time consuming
process, and GCP makes it easy for customers by providing SSL systematically at no additional
charge. Managed SSL certificates are offered in addition to HTTPS connections provided on
appspot.com.
Now, when you build apps on App Engine, SSL is on by default — you no longer need to worry
about it or spend time managing it. We’ve made using HTTPS simple: map a domain to your
app, prove ownership, and App Engine automatically provisions an SSL certificate and renews it
whenever necessary, at no additional cost. Purchasing and generating certificates, dealing with
and securing keys, managing your SSL cipher suites and worrying about renewal dates —
those are all a thing of the past.
To get started with App Engine managed SSL certificates, simply head to the Cloud
Console and add a new domain. Once the domain is mapped and your DNS records are up to
date, you’ll see the SSL certificate appear in the domains list. And that’s it. Managed certificates
are now the default behavior; no further steps are required!
To switch from using your own SSL certificate on an existing domain, select the desired domain,
then click on the "Enable managed security" button. In just minutes, a certificate will be in place
and serving client requests.
You can also use the gcloud CLI to make this change:
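The command itself is omitted above; it is presumably the beta domain-mappings update with automatic certificate management:

```
gcloud beta app domain-mappings update DOMAIN --certificate-management=AUTOMATIC
```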
Rest assured that your existing certificate will remain in place and communication will continue
as securely as before until the new certificate is ready and swapped in.
For more details on the full set of commands, head to the full documentation here.
If you have any questions or concerns, or if something is not working as you’d expect, you can
post in the Google App Engine forum, log a public issue, or get in touch on the App Engine
slack channel (#app-engine).
Use Google App Engine and Golang to Host a Static Website with Sam
Published March 8, 2017 • Updated June 10, 2018
There are several inexpensive ways to host a static website generated with a static site generator like Jekyll, Hugo, or Pelican:
GitHub Pages
Google Cloud Storage Bucket
Google App Engine
Amazon S3 Bucket
This entire blog is statically generated using Jekyll. However, I am unable to use any of the options above because, over this blog’s lifetime, I have moved
posts, and I want to keep alive all of the old URLs.
I have been hosting this blog using Apache and, more recently, nginx on a single virtual machine, and the redirection features of either piece of software are powerful
and different.
A previous post details how I redirect URLs from an old domain to a new domain using Google App Engine and Python, but now I needed a way to redirect URLs within the same domain.
That same-domain redirection requirement is why I cannot simply use Google App Engine’s static content only feature (linked in the list above). However, I can use a
simple Golang application to serve both static content and same-domain redirects.
If your traffic fits within App Engine’s free tier of 28 instance hours and 1 GB of egress traffic per day, hosting the blog is practically free
Pushing updates is done with one command
Logging and monitoring are integrated using Stackdriver
Automatic up and down scaling based on traffic patterns
With a few clicks, web logs can easily be pushed to something like BigQuery for long term storage and ad hoc analysis
Managed SSL certificates using Let’s Encrypt
Prerequisites
This post assumes the following:
You are familiar with Google Cloud Platform (GCP) and have already created a GCP Project
You have installed the Google Cloud SDK
You have authenticated the gcloud command against your Google Account
Authenticate gcloud
Once you have created a GCP Project and installed the Google Cloud SDK, the last step is to authenticate the gcloud command to your Google Account by running the following
command:
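The authentication command is presumably:

```
gcloud auth login
```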
A web page will open in your web browser. Select your Google Account and give it permission to access GCP. Once completed, you will be authenticated.
Create a Directory
Next, create a directory somewhere on your workstation to store your Google App Engine application:
mkdir ~/Sites/example.com/app_engine
Change into the directory that you just created.

Note that, with the app.yaml configuration below, App Engine does not issue the redirect from HTTP to HTTPS as a temporary redirect; it is a permanent redirect.
If you have static assets, and you probably do, it is best practice to inform App Engine of this and let it serve those assets from object storage instead of from your Golang application. This is done through the app.yaml file.
For example, if you have a favicon file, a CSS directory, a JavaScript directory, and an images directory, use the following app.yaml file:
runtime: go
api_version: go1

handlers:
- url: /favicon.png$
  static_files: static/favicon.png
  upload: static/favicon.png
- url: /css
  static_dir: static/css
- url: /js
  static_dir: static/js
- url: /images
  static_dir: static/images
- url: /.*
  script: _go_app
  secure: always
  redirect_http_response_code: 301
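To make the handler matching concrete, the configuration above behaves roughly like the following sketch. App Engine evaluates the handler patterns in order; this is only an approximation, and the example paths are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// route approximates the app.yaml above: the listed static prefixes are
// served from App Engine's static file storage, and everything else falls
// through to the Go application (the "_go_app" script).
func route(path string) string {
	switch {
	case path == "/favicon.png":
		return "static/favicon.png"
	case strings.HasPrefix(path, "/css"),
		strings.HasPrefix(path, "/js"),
		strings.HasPrefix(path, "/images"):
		return "static" + path
	default:
		return "_go_app"
	}
}

func main() {
	fmt.Println(route("/css/main.css")) // prints: static/css/main.css
	fmt.Println(route("/post/a.html"))  // prints: _go_app
}
```

Because the catch-all `/.*` handler comes last, any URL not claimed by a static handler reaches the Go application, which is what lets it perform the same-domain redirects.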
Create main.go
Next, you need the Golang application file.
For the following code to meet your needs, create a file named main.go, copy and paste the code below, and make the following modifications:
In the domain variable, change the value to match your domain name with the correct HTTP protocol.
In the urls map, replace all of the key-value pairs to match the redirects you need in place. Replace each key with just the path portion (/example-post-1.html) of the current domain's old URL you want to keep alive. Then replace each value with the path portion of the current domain's new URL.
All redirects will be 301 redirects. This can be modified by changing 301 in the code below to a different HTTP redirect status code, such as 302.
package main

import (
    "net/http"
    "os"
    "strings"
)

// Change this value to match your domain name, with the correct protocol.
var domain = "https://example.com"

// Replace these key-value pairs with the redirects you need in place.
var urls = map[string]string{
    "/example-post-1.html": "/post/example-post-1.html",
}

func init() {
    http.HandleFunc("/", handler)
}

func handler(w http.ResponseWriter, r *http.Request) {
    // Old URL: issue a permanent (301) redirect to the new URL.
    if target, ok := urls[r.URL.Path]; ok {
        http.Redirect(w, r, domain+target, 301)
        return
    }
    // Otherwise serve the generated static content. The "public" directory
    // name is an assumption; use the directory your generator writes to.
    path := "public" + r.URL.Path
    if strings.HasSuffix(path, "/") {
        path += "index.html"
    }
    if _, err := os.Stat(path); err != nil {
        http.NotFound(w, r)
        return
    }
    http.ServeFile(w, r, path)
}
The App Engine section of the Google Cloud Console can be used to do this. Go here and follow the instructions to configure your custom domain.
Once that is complete and DNS has had time to propagate, you should be able to navigate in your web browser to one of your current domain's old URLs, for example https://example.com/example-post-1.html, and have it redirect to your current domain's new URL, for example https://example.com/post/example-post-1.html.
Pushing Updates
To push updates, make the necessary changes in your static site's source directory, regenerate the static content, and redeploy to Google App Engine by changing into the ~/Sites/example.com/app_engine directory and running gcloud app deploy .
CSS File Not Updating on Deploy (Google App Engine)

Any help?

Tagged: google-app-engine

9 Answers
I've seen this before on App Engine, even when using cache-busting query parameters like /stylesheets/default.css?{{ App.Version }} .

Here's my (unconfirmed) theory: a race condition during deployment can leave Google's CDN caches holding the old version of the file.

When this (if this is what happens) happens, I can confirm that no amount of cache-busting browser work will help. The Google CDN servers are holding the wrong version.
To fix: The only way I've found to fix this is to deploy another
version. You don't run the risk of this happening again (if you
haven't made any CSS changes since the race condition),
because even if the race condition occurs, presumably your first
update is done by the time you deploy your second one, so all
instances will be serving the correct version no matter what.
I'll buy this... though I'm not sure it's right. It eventually just cleared itself up hours later. – Andrew Johnson Feb 1 '11

I'm facing the same issue. In my case, even after I left it overnight the new CSS was not being served. I'm going to try the cache-busting technique. – coderman Jan 1 '12
I've encountered this problem a few times. For whatever reason, this is the fix. I think it's something to do with upstream caching from the Google instance web host. If you see the problem and request the CSS file directly in a browser with the querystring, then the issue goes away. It looks like the caching is invalidated the first time a request is made with a unique URL to a static file. – Clint Simon Feb 14 '12
1. Reference your CSS file via //static.{your-app-id}.appspot.com/{css-file-path}
2. Deploy your application. At this point your app will be broken.
3. Change the version of the CSS file reference: //static.{your-app-id}.appspot.com/{css-file-path}?v={version-Name}
4. Deploy again.

Every time you change the CSS file, you will have to repeat steps 2, 3, and 4.
You may have cached your old CSS, and are not getting the new CSS after updating it. Try clearing your browser cache and see if that works.

Going to 1.latest downloads the new CSS since it's not in your cache, so it appears correctly to you.
The menu at the bottom of the page is not horizontal or big enough, and the images in the slideshow are wrong. I have tried refreshing my cache and loading from a different browser too. – Andrew Johnson May 6 '10

The two sites you linked look exactly the same on Firefox and Chrome for OS X. I suspect there's still some issue that's only affecting you, or your browser, unless someone else can verify that it looks different. – Jason Hall May 6 '10

Try using <Shift>+<F5> to force reload your page (at least in FF). Here everything seems fine, both menu and slideshow images. Good luck. – Emilien May 6 '10
I had this problem as well. I was using Flask with GAE, so I didn't have a static handler in my app.yaml . When I added it, the deploy works. Try adding something like this to your app.yaml and deploy again:

handlers:
- url: /static
  static_dir: static

It worked for me. Apparently Google is trying to optimize by not updating files that it thinks users can't see.
The same documentation linked above states that "if you ever plan to modify a static file, it should have a short (less than one hour) expiration time. In most cases, the default 10-minute expiration time is appropriate". That is something one should think about before setting any static cache expiration. But for those who, like myself, didn't know all of this beforehand and have already been caught by this problem, I've found a solution.

Even though the documentation states that it's not possible to clear those intermediate caching proxies, one can delete at least the Google Cloud cache.

In order to do so, head to your Google Cloud Console and open your project. Under the left hamburger menu, head to Storage -> Browser. There you should find at least one Bucket: your-project-name.appspot.com. Under the Lifecycle column, click on the link with respect to your-project-name.appspot.com. Delete any existing rules, since they may conflict with the one you will create now.

Create a new rule by clicking on the 'Add rule' button. For the object conditions, choose only the 'Newer version' option and set it to 1. Don't forget to click on the 'Continue' button. For the action, select 'Delete' and click on the 'Continue' button. Save your new rule.

This newly created rule will take up to 24 hours to take effect, but at least for my project it took only a few minutes. Once it is up and running, the version of the files being served by your app under your-project-name.appspot.com will always be the latest deployed, solving the problem. Also, if you are routinely editing your static files, you should remove the default_expiration element from the app.yaml file, which will help avoid unintended caching by other servers.
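The console steps above can also be expressed as a standard Cloud Storage lifecycle configuration and applied from the command line (the bucket name is a placeholder for your own):

```json
{
  "rule": [
    {
      "action": {"type": "Delete"},
      "condition": {"numNewerVersions": 1}
    }
  ]
}
```

Save this as lifecycle.json and apply it with gsutil lifecycle set lifecycle.json gs://your-project-name.appspot.com — the numNewerVersions condition corresponds to the 'Newer version' option set to 1 in the console.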
Try clearing the cache on your browser. I had the exact same issue and got it fixed by simply clearing the cache. – kymni, Dec 6 '12
Authenticated, Static Web Sites on Google App Engine
08 Jan 2012 app engine google apps static html python
Configuration
We need an application configuration file called "app.yaml" in the root of our project directory. This file controls various aspects of the application, including how the application routes URLs to handlers. We'll use a configuration that handles all static file types (including HTML) and simply serves them.
There are various other posts out there discussing configurations for static web sites on App Engine, but the best configuration that I found was a gist by GitHub user "darktable". However, this configuration didn't include authentication, so I forked the gist and added authentication attributes to produce our final app.yaml file that you should download to "my_site/app.yaml". You can also view a basic Readme file and other information at the GitHub gist page.
Here’s a snippet of the “app.yaml” file that you’ll need to slightly modify:
application: you-app-name-here
version: 1
runtime: python
api_version: 1

default_expiration: "30d"

handlers:
- url: /(.*\.(appcache|manifest))
  mime_type: text/cache-manifest
  static_files: static/\1
  upload: static/(.*\.(appcache|manifest))
  expiration: "0m"
  login: required

# site root
- url: /
  static_files: static/index.html
  upload: static/index.html
  expiration: "15m"
  login: required
After downloading to “my_site/app.yaml”, update the application: you-app-name-here directive with the
specific App Engine application identifier you chose in the application creation section above.
Static Content
Now that we have a configuration file, create a folder named "my_site/static", which will house the actual static web site. As we want to check first that the authentication works before uploading potentially sensitive information, I would recommend creating a test HTML page that just contains the content "It worked!" and adding that as "my_site/static/index.html".
Now, we should have a project layout that looks like:
my_site/
  app.yaml
  static/
    index.html
At this point we can upload the full site to our static server using appcfg.py. Make sure that we have appcfg.py available:
$ which appcfg.py
/usr/local/bin/appcfg.py
If you don’t get an executable path back (any path is fine as long assomething is returned by
the which command), then review the App Engine “getting started” documents for installation of the runtime.
Assuming we do have appcfg.py available, change directory in your terminal to the directory containing the
“my_site” project folder and upload the static site with the following command:
$ appcfg.py update my_site
You will have to enter your Google credentials here. After the upload finishes, you should be able to open a web
browser to: “<your application identifier>.appspot.com”. If you are authenticated to your Google Apps domain,
you should see the “It worked!” test page. If not, you should be prompted to login to your Google Apps domain. A
good way to test the authentication works is to open a new Google Chrome Incognito window. It should always
force a new Google Apps login if you have configured things properly. If the authentication doesn’t work quite
right, review the App Engine authentication page for tips and pointers, or leave a comment below on this post.
Assuming authentication does work correctly, then you can now remove the test “index.html” file and upload your
real site content to the “my_site/static” directory. Every time you change the content, make sure to re-upload the
project with appcfg.py and enjoy your static web site!
Microservices
Microservices are a software development technique—a variant of the service-oriented architecture (SOA) architectural style that structures an application as a
collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. The benefit of decomposing an
application into different smaller services is that it improves modularity. This makes the application easier to understand, develop, and test, and more resilient to architecture erosion.[1] It parallelizes development by enabling small autonomous teams to develop, deploy and scale their respective services independently.[2] It also
allows the architecture of an individual service to emerge through continuous refactoring.[3] Microservices-based architectures enable continuous delivery and
deployment.[4]
Contents
Introduction
History
Service Granularity
Linguistic approach
Technologies
Service Mesh
Criticism
Cognitive load
Implementations
See also
References
Further reading
Introduction
Even though there is no official definition of what microservices are, a consensus view has evolved over time in the industry. Some of the defining characteristics that are
frequently cited include:
Per Martin Fowler and other experts, services in a microservice architecture (MSA) are often processes that communicate over a network to fulfill a goal using
technology-agnostic protocols such as HTTP.[5][6][7] However, services might also use other kinds of inter-process communication mechanisms such as shared
memory.[8] Services might also run within the same process as, for example, OSGi bundles.
Services in a microservice architecture are independently deployable.[9][1]
Services are organized around fine-grained business capabilities. The granularity of the microservice is important, because this is key to how this approach differs from SOA.
Services can be implemented using different programming languages, databases, hardware and software environments, depending on what fits best.[1] This does not mean that a single microservice is written in a patchwork of programming languages. While it is almost certainly the case that the different components a service is composed of will require different languages or APIs (for example, the web server layer may be in Java or JavaScript, but the database may use SQL to communicate to an RDBMS), this is really reflective of a comparison to the monolithic architecture style. If a monolithic application were to be re-implemented as a set of microservices, then the individual services could pick their own implementing languages. So one microservice could pick Java for the web layer, and another microservice could pick a Node.js based implementation, but within each microservice component, the implementing language would be uniform.
Services are small in size, messaging enabled, bounded by contexts, autonomously developed, independently deployable, decentralized and built and released with
automated processes.[9]
A microservice is not a layer within a monolithic application (for example, the web controller, or the backend-for-frontend[10]). Rather, it is a self-contained piece of business function with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategy perspective, microservices architecture essentially follows the Unix philosophy of "Do one thing and do it well".[11] Martin Fowler describes several further properties of a microservices-based architecture.[5]
History
A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that
many of them had been recently exploring. In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those
ideas as a case study in March 2012 at 33rd Degree in Kraków in Microservices - Java, the Unix Way, as did Fred George about the same time. Adrian Cockcroft at Netflix,
describing this approach as "fine grained SOA", pioneered the style at web scale, as did many of the others mentioned in this article - Joe Walnes, Dan North, Evan
Bottcher and Graham Tackley.[14]
Dr. Peter Rodgers introduced the term "Micro-Web-Services" during a presentation at the Web Services Edge conference in 2005. On slide #4 of the conference
presentation, he states that "Software components are Micro-Web-Services".[15] Juval Löwy had similar precursor ideas about classes being granular services, as the next
evolution of Microsoft architecture.[16][17][18] "Services are composed using Unix-like pipelines (the Web meets Unix = true loose-coupling). Services can call services
(+multiple language run-times). Complex service-assemblies are abstracted behind simple URI interfaces. Any service, at any granularity, can be exposed." He described
how a well-designed service platform "applies the underlying architectural principles of the Web and Web services together with Unix-like scheduling and pipelines to
provide radical flexibility and improved simplicity by providing a platform to apply service-oriented architecture throughout your application environment".[19] The
design, which originated in a research project at Hewlett Packard Labs, aims to make code less brittle and to make large-scale, complex software systems robust to
change.[20] To make "Micro-Web-Services" work, one has to question and analyze the foundations of architectural styles (such as SOA) and the role of messaging between
software components in order to arrive at a new general computing abstraction.[21] In this case, one can think of resource-oriented computing (ROC) as a generalized form
of the Web abstraction. If in the Unix abstraction "everything is a file", in ROC, everything is a "Micro-Web-Service". It can contain information, code or the results of
computations so that a service can be either a consumer or producer in a symmetrical and evolving architecture.
Microservices is a specialization of an implementation approach for service-oriented architectures (SOA) used to build flexible, independently deployable software
systems.[22] The microservices approach is a first realisation of SOA that followed the introduction of DevOps and is becoming more popular for building continuously
deployed systems.[23]
Service Granularity
A key step in defining a microservice architecture is figuring out how big an individual microservice has to be. There is no consensus or litmus test for this, as the right answer depends on the business and organizational context. Amazon's policy is that the team implementing a microservice should be small enough that it can be fed by two pizzas.[5] Many organizations choose smaller "squads", typically 6 to 8 developers. But the key decision hinges on how "clean" the service boundary can be.
On the opposite side of the spectrum, it is considered bad practice to make the service too small, as the runtime overhead and the operational complexity can then overwhelm the benefits of the approach. When things get too fine-grained, alternative approaches must be considered, such as packaging the function as a library or placing the function into other microservices.
Linguistic approach
A linguistic approach to the development of microservices[24] focuses on selecting a programming language that can easily represent a microservice as a single software
artifact. When effective, the gap between architecting a project and deploying it can be minimized.
Technologies
Computer microservices can be implemented in different programming languages and might use different infrastructures. Therefore, the most important technology choices are the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for the communication (REST, messaging, ...). In a traditional system, most technology choices, like the programming language, impact the whole system. Therefore, the approach for choosing technologies is quite different.[27]
The Eclipse Foundation has published a specification for developing microservices, Eclipse MicroProfile (https://projects.eclipse.org/projects/technology.microprofile).
Service Mesh
In a service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and
sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes. The service proxies are responsible for
communication with other service instances and can support capabilities such as service (instance) discovery, load balancing, authentication and authorization, secure
communications, and others.
In a service mesh, the service instances and their sidecar proxy are said to make up the data plane, which includes not only data management but also request processing
and response. The service mesh also includes a control plane for managing the interaction between services, mediated by their sidecar proxies. There are several options for service mesh architecture: Istio (a joint project among Google, IBM, and Lyft), Buoyant's Linkerd,[28] and others.
Criticism
The microservices approach is subject to criticism for a number of issues:
Cognitive load
The architecture introduces additional complexity and new problems to deal with, such as network latency, message formats, load balancing and fault tolerance.[33][30]
The complexity of a monolithic application doesn't disappear if it gets re-implemented as a set of microservice applications. Some of the complexity gets translated into operational complexity.[34] Other places where the complexity manifests itself are the increased network traffic and resulting slower performance. Also, an application made up of any number of microservices has a larger number of interface points to access its respective ecosystem, which increases the architectural complexity.[35] This kind of complexity can be reduced by standardizing the access mechanism. The Web as a system standardized the access mechanism by retaining the same access mechanism between browser and application resource over the last 20 years. Measured by the number of Web pages indexed by Google, the Web grew from 26 million pages in 1998 to around 60 trillion individual pages by 2015 without the need to change its access mechanism. The Web itself is an example that the complexity inherent in traditional monolithic software systems can be overcome.[36][37]
Implementations
Thorntail by Red Hat
Helidon by Oracle
Meecrowave by Apache
See also
Conway's law
Cross-cutting concern
DevOps
Fallacies of distributed computing
gRPC
Microkernel
Representational state transfer (REST)
Service-oriented architecture (SOA)
Unix philosophy
Self-contained Systems
Serverless computing
Web-oriented architecture (WOA)
References
1. Chen, Lianping (2018). Microservices: Architecting for Continuous Delivery and DevOps (https://www.researchgate.net/publication/323944215_Microservices_Architecting_for_Continuous_Delivery_and_DevOps). The IEEE International Conference on Software Architecture (ICSA 2018) (http://icsa-conferences.org/2018/). IEEE.
2. Richardson, Chris. "Microservice architecture pattern" (http://microservices.io/patterns/microservices.html). microservices.io. Retrieved 2017-03-19.
3. Chen, Lianping; Ali Babar, Muhammad (2014). Towards an Evidence-Based Understanding of Emergence of Architecture through Continuous Refactoring in Agile Software Development. The 11th Working IEEE/IFIP Conference on Software Architecture (WICSA 2014) (https://web.archive.org/web/20140730053f454/http://wicsa2014.org/). IEEE. doi:10.1109/WICSA.2014.45 (https://doi.org/10.1109%2FWICSA.2014.45).
4. Balalaie, Armin; Heydarnoori, Abbas; Jamshidi, Pooyan (2016-05). "Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture". IEEE Software. 33 (3): 42–52. doi:10.1109/ms.2016.64 (https://doi.org/10.1109%2Fms.2016.64). hdl:10044/1/40557 (https://hdl.handle.net/10044%2F1%2F40557). ISSN 0740-7459 (https://www.worldcat.org/issn/0740-7459).
5. Martin Fowler. "Microservices" (http://martinfowler.com/articles/microservices.html). Archived (https://web.archive.org/web/20180214171522/https://martinfowler.com/articles/microservices.html) from the original on 14 February 2018.
6. Newman, Sam (2015-02-20). Building Microservices. O'Reilly Media. ISBN 978-1491950357.
7. Wolff, Eberhard (2016-10-12). Microservices: Flexible Software Architectures (http://microservices-book.com). ISBN 978-0134602417.
8. "Micro-services for performance" (https://vanilla-java.github.io/2016/03/22/Micro-services-for-performance.html). Vanilla Java. 2016-03-22. Retrieved 2017-03-19.
9. Nadareishvili, I.; Mitra, R.; McLarty, M.; Amundsen, M. Microservice Architecture: Aligning Principles, Practices, and Culture. O'Reilly, 2016.
10. "Backends For Frontends Pattern" (https://docs.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends). Microsoft Azure Cloud Design Patterns. Microsoft.
11. Lucas Krause. Microservices: Patterns and Applications. ASIN B00VJ3NP4A (https://www.amazon.com/dp/B00VJ3NP4A).
12. Martin Fowler. "Microservice Prerequisites" (https://martinfowler.com/bliki/MicroservicePrerequisites.html).
13. Richardson, Chris (November 2018). Microservice Patterns. Chapter 1, section 1.4.1 Scale cube and microservices. Manning Publications. ISBN 9781617294549.
14. James Lewis and Martin Fowler. "Microservices" (http://martinfowler.com/articles/microservices.html).
15. Rodgers, Peter. "Service-Oriented Development on NetKernel - Patterns, Processes & Products to Reduce System Complexity. Web Services Edge 2005 East: CS-3" (http://www.cloudcomputingexpo.com/node/80883). CloudComputingExpo 2005. SYS-CON TV. Retrieved 3 July 2017.
16. Löwy, Juval (October 2007). "Every Class a WCF Service" (https://channel9.msdn.com/Shows/ARCast.TV/ARCastTV-Every-Class-a-WCF-Service-with-Juval-Lowy). Channel9, ARCast.TV.
17. Löwy, Juval (2007). Programming WCF Services, 1st Edition. pp. 543–553.
18. Löwy, Juval (May 2009). "Every Class As a Service" (https://blogs.msdn.microsoft.com/drnick/2009/04/29/wcf-at-teched-2009/). Microsoft TechEd Conference, SOA206. Archived from the original (https://www.youtube.com/watch?v=w-Hxc6uWCPg) on 2010.
19. Rodgers, Peter. "Service-Oriented Development on NetKernel - Patterns, Processes & Products to Reduce System Complexity" (http://www.cloudcomputingexpo.com/node/80883). CloudComputingExpo. SYS-CON Media. Retrieved 19 August 2015.
20. Russell, Perry; Rodgers, Peter; Sellman, Royston (2004). "Architecture and Design of an XML Application Platform" (http://www.hpl.hp.com/techreports/2004/HPL-2004-23.html). HP Technical Reports. p. 62. Retrieved 20 August 2015.
21. Hitchens, Ron (Dec 2014). Swaine, Michael, ed. "Your Object Model Sucks". PragPub Magazine: 15.
22. Pautasso, Cesare (2017). "Microservices in Practice, Part 1: Reality Check and Service Design" (http://ieeexplore.ieee.org/document/7819415/). IEEE Software. 34 (1): 91–98. doi:10.1109/MS.2017.24 (https://doi.org/10.1109%2FMS.2017.24).
23. "Continuous Deployment: Strategies" (https://www.javacodegeeks.com/2014/12/continuous-deployment-strategies.html). javacodegeeks.com. Retrieved 28 December 2016.
24. Claudio Guidi (2017-03-29). "What is a microservice? (from a linguistic point of view)" (http://claudioguidi.blogspot.it/2017/03/what-microservice-from-linguisitc.html).
25. Jolie Team. "Vision of microservices revolution" (http://www.jolie-lang.org/vision.html).
26. Fabrizio Montesi. "Programming Microservices with Jolie - Part 1: Data formats, Proxies, and Workflows" (https://fmontesi.github.io/2015/02/06/programming-microservices-with-jolie.html).
27. Wolff, Eberhard. Microservices - A Practical Guide (http://practical-microservices.com). ISBN 978-1717075901.
28. "What's a service mesh?" (https://blog.buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/). Buoyant. 2017-04-25. Retrieved 5 December 2018.
29. Jan Stenberg (11 August 2014). "Experiences from Failing with Microservices" (http://www.infoq.com/news/2014/08/failing-microservices).
30. "Developing Microservices for PaaS with Spring and Cloud Foundry" (http://www.infoq.com/presentations/microservices-pass-spring-cloud-foundry).
31. Tilkov, Stefan (17 November 2014). "How small should your microservice be?" (https://www.innoq.com/blog/st/2014/11/how-small-should-your-microservice-be/). innoq.com. Retrieved 4 January 2017.
32. Richardson, Chris (November 2018). Microservice Patterns. Chapter 4, Managing transactions with sagas. Manning Publications. ISBN 9781617294549.
33. Pautasso, Cesare (2017). "Microservices in Practice, Part 2: Service Integration and Sustainability" (http://ieeexplore.ieee.org/document/7888407/). IEEE Software. 34 (2): 97–104. doi:10.1109/MS.2017.56 (https://doi.org/10.1109%2FMS.2017.56).
34. Martin Fowler. "Microservice Trade-Offs" (https://www.martinfowler.com/articles/microservice-trade-offs.html#ops).
35. "BRASS Building Resource Adaptive Software Systems". U.S. Government. DARPA. April 7, 2015. "Access to system components and the interfaces between clients and their applications, however, are mediated via a number of often unrelated mechanisms, including informally documented application programming interfaces (APIs), idiosyncratic foreign function interfaces, complex ill-understood model definitions, or ad hoc data formats. These mechanisms usually provide only partial and incomplete understanding of the semantics of the components themselves. In the presence of such complexity, it is not surprising that applications typically bake-in many assumptions about the expected behavior of the ecosystem they interact with."
36. Alpert, Jesse; Hajaj, Nissan. "We knew the web was big" (http://googleblog.blogspot.co.at/2008/07/we-knew-web-was-big.html). Official Google Blog. Retrieved 22 August 2015.
37. "The Story" (http://www.google.com/insidesearch/howsearchworks/thestory/). How search works. Retrieved 22 August 2015.
Further reading
S. Newman, Building Microservices – Designing Fine-Grained Systems, O'Reilly, 2015 ISBN 978-1491950357
I. Nadareishvili et al., Microservices Architecture – Aligning Principles, Practices and Culture (http://transform.ca.com/rs/117-QWV-692/images/CA%20Technologies%
20-%20OReilly%20Microservice%20Architecture%20eBook.pdf), O’Reilly, 2016, ISBN 978-1-491-95979-4
SEI SATURN 2015 microservices workshop, https://github.com/michaelkeeling/SATURN2015-Microservices-Workshop
Wijesuriya, Viraj Brian (2016-08-29) Microservice Architecture, Lecture Notes (http://www.slideshare.net/tyrantbrian/microservice-architecture-65505794) - University
of Colombo School of Computing, Sri Lanka
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy
Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
Google Cloud Platform Overview
Contents
GCP resources
Accessing resources through services
Global, regional, and zonal resources
Projects
This overview is designed to help you understand the overall landscape of Google Cloud Platform (GCP). Here, you'll take a brief
look at some of the commonly used features and get pointers to documentation that can help you go deeper. Knowing what's
available and how the parts work together can help you make decisions about how to proceed. You'll also get pointers to some
tutorials that you can use to try out GCP in various scenarios.
GCP resources
GCP consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual
machines (VMs), that are contained in Google's data centers around the globe. Each data center location is in a global region.
Regions include Central US, Western Europe, and East Asia. Each region is a collection of zones, which are isolated from each
other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region. For
example, zone a in the East Asia region is named asia-east1-a.
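Because a zone name is the region name plus a letter suffix, the region can be recovered mechanically from any zone name. The helper below is our own illustration, not a GCP API:

```go
package main

import (
	"fmt"
	"strings"
)

// regionOf extracts the region from a zone name such as "asia-east1-a"
// by trimming the final "-<letter>" suffix.
func regionOf(zone string) string {
	i := strings.LastIndex(zone, "-")
	if i < 0 {
		return zone // no suffix present; return the input unchanged
	}
	return zone[:i]
}

func main() {
	fmt.Println(regionOf("asia-east1-a")) // prints: asia-east1
}
```

This naming scheme is what lets tooling group zonal resources by region when reasoning about redundancy and latency.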
This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating
resources closer to clients. This distribution also introduces some rules about how resources can be used together.
In cloud computing, what you might be used to thinking of as software and hardware products become services. These services
provide access to the underlying resources. The list of available GCP services is long, and it keeps growing. When you develop
your website or application on GCP, you mix and match these services into combinations that provide the infrastructure you
need, and then add your code to enable the scenarios you want to build.
Some resources can be accessed by any other resource, across regions and zones. These global resources include
preconfigured disk images, disk snapshots, and networks. Some resources can be accessed only by resources that are located
in the same region. These regional resources include static external IP addresses. Other resources can be accessed only by
resources that are located in the same zone. These zonal resources include VM instances, their types, and disks.
The following diagram shows the relationship between global scope, regions and zones, and some of their resources:
The scope of an operation varies depending on what kind of resources you're working with. For example, creating a network is a
global operation because a network is a global resource, while reserving an IP address is a regional operation because the
address is a regional resource.
As you start to optimize your GCP applications, it's important to understand how these regions and zones interact. For example,
even if you could, you wouldn't want to attach a disk in one region to a computer in a different region because the latency you'd
introduce would make for very poor performance. Thankfully, GCP won't let you do that; disks can only be attached to computers
in the same zone.
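To make the zone constraint concrete, here is a hedged sketch of creating a disk and a VM in the same zone and attaching one to the other; the resource names and the zone are placeholders:

```shell
# Create a disk and a VM in the same zone (names and zone are placeholders).
gcloud compute disks create my-data-disk --zone=us-east1-b --size=200GB
gcloud compute instances create my-vm --zone=us-east1-b
# Attaching succeeds because both resources live in us-east1-b.
gcloud compute instances attach-disk my-vm --disk=my-data-disk --zone=us-east1-b
```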
Depending on the level of self-management required for the computing and hosting service you choose, you might or might not
need to think about how and where resources are allocated.
For more information about the geographical distribution of GCP, see Geography and Regions.
Projects
Any GCP resources that you allocate and use must belong to a project. You can think of a project as the organizing entity for
what you're building. A project is made up of the settings, permissions, and other metadata that describe your applications.
Resources within a single project can work together easily, for example by communicating through an internal network, subject to
the regions-and-zones rules. The resources that each project contains remain separate across project boundaries; you can only
interconnect them through an external network connection.
As you work with GCP, you'll use the project name, project ID, and project number in certain command lines and API calls.
Each project ID is unique across GCP. Once you have created a project, you can delete it, but its ID can never be used
again.
When billing is enabled, each project is associated with one billing account. Multiple projects can have their resource usage
billed to the same account.
A project serves as a namespace. This means every resource within each project must have a unique name, but you can usually
reuse resource names if they are in separate projects. Some resource names must be globally unique. Refer to the
documentation for the resource for details.
GCP gives you three basic ways to interact with the services and resources.
Google Cloud Platform Console
The Google Cloud Platform Console provides a web-based, graphical user interface that you can use to manage your GCP
projects and resources. When you use the GCP Console, you create a new project, or choose an existing project, and use the
resources that you create in the context of that project. You can create multiple projects, so you can use projects to separate
your work in whatever way makes sense for you. For example, you might start a new project if you want to make sure only
certain team members can access the resources in that project, while all team members can continue to access resources in
another project.
Command-line interface
If you prefer to work in a terminal window, the Google Cloud SDK provides the gcloud command-line tool, which gives you
access to the commands you need. The gcloud tool can be used to manage both your development workflow and your GCP
resources. See the gcloud reference for the complete list of available commands.
GCP also provides Cloud Shell, a browser-based, interactive shell environment for GCP. You can access Cloud Shell from the
GCP console. Cloud Shell provides:
Client libraries
The Cloud SDK includes client libraries that enable you to easily create and manage resources. GCP client libraries expose APIs
for two main purposes:
App APIs provide access to services. App APIs are optimized for supported languages, such as Node.js and Python. The
libraries are designed around service metaphors, so you can work with the services more naturally and write less
boilerplate code. The libraries also provide helpers for authentication and authorization.
Admin APIs offer functionality for resource management. For example, you can use admin APIs if you want to build your
own automated tools.
You also can use the Google API client libraries to access APIs for products such as Google Maps, Google Drive, and YouTube.
Pricing
To understand Google's principles about how pricing works on GCP, see the Pricing page. To understand pricing for individual
services, see the product pricing section.
You can also take advantage of some tools to help you evaluate the costs of using GCP.
The pricing calculator provides a quick and easy way to estimate what your GCP usage will look like. You can provide
details about the services you want to use, such as the number of Compute Engine instances, persistent disks and their
sizes, and so on, and then see a pricing estimate.
The total cost of ownership (TCO) tool evaluates the relative costs for running your compute load in the cloud, and provides
a financial estimate. The tool provides several inputs for cost modeling, which you can adjust, and then compares
estimated costs on GCP and AWS. This tool does not model all components of a typical application, such as storage and
networking.
This page contains an overview of the gcloud command-line tool and its common command patterns and quirks.
What is gcloud?
gcloud is a tool that provides the primary command-line interface to Google Cloud Platform. You can use this tool to perform
many common platform tasks either from the command-line or in scripts and other automations.
You can also use gcloud to deploy App Engine applications and perform other tasks. Read the gcloud reference to learn more
about the capabilities of this tool.
gcloud is a part of the Google Cloud SDK. You must download and install the SDK on your system and initialize it before you
can use gcloud .
By default, the SDK installs those gcloud commands that are at the General Availability and Preview levels only. Additional
functionality is available in SDK components named alpha and beta . These components allow you to use gcloud to work with
Google Cloud Bigtable, Google Cloud Dataflow and other parts of the Cloud Platform at earlier release levels than General
Availability.
gcloud releases have the same version number as the SDK. The current SDK version is 228.0.0. You can download and install
previous versions of the SDK from the download archive.
Note: gcloud is available automatically in Google Cloud Shell. If you are using Cloud Shell, you do not need to install gcloud manually
in order to use it.
Downloading gcloud
You can download the latest version of Cloud SDK, which includes gcloud , from the download page.
Release levels
General Availability (no label): Commands are considered fully stable and available for production use. Advance warnings will
be given for commands that break current functionality, and such changes are documented in the release notes.
Beta (label: beta): Commands are functionally complete, but may still have some outstanding issues. Breaking changes to
these commands may be made without notice.
Alpha (label: alpha): Commands are in early release and may change without notice.
Preview (label: preview): Commands may be unstable and may change without notice.
The alpha and beta components are not installed by default when you install the SDK. You must install these separately using
the gcloud components install command. If you try to run an alpha or beta command and the corresponding component is not
installed, gcloud will prompt you to install it.
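For example, installing and using the beta component might look like the following sketch; the specific beta-track command shown is illustrative:

```shell
# Install the beta component, then beta-track commands become available.
gcloud components install beta
gcloud components list                # shows each component's installation status
gcloud beta compute instances list   # an example command at the Beta release level
```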
Command groups
Within each release level, gcloud commands are organized into a nested hierarchy of command groups, each of which
represents a product or feature of the Cloud Platform or its functional subgroups.
For example:
gcloud alpha app Commands related to managing App Engine deployments in Alpha
You can run gcloud commands from the command line in the same way you use other command-line tools. You can also
run gcloud commands from within scripts and other automations, for example, when using Jenkins to automate Cloud Platform
tasks.
Note: gcloud reference documentation and examples use backslashes, \, to denote long commands. You can execute these
commands as-is (Windows users can use ^ instead of \). If you'd like to remove the backslashes, be sure to remove newlines as well
to ensure the command is read as a single line.
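For example, the following two invocations are equivalent; the instance name and flag values are placeholders:

```shell
# Long command split with backslashes, as in the reference documentation:
gcloud compute instances create example-instance \
    --zone=us-central1-a \
    --machine-type=n1-standard-1
# The same command with the backslashes and newlines removed:
gcloud compute instances create example-instance --zone=us-central1-a --machine-type=n1-standard-1
```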
Properties
gcloud properties are settings that affect the behavior of gcloud and other Cloud SDK tools. Some of these properties can be
set by either global or command flags - in which case, the value set by the flag takes precedence.
Configurations
When you start out with the Cloud SDK, you work with a single configuration named default , and you can set properties by running
either gcloud init or gcloud config set . This single default configuration is suitable for most use cases.
If you'd like to work with multiple projects or authorization accounts, you can set up multiple configurations with gcloud config
configurations create and switch among them accordingly.
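A typical multi-configuration workflow might look like this sketch; the configuration and project names are placeholders:

```shell
gcloud config configurations create staging   # creates and activates "staging"
gcloud config set project my-staging-project  # properties now apply to "staging" only
gcloud config configurations activate default # switch back to the default configuration
gcloud config configurations list             # review all configurations
```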
For a detailed account of these concepts, see these explorations of configurations and their management.
Global flags
gcloud provides a set of gcloud -wide flags that govern the behavior of commands on a per-invocation level. Flags override any
values set in SDK properties.
While both positional arguments and flags affect the output of a gcloud command, there is a subtle difference in their use cases.
A positional argument is used to define an entity on which a command operates while a flag is required to set a variation in a
command's behaviour.
Successful output of gcloud commands is written to stdout. All other types of responses - prompts, warnings, and errors - are
written to stderr. Note that anything written to stderr is not stable and should not be scripted against.
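This separation is what makes gcloud scriptable: a pipeline sees only the stable stdout stream, while diagnostics stay out of its way. The following plain-bash sketch uses a stand-in function rather than a real gcloud call to illustrate the principle:

```shell
#!/bin/bash
# Stand-in for a gcloud invocation: results go to stdout, diagnostics to stderr.
list_resources() {
    echo "instance-1"                       # result -> stdout
    echo "WARNING: zone is deprecated" >&2  # diagnostic -> stderr
}
# Only stdout flows through the pipe, so the count reflects results alone.
count=$(list_resources 2>/dev/null | wc -l)
echo "results: $count"
```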
Prompting
To protect against unintended destructive actions, gcloud will confirm your intentions before executing commands such
as gcloud projects delete .
You can also expect prompts if you were to create a Google Compute Engine virtual machine instance, say 'test-instance',
using gcloud compute instances create test-instance . You will be asked to choose a zone to create the instance in.
Note, the wording of prompts can change and should not be scripted against.
The --quiet flag (also, -q ) for gcloud disables all interactive prompts when running gcloud commands and comes in handy
when scripting. In the event input is needed, defaults will be used. If there aren't any, an error will be raised.
To suppress printing of command output to standard output and standard error in the terminal, use the --no-user-output-enabled flag.
To adjust verbosity of logs instead, use the --verbosity flag and define the appropriate level.
By default, when a gcloud command returns a list of resources, they are pretty-printed to standard output. To produce more
meaningful output, the format, filter and projection flags allow you to finetune your output.
If you'd like to define just the format of your output, use the --format flag to produce a tabulated or flattened version of your
output (for interactive display) or a machine-readable version of the output ( json , csv , yaml , value ).
To format a list of keys that select resource data values, use projections . To further refine your output to a criteria you'd like to
define, use filter .
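A few hedged illustrations of these flags; the project and filter values are placeholders:

```shell
# --format for machine-readable output:
gcloud projects list --format="json"
# A projection selecting two keys, rendered as a table:
gcloud projects list --format="table(projectId, createTime)"
# A filter restricting the rows, combined with a value projection:
gcloud projects list --filter="name:my-*" --format="value(projectId)"
```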
If you'd like to get familiar with the filter and format functionality, a quick interactive tutorial is available in the
documentation.
What's next
You can run your applications in App Engine using the flexible environment or standard environment. You can also choose to
simultaneously use both environments for your application and allow your services to take advantage of each environment's
individual benefits.
Structuring your applications by using a microservice architecture aligns best with App Engine, especially if you decide to utilize
both environments. There are several factors to consider when determining which environment is better suited to your
application and its services. Use the following sections to learn and understand which environment best meets your application's
needs.
Using the App Engine flexible environment means that your application instances run within Docker containers on Google Compute Engine
virtual machines (VMs).
Generally, good candidates for the flexible environment are applications that receive consistent traffic, experience regular traffic fluctuations,
or meet the parameters for scaling up and down gradually.
The flexible environment is optimal for applications with the following characteristics:
Source code that is written in a version of any of the supported programming languages:
Python, Java, Node.js, Go, Ruby, PHP, or .NET
Runs in a Docker container that includes a custom runtime or source code written in other programming languages.
Depends on other software, including operating system packages such as imagemagick, ffmpeg, libgit2, or others through apt-
get.
Uses or depends on frameworks that include native code.
Accesses the resources or services of your Cloud Platform project that reside in the Compute Engine network.
Using the App Engine standard environment means that your application instances run in a sandbox, using the runtime environment of a
supported language listed below.
For some languages, building an application to run in the standard environment is more constrained and involved, but your applications will
have faster scale up times.
The standard environment is optimal for applications with the following characteristics:
The following table summarizes the differences between the two environments:
For an in-depth comparison of the environments, see the guide for your language: Python, Java, Go, or PHP.
While the flexible environment runs services in instances on Compute Engine VMs, the flexible environment differs from
Compute Engine in the following ways:
The VM instances used in the flexible environment are restarted on a weekly basis. During restarts, Google's management
services apply any necessary operating system and security updates.
You always have root access to Compute Engine VM instances. By default, SSH access to the VM instances in the flexible
environment is disabled. If you choose, you can enable root access to your app's VM instances.
The geographical region of the VM instances used in the flexible environment is determined by the location that you specify
for the App Engine application of your GCP project. Google's management services ensures that the VM instances are co-
located for optimal performance.
If you have an application in the standard environment, you might want to move some services to the flexible environment. For
guidance, see the recommendations for Python, Java, Go, and PHP.
To migrate specific services, see the instructions for Python, Java, Go, and PHP.
Scripting gcloud commands
Contents
Authorization
Disabling prompts
Handling output
Examples of filtering and formatting
Examples of scripting
More information
In addition to running gcloud commands from the command line, you can also run them from scripts or other automations — for
example, when using Jenkins to drive automation of Google Cloud Platform tasks.
Authorization
User account authorization is recommended if you are running a script or other automation on a single machine.
To authorize access and perform other common Cloud SDK setup steps:
gcloud init
Service account authorization is recommended if you are deploying a script or other automation across machines in a production
environment. It is also the recommended authorization method if you are running gcloud commands on a Google Compute
Engine virtual machine instance where all users have access to root .
To use service account authorization, use an existing service account or create a new one through the Google Cloud Platform
Console. From the options column of the service accounts table, create and download the associated private key as a JSON-
formatted key file.
You can SSH into your VM instance by using gcloud compute ssh , which takes care of authentication. SSH configuration files
can be configured using gcloud compute config-ssh .
For detailed instructions regarding authorizing Cloud SDK tools, refer to this comprehensive guide.
Disabling prompts
Some gcloud commands are interactive, prompting users for confirmation of an operation or requesting additional input for an
entered command.
In most cases, this is not desirable when running commands in a script or other automation. You can disable prompts
from gcloud commands by setting the disable_prompts property in your configuration to True or by using the global --
quiet or -q flag. Most interactive commands have default values when additional confirmation or input is required. If prompts
are disabled, these default values are used.
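For example, both of the following disable prompting; the instance name and zone are placeholders:

```shell
# Persist the setting in the active configuration:
gcloud config set disable_prompts True
# Or disable prompts for a single invocation (the flag goes before the command):
gcloud --quiet compute instances delete test-instance --zone=us-central1-a
```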
For example:
Handling output
If you want a script or other automation to perform actions conditionally based on the output of a gcloud command, observe the
following:
An interactive tutorial about using the filter and format flags is also available in the documentation.
The following are examples of common uses of formatting and filtering with gcloud commands:
List in JSON format those projects where the labels match specific values (e.g. label.env is 'test' and label.version is alpha):
List projects with their creation date and time specified in the local timezone:
List projects that were created after a specific date in table format:
Note that in the last example, a projection on the key was used. The filter is applied on the createTime key after the date
formatting is set.
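These examples can be sketched as follows; the label values and the date are placeholders:

```shell
# JSON output for projects whose labels match specific values:
gcloud projects list --format="json" --filter="labels.env=test AND labels.version=alpha"
# Creation time rendered in the local timezone:
gcloud projects list --format="table(projectId, name, createTime.date(tz=LOCAL))"
# Projects created after a given date, with the projection applied to createTime:
gcloud projects list \
    --format="table(projectId, name, createTime.date(tz=LOCAL))" \
    --filter="createTime.date('%Y-%m-%d', Z)>'2018-10-01'"
```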
List compute instance resources with box decorations and titles, sorted by name, in table format:
Examples of scripting
Using this functionality of format and filter, you can combine gcloud commands into a script to easily extract embedded
information.
If you were to list all the keys associated with all your projects' service accounts, you'd need to iterate over all your projects and
for each project, get all the service accounts associated with it. For each service account, get all the keys. This can be
accomplished as demonstrated below:
As a bash script:
#!/bin/bash
for project in $(gcloud projects list --format="value(projectId)")
do
  echo "ProjectId: $project"
  for robot in $(gcloud iam service-accounts list --project $project --format="value(email)")
  do
    echo " -> Robot $robot"
    for key in $(gcloud iam service-accounts keys list --iam-account $robot --project $project --format="value(name.basename())")
    do
      echo "    $key"
    done
  done
done
Or as Windows PowerShell:
Oftentimes, you'll need to parse output for processing. For example, it'd be useful to write the service account information into an
array and segregate values in the multi-valued CSV-formatted serviceAccounts.scope() field. The script below does just this:
#!/bin/bash
for scopesInfo in $(
  gcloud compute instances list --filter=name:instance-1 \
    --format="csv[no-heading](name,id,serviceAccounts[].email.list(),
              serviceAccounts[].scopes[].map().list(separator=;))")
do
  IFS=',' read -r -a scopesInfoArray <<< "$scopesInfo"
  NAME="${scopesInfoArray[0]}"
  ID="${scopesInfoArray[1]}"
  EMAIL="${scopesInfoArray[2]}"
  SCOPES_LIST="${scopesInfoArray[3]}"
  echo "NAME: $NAME, ID: $ID, EMAIL: $EMAIL"
  echo ""
  IFS=';' read -r -a scopeListArray <<< "$SCOPES_LIST"
  for SCOPE in "${scopeListArray[@]}"
  do
    echo "  SCOPE: $SCOPE"
  done
done
More information
For a step-by-step guide to building basic scripts with gcloud , refer to this beginner's guide to automating GCP tasks.
More involved examples of the output configuring capabilities built into gcloud filters , formats , and projections can be
found in this blog post about filtering and formatting.
Cloud SDK
gcloud auth activate-service-account - authorize access to Google Cloud Platform with a service account
SYNOPSIS
DESCRIPTION
To allow gcloud (and other tools in Cloud SDK) to use service account credentials to make requests, use this command to
import these credentials from a file that contains a private authorization key, and activate them for use in gcloud . gcloud
auth activate-service-account serves the same function as gcloud auth login but uses a service account rather than
Google user credentials.
Key File
To obtain the key file for this command, use either the Google Cloud Platform Console or gcloud iam service-accounts
keys create . The key file can be .json (preferred) or .p12 (legacy) format. In the case of legacy .p12 files, a separate
password might be required and is displayed in the Console when you create the key.
Credentials
Credentials will also be activated (similar to running gcloud config set account [ACCOUNT_NAME] ).
If a project is specified using the --project flag, the project is set in active configuration, which is the same as
running gcloud config set project [PROJECT_NAME] . Any previously active credentials will be retained (though no longer
default) and can be displayed by running gcloud auth list .
POSITIONAL ARGUMENTS
[ ACCOUNT ]
REQUIRED FLAGS
--key-file = KEY_FILE
OPTIONAL FLAGS
--password-file = PASSWORD_FILE
Path to a file containing the password for the service account private key (only for a .p12 file).
--prompt-for-password
Prompt for the password for the service account private key (only for a .p12 file).
EXAMPLES
To authorize gcloud to access Google Cloud Platform using an existing service account while also specifying a project,
run:
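A hedged example of the command described above; the account, key file path, and project are placeholders:

```shell
gcloud auth activate-service-account test-account@my-project.iam.gserviceaccount.com \
    --key-file=/path/to/key.json --project=my-project
```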
NOTES
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
gcloud
NAME
SYNOPSIS
gcloud GROUP | COMMAND [ --account = ACCOUNT ][ --configuration = CONFIGURATION ] [ --flags-file = YAML_FILE ][ --flatten =[ KEY ,…]] [ --format = FORMAT ] [ --help ][ --project = PROJECT_
DESCRIPTION
The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.
GLOBAL FLAGS
--account = ACCOUNT
Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.
--configuration = CONFIGURATION
The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.
--flags-file = YAML_FILE
A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.
--flatten =[ KEY ,…]
Flatten name[] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and --
filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .
--format = FORMAT
Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.
--help
--project = PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.
--quiet , -q
Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.
--verbosity = VERBOSITY
Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.
--version , -v
Print version information and exit. This flag is only available at the global level.
-h
OTHER FLAGS
--log-http
Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.
--trace-token = TRACE_TOKEN
Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.
--user-output-enabled
Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.
GROUPS
alpha
app
auth
beta
(BETA) Beta versions of gcloud commands.
bigtable
builds
components
composer
compute
config
container
dataflow
dataproc
datastore
debug
deployment-manager
dns
domains
endpoints
firebase
functions
iam
iot
kms
logging
ml
ml-engine
organizations
projects
pubsub
redis
services
List, enable and disable APIs and services.
source
spanner
sql
topic
COMMANDS
docker
feedback
help
info
init
version
DESCRIPTION
The --flags-file = YAML-FILE flag, available to all gcloud commands, supports complex flag values in any command
interpreter.
Complex flag values that contain command interpreter special characters may be difficult to specify on the command line.
The combined list of special characters across commonly used command interpreters (shell, cmd.exe, PowerShell) is
surprisingly large. Among them are ", ', `, *, ?, [, ], (, ), $, %, #, ^, &, |, {, }, ;, \, <, >, space, tab, and newline.
Add to that the separator characters for list- and dict-valued flags, and it becomes all but impossible to construct
portable command lines.
The --flags-file = YAML-FILE flag solves this problem by allowing command line flags to be specified in a YAML/JSON
file. String, numeric, list and dict flag values are specified using YAML/JSON notation and quoting rules.
Flag specification uses dictionary notation. Use a list of dictionaries for flags that must be specified multiple times.
For example, this YAML file defines values for Boolean, integer, floating point, string, dictionary and list valued flags:
--boolean:
--integer: 123
--float: 456.789
--string: A string value.
--dictionary:
  a=b: c,d
  e,f: g=h
  i: none
  j=k=l: m=$n,o=%p
  "y:": ":z"
  meta:
  - key: foo
    value: bar
  - key: abc
    value: xyz
--list:
- a,b,c
- x,y,z
If the file is named my-flags.yaml then the command line flag --flags-file=my-flags.yaml will set the specified flags on
any system using any command interpreter. --flags-file may be specified in a YAML file, and its value can be a YAML
list to reference multiple files.
- --metadata: abc
  --integer: 123
- --metadata: xyz
Each --flags-file arg is replaced by its contents, so normal flag precedence applies. For example, given flags-1.yaml :
--zone: us-east2-a
flags-2.yaml :
--verbosity: info
--zone: us-central1-a
and a command line that passes --flags-file=flags-1.yaml followed by --flags-file=flags-2.yaml , the command runs
using zone us-central1-a (not us-east2-a , because flags-2.yaml , to the right of flags-1.yaml , has higher
precedence).
Implementing a static website in Google App Engine
Juliette Foucaut - 23 Aug 2013 - edited 10 Feb 2014
A robust website that successfully weathers spikes in traffic is a must when trying to sell and support a game over the internet. Last July Picroma suffered
temporarily when they released their game, Cube World, for purchase. They then had to deal with a DDoS attack. More recently, Oculus Rift's site stalled when
they tweeted about John Carmack's involvement in their technology.
Whilst we can only dream of enjoying the same level of interest, we'd like to spare ourselves the worry. When Doug researched a web hosting solution, he spotted
that Wolfire used Google App Engine. They list their reasons clearly on their blog, and given that they have years of hands-on experience on the matter, we
decided to follow their lead. As an added bonus, this solution is free for low levels of traffic.
We plan to eventually automate our site to support a blog, comments, a forum and purchases, but we currently only need a static website. This means limited
cleverness on the client side and none on the server: just some basic HTML, a minimum of scripting for content, and CSS for the looks. In this post I'll explain
how to host a "quick and dirty" static site on Google App Engine. It involves a few tricks but nothing too complicated. I've added a Links and tools section at the end
of this post where you'll find all the resources I used (including to train myself). I hope you find the information useful and that you enjoy crafting your site as much as I
did.
my-gae-website
  static_website
    css
      bootstrap.css
      font-awesome.css
    font   <= Font Awesome font files
      *.eot, *.otf, *.svg, *.ttf, *.woff
    *.html
As an example of how handy Bootstrap and Font Awesome are, the box above was created with a Bootstrap <pre> tag and Font Awesome icons.
In the root directory where you keep your static website (in this example, my static website is in directory static_website, under root directory my-gae-
website, see previous section), add an empty text file and rename it "app.yaml".
my-gae-website
  static_website
  app.yaml
Open the Google App Engine Launcher (I'll refer to it as the GAE launcher).
Select menu item File > Add Existing Application...
Set the application path to directory my-gae-website and select Add.
An application named "my-gae-website" is added to the list and is displayed in red. To make the application work, we need to add some code in app.yaml.
To start with we'll use a default configuration:
Paste the default text below into your app.yaml and save it. You'll notice that the app "my-gae-website" in the GAE launcher immediately turns from red text
to black.
application: my-gae-website
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /
  static_files: static_website/index.html
  upload: static_website/index.html

libraries:
- name: webapp2
  version: "2.5.2"
If you want, you can already run the "my-gae-website" application from the GAE launcher and view your website locally in your browser. However it may not show
anything yet: you need to configure app.yaml to serve your own static pages.
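If you prefer the command line to the GAE launcher, the SDK's development server can serve the same application directory; the directory name below matches this example but is otherwise a placeholder:

```shell
# Run the local development server against the directory containing app.yaml:
dev_appserver.py my-gae-website
# Then preview the site at http://localhost:8080
```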
In the next section I'll explain how to configure app.yaml to serve a static website using our site as an example.
Configure app.yaml
This step sets the rules for displaying the contents of the website. In other words, app.yaml describes what will be returned (web pages, images...) when specific
urls are entered. We've found the syntax of app.yaml not completely straightforward so I'm going to describe in detail how I've configured it in our specific example.
For a general understanding of the principles of app.yaml, see the links section about regex and app.yaml at the end of this post.
If you want to skip this section go straight to the deployment part.
app.yaml overview
Our app.yaml reads as follows (This is an overview. I'll explain the handler section contents in the next section):
application: my-gae-website
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
#root
- url: /
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

- url: /devlog.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

#the devlog post pages: since we're going to add more pages with the format
#devlogpost-<yyyymmdd-dailyIncrement>.html and I don't want to update the
#app.yaml each time, I've used a rough regex to limit the cases where an
#invalid url would return the default 404 not found page.
- url: /(devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html)
  static_files: static_website/\1
  upload: static_website/(devlogpost.*\.html)

#all images and support files (css, fonts...): return the file if found,
#otherwise the default 404 page so it can be handled by sites that link
#directly to images.
- url: /(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))
  static_files: static_website/\1
  upload: static_website/(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))

libraries:
- name: webapp2
  version: "2.5.2"
In the handlers section we see a repeating pattern of 3 lines headed url, static_files and upload (note: you'll find more info on Google's site). Here's what each one
of them means:
url: <the regex that is matched against the url requested by the visitor's browser>
static_files: <the path of the static file to serve when the url matches; \1 refers to the first group captured by the url regex>
upload: <the regex of the actual file path and name the url is referring to, on our local machine, before deployment>
url handling
We have to decide how we'll handle each file / file type and add the behaviour to app.yaml (see the comments headed with "#" in our implementation of app.yaml).
For reference, our file names and folders are as follows:
my-gae-website
    static_website
        css  <= cascading style sheets (Bootstrap, Font Awesome)
            *.css
        downloads  <= all our downloadable files
            AvoydV1_7_1.zip
        font  <= Font Awesome font files
            *.eot, *.otf, *.svg, *.ttf, *.woff
        images  <= all our images
            *.gif, *.ico, *.jpg, *.png
        about.html  <= about page
        devlog.html  <= home page, also the development blog posts list
        devlogpost-20130823-1.html  <= an individual blog post
        devlogpost-20130411-1.html  <= an individual blog post
        devlogpost-20130427-1.html  <= an individual blog post
        devlogpost-20130509-1.html  <= an individual blog post
        devlogpost-20130529-1.html  <= an individual blog post
        notfound.html  <= our custom file not found
    app.yaml
#root
- url: /
  static_files: static_website/devlog.html
  upload: static_website/devlog.html
Home page: most sites use index.html as their de facto home page. Since our home page has a different name, it's likely someone will
request https://www.enkisoftware.com/index.html, and it would be a shame if they ended up on our file not found page, so I'm adding an extra handler to send
them to our home page devlog.html:
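Following the same url / static_files / upload pattern as the other handlers, such a handler could look like the sketch below (the url value is an assumption; treat it as illustrative rather than the exact handler used):

```yaml
- url: /index.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html
```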
Specific files: we have two predefined html pages and a zip file that can be exactly matched:
- url: /devlog.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html
Less specific files: I had to create individual pages for each blog post to work around a Disqus limitation. Since I can't predict how many posts we'll add, nor how
often, I chose a simple naming pattern: devlogpost-<yyyymmdd-dailyIncrement>.html. To save having to update app.yaml every time we add a new post, I'm using
the regex devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html .
I'm sure you've noticed this regex is not perfect. Why? If someone enters a url that matches the regex with a valid-looking date (you've probably spotted that e.g.
the 32nd of March is incorrectly accepted) but that doesn't correspond to an existing devlogpost file, they'll get the default file not found instead of our custom one,
because the custom file not found is only served when the url doesn't match the regex. I would prefer the custom file not found to be served in that case too (any
suggestions welcome), but I'm going to automate the site at some point so this solution will do for now.
#the devlog post pages: since we're going to add more pages with the format
#devlogpost-<yyyymmdd-dailyIncrement>.html and I don't want to update the
#app.yaml each time, I've used a rough regex to limit the cases where an
#invalid url would return the default 404 not found page.
- url: /(devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html)
  static_files: static_website/\1
  upload: static_website/(devlogpost.*\.html)
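You can check the looseness of this pattern from a shell with grep -E; the character classes below are the same ones used in app.yaml, and the file names are just illustrative:

```shell
# Whole-name match using the same character classes as the app.yaml handler.
pattern='^devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html$'

# A real post name matches:
echo 'devlogpost-20130823-1.html' | grep -Eq "$pattern" && echo 'real post: match'

# The 32nd of March also matches, because [0-3][0-9] accepts day 32:
echo 'devlogpost-20130332-1.html' | grep -Eq "$pattern" && echo '32nd of March: match'

# Anything else falls through to the next handler (or the 404):
echo 'about.html' | grep -Eq "$pattern" || echo 'other page: no match'
```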
Test locally
From the GAE launcher, run my-gae-website and view your website locally in your browser.
Deploy
To speed up our deployment I created a batch file (*.bat) containing:

appcfg.py --email=<email address used for google app engine> update <my-gae-website>\
pause

This is a workaround for the Google App Engine Launcher requiring that I enter my email and password each and every time I deploy, whereas with the command
prompt interface I only have to enter the details once per session.
app.yaml
To help understand how the app.yaml file works with regards to static files, see the static file handlers section in the app.yaml google documentation
Regular expressions
You'll need a basic understanding of regular expressions (a.k.a. regex) to edit the app.yaml file.
See the POSIX standard section in wikipedia
the general regex syntax in the Python documentation
If you're looking for a step by step introduction to regular expressions, I found Udacity's CS262 course very helpful. They address regex in Unit1. If you're
happy with just the transcripts (no login required) you'll find them in the course wiki.
Tools
Google App Engine
Python google app engine SDK
Google Cloud Platform Community tutorials submitted from the community do not represent official Google Cloud Platform product
documentation.
This tutorial describes the steps needed to set up the LoRa Server project on Google Cloud Platform. The following Google
Cloud Platform (GCP) services are used:
Cloud IoT Core is used to connect your LoRa gateways with GCP.
Cloud Pub/Sub is used for messaging between GCP components and LoRa Server services.
Cloud Functions is used to handle downlink LoRa gateway communication (calling the Cloud IoT Core API on downlink
Pub/Sub messages).
Cloud SQL is used as a hosted PostgreSQL database solution.
Cloud Memorystore is used as a hosted Redis solution.
Compute Engine is used for running a VM instance.
Assumptions
In this tutorial we will assume that the LoRa Gateway Bridge component will be installed on the gateway. To keep this tutorial simple, we will also
assume that LoRa Server and LoRa App Server will be installed on a single Compute Engine VM.
The example project ID used in this tutorial will be lora-server-tutorial . You should substitute this with your own project
ID in the tutorial steps.
The LoRaWAN region used in this tutorial will be eu868 . You should substitute this with your own region in the examples.
Create a GCP project
After logging in to the GCP Console, create a new project. For this tutorial we will name the project LoRa Server tutorial with
an example ID of lora-server-tutorial . After creating the project, make sure it is selected before continuing with the next
steps.
Gateway connectivity
The LoRa Gateway Bridge (referred to simply as the gateway in this tutorial) will use the Cloud IoT Core MQTT broker to ingest
LoRa gateway events into GCP. This removes the requirement to host your own MQTT broker and increases the reliability and
scalability of the system.
In order to connect your LoRa gateway with Cloud IoT Core, go to the IoT Core service in the GCP Console and create a new
device registry in the Device registries box.
This registry will contain all your gateways for a given region. If you are planning to support multiple LoRaWAN regions, it is
good practice to create a separate registry per region (not covered in this tutorial).
In this tutorial, we are going to create a registry for EU868 gateways, so we choose the Registry ID eu868-gateways . Select the
region which is closest to you and select MQTT as the protocol. The HTTP protocol will not be used.
Under Default telemetry topic create a new topic. We will call this eu868-gateway-events . Click Create.
In order to authenticate the LoRa gateway with the Cloud IoT Core MQTT bridge, you need to generate a certificate. You can do
this using the following commands:
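A key pair suitable for the RS256 public key format can be generated with openssl; the file names below are chosen to match the private-key.pem and public-key.pem references in the next steps:

```shell
# Generate a 2048-bit RSA private key; this stays on the gateway.
openssl genpkey -algorithm RSA -out private-key.pem -pkeyopt rsa_keygen_bits:2048

# Extract the matching public key; its contents are pasted into the device form.
openssl rsa -in private-key.pem -pubout -out public-key.pem
```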
To add your first LoRa gateway to the just created device registry, click the Create device button.
As Device ID, enter your Gateway ID prefixed with gw- . For example, if your Gateway ID is 0102030405060708 , enter
gw-0102030405060708 . The gw- prefix is needed because a Cloud IoT Core ID must start with a letter, which is not always
the case for a LoRa gateway ID.
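The mapping is nothing more than a string prefix; as a quick shell sketch:

```shell
gateway_id='0102030405060708'
device_id="gw-${gateway_id}"
echo "$device_id"   # prints gw-0102030405060708
```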
Each Cloud IoT Core device (LoRa gateway) will authenticate using its own certificate. Select RS256 as Public key format and
paste the public-key content in the box. This is the content of public-key.pem which was created in the previous step.
Click Create.
As there are different ways to install the LoRa Gateway Bridge on your gateway, only the configuration is covered here. For
installation instructions, please refer to LoRa Gateway Bridge gateway installation & configuration.
To configure a LoRa Gateway Bridge to forward its data to Cloud IoT Core, you need to update the lora-gateway-bridge.toml
configuration file.
[backend.mqtt]
marshaler="protobuf"
[backend.mqtt.auth]
type="gcp_cloud_iot_core"
[backend.mqtt.auth.gcp_cloud_iot_core]
server="ssl://mqtt.googleapis.com:8883"
device_id="gw-0102030405060708"
project_id="lora-server-tutorial"
cloud_region="europe-west1"
registry_id="eu868-gateways"
jwt_key_file="/path/to/private-key.pem"
In short:
The [backend.mqtt] section configures the protobuf marshaler (either protobuf or json must be configured).
The [backend.mqtt.auth] section selects Google Cloud IoT Core MQTT authentication.
The [backend.mqtt.auth.gcp_cloud_iot_core] section configures the MQTT server, the device ID, the GCP project ID, the cloud region and the registry ID.
Note that jwt_key_file must point to the private-key file generated in the previous step.
After applying the above configuration changes on the gateway (using your
own device_id , project_id , cloud_region and jwt_key_file ), validate that LoRa Gateway Bridge is able to connect with the
Cloud IoT Core MQTT bridge. The log output should look like this when your gateway receives an uplink message from your
LoRaWAN device:
Your gateway is now communicating successfully with the Cloud IoT Core MQTT bridge!
Create downlink Pub/Sub topic
Instead of using MQTT directly, the LoRa Server will use Cloud Pub/Sub for receiving data from and sending data to your
gateways.
In the GCP Console, navigate to Pub/Sub > Topics. You will see the topic that was created when you created the device
registry. LoRa Server will subscribe to this topic to receive data (events) from your gateway.
For sending data back to your gateways, we will create a new topic. Click Create Topic, and enter eu868-gateway-commands as
the name.
In the previous step, you created a topic for sending downlink commands to your gateways. In order to connect this Pub/Sub
topic with your Cloud IoT Core device-registry, you must create a Cloud Function which will subscribe to the downlink Pub/Sub
topic and will forward these commands to your LoRa gateway.
In the GCP Console, navigate to Cloud Functions. Then click Create function. As Name we will use eu868-gateway-commands .
Because the only thing this function does is call a Cloud API, 128 MB for Memory allocated should be fine.
Select Inline editor for entering the source code and select the Node.js 8 runtime. The Function to execute is
called sendMessage . Copy and paste the scripts below for the index.js and package.json files. Adjust
the index.js configuration to match your REGION , PROJECT_ID and REGISTRY_ID . Note: it is recommended to also
click More and select your region from the dropdown list. Then click Create.
index.js
'use strict';

const {google} = require('googleapis');

// configuration options
const REGION = 'europe-west1';
const PROJECT_ID = 'lora-server-tutorial';
const REGISTRY_ID = 'eu868-gateways';

let client = null;
const API_VERSION = 'v1';
const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';

// getClient returns the GCP API client.
// Note: after the first initialization, the client will be cached.
function getClient (cb) {
  if (client !== null) {
    cb(client);
    return;
  }

  google.auth.getClient({scopes: ['https://www.googleapis.com/auth/cloud-platform']}).then((authClient => {
    google.options({
      auth: authClient
    });

    const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;
    google.discoverAPI(discoveryUrl).then((c, err) => {
      if (err) {
        console.log('Error during API discovery', err);
        return undefined;
      }
      client = c;
      cb(client);
    });
  }));
}

// sendMessage forwards the Pub/Sub message to the given device.
exports.sendMessage = (event, context, callback) => {
  const deviceId = event.attributes.deviceId;
  const subFolder = event.attributes.subFolder;
  const data = event.data;

  getClient((client) => {
    const parentName = `projects/${PROJECT_ID}/locations/${REGION}`;
    const registryName = `${parentName}/registries/${REGISTRY_ID}`;
    const request = {
      name: `${registryName}/devices/${deviceId}`,
      binaryData: data,
      subfolder: subFolder
    };

    // Deliver the command to the gateway through the Cloud IoT Core
    // sendCommandToDevice API.
    client.projects.locations.registries.devices.sendCommandToDevice(request, (err) => {
      if (err) {
        console.log('Could not send command:', err);
        callback(new Error(err));
      } else {
        callback();
      }
    });
  });
};
package.json
{
  "name": "gateway-commands",
  "version": "2.0.0",
  "dependencies": {
    "@google-cloud/pubsub": "0.20.1",
    "googleapis": "34.0.0"
  }
}
Set up databases
In the GCP Console, navigate to Memorystore (which provides a managed Redis datastore) and click Create instance.
You can assign any name to this instance. Make sure that you also select your Region. Click Create to create the Redis
instance.
In the GCP Console, navigate to SQL (which provides managed PostgreSQL database instances) and click Create instance.
Select PostgreSQL and click Next. You can assign any name to this instance. Again, make sure to also select your Region from
the dropdown.
Configure the Configuration options to your needs (the smallest instance is already sufficient for testing). An important option
to configure is Authorize networks. To allow access from any IP address, enter 0.0.0.0/0 . It is recommended to restrict this
later to only the IP address of your server (covered in the next steps). Then click Create.
Create users
Click on the created database instance and click the Users tab. Create two users:
loraserver_ns
loraserver_as
Create databases
Click the Databases tab and create two databases:
loraserver_ns
loraserver_as
In the PostgreSQL instance Overview tab, click Connect using Cloud Shell and, when the gcloud sql connect ... command
is shown in the console, press Enter. It will prompt you for the postgres user password (which you configured when creating the
PostgreSQL instance).
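Once connected, LoRa App Server expects the pg_trgm and hstore extensions to be enabled on its database (a requirement documented by LoRa App Server); a sketch of the psql statements, run as the postgres user:

```sql
-- Switch to the LoRa App Server database, then enable the required extensions.
\c loraserver_as
create extension pg_trgm;
create extension hstore;
```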
When you have successfully completed the previous steps, your gateway is connected to the Cloud IoT Core MQTT bridge,
all the LoRa (App) Server requirements are set up, and it is time to install LoRa Server and LoRa App Server.
Create a VM instance
In the GCP Console, navigate to Compute Engine > VM instances and click on Create.
Again, the name of the instance doesn't matter but make sure you select the correct Region. The smallest Machine type is
sufficient to test with. For this tutorial we will use the default Boot disk (Debian 9).
Under Identity and API access, select Allow full access to all Cloud APIs under the Access scopes options.
Configure firewall
In order to expose the LoRa App Server web interface, we need to open port 8080 (the default LoRa App Server port) to the
public.
Click on the created instance to go to the instance details. Under Network interfaces click View details. In the left navigation
menu click Firewall rules and then on Create firewall rule. Enter the following details:
As the Compute Engine instance (created in the previous step) needs to be able to subscribe to the Pub/Sub data, we must give
the Compute Engine default service account the required role.
In the GCP Console, navigate to IAM & admin. Then edit the Compute Engine default service account. Click Add another
role and add the following roles:
Pub/Sub Publisher
Pub/Sub Subscriber
Log in to VM instance
You will find the public IP address of the created VM instance under Compute Engine > VM instances. Use the SSH web client
provided by the GCP Console, or the gcloud compute ssh command, to connect to the VM.
Execute the following commands in the VM's shell to add the LoRa Server repository to your VM instance:
Execute the following command in the VM's shell to install the LoRa Server service:
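Assuming the LoRa Server apt repository from the previous step is configured, and that the package name matches the loraserver service used below, the install command would look like:

```shell
sudo apt update
sudo apt install loraserver
```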
The LoRa Server configuration file is located at /etc/loraserver/loraserver.toml . Below you will find two (minimal but working)
configuration examples. Please refer to the LoRa Server Configuration documentation for all the available options.
Important: Because there might be high latency between the Pub/Sub and Cloud Function components, especially at a
low message rate, the rx1_delay value is set to 3 in the examples below.
EU868 configuration example:

[postgresql]
dsn="postgres://loraserver_ns:[PASSWORD]@[POSTGRESQL_IP]/loraserver_ns?sslmode=disable"
[redis]
url="redis://[REDIS_IP]:6379"
[network_server]
net_id="000000"
[network_server.band]
name="EU_863_870"
[network_server.network_settings]
rx1_delay=3
[network_server.gateway.stats]
create_gateway_on_stats=true
timezone="UTC"
[network_server.gateway.backend]
type="gcp_pub_sub"
[network_server.gateway.backend.gcp_pub_sub]
project_id="lora-server-tutorial"
uplink_topic_name="eu868-gateway-events"
downlink_topic_name="eu868-gateway-commands"
US902-928 configuration example:

[postgresql]
dsn="postgres://loraserver_ns:[PASSWORD]@[POSTGRESQL_IP]/loraserver_ns?sslmode=disable"
[redis]
url="redis://[REDIS_IP]:6379"
[network_server]
net_id="000000"
[network_server.band]
name="US_902_928"
[network_server.network_settings]
rx1_delay=3
enabled_uplink_channels=[0, 1, 2, 3, 4, 5, 6, 7]
[network_server.gateway.stats]
create_gateway_on_stats=true
timezone="UTC"
[network_server.gateway.backend]
type="gcp_pub_sub"
[network_server.gateway.backend.gcp_pub_sub]
project_id="lora-server-tutorial"
uplink_topic_name="eu868-gateway-events"
downlink_topic_name="eu868-gateway-commands"
To test the configuration for errors, you can execute the following command:
sudo loraserver
If all is well, then you can start the service in the background using:
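Since the Debian package installs a systemd unit, starting the service in the background and enabling it at boot would look like this (the unit name loraserver is an assumption matching the binary name):

```shell
sudo systemctl start loraserver
sudo systemctl enable loraserver

# Follow the logs to verify the service started cleanly:
sudo journalctl -f -n 100 -u loraserver
```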
When you have completed all previous steps, then it is time to install the last component, LoRa App Server. This is the
application-server that provides a web interface for device management and will publish application data to a Pub/Sub topic.
In the GCP Console, navigate to Pub/Sub > Topics. Then click Create topic to create a topic named lora-app-server .
SSH to the VM and execute the following command to install LoRa App Server:
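Assuming the same apt repository is configured on the VM and the package is named after the lora-app-server service, the install command would look like:

```shell
sudo apt install lora-app-server
```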
The LoRa App Server configuration file is located at /etc/lora-app-server/lora-app-server.toml . Below you will find a
minimal but working configuration example. Please refer to the LoRa App Server Configuration documentation for all the available
options.
Configuration example
[postgresql]
dsn="postgres://loraserver_as:[PASSWORD]@[POSTGRESQL_IP]/loraserver_as?sslmode=disable"
[redis]
url="redis://[REDIS_IP]:6379"
[application_server]
[application_server.integration]
backend="gcp_pub_sub"
[application_server.integration.gcp_pub_sub]
project_id="lora-server-tutorial"
topic_name="lora-app-server"
[application_server.external_api]
bind="0.0.0.0:8080"
tls_cert="/etc/lora-app-server/certs/http.pem"
tls_key="/etc/lora-app-server/certs/http-key.pem"
jwt_secret="[JWT_SECRET]"
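The [JWT_SECRET] placeholder must be replaced with a random secret, which is used to sign the web interface's API tokens. One common way to generate such a secret:

```shell
# Generate 32 random bytes, base64-encoded, to use as jwt_secret.
openssl rand -base64 32
```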
To test if there are no errors, you can execute the following command:
sudo lora-app-server
If all is well, then you can start the service in the background using these commands:
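As with LoRa Server, the service can be started in the background and enabled at boot via systemd (the unit name lora-app-server is an assumption matching the package name):

```shell
sudo systemctl start lora-app-server
sudo systemctl enable lora-app-server
```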
To get started with LoRa (App) Server, please follow the First gateway and device guide. It explains how to log in to the web
interface and add your first gateway and device.
In the LoRa App Server step, you created a Pub/Sub topic named lora-app-server . This is the topic to which LoRa App Server
publishes device events, and your application(s) need to subscribe to it in order to receive LoRaWAN device data.
For more information about Cloud Pub/Sub, please refer to the following pages:
gcloud auth
NAME
gcloud auth - manage oauth2 credentials for the Google Cloud SDK
SYNOPSIS
DESCRIPTION
The gcloud auth command group lets you grant and revoke authorization to Cloud SDK (gcloud) to access Google Cloud
Platform. Typically, when scripting Cloud SDK tools for use on multiple machines, using gcloud auth activate-service-account
is recommended.
While running gcloud auth commands, the --account flag can be specified to any command to use that account without
activation.
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http,
--project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
GROUPS
application-default
COMMANDS
activate-service-account
configure-docker
list
login
Authorize gcloud to access the Cloud Platform with Google user credentials.
revoke
EXAMPLES
To authenticate a user account with gcloud and minimal user output, run:
$ gcloud auth login --brief
To list all credentialed accounts and identify the current active account, run:
$ gcloud auth list
NOTES
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
SYNOPSIS
gcloud GROUP | COMMAND [ --account = ACCOUNT ][ --configuration = CONFIGURATION ] [ --flags-file = YAML_FILE ][ --flatten =[ KEY ,…]] [ --format = FORMAT ] [ --help ][ --project = PROJECT_
DESCRIPTION
The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.
GLOBAL FLAGS
--account = ACCOUNT
Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.
--configuration = CONFIGURATION
The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.
--flags-file = YAML_FILE
A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.
--flatten =[ KEY ,…]
Flatten name [] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and
--filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .
--format = FORMAT
Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.
--help
--project = PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using
gcloud config list --format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.
--quiet , -q
Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.
--verbosity = VERBOSITY
Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.
--version , -v
Print version information and exit. This flag is only available at the global level.
-h
OTHER FLAGS
--log-http
Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.
--trace-token = TRACE_TOKEN
Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.
--user-output-enabled
Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.
GROUPS
alpha
app
auth
beta
(BETA) Beta versions of gcloud commands.
bigtable
builds
components
composer
compute
config
container
dataflow
dataproc
datastore
debug
deployment-manager
dns
domains
endpoints
firebase
functions
iam
iot
kms
logging
ml
ml-engine
organizations
projects
pubsub
redis
services
List, enable and disable APIs and services.
source
spanner
sql
topic
COMMANDS
docker
feedback
help
info
init
version
SYNOPSIS
DESCRIPTION
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http,
--project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
COMMANDS
add-backend
add-signed-url-key
create
delete
delete-signed-url-key
describe
edit
get-health
list
remove-backend
update
update-backend
gcloud config list - list Cloud SDK properties for the currently active configuration
SYNOPSIS
gcloud config list [ SECTION / PROPERTY ] [ --all ] [ --filter = EXPRESSION ] [ --limit = LIMIT ] [ --sort-by =[ FIELD ,…]] [ GCLOUD_WIDE_FLAG … ]
DESCRIPTION
gcloud config list lists all properties of the active configuration. These include the account used to authorize access to the Cloud
Platform, the current Cloud Platform project, and the default Compute Engine region and zone, if set. See gcloud topic
configurations for more about configurations.
POSITIONAL ARGUMENTS
[ SECTION / PROPERTY ]
Property to be listed. Note that SECTION/ is optional while referring to properties in the core section.
FLAGS
--all
List all set and unset properties that match the arguments.
--filter = EXPRESSION
Apply a Boolean filter EXPRESSION to each resource item to be listed. If the expression evaluates True , then that item is
listed. For more details and examples of filter expressions, run $ gcloud topic filters. This flag interacts with other flags that
are applied in this order: --flatten , --sort-by , --filter , --limit .
--limit = LIMIT
Maximum number of resources to list. The default is unlimited . This flag interacts with other flags that are applied in this
order: --flatten , --sort-by , --filter , --limit .
--sort-by =[ FIELD ,…]
Comma-separated list of resource field key names to sort by. The default order is ascending. Prefix a field with ~ for
descending order on that field. This flag interacts with other flags that are applied in this order: --flatten , --sort-by ,
--filter , --limit .
These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --project,
--quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.
AVAILABLE PROPERTIES
core
account
Account gcloud should use for authentication. Run gcloud auth list to see your currently available accounts.
custom_ca_certs_file
default_regional_backend_service
If True, backend services in gcloud compute backend-services will be regional by default. Setting the --global flag
is required for global backend services.
disable_color
If True, color will not be used when printing messages in the terminal.
disable_prompts
If True, the default answer will be assumed for all user prompts. However, for any prompts that require user input, an
error will be raised. This is equivalent to either using the global --quiet flag or setting the environment
variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1. Setting this property is useful when scripting with gcloud .
disable_usage_reporting
If True, anonymous statistics on SDK usage will not be collected. This value is set by default based on your choices
during installation, but can be changed at any time. For more information, see: https://cloud.google.com/sdk/usage-
statistics
log_http
If True, log HTTP requests and responses to the logs. To see logs in the terminal, adjust verbosity settings.
Otherwise, logs are available in their respective log files.
max_log_days
Maximum number of days to retain log files before deleting. If set to 0, turns off log garbage collection and does not
delete log files. If unset, the default is 30 days.
pass_credentials_to_gsutil
project
Project ID of the Cloud Platform project to operate on by default. This can be overridden by using the global
--project flag.
show_structured_logs
Control when JSON-structured log messages for the current verbosity level (and above) will be written to standard
error. If this property is disabled, logs are formatted as text by default.
trace_token
Token used to route traces of service requests for investigation of issues. This token will be provided by Google support.
user_output_enabled
True, by default. If False, messages to the user and command output on both standard output and standard error will
be suppressed.
verbosity
Default logging verbosity for gcloud commands. This is the equivalent of using the global --verbosity flag.
Supported verbosity levels: debug , info , warning , error , and none .
app
cloud_build_timeout
Timeout, in seconds, to wait for Docker builds to complete during deployments. All Docker builds now use the Cloud
Build API.
promote_by_default
If True, when deploying a new version of a service, that version will be promoted to receive all traffic for the service.
This property can be overridden via the --promote-by-default or --no-promote-by-default flags.
stop_previous_version
If True, when deploying a new version of a service, the previously deployed version is stopped. If False, older versions
must be stopped manually.
use_runtime_builders
If set, opt in/out to a new code path for building applications using pre-fabricated runtimes that can be updated
independently of client tooling. If not set, the default path for each runtime is used.
auth
disable_credentials
If True, gcloud will not attempt to load any credentials or authenticate any requests. This is useful when behind a
proxy that adds authentication to requests.
billing
quota_project
Project that will be charged quota for the operations performed in gcloud . When unset, the default is
[CURRENT_PROJECT]; this will charge quota against the currently set project for operations performed on it.
Additionally, some existing APIs will continue to use a shared project for quota by default, when this property is unset.
If you need to operate on one project, but need quota against a different project, you can use this property to specify
the alternate project.
builds
timeout
component_manager
additional_repositories
Comma separated list of additional repositories to check for components. This property is automatically managed by
the gcloud components repositories commands.
disable_update_check
composer
location
Composer location to use. Each Composer location constitutes an independent resource namespace constrained to
deploying environments into Compute Engine regions inside this location. This parameter corresponds to the
/locations/<location> segment of the Composer resource URIs being referenced.
compute
region
Default region to use when working with regional Compute Engine resources. When a --region flag is required but
not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute regions list .
use_new_list_usable_subnets_api
If True, use the new API for listing usable subnets which only returns subnets in the current project.
zone
Default zone to use when working with zonal Compute Engine resources. When a --zone flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute zones list .
container
build_timeout
cluster
Name of the cluster to use by default when working with Kubernetes Engine.
new_scopes_behavior
If True, use new scopes behavior and do not add compute-rw , storage-ro , service-control , or service-
management scopes. The former two ( compute-rw and storage-ro ) only apply to clusters at Kubernetes v1.9 and
below; starting v1.10, compute-rw and storage-ro are not added by default. Any of these scopes may be added
explicitly using --scopes . Using new scopes behavior will be the default in a future release. Additionally, if this
property is set to True, using --[no-]enable-cloud-endpoints is not allowed. This property is ignored in alpha and
beta, since these tracks always use the new behavior. See --scopes help for more info.
use_application_default_credentials
If True, use application default credentials to authenticate to the cluster API server.
use_client_certificate
If True, use the cluster's client certificate to authenticate to the cluster API server.
dataproc
region
Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an independent resource namespace
constrained to deploying instances into Compute Engine zones inside the region. The default value of global is a
special multi-region namespace which is capable of deploying instances into all Compute Engine zones globally, and
is disjoint from other Cloud Dataproc regions.
deployment_manager
glob_imports
Enable import path globbing. Uses glob patterns to match multiple imports in a config file.
filestore
location
Default location to use when working with Cloud Filestore locations. When a --location flag is required but not
provided, the command will fall back to this value, if set.
functions
region
Default region to use when working with Cloud Functions resources. When a --region flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud beta functions regions
list .
gcloudignore
enabled
If True, do not upload .gcloudignore files (see $ gcloud topic gcloudignore ). If False, turn off the gcloudignore
mechanism entirely and upload all files.
interactive
bottom_bindings_line
bottom_status_line
completion_menu_lines
fixed_prompt_position
help_lines
hidden
justify_bottom_lines
manpage_generator
If True, use the manpage CLI tree generator for unsupported commands.
multi_column_completion_menu
prompt
show_help
suggest
ml_engine
local_python
Full path to the Python interpreter to use for Cloud ML Engine local predict/train jobs. If not specified, the default path
is the one to the Python interpreter found on system PATH .
polling_interval
Interval (in seconds) at which to poll logs from your Cloud ML Engine jobs. Note that making it much faster than the
default (60) will quickly use all of your quota.
proxy
address
password
port
rdns
If True, DNS queries will not be performed locally, and instead, handed to the proxy to resolve. This is default
behavior.
type
Type of proxy being used. Supported proxy types are: [http, http_no_tunnel, socks4, socks5].
username
region
Default region to use when working with Cloud Memorystore for Redis resources. When a region is required but not
provided by a flag, the command will fall back to this value, if set.
spanner
instance
Default instance to use when working with Cloud Spanner resources. When an instance is required but not provided
by a flag, the command will fall back to this value, if set.
EXAMPLES
NOTES
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache
2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
SYNOPSIS
DESCRIPTION
gcloud config set sets the specified property in your active configuration only. A property governs the behavior of a specific
aspect of Cloud SDK such as the service account to use or the verbosity level of logs. To set the property across all
configurations, use the --installation flag. For more information regarding creating and using configurations,
see gcloud topic configurations .
Note: Cloud SDK comes with a default configuration. To create multiple configurations, use gcloud config
configurations create , and gcloud config configurations activate to switch between them.
POSITIONAL ARGUMENTS
SECTION / PROPERTY
Property to be set. Note that SECTION/ is optional when referring to properties in the core section; that is,
either core/project or project is a valid way of setting the project. The section name is required for
properties in other sections, such as compute/region . Consult the Cloud SDK properties page for a comprehensive
list of properties: https://cloud.google.com/sdk/docs/properties
VALUE
Value to be set.
FLAGS
--installation
If set, the property is updated for the entire Cloud SDK installation. Otherwise, by default, the property is updated only
in the currently active configuration.
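For example, a property can be set in the active configuration only, or installation-wide with the flag above (my-project is a placeholder project ID):

```shell
# Set the default project in the active configuration only.
gcloud config set project my-project

# Disable update checks for the entire Cloud SDK installation.
gcloud config set --installation component_manager/disable_update_check true
```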
These flags are available to all commands: --account , --configuration , --flags-file , --flatten , --format , --help ,
--log-http , --project , --quiet , --trace-token , --user-output-enabled , --verbosity . Run $ gcloud help for details.
AVAILABLE PROPERTIES
core
account
Account gcloud should use for authentication. Run gcloud auth list to see your currently available accounts.
custom_ca_certs_file
default_regional_backend_service
If True, backend services in gcloud compute backend-services will be regional by default. Setting the --
global flag is required for global backend services.
disable_color
If True, color will not be used when printing messages in the terminal.
disable_prompts
If True, the default answer will be assumed for all user prompts. However, for any prompts that require user
input, an error will be raised. This is equivalent to either using the global --quiet flag or setting the
environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1. Setting this property is useful when scripting
with gcloud .
disable_usage_reporting
If True, anonymous statistics on SDK usage will not be collected. This value is set by default based on your
choices during installation, but can be changed at any time. For more information,
see: https://cloud.google.com/sdk/usage-statistics
log_http
If True, log HTTP requests and responses to the logs. To see logs in the terminal, adjust verbosity settings.
Otherwise, logs are available in their respective log files.
max_log_days
Maximum number of days to retain log files before deleting. If set to 0, turns off log garbage collection and does
not delete log files. If unset, the default is 30 days.
pass_credentials_to_gsutil
project
Project ID of the Cloud Platform project to operate on by default. This can be overridden by using the global --
project flag.
show_structured_logs
Control when JSON-structured log messages for the current verbosity level (and above) will be written to
standard error. If this property is disabled, logs are formatted as text by default.
trace_token
Token used to route traces of service requests for investigation of issues. This token will be provided by Google
support.
user_output_enabled
True, by default. If False, messages to the user and command output on both standard output and standard
error will be suppressed.
verbosity
Default logging verbosity for gcloud commands. This is the equivalent of using the global --verbosity flag.
Supported verbosity levels: debug , info , warning , error , and none .
app
cloud_build_timeout
Timeout, in seconds, to wait for Docker builds to complete during deployments. All Docker builds now use the
Cloud Build API.
promote_by_default
If True, when deploying a new version of a service, that version will be promoted to receive all traffic for the
service. This property can be overridden via the --promote-by-default or --no-promote-by-default flags.
stop_previous_version
If True, when deploying a new version of a service, the previously deployed version is stopped. If False, older
versions must be stopped manually.
use_runtime_builders
If set, opt in/out to a new code path for building applications using pre-fabricated runtimes that can be updated
independently of client tooling. If not set, the default path for each runtime is used.
auth
disable_credentials
If True, gcloud will not attempt to load any credentials or authenticate any requests. This is useful when behind
a proxy that adds authentication to requests.
billing
quota_project
Project that will be charged quota for the operations performed in gcloud . When unset, the default is
[CURRENT_PROJECT]; this will charge quota against the currently set project for operations performed on it.
Additionally, some existing APIs will continue to use a shared project for quota by default, when this property is
unset.
If you need to operate on one project, but need quota against a different project, you can use this property to
specify the alternate project.
builds
timeout
component_manager
additional_repositories
Comma-separated list of additional repositories to check for components. This property is automatically
managed by the gcloud components repositories commands.
disable_update_check
composer
location
Composer location to use. Each Composer location constitutes an independent resource namespace
constrained to deploying environments into Compute Engine regions inside this location. This parameter
corresponds to the /locations/<location> segment of the Composer resource URIs being referenced.
compute
region
Default region to use when working with regional Compute Engine resources. When a --region flag is required
but not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute
regions list .
use_new_list_usable_subnets_api
If True, use the new API for listing usable subnets which only returns subnets in the current project.
zone
Default zone to use when working with zonal Compute Engine resources. When a --zone flag is required but
not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute zones
list .
container
build_timeout
cluster
Name of the cluster to use by default when working with Kubernetes Engine.
new_scopes_behavior
If True, use new scopes behavior and do not add compute-rw , storage-ro , service-control , or service-
management scopes. The former two ( compute-rw and storage-ro ) only apply to clusters at Kubernetes v1.9
and below; starting v1.10, compute-rw and storage-ro are not added by default. Any of these scopes may be
added explicitly using --scopes . Using new scopes behavior will be the default in a future release. Additionally,
if this property is set to True, using --[no-]enable-cloud-endpoints is not allowed. This property is ignored in
alpha and beta, since these tracks always use the new behavior. See --scopes help for more info.
use_application_default_credentials
If True, use application default credentials to authenticate to the cluster API server.
use_client_certificate
If True, use the cluster's client certificate to authenticate to the cluster API server.
dataproc
region
Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an independent resource namespace
constrained to deploying instances into Compute Engine zones inside the region. The default value
of global is a special multi-region namespace which is capable of deploying instances into all Compute Engine
zones globally, and is disjoint from other Cloud Dataproc regions.
deployment_manager
glob_imports
Enable import path globbing. Uses glob patterns to match multiple imports in a config file.
filestore
location
Default location to use when working with Cloud Filestore locations. When a --location flag is required but not
provided, the command will fall back to this value, if set.
functions
region
Default region to use when working with Cloud Functions resources. When a --region flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud beta functions
regions list .
gcloudignore
enabled
If True, do not upload files that are ignored (see $ gcloud topic gcloudignore ). If False, turn off the
gcloudignore mechanism entirely and upload all files.
interactive
bottom_bindings_line
bottom_status_line
completion_menu_lines
context
fixed_prompt_position
help_lines
hidden
justify_bottom_lines
manpage_generator
If True, use the manpage CLI tree generator for unsupported commands.
multi_column_completion_menu
prompt
show_help
suggest
ml_engine
local_python
Full path to the Python interpreter to use for Cloud ML Engine local predict/train jobs. If not specified, the default
path is the one to the Python interpreter found on system PATH .
polling_interval
Interval (in seconds) at which to poll logs from your Cloud ML Engine jobs. Note that making it much faster than
the default (60) will quickly use all of your quota.
proxy
address
password
port
rdns
If True, DNS queries will not be performed locally, and instead, handed to the proxy to resolve. This is default
behavior.
type
Type of proxy being used. Supported proxy types are: [http, http_no_tunnel, socks4, socks5].
username
redis
region
Default region to use when working with Cloud Memorystore for Redis resources. When a region is required
but not provided by a flag, the command will fall back to this value, if set.
spanner
instance
Default instance to use when working with Cloud Spanner resources. When an instance is required but not
provided by a flag, the command will fall back to this value, if set.
EXAMPLES
To set a proxy with the appropriate type, and specify the address and port on which to reach it, run:
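For example, with placeholder address and port values:

```shell
# Set the proxy type, then the address and port on which to reach it.
gcloud config set proxy/type http
gcloud config set proxy/address 1.234.56.78
gcloud config set proxy/port 8080
```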
For a full list of accepted values, see the Cloud SDK properties page: https://cloud.google.com/sdk/docs/properties
NOTES
gcloud
NAME
SYNOPSIS
gcloud GROUP | COMMAND [ --account = ACCOUNT ][ --configuration = CONFIGURATION ] [ --flags-file = YAML_FILE ][ --flatten =[ KEY ,…]] [ --format = FORMAT ] [ --help ][ --project = PROJECT_
DESCRIPTION
The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.
GLOBAL FLAGS
--account = ACCOUNT
Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.
--configuration = CONFIGURATION
The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.
--flags-file = YAML_FILE
A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.
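As a sketch of the mechanism (the flag values here are illustrative), a flags file is simply a YAML dictionary mapping flag names to values:

```shell
# Create a flags file; each key is a flag name, each value its argument.
cat > flags.yaml <<'EOF'
--format: json
--verbosity: info
EOF

# The file can then be passed to any gcloud command; the --flags-file
# argument is replaced by its constituent flags, e.g.:
#   gcloud compute instances list --flags-file=flags.yaml
```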
--flatten =[ KEY ,…]
Flatten name[] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and --
filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .
--format = FORMAT
Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.
--help
--project = PROJECT_ID
The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.
--quiet , -q
Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.
--verbosity = VERBOSITY
Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.
--version , -v
Print version information and exit. This flag is only available at the global level.
-h
OTHER FLAGS
--log-http
Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.
--trace-token = TRACE_TOKEN
Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.
--user-output-enabled
Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.
GROUPS
alpha
app
auth
beta
(BETA) Beta versions of gcloud commands.
bigtable
builds
components
composer
compute
config
container
dataflow
dataproc
datastore
debug
deployment-manager
dns
domains
endpoints
firebase
functions
iam
iot
kms
logging
ml
ml-engine
organizations
projects
pubsub
redis
services
List, enable and disable APIs and services.
source
spanner
sql
topic
COMMANDS
docker
feedback
help
info
init
version
DESCRIPTION
gcloud properties can be stored in named configurations , which are collections of key-value pairs that influence the
behavior of gcloud.
Named configurations are intended to be an advanced feature, and you can probably ignore them entirely if you only work
with one project.
Properties that are commonly stored in configurations include default Google Compute Engine zone, verbosity level,
project ID, and active user or service account. Configurations allow you to define and enable these and other settings
together as a group.
Work with multiple projects. You can create a separate configuration for each project.
Use multiple accounts, for example, a user account and a service account.
Perform generally orthogonal tasks (work on an App Engine app in project foo, administer a Google Compute Engine
cluster in zone us-central1-a, manage the network configurations for region asia-east1, etc.)
Property information stored in named configurations is readable by all gcloud commands and may be modified by gcloud
config set and gcloud config unset .
Creating configurations
Additionally, there is a built-in configuration named NONE that has no properties set.
$ gcloud init
This will guide you through setting up your first named configuration, creating a new named configuration, or reinitializing
an existing named configuration. (Note: reinitializing an existing configuration will remove all its existing properties!)
Using configurations
gcloud may have at most one active configuration which provides property values. Inactive configurations have no effect
on gcloud executions.
Note that changes to your OS login, Google Cloud Platform account or project could change the path.
You can view and change the properties of your active configuration using the following commands:
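For example (the property shown is illustrative):

```shell
# View the properties of the active configuration.
gcloud config list

# Set and unset individual properties in the active configuration.
gcloud config set compute/zone us-central1-a
gcloud config unset compute/zone
```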
Additionally, commands under gcloud config configurations allow you to list, activate, describe, and delete
configurations that may or may not be active.
You can activate a configuration for a single gcloud invocation using the --configuration flag, for
example --configuration=my-config , or the environment variable CLOUDSDK_ACTIVE_CONFIG_NAME=my-config .
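Putting these commands together, a typical workflow for a second configuration might look like this (my-config is a placeholder name):

```shell
# Create a new named configuration; it becomes active on creation.
gcloud config configurations create my-config

# List all configurations; the active one is marked in the IS_ACTIVE column.
gcloud config configurations list

# Switch back to the default configuration.
gcloud config configurations activate default

# Use my-config for a single invocation only, without activating it.
gcloud --configuration=my-config config list
```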
Run Express.js on Google App Engine Flexible Environment
Author(s): @jmdobry Published: Jan 7, 2016
Contents
Express.js
Prerequisites
Prepare
Create
Run
Deploy
Google Cloud Platform Community tutorials submitted from the community do not represent official Google Cloud Platform product
documentation.
Express.js
Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile
applications.
– expressjs.com
You can check out Node.js and Google Cloud Platform to get an overview of Node.js itself and learn ways to run Node.js apps on
Google Cloud Platform.
Prerequisites
Prepare
1. Initialize a package.json file for your app:
npm init
2. Add a start script to package.json :
"scripts": {
"start": "node index.js"
}
3. Install Express.js:
Create
Run
npm start
Deploy
Create an app.yaml file with the following contents:
runtime: nodejs
env: flex
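With app.yaml in place, the app can be deployed and opened with the Cloud SDK (assuming a default project is already configured):

```shell
# Deploy the app to the App Engine flexible environment.
gcloud app deploy

# Open the deployed app in the browser.
gcloud app browse
```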
For the last 3 years I worked on an application that runs on Google App Engine. It is a
fascinating, unique piece of service Google is offering here. Unlike anything you'll find
elsewhere. This is my in-depth, personal take on it.
Google launched GAE in 2008, when cloud computing was still in its infancy. Amazon
was ahead of them since they already started renting out their IT infrastructure in
2006. But with GAE, Google offered a sophisticated Platform-as-a-Service (PaaS) very
early on that would be matched by Amazon with its Elastic Beanstalk service in 2011.
Now what is so special about GAE?
By using Google App Engine, you can run your app on top of (probably) the world's
best infrastructure. Also, you receive functionality out of the box that would take at
least a dozen add-ons from third parties on Heroku or a few weeks of setup if done on
your own. This is GAE's appeal.
Noteworthy applications that run on GAE include Snapchat and Khan Academy.
Development
The web app I was working on all this time is a single, large Java application. App
Engine also supports Python, PHP and Go. Now you might wonder why the selection
is so limited. One reason is that in order to have a fully-managed environment,
Google needs to integrate the platform with the environment. You could say that
environment and platform are tightly coupled. That takes a lot of effort and
investment which becomes very clear once you start developing for GAE.
SDK
Each app needs to use a special SDK (Software Development Kit) to use the APIs
offered by GAE. The SDK is huge. For example, the Java SDK download comes in at
roughly 190 MB. Granted, some of the JARs in there are not needed for most use cases
and some only during development - but still, it certainly is not lightweight (even for
Java, that is).
The SDK is not just your bridge to the world of Google App Engine but also serves as
its simulation on your local machine. For virtually every GAE API it features a stub
that you can develop against. First of all, this means that when you run your app
locally you'll get quite close to how it would behave in production. Second of all, you
can easily write integration tests against the APIs. And usually this will get you very
far; the mismatch between the production and stub behavior is quite small.
Java APIs
Speaking of APIs, you are in for a surprise when you use certain Java APIs. Since GAE
runs your application in some kind of sandbox, it forbids using particular Java APIs.
The major restrictions include writing to the file system, certain methods of
java.lang.System and using the Java Native Interface (JNI). There are also
peculiarities about using threads and sockets but more on that later.
One interesting thing is that the Java SDK actually ensures you do not use these
restricted APIs locally. When you run your app or just an integration test, it employs a
Java agent that monitors your every method call. It immediately throws an exception
for any detected violation. This is helpful for finding violations early, instead of only in
production, but it has an annoying side effect. When you profile the performance of your
app, there will be an overwhelming amount of violation checks by the agent. In the
end, it is hard to judge your app's actual performance since the more method calls you
make, the more overhead the agent generates.
Obviously, being limited to an old Java version is a major annoyance for any developer. For me
personally, the missing lambda support weighs very heavily. Of course, one could migrate to one
of the many JVM languages like Groovy, Scala or Kotlin which all offer a lot more
features than Java 8. But this is a costly and risky investment to make. Too costly and
risky for our project. We also investigated the feasibility of retrolambda, a backport of
lambdas to Java 7, but did not pursue it yet although it looked promising in first tests.
Having to stay with an old version is also a liability for the business. It makes it harder
to find developers. Overall application security is threatened as well. Google support
told us we would still receive security patches for our production JDK 7. But
eventually, all major libraries like Spring will stop supporting it. Eventually, you'll be
stuck.
Deployment
To deploy your application, you need to create an appengine-web.xml configuration
file. There, you specify the application ID and version plus some additional settings,
e.g. marking the app as threadsafe to be able to receive multiple requests per
instance simultaneously.
Upload
App Engine expects to receive your Java application as a packaged WAR file. You can
upload it to their servers with the appcfg script from the SDK. Optionally, there are
plugins for Maven and Gradle which make this as easy as running
mvn appengine:update . The upload can take quite a while for typical Java applications,
so you'd better have a fast internet connection. Once the process finishes, you can see
your newly deployed version in the Google Cloud Console:
Static Files
Static files like images, stylesheets and scripts are part of any web application today.
In the appengine-web.xml, files can be marked as static. Google will serve these files
directly - without hitting your application. It is not exactly a Content Delivery
Network (CDN) since it is not distributed to hundreds of edge nodes, but it helps to
reduce the load on your servers.
Versions
The nice thing in App Engine is that everything you deploy has a specific version.
Every version can be accessed at https://<version>-dot-<app-id>.appspot.com . But
which one is actually live? The version marked as default
will be the version receiving all the requests. Switching a version to default is very
easy: all it takes is a button click or a simple terminal command. GAE can switch
immediately or migrate your traffic incrementally to prevent overwhelming the new
version.
There is also one option (which we never used) that allows you to distribute your
traffic across multiple versions. This allows incrementally rolling out a new version by
only giving it to a fraction of the user base before making it available for everyone.
Since it is so easy to create new versions and switch production traffic between them,
GAE is a perfect platform to practice blue-green deployment. Each time we needed to
roll back due to a bug in the new version, it was effortless. Continuous
Delivery should also be achievable by writing a somewhat smart deployment script.
Instances
Every version can run any number of instances (the only limit is your credit card). The
actual number is the result of incoming traffic and the scaling configuration of your
app; we'll look at that later. Google will distribute incoming requests between all
running instances of that version. You can see a list of instances, including some basic
metrics like requests and latency, in the Google Cloud Console:
The hardware options you can choose from to run these instances on are - let's be
frank here - pathetic. App Engine basically offers four different instance classes
ranging from 128MB and 600MHz CPU (you read that correctly) to 1024MB and
2.4GHz CPU. Yes, again, that is true. And truly sad. On a developer's laptop our app
started almost twice as fast as in production.
Services
So far, I have only talked about a single, monolithic application. But what do you do if
yours consists of multiple services? App Engine has got you covered. Every app is made
up of services. If you only have one, it is simply called default . You can access each one
directly via https://<version>-dot-<service>-dot-<app-id>.appspot.com .
You can easily deploy multiple versions of each service, scale and monitor them
separately. And since each service is separate from the others, you could run any
combination of the supported languages. Unfortunately though, some configuration
settings are shared across all services. They are therefore not perfectly isolated. Still,
all in all, GAE seems like a good fit for microservices. There is some elaborate
documentation on this topic from Google, as well.
For reasons that will become clear later, we decided to separate our application into
two services: frontend (user-facing) and backend (background work). But to do so, we
didn't actually split the monolith in two - that would have taken months. We simply
deployed the same app twice and only sent users to one service and background work
to the other.
Operations
Let's talk about what it means to run your application on App Engine. As you will see,
there are a number of restrictions it imposes on you. But it is not all gloomy. In the
end you will understand why.
Application Startup
When App Engine starts a new instance, the app needs to initialize. It will either
directly send the HTTP request from the user to the app or - if the configuration and
scaling circumstances allow it - send a so-called warmup request. Either way, the first
request is called a loading request. And as you can imagine, starting quickly is
important.
The instance itself on the other hand is ridiculously fast to start. If you have started a
server in the Cloud before, you might have waited more than a minute. Not on GAE.
Instances start almost instantly. I guess Google holds a pool of servers ready to go.
The bottleneck will always be your own app. Our application took more than 40
seconds to start in production. So unless we wanted to split our huge monolith into
separate services, we needed it to start more efficiently.
The app uses Spring. Google even has a dedicated documentation entry just for that:
Optimizing Spring Framework for App Engine Applications. There we found the
inspiration for our most important startup optimization.
Request Handling
The very first thing I have to mention here is App Engine's requirement to
handle a user request within 60 seconds and a background request within 10 minutes.
When the application takes too long to respond, the request is aborted with a 500
status code and a DeadlineExceededException is thrown.
Usually, this shouldn't be a problem. If your app takes more than 60 seconds to
respond, odds are the user is long gone anyway. But since an instance is started via an
HTTP request, this also means it has to start in 60 seconds. In production, we
observed variations in startup time of up to 10 seconds. This means you now have less
than 50 seconds to start your app. It is not uncommon for a Java app to take that long.
One nice little feature I'd like to highlight is the geographical HTTP headers: for each
incoming user request, Google adds headers that contain the user's country, region,
city as well as latitude and longitude of said city. This can be very useful, for example
for pre-filling phone number country codes or detecting unusual account login
locations. The accuracy also seems pretty high from our observations. It is usually
very cumbersome and/or expensive to get that kind of information with this level of
accuracy from a third party API or database. So getting it for free on App Engine is a
nice bonus.
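As a sketch, a handler might read those headers like this. The header names are the ones App Engine documents; the plain Map is an assumption of this sketch, standing in for HttpServletRequest#getHeader in a real servlet:

```java
import java.util.Map;

public class GeoHeaders {
    // The Map stands in for HttpServletRequest#getHeader in a real handler.
    static String countryOf(Map<String, String> headers) {
        // ISO country code; "ZZ" means App Engine could not determine it
        return headers.getOrDefault("X-AppEngine-Country", "ZZ");
    }

    static String cityOf(Map<String, String> headers) {
        return headers.getOrDefault("X-AppEngine-City", "");
    }

    public static void main(String[] args) {
        Map<String, String> headers = Map.of(
                "X-AppEngine-Country", "DE",
                "X-AppEngine-City", "berlin");
        System.out.println(countryOf(headers) + "/" + cityOf(headers)); // DE/berlin
    }
}
```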
Background Work
Threads
As mentioned earlier, there are restrictions using Java threads. While it is possible to
start a new thread, albeit through a custom GAE ThreadManager , it cannot 'outlive' the
request it was created in. This can be annoying in practice since third party libraries
don't follow App Engine's restrictions, of course. Finding a compatible library or adapting
a seemingly incompatible one cost us a lot of sweat and tears over the years. For
example, we could not use the Dropwizard metrics library out of the box since it relies
on using a background thread.
Queue
But there are other ways of doing background work: In the spirit of the Cloud, you
apply the divide and conquer approach on the instance level. By using task queues
you can enqueue work for later processing. For example, when an email needs to be
sent, you can enqueue a new task with a payload (e.g. recipient, subject and body) and
a URL on a push queue. Then, one of your instances will receive the payload as an HTTP
POST request to the specified endpoint. If it fails, App Engine will retry the operation.
This pattern really shines when you have a lot of work to process. Simply enqueue a
batch of tasks that run in isolation. The App Engine will take care of failure handling.
No need for custom retry code. Just imagine how awkward it would be without it:
running hundreds of tasks at once you either need to stop and start from scratch
when an error occurs or carefully track which have failed and enqueue them again for
another attempt.
And just like the rest of the App Engine, task queues scale beautifully. A queue can
receive virtually unlimited tasks. The downside is that the payload can only be up to 1 MB.
What we usually did was simply pass references to data in the payload.
Then, however, you need to take extra care in your data handling, since it can easily
happen that something vanishes between the time you enqueue a task and the time
that task is actually executed.
The queues are configured in a queue.xml file. Here is an example of a push queue
that fires up to one task per second with a maximum of two retries:
<queue-entries>
  <queue>
    <name>my-push-queue</name>
    <rate>1/s</rate>
    <retry-parameters>
      <task-retry-limit>2</task-retry-limit>
    </retry-parameters>
  </queue>
</queue-entries>
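The retry behavior this configures can be sketched in plain Java. This is a toy stand-in for App Engine's push-queue delivery, not the real Task Queue API:

```java
import java.util.function.IntPredicate;

public class PushQueueRetry {
    // Deliver a task, allowing up to retryLimit additional attempts,
    // mirroring <task-retry-limit> from queue.xml. The handler returns
    // true for a successful (2xx) response.
    static boolean deliver(IntPredicate handler, int retryLimit) {
        for (int attempt = 0; attempt <= retryLimit; attempt++) {
            if (handler.test(attempt)) return true;
        }
        return false; // give up after the limit
    }

    public static void main(String[] args) {
        // A task that fails twice, then succeeds on the third attempt:
        System.out.println(deliver(attempt -> attempt == 2, 2)); // true
        // With a limit of one retry it never reaches the third attempt:
        System.out.println(deliver(attempt -> attempt == 2, 1)); // false
    }
}
```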
Cron
Another extremely valuable tool is the distributed Cron. In a cron.xml you can tell
GAE to issue requests at certain time intervals. These are just simple HTTP GET
requests one of your instances will receive. The smallest interval possible is once per
minute. It is very useful for regular reports, emails and cleanups.
<cronentries>
  <cron>
    <url>/tasks/summary</url>
    <schedule>every 24 hours</schedule>
  </cron>
</cronentries>
A Cron job can also be combined with pull queues: they allow you to actively fetch a batch
of tasks from a queue. Depending on the use case, making an instance pull lots of
tasks in a batch can be much more efficient than pushing them to the instance
individually.
Like all other App Engine configuration files, the cron.xml is shared across all
services and versions of an application. This can be annoying. In our case, sometimes
when we deployed a version where a new Cron entry had been added, App Engine
would start sending requests to an endpoint which did not exist on the live (but older)
version - generating noise for our production error reporting. I imagine this must be
even more painful when using App Engine to host microservices.
Also, the Cron jobs are not run locally. I can understand why that might be: a lot of
the jobs are scheduled outside of the busy hours and would therefore not
even be triggered during a regular workday. But some run every few minutes or
hours - and those are really interesting to observe. They might trigger notifications,
for example. You want to see those locally. Because eventually you will introduce a
change that leads to undesirable behavior (as has happened multiple times in our
project) and seeing it locally might prevent you from shipping it. But simulating the
Cron jobs locally is tricky (we didn't bother, unfortunately). One would probably need
to write an external tool that parses the cron.xml and then pings the according
endpoints (yuck!).
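Such a tool is at least straightforward to start. This is a hypothetical local-runner fragment, not part of the SDK; it only extracts the endpoints, and schedule parsing is left out:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

public class CronUrls {
    // Pull the <url> out of every <cron> entry; a local runner would then
    // ping each endpoint on its schedule (schedule parsing is omitted).
    static List<String> parse(String cronXml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(cronXml.getBytes(StandardCharsets.UTF_8)));
        NodeList urls = doc.getElementsByTagName("url");
        List<String> result = new ArrayList<>();
        for (int i = 0; i < urls.getLength(); i++) {
            result.add(urls.item(i).getTextContent());
        }
        return result;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<cronentries><cron><url>/tasks/summary</url>"
                + "<schedule>every 24 hours</schedule></cron></cronentries>";
        System.out.println(parse(xml)); // [/tasks/summary]
    }
}
```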
Scaling
App Engine will take care of scaling the number of instances based on the traffic.
How? Well, depending on how you have configured your application. There are three
modes:
Automatic: This is GAE's unique selling point. It will scale the number of
instances based on metrics like request rate and response latency. So if there is a
lot of traffic or your app is slow to respond, more instances spin up.
Manual: Basically like your good old virtual private servers. You tell Google how
many instances you want and Google delivers. This fixed instance size is useful if
you know exactly what traffic you are going to get.
Basic: Essentially the same as manual scaling mode but when an instance
becomes idle, it is turned off.
The most useful and interesting one here certainly is the automatic mode. It has a few
parameters that help to shed some light on how it works internally:
max_concurrent_requests , max_idle_instances , min_idle_instances and others.
The App Engine scheduler decides whether to serve each new request with an
existing instance (either one that is idle or accepts concurrent requests), put the
request in a pending request queue, or start a new instance for that request. The
decision takes into account the number of available instances, how quickly your
application has been serving requests (its latency), and how long it takes to spin up
a new instance.
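For a Java app these knobs live in appengine-web.xml. The following is a sketch with illustrative values, using the element names of the modules-era configuration; check the reference documentation before relying on them:

```xml
<automatic-scaling>
  <min-idle-instances>1</min-idle-instances>
  <max-idle-instances>3</max-idle-instances>
  <min-pending-latency>30ms</min-pending-latency>
  <max-pending-latency>500ms</max-pending-latency>
  <max-concurrent-requests>10</max-concurrent-requests>
</automatic-scaling>
```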
Every time we tried to tweak those numbers, it felt like practicing black magic. It is
very difficult to actually deduce a good setup here. Yet, these numbers determine the
real-world performance of your app and hugely affect your monthly bill.
But all in all, the automatic scaling is pretty wicked. It is an especially good fit for
handling background work (e.g. generating reports, sending emails) since it often -
more so than user requests - comes in large, sudden bursts.
But the thing is, Java is a terrible fit for this kind of auto scaling due to its slow startup
time. What makes matters worse, it is very common for the scheduler to assign a
request to a starting (cold) instance. Then, all efforts that went into sub-second REST
responses go out the window. Since 2012 there has been an open issue asking that
user-facing requests never be routed to cold instances. It has not elicited the slightest
comment from Google other than the status change to 'Accepted' (sounds like one of
the stages of grief at this point).
This also explains why we split our app into two services. Before, we often found that
with a surge in background requests, the user requests would suffer. This is because
App Engine scaled the instances up immensely and, since requests are routed evenly
across instances, this led to more user requests hitting cold instances. By splitting the
app we significantly reduced how often this happened. Also, we were able to apply
different scaling strategies for the two services.
One last thing: In a side-project, I used Go on App Engine and discovered a new
perspective on the App Engine. Among Go's traits is the ability to start an application
virtually instantly. This makes App Engine and Go a perfect combination, like Batman
and Robin. Together, they embody everything I personally expected from the Cloud
ever since I learned about it. It truly scales to the workload and does so effortlessly.
Not even the abysmal hardware options seemed to pose a real problem for Go since it
is that efficient.
Data
When App Engine launched, the only database options you had were Google
Datastore for structured data and Google Blobstore for binary data. Since then, they
have added Google Cloud SQL (managed MySQL) and Google Cloud Storage (like
Amazon's S3) which replaced the Blobstore. From the beginning App Engine offered a
managed Memcache, as well.
It used to be very difficult to connect to a third-party database since you could only
use HTTP for communication. But usually databases require raw TCP. This only
changed a few years ago when the Socket API was released. But it is still in Beta,
which makes it a questionable choice for mission-critical usage. So database-wise,
there is still very much of a vendor lock-in.
Datastore
The Datastore is a proprietary NoSQL database, fully managed by Google. It is unlike
anything I had ever used before. It is a massively scaling beast with very unique traits,
guarantees and restrictions.
In the early days, the Datastore was based on a master-slave setup which featured
strongly consistent reads. A few years in, after it had suffered a few severe outages,
Google introduced a new configuration option: High Replication. The API stayed the
same but the latency for writes increased and some reads became eventually consistent
(more on that later). The upside was the significantly increased availability. It even
has a 99.95% uptime SLA. In all the time I worked with it, I never experienced a single
issue with the Datastore's availability. It was just something you did not have to think
about.
Entities
The basics of the Datastore are simple. You can read and write entities. They are
categorized under a particular kind. An entity consists of properties. A property has a
name and a value which has a certain type. Like string , boolean , float or integer .
Each entity also has a unique key.
Writing
There is no schema whatsoever, though. Entities of the same kind can look
completely different. This makes development very easy: just add a new property,
save it and it will be there. The flip side is that you will need to write custom
migration code to rename properties. The reason for this is that an entity cannot be
updated in place - it must be loaded, changed and saved again. And depending on the
volume of entities, this can become a non-trivial task since you might need to use the
task queue to circumvent the request time limits. In my experience, this leads
to old property names all over the place since refactoring is so costly and dangerous.
There are some limits for working with entities. The two most critical are:
An entity may only be 1MB in total, including additional metadata of the
encoded entity
You can only write to an entity (group, to be exact) up to once per second
In practice, this can be an issue. We rarely hit the size limit - but when we did, it was
painful. Customer data can get lost. When you hit the write rate limitation, it is
usually fine on the next try. But of course you have to design your application to
minimize the odds of that. For example, something like a regularly updated counter
takes a lot of work to get right. Google even has a documentation entry on using
sharding to build a counter.
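The idea behind a sharded counter can be sketched in plain Java. The array below is a stand-in for the N counter entities the real pattern stores in the Datastore; no Datastore API is used here:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ShardedCounter {
    // Spread writes across N shards so no single entity (group) is written
    // more than once per second; the total is the sum over all shards.
    private final long[] shards;

    ShardedCounter(int numShards) {
        this.shards = new long[numShards];
    }

    void increment() {
        // In the real pattern each shard is a separate Datastore entity;
        // picking one at random spreads the write load.
        int shard = ThreadLocalRandom.current().nextInt(shards.length);
        shards[shard]++;
    }

    long total() {
        long sum = 0;
        for (long s : shards) sum += s;
        return sum;
    }

    public static void main(String[] args) {
        ShardedCounter counter = new ShardedCounter(20);
        for (int i = 0; i < 1000; i++) counter.increment();
        System.out.println(counter.total()); // 1000
    }
}
```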
Reading
An entity can be fetched by using its key or via a query. Reads by key are strongly
consistent, meaning you will receive the latest data even if you updated the entity
right before fetching it. However, this is not true for queries. They are eventually
consistent. So writes are not always reflected immediately. This can lead to problems
and might need to be mitigated, for example by clever data modelling (e.g. using a
mnemonic as the key) or leveraging special Datastore features (e.g. entity groups).
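The mnemonic-key trick can be illustrated with a toy example. A plain map stands in for the Datastore here; the point is that deriving the key from a natural identifier turns a would-be (eventually consistent) query into a (strongly consistent) lookup by key:

```java
import java.util.HashMap;
import java.util.Map;

public class NaturalKeyLookup {
    // Toy datastore: get-by-key is strongly consistent, queries are not.
    private final Map<String, String> byKey = new HashMap<>();

    void saveUser(String email, String name) {
        byKey.put(keyFor(email), name);
    }

    String getUser(String email) {
        // No query needed: the email determines the key.
        return byKey.get(keyFor(email));
    }

    static String keyFor(String email) {
        return "User:" + email.toLowerCase();
    }

    public static void main(String[] args) {
        NaturalKeyLookup ds = new NaturalKeyLookup();
        ds.saveUser("Jane@Example.com", "Jane");
        System.out.println(ds.getUser("jane@example.com")); // Jane
    }
}
```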
A query always specifies an entity kind and optional filters and/or sort orders. Every
property that is used in a filter or as a sort key must be indexed. Adding an index can
only be done as part of the regular write operation - not automatically in the
background as in most SQL databases. The index will also increase the time and the
cost of the write operation (more on that later).
In contrast to other databases, the absence of a multi-index will not just result in an
inefficient, slow query - it will fail immediately. The Datastore tries its very best to
enforce performant queries. Inequality filters, for example, only support a single
property. Of course, there are always ways to shoot yourself in the foot - but they are
rare.
There are several other features I cannot go into now, for example pagination,
projection queries and transactions. Go to the Datastore documentation to learn
more, it is very extensive and helpful.
Compared to other databases the read and write operations are very slow. Based on
my observations, a read by key takes 10-20ms on average. It is rare to see significant
deviations. My best guess is that Google serializes entities and only indexes are
actually kept in memory.
The pricing model seems to support that: you pay for stored data, read, write and
delete operations. That's it. Note that database memory is not in that list. The
operations themselves are cheap as well: reading 100k entities costs $0.06, 100k write
operations cost $0.18 - a write operation can be the actual entity write but also every
index write. If you don't write anything, you don't pay anything. But in a single
minute you could be writing gigabytes of data. And here's the kicker: The read and
write performance is basically the same for a database with no entities or a billion. It
scales like crazy.
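A quick back-of-the-envelope calculation using the quoted rates; the workload numbers are made up for illustration:

```java
public class DatastoreCost {
    // Per-100k-operation prices as quoted above.
    static final double READ_PER_100K = 0.06;
    static final double WRITE_PER_100K = 0.18;

    static double monthlyCost(long reads, long writes) {
        return reads / 100_000.0 * READ_PER_100K
                + writes / 100_000.0 * WRITE_PER_100K;
    }

    public static void main(String[] args) {
        // 10M reads and 2M write operations per month:
        System.out.printf("$%.2f%n", monthlyCost(10_000_000, 2_000_000)); // $9.60
    }
}
```

Remember that every index write counts as a write operation, so a heavily indexed entity multiplies the write side of this bill.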
API
The API to the Datastore feels very low-level. Therefore, for any serious Java app there
is no way around Objectify. It is a library written by Jeff Schnitzer. If Google has not
done so already, they should write him a huge cheque for making the App Engine a
better place. He wrote it for his own business but the tireless dedication over the
years, extensive documentation and support he offers in forums is astounding. With
Objectify, working with the Datastore is actually fun.
@Entity
class Car {
    @Id String vin;
    String color;
}

// Load a Car by its id, then delete it:
Car c = ofy().load().type(Car.class).id("123123").now();
ofy().delete().entity(c);

Objectify makes it really easy to declare entities as simple classes and then takes care
of all the mapping between your classes and the Datastore.
It also has a few tricks up its sleeve. For example, it comes with a first-level cache.
This means that whenever you request an entity by key, it first looks into a request-
scoped cache to see whether the entity was already fetched. This can be beneficial for
improving performance. However, it can also be confusing because when you fetch an
entity and modify it but do not save it, the next read will yield that same cached,
modified object. This can lead to Heisenbugs.
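A toy illustration of that session-cache behavior; plain Java only, with maps standing in for Objectify's cache and the Datastore:

```java
import java.util.HashMap;
import java.util.Map;

public class SessionCacheDemo {
    // Stand-in for Objectify's request-scoped session cache: loads go
    // through the cache, so an unsaved mutation is visible to later loads.
    static class Car {
        String color;
        Car(String color) { this.color = color; }
    }

    final Map<String, Car> datastore = new HashMap<>();
    final Map<String, Car> sessionCache = new HashMap<>();

    Car load(String id) {
        // Copy out of the datastore on first load, then serve the copy.
        return sessionCache.computeIfAbsent(id,
                key -> new Car(datastore.get(key).color));
    }

    public static void main(String[] args) {
        SessionCacheDemo ofy = new SessionCacheDemo();
        ofy.datastore.put("123", new Car("red"));

        Car c = ofy.load("123");
        c.color = "blue"; // mutate, but never save
        System.out.println(ofy.load("123").color); // blue - same cached object
        // A fresh request (new session cache) would still read "red".
    }
}
```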
For running tests against the Datastore, the SDK is also able to start a local Datastore
for you. However, this must be a different implementation, since it behaves differently
than the one used for running the app. This becomes apparent when you realize that a
missing multi-index will throw an error when executing the app locally but not when
testing the same query. Over the years I accidentally released several queries with
missing indexes into production (usually still behind a Beta toggle) - although I had a
test for it. After contacting support they admitted the oversight and promised to fix it
- more than one year later they still have not.
Backups
Making backups of the Datastore is an atrocious process. There is a manual and an
automatic way. Of course, when you have a production application, you'd like to have
regular backups. The official way is a feature introduced in 2012 which is still in
Alpha!
By adding an entry to your cron.xml you can initiate the backup process. The entry
will include the names of the entities to back up as well as the Google Cloud Storage
bucket to save them to. When the time has come, it will launch a few Python
instances with the backup code, iterate through the Datastore and save your entities
in some kind of proprietary backup format to your bucket. Interestingly, a bucket has
a limit on how many files it can contain, so you'd better use a new bucket now and then.
Memcache
The other crucial way to store data on App Engine is Memcache. By default, you get a
shared Memcache. This means it works on a best-effort basis and there is no
guarantee how much capacity it will have. There is also a dedicated Memcache for
$0.06 per GB per hour.
Objectify is able to use this as a second-level cache. Just annotate an entity with
@Cache and it will ask Memcache before the Datastore and save every entity there
first. This can have a tremendous effect on performance. Usually Memcache will
respond within about 5 ms, which is much faster than the Datastore. I am not aware
of any stale cache issues we might have had. So this works very well in production.
The benefits of it become very noticeable when Memcache is down. This
happened to us about once a year for an hour or two. Our site was barely usable, it was
that slow.
BigQuery
BigQuery is a data warehouse as a service, managed by Google. You import data -
which can be petabytes - and can run analyses via a custom query language.
It integrates somewhat well with the Datastore since it allows you to import Datastore
backup files from Google Cloud Storage. I have used this a few times, unfortunately
not always successfully. For some of our entities I received a cryptic error. I was never
able to figure out what went wrong. But some entities did work. And after fiddling
with the query language documentation for a bit, I was able to generate my first
insights. Everything considered, it was a nice way to run simple analyses. I definitely
would not have been able to do this without writing custom code. But I was not really
leveraging the service's full potential. All the queries I made could have been done in
any SQL database directly; our data set was quite small. Only because of the way the
Datastore worked did I have to resort to the BigQuery service in the first place.
Monitoring
The Google Cloud Console brings a lot of features to diagnose your app's behavior in
production. Just look at the Google Cloud Console navigation:
This is the result of Google's acquisition of Stackdriver in 2014. It still feels like a
separate, standalone service - but its integration into Google Cloud Console is
improving.
Let's look at the capabilities one by one.
Logging
It is crucial to access an application's logs quickly and with ease. This is something
that was truly painful on App Engine in the beginning. It used to be very cumbersome
because it was incapable of searching across all versions of an application. This meant
when you were looking for something, you had to know which version was online at
the time - or try several, one by one. It was almost unusable. Plus it was extremely
slow.
Since then, they have added useful filters to show only specific modules, versions, log
levels, user agents or status codes. It is very powerful. Still not fast, but it has gotten
much better now compared to the early days. Here is how it looks:
One very unique idea you can see here is that logs are always grouped by request. In
all other tools I have encountered, Kibana for instance, you will only get the log lines
that match your search. By always showing all other log lines around the one that
matches your search, it gives you more context. I find this extremely helpful when
investigating issues in the logs since it immediately helps you to better understand
what happened. I truly miss that feature in every other log viewer I use.
Another interesting trait of the App Engine is that each HTTP request is
automatically assigned a request ID. It is added to the incoming HTTP request and
uniquely identifies it. This can come in handy to correlate a request with its logs. For
example, we were sending emails when an uncaught exception occurred and included
the request ID - this made it trivial to look up the logs. The same can be done for
frontend error tracking.
Metrics
The Cloud Console gives access to a few basic application metrics. This includes the
request volume and latency, traffic volume, memory usage, number of instances and
error count. It is useful as a starting point when investigating an issue and when you
want to get a quick first impression of the general state of the app.
When you select a specific request, it opens up a timeline. There it displays the
remote procedure calls (RPCs) that you cannot see in the logs. Plus, a summary for
each RPC by type on the side. By clicking on an RPC, more details, e.g. the response
size, are shown.
This can be extremely helpful to find the cause of a slow request. In the following
example you can see that the request makes a few fast Memcache calls and a very slow
Datastore write operation.
The only problem is that the RPCs do not include enough information to figure out
what happened exactly. For instance, the detail view of the Datastore write operation
looks like this:
It does not even include the name of the updated entity. This is a huge annoyance and
can render this whole screen almost useless. There is just one thing which can help:
clicking the 'Show logs' button in the upper right corner. It will include the log
statements of the request inline, interleaved with the RPCs. This way you might be able
to infer more details from the context.
Resources
It is also important to point out that pricing is completely usage-based. This means
the cost of your app scales virtually byte by byte, hour by hour and operation by
operation. It also means that it is very affordable to get started. There is no fixed
cost. If hardly anyone uses your app - since there is a free quota - you do not pay
anything.
The biggest item on the bill will most certainly be for the instances, contributing
about 80% in my last project. The next big chunk is likely the Datastore read/write
cost, 15% of the total cost for us.
There is a nice interface in the Google Cloud Console to keep track of all quotas:
To be more specific, when I say 'all quotas' I mean all quotas Google tells you about.
We actually had an issue where we hit an invisible quota. I think at the time the API
may have been in Beta, though. Anyway, one part of our application stopped working
and we had no idea why. Luckily, we were subscribed to Google Cloud Support. They
informed us about said quota and we had to rewrite a part of our application to make
it work again.
We also had one minor outage due to the confusing pricing setup. At one point one of
our apps suddenly stopped working and just replied with the default error page. It
took us ten minutes to figure out that we hit the budget limit we had set up. After we
raised it, everything just started working again.
Support
There is a lot to be said about Google Cloud Support. First of all, without it we would
have been in serious trouble now and then. So having it is a must for any mission-
critical application - in my eyes. For example, about once a year our application would
just stop serving requests. There was nothing we did to cause that. After contacting
Google support we would learn that they moved our application to a 'different cluster'.
And it just worked again. It is a very scary situation. You cannot do anything but 'pray
to the Google gods'.
Second of all, it is a hit or miss based on the support person. The quality varied a lot.
Sometimes we would need to exchange a dozen messages until they finally
understood us. Like any support it can be infuriating. But in the end, they would
usually resolve our issue or at least give us enough information to help us resolve it
ourselves.
A New Age
Google is working on a new type of App Engine, the flexible environment. It is
currently in Beta. Its goal is to offer the best of two worlds: the ease and comfort of
running on App Engine combined with the flexibility and power of Google Compute
Engine. It allows you to use any programming platform (like Java 9!) on any of the
powerful Google Compute Engine machines (like 416GB RAM!) while letting Google
take care of maintaining the servers and ensuring the app is running fine.
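For reference, deploying to the flexible environment only needs a slightly different app.yaml than the standard one. A minimal sketch (the resource values are illustrative; check the current documentation for the exact options your runtime supports):

```yaml
runtime: java
env: flex

# Pick a Compute Engine machine shape instead of a standard instance class.
resources:
  cpu: 2
  memory_gb: 8
```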
They have been working on this for some years already. Naturally, we were keen on
trying it out. So far, we weren't that thrilled. But let's see where Google is taking this.
Some restrictions and annoyances are the result of neglect by Google, though. It feels
like they have only been investing the bare minimum. Actually, I have had this feeling for
the last two years. It is frustrating to work with an ancient tech stack, without any
hope of improvement in sight. It is infuriating if there are known issues but they are
not xed. It is depressing to receive so little information on where the platform is
heading. You feel trapped.
All in all, I liked how App Engine allowed the development team to focus on actually
building an application, making users happy and earning money. Google took a lot of
hassle out of the operations work. But the 'old' App Engine is on its way out. I do not
think it is a good idea to start new projects on it anymore. If App Engine Flexible
Environment on the other hand can actually fix its predecessor's major issues, it
might become a very intriguing platform to develop apps on.
Stephan Behnke Share this post
Software developer by trade. Most of the time on the everlasting quest for
simplicity, elegance and beauty in code. Or just getting stuff done in-
between.
One thing that must be corrected in the blog post is the statement "But the 'old' App
Engine is on its way out..."
Not at all, App Engine might be old (it is the first real PAAS), but it is a proven platform,
with lots of large customers, and Google is investing *massively* in it, and the new Java8
Standard runtime is the first one that is running on a brand new security sandbox... All
existing applications will benefit from the upgrade.
Same free tier, same GAE APIs support, same ease of use, update to Jetty 9.x and
Servlets 3.1, new IDE/Tools plugins, no constraints, plus all the new Cloud APIs as well...
Thanks again for this excellent and timely write up. If you like AppEngine Standard as of
today, you will be delighted with new one, starting with the new Java8 runtime offering
without restrictions, and more later...
Ludo, Google App Engine engineering.
thanks for taking the time to comment :) Cool that you like the article. I was
delighted and hugely disappointed with the latest changes. I welcome them, but I
already quit my job so I will not benefit from them any time soon. I was waiting
for exactly this announcement for aaaages. And now, shortly after I leave, it all
finally happens.
My comment about App Engine being on its way out reflects exactly that. It felt
like there was no investment whatsoever. All I could read about was Flexible
Runtime. In our company we were pretty sure that the Standard Runtime was
basically in maintenance mode. This announcement certainly changes things. I
would have loved to have had this a year ago.
Anyway, I wish you all the best and hope you can fix the things that are annoying
on the App Engine with the upcoming releases.
https://issuetracker.google...
For GAE adopters, I would recommend getting away from java. Everything
else is perfect for me.
This forced me to re-evaluate some of my pre-existing biases. I like PaaS and the ideas of
Java a lot but to get PaaS right you need a good commitment for service from the
underlying company and Google is basically a glorified advertising company. Search,
Mail, Android & Chrome come in second and everything else is way down in the food
chain... E.g. I've read of similar experiences from guys using firebase etc. and as a veteran
of Google code/other failed google "experiments" I understood it was time to admit that I
was wrong.
Unfortunately I could only admit that I was wrong after we lost a whole lot of money.
Today we manage individual VPS servers with cloudflare for scale CDN. Surprisingly our
performance and scale improved significantly. Ease of use is better since everything is
divided into smaller simpler projects. We are more flexible and could adopt newer tools
(e.g. Spring Boot which is fantastic) immediately for newer projects. So we don't get the
fancy charts but frankly they didn't help when the underlying data is completely masked.
We since moved the last bits off app engine to spring boot I will post the
link in a separate comment to avoid the moderation queue. This was trivial
to work with, gave us fixed price, performed MUCH MUCH MUCH better,
was more powerful and cost less ultimately!
We paid for the highest level of support available. That didn't help.
The actual response was: set spend limits. In other words the only
solution a Google engineer was able to give me was bringing down
our service daily.
When I get a service I need to have a way to check that the service
was delivered and verify the work. With IaaS that's super simple, I
have a server and it's running... With any other service I can see the
delivery and understand why I was charged.
For some types of PaaS this is problematic, for others not so much.
ᗪ ᒍ ᗩ K ᗪ E K I E ᒪ • a year ago
Maybe you know how autoscaling works with the Firebase database?
1. Instances can be started and stopped at any time, so any local state must only
be a cache. This is a general "best practice" for scalable applications anyway, since
it forces you to move state to an explicit storage thing. That storage thing might
now become your bottleneck, but at least this makes it visible and explicit.
2. The request timeout. This one is much more annoying and debatable. However,
it forces you to explicitly categorize operations as "fast" and safe to wait on, or
"slow" and needing some sort of polling or other way to tell if the operation is
done. This is useful for designing your software appropriately, rather than "just
wait for X" where X might take 3 minutes.
I also like to argue both sides. In this case, these restrictions do make it a bit more
difficult for "small" applications where these things don't matter as much. I think
this is part of the reason App Engine has not been as successful as it could be:
some of these "weird" restrictions don't make sense for the "toy" applications
people write when they are first getting started.
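The fast/slow split described in point 2 above can be sketched as a polling pattern: the request handler only enqueues the work and returns a job id, a worker does the slow part later, and clients poll for completion instead of blocking past the deadline. All names here are hypothetical, not an App Engine API:

```python
import uuid

# In-memory stand-in for the explicit storage the comment argues for.
# On App Engine this would be Datastore or Memcache, not a module dict.
JOBS = {}

def start_slow_operation(payload):
    """Fast handler: record the job and return immediately with an id."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "pending", "payload": payload, "result": None}
    return job_id

def run_job(job_id):
    """Executed later by a task-queue worker, outside the request deadline."""
    job = JOBS[job_id]
    job["result"] = job["payload"].upper()  # placeholder for the slow work
    job["status"] = "done"

def poll(job_id):
    """Fast handler: clients ask 'is it done yet?' instead of waiting."""
    return JOBS[job_id]["status"]

job = start_slow_operation("hello")
assert poll(job) == "pending"
run_job(job)
assert poll(job) == "done"
```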
I also work on an App Engine app (Python in our case) and we have a love/hate
relationship with it, as it sounds like you do. The biggest advantage, in my opinion, is
that once you get something set up and working, it just keeps working, no matter what
traffic or whatever you throw at it. The biggest disadvantage is that there are a bunch of
things that are "non-standard", which can make it hard to run "existing code" on it in
some cases. I don't even want to get in to what it costs, which can be horrifically
expensive if you have anything remotely CPU or memory intensive. Overall: If you have
something that fits its model, I think it is great. However, we are starting to move parts
of our workload to Container Engine.
I think it makes sense for a couple of use cases, one being the single developer
creating a new application. Everything is taken care of for you (well almost). It
used to be borderline impossible to move off the App Engine, but now, with
offerings like Google Compute Engine, it will become easier to run a mix of
GAE/GCE or move off entirely.
tl;dr I would recommend App Engine if your use case and circumstances make it
a good fit ;)
Good question. To be honest, we would have paid almost anything to get Java 8
and better hardware :) We did not look at the cost really, so I don't have an
answer for that.
I'd like to ask whether you could share your experience with
https://github.com/atteo/cl... library for Spring?
I've got lots of problems due to Spring classpath scanning and do not really want to
downgrade my configuration to raw XML.
I'd really appreciate if you could share some notes about Spring and classindex library.
Regards,
Yuri.