
App Engine Documentation Python Standard Environment

Hosting a static website on Google App Engine


Contents
Before you begin
Creating a website to host on Google App Engine
Basic structure for the project
Creating the app.yaml file

You can use Google App Engine to host a static website. Static web pages can contain client-side technologies such as HTML,
CSS, and JavaScript. Hosting your static site on App Engine can cost less than using a traditional hosting provider, as App
Engine provides a free tier.

Sites hosted on App Engine are hosted on the appspot.com subdomain, such as [my-project-id].appspot.com . After you
deploy your site, you can map your own domain name to your App Engine-hosted website.

Before you begin

Before you can host your website on Google App Engine:

1. Create a new GCP Console project or retrieve the project ID of an existing project to use:

GO TO THE PROJECTS PAGE

Tip: You can retrieve a list of your existing project IDs with the gcloud command line tool.
2. Install and then initialize the Google Cloud SDK:

DOWNLOAD THE SDK

Creating a website to host on Google App Engine

Basic structure for the project

This guide uses the following structure for the project:

app.yaml : Configure the settings of your App Engine application.

www/ : Directory to store all of your static files, such as HTML, CSS, images, and JavaScript.
    css/ : Directory to store stylesheets.
        style.css : Basic stylesheet that formats the look and feel of your site.
    images/ : Optional directory to store images.
    index.html : An HTML file that displays content for your website.
    js/ : Optional directory to store JavaScript files.
    Other asset directories.

Creating the app.yaml file

The app.yaml file is a configuration file that tells App Engine how to map URLs to your static files. In the following steps, you will
add handlers that load www/index.html when someone visits your website and that serve all other static files from
the www directory.
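The mapping described above can be sketched in plain Python. This is an illustration of the matching behavior, not App Engine's actual implementation: each handler's URL pattern is a regex, and capture groups are substituted into the static_files template.

```python
import re

def resolve_static(path):
    """Resolve a request path to a static file path, first match wins."""
    handlers = [
        (r"^/$", "www/index.html"),   # the root handler
        (r"^/(.*)$", r"www/\1"),      # everything else maps into www/
    ]
    for pattern, target in handlers:
        match = re.match(pattern, path)
        if match:
            # Substitute captured groups (\1, \2, ...) into the target template.
            return match.expand(target)
    return None

print(resolve_static("/"))               # www/index.html
print(resolve_static("/css/style.css"))  # www/css/style.css
```

A request for / serves the index page, while any other path is looked up relative to the www directory.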

Create the app.yaml file in your application's root directory:

1. Create a directory that has the same name as your project ID. You can find your project ID in the Console.
2. In the directory that you just created, create a file named app.yaml .
3. Edit the app.yaml file and add the following code to the file:

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /
static_files: www/index.html
upload: www/index.html
- url: /(.*)
static_files: www/\1
upload: www/(.*)

More reference information about the app.yaml file can be found in the app.yaml reference documentation.
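The reference also documents optional settings for static handlers, such as expiration and http_headers. A sketch with illustrative values only:

```yaml
handlers:
- url: /(.*\.(css|js))$
  static_files: www/\1
  upload: www/.*\.(css|js)$
  expiration: "1h"            # how long clients and proxies may cache these files
  http_headers:
    X-Frame-Options: DENY     # example of a custom response header
```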

Creating the index.html file

Create an HTML file that will be served when someone navigates to the root page of your website. Store this file in
your www directory.

<html>
<head>
<title>Hello, world!</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello, world!</h1>
<p>
This is a simple static HTML file that will be served from Google App
Engine.
</p>
</body>
</html>

Deploying your application to App Engine

When you deploy your application files, your website will be uploaded to App Engine. To deploy your app, run the following
command from within the root directory of your application where the app.yaml file is located:

gcloud app deploy

Optional flags:

Include the --project flag to specify a GCP Console project ID other than the one you initialized as the default in
the gcloud tool. Example: --project [YOUR_PROJECT_ID]
Include the -v flag to specify a version ID; otherwise one is generated for you. Example: -v [YOUR_VERSION_ID]
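An example invocation combining both flags (the project and version IDs here are placeholders):

```shell
gcloud app deploy --project my-sample-project -v staging-1
```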

To learn more about deploying your app from the command line, see Deploying a Python App.

Viewing your application

To launch your browser and view the app at https://[YOUR_PROJECT_ID].appspot.com , run the following command:

gcloud app browse

What’s next

Serve your App Engine-hosted website from a custom domain.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated October 22, 2018.


App Engine Documentation Python Standard Environment

Mapping Custom Domains


Contents
Before you begin
Adding a custom domain for your application
Using subdomains
Wildcard mappings
What's next

Python 2.7/3.7 | Java 8 | PHP 5.5/7.2 | Go 1.9/1.11 | Node.js 8

App Engine allows applications to be served via a custom domain, such as example.com , instead of the
default appspot.com address. You can create a domain mapping for your App Engine app so that it uses a custom domain.

By default, when you map your custom domain to your app, App Engine issues a managed certificate for SSL for HTTPS
connections. For more information on using SSL with your custom domain, including how to use your own SSL certificates,
see Securing your custom domains with SSL.

Use this page to learn how to create a domain mapping for your app that is running on App Engine.

Using custom domains in the following regions might add noticeable latency to responses: northamerica-northeast1 (Montréal), southamerica-east1 (São Paulo), asia-south1 (Mumbai), and australia-southeast1 (Sydney).

Before you begin

1. Purchase a new domain, unless you already have one that you want to use. You can use any domain name registrar,
including Google Domains.
2. If you choose to use the gcloud tool commands:
a. Install and initialize the Cloud SDK:

DOWNLOAD AND INSTALL

3. If you choose to use the Admin API, see the prerequisite information in Accessing the Admin API.

Note: Some of the gcloud commands and Admin API methods that are used in this topic are beta-level features.

Adding a custom domain for your application

To add a custom domain for your App Engine app:

1. Verify that you are the owner of your domain through Webmaster Central:

CONSOLE GCLOUD

a. In the Google Cloud Platform Console, go to App Engine > Settings > Custom Domains:

GO TO THE CUSTOM DOMAINS PAGE

b. Click Add a custom domain to display the Add a new custom domain form:
c. In the Select the domain you want to use section, enter the name of the domain that you want to use, for example example.com ,
and then click Continue to open a new tab to the Webmaster Central page.
i. Use Webmaster Central to verify ownership of your domain.
Important: Verifying domain ownership by using a CNAME record is the preferred option for App Engine. If you choose to use
a TXT record, you must avoid configuring your domain's DNS with a CNAME record because the CNAME record overrides
the TXT record and causes your domain to appear unverified.
If the verification methods for your domain do not offer the CNAME record option, you can select Other as your domain
provider and then choose Add a CNAME record:
i. Click Alternate methods and then Domain name provider.
ii. In the menu, select Other.
iii. In the Having trouble section, click Add a CNAME record and then follow the instructions to verify ownership of
your domain.
Remember: It might take a minute before your CNAME is set at your domain registrar.

ii. Return to the Add new custom domain form in the GCP Console.

2. Ensure that your domain has been verified, otherwise you will not be able to proceed with the following steps. Note
that only verified domains will be displayed.

CONSOLE GCLOUD API

If your domain is not already listed, click Refresh domains.

Important: The domain verification is automatically re-confirmed about every 30 days. So if you remove the verification string
from your DNS settings, you will lose the ability to change the configuration within the GCP Console. However, if this happens,
the serving setup for the domain does not change and the app continues to serve over the custom domain.

3. If you need to delegate the ownership of your domain to other users or service accounts, you can add permission through
the Webmaster Central page:
a. Open the following address in your web browser:
https://www.google.com/webmasters/verification/home
b. Under Properties, click the domain for which you want to add a user or service account.
c. Scroll down to the Verified owners list, click Add an owner, and then enter a Google Account email address or
service account ID.
To view a list of your service accounts, open the Service Accounts page in the GCP Console:

GO TO SERVICE ACCOUNTS PAGE

4. After you verify ownership of your domain, you can map that domain to your App Engine app:
CONSOLE GCLOUD API

Continue to the next step of the Add new custom domain form to select the domain that you want to map to your App Engine app:
a. Specify the domain and subdomains that you want to map. The naked domain and www subdomain are pre-populated in the form.
A naked domain, such as example.com , maps to http://example.com .
A subdomain, such as www , maps to http://www.example.com .
b. Click Save mappings to create the desired mapping.
c. In the final step of the Add new custom domain form, note the resource records that are listed, including their type and canonical
name ( CNAME ), because you need to add these details to the DNS configuration of your domain.

In the example below, CNAME is one of the types listed, and ghs.googlehosted.com is its canonical name.

5. Add the resource records that you receive to the DNS configuration of your domain registrar:
a. Log in to your account at your domain registrar and then open the DNS configuration page.
b. Locate the host records section of your domain's configuration page and then add each of the resource records that
you received when you mapped your domain to your App Engine app.
Typically, you list the host name along with the canonical name as the address. For example, if you registered a
Google Domain, then one of the records that you add to your DNS configuration is the www host name along with
the ghs.googlehosted.com address. To specify a naked domain, you would instead use @ with
the ghs.googlehosted.com address.
If you are migrating from another provider, make sure all DNS records point to your App Engine app.
For more information about mapping your domain, see the following Using subdomains and Wildcard
mappings sections.
c. Save your changes in the DNS configuration page of your domain's account. It can take a while for these changes to
take effect.
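As an illustration, in zone-file notation the records described above might look like the following. Registrar interfaces vary, and for the naked domain many registrars require A/AAAA or ALIAS/ANAME records instead of a CNAME:

```
www   CNAME   ghs.googlehosted.com.
@     CNAME   ghs.googlehosted.com.   ; or A/AAAA records, depending on your registrar
```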
6. Test for success by browsing to your app via its new domain URL, for example www.example.com .

Using subdomains

If you set up a wildcard subdomain mapping for your custom domain, then your application serves requests for any matching
subdomain.
If the user browses a domain that matches an application version name, the application serves that version.
If the user browses a domain that matches a service name, the application serves that service.
There is a limit of 20 managed SSL certificates per week for each base domain. If you encounter the limit, App Engine
keeps trying to issue managed certificates until all requests have been fulfilled.

Wildcard mappings

You can use wildcards to map subdomains at any level, starting at third-level subdomains. For example, if your domain
is example.com and you enter text in the web address field:

Entering *.example.com maps all subdomains of example.com to your app.


Entering *.private.example.com maps all subdomains of private.example.com to your app.
Entering *.nichol.sharks.nhl.example.com maps all subdomains of nichol.sharks.nhl.example.com to your app.
Entering *.excogitate.system.example.com maps all subdomains of excogitate.system.example.com to your app.
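The matching behavior above can be sketched in Python. This is an assumed model of the rules, not App Engine code: a wildcard entry matches one or more subdomain labels in place of the asterisk.

```python
import re

def wildcard_matches(mapping, host):
    """Check whether a wildcard mapping like '*.example.com' matches a host."""
    # Escape the fixed suffix, then require at least one label before it.
    suffix = re.escape(mapping[len("*."):])
    return re.fullmatch(r"(?:[\w-]+\.)+" + suffix, host) is not None

print(wildcard_matches("*.example.com", "app.example.com"))               # True
print(wildcard_matches("*.example.com", "a.b.example.com"))               # True
print(wildcard_matches("*.private.example.com", "x.private.example.com")) # True
print(wildcard_matches("*.example.com", "example.com"))                   # False (naked domain)
```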

You can use wildcard mappings with services in App Engine by using the dispatch.yaml file to define request routing to specific
services.
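A minimal dispatch.yaml sketch (the service names here are hypothetical):

```yaml
dispatch:
- url: "*/api/*"
  service: api-backend
- url: "static.example.com/*"
  service: default
```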

Note: Wildcard mappings are not supported for managed SSL certificates.

If you use G Suite with other subdomains on your domain, such as sites and mail , those mappings have higher priority and
are matched first, before any wildcard mapping takes place. In addition, if you have other App Engine apps mapped to other
subdomains, those mappings also have higher priority than any wildcard mapping.

Some DNS providers might not work with wildcard subdomain mapping. In particular, a DNS provider must permit wildcards
in CNAME host entries.

Wildcard routing rules apply to URLs that contain components for services, versions, and instances, following the service routing
rules for App Engine.

What's next

Secure your custom domains with SSL.




How To Run a (Mostly) Static Website in Google App Engine
Kelly Heffner Wilkerson
August 24, 2018 at 10:30 AM
Categories: Development, Website/SEO

At the beginning of 2015, I moved all of the deciphertools.com website to the Google App Engine. Most of the content on our website is static
and our traffic is moderate but bursty, so running our own virtual server on Rackspace to host the website seemed wasteful (of administration
time and money). Our virtual server was also performing poorly: during those bursty times, we would have poor latency, and our website was
very open to (extremely lame) DoS attacks.
After the move, our website is cheaper to run and performs beautifully. I can rest easy at night knowing that if we have spikes in traffic (which
should be cause for celebration), our infrastructure will scale to handle the load. To this day our static site has been free to run. We do pay for
some other dynamic site projects as well as Google Cloud Storage to host our larger software downloads, at rates comparable to Rackspace
Cloud Files. The entirety is still MUCH cheaper than spinning up a Rackspace cloud server ourselves and running lighttpd or apache.
Finding simple directions to host static pages on the Google App Engine was difficult, so this is my contribution of instructions. Please feel free
to post comments with questions — questions usually make me learn something new!

Step 0: Download the App Engine SDK


Download the SDK (and administration program) from the App Engine download page.
Edit December 18, 2017: Google replaced the original App Engine SDK with a more complicated toolset, which is some pain to get up to
speed with for simple sites. However, there was enough complaint about it that they made the original SDK available again. Go to this Google App
Engine download page, scroll down, and click "Download and install the original App Engine SDK for PHP" to download the older SDK. This
SDK is much easier to use for new App Engine users (including a great big Deploy button). My link is to the PHP download page, but if you're
hosting a truly static site, then it doesn't matter which programming language you pick.
More on picking a development language for app engine: most sites end up needing some redirections and email scripts, so pick something
you're comfortable scripting in. If you're used to web scripting in PHP and the site is really only going to do basic stuff like redirection and
email, then PHP is great. If you're going to want to use the App Engine datastore and other APIs eventually, then I recommend Python. (You
can also change your mind later.)

Step 1: Setup your project folder structure


Every app engine project has an app.yaml file, and then all of the supporting scripts and files. For (mostly) static sites, I like to have the
following folder structure:

- my_project
|- app_engine
|- app.yaml
|- public
|- (all of my static site files, examples here below...)
|- index.html
|- favicon.ico
|- images
|- image1.png
|- image2.png
|- js
|- bootstrap.min.js
|- css
|- ... you get the idea

I make that app_engine folder to house the Google App Engine project in case I have external resources to generate pieces of the website.
For example, I use blogofile to generate our website and blog, so in addition to the app_engine folder, I also have a blogofile project that
houses the blogofile templates. (I generate the site files from my blogofile templates, then copy the results into app_engine/public when I am
happy with them.)

Step 1.5: Setup your static file folder


The public folder is going to house the static files that your site serves. Structure it just as if it were your normal site folder on a web-server
setup. In my cool ascii-art diagram above, you can see a typical index page, images subfolder, and Javascript/CSS folders.

Step 2: Make a simple app.yaml for all static content


You might be appalled by these catch-all cases; I'm ok with that!

application: your-application-name-here
version: 1
runtime: php
api_version: 1
threadsafe: yes

handlers:
# Handle the main page by serving the index page.
# Note the $ to specify the end of the path, since app.yaml does prefix matching.
- url: /$
static_files: public/index.html
upload: public/index.html

# Handle folder urls by serving the index.html page inside.


- url: /(.*)/$
static_files: public/\1/index.html
upload: public/.*/index.html

# Handle nearly every other file by just serving it.


- url: /(.+)
static_files: public/\1
upload: public/(.*)

# Recommended file skipping declaration from the GAE tutorials


skip_files:
- ^(.*/)?app\.yaml
- ^(.*/)?app\.yml
- ^(.*/)?#.*#
- ^(.*/)?.*~
- ^(.*/)?.*\.py[co]
- ^(.*/)?.*/RCS/.*
- ^(.*/)?\..*
- ^(.*/)?tests$
- ^(.*/)?test$
- ^test/(.*/)?
- ^COPYING.LESSER
- ^README\..*
- \.gitignore
- ^\.git/.*
- \.*\.lint$
- ^fabfile\.py
- ^testrunner\.py
- ^grunt\.js
- ^node_modules/(.*/)?
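As a sanity check, each skip_files entry is a regular expression matched against a file's relative path. A standalone Python sketch exercising a few of the patterns above:

```python
import re

# A subset of the skip_files patterns from the app.yaml above.
skip_patterns = [
    r"^(.*/)?app\.yaml",
    r"^(.*/)?\..*",          # dotfiles, at any depth
    r"^(.*/)?.*\.py[co]",    # compiled Python files
    r"^node_modules/(.*/)?",
]

def is_skipped(path):
    """Return True if any skip pattern matches the start of the path."""
    return any(re.match(p, path) for p in skip_patterns)

print(is_skipped(".gitignore"))       # True (dotfile)
print(is_skipped("public/app.yaml"))  # True
print(is_skipped("scripts/cache.pyc"))# True
print(is_skipped("public/index.html"))# False, so it gets uploaded
```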

Step 3: Handling your 404 baggage with redirects


Our website has evolved over many years, so we have a hefty pile of 301 redirects that need serving. To serve the 301 redirect header, you
need to use some scripting; for this example I'm using PHP. (If anyone knows how to serve the 301 statically, I would LOVE to know how.) You
may be grossed out by the following script, but it works, and it reminds me of lighttpd's mod_redirect, so I'm happy enough.
I set up a script named redirector.php . I was lazy and it's just sitting in my app_engine folder; it would probably be better in
a scripts folder or something clean like that.

<?php
// REMINDER: ALL OF THESE NEED TO BE IN app.yaml too
$direct_redirects = array(
    "/blog" => "https://yoururl.com/blog/",
    "/products.html" => "https://yoururl.com/index.html",
    // ... many many MANY... MANY other mappings...
);

$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);

// Guard the lookup to avoid an undefined-index notice for unmapped paths.
$redirect_url = isset($direct_redirects[$path]) ? $direct_redirects[$path] : null;
if (!is_null($redirect_url)) {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: $redirect_url");
}
?>

(Yep, you have to map every url you want to redirect. Get fancier if you need, but I just use this to map crawl errors and moved pages, so it
works for me.)
As the script so nicely reminds me, the urls that need redirection need to be in your app.yaml file. Add your paths to the handlers section
ABOVE the other rules, since those rules match many things.
# Note the $ to specify the end of the path, since app.yaml does prefix matching.
- url: /blog$|/products.html$
script: redirector.php
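If you picked Python rather than PHP, the same lookup logic can be sketched as follows. The URLs are the same placeholder examples from the PHP script, and a real handler would emit the status and Location headers through its framework rather than returning a tuple:

```python
# Placeholder mappings, mirroring the PHP $direct_redirects array.
DIRECT_REDIRECTS = {
    "/blog": "https://yoururl.com/blog/",
    "/products.html": "https://yoururl.com/index.html",
}

def redirect_for(path):
    """Return (status, location) for a mapped path, or None when no redirect applies."""
    target = DIRECT_REDIRECTS.get(path)
    return ("301 Moved Permanently", target) if target else None

print(redirect_for("/blog"))        # ('301 Moved Permanently', 'https://yoururl.com/blog/')
print(redirect_for("/nosuchpage"))  # None
```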

Step 4: Installing SSL and Setting Up Your Domain with Google Apps
If you want to serve your site using HTTPS, then you'll need to install SSL certificates in GAE.
Update August 24, 2018: If you don't need special SSL certificates, like EV (Extended Validation, for the green bar in the web browser), then
using Google's free managed SSL for Google App Engine may be just what you need.
To enable managed SSL in App Engine:
1. Go to your Google App Engine dashboard.
2. Click the menu icon in the upper left corner, and under Compute, click App Engine > Settings.
3. Select the Custom domains section.
4. Check the domains you want to secure with managed SSL and then click the Enable managed security button.
5. Take this time you'd usually spend banging on SSL setup to get a coffee or go for a walk.
Or, if you prefer to set up your own SSL certificates, keep reading.
If you need to install SSL on your App Engine app, you will need to setup your domain with Google Apps. If you want to support us, you can
use our Google Apps referral link to sign up for Google Apps. Update 2016: You no longer need a Google Apps account to install SSL
certificates on your App Engine site. Refer to the “Adding SSL to your custom domain” section of the instructions from Google in the next
paragraph.
I don't know if there is something wrong with me, but I never remember how to do HTTPS/SSL setup. There are copious outdated
documentation pages lurking around, along with poor instructions from third parties. I HIGHLY recommend these instructions from
Google augmented with these instructions about the actual SSL installation from the Neutron Drive Blog. Update December 18, 2017: My
favorite SSL blog post is no longer maintained, so here is my version of those same instructions.

A little bit about CloudFlare


If you need CDN caching, threat mitigation, SSL, site redirections, and a pile of other awesome services, I highly recommend CloudFlare. We
used to use the Pro Plan for the Decipher Tools website, and I use the free plan for every static site I setup (at a bare minimum to use their
lovely DNS interface that doesn't make me want to shove a spoon in my eye like most others.) Now that we have a static site, and our own
SSL setup from step 4, we don't need the paid CloudFlare features. However I still really dig their DNS interface, and their caching is excellent
if you have need.

13 Comments


MOHIT GUPTA July 7, 2016


Hi Kelly HW,
I need to ask something: I have made my project using build.xml and all my urls are on
http. Now I want to move them to HTTPS. How can I do that? My website is based
on Google App Engine.
Dileep P G December 12, 2015
Hi, Neat tutorial! Helped quick start my website on GAE.
As for redirects, I do not have any specific page redirects. But I did need the one domain
redirect from naked to www. And I opted for the Domain Forwarding feature available on
GoDaddy (My Domain Registrar) instead of using the 301 re-director script. Works good
so far. Let me know if any pitfalls going that route.
Thanks!
Kelly HW December 12, 2015
I think it's great if you can get the forwarding "for free" using another technique
besides paying for app engine processing time :)
One thing we should check though: if you have a page in the url (not just the
domain itself), does the domain forward keep the page in the URL intact? That
can be important for SEO/page rank, if someone posts a link to a page using
your naked domain, you want it to redirect to the same page with the www
subdomain.
Ferdinand June 6, 2015
Do you really need a CDN if it is hosted on the app engine? Is it not supposed to scale "auto-magically"? Do you find any speed improvement by using the CDN?
Kelly HW June 6, 2015
That is a great question Ferdinand. We used CloudFlare prior to the switch, and
I've left it for a number of reasons that may be moot now (SSL and cache
headers are the first two that come to mind, and those should be much
better/easier to address directly in GAE now.)
When I get a chance (hopefully next week? since I'm at WWDC right now) I'm
going to do a little bit of poking, measuring, and Google Page Speed Insights
experiments moving a few things I do in our CDN configuration directly into GAE
now.
GAE does scale automagically :) I'm curious to see if there is a difference in
latency worldwide for static resources with and without the CDN.
Ferdinand June 6, 2015
Thanks. I have had varying degrees of issues with CDN and WordPress
while using CloudFlare. Cache headers was one of them. Also, the
caching while using the admin interface was pretty bad at that time
(couple of years back). I moved to the now defunct Google PSS which
was much better, but still had intermittent issues with caching.
Finally, when PSS was downgraded I moved to Compute Engine to host
my blog. After having to scrape clean my blog from Pharma hacks a
couple of times, I now have a convoluted process where the WordPress
now runs locally behind a firewall, and I just publish the static rendition of
the blog to GAE. It is much better with the speed (as everything is
static), security (because there is no backend database or php
processing) and cost (as there is no front end instances to run, just
outbound bandwidth to pay for).
It is definitely faster than hosting with a provider. I was curious if i
needed a CDN to speed it up further. Let me know if and when you get
around to measure the effects of CDN with GAE at some point.
ChrisBertrand August 8, 2017
Hi Kelly
Did you check the speed without the CDN?
Is the CDN useful/necessary to deliver file archives/executables
(applications, for instance)?
Kelly HW August 8, 2017
Hi Chris,
We stopped using the CDN for delivery of our static pages, since GAE
itself delivers them just fine. (I don't have any stats off the top of my
head unfortunately!)
I would definitely definitely encourage CDN for large files (over the static
file size cap). Our installer downloads are larger than the static file size
cap for GAE, so we serve them from Google Cloud Storage, which
incurs a cost for downloads. Using Cloudflare CDN for those downloads
saves us around $35/month (it's not free since we do have to serve
some of the files before the CDN caches it). But, if your larger files are
small enough to fit under the static file size cap, I encourage you to try
that first :)
ChrisBertrand August 8, 2017
Thank you very much for these details.
A real experience is very useful.
Decipher Tools Software August 8, 2017
Our pleasure, Chris! ;-)
Rita May 5, 2015
How did you handle 404? What would be entry in app.yaml for 'url' handler since other url
handlers are already catch all. Please suggest as I see you do have custom 404.
Thanks!!
Kelly HW May 5, 2015
Hi Rita,
Good question!
Our custom 404 is handled via CloudFlare SmartErrors (which is available in the
free plan, if you wanted that particular 404 page.)
Or, you can specify the custom page (as long as it is under 10KB) with the
error_handler rule in your app.yaml.
https://cloud.google.com/ap...
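For reference, the error_handlers rule Kelly mentions looks roughly like this in app.yaml (the filename is hypothetical; see the app.yaml reference for the supported error_code values and size limits):

```yaml
error_handlers:
- file: custom_404.html
```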
Kelly HW April 4, 2015
If you need to install SSL on your App Engine app (which requires you to setup your
domain with Google Apps), I HIGHLY recommend the instructions from Google
(https://cloud.google.com/ap..., along with the instructions about the actual SSL
installation here: http://blog.neutrondrive.co...
Free Static Page Hosting on Google App Engine in 5 minutes
2014 · 2 · 17

Nowadays we have a lot of options to deploy our applications. Some of them are Google App Engine, Google Compute Cloud, Amazon EC2,
Heroku, Nodejitsu and much more. All the services have their advantages and disadvantages over others. Generally, we do not prefer much
complex infrastructure or steps to deploy our static pages. Recently, I found that Google App Engine has been the best platform for hosting
static web pages, with a decent free plan compared to other services. In this article, we will discuss the steps to host your static pages, which
can be a personal blog, a company site or even your client sites.

Create an application in Google App Engine

Visit Google App Engine and then create an application. When creating the App Engine application, the application id is very important because it
acts as a subdomain for your site. Let's say the application id is coolmoon ; the site will be at coolmoon.appspot.com .

Install App Engine SDK for python

Since Python is one of the best-supported languages in App Engine, download and install the App Engine SDK for Python. Not a Python
developer (like me)? Do not worry, you do not need to write a single line of Python code.

You will use two commands from the SDK:

dev_appserver.py - the development web server


appcfg.py - for uploading your app to App Engine

Create an application folder

You have to create an application folder containing the static files and the configuration file to be deployed. The structure of the folder may be as
follows:

application_folder/
- app.yaml # configuration file. we will see in next section
- public/ # public folder will contain static files
- index.html
- js/
- css/
- img/

Content of App Engine configuration(app.yaml)

application: coolmoon
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:

- url: /(.+)
static_files: public/\1
upload: public/(.*)

- url: /
static_files: public/index.html
upload: public/index.html

skip_files:
- ^(.*/)?app\.yaml
- ^(.*/)?app\.yml
- ^(.*/)?#.*#
- ^(.*/)?.*~
- ^(.*/)?.*\.py[co]
- ^(.*/)?.*/RCS/.*
- ^(.*/)?\..*
- ^(.*/)?tests$
- ^(.*/)?test$
- ^test/(.*/)?
- ^COPYING.LESSER
- ^README\..*
- \.gitignore
- ^\.git/.*
- \.*\.lint$
- ^fabfile\.py
- ^testrunner\.py
- ^grunt\.js
- ^node_modules/(.*/)?
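The two handler entries in the app.yaml above are matched in order: /(.+) maps any non-root path onto the matching file under public/ via the \1 backreference, and the bare / entry serves the landing page. A small Python sketch of that mapping logic (illustrative only, not App Engine code):

```python
import re

# Hypothetical helper mirroring the two handlers above. App Engine tries
# handlers in order, so /(.+) catches every non-root path and / serves
# the landing page.
HANDLERS = [
    (r"^/(.+)$", r"public/\1"),
    (r"^/$", "public/index.html"),
]

def resolve(path):
    """Return the static file a request path maps to, or None."""
    for pattern, target in HANDLERS:
        match = re.match(pattern, path)
        if match:
            return match.expand(target)
    return None

print(resolve("/css/site.css"))  # public/css/site.css
print(resolve("/"))              # public/index.html
```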

Test static pages

You can run the development server locally and check your static pages with the following command:

dev_appserver.py ./

Visit http://localhost:8080 to test your pages.

Deploy

Once everything looks good, deploy the static pages. The appcfg.py command is used to deploy the application to Google App Engine:

appcfg.py update .

It will ask for the email and password of your Google account. The password must be an application-specific password. To learn how to generate an
application-specific password, please refer to Application specific password.

You've made it

Finally, your site is hosted at <application-id>.appspot.com . Static hosting is super easy with App Engine. Moreover, it is faster than
other static hosting services because it runs on Google's infrastructure.


Happy static hosting, and have a nice day.


Google App Engine is a powerful platform that lets you build and run
applications on Google’s infrastructure — whether you need to build a
multi-tiered web application from scratch or host a static website.
Here's a step-by-step guide to hosting your website on Google App
Engine.

Creating a Google Cloud Platform project


To use Google's tools for your own site or app, you need to create a new project on Google
Cloud Platform. This requires having a Google account.

1. Go to the App Engine dashboard on the Google Cloud Platform Console and press
the Create button.
2. If you've not created a project before, you'll need to select whether you want to receive
email updates or not, agree to the Terms of Service, and then you should be able to
continue.
3. Enter a name for the project, edit your project ID and note it down. For this tutorial, the
following values are used:
Project Name: GAE Sample Site
Project ID: gaesamplesite
4. Click the Create button to create your project.

Creating an application
Each Cloud Platform project can contain one App Engine application. Let's prepare an app for
our project.

1. We'll need a sample application to publish. If you've not got one to use, download and
unzip this sample app.
2. Have a look at the sample application's structure — the website folder contains your
website content and app.yaml is your application configuration file.
Your website content must go inside the website folder, and its landing page must
be called index.html , but apart from that it can take whatever form you like.
The app.yaml file is a configuration file that tells App Engine how to map URLs to
your static files. You don't need to edit it.
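For reference, the app.yaml in the sample app is only a few lines. A typical configuration of this shape looks like the following sketch (check the downloaded file for the exact contents):

```yaml
runtime: python27
api_version: 1
threadsafe: yes

handlers:
# Serve the landing page at the site root.
- url: /
  static_files: website/index.html
  upload: website/index.html

# Serve everything else from the website folder.
- url: /(.*)
  static_files: website/\1
  upload: website/(.*)
```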

Publishing your application


Now that we've got our project made and sample app files collected together, let's publish our
app.

1. Open Google Cloud Shell.


2. Drag and drop the sample-app folder into the left pane of the code editor.
3. Run the following in the command line to select your project:

gcloud config set project gaesamplesite

4. Then run the following command to go to your app's directory:

cd sample-app

5. You are now ready to deploy your application, i.e. upload your app to App Engine:

gcloud app deploy


6. Enter a number to choose the region where you want your application located.
7. Enter Y to confirm.
8. Now navigate your browser to your-project-id.appspot.com to see your website online.
For example, for the project ID gaesamplesite, go to gaesamplesite.appspot.com.

See also
To learn more, see Google App Engine Documentation.

Tags: Beginner Google App Engine Google Cloud Platform Guide Host Learn publish Web website

Contributors to this page: Mori, anton-mladenov, 4a-j

Last updated by: Mori, Jun 22, 2018, 7:29:40 AM


How to Host Static Website on
Google Cloud Storage?


BY CHANDAN KUMAR | MAY 14, 2018 | CLOUD COMPUTING

A step-by-step guide to hosting a static website on Google Cloud Storage for better performance at lower cost.
If you are hosting a static website (HTML/CSS/JS/images), then you don't
need to bother with a cPanel web hosting plan to manage your site.
Instead, you can use Google Cloud Storage, which is cheaper,
faster and easier to maintain.

A static site is suitable for a personal, corporate or informational page, or
anything where you don't expect to generate transactions or dynamic
content. It doesn't need any server-side processing or database
connectivity.

Why Google Cloud Storage?

It performs better at a lower cost.

You can host 10 GB of sites at multi-regional storage for high availability for less
than $1 per month.

You can choose to host your content on the multi-regional storage class,
which means your data is available in two regions' data centers for high
availability.

Google offers high-performance cloud storage for fast content loading
worldwide, with a 99.95% availability SLA.

There is no minimum object size, and you pay for what you use.

The following instructions will help you host a static website
on Cloud Storage in less than 15 minutes.

Pre-requisite

This assumes you have a domain name registered and an account created
with Google Cloud.

For this demonstration, I will use bloggerflare.com.

Let’s get it started…

Verify Domain Ownership


First, you need to verify that you own the domain by adding the
URL to Search Console.

Create Storage Bucket


Log in to Cloud Storage and click "Create Bucket."
Enter the bucket name (important tip: if you would like to point
your domain name to the storage, you must use the domain name as
the bucket name)
Select the storage class (leave multi-regional for high performance
& availability)
Select a location from US, EU & Asia (choose the nearest to your
audience)
Click "Create."

Note: if the domain is not verified, you will get an error as below.

So make sure the domain name you've entered as the bucket name is
verified.

Once the bucket is created, you should see it in the list.

Configuring the Storage Bucket

It's necessary to set up your bucket for your site to be accessible over the
Internet.

Select the bucket from the list
Click the settings icon on the right side >> Edit bucket permissions
It will open the permissions properties on the right side
Type allUsers in the "Add members" field and select the permission
"Storage Object Viewer."
Click Add to save the configuration
Next, click the settings icon again >> Edit website configuration
Enter the index & 404 pages (most of the time the index page will be
index.html and the 404 page 404.html)
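The console steps above can also be scripted with the gsutil tool from the Cloud SDK; a sketch of that configuration, assuming your (already verified) bucket/domain name is www.example.com:

```shell
# Create a multi-regional bucket named after the domain (name is hypothetical)
gsutil mb -c multi_regional -l us gs://www.example.com

# Make objects publicly readable (the allUsers -> Storage Object Viewer step)
gsutil iam ch allUsers:objectViewer gs://www.example.com

# Set the index and 404 pages for website serving
gsutil web set -m index.html -e 404.html gs://www.example.com
```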
Tutorials / Google App Engine

How To Host Static Website On Google App Engine


July 26, 2017 google-app-engine web-hosting static-website-hosting static-site-generator

Why I chose App Engine for Static Web Hosting


I wanted a setup without DevOps, so hosting on EC2 or DigitalOcean is out (not to mention the need to
integrate a CDN after that).

I wanted to try Google Cloud Storage, but SSL support comes with additional service and cost.

S3 + Cloudfare seems popular, but the task of setting up both things seems daunting.

App Engine seems fairly simple and is free for a low-traffic site (static files shouldn't count towards instance cost; you only
need to pay for bandwidth). Static files seem to be distributed across multiple nodes with good performance (whether
they are edge-cache nodes is debatable). If I ever need to write some server-side code, it can easily be done.

App Engine Limitation


Limit of 10,000 files
There is no storage cost if code and static files total less than 1 GB; beyond that, $0.026 per GB per month is charged.
Each file must be less than 32 MB.
Free quota
Google Cloud Storage: 5GB
Code & Static Data Storage: 1GB
Frontend Instances: 28h per Day
Outgoing Bandwidth/egress: 1GB per Day
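Using the figures above (first 1 GB of code and static data free, then $0.026 per GB per month), you can estimate the monthly storage charge for a given deployment size; a quick sketch (verify current pricing before relying on it):

```python
# Rough monthly storage cost for code + static files on App Engine,
# based on the numbers quoted above.
FREE_GB = 1.0
RATE_PER_GB = 0.026  # USD per GB per month beyond the free gigabyte

def monthly_storage_cost(size_gb):
    """Return the estimated monthly storage charge in USD."""
    billable = max(0.0, size_gb - FREE_GB)
    return round(billable * RATE_PER_GB, 4)

print(monthly_storage_cost(0.5))  # 0.0  (within free quota)
print(monthly_storage_cost(5.0))  # 0.104
```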

Prerequisite
Create a project on Google Cloud Platform.

Remember your Project ID

Create an App Engine app

Language: Python (pick a language you are familiar with, though it doesn't matter for a static website)
Region: us-central (depending on your audience)
You don't have to proceed with the Quickstart Tutorial.

Install Google Cloud SDK


Download the latest Google Cloud SDK

Extract the package (e.g. tar -zxf google-cloud-sdk*.tar.gz)

Run the install script to add the SDK tools to your path

./google-cloud-sdk/install.sh
# Output
Modify profile to update your $PATH and enable shell command
completion? [Y]
Enter a path to an rc file to update, or leave blank to use
[ENTER]

Initialize the SDK (enter your Google credential and select Project ID)
./google-cloud-sdk/bin/gcloud init

# You can respond "n" to the following


API [compute-component.googleapis.com] not enabled on project
[793702336627]. Would you like to enable and retry? (Y/n)?

Create App Engine project files


Create a directory for your app engine project.

mkdir hello-world-app
cd hello-world-app

Create an app.yaml file.

runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /robots.txt
static_files: www/robots.txt
upload: www/robots.txt
secure: always

# file with extensions (longer cache period)


- url: /(.*\.(css|js|woff|woff2|ico|png|jpg))
static_files: www/\1
upload: www/(.*)
expiration: "14d"
secure: always

# file with extensions


- url: /(.*\..*)
static_files: www/\1
upload: www/(.*)
secure: always

# assume file without extensions use index.html


- url: /(.*)/
static_files: www/\1/index.html
upload: www/(.*)/index.html
secure: always

- url: /
static_files: www/index.html
upload: www/index.html
secure: always

NOTE: To use secure: always, remember to enable managed SSL certificates for your website.

Create www directory

mkdir www
cd www

Create index.html in www

<html>
<head>
<title>Hello World</title>
<link rel="stylesheet" type="text/css" href="/css/style.css">
</head>
<body>
<h1>Hello World</h1>
<p class="red">I am Red</p>
</body>
</html>

Create css directory in www

mkdir css
cd css

Create style.css in www/css

.red {
color: #FF0000;
}

Project files directory structure

hello-world-app
├── app.yaml
└── www
├── css
│ └── style.css
└── index.html

Deployment
Deploy the local files to the App Engine server. Make sure the source and Project ID are correct.

gcloud app deploy -v 1


# output
Services to deploy:

descriptor: [/hello-world-app/app.yaml]
source: [/hello-world-app]
target project: [hello-world-project-id]
target service: [default]
target version: [1]
target url: [https://hello-world-project-id.appspot.com]

Do you want to continue (Y/n)? Y

Launch the browser to preview the website

gcloud app browse -v 1

I prefer to include the version (-v 1); otherwise a new version will be created for every upload.

Static cache expiration


When you make changes to the files and refresh the page, you might notice that the page still serves
the old content. All static content is cached (default 10-minute expiration time) once downloaded, with no way
to clear the cache (clearing the browser cache and re-deploying a new version won't clear it) until it expires.

You can change cache duration by changing default_expiration or expiration in app.yaml.

For development purposes, you can add a cache-busting query string to the end of the URL (e.g. https://hello-world-project-id.appspot.com?r=1).

Refer to static cache expiration.
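One way to automate the cache-busting query string is to derive it from the file contents, so the URL changes exactly when the file does. A small Python sketch (the paths and filenames are hypothetical):

```python
import hashlib

def busted_url(path, content):
    """Append a short content-hash version token to a static asset URL.

    Any change to the file bytes produces a new URL, which bypasses
    stale cached copies in browsers and intermediate proxies.
    """
    token = hashlib.md5(content).hexdigest()[:8]
    return f"{path}?v={token}"

css = b".red { color: #FF0000; }"
print(busted_url("/css/style.css", css))
```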

Web Server Performance


Google App Engine server performance is exceptionally good across multiple countries.

Bitcatcha Result
Location Response Times

US (W) 1 ms

US (E) 3 ms

London 25 ms

Singapore 12 ms

Sao Paulo 52 ms

Bangalore 93 ms

Sydney 335 ms

Japan 70 ms

Pagespeed Insights Result


Your server responded quickly.


By Desmond Lua
A dream boy who enjoys programming and travelling, maker of Travelopy. Follow me on @d_luaz.

Tags:

google-app-engine web-hosting static-website-hosting static-site-generator

Related entries:

Things To Knows Before Hosting Static Website On Google Cloud Storage


Google App Engine Static Website Redirect Trailing Slash

Hosting Hugo On Google App Engine

This work is licensed under a


Creative Commons Attribution-NonCommercial 4.0 International License.

Copyright © luasoftware.com
Introducing managed SSL for Google App
Engine
Lorne Kligerman
Product Manager

September 14, 2017

We’re excited to announce the beta release of managed SSL certificates at no charge for
applications built on Google App Engine. This service automatically encrypts server-to-client
communication — an essential part of safeguarding sensitive information over the web.
Manually managing SSL certificates to ensure a secure connection is a time consuming
process, and GCP makes it easy for customers by providing SSL systematically at no additional
charge. Managed SSL certificates are offered in addition to HTTPS connections provided on
appspot.com.

Here at Google, we believe encrypted communications should be used everywhere. For


example, in 2014, the Search team announced that the use of HTTPS would positively impact
page rankings. Fast forward to 2017 and Google is a Certificate Authority, establishing HTTPS
as the default behavior for App Engine, even across custom domains.

Now, when you build apps on App Engine, SSL is on by default — you no longer need to worry
about it or spend time managing it. We’ve made using HTTPS simple: map a domain to your
app, prove ownership, and App Engine automatically provisions an SSL certificate and renews it
whenever necessary, at no additional cost. Purchasing and generating certificates, dealing with
and securing keys, managing your SSL cipher suites and worrying about renewal dates —
those are all a thing of the past.

"Anyone who has ever had to replace an expiring


SSL certificate for a production resource knows
how stressful and error-prone it can be. That's why
we're so excited about managed SSL certificates
in App Engine. Not only is it simple to add
encryption to our custom domains
programmatically, the renewal process is fully
automated as well. For our engineers that means
less operational risk."

— James Baldassari, Engineer, mabl

Get started with managed SSL/TLS certificates

To get started with App Engine managed SSL certificates, simply head to the Cloud
Console and add a new domain. Once the domain is mapped and your DNS records are up to
date, you’ll see the SSL certificate appear in the domains list. And that’s it. Managed certificates
is now the default behavior — no further steps are required!
To switch from using your own SSL certificate on an existing domain, select the desired domain,
then click on the "Enable managed security" button. In just minutes, a certificate will be in place
and serving client requests.

You can also use the gcloud CLI to make this change:

$ gcloud beta app domain-mappings update DOMAIN --certificate-management 'AUTOMATIC'

Rest assured that your existing certificate will remain in place and communication will continue
as securely as before until the new certificate is ready and swapped in.

For more details on the full set of commands, head to the full documentation here.

Domains and SSL Certificates Admin API GA


We’re also excited to announce the general availability of the App Engine Admin API to manage
your custom domains and SSL certificates. The addition of this API enables more automation so
that you can easily scale and configure your app according to the needs of your business.
Check out the full documentation and API definition.

If you have any questions or concerns, or if something is not working as you’d expect, you can
post in the Google App Engine forum, log a public issue, or get in touch on the App Engine
slack channel (#app-engine).

POSTED IN: GOOGLE CLOUD PLATFORM IDENTITY & SECURITY APP ENGINE

Use Google App Engine and Golang to Host a Static Website with Sam
Published March 8, 2017 • Updated June 10, 2018

There are several inexpensive ways to host a static website generated with a static site generator like Jekyll, Hugo, or Pelican:

GitHub Pages
Google Cloud Storage Bucket
Google App Engine
Amazon S3 Bucket

This entire blog is statically generated using Jekyll. However, I am unable to use any of the options above because, over this blog's lifetime, I have
moved posts, and I want to keep alive all of the old URLs.

I have been hosting this blog using Apache and, more recently, nginx on a single virtual machine, and the redirection features of the two pieces of
software are powerful but different.

A previous post details how I redirect URLs from an old domain to a new domain using Google App Engine and Python, but now I needed a way to redirect within the same domain.
That same-domain redirection requirement is why I cannot simply use Google App Engine's static-content-only feature (linked in the list above). However, a
simple Golang application can serve both static content and same-domain redirects.

Why Google App Engine?


Before you dive into the rest of the post, perhaps you are wondering, why host a blog on Google App Engine? Here are my reasons why:

If your traffic fits within App Engine’s free tier of 28 instance hours and 1 GB of egress traffic per day, hosting the blog is practically free
Pushing updates is done with one command
Logging and monitoring are integrated using Stackdriver
Automatic up and down scaling based on traffic patterns
With a few clicks, web logs can easily be pushed to something like BigQuery for long term storage and ad hoc analysis
Managed SSL certificates using Let’s Encrypt

Prerequisites
This post assumes the following:

You are familiar with Google Cloud Platform (GCP) and have already created a GCP Project
You have installed the Google Cloud SDK
You have authenticated the  gcloud  command against your Google Account

Create a GCP Project


If you have not yet created a GCP Project, follow these steps:

1. Open a web browser, and create or log in to a Google Account


2. Navigate to the GCP Console
3. If this is your first GCP Project, you will be prompted to create a GCP Project. Each Google Account gets $300 in credit to use within 12 months. Billing details are required to
create a GCP Project, but you will not be charged until the $300 credit is consumed or the 12 months expire.
4. If this is a new GCP Project, you will need to enable the Compute Engine API by navigating to the Compute Engine section of the GCP Console.

Install the Google Cloud SDK


If you have not yet installed the Google Cloud SDK, follow the instructions here.

Authenticate gcloud
Once you have created a GCP Project and installed the Google Cloud SDK, the last step is to authenticate the  gcloud  command against your Google Account. Run the following
command:

gcloud auth login

A web page will open in your web browser. Select your Google Account and give it permission to access GCP. Once completed, you will be authenticated.

Create a Directory
Next, create a directory somewhere on your workstation to store your Google App Engine application:

mkdir ~/Sites/example.com/app_engine

from HTTP to HTTPS as a temporary redirect; it is a permanent redirect.

If you have static assets, and you probably do, it is best practice to inform App Engine of this and let it serve those assets from object storage instead of your application. This is done
through the app.yaml file.

For example, if you have a favicon file, a CSS directory, a Javascript directory, and an images directory, use the following app.yaml file:

runtime: go
api_version: go1

handlers:
- url: /favicon.png$
static_files: static/favicon.png
upload: static/favicon.png

- url: /css
static_dir: static/css

- url: /js
static_dir: static/js

- url: /images
static_dir: static/images

- url: /.*
script: _go_app
secure: always
redirect_http_response_code: 301

Create main.go
Next, you need the Golang application file.

For the following code to meet your needs, create the file main.go, copy and paste the code below, and make the following modifications:

In the domain variable, change the value to match your domain name with the correct HTTP protocol.
In the urls map, replace all of the key-value pairs to match the redirects you need in place. Replace each key with just the path portion (/example-
post-1.html) of the current domain's old URL you want to keep alive. Then replace each value with the path portion of the current domain's new URL.

All redirects will be 301 redirects. This can be modified by changing 301 in the code below to a different HTTP redirect status code, such as 302.

package main

import (
"net/http"
"os"
"strings"
)

func init() {
http.HandleFunc("/", handler)
}

func handler(w http.ResponseWriter, r *http.Request) {


// True (ok) if request path is in the urls map
if value, ok := urls[r.URL.Path]; ok {
value = domain + value
http.Redirect(w, r, value, 301)
} else {
path := "static/" + r.URL.Path
// Return 403 if HTTP request is to a directory that exists and does not contain an index.html file
if f, err := os.Stat(path); err == nil && f.IsDir() {
index := strings.TrimSuffix(path, "/") + "/index.html"
if _, err := os.Open(index); err != nil {
w.WriteHeader(403)
w.Write([]byte("<html><head><title>403 Forbidden</title></head><body><center><h1>403 Forbidden</h1></center></body></html>"))
return
}
}

// Return custom 404 page if HTTP request is to a non-existent file


if _, err := os.Stat(path); os.IsNotExist(err) {
w.WriteHeader(404)
http.ServeFile(w, r, "static/404.html")
return // Without naked return, a "404 page not found" string will be displayed at the bottom of your custom 404 page
}

http.ServeFile(w, r, path)
return
}
}

var domain string = "https://example.com"
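The handler's decision order (check the redirect map first, then fall back to serving a static file) can be sketched in a few lines of Python; the names and example paths are illustrative, not part of the Go app:

```python
# Illustrative sketch of the Go handler's routing logic, not App Engine code.
DOMAIN = "https://example.com"
URLS = {  # old path -> new path, all served as 301 redirects
    "/old/post-1.html": "/post/post-1.html",
}

def route(path):
    """Return an (action, target) pair for an incoming request path."""
    if path in URLS:
        # Known old URL: issue a permanent redirect to the new location.
        return ("redirect", DOMAIN + URLS[path])
    # Otherwise serve the matching file from the static directory.
    return ("serve", "static/" + path.lstrip("/"))

print(route("/old/post-1.html"))
print(route("/css/style.css"))
```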


At this point, your application is deployed at the URL https://your-project-id.appspot.com. Unless your website uses that as its domain name, you will want to map your
actual current domain name.

The App Engine section of the Google Cloud Console can be used to do this. Go here and follow the instructions to configure your custom domain.

Once that is complete and DNS has had time to propagate, you should be able to navigate in your web browser to one of your current domain's old URLs
and have it redirect to your current domain's new URL, for example https://example.com/post/example-post-1.html.

Pushing Updates
To push updates, make the necessary changes in your static site's source directory, regenerate the static content, and redeploy to Google App Engine by changing into
the ~/Sites/example.com/app_engine directory and running  gcloud app deploy .

References
A Surprising Feature of Golang that Colored Me Impressed
How to check if a map contains a key in go?
Disable directory listing with http.FileServer
3 Ways to Disable http.FileServer Directory Listings
Handling HTTP Request Errors in GO
HTTP and Error management in Go
please add ability to set custom 404 notFoundHandler for htt
CSS File Not Updating on Deploy (Google AppEngine)

I pushed a new version of my website, but now the CSS and
static images are not deploying properly.

Here is the messed up page: http://www.gaiagps.com

App Engine shows the latest version as being correct though:
http://1.latest.gaiagps.appspot.com/

Any help?

google-app-engine

asked May 6 '10 at 17:23


Andrew Johnson
6,771 11 63 110

9 Answers

I've seen this before on App Engine, even when using cache-
busting query parameters like /stylesheets/default.css?{{
App.Version }} .
Here's my (unconfirmed) theory:

1. You push a new version by deploying or by switching a new
version to default .
2. While this update is being propagated to all GAE instances
running your app...
3. ...someone hits your site.
4. The request for static resource default.css?{{ App.Version }}
is sent to Google's CDN, which doesn't yet have it.
5. Google's CDN asks GAE for the resource before
propagation from step #2 is done for all instances.
6. If you're unlucky, GAE serves up the resource from an
instance running the old version...
7. ...which now gets cached in Google's CDN as the
authoritative "new" version.

When this (if this is what happens) happens, I can confirm that
no amount of cache-busting browser work will help. The Google
CDN servers are holding the wrong version.

To fix: The only way I've found to fix this is to deploy another
version. You don't run the risk of this happening again (if you
haven't made any CSS changes since the race condition),
because even if the race condition occurs, presumably your first
update is done by the time you deploy your second one, so all
instances will be serving the correct version no matter what.

answered Jan 29 '11 at 17:35


kamens
7,530 5 40 46

I'll buy this... though I'm not sure it's right. It eventually just cleared
itself up hours later. – Andrew Johnson Feb 1 '11 at 20:36

I'm facing the same issue. In my case even after I left it overnight the
new css was not being served. I'm going to try the cache bursting
technique – coderman Jan 1 '12 at 1:55

I am encountering it now. I've deployed many, many times but it keeps


serving the outdated static files. However if I reference the version of
the app - eg, alpha5.latest.xxx.appspot.com - then the correct file is
served. – mainsocial Feb 9 '12 at 5:54

@mainsocial Are you appending a cache-busting query string like


"foo.css?{{ App.version }}" to the end of your URL? – kamens Feb 9
'12 at 22:28

I've encountered this problem a few times. For whatever reason, this
fixed it. I think it's something to do with upstream caching from
the google instance web host. If you see the problem and request the
css file directly in a browser with the querystring, then the issue goes
away. It looks like the caching is invalidated the first time a request is
made with a unique url to a static file. – Clint Simon Feb 14 '12 at 1:15

Following is what has worked for me.

1. Serve your css file from the static domain. This is
automatically created by GAE.
//static.{your-app-id}.appspot.com/{css-file-path}
2. Deploy your application. At this point your app will be broken.
3. Change the version of the css file:
//static.{your-app-id}.appspot.com/{css-file-path}?v={version-Name}
4. Deploy again.

Every time you change the css file, you will have to repeat 2, 3
and 4.

answered Jan 14 '12 at 20:05


coderman
1,315 1 14 14

Your link looks fine to me, unless I'm missing something.

You may have cached your old CSS, and are not getting the new
CSS after updating it. Try clearing your browser cache and see if
that works.

Going to 1.latest downloads the new CSS since it's not in your
cache, so it appears correctly to you.

answered May 6 '10 at 17:26


Jason Hall
19.2k 3 44 55

The menu at the bottom of the page is not horizontal or big enough,
and the images in the slideshow are wrong. I have tried refreshing my
cache and loading from a different browser too. – Andrew Johnson
May 6 '10 at 17:29

The two sites you linked look exactly the same on Firefox and Chrome
for OS X. I suspect there's still some issue that's only affecting you, or
your browser, unless someone else can verify that it looks different. –
Jason Hall May 6 '10 at 18:57

Try using <Shift>+<F5> to force reload your page (at least in FF). Here
everything seems fine, both menu and slideshow images. Good luck. –
Emilien May 6 '10 at 19:14

The problem was server-side caching as it turns out. App Engine is
weird. – Andrew Johnson May 15 '10 at 20:07

I had this problem as well. I was using Flask with GAE, so I didn't
have a static handler in my app.yaml . When I added it, the deploy
worked. Try adding something like this

handlers:
- url: /static
  static_dir: static

to your app.yaml and deploy again. It worked for me. Apparently
Google is trying to optimize by not updating files that it thinks
users can't see.

answered Mar 17 '14 at 21:12


user792036
537 7 11

Here's what worked for me:

First, I changed the version in app.yaml.

Then follow these steps:

Go to your console -> Click on your Project.

On the side menu, click on Computation -> Versions.

There you will see all versions, and which version is the default. Mine
was set to an older version.

Mark the new version.

That worked for me.

answered Dec 3 '14 at 1:20


pLpB
13 5

From the docs for the standard environment for Python, on
static_cache_expiration:

After a file is transmitted with a given expiration time, there is
generally no way to clear it out of intermediate caches, even if
the user clears their own browser cache. Re-deploying a new
version of the app will not reset any caches. Therefore, if you
ever plan to modify a static file, it should have a short (less
than one hour) expiration time. In most cases, the default
10-minute expiration time is appropriate.
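In app.yaml, that guidance corresponds to the top-level default_expiration element and the per-handler expiration element. The fragment below is only a sketch of that mapping, not taken from anyone's actual configuration:

```yaml
# Sketch only: keep expirations short for files you expect to edit.
default_expiration: "10m"   # fallback for handlers without their own setting

handlers:
- url: /static
  static_dir: static
  expiration: "5m"          # per-handler override for frequently changed assets
```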

edited Oct 26 '18 at 6:54


Gabriel Henrique Nunes
90 7

answered Jul 31 '18 at 19:04


shoresh
8 4

As found by shoresh, the docs for the standard environment for
Python state that both settings for static cache expiration, the
individual element expiration and the top-level element
default_expiration , are responsible for defining "the expiration
time [that] will be sent in the Cache-Control and Expires HTTP
response headers". This means that "files are likely to be cached
by the user's browser, as well as by intermediate caching proxy
servers such as Internet Service Providers".

The problem here is that "re-deploying a new version of the app
will not reset any caches". So if one has set default_expiration
to, e.g., 15 days, but makes a change to a CSS or JS file and
re-deploys the app, there is no guarantee that those files will be
served automatically, due to active caches, particularly
intermediate caching proxy servers, which may include Google
Cloud servers. That seems to be the case here, since accessing
your-project-name.appspot.com also serves outdated files.

The same documentation linked above states that "if you ever
plan to modify a static file, it should have a short (less than one
hour) expiration time. In most cases, the default 10-minute
expiration time is appropriate". That is something one should
think about before setting any static cache expiration. But for
those who, like myself, didn't know all of this beforehand and
have already been caught by this problem, I've found a solution.

Even though the documentation states that it's not possible to
clear those intermediate caching proxies, one can delete at least
the Google Cloud cache.

In order to do so, head to your Google Cloud Console and open
your project. Under the left hamburger menu, head to Storage ->
Browser. There you should find at least one bucket:
your-project-name.appspot.com. Under the Lifecycle column, click
on the link for your-project-name.appspot.com. Delete any
existing rules, since they may conflict with the one you will
create now.
Create a new rule by clicking on the 'Add rule' button. For the
object conditions, choose only the 'Newer version' option and set
it to 1. Don't forget to click on the 'Continue' button. For the
action, select 'Delete' and click on the 'Continue' button. Save
your new rule.
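For those who prefer the command line, the same lifecycle rule can be applied with gsutil from the Cloud SDK. This is a sketch of the console steps above; replace the bucket name with your own:

```shell
# lifecycle.json: delete an object as soon as one newer version of it exists
cat > lifecycle.json <<'EOF'
{
  "lifecycle": {
    "rule": [
      {"action": {"type": "Delete"}, "condition": {"numNewerVersions": 1}}
    ]
  }
}
EOF

gsutil lifecycle set lifecycle.json gs://your-project-name.appspot.com
```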

This newly created rule will take up to 24 hours to take effect, but
at least for my project it took only a few minutes. Once it is up and
running, the version of the files being served by your app under
your-project-name.appspot.com will always be the latest
deployed, solving the problem. Also, if you are routinely editing
your static files, you should remove the default_expiration
element from the app.yaml file, which will help avoid unintended
caching by other servers.

answered Oct 26 '18 at 21:40


Gabriel Henrique Nunes
90 7

For new people coming to this old question and its set of answers, I
wanted to give an updated answer. I think in 2018-19 the
following information will probably fix most of the CSS update
issues people are having:

Make sure your app.yaml has the following:

handlers:
- url: /static
static_dir: static

Run gcloud app deploy

Chill for 10 minutes, then shift-reload your website.
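A complementary trick, not mentioned in this answer but common practice for stale-CSS problems, is to fingerprint asset URLs so every content change produces a new URL that no cache has seen. The helper below is a minimal, hypothetical sketch (the function name is made up):

```python
import hashlib

def versioned_url(path, content):
    """Return the asset path with a short content hash appended, so any
    change to the file's bytes yields a new URL and bypasses stale caches."""
    digest = hashlib.md5(content).hexdigest()[:8]
    return "%s?v=%s" % (path, digest)

# Reference the stylesheet in your HTML via the versioned URL; editing the
# file changes the hash, forcing browsers and proxies to refetch it.
print(versioned_url("/static/style.css", b"body { color: red; }"))
```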

answered Dec 27 '18 at 2:35


Briford Wylie
810 10 18

Try clearing the cache in your browser. I had the exact same issue and
got it fixed by simply clearing the cache.

answered Dec 6 '12 at 12:24
kymni
69 1 4
Authenticated, Static Web Sites on Google App Engine
08 Jan 2012 app engine google apps static html python

Static HTML Web Site Hosting with Google App Engine


Google App Engine is a platform-as-a-service (PAAS) product that provides scalable, cloud-hosted web
applications using Google’s massive engineering infrastructure. While App Engine is primarily used by web
developers (e.g., programming in Python or Java), it offers three features which make it uniquely helpful for static
site hosting:
Arbitrary static file handling.
Extensible authentication support.
Very inexpensive (most likely free) for hosting static content.
In this post, we’ll walk through uploading a static HTML site to App Engine, and configuring it such that it
requires users to log in via a Google Apps domain account before viewing any content. This is a common
situation for organizations already using Google Apps to manage email, documents, etc. that want to host a web
site without having extra configuration hassles. App Engine essentially takes care of all the authentication / user
management, and you just have to upload the static web site.
The assumption throughout the rest of this post is that you already have a domain name managed by Google Apps
(e.g., “example.com”). We will create an App Engine application and restrict it to users of the specific Google Apps
domain, requiring a login of a user “@example.com”. Then we’ll upload the static site and verify that
authentication works as expected.

Create an Authenticated App Engine Application


The first step is to create an App Engine web application. You’ll need to sign up for an App Engine
account, download the App Engine Python SDK (make sure you get the Python one!), and you should read the
“getting started” documentation. Also, if given the option to “install command symlinks”, make sure you choose to
do this (which will give us an appcfg.py executable in our path for use later).
Once App Engine is set up and installed, we can create the actual application. The important thing to point out
here is that you must select the authentication method you want for your web site at creation time, as it cannot be
changed later. (Although you can create a new application with different authentication and delete your original
application.)
Take a moment to review the App Engine authentication article, as we’re basically going to follow these steps. Go
to the create application web page. Fill in the following information:
Application Identifier: Choose a descriptive name for your site (e.g., "internal-docs"). There cannot be an
existing matching identifier, so have some alternates handy. This will result in a domain name of
"internal-docs.appspot.com" (or whatever your identifier is) for your finished website.
Application Title: A free text description of your website. Feel free to put anything in that provides a simple
title for your site.
Authentication Options (Advanced): You’ll need to click the “edit” link which then gives us three options
for authentication: (1) “Open to all Google Accounts users (default)”, (2) “Restricted to the following Google
Apps domain:”, or (3) “(Experimental) Open to all users with an OpenID Provider”. Click the button for
“Restricted to the following Google Apps domain:” and enter your Google Apps-managed domain (e.g.,
“example.com”). I should point out again that you already should have Google Apps set up for the domain
name you are entering.
From there you can click “Create Application” and the application should be created. Make sure to keep your
application identifier handy.
To enable Google Apps domain authentication for the new application, we need to follow the instructions in the
App Engine authentication article. Basically, you need to open a web browser to:
“http://www.google.com/a/YOUR DOMAIN” and click on the “Dashboard” tab. Go to “Service settings” and click
on the “Add more services” link. In the “Other services” section, there will be a place to add an App Engine service.
Type in your application identifier code here and click “Add it now”. This will hook up your specific Google Apps
Domain with the App Engine service.

Configure and Upload Static Web Site


The next step is to gather your static web site files, add an application configuration, and upload all the content to
the App Engine application. We will place all of our application content in a directory called “my_site” (or
something else of your choosing). You are best off keeping this directory under a source control management
system (e.g., git), so that you can monitor, track and revert changes to all of your files.

Configuration
We need an application configuration file called "app.yaml" in the root of our project directory. This file controls
various aspects of the application, including how the application routes URLs to handlers. We'll use a
configuration that handles all static file types (including HTML) and simply serves them.
There are various other posts out there discussing configurations for static web sites on App Engine, but the best
configuration that I found was a gist by GitHub user "darktable". However, this configuration didn't include
authentication, so I forked the gist and added authentication attributes to produce our final app.yaml file that you
should download to "my_site/app.yaml". You can also view a basic Readme file and other information at the
GitHub gist page.
Here’s a snippet of the “app.yaml” file that you’ll need to slightly modify:

application: you-app-name-here
version: 1
runtime: python
api_version: 1

default_expiration: "30d"

handlers:
- url: /(.*\.(appcache|manifest))
mime_type: text/cache-manifest
static_files: static/\1
upload: static/(.*\.(appcache|manifest))
expiration: "0m"
login: required

# ... OTHER CONTENT SNIPPED ...

# site root
- url: /
static_files: static/index.html
upload: static/index.html
expiration: "15m"
login: required

After downloading to “my_site/app.yaml”, update the application: you-app-name-here directive with the
specific App Engine application identifier you chose in the application creation section above.

Static Content
Now that we have a configuration file, create a folder named "my_site/static", which will house the actual static
web site. As we want to check that the authentication works before uploading potentially sensitive information, I
would recommend creating a test HTML page that just contains the content "It worked!" and adding it as
"my_site/static/index.html".
Now, we should have a project layout that looks like:

my_site/
app.yaml
static/
index.html
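Before uploading, you can preview the site locally with the SDK's development server (assuming the command symlinks mentioned earlier were installed):

```shell
# Serves my_site at http://localhost:8080 by default
dev_appserver.py my_site
```

Note that the development server does not enforce the Google Apps login; authentication can only be verified against the deployed application.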

At this point we can upload the full site to our static server using appcfg.py. Make sure that we
have appcfg.py available:

$ which appcfg.py
/usr/local/bin/appcfg.py

If you don’t get an executable path back (any path is fine as long as something is returned by
the which command), then review the App Engine “getting started” documents for installation of the runtime.
Assuming we do have appcfg.py available, change directory in your terminal to the directory containing the
“my_site” project folder and upload the static site with the following command:
$ appcfg.py update my_site

You will have to enter your Google credentials here. After the upload finishes, you should be able to open a web
browser to: “<your application identifier>.appspot.com”. If you are authenticated to your Google Apps domain,
you should see the “It worked!” test page. If not, you should be prompted to login to your Google Apps domain. A
good way to test the authentication works is to open a new Google Chrome Incognito window. It should always
force a new Google Apps login if you have configured things properly. If the authentication doesn’t work quite
right, review the App Engine authentication page for tips and pointers, or leave a comment below on this post.
Assuming authentication does work correctly, then you can now remove the test “index.html” file and upload your
real site content to the “my_site/static” directory. Every time you change the content, make sure to re-upload the
project with appcfg.py and enjoy your static web site!
Microservices
Microservices are a software development technique, a variant of the service-oriented architecture (SOA) architectural style, that structures an application as a
collection of loosely coupled services. In a microservices architecture, services are fine-grained and the protocols are lightweight. The benefit of decomposing an
application into different smaller services is that it improves modularity. This makes the application easier to understand, develop, and test, and more resilient to
architecture erosion.[1] It parallelizes development by enabling small autonomous teams to develop, deploy and scale their respective services independently.[2] It also
allows the architecture of an individual service to emerge through continuous refactoring.[3] Microservices-based architectures enable continuous delivery and
deployment.[4]

Contents
Introduction
History
Service Granularity
Linguistic approach
Technologies
Service Mesh
Criticism
Cognitive load
Implementations
See also
References
Further reading

Introduction
Even though there is no official definition of what microservices are, a consensus view has evolved over time in the industry. Some of the defining characteristics that are
frequently cited include:

Per Martin Fowler and other experts, services in a microservice architecture (MSA) are often processes that communicate over a network to fulfill a goal using
technology-agnostic protocols such as HTTP.[5][6][7] However, services might also use other kinds of inter-process communication mechanisms, such as shared
memory.[8] Services might also run within the same process as, for example, OSGi bundles.
Services in a microservice architecture are independently deployable.[9][1]
Services are organized around fine-grained business capabilities. The granularity of the microservice is important, because it is key to how this approach
differs from SOA.
Services can be implemented using different programming languages, databases, hardware and software environments, depending on what fits best.[1] This does not
mean that a single microservice is written in a patchwork of programming languages. While it is almost certainly the case that the different components a service is
composed of will require different languages or APIs (for example, the web server layer may be in Java or JavaScript, but the database may use SQL to communicate
with an RDBMS), this really reflects a comparison with the monolithic architecture style. If a monolithic application were re-implemented as a set of
microservices, the individual services could pick their own implementing languages. So one microservice could pick Java for the web layer, and another
microservice could pick a Node.js-based implementation, but within each microservice component, the implementing language would be uniform.
Services are small in size, messaging enabled, bounded by contexts, autonomously developed, independently deployable, decentralized and built and released with
automated processes.[9]
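The first characteristic above (processes communicating over a network via technology-agnostic protocols such as HTTP) can be illustrated with a minimal sketch, not drawn from any cited source, of a service exposing a single fine-grained capability, here in Python using only the standard library:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """One fine-grained capability, exposed over a technology-agnostic
    protocol (HTTP) with a language-neutral payload (JSON)."""

    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# Bind to an ephemeral port and serve in the background; a client written in
# any language can now consume this capability over plain HTTP.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
```

In a microservice architecture, each such capability would be deployed and scaled independently, in its own process and typically with its own data store.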
A microservice is not a layer within a monolithic application (for example, the web controller, or the backend-for-frontend[10]). Rather, it is a self-contained piece of business
function with clear interfaces, and may, through its own internal components, implement a layered architecture. From a strategy perspective, microservices architecture
essentially follows the Unix philosophy of "Do one thing and do it well"[11]. Martin Fowler describes a microservices-based architecture as having the following
properties[5]:

Naturally enforces a modular structure.


Lends itself to a continuous delivery software development process. A change to a small part of the application only requires rebuilding and redeploying only one or a
small number of services.
Adheres to principles such as fine-grained interfaces (to independently deployable services), business-driven development (e.g. domain-driven design).
It is quite common for such an architectural style to be adopted for cloud-native applications and applications using lightweight container deployment. As explained[12] by
Martin Fowler, because of the large number of services (when compared to monolithic application implementations), decentralized continuous delivery and DevOps with
holistic service monitoring are necessary to effectively develop, maintain, and operate such applications. A consequence of (and rationale for) following this approach is
that the individual microservices can be individually scaled. In the monolithic approach, an application supporting three functions would have to be scaled in its entirety
even if only one of these functions had a resource constraint[13]. With microservices, only the microservice supporting the function with a resource constraint needs to be
scaled out, thus providing resource and cost optimization benefits.

History
A workshop of software architects held near Venice in May 2011 used the term "microservice" to describe what the participants saw as a common architectural style that
many of them had been recently exploring. In May 2012, the same group decided on "microservices" as the most appropriate name. James Lewis presented some of those
ideas as a case study in March 2012 at 33rd Degree in Kraków in Microservices - Java, the Unix Way, as did Fred George about the same time. Adrian Cockcroft at Netflix,
describing this approach as "fine grained SOA", pioneered the style at web scale, as did many of the others mentioned in this article - Joe Walnes, Dan North, Evan
Bottcher and Graham Tackley.[14]

Dr. Peter Rodgers introduced the term "Micro-Web-Services" during a presentation at the Web Services Edge conference in 2005. On slide #4 of the conference
presentation, he states that "Software components are Micro-Web-Services".[15] Juval Löwy had similar precursor ideas about classes being granular services, as the next
evolution of Microsoft architecture.[16][17][18] "Services are composed using Unix-like pipelines (the Web meets Unix = true loose-coupling). Services can call services
(+multiple language run-times). Complex service-assemblies are abstracted behind simple URI interfaces. Any service, at any granularity, can be exposed." He described
how a well-designed service platform "applies the underlying architectural principles of the Web and Web services together with Unix-like scheduling and pipelines to
provide radical flexibility and improved simplicity by providing a platform to apply service-oriented architecture throughout your application environment".[19] The
design, which originated in a research project at Hewlett Packard Labs, aims to make code less brittle and to make large-scale, complex software systems robust to
change.[20] To make "Micro-Web-Services" work, one has to question and analyze the foundations of architectural styles (such as SOA) and the role of messaging between
software components in order to arrive at a new general computing abstraction.[21] In this case, one can think of resource-oriented computing (ROC) as a generalized form
of the Web abstraction. If in the Unix abstraction "everything is a file", in ROC, everything is a "Micro-Web-Service". It can contain information, code or the results of
computations so that a service can be either a consumer or producer in a symmetrical and evolving architecture.

Microservices is a specialization of an implementation approach for service-oriented architectures (SOA) used to build flexible, independently deployable software
systems.[22] The microservices approach is a first realisation of SOA that followed the introduction of DevOps and is becoming more popular for building continuously
deployed systems.[23]

Service Granularity
A key step in defining a microservice architecture is figuring out how big an individual microservice has to be. There is no consensus or litmus test for this, as the right
answer depends on the business and organizational context. Amazon's policy is that the team implementing a microservice should be small enough that it can be fed by
two pizzas[5]. Many organizations choose smaller "squads", typically 6 to 8 developers. But the key decision hinges on how "clean" the service boundary can be.

At the opposite side of the spectrum, it is considered a bad practice to make the service too small, as the runtime overhead and the operational complexity can then
overwhelm the benefits of the approach. When things get too fine-grained, alternative approaches must be considered, such as packaging the function as a library or
placing the function into other microservices.

Linguistic approach
A linguistic approach to the development of microservices[24] focuses on selecting a programming language that can easily represent a microservice as a single software
artifact. When effective, the gap between architecting a project and deploying it can be minimized.

One language intended to fill this role is Jolie.[25][26]

Technologies
Microservices can be implemented in different programming languages and might use different infrastructures. Therefore, the most important technology
choices are the way microservices communicate with each other (synchronous, asynchronous, UI integration) and the protocols used for the communication (REST,
messaging, ...). In a traditional system, most technology choices, like the programming language, impact the whole system. Therefore, the approach for choosing
technologies is quite different.[27]

The Eclipse Foundation has published a specification for developing microservices, Eclipse MicroProfile (https://projects.eclipse.org/projects/technology.microprofile).

Service Mesh
In a service mesh, each service instance is paired with an instance of a reverse proxy server, called a service proxy, sidecar proxy, or sidecar. The service instance and
sidecar proxy share a container, and the containers are managed by a container orchestration tool such as Kubernetes. The service proxies are responsible for
communication with other service instances and can support capabilities such as service (instance) discovery, load balancing, authentication and authorization, secure
communications, and others.

In a service mesh, the service instances and their sidecar proxies are said to make up the data plane, which includes not only data management but also request processing
and response. The service mesh also includes a control plane for managing the interaction between services, mediated by their sidecar proxies. There are several options
for service mesh architecture, including Istio (a joint project among Google, IBM, and Lyft), Buoyant's Linkerd,[28] and others.

Criticism
The microservices approach is subject to criticism for a number of issues:

Services form information barriers[29]


Inter-service calls over a network have a higher cost in terms of network latency and message processing time than in-process calls within a monolithic service
process[5]
Testing and deployment are more complicated[30]
Moving responsibilities between services is more difficult.[1] It may involve communication between different teams, rewriting the functionality in another language or
fitting it into a different infrastructure[5]
Viewing the size of services as the primary structuring mechanism can lead to too many services when the alternative of internal modularization may lead to a simpler
design.[31]
Two-phase commits are regarded as an anti-pattern in microservices-based architectures, as this results in a tighter coupling of all the participants within the
transaction. However, the lack of this technology causes awkward dances which have to be implemented by all the transaction participants in order to maintain data
consistency[32]
Development and support of many services is more challenging if they are built with different tools and technologies - this is especially a problem if engineers move
between projects frequently

Cognitive load
The architecture introduces additional complexity and new problems to deal with, such as network latency, message formats, load balancing and fault tolerance.[33][30]

The complexity of a monolithic application doesn't disappear if it gets re-implemented as a set of microservice applications. Some of the complexity gets translated into
operational complexity[34]. Other places where the complexity manifests itself are the increased network traffic and resulting slower performance. Also, an application
made up of any number of microservices has a larger number of interface points to access its respective ecosystem, which increases the architectural complexity.[35] This
kind of complexity can be reduced by standardizing the access mechanism. The Web as a system standardized the access mechanism by retaining the same access
mechanism between browser and application resource over the last 20 years. Measured by the number of Web pages indexed by Google, the Web grew from 26 million
pages in 1998 to around 60 trillion individual pages by 2015 without the need to change its access mechanism. The Web itself is an example that the complexity inherent
in traditional monolithic software systems can be overcome.[36][37]

Implementations
Thorntail by Red Hat
Helidon by Oracle
Meecrowave by Apache

See also
Conway's law
Cross-cutting concern
DevOps
Fallacies of distributed computing
gRPC
Microkernel
Representational state transfer (REST)
Service-oriented architecture (SOA)
Unix philosophy
Self-contained Systems
Serverless computing
Web-oriented architecture (WOA)

References
1. Chen, Lianping (2018). Microservices: Architecting for Continuous Delivery and DevOps (https://www.researchgate.net/publication/323944215_Microservices_Architecting_for_Continuous_Delivery_and_DevOps). The IEEE International Conference on Software Architecture (ICSA 2018) (http://icsa-conferences.org/2018/). IEEE.
2. Richardson, Chris. "Microservice architecture pattern" (http://microservices.io/patterns/microservices.html). microservices.io. Retrieved 2017-03-19.
3. Chen, Lianping; Ali Babar, Muhammad (2014). Towards an Evidence-Based Understanding of Emergence of Architecture through Continuous Refactoring in Agile Software Development. The 11th Working IEEE/IFIP Conference on Software Architecture (WICSA 2014). IEEE. doi:10.1109/WICSA.2014.45.
4. Balalaie, Armin; Heydarnoori, Abbas; Jamshidi, Pooyan (May 2016). "Microservices Architecture Enables DevOps: Migration to a Cloud-Native Architecture". IEEE Software. 33 (3): 42–52. doi:10.1109/ms.2016.64. hdl:10044/1/40557. ISSN 0740-7459.
5. Martin Fowler. "Microservices" (http://martinfowler.com/articles/microservices.html). Archived from the original on 14 February 2018.
6. Newman, Sam (2015-02-20). Building Microservices. O'Reilly Media. ISBN 978-1491950357.
7. Wolff, Eberhard (2016-10-12). Microservices: Flexible Software Architectures (http://microservices-book.com). ISBN 978-0134602417.
8. "Micro-services for performance" (https://vanilla-java.github.io/2016/03/22/Micro-services-for-performance.html). Vanilla Java. 2016-03-22. Retrieved 2017-03-19.
9. Nadareishvili, I.; Mitra, R.; McLarty, M.; Amundsen, M. Microservice Architecture: Aligning Principles, Practices, and Culture. O'Reilly, 2016.
10. "Backends For Frontends Pattern" (https://docs.microsoft.com/en-us/azure/architecture/patterns/backends-for-frontends). Microsoft Azure Cloud Design Patterns. Microsoft.
11. Lucas Krause. Microservices: Patterns and Applications. ASIN B00VJ3NP4A (https://www.amazon.com/dp/B00VJ3NP4A).
12. Martin Fowler. "Microservice Prerequisites" (https://martinfowler.com/bliki/MicroservicePrerequisites.html).
13. Richardson, Chris (November 2018). Microservice Patterns. Chapter 1, section 1.4.1, Scale cube and microservices. Manning Publications. ISBN 9781617294549.
14. James Lewis and Martin Fowler. "Microservices" (http://martinfowler.com/articles/microservices.html).
15. Rodgers, Peter. "Service-Oriented Development on NetKernel - Patterns, Processes & Products to Reduce System Complexity. Web Services Edge 2005 East: CS-3" (http://www.cloudcomputingexpo.com/node/80883). CloudComputingExpo 2005. SYS-CON TV. Retrieved 3 July 2017.
16. Löwy, Juval (October 2007). "Every Class a WCF Service" (https://channel9.msdn.com/Shows/ARCast.TV/ARCastTV-Every-Class-a-WCF-Service-with-Juval-Lowy). Channel9, ARCast.TV.
17. Löwy, Juval (2007). Programming WCF Services, 1st Edition. pp. 543–553.
18. Löwy, Juval (May 2009). "Every Class As a Service" (https://blogs.msdn.microsoft.com/drnick/2009/04/29/wcf-at-teched-2009/). Microsoft TechEd Conference, SOA206.
19. Rodgers, Peter. "Service-Oriented Development on NetKernel - Patterns, Processes & Products to Reduce System Complexity" (http://www.cloudcomputingexpo.com/node/80883). CloudComputingExpo. SYS-CON Media. Retrieved 19 August 2015.
20. Russell, Perry; Rodgers, Peter; Sellman, Royston (2004). "Architecture and Design of an XML Application Platform" (http://www.hpl.hp.com/techreports/2004/HPL-2004-23.html). HP Technical Reports. p. 62. Retrieved 20 August 2015.
21. Hitchens, Ron (Dec 2014). Swaine, Michael, ed. "Your Object Model Sucks". PragPub Magazine: 15.
22. Pautasso, Cesare (2017). "Microservices in Practice, Part 1: Reality Check and Service Design" (http://ieeexplore.ieee.org/document/7819415/). IEEE Software. 34 (1): 91–98. doi:10.1109/MS.2017.24.
23. "Continuous Deployment: Strategies" (https://www.javacodegeeks.com/2014/12/continuous-deployment-strategies.html). javacodegeeks.com. Retrieved 28 December 2016.
24. Claudio Guidi (2017-03-29). "What is a microservice? (from a linguistic point of view)" (http://claudioguidi.blogspot.it/2017/03/what-microservice-from-linguisitc.html).
25. Jolie Team. "Vision of microservices revolution" (http://www.jolie-lang.org/vision.html).
26. Fabrizio Montesi. "Programming Microservices with Jolie - Part 1: Data formats, Proxies, and Workflows" (https://fmontesi.github.io/2015/02/06/programming-microservices-with-jolie.html).
27. Wolff, Eberhard. Microservices - A Practical Guide (http://practical-microservices.com). ISBN 978-1717075901.
28. "What's a service mesh?" (https://blog.buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/). Buoyant. 2017-04-25. Retrieved 5 December 2018.
29. Jan Stenberg (11 August 2014). "Experiences from Failing with Microservices" (http://www.infoq.com/news/2014/08/failing-microservices).
30. "Developing Microservices for PaaS with Spring and Cloud Foundry" (http://www.infoq.com/presentations/microservices-pass-spring-cloud-foundry).
31. Tilkov, Stefan (17 November 2014). "How small should your microservice be?" (https://www.innoq.com/blog/st/2014/11/how-small-should-your-microservice-be/). innoq.com. Retrieved 4 January 2017.
32. Richardson, Chris (November 2018). Microservice Patterns. Chapter 4, Managing transactions with sagas. Manning Publications. ISBN 9781617294549.
33. Pautasso, Cesare (2017). "Microservices in Practice, Part 2: Service Integration and Sustainability" (http://ieeexplore.ieee.org/document/7888407/). IEEE Software. 34 (2): 97–104. doi:10.1109/MS.2017.56.
34. Martin Fowler. "Microservice Trade-Offs" (https://www.martinfowler.com/articles/microservice-trade-offs.html#ops).
35. "BRASS Building Resource Adaptive Software Systems". U.S. Government. DARPA. April 7, 2015. "Access to system components and the interfaces between clients and their applications, however, are mediated via a number of often unrelated mechanisms, including informally documented application programming interfaces (APIs), idiosyncratic foreign function interfaces, complex ill-understood model definitions, or ad hoc data formats. These mechanisms usually provide only partial and incomplete understanding of the semantics of the components themselves. In the presence of such complexity, it is not surprising that applications typically bake-in many assumptions about the expected behavior of the ecosystem they interact with."
36. Alpert, Jesse; Hajaj, Nissan. "We knew the web was big" (http://googleblog.blogspot.co.at/2008/07/we-knew-web-was-big.html). Official Google Blog. Retrieved 22 August 2015.
37. "The Story" (http://www.google.com/insidesearch/howsearchworks/thestory/). How search works. Retrieved 22 August 2015.

Further reading
S. Newman, Building Microservices – Designing Fine-Grained Systems, O'Reilly, 2015. ISBN 978-1491950357.
I. Nadareishvili et al., Microservices Architecture – Aligning Principles, Practices and Culture (http://transform.ca.com/rs/117-QWV-692/images/CA%20Technologies%20-%20OReilly%20Microservice%20Architecture%20eBook.pdf), O'Reilly, 2016. ISBN 978-1-491-95979-4.
SEI SATURN 2015 microservices workshop, https://github.com/michaelkeeling/SATURN2015-Microservices-Workshop
Wijesuriya, Viraj Brian (2016-08-29). Microservice Architecture, Lecture Notes (http://www.slideshare.net/tyrantbrian/microservice-architecture-65505794), University of Colombo School of Computing, Sri Lanka.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Microservices&oldid=875748269"
This page was last edited on 28 December 2018, at 19:17 (UTC).
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy
Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
Google Cloud Platform Overview
Contents
GCP resources
Accessing resources through services
Global, regional, and zonal resources
Projects
This overview is designed to help you understand the overall landscape of Google Cloud Platform (GCP). Here, you'll take a brief
look at some of the commonly used features and get pointers to documentation that can help you go deeper. Knowing what's
available and how the parts work together can help you make decisions about how to proceed. You'll also get pointers to some
tutorials that you can use to try out GCP in various scenarios.

GCP resources

GCP consists of a set of physical assets, such as computers and hard disk drives, and virtual resources, such as virtual
machines (VMs), that are contained in Google's data centers around the globe. Each data center location is in a global region.
Regions include Central US, Western Europe, and East Asia. Each region is a collection of zones, which are isolated from each
other within the region. Each zone is identified by a name that combines a letter identifier with the name of the region. For
example, zone a in the East Asia region is named asia-east1-a .
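
For example, you can list the zones in a region with the gcloud command-line tool. This is an illustrative CLI fragment; it requires an initialized Cloud SDK, and the output columns may vary by SDK version:

```shell
# List the zones whose names start with the asia-east1 region prefix.
gcloud compute zones list --filter="name ~ ^asia-east1"
```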

This distribution of resources provides several benefits, including redundancy in case of failure and reduced latency by locating
resources closer to clients. This distribution also introduces some rules about how resources can be used together.

Accessing resources through services

In cloud computing, what you might be used to thinking of as software and hardware products become services. These services
provide access to the underlying resources. The list of available GCP services is long, and it keeps growing. When you develop
your website or application on GCP, you mix and match these services into combinations that provide the infrastructure you
need, and then add your code to enable the scenarios you want to build.

Global, regional, and zonal resources

Some resources can be accessed by any other resource, across regions and zones. These global resources include
preconfigured disk images, disk snapshots, and networks. Some resources can be accessed only by resources that are located
in the same region. These regional resources include static external IP addresses. Other resources can be accessed only by
resources that are located in the same zone. These zonal resources include VM instances, their types, and disks.

The following diagram shows the relationship between global scope, regions and zones, and some of their resources:

The scope of an operation varies depending on what kind of resources you're working with. For example, creating a network is a
global operation because a network is a global resource, while reserving an IP address is a regional operation because the
address is a regional resource.
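
The same distinction shows up directly in the corresponding gcloud commands. The following is a sketch only: the resource names are placeholders, and flag spellings may differ slightly between SDK versions:

```shell
# Creating a network is a global operation: no --region or --zone flag.
gcloud compute networks create example-network --subnet-mode=auto

# Reserving a static external IP address is a regional operation.
gcloud compute addresses create example-address --region=us-central1
```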

As you start to optimize your GCP applications, it's important to understand how these regions and zones interact. For example,
even if you could, you wouldn't want to attach a disk in one region to a computer in a different region because the latency you'd
introduce would make for very poor performance. Thankfully, GCP won't let you do that; disks can only be attached to computers
in the same zone.

Depending on the level of self-management required for the computing and hosting service you choose, you might or might not
need to think about how and where resources are allocated.

For more information about the geographical distribution of GCP, see Geography and Regions.

Projects

Any GCP resources that you allocate and use must belong to a project. You can think of a project as the organizing entity for
what you're building. A project is made up of the settings, permissions, and other metadata that describe your applications.
Resources within a single project can work together easily, for example by communicating through an internal network, subject to
the regions-and-zones rules. The resources that each project contains remain separate across project boundaries; you can only
interconnect them through an external network connection.

Each GCP project has:

A project name, which you provide.
A project ID, which you can provide or GCP can provide for you.
A project number, which GCP provides.

As you work with GCP, you'll use these identifiers in certain command lines and API calls. The following screenshot shows a
project name, its ID, and number:

In this example:

Example Project is the project name.
example-id is the project ID.
123456789012 is the project number.

Each project ID is unique across GCP. Once you have created a project, you can delete the project, but its ID can never be used
again.
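
You can view all three identifiers for an existing project with gcloud. This is illustrative; example-id is a placeholder for a real project ID:

```shell
# Show a project's name, ID, and number.
gcloud projects describe example-id
# The output includes the fields name, projectId, and projectNumber.
```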

When billing is enabled, each project is associated with one billing account. Multiple projects can have their resource usage
billed to the same account.

A project serves as a namespace. This means every resource within each project must have a unique name, but you can usually
reuse resource names if they are in separate projects. Some resource names must be globally unique. Refer to the
documentation for the resource for details.

Ways to interact with the services

GCP gives you three basic ways to interact with the services and resources.

Google Cloud Platform Console

The Google Cloud Platform Console provides a web-based, graphical user interface that you can use to manage your GCP
projects and resources. When you use the GCP Console, you create a new project, or choose an existing project, and use the
resources that you create in the context of that project. You can create multiple projects, so you can use projects to separate
your work in whatever way makes sense for you. For example, you might start a new project if you want to make sure only
certain team members can access the resources in that project, while all team members can continue to access resources in
another project.

Command-line interface

If you prefer to work in a terminal window, the Google Cloud SDK provides the gcloud command-line tool, which gives you
access to the commands you need. The gcloud tool can be used to manage both your development workflow and your GCP
resources. See the gcloud reference for the complete list of available commands.

GCP also provides Cloud Shell, a browser-based, interactive shell environment for GCP. You can access Cloud Shell from the
GCP console. Cloud Shell provides:

A temporary Compute Engine virtual machine instance.
Command-line access to the instance from a web browser.
A built-in code editor.
5 GB of persistent disk storage.
Pre-installed Google Cloud SDK and other tools.
Language support for Java, Go, Python, Node.js, PHP, Ruby and .NET.
Web preview functionality.
Built-in authorization for access to GCP Console projects and resources.

Client libraries

The Cloud SDK includes client libraries that enable you to easily create and manage resources. GCP client libraries expose APIs
for two main purposes:

App APIs provide access to services. App APIs are optimized for supported languages, such as Node.js and Python. The
libraries are designed around service metaphors, so you can work with the services more naturally and write less
boilerplate code. The libraries also provide helpers for authentication and authorization.
Admin APIs offer functionality for resource management. For example, you can use admin APIs if you want to build your
own automated tools.

You also can use the Google API client libraries to access APIs for products such as Google Maps, Google Drive, and YouTube.

Pricing

To understand Google's principles about how pricing works on GCP, see the Pricing page. To understand pricing for individual
services, see the product pricing section.

You can also take advantage of some tools to help you evaluate the costs of using GCP.

The pricing calculator provides a quick and easy way to estimate what your GCP usage will look like. You can provide
details about the services you want to use, such as the number of Compute Engine instances, persistent disks and their
sizes, and so on, and then see a pricing estimate.
The total cost of ownership (TCO) tool evaluates the relative costs for running your compute load in the cloud, and provides
a financial estimate. The tool provides several inputs for cost modeling, which you can adjust, and then compares
estimated costs on GCP and AWS. This tool does not model all components of a typical application, such as storage and
networking.



Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated December 5, 2018.


gcloud Overview
Contents
What is gcloud?
gcloud and the SDK
Downloading gcloud
Release levels

This page contains an overview of the gcloud command-line tool and its common command patterns and quirks.

What is gcloud?

gcloud is a tool that provides the primary command-line interface to Google Cloud Platform. You can use this tool to perform
many common platform tasks either from the command-line or in scripts and other automations.

For example, you can use gcloud to create and manage:

Google Compute Engine virtual machine instances and other resources
Google Cloud SQL instances
Google Kubernetes Engine clusters
Google Cloud Dataproc clusters and jobs
Google Cloud DNS managed zones and record sets
Google Cloud Deployment manager deployments

You can also use gcloud to deploy App Engine applications and perform other tasks. Read the gcloud reference to learn more
about the capabilities of this tool.

gcloud and the SDK

gcloud is a part of the Google Cloud SDK. You must download and install the SDK on your system and initialize it before you
can use gcloud .

By default, the SDK installs those gcloud commands that are at the General Availability and Preview levels only. Additional
functionality is available in SDK components named alpha and beta . These components allow you to use gcloud to work with
Google Cloud Bigtable, Google Cloud Dataflow and other parts of the Cloud Platform at earlier release levels than General
Availability.

gcloud releases have the same version number as the SDK. The current SDK version is 228.0.0. You can download and install
previous versions of the SDK from the download archive.

Note: gcloud is available automatically in Google Cloud Shell. If you are using Cloud Shell, you do not need to install gcloud manually
in order to use it.

Downloading gcloud

You can download the latest version of Cloud SDK, which includes gcloud , from the download page.

Release levels

gcloud commands have the following release levels:

Release level           Label     Description

General Availability    None      Commands are considered fully stable and available for production
                                  use. Advance warnings will be made for commands that break current
                                  functionality and will be documented in the release notes.

Beta                    beta      Commands are functionally complete, but may still have some
                                  outstanding issues. Breaking changes to these commands may be
                                  made without notice.

Alpha                   alpha     Commands are in early release and may change without notice.

Preview                 preview   Commands may be unstable and may change without notice.

The alpha and beta components are not installed by default when you install the SDK. You must install these separately using
the gcloud components install command. If you try to run an alpha or beta command and the corresponding component is not
installed, gcloud will prompt you to install it.
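
Installing the extra components is a one-time step, sketched below as an illustrative CLI fragment:

```shell
# Install the beta and alpha command groups.
gcloud components install beta
gcloud components install alpha

# Commands at those release levels then become available, for example:
gcloud beta compute instances list
```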

Command groups

Within each release level, gcloud commands are organized into a nested hierarchy of command groups, each of which
represents a product or feature of the Cloud Platform or its functional subgroups.

For example:

Command group               Description

gcloud compute              Commands related to Compute Engine in general availability

gcloud compute instances    Commands related to Compute Engine instances in general availability

gcloud beta compute         Commands related to Compute Engine in Beta

gcloud alpha app            Commands related to managing App Engine deployments in Alpha

Running gcloud commands

You can run gcloud commands from the command line in the same way you use other command-line tools. You can also
run gcloud commands from within scripts and other automations, for example, when using Jenkins to automate Cloud Platform
tasks.

Note: gcloud reference documentation and examples use backslashes, \, to denote long commands. You can execute these
commands as-is (Windows users can use ^ instead of \). If you'd like to remove the backslashes, be sure to remove newlines as well
to ensure the command is read as a single line.

Properties

gcloud properties are settings that affect the behavior of gcloud and other Cloud SDK tools. Some of these properties can also
be set by global or command flags; in that case, the value set by the flag takes precedence.

A list of available properties can be found here.

Configurations

A configuration is a named set of gcloud properties. It works like a profile, essentially.

Starting off with Cloud SDK, you'll work with a single configuration named default and you can set properties by running
either gcloud init or gcloud config set . This single default configuration is suitable for most use cases.

If you'd like to work with multiple projects or authorization accounts, you can set up multiple configurations with gcloud config
configurations create and switch among them accordingly.
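
As an illustrative sketch (the configuration name, project ID, and account below are placeholders):

```shell
# Create a second configuration and set its properties.
gcloud config configurations create dev-config
gcloud config set project my-dev-project
gcloud config set account dev@example.com

# Switch back to the default configuration.
gcloud config configurations activate default

# See all configurations and which one is active.
gcloud config configurations list
```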

For a detailed account of these concepts, see these explorations of configurations and their management.
Global flags

gcloud provides a set of gcloud-wide flags that govern the behavior of commands on a per-invocation level. Flags override any
values set in SDK properties.

Positional Arguments and Flags

While both positional arguments and flags affect the output of a gcloud command, there is a subtle difference in their use cases.
A positional argument identifies the entity on which a command operates, while a flag sets a variation in the command's
behavior.

Use of stdout and stderr

Successful output of gcloud commands is written to stdout. All other types of responses (prompts, warnings, and errors) are
written to stderr. Note that anything written to stderr is not stable and should not be scripted against.

For a definitive list of guidelines on handling output, read this section.

Prompting

To protect against unintended destructive actions, gcloud will confirm your intentions before executing commands such
as gcloud projects delete .

You can also expect prompts if you were to create a Google Compute Engine virtual machine instance, say 'test-instance',
using gcloud compute instances create test-instance . You will be asked to choose a zone to create the instance in.

To disable prompting, use the --quiet flag.

Note that the wording of prompts can change and should not be scripted against.

Suppressing prompting, writing to the terminal, and logging

The --quiet flag (also, -q ) for gcloud disables all interactive prompts when running gcloud commands and comes in handy
when scripting. If input is needed, defaults are used; if no default exists, an error is raised.

To suppress printing of command output to standard output and standard error in the terminal, use the --no-user-output-
enabled flag.

To adjust verbosity of logs instead, use the --verbosity flag and define the appropriate level.

Determining output structure

By default, when a gcloud command returns a list of resources, they are pretty-printed to standard output. To produce more
meaningful output, the format, filter, and projection flags allow you to fine-tune your output.

If you'd like to define just the format of your output, use the --format flag to produce a tabulated or flattened version of your
output (for interactive display) or a machine-readable version of the output ( json , csv , yaml , value ).

To format a list of keys that select resource data values, use projections . To further refine your output to criteria you define,
use filter .

If you'd like to get familiar with the filter and format functionality, a quick interactive tutorial is available from the gcloud
documentation.

What's next

Learn more about gcloud commands in the gcloud Reference.



Last updated December 11, 2018.


App Engine Documentation

Choosing an App Engine Environment

Contents
Choosing your App Engine environment
Comparing high-level features
Comparing the flexible environment to Compute Engine
Migrating from standard to the flexible environment

You can run your applications in App Engine using the flexible environment or standard environment. You can also choose to
simultaneously use both environments for your application and allow your services to take advantage of each environment's
individual benefits.

Choosing your App Engine environment

Structuring your applications by using a microservice architecture aligns best with App Engine, especially if you decide to utilize
both environments. There are several factors to consider when determining which environment is better suited to your
application and its services. Use the following sections to understand which environment best meets your application's needs.

When to choose the flexible environment

Using the App Engine flexible environment means that your application instances run within Docker containers on Google Compute Engine
virtual machines (VMs).

Generally, good candidates for the flexible environment are applications that receive consistent traffic, experience regular traffic fluctuations,
or meet the parameters for scaling up and down gradually.

The flexible environment is optimal for applications with the following characteristics:

Source code that is written in a version of any of the supported programming languages:
Python, Java, Node.js, Go, Ruby, PHP, or .NET
Runs in a Docker container that includes a custom runtime or source code written in other programming languages.
Depends on other software, including operating system packages such as imagemagick, ffmpeg, libgit2, or others through apt-
get.
Uses or depends on frameworks that include native code.
Accesses the resources or services of your Cloud Platform project that reside in the Compute Engine network.

When to choose the standard environment

Using the App Engine standard environment means that your application instances run in a sandbox, using the runtime environment of a
supported language listed below.

For some languages, building an application to run in the standard environment is more constrained and involved, but your applications will
have faster scale-up times.

The standard environment is optimal for applications with the following characteristics:

Source code is written in specific versions of the supported programming languages:
Python 2.7, Python 3.7 (beta)
Java 8, Java 7
Node.js 8 (beta)
PHP 5.5, PHP 7.2 (beta)
Go 1.6, 1.8, 1.9, and Go 1.11 (beta)
Intended to run for free or at very low cost, where you pay only for what you need and when you need it. For example, your
application can scale to 0 instances when there is no traffic.
Experiences sudden and extreme spikes of traffic which require immediate scaling.
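
The choice of environment is ultimately declared in your service's app.yaml file. The following is a minimal illustrative sketch, with runtime values drawn from the versions listed above:

```yaml
# Standard environment: a sandboxed runtime that can scale to zero.
runtime: python37

# Flexible environment: instead declare a runtime plus env: flex, e.g.
#   runtime: python
#   env: flex
```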

Comparing high-level features

The following table summarizes the differences between the two environments:

Feature                    Flexible environment                Standard environment

Instance startup time      Minutes                             Seconds

Maximum request timeout    60 minutes                          60 seconds

Background threads         Yes                                 Yes, with restrictions

Background processes       Yes                                 No

SSH debugging              Yes                                 No

Scaling                    Manual, Automatic                   Manual, Basic, Automatic

Scale to zero              No, minimum 1 instance              Yes

Writing to local disk      Yes, ephemeral (disk initialized    No
                           on each VM startup)

Modifying the runtime      Yes (through Dockerfile)            No

Automatic in-place         Yes (excludes container image       Yes
security patches           runtime)

Network access             Yes                                 Node.js, Python 3, PHP 7.2, Go 1.11: Yes.
                                                               Python, Go, and PHP (billing-enabled):
                                                               Only via App Engine services (includes
                                                               outbound sockets).

Supports installing        Yes                                 No
third-party binaries

Location                   North America, Asia Pacific,        North America, Asia Pacific, or Europe
                           or Europe

Pricing                    Based on usage of vCPU, memory,     Based on instance hours
                           and persistent disks

For an in-depth comparison of the environments, see the guide for your language: Python, Java, Go, or PHP.

Comparing the flexible environment to Compute Engine

While the flexible environment runs services in instances on Compute Engine VMs, the flexible environment differs from
Compute Engine in the following ways:

The VM instances used in the flexible environment are restarted on a weekly basis. During restarts, Google's management
services apply any necessary operating system and security updates.
You always have root access to Compute Engine VM instances. By default, SSH access to the VM instances in the flexible
environment is disabled. If you choose, you can enable root access to your app's VM instances.
The geographical region of the VM instances used in the flexible environment is determined by the location that you specify
for the App Engine application of your GCP project. Google's management services ensure that the VM instances are co-
located for optimal performance.

Migrating from standard to the flexible environment

If you have an application in the standard environment, you might want to move some services to the flexible environment. For
guidance, see the recommendations for Python, Java, Go, and PHP.

To migrate specific services, see the instructions for Python, Java, Go, and PHP.

Scripting gcloud commands
Contents
Authorization
Disabling prompts
Handling output
Examples of filtering and formatting
Examples of scripting
More information

In addition to running gcloud commands from the command line, you can also run them from scripts or other automations — for
example, when using Jenkins to drive automation of Google Cloud Platform tasks.

Authorization

Google Cloud SDK tools support two authorization methods:

User account authorization
Service account authorization

User account authorization is recommended if you are running a script or other automation on a single machine.

To authorize access and perform other common Cloud SDK setup steps:

gcloud init

Service account authorization is recommended if you are deploying a script or other automation across machines in a production
environment. It is also the recommended authorization method if you are running gcloud commands on a Google Compute
Engine virtual machine instance where all users have access to root .

To use service account authorization, use an existing service account or create a new one through the Google Cloud Platform
Console. From the options column of the service accounts table, create and download the associated private key as a JSON-
formatted key file.

To run the authorization, use gcloud auth activate-service-account :

gcloud auth activate-service-account --key-file [KEY_FILE]

You can SSH into your VM instance by using gcloud compute ssh , which takes care of authentication. SSH configuration files
can be configured using gcloud compute config-ssh .

For detailed instructions regarding authorizing Cloud SDK tools, refer to this comprehensive guide.

Disabling prompts

Some gcloud commands are interactive, prompting users for confirmation of an operation or requesting additional input for an
entered command.

In most cases, this is not desirable when running commands in a script or other automation. You can disable prompts
from gcloud commands by setting the disable_prompts property in your configuration to True or by using the global --
quiet or -q flag. Most interactive commands have default values when additional confirmation or input is required. If prompts
are disabled, these default values are used.

For example:

gcloud --quiet debug targets list

Note that the --quiet flag is inserted immediately after gcloud , at the front of the command.

Handling output

If you want a script or other automation to perform actions conditionally based on the output of a gcloud command, observe the
following:

Don't depend on messages printed to standard error.
These may change in future versions of gcloud and break your automation.
Don't depend on the raw output of messages printed to standard output.
The default output for any command may change in a future release. You can minimize the impact of those changes by
using the --format flag with one of --format=json|yaml|csv|text|list to specify the values to be returned. Run $ gcloud
topic formats for more options.
You can modify the default output from --format by using projections . For increased granularity, use the --
filter flag to return a subset of the values based on an expression. You can then script against those returned values.
Examples of formatting and filtering output can be found in the section below.
Do depend on command exit status.
If the exit status is not zero, an error occurred and the output may be incomplete unless the command documentation
notes otherwise. For example, a command that creates multiple resources may only create a few, list them on the standard
output, and then exit with a non-zero status. Alternatively, you can use the show_structured_logs property to parse error
logs. Run $ gcloud config for more details.
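
The exit-status rule can be captured in a small helper. The following is plain bash; run_or_warn is a name invented here for illustration, and in real use you would substitute a gcloud invocation for the true placeholder:

```shell
#!/bin/bash
# Act on a command's exit status, never on the text it prints to stderr.
run_or_warn() {
  if "$@"; then
    echo "ok: $*"
  else
    # $? still holds the failed command's exit status here.
    echo "failed (exit $?): $*" >&2
    return 1
  fi
}

# 'true' stands in for a real command such as: gcloud projects list
run_or_warn true
```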

Examples of filtering and formatting

An interactive tutorial about using the filter and format flags is also available from the gcloud documentation.

The following are examples of common uses of formatting and filtering with gcloud commands:

List instances created in zone us-central1-a:

gcloud compute instances list --filter="zone:us-central1-a"

List in JSON format those projects where the labels match specific values (e.g. label.env is 'test' and label.version is alpha):

gcloud projects list --format="json" \
    --filter="labels.env=test AND labels.version=alpha"

List projects with their creation date and time specified in the local timezone:

gcloud projects list \
    --format="table(name, project_id, createTime.date(tz=LOCAL))"

List projects that were created after a specific date in table format:

gcloud projects list \
    --format="table(projectNumber,projectId,createTime)" \
    --filter="createTime.date('%Y-%m-%d', Z)='2016-05-11'"

Note that in the last example, a projection on the key was used. The filter is applied on the createTime key after the date
formatting is set.

List a nested table of the quotas of a region:

gcloud compute regions describe us-central1 \
    --format="table(quotas:format='table(metric,limit,usage)')"

Print a flattened list of global quotas in CSV format:

gcloud compute project-info describe --flatten='quotas[]' \
    --format='csv(quotas.metric,quotas.limit,quotas.usage)'

List compute instance resources with box decorations and titles, sorted by name, in table format:

gcloud compute instances list \
    --format='table[box,title=Instances](name:sort=1,zone:title=zone,status)'

List the email address of the currently authenticated user:

gcloud info --format='value(config.account)'

Examples of scripting
Using this functionality of format and filter, you can combine gcloud commands into a script to easily extract embedded
information.

If you were to list all the keys associated with all your projects' service accounts, you'd need to iterate over all your projects and
for each project, get all the service accounts associated with it. For each service account, get all the keys. This can be
accomplished as demonstrated below:

As a bash script:

#!/bin/bash
for project in $(gcloud projects list --format="value(projectId)")
do
  echo "ProjectId: $project"
  for robot in $(gcloud iam service-accounts list --project $project --format="value(email)")
  do
    echo " -> Robot $robot"
    for key in $(gcloud iam service-accounts keys list --iam-account $robot --project $project --format="value(name.basename())")
    do
      echo "    $key"
    done
  done
done

Or as Windows PowerShell:

foreach ($project in gcloud projects list --format="value(projectId)")
{
    Write-Host "ProjectId: $project"
    foreach ($robot in gcloud iam service-accounts list --project $project --format="value(email)")
    {
        Write-Host " -> Robot $robot"
        foreach ($key in gcloud iam service-accounts keys list --iam-account $robot --project $project --format="value(name.basename())")
        {
            Write-Host "    $key"
        }
    }
}

Oftentimes, you'll need to parse output for processing. For example, it'd be useful to write the service account information into an
array and segregate values in the multi-valued CSV-formatted serviceAccounts.scope() field. The script below does just this:

#!/bin/bash
for scopesInfo in $(
gcloud compute instances list --filter=name:instance-1 \
--format="csv[no-heading](name,id,serviceAccounts[].email.list(),
serviceAccounts[].scopes[].map().list(separator=;))")
do
IFS=',' read -r -a scopesInfoArray<<< "$scopesInfo"
NAME="${scopesInfoArray[0]}"
ID="${scopesInfoArray[1]}"
EMAIL="${scopesInfoArray[2]}"
SCOPES_LIST="${scopesInfoArray[3]}"
echo "NAME: $NAME, ID: $ID, EMAIL: $EMAIL"
echo ""
IFS=';' read -r -a scopeListArray<<< "$SCOPES_LIST"
for SCOPE in "${scopeListArray[@]}"
do
echo " SCOPE: $SCOPE"
done
done

More information

For a step-by-step guide to building basic scripts with gcloud , refer to this beginner's guide to automating GCP tasks.

More involved examples of the output configuring capabilities built into gcloud filters , formats , and projections can be
found in this blog post about filtering and formatting.
Cloud SDK

gcloud auth activate-service-account


NAME

gcloud auth activate-service-account - authorize access to Google Cloud Platform with a service account

SYNOPSIS

gcloud auth activate-service-account [ACCOUNT] --key-file=KEY_FILE [--password-file=PASSWORD_FILE | --prompt-for-password] [GCLOUD_WIDE_FLAG …]

DESCRIPTION

To allow gcloud (and other tools in Cloud SDK) to use service account credentials to make requests, use this command to
import these credentials from a file that contains a private authorization key, and activate them for use in gcloud . gcloud
auth activate-service-account serves the same function as gcloud auth login but uses a service account rather than
Google user credentials.

For more information on authorization and credential types, see: https://cloud.google.com/sdk/docs/authorizing.

Key File

To obtain the key file for this command, use either the Google Cloud Platform Console or gcloud iam service-accounts
keys create . The key file can be .json (preferred) or .p12 (legacy) format. In the case of legacy .p12 files, a separate
password might be required and is displayed in the Console when you create the key.

Credentials

Credentials will also be activated (similar to running gcloud config set account [ACCOUNT_NAME] ).

If a project is specified using the --project flag, the project is set in the active configuration, which is the same as
running gcloud config set project [PROJECT_NAME] . Any previously active credentials will be retained (though no longer
default) and can be displayed by running gcloud auth list .

If you want to delete previous credentials, see gcloud auth revoke .

Note: Service accounts use client quotas for tracking usage.

POSITIONAL ARGUMENTS

[ ACCOUNT ]

E-mail address of the service account.

REQUIRED FLAGS

--key-file = KEY_FILE

Path to the private key file.

OPTIONAL FLAGS

At most one of these may be specified:

--password-file = PASSWORD_FILE

Path to a file containing the password for the service account private key (only for a .p12 file).

--prompt-for-password

Prompt for the password for the service account private key (only for a .p12 file).

GCLOUD WIDE FLAGS


These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http,
--project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.

EXAMPLES

To authorize gcloud to access Google Cloud Platform using an existing service account while also specifying a project,
run:

$ gcloud auth activate-service-account \
    test-service-account@google.com \
    --key-file=/path/key.json --project=testproject

NOTES

These variants are also available:


$ gcloud alpha auth activate-service-account
$ gcloud beta auth activate-service-account

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated December 4, 2018.


Cloud SDK

gcloud
NAME

gcloud - manage Google Cloud Platform resources and developer workflow

SYNOPSIS

gcloud GROUP | COMMAND [--account=ACCOUNT] [--configuration=CONFIGURATION] [--flags-file=YAML_FILE] [--flatten=[KEY,…]] [--format=FORMAT] [--help] [--project=PROJECT_ID] …

DESCRIPTION

The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.

GLOBAL FLAGS

--account = ACCOUNT

Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.

--configuration = CONFIGURATION

The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.

--flags-file = YAML_FILE

A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.

--flatten =[ KEY ,…]

Flatten name[] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and --
filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .

--format = FORMAT

Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.

--help

Display detailed help.

--project = PROJECT_ID

The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.

--quiet , -q

Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.

--verbosity = VERBOSITY ; default="warning"

Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.

--version , -v

Print version information and exit. This flag is only available at the global level.

-h

Print a summary help and exit.

OTHER FLAGS

--log-http

Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.

--trace-token = TRACE_TOKEN

Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.

--user-output-enabled

Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.

GROUPS

GROUP is one of the following:

alpha

(ALPHA) Alpha versions of gcloud commands.

app

Manage your App Engine deployments.

auth

Manage oauth2 credentials for the Google Cloud SDK.

beta
(BETA) Beta versions of gcloud commands.

bigtable

Manage your Cloud Bigtable storage.

builds

Create and manage builds for Google Cloud Build.

components

List, install, update, or remove Google Cloud SDK components.

composer

Create and manage Cloud Composer Environments.

compute

Create and manipulate Google Compute Engine resources.

config

View and edit Cloud SDK properties.

container

Deploy and manage clusters of machines for running containers.

dataflow

Manage Google Cloud Dataflow jobs.

dataproc

Create and manage Google Cloud Dataproc clusters and jobs.

datastore

Manage your Cloud Datastore indexes.

debug

Commands for interacting with the Cloud Debugger.

deployment-manager

Manage deployments of cloud resources.

dns

Manage your Cloud DNS managed-zones and record-sets.

domains

Manage domains for your Google Cloud projects.

endpoints

Create, enable and manage API services.

firebase

Work with Google Firebase.

functions

Manage Google Cloud Functions.

iam

Manage IAM service accounts and keys.

iot

Manage Cloud IoT resources.

kms

Manage cryptographic keys in the cloud.

logging

Manage Stackdriver Logging.

ml

Use Google Cloud machine learning capabilities.

ml-engine

Manage Cloud ML Engine jobs and models.

organizations

Create and manage Google Cloud Platform Organizations.

projects

Create and manage project access policies.

pubsub

Manage Cloud Pub/Sub topics and subscriptions.

redis

Manage Cloud Memorystore Redis resources.

services
List, enable and disable APIs and services.

source

Cloud git repository commands.

spanner

Command groups for Cloud Spanner.

sql

Create and manage Google Cloud SQL databases.

topic

gcloud supplementary help.

COMMANDS

COMMAND is one of the following:

docker

(DEPRECATED) Enable Docker CLI access to Google Container Registry.

feedback

Provide feedback to the Google Cloud SDK team.

help

Search gcloud help text.

info

Display information about the current gcloud environment.

init

Initialize or reinitialize gcloud.

version

Print version information for Cloud SDK components.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache 2.0 License. For details, see our Site Policies. Java is a registered
trademark of Oracle and/or its affiliates.

Last updated November 21, 2018.


gcloud topic flags-file
NAME

gcloud topic flags-file - --flags-file=YAML_FILE supplementary help

DESCRIPTION

The --flags-file = YAML-FILE flag, available to all gcloud commands, supports complex flag values in any command
interpreter.

Complex flag values that contain command interpreter special characters may be difficult to specify on the command line.
The combined list of special characters across commonly used command interpreters (shell, cmd.exe, PowerShell) is
surprisingly large. Among them are ", ', `, *, ?, [, ], (, ), $, %, #, ^, &, |, {, }, ;, \, <, >, space, tab, newline.
Add to that the separator characters for list and dict valued flags, and it becomes all but
impossible to construct portable command lines.

The --flags-file = YAML-FILE flag solves this problem by allowing command line flags to be specified in a YAML/JSON
file. String, numeric, list and dict flag values are specified using YAML/JSON notation and quoting rules.

Flag specification uses dictionary notation. Use a list of dictionaries for flags that must be specified multiple times.

For example, this YAML file defines values for Boolean, integer, floating point, string, dictionary and list valued flags:

--boolean:
--integer: 123
--float: 456.789
--string: A string value.
--dictionary:
a=b: c,d
e,f: g=h
i: none
j=k=l: m=$n,o=%p
"y:": ":z"
meta:
- key: foo
value: bar
- key: abc
value: xyz
--list:
- a,b,c
- x,y,z

If the file is named my-flags.yaml then the command line flag --flags-file=my-flags.yaml will set the specified flags on
any system using any command interpreter. --flags-file may be specified in a YAML file, and its value can be a YAML
list to reference multiple files.

This example specifies the --metadata flag multiple times:

- --metadata: abc
--integer: 123
- --metadata: xyz

Each --flags-file arg is replaced by its contents, so normal flag precedence applies. For example, given flags-1.yaml :

--zone: us-east2-a

flags-2.yaml :

--verbosity: info
--zone: us-central1-a

and command line:

gcloud compute instances describe \
    --flags-file=flags-1.yaml my-instance --flags-file=flags-2.yaml

the effective command line is:

gcloud compute instances describe \
    --zone=us-east2-a my-instance --verbosity=info --zone=us-central1-a

using zone us-central1-a (not us-east2-a , because flags-2.yaml , to the right of flags-1.yaml , has higher
precedence).
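The "rightmost wins" precedence can be sketched with a few lines of plain bash (this is only an illustration of the rule, not how gcloud itself is implemented): the expanded flag list is scanned left to right, so a later --zone value overwrites an earlier one.

```shell
# Scan an expanded flag list left to right; the last --zone assignment wins.
zone=""
for flag in --zone=us-east2-a --verbosity=info --zone=us-central1-a; do
  case "$flag" in
    --zone=*) zone="${flag#--zone=}" ;;
  esac
done
echo "effective zone: $zone"   # → effective zone: us-central1-a
```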
Implementing a static website in Google App Engine
Juliette Foucaut - 23 Aug 2013 - edited 10 Feb 2014

A robust website that successfully weathers spikes in traffic is a must when trying to sell and support a game over the internet. Last July Picroma suffered
temporarily when they released their game, Cube World, for purchase. They then had to deal with a DDoS attack. More recently, Oculus Rift's site stalled when
they tweeted about John Carmack's involvement in their technology.
Whilst we can only dream of enjoying the same level of interest, we'd like to spare ourselves the worry. When Doug researched a web hosting solution, he spotted
that Wolfire used Google App Engine. They list their reasons clearly on their blog, and given that they have years of hands-on experience on the matter, we
decided to follow their lead. As an added bonus, this solution is free for low levels of traffic.
We plan to eventually automate our site to support a blog, comments, a forum and purchases, but we currently only need a static website. This means limited
cleverness on the client side and none on the server, just some basic html and a minimum of scripting for content and a css for the looks. In this post I'll explain
how to host a "quick and dirty" static site on Google App Engine. It involves a few tricks but nothing too complicated. I've added a Links and tools section at the end
of this post where you'll find all the resources I used (including to train myself). I hope you find the information useful and that you enjoy crafting your site as much as I
did.

Google Sites didn't work out


In September last year, Doug started rewriting Avoyd from scratch. At the time we only needed a presence on the internet. Our old website, which I'd written 12
years earlier, was still online but it badly needed a redesign (using tables for layout, difficult to maintain, etc...). We were already using Google Sites for our
personal intranet. It's free, fast to configure and easy to use, great for our "we don't care about the looks" private space. So we decided to use Google Sites for our
website as well.
Whilst it is powerful, Google Sites turned out unsuitable for our public website. I won't start a detailed rant about it here, but I'll say that if at least the documentation
had been up to date - or in some cases had existed - it would have saved me a lot of time and frustration. We ended up with a decent-looking 2-page site including
a blog that served its purpose for several months. However, it wasn't the right solution for us in the long term. That's when we decided to build the site ourselves
from scratch, and to use Google App Engine to host it. Personally, I find that writing the website manually is easier, and I'm neither an expert in this domain nor am
I a coder. I'm learning on the fly as I implement the site with Doug's help.

Moving to Google App Engine


Let's get on with the practical tasks involved in setting up a static website in Google App Engine.

Styles: Bootstrap and Font Awesome


Twitter Bootstrap is a simple (and free) way to make a website look sleek by using their predefined css. I wanted a slightly different look to start from so I used a
css file from Bootswatch. They provide a few themes based on the bootstrap toolkit that can easily be tweaked.
I first downloaded the bootstrap.css file from one of Bootswatch's themes and added it to my css directory. Next I added the html <link
href="https://www.enkisoftware.com/css/bootstrap.css" rel="stylesheet"> to all my html files <head> sections. Finally I tweaked the css to get the looks I
wanted.
A side note on all the *.min.css and *.less files you'll encounter: I've not included them (nor referenced them in my html) because a. it's easier; and b. I want to see the
effects of editing my css file straight away.
With Bootstrap to address the layout, we also wanted to add some cool Font Awesome icons as visual cues, for instance for a link to Twitter or to illustrate a
button (as a side note, Doug also uses Font Awesome with libRocket for Avoyd's in-game GUI elements).
I followed the same principle as with Bootstrap: downloaded Font Awesome, added the file font-awesome.css to my css directory, then the html <link
href="https://www.enkisoftware.com/css/font-awesome.css" rel="stylesheet"> to all my html files <head> sections. I also needed to add the fonts themselves:
I simply dumped the "font" directory and all of its contents at the same level as my css folder.
Once this is done, all you have to do to add an icon is to include <i class="fa fa-thumbs-o-up"></i> in your html, et voilà !
At this point my file structure looks like this:

my-gae-website
static_website
css
bootstrap.css
font-awesome.css
font <= Font Awesome font files
*.eot, *.otf, *.svg, *.ttf, *.woff
*.html

As an example of how handy Bootstrap and Font Awesome are, the box above was created with a Bootstrap <pre> tag and Font Awesome icons.

Google App Engine


Download and setup
Although we have no server-side code (for now), we use the Python Google App Engine SDK. The steps needed to integrate your static website with GAE are as
follows:
Follow the Google App Engine site's download and installation instructions.
Note: if you've never used the Google App Engine Python SDK, it's a good idea to do the Hello, World! example. You won't need to know more for the purpose of a static website and
you won't need to understand the programming involved.

In the root directory where you keep your static website (in this example, my static website is in directory static_website, under root directory my-gae-website; see previous section), add an empty text file and rename it "app.yaml".

my-gae-website
static_website
app.yaml

Open the Google App Engine Launcher (I'll refer to it as the GAE launcher).
Select menu item File > Add Existing Application...
Set the application path to directory my-gae-website and select Add.
An application named "my-gae-website" is added to the list and is displayed in red. To make the application work, we need to add some code in app.yaml.
To start with we'll use a default configuration:
Paste the default text below into your app.yaml and save it. You'll notice that the app "my-gae-website" in the GAE launcher immediately turns from red text
to black.

application: my-gae-website
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
- url: /
  static_files: static_website/index.html
  upload: static_website/index.html

libraries:
- name: webapp2
  version: "2.5.2"

If you want, you can already run the "my-gae-website" application from the GAE launcher and view your website locally in your browser. However it may not show
anything yet: you need to configure app.yaml to serve your own static pages.
In the next section I'll explain how to configure app.yaml to serve a static website using our site as an example.

Configure app.yaml
This step sets the rules for displaying the contents of the website. In other words, app.yaml describes what will be returned (web pages, images...) when specific
urls are entered. We've found the syntax of app.yaml not completely straightforward so I'm going to describe in detail how I've configured it in our specific example.
For a general understanding of the principles of app.yaml, see the links section about regex and app.yaml at the end of this post.
If you want to skip this section go straight to the deployment part.

app.yaml overview
Our app.yaml reads as follows (This is an overview. I'll explain the handler section contents in the next section):

application: my-gae-website
version: 1
runtime: python27
api_version: 1
threadsafe: yes

handlers:
#root
- url: /
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

#serve our home page in case index.html is requested
- url: /index.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

#specific html pages:
- url: /about.html
  static_files: static_website/about.html
  upload: static_website/about.html

- url: /devlog.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

#specified zip file for download
- url: /downloads/AvoydV1_7_1.zip
  static_files: static_website/downloads/AvoydV1_7_1.zip
  upload: static_website/downloads/AvoydV1_7_1.zip

#the devlog post pages: since we're going to add more pages with the format
#devlogpost-<yyyymmdd-dailyIncrement>.html and I don't want to update the
#app.yaml each time, I've used a rough regex to limit the cases where an
#invalid url would return the default 404 not found page.
- url: /(devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html)
  static_files: static_website/\1
  upload: static_website/(devlogpost.*\.html)

#all images and support files (css, fonts...): return file if found,
#otherwise the default 404 page so it can be handled by sites that link
#directly to images.
- url: /(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))
  static_files: static_website/\1
  upload: static_website/(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))

#all other urls: return the enkisoftware 404 not found
- url: /.*
  static_files: static_website/notfound.html
  upload: static_website/notfound.html

libraries:
- name: webapp2
  version: "2.5.2"

In the handlers section we see a repeating pattern of 3 lines headed url, static_files and upload (note: you'll find more info on Google's site). Here's what each one
of them means:

- url: <the regex of the anticipated url>

static_files: <the regex of the directory or file path which will be served
for the url above. The "\1" is particularly useful as it matches the 1st
marked subexpression, which is the interior of the outermost url regex
brackets. For further information, see the definition of "\1" in the wiki
on POSIX>

upload: <the regex of the actual file path and name the url is referring
to, on our local machine, before deployment>
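The effect of the "\1" back-reference can be imitated with sed (a sketch only: the regex here is a simplified stand-in, and App Engine does the equivalent substitution internally, not via sed):

```shell
# The part of the url captured by the parentheses is appended to static_website/.
url='/devlogpost-20130823-1.html'
echo "$url" | sed -E 's#^/(devlogpost-[0-9]{8}-[0-9]\.html)$#static_website/\1#'
# → static_website/devlogpost-20130823-1.html
```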

url handling
We have to decide how we'll handle each file / file type and add the behaviour to app.yaml (see the comments headed with "#" in our implementation of app.yaml).
For reference, our file names and folders are as follows:

my-gae-website
static_website
css <= cascading style sheets (Bootstrap, Font Awesome)
*.css
downloads <= all our downloadable files
AvoydV1_7_1.zip
font <= Font Awesome font files
*.eot, *.otf, *.svg, *.ttf, *.woff
images <= all our images
*.gif, *.ico, *.jpg, *.png
about.html <= about page
devlog.html <= home page, also the development blog posts list
devlogpost-20130823-1.html <= an individual blog post
devlogpost-20130411-1.html <= an individual blog post
devlogpost-20130427-1.html <= an individual blog post
devlogpost-20130509-1.html <= an individual blog post
devlogpost-20130529-1.html <= an individual blog post
notfound.html <= our custom file not found
app.yaml

url handling: custom page not found


Whenever a url is entered that doesn't match any of our files, we want to display our custom-defined file not found page so that people stay on our
website. Note: there is an exception to this rule in the case of support files such as images, css... where we want to serve the default file not found error
(see section below).

The url pattern is defined last in the handlers section:

#all other urls: return the enkisoftware 404 not found
- url: /.*
  static_files: static_website/notfound.html
  upload: static_website/notfound.html

Our custom file not found will be served for any url that matches none of the regexes described in the handlers section except the last one, /.* .

url handling: html pages loaded with regex


With the file not found case taken care of, we can concentrate on defining the pages to serve for each url. The desired behaviour is as follows:
First of all we address the specific cases where we have unambiguously defined pages and files.
Root: If no file is requested in the url, for instance navigating to https://www.enkisoftware.com, we serve our home page which is (currently) devlog.html.

#root
- url: /
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

Home page: most sites use index.html as their de facto home page. Since our home page has a different name, it's likely someone will
request https://www.enkisoftware.com/index.html, and it would be a shame if they ended up on our custom file not found page. So I'm adding an extra handler to serve
our home page devlog.html instead:

#serve our home page in case index.html is requested
- url: /index.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

Specific files: we have two predefined html pages and a zip file that can be exactly matched:

#specific html pages:
- url: /about.html
  static_files: static_website/about.html
  upload: static_website/about.html

- url: /devlog.html
  static_files: static_website/devlog.html
  upload: static_website/devlog.html

#specified zip file for download
- url: /downloads/AvoydV1_7_1.zip
  static_files: static_website/downloads/AvoydV1_7_1.zip
  upload: static_website/downloads/AvoydV1_7_1.zip

Less specific files: I had to create individual pages for each blog post to work around a Disqus limitation. I chose a simple pattern for naming those posts since I
can't predict how many nor how often we'll add them: devlogpost-<yyyymmdd-dailyIncrement>.html. To save having to update app.yaml every time we add a new
post, I'm using this regex devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html .
I'm sure you've noticed this regex is not perfect. Why? Because if someone enters a url that matches the regex, i.e. a valid date (though I'm sure you've spotted
that e.g. the 32nd March will incorrectly be considered a valid date) which doesn't correspond to an existing devlogpost file, they'll get the default file not
found instead of our custom file not found (because the custom file not found will only be served if the url doesn't match the regex). Now I would prefer the
custom file not found to be served (any suggestions welcome), but I'm going to automate the site at some point so this solution will do for now.

#the devlog post pages: since we're going to add more pages with the format
#devlogpost-<yyyymmdd-dailyIncrement>.html and I don't want to update the
#app.yaml each time, I've used a rough regex to limit the cases where an
#invalid url would return the default 404 not found page.
- url: /(devlogpost-201[3-9][0-1][0-9][0-3][0-9]-[1-4]\.html)
  static_files: static_website/\1
  upload: static_website/(devlogpost.*\.html)

url handling: support files and images


We match images and support files (css, fonts etc.) based on their filename extension. If the file doesn't exist, we want to serve the default file not found so the
error can be handled automatically by third party sites that e.g. link directly to images. In other words, in this case, we don't want to serve our custom file not found,
as it wouldn't be processed by third parties.
#all images and support files (css, fonts...): return file if found,
#otherwise the default 404 page so it can be handled by sites that link
#directly to images.
- url: /(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))
  static_files: static_website/\1
  upload: static_website/(.*\.(gif|png|jpg|ico|bmp|css|otf|eot|svg|ttf|woff))

Test locally
From the GAE launcher, run my-gae-website and view your website locally in your browser.

Deploy
To speed up our deployment I created a batch file *.bat containing:

appcfg.py --email=<email address used for google app engine> update <my-gae-website>\
pause

This is a workaround for the Google App Engine Launcher requiring that I enter my email and password each and every time I deploy, whereas when I use the
command prompt interface I only have to enter the details once per session.

Comments: Disqus integration


Disqus was relatively straightforward to integrate. It mostly consisted in adding a block of javascript at the bottom of each html page where we wanted comments. I
first registered with Disqus as "enkisoftware", then followed the instructions. When asked to choose my platform, I picked Universal Code then integrated the
javascript as instructed.
Unfortunately Disqus doesn't support more than one comments stream per page. To keep the number of pages to a minimum whilst the site is static, I wanted to
have all the blog entries on one page devlog.html. Since that wasn't possible, as a workaround I created an individual page devlogpost-*.html for each blog post
with its own comments area. I then duplicated each post on devlog.html, each with its Disqus comment counter which doubles as a link to the
corresponding devlogpost-*.html. The links simply have #disqus_thread appended. As an example, in the devlog.html source, you'll find the link <a
href="devlogpost-20130411-1.html#disqus_thread"> . Note: to implement this I didn't need to use Disqus Identifiers.

Stats: Google Analytics


To add Google Analytics coverage, I followed the instructions on their support page, Set up (web), namely adding a piece of javascript to each html page.

Links and tools


Code references and self-training
html and css
html knowledge is necessary if you write your web pages manually, and css is useful to understand if you want to customise the bootstrap css file or make your
own.
Code Academy provides a fast and succinct course on html and css
CSS positioning: try rainbodesign's tutorial if you find the Code Academy explanations confusing. (I found that using bootstrap, in particular their grid system,
meant that I didn't need in-depth knowledge of css positioning).

app.yaml
To help understand how the app.yaml file works with regards to static files, see the static file handlers section in the app.yaml google documentation

Regular expressions
You'll need a basic understanding of regular expressions (a.k.a. regex) to edit the app.yaml file.
See the POSIX standard section in wikipedia
the general regex syntax in the Python documentation
If you're looking for a step by step introduction to regular expressions, I found Udacity's CS262 course very helpful. They address regex in Unit1. If you're
happy with just the transcripts (no login required) you'll find them in the course wiki.

Tools
Google App Engine
Python google app engine SDK

Editor and versioning


Text editor: Notepad++
Change control:
Online code repository: Bitbucket
Version control GUI: SourceTree

Third party functionality


All the optional third party material I've used/integrated in the website:
css tech: Twitter Bootstrap
css template: Bootswatch
fonts: Font Awesome
comments: Disqus
stats: Google Analytics
press kit for video game developers: presskit() for GAE
[Edit 10 Feb 2014: thanks weaver for the comments, I've reworked the following sections of the post:
- Rewrote the GAE Download and Setup section to add information about the integration with GAE and the GAE launcher;
- Removed mentions of index.yaml since for a purely static website, index.yaml isn't required;
- Added a Test Locally section;
- Reworded the "Less specific files" part under url handling: html pages loaded with regex;
- Replaced all instances of "static/" with "static_website/";
- Replaced all instances of "mymain" with "my-gae-website".
Added link to presskit() for GAE.]
Setting up LoRa Server on Google Cloud Platform
Author(s): @brocaar Published: Dec 20, 2018

Contents
Assumptions
Requirements
Create GCP project
Gateway connectivity

Google Cloud Platform Community tutorials submitted from the community do not represent official Google Cloud Platform product
documentation.

This tutorial describes the steps needed to set up the LoRa Server project on Google Cloud Platform. The following Google
Cloud Platform (GCP) services are used:

Cloud IoT Core is used to connect your LoRa gateways with GCP.
Cloud Pub/Sub is used for messaging between GCP components and LoRa Server services.
Cloud Functions is used to handle downlink LoRa gateway communication (calling the Cloud IoT Core API on downlink
Pub/Sub messages).
Cloud SQL is used as hosted PostgreSQL database solution.
Cloud Memorystore is used as hosted Redis solution.
Compute Engine is used for running a VM instance.

Assumptions

In this tutorial we will assume that the LoRa Gateway Bridge component will be installed on the gateway. We will also
assume that LoRa Server and LoRa App Server will be installed on a single Compute Engine VM, to simplify this tutorial.
The example project ID used in this tutorial will be lora-server-tutorial . You should substitute this with your own project
ID in the tutorial steps.
The LoRaWAN region used in this tutorial will be eu868 . You should substitute this with your own region in the examples.

Requirements

Google Cloud Platform account. You can create one here.


LoRa gateway.
LoRaWAN device.

Create GCP project

After logging in to the GCP Console, create a new project. For this tutorial we will name the project LoRa Server tutorial with
an example ID of lora-server-tutorial . After creating the project, make sure it is selected before continuing with the next
steps.

Gateway connectivity

The LoRa Gateway Bridge (referred to simply as the gateway in this tutorial) will use the Cloud IoT Core MQTT broker to ingest
LoRa gateway events into GCP. This removes the requirement to host your own MQTT broker and increases the reliability and
scalability of the system.

Create device registry

In order to connect your LoRa gateway with Cloud IoT Core, go to the IoT Core service in the GCP Console and create a new
device registry in the Device registries box.
This registry will contain all your gateways for a given region. When you are planning to support multiple LoRaWAN regions, it is
a good practice to create separate registries (not covered in this tutorial).

In this tutorial, we are going to create a registry for EU868 gateways, so we choose the Registry ID eu868-gateways . Select the
region which is closest to you and select MQTT as the protocol. The HTTP protocol will not be used.

Under Default telemetry topic create a new topic. We will call this eu868-gateway-events . Click Create.

Create LoRa gateway certificate

In order to authenticate the LoRa gateway with the Cloud IoT Core MQTT bridge, you need to generate a certificate. You can do
this using the following commands:

ssh-keygen -t rsa -b 4096 -f private-key.pem


openssl rsa -in private-key.pem -pubout -outform PEM -out public-key.pem

Do not set a passphrase!

Add device (LoRa gateway)

To add your first LoRa gateway to the just created device registry, click the Create device button.

As Device ID, enter your Gateway ID prefixed with gw- . For example, if your Gateway ID is 0102030405060708 , then
enter gw-0102030405060708 . The gw- prefix is needed because a Cloud IoT Core ID must start with a letter, which is not always
the case for a LoRa gateway ID.
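The naming rule above can be sketched as a small helper (a hypothetical illustration, not part of the tutorial's tooling; it assumes gateway IDs are 8-byte EUIs in hex notation):

```python
# Sketch: derive the Cloud IoT Core device ID from a LoRa gateway ID.
# Cloud IoT Core device IDs must start with a letter, so the "gw-" prefix
# is added in front of the (lowercased) gateway ID.

def cloud_iot_device_id(gateway_id: str) -> str:
    """Return the Cloud IoT Core device ID for a LoRa gateway ID."""
    gateway_id = gateway_id.strip().lower()
    if len(gateway_id) != 16 or any(c not in "0123456789abcdef" for c in gateway_id):
        raise ValueError("gateway ID must be 16 hex characters (8 bytes)")
    return "gw-" + gateway_id

print(cloud_iot_device_id("0102030405060708"))  # gw-0102030405060708
```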

Each Cloud IoT Core device (LoRa gateway) will authenticate using its own certificate. Select RS256 as Public key format and
paste the public-key content in the box. This is the content of public-key.pem which was created in the previous step.
Click Create.

Configure LoRa Gateway Bridge

As there are different ways to install the LoRa Gateway Bridge on your gateway, only the configuration is covered here. For
installation instructions, please refer to LoRa Gateway Bridge gateway installation & configuration.

To configure a LoRa Gateway Bridge to forward its data to Cloud IoT Core, you need to update the lora-gateway-bridge.toml configuration file.

A minimal configuration example:

[backend.mqtt]
marshaler="protobuf"
[backend.mqtt.auth]
type="gcp_cloud_iot_core"
[backend.mqtt.auth.gcp_cloud_iot_core]
server="ssl://mqtt.googleapis.com:8883"
device_id="gw-0102030405060708"
project_id="lora-server-tutorial"
cloud_region="europe-west1"
registry_id="eu868-gateways"
jwt_key_file="/path/to/private-key.pem"

In short:

This will configure the protobuf marshaler (either protobuf or json must be configured).
This will configure Cloud IoT Core as the MQTT authentication backend.
This will configure the GCP project ID, cloud region, and registry ID.

Note that jwt_key_file must point to the private-key file generated in the previous step.

After applying the above configuration changes on the gateway (using your
own device_id , project_id , cloud_region and jwt_key_file ), validate that LoRa Gateway Bridge is able to connect with the
Cloud IoT Core MQTT bridge. The log output should look like this when your gateway receives an uplink message from your
LoRaWAN device:

INFO[0000] starting LoRa Gateway Bridge docs="https://www.loraserver.io/lora-gateway-bridge/" version


INFO[0000] gateway: starting gateway udp listener addr="0.0.0.0:1700"
INFO[0000] mqtt: connected to mqtt broker
INFO[0007] mqtt: subscribing to topic qos=0 topic="/devices/gw-0102030405060708/commands/#"
INFO[0045] mqtt: publishing message qos=0 topic=/devices/gw-0102030405060708/events/up

Your gateway is now communicating successfully with the Cloud IoT Core MQTT bridge!
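For reference, the Cloud IoT Core MQTT topics that appear in the log output above follow a fixed per-device layout. It can be sketched as plain string construction (for illustration only):

```python
# Sketch of the Cloud IoT Core MQTT topic layout used by the gateway:
# uplink events are published under events/, downlink commands are
# received via a wildcard subscription under commands/.

def event_topic(device_id: str, sub_folder: str = "up") -> str:
    # e.g. /devices/gw-0102030405060708/events/up
    return "/devices/{}/events/{}".format(device_id, sub_folder)

def command_topic(device_id: str) -> str:
    # e.g. /devices/gw-0102030405060708/commands/#
    return "/devices/{}/commands/#".format(device_id)

print(event_topic("gw-0102030405060708"))
print(command_topic("gw-0102030405060708"))
```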
Create downlink Pub/Sub topic

Instead of using MQTT directly, the LoRa Server will use Cloud Pub/Sub for receiving data from and sending data to your
gateways.

In the GCP Console, navigate to Pub/Sub > Topics. You will see the topic that was created when you created the device
registry. LoRa Server will subscribe to this topic to receive data (events) from your gateway.

For sending data back to your gateways, we will create a new topic. Click Create Topic, and enter eu868-gateway-commands as
the name.

Create downlink Cloud Function

In the previous step, you created a topic for sending downlink commands to your gateways. In order to connect this Pub/Sub
topic with your Cloud IoT Core device-registry, you must create a Cloud Function which will subscribe to the downlink Pub/Sub
topic and will forward these commands to your LoRa gateway.

In the GCP Console, navigate to Cloud Functions. Then click Create function. As Name we will use eu868-gateway-commands .
Because the only thing this function does is call a Cloud API, 128 MB for Memory allocated should be fine.

Select Cloud Pub/Sub as trigger and select eu868-gateway-commands as the topic.

Select Inline editor for entering the source code and select the Node.js 8 runtime. The Function to execute is
called sendMessage . Copy and paste the scripts below for the index.js and package.json files. Adjust
the index.js configuration to match your REGION , PROJECT_ID and REGISTRY_ID . Note: it is recommended to also
click More and select your region from the dropdown list. Then click Create.

index.js

'use strict';

const {google} = require('googleapis');

// Configuration options.
const REGION = 'europe-west1';
const PROJECT_ID = 'lora-server-tutorial';
const REGISTRY_ID = 'eu868-gateways';

const API_VERSION = 'v1';
const DISCOVERY_API = 'https://cloudiot.googleapis.com/$discovery/rest';

let client = null;

// getClient returns the GCP API client.
// Note: after the first initialization, the client will be cached.
function getClient (cb) {
  if (client !== null) {
    cb(client);
    return;
  }

  google.auth.getClient({scopes: ['https://www.googleapis.com/auth/cloud-platform']}).then((authClient) => {
    google.options({
      auth: authClient
    });

    const discoveryUrl = `${DISCOVERY_API}?version=${API_VERSION}`;
    google.discoverAPI(discoveryUrl).then((c) => {
      client = c;
      cb(client);
    }).catch((err) => {
      console.log('Error during API discovery', err);
    });
  });
}

// sendMessage forwards the Pub/Sub message to the given device.
exports.sendMessage = (event, context, callback) => {
  const deviceId = event.attributes.deviceId;
  const subFolder = event.attributes.subFolder;
  const data = event.data;

  getClient((client) => {
    const parentName = `projects/${PROJECT_ID}/locations/${REGION}`;
    const registryName = `${parentName}/registries/${REGISTRY_ID}`;
    const request = {
      name: `${registryName}/devices/${deviceId}`,
      binaryData: data,
      subfolder: subFolder
    };

    console.log('start call sendCommandToDevice');

    client.projects.locations.registries.devices.sendCommandToDevice(request, (err, data) => {
      if (err) {
        console.log('Could not send command:', request, 'Message:', err);
        callback(new Error(err));
      } else {
        callback();
      }
    });
  });
};

package.json

{
"name": "gateway-commands",
"version": "2.0.0",
"dependencies": {
"@google-cloud/pubsub": "0.20.1",
"googleapis": "34.0.0"
}
}
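To make the data flow concrete: the function above receives a Pub/Sub message whose deviceId attribute selects the gateway, and builds the Cloud IoT Core device resource name for it. The name construction alone can be reproduced as a sketch (pure string handling, no GCP calls):

```python
# Sketch: the device resource name the Cloud Function builds before
# calling sendCommandToDevice (no API call is made here).

def device_path(project_id: str, region: str, registry_id: str, device_id: str) -> str:
    parent = "projects/{}/locations/{}".format(project_id, region)
    registry = "{}/registries/{}".format(parent, registry_id)
    return "{}/devices/{}".format(registry, device_id)

print(device_path("lora-server-tutorial", "europe-west1",
                  "eu868-gateways", "gw-0102030405060708"))
```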

Set up databases

Create Redis datastore

In the GCP Console, navigate to Memorystore (which provides a managed Redis datastore) and click Create instance.

You can assign any name to this instance. Make sure that you also select your Region. Click Create to create the Redis
instance.

Create PostgreSQL databases

In the GCP Console, navigate to SQL (which provides managed PostgreSQL database instances) and click Create instance.

Select PostgreSQL and click Next. You can assign any name to this instance. Again, make sure to also select your Region from
the dropdown.

Configure the Configuration options to your needs (the smallest instance is already sufficient for testing). An important option
to configure is Authorize networks. To allow access from any IP address, enter 0.0.0.0/0 . It is recommended to update this
later to only the IP address of your server (covered in the next steps). Then click Create.

Create users

Click on the created database instance and click the Users tab. Create two users:

loraserver_ns
loraserver_as

Create databases

Click the Databases tab. Create the following databases:

loraserver_ns
loraserver_as

Enable trgm extension

In the PostgreSQL instance Overview tab, click Connect using Cloud Shell and when the gcloud sql connect ... command
is shown in the console, press Enter. It will prompt you for the postgres user password (which you configured on creating the
PostgreSQL instance).

Then execute the following SQL commands:

-- change to the LoRa App Server database


\c loraserver_as

-- enable the pg_trgm extension


-- (this is needed to facilitate the search feature)
create extension pg_trgm;
-- exit psql
\q

You can close the Cloud Shell.


Install LoRa Server

When you have successfully completed the previous steps, your gateway is connected to the Cloud IoT Core MQTT bridge,
all the LoRa (App) Server requirements are set up, and it is time to install LoRa Server and LoRa App Server.

Create a VM instance

In the GCP Console, navigate to Compute Engine > VM instances and click on Create.

Again, the name of the instance doesn't matter but make sure you select the correct Region. The smallest Machine type is
sufficient to test with. For this tutorial we will use the default Boot disk (Debian 9).

Under Identity and API access, select Allow full access to all Cloud APIs under the Access scopes options.

When all is configured, click Create.

Configure firewall

In order to expose the LoRa App Server web interface, we need to open port 8080 (the default LoRa App Server port) to the
public.

Click on the created instance to go to the instance details. Under Network interfaces click View details. In the left navigation
menu click Firewall rules and then on Create firewall rule. Enter the following details:

Name: can be any name


Targets: All instances in the network
Source IP ranges: 0.0.0.0/0
Protocols and ports > TCP: 8080

Then click Create.

Compute Engine service account roles

As the Compute Engine instance (created in the previous step) needs to be able to subscribe to the Pub/Sub data, we must give
the Compute Engine default service account the required role.

In the GCP Console, navigate to IAM & admin. Then edit the Compute Engine default service account. Click Add another
role and add the following roles:

Pub/Sub Publisher
Pub/Sub Subscriber

Log in to VM instance

You will find the public IP address of the created VM instance under Compute Engine > VM instances. Use the SSH web client
provided by the GCP Console, or the gcloud compute ssh command, to connect to the VM.

Configure the LoRa Server repository

Execute the following commands in the VM's shell to add the LoRa Server repository to your VM instance:

# add required packages


sudo apt install apt-transport-https dirmngr
# import LoRa Server key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 1CE2AFD36DBCCA00
# add the repository to apt configuration
echo "deb https://artifacts.loraserver.io/packages/2.x/deb stable main" | sudo tee /etc/apt/sources.list.d/lorase
# update the package cache
sudo apt update

Install LoRa Server

Execute the following command in the VM's shell to install the LoRa Server service:

sudo apt install loraserver


Configure LoRa Server

The LoRa Server configuration file is located at /etc/loraserver/loraserver.toml . Below you will find two (minimal but working)
configuration examples. Please refer to the LoRa Server Configuration documentation for all the available options.

Important: Because there might be a high latency between the Pub/Sub and Cloud Function components — especially with a
low message rate — the rx1_delay value is set to 3 in the examples below.

You need to replace the following values:

[PASSWORD] with the loraserver_ns PostgreSQL user password


[POSTGRESQL_IP] with the Primary IP address of the created PostgreSQL instance
[REDIS_IP] with the IP address of the created Redis instance
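As a quick illustration of how the dsn value in the examples below is assembled from these parts (the user, password, and IP address here are placeholders):

```python
# Sketch: assemble the PostgreSQL dsn used in loraserver.toml from its
# parts. The password and host values below are placeholders.

def make_dsn(user: str, password: str, host: str, database: str) -> str:
    return "postgres://{}:{}@{}/{}?sslmode=disable".format(user, password, host, database)

print(make_dsn("loraserver_ns", "secret", "10.0.0.2", "loraserver_ns"))
```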

EU868 configuration example

[postgresql]
dsn="postgres://loraserver_ns:[PASSWORD]@[POSTGRESQL_IP]/loraserver_ns?sslmode=disable"
[redis]
url="redis://[REDIS_IP]:6379"

[network_server]
net_id="000000"
[network_server.band]
name="EU_863_870"
[network_server.network_settings]
rx1_delay=3

[network_server.gateway.stats]
create_gateway_on_stats=true
timezone="UTC"

[network_server.gateway.backend]
type="gcp_pub_sub"
[network_server.gateway.backend.gcp_pub_sub]
project_id="lora-server-tutorial"
uplink_topic_name="eu868-gateway-events"
downlink_topic_name="eu868-gateway-commands"

US915 configuration example

[postgresql]
dsn="postgres://loraserver_ns:[PASSWORD]@[POSTGRESQL_IP]/loraserver_ns?sslmode=disable"

[redis]
url="redis://[REDIS_IP]:6379"

[network_server]
net_id="000000"

[network_server.band]
name="US_902_928"

[network_server.network_settings]
rx1_delay=3
enabled_uplink_channels=[0, 1, 2, 3, 4, 5, 6, 7]

[network_server.gateway.stats]
create_gateway_on_stats=true
timezone="UTC"

[network_server.gateway.backend]
type="gcp_pub_sub"
[network_server.gateway.backend.gcp_pub_sub]
project_id="lora-server-tutorial"
uplink_topic_name="eu868-gateway-events"
downlink_topic_name="eu868-gateway-commands"

To test the configuration for errors, you can execute the following command:

sudo loraserver

This should output something like the following:

INFO[0000] setup redis connection pool url="redis://10.0.0.3:6379"


INFO[0000] connecting to postgresql
INFO[0000] gateway/gcp_pub_sub: setting up client
INFO[0000] gateway/gcp_pub_sub: setup downlink topic topic=eu868-gateway-commands
INFO[0001] gateway/gcp_pub_sub: setup uplink topic topic=eu868-gateway-events
INFO[0002] gateway/gcp_pub_sub: check if uplink subscription exists subscription=eu868-gateway-events-loraserver
INFO[0002] gateway/gcp_pub_sub: create uplink subscription subscription=eu868-gateway-events-loraserver
INFO[0005] applying database migrations
INFO[0006] migrations applied count=19
INFO[0006] starting api server bind="0.0.0.0:8000" ca-cert= tls-cert= tls-key=

If all is well, then you can start the service in the background using:

sudo systemctl start loraserver


sudo systemctl enable loraserver

Install LoRa App Server

When you have completed all previous steps, then it is time to install the last component, LoRa App Server. This is the
application-server that provides a web interface for device management and will publish application data to a Pub/Sub topic.

Create Pub/Sub topic

In the GCP Console, navigate to Pub/Sub > Topics. Then click Create topic to create a topic named lora-app-server .

Install LoRa App Server

SSH to the VM and execute the following command to install LoRa App Server:

sudo apt install lora-app-server

Configure LoRa App Server

The LoRa App Server configuration file is located at /etc/lora-app-server/lora-app-server.toml . Below you will find a
minimal but working configuration example. Please refer to the LoRa App Server Configuration documentation for all the available
options.

You need to replace the following values:

[PASSWORD] with the loraserver_as PostgreSQL user password


[POSTGRESQL_IP] with the Primary IP address of the created PostgreSQL instance
[REDIS_IP] with the IP address of the created Redis instance
[JWT_SECRET] with your own random JWT secret (e.g. the output of openssl rand -base64 32 )
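If you prefer not to use openssl, an equivalent secret can be generated with a few lines of Python (a sketch; any 32 bytes of cryptographically random data encoded as base64 will do):

```python
# Sketch: generate a random base64-encoded JWT secret, equivalent to
# `openssl rand -base64 32`.
import base64
import secrets

def make_jwt_secret(n_bytes: int = 32) -> str:
    return base64.b64encode(secrets.token_bytes(n_bytes)).decode("ascii")

print(make_jwt_secret())
```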

Configuration example

[postgresql]
dsn="postgres://loraserver_as:[PASSWORD]@[POSTGRESQL_IP]/loraserver_as?sslmode=disable"

[redis]
url="redis://[REDIS_IP]:6379"

[application_server]

[application_server.integration]
backend="gcp_pub_sub"

[application_server.integration.gcp_pub_sub]
project_id="lora-server-tutorial"
topic_name="lora-app-server"

[application_server.external_api]
bind="0.0.0.0:8080"
tls_cert="/etc/lora-app-server/certs/http.pem"
tls_key="/etc/lora-app-server/certs/http-key.pem"
jwt_secret="[JWT_SECRET]"

To test that there are no errors, you can execute the following command:

sudo lora-app-server

This should output something like the following:

INFO[0000] setup redis connection pool url="redis://10.0.0.3:6379"


INFO[0000] connecting to postgresql
INFO[0000] gateway/gcp_pub_sub: setting up client
INFO[0000] gateway/gcp_pub_sub: setup downlink topic topic=eu868-gateway-commands
INFO[0001] gateway/gcp_pub_sub: setup uplink topic topic=eu868-gateway-events
INFO[0002] gateway/gcp_pub_sub: check if uplink subscription exists subscription=eu868-gateway-events-loraserver
INFO[0002] gateway/gcp_pub_sub: create uplink subscription subscription=eu868-gateway-events-loraserver
INFO[0005] applying database migrations
INFO[0006] migrations applied count=19
INFO[0006] starting api server bind="0.0.0.0:8000" ca-cert= tls-cert= tls-key=

If all is well, then you can start the service in the background using these commands:

sudo systemctl start lora-app-server


sudo systemctl enable lora-app-server

Using the LoRa (App) Server

Set up your first gateway and device

To get started with LoRa (App) Server, please follow the First gateway and device guide. It explains how to log in to the web
interface and add your first gateway and device.

Integrate your applications

In the LoRa App Server step, you have created a Pub/Sub topic named lora-app-server . This will be the topic used by LoRa
App Server for publishing device events and to which your application(s) need to subscribe in order to receive LoRaWAN device data.
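As a minimal sketch of what consuming such an event might look like, the snippet below decodes a hypothetical JSON uplink event. The field names (devEUI, fPort, data) and the JSON encoding are assumptions for illustration; refer to the LoRa App Server documentation for the actual message schema.

```python
# Sketch: decode a device event as an application might after pulling it
# from the lora-app-server topic. The sample payload and its field names
# are hypothetical, for illustration only.
import base64
import json

raw = json.dumps({
    "applicationID": "1",
    "devEUI": "0102030405060708",
    "fPort": 10,
    "data": base64.b64encode(b"\x01\x02").decode("ascii"),
}).encode("utf-8")  # stands in for the Pub/Sub message data

event = json.loads(raw)
payload = base64.b64decode(event["data"])  # raw device payload bytes
print(event["devEUI"], payload)
```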

For more information about Cloud Pub/Sub, please refer to the following pages:

Cloud Pub/Sub product page


Cloud Pub/Sub documentation
Cloud Pub/Sub Quickstarts
See more by @brocaar and more tagged LoRa Server, LoRaWAN, IoT, Cloud IoT Core
Cloud SDK

gcloud auth
NAME

gcloud auth - manage oauth2 credentials for the Google Cloud SDK

SYNOPSIS

gcloud auth GROUP | COMMAND [ GCLOUD_WIDE_FLAG … ]

DESCRIPTION

The gcloud auth command group lets you grant and revoke authorization to Cloud SDK (gcloud) to access Google Cloud
Platform. Typically, when scripting Cloud SDK tools for use on multiple machines, using gcloud auth activate-service-
account is recommended.

For more information on authorization and credential types, see: https://cloud.google.com/sdk/docs/authorizing.

While running gcloud auth commands, the --account flag can be specified to any command to use that account without
activation.

GCLOUD WIDE FLAGS

These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --
project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.

GROUPS

GROUP is one of the following:

application-default

Manage your active Application Default Credentials.

COMMANDS

COMMAND is one of the following:

activate-service-account

Authorize access to Google Cloud Platform with a service account.

configure-docker

Register gcloud as a Docker credential helper.

list

Lists credentialed accounts.

login

Authorize gcloud to access the Cloud Platform with Google user credentials.

revoke

Revoke access credentials for an account.

EXAMPLES

To authenticate a user account with gcloud and minimal user output, run:
$ gcloud auth login --brief

To list all credentialed accounts and identify the current active account, run:
$ gcloud auth list

To revoke credentials for a user account (like logging out), run:

$ gcloud auth revoke test@gmail.com

NOTES

These variants are also available:


$ gcloud alpha auth
$ gcloud beta auth

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under
the Apache 2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated December 4, 2018.


gcloud
NAME

gcloud - manage Google Cloud Platform resources and developer workflow

SYNOPSIS

gcloud GROUP | COMMAND [ --account = ACCOUNT ][ --configuration = CONFIGURATION ] [ --flags-file = YAML_FILE ][ --flatten =[ KEY ,…]] [ --format = FORMAT ] [ --help ][ --project = PROJECT_

DESCRIPTION

The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.

GLOBAL FLAGS

--account = ACCOUNT

Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.

--configuration = CONFIGURATION

The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.

--flags-file = YAML_FILE

A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.

--flatten =[ KEY ,…]

Flatten name [] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and --
filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .

--format = FORMAT

Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.

--help

Display detailed help.

--project = PROJECT_ID

The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.

--quiet , -q

Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.

--verbosity = VERBOSITY ; default="warning"

Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.

--version , -v

Print version information and exit. This flag is only available at the global level.

-h

Print a summary help and exit.

OTHER FLAGS

--log-http

Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.

--trace-token = TRACE_TOKEN

Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.

--user-output-enabled

Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.

GROUPS

GROUP is one of the following:

alpha

(ALPHA) Alpha versions of gcloud commands.

app

Manage your App Engine deployments.

auth

Manage oauth2 credentials for the Google Cloud SDK.

beta

(BETA) Beta versions of gcloud commands.

bigtable

Manage your Cloud Bigtable storage.

builds

Create and manage builds for Google Cloud Build.

components

List, install, update, or remove Google Cloud SDK components.

composer

Create and manage Cloud Composer Environments.

compute

Create and manipulate Google Compute Engine resources.

config

View and edit Cloud SDK properties.

container

Deploy and manage clusters of machines for running containers.

dataflow

Manage Google Cloud Dataflow jobs.

dataproc

Create and manage Google Cloud Dataproc clusters and jobs.

datastore

Manage your Cloud Datastore indexes.

debug

Commands for interacting with the Cloud Debugger.

deployment-manager

Manage deployments of cloud resources.

dns

Manage your Cloud DNS managed-zones and record-sets.

domains

Manage domains for your Google Cloud projects.

endpoints

Create, enable and manage API services.

firebase

Work with Google Firebase.

functions

Manage Google Cloud Functions.

iam

Manage IAM service accounts and keys.

iot

Manage Cloud IoT resources.

kms

Manage cryptographic keys in the cloud.

logging

Manage Stackdriver Logging.

ml

Use Google Cloud machine learning capabilities.

ml-engine

Manage Cloud ML Engine jobs and models.

organizations

Create and manage Google Cloud Platform Organizations.

projects

Create and manage project access policies.

pubsub

Manage Cloud Pub/Sub topics and subscriptions.

redis

Manage Cloud Memorystore Redis resources.

services

List, enable and disable APIs and services.

source

Cloud git repository commands.

spanner

Command groups for Cloud Spanner.

sql

Create and manage Google Cloud SQL databases.

topic

gcloud supplementary help.

COMMANDS

COMMAND is one of the following:

docker

(DEPRECATED) Enable Docker CLI access to Google Container Registry.

feedback

Provide feedback to the Google Cloud SDK team.

help

Search gcloud help text.

info

Display information about the current gcloud environment.

init

Initialize or reinitialize gcloud.

version

Print version information for Cloud SDK components.


Last updated November 21, 2018.


Cloud SDK

gcloud compute backend-services


NAME

gcloud compute backend-services - list, create, and delete backend services

SYNOPSIS

gcloud compute backend-services COMMAND [ GCLOUD_WIDE_FLAG … ]

DESCRIPTION

List, create, and delete backend services.

GCLOUD WIDE FLAGS

These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --
project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.

COMMANDS

COMMAND is one of the following:

add-backend

Add a backend to a backend service.

add-signed-url-key

Add Cloud CDN Signed URL key to a backend service.

create

Create a backend service.

delete

Delete backend services.

delete-signed-url-key

Delete Cloud CDN Signed URL key from a backend service.

describe

Display detailed information about a backend service.

edit

Modify backend services.

get-health

Get backend health statuses from a backend service.

list

List Google Compute Engine backend services.

remove-backend

Remove a backend from a backend service.

update

Update a backend service.

update-backend

Update an existing backend in a backend service.


NOTES

These variants are also available:


$ gcloud alpha compute backend-services
$ gcloud beta compute backend-services



gcloud config list
NAME

gcloud config list - list Cloud SDK properties for the currently active configuration

SYNOPSIS

gcloud config
list [ SECTION / PROPERTY ] [ --all ] [ --filter = EXPRESSION ][ --limit = LIMIT ] [ --sort-by =[ FIELD ,…]] [ GCLOUD_WIDE_FLAG … ]

DESCRIPTION

gcloud config list lists all properties of the active configuration. These include the account used to authorize access to the Cloud
Platform, the current Cloud Platform project, and the default Compute Engine region and zone, if set. See gcloud topic
configurations for more about configurations.

POSITIONAL ARGUMENTS

[ SECTION / PROPERTY ]

Property to be listed. Note that SECTION/ is optional when referring to properties in the core section.

FLAGS

--all

List all set and unset properties that match the arguments.

LIST COMMAND FLAGS

--filter = EXPRESSION

Apply a Boolean filter EXPRESSION to each resource item to be listed. If the expression evaluates True , then that item is
listed. For more details and examples of filter expressions, run $ gcloud topic filters. This flag interacts with other flags that
are applied in this order: --flatten , --sort-by , --filter , --limit .

--limit = LIMIT

Maximum number of resources to list. The default is unlimited . This flag interacts with other flags that are applied in this
order: --flatten , --sort-by , --filter , --limit .

--sort-by =[ FIELD ,…]

Comma-separated list of resource field key names to sort by. The default order is ascending. Prefix a field with ~ for
descending order on that field. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --
filter , --limit .

GCLOUD WIDE FLAGS

These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --project, --
quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.

AVAILABLE PROPERTIES

core

account

Account gcloud should use for authentication. Run gcloud auth list to see your currently available accounts.

custom_ca_certs_file

Absolute path to a custom CA cert file.

default_regional_backend_service

If True, backend services in gcloud compute backend-services will be regional by default. Setting the --global flag
is required for global backend services.
disable_color

If True, color will not be used when printing messages in the terminal.

disable_prompts

If True, the default answer will be assumed for all user prompts. However, for any prompts that require user input, an
error will be raised. This is equivalent to either using the global --quiet flag or setting the environment
variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1. Setting this property is useful when scripting with gcloud .

disable_usage_reporting

If True, anonymous statistics on SDK usage will not be collected. This value is set by default based on your choices
during installation, but can be changed at any time. For more information, see: https://cloud.google.com/sdk/usage-
statistics

log_http

If True, log HTTP requests and responses to the logs. To see logs in the terminal, adjust verbosity settings.
Otherwise, logs are available in their respective log files.

max_log_days

Maximum number of days to retain log files before deleting. If set to 0, turns off log garbage collection and does not
delete log files. If unset, the default is 30 days.

pass_credentials_to_gsutil

If True, pass the configured Cloud SDK authentication to gsutil.

project

Project ID of the Cloud Platform project to operate on by default. This can be overridden by using the global --
project flag.

show_structured_logs

Control when JSON-structured log messages for the current verbosity level (and above) will be written to standard
error. If this property is disabled, logs are formatted as text by default.

Valid values are:

never - Log messages as text

always - Always log messages as JSON

log - Only log messages as JSON if stderr is a file

terminal - Only log messages as JSON if stderr is a terminal

If unset, default is never .

trace_token

Token used to route traces of service requests for investigation of issues. This token will be provided by Google support.

user_output_enabled

True, by default. If False, messages to the user and command output on both standard output and standard error will
be suppressed.

verbosity

Default logging verbosity for gcloud commands. This is the equivalent of using the global --verbosity flag.
Supported verbosity levels: debug , info , warning , error , and none .

app

cloud_build_timeout

Timeout, in seconds, to wait for Docker builds to complete during deployments. All Docker builds now use the Cloud
Build API.
promote_by_default

If True, when deploying a new version of a service, that version will be promoted to receive all traffic for the service.
This property can be overridden via the --promote-by-default or --no-promote-by-default flags.

stop_previous_version

If True, when deploying a new version of a service, the previously deployed version is stopped. If False, older versions
must be stopped manually.

use_runtime_builders

If set, opt in/out to a new code path for building applications using pre-fabricated runtimes that can be updated
independently of client tooling. If not set, the default path for each runtime is used.

auth

disable_credentials

If True, gcloud will not attempt to load any credentials or authenticate any requests. This is useful when behind a
proxy that adds authentication to requests.

billing

quota_project

Project that will be charged quota for the operations performed in gcloud . When unset, the default is
[CURRENT_PROJECT]; this will charge quota against the currently set project for operations performed on it.
Additionally, some existing APIs will continue to use a shared project for quota by default, when this property is unset.

If you need to operate on one project, but need quota against a different project, you can use this property to specify
the alternate project.

builds

timeout

Timeout, in seconds, to wait for builds to complete.

component_manager

additional_repositories

Comma-separated list of additional repositories to check for components. This property is automatically managed by
the gcloud components repositories commands.

disable_update_check

If True, Cloud SDK will not automatically check for updates.

composer

location

Composer location to use. Each Composer location constitutes an independent resource namespace constrained to
deploying environments into Compute Engine regions inside this location. This parameter corresponds to the
/locations/<location> segment of the Composer resource URIs being referenced.

compute

region

Default region to use when working with regional Compute Engine resources. When a --region flag is required but
not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute regions list .

use_new_list_usable_subnets_api

If True, use the new API for listing usable subnets which only returns subnets in the current project.

zone

Default zone to use when working with zonal Compute Engine resources. When a --zone flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute zones list .

container

build_timeout

Timeout, in seconds, to wait for container builds to complete.

cluster

Name of the cluster to use by default when working with Kubernetes Engine.

new_scopes_behavior

If True, use new scopes behavior and do not add compute-rw , storage-ro , service-control , or service-
management scopes. The former two ( compute-rw and storage-ro ) only apply to clusters at Kubernetes v1.9 and
below; starting v1.10, compute-rw and storage-ro are not added by default. Any of these scopes may be added
explicitly using --scopes . Using new scopes behavior will be the default in a future release. Additionally, if this
property is set to True, using --[no-]enable-cloud-endpoints is not allowed. This property is ignored in alpha and
beta, since these tracks always use the new behavior. See --scopes help for more info.

use_application_default_credentials

If True, use application default credentials to authenticate to the cluster API server.

use_client_certificate

If True, use the cluster's client certificate to authenticate to the cluster API server.

dataproc

region

Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an independent resource namespace
constrained to deploying instances into Compute Engine zones inside the region. The default value of global is a
special multi-region namespace which is capable of deploying instances into all Compute Engine zones globally, and
is disjoint from other Cloud Dataproc regions.

deployment_manager

glob_imports

Enable import path globbing. Uses glob patterns to match multiple imports in a config file.

filestore

location

Default location to use when working with Cloud Filestore locations. When a --location flag is required but not
provided, the command will fall back to this value, if set.

functions

region

Default region to use when working with Cloud Functions resources. When a --region flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud beta functions regions
list .

gcloudignore

enabled

If True, do not upload .gcloudignore files (see $ gcloud topic gcloudignore ). If False, turn off the gcloudignore
mechanism entirely and upload all files.

interactive

bottom_bindings_line

If True, display the bottom key bindings line.

bottom_status_line

If True, display the bottom status line.

completion_menu_lines

Number of lines in the completion menu.


context

Command context string.

fixed_prompt_position

If True, display the prompt at the same position.

help_lines

Maximum number of help snippet lines.

hidden

If True, expose hidden commands/flags.

justify_bottom_lines

If True, left- and right-justify bottom toolbar lines.

manpage_generator

If True, use the manpage CLI tree generator for unsupported commands.

multi_column_completion_menu

If True, display the completions as a multi-column menu.

prompt

Command prompt string.

show_help

If True, show help as command args are being entered.

suggest

If True, add command line suggestions based on history.

ml_engine

local_python

Full path to the Python interpreter to use for Cloud ML Engine local predict/train jobs. If not specified, the default path
is the one to the Python interpreter found on system PATH .

polling_interval

Interval (in seconds) at which to poll logs from your Cloud ML Engine jobs. Note that making it much faster than the
default (60) will quickly use all of your quota.

proxy

address

Hostname or IP address of proxy server.

password

Password to use when connecting, if the proxy requires authentication.

port

Port to use when connected to the proxy server.

rdns

If True, DNS queries will not be performed locally, and instead, handed to the proxy to resolve. This is the default
behavior.

type

Type of proxy being used. Supported proxy types are: [http, http_no_tunnel, socks4, socks5].

username

Username to use when connecting, if the proxy requires authentication.


redis

region

Default region to use when working with Cloud Memorystore for Redis resources. When a region is required but not
provided by a flag, the command will fall back to this value, if set.

spanner

instance

Default instance to use when working with Cloud Spanner resources. When an instance is required but not provided
by a flag, the command will fall back to this value, if set.

EXAMPLES

To list the project property in the core section, run:


$ gcloud config list project

To list the zone property in the compute section, run:

$ gcloud config list compute/zone

To list all the properties, run:

$ gcloud config list --all

Note, you cannot specify both --all and a property name.

NOTES

These variants are also available:


$ gcloud alpha config list
$ gcloud beta config list

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 3.0 License, and code samples are licensed under the Apache
2.0 License. For details, see our Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated December 4, 2018


gcloud config set
NAME

gcloud config set - set a Cloud SDK property

SYNOPSIS

gcloud config set SECTION / PROPERTY VALUE [ --installation ][ GCLOUD_WIDE_FLAG … ]

DESCRIPTION

gcloud config set sets the specified property in your active configuration only. A property governs the behavior of a specific
aspect of Cloud SDK such as the service account to use or the verbosity level of logs. To set the property across all
configurations, use the --installation flag. For more information regarding creating and using configurations,
see gcloud topic configurations .

To view a list of properties currently in use, run gcloud config list .

To unset properties, use gcloud config unset .

Note, Cloud SDK comes with a default configuration. To create multiple configurations, use gcloud config
configurations create , and gcloud config configurations activate to switch between them.

POSITIONAL ARGUMENTS

SECTION / PROPERTY

Property to be set. Note that SECTION/ is optional while referring to properties in the core section, i.e., using
either core/project or project is a valid way of setting a project, while using section names is essential for setting
specific properties like compute/region . Consult the Cloud SDK properties page for a comprehensive list of
properties: https://cloud.google.com/sdk/docs/properties

VALUE

Value to be set.

FLAGS

--installation

If set, the property is updated for the entire Cloud SDK installation. Otherwise, by default, the property is updated only
in the currently active configuration.

GCLOUD WIDE FLAGS

These flags are available to all commands: --account, --configuration, --flags-file, --flatten, --format, --help, --log-http, --
project, --quiet, --trace-token, --user-output-enabled, --verbosity. Run $ gcloud help for details.

AVAILABLE PROPERTIES

core

account

Account gcloud should use for authentication. Run gcloud auth list to see your currently available accounts.

custom_ca_certs_file

Absolute path to a custom CA cert file.

default_regional_backend_service

If True, backend services in gcloud compute backend-services will be regional by default. Setting the --
global flag is required for global backend services.

disable_color

If True, color will not be used when printing messages in the terminal.

disable_prompts

If True, the default answer will be assumed for all user prompts. However, for any prompts that require user
input, an error will be raised. This is equivalent to either using the global --quiet flag or setting the
environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1. Setting this property is useful when scripting
with gcloud .

disable_usage_reporting

If True, anonymous statistics on SDK usage will not be collected. This value is set by default based on your
choices during installation, but can be changed at any time. For more information,
see: https://cloud.google.com/sdk/usage-statistics

log_http

If True, log HTTP requests and responses to the logs. To see logs in the terminal, adjust verbosity settings.
Otherwise, logs are available in their respective log files.

max_log_days

Maximum number of days to retain log files before deleting. If set to 0, turns off log garbage collection and does
not delete log files. If unset, the default is 30 days.

pass_credentials_to_gsutil

If True, pass the configured Cloud SDK authentication to gsutil.

project

Project ID of the Cloud Platform project to operate on by default. This can be overridden by using the global --
project flag.

show_structured_logs

Control when JSON-structured log messages for the current verbosity level (and above) will be written to
standard error. If this property is disabled, logs are formatted as text by default.

Valid values are:

never - Log messages as text

always - Always log messages as JSON

log - Only log messages as JSON if stderr is a file

terminal - Only log messages as JSON if stderr is a terminal

If unset, default is never .

trace_token

Token used to route traces of service requests for investigation of issues. This token will be provided by Google
support.

user_output_enabled

True, by default. If False, messages to the user and command output on both standard output and standard
error will be suppressed.

verbosity

Default logging verbosity for gcloud commands. This is the equivalent of using the global --verbosity flag.
Supported verbosity levels: debug , info , warning , error , and none .

app

cloud_build_timeout

Timeout, in seconds, to wait for Docker builds to complete during deployments. All Docker builds now use the
Cloud Build API.

promote_by_default

If True, when deploying a new version of a service, that version will be promoted to receive all traffic for the
service. This property can be overridden via the --promote-by-default or --no-promote-by-default flags.

stop_previous_version

If True, when deploying a new version of a service, the previously deployed version is stopped. If False, older
versions must be stopped manually.

use_runtime_builders

If set, opt in/out to a new code path for building applications using pre-fabricated runtimes that can be updated
independently of client tooling. If not set, the default path for each runtime is used.

auth

disable_credentials

If True, gcloud will not attempt to load any credentials or authenticate any requests. This is useful when behind
a proxy that adds authentication to requests.

billing

quota_project

Project that will be charged quota for the operations performed in gcloud . When unset, the default is
[CURRENT_PROJECT]; this will charge quota against the currently set project for operations performed on it.
Additionally, some existing APIs will continue to use a shared project for quota by default, when this property is
unset.

If you need to operate on one project, but need quota against a different project, you can use this property to
specify the alternate project.

builds

timeout

Timeout, in seconds, to wait for builds to complete.

component_manager

additional_repositories

Comma-separated list of additional repositories to check for components. This property is automatically
managed by the gcloud components repositories commands.

disable_update_check

If True, Cloud SDK will not automatically check for updates.

composer

location

Composer location to use. Each Composer location constitutes an independent resource namespace
constrained to deploying environments into Compute Engine regions inside this location. This parameter
corresponds to the /locations/<location> segment of the Composer resource URIs being referenced.

compute

region

Default region to use when working with regional Compute Engine resources. When a --region flag is required
but not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute
regions list .

use_new_list_usable_subnets_api

If True, use the new API for listing usable subnets which only returns subnets in the current project.

zone

Default zone to use when working with zonal Compute Engine resources. When a --zone flag is required but
not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute zones
list .

container

build_timeout

Timeout, in seconds, to wait for container builds to complete.

cluster

Name of the cluster to use by default when working with Kubernetes Engine.

new_scopes_behavior

If True, use new scopes behavior and do not add compute-rw , storage-ro , service-control , or service-
management scopes. The former two ( compute-rw and storage-ro ) only apply to clusters at Kubernetes v1.9
and below; starting v1.10, compute-rw and storage-ro are not added by default. Any of these scopes may be
added explicitly using --scopes . Using new scopes behavior will be the default in a future release. Additionally,
if this property is set to True, using --[no-]enable-cloud-endpoints is not allowed. This property is ignored in
alpha and beta, since these tracks always use the new behavior. See --scopes help for more info.

use_application_default_credentials

If True, use application default credentials to authenticate to the cluster API server.

use_client_certificate

If True, use the cluster's client certificate to authenticate to the cluster API server.

dataproc

region

Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an independent resource namespace
constrained to deploying instances into Compute Engine zones inside the region. The default value
of global is a special multi-region namespace which is capable of deploying instances into all Compute Engine
zones globally, and is disjoint from other Cloud Dataproc regions.

deployment_manager

glob_imports

Enable import path globbing. Uses glob patterns to match multiple imports in a config file.

filestore

location

Default location to use when working with Cloud Filestore locations. When a --location flag is required but not
provided, the command will fall back to this value, if set.

functions

region

Default region to use when working with Cloud Functions resources. When a --region flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud beta functions
regions list .

gcloudignore

enabled

If True, do not upload .gcloudignore files (see $ gcloud topic gcloudignore ). If False, turn off the
gcloudignore mechanism entirely and upload all files.

interactive

bottom_bindings_line

If True, display the bottom key bindings line.

bottom_status_line

If True, display the bottom status line.

completion_menu_lines

Number of lines in the completion menu.

context

Command context string.

fixed_prompt_position

If True, display the prompt at the same position.

help_lines

Maximum number of help snippet lines.

hidden

If True, expose hidden commands/flags.

justify_bottom_lines

If True, left- and right-justify bottom toolbar lines.

manpage_generator

If True, use the manpage CLI tree generator for unsupported commands.

multi_column_completion_menu

If True, display the completions as a multi-column menu.

prompt

Command prompt string.

show_help

If True, show help as command args are being entered.

suggest

If True, add command line suggestions based on history.

ml_engine

local_python

Full path to the Python interpreter to use for Cloud ML Engine local predict/train jobs. If not specified, the default
path is the one to the Python interpreter found on system PATH .

polling_interval

Interval (in seconds) at which to poll logs from your Cloud ML Engine jobs. Note that making it much faster than
the default (60) will quickly use all of your quota.

proxy

address

Hostname or IP address of proxy server.

password

Password to use when connecting, if the proxy requires authentication.

port

Port to use when connected to the proxy server.

rdns

If True, DNS queries will not be performed locally, and instead, handed to the proxy to resolve. This is the
default behavior.

type

Type of proxy being used. Supported proxy types are: [http, http_no_tunnel, socks4, socks5].

username

Username to use when connecting, if the proxy requires authentication.

redis

region

Default region to use when working with Cloud Memorystore for Redis resources. When a region is required
but not provided by a flag, the command will fall back to this value, if set.

spanner

instance

Default instance to use when working with Cloud Spanner resources. When an instance is required but not
provided by a flag, the command will fall back to this value, if set.

EXAMPLES

To set the project property in the core section, run:

$ gcloud config set project myProject

To set the zone property in the compute section, run:

$ gcloud config set compute/zone asia-east1-b

To disable prompting for scripting, run:

$ gcloud config set disable_prompts true

To set a proxy with the appropriate type, and specify the address and port on which to reach it, run:

$ gcloud config set proxy/type http


$ gcloud config set proxy/address 1.234.56.78
$ gcloud config set proxy/port 8080

For a full list of accepted values, see the Cloud SDK properties page: https://cloud.google.com/sdk/docs/properties

NOTES

These variants are also available:


$ gcloud alpha config set
$ gcloud beta config set

Last updated December 4, 2018.


gcloud
NAME

gcloud - manage Google Cloud Platform resources and developer workflow

SYNOPSIS

gcloud GROUP | COMMAND [ --account = ACCOUNT ] [ --configuration = CONFIGURATION ] [ --flags-file = YAML_FILE ] [ --flatten =[ KEY ,…]] [ --format = FORMAT ] [ --help ] [ --project = PROJECT_ID ] [ --quiet , -q ] [ --verbosity = VERBOSITY ; default="warning"] [ --version , -v ] [ GCLOUD_WIDE_FLAG … ]

DESCRIPTION

The gcloud CLI manages authentication, local configuration, developer workflow, and interactions with the Google Cloud Platform APIs.

GLOBAL FLAGS

--account = ACCOUNT

Google Cloud Platform user account to use for invocation. Overrides the default core/account property value for this command invocation.

--configuration = CONFIGURATION

The configuration to use for this command invocation. For more information on how to use configurations, run: gcloud topic configurations . You can also use the
[CLOUDSDK_ACTIVE_CONFIG_NAME] environment variable to set the equivalent of this flag for a terminal session.

--flags-file = YAML_FILE

A YAML or JSON file that specifies a --flag : value dictionary. Useful for specifying complex flag values with special characters that work with any command interpreter. Additionally,
each --flags-file arg is replaced by its constituent flags. See $ gcloud topic flags-file for more information.
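As a sketch of the format (the file name and flag choices here are illustrative, not prescribed by gcloud), a flags file is a plain YAML dictionary mapping flag names to values:

```shell
# Illustrative sketch: the file name and flag values below are made up;
# the flags used must be valid for whichever command reads the file.
cat > deploy-flags.yaml <<'EOF'
--format: json
--verbosity: info
EOF

# On the command line, each --flags-file arg expands to its flags, e.g.:
# gcloud app deploy --flags-file=deploy-flags.yaml
```

This keeps long or special-character-laden flag values out of the shell's quoting rules entirely.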

--flatten =[ KEY ,…]

Flatten name[] output resource slices in KEY into separate records for each item in each slice. Multiple keys and slices may be specified. This also flattens keys for --format and --
filter . For example, --flatten=abc.def flattens abc.def[].ghi references to abc.def.ghi . A resource record containing abc.def[] with N elements will expand to N records in
the flattened output. This flag interacts with other flags that are applied in this order: --flatten , --sort-by , --filter , --limit .

--format = FORMAT

Set the format for printing command output resources. The default is a command-specific human-friendly output format. The supported formats
are: config , csv , default , diff , disable , flattened , get , json , list , multi , none , object , table , text , value , yaml . For more details run $ gcloud topic formats.

--help

Display detailed help.

--project = PROJECT_ID

The Google Cloud Platform project name to use for this invocation. If omitted, then the current project is assumed; the current project can be listed using gcloud config list --
format='text(core.project)' and can be set using gcloud config set project PROJECTID . Overrides the default core/project property value for this command invocation.

--quiet , -q

Disable all interactive prompts when running gcloud commands. If input is required, defaults will be used, or an error will be raised. Overrides the default core/disable_prompts property
value for this command invocation. Must be used at the beginning of commands. This is equivalent to setting the environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1.

--verbosity = VERBOSITY ; default="warning"

Override the default verbosity for this command with any of the supported standard verbosity levels: debug , info , warning , error , critical , none . Overrides the
default core/verbosity property value for this command invocation.

--version , -v

Print version information and exit. This flag is only available at the global level.

-h

Print a summary help and exit.

OTHER FLAGS

--log-http

Log all HTTP server requests and responses to stderr. Overrides the default core/log_http property value for this command invocation.

--trace-token = TRACE_TOKEN

Token used to route traces of service requests for investigation of issues. Overrides the default core/trace_token property value for this command invocation.

--user-output-enabled

Print user intended output to the console. Overrides the default core/user_output_enabled property value for this command invocation. Use --no-user-output-enabled to disable.

GROUPS

GROUP is one of the following:

alpha

(ALPHA) Alpha versions of gcloud commands.

app

Manage your App Engine deployments.

auth

Manage oauth2 credentials for the Google Cloud SDK.

beta

(BETA) Beta versions of gcloud commands.

bigtable

Manage your Cloud Bigtable storage.

builds

Create and manage builds for Google Cloud Build.

components

List, install, update, or remove Google Cloud SDK components.

composer

Create and manage Cloud Composer Environments.

compute

Create and manipulate Google Compute Engine resources.

config

View and edit Cloud SDK properties.

container

Deploy and manage clusters of machines for running containers.

dataflow

Manage Google Cloud Dataflow jobs.

dataproc

Create and manage Google Cloud Dataproc clusters and jobs.

datastore

Manage your Cloud Datastore indexes.

debug

Commands for interacting with the Cloud Debugger.

deployment-manager

Manage deployments of cloud resources.

dns

Manage your Cloud DNS managed-zones and record-sets.

domains

Manage domains for your Google Cloud projects.

endpoints

Create, enable and manage API services.

firebase

Work with Google Firebase.

functions

Manage Google Cloud Functions.

iam

Manage IAM service accounts and keys.

iot

Manage Cloud IoT resources.

kms

Manage cryptographic keys in the cloud.

logging

Manage Stackdriver Logging.

ml

Use Google Cloud machine learning capabilities.

ml-engine

Manage Cloud ML Engine jobs and models.

organizations

Create and manage Google Cloud Platform Organizations.

projects

Create and manage project access policies.

pubsub

Manage Cloud Pub/Sub topics and subscriptions.

redis

Manage Cloud Memorystore Redis resources.

services

List, enable and disable APIs and services.

source

Cloud git repository commands.

spanner

Command groups for Cloud Spanner.

sql

Create and manage Google Cloud SQL databases.

topic

gcloud supplementary help.

COMMANDS

COMMAND is one of the following:

docker

(DEPRECATED) Enable Docker CLI access to Google Container Registry.

feedback

Provide feedback to the Google Cloud SDK team.

help

Search gcloud help text.

info

Display information about the current gcloud environment.

init

Initialize or reinitialize gcloud.

version

Print version information for Cloud SDK components.


Last updated November 21, 2018.


gcloud topic configurations


NAME

gcloud topic configurations - supplementary help for named configurations

DESCRIPTION

gcloud properties can be stored in named configurations , which are collections of key-value pairs that influence the
behavior of gcloud.

Named configurations are intended to be an advanced feature, and you can probably ignore them entirely if you only work
with one project.

Properties that are commonly stored in configurations include default Google Compute Engine zone, verbosity level,
project ID, and active user or service account. Configurations allow you to define and enable these and other settings
together as a group.

Configurations are especially useful if you:

Work with multiple projects. You can create a separate configuration for each project.

Use multiple accounts, for example, a user account and a service account, etc.

Perform generally orthogonal tasks (work on an App Engine app in project foo, administer a Google Compute Engine
cluster in zone us-central1-a, manage the network configurations for region asia-east1, etc.)

Property information stored in named configurations is readable by all gcloud commands and may be modified
by gcloud config set and gcloud config unset .

Creating configurations

Named configurations may be defined by users or built into gcloud.


User defined configurations have lowercase names, such as 'johndoe', 'default', 'jeff-staging', or 'foo2'. These are defined
by the following regular expression: ^[a-z][-a-z0-9]*$
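The naming rule above can be checked mechanically before creating a configuration; a small sketch (the helper is hypothetical, not part of the SDK):

```shell
# Hypothetical helper: test a candidate configuration name against the
# documented pattern ^[a-z][-a-z0-9]*$ (a lowercase letter first, then
# any mix of lowercase letters, digits, or hyphens).
valid_config_name() {
  printf '%s\n' "$1" | grep -Eq '^[a-z][-a-z0-9]*$'
}

valid_config_name jeff-staging && echo "ok"    # ok
valid_config_name 2fast || echo "rejected"     # rejected: must start with a letter
```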

Additionally, there is a built-in configuration named NONE that has no properties set.

The easiest way to create a brand new configuration is by running

$ gcloud init

This will guide you through setting up your first named configuration, creating a new named configuration, or reinitializing
an existing named configuration. (Note: reinitializing an existing configuration will remove all its existing properties!)

You can create a new empty configuration with

$ gcloud config configurations create my-config

Using configurations

gcloud may have at most one active configuration which provides property values. Inactive configurations have no effect
on gcloud executions.

You can activate a configuration with

$ gcloud config configurations activate my-config

To display the path of the active configuration, run:


$ gcloud info --format='get(config.paths.active_config_path)'

Note that changes to your OS login, Google Cloud Platform account or project could change the path.

You can view and change the properties of your active configuration using the following commands:

$ gcloud config list


$ gcloud config set

Additionally, commands under gcloud config configurations allow you to list, activate, describe, and delete
configurations that may or may not be active.

You can activate a configuration for a single gcloud invocation using the flag --configuration my-config , or the
environment variable CLOUDSDK_ACTIVE_CONFIG_NAME=my-config .
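Because an environment variable assigned on the command line affects only that command's environment, a single invocation can be pinned to a configuration without changing the active one. A sketch (the configuration name is a placeholder, and plain sh stands in for gcloud so the sketch runs anywhere):

```shell
# Sketch: run one command under a given configuration by setting
# CLOUDSDK_ACTIVE_CONFIG_NAME only in that command's environment.
# "my-config" is a placeholder configuration name; the wrapped command
# here is plain sh instead of gcloud so the sketch is self-contained.
run_in_config() {
  cfg="$1"; shift
  CLOUDSDK_ACTIVE_CONFIG_NAME="$cfg" "$@"
}

run_in_config my-config sh -c 'echo "$CLOUDSDK_ACTIVE_CONFIG_NAME"'
# prints: my-config
# In real use: run_in_config my-config gcloud config list
```

The caller's own environment is left untouched, so the active configuration still applies to subsequent commands.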

AVAILABLE PROPERTIES

core

account

Account gcloud should use for authentication. Run gcloud auth list to see your currently available accounts.

custom_ca_certs_file

Absolute path to a custom CA cert file.

default_regional_backend_service

If True, backend services in gcloud compute backend-services will be regional by default. Setting the --
global flag is required for global backend services.

disable_color

If True, color will not be used when printing messages in the terminal.

disable_prompts

If True, the default answer will be assumed for all user prompts. However, for any prompts that require user
input, an error will be raised. This is equivalent to either using the global --quiet flag or setting the
environment variable CLOUDSDK_CORE_DISABLE_PROMPTS to 1. Setting this property is useful when scripting
with gcloud .

disable_usage_reporting

If True, anonymous statistics on SDK usage will not be collected. This value is set by default based on your
choices during installation, but can be changed at any time. For more information,
see: https://cloud.google.com/sdk/usage-statistics

log_http

If True, log HTTP requests and responses to the logs. To see logs in the terminal, adjust verbosity settings.
Otherwise, logs are available in their respective log files.

max_log_days

Maximum number of days to retain log files before deleting. If set to 0, turns off log garbage collection and does
not delete log files. If unset, the default is 30 days.

pass_credentials_to_gsutil

If True, pass the configured Cloud SDK authentication to gsutil.

project

Project ID of the Cloud Platform project to operate on by default. This can be overridden by using the global
--project flag.

show_structured_logs

Control when JSON-structured log messages for the current verbosity level (and above) will be written to
standard error. If this property is disabled, logs are formatted as text by default.

Valid values are:

never - Log messages as text

always - Always log messages as JSON

log - Only log messages as JSON if stderr is a file

terminal - Only log messages as JSON if stderr is a terminal

If unset, default is never .

trace_token

Token used to route traces of service requests for investigation of issues. This token will be provided by Google
support.

user_output_enabled

True, by default. If False, messages to the user and command output on both standard output and standard
error will be suppressed.

verbosity

Default logging verbosity for gcloud commands. This is the equivalent of using the global --verbosity flag.
Supported verbosity levels: debug , info , warning , error , and none .

app

cloud_build_timeout

Timeout, in seconds, to wait for Docker builds to complete during deployments. All Docker builds now use the
Cloud Build API.

promote_by_default

If True, when deploying a new version of a service, that version will be promoted to receive all traffic for the
service. This property can be overridden via the --promote-by-default or --no-promote-by-default flags.

stop_previous_version

If True, when deploying a new version of a service, the previously deployed version is stopped. If False, older
versions must be stopped manually.

use_runtime_builders

If set, opt in/out to a new code path for building applications using pre-fabricated runtimes that can be updated
independently of client tooling. If not set, the default path for each runtime is used.

auth

disable_credentials

If True, gcloud will not attempt to load any credentials or authenticate any requests. This is useful when behind
a proxy that adds authentication to requests.

billing

quota_project

Project that will be charged quota for the operations performed in gcloud . When unset, the default is
[CURRENT_PROJECT]; this will charge quota against the currently set project for operations performed on it.
Additionally, some existing APIs will continue to use a shared project for quota by default, when this property is
unset.

If you need to operate on one project, but need quota against a different project, you can use this property to
specify the alternate project.

builds

timeout

Timeout, in seconds, to wait for builds to complete.

component_manager

additional_repositories

Comma separated list of additional repositories to check for components. This property is automatically
managed by the gcloud components repositories commands.

disable_update_check

If True, Cloud SDK will not automatically check for updates.

composer

location

Composer location to use. Each Composer location constitutes an independent resource namespace
constrained to deploying environments into Compute Engine regions inside this location. This parameter
corresponds to the /locations/<location> segment of the Composer resource URIs being referenced.

compute

region

Default region to use when working with regional Compute Engine resources. When a --region flag is required
but not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute
regions list .

use_new_list_usable_subnets_api

If True, use the new API for listing usable subnets which only returns subnets in the current project.

zone

Default zone to use when working with zonal Compute Engine resources. When a --zone flag is required but
not provided, the command will fall back to this value, if set. To see valid choices, run gcloud compute zones
list .
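Properties outside the core section are addressed as section/property when reading or setting them; for example (the region and zone values are placeholders):

```shell
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
gcloud config get-value compute/zone
```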

container

build_timeout

Timeout, in seconds, to wait for container builds to complete.

cluster

Name of the cluster to use by default when working with Kubernetes Engine.

new_scopes_behavior

If True, use new scopes behavior and do not add compute-rw , storage-ro , service-control , or service-
management scopes. The former two ( compute-rw and storage-ro ) only apply to clusters at Kubernetes v1.9
and below; starting v1.10, compute-rw and storage-ro are not added by default. Any of these scopes may be
added explicitly using --scopes . Using new scopes behavior will be the default in a future release. Additionally,
if this property is set to True, using --[no-]enable-cloud-endpoints is not allowed. This property is ignored in
alpha and beta, since these tracks always use the new behavior. See --scopes help for more info.

use_application_default_credentials

If True, use application default credentials to authenticate to the cluster API server.

use_client_certificate

If True, use the cluster's client certificate to authenticate to the cluster API server.

dataproc

region

Cloud Dataproc region to use. Each Cloud Dataproc region constitutes an independent resource namespace
constrained to deploying instances into Compute Engine zones inside the region. The default value
of global is a special multi-region namespace which is capable of deploying instances into all Compute Engine
zones globally, and is disjoint from other Cloud Dataproc regions.

deployment_manager

glob_imports

Enable import path globbing. Uses glob patterns to match multiple imports in a config file.

filestore

location

Default location to use when working with Cloud Filestore locations. When a --location flag is required but not
provided, the command will fall back to this value, if set.

functions

region

Default region to use when working with Cloud Functions resources. When a --region flag is required but not
provided, the command will fall back to this value, if set. To see valid choices, run gcloud beta functions
regions list .

gcloudignore

enabled

If True, skip uploading files that match the patterns in the .gcloudignore file (see $ gcloud topic
gcloudignore ). If False, turn off the gcloudignore mechanism entirely and upload all files.

interactive

bottom_bindings_line

If True, display the bottom key bindings line.

bottom_status_line

If True, display the bottom status line.

completion_menu_lines

Number of lines in the completion menu.

context

Command context string.

fixed_prompt_position

If True, display the prompt at the same position.

help_lines

Maximum number of help snippet lines.


hidden

If True, expose hidden commands/flags.

justify_bottom_lines

If True, left- and right-justify bottom toolbar lines.

manpage_generator

If True, use the manpage CLI tree generator for unsupported commands.

multi_column_completion_menu

If True, display the completions as a multi-column menu.

prompt

Command prompt string.

show_help

If True, show help as command args are being entered.

suggest

If True, add command line suggestions based on history.

ml_engine

local_python

Full path to the Python interpreter to use for Cloud ML Engine local predict/train jobs. If not specified, the default
is the Python interpreter found on the system PATH .

polling_interval

Interval (in seconds) at which to poll logs from your Cloud ML Engine jobs. Note that making it much faster than
the default (60) will quickly use all of your quota.

proxy

address

Hostname or IP address of proxy server.

password

Password to use when connecting, if the proxy requires authentication.

port

Port to use when connected to the proxy server.

rdns

If True, DNS queries will not be performed locally, and are instead handed to the proxy to resolve. This is the
default behavior.

type

Type of proxy being used. Supported proxy types are: [http, http_no_tunnel, socks4, socks5].

username

Username to use when connecting, if the proxy requires authentication.

redis

region

Default region to use when working with Cloud Memorystore for Redis resources. When a region is required
but not provided by a flag, the command will fall back to this value, if set.

spanner

instance

Default instance to use when working with Cloud Spanner resources. When an instance is required but not
provided by a flag, the command will fall back to this value, if set.
Run Express.js on Google App Engine Flexible Environment
Author(s): @jmdobry Published: Jan 7, 2016

Contents
Express.js
Prerequisites
Prepare
Create
Run
Deploy

Google Cloud Platform Community tutorials submitted from the community do not represent official Google Cloud Platform product
documentation.

Express.js

Express is a minimal and flexible Node.js web application framework that provides a robust set of features for web and mobile
applications.

– expressjs.com

You can check out Node.js and Google Cloud Platform to get an overview of Node.js itself and learn ways to run Node.js apps on
Google Cloud Platform.

Prerequisites

1. Create a project in the Google Cloud Platform Console.


2. Enable billing for your project.
3. Install the Google Cloud SDK.
4. Install Node.js on your local machine.

Prepare

1. Initialize a package.json file with the following command:

npm init

2. Add a start script to your package.json file:

"scripts": {
"start": "node index.js"
}

3. Install Express.js:

npm install --save express

Create

Create an index.js file with the following contents:

const express = require('express');

const app = express();
app.get('/', (req, res) => {
  res.send('Hello World!');
});

const server = app.listen(8080, () => {
  const host = server.address().address;
  const port = server.address().port;

  console.log(`Example app listening at http://${host}:${port}`);
});

Run

1. Run the app with the following command:

npm start

2. Visit http://localhost:8080 to see the Hello World! message.

Deploy

1. Create an app.yaml file with the following contents:

runtime: nodejs
env: flex

2. Run the following command to deploy your app:

gcloud app deploy

3. Visit http://YOUR_PROJECT_ID.appspot.com to see the Hello World! message.



3 years on Google App Engine. An Epic Review.
13 MARCH 2017 on cloud, hosting, java, datastore, google, app-engine

For the last 3 years I worked on an application that runs on Google App Engine. It is a
fascinating, unique piece of service Google is offering here. Unlike anything you'll find
elsewhere. This is my in-depth, personal take on it.

Google's Cloud (est. 2008)


First of all, what is Google App Engine (GAE) actually? It is a platform to run your web
applications on. Like Heroku. But different when you look closer. It is also a versatile
cloud computing platform. Like AWS. But different. Let me explain.

Google launched GAE in 2008, when cloud computing was still in its infancy. Amazon
was ahead of them since they already started renting out their IT infrastructure in
2006. But with GAE, Google offered a sophisticated Platform-as-a-Service (PaaS) very
early on that would be matched by Amazon with its Elastic Beanstalk service in 2011.
Now what is so special about GAE?

It is a fully-managed application platform. So far, I do not know a platform which
comes close to GAE's full package: log management, mail delivery, scaling,
memcache, image manipulation, distributed Cron jobs, load balancing, version
management, task queue, search, performance analysis, cloud debugging, content
delivery network - and that is not even mentioning auxiliary services that have
popped up on Google's cloud in the meantime like SQL, BigQuery, file storage... the
list goes on.

By using Google App Engine, you can run your app on top of (probably) the world's
best infrastructure. Also, you receive functionality out of the box that would take at
least a dozen add-ons from third parties on Heroku or a few weeks of setup if done on
your own. This is GAE's appeal.

Noteworthy applications that run on GAE include Snapchat and Khan Academy.

Development
The web app I was working on all this time is a single, large Java application. App
Engine also supports Python, PHP and Go. Now you might wonder why the selection
is so limited. One reason is that in order to have a fully-managed environment,
Google needs to integrate the platform with the environment. You could say that
environment and platform are tightly coupled. That takes a lot of effort and
investment which becomes very clear once you start developing for GAE.

SDK
Each app needs to use a special SDK (Software Development Kit) to use the APIs
offered by GAE. The SDK is huge. For example, the Java SDK download comes in at
roughly 190 MB. Granted, some of the JARs in there are not needed for most use cases
and some only during development - but still, it certainly is not lightweight (even for
Java, that is).

The SDK is not just your bridge to the world of Google App Engine but also serves as
its simulation on your local machine. For virtually every GAE API it features a stub
that you can develop against. First of all, this means that when you run your app
locally you'll get quite close to how it would behave in production. Second of all, you
can easily write integration tests against the APIs. And usually this will get you very
far; the mismatch between the production and stub behavior is quite small.

Java APIs
Speaking of APIs, you are in for a surprise when you use certain Java APIs. Since GAE
runs your application in some kind of sandbox, it forbids using particular Java APIs.
The major restrictions include writing to the file system, certain methods of
java.lang.System and using the Java Native Interface (JNI). There are also
peculiarities about using threads and sockets but more on that later.

One interesting thing is that the Java SDK actually ensures you do not use these
restricted APIs locally. When you run your app or just an integration test, it employs a
Java agent that monitors your every method call. It immediately throws an exception
for any detected violation. This is helpful in finding violations early and not only in
production but has an annoying side effect. When you profile the performance of your
app, there will be an overwhelming amount of violation checks by the agent. In the
end, it is hard to judge your app's actual performance since the more method calls you
make, the more overhead the agent generates.

Java Development Kit (JDK)


The next thing you might notice when you start developing is that you cannot use
Java 8. Even though Java 7's end of life was in 2015, it is still very much alive and
kicking on GAE. The third highest voted issue on GAE's issue tracker is support for
Java 8 (the second highest is support for Python 3). It was created in 2013. Since then,
the only shred of news about any progress on the matter is a post on the App Engine
mailing list from 2016, stating engineers are actively working on it. Well, good for
you.

Obviously, this limitation is a major annoyance for any developer. For me personally,
the missing lambda support weighs very heavily. Of course, one could migrate to one
of the many JVM languages like Groovy, Scala or Kotlin which all offer a lot more
features than Java 8. But this is a costly and risky investment to make. Too costly and
risky for our project. We also investigated the feasibility of retrolambda, a backport of
lambdas to Java 7, but did not pursue it yet although it looked promising in first tests.

Having to stay with an old version is also a liability for the business. It makes it harder
to find developers. Overall application security is threatened, as well. Google support
told us we would still receive security patches for our production JDK 7. But
eventually, all major libraries like Spring will stop supporting it. Eventually, you'll be
stuck.

Deployment
To deploy your application, you need to create an appengine-web.xml configuration
file. There, you specify the application ID and version plus some additional settings,
e.g. marking the app as threadsafe to be able to receive multiple requests per
instance simultaneously.
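A minimal appengine-web.xml along these lines might look as follows (the application ID and version are placeholders):

```xml
<appengine-web-app xmlns="http://appspot.com/ns/1.0">
  <application>my-app-id</application>
  <version>v42</version>
  <threadsafe>true</threadsafe>
</appengine-web-app>
```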

Upload
App Engine expects to receive your Java application as a packaged WAR file. You can
upload it to their servers with the appcfg script from the SDK. Optionally, there are
plugins for Maven and Gradle which make this as easy as writing
mvn appengine:update . The upload can take quite a while for typical Java applications,
so you'd better have a fast internet connection. Once the process finishes, you can see
your newly deployed version in the Google Cloud Console:

Static Files
Static files like images, stylesheets and scripts are part of any web application today.
In the appengine-web.xml, files can be marked as static. Google will serve these files
directly - without hitting your application. It is not exactly a Content Delivery
Network (CDN) since it is not distributed to hundreds of edge nodes, but it helps to
reduce the load on your servers.
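Marking files as static is a matter of include patterns in appengine-web.xml; a sketch (the paths and expiration value are illustrative):

```xml
<static-files>
  <include path="/css/**" />
  <include path="/images/**" expiration="7d" />
</static-files>
```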

Versions
The nice thing about App Engine is that everything you deploy has a specific version.
Every version can be accessed at https://<version>-dot-<app-id>.appspot.com . But
which one is actually live?

You can mark a version as default . This means when you go to
https://<app-id>.appspot.com (or the domain name you specified for the app), that
will be the version receiving all the requests. Switching a version to default is very
easy: all it takes is a button click or a simple terminal command. GAE can switch
immediately or migrate your traffic incrementally to prevent overwhelming the new
version.
There is also one option (which we never used) that allows you to distribute your
traffic across multiple versions. This allows incrementally rolling out a new version by
only giving it to a fraction of the user base before making it available for everyone.
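With the current Cloud SDK, switching and splitting traffic looks roughly like this (the service and version names are placeholders):

```shell
# Route all traffic to the new version
gcloud app services set-traffic default --splits v2=1

# Or roll it out gradually: 10% to v2, 90% stays on v1
gcloud app services set-traffic default --splits v1=0.9,v2=0.1
```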

Since it is so easy to create new versions and switch production traffic between them,
GAE is a perfect platform to practice blue-green deployment. Each time we had the
need to roll back due to a bug in the new version, it was effortless. Continuous
Delivery should also be achievable by writing a somewhat smart deployment script.

Instances
Every version can run any number of instances (the only limit is your credit card). The
actual number is the result of incoming traffic and the scaling configuration of your
app; we'll look at that later. Google will distribute incoming requests between all
running instances of that version. You can see a list of instances, including some basic
metrics like requests and latency, in the Google Cloud Console:

The hardware options you can choose from to run these instances on are - let's be
frank here - pathetic. App Engine basically offers four different instance classes
ranging from 128MB and 600MHz CPU (you read that correctly) to 1024MB and
2.4GHz CPU. Yes, again, that is true. And truly sad. On a developer's laptop our app
started almost twice as fast as in production.

Services
So far, I have only talked about a single, monolithic application. But what do you do if
yours consists of multiple services? App Engine has got you covered. Every app is a
service. If you only have one, it is simply called default . You can access each one
directly via https://<version>-dot-<service>-dot-<app-id>.appspot.com .

You can easily deploy multiple versions of each service, scale and monitor them
separately. And since each service is separate from the others, you could run any
combination of the supported languages. Unfortunately though, some configuration
settings are shared across all services. They are therefore not perfectly isolated. Still,
all in all, GAE seems like a good fit for microservices. There is some elaborate
documentation on this topic from Google, as well.
For reasons that will become clear later, we decided to separate our application into
two services: frontend (user-facing) and backend (background work). But to do so, we
didn't actually split the monolith in two - that would have taken months. We simply
deployed the same app twice and only sent users to one service and background work
to the other.

Operations
Let's talk about what it means to run your application on App Engine. As you will see,
there are a number of restrictions it imposes on you. But it is not all gloomy. In the
end you will understand why.

Application Startup
When App Engine starts a new instance, the app needs to initialize. It will either
directly send the HTTP request from the user to the app or - if the configuration and
scaling circumstances allow it - send a so-called warmup request. Either way, the first
request is called a loading request. And as you can imagine, starting quickly is
important.
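In the Java runtime, warmup requests are enabled by declaring the warmup inbound service in appengine-web.xml:

```xml
<inbound-services>
  <service>warmup</service>
</inbound-services>
```

App Engine then sends a request to /_ah/warmup to initialize a new instance before routing user traffic to it.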

The instance itself on the other hand is ridiculously fast to start. If you have started a
server in the Cloud before, you might have waited more than a minute. Not on GAE.
Instances start almost instantly. I guess Google holds a pool of servers ready to go.
The bottleneck will always be your own app. Our application took more than 40
seconds to start in production. So unless we wanted to split our huge monolith into
separate services, we needed it to start more efficiently.

The app uses Spring. Google even has a dedicated documentation entry just for that:
Optimizing Spring Framework for App Engine Applications. There we found the
inspiration for our most important startup optimization.

We got rid of Spring's classpath scanning. It is particularly slow on App Engine
(probably due to the abysmal CPU). Luckily, there is a library called classindex. It
writes the fully qualified path of classes with a special annotation to a text file. By
simply reading the beans from the text file, the Spring initialization went down by
about 8-10 seconds.

Request Handling
The very first thing I have to mention here is App Engine's requirement to
handle a user request within 60 seconds and a background request within 10 minutes.
When the application takes too long to respond, the request is aborted with a 500
status code and a DeadlineExceededException is thrown.

Usually, this shouldn't be a problem. If your app takes more than 60 seconds to
respond, odds are the user is long gone anyway. But since an instance is started via an
HTTP request, this also means it has to start in 60 seconds. In production, we
observed variations in startup time of up to 10 seconds. This means you now have less
than 50 seconds to start your app. It is not uncommon for a Java app to take that long.

One nice little feature I'd like to highlight is the geographical HTTP headers: for each
incoming user request, Google adds headers that contain the user's country, region,
city as well as latitude and longitude of said city. This can be very useful, for example
for pre-filling phone number country codes or detecting unusual account login
locations. The accuracy also seems pretty high from our observations. It is usually
very cumbersome and/or expensive to get that kind of information with this level of
accuracy from a third party API or database. So getting it for free on App Engine is a
nice bonus.

Background Work
Threads
As mentioned earlier, there are restrictions using Java threads. While it is possible to
start a new thread, albeit through a custom GAE ThreadManager , it cannot 'outlive' the
request it was created in. This can be annoying in practice since third party libraries
don't follow App Engine's restrictions, of course. Finding a compatible library or adapting
a seemingly incompatible one cost us a lot of sweat and tears over the years. For
example, we could not use the Dropwizard metrics library out of the box since it relies
on using a background thread.

Queue
But there are other ways of doing background work: In the spirit of the Cloud, you
apply the divide and conquer approach on the instance level. By using task queues
you can enqueue work for later processing. For example, when an email needs to be
sent, you can enqueue a new task with a payload (e.g. recipient, subject and body) and
a URL on a push queue. Then, one of your instances will receive the payload as an HTTP
POST request to the specified endpoint. If it fails, App Engine will retry the operation.

This pattern really shines when you have a lot of work to process. Simply enqueue a
batch of tasks that run in isolation. The App Engine will take care of failure handling.
No need for custom retry code. Just imagine how awkward it would be without it:
running hundreds of tasks at once, you either need to stop and start from scratch
when an error occurs or carefully track which have failed and enqueue them again for
another attempt.

And just like the rest of the App Engine, task queues scale beautifully. A queue can
receive virtually unlimited tasks. The downside is that a payload can only be up to 1 MB.
What we usually did was simply pass references to data in the payload. But then,
you need to take extra good care in your data handling since it can easily
happen that something vanishes between the time you enqueue a task and the time
that task is actually executed.

The queues are configured in a queue.xml file. Here is an example of a push queue
that fires up to one task per second with a maximum of two retries:

<queue>
  <name>my-push-queue</name>
  <rate>1/s</rate>
  <retry-parameters>
    <task-retry-limit>2</task-retry-limit>
  </retry-parameters>
</queue>

Cron
Another extremely valuable tool is the distributed Cron. In a cron.xml you can tell
GAE to issue requests at certain time intervals. These are just simple HTTP GET
requests one of your instances will receive. The smallest interval possible is once per
minute. It is very useful for regular reports, emails and cleanups.

This is what an entry in cron.xml looks like:

<cron>
  <url>/tasks/summary</url>
  <schedule>every 24 hours</schedule>
</cron>

A Cron job can also be combined with pull queues: they allow you to actively fetch a batch
of tasks from a queue. Depending on the use case, making an instance pull lots of
tasks in a batch can be much more efficient than pushing them to the instance
individually.

Like all other App Engine configuration files, the cron.xml is shared across all
services and versions of an application. This can be annoying. In our case, sometimes
when we deployed a version where a new Cron entry had been added, App Engine
would start sending requests to an endpoint which did not exist on the live (but older)
version - generating noise for our production error reporting. I imagine this must be
even more painful when using App Engine to host microservices.

Also, the Cron jobs are not run locally. I can understand why that might be: a lot of
the jobs are usually scheduled outside the usually busy time and would therefore not
even be triggered during a regular workday. But some run like every few minutes or
hours - and those are really interesting to observe. They might trigger notifications,
for example. You want to see those locally. Because eventually you will introduce a
change that leads to undesirable behavior (as has happened multiple times in our
project) and seeing it locally might prevent you from shipping it. But simulating the
Cron jobs locally is tricky (we didn't bother, unfortunately). One would probably need
to write an external tool that parses the cron.xml and then pings the according
endpoints (yuck!).
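The core of such a tool could be sketched in a few lines of shell; the snippet below runs against a sample cron.xml and only prints the URLs it would ping (the curl call against the local dev server is left as a comment, since a hypothetical real tool would also have to interpret each schedule):

```shell
# Hypothetical local cron runner sketch: extract every <url> from cron.xml.
# A real tool would then request each one on the local dev server, e.g.
#   curl -s "http://localhost:8080$url"
cat > cron.xml <<'EOF'
<cronentries>
  <cron>
    <url>/tasks/summary</url>
    <schedule>every 24 hours</schedule>
  </cron>
</cronentries>
EOF

# Print each cron URL, stripping the surrounding tags
grep -o '<url>[^<]*</url>' cron.xml | sed -e 's/<url>//' -e 's|</url>||'
```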

Scaling
App Engine will take care of scaling the number of instances based on the traffic.
How? Well, depending on how you have configured your application. There are three
modes:

Automatic: This is GAE's unique selling point. It will scale the number of
instances based on metrics like request rate and response latency. So if there is a
lot of traffic or your app is slow to respond, more instances spin up.
Manual: Basically like your good old virtual private servers. You tell Google how
many instances you want and Google delivers. This fixed instance size is useful if
you know exactly what traffic you are going to get.
Basic: Essentially the same as manual scaling mode but when an instance
becomes idle, it is turned off.

The most useful and interesting one here certainly is the automatic mode. It has a few
parameters that help to shed some light on how it works internally:
max_concurrent_requests , max_idle_instances , min_idle_instances and
max_pending_latency . To quote the App Engine documentation:

The App Engine scheduler decides whether to serve each new request with an
existing instance (either one that is idle or accepts concurrent requests), put the
request in a pending request queue, or start a new instance for that request. The
decision takes into account the number of available instances, how quickly your
application has been serving requests (its latency), and how long it takes to spin up
a new instance.
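For the Java runtime, these knobs are configured in appengine-web.xml; a sketch with purely illustrative values:

```xml
<automatic-scaling>
  <min-idle-instances>1</min-idle-instances>
  <max-idle-instances>3</max-idle-instances>
  <max-concurrent-requests>20</max-concurrent-requests>
  <max-pending-latency>100ms</max-pending-latency>
</automatic-scaling>
```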

Every time we tried to tweak those numbers, it felt like practicing black magic. It is
very difficult to actually deduce a good setup here. Yet, these numbers determine the
real-world performance of your app and hugely affect your monthly bill.

But all in all, the automatic scaling is pretty wicked. It is an especially good fit for
handling background work (e.g. generating reports, sending emails) since it often -
more so than user requests - comes in large, sudden bursts.

But the thing is, Java is a terrible fit for this kind of auto scaling due to its slow startup
time. What makes matters worse, it is very common for the scheduler to assign a
request to a starting (cold) instance. Then, all efforts that went into sub-second REST
responses go out the window. Since 2012 there has been an issue asking that user-facing
requests never be routed to cold instances. It has not even elicited the slightest comment
from Google other than the status change to 'Accepted' (sounds like one of the stages of
grief at this point).

This also explains why we split our app into two services. Before, we often found that
with a surge in background requests, the user requests would suffer. This is because
App Engine scaled the instances up immensely and, since requests are routed evenly
across instances, this led to more user requests hitting cold instances. By splitting the
app we significantly reduced how often this happened. Also, we were able to apply
different scaling strategies for the two services.

One last thing: In a side-project, I used Go on App Engine and discovered a new
perspective on the App Engine. Among Go's traits is the ability to start an application
virtually instantly. This makes App Engine and Go a perfect combination, like Batman
and Robin. Together, they embody everything I personally expected from the Cloud
ever since I learned about it. It truly scales to the workload and does so effortlessly.
Not even the abysmal hardware options seemed to pose a real problem for Go since it
is that efficient.

Data
When App Engine launched, the only database options you had were Google
Datastore for structured data and Google Blobstore for binary data. Since then, they
have added Google Cloud SQL (managed MySQL) and Google Cloud Storage (like
Amazon's S3) which replaced the Blobstore. From the beginning App Engine offered a
managed Memcache, as well.

It used to be very difficult to connect to a third-party database since you could only
use HTTP for communication. But usually databases require raw TCP. This only
changed a few years ago when the Socket API was released. But it is still in Beta,
which makes it a questionable choice for mission-critical usage. So database-wise,
there is still very much a vendor lock-in.

Anyway, in the beginning, there was only the Datastore.

Datastore
The Datastore is a proprietary NoSQL database, fully managed by Google. It is unlike
anything I had ever used before. It is a massively scaling beast with very unique traits,
guarantees and restrictions.

In the early days, the Datastore was based on a master-slave setup which featured
strongly consistent reads. A few years in, after it had suffered a few severe outages,
Google introduced a new configuration option: High Replication. The API stayed the
same, but the latency for writes increased and some reads became eventually
consistent (more on that later). The upside was the significantly increased
availability. It even has a 99.95% uptime SLA. In all the time I worked with it, I never
experienced a single issue with the Datastore's availability. It was just something you
did not have to think about.

Entities
The basics of the Datastore are simple. You can read and write entities. They are
categorized under a particular kind. An entity consists of properties. A property has a
name and a value of a certain type, such as string , boolean , float or integer .
Each entity also has a unique key.

Writing
There is no schema whatsoever, though. Entities of the same kind can look
completely different. This makes development very easy: just add a new property,
save the entity and it will be there. The flip side is that you need to write custom
migration code to rename properties. The reason for this is that an entity cannot be
updated in place - it must be loaded, changed and saved again. Depending on the
volume of entities, this can become a non-trivial task since you might need to use the
task queue to stay within the request time limits. In my experience, this leads to old
property names all over the place since refactoring is so costly and dangerous.

There are some limits for working with entities. The two most critical are:

An entity may only be 1MB in total, including additional metadata of the
encoded entity
You can only write to an entity (group, to be exact) up to once per second

In practice, this can be an issue. We rarely hit the size limit - but when we did, it was
painful. Customer data can get lost. When you hit the write rate limitation, the write
is usually fine on the next try. But of course you have to design your application to
minimize the odds of that. For example, something like a regularly updated counter
takes a lot of work to get right. Google even has a documentation entry on using
sharding to build a counter.
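The core of that sharding idea can be sketched in a few lines (the class, shard count and key format below are my own illustration, not Google's API): instead of updating one hot counter entity, each increment picks one of N shard entities at random, so no single entity group comes near the one-write-per-second limit. Reading the total then means summing all shards.

```java
import java.util.concurrent.ThreadLocalRandom;

public class ShardedCounter {
    // More shards = more sustained write throughput; with 20 shards, each
    // entity sees roughly 1/20th of the total write rate.
    static final int NUM_SHARDS = 20;

    // Each increment targets a randomly chosen shard entity instead of a
    // single hot entity; the returned string stands in for a Datastore key.
    static String pickShardKey(String counterName) {
        int shard = ThreadLocalRandom.current().nextInt(NUM_SHARDS);
        return counterName + "-shard-" + shard;
    }
}
```

A read of the counter would fetch all `NUM_SHARDS` entities and sum their values - trading read cost for write throughput.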

Reading
An entity can be fetched by its key or via a query. Reads by key are strongly
consistent, meaning you will receive the latest data even if you updated the entity
right before fetching it. However, this is not true for queries. They are eventually
consistent, so writes are not always reflected immediately. This can lead to problems
and might need to be mitigated, for example by clever data modelling (e.g. using a
mnemonic as the key) or by leveraging special Datastore features (e.g. entity groups).
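To illustrate the mnemonic-key idea (the helper and naming below are my own, not from the post): if the entity key is derived deterministically from a natural identifier, a later lookup can use a strongly consistent get-by-key instead of an eventually consistent query on a property.

```java
public class KeyFor {
    // Derive a deterministic key from a natural identifier so the entity
    // can be fetched by key (strongly consistent) rather than found via a
    // property query (eventually consistent).
    static String user(String email) {
        // Normalize so " Foo@Bar.com " and "foo@bar.com" map to the same key.
        return "user:" + email.trim().toLowerCase();
    }
}
```

After saving a user under `KeyFor.user(email)`, a read-your-own-write immediately after signup works reliably, where a query on an `email` property might not yet see the new entity.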

A query always specifies an entity kind and optional filters and/or sort orders. Every
property that is used in a filter or as a sort key must be indexed. An index is written
as part of the regular write operation - not built in the background, as in most SQL
databases. The index also increases the duration of the write operation and its cost
(more on that later).

If a query involves multiple properties, it requires a multi-index. It must be specified
in a configuration file called datastore-indexes.xml . Here is an example:

<datastore-index kind="Employee" ancestor="false">
    <property name="lastName" direction="asc" />
    <property name="hireDate" direction="desc" />
</datastore-index>

In contrast to other databases, the absence of a multi-index will not just result in an
inefficient, slow query - the query will fail immediately. The Datastore tries its very
best to enforce performant queries. Inequality filters, for example, may only involve a
single property. Of course, there are always ways to shoot yourself in the foot - but
they are rare.

There are several other features I cannot go into now, for example pagination,
projection queries and transactions. Go to the Datastore documentation to learn
more; it is very extensive and helpful.

Compared to other databases, the read and write operations are very slow. Based on
my observations, a read by key takes 10-20ms on average. It is rare to see significant
deviations. My best guess is that Google serializes entities to disk and only the
indexes are actually kept in memory.

The pricing model seems to support that: you pay for stored data and for read, write
and delete operations. That's it. Note that database memory is not on that list. The
operations themselves are cheap as well: reading 100k entities costs $0.06, and 100k
write operations cost $0.18 - a write operation can be the actual entity write but also
every index write. If you don't write anything, you don't pay anything. But in a single
minute you could be writing gigabytes of data. And here's the kicker: the read and
write performance is basically the same whether the database holds no entities or a
billion. It scales like crazy.
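A back-of-the-envelope sketch using the prices quoted above (the arithmetic and the per-entity index-write count are my own parameters, not official billing logic):

```java
public class DatastoreCost {
    // $0.06 per 100k entity reads, as quoted above.
    static double readCostUsd(long reads) {
        return reads / 100_000.0 * 0.06;
    }

    // $0.18 per 100k write operations; every entity write plus each of its
    // index writes counts as one operation.
    static double writeCostUsd(long entityWrites, int indexWritesPerEntity) {
        long ops = entityWrites * (1L + indexWritesPerEntity);
        return ops / 100_000.0 * 0.18;
    }
}
```

So writing 100k entities with two index writes each triples the operation count and the cost - which is why every additional indexed property has a real price tag.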

API
The API to the Datastore feels very low-level. Therefore, for any serious Java app
there is no way around Objectify, a library written by Jeff Schnitzer. If Google has not
done so already, they should write him a huge cheque for making App Engine a better
place. He wrote it for his own business, but the tireless dedication over the years, the
extensive documentation and the support he offers in forums are astounding. With
Objectify, working with the Datastore is actually fun.

Here is an example from the documentation:

@Entity
class Car {
    @Id String vin;
    String color;
}

ofy().save().entity(new Car("123123", "red")).now();
Car c = ofy().load().type(Car.class).id("123123").now();
ofy().delete().entity(c);

Objectify makes it really easy to declare entities as simple classes and then takes care
of all the mapping between your classes and the Datastore.

It also has a few tricks up its sleeve. For example, it comes with a first-level cache.
This means that whenever you request an entity by key, it first checks a request-
scoped cache to see whether the entity was already fetched. This can improve
performance. However, it can also be confusing: when you fetch an entity and modify
it but do not save it, the next read will yield that same cached, modified object. This
can lead to Heisenbugs.
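The pitfall can be reproduced with a toy stand-in (this is not Objectify itself, just the same mechanic with a HashMap; the Car class mirrors the hypothetical example above):

```java
import java.util.HashMap;
import java.util.Map;

public class SessionCacheDemo {
    static class Car {
        final String vin;
        String color;
        Car(String vin, String color) { this.vin = vin; this.color = color; }
    }

    // Request-scoped first-level cache: repeated loads of the same key
    // return the same in-memory object, saved or not.
    private final Map<String, Car> cache = new HashMap<>();

    Car load(String vin) {
        // A real implementation would fall through to the Datastore on a miss.
        return cache.computeIfAbsent(vin, v -> new Car(v, "red"));
    }
}
```

Load a car, change its color without saving, load it again - and you get back the already-modified object rather than what is actually persisted.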

Development & Testing

Since the Datastore is a proprietary cloud database, you cannot just start it locally.
When you run your application on your machine, a mock Datastore is started by the
SDK. Its behavior comes very close to the production environment. Only the
performance is much better, which can be misleading.

For running tests against the Datastore, the SDK is also able to start a local Datastore
for you. However, this must be a different implementation, since it behaves
differently than the one used when running the app: a missing multi-index will throw
an error when executing the app locally, but not when testing the same query. Over
the years I accidentally released several queries with missing indexes into production
(usually still behind a Beta toggle) - although I had a test for each of them. After
contacting support they admitted the oversight and promised to fix it - more than
one year later they still have not.

Backups
Making backups of the Datastore is an atrocious process. There is a manual and an
automatic way. Of course, when you have a production application, you'd like to have
regular backups. The official way is a feature introduced in 2012 which is still in
Alpha!

By adding an entry to your cron.xml you can initiate the backup process. The entry
includes the names of the kinds to back up as well as the Google Cloud Storage
bucket to save them to. When the time has come, App Engine launches a few Python
instances with the backup code, iterates through the Datastore and saves the entities
in some kind of proprietary backup format to your bucket. Interestingly, a bucket has
a limit on how many files it can contain, so you had better use a new bucket now and
then.
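For illustration, such a scheduled-backup entry might look like this (a sketch based on the scheduled-backups documentation; the backup name, kind and bucket are placeholders, and the backup handler runs in the built-in ah-builtin-python-bundle module):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cronentries>
  <cron>
    <url>/_ah/datastore_admin/backup.create?name=NightlyBackup&amp;kind=Employee&amp;filesystem=gs&amp;gs_bucket_name=my-backup-bucket</url>
    <description>Nightly Datastore backup to Cloud Storage</description>
    <schedule>every day 03:00</schedule>
    <target>ah-builtin-python-bundle</target>
  </cron>
</cronentries>
```

Note that every kind to back up has to be listed explicitly in the URL's query string.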

This is the absolute worst thing about the Datastore.

Memcache
The other crucial way to store data on App Engine is Memcache. By default, you get a
shared Memcache. This means it works on a best-effort basis and there is no
guarantee how much capacity it will have. There is also a dedicated Memcache for
$0.06 per GB per hour.

Objectify is able to use this as a second-level cache. Just annotate an entity with
@Cache and it will ask Memcache before the Datastore and save every entity there
first. This can have a tremendous effect on performance. Usually Memcache responds
within about 5 ms, which is much faster than the Datastore. I am not aware of any
stale cache issue we might have had. So this works very well in production.

The benefits become very noticeable when Memcache is down. This happened to us
about once a year for an hour or two. Our site was barely usable, it was that slow.

BigQuery
BigQuery is a data warehouse as a service, managed by Google. You import data -
which can be petabytes - and run analyses via a custom query language.

It integrates somewhat well with the Datastore since it can import Datastore backup
files from Google Cloud Storage. I have used this a few times, unfortunately not
always successfully. For some of our entities I received a cryptic error. I was never
able to figure out what went wrong. But some entities did work. And after fiddling
with the query language documentation for a bit, I was able to generate my first
insights. Everything considered, it was a nice way to run simple analyses. I definitely
would not have been able to do this without writing custom code. But I was not really
leveraging the service's full potential. All the queries I made could have been done in
any SQL database directly; our data set was quite small. Only because of the way the
Datastore worked did I have to resort to BigQuery in the first place.

Monitoring
The Google Cloud Console brings a lot of features to diagnose your app's behavior in
production. Just look at the Google Cloud Console navigation:

This is the result of Google's acquisition of Stackdriver in 2014. It still feels like a
separate, standalone service - but its integration into the Google Cloud Console is
improving.

Let's look at the capabilities one by one.

Logging
It is crucial to be able to access an application's logs quickly and with ease. This is
something that was truly painful on App Engine in the beginning. It used to be very
cumbersome because the viewer was incapable of searching across all versions of an
application. This meant that when you were looking for something, you had to know
which version was online at the time - or try several, one by one. It was almost
unusable. Plus, it was extremely slow.

Since then, they have added useful filters to show only specific modules, versions, log
levels, user agents or status codes. It is very powerful. Still not fast, but it has gotten
much better compared to the early days. Here is how it looks:

One unique idea you can see here is that logs are always grouped by request. In all
other tools I have encountered, Kibana for instance, you only get the log lines that
match your search. By always showing the other log lines around the one that
matches your search, it gives you more context. I find this extremely helpful when
investigating issues in the logs since it immediately helps you understand what
happened. I truly miss that feature in every other log viewer I use.

Another interesting trait of App Engine is that each HTTP request is automatically
assigned a request ID. It is added to the incoming HTTP request and uniquely
identifies it. This can come in handy to correlate a request with its logs. For example,
we were sending emails when an uncaught exception occurred and included the
request ID - this made it trivial to look up the logs. The same can be done for
frontend error tracking.

Metrics
The Cloud Console gives access to a few basic application metrics. This includes the
request volume and latency, traffic volume, memory usage, number of instances and
error count. It is useful as a starting point when investigating an issue and when you
want to get a quick first impression of the general state of the app.

Here is an example with the app's request volume:


Tracing
Since the App Engine instance is a black box, you cannot use other tools to diagnose
its performance. If the logging console is not enough, the Trace page provides more
detailed data. It lets you search for the latency distribution of certain requests.

When you select a specific request, it opens up a timeline. There it displays the
remote procedure calls (RPCs) that you cannot see in the logs, plus a summary of
each RPC type on the side. Clicking on an RPC shows more details, e.g. the response
size.

This can be extremely helpful to find the cause of a slow request. In the following
example you can see that the request makes a few fast Memcache calls and a very
slow Datastore write operation.

The only problem is that the RPCs do not include enough information to figure out
what exactly happened. For instance, the detail view of the Datastore write operation
looks like this:

It does not even include the name of the updated entity. This is a huge annoyance and
can render this whole screen almost useless. There is just one thing which can help:
clicking the 'Show logs' button in the upper right corner. It will show the log
statements of the request interleaved with the RPCs. This way you might be able to
infer more details from the context.

Resources
It is also important to point out that pricing is completely usage-based. This means
the cost of your app scales virtually byte by byte, hour by hour and operation by
operation. It also means that it is very affordable to get started. There is no fixed
cost. If hardly anyone uses your app, you do not pay anything, since there is a free
quota.

The biggest item on the bill will most certainly be for the instances, contributing
about 80% in my last project. The next big chunk is likely the Datastore read/write
cost, 15% of the total cost for us.

There is a nice interface in the Google Cloud Console to keep track of all quotas:

To be more specific, when I say 'all quotas' I mean all the quotas Google tells you
about. We actually had an issue where we hit an invisible quota. I think at the time
the API may have been in Beta, though. Anyway, one part of our application stopped
working and we had no idea why. Luckily, we were subscribed to Google Cloud
Support. They informed us about said quota and we had to rewrite a part of our
application to make it work again.

We also had one minor outage due to the confusing pricing setup. At one point one of
our apps suddenly stopped working and just replied with the default error page. It
took us ten minutes to figure out that we had hit the budget limit we had set up. After
we raised it, everything just started working again.

Support
There is a lot to be said about Google Cloud Support. First of all, without it we would
have been in serious trouble now and then. So having it is, in my eyes, a must for any
mission-critical application. For example, about once a year our application would
just stop serving requests. There was nothing we did to cause that. After contacting
Google support, we would learn that they had moved our application to a 'different
cluster'. And it just worked again. It is a very scary situation. You cannot do anything
but 'pray to the Google gods'.

Second of all, it is hit or miss depending on the support person. The quality varied a
lot. Sometimes we would need to exchange a dozen messages until they finally
understood us. Like any support, it can be infuriating. But in the end, they would
usually resolve our issue or at least give us enough information to resolve it
ourselves.

A New Age
Google is working on a new type of App Engine, the flexible environment. It is
currently in Beta. Its goal is to offer the best of two worlds: the ease and comfort of
running on App Engine combined with the flexibility and power of Google Compute
Engine. It allows you to use any programming platform (like Java 9!) on any of the
powerful Google Compute Engine machines (like 416GB RAM!) while letting Google
take care of maintaining the servers and ensuring the app is running fine.

They have been working on this for some years already. Naturally, we were keen to
try it out. So far, we have not been that thrilled. But let's see where Google is taking
this.

Design for Scale

Now, you can look at the restrictions App Engine imposes on your app as
annoyances. But bear with me for a moment. App Engine was created by Google.
These guys know how to build scalable systems. The restrictions are merely a
necessity. They force you to adapt your app to the ways of the Cloud. This is a good
thing and should be embraced. If you feel like you are fighting App Engine, then you
are fighting against the 'new' rules of the Cloud. This is certainly one lesson I am
taking away from three years on Google App Engine.

Some restrictions and annoyances are the result of neglect by Google, though. It feels
like they now invest only the bare minimum. Actually, I have had this feeling for the
last two years. It is frustrating to work with an ancient tech stack, without any hope
of improvement in sight. It is infuriating when there are known issues but they are
not fixed. It is depressing to receive so little information on where the platform is
heading. You feel trapped.

All in all, I liked how App Engine allowed the development team to focus on actually
building an application, making users happy and earning money. Google took a lot of
hassle out of the operations work. But the 'old' App Engine is on its way out. I do not
think it is a good idea to start new projects on it anymore. If the App Engine Flexible
Environment, on the other hand, can actually fix its predecessor's major issues, it
might become a very intriguing platform to develop apps on.
Stephan Behnke
Software developer by trade. Most of the time on the everlasting quest for simplicity,
elegance and beauty in code. Or just getting stuff done in between.
Toronto, Canada · https://stephanbehnke.de · stebehn


34 Comments

Ivan Bulanov • a year ago


GAE is a brilliant platform. It is worth mentioning that reads from Datastore by key are
free. Moreover at least in Python every such read hits Memcache first. It makes them not
only free but also fast. This property affects data modeling. For example one-to-many
relationships sometimes are better represented as separate entities with lists of keys.

stephanos Mod > Ivan Bulanov • a year ago


Just a minor correction: "Small Operations" are free (e.g. keys-only queries and
projection queries). Getting an entity from the Datastore is only free for the first
20k reads/day. See https://cloud.google.com/ap...

Ludovic Champenois • 2 years ago


Very nice write up in 1 single page, that goes through all the important GAE topics,
including Java8.
I am sure you'll be interested in our Alpha launch of the Java8 Runtime in GAE
Standard, which eliminates all the restrictions of the Java7 runtime: no more whitelisting
of classes, and full control on threads and local file system (/tmp in memory RW).
You can apply for the Alpha program at: https://docs.google.com/a/g...

One thing that must be corrected in the blog post is the statement "But the 'old' App
Engine is on its way out..."
Not at all, App Engine might be old (it is the first real PAAS), but it is a proven platform,
with lots of large customers, and Google is investing *massively* in it, and the new Java8
Standard runtime is the first one that is running on a brand new security sandbox... All
existing applications will benefit from the upgrade.
Same free tier, same GAE APIs support, same ease of use, update to Jetty 9.x and
Servlets 3.1, new IDE/Tools plugins, no constraints, plus all the new Cloud APIs as well...

Thanks again for this excellent and timely write up. If you like AppEngine Standard as of
today, you will be delighted with new one, starting with the new Java8 runtime offering
without restrictions, and more later...
Ludo, Google App Engine engineering.

stephanos Mod > Ludovic Champenois • 2 years ago


Hey Ludo,

thanks for taking the time to comment :) Cool that you like the article. I was
delighted and hugely disappointed with the latest changes. I welcome them, but I
already quit my job so I will not benefit from them any time soon. I was waiting
for exactly this announcement for aaaages. And now shortly after I leave it all
finally happens.

My comment about App Engine being on its way out reflects exactly that. It felt
like there was no investment whatsoever. All I could read about was Flexible
Runtime. In our company we were pretty sure that the Standard Runtime was
basically in maintenance mode. This announcement certainly changes things. I
would have loved to have had this a year ago.

Anyway, I wish you all the best and hope you can fix the things that are annoying
on the App Engine with the upcoming releases.

Nilson > stephanos • a year ago


Please note that java8 is already GA. Although it has several
enhancements, it is worth mentioning that the cold boot time is nearly 3
times slower when compared to java7 and it is the "Intended behavior":

https://issuetracker.google...

For GAE adopters, I would recommend getting away from java. Everything
else is perfect for me.

Erika Bell • a year ago


We at 3wks have been using App Engine since circa 2011. We've done projects on
Amazon too. Here is our founder's views on our experiences
https://in.3wks.com.au/goog...

Shai Almog • a year ago


My experience with App Engine was similarly optimistic when we were roughly 3 years
in. It took a sharp turn in the other direction as it almost bankrupted us which I wrote
about in some detail here: https://medium.com/hacker-d...

This forced me to re-evaluate some of my pre-existing biases. I like PaaS and the ideas of
Java a lot but to get PaaS right you need a good commitment for service from the
underlying company and Google is basically a glorified advertising company. Search,
Mail, Android & Chrome come in second and everything else is way down in the food
chain... E.g. I've read of similar experiences from guys using firebase etc. and as a veteran
of Google code/other failed google "experiments" I understood it was time to admit that I
was wrong.

Unfortunately I could only admit that I was wrong after we lost a whole lot of money.
Today we manage individual VPS servers with cloudflare for scale CDN. Surprisingly our
performance and scale improved significantly. Ease of use is better since everything is
divided into smaller simpler projects. We are more flexible and could adopt newer tools
(e.g. Spring Boot which is fantastic) immediately for newer projects. So we don't get the
fancy charts but frankly they didn't help when the underlying data is completely masked.


James Doehring > Shai Almog • 4 months ago


You say in your article you still have no idea what went wrong with memcache. I
wouldn't rush to blame App Engine if nobody on your side knows how to use it.
App Engine has billing alerts which you should have set up to be notified if your
bill exceeds your expected costs. If you expected memcache to be used instead of
datastore but that didn't happen, this absolutely could be debugged in App
Engine, either by viewing the statistics in the App Engine console or by logging
and writing a simple script to search through the logs. I don't know your specifics
but it sounds like there were multiple ways to detect skyrocketing costs before
they put you out of business. You say you were too busy with startup stuff...what
would be more important for a startup than ensuring your bill doesn't jump from
2 digits to 4 digits without understanding why?

Shai Almog > James Doehring • 4 months ago


Here's the article where I discuss the final part of our migration process
https://medium.com/@Codenam...

Shai Almog > James Doehring • 4 months ago


You clearly didn't read the article. We know how to use it. It worked well
for a couple of years and SUDDENLY stopped without a change from our
side.
App engine didn't have billing alerts at that time. We were gold customers
paying 400USD per month just for support and their "suggestion" was to
use bill limits that would have essentially brought our server down once
we reached the bill limit. You are referring to newer versions of app engine
that allow a bit easier debugging than the crap we worked with. We didn't
sit in front of the app engine dashboard/billing UI and hit refresh every
day which was the only billing alert available at the time. So no app engine
was one (small) part of our server infrastructure where most of our code is
mobile...

We since moved the last bits off app engine to spring boot I will post the
link in a separate comment to avoid the moderation queue. This was trivial
to work with, gave us fixed price, performed MUCH MUCH MUCH better,
was more powerful and cost less ultimately!

App engine provides a very theoretical scalability benefit which is very dubious. Our
server uptime is better than it was under app engine and currently we scale without a
problem.

James Doehring > Shai Almog • 4 months ago


It sounds like App Engine did what it's supposed to do and your
company doesn't know how to debug or monitor resource usage.
You said yourself you don't dispute the problem was on your side.

Shai Almog > James Doehring • 4 months ago


Read the article. We paid for the gold support. We sent google the
source code and THEY couldn't pinpoint the so called problem.
They still claimed it was our fault and provided no means of
tracking the issue. Billing over data store read is something that's
literally impossible to track without filling the logs with ridiculous
amounts of verbose noise. There was no other way of debugging
this in production. I doubt there is a way to debug something like
this today.

You seem to look at blaming the customer and blaming us for having a bug (which I
don't know if we did). Do you write code that's 100% bug free?

We might have had a bug somewhere that manifested itself due to a change in app
engine. That's fine. But what could we have done better?

We paid for the highest level of support available. That didn't help.


James Doehring > Shai Almog • 4 months ago


The thing is, gold support still doesn't include debugging your
source code. They might not even have looked at it. So unless you
believe app engine malfunctioned with your app and made
datastore reads it shouldn't have, then this isn't a shortcoming of
app engine.

Shai Almog > James Doehring • 4 months ago


You are again picking and choosing one statement while
completely ignoring the other facts and misrepresenting all the
others. Do you work for google or have other vested interest?
They did open the code and did look at it, I communicated with the
Google engineer and asked specifically the question I asked here:
what could I have done better?

The actual response was: set spend limits. In other words the only
solution a Google engineer was able to give me was bringing down
our service daily.

It sure as hell points at a conceptual problem in their service!

When I get a service I need to have a way to check that the service
was delivered and verify the work. With IaaS that's super simple, I
have a server and it's running... With any other service I can see the
delivery and understand why I was charged.

For some types of PaaS this is problematic, for others not so much.

Nicolas Grilly > Shai Almog • 7 months ago


What do you use to manage logs?

Shai Almog > Nicolas Grilly • 7 months ago


A mixture of logging events to the common database and machine specific
logging. It's a bit of a hacky solution as we're migrating services back and
forth. We're thinking about overops but it hasn't been that much of a
problem for us that would require this. I'm assuming that as we grow this
will suddenly become a big issue.

Nicolas Grilly > Shai Almog • 7 months ago


Thanks.

Dzintars • a year ago


I still have no idea whether it is cost-safe for hosting an ERP-like system with millions
of rows in hundreds of tables. So far it looks like it only suits "one card per screen"
apps (Instagram, FB, G+), but what if all my screens are heavy with large tables (web,
desktop only)? Massive CRUD operations, importing and exporting CSVs with
thousands of rows per file? Sorting and filtering grids/tables with sometimes 10+
calculated columns on screen. Batch operations, for example to re-calculate delivery
routing for 10,000 deliveries per day. Currently I run this easily on my own server
with SSD RAID 10, 64 GB RAM and some Xeons.
But I want to prepare for scaling, so I am trying to understand how costly this could
be in some cloud. And a real migration just to test it is too expensive for me (in time
and effort).

ᗪ ᒍ ᗩ K ᗪ E K I E ᒪ • a year ago
Maybe you know how autoscaling works with the Firebase database?

stephanos Mod > ᗪ ᒍ ᗩ K ᗪ E K I E ᒪ • a year ago


Sorry, I'm not familiar with Firebase.

rbanffy • a year ago


That's a very nice review. I've been using App Engine with Python for numerous projects
with immense success since its launch. It's now my go-to environment for applications
unless there is a compelling reason to deploy elsewhere. You get some lock-in (thanks,
AppScale, for providing a way out) but, in exchange, you have a pretty well rounded
toolset.

stephanos Mod > rbanffy • a year ago


Thank you! Glad you enjoyed it. I never tried Python on GAE but it sounds like it's
a good fit.

rbanffy > stephanos • a year ago


Deployment times are faster. And you don't need to press ";" as much ;-)

Khaled • a year ago


Loved your review, wish I'd seen it a couple of months ago :D. Working on GAE and
Datastore has also been a love/hate relationship. Solo developer, trying to get started
with a small financial app. Eventual consistency drove me crazy at the beginning, since
retrieved saved values didn't reflect instantly and caused some other numbers in the app
to miscalculate. The idea of ancestor queries took some time to wrap my head around.
Now, quite a few months in, working on optimizing performance for a better user
experience. I think people can benefit if you add a couple of sections regarding
performance optimization opportunities like using map reduce, Async calls, bulk
operations, etc. It would also be great if you throw in some of the real case examples in
your journey. Thank you again for the generous & detailed review, beautifully written.

stephanos Mod > Khaled • a year ago


Thanks for your feedback! I'm glad that you liked the article. You make a good
point about pointing people to helpful articles. Do you have any in particular in
mind? I personally found Google's documentation very helpful (albeit not very
approachable).
Khaled > stephanos • a year ago
I agree with you on the Google documentation, I refer back to it every
time. What happens - at least with me - is that I read it, do not fully
understand, implement and face issues, read it again, and now I get it (probably
my fault somehow!). Through the implementation I was pushed to think
in a certain way to overcome some issues related to strong consistency,
decrease the number of reads (and thus lower costs), etc. I found some of the
topics I went through here in this article: http://blog.appscale.com/20...
I think if written with more details, it can help lots of newly starting
developers and startups :)

Ryan • a year ago


epic review indeed. great writeup, generous but even handed. thanks for posting it. really
fun to see how our biggest design goal - "The restrictions are merely a necessity. They
force you to adapt your app to the ways of the Cloud." - still shines through over a decade
after we first envisaged the product. glad it served you well!

Evan Jones > Ryan • a year ago


As someone involved in a medium-sized App Engine application, and moving
chunks of it elsewhere, I think some of these restrictions are possibly the best part
of the design. The two that come to mind for me:

1. Instances can be started and stopped at any time, so any local state must only
be a cache. This is a general "best practice" for scalable applications anyway, since
it forces you to move state to an explicit storage thing. That storage thing might
now become your bottleneck, but at least this makes it visible and explicit.

2. The request timeout. This one is much more annoying and debatable. However,
it forces you to explicitly categorize operations as "fast" and safe to wait on, or
"slow" and needing some sort of polling or other way to tell if the operation is
done. This is useful for designing your software appropriately, rather than "just
wait for X" where X might take 3 minutes.

I also like to argue both sides. In this case, these restrictions do make it a bit more
difficult for "small" applications where these things don't matter as much. I think
this is part of the reason App Engine has not been as successful as it could be:
some of these "weird" restrictions don't make sense for the "toy" applications
people write when they are first getting started.
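The "slow operation behind a poll" pattern Evan describes can be sketched in a few lines of plain Python. The names and the in-memory registry here are illustrative, not an App Engine API; on App Engine the slow work would typically run in a task queue and the status would live in the datastore:

```python
import threading
import uuid

# In-memory job registry; a real app would keep this in shared storage
# so any instance can answer a poll.
jobs = {}

def start_job(fn, *args):
    """Kick off a slow operation and return a job id immediately."""
    job_id = uuid.uuid4().hex
    jobs[job_id] = {"status": "running", "result": None}

    def run():
        jobs[job_id]["result"] = fn(*args)
        jobs[job_id]["status"] = "done"

    worker = threading.Thread(target=run)
    jobs[job_id]["thread"] = worker
    worker.start()
    return job_id

def poll(job_id):
    """A cheap status check a request handler can answer well inside
    its deadline, instead of blocking on the slow operation."""
    job = jobs[job_id]
    return job["status"], job["result"]
```

The point of the pattern is exactly the categorization Evan mentions: `start_job` and `poll` are both "fast" and safe to serve within a request deadline, while the slow work happens off the request path.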

Evan Jones • a year ago


Nice discussion! The most important question: Would you build a new app on App
Engine? Or would you recommend it to others? It sounds like yes?

I also work on an App Engine app (Python in our case) and we have a love/hate
relationship with it, as it sounds like you do. The biggest advantage, in my opinion, is
that once you get something set up and working, it just keeps working, no matter what
traffic or whatever you throw at it. The biggest disadvantage is that there are a bunch of
things that are "non-standard", which can make it hard to run "existing code" on it in
some cases. I don't even want to get into what it costs, which can be horrifically
expensive if you have anything remotely CPU or memory intensive. Overall: If you have
something that fits its model, I think it is great. However, we are starting to move parts
of our workload to Container Engine.

stephanos Mod > Evan Jones • a year ago


Thanks. Yes, you are absolutely right, I do have a love/hate relationship with it.

I think it makes sense for a couple of use cases, one being the single developer
creating a new application. Everything is taken care of for you (well, almost). It
used to be borderline impossible to move off App Engine, but now, with
offerings like Google Compute Engine, it will become easier to run a mix of
GAE/GCE or to move off entirely.

As with everything in engineering, it's a tradeoff. And it's important - that's why I
wrote the article - to understand what you're getting into. If that's a worthwhile
tradeoff, great then.

tl;dr: I would recommend App Engine if your use case and circumstances make it
a good fit ;)

chairam • 2 years ago


Thx for the nice article, I also have an application running on GAE (I used Python) and
I'm very satisfied. I'd also cite Google Cloud Endpoints, which saved me a lot of effort in
producing iOS and Android apps. A question for you on the new flexible environment:
don't you think that the same application would be more expensive on the flexible
environment than on the standard one?

stephanos Mod > chairam • 2 years ago


Glad you liked it!

Good question. To be honest, we would have paid almost anything to get Java 8
and better hardware :) We did not look at the cost really, so I don't have an
answer for that.

xSAVIKx • 2 years ago


Hello @stephanos, thanks for the great article.

I'd like to ask whether you could share your experience with
https://github.com/atteo/cl... library for Spring?

I've got lots of problems due to Spring classpath scanning and do not really want to
downgrade my configuration to raw XML.

I'd really appreciate it if you could share some notes about Spring and the classindex library.

Regards,
Yuri.

joao silva • 2 years ago


How do I set cache control on Google App Engine?
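For readers landing on this question: in the App Engine standard environment, cache lifetimes for static content are configured in app.yaml, via the top-level `default_expiration` setting and a per-handler `expiration` override. A minimal sketch (the runtime and paths here are illustrative):

```yaml
runtime: python27
api_version: 1
threadsafe: true

# Cache lifetime for all static handlers unless overridden per handler.
default_expiration: "4d 5h"

handlers:
- url: /images
  static_dir: www/images
  expiration: "30d"   # per-handler override for rarely changing assets
```

App Engine uses these values to set the Cache-Control and Expires headers on static file responses.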

This Programming Life © 2019 Proudly published with Ghost
