3. Fleet Scope phase

The Fleet Scope phase defines the resources used to create the GKE Fleet Scopes, Fleet namespaces, and some Fleet features.

Purpose

This phase creates the per-environment fleet resources, which are deployed via the fleetscope infrastructure pipeline.

An overview of the fleet-scope pipeline is shown in the Enterprise Application fleet-scope diagram.

The following resources are created:

  • Fleet scope
  • Fleet namespace
  • Cloud Source Repo
  • Config Management
  • Service Mesh
  • Multicluster Ingress
  • Multicluster Service
  • Policy Controller
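Once the pipeline has applied, the fleet resources can be verified from the command line. This is a quick check, not part of the deployment steps; FLEET_PROJECT_ID is a placeholder for your fleet host project ID:

```shell
# List the fleet scopes created in the fleet host project.
# FLEET_PROJECT_ID is a placeholder; substitute your fleet host project ID.
gcloud container fleet scopes list --project=FLEET_PROJECT_ID

# List the cluster memberships registered to the fleet.
gcloud container fleet memberships list --project=FLEET_PROJECT_ID
```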

Prerequisites

  1. The per-environment folder, network project, network, and subnetwork(s) have been provisioned.
  2. 1-bootstrap phase executed successfully.
  3. 2-multitenant phase executed successfully.

Configuring Git Access for Config Sync Repository

With Config Sync, you can manage Kubernetes resources with configuration files stored in a source of truth. Config Sync supports Git repositories, which are used as the source of truth in this example.

Config Sync is installed in this step when running the Terraform code. Before installing it, you must grant Config Sync access to your Git repository.

Config Sync supports the following mechanisms for authentication:

  • SSH key pair (ssh)
  • Cookiefile (cookiefile)
  • Token (token)
  • Google service account (gcpserviceaccount)
  • Compute Engine default service account (gcenode)
  • GitHub App (githubapp)
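In this blueprint, Config Sync itself is configured by the Terraform code, but it can help to know what the token mechanism maps to in the Config Sync API: a RootSync whose `spec.git.auth` is `token` and whose `secretRef` names the `git-creds` Secret created in the steps below. A hedged, illustrative sketch; the repository URL and branch are placeholders:

```shell
# Illustrative only: this phase's Terraform configures Config Sync for you.
# The repository URL and branch below are placeholders for your environment.
kubectl apply -f - <<'EOF'
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://gitlab.example.com/YOUR-GROUP/config-sync-repo.git
    branch: main
    auth: token
    secretRef:
      name: git-creds
EOF
```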

The example below shows the configuration steps for the token mechanism, using GitLab as the Git provider. For more information, please check the following documentation.

Git access: GitLab using a token

After you create and obtain the personal access token in GitLab, add it to a new Secret in the cluster.

  • (No HTTPS-Proxy) If you don't use an HTTPS proxy, create the Secret with the following command:

    kubectl create ns config-management-system && \
    kubectl create secret generic git-creds \
    --namespace="config-management-system" \
    --from-literal=username=USERNAME \
    --from-literal=token=TOKEN

    Replace the following:

    • USERNAME: the username that you want to use.
    • TOKEN: the token that you created in the previous step.
  • (HTTPS-Proxy) If you need to use an HTTPS proxy, add it to the Secret together with username and token by running the following command:

    kubectl create ns config-management-system && \
    kubectl create secret generic git-creds \
    --namespace=config-management-system \
    --from-literal=username=USERNAME \
    --from-literal=token=TOKEN \
    --from-literal=https_proxy=HTTPS_PROXY_URL

    Replace the following:

    • USERNAME: the username that you want to use.
    • TOKEN: the token that you created in the previous step.
    • HTTPS_PROXY_URL: the URL for the HTTPS proxy that you use when communicating with the Git repository.
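In either case, you can confirm the Secret exists in the expected namespace before continuing. This is a quick sanity check; `kubectl describe` lists the Secret's keys without printing the values:

```shell
# Confirm the git-creds Secret is present in config-management-system.
kubectl describe secret git-creds --namespace=config-management-system
```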

NOTE: Config Sync must be able to reach your Git server. You might need to adjust your firewall rules to allow GKE pods to reach that server, or create a Cloud NAT router to allow access to the GitHub, GitLab, or Bitbucket SaaS servers.
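If you take the Cloud NAT route, a minimal sketch looks like the following; NETWORK, REGION, and the resource names are placeholders for your environment:

```shell
# Hedged sketch: create a Cloud Router and a Cloud NAT gateway so private
# GKE nodes can reach SaaS Git servers. FLEET_PROJECT_ID, NETWORK, REGION,
# and the resource names are placeholders for your environment.
gcloud compute routers create nat-router \
  --project=FLEET_PROJECT_ID \
  --network=NETWORK \
  --region=REGION

gcloud compute routers nats create nat-config \
  --project=FLEET_PROJECT_ID \
  --router=nat-router \
  --region=REGION \
  --auto-allocate-nat-external-ips \
  --nat-all-subnet-ip-ranges
```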

Usage

Deploying with Google Cloud Build

The steps below assume that your current working directory is at the same level as the terraform-google-enterprise-application and terraform-example-foundation directories.

.
├── terraform-example-foundation
└── terraform-google-enterprise-application

Please note that some steps in this documentation are specific to the selected Git provider. These steps are clearly marked at the beginning of each instruction. For example, if a step applies only to GitHub users, it will be labeled with "(GitHub only)."

  1. Retrieve the Multi-tenant administration project variable value from 1-bootstrap:

    export multitenant_admin_project=$(terraform -chdir=./terraform-google-enterprise-application/1-bootstrap output -raw project_id)
    
    echo multitenant_admin_project=$multitenant_admin_project
  2. (CSR Only) Clone the infrastructure pipeline repository:

    gcloud source repos clone eab-fleetscope --project=$multitenant_admin_project
  3. (GitHub Only) When using GitHub with Cloud Build, clone the repository with the following command:

    git clone git@github.com:<GITHUB-OWNER or ORGANIZATION>/eab-fleetscope.git
  4. (GitLab Only) When using GitLab with Cloud Build, clone the repository with the following command:

    git clone git@gitlab.com:<GITLAB-GROUP or ACCOUNT>/eab-fleetscope.git
  5. Initialize the Git repository, then copy the 3-fleetscope code, Cloud Build YAML files, and Terraform wrapper script into it:

    cd eab-fleetscope
    git checkout -b plan
    
    cp -r ../terraform-google-enterprise-application/3-fleetscope/* .
    cp ../terraform-example-foundation/build/cloudbuild-tf-* .
    cp ../terraform-example-foundation/build/tf-wrapper.sh .
    chmod 755 ./tf-wrapper.sh
    
    cp -RT ../terraform-example-foundation/policy-library/ ./policy-library
    sed -i 's/CLOUDSOURCE/FILESYSTEM/g' cloudbuild-tf-*
  6. Disable all policy validation:

    rm -rf policy-library/policies/constraints/*
  7. Rename terraform.example.tfvars to terraform.tfvars.

    mv terraform.example.tfvars terraform.tfvars
  8. Use terraform output to get the state bucket value from 1-bootstrap output and replace the placeholder in terraform.tfvars.

    export remote_state_bucket=$(terraform -chdir="../terraform-google-enterprise-application/1-bootstrap/" output -raw state_bucket)
    
    echo "remote_state_bucket = ${remote_state_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./terraform.tfvars
  9. Update the terraform.tfvars file with values for your environment.

  10. Commit and push changes. Because the plan branch is not a named environment branch, pushing your plan branch triggers terraform plan but not terraform apply. Review the plan output in your Cloud Build project. https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git add .
    git commit -m 'Initialize multitenant repo'
    git push --set-upstream origin plan
  11. Merge changes to development. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b development
    git push origin development
  12. Merge changes to nonproduction. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b nonproduction
    git push origin nonproduction
  13. Merge changes to production. Because this is a named environment branch, pushing to this branch triggers both terraform plan and terraform apply. Review the apply output in your Cloud Build project https://console.cloud.google.com/cloud-build/builds;region=DEFAULT_REGION?project=YOUR_CLOUD_BUILD_PROJECT_ID

    git checkout -b production
    git push origin production
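Steps 11 through 13 follow the same pattern, so they can be condensed into a loop if you prefer (assuming all changes are already committed on the plan branch):

```shell
# Create and push each named environment branch in promotion order.
for env in development nonproduction production; do
  git checkout -b "${env}"
  git push origin "${env}"
done
```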

Running Terraform locally

  1. The next instructions assume that you are in the terraform-google-enterprise-application/3-fleetscope folder.

    cd ../3-fleetscope
  2. Rename terraform.example.tfvars to terraform.tfvars.

    mv terraform.example.tfvars terraform.tfvars
  3. Update the namespace_ids variable with the Google Groups corresponding to each namespace/team.

  4. Use terraform output to get the state bucket value from 1-bootstrap output and replace the placeholder in terraform.tfvars.

    export remote_state_bucket=$(terraform -chdir="../1-bootstrap/" output -raw state_bucket)
    
    echo "remote_state_bucket = ${remote_state_bucket}"
    
    sed -i'' -e "s/REMOTE_STATE_BUCKET/${remote_state_bucket}/" ./terraform.tfvars
  5. Update the file with values for your environment. See the README.md files in the envs folders for additional information on the values in the terraform.tfvars file.

You can now deploy each of your environments (e.g. production).

  1. Run init and plan and review the output.

    terraform -chdir=./envs/production init
    terraform -chdir=./envs/production plan
  2. Run apply for production.

    terraform -chdir=./envs/production apply

If you receive any errors or made any changes to the Terraform config or terraform.tfvars, re-run terraform -chdir=./envs/production plan before you run terraform -chdir=./envs/production apply.

  1. Repeat the same series of terraform commands but replace -chdir=./envs/production with -chdir=./envs/nonproduction to deploy the nonproduction environment.

  2. Repeat the same series of terraform commands but replace -chdir=./envs/production with -chdir=./envs/development to deploy the development environment.
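The per-environment commands above can also be run as a loop (a convenience sketch; review each plan output before confirming the apply):

```shell
# Initialize, plan, and apply each environment in turn.
for env in production nonproduction development; do
  terraform -chdir="./envs/${env}" init
  terraform -chdir="./envs/${env}" plan
  terraform -chdir="./envs/${env}" apply
done
```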

Namespace Network-Level Isolation Example

Namespace network isolation is an aspect of Kubernetes security that helps limit the access of different services and components within the cluster. You can find an example of namespace isolation using Network Policies for Cymbal Bank. This example enforces the following:

  • Pods in each namespace deny all ingress traffic by default.
  • Pods in each namespace allow all egress traffic.
  • The frontend namespace allows ingress traffic.
  • The Cymbal Bank example namespaces can communicate with each other by allowing ingress from the specific namespaces that need it.
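The default-deny behavior in the first bullet corresponds to a NetworkPolicy with an empty pod selector and no ingress rules. A minimal sketch follows; NAMESPACE is a placeholder, and the Cymbal Bank example ships its own complete manifests:

```shell
# Apply a default-deny-ingress policy to a namespace.
# NAMESPACE is a placeholder; the Cymbal Bank example provides full manifests.
kubectl apply --namespace=NAMESPACE -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress
EOF
```

With an empty `podSelector`, the policy selects every pod in the namespace; because it declares the Ingress policy type but no ingress rules, all ingress is denied while egress is left untouched.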

Use Config Sync for Network Policies

To use Config Sync you will need to clone your config-sync repository, add the policies there, commit, and wait for the next sync. Here is a detailed tutorial on how to set up network policies with Config Sync. The steps below show an example for the Cymbal Bank application.

  1. Clone config-sync repository.

    git clone https://YOUR-GIT-INSTANCE/YOUR-NAMESPACE/config-sync-development.git
  2. Check out the sync branch:

    cd config-sync-development
    git checkout master
  3. Copy the example policies from the terraform-google-enterprise-application repository to the config-sync repository (development environment):

    cp ../terraform-google-enterprise-application/examples/cymbal-bank/3-fleetscope/config-sync/development/cymbal-bank-network-policies-development.yaml .
  4. Commit and push changes:

    git add .
    git commit -am "Add cymbal bank network policies - development"
    git push origin master
  5. Wait until the resources are synced. You can check the status by using the nomos command line (this requires your kubeconfig to be configured to connect to the cluster), or by accessing the Config Management console at https://console.cloud.google.com/kubernetes/config_management/dashboard.

    nomos status

    NOTE: For more information on the nomos command line, see this documentation.

For more information on namespace isolation options, see this documentation.

Policy Controller

For more information on Policies, refer to the following documentation.