Life Connect
Table of contents
Architecture
Services
Swagger Docs
GitHub

Remote Environment Setup

Prerequisites

Ensure the following prerequisites are met before proceeding with the setup:

  • The $HOME/.aws directory exists and contains valid AWS user credentials.
  • A reverse proxy Docker container is configured and running on the VPS (see the Proxy section below).
  • The acl and jq Debian packages are installed.
  • A MongoDB Atlas cluster is created and populated with the necessary databases.
  • The required MongoDB trigger functions are created manually.
  • The Keycloak UI client has its Valid Redirect URIs and Web Origins properties set to the environment's Fully Qualified Domain Name (FQDN).
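
For example, assuming the athena environment is served at athena.life-connect.fr (the FQDN here is illustrative; use the environment's actual domain), the Keycloak client settings would look like:

Valid Redirect URIs: https://athena.life-connect.fr/*
Web Origins:         https://athena.life-connect.fr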

Steps

1. Create an AWS Secrets Manager Secret

Create an AWS Secrets Manager secret for the environment in the eu-west-3 region.
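
This can be done from the AWS console or with the AWS CLI. A minimal sketch, assuming the secret is named after the environment and holds a JSON payload (both the name and the payload are illustrative, not confirmed project conventions):

# create an environment-scoped secret in eu-west-3 (name and payload are placeholders)
aws secretsmanager create-secret \
    --region eu-west-3 \
    --name "envs/$ENV" \
    --secret-string '{"example_key": "example_value"}'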

2. Access the VPS

SSH into the VPS where the environment will be set up.
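
For example (user and hostname are placeholders for the actual VPS credentials):

ssh <user>@<vps-hostname>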

3. Create a New Directory

Create the environment directory /opt/devops/$ENV, replacing $ENV with the environment name (e.g., athena):

ENV='athena'
mkdir /opt/devops/$ENV

Set the directory permissions:

# all files created within the directory inherit the group ownership of the directory
sudo chmod g+s /opt/devops/$ENV/

# new files are created with group write permissions
GROUP_ID=$(getent group devops | cut -d: -f3)
sudo setfacl -d -m "g:$GROUP_ID:rwX" /opt/devops/$ENV/

# all existing files and directories in the shared directory have the correct group and permissions
sudo chmod g+rwX /opt/devops/$ENV/

Note: If the directory /opt/devops/$ENV already exists, ensure the correct group ownership and permissions are set by running the following commands:

sudo chown -R :devops /opt/devops/$ENV
sudo find /opt/devops/$ENV/ -type d -exec chmod g+s {} +
GROUP_ID=$(getent group devops | cut -d: -f3)
sudo setfacl -d -m "g:$GROUP_ID:rwX" /opt/devops/$ENV
sudo find /opt/devops/$ENV -exec chmod g+rwX {} +

This ensures all files created within the directory inherit the correct group ownership and permissions.
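
To verify the result, inspect the directory's mode, group, and default ACL (getfacl is provided by the acl package listed in the prerequisites):

ls -ld /opt/devops/$ENV
getfacl /opt/devops/$ENV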

4. Clone the Charts Repository

Clone the necessary repository into the shared environment working directory:

git clone -b develop git@bitbucket.org:lifeconnectsas/adb-charts.git /opt/devops/$ENV/adb-charts

5. Navigate to the Target Directory

Change to the directory where Terraform commands will be executed:

cd /opt/devops/$ENV/adb-charts/terraform/envs/kind

6. Initialize Terraform

Initialize the working directory and backend for Terraform state storage:

terraform init -reconfigure \
    -backend-config='bucket=tf-states.life-connect.fr' \
    -backend-config="key=envs/$ENV/terraform.tfstate" \
    -backend-config='region=eu-west-3' \
    -backend-config='encrypt=true'

7. Configure Environment Variables

In variables.tfvars, specify the environment name and AWS secret region:

env               = "athena"
aws_secret_region = "eu-west-3"

8. Define Service Versions

Set the appropriate service versions in the values.yaml file:

accountingTag: 0.31.0-SNAPSHOT-DEV-2051
aggregatesTag: 0.29.0-SNAPSHOT-DEV-1296
contractsTag: 0.34.0-SNAPSHOT-DEV-2332
filesTag: 0.20.0-SNAPSHOT-DEV-644
partsTag: 0.22.0-SNAPSHOT-DEV-879
personsTag: 0.22.0-SNAPSHOT-DEV-1278
reportsTag: 0.13.0-SNAPSHOT-DEV-650
ticketsTag: 0.11.0-SNAPSHOT-FEA-345
uiTag: 0.48.0-SNAPSHOT-DEV-7986
utilitiesTag: 0.29.0-SNAPSHOT-DEV-1215
viewsTag: 0.23.0-SNAPSHOT-DEV-665

9. Create and Apply Terraform Modules

Create and apply each module sequentially, in the order below.

a. Cluster Module

Create and apply the Terraform plan for cluster resources:

terraform plan -target='module.cluster' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

b. AWS Module

Due to a dependency issue with the aws_sns_topic_subscription resource (the target SQS queue must exist before the subscription can be created), the AWS module's resources are created in two phases:

Phase 1: Create resources necessary for AWS SNS topic subscriptions:

terraform plan -target='module.aws.aws_sqs_queue.file_uploaded_queue' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

Phase 2: Create the remaining AWS module resources:

terraform plan -target='module.aws' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

c. Messaging Module

This module is also created in two phases:

Phase 1:

terraform plan -target='module.messaging.aws_cloudwatch_event_target.targets' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

Phase 2:

terraform plan -target='module.messaging' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

d. Services Module

Create and apply the Terraform plan for the services module:

terraform plan -target='module.services' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

Feature Development Workflow

Directory Structure

Every development environment has its own dedicated working directory on the server. For example, the athena environment's working directory is /opt/devops/athena/adb-charts.
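
The Terraform entry point sits inside that clone; for athena the relevant paths are:

/opt/devops/athena/adb-charts                      # shared clone of the charts repository
/opt/devops/athena/adb-charts/terraform/envs/kind  # Terraform working directory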

Development Environments (as of 11/02/2025)

New AWS Secret Configuration:

  • dev
  • odin
  • achilles
  • athena
  • hephaestus

Obsolete Configuration:

  • dev-nikita
  • dev-sergey
  • dev-filatov

Steps

For feature development, follow these steps to change service versions or introduce infrastructure changes:

1. Access the VPS

SSH into the VPS where the environment resides.

2. Change Directory

Navigate to the shared environment directory:

cd /opt/devops/$ENV/adb-charts/terraform/envs/kind

3. Destroy Existing Resources

terraform plan -destroy -var-file="./variables.tfvars" -out="./destroy.tfplan"
terraform apply "./destroy.tfplan"

4. Create Feature Branch

Create a new git branch named after the corresponding Jira issue.
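
For example, assuming the Jira issue key is DEV-1234 (a placeholder):

git checkout -b DEV-1234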

If git reports the shared repository as unsafe because it is owned by a different user, mark it as a safe directory in $HOME/.gitconfig:

[safe]
    directory = /opt/devops/athena/adb-charts

5. Apply Required Infrastructure Changes

Make necessary updates (e.g., adding a new trigger configuration in messaging.tf).
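
As a minimal sketch, a new trigger in messaging.tf might wire an EventBridge rule to an SQS queue like this (the resource names, event pattern, and the var.env / var.file_uploaded_queue_arn inputs are assumptions about the module's interface, not its actual schema):

# illustrative trigger: route matching events to an SQS queue
resource "aws_cloudwatch_event_rule" "example_trigger" {
  name          = "${var.env}-example-trigger"  # var.env assumed to be a module input
  event_pattern = jsonencode({
    source = ["example.service"]                # illustrative event source
  })
}

resource "aws_cloudwatch_event_target" "example_target" {
  rule = aws_cloudwatch_event_rule.example_trigger.name
  arn  = var.file_uploaded_queue_arn            # queue ARN assumed to be passed into the module
}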

6. Recreate Environment with New Configuration

terraform plan -target='module.cluster' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

terraform plan -target='module.aws.aws_sqs_queue.file_uploaded_queue' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

terraform plan -target='module.aws' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

terraform plan -target='module.messaging.aws_cloudwatch_event_target.targets' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

terraform plan -target='module.messaging' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

terraform plan -target='module.services' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

7. Deploy New Service Versions

Extract the currently deployed service versions from the Terraform state:

terraform show -json \
    | jq '.values.root_module.child_modules[].resources[] | select(.address=="module.services.helm_release.adb-services") | .values.values[]' \
    | sed -e 's/\\n/\n/g' -e 's/"//g' \
    | head -n -1 > values.yaml

Note: To access the Kubernetes cluster, you may need to set:

export KUBE_CONFIG_PATH=~/.kube/config

or, if the cluster was destroyed and recreated by another user, re-export the kubeconfig:

kind export kubeconfig --name $ENV

Before running kubectl commands, make sure the current context points at the right cluster:

kubectl config use-context "kind-$ENV"

Update versions in values.yaml and apply changes:

terraform plan -target='module.services.helm_release.adb-services' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

If other resources were changed as well, a complete destroy and re-creation of the services module may be required:

terraform plan -destroy -target='module.services' -var-file="./variables.tfvars" -out="./destroy.tfplan"
terraform apply "./destroy.tfplan"

terraform plan -target='module.services' -var-file="./variables.tfvars" -out="./main.tfplan"
terraform apply "./main.tfplan"

8. Apply New Configuration to Other Environments

  • Destroy resources (see step 3)
  • Merge the feature branch into develop
  • Check out develop and provision resources again (see step 6)

Proxy

Running multiple Kind (Kubernetes in Docker) clusters on a single machine causes port conflicts. This issue can be resolved by configuring an Nginx reverse proxy and implementing port mapping. With this setup, each cluster is assigned unique ports while exposing its services to the Internet through the standard port 443, enabling seamless and simultaneous operation.

Configure proxy/nginx.conf

Update the domain names in the configuration file to match your development environments. Ensure the HTTP and HTTPS port settings match the corresponding Terraform variables host_http_port and host_https_port for each environment.
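
A minimal sketch of what one routing entry can look like, assuming the athena cluster publishes HTTPS on host port 8443 (the domain and port are illustrative and must match the environment's host_https_port variable). This variant uses nginx's stream module with SNI preread, so TLS is passed through and each cluster terminates its own certificates:

# nginx.conf -- illustrative SNI-based TLS passthrough (one map entry per environment)
stream {
    # route by the server name sent in the TLS handshake
    map $ssl_preread_server_name $backend {
        athena.life-connect.fr 127.0.0.1:8443;  # athena's host_https_port
        default                127.0.0.1:8443;
    }

    server {
        listen 443;
        ssl_preread on;       # read SNI without terminating TLS
        proxy_pass $backend;
    }
}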

Create the Docker image

cd proxy/
docker build -t vps-reverse-proxy ./

Run the Docker image

docker run --rm -it -p 80:80 -p 443:443 --name adb-vps-reverse-proxy vps-reverse-proxy
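
Note: the --rm -it flags are convenient for testing the configuration interactively; for a long-lived proxy you may prefer to run the container detached with a restart policy (docker run -d --restart unless-stopped ...).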