HOWTO coldsetup 10_extras


HOW-TO: Cold Deploy Server step-by-step

Part 10: misc extra considerations & examples

Following up, here are some examples of manual intervention procedures.


Extra 1: Cold-running / updating tagged releases

Both the latest & the tagged releases are auto-deployed and refreshed by the build pipeline. Nevertheless, if the pipeline build service is down, these are the manual steps required to run or update both services.

Note:

Values set in the shell take precedence over those specified in the .env file. The tagged release specified in .env can therefore be easily overridden, either with a bespoke task that brings up the new version in a more automated way, or just by running the existing script stored on the host.
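
For example (the 0.4.2 tag below is just a hypothetical value), a one-off shell override when bringing up the production service manually could look like this:

# A TAG set in the shell overrides the TAG stored in .env for this run only:
$> TAG=0.4.2 docker-compose -f docker-compose.deploy_prod.yml up -d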

Procedure:

  1. Manually toggle the static maintenance site on:
$> sudo a2dissite goggles-prod goggles-prod-le-ssl goggles-staging
$> sudo a2ensite maintenance maintenance-le-ssl
# Restart:
$> sudo systemctl restart apache2
  2. Manually deploy a new release with a local pull:

Use the dedicated scripts on the host for this. Just set the ENV variables before running:

  • staging:

    $> DOCKERHUB_USERNAME=<DOCKER_USER> DOCKERHUB_PASSWORD=<DOCKER_PWD> bash -c ./deploy_staging.sh
  • production:

    $> TAG=MAJOR.MINOR.PATCH DOCKERHUB_USERNAME=<DOCKER_USER> DOCKERHUB_PASSWORD=<DOCKER_PWD> bash -c ./deploy_prod.sh

(Samples of both scripts are included in the main repository, inside the prototype deploy folder.)

Both scripts take down the already-running service and then bring the composed service back up in detached mode, as if running docker-compose up -d.

Be advised that the docker-compose.deploy_prod.yml compose file makes use of the existing TAG value stored inside .env to take down the previously running production service, referring to that exact tagged version. (The deploy_prod.sh script also auto-updates that value after taking down the previous version, when found.)
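
As a rough sketch of that flow (this is not the actual script: refer to the sample stored in the prototype deploy folder for the real thing), deploy_prod.sh does something along these lines:

#!/bin/bash
# Rough sketch only; the real deploy_prod.sh may differ in its details.
set -e

# Log in to Docker Hub with the credentials passed via the environment:
echo "${DOCKERHUB_PASSWORD}" | docker login --username "${DOCKERHUB_USERNAME}" --password-stdin

# Take down the previously running tagged service, using the old TAG stored in .env:
OLD_TAG=$(grep -E '^TAG=' .env | cut -d '=' -f 2)
TAG="${OLD_TAG}" docker-compose -f docker-compose.deploy_prod.yml down

# Store the new TAG in .env, then pull the new image & bring the service up detached:
sed -i "s/^TAG=.*/TAG=${TAG}/" .env
docker-compose -f docker-compose.deploy_prod.yml pull
docker-compose -f docker-compose.deploy_prod.yml up -d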

  3. Toggle the main endpoints back on when the service is ready:
$> sudo a2dissite maintenance maintenance-le-ssl
$> sudo a2ensite goggles-prod goggles-prod-le-ssl goggles-staging
# Restart:
$> sudo systemctl restart apache2

When done, you can check the API status, inspect the Docker log for the app, or run a curl request against the landing page to verify the response.
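
For instance (container name and URL as used elsewhere in this guide):

# Tail the application log of the production container:
$> docker logs --tail 50 goggles-main

# Verify that the landing page responds with a 2xx/3xx status code:
$> curl --write-out '%{http_code}' --silent --head --output /dev/null https://master-goggles.org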


Extra 2: semi-manual checks for testing services & running stats

  1. Check that a running container has at least some running process in it.

Create an executable script whose exit code can be checked by Monit:

#!/bin/bash
# Exits with docker top's status code: non-zero when the container is not running.
docker top "<container-name>"
exit $?
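
A quick manual test of the script (assuming it is saved under /etc/monit/scripts/, as referenced in the Monit config below, and made executable):

$> chmod +x /etc/monit/scripts/check_container_<container-name>.sh
$> /etc/monit/scripts/check_container_<container-name>.sh; echo "exit code: $?"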

Example of an additional Monit config:

CHECK PROGRAM <container-name> WITH PATH /etc/monit/scripts/check_container_<container-name>.sh
  START PROGRAM = "/usr/bin/docker start <container-name>"
  STOP PROGRAM = "/usr/bin/docker stop <container-name>"
  IF status != 0 FOR 3 CYCLES THEN RESTART
  IF 2 RESTARTS WITHIN 5 CYCLES THEN UNMONITOR
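
After adding the config (typically under /etc/monit/conf.d/), check the syntax and reload Monit so it picks up the new check:

$> sudo monit -t
$> sudo monit reload
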
  2. Check memory & disk stats + manual clean-up.
$> free --mega -h
$> df -h -T -x squashfs

Use the htop dashboard on the remote server (F10 to exit):

$> htop

Repeated auto-deploys will yield a lot of cached Docker images, and each cached image is roughly 2 GB in size.

If the Slack devops channel gets a high disk usage warning from the DevOps dashboard, clean up any old, unused Docker images:

$> docker images

#...

$> docker rmi <IMAGE_ID1> <IMAGE_ID2> <IMAGE_ID3> ...
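
Alternatively, unused images can be removed in bulk (Docker asks for confirmation before deleting anything):

$> docker image prune
# or, more aggressively, remove every image not referenced by at least one container:
$> docker image prune -a
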
  3. Test the HTTP response code for a URL

This should always output a 2xx or a 3xx HTTP status code:

$> curl --write-out '%{http_code}' --silent --head --output /dev/null https://master-goggles.org

Extra 3: Test outgoing email service from a running container

Assuming:

  1. On the host, Postfix is running and accepts incoming connections (port 25) from the Docker bridge networks (typically within 172.16.0.0/12; both ufw & /etc/postfix/main.cf need to be set up for this)

  2. On the host, Postfix submission service is enabled in /etc/postfix/master.cf

  3. On the container, ssmtp is installed and has the Docker bridge gateway IP set as the mailhub in /etc/ssmtp/ssmtp.conf

  4. The app environment uses the proper ActionMailer settings to connect to ssmtp (for plain connections, :sendmail should be enough; for restricted TLS access, :smtp credentials will be needed).
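
As an illustration of points 1 & 3 above (the subnet and gateway IP are examples only: the actual values depend on the local bridge configuration and can be checked with docker network inspect bridge):

# On the host, allow SMTP traffic from the Docker bridge subnet, e.g.:
$> sudo ufw allow from 172.17.0.0/16 to any port 25 proto tcp

# In /etc/postfix/main.cf, include the same subnet in mynetworks, e.g.:
#   mynetworks = 127.0.0.0/8 172.17.0.0/16

# In the container's /etc/ssmtp/ssmtp.conf, point mailhub at the bridge gateway, e.g.:
#   mailhub=172.17.0.1:25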

Starting from the host itself, checking the staging app as an example:

$> docker exec -it goggles-main.staging sh

/app# echo "Subject: test with sendmail INSIDE container" | sendmail -v [email protected]

/app# bundle exec rails c

Inside the console, testing a specific mailer:

> ApplicationMailer.generic_message( \
    user_email:'[email protected]', \
    user_name: 'Steve A.', \
    subject_text: "Testing purposes", \
    content_body: "I'm writing from inside the app!" \
  ).deliver_now
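
Back on the host, delivery can then be verified by checking the Postfix queue and the mail log (default Debian/Ubuntu locations assumed):

$> mailq
$> sudo tail -n 50 /var/log/mail.log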

Extra 4: Test TCP port 25 (SMTP) access with telnet

This is a bit more low-level than usual, but it may come in handy when debugging the mail server configuration.

SMTP is a plain-text protocol. This makes it easy to simulate a mail client with the telnet command to check access to port 25.

(Install telnet on Alpine Linux with apk add busybox-extras)

Run:

$> telnet SERVERNAME 25

This connects telnet to port 25 on the server with the name SERVERNAME.

The name or IP address of the mail server for a domain can be determined with dig DOMAIN -t MX. If the domain has no MX record of its own, the corresponding A record must be used.
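
For example:

$> dig example.com -t MX +short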

If the TCP connection can be established, telnet responds with the messages Connected to SERVERNAME and Escape character is '^]'.

Now you can send an e-mail via the SMTP protocol. The best way to do this is to use a recipient address for which the connected mail server is responsible.

EHLO test.example.com
MAIL FROM:<SENDERADDRESS>
RCPT TO:<RECIPIENTADDRESS>
DATA
Subject: Testmessage
(Blank line, press Enter again)
This is a test.
(Blank line, press Enter again)
.
QUIT

Extra 5: DB rebuild/restore from a backup dump

Rebuilding the whole DB from a backup can be CPU- & memory-intensive and quite long-running, so it should be done only after enabling the static maintenance mode.

Assuming we already have a valid production dump in localhost's ./db/dump folder...

  1. Toggle the static maintenance site on, to work freely with the back-end DB:

    $> cap production maintenance:site
  2. Upload the production dump (found @db/dump):

    $> cap production db:dump[put]
  3. Rebuild it directly on the host (better):

    $> ssh [email protected]
    
    # At remote host:
    $> docker exec -it goggles-main sh -c 'bundle exec rails db:rebuild'
    $> exit
  4. Toggle off static maintenance:

    $> cap production maintenance:site[off]
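
Once maintenance is off, the same quick check from Extra 2 can confirm that the service is back up:

$> curl --write-out '%{http_code}' --silent --head --output /dev/null https://master-goggles.org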