Production deployment
Didmos consists of different modules (see didmos 2.0). Some of these modules are based on existing open source software with configuration and extensions provided by DAASI (e.g. the LDAP server in didmos Core or didmos Auth). Other modules are developed and shipped by DAASI (e.g. didmos LUI). Therefore the deployment and operations model for the components is quite heterogeneous.
Most components can be deployed either as Docker containers or VM based (currently only Ubuntu 18/20 is supported), with some exceptions as per the following table:
| Module | Docker | VM based (Ubuntu) |
| --- | --- | --- |
| Core | yes | yes |
| LUI | yes | yes |
| Authenticator | yes | yes |
| Provisioner | yes | no |
| ETL | yes | no |
| Pwd Synchronizer | no | yes |
Docker deployment
Docker images are provided for all components (except Pwd Synchronizer) and this is the preferred deployment model.
In order to run didmos as Docker containers the following requirements must be met:
- docker: https://docs.docker.com/engine/install/, version 20.10.0 or later
- docker-compose: https://docs.docker.com/compose/install/, version 1.25.0 or later
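Whether suitable versions are installed can be verified directly on the Docker host:

```sh
# Check that both tools meet the version requirements above
docker --version          # should report 20.10.0 or later
docker-compose --version  # should report 1.25.0 or later
```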
A docker-compose.yml file describes the system as a whole. See the following documentation and example for didmos2-demo:
https://gitlab.daasi.de/didmos2-demo/didmos2-demo-compose/-/tree/master/deploy
The docker-compose.yml file for individual projects might deviate from this example, as more or fewer components may be included and the configuration might differ. Furthermore, a .env file must be located in the same directory; it contains the deployment specific variables.
On the docker host these files are usually located in either /root/docker or /opt/didmos.
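For orientation only, a didmos docker-compose.yml might be structured roughly as follows; the service names, image names and ports are hypothetical placeholders, and the authoritative example is the didmos2-demo repository linked above:

```yaml
# Hypothetical sketch of a didmos docker-compose.yml (placeholders only)
version: "3"
services:
  core:
    image: registry.example.org/didmos2/core:latest      # placeholder image name
    env_file: .env                                        # deployment specific variables
    depends_on:
      - openldap
  openldap:
    image: registry.example.org/didmos2/openldap:latest  # placeholder image name
    volumes:
      - ldap-data:/var/lib/ldap
  frontend:
    image: registry.example.org/didmos2/lui:latest       # placeholder image name
    ports:
      - "443:443"
volumes:
  ldap-data:
```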
The following commands might be useful for operations:
- Start: docker-compose up -d
- Stop: docker-compose down
- Display status: docker-compose ps
- Show logs of an individual container: docker logs {container-name}
- Restart an individual container: docker restart {container-name}
VM based deployment
For the VM based deployment project specific Ansible roles are provided for initial setup. The general setup is documented here: https://gitlab.daasi.de/didmos2/didmos2-compose/-/tree/master/ansible
Please note that most didmos projects are extended by project specific roles for setup of extensions and project specific components. Generally these roles are also required for a full setup.
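Invoking the roles follows standard Ansible practice; the inventory and playbook names in this sketch are placeholders, and the authoritative instructions are part of the linked repository:

```sh
# Run the initial setup against the target VM
# (inventory and playbook names are placeholders)
ansible-playbook -i inventory/production site.yml
```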
After running the initial setup via Ansible, please refer to the following chapters for details on operating each of the didmos modules:
Module specific details
didmos Core
Docker
didmos Core consists of two Docker containers:
- {project-name}-core (API)
- {project-name}-openldap (LDAP Metadirectory)
The logs of each component can be accessed via docker logs (see list of general commands).
Configuration is possible via docker environment variables (for supported parameters).
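For example, such variables can be set in the environment section of the compose file; the variable names below are purely hypothetical placeholders for whichever parameters the images actually support:

```yaml
# Hypothetical example: configuring the core container via environment variables
services:
  core:
    environment:
      - LDAP_HOST=openldap   # placeholder, not an authoritative variable name
      - LOG_LEVEL=info       # placeholder
```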
VM based
The LDAP Metadirectory is installed via the Ubuntu distribution during the initial Ansible setup (i.e. apt install slapd).
- Stop/Start/Restart/Status: systemctl {command} slapd
- Logs: TODO
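A minimal sketch of day-to-day slapd operations, assuming the stock Ubuntu systemd unit (on current Ubuntu releases slapd logs via syslog/journald):

```sh
# Control the LDAP server via systemd
systemctl restart slapd
systemctl status slapd

# Follow the slapd logs via the journal
journalctl -u slapd -f
```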
The didmos Core API server is installed as a Python virtual environment and deployed as a mod_wsgi app in the Apache webserver. The following locations on the VM are used:
- Virtual environment: /opt/didmos2coreEnv
- Python application: /opt/didmos2core
- Configuration (Templates and default config): /opt/didmos2core/general and /opt/didmos2core/customer/customer_config
- Configuration (Overrides): /etc/didmos/core
- Logs: /var/log/didmos
- Apache config (mod_wsgi integration): /etc/apache2/sites-available/api-ssl.conf
- Apache logs: /var/log/apache2
Restarting the didmos Core API server is possible via the Apache webserver: systemctl {command} apache2 (e.g. restart).
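For example:

```sh
# Restart the Core API by restarting Apache (mod_wsgi reloads the application)
systemctl restart apache2

# Follow the application logs; the exact file names may vary per project
tail -f /var/log/didmos/*.log
```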
didmos LUI
Docker
didmos LUI consists of the following Docker container:
- {project-name}-frontend
In that container the compiled frontend (Angular JavaScript app with assets like images, CSS files etc.) is shipped using an nginx webserver.
The logs can be accessed via docker logs (see list of general commands). Since the application itself runs as JavaScript in the web browser, the browser console might be more useful for debugging purposes than the server side logs.
Configuration is possible via docker environment variables (for supported parameters).
VM based
The compiled frontend (Angular JavaScript app with assets like images, CSS files etc.) is located in /var/www/didmos2lui and then shipped as static files using an Apache webserver. The following locations on the VM are used:
- Frontend files: /var/www/didmos2lui
- Configuration file: /var/www/didmos2lui/assets/config/environment.json
- Apache config: /etc/apache2/sites-available/lui-ssl.conf
- Apache logs: /var/log/apache2
Changes to the functionality always require recompiling the static files from source and then redeploying the compiled application on the VM, as sketched below.
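A minimal sketch of such a redeployment, assuming a standard Angular CLI build; the build command, the output path and the copy step are assumptions and may differ per project:

```sh
# Rebuild the frontend from source (assumes a standard Angular CLI setup)
cd didmos2-lui                       # placeholder for the project source checkout
ng build --configuration production

# Copy the compiled files to the Apache document root
# (the dist subdirectory depends on the Angular project name)
rsync -a --delete dist/ /var/www/didmos2lui/

# Note: assets/config/environment.json contains deployment specific settings
# and must be preserved or re-applied after redeploying
```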
didmos Auth
Docker
didmos Auth consists of the following Docker container:
- {project-name}-auth
In that container Auth is running as a mod_wsgi application inside an Apache webserver.
The logs can be accessed via docker logs (see list of general commands).
Configuration is possible via docker environment variables (for supported parameters). For a list of general environment variables refer to didmos2 Authenticator.
VM based
didmos Auth is installed as a Python virtual environment and deployed as a mod_wsgi app in the Apache webserver. The following locations on the VM are used:
- Virtual environment (Python): /opt/didmos2auth
- Configuration: /etc/satosa
- Logs: /var/log/satosa
- Apache config (mod_wsgi integration): /etc/apache2/sites-available/auth.conf
- Apache logs: /var/log/apache2
Restarting didmos Auth is possible via the Apache webserver: systemctl {command} apache2 (e.g. restart).
The application is based on Satosa and most of the configuration in /etc/satosa follows the default Satosa configuration (see https://github.com/IdentityPython/SATOSA).
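For orientation, a SATOSA proxy_conf.yaml typically has the following shape; this is a heavily abbreviated sketch along the lines of the upstream example configuration, not the exact didmos Auth configuration:

```yaml
# Abbreviated SATOSA proxy_conf.yaml sketch (upstream-style, values are placeholders)
BASE: https://auth.example.org             # public base URL of the proxy
INTERNAL_ATTRIBUTES: internal_attributes.yaml
STATE_ENCRYPTION_KEY: CHANGE_ME
BACKEND_MODULES:
  - plugins/backends/ldap_backend.yaml     # e.g. authentication against the metadirectory
FRONTEND_MODULES:
  - plugins/frontends/oidc_frontend.yaml   # e.g. OIDC provider frontend
MICRO_SERVICES: []
```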
Provisioner
Docker
didmos Provisioner consists of the following Docker containers:
- didmos2-rabbitmq (RabbitMQ queue)
- {project-name}-ra (Requesting authority)
- {project-name}-xyz-worker (Worker nodes; possibly multiple containers for each target system)
The logs can be accessed via docker logs (see list of general commands).
Configuration is possible via docker environment variables (for supported parameters). For a list of general environment variables refer to didmos2 Provisioner.
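As an illustration, the corresponding services might be wired together roughly like this in docker-compose.yml; image names and variables are placeholders following the naming scheme above:

```yaml
# Hypothetical sketch of the Provisioner services (placeholders only)
services:
  didmos2-rabbitmq:
    image: rabbitmq:3-management
  ra:
    image: registry.example.org/didmos2/provisioner-ra:latest      # requesting authority
    depends_on:
      - didmos2-rabbitmq
  ldap-worker:
    image: registry.example.org/didmos2/provisioner-worker:latest  # one service per target system
    depends_on:
      - didmos2-rabbitmq
```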
ETL
Docker
didmos ETL consists of the following Docker container:
- {project-name}-etl
The data and config are mounted as Docker volumes from the host system like so (the variables are defined in .env):
```yaml
volumes:
  - /${ETL_DATA_DIR}/:/var/didmos/:rw
  - /${ETL_CONF_DIR}/:/etc/didmos/etl/:rw
```
Typically the data and config directories are set as follows on the host system:
- ETL_DATA_DIR=/var/didmos/etl/etl-data
- ETL_CONF_DIR=/var/didmos/etl/etl-conf
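If these directories do not exist yet, they can simply be created on the host before starting the container:

```sh
# Create the host directories referenced by the volume mounts
mkdir -p /var/didmos/etl/etl-data /var/didmos/etl/etl-conf
```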
The logs can be accessed via docker logs (see list of general commands).
Configuration is possible via docker environment variables (for supported parameters) but generally via files in the mounted config volume.