Merge pull request #607 from vrslev/run-pre-commit

chore: Run pre-commit

commit 3171f212ab
@@ -37,7 +37,7 @@ repos:
    rev: v2.1.0
    hooks:
      - id: codespell
-        exclude: ".*Dockerfile.*"
+        exclude: "build/bench/Dockerfile"

  - repo: local
    hooks:
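pre-commit matches `exclude` as a Python regular expression against each candidate path (`re.search` semantics), so the old pattern skipped every Dockerfile in the repository while the new one skips only the file under `build/bench`. A quick sketch of the difference (the paths below are illustrative):

```python
import re

# pre-commit applies `exclude` as a regex to each file path.
old_pattern = re.compile(r".*Dockerfile.*")
new_pattern = re.compile(r"build/bench/Dockerfile")

paths = ["build/bench/Dockerfile", "build/worker/Dockerfile", "docs/setup.md"]

# Paths each pattern would exclude from the codespell hook:
excluded_old = [p for p in paths if old_pattern.search(p)]
excluded_new = [p for p in paths if new_pattern.search(p)]
```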
.vscode/extensions.json (vendored): 8 changed lines
@@ -3,11 +3,7 @@
   // Extension identifier format: ${publisher}.${name}. Example: vscode.csharp

   // List of extensions which should be recommended for users of this workspace.
-  "recommendations": [
-    "ms-vscode-remote.remote-containers"
-  ],
+  "recommendations": ["ms-vscode-remote.remote-containers"],
   // List of extensions recommended by VS Code that should not be recommended for users of this workspace.
-  "unwantedRecommendations": [
-
-  ]
+  "unwantedRecommendations": []
 }
@@ -14,21 +14,21 @@ appearance, race, religion, or sexual identity and orientation.
 Examples of behavior that contributes to creating a positive environment
 include:

-* Using welcoming and inclusive language
-* Being respectful of differing viewpoints and experiences
-* Gracefully accepting constructive criticism
-* Focusing on what is best for the community
-* Showing empathy towards other community members
+- Using welcoming and inclusive language
+- Being respectful of differing viewpoints and experiences
+- Gracefully accepting constructive criticism
+- Focusing on what is best for the community
+- Showing empathy towards other community members

 Examples of unacceptable behavior by participants include:

-* The use of sexualized language or imagery and unwelcome sexual attention or
+- The use of sexualized language or imagery and unwelcome sexual attention or
   advances
-* Trolling, insulting/derogatory comments, and personal or political attacks
-* Public or private harassment
-* Publishing others' private information, such as a physical or electronic
+- Trolling, insulting/derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or electronic
   address, without explicit permission
-* Other conduct which could reasonably be considered inappropriate in a
+- Other conduct which could reasonably be considered inappropriate in a
   professional setting

 ## Our Responsibilities
@@ -7,9 +7,8 @@ Before publishing a PR, please test builds locally:
 - with VSCode for testing environments (only for frappe/bench image).

 On each PR that contains changes relevant to Docker builds, images are being built and tested in our CI (GitHub Actions).
-> :evergreen_tree: Please be considerate when pushing commits and opening PR for multiple branches, as the process of building images uses energy and contributes to global warming.
->
+
+> :evergreen_tree: Please be considerate when pushing commits and opening PR for multiple branches, as the process of building images uses energy and contributes to global warming.

 ## Lint

@@ -38,7 +37,6 @@ To run all the files in repository, run:
 pre-commit run --all-files
 ```

-
 ## Build

 ```shell
@@ -52,20 +50,25 @@ docker buildx bake -f docker-bake.hcl *...*
 ## Test

 ### Ping site

 Lightweight test that just checks if site will be available after creation.

 Frappe:

 ```shell
 ./tests/test-frappe.sh
 ```

 ERPNext:

 ```shell
 ./tests/test-erpnext.sh
 ```

 ### Integration test

 Tests frappe-bench-like commands, for example, `backup` and `restore`.

 ```shell
 ./tests/integration-test.sh
 ```
README.md: 10 changed lines
@@ -30,14 +30,14 @@ cd frappe_docker

 It takes care of the following:

-* Setting up the desired version of Frappe/ERPNext.
-* Setting up all the system requirements: eg. MariaDB, Node, Redis.
-* Configure networking for remote access and setting up LetsEncrypt.
+- Setting up the desired version of Frappe/ERPNext.
+- Setting up all the system requirements: eg. MariaDB, Node, Redis.
+- Configure networking for remote access and setting up LetsEncrypt.

 It doesn't take care of the following:

-* Cron Job to backup sites is not created by default.
-* Use `CronJob` on k8s or refer wiki for alternatives.
+- Cron Job to backup sites is not created by default.
+- Use `CronJob` on k8s or refer wiki for alternatives.

 1. Single Server Installs
    1. [Single bench](docs/single-bench.md). Easiest Install!
@@ -89,7 +89,7 @@ server {
    client_body_buffer_size 16K;
    client_header_buffer_size 1k;

-    # enable gzip compresion
+    # enable gzip compression
    # based on https://mattstauffer.co/blog/enabling-gzip-on-nginx-servers-including-laravel-forge
    gzip on;
    gzip_http_version 1.1;
@@ -1,19 +1,22 @@
 #!/home/frappe/frappe-bench/env/bin/python

+import os
 import subprocess
 import sys
-import os


 if __name__ == "__main__":
-    bench_dir = os.path.join(os.sep, 'home', 'frappe', 'frappe-bench')
-    sites_dir = os.path.join(bench_dir, 'sites')
+    bench_dir = os.path.join(os.sep, "home", "frappe", "frappe-bench")
+    sites_dir = os.path.join(bench_dir, "sites")
     bench_helper = os.path.join(
-        bench_dir, 'apps', 'frappe',
-        'frappe', 'utils', 'bench_helper.py',
+        bench_dir,
+        "apps",
+        "frappe",
+        "frappe",
+        "utils",
+        "bench_helper.py",
     )
     cwd = os.getcwd()
     os.chdir(sites_dir)
     subprocess.check_call(
         [sys.executable, bench_helper, 'frappe'] + sys.argv[1:],
     )
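The wrapper above only reorders imports and reflows the `os.path.join` call; the underlying pattern is unchanged: build the helper path, change into the sites directory, and re-exec the current interpreter on the helper with the user's arguments. A self-contained sketch, with a throwaway script standing in for `bench_helper.py`:

```python
import os
import subprocess
import sys
import tempfile

# Throwaway stand-in for bench_helper.py (the real helper lives in the
# frappe app tree); it just echoes the arguments it was invoked with.
with tempfile.TemporaryDirectory() as tmp:
    helper = os.path.join(tmp, "helper.py")
    with open(helper, "w") as f:
        f.write("import sys; print(' '.join(sys.argv[1:]))\n")

    # Same invocation shape as the wrapper: current interpreter, helper
    # path, then the caller's arguments appended verbatim.
    out = subprocess.check_output(
        [sys.executable, helper, "frappe", "--site", "site1.localhost"],
        text=True,
    )
```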
@@ -1,14 +1,14 @@
 import os
-import semantic_version
-import git

+import git
+import semantic_version
 from migrate import migrate_sites
 from utils import (
-    save_version_file,
     get_apps,
+    get_config,
     get_container_versions,
     get_version_file,
-    get_config
+    save_version_file,
 )

@@ -30,12 +30,12 @@ def main():
         version_file_hash = None
         container_hash = None

-        repo = git.Repo(os.path.join('..', 'apps', app))
+        repo = git.Repo(os.path.join("..", "apps", app))
         branch = repo.active_branch.name

-        if branch == 'develop':
-            version_file_hash = version_file.get(app+'_git_hash')
-            container_hash = container_versions.get(app+'_git_hash')
+        if branch == "develop":
+            version_file_hash = version_file.get(app + "_git_hash")
+            container_hash = container_versions.get(app + "_git_hash")
             if container_hash and version_file_hash:
                 if container_hash != version_file_hash:
                     is_ready = True
@@ -54,7 +54,7 @@ def main():

     config = get_config()

-    if is_ready and config.get('maintenance_mode') != 1:
+    if is_ready and config.get("maintenance_mode") != 1:
         migrate_sites(maintenance_mode=True)
         version_file = container_versions
         save_version_file(version_file)
@@ -1,7 +1,8 @@
 import os

 import frappe
-from frappe.utils.backups import scheduled_backup
 from frappe.utils import cint, get_sites, now
+from frappe.utils.backups import scheduled_backup


 def backup(sites, with_files=False):
@@ -13,12 +14,17 @@ def backup(sites, with_files=False):
             backup_path_db=None,
             backup_path_files=None,
             backup_path_private_files=None,
-            force=True
+            force=True,
         )
         print("database backup taken -", odb.backup_path_db, "- on", now())
         if with_files:
             print("files backup taken -", odb.backup_path_files, "- on", now())
-            print("private files backup taken -", odb.backup_path_private_files, "- on", now())
+            print(
+                "private files backup taken -",
+                odb.backup_path_private_files,
+                "- on",
+                now(),
+            )
         frappe.destroy()

@@ -1,15 +1,16 @@
 import socket
 import time

+from constants import (
+    DB_HOST_KEY,
+    DB_PORT,
+    DB_PORT_KEY,
+    REDIS_CACHE_KEY,
+    REDIS_QUEUE_KEY,
+    REDIS_SOCKETIO_KEY,
+)
 from six.moves.urllib.parse import urlparse
 from utils import get_config
-from constants import (
-    REDIS_QUEUE_KEY,
-    REDIS_CACHE_KEY,
-    REDIS_SOCKETIO_KEY,
-    DB_HOST_KEY,
-    DB_PORT_KEY,
-    DB_PORT
-)


 def is_open(ip, port, timeout=30):
@@ -29,7 +30,7 @@ def check_host(ip, port, retry=10, delay=3, print_attempt=True):
     ipup = False
     for i in range(retry):
         if print_attempt:
-            print("Attempt {i} to connect to {ip}:{port}".format(ip=ip, port=port, i=i+1))
+            print(f"Attempt {i+1} to connect to {ip}:{port}")
         if is_open(ip, port):
             ipup = True
             break
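The replaced line swaps `str.format` for an f-string without changing the message; both render identically, as this equivalence check shows (the host and port values are illustrative):

```python
ip, port = "redis-queue", 6379
i = 0  # first loop iteration

# Old style: named .format() arguments, with i pre-incremented.
old_msg = "Attempt {i} to connect to {ip}:{port}".format(ip=ip, port=port, i=i + 1)
# New style: the expression i+1 is evaluated inside the f-string.
new_msg = f"Attempt {i+1} to connect to {ip}:{port}"
```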
@@ -40,30 +41,26 @@ def check_host(ip, port, retry=10, delay=3, print_attempt=True):

 # Check service
 def check_service(
-    retry=10,
-    delay=3,
-    print_attempt=True,
-    service_name=None,
-    service_port=None):
-
+    retry=10, delay=3, print_attempt=True, service_name=None, service_port=None
+):
     config = get_config()
     if not service_name:
-        service_name = config.get(DB_HOST_KEY, 'mariadb')
+        service_name = config.get(DB_HOST_KEY, "mariadb")
     if not service_port:
         service_port = config.get(DB_PORT_KEY, DB_PORT)

     is_db_connected = False
     is_db_connected = check_host(
-        service_name,
-        service_port,
-        retry,
-        delay,
-        print_attempt)
+        service_name, service_port, retry, delay, print_attempt
+    )
     if not is_db_connected:
-        print("Connection to {service_name}:{service_port} timed out".format(
+        print(
+            "Connection to {service_name}:{service_port} timed out".format(
                 service_name=service_name,
                 service_port=service_port,
-        ))
+            )
+        )
         exit(1)
@@ -71,14 +68,13 @@ def check_service(
 def check_redis_queue(retry=10, delay=3, print_attempt=True):
     check_redis_queue = False
     config = get_config()
-    redis_queue_url = urlparse(config.get(REDIS_QUEUE_KEY, "redis://redis-queue:6379")).netloc
+    redis_queue_url = urlparse(
+        config.get(REDIS_QUEUE_KEY, "redis://redis-queue:6379")
+    ).netloc
     redis_queue, redis_queue_port = redis_queue_url.split(":")
     check_redis_queue = check_host(
-        redis_queue,
-        redis_queue_port,
-        retry,
-        delay,
-        print_attempt)
+        redis_queue, redis_queue_port, retry, delay, print_attempt
+    )
     if not check_redis_queue:
         print("Connection to redis queue timed out")
         exit(1)
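The reflowed `urlparse(...).netloc` call is how all three Redis checks split a configured URL into host and port. The real module imports `urlparse` from `six.moves.urllib.parse`; the stdlib equivalent behaves the same for this case:

```python
from urllib.parse import urlparse

# Default URL used when redis_queue is absent from common_site_config.json.
url = "redis://redis-queue:6379"

# netloc is the "host:port" part of the URL; splitting on ":" yields
# the arguments later passed to check_host().
netloc = urlparse(url).netloc
redis_queue, redis_queue_port = netloc.split(":")
```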
@@ -88,14 +84,13 @@ def check_redis_queue(retry=10, delay=3, print_attempt=True):
 def check_redis_cache(retry=10, delay=3, print_attempt=True):
     check_redis_cache = False
     config = get_config()
-    redis_cache_url = urlparse(config.get(REDIS_CACHE_KEY, "redis://redis-cache:6379")).netloc
+    redis_cache_url = urlparse(
+        config.get(REDIS_CACHE_KEY, "redis://redis-cache:6379")
+    ).netloc
     redis_cache, redis_cache_port = redis_cache_url.split(":")
     check_redis_cache = check_host(
-        redis_cache,
-        redis_cache_port,
-        retry,
-        delay,
-        print_attempt)
+        redis_cache, redis_cache_port, retry, delay, print_attempt
+    )
     if not check_redis_cache:
         print("Connection to redis cache timed out")
         exit(1)
@@ -105,14 +100,13 @@ def check_redis_cache(retry=10, delay=3, print_attempt=True):
 def check_redis_socketio(retry=10, delay=3, print_attempt=True):
     check_redis_socketio = False
     config = get_config()
-    redis_socketio_url = urlparse(config.get(REDIS_SOCKETIO_KEY, "redis://redis-socketio:6379")).netloc
+    redis_socketio_url = urlparse(
+        config.get(REDIS_SOCKETIO_KEY, "redis://redis-socketio:6379")
+    ).netloc
     redis_socketio, redis_socketio_port = redis_socketio_url.split(":")
     check_redis_socketio = check_host(
-        redis_socketio,
-        redis_socketio_port,
-        retry,
-        delay,
-        print_attempt)
+        redis_socketio, redis_socketio_port, retry, delay, print_attempt
+    )
     if not check_redis_socketio:
         print("Connection to redis socketio timed out")
         exit(1)
@@ -123,7 +117,7 @@ def main():
     check_redis_queue()
     check_redis_cache()
     check_redis_socketio()
-    print('Connections OK')
+    print("Connections OK")


 if __name__ == "__main__":
@@ -1,13 +1,13 @@
-REDIS_QUEUE_KEY = 'redis_queue'
-REDIS_CACHE_KEY = 'redis_cache'
-REDIS_SOCKETIO_KEY = 'redis_socketio'
-DB_HOST_KEY = 'db_host'
-DB_PORT_KEY = 'db_port'
+REDIS_QUEUE_KEY = "redis_queue"
+REDIS_CACHE_KEY = "redis_cache"
+REDIS_SOCKETIO_KEY = "redis_socketio"
+DB_HOST_KEY = "db_host"
+DB_PORT_KEY = "db_port"
 DB_PORT = 3306
-APP_VERSIONS_JSON_FILE = 'app_versions.json'
-APPS_TXT_FILE = 'apps.txt'
-COMMON_SITE_CONFIG_FILE = 'common_site_config.json'
+APP_VERSIONS_JSON_FILE = "app_versions.json"
+APPS_TXT_FILE = "apps.txt"
+COMMON_SITE_CONFIG_FILE = "common_site_config.json"
 DATE_FORMAT = "%Y%m%d_%H%M%S"
-RDS_DB = 'rds_db'
+RDS_DB = "rds_db"
 RDS_PRIVILEGES = "SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, CREATE VIEW, EVENT, TRIGGER, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EXECUTE, LOCK TABLES"
-ARCHIVE_SITES_PATH = '/home/frappe/frappe-bench/sites/archive_sites'
+ARCHIVE_SITES_PATH = "/home/frappe/frappe-bench/sites/archive_sites"
@@ -1,20 +1,20 @@
 import argparse

 from check_connection import (
-    check_service,
     check_redis_cache,
     check_redis_queue,
     check_redis_socketio,
+    check_service,
 )


 def parse_args():
     parser = argparse.ArgumentParser()
     parser.add_argument(
-        '-p',
-        '--ping-service',
-        dest='ping_services',
-        action='append',
+        "-p",
+        "--ping-service",
+        dest="ping_services",
+        action="append",
         type=str,
         help='list of services to ping, e.g. doctor -p "postgres:5432" --ping-service "mariadb:3306"',
     )
@@ -33,15 +33,15 @@ def main():
     check_redis_socketio(retry=1, delay=0, print_attempt=False)
     print("Redis SocketIO Connected")

-    if(args.ping_services):
+    if args.ping_services:
         for service in args.ping_services:
             service_name = None
             service_port = None

             try:
-                service_name, service_port = service.split(':')
+                service_name, service_port = service.split(":")
             except ValueError:
-                print('Service should be in format host:port, e.g postgres:5432')
+                print("Service should be in format host:port, e.g postgres:5432")
                 exit(1)

             check_service(
@@ -51,7 +51,7 @@ def main():
                 service_name=service_name,
                 service_port=service_port,
             )
-            print("{0}:{1} Connected".format(service_name, service_port))
+            print(f"{service_name}:{service_port} Connected")

     print("Health check successful")
     exit(0)
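The `service.split(":")` line above relies on tuple unpacking to reject malformed `-p` arguments: anything other than exactly one colon raises `ValueError`, which doctor.py turns into an error message and `exit(1)`. A minimal sketch of that validation:

```python
def parse_service(service):
    # Exactly two fields are required; "badinput" and "a:b:c" both raise
    # ValueError during unpacking, mirroring doctor.py's error path.
    service_name, service_port = service.split(":")
    return service_name, service_port
```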
@@ -1,2 +1,3 @@
 import gevent.monkey
+
 gevent.monkey.patch_all()
@@ -1,6 +1,6 @@
 import os
-import frappe

+import frappe
 from frappe.utils import cint, get_sites
 from utils import get_config, save_config

@@ -27,11 +27,12 @@ def migrate_sites(maintenance_mode=False):
         set_maintenance_mode(True)

     for site in sites:
-        print('Migrating', site)
+        print("Migrating", site)
         frappe.init(site=site)
         frappe.connect()
         try:
             from frappe.migrate import migrate
+
             migrate()
         finally:
             frappe.destroy()
@@ -1,15 +1,10 @@
 import os

 import frappe
 import semantic_version

-from frappe.installer import update_site_config
 from constants import COMMON_SITE_CONFIG_FILE, RDS_DB, RDS_PRIVILEGES
-from utils import (
-    run_command,
-    get_config,
-    get_site_config,
-    get_password,
-)
+from frappe.installer import update_site_config
+from utils import get_config, get_password, get_site_config, run_command

 # try to import _new_site from frappe, which could possibly
 # exist in either commands.py or installer.py, and so we need
@@ -24,33 +19,43 @@ except ImportError:

 def main():
     config = get_config()
-    db_type = 'mariadb'
-    db_port = config.get('db_port', 3306)
-    db_host = config.get('db_host')
-    site_name = os.environ.get("SITE_NAME", 'site1.localhost')
-    db_root_username = os.environ.get("DB_ROOT_USER", 'root')
-    mariadb_root_password = get_password("MYSQL_ROOT_PASSWORD", 'admin')
+    db_type = "mariadb"
+    db_port = config.get("db_port", 3306)
+    db_host = config.get("db_host")
+    site_name = os.environ.get("SITE_NAME", "site1.localhost")
+    db_root_username = os.environ.get("DB_ROOT_USER", "root")
+    mariadb_root_password = get_password("MYSQL_ROOT_PASSWORD", "admin")
     postgres_root_password = get_password("POSTGRES_PASSWORD")
     db_root_password = mariadb_root_password

     if postgres_root_password:
-        db_type = 'postgres'
+        db_type = "postgres"
         db_host = os.environ.get("POSTGRES_HOST")
         db_port = 5432
         db_root_password = postgres_root_password
         if not db_host:
-            db_host = config.get('db_host')
-            print('Environment variable POSTGRES_HOST not found.')
-            print('Using db_host from common_site_config.json')
+            db_host = config.get("db_host")
+            print("Environment variable POSTGRES_HOST not found.")
+            print("Using db_host from common_site_config.json")

     sites_path = os.getcwd()
     common_site_config_path = os.path.join(sites_path, COMMON_SITE_CONFIG_FILE)
-    update_site_config("root_login", db_root_username, validate = False, site_config_path = common_site_config_path)
-    update_site_config("root_password", db_root_password, validate = False, site_config_path = common_site_config_path)
+    update_site_config(
+        "root_login",
+        db_root_username,
+        validate=False,
+        site_config_path=common_site_config_path,
+    )
+    update_site_config(
+        "root_password",
+        db_root_password,
+        validate=False,
+        site_config_path=common_site_config_path,
+    )

     force = True if os.environ.get("FORCE", None) else False
     install_apps = os.environ.get("INSTALL_APPS", None)
-    install_apps = install_apps.split(',') if install_apps else []
+    install_apps = install_apps.split(",") if install_apps else []
     frappe.init(site_name, new_site=True)

     if semantic_version.Version(frappe.__version__).major > 11:
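The `os.environ.get(KEY, default)` calls above follow one pattern throughout the site-creation script: read a variable, fall back to a default when it is unset, and split comma-separated lists only when a value is present. A sketch of that pattern (variable names follow the diff; the values set here are illustrative):

```python
import os

# Unset SITE_NAME so the default fallback path is exercised.
os.environ.pop("SITE_NAME", None)
site_name = os.environ.get("SITE_NAME", "site1.localhost")

# A comma-separated list becomes a Python list only when non-empty.
os.environ["INSTALL_APPS"] = "erpnext,custom_app"
install_apps = os.environ.get("INSTALL_APPS", None)
install_apps = install_apps.split(",") if install_apps else []
```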
@@ -59,7 +64,7 @@ def main():
             site_name,
             mariadb_root_username=db_root_username,
             mariadb_root_password=db_root_password,
-            admin_password=get_password("ADMIN_PASSWORD", 'admin'),
+            admin_password=get_password("ADMIN_PASSWORD", "admin"),
             verbose=True,
             install_apps=install_apps,
             source_sql=None,
@@ -75,7 +80,7 @@ def main():
             site_name,
             mariadb_root_username=db_root_username,
             mariadb_root_password=db_root_password,
-            admin_password=get_password("ADMIN_PASSWORD", 'admin'),
+            admin_password=get_password("ADMIN_PASSWORD", "admin"),
             verbose=True,
             install_apps=install_apps,
             source_sql=None,
@@ -83,16 +88,23 @@ def main():
             reinstall=False,
         )


     if db_type == "mariadb":
         site_config = get_site_config(site_name)
-        db_name = site_config.get('db_name')
-        db_password = site_config.get('db_password')
+        db_name = site_config.get("db_name")
+        db_password = site_config.get("db_password")

-        mysql_command = ["mysql", f"-h{db_host}", f"-u{db_root_username}", f"-p{mariadb_root_password}", "-e"]
+        mysql_command = [
+            "mysql",
+            f"-h{db_host}",
+            f"-u{db_root_username}",
+            f"-p{mariadb_root_password}",
+            "-e",
+        ]

         # Drop User if exists
-        command = mysql_command + [f"DROP USER IF EXISTS '{db_name}'; FLUSH PRIVILEGES;"]
+        command = mysql_command + [
+            f"DROP USER IF EXISTS '{db_name}'; FLUSH PRIVILEGES;"
+        ]
         run_command(command)

         # Grant permission to database and set password
@@ -102,10 +114,12 @@ def main():
         if config.get(RDS_DB) or site_config.get(RDS_DB):
             grant_privileges = RDS_PRIVILEGES

-        command = mysql_command + [f"\
+        command = mysql_command + [
+            f"\
 CREATE USER IF NOT EXISTS '{db_name}'@'%' IDENTIFIED BY '{db_password}'; \
 GRANT {grant_privileges} ON `{db_name}`.* TO '{db_name}'@'%'; \
-FLUSH PRIVILEGES;"]
+FLUSH PRIVILEGES;"
+        ]
         run_command(command)

     if frappe.redis_server:
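Reflowing `mysql_command` into one element per line does not change what `run_command` receives: a list in which each flag is a single argv entry, so no shell quoting is involved and the SQL statement travels as one argument after `-e`. A sketch with illustrative credentials:

```python
# Illustrative values; the real script reads these from config and
# environment variables.
db_host, db_root_username, mariadb_root_password = "mariadb", "root", "admin"
db_name = "_1a2b3c"

mysql_command = [
    "mysql",
    f"-h{db_host}",
    f"-u{db_root_username}",
    f"-p{mariadb_root_password}",
    "-e",
]
# The statement is appended as a single list element.
command = mysql_command + [f"DROP USER IF EXISTS '{db_name}'; FLUSH PRIVILEGES;"]
```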
@@ -1,16 +1,12 @@
+import datetime
 import os
 import time
-import boto3

-import datetime
 from glob import glob
-from frappe.utils import get_sites
+import boto3
 from constants import DATE_FORMAT
-from utils import (
-    get_s3_config,
-    upload_file_to_s3,
-    check_s3_environment_variables,
-)
+from frappe.utils import get_sites
+from utils import check_s3_environment_variables, get_s3_config, upload_file_to_s3


 def get_file_ext():
@@ -18,7 +14,7 @@ def get_file_ext():
         "database": "-database.sql.gz",
         "private_files": "-private-files.tar",
         "public_files": "-files.tar",
-        "site_config": "-site_config_backup.json"
+        "site_config": "-site_config_backup.json",
     }

@@ -31,19 +27,26 @@ def get_backup_details(sitename):

     if os.path.exists(site_backup_path):
         for filetype, ext in file_ext.items():
-            site_slug = sitename.replace('.', '_')
-            pattern = site_backup_path + '*-' + site_slug + ext
+            site_slug = sitename.replace(".", "_")
+            pattern = site_backup_path + "*-" + site_slug + ext
             backup_files = list(filter(os.path.isfile, glob(pattern)))

             if len(backup_files) > 0:
-                backup_files.sort(key=lambda file: os.stat(os.path.join(site_backup_path, file)).st_ctime)
-                backup_date = datetime.datetime.strptime(time.ctime(os.path.getmtime(backup_files[0])), "%a %b %d %H:%M:%S %Y")
+                backup_files.sort(
+                    key=lambda file: os.stat(
+                        os.path.join(site_backup_path, file)
+                    ).st_ctime
+                )
+                backup_date = datetime.datetime.strptime(
+                    time.ctime(os.path.getmtime(backup_files[0])),
+                    "%a %b %d %H:%M:%S %Y",
+                )
                 backup_details[filetype] = {
                     "sitename": sitename,
                     "file_size_in_bytes": os.stat(backup_files[-1]).st_size,
                     "file_path": os.path.abspath(backup_files[-1]),
                     "filename": os.path.basename(backup_files[-1]),
-                    "backup_date": backup_date.date().strftime("%Y-%m-%d %H:%M:%S")
+                    "backup_date": backup_date.date().strftime("%Y-%m-%d %H:%M:%S"),
                 }

     return backup_details
@ -54,31 +57,34 @@ def delete_old_backups(limit, bucket, site_name):
|
|||||||
all_backup_dates = list()
|
all_backup_dates = list()
|
||||||
backup_limit = int(limit)
|
backup_limit = int(limit)
|
||||||
check_s3_environment_variables()
|
check_s3_environment_variables()
|
||||||
bucket_dir = os.environ.get('BUCKET_DIR')
|
bucket_dir = os.environ.get("BUCKET_DIR")
|
||||||
oldest_backup_date = None
|
oldest_backup_date = None
|
||||||
|
|
||||||
s3 = boto3.resource(
|
s3 = boto3.resource(
|
||||||
's3',
|
"s3",
|
||||||
region_name=os.environ.get('REGION'),
|
region_name=os.environ.get("REGION"),
|
||||||
aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
|
aws_access_key_id=os.environ.get("ACCESS_KEY_ID"),
|
||||||
aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
|
aws_secret_access_key=os.environ.get("SECRET_ACCESS_KEY"),
|
||||||
endpoint_url=os.environ.get('ENDPOINT_URL')
|
endpoint_url=os.environ.get("ENDPOINT_URL"),
|
||||||
)
|
)
|
||||||
|
|
||||||
bucket = s3.Bucket(bucket)
|
bucket = s3.Bucket(bucket)
|
||||||
objects = bucket.meta.client.list_objects_v2(
|
objects = bucket.meta.client.list_objects_v2(Bucket=bucket.name, Delimiter="/")
|
||||||
Bucket=bucket.name,
|
|
||||||
Delimiter='/')
|
|
||||||
|
|
||||||
if objects:
|
if objects:
|
||||||
for obj in objects.get('CommonPrefixes'):
|
for obj in objects.get("CommonPrefixes"):
|
||||||
if obj.get('Prefix') == bucket_dir + '/':
|
if obj.get("Prefix") == bucket_dir + "/":
|
||||||
for backup_obj in bucket.objects.filter(Prefix=obj.get('Prefix')):
|
for backup_obj in bucket.objects.filter(Prefix=obj.get("Prefix")):
|
||||||
if backup_obj.get()["ContentType"] == "application/x-directory":
|
if backup_obj.get()["ContentType"] == "application/x-directory":
|
||||||
continue
|
continue
|
||||||
try:
|
try:
|
||||||
# backup_obj.key is bucket_dir/site/date_time/backupfile.extension
|
# backup_obj.key is bucket_dir/site/date_time/backupfile.extension
|
||||||
bucket_dir, site_slug, date_time, backupfile = backup_obj.key.split('/')
|
(
|
||||||
|
bucket_dir,
|
||||||
|
site_slug,
|
||||||
|
date_time,
|
||||||
|
backupfile,
|
||||||
|
) = backup_obj.key.split("/")
|
||||||
date_time_object = datetime.datetime.strptime(
|
date_time_object = datetime.datetime.strptime(
|
||||||
date_time, DATE_FORMAT
|
date_time, DATE_FORMAT
|
||||||
)
|
)
|
||||||
@ -98,7 +104,7 @@ def delete_old_backups(limit, bucket, site_name):
|
|||||||
for backup in all_backups:
|
for backup in all_backups:
|
||||||
try:
|
try:
|
||||||
# backup is bucket_dir/site/date_time/backupfile.extension
|
# backup is bucket_dir/site/date_time/backupfile.extension
|
||||||
backup_dir, site_slug, backup_dt_string, filename = backup.split('/')
|
backup_dir, site_slug, backup_dt_string, filename = backup.split("/")
|
||||||
backup_datetime = datetime.datetime.strptime(
|
backup_datetime = datetime.datetime.strptime(
|
||||||
backup_dt_string, DATE_FORMAT
|
backup_dt_string, DATE_FORMAT
|
||||||
)
|
)
|
||||||
@ -113,7 +119,7 @@ def delete_old_backups(limit, bucket, site_name):
|
|||||||
for obj in bucket.objects.filter(Prefix=oldest_backup):
|
for obj in bucket.objects.filter(Prefix=oldest_backup):
|
||||||
# delete all keys that are inside the oldest_backup
|
# delete all keys that are inside the oldest_backup
|
||||||
if bucket_dir in obj.key:
|
if bucket_dir in obj.key:
|
||||||
print('Deleteing ' + obj.key)
|
print("Deleting " + obj.key)
|
||||||
s3.Object(bucket.name, obj.key).delete()
|
s3.Object(bucket.name, obj.key).delete()
|
||||||
|
|
||||||
|
|
||||||
@ -124,31 +130,52 @@ def main():
|
|||||||
|
|
||||||
for site in sites:
|
for site in sites:
|
||||||
details = get_backup_details(site)
|
details = get_backup_details(site)
|
||||||
db_file = details.get('database', {}).get('file_path')
|
db_file = details.get("database", {}).get("file_path")
|
||||||
folder = os.environ.get('BUCKET_DIR') + '/' + site + '/'
|
folder = os.environ.get("BUCKET_DIR") + "/" + site + "/"
|
||||||
if db_file:
|
if db_file:
|
||||||
folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(db_file)[:15] + '/'
|
folder = (
|
||||||
|
os.environ.get("BUCKET_DIR")
|
||||||
|
+ "/"
|
||||||
|
+ site
|
||||||
|
+ "/"
|
||||||
|
+ os.path.basename(db_file)[:15]
|
||||||
|
+ "/"
|
||||||
|
)
|
||||||
upload_file_to_s3(db_file, folder, conn, bucket)
|
upload_file_to_s3(db_file, folder, conn, bucket)
|
||||||
|
|
||||||
# Archive site_config.json
|
# Archive site_config.json
|
||||||
site_config_file = details.get('site_config', {}).get('file_path')
|
site_config_file = details.get("site_config", {}).get("file_path")
|
||||||
if not site_config_file:
|
if not site_config_file:
|
||||||
site_config_file = os.path.join(os.getcwd(), site, 'site_config.json')
|
site_config_file = os.path.join(os.getcwd(), site, "site_config.json")
|
||||||
upload_file_to_s3(site_config_file, folder, conn, bucket)
|
upload_file_to_s3(site_config_file, folder, conn, bucket)
|
||||||
|
|
||||||
public_files = details.get('public_files', {}).get('file_path')
|
public_files = details.get("public_files", {}).get("file_path")
|
||||||
if public_files:
|
if public_files:
|
||||||
folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(public_files)[:15] + '/'
|
folder = (
|
||||||
|
os.environ.get("BUCKET_DIR")
|
||||||
|
+ "/"
|
||||||
|
+ site
|
||||||
|
+ "/"
|
||||||
|
+ os.path.basename(public_files)[:15]
|
||||||
|
+ "/"
|
||||||
|
)
|
||||||
upload_file_to_s3(public_files, folder, conn, bucket)
|
upload_file_to_s3(public_files, folder, conn, bucket)
|
||||||
|
|
||||||
private_files = details.get('private_files', {}).get('file_path')
|
private_files = details.get("private_files", {}).get("file_path")
|
||||||
if private_files:
|
if private_files:
|
||||||
folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(private_files)[:15] + '/'
|
folder = (
|
||||||
|
os.environ.get("BUCKET_DIR")
|
||||||
|
+ "/"
|
||||||
|
+ site
|
||||||
|
+ "/"
|
||||||
|
+ os.path.basename(private_files)[:15]
|
||||||
|
+ "/"
|
||||||
|
)
|
||||||
upload_file_to_s3(private_files, folder, conn, bucket)
|
upload_file_to_s3(private_files, folder, conn, bucket)
|
||||||
|
|
||||||
delete_old_backups(os.environ.get('BACKUP_LIMIT', '3'), bucket, site)
|
delete_old_backups(os.environ.get("BACKUP_LIMIT", "3"), bucket, site)
|
||||||
|
|
||||||
print('push-backup complete')
|
print("push-backup complete")
|
||||||
exit(0)
|
exit(0)
|
||||||
|
|
||||||
|
|
||||||
|
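The reformatted `get_backup_details` above round-trips each backup file's modification time through `time.ctime` and `datetime.strptime` with the format string `"%a %b %d %H:%M:%S %Y"`. A minimal sketch of that round-trip, standalone and outside the repo (the function name here is illustrative, not from the diff):

```python
import datetime
import time


# The push-backup script parses time.ctime() output with this exact format.
CTIME_FORMAT = "%a %b %d %H:%M:%S %Y"


def parse_mtime(epoch_seconds):
    """Convert an epoch timestamp to a datetime via ctime text, as the diff does."""
    return datetime.datetime.strptime(time.ctime(epoch_seconds), CTIME_FORMAT)


# ctime uses the local timezone, so epoch 0 lands in late 1969 or early 1970.
parsed = parse_mtime(0)
```

Note this loses sub-second precision and depends on the local timezone, which is why the script only uses it to pick the oldest file, not for exact arithmetic.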
@@ -1,93 +1,88 @@
-import os
 import datetime
-import tarfile
 import hashlib
-import frappe
-import boto3
+import os
+import tarfile
 
-from frappe.utils import get_sites, random_string
-from frappe.installer import (
-    make_conf,
-    get_conf_params,
-    make_site_dirs,
-    update_site_config
-)
+import boto3
+import frappe
 from constants import COMMON_SITE_CONFIG_FILE, DATE_FORMAT, RDS_DB, RDS_PRIVILEGES
+from frappe.installer import (
+    get_conf_params,
+    make_conf,
+    make_site_dirs,
+    update_site_config,
+)
+from frappe.utils import get_sites, random_string
 from utils import (
-    run_command,
-    list_directories,
-    set_key_in_site_config,
-    get_site_config,
+    check_s3_environment_variables,
     get_config,
     get_password,
-    check_s3_environment_variables,
+    get_site_config,
+    list_directories,
+    run_command,
+    set_key_in_site_config,
 )
 
 
 def get_backup_dir():
-    return os.path.join(
-        os.path.expanduser('~'),
-        'backups'
-    )
+    return os.path.join(os.path.expanduser("~"), "backups")
 
 
 def decompress_db(database_file, site):
     command = ["gunzip", "-c", database_file]
     with open(database_file.replace(".gz", ""), "w") as db_file:
-        print('Extract Database GZip for site {}'.format(site))
+        print(f"Extract Database GZip for site {site}")
         run_command(command, stdout=db_file)
 
 
 def restore_database(files_base, site_config_path, site):
     # restore database
-    database_file = files_base + '-database.sql.gz'
+    database_file = files_base + "-database.sql.gz"
     decompress_db(database_file, site)
     config = get_config()
 
     # Set db_type if it exists in backup site_config.json
-    set_key_in_site_config('db_type', site, site_config_path)
+    set_key_in_site_config("db_type", site, site_config_path)
     # Set db_host if it exists in backup site_config.json
-    set_key_in_site_config('db_host', site, site_config_path)
+    set_key_in_site_config("db_host", site, site_config_path)
     # Set db_port if it exists in backup site_config.json
-    set_key_in_site_config('db_port', site, site_config_path)
+    set_key_in_site_config("db_port", site, site_config_path)
 
     # get updated site_config
     site_config = get_site_config(site)
 
     # if no db_type exists, default to mariadb
-    db_type = site_config.get('db_type', 'mariadb')
+    db_type = site_config.get("db_type", "mariadb")
     is_database_restored = False
 
-    if db_type == 'mariadb':
+    if db_type == "mariadb":
         restore_mariadb(
-            config=config,
-            site_config=site_config,
-            database_file=database_file)
+            config=config, site_config=site_config, database_file=database_file
+        )
         is_database_restored = True
-    elif db_type == 'postgres':
+    elif db_type == "postgres":
         restore_postgres(
-            config=config,
-            site_config=site_config,
-            database_file=database_file)
+            config=config, site_config=site_config, database_file=database_file
+        )
         is_database_restored = True
 
     if is_database_restored:
         # Set encryption_key if it exists in backup site_config.json
-        set_key_in_site_config('encryption_key', site, site_config_path)
+        set_key_in_site_config("encryption_key", site, site_config_path)
 
 
 def restore_files(files_base):
-    public_files = files_base + '-files.tar'
+    public_files = files_base + "-files.tar"
     # extract tar
     public_tar = tarfile.open(public_files)
-    print('Extracting {}'.format(public_files))
+    print(f"Extracting {public_files}")
     public_tar.extractall()
 
 
 def restore_private_files(files_base):
-    private_files = files_base + '-private-files.tar'
+    private_files = files_base + "-private-files.tar"
     private_tar = tarfile.open(private_files)
-    print('Extracting {}'.format(private_files))
+    print(f"Extracting {private_files}")
     private_tar.extractall()
 
 
@@ -96,15 +91,15 @@ def pull_backup_from_s3():
 
     # https://stackoverflow.com/a/54672690
     s3 = boto3.resource(
-        's3',
-        region_name=os.environ.get('REGION'),
-        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
-        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
-        endpoint_url=os.environ.get('ENDPOINT_URL')
+        "s3",
+        region_name=os.environ.get("REGION"),
+        aws_access_key_id=os.environ.get("ACCESS_KEY_ID"),
+        aws_secret_access_key=os.environ.get("SECRET_ACCESS_KEY"),
+        endpoint_url=os.environ.get("ENDPOINT_URL"),
     )
 
-    bucket_dir = os.environ.get('BUCKET_DIR')
-    bucket_name = os.environ.get('BUCKET_NAME')
+    bucket_dir = os.environ.get("BUCKET_DIR")
+    bucket_name = os.environ.get("BUCKET_NAME")
     bucket = s3.Bucket(bucket_name)
 
     # Change directory to /home/frappe/backups
@@ -118,10 +113,10 @@ def pull_backup_from_s3():
     for obj in bucket.objects.filter(Prefix=bucket_dir):
         if obj.get()["ContentType"] == "application/x-directory":
             continue
-        backup_file = obj.key.replace(os.path.join(bucket_dir, ''), '')
+        backup_file = obj.key.replace(os.path.join(bucket_dir, ""), "")
         backup_files.append(backup_file)
-        site_name, timestamp, backup_type = backup_file.split('/')
-        site_timestamp = site_name + '/' + timestamp
+        site_name, timestamp, backup_type = backup_file.split("/")
+        site_timestamp = site_name + "/" + timestamp
         sites.add(site_name)
         site_timestamps.add(site_timestamp)
 
@@ -129,13 +124,11 @@ def pull_backup_from_s3():
     for site in sites:
         backup_timestamps = []
         for site_timestamp in site_timestamps:
-            site_name, timestamp = site_timestamp.split('/')
+            site_name, timestamp = site_timestamp.split("/")
             if site == site_name:
-                timestamp_datetime = datetime.datetime.strptime(
-                    timestamp, DATE_FORMAT
-                )
+                timestamp_datetime = datetime.datetime.strptime(timestamp, DATE_FORMAT)
                 backup_timestamps.append(timestamp)
-        download_backups.append(site + '/' + max(backup_timestamps))
+        download_backups.append(site + "/" + max(backup_timestamps))
 
     # Only download latest backups
     for backup_file in backup_files:
@@ -143,21 +136,21 @@ def pull_backup_from_s3():
         if backup in backup_file:
             if not os.path.exists(os.path.dirname(backup_file)):
                 os.makedirs(os.path.dirname(backup_file))
-            print('Downloading {}'.format(backup_file))
-            bucket.download_file(bucket_dir + '/' + backup_file, backup_file)
+            print(f"Downloading {backup_file}")
+            bucket.download_file(bucket_dir + "/" + backup_file, backup_file)
 
-    os.chdir(os.path.join(os.path.expanduser('~'), 'frappe-bench', 'sites'))
+    os.chdir(os.path.join(os.path.expanduser("~"), "frappe-bench", "sites"))
 
 
 def restore_postgres(config, site_config, database_file):
     # common config
     common_site_config_path = os.path.join(os.getcwd(), COMMON_SITE_CONFIG_FILE)
 
-    db_root_user = config.get('root_login')
+    db_root_user = config.get("root_login")
     if not db_root_user:
-        postgres_user = os.environ.get('DB_ROOT_USER')
+        postgres_user = os.environ.get("DB_ROOT_USER")
         if not postgres_user:
-            print('Variable DB_ROOT_USER not set')
+            print("Variable DB_ROOT_USER not set")
             exit(1)
 
         db_root_user = postgres_user
@@ -165,13 +158,14 @@ def restore_postgres(config, site_config, database_file):
         "root_login",
         db_root_user,
         validate=False,
-        site_config_path=common_site_config_path)
+        site_config_path=common_site_config_path,
+    )
 
-    db_root_password = config.get('root_password')
+    db_root_password = config.get("root_password")
     if not db_root_password:
-        root_password = get_password('POSTGRES_PASSWORD')
+        root_password = get_password("POSTGRES_PASSWORD")
         if not root_password:
-            print('Variable POSTGRES_PASSWORD not set')
+            print("Variable POSTGRES_PASSWORD not set")
             exit(1)
 
         db_root_password = root_password
@@ -179,53 +173,72 @@ def restore_postgres(config, site_config, database_file):
         "root_password",
        db_root_password,
         validate=False,
-        site_config_path=common_site_config_path)
+        site_config_path=common_site_config_path,
+    )
 
     # site config
-    db_host = site_config.get('db_host')
-    db_port = site_config.get('db_port', 5432)
-    db_name = site_config.get('db_name')
-    db_password = site_config.get('db_password')
+    db_host = site_config.get("db_host")
+    db_port = site_config.get("db_port", 5432)
+    db_name = site_config.get("db_name")
+    db_password = site_config.get("db_password")
 
     psql_command = ["psql"]
     psql_uri = f"postgres://{db_root_user}:{db_root_password}@{db_host}:{db_port}"
 
-    print('Restoring PostgreSQL')
-    run_command(psql_command + [psql_uri, "-c", f"DROP DATABASE IF EXISTS \"{db_name}\""])
+    print("Restoring PostgreSQL")
+    run_command(psql_command + [psql_uri, "-c", f'DROP DATABASE IF EXISTS "{db_name}"'])
     run_command(psql_command + [psql_uri, "-c", f"DROP USER IF EXISTS {db_name}"])
-    run_command(psql_command + [psql_uri, "-c", f"CREATE DATABASE \"{db_name}\""])
-    run_command(psql_command + [psql_uri, "-c", f"CREATE user {db_name} password '{db_password}'"])
-    run_command(psql_command + [psql_uri, "-c", f"GRANT ALL PRIVILEGES ON DATABASE \"{db_name}\" TO {db_name}"])
-    with open(database_file.replace('.gz', ''), 'r') as db_file:
+    run_command(psql_command + [psql_uri, "-c", f'CREATE DATABASE "{db_name}"'])
+    run_command(
+        psql_command
+        + [psql_uri, "-c", f"CREATE user {db_name} password '{db_password}'"]
+    )
+    run_command(
+        psql_command
+        + [psql_uri, "-c", f'GRANT ALL PRIVILEGES ON DATABASE "{db_name}" TO {db_name}']
+    )
+    with open(database_file.replace(".gz", "")) as db_file:
         run_command(psql_command + [f"{psql_uri}/{db_name}", "<"], stdin=db_file)
 
 
 def restore_mariadb(config, site_config, database_file):
-    db_root_password = get_password('MYSQL_ROOT_PASSWORD')
+    db_root_password = get_password("MYSQL_ROOT_PASSWORD")
     if not db_root_password:
-        print('Variable MYSQL_ROOT_PASSWORD not set')
+        print("Variable MYSQL_ROOT_PASSWORD not set")
         exit(1)
 
-    db_root_user = os.environ.get("DB_ROOT_USER", 'root')
+    db_root_user = os.environ.get("DB_ROOT_USER", "root")
 
-    db_host = site_config.get('db_host', config.get('db_host'))
-    db_port = site_config.get('db_port', config.get('db_port', 3306))
-    db_name = site_config.get('db_name')
-    db_password = site_config.get('db_password')
+    db_host = site_config.get("db_host", config.get("db_host"))
+    db_port = site_config.get("db_port", config.get("db_port", 3306))
+    db_name = site_config.get("db_name")
+    db_password = site_config.get("db_password")
 
     # mysql command prefix
-    mysql_command = ["mysql", f"-u{db_root_user}", f"-h{db_host}", f"-p{db_root_password}", f"-P{db_port}"]
+    mysql_command = [
+        "mysql",
+        f"-u{db_root_user}",
+        f"-h{db_host}",
+        f"-p{db_root_password}",
+        f"-P{db_port}",
+    ]
 
     # drop db if exists for clean restore
     drop_database = mysql_command + ["-e", f"DROP DATABASE IF EXISTS `{db_name}`;"]
     run_command(drop_database)
 
     # create db
-    create_database = mysql_command + ["-e", f"CREATE DATABASE IF NOT EXISTS `{db_name}`;"]
+    create_database = mysql_command + [
+        "-e",
+        f"CREATE DATABASE IF NOT EXISTS `{db_name}`;",
+    ]
     run_command(create_database)
 
     # create user
-    create_user = mysql_command + ["-e", f"CREATE USER IF NOT EXISTS '{db_name}'@'%' IDENTIFIED BY '{db_password}'; FLUSH PRIVILEGES;"]
+    create_user = mysql_command + [
+        "-e",
+        f"CREATE USER IF NOT EXISTS '{db_name}'@'%' IDENTIFIED BY '{db_password}'; FLUSH PRIVILEGES;",
+    ]
     run_command(create_user)
 
     # grant db privileges to user
@@ -236,11 +249,14 @@ def restore_mariadb(config, site_config, database_file):
     if config.get(RDS_DB) or site_config.get(RDS_DB):
         grant_privileges = RDS_PRIVILEGES
 
-    grant_privileges_command = mysql_command + ["-e", f"GRANT {grant_privileges} ON `{db_name}`.* TO '{db_name}'@'%' IDENTIFIED BY '{db_password}'; FLUSH PRIVILEGES;"]
+    grant_privileges_command = mysql_command + [
+        "-e",
+        f"GRANT {grant_privileges} ON `{db_name}`.* TO '{db_name}'@'%' IDENTIFIED BY '{db_password}'; FLUSH PRIVILEGES;",
+    ]
     run_command(grant_privileges_command)
 
-    print('Restoring MariaDB')
-    with open(database_file.replace('.gz', ''), 'r') as db_file:
+    print("Restoring MariaDB")
+    with open(database_file.replace(".gz", "")) as db_file:
         run_command(mysql_command + [f"{db_name}"], stdin=db_file)
 
 
@@ -251,35 +267,38 @@ def main():
     pull_backup_from_s3()
 
     for site in list_directories(backup_dir):
-        site_slug = site.replace('.', '_')
-        backups = [datetime.datetime.strptime(backup, DATE_FORMAT) for backup in list_directories(os.path.join(backup_dir, site))]
+        site_slug = site.replace(".", "_")
+        backups = [
+            datetime.datetime.strptime(backup, DATE_FORMAT)
+            for backup in list_directories(os.path.join(backup_dir, site))
+        ]
         latest_backup = max(backups).strftime(DATE_FORMAT)
-        files_base = os.path.join(backup_dir, site, latest_backup, '')
-        files_base += latest_backup + '-' + site_slug
-        site_config_path = files_base + '-site_config_backup.json'
+        files_base = os.path.join(backup_dir, site, latest_backup, "")
+        files_base += latest_backup + "-" + site_slug
+        site_config_path = files_base + "-site_config_backup.json"
         if not os.path.exists(site_config_path):
-            site_config_path = os.path.join(backup_dir, site, 'site_config.json')
+            site_config_path = os.path.join(backup_dir, site, "site_config.json")
         if site in get_sites():
-            print('Overwrite site {}'.format(site))
+            print(f"Overwrite site {site}")
             restore_database(files_base, site_config_path, site)
             restore_private_files(files_base)
             restore_files(files_base)
         else:
             site_config = get_conf_params(
-                db_name='_' + hashlib.sha1(site.encode()).hexdigest()[:16],
-                db_password=random_string(16)
+                db_name="_" + hashlib.sha1(site.encode()).hexdigest()[:16],
+                db_password=random_string(16),
             )
 
             frappe.local.site = site
             frappe.local.sites_path = os.getcwd()
-            frappe.local.site_path = os.getcwd() + '/' + site
+            frappe.local.site_path = os.getcwd() + "/" + site
             make_conf(
-                db_name=site_config.get('db_name'),
-                db_password=site_config.get('db_password'),
+                db_name=site_config.get("db_name"),
+                db_password=site_config.get("db_password"),
             )
             make_site_dirs()
 
-            print('Create site {}'.format(site))
+            print(f"Create site {site}")
             restore_database(files_base, site_config_path, site)
             restore_private_files(files_base)
             restore_files(files_base)
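The restore script's `main()` above picks the newest backup by parsing each directory name with `DATE_FORMAT` and taking `max(backups)`. A standalone sketch of that selection logic; note `DATE_FORMAT` lives in the repo's `constants` module and its exact value is not shown in this diff, so the format string below is only an illustrative assumption:

```python
import datetime

# Assumption for illustration only: the repo's actual DATE_FORMAT constant
# is defined in constants.py and is not visible in this diff.
DATE_FORMAT = "%Y%m%d_%H%M%S"


def latest_backup(timestamps):
    """Return the newest backup directory name, mirroring max(backups) in main()."""
    parsed = [datetime.datetime.strptime(ts, DATE_FORMAT) for ts in timestamps]
    return max(parsed).strftime(DATE_FORMAT)


latest_backup(["20200101_120000", "20200301_080000", "20200215_235959"])
# → "20200301_080000"
```

Parsing to `datetime` before comparing keeps the selection correct even if the directory names were not lexicographically sortable.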
@ -1,15 +1,12 @@
|
|||||||
import json
|
import json
|
||||||
import os
|
import os
|
||||||
import subprocess
|
import subprocess
|
||||||
|
|
||||||
import boto3
|
import boto3
|
||||||
import git
|
import git
|
||||||
|
from constants import APP_VERSIONS_JSON_FILE, APPS_TXT_FILE, COMMON_SITE_CONFIG_FILE
|
||||||
from frappe.installer import update_site_config
|
from frappe.installer import update_site_config
|
||||||
from constants import (
|
|
||||||
APP_VERSIONS_JSON_FILE,
|
|
||||||
APPS_TXT_FILE,
|
|
||||||
COMMON_SITE_CONFIG_FILE
|
|
||||||
)
|
|
||||||
|
|
||||||
def run_command(command, stdout=None, stdin=None, stderr=None):
|
def run_command(command, stdout=None, stdin=None, stderr=None):
|
||||||
stdout = stdout or subprocess.PIPE
|
stdout = stdout or subprocess.PIPE
|
||||||
@ -26,7 +23,7 @@ def run_command(command, stdout=None, stdin=None, stderr=None):
|
|||||||
|
|
||||||
|
|
||||||
def save_version_file(versions):
|
def save_version_file(versions):
|
||||||
with open(APP_VERSIONS_JSON_FILE, 'w') as f:
|
with open(APP_VERSIONS_JSON_FILE, "w") as f:
|
||||||
return json.dump(versions, f, indent=1, sort_keys=True)
|
return json.dump(versions, f, indent=1, sort_keys=True)
|
||||||
|
|
||||||
|
|
||||||
@ -58,10 +55,10 @@ def get_container_versions(apps):
|
|||||||
pass
|
pass
|
||||||
|
|
||||||
try:
|
try:
|
||||||
path = os.path.join('..', 'apps', app)
|
path = os.path.join("..", "apps", app)
|
||||||
repo = git.Repo(path)
|
repo = git.Repo(path)
|
||||||
commit_hash = repo.head.object.hexsha
|
commit_hash = repo.head.object.hexsha
|
||||||
versions.update({app+'_git_hash': commit_hash})
|
versions.update({app + "_git_hash": commit_hash})
|
||||||
except Exception:
|
except Exception:
|
||||||
pass
|
pass
|
||||||
|
|
||||||
@ -94,18 +91,22 @@ def get_config():
|
|||||||
|
|
||||||
def get_site_config(site_name):
|
def get_site_config(site_name):
|
||||||
site_config = None
|
site_config = None
|
||||||
with open('{site_name}/site_config.json'.format(site_name=site_name)) as site_config_file:
|
with open(f"{site_name}/site_config.json") as site_config_file:
|
||||||
site_config = json.load(site_config_file)
|
site_config = json.load(site_config_file)
|
||||||
return site_config
|
return site_config
|
||||||
|
|
||||||
|
|
||||||
def save_config(config):
|
def save_config(config):
|
||||||
with open(COMMON_SITE_CONFIG_FILE, 'w') as f:
|
with open(COMMON_SITE_CONFIG_FILE, "w") as f:
|
||||||
return json.dump(config, f, indent=1, sort_keys=True)
|
return json.dump(config, f, indent=1, sort_keys=True)
|
||||||
|
|
||||||
|
|
||||||
def get_password(env_var, default=None):
|
def get_password(env_var, default=None):
|
||||||
return os.environ.get(env_var) or get_password_from_secret(f"{env_var}_FILE") or default
|
return (
|
||||||
|
os.environ.get(env_var)
|
||||||
|
or get_password_from_secret(f"{env_var}_FILE")
|
||||||
|
or default
|
||||||
|
)
|
||||||
|
|
||||||
|
|
||||||
def get_password_from_secret(env_var):
|
def get_password_from_secret(env_var):
|
||||||
@ -128,14 +129,14 @@ def get_password_from_secret(env_var):
|
|||||||
|
|
||||||
def get_s3_config():
|
def get_s3_config():
|
||||||
check_s3_environment_variables()
|
check_s3_environment_variables()
|
||||||
bucket = os.environ.get('BUCKET_NAME')
|
bucket = os.environ.get("BUCKET_NAME")
|
||||||
|
|
||||||
conn = boto3.client(
|
conn = boto3.client(
|
||||||
's3',
|
"s3",
|
||||||
region_name=os.environ.get('REGION'),
|
region_name=os.environ.get("REGION"),
|
||||||
aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
|
aws_access_key_id=os.environ.get("ACCESS_KEY_ID"),
|
||||||
aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
|
aws_secret_access_key=os.environ.get("SECRET_ACCESS_KEY"),
|
||||||
endpoint_url=os.environ.get('ENDPOINT_URL')
|
endpoint_url=os.environ.get("ENDPOINT_URL"),
|
||||||
)
|
)
|
||||||
|
|
||||||
return conn, bucket
|
return conn, bucket
|
||||||
@@ -164,7 +165,7 @@ def list_directories(path):
 def get_site_config_from_path(site_config_path):
     site_config = dict()
     if os.path.exists(site_config_path):
-        with open(site_config_path, 'r') as sc:
+        with open(site_config_path) as sc:
             site_config = json.load(sc)
     return site_config

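`get_site_config_from_path` now relies on the default read mode of `open`; its contract is easy to check in isolation (a sketch against a temporary directory):

```python
import json
import os
import tempfile


def get_site_config_from_path(site_config_path):
    # A missing file yields an empty config instead of raising.
    site_config = {}
    if os.path.exists(site_config_path):
        with open(site_config_path) as sc:
            site_config = json.load(sc)
    return site_config


with tempfile.TemporaryDirectory() as workdir:
    path = os.path.join(workdir, "site_config.json")
    before = get_site_config_from_path(path)  # file does not exist yet
    with open(path, "w") as f:
        json.dump({"db_name": "demo"}, f)
    after = get_site_config_from_path(path)

print(before)  # {}
print(after)  # {'db_name': 'demo'}
```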
@@ -173,32 +174,35 @@ def set_key_in_site_config(key, site, site_config_path):
     site_config = get_site_config_from_path(site_config_path)
     value = site_config.get(key)
     if value:
-        print('Set {key} in site config for site: {site}'.format(key=key, site=site))
-        update_site_config(key, value,
-            site_config_path=os.path.join(os.getcwd(), site, "site_config.json"))
+        print(f"Set {key} in site config for site: {site}")
+        update_site_config(
+            key,
+            value,
+            site_config_path=os.path.join(os.getcwd(), site, "site_config.json"),
+        )


 def check_s3_environment_variables():
-    if 'BUCKET_NAME' not in os.environ:
-        print('Variable BUCKET_NAME not set')
+    if "BUCKET_NAME" not in os.environ:
+        print("Variable BUCKET_NAME not set")
         exit(1)

-    if 'ACCESS_KEY_ID' not in os.environ:
-        print('Variable ACCESS_KEY_ID not set')
+    if "ACCESS_KEY_ID" not in os.environ:
+        print("Variable ACCESS_KEY_ID not set")
         exit(1)

-    if 'SECRET_ACCESS_KEY' not in os.environ:
-        print('Variable SECRET_ACCESS_KEY not set')
+    if "SECRET_ACCESS_KEY" not in os.environ:
+        print("Variable SECRET_ACCESS_KEY not set")
         exit(1)

-    if 'ENDPOINT_URL' not in os.environ:
-        print('Variable ENDPOINT_URL not set')
+    if "ENDPOINT_URL" not in os.environ:
+        print("Variable ENDPOINT_URL not set")
         exit(1)

-    if 'BUCKET_DIR' not in os.environ:
-        print('Variable BUCKET_DIR not set')
+    if "BUCKET_DIR" not in os.environ:
+        print("Variable BUCKET_DIR not set")
         exit(1)

-    if 'REGION' not in os.environ:
-        print('Variable REGION not set')
+    if "REGION" not in os.environ:
+        print("Variable REGION not set")
         exit(1)
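`check_s3_environment_variables` tests each variable one by one and exits on the first failure; the same requirement can be sketched as a loop that reports every missing name at once (an illustration, not the shipped code):

```python
REQUIRED_S3_VARIABLES = [
    "BUCKET_NAME",
    "ACCESS_KEY_ID",
    "SECRET_ACCESS_KEY",
    "ENDPOINT_URL",
    "BUCKET_DIR",
    "REGION",
]


def missing_s3_variables(env):
    # Names that check_s3_environment_variables would complain about.
    return [name for name in REQUIRED_S3_VARIABLES if name not in env]


print(missing_s3_variables({"BUCKET_NAME": "backups", "REGION": "us-east-1"}))
# ['ACCESS_KEY_ID', 'SECRET_ACCESS_KEY', 'ENDPOINT_URL', 'BUCKET_DIR']
```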
@@ -94,6 +94,7 @@ code Procfile
 ```
+
 Or running the following command:

 ```shell
 sed -i '/redis/d' ./Procfile
 ```
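The `sed -i '/redis/d' ./Procfile` command above deletes every Procfile line that mentions redis. The same edit expressed in Python (a sketch against a throwaway file; the process lines are made-up examples):

```python
import os
import tempfile

procfile = os.path.join(tempfile.mkdtemp(), "Procfile")
with open(procfile, "w") as f:
    f.write(
        "redis_cache: redis-server --port 13000\n"
        "web: bench serve --port 8000\n"
        "redis_queue: redis-server --port 11000\n"
    )

# Equivalent of `sed -i '/redis/d' ./Procfile`: drop every line mentioning redis.
with open(procfile) as f:
    kept = [line for line in f if "redis" not in line]
with open(procfile, "w") as f:
    f.writelines(kept)

with open(procfile) as f:
    print(f.read())  # only the web process remains
```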
@@ -105,6 +106,7 @@ You can create a new site with the following command:
 ```shell
 bench new-site sitename --no-mariadb-socket
 ```
+
 sitename MUST end with .localhost for trying deployments locally.

 for example:
@@ -234,7 +236,7 @@ The first step is installing and updating the required software. Usually the fra
 /workspace/development/frappe-bench/env/bin/python -m pip install --upgrade jupyter ipykernel ipython
 ```

-Then, run the commmand `Python: Show Python interactive window` from the VSCode command palette.
+Then, run the command `Python: Show Python interactive window` from the VSCode command palette.

 Replace `mysite.localhost` with your site and run the following code in a Jupyter cell:

@@ -259,7 +261,6 @@ Example shows the queries to be executed for site `localhost`

 Open sites/localhost/site_config.json:

-
 ```shell
 code sites/localhost/site_config.json
 ```
@@ -286,6 +287,7 @@ EXIT;
 In case you don't use VSCode, you may start the containers manually with the following command:
+
 ### Running the containers

 ```shell
 docker-compose -f .devcontainer/docker-compose.yml up -d
 ```
@@ -10,7 +10,12 @@
       "request": "launch",
       "program": "${workspaceFolder}/frappe-bench/apps/frappe/frappe/utils/bench_helper.py",
       "args": [
-        "frappe", "serve", "--port", "8000", "--noreload", "--nothreading"
+        "frappe",
+        "serve",
+        "--port",
+        "8000",
+        "--noreload",
+        "--nothreading"
       ],
       "pythonPath": "${workspaceFolder}/frappe-bench/env/bin/python",
       "cwd": "${workspaceFolder}/frappe-bench/sites",
@@ -23,9 +28,7 @@
       "type": "python",
       "request": "launch",
       "program": "${workspaceFolder}/frappe-bench/apps/frappe/frappe/utils/bench_helper.py",
-      "args": [
-        "frappe", "worker", "--queue", "default"
-      ],
+      "args": ["frappe", "worker", "--queue", "default"],
       "pythonPath": "${workspaceFolder}/frappe-bench/env/bin/python",
       "cwd": "${workspaceFolder}/frappe-bench/sites",
       "env": {
@@ -37,9 +40,7 @@
       "type": "python",
       "request": "launch",
       "program": "${workspaceFolder}/frappe-bench/apps/frappe/frappe/utils/bench_helper.py",
-      "args": [
-        "frappe", "worker", "--queue", "short"
-      ],
+      "args": ["frappe", "worker", "--queue", "short"],
       "pythonPath": "${workspaceFolder}/frappe-bench/env/bin/python",
       "cwd": "${workspaceFolder}/frappe-bench/sites",
       "env": {
@@ -51,9 +52,7 @@
       "type": "python",
       "request": "launch",
       "program": "${workspaceFolder}/frappe-bench/apps/frappe/frappe/utils/bench_helper.py",
-      "args": [
-        "frappe", "worker", "--queue", "long"
-      ],
+      "args": ["frappe", "worker", "--queue", "long"],
       "pythonPath": "${workspaceFolder}/frappe-bench/env/bin/python",
       "cwd": "${workspaceFolder}/frappe-bench/sites",
       "env": {
@@ -69,7 +68,13 @@
       "cwd": "${workspaceFolder}/frappe-bench",
       "console": "internalConsole",
       "args": [
-        "start", "socketio", "watch", "schedule", "worker_short", "worker_long", "worker_default"
+        "start",
+        "socketio",
+        "watch",
+        "schedule",
+        "worker_short",
+        "worker_long",
+        "worker_default"
       ]
     }
   ]
@@ -42,7 +42,7 @@ version: "3.7"

 services:
   mariadb-master:
-    image: 'bitnami/mariadb:10.3'
+    image: "bitnami/mariadb:10.3"
     deploy:
       restart_policy:
         condition: on-failure
@@ -54,7 +54,7 @@ services:
     secrets:
       - frappe-mariadb-root-password
     volumes:
-      - 'mariadb_master_data:/bitnami/mariadb'
+      - "mariadb_master_data:/bitnami/mariadb"
     environment:
       - MARIADB_REPLICATION_MODE=master
       - MARIADB_REPLICATION_USER=repl_user
@@ -62,7 +62,7 @@ services:
       - MARIADB_ROOT_PASSWORD_FILE=/run/secrets/frappe-mariadb-root-password

   mariadb-slave:
-    image: 'bitnami/mariadb:10.3'
+    image: "bitnami/mariadb:10.3"
     deploy:
       restart_policy:
         condition: on-failure
@@ -74,7 +74,7 @@ services:
     secrets:
       - frappe-mariadb-root-password
     volumes:
-      - 'mariadb_slave_data:/bitnami/mariadb'
+      - "mariadb_slave_data:/bitnami/mariadb"
     environment:
       - MARIADB_REPLICATION_MODE=slave
       - MARIADB_REPLICATION_USER=repl_user
@@ -265,6 +265,7 @@ Use environment variables:
 - `FRAPPE_VERSION` variable to be set to desired version of Frappe Framework. e.g. 12.7.0
 - `MARIADB_HOST=frappe-mariadb_mariadb-master`
 - `SITES` variable is list of sites in back tick and separated by comma
+
 ```
 SITES=`site1.example.com`,`site2.example.com`
 ```
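The backtick-and-comma `SITES` format above is simple to split programmatically; a hypothetical parser (the real entrypoint scripts may handle this differently):

```python
def parse_sites(raw):
    # `site1.example.com`,`site2.example.com` -> ['site1.example.com', 'site2.example.com']
    return [part.strip("`") for part in raw.split(",") if part.strip("`")]


print(parse_sites("`site1.example.com`,`site2.example.com`"))
```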
@@ -292,4 +293,3 @@ SITES=`site1.example.com`,`site2.example.com`
 6. Env variables:
    - MAINTENANCE_MODE=1
 7. Start container
-
@@ -123,46 +123,49 @@ Notes:
 ## Docker containers

 This repository contains the following docker-compose files, each one containing the described images:
-* docker-compose-common.yml
-  * redis-cache
-    * volume: redis-cache-vol
-  * redis-queue
-    * volume: redis-queue-vol
-  * redis-socketio
-    * volume: redis-socketio-vol
-  * mariadb: main database
-    * volume: mariadb-vol
-* docker-compose-erpnext.yml
-  * erpnext-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
-    * volume: assets-vol
-  * erpnext-python: main application code
-  * frappe-socketio: enables realtime communication to the user interface through websockets
-  * frappe-worker-default: background runner
-  * frappe-worker-short: background runner for short-running jobs
-  * frappe-worker-long: background runner for long-running jobs
-  * frappe-schedule
-
-* docker-compose-frappe.yml
-  * frappe-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
-    * volume: assets-vol, sites-vol
-  * erpnext-python: main application code
-    * volume: sites-vol
-  * frappe-socketio: enables realtime communication to the user interface through websockets
-    * volume: sites-vol
-  * frappe-worker-default: background runner
-    * volume: sites-vol
-  * frappe-worker-short: background runner for short-running jobs
-    * volume: sites-vol
-  * frappe-worker-long: background runner for long-running jobs
-    * volume: sites-vol
-  * frappe-schedule
-    * volume: sites-vol
-
-* docker-compose-networks.yml: this yaml define the network to communicate with *Letsencrypt Nginx Proxy Companion*.
-
-* erpnext-publish.yml: this yml extends erpnext-nginx service to publish port 80, can only be used with docker-compose-erpnext.yml
-
-* frappe-publish.yml: this yml extends frappe-nginx service to publish port 80, can only be used with docker-compose-frappe.yml
+- docker-compose-common.yml
+
+  - redis-cache
+    - volume: redis-cache-vol
+  - redis-queue
+    - volume: redis-queue-vol
+  - redis-socketio
+    - volume: redis-socketio-vol
+  - mariadb: main database
+    - volume: mariadb-vol
+
+- docker-compose-erpnext.yml
+
+  - erpnext-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
+    - volume: assets-vol
+  - erpnext-python: main application code
+  - frappe-socketio: enables realtime communication to the user interface through websockets
+  - frappe-worker-default: background runner
+  - frappe-worker-short: background runner for short-running jobs
+  - frappe-worker-long: background runner for long-running jobs
+  - frappe-schedule
+
+- docker-compose-frappe.yml
+
+  - frappe-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
+    - volume: assets-vol, sites-vol
+  - erpnext-python: main application code
+    - volume: sites-vol
+  - frappe-socketio: enables realtime communication to the user interface through websockets
+    - volume: sites-vol
+  - frappe-worker-default: background runner
+    - volume: sites-vol
+  - frappe-worker-short: background runner for short-running jobs
+    - volume: sites-vol
+  - frappe-worker-long: background runner for long-running jobs
+    - volume: sites-vol
+  - frappe-schedule
+    - volume: sites-vol
+
+- docker-compose-networks.yml: this yaml define the network to communicate with _Letsencrypt Nginx Proxy Companion_.
+
+- erpnext-publish.yml: this yml extends erpnext-nginx service to publish port 80, can only be used with docker-compose-erpnext.yml
+
+- frappe-publish.yml: this yml extends frappe-nginx service to publish port 80, can only be used with docker-compose-frappe.yml

 ## Updating and Migrating Sites
@@ -36,8 +36,8 @@ To get started, copy the existing `env-local` or `env-production` file to `.env`
   - In case of a separately managed database setups, set the value to the database's hostname/IP/domain.
 - `SITE_NAME=erp.example.com`
   - Creates this site after starting all services and installs ERPNext. Site name must be resolvable by users machines and the ERPNext components. e.g. `erp.example.com` or `mysite.localhost`.
-- ``SITES=`erp.example.com` ``
-  - List of sites that are part of the deployment "bench" Each site is separated by a comma(,) and quoted in backtick (`). By default site created by ``SITE_NAME`` variable is added here.
+- `` SITES=`erp.example.com` ``
+  - List of sites that are part of the deployment "bench" Each site is separated by a comma(,) and quoted in backtick (`). By default site created by `SITE_NAME` variable is added here.
   - If LetsEncrypt is being setup, make sure that the DNS for all the site's domains correctly point to the current instance.
 - `DB_ROOT_USER=root`
   - MariaDB root username
@@ -51,7 +51,7 @@ To get started, copy the existing `env-local` or `env-production` file to `.env`
   - Related to the traefik configuration, says all traffic from outside should come from HTTP or HTTPS, for local development should be web, for production websecure. if redirection is needed, read below.
 - `CERT_RESOLVER_LABEL=traefik.http.routers.erpnext-nginx.tls.certresolver=myresolver`
   - Which traefik resolver to use to get TLS certificate, sets `erpnext.local.no-cert-resolver` for local setup.
-- ``HTTPS_REDIRECT_RULE_LABEL=traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`) ``
+- `` HTTPS_REDIRECT_RULE_LABEL=traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`) ``
   - Related to the traefik https redirection configuration, sets `erpnext.local.no-redirect-rule` for local setup.
 - `HTTPS_REDIRECT_ENTRYPOINT_LABEL=traefik.http.routers.http-catchall.entrypoints=web`
   - Related to the traefik https redirection configuration, sets `erpnext.local.no-entrypoint` for local setup.
@@ -77,7 +77,7 @@ Make sure to replace `<project-name>` with the desired name you wish to set for

 Notes:

-- If it is the first time running and site is being initialized, *it can take multiple minutes for the site to be up*. Monitor `site-creator` container logs to check progress. Use command `docker logs <project-name>_site-creator_1 -f`
+- If it is the first time running and site is being initialized, _it can take multiple minutes for the site to be up_. Monitor `site-creator` container logs to check progress. Use command `docker logs <project-name>_site-creator_1 -f`
 - After the site is ready the username is `Administrator` and the password is `$ADMIN_PASSWORD`
 - The local deployment is for testing and REST API development purpose only
 - A complete development environment is available [here](../development)
@@ -86,32 +86,32 @@ Notes:

 The docker-compose file contains following services:

-* traefik: manages letsencrypt
-  * volume: cert-vol
-* redis-cache: cache store
-  * volume: redis-cache-vol
-* redis-queue: used by workers
-  * volume: redis-queue-vol
-* redis-socketio: used by socketio service
-  * volume: redis-socketio-vol
-* mariadb: main database
-  * volume: mariadb-vol
-* erpnext-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
-  * volume: assets-vol and sites-vol
-* erpnext-python: main application code
-  * volume: sites-vol and sites-vol
-* frappe-socketio: enables realtime communication to the user interface through websockets
-  * volume: sites-vol
-* erpnext-worker-default: background runner
-  * volume: sites-vol
-* erpnext-worker-short: background runner for short-running jobs
-  * volume: sites-vol
-* erpnext-worker-long: background runner for long-running jobs
-  * volume: sites-vol
-* erpnext-schedule
-  * volume: sites-vol
-* site-creator: run once container to create new site.
-  * volume: sites-vol
+- traefik: manages letsencrypt
+  - volume: cert-vol
+- redis-cache: cache store
+  - volume: redis-cache-vol
+- redis-queue: used by workers
+  - volume: redis-queue-vol
+- redis-socketio: used by socketio service
+  - volume: redis-socketio-vol
+- mariadb: main database
+  - volume: mariadb-vol
+- erpnext-nginx: serves static assets and proxies web request to the appropriate container, allowing to offer all services on the same port.
+  - volume: assets-vol and sites-vol
+- erpnext-python: main application code
+  - volume: sites-vol and sites-vol
+- frappe-socketio: enables realtime communication to the user interface through websockets
+  - volume: sites-vol
+- erpnext-worker-default: background runner
+  - volume: sites-vol
+- erpnext-worker-short: background runner for short-running jobs
+  - volume: sites-vol
+- erpnext-worker-long: background runner for long-running jobs
+  - volume: sites-vol
+- erpnext-schedule
+  - volume: sites-vol
+- site-creator: run once container to create new site.
+  - volume: sites-vol

 ## Updating and Migrating Sites
@@ -72,7 +72,7 @@ Notes:

 ## Add sites to proxy

-Change `SITES` variable to the list of sites created encapsulated in backtick and separated by comma with no space. e.g. ``SITES=`site1.example.com`,`site2.example.com` ``.
+Change `SITES` variable to the list of sites created encapsulated in backtick and separated by comma with no space. e.g. `` SITES=`site1.example.com`,`site2.example.com` ``.

 Reload variables with following command.

@@ -168,16 +168,16 @@ Note:
 - /home/frappe/backups
   - site1.domain.com
     - 20200420_162000
-      - 20200420_162000-site1_domain_com-*
+      - 20200420_162000-site1_domain_com-\*
   - site2.domain.com
     - 20200420_162000
-      - 20200420_162000-site2_domain_com-*
+      - 20200420_162000-site2_domain_com-\*

 ## Edit configs

 Editing config manually might be required in some cases,
 one such case is to use Amazon RDS (or any other DBaaS).
-For full instructions, refer to the [wiki](https://github.com/frappe/frappe/wiki/Using-Frappe-with-Amazon-RDS-(or-any-other-DBaaS)). Common question can be found in Issues and on forum.
+For full instructions, refer to the [wiki](<https://github.com/frappe/frappe/wiki/Using-Frappe-with-Amazon-RDS-(or-any-other-DBaaS)>). Common question can be found in Issues and on forum.

 `common_site_config.json` or `site_config.json` from `sites-vol` volume has to be edited using following command:

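Manual edits such as pointing a bench at RDS come down to rewriting keys in these JSON files. A sketch that follows the `indent=1, sort_keys=True` convention used by `save_config` in this repository (the RDS hostname is a made-up example):

```python
import json
import os
import tempfile

config_path = os.path.join(tempfile.mkdtemp(), "common_site_config.json")
with open(config_path, "w") as f:
    json.dump({"db_host": "mariadb", "redis_cache": "redis://redis-cache:6379"}, f)

# Point the bench at an external database host (hypothetical RDS endpoint).
with open(config_path) as f:
    config = json.load(f)
config["db_host"] = "mydb.example.us-east-1.rds.amazonaws.com"
with open(config_path, "w") as f:
    json.dump(config, f, indent=1, sort_keys=True)

with open(config_path) as f:
    print(f.read())
```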
@@ -231,7 +231,6 @@ Notes:
 - Use it to install/uninstall custom apps, add system manager user, etc.
 - To run the command as non root user add the command option `--user frappe`.
-

 ## Delete/Drop Site

 #### MariaDB Site
@@ -1,4 +1,4 @@
-version: '3'
+version: "3"

 services:
   redis-cache:
@@ -1,4 +1,4 @@
-version: '3'
+version: "3"

 services:
   erpnext-nginx:
@@ -1,4 +1,4 @@
-version: '3'
+version: "3"

 services:
   frappe-nginx:
@@ -1,4 +1,4 @@
-version: '3'
+version: "3"

 networks:
   default: