From 55641c07deedcb7b2678e547b2a02227ce855830 Mon Sep 17 00:00:00 2001
From: Davide Bortolami
Date: Tue, 24 Mar 2020 15:00:46 +0000
Subject: [PATCH 1/9] Add contributing.md by github standards

---
 CONTRIBUTING.md | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)
 create mode 100644 CONTRIBUTING.md

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 00000000..1174e93c
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,25 @@
+# Contribution Guidelines
+## Branches
+
+* *master*: images on the master branch are built monthly.
+* *develop*: images on this branch are when commits are pushed.
+
+# Pull Requests
+
+Please send all pull request exclusively to the *develop* branch.
+When the PR are merged, the merge will trigger the image build automatically.
+
+Please test all PR as extensively as you can, considering that the software can be run in different modes:
+* with docker-compose for production
+* with or without Nginx proxy
+* with VScode for testing environments
+
+Every once in a while (or before monthly release) develop will be merged into master.
+
+## Reducing the number of branching and builds :evergreen_tree: :evergreen_tree: :evergreen_tree:
+Please be considerate when pushing commits and opening PR for multiple branches, as the process of building images (triggered on push and PR branch push) uses energy and contributes to global warming.
+
+# Documentation
+
+You should place README.md(s) in the relevant directories, explaining what the software in that particular directory does.
+

From d64662d7da534bc2a58852c2b9e45955dfef9a85 Mon Sep 17 00:00:00 2001
From: Davide Bortolami
Date: Tue, 24 Mar 2020 17:01:33 +0000
Subject: [PATCH 2/9] fixed

---
 CONTRIBUTING.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 1174e93c..d9b4633c 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,11 +2,11 @@
 ## Branches
 
 * *master*: images on the master branch are built monthly.
-* *develop*: images on this branch are when commits are pushed.
+* *develop*: images on this branch are built daily.
 
 # Pull Requests
 
-Please send all pull request exclusively to the *develop* branch.
+Please **send all pull request exclusively to the *develop*** branch.
 When the PR are merged, the merge will trigger the image build automatically.
 
 Please test all PR as extensively as you can, considering that the software can be run in different modes:

From ebd9d877d92b622916ae7d3871ce249b88868f2c Mon Sep 17 00:00:00 2001
From: Davide Bortolami
Date: Tue, 24 Mar 2020 17:09:09 +0000
Subject: [PATCH 3/9] Create stale.yml

---
 .github/workflows/stale.yml | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)
 create mode 100644 .github/workflows/stale.yml

diff --git a/.github/workflows/stale.yml b/.github/workflows/stale.yml
new file mode 100644
index 00000000..7bbc0505
--- /dev/null
+++ b/.github/workflows/stale.yml
@@ -0,0 +1,19 @@
+name: Mark stale issues and pull requests
+
+on:
+  schedule:
+  - cron: "0 0 * * *"
+
+jobs:
+  stale:
+
+    runs-on: ubuntu-latest
+
+    steps:
+    - uses: actions/stale@v1
+      with:
+        repo-token: ${{ secrets.GITHUB_TOKEN }}
+        stale-issue-message: 'Stale issue message'
+        stale-pr-message: 'Stale pull request message'
+        stale-issue-label: 'no-issue-activity'
+        stale-pr-label: 'no-pr-activity'

From dbd172efc5d90410d1c1c04fa8638e02a74f84f7 Mon Sep 17 00:00:00 2001
From: Davide Bortolami
Date: Tue, 24 Mar 2020 17:15:27 +0000
Subject: [PATCH 4/9] Create greetings.yml

Adds a greeting when a first-time user interacts with the repo
---
 greetings.yml | 15 +++++++++++++++
 1 file changed, 15 insertions(+)
 create mode 100644 greetings.yml

diff --git a/greetings.yml b/greetings.yml
new file mode 100644
index 00000000..669221c0
--- /dev/null
+++ b/greetings.yml
@@ -0,0 +1,15 @@
+name: Greetings
+
+on: [pull_request, issues]
+
+jobs:
+  greeting:
+    runs-on: ubuntu-latest
+    steps:
+    - uses: actions/first-interaction@v1
+      with:
+        repo-token: ${{ secrets.GITHUB_TOKEN }}
+        issue-message: |
+          Hello! We're very happy to see your first issue. If your issue is about a problem, go back and check that you have copy-pasted all the debug logs you can so we can help you as fast as possible!
+        pr-message: |
+          Hello! Thank you for this PR. Since this is your first PR, please make sure you have described the improvements and your code is well documented.

From 5f187c4e3f73430cdff661d585d9112959391268 Mon Sep 17 00:00:00 2001
From: Revant Nandgaonkar
Date: Wed, 25 Mar 2020 06:35:49 +0530
Subject: [PATCH 5/9] feat: worker command to push backups to cloud

---
 README.md                                |  20 +++
 build/common/commands/push_backup.py     | 161 +++++++++++++++++++++++
 build/common/worker/docker-entrypoint.sh |   6 +
 3 files changed, 187 insertions(+)
 create mode 100644 build/common/commands/push_backup.py

diff --git a/README.md b/README.md
index 2b688b1f..1991c39d 100644
--- a/README.md
+++ b/README.md
@@ -224,6 +224,26 @@ docker exec -it \
 
 The backup will be available in the `sites` mounted volume.
 
+#### Push backup to s3 compatible storage
+
+Environment Variables
+
+- `BUCKET_NAME`, Required to set bucket created on S3 compatible storage.
+- `ACCESS_KEY_ID`, Required to set access key.
+- `SECRET_ACCESS_KEY`, Required to set secret access key.
+- `ENDPOINT_URL`, Required to set URL of S3 compatible storage.
+- `BUCKET_DIR`, Required to set directory in bucket where sites from this deployment will be backed up.
+- `BACKUP_LIMIT`, Optionally set this to limit number of backups in bucket directory. Defaults to 3.
+
+```sh
+docker exec -it \
+  -e "BUCKET_NAME=backups" \
+  -e "ACCESS_KEY_ID=access_id_from_provider" \
+  -e "SECRET_ACCESS_KEY=secret_access_from_provider" \
+  -e "ENDPOINT_URL=https://region.storage-provider.com" \
+  -e "BUCKET_DIR=frappe-bench-v12" \
+```
+
 #### Updating and Migrating Sites
 
 Switch to the root of the `frappe_docker` directory before running the following commands:

diff --git a/build/common/commands/push_backup.py b/build/common/commands/push_backup.py
new file mode 100644
index 00000000..c02e510b
--- /dev/null
+++ b/build/common/commands/push_backup.py
@@ -0,0 +1,161 @@
+import os
+import time
+import boto3
+
+import datetime
+from glob import glob
+from frappe.utils import get_sites
+
+def get_file_ext():
+    return {
+        "database": "-database.sql.gz",
+        "private_files": "-private-files.tar",
+        "public_files": "-files.tar"
+    }
+
+def get_backup_details(sitename):
+    backup_details = dict()
+    file_ext = get_file_ext()
+
+    # add trailing slash https://stackoverflow.com/a/15010678
+    site_backup_path = os.path.join(os.getcwd(), sitename, "private", "backups", "")
+
+    if os.path.exists(site_backup_path):
+        for filetype, ext in file_ext.items():
+            site_slug = sitename.replace('.', '_')
+            pattern = site_backup_path + '*-' + site_slug + ext
+            backup_files = list(filter(os.path.isfile, glob(pattern)))
+
+            if len(backup_files) > 0:
+                backup_files.sort(key=lambda file: os.stat(os.path.join(site_backup_path, file)).st_ctime)
+                backup_date = datetime.datetime.strptime(time.ctime(os.path.getmtime(backup_files[0])), "%a %b %d %H:%M:%S %Y")
+                backup_details[filetype] = {
+                    "sitename": sitename,
+                    "file_size_in_bytes": os.stat(backup_files[-1]).st_size,
+                    "file_path": os.path.abspath(backup_files[-1]),
+                    "filename": os.path.basename(backup_files[-1]),
+                    "backup_date": backup_date.date().strftime("%Y-%m-%d %H:%M:%S")
+                }
+
+    return backup_details
+
+def get_s3_config():
+    check_environment_variables()
+    bucket = os.environ.get('BUCKET_NAME')
+
+    conn = boto3.client(
+        's3',
+        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
+        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
+        endpoint_url=os.environ.get('ENDPOINT_URL')
+    )
+
+    return conn, bucket
+
+def check_environment_variables():
+    if not 'BUCKET_NAME' in os.environ:
+        print('Variable BUCKET_NAME not set')
+        exit(1)
+
+    if not 'ACCESS_KEY_ID' in os.environ:
+        print('Variable ACCESS_KEY_ID not set')
+        exit(1)
+
+    if not 'SECRET_ACCESS_KEY' in os.environ:
+        print('Variable SECRET_ACCESS_KEY not set')
+        exit(1)
+
+    if not 'ENDPOINT_URL' in os.environ:
+        print('Variable ENDPOINT_URL not set')
+        exit(1)
+
+    if not 'BUCKET_DIR' in os.environ:
+        print('Variable BUCKET_DIR not set')
+        exit(1)
+
+def upload_file_to_s3(filename, folder, conn, bucket):
+
+    destpath = os.path.join(folder, os.path.basename(filename))
+    try:
+        print("Uploading file:", filename)
+        conn.upload_file(filename, bucket, destpath)
+
+    except Exception as e:
+        print("Error uploading: %s" % (e))
+        exit(1)
+
+def delete_old_backups(limit, bucket, folder):
+    all_backups = list()
+    backup_limit = int(limit)
+    check_environment_variables()
+    bucket_dir = os.environ.get('BUCKET_DIR')
+
+    s3 = boto3.resource(
+        's3',
+        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
+        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
+        endpoint_url=os.environ.get('ENDPOINT_URL')
+    )
+
+    bucket = s3.Bucket(bucket)
+    objects = bucket.meta.client.list_objects_v2(
+        Bucket=bucket.name,
+        Delimiter='/')
+
+    if objects:
+        for obj in objects.get('CommonPrefixes'):
+            if obj.get('Prefix') in folder:
+                for backup_obj in bucket.objects.filter(Prefix=obj.get('Prefix')):
+                    try:
+                        backup_dir = backup_obj.key.split('/')[1]
+                        all_backups.append(backup_dir)
+                    except expression as error:
+                        print(error)
+                        exit(1)
+
+    all_backups = set(sorted(all_backups))
+    if len(all_backups) > backup_limit:
+        latest_backup = sorted(all_backups)[0] if len(all_backups) > 0 else None
+        print("Deleting Backup: {0}".format(latest_backup))
+        for obj in bucket.objects.filter(Prefix=bucket_dir + '/' + latest_backup):
+            # delete all keys that are inside the latest_backup
+            if bucket_dir in obj.key:
+                try:
+                    delete_directory = obj.key.split('/')[1]
+                    print('Deleting ' + obj.key)
+                    s3.Object(bucket.name, obj.key).delete()
+                except expression as error:
+                    print(error)
+                    exit(1)
+
+def main():
+    details = dict()
+    sites = get_sites()
+    conn, bucket = get_s3_config()
+
+    for site in sites:
+        details = get_backup_details(site)
+        db_file = details.get('database', {}).get('file_path')
+        folder = None
+        if db_file:
+            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(db_file)[:15] + '/'
+            upload_file_to_s3(db_file, folder, conn, bucket)
+
+        public_files = details.get('public_files', {}).get('file_path')
+        if public_files:
+            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(public_files)[:15] + '/'
+            upload_file_to_s3(public_files, folder, conn, bucket)
+
+        private_files = details.get('private_files', {}).get('file_path')
+        if private_files:
+            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(private_files)[:15] + '/'
+            upload_file_to_s3(private_files, folder, conn, bucket)
+
+        if folder:
+            delete_old_backups(os.environ.get('BACKUP_LIMIT', '3'), bucket, folder)
+
+    print('push-backup complete')
+    exit(0)
+
+if __name__ == "__main__":
+    main()

diff --git a/build/common/worker/docker-entrypoint.sh b/build/common/worker/docker-entrypoint.sh
index 26cb9ef1..31085d6f 100755
--- a/build/common/worker/docker-entrypoint.sh
+++ b/build/common/worker/docker-entrypoint.sh
@@ -175,6 +175,12 @@ elif [ "$1" = 'console' ]; then
     python /home/frappe/frappe-bench/commands/console.py "$2"
   fi
 
+elif [ "$1" = 'push-backup' ]; then
+
+  su frappe -c ". /home/frappe/frappe-bench/env/bin/activate \
+    && python /home/frappe/frappe-bench/commands/push_backup.py"
+  exit
+
 else
   exec su frappe -c "$@"

From 754ba8a91a83253fb7470dade73e53584aaa4dd7 Mon Sep 17 00:00:00 2001
From: Revant Nandgaonkar
Date: Wed, 25 Mar 2020 22:25:24 +0530
Subject: [PATCH 6/9] feat: restrict backups to backup limit for each site

---
 README.md                            |  7 +++
 build/common/commands/push_backup.py | 69 ++++++++++++++++++----------
 2 files changed, 51 insertions(+), 25 deletions(-)

diff --git a/README.md b/README.md
index 1991c39d..a8b406fd 100644
--- a/README.md
+++ b/README.md
@@ -244,6 +244,13 @@ docker exec -it \
   -e "BUCKET_DIR=frappe-bench-v12" \
 ```
 
+Note:
+
+- The above example will back up files in the bucket called `backups` at location `frappe-bench-v12/site.name.com/DATE_TIME/DATE_TIME-site_name_com-{filetype}.{extension}`.
+- Example DATE_TIME: 20200325_042020
+- Example filetype: database, files or private-files
+- Example extension: sql.gz or tar
+
 #### Updating and Migrating Sites
 
 Switch to the root of the `frappe_docker` directory before running the following commands:

diff --git a/build/common/commands/push_backup.py b/build/common/commands/push_backup.py
index c02e510b..0e291060 100644
--- a/build/common/commands/push_backup.py
+++ b/build/common/commands/push_backup.py
@@ -6,6 +6,8 @@ import datetime
 from glob import glob
 from frappe.utils import get_sites
 
+DATE_FORMAT = "%Y%m%d_%H%M%S"
+
 def get_file_ext():
     return {
         "database": "-database.sql.gz",
@@ -84,8 +86,9 @@ def upload_file_to_s3(filename, folder, conn, bucket):
         print("Error uploading: %s" % (e))
         exit(1)
 
-def delete_old_backups(limit, bucket, folder):
+def delete_old_backups(limit, bucket, site_name):
     all_backups = list()
+    all_backup_dates = list()
     backup_limit = int(limit)
     check_environment_variables()
     bucket_dir = os.environ.get('BUCKET_DIR')
@@ -104,29 +107,46 @@ def delete_old_backups(limit, bucket, site_name):
 
     if objects:
         for obj in objects.get('CommonPrefixes'):
-            if obj.get('Prefix') in folder:
+            if obj.get('Prefix') == bucket_dir + '/':
                 for backup_obj in bucket.objects.filter(Prefix=obj.get('Prefix')):
                     try:
-                        backup_dir = backup_obj.key.split('/')[1]
-                        all_backups.append(backup_dir)
-                    except expression as error:
+                        # backup_obj.key is bucket_dir/site/date_time/backupfile.extension
+                        bucket_dir, site_slug, date_time, backupfile = backup_obj.key.split('/')
+                        date_time_object = datetime.datetime.strptime(
+                            date_time, DATE_FORMAT
+                        )
+
+                        if site_name in backup_obj.key:
+                            all_backup_dates.append(date_time_object)
+                            all_backups.append(backup_obj.key)
+                    except IndexError as error:
                         print(error)
                         exit(1)
 
-    all_backups = set(sorted(all_backups))
-    if len(all_backups) > backup_limit:
-        latest_backup = sorted(all_backups)[0] if len(all_backups) > 0 else None
-        print("Deleting Backup: {0}".format(latest_backup))
-        for obj in bucket.objects.filter(Prefix=bucket_dir + '/' + latest_backup):
-            # delete all keys that are inside the latest_backup
-            if bucket_dir in obj.key:
-                try:
-                    delete_directory = obj.key.split('/')[1]
-                    print('Deleting ' + obj.key)
-                    s3.Object(bucket.name, obj.key).delete()
-                except expression as error:
-                    print(error)
-                    exit(1)
+    oldest_backup_date = min(all_backup_dates)
+
+    if len(all_backups) / 3 > backup_limit:
+        oldest_backup = None
+        for backup in all_backups:
+            try:
+                # backup is bucket_dir/site/date_time/backupfile.extension
+                backup_dir, site_slug, backup_dt_string, filename = backup.split('/')
+                backup_datetime = datetime.datetime.strptime(
+                    backup_dt_string, DATE_FORMAT
+                )
+                if backup_datetime == oldest_backup_date:
+                    oldest_backup = backup
+
+            except IndexError as error:
+                print(error)
+                exit(1)
+
+        if oldest_backup:
+            for obj in bucket.objects.filter(Prefix=oldest_backup):
+                # delete all keys that are inside the oldest_backup
+                if bucket_dir in obj.key:
+                    print('Deleting ' + obj.key)
+                    s3.Object(bucket.name, obj.key).delete()
 
@@ -136,23 +156,22 @@ def main():
     details = dict()
     sites = get_sites()
     conn, bucket = get_s3_config()
 
     for site in sites:
         details = get_backup_details(site)
         db_file = details.get('database', {}).get('file_path')
-        folder = None
+        folder = os.environ.get('BUCKET_DIR') + '/' + site + '/'
         if db_file:
-            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(db_file)[:15] + '/'
+            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(db_file)[:15] + '/'
             upload_file_to_s3(db_file, folder, conn, bucket)
 
         public_files = details.get('public_files', {}).get('file_path')
         if public_files:
-            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(public_files)[:15] + '/'
+            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(public_files)[:15] + '/'
             upload_file_to_s3(public_files, folder, conn, bucket)
 
         private_files = details.get('private_files', {}).get('file_path')
         if private_files:
-            folder = os.environ.get('BUCKET_DIR') + '/' + os.path.basename(private_files)[:15] + '/'
+            folder = os.environ.get('BUCKET_DIR') + '/' + site + '/' + os.path.basename(private_files)[:15] + '/'
             upload_file_to_s3(private_files, folder, conn, bucket)
 
-        if folder:
-            delete_old_backups(os.environ.get('BACKUP_LIMIT', '3'), bucket, folder)
+        delete_old_backups(os.environ.get('BACKUP_LIMIT', '3'), bucket, site)
 
     print('push-backup complete')
     exit(0)

From 3a6f7e1934892697900be37961427f258f6bb9ba Mon Sep 17 00:00:00 2001
From: Revant Nandgaonkar
Date: Fri, 27 Mar 2020 00:28:50 +0530
Subject: [PATCH 7/9] feat: restore backups from volume or cloud

---
 build/common/commands/restore_backup.py  | 175 +++++++++++++++++++++++
 build/common/worker/docker-entrypoint.sh |   6 +
 2 files changed, 181 insertions(+)
 create mode 100644 build/common/commands/restore_backup.py

diff --git a/build/common/commands/restore_backup.py b/build/common/commands/restore_backup.py
new file mode 100644
index 00000000..f7e14b4a
--- /dev/null
+++ b/build/common/commands/restore_backup.py
@@ -0,0 +1,175 @@
+import os
+import datetime
+import tarfile
+import hashlib
+import frappe
+import boto3
+
+from push_backup import DATE_FORMAT, check_environment_variables
+from frappe.utils import get_sites, random_string
+from frappe.commands.site import _new_site
+from frappe.installer import make_conf, get_conf_params
+from check_connection import get_site_config, get_config
+
+def list_directories(path):
+    directories = []
+    for name in os.listdir(path):
+        if os.path.isdir(os.path.join(path, name)):
+            directories.append(name)
+    return directories
+
+def get_backup_dir():
+    return os.path.join(
+        os.path.expanduser('~'),
+        'backups'
+    )
+
+def decompress_db(files_base, site):
+    database_file = files_base + '-database.sql.gz'
+    config = get_config()
+    site_config = get_site_config(site)
+    db_root_user = os.environ.get('DB_ROOT_USER', 'root')
+    command = 'gunzip -c {database_file} > {database_extract}'.format(
+        database_file=database_file,
+        database_extract=database_file.replace('.gz','')
+    )
+
+    print('Extract Database GZip for site {}'.format(site))
+    os.system(command)
+
+def restore_database(files_base, site):
+    db_root_password = os.environ.get('MYSQL_ROOT_PASSWORD')
+    if not db_root_password:
+        print('Variable MYSQL_ROOT_PASSWORD not set')
+        exit(1)
+
+    db_root_user = os.environ.get("DB_ROOT_USER", 'root')
+    # restore database
+
+    database_file = files_base + '-database.sql.gz'
+    decompress_db(files_base, site)
+    config = get_config()
+    site_config = get_site_config(site)
+
+    # mysql command prefix
+    mysql_command = 'mysql -u{db_root_user} -h{db_host} -p{db_password} -e '.format(
+        db_root_user=db_root_user,
+        db_host=config.get('db_host'),
+        db_password=db_root_password
+    )
+
+    # create db
+    create_database = mysql_command + "\"CREATE DATABASE IF NOT EXISTS \`{db_name}\`;\"".format(
+        db_name=site_config.get('db_name')
+    )
+    os.system(create_database)
+
+    # create user
+    create_user = mysql_command + "\"CREATE USER IF NOT EXISTS \'{db_name}\'@\'%\' IDENTIFIED BY \'{db_password}\'; FLUSH PRIVILEGES;\"".format(
+        db_name=site_config.get('db_name'),
+        db_password=site_config.get('db_password')
+    )
+    os.system(create_user)
+
+    # grant db privileges to user
+    grant_privileges = mysql_command + "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
+        db_name=site_config.get('db_name')
+    )
+    os.system(grant_privileges)
+
+    command = "mysql -u{db_root_user} -h{db_host} -p{db_password} '{db_name}' < {database_file}".format(
+        db_root_user=db_root_user,
+        db_host=config.get('db_host'),
+        db_password=db_root_password,
+        db_name=site_config.get('db_name'),
+        database_file=database_file.replace('.gz',''),
+    )
+
+    print('Restoring database for site: {}'.format(site))
+    os.system(command)
+
+def restore_files(files_base):
+    public_files = files_base + '-files.tar'
+    # extract tar
+    public_tar = tarfile.open(public_files)
+    print('Extracting {}'.format(public_files))
+    public_tar.extractall()
+
+def restore_private_files(files_base):
+    private_files = files_base + '-private-files.tar'
+    private_tar = tarfile.open(private_files)
+    print('Extracting {}'.format(private_files))
+    private_tar.extractall()
+
+def pull_backup_from_s3():
+    check_environment_variables()
+
+    # https://stackoverflow.com/a/54672690
+    s3 = boto3.resource(
+        's3',
+        aws_access_key_id=os.environ.get('ACCESS_KEY_ID'),
+        aws_secret_access_key=os.environ.get('SECRET_ACCESS_KEY'),
+        endpoint_url=os.environ.get('ENDPOINT_URL')
+    )
+
+    bucket_dir = os.environ.get('BUCKET_DIR')
+    bucket_name = os.environ.get('BUCKET_NAME')
+    bucket = s3.Bucket(bucket_name)
+
+    # Change directory to /home/frappe/backups
+    os.chdir(get_backup_dir())
+
+    for obj in bucket.objects.filter(Prefix = bucket_dir):
+        backup_file = obj.key.replace(os.path.join(bucket_dir,''),'')
+        if not os.path.exists(os.path.dirname(backup_file)):
+            os.makedirs(os.path.dirname(backup_file))
+        print('Downloading {}'.format(backup_file))
+        bucket.download_file(obj.key, backup_file)
+
+    os.chdir(os.path.join(os.path.expanduser('~'), 'frappe-bench', 'sites'))
+
+def main():
+    backup_dir = get_backup_dir()
+
+    if len(list_directories(backup_dir)) == 0:
+        pull_backup_from_s3()
+
+    for site in list_directories(backup_dir):
+        site_slug = site.replace('.','_')
+        backups = [datetime.datetime.strptime(backup, DATE_FORMAT) for backup in list_directories(os.path.join(backup_dir,site))]
+        latest_backup = max(backups).strftime(DATE_FORMAT)
+        files_base = os.path.join(backup_dir, site, latest_backup, '')
+        files_base += latest_backup + '-' + site_slug
+        if site in get_sites():
+            restore_database(files_base, site)
+            restore_private_files(files_base)
+            restore_files(files_base)
+        else:
+            mariadb_root_password = os.environ.get('MYSQL_ROOT_PASSWORD')
+            if not mariadb_root_password:
+                print('Variable MYSQL_ROOT_PASSWORD not set')
+                exit(1)
+            mariadb_root_username = os.environ.get('DB_ROOT_USER', 'root')
+            database_file = files_base + '-database.sql.gz'
+
+            site_config = get_conf_params(
+                db_name='_' + hashlib.sha1(site.encode()).hexdigest()[:16],
+                db_password=random_string(16)
+            )
+
+            frappe.local.site = site
+            frappe.local.sites_path = os.getcwd()
+            frappe.local.site_path = os.getcwd() + '/' + site
+            make_conf(
+                db_name=site_config.get('db_name'),
+                db_password=site_config.get('db_name'),
+            )
+
+            restore_database(files_base, site)
+            restore_private_files(files_base)
+            restore_files(files_base)
+
+    exit(0)
+
+if __name__ == "__main__":
+    main()

diff --git a/build/common/worker/docker-entrypoint.sh b/build/common/worker/docker-entrypoint.sh
index 31085d6f..d07e3cb6 100755
--- a/build/common/worker/docker-entrypoint.sh
+++ b/build/common/worker/docker-entrypoint.sh
@@ -181,6 +181,12 @@ elif [ "$1" = 'push-backup' ]; then
     && python /home/frappe/frappe-bench/commands/push_backup.py"
   exit
 
+elif [ "$1" = 'restore-backup' ]; then
+
+  su frappe -c ". /home/frappe/frappe-bench/env/bin/activate \
+    && python /home/frappe/frappe-bench/commands/restore_backup.py"
+  exit
+
 else
   exec su frappe -c "$@"

From 4e7b7690ee0b8de2b102e433035e4e0964e86475 Mon Sep 17 00:00:00 2001
From: Revant Nandgaonkar
Date: Fri, 27 Mar 2020 16:07:12 +0530
Subject: [PATCH 8/9] fix: backup and restore

new command FORCE=1 error fixed
only push backups if exists
prepare and process db restore
---
 build/common/commands/new.py            | 21 ++++++---------------
 build/common/commands/push_backup.py    |  4 +++-
 build/common/commands/restore_backup.py | 21 +++++++++++++++++----
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/build/common/commands/new.py b/build/common/commands/new.py
index b603458c..5fbe2cb9 100644
--- a/build/common/commands/new.py
+++ b/build/common/commands/new.py
@@ -29,36 +29,27 @@ def main():
 
     site_config = get_site_config(site_name)
 
-    # update User's host to '%' required to connect from any container
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
+    mysql_command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
         db_host=config.get('db_host'),
         mariadb_root_username=mariadb_root_username,
         mariadb_root_password=mariadb_root_password
     )
-    command += "\"UPDATE mysql.user SET Host = '%' where User = '{db_name}'; FLUSH PRIVILEGES;\"".format(
+
+    # update User's host to '%' required to connect from any container
+    command = mysql_command + "\"UPDATE mysql.user SET Host = '%' where User = '{db_name}'; FLUSH PRIVILEGES;\"".format(
         db_name=site_config.get('db_name')
     )
     os.system(command)
 
     # Set db password
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
-        db_host=config.get('db_host'),
-        mariadb_root_username=mariadb_root_username,
-        mariadb_root_password=mariadb_root_password
-    )
-    command += "\"SET PASSWORD FOR '{db_name}'@'%' = PASSWORD('{db_password}'); FLUSH PRIVILEGES;\"".format(
+    command = mysql_command + "\"UPDATE mysql.user SET authentication_string = PASSWORD('{db_password}') WHERE User = \'{db_name}\' AND Host = \'%\';\"".format(
         db_name=site_config.get('db_name'),
         db_password=site_config.get('db_password')
     )
     os.system(command)
 
     # Grant permission to database
-    command = 'mysql -h{db_host} -u{mariadb_root_username} -p{mariadb_root_password} -e '.format(
-        db_host=config.get('db_host'),
-        mariadb_root_username=mariadb_root_username,
-        mariadb_root_password=mariadb_root_password
-    )
-    command += "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
+    command = mysql_command + "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
         db_name=site_config.get('db_name')
     )
     os.system(command)

diff --git a/build/common/commands/push_backup.py b/build/common/commands/push_backup.py
index 0e291060..e795b3ef 100644
--- a/build/common/commands/push_backup.py
+++ b/build/common/commands/push_backup.py
@@ -92,6 +92,7 @@ def delete_old_backups(limit, bucket, site_name):
     backup_limit = int(limit)
     check_environment_variables()
     bucket_dir = os.environ.get('BUCKET_DIR')
+    oldest_backup_date = None
 
     s3 = boto3.resource(
         's3',
@@ -123,7 +124,8 @@ def delete_old_backups(limit, bucket, site_name):
                         print(error)
                         exit(1)
 
-    oldest_backup_date = min(all_backup_dates)
+    if len(all_backup_dates) > 0:
+        oldest_backup_date = min(all_backup_dates)
 
     if len(all_backups) / 3 > backup_limit:
         oldest_backup = None

diff --git a/build/common/commands/restore_backup.py b/build/common/commands/restore_backup.py
index f7e14b4a..82854e85 100644
--- a/build/common/commands/restore_backup.py
+++ b/build/common/commands/restore_backup.py
@@ -8,7 +8,7 @@ import boto3
 from push_backup import DATE_FORMAT, check_environment_variables
 from frappe.utils import get_sites, random_string
 from frappe.commands.site import _new_site
-from frappe.installer import make_conf, get_conf_params
+from frappe.installer import make_conf, get_conf_params, make_site_dirs
 from check_connection import get_site_config, get_config
 
 def list_directories(path):
@@ -44,8 +44,8 @@ def restore_database(files_base, site):
         exit(1)
 
     db_root_user = os.environ.get("DB_ROOT_USER", 'root')
-    # restore database
 
+    # restore database
     database_file = files_base + '-database.sql.gz'
     decompress_db(files_base, site)
     config = get_config()
@@ -58,6 +58,12 @@ def restore_database(files_base, site):
         db_password=db_root_password
     )
 
+    # drop db if exists for clean restore
+    drop_database = mysql_command + "\"DROP DATABASE IF EXISTS \`{db_name}\`;\"".format(
+        db_name=site_config.get('db_name')
+    )
+    os.system(drop_database)
+
     # create db
     create_database = mysql_command + "\"CREATE DATABASE IF NOT EXISTS \`{db_name}\`;\"".format(
         db_name=site_config.get('db_name')
@@ -71,6 +77,13 @@ def restore_database(files_base, site):
     )
     os.system(create_user)
 
+    # create user password
+    set_user_password = mysql_command + "\"UPDATE mysql.user SET authentication_string = PASSWORD('{db_password}') WHERE User = \'{db_name}\' AND Host = \'%\';\"".format(
+        db_name=site_config.get('db_name'),
+        db_password=site_config.get('db_password')
+    )
+    os.system(set_user_password)
+
     # grant db privileges to user
     grant_privileges = mysql_command + "\"GRANT ALL PRIVILEGES ON \`{db_name}\`.* TO '{db_name}'@'%'; FLUSH PRIVILEGES;\"".format(
         db_name=site_config.get('db_name')
@@ -162,9 +175,9 @@ def main():
             frappe.local.site_path = os.getcwd() + '/' + site
             make_conf(
                 db_name=site_config.get('db_name'),
-                db_password=site_config.get('db_name'),
+                db_password=site_config.get('db_password'),
             )
-
+            make_site_dirs()
             restore_database(files_base, site)
             restore_private_files(files_base)
             restore_files(files_base)

From 2422fbad26a63c7c34d90845db006cac73ef1ddc Mon Sep 17 00:00:00 2001
From: Revant Nandgaonkar
Date: Fri, 27 Mar 2020 16:41:32 +0530
Subject: [PATCH 9/9] fix: backup and restore

create backup dir in worker images
set ownership and mount volume for backups
update readme about restore backup
---
 README.md                          | 43 +++++++++++++++++++++++++++++-
 build/frappe-worker/Dockerfile     |  6 ++---
 build/frappe-worker/v11.Dockerfile |  6 ++---
 build/frappe-worker/v12.Dockerfile |  6 ++---
 4 files changed, 51 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index a8b406fd..2a9e26d0 100644
--- a/README.md
+++ b/README.md
@@ -236,12 +236,15 @@ Environment Variables
 - `BACKUP_LIMIT`, Optionally set this to limit number of backups in bucket directory. Defaults to 3.
 
 ```sh
-docker exec -it \
+docker run \
   -e "BUCKET_NAME=backups" \
   -e "ACCESS_KEY_ID=access_id_from_provider" \
   -e "SECRET_ACCESS_KEY=secret_access_from_provider" \
   -e "ENDPOINT_URL=https://region.storage-provider.com" \
   -e "BUCKET_DIR=frappe-bench-v12" \
+  -v ./installation/sites:/home/frappe/frappe-bench/sites \
+  --network _default \
+  frappe/frappe-worker:v12 push-backup
 ```
 
@@ -278,6 +281,44 @@ docker exec -it \
     _erpnext-python_1 docker-entrypoint.sh migrate
 ```
 
+#### Restore backups
+
+Environment Variables
+
+- `MYSQL_ROOT_PASSWORD`, Required to restore mariadb backups.
+- `BUCKET_NAME`, Required to set bucket created on S3 compatible storage.
+- `ACCESS_KEY_ID`, Required to set access key.
+- `SECRET_ACCESS_KEY`, Required to set secret access key.
+- `ENDPOINT_URL`, Required to set URL of S3 compatible storage.
+- `BUCKET_DIR`, Required to set directory in bucket where sites from this deployment will be backed up.
+
+```sh
+docker run \
+  -e "MYSQL_ROOT_PASSWORD=admin" \
+  -e "BUCKET_NAME=backups" \
+  -e "ACCESS_KEY_ID=access_id_from_provider" \
+  -e "SECRET_ACCESS_KEY=secret_access_from_provider" \
+  -e "ENDPOINT_URL=https://region.storage-provider.com" \
+  -e "BUCKET_DIR=frappe-bench-v12" \
+  -v ./installation/sites:/home/frappe/frappe-bench/sites \
+  -v ./backups:/home/frappe/backups \
+  --network _default \
+  frappe/frappe-worker:v12 restore-backup
+```
+
+Note:
+
+- Volume must be mounted at location `/home/frappe/backups` for restoring sites
+- If no backup files are found in volume, it will use s3 credentials to pull backups
+- Backup structure for mounted volume or downloaded from s3:
+  - /home/frappe/backups
+    - site1.domain.com
+      - 20200420_162000
+        - 20200420_162000-site1_domain_com-*
+    - site2.domain.com
+      - 20200420_162000
+        - 20200420_162000-site2_domain_com-*
+
 ### Custom apps
 
 To add your own Frappe/ERPNext apps to the image, we'll need to create a custom image with the help of a unique wrapper script

diff --git a/build/frappe-worker/Dockerfile b/build/frappe-worker/Dockerfile
index 00766f57..337f43e8 100644
--- a/build/frappe-worker/Dockerfile
+++ b/build/frappe-worker/Dockerfile
@@ -21,7 +21,7 @@ RUN install_packages \
 RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
 RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb
 
-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups
 
 RUN virtualenv env \
   && . env/bin/activate \
@@ -40,9 +40,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app
 
 WORKDIR /home/frappe/frappe-bench/sites
 
-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups
 
-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]
 
 ENTRYPOINT ["docker-entrypoint.sh"]
 CMD ["start"]

diff --git a/build/frappe-worker/v11.Dockerfile b/build/frappe-worker/v11.Dockerfile
index de61332c..6d9384cb 100644
--- a/build/frappe-worker/v11.Dockerfile
+++ b/build/frappe-worker/v11.Dockerfile
@@ -18,7 +18,7 @@ RUN install_packages \
 RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
 RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb
 
-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups
 
 RUN virtualenv env \
   && . env/bin/activate \
@@ -37,9 +37,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app
 
 WORKDIR /home/frappe/frappe-bench/sites
 
-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups
 
-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]
 
 ENTRYPOINT ["docker-entrypoint.sh"]
 CMD ["start"]

diff --git a/build/frappe-worker/v12.Dockerfile b/build/frappe-worker/v12.Dockerfile
index bfdaa317..f923e26d 100644
--- a/build/frappe-worker/v12.Dockerfile
+++ b/build/frappe-worker/v12.Dockerfile
@@ -21,7 +21,7 @@ RUN install_packages \
 RUN wget https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox_0.12.5-1.stretch_amd64.deb
 RUN dpkg -i wkhtmltox_0.12.5-1.stretch_amd64.deb && rm wkhtmltox_0.12.5-1.stretch_amd64.deb
 
-RUN mkdir -p apps logs commands
+RUN mkdir -p apps logs commands /home/frappe/backups
 
 RUN virtualenv env \
   && . env/bin/activate \
@@ -40,9 +40,9 @@ COPY build/common/worker/install_app.sh /usr/local/bin/install_app
 
 WORKDIR /home/frappe/frappe-bench/sites
 
-RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites
+RUN chown -R frappe:frappe /home/frappe/frappe-bench/sites /home/frappe/backups
 
-VOLUME [ "/home/frappe/frappe-bench/sites" ]
+VOLUME [ "/home/frappe/frappe-bench/sites", "/home/frappe/backups" ]
 
 ENTRYPOINT ["docker-entrypoint.sh"]
 CMD ["start"]
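
Taken together, patches 5 through 8 converge on one object layout and one retention rule: backups live under `BUCKET_DIR/site/DATE_TIME/DATE_TIME-site_slug-...` with `DATE_TIME` in `%Y%m%d_%H%M%S` format, and once a site accumulates more than `BACKUP_LIMIT` timestamped sets the oldest set is deleted. A minimal, self-contained sketch of that selection step follows; the helper name and sample keys are hypothetical (the real `delete_old_backups()` walks live bucket objects with boto3 and deletes one oldest set per run):

```python
from datetime import datetime

DATE_FORMAT = "%Y%m%d_%H%M%S"  # same stamp format as push_backup.py

def backups_to_delete(keys, bucket_dir, site, limit):
    """Given keys shaped bucket_dir/site/date_time/file, return the
    date_time directories to remove so only `limit` newest sets remain."""
    stamps = set()
    for key in keys:
        parts = key.split("/")
        # ignore keys for other sites or unexpected layouts
        if len(parts) == 4 and parts[0] == bucket_dir and parts[1] == site:
            stamps.add(parts[2])
    # sort timestamp directories chronologically, oldest first
    ordered = sorted(stamps, key=lambda s: datetime.strptime(s, DATE_FORMAT))
    excess = len(ordered) - limit
    return ordered[:excess] if excess > 0 else []

keys = [
    "frappe-bench-v12/site1.domain.com/20200325_042020/20200325_042020-site1_domain_com-database.sql.gz",
    "frappe-bench-v12/site1.domain.com/20200326_042020/20200326_042020-site1_domain_com-database.sql.gz",
    "frappe-bench-v12/site1.domain.com/20200327_042020/20200327_042020-site1_domain_com-database.sql.gz",
    "frappe-bench-v12/site1.domain.com/20200328_042020/20200328_042020-site1_domain_com-database.sql.gz",
]
print(backups_to_delete(keys, "frappe-bench-v12", "site1.domain.com", 3))
# prints ['20200325_042020']
```

Keeping the selection pure like this makes the rotation testable without a bucket; the remaining S3 work reduces to deleting every object under each returned `bucket_dir/site/date_time/` prefix.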