=MediaWiki Installation=
*[[MediaWiki on XAMPP]]


==Docker==
*[https://docs.docker.com/engine/installation/linux/docker-ee/ubuntu/ Docker install]
*[https://www.digitalocean.com/community/tutorials/how-to-install-and-use-docker-on-ubuntu-16-04 how-to-install-and-use-docker-on-ubuntu-16-04]
==Docker pull==
<syntaxhighlight lang="bash">
docker pull bitnami/mariadb:latest
docker pull bitnami/mediawiki:latest
</syntaxhighlight>
==Docker-compose==
*[https://www.digitalocean.com/community/tutorials/how-to-install-docker-compose-on-ubuntu-16-04 how-to-install-docker-compose-on-ubuntu-16-04]
<syntaxhighlight lang="bash">
curl -sSL https://raw.githubusercontent.com/benkoo/TensorCloud/master/mariadb.yml > mariadb.yml
curl -sSL https://raw.githubusercontent.com/benkoo/TensorCloud/master/mediawiki.yml > mediawiki.yml
docker-compose -f mariadb.yml -f mediawiki.yml up -d
</syntaxhighlight>
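Once the stack is up, it is worth confirming that both containers are actually running before configuring the wiki. A minimal check (service and container names depend on the compose files, so treat them as assumptions):
<syntaxhighlight lang="bash">
# Show the state of the services defined in the two compose files
docker-compose -f mariadb.yml -f mediawiki.yml ps

# Or scan all running containers for the mariadb/mediawiki entries
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
</syntaxhighlight>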
=MediaWiki Data Backup=
A wiki backup has two parts: the textual content stored in the SQL database, and the uploaded files.
For the textual data, the fastest way is to use <code>mysqldump</code>. More detailed instructions can be found at: https://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki
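For example, a plain dump of the wiki database could look like the following (user, database, and file names are placeholders, not values from this wiki; <code>--default-character-set=binary</code> avoids charset mangling, as recommended in the manual linked above):
<syntaxhighlight lang="bash">
# Dump the wiki database into a timestamped SQL file; mysql prompts for the password
mysqldump -h localhost -u wikiuser -p --default-character-set=binary \
  wikidb > wikidb_$(date +%Y%m%d).sql
</syntaxhighlight>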
To back up all the uploaded files, such as images, PDFs, and other binary files, you can refer to the following Stack Overflow answer: https://stackoverflow.com/questions/1002258/exporting-and-importing-images-in-mediawiki
The essential commands are the following:
First, create a temporary working directory:
<code>mkdir /tmp/workingBackupMediaFiles</code>
Then, copy the uploaded files into it:
<syntaxhighlight lang="bash">
php maintenance/dumpUploads.php \
  | sed 's~mwstore://local-backend/local-public~./images~' \
  | xargs cp -t /tmp/workingBackupMediaFiles
</syntaxhighlight>
Then, compress the directory into a zip archive:
<code>zip -r ~/Mediafiles.zip /tmp/workingBackupMediaFiles</code>
Remember to remove the temporary files and their directory:
<code>rm -r /tmp/workingBackupMediaFiles</code>
*[https://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki MediaWiki data backup]
==Docker exec==
<syntaxhighlight lang="bash">
docker ps                                    # find the MariaDB container ID
docker exec -it YourMariadbContainerID bash  # open a shell inside that container
</syntaxhighlight>
==mysqldump==
Connect with an explicitly defined socket:
<syntaxhighlight lang="bash">
mysql -u wiki -p -S /opt/lampp/var/mysql/mysql.sock  # connect via the defined socket
</syntaxhighlight>
To dump while showing progress with pv:
<syntaxhighlight lang="bash">
mysqldump -S /opt/lampp/var/mysql/mysql.sock -h [domain name/ip] -u [username] -p[password] [databasename] | pv --progress --size 4096m > [filename.sql]
</syntaxhighlight>
To dump and compress with gzip:
<syntaxhighlight lang="bash">
mysqldump -S /opt/lampp/var/mysql/mysql.sock -h [domain name/ip] -u [username] -p[password] [databasename] | pv --progress --size 4096m | gzip > [filename.sql.gz]
</syntaxhighlight>
==Jenkins==
http://toyhouse.cc:8080/job/mediawiki_backup/
http://toyhouse.cc:8080/job/mediawiki_restore/
==Rancher snapshot==
<syntaxhighlight lang="bash">
convoy snapshot create (volume name or UUID) --name (snapshot name)
</syntaxhighlight>
A concrete example follows the links below.
*[http://rancher.com/introducing-convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data/ convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data]
*[https://github.com/benkoo/TensorCloud/tree/master/Rancher/Convoy S3 for ToyhouseRancher]
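As a concrete illustration, snapshotting the MediaWiki data volume might look like this (the volume and snapshot names are hypothetical):
<syntaxhighlight lang="bash">
# Create a named snapshot of a volume called mediawiki_data (placeholder names)
convoy snapshot create mediawiki_data --name mediawiki_snap_1
</syntaxhighlight>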
==Volumerize==
Volumerize backs up Docker volumes and supports multiple storage backends. More: https://github.com/blacklabelops/volumerize
===0. Download mariadb_mediawiki.yml===
<syntaxhighlight lang="bash">
curl -sSL https://raw.githubusercontent.com/benkoo/TensorCloud/master/mariadb_mediawiki.yml > mariadb_mediawiki.yml
</syntaxhighlight>
===1. Start mariadb_mediawiki===
<syntaxhighlight lang="bash">
docker-compose -f mariadb_mediawiki.yml up -d
</syntaxhighlight>
===2.0 Run the Volumerize backup container===
<syntaxhighlight lang="bash">
docker run -d \
  --name volumerize \
  -v mediawiki_data:/source/application_data:ro \
  -v mariadb_data:/source/application_database_data:ro \
  -v mediawiki_data:/source/application_configuration:ro \
  -v backup_volume:/backup \
  -v cache_volume:/volumerize-cache \
  -e "VOLUMERIZE_SOURCE=/source" \
  -e "VOLUMERIZE_TARGET=file:///backup" \
  blacklabelops/volumerize
</syntaxhighlight>
===2.1 Execute the Volumerize backup===
<syntaxhighlight lang="bash">
docker exec volumerize backup
</syntaxhighlight>
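To check that the backup actually landed in the backup volume, its contents can be listed from a throwaway container (a quick sketch; the <code>alpine</code> image is just a convenient choice):
<syntaxhighlight lang="bash">
# Mount the backup volume read-only and list the archive files Volumerize wrote
docker run --rm -v backup_volume:/backup:ro alpine ls -la /backup
</syntaxhighlight>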
===3.1 Stop all containers===
Before restoring, stop every container that might still be writing to the volumes:
<syntaxhighlight lang="bash">
docker stop $(docker ps -a -q)
</syntaxhighlight>
====3.1.1 Remove volumerize====
Remove the old Volumerize container so its name can be reused for the restore run:
<syntaxhighlight lang="bash">
docker rm volumerize
</syntaxhighlight>
===3.2 Stop volumerize only===
<syntaxhighlight lang="bash">
docker stop volumerize
</syntaxhighlight>
===4. Run the Volumerize restore container===
Note that for a restore the source volumes must be writable (no <code>:ro</code>), while the backup volume can be mounted read-only:
<syntaxhighlight lang="bash">
docker run -d \
  --name volumerize \
  -v mediawiki_data:/source/application_data \
  -v mariadb_data:/source/application_database_data \
  -v mediawiki_data:/source/application_configuration \
  -v backup_volume:/backup:ro \
  -v cache_volume:/volumerize-cache \
  -e "VOLUMERIZE_SOURCE=/source" \
  -e "VOLUMERIZE_TARGET=file:///backup" \
  blacklabelops/volumerize
</syntaxhighlight>
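The container above only prepares the mounts; the restore itself still has to be triggered. Based on the restore script documented in the Volumerize README (an assumption worth verifying against the image version in use):
<syntaxhighlight lang="bash">
# Run the image's restore script to copy the backup back into the source volumes
docker exec volumerize restore
</syntaxhighlight>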
===5. Start mariadb_mediawiki again to verify===
<syntaxhighlight lang="bash">
docker-compose -f mariadb_mediawiki.yml up -d
</syntaxhighlight>
===6. Start Volumerize again===
<syntaxhighlight lang="bash">
docker start volumerize
</syntaxhighlight>
=MediaWiki Data Recovery=
*[https://www.mediawiki.org/wiki/Manual:Backing_up_a_wiki MediaWiki restore]
*[https://stackoverflow.com/questions/1002258/exporting-and-importing-images-in-mediawiki Exporting and Importing Images in MediaWiki (on stackoverflow.com)] (a media-restore sketch follows this list)
*[https://www.mediawiki.org/wiki/Fullsiterestore Full site backup and restore guide on the official MediaWiki site (Full site restore)]
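For the uploaded files, one way to bring the media archive from the backup section back into the wiki is MediaWiki's <code>importImages.php</code> maintenance script. A minimal sketch, assuming <code>~/Mediafiles.zip</code> was created as above (the zip stores paths without a leading slash, so unpacking at <code>/</code> recreates <code>/tmp/workingBackupMediaFiles</code>):
<syntaxhighlight lang="bash">
# Unpack the media archive, then re-import the files into the wiki
unzip ~/Mediafiles.zip -d /
php maintenance/importImages.php /tmp/workingBackupMediaFiles
</syntaxhighlight>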
==MySQL import==
<syntaxhighlight lang="bash">
docker ps                                    # find the MariaDB container ID
docker exec -it YourMariadbContainerID bash  # open a shell inside that container
mysql -h <host.name> -u <username> -p<PlainPassword> <databasename> < <filename.sql>
</syntaxhighlight>
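If the dump was gzip-compressed as in the backup section, it can be streamed straight into <code>mysql</code> without unpacking first (same placeholder names as above):
<syntaxhighlight lang="bash">
# Decompress on the fly and pipe the SQL into the target database
gunzip < <filename.sql.gz> | mysql -h <host.name> -u <username> -p<PlainPassword> <databasename>
</syntaxhighlight>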
==LocalSettings.php==
After importing the database, make sure the table prefix in LocalSettings.php matches the restored tables:
<syntaxhighlight lang="bash">
vim /bitnami/mediawiki/LocalSettings.php
</syntaxhighlight>
<syntaxhighlight lang="php">
$wgDBprefix = "wiki_"; // your wiki table prefix
</syntaxhighlight>
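To confirm which prefix the restored tables actually carry, the table list can be checked directly (placeholder credentials again; the backslash keeps <code>_</code> from acting as a single-character wildcard in <code>LIKE</code>):
<syntaxhighlight lang="bash">
# List restored tables whose names start with the configured prefix
mysql -u <username> -p <databasename> -e "SHOW TABLES LIKE 'wiki\_%';"
</syntaxhighlight>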
==Rancher backup==
<syntaxhighlight lang="bash">
convoy backup create (snapshot name or UUID) --dest s3://(bucket_name@region_name)/
</syntaxhighlight>
A concrete example follows the links below.
*[http://rancher.com/introducing-convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data/  convoy-a-docker-volume-driver-for-backup-and-recovery-of-persistent-data]
*[https://github.com/benkoo/TensorCloud/tree/master/Rancher/Convoy S3 for ToyhouseRancher]
*[https://github.com/rancher/convoy Convoy Quick Start Guide]
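Continuing the hypothetical names from the snapshot section, pushing that snapshot to S3 might look like this (bucket and region are placeholders):
<syntaxhighlight lang="bash">
# Upload the snapshot to an S3 bucket for off-host storage
convoy backup create mediawiki_snap_1 --dest s3://my-wiki-backups@us-west-2/
</syntaxhighlight>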
=MediaWiki Jenkins=
<syntaxhighlight lang="bash">
curl -sSL https://raw.githubusercontent.com/benkoo/TensorCloud/master/jenkins.yml > jenkins.yml
docker-compose -f jenkins.yml up -d
</syntaxhighlight>
=MediaWiki/Phabricator Microservices Installation Guide=
*[[微服务群安装指南|Microservice Cluster Installation Guide]]
