== Migration toyhouse.wiki ==
This page documents the migration of toyhouse.wiki from Alibaba Cloud infrastructure to AWS infrastructure.
=== Installation Inspection ===
On the source server, the following facts were found. The installation runs as Docker containers, which we can check with:
 docker ps
As we can see, there are fourteen containers running on the source server, and possibly not all of them are functional. Since there is no yml file that directly brings the whole configuration up, we decided to reverse engineer the installation by exporting the containers, copying them to the new server, and proceeding with the installation there, instead of building the containers from scratch, which is the normal procedure for deploying a new server.
The next step is to find the mounted directories used by each container on the host machine. The command below displays the configuration of a container:
 docker inspect [container-name]
To inspect the mediawiki container installation, for example, type:
 docker inspect mediawiki
A portion of the output is shown below:
"HostConfig": { | |||
"Binds": [ | |||
"/data/xlpsystem/mediawiki_dev:/xlp_dev:rw", | |||
"/data/xlpsystem/mediawiki:/xlp_data:rw" | |||
], | |||
After running the same command for all the containers, we can conclude that the mounted folders on the host machine are under /data/xlpsystem/.
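As a shortcut, the bind mounts of every running container can be listed in one pass using docker inspect's format template; below is a small sketch, assuming all containers of interest are currently running:

 for c in $(docker ps --format '{{.Names}}'); do
     echo "== $c =="
     docker inspect --format '{{range .HostConfig.Binds}}{{println .}}{{end}}' "$c"
 done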
=== Outline Planning ===
# Export all docker containers from the source server using the docker command
# Copy the mounted folder
# Transfer all files to the target server
# Prepare the target server
# Re-create the docker run command
# Bring the docker containers up
# Solve all remaining issues
Below is the execution of this plan, the issues encountered, and how they were solved.
==== Exporting all the containers ====
Below are the docker commands to export all the containers:
 docker export nginx > /root/container/nginx.tar
 docker export red_panda > /root/container/red_panda.tar
 docker export jenkins > /root/container/jenkins.tar
 docker export wordpress > /root/container/wordpress.tar
 docker export matomo > /root/container/matomo.tar
 docker export grafana > /root/container/grafana.tar
 docker export kibana > /root/container/kibana.tar
 docker export phabricator > /root/container/phabricator.tar
 docker export elasticsearch > /root/container/elasticsearch.tar
 docker export phabricator_mysql > /root/container/phabricator_mysql.tar
 docker export mem2018_wordpress_1 > /root/container/mem2018_wordpress_1.tar
 docker export mem2018_mediawiki_1 > /root/container/mem2018_mediawiki_1.tar
 docker export mem2018_mariadb_1 > /root/container/mem2018_mariadb_1.tar
The commands above produce .tar files in /root/container/; these are the files to be transferred to the new server.
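Equivalently, the exports can be scripted as a single loop; this sketch assumes every running container should be exported, so filter the list if some should be skipped:

 mkdir -p /root/container
 for c in $(docker ps --format '{{.Names}}'); do
     docker export "$c" > "/root/container/${c}.tar"
 done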
==== Tar the mounted folder ====
Below is the command to pack the mounted folder into a single .tar.gz file:
 tar -zcvf /data/xlpsystem.tar.gz /data/xlpsystem
A new file will be created at /data/xlpsystem.tar.gz, around 31GB in size. After all the files are created, everything to be transferred is moved into one final folder to simplify the transfer process.
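Given the archive size and the unstable link described below, it is worth recording a checksum before the transfer, so the copy can be verified on the target server (sha256sum is part of coreutils and assumed to be available):

 sha256sum /data/xlpsystem.tar.gz > /data/xlpsystem.tar.gz.sha256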
==== Transfer all files to target server ====
The challenges in transferring the files are their size and the connectivity between the Alibaba Cloud and AWS Cloud servers, which is not as stable or fast as expected. We need a mechanism to copy around 40GB of files, comprising the container exports and the mount point's tar file, from Alibaba Cloud to AWS Cloud. The transfer was done in two steps: first, split the mount file; second, create a cron job so the transfer resumes whenever it is disconnected.
To split the file, below are the command syntax and the actual command used:
 split [options] filename prefix
 split -b512MB xlpsystem.tar.gz xlp
This creates split files of 512MB each, named xlpaa, xlpab, and so on. Splitting helps the transfer, since an interrupted connection only costs the chunk in flight rather than the whole archive.
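The reassembly step on the target server is not shown in the original notes; a sketch of it follows. split's two-letter suffixes sort correctly, so a two-character glob concatenates the chunks in order, and since GNU tar stored the paths without the leading slash, extracting with -C / restores /data/xlpsystem:

 cat xlp?? > xlpsystem.tar.gz
 sha256sum xlpsystem.tar.gz   # compare against the value recorded on the source server
 tar -zxvf xlpsystem.tar.gz -C /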
Next, create a cron job that runs the transfer and restarts it whenever it is disconnected. Create a new shell script containing the following:
 #!/bin/bash
 SERVICE="rsync"
 if pgrep -x "$SERVICE" >/dev/null
 then
     echo "$SERVICE is running"
 else
     echo "$SERVICE stopped, restarting transfer"
     rsync -avP root@toyhouse.wiki:/root/container/* /home/ubuntu/container
 fi
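Assuming the script is saved as /home/ubuntu/rsync-container.sh, the path used in the crontab entry below, it is made executable with:

 chmod +x /home/ubuntu/rsync-container.sh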
Then register it as a cron job, which in this particular case runs every minute:
 crontab -e
Place this line at the bottom of the file:
 * * * * * /home/ubuntu/rsync-container.sh >> /home/ubuntu/rsync-log.log
Rsync is chosen for its ability to resume a transfer that is disconnected partway through a file. Once everything is set up, inspect the log file to follow the progress.
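For instance, the transfer can be watched live with:

 tail -f /home/ubuntu/rsync-log.log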
==== Prepare the target server ====
Please refer to [[Docker|Docker Installation]] for the docker installation steps.
==== Importing the containers back ====
Once docker is installed, use the command below to import each container back into the target server as an image:
 docker import /home/ubuntu/container/mariadb.tar mariadb:10.3
and repeat for each of the containers.
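If many containers need importing, a loop along these lines can help; the :imported tag here is only a placeholder, and each image should really be tagged to match the docker run command recreated for it later:

 for f in /home/ubuntu/container/*.tar; do
     name=$(basename "$f" .tar)
     docker import "$f" "${name}:imported"
 done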
==== Mariadb Container ====
After inspection, we confirmed the mount folder is /data/xlpsystem/mariadb and the version is mariadb 10.3. To simplify the process, we decided to pull the image directly from Docker Hub and install the service using the docker-compose.yml at /data/xlpsystem/docker-compose.yml.
Once the image is running, test that the data survived by entering the docker container and performing a simple query, confirming the user id and password as well.
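Such a check might look like the following; the container name mariadb matches the import example above, while <wiki_db> is a placeholder for whatever database the wiki actually uses:

 docker exec -it mariadb mysql -u root -p
 # inside the MariaDB prompt, for example:
 #   SHOW DATABASES;
 #   SELECT COUNT(*) FROM <wiki_db>.page;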
Once confirmed, proceed to the installation of the mediawiki docker container.
==== Mediawiki Container ====
On inspection, we confirmed the mounted folders are /data/xlpsystem/mediawiki and /data/xlpsystem/mediawiki_dev. Once the container image is imported, return to the source server and regenerate the original docker run command with the runlike tool:
 docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike [container-name]
which produces the docker command below:
 docker run --name=mediawiki --hostname=623b88d7f03d --env=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin --env='PHPIZE_DEPS=autoconf dpkg-dev file g++ gcc libc-dev make pkg-config re2c' --env=PHP_INI_DIR=/usr/local/etc/php --env=APACHE_CONFDIR=/etc/apache2 --env=APACHE_ENVVARS=/etc/apache2/envvars --env=PHP_EXTRA_BUILD_DEPS=apache2-dev --env='PHP_EXTRA_CONFIGURE_ARGS=--with-apxs2 --disable-cgi' --env='PHP_CFLAGS=-fstack-protector-strong -fpic -fpie -O2' --env='PHP_CPPFLAGS=-fstack-protector-strong -fpic -fpie -O2' --env='PHP_LDFLAGS=-Wl,-O1 -Wl,--hash-style=both -pie' --env='GPG_KEYS=1729F83938DA44E27BA0F4D3DBDB397470D12172 B1B44D8F021E4E2D6021E995DC9FF8D3EE5AF27F' --env=PHP_VERSION=7.2.8 --env=PHP_URL=https://secure.php.net/get/php-7.2.8.tar.xz/from/this/mirror --env=PHP_ASC_URL=https://secure.php.net/get/php-7.2.8.tar.xz.asc/from/this/mirror --env=PHP_SHA256=53ba0708be8a7db44256e3ae9fcecc91b811e5b5119e6080c951ffe7910ffb0f --env=PHP_MD5= --env=MEDIAWIKI_MAJOR_VERSION=1.31 --env=MEDIAWIKI_BRANCH=REL1_31 --env=MEDIAWIKI_VERSION=1.31.0 --env=MEDIAWIKI_SHA512=50ad9303b0c0bd8380dea7489be18a4022d5b65a31961af8d36c3c9ff6d74cdf25e8e10137ef1e025b4287e9ee9b7e0bf4198ca342a46ab42915c91f1ddaf940 --volume=/data/xlpsystem/mediawiki_dev:/xlp_dev:rw --volume=/data/xlpsystem/mediawiki:/xlp_data:rw --volume=/xlp_data --volume=/xlp_dev --network=xlpsystem_default --workdir=/tmp -p 81:80 --restart=always --label='com.docker.compose.oneoff=False' --label='com.docker.compose.container-number=1' --label='com.docker.compose.config-hash=67606e77ea08235b6ce97f53e238f7404769547f2bb39e620593c6244fee36a0' --label='com.docker.compose.version=1.17.1' --label='com.docker.compose.service=mediawiki' --label='com.docker.compose.project=xlpsystem' --runtime=runc --detach=true daocloud.io/weimar/xlp_mediawiki:20180827140844 /bin/sh -c ./xlp_start.sh
Once the container is up, adjust LocalSettings.php to match the new server configuration, changing this line
 ## The protocol and server name to use in fully-qualified URLs
 $wgServer = "http://toyhouse.wiki:81";
to become
 ## The protocol and server name to use in fully-qualified URLs
 $wgServer = "http://pkc-dev.org:81";
Then ensure port 81 is opened in the AWS firewall settings and try to access the link. MediaWiki should be accessible at this point.
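A quick reachability check from outside, once the firewall rule is in place, could be:

 curl -I http://pkc-dev.org:81/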
==== Matomo Container ====
Once the container is up, the matomo screen displayed this error message:
 The directory "/var/www/html/tmp/cache/tracker/" does not exist and could not be created
At this point, we understood the problem was likely related to user authorization on the host directory. To find which user accesses the folder from the container, go inside the container and see which user executes the service. One point to understand is that a user inside the container and a user on the host share the same numeric user id. Below is part of the "ps aux" output from inside the container:
 ...
 www-data    22  0.0  0.6 427860 25612 ?  S  08:30  0:00 apache2 -DFOREGROUND
 www-data    23  0.0  0.8 503900 35304 ?  S  08:30  0:00 apache2 -DFOREGROUND
 www-data    24  0.0  0.7 427768 28672 ?  S  08:30  0:00 apache2 -DFOREGROUND
 www-data    25  0.0  0.6 427760 26484 ?  S  08:30  0:00 apache2 -DFOREGROUND
 www-data    26  0.0  0.2 425396 11164 ?  S  08:30  0:00 apache2 -DFOREGROUND
 ...
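The numeric id behind www-data inside the container can also be confirmed directly; this assumes the container is named matomo:

 docker exec matomo id www-data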
Next, check the user id of the www-data account in the host's /etc/passwd; if no matching user exists, create one with the same user id:
 www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
It is clear that matomo's container uses user id 33 (www-data) to execute the service. Inspection of the host file found a matching user, www-data with user id 33, so we only need to change the owner of the mounted folder to www-data:
 chown -R www-data:www-data /data/xlpsystem/matomo
After the folder owner is changed, the error moves on to the following message:
 Warning: You are now accessing Matomo from http://pkc-dev.org:82/index.php, but Matomo has been configured to run at this address: http://toyhouse.wiki:82/index.php.
To fix this, adjust matomo's configuration file config.ini.php in the [General] section, as shown below:
 [General]
 trusted_hosts[] = "pkc-dev.org:82"
Then we can try to log in. Once the matomo service is running, we also need to adjust the matomo settings in LocalSettings.php on the MediaWiki side, to ensure we send tracking data to the correct matomo instance. Below is the part of LocalSettings.php to change:
 # Matomo
 wfLoadExtension( 'Piwik' );
 $wgPiwikURL = "pkc-dev.org:82";
 $wgPiwikIDSite = "1";
Once done, we can inspect the data in Matomo to ensure everything is working properly.
=== References ===
# How to install docker ([[Docker|Docker Installation]])