ImportTextFiles.php
Please refer to the original MediaWiki documentation.[1]
A Typical Importation Process
- Extract text content from a source, ideally Wikipedia or another reputable data source with an established data model.
- In the case of Wikipedia, every page can be treated as a single text file, and each page has a unique ID provided by either the XML dump or the MediaWiki data model.
- Dump all the textual content into a directory, storing each page in its own file. Keep the page names in a comma-delimited (CSV) index file in which each line associates a unique ID with a page name (see the sketch after this list).
- Move the files to a location that MediaWiki's maintenance scripts can easily access, such as /var/www/html/images. For Docker deployments, this directory is usually mapped onto the host machine's file system, which makes it easy to place the text files there, for example in an INPUTDATA_DIR directory under /var/www/html/images.
- Run the importTextFiles.php script as follows:
root@hostname:/var/www/html/# php ./maintenance/importTextFiles.php -s "Loading Textual Content from external sources" --overwrite --use-timestamp ./images/INPUTDATA_DIR/*
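The dump step above can be sketched in bash. Everything in this fragment is illustrative: the extracted_pages.tsv input (one tab-separated "ID, page name, text" record per line) and the index.csv name are assumptions for this example, not part of the MediaWiki tooling.

#!/bin/bash
# Hypothetical dump step: read "id<TAB>page name<TAB>text" records and
# write one file per page, named by its unique ID, plus a CSV index
# that associates each ID with its page name.
mkdir -p ./images/INPUTDATA_DIR
: > ./images/INPUTDATA_DIR/index.csv
while IFS=$'\t' read -r id name text; do
    printf '%s\n' "$text" > "./images/INPUTDATA_DIR/${id}.txt"
    printf '%s,%s\n' "$id" "$name" >> ./images/INPUTDATA_DIR/index.csv
done < extracted_pages.tsv

Note that file names matter here: importTextFiles.php derives each page title from its file name, which is why the ID-named files need the renaming pass described in the next section.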
Preprocess file names
Given a list of input data files, for example the files from Wikipedia's Module collection, together with a CSV file (wp_articles_module.csv) that maps unique IDs to page names, the following command produces a smaller file containing only the mappings for the files that are actually present:
ls ./images/MODULE/ | xargs -I {} grep {} wp_articles_module.csv > module_replacement.txt
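An equivalent formulation, assuming the files are named by their unique IDs, feeds all the file names to a single grep invocation as fixed-string patterns. This avoids spawning one process per file and any surprises from regex metacharacters in the names:

# Read the file names from stdin (-f -) as fixed strings (-F)
# and keep only the matching lines of the ID-to-name CSV.
ls ./images/MODULE/ | grep -F -f - wp_articles_module.csv > module_replacement.txt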
When performing data extraction, a comma (,) is commonly used as the field delimiter. To be compatible with the