2024
An Open Multilingual System for Scoring Readability of Wikipedia
Mykola Trokhymovych, Indira Sen, and Martin Gerlach
In Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Aug 2024
With over 60M articles, Wikipedia has become the largest platform for open and freely accessible knowledge. While it has more than 15B monthly visits, its content is believed to be inaccessible to many readers due to the limited readability of its text. However, previous investigations of the readability of Wikipedia have been restricted to English only, and there are currently no systems supporting the automatic readability assessment of the 300+ languages in Wikipedia. To bridge this gap, we develop a multilingual model to score the readability of Wikipedia articles. To train and evaluate this model, we create a novel multilingual dataset spanning 14 languages by matching articles from Wikipedia to simplified Wikipedia and online children's encyclopedias. We show that our model performs well in a zero-shot scenario, yielding a ranking accuracy of more than 80% across 14 languages and improving upon previous benchmarks. These results demonstrate the applicability of the model at scale for languages in which there is no ground-truth data available for model fine-tuning. Furthermore, we provide the first overview of the state of readability in Wikipedia beyond English.
@inproceedings{trokhymovych-etal-2024-open,
  title     = {An Open Multilingual System for Scoring Readability of {W}ikipedia},
  author    = {Trokhymovych, Mykola and Sen, Indira and Gerlach, Martin},
  editor    = {Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek},
  booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  month     = aug,
  year      = {2024},
  address   = {Bangkok, Thailand},
  publisher = {Association for Computational Linguistics},
  url       = {https://aclanthology.org/2024.acl-long.342},
  pages     = {6296--6311},
}
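The ranking accuracy reported in the abstract can be illustrated with a small sketch: given matched pairs of articles, it is the fraction of pairs where the model scores the simplified version as more readable than the original. The function name and toy scores below are illustrative, not taken from the paper's code.

```python
def ranking_accuracy(scores_simple, scores_original):
    """Fraction of matched (simplified, original) article pairs where the
    model scores the simplified version as more readable (higher score)."""
    assert len(scores_simple) == len(scores_original)
    correct = sum(s > o for s, o in zip(scores_simple, scores_original))
    return correct / len(scores_simple)

# Toy example: 4 matched pairs, 3 of them ranked correctly.
acc = ranking_accuracy([0.9, 0.8, 0.4, 0.7], [0.3, 0.5, 0.6, 0.2])
print(acc)  # 0.75
```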
2023
Fair Multilingual Vandalism Detection System for Wikipedia
Mykola Trokhymovych, Muniza Aslam, Ai-Jou Chou, Ricardo Baeza-Yates, and Diego Saez-Trumper
In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Aug 2023
This paper presents a novel system design aimed at supporting the Wikipedia community in addressing vandalism on the platform. To achieve this, we collected a massive dataset covering 47 languages and applied advanced filtering and feature engineering techniques, including multilingual masked language modeling, to build the training dataset from human-generated data. The performance of the system was evaluated through comparison with the one used in production on Wikipedia, known as ORES. Our work significantly increases the number of languages covered, making Wikipedia patrolling more efficient for a wider range of communities. Furthermore, our model outperforms ORES, ensuring that the results provided are not only more accurate but also less biased against certain groups of contributors.
@inproceedings{10.1145/3580305.3599823,
  author    = {Trokhymovych, Mykola and Aslam, Muniza and Chou, Ai-Jou and Baeza-Yates, Ricardo and Saez-Trumper, Diego},
  title     = {Fair Multilingual Vandalism Detection System for Wikipedia},
  year      = {2023},
  isbn      = {9798400701030},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3580305.3599823},
  doi       = {10.1145/3580305.3599823},
  booktitle = {Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining},
  pages     = {4981--4990},
  numpages  = {10},
  location  = {Long Beach, CA, USA},
  series    = {KDD '23},
}
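A core ingredient of the feature engineering mentioned in the abstract is describing how a revision changed the page content. The sketch below computes a few simple diff-based features for an edit; the function name and the exact feature set are illustrative, not the paper's actual features.

```python
import difflib

def revision_features(old_text: str, new_text: str) -> dict:
    """Simple content-change features for a wiki revision (illustrative):
    characters inserted/deleted and basic shape statistics of the new text."""
    sm = difflib.SequenceMatcher(a=old_text, b=new_text)
    inserted = deleted = 0
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op in ("insert", "replace"):
            inserted += j2 - j1
        if op in ("delete", "replace"):
            deleted += i2 - i1
    return {
        "chars_inserted": inserted,
        "chars_deleted": deleted,
        "size_ratio": len(new_text) / max(len(old_text), 1),
        "upper_ratio_new": sum(c.isupper() for c in new_text) / max(len(new_text), 1),
    }

# A suspicious edit: shouting and removed content.
feats = revision_features("Paris is the capital of France.",
                          "Paris is NOT the capital!!!")
```

Features like these would then be combined with user metadata and multilingual text embeddings before being fed to a classifier.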
GeoDD: End-to-End Spatial Data De-duplication System
Mykola Trokhymovych, and Oleksandr Kosovan
In Data Science and Algorithms in Systems, Aug 2023
People generate vast amounts of data that can be used for analytics, data-driven decision-making, and forecasting. However, to extract value from data, we need to apply specific methods of cleaning and preprocessing it. In this paper, we study the problem of geospatial data de-duplication and propose and implement an end-to-end solution for de-duplicating social-media-based data. We apply advanced geospatial, natural language processing, and classical machine learning methods in our solution. Our tool performed competitively in the observed competition and can process vast amounts of data with limited computational resources.
@inproceedings{10.1007/978-3-031-21438-7_60,
  author    = {Trokhymovych, Mykola and Kosovan, Oleksandr},
  editor    = {Silhavy, Radek and Silhavy, Petr and Prokopova, Zdenka},
  title     = {GeoDD: End-to-End Spatial Data De-duplication System},
  booktitle = {Data Science and Algorithms in Systems},
  year      = {2023},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {717--727},
  isbn      = {978-3-031-21438-7},
  url       = {https://link.springer.com/chapter/10.1007/978-3-031-21438-7_60},
}
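The basic idea behind geospatial de-duplication can be sketched as combining a spatial distance check with a text-similarity check on record names. The thresholds, field names, and similarity measure below are illustrative assumptions, not the paper's actual pipeline.

```python
import math
from difflib import SequenceMatcher

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_duplicate(rec_a, rec_b, max_dist_m=100, min_name_sim=0.8):
    """Two place records are flagged as duplicates when they are close in
    space AND their names are similar (thresholds are illustrative)."""
    dist = haversine_m(rec_a["lat"], rec_a["lon"], rec_b["lat"], rec_b["lon"])
    sim = SequenceMatcher(None, rec_a["name"].lower(), rec_b["name"].lower()).ratio()
    return dist <= max_dist_m and sim >= min_name_sim

a = {"name": "Cafe Central", "lat": 48.2105, "lon": 16.3655}
b = {"name": "Café Central", "lat": 48.2106, "lon": 16.3656}
print(is_duplicate(a, b))  # True: ~13 m apart, near-identical names
```

In practice, a spatial index (e.g., geohash bucketing) would limit pairwise comparisons so the approach scales to large datasets.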
2022
WikiFactFind: Semi-automated fact-checking based on Wikipedia
Mykola Trokhymovych, and Diego Saez-Trumper
In Wiki Workshop, 2022
@inproceedings{trokhymovych2023wikifactfind,
  author    = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title     = {WikiFactFind: Semi-automated fact-checking based on Wikipedia},
  booktitle = {Wiki Workshop},
  year      = {2022},
  url       = {https://wikiworkshop.org/2022/papers/WikiWorkshop2022_paper_21.pdf},
}
2021
WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia
Mykola Trokhymovych, and Diego Saez-Trumper
In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, Aug 2021
With the growth of fake news and disinformation, the NLP community has been working to assist humans in fact-checking. However, most academic research has focused on model accuracy without paying attention to resource efficiency, which is crucial in real-life scenarios. In this work, we review the state-of-the-art datasets and solutions for automatic fact-checking and test their applicability in production environments. We discover overfitting issues in those models, and we propose a data filtering method that improves the model's performance and generalization. Then, we design an unsupervised fine-tuning procedure for masked language models to improve their accuracy when working with Wikipedia. We also propose a novel query enhancing method to improve evidence discovery using the Wikipedia Search API. Finally, we present a new fact-checking system, the WikiCheck API, which automatically performs a fact validation process based on the Wikipedia knowledge base. It is comparable to SOTA solutions in terms of accuracy and can be used on low-memory CPU instances.
@inproceedings{10.1145/3459637.3481961,
  author    = {Trokhymovych, Mykola and Saez-Trumper, Diego},
  title     = {WikiCheck: An End-to-End Open Source Automatic Fact-Checking API Based on Wikipedia},
  year      = {2021},
  isbn      = {9781450384469},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3459637.3481961},
  doi       = {10.1145/3459637.3481961},
  booktitle = {Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  pages     = {4155--4164},
  numpages  = {10},
  keywords  = {applied research, wikipedia, nlp, nli, fact-checking},
  location  = {Virtual Event, Queensland, Australia},
  series    = {CIKM '21},
}
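The query enhancement idea from the abstract can be illustrated with a minimal sketch: distill a claim into a search query by keeping likely named entities (capitalized spans) and numbers before sending it to the Wikipedia Search API. The function name and heuristic are illustrative only; the paper's actual method is more elaborate.

```python
import re

def enhance_query(claim: str) -> str:
    """Build a compact search query from a claim by keeping capitalized
    word runs (likely named entities) and multi-digit numbers.
    A simplified illustration of query enhancement, not the paper's method."""
    entities = re.findall(r"\b[A-Z][a-z]+(?:\s[A-Z][a-z]+)*\b", claim)
    numbers = re.findall(r"\b\d{2,4}\b", claim)
    return " ".join(entities + numbers)

q = enhance_query("Albert Einstein received the Nobel Prize in Physics in 1921.")
print(q)  # Albert Einstein Nobel Prize Physics 1921
```

The resulting query would then be used to retrieve candidate evidence passages, which an NLI model classifies as supporting, refuting, or unrelated to the claim.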