Access to billions of pages for large-scale text analysis
Organisciak, Peter; Capitanu, Boris; Underwood, Ted; Downie, J. Stephen
Permalink
https://hdl.handle.net/2142/96256
Description
Issue Date
2017-03
Keyword(s)
Non-consumptive research
Feature extraction
Large-scale text analysis
Datasets
Text mining
Abstract
Consortial collections have led to unprecedented scales of digitized corpora, but the insights they enable are hampered by the complexities of access, particularly to in-copyright or orphan works. Pursuing a principle of non-consumptive access, we developed the Extracted Features (EF) dataset, a dataset of quantitative counts for every page of nearly 5 million scanned books. The EF dataset includes unigram counts, part-of-speech tagging, header and footer extraction, counts of characters at both sides of the page, and more. Distributing book data with features already extracted saves the resource costs associated with large-scale text use, improves the reproducibility of research done on the dataset, and opens the door to datasets on copyrighted books. We describe the coverage of the dataset and demonstrate its usefulness through duplicate book alignment and identification of their cleanest scans, topic modeling, word list expansion, and multifaceted visualization.
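
As an illustrative sketch only (not part of this record), the snippet below shows one way the per-page features described in the abstract (unigram counts with part-of-speech tags) might be read using the HTRC Feature Reader Python library (htrc-feature-reader); the local file name is hypothetical.

    from htrc_features import FeatureReader  # pip install htrc-feature-reader

    # Hypothetical local path to a single Extracted Features volume file.
    paths = ["example_volume.json.bz2"]

    fr = FeatureReader(paths)
    for vol in fr.volumes():
        # Volume-level metadata carried in the EF file.
        print(vol.id, vol.title)

        # Per-page token counts with part-of-speech tags, returned as a
        # pandas DataFrame indexed by (page, section, token, pos).
        tokens = vol.tokenlist(pages=True, pos=True)
        print(tokens.head())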