Archive

Digitizing (scanning)

Although millions of books are scanned and put online every year, making old documents and texts available on the web is a difficult and painstaking process.

Project IMPACT – which stands for Improving Access to Text – is focused on making the process easier.

Project IMPACT director Hildelies Balk explained: “The problem with turning an historic document into a machine-readable text is that it is so very old, everything is different from a modern document, it has old fonts, old words and a very difficult layout.”

Once scanned, the documents are left full of errors, because computers struggle to read old texts with unfamiliar layouts, fonts and spellings.

Clemens Neudecker, technical manager for European projects at Koninklijke Bibliotheek, showed us one example: “This is the Principia Mathematica by Isaac Newton. You see actually what we call shine through, that is ink from the opposite page which is just shining through the paper, you see that the paper is warped, and you can also see here there is this long ‘s’ also in use, which can very easily be confused with an ‘f’.”

Researchers at the National Library of the Netherlands have spent four years on a European project to improve the software tools used to read old books.

Researcher Hildelies Balk said: “We improved software for image enhancement, optical character recognition, post-correction of the document and language technology to make it more accessible.”

“Text that is not fully digital is virtually invisible. Everyone is used to going into a search engine, and looking for a word, and if they don’t find this it basically isn’t there for them.”
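The article does not spell out the project’s actual toolchain, but the steps Balk names (image enhancement, optical character recognition, post-correction) can be sketched in a few lines. The example below is a minimal, hypothetical illustration in Python: it uses the open-source Tesseract engine via the pytesseract package as a stand-in for the recognition step (not IMPACT’s own software), then runs a tiny post-correction pass that maps the long ‘s’ (ſ) to a modern ‘s’ and fixes a couple of sample historical spellings.

```python
# Minimal sketch of an OCR + post-correction pass for a scanned historic page.
# Assumes Tesseract plus the pytesseract and Pillow packages are installed;
# this is an illustration only, not the software developed by the IMPACT project.
import re

import pytesseract
from PIL import Image

# Tiny, illustrative correction tables: the long 's' (U+017F) is often
# misread as 'f', and historic spellings differ from modern ones.
CHAR_FIXES = {"\u017f": "s"}                     # map "ſ" to "s"
WORD_FIXES = {"fhall": "shall", "vnto": "unto"}  # sample s/f and u/v fixes

def ocr_page(path: str) -> str:
    """Run OCR on a scanned page image and return the raw recognized text."""
    return pytesseract.image_to_string(Image.open(path))

def post_correct(text: str) -> str:
    """Apply character-level and word-level corrections to raw OCR output."""
    for old, new in CHAR_FIXES.items():
        text = text.replace(old, new)
    return re.sub(r"\w+", lambda m: WORD_FIXES.get(m.group(0), m.group(0)), text)

if __name__ == "__main__":
    raw = ocr_page("scan_page_001.png")  # hypothetical file name
    print(post_correct(raw))
```

A real system would derive its correction lexicon and language models from large historical corpora rather than a hand-written table, which is the kind of language technology Balk refers to above.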

Read the entire article and watch the video at this link: Euronews

 


Text digitization in the cultural heritage sector started in earnest in 1971, when the first Project Gutenberg text — the United States Declaration of Independence — was keyed into a file on a mainframe at the University of Illinois. The Thesaurus Linguae Graecae began in 1972. The Oxford Text Archive was founded in 1976. The ARTFL Project was founded at the University of Chicago in 1982. The Perseus Digital Library started its development in 1985. The Text Encoding Initiative started in 1987. The Women Writers Project started at Brown University in 1988. The University of Michigan’s UMLibText project was started in 1989. The Center for Electronic Texts in the Humanities was established jointly by Princeton University and Rutgers University in 1991. Sweden’s Project Runeberg went online in 1992. The University of Virginia EText Center was also founded in 1992. These projects focused on keyed-in text structured with markup, ASCII or SGML at the time, transitioning to HTML and, later, to XML.

Read more here: http://blogs.loc.gov/digitalpreservation/2012/12/before-you-were-born-we-were-digitizing-texts/

The Google Ngram Viewer is a phrase-usage graphing tool which charts the yearly count of selected n-grams (contiguous sequences of words), single words, or phrases, as found in over 5.2 million books digitized by Google Inc. (up to 2008). The words or phrases (or ngrams) are matched by case-sensitive spelling, comparing exact uppercase and lowercase letters, and are plotted on the graph only if they are found in 40 or more books. The Ngram tool was released in mid-December 2010.
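The matching rules described here (exact, case-sensitive n-grams, plotted only when they occur in at least 40 books) are easy to illustrate. The snippet below is a toy Python sketch over an invented two-book corpus; it shows what a word n-gram is and how a minimum-book threshold filter would work, not how Google’s backend is actually implemented.

```python
# Toy sketch of case-sensitive n-gram counting with a minimum-book threshold.
# The corpus and threshold handling are invented for illustration; the real
# Ngram Viewer runs over Google's digitized books and plots counts per year.
from collections import Counter

MIN_BOOKS = 40  # the viewer only plots n-grams found in 40 or more books

def ngrams(text, n):
    """Yield case-sensitive word n-grams from a text."""
    words = text.split()
    for i in range(len(words) - n + 1):
        yield " ".join(words[i:i + n])

def count_books_per_ngram(books, n):
    """Count in how many distinct books each n-gram appears."""
    counts = Counter()
    for book in books:
        for gram in set(ngrams(book, n)):  # count each book at most once
            counts[gram] += 1
    return counts

def plottable(books, n, min_books=MIN_BOOKS):
    """Keep only the n-grams that clear the minimum-book threshold."""
    counts = count_books_per_ngram(books, n)
    return {gram: c for gram, c in counts.items() if c >= min_books}

if __name__ == "__main__":
    toy_corpus = ["natural philosophy and natural history",
                  "the principles of natural philosophy"]
    # A toy corpus cannot reach 40 books, so we lower the threshold here.
    print(plottable(toy_corpus, n=2, min_books=2))  # {'natural philosophy': 2}
```

Because matching is case-sensitive, “natural philosophy” and “Natural Philosophy” would be counted as two different bigrams, just as in the real tool.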

The word-search database was created by Google Labs, based originally on 5.2 million books published between 1500 and 2008 and containing 500 billion words in American English, British English, French, German, Spanish, Russian, or Chinese. Italian words are counted by their use in other languages. A user of the Ngram tool can select among these source languages for the word-search operations.

Info: http://books.google.com/ngrams/info

Search engine: http://books.google.com/ngrams

TED Talk about Google Ngram Viewer: http://www.ted.com/talks/what_we_learned_from_5_million_books.html