TL;DR
My workflow:
1. Download the PDF
2. Split it into pages using pdftk
3. Extract the text of each page using pdftotext
4. Classify the text and add metadata
5. Send it to the client in a structured format

I need consistent text extraction to get from step 3 to step 4. If a page's text is garbled, I have to OCR that page, but OCRing every page is out of the question. How can I identify beforehand which pages need OCR? I've tried running pdffonts and pdftohtml on each page, but isn't it expensive to invoke subprocess.run twice per page?
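For context, here is a minimal sketch of steps 2 and 3, assuming pdftk and pdftotext are on PATH (the file layout and helper name are mine):

```python
import subprocess
from pathlib import Path

def extract_page_texts(pdf_path: str, workdir: str) -> list[str]:
    # Step 2: burst the PDF into single-page files with pdftk.
    subprocess.run(
        ["pdftk", pdf_path, "burst", "output", f"{workdir}/page_%04d.pdf"],
        check=True,
    )
    texts = []
    # Step 3: extract each page's text with pdftotext ("-" means stdout).
    for page in sorted(Path(workdir).glob("page_*.pdf")):
        result = subprocess.run(
            ["pdftotext", str(page), "-"],
            capture_output=True, text=True, check=True,
        )
        texts.append(result.stdout)
    return texts
```

Step 4 is where garbled pages blow up, which is why I need to flag them between steps 3 and 4.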
What do I mean by a broken page?
A PDF page whose text cannot be reliably extracted from the PDF source itself, possibly due to a broken ToUnicode conversion.
Description
I'm building an application that relies on extracting text from a thousand PDF files every day. The text layout in each PDF is somewhat structured, so calling pdftotext from Python works well in most cases. But some PDF files, coming from one or two sources, contain pages with problematic fonts, which results in garbled extracted text. Using OCR only on the problematic pages seems like a reasonable way to overcome the issue. So my problem is how to identify, before extracting the text, which pages are likely to produce gibberish.
First, I tried to identify garbled text after extracting it, using a regex (\p{Cc} or unlikely characters outside the Latin alphabet), but that did not work: I also found corrupted text composed of perfectly valid letters and digits, e.g. AAAAABS12 54c] $( JJJJ Pk.
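To make the failure concrete, this is roughly what that first attempt looked like (a reconstruction of the idea using the third-party regex module, which supports \p{...} classes; the exact pattern is mine):

```python
import regex  # third-party "regex" module; supports \p{...} and class arithmetic

# Flag control characters (except ordinary whitespace) or characters
# that are implausible in Latin-script text.
GARBLED = regex.compile(
    r"[[\p{Cc}]--[\r\n\t]]|[^\p{Latin}\p{N}\p{P}\p{S}\s]",
    regex.V1,  # V1 flag enables the set-difference syntax
)

def looks_garbled(text: str) -> bool:
    return GARBLED.search(text) is not None

print(looks_garbled("ÿ\x0e\x0f\x10"))              # True: control characters
print(looks_garbled("AAAAABS12 54c] $( JJJJ Pk"))  # False: corruption missed
```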
Second, I tried calling pdffonts on each page, to identify each font's name, encoding, embeddedness, and the existence of a ToUnicode map, and parsing its output. In my tests that works fairly well. But I also found it necessary to count how many characters use the likely problematic fonts, and pdftohtml, which displays each text block in a p tag along with its font name, saved the day here (a sketch of that counting step follows). @LMC helped me figure out how to do it; take a look at the answer. The bad part is that I ended up calling subprocess.run twice for each PDF page, which is super expensive. It would be cheaper if I could bind those tools directly.
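The counting step could look something like this (a sketch; -xml, -i and -stdout are real pdftohtml flags, but whether the family attribute matches the names pdffonts prints should be verified against your own files):

```python
import subprocess
import xml.etree.ElementTree as ET

def char_share_by_font(page_pdf: str) -> dict[str, float]:
    # -xml tags each text chunk with the id of the font rendering it and
    # emits <fontspec> elements that map those ids to family names.
    xml_out = subprocess.run(
        ["pdftohtml", "-xml", "-i", "-stdout", page_pdf],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(xml_out)
    families = {fs.get("id"): fs.get("family") for fs in root.iter("fontspec")}
    counts: dict[str, int] = {}
    for chunk in root.iter("text"):
        family = families.get(chunk.get("font"), "?")
        counts[family] = counts.get(family, 0) + len("".join(chunk.itertext()))
    total = sum(counts.values()) or 1
    return {family: n / total for family, n in counts.items()}
```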
I'd like to know whether it's possible and feasible to look at the PDF source and validate the CMAP (uni yes and a non-Custom encoding), if present, or to apply other heuristics to find problematic fonts before extracting text or OCRing the page.
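As a starting point for reading the PDF source in-process (no subprocess at all), here is a sketch with pikepdf; the library choice is my assumption, and note it only checks for the presence of /Encoding and /ToUnicode entries, not whether a ToUnicode table is damaged:

```python
import pikepdf

def page_font_info(pdf_path: str):
    # Walk each page's /Resources -> /Font dictionary in the PDF source.
    with pikepdf.open(pdf_path) as pdf:
        for pno, page in enumerate(pdf.pages, start=1):
            resources = page.obj.get("/Resources")
            if resources is None or "/Font" not in resources:
                continue
            fonts = resources["/Font"]
            for name in fonts.keys():
                font = fonts[name]
                yield {
                    "page": pno,
                    "resource_name": str(name),
                    "basefont": str(font.get("/BaseFont", "[none]")),
                    # /Encoding may be a name (e.g. /Identity-H), a
                    # dictionary, or absent (shown as Custom by pdffonts).
                    "encoding": str(font.get("/Encoding", "[absent]")),
                    "has_tounicode": "/ToUnicode" in font,
                }
```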
Wisdom I found while researching:
- "In order to successfully extract text (or copy'n'paste it) from a PDF, the font should either use a standard encoding (not a Custom one), and it should have a /ToUnicode table associated with it inside the PDF"
- "Fonts are embedded as subsets (indicated by the XYZABC+ (random, unique) prefixes to their names, as well as by the yes in the emb and the sub columns)"
- "Most PDFs which are in the wild out there do not embed the full font anyway, but only subsets. Extracting a subset of a font is only useful in a very limited scope, if at all"
- "Even if a /ToUnicode table is there, text extraction may still pose a problem, because this table may be damaged, incorrect or incomplete -- as seen in many real-world PDF files"
Example of garbled text in one of my PDF files:
0\n1\n2\n3\n4\n2\n0\n3\n0\n5 6\n6\nÿ\n89 ÿ\n4\n\x0e\n3\nÿ\n\x0f\x10\n\x11\n\x12\nÿ\n5\nÿ\n6\n6\n\x13\n\x11\n\x11\n\x146\n2\n2\n\x15\n\x11\n\x16\n\x12\n\x15\n\x10\n\x11\n\x0e\n\x11\n\x17\n\x12\n\x18\n\x0e\n\x17\n\x19\x0e\n\x1a\n\x16\n2 \x11\n\x10\n\x1b\x12\n\x1c\n\x10\n\x10\n\x15\n\x1d29 2\n\x18\n\x10\n\x16\n89 \x0e\n\x14\n\x13\n\x14\n\x1e\n\x14\n\x1f\n5 \x11\x1f\n\x15\n\x10\n! \x1c\n89 \x1f\n5\n3\n4\n"\n1\n1\n5 \x1c\n89\n#\x15\n\x1d\x1f\n5\n5\n1\n3\n5\n$\n5\n1 5\n2\n5\n%8&&#\'#(8&)\n*+\n\'#&*,\nÿ\n(*ÿ\n-\n./0)\n1\n*\n*//#//8&)\n*ÿ\n#/2#%)\n*,\nÿ\n(*/ÿ\n/#&3#40)\n*/ÿ\n#50&*-\n.()\n%)\n*)\n/ÿ\n+\nÿ\n*#/#\n&\x19\n\x12\nÿ\n\x1cÿ\n,\x1d\n\x12\n\x1b\x10\n\x15\n\x116\nÿ\n\x15\n7\nÿ\n8\n9\n4\n6\nÿ\n%\x10\n\x15\n\x11\n\x166\nÿ\n:\x12\x10;\n2\n*,\n%#26\nÿ\n<\n$\n3\n0\n3\n+\n3\n8\n3\nÿ\n+\nÿ\n=\x15\n\x10\n6\nÿ\n>\n9\n0\n?\nÿ\n4\n3\n3\n1\n+\n8\n9\n3\n<\n@A\nB\nC\nD\nEÿ\nGH\nI\nÿ\nJ\nJ\nK\nL\nJ\nM\nJ\nN\nO\nP\nO\nQ\nI\n#\x1bÿ\n0\n1\nÿ\n\x1c\n\x10\nÿ\n*\x1a\n\x16\n\x18\nÿ\n\x1c\n\x10\nÿ\n0\n3\n0\n5\n\x0e\n/\x10\n\x15\n\x13\x16\n\x12\nÿ\n/\x10\n\x16\n\x1d\x1c\x16\n\x12\n6\nÿ\n* \x19\n\x15\n\x116\nÿ\n\x12\n\x19\n\x11\n\x19\n\x12\n\x16\nÿ\n\x15ÿ\n/*-\n\x0e\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\n(\x10\nÿ\x16\n\x1c\n\x10\n\x1bÿ\n\x1c\n\x12\nÿ\n%\x13\n\x10\n9\n\x10\nÿ\n\x1c\n\x10\nÿ\n\'\x12\n\x1a\x15\n\x10\n\x11\n\x10\nÿ\n\x1c\n\x12\nÿ\n%\x16\n\x16\n\x10\nR\n\x10\n\x1c\x16\n\x12\nÿ\n\'\x10\n\x16\n\x12\n\x18\nÿ\n\x1c\n\x12\nÿ\n-\n\x19\x11\n1\n\x12\nÿ\n\x1cÿ\n#\x11\n\x12\n\x1cÿ\n\x1c\n\x10\nÿ\n*\x18\n\x12\nR\x126\nÿ\n/\x16\n\x12\n\x0e\n& \x10\n\x12\n\x15\n\x12\nÿ\n%\x10\n\x18\x11\n\x16\n\x10\nÿ\n:\x12\x13\n\x12\n\x1c\x0e\nÿ\n*\x19\n\x11\n\x19\n\x10\n+\x10\nÿ\n\x10\nÿ\n&\x10\nR\x11\n\x16\n\x10\n+\x10\nÿ\n\x15ÿ\n/*-\n2\n2\'<\nÿ\n+\nÿ\n#S\n\x11\n\x16\n\x12\n\x17\n\x19\n\x1c \x12\n\x18\nÿ\n*\x1c\n\x1b\x15\x11\n\x16\n\x12\n\x11\n\x1d\x0e\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\nÿ\n*\x11\n\x10\n\x15 \x12\n\x1b\x10\n\x15\n\x11\n\x10\n6\nTU\nV\nWU\nXÿ\nYXÿ\nTU\nV\nW\nX\nXYZU\n[U\nT\\]X\\U\nW\nX\nVD\n^\n_\n`\nÿ\nab\nÿ\nXGb\nc\nE^\nd\nO\nP\nO\nQ\nP\ne\nO\nf\nP\nf\nJ\nf\nP\ne\ng\nGb\nh_\nEGI\niaA\nYjTk\nXlm@ YjTk\nXlmX] ]jTk@[Yj] U\nZk]U\nZU\n] X]noU\nW\nX] W@V\n\\\nX]\nÿ\n89\nÿ\n89\np ÿ\nq\n(\x10\x14\n\x12\x13\n8r\nIOV\x11\x03\x14\n(VWH\x03GRFXPHQWR\x03p\x03FySLD\x03GR\x03RULJLQDO\x03DVVLQDGR\x03GLJLWDOPHQWH\x03SRU\x03(00$18(/$\x030$5,$\x03&$/$\'2\x03\'(\x03)$5,$6\x036,/9$\x11\x033DUD\x03FRQIHULU\x03R\x03RULJLQDO\x0f\x03DFHVVH\x03R\x03VLWH\x03\x0f\x03LQIRUPH\x03R\x03SURFHVVR\x03\x13\x13\x13\x13\x16\x17\x18\x10\x1a\x18\x11\x15\x13\x15\x14\x11\x1b\x11\x13\x15\x11\x13\x13\x1a\x16\x03H\x03R\x03\nFyGLJR\x03\x17(\x14\x14\x16\x14\x13\x11\x03
The text above was extracted from page 25 of this document using pdftotext.
For that page, pdffonts outputs:
```
name                                 type              encoding         emb sub uni object ID
------------------------------------ ----------------- ---------------- --- --- --- ---------
[none]                               Type 3            Custom           yes no  no       13  0
DIIDPF+ArialMT                       CID TrueType      Identity-H       yes yes yes     131  0
DIIEDH+Arial                         CID TrueType      Identity-H       yes yes no      137  0
DIIEBG+TimesNewRomanPSMT             CID TrueType      Identity-H       yes yes yes     142  0
DIIEDG+Arial                         CID TrueType      Identity-H       yes yes no      148  0
Arial                                TrueType          WinAnsi          yes no  no      159  0
```
It's easy to identify the font named [none] as problematic. My take so far, given the data I've analysed, is to mark fonts with a Custom or Identity-H encoding, no ToUnicode map, or a [none] name as likely problematic. But, as I said, I also found problematic cases whose fonts had a ToUnicode table and a non-Custom encoding. And, as far as I know, a broken font may define only a single character without hurting the overall readability of the page, in which case OCRing that page would be unnecessary. In other words, a font without a ToUnicode conversion on a given page does not mean the page's text is totally affected.
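Here is a sketch of that heuristic over pdffonts output, deriving the column boundaries from the dashed ruler line so that multi-word values like CID TrueType survive the parse (helper names are mine, and, per the caveat above, the uni == "no" test will overflag pages where a broken font renders only a character or two):

```python
import subprocess

def pdffonts_rows(page_pdf: str) -> list[dict]:
    out = subprocess.run(["pdffonts", page_pdf],
                         capture_output=True, text=True, check=True).stdout
    header, ruler, *rows = out.splitlines()
    # Column boundaries come from the dashed ruler under the header.
    spans, start = [], 0
    for block in ruler.split(" "):
        spans.append((start, start + len(block)))
        start += len(block) + 1
    spans[-1] = (spans[-1][0], None)  # last column runs to end of line
    names = [header[a:b].strip() for a, b in spans]
    return [{n: row[a:b].strip() for n, (a, b) in zip(names, spans)}
            for row in rows if row]

def page_is_suspect(page_pdf: str) -> bool:
    return any(
        f["name"] == "[none]"
        or f["encoding"] in ("Custom", "Identity-H")
        or f["uni"] == "no"
        for f in pdffonts_rows(page_pdf)
    )
```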
I'm looking for a solution that is better than running a regex over the extracted text to spot garbling.
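Combining the two signals is what I'm really after: OCR a page only when the suspect fonts account for a meaningful share of its characters. A sketch using the two helpers above, under the assumption that pdftohtml's family attribute carries the same subset-prefixed names that pdffonts prints (the 10% threshold is an arbitrary placeholder, and a [none] font won't match any family name):

```python
def should_ocr(page_pdf: str, threshold: float = 0.10) -> bool:
    suspect = {
        f["name"] for f in pdffonts_rows(page_pdf)
        if f["name"] == "[none]"
        or f["encoding"] in ("Custom", "Identity-H")
        or f["uni"] == "no"
    }
    if not suspect:
        return False
    shares = char_share_by_font(page_pdf)
    return sum(s for family, s in shares.items() if family in suspect) >= threshold
```

This still spawns two subprocesses per page, though, which is exactly the overhead I'd like to avoid by binding the tools.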
Examples of PDF pages that I had to OCR
All pages below contain text in Portuguese, but if you try to copy the text and paste it somewhere, you will see universal gibberish.
- Page 146 of http://tjdocs.tjgo.jus.br/documentos/584544
- Pages 26, 80, 81, 82, 83 and 84 of http://tjdocs.tjgo.jus.br/documentos/584556
- Page 23 of http://tjdocs.tjgo.jus.br/documentos/584589
Source: How to identify likely broken pdf pages before extracting its text?