Matthew Clapham
Jan 28, 2024

--

This is certainly what I perceive in my field of translation. Neural machine translation went from 'embarrassingly bad' to 'really quite decent in many circumstances' in a few years. But it really doesn't seem to be getting much better of late at handling the stuff it's not good at: source text that is more than slightly verbose, complex, convoluted, ambiguous, illegible (if it has to be OCR'd), or just incorrect. All those 'I don't think that means what you think it means' examples.

And I think it's hard for automated systems to learn from examples there, because the workaround a human brain comes up with in each case doesn't follow anything like a set pattern that could easily be recognised. More training examples might actually make the job harder.

--

Written by Matthew Clapham

Professional translator by day. Writer of silly and serious stuff by night. Also by day, when I get fed up with tedious translations. Founder of Iberospherical.
