Using JSON_data, it's so slow as to no longer be helpful, and I can't think of any way to speed it up sufficiently. Lua simply can't conduct that kind of search over a large collection of strings efficiently. (Also, something tells me that some people might not be so enthusiastic about such a heavy module being parsed on every keystroke.)
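For reference, the kind of per-keystroke search involved is just prefix matching over the full list of language names. A minimal client-side sketch (with a hypothetical toy list; the real list would come from the language data module):

```javascript
// Sketch of per-keystroke prefix matching over a preloaded array,
// instead of re-parsing a heavy module server-side on every keystroke.
function makeMatcher(names) {
    // Lowercase once up front so each keystroke only does comparisons.
    var lowered = names.map(function (n) { return n.toLowerCase(); });
    return function (prefix, limit) {
        var p = prefix.toLowerCase();
        var out = [];
        for (var i = 0; i < lowered.length && out.length < (limit || 10); i++) {
            // lastIndexOf(p, 0) === 0 is an old-browser-safe startsWith
            if (lowered[i].lastIndexOf(p, 0) === 0) {
                out.push(names[i]);
            }
        }
        return out;
    };
}

var match = makeMatcher(['English', 'Esperanto', 'Estonian', 'French']);
console.log(match('es')); // ['Esperanto', 'Estonian']
```

Even a naive linear scan like this is fast enough in the browser for a few thousand names; the cost is shipping the data to the client once, not the per-keystroke work.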
My recommendation is to give up on transitioning WT:EDIT's autocomplete away from the otherwise-obsolete langrev templates, despite the issues regarding the data becoming out of date.
Oh, and to answer the first question: User:Yair rand/TabbedLanguages2.js, which is loaded for some users by a button set up in MediaWiki:Gadget-legacy.js, uses the langrev subpages for language name autocomplete in a little-used and buggy feature for adding new language sections. That version of TabbedLanguages is going to be replaced with the gadget code as soon as I get around to it, and eventually deleted entirely if we ever get the gadget enabled by default, so I wouldn't consider that an obstacle.
Well, I do not want to give up so easily.
I managed to speed it up a bit by adding some simple caching. The subjective responsiveness was comparable, even though the requests took three times as long (with most of the time apparently spent executing the Lua code). Not sure if that helps much; I think some caching on the server side would.
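The "simple caching" I mean is roughly this: memoize completed lookups by prefix, so repeated keystrokes (backspacing, retyping) skip the round-trip entirely. A sketch, where `fetchMatches` is a hypothetical stand-in for the actual API request:

```javascript
// Memoize lookups by prefix. The cached value is the Promise itself,
// so concurrent requests for the same prefix share one round-trip.
function cachedLookup(fetchMatches) {
    var cache = Object.create(null);
    return function (prefix) {
        if (!(prefix in cache)) {
            cache[prefix] = fetchMatches(prefix);
        }
        return cache[prefix];
    };
}
```

Caching the promise rather than the resolved value also means a burst of identical in-flight requests collapses into one, which matters when the server-side execution is the slow part.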
(I have also tried the scribunto-console API just out of curiosity; the overhead seems comparable, and often worse.)
I also thought about creating a JS library which would manage a much more sophisticated cache of language data and various code-name mappings. It would be a longer-term project, though.
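To make the idea concrete, such a library might build bidirectional code/name maps once from fetched data and keep the raw data around for other lookups. A rough sketch; all names here are hypothetical, not an existing interface:

```javascript
// Hypothetical shape for a client-side language-data cache:
// bidirectional code <-> canonical-name maps built once.
function LanguageData(raw) {
    // raw: { code: { name: '...' }, ... }
    this.raw = raw;
    this.byName = Object.create(null);
    for (var code in raw) {
        this.byName[raw[code].name] = code;
    }
}
LanguageData.prototype.nameToCode = function (name) {
    return this.byName[name];
};
LanguageData.prototype.codeToName = function (code) {
    return this.raw[code] && this.raw[code].name;
};

var data = new LanguageData({
    en: { name: 'English' },
    eo: { name: 'Esperanto' }
});
console.log(data.nameToCode('Esperanto')); // 'eo'
```

The longer-term part would be keeping this fresh (invalidation when the data modules change) and lazy-loading the heavier mappings only when a feature needs them.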