Consider the threat once posed by the calculator to homework that involved arithmetic. In some cases, the point of such exercises is to learn arithmetic or mathematical principles through practice. Teachers mostly shifted those exercises to the classroom, and prohibited the use of calculators in that setting.
In other cases, homework involved tedious arithmetic, but this was entirely incidental to the learning goals of the exercise (which might be about, say, gravity). After initial reluctance, teachers recognized that it was pointless to make students do these manually. Back when I was in high school, we were forced to use log tables out of the mistaken belief that there is pedagogical value in doing tedious calculations by hand. Mercifully, log tables have now joined the abacus.
The same changes are likely to play out with language models. In some cases, the point of assigning an essay is to teach writing skills or critical thinking. The availability of language models has not made these skills any less essential. To prevent cheating on this type of exercise, instructors could move it into the classroom. Even better: there are many ways to redesign the exercise so that the tools aren't helpful. These redesigns take advantage of inherent limitations of language models that are unlikely to be fixed soon.
Widespread adoption of new technologies such as AI text generators may lead to more "flipped classroom" models, and I agree that this might ultimately be a good outcome. But during that transition, we're going to see a lot of fear-mongering and conservative pushback, because people dislike change and tend to resist it at first.
I also think that plagiarism-checking tools will start to incorporate the same AI models that students are using. What happens when the models get advanced enough to enumerate all the permutations of a simple idea, so that what you might have said on a topic – and how you might have said it – is indistinguishable from what a language model with enough training data would generate?