Hopping from Medium over to the Guardian news site, I just read this in a piece quoting the Green Party on the UK health service's problems:
"More reorganisation and target setting will simply be rearranging the beds in the corridor. "
There's nothing outlandishly creative there; in fact, it's little more than political cliché, slightly rejigged.
But it made me wonder how many steps ChatGPT etc. are from producing even that level of imagery.
Sure, if I prompted it to 'come up with a version of the idiom "rearranging the deckchairs on the Titanic" that would make sense in the context of a health system', it quite probably would. But at that level of prompt specification you may as well write the copy yourself.
To spontaneously produce the indirect reference to the Titanic (an impressive but doomed ship) and then swap the deckchair phrase for beds in corridors, thereby referencing a specific and current overcrowding issue (even assuming you give the LLM access to more recent training data)...
I don't know - I'm not sure how much imagination or creativity is needed to do stuff that still goes way beyond the 'perplexity' and 'burstiness' scores that LLM output achieves from a basic prompt like 'Write a tweet from the Green Party criticising Labour health policy'.
Does that make sense?