[personal profile] seawasp

I have to remember to put in cuts so I don't post walls of text.



Early AI instantiations (of the current era) were able to generate sensible responses to individual prompts, as long as the responses were relatively short. These would become increasingly "off" the longer the response went, in great part because early on, each text prediction happened in isolation -- the model had no mechanism for being aware of what the prior responses had been, and so by the time it was ten sentences in, the subject and focus of the discussion had entirely drifted.

This was an obvious problem, and subsequent versions have included more and more context memory -- the ability to effectively "keep in mind" prior interactions -- and these context windows now reach novel, or even greater-than-novel, length.
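The difference between those two eras can be sketched in a toy mock-up. This is purely illustrative -- no real AI library or API, and the class and function names are my own invention -- but it shows why a stateless generator drifts while a context-window generator does not: the latter re-feeds recent turns back in on every call.

```python
# Illustrative sketch only (hypothetical names, no real AI API):
# stateless generation vs. generation with a rolling context window.

def reply_stateless(prompt: str) -> str:
    # Early models: each prompt is handled in isolation, with no memory
    # of the conversation, so long exchanges drift off-topic.
    return f"response to: {prompt!r}"

class ContextWindowModel:
    """Later models: prior turns are re-fed as context on every call."""

    def __init__(self, max_turns: int = 8):
        self.history: list[str] = []
        self.max_turns = max_turns  # stand-in for a token limit

    def reply(self, prompt: str) -> str:
        self.history.append(prompt)
        # Keep only the most recent turns that fit in the "window".
        window = self.history[-self.max_turns:]
        return f"response given context {window!r}"

model = ContextWindowModel(max_turns=2)
model.reply("Who is Virigar?")
out = model.reply("What are his goals?")
# The second reply is conditioned on both questions, not just the last one.
```

The point of the sketch is the `window` slice: a real context window is measured in tokens rather than turns, but the behavior is the same -- once the conversation exceeds the window, the oldest material silently falls out.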

This has led to the question: "Well, if the thing were trained on your books, could it then write a new Ryk Spoor novel? It can now keep the whole novel 'in mind' while writing, so it won't just lose the thread of the narrative and collapse."

The answer is, it turns out, "no, not a chance". 

The REASON for this answer is that writing a novel is not just a matter of "produce a reasonable, consistent body of SF/F text in Ryk Spoor's writing style(s)". That's certainly ONE important element of my novels, and not an insignificant one. Consistency means that the text should keep the same character or characters as the primary focus, that events should have some level of progression and flow, and that there won't be glaring omissions or inconsistencies.

But the problem is that the context you get out of *reading* my novels is only a part of the context I have when *writing* them. The world of Zarathan, for example, has history and depth that has never been published anywhere. Some of it has never been written down. Some of it WAS written down, but the written version was destroyed. I have a very deep and broad understanding of the history of that world, one that lets me put in offhanded comments which, across several books, may allow a reader to recognize events or individuals' past presence without my ever stating it.

You don't get the Ancient Sauran language if you read the books -- just some of the individual words that get translated. You don't know how I decide what those words are, what they SOUND like -- how to tell a PROPERLY CONSTRUCTED Ancient Sauran word from one that's sorta-right but not really. And to a good extent *I* can't tell you how I decide. In some cases I know explicitly that it's this word, modified to fit this pattern, and then combined with another word, but often it's "that sounds right".  A lot of my names for people or places or things are "that sounds right", but the "sounds right" really has a foundation -- a vision or sound in my head that I can recognize as being correct because it fits the model I've created for the language. 

The MODEL is what will be missing in all cases. You can't write an Arenaverse novel without knowing the answers to the Big Questions, because those answers directly guide what can, and cannot, happen. You can't write Virigar's actions in any given plot circumstance if you don't really understand who and what he is, and what his goals are -- only some of which can be derived from the published work. 

You have to have a plot IN MIND when you write. You need to know what KIND of book you're writing, and what you want that book to accomplish when it's complete. In my case, you need to have specific scenes and events in mind that you're writing TOWARDS -- scenes that, no matter what other side events the characters may involve themselves in, they will eventually gravitate to and become participants in.

You have to be able to recognize when it's going WRONG -- when you've written four chapters that have taken you down a narrative path that you don't actually like. It may be a perfectly WORKABLE narrative path, but it's not the RIGHT path for the story you're telling. 

None of this is something that current AIs are capable of. To a great extent, it's the same as my "driving a car" challenge. The limitation of modern autonomous vehicles is that while they actually have faster reaction times and more precise sensing capabilities than humans, they don't have any true understanding of the world they're driving through. They cannot address events that are EMERGENT from the world's nature, only events as they are individually sensed, without greater context -- a context that cannot be extracted merely from training content, because training content is by its nature a concatenation of isolated elements, not a structure of the relationships, implicit and explicit, among the world's events and objects.
 
(While I've been writing this, it's also become clear to me that there would be a major problem with current AIs whether or not they were actually sentient/sapient. I should write that one up soon so I don't forget, because it has a big bearing on all the reported "the AI lied/tried to escape/blackmailed" events, and on why that kind of thing is bloody inevitable given the way these things are trained. But that's a different post.)




