AI is perfect for lazy DnD players and precisely no one else

Large Language Model tools can generate pictures and prose that's good enough for Dungeons and Dragons DM notes - it’s a low bar to clear.

AI in DnD - a large robot construct sprouting a variety of weird appendages, including a plate of pancakes, a puppet, and a milk frother

I'm a critic of Large Language Model technology - generally marketed as 'AI' - but I am quite happy to admit it has its uses. I've known plenty of Dungeons and Dragons players, in person and online, who use it to whip up images or text of a character or location, or even a plotline for a campaign, and you know what? It does the job. But that is pretty close to the limits of its capabilities, at least when it comes to creativity, and that is so very much less than what the tech goliaths are promising it's capable of.

Do I think it's a good idea for dungeon masters to delegate their DnD preparations to a machine? Most of the time, no - my colleague Gab Hernandez wrote a great article about what you lose, both in terms of personal growth and personal satisfaction, by relying on machine generated responses instead of your own creativity. But that's a separate question from whether or not the machine is capable of coughing up (say) functional images of homebrew DnD monsters for your home adventures.

I'm also setting aside a whole layer of other criticisms of AI technology here. I make my living as a writer, and I have a vested interest in humans being paid for their creative labor - I don't believe that the way Large Language Models are 'trained' on data from the internet can be considered fair use, which makes it a copyright violation on a massive scale that jeopardizes artists' livelihoods. Not to mention the ocean-boiling energy requirements of running the things.

But again, those are separate criticisms and not my focus here. You could have LLMs trained solely on public domain data, or on the privately controlled data of a corporation - Wizards of the Coast mooted such a thing. Maybe they could become less energy intensive, or perhaps they could achieve results that are worth the cost in energy and impact on the climate. Let's set that aside.

AI in DnD - a chrome humanoid robot runs through a boiling landscape

ChatGPT and Midjourney can generate text and images in response to prompts. As I covered in an earlier article, there's evidence this can function as a chatbot that does a passable job of DMing games. If you want to generate a description or image of a place or creature, or an adventure outline, an LLM can do it, and it will be good enough for your home DnD game. That clears the bar for being a neat toy.

This does not mean that it's suitable for use in the creative industries - and I don't just mean that customers don't like it, I mean that it's not capable of industry standard results. If you've seen AI generated animation or video, or (even worse) that AI generated Minecraft thing, you'll know how inconsistent it is, incapable of maintaining the scene from moment to moment. This isn't a problem with the technology that needs more time to develop - it's foundational to how the tech works.

LLMs are the hyper-evolved grandchildren of face recognition technology and porn filters. The input for an LLM is a mass of 'training data' - information that humans have manually tagged with labels describing it. Since all the data in a computer boils down to numbers, the AI can look at everything that carries a particular tag like "Dungeons and Dragons" and try to spot patterns connecting their numbers. The more tags and the more data it has, the better it should get at spotting patterns, and the more abstract and rare the patterns it can find.
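To make that concrete, here's a drastically simplified, entirely hypothetical sketch in Python: each piece of 'training data' is boiled down to a list of numbers, each tag's 'pattern' is just the average of the numbers that carry that tag, and new data gets matched to whichever pattern it sits nearest to. Real models learn vastly richer patterns than averages, but the principle of matching numbers to learned patterns is the same.

```python
def learn_patterns(training_data):
    """training_data: list of (numbers, tag) pairs.
    Each tag's 'pattern' is the average of its tagged numbers."""
    sums, counts = {}, {}
    for numbers, tag in training_data:
        counts[tag] = counts.get(tag, 0) + 1
        acc = sums.setdefault(tag, [0.0] * len(numbers))
        for i, n in enumerate(numbers):
            acc[i] += n
    return {tag: [v / counts[tag] for v in acc] for tag, acc in sums.items()}

def closest_tag(patterns, numbers):
    """Match new data to the nearest learned pattern - no 'understanding',
    just distance between lists of numbers."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(patterns, key=lambda tag: dist(patterns[tag], numbers))

# Made-up 'tagged data': two numbers per item, two tags
data = [
    ([1.0, 0.9], "dragon"),
    ([0.9, 1.1], "dragon"),
    ([0.1, 0.2], "dungeon"),
    ([0.2, 0.1], "dungeon"),
]
patterns = learn_patterns(data)
print(closest_tag(patterns, [0.95, 1.0]))  # → dragon
```

The classifier never knows what a dragon is - it only knows that these numbers sit closer to one average than the other, which is the sieve at work.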

Note that the LLM hasn't read, looked at, or understood anything. It's a pattern matching system - a phenomenally powerful pattern matching system, I might add. But a sieve can separate stones from sand without knowing the difference between them. A human separating sand from stones would have to apply their knowledge of what a stone is to perform the task - the sieve solves the same problem a different way.

AI and DnD - MtG card art showing a silver robot deconstructing another robot

When it's time to make things, the LLM reverses the process. In response to the prompt, it deploys the patterns and a sprinkling of randomness to generate text or images. Statistically speaking, the output will fall within the numerical patterns that the LLM is operating within.

There are three big old issues here. First, it's really hard to know what patterns exactly the LLM has identified or exactly why it's identified them. Second, who knows which patterns it's going to respond to when you put in the prompt. Third, and most critically, at no point has the machine thought about anything. Thinking, as it turns out, is very important for certain tasks.

Here's how this leads to inconsistent animations. When making a CGI animation, you would design a character, create a 3D model of them, rig it for animation, then code in its movements. In AI animation, none of that infrastructure exists. It generates an output in which each frame is a statistically probable successor to the frame that came before it, when they're boiled down to numbers. It's not a film of a character moving through virtual space: it's just a very big number that's got statistical similarities to the animations in the training data that gave the LLM its patterns. That's not enough to create something truly coherent.
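As a loose illustration (and nothing more), imagine boiling each frame down to a handful of numbers. If every frame is just a plausible nudge of the frame before it, with no underlying model pulling things back into place, small deviations accumulate freely and the 'scene' wanders:

```python
import random

random.seed(1)  # fixed seed so the drift is reproducible

frame = [1.0, 2.0, 3.0]  # a 'frame' boiled down to three numbers
start = list(frame)

# Each new frame is a plausible successor to the last - close to it,
# but with no memory of the original scene to anchor it.
for step in range(200):
    frame = [v + random.uniform(-0.1, 0.1) for v in frame]

drift = [abs(a - b) for a, b in zip(frame, start)]
print(drift)  # every value has wandered from where it began
```

Each individual step looks fine - it's the absence of anything anchoring the whole sequence that produces the melting faces and teleporting props.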

AI in DnD - a humble seeming robot retrieves a sword from a shallow pond

The fact that AI generation has an output, but no creative process, is a problem even for single images. Real creative studios want to be able to edit their outputs, to iterate on ideas and tweak them 'til they're right. A minor problem with an AI generated image is that, because there's no construction process, it lacks the multiple layers, filters, masks, and so on that a real digital image would contain - getting a human to edit it is far clunkier than getting a human to edit human-made work.

More importantly, you can't make 'tweaks' using an AI, because it's not interacting with the image the same way you are. It only sees the numbers and the patterns. Tell it to "give the wizard a bigger hat" and it doesn't know which numbers are the "wizard", which are the "hat", and it can't "make the hat bigger" the way an artist would - by isolating the hat and scaling it up or redrawing it. Instead it looks at the numbers in its last output, then pushes them in the direction of the "bigger hat" pattern, and generates a new, statistically likely outcome.
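A hand-wavy sketch of what 'pushing the numbers toward a pattern' means, with entirely made-up numbers: treat the image as a list of values and the "bigger hat" pattern as a direction in that number space, then nudge the whole image along it. Real image models are vastly more complex, but the editing step is this kind of numerical nudge, not 'select the hat and scale it up'.

```python
image = [0.2, 0.7, 0.1, 0.9]         # the last output, as numbers
bigger_hat = [0.0, 0.3, -0.1, 0.0]   # a learned 'direction' (invented here)
strength = 1.0                       # how hard to push toward the pattern

# Nudge every value along the pattern's direction - nothing here
# isolates 'the hat'; the whole image shifts at once.
new_image = [v + strength * d for v, d in zip(image, bigger_hat)]
print(new_image)
```

Notice that values unrelated to the hat can move too - which is why asking for one tweak so often changes the wizard's face as well.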

This lack of understanding becomes really critical when we get to text. LLMs don't know what they're saying - they're just giving you a probabilistically likely response to whatever you just prompted. That's really, really bad if accuracy is important. The team at Wargamer fact checks everything we write, and we actively avoid Google AI search summaries because they constitutionally cannot be trusted.

This isn't the normal 'don't trust what you read on the internet' problem of people lying, misremembering things, or repeating hearsay they haven't fact checked - we can work around that. It's more like using autocomplete to write every word of a text message. Whatever result you get is a statistically likely sentence, and it could be the right answer - but what are you willing to bet on that?
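The autocomplete comparison can be made literal in a few lines of Python. This toy, trained on a made-up corpus, records which word most often follows which, then chains the most likely successors together - producing a fluent sentence with no regard for whether it's true:

```python
from collections import Counter, defaultdict

# A made-up miniature 'training corpus'
corpus = ("the wizard casts a spell the wizard casts a die "
          "the dragon rolls a die").split()

# Count which word follows which
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def autocomplete(word, length=4):
    """Chain together the most statistically likely next words."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]  # likeliest successor
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # → "the wizard casts a die"
```

Because "a die" outnumbers "a spell" in the corpus, the toy confidently claims the wizard casts a die - statistically plausible, factually nonsense. That, in miniature, is the accuracy problem.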

AI in DnD - a hovering blue gateway freezes a hollow suit of samurai armor

None of which disqualifies AI as a tool for DnD prep. An image made to show your PC to your DnD group doesn't need to be fit to enter a publication workflow. A description of the NPCs in a town doesn't have to be based in fact, nor does it even have to be internally consistent. None of the stuff you need for DnD has to be reliable or original or even finished - it's just ideas, after all. Other costs of AI aside, this is a genuine set of capabilities - but a really quite narrow one.

That is critical to bear in mind when evaluating potential uses for this tech, particularly when businesses are considering it for workflows, or coders are putting it into applications. You wouldn't buy a house with 'statistically probable wiring', and you shouldn't use an LLM to create, or as, a system where accuracy is at all important.

LLMs have been aggressively marketed as Artificial Intelligence, but that really is just marketing. The fact that they can perform tasks which, until now, only humans could perform - generating coherent sentences that more or less continue a conversation, producing novel images - is surprising, shocking even, because up until LLMs existed, the only way to do those things was the human way, which involved intelligence and abstract reasoning. But I could say the same thing for pocket calculators and doing long division.

I will make a final allowance for AI technology - it grew out of a pattern recognition tool, and it seems to be showing great potential as a pattern recognition tool, provided it has access to extremely large and well-described datasets. There's a risk of these systems encoding biases from whoever designed them, and of users abdicating their judgment or moral responsibility to the system - but the same can be said of other digital decision making systems, and that reflects on their suitability, not their capability. Suitability in this sphere isn't evidence of all-encompassing intellect - it's a sign of a tool directed in line with its capabilities.

We have a 'no AI' rule in the Wargamer Discord community, primarily because of the ethical concerns around the unauthorised use of copyrighted work in training data, and the high energy cost of using it. You however are welcome, dear human!

If you want to exercise your gray matter, check out Wargamer's guides to the DnD classes and DnD races to get your next character brewing. And if you've been using ChatGPT to write your adventures, might we suggest just running one of the official DnD campaigns instead? Some of them are really pretty good.