On February 19, Ryan Dancey parted ways with board game firm AEG after ten years, most recently as its Chief Operating Officer. It was a sudden and unplanned departure. The split came a day after Dancey claimed on social media that "I have zero reason to believe that an AI couldn't 'come up with Tiny Towns or Flip Seven or Cubitos'", putting him and the company at the center of a storm of controversy. He's since said that those remarks were taken out of context. Having interviewed him, I can say one thing - his views on AI are far too big to summarize in a single quote.
By the time we spoke, Dancey had already lost his job. In follow-up statements he argued that his original comments - which you can find summarized in this article - did not reflect his views. The claim that an AI could "come up with" a board game, he says, was meant in the narrow sense of producing usable ideas, not game prototypes, much less finished games. He told Board Game Wire he voted for AEG to exclude AI from the creative process, and had written clauses into the firm's boilerplate contracts prohibiting freelancers from using it.
Looking through Dancey's LinkedIn posts, though, it's clear that he experiments with AI systems a lot, and is effusive about the technology's potential. Isn't there a contradiction between these stances - claiming to support the creative industries and protect them from AI, while reveling in the possibilities of the technology? Here's what I found out.
The AI horizon
Perhaps it's the recent sting of losing his job over careless comments, but Dancey is circumspect when I ask him to predict the future of AI. "I don't believe that there's a bright line that says this is where we're going", he says. "I like to think in scenarios, and then try to weight those scenarios by my assessment of probability".
"I think that there is a low, but not zero probability that it destroys humanity," Dancey says, but clarifies: "I actually think that number is very low, which is why I continue to exist and I'm not just sitting on a cliff, staring at the sunset, preparing for the end".
"On the other extreme, I think that there is a low… chance that what we are seeing today is the medium-term best that this technology is going to deliver". In this scenario, he says, "[i]t's just not going to get much better from here: it's going to get refined, and maybe it gets less expensive". By medium term he means 10 years: "I think it's foolish to imagine that it won't change in the long term".
"The most likely case is that the technology takes a significant step in the next two years… to a place where an AI agent can do most of the work of a white collar worker," Dancey says: "You will be able to talk to it in human language, and it will respond in human language". He doesn't think it'll be perfect: "I think it will make different kinds of errors than humans make, and some errors humans make, it won't make". But cost-wise, he reckons "[i]t might be pennies on the dollar to what it costs to employ a white collar employee".
It's a vivid possibility to him. "It's entirely disruptive to the most productive parts of our society that we rely on to drive our consumer economy, it will hit those people dead on". Another clarification: "When I say those people, I mean me, like I am that person. I'm a white collar worker, and I'm basically saying a robot could replace me in two years".
As for the odds of this happening: "I think 30% or less. But 30% is a really big number… if I had a 30% chance my house is going to burn down, I would do an awful lot to insure myself against that possibility".
Not long ago, blockchain was the big disruptive technology that was going to change the world - it has certainly made it easier to buy and sell drugs. Then there were NFTs (laughable) and the Metaverse (pathetic), which executives rushed to like lemmings. To me, tech-bro promises for Large Language Models have seemed similarly inflated.
So, what's led Dancey to such different conclusions from mine?
Experiments with AI
Dancey regularly uses AI. "I have subscriptions to Claude, and I use a tool called Warp, which is an AI enabled terminal for System Administration," he says. He's used ChatGPT "in various permutations" but now chooses not to because he's "not a very big fan of OpenAI or Sam Altman".
Dancey says he's built several AI tools for non-creative projects. "I built a tool to install a wiki that AEG uses", he explains, as well as one that extracted balance sheet numbers from the firm's accounting software and spat them into a spreadsheet for bosses to use.
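Dancey doesn't describe how that balance sheet tool actually worked, so the sketch below is purely illustrative - the account names, the CSV export format, and the `summarize_balances` function are all my invention, showing the general shape of the task rather than AEG's real code.

```python
import csv

# Hypothetical account names - the real tool's targets aren't described.
WANTED_ACCOUNTS = {"Cash", "Accounts Receivable", "Inventory", "Accounts Payable"}

def summarize_balances(export_path, summary_path):
    """Read a CSV export from accounting software and copy only the
    balance sheet lines management cares about into a summary sheet."""
    with open(export_path, newline="") as src:
        picked = [row for row in csv.DictReader(src)
                  if row["account"] in WANTED_ACCOUNTS]
    with open(summary_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["account", "balance"])
        writer.writeheader()
        writer.writerows({"account": r["account"], "balance": r["balance"]}
                         for r in picked)
    return picked
```

The interesting part of Dancey's claim isn't this filtering logic - it's that an AI assistant wrote the equivalent glue code for him in days rather than months.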
Though he acknowledges it's been decades since he was at the cutting edge, Dancey's coding background makes him confident he can evaluate whether the AI is doing good work, or messing stuff up for humans to fix. "Probably the first 10 years of my professional life was in technology, I co-founded a company that was a tech company," he says, which then "morphed into an Internet service provider, and I wrote software as my day job".
And he's adamant AI is a game changer in that realm at least. "It took me days or weeks to write the software that… I would have taken months to write with no AI assistance", Dancey tells me. The software "runs flawlessly. It's got unit tests and everything," he adds. "I have no reason to believe it's not working as intended."
AI being able to populate finance spreadsheets doesn't mean it can come up with million-dollar board game ideas, of course - let alone actually design them. But a technology doesn't have to be good at something to be more cost effective than a human: a machine-knitted sweater may not be as nice as a hand-knitted one, but it's cheaper and faster. That's the business promise that's got so many bosses excited - which brings our conversation onto another precarious slice of the AI pie: the money.
Who's paying?
AI companies are drinking down capital investment. "The largest tech companies, Google, Microsoft, Meta, their cost of labor is 1%", Dancey asserts, "They have accumulated oceans of cash that they had nothing to spend on…" It strikes me that this would be a good reason to tax them and build public infrastructure. Instead, Big Tech is pouring money into AI.
Dancey sees this as a good thing. "Those oceans of cash are now being deployed usefully in the economy", he says, "They're being spent on plant, property, equipment, and salaries". Whether that spending is 'useful' turns on whether the AI technology itself is useful. If it is, data centers could be an investment in infrastructure akin to electrification or the Hoover Dam - though notably, not under public ownership.
But "[t]here's also a sense of circularity to the spending", Dancey admits. I immediately think of the diagram published by Bloomberg Technology showing that much of the economic activity around OpenAI and Nvidia consists of the pair passing investments back and forth without any actual demand from end users. "That's always dangerous, like, that's a red flag for a lot of economists, when you see circular spending". He adds: "We are now starting to see banks get involved, they're lending money, especially for construction" - which means major financial institutions are betting that the people building data centers are going to be profitable.
Dancey hesitantly admits that there is an "AI bubble" economy. I think he doesn't want to suggest that all AI economic activity is unproductive. But he's sufficiently convinced that the current AI economy isn't rational that he's put money behind that conviction. "My retirement fund, my 401(k), is invested in the S&P 500, [which] is heavily weighted towards tech stocks, so I built a hedge in the event that we experienced a dot com bubble in the near term. It's not going to salvage my retirement, but it'll help stop my retirement from taking the kind of hit it would have taken if I had been, you know, 10 years out from retirement in 1990".
He points out, however, that "the dot com bubble didn't kill the internet - if anything, the things that survived that extinction event were stronger and better than what had gone into it, which is the reason we have the world that we have today". For him, the questions of whether the current AI economy is sustainable, and whether the technology has a future, are distinct.
But for many, the question is not whether AI can exist - whether technically or economically - but whether or not it should exist. What does he think about that?
The moral dimension
"Frontier AIs have been trained on [copyrighted] content for which permission was not given", Dancey says. "The law currently says that's okay - my opinion is that the law is way behind society". But he expects the impacts of AI will "move so fast that by the time any meaningful legal changes are made, it could be moot".
"We're going to have a technology which is, I think, unfixably tainted by an ethical problem that emerged at its birth", Dancey says. "On a certain level that troubles me, obviously, for a whole host of reasons", he continues, "But I am also aware that many of the technologies we use in our everyday lives are tainted by ethical problems: driving a car powered by gasoline; living in the United States, a country whose economy was fundamentally altered by slave labor; the way that the modern, globalized economy shifts value from people who were affected by colonialism to the people who did the colonizing". He doesn't see any effective way to opt out of the moral compromises attached to much technology.
"I think we're going to have to rely on small groups and collective action", he adds, "Unions will help. When creators are strong enough to form guilds and withhold their work until they're fairly negotiated with, that will help. Friends and neighbors will have to support one another in ways (at least in the US) we've allowed to atrophy".
He muses that the most likely settlement between AI firms and copyright holders will resemble Spotify's: a collection mechanism that doles out license fees. Ultimately, he thinks "society as a whole is going to just accept that a pretty big crime was committed at the birth of AI".
Perhaps that acceptance is complicity - perhaps it's realism. For something he only thinks has a 30% chance of occurring, Dancey seems fatalistic about how AI will be adopted. But in other ways, he's an idealist.
Hopes and fears
Dancey sees upsides to AI, including in the tabletop gaming industry. "There are people that I know who feel very, very limited because they can do only a part of the process of building the games that have their names on them - they do the game design, and maybe they do part of the game development, but they can't do the art, they can't do the packaging". He predicts "AI is going to make it possible for somebody to make a complete game, start to finish".
"There's a lot of people out there who don't have much other than a very good idea for a game, and they couldn't possibly make that game", Dancey says, "And AI is going to make it possible for those games to come into existence - there's going to be a flowering of game design one hundred times, a thousand times, a million times". He acknowledges that a lot of it is going to be bad. "But, multiplied across this flowering of potential," he tells me, "even a small percentage is going to produce incredibly awesome stuff that would never have been made in any other way".
"The total number of people who make a living wage doing game design is probably less than 100 people," he says - but he believes "that number is going to just explode, and that's going to be great". He adds: "We have a problem in the gaming industry that our business is too much of a monoculture… there's lots of people in places in the world that aren't represented in gaming, and they will be, and that's going to just be good for everybody".
If that seems over-optimistic, his idea that AI could be used for play-testing, at least, seems grounded. "You can work with AI to develop personas who will attempt to play that game in a variety of ways, and you can turn them loose and let them play hundreds or thousands or millions of iterations of your game and report back on what happened".
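Dancey doesn't name a specific tool, and his version involves AI-driven personas; the toy below is my invention, with a plain Monte Carlo simulation standing in for the LLM personas he describes. Three scripted "personas" with different risk appetites play thousands of rounds of an invented push-your-luck card game (loosely inspired by Flip Seven, mentioned earlier), surfacing the kind of balance data - average scores, bust rates - a designer would want reported back.

```python
import random

def play_round(stop_at):
    """One round of a toy push-your-luck game: keep flipping numbered
    cards, bust (score zero) if a duplicate rank appears, and bank the
    hand once it holds `stop_at` cards."""
    # Card n appears n times in the deck (ranks 1-12, 78 cards total).
    deck = [n for n in range(1, 13) for _ in range(n)]
    random.shuffle(deck)
    hand = []
    for card in deck:
        if card in hand:
            return 0                  # duplicate flipped: bust
        hand.append(card)
        if len(hand) == stop_at:
            break                     # persona banks its points
    return sum(hand)

# Three crude personas, each defined only by how greedy it is.
personas = {"cautious": 2, "balanced": 4, "reckless": 7}
for name, stop_at in personas.items():
    scores = [play_round(stop_at) for _ in range(10_000)]
    busts = sum(s == 0 for s in scores)
    print(f"{name:>8}: mean score {sum(scores) / len(scores):5.1f}, "
          f"bust rate {busts / len(scores):.0%}")
```

Real AI personas would presumably play far less mechanically than a single stop threshold, but even this crude version shows the appeal: a designer gets statistical feedback on thousands of playthroughs in seconds.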
And "[o]n the art side," he says, "there are going to be artists who are going to use AI tools to do art they couldn't do now, in ways they couldn't do now". A reasonable hypothesis - every form of technology has given rise to its own distinctive art form, from lace to creepypasta. Then he takes an idealistic turn: "There are going to be people who have an artistic vision, who don't have the skills to do it, and they'll be able to do it. There are people who can't do it, they're paraplegic, they're blind… there's a million reasons that somebody might be physically limited and unable to do art, and they will be addressed".
And all shall be well, and all manner of thing shall be well. This is the closest Dancey gets to the techno-mysticism that runs beneath the surface of the tech industry: the belief that technology is inherently liberatory without any interrogation of what liberation requires, a naïve optimism expressed brilliantly in Richard Brautigan's poem "All Watched Over By Machines Of Loving Grace".
Dancey has demonstrated enough self-awareness that I don't think his views are quite this simple - he's just giving me a highlights reel of the potential upside. And he has surprisingly dire predictions about how AI may change society, which he emails to me after our interview.
Dancey expects - fears - that AI will cause a new Industrial Revolution, but instead of unfolding over "100 years and three or four generations, it could all happen in 10 years, or less". "I'm expecting a French Revolution scenario where society breaks, the center does not hold, and radicalism becomes the norm". He thinks "a level of bloodshed" is imminent, comparable to the 1917 Russian Revolution or the Cambodian killing fields of the '70s. "I pray desperately I'm wrong", he concludes.
For statements far tamer than these, Dancey became the main character of the tabletop gaming internet for a day. Why does he think that is?
AI in the board game industry
"I think that in the general gaming space, the scrutiny is high, and if you subdivide that even further into the tabletop gaming space, the scrutiny is extraordinarily high", he says. He attributes the anxiety around AI partly to "what's happening in the digital part of the gaming space, where everybody can see mass layoffs are occurring. And I think anybody with a brain knows that it's, at least in part, because of AI". I think there's also well-earned skepticism of executives promoting new wonder technologies.
"I think that it is coupled with the knowledge that tabletop gaming industry jobs are not, and have really never been, a place where you go to make a lot of money", Dancey says. "The people in this space don't have a lot of reserves - they can't withstand months or years of unemployment while they wait for the market to figure out what's going to happen".
Dancey thinks people need to engage with the potential of AI in earnest. He advocates for people to spend "a bit of time every week working with the frontier models, the best version of Claude or the best version of ChatGPT… making them do complicated things so that you have a sense for what they're capable of". He asserts that the tech is moving sufficiently fast that a test you conducted a year ago simply isn't representative today. And he recommends finding one or two sources of news you trust to stay abreast of developments.
Is Dancey a Cassandra, desperately trying to pull an unwilling industry's attention towards a clear and present danger? An AI nerd who has mistaken his own enthusiasm for the technology for evidence of its importance? An old tech industry veteran who remembers the last big transformation and thinks he's smart enough to spot the next one? "Let me be clear, I feel like the discussion is what is most important, not the prediction", he says. It's a statement he makes more than once - he doesn't care whether people share his vision, but he thinks he has an obligation to get people to engage with the topic seriously.
There are meaningful outlets for this conversation. The Tabletop Game Designers Association, America's professional body for game designers, is currently conducting a survey of its members' opinions on AI: their hopes, fears, and expectations. Ultimately that will inform the body's strategy and policy. Similar discussions are happening in businesses, unions, and government - discussions you can participate in.
Whether you're in favor of AI technology or against it, it will be vital to know what it can and can't do - whether you plan to harness it, defy it, or bet that it will simply go away. The world's largest tech companies are betting the global economy on AI's potential. How certain are you that they're right, or wrong? Can you afford not to know more?