I am a critic of large language model technology – generally marketed as “AI” – but I am happy to admit that it has its uses. I know a lot of Dungeons and Dragons players, in person and online, who use it to prepare images or text for a character or a location, or even a plot for a campaign, and what do you know? It does the job. But that is pretty close to the limit of its capabilities, at least where creativity is concerned, and that is far less than what the tech goliaths promise it can do.
Do I think it’s a good idea for Dungeon Masters to delegate their prep to a machine? Mostly, no – my colleague Gab Hernandez wrote an excellent article on what you lose, in terms of both personal growth and personal satisfaction, by relying on machine-generated answers instead of your own creativity. But that is a separate question from whether the machine is capable of coughing up (say) functional images of DnD homebrew monsters for your home adventures.
I am also setting aside a whole layer of other criticisms of AI technology here. I make my living as a writer, and I have a direct stake in humans being paid for their creative work – I do not believe that the way large language models are “trained” on internet data can be considered fair use, which makes it copyright infringement on a massive scale that endangers artists’ livelihoods. Not to mention the oceanic energy demands of running the things.
But again, those are distinct criticisms and not my focus here. It may be possible to have LLMs trained only on public domain data, or on a company’s own private data – Wizards of the Coast has mooted such a thing. Perhaps they could become less energy-hungry, or perhaps they could achieve results that are worth the energy cost and climate impact. Set that aside.
ChatGPT and Midjourney can generate text and images in response to prompts. As I covered in a previous article, there is evidence that a chatbot can do a passable job of DMing games. If you want to generate a description or an image of a place or a creature, or an adventure outline, an LLM can do it, and it will be good enough for your DnD game at home. That clears the bar for being a neat toy.
That does not mean it is suitable for use in the creative industries – and I don’t just mean that customers don’t like it, I mean that it is not capable of industry-standard results. If you have seen AI-generated animation or video, or (worse still) the AI-generated Minecraft thing, you will know how incoherent it is, unable to keep a scene consistent from one moment to the next. This is not a problem of technology that needs more time to develop – it is fundamental to how the technology works.
LLMs are the hyper-evolved grandchildren of facial recognition technology and porn filters. The input to an LLM is a mass of “training data”: information that has been manually tagged with descriptive labels by humans. Since all data on a computer comes down to numbers, the AI can examine all the data carrying a particular tag, like “Dungeons and Dragons”, and see whether it can identify patterns connecting those numbers. The more varied the data it can locate patterns in, the more abstract and rarefied those patterns can be.
Note that the LLM has not read, watched, or understood anything. It is a pattern-matching system – a phenomenally powerful pattern-matching system, I might add. But a sieve can separate stones from sand without knowing the difference between them. A human separating sand from stones would have to apply their knowledge of what a stone is to carry out the task – the sieve solves the same problem in a different way.
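The sieve analogy can be made concrete in a few lines of code. This is a toy sketch of my own (the names and the mesh size are invented for illustration, not taken from any real system): it “separates stones from sand” purely by comparing numbers, and at no point does the program know what a stone is.

```python
# A sieve as code: split "stones" from "sand" by comparing numbers
# to a threshold. Nothing here knows what a stone IS -- it only
# knows which numbers are bigger than other numbers.
MESH_SIZE_MM = 2.0  # hypothetical mesh size

def sieve(particle_sizes_mm):
    """Split particle sizes into (stones, sand) by size alone."""
    stones = [p for p in particle_sizes_mm if p > MESH_SIZE_MM]
    sand = [p for p in particle_sizes_mm if p <= MESH_SIZE_MM]
    return stones, sand

stones, sand = sieve([0.1, 5.0, 0.4, 12.0, 1.9])
print(stones)  # [5.0, 12.0]
print(sand)    # [0.1, 0.4, 1.9]
```

The point of the sketch is that a correct result does not imply understanding: the function sorts perfectly well without any concept of “stone” in it.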
When it’s time to make things, the LLM runs that process in reverse. In response to a prompt, it deploys its patterns, plus a pinch of randomness, to generate text or images. Statistically speaking, the output will fit the numerical patterns the LLM works with.
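“Patterns plus a pinch of randomness” can be sketched with a toy word table. Everything here is invented for illustration (the table is not real training data): the program counts which word tends to follow which, then samples the next word with those counts as weights – a statistically likely continuation, with chance deciding between the options.

```python
import random

# Hypothetical pattern table: how often each word followed another
# in some imagined training text.
FOLLOW_COUNTS = {
    "the": {"dragon": 8, "wizard": 5, "cave": 2},
    "dragon": {"sleeps": 6, "attacks": 4},
    "wizard": {"casts": 7, "sleeps": 1},
}

def next_word(word, rng):
    """Pick a statistically likely successor, weighted by the counts."""
    options = FOLLOW_COUNTS[word]
    words, weights = zip(*options.items())
    return rng.choices(words, weights=weights)[0]  # the pinch of chance

rng = random.Random(42)  # fixed seed so the run is repeatable
print("the", next_word("the", rng))
```

Run it twice with different seeds and you get different, equally “likely” continuations – which is the whole point, and the whole problem.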
There are three big old problems here. First, it is genuinely difficult to know exactly what patterns the LLM has identified, or exactly why it identified them. Second, who knows which patterns it will respond with when you enter a prompt. Third, and above all, at no point has the machine thought about anything. Thinking, it turns out, is very important for some tasks.
Here’s how that leads to incoherent animations. When creating a CGI animation, you would design a character, create a 3D model, rig it for animation, then code in its movements. In AI animation, none of that infrastructure exists. It generates an output in which each frame is a statistically likely successor to the frame that preceded it, once they have been turned into numbers. It is not a film of a character moving through virtual space: it is just a very large number with statistical similarities to the animations in the training data that gave the LLM its patterns. That is not enough to create something truly coherent.
The fact that AI generation has an output but no creative process is a problem even for single images. Real creative studios want to be able to modify their results, to iterate on ideas and tweak them until they are right. A minor problem with an AI-generated image is that, because there is no construction process, it has no layers, filters, masks, and so on.
More importantly, you cannot make “adjustments” using an AI, because it does not interact with the image the way you do. It only sees numbers and patterns. Tell it to “give the wizard a bigger hat” and it does not know which numbers are the “wizard” and which are the “hat”, and it cannot “enlarge the hat” the way an artist would – by isolating the hat and scaling it up or redrawing it. Instead, it examines the numbers of its last output, nudges them in the direction of the “bigger hat” pattern, and generates a new statistically likely result.
This lack of understanding becomes truly critical when we get to text. LLMs do not know what they are saying – they just give you a probabilistically likely response to whatever you just prompted. That is really, really bad if accuracy matters. The Wargamer team fact-checks everything we write, and we actively avoid Google’s AI search summaries because they can’t be trusted.
This is not the normal “don’t trust what you read on the internet” problem of people lying, getting things wrong, or repeating hearsay they didn’t check – we can work around that. It is more like using predictive text to write every word of a text message. Whatever result you get is a statistically likely sentence, and it might be the right answer – but how much are you willing to bet on it?
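The predictive-text version of the problem is easy to demonstrate. This sketch (the suggestion table is invented, a stand-in for a phone keyboard’s model) always accepts the most likely next word – and the result is always a plausible sentence, whether or not it is the sentence you meant.

```python
# Predictive text taken to extremes: always accept the most likely
# next word. The output is always fluent; whether it's TRUE is
# another matter entirely.
SUGGESTIONS = {          # hypothetical keyboard model
    "see": "you",
    "you": "at",
    "at": "the",
    "the": "tavern",     # you meant "the station" -- too bad
    "tavern": None,
}

def autocomplete(first_word):
    """Build a message by always taking the top suggestion."""
    message, word = [first_word], first_word
    while SUGGESTIONS.get(word):
        word = SUGGESTIONS[word]
        message.append(word)
    return " ".join(message)

print(autocomplete("see"))  # "see you at the tavern"
```

Nothing in the loop checks the message against reality – it only checks it against the pattern table, which is exactly the failure mode at issue.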
None of this disqualifies AI as a tool for DnD prep. An image made to show your PC to your DnD group doesn’t need to slot into a publication workflow. A description of the NPCs in a town doesn’t have to be grounded in fact, and it doesn’t even have to be internally consistent. None of the things you need for DnD have to be reliable or original or even finished – they’re just ideas, after all. So AI does have a set of capabilities – they’re just really quite narrow.
Which is essential to keep in mind when evaluating potential uses of this technology, especially as companies consider it for workflows, or coders build it into applications. You wouldn’t buy a house with “statistically probable wiring”, and you shouldn’t use an LLM to build, or as, a system where accuracy matters.
LLMs have been aggressively marketed as artificial intelligence, but that really is just marketing. The fact that they can perform tasks which, until now, only humans could perform – generating coherent sentences that more or less carry on a conversation, producing novel images – is surprising, shocking even, because until LLMs existed, the only way to do these things was the human way, which involved intelligence and abstract reasoning. But I could say the same of pocket calculators and long division.
I will make one final allowance for AI technology – it was born from a pattern-recognition tool, and it seems to show great potential as a pattern-recognition tool, provided it has access to extremely large and well-described data sets. There is a risk of these systems encoding the biases of the people who designed them, and of users abdicating their judgment or moral responsibility to the system – but the same can be said of other digital decision-making systems, and that reflects on how they are deployed, not on their capability. Aptitude in this sphere is not proof of all-encompassing intellect – it is the sign of a tool being used in line with its capabilities.
We have a “no AI” rule in the Wargamer Discord community, mainly due to ethical concerns about the unauthorized use of copyrighted work in training data and the high energy cost of its use. You, however, are welcome, dear human!
If you’d like to exercise your grey matter, check out the Wargamer guides to the DnD classes and DnD races as you build your next character. And if you’ve been using ChatGPT to write your adventures, might we gently suggest trying some of DnD’s official campaigns instead? Some of them are really very good.