The DIY D&D scene has many strengths, but there is notably little discussion of playtesting. This is particularly surprising when viewed abstractly, as people pay a great deal of attention to usability concerns such as layout, stat block efficiency, and creating procedures that, in isolation, are easy to use. The One Page Dungeon Contest exemplifies this concern. When I write about playtesting here, though, I mean testing the effectiveness of a more extensive product, such as a larger module or ruleset, rather than an isolated mechanic, procedure, or bit of content such as a player character class. In particular, I am most interested in intermediate-level products, such as adventure modules or significant subsystems, rather than rulesets or retro-clones, with which, in the context of the community, most players are already relatively familiar. The definition of adventure module should be relatively obvious; for significant subsystems, consider the supporting rules in Veins of the Earth or Cecil H’s Cold Winter.
This came home to me again somewhat recently when I decided to run a large module, written within the last few years by an experienced traditional D&D adventure writer. The identity of the module is unimportant here; the main point is that it was written with access to recent collective community knowledge. I like the content of the module, which, while somewhat vanilla, is creative enough to be inspirational for me, and the game itself went well, but throughout I felt like I was fighting the product, often unsure where to find the information I wanted. This was despite being a reasonably experienced referee and spending a couple of hours beforehand skimming the module and making notes for myself.
A while back, I noticed that Vincent Baker’s Seclusium claimed the product could, with 30 minutes of use, produce a wizard’s tower significant enough to do honor to Vance as its inspiration.
I actually like Seclusium quite a bit as inspiration, and want to avoid making this about criticism of that particular product, but I think that the best products should aspire to that level of usability, assuming a reasonably competent operator.
These two experiences together suggest to me a kind of instrumental playtesting that may be particularly effective in improving the usability of game products: timed prep starting with minimal specific product knowledge but assuming general familiarity with the base game or style of game. This contrasts with what seems to me the ideal of the indie scene, which often focuses on beginners and on whether a game can stand alone from roleplaying culture in general, explaining itself to a completely naive reader; hence the ubiquity of sections defining what a roleplaying game is. While that general pedagogical focus has a place, it does not solve a problem that I have, and further, there is a good case to be made that the larger, more mainstream games introduce the basic idea of imagination-driven fantasy gaming reasonably well and are also more likely to be a new player’s entry point. Instead, what is more relevant to me, and I suspect to others consuming such intermediate-level content, is whether this particular dungeon (or whatever) will be easy for me to use as a busy person with a pretty good handle on B/X D&D (substitute your favorite flavor).
More concretely, I suggest that to playtest a product, you find someone willing to run it from a dead start, strictly limiting prep time to 30 minutes, 1 hour, or 2 hours. Then, collect the questions the playtester had that the product left unanswered. This kind of information is mostly separate from issues of creativity, inspiration, or aesthetics, which the community is already effective at criticizing and fostering (see Ten Foot Pole reviews, and so forth). For easy shorthand, call this the deadlift approach to playtesting: start from the ground, do the thing without the blue-sky assumption of perfect conditions, evaluate the outcome, and report back.