Drop #112 (2022-10-03)
Prompt Engineering (is hard); Data Memos; Sketchy Maps
Prompt Engineering (is hard)
Programming note: You may want to check out Lynn's latest newsletter where she covers "Stable Hacking" (I promise it has nothing to do with the equine horror genre) before digesting this section.
A little over a month ago, at what I thought, then, might have been "peak Diffusion", I half-joked with some Twitter mates about "prompt engineering" being a legit new job category. I further posited that it was also a job area tailor-made for AI to quickly usurp. I may have been super wrong about the "peak", but I seem to have been spot on (so far) with the rest of the posit.
We're just at the early stages of "text-to-*" AI-generated content. Said content is not (yet) being generated on its own. When it comes to image generation, human-curated input is required to cause pixels from [m|g]illions of training images to form in new and different ways. If you've played with any of the generators, you know you fill in a text box with what you think you want, then the model goes away for a bit and returns with a selection of outputs, all of which have only a thin, initial chance of being what you truly envisioned. If you're like me, you end said experimentation there (I mean, I got's things to do's). But, if you're determined to come away with usable content, you stick with it and continue to refine the input text until you achieve your desired creation.
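That refine-and-retry loop can be sketched in a few lines of Python. This is purely illustrative: `refine_prompt` and the modifier list are hypothetical, and the generator call is a stand-in for whatever backend (Stable Diffusion, DALL-E 2, etc.) you'd actually be poking at:

```python
# Hypothetical sketch of the iterative prompt-refinement workflow
# described above. No real text-to-image API is used here; the point
# is that each round layers more curation onto the prompt.

def refine_prompt(base_prompt, modifiers):
    """Build successive prompt variants by appending style modifiers,
    mimicking the manual refine-and-retry loop."""
    prompt = base_prompt
    variants = [prompt]
    for modifier in modifiers:
        prompt = f"{prompt}, {modifier}"
        variants.append(prompt)
    return variants

variants = refine_prompt(
    "a lighthouse on the Maine coast",
    ["oil painting", "dramatic lighting", "trending on artstation"],
)

# In practice: feed each variant to the generator, eyeball the
# results, then decide which modifier to add, drop, or reword next.
for v in variants:
    print(v)
```

The "engineering" part is everything the loop elides: knowing which modifiers a given model responds to, and when to start over entirely.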
The process of becoming intimately familiar with a given "text-to-*" model and curating the input text that instructs the model on what to build is one form of "prompt engineering". While said prompts may not be, in the strictest sense, "code", I posit this work is most certainly a form of coding, especially since it requires domain expertise to consistently achieve the desired outcome.
Xe Iaso (@theprincessxena) showcases the difficulty and process of this prompt workflow in "Prompt engineering is hard," where they walk us through how they went about creating an image of a setting in a fictional world specifically to show that this is, indeed, "work":
I've seen a lot of comments on Twitter that seem to completely misunderstand the process of getting a decent result with AI generators like Stable Diffusion and DALL-E 2. People seem to assume that it's just "push button, receive bacon" without any real creativity in the equation. As someone who has done a lot of this experimentation in the past few months, I'd like to challenge that assertion and show you what the process for getting a decent result actually involves.
This post is a great read, and what Xe eventually creates is beautiful (it's not the image in the section header; you'll have to read the post to see it).
I also find Xe's closing aside worth pondering. I like the alternate term for "prompt engineering" they've suggested, and the reminder that lawyers tend to eventually ruin everything.
Now, just imagine a future where we train a model on all this prompt work, so this new model can generate the best prompts for us. Promptception?
Data Memos
Giorgia Lupi (@giorgialupi) sets up this post on the challenges of creating complex, data-driven domain visualizations for general consumption so well that I won't ruin it with my own blatherings:
Data is everywhere: in every headline, and at the center of every conversation. In only a few months since the pandemic began, data of all types has become an essential language to understand and make sense of a rapidly changing world.
But how fluent in this language is the general public? And how can we as practitioners and researchers design data visualizations that fully represent the nuances and the implications of a very complex situation?
Due to the media coverage of the COVID-19 data, we have been exposed to (very) bad and (very) good examples of visualizations. We’ve seen visualizations used to both clarify and hide arguments, to both support and deny research evidence — even as tools for propaganda. We want to make sure we’ll always contribute to clarity and consistency, being fully aware of the potential risks and pitfalls and — in the end — the responsibility of shaping data and making it available for the public discourse.
Giorgia then drops twelve "memos". Each covers a topic that data visualization practitioners should keep in mind as we seek to communicate with non-domain experts, complete with real-world datavis examples and questions we should ask ourselves in each project we undertake. Here are the topics (you'll need to hit up Giorgia's post for the details):
I'm going to add the topics and questions to my datavis project templates and require answering each relevant question in the affirmative before considering future externally facing datavis projects "done".
Sketchy Maps
Steve Attewell (@steveattewell), User Experience Design Lead for Britain's mapping agency, is a genuine wizard when it comes to Mapbox. Sketchy Maps is his latest creation. It's a custom Mapbox layer that gives maps a "hand sketched" look and feel. The link in the previous sentence goes to a view of Down East Maine (a fav spot of mine) and the section header is a sketchy view of Ísland (another fav spot of mine).
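If you're curious how a look like this gets built, custom Mapbox styles are JSON documents of layers with paint properties. The fragment below is a hypothetical illustration of the kind of layer definition involved (ids and values are made up, not Steve's actual style), using real properties from the Mapbox Style Specification:

```json
{
  "id": "roads-sketchy",
  "type": "line",
  "source": "composite",
  "source-layer": "road",
  "paint": {
    "line-color": "#4a4a4a",
    "line-width": 1.5,
    "line-dasharray": [2, 1],
    "line-opacity": 0.8
  }
}
```

Dashed, slightly transparent strokes like these (plus hand-drawn-looking fonts and sprites) are the usual ingredients of a "sketched" cartographic style.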
Carve out some time to explore some planetary locales with this new lens, and drop your finds on Twitter with the #sketchymap hashtag. It'll make Steve's day.
And, don't forget to check out Steve's other work.
It turns out, creative newsletter titles are harder than I expected them to be. At least the new format lets me know how many of these I've managed to publish. ☮