A Mosaic of Meaning
It is a humbling thought to realize just how much we humans still do not know about, well, anything. New insights from the vastness of the cosmos regularly delight, amaze, and confuse us. The true depths of the oceans right here on Earth remain a mystery. And we've barely scratched the surface of how our own complex noggins really work, though we continue to creep closer to understanding the operations of these little gray cells.
In a recent(-ish) article in Quanta — "New Map of Meaning in the Brain Changes Ideas About Memory" — Jordana Cepelewicz summarizes a fascinating new discovery of how we recall what we've seen:
A team of neuroscientists created a semantic map of the brain that showed in remarkable detail which areas of the cortex respond to linguistic information about a wide range of concepts, from faces and places to social relationships and weather phenomena. When they compared that map to one they made showing where the brain represents categories of visual information, they observed meaningful differences between the patterns.
And those differences looked exactly like the ones reported in the studies on vision and memory.
The section header image comes from an interactive visualization of these semantic brain maps, which were compared across two independent studies. The comparison ultimately shows that memory isn't a facsimile of past perceptions that gets replayed, but more like a reconstruction of the original experience based on its semantic content.
When storing a representation of something we've seen, our brains seem to align semantic context with the images along a gradient with surprisingly distinct borders (something neuroscientists haven't seen before).
For every one of the hundreds of categories studied in the experiments, the representations aligned in transition zones that formed a nearly perfect ribbon around the entire visual cortex.
…
The pattern was also systematic across individuals, appearing over and over in each participant. "This real boundary in the brain seems to be a general organizing principle."
The article is a tad long, and this is a pretty heady (heh) topic, but we've just come much, much closer to understanding how a major function of our brains works, which could have implications for our very concept of meaning, so it may just be worth a 👀
If nothing else, at least have some fun zooming around the brain in the interactive vis.
An X-Ray for Relationships
I finally got around to reading a slightly older Wired article on "AI-mediated communication". You've likely experienced this if you use Google Docs or Gmail and followed the prompts to auto-complete a sentence or use a "smart reply". These services use trained models to "help" you communicate better. Other apps and services use another kind of trained model to perform sentiment analysis on a message you're about to send, providing "just-in-time counseling" on whether you really should post that screed before hitting "send" (gosh, I really could use that plugin for Twitter these days).
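If you're curious what that "just-in-time counseling" might look like under the hood, here's a minimal sketch in Python. The word list, scoring, and threshold are all invented for illustration; the real services use trained sentiment models, not a toy lexicon.

```python
# Toy "just-in-time counseling": score an outgoing message's sentiment
# and warn before sending. The lexicon and threshold are made up here;
# real services run trained models instead.

NEGATIVE_WORDS = {"hate", "stupid", "ridiculous", "worst", "never", "awful"}

def sentiment_score(message: str) -> float:
    """Crude score in [-1, 0]: fraction of words that look negative."""
    words = message.lower().split()
    if not words:
        return 0.0
    negative = sum(1 for w in words if w.strip(".,!?") in NEGATIVE_WORDS)
    return -negative / len(words)

def should_pause_before_sending(message: str, threshold: float = -0.15) -> bool:
    """True when the message reads angry enough to deserve a second look."""
    return sentiment_score(message) < threshold

draft = "This is the worst, most ridiculous take I have ever read. Awful."
if should_pause_before_sending(draft):
    print("Whoa there. Sure you want to post this screed?")
```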
It's also used in couples' counseling research:
For one study, researchers wired up 34 young couples with wrist and chest monitors and tracked body temperature, heartbeat and perspiration. They also gave them smartphones that listened in on their conversations. By cross-referencing this data with hourly surveys in which the couples described their emotional state and any arguments they had, [a] team developed models to determine when a couple had a high chance of fighting. Trigger factors would be a high heart rate, frequent use of words like "you," and contextual elements, such as the time of day or the amount of light in a room. "There isn’t one single variable that counts as a strong indicator of an inevitable row, but when you have a lot of different pieces of information that are used in a model, in combination, you can get closer to having accuracy levels for an algorithm that would really work in the real world."
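To get a feel for how "a lot of different pieces of information... used in a model, in combination" might come together, here's a hand-wavy logistic sketch. Every feature, weight, and sign below is made up; the team's actual models were fit to the couples' sensor and survey data.

```python
import math

# No single signal predicts a row, but a weighted combination of
# physiological and contextual features can push the probability up.
# All weights and signs below are invented for illustration.

def fight_probability(heart_rate_bpm: float,
                      you_words_per_min: float,
                      hour_of_day: int,
                      room_lux: float) -> float:
    """Logistic combination of a few signals into P(argument soon)."""
    z = (
        -6.0
        + 0.04 * heart_rate_bpm        # elevated heart rate
        + 0.50 * you_words_per_min     # frequent accusatory "you"
        + 0.10 * (hour_of_day >= 21)   # late evening
        - 0.002 * room_lux             # ambient light as a contextual cue (sign invented)
    )
    return 1.0 / (1.0 + math.exp(-z))

print(f"{fight_probability(95, 6, 22, 150):.2f}")  # tense late-night chat -> ~0.65
print(f"{fight_probability(65, 1, 10, 600):.2f}")  # calm mid-morning -> ~0.02
```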
There have been many data-driven (sans-AI) tools, methodologies, and formulas created to help change behavior or improve communication between humans. In those cases, we have a bit more control (e.g., being self-aware and maintaining a 5:1 ratio of positive to negative interactions with our partners).
An AI-prompted (or curbed) set of interactions may seem like a good thing on the surface, especially when one study found that the language people used with Smart Reply skewed toward the positive. But, that same feature may also reduce the efficacy of other types of communication (what if we all just use Smart Reply with each other?) or even make it more difficult for some folks to speak up in a workplace.
I try hard not to use Smart Reply, though I have, sadly, let Google program me to auto-complete sentences when forced to use Google Docs. I think I'm in favor of one idea the article posits: should our communications be tagged with some "crafted with AI" label when we've let the algorithm do the creating for us?
Cause For All Armed
I was going to drop a lighter topic for the third pick today, but there is nothing light about the atmosphere in the United States after the violent two weeks we've just been through.
I'll let you dig into the 2021 “Firearms Commerce in the United States” (PDF) report, produced by the Department of Justice as the first in a four-part comprehensive study, and leave it to you to decide whether we have a gun problem or not. There’s also a decent summary if you’re pressed for time.
FIN
Oddly enough, pdfimages extracted the figure in the third section header, which means someone in the DoJ made the labels separately from it? O_O ☮
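In case you want to poke at PDFs yourself, here's roughly how that extraction goes, wrapped in Python (pdfimages ships with poppler-utils). The filename and page number below are placeholders, not the report's actual ones.

```python
import subprocess

# pdfimages dumps the raster images embedded in a page range to files
# with the given prefix (fig-000.png, fig-001.png, ...).
subprocess.run(
    ["pdfimages", "-png",           # write extracted images as PNGs
     "-f", "12", "-l", "12",        # limit to a single (hypothetical) page
     "firearms-commerce-2021.pdf",  # placeholder filename for the report
     "fig"],                        # output filename prefix
    check=True,
)
```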