Drop #373 (2023-11-16): Happy ThursdAI

Prompt Engineering Guide; Steak-umm Deepsteaks; Insanely Fast Whisper CLI; Have Barometer, Will Work For Food

There’s more than a 70% chance your Friendly Neighborhood hrbrmstr will be taking both 🦃 days off next week, so we’re doing the ~monthly AI edition a tad early.

TL;DR

This is an AI-generated summary of today’s Drop.

And, miraculously, we have inline links, again, with no prompt changes. o_O

  • The blog post begins with a detailed discussion on the emerging field of “prompt engineering”, which involves developing and optimizing prompts to efficiently use large language models (LLMs) for various applications and research topics. The author highlights the importance of prompt engineering in improving the safety of LLMs and building new capabilities. The Prompt Engineering Guide is recommended as a comprehensive resource for anyone interested in this field.

  • The second section of the post focuses on Steak-umm’s unique approach to social media and their recent campaign, “DeepSteaks”, aimed at raising awareness about the potential dangers of deepfake technology. The campaign involves creating deepfakes of vegans enjoying meat, highlighting the ease with which deepfakes can manipulate reality. The post encourages readers to visit DeepSteaks.ai to sign a petition for the DEEP FAKES Accountability Act and learn how to spot and safely engage with deepfake technology.

  • The final section of the post introduces a new, faster version of the Whisper CLI, a tool for transcribing audio. This updated version can transcribe 300 minutes of audio in less than 10 minutes using OpenAI’s Whisper Large v2 model. The author expresses amazement at this development and encourages readers to check out the insanely fast version of Whisper CLI.


Prompt Engineering Guide


As Drop readers are likely keenly aware, the fast-emerging field of “prompt engineering” is all about developing and optimizing, well, prompts to efficiently use large language models (LLMs) for a wide variety of applications and research topics. It’s a bit like teaching a child through questions, where a well-crafted prompt can steer an AI model towards a specific output.

As we’ve discussed before, prompt engineering is not just about designing and developing prompts. It encompasses a wide range of skills and techniques that are useful for interacting and developing with LLMs. It’s an important skill to interface, build with, and understand the capabilities of LLMs. You can use prompt engineering to improve the safety of LLMs and build new capabilities like augmenting LLMs with domain knowledge and external tools.

The recent advancements in OpenAI’s ChatGPT+, and features in “legacy?” tools like Perplexity, center around your ability to give them well-crafted initial instructions, so they can provide more efficient responses.
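
To make that concrete, here’s a minimal sketch of what “well-crafted initial instructions” can look like in practice with OpenAI’s Python SDK. The model name, the system prompt, and the CVE question are all just illustrative placeholders, not anything prescribed by the guide:

```python
# Minimal prompt-engineering sketch: the same question asked with and
# without crafted initial instructions. Model name and prompts are
# illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The lazy version: no guidance about role, format, or uncertainty.
lazy = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "summarize this CVE for me: CVE-2023-4863"}],
)

# The crafted version: a system message that pins down persona, output
# structure, and how to handle uncertainty.
crafted = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a vulnerability analyst. Answer in three bullet points: "
                "affected software, attack vector, and recommended mitigation. "
                "If you are unsure of a detail, say so rather than guessing."
            ),
        },
        {"role": "user", "content": "Summarize CVE-2023-4863."},
    ],
    temperature=0.2,  # lower temperature for more deterministic output
)

print(crafted.choices[0].message.content)
```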

The Prompt Engineering Guide is a comprehensive resource that contains all the latest papers, learning guides, models, lectures, references, new LLM capabilities, and tools related to prompt engineering. It’s a treasure trove for anyone interested in this field, from researchers looking to improve the capacity of LLMs on a wide range of common and complex tasks, to developers designing robust and effective prompting techniques that interface with LLMs and other tools.

Prompt engineering is also expanding beyond just words. For those interested in working as a prompt engineer for images, videos, and animations, learning the basics of other AI models is a must. For example, with the right prompts, AI can translate textual descriptions into stunning images, or even create diverse voiceovers, capturing the right tone and pitch.
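
The same idea carries over to image models: the difference between a vague prompt and one that nails down subject, style, lighting, and mood is most of the craft. A small sketch using OpenAI’s image endpoint follows; the model name and prompt text are, again, just placeholders for whatever image model you actually have access to:

```python
# Sketch of prompt engineering for images: a vague prompt vs. one that
# specifies subject, style, lighting, and composition. Model name is a
# placeholder for whichever image model you have access to.
from openai import OpenAI

client = OpenAI()

vague_prompt = "a weather forecaster"

detailed_prompt = (
    "A greyscale editorial photo of an out-of-work weather forecaster in a "
    "disheveled suit holding a barometer on a city sidewalk, overcast light, "
    "shallow depth of field, somber and dystopian mood"
)

result = client.images.generate(
    model="dall-e-3",
    prompt=detailed_prompt,
    size="1024x1024",
    n=1,
)

print(result.data[0].url)  # temporary URL to the generated image
```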

The role of a prompt engineer is central in leveraging these techniques effectively. They play a cross-disciplinary role in developing, testing, and refining prompts. The only two prerequisites for being a prompt engineer are knowledge of LLM architecture and problem-solving abilities.

Even if you’re in the camp of those who think we’ll (quickly) get to a point where AI will be able to take our inane utterances and prompt engineer them all on their own, I think checking out this guide — and keeping an eye on it — will still help build better systems that incorporate this new tech into core processes.

Steak-umm Deepsteaks

During the T***p presidency and the COVID-19 pandemic, Steak-umm, a brand of frozen sliced, er, beef?, took a unique approach to strengthen their brand on social media and gain respect. They adopted a ‘human’ approach to their social media presence, communicating their values consistently and declaring a commitment to public good.

Steak-umm used its social media platform to debunk health claims and misinformation, particularly around the pandemic. They encouraged thoughtful processing of information and urged their followers to avoid falling for conspiracy theories. This approach was considered a refreshing contrast to the often chaotic and misleading information circulating during this period.

Now, during the time of wanton abandon in the “AI” space, Steak-umm has launched a thought-provoking campaign called “DeepSteaks” to raise awareness about the potential dangers of deepfake technology.

Deepfakes are highly realistic AI-generated videos that can make anyone appear to say or do almost anything. While they can be used for entertainment purposes, they also pose a significant threat to society as they can be created for malicious purposes, spreading disinformation and manipulating public opinion.

The DeepSteaks campaign was created in partnership with ad agency Tombras and involved fake focus groups with real vegans. Participants were asked personal questions about their vegan lifestyle and offered a new vegan cheesesteak sandwich to sample. Behind the scenes, a production team led by Borat 2 director Jason Woliner created deepfakes (aka “DeepSteaks”) of each participant in real-time. When the participants were shown videos of themselves shortly after, they were shocked to see themselves seemingly turned into meat lovers in mere minutes.

The goal of the campaign is to educate the public about the potential risks and harmful effects of deepfakes, emphasizing that anyone can fall victim to this dangerous technology, not just politicians and celebrities. The video ends by inviting viewers to visit DeepSteaks.ai to sign a petition for the DEEP FAKES Accountability Act and learn how to spot, report, and safely engage with deepfake technology.

The DEEP FAKES Accountability Act is a proposed bill in the U.S. that aims to provide resources for prosecutors, regulators, and victims of deepfakes, establish criminal penalties for false digital impersonations, require creators of deepfakes to disclose and watermark them, and ensure manufacturers of deepfake technology comply with proposed disclosure and watermark laws.

We now live in a world, day, and age where we’re issuing bounties to encourage the worst of us to perform some truly unethical acts on the best of us.

I’m not trying to sound any alarms. But, I do think it is important for those of us who grok this tech and the dangers it poses to do our best to ensure we’re helping those in our circles who are just trying to make it through each day understand how quickly this technology is escaping our control and oversight.

Insanely Fast Whisper CLI


We’ve talked about Whisper before, so this penultimate section is just a quick note to have you check out a new, even more insanely fast version of it.

It can transcribe 300 minutes (5 hours) of audio in less than 10 minutes — with OpenAI’s Whisper Large v2 model.
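Under the hood it leans on the Hugging Face transformers ASR pipeline with chunked, batched, half-precision inference. If you’d rather script it than use the CLI, here’s a rough sketch of that same approach; the file name, batch size, and CUDA device are assumptions about your setup:

```python
# Rough sketch of the chunked/batched Whisper inference the CLI is built on.
# Assumes a CUDA GPU and a local "meeting.mp3"; tune batch_size to your VRAM.
import torch
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",
    torch_dtype=torch.float16,   # half precision for speed
    device="cuda:0",
)

result = pipe(
    "meeting.mp3",
    chunk_length_s=30,     # split long audio into 30-second chunks
    batch_size=16,         # transcribe many chunks per forward pass
    return_timestamps=True,
)

print(result["text"])
```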

Just. Wow.

Have Barometer, Will Work For Food

A greyscale photo depicting an out-of-work weather forecaster, dressed in a disheveled suit, holding a barometer in one hand and a cardboard sign in the other, begging on a city sidewalk. In the background, an imposing, futuristic AI overlord figure, depicted as a large, holographic head, is watching over them with an expression of authority. The setting is an urban street with tall buildings, and the mood is somber and dystopian.

I’m fairly certain most readers have seen this, but just in case…

In a recent study, researchers introduced a machine learning-based method called “GraphCast” for global medium-range weather forecasting.

GraphCast is trained directly from reanalysis data and predicts hundreds of weather variables over 10 days at a 0.25-degree resolution globally, in under one minute.
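Mechanically, the forecast is produced autoregressively: the model maps the two most recent 6-hour atmospheric states to the next one, and each prediction is fed back in as input until it has walked 10 days (40 steps) out. Here’s a toy sketch of that rollout loop; the stand-in “model” and the six-variable grid are purely illustrative and bear no resemblance to the actual JAX graph-network implementation:

```python
# Illustrative sketch of a GraphCast-style autoregressive rollout.
# The real model is a learned graph neural network; `predict_6h` below is
# just a toy stand-in so the loop structure is clear.
import numpy as np

def predict_6h(state_prev, state_curr):
    """Stand-in for the learned model: maps the two most recent 6-hour
    states (variables x lat x lon) to the next 6-hour state."""
    return state_curr + 0.5 * (state_curr - state_prev)  # toy extrapolation

def rollout(state_prev, state_curr, steps=40):
    """A 10-day forecast = 40 autoregressive 6-hour steps, each prediction
    fed back in as input for the next."""
    states = []
    for _ in range(steps):
        nxt = predict_6h(state_prev, state_curr)
        states.append(nxt)
        state_prev, state_curr = state_curr, nxt
    return np.stack(states)

# A 0.25-degree global grid is 721 x 1440 lat/lon points per variable;
# the real model tracks hundreds of variables, we fake six here.
init_prev = np.zeros((6, 721, 1440), dtype=np.float32)
init_curr = np.zeros((6, 721, 1440), dtype=np.float32)

forecast = rollout(init_prev, init_curr)
print(forecast.shape)  # (40, 6, 721, 1440)
```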

The study strongly suggests that GraphCast significantly outperforms the most accurate operational deterministic systems on 90% of 1,380 verification targets and supports better severe event prediction, including tropical cyclones, atmospheric rivers, and extreme temperatures.

This marks a turning point in weather forecasting, making cheap prediction more accurate, more accessible, and suitable for specific applications. However, the authors emphasize that GraphCast should not be regarded as a replacement for traditional weather forecasting methods, which have been developed for decades and rigorously tested in many real-world contexts.

As a weather nerd, I’m pretty excited about this development. But, given how quickly many other industries have jumped at the chance to remove humans from jobs in favor of soulless automation — and the emergence of high-quality deepfake technology — how much longer will the weather folk have jobs?

FIN

Y’all have no idea how hard it was to resist doing another Wellerman shanty riff. ☮️
