Drop #361 (2023-10-26): ThursdAI

Embeddings; Jina; Moral^W IRL Hazards

I couldn’t bring myself to prepend the usual “Happy” before the tagline today given the (⚠️ content warning: that link goes to a story about America’s favorite sport ⇒ yet another mass shooting) horrendous news out of Lewiston, Maine that I woke up to today. There’s also no “AI” summary today, given the topic of the third section.

It’s also going to be a slightly abbreviated Drop, since said news consumed most of the early-morning time I usually dedicate to finishing up the text.

Embeddings

Simon Willison has an incredibly accessible and helpful post on “Embeddings: What they are and why they matter”. You should just read the whole post, but to whet your appetites:

Embeddings are a technique for representing text, images, or other content as fixed-length vectors of numbers. This allows comparing content based on semantic similarity by calculating distances between vectors in a multi-dimensional space.

Simon notes that related content and semantic search are common applications that make use of these embeddings, where the closest vectors to a given piece of content are found. Other uses include clustering issues by topic or analyzing sentence function in writing.
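The “closest vectors” idea is easy to sketch in plain Python. The snippet below uses toy three-dimensional vectors as stand-ins for real embeddings (which typically have hundreds or thousands of dimensions), and ranks a small corpus by cosine similarity to a query vector:

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity: dot(a, b) / (|a| * |b|)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_similar(query_vec, corpus):
    """Return (score, text) pairs sorted most-similar-first."""
    scored = [(cosine_similarity(query_vec, vec), text)
              for text, vec in corpus.items()]
    return sorted(scored, reverse=True)

# Toy vectors standing in for the output of a real embedding model.
corpus = {
    "the cat sat on the mat": [0.9, 0.1, 0.0],
    "a kitten rested on a rug": [0.8, 0.2, 0.1],
    "quarterly earnings rose 4%": [0.0, 0.1, 0.9],
}
query = [0.85, 0.15, 0.05]  # imagine: the embedding of "a cat on a rug"

for score, text in most_similar(query, corpus):
    print(f"{score:.3f}  {text}")
```

With a real model plugged in, the two cat sentences land near the top and the earnings sentence near the bottom, despite sharing no words with the query — which is the whole point of semantic search over keyword search.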

Open-source models like Word2Vec and BERT made embeddings widely available, while newer multi-modal models like CLIP can represent both images and text.

Tools like Simon’s LLM and Datasette (links in his post to further encourage you to go there) enable interactive semantic search over private documents to assist question answering.

He has tons of links to more information, and a video of his presentation on the topic, which I’ve also put in the section header.

Jina

Jina AI, a Berlin-based artificial intelligence company, has launched its second-generation text embedding model, jina-embeddings-v2, which it bills as the first open-source embedding model with an 8K (8,192-token) context length. It’s designed to outperform other leading base embedding models, emphasizing the practical advantages of longer context capabilities.

Both the base model and small model are freely available for download (both links go to Hugging Face). The base model is designed for heavy-duty tasks requiring higher accuracy, such as academic research or business analytics, while the small model is crafted for lightweight applications such as mobile apps or devices with limited computing resources.

Jina AI aims to democratize AI and empower the community with tools that were once confined to proprietary ecosystems. An academic paper detailing the technical intricacies and benchmarks of jina-embeddings-v2 will soon be published, allowing the AI community to gain insights into the model.

Simon (see the first section) has incorporated it into his llm tool and has a solid post on it.
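If you want to kick the tires yourself, the rough shape of the CLI workflow looks like the below. Treat the plugin and model names (llm-embed-jina, jina-embeddings-v2-small-en) as assumptions drawn from Simon’s post rather than gospel — check his write-up for the current incantations:

```shell
# Assumes Simon's llm CLI is installed and the llm-embed-jina plugin
# (name per his post) is the bridge to the Jina models.
llm install llm-embed-jina

# Embed a single string with the small model; the output is a
# long JSON array of floating-point numbers.
llm embed -m jina-embeddings-v2-small-en -c 'Hello world'
```

The first run downloads the model weights from Hugging Face, so expect a wait; after that, everything runs locally.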

Moral IRL Hazards


While there are plenty of moral/societal hazards associated with the VC-fueled AI/LLM/GPT boom, there are some distressing IRL physical hazards as well. These beasties consume mass quantities of water, electricity, and data center real estate. So much so, that Microsoft is considering the use of small nuclear reactors (cached capture) to power its AI and cloud data centers.

It’s bonkers that we seem to be copacetic with all this:

  • According to a recent article from The Guardian, companies like OpenAI, Google, and Microsoft are not disclosing just how much electricity and water it takes to train and run their AI models, what sources of energy power their data centers, or even where some of their data centers are. Data centers use water in evaporative cooling systems to keep equipment from overheating, and one non-peer-reviewed study estimates that training GPT-3 in Microsoft’s state-of-the-art US data centers could potentially have consumed 700,000 liters (~185K gallons) of freshwater.

  • Another article from Scientific American states that around the globe, data centers currently account for about 1 to 1.5 percent of global electricity use, according to the International Energy Agency. And the world’s still-exploding boom in artificial intelligence could drive that number up a lot—and fast. Huge, popular models like ChatGPT signal a trend of large-scale AI, boosting forecasts that predict data centers will draw up to 21% of the world’s electricity supply by 2030.

  • A recent article from EY discusses how data centers currently account for 4% of the total greenhouse emissions worldwide. However, data center operators are adopting AI, IoT, and ML to build green, lean, and smart data centers. While AI-based robots help automate the functions and optimize efficiency, predictive analytics reduce energy consumption and total costs. AI tools can help companies save up to 40% of the power spent on cooling.

I had planned for a more distilled version of this section, and I’ll likely do that in a future Drop (since the situation is only going to get worse). For now, here are the links I’ve saved along the way that should provide sufficient background for those catching up to this news:

FIN

Stay safe out there and make sure to hug someone today. ☮️
