Drop #269 (2023-05-25): Happy ThursdAI
How To Create Your Own ChatGPT Plugin; Kagi's FastGPT/Enhanced Perplexity; DarkBERT
It's been an “AI” kind of week at work, so I'll close out the week with a selection of LLM/GPT items that I've been looking at/using of late.
How To Create Your Own ChatGPT Plugin
Despite my continued ethical concerns over how all these giant models have been trained (something I make a point to consider every time I use one of them), I do use them to augment daily tasks (like generating function documentation). What's more, I also say “Yes” whenever someone asks me if they should use these new tools.
There are many ways to use these tools, whether via services like LanguageTool and Grammarly, both of which have incorporated LLM/GPT features this year. Raycast has a built-in chatterbot. Kagi's doing some marvelous work building and releasing very useful LLM/GPT knowledge tooling as well.
A recent feature that became available to ChatGPT+ users is free (albeit with limited daily use) access to their new plugin ecosystem. Plugins are a powerful way to extend ChatGPT's functionality without retraining the underlying GPT model. As I noted earlier in the week, I use the web scraper plugins regularly, and have been playing with the weather ones, the Wolfram Alpha one, and Cloudflare's plugin.
We're likely going to build one for $WORK data, so I've been poking at the “how”. While the process looks pretty straightforward, examples/walkthroughs from others are always nice-to-haves.
Weaviate posted an article ahead of the recent ODSC East (which really should have just been called ODSCGPT East from what some attendees have shared with me). Said piece outlines the steps to create a Weaviate retrieval plugin for ChatGPT which connects it to a vector database. The key steps involve building a web app with the desired endpoints, preparing plugin manifest files to describe the endpoints to ChatGPT, testing locally, and deploying remotely using Fly.io. Careful documentation of the endpoints and descriptions is crucial for ChatGPT to use the plugin correctly. The retrieval plugin allows ChatGPT to query, upsert, and delete documents from the vector database to augment its responses.
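To give a feel for the “plugin manifest” step, here's a minimal `ai-plugin.json` sketch. The field names follow OpenAI's published plugin manifest format; the service name, URLs, and descriptions are placeholders I made up for illustration (note how `description_for_model` is where the careful endpoint documentation mentioned above actually lives):

```json
{
  "schema_version": "v1",
  "name_for_human": "Acme Retrieval",
  "name_for_model": "acme_retrieval",
  "description_for_human": "Query your Acme vector database from ChatGPT.",
  "description_for_model": "Use this to query, upsert, and delete documents in the user's vector database. Call it whenever the user asks about documents they have stored.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/.well-known/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

ChatGPT reads the OpenAPI spec referenced in `api.url` to learn the actual endpoints, so the manifest plus that spec together are the “documentation” the model relies on.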
A nice feature of the ChatGPT+ interface when using plugins is you get to see the output:
Though, I do find it amusing that, when I used a different weather plugin, it gave me temperatures in Kelvin.
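If a plugin hands you Kelvin, the client-side fix is at least trivial; a quick sketch of the standard conversions (nothing plugin-specific here):

```python
def kelvin_to_celsius(k: float) -> float:
    """Convert a temperature from Kelvin to Celsius."""
    return k - 273.15

def kelvin_to_fahrenheit(k: float) -> float:
    """Convert a temperature from Kelvin to Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# A pleasant 295 K spring day:
print(round(kelvin_to_celsius(295.0), 2))     # 21.85
print(round(kelvin_to_fahrenheit(295.0), 2))  # 71.33
```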
While I know this is a “costs money” and “benefits evil OpenAI” kind of thing, making plugins to help reduce errant output and provide a less technical way for others to interact with your API/service/data does seem like an overall good thing for society writ large.
Kagi's FastGPT/Enhanced Perplexity
Kagi recently announced a new AI+Search experiment dubbed FastGPT. I'm not sure how long it'll remain free (full disclosure: I subscribe to and use only Kagi for search whenever possible), so give it a go while you can.
It's stupid fast. I asked it to name the first 10 presidents of the U.S., hitting enter on it and a regular search at (within milliseconds) the same time. FastGPT came back instantly, while I had to wait briefly for the traditional search results to come up and then wait for Wikipedia to load.
This must be their own tech, given how the OpenAI API costs do add up after a while (we're using it at $WORK for a few projects). And, they apparently have made some kind of cost-saving technical breakthrough, given that they just robustified their pricing plans.
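Kagi also exposes FastGPT programmatically. A minimal sketch of building the call, based on Kagi's API docs at the time of writing (the endpoint URL, `Bot` auth scheme, and response fields are things you should verify against the current docs before relying on them):

```python
import json
import urllib.request

def build_fastgpt_request(query: str, api_key: str) -> urllib.request.Request:
    """Build (but don't send) a request to Kagi's FastGPT endpoint.

    Endpoint and auth scheme follow Kagi's API documentation at the
    time of writing -- double-check the current docs before use.
    """
    return urllib.request.Request(
        "https://kagi.com/api/v0/fastgpt",
        data=json.dumps({"query": query}).encode("utf-8"),
        headers={
            "Authorization": f"Bot {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_fastgpt_request("Name the first 10 U.S. presidents", "YOUR_API_KEY")
print(req.full_url)
# urllib.request.urlopen(req) would return JSON -- per the docs it includes
# the generated answer plus cited references, but check the schema yourself.
```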
If you're keen to check out improved AI tidbits, Perplexity also upped their game a bit and I've found both the initial answers, and suggested follow-up prompts to be pretty useful when investigating a new topic.
DarkBERT
I'm pretty certain this made the “AI rounds” this past week, but ICYMI, some clever researchers are trying to help fight cybercrime with language models. They took on the Herculean task of training a BERT model on the insanely diverse data of “the dark web” (i.e., content accessible only via Tor) to help cyber folks analyze new content.
Here's the DarkBERT abstract:
Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.
I do like how they went over their model evaluations, and it was cool seeing DarkBERT outperform vanilla BERT and RoBERTa on domain tasks like Dark Web activity classification, ransomware leak site detection, and noteworthy thread detection.
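If you haven't run this kind of head-to-head model comparison before, the core of it is just per-class precision/recall/F1 averaged across classes. A toy sketch below; the class labels and predictions are entirely made up for illustration, not DarkBERT's actual numbers:

```python
def f1_for_class(y_true, y_pred, label):
    """Precision/recall/F1 for one class label; returns the F1 score."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    labels = sorted(set(y_true))
    return sum(f1_for_class(y_true, y_pred, lab) for lab in labels) / len(labels)

# Toy "activity classification" data -- purely illustrative.
truth        = ["drugs", "hacking", "drugs", "other", "hacking", "other"]
model_a_pred = ["drugs", "hacking", "drugs", "other", "other",   "other"]
model_b_pred = ["drugs", "other",   "other", "other", "other",   "other"]

print(round(macro_f1(truth, model_a_pred), 3))  # 0.822
print(round(macro_f1(truth, model_b_pred), 3))  # 0.413
```

The DarkBERT paper reports this style of comparison per task; macro-averaging matters here because dark-web classes are imbalanced, so a model can't pad its score by nailing only the dominant class.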
While the generic, giant LLMs/GPTs are fun and all, I think this is one example that showcases the benefits of a domain-specific language model.
For my U.S. compatriots: have a great holiday weekend. For everyone: catch y'all Monday! ☮