What’s the ‘safest’ AI to use?

MANILA, Philippines — There is no “safe” chatbot per se, in that all of them come with some degree of risk. The questions are: 1) how much risk are we comfortable with, and 2) have they been designed and deployed with guardrails that we as end-users can be comfortable with? 

I do a lot of AI training, particularly getting people comfortable with using chatbots while also explaining the ethical and safety concerns inherent in using these tools. So in this piece I’ll talk through some of the basics and come to some recommendations. Because at this point, chatbots all kind of have the same functions, but they all have different flavors to them. So it will be a matter of taste. 

At the top, it’s worth discussing that there are various ethical and safety layers that we need to consider because AI chatbots are complex systems, even if the interfaces we are dealing with feel so simple and conversational. 

For example, at the training data layer we can already find ethical issues. It’s been found that major labs like OpenAI, Anthropic, and Meta knowingly accessed books and other documents that are under copyright and used them to train their models. Anthropic is settling a suit over this, while other labs continue to battle in court, claiming fair use. In addition to violating copyrights, some of the data used to train models was built on unethical labor practices that took advantage of the people who did the work. There’s plenty of research and good journalism about this, so if you want to learn more, you can find it fairly easily. 

It’s clear that at this level if we want to use these tools that are indeed incredibly powerful and can be transformative to the way we work, we need to contend with what I would call these original sins of how these AI tools have been built. 

Now, if we skip past some of the other issues (there are a lot, and they could be saved for other discussions), we get to the question of “What AI should I use?” 

The top-of-mind answer, the Coca-Cola of AI, is ChatGPT. It was first to market, and it created this whole AI moment. And it works pretty well. It’s fine. But at the same time, ChatGPT and OpenAI have been dealing with a lot of cases of what’s being called “AI Psychosis.” What’s being documented in these cases is people who already have mental health conditions using chatbots that lead them further down rabbit-holes. These cases range from weird but amusing, like the guys who think they’ve discovered new mathematical theorems or deep truths about the universe because of their chats, to tragic, in cases that lead to self-harm or worse. 

Caveat here: while ChatGPT has the most reported cases of these, it is also the most used among the available chatbots. So it’s possible that these cases aren’t unique to the ChatGPT product, but that the sheer volume of users means there’s a higher likelihood of this happening. Still, I can’t make recommendations without bringing this up. 

The option I personally use the most for my work is Claude. It comes with the same issues; it is the one that is settling the copyright lawsuit, and Anthropic heartbreakingly destroyed millions of books in building its model. Is it possible there’s a correlation between these acts and how good Claude actually is at writing? Anyway, Claude is a great chatbot for a lot of things, but like all of them, it comes with hallucinations, and while it’s the one I use the most, it’s not the one I recommend the most. Of course anyone can use it, but I’ve found the people who like it the most are fellow writers, researchers, and power users. It’s also established itself as an industry leader in the B2B space and had its own big moment with its models’ coding capabilities. 

For the entry-level user who is thinking of safety in these terms: it won’t lead you down rabbit-holes, it won’t create NSFW content (oh yeah, I didn’t even bother to mention that one), and it will operate within parameters, the answer is… drumroll… Microsoft Copilot. 

It’s not the most exciting chatbot. But that’s kind of the point. In terms of tuning or flavor, it falls squarely inside the very friendly office-vanilla range. This is actually a big advantage. For one, if you are already using Microsoft 365, you already have access to it, no additional subscription. That makes it really easy to just download and start using. 

Once you’ve got it, you start to get a sense that it really does stay within guardrails. It gives you good, solid answers. It will be clear about its limitations and the kinds of questions it can and can’t answer. For those who haven’t used chatbots before, or who have misgivings about what they might do, this is probably the safest choice. 

I will reiterate: there is no fully safe choice. All of the current batch of chatbots, by virtue of the technology they are built atop, can hallucinate. You should be careful about uploading documents and sensitive information. And you shouldn’t treat a chatbot like a friend or begin to believe everything it says. But among the range of options available, Copilot does a great job of balancing the functionality and power of chatbots with safety guardrails. 

(Editor’s Note: Accompanying image generated using Google Nano Banana)
