Cornell University Discovers a Huge Threat at the Core of ChatGPT

Fivtech
2 min read · Nov 16, 2023


Businesses around the world have been implementing Generative AI (GenAI) solutions for the past six months.

Because most use cases require the GenAI model to have "long-term memory," nearly every corporate solution relies on a vector database that the model can query at runtime to retrieve the context it needs to answer the user's question.
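As a rough illustration of that pattern, here is a minimal sketch of the retrieve-then-answer loop. The `embed` function and the documents below are hypothetical stand-ins: a real system would call an embedding model and a dedicated vector database rather than this in-memory NumPy store.

```python
import numpy as np

# Hypothetical stand-in for a real embedding model (normally an API call).
# Here we just hash words into a fixed-size vector so the sketch runs.
def embed(text: str, dim: int = 64) -> np.ndarray:
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Toy "vector database": documents stored alongside their embeddings.
documents = [
    "Employee salaries are reviewed every January.",
    "The VPN must be used when working remotely.",
    "Expense reports are due by the fifth of each month.",
]
index = np.stack([embed(doc) for doc in documents])

def retrieve(question: str, k: int = 1) -> list[str]:
    """Return the k documents whose embeddings are closest to the question."""
    scores = index @ embed(question)  # cosine similarity (vectors are unit-length)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

context = retrieve("When are salaries reviewed?")
# The retrieved context is prepended to the user's question and sent
# to the GenAI model: this is the model's "long-term memory" at work.
print(context)
```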

However, Cornell University researchers claim that this solution, long thought to be extremely safe, hides a troubling reality that could raise serious privacy concerns.

Furthermore, this result sheds a great deal of light on one of the most mysterious aspects of modern frontier AI models.

Most of the insights I share on Medium first appeared in my weekly newsletter, The Tech Oasis.

If you want to stay current in the fast-paced field of artificial intelligence (AI) and feel motivated to act, or at the very least prepared for what lies ahead, this is for you.

🏝Subscribe below🏝 to get content not found on any other platform, including Medium, and become the AI leader among your peers:

The Memory Problem

If there’s one thing that permeates modern frontier AI, it’s embeddings.

Models like ChatGPT rely heavily on embeddings, and nearly every advance in artificial intelligence over the past few years can be traced, in one way or another, back to these components.
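To make the idea concrete, here is a minimal sketch of what an embedding buys you. The vectors below are made-up toy values, not the output of any real model; the point is only that semantically related texts end up close together in the vector space, which is what lets a model work with meaning numerically.

```python
import numpy as np

# Made-up 4-dimensional embeddings; real models produce hundreds or
# thousands of dimensions, but the geometry works the same way.
embeddings = {
    "dog":   np.array([0.9, 0.1, 0.0, 0.2]),
    "puppy": np.array([0.8, 0.2, 0.1, 0.3]),
    "car":   np.array([0.0, 0.9, 0.8, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: close to 1.0 means similar meaning."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["dog"], embeddings["puppy"]))  # high: related meaning
print(cosine(embeddings["dog"], embeddings["car"]))    # low: unrelated meaning
```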

The great discovery

For decades, AI researchers faced a seemingly intractable challenge when dealing with non-numerical data.

All that exists inside a modern computer is "1s" and "0s"; they are the only figures it can comprehend. Not sound waves, not characters. Just two digits.

So how can we convey worldly knowledge in a way that computers can comprehend?
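One obvious answer, encoding each character as a number the way text is already stored, turns out not to help. The sketch below is purely illustrative: raw character codes capture spelling, not meaning, which is exactly the gap embeddings were invented to fill.

```python
# Encoding text as raw character codes preserves the symbols,
# but the numbers say nothing about what the words mean.
print([ord(c) for c in "dog"])  # [100, 111, 103]
print([ord(c) for c in "cat"])  # [99, 97, 116]
print([ord(c) for c in "cot"])  # [99, 111, 116]

# "cat" and "cot" get nearly identical codes despite unrelated meanings,
# while "dog" and "cat" are related in meaning yet numerically far apart.
```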
