It may seem like AI chatbots are taking over every digital application, whether we like it or not. You might have noticed more AI note-taking bots in online conferencing platforms, some of which offer end-to-end encryption (E2EE). Then Apple announced its Apple Intelligence plans, promising application redesigns to bring AI features across its phone and laptop operating systems. The latest change is Meta AI’s integration into WhatsApp, replete with “bots nobody wants.”

Any time new features are added to an E2EE messaging app, they raise concerns about privacy and security. So, what concerns does the addition of AI bots raise? How can we evaluate those concerns? As AI becomes more embedded in encrypted services, is it possible to resolve the tension between the privacy users expect from E2EE and the data access needed for AI functionality? With our colleagues at Cornell and NYU, we set out to answer these questions.

We examined these questions from both technical and legal perspectives and published a paper laying out practical recommendations for E2EE messaging platforms and regulators, along with practical solutions and recommendations for the public. You can read the full preprint paper here.

  • Libb@jlai.lu · 1 day ago

    If it’s encrypted and not stored in the clear anywhere, no.

    That said, the moment encryption is forced (by law) to include backdoors, the question won’t be if they will read our messages (and everything else we used to consider private matters), but when.