
Could a distributed model breed a better AI?

ch34p50cc3r

Schneier is not alone in his assessment that the short-term future for AI will be a wild ride. He worries that the power of AI models to hack – in its simplest form, we will define hacking here as any act of discovering and exploiting vulnerabilities and loopholes in systems, not necessarily in a cyber security context – will far outpace the ability of human hackers to keep up.

He argues that the hacks AIs will discover will almost inevitably be used to benefit the wealthy and the powerful. Imagine, if you dare, a scenario where AIs become so adept at exploiting tax and regulatory systems in the service of amoral hedge funds and venture capitalists that wealth inequality increases exponentially and economic systems begin to crash. It’s not possible today, but it’s probable tomorrow.

“I talk about the notion of AI hacking [and] finding vulnerabilities in systems,” says Schneier. “In general, AI is very discontinuous technology and we don’t know what’s possible – things that we think are easy end up being hard and vice versa. So we don’t know.

“But I think this is going to be the biggest change in human society. I think it’s going to affect everything.”

Nobody, not even Schneier, yet has the answers to these problems, but through his work as chief of security architecture at Inrupt, where he has reunited with long-time collaborator John Bruce and World Wide Web pioneer Tim Berners-Lee, he is now working on an idea that, if it comes good, may give some power over AI back to the people.

Berners-Lee has always been an advocate for the open web and makes no secret of wanting to safeguard the democratic principles on which he founded it. He and Bruce set up Inrupt on similar principles: enabling individuals to control their experience and their data in a way that has been lost since the advent of platforms such as Google and Facebook, now Meta, in the mid-2000s.

Put as simply as possible, Inrupt’s technology – the Solid Privacy Platform – organises data, applications and identities in a way that gives the data owner the power to choose how and where it is stored, and who can access it, via their own personal online data store or Pod.

Early adopters have included NatWest Bank, the BBC, the government of Flanders in Belgium, and the NHS, which have been exploring pilot use cases for an enterprise version since 2020.

What does this have to do with AI, then?

So it’s a cloud storage service? Not exactly. Think of a Pod as something more akin to a private website where you control how your personal data is made available to applications or other people in a way that makes sense to you.

Were you at a party with someone? Then you can let them see photos you took at the party, but not your holiday snaps. Did you work with someone on a project? Then you can let them access the project files, but not the draft of your novel. Have you gone through a relationship breakdown? Then you can rescind your ex’s access to your data.
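The scenarios above amount to per-resource access control decided by the data owner. The sketch below models that idea in a few lines of Python; the `Pod` class and its method names are purely illustrative and are not Inrupt's actual Solid API, which builds on web standards such as Web Access Control rather than an in-memory dictionary.

```python
# Illustrative sketch of owner-controlled, per-resource access,
# in the spirit of a Solid Pod. Not Inrupt's real API.

class Pod:
    """A personal data store whose owner decides who can read which resource."""

    def __init__(self, owner):
        self.owner = owner
        self._acl = {}  # resource path -> set of agents granted read access

    def grant(self, resource, agent):
        """Allow an agent to read one specific resource."""
        self._acl.setdefault(resource, set()).add(agent)

    def revoke(self, resource, agent):
        """Rescind an agent's access to one specific resource."""
        self._acl.get(resource, set()).discard(agent)

    def can_read(self, resource, agent):
        """The owner can always read; everyone else needs an explicit grant."""
        return agent == self.owner or agent in self._acl.get(resource, set())


pod = Pod(owner="alice")
pod.grant("/photos/party/", "bob")                 # Bob was at the party
print(pod.can_read("/photos/party/", "bob"))       # True
print(pod.can_read("/photos/holiday/", "bob"))     # False: holiday snaps stay private
pod.revoke("/photos/party/", "bob")                # after a breakdown, access is rescinded
print(pod.can_read("/photos/party/", "bob"))       # False
```

The point of the design is that the access list lives with the data owner, not with the application: each app or person gets exactly the slice the owner chooses, and that choice is reversible.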
