We all contribute to AI — should we get paid for that?

In Silicon Valley, some of the brightest minds believe a universal basic income (UBI) that guarantees people unrestricted cash payments will help them survive and thrive as advanced technologies eliminate more careers as we know them, from white-collar and creative jobs — lawyers, journalists, artists, software engineers — to labor roles. The idea has gained enough traction that dozens of guaranteed income programs have been started in U.S. cities since 2020.

Yet even Sam Altman, the CEO of OpenAI and one of the highest-profile proponents of UBI, doesn’t believe that it’s a complete solution. As he said during a sit-down earlier this year, “I think it is a little part of the solution. I think it’s great. I think as [advanced artificial intelligence] participates more and more in the economy, we should distribute wealth and resources much more than we have and that will be important over time. But I don’t think that’s going to solve the problem. I don’t think that’s going to give people meaning, I don’t think it means people are going to entirely stop trying to create and do new things and whatever else. So I would consider it an enabling technology, but not a plan for society.”

That raises the question of what a plan for society should look like, and computer scientist Jaron Lanier, a founding figure in the field of virtual reality, writes in this week’s New Yorker that “data dignity” could be an even bigger part of the solution.

Here’s the basic premise: Right now, we mostly give our data for free in exchange for free services. Lanier argues that in the age of AI, we need to stop doing this, that the powerful models currently working their way into society need instead to “be connected with the humans” who give them so much to ingest and learn from in the first place.

The idea is for people to “get paid for what they create, even when it is filtered and recombined” into something that’s unrecognizable.

The concept isn’t brand new, with Lanier first introducing the notion of data dignity in a 2018 Harvard Business Review piece titled, “A Blueprint for a Better Digital Society.”

As he wrote at the time with co-author and economist Glen Weyl, “[R]hetoric from the tech sector suggests a coming wave of underemployment due to artificial intelligence (AI) and automation.” But the predictions of UBI advocates “leave room for only two outcomes,” and they’re extreme, Lanier and Weyl observed. “Either there will be mass poverty despite technological advances, or much wealth will have to be taken under central, national control through a social wealth fund to provide citizens a universal basic income.”

The problem is that both “hyper-concentrate power and undermine or ignore the value of data creators,” they wrote.

Untangle my mind

Of course, assigning people the right amount of credit for their countless contributions to everything that exists online is not a minor challenge. Lanier acknowledges that even data-dignity researchers can’t agree on how to disentangle everything that AI models have absorbed or how detailed an accounting should be attempted.

Still, he thinks that it could be done — gradually. “The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models—those who have added to a model’s simulated competence with grammar, for example.” But starting with a “small number of special contributors,” over time, “more people might be included” and “start to play a role.”

Alas, even if there is a will, a more immediate challenge — lack of access — looms. Though OpenAI released some of its training data in previous years, it has since stopped disclosing it entirely. When OpenAI President Greg Brockman described to TechCrunch last month the training data for OpenAI’s latest and most powerful large language model, GPT-4, he said it derived from a “variety of licensed, created, and publicly available data sources, which may include publicly available personal information,” but he declined to offer anything more specific.

As OpenAI stated upon GPT-4’s release, there is too much downside for the outfit in revealing more than it does. “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.” (The same is true of every large language model currently, including Google’s Bard chatbot.)

Unsurprisingly, regulators are grappling with what to do. OpenAI — whose technology in particular is spreading like wildfire — is already in the crosshairs of regulators in a growing number of countries: Italy’s data protection authority has blocked the use of its popular ChatGPT chatbot, and French, German, Irish, and Canadian data regulators are also investigating how it collects and uses data.

But as Margaret Mitchell, an AI researcher who was formerly Google’s AI ethics co-lead, tells the outlet MIT Technology Review, it might be nearly impossible at this point for these companies to identify individuals’ data and remove it from their models.

As explained by the outlet: OpenAI would be better off today if it had built in data record-keeping from the start, but it’s standard in the AI industry to build data sets for AI models by scraping the web indiscriminately and then outsourcing some of the clean-up of that data.

How to save a life

If these players truly have a limited understanding of what’s now in their models, that’s a pretty big challenge to the “data dignity” proposal of Lanier, who calls Altman a “colleague and friend” in his New Yorker piece.

Whether that renders the proposal impossible, only time will tell.

Certainly, there is merit in determining a way to give people ownership over their work, even if it’s made outwardly “other.” It’s also highly likely that frustration over who owns what will grow as more of the world is reshaped with these new tools.

Already, OpenAI and others are facing numerous and wide-ranging copyright infringement lawsuits over whether they have the right to scrape the entire internet to feed their algorithms.

Perhaps even more importantly, giving people credit for what comes out of these AI systems could help preserve humans’ sanity over time, suggests Lanier in his fascinating New Yorker piece.

People need agency, and as he sees it, universal basic income alone “amounts to putting everyone on the dole in order to preserve the idea of black-box artificial intelligence.”

Meanwhile, ending the “black box nature of our current AI models” would make an accounting of people’s contributions easier — which might make them far more likely to continue making contributions.

It might all boil down to establishing a new creative class instead of a new dependent class, he writes. And which would you prefer to be a part of?
