Here’s how AI will change the way we work in the next 25 years – Twin Cities
Here’s an easy prediction about how artificial intelligence will impact work over the next 25 years: It won’t look anything like Skynet.
Although references to “The Terminator” movie franchise’s world-conquering and human-hating AI are everywhere in the discussion of programs like ChatGPT or Midjourney, self-aware computer programs are squarely in the realm of fiction.
“(Artificial intelligence) doesn’t have any agency. We are controlling it and changing the algorithms all the time,” said Anima Anandkumar, a professor of computing and mathematical sciences at Caltech.
The “artificial intelligence” technologies available today — and into the future, barring an unforeseen sudden breakthrough — are programs that predict what to generate based on the patterns in their existing data sets.
They’re essentially much more sophisticated versions of the software that suggests words while typing a text message on a smartphone. As anyone who’s ever let their phone suggest whole sentences that way knows, the results can sometimes seem eerily human, but they’re more likely to be nonsense.
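A minimal sketch of that idea, using a handful of made-up sentences and simple word-pair counts (nothing like the scale or architecture of a real model, but the same underlying principle of predicting from patterns in existing data):

```python
from collections import Counter, defaultdict

# A tiny training "data set": the word patterns the predictor learns from.
corpus = (
    "the rover explores the crater "
    "the rover sends the data "
    "the data reaches the earth"
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))    # -> 'rover' (the most frequent follower of 'the')
print(suggest("earth"))  # -> None ('earth' is never followed by anything here)
```

The predictor has no idea what a rover or a crater is; it only knows which words tended to follow which in its data, which is why such systems can sound fluent without understanding anything.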
“Because we are human, we have a tendency of looking at the world that anthropomorphizes everything,” said Rep. Jay Obernolte, R-Hesperia, who put his doctorate in artificial intelligence on hold when a video game he created became a surprise hit and he went into business for himself instead. “Some of the people who have been most alarmed by the things that ChatGPT does, they’re thinking of it as a person at the other end of the data stream. But there isn’t — it’s just an algorithm.”
AI doesn’t know anything, can’t think of anything and isn’t any more sentient than the code that runs a smartphone’s calculator function.
It seems intelligent because if its output isn’t sufficiently believable — whether it’s a chatbot like ChatGPT, an AI art program like Midjourney or the AI that creates deepfake videos — it’s rejected during the development process, effectively teaching the AI to create content that satisfies the humans consuming it.
“(People) think if text sounds very human-like it has intelligence or agency. It’s so easy to fool humans,” Anandkumar said.
And that includes when AI produces things like term papers or legal documents. The program simply looks at what a term paper on “The Great Gatsby” or a no-contest divorce filing typically looks like, and assembles the text along those lines.
“But that’s not the same as being factual,” Anandkumar said.
Asking an AI to tell you about yourself almost inevitably leads to what researchers call “hallucinations,” as it generates fictitious biographies and accomplishments by predicting what words to include based on actual biographies.
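A crude sketch of why that happens — the name and every “fact” below are invented for illustration. A pattern-based generator fills a statistically typical biography template with plausible words, with no step anywhere that checks whether any of it is true:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Fragments that commonly appear in real biographies: the "patterns"
# a generator would have absorbed from its training data.
fields = ["computer science", "economics", "marine biology"]
schools = ["Stanford", "MIT", "Oxford"]
awards = ["a MacArthur Fellowship", "the Turing Award", "a Pulitzer Prize"]

def fake_bio(name):
    """Assemble a fluent, confident and entirely fabricated biography."""
    return (f"{name} is a professor of {random.choice(fields)}, "
            f"educated at {random.choice(schools)}, "
            f"and the recipient of {random.choice(awards)}.")

# Reads plausibly; none of it is checked against reality.
print(fake_bio("Alex Rivera"))
```

The output is grammatical and confident precisely because it mirrors the shape of real biographies — which is what makes hallucinations easy to mistake for facts.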
AI will get more factual over time, experts say, but it’s not yet capable of consistently producing factual information when requested.
“The ultimate goal of AI is to have learning agents that can learn from the environment, that are autonomous,” Anandkumar said. “All of those new developments are going toward achieving that.”
That autonomy will be valuable in fields like the exploration of Mars. Instructions sent from Earth can take anywhere from five to 20 minutes to reach Mars, depending on the distance between the two planets. A rover more capable of acting on its own, based on what’s happening around it, could mean the difference between a successful mission and one in which a rover worth hundreds of millions of dollars is catastrophically damaged before controllers on Earth can issue commands to get it out of trouble.
“I think there are still deep challenges to be overcome for AI to be fully autonomous, especially in safety-critical systems,” Anandkumar said. “And I think, humans will still be in the loop.”
Each improvement in making AI more accurate is harder than the last, Anandkumar said. Humans are still better at handling uncertainty than even the most advanced AI models, and they’re needed to fact-check AI to help improve it.
But the limitations of AI don’t mean it won’t help reshape the world over the next 25 years. Those changes will just be less dramatic than in “The Terminator” movies, experts say.
Obernolte expects the widespread adoption of AI to displace white-collar jobs, many in sectors where workers aren’t used to being displaced by technological change.
He pointed to automation being used to find tumors in CT scans earlier than humans can detect them, ultimately providing cheaper, faster and better healthcare for patients.
“If you are a patient, this is a hugely beneficial thing,” Obernolte said. But “if you are a radiologist, the picture is not so rosy.”
Radiologists won’t be the only ones affected in the coming decades.
“No one is going to pay a lawyer for a basic will anymore,” Obernolte said. “No one is going to pay an entry-level accountant anymore.”
Repetitive tasks are likely to be done largely by AI in the future, including white-collar work like processing forms or manning customer service lines. Meanwhile, just as with monitoring the activities of a future Mars rover, humans will be needed to keep an eye on automated data processing and the like — just not as many of them as today.
“We’ll still need experts in those professions,” Obernolte said. “To have a career in a white-collar job, you’re going to have to be very, very good.”
As for where the displaced workers will go, he predicts new jobs will spring up, “sometimes in fields that we aren’t even aware of right now.”
With AI largely automating many of these jobs, white-collar services should also be available more widely in the future.
“I think it’s going to accelerate a phenomenon that’s already occurring, the flight from urban areas into rural areas,” Obernolte said. “I think it’s going to enhance the attractiveness of places like the Inland Empire with lower cost of living.”
Like Anandkumar, Obernolte isn’t worried about Skynet. But he does stay up at night worrying about how AI is going to lead to more personal data being siphoned up by the tech industry, and he’s concerned about preventing future monopolies in the industry as well as foreign interference in domestic affairs using AI technologies.
Obernolte would like to see Congress create data privacy protections, along with a regulatory framework for AI that protects the public without choking off beneficial uses. As one of the state legislators involved in crafting California’s version, he’s optimistic that a federal digital privacy act will be passed.
On May 16, as the CEO of OpenAI, the company that created ChatGPT, spoke at a Senate hearing, The Hill published an op-ed by Obernolte, in which he wrote that “digital guardrails” are necessary for AI.
“I’m trying to create a federal privacy standard that prevents a patchwork of data standards, which would be devastating to commerce,” he wrote.
Big tech companies can afford the lawyers and other manpower needed to deal with 50 different standards, but small tech companies, like his, could be put out of business trying to comply.
Anandkumar agreed regulation is needed, but she said she wants it to be crafted by people who understand what they’re dealing with.
“We should have all the experts in the room,” she said. “It should not just be the machine learning people, but it should also not be only lawyers.”
In March, an open letter signed by more than 1,100 people, including tech pioneers, urged AI laboratories to pause their work for six months. The letter doesn’t seem to have caused anyone to do so.
Obernolte doesn’t think it’s possible or advisable to stop work on AI.
“I don’t see how a pause on the development of AI will be beneficial,” he said.
For one thing, it’d be hard to enforce.
“That’s not going to prevent bad actors in our own society that continue to develop AI in ways that benefit them financially and certainly isn’t going to hamper our foreign adversaries,” he added.
There’s a role for the government in subsidizing more research by those without a profit motive, unlike the big Silicon Valley firms currently spearheading AI development, Anandkumar said.
Safety nets and regulations around AI are needed, Obernolte said, but he thinks the growing pains will ultimately be worth it.
“I think it is going to have a revolutionary impact on our economy, almost overwhelmingly in ways that are beneficial to human society,” he said. “But the incorporation of AI into our economy will be extremely disruptive, as innovations always are.”