12 Comments
Brad Morgan

Fascinating article, thank you for sharing this information and your thoughts!

I think there is an extremely interesting area of exploration in combining several of your concerns about an always learning AI model.

What happens when there is a coordinated effort by users (probably with the assistance of their own AI models) to “degrade” the model’s safety functions via its “always learning” mode?

At first it sounds like the premise of a science fiction thriller, but lately a lot of very real things have been sounding more and more like science fiction.

Steven Adler

Yeah, and even without the always-learning dimension of this, data poisoning / building "backdoors" into models is an area that governments, militaries, etc., will need to be concerned about if they're relying on AI for important use-cases. Tough dynamics.

Kristina Bogović

The idea of AI learning while we sleep is both exciting and a bit unsettling. Humans aren’t built for 24/7 pace, but AI will be.

Steven Adler

Yeah, I guess it’s dampened ever so slightly by the fact that _some_ human will be awake while it’s happening. This comment also makes me realize that almost no people really apply themselves toward learning in the course of their day-to-day lives, which feels like another meaningful difference.

Timothy B. Lee

A big reason I'm skeptical about this is that I don't think there are actually that many tasks you can work on for the equivalent of many human workdays without interacting with other people. Most tasks have a customer, client, reader, patient, etc. that is going to be the ultimate beneficiary of the work. Many tasks also have suppliers, regulators, sources, etc. that provide inputs into the process. And so even if you can remove all the humans from the internal work processes, you're still frequently going to wind up in situations where work is blocked by the need to interact with some external person. Automation inside the firm lets you get your work product back to these people more quickly, but this can only speed up the overall process so much if it still takes 24 hours for those people to respond.

Steven Adler

Appreciate the thoughtful reply - I think you’re totally right about external bottlenecks to the firm (at least until those shift over too, which might not be possible for some use-cases, like treating a human patient). I’m not sure which implication this changes, though?

It still seems to me, for instance, that humans will struggle to be competitive on whatever work the organization does, even if there are lags in the external response. (Humans still need to be paid during that down time for instance; AI labor can be scaled back.) Maybe this could reduce the AI>human benefit such that there is less of a productivity hit from continuing to employ meaningful amounts of humans, but I kind of doubt it?

I do think there will be many jobs for humans still, to be clear - just based in reasons like other humans preferring to interact with humans, rather than intrinsic productivity parity?

Timothy B. Lee

"It still seems to me, for instance, that humans will struggle to be competitive on whatever work the organization does, even if there are lags in the external response."

The key issue I think is conceiving "the work the organization does" and "the external response" as separate things. Think about jobs like nurses, teachers, salespeople, plumbers, etc. The "external response" is a big and important part of the job!

Or here's another way to think about it: I run a news organization with tens of thousands of subscribers. If someone had tried to build an organization like this in 1975, they would have needed to hire like a dozen people to print the newspapers, deliver them, manage subscriptions, and so forth. Now Substack's website handles all that stuff for me, and so one guy can run the whole organization.

In a sense, you could say that my newsletter is "90 percent automated" because software is doing 90 percent of the work a human being would have needed to do 50 years ago. But nobody talks about it that way because automation radically reduced the cost of distributing the content and collecting payments. "The work my organization does" is now mostly writing articles because that's the part of the job that humans can still do better than computers.

AI is going to shift a bunch of tasks into the "computers can do it better" column. But the result won't be that organizations stop having human employees, or they stop seeming important. Rather, the things humans still do better, such as interacting with other people, will come to be seen as the main work that the organization does.

Steven Adler

I basically agree with all of this, though I suspect we're picturing different magnitudes or something in the last paragraph?

I get the impression from it that your take is something like "human labor won't be meaningfully displaced by AI having developed ~task-parity with humans, it'll just shift into the smaller areas where humans still have an advantage." Though I'd expect there to be limits on how many humans can earn gainful employment through that route?

(More broadly I wonder if there's somewhere you've written your take on what jobs/the economy would be like, once we're at the point of AI being able to do so much that humans do today? [ I haven't really done this yet for my own views, and maybe I should ] )

Timothy B. Lee

I would break the workforce into three buckets. In round numbers:

* 20 percent of jobs are "purely remote" jobs that could be plausibly replaced by an AI model.

* 40 percent of jobs are "mechanical" jobs (plumbers, short-order cooks, surgeons) that might eventually be replaced by robots.

* 40 percent are "caring" jobs (nurses, teachers, nannies, waiters) that are unlikely to ever be replaced by AI or robots.

Demand is elastic, so I don't really think there's a limit to how many people can be employed in any of these categories. So even if we automated all jobs in the first two categories, I think we could employ everyone who wanted a job in the third category.

My guess is that it's going to take 10+ years to automate half the jobs in the first bucket, and 20+ years after that to automate half the jobs in the second bucket (my error bar is quite wide for this second time period). So it's not that I don't think there will be meaningful displacement, just that the process will take long enough that it won't feel like an emergency. See, for example, the story of bank tellers in the 1990s or truck drivers right now — this stuff almost always takes longer than people think.

Steven Adler

I also wonder a lot about AI doing scientific research, which I expect to be bottlenecked by the speed of actuators for doing experiments (and the amount of time an experiment needs to run to produce results), more so than by human intellect intrinsically?

Comment deleted (Nov 14, edited)
Steven Adler

Thanks for reading - I have the intuition that something like that mostly only happens once AI is capable enough that it is always acting? Though I'm not sure.

(I assume you meant "I don't think AIs will ever get to AGI withOUT constant help"?)

Steven Adler

I think there's also an interesting question of "Is that process a necessary step for getting to AGI, or could it just be some interface for AI with the world once it has become AGI?"

I lean toward the latter, but it feels unclear to me how to count the big human-enabled data efforts that ultimately help AIs to learn; maybe those should eventually count for the type of thing you're describing.