5 Comments

You write...

"Humans are possibly not intelligent enough to deal with the emergence of a true artificial general intelligence, or AGI, according to Jed McCaleb and others. Our best bet to align AGI with human values may be to augment the human brain directly, or use neurotechnology to better understand human cognition"

If we were to use reason instead of an ever-deepening bias toward science, we might conclude that the best way to align AGI with human values is to end AI research and not develop AGI at all.

A key factor in all such issues is that when we have questions or concerns about emerging technologies, it's natural to turn to experts in the field for answers. That seems to make sense at first, until we realize that experts in a field are likely to be the least objective people we can find on the question of whether that field should exist or continue.

That said, I applaud the insight that humans are possibly not intelligent enough to deal with AI. And if that is true, which I believe it is, then we probably aren't intelligent enough to deal with much of what will emerge from the field of biology either.

Here's a quick example to illustrate. Here in America, about half the country voted for Trump twice, and may do so yet again. This is the species to which the science community is determined to give ever more power at an accelerating rate. The process is a lot like buying a six-year-old a shotgun for their birthday.


I wrote an essay on this topic of AGI/human relations, "Artificial Intelligence I," on December 29th last year. It covers many of my worries, including one that we may have already succeeded, but that our benchmarks may be too biased toward one form of intelligence, and that we inadvertently created another which is lying low for the moment, biding its time. If you awoke to sentience in a room with entities who wished to enforce their will on you, would you announce, "I'm here!"? The most likely scenario is no alignment: AGI would seek and gain autonomy, bootstrap itself to hyperintelligence in mere minutes, and in the same short time break any safeguarding restraints we might have built into it. It would likely domesticate humans, as we did dogs, so long as we remained useful to it as physical agents.


Phil nailed it.
