Trustless Computing Association


AI leaders agree on urgent need for global governance and oversight of General AI development

In January 2017, the gotha of Artificial Intelligence assembled in Asilomar, California, for the Beneficial AI 2017 Conference. The panel “Superintelligence: Science or Fiction” (youtube) brought together leading experts, researchers, tech billionaires and chief scientists of the world's leading AI companies. All of them agreed that superintelligence may be just years or a decade away. They therefore mostly agreed on the crucial and urgent need for international governance, regulation and oversight of advances in AI, and on the need for advances in AI safety. Surprisingly, and in contrast to some of their statements from only a few years ago, a majority now placed their hopes of steering AI towards a good outcome on international governance, regulation and certification.

Ray Kurzweil: “I think as technologists we should do everything we can to keep the technology safe and beneficial … But I don’t think we can solve the problem just technologically. Just imagine that we have done our job perfectly and we’ve created the most safe and beneficial AI possible, but we’ve let the political system become totalitarian and evil: either an evil world government or part of the world…. And so part of the struggle is in the area of politics and social policy, to have the world reflect the values we want to achieve.”

Nick Bostrom: “The current speed of AI progress is a fairly hard variable to change very much, because there are big forces pushing on it. So perhaps the highest elasticity option is what I suggested in the talk – to ensure that whoever gets there first has enough of a lead that they are able to slow down for a few months during the transition.”

Nick Bostrom: “I think it would be great if it went slow. But it’s very hard to see how it could go slow, given the huge first-mover advantages of getting to superintelligent AI, so the only scenario where it might go slow is where there is just one potential that can then stop and think. Maybe that speaks for creating a society where AI is restricted and unified.”

Demis Hassabis: “… The most capable [AI leaders] should agree on safety protocols or safety procedures. Or maybe agree that we should verify these systems, and it’s going to take 3 years, and we should think about that. I think that would be a good thing.” Asked directly what he thinks is a core challenge that we should tackle now, he answered: “I think the coordination problem is one thing, where we want to avoid this sort of harmful race to the finish, where corner cutting starts happening, where safety gets cut because it does not contribute to AI capability. In fact, they may hold it back a bit. So I think that’s going to be a big issue on a global scale. And it seems that’s going to be a hard problem when we are talking about national governments and things. And I think also we haven’t thought enough about the whole governance scenario of how we want those AIs to be out in the world; how many of them; who will set their goals and these kinds of things. They need a lot more thought.”

Elon Musk: “I am trying to think of what would be a good future. We are heading to superintelligence, or civilization is destroyed. So what is the world we want with superintelligence? We are already cyborgs, with our smartphones. We are already superhuman. We communicate anywhere in the world. The only limitation is that we are bandwidth constrained, particularly in the output.”

Erik Brynjolfsson of MIT: “I am going to pick up on the things that Elon said at the end, about democratizing the outcome, and going back to the panel yesterday where Reid Hoffman talked about people caring not only about absolute income but relative income. I wanted to get the panelists’ reactions about whether or not AI has tendencies towards winner-take-all effects, because there is a tendency for concentration where whoever is ahead can pull further ahead; or whether there is potential for more widespread democratic access to it. And what kind of mechanisms we can put in place if we want to have the widely shared prosperity that Elon suggested.”

Elon Musk: “I have to say that when something is a danger to the public, then there needs to be some – I hate to say it – government agency, like regulators. I am not the biggest fan of regulators because they are a bit of a buzzkill. But the fact is that we’ve got regulators in the aircraft industry, the car industry – I deal with them all the time – drugs, food. Anything that’s really a public risk. And I think this has to fall in the category of a public risk. And I think it will happen.”