Microsoft’s president says AI needs “human control”
Microsoft president Brad Smith warned that artificial intelligence needs human control. He said it could pose an extinction-level risk comparable to nuclear war, which is why people must be ready to shut down AI tools immediately to prevent such a catastrophe. Alternatively, development could be slowed so that humans can catch up.
The tech leader also noted artificial intelligence has “the potential to become both a tool and a weapon.” Consequently, we should learn how to handle this technology to maximize its benefits and minimize its risks. Listening to experts like Smith is a good start, but their views must then be turned into practical, real-world solutions.
This article will elaborate on Microsoft’s president’s perspectives on artificial intelligence. Then, I will discuss ways the world is trying to control AI’s growing influence.
What does the Microsoft president say about AI?
Companies creating #AI have a responsibility to ensure that it is safe, secure, and under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling challenges so that it benefits all humanity. https://t.co/OoOaArcEIt
— Brad Smith (@BradSmi) July 26, 2023
On August 28, 2023, Brad Smith acknowledged that “every technology ever invented [has] the potential to become both a tool and a weapon.” He added that humans must rein in artificial intelligence to curb its destructive capabilities.
“It is a tool that can help people think smarter and faster. The biggest mistake people could make is to think that this is a tool that will enable people to stop thinking.”
Teachers worldwide share a similar sentiment as more pupils use ChatGPT and other AI tools to cheat. Some have banned these technologies because students may become reliant on them.
Overreliance may lead to students not learning essential skills in school as AI bots think for them. Moreover, Smith and other tech experts warned artificial intelligence may cause humans to go extinct.
Bad actors could weaponize AI systems, turning them into a threat on the scale of nuclear war. Hence, the Microsoft president stressed the need to guard against such a disaster.
“It’s why we’ve advocated for not just companies to do the right thing, but new laws and regulations that would ensure that there are safety breaks. We’ve seen the need for this elsewhere.”
You may also like: Google bans employees from using its chatbot
“I mean, just imagine electricity depends on circuit breakers. You put your kids on a school bus, knowing that there is an emergency brake. We’ve done this before for other technologies. Now, we need to do it as well for AI,” Smith stated.
The Microsoft president had a similar warning last week. He said rapid AI development risks repeating the tech industry’s mistakes with social media.
Smith believed developers were too starry-eyed about social networks. He said they “became a little too euphoric about all the good things that social media would bring to the world, and there have been many, without thinking about the risks as well.”
How are we responding to AI threats?
Various governments recognize the growing influence of artificial intelligence and have begun drafting laws to control it. For example, the Philippines proposed an AI bill months ago.
Surigao del Norte Second District Representative Robert Ace Barbers filed House Bill 7396, which proposes the creation of the Artificial Intelligence Development Authority (AIDA). It will be “responsible for the development and implementation of a national AI strategy.”
AIDA would conduct risk assessments and impact analyses to ensure the technology complies with ethical guidelines and protects individual welfare. It would also develop cybersecurity standards for AI to prevent hacking and other cyberattacks.
The bill also reflects the country’s understanding of this innovation. “While the Philippines recognizes the importance of AI in the development of the country, the rapid phase of technological advancement in AI also poses risks and challenges….”
You may also like: How to keep Google AI from training with your data
“…that must be addressed to ensure that its benefits are maximized, and its negatives are minimized, if not avoided.” Meanwhile, the United States held its first-ever AI Senate hearing in May.
It featured OpenAI CEO Sam Altman, the leader of the company that made ChatGPT. He and several lawmakers agreed to create new laws for this technology.
The US Copyright Office is also assessing AI’s effects on intellectual property rights. It is holding a public comment period so the public can help shape future AI copyright rules.
Microsoft president Brad Smith cautioned against leaving artificial intelligence unchecked. He believes strict human control over this technology is needed to prevent disastrous risks.
Meanwhile, companies have been improving their AI programs to ensure they align with humanity’s goals. For example, Anthropic’s Claude chatbot follows “constitutional AI” to minimize harmful responses.
Nevertheless, we live in the AI era, so it pays to prepare with the proper knowledge and skills. Start by checking the latest digital tips and trends at Inquirer Tech.