Google warns employees not to use code from its Bard chatbot
Google parent company Alphabet recently told employees to be careful when using AI chatbots, including its own Bard program. Reuters reported that Google staff must not enter confidential materials into these bots or directly use the code they generate. The tech giant cited privacy, security, and productivity concerns for this seemingly strange decision.
The most striking thing about this issue is Google restricting employees from using the very AI chatbot it offers worldwide. Then again, companies like Google and OpenAI have been transparent in warning users about the limitations of their products and services.
Understanding this corporate move will help us understand how we should use their AI systems moving forward. This article will discuss Google’s warning about using AI chatbots. Then, I will cover the potential negative consequences of using artificial intelligence daily.
Why did Google ban using AI chatbots?
Google parent company Alphabet is warning employees not to enter confidential materials into chatbots, including its own chatbot Bard https://t.co/cLAEqlH969
— Forbes (@Forbes) June 15, 2023
It may seem awkward for Google to restrict employees from using its own AI chatbot. That is why the company reiterated its Bard Privacy Statement with the following:
“Please do not include information that can be used to identify you or others in your Bard conversations.” That makes sense from a business perspective since it prevents rival companies from gleaning information from Google staff.
Let’s say a Google employee entered sensitive details into an AI chatbot. Other people could eventually coax those programs into revealing information about that user. As a result, the online search engine company could risk a massive data leak.
However, the ban on AI-generated code seems an even stranger restriction. That is why Reuters reached out to Google for more information. The company said Bard could make “undesired code suggestions.”
Alphabet warned against the “direct use” of such code, but Google says Bard can still help programmers.
I explained how script kiddies and veteran coders can use ChatGPT in my “Top 10 Business ChatGPT Applications” article.
I cited Muddu Sudhakar, CEO of AI company Aisera. He said, “While developers, programmers, and software engineers can spend hours writing code for applications, using ChatGPT, anyone can write a description asking to automate a process, and they will receive a code.”
You may also like: Google Bard vs. ChatGPT
“This process of prompt-based automation eliminates the need to work with complex code and allows teams to easily update or include new features,” he added.
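Sudhakar’s point is easy to picture. A prompt such as “write a Python function that renames every file in a folder with a numbered prefix” might return something like the sketch below (a hypothetical example of typical chatbot output, not actual code from Bard or ChatGPT):

```python
import os

def batch_rename(folder, prefix):
    """Rename every file in `folder` to `prefix` plus a sequence number,
    keeping each file's original extension."""
    renamed = []
    for i, name in enumerate(sorted(os.listdir(folder)), start=1):
        ext = os.path.splitext(name)[1]
        new_name = f"{prefix}_{i:03d}{ext}"
        os.rename(os.path.join(folder, name), os.path.join(folder, new_name))
        renamed.append(new_name)
    return renamed
```

The example also illustrates Google’s concern: generated code like this can hide subtle problems (here, a name collision could silently overwrite a file on some systems), which is exactly the kind of “undesired code suggestion” risk the company cited when it told programmers to review rather than directly use such output.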
Other tech leaders supported Google’s decision. For example, Yusuf Mehdi, Microsoft’s consumer chief marketing officer, said it “makes sense” for companies to prevent staff from using public chatbots.
“Companies are taking a duly conservative standpoint,” Mehdi said, explaining to Reuters how the consumer Bing chatbot compares with Microsoft’s enterprise software. “There, our policies are much more strict.”
What are the risks of artificial intelligence?
Despite creating AI chatbots, some tech firms worry about their risks. Some industry figures have even signed an open letter petitioning for a worldwide pause on AI development.
Over 1,000 tech experts signed it, including Apple co-founder Steve Wozniak and SpaceX CEO Elon Musk. Some of their most salient points include:
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones?”
“Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk the loss of control of our civilization?”
They fear artificial intelligence may become so advanced that it renders humans obsolete. On the other hand, Microsoft co-founder Bill Gates said suspending AI development is not the answer.
He told Reuters, “I don’t really understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop.”
You may also like: The Future of AI Chatbots
The “AI doom-mongering” started when ChatGPT amazed the world with its uncanny ability to generate text with human-like accuracy and style. Meanwhile, more companies worldwide, such as China’s Baidu and Alibaba, are developing AI systems of their own.
As Gates said, the pause would only work if every country agreed to follow it. The tech billionaire also said we would need compelling reasons to halt AI development, and he believes its benefits outweigh the risks.
For example, AI recently helped researchers discover a treatment for an antibiotic-resistant superbug. Also, a recent study found that more students prefer ChatGPT for tutoring and achieve better results with it.
Conclusion
Google parent firm Alphabet warns employees not to use public AI chatbots for work. Also, it told programmers not to use AI-generated code directly.
Artificial intelligence systems like ChatGPT are here to stay and will continue improving. Fortunately, we can steer their development to maximize benefits and minimize risks to humanity.
Many companies and countries have been implementing laws to control its dangers and promote research. Learn more about them by following Inquirer Tech.