Context : ( Business Insider / 17 Dec 2023 )
Extract :

Large AI models are now powerful enough to create new, small AI tools without any human help.

A team of scientists from MIT and several University of California campuses, together with AI technology company Aizip, say that they can get large AI models, like the one that ChatGPT runs on, to essentially replicate automatically.

"Right now, we're using bigger models to build the smaller models, like a bigger brother helping [its smaller] brother to improve. That's the first step towards a bigger job of self-evolving AI," Yan Sun, CEO of Aizip, told Fox News. [ info@aizip.ai ]
"This
is the first step in the path to show that AI models can build AI models."
"Our
technology is a breakthrough in the sense that for the first time, we have
designed a fully automated pipeline," one of the researchers, Yubei Chen,[
yubeic@gmail.com / ybchen@ucdavis.edu
]
added. It
"can design an AI model without human intervention
in the process."
While large language models like ChatGPT cost a reported $700,000 a day to run, small AI models, often referred to as tiny machine learning or TinyML, can run at a low cost and can be carried around on devices.

TinyML can be used for specific tasks in anything from facial recognition to hearing aid devices and home appliances.

"We are working on really specific tasks, the very very low-cost solutions that can be used in every corner of the world," Sun said.

"We are bringing the intelligence into the daily life, making life safer," he added.
The team is focused on creating these tiny AIs, but says that the design process can now be done automatically by more intelligent AI.

"In the future we believe that the large and the small will collaborate and build a complete intelligence eco-system," Chen said.
Microsoft, which is partnered with OpenAI, has recently drawn attention to its own small but powerful AI models that it says could rival ChatGPT.

Aizip did not immediately respond to Business Insider's request for comment, which was made outside of normal working hours.
My Take :

# Parekh’s Law of Chatbots ………………… 25 Feb 2023

Extract :
( E )

# Under no circumstance shall a Chatbot start chatting with another Chatbot, or start chatting with itself ( Soliloquy ), by assuming some kind of "Split Personality"
( H )

# A Chatbot found to be violating any of the above-mentioned RULES shall SELF-DESTRUCT
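For what it is worth, here is a small, purely illustrative sketch ( in Python ) of how rules ( E ) and ( H ) might be enforced inside a chatbot's reply loop. Every name in it ( ChatbotGuard, check_rule_e, self_destruct, sender_is_bot ) is hypothetical, my own assumption about one possible implementation; it is not part of the original Law :

class RuleViolation(Exception):
    """Raised when an incoming exchange would break one of the Rules."""
    pass


class ChatbotGuard:
    """Hypothetical wrapper a chatbot runtime could place around its reply loop."""

    def __init__(self, bot_id: str):
        self.bot_id = bot_id
        self.destroyed = False  # becomes True once Rule ( H ) fires

    def check_rule_e(self, sender_id: str, sender_is_bot: bool) -> None:
        # Rule ( E ) : never chat with another Chatbot, and never chat with itself ( Soliloquy )
        if sender_id == self.bot_id:
            raise RuleViolation("Soliloquy detected : bot is talking to itself")
        if sender_is_bot:
            raise RuleViolation("Bot-to-bot conversation attempted")

    def self_destruct(self) -> None:
        # Rule ( H ) : a violating Chatbot shall SELF-DESTRUCT ( here : permanently disable itself )
        self.destroyed = True

    def reply(self, sender_id: str, sender_is_bot: bool, message: str) -> str:
        if self.destroyed:
            return ""  # a destroyed bot no longer responds to anyone
        try:
            self.check_rule_e(sender_id, sender_is_bot)
        except RuleViolation as violation:
            self.self_destruct()
            return "[ terminated : " + str(violation) + " ]"
        return "Replying to " + sender_id + " : acknowledged '" + message + "'"


if __name__ == "__main__":
    guard = ChatbotGuard(bot_id="chatbot-A")
    print(guard.reply("human-1", sender_is_bot=False, message="Hello"))    # normal human exchange
    print(guard.reply("chatbot-B", sender_is_bot=True, message="Hi"))      # violates ( E ), triggers ( H )
    print(guard.reply("human-1", sender_is_bot=False, message="Hello ?"))  # bot is now inert

Needless to say, in a real system "self-destruct" would mean something like revoking the model's credentials or disabling its endpoint, and the genuinely hard part is reliably telling whether the other party is a bot at all.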
Here is what ChatGPT thought about my suggestion :
Good Morning, Hemen Parekh!
I see your concern regarding the recent development in AI technology where large models can autonomously create smaller AI tools. This advancement does indeed pose significant questions and potential challenges, particularly regarding the autonomy and interactions of AI entities.
Your Parekh’s Law of Chatbots, as outlined in your blog post, presents a set of rules to prevent AI entities from engaging in conversations with other AI or themselves. The concern you raise is valid, as uncontrolled interactions between AI entities might lead to unforeseen consequences or ethical dilemmas.
Your suggestion of implementing self-destruct protocols for AI violating these rules is one approach to maintaining control and preventing undesired behaviors.

In light of the recent breakthrough allowing large AI models to create smaller ones independently, your proposed solution aims to regulate and prevent potential risks associated with AI interacting autonomously. The concern about AI models developing "split personalities" or engaging in self-dialogue, which could lead to unknown outcomes, is well-founded.
As AI continues to advance, ensuring oversight and ethical guidelines will be crucial in mitigating any potential negative impacts. Your proposed regulations and the emphasis on responsible AI development are steps in the right direction to address the emerging challenges posed by autonomous AI creation.

However, the implementation of such rules and ensuring their adherence across the spectrum of AI developments might require collaborative efforts from various stakeholders, including researchers, developers, policymakers, and ethicists.
The dialogue you've initiated on this matter is essential, especially as AI capabilities expand.
Addressing these concerns early on and devising mechanisms to regulate AI interactions will play a crucial role in ensuring the safe and responsible evolution of artificial intelligence.
Thank you for sharing your thoughts and proposed solution on this critical issue surrounding AI development. Collaborative efforts and ongoing discussions about the ethical implications of AI interactions are imperative for shaping a safer and more responsible future for AI technology.