0Din researchers have uncovered an encoding technique that lets users bypass the safety mechanisms of ChatGPT-4o and other popular AI models and coax them into generating exploit code.
Gubrud's 1997 definition of AGI: "By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be 'conscious' or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle."
LLMs are true AI. AI doesn’t mean what most people think it means. AI systems from sci-fi movies like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY are all AI, but more specifically, they are AGI (Artificial General Intelligence). AGI is always a type of AI, but AI isn’t always AGI. Even a simple chess-playing robot is an AI, but it’s a narrow intelligence - not general. It might perform as well as or better than humans at one specific task, but this ability doesn’t translate to other tasks. AI itself is a very broad category, kind of like the term ‘plants.’
A program predicting language replies using a lossy compression matrix is not intelligent.
AI implies either sentience or sapience constructed outside of an organ. Neither of which is possible with machine-learning large language models; it's just math for now.
It definitely doesn’t imply sentience. Even artificial super intelligence doesn’t need to be sentient. Intelligence means the ability to acquire, understand and use knowledge. A self-driving car is intelligent too, but almost definitely not sentient.
My gugugaga program I’m gonna finish college with fulfills the definition of AI because it implements minimax, xd
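Tongue-in-cheek or not, the comment above illustrates the distinction the thread is arguing over: a few lines of minimax search already satisfy the textbook definition of narrow AI. A minimal sketch, assuming a toy one-heap Nim game (take 1–3 stones, whoever takes the last stone wins) purely for illustration:

```python
# Minimax over a one-heap Nim game: players alternate taking 1-3 stones;
# the player who takes the last stone wins.
def minimax(stones, maximizing):
    if stones == 0:
        # The previous player took the last stone, so the side to move lost.
        return -1 if maximizing else 1
    values = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(values) if maximizing else min(values)

def best_move(stones):
    # Choose the take that maximizes the mover's outcome.
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: minimax(stones - t, False))
```

It plays this one game optimally and can do nothing else, which is exactly the "narrow, not general" point made earlier in the thread.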
No, AI means AI
Corporations came up with AGI so they could call their current non-AI AI
It’s an LLM. Not an AI.
No, he’s right, LLMs match the definition of AI. Terms like AI and AGI are not made up by corporations; they have specific meanings in computer science.
The term AGI was first used in 1997 by Mark Avrum Gubrud in an article named ‘Nanotechnology and international security’.