But the actions taken by the model in the virtual environments can always be described as discrete steps. Each modification to the weights made by each agent in each generation can be described as discrete steps. Even if I’m fucking up some of the terminology, basic computer architecture enforces that there are discrete steps.
We could literally trace every individual instruction that runs on the hardware if we wanted full auditability; it would eat all the storage space ever made and drive someone insane, but it’s possible. Have none of you AI devs ever taken an embedded programming/machine language course? Never looked into reverse engineering compiled executables?
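Some rough napkin math makes the storage point concrete (assuming a single accelerator at ~1e14 ops/sec and ~16 bytes logged per op; both are ballpark guesses, not measurements):

```python
# Ballpark cost of logging every operation a model executes.
ops_per_sec = 1e14        # assumed throughput of one accelerator
bytes_per_op = 16         # assumed log entry: opcode + operands
petabytes_per_sec = ops_per_sec * bytes_per_op / 1e15
print(petabytes_per_sec, "PB of trace data per second")  # -> 1.6
```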
I understand that these things work by doing these steps millions upon millions of times, but there has to be a better middle ground for tracing them than “lol i dunno, computer brute forced it”. It’s a mixture of laziness and an unwillingness to let responsibility negatively impact profits that leads so many in the field to summarize it as literally impossible.
But the actions taken by the model in the virtual environments can always be described as discrete steps.
That’s technically correct, but practically useless information. Neural networks are stochastic by design, and while Turing machines are technically deterministic, most operating systems’ random number generators deliberately mix in noise from the environment (current time, input device data, temperature readings, etc). Because of that randomness, the discrete steps you’d have to walk through would require knowing intimate details of the environment the PC was in at precisely the time it ran, and that environment isn’t stored. And even if it were, or you used a deterministic pseudo-random number generator, you’d still essentially be stuck reverse engineering the world’s worst spaghetti code, written entirely as huge matrix multiplications, code that we already know can’t possibly be optimal anyway.
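To make the randomness point concrete, here’s a minimal sketch with NumPy (standing in for whatever RNG the training stack really uses):

```python
import numpy as np

# No seed: the generator pulls entropy from the OS, which mixes
# in environmental noise, so every run prints a different number.
print(np.random.default_rng().normal())

# Fixed seed: fully reproducible, but only if the seed (and every
# other runtime input) was actually recorded at the time.
print(np.random.default_rng(seed=42).normal())
```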
If a piece of software needs guaranteed optimality, then a neural network (or any stochastic algorithm) is simply the wrong tool for the job. No need to shove a square peg into a round hole.
Also, I can’t speak for AI devs, and in fact I’ve only taken an applied neural networks course myself, but I can tell you that computer architecture was a prerequisite of a prerequisite of a prerequisite of that course.
There’s a wide range of “explainability” in machine learning - the most “explainable” model is a decision tree, which splits things into categories by looking at the data and building (training) an entropy-minimizing flowchart. Those are very easy for humans to follow, but they don’t have the accuracy of, say, a Random Forest Classifier, which is essentially the same thing done 100 times on different random subsets of the data.
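A minimal sketch of that trade-off with scikit-learn (the iris dataset is just a stand-in):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# A single decision tree: the whole flowchart fits on one screen.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))

# A random forest: 100 such trees, each fit to a different
# bootstrap sample, voting together. Usually more accurate,
# but now there are 100 flowcharts instead of one.
forest = RandomForestClassifier(n_estimators=100).fit(X, y)
```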
One flowchart is easy to look at and understand; 100 of them is 100 times harder. Neural nets are another 100 times harder, usually. The reasoning can (maybe) be done by hand by humans, but there are no regulations forcing you to do it, so why would you?
It’s true that each individual building block is easy to understand. It’s easy to trace each step.
The problem is that the model is like a 100 million line program with no descriptive variable names or any comments. All lines are heavily intertwined with each other. Changing a single line slightly can completely change the outcome of the program.
Imagine the worst code you’ve ever read and multiply it by a million.
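To see what those “lines” actually look like, here’s a toy two-layer network in plain NumPy (the sizes are arbitrary):

```python
import numpy as np

# The network's entire "program" is just these numbers, with no
# names or comments attached to any of them.
W1 = np.random.randn(784, 256)   # ~200,000 opaque parameters
W2 = np.random.randn(256, 10)    # ~2,500 more

def forward(x):
    h = np.maximum(0, x @ W1)    # each step is trivially simple...
    return h @ W2                # ...but which weight does what?
```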
It’s practically impossible to use traditional reverse engineering techniques to make sense of these models. There are techniques that give a partial picture of how a model works, but a full understanding is out of reach.
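One of the simpler such techniques, sketched here: nudge each input slightly and watch how the output moves (a crude finite-difference sensitivity map; `model` is a hypothetical callable returning a score):

```python
import numpy as np

def sensitivity(model, x, eps=1e-3):
    # Finite-difference probe: large values mark the inputs the
    # model actually reacts to. This tells you *where* the model
    # is looking, not *why* it decides what it does.
    base = model(x)
    sens = np.zeros(x.shape)
    for i in range(x.size):
        xp = x.copy()
        xp.flat[i] += eps
        sens.flat[i] = (model(xp) - base) / eps
    return sens
```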