AI tools are helpful and cool as long as you know their limitations. AI doesn’t exist. There is no fidelity in AI. AI is built on biased data sets and will gi...
The paper states that the graphs representing those relations are the result of training LLMs on a very small subset of unambiguously true and false statements.
While these emergent properties may provide interesting avenues for model refinement and for inspecting outputs, it doesn’t change the fact that these weird little dictionaries aren’t doing anything truly unexpected. We’re just learning about the extra information that came along with the training data.
It’s not far removed from the primary complaint of Gebru’s On the Dangers of Stochastic Parrots, where she points out the ways our biases are implicitly trained into LLMs through uncontrolled and unexamined inputs; except in this case those biases are the linguistics of truth and lies in unambiguously Boolean inputs.
This may provide interesting avenues for model refinement that don’t amount to having the model spit things out and get retrained by a “consciousness” telling it yes or no, or by feeding it additional info.
Only if the “direction of truth” exists in the wild with unchecked training data.
That clustering is a representation of the nature of the data fed to the model: all their training data was unambiguously true or false… It’s not surprising that it clusters.
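To make that concrete, here is a minimal synthetic sketch of what a “direction of truth” probe amounts to. Nothing below comes from the paper: the activation vectors, the dimensions, and the truth_direction axis are all made-up stand-ins, generated so that true and false items separate cleanly, which is exactly the situation described above.

```python
# Synthetic sketch only: the "activations" stand in for the hidden-state
# vectors a real LLM would produce for each statement; none of this is
# taken from the paper's actual data or method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64                                  # hypothetical hidden-state size
truth_direction = rng.normal(size=dim)    # made-up axis, for illustration

# 1 = unambiguously true, 0 = unambiguously false. True items get a push
# along the axis, false items the opposite push, plus noise, so the
# separation is baked in by construction.
labels = rng.integers(0, 2, size=500)
activations = rng.normal(size=(500, dim)) + np.outer(2 * labels - 1, truth_direction)

# A linear probe recovers the direction easily, because it was put there.
probe = LogisticRegression(max_iter=1000).fit(activations, labels)
projection = activations @ probe.coef_.ravel()
print("probe accuracy:", probe.score(activations, labels))
print("mean projection, true items: ", projection[labels == 1].mean())
print("mean projection, false items:", projection[labels == 0].mean())
```

Unsurprisingly, the two groups land on opposite sides of the learned direction; the open question raised above is whether any such separation survives when the inputs aren’t curated to be cleanly true or false.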
Cool.