Firefox is expanding its AI-powered features, all designed to keep your data private. We believe technology should serve you, not monitor you.
You can make software accessible without AI; they just don’t want to spend the money on it. It’s a business decision every company makes to save money on accessibility. It’s a calculated cost to them.
Education is the answer, not creating a nanny state with fucking AI babysitting and spying on everyone.
This is some Big Brother-level bullshit just because, oh no, old people might be stupid enough to fuck themselves over. If they’re that technically incapable, maybe it’s time to set up a device without any extra options.
You might make software accessible without AI, but I sure as heck don’t trust everyone else to do it right. Unless you control all software (spoiler: you don’t), your point is merely theoretical.
Automated browser-level accessibility to cover for other people’s errors is a perfectly reasonable solution to the problem that most developers are fucking lazy.
Plus, there is no “big brother bullshit” when it comes to locally run AI. Your data never leaves your device, and the model weights are freely auditable. You seem to fundamentally misunderstand how this works.
And you are fucking gullible to think giving AI agents root-level access for the sake of accessibility is reasonable, or that the big companies are even going to allow local models in the future.
AI apologists get blocked.
Don’t put words in my mouth. Root-level access is obviously stupid. AI doesn’t have to be owned by corporations. If you think all AI is the same, you are the gullible one: open your eyes and actually look at the options out there. Apache-licensed models exist, and no corporation can control that.
You could say the same thing about all software: when it’s run by corporations, it’s almost certainly there to harvest your data. But there is a ton of ethical software too!
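To make the “locally run” part concrete, here is a minimal sketch of running an openly licensed model entirely on-device. It assumes the Hugging Face transformers library and a small Apache-2.0 checkpoint like google/flan-t5-small that has already been downloaded; these are illustrative choices, not anything Firefox itself ships.

```python
# Minimal sketch: on-device inference with an openly licensed model.
# Assumes `transformers` is installed and google/flan-t5-small (Apache-2.0)
# is already cached locally; nothing below makes a network request.
import os

# Tell the Hugging Face stack to stay offline so no data leaves the machine
# at inference time.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import pipeline

# The model weights live on disk and can be inspected or swapped at will.
generate = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = "Summarize: local models keep the user's data on the user's device."
print(generate(prompt, max_new_tokens=40)[0]["generated_text"])
```

The point isn’t this particular model; it’s that the whole loop, prompt in and text out, happens on hardware you control, which is what separates this from the hosted services people usually object to.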
How do you propose generating alt text for images without some form of machine learning?
I believe their point was that site developers should just include alt text, without AI ever being needed.
But in practice they often don’t, and that’s been an issue for ages. A lot of images on the internet are user-generated or user-submitted, and most sites don’t have a culture of users writing alt text for their images. Mastodon does, but even Lemmy doesn’t really.
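For what it’s worth, generating a caption on-device is already straightforward. A minimal sketch, assuming the transformers library, Pillow, and a locally cached BLIP captioning checkpoint (illustrative choices, not how Firefox’s alt text feature is actually implemented):

```python
# Minimal sketch: local alt text suggestion with an image captioning model.
# Assumes `transformers` and Pillow are installed and the BLIP checkpoint is
# already cached; the image itself never leaves the machine.
from PIL import Image
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# "photo.jpg" is a placeholder; in a browser this would be the decoded image
# from a page that is missing an alt attribute.
image = Image.open("photo.jpg")
caption = captioner(image)[0]["generated_text"]
print(f'Suggested alt text: "{caption}"')
```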
This education thing has been said again and again since, what, the ’90s? And guess what, people still fall for scams! For the same reason that an antivirus is a crucial security component for the non-tech-enthusiast crowd, an AI scam detector that runs locally will likely be a crucial tool in the next few years.
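On the scam-detector point, even a generic local classifier gets you something usable. A rough sketch using zero-shot classification, assuming transformers and a locally cached facebook/bart-large-mnli model (again, example choices, not a shipped browser feature):

```python
# Rough sketch: flagging a suspicious message entirely on-device with
# zero-shot classification. Assumes `transformers` is installed and
# facebook/bart-large-mnli is already cached locally.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

message = "Your account has been locked. Buy a $100 gift card and send us the code to restore access."
labels = ["scam or phishing", "legitimate message"]

result = classifier(message, candidate_labels=labels)
# The top label and its score give a simple warning signal the browser could surface.
print(result["labels"][0], round(result["scores"][0], 2))
```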