CEO Steve Huffman says tech giants should not be able to trawl Reddit’s huge store of data for free. But that information came from users, not the company
That “corpus of data” is the content posted by millions of Reddit users over the decades. It is a fascinating and valuable record of what they were thinking and obsessing about. Not the tiniest fraction of it was created by Huffman, his fellow executives or shareholders. It can only be seen as belonging to them because of whatever skewed “consent” agreement its credulous users felt obliged to click on before they could use the service.
Ouch
Wide open to AI scraping and nothing at all are not the only two options. They could easily limit API calls to what would be reasonable for single users or mods and have each user generate their own key. Apps could let users input their key. Most users wouldn’t bother and would just switch to the official app anyway, so it would get them 95% of what they claim to want without being a dick about it.
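To sketch what that per-user-key setup could look like (purely hypothetical, Reddit doesn’t offer this as a turnkey thing): each user registers their own script app at reddit.com/prefs/apps and pastes the credentials into the client, which then authenticates with those instead of one shared key. Function and user-agent names below are just placeholders.

```python
# Hypothetical sketch: a third-party client authenticating with a
# user-supplied key (a script app the user registered themselves at
# reddit.com/prefs/apps) instead of one API key shared by everyone.
import requests

def get_user_token(client_id, client_secret, username, password):
    resp = requests.post(
        "https://www.reddit.com/api/v1/access_token",
        auth=(client_id, client_secret),            # this user's own app credentials
        data={
            "grant_type": "password",
            "username": username,
            "password": password,
        },
        headers={"User-Agent": f"per-user-client/0.1 by u/{username}"},
    )
    resp.raise_for_status()
    return resp.json()["access_token"]              # bearer token, used against oauth.reddit.com
```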
Plus AI companies can just scrape reddit without using the API. It’s still a website after all.
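For what it’s worth, the public .json views of ordinary listing pages are still reachable without any API key at all. A rough sketch (no guarantee they keep these endpoints open or don’t throttle them; the user-agent string is just an example):

```python
# Rough sketch: pulling a subreddit listing with no API key at all,
# just the public .json view of the regular listing page.
import requests

resp = requests.get(
    "https://old.reddit.com/r/programming/hot.json?limit=25",
    headers={"User-Agent": "example-scraper/0.1"},  # the default UA tends to get blocked
)
resp.raise_for_status()
for child in resp.json()["data"]["children"]:
    post = child["data"]
    print(post["score"], post["title"], post["url"])
```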
They want the timing of how long a user looks at something. They can’t scrape that from third party apps.
Yes you can. PC emulation of apps is common.
If the data is so important to them that they’d kill the site over it, then they’re dumber than I thought. Apps can be scraped too. It isn’t even difficult.
I highly doubt Reddit is gonna shut down their website.
I saw a post saying they were testing restricting mobile browser access so it only works through the app.
Oh yeah, they’ve done that already. I don’t think they’ll extend that to actual web tho
I’m not sure if I wasted my time, but I spent a few hours today editing all of my posts on Reddit to be a single comma or period. I didn’t comment or post a lot by any means, but just got irritated enough to try to keep from contributing in any way to Spez profiting off of user provided content.
Can’t shreddit do this in bulk? I am considering doing it for my comments, but I think I will just leave them up there. I did have a great time on reddit until they announced their API changes, so I will leave them with that much. But I did get a backup of everything I wrote using bulk downloader.
But I am still considering running shreddit on it just for kicks.
Yeah, I did the same thing a few days ago. I used the browser add-on called Reddit Enhancement Suite to delete all my posts and comments. Instructions: https://www.alphr.com/how-to-delete-all-reddit-posts/
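If you’d rather script it than click through an add-on, a few lines of PRAW do the same overwrite-then-delete pass. This is only a sketch, assuming you’ve registered your own script app and filled in placeholder credentials; shreddit does essentially the same thing with more options and safeguards.

```python
# Sketch: overwrite and then delete your own comments with PRAW.
# Assumes a script app registered at reddit.com/prefs/apps; the
# credential values below are placeholders.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-shredder/0.1",
)

for comment in reddit.user.me().comments.new(limit=None):
    comment.edit(".")      # overwrite first so the original text isn't what's left behind
    comment.delete()
```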
So sad. I’m not opposed to it, but it feels like burning down a forest.
Honestly, I think the sad truth is that Reddit is bleeding money, and every action they take from here on out will be about recruiting whales and driving off everyone else. That’s Steve’s brilliant business strategy: make Reddit p2w.
where is the money going?
That’s how they did it. They put a 10-requests-per-minute limit on bots and a higher OAuth limit (100) for individuals. Large user-client-type apps could have converted over to that system fairly easily, but due to the time constraint they didn’t. I do think they extorted their third-party devs, sure, but honestly the individual user limit isn’t super unreasonable as long as you aren’t liking or disliking every post. The search API returns 100 posts per request; it was more the no-NSFW and no-advertising restrictions they put on it that sucked.
edit: it’s actually 10 or 100 per minute, not per hour
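The API also reports where you stand against that budget in its response headers, so a client can throttle itself instead of running into 429s. Roughly like this (just a sketch, assuming you already have a bearer token from somewhere):

```python
# Sketch: a client throttling itself using Reddit's rate-limit headers.
# Assumes `token` is an OAuth bearer token obtained separately.
import time
import requests

def fetch_json(url, token):
    resp = requests.get(
        url,
        headers={
            "Authorization": f"bearer {token}",
            "User-Agent": "rate-aware-client/0.1",
        },
    )
    remaining = float(resp.headers.get("X-Ratelimit-Remaining", "1"))
    reset_in = int(float(resp.headers.get("X-Ratelimit-Reset", "60")))  # seconds left in the window
    if remaining < 1:
        time.sleep(reset_in)       # budget spent: wait out the rest of the window
    resp.raise_for_status()
    return resp.json()
```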
It’s not that simple, because the third-party apps ship with a single API key. So I used Relay for Reddit, and I used the same API key as everyone else on that app. You could create an app and then have everyone make their own key, but that is just asking for trouble. It’s definitely too technical for most people, and you would probably need to put in billing info for the case where you go above the free-tier call limit.
update: I removed the comment because I was looking at the API docs again, and it seems that despite using the bearer token, metrics and rate limiting are still based on the app client ID, which is super stupid. I originally said that rate limits would be per OAuth client, i.e. per user, at 100 requests a minute, but it is actually 100 requests per minute app-wide, which is just unfeasible at large scale.