Woah, this is huge. Claude 1 was already more useful and coherent than ChatGPT (3.5, not 4). The big problem was that it wasn’t available to everyone. This could really steal some market share from OpenAI if things go well.
What market though? These AI chatbots seem like money sinks for a potential development into something useful in the distant future.
The market of people buying APIs for popular chatbots. Right now OpenAI’s GPT is overwhelmingly the most popular option and pretty expensive. You constantly see a lot of “powered by GPT” features on products now, but hopefully Claude can provide some better competition.
Fair, I don’t see any real use for these right now. Chatbots just seem like a gimmick that can help people cheat in school (not that I give a fuck about that). Probably just the online circles we run in, what sorta things are powered by GPT? Customer support and stuff?
You’ve got stuff like the AI learning assistants on Duolingo and Khan Academy powered by GPT-4, plus tools for automatic search engine optimization, automatic code generation, grammar and spell checking, translation, and probably a lot more I’m unaware of.
There’s quite a lot of people depending on GPT right now.
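For context, a “powered by GPT” feature is usually just a thin wrapper over OpenAI’s public Chat Completions API. A minimal sketch of that call shape, assuming a hypothetical support-bot use case (the system prompt and question are illustrative, and `OPENAI_API_KEY` must be set for the network call):

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # OpenAI's public chat endpoint


def build_payload(user_message, model="gpt-3.5-turbo"):
    """Build the JSON body a 'powered by GPT' feature would send."""
    return {
        "model": model,
        "messages": [
            # The system message sets the product's persona; the user
            # message carries the actual question from the end user.
            {"role": "system", "content": "You are a helpful support assistant."},
            {"role": "user", "content": user_message},
        ],
    }


def ask(user_message, api_key):
    """Send one question and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_message)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Print the request body without actually spending API credits.
    print(json.dumps(build_payload("How do I reset my password?"), indent=2))
    if os.environ.get("OPENAI_API_KEY"):
        print(ask("How do I reset my password?", os.environ["OPENAI_API_KEY"]))
```

Anthropic sells Claude through a similar per-token API, which is why a cheaper or better competitor matters to everyone building on these wrappers.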
The Khan Academy approach to AI-assisted learning looks amazing, and it’s just a first attempt. I think having individual, endlessly patient AI tutors leading each student via the Socratic method will revolutionise teaching. Teachers would actually have more time to socialise with the students, so fears that AI learning would deprive children of social interaction may be put to rest. It looks really promising.
Seems to only be available in the US and UK for now tho.
Luckily it doesn’t need a phone number like OpenAI so you can just VPN it.
You shouldn’t have to though. The whole “only available if you happen to live in X” is so much bs when it comes to things like this. Sure if it was a giveaway and needed to be shipped, I could understand. But a website being locked away to only certain regions is ridiculous.
I suspect it has to do with legal compliance. Only available in US = only needing to comply with US law.
I tried using a VPN and it still didn’t allow me to sign up.
I’ve used ProtonVPN and managed to sign up easily.
Says it’s only available in the US; used a VPN to sign up. Good to have alternatives!
Just tried it out with some questions about ceramic firing in an electric kiln. Seems to have similar accuracy to ChatGPT, maybe closer to GPT-4.
It’s not clear what version you’re on when using it, so this may have been Claude 1; I’m unsure where to check.
I asked it directly. It didn’t know and claimed it had never had version numbers. I pointed out that news articles differentiate 1.0 and 2.0. It agreed but still didn’t say what it was. I asked again directly, and it said it was 2.0.
Hard to believe something that feels like it’s lying to you all the time. I asked it about a topic that I’m in and have a website about, it told me the website was hypothetical. It got it wrong twice, even after it agreed it was wrong, and then told me the wrong thing again.
Can you ask perplexity.ai your question about ceramic firing and see what you get? Perplexity offers prompts to move you along towards your answer.
I asked Perplexity that same question. It did somewhat better: it made no errors in temperatures like the others do, it just left those details out initially. After follow-up questions it answered correctly, but also gave some unnecessary and unrelated information.
I didn’t use any of the prompts, I was asking about saggar firing processes and temps, the prompts were just ceramics related.
My area has 40 years of studies behind it with a heap of science online. I’m always surprised that AI does so badly with it. If I can work it out by reading through study after study, AI should piss it in.
The good thing about Perplexity is that it cites its sources so you can check them. Others just give you the answer, and if you don’t know much, you don’t know whether it’s wrong or not (sources beat no sources, I feel). I’ve also asked someone who is the world leader in my field to figure out when it starts giving completely wrong answers, and in what areas.
Is what you’re searching more of an in-field technique or would there be webpages or studies devoted to it?
Is this what they consider hallucinations?