  • What you’re describing is basically stagflation. It doesn’t necessarily mean a crash. It’s possible for the majority of people to keep on earning less and less real income for a long time without a crash.

    I do wonder what effect all the layoffs in tech and the public sector, plus the cuts in federal funding, will have though. Dunno if that’s enough to flood the housing market and crash it or not. I think I’ve read that banks are in a good position to absorb housing market losses, so it won’t be like 2008.

    AFAIK, most current economic indicators are OK. Not necessarily great, but not dire either.

    The stock market makes no sense to me. Most stocks don’t appear to move on company fundamentals or anything like that. It all appears to be driven by hype and gambling, propped up from sustained lows by 401ks on auto-pilot and people trained to “buy the dip” by the quick Covid recovery.

    The USD appears to be rapidly losing value against other currencies like the EUR. But that fits the plan to reduce imports and boost US exports: inflation with stagnant wages makes US exports cheaper and more attractive.

  • Yeah, they probably wouldn’t think like humans or animals, but in some sense could be considered “conscious” (which isn’t well-defined anyways). You could speculate that genAI could hide messages in its output, which would make their way onto the Internet and then be trained into a new version of itself.

    This argument seems weak to me:

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    You can emulate senses and simplified versions of hormone systems. “Reasoning” models can kind of be thought of as a form of cognition, though as currently done it’s temporary and limited by the context window.

    I’m not in the camp that thinks it’s impossible to create AGI or ASI. But I also think major breakthroughs need to happen first, which may take 5 years or hundreds of years. I’m not convinced we’re near the point where AI can significantly speed up AI research like that link suggests; if it could, that would likely result in a “singularity-like” scenario.

    I do agree with his point that anthropomorphizing AI could be dangerous, though. Current media and institutions already try to control the conversation and how people think, and I can see futures where those in power use AI to do this more effectively.