Using ChatGPT for anything more than repetitive or random generation tasks is a bad idea, and its usefulness becomes even more limited when you’re working with proprietary code that you can’t directly send to ChatGPT. Even with personal projects, I still try to avoid ChatGPT as much as possible for the simple reason that I’ll be forced to pay for it if it becomes an essential part of my workflow when it leaves this free beta testing phase.
Exactly this! I hate hearing politicians and rulemakers discuss how ChatGPT and LLMs are going to be relevant everywhere and how ChatGPT should already be incorporated into education. They literally call it a “research preview”; you can only assume that when they’ve gathered enough data, they’re going to shut it down, or at least reduce its capacity by a lot.
With that said, I really enjoy using it. Mainly for brainstorming topics or new projects, and what technologies to use in them. Sometimes I also find a use for it as a therapist, for social topics I don’t really know who to ask, and I expect a generic reply anyway.
On the politicians / rulemakers side of things, that may or may not be a good thing tbh. Technology moves so fast, and traditionally the aforementioned groups are glacial and can’t keep up, sometimes to the benefit of a small group, often to the detriment of the majority. Having this on their radar relatively soon is potentially a useful change.
While it’s nice that politicians are enthusiastic about new technologies, I think ChatGPT is one example where they shouldn’t force mass adoption. ChatGPT is a proprietary model owned by a private corporation, and it’s made very clear that interaction data with ChatGPT will be collected and used by OpenAI for its business. It’s horrible for data security and it helps to strengthen OpenAI’s monopoly. Honestly, governments recommending privately owned software and technologies should be considered advertising.
That side of it I wholeheartedly agree with. Perhaps I’m just deluding myself into thinking technology awareness early on makes for better legal infrastructure to handle its effect on society. I really would like that to be the case.
But yeah agree, “ChatGPT” being synonymous with “groundbreaking AI” to the vast majority of the public (I suspect) is not great from a monopoly perspective.
governments recommending privately owned software and technologies should be considered advertising.
Is this not also true if the software is open-source? It’s still advertising, but it’s somehow ok because a corporation doesn’t benefit? It’s not that I don’t agree with you - regulatory capture and vendor lock-in are much less of a concern for free and/or open-source software, but that doesn’t mean it’s not still advertising.
That’s true.