• malcolmlucker@lemm.ee · 18 points · 1 year ago

    When you use AI features, the IDE needs to send your requests and code to the LLM provider. In addition to the prompts you type, the IDE may send additional details, such as pieces of your code, file types, frameworks used, and any other information that may be necessary for providing context to the LLM.

    Doesn’t sound like it gives you much transparency or control over the data it sends, no matter which feature you use. Sadly, not usable at my job then.
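To get a feel for what the quoted docs describe, a request of that kind might look roughly like this. This is a purely hypothetical sketch — the field names are invented for illustration and are not JetBrains’ actual schema:

```python
import json

# Hypothetical request body: the prompt you typed plus the extra
# context the IDE may attach (code snippet, file type, frameworks).
# Field names are invented, not a documented JetBrains format.
request = {
    "prompt": "Explain what this function does",
    "context": {
        "code_snippet": "def add(a, b):\n    return a + b",
        "file_type": "python",
        "frameworks": ["pytest"],
    },
}

# Everything in this structure leaves your machine for the LLM provider.
print(json.dumps(request, indent=2))
```

The point of the sketch is that the payload is opaque to the user: you see the prompt you typed, but not which snippets or project details were bundled alongside it.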

    • 0x442e472e@feddit.de · 8 points · 1 year ago

      That’s a bummer. We’re strictly regulated, and tools like this need to be self-hosted or we can’t use them.

      • glad_cat@lemmy.sdf.org · 5 points · 1 year ago

        I’ve worked for regulated companies too, and I suspect most of them would forbid such tools even when they aren’t a dependency of the build, especially if source code is sent to third parties. IMHO, every tool should be stored locally (e.g. on a proxy server) so that the whole project can be recreated in a few minutes should something bad happen (or rebuilt from scratch in CI). As long as these AIs rely on private companies on the internet, I wouldn’t use them.

  • pec@sh.itjust.works · 2 points · 1 year ago

    That would cover about 80% of my work-related usage of ChatGPT, and it does things I hadn’t thought of outsourcing to AI. I’ll probably unsubscribe from Copilot too.

    Can’t wait for it to be in GoLand.