Are there formal ways of quantifying potential flaws or risks, and of ensuring there's a sufficient spread of tests to cover them? Perhaps using some kind of complexity measure, or a risk assessment of some kind?

Experience tells me I need to be extra careful around certain things: user input, code generation, anything with a publicly exposed surface, third-party libraries/services, financial data, personal information (especially of minors), batch data manipulation/migration, and so on.

But is there any accepted means of formally measuring a system and ensuring that some level of test quality exists?
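For concreteness, one shape a "risk assessment of some kind" can take is a simple likelihood × impact matrix that decides how deep the testing goes for each area. A minimal sketch in Python follows; the areas mirror the list above, but the scores, thresholds, and suggested depths are invented purely for illustration.

```python
# Hypothetical sketch of a likelihood x impact risk matrix used to decide
# how much test effort an area should get. The scores and cut-offs are
# made up for the example, not a recommendation.

RISK_AREAS = {
    # area: (likelihood of a defect 1-5, impact if it goes wrong 1-5)
    "user input handling":      (5, 4),
    "code generation":          (4, 4),
    "public API surface":       (4, 5),
    "third-party integrations": (3, 4),
    "financial data":           (2, 5),
    "personal data (minors)":   (2, 5),
    "batch data migration":     (3, 5),
}

def risk_score(likelihood: int, impact: int) -> int:
    return likelihood * impact

def suggested_test_depth(score: int) -> str:
    # Arbitrary thresholds, purely for the sake of the example.
    if score >= 20:
        return "unit + integration + negative/property tests"
    if score >= 12:
        return "unit + integration tests"
    return "unit tests"

for area, (likelihood, impact) in sorted(
    RISK_AREAS.items(), key=lambda kv: -risk_score(*kv[1])
):
    score = risk_score(likelihood, impact)
    print(f"{area:26} score={score:2d} -> {suggested_test_depth(score)}")
```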

  • fades
    1 year ago

    So true lol. Mgmt just announced a directive at my work last week that code must have 95-100% coverage.

    Meanwhile they hire contractors from India who write the dumbest, most useless tests possible. I've worked with many great Indian devs, but the contractors we use today all seem like a step down in quality. More work for me, I guess.

    • @phoneymouse@lemmy.world
      1 year ago

      It’s always fun to hear management pushing code coverage. It’s a fairly useless metric. It’s easy to get coverage without actually testing anything. I’ve seen unit tests that consist simply of starting the whole program and running it without asserting anything or checking outputs.
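      As a toy sketch of the difference (pytest assumed; the function is a stand-in, not code from any real project):

      ```python
      # "process_orders" stands in for the code under test.
      def process_orders(orders):
          return [sum(item["price"] for item in order) for order in orders]

      def test_process_orders_runs():
          # Inflates line coverage but asserts nothing: it only fails if
          # the code happens to raise an exception.
          process_orders([[{"price": 10}, {"price": 5}]])

      def test_process_orders_totals():
          # The same call, but with an actual check on the output.
          assert process_orders([[{"price": 10}, {"price": 5}]]) == [15]
      ```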

      • @rmam@programming.dev
        1 year ago

        It’s always fun to hear management pushing code coverage. It’s a fairly useless metric.

        Code coverage can be a useless metric only if your team's pull request review process is broken and systematically approves broken code that fails to meet its most basic requirements.

        In the meantime, if code coverage requirements convince any team member to go out of their way to check for an invariant, then introducing those requirements is already a win.
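        To illustrate that last point, here is a sketch assuming pytest with the pytest-cov plugin; the function, module name, and numbers are made up. The threshold is enforced on the test run, and the test it nudges someone into writing pins down an invariant rather than merely executing code.

        ```python
        # Enforcing the threshold, e.g. in CI:
        #   pytest --cov=myapp --cov-fail-under=95

        def apply_discount(price: float, percent: float) -> float:
            # Stand-in for the code under test: discounted price, never negative.
            return max(price * (1 - percent / 100), 0.0)

        def test_discount_never_negative():
            # The invariant: no combination of inputs yields a negative price.
            for price in (0.0, 1.0, 99.99):
                for percent in (0.0, 50.0, 100.0, 150.0):
                    assert apply_discount(price, percent) >= 0.0
        ```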