I don’t care enough to read through the whole thing, but some cursory searching brought up a Reddit thread where a commenter found the original thesis:
Strehmel, J. (2022). Is there a Correlation between the Use of Swearwords and Code Quality in Open Source Code? [Bachelor’s Thesis, Institute of Theoretical Informatics]. https://cme.h-its.org/exelixis/pubs/JanThesis.pdf
SoftWipe [30] is an open source tool and benchmark to assess, rate, and review scientific software written in C or C++ with respect to coding standard adherence. The coding standard adherence is assessed using a set of static and dynamic code analysers such as Lizard (https://github.com/terryyin/lizard) or the Clang address sanitiser (https://clang.llvm.org/). It returns a score between 0 (low adherence) and 10 (good adherence). In order to simplify our experimental setup, we excluded the compilation warnings, which require a difficult-to-automate compilation of the assessed software, from the analysis using the --exclude-compilation option.
If that means anything to you.
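For anyone wondering what "a correlation" would even look like here, a minimal sketch of the kind of analysis the excerpt describes: count swearwords per repository, pair each count with that repository's SoftWipe score, and compute a rank correlation. Everything below is made up for illustration (the word list, the repository names, the numbers, and the choice of SciPy's spearmanr); it is not the thesis's actual word list, pipeline, or data.

```python
import re
from scipy.stats import spearmanr  # SciPy rank correlation; assumed available

# Hypothetical swearword list; the thesis uses its own curated list.
SWEARWORDS = {"damn", "hell", "crap"}
WORD_RE = re.compile(r"[A-Za-z']+")

def swearword_count(source_text: str) -> int:
    """Count listed swearwords appearing anywhere in a source file's text."""
    words = (w.lower() for w in WORD_RE.findall(source_text))
    return sum(1 for w in words if w in SWEARWORDS)

# Made-up example data: (repo, swearwords per 1000 LOC, SoftWipe score 0-10).
repos = [
    ("repo_a", 0.0, 4.1),
    ("repo_b", 0.7, 6.3),
    ("repo_c", 1.9, 7.8),
    ("repo_d", 0.2, 6.9),
    ("repo_e", 3.4, 8.6),
]

swears = [s for _, s, _ in repos]
scores = [q for _, _, q in repos]

# Spearman's rho: monotonic association between swearing and the quality score.
rho, p_value = spearmanr(swears, scores)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```

Spearman rather than Pearson here because the SoftWipe score is a bounded 0-10 rating and swearword counts tend to be heavily skewed; that is a choice made for this sketch, not necessarily what the thesis does.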
I can’t wait for AI to give it to people as truth. We’ll know we have reached peak humanity when AI-generated code starts including swear words to improve code quality.
I am curious how code quality is measured. Coverity metrics? Spelling errors? Bug reports? Sounds like bullshit.
The distribution on the right looks all sorts of fucked up. They don’t even tell us the median value of this “quality” measure.
It was bullshit, until I posted it. Once I posted it, it automatically became true.
If you go to the link, the AI has already created a scholarly paper that will hopefully get picked up. Hilarious.