• 8 Posts
  • 25 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • According to consequentialism:

    1. Imagining sexual fantasies in one’s own mind is fine.
    2. Any action which affects no-one but the actor, such as manifesting those fantasies, is also fine.
    3. Distributing non-consensual pornography publicly is not fine.
    4. Distributing tools for the purpose of non-consensual pornography is a grey area (enables (2), which is permissible, and (3), which is not).

    From this perspective, the only issue one could have with deepfakes is the distribution of pornography which should only be used privately. The author dismisses this take, remarking that “few people see his failure to close the tab as the main problem”. I guess I am one of the few.

    Another perspective is to consider the pornography itself impermissible, which, as the author notes, implies that (1) is also impermissible. Most would agree that (1) is morally fine (some may consider it disgusting, but that doesn’t make it immoral).

    In the example of Ross teasing Rachel, the author concludes that the imagining is the moral quandary, rather than the teasing itself. But context is what matters. Drinking water isn’t immoral. Sending a video of drinking water isn’t immoral. But sending that video to someone dying of thirst is.

    The author’s conclusion is also odd:

    Today, it is clear that deepfakes, unlike sexual fantasies, are part of a systemic technological degrading of women that is highly gendered (almost all pornographic deepfakes involve women) […] Fantasies, on the other hand, are not gendered […]

    1. Could you not equally claim that women are being worshipped rather than degraded? Only by knowing the minds of both the consumer and the person depicted can you determine which is happening, and of course each could have a different perspective.
    2. If there were equal numbers of deepfakes of men and women, the conclusion implies that deepfakes would be fine (as that is the only distinction drawn), which is probably not the author’s intention.
    3. I take issue with the use of “systemic”. The purpose of deepfakes is the sexual gratification of the user, not degradation. Only if you consider being the object of someone’s sexual gratification to be degradation could you claim there is anything systemic about it. If it were about degradation, wouldn’t consumers be trying to notify the targeted people of their deepfake videos and make them as public as possible?
    4. Singling out “women” as a group is somewhat disingenuous. Women are over-represented in all pornography because the majority of consumers are men and the majority of men are attracted only to women. This is quite clear, as unattractive women aren’t likely to be targeted. It’s not about “being a woman”; it’s about “being attractive to pornography consumers”. Claiming “degradation of women” with the caveat that “half of women won’t be affected, and a bunch of attractive men will be” makes the claim vacuous.



  • For microcontrollers, quite often. Mainly because visibility is poor, you’re often trying to do stupid things, problems tend to be localized, and attaching a debugger over JTAG is easier than a firmware upload.

    For other applications, rarely. Debuggers help when you don’t understand what’s going on at a micro level, which is more common with less experience or when the code is more complex due to other constraints.

    Applications running on full-fledged operating systems often have plenty of log output, and it’s trivial to add more, formatted however you need. You can view a broad slice of the application with printouts and iteratively tune those prints to what you need, whereas a debugger is better suited to observing a small slice of the application.
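
    A minimal sketch of that iterative print/log tuning, assuming a Python application (the module name and function here are hypothetical, not from the comment above):

    ```python
    import logging

    # One root config for the whole app; individual modules can be tuned later.
    logging.basicConfig(
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )
    log = logging.getLogger("billing")  # hypothetical module name

    def apply_discount(total: float, code: str) -> float:
        # A print added while chasing a bug, demoted to DEBUG once understood.
        log.debug("apply_discount(total=%r, code=%r)", total, code)
        discount = 0.10 if code == "SAVE10" else 0.0
        result = total * (1 - discount)
        log.info("code=%s total=%.2f -> %.2f", code, total, result)
        return result

    # Widen the slice for just this module, no debugger session required.
    logging.getLogger("billing").setLevel(logging.DEBUG)
    apply_discount(42.0, "SAVE10")
    ```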


  • Mirroring the comments on Ars: Why should AI child porn be illegal? Clearly the demand is there, and if you cut off the safe supply, don’t you just drive consumers to sources which involve the actual abuse of minors?

    Another comment I saw fretted that AI was being fed CSAM, and that’s why it can generate those images. That’s not true. Current image-generating models can easily produce out-of-distribution images: a model that has never seen an astronaut riding a horse can still render one by composing concepts it has seen separately.

    Finally, how does the law deal with sharing a seed+prompt (the input to the AI) instead of the images themselves? Such a combination may produce the offending image in only one model out of thousands.
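
    To illustrate why a seed+prompt pair is tied to one specific model, here is a minimal sketch using the Hugging Face diffusers library (the checkpoint ID and prompt are illustrative assumptions, not from the comment):

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative checkpoint; any change in weights breaks reproducibility.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a watercolor painting of a lighthouse"
    seed = 1234

    # The same (seed, prompt) pair deterministically selects one image...
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]

    # ...but only for this exact model: a different checkpoint, scheduler,
    # or sampler configuration generally yields an entirely different image.
    image.save("output.png")
    ```

    Distributing that tiny (model, seed, prompt) tuple is effectively distributing the image to anyone holding the same weights, which is presumably what makes the legal question awkward.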