• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • I think you’re missing a detail here, which is that before streaming was a thing, writers would make a significant amount of their money by getting a show syndicated on a network; that was the whole deal. Streaming is being treated differently, effectively resulting in them taking a very large pay cut, because even if they make a successful show the payout doesn’t come.

    And it’s true they could structure things so that they don’t receive a secondary payout, but their base salary was negotiated with that later payout in mind. You and I don’t receive secondary payouts for our work, but our salary is also adjusted to recognize that.




  • I feel like you ignored their chief issue, which is that if your original server (e.g. lemmy.world) goes down then nothing works for you. In that situation you have to switch to a new server to be able to view anything, and you likely need to create a new account on that server. There are some other catches to this as well that make it more problematic than just that.

    They were definitely told the “it doesn’t matter what server you choose” line when they looked at Lemmy, but in reality that’s not entirely true if a server isn’t that stable.


  • Generally speaking, the use case is writing tests. If your component just calls its dependencies and new directly, then it’s harder to write tests for that specific component without setting up a whole bunch of other stuff just to make all those other classes work. By requiring all the dependencies to be provided to the class, you can swap them out at test time for something else that’s easier to work with.

    That said, IMO it’s a symptom of problems in language design. Using DI is only necessary because languages like C# don’t make it easy to mock out new or classes used directly, so we resort to wrapping everything in interfaces and factories to avoid those features and replace them with ones that are easier to mock. If the language were designed such that those features were easy to replace during testing, then DI probably wouldn’t be a thing.
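
    As a rough sketch of what that buys you, here’s the idea in C rather than C# (hypothetical names like fetch_fn and doubled_value, purely for illustration): the component takes its dependency as a parameter instead of constructing or calling it directly, so a test can hand it a fake. In C# an interface would play the role the function pointer plays here.

      #include <assert.h>
      #include <stdio.h>

      /* Hypothetical dependency: something that fetches a value from "outside". */
      typedef int (*fetch_fn)(void);

      /* The real dependency might hit a network or a database. */
      static int fetch_from_network(void) {
          return 42; /* imagine expensive setup here */
      }

      /* The component receives its dependency instead of calling it directly,
       * so a test can substitute something simpler. */
      static int doubled_value(fetch_fn fetch) {
          return fetch() * 2;
      }

      /* Test-time stand-in: no network, fully predictable. */
      static int fake_fetch(void) {
          return 10;
      }

      int main(void) {
          /* Production wiring. */
          printf("%d\n", doubled_value(fetch_from_network));

          /* Test wiring: inject the fake and assert on the result. */
          assert(doubled_value(fake_fetch) == 20);
          return 0;
      }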





  • I haven’t personally written a NES emulator, but I have a friend who did and it seemed like a comparable level of difficulty to the GB. It did sound like emulating the cartridge hardware was harder because games didn’t tell you what they used (compared to the GB, where a cart tells you that info in the header), but I’m guessing you can cover most games by only implementing a small subset of the options anyway.
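
    For reference, that header info is easy to get at. Here’s a small sketch (just an illustration, not from any particular emulator) that reads the cartridge type byte at offset 0x0147 of a Game Boy ROM, which is the byte that tells you which MBC the game expects:

      #include <stdio.h>

      /* Print the cartridge type byte (header offset 0x0147) of a Game Boy ROM.
       * That byte identifies the MBC: 0x00 = ROM only, 0x01-0x03 = MBC1,
       * 0x0F-0x13 = MBC3, 0x19-0x1E = MBC5, and so on. */
      int main(int argc, char **argv) {
          if (argc < 2) {
              fprintf(stderr, "usage: %s rom.gb\n", argv[0]);
              return 1;
          }
          FILE *rom = fopen(argv[1], "rb");
          if (!rom) {
              perror("fopen");
              return 1;
          }
          if (fseek(rom, 0x0147, SEEK_SET) != 0) {
              perror("fseek");
              fclose(rom);
              return 1;
          }
          int type = fgetc(rom);
          fclose(rom);
          if (type == EOF) {
              fprintf(stderr, "ROM too small to have a header\n");
              return 1;
          }
          printf("cartridge type byte: 0x%02X\n", (unsigned)type);
          return 0;
      }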


  • Lol the funny thing is that this isn’t even close to my longest project :P I have one with a first commit close to 10 years ago (and I of course started it a while before the first commit). It’s my favorite just for how fun the result is. IMO the best projects tend to be ones you actually like to use when you’re done.

    That said, I also wouldn’t put too much stock in that 1 year; I think I worked on it a lot for about a month and then moved on to other things after having trouble with it. I came back about a year later and managed to more or less finish it. It’s definitely a long time to devote to a project, but if you stayed focused on it you could do it in a couple months I think.


  • If you’re on Linux then I’m pretty sure the confusing behavior you’re seeing is due to the line buffering the kernel does by default. Ctrl+D does not actually mean “send EOF”, and it’s not the “EOF character”; rather, it means “complete the current or next stdin read() request immediately”. That’s a very different thing, and sometimes it means EOF and other times it does not.

    In practice what this means is that if there is no data waiting to be sent on stdin, read() returns zero, and read() returning zero is how getchar() knows an EOF happened. The flow looks like this:

    1. Your program calls getchar().
    2. getchar() calls read() on stdin and your program blocks waiting for input.
    3. The user presses Ctrl+D on the tty, having not typed anything else.
    4. The kernel immediately ends the blocked read() call and returns zero bytes read.
    5. getchar() sees that it got no bytes from read() and returns EOF.
    6. Your program sees that and exits the loop.

    However, in practice it doesn’t work that cleanly because the tty is normally operating in “cooked” mode, where the kernel sends input to your program line by line, allowing the user to edit a single line before sending it. The way this works is by buffering the stdin contents and sending it when the user hits enter. Going back to Ctrl+D, you can see how this screws things up, leading to the behavior you see:

    1. Your program calls getchar().
    2. getchar() calls read() on stdin and your program blocks waiting for input.
    3. The user types some input, but does not hit enter. This data sits in the kernel’s stdin buffer and is not sent to your program yet.
    4. The user presses Ctrl+D on the tty.
    5. The kernel immediately ends the blocked read() call and starts returning the currently buffered stdin input, without waiting for an enter press.
    6. getchar() sees that it got a byte from read() and thus returns it.
    7. Your program starts getting all the previously buffered bytes and keeps running until getchar() has seen all of them.
    8. getchar() calls read() on stdin. There are now no bytes in the buffer, so you block waiting for input, the same as before. The previous Ctrl+D was already “used up” to end the previous read() call, so it doesn’t matter anymore.
    9. The user types Ctrl+D.
    10. Because there is currently no input in the line buffer, read() returns zero. getchar() sees this and returns EOF.

    In the above case Ctrl+D doesn’t work as expected because of the line buffering. The read() call did end early without waiting for enter, but your program just starts receiving all the buffered input, so it has no idea you pressed Ctrl+D and never sees the read() == 0 EOF condition. Additionally, the Ctrl+D is a one-time deal: it ends one read() call early and sends the buffered input. When you call read() again with nothing to send it just blocks, and you have to do another Ctrl+D to actually get read() to return zero.
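
    If you want to watch this happen at the read() level, here’s a small sketch (just an illustration, not from the original program) that prints how many bytes each read() on stdin returns. Typing “abc” and pressing Ctrl+D makes it report 3 bytes with no EOF, while pressing Ctrl+D on an empty line makes it report 0 bytes, which is the EOF condition:

      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          char buf[256];
          for (;;) {
              /* Blocks until the kernel hands over a line, or Ctrl+D ends the read early. */
              ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
              if (n < 0) {
                  perror("read");
                  return 1;
              }
              printf("read() returned %zd byte(s)\n", n);
              if (n == 0)   /* read() == 0 is the EOF condition */
                  break;
          }
          return 0;
      }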

    You can see the line buffering behavior if you add a putchar() inside your loop. The putchar() doesn’t actually print while you type the characters; it only prints after you hit either enter or Ctrl+D, showing that your program did not receive any of the characters until one of those two actions happened.
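
    For example, something like this (a minimal sketch of the experiment described above; the fflush() is only there so stdio’s own output buffering doesn’t hide when the characters actually arrive):

      #include <stdio.h>

      int main(void) {
          int c;
          /* Echo stdin until getchar() reports EOF, i.e. until read() returned 0. */
          while ((c = getchar()) != EOF) {
              putchar(c);
              fflush(stdout);
          }
          puts("getchar() returned EOF");
          return 0;
      }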


  • Starting with the Gameboy might be a bit daunting, but if you’re reasonably comfortable with C then I don’t think it’s too bad. If you’re not too familiar with the hardware side of it then that might be a challenge, but the advantage with the Gameboy is that there are tons of documentation and tutorials out there which can probably help you work through the details. Really the big thing is to just be ready to do lots of reading XD You need to get a base level of understanding of the system before you start coding. git says it took me around a year to get it functional and playing most ROMs, but that was with some big breaks in between.

    Also, that’s a very good question. Performance was never an issue. I started it with a focus on keeping the code clean vs. worrying about performance and it turned out fine; I’ve run it on a cheap 1.6GHz machine and it didn’t even reach 25% CPU usage, so IMO I wouldn’t worry about it. This also doesn’t vary that much from ROM to ROM.

    For the ROMs I tested: for the most part every ROM tended to uncover something new, but the Gameboy has a pretty nice progression of “easy” to “hard” ROMs if you just sort by the MBC type, and the BIOS is a pretty good test of basic functionality. Additionally there are a few different test suite ROMs out there that are fantastic; they’ll run through every instruction or piece of functionality and check for the correct results, and they saved me tons of time.