• fubo@lemmy.world · +195 · 1 year ago

    Meanwhile over in the mechanical engineering department, someone is complaining that they have to learn physics when they just wanted to build cool cars.

    • Zetaphor@zemmy.cc · +9 · 1 year ago

      I was interviewed with complex logic problems and rigorous testing of my domain knowledge.

      Most of what I do is updating copy and images.

  • where_am_i@sh.itjust.works · +76/-5 · 1 year ago

    A few failed exams later, you end up programming Cyberpunk, and since you’re so oblivious to algorithmic complexity, it becomes a meme, not a game.

    • Chadus_Maximus@lemmy.zip · +36 · 1 year ago

      But it’s OK, because now Nvidia has to deal with your garbage code, since Cyberpunk is the only game that supports the latest graphics tech.

    • jakoma02@czech-lemmy.eu · +48/-1 · 1 year ago

      The point of these lectures is mostly not to teach how to work with Turing machines; it is to understand the theoretical limits of computers. The Turing machine is just a simple-to-describe, well-studied tool for exploring that.

      For example, are there things that cannot be computed on a computer, no matter how long it computes? What if the computer were able to make guesses along the way: could it compute more? As this comic notes, no; it would only be a lot faster.

      Arguably, many programmers can do their job even without knowing any of that. But it certainly helps with seeing the big picture.

      • Riskable@programming.dev · +9/-3 · 1 year ago

        Arguably, a much more important thing for the students to learn is the limits of humans. The limits of the computer will never be a problem for 99% of these students, or they’ll just learn on the job which types of problems computers are good at solving and which they aren’t.

        • SkyeStarfall@lemmy.blahaj.zone · +11/-1 · 1 year ago

          The limits of computers would be the same as the limits for humans. We have no reason to think the human brain has more computational power than a Turing machine.

          So, in a way, learning about the limits of computers is the exact same as learning the limits of humans.

          But also, learning what the limits of computers are is absolutely relevant. You get asked to create an algorithm for a problem, and it’s useful to be able to figure out whether it is actually solvable, or how fast it can theoretically be. It avoids wasting everyone’s time trying to build an infinite loop detector.
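
          A minimal sketch of why that detector is impossible, assuming a hypothetical halts() function (the classic diagonal argument, transliterated into Python; all names here are made up for illustration):

          ```python
          # Hypothetical: a perfect infinite-loop detector someone claims to have.
          def halts(program, argument) -> bool:
              """Returns True iff program(argument) eventually stops."""
              ...  # cannot actually be implemented, as shown below

          def contrary(program):
              # Do the opposite of whatever halts() predicts about
              # the program being run on its own source.
              if halts(program, program):
                  while True:
                      pass  # loop forever
              # otherwise, halt immediately

          # contrary(contrary) halts if and only if it doesn't halt,
          # so no general halts() can exist.
          ```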

          • Riskable@programming.dev · +6/-1 · 1 year ago

            The “limits of humans” I was referring to were things like:

            • How long can you push a deadline before someone starts to get really mad
            • How many dark patterns you can cram into an app before the users stop using it
            • The extremes of human stupidity

            👍

            • SkyeStarfall@lemmy.blahaj.zone · +5 · 1 year ago (edited)

              …none of which would be relevant for most people working in back-end, which would be most people who take compsci.

              I would hate to go to a compsci study and learn management instead. It’s not what I signed up for.

              University also shouldn’t just be a job training program.

    • dtxer@lemmy.world · +20 · 1 year ago

      I didn’t go to university because I wanted to learn useful stuff, but because I’m curiosity-driven. There is so much cool stuff, and it’s very cool to learn it. That’s the point of university: it prepares you for a scientific career where the ultimate goal is knowledge, not profit maximisation (super idealistically).

      Speaking of Turing machines, it’s such a fun concept. People use it to build computers out of everything; really, it has become a sport at this point. When the last Zelda was released, the first question for many was whether they could build a computer inside it.

      Does it serve a practical purpose? At the end of the day, 99% of the time the answer will be no: we have computing machines built from transistors that are the fastest we know of, so let’s just use those.

      But 1% of the time people recognize something useful… hey, we now found out that in principle one can build computers from quantum particles… we found an algorithm that could beat classical computers at a certain task… we found a way to actually do this in reality, though it’s more a proof of concept (factoring 15 = 5×3)… and so on.

    • Blamemeta@lemmy.world · +4 · 1 year ago

      RAM is literally just the tape. Modern computers are just multitape Turing machines, albeit ones whose tape ends at some point.
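
      A toy illustration of that (my own sketch, nothing canonical): a one-tape machine that increments a binary number, with a Python dict standing in for the tape/RAM:

      ```python
      # Toy single-tape Turing machine: binary increment, least significant bit first.
      # transition[(state, symbol)] = (write, move, next_state)
      transition = {
          ("carry", "0"): ("1", +1, "done"),
          ("carry", "1"): ("0", +1, "carry"),
          ("carry", "_"): ("1", +1, "done"),  # ran off the end: extend the number
      }

      def run(tape_str):
          tape = dict(enumerate(tape_str))   # sparse tape, like RAM with an end
          head, state = 0, "carry"
          while state != "done":
              symbol = tape.get(head, "_")   # blank symbol past the end
              write, move, state = transition[(state, symbol)]
              tape[head] = write
              head += move
          return "".join(tape[i] for i in sorted(tape))

      print(run("110"))  # "110" is 3 (LSB first); prints "001", which is 4
      ```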

    • tr00st@lemmy.tr00st.co.uk · +2 · 1 year ago

      About 15 years on, I’m still so happy I got good coursework marks for the route-finding equivalent of a bogosort: pick a bunch of random routes and keep the fastest. Sure, the guy who set up a neural net to figure it out did well, but mine didn’t take days of training and still did about as well in the same sort of execution time.
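
      Something like this, presumably (a guess at the shape of that coursework, in Python; the names are mine):

      ```python
      import random
      from itertools import pairwise  # Python 3.10+

      def route_length(route, dist):
          # dist[a][b] = cost of travelling from a to b
          return sum(dist[a][b] for a, b in pairwise(route))

      def bogo_route(waypoints, dist, tries=100_000):
          """Sample random orderings of the waypoints; keep the shortest seen."""
          best, best_len = None, float("inf")
          for _ in range(tries):
              candidate = random.sample(waypoints, len(waypoints))
              length = route_length(candidate, dist)
              if length < best_len:
                  best, best_len = candidate, length
          return best
      ```

      No training time needed, and with enough samples it lands surprisingly close.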

  • GTG3000@programming.dev · +31 · 1 year ago

    But you can make games that much more interesting if your algorithms are on point.

    Otherwise it’s all “well, I don’t know why it generated a map that’s insane”, or “well, the AI has this weird bug, but I don’t understand where it’s coming from”.

    • blivet@kbin.social · +7 · 1 year ago

      I’m grateful to this strip because reading it caused me to learn the correct spelling of “abstruse”. I’ve never heard anyone say the word, and for some reason I had always read it as “abtruse”, without the first S.

  • Lmaydev@programming.dev · +12 · 1 year ago

    I did games technology at university. We had a module that was just playing board games and eventually making one. We also did an Unreal Engine module that ended with making a game and a cinematic.

    It was awesome.

    • Gork@lemm.ee · +3 · 1 year ago

      I never really understood the point of lambda calculus. Why have an anonymous function? I thought it was good practice to meticulously segment code into functions and subroutines and call them as needed, rather than have some pseudo-function embedded somewhere.

      • rabirabirara@programming.dev · +4 · 1 year ago

        I think you’re confusing lambdas with lambda calculus. Lambda calculus is more than just anonymous functions.

        To put it extremely simply, let’s just say functional programming (an implementation of lambda calculus) is code with functions as data and without shared mutable state (or side effects).

        The first increases expressiveness tremendously; the second increases safety and enables optimization. Of course, you don’t need to write anonymous functions in a functional language if you don’t want to.

        As for why those “pseudo-functions” are useful, you’re probably thinking of closures, which capture state from the context they are defined in. That is pretty useful. But it’s not the whole reason lambda calculus exists.
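
        For example, here’s “functions as data” plus a closure capturing its defining context, sketched in Python (illustrative names of my own):

        ```python
        def make_scaler(factor):
            # 'factor' is captured from the enclosing scope: a closure.
            def scale(x):
                return x * factor
            return scale

        double = make_scaler(2)              # a function built at runtime...
        print(list(map(double, [1, 2, 3])))  # ...passed around like data: [2, 4, 6]
        ```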

      • Zangoose@lemmy.one · +2 · 1 year ago

        See the other comments about lambdas vs. lambda calculus, but lambdas are supposed to be for incredibly simple tasks that don’t need a full function definition, things that could be done in a line or two, like simple comparisons or calling another function. This is most useful for abstractions like list filtering, mapping, folding/reducing, etc., where you usually don’t need a very advanced function call.

        I was always taught in classes that if your lambda needs more than just the return statement, it should probably be its own function.
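
        In Python, for instance, that rule of thumb looks something like this (my example values):

        ```python
        from functools import reduce

        orders = [12.5, 99.0, 3.0, 45.0]

        # One-liners: fine as lambdas.
        big      = filter(lambda x: x > 10, orders)
        with_tax = map(lambda x: x * 1.2, big)
        total    = reduce(lambda acc, x: acc + x, with_tax, 0.0)
        print(round(total, 2))  # 187.8

        # Anything needing more than the return expression: name it.
        def shipping_cost(order_total):
            if order_total > 100:
                return 0.0
            return 5.0
        ```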

      • linuxduck@nerdly.dev · +2 · 1 year ago

        I suppose it has to do with being stateless.

        I just loved learning about lambda calculus.

        I think the idea is to remove complexity by never dealing with state, so you just have one long reduction till you get to the final state…

        But someone who’s more into lambdas and such should speak about this, not me (a weirdo).
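
        For a taste of that “one long reduction” style, Church numerals are a standard exercise that transliterates straight into Python lambdas (my sketch, not from any course):

        ```python
        # Church numerals: the number n is "apply f, n times".
        zero = lambda f: lambda x: x
        succ = lambda n: lambda f: lambda x: f(n(f)(x))
        add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

        def to_int(numeral):  # peek at a numeral by counting applications
            return numeral(lambda k: k + 1)(0)

        two, three = succ(succ(zero)), succ(succ(succ(zero)))
        print(to_int(add(two)(three)))  # 5, with no mutable state anywhere
        ```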

      • static_motion@programming.dev · +4 · 1 year ago

        “Introduction to the Theory of Computation” by Michael Sipser, a book commonly referred to as simply “Sipser”. My ToC course in uni was based around that book and while I didn’t read the whole thing I enjoyed it a ton.

    • Christian@lemmy.ml · +1 · 1 year ago

      I read it cover-to-cover like fifteen years ago. I’ve lost most of that knowledge since I haven’t touched it in so long, but I remember I really enjoyed it.

  • Lakso@ttrpg.network · +3 · 1 year ago

    …then don’t study computer science. I study CS, and it’s annoying when someone in a more math/logic-oriented course is like “If I get a job at a tech company I won’t need this.” All that IS computer science. If you just wanna code, learn to code.

    • cosmicboi@lemmy.world · +1 · 1 year ago

      I would have done CS if every math class at my school didn’t have 500 people in it. Even college algebra. They basically made everything a weed-out class.

      I do think many of the CS concepts are pretty cool :)

    • garyyo@lemmy.world · +15/-1 · 1 year ago

      Wait till you hear about oracle machines. They can solve any problem, even the halting problem.

      (It’s just another mathematical construct that you can do cool things with in order to prove certain things.)

      • Julian@lemm.ee · +5/-1 · 1 year ago

        Thanks for the fun rabbit hole. They can’t really solve the halting problem, though: you can make an oracle machine solve the halting problem for a Turing machine, but not for itself. Then of course you can make another oracle machine that solves the halting problem for that oracle machine, and so on and so forth, but an oracle machine can never solve its own halting problem.
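
        It’s the same diagonal trick as for ordinary Turing machines, just relativized; loosely, in Python-flavoured pseudocode (all names here are invented for illustration):

        ```python
        # Hypothetical oracle machine claiming to decide halting for machines
        # of its own kind (both sides get access to the same oracle O):
        def halts_with_O(machine, argument) -> bool:
            ...  # would answer: does machine(argument) halt, given oracle O?

        def defiant(machine):
            if halts_with_O(machine, machine):  # defiant can also query O...
                while True:
                    pass                        # ...so it contradicts the prediction
            # otherwise, halt immediately

        # defiant(defiant) halts iff it doesn't, so halts_with_O can't exist either;
        # each oracle level only solves the halting problem for the level below it.
        ```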

    • fubo@lemmy.world · +8 · 1 year ago

      If you augment a TM with nondeterminism, it can still be reduced to a deterministic TM.
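
      The NFA version of that reduction fits in a few lines. Here’s a sketch in Python (the example machine is my own, not from the comic): it tracks every branch at once, which makes the nondeterminism deterministic:

      ```python
      # Determinize nondeterminism by tracking the set of states we *could* be in.
      # Example NFA: accepts binary strings whose second-to-last symbol is '1'.
      delta = {
          ("q0", "0"): {"q0"},
          ("q0", "1"): {"q0", "q1"},  # nondeterministic choice: guess "this is it"
          ("q1", "0"): {"q2"},
          ("q1", "1"): {"q2"},
      }

      def accepts(s):
          current = {"q0"}            # all states reachable so far: deterministic!
          for ch in s:
              current = set().union(*(delta.get((q, ch), set()) for q in current))
          return "q2" in current

      print(accepts("0110"))  # True: second-to-last symbol is '1'
      print(accepts("0101"))  # False
      ```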

    • rockSlayer@lemmy.world · +6/-2 · 1 year ago

      Nondeterministic Turing machines are the same kind of theoretical construct as an NFA. They can theoretically solve NP problems.

      • Christian@lemmy.ml · +1 · 1 year ago

        It’s been a long, long time since I touched this, but I’m still almost positive deterministic machines can already solve everything in NP.

        • rockSlayer@lemmy.world · +1/-1 · 1 year ago

          They sit at the same level of the Chomsky hierarchy, so theoretically they can solve the same set of problems. What I should have said was that nondeterministic Turing machines can solve NP problems in polynomial time.
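
          That’s the “guess and verify” view of NP: the nondeterministic machine guesses a certificate, and checking it is the cheap polynomial part. A small illustrative sketch in Python (my example, not from the thread):

          ```python
          # A CNF formula: list of clauses; each literal is (variable, is_negated).
          formula = [[("x", False), ("y", True)],   # (x OR NOT y)
                     [("x", True),  ("y", False)]]  # (NOT x OR y)

          def verify(formula, assignment):
              """Checks a proposed certificate in time linear in the formula size."""
              return all(
                  any(assignment[var] != negated for var, negated in clause)
                  for clause in formula
              )

          # The nondeterministic machine "guesses" the assignment; verifying is easy.
          # Deterministically, we might have to try all 2^n guesses.
          print(verify(formula, {"x": True, "y": True}))  # True: a satisfying guess
          ```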