• kn33@lemmy.world · 11 months ago

      You could do NIC teaming and get 10 Gb/s overall (on the network link - idk how they route USB/NIC/PCIe/etc.). You wouldn’t get that on a single connection that way, though. You’d have to either be content with multiple connections, or skip the teaming and use a multipath-aware protocol like iSCSI.
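      For the curious, a minimal sketch of what that teaming could look like on Linux with iproute2 (the interface names eth0-eth3 and the address are placeholders, and balance-rr is just one of several bonding modes):

      ```
      # Round-robin bond across four 2.5G USB NICs (hypothetical names eth0-eth3).
      # balance-rr stripes successive packets across members, so even a single
      # TCP stream can exceed one NIC's speed (at the cost of packet reordering).
      sudo ip link add bond0 type bond mode balance-rr miimon 100
      for nic in eth0 eth1 eth2 eth3; do
          sudo ip link set "$nic" down
          sudo ip link set "$nic" master bond0
      done
      sudo ip link set bond0 up
      sudo ip addr add 192.168.1.50/24 dev bond0   # example address
      ```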

      • AVincentInSpace@pawb.social · 11 months ago

        I’m curious how close your total throughput would get to the theoretical 10 Gb/s, assuming it was paired with a switch that could keep up. Protocol overhead from Ethernet/TCP/IP is bad enough without NIC teaming, to say nothing of the total throughput of the Thunderbolt transceiver.
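        One way to check would be comparing a single iperf3 stream against several parallel ones (the server address 10.0.0.2 is a placeholder):

        ```
        # Server side:
        iperf3 -s

        # Client: single TCP stream, then four parallel streams (-P 4), 30 s each:
        iperf3 -c 10.0.0.2 -t 30
        iperf3 -c 10.0.0.2 -t 30 -P 4
        ```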

        • SteveTech@programming.dev · 11 months ago

          I don’t know if I’ll remember, but I’ll be able to try this in a few days: I have the same laptop, 2x 2.5G USB NICs (with another 2 already in the mail), and a 10G network.

          If you’re wondering, my intention in ordering them definitely wasn’t this; it was more just to have places around the house I can plug into without the Framework NIC hanging off my laptop.

            • SteveTech@programming.dev · 11 months ago

              Alright, so testing with iperf3 to a 10G host:

              • Single Direction - 7.06 Gbps RX (2.35, 2.35, 1.25, 1.11 Gbps individually), 9.4 Gbps TX (All 2.35 Gbps)
              • Bidirectionally - 8 Gbps Total, 1.538 Gbps RX & 6.55 Gbps TX (315/1220, 232/2080, 256/1520, 735/1730 Mbps individually)

              4x USB NICs on the laptop, 1x Solarflare SFN5122F NIC on the desktop; there were 2 10G switches in between, which may have affected the speeds slightly.

              Also, I can get 4.6 Gbps total (2.3/2.3) bidirectionally on one interface, so I would have expected ~18 Gbps with 4, so that’s interesting I guess? My desktop can do 18.6 Gbps total (9.55/9.11) to my server, so idk.

              Edit: I was using a 1500 MTU; I don’t feel like testing again with jumbo frames.
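              For anyone wanting to reproduce this, a rough sketch of the commands involved, assuming each USB NIC has its own address on the 10G subnet (all addresses, ports, and interface names here are hypothetical):

              ```
              # Bidirectional test on one interface (iperf3 3.7+ supports --bidir):
              iperf3 -c 10.0.0.2 --bidir -t 30

              # One stream per NIC, each bound to that NIC's address and aimed at
              # its own server port (run a matching iperf3 -s -p <port> for each;
              # routing rules permitting, -B selects the egress NIC):
              iperf3 -c 10.0.0.2 -p 5201 -B 192.168.1.51 -t 30 &
              iperf3 -c 10.0.0.2 -p 5202 -B 192.168.1.52 -t 30 &

              # If retesting with jumbo frames, bump the MTU on both ends:
              sudo ip link set dev eth0 mtu 9000
              ```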

              • AVincentInSpace@pawb.social · 11 months ago

                Fascinating, especially that the RX direction saw such varied speeds across the various NICs. Guess that switch wasn’t too keen on trying to split the packets evenly. Also, 1.5 Gbit RX in bidirectional mode? All I can say is yikes.

                Very good to know.

                @slice@feddit.de – you were interested in this too

                • SteveTech@programming.dev · 11 months ago

                  > Guess that switch wasn’t too keen on trying to split the packets evenly.

                  Yeah, probably. I was just using one of those cheap 2x 10G + 4x 2.5G switches that ServeTheHome recently did a video on, so I wouldn’t be surprised if that was the bottleneck here.

                  I could maybe try buying a few more SFP+ transceivers and using my more trustworthy switches, but that seems too expensive for a project like this.

      • dukatos@lemm.ee · 11 months ago

        Why not? An Ethernet frame carries at most 1500 bytes of payload (the MTU) if you’re not using jumbo frames, and with round-robin teaming every packet can go out a different interface. You’ll need a smart switch for maximum speed, though.
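        As a sketch of the smart-switch route: an 802.3ad/LACP bond like the one below needs the switch ports configured as a matching LAG, and because LACP hashes per flow, a single connection still tops out at one link's speed (only balance-rr stripes individual packets):

        ```
        # LACP bond; the switch ports must be set up as a matching LAG.
        sudo ip link add bond0 type bond mode 802.3ad miimon 100 \
            lacp_rate fast xmit_hash_policy layer3+4
        # layer3+4 spreads different IP+port flows across member links,
        # but any single flow always stays on one link.
        ```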