Both the ROCKPro64 and the NanoPi M4 from 2018 have a x4 PCIe 2.1 interface. The same goes for almost all RK3399 boards that bother to expose the PCIe interface.
Update: there’s also the more recent NanoPC-T6, with the RK3588, which has PCIe 3.0 x4.
They could’ve exposed more SATA ports and / or PCI lanes and decided not to do it.
And… let’s not even talk about the SFF-8087 connector, which isn’t rated for use as an external plug: you’ll likely ruin it quickly with repeated insertions or some light accident.
PCIe 2 x4 is the same speed as PCIe 3 x2, no?

Generally, there’s a small difference in speeds:

PCIe 2.0 x4 > 2.000 GB/s
PCIe 3.0 x2 > 1.969 GB/s
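The small gap comes from the line encoding each generation uses. A minimal sketch of the arithmetic (the rates and encodings are the standard PCIe figures; the helper name is mine, not from the thread):

```python
# Usable per-lane bandwidth = raw transfer rate x line-encoding efficiency.
GEN = {
    # generation: (GT/s per lane, encoding efficiency)
    "2.0": (5.0, 8 / 10),     # 8b/10b encoding
    "3.0": (8.0, 128 / 130),  # 128b/130b encoding
}

def throughput_gbps(gen: str, lanes: int) -> float:
    """Payload bandwidth in GB/s (decimal) for a given generation and lane count."""
    gts, eff = GEN[gen]
    return gts * eff / 8 * lanes  # GT/s -> Gbit/s of payload -> GB/s

print(f"PCIe 2.0 x4: {throughput_gbps('2.0', 4):.3f} GB/s")  # 2.000 GB/s
print(f"PCIe 3.0 x2: {throughput_gbps('3.0', 2):.3f} GB/s")  # 1.969 GB/s
```

Real-world throughput will be a bit lower still, since TLP headers and flow control eat into the payload bandwidth.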
But we also have to consider that the suggested ARM CPU does PCIe 2.1, and we have to add this detail:
PCIe 2.1 provides higher performance than PCIe 2.0 by facilitating a transparent upgrade from a 32-bit data path to a 64-bit data path at 33 MHz and 66 MHz.
It also shouldn’t have a large impact, but maybe we should think about it a bit more.
Anyway, I do believe this really depends on your use case: whether you plan to bifurcate it or not, and what devices you’re going to have on the other end. For a NAS, for instance, I would prefer the PCIe 2.1 x4, as you could have more SATA controllers with their own lanes instead of sharing PCIe 3.0 lanes through a MUX.
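For the NAS case, a back-of-envelope check shows why dedicated PCIe 2.x lanes are usually enough. This sketch assumes ~200 MB/s sequential throughput per spinning disk (a typical figure, not from the thread) and the 500 MB/s payload of one PCIe 2.x lane:

```python
# Whether a set of HDDs behind a SATA controller can saturate its PCIe lanes.
PCIE2_LANE_MBPS = 500  # usable payload per PCIe 2.x lane
DISK_MBPS = 200        # assumed sequential throughput of one spinning disk

def disks_fit_lanes(disks: int, lanes: int) -> bool:
    """True if the aggregate disk throughput fits within the given lanes."""
    return disks * DISK_MBPS <= lanes * PCIE2_LANE_MBPS

# A 2-port SATA controller on a single PCIe 2.0 lane has headroom:
print(disks_fit_lanes(2, 1))  # True: 400 MB/s fits in 500 MB/s
# Four disks behind that same single lane would saturate it:
print(disks_fit_lanes(4, 1))  # False: 800 MB/s > 500 MB/s
```

With x4 PCIe 2.1 you can give each controller its own lane (or pair of lanes) and avoid the shared-switch bottleneck entirely.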
Conclusion: your mileage may vary depending on the use case. But I was expecting more PCIe lanes to be exposed, be it via more M.2 slots or some other solution. I guess that when a CPU comes with everything baked in and the board maker “only has” to run wires around, they’d better do it properly and expose everything. Why not all the SATA ports, for instance?