Every speck of engineering reality supports Boomzilla's "contention".
The reason comes down to something which many people seem not to understand.
In terms of "data" or even "a packet" the goal is to get both "all the data" and "each packet" from source to destination as quickly as possible.
However, in terms of any individual packet, there is no specific effort to maintain "timing", or any sort of "cadence".
In fact, in general, there is no specific mechanism that even protects the very lives of individual packets.
For example, if two devices send a packet at the same time on a shared Ethernet segment, both packets are simply destroyed, and both senders send them over again.
It's called "a collision" and is no big deal.... nobody cares if the replacements arrive out of order, because the network protocol sorts them out at the other end.
(The protocol is very careful to protect "the data"... but it's no big deal if getting it there requires that a few extra packets need to be sent to replace a few that got lost.)
And, because of this fact, Ethernet traffic is ALWAYS "totally re-clocked".
Only a fool, or a very badly designed protocol, would assume that all the packets will arrive... let alone on time and in the right order.
Therefore, EVERY network protocol in use EXPECTS to have to replace missing packets, sort whatever order they arrive in, and re-clock the data at the receiving end.
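To make that concrete, here's a toy Python sketch of the sort of re-ordering and gap-detection every receiving stack does (the sequence numbers and "chunk" payloads are invented for the example; real TCP does the same job with byte ranges, acknowledgements, and timers):

```python
# A toy sketch (not real TCP, just the idea): the receiver parks whatever
# shows up, hands data to the application strictly in sequence order, and
# notices the gap so the missing packet can be requested again.
import random

packets = [(seq, f"chunk-{seq}") for seq in range(10)]   # what the sender transmits
random.shuffle(packets)                                  # the network re-orders them...
lost_seq, _ = packets.pop()                              # ...and loses one entirely

received = {}        # seq -> payload, parked until it is that packet's turn
next_needed = 0      # the sequence number the application is waiting for
in_order = []        # what actually gets handed to the application

for seq, payload in packets:
    received[seq] = payload
    while next_needed in received:                       # release any contiguous run
        in_order.append(received.pop(next_needed))
        next_needed += 1

print("delivered in order so far:", in_order)
print(f"stalled waiting for seq {next_needed} (it was lost) -> ask for a re-send")
```

Run it a few times and the arrival order changes every time, but what reaches the "application" is always in order, and the stall always lands exactly on the packet that went missing.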
This is the reason why Ethernet networks never actually deliver the maximum throughput you might expect from their specifications.
A "1 gB Ethernet network" will never deliver 100% of that rate...
As the data rate approaches that limit, more and more collisions will occur, resulting in more packets that fail to arrive, and so need to be replaced.
(And, each time that happens, some small percentage of the overall bandwidth is lost, "delivering broken bits" that fail to contribute to the total throughput.)
This is why you NEVER want to run an Ethernet network anywhere near its theoretical capacity.
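For a rough sense of the numbers, here's a back-of-the-envelope calculation using the standard Ethernet/IP/TCP header sizes for full-size frames (the 2% retransmission figure at the end is purely an assumption for illustration): even a perfectly clean gigabit link tops out around 949 Mb/s of actual payload, and every re-sent frame eats further into that.

```python
# Why "1 Gb Ethernet" never hands you 1 Gb/s of your own data, even before
# a single frame is lost.  Header sizes below are the standard ones; the
# retransmission rate is just an assumption for illustration.

LINK_RATE = 1_000_000_000                 # bits per second on the wire
TCP_PAYLOAD = 1460                        # payload bytes in a full-size frame
ON_WIRE = 7 + 1 + 14 + 1500 + 4 + 12      # preamble, SFD, Ethernet header,
                                          # IP/TCP/payload, FCS, inter-frame gap

best_case = LINK_RATE * TCP_PAYLOAD / ON_WIRE
print(f"best possible goodput: {best_case / 1e6:.0f} Mb/s")        # ~949 Mb/s

retransmit_rate = 0.02                    # assume 2% of frames must be re-sent
print(f"with {retransmit_rate:.0%} re-sends: "
      f"{best_case * (1 - retransmit_rate) / 1e6:.0f} Mb/s")
```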
Yes... it's true that modern network protocols do make some attempts to minimize this, to improve overall efficiency, as do modern switches....
And there have, in the past, been protocols that tried very hard to avoid it entirely (the last popular one I recall was "Token Ring").
However, because Ethernet is so fast, and so efficient overall, it has retained supremacy as the protocol of choice in LANs for a very long time.
TCP and IP are protocols that are layered on top of the basic underlying Ethernet protocol.
And TCP, in particular, includes the ability to accept out-of-order packets and to request replacements for damaged or missing ones.
Because of all this going on, ANY COMPETENTLY DESIGNED NETWORK AUDIO DEVICE SHOULD EXPECT TO TOTALLY RE-CLOCK ANY DATA IT RECEIVES.
(And, because of this, fussing with something that will "preserve a few more packets" or "get the packets there a little faster" or "with smoother timing" is silly.)
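If it helps to picture what "totally re-clock" means, here's a minimal Python sketch (all of the timing numbers are invented): packets arrive with ragged spacing and go into a buffer, and the device's own clock pulls them back out at a perfectly steady cadence, so the arrival timing never reaches the DAC.

```python
# Minimal re-clocking sketch: irregular arrivals in, a steady local clock out.
# Every number here is made up purely to illustrate the idea.
import random
from collections import deque

buffer = deque()
events = []

arrival_time = 0.0
for seq in range(20):                                 # packets show up with jitter
    arrival_time += random.uniform(0.005, 0.020)
    events.append((arrival_time, "arrive", seq))

for tick in range(20):                                # DAC clock: one packet every 10 ms,
    events.append((0.5 + tick * 0.010, "play", tick)) # starting after the buffer has filled

for when, kind, n in sorted(events):
    if kind == "arrive":
        buffer.append(n)                              # ragged in...
    elif buffer:
        seq = buffer.popleft()                        # ...steady out
        print(f"t={when:.3f}s  play packet {seq}  (buffer depth {len(buffer)})")
```

The playback lines come out exactly 10 ms apart no matter how unevenly the packets arrived... which is the whole point.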
There is one tiny bit of logic supporting the opposite contention that, IN SOME NARROW SITUATIONS, doing so MIGHT deliver a MINUSCULE benefit.
The process of sorting packets, and reassembling packets, and requesting that packets be re-sent, all requires processing power.
So, if your receiving device is extremely limited in processing power, reducing the amount of work it has to dedicate to this particular task...
just might, conceivably, allow it to do some other task a teeny tiny bit better.....
So, for example, if too many packets are lost, you might get audio dropouts...
And, if that were to occur, the reason would be that "it was so busy sorting out and reassembling packets that it was unable to keep up with the delivery of audio data to its output port"...
However, this sort of thing should never happen, unless there is some serious design weakness somewhere.
The receiving device should be able to handle the error rate present on the incoming data and still keep up adequately with its other duties.
(If it can't, it's like a postal clerk who occasionally sends letters to the wrong place, or loses one or two, because he or she is "working too hard".)
The other issue is that "network capacity" is actually rather complex.... and not a simple number....
For example, a certain router may be able to handle "50 MB/second" .... and "1 million packets per second".... (someone quoted those numbers for a certain Ubiquiti router).
So how much data could that router deliver in one second if some fool decides to send FIVE BYTE PACKETS?
The answer is that, with luck, it will deliver 5 MB/second (1 million packets x 5 bytes each), a mere 10% of its "rated capacity", because it slams into that OTHER limit long before the first one.
And that's not even considering the fact that, after delivering a few hundred million packets, it may need to pause and do the router equivalent of "clearing its head".
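Here's that arithmetic spelled out (the 50 MB/s and 1-million-packets figures are the ones quoted above; the two packet sizes are just examples): a router has two separate ceilings, and you hit whichever one comes first.

```python
# Two ceilings, and the tighter one wins: a hypothetical router rated at
# 50 MB/s AND 1 million packets/s (the figures quoted above).

BYTE_LIMIT = 50_000_000        # bytes per second
PACKET_LIMIT = 1_000_000       # packets per second

for packet_size in (5, 1500):  # a silly tiny packet vs. a full-size one
    throughput = min(BYTE_LIMIT, PACKET_LIMIT * packet_size)
    print(f"{packet_size:>4}-byte packets: {throughput / 1e6:5.1f} MB/s "
          f"({throughput / BYTE_LIMIT:.0%} of rated capacity)")
```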
This is one reason why you sometimes experience dropouts and slowdowns - even on "fast" networks that are "nowhere near their limit".
It's also a reason why you should pay at least some attention to proper network design.
For example, let's say you have a video server, and you often play 4k videos from that server in your bedroom.
And, as it so happens, you ALSO love to video chat with your buddies overseas.
And you ALSO like to download movies from your favorite pirate site (no judgment here).
Well, video chat is somewhat sensitive to network slowdowns...
And watching 4k videos is VERY sensitive to even momentary slowdowns...
But both of those require a more or less consistent amount of bandwidth...
However, downloading big files is not at all sensitive to the occasional slowdown or stall...
But, depending on the server you're getting them from, and the Internet, the speed might vary back and forth between lightning and molasses...
So, in this situation, if you want to optimize your network, you should connect the computer you use to download those movies directly to your Internet router.
Then you should connect your movie server and your clients to a DIFFERENT switch.... which you THEN connect to the Internet router.
This way the switch that handles sending the movies to your TV doesn't have to deal with your download traffic.
(You'll get better performance for both... and you won't see a dropout in your movie when the download enjoys a burst of speed.)
I've run a wired Ethernet cable from my computer room at one end of the house to the living-room / audio-room at the other end.
I contend that, since there is so much unused headroom in the Ethernet bandwidth, and since my streamer (an AURALiC Aries) does its own buffering and re-clocking, it makes absolutely no difference whether my data-storage drive is plugged into the streamer directly via USB or whether it's plugged into the Roon server computer and connected to the Aries over Ethernet.
My audio amigo absolutely insists that not only must the HDD be connected directly to the streamer, but also that the USB connecting cable be as short as possible.
The only difference that I can see between the two options is that if I connect directly to the Aries, then I must use the AURALiC "Lightning DS" app on my iPad to control playback, whereas if I use the Ethernet-connected Roon server, I get to use the far more reliable and user-friendly Roon interface.
Is there truly any advantage in one over the other?
Thanks - Boom