I went to a computer audio class about J River put on by the Houston Audio Society last weekend. The presenter was very knowledgeable going through the J River program. He kept saying he preferred Foobar 2000 because you just add what you want, and he felt it sounded better. So I loaded both on my laptop and compared some files. Listening tests were through headphones. Foobar 2000 was easier to use, but J River offers many more graphic options. Many, many more. Sound-wise I actually heard differences: Foobar 2000 sounded thin. It doesn't cost anything to add Foobar 2000 and see for yourself.
If two different bit-perfect software players sound different to you, there can be only two possible explanations (barring expectation bias, of course).
1. The playback is NOT bit perfect in at least one of the players. You would be surprised to find out how many people think they have set it up right but haven't... you can take my word for it that this is the classic explanation number one. Less common are software/hardware design flaws or errors/malfunctions that affect the playback process in such a particular way that bits are altered somewhere in the data path, so the stream isn't bit perfect when in fact it should be.
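If you'd rather check explanation 1 than take your settings on faith, one common approach is to record a digital loopback of the player's output and compare it bit-for-bit against the source. A minimal sketch in Python, assuming you already have the source and the capture as WAV files with matching formats (the file names are just placeholders):

```python
import hashlib
import wave

def pcm_sha256(path):
    """Hash only the PCM payload of a WAV file, ignoring header metadata."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return hashlib.sha256(frames).hexdigest()

def is_bit_perfect(source_path, capture_path):
    """True if the captured stream is sample-for-sample identical to the source."""
    return pcm_sha256(source_path) == pcm_sha256(capture_path)
```

If even one sample differs (volume control, resampling, a DSP left on), the hashes diverge and the comparison fails, which is exactly the kind of silent non-bit-perfect setup described above.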
2. Software-induced jitter and/or electrical noise contamination and/or electromagnetic interference (EMI). Protocols like AES/EBU and S/PDIF do not provide a dedicated clock signal. On top of that, most DAC units lack a separate, specialized input connector through which an external clock signal can be fed into the DAC. As a result, the DAC's input can't operate in asynchronous mode, so the clock must be recovered from the data signal, which carries NOT ONLY the data itself, BUT ALSO the arrival times of the data. Those arrival times are determined by the transmitter hardware, i.e., the digital output interface inside the noisy computer, and the computer's electrical/electromagnetic noise patterns are, in turn, partly shaped by the software that controls those noisy hardware components. It is immediately clear how this could audibly affect input jitter performance/characteristics.
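To make the "clock recovered from arrival times" point concrete, here is a toy simulation in Python, not a measurement of any real interface: a nominal 44.1 kHz frame clock gets Gaussian timing noise added (a crude stand-in for transmitter-side disturbance), and since arrival times are the receiver's only clock reference, the spread of those intervals IS the jitter it inherits.

```python
import random
import statistics

FRAME_PERIOD = 1 / 44100  # nominal frame period at 44.1 kHz, in seconds

def simulate_arrivals(n_frames, timing_noise_rms):
    """Arrival timestamps = ideal clock + Gaussian timing noise.
    The noise term crudely stands in for transmitter-side disturbance."""
    return [i * FRAME_PERIOD + random.gauss(0.0, timing_noise_rms)
            for i in range(n_frames)]

def period_jitter_rms(arrivals):
    """RMS spread of the intervals a receiver would recover its clock from."""
    intervals = [b - a for a, b in zip(arrivals, arrivals[1:])]
    return statistics.pstdev(intervals)

random.seed(42)  # reproducible demo; noise levels below are illustrative
quiet = simulate_arrivals(2000, timing_noise_rms=50e-12)  # ~50 ps disturbance
noisy = simulate_arrivals(2000, timing_noise_rms=5e-9)    # ~5 ns disturbance
print(f"quiet transmitter: {period_jitter_rms(quiet):.3e} s RMS")
print(f"noisy transmitter: {period_jitter_rms(noisy):.3e} s RMS")
```

The noisier transmitter leaves a proportionally larger spread in the only timing reference the receiver has, which is the mechanism the next paragraph's asynchronous USB input is designed to sidestep.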
A well-engineered asynchronous USB input on a DAC effectively solves the input-jitter problem by reducing this particular type of jitter to virtually zero, so the player software can no longer be a factor in that regard. But what about the noise/EMI? USB cables (and coaxial S/PDIF cables, etc.) have metal conductors in them, which act as antennae: they pick up EMI from the air around them. Even if neither the entire DAC unit nor the digital input interface inside it is USB powered (i.e., if the whole thing uses its own dedicated power supply), and even if the power lines in the USB cable are not physically connected at the DAC's USB input, electrical noise can still ride the DATA lines instead. So two things are needed. First, proper metal shielding against EMI, preferably not in boutique USB cables but between the USB input interface and the rest of the DAC (as well as between the DAC and the exterior environment, for obvious reasons). Second, proper galvanic isolation and an associated power supply implementation that prevents the electrical noise of the USB input interface from leaking, via a physical connection path, into the rest of the DAC unit. Only THEN can we begin to speak of a ROBUST piece of DAC equipment that isn't in any way susceptible to the (often HIGHLY) audible differences commonly, but WRONGLY, attributed to different bit-perfect software players.