Lossless encodings, like the one from the wiki above, won’t produce ambiguous results, but the price is that for some inputs the output is larger than the input – e.g. feeding “dc” into the example. In practice this isn’t a problem (assuming whoever designed the encoding knew what they were doing), because the inputs that get enlarged are unlikely to the point of impossibility while the inputs that compress well are very common. But if you actually count them, there are always at least as many inputs that the lossless algorithm enlarges as inputs it compresses.
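To see that trade-off concretely, here’s a quick sketch using Python’s zlib (the library choice is just for illustration): highly repetitive input shrinks a lot, while uniformly random input – standing in for those “unlikely” inputs – comes back slightly larger than it went in.

```python
import os
import zlib

# Highly repetitive input: the common case, compresses dramatically.
common = b"ab" * 1000
# Uniformly random input: stands in for the "unlikely" inputs;
# the compressed form is (almost always) a bit larger than the original.
rare = os.urandom(2000)

print(len(common), "->", len(zlib.compress(common)))  # e.g. 2000 -> ~20
print(len(rare), "->", len(zlib.compress(rare)))      # e.g. 2000 -> ~2011
```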
I’m not saying that it’s impossible to encode *any* file to a smaller size – obviously we can losslessly compress a lot of things. What is impossible is an algorithm that losslessly encodes *all* files to a smaller size. For every file you make smaller, there must be another file you make larger.
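That’s just the pigeonhole principle. A rough sketch of the count (n = 16 is an arbitrary choice):

```python
# How many bit strings are strictly shorter than n bits?
n = 16
inputs = 2 ** n                                   # all n-bit files: 65536
shorter_outputs = sum(2 ** k for k in range(n))   # lengths 0..n-1: 65535
print(inputs, shorter_outputs)
# One short of enough: a lossless (injective) encoder cannot map every
# n-bit input to a distinct shorter output, so at least one must grow.
```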
Also, yes, if you allow loss then you can guarantee that all inputs are compressed, but that’s no fun…
E: Ah. I realize I may have misunderstood you initially. Variable-length coding is certainly a lossless encoding technique, but there are plenty of those – Gray code, bit-parity transfer techniques, etc. What it is not is a general (lossless) compression technique; those don’t exist and cannot be constructed.
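Gray code is a nice example of the distinction: it’s perfectly lossless (a bijection on n-bit values), yet it compresses nothing, since every output is exactly as long as its input. A minimal Python sketch (the function names are mine):

```python
def to_gray(x: int) -> int:
    """Binary -> reflected Gray code."""
    return x ^ (x >> 1)

def from_gray(g: int) -> int:
    """Gray code -> binary (invert the XOR chain via suffix shifts)."""
    x = 0
    while g:
        x ^= g
        g >>= 1
    return x

# Lossless: decoding always recovers the input exactly...
assert all(from_gray(to_gray(i)) == i for i in range(1 << 12))
# ...but no compression: an n-bit input maps to an n-bit output.
print([format(to_gray(i), "03b") for i in range(8)])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```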