These algorithms (Grisu, Ryu, ...) all satisfy the "internal identity requirement": they can print a double and read the string back in to get the same double.
The harder part is to also produce the shortest of all possible string representations (and, if several strings of that length qualify, to pick the one closest to the value).
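This is easy to see from Python, whose repr() for floats is a shortest-round-trip printer (a sketch; the exact algorithm CPython uses internally is an implementation detail):

```python
# repr() emits the shortest decimal string that parses back to the same double.
x = 0.1 + 0.2
s = repr(x)
print(s)                  # "0.30000000000000004" -- the shortest string for this double
assert float(s) == x      # round-trips to the identical double
assert float("0.3") != x  # "0.3" is shorter, but parses to a *different* double
```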
These are two separate problems. This is about printing floating-point numbers; it doesn't discuss parsing algorithms at all.
When considering the correctness of a floating-point decimal printing algorithm, you could define it as "round-trips through parsing algorithm Y", but that's flawed -- it ties the printing algorithm to one particular parsing algorithm.
Instead, the criterion usually used is "how close is the real value of the output string to the input float?" If the printing algorithm outputs a string whose real value is closer to the input float than to any other float, then it's correct. It's up to each parsing algorithm to provide the reverse condition -- that it parses each string to the float that is closest to the real value the string represents.
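You can check that criterion directly with exact arithmetic -- a sketch using Python's decimal module (Decimal(x) gives the exact real value of a double, and math.nextafter gives its neighboring doubles):

```python
import math
from decimal import Decimal

x = 0.1
exact = Decimal(x)        # exact real value of the double nearest to 1/10
printed = Decimal("0.1")  # exact real value of the printed string "0.1"

# The string's value must be closer to x than to either neighboring double.
up = Decimal(math.nextafter(x, math.inf))
down = Decimal(math.nextafter(x, -math.inf))
assert abs(printed - exact) < abs(printed - up)
assert abs(printed - exact) < abs(printed - down)
```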
Modern float-printing algorithms like Grisu / Dragon / etc. add the further restriction that they output the shortest string meeting that condition. For example, the real number 0.69999998 is closer to the 32-bit float 0b1.011_0011_0011_0011_0011_0011 * 2^-1 (represented in memory as 0x3F333333) than to any other float. The real number 0.7 is slightly further from that float, but it's still closer to it than to any other float, and is much shorter -- so those algorithms print that float as "0.7".
A correct parser should parse both the string "0.7" and "0.69999998" to the same 32-bit float result.
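A quick way to verify that claim (a sketch; struct.pack("<f", ...) rounds a Python double to the nearest 32-bit float, which stands in for a correct float32 parser here):

```python
import struct

def to_f32_bits(s: str) -> int:
    # Parse the string to a double, round to the nearest 32-bit float,
    # and return that float's bit pattern.
    return struct.unpack("<I", struct.pack("<f", float(s)))[0]

# Both strings land on the same float32, 0x3F333333.
assert to_f32_bits("0.7") == 0x3F333333
assert to_f32_bits("0.69999998") == 0x3F333333
```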
I'm not sure, but generally speaking, converting doubles to strings will fail to round-trip at least the NaNs -- a NaN's payload bits aren't captured by printing "nan".
If round-tripping is important, my recommendation would be to output something that corresponds directly to the binary representation of the float -- for example, printf %a (hexadecimal floating point).
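Python's analogue of C's %a is float.hex() -- an exact base-16 rendering of the significand and the binary exponent, so (NaN payloads aside) the round trip is trivially lossless:

```python
x = 0.1
s = x.hex()
print(s)                      # "0x1.999999999999ap-4" -- exact, no rounding involved
assert float.fromhex(s) == x  # lossless round trip
```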