It looks like that code does read the whole file:
(with a foo.csv that is 350955 bytes long:)
% python -V
Python 3.11.4
% python
>>> f = open("foo.csv")
>>> f.tell()
0
>>> header, *records = [row.strip().split(',') for row in f]
>>> f.tell()
350955
>>> f.close()
>>> f = open("foo.csv")
>>> header, *records = (row.strip().split(',') for row in f)
>>> f.tell()
350955
>>> f.close()
>>> f = open("foo.csv")
>>> headers, records = f.readline().strip().split(','), (row.strip().split(',') for row in f)
>>> f.tell()
125
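As an aside, for real CSV data the ad-hoc `split(',')` breaks on quoted fields; the stdlib `csv` module gives the same eager-header, lazy-records behaviour. A sketch, using `io.StringIO` with made-up sample data in place of `foo.csv`:

```python
import csv
import io

# Hypothetical sample data standing in for foo.csv
data = io.StringIO("name,age\nalice,30\nbob,25\n")

reader = csv.reader(data)   # csv.reader is lazy: it pulls rows on demand
header = next(reader)       # consumes only the first row
records = list(reader)      # materialize the rest (or keep iterating lazily)

print(header)    # ['name', 'age']
print(records)   # [['alice', '30'], ['bob', '25']]
```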
(Also, the `.readlines()` call here is redundant; iterating the file object directly yields the same lines without building an intermediate list:)

[... for row in open(filename).readlines()]
[... for row in open(filename)]
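A quick check that the two forms produce identical rows, again with `io.StringIO` standing in for an open file:

```python
import io

text = "a,b\n1,2\n3,4\n"

# .readlines() materializes every line into a list up front...
rows_eager = [row.strip().split(',') for row in io.StringIO(text).readlines()]
# ...while iterating the file object yields the same lines one at a time
rows_lazy = [row.strip().split(',') for row in io.StringIO(text)]

assert rows_eager == rows_lazy == [['a', 'b'], ['1', '2'], ['3', '4']]
```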
Additionally, this doesn't do what you think it does. The starred target eagerly drains the generator, so records ends up as a plain list, not a generator:
>>> header, *records = (row.strip().split(',') for row in f)
To keep records lazy, pull only the header with next() and hold on to the generator:

>>> gen = (row.strip().split(',') for row in f)
>>> header, records = next(gen), gen
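A self-contained demonstration of the difference, using `io.StringIO` (whose `tell()` has none of the text-file caveats) with made-up data in place of `foo.csv`:

```python
import io

text = "h1,h2\nr1,r2\nr3,r4\n"

# Starred unpacking drains the generator completely...
f = io.StringIO(text)
gen = (row.strip().split(',') for row in f)
header, *records = gen
assert f.tell() == len(text)   # the whole "file" was consumed

# ...whereas next() pulls exactly one row and leaves the rest pending.
f = io.StringIO(text)
gen = (row.strip().split(',') for row in f)
header = next(gen)
assert f.tell() < len(text)    # only the first line was read
records = gen                  # still a lazy generator
assert next(records) == ['r1', 'r2']
```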
I thought that using a list comprehension to bind header and records was eagerly consuming the file, so I changed it to a generator comprehension, but nope: I guess the destructuring bind does it? Not as neat, though. Is there a golfier way to do it?