Short summary: Many systems (including Unix) store time as a signed 32-bit integer, with the value 0 representing January 1st 1970 00:00:00 UTC. This number will overflow at 03:14:07 UTC on 19 January 2038.
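You can see exactly where the limit falls with a few lines of C (a minimal sketch; it just decodes the largest signed 32-bit value as a Unix timestamp, which works on any platform whose time_t can hold that value):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* The largest value a signed 32-bit counter can hold. */
    time_t last_second = (time_t)INT32_MAX;   /* 2147483647 */

    /* Convert it to broken-down UTC time. */
    struct tm *utc = gmtime(&last_second);
    if (utc == NULL)
        return 1;

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("Last representable second: %s\n", buf);
    /* Prints: Last representable second: 2038-01-19 03:14:07 UTC
       One tick later, a signed 32-bit time_t wraps negative. */
    return 0;
}
```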
It's Y2K-ish, but it's different enough that I'm not sure how serious it will be. My guess is that it could simultaneously be easier and more difficult to fix than Y2K was; the standard "it depends" type of answer. :)
Y2K was more of a formatting / digit-representation problem than a pure data-type overflow. The solution for Y2K was to switch the representation of the year from 2 digits to 4, along with the coding and logic changes that went with it.
For Unix / Linux, the solution to the 2038 problem involves changing time_t from 32 bits to 64 bits. At a higher level (e.g. what's in your C++ code), my instinct is that this by itself wouldn't involve as many code changes: maybe some data-type changes, but probably fewer logic changes than Y2K required, that's my guess. I believe several platforms have already moved to a 64-bit time_t by default, and some provide it by default even for 32-bit builds, such as Microsoft Visual C++ -- https://msdn.microsoft.com/en-us/library/3b2e7499.aspx
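For what it's worth, a quick way to see what your own toolchain gives you is to print sizeof(time_t) (a minimal sketch; 8 bytes means a 64-bit time_t, 4 bytes means the problematic 32-bit one):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* On a platform that has already moved to a 64-bit time_t this
       prints 8; on an unfixed 32-bit time_t platform it prints 4. */
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
    return 0;
}
```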
Since this is a data-type overflow issue, though, we're dealing more with platform-, compiler- and kernel-specific concerns. I don't know, for instance, how easily 32-bit embedded systems could handle a 64-bit time_t value. I understand there are some technical issues with Linux kernels (mentioned in some of the comments) that have kept them from moving to a 64-bit time_t regardless of platform (time_t should always be okay on 64-bit platforms; it's the 32-bit platforms that will have the issue...)
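To make the failure mode concrete, here's a sketch of what a wrapped 32-bit counter decodes to (assuming a platform whose time_t and gmtime() accept negative values; on ones that don't, gmtime() simply returns NULL here):

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    /* After the overflow, a signed 32-bit counter holds the most
       negative value a 32-bit integer can represent. */
    time_t wrapped = (time_t)INT32_MIN;   /* -2147483648 */

    struct tm *utc = gmtime(&wrapped);
    if (utc == NULL)
        return 1;

    char buf[64];
    strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    printf("A wrapped counter reads as: %s\n", buf);
    /* Prints: A wrapped counter reads as: 1901-12-13 20:45:52 UTC
       i.e. system time suddenly jumps back to December 1901. */
    return 0;
}
```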
The good news is we have 21 years to think about it...