Hacker News

I'm not familiar with the other standards this is being compared to. Why not use PCIe?



Two primary reasons come to mind: 1. PCIe is mostly transceivers; Syzygy can also carry low-speed, single-ended signals, etc. 2. PCIe's card-edge connectors can add manufacturing cost (gold plating, chamfered edges, board-thickness tricks, etc.) and are also relatively large.


PCIe doesn't have to be over an edge connector. PMC and XMC standards use board-to-board stacking. You can also do PCIe over cabling.

What's the point of low speed single ended I/O into an FPGA module? Seems like a waste of effort.


Your question was why to use this over PCIe. Maybe I'm misunderstanding your question, but as Syzygy is a connector standard, your question must have been about the connector. I think colloquially the "PCIe connector" refers to the finger-and-slot standard.

As for why to use low-speed single-ended I/O, I guess it's about perspective. For some, an FPGA is something that sits on a PCIe bus or network. For a lot of people, it's a lot more than that: something that talks to countless chips that may need SPI, I2C, UART, or something custom.

For example, think about all those one-wire LED strips and how easy it would be to drive them in HDL compared to the other approaches. Being able to work easily with bits vs. bytes has huge advantages for custom interfaces. Another example: one time a group I was working with wanted to talk to battery management modules. Each one had a UART and they had to get daisy-chained together, complicating things a lot. An FPGA with more than a few UARTs is trivial and could've talked to each one independently. Even the highest-end micros have maybe 20 at most. You could fit 16 on one of these connectors. The UART block from Xilinx is on the order of 100 flops, so you could have literally hundreds of UARTs. Not that I have seen a need for that, but who knows, maybe someone does.
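A UART frame really is that simple at the bit level (a start bit, data bits LSB-first, a stop bit), which is why an HDL implementation fits in ~100 flops. A minimal sketch in Python of 8N1 framing, just to illustrate the bit-level view (function name and structure are illustrative, not from any particular library):

```python
def uart_frame_8n1(byte: int) -> list[int]:
    """Encode one byte as 8N1 UART line levels: start bit (0),
    eight data bits LSB-first, then a stop bit (1). Idle line is high."""
    assert 0 <= byte <= 0xFF
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]

# 0x55 = 0b01010101 alternates on the wire -- a classic autobaud test pattern
print(uart_frame_8n1(0x55))  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```

Shifting those ten levels out at the baud rate is the whole transmitter; the receiver is the same idea plus oversampling, which is why stamping out dozens of instances costs almost nothing on an FPGA.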

Getting rambly, but another option is test equipment. Let's say you have a device with a low-speed interface and you want to test a lot of them at a time. You could use a bunch of multiplexers and time-division control logic, or slap down an FPGA and do them all at once.


> What's the point of low speed single ended I/O into an FPGA module?

Hardware interfacing with the physical world. Driving small numbers of LEDs, h-bridge drivers, load switches, regulator control signals, reading buttons, accelerometers, gyroscopes, magnetometers, PWM controllers, GPS, ...


That's what microcontrollers are for. FPGAs are harder to program and use a lot more power. So the juice needs to be worth the squeeze!


The application is what the chip is for. There are plenty of low power FPGAs and times to use them over a micro.


I think “low speed” is quite a relative term here. PCIe serdes lanes are for very high data-rate communication (>1 Gbps per lane). This is the realm of the Syzygy XCVR standard.

The lower-speed Syzygy standard, while not operating at these speeds, is capable of much higher rates than a typical microcontroller. There are many peripherals with I/O requirements beyond a simple LED or SPI device, but below that of PCIe or another high-rate transceiver, such as:

- moderate to high end ADCs and DACs (LVDS and parallel)

- image sensors (LVDS, parallel, and HiSPI)

- various networking PHYs

The lower-end Syzygy connector has pinouts to support LVDS differential pairs, which can easily achieve data rates of hundreds of Mbps.
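A quick sanity check on where those peripherals land (the part numbers here are hypothetical, just plausible specs): a mid-range ADC or image sensor already needs far more than microcontroller GPIO can move, yet far less than a PCIe serdes lane.

```python
def serial_rate_bps(samples_per_sec: float, bits_per_sample: int) -> float:
    """Raw payload rate if samples are serialized over a link."""
    return samples_per_sec * bits_per_sample

# Hypothetical 12-bit ADC sampling at 65 MSPS, serialized over one LVDS pair:
adc_bps = serial_rate_bps(65e6, 12)                 # 780 Mbps
# Hypothetical 1280x960 image sensor, 12 bits/pixel, 60 fps:
sensor_bps = serial_rate_bps(1280 * 960 * 60, 12)   # ~885 Mbps
print(f"{adc_bps / 1e6:.0f} Mbps, {sensor_bps / 1e6:.0f} Mbps")
```

Both figures sit comfortably in the hundreds-of-Mbps band that a few LVDS pairs on the lower-speed connector can carry, without needing a gigabit transceiver.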


Piggybacking on this, image sensors using CSI are quite common. I don't know if anyone has this application, but theoretically, if you wanted more than a few video streams (I think even higher-end processors cap out at 4), in come... FPGAs. Maybe the newer Qualcomm XR chipsets can deal with that, but an FPGA seems more attainable.


Qualcomm has supported CSI virtual channels for ages - you can get a little 4:1 bridge IC to renumber streams. IIRC, the 855 had 4 CSI IFEs, and would happily handle a 16x configuration if you bought the wide memory bus.


I think you missed the "low-cost" part from the title.


Why is PCIe expensive?


I suspect part of it is the serdes.



