I saw her take on this on Twitter and flinched. We had a similar experience with a >$5K eval board from Xilinx that we were basing a product on. A customer wanted to start software development before our product was ready, so we sold them a setup equivalent to our product in design (which embedded the Xilinx board). They found a problem that turned out to be a design defect on the eval board, so we went to Xilinx for a warranty replacement. They refused, saying they expressly disclaim any "fitness for purpose" on their eval boards and their terms of service disallow using their eval boards in a product. I rather rudely asked them why they charged money for them then.
In Kansas there is a law explicitly dealing with this: you cannot disclaim an implied warranty of fitness for a particular purpose. In the broader context of the entire US, this is just a restatement of common-sense law that is already on the books.
I bought the product to fulfill a need. If it can't do that, then you don't get to keep the money. Nobody hands money over for some vague, nebulous promise of something in return.
AFAICT, that only applies to "an individual, husband and wife, sole proprietor, or family partnership".[1][2] All other transactions, such as B2B, are subject to the typical UCC rules.[3]
That is pretty neat. I thought California's warranty laws were pretty strong, but the lawyer suggested we wouldn't win if we sued (not that we'd likely sue the only supplier of a part we need, but you know, "in principle").
Does the Kansas law apply to software as well? That would be a neat trick. There was a national effort to get this level of support into the UCC (Uniform Commercial Code), but as I recall it was unsuccessful.
I think I read somewhere that somebody ended up using a consumer ARM laptop as a build server, because all the ARM boards on the market are crap and can't sustain 24-hour heavy loads, while a consumer-grade laptop at least gets tested enough to keep working under heavy load.
Unless you specifically mean a certain berry kind of board, this is incorrect. There's a spectrum of well-made SBC and SoM ARM designs to choose from, with proper vendor support, documentation, certifications, and product lifecycle guarantees.
Might have been an issue with the market available to consumers/hobbyists, who can't necessarily access business stock and instead fish for product on eBay. Not much info was really included in the OP.
Cumulus used to use a pile of Chromebooks with USB SSDs attached for ARM builds. One day in the lab I noticed that 2 of them had popped their cases open due to the battery expanding. Asking around, we were not the only folks to find that cheap Chromebooks were not designed to be run at 100% CPU 24/7.
Absolutely. As an embedded engineer, 90% of ARM boards (including and especially the popular Raspberry Pi) are complete dogshit. Barely (or not) functional, absurd design decisions, etc.
A hilarious example - to deal with the poor reliability of the RPi, you might think to use its watchdog timer to reset the device if it locks up. But guess what? The WDT in the crappy Broadcom chip on the RPi doesn't work! It's supposed to be there, and it's exposed to the kernel, but it doesn't actually do anything.
Come on, you need to at least read the datasheet and errata of the Broadcom SoC. Oh... There isn't one... Unless you're purchasing millions of chips from them and signing an NDA.
Seriously, why the hacker community even touches the RPi is beyond me. It is a ridiculously closed product.
It's the "maker" community, who are more than satisfied to glue together half-working things with no real knowledge --- or intent to learn --- about how they work.
In fact I'd bet most of them would be scared off by the several thousand pages of documentation an SoC like that usually would have.
This sounds disparaging of the "makers" but if your options for a hobby project are:
1) glue shit together
or
2) read several thousand pages of documentation
who is going to choose the latter?
This is a market that's being poorly served (maybe it's not big enough to justify investment in developing non-shitty products for them?) but blaming the users seems distasteful.
It’s still a crap show. I’m working on a BeagleBone; I just decided to stay in the PRU for now, because connecting a timer to a DMA to spit bit patterns out a GPIO port without using a processor would involve connecting sections of the manual that are thousands of pages apart.
> It's the "maker" community, who are more than satisfied to glue together half-working things with no real knowledge --- or intent to learn --- about how they work.
Is that any different from all that stack overflow copypasta you find all over source code, where the paster clearly had no idea how that code worked? And I’m talking about people who are being paid to ship code.
shudder. It’s a wonder anything functions at all these days.
Not to pick on anyone, but I recently installed a stock Storybook setup. Out of curiosity I did a count of all the node_modules (top level + nested). There were >2000 packages installed.
Now I know that if I follow the rules of a textbook professional (one that takes security seriously), then I'm obliged to audit all 2000 packages. But between you and the rest of the gazing internet, I don't do this. That's my dirty little secret. I don't give a shit. Gluing orphaned half-assed npm packages together and pretending everything is okay is my job.
The ironic part is how many hours we are murdering worrying about TypeScript or unit tests or integration tests and trying very very very hard not to notice that landfill we have placed under our massive rug.
There are too many technologies and not enough time to master them. Every time I dive into a popular node package I become more and more worried for the human race. We get things fundamentally wrong and it lives like that. For decades. And people come along and just obliviously use it like that and don't question it. Take something simple, like JWT. Or nonces. Or what people believe and implement around random numbers and how random numbers work (or, often, don't work). Bad information becomes bad code that sits around for a lifetime.
I started on ARMv5 back in the days of Sheevaplug/Dreamplug. At least that had JTAG, even if the cable did cost $70.
I have tried:
* Espressobin: While Marvell's hardware is pretty good, they've had trouble with the v7 board, resulting in the small community being somewhat bifurcated. I still don't think it runs a mainline kernel properly. At least the PCIe slot is standards-compliant. u-boot+ATF will always be a 2018 (?) Marvell fork, no one's ever going to update that.
* Firefly RK3399. Lol. https://lkml.org/lkml/2020/4/6/320. Although, if you can get ARM Trusted Firmware working it's a reasonably competent board.
* Vocore2. Support is reasonably helpful but insists that I use GCC 3.4.2 to compile the bootloader. Remember to back up the wifi calibration data - you can't ever get it back if you erase it. You will need to break out the magnet wire if you want to unbrick it. Shame, it's a nice little MIPS board.
* Anything Libreboard - use this, they've put a stunning amount of work in to get things upstreamed.
When I want to just work on something not on a PC, I keep coming back to the Pi. The Pi 4 is competent with an SSD attached over USB 3, and the Pi 0W is great for very simple tasks that need Linux. I wish it were better, I wish Broadcom was more open, but the Pi has the biggest community I've seen in any "hobbyist" ARM space.
Seriously, it's easy to crap on this stuff when you're willing to recommend boards that cost "just 20 dollars more", or have one-hundredth the community because most of the end users are embedded engineers or the anything-but-a-Pi sub crowd.
To me it just shows a lack of ability to consider requirements in evaluation. Even something as simple as the fact that the RPi form factor has been fairly consistent for a while is a huge boon to people just trying to get started with this stuff.
I could give my mom an RPi and a $20 starter kit and she could probably get to the point of a working PC.
I wouldn't dream of that with anything else. The fact it's Broadcom based is a little unfortunate but that ship has sailed and the ecosystem around it now trounces any benefit of alternatives for the actual people using them. Not embedded engineers who say it's crap because they wouldn't make a custom product based on that SoC.
-
This is like when people point out that for half the price of an Arduino Uno you can get an STM32 board that will run circles around a dinky ATmega328.
It's not about the power or "objective goodness"; even the header layout of the Uno lets beginners use an insane wealth of peripherals designed for it.
> This is like when people point out that for half the price of an Arduino Uno you can get an STM32 board that will run circles around a dinky ATmega328.
There is an Arduino port to STM32 (Blue Pill in particular), for those whose idea of embedded programming involves a bunch of delay() statements scattered around the library code, making sure no other peripherals can be used while you're polling for the next UART char.
I'm well aware of how many places the Arduino core is ported to, it doesn't change the fact an actual Uno has the defining form factor and MCU.
There's a million and one peripherals designed for that specific combination that don't work seamlessly with other boards (even within the "official" Arduino family)
As for your complaint about the quality of the library code: it's perfectly fine that the code sucks if it lets people do a million and one things they couldn't otherwise.
Do you really think someone programming an Arduino for controlling their cosplay LEDs cares if the library they used is poorly written? As long as it works it works.
They were never going to read the datasheet for an MCU and start figuring out bit masks for pin initialization, so it's literally a case of something being better than nothing, and there's no reason to gatekeep the environment that lets them use embedded systems.
Yeah RPi is a great toy but you should treat it like a chemistry set. Use it as a tool for learning but don’t build a product with it. The difference between a $24 board and a $200 board is often the level of support you get. The boards I’ve used in the past all have manuals down to the chip and I won’t touch anything that doesn’t.
And at an 8x smaller price it's perfect for throw-away units. I've heard of people who use it for projects where you place things out in the environment, where you really have no control over vandalism, animals, etc., so just replacing a cheap unit here or there isn't a big deal.
On the other hand, a surprising number of companies decide they don't need that kind of documentation and just ship Pis, and that often works fine too. Not for things with deep hardware integration, and not at mass scale, but plenty of things just want "Linux in a box with some simple peripherals".
As the person tasked with debugging one of those "Linux in a box with some simple peripherals", I'm not even 100% convinced they sell them with any intent beyond the consumer turning it on once, playing with it for a few minutes, then forgetting it exists.
Yep, the 200 pages you get from there is a bad joke. Even a relatively simple ARM microcontroller (the STM32F1 series) has over a thousand pages of documentation in the reference manual. A full set of reference documentation for a SoC like the BCM2835 would probably come out to 5-10k+ pages.
While the rpi does not have as open a foundation as we would want in an ideal world, it seems like the alternatives are not much better. Do you have any alternative boards that you would recommend to the hacker community?
Not sure why you’re downvoted. The Pi boards are terrible, and the objective of most of the market appears to be undercutting them.
Commercial embedded stuff is better but not without warts. A former colleague of mine sent me a rant a couple of weeks ago about one board which I won’t name, but the vendor couldn’t even get a Linux kernel to boot reliably on it while it was shipping to customers.
Engineers and power users tend to forget that the Raspberry Pi wasn't designed for them.
The target market for the Raspberry Pi doesn't care about reading the SoC datasheet or connecting the latest NVMe drive that costs several times more than the SBC. They just want a cheap, simple board that lets them start learning and tinkering with ample resources available on the internet. The $35 Raspberry Pi excels in this regard.
Most instances of whistle blowing are situations where a person has promised not to reveal something and then decided to. Situations like that are necessarily an ethical trade off - the person doing the whistle blowing has decided that keeping their word is less important ethically than revealing the secret but this is a very subjective decision that different people (and ethical systems) can and do have different opinions on.
Here in Germany, Toradex ARM SoMs are quite popular for industrial applications. The hardware is reliable and the software for it (Windows whatever CE is called now and Linux BSPs for Yocto) is pretty good on release and well maintained for many years afterwards. The modules are not exactly cheap for the performance but also not super expensive, generally in the 50-150€ range.
I've worked on a product that was based on an Orange Pi. The documentation was non-existent and reliability was abysmal.
The product was used in hospitals, no less. I still have nightmares about it. Granted, the functionality was non-essential (it wasn't anything remotely related to life support, but still, it was used in a freaking hospital). I tried convincing management to at least switch to the RPi, but the RPi was three times more expensive and, according to my tests, not reliable enough to convince them. Cheap ARM boards are at best interesting toys, and you should not even try to make products with them. Ignoring this will be your downfall. The company was bleeding money left and right fixing already-sold products. That's when I GTFO'd.
Really? Although I agree the Orange Pi is far from great, I have one currently at an uptime of 116 days running various services, and I've had no issues for the past 4 years; it's outlasted its power supply. I'm wondering if most of the issues with Pi boards come from bad SD cards and power supplies. I also have an Odroid C2 and a 1st-gen Raspberry Pi running 24/7 with no issues.
It was around 3 years ago. I haven't touched an ARM board since. We specifically used the Orange Pi Zero. Armbian at the time had thermal management problems, to the point that the boards would straight up fry into a brick. I remember there was an update that fixed the thermal problem, but it would corrupt the SD card on about one in five reboots. And it really didn't solve the thermal problems; they'd get hot enough to be unresponsive. So the clever decision by management was to add a fully self-designed AVR-based board to periodically poll the device on one of the GPIO pins and somehow reboot the Orange Pi if it didn't respond. So reboots were common and SD cards only lasted a month, tops.
Nothing specific - I noticed that it was broken, and then I did a google search and I found a bunch of people running into the same problem, with no one conclusively getting it working.
Right, I see. I know the old supporting Debian systemd service was broken in various ways for a while. In my experience the hardware works fine in this instance, at least on the older chip (and I doubt very much they changed it).
Seconded. I have a rpi4 for a small home-automation project that I haven't really started.
If anyone knows of an alternative that's in the $100 (US) price range, runs linux, and will make my life easier, I'm grateful for suggestions.
Oh, and it needs to play nicely with a USB-to-ZWave dongle and one of the major open-source home-automation packages that can do lighting controls.
EDIT: I forgot to mention that I want it to reside in a closet that has very little ventilation. So I'd be leery of deploying a cheap used x86 system, even if the price was right.
EDIT2: Holy crap I'm being high-maintenance here. I should just do a damn Google search like everyone else.
Beaglebone Black ($43) or even Beaglebone AI if you're willing to go to $127.
The power system doesn't suck like the RPi series. The chips are all well-documented with the possible exception of graphics. It has eMMC so you don't have the microSD idiocies if you don't want them.
And if you need genuine real time, it has two RISC cores (4 on the AI) that operate in actual real time as opposed to Linux kinda/sorta/when-it-works/maybe real time.
> Never had crap like that happen to a raspberry pi.
Uh, yeah. A failure mode so rare that they can't isolate it.
RPi--the system that overdraws every USB interface on the planet and requires special power supplies. RPi--the system that couldn't even get USB-C right. RPi--the system that shuts down when you fire off a flashbulb (not their fault--but still a power failure). RPi--the compact system that needs a heatsink or it shuts down.
RPi has lots of things going for it. The power system is NOT one of them.
(If you want to pick on Beaglebone power systems, pick on the PocketBeagle. It has some truly stupid failure modes.)
I use the BeagleBone Green and BeagleBone Green Wireless for lots of projects at work. They're nice boards.
My main frustration with them is that there's a ton of outdated documentation around how some of the software stack works. It's changed quite a bit over the last 5 years and you can easily go down pointless rabbit holes.
Don't listen to the hype. Get your home automation set up with what you have today and get something working. You don't need a Jetson or an Intel NUC to get started. My uptime was 100% with my home automation until a hurricane came.
Couple years ago I bought a pile of barebones ex-Datto Alto, NUC-style, AMD GX-415GA systems on eBay for sub-$5/ea and I reach for one of those for pretty much anything that doesn't Actually Need an RPi. RAM runs $5-$10 for 4GB, $20 for a 2.5" SATA SSD. They boot much faster than a Pi, idle around 7w, and run fine in my "network closet" that gets over 100F in summer.
Those exact systems aren't plentiful on eBay any more but there are tons of NUC-alikes, actual NUCs, and Thin Clients in the $40-$75 range with low-TDP CPUs of comparable performance.
Check eBay for corporate standard desktops which are dumped on there regularly. Dell OptiPlex is a good starting point, scale the price with age and you'll still wind up with a lot of compute for not a lot of money.
I've got more than a thousand of the previous unit in production, and I'm just ramping up on APU units. Yes, they draw more power than a Pi, but still very little. They're easy to develop for (amd64), with tons of Ethernet ports, easy-to-use reliable storage, USB 3, SATA, mPCIe ports, SIM slots, and openly available schematics.
Honestly, the Raspberry Pi 4 is just fine for what it's meant to do. It's a lot of CPU for the money and it's easy to find resources on the internet.
You could consider the more expensive Jetson Nano if you need a powerful GPU or if you just want to try something different. People tend to assume it's faster, but the Raspberry Pi 4 CPU actually beats it in most cases ( https://syonyk.blogspot.com/2019/11/battle-of-boards-jetson-... )
Museums tend to be climate controlled; factories... not so much.
It's not like they overheat and lock up in 4 hours, it's more like randomly in a day or two. But since they're just monitoring stuff every couple min, it probably felt safer to reboot all the time.
Not sure exactly about the decision process that went there, I just did part of the software running on them. But they definitely weren't stable long term.
There was a parking kiosk machine near my house that was using a Raspberry Pi. I only know because it was often stuck at the boot screen with the Raspberry Pi logo at the top. My Pi-hole runs for months without issue, but this kiosk was down really often. I wonder why; maybe the SD card is not reliable enough, or maybe it's the climate fluctuation like you mentioned, since the kiosk is outdoors?
What I've noticed is that they can run off 500 mA USB if you run them headless, but need the full 2 A from their power adapter if you run X and USB peripherals.
So your pihole doesn't heat as much as the pi in that kiosk even given the same room temperature. And then, the temps inside an outdoor box in the sun can go way up too.
When I worked at Garmin, a guiding principle was “reset, and reset often”. Embedded products that you have to push out the door quickly have too much internal state to track.
Eval boards are often more buggy than production boards for the exact same reason they are more expensive: volumes are low, so engineering work must be amortized over a much smaller number of units.
It's not just the small volume. Having worked on a number of ASIC + eval-board combos, there are several issues:
1) Eval board design typically begins before the ASIC design is finished. This means eval boards are frequently subject to last-minute design changes (because something in the ASIC changed) and/or intense schedule pressure for the design to be completed and the boards manufactured between the ASIC design freeze and when the ASIC comes back from the fab. Last minute design changes and high schedule pressure are not conducive to high-quality engineering.
2) Eval boards frequently have multiple purposes. The primary focus of an "eval" board is likely to be test platform for the ASIC, and customer evaluation is a secondary goal at best. As an external customer of the eval board, you're not even the _primary_ customer - the primary customer is the internal testing team. If "easy for the test team to use" and "easy for an outside customer to use" ever come into conflict, the test team is going to win.
2b) Because the eval board's real target customer is a team within the company that has access to the ASIC documentation, preparing external documentation is a low priority. And since eval boards can indirectly expose embarrassing bugs in the ASIC, design secrets, etc., the external documentation is either heavily redacted, gated by onerous NDAs, or both.
3) Eval boards aren't the product - the product is the ASIC - and selling eval boards is rarely profitable on net, so there's pressure to keep the costs down. In the same vein, the PCB team for a semiconductor company is considered lower status within the ASIC company (if they're even part of the company and not contractors or an outside design house, which is also very common). This doesn't mean they're less skilled, necessarily, but it does mean they have fewer resources and less influence within the company, so "eval board" issues are going to be low priority.
3b) Because eval boards are a net loss, the incentive to go back and clean up any issues caused by (1) and (2) is low to non-existent.
4) Because the eval boards _never_ go to high volume production, the eval board design team frequently has no experience or motivation to do proper DFM (design for manufacturing) or DFT (design for test) on their PCBs. Assembly is frequently manual because the volumes are too low to justify setting up (and debugging) an automated assembly line, leading to high product variability.
5) The eval board price is intentionally high to filter out non-serious users who're not engineering with a plan for eventual high volume production. Sales reps may just eat the cost of some dev resources if it enables a volume sale, or get it reimbursed as promotional expenses.
I met with a field engineer for a major chipmaker. I asked him about the samples that he tossed to our group, he said he usually just got them from DigiKey.
> The eval board price is intentionally high to filter out non-serious users who're not engineering with a plan for eventual high volume production.
+1. Also, they know you cannot do your job without them, and anyone buying one is a company with a budget, so they can set the price to the maximum that companies will still accept. It's not unusual to sell a $200 eval board for a $2 commodity chip. Fortunately, at least for cheap commodity chips, the reference schematics and layout are often available for free.
This explains the cost, but why are they supposed to be more buggy? Shouldn't you have time to test them individually if there's just a small number of them?
In college I worked as an inspector at a plant that made aerospace boards.
Each board was tested multiple times; there were inspectors after almost every step, to the point that each board was inspected at least 4 times during assembly, and then 2 separate full tests were done on every board. No sampling: each and every board was tested. Speaking to the engineers, our projected defect rate was <1%, and we achieved that. However, for rough numbers, that 1.6K board would likely cost >10K to achieve that level of reliability.
This kind of thing happens all the time. Intel sent us an eval board that wouldn’t boot. After various people screwed around with it I finally looked at the schematics and saw a pullup capacitor on the UART Vcc. A glance at the board and indeed there was a cap between Vcc and the chip supply. An easy workaround but how the hell did this pass?
So the board was booting up fine; you just couldn't talk to it. We expected the chip wasn't going to survive and indeed, it didn't.
Uh, not to sound like a ton of bricks, but what is a "pullup capacitor"? You mean the connection to the chip's Vcc was through a capacitor (effectively blocking any DC current and not powering the chip), instead of just a trace?
Yes, clearly someone had meant to put a resistor there but instead spec'd a cap, making an open circuit instead. Shorting out the cap solved the problem.
Since they'd shipped these boards for a while it was pretty likely nobody was using them. Since nobody wants to design around a part likely to be cancelled due to poor performance in the market...we figured nobody would be relying on this part.
I called it a "pullup capacitor" because it was where you'd expect to see a pullup resistor and thus the name is funny.
It would be interesting to see if the same "wrong" resistor was used in the same position on multiple boards, or if the reel loaded into the robot just had one out-of-place resistor. If it was a one-off, think of the odds: out of the many components on the board, the one bad board found its way to the one guy skilled enough to not only notice, but figure out what was wrong and have the proper component to swap in. I'd be buying lottery tickets if it were me.
I'm not convinced it was the wrong resistor, it could have had the correct value until someone (not necessarily OP) grabbed the board wrong, dropped it, or dropped something on it, and scraped some of the conductive layer.
In any case, having covered circuit boards is a critical part of the reliability equation in consumer electronics. In an EE environment, component-level access and visibility can make it worthwhile to deal with naked boards, but you inevitably drop a few nines of reliability.
The scenario you describe is very unlikely (the conductive part of a chip resistor is enclosed in a protective coating).
What is a lot more likely is one of the following:
* Engineer sent incorrect assembly drawing (sometimes when components are shuffled around with silkscreen/assy drawing layer hidden, the label for component can end up in unexpected place).
* Someone manually updated the part numbers from one revision to another and missed the one resistor.
* A new engineer joined the project and, after a minor unrelated change, regenerated the fabrication outputs as one should, but he was not aware of changes somebody else had made to the BOM file by hand.
* PCB assembly was performed by hand for prototype and contained an incorrectly placed part, but the assembled PCB ended up being shipped because of complete lack of production planning, somebody in sales selling stuff before it's ready and this being the only option to avoid contract cancellation.
I have personally witnessed every one of these and then some.
Yes, product engineering sometimes really is that shoddy, and people, being people, do stupid things.
No, resistors are not always fully encapsulated, especially if they are small, cheap, and/or designated for high speed applications, and no, the ones that are encapsulated are not protected well enough to make high or intermittent resistance failure modes particularly unlikely after being poked. I've seen both cracking and scraping in the wild. Not as many as I've seen shorted MLCCs, but that would be a tall order.
Sure, it could also have been a design failure, but every time I've found one bad resistor in a bank of identical resistors, it has always been a physical problem with that resistor. Shrug.
Another source of cracking could be related to moisture in the SMD components. We do primarily small volume PCB runs where I work. In house, with parts stored here, pick-and-placed here, etc. We have had components crack as well, and I recall one of the suspicions being the source of the components. They were a supposedly reliable supplier, but components from them failed more often than those from another supplier so we stopped getting parts from them.
I am not sure if resistors are as prone to humidity effects, but if plastic parts are not humidity controlled before going into a reflow oven, they are prone to cracking. Even as a software guy, I have seen several brand new boards with cracked components on them over the last 15 years. Often the cracked components are difficult to find, since they are so small and do not fail completely (they are just off by a wide margin.)
That is indeed interesting; I haven't seen it in the wild. Lots of bad solder joints and shorted MLCCs, sure.
I'm now tempted to scratch the ones used in designs I have access to.
SMD resistors come on reels, so it is not possible to have an out-of-place resistor. What is possible is that the pick-and-place machine that assembled it had multiple reels for the same part value, or that a reel ran out in the middle of an assembly (less likely).
Not if it's on tape & reel with automatic P&P, I agree. But if those boards are made in such small quantities that they're being hand-stuffed, I wouldn't be surprised to see an errant resistor somewhere. Maybe the human pick&place was distracted and picked from the wrong reel, or 100 resistors were dumped into a bulk tray and there was a 400ohm resistor on the bottom of that tray or any number of scenarios.
I've seen so many assembly errors in my time that I'm pretty sure it was something like this that happened.
Or the PnP machine didn't place the resistor, or misplaced it because the coordinates were wrong in the PnP program; it was caught by the AOI machine, and then the wrong resistor could have been placed by a rework technician.
I never even looked at the credit for the article. "This guy" was just a generic reference to a random person I don't know, kind of like how Canadians refer to anyone as "buddy".
Unlike buddy, it's an inaccurate generic reference. It's a little more typing, but replacing "guy" with "person" or in this case "engineer" would be more precise (and inclusive).
"Guy" is a perfectly acceptable word for a man or a woman in much of the United States. When the PBS children's show The Electric Company starts with the cry, "Hey, you guys!" it is not excluding the female of the species. It's an idiom.
There are places where "you guys" is unusual. For example, when I moved to a certain part of Texas, it was unfamiliar to the people living there. Perhaps you live in a place where "guys" is uncommon and are unfamiliar with the term. In that case, I suggest you respect other people's cultures and embrace diversity.
Exactly. For example, the phrase "bad guy" could apply to a man or a woman. You can't just change that to "bad girl" and keep the meaning. "Bad girl" means something else entirely.
"The one guy" is literally the opposite of uncountable.
Guys, this doesn't have to be so hard. Just check the authorship before commenting about the author! If you get it wrong and someone points it out, say "oops, I should have checked" and it's over.
I debated calling this out and decided against because I knew you all would do it for me.
This, right here, is precisely why this issue is important. We have gendered pronouns that are so ingrained we try to pretend they're not gendered.
"All" is a _shorter_ and more inclusive substitute for "guys".
If you think this isn't important, just replace every occurrence of "guys" that you see with "gals". If you're cool with that, good on you, but if you think for one second "oh, don't say gals, that might piss someone off" then, well, there's the kicker.
I thought using "guys" in a conversation about the use of the word "guys" would be obvious enough to register as tongue-in-cheek. But y'all went and proved me wrong!
I am curious why the author abbreviated the price of the board in a way that removed no characters from the sentence yet made the number more obscure.
> The what isn't the question dear friend, the why is.
If you followed the link, you should've seen that I wrote "OP is an engineer and I guess she has been trained to always do it", so I thought that would be all you needed to understand the "why", wouldn't it?
But if not, to elaborate: the likely explanation is that engineers are trained to do this in their jobs, and for someone from an engineering background it's possible to make this choice automatically without even realizing it; to them, it's a totally natural number with no readability issue. In the same sense, if you see a software developer express a number as a power of 2 even when it's unnecessary, it may not be the most appropriate choice, but this type of behavior is totally understandable. You don't have to be a rocket scientist to see why.
FWIW: I had success using the Cypress FX3 for USB 3.0 (connecting it to a Zynq FPGA over a parallel interface). It's a weird chip though: you have to design the parallel protocol using a state-machine GUI tool (I did not like their default design). It's an ARM CPU, so you also have to be prepared to program it. There is also a simpler-to-use FTDI chip for USB 3.0, but I have not tried it.
So there is the ARM firmware and the USB host software. For Linux they provide a Cypress wrapper for libusb and some examples, that was straightforward.
For the ARM firmware, there is a short document called "Using the FX3 SDK on Linux Platforms"; it was pretty straightforward. It's ARM GCC, plus a Qt-based program called "cyusb_linux" to interact with the FX3's bootloader (you program it over USB).
You can use Eclipse if you want (they have some kind of support for it), but I just used the Makefile in the example projects.
The ARM firmware uses ThreadX RTOS, but the actual application is pretty small (and the space is very limited as I recall).
The only stupid thing is this tool called GPIF-II Designer, which I believe is Windows-only. Its final output is just a table in a C header file. I think it's a .NET program, so it would presumably be easy for them to support Linux, but they don't.
In engineering notation, it's usually preferred to use the appropriate SI prefix (pico, nano, micro, 1, kilo, mega, giga...) in the unit of measurement; each SI prefix step represents three orders of magnitude (a factor of 1000). When the number exceeds 1000, you move up to the next prefix, so the numerical value never exceeds 1000 [0]. Instead of "1500 Hz", you say "1.5 kHz". OP is an engineer and I guess she has been trained to always do it.
[0] These rules are not always enforced, and there are alternative notations, but the 1-1000 rule is the most common convention; writing "3500 MHz" instead of "3.5 GHz" is just awkward.
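The 1-1000 rule above is mechanical enough to sketch in a few lines of code. This is just an illustration of the convention, not any standard library routine; the function name `si_format` and the prefix table are my own.

```python
import math

# SI prefixes at three-orders-of-magnitude steps, pico through tera.
_PREFIXES = {
    -12: "p", -9: "n", -6: "µ", -3: "m",
    0: "", 3: "k", 6: "M", 9: "G", 12: "T",
}

def si_format(value: float, unit: str) -> str:
    """Format value with the SI prefix that keeps the mantissa in [1, 1000)."""
    exp = 0
    if value != 0:
        # Round the decimal exponent down to the nearest multiple of 3.
        exp = int(math.floor(math.log10(abs(value)) / 3)) * 3
        exp = max(-12, min(12, exp))  # clamp to the table above
    mantissa = value / 10 ** exp
    return f"{mantissa:g} {_PREFIXES[exp]}{unit}"

print(si_format(1500, "Hz"))   # 1.5 kHz
print(si_format(3.5e9, "Hz"))  # 3.5 GHz
```

The `:g` format trims trailing zeros, so "3.0 GHz" comes out as "3 GHz", matching how engineers usually write these values.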
I had a good one with a thesis student I supervised. The security micro she chose was designed to sit between SPI Flash and the main SoC, so a big feature was hardware accelerated SPI slave support. Guess what feature didn't work on the dev board...
A very annoying experience.