
Back in the bad old days of version control (thinking of VSS here), I was overall pretty satisfied with how the check-in/check-out mechanics worked for Word docs and the like. In that case you get the benefit of a sequential workflow, enforced (or at least hinted at) by the tool itself, while also getting rid of the recurrent weakness of email-based document storage. There were plenty of other things to dislike about VSS (pretty much everything else about it), but it wasn't so bad for maintaining documents.


Agree completely. I think Vanguard's UI is trying to send the message "if you're logging in you're doing it wrong".


I agree with you both. I'd be worried if Vanguard suddenly became like Robinhood.

There's a new version of the Vanguard app on the App Store called "Beacon" that includes a very useful visual overhaul (account balances & performance are front and center) but keeps that wonderful Vanguard focus on, well, staying the course and not much else.


The StateMover concept sounds pretty interesting and is almost like the reverse of the integrated-logic-analyzer approach that the major vendors have adopted in their tooling. I assume that in simulation land your debug environment is based on timing simulation, which, unless they've "fixed" the net name mangling, is not exactly pain-free in its own right.


It has to be post-implementation, but not necessarily a gate-level sim.


I don't think that long lines scaled particularly well with increasing numbers of LUTs and rising clock rates. All the black magic voodoo that goes into matching prop delay for resources like that tends to be applied to clock distribution. At least that's how it was a few years ago.


Something that I think is missing in the discussion here with regard to payment methods is: know your market segmentation. If you are targeting b2B (i.e. large business), there are going to be a lot of circumstances where credit card payments are a non-starter.

From personal (F500) experience, I know that I am going to have to move mountains in order for purchasing to accept a commercial arrangement with monthly credit card payments, which means I will usually move on to a competitive solution if one exists. In fact, one of the first questions I usually ask a vendor is "do you sell through (preferred reseller already listed as an approved vendor in our purchasing system)" as I know this is going to make my job of getting the purchase approved 100x easier.

So in conclusion: know your market segmentation, and know your potential customers' expectations for how they will do business with you.


This is true and is the reason I break my own rule in this case - because of large companies that are only willing to pay through an annual PO/invoice process. None of our customers are quite big enough to require the use of a preferred reseller, but I've heard of that arrangement as well.


Am I reading this right that the standard remote ID broadcast is specified as "something in an unlicensed band, everything else about it you figure it out"? Isn't the point of this to be interoperable with other receiver systems for things like BVLOS operation? Seems like a funny place to throw in a shoulder shrug.


Quote:

> With regard to direct broadcast capabilities, the ARC recommended the FAA adopt an industry standard for data transmission, which may need to be created, to ensure unmanned aircraft equipment and public safety receivers are interoperable, as public safety officials may not be able to equip with receivers for all possible direct broadcast technologies.

So the final rule will probably name a specific standard, but it’s TBD for now.


I know that this doesn't directly address the question you're asking, but to give an idea of the order of magnitude of the effect: the Doppler shift in frequency is approximately f_carrier * 2v/c. For anything moving at a "reasonable" speed, 2v/c is going to be very small (on the order of 10^-6 for Mach 1), so you would be talking about very minute differences between the transmitted and received pulse, in terms of either overall pulse length or the number of wavefronts received vs. sent (for what it's worth, my intuition is that the pulse length actually shortens, but either way it's not measurable by the receiver).
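
To put rough numbers on it (assuming a 10 GHz carrier and Mach 1 ≈ 343 m/s purely for illustration; neither figure comes from the parent discussion):

    # Two-way (radar) Doppler shift: delta_f ~= f_carrier * 2*v/c
    C = 3.0e8          # speed of light, m/s
    F_CARRIER = 10e9   # assumed X-band carrier, Hz (illustrative only)
    V = 343.0          # roughly Mach 1 at sea level, m/s

    ratio = 2 * V / C             # ~2.3e-6, i.e. on the order of 10^-6
    delta_f = F_CARRIER * ratio   # ~23 kHz shift on a 10 GHz carrier

    print(f"2v/c          = {ratio:.2e}")
    print(f"Doppler shift = {delta_f / 1e3:.1f} kHz")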


As an aside: it looks like the timing diagrams in this article were created with a tool called WaveDrom. I've used this tool in the past and been impressed with what it's able to do in terms of creating nice timing diagrams for digital design documentation, a critical part of communicating how these designs (and interfaces!) are supposed to work.
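
For anyone curious, WaveDrom's input is a small JSON dialect (WaveJSON). A minimal sketch, with made-up signal names for a request/acknowledge handshake, looks something like this (paste the printed JSON into WaveDrom's online editor, or run it through wavedrom-cli, to get the rendered timing diagram):

    # Minimal WaveJSON sketch (WaveDrom's input format); signal names are made up.
    import json

    wavejson = {
        "signal": [
            {"name": "clk",  "wave": "p......"},   # free-running clock
            {"name": "req",  "wave": "0.1..0."},   # request held for three cycles
            {"name": "data", "wave": "x.345x.", "data": ["A", "B", "C"]},
            {"name": "ack",  "wave": "0..1.0."},   # ack follows one cycle later
        ]
    }

    print(json.dumps(wavejson, indent=2))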


I have been through a similar process myself a couple years back, and just wanted to say that this is a really good effort in trying to tame a beast that doesn't really want to be tamed. Lots of experimenting around with write_project_tcl and figuring out what settings are needed to coerce the tool into making version control/CI friendly decisions. Nice work!


I think the heart of what the article is getting at is represented well by the following quote:

"To let GPUs blossom into the data-parallel accelerators they are today, people had to reframe the concept of what a GPU takes as input. We used to think of a GPU taking in an exotic, intensely domain specific description of a visual effect. We unlocked their true potential by realizing that GPUs execute programs."

Up until the late 2000s, there was a lot of wandering-in-the-wilderness going on with respect to multicore processing, especially for data-intensive applications like signal processing. What really made the GPU solution accelerate (no pun intended!) was the recognition and then real-world application (CUDA & OpenCL) of a programming paradigm that would best utilize the inherent capabilities of the architecture.

I have no idea if those languages have gotten any better in the last few years, but anything past a matrix-multiply unroll was some real "here be dragons" stuff. Still: you could take those kernels and then add sufficient abstraction on top of them until they were actually usable by mere humans (in a BLAS flavor or even higher). Even better if you can fold in the memory management abstraction as well.
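
As a hand-wavy illustration of where that layering ended up, here's what the "BLAS flavor or higher" end looks like with a library like CuPy (my choice of example, assuming a CUDA-capable GPU): the user writes array math, and the kernel launches and host/device copies are handled underneath.

    # Sketch of GPU compute behind a BLAS-like abstraction (CuPy chosen as an
    # example): no hand-written kernel, no explicit launch configuration.
    import numpy as np
    import cupy as cp

    a = cp.asarray(np.random.rand(1024, 1024).astype(np.float32))  # host -> device copy
    b = cp.asarray(np.random.rand(1024, 1024).astype(np.float32))

    c = a @ b                  # dispatched to a tuned GEMM kernel on the GPU
    result = cp.asnumpy(c)     # device -> host copy only when the result is needed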

Point being: we're still not there for FPGA computation, though there was some hope at one time that OpenCL would lead us down a path to decent heterogeneous computing. Until there are some real breakthroughs in this area, though, the computation patterns that map well using these techniques are going to be the ones we're already targeting at either CPUs or GPUs.

