
If you're earning $300k/year in California, you're paying ~$22k in state income tax, so you should be itemizing on your federal return with or without the mortgage, and subtracting the standard deduction from the benefit doesn't seem right.
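For concreteness, here's the arithmetic (a rough sketch with assumed circa-2014 numbers; the 33% marginal rate and the standard deduction figure are illustrative, not exact):

    # Marginal federal benefit of mortgage interest, assuming pre-2018
    # rules (SALT fully deductible, no cap) and made-up round numbers.
    MARGINAL_RATE = 0.33          # assumed bracket at ~$300k
    STANDARD_DEDUCTION = 12400    # approx. 2014, married filing jointly

    def mortgage_benefit(salt, interest):
        """Federal tax saved by the mortgage interest deduction alone."""
        without = max(salt, STANDARD_DEDUCTION)
        with_it = max(salt + interest, STANDARD_DEDUCTION)
        return MARGINAL_RATE * (with_it - without)

    # With $22k of state tax you already itemize, so every dollar of
    # mortgage interest is deductible at the margin:
    print(mortgage_benefit(salt=22000, interest=20000))  # ~6600.0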


Looks like you're right. I thought state and local taxes were deductible without itemizing but I was wrong. I've been itemizing for years so I guess I got that mixed up. (And I have no state income tax so it hasn't been a factor for years either.)

So that puts you back at the ~$1,000 mark for savings, which probably gets you close to $1.5MM.



Cheapest possible toaster oven + cheap multimeter thermocouple for manual temperature control + {paste flux, solder paste+stencil} works better than you'd think for soldering BGAs. Even honest-to-goodness reflow ovens can be had on eBay for $300-$400 (search "T962"), which would give you another one or two nines of reliability by circulating the air and having more precise temperature control.
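If you go the manual-control route, the profile you're chasing looks roughly like this (a sketch, not gospel: temperatures and times are ballpark for leaded paste and should come from your paste's datasheet, and read_temp/set_heater are hypothetical stand-ins for whatever your thermocouple and oven expose):

    import time

    # Rough Sn63/Pb37 profile; pull real numbers from the paste datasheet.
    PROFILE = [
        ("preheat",  150,  90),   # target degC, seconds in phase
        ("soak",     180,  90),
        ("reflow",   220,  45),   # comfortably above the 183 degC liquidus
        ("cooldown",  50, 300),
    ]

    def run_profile(read_temp, set_heater):
        """Crude bang-bang control: heater on below target, off above."""
        for phase, target, duration in PROFILE:
            end = time.time() + duration
            while time.time() < end:
                t = read_temp()
                set_heater(phase != "cooldown" and t < target)
                time.sleep(1)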

Schmartboard also makes 1.0mm pitch BGA adapter boards, but a lot of the high-end stuff these days is moving to 0.8/0.5/0.4mm pitch. Of course, for those kinds of things, the extra parasitics of an adapter board are probably a no-no in any case.


For prototyping you can also solder BGAs with any hot air rework station. Even if you don't already have one, an Atten 858D only costs around $60. Here's an example (not my video): https://www.youtube.com/watch?v=L8EWqWj2srg


The only real similarity to PCIe is that they're both high-speed differential signaling standards. This article has a good overview of what is known: http://www.anandtech.com/show/7900/nvidia-updates-gpu-roadma... .

For an example of what the signaling might look like, see https://research.nvidia.com/publication/054-pjb-20-gbs-groun...


Yeah, I think that comment conflates three things: 1) The regulatory burden of going public (Sarbanes-Oxley compliance et al.) pushes companies to stay private, which keeps the average Jane from investing in those companies via the public markets.

2) For those companies that are privately held (whether that's because of the regulations in #1 or for more fundamental reasons), separate regulations (accredited investor rules et al.) prevent the average Joe from investing privately.

3) On top of #1 and #2, market forces and norms give more investment opportunities (both public and private) to large, established players than individual laymen. For example:

* Companies may prefer to take $1M each from 5 large investors rather than $1k each from 5,000 people, both to reduce logistical burden and because those large investors statistically have other unique things to offer, like expertise, advice, and connections. This is unfortunate for the small-time-but-sophisticated investor, but seems quite rational.

* IPO shares being offered only to friends and large clients of the underwriter. The justification is reduced logistical overhead, as in the previous case, but the true motivation is widely suspected to be cronyism.


Propagation velocity in copper is within a factor of 2 or so of c; you would have to length-match even at vp=c, given a short enough clock period.
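Back-of-the-envelope (illustrative numbers: ~0.5c is a common rule of thumb for FR-4, and the 5 GHz clock and 10% skew budget are just examples):

    C = 3e8  # m/s

    def length_per_period(clock_hz, vp_fraction):
        """Distance a signal travels in one clock period."""
        return vp_fraction * C / clock_hz

    # Even at vp = c, a 5 GHz clock leaves 60 mm per period, so a 10%
    # skew budget means matching trace lengths to ~6 mm; at 0.5c, ~3 mm.
    for vp in (1.0, 0.5):
        mm = length_per_period(5e9, vp) * 1e3
        print(f"vp = {vp}c: {mm:.0f} mm/period, {mm / 10:.1f} mm for 10% skew")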


Isn't that for a 1000x ROI? 1000% = 10x, so even selling 1% on Kickstarter at a $200M valuation would yield something like that.


You're right, but now I'm wondering where the article got a $200M valuation from...


I think he means you get an $80k refresher every year that vests over 4 years ($20k/year). After 4 years, you have a full pipeline, and 4 tranches ($80k/year total) vest every year.
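A toy model of that pipeline (assumes a fresh $80k grant every year, each vesting evenly over 4 years starting the year after it's issued):

    GRANT, YEARS = 80_000, 4

    def vesting_in_year(year):
        """Sum the tranches from every prior grant still vesting."""
        # A grant issued at the start of year g vests GRANT/YEARS
        # per year during years g+1 through g+YEARS.
        return sum(GRANT // YEARS for g in range(year) if year - g <= YEARS)

    for y in range(1, 7):
        print(f"year {y}: ${vesting_in_year(y):,} vests")
    # year 1: $20,000 ... year 4 and onward: $80,000/year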


Aliasing conflict!

This is about doing histogramming on an FPGA, not a tool to group FPGA dice into speed grades :)

Would be interesting to see how an FPGA stacks up against a GPU on this problem. GPUs are very fast at parallel histogramming, but hardwiring the number-to-bin-index computation for a particular problem instance might buy you quite a bit of energy efficiency, or even possibly a bit of performance if the computation involves a divide. If the bins are of non-uniform size and the indexing computation involves a binary search on a lookup table, it seems like the much higher clock speeds on a GPU would win out.
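For concreteness, the two indexing flavors in question (a numpy sketch with made-up bin edges):

    import numpy as np

    data = np.random.uniform(0, 100, 1_000_000).astype(np.float32)
    NBINS, LO, HI = 64, 0.0, 100.0

    # Uniform bins: index is a subtract and a multiply (or the dreaded divide).
    idx_uniform = ((data - LO) * (NBINS / (HI - LO))).astype(np.int32)

    # Non-uniform bins: index is a binary search over an edge table;
    # this is the per-element work a GPU does in software and an FPGA
    # could hardwire for a fixed problem instance.
    edges = np.sort(np.random.uniform(LO, HI, NBINS - 1))
    idx_nonuniform = np.searchsorted(edges, data)

    hist = np.bincount(idx_nonuniform, minlength=NBINS)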


GPU clocks aren't really that high. FPGAs win when it comes to combinatorial logic, and if statements (branch divergence) kill GPUs. My money is on the FPGA by a wide margin.


Most FPGAs can't handle very high clocks either (~400MHz max; 100MHz is more realistic for the Xilinx Artix chips). A huge fanout with a ton of comparators isn't going to run that fast.

I imagine binning on an ISA with conditional execution wouldn't pay much of a jump penalty. Even with jumps, as long as they're predicted correctly it's fine.

The big limitation here is probably the USB 2.0 speed - 40MB/s is not a lot compared to a CPU's bandwidth to main memory.


GPUs just recently hit 800+ MHz; they were at 600MHz for a long while. As for the USB bottleneck, I'm speaking generically and assuming an equal footing (memory bandwidth, bus, etc.); the specific device here (binning on an FPGA as a co-processor for a computer system) doesn't really make sense as a speed play. I saw this more as a proof of concept for augmenting APL with an FPGA. Of course, APL itself should just get compiled down to the GPU.

It does make for a great benchmark. Eventually FPGA fabric and GPU compute will merge into an oatmeal-with-raisins consistency. I'd argue that with embedded multipliers, FPGAs are already there. Altera has an OpenCL SDK.


The lack of palletization on Target's trucks seems inefficient at the store level, but may be justifiable at the company level, since Target owns its own distribution centers:

* Trailers are 9-10 feet high, so a fully palletized truck would (a) have very tall, unwieldy pallets that you'd have to carefully order so boxes on the bottom don't get crushed, (b) need racks or something similar to hold multiple tiers of normal-height pallets, or (c) be half-empty

* Even with the current process, getting boxes off the truck is not even close to the bottleneck. When I worked at Target, 2 people threw boxes onto rollers, 5-6 people sorted the boxes onto pallets organized by area of the store as they went by, 2 people rolled those pallets onto the sales floor and threw each box in front of the appropriate aisle, and ~20 people opened the boxes and stocked the shelves. The 2 people unloading the truck could always keep well ahead of the 20 on the sales floor.

* An alternate strategy might be to sort the boxes into pallets by store area at the distribution center rather than at the store. You might get some economies of scale out of doing this process for 100 stores at 1 distribution center, but it may still not be worth adding all that floor space, latency, and congestion at what I imagine is already a very congested point in the supply chain. Better to distribute that part to the endpoints, since you already have 100k square feet per store to work with that's just sitting there while the store's closed.

