
Is there really a need? Most serial protocols can already be considered 'open'.

For low bandwidth there is RS-232/UART/JTAG. Then there is I²C/SPI for connecting multiple devices. I²C ranges from 100 kbit/s up to 3.4 Mbit/s in high-speed mode, and SPI commonly runs at 10-30 Mbit/s.

For higher bandwidth stuff there is whatever you want over USB, Bluetooth, 802.11-whatever WiFi, etc. Afaik USB can be considered 'open': you might need to 'register' for certification/a vendor ID/the logo, but from what I understand, if you don't want those you don't need to bother. There is also http://pid.codes/ and other organisations giving away free PIDs.

There is the Wishbone bus. https://en.wikipedia.org/wiki/Wishbone_(computer_bus)

But that's got quite a few pins. It allows for on-chip networking as well as external connections. There is also RapidIO (which has heaps of pins).

There is a RISC-V debugging standard, but I think that's protocol agnostic.


You can put the other people in the VR experience too.

At I/O 2016, Google showed an experiment with schools using Cardboard where they traced where each student in a classroom was looking. https://www.youtube.com/watch?v=UuceLtGjDWY

With full-body tracking and depth cameras you could put 3D avatar bodies in.

Obviously the face would be obscured by the headset, but it's dark in a cinema and people are normally looking at the screen when watching TV at home. So it might actually end up being more social.

Even just simple 3D positioning of the headset and input devices, with a simple visual indicator, is enough to get a sense of someone being present. Add a microphone and 3D spatial positioning of voices.

Of course the current generation of technology would make that hard and rare. Not too many people you know will buy a $1000 headset. Everyone would need to bring that and a fairly powerful computer to the one place.

Although physically being present in the same room isn't required.


> They can allow users to achieve the same goals by distributing the device un-flashed

There is the possibility of the device being intercepted before it reaches you. Or before you have gotten around to locking it down. Or when you plug it into your (compromised) system to lock it down.

Since all communication is done over the USB port, the problem is that the device can be flashed with a backdoored firmware that appears to be normal/unflashed. One that still appears flashable (by basically running a virtual machine/emulator that executes whatever image you flash), appears to get locked down when you go through any lockdown process (since you just end up locking down the VM), but still has the backdoor in place.

Firmware aside, people can modify the hardware too, unless you crack open the device and inspect the internals (which many devices are designed to prevent). And even then a really sophisticated attacker could replace the chips with identical-looking ones; if the device uses off-the-shelf parts it wouldn't be that hard. They could also add an extra chip in front of the real one that intercepts the communication, or maybe compromise the 'insecure' USB chip (if it's programmable).

With locked-down hardware the manufacturer can bake private keys into the chips and have the official software check the hardware by asking it to digitally sign a challenge with that key. But if the attacker has added their own chip between the USB port and the legit chip, they can just pass the requests through to the official chip.
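
To make that concrete, here's a toy sketch of the challenge/response check. Everything in it is made up for illustration: toy_sign is just a keyed hash standing in for a real signature, and a real scheme would use asymmetric keys so the host only needs the public half.

    #include <cstdint>
    #include <functional>
    #include <iostream>
    #include <string>

    // Hypothetical secret baked into the chip at manufacture time.
    static const std::string kDeviceKey = "baked-in-device-key";

    // Stand-in for a real signature: a keyed hash over the challenge. NOT crypto.
    std::uint64_t toy_sign(const std::string& key, std::uint64_t challenge) {
        return std::hash<std::string>{}(key + std::to_string(challenge));
    }

    // Host side: check the response against what a genuine chip would produce.
    bool verify_device(std::uint64_t challenge, std::uint64_t response) {
        return response == toy_sign(kDeviceKey, challenge);
    }

    int main() {
        std::uint64_t challenge = 0x1234abcd;                       // would be random per check
        std::uint64_t response  = toy_sign(kDeviceKey, challenge);  // the device answers
        std::cout << (verify_device(challenge, response) ? "genuine\n" : "reject\n");
    }

The point of the paragraph above is that none of this helps against an interposed chip that simply forwards the challenge to the legit chip and relays the answer back.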

A TPM will do something like keep a running hash of all the instructions that are sent to the hardware and use the resulting hash as part of the digital signature verification, but if you mirror the requests that doesn't help.

The next stage is to use the keys on the chip to encrypt all communication with the 'secure' chip, so any 'pirate' chip in the middle won't get anything useful.

Users could be allowed to 'bake' their own keys in, but that leaves us with the intercepted-hardware problem: the attacker gets the hardware and installs fake firmware that appears to accept your custom key and performs the encryption.

Personally I think worrying about security to that level is overkill even if you're dealing with quite a bit of money. It would have to be quite an organised attack: they would have to gain physical access to the device, compromise it, return it unnoticed and then gain physical access again later, requiring both physical and digital security skills.

That's much more work than just stealing it or applying rubber-hose cryptanalysis. Attackers can also just compromise the system being used to access whatever the device protects.


Just spent half an hour trying to figure out why 256 color mode stopped working in Vim.

I'm blaming you =P

In all seriousness, actually trying to get 256-colour mode auto-detected and working reliably across different xterms, tmux/screen, ssh and so on, without some hack that forces 256 colours on even when they're not supported, seems impossible :/

I added the following to my zshrc, which seems to have helped. Why don't these terminals report 256 color mode by default?

    if [ "$COLORTERM" == "yes" -a "$TERM" == "xterm" ]; then
       export TERM=xterm-256color
    fi


The escape sequences to set 256 colors were previously hardcoded into tmux, instead of detected from terminfo. That works fine, until you use a program that uses different escape sequences.

I suspect terminals don't advertise 256 colours by default for backward-compatibility reasons. tmux itself didn't assume it either; you needed something in your config file or a command-line option. Now, though, I believe it can detect it based on your $TERM.
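
For what it's worth, "detecting it from terminfo" boils down to querying the "colors" capability for whatever $TERM is set to. A rough sketch using ncurses (link with -lncurses; header names can vary between systems):

    #include <cstdio>
    #include <curses.h>
    #include <term.h>

    int main() {
        int err = 0;
        // Load the terminfo entry for whatever $TERM is set to.
        if (setupterm(nullptr, 1, &err) == ERR) {
            std::fprintf(stderr, "no usable terminfo entry\n");
            return 1;
        }
        char cap[] = "colors";          // numeric capability: colour count
        int colours = tigetnum(cap);    // negative if the capability is absent
        std::printf("terminfo reports %d colours for $TERM\n", colours);
        return 0;
    }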


That is just how chip manufacturing works.

In theory you make one model of chip. You manufacture them, then you test them: what clock speeds are they stable at, are any of the cores faulty? Many (probably most) will have defects. Some will be duds and must be thrown out; in the rest, the bits with the defects are switched off and the chip is resold as lower-end hardware.

The common, less powerful parts sell for less and the rarer, fully working ones sell for more.

The high end stuff sells for more than the cost of manufacturing and the low end stuff goes for less. They offset each other.

So when people force a model of GPU to perform above its rating they are basically ignoring the QC and are going to be dealing with undefined behaviour. Did the card get binned lower just because it produced more heat at the higher clocks (in which case a high-end heatsink might counteract the issue), or because there is a core that returns faulty data?

That's only the theory though.

In reality it might be artificially screwed with to increase profits. Maybe there are more fully working chips than the desired price tiers can absorb. Maybe the cheaper parts are selling better, so they turn off bits of chips even though they passed QC. There are only two companies selling gaming GPUs, so there could be an artificial duopoly: Nvidia only has to be price-competitive with AMD.


That's binning, sure.

But I thought pro chips actually tended to be clocked a bit lower than gaming chips. There's no 'ignoring QC' there if you trick the board, quite the opposite.


> But I thought pro chips actually tended to be clocked a bit lower than gaming chips. There's no 'ignoring QC' there if you trick the board, quite the opposite.

Yes, the pro GPUs have different clocks and ECC memory, and there may be other differences on the board as well.


The buttons are on the fucking ceiling and your 'stylus' is a broom.


I'm not an expert, but in the gamedev domain control over memory is fairly vital. It seems like it would be for lots of the other things C++ is used for too. In C++ I can allocate all my memory upfront in a pool, since I know the exact size of my level, the number of objects and so on, and then use custom allocators/static storage for just about everything. When I make an object at runtime I can just grab preallocated space from the pool. By reinterpreting the pool's raw memory as pointers I can even avoid keeping a separate index of free blocks, since the unused memory itself can point to the next block of unused memory (although that's probably a minor optimisation).
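
A minimal sketch of that free-list pool idea, purely illustrative (a real engine allocator would also handle multiple object sizes, thread safety and so on):

    #include <cstddef>
    #include <new>
    #include <utility>

    // Fixed-size object pool: free slots are threaded into a singly linked
    // list stored inside the unused memory itself, so no separate index of
    // free blocks is needed.
    template <typename T, std::size_t N>
    class Pool {
        union Slot {
            Slot* next;                                  // link while the slot is free
            alignas(T) unsigned char storage[sizeof(T)]; // object while it is live
        };
        Slot  slots_[N];
        Slot* free_head_ = nullptr;

    public:
        Pool() {
            for (std::size_t i = 0; i < N; ++i) {        // build the free list up front
                slots_[i].next = free_head_;
                free_head_ = &slots_[i];
            }
        }

        template <typename... Args>
        T* create(Args&&... args) {
            if (!free_head_) return nullptr;             // pool exhausted, no heap fallback
            Slot* s = free_head_;
            free_head_ = s->next;
            return new (s->storage) T(std::forward<Args>(args)...);
        }

        void destroy(T* obj) {
            obj->~T();
            Slot* s = reinterpret_cast<Slot*>(obj);      // the object lived at the slot's address
            s->next = free_head_;
            free_head_ = s;
        }
    };

Usage would be something like Pool<Enemy, 1024> enemies; with create()/destroy() standing in for new/delete for that type.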

Go drops that control totally in favour of its built-in garbage collector, which the Go devs think they can just get right. That seems unlikely (the current implementation is apparently a fairly poor stop-the-world one).

Another issue that strikes me is library building. Afaik Go doesn't produce object files that can be linked against by non-Go code. C does; it has an ABI.

This means I can write one library in C/C++ (and apparently Rust) and then have wrappers for just about every other programming language in existence. (Although C++ can make this painful with things like exceptions http://250bpm.com/blog:4 ).

It might be that Go's interfaces make it really useful for writing Go libraries, but some libraries need to be as language-agnostic as possible.

Many of Go's initial advantages over C++ are being chipped away too. I love reflection; it would be very useful for games, serialisation and so on. C++ doesn't have it, but there is a standards working group looking at it now, so we could see it in either C++14 in a year or C++17 (probably with implementations before then). C++11 got threads and so on, and there are working groups doing transactional memory, more threading features, networking and so on. So we could see something like goroutines and channels while still having access to low-level things like raw memory barriers. C++ tooling is also set to explode with what the Clang people are up to.

Go seems great, but it does seem focused on the 'business' kind of domain. Maybe future versions could address some of these issues, like the GC (either fixing it so it meets the performance requirements, or allowing custom memory options; C# has the unsafe keyword, for example).

EDIT: I note that Go provides an "unsafe" package that could allow for some things like a custom GC, but apparently that would be hard to implement.


> in the gamedev domain, control over memory is fairly vital.

C# is taking over the game world. Most iOS games, for instance, are written in C# these days.

I have worked for a company who wrote a state-of-the-art 3D PC gaming engine in C#.

Being garbage collected doesn't remove the ability to manage memory. It just removes the need to call malloc directly.


The fact that Unity uses C# as one of its scripting languages doesn't mean that C# is taking over the game world (I'll assume that's where the C# reference is from, since Unity is incredibly popular with iOS devs). The vast majority of games are still C++, often scripted in Lua, Python, JavaScript and C#.


I wouldn't describe Unity's relationship with C# as 'scripting'.

I realise that there are no doubt a number of custom C++ engines out there with embedded scripting languages, but the push for multi-platform mobile games has caused a huge shift towards Unity.

Maybe I'm moving in the wrong circles here, but anecdotally all the people I know making iOS & Android games are developing in Unity.


From the horse's mouth: http://answers.unity3d.com/questions/9675/is-unity-engine-wr...

And I don't doubt that a lot of devs use Unity; it's most likely the most popular game engine. But it's still a C++ project, despite the languages used to script game events...


Xamarin


> "C# is taking over the game world. Most iOS games are, for instance, written in C# these days."

When the engine underneath is done in C# as well, I will accept that argument. So far, it would be the same as saying that most console games are done in UnrealScript.


Most iOS games aren't C# nowadays. Unity likes to trot out the 70% number for mobile games, but it's mostly BS. That isn't counting games that actually get published; based on their registration numbers, a lot of them never make it into the store. I'm willing to bet that most iOS games are still Obj-C.


I'd take that bet.

I know many iOS game developers and none of them use Objective-C any more.

(in fact I only ever knew one)


> state-of-the-art 3D PC gaming engine in C#.

How does it compare to Cry Engine, Unreal Engine, Frostbite, ...?

Because these are the state of the art gaming engines these days.


It compared surprisingly favourably. The C# engine was comparable (and regularly compared) to the Unreal Engine being used elsewhere in the company.

It was C# from the ground up. Scripting was provided via other .NET languages (Python in the case of the UI team).


Those are state-of-the-art graphics engines. It's quite hard to make, e.g., an RTS using any of those; they are too FPS-specific.


Command & Conquer is being built on Frostbite. [1]

[1] http://en.wikipedia.org/wiki/Command_%26_Conquer_(2013_video...


Are you referring to Unity? If so, while the game code is run via Mono, it is C++ internally.


Which is just some machine code eventually. It's just electric signals people!


Are you SURE about that?

Well, for shitty games maybe it's true; there's shovelware aplenty on iOS anyway.

But for the part of the market that actually has a profit (not just revenue) C# is not that dominant.

In terms of profit, the most profitable games probably still rely on C++.

Also, most engines (including Unity) are made in C or C++. They sort of abstract the memory management away from the game author, but they still do a lot of manual fiddling themselves. XNA, for example (which lets you write C# games directly against the hardware without a C++ engine), is notorious for bigger games having memory issues.

Also 2D "retina" games use so much absurd amounts of memory, that they are almost impossible to do purely in high-level languages, you need some C or C++ somewhere to handle them, at least texture loading, otherwise you end with loading times so big that people want to suicide.


Loading times don't have anything to do with the language.

Loading times have been an issue since game consoles moved away from cartridges.


I'm interested in knowing more about that engine. Can you say who/what it is?


Realtime Worlds


Ah, yes. Out of Boulder?

Which game's engine was in C#? Project MyWorld?


Yes. But out of Dundee, Scotland.


Memory management is extremely important for high-performance code, and C/C++ give you complete control over it, though obviously with extra work required. But when you need the speed, it's so worth it it's not funny.

On top of this, things like bit-packing and tagged pointers can save memory, letting you fit more data into cache lines, giving even more speedups.
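
As a rough illustration of the tagged-pointer trick (the names are made up, and it leans on alignment assumptions rather than anything the standard guarantees):

    #include <cassert>
    #include <cstdint>

    // Packs a small tag into the low bits of a pointer, which are zero for
    // suitably aligned objects.
    template <typename T>
    class TaggedPtr {
        static constexpr std::uintptr_t kTagMask = 0x7;  // 3 spare bits for 8-byte alignment
        std::uintptr_t bits_ = 0;

    public:
        TaggedPtr(T* ptr, unsigned tag) {
            auto raw = reinterpret_cast<std::uintptr_t>(ptr);
            assert((raw & kTagMask) == 0);                // pointer must be aligned
            assert(tag <= kTagMask);                      // tag must fit in the spare bits
            bits_ = raw | tag;
        }
        T*       get() const { return reinterpret_cast<T*>(bits_ & ~kTagMask); }
        unsigned tag() const { return static_cast<unsigned>(bits_ & kTagMask); }
    };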


That might be fine for a carton of eggs or milk, but most things nowadays have heaps of ingredients in them.

The problem with this is you have to know enough to be able to make that determination.

How long does an ingredient last when in a specific type of container at a specific temperature (e.g. shelf vs refrigerated vs frozen), when prepared in a specific way (e.g. eaten raw, microwaved for a few minutes, cooked at a high temperature for an hour)? Now figure that out for every ingredient in the container. For all your food.

Some of them can be Food Additive E1424.

How does a specific preservative affect the lifetime of the other ingredients?

Even if I had a massive lookup table it would be a pain.

EDIT: Here in Australia, "use by" indicates health and safety: "use-by date, in relation to a package of food, means the date which signifies the end of the estimated period if stored in accordance with any stated storage conditions, after which the intact package of food should not be consumed because of health or safety reasons." http://www.comlaw.gov.au/Details/F2012C00762

We also have "best before" for quality, and baked-on/baked-for dates for bread.


> applications tend to "borrow" the OS's, and no OS I know of uses a design taken directly from NIST.

Windows has support for it (although it's up to applications to choose to use it or not).

If it's part of a standard then it might be required by some organisations, for example other government/military agencies and people dealing with them. They could also look to pass laws requiring corporations to protect their customers' private data using the standard. And just have people recommend the standard to everyone; after all, it was designed by NIST/NSA.

It's also likely to end up in software that wants to support those use cases, which could see it filter out to other uses. And possibly hardware support.

If the potential for a backdoor hadn't been discovered people wouldn't have objected/noticed.

> Dual EC is very slow. Nobody would willingly use it.

Which leaves the question of why it would end up in the NIST standard if it's not very good.

Of course, maybe they were just testing the waters.


It isn't going to be able to record without a battery...


If you take out the battery why are you using this pouch?

Again, this pouch is useless.


It prevents straightforward tracking and forces anyone wishing to track you via your cellphone to throw a lot more resources at you than before. Inside the case the phone doesn't get any simple tracking info; the only things it really gets are physical orientation (gyroscope, accelerometer and compass) and audio.

In order to turn the first into tracking, the phone has to do the inertial tracking itself (possible, but computationally expensive, and it diverges pretty quickly unless there have been some major improvements).

The second requires some luck involving an identifiable sound marker somewhere in the recording, and then the processing power to cross-reference everything.

It's a mistake to confuse imperfect with useless.


It's useless.

Just turn the phone off. Or really just the GPS off since that's the only thing this thing blocks.

