yacy.net


But it's still the wrong explanation. Think about it:

(i) Miners, not average users, hold the voting power.

(ii) When the 1 MB block size is congested (like it is now), miners make significantly more money on transaction fees.

(iii) Increasing the block size increases supply and reduces demand for priority processing within a block, hence reducing transaction fees.

Which explanation truly matches your view of reality here? Are people (miners) motivated by "a shared vision" or by a dollar in the bank account? Look at it closer (quote from the parent comment):

>

> "larger blocks means more resources required to transmit, validate and store blocks and if you cannot validate blocks, then you are trusting transaction validators (miners)"

>

Can you see the problem with the explanation now? It's written from the view of an end user (of bitcoin), not a miner. But it's the miners who vote on the fork; hence, the quoted text above is mostly irrelevant to understanding the "why" of the fork battle.


It's a transient equilibrium during growth, not a zero-sum game. We could see lower fees and miners earning more:

Hypothetically, if the block size went up tomorrow by 8x, then fees might drop by 4x and transaction volume go up by 8x; miners would then be making 2x in fees per block, for maybe 2% extra cost.
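
To make the arithmetic explicit, here's a tiny sketch in Python; the 8x/4x/8x figures are just the hypothetical above, not measured values:

  fee_per_tx = 1.0        # relative units, before the change
  txs_per_block = 1.0     # relative units, before the change

  new_fee = fee_per_tx / 4          # fees drop ~4x
  new_volume = txs_per_block * 8    # volume grows ~8x with 8x block size

  revenue_multiplier = (new_fee * new_volume) / (fee_per_tx * txs_per_block)
  print(revenue_multiplier)         # 2.0, i.e. roughly 2x fee income per block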

That has other flow-on effects: if fees are lower, bitcoin can accommodate people using it as a day-to-day currency, so its utility and user base go up, the valuation in USD goes up, the value of the bitcoins miners earn/save goes up, and so on.

I think there are miners who understand that there is a sweet spot of fees being low enough to facilitate higher daily usage and growth. Also, the majority of their current income is not in fees, but rather in 'coinbase', the reward for mining the block [ which is how new bitcoin money supply is injected into the system ]. They are highly vested in the valuation of USD/BTC, so they will do well if the user base of bitcoin grows.


None of the node users, who define and police consensus in bitcoin, are interested in reducing the security of the network to give miners more control. This is the fourth failed coup attempt against the bitcoin network, and the only thing it has demonstrated is the resilience of the network against large corporate centralization attempts. SegWit might very well be the last architectural change to the protocol.


But miners aren't stupid, and they know that if they are perceived as being a huge bottleneck and impediment to the growth and viability of Bitcoin, everyone else will just follow a different chain.

Currencies only have value if people are willing to trade you actual goods/services for them, and if the larger community decides the value lies with another fork, miners will lose all their power.


> (ii) When the 1 MB block size is congested (like it is now), miners make significantly more money on transaction fees.

One thing I've not understood is, if the miners want more money from fees, why don't they just say "we're not accepting into blocks any transactions with fees lower than X"?

If just one large miner with 10% of the hashrate does this, that instantly puts any transaction with a lower fee than that at a 10% chance of being delayed, and if the fee is still reasonable people will pay it just in case.

And it's not like there's a market of miners. A user can't refuse to deal with a miner or choose who gets to confirm their transaction.


This is how priority transactions work. Increasing the block size would allow more transactions per second. Fewer people would then pay the premium for fast transactions, as a normal transaction would be fast enough.


Can you elaborate further?

When Mike Hearn quit bitcoin (Jan 2016), he wrote that the Chinese miners were worried about bitcoin getting too popular because of their limited access to the Internet, and said they were actively trying to suppress its popularity. But obviously that isn't true now?

https://blog.plan99.net/the-resolution-of-the-bitcoin-experi...


Check out replies by zkSNARK and stale2002 (just down-page at time of writing). Screen cap here for reference: http://imgur.com/a/xBJUW


> if Google implemented my suggestion they could never be blacklisted

Google seems to have built their brand intentionally to be the opposite of what you're asking for though; and absolutely they could be blacklisted with a simple "GMAIL ADDRESSES NO LONGER ACCEPTED HERE".

>which already works but is security through obscurity.

I'm not sure which one you are saying is security through obscurity here: blah+real.id@gmail.com or the high-entropy mkKAjgsdf788hf87hf@gmail.com. Both are obscure, but it's a stretch of the imagination to start labelling this a security issue.


> > if Google implemented my suggestion they could never be blacklisted

> Google seems to have built their brand intentionally to be the opposite of what you're asking for though; and absolutely they could be blacklisted with a simple "GMAIL ADDRESSES NO LONGER ACCEPTED HERE".

I think "you can't block GMail" here is meant in the sense that "you can't block the Google crawler". It's certainly technically trivial to do so, but the opportunity cost from lost users will be, for most businesses, unacceptably high.


>I think "you can't block GMail" here is meant in the sense that "you can't block the Google crawler". It's certainly technically trivial to do so, but the opportunity cost from lost users will be, for most businesses, unacceptably high.

Excellent interpretation. Gmail = Google crawler. I've made a note of this now.

What needs to happen next is a deep discussion between yourself and logicallee, in the context of the Google crawler, as well as how to bring Gmail further out of the dark ages with high entropy and no security through obscurity.


It's not blah+real.id@gmail.com; it's real.id+blah@gmail.com, which currently gets delivered to real.id@gmail.com with a tag of "blah". However, this tag can be removed by spammers, hiding where they got my email address.
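
For illustration, a minimal Python sketch (hypothetical helper name) of how trivially that tag can be stripped to recover the real inbox:

  def strip_plus_tag(addr):
      # "real.id+blah@gmail.com" -> "real.id@gmail.com"
      local, _, domain = addr.partition("@")
      return local.split("+", 1)[0] + "@" + domain

  print(strip_plus_tag("real.id+blah@gmail.com"))  # real.id@gmail.com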

mkKAjgsdf788hf87hf is not the only possible high-entropy format; it could be of the type that gfycat uses, such as "uncommongrimyladybug". That is quite hard to blacklist.

Nobody is ever going to stop accepting gmail addresses, that suggestion is pretty ridiculous. Especially since I suggest that these addresses should be delivered straight to your real inbox (unless they start getting spammed). There's no reason people should stop accepting them.


>The webpage also says, "Redundancy is achieved through the use of erasure codes so that your data can always be recovered even in the event of large network outages."

>Does this means files can't be lost, as long as you keep paying your bill?

The white-paper mentions:

>"ORC will soon implement client-side Reed-Solomon erasure coding (Plank (1996)). Erasure coding algorithms break a file into k shards, and programmatically create m parity shards, giving a total of k + m = n shards".

So at first glance it seems like the Usenet parchive/PAR2 redundancy methodology, but storing the parity shards locally (client-side). Well, that's my interpretation of this section of the white-paper anyway.

So in short: it certainly doesn't mean that the files "can't be lost", but it does mean the owner of the files can rebuild them from client-side parity shards in the case that network outages affect file availability.
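
To illustrate the k + m = n idea, here is a toy single-parity-shard sketch in Python; it is not the actual Reed-Solomon coding the white-paper describes, just the simplest case (m = 1):

  from functools import reduce

  def make_shards(data, k):
      size = -(-len(data) // k)  # ceil division
      shards = [data[i*size:(i+1)*size].ljust(size, b"\0") for i in range(k)]
      parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shards)
      return shards, parity      # n = k + 1 shards in total

  def recover(shards, parity, missing):
      rest = [s for i, s in enumerate(shards) if i != missing]
      return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), rest, parity)

  shards, parity = make_shards(b"hello erasure coding", k=4)
  assert recover(shards, parity, missing=2) == shards[2]

Real Reed-Solomon parity lets you lose any m of the n shards rather than just one, but the recovery idea is the same.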


> 2. When something is free, then YOU are the product.

Debian Linux, Firefox, GCC, Visual Studio... the list goes on. I think the "you are the product" quip has become an overplayed meme of late. Mutual benefit, co-dependence and similar themes are applicable in more cases than not when it comes to free products and services, in my own view anyway.


I think it gets twisted partially because the quote is wrong. It should be "if a _service_ is free", not "when something is free." I don't think any of the examples you've listed would be considered a service, but rather either a platform (OS) or product.

The quip is certainly overplayed, but I think the notion still stands. It is important to be aware of what intentions service providers and product sellers have. It's not so much a dig on libre / free software.


The USG provides the GPS service for free. You can argue that connecting to it isn't free (since you need hardware that can do so), but then connecting to Google and Facebook isn't free either.

This took literally two seconds to think of off the top of my head. The quotation is _still_ not accurate. It's a useful perspective on how to think about free services, but it's not a substitute for thought.


It would likely cost more to make GPS non-free, because extra users don't increase the cost of running the service, and keeping people locked out (until they have paid) would be a potentially very expensive technological arms race.

That doesn't apply to most services like Facebook, for which each active user creates extra bandwidth and processing load.


GPS is an outlier case. Unlike almost everything else, GPS has perfect scaling. No matter how many users use it, it will not be adversely affected.

Other services like that are public radio and TV.


In what way are tax funded services free?


Dude, are you kidding me? "If you're not paying for it, you're the product" is talking about free for the user. There are so many ways in which this question is incredibly stupid, but here's an easy one: in what way is usage not free for every one of the billions of people who are untaxed by the USG? Or are you under the impression that GPS only works if you show your US passport first?


A key difference with something like Debian versus Facebook is profit. Debian is an open-source project supported by a community of volunteers and donors. Facebook is a corporation that banks billions of dollars. That money doesn't materialize out of thin air. It comes from advertising, your personal information and data gathering. We know what something like the Debian project is doing, we don't know what Facebook is doing. Debian and open-source projects are "mutually beneficial", Facebook only benefits Facebook.


> I think the "you are the product" quip has become an overplayed meme of late.

It's been overplayed for years. The problem with pithy phrases, even when true in their original narrow sense, is that they allow those who aren't willing (or able) to think for themselves to substitute something that superficially looks like wit for actually thinking about what they're talking about.


> Wouldn't all but the most naive scanners use time-out settings, maximum lengths on bytes read etc?

A time-out or a cap on bytes read wouldn't save a scanner from crashing. The defending server can send the 100 KB of zipped data in a matter of seconds; the client then decompresses it, it expands to gigabytes, and the scanner crashes from running out of memory.


Was thinking more about a maximum length for the decompression stage.
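
Something like this, presumably (a sketch using Python's zlib, assuming a zlib/deflate payload for simplicity; gzip needs a different wbits value, and the 10 MB budget is arbitrary):

  import zlib

  MAX_OUTPUT = 10 * 1024 * 1024  # arbitrary decompression budget

  def safe_decompress(payload):
      d = zlib.decompressobj()
      out = d.decompress(payload, MAX_OUTPUT)  # never inflate past the budget
      if d.unconsumed_tail:                    # cap hit with input left over
          raise ValueError("decompression budget exceeded - likely a bomb")
      return out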


User ruytlm has posted links to the Hacker Factor blog, and it seems some sophisticated scanners (e.g., Eddie) were crashed by the exploit. In that blog the author postulates that Eddie is a nation-state-level (not script-kiddie) scanner, so I'd say the answer to this question lies in your definition of naive. It's tempting to call any scanner that crashes on this naive, though, I'd agree, especially going forward with the publicity of this post/topic.

Well, actually, from memory the author of the blog was doubtful whether this exploit crashed Eddie or not, but it did crash the other bots (Eddie V1 did go offline, possibly due to a crash), so it would appear you are correct: only truly naive bots might be affected by this.


Oh, this seems like quite an interesting experiment. Curious, though, whether this defence poses any additional risks (besides bandwidth) to the server. I mean, is there any significant chance that the random data could cause a glitch in the server implementation?


Very interesting read indeed. I have a question about it: the article is about defeating malicious crawlers/bots hitting a Tor hidden service, so how might the author differentiate bot requests from standard client requests on a request-by-request basis? I mean, can I assume that many kinds of requests arrive at the hidden service through shared/common relays? Would this mean other fingerprinting methods (user agent, etc.) would be important, and if so, what options remain for the author if the attackers dynamically change/randomise their fingerprint on a per-request basis?


Could the header be spoofed in such a way that it claims 1 MB, or are clients/bots typically strict about ensuring header values are valid? I think the issue you raise is important though; any serious client/bot should be ignoring files with 1 KB -> 1 GB decompression ratios.


>Assuming that I can exchange keys of some sort (physical, digital) with the other contact.

Each contact has an identical table of data (pure random, 1 terabyte, ASCII 256 or choose your own encoding); this is your "key of some sort". Messages sent between contacts are encoded character by character as offsets from the start of the table. No offset can be used more than once. After offset 1099511627776 (for a 1-terabyte file) has been used for encoding, a new key file is generated and exchanged.

Example:

The table contains a terabyte of random data such as "ahx Ui D 7gu3a7NrdMr 9y&S )iM AAt 8'9s 98m..e kj j uhbd f..."

1,5,6,9,12,15,18,20,23,25,30,33,35,36,39,41 = hi garry it's me
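
In code, the scheme as described would look something like this (a Python sketch with illustrative helper names, using zero-based offsets; the replies below explain why it leaks information):

  def encode(message, table, used):
      offsets = []
      for ch in message:
          i = table.find(ch)
          while i != -1 and i in used:   # skip offsets already spent
              i = table.find(ch, i + 1)
          if i == -1:
              raise ValueError("no unused offset left for %r" % ch)
          used.add(i)
          offsets.append(i)
      return offsets

  def decode(offsets, table):
      return "".join(table[i] for i in offsets)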


If you're going to go through the trouble of exchanging 1 TB of one-time key, use a standard one-time pad. This method is either insecure (when offsets are not strictly ascending) or unnecessarily wasteful.


After searching the definition of one-time pad, I'm pretty sure my post is redundant and shall be deleted (in T-minus 2 minutes). [edit] No delete option. Mod, please delete.


There's a subtle flaw in your design here. You're selectively discarding data in a way that carries meaning, those decisions can be observed, and you are not being strict enough with your rules about reuse of data. Although the user doesn't have to use the first possible index under this scheme, chances are they would (and you did in your example).

The short form of the problem is where you say "No offset can be used more than once." where you actually want "No offset can be used unless it is higher than all previously used offsets".

Consider an assassin and their controller using this scheme for designating targets. Garry is first, the controller sends

  10, 13, 16, 19, 22 = garry
The security services intercept this and notice that garry is killed.

They now know that offsets 0-9 != g, 11-12 != a, 14-15 != r, 17-18 != r, 20-21 != y.

They suspect that either andi or rory is the next target; the controller orders andi killed and sends:

  0, 15, 17, 27 = andi
The security services can then infer that the person to be killed is matched by the regex:

  ^[^g][^r][^r].$
andi matches, rory doesn't.

It's much better to treat your random characters as numbers to add to your data mod 256 (in your ASCII 256 example), and also to set rules like a fixed message length and scheduled messages that can be 'no-op'.
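
A minimal sketch of that suggestion (Python, illustrative function names): encryption adds the key bytes mod 256, decryption subtracts them, and the caller must never reuse an offset:

  def otp_encrypt(message, key, offset):
      pad = key[offset:offset + len(message)]
      assert len(pad) == len(message), "key material exhausted"
      cipher = bytes((m + k) % 256 for m, k in zip(message, pad))
      return cipher, offset + len(message)

  def otp_decrypt(cipher, key, offset):
      pad = key[offset:offset + len(cipher)]
      plain = bytes((c - k) % 256 for c, k in zip(cipher, pad))
      return plain, offset + len(cipher)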

