
You'll find that, often, if not always, if you buy directly from a publisher, the ebook will be free with the paperback purchase. (Manning is one example, not sure about others off hand.) However, if someone is buying your book from Amazon, the paperback is one SKU and the ebook is another, and there's no easy way to give the ebook away for free. So I think that's basically your answer.

For some context, after publishing my first book with O'Reilly, I decided to start a small publishing business in an effort to capture more revenue as well as experiment with continuous publishing (you buy one edition, I keep it up to date for a couple of years). I've written three books, with the last (Bulletproof TLS and PKI) being the best and the only one still relevant.

When we first started, we offered paperback and ebook options, with the ebook free with a paperback purchase. We had three virtual warehouses, in the US, the UK, and Europe. It was fun for a while, but we lost money on every paperback sale as we couldn't compete with Amazon on shipping costs (plus the additional overhead of dealing with shipping problems). You have to pay the printer, shipping to the warehouses, order fulfilment, and shipping to the customer. Then there's covering the cost when something goes wrong, even if it's not our fault.

At some point we stopped selling paperback books, but we continued to offer free ebooks with a proof of purchase. This, too, ended up being a money-losing venture, because we make little money on each book, with most of the money going to the printer (print on demand is great, but expensive, and Amazon takes 40% of the list price).

Today, we sell ebooks on our web site, and paperbacks via Amazon. We're nice people so we may give you a free ebook if you ask, but that's not a good way to run a business. It's fortunate, then, that we're not in it for the money.

To sum it up: Amazon is the dominant sales channel. If they offered paperback and ebook distribution (Kindle, EPUB, and PDF; mandatory if you care about the user experience), we'd happily sell only through them and give everyone a free ebook with a paperback purchase. We'd give Amazon the standard 40% (the minimum for paperbacks; not sure what it is for ebooks these days).


> You'll find that, often, if not always, if you buy directly from a publisher, the ebook will be free with the paperback purchase. (Manning is one example, not sure about others off hand.)

Extrapolating “often, if not always” from one niche publisher is quite a stretch.

Random House, for example, does not do this.

https://www.penguinrandomhouse.com/books/713318/wonka-by-roa...

Neither does Simon & Schuster.

https://www.simonandschuster.com/books/Icebreaker/Hannah-Gra...


You're right. I answered in the context of technical books, where you do tend to get a free ebook if you buy the paperback directly from the publisher. Since writing that response, I've checked a few other technical publishers and the claim holds for them.

But, re-reading the top question now, I see that it's not specifically about technical literature.


I've been buying lots of cryptography books lately as I wanted to learn about the evolution of our understanding of this topic.

How about some of the following:

- "Real-World Cryptography" is my recommendation for the first book to read, but probably won't work for your friend as it doesn't cover any algorithms in detail. It's a great topic to cover a lot of ground quickly to gain a good understanding of how cryptography is used in practice.

- "Introduction to Modern Cryptography" is used as a textbook on many universities and I recommend it for someone who doesn't mind diving in into the maths. Being a textbook, it's fairly academic.

- "The Code Book: The Secret History of Codes and Code-breaking" covers the history.

- For TLS and PKI, read "Bulletproof TLS and PKI" (disclaimer: I wrote it). It's a good book to understand practical protocol engineering in the context of the evolution of TLS from 1995 until now.

Edits:

- The manga book is very nice and fun, but high level and dated.

- "Crypto" by Steven Levy is also recommended.

- Serious Cryptography is good, but Real-World Cryptography is more recent and provides a better foundation.

- Cryptography Engineering / Practical Cryptography / Applied Cryptography are often recommended, but they're very dated at this point.

- Your friend might enjoy https://cryptopals.com/


That’s so depressing.


Do you know what's behind the performance degradation of OpenSSL 3.0? Has the problem been documented anywhere?


Horrible locking. 95% of CPU is spent in spinlocks. We're still doing measurements that we'll report with all data shortly. Anyway, many such reports were already collected by the project; there are so many that they created a meta-issue to link to them: https://github.com/openssl/openssl/issues/17627#issuecomment...

3.1-dev is slightly less bad but still far behind 1.1.1. They made it too dynamic, and certain symbols that used to be constants or macros have become functions that walk lists under a lock. We noticed the worst degradation in client mode, where performance was divided by 200 with 48 threads, making it literally unusable.


Take a look instead at the DNS SVCB and HTTPS resource records, which have already been adopted in practice:

- Service binding and parameter specification via the DNS (DNS SVCB and HTTPS RRs) https://datatracker.ietf.org/doc/draft-ietf-dnsop-svcb-https...


It appears Google publishes the type 65 (HTTPS) record, but not type 64 (SVCB). YouTube too. I am curious which browsers are leveraging this currently. I can't find many other domains using it yet. It seems Apple uses it on some subdomains. [1]

    dig +noall +answer -t type65 google.com
    google.com.  2857 IN HTTPS 1 . alpn="h2,h3"

I see some hits in my name server stats for the HTTPS/SVCB types, so I guess bots are looking for them. Perhaps scanning to see how widely they're adopted? The list of record types has grown since I last looked. [2] People have been busy.

[1] - https://serverfault.com/questions/1075522/whats-the-use-case...

[2] - https://en.wikipedia.org/wiki/List_of_DNS_record_types


Chrome at least, behind a flag.[1]

[1] https://chromestatus.com/feature/5485544526053376


Beat me to it; I was also wondering whether the OP's proposal hasn't been completely obsoleted by SVCB/HTTPS at this point.

It still paints a sad picture of the ecosystem that a standard addressing the same shortcomings has existed since 2009 but was completely ignored until Cloudflare reinvented it in a proprietary fashion.

On the other hand, expecting end users to suddenly start typing "https+srv://" everywhere, for exactly no added benefit except to make standards wonks happy, doesn't exactly inspire much confidence in the standard's authors either...


The -00 draft[1] did not require the +srv. I would guess the blame lies with the security problems that arise when user agents that support the SRV record see one thing and those that don't see another.

(The drafts go up to -05, though this is not apparent from the posted link.)

[1] https://datatracker.ietf.org/doc/html/draft-jennings-http-sr...


Does it support specifying the HTTPS port? This is a very important use case for self-hosting. I skimmed through Cloudflare's article[0] but didn't see any mention of it.

[0]: https://blog.cloudflare.com/speeding-up-https-and-http-3-neg...


OK, after a little searching: the spec itself describes a port parameter in section 7.2. So it would appear SVCB can indeed replace HTTP SRV for my purposes.
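
For illustration, a hypothetical record along these lines (made-up name and TTL, written in the same presentation format as the dig output above, and assuming the port SvcParam from that section) would direct clients to port 8443:

    example.com.  3600 IN HTTPS 1 . alpn="h2" port=8443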


Does this deprecate TLSA records? I've only just begun to generate those.


CAs don't have to report their certificates to CT logs, and actually there is no reporting in a variety of use cases. However, modern browsers no longer trust certificates for which there is no (cryptographic) evidence of logging to CT.

The net result is that CT gives us visibility into all web certificates (and more).


"The net result is that CT gives us visibility into all web certificates [...]" including not always the desirable effect of publishing information on hosts in the non-public part of the institution network...


It's true that sometimes, somewhere up in the management tier, people are annoyed to discover that non-Corp employees can find out about the DNS name product-we-bought.internal.corp.example and thus might intuit that Corp bought Product-we-bought. Or even, less often, that big-city.branches.corp.example exists even though they haven't officially launched the new Big City branch of Corp.

And it's true that CT is one of the various ways people might discover that, although it's probably not even the easiest.
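
As a rough illustration (the hostname here is made up, and crt.sh is just one of several public CT search frontends; I believe it accepts % as a wildcard and an output=json parameter), a single query is enough to list logged certificates under a domain:

    curl -s 'https://crt.sh/?q=%25.internal.corp.example&output=json'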

But honestly did you consider calling that purpose-of-software.internal.corp.example instead? And maybe at least use code names for projects that are actually secret?

Instead of normandy-invasion.corp.example, at least call it overlord.corp.example; now it's not obvious where you're landing, and I won't have lined up all the tanks and guns ready.


Would you mind sharing the details of the consultant? My email is "ivan.ristic" at gmail. Much appreciated.


Email sent.


When I needed a security policy for my startup, I looked everywhere and couldn't find anything that made sense. I only found policies that are very, very verbose, so much so that I didn't know what to do with them. And, most often, outdated. In the end, we wrote our own, focusing on producing something that's comprehensive yet concise. (After all, if it's not concise, no one is going to be able to understand it.)

We're planning to make another round of changes and publish it under a permissive licence. Here it is in case it's useful to someone: https://www.hardenize.com/about/security_policy

A good security policy is very important early on to inform architecture and design. Ours has worked very well for us. It has also often helped us avoid having to complete customer security questionnaires.


> Work shall be carried out exclusively using corporate equipment. There shall be no access of company networks and data from personal devices. Corporate equipment shall not be used for personal activities.

This one caught my eye. As a dev, I find corp laptops are usually so locked down as to be useless, i.e. unable to install dev tools etc. So I often use personal equipment (not connected to the corp network) and use git as my gateway back into big corp.


> As a dev, I find corp laptops are usually so locked down as to be useless, i.e. unable to install dev tools etc.

In a past life, my company, which was a .NET shop, was acquired by a large company that used Macs for everyone except salespeople. They didn't provide Macs to anyone on my team, so we had to use Windows boxes that were so locked down we had to get exceptions for everything. While we were successful in getting Visual Studio and other tools added to the exception list, we weren't successful in getting our own compiled software added to the list. I.e., we literally couldn't run the software we were acquired to create.

For the short term, we discovered that anything done in WSL (Windows Subsystem for Linux) was completely ignored by endpoint security, so that let us work around many local issues for the 18 months or so it took us to get Macs for work.


Another "trick" I've used before is to run everything inside a Hyper-V VM on a locked-down Windows box.

Locking down machines makes sense, but won't someone please think of the developers?!


So (as a corporation) don't lock them down as much. That's what we do as a small company that has to pass ISO 27001 (and NEN 7510) because our product is used for healthcare. From a security audit standpoint, not allowing personal devices saves a lot of time and trouble. (This of course means that anyone who is on call just gets a phone from the company.)

As a developer: if you need to work around locked-down environments, the security policy doesn't really matter to you, as you are already violating it. Whether your employer cares about that or not is another thing, of course. Some managers will consider you a liability; some will accept that this is the only way you can do your job.

Ideally employees embrace the security policy, and whoever tweaks it makes sure everybody can still do their job. In reality, that will vary a lot.


> As a developer: if you need to work around locked-down environments, the security policy doesn't really matter to you, as you are already violating it.

Or the security policy has an "exceptions shall be reviewed and approved by X and documented at Y" clause.


The struggle between security and usability is very real. Personally, I don't think it's possible to lock down dev equipment without significantly impacting productivity. That said, ensuring that high-value environments (e.g., production networks) can't be accessed from dev equipment with elevated privileges, well, that's necessary. And, I feel, often neglected in small companies.


You’re doing yourself, and all the other devs at your company, a disservice - at least in my opinion.

If the devices are locked down to a degree that you cannot do your dev (aka your job), that should be brought up with management. Of course, it’s easier said than done xD

I’m personally a fan of locked down corporate devices and then either dedicated laptops or cloud vms for development.

It’s not necessarily an easy problem to solve though


It’s pretty rare that a manager will help fight stupid security restrictions, even if they prevent you from doing your work. I currently have to look into Remote Desktop solutions for one of our devices. IT, in their wisdom, have decided to block access to this category of products (they are blocking VNC sites and TeamViewer; Zoom is OK for unknown reasons). From past experience it’s clear that I either have to do it on my personal equipment or not at all, because IT won’t accept even reasonable requests for unblocking. Sometimes running a VM with a non-corporate image will help.


The truth is, security is hard. IT/CorpSec are asked to implement policy drawn up by lawyers/auditors/compliance specialists with limited insight into the real needs of the organisation. Asking them to make complicated exceptions that they don't understand and can't control puts them into an untenable position with respect to hard requirements that are handed to them as absolutes. At the other end, this often filters through as decisions that look utterly nonsensical (you can use Zoom but not something else, etc).

Meanwhile, for a developer, telling people they can't install and run tools they need to do their job is also untenable.

Like others I've come to the opinion that you almost need separate computers and completely isolated networks to do this properly. If you are doing things right, there should be very little need for developer equipment to ever connect into any place that sensitive data resides. And consequently there should be very little need to lock down developer equipment. Unfortunately not many places can architect things that well, nor build that type of nuance into their security policies. Among other things, you need to invest a lot of work in creating fully representative non-sensitive test data so that developers can do their work in a realistic setting.


I wonder about the option of having an unlocked, dedicated development machine that has no (or very limited) access to network resources and minimal security software, in addition to a locked-down laptop for email, Slack/Teams, etc.


Curious how they handle personal mobile devices. Do they offer company phones, not offer after-hours support, or outsource support to a company with a compatible policy?


Company phones are standard for any organisation that has a policy like this. Another alternative is providing a personal phone number that you can be contacted/paged on, and then you log in to a corporate device to find out why you're being called.


Totally on board with the goals, and I've done some similar work, though I haven't ended up with anything nearly as trim as this.

I'm interested in whether/how this has stood up in externally audited scenarios, like SOC2/ISO 27001 or similar. I get that it has successfully helped avoid some customer questionnaires, but I'm thinking of more formal processes.

At a glance, it covers many of the bases at a high level, but I wonder if it's missing the specifics that an external auditor might typically expect to see in a policy manual. Are there additional sub-documents/playbooks/etc. for many of these that elaborate further?


We haven't yet gone through any audits [we're small/young], but we've begun to prepare for SOC2. The policy itself is absolutely insufficient for anything of the sort, and we expect to generate a ton of further documentation. After all, SOC2 is essentially all about documenting your processes in detail.


I imagine that's the limit per client IP address [for a single server port], no? The Linux kernel can use multiple pieces of information to track connections: client IP address, client port, server IP address, server port.
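
Each connection is identified by the full 4-tuple, so that ceiling applies per (client IP, server IP, server port) combination rather than globally; on the client side it's bounded by the ephemeral port range. A quick way to check that range on Linux (the values in the comment are the usual defaults; yours may differ):

    cat /proc/sys/net/ipv4/ip_local_port_range
    # typically: 32768  60999  (roughly 28k usable client ports per destination)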

Cloudflare has some interesting blog posts on this topic:

- https://blog.cloudflare.com/how-we-built-spectrum/

- https://blog.cloudflare.com/how-to-stop-running-out-of-ephem...


Hardenize's paid plan is intended for larger businesses, where we combine infrastructure discovery with continuous monitoring and many other things. However, ad-hoc assessments are free for everyone, and we intend to keep them that way. I hope that, in time, we will be able to provide plans at lower price points, and maybe a free plan at some point.

(Hardenize founder, previously also SSL Labs founder.)

