What about marketing? I don’t necessarily mean traditional ads, but more: how do you get the word out if you have a good product?
I’ve personally found it easy to reach “tech enthusiasts” but much harder to get through to the normies who would really benefit from the product but don’t read HN, tech blogs, etc.
You can try doing a press release, for example through PRWeb or similar, which will send your announcement to multiple outlets that might write it up if it fits their audience. It’s kind of a shotgun approach, though.
I've done a few press releases for my products over the last 20 years. Unless you are already very famous or are doing something very innovative, it is almost certainly a complete waste of time and money.
I've been trying to sneak into some relevant discussions on Twitter lately. Too early to say whether that helps at all but at least my posts haven't (yet?) been deleted as self-promotion like on Reddit.
If you’re working in a domain you know, use your contacts in that industry. If you’ve learned a random domain just to sell software into it, do it in a way that puts you in touch with people.
Pet peeve, but yes. It should not, in 2024, be considered a niche use case to deal with... video. We’ve become accustomed to YouTube and its impossible-to-beat free hosting. But streaming is really pricey to do yourself, so yes, we should expect large bandwidth to be available. “Who needs X” is a question that should be reversed; instead we should ask “why not?”. When we have good, affordable infra, we get cool new stuff and everyone benefits.
What type of fraud exactly? You mean like stolen CCs? It feels very medieval as a financial trust system if every little vendor can’t trust payments, even when you pay up front. In some ways this is worse than cold hard cash. And then we pay the Visa premium on top of that, for the convenience of being mistrusted...
If they pay up front and then dispute the charge, the company suffers extra fees. With enough reports, their payment processor’s fee percentage goes up, which impacts all their payments.
Paying up front doesn't really mean much, because if the credit card info is stolen, the actual owner will report the transaction and it will be reversed.
Right, but that sucks. So CC companies/banks are simply shifting risk from one side of the transaction to the other. Sure, as a consumer you don’t have to worry because you’ll get your money back. But if the merchant has to worry and reject customers who are “suspicious” to protect themselves, then you’re back to square one, except more Kafkaesque. That’s why I said cash is better.
I’d rather have $10 permanently lost on a month of VPS than be banned five days after setting it up because I’m traveling and my IP is “suspicious”. Which has happened to me.
Hetzner is actually worse. Current EU directives allow liability shift in online payment when the customer is authenticated via 3D Secure, meaning the risk is extremely low and the onus of proving the fraud is on the card owner. Yet Hetzner will not accept your money if your name doesn’t look correct for the country you are accessing from.
Too much entropy. Any 128-bit+ thingy with global uniqueness, whether it’s a hash or a GUID or a public key, doesn’t matter: it’s never going to be memorizable for mortals.
You can generate novelty nonces for the first X chars, but you will eventually get mumbo jumbo, at which point it’s never gonna be a “good name”.
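As a sketch of why, here's a std-only brute force for a vanity prefix (DefaultHasher stands in for a real cryptographic hash, but the math is the same):

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    fn main() {
        let target = "cafe"; // 4 hex chars = 16 bits: ~65k attempts on average
        for nonce in 0u64.. {
            let mut h = DefaultHasher::new();
            nonce.hash(&mut h);
            let hex = format!("{:016x}", h.finish());
            if hex.starts_with(target) {
                println!("nonce {nonce} -> {hex}");
                break;
            }
        }
    }

Every additional readable char multiplies the work by 16, and everything after the prefix stays mumbo jumbo.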
> David Brooks himself is part of the very same cultural elite that he's complaining about.
That’s a strengthening of the argument, not a weakening. Imagine the opposite: “he hasn’t gone there, he doesn’t understand, he’s just jealous”
> For all his faults, at least Trump is willing to lie to his base
Again, that’s a weakening of the standpoint. It’s completely backwards rationalization.
> People voted for Trump because neither side is willing to do anything to actually help people. That's why people don't trust institutions or "the establishment."
This is so much in the article that it’s arguably the entire point of it. You’re basically agreeing with it, but in a contorted contrarian way.
> It's also hard to take risks or cultivate different skills when you're saddled with college debt you can't get rid of, and when healthcare and rent are taking up a large chunk of your income.
Right, but you can only get so far with a solution that amounts to getting more kids into schools that brag about how few kids they accept. There have to be more avenues and less inbreeding/nepotism/favoritism based on brand names.
At the heart of the academic movement for inclusivity sits the most entrenched and extreme form of exclusivity. This is a problem worthy of its own attention, without bringing in socioeconomic everythingism. The fact that you have tons of smart and ambitious kids coming out of non-brand-name schools unable to get their resumes looked at is a disgrace and a failure. It’s demoralizing as hell.
> IMO we need to start normalizing being militant about this stuff again, to aggressively and adversarially defend the freedom to use your computer the way you choose to use it
Yes. As a millennial, I’d say the times of civil disobedience were better. Not only did we get a better internet for consumers, but better companies were rewarded and won. Rose-tinted glasses? Possibly, but there’s another reason for disobedience: the other side does it, and they do it just for money.
Concretely, is there something like Adblock that can be done for cookies? I don’t think blocking is as effective as poisoned data, though. They ask for data, they should get it. If they don’t get consent, poisoned data is merely malicious compliance.
It could even be standardized as an extension to DNT: “if asking for consent after a DNT header, a UA MAY generate arbitrary synthetic data”.
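Purely as a hypothetical sketch of what “synthetic data” could look like (all names made up, std-only):

    use std::time::{SystemTime, UNIX_EPOCH};

    // Minimal LCG so we need no crates; not cryptographic.
    fn next(seed: &mut u64) -> u64 {
        *seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        *seed
    }

    // Plausible-looking but meaningless profile data.
    fn synthetic_profile(seed: &mut u64) -> String {
        let age = 18 + next(seed) % 60;
        let interests = ["gardening", "chess", "llamas", "accordion"];
        let interest = interests[(next(seed) % 4) as usize];
        format!("{{\"age\":{age},\"interest\":\"{interest}\"}}")
    }

    fn main() {
        let mut seed = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .unwrap()
            .as_nanos() as u64;
        println!("{}", synthetic_profile(&mut seed));
    }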
Use uBlock Origin with the "Cookie notices" custom lists. Not explicitly accepting cookies is legally the same as refusing them (now, whether websites actually respect that is the opening keynote of the Naiveté conference).
> Concretely, is there something like Adblock that can be done for cookies?
I use a combination of two browser extensions: Cookie AutoDelete[0] and I don't care about cookies[1]. The second hides any GDPR 'compliance' popup; the first deletes any cookies set by a website when you close the last tab with it open. Both extensions have whitelist functionality.
uBlock Origin now has specific filters for cookie popups; you just need to turn them on in the filter lists. I'd say this is probably preferable to downloading another addon (one that already had a scare with being sold off).
I like to use Consent-o-Matic[1] for this. IDCAC accepts tracking when ignoring the request doesn't work; CoM rejects all tracking on those popups. I like the slight Fuck Off that it sends.
GDPR compliance also requires explicit opt-in, so ignoring those popups has the same effect as refusing to be tracked. YMMV of course, but I don't see why ignoring cookie banners should lead to worse results than laboriously declining to be auctioned off.
I would say most websites are wasteful with respect to the customer, who is usually the advertiser. There are websites where the user is the customer, but they’re rare these days.
> Their explanation for why Go performs badly didn't make any sense to me.
To me, the whole paper is full of misunderstandings, at least in the analysis. There's just speculation based on caricatures of the languages, like "Node is async", "C++ is low level", etc. The fact that their C++ implementation using uWebSocket was significantly slower than the Node one, which uses uWebSocket bindings, should have led them to question the test setup (they probably used threads, which defeats the purpose of uWebSocket).
Anyway... The "connection time" is just the HTTP handshake. It could be included as a side note. What's important in WS deployments is:
- Unique message throughput (the only thing measured afaik).
- Broadcast/"multicast" throughput, i.e. when you have, say, 1k subscribers to which you want to send the same message.
- Idle memory usage (for, say, chat apps that have low traffic: how many peers can a node maintain?)
To me, the champion is uWebSocket. That's the entire reason why "Node" wins - those language bindings were written by the same genius who wrote that lib. Note that uWebSocket doesn't have TLS support, so whatever reverse proxy you put in front is gonna dominate usage because all of them have higher overheads, even nginx.
Interesting to note is that uWebSocket's perf (especially its memory footprint) can't be achieved even in Go, because of goroutine overhead (there's no way in Go to read/write multiple sockets from a single goroutine, so you have to spend two goroutines per connection for real-time r/w). It could probably be achieved with Tokio, though.
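For what it's worth, here's a rough Tokio sketch of the single-task pattern (assumes the tokio crate with the "full" feature; loopback sockets stand in for real peers):

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::{TcpListener, TcpStream};

    #[tokio::main(flavor = "current_thread")]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("127.0.0.1:0").await?;
        let addr = listener.local_addr()?;

        // Two "peers" over loopback.
        let mut c1 = TcpStream::connect(addr).await?;
        let mut c2 = TcpStream::connect(addr).await?;
        let (mut s1, _) = listener.accept().await?;
        let (mut s2, _) = listener.accept().await?;
        c1.write_all(b"one").await?;
        c2.write_all(b"two").await?;

        // A single task reads from both server-side sockets: no task
        // (let alone two) per connection.
        let (mut b1, mut b2) = ([0u8; 8], [0u8; 8]);
        for _ in 0..2 {
            tokio::select! {
                Ok(n) = s1.read(&mut b1) => println!("s1 got {:?}", &b1[..n]),
                Ok(n) = s2.read(&mut b2) => println!("s2 got {:?}", &b2[..n]),
            }
        }
        Ok(())
    }

Whether that actually reaches uWebSocket's footprint is another question, but the model doesn't force a task per connection.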
The whole paper is not only full of misunderstandings, it is full of errors and contradictions with the implementations.
- Rust is run in debug mode, by omitting the --release flag. This is a very basic mistake.
- Some implementations log to stdout on each message, which leads to a lot of noise, not only from the overhead of doing so but also from lock contention in the multi-threaded benchmarks (see the sketch after this list).
- It states that the Go implementation is blocking and single-threaded, while it in fact is non-blocking and multi-threaded (concurrent).
- It implies the Rust implementation is not multi-threaded, while it in fact is, because the implementation spawns a thread per connection. On that note, why not use an async websocket library for Rust instead? They're much more widely used.
- Gives VM-based languages zero time to warm up, giving them very little chance to do one of their jobs: runtime optimization.
- It is not benchmarking websocket implementations specifically; it is benchmarking websocket implementations, JSON serialization, and stdout logging all at once. This adds so much noise that the results should be considered entirely invalid.
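To make the logging point concrete, here's a minimal std-only sketch (build with --release, the flag the paper omitted); both loops do the same formatting work, only the write to stdout differs:

    use std::hint::black_box;
    use std::io::Write;
    use std::time::Instant;

    fn main() {
        let n = 100_000;

        // Loop 1: log every "message" to stdout, like several of the
        // benchmarked implementations do.
        let start = Instant::now();
        let stdout = std::io::stdout();
        for i in 0..n {
            let msg = format!("message {i}");
            writeln!(stdout.lock(), "{msg}").unwrap();
        }
        let logged = start.elapsed();

        // Loop 2: same formatting work, no I/O.
        let start = Instant::now();
        for i in 0..n {
            let msg = format!("message {i}");
            black_box(&msg);
        }
        let quiet = start.elapsed();

        eprintln!("with stdout logging: {logged:?}, without: {quiet:?}");
    }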
> To me, the champion is uWebSocket. That's the entire reason why "Node" wins [...]
A big part of why Node wins is because its implementation is not logging to stdout on each message like the other implementations do. Add a console.log in there and its performance tanks.
There is no HTTP handshake in RFC 6455. A client sends a text with a pseudo-unique key. The server sends a text with a transform of that key back to the client. The client then opens a socket to the server.
The distinction is important because assuming HTTP implies WebSockets is a channel riding over an HTTP server. Neither the client nor the server cares whether you provide any support for HTTP, so long as the connection is achieved. This is easily provable.
It also seems you misunderstand the relationship between WebSockets and TLS. TLS is TCP layer 4 while WebSockets is TCP layers 5 and 6. As such WebSockets work the same way regardless of TLS but TLS does provide an extra step of message fragmentation.
There is a difference in interpreting how a thing works and building a thing that does work.
Call it what you will. The point about the handshake is that the TCP setup plus HTTP headers comes before the upgrade to raw TCP streams. This is part of the benchmark, and while it exists in the real world too, it can be misleading because it's testing connections, not message throughput.
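For concreteness, the "key transform" is just base64(SHA-1(key + a fixed GUID)). A sketch, assuming the sha1 and base64 crates as dependencies:

    use base64::{engine::general_purpose::STANDARD, Engine as _};
    use sha1::{Digest, Sha1};

    // RFC 6455 Sec-WebSocket-Accept: base64(SHA-1(client_key + magic GUID)).
    fn accept_key(client_key: &str) -> String {
        const GUID: &str = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11";
        let mut hasher = Sha1::new();
        hasher.update(client_key.as_bytes());
        hasher.update(GUID.as_bytes());
        STANDARD.encode(hasher.finalize())
    }

    fn main() {
        // Example values from RFC 6455 section 1.3.
        assert_eq!(
            accept_key("dGhlIHNhbXBsZSBub25jZQ=="),
            "s3pPLMBiTxaQ9kYGzzhZRbK+xOo="
        );
    }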
Also, I was wrong about uWebSocket: it does have TLS support, so you can skip the reverse proxy. It deals with raw TCP connections, so to encrypt you need TLS support right there. It's also a barebones HTTP/1.1 server, because why not. What I misremembered: I confused TLS with HTTP/2, which it does not support. That's unrelated to WS.
Deterministic within a single image yes, but not within arbitrary subsections. Classical filters aren’t trying to reconstruct something resembling “other images it’s seen before”. Makes a difference both in theory and practice.
The newtype pattern is a special case of type composition which is incredibly useful, has low complexity, and if done right almost no boilerplate overhead. It's much dumber and easier to reason about than type-acrobatics with generics, imo.
Do you mean `Generically`[1]? I've only ever vaguely seen its use - perhaps it can do something a `newtype` can't (or can, but with more boilerplate)? But don't have any first-hand experience currently to comment.
It kinda sucks in both! If you want to interact with your newtypes, you need to either unwrap it or reimplement each typeclass/trait. Haskell does make this a bit nicer with deriving strategies, and Rust with macros, but it's a lot of boilerplate. The article had this to say about the example:
> I’m sure it won’t take much to convince you; this is unsatisfying. It’s straightforward in our contrived example. In real world code, it is not always so straightforward to wrap a type. Even if it is, are we supposed to wrap every type for every trait implementation we might need? People love traits. That would be a stampede of new types.
> Wrapper types aren’t free either. a_crate has no idea A2 exists. We’ll have to unwrap our A2 back into an A anytime we want to pass it to code in a_crate. Now we have to maintain all this boilerplate just to add our innocent implementation.
Does the Rust wrap/unwrap come with any runtime cost?
I don't think it sucks at all. If you're implementing a type class (or trait, or interface) and your new implementation is better (more efficient in time or memory), then you should propose swapping the old one for the new at its original source location (i.e., create a merge request somewhere). If your implementation has a different output, you should consider whether this thing should be a type class at all (as it seems to be arbitrary). And if your implementation is for a more specific case of the type, then making it a newtype is not only the practical thing to do; it should actually be a new type.
Wrap/unwrap is free, and methods on the newtype are typically free as well.
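A quick sketch to convince yourself:

    use std::mem::size_of;

    // With #[repr(transparent)] the layout is guaranteed identical to the
    // wrapped type; even without it, the newtype adds no runtime data.
    #[repr(transparent)]
    struct Meters(f64);

    impl Meters {
        // Compiles down to a plain field access and inlines away.
        fn get(&self) -> f64 {
            self.0
        }
    }

    fn main() {
        assert_eq!(size_of::<Meters>(), size_of::<f64>());
        let m = Meters(3.5);
        assert_eq!(m.get(), 3.5);
    }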
I totally agree with your analysis, but in practice it's not always possible to merge implementation upstream and that's exactly what the article is about. Say you're working with a small scientific library and you want to serialize one of the data structures, but the authors haven't provided a Serde implementation. It'd be nice if you could upstream it, but if the authors aren't responsive you're forced to use a newtype. It sounds like this differs from Haskell, which (if I understand your comment) would allow you to implement it directly on the base type (with a warning).
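As a sketch of that scenario (sci_lib and Measurement are made up here; serde is assumed as a dependency):

    use serde::ser::{Serialize, SerializeStruct, Serializer};

    // Stand-in for the unresponsive upstream crate.
    mod sci_lib {
        pub struct Measurement {
            pub value: f64,
            pub unit: String,
        }
    }

    // The newtype wrapper lets us implement Serialize ourselves.
    pub struct SerializableMeasurement(pub sci_lib::Measurement);

    impl Serialize for SerializableMeasurement {
        fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
            let mut s = serializer.serialize_struct("Measurement", 2)?;
            s.serialize_field("value", &self.0.value)?;
            s.serialize_field("unit", &self.0.unit)?;
            s.end()
        }
    }

Serde also ships #[serde(remote = "...")] for exactly this situation, which derives the same kind of wrapper for you.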
> If you want to interact with your newtypes, you need to either unwrap it or reimplement each typeclass/trait
...or you could just e.g. implement Deref in Rust? In my experience that solves almost all use cases (with the edge case being when something wants to take ownership of the wrapped value, at which point I don't see the problem with unwrapping)
That gets us halfway there. It makes unwrapping easy, but you still need to remember to rewrap if you've implemented anything.
    use std::ops::Deref;

    trait Test {
        fn test(&self);
    }

    #[derive(Debug)]
    struct Wrap<T>(T);

    // The trait is implemented on the wrapper type...
    impl<T> Test for Wrap<T> {
        fn test(&self) {}
    }

    // ...and Deref exposes the wrapped value.
    impl<T> Deref for Wrap<T> {
        type Target = T;

        fn deref(&self) -> &Self::Target {
            &self.0
        }
    }

    fn main() {
        let thing1 = Wrap(3_i32);
        let thing2 = Wrap(5_i32);
        let sum = *thing1 + *thing2; // unwrapped to plain i32s; the sum is not re-wrapped
        thing1.test();
        thing2.test();
        sum.test(); // error[E0599]: no method named `test` found for type `i32` in the current scope
    }
Also using newtypes to reimplement methods on the base type is frowned upon. I believe that this is why #[derive(Deref)] isn't included in the standard library. See below (emphasis mine):
> So, as a simple, first-order takeaway: if the wrapper is a trivial marker, then it can implement Deref. If the wrapper's entire purpose is to manage its inner type, without modifying the extant semantics of that type, it should implement Deref. If T behaves differently than Target when Target would compile with that usage, it shouldn't implement Deref.