Nope, consent cannot be a prerequisite for using the service/software. If it is available in the EU (or the UK, which kept the GDPR after Brexit), it must be usable with or without consent.
That is the reason many local non-EU ad-supported businesses (like local papers in the US) outright block all EU traffic. For example if I go to https://www.chicagotribune.com/ I get a blank page saying "This content is not available in your region".
Manjaro could do something similar by just blocking EU users from downloading it.
`grid-template-columns: max-content 1fr;` is preventing the text from wrapping in `header` elements. Should be a simple fix, but it'd definitely lose some points from me.
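A sketch of one possible fix (the selector and layout are guesses; the idea is that a bare `max-content` track never wraps, while `minmax()` lets the track shrink below its unwrapped width so the text can wrap):

```css
header {
  display: grid;
  /* max-content alone sizes the track to the text's full unwrapped width,
     so the text can never wrap. Capping it with minmax() keeps the
     shrink-to-fit behavior while allowing wrapping when space runs out. */
  grid-template-columns: minmax(auto, max-content) 1fr;
}
```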
I've been self-hosting on and off for a while, and I avoided synapse because it seemed to rely on a lot of different services, and python packaging is hell. It seems dendrite is dying (which I did not know until now), and conduit and its siblings are very much beta and not close to being stable.
If I read this correctly there is just one server worth actually using (synapse) and two client libraries (matrix-rust-sdk and matrix-js-sdk, both maintained by the same org/people).
At the same time most of the bridges to other protocols (which was one huge selling point of matrix) are either broken, outdated, alpha (and by that I mean really, really alpha) or beta (usually in a stagnant state or alpha-ish).
One of the goals (perhaps the primary?) of matrix was to build a protocol that is supported by multiple servers, clients and bridges but that does not seem to have been realized.
I just listened to the keynote on matrix 2.0 and...
I used to be a huge proponent of matrix, but I really feel like the potential has not been lived up to.
There are loads of SDKs which work well - as well as matrix-rust-sdk and matrix-js-sdk there are also excellent fully-featured indie libraries for Dart, Kotlin, C++, Go, Python (and many others which don't have full parity).
You're right that Dendrite is currently stalled, but that's not a reflection on the protocol, just the reality that writing and maintaining two servers simultaneously is expensive. Meanwhile most of the good stuff in Dendrite has already made it back into Synapse. Bridge development has also stalled for the same reason (exacerbated by hostility from those being bridged) - but then 3rd party bridges like heisenbridge are doing fine.
On the other hand, conduit/conduwuit/grapevine look to be bubbling away at an impressive rate (entirely independently of Element), even if they're currently beta.
So, I don't think the potential has been lost - instead, the core team has ended up focusing our energy on a smaller set of projects while making sure we get a small set of things really excellent rather than being spread too thin, which is a major improvement over the past, imo. Meanwhile the broader community is busy hacking away too.
> Meanwhile most of the good stuff in Dendrite has already made it back into Synapse
But that still means there is only one actual trusted server implementation, right? I thought the point of dendrite was to be able to prove that MSCs were implementable independently before they became part of the spec. It seems like the actual spec is just whatever MSC synapse has implemented.
> Bridge development has also stalled for the same reason (exacerbated with hostility from those being bridged)
Some open protocols that would seem like a natural fit, like XMPP, do not have stable bridges. Others, like email, are quite limited in that they require you to act as a full service provider (an actual email server rather than a bridge between an email account and matrix). Others still, like SMS, have no stable implementation. And those are just the ones that cannot be hostile to bridging. Many of the bridges to proprietary services listed on the site have not had any activity for 2+ years, which in itself is not bad, but probably is when they interface with a moving target or a changing protocol.
Is the "One chat protocol to bridge them all" goal abandoned?
> the core team has ended up focusing our energy on a smaller set of projects while making sure we get a small set of things really excellent
I'd love to see that. I'd especially love to see it stabilize and simplify enough that we have a multitude of servers, clients and bridges that are stable and useful. I'm just not seeing it right now.
> But that still means there is only one actual trusted server implementation, right?
Nope? Dendrite works. It's just stuck in beta for now due to lack of manpower.
> I thought the point of dendrite was to be able to prove that MSCs were implementable independently before they became part of the spec. It seems like the actual spec is just whatever MSC synapse has implemented.
The spec is not "whatever MSC synapse has implemented". The spec is... the spec, and the compliance test suites (sytest, complement) which go with it. Currently we require one implementation of MSCs to prove their validity before landing in the spec - and that could be in synapse, dendrite, conduit or wherever.
Just because the core team only has $ to actively progress a single server implementation right now doesn't devalue the spec at all. If anything it makes 3rd party implementations like the Conduits even more important, in terms of showing that the spec isn't coupled to the Synapse implementation.
> Is the "One chat protocol to bridge them all" goal abandoned?
Right now the goal is "have a decentralised open chat protocol whose apps can outperform mainstream centralised products". Bridging is secondary (for now, although Beeper is obviously focused on it).
> > the core team has ended up focusing our energy on a smaller set of projects while making sure we get a small set of things really excellent
> I'd love to see that. I'd especially love to see it stabilize and simplify enough that we have a multitude of servers, clients and bridges that are stable and useful.
The two are mutually exclusive. We can't simultaneously focus on getting a small number of implementations excellent... while also working on a multitude of implementations. Instead, we can make sure that the implementations that the core team works on at Element are great, solve problems, simplify things and unlock everyone else to be able to build better ones too - which is precisely what we're doing.
How does PouchDB deal with conflicts when syncing to CouchDB?
~~Since the docs mention this being a bit of an open problem a decade ago when dealing with online clients and servers (https://writing.jan.io/2013/12/19/understanding-couchdb-conf...) I'd imagine it's a bit more complex with many intermittently online clients.~~
What's the reason to build your own command structure for filename, author, etc. instead of using the existing git tools like `git format-patch` and `git am` for applying patches?
There is an existing, well-established patch email workflow which git natively supports. Using that, and extending it when necessary, would make more sense.
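For reference, the stock workflow round-trips a commit through a mail-formatted patch file. A self-contained sketch using throwaway repos (all paths and names made up for illustration):

```shell
set -e
work=$(mktemp -d)

# "Contributor" repo: one commit to share.
git init -q "$work/contributor"
cd "$work/contributor"
git config user.email contributor@example.com
git config user.name Contributor
echo 'hello' > notes.txt
git add notes.txt
git commit -qm "Add notes"

# format-patch writes one RFC 2822 email per commit: subject,
# author, date, commit message, plus the unified diff.
git format-patch -1 HEAD --output-directory "$work/outbox"

# "Maintainer" repo: git am applies the mail as a real commit,
# preserving authorship and message.
git init -q "$work/maintainer"
cd "$work/maintainer"
git config user.email maintainer@example.com
git config user.name Maintainer
git commit -qm "Initial commit" --allow-empty
git am "$work"/outbox/0001-*.patch
git log --oneline -1
```

In practice `git send-email` handles the delivery step, but the patch file can also be attached or pasted into a mail body by hand.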
No, but you need to be able to write a well-formatted diff, which is pretty much the minimum requirement for sharing changes.
Emailing the entire new file every time will not work in anything but local tests, as it cannot handle change conflicts. The format needs to know which parts of the file you changed, so your change does not accidentally undo other changes merged ahead of yours.
I mean the format for selecting paths and so on. Git already has a standard patch format for sending changes via email and a builtin utility for applying them to a repo, so why not use that format?
For the ChatGPT workflow mentioned, I'm guessing ChatGPT can generate the email body for you in the correct format, but sure, that's an extra step over just copy-pasting the raw text.
> You're leaving snarky comments while I'm recording positive videos of me testing people's products.
I asked what you meant by "test" since it did not seem to match how I understand that word.
> stop writing snarky comments, download loom, record some tries and hop on the positivity train! Let's build a better world together.
I don't think that false positivity helps us build a better world.
I asked about the link since it is clearly not a widely used site, it is not the site actually hosting the content (it's a youtube embed), and it is a site listed in your bio. That sounds like trying to astroturf-market your site via comments.
If it was just about positivity and not the site I'm guessing you would have responded about the content of the video.
I criticized the content (saying it wasn't a "test" and asking for your definition of "test") and questioned the chosen medium (linking a youtube embed on an uncommon site that you seem to have an interest in marketing). You responded to neither in a substantive way.
It's a bit disappointing that articles talking about HTML use JSX/React syntax instead of actual HTML (even more so without saying so). Example from the article:
I was once discussing a third-party integration with a React developer. The integration required that our app POST a couple of fields to the third-party's site. I found that the developer was struggling with the integration and they were asking me questions about it when I said something to them along the lines of "It's just an HTML form, with a couple of hidden inputs that when submitted make a POST request to this URL" they said to me "Yeah, well HTML is kinda old, it's not really used anymore"...
I'm sure I've said plenty of stupid things when I was green but I hope no one remembers them like I remember this one. It lives rent free in my head.
It's definitely true that many developers would benefit a lot from learning more about basic HTML and the web platform. But I refuse to support the notion that this is somehow React's fault.
In my personal experience, react allowed me to rely on native web platform APIs more, not less, than other frameworks did (at the time that I switched to react).
> they said to me "Yeah, well HTML is kinda old, it's not really used anymore"...
> I'm sure I've said plenty of stupid things when I was green but I hope no one remembers them like I remember this one. It lives rent free in my head.
I'm doing gig work while my product is gaining traction. Last week, I received this verbatim rejection for a PR review at a client whose oldest developer is 27:
"No, we don't want all the logic in the same place. You must split the logic up"
This is also one that will take up valuable space in my head till the end of time :-(
(PS. Yes, I split the logic so that it was evenly balanced in 3 different programming languages, instead of having the entire logic in C# alone)
The confusion in the article is so complete that I'm left wondering whether or not the author is aware that what they are writing is not, in fact, HTML.
Yeah, I am aware! Thank you for the concern :) I did address this in an adjacent comment, but I'll say again that I did contemplate whether to use JSX or not. Also yes, it may have been a good idea to add a disclaimer that the code I'm showing is JSX, but honestly I had so many other disclaimers in mind that all of them together would make the article twice as long and much more boring.
It's exacerbated by the fact that the API they propose to make custom validation more ergonomic works for React, but would be much worse for plain Javascript and HTML.
The API I'm proposing would indeed bring much more benefit when used in a declarative way. That's the point I'm specifically trying to convey in the article.
I don't think I understand how it would be "worse" for plain JS and HTML though. Would love to hear your thoughts.
Actually, there is one possible concern. When HTML returned by the server includes inputs with a set "custom-validity" attribute and this HTML gets opened by a browser with JavaScript disabled, the input would be "stuck" in an invalid state. This is an important edge case to consider, but I do believe there is a resolution that would satisfy everyone.
`custom-validity={value.length ? "Fill out this field" : ""}`
In plain HTML you can only use a static string for an attribute. So you'd need an event handler to set custom-validity from javascript when the input value changes, then a handler to call setCustomValidity when custom-validity changes.
In other words, it's the exact same imperative interface as setCustomValidity, except with an extra layer, and the extra event handling has to be implemented along with it.
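Spelled out in vanilla JS, that imperative wiring looks roughly like this (the element id and the validation rule are made up for illustration):

```javascript
// Pure rule: return a validity message for a bad value, "" for a good one.
// (Hypothetical rule, mirroring the "Fill out this field" example.)
function usernameValidity(value) {
  return value.length === 0 ? "Fill out this field" : "";
}

// Imperative wiring: on every input event, recompute the message and push
// it into the constraint-validation API via setCustomValidity(). An empty
// string marks the input as valid again.
// (Guarded so the snippet also loads outside a browser.)
if (typeof document !== "undefined") {
  const input = document.querySelector("#username"); // hypothetical id
  input.addEventListener("input", () => {
    input.setCustomValidity(usernameValidity(input.value));
  });
}
```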
If I had a say, I'd go for an interface where custom-validity would take a javascript function name, and that function would take the value as input and optionally return a string with a validity message. Or it could take a javascript snippet like onclick, except one which returns an optional string value. Then again, there wouldn't be much difference from onchange.
Edit: to counterbalance some of the criticism, I think the article is very nicely written and formatted, and the interactive components are a good touch.
Sorry to disappoint, I did hesitate over this. But JSX is honestly very nice to read, and I also didn't want to leave the impression that opting in to native form validation somehow forces you to not use javascript. And the combination of javascript + html is, again, very nicely expressed with JSX.
The concepts are obviously easily translated to other component frameworks, but they also do apply to pure HTML and vanilla javascript.
The problem I am highlighting in the article is the absence of a declarative nature to the Custom Validity API, so I think it makes sense to use a declarative component approach in the code examples.
HDMI might be a bit more complex, but displayport should be doable since most devices use embedded displayport (eDP) for their built-in displays anyway. I'm guessing the main cost would be adding a mux chip to switch between the external and internal source.