CSS Paint API: New possibilities in Chrome 65 (developers.google.com)
57 points by syrusakbary on Jan 19, 2018 | 57 comments



> Note: As with almost all new APIs, CSS Paint API is only available over HTTPS (or localhost).

This feels ridiculous. I understand why certain features would be gated behind HTTPS (camera access, stuff like that). But this feature doesn't look dangerous at all. I don't always want to deploy HTTPS. What about local websites (not localhost, but LAN)? Am I not allowed to use those APIs then?


Mozilla just announced that they mandate secure contexts for all new features: https://blog.mozilla.org/security/2018/01/15/secure-contexts...

There is wide consensus that, while there are certainly trade-offs, this is a good thing.


If you went to the trouble of setting up a certificate and enabling HTTPS for something, why not just enable it across the board for the entire site? I think in 2018 a non-HTTPS site is just lazy. Personal opinion, but it is 2018 after all. I understand large sites, blah blah blah, but any new site seems like a no-brainer to me.


From a purely technical perspective I agree. There's no reason this has to be behind HTTPS.

I don't think this was done for technical reasons though. Rather, it's an attempt to drive further adoption of HTTPS with the ultimate goal of phasing plain HTTP out of mainstream use entirely.

For websites on LAN you can either install your own root cert on client devices, or assign the site a public domain name (even if the site itself isn't accessible from the public internet) and get a publicly-trusted cert that way.


> install your own root cert on client devices...

That’s a pain in the ass just to test a local website.


If you're just testing on localhost, there should be no need for a cert. As the article says, localhost is treated the same as https here. In the future, I expect all features that require a secure context will start treating localhost as secure, if they don't already.


I usually have 2 or 3 devices that I need to test a website on, so no - localhost alone doesn't help at all. For example: I run the website on my workstation and I want to test it on my iOS and Android devices.

With this recommendation, I have to generate a certificate and install it everywhere just to test simple functionality.

Since Google recommended the ".test" domain for internal testing, that's what I use. In the past I had used ".dev" (now owned by Google) and ".local" (now used by Apple for Bonjour networking) to set up test domains on my LAN. So, Google should allow these features through for ".test" at least.

Honestly, they should make it a flag that developers can toggle at the very, very least.


--unsafely-treat-insecure-origin-as-secure="http://example.com"

https://www.chromium.org/Home/chromium-security/deprecating-...

Or just use a self-signed cert and click through the security warning.


The first recommendation doesn't work on iOS or Android.

The second, well, who knows if it will work today or tomorrow. And, it's still a pain in the ass.


"Https only" ensures Google's future (via AdSense, Double click, Chrome, Android, and Analytics) as the only sort of MITM that sees what mostly everyone is doing.

While the https movement has benefits, don't misunderstand Google's interest as altruism. Good cause, murky motivation.


Or maybe Google and Mozilla actually want to make the web more secure? Not everything has to be a conspiracy.


I would hardly call it secure. There have been too many failures with certificate authorities. The whole system is dependent on that weak point.

Also, do you keep track of the certificates issued to every website you visit? Then if you visit the website and notice the certificate has changed, do you check if it's a legitimate change, etc.? If you don't keep track of certificates, how do you even know you are not being MITMed? You trust the certificate authority to handle that. However, certificate authorities clearly have signed or given out bad certificates.

Further, it only takes one bad certificate authority to compromise the entire system.

If the browsers really cared they would make sure that all login mechanisms use something like J-PAKE or a private key. That way, if a certificate authority were compromised, the attacker would also have to know either your password or private key.

Further, not everything requires an encrypted session. It just adds overhead for no good reason. If you're worried about a MITM injecting a zero-day, it would help if the web standards did not keep getting more and more complicated, increasing the attack surface. All you need there is just a simple signed hash of the page. Some things might not even need that.

--EDIT-- I went on a bit of a ramble there. If we want to be Mr. Conspiracy, I would say Google likes adding more and more features because it makes it harder for competing browsers to exist. Not only that, let's consider HTTP 2.0. The standard does not require TLS, but Firefox and Chrome will not talk to an HTTP 2.0 web server without TLS. The large market share of Chrome, for instance, makes you unable to decide to run a plaintext HTTP 2.0 web server even though TLS is optional according to the standard. Even though Chrome is not the standard as specified by the standard document, I am forced to comply with how Chrome and Firefox are doing things. They can effectively make their own standards since there are so few browsers. That would be my Mr. Conspiracy reason why Google would want to require TLS. Because, let's face it, building a secure crypto stack that meets all the requirements for TLS is not easy. So that just increases the barrier to entry.


>I would hardly call it secure. There have been too many failures with certificate authorities. The whole system is dependent on that weak point.

>Also, do you keep track of the certificates issued to every website you visit? Then if you visit the website and notice the certificate has changed, do you check if it's a legitimate change, etc.? If you don't keep track of certificates, how do you even know you are not being MITMed? You trust the certificate authority to handle that. However, certificate authorities clearly have signed or given out bad certificates.

Still better than anyone being able to inject scripts when you're on an untrusted network. The requirement for a compromised CA drops the interception risk for most users to 0.

>Further, not everything requires an encrypted session. It just adds overhead for no good reason.

Overhead of what? Maybe a few hundred milliseconds on a 3G connection?


>The whole system is dependent on that weak point.

A single weak point does not strike you as a bad idea?

>CA drops the interception risk for most users to 0.

The risk is not zero for a compromised CA. It's happened before.

>overhead of what? maybe a few hundred milliseconds on a 3g connection?

Sure, hardware resources are cheap these days. There is more than just hardware resources, though. People have to manage that system as well. Further, it makes caching impossible or hard to accomplish. Also, this new feature is a clear example of how we got into this mess. We keep adding onto what was supposed to just be document markup. It's now got a Turing-complete language that you have to worry about. Plus a whole host of other issues.

I will be honest, so much of the system we use these days is deeply flawed or just a hack. The fact that we got to where we are is surprising.

However, in Firefox, for instance, I can't even go into about:config and enable non-encrypted HTTP 2.0. Firefox is telling me they know better. I don't want software forcing decisions down my throat as a user. Sure, I could in theory code up what is required to allow me to do so (yay open source). However, Chrome and Firefox both do not allow HTTP 2.0 without TLS. What are the chances a web server will respond to my plaintext request? So does it matter if I code up support for plaintext HTTP 2.0? At least with plaintext HTTP I can craft a request by hand and get a response. I could do the same with plaintext HTTP 2.0, but not many servers are configured to allow that or even support it.


Too late. The Web is well on the way to becoming the next operating system. Very soon the local OS will mostly just open web frameworks like Electron to run nearly all our apps. This approach has clear benefits for software developers and corporations. Just like there are only 3 mainstream OS vendors, there'll only be 3-4 mainstream browsers in the future. There won't be another Linus for OSes and there won't be another Mozilla for browsers, at least not for PCs and general-purpose computing devices. The slope has become too steep.


The certificate transparency initiative is going to make MITM really hard.


Unless the majority of sites choose to run your JavaScript, like AdSense and Analytics. Not much need for actual MITM when you are "on the page".


> I would hardly call it secure. There have been too many failures with certificate authorities. The whole system is dependent on that weak point.

Just because a system has flaws doesn't mean you should be using an even more flawed system like HTTP. With HTTPS, attackers can't read or modify traffic. If you really think certificate authorities are that unreliable, you can choose which certificates you trust yourself to avoid spoofing.


Of course you can keep track of certificates. Honestly, the more people did that, the better.

However, my problem is the browsers are forcing choices on people that are not part of the standard. They are acting as if they know best. They are also making it harder for advanced users to alter these decisions. Try to enable plaintext HTTP 2.0 in Firefox? No option exists in about:config.

Further, I don't see an option to disable Firefox's secure-contexts requirement for new features. It might exist, but I may need to look more.


It's not a conspiracy to suggest Google wants to know what sites/pages we are all visiting, and that being the only entity with a way to do that broadly helps them. No need to do actual MITM when a majority of sites are running your JS via AdSense or Analytics.

HTTPS-only shuts down the ability for ISPs and mobile carriers to do that. And many of them were doing so. I suspect that backbone providers were doing it as well. A good thing to shut down, sure, but it's also good for G.


I've written some prototypes using Paint Worklet that run in both Safari and Chrome.

Safari has an experimental API called -webkit-canvas that's very similar to Paint Worklet. You can use it to write code that runs in either engine pretty easily.

I've just thrown together a gist outlining how to do so:

https://gist.github.com/appsforartists/e5d2a4b7826bf5962fad1...

It's not quite a polyfill, because -webkit-canvas doesn't automatically repaint when you change an input property. Otherwise, it would be pretty simple to write one.
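Very roughly, the shared-painter approach looks something like this (a minimal sketch, not the gist itself; the "checkerboard" painter and sizes are made up for illustration):

    // Shared painter: only needs a 2D context and a size, doesn't care
    // which API created the context.
    function paintCheckerboard(ctx, width, height, cell = 16) {
      ctx.fillStyle = '#888';
      for (let y = 0; y * cell < height; y++) {
        for (let x = 0; x * cell < width; x++) {
          if ((x + y) % 2) ctx.fillRect(x * cell, y * cell, cell, cell);
        }
      }
    }

    if (typeof registerPaint === 'function') {
      // We're inside a Paint Worklet (this file was loaded via
      // CSS.paintWorklet.addModule(...)): the engine owns the canvas and
      // calls paint() whenever it needs pixels.
      registerPaint('checkerboard', class {
        paint(ctx, geom) {
          paintCheckerboard(ctx, geom.width, geom.height);
        }
      });
    } else if (typeof document !== 'undefined' && document.getCSSCanvasContext) {
      // WebKit's non-standard CSS canvas: we create the context and repaint
      // it ourselves whenever our inputs change.
      const ctx = document.getCSSCanvasContext('2d', 'checkerboard', 300, 150);
      paintCheckerboard(ctx, 300, 150);
    }

    // CSS side:
    //   background-image: paint(checkerboard);          /* CSS Paint API */
    //   background-image: -webkit-canvas(checkerboard); /* WebKit        */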

I may write a proper tutorial at some point. Hopefully this gives other interested folks a push in the right direction in the meantime.


We already had this all the way back in 2008. It's called WebKit CSS canvas! Iconapp.io is using that heavily!

background: -webkit-canvas(mycanvas);
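For anyone who hasn't seen it before: the named canvas comes from WebKit's non-standard getCSSCanvasContext, and you repaint it yourself (a rough sketch; the size is arbitrary):

    // WebKit-only, non-standard: get a 2D context for a CSS-addressable
    // canvas named "mycanvas", 200x100 CSS pixels.
    const ctx = document.getCSSCanvasContext('2d', 'mycanvas', 200, 100);
    ctx.fillStyle = 'tomato';
    ctx.fillRect(0, 0, 200, 100);

    // Any element with `background: -webkit-canvas(mycanvas);` now shows
    // this. Repaints are manual: draw again whenever your inputs change.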


With the CSS Paint API you instantiate a new canvas-like object every time you reference the painter, while this was referencing one specific instance of a canvas, right?


I don't know the internals of either.

In a Paint Worklet, the canvas is generated for you, and your painter is automatically called every time an input property changes.

In -webkit-canvas, you manually instantiate the canvas and manually repaint it.

It's the difference between reactive and procedural programming (even though the actual painter is procedural in either case).

Here's a gist I threw together showing how to use the same paint function with either API:

https://gist.github.com/appsforartists/e5d2a4b7826bf5962fad1...
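For the Paint Worklet side specifically, the reactive bit is wired up through inputProperties — something like this (a sketch; the "ripple" name and --ripple-* properties are made up for illustration):

    // Inside the worklet module: listing inputProperties tells the engine
    // which CSS properties this painter depends on, so changing any of them
    // on the element invalidates the paint and re-runs paint() automatically.
    registerPaint('ripple', class {
      static get inputProperties() {
        return ['--ripple-color', '--ripple-radius'];
      }
      paint(ctx, geom, props) {
        const color = String(props.get('--ripple-color')).trim() || 'dodgerblue';
        const radius = parseFloat(props.get('--ripple-radius')) || 0;
        ctx.fillStyle = color;
        ctx.beginPath();
        ctx.arc(geom.width / 2, geom.height / 2, radius, 0, 2 * Math.PI);
        ctx.fill();
      }
    });

    // Main thread: a style mutation is all it takes to trigger a repaint.
    //   element.style.setProperty('--ripple-radius', '40');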


Yes, the only downside is that you have to procedurally instantiate a canvas. You get to control when the render event should happen, though. With the CSS Paint API, it feels like declaring a WebGL shader.


You've always been able to use a canvas normally and then create a URL to it using toDataURL. Alternatively, toBlob then createObjectURL, which I think might be more performant than a data URL, which requires encoding/decoding base64.
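Something like this, for anyone who hasn't tried it (a rough sketch; the .target selector is made up):

    // Pre-Paint-API approach: draw to an offscreen canvas, turn it into a
    // URL, and point background-image at it.
    const canvas = document.createElement('canvas');
    canvas.width = 300;
    canvas.height = 150;
    const ctx = canvas.getContext('2d');
    ctx.fillStyle = 'rebeccapurple';
    ctx.fillRect(0, 0, canvas.width, canvas.height);

    const el = document.querySelector('.target');

    // Option 1: data URL (base64-encoded image baked into the style).
    el.style.backgroundImage = `url(${canvas.toDataURL()})`;

    // Option 2: blob + object URL (skips the base64 encode/decode).
    canvas.toBlob((blob) => {
      el.style.backgroundImage = `url(${URL.createObjectURL(blob)})`;
    });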

So I guess the main thing added is convenience? Am I missing something here? Adding CSS that requires corresponding JS code is also new territory, isn't it? (I can't think of anything else that does this.)


CSS Houdini is a multi-vendor effort to make more aspects of CSS layout scriptable rather than all having to be specified ahead of time.

https://ishoudinireadyyet.com/

This gives a much greater ability to iterate rapidly on new layout and styling details, polyfill for specifications that aren't widely implemented yet, and hook into some parts of the browser stack that you previously just had to use as is and trust that if you need a new feature, it would eventually be specified and/or implemented.

I think this article does the best job of explaining the whole effort:

https://www.smashingmagazine.com/2016/03/houdini-maybe-the-m...


Within the context of Houdini, the Paint API is less weird to me now.

There are so many warts in CSS that I really wish I could customize. For example, if I wanted to add an outside-aligned instead of center-aligned stroke to text, you can only accomplish it with lots of hacks: https://css-tricks.com/text-stroke-stuck-middle/ https://css-tricks.com/adding-stroke-to-web-text/. Using pseudo-elements hidden behind, a circle of text-shadows, etc.

I wonder if there's anything in the Houdini project that will let me fix this. This Paint API is useless to me since it doesn't give me the original contents of the DOM being styled (so I would need to redraw the text, style it, and lay it out on my own). The solution I ended up with is using SVG filters: https://www.smashingmagazine.com/2015/05/why-the-svg-filter-.... But a unified solution to customizing CSS without needing to call it a hack would be nice.


There are a few extra performance tricks we can do behind the scenes here.

- Chrome's implementation won't actually perform a raster at this stage; instead we'll raster later on a background thread. toDataURL, etc. will raster.

- We'll only invoke the paint method for things which we actually need to paint. E.g. if you have a long list, we won't invoke the method for things that are outside the paint window (things off-screen).

- We'll have the ability to add a bunch of caching later on. From memory, the Servo folks in their experimental implementation actually speculatively generated a bunch of images ahead of time.

- We are actively investigating moving this to a background thread. So even if your main thread is janked, an animation which triggers a paint worklet will still run at 60fps.

edit - formatting.


I believe this could avoid at least one copy of the canvas's buffer from the graphics card to the CPU and back again - which might be useful for animation performance?

Also, yeah, it's a more "obvious" way of doing this - instead of rendering to an offscreen canvas and converting that to an image, just render to the right place in the first place.


It says that it'll re-run the code on resize, which is a lot easier than using a canvas manually then listening for window.onresize or something. And I suppose supplying the parameters in CSS instead of via a separate JS call is nice? Also, the mention of rendering off-thread in the future is pretty great, and not possible with current methods.


Embrace, extend, and extinguish. Either that or developer ergonomics. Most likely the latter.


Developer ergonomics is good. I still remember the days when you had to use images for shadows and rounded corners. Now it's just one line of CSS.

But here it doesn't feel very ergonomic. You still need to write that JS code! Reading into it some more, I think the only benefit it adds is state management (re-render on parameter change), but I don't really want/need CSS doing that for me.


> As of now, text rendering methods are missing and for security reasons you cannot read back pixels from the canvas.

Does anyone have an idea as to what those reasons might be? I've heard of JavaScript access to certain CSS features being limited (e.g. getComputedStyle()), but I'm not sure what the benefit is here. Is there any way that user information could be leaked through a paint worklet context?


We made a mistake in the article there (there aren't any security issues with pixel read-back here). I'll ping Surma on Monday to get it fixed.

The primary reason we did this is to ensure there wasn't a performance cliff if you did read back pixels. With the current API surface you can record all of the canvas commands and play them back when you need to raster. Additionally, it doesn't leak how many pixels we are actually rastering.

(hope this helps).


Fingerprinting the user is the main reason why.


canvas/font fingerprinting


I always find it interesting that Chrome is charging ahead full steam on experimental APIs like this while most other browsers have shown no intent to implement them yet.

Did Chrome come up with Houdini? Are they being brave or pushy here?


It's a joint W3C Technical Architecture Group and CSS Working Group initiative; you can find a lot of details in this post from the Opera developer blog [1].

[1] https://dev.opera.com/articles/houdini/


Author here :)

Houdini is a task force that consists of people from Apple, Mozilla, Microsoft, Chrome, even IBM and Samsung. It’s by no means a Chrome-only thing or us being pushy.

Servo has an experimental implementation, but it’s the part of Servo that hasn’t been merged into FF. All participating browser vendors have given very positive signals about CSS Paint API.

Check http://ishoudinireadyyet.com/ to stay up to date :)


Thanks for the info. Love your youtube videos by the way :)


> Did Chrome come up with Houdini?

Most certainly not. I know people at Mozilla who've been working on Houdini-related features for a while now (cf. https://wiki.mozilla.org/CSS/Houdini). Chrome is just the first to release here.


The second demo doesn't work for me, I think it's missing demo2.js?

https://googlechromelabs.github.io/houdini-samples/paint-wor...


... my bad. Last minute corrections made it worse. Will get it fixed ASAP.


Why isn't text rendering supported?


There are some odd threading issues with rendering text in OffscreenCanvas:

https://bugzilla.mozilla.org/show_bug.cgi?id=801176#c29

Given the post's mention of running this stuff off-thread in the future, I suspect it's related to that.


Another day, another wart on the modern web stack.


Maybe so, but this is by now a cliché. Please don't post unsubstantive comments to HN threads.


Distinguish wart from feature


Quit bloating the web...

Also, I am sick of the HTTPS everywhere movement. What if I, for whatever reason, don't want to use HTTPS?

We have so few browsers that basically whatever they do is the de facto standard.


Like, they keep adding feature after feature. Implementing a browser from the ground up just gets harder and harder. It also makes cross-compatibility more difficult.

Instead of adding more and more things, maybe we should restructure. Instead of piling onto what was supposed to be text markup, maybe design something more conducive to fancy interactive graphical objects.


I am sorry that you are sick of the HTTPS everywhere movement.

But forcing your sites to use HTTPS will also prevent your users from unwittingly participating in DDOS attacks on other sites (e.g. https://en.wikipedia.org/wiki/Great_Cannon). Consider it herd immunity.

Also, to respond to some of your other anti-HTTPS comments:

Regarding overhead: people are also working hard to minimize the amount of overhead inherent to TLS. For instance, TLS 1.3 will establish an encrypted connection in a single round trip, and is capable of resuming encrypted connections in zero round trips with application opt-in (see https://blog.cloudflare.com/tls-1-3-overview-and-q-and-a/). The encryption itself has fairly ubiquitous support in hardware, making it ridiculously fast.

Regarding CAs: with HTTP you are implicitly relying on the honesty of everyone in the network path. With HTTPS you are implicitly relying on the honesty of the intersection of a) people in the network path and b) people who control a CA. This is strictly fewer people than with HTTP. People are also working hard to solidify our faith in set (b) by requiring Certificate Transparency for all new certificates, thereby ensuring that misbehaving CAs can be detected and drastically raising the cost of mounting a CA-based attack.

You say "What if I for what ever reason don't want to use HTTPs", I you'll have to layout some of those reasons explicitly. You'll probably find that people are working on all of them.

In general, the default expectation on the web should be encrypted and authenticated (i.e. only the two endpoints can read/write the data). Once we live in that future, asking for the ability to allow plaintext network traffic will seem a lot like asking modern programming languages to explicitly allow buffer overflows. The language designer would be justified in saying "No" and ignoring you. The considerate language designer might ask "why would you want that" and try to address your real need. But they would still never actually give you what you ask for. This may be "taking away choice" in the same sense that mandating airbags is "taking away choice", but people shrug and accept it because the baseline has moved.


The biggest problem is JavaScript. Netscape made a huge mistake when they added that to their browser. If we wanted to download code on the fly, it should have been a separate format from HTML. HTML is supposed to be a document. Sadly, that boat sailed decades ago. I really wish the standards committees would quit adding what are basically hacks to what was supposed to be a bunch of documents, and instead create a separate format more conducive to that goal.

I suppose you could say that creating a secure web context is kind of a way of doing that: separating documents from more interactive content. However, freaking HTML is a horrible way to structure such a system.

Also, the CA is a single point of failure. A MITM only affects the users along that routing path; a CA failure can affect the entire web...

I agree the default is fine, but the user should be able to change those defaults. I could always compile a version of Firefox or Chromium that does. However, it's kind of ridiculous that I don't see anything for this in, for instance, Firefox's about:config.

--Speaking of programming languages-- Honestly, I don't see any language succeeding that only allows code with runtime checks to run. It's why Rust has the unsafe keyword: to override those checks. The reason is that there are lots of hardware devices where the data from the device will vary in size, and there is no way the compiler can know how all these devices work. The unsafe keyword allows Rust to be a systems programming language. Heck, even C requires you to revert to ASM at times when working with hardware: the generated code may not meet some strict requirements set by the hardware, or you need to set up the environment so C code can run.

Honestly, the biggest problem I see with programming languages is that too many of them try to be general purpose. Domain-specific languages are great. If the language is designed well for the problem space, it can lead to well-written, concise, and understandable code. If your problem domain is working with hardware, you need raw pointers, memory access, and accurate timing. If your problem domain is altering images, you probably want easy-to-use vector operations, handling of regions, etc.


Wat


Try to implement the rendering engine of a web browser. The standards keep growing. A much simpler model could have been drafted for web content. Instead we keep hacking onto HTML, adding things like CSS and JS, and then we keep adding onto those add-ons. Clearly, people are building things that are not documents. So why do we keep adding hacks to this document model?

Further, when you have Google, Mozilla, etc. saying you can't use a feature without HTTPS, they effectively make HTTPS part of the standard even though it's not written in the standard.

For instance, let's say I wanted to run an HTTP 2.0 web server. The HTTP 2.0 standard does not require encryption. However, because none of the browser vendors support HTTP 2.0 without TLS, I must set up TLS to make my server accessible. It's not part of the standard, but the big browser providers effectively make it part of the standard. I don't have a problem with crypto; I just hate the browsers forcing it to be basically part of the standard when it should be optional.


The problem is that if it's optional, lazy/ignorant developers will never implement it, even for sites that actually need it (pretty much any site that handles logins). Requiring TLS for new features is just another nudge that browser developers have put in place to encourage TLS adoption.


However, I as a user should be able to access the site if I want to, despite bad web developers.

For instance, in Firefox I can't even go into about:config and enable plaintext HTTP 2.0. They are trying to make choices for me.



