
I have to admit I'm not really sure myself. It was posted on r/haskell, where there have recently been several posts about some kind of less-is-more approach to writing Haskell (for example [1]).

I would say it's a kind of informal manifesto for writing Haskell code that doesn't use too many advanced features. The C++ community had, and still has, similar discussions in which people banned specific features and agreed on (admittedly different) subsets of the language. These endeavors have been more or less successful, but in general I agree that there is a danger in having an overly powerful language, where people go overboard and write code that is hard to read.

[1] https://www.reddit.com/r/haskell/comments/eg25xt/a_plea_to_h...


> If using GPLv2 means companies stay away from the project instead of building on top of it, that's not a win for anyone.

Like companies staying away from Linux or GCC? It's debatable whether these projects would have been as successful under MIT/BSD.


My advice for people who want to use SSH on Windows is to install Git, which ships with Git Bash and SSH.


In my experience WSL is much simpler and more flexible than that. Bash, git, ssh, tmux, and any other Linux utilities you want.


No, the data is from official government sources. OSM data is good, but usually not as good as official sources.


> [...] aren't UNIX applications

What kind of argument is that - so what?

All major Linux distributions are doing very well with systemd, and it makes sense to have some kind of supervisor service (or collection of tools) on top of `init`. You can still tie your applications together with shell scripts and not run systemd. Since systemd, administering different Linux boxes has become simpler - at least that is my personal experience - before it, every distribution shipped its own shell scripts.

Systemd is fine, and if it is so inferior to simple POSIX applications glued together, people should provide a better alternative. The grievance people hold against Lennart is overblown - we should thank him for his contributions.


> [...] Idris is a lot more practical than I thought.

This may sound like a stupid question (but is probably necessary in a thread about the convergence of proof systems and statically typed functional programming languages):

"How can i implement something like unix' wc?"

So just a program that counts the lines of a file (forget words and bytes for now). This means we need support (at least in some kind of standard library) for basic IO stuff like files, processes and environments. If a language only talks about how "simple" (in terms of code/term length) it is to define a binary tree, the integers >= 0 (the famous Nat = Zero | Succ Nat), or - beware - the Fibonacci sequence, most people won't be impressed.

Because that's what is associated with "programming language" (as compared to "proof assistants").

Sorry, that sounds a bit like a rant, but after 5 minutes of searching I couldn't find an example of how to read a file in Idris. The closest I found was http://docs.idris-lang.org/en/latest/effects/impleff.html but it did not tell me how to open a file ...


I don't know about Idris, but the basic idea would be to transform the file's type from (a|b)* into (a* b)* a*, where b is a newline and a is any non-newline character. Then your answer simply counts the first star of the transformed type (change that star into a counting exponent as you form the type).


Yeah ... but how does that look? A "naive" wc implementation (counting the character '\n') is <50 lines of code (if we have modern file IO in the standard library) in a practical PL.
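To make the size claim concrete, here is a minimal sketch of that naive version in a practical PL (Haskell here, not Idris; just the line count, lazy IO for brevity, and the bare-minimum usage handling):

    import System.Environment (getArgs)

    -- Naive "wc -l": count newline characters in the named file.
    main :: IO ()
    main = do
      args <- getArgs
      case args of
        [path] -> do
          contents <- readFile path
          print (length (filter (== '\n') contents))
        _ -> putStrLn "Usage: ./wc FILE"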

But maybe the gap is too big between the concept of "files" as seen by OS developers (who view them as byte blobs in a hierarchical naming system) and by language designers (who want to represent a file as a collection of lines, and would probably prefer that your command-line interpreter not allow calling "wc" on a file that contains non-UTF-8 data) - but that is still a pretty big step for our current computing environment (where I can call programs by name that are themselves files).

For many people (including me) a "practical" programming language needs an implementation that can generate (compile) an executable or interpret a file representation of its source. Otherwise it's a proof assistant.

Maybe we have to merge the concept of data in terms (no pun intended) of in-memory bytes (types?) and collections of bits and bytes that have a lifetime longer than the program (if it's not a daemon)?


I'm still a beginner at Idris, but here's my attempt at a "naive" wc:

    module Main

    wordCount : String -> IO ()
    wordCount file = let lineCount = length $ lines file
                         wordCount = length $ words file
                         charCount = length $ unpack file in
                         putStrLn $  "Lines: " ++ show lineCount
                                 ++ " words: " ++ show wordCount
                                 ++ " chars: " ++ show charCount

    main : IO ()
    main = do (_ :: filename :: []) <- getArgs  | _ => putStrLn "Usage: ./wc filename"
              (Right file) <- readFile filename | (Left err) => printLn err
              wordCount file
It counts lines, words, and characters. It reports errors such as an incorrect number of arguments as well as any error where the file cannot be read. Here are the stats it reports on its own source code:

    $ ./wc wc.idr
    Lines: 14 words: 86 chars: 581
Hope that helps.


Wow, thanks for the response - that actually looks like Haskell, and if its performance is not abysmal (let's say not much slower than Python), that is pretty cool.

This actually looks quite reasonable - usage string included, I like it. Hats off to you, I will try to compile it to a binary now and do some benchmarking.


Is it really fair to call this an implementation of wc if it is just using a built in function for each case?


These are reasonable functions to have in a standard library, and since they exist it makes sense to use them. I rather feared Idris had no IO abstraction for reading/writing files at all. Maybe I am conflating languages and libraries here, but they often go hand in hand.

My practicability expectations for languages implementing dependent types are pretty low.


If you implement it by manipulating the types directly, it's a few operations with a loop (just for counting lines, mind you). Again, I'm not sure how this would be done in Agda, Idris, or Coq, and the system design is a bit different from other type systems.

But also, any discrete one-dimensional problem like wc is going to benefit from a type system with types that resemble regular expressions.


Do you mind sharing your SAAS?


No because it’s too easy to clone.


Have you thought about open sourcing it?


I won't even share its name, why would I open source it? Lol. It makes a decent amount of money and requires close to zero engineering know-how to build. So no, I will continue to let it make money quietly...


Although I am a customer of some streaming services (they are convenient and have good and bad stuff), it is often nice to just get a file for a specific song or movie - and that is most convenient via torrenting.

People do sign up for streaming services, but not for, like, 10 of them. Furthermore, torrenting has become really convenient and is very fast with an adequate Internet connection (let's say 10 MByte/s), so you get a decent-quality movie in under 5 minutes (obviously only if there are enough seeders - but the availability of torrents completely dwarfs that of streaming providers; if something is really unpopular and maybe a little bit older, you just won't find it on streaming services).

Besides the fact that BitTorrent is an interesting protocol in itself, imagine how much more simply Netflix or Spotify could be implemented if we didn't stream DRM-encrypted blobs but downloaded files instead. You would just need many big, fat file servers with your media on them - without DRM (AFAIK all streaming providers enforce DRM), this is technically a solved problem.


As a streaming-industry software engineer: streaming, rather than plain file delivery, is done not for DRM but because of distribution cost and user experience. Segmented ABR video delivery massively decreases CDN costs (which is why YouTube, which is DRM free, does it). Video startup times, seek times, thumbnail scrubbing, fast forward, clip previews, ad insertion and many other things don't work in a file based experience.

In addition, subtitles, secondary audio, descriptive video, multi-view video, etc. are all things that are mandated by law and that do not work well in a file-based experience.

Peer-to-peer as a distribution method is not only known, but there are plenty of providers that use peer-to-peer-like streaming setups (see https://streamroot.io/streamroot-dna/). You may be using something like BitTorrent and _not even know it_.

Distribution is 1/10th the story and DRM is only a small part of it as well. BitTorrent does nothing to solve the other 9/10ths and removing DRM doesn't either.


> Video startup times, seek times, thumbnail scrubbing, fast forward, clip previews, ad insertion and many other things don't work in a file based experience.

Not a single one of these things is true. In fact, file-based delivery offers a superior experience in several of them with proper implementation. Streaming is popular because it lowers distribution costs, decreases piracy, and allows rights holders to pull content whenever they want.

Personally, I hate streaming. Buffering sucks. Bitrates suck. Audio quality sucks. Never knowing how long something I like will actually be available sucks. It's just an all-around bad experience if you care about your media.


Video startup times are kept low in modern clients by starting at low bitrates and then seamlessly adapting upward. This is very important for advertising companies, social media, live streams, etc. Every second of startup time costs massive user loss in live streaming (this is well researched and documented). Additionally, streaming protocols (HLS/DASH) force files to be encoded such that a full download is not required before decoding can begin. This is not a requirement (and is pretty rare unless you know what you are doing with an encoder) for file-based workflows.
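Roughly, the client-side half of that looks like the following toy sketch (not any real player's logic; the renditions and bitrates are made up):

    -- Toy ABR logic: pick the highest-bitrate rendition that fits the
    -- currently measured throughput, otherwise fall back to the lowest one.
    data Rendition = Rendition { label :: String, kbps :: Int } deriving Show

    renditions :: [Rendition]  -- assumed sorted by ascending bitrate
    renditions = [ Rendition "240p" 400, Rendition "480p" 1200
                 , Rendition "720p" 2800, Rendition "1080p" 5000 ]

    pick :: Int -> Rendition
    pick throughputKbps =
      case filter ((<= throughputKbps) . kbps) renditions of
        []   -> head renditions  -- start low, adapt up as measured throughput improves
        fits -> last fits

    main :: IO ()
    main = mapM_ (print . pick) [300, 1500, 6000]  -- 240p, 480p, 1080p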

Fast forward on low-power clients relies on an "iframe only" track, as their decoders can't do many-times-realtime decoding. This is not present in modern file-based workflows.

Seeks use the same startup/segmentation requirements as video startup.

Thumbnail scrubbing requires a thumbnail track and is not supported by file based workflows at all.

Clip previews on sites load at low resolutions/bitrates and can be popped up to "full resolution" via ABR. Doing a preview of 10x 4k streams without ABR wouldn't work even on a modern gaming rig.

I'd love to hear how you'd do SSAI on top of a file based workflow.

You could of course build a "file based" implementation that had all of these features, but you would just be rebuilding DASH or HLS.


  I'd love to hear how you'd do SSAI
  on top of a file based workflow.
Given that this is a discussion of Netflix and Bittorrent, and the popularity of ad blockers among HN readers, I think the answer to "How would you do server-side ad insertion" is "I would accept losing that feature"


Popular video file players have supported generating iframe tracks for fast forwarding and thumbnail scrubbing for many years now; is there something about the way e.g. MPCBE and PotPlayer operate that is somehow unreasonable?


It's terrible for performance on power-limited devices. It either burns up your battery or is unreasonably slow, particularly when the source stream is high resolution or high complexity.


Sure, but if you have the file, you could generate these ahead of time on a more powerful machine.

At any rate, I don't think users would balk at being given a choice.


> You could of course build a "file based" implementation that had all of these features, but you would just be rebuilding DASH or HLS.


Yeah, but would they pay for it?


HLS and DASH, or any other protocol along their lines (like what YouTube used long ago for its own HTTP streaming), are still completely inferior to a proper streaming protocol on latency and such. I'd say plain "seeking over HTTP" with range requests often works with lower latency than the best HLS/DASH servers on SSDs.

A proper UDP-based streaming protocol of course tears HLS to pieces.


HLS was just a stopgap so providers could use cheap HTTP CDNs.

It worked well enough that nobody can be bothered with real streaming now...


SSAI - you write your own player and do it that way?


Also most streaming services offer an offline playback option today.


> In addition, subtitles, secondary audio, descriptive video, multi-view video, etc. are all things that are mandated by law and that do not work well in a file-based experience.

That's just not true - VLC has many more features than all the web-based (or "app"-based) popular streaming players. Granted, I checked and my VLC does not have "thumbnail scrubbing", but although that is nice, I don't think this is a big deal and if it's really popular, it will be added to VLC (or it may already be there, I did not check).


I'm a big fan of torrents and would prefer them to streams! I'm a bit of a data hoarder. But there are clear reasons why Netflix et al don't simply have you downloading massive files. It would definitely be simpler, but that doesn't mean it's an adequate engineering choice. Lots of smart people work there, and they don't jump through all those extra hoops just for fun.

1. You can't play a torrented file until the download is 100% complete. (Well, maybe 95% complete, depending on how tolerant you and your software are when it comes to malformed files)

That's the way it was designed to work. Otherwise, the bits at the beginning of the file would be much more common amongst your non-seeding peers and the bits at the end would be much rarer amongst your non-seeding peers. Your downloads would start crazy fast and would get progressively slower.

Some clients let you abuse the protocol and "stream" torrents sequentially, but if significant percentages of torrenters (such as literally every Netflix user, in an imaginary world where Netflix simply served everything via torrents) did that it would be an issue.

That's a great way to deliver big files in their entirety via P2P, but it is pretty much the pathological opposite of what is required for streaming.

For streaming you obviously need that sequential access. You need to optimize for the beginning of the file, since people want to start watching right away. People do not want to wait 60 or 30 or even 10 seconds for a YouTube vid to start streaming.

2. Think about your actual viewing habits. How many times have you watched the first 1% of a video on YouTube or Netflix, and then decided to watch something else instead? Even if we (probably generously) assume most users watch an average of 50% of a given video (I suspect it's much less) that's still 2x wasted bandwidth per video, on average, if video providers blasted out entire files. Netflix's infrastructure (or rather, their AWS monthly bill) is massive already. Imagine it being asked to blast out 2 or 5 or 10x as much data per video as it's already doing.

3. There's also the rather important matter of many (most?) playback devices not having huge gobs of local storage with which to hold complete movie downloads. I've got probably 20TB of hard drives scattered around this place, but that's not the norm.

4. Can't adapt to changing bandwidth conditions by scaling video quality up and down midstream. Personally, I wish I had more control over this as a customer, as sometimes I'd rather just wait than watch a compromised stream, but Average Joe is not going to want to muck around with that sort of choice, or even understand what it means.


Of course you can. Every modern BitTorrent client allows you to download file pieces in order. IIRC it has been a solved problem for at least 8 years.


That's abuse of the protocol. If a small number of people do it, no problem.

If everybody did it, that would be a big problem.

More to the original poster's point, my post answers the poster's question of "why doesn't YouTube just offer file downloads like Bittorrent?"

There are many clear reasons why; my post gave one of the more prominent ones.


That is just not true. Ever since PopcornTime showed us that getting pieces in order is a viable strategy, especially at relatively large scale (for the BitTorrent network), there hasn't been any significant shortage of availability linked specifically to that way of downloading.


Why is that abuse of the protocol?


Imagine a file split into 100 blocks.

You're the seeder. Initially you're the only one with all 100. And everybody's gonna download things sequentially.

Imagine 300 peers grab block 1 from you. Gonna be pretty slow, each peer gets 1/300th of your bandwidth.

Imagine those 300 peers finish block 1 and move onto block 2. Same bandwidth crunch. Although new peers can at least get block 1 from that initial cohort of 300 peers. But nobody can get blocks 2 through 100 from anybody but you. Not ideal.

Now repeat the process for 3 through 100. You're gonna be the bottleneck for a loooooooong time for those remaining pieces.

...

...

...now imagine we do it differently. Those 300 peers each grab a random block. This part's just as slow.

But once they have their initial randomly-chosen blocks, our bandwidth explodes. Each of those 100 blocks is now available from ~4 sources (you, and roughly 3 others). And you could even log off at this point, since there is a complete (distributed) copy of your file out there.

As more blocks are exchanged, the effective aggregate bandwidth rapidly increases even farther.
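A back-of-the-envelope sketch of that effect (a toy Haskell model using the numbers above; the "spread" strategy just hands blocks out round-robin instead of truly at random):

    import qualified Data.Map.Strict as Map

    -- How many peers hold each block once every peer has fetched one block?
    sourcesPerBlock :: [Int] -> Map.Map Int Int
    sourcesPerBlock held = Map.fromListWith (+) [ (b, 1) | b <- held ]

    main :: IO ()
    main = do
      let nPeers     = 300
          sequential = replicate nPeers 1                            -- everyone grabbed block 1 first
          spread     = [ 1 + p `mod` 100 | p <- [0 .. nPeers - 1] ]  -- blocks handed out round-robin
      putStrLn $ "sequential: distinct blocks replicated = "
              ++ show (Map.size (sourcesPerBlock sequential))        -- 1
      putStrLn $ "spread:     distinct blocks replicated = "
              ++ show (Map.size (sourcesPerBlock spread))            -- 100 (about 3 sources each)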


You are assuming all the peers start at the same time. It would be fairer to assume that the peers start at regular intervals. In that case, the ordering does not cause an issue.


But then you are assuming that people will seed for a considerable time (more than a few minutes) after watching the stream. If, as many people do, they do not seed, then the last block will forever have only one seeder (which would at least be the case for the last scene of the last episode of the last season, a point where you might not like buffering).


If you make that the default, almost all people will continue to seed unless they never use the software again.


There is a fundamental assumption underlying this topic: are users expected to be on desktop or on mobile (including many laptops)? If the objective is to provide high bandwidth, then mobile users might see a significant increase in battery and disk usage.

Not saying it would not work - I actually like the idea of more peer-to-peer networking. But as the market is clearly focused on low-consumption user devices, often with little drive capacity, seeding would damage the UX.


Phones at least have smaller screens and need lower resolutions. But that doesn't really solve the problem.


Most torrents with low seeder counts aren't viable for livestreaming; the speeds are too often much too low.


No really, if this is such a big deal then why did Popcorn Time ever work in the first place? It was (is?) also hugely popular.


Bittorrent tries to maximize the benefit of its P2P nature by not sending the same blocks to more than one peer.

i.e. I'll send block 1 to one peer, block 2 to another peer. Then I can send blocks 3 and 4 to them (respectively), while they each send each other blocks 1 and 2.


If the protocol does not require it to be downloaded in order (I don't know much about the protocol), then presumably you should also be allowed to disable that feature if you do not want to play back the file immediately.


> You can't play a torrented file until the download is 100% complete.

That's just not true.

I've used uTorrent, and I recall deciding whether I wanted to stream or not. It worked well too.


This is addressed as "abusing the protocol"


And given that most popular clients allow you to, this becomes worthy of the "95% theoretical problem" title.


How is it abusing the protocol if it allows you to do it?


"You can't play a torrented file until the download is 100% complete." Doesn't it depend on the file format in use? Some file formats are streamable, and in that case if the file is being downloaded in order then presumably it can be played (assuming the torrent software stores the file in a streamable way, and the playback software is capable of working with a file that is not yet complete but will get filled up over time).

(In the case of ZIP archives, 7-Zip can't open partial files, but bsdtar can. In my case it was a damaged floppy disk rather than a torrent, but the same thing would apply if you are downloading ZIP files from a torrent. If you are downloading music, though, then presumably bsdtar is irrelevant.)


> 1. You can't play a torrented file until the download is 100% complete. (Well, maybe 95% complete, depending on how tolerant you and your software are when it comes to malformed files)

Theoretically, yes. But in practice, especially with "modern"/high-speed internet, I can open my torrent in VLC after the first few MB are present on my computer and read the entire file as it comes in, without me, the user, encountering stuttering or buffering of any kind.

In effect, from the user's perspective, you're streaming the torrent - without running into the quality loss often present in a "traditional" stream's encoding.


Why do you say subtitles and secondary audio (I assume you mean multiple language tracks) don't work well? I've downloaded files with multiple subtitles and audio tracks, which VLC makes easy to switch between. Admittedly, many/most torrents don't offer these but that doesn't mean they can't.


Subtitles are not a burden; downloading a single torrent with a dozen subtitle tracks adds negligible bytes to the overall download size. Audio tracks are a tad rougher; the good torrents come with multiple tracks, but you are downloading all audio tracks even if you only play a single language–and the size of those audio tracks is not as negligible. Most people with large or unlimited bandwidth simply don't care about "paying the price" for multiple audio tracks; it takes a few extra minutes, at $0 additional financial cost.

The rest is your parent justifying their job/industry as if they're a Godsend - perhaps as compared to cable? It's an indefensible position thus far. The streaming services are all trash with pathetic bitrates; all streams average the same ridiculously low bitrate over the entire stream, without accounting for dark scenes that require orders of magnitude more bandwidth. Every Netflix (and competitors') 1080p stream with dark scenes is unwatchable. It's highway robbery to deliver what looks like 180p frames for a 1080p stream. We're supposed to be at 4K these days, yet they can't even deliver acceptable 1080p. Until the streaming services are willing to stop compressing everything far beyond watchable levels, they don't deserve anyone's business. Netflix, I believe, averages ~3-5 GB for a 90-120 minute movie? That number should be, at least via opt-in for those with the bandwidth for it, 20-40+ GB.

I don't necessarily expect fully uncompressed Blu-ray quality, but the standard should be to deliver something watchable, without banding artifacts. At a minimum that would mean massively variable bitrates, where the highest bitrate of a stream should be allowed to be 10-20x+ its minimum bitrate. Compressing a 60+ GB original into a download of less than 5 GB is flat-out unacceptable and unjustifiable.


Netflix and other (similar) streaming services which use per-title encoding technologies deliver a harmonic mean PSNR of greater than 45 and a VMAF greater than 94 on their highest rendition, which most well-connected users pull. This is visually indistinguishable from a lossless encode.

Complaining about the _size_ or _bitrate_ of an encoded file isn't worthwhile. Encoding technology has advanced substantially since Blu-ray days and we simply no longer need the 40 Mbps bitrate.

Indeed, HEVC can offer a mathematically lossless encode at around 90 Mbps for most content.


I'm sure you're very knowledgeable and correct about a lot of things, but where your argumentation falls down most is user experience.

A concrete example: You say here that what Netflix provides is "visually indistinguishable from a lossless encode". Try actually doing this with an open mind and then make this claim again. It was definitely not the case when I last tried it (very recently).

Argumentation of this nature is sort of endemic in the tech industry: "You think you see or experience this, but I know what you see and experience and you are wrong." It happens in discussions about battery life, the visual appeal of different encoding options, basically anything that is at some point subjective. And while the people claiming to see a difference could be (factually) wrong in a lot of cases (I often cannot tell), it is very arrogant to sit there and tell them that you know better than they do what they see.


But what about when it has been tested and proven that people cannot discern the difference? I don't know if this is what was done with the per-title encoding that the parent comment was referring to, but I know it is the situation with audio. So many audiophiles swear they can hear a difference between FLAC and high quality VBR MP3, but it has been repeatedly demonstrated that, when presented with both, humans cannot distinguish them better than chance.


> But what about when it has been tested and proven that people cannot discern the difference?

The way people perceive (and how to measure it) is in fact not a solved problem. Studies have been done, and we can often say with a high degree of certainty that something is probably not visible/distinguishable, but claiming that for everyone is too strong and misrepresents the state of our knowledge.


Huh? Open Netflix and play any recently added 1080p stream that contains dark scenes. Those scenes are literally unwatchable. I don't mean "not perfect". I mean they look like they were encoded with a 256 color palette. Bitrate is what matters–even with current codecs used by streaming services–and they do not provide something watchable.


VLC works for the subset of subtitles that you, an English-speaking user, use. VLC's subtitle support does not cover things like ASA or EBU subtitles, which are used in other parts of the world. In addition, rendering of those subtitles is very constrained. These can be added or configured, of course.

Secondary audio is not only additional languages but things like descriptive video services (narration for the blind). VLC does not generally support having multiple active audio tracks without flagging things on the command line.


Arabic/Chinese/Japanese subtitles work perfectly in VLC and look much nicer than on Netflix, for instance.

I have never seen descriptive video services in any streaming service, so I very much doubt it is a major issue, and even in that case, why wouldn't you be able to mix it with the main track into a separate channel (which is what is done on TV, as TV does not support multiple active tracks either)?


Even so, being able to mix the premixed descriptive video track with the normal audio track may still be helpful to users who want different mixing levels than what was used (because sometimes the music is too quiet, for example). (I don't like descriptive video so much myself, but some people do use it, and so may wish to alter the setting.)


> I have never seen descriptive video services in any streaming service

Netflix has them.


What if some particular video doesn't? If I have a file I can download a .srt and have the video player display it.


How are they constrained? I remember this HN submission: https://news.ycombinator.com/item?id=19150484 Basically the entire video is contained in the subtitles.


Subtitles, secondary audio, etc. can work in files if those things are encoded in the file. Something like television captions could work, and then the playback software can be configured with which fonts and colours to display them, or to not display them at all, and could possibly implement features such as caption scrollback that the provider did not put in.

Still, there are many reasons for having streaming, although it should be an open protocol and not too complicated. And then, if you write client software and program it to save the file to disk, it can do that too, possibly during playback, and you can program in other stuff as well if you want to.

I thought of the idea of a Live Audio Video Protocol (I have not actually written any part of it yet, but have some ideas about it): you use Ogg streams, initiate with file selection, format/quality negotiation, etc., and then receive the stream, but you can also send commands to select a channel or subchannel, pause, seek, request data, change the rate of keyframes (if the server supports changing this, which it might not), and change other settings.

(Actually, I find a few problems with Ogg, and everything else is too complicated, so I made up something which is slightly more complicated than Ogg but not too much, which is called GLOGG; it is mostly like Ogg, but uses UUIDs to identify codecs, allows identifying how individual streams are related (the relation codes are specific to the codec, and are not defined by the container format; you can also specify if a program that does not understand it should ignore it), and a few other things, but still simple compared to most other container formats.)

And about ad insertion, they do that on television without any problem, so I don't see how it is a problem, or what difference it makes whether it is a stream or a file. (Although sometimes they put stuff within the show itself, damaging the picture, which they should not do, but that is only my opinion and is unrelated to the protocols.)


> In addition, subtitles, secondary audio, descriptive video, multi-view video, etc. are all things that are mandated by law and that do not work well in a file-based experience.

Since when are subtitles mandated by law? "Multi-view video"? Maybe I'm just missing something, but I've never heard of any law mandating any of the things you listed above (except maybe for a government-funded project, but I'm not sure about that either; it just seems more likely to require it).



That's pretty crazy. I understand "reasonable accommodation", but doing that for so much content is beyond. Better that most of the population gets it than that no one can, because a few edge cases can't consume it normally. I understand the need for wheelchair ramps, but this is like saying that every building that doesn't have one must add one or be torn down.


P2P solves it - just not the payment side. If only one could pay.

I would not even mind DRM as long as I can get a good-quality copy in 10 minutes, even if I contribute bandwidth (as long as I control it, and only during my download) and pay.

There will always be someone who doesn't pay. But it is those who can and willingly would pay who oddly get left out. Like MP3 in the 1990s.


P2P could have totally transformed the industry if it had worked out a way to pay creators - especially niche creators and beginners.

It would have owned discovery, which would have badly damaged the big studios. And Netflix would probably have never happened.

It would have been a similar story in music.

But P2P was always about people who think a file is a file is a file, and creation costs are an externality, which can be ignored as irrelevant.

So now we have TPB limping along, and new creators are mostly owned by YT - which is legendary for its draconian application of copyright in favour of fellow corporates.


> Video startup times, seek times, thumbnail scrubbing, fast forward, clip previews, ad insertion and many other things don't work in a file based experience.

One of these things is not like the others.


I used to believe that Netflix or a similar service could be a serious competitor to The Pirate Bay and other torrent sites. But:

- They all have rolling and incomplete catalogs. Pretty often I want to watch a duology or trilogy of movies and only one of them is available on Netflix, or I start watching a series and it gets removed from their catalog.

- I saw The Lord of the Rings on Netflix. I thought it would be great since I had only watched it once in a theater. But the version that was there was just Netflix's version. No extended cut, no director commentaries, no bonuses. Just what Netflix gives you, and that's it.

- Maybe that's just me, but most of the new Netflix-produced content seems to be of lower quality than what was launching 2 or 3 years ago. It seems that Netflix is focusing a lot more on delivering tons of original content even if it means having duds.

BitTorrent does not have these issues. Pretty much all the movies are there, in whatever language and cut I would like to see. The irony is that I am ready to pay for a streaming service, even more than what I pay Netflix if necessary, but still: while it might be more convenient to just push the Netflix button on my remote, it is just not up to the task compared to torrenting.


Out of curiosity, who are the people who want DRM? I understand that big media producers want it, in some misguided attempt at protecting their IP, but is that it?


> [...], but is that it?

Yes.

I guess people would even pay for Spotify if it were DRM free (which means the stuff on it could be copied), but that is still considered "radical" (in the Overton window) by shareholders of media distributors.

That "Save your music offline" is a feature in 2019 is ridiculous.


Fuck yes I'd pay for a drm free audio service. Any competitor of Spotify's that offers this would instantly get me as a customer - assuming, of course, they have a similar offering to Spotify. Spotify is already mediocre compared to Grooveshark and YouTube, it really shouldn't get worse than that.


Deezer + Deezloader


There was such a service once, Magnatune, but it never took off.


It never got big but it still exists: http://magnatune.com/.


Yes, but protecting their IP is futile; therefore DRM primarily targets legitimate users. The idea is to have complete control over the user experience. You can only watch movies on approved device X in approved app Y. The media companies can then use DRM to put hardware manufacturers and software developers under pressure and order them to do whatever they want.


>Furthermore torrenting got really convenient and is very fast with adequate Internet (let's say 10MByte/s), so you get a decent quality movie in under 5 minutes (obviously only if there are enough seeders - but the availability of torrents completely dwarfs the availability of streaming providers - if it's really unpopular and maybe a little bit older you just won't find it on streaming services).

Small nit: at that speed, five minutes only gets you about a 3 GB file (10 MB/s × 300 s), which is definitely not enough for a quality film. Even iTunes, which is kind of infamous for how badly its films are encoded, is usually bigger than that.

The big picture, though, is that it really doesn't matter anymore, because quite a few clients now allow you to download the first 10% or so of the file first, so you can start streaming it instantly. There's no reason a for-pay streaming service built on top of BitTorrent wouldn't do the same.


You need about 4 GB/hour to achieve NTSC-quality (640x480@32-bit video + 44 kHz audio) DVD bandwidth.
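(For reference, that works out to roughly 4 × 8000 / 3600 ≈ 9 Mbit/s, which is in the same ballpark as DVD-Video's roughly 10 Mbit/s ceiling.)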

A DVD ISO rip of a 2-hour movie usually clocks in at 8 GB, since a dual-layer disc can store about 4 GB per layer.

The extended LOTR trilogy needs 2 discs per movie, no matter the definition, because each breaks the two-hour mark, and even Blu-ray can't fit long movies onto a dual-layer disc for its format in HD.


>You need 4GB/hour, to achieve NTSC quality (640x480@32bit video + 44Khz audio) DVD bandwidth.

That's going to depend entirely on the codec. MPEG-2? Sure. With H.264, a 4 GB 720p rip from a Blu-ray is going to be much better than anything you can get on a DVD.

>The expanded LOTR trilogy needs 2 discs per movie, no matter what definition because each break the two hour mark, and even blu-ray can't fit long movies onto a dual layer disc for its format in HD.

I'm pretty skeptical of that. Regardless of what media LOTR actually comes on, the extended cut of Fellowship would have a bitrate of ~29 Mbit/s on a dual-layer disc. That's entirely acceptable, if not exactly ideal. And I'm not sure what the "two hour mark" has to do with it: Fellowship is closer to 4 hours than 3, and there are tons of two-hour-long films that come on a single Blu-ray disc.
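(Rough arithmetic behind that figure, assuming a 50 GB dual-layer Blu-ray and the roughly 3 h 48 min extended runtime: 50 GB is about 400,000 Mbit, 3 h 48 min is about 13,700 s, and 400,000 / 13,700 ≈ 29 Mbit/s.)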


Yify 1080p rips offer great quality for 99% of the folks out there (i.e. a 55" screen with nice B&W speakers), and yet they are usually under 2 GB. No disturbing encoding artifacts, no pixelation in fast scenes, etc.

You really don't need more. And if you do, and have enough computing power, H.265 goes even further in a smaller package.


They achieve that by discarding huge amounts of detail from the image; compare a Yify 1080p to a well-made 576p encode and you may find it less acceptable. They look terrible to me.


Honestly, if Yify is transparent to you, more power to ya - you'll save a lot of disk space. I wouldn't watch a Yify encode on anything larger than my 13" laptop.

If you want to ruin yourself, try grabbing a Sparks and a Yify encode of your favorite movie and watch one of the dark scenes in both files. The difference is plain as day on my 27" IPS panel.


Honestly, I think the situation is even worse on a laptop. Unless you're pretty wealthy, you and probably most people you know have a TV screen that's small enough, and far enough away from your couch, that your laptop sitting in your lap actually beats it in pixels per degree. I've got an (old) 50-inch plasma set, and since I'm not sitting 4 feet away from it, my laptop easily beats it.

On the other hand if you really can't tell the difference between a 2 GB encode and an average sized encode, say 16 GB, it's a good hint to make an appointment with your eye doctor. (I'm nearsighted, so I can't even tell 576p from 1080p on my TV without glasses.)



And due to various licensing bullshit, there are still lots of movies not available on Netflix, especially outside the US.


I agree with your parent. When I started with (Python) web dev some years ago I picked Flask, but nowadays I use Django, as it has superb documentation and useful things built in. In particular, I miss the user management, authentication and authorization in Flask.

Personally I consider Django without any additional apps less complex than Flask with apps (or "plugins" in Flask jargon) to handle basic stuff like auth.

If performance is a concern, both of them are insufficient and I would rather pick another language like Rust, Go or even Java.


I started with Flask, moved to Django, and now at work am back on Flask. It all depends on what you are building; if you are going to need an ORM, standard auth, etc., Django is a no-brainer. However, if you are going to be building out much of your own idiosyncratic architecture, Flask gets out of your way and avoids including lots of things you don't need.

For performance, we are building some parts of our system with Clojure. It feels extremely well suited, albeit with some initial learning curve.


> Containers used are small and get shut down on inactivity.

How do you define inactivity? If I do

    $ nohup ./computational_intense_and_runs_for_100_hours.py &

Do you just kill the process (or stop the container)? In essence, Jupyter is a rich graphical shell, so you are providing free *nix machines - don't underestimate how this can be exploited (e.g. CoCalc limits at least Internet access for free instances).


First, that will use 100% of the CPU quota assigned to your user, which is really small.

Second, yes. The container will be killed after 10 minutes unless we keep detecting activity from your user in the platform. So, basically, the rule is: if we don't detect user activity for 10 minutes, we kill all containers for that user. You could hack this by making periodic requests to the API to simulate activity, but at some point your JWT will expire and requests will start failing.

In any case, other students won't be affected at all by the inappropriate usage, and we will end up banning your account at some point when we detect it.

We also limit the number of containers running in parallel, to avoid unlimited containers running at the same time.

Do you see any drawbacks to this implementation? Happy to hear about possible improvements.

