
Hi! The response to your fears is in the announcement: "If you want to download dependencies alongside project code instead of using a global cache, use the $DENO_DIR env variable." Then it will work like node_modules.



Ah, in this case, I would then have to commit my dependencies into my VCS to maintain reproducible builds. I'm not sure I like that solution very much either. I've seen node_modules in multiple GBs, and I'm sure Deno's dependency sizes are going to be similar.


True, but that's what people using Go have been doing for years without complaining much, so I guess it works fine for most workloads.

And before npm fixed things after the left-pad incident, npm builds were not reproducible either (as demonstrated by said left-pad incident).


> True, but that's what people using Go have been doing for years without complaining much, so I guess it works fine for most workloads.

I hate to break it to you, but dependency management was a massive issue in Go until the devs formally adopted go mod.

Only Google seemed okay with checking in their dependencies to version control. Everyone else was doing crazy hacks like https://labix.org/gopkg.in


Checking in dependencies to version control is the sane option. Then you can more easily see what's updated and track regressions. Some people like to refactor their code any time some syntactic sugar is added to the language - often adding a few bugs while doing it, which is a PITA, but version control is still better than no version control.

You might ask, what about adding the OS to your SCM too, why not the whole software stack? But you can generally draw a line between strong abstraction layers: Hardware | Kernel | OS | runtime | your app. Some modules do have strong abstraction layers, but others are just pure functions which you could just as well copy into your own repo.


It created a hugely fractured open source ecosystem as well.


The vendoring has never been the issue though.


I have only used Go once at work, and I actually dislike most of it (dependency management was one of the annoying things with Go). Nonetheless, it has never been a show stopper, and there have been thousands of developers using it when vendoring was the only option.


Dependency management is one of the biggest complaints I have seen around Go - I don't think this is accurate.


I don't like it either, but it still works well enough for many people.


Go dependency management is quite good now with "go mod", plus your dependency tree isn't going to look anything like your typical JavaScript dependency tree; otherwise you're doing it wrong.


> that's what people using Go have been doing for years without complaining

I haven't seen anyone commit vendor and not complain about it. But now you finally don't have to commit vendor for reproducible builds. All you need is a module proxy. The "all you need" is not really meant seriously of course.

And I personally prefer to not commit vendor and complain about it.


Go compiles to a static binary. It’s not downloading and running source on your production servers. Isn’t that the concern here?


That is one of the things I hate about go. Right up there with lack of generics and boilerplate error handling.


This hasn't been a thing in Go for a long time. Go dep and now go modules fix this.


You could use a separate git repository for the dependencies. That way you keep your core project repo tight and small and clean, but you still have your dependencies under version control. If that separate repo grows to a few GBs or more it doesn't really hurt anything.


In practice modules will be available from sources that will have similar reliability to npm: github.com, unpkg.com, cdn.pika.dev, jspm.io, etc.


Which then raises the question - how is it better than NPM? If there are going to be centralized repositories (like NPM), and if I have to download my dependencies into a $DENO_DIR (like NPM), and if I am then loading these dependencies from local files (like NPM), how is it any different to NPM? Except for being less secure by default?

This is starting to look like a case of being different just so you can say you're different.


NPM is a dependency management failure which is why you are ending up with hundreds of dependencies in the first place. It sounds like you want to reproduce that insanity in Deno. Deno is set up in such a way to dissuade you from the stupidity by default but allow it in very few steps if you cannot imagine a world without it.

In my opinion this is Deno’s biggest selling point.


> Deno is set up in such a way to dissuade you from the stupidity by default but allow it in very few steps if you cannot imagine a world without it.

Could you elaborate on this? Is it that Deno is against the whole 'small packages that do one thing well' principle and instead in favor of complete libraries? How exactly would it dissuade me from installing hundreds of dependencies?


The default design style for a Deno application is that the application becomes a single file, just like packages coming off Steam. This requires that dependencies are packaged into the application before it is distributed to others. The idea there is to deliberately include only what you need and manage it as a remotely written extension of your application.


Having a single executable file makes distribution easier, but while I'm developing the app, I'll still have to manage all of its dependencies, right? How does Deno aid during development?

> The idea there is to deliberately include only what you need and manage it as a remotely written extension of your application.

I have a node app, in which I deliberately only included the dependencies I need. The package.json lists exactly 8 dependencies. However, the node_modules folder already has 97 dependencies installed into it. The reason of course is that these are dependencies of dependencies of dependencies of dependencies.

Wouldn't Deno have this same issue? Are the dependencies also distributed in compiled form as a single file, akin to Windows DLLs?


It's better because there will be more choice.


I am always confused by deno folks. You can install from a git repository using yarn/npm.

How is that not "decentralisation"

And if you are importing single files from a remote url, I would question your sanity.


> install from a git repository using yarn/npm

Yep, that's basically the same. Deno has the benefit of using the ES module system as it is implemented in browsers.
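For illustration, a minimal sketch of what that looks like in practice (using the std http module as it worked around the 1.0 release; URL and port are just examples):

    // a URL import, fetched and cached by Deno much like a browser fetches an ES module
    import { serve } from "https://deno.land/std/http/server.ts";

    const s = serve({ port: 8000 });
    console.log("listening on http://localhost:8000/");
    for await (const req of s) {
      req.respond({ body: "hello from a URL-imported module\n" });
    }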


Node supports node_modules, not npm. Anything can build the node_modules.


Doesn't this mean more opportunities to inject malicious code?


Only if you tell your application to retrieve from untrusted locations.


To solve your issue, you would do exactly what you do for your node deployments: download the deps into a folder in CI, then deploy the whole build.


Except that now, the download deps in CI step can fail if one of hundreds of websites for my hundreds of dependencies goes down. If the main NPM repository goes down, I can switch to a mirror and all of my dependencies will be available again.


To be the rubber duck, if wiping the cache at each build is a risk to your CI, what could you do to keep your CI up?

1 - not wipe the cache folder at each build? It's easy and secure. Oh and your build will be faster.

2 - use a cached mirror of the deps you use? It's like 10min to put in place and is already used in companies that care about security and availability anyway.

3 - you have https://deno.land/x if you want to put all your eggs in the same basket, npm-style


Yes, I think I'd probably settle for solution number 2.

I still don't understand how this is better than NPM, and how Deno solves the horrible dependency management of Node, but maybe if I actually build something with Deno I'll get some answers.


From the post:

> [With NPM] the mechanism for linking to external libraries is fundamentally centralized through the NPM repository, which is not inline with the ideals of the web.


> which is not inline with the ideals of the web

Subjective.

> Centralized currency exchanges and arbitration is not in line with the ideals of the web! - Cryptocurrency

Nek minute. Besides, let's get real here; they will just end up centralized on GitHub. How exactly is that situation much different from npm or any other language ecosystem's library directory being mirror-able?


The centralization of git on GitHub is completely different in nature from the centralization of Node packages on npm.

git does not require GitHub to be online to work, nor does it rely on GitHub's existence for its functionality.


I'm talking about the centralization of software packages (Go, Deno) on GitHub as it applies to dependency resolution.


I'd highly recommend mirroring packages anyway. Obviously this isn't always necessary for small projects, but if you're building a product, the laws of the universe basically mandate that centralized package management will screw you over, usually at the worst possible time.


You answered your own question. Nothing stops you from using a mirror with deno too.


Which again brings me back to something I'm still not understanding - How is Deno's package management better than NPM if it is extremely similar to NPM, but slightly less secure?

I'm only asking because lots of people seem to be loving this new dependency management, so I'm pretty sure I'm missing something here.


We need to distinguish between npm, the service (https://www.npmjs.com/) and npm, the tool.

Deno has the functionality of npm, the tool, built-in.

The difference is that like Go, Deno imports the code directly from the source repository.

In practice it's going to be github.com (but it can be gitlab or any code hosting that you, the author of a Deno module, use).

NPM is an unnecessary layer that both Go and Deno have removed.

It's better because it's simpler for everyone involved.

In Go, I don't need to "publish" my library. People can just import the latest version or, if they want reproducibility, an explicit git revision. Compared to Go, publishing to npm is just unnecessary busy work.

I've seen JavaScript libraries where every other commit is related to publishing a new version to npm, littering the commit history.

In Go there's no need for package.json, which mostly replicates the information that was lost when publishing to npm (who's the author? what's the license? where's the actual source repository?).

As to this being insecure: we have over 10 years of experience in the Go ecosystem that shows that in practice it works just fine.


How do you list the dependency libraries if you don't have a package.json?

Do you manually install a list of libraries provided by the author's readme?


The simplest approach is to either import anything anywhere, or to have a local module that imports external dependencies and then have your code import them via that local module.


The dependencies are imported in the source code of the package.


NPM, the tool, has had the ability to install directly from GitHub instead of npmjs.org for many, many years as well. No one really used it except as a workaround for unpublished fixes, because it has no other tangible benefits.


I like it because it's simpler. I know what happens when I import from a URL. I'd have a hard time whiteboarding exactly what happens when I `npm install`.


What happens?


My least favorite thing about importing from NPM is that I don't actually know what I'm importing. Sure, there might be a GitHub repository, but code is uploaded to NPM separately, and it is often minified. A malicious library owner could relatively easily inject some code before minifying, while still maintaining a clean-looking repo alongside the package.

Imports from URL would allow me to know exactly what I'm getting.
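To sketch the difference (the version number is only an example): the file you import is the literal, human-readable source sitting at that URL, and pinning a version in the path keeps it stable:

    // the import is the exact file served at this URL; you can open it in a browser
    // and read it, and pinning the std version in the path makes the fetch reproducible
    import { assertEquals } from "https://deno.land/std@0.50.0/testing/asserts.ts";

    assertEquals(1 + 1, 2);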


install from the repo then?

You can install a specific version from git via yarn/npm.

How do you trust a url more without reading the code?

What's going to stop the deno ecosystem from putting minified JS files on CDNs and importing them?


It's decentralized.


Or use something like Nexus or Artifactory to host a private copy of dependencies.


I think the primary way to manage dependencies should be in a local DIR, and optionally a URL can be specified.

The default in Deno is a questionable choice. Just don't fuck with what works. The default should be the safest, with developers optionally enabling less safe behaviors.


Using a universally unique identifier like a URL is a good idea: this way, https://foo.com/foo and https://bar.com/foo are distinct and anyone who can register their own name gets a namespace, without relying on yet another centralized map of names->resources.

After all, the whole point of a URL is that it unambiguously identifies resources in a system-independent way.
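A tiny sketch of what that buys you (the domains are the same placeholders used above, not real modules):

    // two unrelated modules can both be called "foo"; the URLs keep them distinct,
    // and the local binding name resolves any collision at the import site
    import * as fooFoo from "https://foo.com/foo";
    import * as barFoo from "https://bar.com/foo";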


No one is questioning the utility of URLs. Using URLs to specify dependencies right in the import statement is a horrible idea.


How is it any worse than using names from a namespace controlled by “npmjs.com”? If you’re concerned about build stability, you should be caching your deps on your build servers anyways.


I've never used npm or developed any javascript before but it sounds equally horrible.

Not decoupling the source of the package (i.e., the location of the repository, whether it is remote or local) from its usage in the language is a terrible idea.

  from foo import bar
  # foo should not be a URL. It should just be an identifier.
  # The location of the library should not be mangled up in the code base.

Are we gonna search-and-replace URL strings in the entire codebase because the source changed? Can someone tell me what the upside of this approach is? I cannot see a single one, but many downsides.


The whole idea of a URL is that it’s a standardized way of identifying resources in a universally unique fashion: if I call my utility library “utils”, I’m vulnerable to name collisions when my code is run in a context that puts someone else’s “utils” module ahead of mine on the search path. If my utility module is https://fwoar.co/utils then, as long as I control that domain, the import is unambiguous (especially if it includes a version or similar).

The issue you bring up can be solved in several ways: for example, XML solves it by allowing you to define local aliases for a namespace in the document that’s being processed. Npm already sort of uses the package.json for this purpose: the main difference is that npmjs.com hosts a centralized registry of module names, rather than embedding the mapping of local aliases->URLs in the package.json.


Allow me to provide an extremely relevant example (medium sized code base).

About 100 python files, each one approximately 500-1000 lines long.

Imagine in each one of these files, there are 10 unique imports. If they are URLs (with version encoded in the URL):

- How are you going to pin the dependencies?

- How do you know all 100 files are using the exact same version of the library?

- How are you going to handle dependency resolution, upgrades, maintenance, and deprecation?

How will these problems be solved? Yes, I understand the benefits of the URL - it's a unique identifier. But you need an intermediate "lookup" table to decouple the source from the codebase. That's usually requirements.txt, poetry.lock, pipenv.lock, etc.


The Deno docs recommend creating a deps.ts file for your project (and it could be shared among multiple projects), which exports all your dependencies. Then in your application code, instead of importing from the long and unwieldy external URL, import everything from deps.ts, e.g.:

    // deps.ts
    export {
      assert,
      assertEquals,
      assertStrContains,
    } from "https://deno.land/std/testing/asserts.ts";

And then, in your application code:

    import { assertEquals, runTests, test } from "./deps.ts";
https://deno.land/manual/linking_to_external_code#it-seems-u...


This was my first instinct about how I'd go about this as well. I actually do something similar when working with node modules from npm.

Let's say I needed a `leftpad` lib from npm - it would be imported and re-exported from `./lib/leftpad.js`, and my codebase would import leftpad from `./lib`, not by its npm package name. If/when a better (faster, more secure, whatever) lib named `padleft` appears, I would just import the other one in `./lib/leftpad.js` and be done. If it had an incompatible API (say, reversed order of arguments), I would wrap it in a function that accepts the original order and calls padleft with the arguments reversed, so I wouldn't have to refactor imports and calls in multiple places across the project.
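A rough sketch of that wrapper, with the `padleft` package and its reversed argument order invented purely for illustration:

    // ./lib/leftpad.js
    // re-export the third-party lib behind a local module so call sites never
    // reference the npm package name directly
    import padleft from "padleft"; // hypothetical replacement package

    // keep the leftpad(str, len, ch) signature the codebase already uses,
    // while delegating to the new lib's reversed (len, str, ch) order
    export default function leftpad(str, len, ch) {
      return padleft(len, str, ch);
    }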


Yeah, this sort of "dependency injection" scheme is better than having random files depend on third party packages anyways: it centralizes your external dependencies and it makes it easier to run your browser code in node or vice-versa: just implement `lib-browser` and `lib-node` and then swap them out at startup.


I believe the long term solution to the issues you raised is import maps: https://github.com/WICG/import-maps

It's an upcoming feature on the browser standards track gaining a lot of traction (deno already supports it), and offers users a standardized way to maintain the decoupling that you mentioned, and allows users to refer to dependencies in the familiar bare identifier style that they're used to from node (i.e. `import * as _ from 'lodash'` instead of `import * as _ from 'https://www.npmjs.com/package/lodash'`).

I imagine tooling will emerge to help users manage & generate the import map for a project and install dependencies locally similar to how npm & yarn help users manage package-lock.json/yarn.lock and node_modules.
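A hedged sketch of how that could look (the file name and the lodash URL are placeholders, not settled conventions):

    // import_map.json, in the WICG import-maps format that deno can load via its import map flag:
    // {
    //   "imports": {
    //     "lodash": "https://unpkg.com/lodash-es@4.17.15/lodash.js"
    //   }
    // }

    // application code then uses the familiar bare specifier:
    import * as _ from "lodash";

    console.log(_.chunk([1, 2, 3, 4], 2)); // [[1, 2], [3, 4]]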


Yeah, I agree, but that intermediate lookup table (a) can be in code and (b) can involve mapping local package names to URL package names.

One-off scripts would do `from https://example.com/package import bar`, and bigger projects could define a translation table (e.g. in __init__.py or similar) for the whole project.

Embedding this sort of metadata in the runtime environment has a lot of advantages too: it’s a lot easier to write scripts that query and report on the metadata if you can just say something like `import deps; print(deps.getversions('https://example.com/foo'))`

One of the best parts about web development is that, for quick POC-type code, I can include a script tag that points at unpkg.com or similar and just start using any arbitrary library.
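To make the in-code lookup table concrete, here is a small sketch in Deno/TypeScript terms (the module layout and metadata shape are invented for illustration):

    // deps.ts - acts as both the translation table and the queryable metadata
    const versions: Record<string, string> = {
      "https://deno.land/std": "0.50.0",
    };

    export function getVersions(): Record<string, string> {
      return { ...versions };
    }

    // the re-exports the rest of the project actually imports from
    export { assertEquals } from "https://deno.land/std@0.50.0/testing/asserts.ts";

    // a reporting script can then just query the runtime metadata:
    //   import { getVersions } from "./deps.ts";
    //   console.log(getVersions());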


That's exactly what Go does - it works fine


Good luck finding which of foo.com/foo or bar.com/foo is the foo module you want though…


Good luck finding which of google.com/search or bing.com/search is the search engine you want though


This is true actually, and that's why being the default search engine is so important that Google pays billions each year for it.


It could be a good idea if they were immutable, like IPFS links.


That might work for some projects, but can quickly blow up the size of the repo.

I don't think it is an unsolvable problem. For example, other solutions could be using a mirror proxy to get packages instead of fetching directly from the source, or pre-populating the deno dir from an artifact store. It would be nice to have documentation on how to do those, though.


A better solution is something like https://vfsforgit.org/


That's not necessarily better. For one thing, it doesn't support Linux yet. For another, afaik, Azure DevOps is the only git hosting service that supports it.

Even if it was better supported, I wouldn't want to start using it just so I can include all my dependencies in git. Of course if you are using something like vfs for git anyway, then increasing the repo size is less of an issue. It still feels wrong to me though.


Yeah, I'm not really advocating the use of GVFS specifically, but what I am saying is that once you've lived in a world where all your dependencies are in your repo you won't want to go back, and that Git should improve their support for large repos (in addition to checking in all our dependencies, we should be able to check in all our static assets).



