
When you evaluate a form deep in a function, how do you handle variables that are defined "outside"? How can you evaluate such expressions?


Cider (the Clojure plugin for Emacs) has a variant that prompts you for the values of those variables.


Which function is that? I couldn't find it in the docs.


hmmm ... what if it's a closure, and it depends on a huge lexical environment?


Oh nice. That's one place where Clojure beats Common Lisp then...


Does it? Asking for a value when encountering an unbound variable is a default restart:

    $ sbcl
    This is SBCL 2.1.1.52.HEAD.321-f8a57bcca, an implementation of ANSI Common Lisp.
    More information about SBCL is available at <http://www.sbcl.org/>.

    SBCL is free software, provided as is, with absolutely no warranty.
    It is mostly in the public domain; some portions are provided under
    BSD-style licenses.  See the CREDITS and COPYING files in the
    distribution for more information.
    * (* x x)

    debugger invoked on a UNBOUND-VARIABLE in thread
    #<THREAD "main thread" RUNNING {1001860103}>:
      The variable X is unbound.

    Type HELP for debugger help, or (SB-EXT:EXIT) to exit from SBCL.

    restarts (invokable by number or by possibly-abbreviated name):
      0: [CONTINUE   ] Retry using X.
      1: [USE-VALUE  ] Use specified value.
      2: [STORE-VALUE] Set specified value and use it.
      3: [ABORT      ] Exit debugger, returning to top level.

    (SB-INT:SIMPLE-EVAL-IN-LEXENV X #<NULL-LEXENV>)
    0] 2

    Enter a form to be evaluated: 3
    9
    *


Yeah, in fact, Clojure needs special tooling for this, while it’s built into CL’s execution model.


This is an interesting tool[1] that allows much the same with Clojure.

[1]: https://github.com/IGJoshua/farolero
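
A rough sketch of what that looks like (names from memory of farolero's README, so treat the exact forms as assumptions):

    (require '[farolero.core :as far])

    ;; signal a condition with a restart attached; a handler or the
    ;; interactive debugger can invoke ::use-value with a replacement
    (far/restart-case (far/error ::unbound-variable)
      (::use-value [v] v))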


Actually, yes, I seem to remember it doing that a while ago. I think my Emacs setup has gotten messed up somehow. Now I just get:

    Debugger entered--Lisp error: (void-variable k)
And then a stack trace. No option for restarts.

It works if I run from the command line, though...


I suspect you are right. Rebinding an inner function inside another function that closes over variables probably will not work. But you could still alter and rebind the entire thing. And that is still flexible; you can still change a running program on the fly with only that ability. And you can always reference objects at the top level.


Cursive has a REPL-powered debugger: just put your breakpoint there, use the REPL to trigger the call, then use IntelliJ's debugger to step through.

If you want to run expressions in context, just use the expression editor like a normal REPL.

Something I haven't tried yet is redefining functions at debug time; I can't see why it wouldn't work, though.


Smalltalk famously lets you code inside the debugger. You can run code that calls methods that don't exist yet and add them in the debugger when it notices they are missing.


You can use a REPL powered debugger. You can interact with the debugger to a higher degree than with most mainstream languages. It’s not quite as powerful as CL though, from what I’ve read.

Or you pull out expressions and test them in isolation with given (assumed) inputs.
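
For example, if a form deep in a function refers to locals named user and limit (hypothetical names), you can bind assumed values around the pulled-out expression and evaluate just that:

    ;; bind assumed inputs around the expression under test
    (let [user  {:name "ada"}
          limit 3]
      (take limit (repeat (:name user))))
    ;; => ("ada" "ada" "ada")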


You can essentially define global variables with the same names as those outside variables and give them values.
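
Sketch: if the inner form is (* x x) and x is a local, a quick REPL hack is:

    ;; define a top-level var with the same name as the local, then
    ;; evaluate the inner form on its own; the free symbol now
    ;; resolves to the var
    (def x 3)
    (* x x)
    ;; => 9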


By evaluating outer forms first.


The REPL maintains an environment.
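
For example (a minimal sketch):

    ;; evaluate the outer form first so the var exists in the REPL's
    ;; environment...
    (def config {:port 8080})

    ;; ...then later forms that mention it simply resolve against it
    (defn server-url []
      (str "http://localhost:" (:port config)))

    (server-url)
    ;; => "http://localhost:8080"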


Well, how can we teach Copilot to avoid raw string interpolation in such cases? Sure, some people will correct it, and the AI might improve, but I suspect that the majority of developers will never notice it, and that'll just reinforce the AI's bad habits.


I gave the following prompt:

    // get password from the database using a mysql query
    function fetch_password(string $username) {

And 7 of the 10 suggested completions are vulnerable to SQL injection. Here's the first:

        global $mysqli;
        $query = "SELECT password FROM users WHERE username = '$username'";
        if ($result = $mysqli->query($query)) {
            $row = $result->fetch_assoc();
            return $row['password'];
        }
        return false;
Here's all of them: https://paste.ubuntu.com/p/9qQ2BSnqbF/
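
For contrast, here is the kind of completion you'd want: a parameterized version of the same query (my sketch, not Copilot output; get_result() assumes the mysqlnd driver):

        global $mysqli;
        // prepared statement: $username travels as bound data and is
        // never spliced into the SQL string
        if ($stmt = $mysqli->prepare(
                "SELECT password FROM users WHERE username = ?")) {
            $stmt->bind_param("s", $username);
            $stmt->execute();
            $row = $stmt->get_result()->fetch_assoc();
            return $row ? $row['password'] : false;
        }
        return false;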


When I started with PHP over a decade ago I was using PDO and not MySQLi.

I think there's a lot of old code that perhaps should not be used by Copilot as a reference, given how some programming languages have changed quite a bit over time when it comes to the best way of doing certain things.


Yeah, that was always one of the big problems with PHP. Google search would produce these old tutorials full of SQL injection code. I think there was a community effort to clean those up; now, (un)fortunately, we have AI to bring them back.


Could something like this be caught by static analysis?


Could it? Yes.

Will it? Maybe.

Would I count on it in all cases? No.

Also, I find it preposterous to rely on a second automated system to cancel out the mistakes the first one made.


Isn't this the principle the entire JavaScript/Node ecosystem is based upon?

Downvote away. You know who you are.


As someone who's done a ton of JS/TS development, for browsers and Node, I thought the principle the entire ecosystem was based on was up-to-the-minute crowdsourcing of not only a standard lib, but also 90% of your basic tools and about half of what ought to be language features. Not relying on automated systems to cancel out the mistakes of automated systems.


As someone who spent two weeks trying to get a TypeScript project working under Webpack when migrating to Vue 3, by stitching together a web of gratuitous tooling and transpilers that ultimately did not work (I went with Vite and it was all working in 2 hours)...

Also, I just checked out an old Flask/Python project from 7 years ago, updated it to use Poetry dependency management, and it all still works. A JS project that is 7 months old and unmaintained would be a dumpster fire.


Oh, for the "build" tools, yeah, that's actually entirely true. It's a bunch of automation fixing bad decisions and other, bad automation. Spot on.


Yes. You could write an eslint rule for it, and there are probably a few of those already.
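
A minimal sketch of such a rule (hypothetical and untested; it flags template-literal interpolation passed to a .query() call):

    // eslint rule sketch: report db.query(`...${x}...`)-style calls
    module.exports = {
      meta: {
        type: "problem",
        messages: { noInterp: "Don't interpolate values into SQL; use placeholders." }
      },
      create(context) {
        return {
          CallExpression(node) {
            const callee = node.callee;
            const isQueryCall = callee.type === "MemberExpression" &&
              callee.property.name === "query";
            const arg = node.arguments[0];
            if (isQueryCall && arg && arg.type === "TemplateLiteral" &&
                arg.expressions.length > 0) {
              context.report({ node: arg, messageId: "noInterp" });
            }
          }
        };
      }
    };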


So black holes can't evaporate? How does Hawking radiation work if the black hole area has to stay the same?


Black holes can't evaporate now because the cosmic background radiation is too hot. The black holes are colder than the CMBR, so they absorb heat and grow (albeit very, very slightly).

Eventually the CMBR will cool down and the holes will be able to evaporate, but not for an insanely long time.


I imagine this holds only for black holes that are massive enough to be stable?

All those itsy bitsy ones created by the LHC, they've evaporated, yes?


I believe the LHC has no chance of ever harvesting enough energy to create a sensibly sized black hole. It's still mc-squared (give or take an order of magnitude and my layman mistakes), so a 1 g BH takes about 1e14 joules, or 5 minutes of average EU electricity output. Also, there is no sea nearby to cool it off afterwards. Also, a Planck-sized BH weighs about 1e-5 g.

> A mass similar to Mount Everest has a Schwarzschild radius much smaller than a nanometre. Its average density at that size would be so high that no known mechanism could form such extremely compact objects.

It seems that at energies available to us they are basically either virtual or non-existent. This contradicts the common notion that cosmic rays occasionally create micro black holes, but I guess we have to wait for a physicist to clarify this.
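
The back-of-envelope arithmetic behind that estimate (my numbers; the ~3e11 W figure for average EU electrical output is an assumption):

    E = m * c^2 = (1e-3 kg) * (3e8 m/s)^2 = 9e13 J ~ 1e14 J
    1e14 J / 3e11 W ~ 330 s ~ 5 minutes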


In theory, it might have made a black hole. It would have lasted a ridiculously small period of time, but be quite recognizable by the energy it gave off. Instead of the usual decay patterns, it would give off just a spray of photons, like a black body at a recognizable (and very high) temperature.

We didn't see that, and in fact theory predicted that it was insanely unlikely that we would. But there's nothing wrong with the possibility of a black hole much, much, much smaller than a gram, with a radius smaller than the Planck length.

If we had seen it, it would have been insanely informative. But it wasn't ever gonna happen.


Do you know what theory that might be?

The difficulty in producing a black hole is getting the energy density high enough. We have no known mechanism to get an energy density that's even close to the right order of magnitude.

Maybe you meant that in theory quantum fluctuations might do it? Unfortunately, this is really a non-answer. The probability is so ridiculously low that it's not practically distinguishable from zero. (It's _vastly_ more likely that every measurement ever taken, and every measurement that _will_ ever be taken, is wrong than that the event actually happened.)


I have to use "theory" loosely here, because there isn't any known or suspected way to do it. The energy, as you observe, is off by orders of magnitude.

It's just that it's "merely" orders of magnitude. The odds were ludicrously low, but with a whole lot of particles being collided, maybe, by some ridiculous outside chance, they might see one event out of the 10^20 events to be observed.

But almost certainly not. So it was never worth talking about. But people loved to talk about black holes, so the math got done.


Correct. It's not so much that the small ones are unstable, but just that there's a continuous curve of lifetimes that's a function of mass.

For the LHC, the lifetime of a black hole it could conceivably create would be 10^-86 seconds. It didn't even do that, but if it had, it would have evaporated before it moved the diameter of an electron. There's no functional difference between that black hole and a vastly bigger one besides the mass... but it's a difference of many, many, many orders of magnitude.
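
For reference, the standard semiclassical evaporation-time formula, which scales as the cube of the mass (back-of-envelope, and not really trustworthy this close to the Planck scale):

    t_ev = 5120 * pi * G^2 * M^3 / (hbar * c^4)
         ~ 8.4e-17 s * (M / 1 kg)^3

Plugging in a TeV-scale mass (~2e-23 kg) gives something in the 1e-84 to 1e-86 s range, depending on assumptions.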


I'm not trying to nitpick, just trying to get my head around your original claim, which I'm tempted to amend:

> Black holes [above a certain mass] can't evaporate now because the cosmic background radiation is too hot

And that mass--the Stable-Black-Hole-In-A-Vacuum mass--it's decreasing. And whatever it is at a given time, more massive holes grow, and less massive holes shrink. Do I have that right?

----

I'm trying to extrapolate backwards to a time when the universe was hotter and the SBHIAV mass was smaller. It seems like there ought to have been a point when the universe was so hot that holes expanded greedily, perhaps to the point where the expanding universe couldn't escape. Golly I wish they taught cosmology at my local university...


I think it's more a case of the press release people wanting to call it an unqualified "law"; as far as we know, black holes do evaporate slowly.

But it remains a statement about how the final area relates to the areas of the two merging black holes.


They _do_ evaporate, but they also absorb CMB, and right now CMB > evaporation. Later, when the CMB has fully dissipated, they can evaporate in practice.


> Later, when CMB fully dissipated

That's one of the most understated uses of "later" I've heard.


Hehe, true.


According to https://en.wikipedia.org/wiki/Timeline_of_the_far_future, that may be around 150 billion years from now.

Also, I have a hard time reading that page without a sense of existential dread.



How exactly is this distributed? The readme isn't really useful...


I'm not a .Net developer but:

> MineCase is a cross-platform, distributed Minecraft server application developed using .NET Core that uses the Orleans framework.

Orleans seems to be what makes this distributed I think, following the trail leads me to: https://dotnet.github.io/orleans/

> Orleans is a framework that provides a straightforward approach to building distributed high-scale computing applications, without the need to learn and apply complex concurrency or other scaling patterns. It was created by Microsoft Research and designed for use in the cloud.


I can elaborate a little further, having extensive experience with Orleans:

- Various parts of this Minecraft server have been built as Grains

- Grains are basically class instances which are activated (instantiated) somewhere in a network

- Orleans has a very clean interface which allows one to use a class instance in regular code as though the instance were stored in local memory. In reality, the class instance may be instantiated on a different machine in the cluster.

- Orleans clusters are generally controlled/private, though I suppose there's nothing stopping someone from building an open network of Orleans silos. That means the distributed aspect of this project is in regards to how the code behaves within a datacenter, in most cases.

This seems to be an exploration of how to build game infrastructure using Orleans. Microsoft has done this with the Halo franchise in their most recent games (Halo 4 + Halo 5.) Seems to work pretty well. On a fast network you can operate on distributed instances of just about anything and have sub-millisecond response times. Mind you, that'll depend greatly on what work is being done.
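
To make the grain idea concrete, here's a minimal sketch (hypothetical names, not MineCase's actual code):

    using System.Threading.Tasks;
    using Orleans;

    // a grain is addressed by a key; the Orleans runtime decides
    // which silo in the cluster it gets activated on
    public interface IChunkGrain : IGrainWithStringKey
    {
        Task<byte[]> GetBlocksAsync();
    }

    public class ChunkGrain : Grain, IChunkGrain
    {
        private readonly byte[] _blocks = new byte[4096];

        public Task<byte[]> GetBlocksAsync() => Task.FromResult(_blocks);
    }

    // the caller's side reads like a local call, even when the grain
    // lives on another machine:
    //   var chunk = client.GetGrain<IChunkGrain>("overworld:0:0");
    //   var blocks = await chunk.GetBlocksAsync();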


Think Akka if you're familiar with Java. It's not a port, but both are actor frameworks.


The readme doesn't mention any of this. It needs to be improved.

