
No, they are not. GDPR notices (which this is) must be understandable to the layman, including all consequences, such as "this will also allow access to other services secured with the same university/company-wide password".

This could also be a punishable crime in Germany: https://www.gesetze-im-internet.de/stgb/__202c.html and other articles around that one.


The German law you cite about obtaining a password applies only if they plan to access, or actually do access, data they are not authorized to access. Which is not the case (assuming they do not).

GDPR deals with privacy. The user name is personally identifiable data. The password is only personal data. The emails themselves can be PII or just personal data. Legally under GDPR, the password is the least risky piece of data here (as absurd as that is). It is also a property of the process. Take a club's GDPR consent sheet about giving photographs of your kids to the newspaper. You consent to the publishing of images and give the club data for it (first name, last name, restrictions, name of parent, etc.). These properties are not mentioned in the consent but are simply part of the process. This is nothing different; we are just very worried because the property in question happens to be a password.

I agree that they should ethically mention that they transfer your password. I also agree that there is no way a layman can understand any consent they grant on the Internet. There is a reason why informed consent in clinical trials (where this can be life and death) is not just a checkbox but a conversation, quiz, explanations, etc.


> The German law you cite about obtaining a password applies only if they plan to access, or actually do access, data they are not authorized to access. Which is not the case (assuming they do not).

Usually this is the case. The user and Microsoft are not the only parties involved here. The email provider is also involved, in that they provide an email account, often for work or educational purposes. In those cases, handing over account credentials is forbidden by the workplace or educational institution, and providing other parties such as Microsoft with access is usually forbidden as well. Other commercial email providers often have similar rules. Therefore either Microsoft is doing unauthorized access en masse (since they know that the aforementioned clauses are widespread common practice) or the users are illegally providing access to Microsoft.

> GDPR deals with privacy. The user name is personally identifiable data. The password is only personal data. The emails themselves can be PII or just personal data.

There is no such distinction in GDPR. There is only personal data according to GDPR article 4. A password is personal data because it is "personal" in that it can be (and is almost always) tied to a person. "PII" is something that only occurs in US law. The definitions are different, "personal data" in GDPR is far broader.

> Legally under GDPR, the password is the least risky piece of data here (as absurd as that is)

Depends on what else is in that Inbox and what else this password can access.

> These properties are not mentioned in the consent but are simply part of the process. This is nothing different; we are just very worried because the property in question happens to be a password.

Interesting idea, and yes, GDPR allows for not informing the user about what the user already knows, i.e. a kind of implicit consent. However, the surprise that even experts on HN show about this news demonstrates that the average user doesn't know. So this doesn't apply; Microsoft should have explicitly informed users and asked for permission to use the username and password.


I completely agree that they should explicitly inform. Like you outline, you could run into other issues.

Looks like I have to reread GDPR.

I am only arguing here that Microsoft would not lose in front of a judge.


It is even worse.

MS doesn't need to do anything. They don't need to pay anyone off. The EU bureaucracy is deeply wedded to MS products like Windows, Office, Teams, Outlook, etc., as are all EU national bureaucracies and public institutions.

There are firm opinions by e.g. the BSI (the German IT security office, comparable to something between the NSA, mostly NIST, DHS and ANSI) and other equivalent European national offices that it is practically impossible to operate modern MS products securely. E.g. there are guidelines from the BSI along the lines of "we know that in this exact version (which is years old, because the guideline took ages to write) you need to set the following registry keys to prevent data exfiltration. Btw. this won't help you, because you also HAVE to upgrade within a few weeks of each available update." There are firm opinions by multiple European data protection offices that say basically the same about GDPR compliance in MS products: practically impossible to achieve. There might have been that one configuration: "Once upon a time of writing the report, with that specific version of Windows and Office, when firewalling off half of Azure, setting those 300 registry keys and manually deleting the following files, illegal telemetry could no longer be observed. Also, you are obliged by GDPR to follow good practice and update regularly, so good luck with that...".

Basically it is illegal to process any personal data using MS products in the EU if the processing system has any kind of outgoing internet connection. All the bureaucracies ignore this systematically, citing the "impossibility" of working without said MS products. Migration plans away from those illegal processes are regularly cancelled, ignored or never completed. MS is free to do whatever it wants, they are never really investigated, fined or held to any laws.

Meanwhile, other big IT firms like Meta, Google, Twitter/X and lots of others are held to far higher standards, even though tons of your local government's data about you (tax reports, criminal records, school records and similar things) are subject to being exported to the US via Azure, MS telemetry and whatnot. With FAANG, the complaints are about comparably laughable stuff like "well, that IP address that Google Fonts could observe...".

The reason this doesn't change is that the local government institution is responsible for its own data processing (according to GDPR and other laws), with MS being only its contractor. And those government institutions are usually (in almost all EU states) exempt from GDPR and other penalties, and those penalties would be left-pocket-to-right-pocket anyway.

This is why MS gets a free pass on everything. Imho this must end.


So the "bug" is a convenient spy program for the US government.


It would be irrelevant, because the DNT header will be sent with every request, so for all practical purposes it will arrive later than any other kind of consent.


There can be a case where the end user (a person) logs on to the site, then sets a permission/consent to be 'tracked' (whatever), and then a cookie/localStorage entry persists, so the DNT is not relevant.

Consent/tracking doesn't mean solely 'cookie' banners.


No. Ambiguous consent is no consent. And continuing to send DNT is ambiguous, because the tracker can not distinguish between intent and accident.


GDPR specifies tracking to be necessarily default-off and opt-in anyways. Therefore the browser sending DNT:1 by default would just repeat the legal status quo. A tracker could not successfully argue that this is to be ignored, because the technical default that the browser sends is the legal default anyways.


Agreed. Torx might not be ideal, but it is widespread and (relatively) cheap. And miles better than anything else that is widespread and cheap.


Domain validation TXT records are poor infosec hygiene. If used at all, those records should never include any hint as to what service they are intended for.

E.g. the record should NOT be:

example.com IN TXT "someservice.com-validation=029845yn0sidng0345randomnyosndgf03548yn"

instead, it should be something like

example.com IN TXT "029845yn0sidng0345randomnyosndgf03548yn"

Of course there will be multiple such records for different service providers. The service providers will just have to check that handful of TXT records for the randomly assigned token in their database, instead of pre-filtering them by the someservice.com-validation prefix.

As jelavich pointed out, there is also https://www.ietf.org/archive/id/draft-ietf-dnsop-domain-veri... which suggests another improvement on that. To avoid polluting the example.com name with tons of TXT records and to avoid problems with CNAME records and such, those records should be further down in the tree like

_validation_mv0395ng035.example.com IN TXT "029845yn0sidng0345randomnyosndgf03548yn"

The record name token "mv0395ng035" could either be another random value assigned to example.com by someservice.com that they just put in their database, or it could be something like HMAC(example.com, <common secret known to someservice.com>), so that they don't have to store all those tokens. In any case, the check is just one DNS lookup, one comparison, done. Quicker, equally easy, and more privacy-preserving and infosec-hygienic.
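A minimal sketch (Python) of how a provider like the hypothetical someservice.com could implement that HMAC variant. The secret, label length and token format are assumptions of mine, and the actual DNS TXT lookup is left out:

    import hashlib
    import hmac
    import secrets

    PROVIDER_SECRET = b"someservice.com signing key"   # known only to the provider (made up)

    def record_name_for(domain: str) -> str:
        # Derive the per-customer label from the domain, so the provider does not
        # have to store a separate random label for every customer.
        label = hmac.new(PROVIDER_SECRET, domain.encode(), hashlib.sha256).hexdigest()[:16]
        return f"_validation_{label}.{domain}"

    def issue_token() -> str:
        # Random, opaque value the customer publishes as the TXT record content.
        return secrets.token_hex(20)

    def verify(txt_values: list[str], expected_token: str) -> bool:
        # Compare the TXT values fetched from record_name_for(domain) with the
        # token stored in the provider's database (the DNS lookup itself is omitted).
        return any(hmac.compare_digest(v, expected_token) for v in txt_values)

    token = issue_token()
    print(record_name_for("example.com"), "IN TXT", f'"{token}"')
    print(verify([token], token))   # True once the record is live

The point of the HMAC is only to make the record name reproducible from the provider's side; the published TXT value itself stays a meaningless random token.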


The problem with opaque records like "029845yn0sidng0345randomnyosndgf03548yn" is that you have zero clue what it's authorizing, making it impossible to audit your DNS records to ensure that you're only authorizing what you want to be authorizing.

And what do opaque records gain you anyways? Security by obscurity is not real security, and it's often possible to determine a domain's service providers by other means, such as CNAME/MX records, or by scanning the website for <script> tags.

Ideally, domain validation records would be of the form ACCOUNT_ID@SERVICEPROVIDER so you can know exactly what the record is authorizing.
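A tiny sketch of how such self-describing records could be audited, assuming the ACCOUNT_ID@SERVICEPROVIDER convention suggested above; the record values and the allowlist of providers are made-up examples:

    # Hypothetical audit of self-describing validation records of the form
    # "ACCOUNT_ID@SERVICEPROVIDER".
    ALLOWED_PROVIDERS = {"atlassian.com", "google.com"}

    txt_records = [
        "abc123@atlassian.com",                      # known, expected
        "550e8400@mailchimp.com",                    # unknown provider -> flag for review
        "029845yn0sidng0345randomnyosndgf03548yn",   # opaque -> impossible to audit
    ]

    for value in txt_records:
        account, sep, provider = value.rpartition("@")
        if not sep:
            print(f"UNAUDITABLE: {value!r}")
        elif provider in ALLOWED_PROVIDERS:
            print(f"OK: account {account} at {provider}")
        else:
            print(f"REVIEW: unexpected provider {provider} (account {account})")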


To make this explicit: maintaining accurate DNS configuration is extremely important to enterprise security and availability.

Allowing outdated DNS entries to persist can open up all sorts of horrible opportunities for impersonation, phishing, etc.

At the same time, removing a DNS entry that you still need can cause massive downtime.

So anything that makes it easier for ops teams to observe and maintain DNS (in whatever ugly way available) is probably a security win in the long run.


> The problem with opaque records like "029845yn0sidng0345randomnyosndgf03548yn" is that you have zero clue what it's authorizing,

I would have hoped the DNS management is automated and IaC-ed, so you just check the relevant commit message.


Keep hoping. Most orgs are updating their DNS through some awful web interface that often doesn't even have the ability to add a comment.


The service could also MitM you. They give you a code that validates their account against a second service.


Interesting point. I guess it would be possible to mask them, e.g. they give you the string "gitlab-123token123" and you set the TXT to hash("gitlab-123token123").
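A minimal sketch of that masking idea; the challenge string is the made-up example from above, and SHA-256 is an assumed choice of hash:

    import hashlib

    challenge = "gitlab-123token123"                 # handed to you by the service provider
    txt_value = hashlib.sha256(challenge.encode()).hexdigest()
    print("publish as TXT:", txt_value)

    # Provider side: it knows the raw challenge, so it can recompute the hash and
    # compare it with what it sees in DNS. A challenge relayed from some second
    # service would fail there, because that service expects the raw string.
    assert txt_value == hashlib.sha256(challenge.encode()).hexdigest()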


In a perfect world there would be a special DNS record type for this. The DNS server would store the full token value, but would return the hash when someone queries it. I think this would provide both maximum security and maximum privacy.


You could use your DNSSEC signing key to sign a validation message (offline, because that doesn't work over DNS).


As discussed elsewhere in this thread, domain validation needs to be frequently rechecked. Therefore, it's far more convenient to publish a DNS record than to manually sign messages out-of-band.


DNSSEC already provides attestation, why add another layer within the same system?


Because a DNSSEC attestation is usually public, except maybe if you use NSEC3 and hide the RR behind some random name.


In real life, lots of companies' infrastructure is an undocumented, non-version-controlled mess.


> making it impossible to audit your DNS records

Every DNS provider I've used in recent memory has offered a non-authoritative / non-DNS-exposed "comment" or "description" field for each record. Even if you aren't doing "DNS infrastructure as code" but just editing DNS records in a UI dashboard, you can use these fields to describe the motivation behind each record at creation time.


> Every DNS provider I've used in recent memory has offered a non-authoritative / non-DNS-exposed "comment" or "description" field for each record.

In my experience this is not at all common. I think I know of 3 that do, and I've used many more that don't.


Even stupid age-old BIND zone files can be version controlled and commented. Anything inferior to that level of documentability should be an instant no-no.


That can help with the ongoing maintenance of your records, but doesn't help you when you're adding the record in the first place.

As pointed out by singron at https://news.ycombinator.com/item?id=38069760 a malicious service provider (SP1) could give you a DNS record that was really issued by a different service provider (SP2). When you publish the DNS record, you're actually authorizing SP1's account at SP2 to use your domain.

With non-opaque records, you can be sure of what you're publishing.


Ah, now I get it. Yes, that is a possible problem.


Domain verification should typically be a one-time or at least rare event. You shouldn't have to keep the TXT records after the verification is completed.


No, domain validation should be frequent, so that revoking authorization can take effect quickly, which is particularly important if the domain changes ownership.


It should be one-time, yes. Maybe it shouldn't be rare though. But your point still stands as the TXT records should be ephemeral. So I don't think this deserves the downvotes.

At least ACME's DNS challenge protocol is designed this way.

> The client SHOULD de-provision the resource record(s) provisioned for this challenge once the challenge is complete, i.e., once the "status" field of the challenge has the value "valid" or "invalid".

https://datatracker.ietf.org/doc/html/rfc8555#page-67


That’s insane. Domains are not owned by the same entity forever.


You can also look up common CNAMEs or historical DNS records, which will always point back to the vendor. For instance, if they use SendGrid, look for a "bounce" record that points to SendGrid's servers. Do they use Facebook? Well, if they have a verified Facebook page, then yes. I would assume that Docusign and Atlassian also list Stripe as one of their customers on their website as well. Public TXT records are not that big of a deal.


There are two sides to this issue.

On one hand, there is the infosec side, where you want to hide info. On the other side, the service provider doing the validation WANTS to advertise their service in these DNS records, so they are disincentivized from making the changes you're suggesting.


If you are using TXT records like "029845yn0sidng0345randomnyosndgf03548yn" then how do you know which one to delete once you are not using that service? Do you just go back and forth between different services to find the one (some services only show this while you are in a validation phase, so finding them later might be a pain), or do you just leave them around?


Ops guy here... These days Cloudflare allows you to put comments directly into their service, but for a long time I would just export the zone file and annotate that. There is a KB article somewhere in Jira that will tell you what stuff is. I can also summon the ticket history to see what changes were made and why.


Those comments have a very short length limit (smaller than the records themselves) and I don't understand why.


Simple, indirection to the rescue again! Just add an increasing number in the comment field and keep track of which number is which comment in an Excel sheet on SharePoint or similar.

And yes, I'm mostly joking.


Hopefully you're managing DNS as code, so you have a repo commit log that can give you that context.


Keep a log of every record added and why it was added. When making any DNS changes review the log as well.


This type of “hygiene” is pointless; it does nothing but provide a tiny amount of obscurity, which is easily pierced in other ways.

It’s not hard to figure out what services a company is using, and most of these services requiring verification are so ubiquitous that confirming the knowledge adds no marginal utility to attackers. “Oh wow, this SaaS company has verified with Atlassian and Google, who could have guessed.”


This kind of thing is pointless against a targeted attack. But it can hide you long enough in case of zero-days/fresh unpatched vulnerabilities because attackers will first target the more easily visible victims.


It’s pointless against a targeted attack, but it will help attackers target you? That doesn’t really make sense to me. Can you share an example?


If an attacker knows about some exploit involving someservice.com, which you are using, that attacker will try to find out where he can use that exploit. E.g. he might use something like Shodan, Google or DNS to get a list of users of someservice.com. The potential victims that turn up in that list get attacked first. Later, once that list is used up, the attacker might look at other means of finding new victims, e.g. just trying the exploit on targets he doesn't know to be vulnerable. So in that case, not being "visible" to an attacker buys you time to fix the vulnerability.

On the other hand, if you are on the attacker's hotlist, he'll try you first and you gain nothing.


Double or triple that of a car, for a small plane, because of the higher fuel consumption.

But this only affects avgas, which is high-octane gasoline for propeller-driven aircraft. Jet-A1, which is a light diesel- or kerosene-like fuel for jets, doesn't contain lead.


All these cool-looking dashboards are just too inflexible. You cannot add your own aggregates beyond trivialities. You cannot just "color that one value that bugs you". You cannot just generate a readable report plus some explanatory text.

Spreadsheet export + pivot table gives you all that, doable for any moderately competent office drone without a round-trip through some endless backlog-spec-sprint-program-test-respec-sprint-... loop.


To be somewhat constructive: what you should have done is not create more elaborate dashboards. What the world imho needs is an easy way to use a spreadsheet tool to generate and publish a dashboard: a "make web dashboard" button right next to the print button, with auto-updates when the input data changes, of course.


Have you... used Excel? It's very simple to create any kind of "dashboard" (AKA graphs on a page) and then you just share the web link to the page.


Yes, I have. What Excel is still lacking is an easy solution for the input side. You can bind tons of data sources, but all are weird, hard-to-use, manual. There is no easy "grab this from that website, get the current data of what I just pasted there, mash it together, publish it"

Hell, it cannot even do proper CSV import. You need to reformat your CSV to match the locale Excel is running under!


Uh? Are you sure you've actually used Excel? The CSV import is highly configurable and leads you immediately into Power Query where you can massage the data any way you want.


The LibreOffice CSV import is configurable. The Excel one isn't.

You can do things in PowerQuery, but that is far from obvious and still buggy. Not to mention all the woes after import, like date/time auto-interpretation and autocorrections that cannot be switched off.

I stand by what I said. Excel imports are a huge mess.


Would you like me to send you some online tutorials on how to import CSVs into excel? Because at this point it's just crazy. Are you using excel 2009? Do you not know about the "Data" tab in the ribbon? There's a whole dialog to complete with several options when you import a CSV file.


Cut it with the attitude. This is not the place for it.


Who are you?


Are you evaluating import or open? Open does stuff to the data right away, import lets you push buttons first:

https://www.fcc.gov/general/opening-csv-file-excel


Powerquery oh god never again!


Yup. That's what I want to build. Thank you for saying that -- I feel like it really validates my feelings hah


The problem is always this project turns into “let’s build excel or tableau” and the customers that care usually already use one or the other.


That's fair, but fortunately I'm not planning on doing either. (Well, I am still implementing ~all of Excel's formulas for compatibility, but not the UI/UX...)

People don't really consume data, they read documents. I think that's (part of) the vision these projects lack.


I'm working on an OSS BI tool focused on a document form factor. Might be of interest to you. https://github.com/evidence-dev/evidence


Thank you! Definitely interesting! I had actually starred that repo when I saw it being discussed on some HN thread a week or two ago


Yeah, I misread the post I replied to: I’ve been on a bunch of internal dashboard projects that were in danger of losing focus and turning into full-fledged visualization platforms.


There is Smartsheet, which mostly works well for this, but its power-user features are pretty limited compared to Excel.


That's a bit oversimplifying IMO.

There's a place for well-crafted analytics dashboards in today's business, too. They're mostly tailored to specific user requirements/use-cases and look nothing like the flashy stuff one sees on dribbble or elsewhere.

Tailored analytics dashboards can solve many pain points of Excel/spreadsheets if done well. If ~1k people need to access the same data each day and 'analyze' it for similar things (patterns/outliers/seasonalities etc.), then a good dashboard will be quicker, better and cheaper than 1k office workers trying to create pivot tables. If that dashboard is tailored to the use case, then those 'color that one value that bugs you' requests can oftentimes be implemented within minutes after hearing a good use-case from a user. I say that from experience.

And from experience, I'd say that most Excel users know the basics of basics. I'd bet that 90%+


The problem here is that you usually do not have ~1k users with all the same requirements. You have 200 groups of average 5 users each, all with their own department-specific, country-specific or workflow-specific requirements. Of course a central solution will be better and cheaper. But it will never be quicker, because you will take ages to just gather requirements from all 200 distinct user groups. As soon as you have those requirements, they will have changed already, so you are working on yesteryear's problems.

And of course, given a working system, the users can drop you a quick email, explain their problem (yes, in an ideal world they could do that, and you would understand them right away...) and you implement a 5-minute change. In reality, however, their problem will first have to be specified in a user story, with a ton of clarification requests until the story is really understood by the dev team; then you need goodwill, time and money for the implementation. And maybe their problem can only be solved by an ugly hack, a weird special case for the ternary currency and ages-old lunar-calendar-based tax system of Lampukistan. Would that really be quicker than the Lampukistan team just throwing together a few formulas and being done before the initial email was even written? Even when multiplied by the special requirements of the other 100 country sales teams?

Also, I've had similar change requests where it was explicitly asked that we provide a spreadsheet prototype of what the statistics should look like. Well, thanks, why again do we need a dev team?

I know that spreadsheets suck. They are ugly, undebuggable hacks, always and without exception. You need hours to implement what would be a quick one-liner SQL query, with terrible error behaviour, weird edge cases and hell knows how many hidden bugs when the locale uses the Lampukistan currency separator instead of a decimal dot...

...Except that they provide those office drones with velocity, which, as the usual wisdom around here goes, is everything.


This is the way


Agree.

And a tool like Superset enables users to customize their dashboards and charts.


It would sink to the center of the Earth and then oscillate around the center until friction stops it dead center. Materials in the Earth's crust and mantle are not strong enough to stop that mass from sinking ever deeper.

But that assumes that this spoonful of matter were stable in that state, which it isn't. That kind of density can only be held up by some force like gravity keeping up the pressure. The gravity of that spoonful is insufficient, so the internal pressure will drive the neutrons apart. The effects would probably look like a very, very large nuke or a large asteroid impact.


Very large asteroid, too big for a nuke:

https://www.wolframalpha.com/input?i=%280.782343+MeV+%2F+neu...

~89 billion megatons of TNT equivalent
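A rough back-of-the-envelope sketch (Python) of how a number of that order can be reached. The spoon volume (~5 mL) and the core density (~1e18 kg/m^3) are my assumptions; only the 0.782343 MeV per free-neutron beta decay comes from the linked query:

    MEV_TO_J   = 1.602176634e-13          # J per MeV
    NEUTRON_KG = 1.67492749804e-27        # neutron mass, kg
    MT_TNT_J   = 4.184e15                 # J per megaton of TNT

    volume_m3  = 5e-6                     # assumed "spoonful": ~5 mL
    density    = 1e18                     # assumed core density, kg/m^3
    mass_kg    = volume_m3 * density

    n_neutrons = mass_kg / NEUTRON_KG
    energy_j   = n_neutrons * 0.782343 * MEV_TO_J

    print(f"{energy_j:.2e} J  ~=  {energy_j / MT_TNT_J:.1e} megatons of TNT")
    # -> roughly 4e26 J, i.e. on the order of 9e10 (tens of billions of) megatons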


What motivates your first factor? 0.782343 MeV is the free neutron beta decay energy; where in the solar system are the free neutrons minutes after they are magically teleported to terrestrial ground zero as something like a (degenerate, possibly ultra-relativistic) Fermi gas?

I think most attempts to arrive at an answer will end up somewhere between half and virtually all of them being "not very close" (~ light-minutes) away, and that's assuming one corrects for the differences in escape velocities. (The equatorial escape velocity of a spinning neutron star is in tenths of the speed of light, thus the sobriquet "relativistic star"). Without this correction, it is likely the bulk of the expanding drop of Fermi gas just exits the atmosphere in milliseconds (timed by terrestrial stopwatches), with time dilation extending the mean lifetime of the free neutrons in the drop comparably to the extended lifetime of atmospheric muons from cosmic rays. The bulk of the beta decays happen at a distance from terrestrial ground zero best measured in astronomical units.

If we play Star Trek transporter games such that the neutrons arrive at ground zero at rest in local East-North-Up coordinates, you'd want to know the internal kinetic energy (KE) density of the (pure-)neutron star, which will be in the range of 20-40 for x in 10^{x} J m^-3. The 10^25ish or even 10^30ish joules of KE will be released from our several cm^3 spoonful practically all at once and practically omnidirectionally from ground zero (so again, most free neutron decays happen at ~ AU distances from ground zero because they'll zip right through the atmosphere). The expansion of the suddenly unpressurized gas of neutrons will make a mess, particularly the fraction that slams into and through the ground. Part of the mess is neutron scattering physics, and I have no expertise there, but I would guess there wouldn't be any free neutrons near ground zero (and probably not within the solid Earth) in ~minutes.

Additionally, one might compare the R-process <https://en.wikipedia.org/wiki/R-process> for kilonovas in which a binary neutron star collision ejects high-neutron-density matter which decompresses pretty spectacularly, forming lots of heavy elements.

To summarize, I think the free neutron decay timescale (mean lifetime ~ 15 minutes, multiply by ln 2 if you prefer half-life) is simply too long after the neutron star material is teleported to Earth: any free neutrons that haven't been absorbed into heavy nuclei likely will be millions of kilometres away from ground zero when they decay.
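For the distance scale: a free neutron's mean lifetime is about 880 s, so even at a modest fraction of the speed of light it is nowhere near ground zero when it decays. A small sketch, where the 0.1c ejection speed is an assumption (roughly the escape-velocity scale of a neutron star); time dilation at higher speeds only pushes the distance further out:

    C            = 299_792_458.0      # speed of light, m/s
    TAU_NEUTRON  = 880.0              # mean lifetime of a free neutron, s (~15 minutes)
    AU           = 1.495978707e11     # astronomical unit, m
    LIGHT_MINUTE = C * 60.0           # m

    v = 0.1 * C                       # assumed ejection speed
    d = v * TAU_NEUTRON               # lab-frame distance per mean lifetime (gamma ~ 1.005, negligible)

    print(f"{d:.2e} m  ~=  {d / 1e9:.0f} million km"
          f"  ~=  {d / LIGHT_MINUTE:.1f} light-minutes  ~=  {d / AU:.2f} AU")
    # -> ~2.6e10 m: tens of millions of kilometres from ground zero before decaying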


> I think most attempts to arrive at an answer will end up somewhere between half and virtually all of them being "not very close" (~ light-minutes) away

The mean free path of free neutrons moving through normal matter is only on the order of centimetres; exactly how many centimetres depends on the neutron energy and the specific nuclei it's interacting with, but still on the order of centimetres.

Given the relative masses, I can assume the air above will be exploded out of the way; but the half going down will have all of the earth as a moderator… and also serve as a neutron-absorbing backstop that will probably increase the actual yield.

I'm also ignoring any binding energy between the neutrons. I'm basically treating them as disconnected from the first moment, which may be a terrible idea, but AFAIK nobody actually knows how long a macroscopic combination of this scale would remain stable for.


I don't know enough about neutron physics to comment usefully on your mean free path logic, but I do know that solar eruptive activity can launch relativistic neutrons at Earth which can be detected even at sea level using scintillators, and that mountaintop detection has been around since the early 1980s. Shibata 1994, Propagation of Solar Neutrons <https://sci-hub.se/https://doi.org/10.1029/93JA03175>, §4.2.1 (Fig 3) higher energy neutrons get further into the atmosphere, so I don't think the atmosphere is much of a barrier for the comparable (MeV-GeV) teleported neutron-star neutrons.

We seem to agree that free neutrons don't stay free neutrons when they slam into the solid earth.

I too wanted to think about neutrons as a non-self-interacting gas, but that just doesn't work: Meyer 1994, https://ned.ipac.caltech.edu/level5/Sept01/Meyer/Meyer3.html (paragraph beginning with "Only the strong gravity of the neutron star keeps such matter from exploding apart."). "Cold" in this context is partly explained in the preceding paragraph; in inner regions the matter is a degenerate gas, meaning the particle kinetic energy becomes dependent on the density, or equivalently the pressure becomes independent of temperature; even at enormous pressures, degenerate gases don't hold much thermal energy -- that was practically all radiated away when the NS was young. Our teleporting (of inner-region matter) therefore engages a very low-entropy r-process.

Outer regions are just too complicated and varied for a HN comment. The crust is thin -- a few to a few hundred metres or so compared to an NS radius of ~ 10km. It's also much less dense, so is a small fraction of the NS mass, and thus maybe not a target for our teleportation. Here's a 180-page open access review: https://link.springer.com/article/10.12942/lrr-2008-10 Pesky electrons and protons complicating things.


Thanks for all three links; it's getting late here, so this is only going to touch on the first part of your message.

> Shibata 1994, Propagation of Solar Neutrons <https://sci-hub.se/https://doi.org/10.1029/93JA03175>

If I'm reading that figure right, at sea level the attenuation is at least a factor of 2000 for all energies they're graphing. That sounds about right to me.

I realise now that I may have been unclear in intent previously: if you look at figure 2, and then consider a typical solid or liquid's cross sectional mass density, hopefully that explains why I was speaking of neutron mean free path of centimetres — 100g/cm^2 is 1m of water.

However this is just the initial condition, and I don't think this scenario is one where the atmospheric density can be accurately approximated as constant over time.


I wouldn't sweat it, and I don't know enough about the nuclear physics to keep up (and we haven't even been talking about the neutrino energy in beta- decays, the gamma spectrum, or what becomes of the electrons; resonances go way over my head). This isn't really a gravitational problem (but...footnote [1]), so I'm not so useful here.

So, more for the original questioner than for us:

What's inside a neutron star stays inside a neutron star. Unless of course the NS is destroyed via e.g. collision, tidal disruption, or infall pushing it over a mass limit like Tolman-Oppenheimer-Volkoff. Sci-Fi teleporters don't exist, and there's no basis to think they ever will.

The closest neutron stars are between hundreds and a thousand light-years away and IIRC all the close ones are isolated (in the sense of no stellar multiplicity; they have no binary partner(s)).

Consequently what ben_w and I have been yakking about is inaccessible to experiment (we can't generate the relevant pressures, and artificial neutron sources are not very bright yet (pardon the BrightnESS pun, <https://europeanspallationsource.se/about>)).

It's not accessible to astronomical observation either. The closest physical phenomenon I can think of is an NS mass ejection (for which there is an ample and active academic literature), and that's far from a close match. At least in some parts of the spectrum we can see a large NS mass ejection -- large meaning somewhere around 10% of the mass of the sun -- but there's practically no hope to see just a spoonful, and not hurled into a close-by planet's atmosphere or even that of a noncompact companion star.

So the answer to the question ultimately is -- if we imagined the magical arrival of a small ball of NS matter on Earth at rest on the Earth's surface -- "complex nuclear physics" is in the details of the practically-instantaneous kaboom, and a lot of that complexity is because the Earth is not the practical vacuum around a neutron-star/neutron-star collision that ejects a lot more than a spoonful of material.

- --

footnote [1]: I mean, one can think of it in terms of Raychaudhuri's equation (and that's where I started, in fact): the initial radial divergence of the acceleration vector from the sudden release of pressure dominates, causing the bits to tend to fly away beyond the hope of recollapse. But the solid earth (and, as the thread covered, nuclear interactions even in the atmosphere) generates enormous shear via contact forces, so some of the energy-density of the NS matter will stick around, and in due course what wasn't ejected "to infinity" settles back to a basically round Earth (hydrostatic equilibrium returns). From this perspective comparing the NS matter with an asteroid impact makes sense to me, but probably undersells the nuclear fallout.


I've heard that black hole matter is stable at any size, so how much difference in density is there between neutron star matter and black hole matter where it crosses the threshold of stability from its own gravity?


Black holes aren't matter; they're pure gravitational binding energy. A neutron star becomes a black hole when the neutrons pushing against each other can't push back against the gravitational forces (neutron degeneracy pressure), and the neutrons do something we're not sure of... but whatever happens, they're crushed down into something smaller than a neutron star, into a singularity, and we see the result: a black hole. Eternal darkness for the poor neutrons; this bit gives me chills.


> Black holes aren't matter; they're pure gravitational binding energy.

Could you expand on this a bit? What exactly do you mean, and what is your basis for saying that it's true?


Black holes are not made of matter... the matter has collapsed into pure energy. The form of energy is a mystery (it's inside the event horizon), but since the gravitational field persists, it's often referred to as pure gravitational binding energy. I graduated in physics at Manchester Uni and I've still got a 'preference' to be as correct as possible when talking about BHs and what they're 'made' of. Kip Thorne also often refers to the stuff BHs are made of as 'gravitational binding energy', so I think it's safe to do the same.


In GR, black holes have only three distinguishable properties: mass, charge, and angular momentum. If you have one made from matter, one from antimatter, and one from sufficiently concentrated light, all three are indistinguishable.

As I'm not a physicist, I wouldn't risk phrasing this as "pure gravitational binding energy" just in case this has a specific and different meaning.

I read that the interior of an event horizon immediately causes problems with quantum mechanics' no-cloning rule, so I suspect the actual problem here is "QM and GR are fighting again" and we can't get any answer until we've resolved that.


35th-century science fair project gone very, very wrong.


Even better: Each user can have his own locale and charset, and may even change that per program/shell/session. One may save filenames as UTF-8, one as ASCII, one as ISO8859-13, one as EBCDIC.

However, the common denominator nowadays is UTF-8, which has been a blessing overall, getting rid of most of the aforementioned mess for international multi-user systems. And there is the C.UTF-8 locale, which is slowly gaining traction.
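A small illustration (Python, on a typical Linux filesystem) of why this gets messy: filenames are just bytes to the kernel, and what you see depends on the encoding you decode them with. The file name used here is only an example:

    import os
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        name = "Grüße"                                   # same logical name, stored twice...
        open(os.path.join(d.encode(), name.encode("utf-8")), "wb").close()
        open(os.path.join(d.encode(), name.encode("iso8859-1")), "wb").close()

        for raw in os.listdir(d.encode()):               # raw bytes, as stored by the kernel
            for enc in ("utf-8", "iso8859-1"):
                try:
                    print(enc, "->", raw.decode(enc))
                except UnicodeDecodeError:
                    print(enc, "-> <not decodable>", raw)

One user's UTF-8 name shows up as mojibake under another user's Latin-1 locale, and the Latin-1 name isn't even decodable as UTF-8.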

