Decoding James Webb Space Telescope (destevez.net)
286 points by gaius_baltar on Dec 27, 2021 | 93 comments



Does anyone know about the internal communication of the probe? What protocol does it use to talk between components, and what's the topology? Where and how do they locate sensors such as temperature probes? It'd be cool to have more details about the "build" of the probe.


The protocol is called SpaceWire[0][1]. Not sure about the details of where sensors are located.

[0]: https://en.wikipedia.org/wiki/SpaceWire [1]: https://ntrs.nasa.gov/api/citations/20030025278/downloads/20...


Would be cool to see a tracker displaying the status of deployed systems and instruments.



Thanks!


Does anyone know offhand when we should receive the first images?


Of some interest, this is the proposed imaging breakdown by time:

   2.0 % observation calibration
   4.9 % instrument calibration
   7.9 % solar system (comets, asteroids, Kuiper belt objects, etc)
  16.1 % exoplanets
  17.2 % nearby galaxies
  20.4 % galactic (debris disks, etc)
  31.5 % distant galaxies and cosmology

There's a whole huge breakdown of what instrumentation calibration entails: https://www.stsci.edu/jwst/about-jwst/history/science-operat...

This doc was drafted in 2012, so this might already have changed or might yet change: https://www.stsci.edu/jwst/about/history/science-operations-...


About six months.

https://webbtelescope.org/quick-facts/mission-launch-quick-f...

> After reaching its orbit, Webb undergoes science and calibration testing. Then, regular science operations and images will begin to arrive, approximately six months after launch. However, it is normal to also take a series of "first light" images that may arrive slightly earlier.

It will take about a month for it to get out to the Sun-Earth L2 point which is 1.5M km (0.01 AU) from the Earth. For comparison, the Moon is 384k km away. The telescope will be 4x further away from the Earth than the Moon is.
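A rough sanity check on those numbers (approximate figures, so treat the outputs as order-of-magnitude):

    # Approximate figures from above; exact values vary slightly by source.
    L2_KM = 1.5e6          # Sun-Earth L2 distance from Earth, km
    MOON_KM = 384e3        # mean Earth-Moon distance, km
    AU_KM = 149.6e6        # one astronomical unit, km
    C_KM_S = 299_792.458   # speed of light, km/s

    print(L2_KM / MOON_KM)   # ~3.9, i.e. about 4x farther than the Moon
    print(L2_KM / AU_KM)     # ~0.01 AU
    print(L2_KM / C_KM_S)    # ~5 seconds one-way light time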


We'll probably get the calibration images before the six-month mark when regular science operations start, assuming things go according to plan, since they're great press. There's also a chance that the science mission images will be delayed to allow for academic publishing.


(and too late for an edit) that 1.5M km thing is moderately important when it comes to communication with the JWST.

The frequencies that can be used for communicating with a satellite or space probe are governed by how far away it is. The cutoff for this is 2.0M km.

https://en.wikipedia.org/wiki/Deep_space_bands

And so, the JWST is still considered near earth and can't use the deep space bands.

https://deepspace.jpl.nasa.gov/dsndocs/810-005/201/201C.pdf


Why is the exact frequency used important? What's different about the near-space bands versus the deep-space bands? They look arbitrary to my idiot eyes.

(I'm asking why the band used is significant in any way.)


It's not a technical restriction, it's a regulatory one. Like how parts of the UHF spectrum are allocated to TV and parts to radio. They don't want near earth communication going on in the deep space band because it will swamp the weaker signal.


Right, but the bands are not far apart, and the comment I originally replied to seemed to imply that there was significance to it being in the near-space category rather than deep space, as if it enabled something or was important in some unmentioned way. That was the whole (unspoken) point of the comment, and I'm not "in the know" enough to already know the unspoken part. I'm asking what the unspoken part is.


According to the wiki link above, the bands are similarly sized and adjacent, so this wouldn't have much to do with "physics". Perhaps the "near" bands are just more congested?


Many other posters have said 6 months, but the first images will likely arrive much earlier.

6 months is the time frame for regular science operations. It is likely NASA will share some images well before that from the calibration phase as part of mission PR.

As regular Joes on the internet, we are only interested in those PR images; regular science operations are more important for astronomers applying for time on the telescope.

Beyond those initial PR images, we can perhaps expect some PR-worthy research papers (i.e. the kind of papers that get posted here) maybe a year from now, given the first projects will get access 6 months from now.



It would be 180 to 210 days from launch. There is a lot of calibration work for the mirrors along with cooling the telescope down.


Six months if I recall correctly


I'm a bit shocked at how little bandwidth was allocated. 421 megabytes per day is the theoretical upper limit. 16 terabytes for the entire mission. I have more bulk storage in my desktop.


This is just the telemetry data. They deployed the high data rate antenna yesterday which can do many GB per day.


Oh, good, that unfolding worked.

The amount of unpacking involved as this thing deploys is insane.

On the data rate thing, satellites usually have a low data rate system with omnidirectional antennas, used for command and positioning. Then they have a high data rate system with directional antennas for whatever it is they do.

(The USAF used to have a strict separation between the two. This reflects the USAF's pilot-oriented mentality. The USAF is pilots, and then everybody else. The low data rate system belonged to the piloting operation, which used to be in the Blue Cube in Sunnyvale CA and is now at Schriever Space Force Base, formerly Falcon AFB, in Colorado Springs CO. They "drive the bus", managing orbital insertion and station keeping. The high data rate system belonged to the payload, and once the piloting operation had it turned on and aimed, it was turned over to the agency that owned the payload. Private satellite operators usually don't make that distinction.)


There are completely practical reasons. The omnidirectional antenna typically doesn't have the gain, bandwidth, or power of the high-bandwidth antenna, but it does have the useful property of being usable when the vehicle might be tumbling.


Right, but didn't want to go into that much detail.


And according to this chart it's about to pass the Moon in distance in a few hours. https://jwst.nasa.gov/content/webbLaunch/whereIsWebb.html


And to tag on, there's even a 6-second 4K video of that deployment step: https://svs.gsfc.nasa.gov/vis/a020000/a020300/a020339/WEBB_H...

That's from this link:

https://www.jwst.nasa.gov/content/webbLaunch/deploymentExplo...

I feel so spoiled that NASA provides video clips of each specific deployment step.


I would feel more spoiled if it were actual video from the spacecraft.


Yes, please provide a source! Based on this [0], with the High-Gain antenna (Ka-band), they can do 3.5 Mbyte/sec (28 Mbit/sec), which is about 295 Gbyte/day. Even if only assuming 16 hours/downlink/day, that is ~200 Gbytes/day. Also, with the S-Band Medium gain antenna, JWST is capable of accomplishing true duplex communication, which means they can uplink on the S-band and simultaneously downlink on Ka-band.
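A sketch of that arithmetic (assuming the quoted 28 Mbit/s and ignoring protocol overhead):

    # Daily downlink volume at the quoted Ka-band rate; overhead ignored.
    rate_mbit_s = 28                            # Mbit/s, from the JWST docs link above
    bytes_per_s = rate_mbit_s * 1e6 / 8         # = 3.5 MB/s

    full_day = bytes_per_s * 24 * 3600 / 1e9    # ~302 GB with continuous downlink
    contact_16h = bytes_per_s * 16 * 3600 / 1e9 # ~202 GB with 16 h of contact per day
    print(full_day, contact_16h)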

For reference, MRO is capable of downlinking at up to 5 Gbit/sec with a 3.0-meter HGA [1].

[0] https://jwst-docs.stsci.edu/jwst-observatory-hardware/jwst-s... [1] https://descanso.jpl.nasa.gov/DPSummary/MRO_092106.pdf, table 4-7


That is 5 Mbps, not Gbps :)

In Reed-Solomon only mode, MRO can transmit about 6.6 Mbps but at typical Mars-Earth range the data rate is much lower.


You have to take into account that it was originally scheduled to launch in 2007, then in 2014. Development began as early as 1996.


It's fairly typical for the S/L-band telemetry and command-and-control antenna on a satellite (omnidirectional, in case of pointing/station-keeping error) to use a very low data rate and a narrow channel. The narrower the channel is, the easier it is to pick it out of the noise and aim medium-sized earth station dishes at it.
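The narrow-channel point is basically thermal noise scaling with bandwidth (N = kTB). A small illustration with made-up bandwidths and an assumed noise temperature:

    import math

    # Thermal noise power N = k*T*B: a narrower channel means a lower noise floor.
    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 290            # assumed system noise temperature, K (illustrative)

    for bw_hz in (1e3, 1e6):   # e.g. a 1 kHz command channel vs a 1 MHz wideband channel
        noise_dbm = 10 * math.log10(k * T * bw_hz * 1000)
        print(f"{bw_hz:>9.0f} Hz -> noise floor ~{noise_dbm:.0f} dBm")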


The instruments aren't really high bandwidth. If you take NIRCam as an example, it's 40 megapixels with an observation time per image ranging from 4 minutes up to 3 hours.
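Back of the envelope (assuming roughly 2 bytes per pixel and one readout per exposure, which glosses over how the detectors are actually read out):

    # Rough NIRCam average data rate; 2 bytes/pixel and one readout per exposure are assumptions.
    pixels = 40e6
    image_bytes = pixels * 2                    # ~80 MB per readout

    for exposure_s in (4 * 60, 3 * 3600):       # 4-minute vs 3-hour exposures
        mbit_s = image_bytes * 8 / exposure_s / 1e6
        print(f"{exposure_s:>6d} s exposure -> ~{mbit_s:.2f} Mbit/s average")

Even the short-exposure case averages only a few Mbit/s, well under the Ka-band downlink rate discussed elsewhere in the thread.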


Only NIRCam has that many pixels; the other instruments have far fewer, I believe.

They will make up for the smaller numbers of pixels per (long) exposure by running the thing constantly.

Check out the sort of images they produced with the 128x128 pixel sensor on Spitzer: https://upload.wikimedia.org/wikipedia/commons/a/a5/Andromed...


That’s insane. Where did you read that?


It's in the article, but the OP was confused because this isn't the imaging data. It's the telemetry, things like thruster temps, gyro speeds, etc... The metadata that NASA uses to make sure the spacecraft is healthy, not the mission payload.


Astronomical image exposures are measured in hours. I have no idea how long the average image will be for NIRCam on JWST, but your average space photo from ground telescopes is usually a combination of 5-to-15-minute exposures, with a total imaging time of 10 to 40 hours.


One would wish that for budget-exponentially-overrun taxpayer-funded infrastructure, there would be open-source decoding information available.


This is just telemetry data, which doesn't have much general or scientific interest. I guess they could publish the protocol spec (honestly, it probably is acquirable), but most of the fun for the kinds of people who want this data is going to be doing the reverse engineering themselves.

The real imaging data would require a much more significant dish to even receive (I can't immediately find what it's going to use, but I'm guessing something like a 40-meter dish), so there are approximately zero amateurs who could use such open-source information.


Most of the public specifications are distributed freely by the CCSDS (Consultative Committee for Space Data Systems): https://public.ccsds.org/Publications/BlueBooks.aspx

The mission-specific parameters ("managed parameters") used by any given mission are usually more tightly controlled, as are the payload specifications for each telemetry channel.

> This is just telemetry data which doesn’t have much general or scientific interest

My understanding is that "telemetry" and "telecommand" stand for the downlink and uplink directions of a space link. I mostly worked upstream of telecommand, but I understood "telemetry" to refer to received data of any kind -- e.g. in CCSDS 130.1-G-3, an informational report on the design of the CCSDS telemetry system. https://public.ccsds.org/Pubs/130x1g3.pdf

By the by, I've been continually impressed with the quality of the CCSDS' documents. The "green books" (informational reports, like the one above) are extremely approachable and well-written.
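For a flavour of what the blue books specify, here is a minimal sketch of parsing the 6-byte TM Transfer Frame Primary Header from CCSDS 132.0-B. Treat it as illustrative only: JWST's actual framing may use AOS frames or different managed parameters.

    # Minimal parse of a CCSDS TM Transfer Frame Primary Header (6 octets),
    # field layout per CCSDS 132.0-B. Mission "managed parameters" decide
    # frame length, virtual channel usage, etc.
    def parse_tm_primary_header(frame: bytes) -> dict:
        h = int.from_bytes(frame[:6], "big")
        return {
            "version":           (h >> 46) & 0x3,    # 2 bits
            "spacecraft_id":     (h >> 36) & 0x3FF,  # 10 bits
            "vc_id":             (h >> 33) & 0x7,    # 3 bits
            "ocf_flag":          (h >> 32) & 0x1,    # 1 bit
            "mc_frame_count":    (h >> 24) & 0xFF,   # 8 bits
            "vc_frame_count":    (h >> 16) & 0xFF,   # 8 bits
            "data_field_status": h & 0xFFFF,         # 16 bits
        }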


Gonna chime in here to comment that most NASA missions (and ESA ones too) provide the scientific data for download free of charge, under public domain or CC licenses. If it's for scientific purposes, it's not just good manners but a requirement to cite the proper dataset (which also gives you the bonus of citing a respected source, so it's a win-win). Thing is, many people don't even know where to look!

And it doesn't help that some missions manage their own archives differently, and there's a lot of terminology to learn on your own. One of the complete opposites of that, which was a joy, was the New Horizons archive, which at one point you could download from a torrent! For example, if you wanted to see V3 of the Arrokoth encounter from 2019, you'd go to: https://pdssbn.astro.umd.edu/holdings/nh-a-lorri-3-kem1-v3.0...

Again, New Horizons is a bit of a rare case in which they went for super accessible data for everyone. PDS itself is a great system, but many missions will just upload a bit of data to PDS and then manage the rest some other way (Cassini, for example, has only a couple of instruments on PDS; you have to go to some other URL if you want uncalibrated but automatically processed images in JPEG format[0], and to yet another place (I've lost the link and can't find it on mobile) for the full, science-grade dataset).

A great resource is OPUS[1] too, though I find its UI a bit difficult, and in the end I prefer to download full datasets and just explore them on my own rather than going through those online browsers. For example, if you wanted to check the Voyager images of Neptune, you'd go to this massive URL[2]. Quick tip: once you've configured the filter you want to apply, the Search button is on the top left -- this is the kind of usability thing I mentioned; buttons and links aren't quite where you'd expect them. Oh, and there's a limit to how many things you can select for download at once. And it's all dynamically loaded, and on and on and on. Which is why, as I said before, I generally prefer to just download the full GB-sized dataset and explore it on my own.

[0] https://solarsystem.nasa.gov/raw-images/raw-image-viewer/?or...

[1] https://opus.pds-rings.seti.org/opus/

[2] https://opus.pds-rings.seti.org/opus/#/instrument=Voyager+IS...


> Gonna chime in here to comment that most NASA missions (and ESA too) provide the scientific data for download free of charge

I thought that was the case, but it's been so long since I've been on a mission proper (Cassini, student co-op) that I didn't want to say so without basis. Thanks!


Yes, I just explore these archives as a hobby and sometimes do a bit of amateur processing on the files. There tends to be some embargo period (a few months to a couple of years) where only the mission team has access to the data, and they decide how and with whom to share it. But then again, they are all scientists and willing to share knowledge 99.9% of the time -- so a fellow researcher will probably be able to get a copy of the data if they are polite and ask for it accordingly.

As I'm just "playing" with the files, I don't mind waiting a few months or years to get access to full "scientific grade" readings from incredibly complex machines and systems. And if the "raw" data is not easily available, they also usually provide processed images as part of the missions' public outreach campaigns (usually the ones that end up on Wikipedia).


From: https://ntrs.nasa.gov/api/citations/20080030196/downloads/20...

>To keep up with the high downlink, the recorder data gets sent directly to the Ka-band transmitter

Currently AWS Ground Station doesn't support Ka-band, so no luck there. It's apparently going to do a transmission once a day, so you would need to time it right with the ground station.


Ground Station as a Service. I had no idea this existed. I am continuously amazed at how many things Amazon churns out "as a service".


It's pretty amazing. I found out about it a couple of months ago from a job posting that wanted a programmer for a CubeSat. It seems like this is what they're aiming the service at for the moment.

And really, it's the perfect place for AWS to enter, add glue, and provide a service to let you do your core business. Feels like the future, man.


Nice link, very informative. Here's a funny excerpt: "It [JWST] is currently planned to be launched in 2013 from French Guiana aboard an Ariane 5 launch vehicle".


I couldn't find a publication date in the PDF, but going off the URL I'm going to guess it was from 2008.

It's a little bit annoying finding information about JWST, as info could potentially be old and out of date. Another document I read mentioned using XML as the database format because XML was an emerging standard :S


But in a really open project, the design of the whole lot would be on the web, and the data sent back would be sitting on an FTP server somewhere for anyone to download and use.

In many ways, an open project is cheaper to do than a behind-closed-doors project where every new contractor needs to get access to only the bits of the project they need access to, and misunderstandings happen because not everyone has enough of the big picture.

The only bit that needs to be secret is one private key used to sign the commands sent to the satellite, just so one random Mallory can't 'steal' it.


> the data sent back would be sitting on an FTP server somewhere for anyone to download and use

I'm sure they could actually do that without too much fuss. But it would require significant amounts of scientist time to document those datasets to enable others to use them for any arbitrary dataset. I'm sure we'll see fully open data sets from JWST appear, but lots of the stuff it collects isn't going to be interesting enough that it's reasonable to spend scientist time documenting it.


It seems like it should all be automated. Some scientist generates a mission request for the JWST techs. If accepted, the mission is added to the timeline with all of the metadata the original scientists had in their proposal. Stuff like the area being imaged, the sensors in use, duration of capture, etc...

Once the data is collected and downloaded it is added to the catalog with all of that metadata attached. Then it's a matter of opening up that catalog to the public, although I'm guessing the downloads will be quite sizeable so the bandwidth could be an issue.

The trick to making this work is to integrate the publishing into the workflow so it doesn't require any additional effort on the part of anyone.
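A sketch of what one such catalog record might look like (the field names here are invented purely for illustration):

    from dataclasses import dataclass

    # Hypothetical catalog entry tying a downlinked dataset back to the proposal
    # metadata; every field name below is made up for illustration.
    @dataclass
    class ObservationRecord:
        proposal_id: str       # identifier of the accepted proposal
        target_ra_deg: float   # area being imaged
        target_dec_deg: float
        instrument: str        # e.g. "NIRCam"
        exposure_s: float      # duration of capture
        public_after: str      # end of any proprietary period, ISO date
        data_uri: str          # where the downlinked files end up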


The data from the mission will be made public after proprietary periods [1]. They have an archive [2]. I don't expect that the raw telemetry will be made available, but the raw science data in FITS format appears to be what will be available.

[1] https://jwst-docs.stsci.edu/jwst-opportunities-and-policies/... [2] https://archive.stsci.edu/missions-and-data/jwst
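Once the data is in the archive, querying it should look roughly like this with astroquery (a sketch: the collection name and available products are guesses until JWST data actually lands there):

    # Sketch: querying the MAST archive for public JWST imaging observations.
    # Assumes astroquery is installed; criteria values are illustrative.
    from astroquery.mast import Observations

    obs = Observations.query_criteria(obs_collection="JWST",
                                      dataproduct_type="image")
    products = Observations.get_product_list(obs[:1])
    Observations.download_products(products, mrp_only=True)  # minimum recommended products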


I think there needs to be a distinction between something that a project "publishes", and something that is "made available".

Something published has been checked by a few team members, written with care, and represents the opinion of the authors and project.

Something made available has no guarantees of correctness, might not represent the project's opinion, and might just be random Matlab scripts a JWST scientist made in their lunchtime because they thought it was fun.

In the open source world, what is 'published' is probably the project's homepage and code. What is 'made available' is random chatter on their Discord or IRC channel.

I hope that more government projects 'make available' everything done by all the workers - every file saved on every PC, with the understanding that there is no guarantee of correctness.

I guess it's the same idea as being able to see into the kitchen from a restaurant. You might see the chef making mistakes or juggling the saucepans, but you'll also see the work being done as it's done, and being able to view doesn't delay the chef's work.


That's a pretty bad idea IMO. Putting people in a panopticon has a strong chilling effect, no matter what disclaimer you put on the recordings. Creative, deep work needs space to make blunders in private, scientists are no exception. They'll just use their personal laptops or document every experiment and mistake and script to death, getting done a lot less actual research.

Plus, it will be pretty much useless in practice, since you'd have to be an expert in that niche yourself to know what's correct (you're not getting any extra docs or context), and probably most of it will be incorrect in some way, possibly very subtly. The only people who could profit tremendously are the competition who aim to snipe that particular paper. Science is pretty dirty and ruthless as often as not; I could totally see this happen.


As I recall, with the Hubble, and presumably the JWST, there were scientific "precedence" rules which limited access to the observational data to the scientist requesting the observation for some period of time.

This was to prevent others from also getting the data and publishing before the requesting scientist.

Assuming this continues to be true, they probably wouldn't want to handle the data by publishing it to a FTP server immediately on receipt.


The data will be at https://archive.stsci.edu/ (which holds the data from a whole host of space telescopes, and uses standard VO interfaces).

STScI is a major contributor to astropy etc. (https://github.com/astropy), and has their own space with more tools/software: https://github.com/spacetelescope.

It's unclear what else you'd actually want, unless you want to build a clone of the actual satellite (it wouldn't surprise me that a majority of the software for the satellite is open source, just not put together somewhere publicly for people to download).
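And once you have a file, the science products are plain FITS, so the usual astropy tooling applies (sketch; the filename and extension name are placeholders):

    # Sketch: opening a downloaded JWST FITS product with astropy.
    # "jw_example_cal.fits" is a placeholder filename.
    from astropy.io import fits

    with fits.open("jw_example_cal.fits") as hdul:
        hdul.info()              # list the HDUs (headers plus data arrays)
        sci = hdul["SCI"].data   # science array, assuming the product has a SCI extension
        print(sci.shape, sci.dtype)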


> It's unclear what else you'd actually want, unless you want to build a clone of the actual satellite (it wouldn't surprise me that a majority of the software for the satellite is open source, just not put together somewhere publicly for people to download).

Much satellite technology is classified or at least export controlled (ITAR). The major difference between a space telescope and a spy satellite is which direction you point.


A funny quip, but the main imaging cameras on the JWST would be useless pointed at Earth.


You're assuming it's secret, but the more likely case is it just isn't anyone's job to make it public.


Aren't they using the Deep Space Network for this?


Yes, all communications are routed through DSN.


I actually don’t know if they want to protect the data. I can imagine they might not want China or other countries listening in and potentially sending commands to the craft.

Does anyone know if there is typically encryption on the downlink? How about uplink commands? I guess we want those to be secured so only authenticated control can send commands


Yes, the CCSDS Magenta and Blue reference books specify AES-256-GCM as a minimum for data encryption and mandate that encryption and authentication should be used, particularly for commands/uplink. Sliding scale of requirements based on application, of course - your cubesat's imaging system is less critical than the flight termination system on a manned mission, for instance.
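For what that primitive looks like in practice, a minimal sketch with a commodity library (purely illustrative; real missions implement this in flight software/hardware under the CCSDS Space Data Link Security protocol, not in Python):

    # Illustrative AES-256-GCM encrypt-and-authenticate; not a flight implementation.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)                # must never repeat for a given key
    command = b"SET_HEATER_ZONE_3 ON"     # made-up command
    header = b"frame header"              # authenticated but not encrypted (AAD)

    ct = AESGCM(key).encrypt(nonce, command, header)
    assert AESGCM(key).decrypt(nonce, ct, header) == command  # raises if tampered with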


FWIW, satellite command and control is the one thing you're allowed to use encryption for on the amateur radio bands.


How is that enforceable? You can't determine what the encrypted communication is.


Same as anything in amateur radio. There'll be a lot of hints at what you're doing from the shape of your broadcasts, and there's a lot of amateur radio enthusiasts that'll call the feds on you if they get a hint that you aren't following the rules.


radio enthusiasts are fans of the fed?


They're generally very into enforcing the FCC rules, and calling the feds is one mechanism for doing that.


Here's an ARRL article from 11-2016 (re the shutdown of 11 FCC field stations) which details plans and motivations.

https://www.arrl.org/news/fcc-special-counsel-laura-smith-sa...

In a more recent post (which includes good suggestions for experimenters):

> The ham bands aren’t full of whacked-out haters looking to get rid of everybody else, but hams are very accustomed to people using the bands without a license or not following the band plans, causing interference problems.

https://thesilicongraybeard.blogspot.com/2020/09/a-ham-radio...

Usually the 'self-policing' by hams (it's usually very effective) is done gently. They get that the frequencies are free of $$cost, and don't want to lose (any more of) them.


I highly suspect that China (and almost every other country) is not the issue. China and the whole international scientific community will profit immensely from the data coming from James Webb, so I don't think the Chinese have any interest in sabotaging the project. More dangerous would be some average Joe somewhere who would try to mess with it for the lulz.


Embarrassing the US is high value.


If somebody like China is caught sabotaging JWST the fallout could be immense. They would be accused of trying to hold back human progress, and I could easily imagine high profile Chinese scientists leaving the country and science institutions boycotting China. It's not worth the risk.


Schemes like that would involve not taking credit for it - just another example of US failure, not a success by China.


I also don’t think they would want to do this, but I think you’re overestimating the risk. Genocide, stealing the entire South China Sea, and threatening the sovereignty of other nations haven’t inspired such boycotts.


They actually understated the risk by incorrectly assessing where it would come from.

The primary risk would be that the US would hit back and attempt to sabotage something important to China (and with China's sprawling global interests now, there are a vast number of soft, highly exposed targets). The US Government can be quite vindictive depending on the context, it will hit you back if it can. The responsible agency wouldn't seek publicity for any successful sabotage, but China would know who did it and what it was for.


Or terrorist organizations with tech skills


Or frame the average Joe for the lulz for political points.

(sorry, I'm just now reading into the drama with the fbi and gov. whitmer..)


There would have to be some sort of encryption or passkey or digital signature on commands, for sure. Otherwise some hobbyist in the middle of the Ocean on his yacht could be messing with the craft and taking pictures and there'd be no easy way to shut him down.


It is not really necessary.

There are unlikely to be any non-state actors[1] with the ability to transmit signals to L2. Just to receive signals even now (only 2 out of 30 days into the trip to L2), the OP used a 6-meter dish. Most interplanetary mission signals are handled by the DSN.

Any sort of encryption will add both bandwidth and compute requirements. The CPU/network budgets on such missions are very, very limited. Every bit and cycle counts.

Finally, standard encryption libraries, algorithms et al. are not likely suitable. I am no expert, but I have not read of any modern algorithm with very low network overhead and compute requirements designed for these kinds of use cases that is also secure from brute force or other attacks.

Mission risk is also a factor; even handshake failures can jeopardize the mission. It is one thing for a website not to load because of TLS negotiation failures, and another for a $10B mission to overshoot its orbit because of handshake failures on the encryption layer.

[1] Threats from state actors against science missions are a different category of concern, harder to quantify and with not much history of actual attacks. Collateral risks, like the Russian ASAT test's threat to the ISS, or risks to dual-use missions, would perhaps not apply here. Usually science teams collaborate well even if there is a lot of tension in the political sphere.


"It is not necessary really."

Authentication of commands to satellites is very, very necessary


Encryption != authentication. OP was talking about encryption.

You could do authentication over plain text; HTTP basic auth is a popular example.

It is not recommended for regular use cases, but it is not out of the realm of possibility in satellites, given the constraints.
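To make the authentication-without-encryption point concrete, a minimal sketch with an HMAC appended to a plaintext command (illustrative only, not how any real mission does it):

    # Plaintext command plus an HMAC tag: anyone can read it, but only the
    # key holder can forge a valid tag. Purely illustrative.
    import hashlib
    import hmac
    import os

    key = os.urandom(32)                    # shared secret held on the ground and onboard
    command = b"POINT RA=83.82 DEC=-5.39"   # made-up command, sent in the clear

    tag = hmac.new(key, command, hashlib.sha256).digest()

    # Onboard check: recompute the tag and compare in constant time.
    ok = hmac.compare_digest(tag, hmac.new(key, command, hashlib.sha256).digest())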


OP was talking about digital signatures and a hobbyist taking command of the spacecraft. In other words authentication.

You said that was unnecessary and too taxing on constrained hardware. That is incorrect. Authentication is both necessary and not excessively taxing, whether or not you consider the risk to the spacecraft's operation, since, as you said yourself, authentication schemes can be reasonably lightweight.

"Not out of the realm of possibility in satellites" What are you talking about? Of course it isn't! Authentication to satellites is recommended and implemented all the time, for obvious reasons. You think they risk a multi-million-dollar investment to save some clock cycles? How many commandeered satellites do you read about daily? Do you have sources to back up any of this?


I'd imagine the specs are rated and hardened for radiation first, as seen on all previous NASA satellite and probe missions, before getting into the weeds of overhead and encryption.


"Torpedo in the water!" would probably be sufficient.


The US government being able to discreetly torpedo everyone everywhere within what would have to be a few minutes at most, that would be pretty pretty scary. Imagine the number of drones they'd need to have deployed and armed at all times, and the potential for abuse.


I think you might just be trying to make this threat a much bigger issue than it really is. How many people do you really think will be trying to attack the JWST?


I don't have any inside information but in my experience lurking in the amateur radio community the answer is 'it depends' and there is a lot of downlink that is not encrypted. This will get you into the graph:

https://twitter.com/usa_satcom

https://twitter.com/uhf_satcom

https://twitter.com/r2x0t


This would certainly speed up the rate of discovery from the images and get more people interested in astronomy.

Inside the project there must be an established data infrastructure running or ready to run. It probably has a raw layer and a transformed layer with well defined tabular schemas for the metadata of each image. It would be fun to see how both layers work and play around with the data.

Would the instrument itself have a fixed set of metadata attributes assigned to all imaging, or is it programmable and changeable during the service life?


There are lots of interesting tools and information on the Space Telescope Science Institute website to browse through. I'm guessing you can get the decoded data as it's received.

https://www.stsci.edu/jwst/science-execution/data-analysis-t...


Science in general still basically operates on the "NASA invented xyz while going to the moon"-model from the 20th century. Things get developed and then trickle into industry via back-channels (or people moving) but the idea of open-source is still both culturally alien and legally suspect.

Even in CS papers directly dealing with a piece of software there is no obligation to publish code.


After seeing plenty of code and projects by people who weren't professional software engineers used to working on teams, part of the problem is likely that code written for this sort of thing often depends on a ton of dependencies and system-specific configuration bits that are documented poorly or not at all. Getting such projects to a state where a random person could git pull it and make sense of it and use it is a whole project unto itself that usually the core contributors are poorly equipped to take on. How many really understand the pain of onboarding into a poorly-documented repo and how to use the right tools to make it a smooth process?


On top of that, missions are heavily incentivized (in an "our success depends on this" way) to solve only the problems they absolutely need to solve, due to constraints on time, budget, and manpower. It's an incredible feat to achieve what they do, but reuse and non-specialist use are non-goals.


(Argumentatively) I don't care. If I can't replicate your work, it can't be tested.

If you make it clear from the start that it must be reproducible, and threaten not to publish if it can't be, then they'll get in line - the kind of projects I'm thinking of are usually < 5k lines.


The only way to properly replicate a CS paper is to re-implement the code from scratch. Simply running someone's code again is most likely just going to give you the same buggy output they got (or just as likely a bunch of unrelated compile errors). But frequently, that isn't even what you want. Many CS papers are of the form "we were able to build software that does XYZ using this design", which isn't really a falsifiable statement in the first place. It just serves to give future researchers and practitioners data points they can use when building their own software


That's true but it also allows me to look for selection bias in their data.

If I see a paper claiming remarkably good predictive capabilities for (say) the performance of a basic block, and no discussion of its flaws, you bet I'm assuming they didn't test it well enough.



