Hacker News | sllabres's comments

AnandTech was a site with good content, and whenever a link pointed there I knew there was something interesting to discover.

Thanks for all the good work!


For a long time I used two small consumer cameras mounted side by side on a rail and combined the two photographs on my TV (which was 3D-capable, with simple passive glasses), which produced quite nice 3D photos.

Triggering the two cameras was completely manual but worked almost every time, even when photographing objects in slow motion, for example people or animals.

For this kind of photography experimentation it is a bit unfortunate that all TV manufacturers have abandoned the 3D capability of their sets.


I had a printout of [1] in my office. Of course, at its base it is only a simple multiplication table, but nevertheless it reminded me several times to ask whether an issue is worth fixing.

[1] https://xkcd.com/1205/
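The chart's underlying arithmetic is simple enough to sketch; this is a minimal illustration of the idea (the function name and the five-year horizon follow the comic, everything else is my own framing):

```python
# Break-even budget from xkcd 1205: how long you can afford to spend
# on a fix before it costs more than it saves over a five-year horizon.
def worth_fixing_seconds(time_shaved_s, times_per_day, horizon_days=5 * 365):
    """Total seconds saved over the horizon, i.e. the most the fix may cost."""
    return time_shaved_s * times_per_day * horizon_days

# Shaving 5 seconds off a task done 5 times a day justifies
# 45625 seconds of work, i.e. roughly 12.7 hours.
print(worth_fixing_seconds(5, 5) / 3600)
```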


I once knew a (very old) accounting system that had to work around a 64 kB limit and therefore used a programmatically generated set of many hundreds of batch files calling each other (not containing the program logic, of course). But each of them was less than 100 lines long.

But 27 kLOC for the Wii thing, or 3 kLOC for the recovery tool, which even looks a bit more convoluted than the Wii thing, sounds interesting to maintain. On the other hand, if it works: no dependencies, no 200 MB binary blob.


That is what I have observed too.

Python seems to be used more often to call performance-optimized libraries (NumPy, OpenCV, TensorFlow) and so on.

I have seen Tcl and Lua used more often the other way around: a compiled core (e.g. C or C++) which allows extensions to be written quickly in the scripting language.

Both approaches seem to work well, but the use cases are different.
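The first pattern — Python as glue over compiled code — can be sketched with nothing but the standard library. Here `ctypes` calls straight into the C math library, the same basic mechanism NumPy or OpenCV use (via more elaborate wrappers) to hand the heavy lifting to native code; the example assumes a system where `find_library` can locate libm:

```python
import ctypes
import ctypes.util

# Load the C math library and describe the signature of sqrt() so
# ctypes can marshal Python floats to and from C doubles.
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

# The call below runs entirely in compiled code; Python only dispatches.
print(libm.sqrt(2.0))
```

The compiled-core pattern is the mirror image: the C/C++ program owns `main()` and embeds an interpreter (e.g. via `Tcl_CreateInterp`) that runs the small, frequently changed parts.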


There is another current Hacker News thread [1] referencing this post [2], where the performance influence of the GIL is evaluated. It seems not so large, though of course that depends on the tasks performed by the interpreted part.

At a former workplace we had a C/C++ application which allowed integration of customer code via Tcl. It worked well and had acceptable performance, even with the Tcl code in a hot path of the application.

Inspired by this, I once did a C/Perl integration, but the Perl code was minimal (just user exits with special-case handling), and Perl is another language disliked by many today. But the program was written around 2000, when Perl was widespread without additional installation on the target systems and far better than any ksh/sed/awk combination.

But it worked well too.
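The "user exit" pattern described above can be sketched in a few lines: the host application exposes a tiny, well-defined hook where site-specific snippets run against one record at a time. This is a hypothetical illustration (all names and the tax-rate example are invented), using Python's own `exec` in place of an embedded Tcl or Perl interpreter:

```python
def run_user_exit(source, record):
    """Run a user-supplied snippet against one record; return its 'result'."""
    env = {"record": dict(record)}  # pass a copy so exits can't mutate host data
    exec(source, {"__builtins__": {}}, env)  # no builtins: crude sandboxing
    return env.get("result")

# A site-specific exit: apply 19% tax to taxable records only.
exit_src = "result = record['amount'] * 1.19 if record['taxable'] else record['amount']"
print(run_user_exit(exit_src, {"amount": 100.0, "taxable": True}))
```

The host stays compiled and stable; only the short exit scripts change per customer, which is exactly what made the Tcl and Perl integrations above maintainable.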

[1] https://news.ycombinator.com/item?id=41006946

[2] https://www.maartenbreddels.com/perf/jupyter/python/tracing/...


Microsoft Xenix (never knew more about it than the name).

For small to medium-sized businesses, NetWare had the advantage that with IPX networking nearly no configuration was necessary: no subnetting, no assigning of IP addresses to clients, no DHCP service to run.

The availability of software on the server was limited (I remember backup services and licensing software). But for central file service and printing it was rock solid, even in somewhat larger (for the time, around 1995) environments, without any issues. (IIRC >200 clients on a single 486 CPU with 4 MB RAM.)


> Microsoft Xenix (never knew more about it than the name).

For a year or two there, the only other commercial Unix workstation not made by Sun could be had from Radio Shack: the TRS-80 Model 16 running Xenix. Enough small businesses ran Xenix, with up to 3 simultaneous users on a single stock machine (console + 2 terminals), that Radio Shack kept supporting these things until the late 1980s. With up to an 8 MHz CPU, up to 7 MiB of RAM, and an actual (external) MMU, the Model 16 could handle more workload, and theoretically more stably, than an x86 machine running Xenix until about the time Xenix/386 came out.


Apollo made competitive workstations at the time, until they were swallowed by HP. The Unix workstation market was bigger than Sun, but since Sun was the most successful, nobody remembers how competitive that segment was. The Model 16 was a footnote, not a competitor.


Apollo's Domain/OS (formerly AEGIS) was impressive, but did not gain a full POSIX layer until later in the 80s, as I understand it. So the Model 16 really was the only other commercial Unix workstation, besides Suns, in early 1983. This advantage wouldn't last long; by 1984 other Unix desktops like the HP Integral had emerged.


I believe Apollo had a proprietary OS with limited Unix compatibility. So maybe the grandparent poster is right about the Model 16 being the only other non-Sun desktop Unix for a while, as long as you define Unix tightly enough.


Sun gets the crown because prior to the Sun-1 there wasn't really any such thing as a UNIX workstation. You had a terminal connected to a host running UNIX (or VMS), and that was that. My pet theory is that Sun succeeded against Apollo because Sun decided to sell to Wall St quants for their day-job number crunching, whereas Apollo (and later HP) sold to engineers doing simulations and CAD. Naturally the quants told their colleagues and the stock went brrr.

Later entrants like SGI targeted their workstations at media creatives (helpfully, Apple was in crisis by this time, so A/UX wasn't remotely a problem). IBM and DEC just produced me-too workstations; there was nothing special about AIX or Ultrix unless you were already a customer.

The UNIX wars of the 90s were basically the UNIX vendors trying to take over the whole market and not just their classic turf.


This may be a bit unrelated, but it is actually possible to run Xenix in a browser: https://www.pcjs.org/software/pcx86/sys/unix/ibm/xenix/1.0/


I learnt UNIX on Xenix.

It was so expensive that we shared a PC tower with the whole class.

Not timesharing; rather, we would prepare our C applications on MS-DOS 3.3 with Turbo C 2.0, using mocks for the UNIX APIs, and then take turns of 15 minutes per group trying to make it work on the Xenix tower.


> But for central file service and printing it was rock solid

That's not really correct.

I mean, yes, it was rock solid, true.

But it was not a file server, or even a file and print server. It wasn't a "server" as such at all (although it could be one if you wanted).

A "server" is a concept from client/server computing:

https://www.sciencedirect.com/topics/computer-science/client...

In other words you have a network, with lots of small computers (clients) talking to one or more big computers (servers).

That model has been so pervasive since the 1990s that you seem to assume it's how everything worked. It is not. Xenix was strong in the earlier era of host-based computing.

The core concept is that you only have 1 computer, the host. It's kept in a special server room somewhere and carefully managed. On users' desks they have just terminals, which are not computers. They are just screens and keyboards, with no "brains". Keystrokes go over the wire to the host, and the host sends back text that the terminal displays.

No network, no computers in front of users.

In the '70s and early '80s this was the dominant model because computers were so expensive. Before microprocessors host machines cost tens of thousands to hundreds of thousands of $/£ and companies could only afford 1 of them.

Most were proprietary: proprietary processors running proprietary OSes with proprietary apps in proprietary languages.

Some companies adapted this in the microprocessor era. For instance Alpha Micro sold 680x0 hosts running a clone of a DEC PDP OS called AMOS: Alpha Micro OS. It sold its own terminals etc. It was cheaper and it used VHS videocassettes as removable media, instead of disks.

Unix replaced a lot of this: proprietary versions of the same basic OS, on those proprietary processors, but with open-standard languages, open-standard terminals, etc.

Xenix was the dominant Unix for x86 hosts. It let you turn an 80386 (or at a push a 286) PC into a host for a fleet of dumb terminals.

Xenix as stock came with no networking, no C compiler, no X11, no graphics, no GUI, nothing. Each box was standalone and completely isolated.

But a 386 with 4MB of RAM could control 10 or 20 terminals and provide computing to a whole small business.

No Ethernet, no TCP/IP, no client/server stuff.

Client server is what killed Xenix's market. When PCs became so cheap that you could replace a sub-$1000 terminal with a sub-$1000 PC, which was way more flexible and capable, then Xenix boxes with dumb terminals were ripped out and replaced with a PC on every desk.


Not even the article on the page you've linked talks about "...big computers (servers) [...] with dumb terminals", nor does it say that the concept of client/server is

> The core concept is that you only have 1 computer, the host. [...] On users' desks they have just terminals, which are not computers

Opposite of what you write, the linked page starts with

"In a client-server system, a large number of personal computers communicate with shared servers on a local area network" and later explicitly continues with references to Microsoft's NOS. And the reference from NOS leads us to "There are only a few popular choices – Novell, UNIX, Linux, and Windows. The complexity of NOS forces a simple overview of the features and benefits."

So I don't really understand your point that NetWare was neither a file nor a print server.


XENIX was neither a file nor a print server; NetWare was.

NetWare 1/2/3 was nothing but a file and print server, and NetWare >= 4 was a notably poor app server.


I found this page [1] interesting in regard to the cash supply in Sweden, as it is a country always reported as having a high affinity for online payment, and statistics [2] (from 2019) seem to support that statement.

[1] https://www.riksbank.se/en-gb/payments--cash/payments-in-swe...

[2] https://www.statista.com/chart/17307/paying-with-cash-europe...


That is sad, especially because I think it is not a service that would take much effort to keep up.

I've seen things you people wouldn't believe... Attack ships on fire off the shoulder of Orion... I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain...


"a candle that burns twice as bright, burns half as long"


"Fool me once, shame on you; fool me twice, shame on me."


[1] "The Fastest Maze-Solving Competition On Earth" (22 min) is a nice introduction to the competition and its technical history.

[1] https://www.youtube.com/watch?v=ZMQbHMgK2rw

