
Hey guys! We're engineers/designers from France, and we've built the Ultimate DIY Battery that you can repair and refill!

It works with 90% of the bike/motor brands on the market, so I assumed some people here might be interested if they have a non-functional battery but still want to use their e-bike.

We believe that everybody should have control over the stuff they own, and that we should fight against planned obsolescence!

Here are a few videos from our founder about the battery itself, why we built it, and how to assemble it:

- What is the Gouach Battery: https://www.youtube.com/watch?v=NsuW1NPkvNk

- Presentation of the pack: https://www.youtube.com/watch?v=mLoCihE0eIA

- Presentation of the fireproof and waterproof casing: https://www.youtube.com/watch?v=EDJpt7RDbRM

Here are the juicy bits: https://docs.gouach.com

We'd love some feedback from the e-bike DIY builder community.

Oh, and it launches on Kickstarter in September; there is an early-backer offer here for a 25% discount on the battery: https://get.gouach.com/1

You can follow us on Instagram https://www.instagram.com/gouach.batteries to get the latest news!


MIL-HDBK-5 [1] is a good publicly available source for strength allowables for several aerospace alloys, including multiple directions relative to the grain for some of them.

The first relevant example I found was on page 3-86, extruded 2024, 2.250 - 2.499 inch cross-section. For ultimate tensile strength, F_tu, the L (in the direction of extrusion) allowable is 57 ksi, while the LT (perpendicular to the direction of extrusion) allowable is 39 ksi. That's roughly a 32% drop in strength.
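A quick check of the arithmetic, using the two allowables quoted above:

```python
# Allowables quoted above for extruded 2024 (MIL-HDBK-5J, p. 3-86)
f_tu_L = 57.0   # ksi, L (direction of extrusion)
f_tu_LT = 39.0  # ksi, LT (perpendicular to extrusion)
drop = (f_tu_L - f_tu_LT) / f_tu_L
print(f"{drop:.1%}")  # about 31.6%
```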

[1] http://everyspec.com/MIL-HDBK/MIL-HDBK-0001-0099/MIL_HDBK_5J...


I would really like to see a distribution which puts all the best alternative software together:

- pyspread for a spreadsheet

- LyX for a word-processor

- OpenSCAD for a 3D modeler

- TkzEdt (or ipe) for 2D drawing

&c.

(and I'd be interested in suggestions for similar software for other tasks, esp. presentations and database work)



One of the niftiest ways I've seen this done was some software I used circa 2000 (I don't remember the name). It would create a variable-rate timelapse by saving a frame every time the image changed more than $x percent, calculated as the sum of differences of pixels from the previous frame, or thereabouts.

If someone was walking across the yard it would save every frame. The movement of the sun would move shadows enough to trigger a new image every few minutes. A bug flying past was small enough that it wouldn't trigger anything. The result was you could get a short video of everything interesting that happened through the day: shadows of trees sliding over the ground, every frame of the car pulling out of the driveway, shadows sliding over the ground some more, cat walks across the yard then lays down, shadows pan around more while the cat sits still, cat gets up and walks away, shadows pan around until the delivery guy comes...

It was an incredibly low-CPU way to see everything that happened without missing anything, and without having to fine-tune the motion detection very much. You just mask out any areas with constant motion, then adjust the slider for how much change triggered the next frame, which would let you adjust how fast the timelapse would go during the boring parts.
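The selection rule described above is only a few lines. A toy sketch under my own assumptions (grayscale frames as flat lists of pixel values, and comparing against the last *saved* frame rather than the immediately previous one, so slow drift like moving shadows eventually triggers a save while a tiny blip never does):

```python
def frame_change(a, b):
    """Normalized mean absolute difference between two grayscale frames,
    given as flat lists of 0-255 pixel values."""
    return sum(abs(pa - pb) for pa, pb in zip(a, b)) / (255 * len(a))

def select_frames(frames, threshold=0.05):
    """Keep a frame whenever it differs from the last kept frame by more
    than the threshold fraction -- a variable-rate timelapse selector."""
    saved = [frames[0]]
    for f in frames[1:]:
        if frame_change(f, saved[-1]) > threshold:
            saved.append(f)
    return saved
```

The threshold plays the role of the slider the comment mentions: lower it and the timelapse spends more frames on slow changes; raise it and the boring parts fly by.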

I've always wondered why the technique never became widespread.


I broadly agree, and certainly for staples and root crops, it blows my mind how cheap supermarkets can be. The amount of work, land, pesticides, fertilizers, seeds, etc. it takes to grow a few kilograms of carrots manually is insane compared to buying them at 50p/kg at the supermarket. That really shows the level of industrialisation and automation involved in large-scale farming.

Which is why I focus on specific crops that I've identified as being valuable or useful to me.

Basil, of course, I already mentioned, and similar to basil are other green and leafy veg, such as spinach, mint, coriander, rocket, spring onions and cress. They grow so quickly and easily that I guess the majority of the cost in a supermarket is the packaging and logistics. I also grow a lot of soft fruits such as strawberries, because supermarket fruits are expensive and bland tasting compared to a freshly picked ripe strawberry. Squashes are good to grow as they're quite prolific producers without much effort, yet fairly expensive to buy in the supermarket. Garlic, chilli, tomato, runner beans and leeks I grow mainly because I can choose the cultivars I like, and find they're tastier than the ones I can get in the supermarket.

Of course, the biggest input I'm obviously not accounting for is my time, but as it's an enjoyable hobby that's good for my physical and mental health, that doesn't factor in for me. Plus, I think it's a good life skill to know how to grow food, and it's interesting to try and do it in a sustainable way, e.g. permaculture, supporting pollinators, producing your own compost, propagating your own seeds, capturing and storing water onsite, etc.

I certainly wouldn't quit my job and become a farmer, but I do think growing some of your own food is something everyone should at least try once if they have the space. Also as a general rule, animals require a larger scale to make a profit than do vegetables.


Me too, I want to follow The Hitchhiker's Guide to Logical Verification:

https://github.com/blanchette/logical_verification_2023


For physical science, I think people could use ISO 10303 [1] to represent their experimental process and results.

A facility like CERN could have an accurate model of the equipment available; you would just add the description of your experiment and its results to it.

[1] https://en.wikipedia.org/wiki/ISO_10303


FYI for all: the cheapest source of standards available in English (that I have found) is the Estonian Centre for Standardisation and Accreditation: https://www.evs.ee/en

Prices are generally single to low double-digit Euros (even for standards that cost hundreds or 1000+ dollars elsewhere).

If you are a solo contractor like me (and hence only need one copy), DON'T get the single-licence copies: they require some BS DRM software that binds the file to your computer and is a PITA.

Get the organisation one, pay for two licences, and you'll be given a regular PDF instead (and still save hundreds of dollars).

You obviously will only get the Estonian specific annexes, but those are normally optional anyway, and generally available for free when downloading the "free sample" of a standard for a specific country.


Some light coffee reading, "Cardinality of the continuum" [1]: in short, the cardinality of the real numbers (ℝ) is often called the cardinality of the continuum, and denoted by 𝔠 or 2^ℵ_0 or ℶ_1 (beth-one [2]); whereas, interestingly [3], the cardinality of the integers (ℤ) is the same as the cardinality of the natural numbers (ℕ) and is ℵ_0 (aleph-null) [perhaps what was meant initially?].
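The "same cardinality" claim for ℤ and ℕ amounts to exhibiting a bijection; the standard zig-zag one is short enough to write down (my own illustration, not from the linked material):

```python
def nat_to_int(n):
    """Bijection N -> Z: 0, 1, 2, 3, 4, ... -> 0, -1, 1, -2, 2, ..."""
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def int_to_nat(z):
    """Inverse map Z -> N; each function witnesses the other's bijectivity."""
    return 2 * z if z >= 0 else -2 * z - 1
```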

Related: the Schröder–Bernstein theorem [4], "if there exist injective functions f : A → B and g : B → A between the sets A and B, then there exists a bijective function h : A → B.".

Not related, but great: Max Cooper (sound) and Martin Krzywinski (visuals) did a splendid job visualising "ℵ_2" [5].

[1] https://en.wikipedia.org/wiki/Cardinality_of_the_continuum

[2] https://en.wiktionary.org/wiki/%E2%84%B6

[3] "Cardinalities and Bijections - Showing the Natural Numbers and the Integers are the same size", https://www.youtube.com/watch?v=kuJwmvW96Zs

[4] https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstei...

[5] "Max Cooper - Aleph 2 (Official Video by Martin Krzywinski)", https://www.youtube.com/watch?v=tNYfqklRehM


Get a Joeveo cup; then no need for heaters. It's the world's best travel mug: its insulation cools the drink to drinking temperature, then keeps it there for hours. A perfect cup of coffee. I've had mine for years, from the original Kickstarter.

https://joeveo.com/

It's the Framework of coffee mugs.


Smart RSS reader that, right now, ingests about 1000 articles a day and picks out 300 for me to skim. Since I helped write this paper

https://arxiv.org/abs/cs/0312018

I was always asking "Why is RSS failing? Why do failing RSS readers keep using the same failing interface that keeps failing?" and thought text classification was ready in 2004 for content-based recommendation. Then I wrote

https://ontology2.com/essays/ClassifyingHackerNewsArticles/

a few years ago. After Twitter went south, I felt like I had to do something, so I did. Even though my old logistic-regression classifier works well, I have one based on MiniLM that outperforms it, and the same embedding makes short work of clustering, be it "cluster together articles about Ukraine, sports, deep learning, etc. over the last four months" or "cluster together the four articles written about the same event in the last four days".
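The embed-then-compare pattern described above can be sketched with a crude bag-of-words stand-in for a real embedding model like MiniLM (a toy of my own, nothing from the actual system):

```python
import math
from collections import Counter

def embed(text):
    """Crude bag-of-words vector; a stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest(query, docs):
    """Return the document most similar to the query: the core move behind
    both recommendation and 'cluster the articles about one event'."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))
```

With a real sentence embedding in place of `embed`, the same `nearest`/`cosine` machinery drives both ranking and clustering.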

I am looking towards applying it to: images, sorting 5000+ search results on a topic, workflow systems (would this article be interesting to my wife, my son, hacker news?), and commercially interesting problems (is this person a good sales prospect?)


My mom digitized many many old family videos, and wanted them online for sharing with family (including elderly & not-super-tech-savvy relatives). She asked me “should I just upload them all to a YouTube channel?”

Thankfully it was a phone call so my mom didn’t see my aghast expression. I prefer that big tech not index this stuff! Better to keep “in the family”

Seriously, why does big tech deserve this free & super-private window into my and my ancestors' lives?

So I wrote something[1] where:

* it’s fully free & open source

* cloud native

* plays on any device, any bandwidth, even if shitty

* yes my 90+yo Aunt Loretta (w00t to you Aunt Lo!) can use it on her phone & computer

* all data can be always encrypted, both source videos and derived/optimized assets

* and there’s more. please have fun

Basically point it at a source bucket on S3 or B2, and get your own private YouTube.

What I’ve built is very limited in functionality atm, but I believe the foundation is solid and plan to extend media support to photos and audio.

This can be a nice alternative to Plex/Google Photos/YT/etc.

It’s for when you don’t care about “building an audience” and in fact prefer that big tech can only see encrypted bytes from you.

Try it out and lmk!

[1] https://github.com/cobbzilla/yuebing


I used to purchase notebooks for journaling but had a similar hangup to some other posts on this thread. I just thought it was a hard sell to continually shell out a bunch of cash for the sexy notebooks when all the materials for a passable alternative are essentially free if you can tap into the waste of a typical pre-pandemic office space with a commercial printer. (For a while there was a thing that would happen with the network printer where it would suddenly begin to print gibberish, usually only one or two lines per page, uncontrollably, for reams and reams of paper. This was the source of my first roll-your-own notebook pages.)

At some point I heard about the Midori system and then realized that if you had a reusable Traveler's notebook you could print the style of paper that you wanted to use, fold it, and have an A5 sized folio (?) insert that you could staple with a specialized stapler to make the paper inserts.

This is what I have been doing for ~3 years now.

Paper sizes reference: https://papersizes.io/

A5 Travelers Notebook: https://a.co/d/iy32n37

Printable graph paper: https://print-graph-paper.com

Swing-Arm Swivel Stapler: https://a.co/d/0afXOaW


Also take note of the companies that have sprung up to supply low-volume manufacturing at good prices, to aid in prototyping and provide access to specialized machinery.

https://jlcpcb.com

https://sendcutsend.com

https://www.pcbway.com

https://www.knifeprint.com


I found this youtube channel[0] has some pretty good walkthroughs (quite literally) of using various iOS apps to scan architectural spaces. The short answer is that the LIDAR and iOS APIs are remarkably powerful, but not 100% accurate. There are techniques to improve accuracy (e.g. using a gimbal), but ultimately you'll need to do tape or laser measurements and modify the models that these tools can build, or just model it yourself with the scan as a reference.

MagicPlan[1] and PolyCam[2] seem to be the most focused on building a schematic level building model which could be imported into other tools if needed. They both now take advantage of the Roomplan API[3] which Apple introduced in iOS and iPadOS 16[4]. MagicPlan has been out for ages[4] and originally just worked off the camera and the accelerometer to help build a floor plan. Polycam also supports photogrammetry[5] where you just take a bunch of photos and it builds a 3D model by interpreting what shape the object could be (I don't know if this is also used in architecture scale things, but it could be interesting for ID projects). Both MagicPlan and PolyCam allow you to tweak dimensions of rooms, doors, windows, furniture, etc. in a somewhat parametric way. This is where you likely want a laser measuring device to quickly update the dimensions. These can be used through Bluetooth to enter the measurements directly into the floorplans in MagicPlan[6]. I didn't try this, but if I was doing this all the time, it seems like it would be essential.

Matterport is starting to get into mobile[7] (phone, tablet) capture, but they've built their business up on their branded hardware and cloud platform. They provide floorplans as a service[8] and everything adds up, but from what I see in the real estate market, they are ubiquitous.

And if you want to spend a bunch more for very pro level app for documenting things like crime scenes, shipbuilding, infrastructure, etc. there's Dot3D.[9]

[0] https://www.youtube.com/@LiDAR3D

[1] https://www.magicplan.app

[2] https://poly.cam

[3] https://developer.apple.com/augmented-reality/roomplan/

[4] https://9to5mac.com/2022/06/15/ios-16-roomplan-api-3d-floor-...

[5] https://www.magicplan.app/about

[6] https://help.magicplan.app/laser-distance-meters#laser-tutor...

[7] https://matterport.com/3d-camera-app

[8] https://buy.matterport.com/

[9] https://www.youtube.com/watch?v=ouZxCDKTizs



Very cool! But for hard enough problems, prompt engineering is kind of like hyperparameter tuning. It's only a final (and relatively minor) step after building up an effective architecture and getting its modules to work together.

DSP provides a high-level abstraction for building these architectures—with LMs and search. And it gets the modules working together on your behalf (e.g., it annotates few-shot demonstrations for LM calls automatically).

Once you're happy with things, it can compile your DSP program into a tiny LM that's a lot cheaper to work with.

https://github.com/stanfordnlp/dsp/


> I mean there are resistors, capacitors all over the place but I really want to learn reason behind it.

There are some good YouTube channels that go into this. EEVBlog[1] has made a lot of really nice videos about the fundamentals, as has w2aew[2]. And I found MicroType Engineering[3] to be a good source of practical information on designing circuits.

Capacitors next to ICs are almost always for decoupling[4]. Similar to how the cistern in your toilet provides a large amount of water in a short amount of time without affecting the water pressure in the rest of the house, hence decoupling the local water flow from the main supply, decoupling capacitors can supply a lot of current for a short amount of time.
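To make the cistern analogy quantitative, the usual back-of-envelope sizing is C = I·Δt/ΔV (a first-order sketch of my own, with made-up example numbers; real layouts also weigh ESL/ESR and placement):

```python
def decoupling_capacitance(i_load, dt, dv_allow):
    """First-order sizing C = I * dt / dV: the capacitance needed to supply
    i_load amps for dt seconds while the local rail droops at most
    dv_allow volts. Back-of-envelope only."""
    return i_load * dt / dv_allow

# e.g. a 50 mA transient for 10 ns with 50 mV allowed droop:
c = decoupling_capacitance(0.050, 10e-9, 0.050)  # 1e-8 F, i.e. 10 nF
```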

However, what values to use can seemingly be a bit of a black art[5], not helped by the fact that there's so much outdated information and so many rules of thumb out there from the days of through-hole components that just don't apply to modern surface-mounted components (like needing multiple different values).

On the other hand, resistors on a data line can be there to protect against ESD events[6], for example.

Some of it might be a bit more advanced than what you need right now, but there's definitely some good stuff for people starting out. If for nothing else highlighting areas you should be aware of.

[1]: https://www.youtube.com/@EEVblog/playlists

[2]: https://www.youtube.com/@w2aew/playlists

[3]: https://www.youtube.com/c/MicroTypeEngineering

[4]: https://www.youtube.com/watch?v=BcJ6UdDx1vg

[5]: https://www.youtube.com/watch?v=k7aPb585Y6k

[6]: https://www.youtube.com/watch?v=6OxE06n6n44


>Regarding manufacturing and machines/machining, any book or resources that stood out? I'm most familiar with the Machinery's Handbook.

I went to a top-tier school for MechE and Materials, and would recommend two intro books: Engineering Mechanics: Statics by Meriam and Kraige, and Shigley's Mechanical Engineering Design, in that order. If you fully understand the contents of these books, it probably puts you in the top 10% of mechanical engineering graduates.

For a broader education, you can read Fundamentals of Heat and Mass transfer by Incropera, DeWitt, Bergmann & Lavine as well as Fundamentals of Fluid Mechanics by Munson, Young & Okiishi.

Understanding these two books as well will probably put you in the top 1% of grads.

If you have a strong background in mathematics, these mostly deal with applications of linear algebra and differential equations, so the value is in understanding the applications.

From there, you can branch out. If applicable, Ogata's Modern Control Engineering and Tongue's Principles of Vibration.

Most undergraduates don't really understand these due to the heavy application of Laplace and Fourier transforms, but they are relevant if you want to build complex machines.


Reduce is awesome! Dozens of specialized packages[1]. My favourite two packages are

1. The Source Code Optimization Package, SCOPE[2]. It will take an expression or group of expressions and produce a simpler set of expressions that minimizes the number of arithmetic operations.

2. The coeff2 package[3]. Tell coeff2 what the variables are and it will rewrite your expression as a simpler expression in terms of the variables alone, collapsing everything else into constants.

I could not do what I do without these two packages. Beats Maxima IMO.

[1] https://reduce-algebra.sourceforge.io/documentation.php#cont...

[2] https://reduce-algebra.sourceforge.io/reduce38-docs/scope.pd...

[3] https://reduce-algebra.sourceforge.io/manual/manualse136.htm...


You are the third person this week to ask me that. I answer reluctantly.

The rationale is, because I want to.

The system has evolved over the years, current configuration is: Several 1080p SONY cameras with hacked firmware that stream video to a capture device. An older view of the camera rig that has since evolved again, is here: https://youtu.be/dGRDB1vVxyY

Some 4K webcams connected over USB that I don't stream. I capture one full frame every X milliseconds.

Two Kinects set to be out of phase capturing the entirety of the office as a depth map.

Two Rode shotgun microphones capturing audio and feeding it in to a Focusrite box.

Custom built USB "keyboard" with a few arcade buttons that permit "pause/unpause", "forget a little bit" and "forget five minutes."

Two LED lights to indicate recording status for both myself and anybody walking in the office.

Timesnapper on Windows, and a little custom C++ capture program for macOS and Linux that takes a snapshot of my desktop every X seconds.

All that data gets stuffed on to a secured drive on a file server. The data goes back more than a decade. Nobody has access to that data but me.

I use an NVidia Jetson to analyze everything: the desktop images, build up a map of applications, analyze people in the room, identify who they are, what clothes they are wearing, identification of activity, OCR of images, transcription of spoken word to text, identification of websites, identification of music playing, "oh hey, he's listening to the following artist, let me pull that artist's social feed and put it on the ambient screen in the hallway", which is kinda creepy when the software identifies my own music https://soundcloud.com/justinrlloyd and then stalks me and puts up my own social feeds on the household ambient screen. I also have the Jetson watching the front door via the Ubiquiti doorbell camera and can switch on the TV in my office if someone comes to the front door so I can see who it is, and also will notify me that a package is on the doorstep ready to be brought inside via the second high viewpoint door camera performing a "what changed in this scene, is that a package? That looks like a package. Package! ZOMG! Package! Package!!!" That algorithm has one job and it does it really well. Like a hunting dog staring at squirrels.

Lots of this stuff is readily available as ML models, for the most part I just strung them together with simple scripts to move data around.

I have a "virtual assistant" that I wrote, using NLP and key phrases with a speech recognition model that understands specific commands and some free form speech, an early prototype of my virtual assistant is here: https://youtu.be/uhl8wN7Uvv8 and I state for the record that it has gotten far better in the intervening years. And then a text to speech model when absolutely necessary to give me voice prompts.

This virtual assistant can control cameras, e.g. tally lights, zoom and focus, recognize the fact I am holding a receipt from a grocery store, or a book, and take a high resolution picture and tag it with meta data.

I keep a near real-time backup of my computers, and that data goes back probably three decades, any time I retire a machine I take a full drive dump and store that.

Out of office, I take a snapshot of my desktop on the laptop (Microsoft Surface or Macbook Pro), which is then automatically copied to the server when I return to the office. I built my own Sensecam-like device using a J2ME device almost two decades ago, but have since moved to using an Autographer for life logging.


Hi! Happy to see someone interested in using Emacs for Julia work.

I recommend the following packages for your setup:

julia-vterm (https://melpa.org/#/julia-vterm) (https://github.com/shg/julia-vterm.el)

ob-julia-vterm (https://melpa.org/#/ob-julia-vterm) (https://github.com/shg/ob-julia-vterm.el)

FYI, julia-vterm depends on:

julia-mode (https://elpa.nongnu.org/nongnu/julia-mode.html) (https://github.com/JuliaEditorSupport/julia-emacs)

vterm (https://melpa.org/#/vterm) (https://github.com/akermu/emacs-libvterm)
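If it helps anyone's setup, here is a minimal init sketch wiring these together. This is my own guess based on the packages' READMEs (hook name, org-babel language symbol), not something from the comment, so treat it as a starting point:

```elisp
;; Sketch only: assumes use-package and the MELPA/NonGNU ELPA archives
;; are already configured.
(use-package julia-mode)
(use-package vterm)
(use-package julia-vterm
  ;; the julia-vterm README suggests enabling its minor mode in julia buffers
  :hook (julia-mode . julia-vterm-mode))
(use-package ob-julia-vterm
  :after org
  :config
  ;; let org-babel execute julia-vterm source blocks
  (org-babel-do-load-languages
   'org-babel-load-languages
   '((julia-vterm . t))))
```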


Face recognition is possible on a tiny and cheap ESP32-CAM[0] using Espressif's own ESP-WHO[1] framework. Apparently it can process images at around 3 fps. I have an ESP32-CAM on order to try this out as part of a side project of mine[2], so I can make the eyes look at faces :) I would also love to get hold of a Person Sensor[3], which runs custom firmware specifically designed for face detection and would be absolutely perfect for this, but they're not in stock currently.

There are other libraries[4][5][6] that look like they could work too, and I plan to try them. As far as microcontrollers go, the Teensy 4.0[7] should be powerful enough for reasonably fast processing (and it has a floating-point unit!), though I've yet to find a good library for doing so.

[0] https://robotzero.one/face-tracking-esp32-cam/

[1] https://github.com/espressif/esp-who

[2] https://github.com/chrismiller/TeensyEyes

[3] https://usefulsensors.com/person-sensor/

[4] https://github.com/ezelioli/Face-Detection-on-Microcontrolle...

[5] https://github.com/nenadmarkus/pico

[6] https://www.reddit.com/r/embedded/comments/g77exq/recommende...

[7] https://www.pjrc.com/store/teensy40.html


In my "personal digital garden evolution" I've reached the timeline-level taxonomy or, with Emacs/org-mode/org-roam/org-attach etc:

- new textual entries (headings) go into monthly notes ($org-roam-directory/timeline/year/$month-name.org); they may attach files or not, of course. Attached files are generally linked directly in the heading/inside its textual content for single-click access and an at-a-glance view. Doing so avoids having too many too-small files, or too-big ones that operate slowly;

- another subdirectory of org-roam-directory is for "topics", one note per topic, linking or org-transcluding (slow and a bit limited, but still useful) the collected entries in timeline style;

- another is a workdir where I craft my catalogue (using org-mode drawers created from templates to allow easy org-ql queries) and queries to explore my notes in different views. It's not as easy as TiddlyWiki's transparent transclusion, but it allows a certain degree of practical usability, fine-grained selection and easy composition.

MOST of my files and config live as org-attachments or are tangled from org-mode. So yes, taxonomy is hard, but we have tools to master it IF we decide to discover them and invest time in improving our digital garden for real, instead of leaving the classic mess of files while hoping for some miracle "application" that automagically solves all issues. Unfortunately, the lack of interest from most people leaves such systems too underdeveloped to be as effective as they could be...

My personal experience is:

- we need taxonomy anyway; mere full-text searching with extras à la Google suffices for a certain percentage of cases, but fails beyond that;

- we need taxonomies that are a bit flexible in storage terms and can change at a slow pace;

- we need integration, which is NOT possible in most modern software; for that we need classic desktops where the OS was a framework/live image and everything is just a module, a bit of code, of it. With end-user programming concepts, because no UI can be effective enough in "no code" style, and no "modern programming" style is usable for end-user programming.

A bottom line: people should learn a bit about information management at school, from how a library or a pharmacy organizes books/meds on their shelves, to books' indices and personal information archives. Nothing exaggerated, but the bare minimum to understand how to manage data, digital and physical, in various forms, for a lifetime...


I keep this in my .bashrc

alias brownnoise='play -n synth brownnoise synth pinknoise mix synth sine amod 0.3 10'

It sounds like waves gently coming ashore.

I'm sure I collected it somewhere here on HN, because I don't know anything about how the command works.

Edit: I have these, too, and I like them all:

alias whitenoise='play -q -c 2 -n synth brownnoise band -n 1600 1500 tremolo .1 30'

alias pinknoise='play -t sl -r48000 -c2 -n synth -1 pinknoise .1 80'
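For anyone else wondering how these work: `play` is from SoX, and `synth` chains noise generators and modulators. A rough annotated reading of the first alias (my interpretation of the SoX options, not authoritative):

```shell
# .bashrc -- SoX noise alias (as above), roughly annotated:
#   -n                      : null input file, i.e. pure synthesis
#   synth brownnoise        : generate brown noise
#   synth pinknoise mix     : synthesize pink noise and mix it in
#   synth sine amod 0.3 10  : slow (~0.3 Hz) sine amplitude modulation,
#                             which produces the wave-like swell
alias brownnoise='play -n synth brownnoise synth pinknoise mix synth sine amod 0.3 10'
```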


(I'm in my late 30s, but I always try to look for "time-tested" books first. Books that seem to hold their value after decades; see also: Lindy effect.)

Language and linguistics: "Metaphors We Live By" by George Lakoff and Mark Johnson (1980): https://en.wikipedia.org/wiki/Metaphors_We_Live_By

Systems thinking: "An Introduction to General Systems Thinking" by Gerald M. Weinberg (1975): https://geraldmweinberg.com/Site/General_Systems.html

I still often use Weinberg's Systems Triumvirate when feeling stuck on a problem:

1. Why do I see what I see?

2. Why do things stay the same?

3. Why do things change?


I guess this is a good place to share my open-source large-format laser cutter design for sewing projects. It's cheap to make, works pretty well, and the whole gantry assembly slides right off, leaving just a sheet of plywood with low-profile 3D-printed rails on the sides. So I throw my rug over it and it becomes my floor when not in use. That matters because the laser cutter can cut a full 60" wide piece of fabric two yards long. It's basically 5 foot by 6 foot, and I don't have space in my apartment for a dedicated machine that takes up all that area. But since this doubles as my floor it works great! It also includes a Raspberry Pi camera on the laser head which serves as a pattern scanner.

I really want to finish my video on this thing, I've just been busy. But please take a look and consider building it! If you have any questions, open a GitHub issue and I will do everything I can to help. I think it's a great starting point (designed in three weeks) and I'd LOVE for other people to reproduce it and extend the design!

The machine has a few hiccups, but I use it all the time for my sewing projects and it is SO nice to get all the cutting done repeatably and automatically. You can even scan existing clothes, often without disassembly, and turn them into digital patterns!

https://github.com/tlalexander/large_format_laser_cutter


It's important that digital textbooks are more than a PDF of the pages found in a traditional textbook. You can do so much more with an explorable, interactive experience.

I shared my attempt at this on Hacker News a few years ago.

https://landgreen.github.io/physics/index.html

https://news.ycombinator.com/item?id=17178031


I like the explanation and video illustration of simulated annealing. Simulated annealing has varied and numerous applications. But calling it "The Only Algorithm for Hard Problems" is really giving it a lot of credit:

1. The animated graphic illustrating simulated annealing infuriates me. It is described (without calling it that) as solving a shortest Hamiltonian path problem. If you look at it, it is actually a shortest Hamiltonian cycle problem, aka the travelling salesman problem (TSP). TSP is the canonical example of a problem that simulated annealing and other metaheuristics are terrible at solving [1]. Proper mathematically justified algorithms like Lin-Kernighan-Helsgaun [2] give better (often optimal!) results orders of magnitude faster. You can even solve TSP (an NP-hard problem) with optimality guarantees with Concorde, at sizes that beggar belief [3].

2. Saying that stochastic gradient descent is kind-of the same as simulated annealing is quite a stretch. Gradient descent attempts to give local optima, full stop. Quite the opposite of simulated annealing. Now, there is an art in ML in choosing the step size (learning rate) and starting point. But the "stochastic" part is necessary to make it work on the huge problems that DNNs require, where computing a full gradient would be impossible. The claim that we use SGD to get better local optima is new to me.

3. The mention of SAT/SMT is making the analogy do a ton of work here. The article admits it, but still, I struggle to understand what backtracking, a recursive, deterministic (full-search-space) enumerative algorithm, has in common with simulated annealing, a randomized iterative heuristic.
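For readers who haven't seen it, the accept/reject core that all three points refer back to fits in a dozen lines. A toy sketch of my own on a 1-D objective (not from the article):

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=0):
    """Minimize f starting from x0: always accept improvements, accept a
    worsening move with probability exp(-delta / T), cooling T linearly."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9      # linear cooling schedule
        cand = x + rng.gauss(0, 0.1)         # random local move
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx          # track the best point seen
    return best

# toy 1-D objective with its minimum at x = 3
xmin = simulated_annealing(lambda x: (x - 3) ** 2, 0.0)
```

The contrast with the other methods is visible right in the loop: the exp(-delta/T) acceptance of *worse* moves is exactly what plain gradient descent lacks, and the randomized, anytime nature is what separates it from exhaustive backtracking.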

[1] http://www.math.uwaterloo.ca/tsp/usa50/index.html

[2] http://webhotel4.ruc.dk/~keld/research/LKH/

[3] https://www.math.uwaterloo.ca/tsp/concorde.html

