Another useful thing, if you need to backdate commits for whatever reason, is the pair of GIT_AUTHOR_DATE and GIT_COMMITTER_DATE environment variables: when you run git-commit, they override those fields in the commit with whatever date, time, and timezone you specify. I use this sometimes when I'm making previously private work public and redoing the commit history so it makes more logical sense to others who may read it.
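A minimal sketch of how that looks (the date and message here are placeholders; git accepts ISO 8601, RFC 2822, or raw "<unix-timestamp> <tz-offset>" date formats):

```shell
# Backdate the next commit by overriding both the author and
# committer dates; the timezone offset is recorded in the commit.
export GIT_AUTHOR_DATE="2021-03-14 09:26:53 -0500"
export GIT_COMMITTER_DATE="$GIT_AUTHOR_DATE"
git commit -m "Backdated commit"
```

Setting both matters: git-commit only reads GIT_AUTHOR_DATE for the author field, so without the second variable the committer date would still be "now".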
Also useful are git-fast-export and git-fast-import, if you really need to delve into the inner details of a commit. For example, I had three separate but related git repos that I needed to merge, so I created a new repo with separate branches to hold each repo, merged everything manually, committed that to a new branch, then used export/import to edit the commit to have the tips of the three other branches as its ancestors. Maybe there's a better way to do this with other git commands but I found it easier just to delve in and edit the commit data manually.
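For reference, the fast-import stream format makes a multi-parent commit easy to express by hand: a commit record takes one `from` line plus one `merge` line per additional parent. A made-up fragment (the ref names and identity are placeholders) might look like:

```
commit refs/heads/merged
author A U Thor <author@example.com> 1653000000 +0000
committer A U Thor <author@example.com> 1653000000 +0000
data 14
Merge of three
from refs/heads/repo-a^0
merge refs/heads/repo-b^0
merge refs/heads/repo-c^0
```

Feeding something like this to git-fast-import is essentially the manual ancestor edit described above.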
Yep. On a related note, when I was younger, I searched for advice from experts for how to develop expertise in studying and productivity on Reddit. It led to lots of highly-upvoted advice (including stuff like supplements, largely with few real benefits besides placebo), popular blogs by influencers (like Scott Young), and popular self-help books.
However, the actual experts I knew in high school, who later went on to great institutions like MIT or applied to and got into extremely competitive investment banks, didn't browse the internet very much or rely on supplements and these books.
Similar to the ideas expressed in the submitted article, these people didn't spend time online reading blogs and Reddit, or blogging and self-promoting. They were generally involved in a sport (track and field or squash), spent little time online, and spent a lot of time studying on paper.
They were also careful about who they associated with as friends (they hung out with studious people). Less within one's control: their parents were financially successful or held competitive positions (e.g. professors or physicians), so they may have learned these strategies from them rather than inventing them independently.
Long story short: there is absolutely a culture of improvement that is primarily offline and less visible, because people either don't record it, or people do record it and it doesn't get upvoted or ranked highly on Google searches. Examples of recorded good advice appeared on HN recently, shared by computer scientist Donald Knuth who is also usually offline: https://news.ycombinator.com/item?id=31482116
The real answer here is to take whatever the R data.table package is doing and copy that. It's even written in C. It's by far the best CSV-reading package I've ever used, and it will sort as it reads, sometimes extremely fast, since it can pretty intelligently guess whether a radix sort is possible. The only downside is that I believe it can only handle CSVs that fit in memory, but memory is getting pretty damn big these days. I used it as inspiration a long while back when I needed to write a library to read in ARPA-formatted n-gram language models, a fairly obscure file format I could not find any existing libraries for.
Of course, data.table has existed for over a decade and you're not going to come up with something comparable in 15 minutes, but that's why you'd want to copy it rather than come up with your own solution in the first place.
Trying to pretend to be an official client was a game I never wanted to play. There are so many tiny differences in the way I've implemented the protocol that it would be trivial for Spotify to notice if they wanted to. It then becomes a whack-a-mole game between them and us.
Spotify is fully aware of librespot and has tolerated it so far. If they change their minds and try to block it, it will be the end of the road for librespot. This is why, despite repeated requests from users, librespot has never supported free accounts or downloading files: to avoid pissing Spotify off. I always knew it would be trivial for anyone to implement this using the librespot source code, but it makes me a bit sad that someone actually did it.
(That being said, I personally don't contribute or use librespot anymore, so really I don't care)
Hundreds! Doesn't everyone? Most of them are just bash scripts, many of which have now grown so complex that I wish I'd started writing them in a different language, but it's too late now. The majority of the rest are Python.
Off the top of my head, the most used ones are:
* A replacement front-end for "tar" and various compressors
* A script to synchronize my music library to a compressed version for playing in my car
* A secure-but-readable password generator
* A system to batch compress folders full of video files. (For ripped blu-ray discs, mostly.)
* A replacement front-end for "ffmpeg", see above
* A "sanity check" program for my internet connection to see if the problem is me, or Comcast
* A front-end for "rm" that shows a progress bar when deleting thousands of files. (Deletes on ZFS are unusually slow.)
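As a sketch of the password-generator idea (the character set is my own choice; "readable" here just means dropping ambiguous glyphs like 0/O and 1/l/I):

```shell
# Draw 16 characters from /dev/urandom, keeping only unambiguous
# letters and digits so the result is easy to read back aloud.
LC_ALL=C tr -dc 'A-HJ-NP-Za-km-z2-9' < /dev/urandom | head -c 16; echo
```

head closing the pipe early sends tr a SIGPIPE, which is the normal way this kind of urandom pipeline terminates.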
Every couple of months I open a text file, insert headers denoting the coming months, and write down things I have to accomplish under those headers. Whenever I progress an objective, I jot down, in the past tense, the task that helped me progress. For instance, under January I could add "Spend time with family" and under that I would include "Went snowboarding with siblings".
I do this because I'm not prescient; I don't know how unforeseen circumstances might affect my ability to complete my objectives. By only writing down things that I have completed, I'm not discouraging myself if/when I can't finish something.
Here is an example of what I mean:
# January
# Work on personal projects
# Study Graph Theory
# Algorithms
- Implemented Dijkstra's algorithm
# Books
- Read "Introduction to Graph Theory" by Richard J Trudeau
In short, I just organize my ideas into broad categories and then when I think I've progressed, I further categorize it.
They also do this thing now where they block [1] smaller browsers (even ones using the latest version of chromium) under the guise of security. According to their docs they're fighting MITMs by generally disallowing any browser they can't identify (so the big few).
If you're not on a browser whitelisted by Google, you can't log in to (and thus effectively can't use) any of their properties.
This feels very anti-competitive to me. Notably all the whitelisted browsers are either theirs (Chrome) or sell them their search traffic. I'm building a browser for research [2] and have to frequently find workarounds. I'm not quite sure who I'd contact to get on said whitelist either...
When I was in grad school back in 2015, I took an intellectual property law class, and as I recall there are some significant differences between US states in IP law with respect to the employee's rights.
I haven't seen a good summary of all US states on this, but here are a few links from my notes on the differences between various states:
> Cloudflare not allowing benchmarks in their TOS is very sketchy, that puts them in the same tier as Oracle.
There's a non-sketchy way for a company to do that kind of thing which avoids putting them in Oracle territory. Simply require that anyone publishing benchmarks (1) publish complete configuration information sufficient to allow the company and other interested third parties to reproduce the benchmarks, and (2) allow benchmarking of their own products, which they are benchmarking against yours, under similar terms.
It is nicer, of course, not to have any benchmarking restrictions, but if you do that you are putting yourself at a disadvantage against companies like Oracle: they can benchmark against you, but you can't do the same against them.
Once one important player in a market goes down the Oracle route others tend to follow. But if they aren't asses they will follow with the kind of restriction with reciprocity I outlined above instead of an Oracle-like restriction.
Long ago I made a reddit account with the first suggested name after the one I asked for was 'taken', disabled following all the subreddits, and then selectively added subreddits specific to my hobbies.
I don't see anything that normally hits the front page, everything I do see is somewhat relevant to me, and it basically deletes all politics from what is presented.
By far the best reddit experience possible, I think.
I want to plug a friend's company that she bootstrapped: https://www.chatterboss.com/. They offer remote executive assistants on demand. There is no commitment. You pay for the hours they work for you.
They care very deeply about the personality match between you and your assistants. On top of that, they strongly believe in documentation and a cover system so you'll always have coverage even if your primary assistant is unavailable.
They specialize in supporting entrepreneurs. They are a small company so every client is valued and given a personal touch.
Southeast USA including: Virginia (Arlington and Dulles), Maryland (Annapolis Junction), South Carolina (Greenville), Alabama (Huntsville), Florida (Melbourne), Texas (Austin and San Antonio), Pennsylvania (State College) and possibly others, all ONSITE. Citizenship is a job requirement.
We do emulators, JIT, hypervisors, stuff similar to valgrind, debuggers, manual disassembly, binary static analysis, parsers, and assembly. We write our own low-level tools, frequently in C99 to run on Linux. We also use IDA Pro, ghidra, qemu, Simics, JTAG debuggers, gdb, Coverity, KlocWork, LLVM, and so on. Easily transferable skills include those related to compilers, kernel drivers, embedded RTOSes, vectorizing, firmware, VxWorks BSP development, symbolic execution, boot loaders, software verification, concolic testing, abstract interpretation, satisfiability (SAT, SMT) solvers, and decompilers. We work with more than a dozen architectures including PowerPC/ppc, MIPS, ARM/Thumb/AArch64, x86/x64/Intel, DSPs, and microcontrollers. We hire from no-degree to PhD. Common degrees include Computer Science, Computer Engineering, Electrical Engineering, and Mathematics.
We don't normally work overtime, and we get paid more if we do. We're never expected to take work home or be on call. Because of the citizenship requirement, there is no chance that the work will be outsourced. Flex-time is fairly extreme; some do randomish hours.
Location hints: Pick Arlington for a car-free life, subway included. Pick Florida or Texas to live in a place with solid gun rights and no state income tax. Pick Florida for almost no traffic or commute, surfing, and a median house price of about $150,000.
You can email me at users.sf.net, with account name albert.
From someone who does binary reverse engineering full time: in my experience, Binary Ninja, Hopper, radare2, etc. are toys compared to IDA Pro + the Hex-Rays Decompiler. The quality of the results and the features supported are unmatched... until now. I haven't spent too much time with ghidra yet, but it's the real deal. The output of the decompiler looks alright (not complete garbage like I've seen with other tools). Even if everything else sucks, the decompiler by itself makes it outrank every other tool aside from IDA. And it costs $10k less! The fact that it'll be open source is just icing on the cake.
Lots of comments asking what is "Soviet/Russian Math" actually like, and how is it different.
I was lucky to get an education in three systems: the Soviet math school, the Romanian math school (influenced by both the French and Soviet schools), and finally a world top-40 university in North America.
I would summarize the Soviet/Russian Math/Physics approach like this:
- understanding of the mechanism/intuition behind the equations/methods is paramount
- teachers are astute at spotting students who memorize blindly, and will intervene to correct that
- while rigorous about notation, the mathematical representation always comes after understanding, not before
- the progression of teaching (order how material is introduced) is very well thought out
- the old soviet textbooks are generally less verbose than North American ones (less fancy), but high quality in their expression, typesetting, and ESPECIALLY (!!!) the quality of the exercises
- the Soviet Math textbook exercises are something to behold: they have funny, memorable settings (like jokes), they are short and easy to state, and the numbers are chosen so that the result will be a nice whole number, or pi, etc. Basically, as a kid you can read one of those problems, lie down, close your eyes, and work on it in your head.
That being said, I did like some aspects of the so-called "Western Math" (in my case a Canadian university):
- teachers are more approachable, more friendly
- textbooks can be gorgeous (nice colorful plots, etc)
If you've not written C++ code before, it can take a while to catch up with the latest developments in C++23. Start with C, and learn these, in approximately the specified order:
1. lvalue references.
2. Constructors, destructors, and inheritance.
3. Annotations such as const and noexcept on members.
4. Simple type templates, and value templates.
5. constexpr, std::move, and rvalue references.
6. Type traits and std::enable_if.
7. Concepts.
Once you learn the core language features, learning the various data structures/algorithms in `std` should just be a matter of looking them up in cppreference, and using them over and over again.