The article isn't really about university admissions at all; they're merely presented as one cause of an environment that also has its parallels in startup life.
Arbitrary illustrative excerpt:
Whether American or Chinese, individuals who focus too much on ‘achievement,’ and who believe the illusion that they’ve achieved everything simply through their own honest hard work, often think very little of everyone else as a result.
Yes, the article presents several examples of the fundamental attribution error and then draws some conclusions about it. However, the subject matter is the founders of the testing movement, and their failure to achieve their goals.
Again, the only interesting part of the article is highly political; it basically states "be nice to the proles before they eat you." In other words, it belongs on Reddit, not here.
If it makes you guys lay off: I agree with the blog post; I just don't think it belongs HERE.
You're just annoyed that you made a bad decision with your money and feel that someone else should shoulder the blame. Yet somehow it isn't the fault of the active investors who take all that expense-ratio cash in return for consistently underperforming; no sir, blame the indexers! Is this what your "Financial Advisor" told you while he was skimming 1-5% off the top?
In other news, Buggy Whip Owner Considers Cars Harmful (Possibly Evil).
Network interfaces should not rename themselves after reboots. On Red Hat derived distributions, the device name is tied to the MAC address of the interface, so it never changes.
On other distributions, "udev" accomplishes the same thing.
The only point at which interface naming is arbitrary is installation time; after that, it stays the same forever.
Not only will this add confusion by introducing a bunch of new names for network interfaces, it will also break applications that rely on them being called "ethX".
It will also make it impossible to manually assign names to interfaces. For instance, if you have a configuration that uses an embedded interface and you want to add another interface on a card, you can manually assign the same "ethX" name to it, but not if the names depend on the physical characteristics of the hardware.
No, you can still rename interfaces using udev just like you can now. This is largely about ensuring that devices get predictable names at install time. As is pointed out in the LWN comment thread, for anyone who has to provision large numbers of servers with multiple NICs that connect to different networks, this is a godsend.
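For anyone who hasn't poked at it: on many current distributions the persistent naming lives in an auto-generated udev rules file, which you can also edit by hand to assign whatever names you want. A typical entry looks roughly like this (the path is the common convention, and the MAC address here is made up):

  # /etc/udev/rules.d/70-persistent-net.rules
  # Pin the NIC with this (made-up) MAC address to the name "eth0".
  SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1b:21:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"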
Why not give an argument to the kernel to create the new devices as "fooX" (foo being a supplied parameter), and let the udev userland deal with the renaming to "ethY" as it pleases?
While this novelty may certainly appeal to the folks who are used to alphabet soup in the network adapter list - and I admit there is a degree of coolness to adapters that can distinguish between the T568A and T568B cable layouts, since being able to tell straight cables from crossover cables adds real value - overall it seems like a step back to me.
I would plug the interfaces into a switch - and all I care about is finding the same MAC address that I see on the switch ports, not which bus they attach to.
Good point; however, the problem is only half-solved.
If the machine is coldstarted, then the previous devices could be randomly reshuffled. Imagine you have a datacenter with 100 machines and some kind of provisioning scheme where you coldstart them often. It would be a nightmare having to run ethtool -p <dev> and replug cables.
We have solved the problem with a consistent algorithm that sorts all the devices in a udev callback. We have a known set of hardware, so we know which cards are add-ons (our machines have 4+ eth ports) and which ones are built-ins. So it works out; however, it would be nice to generalize it somehow.
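For a flavor of the approach, here is a minimal sketch of the idea (not our actual script; it assumes the standard /sys layout on Linux and sorts by PCI bus address, which on our boards puts the built-in ports ahead of the add-on cards):

  #!/usr/bin/env python
  # Minimal sketch: order NICs by PCI bus address so the eth0..ethN
  # assignment is stable across coldstarts.
  import os

  SYS_NET = "/sys/class/net"

  def pci_address(iface):
      # The device symlink points into the PCI tree, e.g. ../../0000:02:00.0
      return os.path.basename(os.readlink(os.path.join(SYS_NET, iface, "device")))

  # Skip virtual interfaces like "lo", which have no device symlink.
  ifaces = [i for i in os.listdir(SYS_NET)
            if os.path.islink(os.path.join(SYS_NET, i, "device"))]
  for n, iface in enumerate(sorted(ifaces, key=pci_address)):
      print("%s -> eth%d" % (iface, n))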
> If the machine is coldstarted, then the previous devices could be randomly reshuffled.
The detection order would change, but the ifcfg file (RH) or udev rules (SuSE and others) would not change, so your NICs will stay the same these days.
The ifcfg file is auto-generated by the coldstart. But you are right: if we save and restore ifcfg files, then it would work. However, if this saving and restoring has to be done after the machine is coldstarted, its eth0 could be connected to another network completely.
That is the problem I am trying to convey -- the machines are coldstarted often, and MAC addresses are reshuffled on every first boot. If your cable to the management network is connected to the left socket on the motherboard, that left socket needs to be eth0 on every coldstart. Sometimes it is not. Yes, after the first boot, when the ifcfg files have been written, it will be stable, but that means someone has to go and either hand-edit the ifcfg files or run ethtool -p eth0 and replug the cables.
> Subsequent cold starts only read the files
No, they don't. There are no files to read. You just wrote above in your comment that the first time the OS boots, the ifcfg files are generated. So after a machine is coldstarted there are no ifcfg files! They are generated once per coldstart and saved. Then during each restart those files are read and everything is fine.
Now, as I was saying, you can have a provisioning scheme where, during coldstart, your kickstart file fetches and writes stable ifcfg files to prevent machines from generating their own. What we do instead is install a custom RPM with a udev script in which an algorithm sorts the network interfaces in a stable way (since we only deal with a known and limited set of motherboards and network cards).
> No, they don't. There are no files to read. You just wrote above in your comment that the first time the OS boots, the ifcfg files are generated. So after a machine is coldstarted there are no ifcfg files!
As long as you don't reinstall your OS, or lose your data, your config files are there. Cold starting hardware does not delete data.
Alright, I figured it out. When you talk about coldstarting, you really mean restarting the machine, as in pressing the reset button, for example. When I talk about coldstarting, I mean starting with a bare hardware box and installing an OS on it.
> Are you trolling?
Actually I thought you were. Sorry for the misunderstanding.
[EDIT: I won't edit my previous posts, otherwise your posts won't make sense; let others laugh at my bad English (it is a 3rd language, so I don't mind)]
No probs. Most places I've worked at use 'provision' or 'bare metal' for an OS install plus mop-up work, and 'starting cold' as a way to differentiate from warm booting; I appreciate that different places may have different conventions though. Thanks for being classy about it :-)
Moving a PCI card to another slot should also not rename the device. With a typical current udev setup, device names are persistent based on MAC address. With this proposal, either the name will change if the bus topology changes, or the bus-location-based names will be _wrong_. As usual, the fedora/desktop trolls are breaking perfectly good things for no apparent reason at all.
I once had a server (in the pre-udev era) that would occasionally shuffle its network devices after reboots. I don't remember how I fixed it but I do remember the short phase of amusement, followed by a longer phase of frustration.
I have a lot of local video files, and I've (briefly) tried the Boxee software a couple of times. It has a feature where it tries to identify that media, but I've found it to be comedically awful at doing so, especially when it comes to TV series: it tends to pick out maybe one or two episodes from a given series directory, then for some reason completely ignores the rest, even though they all follow the same filename format.
Does anyone know of a way to just turn this off entirely? I already have things organized by directory/filename. From what I can tell, the current 'solution' is to manually go through each file and fix whatever stupid information was auto-detected. Which is backwards, because if someone's anal enough to deal with that, they've likely already got things meticulously organized how they want by directory/filename, so why not just go by that directly?
I get the strong impression that they didn't really make local media playback a priority.
Also pisses me off that it doesn't let you delete stuff you've watched from the interface.
I ended up having to write an app that tails my Boxee log looking for files that have been watched and then moves them into a "watched" directory, where they are purged several days later.
No, they've both chosen a distribution model which is incompatible with any software containing GPLv3 code. Sure, if the code is 100% yours, you can do whatever you want with it since you own the copyright. But if it has ever accepted contributions from anybody else under GPL terms, then it can't be done without breaking those terms.
They could work out a process for developers who want to contribute GPLed apps, but they've decided not to.
I'm going to play devil's advocate and say that while it is good to know these things, they're no longer essential. I graduated two years ago and have been developing in Ruby since then; I haven't seen a linked list since college. Why? Because I chose to work with a newer language, with a higher level of abstraction. The programmer of the near future will face a different set of challenges than programmers did previously. In the same vein, the guys who graduated 10-15 years ago weren't learning Fortran. But 25 or 30 years ago, I'm guessing they were.
Here's a quick example of a case where knowledge of data structures and algorithmic complexity is useful, regardless of the language.
Let's say you've got two lists, named "A" and "B", each containing 1000 integers. You want to return a list of the unique integers that are present in both lists (i.e., the intersection of the two lists). These 'lists' could be arrays or linked lists.
The brute force method would be something like this:
  def intersect_brute(A, B):
      out = []
      for a in A:
          for b in B:
              if a == b and b not in out:  # "not in" is an O(n) list search
                  out.append(b)
      return out
But this can be very slow, because you can end up in the territory of O(n^3) iterations across A, B, and out (internally, the language iterates across out in order to evaluate that "in" check). In this specific example, that works out to something on the order of 1000 x 1000 x 1000 = 1,000,000,000 iterations in the worst case, vastly larger than the size of the original lists.
---
A better way is to sort one or both of the lists, then walk along both of them in parallel doing the comparisons (this version indexes into the lists as arrays, but the same merge-style walk would work for linked lists):
  def intersect_sorted(A, B):
      out = []
      A = sorted(A)  # O(n log n)
      B = sorted(B)  # O(n log n)
      i = j = 0
      while i < len(A) and j < len(B):
          if A[i] == B[j]:
              if not out or out[-1] != A[i]:  # keep the output unique
                  out.append(A[i])
              i += 1
              j += 1
          elif A[i] < B[j]:
              i += 1
          else:  # A[i] > B[j]
              j += 1
      return out
This is definitely better than the above example. Now we're performing two sorts, each O(n log n), then doing a single linear walk along both of those lists in parallel, each O(n). So we end up with an overall complexity of O(n log n), dominated by those initial sorts. Let's estimate the total number of iterations to be around, I dunno, 10000 (sorts) + 2000 (walk/compare) = 12000? The exact number of comparisons in a sort can vary with the algorithm used and how the lists were ordered to begin with.
Not bad, definitely better than what we were doing before. But we might be able to go a little better...
---
Yet another way is to use hash sets, which (generally) have O(1) insertions and lookups, and which only store unique values, so inserting the same value twice is effectively a no-op. We can do something like this:
  def intersect_hash(A, B):
      out_set = set()
      a_set = set(A)  # O(n) to build
      for b in B:
          if b in a_set:  # O(1) average-case set lookup
              out_set.add(b)
      return list(out_set)  # optional; we could return out_set directly
Now we end up with an algorithm which is O(n): we iterate over the items in A once to fill a_set, then iterate over the items in B once, and each "in" check is an O(1) hash lookup into a_set. Finally, we iterate over out_set once to create the output list. This last step is optional (we could just return out_set directly), and it doesn't affect the algorithmic complexity either way. Now we've got 2000-3000 iterations, depending on whether we return a set or a list. And this version arguably looks a bit simpler than the sort-and-walk version above.
---
So just by using our knowledge of data structures, we've turned ~1,000,000,000 iterations into ~2000-3000. That's on the order of reducing the distance to the moon to around a kilometer. And that's just with two lists that are limited to 1000 items each.
And this sort of thing is a pretty common scenario in any language, no matter how 'abstracted' it is.
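If you're skeptical of those back-of-the-envelope numbers, it's easy to measure. Here's a rough timing harness (hypothetical: it assumes the three versions above are saved in one file under the names intersect_brute, intersect_sorted, and intersect_hash):

  import random
  import timeit

  # Two random lists of 1000 integers each, drawn from a range wide
  # enough that the intersection is non-trivial but not everything.
  A = [random.randrange(10000) for _ in range(1000)]
  B = [random.randrange(10000) for _ in range(1000)]

  for fn in (intersect_brute, intersect_sorted, intersect_hash):
      t = timeit.timeit(lambda: fn(A, B), number=10)
      print("%s: %.3f seconds for 10 runs" % (fn.__name__, t))

The exact numbers will depend on your machine, but the relative gap between the brute-force version and the other two should be hard to miss.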
Thanks a bunch for the detailed examples. I've noticed this is a huge problem in my code: too many nested for loops. I've generally gotten better about using built-in functions like set (in Python at least):
Out = list(set(A) & set(B)) # right?
Or even DISTINCT in SQL (when dealing with webapps).
And now I never doubt the usefulness of fuzzing the DB with hundreds of thousands of random entries. That quick bit of code that took 40ms to run over several dozen entries turns into a minute or more over several thousand... Ouch! ;-)
At home I do carry it everywhere. When I go out, I have a 10.1" netbook bag which holds my iPad, keyboard, camera, charging cables, and earbuds. It's easy to carry around everywhere, even to restaurants and the local B&N store.
I only use the keyboard when I plan to do a lot of writing. Right now I'm just using the software keyboard.