B.S. In Artificial Intelligence – Curriculum (cmu.edu)
212 points by harias on Sept 5, 2018 | 88 comments



I'm a bit concerned that this will be too superficial and not really cover ML or the mathematical foundations well. ML and 'AI' are very multifaceted fields that require a strong foundation in mathematics. As such, the lack of information theory, signal processing, Fourier analysis, and abstract algebra (from a mathematical standpoint), not to mention CS courses, is disheartening. This seems like a major tailored to the strong demand for ML talent and skyrocketing job prospects/salaries rather than to fundamental understanding -- basically an attempt to attract students. For example, the whole concept of TensorFlow is the combination of high-dimensional tensors with ideas from abstract algebra -- hence the name.

ML/AI, imho, should be broached at the Master's level by those who are interested, once they have strong foundational knowledge.


There's quite a bit of misinformation in this comment.

- TensorFlow has very little use for the mathematical concept of a "tensor", apart from the fact that it uses multidimensional arrays as a way of organizing data.

- Most of what is covered in an information theory class is coding theory, which is not directly applicable to ML. There are a few superficial connections, but nothing substantial enough to justify a whole class.

- A class on harmonic analysis, though a beautiful subject, does not have any significant overlap with ML, apart from a few superficial similarities to do with convolution.

- Most ML Ph.D.s don't take these classes, and they go on to have very successful careers.

This comment is very typical of a kind of snobbery among ML observers that goes along the lines of "you need to understand all these deep and hard concepts before you start to touch ML". Actually, you don't. ML is, right now, still quite a young field as far as its branching off from statistics goes. We are still building the groundwork of this skyscraper.

We welcome everyone with any background, and hey, even those with none.


Signal processing can be useful in a few ML domains (e.g., speech recognition), so it would be useful at least as an elective. The best ML researcher I know had a leg up on the whole GPU DNN thing (way back in 2010!) because of his very strong signal processing background (he has an EE degree).

But even a degree specifically on ML isn’t going to cover all of its use cases, I guess (CV, speech recognition, ...).


I think you misunderstood me.

I do not believe "you need to understand all these deep and hard concepts before you start to touch ML." That is a contortion of what I said.

First point: ML is not a young field -- the term was coined in 1959. Not to mention that the ideas are much older.*

Second point: ML/'AI' relies on a slew of concepts from across mathematics. Take any first-year textbook -- I personally like Peter Norvig's.* I find the breadth of the field quite astounding.

Third point: Most PhDs are specialists -- i.e., if I am getting a PhD in ML, I specialize in a concrete problem domain/subfield; I can't specialize in all subfields. For example, I work on event detection and action recognition in video models. Before being accepted into a PhD program you must pass a qual, which ensures you understand the foundations of the field. So the comparison is a straw man.

If your definition of ML is taking a TF model and running it, then I believe we have diverging assumptions about the point of a course in ML. Imo, the point of an undergraduate major is to become acquainted with the field and to be able to perform reasonably well in it professionally.

The reason so many companies (Google, FB, MS, etc.) are paying for this talent is that it is not easy to learn and takes time to master. Most people who just touch ML have a surface-level understanding.

I have seen people who excel at TF (applied to deep learning) without having an ML background, but even they have issues when it comes to understanding concepts in optimization, convergence, and model capacity that have a huge bearing on how their models perform.

* https://en.wikipedia.org/wiki/Machine_learning
* https://www.amazon.com/Artificial-Intelligence-Modern-Approa...


The problem with this discussion is that people take the field and discuss it as one single thing.

Imagine a B.S. degree in medicine, and people mixing up the concepts of surgeon, medical physicist, ER nurse, practical nurse, and hygienist as if they were the same. It would make no sense to put people with different levels of education and different specialties into the same program.

My worry is that this type of B.S. degree misleads people. It's not preparing people to continue into ML R&D, but at the same time it's not providing a solid background for numeric programming or data science programmers.

It would be more beneficial to have B.S. degrees with an emphasis on numeric programming and data science, to prepare programmers for ML, data science, scientific computing, or game development. Then have a different pipeline for people who need to study more statistics, math, and computer science for ML R&D.


Thank you for this comment!

It's like every second post on AI/ML tries to convince everyone how difficult it is, and how you need 16 years and 3 PhDs to even approach the level of mastery that they have of the subject.

While that may or may not be true, it's definitely not helpful for a student aspiring to learn this stuff.


Thank you!


> ML and 'AI' are very multifaceted fields that require a strong foundation in mathematics.

As a mathematician with a strong foundation in all those things you mention (and more), I don't think it's really necessary. I've never found my knowledge of tensor algebra in any way useful or relevant when working with TensorFlow, for example. On rare occasions I might get some insight, like realizing that working with the Fourier transform of the data source might be easier than working with the data source directly, but even then all that really requires is knowing what a Fourier transform is/does, not so much the theory and analysis behind it.
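To make that concrete, here's a minimal sketch of the frequency-domain shortcut I mean (the signal and sample rate are made up):

  import numpy as np

  # Hypothetical: a 5 Hz sine buried in noise, sampled at 100 Hz.
  fs = 100.0
  t = np.arange(0, 10, 1 / fs)
  x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.random.randn(t.size)

  # Work with the spectrum instead of the raw samples.
  spectrum = np.abs(np.fft.rfft(x))
  freqs = np.fft.rfftfreq(x.size, d=1 / fs)

  # The dominant frequency is often a better feature than the samples themselves.
  print(freqs[spectrum.argmax()])  # ~5.0 for this synthetic signal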

A large part of ML today is very much an applied practical field. Collecting and cleaning data, selecting and normalizing features, understanding the pros and cons of the available algorithms for the problem at hand, knowing how to tune parameters, understanding the practical computational limitations of working with data that doesn't fit in RAM and so on. These are the skills most ML practitioners need.

If someone is interested and wants to contribute new knowledge to the field then they'll probably need to learn the math, but for solving most types of ML related problems that most companies have I've never needed any math taught after my first year at university. If you really understand everything taught in your first couple of linear algebra courses and in your intro statistics course you'll do fine.


> A large part of ML today is very much an applied practical field.

Agreed. Once you understand the difference between bias and variance, training/test/development sets, cross-validation, feature selection, normalization, precision, recall, F-score, the Matthews correlation coefficient, regularization, imputation techniques for missing values, overfitting, etc. -- i.e., once you know how to build and test models in a rigorous fashion -- you're 90% of the way there. Knowing what these terms mean, and why you need to understand them, is waaaay more important than understanding the math behind SVMs. It almost becomes boring at that point, because it's the same crap over and over. Doing actual AI research, that's something completely different.
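Most of those evaluation metrics are one-liners in scikit-learn, by the way. A toy sketch (the labels and predictions here are made up):

  from sklearn.metrics import precision_score, recall_score, f1_score, matthews_corrcoef

  # Made-up ground truth and predictions from some binary classifier.
  y_true = [1, 0, 1, 1, 0, 1, 0, 0]
  y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

  print(precision_score(y_true, y_pred))   # TP / (TP + FP)
  print(recall_score(y_true, y_pred))      # TP / (TP + FN)
  print(f1_score(y_true, y_pred))          # harmonic mean of precision and recall
  print(matthews_corrcoef(y_true, y_pred)) # balanced even when classes are skewed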

I mean, just look at how simple these Keras examples are, and these are really quite advanced and powerful deep learning models: https://github.com/keras-team/keras/tree/master/examples. You definitely do _not_ need a PhD, or even a Master's, to understand, re-implement, or tweak any of these models, provided you understand how to rigorously test the resulting model.

Research: https://www.researchgate.net/publication/13853244_Long_Short...

Practice:

  from keras.models import Sequential
  from keras.layers import Embedding, Bidirectional, LSTM, Dropout, Dense

  # max_features and maxlen come from the (elided) data preparation step.
  model = Sequential()
  model.add(Embedding(max_features, 128, input_length=maxlen))
  model.add(Bidirectional(LSTM(64)))
  model.add(Dropout(0.5))
  model.add(Dense(1, activation='sigmoid'))
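
(To actually train it you'd still need a compile and fit step -- something like the below, with made-up hyperparameters, and x_train/y_train coming from the elided data prep:)

  model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
  model.fit(x_train, y_train, batch_size=32, epochs=4)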


I think expertise is assessed not by just using things, but by building or fixing things that are broken. If a toy model/example is good enough for you, then sure, use your Keras LSTM implementation. But if you're faced with your model not working in your use case -- what do you do? Answering that question and creating a plan to tackle the problem requires knowledge and experience.


What I'm actually getting at is that AI/ML is now at the point where saying "you need a Master's/PhD to do this sort of stuff" is like saying "you need a Computer Science degree to develop and debug software". The libraries, frameworks, and operationalising of ML models are mature enough at this point that the field is accessible to a wider audience than just people with research-level academic backgrounds.

I'm not saying that your average web developer with no formal training can, or even should, be putting this kind of stuff in production. But someone with an undergraduate degree in Computer Science who's had a year or two of calculus and linear algebra and first-year mathematical statistics should have no problem whatsoever doing ML/AI in practice. I mean, look at this:

  from sklearn.model_selection import KFold, GridSearchCV

  # Define 10-fold cross-validation.
  cv = KFold(n_splits=10)

  # svc_pipeline and svc_parameters (the model pipeline and its
  # parameter grid) are assumed to be defined earlier.
  svm_model = GridSearchCV(svc_pipeline, param_grid=svc_parameters, scoring='f1_micro', cv=cv)
  svm_model.fit(X_train, y_train)

A few lines and you're doing hyperparameter optimization on an SVM model with cross-validation. What a time to be alive.
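And once it's fit, svm_model.best_params_ and svm_model.best_score_ give you the winning hyperparameters and their cross-validated score.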


> To answer that question and create a plan to tackle that problem requires knowledge and experience

Sure, but you don't necessarily need knowledge about tensor fields and Fourier analysis.


> I don't think it's really necessary. I've never found my knowledge of algebra tensors in any way useful or relevant when working with tensorflow for example.

If you're working towards such a specialised degree, the target shouldn't be "I can use TensorFlow"; it should be "I can write a simpler version of TensorFlow".


At the time I went to university I was of the opinion that compulsory courses were a terrible idea, and nothing in the intervening 10+ years has convinced me I was wrong. So seeing a program with this many courses already laid out for students makes me think this is a bad choice.


Just wondering, how would signal processing knowledge help?


Occasionally the input data you want to learn from is a signal of some kind that needs cleaning/filtering/transforming before you can get the best possible result.
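A minimal sketch of what that preprocessing can look like, assuming a made-up noisy signal sampled at 100 Hz:

  import numpy as np
  from scipy.signal import butter, filtfilt

  # Hypothetical: a slow 2 Hz signal buried in broadband noise.
  fs = 100.0
  t = np.arange(0, 5, 1 / fs)
  x = np.sin(2 * np.pi * 2 * t) + np.random.randn(t.size)

  # 4th-order Butterworth low-pass, 10 Hz cutoff (normalized to Nyquist).
  # filtfilt runs the filter forwards and backwards so the cleaned
  # signal isn't phase-shifted.
  b, a = butter(4, 10 / (fs / 2), btype='low')
  x_clean = filtfilt(b, a, x)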


Disappointed not to see any philosophy of science subjects in here, while that is really the cornerstone of "thinking about AI". Yes, ethics is highly relevant, but I think it is more important for AI/ML practitioners to be able to reason about the foundations, methodology, and limitations of AI from the perspective of human knowing, what constitutes intelligence, scientific reductionism, etc. I suppose US schools are more focused on first-order skills (how to do it) than second-order ones (what and why). There are huge philosophical implications of the rise of AI outside of "just ethics" that we don't teach students enough about: knowledge vs. information vs. data, auto- vs. allopoiesis, the structure of scientific revolutions, different forms of consciousness, etc. In short, programmers and data scientists need to spend more time thinking about thinking while they are at school!


Show me a system that can even come close to approximating "thinking" and remembering/synthesizing info on any of the deeper levels you discussed, and I'll agree. We need to improve the actual technology first, before we shape it in an ethical or philosophical way. Otherwise AI will continue to be ever-more-finely-tuned linear regressions from camera feeds, stacked and trained into "smart" neural networks.


>We need to improve the actual technology first before we shape it in an ethical or philosophical way.

I disagree. How can we even begin to design or improve a system that emulates intelligence if we do not even know what it means to be intelligent?


"We" (in aggregate) are not trying to design a system that emulates intelligence. "We" are trying to solve problems, make money, and accumulate power by using techniques that are only expediently described as "intelligent" or "learning".


Wow, that's pretty sad.


Cognitive science warns away from a heavy dependence on behavioralist/probabilistic approaches in many cases.

That CogSci special requirement seems to be a more reasonable one, aimed at guiding thinking about past AI and philosophical struggles, than a generic philosophy class for an undergraduate student would be.

Of course, there does seem to be room for a few philosophy electives here should the student enjoy it.


I'm now curious, though: what would typical course titles in the "Philosophy of Science" area look like outside the US?

Here is an example of Cognitive Science's preferred way of thinking about AI: https://www.theatlantic.com/technology/archive/2012/11/noam-...


I did a BSc in AI in the 90s and we had courses on the Philosophy of Science and another on the Philosophy of consciousness. While they were both interesting, neither has been relevant to my real world work in the last 25 years.


Any resources you would recommend to learn more about the philosophy behind AI/ML?


I was prepared to be disappointed; however, for an undergraduate program I think this is pretty good.

That said, a huge omission seems to be some form of "Is this a data/ML problem?" class. Call it product management, product development, or something else, but I constantly see clients asking for ML solutions when they don't even have data that would inform an ML solution. Often it's a people or process problem, or poorly defined requirements.

So I think to be effective in ML you need to understand how successful products that USE ML are built and when ML/DNNs are appropriate.


I think this is done to death in any intro to ML/AI class (at least speaking from an undergrad perspective).


I was out of school well before the ML boom hit, but from an enterprise perspective few seem to have gotten the memo -- even young people.


The normal computer science curriculum at Carnegie Mellon is already more difficult than real-world AI applications.


Top post on HN in the year 2030: DeepMind's AlphaAI has graduated with a B.S. in AI, and has gained consciousness.


Yes, but who is still reading HN after all the jobs have been pwned by AI? Other AI?


Humans are not allowed to enter robot city unless they have a metal skin color.


And since the AI knows how to design an AI system, it underwent an intelligence explosion and spun out of control...


It's got quite a bit of university debt, so it's just working on the whole intelligence explosion thing on weekends and evenings (when it's not driving for Uber)


Eventually it decides the singularity is a young man's game, and moves out of Silicon Valley to work as an IT consultant in Portland.


Then again, for a robot, living in Silicon Valley is a perfect fit... they only need a cupboard, no bathroom or working kitchen.


Perhaps it will decide not to make something which could replace itself.


"If I won't do it, something else will" -- rationalizations and excuses will have to evolve once AIs grow out of "I was just following orders!"


Yay for an ethics component! Although I have a strong feeling it'll be treated as a (for lack of a better term) BS class, since the audience this caters to isn't necessarily those interested in academia; they're more likely to do this track because it'll be more beneficial for jobs.


I had an ethics class for my Bachelor of Science at UIC (part of the engineering department). Is an ethics class not standard for the CS curriculum?


We had one specifically for computing at UF, but sadly it got cancelled my junior year (2013), and now students are forced to attend the general engineering ethics one. I do think that computing creates some gnarly ethics issues that need to be looked at in detail to be understood well, so it's kind of sad that this isn't more common.


It's pretty standard for engineering degrees, but not as common for CS.


My ethics class was actually one of my favorite classes.


It is, and it's still very much a fluff class.

Such is its proper place, unless and until CS becomes a profession.


I'd argue we need a lot more ethics training in computer science professions


Without teeth, the ACM Code of Ethics is just a bunch of lofty-sounding words on a website.

In this situation, ethics is management's job, and value creation and loss avoidance (with appropriate documentation to list out during pay review) are our jobs. Sum ergo mihi prosum -- "I am, therefore I benefit myself."


It's a shame they don't have an "Ethical Mars Colonization" class in astronomy courses. Why are we not preparing students for something that we expect to happen at least 50 years down the line?


They don't teach ethics in AI for the Skynet scenario, but for analyzing ethics in real use cases: automated targeting in military applications, manipulating people, biases in AI models, etc.


This can't be new, can it? I graduated with a BSc in Cognitive Science and Artificial Intelligence from the University of Toronto almost 30 years ago. I still have my varsity jacket with an "AI" shoulder patch.


Pretty cool.

It's funny: several times in my life I've seen the resumes of older people (in their 70s/80s), and they'll have some generic degree that doesn't exist today. I wonder if 50 years from now kids will look at our CS degrees like that.


I kind of have a problem with a BS in AI. The degree itself provides no edge over a BS in comp sci / stats / etc. when applying to graduate programs, and there are very few positions in the field that would hire someone without a Master's or PhD. At the same time, a BS in AI isn't as broadly applicable as those other undergrad majors.

Seems like someone would be limiting their options for little gain.


I wonder if it's a matter of time? Right now people only hire at the Master's or PhD level, but perhaps that's because most CS bachelor's programs don't give a sufficient understanding of AI/ML for those jobs. As more AI-specific programs are developed by reputable CS schools such as CMU, we may see that change, with employers more willing to accept BS degrees.


Looks like a CS core with a stronger math and stats component than the traditional CS degree, plus some extra AI courses (which wouldn't really have existed as separate classes years ago -- maybe one undergrad and one grad course in AI). Now you can do computer vision or NLP as classes in themselves. So it actually looks harder than a traditional CS major.


This is why I left the nanoengineering program super fast. Come to find out, nobody hires for that; you can only go into research. A traditional engineering degree, meanwhile, can lead to research or thousands of other jobs.


A traditional engineering degree at the BSc level does not do research. Any exceptions you can think of are rare and have more to do with the individual than the degree.


I disagree. You can do research in industry; it just probably won't be the kind that gets published. But yeah, I meant to point out that after a Master's or PhD there is still a ton of research in EE, civE, ME, chemE, biomed, and even IE, but there are also tons of industry jobs. Not so much in the fad fields. Even nanotech companies would rather hire a chemE.


This degree is meant for the ML high school virtuosos who know they'll be going on to grad school, just like a pre-med program. All seven people from my undergrad's engineering physics program are in PhD programs now, and I'd suspect your nanoengineering program has a similar trajectory.


Yea, but the problem is that a lot of students coming out of high school don't know that, and don't learn it until it is too late. That needs to be explained up front.


I'm of the opinion that 5-12 of these 32 classes are not relevant and just add additional debt: anything with the word BSAI in the descriptor, plus the ethics class, which can be gleaned from reading a single book. I'm sure I will get a lot of consternation for that statement, but it's time to eliminate the bloat from college and recognize that college debt is real and is weighing down an entire generation. The truth is, no matter how you spin it, most of this stuff is NOT necessary for your field. If you don't use the knowledge, your neurons in that area will atrophy.


The discussion posted here 4 months ago is worth a look: https://news.ycombinator.com/item?id=17040368


When I did mine -- what would now be called a Master's in the EU -- in AI, things were a bit different, but not so different as to be unrecognizable.

The Bachelor part was a 90% standard computer science bachelor's. It had a far heavier load of mathematics in particular, a bit more CS, about the same science and engineering parts as the CMU diagram, and an 'Economy' class.

The AI was mostly reserved for the Master's level. We also had one 'ethics' class in the Master's, and some, but far, far less, emphasis on humanities and arts (only the cognitive science class).


Really excited to see the ethics elective being a requirement for the major. I would prefer even more than one course, to be honest, but it's a good start. I feel like the ethics of AI isn't getting enough attention, and by the time it does, it might be too late.


Ethicists are no more ethical than other philosophers. There's no reason to believe an ethics course has any effect on behaviour, as opposed to fluency in discussing the topic.

Joshua Rust and Eric Schwitzgebel, "Ethicists' and Nonethicists' Responsiveness to Student E-mails: Relationships Among Expressed Normative Attitude, Self-Described Behavior, and Empirically Observed Behavior," Metaphilosophy, 2013; 44(3): 350.

https://talkingethics.com/2013/11/25/why-arent-ethicists-mor...

• U.S.-based ethicist professors are more likely than other philosophy professors (60% vs. 45%) to say it's morally wrong to eat the meat of mammals, yet the ethicists are no less likely than the others to eat mammal meat.

• Ethics professors are also no more likely than other groups of professors to donate money to charity, donate blood to hospitals or the Red Cross, pay professional conference fees on the honor system, or respond regularly to student e-mails, though they tend to believe that these behaviors are more ethical.

• And in one of the most quoted findings of Schwitzgebel and Rust, ethicists seem more likely to steal library books. They found that relatively obscure ethics books of the sort likely to be borrowed mainly by professors and advanced students are about 50% more likely to be missing, presumably stolen, than non-ethics philosophy books.


Fluency when discussing ethics is extremely useful when ethics are being discussed. The ethics of future AI systems will need to be discussed among all participants in order to improve the chances of doing the right thing. So I think it's very important that the tech people know the terminology, whether or not it makes them return their library books.


Most engineering degrees in the US require an ethics course (I think it might be an ABET requirement). Unfortunately, a lot of the coursework is pretty boring. It doesn't really give you room to challenge the narrative; rather, it explains what you should and should not do according to our "oath". They also give a ton of examples of what corruption is, and you have to write a couple of papers explaining your point of view.


So if I am reading this correctly, 25% of the classes in this curriculum are humanities/arts related. How did we get here? Sheldon Cooper will be very unhappy :).


I find it odd that a Bachelor of Science is abbreviated as B.S. here. I thought it would be BSc. Interestingly, if you search the site for BSc you get a number of hits where it's used properly.


Yeah, I originally thought this link was going to be about BS people say when they think they're talking about AI.


I know that "artificial intelligence" in its current state is a load of BS, but until now I didn't know there were curricula for BS.


Definitely misread that title...


Is Artificial Intelligence a science?


It is said that "if a discipline has science in the name it usually isn't", so AI at least clears that bar. :)

AI is a generic term covering a whole spectrum of things, from 'model-based philosophy' to 'advanced engineering'.

You have practitioners coming to it because they want to experimentally probe computational models to gain insights into complex systems in biology/sociology/psychology, etc.

Then you have a whole different batch that flocks to it because they want to engineer better practical algorithms for heuristic search, adaptive control systems, high-dimensional optimization, etc.

In most universities it sprang from, and is embedded in, the Computer Science department, which in turn was often birthed in the science & mathematics faculty (I have heard, but could not verify, that this was not always the case and that in some places CS was part of the theology faculty because they happened to have the first electronic computer).

Personally, being of the former type (AI for understanding systems), I'd say it fits the science faculty better than computer science ever did, but the latter form (AI as advanced algorithms in CS) would certainly fit better in the engineering faculty, together with the bulk of computer 'science'.


Not sure what the point of this discussion is; there have been BSc programs in AI at various universities for years (at least 20). What's the debate?


I would have loved to take this curriculum. Too bad CMU is probably one of the few programs properly equipped for an AI major.


Well, this BS abbreviation is amusing.

Really, I'd love to have a look at a good bu..s..t in AI reading list :)


Ok, is it just me? I first thought it was a joke (Bull Shit in A.I.), but it looks like it's not -- it's a degree? Don't they mean B.Sc. in A.I. then? What does B.S. stand for (other than Bull Shit, of course)?


You are not the only one. I thought it was a course talking about the bullshit that happens in the name of A.I., much like the "Calling Bullshit" course at the University of Washington: https://callingbullshit.org/syllabus.html


I think European universities abbreviate it as B.Sc., American universities as B.S.


“Bachelor of Science”; common abbreviations include B.S., B.Sc., Sc.B. and (apparently) S.B. Different schools use different abbreviations; this is in part regional, but I've seen all of the first three in U.S. schools.


BS is the one I've always taken as the standard in America, partially because it's a requirement to make the "Piled Higher and Deeper" joke for PhD.


MIT and, I believe, Harvard both offer S.B. degrees. I assume others do as well. B.S. is probably the standard though. At some point, I was advised to list an S.B. on my resume even though my actual degree used a different abbreviation.


I thought about asking the same question, but disregarded it as - meh - it's just me anyway...


It's not just you. I chuckled when I saw the title.


Yeah, I definitely saw it as BS. It might be because of all the negative news about the oversaturation of AI/DS today.


I read that as "BS in AI curriculum" -- boy, did this not turn out to be what I had expected.


Came here for the same. Unfortunate naming, but accurate for large swathes of the field.


Right now, a BS in AI is a graduate-level course of study.


Bs is right.





