There is always a surge of bringing legal work in-house after a recession, because there are experienced lawyers on the market looking for work and willing to take a pay cut to go in-house. The same thing happened after the early-1990s recession. I don't know how permanent that phenomenon is; I think other shifts, like the move to flat billing, are going to be stickier.
Your friend's characterization of his work seems apt. The first-year transactional associates I know are summarizing term sheets, conducting due diligence, etc. It's fairly rote, but it's stuff that's just context-sensitive enough not to be automatable. And as a practical matter, entry-level positions in any field, maybe outside of Valley startups, tend to be pretty rote. Some of my friends might sum up their entry-level programming roles by saying that their jobs just involved "changing colors of widgets in the UI in response to support tickets." It's training: work that prepares the associate to do more complex tasks down the road. Speaking broadly, training entry-level people, in any field, is part of the cost of doing business. In law firms, as in any other business, those costs get passed on to consumers.
As a counter-example, I'll offer my own experience as a first year litigator. I spent 75% of my time doing legal or factual research (including researching several issues of first impression), drafting research memos, preparing interview outlines, and summarizing expert testimony.[1] I spent 25% of my time reviewing discovery documents, and that process was heavily automated/outsourced. We were generally only looking at documents that had been marked potentially significant by either a contract attorney ($35/hour in NYC, less in India) or a predictive coding algorithm. The exception was when a batch of documents was too small to be worth dealing with the set-up overhead of putting a contract team together or training the predictive coding software.[2]
[1] Electronic access to case research is of course a big boon to making that sort of work more efficient. However, I should note that, at the end of the day, using Lexis/Westlaw saves the client a lot of money. Google Scholar is so spectacularly bad for legal research that it takes much longer to get an answer, and you feel less confident in the answer you get. The limitations of Google Scholar when it comes to legal and scientific research are a very telling insight into the limits of automation with existing technology. Google is amazing, but fundamentally it's not actually intelligent. It suggests to you what you might want to look at based on what other people looked at. When that popularity heuristic is inapplicable, it becomes really unhelpful.
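To make the footnote concrete: here's a toy sketch of a popularity-based ranking heuristic. This is my own illustration of the general idea, not a claim about how Google's actual ranking works, and the click log and case names are made up. The point is that the heuristic only helps when other users have already searched for the same thing; on a novel query (say, an issue of first impression), it has no signal at all.

```python
# Toy popularity heuristic: rank results by how often other users clicked
# them for the same query. Data is entirely hypothetical.
from collections import Counter

click_log = {
    # past searchers clicked case_A most often for this query
    "statute of frauds exception": ["case_A", "case_A", "case_B", "case_A"],
    # a novel issue of first impression has no entry: nobody searched it before
}

def rank(query, candidates):
    counts = Counter(click_log.get(query, []))
    # Sort candidates by historical click count, most-clicked first.
    return sorted(candidates, key=lambda doc: counts[doc], reverse=True)

# Popularity data exists: the heuristic produces a useful ordering.
print(rank("statute of frauds exception", ["case_B", "case_A"]))
# No popularity data: every candidate ties at zero, so the "ranking"
# is just the input order -- the heuristic tells you nothing.
print(rank("novel issue of first impression", ["case_B", "case_A"]))
```

On the second query every count is zero, so the sort is a no-op: that's the footnote's failure mode in two lines.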
[2] This is a flip side of automation technology that gets glossed over. In a mega-litigation, predictive coding is a huge boon because you might have hundreds of thousands of documents. But in a more run-of-the-mill litigation, the time it takes to train the predictive coding engine might negate any advantage over just having a first-year associate look at the documents.
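The trade-off in the footnote is really just fixed cost versus per-document savings, and you can put rough numbers on it. The figures below (setup hours, review rate, screen-out fraction) are illustrative assumptions, not real billing data:

```python
# Hypothetical break-even between manual first-pass review and predictive
# coding. All numbers are illustrative assumptions.

def breakeven_docs(setup_hours, docs_per_hour, fraction_screened_out):
    """Documents needed before the fixed training overhead pays off.

    Manual review of N docs costs N / docs_per_hour hours.
    Automated review costs setup_hours plus reviewing only the docs the
    algorithm did NOT screen out: N * (1 - fraction_screened_out) / docs_per_hour.
    Setting the two equal gives N = setup_hours * docs_per_hour / fraction_screened_out.
    """
    return setup_hours * docs_per_hour / fraction_screened_out

# e.g. 40 hours to train/validate the model, an associate reviewing 60
# docs/hour, and the algorithm screening out 70% of documents:
print(round(breakeven_docs(40, 60, 0.7)))  # ~3400 documents
```

Under these made-up numbers, a matter with a few thousand documents sits right at the break-even point, while a mega-litigation with hundreds of thousands of documents is far past it, which is the footnote's point.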