I was referring to the section of the Telecommunications Act that limits foreign direct investment to a 20% ownership stake (which rises to 25% when a holding company is used, with FCC approval required for anything beyond that).
I'm not sure I communicated my point clearly, though: I agree that the old laws don't apply, because they haven't been updated for the internet era. I do know how the internet works, and that you can get data from 'anywhere'.
Nobody would really care about the issue, though, if there weren't ways to make the app harder to operate. Banning it from the app stores doesn't prevent people from watching TikTok, but it does make it more difficult. TikTok will get fewer users if people have to side-load a native app or use a PWA, and it will take a significant revenue hit from ads being blocked in PWA or browser views. That hands competitors an artificial advantage that network effects can solidify over time.
But the point I was trying to make is that the US should address its failure to keep legislation up to date with national security objectives, rather than singling out one company.
Rather than stick-up jobs on random international corporations, e.g. 'Sell to our country, OR ELSE', Congress should have written privacy legislation that applies its concerns to all companies, perhaps with stricter rules for foreign ones. A review process with penalties, up to forced removal from the app stores, could back it up. That could have _prevented_ TikTok's data collection by drawing a clear line they're not allowed to cross. And if they crossed it anyway, their app could be removed without the action looking obviously biased.
Now, the stated concerns (privacy violations and data transferred to China) are likely not the only problem. We would have to update the old laws and go beyond simply checking for foreign ownership. The new law would need to directly address the issue the old law was written for (potential abuse by a foreign entity). The privacy rules would need to be there, but we'd also need a process for reviewing algorithmic behavior and the manual processes around censorship and propaganda once companies reach a certain size or market reach. And we'd need safeguards against abuse of the law itself (which will happen, given the tendency of all our institutions to fall under regulatory capture at some point).
Frankly, as long as we're rewriting the laws, I'd prefer that algorithmic ranking systems be forced open to the public once they reach an audience large enough to significantly impact public discourse. Some might cringe and cry, 'Oh no, what about the intellectual property of that dozen or so megacorps!' And no doubt these companies would incur expense fighting the gaming of their now-public algorithms. That's just the cost of being so impactful, and it will drive innovation if they can't keep up. And if we're going to put regulation of censorship and algorithmic abuse in the hands of government organizations, public transparency is the best way to ensure abuses actually get discovered.
Basically, we need a framework to address these issues systematically. Not repeated, contentious, multi-year congressional fights triggered on a case-by-case basis. The current haphazard approach _might_ succeed against an occasional bad actor, but it's going to let dozens more through.
All they're doing now is generating talking points during an election year. It's political theatre, not problem-solving.