Who chooses? Anthropic's contract with the US military says that its models can't be used for 'domestic surveillance' (which is probably illegal anyway) or autonomous weapons systems, because Anthropic thinks the tech isn't reliable enough for that yet. The Secretary of War, Pete Hegseth, has decided this is an unacceptable limitation and, in the manner of the Trump government, declared he will use all sorts of legal powers to force Anthropic to remove these limits, including banning the rest of the government from using it, and/or threatening a takeover. He may not be allowed to use those powers in this way, but plenty of other Trump officials are already demonstrating that they don't recognise any legal or constitutional limits on presidential power, so that might be moot.

OpenAI reacted by saying that it would be willing to provide what the DoD wants, which prompted a backlash inside and outside tech, and Anthropic is suddenly on the app store charts. This might be another 'delete Uber' moment (that one had limited impact, but Lyft was a weaker alternative).

However, the underlying issue is worth considering with nuance. Sure, Anthropic can write a contract, but what should it say? There are only a handful of companies that can build this - should their CEOs, unelected and unaccountable to anyone except their shareholders, decide what capabilities the US military has? Should they decide what an acceptable level of accuracy is, when they're not the ones in combat and don't know the context? Meanwhile, the deeper context, of course, is that Anthropic is supposed to be the 'safe', 'ethical' lab, the one that's worried about AI killing us all - so how can it sell the military a system to kill people? Well, that's what the military does, and pacifism only works if everyone else is a pacifist too, which the last 5 years and the last 87 years demonstrate is a foolish plan. So who decides?
ANTHROPIC, OPENAI

Bubble scenarios
Another week, another viral blog post scaring the markets, this time from Citrini Research. As with previous ones, the ideas and arguments are not great (see e.g. this exasperated rebuttal from Citadel), but the fact that things like this provoke such strong reactions is the more interesting point. See this week's column. LINK, CITADEL

Meta x AMD - more circular revenue
Everyone wants a second source: Meta has done a deal with AMD for chips. Meta will buy up to 6 gigawatts of chips for up to $100bn over the next five years, and gets warrants to buy up to 10% of AMD at $0.01 per share (current price $196). That stake is worth $33bn now, but Meta would only get the full allocation if AMD's share price rises to $600, making 10% worth, well, $100bn. AMD did a similar deal with OpenAI last October. These kinds of deals always work well on the way up. META, AMD

OpenAI's money tap
The markets are getting more and more nervous, but OpenAI is still raising - or at least, locking in position before the music stops. This week it raised a $110bn round (including $50bn from Amazon), which is apparently a record (and triple the largest-ever IPO, Aramco's $29.4bn in 2019). LINK

AI-washing at Block?
Block (formerly Square), the PoS company run by Jack Dorsey, announced that it would cut 40% of headcount, attributing this to AI automation. Most people in tech attribute it instead to Dorsey having massively over-hired, much as he did at Twitter - Block went from 5.5k heads in 2020 to over 11k in 2024. There will be a lot more of this kind of AI-washing. LINK

The week in AI
Anthropic expanded on its complaint that the leading Chinese models are mostly just scraping US models. Of course they are - they'll use any advantage they can get, especially if the US blocks cutting-edge chip exports (though the story there seems to change every week) - but it's worth bearing in mind when you wonder why Chinese models are so good without equivalent capex.
LINK

More new/updated tools from Google: enhancements to Flow, its 'content studio', and a new version of Nano Banana, its flagship image generator. FLOW, BANANA

Selling back doors
The US is prosecuting a vendor who sold hacking tools used by the US government to anyone with enough money. This is why tech companies refuse to create backdoors in their products for law enforcement - if you create a skeleton key for the police, the police will lose it. LINK