Recently, people have started getting really concerned about autonomous AI doing its thing with their passwords and credit cards. But I don't think that much has changed since the first emergence of computers: computers already trade billions in stocks and currencies and run nuclear power plants.
The difference is that these systems are carefully monitored, predictable, and well understood. It is not about the intelligence of the software; it is about the level of control you give it. People who want to “push the boundaries” and see how far AI can go seem to be putting a 10-year-old with Microsoft Flight Simulator experience at the controls of a real 747.
I have started to use AI tools heavily in my day-to-day work. Not for email and calendar management, but for coding and document production. It all seems reasonably under control, with things sandboxed on my own machine. “Reasonably.” What is not under control is the chaotic user interface when it comes to granting permissions to things on your file system. Many of these questions will mean nothing to the IT layman.
Everyone is raving about Anthropic’s Cowork tool to automate knowledge work, and the stock prices of many incumbent database/information providers and consultancy firms are getting hammered. But the real revolution for Anthropic might be an acquisition that did not get a lot of press: in December, they acquired Bun, the JavaScript runtime and development toolkit. My sense is that Anthropic is building a trusted platform that can build and run code safely on your computer without the constant stream of questions about whether it is OK to install package x, y, and z. I think AI models will become a commodity; the real winner in the next generation of computing will be the player that offers the trusted platform that keeps AI in check.
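To make that idea a bit more concrete, here is a minimal sketch of what I mean by a trusted platform: permissions declared once, up front, instead of an endless stream of yes/no prompts. Everything here is hypothetical and invented for illustration; it is not an actual Anthropic or Bun API, just TypeScript you could run under any JS runtime.

```typescript
// Hypothetical sketch only: a declarative sandbox policy for agent-run code.
// None of these names are real APIs; the point is the shape of the idea.

type SandboxPolicy = {
  allowWrite: string[];   // directories the agent may write to, relative to the project root
  allowInstall: string[]; // packages the agent may install without asking
  fallback: "ask" | "deny"; // what happens to everything else
};

type AgentAction =
  | { kind: "write"; path: string }
  | { kind: "install"; pkg: string };

const policy: SandboxPolicy = {
  allowWrite: ["src/", "tests/", "docs/"],
  allowInstall: ["typescript", "vitest"],
  fallback: "ask",
};

// Decide what to do with a requested action against the declared policy.
function decide(policy: SandboxPolicy, action: AgentAction): "allow" | "ask" | "deny" {
  if (action.kind === "write") {
    return policy.allowWrite.some((dir) => action.path.startsWith(dir))
      ? "allow"
      : policy.fallback;
  }
  return policy.allowInstall.includes(action.pkg) ? "allow" : policy.fallback;
}

// Writing inside src/ goes through silently; an unlisted package still asks a human.
console.log(decide(policy, { kind: "write", path: "src/index.ts" })); // "allow"
console.log(decide(policy, { kind: "install", pkg: "left-pad" }));    // "ask"
```

The point of a scheme like this is that the human decision moves from hundreds of interruptions to one readable policy, which is exactly the kind of control a trusted platform could standardize.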
Let’s reread this post in 2036 and see what happened :-)