Hello and welcome to Eye on AI. In this newsletter…JPMorgan Chase is the top AI bank (again)…Microsoft debuts its AI agents…while tensions between it and OpenAI grow…and the worrisome results of war gaming AI competition.
Lots of companies make vague claims about how much they’re investing in AI or the benefits they’re seeing from deploying the technology. An increasing number of public companies have also begun pointing to AI as a risk factor in their SEC filings. But much of this is either marketing spin or boilerplate legal CYA. It’s not easy to tell, from the outside, how well any particular business is doing when it comes to AI deployment.
That’s why it’s always refreshing to see Evident Insights’ annual AI Index, which ranks 50 of the world’s largest banks on their AI prowess. Evident uses 90 publicly available data points, drawn from job postings, LinkedIn profiles, patents, academic papers, financial filings, executives’ public statements, and more, to come up with its ranking. While it’s not a perfect methodology, it is a lot more objective than most other gauges.
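To make the approach concrete, here's a minimal sketch of how a composite index like this can be assembled: normalize each indicator to a common scale, then take a weighted average. The indicators, weights, and figures below are invented for illustration and are not Evident's actual methodology.

```python
# Hypothetical sketch of a composite AI index: normalize each indicator
# to a 0-1 scale, then take a weighted average. The indicators, weights,
# and figures are invented; this is NOT Evident's actual methodology.

def min_max_normalize(values):
    """Rescale a dict of {bank: raw_value} to the 0-1 range."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all values are equal
    return {bank: (v - lo) / span for bank, v in values.items()}

# Raw indicators per bank (invented numbers).
indicators = {
    "ai_job_postings": {"Bank A": 420, "Bank B": 310, "Bank C": 95},
    "ai_patents":      {"Bank A": 88,  "Bank B": 40,  "Bank C": 12},
    "research_papers": {"Bank A": 35,  "Bank B": 50,  "Bank C": 8},
}

# Weight each indicator's contribution to the composite (weights sum to 1).
weights = {"ai_job_postings": 0.4, "ai_patents": 0.3, "research_papers": 0.3}

normalized = {name: min_max_normalize(vals) for name, vals in indicators.items()}
banks = indicators["ai_job_postings"].keys()
scores = {
    bank: sum(weights[name] * normalized[name][bank] for name in weights)
    for bank in banks
}

for bank, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{bank}: {score:.3f}")
```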
Running harder to stand still
JPMorgan Chase tops Evident’s ranking, as it did last year and the year before. In fact, all of the top four—rounded out by Capital One, Royal Bank of Canada, and Wells Fargo—have held their positions from last year’s ranking. But that stability belies what is actually going on, Alexandra Mousavizadeh, CEO of Evident Insights, tells me. Most of the banks in the index improved their overall scores, and the average score has climbed significantly over time. NatWest, ranked 18th in this year’s Index, for example, scored more points than the No. 10 bank did in last year’s ranking.
“The leaders are leading more, but the pace of growth has doubled since last year,” Mousavizadeh says. Banks that made the investment to get their data cleaned up and ready for AI applications, and that have hired AI talent and deployed AI solutions, are moving much faster than those that are further behind on these tasks.
AI’s ‘Flywheel’
This is evidence of AI’s “flywheel” effects. And it’s why companies that have taken a wait-and-see approach to the AI boom, wary of reports about how difficult it is to generate a return on investment from AI projects, may be making a big mistake. It can take time and significant investment for AI projects to begin to pay off, but once they do, these AI deployments can create a self-perpetuating and accelerating cycle of benefits. That flywheel effect means it can be impossible for late-movers to ever close the gap.
Or nearly impossible. A few banks have managed to jump up in the rankings this year—HSBC, Canada’s TD Bank, and Morgan Stanley all managed to break into the top 10 for the first time. In the case of HSBC, tying its AI efforts more tightly together and making some key hires helped, Mousavizadeh says. For Morgan Stanley, partnerships with OpenAI and Nvidia helped boost its position.
But JPMorgan Chase remains well ahead of its peers largely because it started investing in AI much earlier than others. It hired Manuela Veloso, a top machine learning researcher from Carnegie Mellon University, back in 2018 and stood up its own advanced AI research lab. In 2019, its then-chief data and analytics officer championed a centralized data and AI platform to move information into its own AI models much faster than it could before. It was an early adopter of generative AI models too and is now pushing a bespoke generative AI tool out to 140,000 employees. It is also requiring all its employees to complete an AI course designed to equip them to use the technology effectively. Critically, it says it’s starting to see value from this investment—and unlike most companies, it is putting some hard numbers against that claim. The company is currently projecting it will see $2 billion of “business value” from AI deployments this year.
Putting numbers behind ROI
While “business value” may still seem a bit wishy-washy—it’s not exactly as concrete a term as ROI, after all—putting actual dollar figures out there matters, Mousavizadeh says. That’s because once a bank puts numbers out, financial analysts, investors, and regulators will push for further transparency into those numbers and also hold the bank accountable for meeting them. That, in turn, should up the pressure on other global banks to start doing the same. (One other bank, DBS, has said it has seen $370 million in “economic value” from a combination of additional revenue, cost savings, and risk avoidance, thanks to AI.)
While Evident Insights currently ranks only financial institutions, these patterns—today’s winners continuing to win, and increasingly publishing real numbers—will likely be repeated in other industries, too. Those waiting on the sidelines for AI to mature or prove itself may find that by the time the evidence of ROI is clear, it is already too late to act.
With that, here’s more AI news.
Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn
AI IN THE NEWS
Microsoft rolls out its AI agents. The software giant has begun making its first set of AI “agents” widely available to customers, a few weeks after rival Salesforce made its own big push into the world of AI systems that can perform tasks for users. Microsoft’s first agents can qualify sales leads, communicate with suppliers, or understand customer intent. Some of these agents work within Microsoft’s GitHub Copilot, while others work within its Dynamics 365 application; the company is also letting customers build their own custom agents, Axios reported.
Google DeepMind chief and Nobel laureate now runs Gemini too. Google has put Demis Hassabis, the CEO of Google DeepMind and the guy who just shared the Nobel Prize in Chemistry for DeepMind’s work on AI models that can predict protein structures, in charge of its Gemini AI products, Axios reported. Sissie Hsiao, the executive who heads Gemini, will now report to Hassabis. She had been reporting to Prabhakar Raghavan, who had been leading the company’s core search and ad businesses and has now moved to a new role as the company’s chief technologist. Nick Fox, a long-time Google executive, is taking over Raghavan’s former role, minus Gemini.
Dow Jones, New York Post sue Perplexity. The two Rupert Murdoch-owned media organizations have filed a suit against generative AI search engine Perplexity, alleging that the startup has illegally copied their copyrighted content without permission and then profited from it through its search tool, Reuters reported. The suit calls out Perplexity both for inventing false information and wrongly attributing it to the news organizations, and for copying phrases verbatim from them while attributing that content to other sources. Perplexity did not immediately comment on the lawsuit. (Full disclosure: Fortune has a partnership with Perplexity.)
Microsoft-OpenAI relationship fraying. That’s according to a story in the New York Times, which cited multiple unnamed sources at both companies. OpenAI has been frustrated that Microsoft, which has already invested at least $13 billion in OpenAI and provided the computing power to train its powerful AI models, has declined to provide additional funds and even greater levels of computing resources over the past year, the paper reported. It also said OpenAI has chafed at the role played by Mustafa Suleyman, the DeepMind cofounder and former Google executive whom Microsoft hired in March to lead a new Microsoft AI division.
Elon Musk’s xAI launches an API. The billionaire’s AI company has launched an application programming interface (API) that will let developers integrate its Grok AI model into their software, TechCrunch reports. The API offers access to Grok’s multimodal language capabilities. Rival AI companies, such as OpenAI and Anthropic, already provide businesses access to their models through similar APIs.
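For a sense of what integrating a model through such an API looks like in practice, here’s a minimal sketch over plain HTTP. The endpoint, model name, and payload shape below follow the OpenAI-style chat-completions convention that most hosted model APIs have converged on; treat them as assumptions and check xAI’s own documentation for the actual values.

```python
# Minimal sketch of calling a hosted LLM API over HTTP. The endpoint,
# model name, and payload shape follow the common OpenAI-style
# chat-completions convention and are assumptions here -- consult
# xAI's API docs for the exact values.
import os
import requests

API_KEY = os.environ["XAI_API_KEY"]  # assumed env var holding your key

response = requests.post(
    "https://api.x.ai/v1/chat/completions",  # assumed endpoint
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "grok-beta",  # assumed model identifier
        "messages": [
            {"role": "user", "content": "Summarize today's AI news in one sentence."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```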
Penguin Random House updates copyright language to forbid AI training. The major publishing house became the first of the “Big Five” publishers to update its copyright page language to forbid the use of its books for the training of AI models without express permission, trade publication The Bookseller reported. There are several lawsuits pending against AI companies by authors claiming their copyright was violated when the tech companies used their books, without permission, to train AI systems.
Internet watchdog says AI-generated child sexual abuse imagery approaching ‘a tipping point.’ The Internet Watch Foundation is ringing alarm bells over a sharp increase in AI-generated child sexual abuse imagery being found on the public internet, The Guardian reports. In the past, most such imagery was hidden on the dark web. The AI-generated images are often indistinguishable from real photos, complicating the job of nonprofits and law enforcement agencies working to prevent and prosecute child sexual abuse.
EYE ON AI RESEARCH
What a “war game” exercise tells us about the prospects for international AI governance. That the prospects for effective international governance are pretty darn poor, it turns out. Researchers from several institutes at the University of Cambridge, the University of Oxford, Monash University, and Wichita State University analyzed the results of a simulation game called “Intelligence Rising” that was designed to explore how various actors—tech companies and governments among them—would respond to the development of increasingly advanced artificial intelligence, given their competing incentives. The analysis showed that the commercial arms race between companies usually combined with the geopolitical race between countries to make international cooperation extremely difficult.
The findings are a sobering look at how the quest for AI supremacy will likely play out. For instance, espionage and cyberwarfare were widely used in the simulations by various players to try to steal technology from other players. Partly as a result, players would often achieve advanced AI at exactly the same time, leading to multipolar geopolitical and market dynamics. Companies often had every incentive to push for safeguards externally—which might constrain their rivals—while secretly relaxing them internally in order to gain a competitive edge. Cooperation among companies often had to be dictated by governments, and yet government policy struggled to keep pace with rapid AI progress. Meanwhile, in many of the games, government players resorted to the use of military force to try to prevent a rival nation from gaining a decisive advantage (this often involved a Chinese invasion of Taiwan designed to disrupt the supply of advanced computer chips to U.S. tech companies). In sum, the results of the war games don’t give one much hope for our collective ability to put effective international safeguards around powerful AI systems. You can read the study here on arxiv.org.
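The incentive problem the games keep surfacing is essentially a prisoner’s dilemma: each lab’s best response is to relax safeguards no matter what its rival does, even though both would prefer mutual restraint. Here’s a minimal sketch of that logic with invented payoff numbers (these are not from the paper):

```python
# Toy prisoner's-dilemma payoff matrix for two AI labs choosing between
# maintaining safeguards ("safe") and quietly relaxing them ("race").
# Payoff numbers are invented for illustration, not from the
# Intelligence Rising study.

ACTIONS = ("safe", "race")

# payoffs[(my_action, their_action)] = my payoff
payoffs = {
    ("safe", "safe"): 3,   # mutual restraint: good shared outcome
    ("safe", "race"): 0,   # restrained lab falls behind a racing rival
    ("race", "safe"): 4,   # racing lab gains a competitive edge
    ("race", "race"): 1,   # mutual racing: worst collective outcome
}

def best_response(their_action):
    """Pick my action that maximizes my payoff given the rival's action."""
    return max(ACTIONS, key=lambda mine: payoffs[(mine, their_action)])

# "race" is the best response to either rival choice (a dominant strategy),
# so (race, race) is the only equilibrium even though (safe, safe) pays more.
for theirs in ACTIONS:
    print(f"If the rival plays {theirs!r}, best response is {best_response(theirs)!r}")
```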
FORTUNE ON AI
Exclusive: Waymo engineering exec discusses self-driving AI models that will power the cars into new cities —by Sharon Goldman
Investors pour into photonics startups to stop data centers from hogging energy and to speed up AI —by Jeremy Kahn
Luminance debuts AI assistant for lawyers that’s aimed at doing some of the legal grunt work —by Jenn Brice
TikTok parent confirms it fired rogue intern for tampering with its AI —by Sasha Rogelberg
AI CALENDAR
Oct. 28-30: Voice & AI, Arlington, Va.
Nov. 19-22: Microsoft Ignite, Chicago
Dec. 2-6: AWS re:Invent, Las Vegas
Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia
Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)
BRAIN FOOD
Could AI actually help improve our democracy? As we move inexorably closer to the U.S. presidential election in two weeks, most people’s focus is understandably on AI’s potential to spread misinformation and aid election interference efforts. These are very real concerns, and we should be thinking about ways to mitigate these risks going forward (unfortunately it’s too late to do much about this election). But could AI also help enhance our democracy?
Researchers from Google DeepMind published fascinating research in the journal Science this week about what they dubbed a “Habermas Machine” (named after the political philosopher Jürgen Habermas). The “Habermas Machine” was an AI model trained to take in opinions from individuals, summarize them into group statements, and then act as a mediator, helping the individuals move toward a group statement that the majority of participants would find acceptable. In tests with 5,000 participants drawn from a demographically representative sample of the U.K. population, the statements the AI model generated were judged to be more acceptable to more individuals than those developed by professional human moderators.
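At a high level, the loop the paper describes works like this: draft candidate group statements from individual opinions, score each candidate by how acceptable participants are predicted to find it, keep the best one, and refine it using participants’ critiques. Below is a hedged sketch of that loop; the two helper functions are trivial stand-ins for DeepMind’s actual generative and reward models, not its real components.

```python
# Hedged sketch of a Habermas-Machine-style mediation loop, reconstructed
# from the process described in the Science paper. The two helpers are
# trivial stand-ins for DeepMind's actual generative and reward models.
import random

def generate_statement(opinions, critiques, rng):
    # Stand-in for the generative model: sample a few opinions and join them.
    # In the real system, prior critiques would condition the redraft.
    sampled = rng.sample(opinions, k=min(2, len(opinions)))
    return "The group broadly agrees that " + " and that ".join(sampled)

def predict_acceptability(statement, opinion):
    # Stand-in for the reward model: crude word-overlap score.
    return len(set(statement.lower().split()) & set(opinion.lower().split()))

def mediate(opinions, rounds=2, candidates=4, seed=0):
    rng = random.Random(seed)
    critiques, best = [], None
    for _ in range(rounds):
        # Draft several candidate group statements.
        drafts = [generate_statement(opinions, critiques, rng)
                  for _ in range(candidates)]
        # Keep the draft predicted to be most acceptable across participants.
        best = max(drafts, key=lambda s: sum(predict_acceptability(s, o)
                                             for o in opinions))
        # In the study, human participants critique the winner; the
        # critiques feed the next round of drafting.
        critiques.append(f"participant critiques of: {best}")
    return best

opinions = ["voting should be easier", "local councils need more power",
            "public consultations should be binding"]
print(mediate(opinions))
```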
The idea is that an AI model like this could help run a citizens’ assembly, acting as a moderator and helping a group reach consensus. Some see such citizens’ assemblies as a way to overcome political polarization and find common ground, and also to give citizens more direct input into policymaking than they typically have under other forms of representative government.