Here’s my wishlist (not predictions) for WWDC 2023.
Full embrace of open source, on-device LLM technology. APIs, frameworks, and hardware-level optimisations. Make it very easy for developers to do local inference. Announce significant financial support for open source LLM projects.
Siri 2.0, powered by LLM technology. Amazing voice-to-text, and the ability to interpret diverse commands, rather than rely on arcane incantations. Better APIs for Siri, including server side APIs with simple webhooks.
Bias Siri towards “favourite” contacts. I have many contacts in my phone, but usually only want Siri to message or call a few. I know a lot of people with names that Siri struggles with. Siri should know that, 99% of the time, I only want it to message a handful of people.
Grammarly-level spelling and grammar checking in all text fields on iOS, iPadOS, and macOS.
A better universal text input component for developers and apps, to make rich text editing (or even native markdown) easier and more consistent.
A simple corporate security mode for macOS that wipes local file storage on restart, enforces updates, and enables corporate SSO for access.
Starlink support for iPhone and iPad. Apple dipped their toes into satellite connectivity recently; a truly global networking solution would be great. Perhaps Apple should create their own network.
External webcam support for iPad.
Re-embrace RSS at the OS level. Find a way to make it super easy and nice for laypeople (a simple subscribe to website button or something). A first-party reader app (not Apple News).
A video podcasts app that embraces RSS and provides an alternative to YouTube.
Smart Speed (like from Overcast) in the Podcasts app. Overcast is so buggy it’s unusable, but I can’t live without Smart Speed.
USB-C everything, all at once. Don’t make us wait for keyboards, mice, etc.
Use any emoji as a reaction to a message in iMessage.
UX overhaul for FaceTime in iMessage group chats. Make it super easy to use groups as drop-in voice or video chats (à la Houseparty, RIP).
iMessage-native events. Make it easy to book time with someone in a chat, or organise a group event.
Some social innovation in iMessage. Perhaps something like close-friends stories on Instagram. I’m not sure what specifically I want here, but I think iMessage should be a more complete social application beyond just messages and calls.
Good server side APIs for everything. Native, on-device apps and integrations are the best in most scenarios, but sometimes server side APIs would be better. For example, social apps should be able to integrate with contacts via nice server side APIs.
The metaverse is splintering. China already has its own digital world, but Europe and India are also splitting off. This proposed legislation, which directly bans unlicensed open source LLMs, will further this movement.
If modern AI is equivalent to electricity, the internet, or the printing press, is it a good idea for most of a continent to strangle it in the cot?
In many ways, the internet is already an entire standalone virtual world. ↩︎
The amended act, voted out of committee on Thursday, would sanction American open-source developers and software distributors, such as GitHub, if unlicensed generative models became available in Europe. While the act includes open source exceptions for traditional machine learning models, it expressly forbids safe-harbor provisions for open source generative systems.
Any model made available in the EU, without first passing extensive, and expensive, licensing, would subject companies to massive fines of the greater of €20,000,000 or 4% of worldwide revenue. Open source developers, and hosting services such as GitHub – as importers – would be liable for making unlicensed models available. The EU is, essentially, ordering large American tech companies to put American small businesses out of business – and threatening to sanction important parts of the American tech ecosystem.
An interesting conversation from 1989 from a mailing list, discussing the fall of the Berlin Wall.
Berlin den 10 November 1989

Unbelievable! Incredible! Historic!

As we sit here in West Berlin this morning, we are just discussing the news about the wall - its open and may soon be no more!!!

Amazing sights to see on the way to work this morning - DDR (East German) cars on the streets, DDR citizens everywhere, traffic jams near the wall, celebrations in the streets the whole night.

A historic day, and one to be celebrate. During the night, not only did people cross over via the border crossings, but people also climbed over the wall, danced on top of it (yes that’s right) and a part of the wall was even damaged. Can you picture people dancing on top of the Berlin wall?

Congratulations to the people of East Germany.

Gunter Zschoche, Michael Brady, et al
Nixdorf, Berlin
Did you know there are up to 500,000 people in modern slavery running cyber scams in lawless mafia zones? This is a fascinating and harrowing story.
What We Discuss with Nathan Paul Southern and Lindsey Kennedy:
Aided by corrupt government and law enforcement agencies, organized criminals hold thousands of people in modern-day slavery in Southeast Asia, forcing them to run cyber-scams worldwide.
Victims are lured by promises of lucrative online trading jobs, and abducted against their will when they arrive — for a minimum of six months — to work as cyber-slaves.
These captives are beaten, electrocuted, and tortured if they try to escape or don’t make enough money. Suicides, with victims jumping from balconies to their death, have become commonplace.
The Cambodian Prime Minister’s nephew has been implicated in the human trafficking trade, which is why embassies have been ignored when they plead for intervention.
What those of us in the Western world can do to fight back against these organized crime groups and ensure their dirty deeds can no longer be done dirt cheap enough to turn a profit.
What exactly is cyber-slavery? When gambling was banned in Cambodia during the early days of the pandemic, local operators quickly repurposed for online criminal activity — which rapidly spread across Southeast Asia to other gambling hubs. To remotely run their shady cyber-scams, organized criminals have enslaved thousands of people across Cambodia, Laos, and Myanmar. After being lured to a physical location with bogus job offers, victims are held against their will, beaten, and tortured if they don’t make enough money — or try to escape. Suicides are common. Chinese organized crime groups run these operations while local authorities turn a blind eye or even arrest victims who try to speak out.
Kudos to Grimes for facing the new reality of AI head on, looking past the inevitable challenges to find opportunities and benefits for artists.
Key takeaways for me:
Generative AI lowers barriers to entry for all creative pursuits. This devalues atomic mediums like illustration or drumming. But it also makes more complex, compound mediums like film, animation, and music production accessible to more people. Could we end up with fewer illustrators? Perhaps. But we should end up with a lot more of other types of creatives as a result.
One point that Grimes did not make is the overall economic impact of atomic mediums like illustration becoming accessible to laypeople: it will be much cheaper to start a business that depends on these services. Many businesses depend on these services to market themselves, and the ability for restaurant, bar, and retail founders to do it on their own should encourage more people to start businesses.
Originality is an interesting debate spurred by AI. Generative AI doesn’t plagiarise, but its output is of course built from its training data. Grimes makes the point that humans probably work the same way, anyway. This is relevant to how we consider AI-generated work regarding IP.
Most artists today already can’t make a living, she argues. In the case of illustration, for example, AI will presumably replace or wildly alter the handful of illustration jobs that exist, but eventually we’ll get to a place where all of these artists can produce much more for much less. Things will just work out.
Grimes brings up cinema. How many illustrators want to do more than illustrate, she asks, but can’t because of endemic Hollywood gatekeeping and exorbitant cost?

“Our film and cinema industry is extremely, like, the most gate kept thing,” she says. “It’s so expensive. It’s like the last, final frontier. Why do people need to just be making drawings? They could be making cinema. You could be making something as good as Pixar in your own bedroom, probably in the next five years.”

The future is abundant. The future is beautiful. But will it be original? These models are just aggregating content, in a way.
You are a neural net. You are a neural net that’s trained on everything around you and everything you’ve ever read and everything you’ve ever seen, and you make things that feel novel. But if you actually dissect them, they are not. You can say, like, punk feels totally novel. If you start tearing apart punk, you can very easily trace its ingredients and its influences, and its neural net.
The takeaway from this piece is: just because someone is an expert in a technology does not mean they are an expert in how it will impact economies and cultures.
The media is quick to jump on any AI-cautious take from AI engineers. But expertise in the technology underlying AI does not equate to expertise in the consequences of AI.
What would Johannes Gutenberg have expected the impact of the printing press to be? He invented it for practical purposes. He could not have foreseen the full impact of his invention in terms of its revolutionary effects on society, culture, and science.
We are better off thanks to this invention. Imagine if we had outlawed movable type because fifteenth-century metalworkers had superstitions about its effects on society.
Technology is about doing more with less. AI is another step in this story. We are thousands of years into this story of doing more with less and yet employment is great in the United States.
What I do not hear, however, is a more systematic cost-benefit analysis of AI progress. Such an analysis would have to consider how AI might fend off other existential risks — deflecting that incoming asteroid, for example, or developing better remedies against climate change — or how AI could cure cancer or otherwise improve our health. And these analyses often fail to take into account the risks to America and the world if we pause AI development.
I also do not hear much engagement with the economic arguments that, while labor market transitions are costly, freeing up labor has been one of the major modes of material progress throughout history. The US economy has a remarkable degree of automation already, not just from AI, and currently stands at full employment. If need be, the government could extend social protections to workers in transition rather than halt labor-saving innovations.
Experts from other fields often turn out to be more correct than experts in the “relevant” (quotes intentional) field — with the qualification, as the Einsteins of 1939 and 1954 show, that all such judgments are provisional.
Meta recently open sourced (to researchers) their LLM, which has led to some amazing on-device AI demonstrations. I’m also fascinated by local inference because it could prove incredibly cheap (compared to centralised server-side inference) and could amplify Apple’s chip advantage, given they have the most capable device chips for AI.
Zuckerberg is clearly contemplating an open source strategy for LLMs at Meta. It will be fascinating if eternal frenemies Apple and Facebook once again enable each other in this new paradigm (the same way Facebook enabled the App Store and therefore services revenue for Apple, which is now massive).
For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make and others’ use of those tools can, in some cases like Open Compute, drive down the costs of those things which make our business more efficient too. So I think to some degree we’re just playing a different game on the infrastructure than companies like Google or Microsoft or Amazon, and that creates different incentives for us.

So overall, I think that that’s going to lead us to do more work in terms of open sourcing some of the lower level models and tools. But of course, a lot of the product work itself is going to be specific and integrated with the things that we do. So it’s not that everything we do is going to be open. Obviously, a bunch of this needs to be developed in a way that creates unique value for our products, but I think in terms of the basic models, I would expect us to be pushing and helping to build out an open ecosystem here, which I think is something that’s going to be important.
Early CGI was mind-blowing at the time; in hindsight, it looks terrible and unwatchable. This loop is playing out with AI. A year ago, I was losing my mind over DALL-E every single day. Today, its output looks like absolute trash compared to Midjourney.
And this is the power of focus: DALL-E was a sensation, but ChatGPT is an astounding success. It is miles ahead of the competition because OpenAI has doubled down on it.
Meanwhile, Midjourney, which launched with a modest lead over DALL-E, is now miles ahead, with a tiny team. They’ve achieved this through focus.
Today’s LLMs are generalists. Midjourney is an example of how fantastic a more specialised tool can be. Soon, specialised tools for many use cases could emerge and once again blow our minds day by day. If a team as competent as the Midjourney team were to focus on a specialised AI for almost any use case, I’m confident we’d be shocked by the outcomes.
V5 was already a huge jump for the images I have been creating.
Was playing a bit with Dall-E yesterday and it hasn’t gotten any better in the last 6 months. Hope that changes.
Midjourney has instead been leaping ahead with every update!!
In the Soviet Union, even vacation was regulated by the Party.
However, what truly set the Soviet Union apart in its approach to paid time off was that it also provided ways for workers to fill up that time and with whom to do so. It encouraged workers to vacation with groups of relative strangers as opposed to their friends and families. They were all part of a collective and that umbrella united them. In the Soviet Union, after all, the collective—not the family—was the most important social unit. Under Stalin, the state constructed summer resorts and tourism bases throughout the region. Each year, a limited number of accommodations at these locations were offered at a reduced cost or—in some cases—free of charge to one in every ten Soviet workers.
While all Soviet citizens were entitled to paid time off, only a small number could afford to go on an actual vacation without state subsidies. Some members of the Communist Party liked it this way. “By withholding the drug of prosperity,” Whetten explains, “the Soviet government more easily inspires self-sacrifice.”
While drunks roamed Moscow, they were conspicuously absent from the tourist town of Yalta, where, Whetten says, “resting is a solemn business. Restful it is, but fun it is not!”
Not everyone stuck to the government-programmed agenda, though. “Resort affairs” were so common that they were satirized in films. Old Walls, from 1973, includes a scene where women return to their own rooms after having spent the night at somebody else’s.
A new system was able to capture exact words and phrases from the brain activity of someone listening to podcasts. This breakthrough is incredibly important.
It also highlights what I think will be one of the greatest impacts of modern AI: we currently have so much data that we cannot make sense of, from the quantified self movement to economic data, astronomy, and telemetry from our own bodies. AI will make sense of this data in ways we never could. This is an underrated benefit.
A noninvasive brain-computer interface capable of converting a person’s thoughts into words could one day help people who have lost the ability to speak as a result of injuries like strokes or conditions including ALS. In a new study, published in Nature Neuroscience today, a model trained on functional magnetic resonance imaging scans of three volunteers was able to predict whole sentences they were hearing with surprising accuracy—just by looking at their brain activity. The findings demonstrate the need for future policies to protect our brain data, the team says.
This is the first time whole sentences have been produced from noninvasive brain recordings collected through fMRI, according to the interface’s creators, a team of researchers from the University of Texas at Austin. While normal MRI takes pictures of the structure of the brain, functional MRI scans evaluate blood flow in the brain, depicting which parts are activated by certain activities.
The Navy uses animals like dolphins and sea lions to protect ships from divers and mines.
Navy spokesman Chris Haley says the animals have been defending the waters around the stockpile, holding roughly 25% of the United States’ 9,962 nuclear warheads, since 2010.
The U.S. Navy Marine Mammal Program deployed military dolphins as early as the Vietnam War and as recently as the 2003 U.S.-led invasion of Iraq.
When protecting harbors and ships from mines, as they do at Naval Base Kitsap, the dolphins use their extraordinary biological sonar to detect hazards beneath the surface, whether tethered to the sea floor or buried beneath sediment.
Many argue against meritocracy because they view life outcomes as overwhelmingly deterministic. If we are the sum of our DNA, culture, how we were raised, and what has happened in our lives, does it make sense to hold individuals accountable for their actions? If free will is limited, what is the value of meritocracy?
I’m sympathetic to this argument, but I don’t think it devalues meritocracy. A meritocracy, or a system where the most capable people are given the most important jobs, is simply the best way to allocate human capital. If we want the best outcomes for society, we need to employ the best people for every job. A meritocratic system is the best way to do this. This is similar to the principle of allowing markets to allocate resources in industry.
Meritocracy has flaws. If someone is the sum of what has happened to them, it doesn’t make much sense to punish them with cruelty for their actions. But I’d argue that this is true regardless of what you think of meritocracy. Society should be outcome driven. If meritocratic systems are the best way to assign human capital to important problems, we should embrace them. If they fail to take care of people who don’t fit within the meritocracy, we should use other solutions to support those people.
Merit is a central pillar of liberal epistemology, humanism, and democracy. The scientific enterprise, built on merit, has proven effective in generating scientific and technological advances, reducing suffering, narrowing social gaps, and improving the quality of life globally. This perspective documents the ongoing attempts to undermine the core principles of liberal epistemology and to replace merit with non-scientific, politically motivated criteria. We explain the philosophical origins of this conflict, document the intrusion of ideology into our scientific institutions, discuss the perils of abandoning merit, and offer an alternative, human-centered approach to address existing social inequalities.
Yoko Ono has to be one of the greatest targets of public bullying in modern history, mostly fuelled by ridiculous theories about her impact on the Beatles, and misogyny. Most music nerds I know despise Ono despite knowing very little about her or her work. Not everyone biased against Ono is a misogynist, but I think their views are informed by dumb arguments made by misogynists before them. Few Ono skeptics have ever engaged with her work or really considered her impact, which is a shame.
Ono is 90, so the world is likely to consider her legacy in a deep way sometime in the next decade. Some thoughts:
Ono was massively criticised for things that women now have cultural permission to do. There is a racial aspect to this too. She deserves massive credit for being willing to receive ire. Without trailblazers confronting social stigma, walls like this would never come down. It’s OK to not appreciate the aesthetics of her work, but it should be indisputable that she deserves credit.
Ono did not break up the Beatles. The Beatles: Get Back (2021, Peter Jackson) shows not only a beautiful relationship between the members of the Beatles, but a genuine relationship between Ono and Paul. The film made it clear that the break up of the Beatles was multifaceted and, in my opinion, seemed to be primarily the result of John’s diminishing interest in the band.
It was a good thing that the Beatles broke up. All members of the Beatles produced some of their best work after the breakup, including John, with the help and inspiration of Ono. Another album would have been nice, but I think that was blocked by the death of Lennon more so than the breakup of the band, which seemed inevitable.
There is something beautiful about Lennon and Ono’s mutual inspiration. John and Yoko were massively inspired by each other.
Lennon’s return to music in 1980 (culminating in Double Fantasy) was triggered by the B-52s. He heard Rock Lobster and considered it an indication that Ono’s sound was now mainstream (the sonic parallels are palpable).
It’s interesting that Double Fantasy, Ono’s final antemortem collaboration with Lennon, sold nearly as many units in Australia as the UK (285,000 and 300,000 respectively) despite their wildly different populations, and the UK being Lennon’s homeland.
The Carters by Beyoncé and Jay-Z reminds me of Double Fantasy in format and concept. I’d love to know if they were at all inspired by this.
Ono-Lennon politics was more nuanced than people realise. John in 1978: “The biggest mistake Yoko and I made in that period was allowing ourselves to become influenced by the male-macho ‘serious revolutionaries’, and their insane ideas about killing people to save them from capitalism and/or communism (depending on your point of view). We should have stuck to our own way of working for peace: bed-ins, billboards, etc.”
It’s disappointing to think that the world will only reassess the conventional wisdom on Ono after her death, if at all. But to any music lover with an open mind: please listen to Lennon and Ono’s antemortem collaborations.
Ben McKenzie recently appeared on Bill Maher to promote his new book and take down crypto. I’m down to debate technology, but this was one of the most intellectually shallow segments on any topic in recent memory. In the highlights of this post, I’m going to respond to some of the specific points McKenzie raised.
In general, crypto skeptics seem to always make the same mistakes:
They view crypto as monolithic. In practice, “crypto” is a mostly useless banner. Crypto as a currency or store of value is very different to crypto as a distributed computing platform, proof-of-ownership, or authentication technology.
They judge the technology by early-adopter use cases. Most of the content on the internet in the early days was stupid and useless. Initial use cases for NFTs are similarly trivial. This doesn’t mean NFTs are an uninteresting or useless technology.
They assume that, because they haven’t personally been impacted by a budding technology, there are no viable or interesting use cases whatsoever.
One of the frustrating things about this space is that crypto proponents do all of these things too. It is hard to defend crypto, and it’s impossible to defend some of the terrible things that have happened over the past few years, but technologists who write it off will miss valuable opportunities. I’ll leave the pro-crypto debate to the true advocates, but many in tech are missing the point on this one.
Crypto is a fantastic litmus test for understanding the motivations of people working in the tech industry. Those who blindly gush over the most ridiculous use cases are pretty cringe, and have a poor understanding of the value of this technology. Those who blindly despise the technology without putting any effort into understanding it are clearly uninterested in new technologies more broadly (even if they claim otherwise). Those with a more nuanced take are more likely to have interesting views on other emerging technologies.
It’s a Ponzi scheme … the numbers in crypto are not real.
All money is a shared delusion: a collective agreement to trust a metaphysical representation of value. USD is a more credible representation than Dogecoin, but the idea that new representations shouldn’t be allowed to emerge is ridiculous. Information technology has always transformed money, and this will continue.
You’re selling a picture that I could also just find online … you can right-click and save it … it’s a JPEG.
Technically speaking, it’s not even the art itself — it’s the receipt for the art itself. What you bought is the link to a receipt of a bored ape.
When you purchase any art, you purchase the certificate of authenticity. It is incredibly easy to acquire a duplicate of any piece of artwork.
But, even if you think that is dumb, this is an example of conflating a single use case with an entire technology. NFTs verify the ownership and origin of a piece of the internet. This has many applications beyond digital art, especially in a world of AI-generated content, where authenticity is increasingly difficult to prove. With NFTs, you can verify that a song came from Drake, a post came from the President, and that a user truly possesses something (e.g., tickets, user authentication/permissions, contracts/agreements).
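To make the proof-of-ownership idea concrete, here is a toy sketch (my own illustration, not how any real chain is implemented — it omits signatures, consensus, and everything cryptographic): ownership of a token is just an append-only record of transfers that anyone can replay to verify the current owner.

```python
# Toy provenance ledger. A real NFT achieves this with signed
# transactions replicated on a public chain; this sketch only
# illustrates the verification idea.

class Ledger:
    def __init__(self):
        self.transfers = []  # (token_id, from_owner, to_owner)

    def mint(self, token_id, creator):
        # Record the token's origin: a transfer from nobody to the creator.
        self.transfers.append((token_id, None, creator))

    def transfer(self, token_id, seller, buyer):
        # Only the current owner can transfer the token.
        if self.owner_of(token_id) != seller:
            raise ValueError("seller does not own the token")
        self.transfers.append((token_id, seller, buyer))

    def owner_of(self, token_id):
        # Replay the full history to derive the current owner.
        owner = None
        for tid, src, dst in self.transfers:
            if tid == token_id:
                owner = dst
        return owner

ledger = Ledger()
ledger.mint("ape-1", "artist")
ledger.transfer("ape-1", "artist", "collector")
print(ledger.owner_of("ape-1"))  # collector
```

Anyone with a copy of the ledger can answer both "who owns this now?" and "did it really originate with the artist?" — which is the property that matters, not the JPEG itself.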
You may notice something with crypto currencies: they don’t do anything. There is no product, good, or service. For someone to win, someone else has to lose. It’s like poker … you don’t create value.
This misses the biggest use case for modern cryptocurrencies. Ethereum and others are distributed computing platforms. They store data and run code. They do this pretty poorly, but every technology starts with such limitations; they very literally have utility, even today. The argument that they don’t do anything is pretty lazy avoidance of the common counterarguments.
An interesting study that compares diet to anxiety and depression symptoms.
The gut microbiome may be both helpful and harmful, and not only is it affected by diet, it has also been shown to affect mental health including personality, mood, anxiety and depression.
The change from a predominantly Western diet to vegetarian, Mediterranean and ketogenic diets led to changes in calorie and fiber intake.
After the diet change, we observed significant changes in measures of anxiety, well-being and happiness, and without changes in gut microbiome diversity. We found strong correlations between greater consumption of fat and protein to lower anxiety and depression, while consuming higher percentages of carbohydrates was associated with increased stress, anxiety, and depression.
A fascinating analysis of the history of totalitarianism, with the primary example of Mao’s China.
The biggest casualty of totalitarian terror is not the unprecedented amount of physical destruction, but the even greater amount of psychological devastation. By crippling and corrupting the minds and spirits of the people, it maims society on a civilizational level.
Totalitarianism begins the process by dividing the people into us and them, comrades and enemies, allies and foes. This is done by means of ideology. The communists explain history as a struggle between the working class and the capitalists. The Nazis explained it as a struggle between races. Once you accept the initial assumptions, everything else, every single event or process, can be explained through them … An ideology thus acts as a kind of straight-jacket, restraining the thoughts of those who follow it by binding everything to a single cause and a single explanation. But for the naïve, it is a revelation. Their new “key” gives them an inflated sense of understanding, makes them think that they are possessed of deep insights into the hidden workings of the world—insights to which the uninitiated masses are blind.
In Nineteen Eighty-Four, the citizens of Orwell’s fictional dystopia have to make their way daily to what is known as a “Two Minutes Hate” session. In these sessions, the participants are made to express their hate and fury by screaming and shouting at a film of their ideological enemy. Well, the CCP had their own “Two Minutes Hate” called dou di-zhou—“struggle against the landlords”—which lasted more than two minutes and involved real “enemies,” whose crime was not that they were necessarily landlords, but that they were better off than the rest. “Those designated as targets were made to stand facing a large crowd, and people were psyched up and organized to come forward and pour out their grievances against them … Village militants and thugs would then inflict physical abuse, which could range from making the victims kneel on broken tiles on their bare knees, to hanging them up by their wrists or feet, or to beating them, sometimes to death, often with farm implements.”
This research suggests that insects are attracted to artificial light because they use light to maintain a consistent altitude. Artificial light is a super stimulus, exposing a bug in their hardware/software. Sounds a lot like gambling for humans.
Contrary to the expectation of attraction, insects do not steer directly toward the light. Instead, insects turn their dorsum toward the light, generating flight bouts perpendicular to the source. Under natural sky light, tilting the dorsum towards the brightest visual hemisphere helps maintain proper flight attitude and control. Near artificial sources, however, this highly conserved dorsal-light-response can produce continuous steering around the light and trap an insect. Our guidance model demonstrates that this dorsal tilting is sufficient to create the seemingly erratic flight paths of insects near lights and is the most plausible model for why flying insects gather at artificial lights.
A valuable engineering principle: you can’t usually have both efficiency and resiliency. Efficient systems are typically easy to overload, while resilient systems cost more.
Important to keep this in mind when building software.
Improvements to efficiency often trade off against resiliency, and the further we optimize a system, the worse this tradeoff tends to become.
for a long time, I generally believed that higher is always better: we should aim for as close to 100% utilization as we can. Why? Well, anything less than 100% represents unused hardware capacity, which means we’re wasting resources. If a service isn’t maxing out its CPU, we could move it onto a smaller instance or run some additional work on that node.
This simplistic intuition, it turns out, is rarely quite right.
Suppose we achieve that ideal and our service is running close to 100% utilization. What happens now if we go unexpectedly viral and receive an unexpected burst of traffic? Or if we want to deploy a new feature which takes a little bit of extra CPU on each request?
If we are at 100% utilization, and then something happens to increase load, then we’re in trouble!
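The intuition above can be made quantitative with basic queueing theory. In the textbook M/M/1 model (a simplification I’m using for illustration; real services vary), the average time a request spends in the system is S / (1 − ρ), where S is the service time and ρ is utilization — so latency blows up as utilization approaches 100%:

```python
def avg_time_in_system(utilization, service_time=1.0):
    """Average time in system for an M/M/1 queue: W = S / (1 - rho)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time / (1.0 - utilization)

# Latency grows slowly at first, then explodes near saturation.
for rho in (0.5, 0.8, 0.9, 0.99):
    print(f"utilization {rho:.0%}: avg latency "
          f"{avg_time_in_system(rho):.1f}x service time")
```

At 50% utilization a request takes about 2x its service time; at 99% it takes about 100x. That steep tail is why the headroom we “waste” at lower utilization is exactly what buys resiliency against a traffic burst.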
Important quote from George Box. I came across it in the context of AI, but it applies to all science. We’re just building models for understanding nature, which is probably too complex to truly understand, but some of our models are useful nonetheless.
In my opinion, the credibility of AI-risk researchers has absolutely plummeted in the past few months. What has become clear is that this field is primarily driven by hypothetical posturing rather than any kind of real research and experimentation.
It is easy to create a convincing argument for how just about any technology could lead to the end of the world as we know it, especially if there is no burden to prove it through experimentation.
There absolutely will be negative consequences. Every single technology, including fire and the wheel, has had negative and unexpected consequences. The story of human history is the invention of technology to solve today’s problems, causing tomorrow’s problems, and using new innovations to solve those too. What has been relatively constant is that tomorrow’s problems tend to be a lot better than yesterday’s.
There is little evidence that AI will end the world. There is little evidence that the current generation of AI technology will exponentially evolve into AGI in a short timeframe. There are certainly many positive use cases for AI. Artisanal bread is delicious, but nobody wants artisanal radiology when it is an order of magnitude worse than an AI-led solution.
Evidence of extreme risk to humanity may emerge over the coming months and years. If it does, we should slow down. But until that happens, we need to embrace the Promethean gift in front of us as it could make life dramatically better for billions of people.
Across human and biological history, most innovation has been incremental, resulting in relatively steady overall progress. Even when progress has been unusually rapid, such as during the industrial revolution or the present computer age, “fast” has not meant “lumpy.” These periods have only rarely been shaped by singular breakout innovations that change everything all at once. For at least a century, most change has also been lawful and peaceful, not mediated by theft or war.
So, the most likely AI scenario looks like lawful capitalism, with mostly gradual (albeit rapid) change overall. Many organizations supply many AIs and they are pushed by law and competition to get their AIs to behave in civil, lawful ways that give customers more of what they want compared to alternatives. Yes, sometimes competition causes firms to cheat customers in ways they can’t see, or to hurt us all a little via things like pollution, but such cases are rare. The best AIs in each area have many similarly able competitors. Eventually, AIs will become very capable and valuable. (I won’t speculate here on when AIs might transition from powerful tools to conscious agents, as that won’t much affect my analysis.)
The worst failures here look like military coups, or like managers stealing from for-profit firm owners. Such cases are bad, but they usually don’t threaten the rest of the world. However, if there are choices where such failures might hurt outsiders more than the organizations who make them, then yes we should plausibly extend liability law to cover such cases, and maybe require relevant actors to hold sufficient liability insurance.
Some fear that, in this scenario, many disliked conditions of our world—environmental destruction, income inequality, and othering of humans—might continue and even increase. Militaries and police might integrate AIs into their surveillance and weapons. It is true that AI may not solve these problems, and may even empower those who exacerbate them. On the other hand, AI may also empower those seeking solutions. AI just doesn’t seem to be the fundamental problem here.
In the media, we hear many competing narratives about the evolving state of global power. We are entering a multi-polar world; China is surpassing the States; Europe will be a superpower; India is going to roar past China. I thought it would be helpful to take a look at the GDP data to see what it tells us.
Macron wants Europe to be a superpower in the new multi-polar world. European Union GDP was comparable to the US up until the mid 1980s. Europe is not catching up to the US in terms of global power; the gap is widening. If anything, Europe is being left behind, as the US, China, and India boom. If Europe were ever to be a superpower, it would’ve been in the 70s and 80s.
The US is still much larger than China in terms of GDP, though China has now eclipsed the European Union. It remains to be seen whether China will eclipse the US, though it will surely get close enough to be competitive in terms of global power.
India, the largest democracy in the world, still has a long way to go in order to pass China.
GDP is far from the only measure of power, economic or otherwise, but it is probably our best.
SaaS Capital’s SaaS benchmarking survey predicts private SaaS company valuation multiples of 4.6x ARR in the current market, significantly depressed compared to two years ago.
B2B private SaaS companies with less than $1 million in ARR reported median growth of 51% with median net revenue retention of 98%, yielding a predicted ARR valuation multiple of 5.8x.
B2B private SaaS companies with $1 million to $3 million in ARR reported median growth of 41% with median net revenue retention of 100%, yielding a predicted ARR valuation multiple of 5.1x.
B2B private SaaS companies with $3 million to $5 million in ARR reported median growth of 35% with median net revenue retention of 102%, yielding a predicted ARR valuation multiple of 4.6x.
B2B private SaaS companies with $5 million to $10 million in ARR reported median growth of 30% with median net revenue retention of 102%, yielding a predicted ARR valuation multiple of 4.2x.
B2B private SaaS companies with $10 million to $20 million in ARR reported median growth of 35% with median net revenue retention of 104%, yielding a predicted ARR valuation multiple of 4.7x.
B2B private SaaS companies with more than $20 million in ARR reported median growth of 27% with median net revenue retention of 102%, yielding a predicted ARR valuation multiple of 4.0x.
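The arithmetic behind these figures is simple: predicted valuation is just ARR times the band’s predicted multiple. A hypothetical sketch using the survey’s numbers (the band boundaries and inclusive/exclusive edges are my assumption, not spelled out above):

```python
# Hypothetical sketch: map an ARR figure to the survey's predicted
# multiple for its band, then compute the implied valuation.
# Bands are (upper ARR bound in $M, predicted ARR multiple).
BANDS = [
    (1.0, 5.8),
    (3.0, 5.1),
    (5.0, 4.6),
    (10.0, 4.2),
    (20.0, 4.7),
    (float("inf"), 4.0),
]

def predicted_valuation(arr_millions: float) -> float:
    """Implied valuation ($M) = ARR x predicted multiple for its band."""
    for upper_bound, multiple in BANDS:
        if arr_millions <= upper_bound:
            return arr_millions * multiple
    raise ValueError("unreachable: last band is unbounded")

# e.g. a company at $2M ARR falls in the $1M-$3M band (5.1x):
print(f"${predicted_valuation(2.0):.1f}M")
```

Note the non-monotonic bump at the $10M–$20M band (4.7x), driven by that band’s higher reported growth (35%) and retention (104%).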