Why I Will Not Join an “AI” Company
Recently I have been approached by lots of companies, and their opening lines are all the same: “I’m from ABC, an AI company that … ”, “We are the fastest-growing AI company, raised $xxx at a $yyy valuation… ”
Ok, ok, I still will not join an “AI” company. Here’s why.
First, “AI” has become a buzzword for marketing, used to win users/customers and to attract VCs when raising money. The hype it has created in both the tech and VC industries is unhealthy and unsustainable.
What does “AI” even mean in different contexts? Are you just talking about statistical models and machine learning algorithms, supervised or unsupervised? Or a long-term, end-state vision of computing? Or some big-data processing and analytics that can surface data insights? Or a marketing term you use to attract attention for whatever reason? Or a term you use without thinking, because you don’t know exactly what it means either but it feels shiny and cool to say?
Avoid a crowded place: when everyone, even people who don’t know computer science or programming, starts talking about something as if they were experts, you know it is at a local top in the market.
Second, the impact and magic of “AI” are wildly exaggerated. I have friends who work at cutting-edge tech companies like Facebook, Amazon, Alibaba, and Pinterest as data scientists and AI engineers. What do they say? They say only the simplest ML algorithms work well in practice, e.g. k-means, k-nearest neighbors, and basic clustering. These algorithms are so dead simple that you probably cannot even call them “AI”, just “statistics”. Close friends at a social network company told me their “AI” production system is updated about once a year, if ever, because it is so fragile that any attempt to improve these simple statistical models actually makes the results (e.g. user engagement rate, click-through rate) worse in real A/B tests! Hilarious, right?! If these top tech companies cannot even harness or extract the power of “AI”, how can the rest of the world?
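To give a sense of just how simple these methods are, here is a minimal k-means sketch using scikit-learn on made-up data. Everything in it (the synthetic “user features”, the number of clusters) is hypothetical and purely for illustration, not taken from any real production system.

```python
# A minimal k-means example with scikit-learn, on synthetic data.
# The feature matrix and cluster count below are made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Pretend these are per-user engagement features: 1000 users, 4 features.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))

# Fitting 5 clusters is essentially the entire "model".
model = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = model.fit_predict(X)

print(labels[:10])             # cluster assignments for the first 10 users
print(model.cluster_centers_)  # 5 centroids in the 4-dimensional feature space
```

A handful of lines and no deep learning, yet this kind of clustering is what quietly sits behind a lot of what gets marketed as “AI”.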
Last, “AI” is only useful when there is a real business use case, and “AI” itself is not scalable enough to be the main business of a company. “AI” without a concrete business use case is useless; you have to apply it to scenarios like online shopping, content recommendation (video, text, photos, etc.), or industrial automation for it to deliver value. Also note that a successful AI strategy is highly specific to each company and most likely not transferable, even between companies in the same industry. E.g., in e-commerce, what works for Amazon will likely be vastly different from what actually works for Shopify, Walmart, and Etsy; same for Facebook, Snap, and TikTok. I think of this as “context-driven” “AI”, because a huge set of factors shapes the final deliverables, e.g. business model, leadership background, company culture, business priorities, hiring budget, the existing team’s skills, the specific use cases, etc. So in practice, due to this complexity, there is no generic, scalable “AI” strategy or business, and there will not be one. Any attempt to build one will likely drift into the consulting business, which has been proven to be human-heavy and unscalable, like Accenture.
Do note that I’m talking about “AI” itself, not “AI” infrastructure. The infrastructure can indeed be generic, e.g. notebooks like Jupyter, frameworks like TensorFlow, and TPUs/GPUs/CPUs, because they are not actually “AI”. Such plays are by nature SaaS, cloud services, or hardware, so selling “AI” infra can be profitable, as GCP/AWS/Nvidia/AMD have shown. Selling “AI” itself is neither scalable nor generic nor profitable.
An example is C3.ai, whose stock ticker is literally “AI”: it has dropped 85% since its IPO, while the broad US stock indices dropped only around 10–20%. Unlike growth stocks or crypto, I don’t expect its price to come back.
