The dark matter of intelligence
Welcome to Gradient Descent, your publication on what’s new in AI, no PhD required.
Like the newsletter? Join our WhatsApp community through this link.
This week’s iteration:
China cracks down on algorithms
“Hey, Alexa! Are you trustworthy?”
The dark matter of intelligence
Let’s go 🚀
China cracks down on algorithms
The story:
China recently introduced something called the “Internet Information Service Algorithm Recommendation Management Regulations”. Its purpose? To regulate recommendation algorithms (and, in the process, further tighten control over speech online).
What’s it supposed to do? According to the powers that be, the new rules are meant to “safeguard national security and social public interests” and “protect the legitimate rights and interests of citizens”.
How does it do that? The regulation seeks to outlaw the use of algorithms to treat users differently based on their transaction history and consumer preferences. It’s unclear how strictly this will be enforced, but in theory it means no more “what to watch next” on your favorite streaming platform, and no more “people like you also bought X” on your favorite online store.
The good. On the upside, the regulation introduces some seemingly positive things, such as protections for minors, stating that “the providers of algorithmic recommendation services must not push information to minors that might impact their physical and psychological health, such as possibly leading them to imitate unsafe behaviors and conduct contrary to social mores or inducing negative habits”.
The bad. On the downside, it also states that algorithms cannot be used to “influence online public opinion”, must “adhere to the mainstream value orientation” and “carry forward the Core Socialist Values”.
When will this happen? The new rules come into force on March 1, 2022.
Want to know more? You can read a translated version of the regulation here.
Here’s why it matters: I think most of us would agree that not having to figure out what to watch next is great, and that free speech online is generally nice to have. This regulation clearly hurts both of those things.
“Hey Alexa, are you trustworthy?”
The story:
How much do you trust your smart speaker? Which one would you trust more: Apple's or Google's? And do you know why the answer is Apple?
It’s all about trust. A new MIT study shows that many factors determine how much we trust our voice assistants. As it turns out, there's a BIG difference between saying "hey Siri" and "hey Google". Using a personified name (Siri) makes a device seem more trustworthy by masking the connection between the device and the company that made it. It may even help us forget that this company now has access to our data!
The more social, the better. Users are more likely to trust voice-user interfaces that exhibit humanlike social behaviors (orienting their gaze, for example). This may well be why many companies are looking to create smart speakers with a more "embodied" form factor.
Want to know more? Check out this post from MIT news, or this paper from the Personal Robotics group.
Here’s why it matters: Despite the fast progress in recent years, it’s still very early days for AI assistants such as Alexa and Siri. Understanding how people interact with these assistants, and what helps us trust them, will pave the way for the next generation of innovations and make the integration of AI assistants into our lives ever easier.
The Dark Matter of Intelligence
As babies, humans learn how the world works - largely through action and observation. We experiment, we observe, and we learn what we later call “common sense”.
What’s common sense good for? Common sense lets humans learn new things without massive amounts of teaching. It provides the background knowledge that enables us to quickly pick up new tasks - tasks that the best AI systems need massive datasets and enormous amounts of training to learn… or can’t perform at all.
This raises the question: How do we get machines to do the same?
We don’t have the answer yet. But enabling AI to learn from the data itself, rather than from labeled examples - an approach called self-supervised learning - is likely to be one of the essential ingredients.
Self-supervised learning may be the dark matter of intelligence, and could prove essential for reaching human levels of intelligence.
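To make the idea concrete, here's a minimal, hypothetical sketch (assuming Python with PyTorch; the toy data, masking task, and model are my own illustration, not Meta's method). We hide part of each input and train a model to predict the hidden part from the visible part, so the supervision signal comes from the data itself rather than from human labels:

```python
# Toy self-supervised setup: predict the hidden half of each input
# from the visible half. No human labels anywhere - the data supervises itself.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Fake "world": 1,000 samples where the last 4 features depend on the first 4.
x = torch.randn(1000, 4)
data = torch.cat([x, 2.0 * x + 0.1 * torch.randn(1000, 4)], dim=1)

model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(500):
    visible, hidden = data[:, :4], data[:, 4:]     # pretext task: mask the last 4 features
    loss = nn.functional.mse_loss(model(visible), hidden)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction error: {loss.item():.4f}")  # approaches the noise floor (~0.01)
```

Models pretrained this way on real data (text, images, video) can then be fine-tuned on a small labeled dataset - which is exactly the “background knowledge” effect described above.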
Who’s working on this? Yann LeCun, Chief AI Scientist at Meta and professor at NYU, has long worked on self-supervised learning, and is a driving force behind much of the most important research on the topic.
Want to learn more? Be sure to check out the following:
Blogpost. The Meta team wrote an excellent post last year explaining what self-supervised learning is and their approach to it (read it here: Self-supervised learning: The dark matter of intelligence).
Paper. Meta also came out with an excellent research paper called Self-supervised Pretraining of Visual Features in the Wild, which is a great read if you have a technical background (stay away if you don’t like math).
Podcast. If you’d like a more leisurely, but equally profound, tour of self-supervised learning, check out the most recent episode of the Lex Fridman podcast.
When I’m not writing this newsletter, I’m helping companies transform data into value through my company Astek Alpha.
Looking to transform your data into value?
Wondering what machine learning could do for your company?
Trying to scale your AI solutions?
Let's grab a coffee and find out how I can help you.