Digital to Physical to Neural
The only way to know what's valuable to someone is to think like them.
The only way to understand how someone thinks is to know what they think.
That is mind reading.
But that doesn't exist today. Today, the only way to know how someone thinks is to observe what they do and say, how they write, their digital activity, and so on.
The internet today runs on these fragments of us—likes, searches, scrolls.
It's primitive.
We are at the very beginning of what I hypothesize will be a three-phase shift in how the world captures, models, and interacts with human experience:
- Digital data (where we are now): your online activity, messages, history, liked videos, and so on
- Physical data (within five years): what you say, see, and do during the day
- Neural data (within ten): what you think during the day
I categorize these as Type 1, 2, and 3 data mediums, respectively. As you move from 1 to 2 to 3, the signals become higher fidelity, more valuable, and less abstracted. They also become harder to capture from users.
This is not speculation. It is the logical extension of every personalization breakthrough of the last twenty years, and the only path to truly intelligent systems that understand you better than you understand yourself.
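To make the taxonomy concrete, here is a minimal sketch of the three data mediums as a typed model. Everything here is an assumption for illustration: the field names and numbers are invented, and only the ordering of the values carries the claim.

```python
from dataclasses import dataclass
from enum import Enum


class DataType(Enum):
    """The three data mediums described above."""
    DIGITAL = 1   # Type 1: online activity, messages, likes
    PHYSICAL = 2  # Type 2: what you say, see, and do during the day
    NEURAL = 3    # Type 3: what you think during the day


@dataclass(frozen=True)
class DataMedium:
    kind: DataType
    fidelity: float            # how closely the signal tracks lived experience
    abstraction: float         # how many inference layers sit between signal and mind
    capture_difficulty: float  # how hard the data is to collect from users


# Illustrative values on a 0-to-1 scale; the monotone trend is the point.
MEDIUMS = [
    DataMedium(DataType.DIGITAL,  fidelity=0.2,  abstraction=0.9, capture_difficulty=0.1),
    DataMedium(DataType.PHYSICAL, fidelity=0.6,  abstraction=0.5, capture_difficulty=0.5),
    DataMedium(DataType.NEURAL,   fidelity=0.95, abstraction=0.1, capture_difficulty=0.9),
]
```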
Why We Start in Digital
Digital data is abundant, cheap to collect, and—crucially—already socially normalized.
Billions of people voluntarily hand over their browsing history, messages, and preferences every day in exchange for marginally better recommendations.
This phase is the Trojan horse for everything that follows.
By giving users real ownership and granular control over this data (not the illusory "privacy settings" of today's platforms), we train the world to see personal data as an asset, not a liability.
Comfort is built one permission at a time.
Onairos lives here today: a wallet that carries your mind across apps, letting them personalize without ever owning you.
It is the foundation.
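As a sketch of what "real ownership and granular control" could look like in code (a hypothetical model, not the actual Onairos API): a grant is scoped to one app, one slice of data, and one purpose, and apps query the wallet rather than copying it.

```python
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Grant:
    """One revocable permission: one app, one data scope, one purpose."""
    app_id: str
    scope: str    # e.g. "music.likes", never "everything"
    purpose: str  # what the app is allowed to do with the data


@dataclass
class DataWallet:
    """A user-held store of personal data; apps query it, never own it."""
    owner: str
    _data: dict = field(default_factory=dict)    # scope -> list of records
    _grants: list = field(default_factory=list)

    def grant(self, app_id: str, scope: str, purpose: str) -> None:
        self._grants.append(Grant(app_id, scope, purpose))

    def revoke(self, app_id: str, scope: str) -> None:
        self._grants = [g for g in self._grants
                        if not (g.app_id == app_id and g.scope == scope)]

    def query(self, app_id: str, scope: str) -> list:
        """Serve only the granted slice; the raw store never leaves the wallet."""
        if any(g.app_id == app_id and g.scope == scope for g in self._grants):
            return list(self._data.get(scope, []))
        raise PermissionError(f"{app_id} has no grant for {scope}")
```

The design point is that revoke() actually ends access, which is exactly what today's "privacy settings" do not do.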
The Coming Physical Layer
Within five years, wearables will no longer just count steps.
They will record what you see, hear, touch, where you go, and what you do in the real world.
High-resolution cameras, always-on audio, haptic sensors, precise location—the hardware is already shipping in prototypes.
This physical data layer will make today's personalization look like cave paintings.
An app will know not just that you "like jazz," but that you smile involuntarily when you hear Coltrane in a dimly lit bar on a rainy Thursday.
Context will be complete.
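To illustrate, here is the jazz moment above decomposed into a single physical-layer record. The schema is hypothetical; every field is an assumption about what such an event might contain.

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass(frozen=True)
class ContextEvent:
    """One moment of lived experience as a physical-layer record (hypothetical schema)."""
    timestamp: datetime
    audio: str     # from always-on audio, e.g. the track playing
    scene: str     # from camera-based scene classification
    place: str     # from precise location
    weather: str   # from ambient context
    affect: str    # from facial or physiological sensing


# The Coltrane example as data: context is complete when all the fields line up.
moment = ContextEvent(
    timestamp=datetime(2031, 3, 6, 21, 40),  # a rainy Thursday evening
    audio="John Coltrane - Naima",
    scene="dimly lit bar",
    place="bar",
    weather="rain",
    affect="involuntary smile",
)
```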
The transition will feel natural because users will already be accustomed to owning and selectively sharing their digital selves.
The leap from "my likes" to "my lived experience" is smaller than the leap from nothing to likes was in 2006.
And this leap is only possible by completing step 1.
We have already seen failed attempts at this (Rabbit, Humane), but this is step 2, not step 1.
"One must often take 1 step back to take 2 steps forward"
Novel Use Cases for Type 1 and 2
The applications of Type 1 and 2 data vastly outscale anything personal data is used for today. Most current uses are limited to pseudo-personal agents and assistants. But there is much more:
- Robot assistants that match and adapt to your mood and personality: companions that learn your communication style, energy levels, and preferences over time, reacting to shifts in your emotional state or laying out clothes because you're going hiking tomorrow. An emotionally supportive robot friend.
- What if your commute data was used daily to end traffic? — Real-time route optimization, smart traffic lights, reduced emissions city-wide
- What if every grocery run you ever made optimized the city's waste system? — No more overflowing bins, predictive restocking, zero-waste supply chains
- What if your anonymized opinions weren't guessed by Cambridge Analytica 2.0 — but actually aggregated to show what the country really thinks?
The Neural Horizon
Within ten years, non-invasive brain-computer interfaces will begin to read intent, emotion, and eventually raw thought at useful fidelity.
Our goal is to pioneer the merging of brain-computer interfaces with human experience.
Most current BCI companies failed—or remain niche—because they started here.
They built beautiful hardware that almost no one wanted implanted, collected tiny datasets in sterile labs, and ignored the single biggest barrier: people do not want their minds read.
Not yet.
Invasive approaches (Neuralink, Synchron, etc.) will continue to advance, but they serve primarily as research, novel treatment, and data collection engines for the non-invasive future.
"This is akin to the practice of Cadaver dissection, most prominent in the age of enlightenment, when scientists, alchemists and students studied the body in order to learn anatomy, about the immune system, the art of the body and how it works in order to develop sculptures, cures, medicine and more"
Every successful implant patient provides ground-truth neural signals that can be paired with non-invasive readings to train vastly better models.
Non-invasive BCI at scale requires billions of hours of correlated data—something you cannot get from small clinical studies.
You get it from a population already comfortable sharing digital, then physical, layers of themselves.
Why? Because we barely understand the brain today, let alone well enough to interface with it non-invasively through an external medium.
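Why paired data helps, in a toy sketch: where an implant patient also wears a non-invasive headset, the implant supplies ground-truth labels that can supervise a surface-level decoder. Everything below is synthetic, and the ridge regression is a stand-in for whatever models would actually be used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired dataset: moments where an implant and a non-invasive
# headset record simultaneously. The implant provides the ground truth.
n_samples, n_channels = 1000, 64
scalp = rng.normal(size=(n_samples, n_channels))     # noisy surface features
true_w = rng.normal(size=n_channels)                 # unknown neural mapping
implant_truth = scalp @ true_w + rng.normal(scale=2.0, size=n_samples)

# Fit a ridge regression from surface signals to implant ground truth.
lam = 1.0
w = np.linalg.solve(scalp.T @ scalp + lam * np.eye(n_channels),
                    scalp.T @ implant_truth)

# At scale, the trained decoder runs on non-invasive data alone: no surgery.
pred = scalp @ w
print("correlation with implant ground truth:", np.corrcoef(pred, implant_truth)[0, 1])
```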
The Real Opportunities
When neural data flows:
- Defense: thought-controlled drones, perfect situational awareness, direct knowledge transfer between soldiers
- Entertainment: fully immersive worlds that respond to unspoken desire, games that adapt in real time to your emotional state
- Social: communication beyond language—sharing qualia, emotion, memory fragments with consent
These are not science fiction. They are inevitable once the data and comfort problems are solved.
There is also the authoritarian application of this technology: monitoring thoughts and feelings, dictating will, predicting what you will do. Some will build this. Not us. That is why user control must be a pillar from the very beginning, starting in the digital data economy.
Why Others Failed
Humane, Rabbit, and the first wave of "AI wearables" failed because they shipped underpowered hardware with forced always-on listening and no meaningful user benefit.
They built features first, instead of building a company and a product and then adding features.
They asked for trust and gave nothing back.
Early BCI pure-plays asked for even more—brain surgery—and delivered cursor control or basic typing to a handful of patients.
Impressive engineering, terrible market timing.
The ecosystems that failed were closed, extractive, and impatient.
They wanted the end state without doing the work to normalize the intermediate steps.
Our Approach
We do the opposite.
We start where trust is easiest to earn: digital data that users already share.
We give total control, real value (magical recommendations, time saved, better decisions), and zero downside.
Only then do we add physical inputs.
Only then neural.
We are not racing to implant chips.
We are building the data flywheel and the cultural acceptance that will make non-invasive neural interfaces feel as natural as sharing a playlist does today.
- This is a data play.
- This is a market play.
- This is a patience play.
The companies that win the neural future will be the ones who own the journey from digital to physical to neural—not the ones who skipped straight to the brain.
We intend to be that company.
"Do not mistake this as a bear signal for companies like Neuralink. They are doing research on the brain, and are invasive. They serve a very special use case and their commercial integration is possibly decades out. They know this. This is not referring to them."
I am playing a very long game.
Author
Zion Darko
Founder & CEO
Inventor and Dreamer.