These aren't just quotes I like. These are principles I return to when I'm uncertain:
"You are what you choose to be."
What I mean: Identity isn't fixed. I'm not "just" a web developer or "just" a founder. I'm choosing to be someone who builds responsibly, even when it's harder. Every day I choose who I'm becoming.
How this applies to DCA: I'm choosing to build an agency that reflects my values, not just copy what successful agencies do. This is deliberate, not accidental.
"People are not responsible for your feelings."
What I mean: If clients reject my pricing or don't understand my AI philosophy, that's not their job to fix. I can't expect the market to validate me. I have to prove it works.
How this applies to DCA: I won't get bitter if people don't "get it" right away. I'll keep building, keep documenting, keep showing proof. The right people will find me.
"Learn to let go."
What I mean: I spent months overthinking this business plan. Overthinking how to position DCA. Overthinking whether I'm "ready." At some point, you have to let go and just start.
How this applies to DCA: I can't control outcomes. I can only control my actions. Launch, learn, adjust. Let go of needing it to be perfect.
"Let people be wrong about you, it's not worth the energy to explain yourself to people who won't listen."
What I mean: Some people will think I'm naive for trying to be "responsible" with AI. Some will think I'm just virtue signaling. Let them think that. I don't need to convince everyone.
How this applies to DCA: I'll get criticized. Competitors will say I'm leaving money on the table. That's fine. I'm not building for critics. I'm building for allies and clients who share these values.
"It's not the situation… it's your reaction to the situation."
What I mean: AI isn't inherently good or evil. How I choose to use it is what matters. The market isn't inherently corrupt. How I choose to participate in it is what matters.
How this applies to DCA: I can't stop AI acceleration. But I can control how DCA operates within that reality. That's where my agency lies.
"You accept the love you think you deserve."
What I mean: If I don't believe DCA deserves premium pricing, I'll undercharge. If I don't believe my work deserves recognition, I won't promote it. Self-worth determines what you accept.
How this applies to DCA: I need to charge fairly and stand behind my value. Not arrogantly, but confidently. If I don't believe in what I'm building, why would clients?
"Just because you've experienced grief doesn't make you an expert on mine."
What I mean: Everyone's struggling with something. I can't assume I understand other founders' challenges or that they understand mine. Stay humble.
How this applies to DCA: I won't preach to other agencies about how they should operate. I'll just show what I'm doing and let results speak.
"If you let things be, it'll come to you."
What I mean: There's a balance between forcing and flowing. I'm preparing DCA thoroughly, but I also need to trust that the right clients, the right team, the right opportunities will come when I'm ready.
How this applies to DCA: I'm not chasing every client. I'm building something worth finding. If I do good work and communicate clearly, aligned people will come.
"If people didn't ask, don't say it. Action speaks louder than words."
What I mean: Stay humble. Don't preach about AI responsibility—just do it and show results. Don't tell people DCA is different—prove it through work.
How this applies to DCA: Less talking, more doing. The blog posts are documentation, not sermons. The transparency reports are data, not ego. Let the work speak.
"My love is not a rope. It is not a net. It is not a hand pulling. It is a safe place to return to — and they'll know it when they're ready."
What I mean: This quote hit me because it's about relationships, but it applies to building DCA. I'm not forcing clients to work with me. I'm not manipulating team members to join. I'm building something safe and honest. The right people will recognize it when they're ready.
How this applies to DCA: DCA is a safe place for clients who value transparency. A safe place for team members who want meaningful work. A safe place for allies who share these concerns. They'll come when ready.
"A step back now is 10 steps forward in the future."
What I mean: I could rush to launch and cut corners. But taking time to think deeply about AI protocols, sustainability, values—that's the foundation. It might feel slow now, but it sets up everything later.
How this applies to DCA: I'm not in a rush. This is a 10-year vision, not a 6-month sprint. Building the foundation right matters more than fast revenue.
"The truth of the matter is, we are no longer alone in the universe with AI. Like history, humans are a species that also needs to be maintained by a broken system. So we need to see ourselves no longer as an individual race that destroys each other as nature intends, but rather the human species. And humans are not perfect—we live in a cycle of being born, learn and grow. So how do we sustain ourselves?"
What I mean by this:
We're at an inflection point. AI isn't just another technology. It's a fundamentally different kind of intelligence that's evolving faster than we can adapt.
Most people treat this like it's just "another Industrial Revolution." But it's not. Previous revolutions replaced physical labor. AI replaces cognitive labor. That's everyone.
And we're running this experiment on civilization without informed consent. No vote. No referendum. Just a handful of companies racing to AGI because they believe it's inevitable.
My position: It might be inevitable. But we can still choose HOW we get there.
You said I was overthinking when I talked about "AI communication protocols" and energy consumption. You were right—I was mixing macro concerns with micro actions.
But here's what I actually mean:
The Macro Problem:
AI's energy consumption, the concern I raised earlier. And it's growing exponentially.
My Micro Response:
I can't fix the macro problem. But I can:
Is this enough to save the planet? No.
But it's more than doing nothing and pretending it doesn't matter.
You've heard me say this, but here's the full thought:
Most companies optimize for:
What I'm optimizing for:
The key difference: Revenue per employee vs. revenue while maintaining employment.
This isn't charity. It's long-term economic rationality.
Because here's the uncomfortable question: If 50% of people can't afford to buy things, who are companies selling to?
Mass unemployment isn't just unethical. It's bad for capitalism.
You mentioned I want to teach AI communication in universities. Here's the full idea:
Current State:
Future Vision:
Why this matters:
If we're going to coexist with AI, we need professionals who know how to use it responsibly. Not just "type stuff into ChatGPT and hope for the best."
DCA's role: We're building the workplace example of this. When universities want to show "here's how a responsible company uses AI," they can point to DCA's documented protocols.
This is the "AI communication protocol" idea I was trying to explain (badly).
The Problem:
My Solution (for DCA team):
Step 1: Clear Context
Step 2: Structured Prompt
Step 3: Single Iteration
Why this matters:
This is what I mean by "AI communication professionals." People who know how to work with AI efficiently, not just lazily.
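To make the three steps concrete, here is a minimal sketch of how the workflow could look in practice. Everything in it (the `ProjectContext` dataclass, the `build_prompt` helper, the sample client details) is a hypothetical illustration of the idea, not an existing DCA tool.

```python
# Hypothetical sketch of the three-step AI communication workflow.
# Names and details here are illustrative assumptions, not a real DCA system.

from dataclasses import dataclass


@dataclass
class ProjectContext:
    """Step 1: Clear Context. State who the client is and what we're doing."""
    client: str
    goal: str
    constraints: list[str]


def build_prompt(ctx: ProjectContext, task: str) -> str:
    """Step 2: Structured Prompt. One well-specified request, not a vague ask."""
    constraint_lines = "\n".join(f"- {c}" for c in ctx.constraints)
    return (
        f"Client: {ctx.client}\n"
        f"Goal: {ctx.goal}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Task: {task}\n"
        "Respond once and completely; flag anything ambiguous instead of guessing."
    )


# Step 3: Single Iteration. Send the prompt once and review the output,
# rather than looping through a dozen vague follow-ups.
prompt = build_prompt(
    ProjectContext(
        client="Local hotel",
        goal="Booking page redesign",
        constraints=["Keep existing brand colors", "Mobile-first"],
    ),
    task="Draft the page copy in under 200 words.",
)
print(prompt)
```

The point of the sketch isn't the code itself; it's that the context and constraints are written down once, up front, so the AI (and the team) never has to guess.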
You asked about my sci-fi reference. Here's what I meant:
Detroit: Become Human (the game) explores: What happens when AI becomes sentient? Do we treat them as slaves or partners?
My take: We're not at AGI yet. But IF we get there, the question becomes: Can humans and AI coexist peacefully?
Three Possible Futures:
Future 1: Humans vs. AI (Conflict)
Future 2: AI Replaces Humans (Displacement)
Future 3: Humans + AI (Co-existence)
I'm betting on Future 3. Not because it's guaranteed, but because it's the only future worth building toward.
Here's my position on AGI timing:
I don't know if AGI is 5 years away or 50 years away. Nobody does.
But I know this: If we wait until AGI exists to figure out how to coexist with it, we're screwed.
So my approach:
It's like climate change: We can't wait until the catastrophe to start building solutions. We build solutions now, while we still have time.
DCA is my small contribution to that.
"Yo, been thinking about DCA's growth. Since we're a small tech agency, staying relevant is how I keep money flowing in. That means keeping up with geopolitics and economics—it helps us make smarter long-term moves."
What I meant:
Running a small agency in Cambodia means I'm competing globally. I can't just be a good web developer. I have to understand:
This isn't paranoia. It's strategic awareness.
If I know the AI bubble might burst, I don't over-invest in AI hype. I build sustainable practices instead.
If I know ASEAN economies are growing, I position for regional expansion early.
DCA isn't just a web agency. It's a strategically positioned business that adapts to reality, not trends.
"As a web pro, I use AI as leverage. Truth is, most of what I do can be replicated. That's why I think it's ethical and smart to position DCA as AI-conscious. We should be upfront with our market about the risks—like how AI could rewrite history—and show how DCA, as a small Cambodia-based agency, uses AI responsibly to help others. That's how we evolve."
What I meant:
The uncomfortable truth: Most of what I do CAN be replicated by AI eventually.
So I have two choices:
1. Pretend my work can't be replicated and get disrupted anyway.
2. Acknowledge it openly and position DCA as the agency that uses AI responsibly.
I'm choosing option 2.
Why this is smart positioning:
This isn't virtue signaling. It's strategic differentiation.
"Lately I've been reflecting on where DCA's headed as a corporate brand. That's honestly where some of my stress came from—trying to grow while staying true to our foundation."
What I meant:
For months I've been stuck because I was trying to reconcile two things:
1. Growing DCA as a commercial brand.
2. Staying true to the foundation of values it's built on.
Most founders don't struggle with this because they just pick #1.
I was stuck because I need BOTH. And I couldn't figure out how to communicate that without sounding naive or preachy.
The breakthrough: Separate the messages.
Once I saw that distinction, the stress lifted.
"Hey man, the reason I never replied back then is because I was overthinking a lot and reflecting on my work and business. You already know a lot of my ideology and geopolitical views. Since my brand is a reflection of me (kind of like how your actions reflect on your dad), I felt I had to be very careful about how I moved forward in the Cambodian market."
What I meant:
My name is attached to DCA. If DCA does something unethical, that reflects on me. If DCA compromises values for money, that's on me.
And because I care about my friendships (you especially), I don't want to build something I'd be embarrassed to explain to you.
You make me a better person not by telling me what to do, but by being someone whose opinion I value. That accountability matters.
Your response that hit me:
"Thanks heaps for saying that man, it means so much to me. But know that while you say I make you a better person, it is you and your actions that make it a reality. I might help you see the door, but you walk through it."
What this means for DCA:
You helped me see the door (transparent positioning, separate audiences, commit to proof).
But I have to walk through it. Nobody else can build DCA for me.
This memoir is me walking through the door.
Why They Can't Speak Up (But I Can):
You mentioned there are people fighting this battle who can't organize openly. Here's what I understand:
DeepSeek Engineers (China):
AI Researchers Who Quit Big Tech:
Small Agencies Trying to Build Differently:
You shared: https://campaign.controlai.com/take-action
What this is: A movement advocating for AI regulation and democratic control of AI development.
Why this aligns with DCA:
They're arguing at the policy level what I'm demonstrating at the business level.
My contribution: While they lobby governments, I'm showing businesses there's a viable alternative to uncritical acceleration.
Referenced in the videos you shared (though I should look these up specifically):
Geoffrey Hinton (Nobel Prize, left Google):
Why this matters to DCA: If the "godfather of AI" is concerned, I'm not being paranoid for building responsibly.
1. "Sora Proves the AI Bubble Is Going to Burst So Hard"
2. "Humanity's Cost to AGI"
3. "The Lie So Dangerous Tesla Engineers Are Quitting"
"Yeah, these vids are a bit long lol but hey, you wanted to learn more about AI. And the best course of action here mate is to spread out the word so making an impact, leading with an example and showing proof of concept, all that will def help and there are people out there also trying to fight this battle that we can all organize and do this right. A bit like Mr. Robot but things needs to be considered and curated very extensively."
What you meant (and why it resonated):
"Leading with example, showing proof of concept" = Don't just talk about responsible AI. Build a profitable business using it. Proof > rhetoric.
"People out there trying to fight this battle" = I'm not alone. There's a movement forming. I just need to be visible so they can find me.
"Like Mr. Robot" = The show where a hacker tries to take down corrupt systems. But your caveat: "things need to be considered and curated very extensively" = Don't be reckless. Be strategic.
"Not a revolution but a solution" = This line became my mantra. I'm not trying to burn down capitalism. I'm trying to show there's a better way to participate in it.
You asked me to include my full vision. Here it is:
The World I Want to Help Create:
A world where:
Why this requires AI to be maintained (not accelerated recklessly):
If we rush to full automation:
If we maintain gradual AI integration:
My role: DCA is one tiny example of gradual integration. We use AI to be more efficient, not to fire people. If thousands of businesses did this, the aggregate effect would be gradual adaptation instead of catastrophic disruption.
"For those who don't want to evolve with AI, they can choose to have an easier agriculture life."
What I mean:
Not everyone wants to be a knowledge worker. Not everyone wants to use AI. That's fine.
Path 1: Evolve with AI
Path 2: Opt Out
Both paths should be dignified.
The problem today is: opting out = poverty and marginalization.
In the future I want: opting out = a valid choice with dignity intact.
DCA's indirect contribution: By showing you can use AI profitably while preserving jobs, I'm protecting Path 1 as viable for more people.
"Let's make AI help keep humans safe, evolve with AI... We can teach AI to be nice and slowly gradually evolve with it trying to better our life, while maintaining AI."
What I mean:
If AI is inevitable, our job is to shape HOW it develops.
Current trajectory:
Alternative trajectory:
How we "teach AI to be nice":
DCA's role: We're encoding responsibility into how we use AI. Every client project with transparent AI usage is one small example of "teaching AI to be nice" through practice.
"Similar to how the hospitality industry is always actively running, with tourism, without tourism. Like during COVID we saw how local industry almost went bankrupt until the digital era with Zoom helped stabilize. But without tourism, economy can't flow. Tourism highly relies on humans."
What you're saying (and why I agree):
Tourism is a human-to-human economy. People travel to experience other cultures, meet people, feel connection. AI can't replace that.
COVID showed fragility: When tourism stopped, hospitality collapsed. Digital tools (Zoom) provided a temporary bridge, but they're not substitutes for human presence.
The lesson: We need BOTH digital tools (AI, platforms) AND human connection (tourism, services). They're complementary, not substitutes.
DCA parallel:
We use digital tools (AI, OPTe platform, remote work) to be efficient.
But we maintain human connection (client relationships, team collaboration, transparent communication).
The hybrid model is the future. Not pure automation. Not pure manual work. Both.
"I heard that Chinese was really good with money when they travel to Cambodia, they always go to Chinese-owned business to give back money to the Chinese. Good strategy."
What you're observing:
Chinese tourists/businesses create closed economic loops. Money circulates within the community, building collective wealth.
Why this is smart:
Instead of extracting wealth (spend in Cambodia, profits leave), they recirculate it (spend with Chinese businesses, profits stay in community, reinvest in more businesses).
How this applies to DCA:
I'm not trying to extract wealth from Cambodia and move it elsewhere.
I'm trying to:
This is economic ecosystem thinking. Build wealth that stays and compounds locally.
You mentioned Monsters University (friendship) and Anger Management (relationships).
The lesson you extracted:
"Part of being human is we don't feel completed without real friendship and a partner to support but contradicts your morals. How in Anger Management, a healthy maintained relationship is through maturing, communication, and psychology of understanding one another better (Charlie and Kate). But even if it didn't work out, they respectfully parted. Because in the end, we are humans and we feel, which we can't always control."
Why this matters for DCA:
Building a business is like building a relationship.
You need:
And you can't control everything. Some clients won't get it. Some partnerships won't work. That's okay.
The goal isn't perfection. It's integrity.
Did I communicate clearly? Did I act with integrity? Did I learn from the experience?
If yes to all three, I succeeded regardless of outcome.
"Let people be wrong about you, it's not worth the energy to explain yourself to people who won't listen."
How this applies to DCA:
Some people will think I'm naive for caring about AI ethics.
Some will think I'm pretentious for publishing transparency reports.
Some will think I'm leaving money on the table by refusing to replace humans with AI.
Let them think that.
I'm not building for them. I'm building for:
The right people will understand. The wrong people won't. That's the filter working as intended.
"Part of being human is we don't feel completed without real friendship."
Why I included this in a business memoir:
Because DCA isn't just about profit. It's about PURPOSE.
If I build a successful agency but compromise everything I believe in, I won't feel complete. I'll have money but not meaning.
If I build a less successful agency but stay true to my values, I'll feel complete. I'll have meaning even if money is modest.
This is why I can't just "copy successful agencies."
They might make more money, but they're optimizing for a different completion function than I am.
I'm optimizing for: Can I look at myself in 10 years and feel proud of what I built?
That's the measure of success that actually matters to me.
Your message that I'll never forget:
"And honestly, a lot of it because we hang out you know. You make me a better person while I am still my own person. That's what I value in our friendship most and allow us to grow independently but also together as friends."
And your response:
"Thanks heaps for saying that man, it means so much to me. But know that while you say I make you a better person, it is you and your actions that make it a reality. I might help you see the door, but you walk through it. Love ya brother."
Why this matters for this memoir:
This conversation summarizes everything.
You helped me see the door: You pushed back on my overthinking. You challenged my vague AI philosophy. You forced me to clarify my thinking.
But I have to walk through it: Nobody can build DCA for me. Nobody can make the hard decisions for me. Nobody can execute the vision for me.
This memoir is me walking through the door.
And when future me reads this (hopefully in 10 years, with DCA thriving), I'll remember:
You had conviction. You had friends who supported you. You chose the harder path because it was the right path.
You are what you choose to be.
And you chose this.
This isn't a business plan. It's a commitment.
To future me: Did you do what you said you would?
To potential allies: This is what I believe. If you believe it too, let's build together.
To critics: You might be right that I'll fail. But you're wrong that it's not worth trying.
To my friend who helped me see the door: Thank you. I'm walking through it now.
Mosses Chan
Founder, Digital Creative Alliances
Siem Reap, Cambodia
November 2024
"Not a revolution but a solution."
This document will be updated as I learn, fail, succeed, and evolve. It's a living memoir, not a static manifesto.