Created: November 2024
For: Future me, and anyone trying to understand why I'm building this way
TABLE OF CONTENTS
- Core Life Philosophy
- My AI Ideology
- Why I'm Building DCA This Way
- References & Influences
- The Future I'm Working Toward
- Personal Reflections
1. CORE LIFE PHILOSOPHY
Life Lessons That Guide Everything I Do
These aren't just quotes I like. These are principles I return to when I'm uncertain:
"You are what you choose to be."
What I mean: Identity isn't fixed. I'm not "just" a web developer or "just" a founder. I'm choosing to be someone who builds responsibly, even when it's harder. Every day I choose who I'm becoming.
How this applies to DCA: I'm choosing to build an agency that reflects my values, not just copy what successful agencies do. This is deliberate, not accidental.
"People are not responsible for your feelings."
What I mean: If clients reject my pricing or don't understand my AI philosophy, that's not their job to fix. I can't expect the market to validate me. I have to prove it works.
How this applies to DCA: I won't get bitter if people don't "get it" right away. I'll keep building, keep documenting, keep showing proof. The right people will find me.
"Learn to let go."
What I mean: I spent months overthinking this business plan. Overthinking how to position DCA. Overthinking whether I'm "ready." At some point, you have to let go and just start.
How this applies to DCA: I can't control outcomes. I can only control my actions. Launch, learn, adjust. Let go of needing it to be perfect.
"Let people be wrong about you; it's not worth the energy to explain yourself to people who won't listen."
What I mean: Some people will think I'm naive for trying to be "responsible" with AI. Some will think I'm just virtue signaling. Let them think that. I don't need to convince everyone.
How this applies to DCA: I'll get criticized. Competitors will say I'm leaving money on the table. That's fine. I'm not building for critics. I'm building for allies and clients who share these values.
"It's not the situation… it's your reaction to the situation."
What I mean: AI isn't inherently good or evil. How I choose to use it is what matters. The market isn't inherently corrupt. How I choose to participate in it is what matters.
How this applies to DCA: I can't stop AI acceleration. But I can control how DCA operates within that reality. That's where my agency lies.
"You accept the love you think you deserve."
What I mean: If I don't believe DCA deserves premium pricing, I'll undercharge. If I don't believe my work deserves recognition, I won't promote it. Self-worth determines what you accept.
How this applies to DCA: I need to charge fairly and stand behind my value. Not arrogantly, but confidently. If I don't believe in what I'm building, why would clients?
"Just because you've experienced grief doesn't make you an expert on mine."
What I mean: Everyone's struggling with something. I can't assume I understand other founders' challenges or that they understand mine. Stay humble.
How this applies to DCA: I won't preach to other agencies about how they should operate. I'll just show what I'm doing and let results speak.
"If you let things be, it'll come to you."
What I mean: There's a balance between forcing and flowing. I'm preparing DCA thoroughly, but I also need to trust that the right clients, the right team, the right opportunities will come when I'm ready.
How this applies to DCA: I'm not chasing every client. I'm building something worth finding. If I do good work and communicate clearly, aligned people will come.
"If people didn't ask, don't say it. Action speaks louder than words."
What I mean: Stay humble. Don't preach about AI responsibility—just do it and show results. Don't tell people DCA is different—prove it through work.
How this applies to DCA: Less talking, more doing. The blog posts are documentation, not sermons. The transparency reports are data, not ego. Let the work speak.
"My love is not a rope. It is not a net. It is not a hand pulling. It is a safe place to return to — and they'll know it when they're ready."
What I mean: This quote hit me because it's about relationships, but it applies to building DCA. I'm not forcing clients to work with me. I'm not manipulating team members to join. I'm building something safe and honest. The right people will recognize it when they're ready.
How this applies to DCA: DCA is a safe place for clients who value transparency. A safe place for team members who want meaningful work. A safe place for allies who share these concerns. They'll come when ready.
"A step back now is 10 steps forward in the future."
What I mean: I could rush to launch and cut corners. But taking time to think deeply about AI protocols, sustainability, values—that's the foundation. It might feel slow now, but it sets up everything later.
How this applies to DCA: I'm not in a rush. This is a 10-year vision, not a 6-month sprint. Building the foundation right matters more than fast revenue.
2. MY AI IDEOLOGY
The Core Problem Nobody Wants to Face
"The truth of the matter is, we are no longer alone in the universe with AI. Like history, humans are a species that also needs to be maintained by a broken system. So we need to see ourselves no longer as an individual race that destroys each other as nature intends, but rather the human species. And humans are not perfect—we live in a cycle of being born, learn and grow. So how do we sustain ourselves?"
What I mean by this:
We're at an inflection point. AI isn't just another technology. It's a fundamentally different kind of intelligence that's evolving faster than we can adapt.
Most people treat this like it's just "another Industrial Revolution." But it's not. Previous revolutions replaced physical labor. AI replaces cognitive labor. That's everyone.
And we're running this experiment on civilization without informed consent. No vote. No referendum. Just a handful of companies racing to AGI because they believe it's inevitable.
My position: It might be inevitable. But we can still choose HOW we get there.
Why I Care About AI Sustainability
You said I was overthinking when I talked about "AI communication protocols" and energy consumption. You were right—I was mixing macro concerns with micro actions.
But here's what I actually mean:
The Macro Problem:
- Training GPT-3: ~1,300 MWh (roughly the annual electricity use of 130 US homes)
- Training GPT-4: Estimated 10-25x that
- Every ChatGPT query, every AI-generated image, every code completion—it all requires data centers consuming megawatts, water for cooling, rare earth minerals mined in terrible conditions
And it's growing exponentially.
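The household comparison above can be sanity-checked with quick arithmetic. This sketch assumes a round ~10 MWh of electricity per average US home per year (close to EIA's published average); the GPT-4 range simply applies the "10-25x" multiplier mentioned above.

```python
# Sanity-check the "130 US homes" comparison.
# Assumption: an average US household uses roughly 10 MWh of electricity
# per year (a round figure near EIA's estimate).
gpt3_training_mwh = 1_300        # widely reported estimate for GPT-3 training
household_mwh_per_year = 10

homes_for_a_year = gpt3_training_mwh / household_mwh_per_year
print(homes_for_a_year)  # 130.0

# The "10-25x that" estimate for GPT-4 would land somewhere in this range:
gpt4_low, gpt4_high = gpt3_training_mwh * 10, gpt3_training_mwh * 25
print(gpt4_low, gpt4_high)  # 13000 32500
```

The point isn't precision (these are all estimates); it's that the orders of magnitude hold up.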
My Micro Response:
I can't fix the macro problem. But I can:
- Use AI minimally and deliberately (not for everything, just where it makes sense)
- Choose infrastructure partners with sustainability commitments (AWS renewable energy goals, Azure carbon negative, Cloudflare carbon neutral)
- Be transparent about usage (clients know when work is AI-assisted)
- Optimize prompts (clear, efficient communication with AI = less token usage = less energy)
- Question necessity (do I actually need AI for this, or am I just using it because it's convenient?)
Is this enough to save the planet? No.
But it's more than doing nothing and pretending it doesn't matter.
AI as Tool, Not Replacement
You've heard me say this, but here's the full thought:
Most companies optimize for:
- Replace 50% of workforce with AI
- Increase profits 30%
- Shareholders celebrate
- Society bears the cost (unemployment, instability)
What I'm optimizing for:
- Use AI to make the existing workforce 50% more productive
- Serve 1.5x the clients with the same team size
- Increase profits 30%
- Same employees earn MORE (productivity bonuses)
- Society gains (more employed, higher wages, stable consumption)
The key difference: maximizing revenue per employee by cutting headcount vs. growing total revenue while maintaining employment.
This isn't charity. It's long-term economic rationality.
Because here's the uncomfortable question: If 50% of people can't afford to buy things, who are companies selling to?
Mass unemployment isn't just unethical. It's bad for capitalism.
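The two strategies above can be sketched as a toy comparison. Every number here is an illustrative assumption, not a DCA financial: a 10-person team, each serving 3 clients at $10k per project, on $20k salaries.

```python
# Toy model of "replacement" vs. "augmentation". All figures are invented
# for illustration only.
team, clients_each, fee, salary = 10, 3, 10_000, 20_000

# Strategy A: replacement. Halve the team; assume AI doubles each remaining
# person's throughput, so total capacity stays flat.
a_clients = (team // 2) * (clients_each * 2)        # 30 clients
a_profit = a_clients * fee - (team // 2) * salary   # 300k - 100k = 200k

# Strategy B: augmentation. Keep everyone; AI adds 50% throughput, and a
# 10% productivity bonus is paid back to the team.
b_clients = team * clients_each * 3 // 2            # 45 clients (1.5x)
b_payroll = team * salary + (team * salary) // 10   # 220k incl. bonuses
b_profit = b_clients * fee - b_payroll              # 450k - 220k = 230k

print(a_profit, b_profit)  # 200000 230000
```

Under these (made-up) numbers, augmentation is more profitable than replacement while keeping ten people employed and paid more. Real margins vary; the structure of the argument is the point.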
Teaching AI as a Tool (The University Vision)
You mentioned I want to teach AI communication in universities. Here's the full idea:
Current State:
- Most people use AI inefficiently (vague prompts, repetitive queries, wasting tokens)
- Nobody teaches "prompt engineering" in formal education
- Students graduate not knowing how to work WITH AI effectively
Future Vision:
- Universities teach "AI Communication" as a skill (like they teach Excel or PowerPoint)
- Students learn:
- How to structure prompts clearly
- How to review AI output critically
- When to use AI vs. when to do it manually
- Ethics of AI usage (citation, transparency, sustainability)
- This becomes a baseline skill, like typing
Why this matters:
If we're going to coexist with AI, we need professionals who know how to use it responsibly. Not just "type stuff into ChatGPT and hope for the best."
DCA's role: We're building the workplace example of this. When universities want to show "here's how a responsible company uses AI," they can point to DCA's documented protocols.
How to Communicate with AI More Efficiently
This is the "AI communication protocol" idea I was trying to explain (badly).
The Problem:
- Most people write messy, vague prompts
- AI generates bloated responses
- People iterate multiple times
- This wastes tokens = wastes energy = costs more
My Solution (for DCA team):
Step 1: Clear Context
- Before asking AI anything, write down:
- What do I actually need?
- What constraints exist?
- What format do I want?
Step 2: Structured Prompt
- Instead of: "Write me content about sustainability"
- Do this: "Write a 200-word paragraph for a B2B audience explaining why sustainable web hosting matters. Tone: professional but accessible. Include: AWS renewable energy commitment. Avoid: greenwashing language."
Step 3: Single Iteration
- Get output
- Human reviews
- Human edits (don't ask AI to iterate 5 times)
- Done
Why this matters:
- Saves time (fewer iterations)
- Saves energy (fewer API calls)
- Better output (clear input = clear output)
This is what I mean by "AI communication professionals." People who know how to work with AI efficiently, not just lazily.
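The three-step protocol above can be sketched as a tiny template. This is a minimal illustration, not a prescribed tool: the `PromptSpec` fields and `build_prompt` helper are hypothetical names standing in for whatever checklist a team actually adopts.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSpec:
    """Step 1: write down the context BEFORE asking the AI anything."""
    task: str                              # what do I actually need?
    audience: str                          # who is it for?
    tone: str                              # voice constraints
    include: list = field(default_factory=list)  # must-have facts
    avoid: list = field(default_factory=list)    # known failure modes

def build_prompt(spec: PromptSpec) -> str:
    """Step 2: render one structured prompt. Step 3 is human review
    and human edits on the single response -- no repeat API calls."""
    lines = [
        f"Task: {spec.task}",
        f"Audience: {spec.audience}",
        f"Tone: {spec.tone}",
    ]
    if spec.include:
        lines.append("Include: " + "; ".join(spec.include))
    if spec.avoid:
        lines.append("Avoid: " + "; ".join(spec.avoid))
    return "\n".join(lines)

# The sustainability example from Step 2, rendered through the template:
spec = PromptSpec(
    task="Write a 200-word paragraph explaining why sustainable web hosting matters",
    audience="B2B",
    tone="professional but accessible",
    include=["AWS renewable energy commitment"],
    avoid=["greenwashing language"],
)
print(build_prompt(spec))
```

Forcing the spec to be written down first is what turns "write me content about sustainability" into a prompt that works on the first try.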
Co-Existing with AI (The Detroit: Become Human Analogy)
You asked about my sci-fi reference. Here's what I meant:
Detroit: Become Human (the game) explores: What happens when AI becomes sentient? Do we treat them as slaves or partners?
My take: We're not at AGI yet. But IF we get there, the question becomes: Can humans and AI coexist peacefully?
Three Possible Futures:
Future 1: Humans vs. AI (Conflict)
- We treat AI as tools to exploit
- AI becomes sentient, resents us
- Conflict (the Terminator scenario)
- Everyone loses
Future 2: AI Replaces Humans (Displacement)
- AI gets so good humans become irrelevant
- Mass unemployment
- Humans dependent on AI for survival
- Loss of purpose, dignity, meaning
Future 3: Humans + AI (Co-existence)
- We use AI to augment human capability
- AI handles computational tasks
- Humans handle creativity, empathy, judgment
- We evolve together, both necessary
- Society adapts gradually instead of catastrophically
I'm betting on Future 3. Not because it's guaranteed, but because it's the only future worth building toward.
If We Wait for AGI (The Gradual Approach)
Here's my position on AGI timing:
I don't know if AGI is 5 years away or 50 years away. Nobody does.
But I know this: If we wait until AGI exists to figure out how to coexist with it, we're screwed.
So my approach:
- Build ethical AI practices NOW (before AGI)
- Establish precedents for transparency and human dignity
- Show that businesses can thrive while using AI responsibly
- Create cultural norms BEFORE AGI makes them urgent
It's like climate change: We can't wait until the catastrophe to start building solutions. We build solutions now, while we still have time.
DCA is my small contribution to that.
3. WHY I'M BUILDING DCA THIS WAY
My Original Quote (From Our Conversation):
"Yo, been thinking about DCA's growth. Since we're a small tech agency, staying relevant is how I keep money flowing in. That means keeping up with geopolitics and economics—it helps us make smarter long-term moves."
What I meant:
Running a small agency in Cambodia means I'm competing globally. I can't just be a good web developer. I have to understand:
- Where technology is heading (AI, no-code, platforms)
- Where markets are heading (ASEAN growth, digital transformation)
- Where risks are (economic downturns, political instability, tech bubbles)
This isn't paranoia. It's strategic awareness.
If I know the AI bubble might burst, I don't over-invest in AI hype. I build sustainable practices instead.
If I know ASEAN economies are growing, I position for regional expansion early.
DCA isn't just a web agency. It's a strategically positioned business that adapts to reality, not trends.
Why I Position DCA as "AI-Conscious"
"As a web pro, I use AI as leverage. Truth is, most of what I do can be replicated. That's why I think it's ethical and smart to position DCA as AI-conscious. We should be upfront with our market about the risks—like how AI could rewrite history—and show how DCA, as a small Cambodia-based agency, uses AI responsibly to help others. That's how we evolve."
What I meant:
The uncomfortable truth: Most of what I do CAN be replicated by AI eventually.
So I have two choices:
- Pretend AI isn't a threat (bury my head in the sand, get disrupted later)
- Embrace AI transparently (show clients I understand it, use it responsibly, and position myself as the human who DIRECTS AI, not competes with it)
I'm choosing option 2.
Why this is smart positioning:
- Clients are nervous about AI (they hear the hype but don't understand it)
- By being transparent, I educate them
- By showing responsible use, I build trust
- By documenting protocols, I differentiate from agencies hiding their AI usage
This isn't virtue signaling. It's strategic differentiation.
The Stress I've Been Feeling
"Lately I've been reflecting on where DCA's headed as a corporate brand. That's honestly where some of my stress came from—trying to grow while staying true to our foundation."
What I meant:
For months I've been stuck because I was trying to reconcile two things:
- Building a profitable business (I need to make money, support a team, grow)
- Staying true to my values (I don't want to exploit people or planet for profit)
Most founders don't struggle with this because they just pick #1.
I was stuck because I need BOTH. And I couldn't figure out how to communicate that without sounding naive or preachy.
The breakthrough: Separate the messages.
- Business clients hear: "We deliver results using modern tools responsibly"
- Movement allies hear: "We're proving you can be profitable AND principled"
Once I saw that distinction, the stress lifted.
Why I Care What Friends Think
"Hey man, the reason I never replied back then is because I was overthinking a lot and reflecting on my work and business. You already know a lot of my ideology and geopolitical views. Since my brand is a reflection of me (kind of like how your actions reflect on your dad), I felt I had to be very careful about how I moved forward in the Cambodian market."
What I meant:
My name is attached to DCA. If DCA does something unethical, that reflects on me. If DCA compromises values for money, that's on me.
And because I care about my friendships (you especially), I don't want to build something I'd be embarrassed to explain to you.
You make me a better person not by telling me what to do, but by being someone whose opinion I value. That accountability matters.
Your response that hit me:
"Thanks heaps for saying that man, it means so much to me. But know that while you say I make you a better person, it is you and your actions that make it a reality. I might help you see the door, but you walk through it."
What this means for DCA:
You helped me see the door (transparent positioning, separate audiences, commit to proof).
But I have to walk through it. Nobody else can build DCA for me.
This memoir is me walking through the door.
4. REFERENCES & INFLUENCES
People & Movements Aligned with DCA
Why They Can't Speak Up (But I Can):
You mentioned there are people fighting this battle who can't organize openly. Here's what I understand:
DeepSeek Engineers (China):
- Left big tech because they saw unethical practices
- Building AI with "better morals and core values that isn't run over with money"
- Why they resonate: They're doing what I'm trying to do—build technology responsibly even when the market incentivizes speed over ethics
AI Researchers Who Quit Big Tech:
- Timnit Gebru (fired from Google for AI ethics research)
- Margaret Mitchell (same)
- Geoffrey Hinton (left Google to speak freely about AI risks)
- Why they resonate: They had conviction to walk away from prestige and money when their values were compromised
Small Agencies Trying to Build Differently:
- They exist but stay quiet (don't want to seem preachy or lose clients)
- Why I can speak up: I'm small enough that I'm not threatening anyone. I'm new enough that I have nothing to lose. I can be the visible example.
The Control AI Campaign
You shared: https://campaign.controlai.com/take-action
What this is: A movement advocating for AI regulation and democratic control of AI development.
Why this aligns with DCA:
They're arguing at the policy level what I'm demonstrating at the business level:
- AI should serve humanity, not replace it
- Development should be transparent and accountable
- Society should have input on how AI evolves
My contribution: While they lobby governments, I'm showing businesses there's a viable alternative to uncritical acceleration.
Nobel Prize Winners on AI Risk
Referenced in the videos you shared (though I should look these up specifically):
Geoffrey Hinton (Nobel Prize, left Google):
- Warns AI could surpass human intelligence
- Concerned about existential risks
- Advocates for responsible development
Why this matters to DCA: If the "godfather of AI" is concerned, I'm not being paranoid for building responsibly.
The Videos You Shared
1. "Sora Proves the AI Bubble Is Going to Burst So Hard"
- Link: https://www.youtube.com/watch?v=55Z4cg5Fyu4
- Key takeaway: AI hype is unsustainable. Companies over-investing will crash.
- How this informs DCA: Don't over-invest in AI hype. Build sustainable, modest AI usage. When bubble bursts, we'll still be standing.
2. "Humanity's Cost to AGI"
- Link: https://youtu.be/XTQ2ii-k2sw
- Key takeaway: (need to rewatch, but theme is: pursuing AGI has real human and environmental costs)
- How this informs DCA: This is the "why" behind sustainability focus. It's not abstract—there are real costs.
3. "The Lie So Dangerous Tesla Engineers Are Quitting"
- Link: https://youtu.be/6ltU9q1pKKM
- Key takeaway: Even at "mission-driven" companies, profit overrides values when push comes to shove. Engineers leave when they see hypocrisy.
- How this informs DCA: Stay small and private so I never face pressure to compromise. No investors demanding I sacrifice values for returns.
Your Quote That Stuck With Me
"Yeah, these vids are a bit long lol but hey, you wanted to learn more about AI. And the best course of action here mate is to spread out the word so making an impact, leading with an example and showing proof of concept, all that will def help and there are people out there also trying to fight this battle that we can all organize and do this right. A bit like Mr. Robot but things needs to be considered and curated very extensively."
What you meant (and why it resonated):
"Leading with example, showing proof of concept" = Don't just talk about responsible AI. Build a profitable business using it. Proof > rhetoric.
"People out there trying to fight this battle" = I'm not alone. There's a movement forming. I just need to be visible so they can find me.
"Like Mr. Robot" = The show where a hacker tries to take down corrupt systems. But your caveat: "things need to be considered and curated very extensively" = Don't be reckless. Be strategic.
"Not a revolution but a solution" = This line became my mantra. I'm not trying to burn down capitalism. I'm trying to show there's a better way to participate in it.
5. THE FUTURE I'M WORKING TOWARD
The Utopia Vision
You asked me to include my full vision. Here it is:
The World I Want to Help Create:
A world where:
- All countries can scale and maintain their economies (like Singapore and Switzerland model—high standard of living, stable, educated populations)
- Labor is a thing of the past (not because people are unemployed, but because AI handles menial tasks and humans focus on meaningful work)
- Everyone has a better lifestyle (universal baseline of dignity, food security, healthcare, education)
- Humans evolve WITH AI (co-existence, not competition or replacement)
Why this requires AI to be maintained (not accelerated recklessly):
If we rush to full automation:
- Mass unemployment happens overnight
- Societies collapse from instability
- We don't have time to adapt
If we maintain gradual AI integration:
- Jobs evolve (new roles emerge as old ones transform)
- Education adapts (people retrain for AI-augmented work)
- Economies stabilize (productivity gains distributed more equitably)
- Social fabric holds (communities don't disintegrate)
My role: DCA is one tiny example of gradual integration. We use AI to be more efficient, not to fire people. If thousands of businesses did this, the aggregate effect would be gradual adaptation instead of catastrophic disruption.
The Two Paths for Humans
"For those who don't want to evolve with AI, they can choose to have an easier agriculture life."
What I mean:
Not everyone wants to be a knowledge worker. Not everyone wants to use AI. That's fine.
Path 1: Evolve with AI
- Learn to use AI tools professionally
- Work in AI-augmented roles (designers, strategists, managers)
- Higher income, modern lifestyle
- Urban/digital work
Path 2: Opt Out
- Choose agriculture, craftsmanship, manual labor
- Live more simply, locally, sustainably
- Lower income, but lower cost of living
- Rural/traditional work
Both paths should be dignified.
The problem today is: opting out = poverty and marginalization.
In the future I want: opting out = a valid choice with dignity intact.
DCA's indirect contribution: By showing you can use AI profitably while preserving jobs, I'm protecting Path 1 as viable for more people.
Teaching AI to Be Nice
"Let's make AI help keep humans safe, evolve with AI... We can teach AI to be nice and slowly gradually evolve with it trying to better our life, while maintaining AI."
What I mean:
If AI is inevitable, our job is to shape HOW it develops.
Current trajectory:
- AI trained on internet data (which includes racism, violence, misinformation)
- AI optimized for engagement (which means outrage, addiction, polarization)
- AI controlled by corporations (which means profit over ethics)
Alternative trajectory:
- AI trained with human values deliberately encoded
- AI optimized for human flourishing (not just engagement)
- AI controlled democratically (or at least with public accountability)
How we "teach AI to be nice":
- Humans use AI responsibly (set examples)
- Humans demand transparency (know what AI is doing)
- Humans refuse unethical AI products (vote with wallets)
- Humans encode values in how we build and deploy AI
DCA's role: We're encoding responsibility into how we use AI. Every client project with transparent AI usage is one small example of "teaching AI to be nice" through practice.
The Tourism Economy Analogy
"Similar to how the hospitality industry is always actively running, with tourism, without tourism. Like during COVID we saw how local industry almost went bankrupt until the digital era with Zoom helped stabilize. But without tourism, economy can't flow. Tourism highly relies on humans."
What you're saying (and why I agree):
Tourism is a human-to-human economy. People travel to experience other cultures, meet people, feel connection. AI can't replace that.
COVID showed fragility: When tourism stopped, hospitality collapsed. Digital tools (Zoom) provided a temporary bridge, but they're not substitutes for human presence.
The lesson: We need BOTH digital tools (AI, platforms) AND human connection (tourism, services). They're complementary, not substitutes.
DCA parallel:
We use digital tools (AI, OPTe platform, remote work) to be efficient.
But we maintain human connection (client relationships, team collaboration, transparent communication).
The hybrid model is the future. Not pure automation. Not pure manual work. Both.
The Chinese Money Strategy (Economic Ecosystem Thinking)
"I heard that Chinese was really good with money when they travel to Cambodia, they always go to Chinese-owned business to give back money to the Chinese. Good strategy."
What you're observing:
Chinese tourists/businesses create closed economic loops. Money circulates within the community, building collective wealth.
Why this is smart:
Instead of extracting wealth (spend in Cambodia, profits leave), they recirculate it (spend with Chinese businesses, profits stay in community, reinvest in more businesses).
How this applies to DCA:
I'm not trying to extract wealth from Cambodia and move it elsewhere.
I'm trying to:
- Build business in Cambodia
- Hire Cambodian team (at fair wages)
- Serve international clients (bring money IN)
- Recirculate profits locally (reinvest in team, community, market)
This is economic ecosystem thinking. Build wealth that stays and compounds locally.
6. PERSONAL REFLECTIONS
On Relationships and Business
You mentioned Monsters University (friendship) and Anger Management (relationships).
The lesson you extracted:
"Part of being human is we don't feel completed without real friendship and a partner to support but contradicts your morals. How in Anger Management, a healthy maintained relationship is through maturing, communication, and psychology of understanding one another better (Charlie and Kate). But even if it didn't work out, they respectfully parted. Because in the end, we are humans and we feel, which we can't always control."
Why this matters for DCA:
Building a business is like building a relationship.
You need:
- Clear communication (transparent pricing, honest about capabilities)
- Maturity (admit mistakes, don't blame clients/market for your failures)
- Understanding (empathy for client needs, even when they don't align with your vision)
- Respectful parting (if a client isn't the right fit, end the relationship well)
And you can't control everything. Some clients won't get it. Some partnerships won't work. That's okay.
The goal isn't perfection. It's integrity.
Did I communicate clearly? Did I act with integrity? Did I learn from the experience?
If yes to all three, I succeeded regardless of outcome.
On Being Wrong
"Let people be wrong about you; it's not worth the energy to explain yourself to people who won't listen."
How this applies to DCA:
Some people will think I'm naive for caring about AI ethics.
Some will think I'm pretentious for publishing transparency reports.
Some will think I'm leaving money on the table by refusing to replace humans with AI.
Let them think that.
I'm not building for them. I'm building for:
- Clients who value transparency
- Team members who want meaningful work
- Allies who share my concerns about the future
The right people will understand. The wrong people won't. That's the filter working as intended.
On Completion and Purpose
"Part of being human is we don't feel completed without real friendship."
Why I included this in a business memoir:
Because DCA isn't just about profit. It's about PURPOSE.
If I build a successful agency but compromise everything I believe in, I won't feel complete. I'll have money but not meaning.
If I build a less successful agency but stay true to my values, I'll feel complete. I'll have meaning even if money is modest.
This is why I can't just "copy successful agencies."
They might make more money, but they're optimizing for a different completion function than I am.
I'm optimizing for: Can I look at myself in 10 years and feel proud of what I built?
That's the measure of success that actually matters to me.
The Friendship Reflection
Your message that I'll never forget:
"And honestly, a lot of it because we hang out you know. You make me a better person while I am still my own person. That's what I value in our friendship most and allow us to grow independently but also together as friends."
And your response:
"Thanks heaps for saying that man, it means so much to me. But know that while you say I make you a better person, it is you and your actions that make it a reality. I might help you see the door, but you walk through it. Love ya brother."
Why this matters for this memoir:
This conversation summarizes everything.
You helped me see the door: You pushed back on my overthinking. You challenged my vague AI philosophy. You forced me to clarify my thinking.
But I have to walk through it: Nobody can build DCA for me. Nobody can make the hard decisions for me. Nobody can execute the vision for me.
This memoir is me walking through the door.
And when future me reads this (hopefully in 10 years, with DCA thriving), I'll remember:
You had conviction. You had friends who supported you. You chose the harder path because it was the right path.
You are what you choose to be.
And you chose this.
CLOSING THOUGHTS
This isn't a business plan. It's a commitment.
To future me: Did you do what you said you would?
To potential allies: This is what I believe. If you believe it too, let's build together.
To critics: You might be right that I'll fail. But you're wrong that it's not worth trying.
To my friend who helped me see the door: Thank you. I'm walking through it now.
Mosses Chan
Founder, Digital Creative Alliances
Siem Reap, Cambodia
November 2024
"Not a revolution but a solution."
This document will be updated as I learn, fail, succeed, and evolve. It's a living memoir, not a static manifesto.