$ cat ~/knowledge/engineering-ai.md

2 weeks ago I didn't know what a git repo was. not really.

I'd heard the word. I knew engineers used it. but I didn't understand it. I was an SDR who built systems in my head, in scattered Notion docs, in Slack messages I'd never find again.

then I built a meme generator. broke it. fixed it. learned what npm install meant. learned what version control actually was. learned the difference between asking AI to build something for you and asking AI to show you how while you're driving.

I'm not an engineer. I'm not a developer. I'm a builder who learned these terms by breaking things and reading error messages at 2 AM.

this is what I picked up along the way. the terms that showed up when I was deploying my first website. the concepts I needed when the build failed. the vocabulary that made AI agents less confusing and more useful.

I'm not the expert here. but I'm sharing what I know, the best way I know how.

if you're building with AI agents in 2026, these are the words you'll hear. here's what they mean.


$ cd ~/version-control

Version Control

Git

Version control system that tracks changes to your code over time.

why it matters

I didn't understand Git until I needed to revert a broken homepage. I'd changed something, refreshed localhost:3000, and the entire layout was gone. I panicked. Then realized I could just undo the commit and go back to when it worked. That's when it clicked. Git isn't just tracking changes. It's letting you time travel. Every commit is a save point. Every push is publishing your work.

how you use it

When I run /deploy, it stages changes, commits with a message, pushes to GitHub, and triggers Vercel. When I run the daily tracker, it scans git commits to see what I shipped. Git is the receipt for everything I build.

→ related: Commits, Branches, Push


Commits

A snapshot of your files at a specific point in time with a message explaining what changed.

why it matters

Commits are your paper trail. They tell the story of how you built something. Good commit messages make debugging easier. Bad commit messages make you hate yourself when you're trying to figure out what broke.

how you use it

git commit -m "add user authentication". The message should say what you did and why. Not "fix stuff" or "updates". That's useless.

→ related: Git, Push


Branches

Separate versions of your code that let you work on features without breaking the main version.

why it matters

Main branch is what's live. Feature branches are where you experiment. You can try things, break them, fix them, and only merge to main when they're ready.

how you use it

Create a branch for each feature. Test it. Merge it when it works. Delete the branch when you're done. Keeps main clean.

→ related: Merge Conflicts, Git


Push

Sending your local commits to a remote server like GitHub or Vercel.

why it matters

Your code isn't backed up until you push. Your team can't see it until you push. Automated deployments don't trigger until you push. Local commits are invisible.

how you use it

git push origin main. Push to main triggers auto-deploys on Vercel. Push to a feature branch creates a preview deployment.

→ related: Commits, Deploy


Merge Conflicts

When Git can't automatically combine changes from two branches because they edited the same lines.

why it matters

This is the tax you pay for collaboration. Two people edited the same file. Git doesn't know which version to keep. You have to choose manually.

how you use it

Open the file, look for the conflict markers (<<<<<<<, =======, >>>>>>>), decide which code to keep, delete the marker lines, commit the resolution. It's annoying but necessary.
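
Here's what a conflicted file looks like (the file contents and branch name are made up for illustration):

```
<<<<<<< HEAD
const siteTitle = "ShawnOS";
=======
const siteTitle = "Shawn OS";
>>>>>>> feature/rename
```

Everything between <<<<<<< and ======= is your branch's version. Everything between ======= and >>>>>>> is the incoming branch. Keep one, or combine them, then delete all three marker lines before committing.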

→ related: Branches, Git


$ cd ~/deployment

Deployment

Vercel

Hosting platform that deploys websites from Git repos with zero config.

why it matters

The first time I pushed to main and saw Vercel build my site in 45 seconds, it felt like skipping straight to the good part. No FTP uploads, no manual file transfers — just push the code and it's live. Three websites (shawnos.ai, thegtmos.ai, thecontentos.ai) all deploy from one push. That's not magic. That's just how modern web development works.

how you use it

Every time I run /deploy, it pushes to GitHub, Vercel picks it up, builds all three sites, and they go live. I get a preview URL for every branch. If something breaks, I check the build logs. If it works, it's live in under a minute.

→ related: Deploy, Domains, Build Process


Deploy

The process of making your local code changes live on the internet.

why it matters

I spent the first week editing files and wondering why my changes weren't showing up on shawnos.ai. Then I realized local changes stay local until you deploy. The code on my machine and the code on the website are two different things. Deploying is the bridge.

how you use it

I built the /deploy skill so I don't have to remember the steps. It stages changes, writes a commit message, pushes to GitHub, waits for Vercel to build, and confirms all three sites are live. I type /deploy. Everything else happens automatically.

→ related: Vercel, Git, Push


Build Process

The steps that transform your source code into the final website visitors see.

why it matters

Next.js apps need to be compiled. TypeScript needs to convert to JavaScript. Dependencies need to be installed. The build process does all of this. If the build fails, your deploy fails.

how you use it

Locally, npm run dev starts a development server with hot reload. npm run build compiles the production version. On Vercel, the build happens automatically on every push. Check the build logs if it fails.

→ related: Dependencies, Vercel


Environment Variables

Secret values like API keys that you don't want in your code.

why it matters

API keys, database passwords, MCP tokens. These can't live in your repo. They'd be public on GitHub. Environment variables let you reference them without exposing them.

how you use it

Create a .env file locally with API_KEY=your_key_here. Add it to .gitignore so it never gets committed. On Vercel, add the same variables in the dashboard so your deployed site can access them.

→ related: Environment Files


Domains

The web address people type to visit your site.

why it matters

shawnos.ai is more memorable than shawnos-abc123.vercel.app. Custom domains are how you own your presence. Vercel handles DNS, SSL certificates, and routing.

how you use it

Buy a domain from any registrar. Point the nameservers to Vercel or Cloudflare. Add the domain in Vercel's dashboard. It auto-configures HTTPS. Takes 10 minutes.

→ related: Vercel


$ cd ~/ai-agents

AI Agents

Parallel Agents

Running multiple AI agents at the same time on different tasks.

why it matters

One agent generates LinkedIn drafts while another scans HeyReach campaigns. Three parallel agents finish in roughly the time of the slowest one, not the sum of all three. Sequential is one-at-a-time. Parallel is all-at-once.

how you use it

The Task tool launches agents. When you have independent tasks (scan repo + generate content + fetch API data), launch all three agents in one message. They run concurrently.
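
The Task tool's internals aren't shown here, but the idea is plain concurrency. A minimal Python sketch with made-up task names, showing that three independent tasks take roughly the time of one, not the time of three:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_task(name, seconds):
    # stand-in for an agent doing real work
    time.sleep(seconds)
    return f"{name}: done"

tasks = [("scan repo", 0.2), ("generate content", 0.2), ("fetch API data", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(run_task, name, secs) for name, secs in tasks]
    results = [f.result() for f in futures]
elapsed = time.perf_counter() - start

print(results)
print(f"elapsed: {elapsed:.2f}s")  # ~0.2s concurrent, vs ~0.6s sequential
```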

→ related: Agent Skills, Context Windows


Agent Skills

Pre-written instructions that teach AI agents how to perform specific workflows.

why it matters

Skills are portable knowledge. The /deploy skill knows how to stage, commit, push, and verify deploys. The /finalcopy skill knows your voice, platform rules, and anti-slop filters. Skills make agents consistent.

how you use it

Trigger with slash commands. /deploy runs the deploy skill. /tracker runs the daily tracker. Skills read your voice files, check your rules, follow your workflows. They're reusable automation.

→ related: MCP Servers, Parallel Agents


MCP Servers

Model Context Protocol servers that give AI agents access to external tools and data sources.

why it matters

MCP connects agents to Slack, HeyReach, Instantly, Browserbase, ClickUp, Substack. Without MCP, agents are blind to your production systems. With MCP, they can read Slack channels, pull campaign data, and push drafts to Substack.

how you use it

Install MCP servers in Cursor settings. Each server exposes tools the agent can call. Slack MCP gives search_messages and send_message. HeyReach MCP gives export_leads. The agent sees these as available actions.

→ related: Agent Skills


Context Windows

The amount of text an AI model can see and remember in a single conversation.

why it matters

Larger context = more files, more history, more continuity. Claude Sonnet 4.5 has a 200k-token context window. That's roughly 150k words. Big enough to load your entire voice system, recent conversation history, and current task context without forgetting.

how you use it

The agent auto-manages context. You don't. But knowing the limit exists helps you understand why agents sometimes lose track of earlier instructions. If you hit the limit, the agent summarizes and refreshes.

→ related: Model Selection, Parallel Agents


Model Selection

Choosing which AI model to use based on task complexity and cost.

why it matters

Opus is smarter but expensive. Sonnet is fast and cheap. For simple tasks (reformatting, scanning), use Sonnet. For complex reasoning (architecting, debugging), use Opus. Wrong model = wasted money or bad output.

how you use it

The daily tracker logs model usage. Opus sessions cost $12-15. Sonnet sessions cost $3-5. Track your spend. Optimize your model picks. Don't use Opus to count words.

→ related: Context Windows


$ cd ~/dev-tools

Development Tools

Markdown

Plain text format with simple syntax for headings, lists, links, and formatting.

why it matters

All your content lives in markdown. Blog posts, drafts, voice files, workflows, skill documentation. Markdown is readable as plain text, version-controllable in Git, and renderable as HTML.

how you use it

# heading, **bold**, - list item, [link](url). Write in any text editor. Commit to Git. Render on your site. No proprietary formats. No lock-in.

→ related: Git


Python Scripts

Small programs that automate tasks like scanning files, calculating scores, or generating images.

why it matters

The daily tracker is a Python script. The dashboard generator is a Python script. Anytime you need to process files, calculate stats, or batch operations, Python is faster than doing it manually.

how you use it

The scripts/ folder holds all automation. daily_scan.py scans git and content folders. daily_dashboard.py generates the tracker card. Run them from the terminal or from agent skills.
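
daily_scan.py itself isn't reproduced here, but the pattern is small. A hypothetical sketch of the file-scanning half, assuming a folder of markdown content:

```python
from datetime import date, datetime
from pathlib import Path

def files_touched_today(folder: str) -> list[str]:
    """Names of markdown files modified today. A simplified stand-in
    for what a daily scan script does before it calculates stats."""
    today = date.today()
    return sorted(
        p.name
        for p in Path(folder).rglob("*.md")
        if datetime.fromtimestamp(p.stat().st_mtime).date() == today
    )

if __name__ == "__main__":
    print(files_touched_today("content"))
```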

→ related: Monorepo


Monorepo

A single Git repo that holds multiple related projects with shared code.

why it matters

I was planning one website. Then the agent and I scoped the architecture and realized one site couldn't hold everything. Three audiences, three domains. But I didn't want to manage three separate repos. So we built a monorepo. shawnos.ai, thegtmos.ai, thecontentos.ai all live in one repo. Same design system, same components, one push deploys all three.

how you use it

Turborepo manages the orchestration. packages/shared/ holds the components and styles all three sites use. apps/shawnos/, apps/gtmos/, apps/contentos/ are the individual sites. When I run /deploy, all three build and go live. One command, three websites.

→ related: Packages, Deploy


Packages

Reusable modules of code that other projects can import.

why it matters

You don't build everything from scratch. Next.js is a package. Pillow is a package. npm install pulls packages from the internet. They're pre-built solutions to common problems.

how you use it

package.json lists your dependencies. npm install downloads them. Packages live in node_modules/. You import them into your code. They update when you run npm update.

→ related: Dependencies, Monorepo


Dependencies

External code your project relies on to function.

why it matters

Your site depends on Next.js. Your tracker depends on Pillow. If dependencies aren't installed, nothing runs. If they're outdated, things break. Dependency management is invisible until it isn't.

how you use it

npm install installs all dependencies listed in package.json. pip install Pillow installs Python dependencies. Dependencies are tracked in lock files. Don't edit those manually.

→ related: Packages, Build Process


$ cd ~/automation

Automation

Cron Jobs

Scheduled tasks that run automatically at specific times.

why it matters

You don't manually run the tracker every night. A cron job does it. Cron is how you automate recurring work. Backups, deploys, reports, data syncs. Set it once, forget it.

how you use it

Cron syntax is cryptic but powerful. The five fields are minute, hour, day of month, month, and day of week. 0 20 * * * means "run at 8:00 PM every day". You define the schedule and the command. The system handles execution.

→ related: Webhooks, Event-Driven Workflows


Webhooks

URLs that external services call when an event happens.

why it matters

When someone replies to your Instantly campaign, the webhook fires. When a HeyReach connection accepts, the webhook fires. Webhooks turn events into triggers. Real-time automation.

how you use it

Give the external service a URL to call. When the event happens, they POST data to that URL. Your system processes it. No polling. No waiting. Instant.
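
A receiving endpoint can be tiny. A minimal Python sketch using only the standard library (the payload shape is made up — real services document their own):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly the number of bytes the sender declared
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        # a real system would route on payload["event"] here
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass  # keep the terminal quiet

# to run it:
# HTTPServer(("127.0.0.1", 8000), WebhookHandler).serve_forever()
```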

→ related: Event-Driven Workflows, Cron Jobs


Event-Driven Workflows

Automation that reacts to events instead of running on a schedule.

why it matters

Cron jobs run on time. Event-driven workflows run when something happens. Push to GitHub triggers deploy. New file in content folder triggers scan. Smarter than polling.

how you use it

Vercel auto-deploys on push events. MCP servers trigger on API calls. Watchers trigger on file changes. Design systems that react instead of poll.

→ related: Webhooks, Cron Jobs


$ cd ~/data-formats

Data Formats

JSON

JavaScript Object Notation. A text format for storing structured data.

why it matters

APIs return JSON. Config files use JSON. Your daily tracker logs are JSON. It's readable by humans, parseable by machines. Key-value pairs, nested objects, arrays.

how you use it

{ "name": "value", "count": 5, "tags": ["tag1", "tag2"] }. JSON files end in .json. Parse them with JSON.parse() in JavaScript or json.load() in Python.
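
A full round trip in Python, with a made-up tracker entry:

```python
import json

entry = {"name": "daily-tracker", "count": 5, "tags": ["git", "content"]}

text = json.dumps(entry, indent=2)  # dict -> JSON string, what you'd write to a .json file
parsed = json.loads(text)           # JSON string -> dict; json.load() does the same from a file

print(parsed["tags"])
```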

→ related: CSV, Configuration Files


CSV

Comma-Separated Values. A simple spreadsheet format.

why it matters

HeyReach exports CSVs. Clay imports CSVs. Instantly uploads CSVs. CSV is the universal data exchange format. Every tool supports it.

how you use it

First row is headers. Every row after is data. name,email,company. Open in Excel or Google Sheets. Import into Clay. Upload to campaigns.
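
Reading one in Python takes a few lines with the standard library (the rows here are made up):

```python
import csv
import io

# same shape as a campaign export: first row headers, every row after is data
raw = "name,email,company\nAlex,alex@example.com,Acme\nSam,sam@example.com,Initech\n"

rows = list(csv.DictReader(io.StringIO(raw)))  # each row becomes a dict keyed by header

for row in rows:
    print(row["name"], "->", row["company"])
```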

→ related: JSON


Environment Files

Files that store environment variables for local development.

why it matters

.env files keep secrets out of Git. API keys, tokens, database URLs. They're local only. Each developer has their own. Vercel has its own set for production.

how you use it

Create .env in your project root. Add API_KEY=value. Add .env to .gitignore. Load variables with process.env.API_KEY in code.
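
In JavaScript, the framework loads .env for you. Plain Python has no built-in loader (most projects use the python-dotenv package), but a minimal hand-rolled sketch shows what "loading" one means:

```python
import os

def load_env(path: str = ".env") -> None:
    """Parse KEY=value lines into os.environ, skipping blanks and # comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables win over the file
            os.environ.setdefault(key.strip(), value.strip())
```

After load_env(), os.environ["API_KEY"] in Python plays the same role process.env.API_KEY does in the site code.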

→ related: Environment Variables


Configuration Files

Files that define how tools should behave.

why it matters

next.config.ts tells Next.js how to build. tsconfig.json tells TypeScript how to compile. .gitignore tells Git what to ignore. Config files control behavior without code changes.

how you use it

Read the docs for each tool. Copy starter configs. Tweak settings as needed. Commit config files to Git so everyone uses the same setup.

→ related: JSON, Environment Files


$ cat ~/toolkit.md

the vibe coder toolkit

version control → Git, commits, branches, push
deployment → Vercel, domains, environment variables
AI agents → skills, MCP, context windows, parallel execution
development → markdown, Python, monorepos, packages
automation → cron, webhooks, event-driven workflows
data → JSON, CSV, config files, environment files

you don't need a CS degree. you need to know what these words mean. you need to recognize them in error messages. you need to know when to use which tool.

the rest is just building. and if you get stuck, the agent knows all of this already. you're not alone.

go build something.

want the GTM side?

email campaigns, Clay, qualification, CRM. the go-to-market terms explained the same way.

GTM OS knowledge guide →
ShawnOS.ai | theGTMOS.ai | theContentOS.ai
built with Next.js · Tailwind · Claude · Remotion