The Job Search System That Evaluated 740 Offers
TL;DR
- What it solves: Tailoring a CV and evaluating genuine role fit takes 1-2 hours per application; career-ops compresses that to minutes with a structured 6-block A-F report and an auto-generated ATS-optimized PDF.
- Why it matters: Applying without a score is fishing with a fork. Most rejections land on roles that were never a good match to begin with.
- Best for: Senior engineers and applied AI practitioners running a focused job search who already have Claude Code set up locally.
- Main differentiator: It never submits an application. Evaluate first, decide second. The 4.0/5 cutoff is a hard recommendation built from 740 real evaluations, not a demo number.
- Best use case: Paste any job URL into Claude Code and walk away. Return to a scored report, a tailored PDF CV, and a tracker entry ready to merge.
Santiago Fernández de Valderrama built a company. He ran it, hired engineers, shipped product to real users, and eventually sold it. Then he sat down at his laptop and discovered something clarifying: he now had to apply for jobs like everyone else.
The first application took an hour. The CV was not calibrated for this specific role, the comp research took another twenty minutes, and the “why us” paragraph was serviceable but not specific. By the fourth application he had a spreadsheet with nine columns, a growing list of pending statuses that nobody was updating, and a copy-paste “Dear Hiring Manager” paragraph in three slightly different fonts for three different ATS portals. He had shipped software to thousands of users. The main system running his job search was conditional formatting.
He built career-ops instead. Then open-sourced it. Thirty thousand people starred it in the first eight days.
What Career-Ops Actually Is
Career-ops is a 14-mode job search agent built on Claude Code. Paste a job URL or raw description into Claude Code, and it produces a structured 6-block evaluation against your CV, a tailored ATS-optimized PDF, and a tracker entry, all in one pass. No reformatting. No extra commands. The system reads the job, reads your CV, and reports whether applying is worth your time.
Think of it as onboarding a recruiter who has already memorized your entire CV and can run a WebSearch for salary benchmarks on demand. The README says it plainly: a new recruiter’s first week is rough because they do not know your proof points yet. Feed the system context first; then judge the output.
The philosophical inversion embedded in the design is worth stating once: companies have used AI to filter candidates for years. Career-ops gives candidates the same lens. You evaluate the company before they evaluate you. That is not a marketing line. It is the architecture. But knowing the architecture is not the same as having working output, and the first evaluations will tell you that plainly.
Real-World Use Cases
Career-ops was not built for passive browsing. It was built for someone actively evaluating five or more roles at once who wants to replace gut-feel sorting with structured signal.
Five patterns where the setup investment earns back its cost:
- The searcher with fifteen job tabs open and no honest signal. Run `/career-ops batch` with all fifteen URLs overnight. Scored reports by morning, not by Friday.
- The passive candidate receiving inbound recruiter messages. Paste the JD into Claude Code; respond from a position of knowledge instead of uncertainty.
- The candidate who needs salary benchmarks before deciding whether to pursue. Block D of every evaluation runs WebSearch-powered comp research automatically.
- The interviewee who has been grinding technical prep but forgot behavioral stories. `/career-ops interview-prep` pulls from a STAR story bank built silently across every prior evaluation.
- The pipeline auditor who needs to see where thirty applications stand without opening thirty tabs. `./career-dashboard --path ..` opens a Bubble Tea TUI with six filter tabs and inline status editing.
The most important pattern runs without any slash command at all.
Open Claude Code inside the career-ops directory. Paste a job URL:

```
https://boards.greenhouse.io/anthropic/jobs/4059434008
```
Career-ops detects it and runs the full auto-pipeline automatically. Here is what it produces:
Input: a job URL, `cv.md` in the project root, and `config/profile.yml` with target roles and comp range.
Output, approximately 3 minutes later:

```
reports/042-anthropic-2026-04-12.md
output/cv-santiago-anthropic-2026-04-12.pdf
batch/tracker-additions/042.tsv
```
The evaluation report opens in six blocks:
```
A - Role Summary
Position: Head of Applied AI
Location: San Francisco (hybrid)
Comp: $220k-$280k + equity
Role type: IC + team lead hybrid

B - CV Match
Alignment: multi-agent systems, LLM fine-tuning, team leadership
Gap: No published research in the past 24 months
Mitigation: Frame consulting engagements as applied research

C - Level Strategy
Title: Head (Staff+ or Director equivalent)
Calibration: On-trajectory. No title inflation detected.

D - Comp Research
WebSearch benchmark: $230k-$290k base for equivalent SF roles
Verdict: Stated range is market-rate

E - CV Personalization Plan
Inject: "agentic systems", "LLM evaluation pipeline"
Reorder: Move project X above project Y for this audience
Proof points: Lead with the company exit and team scale

F - Interview Prep (STAR+R)
Story 1: Building the first AI pipeline - for "system you built from scratch"
Story 2: Managing a team through acquisition - for "leading through ambiguity"

Score: 4.3 / 5
Archetype: LLMOps / Agentic
Recommendation: Apply
```
That report takes 3 minutes. The alternative is 45 minutes of manual research, or skipping the research and applying blind. The candidate who applied blind is still updating the nine-column spreadsheet.
The 14 modes cover every stage of the evaluation arc. Most people never touch half of them: the auto-pipeline decides which modes to reach for, and most searches collapse into the four or five that actually run.
How to Use It
Career-ops runs entirely inside Claude Code. Every interaction is a slash command or a paste. No web interface, no SaaS dashboard, no account provisioning. All data stays in local gitignored files.
Every skill mode and when to reach for each one:
| Mode | Command | What it does |
|---|---|---|
| auto-pipeline | (paste URL or JD text) | Full eval + PDF + tracker in one pass |
| oferta | /career-ops {JD} | Single structured A-F evaluation |
| ofertas | (multi-offer variant) | Evaluate multiple offers from a list |
| scan | /career-ops scan | Browse 45+ portals via Playwright |
| batch | /career-ops batch | Parallel headless processing of URL queue |
| tracker | /career-ops tracker | Pipeline status view |
| pdf | /career-ops pdf | Regenerate CV PDF from cv.md |
| interview-prep | /career-ops interview-prep | STAR story generation |
| contacto | /career-ops contacto | Draft LinkedIn outreach messages |
| deep | /career-ops deep | Deep company research report |
| followup | /career-ops followup | Follow-up cadence message drafts |
| training | /career-ops training | Evaluate a course or cert for career fit |
| project | /career-ops project | Evaluate a portfolio project for role fit |
| patterns | (internal) | Cross-offer pattern analysis |
| pipeline | /career-ops pipeline | Process URL pipeline inbox |
| apply | /career-ops apply | Fill application forms, no auto-submit |
Batch processing under the hood:
/career-ops batch does not queue jobs sequentially. It spawns N headless claude -p workers in parallel. Each worker receives a fully self-contained batch-prompt.md context file, processes one URL independently, and writes output to a designated path. State is tracked in batch-state.tsv with per-worker retry support. When all workers finish, npm run merge consolidates everything into data/applications.md. Ten URLs overnight means ten scored reports, ten tailored PDFs, and ten tracker entries waiting in the morning. That is not a queue. It is a fleet.
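The fleet pattern can be sketched in a few lines. This is an illustrative model, not the repo's actual runner (`batch/batch-runner.sh`): `run_worker` stands in for a headless `claude -p` invocation, and the tab-separated state output mirrors the article's description of `batch-state.tsv`.

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

def run_worker(url: str) -> tuple[str, str]:
    # Placeholder for a real headless call, e.g.
    # subprocess.run(["claude", "-p", batch_prompt], ...)
    return (url, "done")

def run_batch(urls: list[str], workers: int = 4) -> str:
    # Each worker processes one URL independently; results are
    # collected into a TSV-shaped state string as they complete.
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t")
    writer.writerow(["url", "status"])
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for url, status in pool.map(run_worker, urls):
            writer.writerow([url, status])  # one state row per worker
    return buf.getvalue()

state = run_batch(["https://example.com/job/1", "https://example.com/job/2"])
print(state)
```

In the real system the per-worker retry logic and the final `npm run merge` consolidation sit on top of this basic fan-out/fan-in shape.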
PDF generation:
The CV PDF is rendered from an HTML template through Playwright and Chromium, not from a Word document. Space Grotesk and DM Sans fonts are self-hosted so output is identical across machines. Keywords identified in block E are automatically injected into the PDF source per-role before rendering.
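The keyword-injection step before rendering amounts to a templated string substitution. A minimal sketch, assuming a hypothetical `<!-- KEYWORDS -->` marker in the HTML template (the repo's actual marker convention is not documented in the article):

```python
def inject_keywords(html: str, keywords: list[str]) -> str:
    # Replace the placeholder marker with the block-E keywords
    # before the HTML is handed to Playwright for PDF rendering.
    return html.replace("<!-- KEYWORDS -->", ", ".join(keywords))

template = "<ul><li>Skills: <!-- KEYWORDS --></li></ul>"
rendered = inject_keywords(template, ["agentic systems", "LLM evaluation pipeline"])
print(rendered)
```

The rendered HTML is then printed to PDF by Chromium, which is why self-hosted fonts matter: the render path never touches a system font stack.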
💡 Tip: `/career-ops pdf` regenerates the PDF from `cv.md` with any new keyword injections applied. Run it after every substantive update to your CV, not just at application time.
The STAR story bank builds passively. Every evaluation that runs block F deposits STAR+Reflection stories into the bank. By the fourth evaluation, /career-ops interview-prep has material to work with. By the thirtieth, it has enough stories to cover most behavioral question categories in any interview. The candidates who skip early evaluations are the ones who arrive at interview prep with nothing in the bank.
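The passive accumulation described above reduces to an append-with-dedupe per behavioral category. A minimal sketch under the assumption of a simple category-to-stories mapping (the repo's actual storage format is not specified in the article):

```python
from collections import defaultdict

# category -> list of STAR+R story titles
bank: dict[str, list[str]] = defaultdict(list)

def deposit(category: str, story: str) -> None:
    # Each block-F evaluation deposits stories; duplicates within
    # a category are silently skipped.
    if story not in bank[category]:
        bank[category].append(story)

deposit("system you built from scratch", "Building the first AI pipeline")
deposit("leading through ambiguity", "Managing a team through acquisition")
deposit("leading through ambiguity", "Managing a team through acquisition")  # no-op

total = sum(len(stories) for stories in bank.values())
print(total)
```

The payoff is cumulative: interview-prep quality tracks how many evaluations have already run, which is the article's argument for not skipping the early ones.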
Configuration & Customization
Career-ops is designed to be configured by Claude itself. Open Claude Code in the project directory and ask it to change any configuration. The system reads the same files it uses for evaluations, so it knows where to edit without explicit instructions. Tell it who you are and what you want the same way you would brief a new hire.
| Config file | What it controls | When to change |
|---|---|---|
| `config/profile.yml` | Name, email, target roles, narrative, comp range, location | Initial setup; update when preferences shift |
| `cv.md` | Your full CV in markdown | Source of truth for all evaluations and PDFs |
| `article-digest.md` | Proof points from portfolio projects and published writing | Add new projects as you complete them |
| `portals.yml` | Companies to scan, search queries, title filters | Add companies and tune keyword filters |
| `modes/_shared.md` | Archetype table, scoring weights, negotiation scripts | Customize to your target role types |
| `templates/cv-template.html` | PDF fonts, colors, layout | When adjusting visual presentation |
The 45+ pre-configured companies in portals.yml include Anthropic, OpenAI, ElevenLabs, and Retool. Start with the defaults and add outward as you identify additional targets.
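For orientation, a `portals.yml` entry might look like the following. The key names here are illustrative assumptions, not the repo's documented schema; only the company and portal names come from the article.

```yaml
# Hypothetical portals.yml sketch -- field names are assumptions.
companies:
  - name: Anthropic
    portal: greenhouse
  - name: ElevenLabs
    portal: ashby
title_filters:
  - "Applied AI"
  - "Staff"
queries:
  - "agentic"
  - "LLM"
```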
⚠️ Warning: The `modes/` directory was originally written in Spanish. The README instructs you to ask Claude to translate the mode files before your first search session. Translated subdirectories exist for German, French, Japanese, Portuguese, and Russian, but the primary files (`oferta.md`, `scan.md`, `batch.md`, and others) are Spanish. Skipping this step means running your search with instructions the system cannot fully process.
Where It Fits (And Where It Doesn’t)
Career-ops fits one narrow scenario well: a technical person running a focused, high-intent search who wants to replace spreadsheet overhead with structured evaluation. If that is not the current situation, the 2-hour setup cost is a bad trade.
It replaces:
- Manual CV tailoring per application (block E does this per JD)
- Spreadsheet job tracking (tracker mode and the Go TUI dashboard)
- Manually browsing 45+ company career pages (scan mode via Playwright)
- Recycled cover paragraphs (contacto drafts from your specific profile narrative)
It works alongside LinkedIn, which it feeds outreach drafts into but does not replace. It complements external salary benchmarking databases, augmenting them with block D’s WebSearch research rather than replacing them.
It does not replace human judgment on any decision. /career-ops apply assists with form completion but stops short of submission. Every application sent is a human decision. The architecture is explicit about this.
The contributor infrastructure (CONTRIBUTING.md, GOVERNANCE.md, Discord community, a roadmap image at docs/roadmap-phases.jpg) signals the author is building toward a wider ecosystem. The batch worker architecture already runs multi-agent claude -p workers natively, so team-mode futures are within reach. But that is the roadmap. Today it is a local tool for one person running a serious search. The question the roadmap image raises is whether it stays that way.
The Rough Edges
The 4.0/5 cutoff is the sharpest edge in the repo and requires the most honesty. The score is a weighted average across 10 dimensions calibrated from 740 real evaluations. It is not audited externally. The scoring weights in modes/_shared.md are tuned for six archetypes: LLMOps, Agentic, PM, SA, FDE, and Transformation. If your target role type sits outside that table, the weights need manual calibration before the cutoff means anything reliable.
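The scoring mechanism itself is a plain weighted average. A sketch under the assumption of a simple per-archetype weight table; the real weights live in `modes/_shared.md`, and the dimension names and numbers below are invented for illustration:

```python
CUTOFF = 4.0  # the repo's recommended apply threshold

def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    # Weighted average over the scored dimensions (1-5 scale).
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in weights) / total_weight

# Hypothetical weights for one archetype (the real table has 10 dimensions).
weights = {"cv_match": 0.3, "comp": 0.2, "level": 0.2, "archetype_fit": 0.3}
scores = {"cv_match": 4.5, "comp": 4.0, "level": 4.0, "archetype_fit": 4.5}

s = weighted_score(scores, weights)
print(round(s, 2), "Apply" if s >= CUTOFF else "Skip")  # → 4.3 Apply
```

This is why recalibrating the weights for an out-of-table archetype matters: the cutoff is meaningful only relative to the weight vector it was tuned against.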
The portal scanner runs on Playwright and scraped endpoints. Job portals change their DOM layouts without notice, and career-ops scrapes Greenhouse, Ashby, Lever, Wellfound, Workable, and RemoteFront. The npm run liveness command validates that scrapers are functional, but between liveness checks and upstream updates there will be brittle periods. Empty scan results are a scraper drift symptom, not something fixable locally.
PDF generation requires Chromium. npx playwright install chromium adds roughly 300MB to the setup. On machines with slower disk I/O, the first render feels slow.
The README is upfront: first evaluations will be poor quality. The system does not know your proof points, your narrative, or your target calibration on day one. Feed it article-digest.md, your full CV, and your preferences before running your first real evaluation. The quality difference between a cold system and a warmed one is not incremental.
The modes are a real maintenance surface. Fourteen files in modes/, plus _shared.md, means that updating the scoring model or archetype table touches multiple files. Claude can do this when asked. The catch is that you need to know to ask.
Getting Started
The minimum path to a working first evaluation is five steps:
1. Clone and install:

   ```
   git clone https://github.com/santifer/career-ops.git
   cd career-ops
   npm install
   npx playwright install chromium
   ```

2. Run the doctor:

   ```
   npm run doctor
   ```

   This checks Claude Code, Node.js, and Playwright. Fix anything flagged here before continuing.

3. Configure your profile:

   ```
   cp config/profile.example.yml config/profile.yml
   cp templates/portals.example.yml portals.yml
   ```

   Open both files. Fill in your name, target roles, comp expectations, and the companies you care about.

4. Add your CV: create `cv.md` in the project root and paste your full CV in markdown. This is the single source of truth for every evaluation and every generated PDF.

5. Open Claude Code in the project directory and onboard the system:

   ```
   claude
   > "Here is my CV. Update my profile accordingly."
   > "My target archetypes are LLMOps and Agentic roles."
   > "Add these companies to portals.yml: [your list]"
   ```
After that, paste any job URL into Claude Code. The rest runs automatically. Whether that first evaluation is worth the setup cost depends entirely on how much context you fed the system before you ran it.
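For reference, the profile from step 3 might end up looking something like this. The key names are illustrative assumptions, not the repo's documented schema:

```yaml
# Hypothetical config/profile.yml sketch -- field names are assumptions.
name: Jane Doe
email: jane@example.com
target_roles:
  - Head of Applied AI
  - Staff ML Engineer
archetypes: [LLMOps, Agentic]
comp_range: "$220k-$280k"
location: "San Francisco (hybrid)"
```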
FAQ
What does career-ops produce for each job listing?
Three files: a `reports/###-company-YYYY-MM-DD.md` evaluation with six structured blocks (role summary, CV match, level strategy, comp research, CV personalization plan, interview prep), an `output/cv-candidate-company-YYYY-MM-DD.pdf` ATS-optimized PDF with injected keywords, and a `batch/tracker-additions/###.tsv` entry for the pipeline tracker. All files are local and gitignored by default.
How does the batch mode work with parallel claude -p workers?
/career-ops batch runs batch/batch-runner.sh, which spawns N headless Claude Code instances via claude -p. Each worker receives a self-contained batch-prompt.md context file, processes one job URL independently, and writes output to a designated path. State is tracked in batch-state.tsv with per-worker retry support. npm run merge consolidates all outputs into data/applications.md after completion.
What is the 4.0/5 scoring cutoff and how is the score calculated?
The score is a weighted average across 10 evaluation dimensions on a 1-5 scale. Weights are defined in modes/_shared.md and vary by archetype (LLMOps, Agentic, PM, SA, FDE, Transformation). The 4.0/5 recommendation reflects calibration built from evaluating 740+ job listings, prioritizing genuine role fit over keyword matching. If your target role type is not in the default archetypes, update the weights in _shared.md before treating the cutoff as authoritative.
Does career-ops ever submit applications automatically?
No, and this is a design decision. The system evaluates, tailors, and prepares. Every submission is a human decision. The /career-ops apply mode assists with form completion but stops short of submitting. The explicit intent is fewer applications with better fit, not more volume.
How does the portal scanner handle 45+ companies?
/career-ops scan uses Playwright to browse Greenhouse, Ashby, Lever, Wellfound, Workable, and RemoteFront portals for the companies listed in portals.yml. Title filters and keyword queries are configured per-company. Results go into the pipeline inbox for batch processing. npm run liveness validates that each scraper is still functional before a full scan session.
Final Thoughts
Santiago spent months applying to jobs the hard way after selling his company. He had the experience, the network, and the track record. He was still going through the same motions as every other candidate: reformatting CVs per application, researching companies from scratch each time, tracking status in a spreadsheet that was permanently one accidentally-closed tab away from losing a week of data.
The result of building a different system is documented in the README: 740 evaluations, 100+ tailored CVs, one Head of Applied AI role. Not through volume. Through precision. The 30,000 people who starred this repo in eight days recognized the problem before they had even finished reading the title.
If you are running a serious job search right now, you already know what conditional formatting feels like as a system. Career-ops does not remove the work. It changes what the work is.
santifer/career-ops · MIT · 30,824★
Hoang Yell
A software developer and technical storyteller. I spend my time exploring the most interesting open-source repositories on GitHub and presenting them as accessible stories for everyone.