Session overview
This was Session 4 of 5 in the programme (Fri 13 Feb, 09:00 GMT). The focus was how to use AI coding agents more systematically for coursework and research projects: plan first, then execute; build reusable skills; and enforce consistency with sub-agents and anchor files.
Poll and tool snapshot
- A live poll asked: "Are you using Cursor?"
- Some participants reported using Cursor's free tier; nobody reported paying for Cursor.
- At least one participant reported using Antigravity.
- Students using other tools were asked to share them in chat or by email.
Key topics covered
1. Plan first, execute second
- The core recommendation was to avoid asking agent mode to immediately generate long code outputs.
- Use Plan mode first to compare options and define a controlled workflow.
- Planning reduced rework and made choices explicit (e.g., which inflation measure to use, how to define unemployment, and whether to include expectations variables).
2. End-to-end demo workflow (Phillips Curve project)
- Demo project: research how the Phillips Curve changed over time in European countries.
- Two parallel tasks were launched:
  - a literature review agent
  - a data strategy/data download agent
- Cursor generated:
  - a structured literature review document
  - a data-strategy document
  - Python scripts to pull Eurostat series and save CSV files
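A minimal sketch of the kind of download script discussed, assuming the public Eurostat dissemination API; the dataset code prc_hicp_manr (HICP annual rate of change) and the filter values are illustrative, not taken from the session:

```python
import csv
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://ec.europa.eu/eurostat/api/dissemination/statistics/1.0/data"

def eurostat_url(dataset: str, **filters: str) -> str:
    """Build a JSON query URL for the Eurostat dissemination API."""
    params = {"format": "JSON", "lang": "EN", **filters}
    return f"{BASE}/{dataset}?{urlencode(params)}"

def fetch_series(dataset: str, **filters: str) -> dict:
    """Download one dataset as JSON-stat; the caller handles parsing."""
    with urlopen(eurostat_url(dataset, **filters)) as resp:
        return json.load(resp)

def save_csv(rows: list[dict], path: str) -> None:
    """Save parsed observations to a CSV file for later analysis."""
    if not rows:
        return
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    # prc_hicp_manr = HICP annual rate of change (illustrative dataset code)
    print(eurostat_url("prc_hicp_manr", geo="DE", coicop="CP00"))
```

Keeping the URL construction in its own function makes the query easy to review in a plan before any download runs.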
3. Why planning mode matters
- Direct agent execution can produce large outputs immediately.
- Plan mode gave a more nuanced strategy and surfaced alternatives before code generation.
- This improves review quality and reduces the risk of hidden mistakes.
4. Skills: reusable prompts and workflows
- A skill was created using .cursor/Skills/<Skill Name>/Skill.md.
- The session emphasized metadata, expected inputs/outputs, and repeatability.
- Skills can be invoked with slash commands and reused across projects/tools.
- Portability note: folder names differ by tool, but the skill pattern is similar.
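A hypothetical Skill.md to illustrate the pattern: the file lives at the .cursor/Skills/<Skill Name>/Skill.md path mentioned above, but the skill name, metadata fields, and steps here are invented for illustration:

```markdown
---
name: eurostat-download
description: Pull Eurostat series and save tidy CSV files for a list of countries.
---

# Eurostat download skill

## Inputs
- Eurostat dataset code (e.g., une_rt_m)
- List of country codes (geo)

## Outputs
- One CSV per series in data/raw/

## Steps
1. Build the API query for each country.
2. Download and validate the JSON response.
3. Save tidy CSVs and record the download date.
```

Because the metadata and inputs/outputs are explicit, the same file can be invoked with a slash command here and copied into another tool's skills folder with little change.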
5. Skills ecosystem and repositories
- The session highlighted existing shared skill libraries and skill creators.
- Examples discussed included economics-focused skills (e.g., Stata workflows, visualization, LaTeX support) and the Anthropic skill creator pattern.
6. Sub-agents and consistency checks
- Running multiple agents in parallel can create inconsistent outputs across project parts.
- Proposed fix: define a sub-agent to review consistency across the full folder and list required fixes.
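The consistency pass can also be partly scripted before handing it to a sub-agent. A minimal sketch (not from the session) that flags CSV files in a project folder whose column headers disagree with the majority:

```python
import csv
from collections import Counter
from pathlib import Path

def csv_headers(folder: str) -> dict[str, list[str]]:
    """Map each CSV file name to its header row so mismatches are easy to spot."""
    headers = {}
    for path in sorted(Path(folder).glob("*.csv")):
        with open(path, newline="") as f:
            headers[path.name] = next(csv.reader(f), [])
    return headers

def inconsistent_files(folder: str) -> list[str]:
    """Return files whose headers differ from the most common header set."""
    headers = csv_headers(folder)
    if not headers:
        return []
    counts = Counter(tuple(cols) for cols in headers.values())
    majority = counts.most_common(1)[0][0]
    return [name for name, cols in headers.items() if tuple(cols) != majority]
```

A sub-agent can then be pointed at the short list of offending files instead of re-reading the whole folder.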
7. Anchor files for project-level alignment
- Use an anchor file with project goals, constraints, and preferred methods.
- Discussed patterns:
  - Agents.md for Cursor/Codex/Gemini-style workflows
  - CLAUDE.md for Claude Code
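An anchor file can be short. A hypothetical Agents.md for the Phillips Curve demo project; the goals, constraints, and methods listed here are illustrative, not the actual file from the session:

```markdown
# Project: Phillips Curve in European countries

## Goal
Study how the inflation-unemployment relationship has changed over time.

## Constraints
- Data source: Eurostat only; record the download date for each series.
- Language: Python; save all raw data as CSV in data/raw/.

## Preferred methods
- Use Plan mode before any code generation.
- Document the inflation measure and unemployment definition used in every script.
```

Because every agent reads this file, choices such as the data source and variable definitions stay consistent across parallel tasks.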
Recommended workflow from this session
- Draft the project question and constraints first.
- Use Plan mode to generate options and a workflow.
- Review and edit the plan before execution.
- Execute with agent mode in smaller chunks.
- Reuse skills for repeated tasks.
- Run sub-agent or consistency-review passes.
- Keep anchor files (Agents.md/CLAUDE.md) up to date.
- Verify outputs manually; do not trust generated code blindly.
Action items (instructor)
- Publish links/resources on the website for skills, agents, and tool docs.
- In Session 5, demonstrate full project structuring with skills + sub-agents + anchor files.
- In Session 5, demonstrate practical verification methods on a real research-style workflow.
Reminders for students
- The objective is not just speed; it is controlled, verifiable workflows.
- Plan before execution to avoid reviewing huge low-quality outputs.
- Keep building your personal library of reusable skills.
- Cross-check every generated output before using it in coursework.
What's next
Session 5 is now available: Session 5 Summary: Mock Coursework Workflow and AI Project Setup.