A Practical Guide to Cursor Rules Files for Engineering Teams
A practical guide to configuring Cursor rules for engineering teams. Project context, coding standards, testing patterns, and what actually moves the needle.
We rewrote our Cursor rules file three times before it started making a difference. The first version was too vague -- "write clean code" doesn't mean anything to an AI. The second was too long -- 2,000 lines of instructions that polluted the context window and made the AI slower without making it smarter. The third time, we got it right.
Here's what we learned about writing Cursor rules that actually improve your team's output.
Why Rules Files Matter More Than You Think
Without a rules file, every conversation with Cursor starts from zero. The AI doesn't know your project structure. It doesn't know you use Zustand instead of Redux. It doesn't know your API responses follow a specific error format. It doesn't know your team's testing conventions.
So it guesses. And its guesses are generic. The code it generates works, but it doesn't match your codebase. Your engineers spend time rewriting AI output to fit your patterns. That's the opposite of the productivity gain you're paying for.
A good rules file is like onboarding documentation for an AI teammate. You wouldn't hire an engineer and let them guess your conventions. You'd give them a style guide, a project overview, and examples of well-written code in your codebase. Your Cursor rules file is that document -- except the AI actually reads it every time, unlike the human engineer who skimmed the wiki once and never looked at it again.
The compounding effect is significant. Every engineer on your team benefits from the same rules. Every prompt gets the same contextual grounding. The AI's output gets more consistent, which means less review friction, fewer pattern violations, and faster iteration.
The Architecture Section
Start with what the AI needs to know about your project's shape. Not a comprehensive architecture document -- a concise summary that gives the AI enough context to generate code that fits.
Keep it short. Three to five sentences about the tech stack. Key directories and what lives in them. The primary patterns your codebase uses.
Here's roughly what ours looks like:
This is a NestJS backend with a React/Vite frontend. The backend
uses TypeORM with PostgreSQL. API endpoints follow REST conventions
with DTOs for validation. The frontend uses shadcn/ui components,
Tailwind CSS, and Zustand for state management.
Backend structure:
- src/modules/ - NestJS modules (one per domain concept)
- src/entities/ - TypeORM entities
- src/dto/ - Request/response DTOs with class-validator decorators
- src/migrations/ - TypeORM migrations (generated, not hand-written)
Frontend structure:
- src/components/ui/ - shadcn/ui primitives (do not modify)
- src/components/features/ - Feature-specific components
- src/hooks/ - Custom React hooks
- src/services/ - API client functions
That's it. No multi-page architecture decision records. No history of why you chose NestJS. Just enough for the AI to generate code that goes in the right place and follows the right shape.
Coding Standards That Actually Work
This is where most teams go wrong. They write rules like "follow best practices" or "write maintainable code." Those instructions are useless. The AI interprets them however it wants, which means it ignores them.
Effective rules are specific and affirmative. Tell the AI what to do, not what to avoid. AI systems are notoriously bad at negative instructions -- "don't use class components" sometimes gets interpreted as "use class components."
Write your standards as concrete patterns:
TypeScript conventions:
- Use strict equality (=== and !==) for all comparisons
- Use async/await instead of .then() chains
- Destructure function parameters when there are 3+ arguments
- Return early from functions when validation fails
- Use const for all variables unless reassignment is needed
Error handling:
- Wrap all async service methods in try/catch
- Throw HttpException with appropriate status codes in controllers
- Log errors using the Logger service with context
- Return structured error responses: { error: string, statusCode: number }
React components:
- Use functional components with hooks
- Extract custom hooks when component logic exceeds 20 lines
- Use Tailwind classes for styling, never inline styles
- Colocate component-specific types in the same file
Notice the pattern: each rule describes a specific behavior in a specific context. The AI can follow "use async/await instead of .then() chains" because it's unambiguous. It can't follow "write idiomatic TypeScript" because that means different things to different people.
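Several of these rules are easiest to see working together in one small function. This is an illustrative sketch, not code from our repo -- `findUserByEmail` and the `User` type are hypothetical:

```typescript
type User = { id: string; email: string };

async function findUserByEmail(
  users: User[],
  email: string,
): Promise<User | null> {
  // Return early when validation fails
  if (email.trim() === '') {
    return null;
  }
  // const by default; strict equality for the comparison
  const match = users.find((u) => u.email === email);
  return match ?? null;
}
```

A rule like "return early from functions when validation fails" maps directly onto the first three lines of the body, which is exactly why the AI can follow it.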
The Testing Section
Testing instructions are where rules files provide the highest ROI. Without explicit testing rules, AI generates superficial tests -- happy path only, mocking everything, testing implementation details instead of behavior. With clear rules, the generated tests are dramatically more useful.
Here's what to include:
Testing:
- Write tests using Jest with @nestjs/testing for backend
- Test behavior, not implementation details
- Every service method needs: success case, error case, edge case
- Mock external dependencies (database, APIs) but not internal logic
- Use descriptive test names: "should [expected behavior] when [condition]"
- For API endpoints, test the full request/response cycle using supertest
- Frontend tests use React Testing Library
- Query elements by role or label, never by class name or test ID
- Test user interactions, not component state
The "test behavior, not implementation" rule alone saves hours of review feedback. Without it, AI generates tests that assert on internal function calls and break whenever you refactor. With it, you get tests that verify outcomes and survive code changes.
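The difference is easier to see in code than in prose. A minimal sketch -- `slugify` is a hypothetical function, chosen only because it is small enough to test inline:

```typescript
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
}

// Behavioral test: asserts only on the observable output.
// It keeps passing even if slugify is rewritten internally.
console.assert(slugify('Hello, World!') === 'hello-world');

// A brittle test would instead spy on .replace being called twice --
// that assertion breaks the moment the implementation changes.
```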
Code Examples Are Your Secret Weapon
Abstract rules get you 70% of the way. Concrete examples get you the rest.
Include a short example of your most common patterns. The AI learns more from one well-written example than from five paragraphs of description.
Example: API endpoint with validation, error handling, and response format
// Controller
@Post()
async createRepository(@Body() dto: CreateRepositoryDto): Promise<RepositoryResponseDto> {
  const repository = await this.repositoryService.create(dto);
  return plainToInstance(RepositoryResponseDto, repository);
}

// Service
async create(dto: CreateRepositoryDto): Promise<Repository> {
  const existing = await this.repo.findOne({ where: { name: dto.name } });
  if (existing) {
    throw new ConflictException(`Repository ${dto.name} already exists`);
  }
  const entity = this.repo.create(dto);
  return this.repo.save(entity);
}

// Test
describe('create', () => {
  it('should create and return a new repository', async () => {
    const dto = { name: 'test-repo', url: 'https://github.com/org/test-repo' };
    mockRepo.findOne.mockResolvedValue(null);
    mockRepo.create.mockReturnValue(dto);
    mockRepo.save.mockResolvedValue({ id: '1', ...dto });

    const result = await service.create(dto);

    expect(result).toEqual(expect.objectContaining({ name: 'test-repo' }));
    expect(mockRepo.save).toHaveBeenCalledWith(dto);
  });

  it('should throw ConflictException when repository exists', async () => {
    mockRepo.findOne.mockResolvedValue({ id: '1', name: 'test-repo' });

    const dto = { name: 'test-repo', url: 'https://github.com/org/test-repo' };
    await expect(service.create(dto)).rejects.toThrow(ConflictException);
  });
});

One example like this teaches the AI your controller-service-test pattern, your error handling approach, your mocking strategy, and your test naming convention -- all at once.
Managing Multiple Rules Files
Cursor supports different attachment modes: auto-attached by file type, agent-requested, manually included, and always-included. Use them strategically.
Create separate files for separate concerns. A TypeScript rules file for .ts. A React rules file for .tsx. A testing rules file for .spec.ts. This prevents your React conventions from polluting context when writing a NestJS service.
Keep always-included rules minimal -- project architecture and universal conventions only. Scope everything else to the file types where it applies.
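Concretely, a scoped rule lives in .cursor/rules/ as a .mdc file whose frontmatter controls when it attaches. The exact fields may vary across Cursor versions, but a React rules file scoped to .tsx would look roughly like this:

```
---
description: React component conventions
globs: "**/*.tsx"
alwaysApply: false
---

React components:
- Use functional components with hooks
- Use Tailwind classes for styling, never inline styles
```

With globs set, the rule is pulled into context only when a matching file is in play; alwaysApply: true is reserved for the minimal project-wide file.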
The Iteration Loop
Here's the practice that made the biggest difference: every time Cursor generates code that doesn't match our standards, and we correct it, we update the rules file.
This isn't a one-time setup task. It's ongoing. The rules file evolves with the codebase. New patterns get added. Old ones get updated. Edge cases get documented when the AI handles them wrong.
The trigger is repetition. If you give the same feedback twice -- "we don't import from barrel files" or "always include audit fields when creating entities" -- that feedback belongs in the rules file. Say it once, encode it, never say it again.
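Encoded, that feedback becomes one more affirmative line in the rules file (the paths and field names here are illustrative):

```
Imports:
- Import from the module that defines the symbol
  (e.g. src/modules/user/user.service), not from barrel files

Entities:
- Include audit fields (createdBy, updatedAt) when creating entities
```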
After three months, our rules file was lean and effective. Review comments shifted from "this doesn't follow our convention" to substantive discussions about architecture. That's the goal.
Measuring the Impact
Setting up Cursor rules is an investment. Like any investment, you should measure whether it's paying off.
The qualitative signal is clear: are engineers spending less time reformatting AI output to match your standards? Are review comments shifting from pattern enforcement to architectural discussion? Do PRs from AI-assisted development blend seamlessly with hand-written code?
The quantitative signal is harder without instrumentation. If you're scoring PR output, you can track whether quality and implementation scores improve after rules file changes. You can see whether engineers who adopt standardized AI workflows ship higher-complexity work. You can measure whether the consistency of output increases across the team.
The rules file is the input. The shipped code is the output. Optimizing the input without measuring the output is guesswork. Doing both is engineering.
GitVelocity measures engineering velocity by scoring every merged PR using AI. Once you've standardized your AI tool configuration, see whether it's actually improving what your team ships.
Conrad is CTO and Partner at Headline, where he leads data-driven investment across early stage and growth funds with over $4B in AUM. Before becoming an investor, he founded Munchery (raised $130M+) and held engineering and product leadership roles at IAC and Convio (IPO 2010). He and the Headline engineering team built GitVelocity to help engineering organizations roll out agentic coding and measure its impact.