Sprint Structure

YeboLearn operates on a 2-week sprint cycle that balances feature development with quality assurance and continuous deployment. This structure enables predictable delivery while maintaining technical excellence.

Sprint Overview

Two-Week Cadence

Why 2 Weeks?

  • Long enough to complete meaningful features
  • Short enough for rapid feedback and iteration
  • Aligns with bi-weekly production releases
  • Maintains team focus and momentum

Sprint Timeline:

Sprint N (2 weeks)
├─ Week 1: Development Sprint
│   ├─ Day 1 (Mon): Planning & kickoff
│   ├─ Day 2-5: Active development
│   └─ Code freeze: Friday 5 PM
├─ Week 2: Release Sprint
│   ├─ Day 6-8 (Mon-Wed): Testing & release prep
│   ├─ Day 9 (Thu): Demo & deploy
│   └─ Day 10 (Fri): Monitoring & retrospective
└─ Next sprint planning begins

Sprint Ceremonies

Sprint Planning (Monday Week 1, 10 AM - 12 PM)

Participants: Entire engineering team, product owner, design lead

Agenda:

1. Sprint Review (15 minutes)

What We Shipped Last Sprint:
✅ AI quiz generator (10 story points)
✅ Payment retry logic (5 points)
✅ Performance optimizations (8 points)
Total: 23 points delivered (target was 32)

What Didn't Ship:
❌ Essay grading UI (9 points) - moved to this sprint
Why: Design changes required mid-sprint

Key Metrics:
• Velocity: 23 points (below average)
• Test coverage: 71% → 73%
• Deployment frequency: 8 deploys
• Production incidents: 1 (resolved in 20 min)

2. Backlog Review (30 minutes)

Product owner presents prioritized backlog:

# Sprint 25 Proposed Work

## Theme: AI Essay Grading Launch

### High Priority (Must Have)
1. Essay Grading UI Component (9 pts)
   - Student submission interface
   - Real-time feedback display
   - Revision tracking

2. AI Grading Backend (13 pts)
   - Gemini integration for analysis
   - Rubric-based scoring
   - Feedback generation
   - Rate limiting

3. Teacher Dashboard Updates (5 pts)
   - View student submissions
   - Manual grade override
   - Bulk operations

### Medium Priority (Should Have)
4. Performance Optimization (5 pts)
   - Database query caching
   - Image optimization
   - Bundle size reduction

5. Bug Fixes (3 pts)
   - Login redirect issue
   - Quiz timer accuracy
   - Mobile responsiveness

### Nice to Have (If Capacity)
6. Analytics Improvements (3 pts)
7. Documentation Updates (2 pts)

3. Story Breakdown & Estimation (60 minutes)

Team breaks down stories into technical tasks:

# Story: AI Grading Backend (13 points)

## Tasks:
- [ ] Design database schema for submissions (2 pts)
- [ ] Create submission API endpoint (2 pts)
- [ ] Integrate Gemini API (3 pts)
- [ ] Implement rubric scoring logic (2 pts)
- [ ] Build feedback generation (2 pts)
- [ ] Add rate limiting (1 pt)
- [ ] Write unit tests (1 pt)

## Technical Considerations:
- Need to handle long essays (10k+ words)
- Gemini API has 60 requests/minute limit
- Database migration required
- Async processing for >5 min operations

## Acceptance Criteria:
- [ ] Student can submit essay
- [ ] AI generates feedback within 30 seconds
- [ ] Feedback includes score and suggestions
- [ ] Teacher can review and override
- [ ] Error handling for API failures

Estimation Scale (Fibonacci):

1 point  = 1-2 hours (simple bug fix)
2 points = Half day (straightforward feature)
3 points = 1 day (moderate complexity)
5 points = 2-3 days (complex feature)
8 points = 1 week (very complex, consider splitting)
13 points = 2 weeks (epic, must split into smaller stories)

4. Sprint Commitment (15 minutes)

# Sprint 25 Commitment

## Sprint Goal
Launch AI essay grading feature with teacher oversight

## Committed Stories (32 points)
✓ Essay Grading UI (9 pts) - Sarah
✓ AI Grading Backend (13 pts) - John & Lisa
✓ Teacher Dashboard (5 pts) - Mark
✓ Performance Optimization (5 pts) - Sarah

## Stretch Goals (6 points)
○ Bug Fixes (3 pts) - Team
○ Analytics Improvements (3 pts) - Mark

## Dependencies & Risks
⚠️ Gemini API quota increase (requested, pending)
⚠️ Design for teacher dashboard (due Tuesday)
✓ Database migration tested in staging

Outputs:

  • Sprint goal defined
  • Stories committed (realistic velocity)
  • Initial task assignments
  • Dependencies identified
  • Risks documented

Daily Standup (Every Day, 9:30 AM - 9:45 AM)

Format: Async-First with Sync Option

Async Update (Slack #engineering, before 9:30 AM):

Sarah | Yesterday | Today | Blockers
------|-----------|-------|----------
✅ Completed essay submission UI | Working on feedback display component | None

John | Yesterday | Today | Blockers
-----|-----------|-------|----------
✅ Set up Gemini API integration | Building rubric scoring logic | Waiting on API quota increase

Lisa | Yesterday | Today | Blockers
-----|-----------|-------|----------
✅ Reviewed John's PR, started rate limiting | Completing rate limiting, writing tests | None

Mark | Yesterday | Today | Blockers
-----|-----------|-------|----------
🚧 Teacher dashboard 70% done | Finishing dashboard, need design for grade override | Waiting on design from Emily

Sync Standup (Google Meet, 9:30 AM, 15 min max):

  • Discuss blockers only
  • Quick coordination needed
  • Technical questions
  • Not a status report (that's async)

Best Practices:

  • Keep it brief (<2 min per person)
  • Focus on coordination, not reporting
  • Raise blockers immediately
  • Help each other solve problems
  • Skip if no blockers (async is enough)

Mid-Sprint Check-in (Wednesday Week 1, 3 PM - 3:30 PM)

Purpose: Course-correct if needed

Agenda:

1. Progress Review (10 min)

Sprint Burndown:
Day 1-2: 32 points remaining
Day 3-4: 21 points remaining (on track)
Day 5 (projected): 15 points remaining (slightly ahead)

Completed:
✅ Essay submission UI (9 pts)
✅ Database schema & migration (2 pts)
✅ Submission API endpoint (2 pts)

In Progress:
🚧 Gemini API integration (3 pts) - 80% done
🚧 Rubric scoring (2 pts) - 50% done
🚧 Teacher dashboard (5 pts) - 60% done

Not Started:
⏳ Performance optimization (5 pts)
⏳ Feedback generation (2 pts)
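A burndown like the one above is simple arithmetic: committed points minus cumulative completions. A quick sketch that reproduces the example curve (the per-day completion numbers are illustrative):

```python
def burndown(committed: int, completed_per_day: list[int]) -> list[int]:
    """Points remaining at the end of each day, given per-day completions."""
    remaining, curve = committed, []
    for done in completed_per_day:
        remaining -= done
        curve.append(remaining)
    return curve

# Mirrors the example above: 32 committed, 11 pts landing by day 3, 6 by day 5
assert burndown(32, [0, 0, 11, 0, 6]) == [32, 32, 21, 21, 15]
```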

2. Risk Assessment (10 min)

Resolved:
✅ Gemini API quota approved
✅ Design specs delivered

New Risks:
⚠️ Teacher dashboard taking longer than expected
   → Mitigation: Mark pairing with Sarah tomorrow

⚠️ Gemini API response time slower than expected (45s)
   → Mitigation: Investigating caching strategy

3. Adjustments (10 min)

  • Move performance optimization to stretch goals
  • Add performance spike for next sprint
  • All other work on track

Sprint Demo (Thursday Week 2, 3 PM - 4 PM)

Participants: Engineering, product, design, stakeholders

Format: Show Working Software

Demo Script:

# Sprint 25 Demo - AI Essay Grading

## 1. Student Experience (Sarah - 10 min)
[Live demo]
- Submit essay on "Climate Change Impact"
- Real-time progress indicator
- Receive AI feedback in 30 seconds
- Review detailed suggestions

## 2. AI Grading Engine (John - 10 min)
[Technical walkthrough]
- Gemini API integration
- Rubric-based scoring algorithm
- Example: Essay analysis breakdown
- Performance: 30s average, 45s max

## 3. Teacher Dashboard (Mark - 10 min)
[Live demo]
- View student submissions
- Review AI-generated feedback
- Override grades if needed
- Bulk actions for classes

## 4. Platform Improvements (Team - 5 min)
- Page load time: 2.2s → 1.8s
- Test coverage: 73% → 76%
- Fixed login redirect bug

## 5. Metrics & Impact (Product - 5 min)
- Feature ready for beta (100 students)
- Expected to save teachers 5 hours/week
- Differentiated feature (no African competitor has this)

## 6. Q&A (10 min)

Demo Best Practices:

  • Show working features in production/staging
  • Focus on user value, not technical details
  • Highlight metrics and business impact
  • Be honest about what's not done
  • Gather feedback for next iterations

Sprint Retrospective (Friday Week 2, 2 PM - 3 PM)

Format: Start-Stop-Continue + Action Items

1. What Went Well? (15 min)

✅ Excellent collaboration on AI grading
   - John and Lisa pair programming was very effective
   - Shipped complex feature with high quality

✅ Mid-sprint check-in caught teacher dashboard risk early
   - Adjusted plan and paired to recover

✅ Test coverage improved to 76%
   - The team is writing tests first more consistently

2. What Didn't Go Well? (15 min)

❌ Initial estimates for teacher dashboard were too optimistic
   - Didn't account for complex permission logic
   - Need to break down UI stories more thoroughly

❌ Gemini API performance concerns discovered late
   - Should have spiked this in previous sprint
   - Need better upfront research for external APIs

❌ Two PRs sat unreviewed for >1 day
   - Everyone was too focused on their own work
   - Need better PR review discipline

3. Action Items (20 min)

🎯 Action: Create spike story for researching external APIs
   Owner: John
   Due: Next sprint planning

🎯 Action: Add "PR review time" to daily standup check
   Owner: Team
   Due: Ongoing, starting Monday

🎯 Action: Update estimation guide with UI complexity checklist
   Owner: Sarah
   Due: End of next sprint

🎯 Action: Schedule AI performance optimization spike
   Owner: Lisa
   Due: Sprint 26

4. Celebrate Wins (10 min)

  • First AI grading feature shipped
  • Best test coverage yet (76%)
  • Zero production incidents
  • Team collaboration highlighted

Retrospective Rotation:

  • Different facilitator each sprint
  • Rotate format (timeline, sailboat, 4Ls, etc.)
  • Track action items completion
  • Review past action items

Sprint Metrics

Velocity Tracking

Story Points Completed Per Sprint:

Sprint 20: 34 points
Sprint 21: 38 points
Sprint 22: 32 points (holiday)
Sprint 23: 40 points
Sprint 24: 36 points
Sprint 25: 38 points (current)

Rolling Average (last 6 sprints): 36 points
Target for Planning: 32-36 points
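The rolling average and planning target can be derived mechanically from the velocity history. A sketch, assuming a conservative 0.9 buffer on the average (the buffer factor is an assumption; tune it to the team's risk appetite):

```python
def planning_target(velocities: list[int], window: int = 6,
                    buffer: float = 0.9) -> tuple[int, int]:
    """Rolling average over the last `window` sprints, plus a buffered
    commitment target (average * buffer), both rounded to whole points."""
    recent = velocities[-window:]
    avg = sum(recent) / len(recent)
    return round(avg), round(avg * buffer)

# Sprints 20-25 from the table above
avg, target = planning_target([34, 38, 32, 40, 36, 38])
# avg = 36; buffered target = 33, inside the 32-36 planning range
```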

Velocity Trends:

Increasing Velocity:
✓ Team growing in experience
✓ Better estimation accuracy
✓ Improved tooling and automation

Decreasing Velocity:
⚠️ Technical debt accumulating
⚠️ Complex features increasing
⚠️ More interruptions/support

Sprint Health Metrics

Commitment Accuracy:

Sprint 25:
Committed: 32 points
Delivered: 38 points
Accuracy: 119% (over-delivered due to stretch goals)

Target: 90-110% (consistent delivery)

Scope Change During Sprint:

Sprint 25:
Added: 6 points (urgent bug fix)
Removed: 0 points
Net Change: +6 points

Target: <10% scope change
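Both health metrics above are one-line ratios; a sketch (function names are illustrative):

```python
def commitment_accuracy(committed: int, delivered: int) -> int:
    """Delivered points as a whole-number percentage of committed points."""
    return round(100 * delivered / committed)

def scope_change_pct(committed: int, added: int, removed: int) -> int:
    """Net mid-sprint scope change as a percentage of the commitment."""
    return round(100 * (added - removed) / committed)

# Sprint 25 figures from above
assert commitment_accuracy(32, 38) == 119
assert scope_change_pct(32, 6, 0) == 19  # the urgent bug fix exceeded the <10% target
```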

Sprint Goal Achievement:

Sprint 25 Goal: Launch AI essay grading feature
Result: ✅ Achieved (feature in beta)

Historical Achievement Rate: 87% (last 10 sprints)
Target: >80%

Quality Metrics

Defect Escape Rate:

Bugs Found in Production (from this sprint):
Sprint 25: 2 minor bugs
Sprint 24: 1 critical, 3 minor
Sprint 23: 0 bugs

Target: <3 bugs per sprint

Test Coverage:

Sprint 25: 76% (↑3% from Sprint 24)
Sprint 24: 73%
Sprint 23: 71%

Target: Maintain >70%, trend upward

Code Review Metrics:

Average PR Review Time: 3.2 hours
PRs Requiring Multiple Review Rounds: 35%
PRs Approved Without Changes: 12%

Targets:
- Review time <4 hours
- Multiple rounds acceptable (thorough review)
- Some changes expected (quality gate)

Sprint Success Patterns

High-Performing Sprints

Characteristics:

  • Clear, focused sprint goal
  • Well-estimated stories (no surprises)
  • Minimal scope changes
  • Fast PR review cycle
  • Strong team collaboration
  • Proactive communication

Example: Sprint 23 (40 points delivered)

What Made It Great:
✓ Focused theme: Performance optimization
✓ Technical debt sprint (motivated team)
✓ All stories well-understood upfront
✓ No external dependencies
✓ Pair programming on complex work
✓ Daily async updates kept everyone aligned

Result:
• 40 points delivered (highest velocity)
• Zero production bugs
• Test coverage +5%
• Team satisfaction: 9/10

Struggling Sprints

Warning Signs:

  • Unclear or conflicting goals
  • Large stories (>8 points)
  • Many external dependencies
  • Mid-sprint scope additions
  • PRs sitting unreviewed
  • Team working in silos

Example: Sprint 22 (32 points, missed goal)

What Went Wrong:
❌ Holiday week (reduced capacity)
❌ External dependency blocked work (Gemini API changes)
❌ Underestimated complexity of payment refactor
❌ Mid-sprint production incident (2 days lost)
❌ Design changes required rework

Lessons Learned:
• Account for holidays in planning
• Spike external API changes upfront
• Break down payment work more granularly
• Build incident response time into estimates
• Lock designs before sprint starts

Sprint Anti-Patterns to Avoid

Over-Commitment

Problem: Committing to 50 points when velocity is 35
Result: Rushed work, cut corners, incomplete features, burnout

Solution:
• Use conservative estimates
• Build in buffer (80% of available capacity)
• Protect quality over quantity
• Use stretch goals for extra capacity

Under-Commitment

Problem: Committing to 20 points when velocity is 35
Result: Wasted capacity, slow delivery, team boredom

Solution:
• Commit to realistic capacity
• Have ready backlog items
• Add stretch goals
• Trust the team's ability

Scope Creep

Problem: Adding 10 stories mid-sprint
Result: Lost focus, missed sprint goal, constant context switching

Solution:
• Strict scope freeze after planning
• Critical bugs only exception
• New work goes to next sprint
• Communicate trade-offs clearly

Poor Story Breakdown

Problem: Stories of 13+ points, unclear acceptance criteria
Result: Can't complete in sprint, unclear when "done"

Solution:
• Break stories to <8 points
• Define clear acceptance criteria
• Include technical tasks
• Consider edge cases upfront

Sprint Planning Tips

Effective Estimation

Planning Poker:

1. Product owner reads story
2. Team asks clarifying questions
3. Everyone estimates privately (cards)
4. Reveal simultaneously
5. Discuss high/low estimates
6. Re-estimate until consensus
7. Move to next story
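A common convention for step 5 (an assumption here, not stated team policy) is to re-discuss whenever the high and low estimates sit more than one Fibonacci step apart:

```python
FIB = [1, 2, 3, 5, 8, 13]

def needs_discussion(estimates: list[int]) -> bool:
    """True when the high and low estimates are more than one
    Fibonacci step apart, signalling the team should re-estimate."""
    lo, hi = min(estimates), max(estimates)
    return FIB.index(hi) - FIB.index(lo) > 1
```

With `[3, 5, 13]` the spread is three steps, so the outliers explain their reasoning; with `[5, 5, 8]` the estimates are adjacent and many teams simply take the higher card.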

Estimation Calibration:

Reference Stories (Historical):

1 point: "Fix typo in error message"
2 points: "Add email validation"
3 points: "Build simple CRUD endpoint"
5 points: "Integrate external API with error handling"
8 points: "Build complex UI with state management"
13 points: "Too large - split it up"

When Uncertain:

  • Add buffer (round up)
  • Create spike story (time-boxed research)
  • Break into smaller, better-understood pieces
  • Defer to next sprint for more research

Story Readiness

Definition of Ready:

  • [ ] User value is clear
  • [ ] Acceptance criteria defined
  • [ ] Dependencies identified
  • [ ] Design complete (if UI work)
  • [ ] Technical approach discussed
  • [ ] Estimatable by team
  • [ ] Small enough for one sprint
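The checklist above can double as a lightweight gate in backlog tooling. A sketch with illustrative field names (not an existing schema):

```python
READY_CHECKS = (
    "user_value_clear",
    "acceptance_criteria_defined",
    "dependencies_identified",
    "design_complete",          # only applies to UI work
    "technical_approach_discussed",
    "estimatable_by_team",
    "fits_in_one_sprint",
)

def missing_checks(story: dict) -> list[str]:
    """Return the Definition-of-Ready items a story still fails."""
    return [check for check in READY_CHECKS if not story.get(check)]
```

A story only enters sprint planning when `missing_checks` comes back empty; anything else goes to backlog refinement first.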

If Not Ready:

  • Don't commit to sprint
  • Schedule backlog refinement
  • Get design/product input
  • Create spike for research

YeboLearn - Empowering African Education