Ask-Based Planning: How to Actually Make Decisions in Software Teams
Stop overthinking. Start asking: "What's the actual ask here?"
Every software project drowns in complexity: architectural debates, framework wars, process ceremonies. But what if we reduced everything to a simple question: What are we actually being asked to deliver?
The Fundamental Problem with Modern Planning
We've inverted the planning process. We start with:
- What's the best architecture?
- What framework should we use?
- What would be the most elegant solution?
When we should start with:
- What's the deadline?
- What's the budget?
- Who's going to build this?
- What problem are we solving?
This isn't settling for mediocrity. It's acknowledging reality.
Why "Ask" Instead of Bug/Story/Task?
Traditional issue tracking creates unnecessary complexity:
The Taxonomy Problem
Teams waste hours debating:
- "Is this a bug or a feature request?"
- "Should this be a story or a task?"
- "Is it an epic or just a large story?"
- "Does a bug in a new feature count as a defect?"
Meanwhile, what actually matters:
- How important is this?
- How much work is it?
- Who's affected?
- When do we do it?
The Status Game
Traditional categories create politics:
- "Bugs" automatically get priority over "features"
- "Technical debt" gets ignored because it's not a "story"
- "Tasks" sound less important than "epics"
- Teams game the system: "Let's call it a bug so it gets prioritized"
The Beautiful Simplicity of "Ask"
Everything is just an ask:
- Customer asks for checkout to work → Ask
- CEO asks for new dashboard → Ask
- Developer asks to refactor code → Ask
- Support asks to fix login issue → Ask
No taxonomy arguments. Just evaluate each ask on its merits:
- Impact (who needs this?)
- Effort (how hard is it?)
- Priority (how important is it?)
- Risk (what could go wrong?)
Real Example
Traditional Approach:
Type: Bug
Severity: P2
Component: Authentication
Labels: backend, security, customer-reported
Story Points: 5
Epic: User Management
Ask-Based Approach:
ASK: Users can't reset passwords
IMPACT: 50+ support tickets daily
EFFORT: 2 days
PRIORITY: 1 (blocking customers)
DECISION: Fix today
Which one helps you make decisions faster?
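If you want the ask itself to be the record, a handful of fields is enough. Here's a minimal sketch in Python using the password-reset example above; the field names are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Ask:
    """One ask, evaluated on its merits -- no bug/story/task taxonomy."""
    description: str    # what we are being asked to deliver
    impact: str         # who needs this, and how badly
    effort_days: float  # rough size, not story points
    priority: int       # 1 (blocking customers) .. 5 (nice to have)
    risk: str           # what could go wrong
    decision: str = ""  # do now / do later / don't do

password_reset = Ask(
    description="Users can't reset passwords",
    impact="50+ support tickets daily",
    effort_days=2,
    priority=1,
    risk="Low: isolated to the reset flow",
    decision="Fix today",
)
```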
The Three Pillars of Ask-Based Planning
1. Identify the Unmovables
Some things don't change. Amazon built an empire on three constants:
- People want things cheaper
- People want things faster
- People want decent quality
These haven't changed in 30 years. They won't change in the next 30.
Your unmovables might be:
- Users want pages to load fast
- The system must be available 24/7
- Customers need to trust their data is safe
- The business needs to make money
Stop chasing trends. Build around constants.
2. Map Your Actual Constraints
Constraints aren't limitations; they're clarity. Every real project has:
Budget Constraints
- "We have $50K, not $500K"
- "We can't afford a dedicated DevOps team"
- "Cloud costs must stay under $1000/month"
Team Constraints
- "We have 3 developers, not 30"
- "Nobody knows Rust"
- "Sarah is the only one who understands the payment system"
Time Constraints
- "The conference is in 6 weeks"
- "Compliance deadline is March 1st"
- "Competitor launches similar feature next month"
Infrastructure Constraints
- "We're stuck with this database"
- "Can't migrate off Windows Server"
- "The CEO loves Excel exports"
Acknowledge these. Plan around them. Stop pretending they'll magically disappear.
3. Make Decisions Based on the Ask
Every technical decision should answer: "What are we being asked to deliver?"
Example: "Build a customer dashboard"
Traditional approach:
- Debate React vs Vue for 2 weeks
- Design perfect microservice architecture
- Plan comprehensive testing strategy
- Estimate: 6 months
Ask-based approach:
- Who needs this? (5 internal users)
- When? (Board meeting in 3 weeks)
- What's the core need? (See monthly revenue)
- Solution: SQL query + basic HTML table
- Delivered: 2 days
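To make the contrast concrete, here's a rough sketch of what that two-day solution could look like: one query, one static HTML file. It assumes a hypothetical SQLite database with an orders(order_date, amount) table; any reporting database would do.

```python
import sqlite3

# Hypothetical schema: orders(order_date TEXT, amount REAL).
conn = sqlite3.connect("sales.db")
rows = conn.execute(
    "SELECT strftime('%Y-%m', order_date) AS month, SUM(amount) AS revenue "
    "FROM orders GROUP BY month ORDER BY month DESC LIMIT 12"
).fetchall()

# Render a plain HTML table -- no framework, no build step.
cells = "".join(f"<tr><td>{month}</td><td>{revenue:,.2f}</td></tr>" for month, revenue in rows)
html = f"<table><tr><th>Month</th><th>Revenue</th></tr>{cells}</table>"

with open("revenue.html", "w") as f:
    f.write(html)
```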
The Decision Framework
For every technical choice, ask in this order:
1. What's the actual ask?
   - Not what you think they need
   - Not what would be cool to build
   - What they literally asked for
2. What are our unmovable constraints?
   - Deadline (when must this ship?)
   - Budget (what can we actually spend?)
   - Team (who's actually available?)
   - Technical (what systems must we use?)
3. What's the simplest solution that satisfies both?
   - Not the best solution
   - Not the most scalable solution
   - The solution that ships on time and works
Getting to the Real Ask: The 5 Whys
Most asks are symptoms, not root causes. Drill down to find what's really needed:
Example 1: "We need a dashboard"
Ask: "We need a real-time analytics dashboard"
Why 1: Why do you need a dashboard?
→ "To see our sales numbers"
Why 2: Why do you need to see sales numbers?
→ "To know if we're hitting targets"
Why 3: Why don't you know if you're hitting targets?
→ "The reports come monthly and are always late"
Why 4: Why are reports monthly and late?
→ "Bob manually compiles them from 3 systems"
Why 5: Why does Bob manually compile them?
→ "The systems don't talk to each other"
Real Ask: Automate data export from 3 systems (2 days work)
Not: Build complex real-time dashboard (2 months work)
Example 2: "The system is slow"
Ask: "We need to rewrite the system for performance"
Why 1: Why do you think we need a rewrite?
→ "The system is too slow"
Why 2: Why is it slow?
→ "Pages take 10 seconds to load"
Why 3: Why do pages take 10 seconds?
→ "The database queries are slow"
Why 4: Why are the queries slow?
→ "We're loading all customer history on every page"
Why 5: Why are we loading all history?
→ "The original developer thought we might need it"
Real Ask: Add pagination to customer history query (4 hours work)
Not: Rewrite entire system (6 months work)
The Pattern
Surface ask → Often an expensive solution to a symptom
Deep ask → Usually a simple fix to the root cause
Questions that reveal the real ask:
- "What problem are you trying to solve?"
- "What happens if we don't do this?"
- "What would success look like?"
- "Have you tried anything else?"
- "When did this become a problem?"
When to Stop Asking Why
Stop when you reach:
- External constraint: "Because regulations require it"
- Business fundamental: "Because that's how we make money"
- Actual user need: "Because customers can't check out"
- Technical reality: "Because the server only has 2GB RAM"
The Anti-Pattern: Solution Disguised as Ask
Red flags:
- "We need microservices" (That's a solution, not an ask)
- "We need to use React" (That's a technology choice, not a need)
- "We need to refactor everything" (That's an approach, not a problem)
Always translate to actual need:
- "We need microservices" → "We need to scale different parts independently"
- "We need React" → "We need better UI interactivity"
- "We need to refactor" → "We can't add features without breaking things"
Then ask: Is the proposed solution the simplest way to meet the actual need?
Real-World Applications
The "Investigation Budget" Approach
Instead of endless analysis paralysis:
- "Spend 2 days investigating payment providers"
- "Take a week to prototype both approaches"
- "Research for 3 days, then we decide"
Time-boxed investigation with clear decision points.
The "Harvest vs. Perfect" Principle
- Farmer mentality: "Get crops to market"
- Engineer mentality: "Optimize yield perfectly"
Good enough that ships beats perfect that doesn't.
The "Natural Organization" Pattern
Let constraints drive organization:
- Database slow? → Database expert leads optimization
- API integration needed? → API person takes point
- UI broken? → Frontend dev drives fix
No complex ceremonies. Clear ownership based on expertise.
The Estimation Problem: Why Story Points Failed
The Theater of False Precision
Traditional estimation creates elaborate rituals with no value:
Planning Poker
- Spend 2 hours debating if something is 5 or 8 points
- Fibonacci sequences pretending complexity is mathematical
- Team consensus that's really just averaging wild guesses
- Points that mean different things to different people
Velocity Tracking
- "We completed 47 points last sprint!"
- But what value was delivered?
- Points inflate over time (gaming the system)
- Velocity becomes the goal instead of value
What Estimates Actually Communicate
The precision paradox reveals the truth:
"1 hour" = "I've done this exact thing before"
- High confidence
- Probably accurate
"2-3 days" = "I understand the problem"
- Medium confidence
- Reasonable accuracy
"2 weeks" = "This is complex but I see the pieces"
- Getting uncertain
- Could be 1-4 weeks
"2 months" = "I have no idea"
- Very low confidence
- Could be 2 weeks or 6 months
The pattern: Long estimates are really saying "I don't understand this yet."
The Confidence-Based Approach
Instead of false precision, be honest:
ASK: Add user profiles
ESTIMATE: 1-3 weeks
CONFIDENCE: Low (30%)
WHY LOW: Never built profiles, unsure about requirements
NEXT STEP: 2-day investigation to understand better
Investigation Budgets: Buying Certainty
When confidence is low, time-box investigation:
"Spend 2 days researching, then estimate"
Better than: "It'll take 2 months (maybe)"
"Build a prototype for 1 week, then decide"
Better than: "The architecture might not work"
"Talk to 3 users, then scope it"
Better than: "We think users want this"
Knowing When to Abandon
Set investigation limits upfront:
INVESTIGATION BUDGET: 1 week
ABANDON IF:
- Estimate exceeds 3 months
- Requires skills we don't have
- Third-party API doesn't support our needs
- Compliance requirements too complex
Don't fall for the sunk cost fallacy. Sometimes the best decision is "This isn't worth it."
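Those abandon criteria are easy to make mechanical, so the sunk-cost debate never starts. A minimal sketch, assuming the limits listed above (3-month cap, required skills, API support, compliance):

```python
def should_abandon(estimate_months: float, missing_skills: bool,
                   api_supports_needs: bool, compliance_too_complex: bool) -> bool:
    """Apply the abandon criteria agreed before the investigation started."""
    return (
        estimate_months > 3
        or missing_skills
        or not api_supports_needs
        or compliance_too_complex
    )

# After a 1-week investigation:
if should_abandon(estimate_months=4, missing_skills=False,
                  api_supports_needs=True, compliance_too_complex=False):
    print("Abandon: estimate exceeds the 3-month limit we set upfront.")
```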
Patterns Over Predictions
Track what actually happens:
- "Database work always takes 2x our estimate"
- "UI changes are usually accurate"
- "Third-party integrations always have surprises"
- "Sarah's estimates are consistently reliable"
Use patterns to adjust future estimates, not story points to create velocity theater.
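One way to use those patterns is to track the ratio of actual to estimated time per category and scale new estimates by it. A rough sketch with made-up history data:

```python
from collections import defaultdict
from statistics import median

# (category, estimated_days, actual_days) -- illustrative history, not real data
history = [
    ("database", 3, 6), ("database", 2, 5),
    ("ui", 2, 2), ("ui", 1, 1),
    ("integration", 5, 12),
]

ratios = defaultdict(list)
for category, estimated, actual in history:
    ratios[category].append(actual / estimated)

multipliers = {category: median(values) for category, values in ratios.items()}

def adjusted_estimate(category: str, raw_days: float) -> float:
    """Scale a new estimate by how this category has actually behaved."""
    return raw_days * multipliers.get(category, 1.0)

print(adjusted_estimate("database", 3))  # 6.75 days, not the optimistic 3
```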
Training Teams in Ask-Based Thinking
Week 1: The Constants Exercise
List what won't change:
- What do users always want?
- What does the business always need?
- What technical realities persist?
Week 2: The Constraint Audit
Document reality:
- Actual budget (not wished-for budget)
- Actual team skills (not theoretical capabilities)
- Actual deadlines (not ideal timelines)
Week 3: The Decision Practice
For every decision this week:
- First ask: "What's the ask?"
- Then check: "What constraints apply?"
- Finally decide: "What's simplest that works?"
Week 4: The Retrospective Reality Check
- Which decisions served the ask?
- Which decisions ignored constraints?
- What would we do differently?
The Priority Hierarchy That Makes Decisions Automatic
Stop debating. Use this order:
Priority 1: Does this block customers?
- Site down, can't log in, can't purchase
- Decision time: Immediate
- Who decides: Anyone can call Priority 1
Priority 2: Legal/Security/Compliance Risk?
- Data breach, regulatory violation, security hole
- Decision time: Within 24 hours
- Who decides: Security/legal team escalates
Priority 3: Does this make/save significant money?
- New revenue stream, major cost reduction
- Decision time: This week
- Who decides: Product/business owner
Priority 4: Performance or Reliability Problem?
- Slow but working, occasional failures
- Decision time: Next sprint
- Who decides: Tech lead with product input
Priority 5: Everything Else
- Nice-to-haves, UI improvements, refactoring
- Decision time: When there's capacity
- Who decides: Team consensus
This eliminates arguments. Higher priority always wins. Most decisions make themselves.
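Because the hierarchy is strictly ordered, it fits in a few lines of code: walk the questions top-down and the first "yes" decides. A sketch (function and argument names are illustrative):

```python
def priority(blocks_customers: bool, legal_or_security_risk: bool,
             significant_money: bool, performance_or_reliability: bool) -> int:
    """Walk the hierarchy top-down; the first match wins."""
    if blocks_customers:
        return 1   # immediate, anyone can call it
    if legal_or_security_risk:
        return 2   # within 24 hours
    if significant_money:
        return 3   # this week
    if performance_or_reliability:
        return 4   # next sprint
    return 5       # when there's capacity

print(priority(blocks_customers=False, legal_or_security_risk=True,
               significant_money=False, performance_or_reliability=False))  # 2
```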
The Triage Mindset: Medical Emergency Room for Software
Think like an ER doctor, not an architect:
Red (Immediate)
- System down, data loss, security breach
- Drop everything
Yellow (Delayed)
- Performance issues, reliability problems
- Fix this week
Green (Walking Wounded)
- UI bugs, minor issues
- Fix when possible
Black (Expectant)
- Legacy systems too broken to save
- Plan replacement, don't waste resources
Triage Discipline:
- Don't fix green issues while red patients are dying
- Don't spend hours on black cases when yellow needs help
- Constantly reassess; conditions change
The Pre-Ask Checklist: Right-Sizing Your Response
Not every ask needs a full analysis. Triage first:
Trivial (< 1 hour)
Just do it. No analysis needed.
- Fix typo
- Update config value
- Add log statement
Simple (< 1 day)
Quick team check, then execute.
- Add form field
- Update error message
- Fix obvious bug
Standard (< 1 week)
Use the basic framework:
- What's the ask?
- What's the effort?
- What's the priority?
- Who owns it?
Complex (> 1 week)
Full analysis needed:
- Impact: How many users affected?
- Effort: Person-days required?
- Risk: What could go wrong?
- Dependencies: What's blocking?
- Alternatives: Other solutions?
Strategic (Changes everything)
Board-level decision:
- Major architecture change
- Platform migration
- Business model shift
The rule: Match analysis effort to decision impact.
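The rule can be expressed as a tiny lookup from estimated effort to analysis depth. A sketch using the thresholds from the checklist above (treating one hour as roughly an eighth of a working day):

```python
def analysis_tier(estimated_effort_days: float, changes_everything: bool = False) -> str:
    """Match analysis effort to decision impact."""
    if changes_everything:
        return "Strategic: board-level decision"
    if estimated_effort_days < 0.125:  # under an hour
        return "Trivial: just do it"
    if estimated_effort_days < 1:
        return "Simple: quick team check, then execute"
    if estimated_effort_days <= 5:     # within a working week
        return "Standard: ask, effort, priority, owner"
    return "Complex: full impact/effort/risk/dependency analysis"

print(analysis_tier(0.05))  # Trivial: just do it
print(analysis_tier(8))     # Complex: full impact/effort/risk/dependency analysis
```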
The Comprehensive Ask Parameters
When you need full analysis, here's what matters:
THE ASK: [What are we being asked to do?]
IMPACT ASSESSMENT:
- Users affected: [Number/percentage]
- Revenue impact: [Dollar amount if applicable]
- Risk if we don't: [What happens?]
EFFORT ASSESSMENT:
- Time estimate: [Days/weeks]
- People needed: [Who specifically]
- Complexity: [Simple/Medium/Complex]
CONSTRAINTS:
- Deadline: [Hard date or flexible]
- Dependencies: [What must happen first]
- Budget: [Cost in time/money]
PRIORITY CATEGORY: [1-5 from hierarchy above]
DECISION: [Do now/Do later/Don't do]
REASONING: [One sentence why]
Real Examples Using Priority + Triage
ASK: Homepage loads in 8 seconds
PRIORITY: 4 (Performance issue)
TRIAGE: Yellow
DECISION: Fix next week
ASK: Users can't checkout
PRIORITY: 1 (Blocks customers)
TRIAGE: Red
DECISION: Drop everything now
ASK: Add dark mode
PRIORITY: 5 (Nice to have)
TRIAGE: Green
DECISION: Backlog
ASK: GDPR compliance gap
PRIORITY: 2 (Legal risk)
TRIAGE: Yellow/Red depending on deadline
DECISION: Fix this week
Risk Assessment: Know What Can Go Wrong
Every ask has risk. Identify it upfront:
Technical Risks
- Breaking Changes: Will this affect existing functionality?
- Performance Impact: Could this slow down the system?
- Security Vulnerabilities: Are we exposing new attack surfaces?
- Data Loss Potential: Could we corrupt or lose data?
- Integration Failures: Will third-party systems break?
Business Risks
- Customer Impact: How many users affected if this fails?
- Revenue Risk: Could this stop sales/payments?
- Reputation Risk: Will failures be visible/embarrassing?
- Compliance Risk: Legal/regulatory consequences?
- Competitive Risk: Do we fall behind if we don't do this?
Operational Risks
- Knowledge Silos: Only one person knows how?
- Maintenance Burden: Will this be hard to support?
- Dependency Risk: Relying on external services/vendors?
- Reversibility: Can we roll back if it goes wrong?
- Technical Debt: Making future changes harder?
Risk Levels and Response
Low Risk (Green Light)
- Well-understood problem
- Team has done similar before
- Easy to test and rollback
- Action: Proceed with normal process
Medium Risk (Yellow Light)
- Some unknowns or dependencies
- Moderate complexity
- Recovery plan needed
- Action: Add safeguards, monitoring, staged rollout
High Risk (Red Light)
- Critical systems affected
- Many unknowns
- Hard to reverse
- Action: Prototype first, extensive testing, executive approval
Risk Mitigation Strategies
For Technical Risks:
- Feature flags for easy rollback
- Staged rollouts (5% → 25% → 100%; a small sketch follows this list)
- Comprehensive testing before deploy
- Monitoring and alerts from day one
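A staged rollout can be as small as a deterministic percentage gate plus a kill switch. This sketch hashes the user ID into a bucket so the same user always sees the same behavior; the names and thresholds are illustrative, not a specific library's API:

```python
import hashlib

ROLLOUT_PERCENT = 5     # raise to 25, then 100, as monitoring stays green
FEATURE_ENABLED = True  # flip to False for an instant rollback

def in_rollout(user_id: str, percent: int = ROLLOUT_PERCENT) -> bool:
    """Deterministically bucket users so each user always gets the same answer."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return FEATURE_ENABLED and bucket < percent

enabled = sum(in_rollout(f"user-{i}") for i in range(1000))
print(f"{enabled / 10:.1f}% of a sample of 1000 users see the feature")
```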
For Business Risks:
- Clear communication plans
- Customer support preparation
- Backup systems ready
- Documentation of changes
For Operational Risks:
- Knowledge sharing sessions
- Pair programming on critical parts
- Documentation as you build
- Avoid single points of failure
The Cost-Benefit Reality Check
Always ask: "Is the juice worth the squeeze?"
The Risk-Adjusted Matrix
| | Low Risk | High Risk |
|---|---|---|
| High Impact | Do Now | Plan Carefully |
| Low Impact | Do Later | Don't Do |
High Impact, Low Risk (Quick Wins)
Do immediately. These fund everything else.
- Database index that speeds up queries
- Bug fix that unblocks customers
- Simple feature users are begging for
High Impact, High Risk (Strategic Bets)
Plan carefully, prototype first, monitor closely.
- Platform migration
- Major architecture change
- New business model features
Low Impact, Low Risk (Cleanup)
Do during slow periods or assign to juniors.
- Code formatting
- Documentation updates
- Minor UI improvements
Low Impact, High Risk (Danger Zone)
Just say no. These kill teams.
- Experimental technology for non-critical features
- Major refactoring of working code
- Premature optimization
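The four quadrants above reduce to two yes/no questions, which is the point: the matrix is a decision procedure, not a debate. A minimal sketch:

```python
def quadrant(high_impact: bool, high_risk: bool) -> str:
    """Place an ask in the risk-adjusted matrix."""
    if high_impact and not high_risk:
        return "Quick win: do now"
    if high_impact and high_risk:
        return "Strategic bet: plan carefully, prototype first"
    if not high_impact and not high_risk:
        return "Cleanup: do later, during slow periods"
    return "Danger zone: just say no"

print(quadrant(high_impact=True, high_risk=False))  # Quick win: do now
print(quadrant(high_impact=False, high_risk=True))  # Danger zone: just say no
```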
Common Objections (And Responses)
"This promotes technical debt!"
No, it acknowledges reality. Perfect systems that never ship create infinite technical debt.
"We should build for scale!"
When you need scale. Most systems die before reaching scale problems.
"This isn't engineering excellence!"
Engineering excellence is delivering value within constraints. NASA didn't have unlimited budgets either.
"What about best practices?"
Best practices assume standard conditions. Your conditions aren't standard.
The Leadership Shift
Traditional leadership asks:
- "What's the best way to do this?"
- "How do we follow industry standards?"
- "What would Google do?"
Ask-based leadership asks:
- "What are we actually trying to achieve?"
- "What constraints are we working with?"
- "What ships and solves the problem?"
Practical Templates
Daily Decision Template
THE ASK: [What are we being asked to do?]
UNMOVABLE DEADLINE: [When must this be done?]
AVAILABLE RESOURCES: [Who/what do we have?]
SIMPLEST SOLUTION: [What gets us there?]
DECISION: [What we're doing]
Project Planning Template
BUSINESS ASK: [What does the business need?]
USER ASK: [What do users need?]
CONSTRAINTS:
- Budget: [Actual money available]
- Team: [Actual people available]
- Time: [Actual deadline]
- Technical: [Systems we must use]
PROPOSED SOLUTION: [Simplest thing that satisfies ask within constraints]
SUCCESS CRITERIA: [How we know we delivered the ask]
It's not about lowering standards. It's about hitting targets.
An Olympic archer doesn't debate arrow metallurgy during the competition. They identify the target, account for wind (constraints), and shoot.
Software teams spend so much time perfecting their bow, they forget to aim.
Kanban and Ask-Based Planning: Perfect Together
Why Kanban Works for Asks
Kanban's simplicity aligns perfectly with ask-based thinking:
Visual Reality
- See all asks in one place
- Watch work actually flow (or get stuck)
- No hiding behind sprint boundaries
Natural Flow
- Work moves when ready, not on schedule
- Small asks flow fast, big asks flow slow
- Reality enforces honesty
The Simple Ask Board
| BACKLOG | READY | WORKING | REVIEW | DONE |
|---------|-------|---------|--------|------|
| Dark mode | Login fix | Payment API 🔴 Day 5 | Search update | |
| Export feature | Cache bug | | | |
Key Rules:
- Limit WIP: Max 2-3 asks in WORKING per developer
- Pull, don't push: Take work when ready, not when assigned
- Flow metrics: Track how long asks take, not points
- Blockage = Emergency: Red dot means all-hands response
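The WIP limit in the rules above is simple enough to enforce in code (or in your head). A sketch with illustrative names:

```python
WIP_LIMIT_PER_DEVELOPER = 2  # max asks in WORKING per developer (rule says 2-3)

working = {
    "alice": ["Payment API"],
    "bob": ["Login fix", "Cache bug"],
}

def can_pull(developer: str) -> bool:
    """A developer pulls new work only when under the WIP limit."""
    return len(working.get(developer, [])) < WIP_LIMIT_PER_DEVELOPER

print(can_pull("alice"))  # True: one ask in WORKING
print(can_pull("bob"))    # False: at the limit -- finish or unblock something first
```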
When Flow Stops: Emergency Protocol
Blocked Ask Response:
ASK: Payment API integration
STATUS: 🔴 Blocked Day 5
PROBLEM: Third-party docs incorrect
RESPONSE:
1. Stop new work entering WORKING
2. Swarm the problem (multiple people investigate)
3. Time-box solution attempt (2 days max)
4. Abandon if unsolvable (cut losses)
Flow Patterns Tell Truth
After a month, patterns emerge:
- "API integrations always get blocked in REVIEW"
- "Database asks flow fast through WORKING but get stuck in TESTING"
- "UI asks move smoothly end-to-end"
Use patterns to improve:
- Add API prototype step before WORKING
- Strengthen testing resources
- Let UI asks flow freely
Kanban Eliminates Estimation Theater
No story points needed.
- Small asks naturally flow in days
- Large asks naturally take weeks
- Average cycle time becomes your forecast
Track only what matters:
- How many asks completed this week?
- Where do asks get stuck?
- What's our average time to DONE?
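Those three numbers fall out of the board itself: count the cards that reached DONE and average how long they took. A sketch with made-up dates:

```python
from datetime import date
from statistics import mean

# (ask, started, done) -- illustrative completions, not real data
completed = [
    ("Login fix", date(2024, 3, 1), date(2024, 3, 3)),
    ("Search update", date(2024, 3, 2), date(2024, 3, 8)),
    ("Cache bug", date(2024, 3, 4), date(2024, 3, 5)),
]

cycle_times = [(done - started).days for _, started, done in completed]

print(f"Completed this period: {len(completed)} asks")
print(f"Average time to DONE: {mean(cycle_times):.1f} days")  # the forecast, no points needed
```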
The Beautiful Simplicity
Traditional Scrum + Jira:
- Sprint planning (4 hours)
- Story pointing (2 hours)
- Daily standups (15 min × 5)
- Sprint review (2 hours)
- Retrospective (1 hour)
- Total: 10+ hours/sprint on ceremony
Kanban + Asks:
- Look at board (2 minutes)
- Move card when done (10 seconds)
- Address blockages immediately (as needed)
- Total: Actual work time
Start Tomorrow
Pick one decision you're facing. Apply the framework:
- What's the actual ask?
- What constraints are real?
- What's the simplest solution?
Then ship it.
Because at the end of the day, working software that solves real problems beats architectural diagrams every time. Your users don't care about your design patterns. They care that it works, it's fast, and it solves their problem.
Ask what they're asking for. Acknowledge your constraints. Deliver what works.
That's ask-based planning.
Remember: Constraints aren't the enemy of creativity; they're its catalyst. Every masterpiece was created within constraints. Your software can be too.