SYSTEM ANALYSIS AND DESIGN FACT FINDING TECHNIQUES
UNIT 1: FACT FINDING TECHNIQUES
1. What is Fact Finding?
Fact finding (also called information gathering or data collection) is the process of discovering and documenting facts about the current system, user needs, business processes, constraints, and stakeholder expectations.
Purpose: To replace assumptions, opinions, and guesses with verifiable truth before designing a new system.
1.1 Why Fact Finding is Critical
- Avoids “analysis paralysis” – Provides structured ways to gather data.
- Reduces rework – Correct facts early prevent costly changes later.
- Builds stakeholder trust – Users see that their input matters.
- Supports all SAD phases – Feasibility, requirements, design, testing.
1.2 Types of Facts Needed
| Fact Category | Examples |
|---|---|
| Organizational | Structure, culture, decision-making, policies |
| Process | Current workflows, transaction volumes, bottlenecks |
| User | Roles, skills, frustrations, workarounds |
| Technical | Hardware, software, networks, data formats |
| External | Regulations, competitors, customer expectations |
| Constraints | Budget, timeline, legal, security |
2. Common Fact Finding Techniques
There are seven core techniques. Each has strengths and weaknesses. Use a combination for triangulation.
| Technique | Best for | Typical Output |
|---|---|---|
| 1. Interview | In-depth understanding of opinions, reasons, history | Interview notes, transcripts |
| 2. Questionnaire/Survey | Gathering quantitative data from many people | Statistical summaries |
| 3. Observation | Seeing real behavior, not self-reported | Work logs, video, checklists |
| 4. Document Analysis | Understanding existing rules, forms, reports | Process maps, data dictionaries |
| 5. Sampling | Reducing volume when full data is too large | Representative data sets |
| 6. Research / Benchmarking | Learning industry best practices | Reports, comparisons |
| 7. Workshop / JAD | Reaching consensus quickly among stakeholders | Prioritized requirements, models |
3. Technique 1: Interview
3.1 Definition
A face-to-face or remote conversation between an analyst and a stakeholder to gather information.
3.2 Types of Interviews
| Type | Description | Example Question |
|---|---|---|
| Unstructured | Open-ended, free-flowing conversation. Good for exploring unknown areas. | “Tell me about your daily work.” |
| Structured | Predetermined questions, often closed-ended. Good for comparing answers. | “On a scale of 1–5, how satisfied are you with the current system?” |
| Semi-structured | Mix of both – core questions + follow-ups. Most common in SAD. | “What reports do you use? (Then probe: How often? Which fields are missing?)” |
3.3 Step-by-Step Interview Process
1. Planning
- Define objective (e.g., “Understand inventory reorder process”)
- Select interviewees (managers, operators, customers)
- Schedule 30–60 minutes
- Prepare question list
2. Question Design (avoid common errors)
- Open-ended: “Describe the steps when a customer returns a product.”
- Closed-ended: “Do you use the return form? Yes/No.”
- Probing: “You mentioned delays. Can you give an example?”
- Avoid leading: ❌ “Don’t you think the system is slow?” → ✅ “How would you rate system response time?”
3. Conducting the Interview
- Start with rapport (thank, explain purpose, confidentiality)
- Listen more than talk (80/20 rule)
- Take notes or record (with permission)
- Use active listening: “What I hear you saying is…”
4. Post-Interview
- Write summary within 24 hours
- Send to interviewee for validation
- Extract facts and requirements
3.4 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Rich, detailed data | Time-consuming (schedule, travel, transcribe) |
| Can explore unexpected topics | Interviewer bias can affect responses |
| Builds relationships | May be intimidating for some users |
| Clarify ambiguity immediately | Scheduling multiple people is hard |
3.5 Example Interview Question Set (for a Library System)
Objective: Understand book borrowing process.
- Open: “Walk me through what happens when a member brings a book to the counter.”
- Closed: “Does the system automatically calculate the due date? Yes/No.”
- Probing: “You mentioned fines – how are they calculated? Are there exceptions?”
- Opinion: “What is the biggest frustration with the current system?”
4. Technique 2: Questionnaire / Survey
4.1 Definition
A set of written questions distributed to many people (often geographically dispersed) to collect standardized data.
4.2 When to Use
- Large number of stakeholders (100+)
- Need for statistical validity
- Sensitive topics where anonymity is desired
- Preliminary fact gathering before interviews
4.3 Types of Questions
| Type | Example |
|---|---|
| Multiple choice | “How often do you use the report? (Daily / Weekly / Monthly / Never)” |
| Likert scale (1-5) | “The system is easy to use: 1=Strongly disagree … 5=Strongly agree” |
| Rank order | “Rank the following features from most to least important: …” |
| Open-ended (short) | “What one thing would improve your work most?” |
4.4 Designing an Effective Questionnaire
Rules:
- Keep it short (10–15 questions, <10 minutes)
- Use simple, unambiguous language
- Group related questions
- Avoid double-barreled questions (e.g., “Do you like the speed and reliability?” – two separate issues)
- Pilot test with 2–3 users
Example structure:
- Demographics (role, department, experience)
- Current system usage (frequency, tasks)
- Satisfaction ratings (scale)
- Problems (checkboxes + open “other”)
- Future needs (priorities)
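Once responses come back, a questionnaire like the one structured above is straightforward to analyze quantitatively. A minimal sketch for one Likert item (the ratings below are invented for illustration):

```python
from statistics import mean

# Hypothetical responses to one Likert item (1 = strongly disagree, 5 = strongly agree)
responses = [4, 5, 3, 2, 4, 4, 5, 1, 3, 4]

avg = mean(responses)
# Count per scale point, useful for a bar chart in the report
distribution = {score: responses.count(score) for score in range(1, 6)}
# Share of respondents who agree or strongly agree (4 or 5)
agree_pct = 100 * sum(1 for r in responses if r >= 4) / len(responses)

print(f"mean={avg:.1f}, agree={agree_pct:.0f}%, distribution={distribution}")
```

Reporting both the mean and the distribution matters: a mean of 3.0 can hide a polarized split between 1s and 5s.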
4.5 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Can reach hundreds of people | Low response rate (10–30% typical) |
| Low cost per respondent | No follow-up or clarification |
| Easy to analyze quantitatively | Cannot observe tone or body language |
| Anonymous answers may be more honest | Poorly designed questions yield useless data |
4.6 Response Rate Improvement Tips
- Offer small incentive (gift card, donation)
- Send reminder after 1 week
- Keep it very short
- Explain how results will be used
- Get endorsement from a manager
5. Technique 3: Observation
5.1 Definition
Watching users perform their actual work in their real environment, without interfering (or with minimal interference).
5.2 Types of Observation
| Type | Description | Example |
|---|---|---|
| Passive | Analyst watches but does not interact. Best for understanding natural behavior. | Sitting in a call center, silently watching an agent take calls. |
| Active | Analyst may ask occasional questions while observing. | “What are you doing now? Why?” |
| Participant | Analyst performs the work alongside users (if trained). | Working as a cashier for one shift to feel the pressure. |
5.3 Structured Observation Methods
- Time and motion logs – Record how long each task takes.
- Work sampling – Observe at random intervals (e.g., every 30 minutes) and note what the user is doing.
- Video recording (with permission) – Allows review later.
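Work sampling is easy to operationalize. A minimal sketch that generates a randomized observation schedule (the 09:00–17:00 shift and ten observation moments are arbitrary assumptions for illustration):

```python
import random

random.seed(42)  # fixed seed only to keep the example reproducible

# 8-hour shift starting at 09:00, expressed in minutes
shift_start, shift_minutes = 9 * 60, 8 * 60

# Pick 10 distinct random moments within the shift, so users cannot anticipate them
moments = sorted(random.sample(range(shift_minutes), k=10))

schedule = [f"{(shift_start + m) // 60:02d}:{(shift_start + m) % 60:02d}" for m in moments]
print(schedule)
```

At each scheduled moment the observer simply notes what the user is doing; over enough samples, the tallies approximate how working time is actually distributed.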
5.4 Steps
- Get permission from management and users (explain purpose, confidentiality).
- Schedule observation during typical and peak periods.
- Prepare a checklist of what to look for (e.g., steps, delays, errors, workarounds).
- Observe without judging – users may be nervous.
- Take detailed notes – include timestamps, screens, physical actions.
- Follow up with users to clarify anything seen.
5.5 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Captures what people actually do, not what they say they do | Hawthorne effect – users change behavior when watched |
| Reveals tacit knowledge (things users don’t think to mention) | Time-consuming; limited to small number of users |
| Discovers workarounds and inefficiencies | Only captures what happens during observation period (may miss rare events) |
| No reliance on memory | Cannot observe mental processes (decision-making, reasoning) |
5.6 Example Observation Checklist (Warehouse Picking)
| Activity | Observed? | Notes |
|---|---|---|
| Worker uses paper list or scanner? | | |
| Time to locate each item | | |
| Errors (wrong item picked) | | |
| Does worker need to walk back for missing info? | | |
| Any waiting time? | | |
6. Technique 4: Document Analysis
6.1 Definition
Examining existing written materials that describe the current system, business rules, data, or procedures.
6.2 Types of Documents to Analyze
| Document Type | What It Reveals |
|---|---|
| Organization charts | Reporting structure, decision authority |
| Policy manuals | Rules that must be enforced by the system |
| Procedure manuals | Step-by-step processes (often outdated!) |
| Forms and screens | Data fields, validations, workflows |
| Reports | Information needs, data sources, frequency |
| System documentation | Existing software design, database schemas |
| Meeting minutes | Past decisions, known issues |
| Audit logs / error logs | Actual system usage and problems |
| Job descriptions | User responsibilities |
6.3 How to Perform Document Analysis
- Collect all relevant documents (ask users: “What forms do you fill out?”)
- Scan for obvious facts (data elements, rules, frequencies).
- Analyze in detail:
- For a form: list every field, its source, validation rules, optional/mandatory.
- For a report: who uses it, when, why, and what decisions are based on it.
- Look for discrepancies between what documents say and what users do (indicates outdated procedures).
- Extract requirements (e.g., “The form has a field for ‘Tax ID’ → system must capture tax ID.”)
6.4 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Low cost (documents are often available) | Documents may be outdated or inaccurate |
| Provides historical context | Cannot reveal unwritten practices |
| Unobtrusive (no user time needed) | May be missing important information |
| Good starting point before interviews | Can be voluminous and time-consuming to review |
6.5 Example: Analyzing an Expense Report Form
Form fields: Employee name, date, description, amount, client code, manager signature.
Findings:
- “Client code” is a free text field but users often write “N/A” → missing validation.
- Manager signature is handwritten → no digital approval workflow.
- No field for receipt attachment → users keep paper receipts.
Requirements derived:
- Client code must be selected from a validated list.
- Digital approval with audit trail.
- Receipt image upload.
7. Technique 5: Sampling
7.1 Definition
Selecting a representative subset of data or transactions to analyze, rather than examining every single instance.
7.2 When to Use
- Large volume of transactions (e.g., 1 million invoices per year)
- Need to understand error rates or patterns
- Time or budget constraints
7.3 Sampling Methods
| Method | Description | Example |
|---|---|---|
| Random sampling | Every item has equal chance of selection | Pick 100 invoice numbers using a random number generator. |
| Stratified sampling | Divide population into groups (strata), then sample from each | Sample 50 high-value orders + 50 low-value orders to see differences. |
| Systematic sampling | Pick every nth item | Every 100th customer transaction. |
| Convenience sampling | Choose easiest items (least reliable) | First 50 invoices on the pile – not recommended. |
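The first three methods in the table can be sketched in a few lines (the invoice records and the 2,500 value threshold below are invented for illustration):

```python
import random

random.seed(1)  # reproducible example only

# Hypothetical population: 1,000 invoice records with an amount field
invoices = [{"id": i, "amount": random.uniform(10, 5000)} for i in range(1000)]

# Random sampling: every item has an equal chance of selection
random_sample = random.sample(invoices, k=100)

# Systematic sampling: every nth item (here n = 10, giving 100 items)
systematic_sample = invoices[::10]

# Stratified sampling: split into high/low-value strata, then sample 50 from each
high = [inv for inv in invoices if inv["amount"] >= 2500]
low = [inv for inv in invoices if inv["amount"] < 2500]
stratified_sample = random.sample(high, k=50) + random.sample(low, k=50)

print(len(random_sample), len(systematic_sample), len(stratified_sample))
```

Note the caution with systematic sampling: if the data has a periodic pattern that lines up with n (e.g., every 10th transaction is an end-of-day batch), the sample will be biased.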
7.4 Determining Sample Size
Rule of thumb for SAD:
- For error rate estimation: at least 30–50 items.
- For process discovery: 10–20 typical examples + 5 edge cases.
- Use the formula for a given confidence level: n = [Z^2 * p(1-p)] / e^2 (for 95% confidence Z = 1.96; with a 5% margin and the conservative p = 0.5, this gives n ≈ 385 for large populations, often rounded up to ~400).
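The formula can be checked directly. A minimal sketch using the conservative assumption p = 0.5, which maximizes the required sample size:

```python
import math

def sample_size(z: float = 1.96, p: float = 0.5, margin: float = 0.05) -> int:
    """Sample size for a proportion, n = Z^2 * p(1-p) / e^2, for large populations.

    z: Z-score for the confidence level (1.96 for 95%)
    p: expected proportion (0.5 is the most conservative choice)
    margin: acceptable margin of error e
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

# 95% confidence, 5% margin of error
print(sample_size())  # -> 385
```

Tightening the margin of error is expensive: halving e roughly quadruples n, which is why 5% is a common practical choice in SAD work.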
7.5 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Saves time and effort | Risk of unrepresentative sample |
| Practical for large datasets | Requires statistical knowledge to do properly |
| Can identify patterns and anomalies | May miss rare but critical events |
7.6 Example: Sampling Customer Complaints
Population: 5,000 complaints last year.
Sample: 200 randomly selected.
Analysis reveals: 40% are about slow delivery, 30% about wrong items, 20% about website errors, 10% other.
Conclusion: Focus requirements on delivery tracking and inventory accuracy.
8. Technique 6: Research / Benchmarking
8.1 Definition
Studying external sources to learn about best practices, industry standards, available technologies, or competitor systems.
8.2 Sources of Research
| Source | What It Provides |
|---|---|
| Industry reports (Gartner, Forrester) | Trends, benchmarks, vendor comparisons |
| Technical documentation (APIs, SDKs) | Feasibility of integration |
| Competitor websites / demos | Feature expectations |
| Professional associations (IEEE, PMI) | Standards and methodologies |
| Government regulations | Compliance requirements |
8.3 Benchmarking Process
- Identify what to benchmark (e.g., order processing time, customer satisfaction score).
- Find comparable organizations (same industry, similar size).
- Collect data (surveys, public reports, site visits).
- Compare your current metrics to the benchmark.
- Set target requirements based on best-in-class.
8.4 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Avoids reinventing the wheel | May not be applicable to your unique context |
| Justifies requirements with external evidence | Time-consuming to find good sources |
| Uncovers innovative solutions | Benchmarking partners may not share data |
8.5 Example: Benchmarking for a Help Desk System
Benchmark data (industry average):
- First response time: 2 hours.
- Resolution time: 24 hours.
- Customer satisfaction: 85%.
Current system: 8 hours first response, 72 hours resolution, 60% satisfaction.
Requirement: “First response time shall not exceed 2 hours for high-priority tickets.”
9. Technique 7: Workshop / JAD (Joint Application Development)
9.1 Definition
A facilitated, structured meeting that brings together key stakeholders (users, managers, analysts, developers) for a short, intensive period (1–5 days) to define requirements or solve problems.
9.2 JAD Roles
| Role | Responsibility |
|---|---|
| Sponsor | Executive who provides authority and resources |
| Facilitator | Neutral person who runs the workshop (not an analyst) |
| Scribe | Documents all decisions and action items |
| Users / Managers | Provide domain knowledge and decisions |
| Analysts | Translate discussions into models (use cases, DFDs) |
| Observers (optional) | Developers or QA who listen but don’t dominate |
9.3 JAD Session Structure (Typical 3-day)
- Day 1: Orientation, scope definition, high-level processes. Identify actors and use cases.
- Day 2: Detail use cases, business rules, exception flows. Prototyping (paper or digital).
- Day 3: Prioritize requirements (MoSCoW). Identify conflicts and resolve. Draft SRS outline.
9.4 Advantages & Disadvantages
| Advantages | Disadvantages |
|---|---|
| Very fast (days instead of weeks) | Expensive (pulling people off work) |
| Builds consensus and ownership | Requires skilled facilitator |
| Reduces misunderstandings | Can be dominated by loud voices |
| Produces validated requirements | Not all stakeholders can attend |
9.5 JAD Success Factors
- Clear objectives and agenda shared beforehand.
- Right participants (decision-makers must be present).
- Neutral facilitator (not the analyst).
- Comfortable, off-site location if possible.
- Follow-up within 48 hours.
10. Selecting the Right Fact Finding Technique(s)
No single technique is sufficient. Use a multi-method approach.
10.1 Decision Matrix
| Scenario | Recommended Techniques |
|---|---|
| You know nothing about the domain | Interviews + Document analysis + Observation |
| You need quantitative data from 500 users | Questionnaire + Sampling |
| Users give contradictory information | Observation (see what they actually do) + JAD (resolve conflicts) |
| You have very limited time | JAD workshop + Document analysis |
| The system is critical and high-risk | All techniques (triangulation) |
10.2 Fact Finding Plan Template
| Phase | Technique | Duration | Participants | Deliverable |
|---|---|---|---|---|
| Week 1 | Document analysis | 8 hours | Analyst alone | Document summary |
| Week 1-2 | Interviews (10 people) | 15 hours | Key users | Interview notes |
| Week 2 | Observation (2 days) | 16 hours | 3-4 users | Workflow log |
| Week 3 | Questionnaire (200 people) | 2 hours (design) + 1 week response | All staff | Statistical report |
| Week 4 | JAD workshop (2 days) | 16 hours | 12 stakeholders | Prioritized requirements |
11. Documenting Fact Finding Results
Always organize findings. Common formats:
11.1 Fact Finding Summary Table
| Fact ID | Fact Statement | Source | Date | Confidence (1-5) |
|---|---|---|---|---|
| F-001 | The warehouse processes 500 orders per day on average. | Interview: Warehouse Mgr | 10 Mar | 4 |
| F-002 | Order entry takes 4 minutes for a 10-line order. | Observation (5 samples) | 12 Mar | 5 |
| F-003 | 30% of users want a mobile interface. | Survey (n=200) | 15 Mar | 4 |
11.2 Requirements Log (Traceable to Facts)
| Req ID | Requirement | Derived from Fact(s) |
|---|---|---|
| FR-12 | System shall display estimated delivery date at order entry. | F-001, F-003 (customer complaints) |
11.3 Action Log for Follow-up
| Issue | Technique Used | Person Responsible | Deadline |
|---|---|---|---|
| Clarify discount calculation rule | Interview (Finance) | Analyst | 20 Mar |
12. Common Pitfalls in Fact Finding
- Relying on only one technique – Biased or incomplete facts. Use triangulation (3+ methods).
- Asking leading questions – False confirmation of your own assumptions. Use neutral, open-ended wording.
- Not documenting as you go – Forgetting details; relying on memory.
- Interviewing only managers – Missing operational realities. Always interview frontline users.
- Observing only during normal periods – Missing peak load problems.
- Ignoring “silent” stakeholders (e.g., IT operations) – Non-functional requirements missed.
- No follow-up validation – Misinterpreted facts cause wrong requirements.
13. Summary Table – Fact Finding Techniques at a Glance
| Technique | Best For | Time Cost | People Involved | Main Output |
|---|---|---|---|---|
| Interview | Deep, qualitative insights | Medium (1–2 hours per person) | 1 analyst + 1 user | Interview notes |
| Questionnaire | Quantitative data from many | Low per respondent | Many | Statistics, charts |
| Observation | Actual behavior, not reported | High (real-time) | 1 analyst + 1–3 users | Workflow logs |
| Document analysis | Existing rules, forms, data | Low to medium | Analyst alone | Document summary |
| Sampling | Large data volumes | Medium (analysis) | Analyst + data owner | Sample statistics |
| Research | External benchmarks | Variable | Analyst alone | Research report |
| JAD / Workshop | Fast consensus, resolution | High (group time) | 8–15 people | Validated requirements |
14. Real-World Example: Fact Finding for a Hospital Patient Registration System
Context: A hospital wants to reduce patient wait time during registration. Current process is paper-based.
| Technique | Details | Findings |
|---|---|---|
| Document analysis | Reviewed registration form (20 fields). | Found 5 redundant fields; procedure manual inconsistent with actual practice. |
| Observation | Watched 4 clerks for 2 hours each. | Avg. time = 7 mins; 40% time spent calling insurance. |
| Interview | Talked to clerks, nurse, manager. | Clerks want auto-verification; nurse wants immediate data. |
| Sampling | Analyzed 200 completed forms. | 12% missing ID; 5% wrong name. |
| Research | Industry benchmarks. | Best-in-class = 2 mins; Tablet check-in is trend. |
| JAD workshop | Brought clerks, IT, billing, nurse. | Resolved conflict on necessary billing data. |
Key requirements derived:
- Auto-verification with insurance API.
- Tablet self-check-in for returning patients.
- Real-time patient data to nurse station.
- Registration time ≤ 3 minutes for 90% of patients.
15. Key Takeaways
- Fact finding is the foundation of all SAD work. Garbage in = garbage out.
- Use multiple techniques to cross-check facts (triangulation).
- Talk to all levels – managers, users, technical staff, and support.
- Watch what people do, not just what they say (observation is powerful).
- Document everything with traceability from fact to requirement.
- Validate your findings with stakeholders before moving to design.
