Smoke, Sanity, and Acceptance Testing: A QA Guide
These three testing types are often confused, but each serves a distinct purpose in the software testing lifecycle. Understanding when to use smoke, sanity, or acceptance testing can save your team time and resources and prevent costly production bugs.
Smoke vs Sanity vs Acceptance Testing
A clear comparison of purpose, scope, and execution for each testing type.
| Aspect | Smoke Testing | Sanity Testing | Acceptance Testing |
|---|---|---|---|
| Purpose | Verify critical functionalities of a new build work before deeper testing | Validate specific changes or bug fixes didn't break related functionality | Confirm the software meets business requirements and is ready for production |
| When it runs | First step after receiving a new build | After smoke testing passes and specific changes are deployed | Final phase before production release |
| Who executes | QA team (often automated) | QA team (manual or automated) | End users, client stakeholders, or business analysts |
| Scope | Broad but shallow - critical paths only | Narrow and focused - specific features or fixes | Comprehensive - all requirements and user scenarios |
| Documentation | Minimal - checklist or automated test suite | Minimal - targeted test cases | Extensive - formal test plan, acceptance criteria, sign-off |
| Decision outcome | Go/No-Go - reject build or proceed to testing | Pass/Fail - accept fix or return to development | Accept/Reject - approve for production or require changes |
| Automation | Highly recommended (runs frequently) | Sometimes (depends on frequency of changes) | Rarely (requires business judgment and real user perspective) |
When to Use Each Type
As a rule of thumb: run smoke testing on every new build, sanity testing after a targeted fix or change, and acceptance testing when a release candidate is ready for business sign-off.
How BetterQA Uses Each Testing Type
Real examples from our 15+ years of QA experience across healthcare, fintech, and SaaS projects.
Healthcare EMR Platform
Scenario:
New build deployed to staging with updates to patient record viewing and prescription workflows. Before running 200+ functional tests, we needed to confirm the build was stable.
Our Smoke Test:
Login as three user roles (doctor, nurse, admin), open a patient record, view medical history, and logout. If any of these fail, the build is rejected immediately.
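A smoke gate like the one above can be sketched as a short go/no-go runner. This is a minimal illustration, not BetterQA's actual suite: the check functions are placeholders standing in for real UI or API interactions against staging.

```python
# Minimal sketch of a smoke-test gate. The checks are hypothetical stubs;
# a real suite would drive the staging environment (e.g. via Playwright).

def can_login(role: str) -> bool:
    # Placeholder: a real check would authenticate against staging.
    return role in {"doctor", "nurse", "admin"}

def can_open_patient_record() -> bool:
    return True  # placeholder for a real UI/API check

def can_view_medical_history() -> bool:
    return True  # placeholder for a real UI/API check

SMOKE_CHECKS = [
    ("login as doctor", lambda: can_login("doctor")),
    ("login as nurse", lambda: can_login("nurse")),
    ("login as admin", lambda: can_login("admin")),
    ("open patient record", can_open_patient_record),
    ("view medical history", can_view_medical_history),
]

def run_smoke() -> bool:
    """Go/no-go: reject the build on the first failing critical check."""
    for name, check in SMOKE_CHECKS:
        if not check():
            print(f"SMOKE FAIL: {name} -> build rejected")
            return False
    print("Smoke passed: proceed to deeper testing")
    return True
```

The point of the structure is the early return: one failed critical path stops everything, which is exactly the go/no-go decision described above.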
Fintech Payment Gateway
Scenario:
Bug fix deployed for credit card validation that was incorrectly rejecting valid Amex cards. The fix modified validation logic in the payment processing module.
Our Sanity Test:
Submit payments with Amex, Visa, Mastercard, and invalid card numbers. Verify Amex now passes, other cards still work correctly, and invalid cards are still rejected.
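A sanity check like this one can be expressed as a handful of targeted cases against the changed logic. The sketch below assumes a simple brand detector plus a Luhn checksum; the actual gateway's validation code is not shown in this article, so treat the functions as illustrative stand-ins. The card numbers are standard public test numbers.

```python
# Hypothetical card validation logic (assumes a full-length digit string).
def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum: double every second digit from the right."""
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def brand(number: str) -> str:
    """Detect the card brand from prefix and length."""
    if number[:2] in ("34", "37") and len(number) == 15:
        return "amex"
    if number.startswith("4") and len(number) in (13, 16, 19):
        return "visa"
    if (51 <= int(number[:2]) <= 55 or 2221 <= int(number[:4]) <= 2720) \
            and len(number) == 16:
        return "mastercard"
    return "unknown"

def accept(number: str) -> bool:
    return brand(number) != "unknown" and luhn_ok(number)

# The sanity cases: the fixed path (Amex) plus regression checks.
SANITY_CASES = {
    "378282246310005": True,    # Amex - the path the fix changed
    "4111111111111111": True,   # Visa - still accepted
    "5555555555554444": True,   # Mastercard - still accepted
    "1234567890123456": False,  # invalid number - still rejected
}
```

Note how narrow the case set is: one check for the fix itself and a few for the immediately adjacent behavior, which is the defining trait of a sanity test.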
SaaS Project Management Tool
Scenario:
All features for Q1 release are complete and tested. Client needs to validate that the new Gantt chart view, resource allocation, and reporting features meet their requirements before launch.
Our UAT Process:
Client project managers used the tool for one week with real project data. They created projects, assigned resources, generated reports, and confirmed workflows matched their needs.
Common Mistakes and How to Avoid Them
These mistakes cost teams time, create confusion, and let critical bugs slip through.
Treating Smoke Tests Like Regression Tests
Teams create smoke test suites with 100+ test cases that take hours to run. This defeats the purpose - smoke tests should be fast checks of critical paths, not comprehensive regression suites.
Skipping Smoke Tests Because "It's Just a Small Change"
Developers commit a "minor fix" and push it directly to QA without smoke testing. Small changes in one module can break core functionality in unexpected ways.
Running Sanity Tests Without Smoke Tests First
Teams jump straight to sanity testing a specific fix without confirming the build's core functionality works. If login is broken, there's no point testing the new search feature.
Letting Developers Perform Acceptance Testing
Development teams mark features as "done" after internal testing, without involving actual users or business stakeholders. Developers test technical functionality, not business value.
Automating Acceptance Tests Too Early
Teams attempt to automate UAT scenarios before workflows are stable. Acceptance criteria change frequently in early development, making automated tests brittle and expensive to maintain.
Using the Same Test Data Across All Testing Types
Smoke, sanity, and acceptance tests all use the same test accounts and data sets. This creates false confidence - tests pass in QA but the software fails in production with real user data.
What to Automate vs Keep Manual
Not all testing should be automated. Here's our framework for deciding where automation adds value and where human judgment is essential.
Automate These
Smoke tests run on every build and test the same critical paths repeatedly. They're perfect candidates for automation. At BetterQA, we use Playwright and our Flows tool - self-healing automation that adapts when UI selectors change.
If you're testing the same areas repeatedly (like payment processing or authentication), automate those sanity checks. But don't automate one-off bug fixes - manual testing is faster for unique scenarios.
Build a solid automated regression suite that runs after smoke testing passes. This catches issues that smoke tests miss while keeping smoke tests fast and focused.
API smoke tests run faster than UI tests and can validate backend functionality independently. We use Postman or custom scripts to verify critical endpoints before UI testing begins.
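As a minimal sketch of such an API smoke check using only the standard library: the endpoint URLs are hypothetical, and the status-code classification is split into a pure function so it can be tested without network access.

```python
import urllib.error
import urllib.request

# Hypothetical critical endpoints; a real suite would load these from config.
CRITICAL_ENDPOINTS = [
    "https://staging.example.com/api/health",
    "https://staging.example.com/api/auth/ping",
]

def status_is_healthy(status: int) -> bool:
    """Any 2xx response counts as up; everything else fails the check."""
    return 200 <= status < 300

def check_endpoint(url: str, timeout: float = 5.0) -> bool:
    """Hit one endpoint; treat network errors as a failed smoke check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return status_is_healthy(resp.status)
    except (urllib.error.URLError, OSError):
        return False

def api_smoke(urls=CRITICAL_ENDPOINTS) -> bool:
    """Go/no-go across all critical endpoints before UI testing begins."""
    return all(check_endpoint(u) for u in urls)
```

In practice a team might run this from the CI pipeline before launching any browser-based tests, since a dead backend makes UI results meaningless.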
Keep These Manual
UAT requires business judgment, user perspective, and validation of real-world workflows. Automated tests can't assess whether software meets business needs or if workflows feel intuitive to actual users.
When a complex bug fix touches multiple areas, exploratory testing finds edge cases that scripted tests miss. Experienced testers can identify unexpected interactions that automation wouldn't think to check.
Automated tests can verify elements exist, but they can't judge whether layouts look correct or if user flows feel natural. Visual regression tools help, but human review is essential for usability.
If a bug fix is unique and unlikely to recur, manual sanity testing is faster than writing automation. Reserve automation for tests that will run multiple times and provide long-term value.
Learn More About Testing Fundamentals
Common Questions About Testing Types
What is the difference between smoke testing and sanity testing?
Smoke testing verifies critical functionalities of a new build before deeper testing begins - it's a go/no-go check. Sanity testing validates specific changes or bug fixes after a build passes smoke testing. Think of smoke testing as checking if the engine starts, while sanity testing confirms that the new air conditioning system doesn't disable the radio.
Should smoke tests be automated?
Yes, smoke tests are ideal candidates for automation. They run frequently (every build), test the same critical paths, and need fast feedback. At BetterQA, we automate smoke tests using Playwright or our Flows tool - self-healing test automation that adapts when UI selectors change.
When does acceptance testing happen?
Acceptance testing happens after all functional, integration, and system testing is complete. It's the final validation before production release. The timing depends on your methodology - in Agile, UAT happens at the end of each sprint for completed features. In Waterfall, it's a distinct phase before deployment.
Can sanity testing replace regression testing?
Sanity testing can catch obvious regression issues in areas directly affected by recent changes. However, it's not a substitute for regression testing. Sanity testing is narrow and focused - it verifies that a specific fix works without breaking immediately related functionality. Comprehensive regression testing covers broader system interactions.
What happens if a build fails smoke testing?
If a build fails smoke testing, it's rejected immediately and sent back to development. No further testing happens. This saves time and resources - there's no point running detailed test suites on a build where core functionality is broken. The development team fixes the critical issues, creates a new build, and smoke testing runs again.
How long should smoke tests take?
Smoke tests should complete in 15-30 minutes maximum. They're designed to be fast checks of critical paths. If your smoke tests take longer, you're testing too much - focus only on features that would make the build unusable if broken. At BetterQA, our automated smoke suites typically run in 10-15 minutes across multiple browsers.
Need Help Building a Solid Testing Strategy?
BetterQA's 50+ certified engineers can help you implement smoke, sanity, and acceptance testing that actually works. We include our proprietary tools - BugBoard, Flows, and BetterFlow - with every engagement.
When to Run Each Test Type
These three test types form a sequence. Each one gates the next - you do not run sanity testing if smoke testing fails, and you do not run acceptance testing until sanity passes.
Smoke Testing
"Can this build run at all?" Quick check of critical paths. Takes 15-30 minutes.
Sanity Testing
"Did the fix work without breaking anything?" Targeted check of changed components. Takes 1-2 hours.
Acceptance Testing
"Does this meet business requirements?" Full validation against user stories. Takes 1-5 days.
Smoke vs Sanity vs Acceptance Testing
| Attribute | Smoke Testing | Sanity Testing | Acceptance Testing (UAT) |
|---|---|---|---|
| Purpose | Verify build stability | Verify specific fix/feature | Validate business requirements |
| Who runs it | QA team / CI pipeline | QA team | End users / product owner |
| Scope | Broad, shallow | Narrow, focused | Full business scenarios |
| Duration | 15-30 min | 1-2 hours | 1-5 days |
| Automated? | Yes (always) | Partially | Usually manual |
| Fail = ? | Reject build entirely | Send back to dev | Block release |
| Test cases | 10-30 | 5-15 | 50-200+ |
Which Test Should You Run?
New build just deployed? Run smoke testing. Verify login, homepage, core navigation, and critical API endpoints respond. If any fail, reject the build immediately.
Bug fix or small change deployed? Run sanity testing. Verify the fix works, then check 2-3 related features for regressions. Do not run full regression here.
Release candidate ready for business sign-off? Run acceptance testing. Walk through business scenarios with actual users. Validate against acceptance criteria in user stories.
Not sure which test applies? Start with smoke, then narrow down. Smoke testing identifies WHAT is broken. Sanity testing identifies WHERE. Acceptance testing confirms IF the fix meets business needs.
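The decision flow above amounts to a small lookup from situation to test type. The situation labels below are illustrative phrasings, not an API from the article.

```python
# Hypothetical mapping of common situations to the appropriate test type.
DECISION = {
    "new build deployed": "smoke",
    "bug fix or small change deployed": "sanity",
    "release candidate ready for sign-off": "acceptance",
}

def which_test(situation: str) -> str:
    """Return the test type for a situation, defaulting to the fallback rule."""
    return DECISION.get(situation, "start with smoke, then narrow down")
```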
Cost of Skipping Each Test Type
All three test types together cost a fraction of a single production incident. See our SDLC cost analysis for the full data.
Need help building a testing pipeline?
BetterQA sets up smoke, sanity, and acceptance testing workflows integrated into your CI/CD pipeline.
Need help with software testing?
BetterQA provides independent QA services with 50+ engineers across manual testing, automation, security audits, and performance testing.