Quality Control Vision
This page explains how to design and deploy AI/Machine Vision for Quality Control that actually works on production lines—measurable, auditable, and maintainable (no hype).
What it is
Core Capabilities
Quality Control (QC) Vision uses cameras, optics, lighting, and algorithms to detect defects, verify assembly, read text/codes, and perform dimensional checks at line speed.
Methods Include
- Classification - OK/NG, defect type
- Detection - boxes around defects/parts
- Segmentation - pixel-precise defect regions
- Metrology - dimensions/angles/gaps; 2D/3D
- Anomaly detection - learn "normal" then flag deviations
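The last item is often the fastest to prototype. A minimal sketch of the anomaly-detection idea, assuming aligned, same-size grayscale images and a set of known-good samples (the thresholds are illustrative, not tuned values):

# Minimal anomaly-detection sketch: learn per-pixel statistics of "normal"
# parts, then flag pixels that deviate beyond k standard deviations.
# Assumes aligned, same-size grayscale images; names and values are illustrative.
import numpy as np

def fit_normal_model(ok_images):
    """ok_images: list of HxW uint8 arrays of known-good parts."""
    stack = np.stack(ok_images).astype(np.float32)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6  # per-pixel mean, std

def anomaly_map(image, mean, std, k=4.0):
    """Boolean map of pixels deviating more than k sigma from 'normal'."""
    z = np.abs(image.astype(np.float32) - mean) / std
    return z > k

def is_ng(image, mean, std, k=4.0, min_defect_px=30):
    """Flag the part if enough anomalous pixels exceed the threshold."""
    return int(anomaly_map(image, mean, std, k).sum()) >= min_defect_px

# Usage (illustrative):
# mean, std = fit_normal_model(list_of_ok_images)
# result = "NG" if is_ng(new_image, mean, std) else "OK"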
Where it fits
Surface Defects
- Scratches, dents, contamination
- Coating/paint issues
Assembly Verification
- Presence/absence, orientation
- Connector seats, torque marks
Packaging & Labeling
- Print quality, OCR/lot/date codes
- Barcode/QR, label skew/mismatch
Seals & Integrity
- Cap/foil seals, blister pack
- Film wrinkles/tears
Dimensional Checks
- Gap/flush, hole/slot distances
- Thread/teeth count
Process Cues
- Fill level, color drift
- Baking/curing state, solder/weld beads
Optics & lighting that actually work
Critical Parameters
Pixel Density
The smallest feature of interest should span ≥ 3–5 pixels (detection), 8–12 px (reliable classification), and 15–25 px (metrology/OCR).
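A quick sanity check before buying hardware is to compute pixels per feature from the field of view and sensor resolution. A minimal sketch; the FOV, sensor width, and feature size below are illustrative assumptions:

# Pixel-density check (sketch): given field of view and sensor resolution,
# estimate pixels across the smallest feature and compare with the guideline
# above. All numbers are illustrative assumptions.
def pixels_per_feature(fov_mm, sensor_px, feature_mm):
    px_per_mm = sensor_px / fov_mm
    return feature_mm * px_per_mm

def required_sensor_px(fov_mm, feature_mm, min_px=8):
    """Minimum sensor pixels along one axis to reach min_px per feature."""
    return int(round(min_px * fov_mm / feature_mm))

if __name__ == "__main__":
    fov_mm, sensor_px, feature_mm = 120.0, 2448, 0.3   # example values
    print(f"{pixels_per_feature(fov_mm, sensor_px, feature_mm):.1f} px per feature")  # ~6.1 px
    print(required_sensor_px(fov_mm, feature_mm, min_px=8))                           # 3200 px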
Angles & Motion
Keep viewing angles ≤ 25–30° and sensor roll ≤ 5° for dimensional/OCR tasks. Freeze motion with short exposures (1/250–1/2000 s); use encoders + strobe lighting on conveyors.
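Motion blur can be budgeted the same way: blur in pixels is conveyor speed times exposure time times pixel density. A sketch with illustrative numbers, showing why strobing is usually needed at conveyor speeds:

# Motion-blur check (sketch): keep blur well under 1 px for metrology/OCR.
# The speed and pixel density below are illustrative assumptions.
def motion_blur_px(speed_m_per_s, exposure_s, px_per_mm):
    speed_mm_per_s = speed_m_per_s * 1000.0
    return speed_mm_per_s * exposure_s * px_per_mm

def max_exposure_s(speed_m_per_s, px_per_mm, max_blur_px=0.5):
    return max_blur_px / (speed_m_per_s * 1000.0 * px_per_mm)

if __name__ == "__main__":
    speed, px_per_mm = 0.8, 20.0                       # 0.8 m/s conveyor, 20 px/mm
    print(motion_blur_px(speed, 1 / 1000, px_per_mm))  # 16 px -> far too much blur
    print(max_exposure_s(speed, px_per_mm))            # ~31 microseconds -> strobe needed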
Lighting Choices
Lenses & 3D Options
Use telecentric lenses for accurate dimensional metrology and varifocal lenses for general inspection. 3D options: laser triangulation/structured light for height/warp; ToF for coarse depth.
Algorithms & modes
Classical Vision
Threshold/edges/template matching for stable, high-contrast tasks.
Deep Learning
CNN/transformers for variable textures, complex defects.
Hybrid Approach
Classical pre-processing + deep models + rule validators (tolerances/regex/check digits).
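The rule-validator layer is usually plain deterministic code on top of model outputs. A sketch assuming an OCR'd lot code, a GTIN read from a barcode, and a measured gap; the regex pattern and tolerances are illustrative assumptions, while the check digit follows the standard GS1 mod-10 rule:

# Hybrid validation sketch: model outputs (OCR text, measured gap) are passed
# through deterministic rule validators. Pattern and tolerances are illustrative.
import re

LOT_PATTERN = re.compile(r"^LOT[A-Z0-9]{6}$")   # e.g. "LOT24A917" (assumed format)

def validate_lot_code(ocr_text: str) -> bool:
    return bool(LOT_PATTERN.match(ocr_text.strip()))

def validate_gtin_check_digit(gtin: str) -> bool:
    """Standard GS1 mod-10 check digit (EAN-8/12/13/14)."""
    if not gtin.isdigit() or len(gtin) < 8:
        return False
    digits = [int(c) for c in gtin]
    body, check = digits[:-1], digits[-1]
    total = sum(d * (3 if i % 2 == 0 else 1) for i, d in enumerate(reversed(body)))
    return (10 - total % 10) % 10 == check

def validate_gap(gap_mm: float, nominal=0.40, tol=0.15) -> bool:
    return abs(gap_mm - nominal) <= tol

def decide(ocr_text: str, gtin: str, gap_mm: float) -> str:
    ok = (validate_lot_code(ocr_text)
          and validate_gtin_check_digit(gtin)
          and validate_gap(gap_mm))
    return "OK" if ok else "NG"

# decide("LOT24A917", "4006381333931", 0.42)  -> "OK"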
Metrology
Calibrated pixel-to-mm, sub-pixel edge fit; 3D height maps for warp/bow.
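A minimal sketch of the pixel-to-mm step, assuming the sub-pixel edge positions come from an edge-fitting routine elsewhere and the scale is derived from a reference of known width (all numbers illustrative):

# Calibrated pixel-to-mm measurement (sketch).
from dataclasses import dataclass

@dataclass
class ScaleCalibration:
    mm_per_px: float

def calibrate(known_width_mm: float, measured_width_px: float) -> ScaleCalibration:
    """Derive the scale from a reference feature of known width."""
    return ScaleCalibration(mm_per_px=known_width_mm / measured_width_px)

def gap_mm(edge_a_px: float, edge_b_px: float, cal: ScaleCalibration) -> float:
    """Convert a pair of sub-pixel edge positions into a physical gap."""
    return abs(edge_b_px - edge_a_px) * cal.mm_per_px

cal = calibrate(known_width_mm=10.0, measured_width_px=204.6)   # ~0.0489 mm/px
print(round(gap_mm(512.3, 520.9, cal), 3))                      # ~0.42 mm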
Data & ground truth
Requirements Planning
- Defect spec first: names, visuals, tolerances, cost of false reject vs false accept.
- Representative samples: capture day/night, lot-to-lot, tool wear, color batches.
- Sample size: start with 200–500 images per defect state; grow with failure analysis.
Data Management
- Annotate consistently: pixel masks for surface defects; line/point sets for metrology.
- Splits by site/time/device to avoid leakage; hold out a golden set for acceptance.
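A sketch of a leakage-safe split, assuming each image record carries its site and production day; the grouping key and ratios are illustrative:

# Leakage-safe split sketch: assign whole groups (e.g. site + production day)
# to train/val/test so near-duplicate frames never straddle a split.
import hashlib

def split_of(group_key: str, val_frac=0.15, test_frac=0.15) -> str:
    """Deterministically map a group key (site|day|device) to a split."""
    h = int(hashlib.sha256(group_key.encode()).hexdigest(), 16) % 10_000 / 10_000
    if h < test_frac:
        return "test"
    if h < test_frac + val_frac:
        return "val"
    return "train"

records = [
    {"image": "imgs/0001.png", "site": "plantA", "day": "2025-08-20"},
    {"image": "imgs/0002.png", "site": "plantA", "day": "2025-08-20"},
    {"image": "imgs/0003.png", "site": "plantB", "day": "2025-08-21"},
]
for r in records:
    r["split"] = split_of(f'{r["site"]}|{r["day"]}')
# Frames from the same site/day always land in the same split.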
Metrics that matter
Detection/Segmentation
- Per-defect Precision/Recall/F1
- mAP, PR curves
- False rejects/accepts per 10k
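A sketch of the first and third items, assuming per-part ground-truth and predicted defect labels are available as sets (an empty set means OK):

# Per-defect metrics sketch: precision/recall/F1 per defect class, plus false
# rejects/accepts per 10k parts, from paired ground-truth / predicted labels.
def per_defect_prf(gt_labels, pred_labels, classes):
    """gt_labels / pred_labels: per-part sets of defect names (empty set = OK)."""
    stats = {}
    for c in classes:
        tp = sum(1 for g, p in zip(gt_labels, pred_labels) if c in g and c in p)
        fp = sum(1 for g, p in zip(gt_labels, pred_labels) if c not in g and c in p)
        fn = sum(1 for g, p in zip(gt_labels, pred_labels) if c in g and c not in p)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        stats[c] = {"precision": precision, "recall": recall, "f1": f1}
    return stats

def rates_per_10k(gt_labels, pred_labels):
    """False rejects: OK parts flagged NG; false accepts: defective parts passed."""
    n = len(gt_labels)
    false_rejects = sum(1 for g, p in zip(gt_labels, pred_labels) if not g and p)
    false_accepts = sum(1 for g, p in zip(gt_labels, pred_labels) if g and not p)
    return {"FR_per_10k": 10_000 * false_rejects / n,
            "FA_per_10k": 10_000 * false_accepts / n}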
Metrology
- Bias, repeatability, reproducibility (Gage R&R)
- Uncertainty (mm) vs tolerance
- Cp/Cpk impact
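A minimal Cp/Cpk sketch relating measurement spread to the tolerance band; the specification limits and sample data are illustrative:

# Cp/Cpk sketch: capability indices from measured values and spec limits.
import statistics

def cp_cpk(values, lsl, usl):
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    cp = (usl - lsl) / (6 * sigma)                     # spread vs tolerance width
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)        # also penalizes off-center mean
    return cp, cpk

gaps = [0.41, 0.42, 0.40, 0.43, 0.42, 0.41, 0.44, 0.42]   # illustrative gap readings (mm)
print(cp_cpk(gaps, lsl=0.25, usl=0.55))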
Operations
- End-to-end latency
- Throughput (parts/min)
- Image retention %, downtime due to vision
Throughput & timing
Full Path Budget
Budget the full path: exposure → transfer → preprocess → inference → postprocess → PLC I/O → rejector actuation.
Strobe Optimization
Use strobe to cut exposure while keeping brightness.
Part Tracking
Buffer & track parts so ejectors hit the right one (distance → time).
Timing Budget
Keep per-part latency below the cycle time with headroom (e.g., use at most 60–70% of the cycle time).
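A sketch of the last two points, with illustrative line figures: check latency against the cycle-time headroom and convert the camera-to-rejector distance into an ejection delay:

# Timing-budget sketch: latency vs cycle-time headroom, and distance -> time
# for the rejector. All figures are illustrative assumptions.
def headroom_ok(latency_ms, cycle_time_ms, max_utilization=0.7):
    """Latency should consume at most ~60-70% of the cycle time."""
    return latency_ms <= max_utilization * cycle_time_ms

def eject_delay_ms(distance_m, conveyor_speed_mps):
    """Travel time from camera to rejector (distance -> time)."""
    return 1000.0 * distance_m / conveyor_speed_mps

cycle_time_ms = 60_000 / 300          # 300 parts/min -> 200 ms per part
print(headroom_ok(latency_ms=62, cycle_time_ms=cycle_time_ms))   # True
print(eject_delay_ms(distance_m=0.45, conveyor_speed_mps=0.8))   # ~562 ms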
Integration & traceability
I/O & Protocols
Reject Stations
Air-blast/diverter/marker; interlock with safety systems.
Evidence & Data Management
- Evidence: store crops (defect region) + minimal context; configurable retention (e.g., 30–90 days); hash for integrity.
- SPC & MES: push pass/fail and measurements; enable trend alarms and auto-stop rules.
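A sketch of the evidence-hashing idea, assuming crops are stored as files with a JSON sidecar; paths and field names are illustrative:

# Evidence-integrity sketch: save the defect crop plus a JSON sidecar carrying
# its SHA-256 hash, so stored evidence can be verified later.
import hashlib, json, pathlib
from datetime import datetime, timezone

def store_evidence(crop_bytes: bytes, part_sn: str, defect_type: str,
                   out_dir: str = "evidence") -> dict:
    folder = pathlib.Path(out_dir)
    folder.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha256(crop_bytes).hexdigest()
    crop_path = folder / f"{part_sn}_{defect_type}.png"
    crop_path.write_bytes(crop_bytes)
    record = {
        "part_sn": part_sn,
        "defect_type": defect_type,
        "file": crop_path.name,
        "sha256": digest,
        "stored_at": datetime.now(timezone.utc).isoformat(),
    }
    crop_path.with_suffix(".json").write_text(json.dumps(record, indent=2))
    return record

def verify_evidence(record: dict, out_dir: str = "evidence") -> bool:
    """Recompute the hash of the stored crop and compare with the sidecar."""
    data = (pathlib.Path(out_dir) / record["file"]).read_bytes()
    return hashlib.sha256(data).hexdigest() == record["sha256"]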
Reliability & upkeep
Preventive
- Lens clean, focus checks
- Strobe aging, fan/temperature alarms
Re-qualification
- Periodic MSA/Gage R&R
- Test chart runs, lighting re-tune
Drift Control
- Monitor confidence and class distributions
- Trigger retraining with active learning
Spares
- Spare lights/cameras/lenses
- Printed SOP for swap-and-calibrate
Validation roadmap
Spec & risk
Defect list, tolerances, FA/FR costs
Optics pilot
Lighting/lens proof with target px/feature
Model baseline
Train/val/test; report per-defect F1 & uncertainty
MSA/Line test
Gage R&R, latency budget, eject accuracy
Acceptance
Hit KPIs on golden set + 1–2 weeks live shadow run
Handover
SOPs, retrain triggers, spare kit, dashboards
Red flags
"Works in any lighting/angle" without a pixel-density or lighting plan
Model-only FPS (no decode/IO/post-proc/PLC)
No per-defect metrics; only overall accuracy
No MSA/Gage R&R; no golden-set acceptance
One camera expected to handle overview + detail + OCR simultaneously
GaugeSnap integration
Edge AI Packs
- Surface-defect segmentation
- OCR/lot, label/print check
- Cap/foil seal validation
- Gap/flush metrology (2D/3D)
Industrial Connectors
- PLC (EtherNet/IP, PROFINET, Modbus/TCP)
- OPC UA, MQTT → MES/ERP/SPC
Dashboards
- Per-defect F1, mAP/PR curves
- Latency histograms, eject accuracy
- Drift alerts
Sustainable AI
- INT8/FP16, ROI pipelines
- kWh/1k inferences and cost KPIs
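A sketch of the energy KPI, assuming average device power draw and per-inference latency are known; the figures are illustrative:

# Energy-KPI sketch: kWh per 1,000 inferences from power draw and latency.
def kwh_per_1k(avg_power_w: float, latency_ms: float) -> float:
    joules_per_inference = avg_power_w * (latency_ms / 1000.0)
    return 1000 * joules_per_inference / 3_600_000   # joules -> kWh

print(kwh_per_1k(avg_power_w=25.0, latency_ms=62))   # ~0.00043 kWh per 1k inferences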
Example event
{
  "event": "qc_result",
  "station_id": "lineB_cam2",
  "part_sn": "ABX-24-008173",
  "result": "NG",
  "defects": [
    {"type": "scratch", "mask_bbox": [412, 96, 120, 64], "score": 0.91},
    {"type": "label_skew", "angle_deg": 5.7, "score": 0.88}
  ],
  "measurements": {"gap_mm": 0.42, "flush_mm": 0.10},
  "latency_ms": 62,
  "conveyor_speed_mps": 0.8,
  "ts": "2025-08-25T12:34:56Z"
}
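A sketch of assembling this payload from inspection outputs; the schema mirrors the example above, and transport (MQTT topic, OPC UA node) is left to the site's client of choice:

# Build the qc_result payload shown above from inspection outputs.
import json
from datetime import datetime, timezone

def build_qc_event(station_id, part_sn, defects, measurements,
                   latency_ms, conveyor_speed_mps):
    return {
        "event": "qc_result",
        "station_id": station_id,
        "part_sn": part_sn,
        "result": "NG" if defects else "OK",
        "defects": defects,
        "measurements": measurements,
        "latency_ms": latency_ms,
        "conveyor_speed_mps": conveyor_speed_mps,
        "ts": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

payload = json.dumps(build_qc_event(
    "lineB_cam2", "ABX-24-008173",
    defects=[{"type": "scratch", "mask_bbox": [412, 96, 120, 64], "score": 0.91}],
    measurements={"gap_mm": 0.42, "flush_mm": 0.10},
    latency_ms=62, conveyor_speed_mps=0.8))
# payload is ready to publish, e.g. on an MQTT topic such as "qc/lineB/cam2" (assumed naming).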
How to start (low-risk)
Send us:
Defect Spec Sheet
Names, tolerances, costs of FR/FA
Videos & Photos
2–3 min videos per station (day/night, speeds) + photos of parts
Line Data
Cycle time, conveyor speed, available PLC I/O
We'll return:
- Optics/lighting plan
- Pixel-density check
- Baseline metrics report (per-defect F1, latency)
- Pilot with clear KPIs
Principle
Prove with optics + per-defect metrics on your line—then scale.