Mirror of https://github.com/hacksider/Deep-Live-Cam.git (synced 2026-05-14 02:42:09 +02:00, commit a6c99607fc)
_fast_paste_back used to erode and Gaussian-blur the warped alpha mask in output coordinates, with kernel sizes proportional to the on-screen face bbox. That made the per-frame cost ~O(area * k^2): a face filling half the frame took ~8x the compositing work of one filling a quarter, which is why FPS fell off when leaning into the camera.

Instead, build a feathered alpha template once at aligned-face resolution (128x128 for inswapper) and warp the soft mask per-frame. The affine transform preserves the relative feather width, so the visual output is equivalent, while the per-frame cost drops to O(crop_area) with no size-scaled erode/blur and no size-scaled padding.

This also collapses the CPU fallback onto the same path: it previously did a full-frame warpAffine twice per call, which scaled with the whole frame instead of the face crop.
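A minimal numpy-only sketch of the idea, assuming the standard inswapper setup where an affine M maps the 128x128 aligned crop into frame coordinates. The function names are illustrative, the linear-ramp feather stands in for the erode + Gaussian-blur feathering, and the nearest-neighbor inverse map stands in for cv2.warpAffine:

```python
import numpy as np

def build_feather_template(size=128, feather=8):
    """Built ONCE: alpha is 1.0 in the interior and ramps linearly to ~0
    within `feather` pixels of each edge of the aligned-face crop."""
    ramp = np.minimum(np.arange(size) + 1, feather) / feather
    edge = np.minimum(ramp, ramp[::-1])        # ramp in from both sides
    return np.minimum.outer(edge, edge)        # 2D mask via outer min

def warp_mask(template, M, out_shape):
    """Per-frame: warp the soft template into frame coords with affine M
    (2x3, crop -> frame). Nearest-neighbor sampling over the inverse map;
    cost scales with out_shape, never with a size-dependent kernel."""
    H, W = out_shape
    A, t = M[:, :2], M[:, 2]
    Ainv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:H, 0:W]
    src = Ainv @ (np.stack([xs, ys]).reshape(2, -1).astype(float) - t[:, None])
    sx, sy = np.round(src).astype(int)
    valid = (sx >= 0) & (sx < template.shape[1]) & \
            (sy >= 0) & (sy < template.shape[0])
    out = np.zeros(H * W)
    out[valid] = template[sy[valid], sx[valid]]
    return out.reshape(H, W)

def paste_back(frame, swapped_full, mask):
    """Alpha-composite the warped swapped face over the frame."""
    a = mask[..., None]
    return a * swapped_full + (1.0 - a) * frame
```

The key point is that the feathering cost is paid once at template-build time at fixed 128x128 resolution; the per-frame work is a single warp plus a blend, regardless of how large the face appears on screen.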