Our unified image insertion framework supports diverse practical scenarios, including artistic creation, realistic face swapping, cinematic scene composition, virtual garment try-on, accessory customization, and digital prop replacement, demonstrating its versatility across a wide range of image editing tasks.
This work presents Insert Anything, a unified framework for reference-based image insertion that seamlessly integrates objects from reference images into target scenes under flexible, user-specified control guidance. Instead of training separate models for individual tasks, our approach is trained once on our new AnyInsertion dataset, comprising 120K prompt-image pairs that cover diverse tasks such as person, object, and garment insertion, and it generalizes effortlessly to a wide range of insertion scenarios. Such a challenging setting requires capturing both identity features and fine-grained details while allowing versatile local adaptations in style, color, and texture. To this end, we leverage the multimodal attention of the Diffusion Transformer (DiT) to support both mask- and text-guided editing. Furthermore, we introduce an in-context editing mechanism that treats the reference image as contextual information, employing two prompting strategies to harmonize the inserted elements with the target scene while faithfully preserving their distinctive features. Extensive experiments on the AnyInsertion, DreamBooth, and VTON-HD benchmarks demonstrate that our method consistently outperforms existing alternatives, underscoring its potential in real-world applications such as creative content generation, virtual try-on, and scene composition.
Image pairs are collected from internet sources, human videos, and multi-view images. The dataset is divided into mask-prompt and text-prompt categories, each further subdivided into accessories, objects, and persons. Together, these categories cover diverse insertion scenarios, including furniture, daily necessities, garments, vehicles, and humans.
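For concreteness, one way a single training sample could be represented in code is sketched below; the field names and file paths are our own illustrative assumptions, not the released dataset schema.

# Hypothetical record layout for one AnyInsertion prompt-image pair.
# Field names are assumptions for illustration; consult the released dataset for the actual schema.
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class AnyInsertionSample:
    prompt_type: Literal["mask", "text"]           # mask-prompt or text-prompt subset
    category: Literal["accessory", "object", "person"]
    reference_image: str                           # path to the element to insert
    source_image: str                              # path to the target scene
    target_image: str                              # ground-truth image after insertion
    insertion_mask: Optional[str] = None           # used by mask-prompt samples
    text_prompt: Optional[str] = None              # used by text-prompt samples

# Example: a mask-guided garment-insertion pair (paths are placeholders).
sample = AnyInsertionSample(
    prompt_type="mask",
    category="object",
    reference_image="refs/jacket_001.png",
    source_image="scenes/person_012.png",
    target_image="targets/person_012_jacket_001.png",
    insertion_mask="masks/person_012_upper_body.png",
)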
Given different types of prompts, our unified framework processes polyptych inputs (concatenation of reference, source, and masks) through a frozen VAE encoder to preserve high-frequency details, and extracts semantic guidance from image and text encoders. These embeddings are combined and fed into learnable DiT transformer blocks for in-context learning, enabling precise and flexible image insertion guided by either mask or text prompts.
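The sketch below illustrates this flow in plain PyTorch under simplifying assumptions: the frozen VAE encoder, the image/text encoders, and the DiT blocks are replaced by generic stand-in modules, so it shows the data flow rather than the released implementation.

# Minimal sketch of the polyptych conditioning flow described above.
# All module internals are simplified stand-ins, not the actual Insert Anything weights.
import torch
import torch.nn as nn

class PolyptychDiTSketch(nn.Module):
    def __init__(self, hidden=512, ctx_dim=512, latent_ch=4, depth=4, heads=8):
        super().__init__()
        # Stand-in for the frozen VAE encoder plus patchify; kept frozen to preserve detail.
        self.vae_encode = nn.Conv2d(3, hidden, kernel_size=8, stride=8)
        for p in self.vae_encode.parameters():
            p.requires_grad = False
        self.ctx_proj = nn.Linear(ctx_dim, hidden)  # projects image/text-encoder guidance
        block = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, depth)  # learnable DiT-style blocks
        self.head = nn.Linear(hidden, latent_ch)

    def forward(self, polyptych, guidance_tokens):
        # polyptych: reference, source, and mask panels concatenated side by side (B, 3, H, W)
        # guidance_tokens: semantic embeddings from the image/text encoders (B, T, ctx_dim)
        x = self.vae_encode(polyptych).flatten(2).transpose(1, 2)   # (B, N, hidden) image tokens
        ctx = self.ctx_proj(guidance_tokens)                        # (B, T, hidden) context tokens
        tokens = self.blocks(torch.cat([ctx, x], dim=1))            # joint multimodal attention
        return self.head(tokens[:, ctx.shape[1]:])                  # predictions for image tokens only

model = PolyptychDiTSketch()
panels = torch.randn(1, 3, 128, 384)    # three 128x128 panels: reference | source | mask
guidance = torch.randn(1, 77, 512)      # e.g. CLIP-style text or image guidance tokens
out = model(panels, guidance)           # shape (1, 16 * 48, 4)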
@misc{song2025insertanythingimageinsertion,
      title={Insert Anything: Image Insertion via In-Context Editing in DiT},
      author={Wensong Song and Hong Jiang and Zongxing Yang and Ruijie Quan and Yi Yang},
      year={2025},
      eprint={2504.15009},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2504.15009},
}