Kordu Tools

Depth Map Generator

AI · Runs in browser

Generate depth maps from any photo using AI. Visualize 3D structure with colorized depth estimation — runs in your browser.

Last updated 01 Apr 2026

Estimates per-pixel depth from any single photograph using Depth-Anything Small — a state-of-the-art monocular depth estimation model. Choose from grayscale, viridis, plasma, or inferno colormaps. The downloadable PNG depth map works as a displacement texture in Blender, Cinema 4D, game engines, and video compositing tools. Runs entirely in your browser — no upload required.


How to use

  1. Upload your photo

    Click the upload area or drag and drop any JPG, PNG, or WebP image. Landscapes, portraits, interior scenes, and product photos all work well.

  2. Wait for AI processing

    The AI model analyzes depth cues in the image. On first use the ~50MB model downloads and caches — this takes 15-60 seconds. Subsequent uses are fast.

  3. Choose a colormap

    Select grayscale for standard displacement maps, viridis or plasma for colorblind-friendly scientific visualization, or inferno for maximum near-far contrast.

  4. Compare original and depth map

    Drag the split slider to compare the original image with the generated depth map side by side.

  5. Download the depth map

    Click Download to save the depth map as a PNG. Import it as a displacement or depth texture in Blender, Cinema 4D, or your preferred 3D or compositing tool.

Frequently asked questions

Is my image uploaded to a server?
No. All processing runs in your browser using WebAssembly and ONNX Runtime. Your images never leave your device.
What is a depth map?
A depth map is a grayscale or colorized image where pixel brightness encodes distance from the camera. Brighter pixels are typically closer, darker pixels are farther away. The exact convention may be inverted depending on your 3D software's expectations.
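
In code terms, that brightness encoding is just a min-max normalization of the model's depth values, with an optional flip for software that expects the opposite convention. A minimal sketch (the helper name `depthToGray` is hypothetical, not part of this tool):

```javascript
// Convert relative depth values to 8-bit grayscale brightness.
// Default convention: larger depth value = closer = brighter.
// Pass invert=true for 3D tools that expect white = far.
function depthToGray(depths, invert = false) {
  const min = Math.min(...depths);
  const max = Math.max(...depths);
  const range = max - min || 1; // avoid division by zero on flat input
  return depths.map((d) => {
    let t = (d - min) / range;  // normalize to 0..1
    if (invert) t = 1 - t;
    return Math.round(t * 255); // 0..255 brightness
  });
}

// A tiny 4-pixel "depth map": the closest pixel becomes white (255).
console.log(depthToGray([0.9, 0.5, 0.5, 0.1])); // → [ 255, 128, 128, 0 ]
```

If your displacement looks inside-out in a 3D tool, flipping the map with the invert option (or the tool's own invert node) is usually the fix.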
How large is the model download?
The Depth-Anything Small model is approximately 50MB. It downloads once on first use and is cached in your browser — future runs load instantly without downloading again.
What colormaps are available?
Grayscale (classic black-to-white for displacement maps), Viridis (dark purple to yellow, perceptually uniform, colorblind-friendly), Plasma (dark blue through magenta to bright yellow, also colorblind-friendly), and Inferno (black through red-orange to pale yellow, high contrast between near and far regions).
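
Under the hood, a colormap is just a lookup that interpolates between color stops. A toy sketch using a coarse two-stop approximation of viridis (real colormaps use hundreds of stops; `sampleColormap` is an illustrative helper, not this tool's code):

```javascript
// A crude two-point stand-in for the viridis ramp (dark purple -> yellow).
const VIRIDIS_ISH = [
  [68, 1, 84],    // t = 0: dark purple (far)
  [253, 231, 37], // t = 1: bright yellow (near)
];

// Map a normalized depth t in [0, 1] to an RGB triple by linear
// interpolation between the two nearest stops.
function sampleColormap(stops, t) {
  t = Math.min(1, Math.max(0, t)); // clamp out-of-range depths
  const pos = t * (stops.length - 1);
  const i = Math.floor(pos);
  const j = Math.min(i + 1, stops.length - 1);
  const f = pos - i; // fraction between stop i and stop j
  return stops[i].map((c, k) => Math.round(c + (stops[j][k] - c) * f));
}

console.log(sampleColormap(VIRIDIS_ISH, 0)); // → [ 68, 1, 84 ]
console.log(sampleColormap(VIRIDIS_ISH, 1)); // → [ 253, 231, 37 ]
```

Grayscale is the same idea with stops [0, 0, 0] and [255, 255, 255], which is why it drops straight into displacement workflows.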
Can I use the depth map in Blender?
Yes. The downloaded PNG can be used as a displacement texture, bump map, or Z-depth layer in Blender, Cinema 4D, Unreal Engine, Unity, and most other 3D tools.
What images work best?
Images with clear near/far depth variation — landscapes, indoor scenes with furniture, portraits with blurred backgrounds, street photography — produce the most useful depth maps. Flat, uniform surfaces or extreme close-ups have less useful depth variation.
How accurate is the depth estimation?
Depth-Anything produces relative depth estimates, not absolute metric distances. The output accurately represents which areas are closer or farther but does not provide real-world measurements in meters or feet.
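
One concrete consequence of relative depth: the normalized output is invariant to the scene's overall scale, so two scenes whose true distances differ by a factor of ten can produce identical maps. A sketch (the `normalize` helper is illustrative, not this tool's code):

```javascript
// Min-max normalization preserves the *ordering* of depths but
// discards absolute scale -- metric distances cannot be recovered.
function normalize(depths) {
  const min = Math.min(...depths);
  const range = Math.max(...depths) - min || 1;
  return depths.map((d) => (d - min) / range);
}

// Raw depths differing by a factor of 10 normalize to the same map:
// the output says "this pixel is a third of the way along the range",
// never "this pixel is 4 meters away".
console.log(normalize([1, 2, 4]));    // prints [ 0, 0.3333333333333333, 1 ]
console.log(normalize([10, 20, 40])); // identical
```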
Can I create a bokeh effect using the depth map?
Yes. Use the depth map as a mask in photo editing or video compositing software to apply blur selectively — brighter (closer) regions stay sharp, darker (farther) regions receive blur, simulating a shallow depth-of-field effect.
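
The idea can be sketched on a 1-D row of pixels: blur radius grows as depth brightness falls. This is a toy box blur for illustration only; real compositors use higher-quality lens blurs:

```javascript
// Simulate depth-of-field on a 1-D row: darker (farther) pixels in the
// depth map get a larger box-blur radius; bright (near) pixels stay sharp.
function depthBlur(pixels, depthMap, maxRadius = 2) {
  return pixels.map((_, i) => {
    // depthMap values are 0..255 brightness
    const radius = Math.round((1 - depthMap[i] / 255) * maxRadius);
    let sum = 0, count = 0;
    for (let k = i - radius; k <= i + radius; k++) {
      if (k >= 0 && k < pixels.length) { sum += pixels[k]; count++; }
    }
    return sum / count; // box average over the window
  });
}

// Near pixels (depth 255) keep their values; far pixels (depth 0) are averaged.
const row = [100, 200, 100, 200];
const depth = [255, 255, 0, 0];
console.log(depthBlur(row, depth, 2));
```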
Does the tool work on mobile?
Yes, but the 50MB model download and AI processing are slower on mobile devices. A desktop or laptop browser is recommended for faster results.

Monocular depth estimation infers how far each pixel is from the camera using a single 2D photograph — no stereo camera or LiDAR sensor required. This tool runs Depth-Anything Small, a vision transformer trained on millions of diverse images, to produce a dense per-pixel depth map that reveals the 3D structure hidden in any flat photo.

Four colormap options give you the exact visualization you need. Viridis and plasma are perceptually uniform and colorblind-friendly — ideal for scientific and research imagery. Inferno maximizes contrast between near and far regions for editorial and creative use. Classic grayscale outputs the standard displacement map format expected by 3D engines and compositing software like Blender, Cinema 4D, DaVinci Resolve, and After Effects.

Creative and technical use cases include: generating displacement maps for 3D modeling, creating parallax and Ken Burns video effects, adding synthetic background bokeh to portraits, building depth-aware composites, guiding augmented-reality object placement, and analyzing spatial relationships in research imagery.
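
The parallax effect mentioned above can be sketched in one dimension: each pixel shifts in proportion to its depth, near pixels occlude far ones, and "holes" open up where background was revealed (a toy sketch, not this tool's implementation):

```javascript
// Depth-aware parallax on a 1-D row: near (bright) pixels shift the
// most when the virtual camera pans; 0 marks disocclusion holes that
// a real effect would fill by inpainting.
function parallaxShift(row, depthMap, maxShift) {
  const out = new Array(row.length).fill(0); // 0 = hole/background
  // Draw far pixels first so nearer pixels occlude them on overlap.
  const order = row.map((_, x) => x).sort((a, b) => depthMap[a] - depthMap[b]);
  for (const x of order) {
    const shift = Math.round((depthMap[x] / 255) * maxShift);
    const nx = x + shift;
    if (nx >= 0 && nx < out.length) out[nx] = row[x];
  }
  return out;
}

const row = [1, 2, 3, 4, 5];
const depth = [255, 255, 0, 0, 0]; // first two pixels are near
console.log(parallaxShift(row, depth, 2)); // → [ 0, 0, 1, 2, 5 ]
```

Animating `maxShift` over time produces the sliding-layers illusion used in Ken Burns style depth effects.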

Everything runs locally in your browser using WebAssembly. The Depth-Anything model (~50MB) downloads once on first use and is cached for future visits. Your images are never uploaded to any server.
