📤 Upload & Configure

Masking Ratio (%): 1 to 99

โ„น๏ธ How It Works

  1. Upload any image
  2. Adjust the masking ratio
  3. Click Reconstruct
  4. View the results & metrics

The model randomly masks patches of your image and reconstructs the full image from only the visible parts!
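The random patch masking described above can be sketched as follows. This is a minimal illustration, not the app's actual code; the patch size of 16 matches the standard ViT/MAE setup, and the function name `random_mask_patches` is hypothetical.

```python
import numpy as np

def random_mask_patches(image, patch_size=16, mask_ratio=0.75, seed=0):
    """Split an image into a grid of patches and randomly mask a fraction.

    Returns a boolean mask over patches (True = masked) and the indices
    of the patches that stay visible to the encoder.
    """
    h, w, c = image.shape
    num_patches = (h // patch_size) * (w // patch_size)
    num_visible = int(num_patches * (1 - mask_ratio))
    rng = np.random.default_rng(seed)
    visible_idx = rng.permutation(num_patches)[:num_visible]
    mask = np.ones(num_patches, dtype=bool)
    mask[visible_idx] = False
    return mask, visible_idx

# A 224x224 image with 16x16 patches gives a 14x14 = 196-patch grid;
# at 75% masking, only 49 patches remain visible.
mask, visible = random_mask_patches(np.zeros((224, 224, 3)), mask_ratio=0.75)
print(int(mask.sum()), len(visible))  # 147 masked, 49 visible
```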

๐Ÿ–ผ๏ธ Reconstruction Results

📊 Quality Metrics & Analysis

Upload an image and click Reconstruct to see detailed metrics.
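The app does not spell out which quality metrics it reports; PSNR (built on mean squared error) is a standard choice for comparing a reconstruction against the original, so here is a minimal sketch under that assumption:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    err = mse(original, reconstructed)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

Typical reconstructions of natural images land roughly in the 20–40 dB range; a perfect reconstruction has infinite PSNR.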


🎯 Try These Examples:

  • Easy (10-30% masking): Clear reconstruction, tests basic capability
  • Medium (40-60% masking): Balanced challenge, realistic scenarios
  • Hard (70-85% masking): Significant challenge, impressive results
  • Extreme (90-99% masking): Tests the model's absolute limits

🔬 About MAE

Masked Autoencoders (MAE) are self-supervised learning models that learn visual representations by reconstructing masked images. This implementation uses:

  • Asymmetric Encoder-Decoder: Efficient processing of visible patches
  • ViT Architecture: Transformer-based vision understanding
  • High Masking Ratio: Learns robust features from limited information
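The asymmetry between encoder and decoder is the key efficiency trick: the heavy encoder only processes the small visible subset of tokens, while a lightweight decoder fills in learned mask tokens for the rest. The shape-level sketch below uses illustrative dimensions (ViT-Base-like: 768-d encoder, 512-d decoder) and identity/random stand-ins for the real transformer blocks.

```python
import numpy as np

num_patches, enc_dim, dec_dim = 196, 768, 512
mask_ratio = 0.75
num_visible = int(num_patches * (1 - mask_ratio))  # 49 of 196 patches

patches = np.random.randn(num_patches, enc_dim)

# Encoder: sees ONLY the visible 25% of tokens (this is the efficiency win).
encoded = patches[:num_visible]  # stand-in for the transformer encoder

# Decoder: project encoder output down, then append one shared, learned
# mask token per missing patch so the full grid can be reconstructed.
proj = np.random.randn(enc_dim, dec_dim) / np.sqrt(enc_dim)
mask_token = np.zeros(dec_dim)  # learned in the real model
decoder_in = np.vstack([encoded @ proj,
                        np.tile(mask_token, (num_patches - num_visible, 1))])

print(encoded.shape, decoder_in.shape)  # (49, 768) then (196, 512)
```

The decoder is discarded after pre-training; only the encoder is kept for downstream tasks, which is why it can afford to be small.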

📄 Paper: Masked Autoencoders Are Scalable Vision Learners (He et al., 2021)