📤 Upload & Configure
Masking Ratio (%): 1–99
ℹ️ How It Works
- Upload any image
- Adjust the masking ratio
- Click Reconstruct
- View the results & metrics
The model randomly masks patches of your image and reconstructs the full image from only the visible parts!
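The random patch masking described above can be sketched as follows. This is a simplified illustration in NumPy (patch size, image size, and the zeroing-out visualization are assumptions; the actual model drops masked patches from the encoder input rather than zeroing them):

```python
import numpy as np

def random_mask_patches(image, patch_size=16, mask_ratio=0.75, seed=0):
    """Split an image into non-overlapping patches and zero out a random
    subset, mimicking what the masked input visualization shows."""
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    rows, cols = h // patch_size, w // patch_size
    n_patches = rows * cols
    n_masked = int(n_patches * mask_ratio)

    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)

    masked = image.copy()
    for idx in masked_idx:
        r, col = divmod(idx, cols)
        masked[r * patch_size:(r + 1) * patch_size,
               col * patch_size:(col + 1) * patch_size] = 0
    return masked, masked_idx

# Example: at 75% masking, 147 of a 224x224 image's 196 patches are hidden.
img = np.ones((224, 224, 3), dtype=np.float32)
masked_img, idx = random_mask_patches(img, mask_ratio=0.75)
print(len(idx))  # 147
```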
🖼️ Reconstruction Results
📊 Quality Metrics & Analysis
Upload an image and click Reconstruct to see detailed metrics.
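One common reconstruction-quality metric is PSNR; here is a minimal sketch of how such a number could be computed (the demo's exact metric set is not specified here, so treat this as an illustration, not the app's implementation):

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the reconstruction
    is closer to the original image."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((8, 8), 100.0)
b = np.full((8, 8), 110.0)   # uniform error of 10 -> MSE = 100
print(round(psnr(a, b), 2))  # 28.13
```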
🎯 Try These Examples:
- Easy (10-30% masking): Clear reconstruction, tests basic capability
- Medium (40-60% masking): Balanced challenge, realistic scenarios
- Hard (70-85% masking): Significant challenge, impressive results
- Extreme (90-99% masking): Model's absolute limits
🔬 About MAE
Masked Autoencoders (MAE) are self-supervised learning models that learn visual representations by reconstructing masked images. This implementation uses:
- Asymmetric Encoder-Decoder: Efficient processing of visible patches
- ViT Architecture: Transformer-based vision understanding
- High Masking Ratio: Learns robust features from limited information
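The asymmetric design above can be sketched in terms of data flow: the encoder runs on only the visible patches, and the decoder receives the encoded tokens plus learned mask tokens restored to patch order. A shapes-only NumPy sketch, with sizes assumed to mirror ViT-Base on 224×224 images (no real transformer layers, and the projection is a stand-in):

```python
import numpy as np

# Assumed sizes: 196 patches (14x14 grid of 16x16 patches), ViT-Base
# encoder width 768, a narrower decoder width 512, 75% masking.
n_patches, enc_dim, dec_dim, mask_ratio = 196, 768, 512, 0.75

rng = np.random.default_rng(0)
perm = rng.permutation(n_patches)
n_visible = int(n_patches * (1 - mask_ratio))
visible_idx, masked_idx = perm[:n_visible], perm[n_visible:]

tokens = rng.standard_normal((n_patches, enc_dim))

# Encoder processes ONLY the visible tokens -- the source of the
# efficiency win at high masking ratios.
enc_in = tokens[visible_idx]               # shape (49, 768)

# Decoder input: encoded visible tokens plus a shared mask token,
# scattered back to the original patch positions.
mask_token = np.zeros(dec_dim)
dec_in = np.empty((n_patches, dec_dim))
dec_in[visible_idx] = enc_in[:, :dec_dim]  # stand-in for a projection
dec_in[masked_idx] = mask_token

print(enc_in.shape, dec_in.shape)  # (49, 768) (196, 512)
```

The key point is that at 75% masking the encoder touches only a quarter of the tokens, while the lightweight decoder handles the full sequence.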
📄 Paper: Masked Autoencoders Are Scalable Vision Learners (He et al., 2021)