Optimize Vision Transformers Architecture via Efficient Attention Modules: A Study on the Monocular Depth Estimation Task
Official repository of the paper Optimize Vision Transformers Architecture via Efficient Attention Modules: A Study on the Monocular Depth Estimation Task. Two modifications of the METER attention module are proposed and implemented, yielding two more efficient variants: Meta-METER and Pyra-METER.