Alexander Long

Applied Scientist at Amazon.


I work in a team of 14 Deep Learning PhDs focusing on high-impact projects within the International Stores Org. We are responsible for science design and engineering, taking models to production, and are expected to publish research in tier-1 venues. My research focuses on retrieval augmentation and sample-efficient adaptation of large multi-modal foundation models. Prior to Amazon, I completed my PhD at UNSW in 2021 on external non-parametric memory in deep learning, and before that a Master's in EE at TUM, Germany, with a thesis on humanoid robotics.

The group is currently recruiting interns - please reach out if you are located in Australia and have recent first-author papers in NeurIPS/ICML/ICLR/CVPR/ACL, etc.

selected publications

  1. Retrieval Augmented Classification for Long-Tail Visual Recognition
    Alexander Long, Wei Yin, Thalaiyasingam Ajanthan, and 6 more authors
    In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022
  2. Modality-Aware Adaptation of Contrastive Language-Image Models
    Alexander Long, Thalaiyasingam Ajanthan, and Anton van den Hengel
    In ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023