Customer-obsessed science
![Amazon Science homepage.jpeg](https://assets.amazon.science/dims4/default/44d6323/2147483647/strip/true/crop/1383x1200+208+0/resize/400x347!/quality/90/?url=http%3A%2F%2Famazon-topics-brightspot.s3.amazonaws.com%2Fscience%2F48%2F0f%2F1db2f1004b82a99a0175ff391d53%2Famazon-science-homepage.jpeg)
![Amazon Science Fulfillment Center OAK4 in Tracy, CA](https://assets.amazon.science/dims4/default/15069c5/2147483647/strip/true/crop/1254x1091+191+0/resize/200x174!/quality/90/?url=http%3A%2F%2Famazon-topics-brightspot.s3.amazonaws.com%2Fscience%2F70%2Fbe%2F94c4a60445f999ef19050df7cad2%2Famazon-science-homepage-box.jpeg)
- June 13, 2024: The fight against hallucination in retrieval-augmented-generation models starts with a method for accurately assessing it.
- June 13, 2024: As in other areas of AI, generative models and foundation models, such as vision-language models, are a hot topic.
- June 07, 2024: Although work involving large language models predominates, classical and more-general techniques remain well represented.
- February 15, 2024: In addition to its practical implications, recent work on “meaning representations” could shed light on some old philosophical questions.
- April 16, 2024: First model to work across a wide range of products uses a second U-Net encoder to capture fine-grained product details.
- March 18, 2024: Tokenizing time series data and treating it like a language enables a model whose zero-shot performance matches or exceeds that of purpose-built models.
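The core idea in that teaser, treating a numeric series as a sequence of discrete tokens a language model can consume, can be sketched minimally by quantizing scaled values into uniform bins. The scaling rule, bin range, and bin count below are illustrative choices, not the published model's exact scheme.

```python
import numpy as np

def tokenize_series(values, num_bins=256, low=-3.0, high=3.0):
    """Quantize a real-valued series into discrete token IDs.

    Sketch only: scale the series by its mean absolute value, then map
    each scaled value to one of `num_bins` uniform bins over [low, high].
    """
    values = np.asarray(values, dtype=float)
    scale = np.mean(np.abs(values)) or 1.0  # avoid dividing by zero
    scaled = values / scale
    edges = np.linspace(low, high, num_bins - 1)
    tokens = np.digitize(scaled, edges)  # token IDs in [0, num_bins - 1]
    return tokens, scale

def detokenize(tokens, scale, num_bins=256, low=-3.0, high=3.0):
    """Map token IDs back to approximate real values via bin centers."""
    centers = np.linspace(low, high, num_bins)
    return centers[np.asarray(tokens)] * scale

series = [10.0, 12.0, 9.0, 15.0, 11.0]
toks, s = tokenize_series(series)
approx = detokenize(toks, s)  # close to the original series, up to bin width
```

Once the series is a token sequence, standard language-model machinery (embedding, autoregressive decoding) applies unchanged; forecasting becomes next-token prediction.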
- February 20, 2024: Generative AI supports the creation, at scale, of complex, realistic driving scenarios that can be directed to specific locations and environments.
- January 17, 2024: Representing facts using knowledge triplets rather than natural language enables finer-grained judgments.
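The triplet idea can be illustrated with a toy example (not the paper's pipeline): a claim is decomposed into (subject, relation, object) triplets so each fact can be judged independently, rather than assigning one true/false label to the whole sentence. The claim, triplets, and knowledge base below are all hand-written for illustration.

```python
claim = "Marie Curie won the Nobel Prize in Physics and was born in Warsaw."

# Hand-written decomposition; in practice an information-extraction
# model or LLM would produce these triplets from the claim.
triplets = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
]

# A toy knowledge base of known-true facts to check triplets against.
knowledge_base = {
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "won", "Nobel Prize in Chemistry"),
    ("Marie Curie", "born_in", "Warsaw"),
}

# Each triplet gets its own verdict: finer-grained than sentence-level.
judgments = {t: t in knowledge_base for t in triplets}
```

If one triplet were wrong, only that fact would be flagged, while the rest of the claim would still be credited as correct.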
- 2024: An important requirement for the reliable deployment of pre-trained large language models (LLMs) is well-calibrated quantification of the uncertainty in their outputs. While the likelihood of predicting the next token is a practical surrogate for the data uncertainty learned during training, model uncertainty, which arises from a lack of knowledge acquired during training, is challenging to estimate. Prior efforts…
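The surrogate that abstract mentions, the probability a model assigns to each generated token, can be sketched with a softmax over toy logits. The logits and token IDs here are invented values, not outputs of any real model; note this captures only data uncertainty, not the model uncertainty the abstract says is hard to estimate.

```python
import numpy as np

def token_likelihoods(logits, token_ids):
    """Per-token likelihoods from a language model's output logits.

    `logits` has shape (sequence_length, vocab_size); `token_ids` are
    the tokens actually generated at each step.
    """
    logits = np.asarray(logits, dtype=float)
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs[np.arange(len(token_ids)), token_ids]

# Toy example: 3 decoding steps over a 4-token vocabulary.
logits = [[2.0, 0.1, 0.1, 0.1],   # confident step
          [1.0, 1.0, 1.0, 1.0],   # maximally uncertain step
          [0.1, 3.0, 0.1, 0.1]]   # confident step
probs = token_likelihoods(logits, [0, 2, 1])
# A low probability at step 2 flags high data uncertainty at that step.
```

A model can be confidently wrong, however: well-calibrated *model* uncertainty requires more than reading off these probabilities, which is the gap the abstract points to.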
- 2024: In many real-world applications, it is hard to provide a reward signal at each step of a Reinforcement Learning (RL) process, and more natural to give feedback when an episode ends. To this end, we study the recently proposed model of RL with Aggregate Bandit Feedback (RL-ABF), where the agent only observes the sum of rewards at the end of an episode instead of each reward individually. Prior work studied…
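The aggregate-feedback setting can be made concrete with a small wrapper (an illustrative sketch, not the paper's algorithm): the agent sees a reward of 0 at every intermediate step, and the episode's reward *sum* only at the final step. The toy environment and its step/reset interface are invented for this example.

```python
class ToyEnv:
    """A 3-step chain that pays +1 per step regardless of the action."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        self.t += 1
        return self.t, 1.0, self.t >= 3  # (observation, reward, done)

class AggregateFeedbackWrapper:
    """Hide per-step rewards; reveal only their sum at episode end."""
    def __init__(self, env):
        self.env = env
        self._total = 0.0
    def reset(self):
        self._total = 0.0
        return self.env.reset()
    def step(self, action):
        obs, reward, done = self.env.step(action)
        self._total += reward
        visible = self._total if done else 0.0  # aggregate bandit feedback
        return obs, visible, done

env = AggregateFeedbackWrapper(ToyEnv())
env.reset()
seen, done = [], False
while not done:
    _, r, done = env.step(0)
    seen.append(r)
# seen records 0.0 at intermediate steps and the episode sum at the end.
```

The learning challenge in RL-ABF is exactly this credit-assignment problem: the final sum must somehow be attributed back to individual steps.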
- 2024: In this paper, we tackle the challenge of inadequate and costly training data that has hindered the development of conversational question answering (ConvQA) systems. Enterprises have large corpora of diverse internal documents. Instead of relying on a search engine, a more compelling way for people to comprehend these documents is through a dialogue system. In this paper, we propose a robust…
- 2024: The development of large language models (LLMs) has shown progress on reasoning, though studies have largely considered either English or simple reasoning tasks. To address this, we introduce a multilingual structured reasoning and explanation dataset, termed xSTREET, that covers four tasks across six languages. xSTREET exposes a gap in base LLM performance between English and non-English reasoning tasks…
- 2024: Retrieval is a widely adopted approach for improving language models by leveraging external information. As the field moves toward multimodal large language models, it is important to extend pure text-based methods to incorporate other modalities in retrieval as well, for applications across the wide spectrum of machine learning tasks and data types. In this work, we propose multi-modal retrieval with…
Resources
- We look for talent from around the world: applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
- We hire world-class academics to work on large-scale technical challenges while they continue to teach and conduct research at their universities. Learn more about each program and how to apply below.
- Supporting research at academic institutions and non-profit organizations in areas that align with our mission to advance customer-obsessed science.