A few weeks ago, the philanthropic investment platform Food System Innovations (FSI) announced that it had received a $2 million grant from the Bezos Earth Fund. FSI’s nonprofit group NECTAR has been building a large dataset of consumers’ sensory responses to alternative proteins, and the grant will fund NECTAR’s continued work, in partnership with Stanford University, on an AI model “that connects molecular structure, flavor, texture, and consumer preference.” The goal, according to NECTAR, is an open-source tool that CPG companies and other food industry players can use to develop more flavorful, and hopefully better-selling, sustainable proteins.
I’d been following NECTAR for some time and have been closely tracking AI’s impact on food systems, so the announcement seemed like a good time to connect with the team. I’d talked briefly about the project with Adam Yee, the chief food scientist who worked on it, while I was in Japan, and this week I caught up with NECTAR managing director Caroline Cotto to get the full download on the project and where it’s all going.
Below is my interview with Caroline.
What are you building with this new Bezos Earth Fund grant?
“One of the things Nectar is doing is we just won a $2 million grant from the Bezos Earth Fund to take our sensory data and build a foundation model that will predict sensory outcomes. So we can bypass the need for these very expensive consumer panels, and also predict market success from formulation. It’s intended to be sort of a food scientist’s best friend for new product ideation.”
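To make that concrete, here’s a rough sketch of the input-output contract such a tool implies: an ingredient list goes in, predicted sensory scores and a market-success estimate come out. This is purely illustrative; the names, attributes, and scales are my assumptions, not NECTAR’s actual API.

```python
# Hypothetical sketch of the interface such a model might expose. All names,
# attributes, and scales are illustrative assumptions, not NECTAR's API.
from dataclasses import dataclass

@dataclass
class SensoryPrediction:
    flavor: float          # e.g., predicted liking on a 1-9 hedonic scale
    texture: float
    overall_liking: float
    market_success: float  # e.g., estimated odds of above-median category sales

def predict_sensory(ingredient_list: list[str]) -> SensoryPrediction:
    """Stand-in for the trained foundation model: formulation in, scores out."""
    # A real implementation would encode the formulation and run inference;
    # this placeholder just returns neutral mid-scale scores.
    return SensoryPrediction(flavor=5.0, texture=5.0,
                             overall_liking=5.0, market_success=0.5)

# Intended use: screen a candidate formulation before any consumer panel.
print(predict_sensory(["pea protein isolate", "coconut oil", "methylcellulose"]))
```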
For people who don’t know Nectar, what’s the core mission, and how did this AI project start?
“Basically, Nectar is trying to amass the largest public data set on how sustainable protein products taste to omnivores. We’re building that data set, and we are working heavily with academics to operationalize it.
Over a year and a half ago, we started talking to the computer science folks at Stanford about what we could do with this novel data set we’re creating. It happened to be around that time that the phase one Bezos Earth Fund grant was opening up for their AI grand challenge. I connected Adam with the Stanford team, and their initial work showed that LLMs could provide some of this support for food scientists. They published a paper together that came out in January for ICML, the largest machine learning conference, and we ended up winning that phase one grant, which then allowed us to apply for the phase two grant that we just found out about in October.”
From a technical standpoint, what kind of AI are you actually building?
“I am not an AI scientist myself, so we are heavily partnered with Stanford and their computer science team, but it is LLM-based. We’re basically fine-tuning an LLM to do this sensory prediction work, and it’s a multi-modal approach. There’s a similar project out of Google called Osmo for smell and olfaction, and we’re working with some of the folks who worked on that to model taste and sensory more broadly, and then connect that to sales outcomes.”
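Caroline didn’t get into the architecture, and NECTAR hasn’t published implementation details, but for the technically curious, here is a minimal sketch of what fine-tuning an LLM for sensory prediction can look like with the Hugging Face stack. The base model, prompt format, and training examples are all placeholder assumptions.

```python
# Minimal fine-tuning sketch: teach a small causal LM to emit sensory scores
# from a formulation description. Model choice, prompt format, and data are
# placeholder assumptions, not NECTAR's actual setup.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in; any causal LM slots in the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each training example pairs a formulation with panel-derived sensory scores.
examples = [
    {"text": "Ingredients: pea protein, coconut oil, methylcellulose.\n"
             "Sensory: flavor=5.8, texture=6.1, overall=5.9"},
    {"text": "Ingredients: soy protein, sunflower oil, beet juice.\n"
             "Sensory: flavor=6.4, texture=5.7, overall=6.0"},
]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_list(examples).map(tokenize, batched=True,
                                          remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="sensory-lm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```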
How does the Bezos Earth Fund AI Grand Challenge work in terms of phases and funding?
“It’s the Bezos Earth Fund AI Grand Challenge for Climate and Nature, with $30 million going to these projects. There were 15 phase two winners, each of which received $2 million and has two years to deliver.
Phase one was a $50,000 grant to work on your idea and prepare a submission for phase two. We spent about six months connecting the Nectar data set with sales data to see which sensory attributes are most predictive of sales success, and connecting the Nectar sensory data set to molecular-level ingredient data sets. Ideally, the chain of prediction would be: can you predict sensory outcomes just from an ingredient list, and if so, which aspects of sensory are predictive of sales success? We’re working on the different pieces of that predictive chain.”
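That chain maps naturally onto a two-stage model: formulation features to sensory scores, then sensory scores to a sales outcome. Here’s a toy sketch in scikit-learn on synthetic data; the features, targets, and model choices are illustrative, not NECTAR’s.

```python
# Toy two-stage "predictive chain": ingredients -> sensory -> sales.
# Synthetic data; features and targets are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 500
ingredients = rng.random((n, 8))  # e.g., ingredient proportions per product
sensory = ingredients @ rng.random((8, 3)) + rng.normal(0, 0.1, (n, 3))
sales_hit = (sensory.mean(axis=1) > np.median(sensory.mean(axis=1))).astype(int)

stage1 = LinearRegression().fit(ingredients, sensory)   # formulation -> sensory
stage2 = LogisticRegression().fit(sensory, sales_hit)   # sensory -> sales success

def predict_chain(formulation: np.ndarray) -> float:
    """Run a candidate formulation through both stages of the chain."""
    predicted_sensory = stage1.predict(formulation.reshape(1, -1))
    return stage2.predict_proba(predicted_sensory)[0, 1]

print(predict_chain(rng.random(8)))  # estimated probability of sales success
```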
What does your sensory testing process look like in practice?
“It’s all in-person blind taste testing. In our most recent study, we tested 122 plant-based meat alternatives across 14 categories, and each product was tried by a minimum of 100 consumers. People come to a restaurant that we’ve closed down for the day, because we want to give them a more authentic experience. They try about six products in a sitting, one at a time, and everything is blind, so they don’t know whether they’re eating a plant-based or an animal-based product, and they fill out a survey as they try each one.”
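For a sense of the data each sitting generates, here’s a hypothetical shape for a single tasting response. NECTAR hasn’t published its survey schema, so the field names and scales below are my assumptions.

```python
# Hypothetical shape of a single blind-tasting response record.
# Field names and scales are assumptions; the actual survey schema isn't public.
from dataclasses import dataclass

@dataclass
class TastingResponse:
    panelist_id: str
    product_code: str     # blind code; panelists never see the brand
    category: str         # e.g., "burger", "nugget"
    serving_order: int    # 1-6 within the sitting, to control for order effects
    flavor: int           # hedonic ratings, e.g., on a 1-9 scale
    texture: int
    appearance: int
    overall_liking: int
    would_buy: bool

# 122 products x a minimum of 100 consumers each implies on the order of
# 12,000+ such records from the most recent study alone.
```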
How big is the data set now, and what’s coming next?
“We do an annual survey called the Taste of the Industry. For 2024, we tested about 45 plant-based meat products; for 2025, we tested 122. Outside of that, we have our emerging sector research, which consists of smaller reports. We’ve done two of those, both on a category we’re calling balanced protein: hybrid products that combine meat with other ingredients. We’ve tested just under 50 products total in that category as well.

We’re testing blends of things like meat plus plant-based meat, meat plus mushrooms, meat plus mycoprotein, and meat plus savory vegetables in general. For 2026, our Taste of the Industry report is on dairy alternatives. We’re testing 100 dairy alternatives across 10 categories, and that will come out in March.”
When you overlap taste scores with sales data, what have you seen so far?
“The Nectar data set is mostly focused on sensory; that’s the core of what we do. But we are also interested in answering the question, ‘do better-tasting products sell more?’ In our last report, we conducted an initial analysis overlapping sensory data with sales data and found that better-tasting categories capture greater market share than worse-tasting ones. In certain categories, that seems to be agnostic of price: even if a product is more expensive, if it tastes better, it captures greater market share.

We’re currently working with some data providers to get more granular on this sales connection, because that analysis was based on publicly available sales data. In this AI project, we are trying to connect sensory performance with sales more robustly, to see which aspects of sensory are predictive of sales success. It’s hard because there are a ton of confounding variables; we have to figure out how to control for marketing spend, store placement, placement on shelf, that sort of thing. But we have access to the Nielsen consumer panel, a huge data set of grocery store transactions over many years from households that have agreed to have all of their transactions tracked. We can see what consumers are purchasing over time, and we’re trying to connect the sensory data set to that.”
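Isolating taste’s effect from marketing spend and shelf placement is a textbook controlled-regression problem. Here’s a toy illustration with statsmodels on synthetic data; the variables mirror the confounders Caroline names, but every number is invented.

```python
# Toy illustration of estimating the taste -> sales relationship while
# controlling for the confounders mentioned above. All data is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "taste_score": rng.normal(6, 1, n),        # panel-derived liking score
    "marketing_spend": rng.exponential(1.0, n),
    "eye_level_shelf": rng.integers(0, 2, n),  # placement on shelf (0/1)
    "price": rng.normal(8, 2, n),
})
# Simulated sales: taste matters, but so do the confounders.
df["log_sales"] = (0.4 * df.taste_score + 0.6 * df.marketing_spend
                   + 0.3 * df.eye_level_shelf - 0.1 * df.price
                   + rng.normal(0, 0.5, n))

# Without controls, taste's coefficient can absorb marketing and placement
# effects; including them isolates taste's own association with sales.
model = smf.ols("log_sales ~ taste_score + marketing_spend"
                " + eye_level_shelf + price", data=df).fit()
print(model.params["taste_score"])  # recovers ~0.4 once confounders are held fixed
```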
You also mentioned bringing ingredient lists and molecular data into the model. How does that fit in?
“There are a lot of black boxes in food product development; flavors in particular are a black box, and we don’t have a lot of visibility into companies’ actual formulations. We’re trying to determine whether we can take publicly available information from the ingredient list, identify the molecular-level components of those ingredients, and then see whether any correlations can be drawn from them.

We combine all of these factors, plus images of the products, and try to see if we can predict sensory outcomes from them.”
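Here’s a toy version of that ingredient-to-molecule step: parse the label’s ingredient list and look up rough composition features for each ingredient. The lookup values are invented; a real pipeline would draw on ingredient-composition databases rather than a hand-written table.

```python
# Toy sketch: turn a product's ingredient list into molecular-level features.
# The lookup values are invented for illustration; a real pipeline would pull
# from ingredient-composition databases rather than a hand-written dict.
MOLECULAR_PROFILES = {
    # ingredient -> rough composition fractions (illustrative only)
    "pea protein isolate": {"protein": 0.85, "fat": 0.05, "fiber": 0.02},
    "coconut oil":         {"protein": 0.00, "fat": 0.99, "fiber": 0.00},
    "methylcellulose":     {"protein": 0.00, "fat": 0.00, "fiber": 0.95},
}

def featurize(ingredient_list: list[str]) -> dict[str, float]:
    """Average composition features across the listed ingredients.

    A real model would weight by estimated proportions (ingredient lists
    are ordered by weight) rather than averaging uniformly.
    """
    features = {"protein": 0.0, "fat": 0.0, "fiber": 0.0}
    known = [MOLECULAR_PROFILES[i] for i in ingredient_list
             if i in MOLECULAR_PROFILES]
    for profile in known:
        for key, value in profile.items():
            features[key] += value / max(len(known), 1)
    return features

print(featurize(["pea protein isolate", "coconut oil", "methylcellulose"]))
```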
What do you actually hope to deliver at the end of the two-year grant?
“The idea is to deliver an open-source tool for the industry to use. The goal is that you could put in all of your constraints around sustainability, cost, nutrition, and demographic need, and it would help you get to an endpoint without doing a bunch of bench-top trials followed by expensive sensory testing.”
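In practice, that workflow amounts to searching formulation space under constraints. A minimal sketch, assuming a sensory-scoring model like the ones above exists: generate candidates, filter by a cost ceiling, rank by predicted liking. Everything here, including the scoring function, is a stand-in.

```python
# Minimal sketch of constraint-filtered product ideation: generate candidate
# formulations, keep those within a cost ceiling, rank by predicted liking.
# The scoring function is a stand-in for a trained sensory model.
import itertools
import random

random.seed(0)
BASES = ["pea protein", "soy protein", "mycoprotein"]
FATS = ["coconut oil", "sunflower oil", "cocoa butter"]
COST = {"pea protein": 2.1, "soy protein": 1.6, "mycoprotein": 3.4,
        "coconut oil": 1.2, "sunflower oil": 0.9, "cocoa butter": 4.0}

def predicted_liking(formulation) -> float:
    """Stand-in for the trained sensory model (placeholder score, 1-9 scale)."""
    return random.uniform(4, 8)

max_cost = 4.0  # example constraint: cost-per-unit ceiling
candidates = [
    (base, fat) for base, fat in itertools.product(BASES, FATS)
    if COST[base] + COST[fat] <= max_cost
]
ranked = sorted(candidates, key=predicted_liking, reverse=True)
print(ranked[:3])  # top candidates worth taking to the bench
```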
How do you think about open source, data privacy, and companies actually using this tool?
“Data privacy is a big thing in this space. We don’t have any interest in companies sharing their proprietary formulations with us. The goal is that they would be able to download this tool to their own servers, put in their private information, and use it to make better products. If we’re rapidly increasing the speed at which these products come to market, and those products are actually successful, that would be a success for us.”
There are other efforts in this space, from NotCo to IFT. Where does Nectar fit?
“I think everybody is trying to do similar things, but with slightly different inputs and approaches. We are open to collaborating with and learning from people. Our end goal is mission-driven rather than making a ton of money, so it depends on whether those partners are aligned with that goal.

IFT has trained its model on all of the IFT papers published over the organization’s many years. We’re training our model on our proprietary sensory data set, so there are nuanced differences between the approaches. They’re really focused on developing formulations, but there’s a limit to what you can do with that tool. It’ll tell you, ‘here’s how to make a plant-based bacon: add bacon flavoring,’ but there are 10 huge suppliers that provide bacon flavoring, and it doesn’t give you much granularity on what concentration to use or which supplier to buy from.”
What’s the bigger climate mission you’re trying to advance with this work?
“Nectar’s specific directive is: how do we make these products flavorful and delicious? We know that we need to reduce meat consumption in order to stay within two degrees of climate warming, and we’re not going to get there just by telling people, ‘eat less steak.’ We have to use taste as the lever and make these products really delicious, so that people are incentivized to buy them and to reduce their consumption of factory-farmed meat.”
Answers have been lightly edited for grammar and clarity.