Making the Impossible Possible: AI-Driven Experimentation in Solar Cell Research

Published:
March 3, 2026
|
Last Updated:
March 30, 2026

When the Buonassisi Lab at MIT set out to map how ambient conditions affect perovskite solar cell performance, they faced a problem familiar to many R&D teams: a high-dimensional environmental parameter space. And what happens when you have a hard four-week deadline and no dedicated machine learning engineer to help optimize your campaign?

The results of this study were recently published in ACS Energy Letters (January 2026), detailing how the team disentangled simultaneous environmental variables using interpretable machine learning.

ACS Energy Letters publication banner: "Disentangling Environmental Effects on Perovskite Solar Cell Performance via Interpretable Machine Learning."
Publication: ACS Energy Lett. 2026, 11, 2, 1609–1617, https://doi.org/10.1021/acsenergylett.5c02410

Using Atinary’s SDLabs AI platform, they ran closed-loop Bayesian optimization and completed their campaign in 33 experiments without writing a line of code, finishing a project that would otherwise have been impossible.

We sat down with two members of the team, who, along with Dr. Kangyu Ji, served as the study’s co-first authors, to hear how the project unfolded, what they’d tell other researchers considering AI-driven experimentation, and what it means for the future of materials research.

Meet the Researchers

Dr. Tianran Liu
Co-author and Postdoc Researcher
Buonassisi Lab

Dr. Nicky Evans
Co-author and Postdoc Researcher
Buonassisi Lab

Setting the Scene

Can you give us a sense of what the project was and what each of you contributed?

Tianran: I played a central role in designing and executing the experimental workflow for this project. My contributions included defining the experimental parameter space, building and operating the controlled-environment fabrication setup, and generating high-quality experimental data. I used Atinary’s SDLabs to integrate experimental results into the AI platform and to translate AI-generated recommendations into practical, physically meaningful experiments. I also focused on interpreting the optimization outcomes from a materials science perspective to understand the underlying mechanisms and ensure the results were scientifically robust.

Nicky: I supported Tianran (alongside Ronaldo, an undergraduate research assistant) in his use of Atinary’s SDLabs to run our experimental campaign [see publication]. I was primarily assigned as an experienced co-pilot to help Tianran with the experimental work, ensuring everything was conducted consistently and that we were making the best decisions at each round. I also assisted in making some tweaks to the experimental setup, and of course with analyzing data, producing figures, and writing up the manuscript.

The Challenge

What were the main obstacles when you started? Why were these hard to solve in the conventional way?

Tianran: At the start of the project, the main scientific challenge was that we had defined a five-dimensional space of environmental conditions that could all influence solar cell performance. Exploring such a high-dimensional space is extremely difficult experimentally, because the number of possible combinations grows rapidly as more parameters are included. Traditional approaches such as grid search or one-factor-at-a-time experiments would have required an impractically large number of samples and an impractical amount of time, making it essentially impossible to fully explore the 5D space using conventional methods.

Nicky: As Tianran mentioned, we wanted to explore a large parameter space without conducting a grid search, which would have been an incredibly long experimental process. Being able to explore and optimize without trying every condition was essential to us. Bayesian Optimization (BO) is a neat way of achieving this, but writing an in-house BO code under time pressure is difficult. Atinary’s SDLabs AI platform gave us the flexibility to run a BO project tailored closely enough to our needs to enable a successful campaign.


“Writing up an in-house Bayesian Optimization code under time pressure is difficult. It’s not something you can just pick up in an hour.”

Dr. Nicky Evans, Co-author and Postdoc Researcher, Buonassisi Lab at MIT

Collaboration with Atinary

How did SDLabs help you overcome these challenges?

Tianran: The platform, which uses Bayesian Optimization, allowed us to efficiently navigate this high-dimensional space by learning from each experiment and prioritizing the most informative conditions to test next. Instead of exhaustively sampling the entire 5D space, the model used existing data to make probabilistic predictions and balance exploration and exploitation. This enabled us to explore the space much faster, capture the effect of each variable, and identify non-linear interactions between variables, while dramatically reducing the number of experiments needed compared with traditional methods.
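The closed loop Tianran describes can be sketched in a few lines. The snippet below is a generic, illustrative Bayesian optimization loop, not Atinary’s implementation: a Gaussian process surrogate is refit after each round, and an expected-improvement score selects the next batch of five conditions in a normalized 5-D parameter space. The `run_experiment` function is a stand-in for the real fabrication-and-measurement step.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def run_experiment(x):
    # Stand-in for a real fabrication-and-measurement step: a smooth
    # synthetic "efficiency" landscape over 5 normalized environmental
    # parameters (purely illustrative).
    return float(-np.sum((x - 0.3) ** 2))

def expected_improvement(gp, X_cand, y_best, xi=0.01):
    # Classic EI acquisition: how much each candidate is expected to
    # improve on the best result observed so far.
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

# Initial random batch of 5 conditions in the 5-D unit cube
X = rng.uniform(size=(5, 5))
y = np.array([run_experiment(x) for x in X])

for iteration in range(5):  # five closed-loop rounds
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    # Score a large pool of random candidates, run the top 5 as a batch
    X_cand = rng.uniform(size=(2000, 5))
    ei = expected_improvement(gp, X_cand, y.max())
    batch = X_cand[np.argsort(ei)[-5:]]
    y_new = np.array([run_experiment(x) for x in batch])
    X, y = np.vstack([X, batch]), np.concatenate([y, y_new])

print(f"best result after {len(y)} experiments: {y.max():.4f}")
```

In practice the batch selection is more sophisticated (a naive top-5 by EI tends to pick near-duplicate points), but the structure of the loop — fit a surrogate, score candidates, run a small batch, feed the results back — is the same.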

Nicky: During the experimental campaign we were somewhat racing against the clock to collect the data, while placing real emphasis on conducting every experiment precisely and consistently, an important aspect of our study. This meant most of our focus was on experimental work. Since neither Tianran nor I was particularly familiar with BO coding, help from Atinary proved extremely useful: it lightened the coding side of the project considerably and gave us time to focus on data collection and analysis.

What was it like to work with the Atinary team?

Tianran: Working with the Atinary team was highly collaborative and supportive throughout the project. The team was very helpful, and the training provided for the platform was clear, well structured, and easy to follow, which made it straightforward to get started and use the tools effectively. When we encountered issues, the support was excellent. The team was extremely responsive and helped us quickly identify and resolve the problems, which made a big difference in keeping the project moving smoothly.

“The team was extremely responsive and helped us quickly identify and resolve problems, which made a big difference in keeping the project moving smoothly.”

Dr. Tianran Liu, Co-author and Postdoc Researcher, Buonassisi Lab at MIT

Nicky: Working with the Atinary team was a pleasure. The team was very helpful and highly responsive given our time constraints. We really appreciated the video calls explaining how best to use the platform and set up our campaign. We encountered a few sticking points at times, such as when moving to the next round of the BO campaign and trying to correctly down-select a smaller number of experimental conditions from the previous round’s larger set, and the team were very helpful in working through those with us.

Results & Discovery

Beyond saving time, what did this approach deliver compared to what a conventional campaign would have achieved?

Tianran: A traditional grid search would have required thousands of conditions to achieve meaningful coverage. Using the platform, we were able to efficiently explore the space in only five iterations, with five experimental conditions per iteration. This dramatically reduced the experimental workload and time required, while still enabling us to identify high-performing regions of the parameter space that would have been impractical to discover using conventional approaches.

Nicky: The key improvements came from all of the above. By saving time on exploring conditions, we could produce a greater yield of samples for each condition. We were also more efficient with our experimentation: Atinary helped identify which conditions were most useful to explore, while offering statistical analysis in parallel to guide us along the way. Doing our experiments this way, rather than conducting a grid search, also saved costs by reducing overall material usage.

“A traditional grid search would have required thousands of conditions. Using the platform, we explored the space in only five iterations, with five experimental conditions per iteration.”

Dr. Tianran Liu, Co-author and Postdoc Researcher, Buonassisi Lab at MIT

Why It Matters

How would you explain the importance of this work to someone outside the field?

Tianran: For someone without a scientific background, the importance of this work is that it shows how AI can help researchers achieve scientific results much faster and with far less trial and error. Instead of testing thousands of random combinations, AI guides experiments toward the most promising options. This saves time, reduces waste, and lowers development costs, making scientific research and technology development more efficient, more reliable, and more affordable.

Nicky: I think our work shows the usefulness of AI-based tools in making long-winded processes more efficient in general. Having a guide to make decisions along the way, even just to sanity-check the process, is extremely useful, particularly if you don’t have to write the code yourself. Our work also aims to show how simple things one might not consider, such as the condition of the air around you, can significantly affect a solar power device, making both the public and scientists more aware of the processes that go into improving these technologies.

A key impact from Atinary comes back to efficiency: the platform makes analysis techniques accessible that perhaps were not as accessible before. To be specific, Bayesian Optimization is not something you can just pick up in an hour, press go, and solve all your problems with. By writing your own BO code you may be able to make finer adjustments to fit your project’s specific needs, but software that packages all of this for you saves the time spent learning and tweaking code, and it opens a new door if you are not already familiar with the methods.

Lessons for Other R&D Teams


“AI is most effective when it is tightly integrated with domain expertise and real experimental feedback, rather than being used as a standalone tool.”

Dr. Tianran Liu, Co-author and Postdoc Researcher, Buonassisi Lab at MIT

What would you tell other R&D teams who are considering AI-driven experimentation for the first time?

Tianran: One key lesson from this collaboration is that AI is most effective when it is tightly integrated with domain expertise and real experimental feedback, rather than being used as a standalone tool. Clear problem definition, well-designed experimental inputs, and close communication between researchers and AI developers are critical for success. This approach can benefit other R&D teams by reducing trial and error, improving efficiency, and enabling them to tackle complex, high-dimensional problems that are difficult to address with traditional methods. Looking ahead, AI-driven R&D has strong potential to be applied broadly across materials science and engineering, where it can accelerate discovery, optimize processes, and help uncover general design principles.

Nicky: I believe all collaborations involving AI-driven R&D are useful, from identifying areas where software can be improved to feeding more knowledge into the algorithms so the AI can learn and improve further. The process also helps us learn the pipeline needed to integrate AI into experimental campaigns more efficiently. I think AI-driven R&D will be key to accelerated learning in the future: using AI both to guide our experimentation efficiently and to assist in analysis, perhaps spotting conclusions and trends we would not identify as easily as a computer can.

What Tianran and Nicky’s experience demonstrates is that the barrier to AI-driven experimentation doesn’t have to be technical expertise or development time. With the right platform and the right support, research teams can focus on what they do best — designing rigorous experiments, interpreting results, and pushing the science forward — while AI handles the complexity of navigating vast parameter spaces efficiently.

From a 5D problem that would have required an impractically large number of experiments, to meaningful discovery in just 33 — that’s not an incremental improvement. That’s a fundamentally different way of doing research.


If your R&D team is facing a complex, high-dimensional problem and can’t afford to run thousands of experiments to find the answer, we’d love to show you what’s possible. 

Resources

Link to Use Case: Breaking the Coding Barrier for Environmental Process Optimization

Link to Publication: Liu et al. Disentangling Environmental Effects on Perovskite Solar Cell Performance via Interpretable Machine Learning. ACS Energy Lett. 2026, 11, 2, 1609–1617, https://doi.org/10.1021/acsenergylett.5c02410