Ty Robinson, SAG 26: Exoplanet Reflectance Spectroscopy for the Habitable Worlds Observatory

Hey, good afternoon, everyone. I'm so pleased to be here, if only as a face for all of the hard-working individuals participating in our SAG 26, which is focused on reflection spectroscopy for the Habitable Worlds Observatory. Under happier circumstances my colleague Renu would be here with me. Unfortunately, he was evacuated earlier this week because of the California wildfires, but he is going to be able to join us tomorrow, and I look forward to seeing Renu then.

So, a quick summary in case the siren song of email calls you away. Very briefly: as a SAG, we are motivated to prepare and validate the kinds of tools that we're going to need to successfully design the Habitable Worlds Observatory. Our goals are to execute a large number of community-driven intercomparisons of spectrum-generating tools, that is, radiative transfer models. We're also intercomparing some of the things that feed into radiative transfer models, and some of the things you do with radiative transfer models. We kicked off in spring of 2024, and we're going to be active through about fall of 2025. We have 23 participants, all of whom were listed on the previous slide, and what I think is especially great is that we have international participation; we'll talk about how we're enabling that. We have representatives from six-plus large research groups that develop the kinds of radiative transfer tools we're talking about here. That six-plus is really important, because it's hard to do an intercomparison when you've got only a small number of things to compare, so having that large number of engaged groups has been very important. Today we're about halfway through the intercomparisons we've been doing; I'll detail these in just a second. And we have a report that is in prep. It's a living report right now.
We're adding material to it as we go along. Let's see, here we go.

So, a bit more on the motivation. The Habitable Worlds Observatory responds to the Decadal Survey's call to pursue a, quote-unquote, "robust sample of 25 atmospheric spectra" of potentially habitable planets, spectra being a keyword there. Now, it looks like some of the slide animations are going to give up on us here. Our understanding of how an instrument for the Habitable Worlds Observatory, whose job it is to take spectra, maps to our ability to understand an exoplanet's environment is, right now for reflected-light spectroscopy, strongly model-based. I'm going to show you an example over here on the right of how this is being done right now in the literature. This is a recent result, which is why I picked it, and a great result, from Natasha Latouf and collaborators at Goddard. Down here on the bottom you have a spectrum: brightness, or reflectivity, as a function of wavelength. That's a modeled spectrum of an Earth-like world. What Natasha and collaborators did is scan a 20% bandpass across a region of that spectrum, motivated by the fact that coronagraphs are expected to operate over roughly a 20% bandpass range of wavelengths, and then ask: if we had some signal-to-noise ratio on that spectrum, with a bandpass centered at this wavelength, what would it take to get a strong detection of water, and what would it take to get a strong detection of methane? Water is in the pink, methane is in the purply colors there. This helps you spot a wavelength where you could center a bandpass and get a detection of both water and methane, which is great to know for the design of the Habitable Worlds Observatory. But again, this is model-based, and that sets up a risk: any errors or biases in your simulation tools could then propagate through the design process for HWO. As a specific example, let's say model X
says that you need a signal-to-noise ratio of 10 at 1.1 microns to confidently detect water, whereas model Y says you need an SNR of 20 at that same wavelength. Who do you believe? Well, you could design HWO to get an SNR of 10, and if that model turns out to be wrong, then you risk not getting your water detections. Or you could design HWO to deliver an SNR of 20, but if that model turns out to be wrong, then you've over-designed the observatory, spent extra money, and potentially spent extra time. Over here on the right is a very important figure, wherein I asked Google AI to give me an image of two scientists debating spacecraft design, and Google said, "I don't do people," which is fair. So me, being a very clever person, said, "How about two dogs debating spacecraft design?" And that's what it gave me. Two dogs howling at a spacecraft is really what's happening there, I think.

OK, so here's our SAG. As I said, there are about 23 participants right now. What we're doing is developing a common understanding of our tools and how those tools operate. We are comparing and validating our tools. We're understanding the kinds of complexity represented by our models, and the kinds of complexity that would be required to accurately represent the systems we're aiming to simulate. We're figuring out best practices for running these kinds of spectral simulations. And importantly, we're going to identify any areas where we can't explain our model discrepancies, because those are areas that need more focus in advance of the Habitable Worlds Observatory.

As for how we've organized ourselves: after advertising the SAG in the spring, we assembled; between 20 and 30 people responded to the initial advertisements, and we kicked off in April of 2024. The way we're running things is that we have biweekly telecons on Friday mornings. These are recorded, and at the end of every telecon we circulate the recording.
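Going back to the bandpass-scanning idea from a couple of slides ago, the basic mechanics are simple to sketch. This is a hypothetical illustration, not the actual pipeline from Latouf and collaborators: the function names are mine, and the in-band mean reflectivity is just a stand-in for a real detectability metric such as a fitted molecular detection significance.

```python
import numpy as np

def bandpass_edges(center_um, frac=0.20):
    """Edges of a fractional bandpass centered at center_um.
    A 20% bandpass spans +/-10% of the central wavelength."""
    half = 0.5 * frac * center_um
    return center_um - half, center_um + half

def scan_bandpass(wl_um, spectrum, centers_um, frac=0.20):
    """Slide a fractional bandpass across the spectrum and, for each
    candidate center, report the mean in-band value (a placeholder
    for whatever detectability statistic one actually computes)."""
    out = []
    for c in centers_um:
        lo, hi = bandpass_edges(c, frac)
        mask = (wl_um >= lo) & (wl_um <= hi)
        out.append(spectrum[mask].mean())
    return np.array(out)
```

The point of the scan is that each coronagraph pointing only buys you one such band, so you want the center wavelength where a single band captures features of both molecules.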
We circulate the notes, and we circulate a suite of homework assignments for the people doing the intercomparisons: what are the next steps, what do they need to upload. The recordings and the succinct statements of the homework assignments have been essential for enabling our international partners to collaborate, because even Friday morning sometimes isn't the best time for them to join us. We're maintaining a shared Google Drive where people can upload their intercomparison results, and we have a living report, as I mentioned, that we're adding to right now.

A little bit more on our approach. We started simple, and we've been adding complexity. While the SAG is focused on spectral models, one of the key inputs to spectral models is how strongly your gases absorb at different wavelengths, which is the opacities. So we started our intercomparisons with the opacities. Those are largely wrapped up, and I'll show you some of those results. We're now moving on to spectral models, and then eventually, especially in collaboration with the RISOTTO intercomparison effort, we're going to do something with retrievals.

The way we're doing the intercomparisons is that each intercomparison has a well-defined setup. Over here on the right is an example of just one of the intercomparisons that we did, for methane opacity. Each of our intercomparisons comes with instructions: what inputs do you need to run the simulation, what outputs do we need from your tool, and how should you name those files so that we can ingest them into our intercomparison framework. Once you have executed the experiment, you upload your results to our Google Drive in advance of our telecons, and we generate some digest plots. Armen Tokadjian has been especially helpful there.
So big thanks to Armen, who is also going to be joining tomorrow and who tends to make these plots for us. Then, at the telecons, we zoom in on discrepancies and try to figure out why one model might be making a different prediction from another. Somebody learns something about their tool: maybe there's a bug, maybe they didn't quite run the experiment right, so sometimes it takes iteration to actually bring ourselves into agreement. Over here on the right is just one set of uploads for the methane line-absorption intercomparison, where there are 20 or 30 different opacity and transmission spectra that have been put into the database.

An overview of progress to date: we've completed our intercomparison of line absorption for all of these species. Line absorption means you've got a line list, and then you have to apply a model to take those lines and turn them into opacity, so that you've actually got spectral features. We've intercompared our Rayleigh scattering cross sections and our collision-induced absorption opacities for a suite of sources. For all of these cases, and it didn't start out this way, we're now able to explain all of the differences that we see, so we're in a very good spot. We've boiled it all down to different model assumptions, so we understand the discrepancies, and I'll show those to you soon. And we have ongoing comparisons of our spectral tools right now, where we're doing simple cases: single-component atmospheres where that component may both absorb and scatter.

As an example of a maybe very boring-looking plot: this is for H2-H2 collision-induced absorption. This is transmission through a column of hydrogen gas as a function of wavelength. You look at this plot and you see a single line, and you think that's extremely boring. We looked at this plot, and we all breathed a sigh of relief.
That's because all of these sit right on top of one another; you cannot see the difference between the different models in that particular plot. So for the CIA, we're off to a great start.

This next one is a case where it actually took a lot of effort on behalf of the team to come into agreement and to understand the differences. This is line absorption: water vapor at 300 Kelvin, transmission through an isobaric column, in this particular case at 10^3 pascals. Participants uploaded high-resolution absorption cross sections; we then computed transmission and degraded the resolution to the resolving power shown there. What we pretty quickly learned is that there were some major model differences right out of the gate, and also some flaws in our experiment design that we've since fixed. The first thing I'll point out, less obvious in this plot but pretty clear in some of our other plots, is that isotopologues mattered: some groups were including isotopologues like HDO, which is active in this region, whereas some groups weren't, and so you got different transmission spectra for that very explainable reason. But your eyes were probably drawn to the cores of these absorption bands, and how there really isn't very good agreement in there. This took some work, but we figured out we were actually sensitive to the resolution at which the groups were uploading their opacities. So we now have a new approach where groups generate high-resolution opacities, use those to create a transmission spectrum, and degrade that to some coarser resolution that they then upload. That gives us broad spectral coverage, albeit at low resolving power.
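The column-transmission experiment just described can be sketched as follows. This is a minimal illustration, assuming Beer-Lambert absorption through a uniform column and a simple fixed-width Gaussian kernel for degrading to a target resolving power; the function names and the constant-kernel simplification are mine, not the SAG's actual tooling.

```python
import numpy as np

def transmission(sigma_cm2, column_cm2):
    """Beer-Lambert transmission through a uniform (isobaric) column:
    T = exp(-sigma * N), with sigma the absorption cross section in
    cm^2/molecule and N the column density in molecules/cm^2."""
    return np.exp(-sigma_cm2 * column_cm2)

def degrade_to_R(wl, spec, R):
    """Degrade a high-resolution spectrum to resolving power
    R = lambda/dlambda by convolving with a Gaussian whose FWHM is
    set at the band center. Assumes a uniform wavelength grid; real
    tools vary the kernel width across the band."""
    dl = wl[1] - wl[0]
    fwhm = np.mean(wl) / R            # FWHM of the smoothing kernel
    sig = fwhm / 2.35482              # convert FWHM to Gaussian sigma
    half = max(1, int(4 * sig / dl))  # kernel half-width in samples
    x = np.arange(-half, half + 1) * dl
    kern = np.exp(-0.5 * (x / sig) ** 2)
    kern /= kern.sum()                # preserve the mean flux level
    return np.convolve(spec, kern, mode="same")
```

The ordering matters here: computing transmission at high resolution and then degrading is not the same as degrading the cross sections first, which is exactly the kind of sensitivity the intercomparison exposed.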
Then, for an individual absorption band, groups will upload line-resolving opacities, so we can zoom in on individual absorption features if we need to. That kind of scaffolded approach gives us the depth and the breadth that we need to do the intercomparison, and that's something we needed to learn.

The end result of all of that is something that looks like this: transmission, again through a water column, over a narrow range of wavelengths. In this particular water band you can see the transmission goes to zero; it's opaque in the middle. You do see differences among the models in the wings of the absorption band, but we can explain all of them. The pink curve up there was done with HITRAN2012, which is out of date at this point, and has the most transmission of any of the tools. Sitting beneath that is a model that used ExoMol as inputs, which has different assumptions about the line widths. And what I think is particularly good news is that you see two groupings over here on the right. One of those was using the HITEMP line list for water, so that's several different opacity tools using the same line list and getting the same answer out the back end, which was super promising for us. The same thing happens down here for HITRAN2020; again, we get good agreement. So after a lot of work, things look great.

Like I said, we're just starting to look at our spectral models, our actual radiative transfer models. Here, this isn't a spectrum; this is, for a simplified planet, how reflective is it at full phase? In this experiment you've got a cloud that extends infinitely deep into the atmosphere, has a Henyey-Greenstein phase function with an asymmetry parameter g of 0.9, and has some single-scattering albedo for the aerosols in the atmosphere.
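For reference, the Henyey-Greenstein phase function used in that cloud experiment has a simple closed form. A minimal sketch, my own illustration rather than the SAG's experiment code:

```python
import numpy as np

def henyey_greenstein(cos_theta, g=0.9):
    """Henyey-Greenstein scattering phase function, normalized so
    that its average over all solid angles is 1. The asymmetry
    parameter g=0 gives isotropic scattering; g near 1 is sharply
    forward-peaked."""
    return (1.0 - g * g) / (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5
```

With g = 0.9 the forward peak at cos(theta) = 1 is (1+g)^3/(1-g)^3, roughly 6,900 times the backscatter value, which is part of what makes this a demanding benchmark for reflected-light radiative transfer.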
For this setup there's a semi-analytical result, running along down here, that you know is truth. So by seeing how far off these models are from truth, you begin to understand how different radiative transfer approaches impact our ability to predict these reflected-light spectra.

Quickly, some lessons learned. I'd say we've learned a lot about empowering teamwork and seeking complementarity. With regard to empowering teamwork, I have been extremely impressed with how dedicated our group of SAG members has been: coming to the telecons, working constructively, and just being friendly. When they spot something that might be wrong in their model, they take that model home, work on it, and bring it back. Sometimes you spot something that might be off in a colleague's model, and you give them tips on how to improve it. It's been so collaborative, and it's been so nice working with this group of people. We've all been building our tools up together. Every single team that has engaged has found a way to improve their models as a result of this SAG, and I consider that a big success.

Regarding complementarity, we're not the only intercomparison in town. As part of the CUISINES framework there's MALBEC, which Geronimo Villanueva is running, a radiative transfer model intercomparison, and RISOTTO, which Amber Young and Eleonora Alei are running, a retrieval model intercomparison. All of those people are involved with what we're doing, and we're involved with what they're doing; we're making sure that we're not stepping on toes or reinventing wheels. As I mentioned before, the takeaways are that we've improved how we do the comparisons, we've seen that we can boil our sensitivities down to the adopted line lists, and we're starting to see first-order sensitivities to the underlying radiative transfer approaches when predicting reflected-light spectra.
And then I think one last slide, on our next steps. We need to complete our radiative transfer model intercomparison; those runs are ongoing and will take place over the course of the spring. We need to design and run some retrieval intercomparisons in collaboration with RISOTTO; that'll take place over the summer. We'll continue to update our living report over this time frame, to deliver in fall of 2025. And then we want to package our results for long-term preservation, of course, but I also have a vision for packaging our results for long-lived utility. That is, if five or ten years from now someone decides to build their own opacity tool and wants to know whether they're in the right ballpark, they should be able to quickly open up some of our SAG results, overplot them on top of what they're doing, and see if they agree with what SAG 26 found five or ten years earlier. And so with that, I'm going to put the same summary slide back up, and I'm happy to take any questions. Thank you.

We have time for maybe one quick question. Go ahead, and remember to introduce yourself and say your affiliation.

Jonathan Lane, JPL. How do you handle clouds in this in a way that's well constrained? You do the grain opacity, but in terms of cloud types and size distributions, that could be a lot of different models.

Yes. The short answer is, we don't; we aren't going to handle clouds at that level of detail. What we're going to do is specify the optical properties of the clouds for each of the groups, rather than having each group specify grain distributions or anything like that. We just hand them the optical properties, each group ingests those, and then generates a predicted spectrum. Through that avenue we're not relying on each of the groups to handle all the complexities that you just brought up. So yeah, clouds are certainly complex.
We put some thought into how we're going to be able to do them in a manageable way that also standardizes the comparison across the radiative transfer models; that's our goal. Thanks.

Let's thank Ty again.