Stop Losing Science in Silos
Alright, good morning, everybody. Phil and I are going to give a presentation on losing data in silos; the crux of the topic is the ELN and Luma working together. Quick agenda: I'm going to introduce Luma very briefly, four or five slides, and then hand over to Phil, who'll run a few demos of some of the ways our traditional ELN platform and our expert tools can work together, with Luma orchestrating the experience in the background.

A little background on us. Over the past four years or so, Dotmatics came together with a number of expert tools, some of which are listed here. The most immediately recognizable to many of you is certainly Prism; both Geneious tools and SnapGene round out our molecular biology offerings; and there are several flow cytometry offerings, a couple for chromatography and mass spec, and some others as well. These expert tools are widely used across a broad swath of the pharmaceutical and life science space. As we brought them together, we got the opportunity, as well as the responsibility, to link them into a more enterprise offering. As they exist now, they are mostly individual expert tools, and contextualizing experiments that contain their outputs is quite challenging. ELN experimentation over the years has involved some form of arts and crafts, where things are pasted together and connected through links, and going back to reconstruct how experiments were actually run is hard.

So we developed the Luma platform. The first iteration was instrument integration. Building on that, we added integration to all of these expert tools, a rich data model, open APIs, and ways to compose workflows across the various tools. Beyond the integration and data connectivity layers, Luma has a data model behind it and is fully SaaS native: the data live in AWS, and the scaling and services layers run on Databricks, so it is a robust, modern system.

This view shows a hub-and-spoke model, with Luma data in the middle and the various applications we integrate with along the edges. gPROMS may be new to many of you; it comes in with our recent acquisition by Siemens and enables complex modelling in the pharma space, for things like solubility and polymorph screening. Let me build this out one step more. At the bottom is a plus sign indicating our open platform and the ability to integrate with other third-party tools: Synthia is a retrosynthesis tool from Merck KGaA, LabDonkey is a lab scheduling tool, and Revvity Signals is a competitive ELN. So we are agnostic about what we can connect with and orchestrate workflows across.

The data ingress pattern for Luma is really threefold.
Files are the original pattern: we can grab files from your file systems and instrument computers, pull them in, and parse them into standardized formats. For example, chromatography data exist in multiple native formats; we bring those native files in and parse them all to a machine-readable JSON format, which lets us do standardization and modeling across those data (a minimal sketch of that normalization pattern follows below). That's just one example. APIs are an obvious integration point, as is direct database connectivity.

The newest addition to the Luma family is Agent Luma. This is our AI agent, built using Databricks tooling and designed for scientific scrutiny. The way Luma structures and holds data allows the agent to be traceable, explainable, and verifiable. Many of us use tools like ChatGPT and regularly experience their tendency to hallucinate and exaggerate; because Luma data are structured to begin with, there is less inference and more direct use of the data for modeling and AI activities. I'm not going to read through this slide, but I'll point out a couple of things. Having an agent in place is really different from having a large language model: the LLM is the underlying intelligence of an agent, and the agent adds skills, tools, and specific task-based workflows it can perform. In the case of Agent Luma, it has the ability to essentially program itself. We can say, "I'd like to build an app; here's a base file that represents the data model, and these are my objectives," and it will talk it through with you and operate its own machinery to build out the data model and the data flows, getting you all or most of the way to a working application. That's a huge time saver, and I'm sure Phil will talk about it a little. Beyond that, it can assist with data ingestion and with simpler things like enhanced querying and trending.

So without too much further ado, I'm going to hand this over to Phil.
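To make the file-ingress pattern concrete, here is a minimal sketch of normalizing a vendor export into a vendor-neutral JSON record. The CSV layout, column names ("RT(min)", "Signal"), and output fields are illustrative assumptions, not Luma's actual parsers or schema.

```python
# Minimal sketch of the file-normalization pattern: one vendor CSV export
# becomes a vendor-neutral JSON record. All field names are assumptions.
import csv
import json
from pathlib import Path

def normalize_chromatogram(vendor_csv: Path) -> dict:
    """Parse one vendor's CSV export into a standardized JSON record."""
    with vendor_csv.open(newline="") as fh:
        rows = list(csv.DictReader(fh))
    return {
        "sample_id": vendor_csv.stem,  # assumed: file named after the sample
        "technique": "chromatography",
        "trace": [
            {"retention_time": float(r["RT(min)"]),  # assumed vendor column
             "intensity": float(r["Signal"])}         # assumed vendor column
            for r in rows
        ],
    }

if __name__ == "__main__":
    Path("normalized").mkdir(exist_ok=True)
    # Sweep a drop folder of instrument exports and write machine-readable JSON.
    for path in Path("instrument_drop").glob("*.csv"):
        record = normalize_chromatogram(path)
        Path("normalized", path.stem + ".json").write_text(json.dumps(record))
```

Once every vendor's files are reduced to the same record shape, standardization and downstream modeling can treat them uniformly, which is the point of the pattern.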
Okay, great. I'm going to walk through three demos today showing how we can use our Luma platform along with really any ELN, though of course we'll use the Dotmatics ELN. The three scenarios: first, bringing expert data into Luma and then into the notebook, so breaking down data silos by bringing in instrument data, Prism data, really any data you've connected into Luma. Second, the reverse: taking notebook data into Luma to power decision making with agentic skills. And third, building quite complicated workflows across a notebook and all of the connected systems that Luma brings to bear.

So demo number one: how can we bring so-called expert data into your notebook? Many of you will be using notebooks, and you'll probably hear your users say: it's great that we have a notebook and can document what we did, but results come back as attachments, and attachments are fine to look at at the time but not useful when you try to compare results across experiments, or when an attachment contains a graph you'd like to interact with rather than just view as a static image. I'm going to show two examples, using instrument data and Prism data directly in the notebook.

Here I am in the Dotmatics notebook with a chemistry example (it doesn't have to be chemistry, of course). I won't go through the entire notebook, but we've sketched out our reactants and products, done our stoichiometry calculations, generated some sample IDs, and made a request for analysis, which today happens to be X-ray diffraction. Everything you see on screen right now is regular notebook functionality. As soon as I scroll down to this X-ray diffraction tab at the bottom, though, this is data, and really a dashboard, from Luma. These dashboards are interactive: this graph is not static, and I can interact with it, filter it, change it, and even add calculations.

This example is anonymized customer data, and the real crux of the experiment is that two different instrument vendors both provide X-ray diffraction instruments. One sample was sent through two diffractometers with completely different data back ends because they come from different vendors, and what our customer needed was this overlay graph of the two outputs so they could draw conclusions from it. So this is not only bringing expert data into your notebook automatically, it's bringing in merged expert data from two completely different vendors. Luma can do this kind of harmonization of like data generally: imagine a chromatography app, an NMR app, an X-ray diffraction app, where, regardless of the vendor you ran the experiments on, the data all land in the same app and can all be shared with the notebook. So this example builds a rich dashboard in Luma, filters it down to the sample and experiment of interest, and shares that dashboard directly in my notebook; a small sketch of the harmonization idea follows.
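As a minimal sketch of that harmonization step, the snippet below maps two vendors' differently named columns onto one shared schema and draws the overlay. The file names and column names are invented stand-ins; the real vendor formats are proprietary and handled by Luma's parsers upstream.

```python
# Minimal sketch: harmonize two vendors' XRD exports into one overlay graph.
# Column names ("2Theta"/"Counts", "angle"/"cps") are assumed for illustration.
import pandas as pd
import matplotlib.pyplot as plt

vendor_a = pd.read_csv("vendor_a.csv").rename(
    columns={"2Theta": "two_theta", "Counts": "intensity"})
vendor_b = pd.read_csv("vendor_b.csv").rename(
    columns={"angle": "two_theta", "cps": "intensity"})

# Tag provenance, stack into one harmonized table, then overlay the traces.
vendor_a["source"] = "Vendor A"
vendor_b["source"] = "Vendor B"
merged = pd.concat([vendor_a, vendor_b], ignore_index=True)

for source, trace in merged.groupby("source"):
    plt.plot(trace["two_theta"], trace["intensity"], label=source)
plt.xlabel("2θ (degrees)")
plt.ylabel("Intensity")
plt.legend()
plt.savefig("xrd_overlay.png")
```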
Let's look at a slightly different example, where I'm using GraphPad Prism to generate graphs for me. I've collected my plate data and normalized it to percentage inhibition, and now I want to get this data out into Prism and then share a dashboard of the results back with my notebook. This is another common pattern across Dotmatics: since we now own GraphPad Prism, many Dotmatics products can natively write a Prism file. So with a click of a button we can send all of this data out into Prism; note the experiment ID here, 140773.

When I open that data in GraphPad Prism, I have the experiment my data was sent from, my data tables, and my curve fits and graphs already analyzed. So we've already smoothed out one of the challenges of getting data into Prism from the notebook, by natively writing the Prism file. How can we get this Prism data back into a dashboard in a seamless way? This is another pattern you'll see with Luma: many Dotmatics tools have a "Send to Luma" button. Prism produces a file, and LabConnect handles files, so when I click the button, the data is sent over to our LabConnect system. LabConnect connects to instruments, to software, and to shared drives, or humans can drop files into it; each row in this table is effectively either a connected instrument or a folder waiting for some software to deposit its data. This is a very FAIR-compliant approach: by aggregating files from all of these data sources and making them searchable, we make them more findable and accessible.

But how do we make the data files more interoperable? How do we break apart this complicated Prism file and get it into a dashboard? By parsing. Dotmatics has parsers for many common instruments, for simple file types like Excel, CSV, and JSON, and for really complicated proprietary data models like Prism's. So all of the analyses are now broken out into these different sheets, with data models, statuses, and summaries; we even recreate the graph images as the data lands. LabConnect collects all of this data, and over in Luma proper we build a data model for it to land in. Each blue box here is a table in Luma: you'll notice tables for analyses, the raw data, file info, and graphs. Then we write raw data flows to break apart that complicated data model, and finally we reproduce it as an experience, a dashboard sitting on top of all of that data. So at the click of a button, I've sent my data from Prism into Luma, and I have a dashboard that includes my experiment ID, and that experiment ID is how we link the dashboard back into the source notebook.

That's a quick look at one of the common patterns we use across Dotmatics to get file-based data into our system. Let me load up the next app before the next demo. So demo one was enriching notebook data with expert data: taking instrument data and other software's data, flowing it into Luma, producing a dashboard, and sharing that dashboard back so these expert live views are delivered directly into the notebook, instead of burying the data in an attachment or a document; a minimal sketch of the parse-and-link pattern is below.
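Here is that parse-and-link pattern in miniature: break a results export into per-analysis tables and key every row to its source experiment ID so the dashboard can be filtered back to the originating notebook entry. The real Prism file is a proprietary format handled by Dotmatics' parsers; this sketch uses a simplified CSV stand-in with an assumed "analysis" column.

```python
# Minimal sketch of the parse-and-link pattern, using a simplified CSV
# stand-in for the real proprietary Prism format.
import csv
from collections import defaultdict

def parse_results(path: str, experiment_id: str) -> dict[str, list[dict]]:
    """Group rows by analysis sheet, tagging each row with the experiment ID."""
    tables: dict[str, list[dict]] = defaultdict(list)
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            row["experiment_id"] = experiment_id  # the link back to the ELN
            tables[row["analysis"]].append(row)   # assumed 'analysis' column
    return tables

# Each key becomes a table in the data model; a raw data flow would load the
# rows, and the dashboard filters on experiment_id to show just this entry.
tables = parse_results("prism_export.csv", experiment_id="140773")
```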
Demo number two takes things in reverse: breaking down the data silos. Dotmatics obviously has ELNs, and things like Prism, registration systems, Geneious, and SnapGene; Dotmatics is actually the owner of some of these silos now. One reason the data flows into Luma are so important is that this is Dotmatics helping you break those silos down.

So how can I get data out of experiments and contextualize it with data from other systems? How can I do that for chemistry, and for large molecules like antibodies? And how can I use AI both to understand this data and to build these apps? Let's look first at a chemistry example and then a biologics example.

Here is my chemistry dashboard. Right at the start of this presentation, Sean mentioned that we can use AI to help build these apps, and this is an AI-built app. I asked Agent Luma to inspect some files I'd loaded into LabConnect containing chemistry screening results. Agent Luma looked at those files, built my tables and data model, and wrote the raw data flows to get the data into this app, and then I focused on building the dashboard. The dashboard brings together the data, but it also brings in some of our scientific toolkits: the Dotmatics ELN has a very robust chemistry engine, which we've ported into Luma so we can do structure searches, structure depictions, and more cheminformatics tooling here. I have my compound data table, graphs charting some of the assay data, calculated physicochemical properties (again from our chemistry toolkit), and batch data.

One really important thing about these dashboards, whether in Luma or shared with your notebook, is that we can move into what we call explore mode, which is how your users move things around or add calculations directly. Maybe I want a simple numerical calculation between two columns in my data set: save, and that's my new calculation done; I can now bookmark this as my new view for the page. There are really good ways to interact with the data: whenever we see a graph, we can hit the space bar to view it larger, and we can drill down into the underlying data table, even one that isn't on screen, to see all of the data used to produce the graphs. Building these rich interfaces is quick and simple in Luma.

Across the tabs I have biochemical data and graphs, a connection to our inventory system so I can check what stocks I have available, and even some pharmacokinetics data. I said I used AI to build this app, but I can also use AI as an end user to help me understand the data in it. Your end users may well say: I've got all of this data that's come from my notebook, and my boss is asking how well my project is progressing. So we can use Agent Luma, embedded directly in these apps, to ask natural language questions (simple to pose, but spanning complicated data sets). In this case I'm asking: compared to the best PK data of all time in this app, how do the last two weeks look? Essentially, how is my project progressing? Agent Luma will look at all of that data, tell me my best PK compound, give me last week's results, explain what's happening and the clinical significance, and we can even ask it to suggest the next round of experiments. And it's truly interactive: I can say "tell me about my lead compound" and get a more comprehensive overview of not just the PK results but all of the other results in here. The sketch below shows the kind of structured query the agent is effectively running.
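To ground that example, here is a minimal sketch of the "best PK of all time versus the last two weeks" comparison as a structured query. The agent composes something equivalent over Luma's tables; the file and column names (compound_id, half_life_h, measured_on) are assumptions for illustration only.

```python
# Minimal sketch of the structured query behind "compared to the best PK data
# of all time, how do the last two weeks look?". Column names are assumed.
import pandas as pd

pk = pd.read_csv("pk_results.csv", parse_dates=["measured_on"])

# Best half-life ever recorded in the app vs. best within the last 14 days.
best_ever = pk.loc[pk["half_life_h"].idxmax()]
recent = pk[pk["measured_on"] >= pk["measured_on"].max() - pd.Timedelta(days=14)]
best_recent = recent.loc[recent["half_life_h"].idxmax()]

print(f"Best ever: {best_ever['compound_id']} ({best_ever['half_life_h']:.1f} h)")
print(f"Best of last two weeks: {best_recent['compound_id']} "
      f"({best_recent['half_life_h']:.1f} h)")
```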
So Agent Luma helps you build these apps, pulling data from your notebook and from other source systems, and it helps you interact with the data inside the apps as well. Now I'm going to switch gears slightly: we've shown chemists how Luma can pull chemistry data in from their notebook and make sense of it. What about biological results? Let's look at another app, my antibody dashboards app. This dashboard pulls in data from many different systems inside Dotmatics, including Bioglyph for protein design, while your production and tracking data may well come from your connected notebook. In this case, instead of small molecules, I have glyphs for my antibodies, and I can track where they were produced, whether inside Luma or inside a notebook. If I'm using Dotmatics' molecular biology tools and a plasmid was used in the production of this antibody, I can click on that plasmid and open an interactive viewer, another example of bringing in toolkits from our other applications. We have screening data in here as well. For biophysics and developability, we use these traffic-light colors to score a particular antibody (a tiny sketch of that scoring idea follows). And for our biological data, we can pull in Prism data using exactly the same techniques I showed you before. So Luma can extract data from notebook systems and join and contextualize it with other data in your enterprise, making decision making much more seamless and much faster.
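As a minimal sketch of that traffic-light idea: map each developability metric to green, amber, or red against thresholds. The metrics and cutoffs here are invented placeholders; real scoring criteria would come from your own developability guidelines, not from Luma.

```python
# Minimal sketch of traffic-light scoring for antibody developability.
# Metrics and thresholds are invented placeholders for illustration.
THRESHOLDS = {
    # metric: (green_if_at_most, amber_if_at_most) -- assumed cutoffs
    "aggregation_pct": (2.0, 5.0),
    "hydrophobicity_index": (1.0, 1.5),
}

def traffic_light(metric: str, value: float) -> str:
    green, amber = THRESHOLDS[metric]
    if value <= green:
        return "green"
    return "amber" if value <= amber else "red"

antibody = {"aggregation_pct": 1.4, "hydrophobicity_index": 1.8}
scores = {m: traffic_light(m, v) for m, v in antibody.items()}
print(scores)  # {'aggregation_pct': 'green', 'hydrophobicity_index': 'red'}
```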
Okay, so: we can remove data silos; I've shown how we can use AI to make sense of the data in our apps and to help build those apps; once we have an app, we can build these rich dashboards; and, as in the first demo, we can share those dashboards back with the notebook to bridge the gap between Luma's connectivity and the data you're entering in your notebook. On to the third demo: how can we build seamless workflows across all of this, not just share data? I have two examples. The first uses a screening system to build an automated analysis pipeline: instead of manually handling raw data files from your instruments, let Luma collect and harmonize all of that data, and then use the APIs of the screening system in our notebook to create a screening experiment. For the second, I'll come back to that very first chemistry experiment and show how Luma's agent can help write up and summarize ELN experiments.

So back in Studies, in our notebook, let's create a new experiment, a dose-response experiment. Traditionally, end users would go into the screening system and manually drag and drop the raw files, or, if their IT folks had been kind to them, a bespoke API integration might push data in. We want to use the power of Luma to do that automatically. On this automation request page, I fill out the barcodes of the plates I want to screen, and I can tell Luma the expected barcode from my instrument; in many cases the two match.

Now I click "request for processing", which packages all of this up and sends it across to a Luma app. From that app, Luma collects the raw data for these barcodes and then uses an integration to push the data back into my screening system. If I hop over to Luma, here's the app doing that. Files come in; from those files we break apart barcodes; I can add annotations to the files; if they contain experimental data I can see it, along with samples; from the barcodes we pull out the well data; and this Studies request table is where my request information landed. As the data flows into this app and we find request data that matches up with a barcode and its well data, a trigger fires at that point to push the data directly back into our ELN (a minimal sketch of that trigger logic follows). Back in the ELN, you'll see the request is now completed, and on my screening tab the plates are already entered: the raw data has been pushed in, the screening system has done my normalization, and as an end user all I have to do is review my curves and do my knockouts and knock-ins. At no point did I handle any raw data or have to choose the correct reader or parser to get the raw data into screening. Luma handled all of that in the back end and, using our API connectivity, pushed the data directly back into my screening experiment so I can focus on my analysis.
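Here is a minimal sketch of that trigger: when incoming well data carries a barcode matching an open request, push the readings to the screening system's API. The endpoint URL, payload shape, and function name are hypothetical; the real integration uses Luma's configured connections and the screening system's own API.

```python
# Minimal sketch of the barcode-matching trigger. The endpoint and payload
# are hypothetical stand-ins for the real Luma-to-ELN integration.
import requests

ELN_API = "https://eln.example.com/api/screening"  # hypothetical endpoint

def on_well_data_arrived(well_data: dict, open_requests: list[dict]) -> None:
    """Fire when a raw-data file lands: match barcode, then push to the ELN."""
    for request in open_requests:
        if request["plate_barcode"] == well_data["barcode"]:
            resp = requests.post(
                f"{ELN_API}/experiments/{request['experiment_id']}/plates",
                json={"barcode": well_data["barcode"],
                      "wells": well_data["wells"]},
                timeout=30,
            )
            resp.raise_for_status()
            request["status"] = "completed"
```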
A couple more examples. Going back to our very first chemistry experiment: these experiments can become quite complicated, with lots of data and tables. Imagine this were not a singleton chemistry experiment but an array synthesis, which we also support, and I had to write up my entire array of a hundred-plus reactions; that gets quite time consuming. So I've added this AI summary button, and much like the screening example, pressing it sends the data out into a Luma app. As part of the raw data flows in that app, the AI agent summarizes my experiment and then pushes the write-up back into the notebook. Now, I did have a little fun with this one: I asked the agent to make extensive use of emojis as it did the write-up. So the chemistry, my reagents, my solvents, and my products were packaged up, and this is the write-up that got dropped back into my experiment. It certainly made extensive use of emojis, but I want to highlight some pretty cool things it did along the way. Number one, at no point in my ELN did I mention that this was a reductive amination; our agent figured that out and put it into my write-up. It's giving me hints, noting the high-temperature conditions and suggesting that Optanol maybe wasn't the best choice here. It's telling me my yield could be improved a little, that 43% isn't great, and it's giving me key insights, observations, and areas for optimization. So something that could take hours, particularly for a more complicated experiment, now takes a click of a button that integrates Luma's agentic capabilities directly back into our notebook.

One other app I want to show around this is ELN AI conclusions, another example of using Luma's agentic capabilities to help write up conclusions for an ELN entry. In this case, I've got the write-up from my ELN, a materials data table, and some files attached to my notebook as well, and I want to summarize all of it, including the content of the files. As part of the LabConnect ingestion system, we have an agentic parser that can break apart and parse things like PDFs, Word documents, and PowerPoints. When I click on this file, it shows me the raw file in LabConnect; as you can see, it's quite a complicated paper with lots and lots of data, and reading all of these attachments while reviewing an experiment is very time consuming. So wouldn't it be nice to use an agent to summarize the attachments and speed up that review? That's what this button opens: an agentic summary of the attachment in my notebook. I can then use those attachment summaries, along with the write-up and the structured data, to do an overall AI summary of my experiment, including the documents that were attached; a minimal sketch of assembling that request follows.
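To make the overall-summary step concrete, here is a minimal sketch of assembling the write-up, the structured materials table, and the per-attachment summaries into one request for the agent. The function, field names, and the `ask_agent` call are hypothetical stand-ins, not Agent Luma's actual API.

```python
# Minimal sketch of assembling an overall-summary request. All names here
# (build_summary_prompt, ask_agent, the field names) are hypothetical.
def build_summary_prompt(write_up: str,
                         materials: list[dict],
                         attachment_summaries: list[str]) -> str:
    """Combine write-up, materials, and attachment summaries into one prompt."""
    materials_lines = "\n".join(
        f"- {m['name']}: {m['amount']}" for m in materials)  # assumed fields
    attachments = "\n\n".join(attachment_summaries)
    return (
        "Summarize this experiment, including the attached documents.\n\n"
        f"Write-up:\n{write_up}\n\n"
        f"Materials:\n{materials_lines}\n\n"
        f"Attachment summaries:\n{attachments}"
    )

prompt = build_summary_prompt(
    write_up="Reductive amination of ...",
    materials=[{"name": "NaBH(OAc)3", "amount": "1.5 eq"}],
    attachment_summaries=["Paper reports improved yields at lower temperature."],
)
# ask_agent(prompt)  # hypothetical call that submits the prompt to the agent
```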
So, to review the three workflows I've shown you. First, shared dashboards that use all of Luma's data connectivity to pull expert data into your notebook: instrument data, as in the diffraction example, and other software's output, like the Prism data pulled into the notebook. Second, using Luma to remove silos, pulling ELN data in and contextualizing it with the other data Luma has at its fingertips: a chemistry example with AI summarizing results and building the apps, and an antibody example with connections out to Geneious and Bioglyph, plus Prism data there as well. And third, using Luma to build workflows across our notebooks: automated screening systems and automated agentic summaries of the data that goes into the notebook. All of this is powered by Luma and a notebook; the Dotmatics notebook has these integrations, but some of these stories apply across other vendors' notebooks too. Luma is really the great agentic data aggregator in life sciences, and pairing it with a notebook can supercharge the workflows you use them both for.

With that, we can take questions in the chat or live, and you know how to get hold of us if you want a deeper demo of anything you've seen today: any of the capabilities or integrations, how we imagine using Luma with our family of applications, or a third-party tool you think you might want to integrate more deeply into your notebook. Come and speak to us and we'll help you achieve those goals with Luma. Thank you. Thanks, everybody.
Your ELN is full of data your team can't actually use. Results land as attachments nobody can search, instruments produce files that don't talk to anything else, and write-ups that should take minutes end up taking hours.
In this on-demand webinar, Phil Mounteney and Sean O'Hare walk through three live demos showing how Luma connects your ELN, instruments, and scientific software into real-time dashboards and automated workflows — without replacing your existing stack.
What you'll see:
Instrument data from two competing vendors harmonised into one interactive dashboard
Prism data parsed, structured, and linked back to the originating notebook experiment with one click
A dose-response screening workflow running end to end without anyone touching a raw data file
Agent Luma drafting experimental write-ups, answering PK questions, and building app schemas from your actual lab data
Presenters: Phil Mounteney and Sean O'Hare, Dotmatics · Duration: 60 minutes
Simplify your path to discovery.
See Luma in action by requesting a demo today.



