What's New in Dotmatics 7.5
Many thanks for joining us today. My name is Ciampa, and I'm here with Troy to take you through what's new in the ELN and Data Discovery for the 7.5 release. Our presentation will last forty to forty-five minutes, and we'll have some time at the end for Q&A. If you have any questions, please do type them into the chat panel, and we'll aim to address them at the end of the presentation. We have a packed agenda today. Most of it is focused on the ELN and its modules, but I will kick off with some improvements we've made to platform authentication and to the exporting capabilities in Data Discovery. I'll then tell you about the improvements we've made to how chemical structures are depicted, and move on to the improvements we've made to Canvas and Canvas tables in terms of usability and ergonomics. At that point I'll pass the ball to Troy, who will tell you about several enhancements across the various ELN modules.

Let's start with platform authentication. Assuming platform login has been set up with multi-factor authentication enabled as an extra security step, the problem is that, currently, the only way to check whether MFA is enabled per user is to go and inspect the contents of an Oracle table on the server. And the only way to reset MFA for a user, for example when the user has changed or lost their phone, is to delete the entry from that table. So we need to make direct database changes, and that means involving the Dotmatics support team. Although the support team has been very responsive to these requests, there is a concern that a user might be locked out of their account for a longer period of time. To address these concerns, we've now made it possible for an admin to reset multi-factor authentication from the front end. If MFA is enabled, then in the admin's user management area a new Reset MFA tab is visible, with a column identifying each user's login type and a second column containing a button to reset multi-factor authentication for that specific user. Once MFA has been reset for a user, the user can log back into their account and navigate to user settings to re-register multi-factor authentication.

Okay, now let's move on to Data Discovery. A common workflow for our users is to run a query, move the data into the advanced pivot view available from the view wizard, and then export the pivoted data to Excel to generate a report that can be analyzed and shared with colleagues. A few limitations to this workflow have been identified. First of all, the advanced pivot view does not fully...

Hey, Ciampa, I'll stop you a second there. The audio is getting a little glitchy for us. Hopefully it's just Zoom bandwidth; maybe try going off video and let's see if we can hear you better. Is it bad, Troy? Yeah, it seems better already. Carry on.

Okey dokey, thanks. As I was saying, a couple of limitations have been identified in this workflow. The first is that the advanced pivot view does not fully support conditional formatting. And the second is that when we export the data to Excel, the chemical structures are always duplicated, and as a result the report is less informative and more difficult to read, because it's simply more cluttered.
As a solution, we've made a few changes so that conditional formatting can be applied to aggregated data, including the ability to save it to the exported file, and we've added an option, when exporting to Excel, not to repeat structures on each row. These changes have really improved the kind of report you can create, and they've also given us the opportunity to consolidate the export-to-Excel mechanism: we now have a single export-to-Excel mechanism, consolidated on both the client and the server side, and we support the xlsx format.

Just to show you an example: in this case I ran a query in a Data Discovery project, and one of the columns, percentage inhibition for P450, contains conditional formatting. When I move the data to the advanced pivot view, you can see that this column still displays the conditional formatting. When I pivot the data, resulting in the view in the middle, the conditional formatting is also applied to the newly created columns containing the aggregated data. And when I create an Excel report, those colors are preserved. In this second example, I move data to the advanced pivot view using compound ID and concentration as rows, resulting in this pivot view, and you can see that the structures are not duplicated in the view. Previously, when exporting this view to Excel, the chemical structures were duplicated. Now, when you export to Excel, you're presented with a dialog containing a checkbox, "repeat structures on each row", yes or no. By default the checkbox is unchecked, and this is the resulting file: the structures are no longer replicated on each row, and the report is more consistent with what we display in the tabular view. Of course, if you check the option, you're still able to generate the old-style report.

This is a quick video of what I've just described. We run a query and move one data source to the advanced pivot view; this is the same example I had on the slides. First we create a report with the structures repeated on each row: we export to Excel with the checkbox checked, so the structures are repeated. We can then go back and repeat the export, this time leaving the checkbox unchecked, and you'll see that the newly created Excel file does not contain duplicated structures. In the second example, I move the P450 data to the advanced pivot view, so my percentage inhibition column comes in with its conditional formatting. I pivot the data, creating a mean column for my pivoted data, which carries the colors, and when I export the file to Excel the colors are preserved. So if you're using this type of workflow, if you're using the advanced pivot view to pivot your data on the fly, these improvements should be useful to you.

Let's move on to an improvement we made in the chemistry tools, exposed through Reaction Workflow. This is really the final step of an effort we initiated in 7.3, continued in 7.4, and are completing now in 7.5, which is about delivering consistent depiction of chemical structures across all our applications, in line with standard drawing conventions.
The issue in Reaction Workflow was that when we import reactants into RW, the layout and orientation of the reactants is not preserved, and the problem extends to the enumerated products as well. Here are a couple of examples. I've drawn this heterocycle in the middle, which contains a wedge bond between the ring and the carboxylic acid, but when I load this MOL file into RW the orientation is lost, the wedge bond is lost, and an explicit hydrogen is used to define the stereocenter. The problem is even more visible here: I've drawn this complex macrocycle in Elemental, but when I load it into RW, the macrocycle is rendered as a circle, making it practically impossible to understand its chemical structure.

We have now addressed this. When we load reactants from an SDF or a MOL file, we keep the atomic coordinates, so the layout and orientation are preserved, as in this case for the macrocycle. To show you one more example, here I'm loading the same heterocycle I drew before, with the wedge bond between the ring and the carboxylic acid, and you can see that the structure is consistent. We can go back to Elemental, rotate the structure, change its orientation, and the important thing is that there is full consistency between Elemental and RW.

What about products? For products it's a bit more complicated, because we are enumerating, generating new structures. In this example we're using the same macrocycle and performing a reductive elimination, and you can see that the macrocycle in the products is not bad, in the sense that we no longer generate an unreadable circle by default, but it is not laid out the same way as the reactant. To get the same layout, we need to force an atom-based alignment, using the reactant as a template. So what do we do? We open Elemental, where we've made hyperlinks available to easily import reactants from the different nodes, and there's a checkbox to force atom-based alignment. Now when we run the enumeration, you can see that the generated product contains the macrocycle with exactly the same layout. One more example, using cubane as a reactant: you can see that the cubane in the results is, well, frankly, a mess; it's simply not possible to recognize the structure. So, also in this case, we import the reactant into Elemental and force an atom-based alignment. And when we use these structures to create an experiment in the ELN, the layout is preserved as well, so the correct structures are saved to the ELN. You can see here the reaction: the scheme is fine, the reagents are fine, and so on.
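Dotmatics doesn't expose this as code, and the snippet below is not its implementation; as a rough analogy for the atom-based, template-driven alignment idea, here is a minimal RDKit sketch in which a "product" depiction reuses the 2D coordinates of a reference structure it contains. The structures and the choice of RDKit are illustrative assumptions only.

```python
# Illustrative analogy only: this is RDKit, not the Dotmatics Reaction Workflow code.
from rdkit import Chem
from rdkit.Chem import AllChem

# Simple stand-in structures (not the webinar's macrocycle): a core "reactant" template
# and a "product" that contains the same core plus a new substituent.
template = Chem.MolFromSmiles("c1ccc2ccccc2c1")        # naphthalene core
product = Chem.MolFromSmiles("OC(=O)c1ccc2ccccc2c1")   # core + carboxylic acid

AllChem.Compute2DCoords(template)
# Lay out the product so the atoms matching the template reuse its 2D coordinates,
# which keeps the two depictions in the same orientation.
AllChem.GenerateDepictionMatching2DStructure(product, template)
```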
Okay, moving on to the last topic I'm going to cover today before I pass over to Troy: Canvas and Canvas tables. Since 7.4 we have been making a big effort to address and remove the major roadblocks in Canvas forms and Canvas tables that have prevented our customers from adopting them. Listening to your feedback, we identified four areas of intervention. First, we need to improve the usability of the Canvas table. Second, we need to address key functionality that is missing. Third, we need to improve its performance, and this is a big concern from our customers, because there is a general feeling that performance has not improved compared to classic. And fourth, we need to expand and improve the column lookups.

Today I want to show you our progress on improving the usability of the Canvas table. But I also want to say that there is work in progress on the missing key functionality, such as the ability to edit chemical structures in a Canvas table, and there is work in progress on technical solutions to improve the performance of Canvas forms. You may remember from our last webinar that Troy gave you details on the new M-type column lookup, where we improved support for operators, history searching, sorting, and multiple selection. So we've already done work in that area, and there is more work planned for S-type and A-type lookups.

Moving on to the ergonomics and usability improvements for the Canvas table: what is the problem here? The feedback we received is that it takes too many clicks to edit a cell or to check or uncheck a checkbox, and keyboard navigation is limited. Also, and this is really a big concern, it's very annoying that if you make a change in a table, for example you change the width of a column, the change is not saved once you load that experiment again, so you need to repeat the operation over and over. We have worked on this, and I think we've made significant improvements and provided a much more Excel-like experience. Cells can now be edited with a single mouse click, and Tab, Shift+Tab, and the four arrow keys can be used to navigate through the table cells. We have also enabled the ability to preserve settings: if you move a column, hide a column, pin a column, sort or filter within a column, or change the width of a column, all these changes are stored and saved at the experiment level for a particular user.

To give you a bit more detail: as I said, when you click on a cell, the cell becomes immediately editable. If a C-type lookup is associated with the column, you can toggle the state of the checkbox with a single click. If the column is associated with an A-type or T-type lookup, the drop-down list opens on the first click, and if it's an M-type, a modal dialog opens on the first click. In terms of keyboard navigation, Tab, Shift+Tab, and the four arrow keys can be used to move through the table. In this video I'm only using the keyboard. Enter commits changes and moves the focus down one cell, so you can immediately start editing the next cell; Shift+Enter commits the changes but moves one cell up. And if a column is associated with a lookup, you can use Enter to open the drop-down list, the arrow keys to navigate through the list of options, and Enter again to select an option and commit the choice. As I said, settings are now saved. In this example I'm changing the visibility of a number of columns, and I then make some other changes: I pin a column and change the width of a column.
So all these changes, as I said, are stored at the experiment level for the user. If I reload the experiment, the changes are kept, so there's no need to repeat the same actions over and over. If I open an experiment that has been closed, the saved settings still apply, but because the experiment is closed I won't be able to make any more changes. And when exporting a PDF of an experiment, the default settings are applied, that is, the admin-defined layout rather than the user-defined one. Okey dokey, I've got to my last slide. Troy, will you take over?

Absolutely. Thanks, Ciampa. I've left a question from Stefan in the chat and a question from Katie in the Q&A for you; I'll take over presenting so you can answer those while I steam ahead. Let me share my screen. Looks like you need to stop sharing your screen, Ciampa, before I'm allowed to take it over. There you are, thank you. Let's see here, screen share, make sure I share the right one with everyone. Okay, can you give me a thumbs up? You should see an agenda slide deck, hopefully. Excellent, thanks for the thumbs up. So this is where we're at: we've made it halfway through. Ciampa covered the Data Discovery part of the platform, and I'll focus on the ELN from here on.

Alright, we're going to start in chemical registration, ChemReg, or Register. Excuse me, frog in my throat. One of the, I think, very helpful additions in the 7.5 release relates to custom fields in Register. In past releases we have had support for custom fields: you could create those fields, you could parse the data from your SD file upon registration, and the fields were available in the creation interfaces. What was missing was being able to review and edit that data once you'd created a record with custom field data. We've now extended those capabilities so that users working directly in Register can review, edit, or search on that custom field data right there in Register. You can see some very creatively named custom fields here, exposed in this case in a summary table for compounds, though this works regardless of the entity type, and you'll see them in that table if you've configured them, as an admin, to show. You still have the power as an admin to decide which custom fields appear and which don't for your users, and with a few clicks you can expose them in these summary tables. We support the different data types you might expect, numbers, integers, VARCHARs, and they're exposed as such in those interfaces. If I drill into an individual record from that summary table (this is the summary table of all my compounds), you'll also see the custom data fields for the compound, and nested below it the batches and samples. I've also created some batch- and sample-specific custom metadata fields that, again, are editable directly from this interface. And maybe the final thing to note here is that we also support filtering and searching on these custom fields when you're trying to find a compound or a batch, launched from this interface via that magnifying glass.
You can start filling in data value parameters in the search field; you can see I've put 200 in here, and that has filtered my list of compounds below on that data field. So I think we've added a lot of value here for users, making it far easier to interact with and see all the data for their records in Register.

Moving on to BioRegister, but continuing with a small-molecule theme: BioRegister in the 7.5.0 release now supports enhanced stereochemistry. We've supported this in our chemistry tools for a while, and we've now extended it into BioRegister. What this enables me to do is define monomers, using the same Elemental interface here in monomer registration in BioRegister, with stereocenters carrying enhanced stereochemistry. Subsequently, as you might expect, I can start using those monomers in my BioRegister records. Here you see a protein sequence containing my monomer with a stereocenter, and that's captured in the chemical structure of the peptide sequence containing that monomer. When you create complexes in BioRegister, the same is true, whether the sequences have those enhanced stereocenters or it's a payload you're conjugating: as you might expect, we carry those stereocenters into the definitions of those records. And this is a somewhat nonsensical base, but it proves the point for nucleic acids as well: the linkers, the sugars, and the base components can all use monomers defined with enhanced stereocenters now. All told, it means you can define your monomers and your records using enhanced stereocenters, allowing you to accurately capture what you're registering.

Just a note on deploying this: for new customers it comes enabled out of the box, but if you're an existing customer, on upgrade you'll have to set the option in the BioRegister config file, and if you're Dotmatics-managed and hosted, you'll have to put in a services ticket and we can do that for you. And just a reminder that stereochemistry becomes part of chemical uniqueness when you deploy this, so it's always best practice, before and after enabling it, to run the uniqueness recheck and resolve any uniqueness conflicts that may result as your monomer definitions change.

Staying in BioRegister: many of you have probably heard of this, it's been in the works for a number of releases, and these are performance improvements to BioRegister. What we've heard from our largest BioRegister customers is that some of the interfaces run slowly. The root cause was well known: it's rooted in how we store custom field data in our data model, where we used essentially four tall, skinny data tables for all the custom data in the system. That took time for Oracle to process; pivoting and aggregating data in those tables is a costly operation, especially when we're building the interfaces in BioRegister. So we've implemented a change in our data model and in how we store custom field data to resolve this performance issue.
I'll talk a little bit about what that change is on the next slide, but I want to point out here that we understand that changes to data models, and the migration of data from the old model to the new model that happens automatically, are risky, and we want to manage that. So we aren't going to force this change on anyone out of the box. It will be something that gets triggered on demand, when it's right for you, if you ever want to do it; if you're not experiencing performance issues, you may not want to go forward with it at all. The other thing I'll point out is that this is under managed availability, which means we want to talk to you before you do it, make sure you're prepared for it, and make sure we're engaged and able to help with any problems that may come up in switching to the new data model. You'll be able to contact us through support or services, and ultimately Ciampa and I will review your system and work with you to adopt this. So it won't be available without that internal review by us, where we help you manage the process.

That's described here; I'm going to skip through this slide and just show you some records. As a reminder for the folks who aren't deep in BioRegister every day, all the standard fields are at the top, and you're able to configure as many custom fields as you want, which appear below those standard fields. Here's a record for antibody variable regions, and this is the same record in edit mode: you can see we've got text, numbers, single values, and select lists where you can select more than one value. The change to our database is to store those custom fields in dedicated tables per entity. Each field is now a unique column in its own table for antibody variable regions, and there is a second table for fields that have multi-select enabled, so you can select more than one value for those records. This does two things. It removes those costly Oracle pivots, because everything is now in a short, fat table, and we're no longer dealing with the entire database's worth of records: these tables are restricted to that entity type. They get named based on the entity type ID, in this case 1, which is the database ID for variable regions. So if you're an admin configuring against these tables, that's the data structure to expect when you adopt the change. From a user standpoint, you won't see any difference except that when you navigate into BioRegister, the views that were slow to load in the past now load quickly and feel snappy. It also means that the out-of-the-box views BioRegister generates, in this case the antibody variable region view the system automatically assembles for you, can now be leveraged in your config; in the past people would avoid them because they were slow, but with the new data model these views are quick and you can use them in your config. There's more we could say about this, there are lots of details, and I'm sure you'll have questions; I think it's best taken into a separate meeting, so if you're interested we can set up some one-on-one meetings to talk about it.
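To make the shape of that change concrete, here is a minimal, purely illustrative sketch; the table and field names are invented for the example and are not the actual BioRegister schema. It contrasts the old tall-skinny storage, where assembling a record means pivoting many rows, with the new per-entity wide table plus a companion table for multi-select values.

```python
# Illustrative only: names here are invented, not the real BioRegister schema.

# Old model: one tall, skinny table holds every entity's custom field data.
# Reading one record means collecting and pivoting many rows.
tall_skinny = [
    {"entity_id": 101, "field": "Project",   "value": "ABC"},
    {"entity_id": 101, "field": "Yield_pct", "value": "87"},
    {"entity_id": 102, "field": "Project",   "value": "XYZ"},
]

def pivot(rows, entity_id):
    """Assemble one record by pivoting rows; costly at database scale."""
    return {r["field"]: r["value"] for r in rows if r["entity_id"] == entity_id}

print(pivot(tall_skinny, 101))  # {'Project': 'ABC', 'Yield_pct': '87'}

# New model: one short, fat table per entity type (e.g. antibody variable regions),
# with one column per custom field, so a record is a single-row lookup...
variable_region_fields = {
    101: {"Project": "ABC", "Yield_pct": 87},
    102: {"Project": "XYZ", "Yield_pct": 92},
}
# ...plus a companion table holding the values of multi-select fields.
variable_region_multiselect = {
    (101, "Species"): ["human", "mouse"],
}

print(variable_region_fields[101])  # no pivot needed
```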
Moving from Register and BioRegister to Inventory: just a quick reminder that Inventory is where we track where materials exist and how much of them we have. There are two concepts in our latest inventory model: locations, which are the buildings, the freezers, the shelves, and the boxes; and containers, which are largely one-to-one with the samples held in them and which different users have different access to, and so on. One of the problems with updating those records, both locations and containers, in the initial releases of Inventory was that the export workflows only allowed you to export the entirety of your inventory system. That obviously has performance implications for huge inventory systems, and if you're only interested in updating, say, the amounts in a single box, you care about ten rows but you've got a hundred thousand rows in your spreadsheet. It's a very clunky workflow. So we wanted to solve that problem and make it easier to generate spreadsheets that contain just the data you want. As an admin, in the admin settings, when you go to the export and import tools that let you create a spreadsheet, make some changes, and re-upload it, you'll now have the option to keep exporting everything, but you can also cherry-pick, whether you're in locations or containers. For locations, the export includes what you've cherry-picked plus anything nested below it. For containers, the export contains the containers in the selected locations; in this case I've selected box one and box two, so I get just the containers that were in those boxes in my export. That should hopefully vastly improve those workflows for admins. And we've also included an API for this, so if you ever want to use the APIs to generate the spreadsheet, we've got an endpoint for that which is pretty easy to use and is documented in our Swagger.
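As a rough sketch of what driving that scoped export from a script could look like, here is a hedged example; the endpoint path, parameter names, and authentication shown are placeholders I've made up rather than the documented API, so check the Swagger page Troy mentions for the real signatures.

```python
# Hypothetical sketch only: the real route, parameters, and auth scheme live in the
# Swagger documentation; everything named here is a placeholder.
import requests

BASE_URL = "https://example-dotmatics-server/api"       # placeholder host
HEADERS = {"Authorization": "Bearer <api token>"}       # placeholder credential

# Ask for a containers spreadsheet scoped to two selected locations (e.g. two boxes).
resp = requests.get(
    f"{BASE_URL}/inventory/containers/export",          # placeholder route
    params={"locationIds": "1234,1235"},                # placeholder parameter
    headers=HEADERS,
    timeout=60,
)
resp.raise_for_status()

# Save the scoped spreadsheet, edit the handful of rows you care about, then
# re-upload it through the import tool (or its corresponding endpoint).
with open("containers_box1_box2.xlsx", "wb") as fh:
    fh.write(resp.content)
```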
Switching from registration and LIMS-like functionality into ELN functionality now. Most of you that use our ELN configure your protocols to have some custom fields on the Info tab; it's fairly common and widely implemented. There are two ways to do it. You can do it individually, protocol by protocol, but there are certainly use cases where you've got a field you want to apply to all or many of your protocols, and there's a way to do that: a separate section where you configure these per-experiment properties. Sometimes we call them per-study properties; we'll call them PSPs or PEPs. These are those fields, and cost center is probably a good example of one you might have deployed across all of your ELN protocols. Where you would set these up had some restrictions, and we've now made it far easier for admins to manage these fields. Those restrictions were in the admin settings where you create these custom fields, and we'd purposely put them there for change management's sake, so that changes to a per-study property couldn't be made once it was in use. The limitation ultimately manifested like this: when you'd created a property like cost center and later added a new protocol to your system, or versioned a protocol, there wasn't a way to add the field to those protocols, because you were prevented from getting to those config pages, partly because we didn't want you to change the config. It was really challenging to work around. What we've done now is, essentially, enabled you to add protocols even when the field is in use.

If the field isn't in use yet, you still have free rein to delete it or change the logic behind it. Once it's in use, there's a new button here to add protocols. It launches an interface that tells you where the field already exists, in this case in the Discovery ELN, and gives you a list of additional protocols where you can multi-select whichever protocols you want to add the field to. Select them, click update, and they're added. For change management reasons, again, we haven't supplied a remove, just an add, so be aware of that.

Let's talk about screening and multi-variable export. I think in the 7.4 release you got an introduction to multivariable table export as we delivered it in Data Discovery, and we've continued that into Screening in the 7.5 release, for those of you using Screening who want to do richer analysis or charting of your data in Prism. In the past we had the XY format, which is a column of concentrations and a column of responses, x and y, in the table. But we found that this lacks a lot of the key metadata for making decisions on, and for grouping, the samples, and in Prism it made the data hard to work with. One of the solutions for that is multi-variable tables. A multi-variable table is a newer table type in Prism (it's been around for a while now), and we now enable an export to that data type, which I think really empowers your users to do much deeper and richer analysis and plotting. I'll show you that here. Here's the export from Screening; the workflow is, on the Layers tab, open the hamburger menu and choose export to Prism multi-variable table. Any of the available columns in this layers table now gets included in the export, which allows you to make decisions and understand your data. It includes the well ID and the plate ID, which help with those decisions, and, most importantly I think, you get things like the well properties and sample properties, which could be which cell line was used or which cofactor was in that well, and which allow you to group the data in scientifically meaningful ways. So multivariate tables are now available for Prism. The other thing we've done here is that it used to take two export steps for any one of the layers you wanted to export, raw data and analyzed data, and you ended up with two files. We've now taken the data from all of the available layers and created a single multi-variable table for them, so you don't have to do multiple exports as a user.
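To picture the difference, here is a small illustrative sketch with invented values and column names chosen for the example rather than taken from the actual export: the old two-column XY layout versus a multi-variable-style layout that carries the grouping metadata alongside each response.

```python
import pandas as pd

# Old XY-style export: just concentrations and responses, no grouping metadata.
xy = pd.DataFrame({
    "Concentration": [1e-9, 1e-8, 1e-7, 1e-9, 1e-8, 1e-7],
    "Response":      [5.0, 42.0, 88.0, 3.0, 21.0, 55.0],
})

# Multi-variable-style export: one row per well, with the identifiers and
# well/sample properties that let you group and filter in Prism.
multivar = pd.DataFrame({
    "Plate ID":      ["P1", "P1", "P1", "P1", "P1", "P1"],
    "Well ID":       ["A01", "A02", "A03", "B01", "B02", "B03"],
    "Sample ID":     ["ABC-14", "ABC-14", "ABC-14", "ABC-15", "ABC-15", "ABC-15"],
    "Cell Line":     ["HEK293", "HEK293", "HEK293", "CHO", "CHO", "CHO"],
    "Concentration": [1e-9, 1e-8, 1e-7, 1e-9, 1e-8, 1e-7],
    "Response":      [5.0, 42.0, 88.0, 3.0, 21.0, 55.0],
})

# e.g. group the responses by cell line before plotting or fitting.
print(multivar.groupby("Cell Line")["Response"].mean())
```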
Alright, I'm going to switch now from Screening to a short introduction to our formulation ELN, our process capture and multi-experiment workflow capabilities. They're relatively new; we really started investing in this area again in the 7.1 release. Most of you have probably heard a little about process capture and MEW, but I don't think it's ingrained for everyone yet, so I'll do a little review to add context for what's new in 7.5. It is still among the highest-growth areas of our product, where probably the most active development is going on. So let me quickly review what our formulation ELN is.

There are really three different concepts of ELN protocol. One is process capture, to capture the processes that make something, the recipes, how something is made. Another is test experiments, where you collect data and analyze the performance of the things you've made. And then multi-experiment workflows end up being the hub where you build your hierarchical relationships of multi-experiment, multi-tier workflows in the generation of a product, collating all of that test data into one central place. Those are the three concepts of ELN types. Process capture specifically, as I mentioned, has a handful of dedicated forms, but ultimately what it does is let you capture the step-by-step processes, the actions and the parameters involved in making something, giving those things an ID, a formulated product ID, and capturing not only what you planned to do in the design, this is how something gets made, but also what actually happened. I wanted to add five grams, but I added 4.9 grams, and sometimes that delta is important for your decision making, so there are interfaces for that. Multi-experiment workflow, as I mentioned, is really an experiment relationship tool first and foremost, where you build these visual hierarchies of experiments, where the precursors you make in one experiment become the ingredients of downstream experiments, and you also link in, in green here, where you've run a test. You get all of that information in summary tables on the right, and by building the experiment hierarchy you are also building a genealogy of the products you've made across those tiers, so if I've made this product, I can see the raw ingredients and precursor formulations that went into it. So that's my summary of PC and MEW.

Let's talk about what's new in the 7.5 release. First and foremost, we've eliminated the restriction of having to use a formulation-type ELN protocol. Protocol types, screening type, notebook type, and formulation type are probably the main three people use, and it used to be that the settings for making formulated products and those forms were only available when the protocol type was formulation. You couldn't change the type once you'd started using it; you could clone it and change the clone, but that was it. We wanted to make sure you could use these forms in your existing notebook protocols. Say you've got one where you describe the synthesis, and the next step is to formulate that into a tablet: you want to leverage these process capture tools in those existing experiments, with your existing config and existing data model, and marry the two up. So to do that, we've removed the formulation-protocol restriction. As an admin, when you're setting up, you can see here this is a Notebook protocol, and those process capture interfaces are now available, so you can start adding them to your protocol. That's true for process experiments as well as multi-experiment workflows. And here's another multi-experiment workflow: first of all, you can add this tab to any protocol type now, and when you do, and you start to build these networks of experiments, you add nodes to build the tiers through these buttons here. In the past, the list of available protocols there was filtered on formulation type, and we've removed that restriction.
So you can see what I've done here: I've got a precursor experiment that feeds into a downstream, second-tier chem ELN experiment, both notebook protocol types, so you can now build the network out of any experiment type on your platform. And obviously what that means is that you can track the genealogy of the things you've made. In this chem ELN experiment I'm describing a 4-aminophenol synthesis that is an important ingredient in a downstream experiment that uses it to make acetaminophen, so you can track all those raw ingredients across experiment networks, as a potential use case for our process capture ELN married in with the chem ELN. As I mentioned, there's a large scope to our investment in process capture and MEW; this is a list of some of the other things I'm not going to talk through explicitly, just to make you aware. Very quickly, in MEW we've added some new options for how products are inherited by downstream child experiments; there's a handful of export tools and wizards in that interface that let you select the data you want and get it out in a usable format; we've updated the JSON export; and we've improved some of the ergonomics of how a user actually makes those selections of things they want to export from these interfaces.

I'm going to switch to the last topic of the day, and that's integrations and APIs. Everyone here, I'm assuming, is a Dotmatics ELN platform user, and I think most of you have probably heard something about Luma and know a little about it, but probably aren't actively using it. We're building some tools that I think will make the option of adopting Luma more appealing to you. What would you use Luma for? It's a good question; let's start there. Things you can do with Luma that are interesting as an ELN customer: first, I think, is LumaLab Connect, which grew out of our old BioBright module, where you can automatically ingest instrument files, get them into your file warehouse, version them, and so on, and have your file management pulled directly from instruments. Another great use case in Luma is our experiences. These use Sigma, our tool for advanced charting and visualization of data as well as analysis and calculations; sure, you could export to Prism or to Excel, but Sigma lets you do that right within the system where you have your data warehouse. And then Luma is also our long-term solution for supporting AI and machine learning. If you're asking questions of your data, which one of these compounds or samples or products performs best out of a huge dataset, or none of these are what I want, so what parameter do I need to change to make this performant, those are the kinds of questions you ask with AI, and Luma is the solution for asking and answering them. So we want to integrate that and make it more seamlessly joined with the ELN. Part of how we do that, I think, is to continue to invest in our APIs, so that you can use the integration framework, which, again, many of you have heard about, our web-based interface for managing complex data pipelines, to leverage those APIs.
That means making the API calls to get data between the systems and automating those things without a user having to do anything but click a button. So that's generally our strategy for how the ELN fits with Luma: they don't replace each other, they work better together, adding value to your data on one side and capturing your data on the other. So what are some of the areas? What we're building on to make all these workflows possible is our toolkit of APIs, and we've got a couple of new ones, delivered in 7.5, that I want to talk about in the next few slides. It's a continuing area of investment for us; we've also got new APIs for process capture and MEW coming in the next release. Here we're going to focus on the Cascade and experiment relationship APIs.

I realize not everyone uses Cascade, so just a quick reminder of what Cascade is. It's our module in the ELN for requesting a laboratory service: you need something made, or you need something measured on a sample. You provide the samples, the request goes into a queue, and you can assign it to work groups or individual scientists so they can come in and pick it up. When they do, the system creates the experiment, gives it an experiment ID, and can even populate the experiment with those samples so that testing data can be captured against them. It's a system where you not only manage these requests but also track the status of the resulting experiments from the summary hub. That's Cascade. We wanted to enable everything you can do in the Cascade UI to be done via APIs, for a couple of reasons. One is that you can start to automate ELN-Cascade integrations with your third-party systems and third-party data sources. It also enables more automation as you need it: as an admin, you can start to configure really wild and unique workflows, purpose-built for what you're trying to do. So we've got the whole set: everything you can do in Cascade in the UI, you can now do via API. I'm not going to show you a live demo, just a set of screenshots. As an example, I'm using Postman here. This is how I create a request, using the create-request endpoint: I provide all the samples I want in the request and the service I want it to go to, in this case an analytical service. Those services are linked to a protocol in the ELN that has per-study properties, and I fill in that data. When I click post, I get a nice response that says it worked, 200 OK, with a summary of what was created. And when I go into the Cascade UI, which you see here, I've filtered on one of the compounds in the list, and you can see I have a request under my analytical service to do something with ABC-14. All done using the APIs. Similarly, you can work this all the way through to creating experiments; we already have standalone experiment-creation APIs, and here we can manage that through the Cascade workflow.

What we've heard from a lot of customers is: that's great, they use the APIs often for creating experiments in automated workflows, but they also need to create relationships between those experiments, and there wasn't an API for that. You could create the experiments, but then, as a user, you still had to go into the Info tab of an experiment and use the little chain icon to build the relationships by hand. We've now provided API endpoints so you can automate that as well as part of your API workflows. There's an endpoint to show the relationships that exist for a given experiment, one to create relationships, and, if you made a mistake creating one, an endpoint for deleting relationships. So what does that look like? Here is the relationship-creation POST endpoint with its parameters: you fill in the experiment, you fill in the one you want to link it to, and every one of those relationships has a relationship type that you configure as an admin; you can provide either the ID of that type or its name. When you click send, you get the response that the relationship was successfully created, and in this case you can see I was creating a relationship of type "linked" between experiments 139214 and 139216. If I went into that experiment and clicked on the link, I'd see that I've created that relationship, all done via APIs.
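As a rough illustration of what those calls might look like from a script, here is a hedged sketch; the routes, payload field names, and authentication are placeholders I've invented to mirror the workflow described above (the webinar only shows Postman screenshots), so refer to the API documentation and Swagger for the real signatures.

```python
# Hypothetical sketch only: routes, payload keys, and auth below are invented
# placeholders that mirror the described workflow, not the documented API.
import requests

BASE_URL = "https://example-dotmatics-server/api"    # placeholder host
HEADERS = {"Authorization": "Bearer <api token>", "Content-Type": "application/json"}

# 1) Create a Cascade request: samples, target service, and per-study property data.
req = requests.post(
    f"{BASE_URL}/cascade/requests",                  # placeholder route
    json={
        "service": "Analytical Service",             # service the request goes to
        "samples": ["ABC-14", "ABC-15"],             # samples to be worked on
        "properties": {"Cost Center": "R&D-001"},    # per-study property values
    },
    headers=HEADERS,
    timeout=30,
)
req.raise_for_status()                               # expect 200 OK with a summary
print(req.json())

# 2) Link two experiments with a configured relationship type (by ID or by name).
rel = requests.post(
    f"{BASE_URL}/experiments/relationships",         # placeholder route
    json={
        "experimentId": 139214,
        "linkedExperimentId": 139216,
        "relationshipType": "linked",                # or the type's numeric ID
    },
    headers=HEADERS,
    timeout=30,
)
rel.raise_for_status()
print(rel.json())                                    # confirms the relationship was created
```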
So expect more to come; I think these are some good additions to the APIs, and we'll continue to extend them in the future. And that, I think, wraps up the highlights of the big things delivered in 7.5. It looks like we're right on time: there are three minutes here where we can answer some questions, and we'll try to get to as many as we can. If we don't answer your questions in the chat or in the Q&A section before the webinar ends in a minute, we'll follow up and reach out to you offline with your answers. If you have any other questions after the meeting wraps up today, you're probably mostly aware there are resources already: you can go to our website and find the contact links, and there are a couple of email addresses here for technical support or the community; those will certainly get your questions sent our way. A reminder that within the top bar, regardless of where you are in our platform, from 7.1 through to 7.5, the current release, there's that info link that will lead you right to the documentation. And lastly, if you can't find the answers there, contact support and they'll steer you in the right direction. So I think that wraps up our review. I appreciate everyone's attendance today. Thanks for joining us.
Dotmatics 7.5 streamlines core platform workflows with UI-based MFA resets, cleaner Data Discovery exports (conditional formatting preserved and optional non-duplicated structures), and more consistent chemical structure depiction from Elemental into Reaction Workflow. Canvas tables also get a more Excel-like experience with single-click editing, stronger keyboard navigation, and per-user experiment-level settings that persist.
Across ELN modules, 7.5 expands post-creation usability of Register custom fields, adds enhanced stereochemistry and optional performance improvements in BioRegister, and makes inventory export/import more targeted by allowing scoped exports. It also introduces Prism multivariable exports for screening, removes formulation-only restrictions for process capture and multi-experiment workflows, and adds APIs for Cascade plus automated experiment relationship creation and management.



