Enabling Scalable Data Science Pipelines with MLflow at Thermo Fisher Scientific


Thank you all for joining my session today. My name is Allison Wu, and I'm a data scientist in the Data Science Center of Excellence at Thermo Fisher Scientific. Today I'm going to talk about how we enable scalable data science pipelines with MLflow and the model registry at our company.

Before we go into any details, I'd like to give you a high-level summary of what we have achieved. We have standardized the development of machine learning models by integrating MLflow tracking into the development pipeline. We have also improved the reproducibility of machine learning models by integrating GitHub and Delta Lake into our development and deployment pipelines. And we have streamlined our development and deployment processes for machine learning models on different platforms through MLflow and a centralized model registry. Why these were so important for our team to set up is highly relevant to what we data scientists do in our Data Science Center of Excellence.

So what do data scientists in our Data Science Center of Excellence do? We generate a lot of novel algorithms that can be applied across different divisions, and we work with cross-division teams to migrate models, and in these cases model standardization is very important for both productivity and reproducibility. A lot of migration and standardization is often needed to enable new data science in a new division. While doing all this, we are also responsible for establishing data science best practices across the company. There are multiple areas of data science that are rapidly growing at Thermo Fisher, such as operations, human resources, and commercial marketing. We are actively engaging all of these areas, but today I'm going to focus on commercial marketing.

So what does the data science lifecycle in commercial marketing look like? First, we have different data pipelines piping in all kinds of data, including transactions, web activity, and other data from customer interactions. All these pipelines feed into the data lake that data scientists consume for model development and deployment, and we have both machine learning models and rule-based legacy models currently running in production. Development and deployment is the phase we will be focusing on a lot today. For this process we rely on a lot of new technology in Databricks, such as Spark, MLflow, and Delta Lake, and we also have GitHub integrated throughout to help us with reproducibility.

After we develop these models, they deliver results and recommendations to different channels, such as email campaigns or the website, and we also generate prescriptive recommendations for our sales reps through Salesforce and analytics dashboards. All of these are meant to provide the most relevant offers to our customers. How a model performs is measured by the revenue generated or the engagement for each recommendation, and all of this eventually feeds back to the model development and deployment phases.

It could also potentially feed back to the data processing pipelines, for example to bring in new data that helps us understand our customers.

Since we are focusing on the model development and deployment cycle, let's take a closer look at what each phase involves. The development phase involves exploratory analysis and model development: feature engineering, feature selection, model optimization, all that good stuff. When a data scientist has developed their model to the point where it's ready for deployment, we move the model from the development environment into the production environment; this is the process we call deployment. During the deployment phase, when a model is running in production, it can go through a few different processes: for example, it has to run daily scoring, and it may also need to be retrained or re-tuned at regular intervals. All of this happens in production. In order to monitor these processes, we have another piece, model management, that monitors all the production runs to make sure our models are producing accurate results; it can also alert us whenever there are weird events going on for models running in production. A lot of the time we also track feedback through this process, and that feedback is fed either to the deployed models in production or all the way back to the development phase, for things such as new model development.

One stage I'm not going to talk about today, but which is also very important, is delivery. A lot of delivery is done using the recommendations generated in the production environment, so production results feed into different channels such as web recommendations or email campaigns.

Let's take a closer look at an example model in our pipeline. This is a model that generates product recommendations based on different customer behaviors, such as web activity and sales transactions. It spent about six to eight weeks in exploratory analysis, model development, and prototyping, and after development it was moved to production, where it mostly runs in two ways. First, it does daily scoring: it generates new input matrices based on new data and runs them through the same model every day, to make sure we get the most accurate predictions given the newest data. Second, the model is retrained with the latest data every two weeks. The recommendations from this model are then delivered through email campaigns and through our sales dashboard, to suggest which customers are the best ones to engage for this specific product. The very last part is management: this production process is also monitored in our production environment through MLflow.

What we used to do in development was this: we had a bunch of Databricks notebooks, with no version control for any of them and no unit testing for our feature functions, and regression testing was very hard to do, especially if you had inherited a notebook from a previous colleague. This is a situation we are probably all pretty familiar with: we have multiple versions of "final" documents and "final" notebooks, and we confuse ourselves about which one we should actually be using.

What we do now is this. In the development environment we still use Databricks notebooks for exploratory analysis and feature engineering, but once we feel a feature has matured, we write it into Python modules and functions that are version-controlled in GitHub, and each machine learning feature is independently testable and shareable. Aside from integrating GitHub, we also integrate Delta Lake to version-control the data used to train our models. Combined with MLflow, this lets us track machine learning model development, so we can follow all the hyperparameter tuning and how feature selection is going across different experiments. By the time we feel comfortable with the model in development, we can register it into the development-stage model registry. This makes regression testing against previous versions of the model way easier, because it provides a clean interface for us to pull down a previous version and see how the two models compare. I will have a few demos on that as well.

Here are a few scenarios we can look at to see how exactly this improves our development process. First, tracking feature improvements becomes easier. How so? We have all run into this: your boss comes over and asks, "What are the important features in this version versus the previous version?" What we used to do was say, "Let me find out how the features do in my model-version-number-10 notebook," and sometimes I wish I had saved a screenshot of the feature importance figure, and I didn't. Too bad. This is something we are all familiar with: we have so many different "final" versions that we don't even know which one to pull to get the data from a previous version.

How do we improve that? Now we can just pull it from MLflow. Here is how it looks. We can see there are multiple MLflow runs logged to an MLflow experiment, and if I want to compare with a previous version, I can just click and choose the models I want to compare, then go into each one and see the feature importance figure logged together with the model. This way we no longer need to worry about missing the feature importance figure, because we log it through MLflow and it will always be tracked.
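As a rough illustration, here is a minimal sketch of logging a feature importance figure alongside a run. The model and data here are toy stand-ins, not our actual internal code:

```python
import matplotlib.pyplot as plt
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for a real training set and model (hypothetical).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]
clf = RandomForestClassifier(random_state=0).fit(X, y)

with mlflow.start_run():
    # Plot and save the feature importance figure for this run.
    fig, ax = plt.subplots()
    ax.barh(feature_names, clf.feature_importances_)
    ax.set_xlabel("importance")
    fig.savefig("feature_importance.png")

    # Log the figure and the model together, so the figure can later be
    # compared side by side across runs in the MLflow UI.
    mlflow.log_artifact("feature_importance.png")
    mlflow.sklearn.log_model(clf, "model")
```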

Sharing machine learning features also becomes a lot easier. A common scenario is that a colleague comes over and asks, "I really like the feature you used in your last model. Can I use it as well?" What we used to do was say, "Just copy and paste this part of the notebook. Well, I have a slightly different version in this other notebook, but I think I might have used this one." This is usually not the greatest idea, because we oftentimes confuse ourselves about which version to actually share, and it's also really hard to track changes to these functions over time.

Now, with GitHub integrated into our development workflow, we can say: "Sure, I added the feature to the shared machine learning repo. Feel free to use it by importing the module." What you can see on the side here is Magma, our internal shared machine learning repo. All these feature functions are testable: we write unit tests alongside them to make sure they behave in all kinds of situations, and to make sure that if anyone modifies a function in the future, it doesn't break other people's code.

What's even cooler is that you can also log the exact version of the repo you use into MLflow. Here you can see that we log not just the environment setup, the conda.yaml, but also the source packages (there is a small sketch of what this could look like after this section). That's how we make sure we always lock the exact version, and it means that even if the repo continues to evolve after your model ships, you can still trace back exactly which version you used for your model. This way you can always reproduce the exact same development.

So what did we learn from improving this development process? Reproducing model results relies not just on version control of code and notebooks, but also on version control of the training data, the environment, and the dependencies, and MLflow allows for tracking all of these things needed to reproduce a model's results. Integrating GitHub also allowed us to establish best practices for accessing our data warehouses, standardize our machine learning models, and really encourage collaboration and review among different data scientists. Personally, I think the last one is a big plus: data scientists often work in silos, and having a collaborative platform is so important for data scientists to grow.
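Here is the sketch mentioned above: one hedged way to pin the exact repo version to a run, assuming the shared feature repo is a local git checkout that builds as a Python package. The path and tag names are hypothetical placeholders, not our actual setup:

```python
import subprocess
import mlflow

REPO_PATH = "/Workspace/Repos/shared-ml-features"  # hypothetical checkout

with mlflow.start_run():
    # Record the exact commit of the shared feature repo used by this run.
    commit = subprocess.check_output(
        ["git", "-C", REPO_PATH, "rev-parse", "HEAD"], text=True
    ).strip()
    mlflow.set_tag("feature_repo_commit", commit)

    # Build a source distribution and log it, so the run carries its own
    # installable copy of the code even if the repo later changes.
    subprocess.check_call(["python", "setup.py", "sdist"], cwd=REPO_PATH)
    mlflow.log_artifacts(f"{REPO_PATH}/dist", artifact_path="source_packages")
```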

OK, so let's talk about deployment. What happens all the time is that things work in development and everything breaks in production. How can we streamline that process? What we used to do was manually import and export all the Databricks notebooks needed for deployment, and manually set up all the different clusters in production based on the clusters in development. This made troubleshooting super hard, and what often ended up happening was a data scientist coming in on a Saturday to sit alongside data engineers and make sure their model could be deployed correctly without errors. This is not something we want, so we have tried really hard to streamline this process.

What we now do is register our model into the development model registry, and then move all the artifacts needed to deploy the model from the development environment into the production environment, by registering the model and copying all its artifacts into the centralized model registry in our production environment. This also makes regression testing within the production environment very easy: you can move the model first into staging and then into production, and now that the staging and production models are in the same environment, you can easily do comparison and testing within that one environment.

This also gives us an easier way to track and monitor our models. In the centralized model registry we now have different versions of models, and we can manage across the different versions and clearly see which ones are in production and which are in staging. We can use notebooks to set up all these model pipelines, and we use MLflow to track how the model actually performs in the production environment.

You can track specifically how the model's output behaves, so whenever there are weird events going on, we are the first to get alerted.

Another very important thing is that the centralized model registry allows us to manage models across different environments and different platforms. Some of our data scientists on different teams like to use Databricks, and some of us like to use SageMaker, and these can all register through the same centralized model registry and be deployed to different environments based on the original input.

Here I'm going to give a short demo of how we can register a model from the development environment into production. We register models from the development shard to the centralized model registry on the production shard; for today's demo I will deploy to our test shard instead of our actual production shard. In reality, this is a way to use the model registry as a hub to manage models developed on different platforms. This is a new feature for MLflow and only works with MLflow 1.8 and above.

Before we start anything, we need to set up the credentials, so that when you reach out to the production shard it can authenticate that it's you. You'll need to create a token on your destination shard and use that token for the handshake from your source shard. You can store these credentials in your Databricks secrets, creating a Databricks profile here on your local shard. Now you're all set for the connection between the source and destination shards.

Then we can start finding the model that we actually want to register from the source shard to the destination shard. To find the run ID, you can pull it either from the model registry or from MLflow experiments; however, it's usually better practice to deploy to production using models that you have registered and promoted in the model registry, so that is what I'm going to demo today. Here is how you can create a client for the tracking server, and this is how you specify which model you want: this call grabs the latest version of the production model in the model registry.
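As a rough sketch of these steps (the profile and model names are hypothetical placeholders, not our real ones):

```python
from mlflow.tracking import MlflowClient

# Client for the local (source) shard, where the model was developed.
local = MlflowClient()

# Client for the remote (destination) shard. "databricks://production"
# assumes a Databricks CLI-style profile named "production" whose host
# and token (created on the destination shard) are already stored on
# the local shard, e.g. via Databricks secrets.
remote = MlflowClient(tracking_uri="databricks://production")

# Grab the latest version of the model promoted to Production locally;
# its `source` field holds the absolute artifact path of the run.
latest = local.get_latest_versions("product_recommender", stages=["Production"])[0]
print(latest.version, latest.source)
```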

That call will give you the actual absolute path of all the artifacts you registered for the model. By parsing this artifact path, you can gather the experiment ID and run ID as well: the experiment ID is usually this part, and the run ID is this part.

After we have the experiment ID and run ID and know exactly where the model is, we can transfer the model from the local workspace to the destination workspace. Here is how you can do it: you just need to give it a run ID and the artifact path, and it will copy everything over. Here you can see that it is copying over the model we specified, and in this case it preserves the exact same path between the destination and the source shard, so it also preserves the experiment ID and run ID.

After that is done, you can create the registered model on the central model registry and point it at that model by registering it with the run ID you just transferred. As I said, we are preserving the run ID, so the run ID on the source shard and the destination shard will be exactly the same. In order to create that model registration on the remote server, you have to create a remote client using the tracking URI you just set up, and this is how you create a model version using the model you just transferred from the source to the destination shard.

Then you can also update the metadata on the remote central registry. I usually save the source workspace information and the run ID, and for easier traceability you can also put in the run URL. This is what it looks like: on the destination shard you can see it has the source workspace and the run ID, and also the run URL, which points back to the original experiment run. I can actually click on that to see what's over there, what the performance looks like, and how I could reproduce this model; I also included the feature importance figure showing how the model performed. After all this preparation, you can also transition the model version to production if you want, all through the remote client you set up on the local shard.
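Put together, a hedged sketch of the registration side might look like this. The model name, profile, run ID, and artifact path are placeholders, and the artifact-copying step itself is elided since it depends on your transfer tooling:

```python
from mlflow.tracking import MlflowClient

# Remote client pointed at the central registry, using the same
# hypothetical "production" profile as before.
remote = MlflowClient(tracking_uri="databricks://production")

run_id = "1234567890abcdef"  # preserved from the source shard (placeholder)
source = (                   # absolute artifact path of the transferred model
    f"dbfs:/databricks/mlflow-tracking/42/{run_id}/artifacts/model"
)

# Create the registered model once per model name, then add this run
# as a new version pointing at the copied artifacts.
remote.create_registered_model("product_recommender")
version = remote.create_model_version(
    name="product_recommender", source=source, run_id=run_id
)

# Save provenance metadata so the version traces back to the source shard.
remote.update_model_version(
    name="product_recommender",
    version=version.version,
    description="Source workspace: dev shard; source run ID: " + run_id,
)

# Optionally promote the new version.
remote.transition_model_version_stage(
    name="product_recommender", version=version.version, stage="Production"
)
```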

So that is how you can register a model to the centralized model registry. What I'm going to demo next is how you can actually use the model. Remember what I mentioned: for our models in production, a lot of the time we use them for daily scoring, and sometimes they need to be retrained or re-tuned, maybe every two weeks or every month. So this is a demo of how I can use the model in the production model registry to do daily scoring and use MLflow to monitor the output.

First, similar to what we just demoed in the other notebook, we get the artifact path for the model. One thing that is very important, as mentioned before, is making sure we install all the dependencies needed to run the model. Here is how I unpack all the source packages that were logged together with the model: I pull them off from that artifact path and unzip all the packages into the environment. Then I install the packages, and now my environment is all set up and I can run the daily scoring.

Here I start an experiment run so that I can log all the metrics I want to monitor for this model. In this case I'm only doing scoring, so I run the scoring, bringing in the modules I just installed from the source packages; that means I'm using the exact same functions to produce the input matrices. After I produce the input matrices, I can version them in Delta Lake: here I overwrite and save the input matrices into a Delta Lake table.

After I've generated the input matrices, I can load the model directly from the model registry. By specifying that I want the production stage, I just need to say which version I want to run and which model registry to get the model from. This way I get the model, run it on the new input matrices I generated that day, and get all my predictions.

Again, I log all the parameters I used for this run: for example, the artifact path, the stage of the model, the model registry name, and the prediction date I used to generate the input matrices. Sometimes I also log some metrics for scoring, as sketched below.
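A minimal sketch of such a scoring run, assuming a Databricks notebook where `spark` is available and `features_df` is the day's Spark DataFrame of input matrices (the table path, model name, and dates are hypothetical):

```python
import mlflow
import mlflow.pyfunc

FEATURES_PATH = "/mnt/delta/product_recommender_features"  # hypothetical

# Version the day's input matrix in Delta Lake: `overwrite` keeps the
# table current while Delta retains older versions for time travel.
features_df.write.format("delta").mode("overwrite").save(FEATURES_PATH)

with mlflow.start_run():
    # "models:/<name>/<stage>" resolves the model through the registry.
    model = mlflow.pyfunc.load_model("models:/product_recommender/Production")
    preds = model.predict(features_df.toPandas())

    # Log the run's parameters and simple monitoring metrics.
    mlflow.log_param("model_stage", "Production")
    mlflow.log_param("prediction_date", "2020-08-24")
    mlflow.log_metric("row_count", len(preds))
    mlflow.log_metric("positive_count", int((preds == 1).sum()))

# Later, to troubleshoot a past run, Delta "time travel" can reload the
# exact inputs that run saw:
yesterdays_inputs = (
    spark.read.format("delta")
    .option("versionAsOf", 41)  # or .option("timestampAsOf", "2020-08-23")
    .load(FEATURES_PATH)
)
```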

For example, I log the total row count, or, if it's a binary classification problem, how many positives and how many negatives I got. If I suddenly see a huge spike in positives, I know something is wrong with the model.

This whole process also makes regression testing a lot easier. A lot of times we want to compare two different models, and what we used to do was dig through all of a previous colleague's notebooks, only to find that maybe the performance metrics were never even logged. What we now can do is pull them directly from MLflow and compare them. Here there are two versions we can select and compare in the model registry, and we can see all the metrics we logged side by side for the two models, for example the area under the precision-recall curve on the validation set. We can also compare them this way, looking at how different features affect the different models and how they perform under the two different settings.

Troubleshooting transient data discrepancies also becomes a lot easier. A data engineer commonly comes over and asks why yesterday's run produced some weird, unusual predictions. This used to be really hard to troubleshoot, because maybe today's run had already overwritten the input tables. What we now can do, because we version-control all our source data through Delta Lake, is load the specific version that was used for yesterday's model (with the versionAsOf option, as in the sketch above) and troubleshoot the problem, which makes it a lot easier. You can see on the right side that even though it's the same Delta Lake file, we can load a specific version of it.

So what did we learn from improving this deployment process? Data scientists really like the freedom of trying out new platforms and new tools, but allowing that freedom of platforms and tools can be a nightmare for deployment in a production environment. However, the MLflow tracking server and model registry allow logging a wide range of flavors of machine learning models, from Spark ML to scikit-learn to SageMaker, and this really made managing models across different platforms in the same centralized workspace possible and easy.

Thank you all for joining our session today. I would really appreciate any feedback from you. Thank you all so much.
