
MLOps and Reproducibility

This repository contains the reproducibility projects in machine learning that I have worked on, replicating research papers published at NeurIPS.

The goal is to raise awareness and hold researchers accountable for published claims of better accuracy or more stable methods. It is important to return to the statistical principle of replication before accepting something as a genuinely novel contribution to a machine learning framework.

Because the knowledge and computing resources needed to reproduce these papers are not widespread, I am dedicating this work to that effort, together with the AI Model Share Initiative at Columbia University. I hope others do the same, not only as a great learning opportunity, but also as a way to democratize AI for all.

Index of Replicated Research Papers

  1. Title: "RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks"
     Authors: Leo Kozachkov, Michaela Ennis, Jean-Jacques Slotine
     Conference: NeurIPS 2022
     arXiv link: https://arxiv.org/abs/2106.08928
