Data Version Control (software)

DVC is a free and open-source, platform-agnostic version control system for data, machine learning models, and experiments.[1] It is designed to make ML models shareable and experiments reproducible,[2] and to track versions of models, data, and pipelines.[3][4][5] DVC works on top of Git repositories[6] and cloud storage.[7]

The first (beta) version, DVC 0.6, was launched in May 2017.[8] In 2020, DVC 1.0 was publicly released by Iterative.ai.[9]

Overview

DVC is designed to incorporate the best practices of software development[10] into machine learning workflows.[11] It does this by extending Git, the traditional software versioning tool, with cloud storage for datasets and machine learning models.[12]

Specifically, DVC makes machine learning operations:

  • Codified: it codifies datasets and models by storing pointers to the data files in cloud storage.[3]
  • Reproducible: it allows users to reproduce experiments[13] and rebuild datasets from raw data.[14] These features also make it possible to automate the construction of datasets and the training, evaluation, and deployment of ML models.[15]

DVC and Git

DVC stores large files and datasets in separate storage, outside of Git.[3] This storage can be on the user’s computer or hosted on any major cloud storage provider,[16][5] such as AWS S3, Google Cloud Storage, and Microsoft Azure Blob Storage.[17][18] DVC users may also set up a remote repository on any server and connect to it remotely.[3]

When a user stores their data and models in remote storage, a small text file is created in their Git repository that points to the actual data in the remote storage.[2][19]
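
For illustration, a typical sequence for putting a dataset under DVC control might look like the following (the file name, bucket, hash, and size below are placeholders, not taken from the cited sources):

  $ dvc init                                   # set up DVC inside an existing Git repository
  $ dvc add data/images.zip                    # track a large file; creates the metafile data/images.zip.dvc
  $ git add data/images.zip.dvc data/.gitignore
  $ git commit -m "Track raw images with DVC"
  $ dvc remote add -d storage s3://example-bucket/dvcstore   # register a default remote (placeholder bucket)
  $ dvc push                                   # upload the tracked data to the remote

  # Generated pointer file data/images.zip.dvc (values are placeholders):
  outs:
  - md5: 3a1b2c4d5e6f7890abcdef1234567890
    size: 104857600
    path: images.zip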

Features

DVC's features can be divided into three categories: data management, pipelines, and experiment tracking.[20][21][18]

Data management

Data and model versioning is the base layer[22] of DVC for large files, datasets, and machine learning models. It allows the use of a standard Git workflow, but without the need to store those files in the repository. Large files, directories and ML models are replaced with small metafiles, which in turn point to the original data. Data is stored separately, allowing data scientists to transfer large datasets or share a model with others.[6]

DVC enables data versioning through codification.[23] Users create metafiles describing which datasets, ML artifacts, and other features to track; based on these, DVC can capture versions of data and models, create and restore snapshots, record evolving metrics, and switch between versions.[6]
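
For example, switching the workspace back to an earlier data version can be done with ordinary Git commands followed by a DVC checkout (the tag and file names are placeholders):

  $ git checkout v1.0 -- data/images.zip.dvc   # restore the metafile as it was at an earlier tag
  $ dvc checkout data/images.zip               # restore the matching data from the cache (dvc pull fetches it from the remote if needed)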

Unique versions of data files and directories are cached[24] in a systematic way (also preventing file duplication). The working datastore is separated from the user’s workspace to keep the project light, but stays connected via file links handled automatically by DVC.[25]
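
A brief sketch of inspecting and tuning this behaviour; the link-type preference shown is one possible setting, not a recommendation from the cited sources:

  $ dvc config cache.dir                                   # show any configured cache location (defaults to .dvc/cache when unset)
  $ dvc config cache.type "reflink,hardlink,symlink,copy"  # prefer links over copies to avoid duplicating data
  $ dvc checkout                                           # re-link workspace files from the cache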

Pipelines

DVC provides a mechanism to define and execute pipelines.[26][27] Pipelines represent the process of building ML datasets and models, from how data is preprocessed to how models are trained and evaluated.[28] Pipelines can also be used to deploy models into production environments.

The DVC pipeline mechanism is focused on the experimentation phase of the ML process. Users can run multiple copies of a DVC pipeline by cloning a Git repository that contains the pipeline or by running ML experiments. They can also record the workflow as a pipeline and reproduce[29] it in the future.

Pipelines are represented in code as YAML[30] configuration files. These files define the stages of the pipeline and how data and information flow from one step to the next.
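
A minimal dvc.yaml sketch with two stages; the scripts, parameter names, and paths are placeholders:

  stages:
    prepare:
      cmd: python prepare.py data/raw data/prepared
      deps:
        - prepare.py
        - data/raw
      outs:
        - data/prepared
    train:
      cmd: python train.py data/prepared model.pkl
      deps:
        - train.py
        - data/prepared
      params:
        - train.epochs
      outs:
        - model.pkl
      metrics:
        - metrics.json:
            cache: false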

When a pipeline is run, the artifacts produced by that pipeline are registered in a dvc.lock file.[31] The lockfile records the stages that were run and stores a hash of the resulting output for each stage.[26] It is not only a record of the execution of the pipeline, but is also used to decide which stages must be rerun on subsequent executions.[28][20]
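
Running the pipeline with dvc repro produces or updates the lockfile; an excerpt might look like the following (hashes and sizes are placeholders):

  $ dvc repro          # execute the pipeline; unchanged stages are skipped

  # dvc.lock (excerpt):
  schema: '2.0'
  stages:
    train:
      cmd: python train.py data/prepared model.pkl
      deps:
      - path: data/prepared
        md5: 1f2e3d4c5b6a79801234567890abcdef.dir
        size: 20480
        nfiles: 2
      outs:
      - path: model.pkl
        md5: 0a1b2c3d4e5f67890123456789abcdef
        size: 8192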

Experiment tracking

Experiment tracking allows developers to explore, iterate and compare different machine learning experiments.[22][20]

Each experiment represents a variation of a data science project defined by changes in the workspace. Experiments maintain a link to the commit in the current branch (Git HEAD)[32] as their parent or baseline, but they do not become part of the regular Git tree unless they are made persistent.[33] This keeps temporary commits and branches from cluttering a user's repository.

Common use cases[34] for experiments are:

  1. Comparison of model architectures
  2. Comparison of training or evaluation datasets
  3. Selection of model hyperparameters
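
A sketch of how such experiments can be run and compared from the command line; the parameter name and experiment identifiers are placeholders:

  $ dvc exp run -S train.epochs=10        # run an experiment with an overridden hyperparameter
  $ dvc exp run -S train.epochs=20
  $ dvc exp show                          # tabular comparison of experiments, parameters, and metrics
  $ dvc exp apply exp-1a2b3               # bring a chosen experiment's results into the workspace
  $ dvc exp branch exp-1a2b3 tuned-model  # optionally persist it as a regular Git branch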

DVC experiments can be managed and visualized either from the VS Code IDE[35] or online using Iterative Studio.[36] Visualization[37] allows users to compare experiment results visually, track plots, and generate them through library integrations.

DVC offers several options[37] for using visualization in a regular workflow:

  • DVC can generate HTML files that include interactive plots from data series in JSON, YAML, CSV, or TSV format
  • DVC can keep track of image files produced as plot outputs[38] from the training/evaluation scripts
  • DVCLive[39] integrations can produce plots automatically during training
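
For instance, a data series written by a training script can be registered as a plot in dvc.yaml and rendered on demand (the file, field, and template names are placeholders):

  # Top-level plots section of dvc.yaml:
  plots:
    - eval/predictions.csv:
        x: actual
        y: predicted
        template: scatter

  $ dvc plots show     # render the configured plots to a local HTML report
  $ dvc plots diff     # compare plots across Git revisions or experiments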

The DVC VS Code extension

In 2022, Iterative released a free extension[40] for Visual Studio Code (VS Code), a source-code editor made by Microsoft, which lets VS Code users work with DVC inside the editor through additional user interface functionality.[41][42]

History

In 2017,[43][44] the first (beta) version, DVC 0.6,[45] was publicly released as a simple command-line tool.[44] It allowed data scientists to keep track of their machine learning processes and file dependencies using simple Git-like commands, and to transform existing machine learning processes into reproducible DVC pipelines. DVC 0.6 addressed the most common problems that machine learning engineers and data scientists were facing:[46] the reproducibility of machine learning experiments, data versioning, and low levels of collaboration between teams.

Created by ex-Microsoft data scientist Dmitry Petrov, DVC aimed to integrate the best existing software development practices into machine learning operations.[46]

In 2018,[47] Dmitry Petrov, together with Ivan Shcheklein, an engineer and entrepreneur, founded Iterative.ai,[4][48] an MLOps company that continued the development of DVC. Besides DVC, Iterative.ai is also behind open source tools such as CML and MLEM, as well as Studio, the enterprise counterpart of these open source tools.

In June 2020,[49] the Iterative.ai team released DVC 1.0. New features like multi-stage DVC files, run cache, plots, data transfer optimizations, hyperparameter tracking, and stable release cycles were added as a result of discussions and contributions from the community.

In March 2021,[50] DVC 2.0 was released, introducing ML experiments (experiment management), model checkpoint versioning, and metrics logging.

ML experiments: To reduce Git overhead when hundreds of experiments need to be run in a single day and each run requires additional Git commands, DVC 2.0 introduced a lightweight experiments feature. It allows users to automatically track ML experiments and capture code changes.

This eliminated the dependence on additional services[51] by saving data versions as metadata in Git, rather than relegating them to external databases or APIs.[52]

ML model checkpoint versioning: The new release also enabled the versioning of all checkpoints together with the corresponding code and data.

Metrics logging: DVC 2.0 introduced a new open-source library, DVCLive, which provides functionality for tracking model metrics and organizing them so that DVC can visualize the metrics with navigation through Git history.

Alternative solutions to DVC

There are several open source projects that provide data version control capabilities similar to DVC,[53] such as Git LFS, Dolt, Nessie, and lakeFS. These projects differ in how well they fit the needs of data engineers and data scientists, including scalability, supported file formats, support for tabular and unstructured data, the volume of data they can handle, and more.

References

  1. Template:Cite journal
  2. Template:Cite web
  3. Template:Cite web
  4. Template:Cite news
  5. Template:Cite news
  6. Template:Cite web
  7. Template:Cite web
  8. Template:Cite news
  9. Template:Cite web
  10. Template:Cite news
  11. Template:Cite web
  12. Template:Cite news
  13. Template:Cite web
  14. Template:Cite web
  15. Template:Cite web
  16. Template:Cite web
  17. Template:Cite news
  18. Template:Cite web
  19. Template:Cite web
  20. Template:Cite web
  21. Template:Cite web
  22. Template:Cite web
  23. Template:Cite web
  24. Template:Cite web
  25. Template:Cite web
  26. Template:Cite web
  27. Template:Cite web
  28. Template:Cite journal
  29. Template:Cite journal
  30. Template:Cite web
  31. Template:Cite web
  32. Template:Cite web
  33. Template:Cite web
  34. Template:Cite web
  35. Template:Cite web
  36. Template:Cite news
  37. Template:Cite web
  38. Template:Cite web
  39. Template:Cite web
  40. Template:Cite web
  41. Template:Cite web
  42. Template:Cite web
  43. Template:Cite web
  44. Template:Cite web
  45. Template:Cite web
  46. Template:Cite web
  47. Template:Cite web
  48. Template:Cite web
  49. Template:Cite web
  50. Template:Cite web
  51. Template:Cite web
  52. Template:Cite web
  53. Template:Cite web