# mxnet

**Repository Path**: daqingba/mxnet

## Basic Information

- **Project Name**: mxnet
- **Description**: Efficient and flexible distributed deep learning framework, for Python, R, Julia and more
- **Primary Language**: C++
- **License**: Apache-2.0
- **Default Branch**: master

## README

MXNet for Deep Learning
=====
[![Build Status](https://travis-ci.org/dmlc/mxnet.svg?branch=master)](https://travis-ci.org/dmlc/mxnet)
[![Documentation Status](https://readthedocs.org/projects/mxnet/badge/?version=latest)](http://mxnet.readthedocs.org/en/latest/)
[![GitHub license](http://dmlc.github.io/img/apache2.svg)](./LICENSE)

MXNet is a deep learning framework designed for both *efficiency* and *flexibility*. It allows you to mix the [flavours](http://mxnet.readthedocs.org/en/latest/program_model.html) of deep learning programs to maximize both efficiency and your productivity.

What's New
----------
* [LSTM example using the symbolic API](https://github.com/dmlc/mxnet/tree/master/example/rnn)
* [MXNet R package brings deep learning to R!](https://github.com/dmlc/mxnet/tree/master/R-package)
* [Note on the dependency engine for deep learning](http://mxnet.readthedocs.org/en/latest/developer-guide/note_engine.html)

Contents
--------
* [Documentation and Tutorials](http://mxnet.readthedocs.org/en/latest/)
* [Open Source Design Notes](http://mxnet.readthedocs.org/en/latest/#open-source-design-notes)
* [Code Examples](example)
* [Installation](http://mxnet.readthedocs.org/en/latest/build.html)
* [Features](#features)
* [Contribute to MXNet](http://mxnet.readthedocs.org/en/latest/contribute.html)
* [License](#license)

Features
--------
* Mix and maximize - mix all flavours of programming models to maximize flexibility and efficiency.
* Lightweight, scalable and memory efficient - minimal build dependencies; scales to multiple GPUs with very low memory usage.
* Auto-parallelization - write numpy-style NDArray GPU programs, which are parallelized automatically (see the sketch at the end of this README).
* Language agnostic - with support for Python, C++ and R, and more to come.
* Cloud friendly - directly load and save data from S3, HDFS and Azure.
* Easy extensibility - extending MXNet requires no GPU programming.

Bug Reporting
-------------
* To report a bug, please use the [mxnet/issues](https://github.com/dmlc/mxnet/issues) page.

License
-------
© Contributors, 2015. Licensed under an [Apache-2.0](https://github.com/dmlc/mxnet/blob/master/LICENSE) license.

History
-------
MXNet was initiated and designed in collaboration by authors from [cxxnet](https://github.com/dmlc/cxxnet), [minerva](https://github.com/dmlc/minerva) and [purine2](https://github.com/purine/purine2). The project reflects what we have learned from those past projects, and combines their important flavours to be efficient, flexible and memory efficient.
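
NDArray Example
---------------
To illustrate the "Auto-parallelization" feature above, here is a minimal sketch of MXNet's numpy-style NDArray API from the Python binding. It is not taken from this repository's examples; it assumes the `mxnet` Python package is installed, and the optional GPU line requires a CUDA-enabled build.

```python
import mxnet as mx

# Create arrays on a device context. Operations are queued to MXNet's
# dependency engine and executed asynchronously; operations whose inputs
# do not depend on each other may run in parallel.
a = mx.nd.ones((100, 100), ctx=mx.cpu())
b = mx.nd.ones((100, 100), ctx=mx.cpu()) * 2

c = mx.nd.dot(a, b)  # c and d do not depend on each other,
d = a + b            # so the engine is free to schedule them concurrently

# asnumpy() blocks until the result is ready and copies it to a numpy array.
print(c.asnumpy()[0, 0])  # 200.0
print(d.asnumpy()[0, 0])  # 3.0

# To target a GPU instead, place the arrays on a GPU context, e.g.:
# a = mx.nd.ones((100, 100), ctx=mx.gpu(0))
```

Because the engine tracks read and write dependencies between arrays, the same code gives the same results whether the operations end up running serially or in parallel.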