# PocketFlow - Model Compression Framework

**Repository Path**: deepcy/pocket-flow---model-compression-framework

## Basic Information

- **Project Name**: PocketFlow - Model Compression Framework
- **Description**: PocketFlow is a PyTorch-based framework for neural network model compression. It provides easy-to-use tools for reducing model size and improving inference speed while maintaining accuracy.
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-06-07
- **Last Updated**: 2025-06-07

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

# PocketFlow - Model Compression Framework

PocketFlow is a PyTorch-based framework for neural network model compression. It provides easy-to-use tools for reducing model size and improving inference speed while maintaining accuracy.

## Features

- **Distillation**: Transfer knowledge from a large teacher model to a smaller student model
- **Pruning**: Remove unimportant weights or channels from models
- **Quantization**: Reduce the precision of weights and activations
- **Mixed Precision Training**: Advanced features including:
  - Multiple optimization levels (O0-O3)
  - Dynamic loss scaling with configurable parameters
  - Layer-specific precision control (manual or auto-tuned)
  - Gradient clipping
  - Comprehensive performance analysis
  - Hardware compatibility checks
  - Automatic fallback mechanisms
  - Distributed training support (multi-GPU / multi-node)
  - Optimized model saving/loading

## Installation

```bash
pip install pocketflow
```

For distributed training support, install with:

```bash
pip install pocketflow[distributed]
```

## Quick Start

```python
from pocketflow import create_compressor
from pocketflow.config import Config

# Create compression config
config = Config({
    'compression_method': 'mixed_precision',
    'mixed_precision': {
        'enabled': True,
        'opt_level': 'O1',
        'loss_scale':
            'dynamic',
        'grad_clip': 1.0,
        'layer_precision': {
            'conv1': 'fp16',
            'fc1': 'fp32'
        }
    }
})

# Create and apply compressor
model = YourModel()
compressor = create_compressor('mixed_precision', model, config)
compressed_model = compressor.compress()

# Train with mixed precision
for epoch in range(10):
    loss = compressor.train_step(train_loader, optimizer)
    print(f"Epoch {epoch+1}, Loss: {loss:.4f}")

# Evaluate
results = compressor.evaluate(test_loader)
print(f"Accuracy: {results['accuracy']:.2f}%")
print(f"Memory saved: {results['memory_saved']}MB")
print(f"Speedup: {results['speedup']:.1f}x")
```

## Advanced Mixed Precision Configuration

```python
config = Config({
    'compression_method': 'mixed_precision',
    'mixed_precision': {
        'enabled': True,
        'opt_level': 'O2',           # Optimization level
        'loss_scale': {              # Dynamic loss scaling config
            'initial': 1024,
            'window': 1000,
            'hysteresis': 2,
            'min': 1,
            'max': 2**24
        },
        'grad_clip': 1.0,            # Gradient clipping value
        'layer_precision': {
            'conv1': 'fp16',         # Force conv1 to FP16
            'fc1': 'fp32'            # Keep fc1 in FP32
        },
        'performance': {             # Performance analysis config
            'enabled': True,
            'track_memory': True,
            'track_flops': True
        }
    }
})
```

## Performance Analysis

PocketFlow provides detailed performance metrics during training:

```python
# Get performance metrics after training
metrics = compressor.get_performance_metrics()

print(f"Average batch time: {metrics['avg_batch_time']:.4f}s")
print(f"Throughput: {metrics['throughput']:.2f} samples/sec")
print(f"Estimated FLOPs: {metrics['estimated_flops']/1e9:.2f} GFLOPs")
print(f"Memory usage: {metrics['memory_usage']:.2f} MB")

print("Precision distribution:")
for prec, count in metrics['precision_distribution']['counts'].items():
    print(f"  {prec}: {count} layers")
```

## Model Saving & Loading

Optimized model saving preserves precision settings and performance metrics:

```python
# Save model with metadata
compressor.save_model(
    "model.pth",
    metadata={
        "task": "image_classification",
        "dataset": "ImageNet"
    }
)

# Load model
compressor.load_model("model.pth")

# Access saved metadata and metrics
print(f"Model metadata: {compressor.metadata}")
print(f"Saved performance: {compressor.saved_performance}")
```

## Distributed Training

```python
import torch

config = Config({
    'compression_method': 'mixed_precision',
    'mixed_precision': {
        'enabled': True,
        'opt_level': 'O2',
        'distributed': {
            'enabled': True,
            'backend': 'nccl',
            'find_unused_params': False
        },
        'layer_precision': 'auto'    # Auto-tune precision
    }
})

# Initialize the distributed environment before creating the compressor
torch.distributed.init_process_group(backend='nccl')
compressor = create_compressor('mixed_precision', model, config)
```

## Automatic Precision Tuning

Set `layer_precision: 'auto'` to enable automatic precision selection:

```python
config = Config({
    'compression_method': 'mixed_precision',
    'mixed_precision': {
        'enabled': True,
        'layer_precision': 'auto',   # Auto-tune precision
        'auto_tune': {
            'num_batches': 20,       # Number of batches used for analysis
            'sensitivity': 0.01,     # Accuracy sensitivity threshold
            'warmup': 5              # Warmup batches before analysis
        }
    }
})

# Provide train_loader for auto-tuning
compressor = create_compressor('mixed_precision', model, config)
compressed_model = compressor.compress(train_loader)
```

## API Reference

### MixedPrecisionCompressor

- `compress(train_loader=None)`: Enable mixed precision training; `train_loader` is only required for auto-tuning
- `train_step(train_loader, optimizer)`: Perform a training step with performance tracking
- `evaluate(test_loader)`: Evaluate model performance
- `get_precision_settings()`: Get the current precision settings
- `get_performance_metrics()`: Get detailed performance metrics
- `save_model(path, metadata=None)`: Save the model along with its precision settings
- `load_model(path)`: Load a model and its precision settings

## Requirements

- Python 3.7+
- PyTorch 1.6+
- NVIDIA GPU with CUDA (for mixed precision training)
- Apex (optional, for additional AMP features)
- torch.distributed (for distributed training)

## License
MIT

This program is a beta release and is fully open source. Use it freely, and please file an issue if you run into errors.

WeChat: cy321one

Feedback email: [samhoclub@163.com](mailto:samhoclub@163.com)

Official WeChat account: 尘渊文化

![img](https://pic1.zhimg.com/80/v2-77aed7e43dc44ddd627ef4ac285b8296_720w.png)