
Spring Boot + vue-simple-uploader: implementing chunked and resumable uploads

Preface

Our company recently had a requirement to upload large files. The upload needed to support chunking and resuming, provide upload, pause, and cancel controls, and let users close the browser and continue the upload the next time they open it. This article records my experience using vue-simple-uploader.

Environment
  • vue 2.6.10
  • vue-simple-uploader 0.7.6
  • spring boot 2.5.x
  • mysql 5.x
  • mybatis-plus 3.4.1
What is vue-simple-uploader?

vue-simple-uploader is a Vue upload plugin built on top of simple-uploader.js. Its advantages include, but are not limited to:

  • File, multi-file, and folder uploads; drag-and-drop for both files and folders
  • Pause and resume
  • Error handling
  • "Instant upload": the server is asked whether the file already exists (identified by its hash), and if so the upload is skipped entirely
  • Chunked uploads
  • Progress display, estimated time remaining, automatic retry on error, and re-upload

vue-simple-uploader documentation

simple-uploader.js documentation

Installation

I use yarn as the package manager; npm works the same way.

yarn add vue-simple-uploader
Usage
Initialization
import Vue from 'vue'
import App from './App.vue'
import uploader from 'vue-simple-uploader'
Vue.use(uploader)

new Vue({
    render: h => h(App)
}).$mount('#app')
Wrapping a global upload component
<template>
  <div>
    <p style="font-size: 18px">Upload</p>
    <uploader
      ref="uploader"
      :options="options"
      :autoStart="false"
      :file-status-text="fileStatusText"
      @file-added="onFileAdded"
      @file-success="onFileSuccess"
      @file-progress="onFileProgress"
      @file-error="onFileError"
      class="uploader-ui">
      <uploader-unsupport></uploader-unsupport>
      <uploader-drop>
        <div>
          <uploader-btn id="global-uploader-btn" :attrs="attrs" ref="uploadBtn">Select file<i
            class="el-icon-upload el-icon--right"></i></uploader-btn>
        </div>
      </uploader-drop>
      <uploader-list></uploader-list>
    </uploader>
  </div>
</template>

<script>
import SparkMD5 from 'spark-md5';
import {mergeFile} from '@/utils/upload/multipartUpload';

export default {
  name: 'MultipartUpload',
  props: {
    projectNo: {
      type: String,
      default: ''
    },
  },
  data() {
    return {
      options: {
        target: process.env.VUE_APP_API_BASE_URL + "/cms/splitupuload/chunk", // chunk upload and chunk-check endpoint
        chunkSize: 2048000, // chunk size in bytes; simple-uploader expects a number, not a string
        fileParameterName: 'upfile',
        maxChunkRetries: 3,
        testChunks: true,
        checkChunkUploadedByResponse: function (chunk, response_msg) {
          let objMessage = JSON.parse(response_msg);
          if (objMessage.skipUpload) {
            return true;
          }
          return (objMessage.uploadedChunks || []).indexOf(chunk.offset + 1) >= 0;
        }
      },
      attrs: {
        accept: ['.mp4', '.rmvb', '.mkv', '.wmv', '.flv']
      },
      fileStatusText: {
        success: 'Upload succeeded',
        error: 'Upload failed',
        uploading: 'Uploading',
        paused: 'Paused',
        waiting: 'Waiting to upload'
      },
    }
  },
  methods: {
    onFileAdded(file) {
      // compute the MD5 identifier before the upload actually starts
      this.computeMD5(file);
    },
    onFileSuccess(rootFile, file, response, chunk) {
      file.refProjectId = this.projectNo;
      // all chunks are uploaded; ask the server to merge them into the final file
      mergeFile(file).then(responseData => {
        console.log("file uploaded", responseData)
      }).catch(function (error) {
        console.log("file upload error", error);
      });
    },
    onFileError(rootFile, file, response, chunk) {
      console.log('file upload failed: ' + response);
    },
    onFileProgress(rootFile, file, chunk) {
      // upload progress hook
    },
    computeMD5(file) {
      file.pause();
      let fileReader = new FileReader();
      let time = new Date().getTime();
      let blobSlice = File.prototype.slice || File.prototype.mozSlice || File.prototype.webkitSlice;
      let currentChunk = 0;
      const chunkSize = 10 * 1024 * 1000;
      let chunks = Math.ceil(file.size / chunkSize);
      let spark = new SparkMD5.ArrayBuffer();
      // only the first 10 MB chunk is hashed to build the identifier; this keeps
      // hashing fast for very large files at the cost of a weaker fingerprint
      let chunkNumberMD5 = 1;
      loadNext();
      fileReader.onload = (e => {
        spark.append(e.target.result);
        if (currentChunk < chunkNumberMD5) {
          loadNext();
        } else {
          let md5 = spark.end();
          file.uniqueIdentifier = md5;
          file.resume();
          console.log(`MD5 computed: ${file.name} \nMD5: ${md5} \nchunks: ${chunks} size: ${file.size} took: ${new Date().getTime() - time} ms`);
        }
      });

      // arrow function so `this` stays bound to the component, not the FileReader
      fileReader.onerror = () => {
        this.error(`Failed to read file ${file.name}, please check the file`)
        file.cancel();
      };

      function loadNext() {
        let start = currentChunk * chunkSize;
        let end = ((start + chunkSize) >= file.size) ? file.size : start + chunkSize;

        fileReader.readAsArrayBuffer(blobSlice.call(file.file, start, end));
        currentChunk++;
      }
    },
    close() {
      // the underlying uploader instance is exposed on the <uploader> component ref
      this.$refs.uploader.uploader.cancel();
    },
    error(msg) {
      this.$message.error(msg, 2000)
    }
  }
}</script>

<style>
.uploader-ui {
  padding: 15px;
  font-size: 12px;
  font-family: Microsoft YaHei;
  box-shadow: 0 0 10px rgba(0, 0, 0, .4);
}

.uploader-ui .uploader-btn {
  margin-right: 4px;
  font-size: 12px;
  border-radius: 3px;
  color: #FFF;
  background-color: #409EFF;
  border-color: #409EFF;
  display: inline-block;
  line-height: 1;
  white-space: nowrap;
}

.uploader-ui .uploader-list {
  max-height: 440px;
  overflow: auto;
  overflow-x: hidden;
  overflow-y: auto;
}
</style>
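The component above imports a mergeFile helper from '@/utils/upload/multipartUpload', which the post never shows. Below is a hypothetical sketch of what it might look like: the payload field names mirror what the backend /mergeFile endpoint reads from SplitFileInfoVO, the URL prefix reuses the chunk endpoint's base path, and the browser fetch API stands in for whatever HTTP client the real project uses.

```javascript
// Hypothetical sketch of '@/utils/upload/multipartUpload' (not shown in the
// original post). Field names mirror what /mergeFile reads from
// SplitFileInfoVO: name, uniqueIdentifier, size, refProjectId.

// Pull out just the fields the merge endpoint needs from the uploader's file object
function buildMergePayload(file) {
  return {
    name: file.name,
    uniqueIdentifier: file.uniqueIdentifier,
    size: file.size,
    refProjectId: file.refProjectId
  };
}

// POST the merge request once every chunk has been uploaded
function mergeFile(file) {
  const base = process.env.VUE_APP_API_BASE_URL;
  return fetch(base + '/cms/splitupuload/mergeFile', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildMergePayload(file))
  }).then(res => res.json());
}
```

In the real module these two functions would be exported and the base URL would match however the rest of the app configures its API client.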
Using the upload component
......
<MultipartUpload :projectNo="projectNo"></MultipartUpload>
.....
export default {
  name: 'UploadMovie',
  components: {MultipartUpload},
  data() {
    return {
      projectNo: 'UP000000001'
    }
  }
}
..........

Backend
MySQL schema
CREATE TABLE `t_chunk_info` (
  `id` varchar(64) NOT NULL COMMENT 'primary key',
  `chunk_number` bigint(20) NOT NULL COMMENT 'chunk number',
  `chunk_size` bigint(20) NOT NULL COMMENT 'configured chunk size',
  `current_chunkSize` bigint(20) DEFAULT NULL COMMENT 'actual size of this chunk',
  `identifier` varchar(64) NOT NULL COMMENT 'file identifier (MD5)',
  `filename` varchar(500) DEFAULT NULL COMMENT 'file name',
  `relative_path` varchar(500) NOT NULL COMMENT 'relative file path',
  `total_chunks` bigint(20) NOT NULL COMMENT 'total number of chunks',
  `type` bigint(20) DEFAULT NULL COMMENT 'type',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=COMPACT;

CREATE TABLE `t_file_info` (
  `id` varchar(64) NOT NULL COMMENT 'primary key',
  `filename` varchar(500) NOT NULL COMMENT 'file name',
  `identifier` varchar(64) NOT NULL COMMENT 'file identifier (MD5)',
  `type` varchar(10) DEFAULT NULL COMMENT 'type',
  `total_size` bigint(20) NOT NULL COMMENT 'file size',
  `location` varchar(200) NOT NULL COMMENT 'storage path',
  `del_flag` varchar(2) NOT NULL DEFAULT '0' COMMENT 'soft-delete flag',
  `ref_project_id` varchar(64) NOT NULL COMMENT 'source project',
  `upload_by` varchar(64) DEFAULT NULL COMMENT 'uploaded by',
  `upload_time` datetime NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'upload time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
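Spring MVC binds each chunk request to a SplitChunkInfoVO, which the post never shows. Here is a hedged sketch: the fields mirror the default query parameters simple-uploader sends with every chunk, which also match the t_chunk_info columns above. The real VO additionally carries a MultipartFile named upfile (per fileParameterName in the front-end options), omitted here so the sketch has no Spring dependency.

```java
// Hypothetical sketch of SplitChunkInfoVO. The real class also has a
// `private MultipartFile upfile;` field bound via fileParameterName "upfile",
// omitted so the sketch compiles without Spring on the classpath.
public class SplitChunkInfoVO {
    private Long chunkNumber;      // 1-based index of this chunk
    private Long chunkSize;        // configured chunk size in bytes
    private Long currentChunkSize; // actual size (the last chunk may be smaller)
    private String identifier;     // MD5 identifier computed by the front end
    private String filename;       // original file name
    private String relativePath;   // path relative to the dropped folder
    private Long totalChunks;      // total number of chunks for this file

    public Long getChunkNumber() { return chunkNumber; }
    public void setChunkNumber(Long chunkNumber) { this.chunkNumber = chunkNumber; }
    public Long getChunkSize() { return chunkSize; }
    public void setChunkSize(Long chunkSize) { this.chunkSize = chunkSize; }
    public Long getCurrentChunkSize() { return currentChunkSize; }
    public void setCurrentChunkSize(Long currentChunkSize) { this.currentChunkSize = currentChunkSize; }
    public String getIdentifier() { return identifier; }
    public void setIdentifier(String identifier) { this.identifier = identifier; }
    public String getFilename() { return filename; }
    public void setFilename(String filename) { this.filename = filename; }
    public String getRelativePath() { return relativePath; }
    public void setRelativePath(String relativePath) { this.relativePath = relativePath; }
    public Long getTotalChunks() { return totalChunks; }
    public void setTotalChunks(Long totalChunks) { this.totalChunks = totalChunks; }
}
```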
Service endpoints
// upload a chunk
@PostMapping("/chunk")
public String uploadChunk(SplitChunkInfoVO chunk) {
    HttpStatus httpStatus = HttpStatus.UNSUPPORTED_MEDIA_TYPE;
    MultipartFile file = chunk.getUpfile();
    log.info("file originName: {}, chunkNumber: {}", file.getOriginalFilename(), chunk.getChunkNumber());
    try {
        byte[] bytes = file.getBytes();
        // write the chunk bytes to disk
        Path path = Paths.get(FileInfoUtils.generatePath(uploadFolder, chunk));
        Files.write(path, bytes);
        // record the chunk in the database so later check requests can find it
        SplitChunkInfo splitChunkInfo = BeanUtils.copyBeanNoException(chunk, SplitChunkInfo.class);
        splitChunkInfo.setId(IdUtil.fastSimpleUUID());
        if (this.splitChunkInfoService.save(splitChunkInfo)) {
            httpStatus = HttpStatus.OK;
        }
    } catch (IOException e) {
        log.error("failed to save chunk", e);
    }
    return String.valueOf(httpStatus.value());
}
// check which chunks have already been uploaded
@GetMapping("/chunk")
public UploadResult checkChunk(SplitChunkInfoVO chunk, HttpServletResponse response) {
    UploadResult ur = new UploadResult();
    response.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
    String file = uploadFolder + "/" + chunk.getIdentifier() + "/" + chunk.getFilename();
    // first check whether the complete file already exists; if so, tell the
    // front end to skip the upload entirely ("instant upload")
    if (FileInfoUtils.fileExists(file)) {
        ur.setSkipUpload(true);
        ur.setLocation(file);
        response.setStatus(HttpServletResponse.SC_OK);
        ur.setMessage("complete file already exists, skipping upload (instant upload)");
        return ur;
    }
    List<Integer> list = this.splitChunkInfoService.list(new LambdaQueryWrapper<SplitChunkInfo>()
            .eq(SplitChunkInfo::getIdentifier, chunk.getIdentifier())
            .eq(SplitChunkInfo::getFilename, chunk.getFilename())).stream().map(x -> x.getChunkNumber().intValue()).collect(Collectors.toList());
    if (!list.isEmpty()) {
        ur.setSkipUpload(false);
        ur.setUploadedChunks(list);
        response.setStatus(HttpServletResponse.SC_OK);
        ur.setMessage("some chunks already exist, uploading the remaining ones (resumable upload)");
        return ur;
    }
    return ur;
}
// merge the chunks into the final file
@PostMapping("/mergeFile")
public String mergeFile(@RequestBody SplitFileInfoVO fileInfoVO) {
    SplitFileInfo fileInfo = new SplitFileInfo();
    fileInfo.setFilename(fileInfoVO.getName());
    fileInfo.setIdentifier(fileInfoVO.getUniqueIdentifier());
    fileInfo.setId(fileInfoVO.getId());
    fileInfo.setTotalSize(fileInfoVO.getSize());
    fileInfo.setRefProjectId(fileInfoVO.getRefProjectId());
    String filename = fileInfo.getFilename();
    String file = uploadFolder + "/" + fileInfo.getIdentifier() + "/" + filename;
    String folder = uploadFolder + "/" + fileInfo.getIdentifier();
    String fileSuccess = FileInfoUtils.merge(file, folder, filename);
    fileInfo.setLocation(file);
    // once the merge succeeds, persist the file record
    if (SplitUploadConstants.MERGE_FILE_OK.equals(fileSuccess)) {
        fileInfo.setId(IdUtil.fastSimpleUUID());
        if (this.splitFileInfoService.save(fileInfo)) {
            return SplitUploadConstants.MERGE_FILE_SUCCESS;
        }
    } else if (SplitUploadConstants.MERGE_FILE_REPEAT.equals(fileSuccess)) {
        // the file was merged before; reuse the existing record unless it
        // belongs to a different project
        String fileId = null;
        SplitFileInfo splitFileInfo = this.splitFileInfoService.getOne(new LambdaQueryWrapper<SplitFileInfo>()
                .eq(SplitFileInfo::getFilename, fileInfo.getFilename())
                .eq(SplitFileInfo::getIdentifier, fileInfo.getIdentifier())
                .last("limit 1")
        );
        if (Objects.isNull(splitFileInfo) ||
                (!fileInfo.getRefProjectId().equals(splitFileInfo.getRefProjectId()))) {
            fileInfo.setId(IdUtil.fastSimpleUUID());
            JcBootException.isTrue(this.splitFileInfoService.save(fileInfo), "file upload failed!");
            fileId = fileInfo.getId();
        } else {
            fileId = splitFileInfo.getId();
        }
        return SplitUploadConstants.MERGE_FILE_SUCCESS;
    }
    return SplitUploadConstants.MERGE_FILE_FALURE;
}
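The FileInfoUtils.merge utility called above is not shown in the post. A hedged sketch of what such a merge might look like, under two assumptions that are mine, not the original author's: each chunk was written as <folder>/<filename>-<n> with 1-based n, and the return values play the role of the MERGE_FILE_OK / MERGE_FILE_REPEAT constants the controller checks.

```java
// Hypothetical sketch of a chunk-merge utility. Assumes chunks are stored as
// <folder>/<filename>-<n> (1-based); the real FileInfoUtils.merge may differ.
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class FileMergeSketch {
    public static final String MERGE_OK = "OK";         // stand-in for MERGE_FILE_OK
    public static final String MERGE_REPEAT = "REPEAT"; // stand-in for MERGE_FILE_REPEAT

    public static String merge(String targetFile, String folder, String filename) throws IOException {
        Path target = Paths.get(targetFile);
        if (Files.exists(target)) {
            return MERGE_REPEAT; // the complete file was merged earlier
        }
        try (OutputStream out = Files.newOutputStream(target, StandardOpenOption.CREATE_NEW)) {
            // append chunks in order until the next numbered chunk is missing
            for (int n = 1; ; n++) {
                Path chunk = Paths.get(folder, filename + "-" + n);
                if (!Files.exists(chunk)) break;
                Files.copy(chunk, out);
                Files.delete(chunk); // clean up each chunk once appended
            }
        }
        return MERGE_OK;
    }
}
```

A production version would also verify that all totalChunks pieces are present (and ideally re-hash the merged file against the identifier) before deleting anything.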
Summary

That's all for this chunked file upload write-up. It is only a brief record of the process, without going into every detail, and the general approach is the same in most projects. If you know a better upload component, or have other suggestions, feel free to discuss.

References

A global upload plugin for chunked, instant, and resumable uploads, built on vue-simple-uploader (基于vue-simple-uploader封装文件分片上传、秒传及断点续传的全局上传插件)
