Alibaba Cloud Object Storage Service (OSS) provides multipart upload so that you can split large objects into multiple parts and upload the parts separately. After the parts are uploaded, you can call the CompleteMultipartUpload operation to combine these parts into a complete object.

Prerequisites

A bucket is created. For more information, see Create buckets.

Scenarios

  • Accelerated upload of large objects

    When the object that you want to upload is larger than 5 GB, you can use multipart upload to split the object into multiple parts and upload the parts concurrently to accelerate the upload.

  • Poor network environments

    We recommend that you use multipart upload when network conditions are unstable. If specific parts fail to upload, you need to re-upload only those parts.

  • Uncertain object size

    If you are uncertain about the size of the objects that you want to upload, you can use multipart upload. This case is common in industry applications such as video surveillance.

Multipart upload process

The following flowchart shows the basic process of multipart upload.

The preceding process consists of the following steps:

  1. Split the object that you want to upload into parts based on a specific size.
  2. Call the InitiateMultipartUpload operation to initiate a multipart upload task.
  3. Call the UploadPart operation to upload the parts.

    After the object is split into parts, a partNumber is specified for each part to indicate its position in the sequence. This allows you to upload the parts concurrently. More concurrent uploads do not necessarily result in faster upload speeds. Therefore, we recommend that you specify the number of concurrent uploads based on your network conditions and the workload of your devices.

    If you want to cancel a multipart upload task, you can call the AbortMultipartUpload operation. After the multipart upload task is canceled, the parts that were uploaded by the task are also deleted.

  4. Call the CompleteMultipartUpload operation to combine the uploaded parts into a complete object.
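The splitting in step 1 is simple byte arithmetic: fixed-size parts, with the last part carrying the remainder. The following Python sketch illustrates the calculation; the function name split_into_parts is illustrative only and is not part of any OSS SDK.

```python
def split_into_parts(total_size, part_size):
    """Split an object of total_size bytes into (part_number, start, size)
    tuples. Part numbers start at 1; the last part carries the remainder,
    which may be smaller than part_size."""
    parts = []
    part_number, offset = 1, 0
    while offset < total_size:
        size = min(part_size, total_size - offset)
        parts.append((part_number, offset, size))
        offset += size
        part_number += 1
    return parts

# A 2.5 MB object with a 1 MB part size yields three parts; the last part
# holds the remaining 0.5 MB.
print(split_into_parts(2_621_440, 1_048_576))
# → [(1, 0, 1048576), (2, 1048576, 1048576), (3, 2097152, 524288)]
```

Each (start, size) pair maps directly to the seek-and-upload step that the SDK examples in this topic perform for every part number.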

Limits

  • Object size: Multipart upload supports objects up to 48.8 TB in size.
  • Number of parts: You can set the number of parts to a value that ranges from 1 to 10,000.
  • Part size: Each part can be 100 KB to 5 GB in size. The size of the last part is not limited.
  • Maximum number of parts that can be returned for a single ListParts request: Up to 1,000 parts can be returned for a single ListParts request.
  • Maximum number of multipart upload tasks that can be returned for a single ListMultipartUploads request: Up to 1,000 tasks can be returned for a single ListMultipartUploads request.
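The part count and part size limits interact: with at most 10,000 parts, a fixed part size caps the object size at 10,000 × part size, so larger objects need larger parts. The helper below is a hypothetical illustration of that arithmetic, not an SDK function.

```python
def min_part_size(object_size, max_parts=10_000, size_floor=100 * 1024):
    """Smallest part size in bytes that fits object_size into max_parts
    parts while respecting the 100 KB lower bound on part size."""
    needed = -(-object_size // max_parts)  # ceiling division
    return max(needed, size_floor)

# A 500 MB object fits in 10,000 parts at the 100 KB minimum part size.
print(min_part_size(500 * 1024 ** 2))   # → 102400
# A 1 TiB object needs parts of at least ~105 MiB to stay within 10,000 parts.
print(min_part_size(1024 ** 4))         # → 109951163
```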

Precautions

  • Optimize object upload performance

    If you upload a large number of objects whose names have sequential prefixes such as timestamps and letters, multiple object indexes may be stored in a single partition. If a large number of requests are sent to query these objects, the latency increases. We recommend that you do not upload a large number of objects that have sequential prefixes. For more information, see OSS performance and scalability best practices.
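One common way to avoid sequential prefixes is to prepend a short hash of the object name. The helper below is an illustration of that technique, not an OSS SDK API; the name randomized_key is hypothetical.

```python
import hashlib

def randomized_key(original_key, prefix_len=4):
    """Prepend a short hex digest of the key so that object names no longer
    share a sequential prefix, spreading the index across partitions."""
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()
    return f"{digest[:prefix_len]}/{original_key}"

# Timestamped names such as these would otherwise sort into one partition.
for key in ("logs/20240101.log", "logs/20240102.log", "logs/20240103.log"):
    print(randomized_key(key))
```

Note that objects must then be addressed by their hashed names, so keep a mapping if the original lexicographic ordering matters to your application.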

  • Overwrite objects

    If you upload an object whose name is the same as that of an existing object in OSS, the existing object is overwritten. You can use the following methods to prevent objects from being unexpectedly overwritten:

    • Enable versioning

      When versioning is enabled for a bucket, overwritten objects are saved as previous versions. You can restore an object to a previous version at any time. For more information, see Overview.

    • Include a specific parameter in the upload request

      Include the x-oss-forbid-overwrite parameter in the upload request header and set the parameter to true. This way, if you upload an object whose name is the same as that of an existing object, the upload fails and OSS returns the FileAlreadyExists error. For more information, see InitiateMultipartUpload.
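As a sketch of the second method with the Python SDK, the header is passed when the multipart upload task is initiated. The bucket and object names below are placeholders, and the network call is commented out:

```python
# Headers that make the upload fail instead of silently overwriting an
# existing object with the same name.
headers = {"x-oss-forbid-overwrite": "true"}

# With the oss2 SDK, pass the headers when you initiate the task. If
# exampledir/exampleobject.txt already exists, OSS returns the
# FileAlreadyExists error instead of overwriting the object.
# upload_id = bucket.init_multipart_upload(
#     "exampledir/exampleobject.txt", headers=headers).upload_id
print(headers)
```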

  • Delete parts

    When a multipart upload task is interrupted, the parts that were uploaded by the task are stored in the specified bucket. To avoid additional storage fees, we recommend that you use the following methods to delete these parts if you no longer need them:

    • Manually delete parts. For more information, see Manage parts.
    • Configure lifecycle rules to automatically delete parts. For more information, see Configure lifecycle rules.
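As a sketch of the second method, a lifecycle rule that cleans up parts from interrupted uploads might look like the following fragment. This assumes the PutBucketLifecycle XML format with an AbortMultipartUpload element; verify the element names and rule scoping against the lifecycle rule documentation before use.

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>delete-incomplete-multipart-uploads</ID>
    <Prefix></Prefix>
    <Status>Enabled</Status>
    <!-- Delete parts of multipart upload tasks that were initiated
         more than 7 days ago and never completed. -->
    <AbortMultipartUpload>
      <Days>7</Days>
    </AbortMultipartUpload>
  </Rule>
</LifecycleConfiguration>
```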

Use OSS SDKs

The following code provides examples on how to perform multipart upload by using OSS SDKs for common programming languages. For more information about how to perform multipart upload by using OSS SDKs for other programming languages, see Overview.

import com.aliyun.oss.ClientException;
import com.aliyun.oss.OSS;
import com.aliyun.oss.OSSClientBuilder;
import com.aliyun.oss.OSSException;
import com.aliyun.oss.model.*;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;

public class Demo {

    public static void main(String[] args) throws Exception {
        // In this example, the endpoint of the China (Hangzhou) region is used. Specify your actual endpoint.
        String endpoint = "https://oss-cn-hangzhou.aliyuncs.com";
        // Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console.
        String accessKeyId = "yourAccessKeyId";
        String accessKeySecret = "yourAccessKeySecret";
        // Specify the name of the bucket. Example: examplebucket.
        String bucketName = "examplebucket";
        // Specify the full path of the object. Example: exampledir/exampleobject.txt. The full path of the object cannot contain the bucket name.
        String objectName = "exampledir/exampleobject.txt";

        // Create an OSSClient instance.
        OSS ossClient = new OSSClientBuilder().build(endpoint, accessKeyId, accessKeySecret);
        try {
            // Create an InitiateMultipartUploadRequest object.
            InitiateMultipartUploadRequest request = new InitiateMultipartUploadRequest(bucketName, objectName);

            // The following code provides an example on how to specify the request headers when you initiate a multipart upload task.
            // ObjectMetadata metadata = new ObjectMetadata();
            // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard.toString());
            // Specify the caching behavior of the web page for the object.
            // metadata.setCacheControl("no-cache");
            // Specify the name of the object when the object is downloaded.
            // metadata.setContentDisposition("attachment;filename=oss_MultipartUpload.txt");
            // Specify the encoding format for the content of the object.
            // metadata.setContentEncoding(OSSConstants.DEFAULT_CHARSET_NAME);
            // Specify whether existing objects are overwritten by objects with the same names when the multipart upload task is initiated. In this example, this parameter is set to true, which indicates that an existing object with the same name cannot be overwritten.
            // metadata.setHeader("x-oss-forbid-overwrite", "true");
            // Specify the server-side encryption method that is used to encrypt each part of the object to upload.
            // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
            // Specify the encryption algorithm that is used to encrypt the object. If you do not configure this parameter, objects are encrypted by using AES-256.
            // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_DATA_ENCRYPTION, ObjectMetadata.KMS_SERVER_SIDE_ENCRYPTION);
            // Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS).
            // metadata.setHeader(OSSHeaders.OSS_SERVER_SIDE_ENCRYPTION_KEY_ID, "9468da86-3509-4f8d-a61e-6eab1eac****");
            // Specify the storage class of the object.
            // metadata.setHeader(OSSHeaders.OSS_STORAGE_CLASS, StorageClass.Standard);
            // Configure tagging for the object. You can specify multiple tags for the object at the same time.
            // metadata.setHeader(OSSHeaders.OSS_TAGGING, "a:1");
            // request.setObjectMetadata(metadata);

            // Initiate the multipart upload task.
            InitiateMultipartUploadResult upresult = ossClient.initiateMultipartUpload(request);
            // Obtain the upload ID, which uniquely identifies the multipart upload task. You can use the upload ID to cancel or query the multipart upload task.
            String uploadId = upresult.getUploadId();

            // partETags is the set of PartETags. A PartETag consists of the part number and ETag of an uploaded part.
            List<PartETag> partETags = new ArrayList<PartETag>();
            // The size of each part, which is used to calculate the number of parts of the object. Unit: bytes.
            final long partSize = 1 * 1024 * 1024L;   // Set the part size to 1 MB.

            // Specify the full path of the local file that you want to upload. By default, if you do not specify the full path of the local file, the local file is uploaded from the path of the project to which the sample program belongs.
            final File sampleFile = new File("D:\\localpath\\examplefile.txt");
            long fileLength = sampleFile.length();
            int partCount = (int) (fileLength / partSize);
            if (fileLength % partSize != 0) {
                partCount++;
            }
            // Upload each part until all parts are uploaded.
            for (int i = 0; i < partCount; i++) {
                long startPos = i * partSize;
                long curPartSize = (i + 1 == partCount) ? (fileLength - startPos) : partSize;
                InputStream instream = new FileInputStream(sampleFile);
                // Skip the parts that have been uploaded.
                instream.skip(startPos);
                UploadPartRequest uploadPartRequest = new UploadPartRequest();
                uploadPartRequest.setBucketName(bucketName);
                uploadPartRequest.setKey(objectName);
                uploadPartRequest.setUploadId(uploadId);
                uploadPartRequest.setInputStream(instream);
                // Configure the size of each part. Each part except for the last part must be larger than 100 KB in size.
                uploadPartRequest.setPartSize(curPartSize);
                // Set part numbers. Each part has a part number that ranges from 1 to 10000. If the specified number is beyond the range, OSS returns an InvalidArgument error code.
                uploadPartRequest.setPartNumber(i + 1);
                // Parts are not necessarily uploaded in sequence. They can be uploaded from different OSS clients. OSS sorts the parts based on their part numbers and combines them into a complete object.
                UploadPartResult uploadPartResult = ossClient.uploadPart(uploadPartRequest);
                // Each time a part is uploaded, OSS returns a result that contains a PartETag. The PartETags are stored in partETags.
                partETags.add(uploadPartResult.getPartETag());
            }

            // Create a CompleteMultipartUploadRequest object.
            // When the multipart upload task is completed, you must provide all valid PartETags. After receiving the PartETags, OSS verifies the validity of all parts one by one. After all parts are verified, OSS combines these parts into a complete object.
            CompleteMultipartUploadRequest completeMultipartUploadRequest =
                    new CompleteMultipartUploadRequest(bucketName, objectName, uploadId, partETags);

            // Optional. The following code provides an example on how to set the access control list (ACL) of the object.
            // completeMultipartUploadRequest.setObjectACL(CannedAccessControlList.Private);
            // Specify whether to list all parts that are uploaded by using the current upload ID. If you want to combine the parts by listing the parts on the server side, you can leave the partETags in CompleteMultipartUploadRequest empty.
            // Map<String, String> headers = new HashMap<String, String>();
            // If x-oss-complete-all:yes is specified in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation.
            // If you configure x-oss-complete-all:yes in the request, the request body cannot be specified. Otherwise, an error occurs.
            // headers.put("x-oss-complete-all","yes");
            // completeMultipartUploadRequest.setHeaders(headers);

            // Complete the multipart upload task.
            CompleteMultipartUploadResult completeMultipartUploadResult = ossClient.completeMultipartUpload(completeMultipartUploadRequest);
            System.out.println(completeMultipartUploadResult.getETag());
        } catch (OSSException oe) {
            System.out.println("Caught an OSSException, which means your request made it to OSS, "
                    + "but was rejected with an error response for some reason.");
            System.out.println("Error Message:" + oe.getErrorMessage());
            System.out.println("Error Code:" + oe.getErrorCode());
            System.out.println("Request ID:" + oe.getRequestId());
            System.out.println("Host ID:" + oe.getHostId());
        } catch (ClientException ce) {
            System.out.println("Caught an ClientException, which means the client encountered "
                    + "a serious internal problem while trying to communicate with OSS, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message:" + ce.getMessage());
        } finally {
            if (ossClient != null) {
                ossClient.shutdown();
            }
        }
    }
}

# -*- coding: utf-8 -*-
import os
from oss2 import SizedFileAdapter, determine_part_size
from oss2.models import PartInfo
import oss2

# Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a RAM user to call API operations or perform routine O&M. To create a RAM user, log on to the RAM console.
auth = oss2.Auth('yourAccessKeyId', 'yourAccessKeySecret')
# In this example, the endpoint of the China (Hangzhou) region is used. Specify the endpoint based on your business requirements.
# Specify the name of the bucket. Example: examplebucket.
bucket = oss2.Bucket(auth, 'https://oss-cn-hangzhou.aliyuncs.com', 'examplebucket')
# Specify the full path of the object. The full path cannot contain the bucket name. Example: exampledir/exampleobject.txt.
key = 'exampledir/exampleobject.txt'
# Specify the full path of the local file that you want to upload. Example: D:\\localpath\\examplefile.txt.
filename = 'D:\\localpath\\examplefile.txt'

total_size = os.path.getsize(filename)
# Use the determine_part_size method to determine the size of each part.
part_size = determine_part_size(total_size, preferred_size=100 * 1024)

# Initiate a multipart upload task.
# If you want to specify the storage class of the object when you initiate the multipart upload task, configure the related headers when you call the init_multipart_upload method.
# headers = dict()
# Specify the caching behavior of the web page for the object.
# headers['Cache-Control'] = 'no-cache'
# Specify the name of the object when it is downloaded.
# headers['Content-Disposition'] = 'oss_MultipartUpload.txt'
# Specify the encoding format for the content of the object.
# headers['Content-Encoding'] = 'utf-8'
# Specify the validity period. Unit: milliseconds.
# headers['Expires'] = '1000'
# Specify whether to overwrite the existing object with the same name as the uploaded object when you initiate the multipart upload task. In this example, this parameter is set to true, which indicates that the existing object with the same name cannot be overwritten by the uploaded object.
# headers['x-oss-forbid-overwrite'] = 'true'
# Specify the server-side encryption method that is used to encrypt each part of the object that you want to upload.
# headers[OSS_SERVER_SIDE_ENCRYPTION] = SERVER_SIDE_ENCRYPTION_KMS
# Specify the algorithm that is used to encrypt the object. If you do not configure this parameter, objects are encrypted by using AES-256.
# headers[OSS_SERVER_SIDE_DATA_ENCRYPTION] = SERVER_SIDE_ENCRYPTION_KMS
# Specify the ID of the customer master key (CMK) that is managed by Key Management Service (KMS).
# headers[OSS_SERVER_SIDE_ENCRYPTION_KEY_ID] = '9468da86-3509-4f8d-a61e-6eab1eac****'
# Specify the storage class of the object.
# headers['x-oss-storage-class'] = oss2.BUCKET_STORAGE_CLASS_STANDARD
# Specify tags for the object. You can specify multiple tags for the object at the same time.
# headers[OSS_OBJECT_TAGGING] = 'k1=v1&k2=v2&k3=v3'
# upload_id = bucket.init_multipart_upload(key, headers=headers).upload_id
upload_id = bucket.init_multipart_upload(key).upload_id
parts = []

# Upload the parts one by one.
with open(filename, 'rb') as fileobj:
    part_number = 1
    offset = 0
    while offset < total_size:
        num_to_upload = min(part_size, total_size - offset)
        # Call the SizedFileAdapter(fileobj, size) method to generate a new object and recalculate the length of the append object.
        result = bucket.upload_part(key, upload_id, part_number,
                                    SizedFileAdapter(fileobj, num_to_upload))
        parts.append(PartInfo(part_number, result.etag))

        offset += num_to_upload
        part_number += 1

# Complete the multipart upload task.
# The following code provides an example on how to configure headers when you complete the multipart upload task:
headers = dict()
# Specify the access control list (ACL) of the object. In this example, this parameter is set to OBJECT_ACL_PRIVATE, which indicates private.
# headers["x-oss-object-acl"] = oss2.OBJECT_ACL_PRIVATE
# If you configure x-oss-complete-all:yes in the request, OSS lists all parts that are uploaded by using the current upload ID, sorts the parts by part number, and then performs the CompleteMultipartUpload operation.
# If you configure x-oss-complete-all:yes in the request, the request body cannot be specified. Otherwise, an error occurs.
headers["x-oss-complete-all"] = 'yes'
bucket.complete_multipart_upload(key, upload_id, parts, headers=headers)
# bucket.complete_multipart_upload(key, upload_id, parts)

# Verify the result of the multipart upload task.
with open(filename, 'rb') as fileobj:
    assert bucket.get_object(key).read() == fileobj.read()

package main

import (
    "fmt"
    "os"
    "time"

    "github.com/aliyun/aliyun-oss-go-sdk/oss"
)

func main() {
    // Create an OSSClient instance.
    // Set yourEndpoint to the endpoint of the region in which the bucket is located. For example, if the bucket is located in the China (Hangzhou) region, set yourEndpoint to https://oss-cn-hangzhou.aliyuncs.com.
    // Security risks may arise if you use the AccessKey pair of an Alibaba Cloud account to access OSS because the account has permissions on all API operations. We recommend that you use a RAM user to call API operations or perform routine operations and maintenance. To create a RAM user, log on to the RAM console.
    client, err := oss.New("yourEndpoint", "yourAccessKeyId", "yourAccessKeySecret")
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
    // Specify the name of the bucket.
    bucketName := "examplebucket"
    // Specify the full path of the object. The full path of the object cannot contain the bucket name.
    objectName := "exampleobject.txt"
    // Specify the full path of the local file that you want to upload. By default, if you do not specify the full path of the local file, the local file is uploaded from the path of the project to which the sample program belongs.
    locaFilename := "D:\\localpath\\examplefile.txt"

    bucket, err := client.Bucket(bucketName)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
    // Split the local file into three parts.
    chunks, err := oss.SplitFileByPartNum(locaFilename, 3)
    fd, err := os.Open(locaFilename)
    defer fd.Close()

    // Specify the expiration time.
    expires := time.Date(2049, time.January, 10, 23, 0, 0, 0, time.UTC)
    // The following code provides an example on how to specify the request headers when you initiate a multipart upload task.
    options := []oss.Option{
        oss.MetadataDirective(oss.MetaReplace),
        oss.Expires(expires),
        // Specify the caching behavior of the web page when the object is downloaded.
        // oss.CacheControl("no-cache"),
        // Specify the name of the object when the object is downloaded.
        // oss.ContentDisposition("attachment;filename=FileName.txt"),
        // Specify the encoding format for the content of the object.
        // oss.ContentEncoding("gzip"),
        // Specify the method that is used to encode the object name in the response. Only URL encoding is supported.
        // oss.EncodingType("url"),
        // Specify the storage class of the object.
        // oss.ObjectStorageClass(oss.StorageStandard),
    }

    // Step 1: Initiate a multipart upload task and set the storage class to Standard.
    imur, err := bucket.InitiateMultipartUpload(objectName, options...)
    // Step 2: Upload parts.
    var parts []oss.UploadPart
    for _, chunk := range chunks {
        fd.Seek(chunk.Offset, os.SEEK_SET)
        // Call the UploadPart method to upload each part.
        part, err := bucket.UploadPart(imur, fd, chunk.Size, chunk.Number)
        if err != nil {
            fmt.Println("Error:", err)
            os.Exit(-1)
        }
        parts = append(parts, part)
    }

    // Set the access control list (ACL) of the object to public-read. By default, the object inherits the ACL of the bucket.
    objectAcl := oss.ObjectACL(oss.ACLPublicRead)

    // Step 3: Complete the multipart upload task and set the ACL of the object to public-read.
    cmur, err := bucket.CompleteMultipartUpload(imur, parts, objectAcl)
    if err != nil {
        fmt.Println("Error:", err)
        os.Exit(-1)
    }
    fmt.Println("cmur:", cmur)
}

using Aliyun.OSS;
var endpoint = "<yourEndpoint>";
var accessKeyId = "<yourAccessKeyId>";
var accessKeySecret = "<yourAccessKeySecret>";
var bucketName = "<yourBucketName>";
var objectName = "<yourObjectName>";
var localFilename = "<yourLocalFilename>";
// Create an OssClient instance.
var client = new OssClient(endpoint, accessKeyId, accessKeySecret);
// Initiate a multipart upload task.
var uploadId = "";
try
{
    // Specify the name of the object to upload and the bucket to which the object is uploaded. You can configure object metadata in InitiateMultipartUploadRequest. However, you do not need to specify ContentLength.
    var request = new InitiateMultipartUploadRequest(bucketName, objectName);
    var result = client.InitiateMultipartUpload(request);
    uploadId = result.UploadId;
    // Display the upload ID.
    Console.WriteLine("Init multi part upload succeeded");
    Console.WriteLine("Upload Id:{0}", result.UploadId);
}
catch (Exception ex)
{
    Console.WriteLine("Init multi part upload failed, {0}", ex.Message);
}
// Calculate the total number of parts.
var partSize = 100 * 1024;
var fi = new FileInfo(localFilename);
var fileSize = fi.Length;
var partCount = fileSize / partSize;
if (fileSize % partSize != 0)
{
    partCount++;
}
// Start the multipart upload task. partETags is a list of PartETags. OSS verifies the validity of each part after it receives the list of parts. After all parts are verified, OSS combines these parts into a complete object.
var partETags = new List<PartETag>();
try
{
    using (var fs = File.Open(localFilename, FileMode.Open))
    {
        for (var i = 0; i < partCount; i++)
        {
            var skipBytes = (long)partSize * i;
            // Find the start position for this upload.
            fs.Seek(skipBytes, 0);
            // Calculate the part size in this upload. The size of the last part is the size of the remainder after the object is split by the calculated part size.
            var size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
            var request = new UploadPartRequest(bucketName, objectName, uploadId)
            {
                InputStream = fs,
                PartSize = size,
                PartNumber = i + 1
            };
            // Call UploadPart to upload parts. The returned results contain the ETag values of the parts.
            var result = client.UploadPart(request);
            partETags.Add(result.PartETag);
            Console.WriteLine("finish {0}/{1}", partETags.Count, partCount);
        }
        Console.WriteLine("Put multi part upload succeeded");
    }
}
catch (Exception ex)
{
    Console.WriteLine("Put multi part upload failed, {0}", ex.Message);
}
// Complete the multipart upload task.
try
{
    var completeMultipartUploadRequest = new CompleteMultipartUploadRequest(bucketName, objectName, uploadId);
    foreach (var partETag in partETags)
    {
        completeMultipartUploadRequest.PartETags.Add(partETag);
    }
    var result = client.CompleteMultipartUpload(completeMultipartUploadRequest);
    Console.WriteLine("complete multi part succeeded");
}
catch (Exception ex)
{
    Console.WriteLine("complete multi part failed, {0}", ex.Message);
}

#include <alibabacloud/oss/OssClient.h>
#include <fstream>
#include <iostream>
#include <memory>

int64_t getFileSize(const std::string& file)
{
    std::fstream f(file, std::ios::in | std::ios::binary);
    f.seekg(0, f.end);
    int64_t size = f.tellg();
    f.close();
    return size;
}

using namespace AlibabaCloud::OSS;

int main(void)
{
    /* Initialize the information about the account that is used to access OSS. */
    std::string AccessKeyId = "yourAccessKeyId";
    std::string AccessKeySecret = "yourAccessKeySecret";
    std::string Endpoint = "yourEndpoint";
    /* Specify the name of the bucket. Example: examplebucket. */
    std::string BucketName = "examplebucket";
    /* Specify the full path of the object. The full path cannot contain the name of the bucket. Example: exampledir/exampleobject.txt. */
    std::string ObjectName = "exampledir/exampleobject.txt";

    /* Initialize resources such as networks. */
    InitializeSdk();

    ClientConfiguration conf;
    OssClient client(Endpoint, AccessKeyId, AccessKeySecret, conf);
    InitiateMultipartUploadRequest initUploadRequest(BucketName, ObjectName);
    /* Optional. Specify the storage class. */
    //initUploadRequest.MetaData().addHeader("x-oss-storage-class", "Standard");

    /* Initiate the multipart upload task. */
    auto uploadIdResult = client.InitiateMultipartUpload(initUploadRequest);
    auto uploadId = uploadIdResult.result().UploadId();
    std::string fileToUpload = "yourLocalFilename";
    int64_t partSize = 100 * 1024;
    PartList partETagList;
    auto fileSize = getFileSize(fileToUpload);
    int partCount = static_cast<int>(fileSize / partSize);
    /* Calculate the number of parts. */
    if (fileSize % partSize != 0) {
        partCount++;
    }

    /* Upload each part. */
    for (int i = 1; i <= partCount; i++) {
        auto skipBytes = partSize * (i - 1);
        auto size = (partSize < fileSize - skipBytes) ? partSize : (fileSize - skipBytes);
        std::shared_ptr<std::iostream> content = std::make_shared<std::fstream>(fileToUpload, std::ios::in|std::ios::binary);
        content->seekg(skipBytes, std::ios::beg);

        UploadPartRequest uploadPartRequest(BucketName, ObjectName, content);
        uploadPartRequest.setContentLength(size);
        uploadPartRequest.setUploadId(uploadId);
        uploadPartRequest.setPartNumber(i);
        auto uploadPartOutcome = client.UploadPart(uploadPartRequest);
        if (uploadPartOutcome.isSuccess()) {
            Part part(i, uploadPartOutcome.result().ETag());
            partETagList.push_back(part);
        }
        else {
            std::cout << "uploadPart fail" <<
            ",code:" << uploadPartOutcome.error().Code() <<
            ",message:" << uploadPartOutcome.error().Message() <<
            ",requestId:" << uploadPartOutcome.error().RequestId() << std::endl;
        }
    }

    /* Complete the multipart upload task. */
    CompleteMultipartUploadRequest request(BucketName, ObjectName);
    request.setUploadId(uploadId);
    request.setPartList(partETagList);
    /* Optional. Specify the ACL of the object. */
    //request.setAcl(CannedAccessControlList::Private);

    auto outcome = client.CompleteMultipartUpload(request);

    if (!outcome.isSuccess()) {
        /* Handle exceptions. */
        std::cout << "CompleteMultipartUpload fail" <<
        ",code:" << outcome.error().Code() <<
        ",message:" << outcome.error().Message() <<
        ",requestId:" << outcome.error().RequestId() << std::endl;
        ShutdownSdk();
        return -1;
    }

    /* Release resources such as networks. */
    ShutdownSdk();
    return 0;
}

Use ossutil

For more information about how to perform multipart upload by using ossutil, see Upload objects.

Use the RESTful API

If your program requires more custom options to perform multipart upload, you can call RESTful API operations. In this case, you must manually write code to calculate the signature. For more information, see InitiateMultipartUpload.
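The signature for OSS v1 header-based authentication is a base64-encoded HMAC-SHA1 over a canonical request string. The following Python sketch shows the shape of the calculation; treat it as an illustration and follow the signature documentation for the exact canonicalization rules (for example, how x-oss-* headers are lowercased, sorted, and joined).

```python
import base64
import hashlib
import hmac

def oss_v1_signature(access_key_secret, verb, content_md5, content_type,
                     gmt_date, canonicalized_oss_headers, canonicalized_resource):
    """Base64-encoded HMAC-SHA1 over the canonical request string.
    canonicalized_oss_headers is '' or a block of sorted x-oss-* header
    lines, each terminated by a newline character."""
    string_to_sign = "\n".join([
        verb, content_md5, content_type, gmt_date,
        canonicalized_oss_headers + canonicalized_resource,
    ])
    mac = hmac.new(access_key_secret.encode("utf-8"),
                   string_to_sign.encode("utf-8"), hashlib.sha1)
    return base64.b64encode(mac.digest()).decode("ascii")

# Example: sign an InitiateMultipartUpload request (POST with the
# uploads subresource). The credentials here are placeholders.
sig = oss_v1_signature("yourAccessKeySecret", "POST", "", "text/plain",
                       "Wed, 01 Jan 2025 00:00:00 GMT", "",
                       "/examplebucket/exampledir/exampleobject.txt?uploads")
print("Authorization: OSS yourAccessKeyId:" + sig)
```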