Backup To Object Storage Service (OSS)

This article explores how to back up files on Compute instances. We will create a schedule that pushes the files we want to back up onto Object Storage Service (OSS).

OSS is a cross-region service: when creating a bucket, you specify the region in which the bucket should live. Note that bucket names must be globally unique. When creating a private bucket, ensure it is created with the Private ACL.

ACL configuration for OSS bucket.

Configuration

To create the bucket programmatically, the following is required:

  1. Set up a valid RAM user (Cloud user) with OSS service permissions.
  2. Create an "Access Key" for that user (used to establish connections to the OSS service).

Below is an example of the RAM user permissions needed for OSS. Note that this example uses the Cloud-provided policy; you may want to restrict permissions to write- or read-only.

RAM user permissions for OSS.
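If you would rather not use the broad Cloud-provided policy, a custom RAM policy scoped to a single bucket might look like the following sketch (the bucket name `my-backup-bucket` is a placeholder; adjust the action list to your needs):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "oss:PutObject",
        "oss:GetObject",
        "oss:ListObjects",
        "oss:DeleteObject"
      ],
      "Resource": [
        "acs:oss:*:*:my-backup-bucket",
        "acs:oss:*:*:my-backup-bucket/*"
      ]
    }
  ]
}
```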

You will need to create an "Access Key" for the RAM user. This key pair acts as the credentials needed to establish connections to OSS through the SDK or CLI. To create one, navigate to the RAM console and select the user you want to set up.

Find the "Access Keys" section and follow the instructions to create a new key. You will be prompted to save the AccessKey ID and secret; you will need both to connect to the OSS service.
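To keep the secret out of source control, one option is to load the key pair from environment variables. A minimal sketch, assuming the variable names below (they are my own convention, not an official one):

```javascript
// Sketch: load the RAM user's AccessKey from environment variables so the
// secret never lands in source control. The variable names are an assumption.
const loadOssCredentials = (env = process.env) => {
  const { OSS_REGION, OSS_ACCESS_KEY_ID, OSS_ACCESS_KEY_SECRET } = env;
  if (!OSS_ACCESS_KEY_ID || !OSS_ACCESS_KEY_SECRET) {
    throw new Error('OSS AccessKey ID/secret not set in environment');
  }
  return {
    region: OSS_REGION || 'oss-eu-central-1', // fall back to a default region
    accessKeyId: OSS_ACCESS_KEY_ID,
    accessKeySecret: OSS_ACCESS_KEY_SECRET
  };
};
```

The returned object can be spread straight into the `ali-oss` client constructor.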

Creating an Access Key for the RAM user.

Below is the code used to create an OSS bucket programmatically:

SDK

const aliOSS = require('ali-oss');

const bucketCreate = async (bucket) => {
  if (!bucket) {
    throw new Error('bucket required!');
  }

  try {
    // region, accessKeyId and accessKeySecret come from the RAM user setup above
    const client = new aliOSS({ region, accessKeyId, accessKeySecret });
    await client.putBucket(bucket, {
      storageClass: 'Standard',
      dataRedundancyType: 'LRS'
    });
  } catch (err) {
    console.log(err);
  }
};
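Since bucket names must be globally unique and follow OSS's naming rules (3 to 63 characters; lowercase letters, digits and hyphens; starting and ending with a letter or digit), it can help to validate the name locally before calling `putBucket`. A sketch (`isValidBucketName` is a hypothetical helper, not part of the SDK):

```javascript
// Sketch: client-side check of OSS bucket naming rules before calling putBucket.
// 3-63 chars; lowercase letters, digits, hyphens; no leading/trailing hyphen.
const isValidBucketName = (name) =>
  typeof name === 'string' && /^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/.test(name);
```

This only catches malformed names; a well-formed name can still fail at the API if another account has already claimed it.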

More SDK wrappers…

Below are more functions using the OSS SDK that you may find useful, collected into a reusable wrapper module:

const aliOSS = require('ali-oss');

const AliCloudOSS = (region, accessKeyId, accessKeySecret) => {
  // Create a bucket (as in the earlier example, but using the closure credentials).
  const bucketCreate = async (bucket) => {
    if (!bucket) {
      throw new Error('bucket required!');
    }
    try {
      const client = new aliOSS({ region, accessKeyId, accessKeySecret });
      await client.putBucket(bucket, {
        storageClass: 'Standard',
        dataRedundancyType: 'LRS'
      });
    } catch (err) {
      console.log(err);
    }
  };

  // Checks if a user has access to the specified bucket.
  const bucketExists = async (bucket) => {
    let exists = false;
    try {
      const client = new aliOSS({ region, bucket, accessKeyId, accessKeySecret });
      const result = await client.getBucketInfo(bucket);
      exists = result.bucket !== undefined;
    } catch (error) {
      console.log(error);
    }
    return exists;
  };

  // List all bucket objects, or only those under a particular 'folder' prefix.
  const bucketObjects = async (bucket, prefix = null) => {
    const client = new aliOSS({ region, bucket, accessKeyId, accessKeySecret });

    const results = prefix === null
      ? await client.listV2({ bucket })
      : await client.listV2({ bucket, prefix });

    return results.objects;
  };

  // List all 'folders' (common prefixes) in a bucket.
  const bucketFolders = async (bucket) => {
    const client = new aliOSS({ region, bucket, accessKeyId, accessKeySecret });

    const results = await client.list({ bucket, delimiter: '/' });

    return results.prefixes || [];
  };

  // Keep only the latest `limit` bucket 'folders', deleting the rest.
  const bucketCleanup = async (bucket, limit = 3) => {
    const folders = await bucketFolders(bucket) || [];
    folders.reverse(); // latest at the top

    if (limit >= 1) {
      const foldersToDelete = folders.slice(limit); // everything beyond the latest `limit` (rolling delete)
      for (let i = 0; i < foldersToDelete.length; i++) {
        const objects = await bucketObjects(bucket, foldersToDelete[i]);
        const deleteList = objects.map(a => a.name);
        await objectDelete(bucket, deleteList, true);
      }
    }
  };

  // Delete objects one-by-one or as a batch.
  const objectDelete = async (bucket, objectNames, batch = false) => {
    if (objectNames) {
      try {
        const client = new aliOSS({ region, bucket, accessKeyId, accessKeySecret });
        if (batch) {
          await client.deleteMulti(objectNames);
        } else {
          for (let i = 0; i < objectNames.length; i++) {
            await client.delete(objectNames[i]);
          }
        }
      } catch (error) {
        console.log(error);
      }
    }
  };

  // Push a local file to OSS.
  const push = async (bucket, localFilePath, ossFilePath) => {
    let result;
    try {
      const client = new aliOSS({ region, bucket, accessKeyId, accessKeySecret });
      result = await client.put(ossFilePath, localFilePath);
    } catch (e) {
      console.log(e);
    }
    return result;
  };

  return {
    bucketCreate, bucketExists, push, bucketObjects, bucketCleanup, bucketFolders
  };
};

module.exports = AliCloudOSS;
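Note that `bucketCleanup` only works correctly if the 'folder' prefixes sort chronologically, since OSS lists prefixes lexicographically. Date-stamped prefixes in `YYYY-MM-DD` form guarantee that. A sketch of a key-building helper (`backupKey` and the `backups/` prefix are hypothetical, not part of the SDK):

```javascript
// Sketch: build a date-stamped OSS key such as "backups/2024-05-01/db.tar.gz".
// YYYY-MM-DD prefixes sort lexicographically in chronological order, which is
// what bucketCleanup relies on.
const backupKey = (fileName, date = new Date()) => {
  const day = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `backups/${day}/${fileName}`;
};
```

A backup job could then call `push(bucket, localPath, backupKey('db.tar.gz'))` and follow up with `bucketCleanup(bucket)` to expire old folders.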

CRON Schedule

To create the schedule I used the node-schedule package:

const schedule = require('node-schedule');

/*
  Note that the scheduleJob callback does not take custom parameters — it only
  receives the scheduled fire date. Anything else the backup needs (bucket
  name, file paths) must be captured via closure. See the npm project page
  for details.
*/
// bc.schedule holds the cron expression for the job, e.g. '0 2 * * *' (daily at 02:00)
let job = schedule.scheduleJob(bc.schedule, (fireDate) => {
  // Backup code here...
});