Restored DuckDuckGo's proxy for URL uploads

Yes.

This gets rid of the HEAD request prior to downloading the URL.

We will no longer check the Content-Length header; instead, we will
forcibly limit the maximum download size of the download stream to the
configured value.

So if someone tries to download a bigger file, it will still download up
to the configured size, but then fail.

This will also speed up the general download process, since sending a
HEAD request beforehand delayed the whole operation.
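
For illustration, a minimal sketch of the new flow, assuming node-fetch
(whose size option and buffer() method the controller code below uses);
the cap value here is made up:

const fetch = require('node-fetch')

// Hypothetical cap; the real value is derived from the urlMaxSize config option
const urlMaxSizeBytes = 100 * 1000000

const downloadCapped = async url => {
  // Single GET request; the size option makes node-fetch cap the response body
  const res = await fetch(url, { size: urlMaxSizeBytes })
  if (res.status !== 200)
    throw new Error(`${res.status} ${res.statusText}`)
  try {
    // Streams up to the cap, then rejects once the body grows past it
    return await res.buffer()
  } catch (error) {
    // node-fetch signals the overflow with a FetchError of type 'max-size'
    if (error.type === 'max-size')
      throw new Error('File too large.')
    throw error
  }
}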
Bobby Wibowo 2019-04-11 22:27:45 +07:00
parent 3e336e8c6d
commit b7600ec3fb
2 changed files with 22 additions and 44 deletions


@@ -139,41 +139,33 @@ module.exports = {
     Example:
     https://images.weserv.nl/?url={url-noprot}
     will become:
-    https://images.weserv.nl/?url=example.com/assets/image.png
+    https://images.weserv.nl/?url=example.com%2Fassets%2Fimage.png
   */
-  urlProxy: 'https://images.weserv.nl/?url={url-noprot}',
+  urlProxy: 'https://proxy.duckduckgo.com/iu/?u={url}',

   /*
-    Disclaimer message that will be printed underneath the URL uploads form.
-    Supports HTML. Be safe though.
+    Disclaimer message that will be printed underneath the URL uploads form.
+    Supports HTML. Be safe though.
   */
-  urlDisclaimerMessage: 'URL uploads are being proxied and compressed by <a href="https://images.weserv.nl/" target="_blank" rel="noopener">images.weserv.nl</a>. By using this feature, you agree to their <a href="https://github.com/weserv/images/blob/4.x/Privacy-Policy.md" target="_blank" rel="noopener">Privacy Policy</a>.',
+  urlDisclaimerMessage: 'URL uploads are being proxied by <a href="https://duckduckgo.com/" target="_blank" rel="noopener">DuckDuckGo</a>. The proxy can only process direct links, and generally it can only proxy images.',

   /*
-    Filter mode for URL uploads.
-    Can be 'blacklist', 'whitelist', or 'inherit'.
-    'inherit' => inherit primary extensions filter (extensionsFilter option).
-    The rest are paired with urlExtensionsFilter option below and should be self-explanatory.
-    When this is not set to any of the 3 values, this will fallback to 'inherit'.
+    Filter mode for URL uploads.
+    Can be 'blacklist', 'whitelist', or 'inherit'.
+    'inherit' => inherit primary extensions filter (extensionsFilter option).
+    The rest are paired with urlExtensionsFilter option below and should be self-explanatory.
+    When this is not set to any of the 3 values, this will fallback to 'inherit'.
   */
-  urlExtensionsFilterMode: 'whitelist',
+  urlExtensionsFilterMode: 'inherit',

   /*
-    Mainly intended for URL proxies that only support certain extensions.
-    This will parse the extensions from the URLs, so URLs that do not end with
-    the file's extensions will always be rejected.
-    Queries and segments in the URLs will be bypassed.
-    NOTE: Can not be empty when using either 'blacklist' or 'whitelist' mode.
+    Mainly intended for URL proxies that only support certain extensions.
+    This will parse the extensions from the URLs, so URLs that do not end with
+    the file's extensions will always be rejected.
+    Queries and segments in the URLs will be bypassed.
+    NOTE: Can not be empty when using either 'blacklist' or 'whitelist' mode.
   */
-  urlExtensionsFilter: [
-    '.gif',
-    '.jpg',
-    '.jpeg',
-    '.png',
-    '.bmp',
-    '.xbm',
-    '.webp'
-  ],
+  urlExtensionsFilter: [],

   /*
     Scan files using ClamAV through clamd.
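
As an aside, a sketch of how the new urlProxy template would presumably
expand for a sample URL; the {url} replacement here mirrors the
encodeURIComponent treatment visible for {url-noprot} in the
uploadsController.js hunk below (the actual {url} line is not shown in
this diff):

const url = 'https://example.com/assets/image.png'
const proxied = 'https://proxy.duckduckgo.com/iu/?u={url}'
  .replace(/{url}/g, encodeURIComponent(url))
// => 'https://proxy.duckduckgo.com/iu/?u=https%3A%2F%2Fexample.com%2Fassets%2Fimage.png'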


@@ -264,26 +264,12 @@ uploadsController.actuallyUploadByUrl = async (req, res, user, albumid) => {
           .replace(/{url-noprot}/g, encodeURIComponent(url.replace(/^https?:\/\//, '')))

       try {
-        const fetchHead = await fetch(url, { method: 'HEAD' })
-        if (fetchHead.status !== 200)
-          return erred(`${fetchHead.status} ${fetchHead.statusText}`)
-
-        const headers = fetchHead.headers
-        const size = parseInt(headers.get('content-length'))
-        if (isNaN(size))
-          return erred('URLs with missing Content-Length HTTP header are not supported.')
-        if (size > urlMaxSizeBytes)
-          return erred('File too large.')
-        if (config.filterEmptyFile && size === 0)
-          return erred('Empty files are not allowed.')
-
-        // Limit max response body size with the size reported by Content-Length
-        const fetchFile = await fetch(url, { size })
+        // Limit max response body size with maximum allowed size
+        const fetchFile = await fetch(url, { size: urlMaxSizeBytes })
         if (fetchFile.status !== 200)
-          return erred(`${fetchHead.status} ${fetchHead.statusText}`)
+          return erred(`${fetchFile.status} ${fetchFile.statusText}`)

+        const headers = fetchFile.headers
         const file = await fetchFile.buffer()
         const length = uploadsController.getFileNameLength(req)
@@ -297,7 +283,7 @@ uploadsController.actuallyUploadByUrl = async (req, res, user, albumid) => {
           filename: name,
           originalname: original,
           mimetype: headers.get('content-type').split(';')[0] || '',
-          size,
+          size: file.byteLength,
           albumid
         }
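
Note that the removed filterEmptyFile guard depended on Content-Length,
which is no longer read. A hypothetical post-download equivalent (not part
of this diff) would have to check the downloaded buffer instead:

// Hypothetical: empty-file check rewritten against the downloaded buffer
if (config.filterEmptyFile && file.byteLength === 0)
  return erred('Empty files are not allowed.')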