If you can place the file at a web-accessible URL, simply present a link to it.
If you want to restrict the download to the current user, and your app runs on OS X or Linux, create a symbolic link under the web folder that points to the file, and link to that. Then, when the download is over, remove the link.
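As a sketch of that symlink approach (all paths below are stand-ins created with mktemp; in a real app they would be your file and your server's web folder):

```shell
# Sketch, assuming a file on disk and a folder your web server exposes;
# every path here is a temporary stand-in, not a real deployment layout.
BIG_FILE=$(mktemp)                 # stands in for the large file to serve
WEB_DIR=$(mktemp -d)               # stands in for the web-accessible folder
echo "payload" > "$BIG_FILE"

# Hard-to-guess per-user name for the temporary URL
TOKEN=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')

# Publish: place the symlink under the web folder and hand out its URL
ln -s "$BIG_FILE" "$WEB_DIR/$TOKEN.bin"
cat "$WEB_DIR/$TOKEN.bin"          # the browser reads the file through the link

# Retire: once the download finishes, remove the link (the file itself stays)
rm "$WEB_DIR/$TOKEN.bin"
```

The random token keeps the temporary URL from being guessable by other users while it exists, which is as close to "restricted to the current user" as a plain static link gets.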
That way the download is carried out directly by the browser, not by your app. Note that some browsers have a size limit for HTTP downloads: Internet Explorer cannot download more than 2 GB, for instance.
99% of the time, the way the examples do it IS the right thing. You are dealing with an exceptional situation, which must be addressed differently. Remember, Xojo is designed to function standalone, in the absence of an HTTP server, so it has to be able to serve the data itself. To do this the way that you intend and still be able to run standalone would require incorporating a second HTTP server to supply the large files, which the examples cannot do.
The issue is probably not with Xojo but with the browsers. The 2 GB file limit also applies to Chrome, and I would bet it is the case for other browsers as well.
The only way I can think of to overcome that limitation is to use ftp:// instead of http://, but not all browsers support that protocol.
Alternatively, you could make a multipart zip file with parts smaller than 2 GB, and present them one after the other for download.
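The split-and-reassemble idea is generic. As a sketch with made-up sizes, using plain split/cat rather than an actual multipart zip (a real multipart archive, e.g. via zip's split option, would produce .z01/.z02/... parts instead):

```shell
# Sketch: cut a big file into parts below the browser's limit and rejoin them.
# Sizes here are tiny stand-ins; for a real 2 GB limit you would split at
# something like 1900 MB per part.
BIG_FILE=$(mktemp)
head -c 5000000 /dev/zero > "$BIG_FILE"   # stand-in for a multi-gigabyte file

# Split into ~2 MB parts named part.aa, part.ab, part.ac
split -b 2000000 "$BIG_FILE" part.

# The user downloads the parts one after the other, then reassembles them:
cat part.* > rejoined.bin
cmp "$BIG_FILE" rejoined.bin && echo "parts rejoin to the original"
```

Since each part stays under the browser's per-download limit, every individual transfer succeeds even though the whole file would not.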
Thanks Greg. I just tried that out and the app still crashes with the info below.
It also behaves differently. Instead of passing the download to the browser download queue, it keeps loading in the browser window as if it was a huge page rather than a download.
This is in the Example Projects/Communication/Internet/Web Server/WebServer.xojo_binary_project. I just commented out the existing bs and put your replacement in. The app still seems to be running out of RAM and quits…
I didn’t test Example Projects/Communication/Internet/Web Server/WebServer.xojo_binary_project, but I bet it would do the same thing.
[quote=159890:@Hal Gumbert]
It also behaves differently. Instead of passing the download to the browser download queue, it keeps loading in the browser window as if it was a huge page rather than a download. [/quote]
You may want to look at the Downloading web example. It uses WebFile.ForceDownload = True to prevent the data from being rendered as a page.
The problem probably has more to do with how data is passed to the socket than anything else. When you call Socket.Write, you're not telling the socket to send; rather, you are adding data to the socket's buffer to be sent when the socket gets to it. The reason my code behaves differently is that it only holds 1 KB of data in memory for each write (I intended to write it as 1 MB, but my brain forgot how to multiply), but the problem stands. For each segment you copy into RAM, there's a moment when you have two copies in memory, making the problem twice as bad:
dim s as string = "...a megabyte of data..."
Sock.write(s) // right here. Both the buffer and "s" contain the data
s = ""
I still think a better solution is to have the file served by another app. If the data doesn't change very often, how about putting the file on a CDN and making it someone else's problem?
It would be nice if the Web framework could compare how often it called Socket.Write against how often it got the SendComplete event.
So maybe only send at most 3 times more often than you get SendComplete?
That way you would only ever have up to 4 MB in the buffer.
Or limit it to 100 MB. Just don't feed it unlimited data.
Based on the original post, I don't think we're just talking about the web framework, but to answer your query, I'm not sure we can make it that low. If I'm remembering correctly, it'll depend on the block size necessary for the socket to encrypt data, but it's a good idea.