Does anyone have a working example of an app that can do large file downloads via a web page?
All of them crash when I try to download a 6.5 GB file locally.
Crash? What kind of crash?
I did not report it. I just assumed that those projects couldn’t support large files.
I didn’t look at the reason for the crash, but the app just dies and tosses me back to the project.
Actually, the size should not matter unless a 32-bit variable is used somewhere for size/position.
I made Feedback case 37770.
It seems like the web framework dumps data from the file into the socket until it runs out of memory.
If you can place the file on a web accessible URL, simply present a link to it.
If you want to restrict the download to the current user, and your app runs on Mac OS or Linux, create a symbolic link under the web folder that points to the file, and link to it. Then, when the download is over, remove the symlink.
That way the download is carried out directly by the browser, not by your app. Please note that some browsers do have a limit for http download. Internet Explorer cannot download more than 2GB, for instance.
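The symlink trick above can be sketched like this (Python used for illustration; the function name, random-token naming scheme, and paths are all hypothetical, not from the thread):

```python
import os
import secrets
import tempfile

def publish_via_symlink(source_path, web_root):
    """Expose a private file through the web folder via a throwaway symlink.

    Returns the symlink path; remove it once the download finishes.
    """
    # Random name so the URL is hard to guess (a crude access restriction).
    token = secrets.token_urlsafe(16)
    link_name = token + "-" + os.path.basename(source_path)
    link_path = os.path.join(web_root, link_name)
    os.symlink(source_path, link_path)  # POSIX only (Mac OS / Linux)
    return link_path

# Demo: publish a file from a private folder, then clean up.
src_dir = tempfile.mkdtemp()
web_root = tempfile.mkdtemp()
src = os.path.join(src_dir, "big.iso")
with open(src, "wb") as f:
    f.write(b"data")

link = publish_via_symlink(src, web_root)
assert os.path.islink(link)
with open(link, "rb") as f:
    assert f.read() == b"data"  # the link serves the real file's bytes
os.remove(link)  # remove the symlink when the download is over
```

The web server then serves the bytes itself, so the Xojo app's memory never holds the file.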
The problem probably lies in the fact that most of them try to load the file into memory first.
Greg, could the two Xojo examples be updated to do the right thing?
- Example Projects/Web/XojoCloudFileManager
- Example Projects/Communication/Internet/Web Server/WebServer.xojo_binary_project
99% of the time, the way the examples are doing it IS the right thing. You are dealing with an exceptional situation, which must be addressed differently. Remember, Xojo is designed to be able to function standalone - in the absence of an HTTP server. It has to be able to serve the data itself. To do this the way that you intend and still be able to run standalone would require incorporating a second http server to supply the large files, which the examples cannot do.
The issue is probably not with Xojo, but with the browsers. The 2GB file limit is true also with Chrome, and I would bet that it is also the case for other browsers.
The only way I can think of that may allow you to overcome that limitation is to use ftp:// instead of http://, but not all browsers support that protocol.
Alternatively, you could make a multipart zip file with parts smaller than 2 GB, and present them one after the other for download.
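The multipart idea amounts to plain byte-splitting; a minimal Python sketch (the part size and `.partNNN` naming are made up for illustration, and a real multipart zip would use an archiver instead):

```python
import os

def split_file(path, part_size):
    """Split a file into numbered parts, each no larger than part_size bytes."""
    parts = []
    with open(path, "rb") as src:
        index = 0
        while True:
            chunk = src.read(part_size)
            if not chunk:  # end of file reached
                break
            part_path = "%s.part%03d" % (path, index)
            with open(part_path, "wb") as dst:
                dst.write(chunk)
            parts.append(part_path)
            index += 1
    return parts

# Demo: 10 bytes split into 4-byte parts -> 3 parts (4 + 4 + 2 bytes).
with open("demo.bin", "wb") as f:
    f.write(b"0123456789")
parts = split_file("demo.bin", 4)
assert len(parts) == 3
assert os.path.getsize(parts[-1]) == 2
```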
I download stuff all the time that’s bigger than 2GB using browsers. Disk images for Linux for instance.
Seriously though, if your app will be running on a server that has a web server like Apache or IIS running, use that to deliver a file like that.
I would if I could, but I can't. This has to run on a non-standard port.
The examples that run out of memory do this:
[code]bs = BinaryStream.Open(f, False)
Self.Write( bs.Read( bs.Length ) )[/code]
Could I just change the write to a loop where I read a bit from the bs and then write it out, followed by a flush?
Oh yeah, don’t do that. Do something like this:
[code]bs = BinaryStream.Open(f, False)
while not bs.eof
  Self.Write( bs.Read( 1024 ) )
wend
bs.Close[/code]
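For clarity, the same chunked pattern sketched in Python (the chunk size is arbitrary): only one chunk is held in memory at a time, so peak memory stays bounded regardless of file size.

```python
def stream_file(path, write, chunk_size=64 * 1024):
    """Copy a file to a sink in fixed-size chunks instead of one big read."""
    with open(path, "rb") as src:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:  # EOF
                break
            write(chunk)

# Demo: stream 150,000 bytes; no single read exceeds the chunk size.
with open("payload.bin", "wb") as f:
    f.write(b"x" * 150_000)
out = []
stream_file("payload.bin", out.append)
assert sum(len(c) for c in out) == 150_000
assert max(len(c) for c in out) <= 64 * 1024
```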
Thanks Greg. I just tried that out and the app still crashes with the info below.
It also behaves differently. Instead of passing the download to the browser download queue, it keeps loading in the browser window as if it was a huge page rather than a download.
This is in the Example Projects/Communication/Internet/Web Server/WebServer.xojo_binary_project. I just commented out the existing bs and put your replacement in. The app still seems to be running out of RAM and quits…
I didn’t test Example Projects/Communication/Internet/Web Server/WebServer.xojo_binary_project, but I bet it would do the same thing.
[code]Process: SimpleWebServer.debug 
Version: ??? (22.214.171.124.0)
Code Type: X86 (Native)
Parent Process: ??? 
Responsible: SimpleWebServer.debug 
User ID: 502
Date/Time: 2015-01-16 09:41:41.753 -0500
OS Version: Mac OS X 10.10.1 (14B25)
Report Version: 11
Anonymous UUID: 29237EAD-70FD-F525-0CCE-5EFC417B298D
Sleep/Wake UUID: 0756349B-A4D2-44C7-AE42-FA908A6C66FA
Time Awake Since Boot: 800000 seconds
Time Since Wake: 140000 seconds
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Application Specific Information:
terminating with uncaught exception of type std::bad_alloc: std::bad_alloc[/code]
[quote]It also behaves differently. Instead of passing the download to the browser download queue, it keeps loading in the browser window as if it was a huge page rather than a download.[/quote]
You may want to see the Downloading web example. It uses
WebFile.ForceDownload = True to prevent the data from being rendered as a page.
The problem probably has more to do with how data is passed to the socket than anything else. When you call Socket.Write, you're not telling the socket to send; rather, you are adding data to the socket's buffer to be sent when the socket gets to it. The reason my code behaves differently is that it's only holding 1 KB of data in memory for each write (I intended to write it as 1 MB, but my brain forgot how to multiply), but the problem stands: for each segment you copy into RAM, there's a moment when you have two copies in memory, making the problem twice as bad.
[code]dim s as string = "...a megabyte of data..."
Sock.Write(s) // right here, both the buffer and "s" contain the data
s = ""[/code]
I still think a better solution is to have the file served by another app. If the data doesn't change very often, how about putting the file on a CDN and making it someone else's problem?
It would be nice if the web framework could track how often it called Socket.Write and how often it got the SendComplete event.
So maybe only write at most 3 more times than you got SendComplete?
That way you would only have up to 4 MB in the buffer.
Or limit it to 100 MB. Just don't feed it unlimited data.
Based on the original post, I don't think we're just talking about the web framework, but to answer your query, I'm not sure we can make it that low. It'll depend on the block size necessary for the socket to encrypt data, if I'm remembering correctly, but it's a good idea.
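The flow-control idea suggested above, keeping only a bounded number of unacknowledged chunks in the socket buffer, can be sketched like this (Python for illustration; the class, the window size, and the `on_send_complete` callback name are assumptions standing in for the framework's SendComplete event, not actual Xojo API):

```python
class WindowedSender:
    """Feed chunks to a socket-like sink with at most `window` chunks buffered."""

    def __init__(self, sink_write, window=4):
        self.sink_write = sink_write
        self.window = window
        self.outstanding = 0  # chunks written but not yet confirmed sent
        self.pending = []     # chunks waiting for buffer space

    def send(self, chunk):
        if self.outstanding < self.window:
            self.outstanding += 1
            self.sink_write(chunk)
        else:
            # Buffer is full; hold the chunk instead of growing the buffer.
            self.pending.append(chunk)

    def on_send_complete(self):
        # One buffered chunk left the socket; top the buffer back up.
        self.outstanding -= 1
        if self.pending:
            self.send(self.pending.pop(0))

# Demo: queue 10 chunks, but only the window ever reaches the buffer.
buffer = []
s = WindowedSender(buffer.append, window=4)
for i in range(10):
    s.send(b"chunk%d" % i)
assert len(buffer) == 4   # only the window is buffered
s.on_send_complete()
assert len(buffer) == 5   # next chunk is fed on SendComplete
```

The point is that memory use is proportional to the window, not to the file size.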