Question to the Xojo Team: Xojo.GenerateJSON / Xojo.ParseJSON

In my tests I have already noticed a considerable speed increase with the Xojo.GenerateJSON and Xojo.ParseJSON methods compared to the “old” Xojo.Core.GenerateJSON and Xojo.Core.ParseJSON methods. So far so good.

Now I wonder whether the new methods read a JSON string in chunks, similar to what the XMLReader class does. Could someone from the Xojo development team please comment?

Thanks.

Chunks?

XMLReader can read a stream of incoming data, allowing it to handle almost unlimited-size XML documents, since you never need the entire document in memory at once.
JSON doesn't work that way.

I suppose it could, since the tokens are well defined - it just doesn't, as far as I know.

Do you have a use case scenario for this, Martin?

I would use a binary protocol if buffering is a requirement.

There are times, as with the XML reader, when being able to open a stream and read it as a stream is the only way to deal with a gargantuan file.
That was true with the XML reader for a long while, and it made it possible to read multi-gigabyte files even in a 32-bit app, since you didn't have to read the whole thing in and parse it at once (been there, done that).
Now we have 64-bit apps, but that doesn't mean it's any better to read in a multi-GB file all at once and parse it into JSON.

A streamed reader would still have a lot of utility

There are lots of implementations in other languages
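For example, even without a dedicated streaming library, Python's standard library can consume a buffer block by block via `json.JSONDecoder.raw_decode` (a sketch, assuming the input is a stream of concatenated top-level JSON values; the function name `iter_json_stream` is made up for illustration):

```python
import json

def iter_json_stream(chunks):
    """Yield complete JSON values from an iterable of text chunks that
    together contain concatenated top-level documents, e.g. '{"a":1}{"b":2}'."""
    decoder = json.JSONDecoder()
    buf = ""
    for chunk in chunks:
        buf += chunk
        while True:
            buf = buf.lstrip()
            if not buf:
                break
            try:
                # raw_decode parses one value and reports where it ended
                obj, end = decoder.raw_decode(buf)
            except json.JSONDecodeError:
                break  # value is incomplete; wait for the next chunk
            yield obj
            buf = buf[end:]

# Values split across chunk boundaries are handled transparently:
values = list(iter_json_stream(['{"a": 1}{"b"', ': 2}[3, 4]']))
```

Note the caveat that a bare number at the end of the buffer is ambiguous (more digits could follow), so this pattern works best when values end with `}`, `]`, or a delimiter.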

At one point I started a YAML class but got almost nowhere with it. That’s one designed for streaming.

Well, if there is a need, I could change our parser to accept data in blocks.
But I doubt it is needed, as I haven't seen many multi-megabyte JSON blocks so far.

I just did some tests with large JSON files from the web and saw the UI freezing. That's why I'm asking. Remember, if you have a deeply nested, multilevel JSON document, Xojo.ParseJSON needs to create a lot of dictionaries. That takes time and memory.

Scenario for very large JSON files: saving a large document together with image files (in string form, via Picture.ToData) in a database. Depending on its dimensions, an image alone produces a very long string.
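For rough numbers (a Python sketch, since Picture.ToData is Xojo-specific; base64 is the usual way binary data ends up inside a JSON string, and it inflates the payload by about 4/3):

```python
import base64
import json

# ~1 MB of fake image bytes standing in for Picture.ToData output
blob = bytes(range(256)) * 4096

# Embedding binary data in JSON means encoding it as text first
doc = {"image": base64.b64encode(blob).decode("ascii")}
text = json.dumps(doc)

# The base64 string alone is roughly 4/3 the size of the raw bytes,
# before counting the JSON framing around it
print(len(blob), len(doc["image"]))
```

So a handful of camera-sized images pushes a single JSON document well into multi-megabyte territory, which is exactly the scenario described above.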

Very good explanation. That’s exactly why I came up with the question.

It would still be interesting if a developer from Xojo could comment on this.

[quote=483372:@Christian Schmitz]Well, if there is a need, I could change our parser to accept data in blocks.
But I doubt it is needed, as I haven't seen many multi-megabyte JSON blocks so far.[/quote]
I regularly deal with JSON files around 10MB, but even at that number I haven’t had an issue with speed.

I'm surprised none of them have chimed in, but then there have been a couple of other threads where direct questions to Xojo team members have gone unanswered, so …

In the case I mentioned, it was a 490 MB XML file - one of the “middle-sized” ones we had to deal with.
If you read it all into memory, the app would pause as it read it in.
When we tried to parse it into a document tree, the app would often crash, since there seemed to be at least three copies in memory, plus the rest of the app and all its graphics, etc.
This WAS a 32-bit app, so maybe a big XML file would no longer crash the app in 64-bit - but the pause on read and convert would still occur.
When we switched to the event-driven XML reader, which took more work on our part, all those issues disappeared, and we could successfully read XML files that exceeded 1 GB (again, in a 32-bit app) with no pauses or freezes.

[quote=483396:@Norman Palardy]In the case I mentioned, it was a 490 MB XML file - one of the “middle-sized” ones we had to deal with.
If you read it all into memory, the app would pause as it read it in.
When we tried to parse it into a document tree, the app would often crash, since there seemed to be at least three copies in memory, plus the rest of the app and all its graphics, etc.
This WAS a 32-bit app, so maybe a big XML file would no longer crash the app in 64-bit - but the pause on read and convert would still occur.
When we switched to the event-driven XML reader, which took more work on our part, all those issues disappeared, and we could successfully read XML files that exceeded 1 GB (again, in a 32-bit app) with no pauses or freezes.[/quote]
Oh I don’t doubt it. I think my point was more that “multi megabyte” doesn’t mean huge.

Any suggestions to how a streaming JSON parser might work?

Fair - I was just detailing the use case I ran into with large XML files and how we ended up dealing with them.
I've not encountered a JSON file of that size, but I suppose someone could.

Actually, I do have some ideas on making a JSON stream reader.
A JSON parser IS just a finite state machine at any rate, and FSMs are well suited to being fed data incrementally.
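A minimal sketch of what such a chunk-fed reader could look like (Python, illustrative only; the class and method names are made up, and a real implementation would fire events rather than collect tokens in a list):

```python
class JSONTokenizer:
    """Emit (kind, value) tokens from JSON text fed in arbitrary chunks.

    Only the unconsumed tail of the input is kept between feed() calls,
    so the whole document never has to be in memory at once.
    """

    def __init__(self):
        self._buf = ""      # carry-over for tokens split across chunks
        self.tokens = []    # a real streaming reader would raise events instead

    def feed(self, chunk):
        self._buf += chunk
        i, n = 0, len(self._buf)
        while i < n:
            c = self._buf[i]
            if c in " \t\r\n":
                i += 1
            elif c in "{}[],:":
                self.tokens.append(("punct", c))
                i += 1
            elif c == '"':
                end = self._find_string_end(i)
                if end is None:
                    break  # string continues in the next chunk
                self.tokens.append(("string", self._buf[i + 1:end]))
                i = end + 1
            else:
                # number / true / false / null: scan to the next delimiter
                j = i
                while j < n and self._buf[j] not in ' \t\r\n{}[],:"':
                    j += 1
                if j == n:
                    break  # literal may continue in the next chunk
                self.tokens.append(("literal", self._buf[i:j]))
                i = j
        self._buf = self._buf[i:]  # keep only the unconsumed tail

    def _find_string_end(self, start):
        j = start + 1
        while j < len(self._buf):
            if self._buf[j] == "\\":
                j += 2              # skip the escaped character
            elif self._buf[j] == '"':
                return j
            else:
                j += 1
        return None                 # closing quote not yet received

    def close(self):
        # flush a trailing literal once no more data is coming
        tail = self._buf.strip()
        if tail:
            self.tokens.append(("literal", tail))
        self._buf = ""
```

A state machine layered on top of this token stream (tracking object/array nesting) would give the same event-driven model the XMLReader uses. Note this sketch skips validation and escape decoding for brevity.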

Well, first is there a need?

Or do you guys just need a yielding parser?

e.g. I could add that to my JSON plugin functions to run it threaded.

Yielding only addresses part of the issue

If you still have to read the whole thing in to be able to process it, that could in itself still be a problem, depending on how large the JSON is. Imagine trying to use a Pi to process JSON data, still limited to a 32-bit app and a relatively small amount of available memory.

That's not to say it shouldn't yield - that would help with the UI lock-up issues.

An answer from the development team would be very helpful: @Geoff Perlman , @Paul Lefebvre , @Greg O’Lone , @William Yu ?

I have received feedback from Xojo by mail: “They would rather not disclose the details of how the parser works, in case they decide to change things later on.”