Accessing the Documentation from the IDE?

I am still not able to access the documentation (online or offline) from the IDE, as of yesterday.

To enable the feature, I had to load (or create) a project. Weird.

I only wanted to take a quick look at something in the documentation!

Just a note, it’s also available at http://documentation.xojo.com if it’s playing tricks on you :)

Albin, thank you for the tip, but without an internet connection it does not work (otherwise it would be the way I would go, since it is faster than re-launching the application).

This reminds me of my all-in-one printer, which does not let me scan anything until I have installed the ink cartridges.
After all, everybody knows that to be able to scan a document, you must have installed the printer’s ink cartridges ;-)

BTW: I use Cmd-? to display the (offline) docs (F1 on Windows?)

If you need offline docs, try Dash! I use it all the time :) and there’s a Xojo doc set for it!
http://kapeli.com/dash

I downloaded the whole documentation, zipped it, and you can download it here: Click!

In case anybody can use it: it’s just a bunch of HTML files. Just open index.html and you can browse the wiki as if you had an internet connection (except for searching).

PLEASE DO NOT DO THIS!!!
It hammers the servers in a way that actually makes the wiki slow for anyone else trying to access it.
This explains why the wiki gets unresponsive from time to time,
and why our IT folks get alerts at 4 am saying the server is under attack.

Believe it or not, you already have every last one of these pages in the local language reference set.
There’s no need to download the wiki like this.

The HTTrack documentation even says as much (http://www.httrack.com/html/abuse.html):

[i]Do not overload the websites!

Downloading a site can overload it, if you have a fast pipe, or if you capture too many simultaneous cgi (dynamically generated pages).
Do not download too large websites: use filters
Do not use too many simultaneous connections
Use bandwidth limits
Use connection limits
Use size limits
Use time limits
Only disable robots.txt rules with great care
Try not to download during working hours
Check your mirror transfer rate/size
For large mirrors, first ask the webmaster of the site[/i]
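The guidelines above boil down to two things: honor robots.txt and pace your requests instead of hammering the server. A minimal Python sketch of that idea (the robots.txt content and the `polite_fetch_order` helper are hypothetical illustrations, not part of HTTrack or the Xojo wiki):

```python
import time
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for illustration; a real mirror tool would
# fetch it from the target site (e.g. /robots.txt) before crawling.
ROBOTS_TXT = """\
User-agent: *
Disallow: /private/
Crawl-delay: 2
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

def polite_fetch_order(paths, delay=2.0, max_pages=50):
    """Return the paths a polite mirror would actually fetch:
    skip anything robots.txt disallows, cap the total page count,
    and pause between requests rather than opening many connections."""
    fetched = []
    for path in paths:
        if len(fetched) >= max_pages:      # size limit
            break
        if not rp.can_fetch("*", path):    # robots.txt rule
            continue
        fetched.append(path)
        time.sleep(delay)                  # one request every `delay` seconds
    return fetched
```

With this filter, a crawl over `["/index.html", "/private/a.html", "/guide.html"]` would skip the disallowed `/private/` page, and `rp.crawl_delay("*")` reports the delay the site asks for (2 seconds here).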

Whoops! Sorry folks! Didn’t mean to do that! It’s only 13 MB, downloaded over a period of more than an hour; I wouldn’t expect that to cause so much trouble!

So, sorry again! Won’t do that anymore!!

[quote=68534:@Mathias Maes]Whoops! Sorry folks! Didn’t mean to do that! It’s only 13 MB, downloaded over a period of more than an hour; I wouldn’t expect that to cause so much trouble!

So, sorry again! Won’t do that anymore!![/quote]
The problem with scrapers is that they tend to grab the same resources over and over and then sort it all out locally; we can see that in the HTTP request logs.

The thing is, you actually already have the entire set in the local documentation db (including images).
We generate it from an offline wiki that we keep up to date, and then basically scrape that to dump into the local documentation set. That way we don’t kill our own wiki server generating it.

I think Gary used that db to generate the Dash docset

The better question is: “why do you need a local copy of the web site?”
If we address THAT, then the need to scrape the wiki goes away.

To be fair, I didn’t even know there were docs inside Xojo/RB.
I always use the online docs, so I scraped them for offline use. (I did the same with the RB docs about a year ago.)

Edit: I found the old topic on rb.com (my sense of time is off; it was actually 3.5 years ago): Link
Nobody replied back then, which is why I presumed nothing had changed.

The current IDE actually defaults to the internal documentation set.