Hashtag in AllowUnsupportedBrowser

Is the value for Hashtag set after the AllowUnsupportedBrowser event is fired?
I wanted to use the Hashtag value in the AllowUnsupportedBrowser event, but it always seems to be empty.

What about the HashTagChanged event? Does it fire?

It doesn’t seem to fire in an unsupported browser. With a supported browser, the hashtag works just fine.

Maybe you can parse WebSession.URL? Shooting in the dark; I have no unsupported browser at my disposal. Sorry.

I’m trying with Lynx, but the hashtag doesn’t show up in the URL. I have to check, but it seems to contain a /sessionID.

Took another look; it seems that Session.Header also doesn’t exist yet.
That makes it very difficult to determine which page the user is trying to access.

You can get the current URL in JavaScript with

document.URL

But since AllowUnsupportedBrowser has not displayed the page yet, you cannot use a WebSDK control to get that value. And ExecuteJavaScript won’t work there either…

My problem is that the client (probably) doesn’t support JavaScript anyway. I want to serve a flat, text-only version of the requested page. Session.Header does exist, but it is not shown in the debugger, and it does not contain the URL. Session.URL contains “/” + Session.Identifier. Session.Hashtag is empty, as is Session.BrowserVersion.

I get similar behaviour with Lynx, wget and curl.

You do get the Browser property, so you could simply ShowURL the index page of the text-only site when you see Lynx.

I actually want to create the page from the database, convert it to “simple” HTML, and display it using the error message.
I want to put in links using the hashtags, so you can still sort of navigate the site. The main purpose would be for web crawlers to be able to index the site. Crawlers would see a text-only version of the site with lists of links instead of nice menus, but at least they could index the site.
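The assembly step for such a crawler-facing page can be sketched roughly like this in Xojo. LoadPageText is a hypothetical database helper, the link targets are placeholders, and how the finished string is actually handed back to the browser (Ralf uses the unsupported-browser error message) depends on your Xojo version, so treat this only as a sketch of building the HTML:

```xojo
' Sketch only: assembles a flat, crawler-friendly HTML page.
' LoadPageText(pageName) is a hypothetical database lookup.
Function BuildTextOnlyHTML(pageName As String) As String
  Dim html As String
  html = "<html><head><title>" + pageName + "</title></head><body>"
  html = html + "<p>" + LoadPageText(pageName) + "</p>"
  ' Plain links carrying a "page" URL parameter, so Lynx or a
  ' crawler can follow them without any JavaScript.
  html = html + "<ul>"
  html = html + "<li><a href=""/?page=home"">Home</a></li>"
  html = html + "<li><a href=""/?page=products"">Products</a></li>"
  html = html + "</ul></body></html>"
  Return html
End Function
```

Plain anchor tags with query parameters keep the page navigable and indexable precisely because no JavaScript is required.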

I see. Good idea. I know Googlebot indexes PHP files; I am not sure about CGI, though. But it is worth a try.

I just did a quick experiment using Netscape for Mac, which is unsupported. I opened the app from it with parameters:

127.0.0.1:8080?tata=toto&tutu=titi&tonton=francois

And was able to retrieve them fine with:

System.DebugLog Str(Me.URLParameterCount)
For i As Integer = 0 To Me.URLParameterCount - 1
  System.DebugLog Me.URLParameterName(i)
Next

Instead of hashtags, you can use URL parameters to indicate the page.

You know the URL of your app already, so you can construct your links with the parameters and read them in AllowUnsupportedBrowser as I did.
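Inside the Session.AllowUnsupportedBrowser event, the parameter loop can be turned into a small dispatch. This is a sketch only: the "page" parameter name is an assumption to match whatever links you generate, and it uses the same URLParameter calls shown above:

```xojo
' Sketch for the Session.AllowUnsupportedBrowser event handler.
' "page" is an assumed parameter name; adapt it to your own links.
Dim pageName As String = "home" ' default when no parameter is given
For i As Integer = 0 To Me.URLParameterCount - 1
  If Me.URLParameterName(i) = "page" Then
    pageName = Me.URLParameter("page")
  End If
Next
System.DebugLog "Unsupported browser requested page: " + pageName
' pageName can now drive which flat HTML page you serve back.
```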

Well, I use mod_proxy to forward the call to a standalone version of the web app. That should be transparent enough. And Google did index the error page… I’m going to try the URL parameters.

Then you’ve got the solution. This is extremely interesting work. One of the main issues with Xojo Web apps is that, up until now, it did not seem possible to index them. But your approach should make that possible.

In a thread I cannot locate at the moment, though, I noted that most web apps, even made indexable, would probably not get very good results, since their content is poor in comparison to usual HTML pages. A web app UI is by definition relatively terse in static content, mainly made of button names and labels. To create richer indexable content, you probably want to create your HTML with a maximum of text and possible keywords. Since you are in control of constructing the HTML, you can probably optimize it much better than with another method offered by Matthew Combatti in yet another thread lost in the forum, where he simply attempted to make the page’s regular static content indexable.

It would be very interesting if you would be so kind as to keep the forum informed of your progress. I am sure other members can benefit from your experience.

It should take a few days to see what Google makes of it.

Great :slight_smile: Do keep us posted!

A quick update: as a proof of concept it works; Google indexes the page as expected. I’m still adding complexity to see how far I can go. I do want to replicate the full menu structure.

Wonderful news. Thank you :slight_smile:

I’m still watching how Google handles the updated version. The text is all Dutch, but you can look at http://www.aerorasor.com
If you use an unsupported browser, you will see what I have done. I think it works nicely, and I hope Google will index it properly.

It is indeed very, very nice!

Your site being already very descriptive, the result is rich enough to give Google a good keyword harvest. This will probably improve the visibility of Aerorasor. Now you may find yourself working on improving the text to make it even more relevant for Google indexing. For instance, it is considered good to have a keyword you want seen repeated three times per page.

Congratulations!

Hi all. Well, after reading through this whole thread, I’m still not sure exactly what Ralf did to accommodate Google’s web crawler (or anyone else’s) so I can better index my own site.

Any more specific examples from the Xojo code and/or the external HTML would be much appreciated!

(I’m a bit of a newbie on this stuff.)

THANKS!