Matching Photoshop to Xojo

In Web Edition, I upload PNGs made in Photoshop, but the resolution in Photoshop does not match the one in Xojo. Is there some trick to making them the same?

Is the DPI the same for both images? How are you exporting the PNG from Photoshop?
Make sure you’re using Export for Web (or whatever they happen to call it these days).

A PNG has resolution info in its header, but the EXIF or IPTC data can be different. Maybe you’re looking in the wrong place?
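If you want to see what Xojo itself reads from the exported file, here is a minimal sketch using the classic framework calls (run it from a button or the debugger; the file filter is left empty so any file can be picked):

[code]
' Open the exported PNG and inspect the resolution Xojo reads
' from its header, along with the pixel dimensions.
Dim f As FolderItem = GetOpenFolderItem("")
If f <> Nil Then
  Dim p As Picture = f.OpenAsPicture
  If p <> Nil Then
    MsgBox(Str(p.Width) + " x " + Str(p.Height) + " px at " + Str(p.HorizontalResolution) + " dpi")
  End If
End If
[/code]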

Who said that an ImageWell displays images at 300 dpi? :wink: (That is irony.)

Export the image file (as PNG) at the OS default resolution (Linux: ?, macOS: 72, Windows: 96) and use that in Xojo (even just as a test).

Photoshop does have a PNG setting in Save for Web. Using Save for Web, the picture will end up at the absolute pixel dimensions of 500 x 200 at 72 dpi. This image will display correctly in Web Edition.

I tried a “Save As” to PNG format in Photoshop from the 300 dpi, 500px x 200px image. The 300 dpi was honored in the Xojo IDE and the image showed up tiny in the ImageWell. When I run the app, though, it shows at the correct 500px x 200px size in the browser.

If you are planning on using the Retina display feature, you will want to save another version of the image using “Save for Web” at a size of 1000px by 400px and drop it in the 2x portion of your image asset in Xojo.

An image created at 500px by 200px only contains 500 pixels of width by 200 pixels of height, no matter what the dots per inch is set to.
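If you want to keep a single 300 dpi master rather than re-exporting (the “Save As” behaviour described above), one possible workaround is to copy it into a fresh bitmap, which Xojo creates at 72 dpi by default. This is only a sketch, not an official recipe: importedPng and ImageWell1 are placeholder names, and in a HiDPI project an image set (discussed later in the thread) is the better route.

[code]
' Copy the 300 dpi PNG into a new 72 dpi Picture with the same
' pixel dimensions, so the IDE and the ImageWell treat it as 1:1
' (in a non-HiDPI project this draws pixel-for-pixel).
Dim clean As New Picture(importedPng.Width, importedPng.Height)
clean.Graphics.DrawPicture(importedPng, 0, 0)
ImageWell1.Image = clean
[/code]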

[quote=316724:@dave duke]Hi Emile, no one said the ImageWell displays at 300 DPI; that’s the default output from Photoshop. I’m guessing from Scott’s post that Xojo’s default non-Retina DPI is 72.

Thanks Scott, will try that.
Dave[/quote]
It’s not Xojo’s default per se. 1x computer screens are generally 72 ppi (pixels per inch). The confusion here comes from the fact that Photoshop is generally used for making high-res images for print, where you often need much higher per-inch resolutions to make things look good on paper.

That said, I think we’re confusing the differences between resolution, size, and displayed size. Resolution is generally defined as the number of pixels per inch (ppi). Size is the number of pixels wide and high. Displayed size is generally the width and height divided by the ppi. So an image that is 216x144 at 72 ppi is displayed at 3"x2", and at 144 ppi it is displayed at 1.5"x1".

If you go back to Photoshop, look at what the current scale factor is (usually in the lower-left corner of the frame). I suspect that what you are looking at is less than 100%. If you zoom in to 100%, I’d bet that the image in your web project now matches what you see in Photoshop.

Note: I’m not sure I’d consider 300 dpi to be the “default” resolution in Photoshop either. It’s probably just the last value that was used. Mine says 1200 dpi because of a project I was working on last year.
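To restate the resolution/size/displayed-size arithmetic above as code, here is a trivial sketch (the method name is made up):

[code]
' Displayed size in inches = pixel size / resolution (ppi).
Function DisplayInches(pixels As Integer, ppi As Integer) As Double
  Return pixels / ppi
End Function

' 216 x 144 px at 72 ppi  -> 3.0" x 2.0"
' 216 x 144 px at 144 ppi -> 1.5" x 1.0"
[/code]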

Greg (or someone), this is something that has confused me since forever.
Why does (or should) the “PPI” (points per inch) matter? What is the “inch” in relation to?
A printer can print at various “PPI” depending on settings, etc.
A monitor has a fixed number of points/pixels (yet the number of inches for the same number of points can vary between devices).
An iPhone 5s is 640 points across, while a Retina iPad is 1536 points…
So why does PPI matter, instead of just “size”?

The OP’s image is 500x200 (points or pixels) in SIZE, so why would that not “fit” in any graphic container that is also that same SIZE?

When it comes to a device, pixels and points should be all that matter…

signed
confused in San Diego

+1
Or at least, not confused, but “can’t see why people get so obsessed”.

If you have an image that contains 1200 pixels (or points… I don’t care what you call it… the smallest color blob you have),
then you have 1200 dots. Let’s assume it’s a square for the sake of discussion.

You can show it as a picture 1 inch wide onscreen, or zoom in and show it as a picture that is 10 inches wide onscreen.
You could print it as 1/2 inch, 1 inch, or 5 inches on a printer.

The ONLY thing that adding a PPI property to the image does is hint at how many of those dots should be printed or displayed in 1 inch, IF you choose to view or print at 100% of actual size.
I say hint because, although printing has an actual real-world size, that’s not really true of a monitor.

If software tries to display an image in such a way as to have 1 inch on screen, that means the software must know how big the screen is in the real world.
And by extension, since a 14-inch screen and a 40-inch screen could both be reporting the same dpi, any attempt to create a 1-inch representation must surely generate a physically bigger patch of light on the 40-inch screen.
And by extension, displaying the same image with the same number of on-screen dots on a wall-sized outdoor display would definitely not be displaying a 1-inch square.

There was a long discussion about dpi a while ago
https://forum.xojo.com/32074-how-to-get-screen-dpi

The “i” has not meant “inch” on screen in quite a while. As a matter of fact, the only place where it makes sense is on paper. For screens, it is just an easy reference.

In practice, I know I need a set of resolutions for HiDPI image sets:
72 dpi for standard screens
144 dpi for 2x
216 dpi for 3x

And that is important.

[quote=316742:@Jeff Tullin]+1
Or at least, not confused, but “can’t see why people get so obsessed”.

If you have an image that contains 1200 pixels (or points… I don’t care what you call it… the smallest color blob you have),
then you have 1200 dots. Let’s assume it’s a square for the sake of discussion.

You can show it as a picture 1 inch wide onscreen, or zoom in and show it as a picture that is 10 inches wide onscreen.
You could print it as 1/2 inch, 1 inch, or 5 inches on a printer.

The ONLY thing that adding a PPI property to the image does is hint at how many of those dots should be printed or displayed in 1 inch, IF you choose to view or print at 100% of actual size.
I say hint because, although printing has an actual real-world size, that’s not really true of a monitor.

If software tries to display an image in such a way as to have 1 inch on screen, that means the software must know how big the screen is in the real world.
And by extension, since a 14-inch screen and a 40-inch screen could both be reporting the same dpi, any attempt to create a 1-inch representation must surely generate a physically bigger patch of light on the 40-inch screen.
And by extension, displaying the same image with the same number of on-screen dots on a wall-sized outdoor display would definitely not be displaying a 1-inch square.[/quote]
But pixels are not points! Points are a unit of measure carried over from the printing industry such that there are always 72 points per inch. It’s a standard unit of measure so that you don’t need to know the number of pixels per inch to draw the picture correctly.

That’s not entirely true. The reason the native resolutions of most monitors are what they are is so that 1 inch of picture = 1 inch of display.

The 27" Thunderbolt Display is 35.5 wide with a pixel width of 2560. 2560 / 72 = 35.5555"

Apple has been more conscientious than other manufacturers about this, but they all basically adhere to this rule.

What is important is that you understand that ppi does affect how things draw when you have HiDPI turned on in your projects.
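As a concrete illustration of what HiDPI changes at draw time, here is a sketch of what you might do inside a Canvas Paint handler. It assumes Xojo’s Graphics.ScaleX/ScaleY, which report the backing-pixel density when “Supports HiDPI” is turned on; g is the Graphics parameter passed to the event.

[code]
' Inside a Canvas.Paint event with HiDPI support enabled:
' g.Width/g.Height are in points; the scale factor tells you
' how many backing pixels each point maps to on this screen.
Dim backingPixelsWide As Double = g.Width * g.ScaleX
Dim backingPixelsHigh As Double = g.Height * g.ScaleY
g.DrawString("Scale factor: " + Str(g.ScaleX) + "x", 10, 20)
[/code]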

But see, this is where I am “confused”…

For @1x it would be 500x200 (to use the OP’s example),
for @2x it would be 1000x400 (twice the PIXELS/POINTS),
etc.

You are implying (I think) that all 3 images (@1x, @2x, @3x) are the SAME number of points, but a different “density”?

In all the iOS programming I’ve done (and granted, it is not a lot), I have given ZERO priority to the DPI/PPI (and I know that some of the images I use vary quite a bit)… but if I crop them to the right number of points, they work perfectly on the appropriate device.

For the purposes of a display device (monitor or iPhone/iPad) there IS a relationship:

@1x = 1 Pixel = 1 Point
@2x = 4 Pixel = 1 Point
@3x = 9 pixel = 1 point

The use of POINT has become the same as what some refer to as a “sub-pixel” (no idea where that came from).

A “pixel” was a picture element; now, for modern devices, the “point” has become that… and it no longer has any direct relation to “printing”, where a POINT was 1/72" anytime, anyplace.

And explain this:
the iPhone 4s is 3.5" (physical diagonal), 320 points or 640 pixels across (@2x)… yet Apple says it is 326 ppi [326 x 3.5 = 1141].
All other “i” devices are similar, ranging from 132 ppi (iPad 2) to 401 ppi on the iPhone 6+.

FYI… I am not “arguing”… I’m trying to get educated
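For what it’s worth, the 326 ppi figure works out once the 3.5" is treated as the diagonal rather than the width. A quick back-of-the-envelope sketch using Apple’s published 640 x 960 px panel:

[code]
' iPhone 4s screen: 640 x 960 px, roughly 3.5" diagonal.
Dim diagonalPixels As Double = Sqrt(640 * 640 + 960 * 960) ' ~1153.8 px
Dim ppi As Double = diagonalPixels / 3.54                  ' ~326 (the panel's true diagonal is ~3.54")
Dim widthInches As Double = 640 / ppi                      ' ~1.96", so 326 x 3.5 was never meant to equal 640
[/code]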

Just explain to me how smartphones, for instance, respect that axiom. If you care to read the thread I posted: it has been a long time since “1 inch of picture = 1 inch of display” ceased to be valid. If it ever was…

Exactly. That is the whole point. You always have the same number of points, each of which represents a variable pixel density.

[quote=316749:@Dave S]The use of POINT has become the same as what some refer to as a “sub-pixel” (no idea where that came from).

A “pixel” was a picture element; now, for modern devices, the “point” has become that… and it no longer has any direct relation to “printing”, where a POINT was 1/72" anytime, anyplace.[/quote]

Actually, a pixel is still the elementary element of a display. But the logical point was an elegant way of staying closer to the “one inch of picture = one inch of display” Greg mentions above.

The notion of subpixel is a trick based on the fact that, indeed, on LCDs and most flat color screens, each pixel is made of three colored elements. See “Subpixel rendering” on Wikipedia.

It has little to do with HiDPI.

The main error is to assume that 1 dot = 1 pixel. That ceased to be true with the advent of Retina on the Mac and scaling on Windows.

IIRC the ImageWell always displays at 72 dpi. So if you make your image 72 dpi, it will display correctly, just at low resolution on a high-resolution display.

You should export multiple sizes and use an image set or the Retina Kit.
@1x = 500x200 pixels @ 72 dpi (500x200 points).
@2x = 1000x400 pixels @ 144 dpi (500x200 points).
@3x = 1500x600 pixels @ 216 dpi (500x200 points).

Notice how the point sizes are identical, but the pixel sizes are not. That’s because it’s the pixels that matter.
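For completeness, the programmatic counterpart of an IDE image set is (if memory serves) the multi-representation Picture constructor that came with Xojo’s HiDPI support. A sketch, where pic1x/pic2x/pic3x are assumed to be the three exported PNGs already loaded:

[code]
' Combine the three exports into one HiDPI-aware picture.
' 500 x 200 is the point size; the framework picks the best
' representation for the screen's scale factor at draw time.
Dim reps() As Picture
reps.Append(pic1x) ' 500 x 200 px
reps.Append(pic2x) ' 1000 x 400 px
reps.Append(pic3x) ' 1500 x 600 px
Dim hiDPIImage As New Picture(500, 200, reps)
[/code]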

[quote=316781:@Sam Rowlands]You should export multiple sizes and use an image set or the Retina Kit.
@1x = 500x200 pixels @ 72 dpi (500x200 points).
@2x = 1000x400 pixels @ 144 dpi (500x200 points).
@3x = 1500x600 pixels @ 216 dpi (500x200 points).[/quote]

Yes, that’s it!

Note: I did not see that in the DesktopHiDPISupport.pdf document
(but I read it quickly).

Also, a note about the difference between pixels and points would be welcome.

Finally, in the IDE (please, for 2017r1), a note on what must go where would be welcome too.
First of all, you have to know what you are supposed to do (I did not search the documentation)…
When you use the Insert menu’s Image menu item, what you get is… a desert!
“Any 1x”, “Any 2x” and “Any 3x” are fine when you know what to do: these are where you have to put the @1x, @2x, @3x images…

One good thing is the red reminder when the dropped image is wrong. Once you drag an image into the @1x area, the reported size is a bit misleading, but the values between parentheses are good.

Finally, on purpose, I dropped three images with different sizes into the “Image Editor” and got warnings. The second one was far larger than the first (the X:Y ratio was different), and the image was resized with a shrink in the horizontal dimension, producing a visual distortion and a red comment.
A second red text line should be added there to tell the user what to do, unless (s)he is expected to know what (s)he is doing.

Etc.

A simple effort could be made here.

Remember: this is not the first conversation about HiDPI images to come up here.

Not for me, but for Joe Newbie and for people confused by this HiDPI stuff (and that can be anybody).