FileMaker, ODBC and Pictures Tip

Although storing pictures within a database is not my general recommendation, I support several FM databases that do store them, and the pictures are required for most queries against those databases.

For years I’ve been frustrated using ODBC with FileMaker in one particular area: BLOB picture fields (containers in FM). I’ve always used the GetAs(containerName, 'picture format') syntax, and it works (sometimes). I’ve always had intermittent hard crashes and attributed them to the ODBC plugin. Because of this I resorted to creating external lookup hacks to work around it.

I recently changed my approach to storing the pictures as files within FileMaker (right-click on the container field and select “Insert File”). I then use GetAs(containerName, 'FILE') AS 'whatever'. I was pleasantly surprised to find not only a complete elimination of crashes, but also that the picture value is directly available via rs.Field("whatever").PictureValue.
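In case a concrete sketch helps, this is roughly how it looks from Xojo (classic API); the DSN, table, and field names here are made up for illustration:

```xojo
' Minimal sketch of the approach above, using the classic Xojo API.
' The DSN, table, and field names (FileMakerDSN, Products, Photo) are
' assumptions; the actual trick is the GetAs(..., 'FILE') AS alias.
Dim db As New ODBCDatabase
db.DataSource = "FileMakerDSN"

If db.Connect Then
  Dim rs As RecordSet = db.SQLSelect( _
    "SELECT GetAs(Photo, 'FILE') AS whatever FROM Products")
  If rs <> Nil Then
    While Not rs.EOF
      ' The embedded file comes straight back as a picture.
      Dim p As Picture = rs.Field("whatever").PictureValue
      ' ... display or save p here ...
      rs.MoveNext
    Wend
  End If
End If
```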

I might be late to the dance but figured this may help out someone. Particularly with @Hal Gumbert evangelizing Xojo in the FM marketplace.

I’m working on an application which uses a very complex FileMaker database. It is more or less an image database. And guess when I will start working on querying the FileMaker database for images? Next week!

Peter, I would like to give this post a hundred Likes, but that was not possible. So I alternately clicked Like and Unlike 50 times, and the 51st click on Like is the one showing.

Peter, my experience has been identical. The option to store images as separate files has been a godsend. It eliminates crashes and keeps the database small. The only time it’s (mildly) inconvenient is when you try to open a server-based database directly, or vice versa, as you have to move the image folder around; but that’s a very small price to pay.

The last time I used the ODBC plugin for FileMaker, all I managed to produce was crashes while setting it up, even before trying to do anything.

Recently, I too discovered external storage. It works so much better than storing the data within the database. I’m going to change my Valentina databases to use this approach soon.

FYI, we do this with files on a PostgreSQL database. All files are named something like “.raw”, and all the information about the file is stored in a table. When it comes time to “recreate” the file, it is copied from that directory and renamed.
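In Xojo, the recreation step can look something like this hedged sketch, assuming the on-disk name is the row’s id plus “.raw” and a (hypothetical) file_meta table holds the original file name:

```xojo
' Hypothetical sketch of the scheme above. The file_meta table, the
' original_name column, and the "<id>.raw" naming are all assumptions.
Sub RecreateFile(db As Database, id As Integer, storeDir As FolderItem, destDir As FolderItem)
  ' Look up the original file name recorded for this row.
  Dim rs As RecordSet = db.SQLSelect( _
    "SELECT original_name FROM file_meta WHERE id = " + Str(id))
  If rs = Nil Then Return
  If rs.EOF Then Return

  Dim raw As FolderItem = storeDir.Child(Str(id) + ".raw")
  If raw.Exists Then
    ' Copy into the destination folder, then rename to the original name.
    raw.CopyFileTo(destDir)
    Dim copied As FolderItem = destDir.Child(raw.Name)
    copied.Name = rs.Field("original_name").StringValue
  End If
End Sub
```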

I wanted to store pictures (or PDFs, or files) in a separate table with only a “rowid” field and a “blob” field in a Postgres database.
Is that NOT a good practice?
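For concreteness, a hedged sketch of that layout (all table and column names here are hypothetical):

```xojo
' The two-table idea above, sent to PostgreSQL from Xojo. All names
' (docs, doc_blobs, and their columns) are hypothetical.
db.SQLExecute("CREATE TABLE docs (rowid serial PRIMARY KEY, name text)")
db.SQLExecute("CREATE TABLE doc_blobs (" + _
  "rowid integer REFERENCES docs(rowid), blob bytea)")
```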

@Jean-Yves: it would be good to make a new topic. Anyway, I’ve found that many large blobs make a database unwieldy. My customers’ databases run up to 100 GB with mail attachments. Handling these large files is very cumbersome, and the app doesn’t do much with the data in the database.

PostgreSQL has something called “Large Objects”.

It’s a fine practice, but only you can decide what’s best for you. Referencing external files is more work, but keeps our database smaller, and that’s important, for example, when we clone the database to our desktop machines for development purposes.

I was thinking along those lines when making two databases on a Postgres server: one with the main data, and one with only the blob table.
That way, if you need the data, you only clone the “small” database; the big database with the blobs stays on the server.
But I don’t have that much data in the blob table for now, so I don’t know whether this schema will remain usable in the future as more and more data fills the blob table.

You should really check out PostgreSQL Documentation: Large Objects.
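A minimal hedged sketch of using them from Xojo, via the server-side helpers lo_from_bytea() and lo_get() (PostgreSQL 9.4 and later); the images table and its columns are assumptions:

```xojo
' Hedged sketch: store a file as a Large Object and read it back.
' The images table and its columns are assumptions for illustration.
Dim db As New PostgreSQLDatabase
db.Host = "localhost"
db.DatabaseName = "mydb"  ' hypothetical connection details
If Not db.Connect Then Return

' Store: read the file's bytes and hand them to lo_from_bytea(),
' which creates a large object and returns its OID.
Dim f As FolderItem = GetFolderItem("photo.jpg")
Dim bs As BinaryStream = BinaryStream.Open(f)
Dim hexData As String = EncodeHex(bs.Read(bs.Length))
bs.Close
db.SQLExecute("INSERT INTO images (name, blob_oid) VALUES " + _
  "('photo.jpg', lo_from_bytea(0, '\x" + hexData + "'::bytea))")

' Retrieve: hex-encode on the server, decode back into a Picture here.
Dim rs As RecordSet = db.SQLSelect( _
  "SELECT encode(lo_get(blob_oid), 'hex') AS hexdata FROM images " + _
  "WHERE name = 'photo.jpg'")
If rs <> Nil And Not rs.EOF Then
  Dim p As Picture = Picture.FromData(DecodeHex(rs.Field("hexdata").StringValue))
End If
```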

We use two databases also, but one is for large data imports that seldom change.

Eli, I haven’t used Large Objects, but I don’t think that’s the solution if you are trying to keep your database small. If I understand it correctly, the data is still stored within table rows, just automatically managed by the system. Did I miss something?

Why would you want to keep the database small? That was necessary at a time when databases returned blobs too slowly, which is no longer the case (especially for PostgreSQL). My experience is that “Large Objects” are faster than manipulating files with FolderItem (for example, for a listbox with hundreds of records that includes a large image for each row).
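For instance, filling such a listbox can look like this hedged sketch (table and column names assumed, reusing the lo_get()/encode() round trip shown earlier):

```xojo
' Hedged sketch: one query fills the listbox, one decoded picture per
' row. The images table and its columns are assumptions.
Dim rs As RecordSet = db.SQLSelect( _
  "SELECT name, encode(lo_get(blob_oid), 'hex') AS hexdata FROM images")
If rs <> Nil Then
  While Not rs.EOF
    Listbox1.AddRow(rs.Field("name").StringValue)
    Listbox1.RowPicture(Listbox1.LastIndex) = _
      Picture.FromData(DecodeHex(rs.Field("hexdata").StringValue))
    rs.MoveNext
  Wend
End If
```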

There are also Foreign Data Wrappers. These allow, for example, reading CSV files with SQL SELECT statements. Some FDWs also allow INSERTs and UPDATEs. I posted an example in this thread a while ago: Foreign Data Wrappers.
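For example, the contrib file_fdw wrapper maps a flat file to a queryable table; here is a hedged sketch (the file path and column layout are assumptions, and note that file_fdw itself is read-only):

```xojo
' Hedged file_fdw sketch: expose a CSV on the database server's disk
' as a table. Path and column layout are assumptions; this particular
' FDW supports SELECT only. Setup could equally be done once in psql.
db.SQLExecute("CREATE EXTENSION IF NOT EXISTS file_fdw")
db.SQLExecute("CREATE SERVER files FOREIGN DATA WRAPPER file_fdw")
db.SQLExecute("CREATE FOREIGN TABLE csv_products (id integer, name text) " + _
  "SERVER files OPTIONS (filename '/data/products.csv', format 'csv')")

' From now on the CSV can be queried like any other table.
Dim rs As RecordSet = db.SQLSelect("SELECT id, name FROM csv_products")
```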

There is one for large files too, but I haven’t used it yet. It’s called Multicorn.

Because when I transfer the database daily to my desktop machine from the server, it already takes 10 minutes to download and process. I simply don’t need those files.