Performance Curiosity

  1. Robert B

    Feb 9 · Pre-Release Testers, Xojo Pro · Cincinnati, Ohio

    Well, I looked at TT's tool and I cranked it up... after a minute of watching the file count grow, I opened a terminal window and ran a "find" command piped to "grep" and piped the result to a file. The "find" command had identified 500K files before TT's tool hit 15K.
    I think my solution of shelling out and using the native command was a good call. The problem is that once I identify 500K files, I then need to open each one and process the XML data contained therein. I ended up doing the operation on my old HFS+ MacBook just to get my first task done. One of my test runs was against 1.2 million documents and the other was over 5 million. Performance varied a bit over the course of the operation, but I think I was averaging about 300 docs/sec using Xojo's XML classes. Not hateful.
    Hopefully the problem isn't an APFS limit and it's just a matter of hooking the new "drivers". I'm concerned that Christian's MBS code, while FAR superior to Xojo's FolderItem class for the operations I was using, still falls short on APFS compared to HFS+. Nothing I like better than seeing a $4,000 laptop get its a$$ handed to it by a decade-old system with an ancient file system.
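    In Xojo, the shell-out plus XML pass boils down to something like the minimal sketch below; the find arguments, the /tmp/filelist.txt location, and the XML handling are placeholders rather than the actual code.

        ' Minimal sketch: let the native find command enumerate the files,
        ' then walk the resulting list and parse each document with
        ' Xojo's XML classes. Root path, filter, and list file are placeholders.
        Dim sh As New Shell
        sh.TimeOut = -1 ' let a long-running find finish
        sh.Execute "find /SomeRoot -type f -name ""*.xml"" > /tmp/filelist.txt"

        Dim listFile As FolderItem = GetFolderItem("/tmp/filelist.txt", FolderItem.PathTypeShell)
        Dim paths As TextInputStream = TextInputStream.Open(listFile)
        Dim processed As Integer

        While Not paths.EOF
          Dim p As String = paths.ReadLine ' one full path per line
          If p <> "" Then
            Dim f As FolderItem = GetFolderItem(p, FolderItem.PathTypeShell)
            If f <> Nil And f.Exists Then
              Dim src As TextInputStream = TextInputStream.Open(f)
              Dim xmlText As String = src.ReadAll
              src.Close
              Try
                Dim doc As New XmlDocument(xmlText) ' raises XmlException on bad XML
                ' ... process doc.DocumentElement here ...
                processed = processed + 1
              Catch e As XmlException
                ' skip malformed documents and keep going
              End Try
            End If
          End If
        Wend
        paths.Close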

  2. Robert B

    Feb 9 · Pre-Release Testers, Xojo Pro · Cincinnati, Ohio

    Ok... I posted about two hours ago. TT's find tool has now just flipped over the 100K file mark. Woot. The native find command was done (about 543K files) before I typed up my last post. So: maybe TWO and a HALF MINUTES with "find" compared to TWO HOURS with TT's tool, and it's only just crossed the 20% threshold. "Coreservicesd" has been sucking up a core at 95-99% for the last two hours. My system is otherwise idle, showing only about 13% total CPU utilization. "Find Any File" is up to 253 MB consumed, slowly and steadily rising. The file produced by my "find" command, which identified the 543K files (full path of a file on each line), is 66 MB in size, and "find" never consumed more than 67% of a single core while running.

  3. Robert B

    Feb 10 · Pre-Release Testers, Xojo Pro · Cincinnati, Ohio

    Thomas Tempelmann's "Find Any File" just completed and reported that it found exactly the same number of files as the shell "find" operation. It is a nicely organized tool, but 14 hours seems a bit long for a search.

  4. Oliver O

    Feb 10 · Pre-Release Testers, Xojo Pro · https://udemy.seminar.pro

    whoa ... :o

  5. Jason P

    Feb 10 · Xojo Inc · Texas

    @Robert B So: maybe TWO and a HALF MINUTES with "find" compared to TWO HOURS with TT's tool... "Coreservicesd" has been sucking up a core at 95-99% for the last two hours.

    In one of the OS updates Apple moved calls to the old APIs into a daemon; this is why Coreservicesd consumes as much time as it does. The effect has been to slow the use of the old APIs enormously. A Google search for recent posts about "Coreservicesd" will give you some idea of the scope of the issue. We're aware that FolderItem's performance on APFS volumes is not what it was on HFS+ volumes and are working to address the issue.
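    For reference, the kind of FolderItem traversal affected looks like the hypothetical sketch below; each Count and Item call lands in the older file APIs that now route through the daemon.

        ' Hypothetical recursive file count using FolderItem. On APFS,
        ' each Count/Item call goes through the older file APIs, which
        ' now round-trip through coreservicesd, so large trees crawl.
        Sub CountFiles(dir As FolderItem, ByRef total As Integer)
          If dir = Nil Or Not dir.Exists Then Return
          For i As Integer = 1 To dir.Count
            Dim f As FolderItem = dir.Item(i)
            If f <> Nil Then
              If f.Directory Then
                CountFiles(f, total) ' recurse into subfolders
              Else
                total = total + 1
              End If
            End If
          Next
        End Sub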
