Best Mac for the job?

And actually Moore’s Law does not state that the speed of computers will double every 24 months…

What it DOES say is that the number of “transistors” in a given area of an integrated circuit will double

http://fortune.com/2015/07/17/moores-law-irrelevant/

[quote=217784:@Dave S]And actually Moore’s Law does not state that the speed of computers will double every 24 months…

What it DOES say is that the number of “transistors” in a given area of an integrated circuit will double[/quote]

Nobody has said otherwise. But more transistors should translate into either a higher performing architecture or more cores using the same architecture. Or some combination thereof. (Indirectly, shrinking transistors contributed to clock speed gains for years, but we hit problems there a while back.)

The Fortune article confuses two different things. Moore’s law is about how many transistors you can etch into silicon. What you do with them (x86 vs. GPU vs. DSP vs. FPGA) is a separate issue.

I don’t know if the story is different in, say, server CPUs. But it seems odd to me that Intel wouldn’t ship desktop and mobile processors with more cores if they could for the same TDP. Of course maybe other stuff like integrated graphics is using more of the chip than I realize, and that’s what has grown at the expense of another x86 core.

Not quite.

The RAM is fairly easy to upgrade.

[quote=217789:@Markus Winter]Not quite.

The RAM is fairly easy to upgrade.[/quote]
On a 27" iMac yes, on a 21" iMac clearly NO!

True. In my defence, I don’t think he’s considering a 21" iMac.

It can be done on a 21" iMac, but only by Apple or an authorized reseller.

You will pay as much for the labour as for the RAM itself…

A question for the Xojo team: does Xojo have plans to use all the cores in a (far) future version, at least to compile faster, or is it clearly useless for now?
(A computer’s average lifetime is considered to be 5 years.)

[quote=217787:@Daniel Taylor]Nobody has said otherwise. But more transistors should translate into either a higher performing architecture or more cores using the same architecture. Or some combination thereof. (Indirectly, shrinking transistors contributed to clock speed gains for years, but we hit problems there a while back.)
[/quote]

I would like “Moore’s law” to be called “Moore’s postulate” or “Moore’s prediction”. But I guess it would not have sold so well for so many years. Intel’s marketing ploy should be saluted.

As a consumer, I would like to see the kind of quantum leaps in processing power we have seen in years past continue. Yet it seems we are fast approaching the limit of magic tricks. Already, the multi-core stuff came about not so much because we needed parallel processing, but because a faster single core was extremely difficult to conjure without extra heat dissipation. Heck, the conceptual design for transparent parallel processing in everyday apps remains to be invented, AFAIK.

We are in an era of zero-growth propaganda. Of a small, fragile planet and generalized thriftiness. Mobile devices demand power economy, not oomph. The world demands ARM chips that do wonders on ever smaller batteries. And since Moore’s conjecture does not apply to battery technology, better battery life means lower consumption.

Let us face it: we are the exception, not the norm. The immense majority of computing devices today go on Facebook or other “pimples talk to pimples” platforms, which require less processing power than the marketing hoopla suggests. Desktops, laptops and geeks are regarded as some sort of strange tribe with the same kind of relevance as collectors of typewriters. Processing power is the stuff of cloud data centers, not garage workstations.

Let us pray for Apple to keep producing powerful desktops and laptops.

[quote=217811:@jean-yves pochez]A question for the Xojo team: does Xojo have plans to use all the cores in a (far) future version, at least to compile faster, or is it clearly useless for now?[/quote]
The more pertinent question is how other compilers use the available cores.

You can’t simply distribute the work willy-nilly. Some work can be distributed easily (like calculating the transformation of different areas of a picture, or encoding movie frames), other work can be split with some effort (like analysing data, but with an additional step for combining the results and for calculations that require all the data points), and much of it cannot be split up at all (because each step depends on the previous one).
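
For instance, here is a rough sketch in Swift (using Grand Central Dispatch; the pixel buffer and the brightness tweak are just stand-ins I made up) of the “easy” picture-transformation case, followed by the kind of combining step that has to stay sequential:

```swift
import Dispatch
import Foundation

// Hypothetical pixel buffer: one brightness value per pixel (a stand-in for a real image).
var pixels = [Float](repeating: 0.5, count: 1_000_000)
let coreCount = ProcessInfo.processInfo.activeProcessorCount
let chunkSize = (pixels.count + coreCount - 1) / coreCount

// The "easy" part: every core transforms its own slice, and no slice touches another.
pixels.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: coreCount) { chunk in
        let start = chunk * chunkSize
        let end = min(start + chunkSize, buffer.count)
        guard start < end else { return }
        for i in start..<end {
            buffer[i] = min(buffer[i] * 1.2, 1.0)   // e.g. brighten by 20%, clamped
        }
    }
}

// The part that cannot be split: combining needs every chunk to be finished first.
let average = pixels.reduce(0, +) / Float(pixels.count)
print("average brightness: \(average)")
```

Each iteration gets its own slice of the buffer, so the chunks never touch each other’s data; the final average can only be computed once every chunk is done.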

So are Xcode or Visual Studio using multiple cores?

Michel Bujardet - well said.

My impression (could be wrong) was that architecture was a limiting factor as well. It has become difficult to scale a single core, but it’s easy to drop another core on the chip and hope developers solve the issues of parallel processing.

Xcode uses multiple cores. The obvious way to split up the work is to have each core compiling a class or file. Dependencies probably limit the degree to which the work can be split up though.

[quote=217789:@Markus Winter]Not quite.

The RAM is fairly easy to upgrade.[/quote]
When we bought our iMac in 2007, we got it with 4GB. Then, when we wanted to upgrade (two years ago), we found that it was impossible to do so: it had a limit of 6GB, and 3GB chips were no longer available.

Also, Apple are getting into the habit of soldering RAM into their laptops, which cannot be changed at a later date, so I wouldn’t be surprised if iMacs followed suit (if they haven’t already).

Some RAM limits always apply. Partly due to hardware (bus design, or type of chip like DDR2 vs DDR3 vs DDR4), partly due to the system (if all you can run is a 32-bit system, then the amount of RAM you can install and use is limited: a 32-bit address space can only reach 2^32 bytes, i.e. 4 GB). Though 64-bit processors and OSes push the theoretical limit beyond practical purposes, technological advances (like new RAM types which do not work on older machines) limit what you can actually do.

Also, I do not think there ever was a 3 GB chip; it was 4 GB + 2 GB that you would use.

But it is a bit pointless to discuss an 8-year-old machine, which was already out of date 2 years ago, when the original poster wants a good development machine now … :wink:

According to the Apple engineer we spoke to, there is a part number for 3GB chips for this machine; they just don’t have any. The specs say the RAM needs to be installed in pairs.

Concur; I was just trying to elaborate on why I believe it’s a good idea to get as much RAM as you can at the time you buy an Apple computer.

The other highly recommended item is the extended AppleCare, so you get 3 years (in total) of hardware warranty. It’s a gamble, but over time it has saved us money with problematic machines; my wife just had the screen replaced on her MacBook Air.

[quote=217811:@jean-yves pochez]A question for the Xojo team: does Xojo have plans to use all the cores in a (far) future version, at least to compile faster, or is it clearly useless for now?
(A computer’s average lifetime is considered to be 5 years.)[/quote]

You know Xojo engineers prefer to present their work when it is complete. But I believe the idea is in the air…

According to Intel engineers, the main limit was frequency: the higher it became, the hotter the processor ran. There are anecdotes about overclockers finding their chips had fallen down inside the case because the chip de-soldered itself.

They could have increased frequency drastically by using liquid cooling, as some high-end gaming machines already do, or by improving the geometric design to evacuate heat, as some supercomputer cores do. But that runs counter to the need for miniaturization and low power consumption.

Another avenue was taken instead: compaction. In Pentium times, process technology was around 45 nm. Even though signal speed within processors is close to the speed of light, current 14 nm technology means roughly 3 times less distance traveled by electrons. That, compounded with better circuit routing, means better performance with the same number of transistors. Transistors themselves have become faster as well.

All of that, as Dave noted, tends to render references to clock frequency irrelevant. I much prefer GFLOPS or MIPS. Unfortunately, it seems most reviews prefer to stick to largely decorative descriptions and few hard facts.

I stand corrected, and learned something new :wink:

Michel Bujardet - I was thinking more of the chip design than the clock frequency (though what you say there is spot on). In other words, it has become very difficult to add more ALUs, FPUs, and the logic to actually keep them working (increased out-of-order execution, better branch prediction, etc.) vs. just dropping an identical core on the die and hoping developers thread everything.

I could be wrong, but I believe that is another limit we’re running into. Even if we could shrink the transistors dramatically or increase the clock, exploiting parallelism within the core has reached certain limits.

Oh well…just as long as Xojo supports the first Quantum chip set :slight_smile:

About multicore: I just noticed AimerSoft Video Converter uses all cores. I dropped in a collection of mkv files to convert to mp4, and it immediately started 4 conversion tasks. The drawback, apparently, is that when all the cores are used, the machine becomes kind of unusable, so intense is the resource usage. The movie I was sending to Chromecast froze on the TV, and only the sound went through.

Besides, the fan rotates like mad, to evacuate the extra heat.

I see how I could design such an app using helpers with the current Xojo technology. It would require using IPC to communicate with the GUI to manage the progress bars and the cancel task, but that is not very difficult. I was intrigued by something, though: how does the program know there are 4 cores? Does it detect the available cores (and if so, how), or does it get the type of processor and use a database to look up the number of cores?
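
For what it’s worth, I don’t think it needs a processor database: on OS X the system will simply tell you. A minimal sketch in Swift (the same values should be reachable from a Xojo Declare into the C library, or from a Shell running `sysctl -n hw.ncpu`):

```swift
import Foundation

// Foundation asks the kernel for us; no lookup table of CPU models required.
let logicalCores = ProcessInfo.processInfo.activeProcessorCount
print("Logical cores available: \(logicalCores)")

// The same figure via the BSD sysctl interface (what `sysctl -n hw.ncpu` prints).
var ncpu: Int32 = 0
var size = MemoryLayout<Int32>.size
if sysctlbyname("hw.ncpu", &ncpu, &size, nil, 0) == 0 {
    print("hw.ncpu reports: \(ncpu)")
}
```

Presumably the converter just spawns as many worker tasks as that number reports (or that number minus one, to keep the GUI responsive), which would also explain why the machine chokes when it uses them all.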