The Future of Software

Marking this as “off topic” though it may be somewhat relevant.

I’ve been programming for over 40 years and using Xojo (REALbasic) for over 22.

Software development has changed a lot during this time, and the pace of change shows no signs of slowing down, in my opinion.

Some of the notable trends:

  • open source
  • virtual machines, virtual OSes, and containerization
  • the rise of the web: HTML/JavaScript everywhere (e.g. the Electron phenomenon)
  • SaaS (software as a service)

Simple question with two parts:

  • where do you want software development to be in 10 years?
  • where do you actually think it will be?

My ideas:

  • I really hope that we eventually get to a unitary machine, OS, and programming language. As I get older, I find dealing with all these layers in the software “stack” to be such a pain - similar (but different) syntaxes, capabilities, etc. I’m sure many of the Xojo-using crowd are here because Xojo makes a valiant effort at smoothing over these OS differences. I would love for there to be a single CPU, with a single OS, and if not a single language and library, at least a clearly dominant one that would always work.

  • I’m not hopeful we’ll end up there though - the trend I’m seeing is ever-increasing complexity in an attempt to reduce complexity. I’m not sure that’s feasible. As an example, the other day I tried to install a self-hosted version of https://www.discourse.org (the forum software we are reading now). On the positive side, the online instructions were great, and it actually worked. On the negative side, this involved (A) buying a virtual server, (B) installing a Linux variant on it, and (C) using the Discourse installer, which I believe installs a (D) Docker container and (E) runs Discourse, which itself uses (F) Ruby on (G) Rails.

Whew!

All I know is that it felt like there were about 600 layers of emulation or abstraction going on, and a million lines of fast-scrolling status messages during the installation. If it breaks, I know I’m screwed and would likely be unable to fix it.

This is probably a good reason many are moving towards SaaS.

This has me seriously considering staying with my existing forum software (Vanilla.org), which, for all its problems, is at least a set of PHP files running under Apache. That’s only two layers (give or take), and I might have a chance of understanding it.

5 Likes

Where do you want software development to be in 10 years?
In my experience, back when everything was Windows and COBOL dominated the big applications (setting aside the competition that existed), I had the opportunity to see languages that were called 5th generation.

Among them, I saw some great ones. With those tools, people developed in record time. Visually, they focused on data. They never succeeded in the market. As Windows continued to grow, the cost of mainframes went up.

I expected a 6th generation that never came. On the contrary, everything turned back to hand-written code using Java and HTML with the rise of this new “technology.”

Even today, I still wonder how JavaScript ever took off. HTML made the pretty pages everyone wanted, much like the history of Windows. IT specialists wanted to keep capturing and processing data on green-screen terminals while users were turning to the mouse.

With that said, I wish programming would advance toward hiding the abstraction and giving results with speed, efficiency, and accuracy.

Where do you think it will be?
To date, JS is like Frankenstein’s monster. It’s like saying that I develop applications with Excel and Visual Basic Script. Fortunately, that didn’t happen. But they revived Frankenstein’s son in web pages.

I understand that people want everything to look beautiful and be easy. That is very understandable.

I think something new will have to come along. I imagine an infrastructure that works on all computers, similar to what was attempted with Java. As long as such a solution does not appear, solutions will keep growing elsewhere. That is happening right now with the no-code and low-code trend.

I see this growth in the HTML and CSS standards. Incredible effects can be achieved with just a few lines. Mixed with object orientation, it works very well. It only needs the same approach applied to data management.

If this trend continues, solutions such as virtual machines, cloud computing, SaaS, and no-code are the new wave.

While that high-level abstraction would suit most, the trouble is, someone still has to write the underlying code that makes it all work - and fix it when an update breaks it.

Hardware is the same - the days when CPUs were built from transistors and TTL logic gates are long gone. But the underlying principles remain the same and some people still design CPUs.

Yet end users can slap together a few circuit boards and an SSD and, bingo, they have a working computer. They have no idea how the CPU works - it might as well be a genie inside for all they know.

1 Like

Exactly. Either:

  • The user does everything themselves
  • The user hires a developer
    who uses a standard language
    or who uses a low-code/no-code development system

Whichever way is chosen, someone has to do the job.

Low-code/no-code is the talk of the town at the moment, and since big players are jumping in, the cost of these solutions will rise. I am not sure the end user will benefit from that. We will see.

1 Like

A one-size-fits-all solution will likely never come. There are simply too many different kinds of (technical) requirements and business cases out there. And progress never stops, so what looks great and handy today may be looked at very differently in, let’s say, 5 to 10 years.

There’s a name for that: stagnation. If there is only one CPU, nothing will ever improve. If that had happened 30 years ago, we would be stuck with 8-bit CISC chips, character-based operating systems, and floppy disks. Computing evolves over time and goes off in odd directions. RISC-based chips were an odd one, but very fruitful; you wouldn’t have mobile devices without them. Multiple cores have become a huge thing, and the OS had to change radically to take account of that. Now “AI” is becoming a thing, and CPUs are adapting and changing because of it. Apple’s later A-series and M-series chips have neural engines to help that process.

What pushes this forward is competition. Having something to be better than is a huge driver for improvement. That can’t happen in a standardised world; there, someone (or some group) makes a decision and that is how it has to be. Radical changes just don’t get to happen, because they break the OS. For example, what are you going to do when quantum computers become more mainstream?

The other issue is getting everyone to agree on things. It’s hard enough to get people to decide what to have for lunch, let alone to agree on what the best operating system is. If you go by weight of numbers, i.e. people using them, you end up with Windows; the Linuxes die, macOS dies. Servers are not desktops, or vice versa. Why would you have to have the same OS on each?

As for languages, some are better suited to some tasks than others. C/C++ is good for operating systems. Fortran used to be the king of scientific programming. COBOL was good for business apps, though I wouldn’t want to create an operating system in it.

3 Likes

Obligatory xkcd reference: xkcd: Standards

I wasn’t able to find the one that said, if I remember correctly: we are going to make more and more specs, and then we are going to call that code.

1 Like

Excellent comments from everyone.

Competition is healthy. Everyone offers a point of view, and that is good. To one person, the 68000 may seem like a better microprocessor than the 8080. Sales say who the favorite is.

To me, AMIGA always seemed the best computer of its era in terms of hardware and software design. But that did not work out: TANDY beat it. Later HP, DELL, and IBM dominated the market.

Nor did the languages that were more efficient at the time win. Pascal had a good compiler, but it did not move forward.

We call this a harsh reality.

Currently, the browser offers a way to develop applications, with its internal JS compiler. It is not the most efficient. The market is what decides its success.

The basics are always the same, no matter how much technology advances. You can see the history of the 8080 microprocessor in the Core i3, i5, and i7. And we could go on with more examples.

They all run machine language. It is the same programming that has existed from the beginning. Therefore, a compiler must generate the most efficient and cleanest binary code it can. It doesn’t matter whether you use C or XOJO to make your applications; the resulting binary code is what matters.
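To illustrate that point with a rough sketch of my own (the function names and the example are hypothetical, not anything from Xojo): two visibly different ways of writing the same loop in C typically come out of an optimizing compiler as essentially the same machine code.

```c
/* Hypothetical illustration: two styles of the same summation loop.
 * A typical optimizing compiler (e.g. gcc or clang at -O2) usually
 * reduces both to essentially the same instructions, which is the
 * point above: the resulting binary matters more than the surface syntax. */
#include <stddef.h>
#include <stdint.h>

/* Index-based loop. */
int64_t sum_indexed(const int32_t *a, size_t n) {
    int64_t total = 0;
    for (size_t i = 0; i < n; i++)
        total += a[i];
    return total;
}

/* Pointer-based loop: different source, same job. */
int64_t sum_pointer(const int32_t *a, size_t n) {
    int64_t total = 0;
    for (const int32_t *end = a + n; a != end; a++)
        total += *a;
    return total;
}
```

Comparing the assembly from `gcc -O2 -S` for the two functions usually shows little or no difference beyond labels.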

This may change with the advent of quantum processors.

Meanwhile, the market and the proposed technologies move on. The problems will always be the same.

How do we make a compiler work for all microprocessors?
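In practice the usual answer is not one compiler that knows every instruction set, but one portable source compiled once per target, either natively or with a cross-toolchain, often via a shared intermediate representation (LLVM’s IR being the well-known example). A minimal sketch of that idea, assuming GCC or Clang and their standard predefined target macros:

```c
/* Hypothetical sketch: one portable C source built for different CPUs.
 * The compiler backend (or a cross-toolchain such as aarch64-linux-gnu-gcc)
 * handles the actual instruction set; the predefined macros below only
 * report which target was selected at compile time. */
#include <stdio.h>

int main(void) {
#if defined(__x86_64__)
    puts("built for x86-64");
#elif defined(__aarch64__)
    puts("built for AArch64 (ARM64)");
#else
    puts("built for some other architecture");
#endif
    return 0;
}
```

The application source stays the same; the work of “running on all microprocessors” lives in the compiler backends, which is also how tools like Xojo target several platforms from one project.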

Today some people say that a web browser combined with the cloud can fix everything. Time will tell.

It is the spiritual successor of the Atari 400/800, which predated it by about 5 years. Those Ataris were very advanced for their day…

But all that is so long ago now… and I do worry that the complexity of all the abstraction layers these days makes it difficult to have really reliable systems.

-karen

2 Likes

I don’t think it will happen.

The basics of the hardware have not changed. The software has added one library after another.

But sooner or later, they will have to debug it. When that time comes, the cost will be high.

That is why tools like XOJO change the landscape and become a solution. They continue to deliver a debugged binary. Although it can be argued that it is not as pure, the point is that there are not the dependencies that there are with JavaScript.

There are, but they are all from one supplier.

1 Like

Kind of related, but this is why the EU mandating USB-C on devices bothers me. My issue is less about the iPhone (I personally would like a USB-C iPhone) and more about the regulation. Such a mandate means we’ll be stuck with USB-C essentially forever. Nobody will invest in inventing a better connector, because it will be automatically dead on arrival if it can’t be used. It essentially gives the USB consortium complete control. I’m really not a fan of handing over the keys just because USB is in a really good spot right now, naming conventions aside.

6 Likes

Agreed. You have to wonder where that leaves the iPad, which has a Thunderbolt port. It’s the same and it isn’t. I’m never in favour of non-technical people telling technical ones what technology should be.

I gather the main USB-C reason is to standardise charging, rather than interfacing.
No problem with everything having a USB-C port… they can have others in addition.

Yes it’s about charging. But do you really think any manufacturer would release a phone with two ports?

I do not think so.

Question: do they ask buyers if they want a charger?
(Because there are two kinds of users: long-time users and new users. The former may already have a charger; the latter do not. Will they discover at home that they need a charger?)

1 Like

It doesn’t bother me half as much as the buckets of PSUs I have in the workshop to try to accommodate all the different laptop charging jacks. Dozens of chargers that are electrically similar with only the jack being different. Those buckets are representative of a massive global e-waste issue. I’m not surprised the EU have said, enough is enough.

You may have to define ‘better’ for me too. Smaller and thinner has come at a cost. USB-C and Apple’s Thunderbolt connector suffer the same weaknesses. They are easier to damage and much more difficult and expensive to repair. Treading on a ‘modern’ charging cable by accident can easily write off a brand new device. That’s another growing mountain of premature and avoidable e-waste.

And at the same time, in the European Union (EU) we install incompatible electrical outlets (earthed outlets, in houses and buildings).

Yet the EU concerns itself with smartphone sockets or shouts monopoly (App Store).

What does the EU say about the sale of software in physical shops (where shopkeepers take 30% or more of the margin)?

You probably mean Lightning, but regardless, if we knew what better was, it’d probably be invented already. Maybe it’s something magnetic like MagSafe. Maybe it’s smaller, or faster, or both. Maybe it’s stronger. Maybe it’s not smaller, but thinner and wider, allowing it to fit into thinner phones.

The point is we don’t know what better would be. There’s no reason to assume that USB-C is the pinnacle of our connector technology.

2 Likes

Not exactly true. USB-C is the current connector, with a lot of hardware effort behind it in the form of chipsets, and it has been proven capable of serving us for years to come just by upgrading those chipsets, not the connector. Its future is safe for the near term. :smiley:

But the USB-IF board of directors is composed of the following companies: Apple, HP, Intel, Microsoft, Texas Instruments, STMicroelectronics and Renesas Electronics. They have more than 1,000 member companies, many of them participating in the R&D. See, even Apple is on the board; if they have something to contribute to the current specs and chipsets, they can do it. The reason Apple has another connector is not that their connector is better; it is just that their connector is unique and proprietary and gives them some business advantages focused on sales, that’s all.

At some point in the future, the USB-IF may propose a USB-D to supersede USB-C as the new standard, and it will be the result of an effort by hundreds of companies developing all the technology that goes with it together, in a consortium exchanging ideas and developments, not just one company looking to use its control to benefit its sales, as in “every 3 years we can change the connector for the sake of incompatibility, and that will help drive new sales”. A consortium evolving the standards together for everyone is a better approach than a salad of proprietary, useless connectors flying around.