Cast your mind back, if you will, to 2010.
Do you remember what the biggest leaps in consumer and business technology were back then?
It was the year that Instagram launched. The first iPad was introduced. Cloud computing went mainstream, with Microsoft Azure going up against AWS.
Major disruptors were only just emerging. Snapchat? Not a thing until 2011. Same goes for Minecraft's full release – and Uber was still a tiny San Francisco startup.
The most powerful games consoles in 2010 were the PS3 and Xbox 360.
In the years since, technology has snowballed in its importance and its power. Tesla has made electric cars attainable, consumer drones have transformed content creation, and social media has evolved into a world-changing force.
But technology, and the software that enables it, has only just begun to explode.
What will the landscape look like in 2030?
Computing hardware is still mainly driven by Apple, Intel, AMD and Nvidia. And while they all depend on the same subcontractors, supply chains and outsourced industrial processes, each is independently racing towards new architectures.
Intel is famed for resting on its laurels after establishing itself as the de facto chipmaker of the 1990s and 2000s. From then on it was a push/pull relationship between the two silicon giants, with AMD steadily gaining ground for the best part of the 2010s.
AMD eventually leapfrogged Intel for innovation and raw processing power in 2019, when its Zen 2 architecture and Threadripper series delivered core counts – and multithreaded performance – that Intel simply couldn't match at any price point.
And in 2020, Apple moved from Intel to processors designed in-house, starting with the M1: an entire system on a chip (SoC). It combines CPU, graphics and unified memory in a single package – one chip instead of many. While not the most powerful chips in the world, they remain, watt for watt, among the most efficient desktop-class processors available.
And it’s scalable. Shortly after M1 came the M1 Pro, M1 Max and M1 Ultra – progressively larger designs built on the same architecture, with the Ultra literally fusing two M1 Max dies together, for whatever level of power is required.
This started a chain reaction of chipmakers developing their own SoCs to compete – and the race to be the most powerful and most efficient is now in full swing.
This means that consumers will have more power in their devices than ever before. Businesses will have faster machines, with less power consumption. Creators and the people behind the content we love will be able to make more compelling, beautiful, amazing experiences for us.
But all that power needs software to be able to utilise it effectively.
Otherwise, it’s a little like trying to drive an F1 car to the shops.
Apple’s choice to transition its desktops from the established x86 architecture to ARM (more traditionally seen on mobile devices) has caused some upset. More than two years after the M1 launched, and nearly three since Apple announced the move, many flagship pieces of software still do not run natively on the architecture.
And this is just how it goes with new tech; devs and programmers have a lot of work to do in order to adapt. Writing new ARM software from scratch is comparatively easy – ARM has been around since 1990. But translating entire existing codebases to ARM is not as simple as it seems.
Still, this looks like where computing is going – and over the next five years, we expect developers and programmers to be focused on the transition, because software itself will have the opportunity to become more powerful as a result of the improved performance per watt that ARM offers.
This new power might bring new capabilities, like truly lifelike VR. Games could become more like simulations, and the much-touted metaverse could actually be realised in ways we can’t currently imagine.
Enhancing software to use the full potential of future hardware could make processes that currently take minutes or hours – like rendering large CAD or video files – instantaneous.
Even for those of us who simply use tech to consume the content that we love, more power means more to be entertained by.
And the next revolutionary leap in tech won’t be hardware; it’ll be software.
Self-driving cars are physically possible with today’s hardware; we just don’t have the software to match. By 2030, we could have made that leap. We could also have handed our healthcare, our legal system and our educational institutions over to technology rather than humans…
And if that all sounds a little bit scary, well – it is. But we’re not here to discuss the pitfalls of a world run by AI this time around.
Except that is, if AI is making the software.
Self-building software and AI-powered tools are going to explode.
Take DeepCoder, introduced in 2017. It’s a machine learning system that writes its own code, in much the same way humans do:
It learns which snippets of code – building blocks drawn from existing programs – are likely to be useful for a given problem, then tries to stitch them together and fill in the gaps, chopping and changing until the combination produces the desired outcome.
Right now, it’s not very good. Humans are still better. But it’s learning – and ML, like humans, gets better as it does more work.
Unlike humans, ML never gets tired – it can keep working, and keep improving, around the clock.
We will absolutely need developers and programmers in 2030. But by 2050? Machines could be doing all the work themselves.
Until then, we predict that software development in the 2030s will be a decade of transition: more raw power, smarter AI-assisted tools, and skilled developers still very much in demand.
ClearHub specialises in finding the best freelance DevOps experts in the world; vetted, skills-checked and ready to go. To get started, call +44 (0) 2381 157811 or send your message to firstname.lastname@example.org.