Predictions/Thoughts for Artificial Intelligence in April 2026

Apr 22, 2026 · Paddy Horan · 10 min read

Before I begin, I will use the term AI below (and above in the title). Technically speaking this is not the best term to use, but in April 2026 AI = LLMs for most people, so I will stick with it (at least for this post).

These thoughts and predictions are mostly specific to technology and, to a lesser extent, Actuarial Science / Modeling and not society in general. I will “stay in my lane” so to speak…

The Crazy Pace of Change

I have written a number of drafts regarding my thoughts about AI. I’ve been asked about my opinions and what it could mean for Actuarial modeling/science, so my plan was to have a resource to point to when I get these questions. At first, I didn’t feel like I knew enough and I was not initially that interested in language models. As time has passed and their capabilities have grown, language models have become impossible to ignore so I educated myself on how they work, built some systems and played with some tools.

Unfortunately, I still don’t feel ready to put down my definitive thoughts on AI. Why? AI is moving so fast that my attempts to summarize my feelings on the overall subject in a nice, succinct manner have proved too difficult.
However, I do have some opinions/observations right now. Instead of collecting yet another unfinished draft, I decided to document my thoughts here so that I can look back in a year and laugh at how naive I was 😄.

Here goes….

Thoughts / Predictions in April 2026

AI will struggle to justify the investment in a lot of domains

This is one I might be proven wrong about, but it’s my feeling. It’s clear that AI has caused a major step change in software development; anyone can see this. So why the prediction?

I believe software development was a unique problem:

  • the open source movement provided all the data needed to build very capable models tailored to the domain
  • the people building the AI solutions are software engineers themselves, so the builders understand the domain perfectly

I believe that other domains are not positioned to benefit as quickly as software development. The domain information is not exposed in a standardized manner in the same way that it is for software development.

Other domains will follow the same path as software development and will be disrupted, but it will be slower. At the same time, the success of AI in software development will be pointed to as the reason for even more investment in AI solutions. We will see success stories in other domains, including Actuarial Science; I just don’t think things will be disrupted fast enough to justify the level of investment, which is already crazy high.

(Good) “Taste” will matter even more

Having “good taste” in something is a unique talent. We can all recognize that there is value to having good taste, but it’s a somewhat subjective quality that is hard to pin down: you know it when you see it. I do not have good taste in a great many things, as my wife will attest, but I’d like to think software/technology is not one of them.

In technology circles we talk about “systems thinking” when architecting a solution or good “product instincts” when talking about user-facing solutions. Writing code has never been the blocker to making great products or solutions; it’s just been part of the cost equation. This cost has now been driven very low, but the obstacles to creating great technology solutions remain the same. I believe having an eye for detail and a principled approach to creating these solutions is going to matter more than ever.

Even though everyone has to say their product is “AI enabled” at the moment, this should not be the case. “AI” is not a feature! It is an amazing tool that should super-charge your ability to create great products.

Good products will be better differentiated but also harder to discover

Following on from the above, if you love building great solutions using technology, it has just become so much easier. Great, right? Well, it just became a lot easier to create bad technology too, and we are going to be flooded with it!

This means that expectations for software quality, especially in the enterprise, will go down. The incentives are just not always there in enterprise settings to create great software. Companies are falling over themselves to roll out AI strategies. This is not surprising; AI is transformative. However, the enterprise “winners” when it comes to AI will be the second wave: the people who invested but took a more principled, measured approach to adoption.

Great products will be even less common but will stand out more.

Actuarial modeling will lag behind

The foundation your technology is built on matters. Anyone who has spent any time doing AI engineering knows that it’s all about managing context. This is very hard to do in systems that were not designed with AI in mind (this article is a good read related to this).

In my opinion, Actuarial software vendors with legacy systems will struggle to build AI into existing systems. Language models will always give some answer. This means that you can add AI to an existing product very quickly to get the “AI enabled” sticker for the marketing team but I don’t see many building truly innovative or unique experiences, at least initially.

Managing context will require a strong, expressive foundation for your models (see here). I’m not convinced existing vendors can execute on this. If you’re an existing vendor and disagree, I’d love to see a demo.

Deterministic building blocks will be essential

In AI engineering this is called “tool calling”. The space is evolving quickly: MCPs were all the rage initially, but now it seems people are moving away from them (recently, I was looking into Monte Carlo Tree Search papers, which are very interesting because they work really well with “small” models, see below).

I have always been a proponent of the Unix philosophy of simple tools with narrow scope that can be combined. I believe this is still the correct approach to take with AI systems, maybe more so. Anthropic are basing a lot of Claude Code on its interaction with bash, which makes complete sense. You want to decide where the model can use its “creativity” and where it should be deterministic.

I believe the future of domain specific AI solutions will involve building the correct deterministic foundation that the language models can sit on top of and balancing this creative/deterministic trade-off. AI engineering will become more specialized to the domain and restricting what models can do will become more important in a world where we connect several models with narrow scopes to build solutions.
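To make the creative/deterministic split concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the `annuity_due_factor` tool, the stubbed-out model, the dispatch shape); it is not any real vendor API. The idea is simply that the model’s only job is to pick a tool and its arguments, while the numbers come from deterministic, testable functions.

```python
# Minimal sketch of "deterministic building blocks" for an AI system.
# The model decides *what* to do; the tools decide *how* it is done.
# All names here are hypothetical, for illustration only.

def annuity_due_factor(n: int, i: float) -> float:
    """Deterministic actuarial building block: annuity-due factor for
    n payments at annual effective rate i."""
    v = 1 / (1 + i)            # discount factor
    return (1 - v ** n) / (1 - v)

# Registry of narrow, composable tools (the Unix-philosophy part).
TOOLS = {"annuity_due_factor": annuity_due_factor}

def fake_model(prompt: str) -> dict:
    """Stand-in for a language model's structured tool-call output.
    A real system would parse the model's response here (the
    "creative" part); this stub always picks the same call."""
    return {"tool": "annuity_due_factor", "args": {"n": 10, "i": 0.03}}

def run(prompt: str) -> float:
    call = fake_model(prompt)          # model chooses the tool + args
    fn = TOOLS[call["tool"]]           # dispatch is deterministic
    return fn(**call["args"])          # result is auditable and testable

print(round(run("Value a 10-year annuity-due at 3%"), 4))
```

The point of the shape is that the tool layer can be unit-tested and audited like any other actuarial code, regardless of which model sits on top.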

This might lead to a comeback for microservices. My issue with this architecture has always been managing all the interaction boundaries and administration. AI is pretty good at these particular tasks, as the patterns are very well established.

Connecting smaller systems is already happening to an extent, with people focusing on “agentic” workflows. However, these workflows still mostly use larger frontier models, which brings us to…

Smaller models will be more important, especially in the enterprise

Yes, there’s a lot of FOMO at the moment and everyone is throwing money at AI, and that’s understandable given its potential.

However, when the dust settles and the VC money tap is turned off, I believe enterprises will focus more on cost and less on capabilities. When this happens, solutions based on smaller models will be favored over the massive foundation models everyone is using today. On Hugging Face there are so many capable models that run surprisingly well even on your laptop (check here). Right now, enterprises are not concerned with choosing the correct model for the job, but eventually they will be, and this will mean smaller, more cost-efficient models. I’m not the only one with this opinion. I believe this will lead to better-engineered AI solutions; developers will need to be more thoughtful when building with smaller models due to their smaller context windows.

This might be when we swing back the other way and realize developers are not really going to be replaced, just that software development has changed.

I’m particularly interested in small, local models for reasons other than cost, but that’s for another post…

AI will lead to lower Python adoption

I love Python but I don’t always enjoy the tradeoffs it makes, see here.
For many people, writing code by hand with Python is perfect. However, writing code fully by hand is on the way out.

AI coding assistants don’t care that Rust syntax is harder to read and write than Python; in fact, they love it. In this new world, Rust is a language with a lot of context built in. The things that made Rust hard for developers and were seen as negatives, such as lifetimes, are positives for AI. There are many more guardrails built in, so the degrees of freedom the AI has to navigate are more constrained (i.e. better management of the creative/deterministic trade-off and of the context window in general).

This may take some time to play out, but I see less Python adoption and more adoption of Rust, Go, etc. Go has a particular advantage as its compile times are very fast, perfect for an agent loop.

“The whole point to use Python is that it’s easy to read and write. So if I’m not reading or writing the code, what’s the point?” - Wes McKinney

People will come to value “human experiences”, online communities will suffer in the short term

I already feel a little AI fatigued, and I’m not alone. With things I read, I can usually tell that some or all of it has been AI generated. The issue is that I can’t tell immediately… I have to invest my time in really reading and absorbing the material before I start to realize it.
I also think it works the other way: people don’t trust content anymore and, because of previous experiences, sometimes feel something is AI generated when it’s not.

I believe people will swing back to really valuing human experiences and content (like this blog 😄). I could see blockchain finally having a killer use case. If there were a central body that could verify that content is human created, I think people would seek this out to avoid AI in certain areas of their lives. If a social network could ensure that all participants are actual humans, I think it would be very popular.

However, in the short term online communities will suffer. We join communities to benefit from the connection with others. If we have to work so much harder to screen new connections we won’t connect as much. AI generated content is already flooding online communities and it’s a major distraction and obstacle to making real connections with others.

There is a lot of talk about the algorithms that are designed to maximize engagement, but without real connections with others I think these platforms will lose their appeal.

AI will be a bad thing for Open Source

This one hurts me to write, but I can’t see AI being good for open source. Open source is built around the sharing of ideas and, well, openness. It’s ironic that this openness is what provided the model makers with the data they needed to build transformational products aimed at software development, which in turn will be bad for those very communities.

Open source has always had an issue where a lot of the burden falls to relatively few people. I have wanted to get back into open source more but life happens! There are only so many hours in the day. There are some people who give up an awful lot to support these communities and they have always been close to burnout.

AI is going to place even more of a burden on these people. This has started already: the overall signal-to-noise ratio in terms of quality pull requests is heading in the wrong direction. At the extreme, there are new and terrible issues maintainers are having to deal with, see here.

Conclusion

The only thing I’m sure about is that it’s a crazy (and exciting) time to be building technology!