<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI | Remotely Actuarial</title><link>https://www.remotely-actuarial.com/tags/ai/</link><atom:link href="https://www.remotely-actuarial.com/tags/ai/index.xml" rel="self" type="application/rss+xml"/><description>AI</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 22 Apr 2026 00:00:00 +0000</lastBuildDate><image><url>https://www.remotely-actuarial.com/media/icon_hu_da05098ef60dc2e7.png</url><title>AI</title><link>https://www.remotely-actuarial.com/tags/ai/</link></image><item><title>Predictions/Thoughts for Artificial Intelligence in April 2026</title><link>https://www.remotely-actuarial.com/blog/ai-in-2026/</link><pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate><guid>https://www.remotely-actuarial.com/blog/ai-in-2026/</guid><description>&lt;p&gt;Before I begin, I will use the term AI below (and above in the title). Technically speaking this is not the best term
to use, but in April 2026 AI = LLMs for most people, so I will stick to this term (at least for this post).&lt;/p&gt;
&lt;p&gt;These thoughts and predictions are mostly specific to technology and, to a lesser extent, Actuarial Science /
Modeling and not society in general. I will &amp;ldquo;stay in my lane&amp;rdquo; so to speak&amp;hellip;&lt;/p&gt;
&lt;h2 id="the-crazy-pace-of-change"&gt;The Crazy Pace of Change&lt;/h2&gt;
&lt;p&gt;I have written a number of drafts regarding my thoughts about AI. I&amp;rsquo;ve been asked about my opinions and
what it could mean for Actuarial modeling/science, so my plan was to have a resource to point to when I get these
questions. At first, I didn&amp;rsquo;t feel like I knew enough and I was not initially that interested in language models. As
time has passed and their capabilities have grown, language models have become impossible to ignore so I educated
myself on how they work, built some systems and played with some tools.&lt;/p&gt;
&lt;p&gt;Unfortunately, I still don&amp;rsquo;t feel ready to put down my definitive thoughts on AI. Why? AI is moving so fast that my
attempts to summarize my feelings on the overall subject in a nice, succinct manner have proved too difficult.&lt;br&gt;
However, I do have some opinions and observations right now. Instead of collecting yet another unfinished draft, I
decided to document my thoughts here so that I can look back in a year and laugh at how naive I was &amp;#x1f604;.&lt;/p&gt;
&lt;p&gt;Here goes&amp;hellip;.&lt;/p&gt;
&lt;h2 id="thoughts--predictions-in-april-2026"&gt;Thoughts / Predictions in April 2026&lt;/h2&gt;
&lt;h3 id="ai-will-struggle-to-justify-the-investment-in-a-lot-of-domains"&gt;AI will struggle to justify the investment in a lot of domains&lt;/h3&gt;
&lt;p&gt;This is one I might be proven wrong about, but it&amp;rsquo;s my feeling. It&amp;rsquo;s clear that AI has caused a major step
change in software development; anyone can see this. So why the prediction?&lt;/p&gt;
&lt;p&gt;I believe software development was a unique problem:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;the open source movement provided all the data needed to build very capable models tailored to the domain&lt;/li&gt;
&lt;li&gt;the people building the AI solutions are software engineers themselves, so the builders understood the domain perfectly&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I believe that other domains are not positioned to benefit &lt;em&gt;as quickly&lt;/em&gt; as software development. The domain information
is not exposed in a standardized manner in the same way that it is for software development.&lt;/p&gt;
&lt;p&gt;Other domains will follow the same path as software development and will be disrupted, but it will be slower. At the
same time, the success of AI in software development will be pointed to as the reason for even more investment in AI
solutions. We will see success stories in other domains, including Actuarial Science; I just don&amp;rsquo;t think things will
be disrupted fast enough to justify the level of investment, which is already crazy high.&lt;/p&gt;
&lt;h3 id="good-taste-will-matter-even-more"&gt;(Good) &amp;ldquo;Taste&amp;rdquo; will matter even more&lt;/h3&gt;
&lt;p&gt;Having &amp;ldquo;good taste&amp;rdquo; in something is a unique talent. We can all recognize that there is value to having good taste, but
it&amp;rsquo;s a somewhat subjective quality that is hard to pin down: you know it when you see it. I do not have good
taste in a great many things, as my wife will attest, but I&amp;rsquo;d like to think software/technology is not one of them.&lt;/p&gt;
&lt;p&gt;In technology circles we talk about &amp;ldquo;systems thinking&amp;rdquo; when architecting a solution or good &amp;ldquo;product instincts&amp;rdquo; when
talking about the user facing solutions. Writing code has never been the blocker to making great products or solutions,
it&amp;rsquo;s just been part of the cost equation. This cost has now been driven very low but the obstacles to creating &lt;em&gt;great&lt;/em&gt;
technology solutions remain the same. I believe having an eye for detail and a principled approach to creating these
solutions is going to matter more than ever.&lt;/p&gt;
&lt;p&gt;Even though everyone feels they have to say their product is &amp;ldquo;AI enabled&amp;rdquo; at the moment, this should not be the case. &amp;ldquo;AI&amp;rdquo; is not a
feature! It is an amazing tool that should super-charge your ability to create great products.&lt;/p&gt;
&lt;h3 id="good-products-will-be-better-differentiated-but-also-harder-to-discover"&gt;Good products will be better differentiated but also harder to discover&lt;/h3&gt;
&lt;p&gt;Following on from the above: if you love building great solutions using technology, it has just become so much easier.
Great, right? Well, it has also become a lot easier to create bad technology, and we are going to be flooded with it!&lt;/p&gt;
&lt;p&gt;This means that expectations for software quality, especially in the enterprise, will go down. The incentives are
just not always there in enterprise settings to create great software. Companies are falling over themselves to
roll out AI strategies. This is not surprising; it&amp;rsquo;s expected, because AI is transformative. However, the enterprise &amp;ldquo;winners&amp;rdquo;
when it comes to AI will be the second wave: the people who invested but took a more principled, measured approach to
adoption.&lt;/p&gt;
&lt;p&gt;Great products will be even less common but will stand out more.&lt;/p&gt;
&lt;h3 id="actuarial-modeling-will-lag-behind"&gt;Actuarial modeling will lag behind&lt;/h3&gt;
&lt;p&gt;The foundation your technology is built on matters. Anyone who has spent any time doing AI engineering knows that it&amp;rsquo;s
all about managing context. This is very hard to do in systems that were not designed with AI in mind (this
article is a good read related to this).&lt;/p&gt;
&lt;p&gt;In my opinion, Actuarial software vendors with legacy systems will struggle to build AI into existing systems. Language
models will always give some answer. This means that you can add AI to an existing product very quickly to get the
&amp;ldquo;AI enabled&amp;rdquo; sticker for the marketing team, but I don&amp;rsquo;t see many vendors building truly innovative or unique experiences, at
least initially.&lt;/p&gt;
&lt;p&gt;Managing context will require a strong, expressive foundation for your models, and
I&amp;rsquo;m not convinced existing vendors can execute on this. If you&amp;rsquo;re an existing vendor and disagree, I&amp;rsquo;d love to see a
demo.&lt;/p&gt;
&lt;h3 id="deterministic-building-blocks-will-be-essential"&gt;Deterministic building blocks will be essential&lt;/h3&gt;
&lt;p&gt;In AI engineering this is called &amp;ldquo;tool calling&amp;rdquo;. The space is evolving quickly; MCPs were all the rage initially, but
now it seems people are moving away from them (recently, I was looking into some
papers which are very interesting because they work really well with &amp;ldquo;small&amp;rdquo; models; see below).&lt;/p&gt;
&lt;p&gt;I have always been a proponent of the Unix philosophy of simple tools with narrow scope that can be combined. I
believe this is still the correct approach to take with AI systems, maybe more so. Anthropic are basing a lot of Claude
Code on its interaction with bash, which makes complete sense. You want to decide where the model can use its
&amp;ldquo;creativity&amp;rdquo; and where it should be deterministic.&lt;/p&gt;
&lt;p&gt;I believe the future of domain specific AI solutions will involve building the correct deterministic foundation that
the language models can sit on top of and balancing this creative/deterministic trade-off. AI engineering will become
more specialized to the domain and restricting what models can do will become more important in a world where we
connect several models with narrow scopes to build solutions.&lt;/p&gt;
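&lt;p&gt;As a minimal sketch of what I mean by the creative/deterministic split (every name here is hypothetical, not taken from any particular framework): the model&amp;rsquo;s entire action space is a small registry of narrow, deterministic tools, and anything outside it is refused.&lt;/p&gt;

```python
# Hypothetical sketch of "tool calling" with narrow, deterministic tools.
# The model supplies the creativity (choosing a tool and its arguments);
# the tools themselves are small, testable functions in the Unix spirit.

def discount_factor(rate, years):
    """Deterministic building block: present-value discount factor."""
    return (1.0 + rate) ** (-years)

def annuity_due(rate, years):
    """Deterministic building block: present value of an annuity-due of 1 per year."""
    if rate == 0:
        return float(years)
    v = 1.0 / (1.0 + rate)
    return (1.0 - v ** years) / (1.0 - v)

# The registry is the model's entire action space.
TOOLS = {"discount_factor": discount_factor, "annuity_due": annuity_due}

def run_tool(call):
    """Execute a structured tool call; anything outside the registry is refused."""
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError("unknown tool: " + call["name"])
    return fn(**call["args"])

# A (made-up) structured call a model might emit:
result = run_tool({"name": "discount_factor", "args": {"rate": 0.05, "years": 10}})
```

&lt;p&gt;The point is that the numbers always come from plain, auditable code; the model only decides which block to run and with what arguments.&lt;/p&gt;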
&lt;p&gt;This might lead to a comeback for micro-services. My issue with this architecture has always been managing all the
interaction boundaries and administration. AI is pretty great at these particular tasks, as the patterns are very well
established.&lt;/p&gt;
&lt;p&gt;Connecting smaller systems is already happening to an extent with people focusing on &amp;ldquo;agentic&amp;rdquo; workflows. However,
they are still mostly using larger frontier models. Which brings us to&amp;hellip;&lt;/p&gt;
&lt;h3 id="smaller-models-will-be-more-important-especially-in-the-enterprise"&gt;Smaller models will be more important, especially in the enterprise&lt;/h3&gt;
&lt;p&gt;Yes, there&amp;rsquo;s a lot of FOMO at the moment and everyone is throwing money at AI, and that&amp;rsquo;s understandable given its
potential.&lt;/p&gt;
&lt;p&gt;However, when the dust settles and the VC money tap is off, I believe enterprises will focus more on cost and less on
capabilities. When this happens, solutions based on smaller models will be favored over the massive foundation models
everyone is using. There are so many capable models that run surprisingly
well even on your laptop. Right now, enterprises are not concerned with the correct model for the job,
but eventually they will be, and this will mean smaller, more cost-efficient models.
I believe this will lead to better AI-engineered technology solutions; developers will need to be more thoughtful when
building solutions with smaller models due to their smaller context windows.&lt;/p&gt;
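&lt;p&gt;To make the &amp;ldquo;more thoughtful&amp;rdquo; part concrete, here is a minimal sketch (entirely hypothetical names, and a deliberately crude four-characters-per-token estimate) of budgeting context for a small window instead of dumping everything into the prompt:&lt;/p&gt;

```python
# Hypothetical sketch: a crude context budget for a small model.
# The discipline is the point, not the heuristic: with a small window
# you must rank and trim what goes into the prompt.

def rough_tokens(text):
    """Very rough token estimate (about 4 characters per token)."""
    return max(1, len(text) // 4)

def pack_context(snippets, budget_tokens):
    """Greedily keep the highest-scoring snippets that fit the budget.
    `snippets` is a list of (score, text) pairs, with scores assumed to
    come from some upstream retrieval/relevance step."""
    packed, used = [], 0
    for score, text in sorted(snippets, reverse=True):
        cost = rough_tokens(text)
        if used + cost > budget_tokens:
            continue  # skip anything that would blow the window
        packed.append(text)
        used = used + cost
    return packed
```

&lt;p&gt;With a huge frontier-model window this bookkeeping feels optional; with a small model it is the difference between a system that works and one that doesn&amp;rsquo;t.&lt;/p&gt;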
&lt;p&gt;This might be when we swing back the other way and realize developers are not really going to be replaced, just that
software development has changed.&lt;/p&gt;
&lt;p&gt;I&amp;rsquo;m particularly interested in small, local models for reasons other than cost, but that&amp;rsquo;s for another post&amp;hellip;&lt;/p&gt;
&lt;h3 id="ai-will-lead-to-lower-python-adoption"&gt;AI will lead to lower Python adoption&lt;/h3&gt;
&lt;p&gt;I love Python but I don&amp;rsquo;t always enjoy the tradeoffs it makes.&lt;br&gt;
For many people, writing code by hand with Python is perfect. However, writing code fully by hand is on the way out.&lt;/p&gt;
&lt;p&gt;AI coding assistants don&amp;rsquo;t care that Rust syntax is harder to read and write than Python; in fact, they love it. In this
new world Rust is a language with a lot of &lt;strong&gt;context built-in&lt;/strong&gt;. The things that made Rust hard for developers and
were seen as negatives, such as lifetimes, are positives for AI. There are many more guardrails built in, so the
degrees of freedom that the AI has to navigate are more constrained (i.e. better management of the creative/deterministic
trade-off and the context window in general).&lt;/p&gt;
&lt;p&gt;This may take some time to play out, but I see less Python adoption and more adoption of Rust, Go, etc. Go has a
particular advantage as its compile times are very fast, perfect for an agent loop.&lt;/p&gt;
&lt;blockquote class="border-l-4 border-neutral-300 dark:border-neutral-600 pl-4 italic text-neutral-600 dark:text-neutral-400 my-6"&gt;
&lt;p&gt;The whole point to use Python is that it&amp;rsquo;s easy to read and write. So if I&amp;rsquo;m not reading or writing the code, what&amp;rsquo;s the point? - Wes McKinney&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h3 id="people-will-come-to-value-human-experiences-online-communities-will-suffer-in-the-short-term"&gt;People will come to value &amp;ldquo;human experiences&amp;rdquo;, online communities will suffer in the short term&lt;/h3&gt;
&lt;p&gt;I already feel a little AI fatigued.
I find that, with things I read, I can usually tell that some or all of it has been AI generated. The issue is that I can&amp;rsquo;t tell
immediately&amp;hellip; I have to invest my time in really reading and absorbing the material before I start to realize this.&lt;br&gt;
I also think it works the other way: people don&amp;rsquo;t trust anymore and, at times, feel something is AI generated when
it&amp;rsquo;s not, due to previous experiences.&lt;/p&gt;
&lt;p&gt;I believe people will swing back to really valuing human experiences and content (like this blog &amp;#x1f604;). I could see
blockchain finally having a killer use-case: if there were a central body that could verify that content is human-created,
I think people would seek this out to avoid AI in certain areas of their life. I think &lt;em&gt;if&lt;/em&gt; a social network could
ensure that all participants are actual humans, it would be very popular.&lt;/p&gt;
&lt;p&gt;However, in the short term online communities will suffer. We join communities to benefit from the connection with
others. If we have to work so much harder to screen new connections we won&amp;rsquo;t connect as much. AI generated content
is already flooding online communities and it&amp;rsquo;s a major distraction and obstacle to making real connections with others.&lt;/p&gt;
&lt;p&gt;There is a lot of talk about the algorithms that are designed to maximize engagement, but without real connections with
others I think these platforms will lose their appeal.&lt;/p&gt;
&lt;h3 id="ai-will-be-a-bad-thing-for-open-source"&gt;AI will be a bad thing for Open Source&lt;/h3&gt;
&lt;p&gt;This one hurts me to write, but I can&amp;rsquo;t see AI being good for open source. Open source is built around the sharing of
ideas and, well, openness. It&amp;rsquo;s ironic that this openness is what has provided the model makers with the
data they needed to build transformational products aimed at software development, which in turn will be bad for those
very communities.&lt;/p&gt;
&lt;p&gt;Open source has always had an issue where a lot of the burden falls to relatively few people. I have wanted to get
back into open source more but life happens! There are only so many hours in the day. There are some people who
give up an awful lot to support these communities and they have always been close to burnout.&lt;/p&gt;
&lt;p&gt;AI is going to place even more of a burden on these people. This has started already: the overall signal-to-noise
ratio in terms of quality pull requests is heading in the wrong direction. At the extreme, there are new and terrible
issues maintainers are having to deal with.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion&lt;/h2&gt;
&lt;p&gt;The only thing I&amp;rsquo;m sure about is that it&amp;rsquo;s a crazy (and exciting) time to be building technology!&lt;/p&gt;</description></item></channel></rss>