The Age of Reason
🎤 Farnsey called it decades ago...
Image: Ideogram
Welcome to 2025; it’s great to be back in the “AI Powered” Chair. A lot is happening, including a new community (join here), new courses (start here), and general excitement for what the world of AI might bring us this year.
But first, a song…
“From the day that we were born we've been heading down a track
Sometimes it's made for good sometimes for bad
But if we look behind us there's a wave coming down
Carrying us forward to a new age”
If you’re Gen X like me or have a fondness for Aussie pop from the ’80s or early ’90s, you may have read those lines above and already be humming the chorus.
If not, listen to the video below on YouTube. Make sure you fully enjoy “peak Farnsey” with his magnificent mullet blowing in the wind on top of a rock … and close your eyes to fully enjoy those incredible keyboard solos…
If Back to the Future were released this year, Marty McFly would be heading back to the ridiculously retro year of 1995, when I first started working for British Telecom and we were still discovering the far corners of the internet. So starting my first post of the year with a bit of classic Farnsey seems appropriate.
Here is some more trivia for you: the song was the first time a female Australian songwriter made it to number one. She was heavily pregnant at the time, and to her the “Age of Reason” lyrics represented hope for her newborn.
But every time I see an AI news story about this new breed of “reasoning” LLMs, I, too, hear Farnsey singing, “What about The Age… of Re —he-ee—son….”
Welcome to the NEW Age of Reason.
Before we get onto the AI side of things, you should also know that Merriam-Webster has catalogued ‘age of reason’ as a noun phrase: the time of life when one begins to be able to distinguish right from wrong.
But this version is based on an intellectual and cultural movement that emerged in Europe during the late 17th and 18th centuries. Thinkers during this period sought to reform society by applying rational thought to all aspects of life, including governance, religion, science, economics, and education.
Philosophers such as Locke, Voltaire, and Rousseau believed the human intellect could understand and improve the world. Immanuel Kant urged individuals to think independently, championing the motto Sapere aude (“Dare to know”).
It got me thinking: maybe this is a great way to describe this new breed of AI models, which may also be better at distinguishing right from wrong.
It’s not that these new models, such as o1, Gemini Advanced and the new (and free) DeepSeek model fresh out of China, are free of hallucinations - they absolutely are not - so the 2024 rules of fact-checking, using your own voice and applying the spend/save mindset all still apply.
However, the reasoning or ‘thinking’ step - the internal logic and structured analysis the model works through before it answers - is there to limit those errors, which can only be a good thing.
What does this mean for you, the agent?
There are a fair few posts on Reddit and all over YouTube about these models' coding capabilities, so yes, with some instruction and a little practice, you could all be developing your own CRMs, but that is probably not what you want (unless you are a nerd like me!)
We’ve dealt with some of this before, but a model designed for “reasoning” has an extended capacity to handle layered, logic-based tasks that go beyond simple content creation.
For example, it can examine a buyer’s budget and preferences, factor in changes to interest rates or market shifts and perhaps even provide a better-supported pricing strategy by analysing past sales figures.
This includes things like scenario planning - where you might be able to ask if-then style questions, for example, “If I suggest a particular price for a property under current market conditions, how will this affect buyer response if interest rates shift by one per cent?”
Then you can compare the different outcomes to help settle on your approach.
Try this: The “What if” Prompt
Plenty of people say they cannot get good results from OpenAI’s o1, and I came across a post on Reddit about how you need to structure your prompts specifically for o1 - so I’ve formatted this prompt based on that advice.
If you don’t have access to OpenAI o1 - i.e., you don’t have a paid subscription to ChatGPT - you can try this with DeepSeek, which I mentioned earlier and is free.
You are a real estate agent representing a three-bedroom house in [Sydney]. The initial listing price is $1,200,000. Interest rates currently sit at 4%. Your objective is to work out how different interest rate shifts and listing price adjustments might influence buyers’ willingness to purchase.
1. Provide a brief overview of the local market conditions in Sydney for properties around $1,200,000.
2. Assess how a 1% increase in interest rates (from 4% to 5%) might influence potential buyers in terms of affordability and willingness to negotiate.
3. Suggest the effect of reducing the listing price by $50,000 (to $1,150,000) on:
- Buyer interest
- Comparisons against similar listings
- Likely negotiation outcomes
4. Compare the above two scenarios (increased interest rate vs. lowered listing price). Identify which scenario offers the most favourable balance between selling in a timely manner and securing a suitable profit.
5. Suggest any additional data or analysis that would improve the accuracy of these forecasts (for instance, rental yields or recent sales trends in the area).
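Side note for anyone who wants to sanity-check what the model says in step 2: the affordability side of a rate rise is just the standard loan-repayment formula. Here’s a minimal Python sketch; the 20% deposit and 30-year loan term are my own illustrative assumptions, not market data or advice.

```python
# Rough affordability check for step 2 of the prompt: how a 1% rate rise
# changes monthly repayments. The deposit size and loan term below are
# illustrative assumptions only.

def monthly_repayment(principal: float, annual_rate: float, years: int = 30) -> float:
    """Standard amortising (principal-and-interest) repayment formula."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # total number of repayments
    return principal * r / (1 - (1 + r) ** -n)

price = 1_200_000
loan = price - 0.20 * price        # assume a 20% deposit, so borrow $960,000

for rate in (0.04, 0.05):          # the 4% vs 5% scenario from the prompt
    print(f"At {rate:.0%}: ${monthly_repayment(loan, rate):,.0f} per month")
```

On those assumptions, the jump from 4% to 5% adds roughly $570 a month to repayments, which gives you a concrete number to bring into the “willingness to negotiate” conversation.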
Let me know how you went in the comments 👇
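And one for the nerds I mentioned earlier: if you’d rather send the prompt through code than paste it into a chat window, most of these providers offer an OpenAI-compatible API. The sketch below uses the openai Python client; the model name, base URL and API-key variable are placeholders you’d need to swap for your provider’s actual values (check OpenAI’s or DeepSeek’s documentation).

```python
# A rough sketch only: sending the "what if" prompt to a reasoning model
# through an OpenAI-compatible chat-completions API. The base URL, model id
# and environment variable below are placeholders, not real values.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["REASONING_API_KEY"],          # hypothetical variable name
    base_url="https://api.your-provider.example/v1",  # swap in your provider's endpoint
)

prompt = (
    "You are a real estate agent representing a three-bedroom house in Sydney. "
    "The initial listing price is $1,200,000 and interest rates currently sit at 4%. "
    "Work through the rate-rise and price-reduction scenarios and compare them."
)

response = client.chat.completions.create(
    model="your-reasoning-model-id",                  # e.g. an o1- or DeepSeek-family model
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```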
But as we embrace these new tools, and with o3 on the horizon, a new question lingers: What happens when the models start making decisions beyond our full understanding or capability?
Is a New Age of Reason dawning where AI is actually smarter than humans? And if so, what are the consequences… and are we prepared for it?
To be continued…(tomorrow!) - until then, happy hunting 🚀