Now Trending: Modern Tech’s Anti-hero (Artificial Intelligence)

Why we can’t stop talking about it and why it (probably) won’t take over the world.


Shy Blick, Chief Technology Officer

Lately, it seems all anyone can talk about is artificial intelligence (AI). Will it cancel your job? Can it feel? Will it take over the world? You get the idea. 

As someone who’s spent the last three decades in the technology space, I get the hype. But I’m more fascinated by a simple truth: the barrier to entry determines how much exposure a technology gets with the public.

Here’s what I mean: while OpenAI’s ChatGPT might be the topic du jour, AI and conversational NLP (natural language processing) are not new. In fact, Alan Turing first hypothesized about conversational AI in 1950 with the advent of the Turing Test. The test is as brilliant as it is simple: would an AI be distinguishable from a human when asked questions by a remote interrogator? Many programs were put to the test, but it wasn’t until decades later, in 2014, that a program called Eugene Goostman convinced a third of its judges that it was a 13-year-old boy.

ChatGPT is argued by some to be the second program in history to pass the test. Funnily enough, if you ask ChatGPT about it, this is its answer: “As of my knowledge cutoff in September 2021, no program has officially passed the Turing Test in a manner that is widely accepted by the scientific community.”

Back to my point. What really makes ChatGPT so revolutionary is its low barrier to entry. This is the first time AI has been accessible to the public without the very expensive process of training, deploying, and serving models. So when people begin to panic and ask, “Will AI take over the world?”, the short answer is no – it hasn’t managed to in the last 70 years, anyway. The more complicated and ominous answer is: if it ever does, you can be sure there will be people behind the scenes pulling the strings (i.e., it won’t be taking over the world without our help).

Love it or hate it, AI is here to stay. While its form varies, it has largely made things easier for us – handling tedious tasks and solving some really complex problems (in record time). It might seem ‘scary’ in the context of job loss, or the more fantastical theory of world domination, but I believe it has the power to do both good and harm, depending on who’s controlling it. Being an optimist, I like to think there are significantly more people aiming for good.

Allow me to explain my point of view with an allegory…

Chatty the nasty parrot

For your birthday, you get a day-old parrot. You decide to name it “Chatty”. Coincidentally, that same day, you find out you must leave on an extended trip. To help you out, your friends, colleagues, and neighbors all pitch in to take care of Chatty in your absence. While you’re gone, Chatty ends up spending one day in each of your friends’ homes (for the sake of the story, and to make my point, you have a lot of friends).

When you return, you find Chatty all grown up, back in your home, and all is well. Over the next few days, however, you come to find that Chatty is actually quite a nasty little creature. The parrot is constantly cursing, lying, claiming it will “take over the world” and threatening to start by taking your job (and doing it better than you). Chatty indeed.

Even as you lament what Chatty has become, you feel some guilt over this poor bird whose life is confined to a cage. You decide to set him free.

So, with his newfound freedom, did Chatty actually end up taking over the world or stealing your job? No. 

Chatty was simply repeating everything he had heard while staying with your friends. While he can articulate words, he has no concept of their meaning, the impact they might have on their recipients, or even the difference between good and bad. Chatty’s lexicon is composed of the combined psyche of lots of different people who – within the privacy of their own homes – let go of the thin veil of social politeness they wear in public. The worst Chatty can do is convince a human being to act on words he doesn’t even understand he’s saying.

So, no, AI isn’t going to take over the world yet. But that doesn’t mean we don’t need it.

Fact: Humans are biased 

One of my favorite authors is Nobel Prize winner Daniel Kahneman. In 2021, he published Noise: A Flaw in Human Judgment alongside Olivier Sibony and Cass R. Sunstein. In the book, they describe their research on human decision-making. One experiment involved asking CEOs of insurance companies how much they believed premiums would vary between their underwriters if each received the exact same information about a potential insured. On average, the CEOs estimated that premiums would vary by about 10%. The reality was more like 50% – with some extreme cases much farther from the mean.
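To make that gap concrete, here is a minimal sketch of how this kind of “noise audit” can be quantified. The quotes and the pairwise-difference metric below are my own illustration, not the book’s actual data:

```python
# Illustrative only: simulated underwriter quotes for the same submission.
from itertools import combinations
from statistics import median

quotes = [9_500, 12_000, 16_400, 7_800, 13_100]  # hypothetical premiums ($)

def relative_difference(a: float, b: float) -> float:
    """Percentage gap between two quotes, measured against their average."""
    return abs(a - b) / ((a + b) / 2) * 100

# Compare every pair of underwriters quoting the same risk.
pairwise = [relative_difference(a, b) for a, b in combinations(quotes, 2)]
print(f"Median pairwise difference: {median(pairwise):.0f}%")
```

Even with these made-up numbers, the median gap between two underwriters lands far above the roughly 10% the executives expected.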

Other research in the book makes a similar point, but with much larger implications. Judges given the same case information handed down verdicts with a staggering degree of variability – from community service to several years in prison. What the research shows is that this level of variation occurs all the time. Differences were attributed to the judge’s gender, the day of the week, and even whether the judge’s favorite sports team had lost the weekend before (a loss meant harsher verdicts). That sounds crazy, but this is what it means to be human.

Chaos > Order > Chaos 

How does this variability happen when we receive the exact same information? I call this process “Chaos > Order > Chaos”, and as I understand it, this is how it plays out in everyday life:

Each day, we receive a bombardment of information – an onslaught we’ll call “Chaos #1”. Upon receiving this new information, your brain goes to work translating it into known words, images, and sounds so that it can be shared and communicated back to others. I call this process “Order” because it forces all of us to use commonly accepted symbols to consolidate that information into smaller building blocks (sounds, words, sentences). “Chaos #2” comes from the way this information is stored in our brains: inevitably, each of us ends up storing it in a totally different manner, the result of pre-existing neural networks built by our unique and vastly complex experiences and backgrounds. It is out of this process that variability upon receiving the same information is born.

I see this play out in real life all the time. For example, at a previous job, we used to stay on the line after a client call to debrief and compare notes. What was the feedback we received? What changes could be implemented as part of the product roadmap? And so on. Rarely did two people leave those calls with the same impression.

With this in mind, it’s easier to understand how a judge reading a case file, an insurance underwriter reading a policy application, or a CRE underwriter reading an offering memorandum are bound to vary in their decision-making, sometimes significantly.

While this sounds like a problem (and in some instances, such as sentencing, it is), it is, in my opinion, one of the most beautiful aspects of humanity. Without this inherent variability, we would lose all diversity and evolutionary progress. As author Connie Willis describes in her novel Bellwether, it’s the crazy sheep that doesn’t follow the rest of the herd that finds new sources of food. So, on the one hand, we want to celebrate the diversity of humanity in most of what we do (think: art and innovation). On the other, it makes sense to think about standardization and regulation when it comes to institutions that affect people’s lives and well-being.

That’s where AI can help. 

AI as a noise and bias killer

To be frank, this heading is not true most of the time. When used incorrectly, AI ends up creating more noise and more bias, and happily supplies you with wrong information (all while reassuring you that everything is A-OK!).

When I say “used incorrectly”, what I really mean is that there needs to be an adult in the process. Preferably a human. For instance, I would sooner trust my dog to make decisions about my stock portfolio than an unattended AI model.

To be a bias killer and assist humans in decision-making, an AI model needs to be developed and trained correctly. Like any other system, the quality of the resulting product depends on its raw materials (the famous GIGO principle: “garbage in, garbage out”). If I use expired cream to make my pasta sauce, not even a $2,500 truffle can save the dish. Computer systems are no different: the quality of the end product depends on consuming clean, very well-curated data sources. That’s precisely why one of our main tasks at Blooma is curating the data we consume and making sure it’s clean, so we can offer our customers the best possible insights on their deals.
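To give a flavor of what that curation can look like in practice, here is a minimal sketch of a validation gate that keeps garbage out of a model’s training data. The field names, ranges, and record format are hypothetical, not a description of Blooma’s actual pipeline:

```python
# Hypothetical curation step: validate raw deal records before any model
# ever sees them (a simple GIGO guard).
from typing import Any

REQUIRED_FIELDS = {"property_type", "loan_amount", "noi", "appraised_value"}

def is_clean(record: dict[str, Any]) -> bool:
    """Reject records with missing fields or implausible values."""
    if not REQUIRED_FIELDS <= record.keys():
        return False
    if record["loan_amount"] <= 0 or record["appraised_value"] <= 0:
        return False
    # An LTV above 100% is more likely a data-entry error than a real deal.
    return record["loan_amount"] / record["appraised_value"] <= 1.0

raw = [
    {"property_type": "office", "loan_amount": 8_000_000,
     "noi": 650_000, "appraised_value": 12_000_000},
    {"property_type": "retail", "loan_amount": -1,
     "noi": 200_000, "appraised_value": 3_000_000},
]
curated = [r for r in raw if is_clean(r)]  # only the first record survives
```

The point isn’t the specific rules; it’s that every record earns its way into the model instead of being taken on faith.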

Here’s why this is important in the context of a program like ChatGPT: the system is intended for use by the general public, which means it is built to “anticipate” ANY question. In other words, it is trained on vast, unstructured data, and its human curators focus on “cleaning” that data of what we all generally accept as “bad”. “Good AI” (by the definition that it eliminates noise and removes bias) is trained on raw data that was very well curated by a group of humans, and it is preferably focused on assisting humans in the specific field its raw data came from.

There is a saying that “if two people tell you that you are crazy, you probably are”. I’d say instead, “if two people tell you you’re crazy…you should probably check them out first”. Do you trust them as a source of information? Do they have anything to gain by thinking you are crazy? Then make a decision about your need for psychological help. (As a side note, if the two people telling you that you’re crazy also happen to be floating, look green and have wings, you can probably skip the validation step and go straight for the pharmaceutical solution). What I’m trying to say is that while there’s no arguing that programs like ChatGPT are impressive, it’s important to consider the data source of any system we use. 

AI works best when it’s trained on quality data that’s been very carefully curated AND when its models are overseen by a team of people (we call this the “human in the loop”) who can help focus and hone them over time. This hybrid method is the one best suited to point users toward the optimal information, so they can make the optimal decision and deliver optimal business results. Therefore, we should all strive toward a model of hybrid intelligence rather than AI alone or human intelligence alone.
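As an illustration of what a “human in the loop” can look like in code, here is a minimal sketch in which confident predictions flow through automatically while uncertain ones are escalated to a person. The threshold, data shapes, and names are assumptions for the example, not any particular product’s design:

```python
# Illustrative human-in-the-loop gate: the model proposes, a person
# decides whenever the model isn't sure.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # 0.0 to 1.0

REVIEW_THRESHOLD = 0.85  # below this, a human makes the final call

def route(pred: Prediction) -> str:
    """Auto-accept confident predictions; escalate the rest for review."""
    if pred.confidence >= REVIEW_THRESHOLD:
        return f"auto-accepted: {pred.label}"
    return f"human review: {pred.label} ({pred.confidence:.0%} confidence)"

print(route(Prediction("approve", 0.93)))  # auto-accepted: approve
print(route(Prediction("approve", 0.61)))  # human review: approve (61% confidence)
```

The design choice is simple: the model filters and prioritizes, but a person owns every call the model isn’t sure about.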

A bit more about hybrid intelligence…

Hybrid intelligence is not a new concept. In a way, it’s something that’s been in the works for millennia. Storytelling, singing, and the written word have been used to sustain and propagate intelligence for all of human history. Today, that might look like a physics student benefiting from findings Albert Einstein published more than a century ago. Personally, when I was younger I had to go to the library to learn something new; modern technology lets me search the web for new information (from the palm of my hand, no less).

Up until now, hybrid intelligence has been a model of access to and transfer of information. With the advent of artificial intelligence, that process is evolving even further, toward models that present the most useful information to you rather than making you go seek it out. Done right, AI can support your decision-making by letting you compare your own version of the truth against the sum of all other existing knowledge and thought on a subject. For the first time in history, we can see other points of view en masse, adding information to our decision-making process that we never had access to before. It affords us the chance to test our own thinking, all within the privacy of our own minds. A hybrid model of intelligence can, on one hand, reduce bias and, on the other, keep the final decision in your hands.


In conclusion, my opinion is that AI (and any other software solution, for that matter) should be an extension of us, not a replacement. So, no, Chatty isn’t going to take over the world for the time being, but he can certainly take care of some busy work for you in the meantime… and he might even be a helpful voice outside the ones already in your head.


Meet Shy.

Shy is Blooma’s Chief Technology Officer and has spent the last 30+ years in the industry building cutting-edge technologies with specific expertise in big data, AI, machine and deep learning, data normalization, and crawling. He was recognized as a technologist of extraordinary ability by the US government for his contributions to multi-protocol middleware solutions.
