Your Computer Gives You an A for Effort Today
Using AI to help you make decisions, not make decisions for you.
I demonstrate our product to many people, so I get to hear a lot of feedback and questions. One of the most potentially confusing aspects of what we do is our use of artificial intelligence and machine learning to generate a “score” for a certain commercial real estate property or proposed deal. It’s sometimes hard to explain what this really means, so I’ll try again here.
People sometimes think of the scores awarded to an Olympic gymnast after a routine – we’ve all wondered at what can sometimes appear to be an arbitrary system of judging. But we have to remember that this kind of interpretive scoring of a performance is not what is happening in the world of artificial intelligence.
When we engage with a CRE lender, our AI technology learns that lender’s own specific ideal lending profile. Over 3,000 data points are summarized into a “score” for them. The real value is in the automated collection and analysis, of course; the score generated at the end is really just an interpretation. A computer can keep track of 3,000 data points and see the result in the data, but a person needs it streamlined and summarized. The artificial intelligence doesn’t start out with any idea of what is a good or bad deal; it simply compares all of the data to the lending profile that was preset by the user. In other words, by learning what kinds of deals you like, it can tell you how good a new deal should look to you. Our different customers might score the same transaction in wildly different ways, and perhaps they should.
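To make the idea concrete, here is a deliberately tiny sketch of profile-based scoring. The feature names, weights, and tolerance bands are invented for illustration (a real system learns from thousands of data points, not two), but it shows the key point: the score measures closeness to a preset profile, so two lenders can score the same deal very differently.

```python
# Hypothetical, highly simplified sketch of profile-based deal scoring.
# Feature names, ideals, tolerances, and weights are all invented for
# illustration -- a real system would learn these from the lender's data.

def score_deal(deal, profile):
    """Score a deal 0-100 by how closely each feature matches the
    lender's preset ideal profile."""
    total = 0.0
    weight_sum = 0.0
    for feature, (ideal, tolerance, weight) in profile.items():
        value = deal[feature]
        # Closeness: 1.0 at the ideal value, falling to 0.0 at the
        # edge of the lender's tolerance band.
        closeness = max(0.0, 1.0 - abs(value - ideal) / tolerance)
        total += weight * closeness
        weight_sum += weight
    return round(100 * total / weight_sum, 1)

# Two lenders, two profiles: feature -> (ideal, tolerance, weight)
conservative = {
    "loan_to_value":      (0.60, 0.20, 3.0),
    "debt_service_ratio": (1.50, 0.50, 2.0),
}
aggressive = {
    "loan_to_value":      (0.80, 0.25, 1.0),
    "debt_service_ratio": (1.20, 0.40, 1.0),
}

# The exact same deal...
deal = {"loan_to_value": 0.75, "debt_service_ratio": 1.30}

# ...scores very differently against each profile.
low_score = score_deal(deal, conservative)
high_score = score_deal(deal, aggressive)
```

The scoring function is the same for every customer; only the learned profile changes, which is exactly why the same transaction can earn wildly different scores.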
“You may think you know all of the factors that determine whether something fits your ideal profile, but sometimes the AI gets to know them even better than you do.”
Oftentimes, our customers want the score explained to them. This makes sense – if your child came home with a C on their report card, you might ask why it is a C rather than a B (or a D). The score itself doesn’t tell the whole story. This is one reason we use the type of machine learning that we do: we can go back and explain every data point that led to a conclusion. In other words, we can tell you exactly why your child got a C instead of an A. Some deep learning models (neural networks) don’t really allow for that kind of tracing back to see exactly how each piece of data contributed to the whole. That opacity would obviously not work in regulated environments such as banks, where an audit trail is always necessary.
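One way to picture that kind of explainability is a per-feature breakdown: instead of a single number, report how many points each data point earned out of its maximum. This is a hypothetical sketch (the features and weights are again invented), not our actual method, but it shows the shape of an auditable answer to “why a C and not a B?”

```python
# Hypothetical sketch of an explainable score: break the overall number
# into per-feature contributions. Feature names, ideals, tolerances,
# and weights are invented purely for illustration.

def explain_score(deal, profile):
    """Return, for each feature, the points it earned toward the
    score and the maximum points it could have earned."""
    weight_sum = sum(w for _, _, w in profile.values())
    report = {}
    for feature, (ideal, tolerance, weight) in profile.items():
        value = deal[feature]
        closeness = max(0.0, 1.0 - abs(value - ideal) / tolerance)
        earned = 100 * weight * closeness / weight_sum
        possible = 100 * weight / weight_sum
        report[feature] = (round(earned, 1), round(possible, 1))
    return report

profile = {
    "loan_to_value":      (0.60, 0.20, 3.0),
    "debt_service_ratio": (1.50, 0.50, 2.0),
}
deal = {"loan_to_value": 0.75, "debt_service_ratio": 1.30}

breakdown = explain_score(deal, profile)
for feature, (earned, possible) in breakdown.items():
    print(f"{feature}: {earned} of {possible} points")
```

Every point in the final score is traceable to a specific data point and its distance from the ideal, which is the kind of audit trail a regulated lender needs.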
Finally, there’s another advantage to teaching AI what you like and then letting it score opportunities for you. You may think you know all of the factors that determine whether something fits your ideal profile, but sometimes the AI gets to know them even better than you do. Data doesn’t lie, and I have definitely seen examples of an AI system being able to “teach” users that there are certain patterns they are looking for (or avoiding) that they weren’t even aware of. For example, AI might look at a large number of previous loans and find patterns in the data that led to late payments or even loan defaults – patterns you might not have been aware of. You can then simply use this insight to update your lending profile. It can be disconcerting to learn that the way you’ve been articulating your ideal profile might not be as accurate as you thought. But those of us who are open to learning something new can benefit from this greatly.
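In miniature, that kind of pattern discovery can be as simple as comparing outcomes across groups of past loans. The attribute and the loan records below are entirely made up for illustration, but the idea scales: slice history by an attribute, compare default rates, and surface the gaps a lender never consciously noticed.

```python
# Hypothetical sketch of pattern discovery in historical loans: compare
# the default rate of loans with and without a given attribute. The
# attribute name and all loan records are invented for illustration.

def default_rate_by_attribute(loans, attribute):
    """Return the default rate for loans where the attribute is
    True vs. False, surfacing patterns a lender may not have noticed."""
    groups = {True: [0, 0], False: [0, 0]}  # value -> [defaults, total]
    for loan in loans:
        key = bool(loan[attribute])
        groups[key][0] += loan["defaulted"]
        groups[key][1] += 1
    return {k: round(d / t, 2) if t else None
            for k, (d, t) in groups.items()}

past_loans = [
    {"interest_only": True,  "defaulted": 1},
    {"interest_only": True,  "defaulted": 1},
    {"interest_only": True,  "defaulted": 0},
    {"interest_only": False, "defaulted": 0},
    {"interest_only": False, "defaulted": 0},
    {"interest_only": False, "defaulted": 1},
    {"interest_only": False, "defaulted": 0},
]

rates = default_rate_by_attribute(past_loans, "interest_only")
# A gap between the two rates is exactly the kind of insight that
# might prompt a lender to update their stated ideal profile.
```

Run across every attribute in a large loan history, a comparison like this is how the data “teaches” you which patterns you were really rewarding or avoiding.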