Month: January 2021

The Life and Times of a Former CRE Underwriter

How the traditional approach can lead to more distractions than deals.

As Blooma’s Director of Customer Experience, I work a lot with lenders and brokers, especially their commercial real estate underwriting teams. I’d like to think I’m familiar with the motivations and struggles of our clients, since in a former life I was a CRE underwriter myself. The truth is, CRE underwriting and analysis processes are very fragmented, and that fragmentation leads to inefficiency.

Much of my job as an underwriter involved managing many disparate data sources. For as large and mature as the CRE lending and investment space is, its underwriting processes have not kept pace.

Got a new deal coming in? You’ll start by pulling up a model you used on a prior like-kind deal and overwriting the data. You then begin searching for sales and lease comparables on multiple sites, speaking with brokers about transactions and trends in the area, reviewing maps and street views to analyze the area surrounding the property, rounding up and reviewing title data, creating pro forma cash flow models – the list goes on. Not to mention requesting and analyzing borrower financials, all in different mediums.

You become a data aggregator, pulling in files, documents, and data from numerous sources and saving them all into a shared folder, drive, or system where they come together and start to make some sense. All of this work ultimately culminates in a final Excel model and a credit report/memo drafted in Microsoft Word. This is a painful process, and no matter how good you are at it, it is by its nature prone to errors.

“You become a data aggregator, pulling in files, documents, and data from numerous sources and saving them all into a shared folder, drive, or system where they come together and start to make some sense.”

An underwriter’s true job is to come to a binary decision as accurately as possible: are we funding this deal or passing on it? That’s it. But don’t get me wrong – underwriters do not have a simple job, even if their end goal is to make a decision one way or the other. And the “noise” of the process described above doesn’t help them get there; in fact, I think it mostly distracts from that goal.

I think we’re at a bit of a technological crossroads. Advancements in artificial intelligence have made it possible to hand the more monotonous and time-consuming parts of a CRE underwriter’s job over to computers – tasks like parsing borrower documents, spreading financials, and searching for comps. This makes underwriters much more efficient and, frankly, frees them up to review each deal more analytically: to use their vast experience to engage in a strategic conversation about a deal and whether it should happen, which is what they do best.

If we hadn’t developed this approach at Blooma, someone else would have, because its time has come. The only question now is how quickly lenders and brokers will embrace it. I’m excited to be working with all of the lenders and brokers that already have. It feels good to make the jobs of CRE professionals a little more enjoyable by helping them transition to a newer way of doing things. It wasn’t that long ago that I was one of them.

Your Computer Gives You an A for Effort Today

Using AI to help you make decisions, not make decisions for you.

I demonstrate our product to many people, so I get to hear a lot of feedback and questions. One of the most potentially confusing aspects of what we do is using artificial intelligence and machine learning to generate a “score” for a certain commercial real estate property or proposed deal. It’s sometimes hard to explain what this really means, so I’ll try again here.

People sometimes think of the scores awarded to an Olympic gymnast after a routine – we’ve all wondered about what can appear to be an arbitrary system of judging a performance. But we have to remember that this kind of interpretive scoring is not what is happening in the world of artificial intelligence.

When we engage with a CRE lender, our AI technology learns their own specific ideal lending profile. Over 3,000 data points are summarized into a “score” for them. The real value is in the automated collection and analysis, of course, and the score itself that is generated at the end is really just an interpretation. A computer can keep track of 3,000 data points and see the result in the data, but a person needs it to be streamlined and summarized. The artificial intelligence doesn’t start out with any idea of what is a good or bad deal, it simply compares all of the data to the lending profile that was preset by the user. By learning what kinds of deals you like, in other words, it can tell you how good a new deal should look to you. Our different customers might score the same transaction in wildly different ways, and perhaps they should.  
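To make the idea concrete, here is a minimal, hypothetical sketch of profile-based scoring. This is not our production model – the data points, ranges, and weights below are invented for illustration – but it shows the basic shape of the approach: a deal earns credit for landing inside the ranges a lender has said they prefer, weighted by how much each data point matters to that lender.

    # Illustrative sketch only -- not Blooma's actual scoring model.
    # Field names, ranges, and weights are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ProfileCriterion:
        low: float      # lower bound of the lender's preferred range
        high: float     # upper bound of the lender's preferred range
        weight: float   # how much this data point matters to this lender

    def score_deal(deal: dict, profile: dict) -> float:
        """Return a 0-100 score: how well the deal fits this lender's profile."""
        total_weight = sum(c.weight for c in profile.values())
        fit = 0.0
        for name, criterion in profile.items():
            value = deal.get(name)
            if value is None:
                continue  # a missing data point simply contributes nothing
            if criterion.low <= value <= criterion.high:
                fit += criterion.weight  # inside the preferred range
            else:
                # partial credit that decays with distance from the range
                nearest = criterion.low if value < criterion.low else criterion.high
                span = max(criterion.high - criterion.low, 1e-9)
                fit += criterion.weight * max(0.0, 1 - abs(value - nearest) / span)
        return 100 * fit / total_weight

    # A hypothetical profile with three of the thousands of possible data points.
    profile = {
        "loan_to_value": ProfileCriterion(0.50, 0.70, weight=3),
        "debt_service_coverage": ProfileCriterion(1.25, 2.00, weight=3),
        "occupancy_rate": ProfileCriterion(0.85, 1.00, weight=1),
    }
    deal = {"loan_to_value": 0.75, "debt_service_coverage": 1.40, "occupancy_rate": 0.92}
    print(round(score_deal(deal, profile), 1))

Two lenders with different profiles would score this same deal differently, which is exactly the point.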

“You may think you know all of the factors that determine whether something fits your ideal profile, but sometimes the AI gets to know them even better than you do.”

Oftentimes, our customers want the score explained to them. This makes sense – if your child came home with a C on their report card, you might ask why it is a C rather than a B (or a D). The score itself doesn’t tell the whole story. This is one reason we use the type of machine learning that we do: so that we can go back and explain every data point that led to a conclusion. In other words, we can tell you exactly why your child got a C instead of an A. Some deep learning models (neural networks) don’t readily allow you to go back and parse out where each piece of data that contributed to the result originated. That kind of opacity would obviously not work in regulated environments such as banks, where an audit trail is always necessary.
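As a rough illustration of what “explainable by construction” means – this is not the model we ship, and the feature names are hypothetical – an additive model lets you read each data point’s contribution to a prediction directly off the fitted coefficients:

    # A minimal sketch of per-feature explanations, using synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    feature_names = ["loan_to_value", "dscr", "occupancy"]   # hypothetical inputs
    X = rng.normal(size=(500, 3))
    y = (X @ np.array([-1.5, 2.0, 0.8]) + rng.normal(size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def explain(deal_row: np.ndarray) -> list[tuple[str, float]]:
        """Per-feature contributions (in log-odds) for one deal, largest first."""
        contributions = model.coef_[0] * deal_row
        ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
        return [(name, round(float(c), 3)) for name, c in ranked]

    print(explain(X[0]))   # which inputs pushed this deal's score up or down

That ranked list is the audit trail: for any score, you can point to the specific inputs that moved it.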

Finally, there’s another advantage to teaching AI what you like and then letting it score opportunities for you. You may think you know all of the factors that determine whether something fits your ideal profile, but sometimes the AI gets to know them even better than you do. Data doesn’t lie, and I have definitely seen an AI system “teach” users that there are certain patterns they are looking for (or avoiding) that they weren’t even aware of. For example, AI might look at a large number of previous loans and find patterns in the data that led to late payments or even loan default – patterns you might never have noticed. You can then use that insight to update your lending profile. It can be disconcerting to learn that the way you’ve been articulating your ideal profile might not be as accurate as you thought, but those of us who are open to learning something new can benefit from it greatly.
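A toy example of that kind of discovery, using made-up loan data: slice a historical loan book along a factor and see whether defaults cluster somewhere you hadn’t been paying attention to.

    # Hypothetical illustration of the data "teaching" you something.
    import pandas as pd

    loans = pd.DataFrame({
        "property_type": ["retail", "office", "multifamily", "retail", "office",
                          "multifamily", "retail", "office", "multifamily", "retail"],
        "loan_to_value": [0.72, 0.61, 0.68, 0.80, 0.55, 0.65, 0.78, 0.60, 0.70, 0.82],
        "defaulted":     [1, 0, 0, 1, 0, 0, 1, 0, 0, 1],
    })

    # Default rate by segment -- the kind of pattern a lender may not have noticed.
    print(loans.groupby("property_type")["defaulted"].mean())

    # And by LTV band: in this invented book, every default sits above 70% LTV.
    loans["high_ltv"] = loans["loan_to_value"] > 0.70
    print(loans.groupby("high_ltv")["defaulted"].mean())

In practice the patterns are subtler and span many more variables, but the principle is the same: the data surfaces the rule, and you decide whether to fold it into your profile.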

Industry Adaptation and Privacy in the New Era of AI

How data security is changing everything.

In a 2018 collaboration between BCG and MIT, researchers found that when it comes to artificial intelligence, companies and organizations can be classified into four groups: Pioneers (18%), Innovators (33%), Experimenters (16%), and Passives (34%) (Ransbotham et al., 2018).

Pioneers are enterprises with an extensive understanding of AI tools and concepts, and embrace AI in significant ways. Innovators have a good understanding, but still display little actual application of AI in their business. Experimenters are using AI for their business, but without seeking an in-depth understanding of the AI methods. Finally, Passives lack both in-depth understanding and application of AI technology.

Interestingly, all four groups agree that AI will change their business model in the next few years. This means that sooner rather than later, AI applications will penetrate the entire corporate landscape. This will be important as AI becomes a pillar of competitiveness, allowing companies to get things done faster and more accurately and to reduce time spent on less desirable work.

“Private AI technology allows a company to use data to train high-performing models without exposing or sharing confidential data.”

In the past, laws around data privacy have hampered the evolution of AI applications in industries that deal with private data, such as sensitive or personal information; typical examples are the healthcare and financial sectors. In recent years, however, emerging technologies have enabled the secure use of private data to build and use AI. These private AI technologies are expanding both the number and the nature of novel AI applications and transforming the AI landscape. As a result, Pioneers, Innovators, and Experimenters in the financial sector have started to use AI on their private data.

Private AI technology allows a company to use data to train high-performing models without exposing or sharing confidential data. Although the field of private AI is still on the cutting edge, there are already a few notable methods, including differential privacy, homomorphic encryption, federated learning, and data anonymization. Federated learning, for example, is an advanced, state-of-the-art method used by Google and the healthcare industry (full disclosure: Blooma uses it too). It makes use of client data without the client ever having to share that data with anyone.

In a typical setup, a copy of the model is deployed to each client location and trained on the data there. After training, the model is sent back to a central location, where it is merged with the models from the other clients. After this merge, you end up with a model that has learned from data across all clients without that data ever being pooled in one central place. Federated learning, done right, ensures that no individual client’s raw data is ever exchanged. It allows a company to learn from confidential data while keeping that data secure and private.
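For readers who like to see the mechanics, here is a highly simplified sketch of federated averaging on a toy linear model. It is not Blooma’s implementation – real deployments add secure aggregation and other protections – but it shows the core loop described above: each simulated client trains locally on its own data, and only the model weights travel back to be averaged.

    # Toy federated averaging: raw data never leaves the "client".
    import numpy as np

    def local_train(weights, X, y, lr=0.1, epochs=5):
        """Train a simple linear model on one client's private data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
            w -= lr * grad
        return w

    def federated_round(global_weights, client_datasets):
        """One round: send weights out, train locally, average the results."""
        local_models = [local_train(global_weights, X, y) for X, y in client_datasets]
        sizes = np.array([len(y) for _, y in client_datasets], dtype=float)
        # Weight each client's model by how much data it trained on.
        return np.average(local_models, axis=0, weights=sizes)

    rng = np.random.default_rng(1)
    true_w = np.array([2.0, -1.0])
    clients = []
    for n in (40, 60, 100):                      # three clients; data stays put
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        clients.append((X, y))

    w = np.zeros(2)
    for _ in range(20):                          # several federated rounds
        w = federated_round(w, clients)
    print(w)                                     # approaches [2.0, -1.0]

The central server only ever sees model weights, never the underlying records – which is what makes the approach workable with confidential financial data.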

Meanwhile, the introduction of AI into the corporate landscape has led to changes in legislation that continue to develop. The General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two statutes that have recently come into play in the attempt to promote and regulate data privacy and security in the European Union and California, respectively. Additionally, Brazil passed its own General Data Protection Law (LGPD), which came into effect in 2020. According to Richard Koch, Managing Editor of GDPR.EU, these emerging laws share some common principles, including the importance of defining personal data and certain fundamental rights for data subjects (2018). Private AI allows us to obey those laws while still capitalizing on private data.

Whether you consider yourself or your business closer to the Pioneer or the Passive end of the spectrum, your data is already part of this changing landscape, and security will continue to be the driving factor in the future evolution of private AI technology. Despite the many differences between the statutes mentioned above, we can only assume this is just the beginning of a movement toward more legislation governing data privacy.