So Jim, do you sometimes feel like you can’t keep up with all the advancements? Do you think we have a grasp of everything that’s happening with AI at all times?
So with AI, you actually do hear about the major advances, precisely because everyone keeps talking about it. And that’s still the case now, though there’s still a long way to go. But the reality is that AI is getting to the point where it is very useful in many practical kinds of applications.
What are the most common problems when doing an AI project (particularly in the NLP field)?
One of the most common problems with AI projects is: what happens when the algorithm doesn’t work? And that’s a very common situation. Somebody who doesn’t really understand AI will take an off-the-shelf library and try something, and that’s something we strongly encourage. But what happens when it doesn’t work as you expect? AI is quite tuneable, and it requires tuning, but with so many parameters, which ones should you tune? Or could you perhaps have the wrong algorithm? Or need more data? Is the data you have perhaps dirty? Or maybe you just had an unreasonable expectation of what you’re able to achieve? If you don’t have an understanding of how the technology works, you’re kind of dead in the water.
The most effective AI practitioners are those who understand enough about the technology to know which way to go next, much like a detective trying to reconstruct what actually happened after the fact. You have to be at least intellectually curious enough to figure out the most logical way to approach the problem.
Essentially read the output data and try to trace it back in order to figure out how to retune the algorithm, right?
Yes, there is a lot of experimentation that has to happen. In a typical outsourced software project, you estimate that something is going to take three weeks and it actually takes maybe three weeks and one day – that’s pretty precise. With AI there are more question marks, because it’s a combination of the data, the algorithms, and the performance of those algorithms on the computing infrastructure, all of which affect how quickly you can achieve results. How quickly you get there really depends on how effective the AI engineer is.
Deep learning seemed to get a lot of buzz in recent years. Is that something that is used widely in modern AI projects?
So deep learning and AI are practically synonymous these days, with deep learning being another term for the use of neural networks in machine learning. There’s a lot of debate about what each of these terms means, but we should consider both as the new advances, and the only reason anybody’s talking about the field at this point. If you look back before 2012, when these innovations started to happen, even people who knew about machine learning weren’t that excited about working in the space – frankly because it was a boring topic back then.
But now everything’s changed. There are companies that have literally hired away entire university research departments, bringing whole groups of researchers and students into projects to develop things like autonomous cars – that’s the level of investment. And this has only been going on for the last seven years, so things are still new.
What do you think the next big thing in AI is going to be in a year or two?
Oh, I don’t know about the next year or two; there are still so many unsolved problems. There are recent innovations, such as how a model trained on certain problems can actually be reused on other, closely related problems. But we’re still very much at the beginning, and how quickly we get there, or anywhere, is completely unpredictable. If you think about it, there was an innovation back in the 1950s that started it all. The next one came in 1975, followed by another in the mid-90s with convolutional neural networks for handwriting recognition. Since 2012 there have been quite a lot more AI innovations, but mainly because so much money has been poured into the field.
Now here are a couple of problems that we need to solve. Let’s say I asked you: remember back in fifth grade, when you and your best friend had a fun experience? To follow what I said, you had to go through your memory banks. You had to associate, and think about an event that actually happened many years ago yet that you’re still retaining. That thought was just dormant in your mind for many years, right? Recalling it is a hard problem within computer science, because your memory is not a relational database; it’s not something you can store and then search on “my best friends”. Your mind doesn’t store information in table form; it stores it in ways that let you later bring it back to the forefront of your brain as you recall it. So when you were asked, you retrieved experiences that were embedded long ago and can describe them now. Computers can’t do that.
Memory networks are probably one of the hardest problems to solve. Right now, the state of the art is to spend maybe two or three days training a very sophisticated model that handles one problem. Recalling different types of information, retrieving a history, or storing data that is older than the specific snapshot it’s tied to – these are all very hard problems to solve.
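To make the contrast with a relational database concrete, here is a minimal sketch of the kind of associative lookup a memory network performs: recall is a soft, similarity-weighted blend over stored memories rather than an exact key match. The vectors and “memories” below are entirely invented for illustration; real systems learn high-dimensional embeddings from data.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy "memories": each has a key vector (what the memory is about)
# and a value vector (what gets recalled). All numbers are illustrative.
keys = np.array([
    [1.0, 0.0, 0.0],   # memory cued by "best friend"
    [0.0, 1.0, 0.0],   # memory cued by "fifth grade"
    [0.9, 0.8, 0.1],   # memory combining both cues
])
values = np.array([
    [0.2, 0.1],
    [0.1, 0.3],
    [0.9, 0.9],
])

def recall(query):
    # Attention: score the query against every stored key, turn the
    # scores into weights, and return a weighted blend of the values.
    weights = softmax(keys @ query)
    return weights @ values

# Cue combining "best friend" and "fifth grade" pulls up the memory
# that matches both, without any exact table lookup.
query = np.array([1.0, 1.0, 0.0])
memory = recall(query)
```

The point of the sketch is that there is no row to `SELECT`: the query only resembles the stored keys, and the closest memory dominates the recalled value.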
I think in the next five years you’re going to see advances in things like autonomous cars with reinforcement-learning aspects. Companies like Tesla are already introducing such features, but self-driving cars are still in their infancy: Teslas are only able to drive autonomously on highways, and they can only perform a handful of operations – speeding up, slowing down within the lane, and changing lanes, which was a big innovation about three or six months ago.
I’ve found their accident prediction to be particularly fascinating as their reaction time and calculation of “what might happen” seem to be much faster than ours.
Yes, and that is part of the reinforcement learning. The key technique behind it is basically that you take multiple steps and then weight how successful those steps were at accomplishing something. So, for example, if you kept turning your steering wheel to the left and then crashed into something, you would learn not to do that, at least in that particular situation. Now, if you did a lot of reinforcement learning to learn to operate in these types of scenarios, you’d be able to mimic successful behavior. But successful mimicry and judgement are completely different things.
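The “weight how successful the steps were” idea can be sketched as a tiny tabular value update over the steering example above. The state, actions, and rewards are made up purely for illustration; production systems use far richer state and deep function approximation.

```python
import random

# Toy reinforcement-learning sketch: choosing a steering action.
ACTIONS = ["left", "straight", "right"]

def reward(state, action):
    # Invented reward signal: turning left with a wall on the left crashes.
    if state == "wall_on_left" and action == "left":
        return -10.0   # crashed
    return 1.0         # made progress

alpha = 0.5  # learning rate
Q = {("wall_on_left", a): 0.0 for a in ACTIONS}

random.seed(0)
for _ in range(200):
    a = random.choice(ACTIONS)                 # explore actions
    r = reward("wall_on_left", a)
    # Move the action's estimated value toward the observed outcome.
    Q[("wall_on_left", a)] += alpha * (r - Q[("wall_on_left", a)])

# After enough trials, the learned values steer the agent away
# from the action that led to crashes in this situation.
best = max(ACTIONS, key=lambda a: Q[("wall_on_left", a)])
```

This captures the mimicry point from the interview: the table encodes what worked in situations the agent has seen, not any judgement about situations it hasn’t.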
So when a human reaches an intersection and there are three cars waiting there, usually the human will make facial expressions or use body language with the other drivers, and you’ll have some way of negotiating who goes through the intersection next. These are human patterns, and no car understands how to do that – nor is any car likely to be able to do it in any reasonable amount of time. Part of the reason is that there are so many edge cases associated with driving. It would help if the roads were designed better or had more standardization, but they don’t; no city does that. And as you try to standardize, the infrastructure costs become very, very high, and you’re still going to have many edge cases that autonomous vehicles will not be able to address.
Not to mention the fact that with any neural network there is always a way to trick it. With a fully functional autonomous vehicle based on current technology, you could potentially – and I’m not suggesting people should do this – take a building and paint it in pixelated white and black, and the vision system in the car would recognize it as a road. That’s a little bit scary, because someone out there is going to do that, and there’ll be lots of people autonomously driving into walls. That’s kind of a worry at times.
What industry do you think will play the most crucial role in driving technologies such as NLP forward?
The exciting thing about the NLP space is that we’re talking about language – many languages, in fact. We’re starting to see devices that can do close to real-time translation between languages, which is absolutely remarkable. Now, that’s mimicry too: we know that given enough patterns and signatures, you can encode certain types of language into a language-independent space and then translate out into other languages, which is again a remarkable innovation in artificial intelligence.
But we’re also trying to represent language in context, and with innovations like word vector representations, as well as different ways of representing sentences and paragraphs, you’re actually able to capture more of the meaning of things. That’s what drives a lot of the innovation on the legal-text side. It lets you at least constrain the problem: when we’re talking about legal tech, we’re talking about a smaller domain of problems related to legal jargon. We’re not talking about restaurant reviews versus TV reviews or other types of text, but rather a better way to represent legal language without leaving out any crucial information.
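A minimal sketch of what word vector representations buy you in a constrained domain: terms that occur in similar legal contexts end up close together, so similarity becomes a geometric computation. The 3-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions and are learned from a corpus.

```python
import math

# Toy embeddings: legal terms cluster together, away from
# out-of-domain words. All numbers are made up for illustration.
vectors = {
    "plaintiff":  [0.9, 0.8, 0.1],
    "defendant":  [0.8, 0.9, 0.2],
    "restaurant": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: angle between vectors, ignoring magnitude.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

legal_sim = cosine(vectors["plaintiff"], vectors["defendant"])
cross_sim = cosine(vectors["plaintiff"], vectors["restaurant"])
```

Because the legal domain is narrow, the vocabulary and its geometry are far more predictable than in open-domain text like restaurant reviews, which is exactly the constraint the interview describes.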
Wow, that was super insightful. Thanks so much for taking your time and talking to us.