We are living in an era driven by algorithms, and more specifically deep learning algorithms, which are beginning to pervade, and potentially intrude upon, every facet of our personal and professional lives. When algorithms begin playing a commanding role in our everyday choices, clothes, shoes, movies, music, content, jobs and more, and start dictating what is best for us and what is not, we have to concede that we are already in the midst of an algorithm-driven enlightenment, or intrusion, depending on which camp we want to be in.

Consider their application in myriad esoteric use cases: sorting and grading cucumbers, creating movie trailers, writing news articles, measuring the productivity of cows, predicting which students are likely to drop out, optimizing soil nutrient levels, and many more. Add to that the battle for supremacy against humans on the ‘senses’ dimension, vision, speech and text, and these algorithms are proving their mettle in every walk of human life. So it would seem high time that we bow to the power of these algorithms and let ourselves be led by them in this algorithm-driven insights economy.

The driver behind this algorithm-driven insights economy is machine learning, and more specifically a specialized branch of machine learning called deep learning. Fueled by exponential growth in data and compute power, combined with increasingly sophisticated algorithms, deep learning is unleashing its potential across all human endeavors. Neural networks and their variants, such as convolutional neural networks and recurrent neural networks, seem to be creating a renaissance in productivity gains that had been tapering off over the past few decades. No sphere of life seems untouched. Further fueling the frenetic activity in this area is the fast-paced evolution of deep learning platforms and open-source libraries such as Theano, TensorFlow, Torch, Caffe and scikit-learn, which are effectively democratizing access to this very potent world of algorithms.

Leave aside organizations like Google, which acquired DeepMind for more than half a billion dollars and has already recouped that investment through energy savings across its worldwide data centers, not to mention beating the reigning world ‘Go’ champion. For the majority of organizations, the transition to a deep learning-driven algorithmic world will not be as smooth.

Deep learning still requires a considerable leap of faith to go mainstream and be wildly successful. There are multiple issues to contend with before we realize the transformative impact that these deep learning algorithms profess to deliver. So in effect, the key question we must ask is: ‘How deep is your love when it comes to deep learning algorithms?’ Let’s explore a few roadblocks standing in the way of our embrace of deep learning.

The black-box problem: Deep learning algorithms deliver their outcomes with extreme opacity, owing to the enormous complexity of the processing behind them. It is practically impossible to peer under the hood and discern the ‘why’ behind the results these algorithms churn out. This may not matter in trivial situations, but for many of the situations these algorithms are applied to, it is a big issue. In some domains, regulations will not even permit basing decisions on these algorithms unless the recommendations can be justified. If these algorithms are to gain widespread acceptance, we will have to peel back a few layers of the onion and explain the ‘why’ behind the ‘what’. There are some early attempts in play to address just that, but we are still far from unraveling the secrets behind these beautifully concocted algorithms.
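
As one illustration of what ‘peeling back a layer’ might look like in practice, here is a minimal sketch of a gradient-based saliency map, one of the simpler explainability techniques. It assumes a hypothetical PyTorch image classifier `model` and a single input `image`; it offers only a rough, per-pixel hint of the ‘why’, not a full explanation.

```python
import torch

def saliency_map(model, image, target_class):
    """Rough per-pixel influence scores for one prediction (a crude 'why')."""
    model.eval()
    x = image.unsqueeze(0).clone().detach().requires_grad_(True)  # add batch dim, track gradients

    scores = model(x)                   # forward pass: one score per class
    scores[0, target_class].backward()  # backpropagate the score of the class of interest

    # The gradient magnitude w.r.t. each pixel is a crude measure of how much
    # that pixel influenced the chosen class score; take the max over channels.
    return x.grad.detach().abs().squeeze(0).max(dim=0).values
```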

The bias problem: Algorithms also have an inherent tendency to perpetuate biases. If the hand the algorithm has been dealt, that is, its training data set, has an intrinsic bias built in, the algorithm will learn that bias and propagate it further. Plenty of concerns have been raised about biased algorithms, and some have been accused of discriminating against certain groups of people in university admissions, credit decisions and more. This bias factor could stand in the way of these algorithms serving the big-picture vision of an equitable world.
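
To make the bias point concrete, here is a minimal sketch (with entirely hypothetical decisions and group labels) of one rough check that can surface it: comparing selection rates across groups and computing the disparate-impact ratio.

```python
import numpy as np

def disparate_impact(decisions, groups):
    """Ratio of the lowest group selection rate to the highest (1.0 = parity)."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical example: a credit model approves group "A" far more often than group "B".
decisions = np.array([1, 1, 1, 0, 1, 0, 0, 1, 0, 0])  # 1 = approved
groups    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

ratio, rates = disparate_impact(decisions, groups)
print(rates, ratio)  # e.g. {'A': 0.8, 'B': 0.2} and a ratio of ~0.25, well below the common 0.8 rule of thumb
```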

The flying solo problem: We are still a few decades away from realizing artificial general intelligence, where machines would match or exceed overall human intelligence. Until we get there, we will have to contend with deep learning solutions confined to a narrow space, excelling in that single dimension but failing to consider cross-system dependencies. There are definitely great advancements being made in transfer learning, which applies knowledge learned in one domain to other domains, and work is also happening on concepts like progressive neural networks, which attempt to connect multiple deep learning systems together. As things stand today, though, deep learning still remains by and large a flying solo gig.
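
To ground the transfer learning point, here is a minimal sketch of the most common recipe: reuse a network pre-trained on one domain (ImageNet) as a frozen feature extractor and train only a fresh head for a narrow target task. The number of target classes is hypothetical, and newer torchvision releases replace `pretrained=True` with a `weights=` argument.

```python
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 5  # hypothetical, e.g. five crop-disease categories

# Start from a network trained on ImageNet (the source domain).
model = models.resnet18(pretrained=True)

# Freeze the pre-trained layers so their knowledge is transferred, not overwritten.
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for the new (target) task; only its weights
# will be updated when we fine-tune on the target data.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)
```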

The adversarial problem: From the early days of ImageNet and the ‘aha’ moment of being able to positively identify cats after seeing millions of cat pictures, we have now reached a stage where computer vision matches or exceeds human accuracy on benchmark image-recognition tasks. But what is a boon can ever so easily turn into a bane. The same precision that helps machines excel at vision can just as easily trip them up: research has shown that small changes to images, imperceptible to the human eye, can cause machines to misclassify or mislabel them. The systems that run these algorithms should of course be protected against adversarial interventions, but the potentially catastrophic impact of an adversarial attack in mission-critical settings still needs to be considered.
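
Those ‘small changes imperceptible to the human eye’ have a canonical form: the fast gradient sign method (FGSM). Here is a minimal sketch assuming a hypothetical PyTorch classifier `model`, an input `image` and its integer class `label`; `epsilon` bounds how large (and hence how visible) the perturbation is.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Nudge every pixel slightly in the direction that increases the loss."""
    x = image.unsqueeze(0).clone().detach().requires_grad_(True)
    target = torch.as_tensor([label])          # class index as a 1-element batch
    loss = F.cross_entropy(model(x), target)
    loss.backward()

    # A tiny step along the sign of the gradient is often enough to flip the
    # predicted class while leaving the image visually unchanged to a human.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach().squeeze(0)
```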

The production problem: So we have finally crossed the hump, or at least averted the issues above: we have built a fantastic deep learning model, trained it on millions of records, proofed it on a robust test set, performed well on all the metrics that matter, such as precision, recall and accuracy, and we are raring to go. Getting it deployed to the production environment to start generating immediate business insight and value should be a breeze, right? Wrong! The real job begins now: integrating these algorithms and weaving them intricately into our production landscapes. Many things need to be considered, including infrastructure, technology architecture compliance, business process management and integration, change management, and potentially even a constantly changing context. The production environment is where the rubber meets the road, and it is where deep learning algorithms will either prove their worth or fritter away into obscurity, proving the skeptics right.
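
As one small, concrete illustration of that ‘constantly changing context’, here is a minimal sketch of an input-drift check a production wrapper might run: it compares the live distribution of a single feature against the training distribution with a two-sample Kolmogorov-Smirnov test. The feature, data and threshold are all hypothetical.

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted(train_values, live_values, alpha=0.01):
    """Flag a feature as drifted when live data no longer resembles training data."""
    _, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha  # a low p-value suggests the distributions differ; time to retrain?

# Hypothetical usage: the model was trained when typical order values were lower.
train_order_value = np.random.normal(50, 10, size=10_000)
live_order_value = np.random.normal(65, 12, size=1_000)
print(drifted(train_order_value, live_order_value))  # very likely True
```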

This is just a small sampling of the real issues we need to confront for deep learning to prove its worth and deliver on the transformative potential it has to offer. In all earnestness though, we are past the infatuation phase with deep learning and have entered the courtship phase. Given the scale of the challenge, it will really take a world - not a village - of key stakeholders across government, academia and industry for our courtship to blossom into love and, eventually, deep love with deep learning!