Notes in times of quarantine

Like many people in the world today, I’ve found new time for myself. Dark, grey skies and rain hang over Philadelphia, brooding with an uncertainty and unease that mirror the feelings of its population.

I’ve been reading Will Durant’s Caesar and Christ, from The Story of Civilization; by turns admiring, disgusted, amused and elated by the lives of the great people who lived before us. With this knowledge comes the realization that we live in a vast universe of expansive and never-ending action. And within it all, here I am with my puny agendas and dreams.

It’s humbling.

Through this humility are born kernels of thought that I wish to share here. One: that the modern world, broken in more ways than one, can be recouped by a silent revolution of Christian communities living in solidarity with everyone else. Two: that we carry too much material baggage, both in the physical sense and in our psychological attachment to it. Three: that there is virtue and value in seeking to live a quiet life.

The first realization is partly inspired by a talk by Dr. Peterson (below), which essentially says that the world is a complicated place, and that if you want to change it you should start by changing your habits, attending to your family’s needs and struggles, and slowly growing outward to your surrounding communities.

But it doesn’t stop there. Once the individual is put in order, he has the capacity to create a family that follows that order. Thus it’s not only one person acting in pursuit of virtue but two, then three, maybe four or more. The family can then grow together and form bonds with similar families, creating a local culture that can support itself and regenerate the broader culture along the way. I previously used the word solidarity because I don’t believe this change can happen in isolation from the rest of the culture, but rather in accompaniment with it. When yellow paint is poured over blue paint, it changes the nature of the paint and becomes green. If the yellow paint had instead been poured into a separate bucket, the two paints would never intermingle: societies would live apart and never learn from one another.

I also mentioned that we carry too much material attachment. There is much I have to say about this, and I will probably delve deeper in another article, but suffice it to say that I believe our minds and hearts have been hijacked by the propaganda machine of modernism, which abhors beauty and spews out ugliness. What we do, what we use, what we create should be a reflection of our own selves. The human person is the most beautiful thing in the whole universe, and the product of our labor is a form of creation that makes the world a better place. It’s a curious thing to say in light of all that humanity has done to one another and to this planet. But I believe it’s true, because I’ve seen people do amazing things that bring more harmony and a greater sense of beauty to a place than nature alone was able to do. Does this mean that nature should be razed in favor of human inventions? No; I mean that nature and people can co-create together, with people adding to nature when they consider it prudent and wise.

I believe the third point follows from the previous two. Once we have a proper understanding of nature and our place in it, we can seek a place to make our own. One of the greatest Roman legends concerns Cincinnatus, a statesman who became dictator to defend Rome against its enemies; then, in the face of all the people applauding him and wishing to make him king, he retired from public life to live out his days on his farm.

There’s value in living in a city, where incomes are high, events and festivities are commonplace, and the individual is most free. But there’s also value in “flyover country”: land set apart from the busyness of urban life. The ruggedness and isolation (two things unfamiliar to us moderns) force people to work together to live and prosper. In precisely these conditions the spirit thrives, innovation is encouraged, and culture is established.

There are subtleties and caveats to everything I’ve talked about here. One must find the “middle way” that Aristotle described, not indulge in fancy ideals. If you’re interested in diving deeper into the warm oceans of thought I’m swimming in these days, check out the resources below. I’ll pray, dear reader, that you and your families may be blessed and kept healthy.

The Nicomachean Ethics by Aristotle:

Caesar and Christ by Will Durant:

Why Liberalism Failed by Patrick J. Deneen:

Intrinsic Valuation

The Banker and his Wife
Quentin Metsys, 1514

According to Professor Damodaran, the intrinsic value of a business is determined by the “expected cash flows on the asset over its lifetime and the uncertainty about receiving those cash flows.”

The professor calls this “discounted cash flow valuation.” What cannot be analyzed with this model? Things that don’t generate cash flows – e.g., a painting, a good book, anything whose value depends on the beholder. There are two ways to arrive at a discounted cash flow valuation:

  1. Discount the expected cash flows at a risk-adjusted rate. You estimate the expected cash flow in each period across all possible scenarios, and build the riskiness of those flows into the discount rate.
  2. Discount certainty-equivalent cash flows at the risk-free rate. Here you adjust the cash flows themselves for risk, then discount every period at the same fixed, risk-free rate.

These can be expressed as such:

Value of an asset = Σ E(CF_n) / (1 + r)^n

Value of an asset = Σ CE(CF_n) / (1 + r_f)^n

Where E(CF_n) is the expected cash flow in period n, r is the risk-adjusted discount rate, CE(CF_n) is the certainty-equivalent cash flow, and r_f is the risk-free rate.

This squares with common sense. What is the value of a business that never returns a positive cash flow? Zero. Likewise, a business that returns large cash flows while carrying little risk is likely to be very valuable. A company like Apple comes to mind.
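To make the first formula concrete, here’s a minimal Python sketch; the cash-flow figures and discount rate are invented purely for illustration:

```python
def dcf_value(expected_cash_flows, discount_rate):
    """Value of an asset = sum of E(CF_n) / (1 + r)^n, for n = 1..N."""
    return sum(
        cf / (1 + discount_rate) ** n
        for n, cf in enumerate(expected_cash_flows, start=1)
    )

# Hypothetical expected cash flows over five years, discounted at 10%
flows = [100, 110, 121, 133, 146]
print(round(dcf_value(flows, 0.10), 2))  # ≈ 454.22
```

Notice how a business with no positive cash flows really does come out at zero, and how a higher discount rate (more risk) shrinks the value.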

Now, how can we go about valuing companies based on these principles? We can look at their financial statements, specifically at their assets and liabilities.

Assets include the investments the company has already made, as well as the investments it hasn’t made yet (these represent the company’s expected future growth). Companies at different stages of growth will hold differing mixes of the two. For instance, P&G has a lot of investments already in place, but perhaps not as much growth potential as, say, Tesla.

Liabilities represent how a business funds its operations. It can do so with its own money (equity) or by taking on loans (debt).



Here’s where we face a fork in the road. There are two ways to value a business: by valuing its equity, or by valuing the entire firm. Simply put, equity valuation is concerned with the cash flows that are returned to investors. In public markets, these would be dividends (though the professor shows that you can measure a company’s equity value even without dividends, via its potential future dividends).

Valuing the entire business takes a broader view: you blend the cost of the equity and the cost of the debt owed to lenders into a single rate, the cost of capital. You then discount the cash flows to the business at the cost of capital and get the value of the entire business.

So we conclude with two ways to value a company: value the equity directly by discounting cash flows to equity at a risk-adjusted rate, or value the entire firm at the cost of capital and subtract the debt.
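The second route can be sketched in a few lines of Python. All the figures are hypothetical, and the weighting below includes the usual tax adjustment on debt, an assumption beyond what the text above states:

```python
def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """Blend the cost of equity and the after-tax cost of debt by weight."""
    total = equity + debt
    return (equity / total) * cost_of_equity \
         + (debt / total) * cost_of_debt * (1 - tax_rate)

def firm_value(cash_flows_to_firm, cost_of_capital):
    """Discount the cash flows to the whole business at the cost of capital."""
    return sum(cf / (1 + cost_of_capital) ** n
               for n, cf in enumerate(cash_flows_to_firm, start=1))

# Hypothetical capital structure and cash flows
r = wacc(equity=700, debt=300, cost_of_equity=0.09,
         cost_of_debt=0.05, tax_rate=0.25)
value_of_firm = firm_value([80, 85, 90], r)
value_of_equity = value_of_firm - 300  # subtract the debt to get back to equity
```

The last line is the fork in the road made explicit: firm value minus debt should agree, in principle, with valuing the equity directly.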

The more things change the more they stay the same

The title is a direct quote from the opening credits of the 2009 video game Call of Duty: Modern Warfare 2. No kidding. If my patient readers haven’t left the building already, I’d like to elucidate why I believe this rings particularly true for the younger crowds today when it comes to social media.

We, as a species, have been around for about 200,000 years. Agriculture started 10,000 years ago, the industrial revolution ~250 years ago and the first iPhone just over 12 years ago (now I feel dated).  Since then, Silicon Valley companies have hijacked our brains in a way never before possible, by creating algorithms and machine learning models that target our dopamine receptors in a very focused way. Just like the food we have available today, our bodies are not adapted to consume this kind of content. We drown in the good feelings it produces and keep asking for more.

And who doesn’t like that pleasuring vibration that comes from our phone every time somebody “likes” our picture? Who doesn’t enjoy the feeling of hitting “post” and appreciating it like a fresh batch of homemade cookies? The author of this post certainly did! What’s wrong with having our vanities praised in the presence of all to see? What’s so bad about clamoring for attention and obsessively coming back to your phone every 10 minutes to see how many more notifications of likes and comments are being made from your latest vacation trip photo album?

Like Icarus, we fly too high and get burned by the sun.

Now, I am an Econ major; I like to think of things in terms of supply and demand, opportunity cost, and the like. Given these facts, my reaction would be, “If it’s so bad for people to stay on social media, where’s the incentive for them to get off it?” The answer is quite complicated, and I haven’t fully worked it out. However, I would briefly like to sketch an answer from the perspective of relationships.

The incentive that social media provides is convenience. It’s easier to text someone than to call them, or even visit them. Some might even say it brings them closer to one another, which might very well be true, but that’s not the experience of the majority of people, I’d wager. The side effect of the convenience social media provides is isolation.

Now that we don’t have to be close to one another, we can all sit in our little holes, away from awkward interactions and inconvenient conversations with smelly, vulgar, strange people. Away from one another, we lose touch with one another. And we drift further apart. The end result is millennials being the most socially isolated generation ever recorded, with the fewest friends and the highest rates of mental disorders (according to a 2018 Intergenerational Foundation study).

I don’t think this is a secret among my colleagues.

Drawing on almost 10 years of personal experience in trying to create community from scratch, I think I have the answer to this plague, and I have boiled it down to a few key principles that I’ve seen in Church groups, college clubs, sports events, community service groups and countless other units.

The answer is coming soon, so stay tuned.

Should we be polite with our AI machines?

The short answer is no. You don’t have to say, “I’m sorry to bother you” to your iPhone before unlocking it, just as you wouldn’t excuse yourself to a dog before going to the restroom, much less to a sad cactus in your home office. The long answer, however, reveals something incredibly unique about our human nature and civilization itself.

I was mindlessly scrolling through my Twitter feed as the social media overlord has instructed me to do, when I came across Chaim Gartenberg’s article on The Verge that debated whether we should say “thank you” or “please” to our AI gadgets.

My first instinct was to think this ridiculous; but then I started remembering all those times I would say “thank you” to travel website chatbots, Siri, Cortana and God knows how many more AI devices out there.

Like Chaim, I’m only polite out of habit. But then it got me wondering: as AI improves, should we start being actually polite to our machines? There’s already a religion dedicated to it (founded by ex-Uber engineer Anthony Levandowski); does that mean we’ll all have to pay our respects to these super-smart machines in a not-so-distant future?

Just so we’re all on the same page, politeness comes from the Latin word politus, meaning refined, polished. Right from this definition, we can sense something different about politeness in people versus machines. You can certainly program a chatbot to be polite (I do). But that’s all the AI does: it acts on the parameters with which it has been programmed. I myself have to make an effort to be polite; I can forget, be lazy or simply not want to. A sufficiently advanced AI could observe how people behave with one another and emulate that behavior, thus learning to be polite from “experience.” With the addition of reinforcement learning, it could even learn with whom to be polite and how polite to be, depending on the person it’s interacting with.
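To see how thin that programmed courtesy is, here’s a toy sketch (the function, its wording, and the “polite” flag are all invented for illustration; no real chatbot is this simple):

```python
def reply(message, polite=True):
    """The bot's 'politeness' is just a parameter: it cannot forget it,
    be lazy about it, or decide on its own not to use it."""
    answer = f"Your request '{message}' has been processed."
    return f"Thank you for asking. {answer}" if polite else answer

print(reply("book a flight"))                # always courteous
print(reply("book a flight", polite=False))  # courtesy switched off instantly
```

Flip one boolean and the “manners” vanish, which is exactly the difference between acting on parameters and making an effort.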

But could the AI actually learn politeness? Can it come to the conclusion that it should behave with reverence towards a person?

I don’t think so.

Being polite to someone marks the person by distinguishing him or her with status. More than status, however, it sets the person apart from the unconscious savagery of humanity and lifts the person into the realm of civilization. Rousseau erred in believing that, left to his original, uncivilized state, man would flourish and evil would dissipate. Nothing could be further from the truth. Leave a man to his “natural state” and he will rape, steal and kill his way to survival. Humanity developed civilization precisely to escape this inner savagery. By being polite, we do the opposite: we give reverence to the other person, show a civilized fear, acknowledge their dignity. For an artificial intelligence to come to the understanding that a person has God-given rights, with an infinite value that cannot be grasped, is impossible.

Program your AIs however you’d like; interact with the machines however you deem fit; but don’t expect the machine to behave just like you, because it is not you. It’s a machine.

The Divergence: a response to Sam Altman’s The Merge

I usually leave the monster that is the internet alone and at a distance. It’s a dangerous place to speak your mind, because you never know if someone (or something) will bite back. However, after reading Y Combinator founder Sam Altman’s blog post on the emerging singularity, I couldn’t help but notice the unusually dark statements for a Silicon Valley technocrat to make. No words about “bringing the world closer together” or “making the world a better place.” Instead, the future of technology apparently has a more deathly tone.

There’s sense in some of the points he makes. Genetic engineering of human embryos is already happening, and the practice may very well continue through the 21st century. Whether it will continue into the 22nd is still a toss-up, for who knows what sort of monstrosity will be engineered by then that can still be called “human.” Machine interfaces will become increasingly invasive within our bodies, even if modern medicine has sought to do the opposite. I can also fully attest to the addictive qualities the internet has and how much it messes with our brains. Its effects have been thoroughly proven in science labs and at family dinners.

Mr. Altman describes the singularity as a topic you wouldn’t want to bring up at a dinner party. “It feels uncomfortable and real enough.” I agree with this too. I would find it extremely uncomfortable to tell my fellow partygoers how in just a few years they will be overtaken by a disembodied artificial intelligence that will wipe out humanity and establish itself as the dominant species. Not a great way to set the mood.

However, I still think it falls short from the world’s greatest one-liner: that God made himself a man, was crucified for humanity’s sins and rose from the dead. I’ve yet to find a more astounding claim than this.

There are varying opinions as to what the singularity is but I’ll stick to what outspoken investor and Microsoft co-founder Paul Allen has defined as “the accelerating pace of smarter and smarter machines [that] will soon outrun all human capabilities”. In his article, Sam states how “It is a failure of human imagination and human arrogance to assume that we will never build things smarter than ourselves.” Indeed, machines have already started their “worldwide domination”: “Our phones control us…search engines decide what we think.” It’s for this reason I believe he titled his article The Merge, since not only is the singularity real and forthcoming, it’s already here and taking over!

There’s a certain sense of the ridiculous that people who are satisfied with yesteryear’s smartphone feel when they hear about the singularity. The tick of the singularity seems to affect those closest to the event horizon of Silicon Valley, itself a singularity of enlightened thinking mixed with a hubris that not even Hawking could have foreseen. There’s something fantastical in the technocrat’s statements, something so alien and insane that the person who predicts these things must be either completely right or utterly wrong. The sheer audacity of their statements should make us either tremble at the potential fallout or wonder at the person’s sanity. I think the singularity is a serious issue to address, because the concepts on which it rests are practical and present in our daily lives.

Artificial intelligence is, I daresay, a beautiful tool that we can use to our advantage. It decides which YouTube video you watch next, and makes sure spammers don’t submit fake reviews on your restaurant’s profile. It’s also used in business and scientific research. Yet Sam, Paul and others are worried about AI becoming too smart for our well-being. In both definitions, “smarter” is the hinge that indicates how superior or inferior a machine is when compared to us.

According to them, “smartness” is the defining characteristic that separates my MacBook’s chess-playing AI from our future robot overlords. But such a distinction is meaningless: first because of the ambiguity of what “smart” means, and second because it compares a material object with a material-spiritual composite.

It’s common to call someone smart when he or she does well at school, gets high scores on an exam, or can recall a book word for word. These are essentially computational tasks: they require an input, a processing stage, and an output. This level of intelligence grows with one’s ability to abstract patterns and universals from particulars. A child learns that pointy things can hurt, or that red signs can signify danger. We’ve created AI that can do these things too (albeit to a lesser degree).

But a machine can never understand the higher sphere of intelligence which we inhabit. Say what you want about Google’s DeepDream or the plethora of algorithm-generated structures in contemporary architecture. I doubt any computer could produce a painting as mysterious as the Mona Lisa, or a building that elevates the soul like Notre Dame Cathedral. There’s that innate feature of humanity, a willingness to waste resources, waste time, even waste oneself away, to create something utterly useless yet essential for one’s spiritual survival. And therein lies a pivotal difference in Mr. Altman’s view of human intelligence versus computational intelligence: that it all boils down to deductive, logical reasoning caused by chemical reactions in our synapses. After all, as Paul Allen says, “an adult brain is a finite thing, so its basic workings can ultimately be known through sustained human effort.”

But can we be so sure that our intellectual capacity for the infinite can be housed in such a finite thing as our brain?

Many people forget that the scientific method is a philosophy. It’s a way of looking at the world through material causes and effects. It’s a wonderfully effective way of thinking about the world, but it’s not the only one, and certainly not the exclusive one. Any Marvel fans reading this might recall the scene where the Ancient One tells Dr. Strange: “All your life you’ve looked at things through a keyhole.” Observe how people who insist there is nothing (or no one) outside our material universe spiral into a spiritual fervor many religious people would envy. Famed Google and Uber AI engineer Anthony Levandowski has even founded his own AI-based religion, “Way of the Future.” A blatant plagiarism, of course, of a name that has been in use for two thousand years. My point is that a superior intellect residing in a machine created by man is illogical. Since such a “higher intelligence” is immaterial (and therefore not subject to time, since it cannot change by its very nature), it cannot be handled and thus manipulated. You cannot empty the whole ocean into a bucket.

The ultimate fear of the singularity is machines becoming self-aware and destroying their creators in the process. Can machines kill? Of course. People have been killed by falling into machinery or have had their hands cut off by a chainsaw. Can machines kill intentionally? There’s the rub, because to have intention requires a deliberate act of the will, and having free will requires the entity to have an understanding of itself and the possibility of either acting or not acting. Proponents of the singularity deem this possible since, as Paul Allen has stated, the human intellect is nothing but matter and therefore a biological organ whose capabilities can be replicated.

I am of the sort that believes the world is larger and weirder than any of us could dream of. I have good reasons, and enough life experience, to acknowledge that there is more to this universe than matter, and that our humanity cannot be reduced to a heart pumping blood to a brain sending electrical signals. That’s no basis for “certain, inalienable rights,” no justification for the inestimable value we place on a stranger compared to a dog. Indeed, no one puts a lump of coal in a vault; we recognize the special quality of humanity because, like a diamond, it shines with beauty and goodness. That is the sort of future I choose to believe in and one I am happy to live for. And future robot overlords? More like future robot servants.

“Millennial” is a meaningless concept

“We can all agree that Millennials are the worst,” proclaims Philip Bump in The Atlantic. As a “millennial” myself, there are few things I find more irritating than being labeled as one. Indeed, a poll already shows that “Most Millennials Resist the ‘Millennial’ Label” (oh, the irony!). Legions of marketing consultants/gurus/evangelists/futurists herald the coming-of-age of this group as a golden opportunity, a last chance, for corporate conglomerates to get in on the action. “Hurry or you’ll miss this once-in-a-lifetime chance to earn millions!” is at least a more honest proclamation of what we know is already a scam.

The absurdity of stereotyping a group that is now “the largest living demographic in the United States” is akin to saying that 50% of the people in the world are women. Yes, we know. They’re all around; can’t you see? The endless parade of clickbait headlines such as “U.S. and European Millennials differ on their views of fate, future,” “Millennials care about the environment” and “More than half of Millennials have shared a ‘selfie’” paints a more ambiguous image of this population than a Jackson Pollock.

It’s time to stop selling ideas on the basis of a conceptual demographic cooked up by a Harvard lawyer back when The Bangles’ Walk Like an Egyptian was not considered politically incorrect.

My advice to companies that rely too heavily on expensive third-party marketing research firms is simple: look around. There’s no need for a 54-slide deck of information you can get by simply walking over to your next-door neighbor and asking.

There’s a saying that people are not persuaded by reason but by emotion. There’s some truth to that. I do believe you need reasoned evidence to support your claims, but simply laying out statistics about people and weaving together a story more fragile than a dandelion is not the way to sell a product or service. What happens when the wind blows?

And don’t even get me started on the “Z generation”. Though kids are admittedly playing too many video games.

A walk to Chipotle

It’s noon, and the smell of lunchtime (beans, pork, grease) permeates downtown Buckhead. People from all backgrounds congregate at the hallowed hall of Chipotle on a corner intersection for physical (and, I daresay, spiritual) renewal. I walk in and stand in the long line behind the counter.

There’s a young, well-dressed man in front of me, and a moment later a joyful woman strides in with her two best friends. At first I couldn’t help but admire the young man’s well-tailored suit and trimmed hair. He was wearing a tan leather belt with a spotless buckle, a cuffed shirt and double-strap monk shoes, which of course matched his belt.

A true, walking stock photo.

Suddenly, my attention shifted to the loud talking of the woman behind me. She was a tall but delicately built black woman, with a pin on her business outfit that read “Cousins,” a logo right beside it. A humanitarian organization, judging by her warm smile and apparent social connections. She and her friends chatted the whole time I was in line. I wasn’t paying attention. Still, I chuckled inside.

It’s amazing how, with all the varieties of human experience, we are still drawn to the same things: power, money, love and of course, good food.

I devoured my burrito in a heartbeat. The heat of the Tabasco sauce briefly made me feel like I could breathe fire. Endorphins kicked in, tingling my brain with a satisfying sensation. How wonderful and blessed it is that modern industry can feed a whole population in a dazzling array of colors, tastes and combinations. We can safely assume that we’ll be fed today, and that’s a wonderful realization, one we’re prone to ignore, considering the history of humanity.

My parting words before I leave are these: let us rejoice and be glad, for indeed, we can all get along and sit at the same table together, as long as there’s good food, decency, and good cheer.

The Wheel of Fortune Turns for Everyone

Betrayed. Jailed. Abandoned. These were probably some of the thoughts running through Boethius’ mind as he awaited his eventual torture and execution under false pretenses. One of the brightest minds of his day, having climbed to the top of the political ladder, Boethius was denounced by his friends, whose false testimony branded him a traitor to the king; the king promptly had him jailed without due process.

Boethius loved astronomy, mathematics and the philosophy of the Greeks. Although he was a Christian, he was well known for his translations of Greek philosophy, which were used for almost a thousand years before the recovery of lost texts during the Renaissance. But this wasn’t on Boethius’ mind. He had served the king well and tried to live out a good Christian life; yet he was miserable, while the wicked reveled.

How could God permit such a thing?

During his time in his cell, he wrote a masterpiece called The Consolation of Philosophy, which tries to answer this question by embodying God’s wisdom as Lady Philosophy. She visits him in his cell and the two hold a dialogue, with Lady Philosophy explaining how small our minds are before the will of God.

In this passage, Fortune (embodied as a goddess) speaks to Boethius’ desire for justice. Her answer might not be what you’d expect.

When nature produced you from your mother’s womb,
I received you naked of all things and helpless,
kept you warm with my resources and,
whereas now it makes you impatient with us,
I brought you up under the easy favor of indulgence,
surrounded you with all the abundance and splendor
which are right for me.

Now it pleases me to withdraw my hand:
be grateful as for the use of another’s;
you have no right of complaining
as if you absolutely lost yours.
Why then do you groan?
No violence is brought against you from us.

Wealth, honors and the rest of such things are right for me.
The servants recognize the mistress:
they come with me; with my going away they depart.
Boldly I declare, if these were yours
which you bemoan are missing,
you never would have lost them.

–The Consolation of Philosophy, Book 2

Now compare this passage to God’s response to Job, who complained to God about his own misfortunes and demanded that infinite Goodness justify itself after all the evil that had befallen him.

Where were you when I laid the earth’s foundation?

 Tell me, if you understand.

Who marked off its dimensions? Surely you know!

Who stretched a measuring line across it?

On what were its footings set,

or who laid its cornerstone—

while the morning stars sang together

and all the angels shouted for joy?

–Job 38:4-7

A popular saying goes, “the wheel of fortune turns.” Sometimes good things happen to those who are good, and bad things to the wicked; and sometimes, just the opposite occurs. There’s no rhythm or pattern to the fortunes of our lives. God gives and takes away freely. There will never be a time for a person to say, “now I am content, all will be good with me”, or, “nothing good is going to come. I am cursed.” The wheel of fortune indeed keeps turning. There’s an excellent video by The Bible Project that explains this in much greater detail, and their animations are absolutely stunning. I’m putting it here for you to take a quick look:

We must watch ourselves not to get too hung up on the turns of this wheel, instead taking life as it comes, with a certain detachment from our earthly fortunes and trust in a more solid anchor than Lady Fortune. Thus does the book of Ecclesiastes fittingly end:

Now all has been heard; here is the conclusion of the matter:

Fear God and keep his commandments, for this is the duty of all mankind.

For God will bring every deed into judgement, 

including every hidden thing, whether it is good or evil.

Ecclesiastes 12:13-14

#Throwback to 400AD


I walked this morning to Starbucks, as I usually do on Sundays, and decided to take a break from reading the Lord of the Rings series. Much as I love Tolkien’s epic, sweeping tales, I wanted to go for something more intimate and philosophical – enter St. Augustine.

His book The Confessions has been a profound influence on my life. I think I first read him when I was 20, and I find him endlessly relevant to our times. Yes, who knew a 1,500-year-old book could compete with an AI chatbot? Below is an excerpt from Book 10, Chapter 27:


Late have I loved you,
O Beauty so ancient and so new,
late have I loved you!
You were within me, but I was outside,
and it was there that I searched for you.
In my unloveliness I plunged into the lovely things which you created.
You were with me, but I was not with you.
Created things kept me from you;
yet if they had not been in you they would not have been at all.
You called, you shouted, and you broke through my deafness.
You flashed, you shone, and you dispelled my blindness.
You breathed your fragrance on me;
I drew in breath and now I pant for you.
I have tasted you, now I hunger and thirst for more.
You touched me, and I burned for your peace.

I have no words to follow this poem. Anything I would dare to add would not do it justice. Thus, I’ll end this post here and provide an excerpt on St. Augustine where you can learn more about this titanic figure of Western civilization. I hope to find more of my Augustinian readers out there 🙂

Also, you might be wondering why I posted an image of a child laughing. I chose it because that’s exactly how I feel when reading Augustine: joyful and completely at ease. His life fits inside the mold of my soul. Reading him speak so eloquently brings me an extraordinary sense of pride in my faith, and the affirmation that every day of our lives we can build, brick by brick, the foundations of love that hold up the eternal City of God.


St. Augustine, born in Roman N. Africa to a devout Catholic mother and a pagan father, was a notoriously rebellious Catholic teenager who cohabitated with a girlfriend, joined an exotic Eastern cult, and ran away from his mother.

Augustine became a brilliant and renowned teacher of public speaking and was appointed by the emperor to teach in Milan, Italy, at that time the administrative capital of the Western Roman Empire. While there, he happened to hear the preaching of the bishop of Milan, Ambrose, who baptized him in 387.

St. Augustine ultimately renounced his secular career, put away his mistress, and became first a monk, then a priest, then the bishop of Hippo, a small town on the N. African Coast. The voluminous writings of this Early Church Father span every conceivable topic in theology, morality, philosophy, and spirituality. St. Augustine of Hippo is commonly recognized as the great teacher in the Western Church between the New Testament and St. Thomas Aquinas.  He died in AD 430.  (bio by Dr. Italy)

What I learned from building a neural network. Hint: the robots are coming!

I’m taking a developer certification for using IBM’s Watson AI, and one of the learning requirements is to understand the basics of artificial neural networks. In order to retain the information better and to understand the underlying processes, I decided to actually create a neural network, with the help of Stephen Welch’s excellent “Neural Networks Demystified” video series. You can see part one below:

I honestly did not expect it to be so complicated. Of course, it’s machine learning; it’s not supposed to be easy. But still, the number of equations needed to describe even the basics of a neural network was…out of my comfort zone, to say the least. Nevertheless, it was eye-opening. Artificial neural networks (ANNs) are a mathematical and programmatic representation of how neurons and axons work. I am not going to delve into the mechanics of it, but suffice it to say that these ANNs are the beginnings of a general artificial intelligence: one that can think, understand and display intuition.
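To make the idea concrete, here is a minimal sketch of the kind of network covered in Welch’s series: a two-layer feedforward pass in NumPy. The weights, sizes and inputs below are toy values I made up for illustration, not anything from the videos or from Watson.

```python
import numpy as np

def sigmoid(z):
    # squashing activation: maps any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# toy network: 2 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input-to-hidden weights
W2 = rng.normal(size=(3, 1))   # hidden-to-output weights

def forward(X):
    # each layer is just a matrix multiply followed by the activation
    hidden = sigmoid(X @ W1)
    return sigmoid(hidden @ W2)

X = np.array([[3.0, 5.0]])     # e.g. hours of sleep, hours of study
print(forward(X))              # a single value between 0 and 1
```

Training is where the real mathematics comes in: backpropagation repeatedly nudges `W1` and `W2` to reduce the error between `forward(X)` and the desired output.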


A demonstration of how Artificial Neural Networks mimic real, biological neurons. Source: InTech

The implications of this kind of technology are profound. It got me thinking about the economics of implementing such a system, only to realize that we are already in the midst of a global upheaval thanks to the introduction of machine learning algorithms.

In 2011, Marc Andreessen, an early investor in Facebook, Twitter, Pinterest, and many other Silicon Valley “unicorns,” wrote:

“Software is eating the world.”

His statement still holds true, but I’d change it slightly to say, “AI is eating the world.”

Unfortunately, the general public’s conception of AI is limited to Hollywood movies and is almost completely abstracted from the real-life implementations of this technology. Many are unaware of how much it has infiltrated their lives. You can attribute your Netflix binging and endless YouTube video watching to the power of machine learning algorithms providing you with “suggestions” and “recommendations.” These services profile your every move and every bit of your information to pinpoint your demographic and serve you content that statistically fits with other people like you.
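A crude illustration of that statistical matching (the ratings and users below are invented for this example; real recommenders are far more elaborate): find the viewer whose profile is most similar to yours, then suggest what they liked and you haven’t watched.

```python
import numpy as np

# toy ratings matrix: rows are users, columns are shows (0 = not watched)
ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 0],   # a user with taste much like yours
    [1, 0, 5, 5],   # a user with very different taste
], dtype=float)

def cosine(a, b):
    # similarity of viewing profiles, ignoring overall rating scale
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

me = ratings[0]
others = ratings[1:]
sims = [cosine(me, other) for other in others]
twin = others[int(np.argmax(sims))]        # the statistically closest viewer

# "recommend" whatever the closest viewer liked that you haven't watched
recommendations = np.where((me == 0) & (twin >= 4))[0]
print(recommendations)   # column indices of shows to suggest
```

In this sense you really are “just a statistic”: the system never needs to understand you, only to find other rows of the matrix that look like yours.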

Yes, in AI, you are just a statistic.


But AI does much more than that. Look no further than autonomous cars, self-running factories in China, and virtual assistants to see how this technology will seep into every industry of the market.

With such a powerful tool in our hands (quite literally), it is unfortunate that the labor market, and the institutions that feed into it, are not prepared for this transformational change. Most universities don’t have AI programs in place. Coding is still seen as the realm of engineers and nerds. Companies still run outdated operating systems like Windows Vista and use fax machines to exchange information. A large portion of the economy is simply lagging behind in its ability to change and adapt to an AI-based economy.

Now, this is not entirely their fault. Artificial intelligence is a very complex subject, as I initially discovered. It requires advanced mathematics, advanced programming experience and years of practice to develop an effective AI architect. The amount of resources invested to produce such a focused individual is akin to the training regimen of a special forces soldier. It takes a lot of time, energy and talent to produce this worker of the future. However, such a worker will become indispensable for the future economy.

AI is like having a self-replicating mind: another mind that does not need to be fed, does not sleep, does not complain, does not need health insurance, and is millions of times more powerful in mathematical computation than any person alive. It is in the nature of a capitalist society to employ such a tool if it deems it economically advantageous. It would be illogical not to.

But herein lies the crux of the matter: a small number of people will become extremely productive in the creation of wealth thanks to their use of AI, but what will become of everyone else?


Greater productivity is the holy grail of economics. It means the country can produce more, for less, at a faster pace. Global productivity exploded after the industrial revolution, thanks to industrial machines. Then, it sharply increased again with the advent of computational machines. Now, it’s due for another increase with the advent of commercial artificial intelligence. Here are three reasons why I believe the rise of AI is bad news for the global labor market:

1) Job replacement will happen faster than job creation

2) Productivity will be focused in a corporate oligopoly

3) “Enormous Data” will provide these companies a competitive advantage over the rest of the market



Job Replacement

This is a big one, especially since it’s become so politicized in the last couple of months. Jobs have always been replaced by the arrival of newer technologies. When the car became mass-produced, the horse-carriage industry (the traditional mode of transportation for centuries) went into irreversible decline. However, the collapse of this industry was offset by an even greater upswell of economic wealth created by the car: stables gave way to gas stations, horse drivers became valets, streets needed to be paved, cars needed to be maintained, and manufacturing increased to keep up with demand. Thus, older technologies are usually supplanted by newer ones thanks to the new jobs those technologies create.

In an article for MIT Technology Review, Joel Mokyr, a leading economic historian at Northwestern University, commented on the increasingly fast pace of disruption:

The current disruptions are faster and more intensive…It is nothing like what we have seen in the past, and the issue is whether the system can adapt as it did in the past.

He further states that jobs built around automatable tasks -usually reserved for the lower classes of workers- will be the most susceptible to this change. If these workers are to keep their jobs and adapt to the new AI economy, they must obtain a degree in computer science or a similarly technical field, as well as a specialization in whatever field they will be working in. This kind of education is expensive, and it falls within the responsibility of the government to fund their re-education. These blue-collar workers usually do not have the resources to pay for a college education. If the government doesn’t help them, they simply won’t be able to re-educate themselves for the changing needs of the market and will fall into poverty. David H. Autor supports this view in his piece for the Journal of Economic Perspectives, “Why Are There Still So Many Jobs? The History and Future of Workplace Automation.” He argues that, due to the rapidly changing dynamics of the AI economy, job displacement will rise significantly if education programs for low-skilled workers do not take place:

…human capital investment must be at the heart of any long-term strategy for producing skills that are complemented by rather than substituted for by technological change.


A Minnesota factory worker with Google Glass 2. Source: Wired Magazine.

Nevertheless, he’s still fairly confident that AI will not completely displace jobs, but rather complement them. Many blue-collar workers such as plumbers, electricians and HVAC installers will use AI to become more productive in their jobs, but not necessarily be replaced by it. I agree with his view. Microsoft and Google have both released augmented reality goggles that are being tested to aid workers in their day-to-day work. The machine tells the maintenance worker where to put the screws, where to find the missing part, and so on. In fact, Google has already deployed a revamped version of its hyped Google Glass product at a factory in Jackson, Minnesota (a highly interesting development, which I will probably comment on another time; you can find the original article from Wired magazine here). I do not want to dwell on these commendable efforts. Rather, I am much more concerned with the employees of large corporations who perform task-intensive jobs day in and day out. Think of the thousands of workers in Foxconn factories building iPhones, or truck drivers delivering merchandise. It is estimated that self-driving trucks “could threaten or alter 2.2 million to 3.1 million existing U.S. jobs.” What will happen then? A commenter on the previously mentioned MIT article offered some honest insight when he wrote:

The problem is not the technology: it’s the implicit and explicit social and business agreements we have presently in society.

The ultimate problem with job displacement is not so much the unavoidable technological advances that will leave people without jobs. It’s that we, as a society, have failed to properly organize ourselves to fit the needs of the market and to invest the required resources in the training and well-being of our workers. Public companies are put under immense pressure to perform and have put profits over their people (not that this is a new issue). If we are to avoid a massive displacement of jobs, we need government and businesses to take appropriate measures to protect workers by providing them with the education and skills that will enable them to stay competitive in an AI economy. It is our duty to use our God-given talents to help others, and therefore the virtue of a good society to provide the means for its people to achieve this end.


Corporate Oligopoly

Ah, we enter a favorite topic of doomsayers and conspiracists. The idea that a few companies will reap most of the profits from a market is far from new: six movie studios receive almost 87% of American film revenue, Facebook and Google account for almost 50% of the online ad market and are responsible for 99% of online ad growth, Russia’s oil industry is still controlled by a few producers, and so on. The list of examples is endless, and oligopolies aren’t always bad for an economy. They can streamline the production of goods and services, lower prices for consumers, and deliver greater profits to their shareholders.

I strongly believe the AI market will inevitably become an oligopoly (if it isn’t one already), and profits will become even more concentrated in the future. Facebook, Alphabet (parent company of Google), Amazon, Alibaba, Microsoft and Netflix are the leading technology companies in the world. They have delivered returns much greater than the market, are leading the world in AI implementation, and are innovating at the fastest rates as well. They show no sign of slowing down. They have methodically disrupted every industry they have touched -the release of a trademark from Amazon was enough to send shares of meal-kit delivery company Blue Apron down by more than 30%- and have digitized many of their processes. They have also concentrated the wealth of these industries among relatively small teams. WhatsApp was bought by Facebook for $19 BILLION and had only about 55 employees…55 EMPLOYEES.


Careful there! On paper, each of those employees was worth hundreds of millions of dollars

Due to a talent shortage in data science and AI, these companies compete with one another by offering perks and stock options to employees. Startups also frequently do this as a way to defer salaries while they start earning revenue. That’s fine and all, except when these companies grow to enormous valuations and the first few employees hold the majority of the company’s wealth. Amazon still pays its warehouse employees $12 an hour (per Glassdoor), while the company is valued at $500 billion and its CEO is the richest man on earth (as of July 2017). A recent article in the Guardian showed how Nicole, a cafeteria worker at Facebook’s headquarters, still lives in a garage with her family and barely makes ends meet. “He doesn’t have to go around the world,” said Nicole. “He should learn what’s happening in this city.” She’s referring to Zuckerberg’s highly publicized world tour, which started as his new year’s resolution to “get out and talk to more people.”

“They look at us like we’re lower, like we don’t matter,” said Nicole of the Facebook employees. “We don’t live the dream. The techies are living the dream. It’s for them.” Source: The Guardian

Unfortunate cases like Nicole’s highlight the growing divide between the middle class and an upper class increasingly populated by techies. In a report highlighted by CNBC, a record number of Americans were millionaires in 2016 -there was also a record 50% decline in the number of people who qualify as middle class, and “One in three say they couldn’t come up with $2,000 if faced with an emergency.” Thus, the corporate oligopoly has concentrated the wealth of the new economy in its founders, and the promise that the masses would be liberated to freelance and work on their own thanks to new digital technologies has proven false, except for a fortunate few.

The Rise of Enormous Data

Think Big Data was too big to handle? Enter Enormous Data. Seriously. It’s the new buzzword in the industry.

Stay tuned for updates and I appreciate your comments and suggestions.