Art of Seeing the Invisible

Ever since Marshall McLuhan, the philosopher of media theory, prophesied that "the medium is the message," many mediums have come and gone like tumbleweed in a graveyard, hustling the bereaved.

There was Miss Cleo on late-night infomercials, claiming Caribbean heritage and pitching pay-per-call listeners like inmates making their last call from prison, dropping their last dime of hope. "Caal me now mon 4 yah free tarot readin."

There was the first Long Island medium, John Edward, "Crossing Over" with Dr. Oz in 2011. Then came the second Long Island medium, Theresa Caputo, with loud hair and snapping chewing gum, contacting the dead loved ones of pre-taped TV audiences who desperately needed closure for old wounds and unfinished business.

Analysts predict that by 2020 there will be 25 billion connected devices talking to each other, more than 50% of which will be TVs, cars and wearable computers. And at the center of this machine-to-machine bickering will be mobile phones.

According to Qualcomm, humans will be able to displace the so-called sixth sense of these medium personalities claiming to "see dead people" with LTE Direct, its discovery-engine technology. Harnessing the Internet of Things into a "digital sixth sense," it combines Bluetooth and beacon technology in an SDK framework that empowers next-generation mobile apps to push relevant notifications and actions.

For example, I walk into my house. My smartphone sniffs out all my connected appliances and a domino effect of events occurs, from optimal temperature to favorite TV programming to hot water pouring into the tub. With always-on awareness of friends and promos based on one's location, one's natural senses will extend beyond the primary five: sight, touch, taste, sound, and smell. I can see the invisible. Coincidence will not be left to luck. The rigid lines between digital and physical will materially liquefy, dissipating into big data in real time. The right information to the right place for the right people.

In this era of the long tail wagging the dog of distribution, content is king. However, McLuhan's hypothesis was that the impact of the medium itself takes priority over the content it supports. Each medium, from chairs to cars to phones, conveys a message to its users. The internet, for example, isn't remarkable because of its endless store of content, but because it heightened our expectations for content to be infinite and instantly accessible. It is an extension of human consciousness.

Therefore, the humanization of technology will be a crucial component in the age of connected things. The mediums will expand beyond TVs and mobile devices into appliances, clothing, sneakers, walls, and street lights, changing the way we interact with objects and the meaning/message they impart. More than enhancing or displacing something, true innovation returns us to something we have lost. David Rose, an MIT designer, argues that the age of connected things will free us from the confines of the glass slab. In a Time article he writes, "The Apple watch is a glass slab trying to do too much."

What if there were a web we could wear, so that we become both the medium and the message: our humanity?

Pranav Mistry, Global VP of Research at Samsung, created a project called SixthSense six years ago to answer that very question. With colored finger caps, a pocket projector, a mirror and a camera, SixthSense liberates pixels from the glass-slab model. No phone, no mouse. Yet one can bring part of the physical world into the digital and vice versa. For example, faced with a beautiful rustic landscape, rather than summon a camera from a purse, one can form one's hands in the gesture of a film director framing a shot and take a photo. Then, browse a media library and project that photo on the wall with finger gestures, or even use one's palm as the surface.

T.E. Lawrence once said “All men dream but not equally.”

Mobile screens have reached their physical limit. They cannot get smaller.

The message is clear, no matter the medium.

We must dream human again by making computing more human.

D.R.E.A.M. – Digital Rules…

The day cash dies will be a sad day for conspicuous consumption.

There will be no tip jars on pavements.  Cardboard signs will read “Will work for bitcoins.”

Thirsty night clubbers will not be able to flash greenbacks for drinks, rain “benjamins” on exotic dancers or flick nickels into an arcade game to appear “on fleek.”

Most illustrative: the 1993 underground boom-box anthem "C.R.E.A.M. (Cash Rules Everything Around Me)" by Wu-Tang Clan, which propounded the grim economic reality of Shaolin (Staten Island), outlining the scrapes of street life and narrow escapes from the clutch of law enforcement, will play like the world's smallest violin on a 4-track. Not for lack of an important message, but because very few will know what "cash money" is.

By the end of 2017, Forrester Research predicts, US mobile users will spend $90 billion via mobile payments, a 48 percent compound annual growth rate over the $12.8 billion spent in 2012. With NFC and smartphone-based spending taking the wheel of commerce, cash used to be king; cashless is the future.
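As a quick back-of-the-envelope check, the two Forrester figures are consistent: $12.8 billion compounding at roughly 48% a year for the five years from 2012 to 2017 lands near the $90 billion projection. A minimal sketch:

```python
# Sanity-check the Forrester projection: $12.8B (2012) growing at a ~48%
# compound annual rate over five years should land near $90B (2017).
base_2012 = 12.8      # US mobile payment spend in 2012, in $ billions
cagr = 0.48           # compound annual growth rate
years = 2017 - 2012   # five compounding periods

projection = base_2012 * (1 + cagr) ** years
print(round(projection, 1))  # ~90.9 ($ billions)
```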

In lieu of paying with cash, check, or plastic, a consumer can use a mobile phone to pay for a good or service. For example, I recently had my first Uber car-ride experience, and the most magical part was the transaction. Somehow, Uber removed the most uncomfortable moment of taxi service: when the driver rolls up to one's destination and halts, and payment is required. How should I pay? Cash or credit? How much do I tip? With Uber, all these questions are covered and charged to the receipt. Payment details are set up before the ride, on a location-aware mobile app, and a ride is requested by dragging and dropping a pick-up pin. One click. No haggling over what the meter reads. I unbuckle my seatbelt, open the car door and exit. Frictionless commerce at its finest.

The promise of a cashless society is the ability to make it as easy as possible for customers to buy whatever, whenever, wherever with one click. The less a consumer has to think about stopping at an ATM for cash, digging in a purse for a credit card or getting robbed on the street, the more share of wallet increases.

A few frictionless-transaction models, racing toward one-click checkout, include:

  • A connected device waved over a reader at the retail point of sale, e.g. Square
  • Personal transfers to another person by mobile without sharing account info, e.g. Apple Pay, Facebook, PayPal
  • Digital wallets that store multiple credit cards without typing in card details for every transaction, e.g. Google Wallet, PayPal, Amazon

Indeed, according to GP Bullhound Research, mobile proximity transactions are predicted to grow at a whopping 175% CAGR between 2013 and 2017. If this trend continues, the shift toward alternative payment services may become permanent, and the credit card network may be disintermediated from commercial transactions altogether.

While mobile payment alternatives may unleash unprecedented retail transactions with ease, speed and convenience, a new mobile identity, based on fingerprint scanning and vein-recognition biometrics, may be the cost of admission to a cashless world. This raises the question of how the unbanked and underserved will participate if they have no credit history or physical address. The Bill & Melinda Gates Foundation argues that integrating digital payments into the economies of emerging and developing nations is necessary for economic growth and financial inclusion, as they significantly reduce the cost of sending and receiving money securely for the poor.

If Bill & Melinda are right, let's hope it won't be long before Wu-Tang has to update their underground classic "C.R.E.A.M." to "D.R.E.A.M.", where Digital Rules Everything Around Me...

Data Dysfunction Junction

In a land not too far away, two samurai clash on a hillside, swords interlocked, neither wanting to cede power to the other.

In "analog" enterprises, these two samurai are Marketing and IT.

Both organizations need the other to perform well, but egos of armor and swelling budgets get in the way, creating dysfunction junction.

Marketing sees IT as propeller heads, yet enlists IT to develop SQL code for customer relationship management and data warehouses.

IT sees Marketing as airheads who go outside IT process and governance and employ external developers to build apps. But IT cannot keep up with the atomization of market needs, where content has to be delivered to the customer on a more personalized, 1-to-1 basis across different experiences and touch points: physical point-of-sale systems, e-storefronts, call centers, and mobile devices.

With the digital acceleration of bandwidth, storage and processing power, information is traveling at metabolic rates in terabyte chunks. Case in point: according to TechTarget, a 1 TB hard drive "could hold enough words that it would take every adult in America speaking at the same time five minutes to say them all."

Customers, as a result, are not homogeneous cells.  Each has a unique identifier, a Digital DNA, composed of multiple entry points and paths which shape their decision profile.

If Marketing and IT could work together to unlock the Digital DNA of customers, they would know and understand those customers better than the customers know and understand themselves: a clear way for legacy brands to create a distinct competitive advantage.

Recently, I walked into a mobile operator store to upgrade my smartphone from a Galaxy S3 to a Galaxy Note 4, but the store did not have the exact gold color I wanted.

That meant getting on the phone with another store to determine whether they had the phone I wanted in inventory. They did, but then I had to relay all my details to the rep at the other store. I was embarrassed because it was taking a long time, and a line of customers grew impatient behind me.

So the question is how does a legacy brand understand my Digital DNA so I don’t have to play store clerk?

First, marketers need to be masters of technology and IT needs to understand the customer.

Second, IT and Marketing leaders have to step up and conquer the Big Data challenge. They have to consolidate different data sources, transactional and non-structured, into a standard, consistent environment for the data to have meaning and be useful.

Just as Dr. Frankenstein had lackeys dig up bodies that he could stitch together to build a Second Adam, i.e. the perfect human, companies have to stitch data together to build the perfect Digital DNA. Marketing has to coordinate with IT because IT knows where the dead bodies of software applications are buried in data warehouses. Then, through the ETL process, (a) extract the data from flat files, XML or relational databases, (b) transform and clean it, and (c) load it with some strategic design or purpose, they can bring all that data together, harness its energy, and assign each customer a universal ID: its Digital DNA.
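The three ETL steps above can be sketched in a few lines. This is a minimal illustration, not a production pipeline: the customer fields, the CSV source, and the idea of hashing a cleaned email into a universal "Digital DNA" ID are all illustrative assumptions.

```python
# A minimal ETL sketch: (a) extract from a flat file, (b) transform/clean,
# (c) load into a warehouse table with a universal customer ID.
import csv
import hashlib
import io
import sqlite3

# (a) Extract: read raw customer rows from a flat file (a CSV here).
raw = io.StringIO("id,email,spend\n1, Alice@Example.COM ,120.50\n2,bob@example.com,80.00\n")
rows = list(csv.DictReader(raw))

# (b) Transform: normalize messy values into a standard, consistent form,
# and derive a universal ID (the "Digital DNA") from the cleaned email.
def transform(row):
    email = row["email"].strip().lower()
    return {
        "universal_id": hashlib.sha256(email.encode()).hexdigest()[:12],
        "email": email,
        "spend": float(row["spend"]),
    }

clean = [transform(r) for r in rows]

# (c) Load: write the standardized records into a warehouse table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (universal_id TEXT PRIMARY KEY, email TEXT, spend REAL)")
db.executemany("INSERT INTO customers VALUES (:universal_id, :email, :spend)", clean)
total = db.execute("SELECT COUNT(*), SUM(spend) FROM customers").fetchone()
print(total)  # (2, 200.5)
```

The payoff is the last line: once disparate sources land in one consistent table keyed by a universal ID, questions about the customer become a single query.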

Today's business software does not capture the richness and complexity of customers. In the future, Marketing and IT may have to train machines to perform autonomously: find the rules and automate them, find patterns humans cannot see, and make predictions humans cannot, to resolve the clash of the swords.

Failing to identify or recognize the changing digital needs of the customer will likely result in lost business and lost connections. But Marketing and IT first have to connect the data dots between themselves. Then they can analyze the customers in front of them.

Organism Omnivore

Companies are like living organisms.

They learn, evolve and eventually die in 13 years, plus or minus 5.

They consume sales, and excrete costs.

They are omnivores. The more sales a company consumes relative to the costs it excretes, the more it grows.

Yet most companies operate like machines that walk at right angles, with one foot stuck in the tar pit of analog.  In the industrial era, a business was like a clock with a long and short hand, devised by engineers and pencil-pushed by accountants for maximum productivity; every worker was an undifferentiated cog and wheel, interchangeable, disposable.

This machine view of business operations prevailed because smokestack industries were stable and predictable; the most valuable assets were hard and fixed: electrical plants, factories, work-in-progress inventory, and finance capital. Deprioritizing humanity for the sake of optimizing profit was considered good business judgment, because human bonds were too fragile. Ayn Rand's The Fountainhead popularized the notion of companies as the birthright of clever individuals atop a pyramid of workers who did their bidding. Thus, managers were the brains, and relationships were secondary.

But in this new digital ecosystem, the business world is a vast, murky and clandestine rainforest, where machines constantly bump into things, spin uncontrollably, rust and malfunction. Henry Ford, a pioneer of the mass-market automobile industry, once said, "Any customer can have a car painted any color that he wants so long as it is black." From a monopoly car manufacturer, such arrogance was pardonable, but today's customers want personalized experiences and different products and services every day.

To survive the threat of extinction, companies have to create a new nervous system and a high-performance infrastructure. It must respond to environmental changes as a living thing would, and protect crucial nerve endings from damage and paralysis. For example, if you are in a cold environment and you don't know it, you are dead. This is an advantage living organisms have over machines: machines don't know they are dead. In my experience, a business's nervous system can be destroyed by miscommunication and conflict if there is no understanding of self and of management.

Thus, a knowledge system distributed throughout all the employees must be restored, one whose aim is continuous peer-to-peer learning about one's environment. This process is not mechanical; learning is creative, fluid, soft, messy and magical. And the power of digital is that it automates the mundane and frees up bandwidth for more learning. Employees can then be redeployed to think, strategize, learn and see into the future.

As a result, companies need to invest in connectedness both internally and externally, and move away from centralizing digital as if it were another cog and wheel. Evolutionary biology defines an organism as a body composed of different pieces that coordinate well for a common purpose. Organisms have self-control and derive power from within. In his book The Living Company, Arie de Geus argued that a living organization's first loyalty is not to any individual or crowned figurehead, but to its own existence, growth and the factors that extend its longevity.

A living company is a connected company.

A connected company operates as a band of self-directed pods supported by platforms and connected by common purpose, not by fear of a supervisor. Amazon and Google are great examples of the open, living, connected company. They disperse digital staff across key departments, with change agents who lead key initiatives, set up processes, and connect the dots while empowering others to lead. They see companies as complex ecosystems of connections and potential connections. Here is the survival kit for a living company:

  • Living companies have to encourage creative binging. E.g. Google used to give its workers 20% of their time for side projects, which produced AdSense and Gmail.
  • Living companies have to have a strong, unified culture. See the sustained superior performance of Procter & Gamble, Zappos and Netflix, built on the importance of culture.
  • Living companies have to be self-aware and in touch with the world around them, constantly on the prowl for new opportunities. E.g. Google investing in self-driving cars and home automation (Nest), Facebook investing in virtual reality (Oculus).

As we move toward the next wave of digital disruption, customers will be more and more connected with each other on mobile, social and the cloud. A successful company must adapt, reinvent its products and services, and connect with customers. The best way to do that is as a living company, not a dead machine.

F(x)=Singularity + 1

When I took calculus in high school, studying derivatives and limits, I experienced ‘Singularity’ before I even knew what it meant.

I typed 1 divided by 0 into my T180 graphing calculator and the word "ERROR" appeared. This handheld computer, which only took the numbers 0 through 9 as inputs, talked back to me in English letters for the first time.

How did it know how to speak English? And why speak now? Was it trying to tell me something? A warning? What was it about the function f(x)=1/x that it did not like?

Then I learned, after graphing the function, that as x approaches 0 the curve bends away, exploding toward the vertical, undefined at 0 itself.

Instead, the branches approach positive and negative infinity, which in layman's terms is a quantity we can't comprehend. It is so large, beastly, and non-computational. Like conceptualizing a universe that forever expands out from a subatomic dot, we can leverage the grace of equations to define it, but we can't describe what it means to our lives. Not in a practical sense.
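The calculator's "ERROR" is still with us. A small sketch of the same blow-up, using Python in place of the old handheld:

```python
# f(x) = 1/x explodes as x approaches 0, and is undefined at 0 itself.
def f(x):
    return 1 / x

for x in [1.0, 0.1, 0.001]:
    print(x, f(x))   # outputs grow toward infinity: 1.0, 10.0, 1000.0

try:
    f(0)             # the singularity itself
except ZeroDivisionError as err:
    print("ERROR:", err)   # the modern equivalent of the calculator's message
```

Forty years on, the machine still refuses the singular point; it just explains itself a little better.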

Thus in math, singularity is the point in the equation that blows up, becomes degenerate.

In 1993, the science fiction writer and computer scientist Vernor Vinge introduced the defining notion of technological 'Singularity' as an inflection point in human evolution where artificial intelligence (ironically, man's own creation) surpasses human intelligence. His conclusion does not end well for humans.

On the other hand, Ray Kurzweil, a director of engineering at Google and prophet of the trans-humanist movement, takes a more upbeat tone. For him the 'Singularity' is the year 2045, when man merges with machine to keep up with the accelerating rate of intelligence, with the possibility of extending his life, dangling the keys of immortality.

What does that inflection point in 2045 look like? Is it falling in love with our operating systems?

Some postulate it will be a rapture of nerds huddled around a big TV screen waiting in a Kool-Aid line to upload their consciousness to the machine, a monolithic server, where they can live forever, replicating their being across the planet.  Downloading into new hardware where needed.  Don’t laugh.

When I was in grad school, I got burnt out. As an outlet, I started watching the Battlestar Galactica series, a space opera about a post-singularity world. I got so addicted it almost wrecked my life. I gorged on the episodes, popping in one DVD after another like buttered popcorn. What began as a one-off viewing to let off steam turned into two months of my life, gone, that I cannot get back.

Although the 'Singularity' has perhaps a 1% chance of happening, and could be perceived as a cult of atheists making religion out of the rational, it is a useful construct for examining the rate of change in the future. Technology moves far more quickly and profoundly than we anticipate: the law of accelerating returns. Chip speeds have been doubling and chip costs halving for the past 50 years. The truth is that our universe is transforming into a vast thinking being fed by the data generated from our desktops, laptops and phones. To approach the zero moment of truth in f(x)=1/x, we may have to get out of our own way and use the help of machines to do the heavy lifting.

Peter Thiel, co-founder of PayPal and billionaire investor, described the act of going from zero to one as creating something radically new, the way start-ups like Apple, Google and Facebook have in Silicon Valley. It requires intensive growth, like birthing a child. But the question is: how do we get from 1 to 0 without imploding or degenerating? How do we find equilibrium and calm in nothingness and everything?


If infinity is too large for me to comprehend, I may need a computer to warn me with an "ERROR" message, as my T180 graphing calculator did in high school. We may need machines to process infinity. Let's face it: humans were born as 1s, not 0s. Therefore, the vast majority of human effort happens on a 1-to-n basis. That is where we live. We make incremental improvements.

If the singularity is a step function in evolution, where all parallel lines meet, where primate and android wed, then let singularity = infinity + 1, where the 1 is human.

And that's OK. Because infinity + 1 still equals infinity.


It won't just be semantics when the inter-web grows up and takes to the sky as SkyWeb.

When Web 2.0 perishes, it will be counted as one of the five worst extinctions in earth's history, up there with the dinosaurs of 65 million years ago, who, according to some paleontologists, were caught cross-eyed in the crosswalk of killer asteroids striking the earth.

As the Cretaceous, Jurassic and Triassic periods defined the era of dinosaurs, the largest meat-eaters to walk the earth, so too will Web 1.0, 2.0 and 3.0 and the 'Internet of Things' underpin the tectonic shifts in our technologies and how they impact our lives.

It was not so long ago, in 1997, that the Web 1.0 gold rush hit. The web provided a vector of exposure for brands that did not want to be locked in the prison of brick and mortar. Everyone and their cousin was launching an e-commerce site with a .com domain, static web pages and a shopping cart. Brands would end their 30-second television pitches with a .com logo to show online credibility and how digitally hip they were. The labor market was eating up the alphabet soup of coders with HTML, CSS, Flash and JavaScript skills.

Then, with the click of a hyperlink, the stampede of Facebook, Twitter, YouTube and other social networking sites forced the hand of Web 2.0, with new requirements for richer internet applications and user-centric design. A user with no coding background could cut and paste JavaScript snippets, personalize a homepage and participate in the production of web content. Content management systems like WordPress, Drupal and Joomla gutted the demand for graphic designers and web developers, automating everything with templates and out-of-the-box plug-ins.

With Web 3.0, the best of Web 2.0 will burrow into the ground. The web will re-emerge as a giant, global graph, evolving from a persistent data structure into something more fluid and highly personal, with context provided by consumers. According to the famed semantic technologist Nova Spivack, Web 3.0 should bring a more connected, intelligent, self-aware, interoperable whole rather than a loose collection of siloed applications and content repositories. Broadband adoption, mobile internet, P2P, open APIs and protocols, open IDs and semantic web technologies like RDF and SPARQL will converge and enable the web to act more autonomously, birthing artificial intelligence, or what I call 'SkyWeb.'

Right now we live in a syntax world of HTML coding, where the arrangement of data matters most for rendering a working web page that connects information. But in Web 3.0, it's all semantics. The metadata (the data that describes the data) will help the internet find meaning, become more intuitive and subjective, and look at the whole. In sum, it will become a thinking machine. If computers can understand the meaning behind information, they can learn what we care about and help us find what we really want.
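The semantic shift is easier to see in miniature. A toy sketch of the subject-predicate-object triples that RDF uses, with a tiny pattern-matching query in place of a real SPARQL engine; the facts, names, and the `query` helper are all illustrative assumptions, not a real RDF stack:

```python
# Facts stored as (subject, predicate, object) triples, the shape RDF uses.
triples = [
    ("alice", "likes", "jazz"),
    ("jazz", "is_a", "music_genre"),
    ("blue_note", "sells", "jazz"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# What does Alice like, and who sells it? The machine follows relationships
# between meanings rather than matching raw text.
liked = [o for _, _, o in query(subject="alice", predicate="likes")]
sellers = [s for interest in liked for s, _, _ in query(predicate="sells", obj=interest)]
print(sellers)  # ['blue_note']
```

Syntax renders the page; semantics lets the machine chain two facts it was never explicitly told to connect.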

How many times have you started typing in Google's search box and had it predict what you are searching for before you even know it? As computing power increases and data becomes more connected, predictions will grow more accurate, based on personal preferences, locations and biofeedback.

Often when we think of artificial intelligence, a.k.a. SkyWeb, we imagine an omnipresent threat, or a clever conversationalist we are stuck with alone in a space station, but AI will make our lives easier. Think of your smartphone. Siri is a personal assistant that answers simple questions, performs web searches and handles other basic functions. Siri will get smarter, sentient. Give her time. We will be able to communicate with her in a more cogent way.

In the short term, SkyWeb could be a thinking machine, but it will be no more advanced than a toddler, picking its nose or burping up its food.

In the long term, SkyWeb could help us find the means to end war, disease and poverty by executing a simple algorithm. Now how cool is that?

Thingification 1 & Thingification 2

Remember Thing 1 and Thing 2 in Dr. Seuss's The Cat in the Hat: twin humanoid "Things" in red pajamas with cotton-candy blue Afros.

The Cat in the Hat released them from a big red box to brighten the rainy day of some bored kids.

These "Things" caused mayhem, talking jive, flying kites in the house, knocking pictures off the wall and messing with other people's clothes. And they would not stop until the Cat in the Hat pushed them back into the big red box.

According to the National Cable & Telecommunications Association, the next major trend to impact telecommunications is the home invasion of "Things" like Thing 1 & 2: dedicated computing devices that bleep, blink, and bloop. Not to mention chat behind one's back...

From stovetops to toothbrushes, millions of "Things" will have IP addresses, not just PCs, handsets and tablets. But unlike in the children's classic, these "Things", the growth drivers of the internet, will not be put back into the red box by Dr. Seuss or anyone else.

By 2020, mobile data traffic generated by humans will be supplanted by "Things", by smart objects. Specifically, smart cars and smart appliances will generate a continuous stream of notifications: always self-aware, hunting for context, notifying the world of their presence. It is estimated that the "Thingification" of the internet will provide 50 billion, if not trillions, of new connected data sources globally by 2020. IDC projects a 17.5% compound annual growth of the IoT installed base from 2013 to 2020 and a market valued at $7.1 trillion, as the convergence of mobility, social, big data and cloud continues. Exit the world of exabytes. Enter the world of zettabytes.

Amazingly, these "Things" can send or receive information without human contact. In fact, in the near future, installing a fridge may mean loading it with a person's likes and dislikes and "friending" it on Facebook, so that when you run low on eggs it can "poke" you. Your fridge may even become 'besties' with your washing machine, venting and chatting about what a poor job you do of cleaning up.

Objects that we never perceived as having an IQ are beginning to interact with people in new and exciting ways. Dumb, clumsy devices of the past can now suddenly divulge personal secrets, building a picture of human behavior before we even know it. Current examples include Google's Nest thermostats, Wi-Fi washing machines and connected cars that avoid collisions and parking fines, or wearables like Fitbit and Nike FuelBand trackers that monitor health and store that data in the cloud.

The future of hardware isn't better versions of the same standalone tech. It's what you can create when you extract the smarts of the smartphone and make its peer group of gadgets smarter and connected. There remains tremendous potential to "connect" our transportation systems, our highways, to the internet. Smarter highways might mean lanes that adapt to traffic congestion, fewer accidental fatalities, or roads serving fleets of autonomous vehicles driving 100 mph, inches apart. The economic benefits to companies seeking efficiency and effectiveness throughout their operations are manifold. Devices that self-report will cost dramatically less than humans who hate filling out time cards, which will feed customers' expectation of zero tolerance for defects and loss of service.

As the proliferation of devices moves out from the cellphone, onto our bodies and into the world, industries will have to rethink mobile network architecture, standards and the interoperability of products. The rise of mobile devices is already having a dramatic impact on mobile network operators such as AT&T, Verizon, Sprint and T-Mobile, and IoT will only exacerbate the data-traffic problem. To put it into perspective: as a society, we produce and capture more data each day than was seen by everyone since the beginning of the earth, when proto-men wrote their first hieroglyph in a cave.

As we shift from data created by people to data created by things communicating with other things, the larger question is: where do humans fit? Technology should incorporate empathy, humanity and nostalgia into the design of the embedded objects that constitute the Internet of Things. It must combine beautiful objects and information in unbelievable, magical and beautifully simple ways, seamlessly integrating into our daily lives. David Rose, an MIT tech visionary in this regard, described in his book Enchanted Objects four doors to the future:


1. Terminal World: a future in which we are dominated by glass slabs, like mobile phones and tablets, to which our heads and eyeballs are glued.

2. Prosthetics: a future where new bionics enhance our muscles and our senses, making us superhuman.

3. Animism: a future of living with social robots and cuddly objects that talk to us and pretend to care about us.

4. Enchanted Objects: a future in which everyday objects take on delightful new qualities that enrich our lives, but do not dominate us.


I choose Door #4, because Dr. Seuss and the Cat in the Hat would have wanted it that way.

Beacon & Eggs: Retail 2.0

Last night I had a dream within a dream.

I walked into a store and my smartwatch lit up, telling me, "Welcome back, Shakir!"

Soon afterwards, a mannequin with a mic in its throat and video-display eyes tilted its head. "How can we help you, Mr. Ramsey?"

“Need a turntable to go with those Beats headphones you bought two months ago?”

As so often happens with dreams within dreams, I could not wake up, because the line between fact and fiction had blurred.

The store of the future: An IGD research project commissioned by CCE – May 2012

That line, in reality, is called beacon technology, one of many emerging, disruptive technologies extending the physical realm into the digital, which many retailers are failing to leverage to "follow the eyeballs" of customers.

Beacons are low-energy Bluetooth sensors that transmit a radio signal up to 100 meters. They can push notifications to a smartphone in a retail store using triangulation, making mobile experiences more personal than ever. Through a blend of kiosks, smart mirrors and branded companion apps that use beacon technology, retailers can "remember" a customer, no matter what part of the store they browsed or which items they favorited. And this can be done regardless of which sales rep was present or the level of foot traffic in the store.
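Under the hood, a companion app typically estimates how close the shopper is from the strength of the beacon's signal. A hedged sketch using the common log-distance path-loss model; the calibration values (`tx_power`, the expected signal at 1 meter, and the path-loss exponent `n`) and the zone thresholds are assumptions that vary per beacon and per store environment:

```python
# Rough proximity estimation from a beacon's received signal strength (RSSI).
def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Estimate distance in meters via the log-distance path-loss model.

    rssi     -- received signal strength in dBm (more negative = farther)
    tx_power -- calibrated RSSI expected at 1 m from this beacon (assumed)
    n        -- path-loss exponent; ~2 in open space, higher indoors (assumed)
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

def zone(distance_m):
    """Map a distance estimate to the coarse zones beacon apps tend to use."""
    if distance_m < 0.5:
        return "immediate"   # at the shelf: push the product offer
    elif distance_m < 4.0:
        return "near"        # in the department: surface favorited items
    return "far"             # just entered: send the welcome-back greeting

print(zone(estimate_distance(-59)))  # RSSI equals tx_power -> ~1 m -> "near"
print(zone(estimate_distance(-85)))  # weak signal -> ~20 m -> "far"
```

Real deployments smooth the notoriously noisy RSSI readings over time and combine several beacons before acting, but the idea is the same: signal strength becomes distance, and distance becomes a context-appropriate nudge.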

So why are brands and retailers slow to purchase low-cost technology like beacons to increase sales via location-based marketing? Why are retailers afraid to leverage big data tools that enable contextual, personalized, real-time dialogue with consumers? According to Deloitte, this is due to a "new digital divide", not to be confused with the one born of socio-economic disparity in access to information technologies.

GE Capital Retail Bank’s second annual shopper study (2013) found that 81% of consumers go online before heading to the store to make a big purchase ($500+).  But retailers seem stuck at the crossroads of Retail 1.0, largely failing to engage customers in-store and leaving money on the table.  With only mobile shopping apps trying to bridge the digital gap, consumers are left with an anti-climactic feeling, gawking in incredulity that brands and retailers are so out of touch.

The consumer is begging to be reached and engaged in a superior shopping experience, but retailers and brands are neglecting them.   Even though they are best equipped to tie big data analytics, CRM and DMP back-ends into customized messaging, they fail to tap the real value of beacon technology.  Retailers need to step up and give customers a feeling of excitement and exclusivity.   Recently, Microsoft gave retailers one way to provide that feeling, selling its Kinect technology to identify customers through retina scans and facial recognition.  But that level of engagement might be too intimate for some, for now.

In the meantime, marketers need to look at customer data and attribution and create plans to reach the customer everywhere he lives digitally.  There is no doubt that competing with Amazon’s e-commerce shopping experience will be a constant challenge.  However, the online “retail store” is going through an unbundling of its own, pushing Google, eBay, and Amazon to open their own stores and move from “clicks to bricks.”

To win customers back in Retail 2.0, marketers and brands have to eliminate departmental barriers that have grown up in the wake of digital’s first generation. They have to integrate digital intelligence across marcom and e-commerce departments and not be so siloed.  More than anything, marketers must cede control of messaging and leverage technological connectivity not only to offer deals and rewards but tell a story that awakens the customer.  Because, in the next evolution of retail – the convergence of wearables, beacons and big data –  the customer is the protagonist in control of the dream.

BitCoin Eyes

In Greek mythology, coins placed over a dead man’s eyes paid for his soul’s passage to the underworld.

But it turns out bitcoin (a P2P digital cryptocurrency launched in 2009) could bring that man’s soul back to life.  And he can keep the change, especially if he died poor and unbanked.

Bitcoin is no laughing matter.

Some experts predict it could end world poverty.

And it’s all because, as famed venture capitalist Marc Andreessen describes, bitcoin is “a much deeper concept than currency. It’s the idea of distributed trust.”


Some quick facts about Bitcoin:

  • It is virtual money with which one can buy things online
  • It is a scarce resource like gold
  • It cannot be duplicated
  • It uses blockchain technology, which prevents people from spending it more than once
  • It is highly volatile and risky
  • It is not regulated by the government
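The “cannot be duplicated” and “spent more than once” properties come from chaining blocks together by hash: every block embeds the fingerprint of the block before it, so rewriting history breaks every link that follows. A toy sketch of that idea (real Bitcoin adds proof-of-work, Merkle trees, and a peer-to-peer network on top; the helper names here are mine):

```python
import hashlib

def block_hash(index: int, prev_hash: str, data: str) -> str:
    """Hash a block's contents together with its predecessor's hash."""
    return hashlib.sha256(f"{index}|{prev_hash}|{data}".encode()).hexdigest()

def build_chain(transactions: list) -> list:
    """Link each block to the previous one by embedding its hash."""
    chain, prev = [], "0" * 64  # the genesis block has no real predecessor
    for i, tx in enumerate(transactions):
        h = block_hash(i, prev, tx)
        chain.append({"index": i, "prev_hash": prev, "data": tx, "hash": h})
        prev = h
    return chain

def is_valid(chain: list) -> bool:
    """Recompute every hash; an edited block breaks all links after it."""
    prev = "0" * 64
    for b in chain:
        if b["prev_hash"] != prev or block_hash(b["index"], prev, b["data"]) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(chain))                   # True
chain[0]["data"] = "alice pays bob 500"  # try to rewrite history
print(is_valid(chain))                   # False
```

Because every participant can rerun this check, no bank or clearinghouse is needed to vouch for the ledger, which is exactly the “distributed trust” Andreessen is talking about.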

In the way Napster disrupted the music industry in the ’90s with P2P music file-sharing, Bitcoin has unleashed an equally powerful, disruptive idea: that people should be able to send and receive money anywhere in the world, to buy anything, without revealing their identity.  Such a concept spits in the eye of traditional banking, dismantling its centralized power as trusted intermediary. No one gets to “take a cut,” 2-3% or otherwise.

Over the course of one person’s life, the 2-3% savings that bitcoin offers as an alternative to plastic-card transactions might not seem like much.  But considering that world economic output runs $90 trillion per year, 2-3% adds up fast, to roughly $2 trillion annually: dump trucks of money, which could be poured into developing countries on the watch-list of economic collapse.

Due to the anonymous nature of bitcoin transactions, it has been the go-to currency for cyber-criminals.  Money laundering, Ponzi schemes, and other black-market activities have all been traced to Bitcoin in the news media.  However, as the concept matures and becomes more widely accepted beyond libertarian circles, Bitcoin could open the door to innovation for underbanked and international communities.  Since the blockchain is a public, verifying ledger, many of the immovable mountains that minority communities deal with, such as creditworthiness, become molehills to step over.

However, first things first.  Broadband infrastructure and digital and financial literacy in these communities must be prioritized investments.   For example, bitcoins are created through mining and recorded in a shared file known as the blockchain.  Mining is like a hairy, complex riddle: computers race to solve a piece of it, and the first solver is rewarded with 25 bitcoins. (Cue Nintendo’s Zelda or Super Mario Brothers music.)  When mining competition dwindles, the problems get easier to solve, but as the supply of miners increases, the problems progressively get more difficult and expensive.  Hedge funds and companies like Butterfly Labs are spending hundreds of thousands on specialized mining hardware to swing pickaxes in Bitcoin’s virtual mines and break these complex algorithms.  With Bitcoin issuance set to end around 2140, there is still time for these underserved communities to learn the back-end technologies and get in while the currency is cheap.
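The “riddle” is a proof-of-work search: keep hashing the block with different nonces until the digest falls below a target. A toy version makes the difficulty knob visible (real Bitcoin uses double SHA-256 over an 80-byte block header and astronomically higher difficulty; this sketch just counts leading zero hex digits):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Find a nonce whose hash starts with `difficulty` zero hex digits.

    Each extra required zero makes the search roughly 16x more expensive;
    this is how the puzzle "progressively gets difficult" as the network
    raises difficulty to keep pace with competing miners.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# An easy puzzle: a laptop solves this in a blink.
nonce = mine("block #1", difficulty=4)
# Verifying the answer takes one hash; finding it took thousands.
# That asymmetry is what lets the whole network trust the winner.
assert hashlib.sha256(f"block #1:{nonce}".encode()).hexdigest().startswith("0000")
```

Bump `difficulty` to 6 or 7 and the same laptop grinds for minutes, which is why miners graduated from CPUs to warehouses of purpose-built hardware.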

What excites me is that if one has a digital wallet, i.e. a mobile phone, Bitcoin doesn’t discriminate. It is finance for the people, by the people. If one lives on prepaid cards, Bitcoin can enable online microloans.   If one only has a feature phone in a dilapidated village, bitcoins can be sent via SMS or satellite to nodes in outer space.  If one has an Oculus VR headset, Bitcoin can be integrated into a game engine to create a new currency that monetizes virtual worlds and mints crypto-entrepreneurs.   The possibilities are endless if one takes the bitcoins off one’s eyes and opens up one’s soul to the unheard-of opportunities of #digitalfutures.

Blink. Rethink TV.

Blink. Rethink TV.

Blink. I am in the TV.

Blink. But the TV is broken….and may be dead.

I recall the first signs of TV cracking like an egg.  A prime example: Janet Jackson’s nip-slip during the 2004 Super Bowl half-time show, alongside Justin Timberlake.  In that moment, time-shifting (the ability to pause, rewind, and fast-forward the show you’re watching) upgraded from analog VCR to digital DVR.   Viewers digitally paused, replayed and gawked at the infamous half-second transgression more than any moment of the game or the multi-million-dollar commercials that paid for it. To this day, “Nipplegate” is the most-replayed moment for DVR platform pioneer TiVo.


Before VCRs there was no time machine, no “time-shifting” of TV at home.  One was stuck on the network’s one-to-many programming schedule. And if you had to run to the lavatory, you held it in until there was a commercial.  (For me, commercials were great excuses to do high school homework!)

Nowadays, connected viewers on handsets and tablets are not only driving the rise of Social TV and cord-cutting from cable subscriptions; their activities are opening new windows for real-time, multi-screen experiences.  The Pew Internet and American Life Project recently published a study that found 50% of cellphone owners use their phones while channel surfing.  And a third screen, the smartwatch, is not far behind, promising even more media fragmentation.   With all these ways of consuming and interacting with video, it is not surprising that children have little patience with live TV, not understanding why we can’t simply “make it go fast,” i.e. record the future so we can skip over the commercials.

For that to happen, we must move beyond breaking TV down into short-form internet video toward more personalization and curation.  At home, I am paying over $100/month for 1,000 cable channels, 90% of which I do not use.  Set-top boxes like Roku, Apple TV and Chromecast stream internet video into the living room and rescue TV wasted in the long tail, freeing it from cable’s bundled model with on-demand distribution. But for TV to truly mature, the next battleground for interactivity is virtual and augmented reality.  Like the myth that we use only 10% of our brain capacity and could achieve superhuman feats like telepathy and psychokinesis by using more of it, virtual reality could enhance learning and create a non-linear, multi-screen experience spanning connected devices (IP TV, handset, tablet) and the social platforms that link them together.


Marcel Proust divined, “The voyage of discovery is not in seeking new landscapes but in having new eyes.”  Interpreted another way, the voyage of TV is not in seeking new channels but in having new eyes, like a VR headset, to see the channels differently.   Passive TV watching and channel surfing frosted a kind of blindness over our eyes for a long time, growing our appetite for the braille of 4D interactivity.

In 2014, Facebook purchased Oculus for $2B for its virtual reality headset technology, Oculus Rift, planning to adapt the headset’s immersive dimensions to convivial settings and enhance social networking.  Sony also plans to release Project Morpheus, its immersive 3D head-tracking helmet, next year.  Imagine enjoying dugout seats at a World Series baseball game, or memorizing a language like Russian while studying for a year abroad, or bringing the boardroom to the beach, smelling salt and seaweed in the air, just by putting on visors at home.  As a tool for education, VR may make us smarter.   We could learn Latin in two hours or move a can of spinach with our minds.  With millennials’ DNA pre-encoded for online education, researching on the Internet, self-teaching with instructional videos on YouTube and distance learning powered by video technology, they can invent the future and prototype endlessly.   None of the associated materials costs to experiment, practice, and train will hold them back.

Blink. The TV that I am.

In Virtual Reality, the television set is not broken like a machine; it breathes like a human.  Time-shifting is not triggered by DVRs but by plunging artifacts like the videocassette into the machines of our own bodies, our own human stomachs, to birth new stories.  And as companies like Facebook and Google invest in telephony to share those stories without friction, the “boob-tube” dies with Super Bowl 2004, and the resurrection of the TV experience will be signaled by every smartphone on the planet ringing simultaneously.