The Future of Business in the Age of Machine Intelligence

George Zarkadakis
11 min read · Feb 7, 2019


The AI-enabled organization

Digital transformation propels organizations towards reinventing not only their business models and technology stacks, but also how they organize work. Inevitably, they are transforming from hierarchies and silos into “platforms” and “ecosystems”, where success and survival depend on agility and innovation. Agility suggests the expedient and resilient blending of human capabilities and talents from inside or outside the organization; innovation suggests technologies that augment human capabilities, catalyze collaboration, secure systems and allow for fast prototyping, experimentation and market scaling.

Artificial Intelligence (AI) is a key technology that drives this historical and profound change. Like electricity, AI will power the information infrastructures of the 4th industrial revolution. Thanks to AI and a host of other digital technologies, Ronald Coase's "three costs for the existence of the firm" (search, managing, contracting) are increasingly lower outside a traditional "firm" than inside it; thus new business microeconomics favor the unraveling of hierarchical organizations and their replacement by ecosystems of free agents, investment capital, shared technologies and data, as well as more traditional forms of process management. This means new opportunities for human progress, but also new challenges. The challenges are mostly around work and job security. As machine intelligence becomes more powerful, the "lump of work fallacy" may cease to be a fallacy. The "compensation effect" may not follow the "displacement effect" predicted by classical economics, if machines can themselves do the extra work they create. Humans will need to reinvent what work means in the era of ubiquitous AI: indeed, the very concept of human "productivity" will have to be redefined. Additional challenges resulting from work automation will include financing pensions and welfare systems, as well as increased income and wealth inequality.

Despite the serious challenges, the "rise of the machines", if managed properly, may lead to the "rise of the humans". In a world where intelligence is automated and commoditized, human talent will be the real gold. How we relate to others, how we reflect and care, how we bond and feel, how we inspire and persevere, how we choose between right and wrong and build a better world, will matter more than the skill to analyze a text, write up a contract, or underwrite a policy. We can imagine the future belonging not to billions of workers but to billions of entrepreneurs.

The following graph is a high-level description of what an "AI-enabled" organization might look like.

A Web 3.0 ecosystem — ©2019 by George Zarkadakis

In this type of organization, agile teams of humans collaborate in innovation and problem-solving; they are matched and developed by "talent platforms", use "collaboration platforms" to work together, and are augmented by AI systems that will increasingly "feel" not like mere tools but like veritable co-workers. These mixed human/AI teams will have access to agile technology stacks that allow for fast prototyping, experimentation and market scaling, as well as access to secure — and shared — data stores. Distributed ledger technology, as it evolves, will become increasingly critical for securing data, increasing autonomy, reducing latency, minimizing contracting costs, and democratizing access to data and technology.

Corporations and traditional businesses will gradually start to adopt Web 3.0 technologies in their technology stacks and business models; initially they will use them to decrease latency and increase efficiencies in their value chains. As they evolve into flat, networked organizations, "cryptonetworks" will be further deployed to enhance data security, monetize data assets and enable more agility in organizing work.

A more futuristic scenario would be the emergence of bottom-up organizations, modeled as worker-owned "cooperatives" and run as cryptonetworks. The Decentralized Autonomous Organization (DAO) on Ethereum was an initial proof of concept; we may indeed see this model for organizing work feature more prominently in the years to come[1].

Cooperative cryptonetworks (a futurist scenario)

Charlotte finishes off her coffee and checks her smart-watch one last time before plugging into her team's weekly virtual meeting. It's late afternoon in London, where she lives, but as soon as she puts on her VR goggles she is transported to a virtual workplace where her co-workers gradually assemble from around the world. For the past few months she has been working alongside a robot designer from Shanghai, a molecular biologist from Ukraine, a statistician from Lagos, and a zoologist from Mexico City. The team was put together by an algorithm on a talent platform where Charlotte is a member, responding to a request to design a smart farm in Botswana.

The team has been working diligently to deliver a solution that will significantly increase harvest yields through a combination of soil and weather data, genetically engineered crops, and continuous plant monitoring via autonomous drones. It has been a very exciting project for everyone involved. The talent algorithm had matched them not only for their skills but for their personalities as well, so they have a great time working together, despite mostly speaking different languages. Their personal smart assistants take care of real-time interpretation, and more. Charlotte's assistant, for example, ensures that she gets paid promptly via the smart contract on the talent platform's blockchain as soon as the team achieves an agreed milestone; it also manages her investment portfolio to maximize short- and long-term earnings, and allocates micropayments to a number of social and political causes that Charlotte cares about. The assistant also aids Charlotte in her work by suggesting ways to improve her outcomes. Charlotte is a professional poet, and her contribution to the project team is to compose a story around the creation of the new farm that will inspire local communities and ensure the harmonious cultural adoption of the new technologies.

The year is 2025 and Charlotte is a typical worker in the new, global, shared economy where mutually owned Web 3.0 digital platforms are diffusing ownership of technology by transforming platform participants into platform owners. She works independently, and does very well too, enjoying the flexibility and the long breaks that allow for self-development, for learning new skills and exploring new ideas. Most of her friends are freelance workers who get assignments in private, community, or public projects.

Flexibility and security do not need to be mutually exclusive, and a future economic model is not necessarily a deterministic, linear projection of yesterday's ideas. In the Charlotte scenario, it is not the government but private enterprise that solves for income intermittency, health cover, borrowing, as well as for long-term investments and pensions. Software can allow not only for deconstructing organizations into platforms, but also for grassroots, self-organized business cryptonetworks to emerge. The massive innovation that is currently taking place in insurance and financial services shows how ingenuity and imagination can find new solutions for an increasing percentage of workers going independent. But to imagine this alternative future we must challenge some of our established beliefs about how working life should be. For instance, the idea of a "pension" will have to be drastically revised. As full-time employees we used to work for three or more decades with an employer, non-stop, so that we could get a small monthly income at the end of our working lives and spend our old age in retirement. But this model does not make sense in a future where full-time jobs become a rarity, and bad demographics cannot sustain the colossal costs of an ageing population. A more probable model for the future may be that we never stop working, but take long breaks between work assignments to travel, study, or raise a family, and then return to work. Instead of a linear life we may have to adapt to a more interesting, cyclical life of continuous learning and development.

Technological Trends

AI is the engine and data the fuel: this relationship is pivotal in developing strategies for AI research and innovation. What bears equal weight on both sides of this relationship is the concept, and problem, of "trust". On the AI side, users must trust the systems they interact with; they must feel, for example, safe inside a driverless car, and empowered to query the reasoning of any algorithm that recommends something of significance to their lives. On the data side, users must have ownership of their own data, which includes portability as well as the power to select what data to share, when, and with whom. The business model where the user was the product is coming to an end, not so much because of increasing social awareness about the perils of letting companies do what they like with your data, but mostly because of the emergence of Web 3.0, which will make data self-ownership — or self-sovereignty — the default.

Given this direction, the following paragraphs explore several technological trends that promise to deliver the trust needed on the system as well as the data side of the AI relationship.

Personal data and Web 3.0: Web 3.0 will be defined by distributed ledger technology that secures digital assets on the web and enables transactions over trustless networks. Technology companies should therefore consider:

· Redefining their customer relationships by empowering customers to own their own data (customer-as-partner; data self-sovereignty, data trusts)

· Issuing digital securities ("tokenization") that reflect the value of data assets or other user-related contributions, in order to further incentivize user participation on a platform (a conceptual sketch follows this list)

· Developing interconnectedness with other distributed ledger data marketplaces and digital security exchanges, thereby helping to deliver a new, digital, global economic ecosystem.
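To make the tokenization idea more concrete, here is a deliberately simplified, purely illustrative sketch in plain Python (no blockchain framework): a toy hash-linked ledger that credits participants with tokens for the data assets they contribute. The class name DataLedger, the contributor name and the dataset identifier are hypothetical; a real deployment would rest on an actual distributed ledger with consensus and cryptographic signatures.

```python
import hashlib
import json
import time

class DataLedger:
    """Toy, in-memory, hash-linked ledger of tokenized data contributions."""

    def __init__(self):
        self.chain = []
        self.balances = {}  # tokens credited per contributor
        self._append({"genesis": True})

    def _block_hash(self, block):
        body = {k: v for k, v in block.items() if k != "hash"}
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def _append(self, payload):
        prev_hash = self.chain[-1]["hash"] if self.chain else "0" * 64
        block = {"payload": payload, "prev": prev_hash, "ts": time.time()}
        block["hash"] = self._block_hash(block)
        self.chain.append(block)

    def record_contribution(self, contributor, dataset_id, tokens):
        """Log a data contribution and credit the contributor with tokens."""
        self._append({"contributor": contributor, "dataset": dataset_id, "tokens": tokens})
        self.balances[contributor] = self.balances.get(contributor, 0) + tokens

    def verify(self):
        """Check that the chain of hashes is intact, i.e. no block was rewritten."""
        for prev, block in zip(self.chain, self.chain[1:]):
            if block["prev"] != prev["hash"] or block["hash"] != self._block_hash(block):
                return False
        return True

ledger = DataLedger()
ledger.record_contribution("charlotte", "soil-sensor-readings", tokens=10)
print(ledger.balances, ledger.verify())
```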

Federated learning at the edge: machine learning algorithms get initial training on large data sets in a cloud factory, but are then deployed at the edge to learn on the basis of user-generated, locally-gathered data. This approach can deliver enhanced privacy and personalization. Encrypted, summarized learnings (e.g. changes to the prediction model) are then pushed to the cloud and used to enhance the "shared" model by consensus. Federated learning was developed by Google, which struggles to fit it into its strategic model for a centralized cloud business. With Web 3.0 distributed ledger technology allowing encrypted data marketplaces and blockchain consensus mechanisms by default, federated learning may indeed become a dominant technique for training models in decentralized ecosystems (a minimal sketch follows the list below). Technology companies should therefore:

· Begin to experiment with federated learning

· Exploit the potential of crowdsourcing data labeling, with users training models at the edge (for example, by tagging their photos, making preferences based on their self-segmentation, etc.)

· Combine federated learning with tokenized data marketplaces
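Here is a minimal sketch of the federated averaging idea, using only NumPy. The function names (local_update, federated_round) and the toy linear-regression task are illustrative assumptions rather than any particular library's API; a production system would encrypt the weight updates before sharing them.

```python
import numpy as np

def local_update(global_weights, local_x, local_y, lr=0.1, epochs=5):
    """Train a simple linear model on locally held data and return the new weights."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = local_x @ w
        grad = local_x.T @ (preds - local_y) / len(local_y)
        w -= lr * grad
    return w

def federated_round(global_weights, clients):
    """One round: each edge device trains locally; only weight updates are shared."""
    updates = [local_update(global_weights, x, y) for x, y in clients]
    # The server averages the updates into the "shared" model;
    # raw user data never leaves the device.
    return np.mean(updates, axis=0)

# Toy example: three devices, each holding private data for the same 3-feature task
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    x = rng.normal(size=(50, 3))
    clients.append((x, x @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(3)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches true_w without ever pooling the raw data
```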

Immersive tech (AR, VR, IoT): Augmented reality, as well as virtual reality, is finally coming of age thanks to new, lightweight peripherals, better computing capabilities at the edge, and new broadband (5G); and will become a foundational technology in an IoT world. It will allow for a whole new class of data sets (likely to be collected and processed at the edge) that can train algorithms in enhancing human experience and augmenting human capability. The “Iron Man” paradigm will become a reality, as we will become virtually integrated with machine intelligence in an immersive, collaborative, mixed reality environment. Technology companies should therefore consider:

· Expanding the scope of their applications to include mixed reality; expect a transition from hand-held devices to wearables, and a possible return of Google Glass-type devices or smart coatings that can transform any surface into a screen

· Integrating single-player and multi-player experiences in their applications, for example in training and developing workers for an AI-enabled organization

Hybrid AI: Connectionist AI has cracked the bottleneck of knowledge representation that bedevilled Symbolic AI (GOFAI). Nevertheless, it is itself now weakened by the black box problem: deep neural networks cannot, in general, explain their reasoning. This "tacit" knowledge in artificial neural networks is not unlike the tacit knowledge of human experts ("Polanyi's Paradox"). However, AI systems that cannot explain their reasoning increase the risk of broad social rejection of AI. A possible approach to solving the black box problem would be a hybrid approach to AI, where some degree of symbolic representation is generated as a system learns and makes inferences; this symbolic representation can then be queried and provide explanations, in a similar way that human consciousness explains many automatic actions and decisions that take place unconsciously in the human brain. Technology companies should therefore:

· Revisit symbolic techniques in AI, and assess the preferred direction among more interpretable ML approaches such as decision trees and Bayesian networks

· Develop hybrid AI, or partner with pioneers who focus on solving the interpretability problem[2]

· Embed some degree of explainability into their systems as soon as possible (e.g. using LIME), even if imperfect, to ensure continuing trust (a minimal sketch follows this list)
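As one concrete, hedged example of the LIME suggestion above: the open-source lime package (alongside scikit-learn) can attach a local, per-prediction explanation to an otherwise opaque model. The dataset and classifier below are placeholders chosen purely for illustration, not a recommendation of any specific stack.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train an opaque ensemble model on a toy dataset
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction: which features pushed the model towards its answer
explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(explanation.as_list())
```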

Type III AI: We are currently at the beginning of developing Type II[3] AI, i.e. systems with limited memory of interactions that can deliver better communications between humans and machines. Examples are chatbots and digital assistants. Type III AI requires systems to form their own “theory of mind”, so they understand the intentions and feelings of the human users. This evolved AI will deliver the promise of AI systems as co-workers and not as mere tools; and will revolutionize how AI systems are embedded into our everyday life as well as how we relate to them. Technology companies should consider:

· Developing next-generation Type III AI themselves, or partnering with developers who focus on it.

· Investing in developing internal capability in reinforcement learning and adversarial learning[4] (these techniques are Type II, but also an important step towards Type III). GANs, for example, may be applied to a kind of A/B testing process that, if replicated in a dialogue — possibly in combination with memory cells — could simulate Type III (NB: this is my hypothesis; I have yet to hear of someone trying it out). Long Short-Term Memory (LSTM) neural nets may also be a promising start for experimentation with Type III AI (a minimal sketch follows this list).

· Exploring the application of cybernetic conversation theory in human/machine dialogue
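To ground the LSTM suggestion, here is a minimal, hedged sketch (assuming TensorFlow/Keras) of a memory-bearing sequence model that classifies tokenized dialogue turns into user intents. The vocabulary size, intent count, and random toy data are assumptions for illustration only; this is a Type II-style building block to experiment with, not a Type III system.

```python
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000   # hypothetical vocabulary size
MAX_TURN_LEN = 40   # tokens per dialogue turn
NUM_INTENTS = 12    # hypothetical number of user intents

# Embedding -> LSTM memory cell -> intent classifier
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(128),
    layers.Dense(NUM_INTENTS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy data standing in for tokenized dialogue turns and their intent labels
x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_TURN_LEN))
y = np.random.randint(0, NUM_INTENTS, size=(256,))
model.fit(x, y, epochs=2, batch_size=32)
```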

Developing an AI innovation and technology roadmap

In conclusion, technology companies should develop their AI roadmaps by setting three sets of goals:

· Immediate (next 12 months): aim to enhance product capabilities in the short term by identifying high-ROI incremental features to add to the product backlog. For example, consider adding a recommendation engine to supplement the value users currently get — and explore off-the-shelf predictive data modeling (e.g. clustering) to deliver such an engine quickly (see the sketch after this list).

· Medium-term (1–2 years): enhance internal team capabilities in AI through familiarization with cutting-edge ML techniques (RL, GANs, etc.) while feeding discoveries into the product backlog, thus serving immediate product enhancement goals as well as testing new functionalities and capabilities for the medium term.

· Long-term (2 years+): firstly, select a strategic direction for data (centralized vs decentralized), start building an MVP of a distributed product (i.e. a distributed app on a trustless cryptonetwork), and plan for systems and infrastructure migration. Assess the relevance of trends such as Explainable AI, Federated Learning, and Mixed Reality — and plan for medium- to long-term adoption. Secondly, foster links and relationships with AI developer ecosystems, constantly monitor market developments, and participate in forward-looking initiatives as well as the wider scientific and technology dialogue. Type III AI is an aspirational goal that will require the combined knowledge and expertise of AI, neuroscience, linguistics and systems theory.
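As an illustration of the immediate-term goal above, a clustering-based recommender can be assembled from off-the-shelf parts in a few lines. The user-item interaction matrix below is synthetic and the feature choice is an assumption; the point is only that a simple KMeans grouping of users, plus within-cluster popularity, is enough for a first recommendation engine.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical user-item interaction matrix (rows: users, columns: items)
interactions = (rng.random((200, 30)) > 0.8).astype(float)

# Cluster users by their interaction profiles
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = kmeans.fit_predict(interactions)

def recommend(user_idx, top_k=3):
    """Recommend the items most used within the user's cluster that the user lacks."""
    peers = interactions[labels == labels[user_idx]]
    popularity = peers.sum(axis=0)
    popularity[interactions[user_idx] > 0] = -1  # exclude items already used
    return np.argsort(popularity)[::-1][:top_k]

print(recommend(0))
```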

[1] See also dadi.cloud for an example of distributed “fog computing” based on DAO.

[2] See this paper on the Layerwise Relevance Propagation (LRP) approach: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140

[3] Type I AI: purely reactive and narrow-focused, e.g. Deep Blue and AlphaGo. Type II AI: limited memory of interactions. Type III AI: Machine Theory of Mind. Type IV AI: self-aware AI.

[4] For a good, and fun, example: http://prisma-ai.com/


George Zarkadakis

PhD in AI, author of “Cyber Republic: reinventing democracy in the age of intelligent machines” (MIT Press, 2020), CEO at Voxiberate @zarkadakis