Platforms, Power, and the New Architecture of the Firm

An exploration of how artificial intelligence is reshaping the structure of the modern corporation, transforming technology firms into hybrid infrastructure institutions that blend research labs, platforms, and public utilities.

Consider what a single artificial intelligence company does in a given quarter. It publishes peer-reviewed research, operates cloud infrastructure serving millions of API calls, sells subscription access to consumers, negotiates safety commitments with multiple governments, and fields questions from congressional committees about the societal implications of its own products. No prior corporate form was built to hold all of that at once. The railroad firms of the nineteenth century came close, but even they did not simultaneously function as research institutions, utility providers, and objects of geopolitical strategy.

Artificial intelligence does not exist in abstraction. It is built, trained, deployed, and monetized through organizations. These organizations are not neutral containers for technology. They carry within them governance structures, incentive systems, ownership arrangements, and strategic priorities that shape how the interface evolves and how its capabilities reach the world. Earlier essays in this series examined the political economy of artificial intelligence, the governance frameworks emerging around it, and the institutional adaptations it demands. The next question concerns the organizations themselves: what kind of firm is capable of building and controlling the interface?

Technological revolutions rarely produce only new tools. They also produce new organizational forms. Alfred D. Chandler Jr., in his Pulitzer Prize-winning The Visible Hand (1977), demonstrated that the railroads of the mid-nineteenth century required the invention of the modern managerial corporation, complete with hierarchical administration, professional management, and the separation of ownership from control. Electricity accelerated the rise of vertically integrated industrial firms capable of coordinating production across entire supply chains. The internet gave rise to platform companies whose power derived from network effects rather than physical assets. Artificial intelligence appears to be generating another such shift. The companies building frontier AI systems increasingly resemble hybrid institutions that function simultaneously as research laboratories, cloud infrastructure providers, software platforms, and, in some respects, public utilities.

Carlota Perez, in Technological Revolutions and Financial Capital (2002), mapped this recurring pattern across two centuries of industrial history. Each technological revolution, she argued, generates not only new industries but a new techno-economic paradigm, a set of organizational best practices that eventually reshapes the entire institutional landscape. We appear to be inside such a transformation now. Understanding the interface therefore requires understanding the corporate structures through which it is built and governed.

From Product Firms to Infrastructure Firms

Traditional technology companies often revolved around discrete products. Software firms sold licenses, hardware manufacturers produced devices, and even large enterprise platforms typically offered identifiable services tied to specific applications. Artificial intelligence alters this model by transforming the underlying product into infrastructure.

Large language models and generative systems increasingly function as foundational layers embedded within countless downstream applications. Developers build services atop them, businesses integrate them into internal workflows, and consumers encounter them through multiple interfaces simultaneously. The model becomes less a product than a substrate upon which other products are constructed.

Infrastructure firms operate under different economic dynamics than product firms. Their power arises from scale, dependency, and ecosystem entrenchment. When thousands of organizations build services atop a shared foundation, switching costs rise and the infrastructure provider acquires structural leverage. Artificial intelligence intensifies this pattern because training and operating frontier models requires enormous computational resources, specialized hardware, and vast datasets. These requirements create barriers that concentrate development within a relatively small number of firms. Yet concentration alone does not determine outcomes. What matters equally is how those firms choose to wield infrastructure power, whether they treat it as a proprietary moat or as a platform for broad-based innovation.

The Platform Logic

The economic logic of platforms provides a useful lens for understanding this shift. Platform companies historically benefited from network effects: the value of the service increased as more users joined the system. Social media networks and digital marketplaces exemplify this dynamic. Generative AI introduces a related but distinct feedback loop.

Each interaction with an AI system produces new data. That data can be used to refine models, improve performance, and expand capabilities. Improved systems attract additional users, generating still more data. The result is a self-reinforcing cycle in which scale begets improvement and improvement begets scale.
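The dynamics of that loop can be made concrete with a minimal simulation. The sketch below is purely illustrative: the functional forms (model quality rising with the logarithm of accumulated data, adoption growing in proportion to quality) and every parameter value are assumptions chosen to show the compounding shape, not empirical estimates.

```python
# Minimal, illustrative sketch of the data flywheel described above.
# All parameters and functional forms are assumptions, not measurements.
import math

def simulate_flywheel(quarters=12, users=1_000, data=0.0,
                      data_per_user=1.0, adoption_rate=0.05):
    """Trace how usage data and model quality can reinforce each other."""
    history = []
    for q in range(quarters):
        data += users * data_per_user                # each interaction yields training data
        quality = math.log1p(data)                   # assumed diminishing returns to data
        users = int(users * (1 + adoption_rate * quality))  # better model attracts more users
        history.append((q + 1, users, round(quality, 2)))
    return history

for quarter, users, quality in simulate_flywheel():
    print(f"Q{quarter}: users={users:,}  quality index={quality}")
```

Change the assumed returns to data and the curve flattens quickly; the loop compounds only so long as each marginal interaction still improves the model.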

This feedback loop blurs the boundary between product usage and model development. The interface simultaneously delivers value to users and generates information that strengthens the underlying system. Firms controlling large-scale AI platforms may accumulate advantages that compound over time. But compounding advantage is not the same thing as guaranteed dominance. The history of technology is littered with infrastructure providers that held commanding positions and lost them.

Erik Brynjolfsson, director of Stanford’s Digital Economy Lab, has characterized AI as a general-purpose technology comparable to the steam engine and electricity, one that reverberates through every corner of the economy and spawns waves of complementary innovations. His research with Danielle Li and Lindsey Raymond found that access to generative AI tools increased worker productivity by fourteen percent on average, with particularly pronounced gains among novice and lower-skilled workers. That last finding deserves emphasis. It means the same infrastructure that consolidates corporate power can simultaneously distribute capability more broadly than any previous technology. The less experienced customer service agent, equipped with an AI assistant trained on the patterns of the best performers, closes the gap. That is not just an economic outcome. It is a statement about who gets to participate in the next economy and on what terms.

At the same time, the presence of open-source models introduces an important countervailing force. Distributed communities of developers can adapt and extend open systems without relying on centralized corporate infrastructure. The future corporate landscape of AI will likely be shaped by the tension between these two models: concentrated infrastructure platforms and decentralized innovation ecosystems.

Open Models and Closed Systems

This tension has produced two competing strategic approaches within the industry. Some companies pursue tightly controlled proprietary models hosted on centralized cloud infrastructure. These systems offer high performance, extensive safety testing, and predictable monetization through subscription and API access. Their capabilities remain under the direct supervision of the organizations that built them.

Other organizations emphasize open-weight or open-source models. By releasing model architectures and training methods publicly, they enable developers worldwide to experiment, modify, and deploy AI systems independently. This approach sacrifices some centralized control but encourages rapid experimentation and innovation.
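The structural difference between the two approaches shows up in how a developer integrates them. The sketch below is schematic rather than vendor documentation: the hosted endpoint, model identifier, and response fields are placeholders, and the open-weight path uses the Hugging Face transformers library only as one familiar example of local deployment.

```python
# Schematic contrast between the two integration paths discussed above.
# The endpoint, model name, and response shape are hypothetical placeholders.
import requests
from transformers import AutoModelForCausalLM, AutoTokenizer

def closed_completion(prompt: str, api_key: str) -> str:
    """Proprietary path: capability lives behind the provider's hosted API."""
    response = requests.post(
        "https://api.example-provider.com/v1/generate",   # hypothetical endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt, "max_tokens": 64},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]                        # placeholder response field

def open_completion(prompt: str, model_name: str = "open-model-org/example-7b") -> str:
    """Open-weight path: the weights run on infrastructure the integrator controls."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)  # model name is a placeholder
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

The asymmetry is the point: in the first path the capability, its pricing, and its availability remain under the provider's control; in the second, the integrator trades that dependency for the operational burden of running the model itself.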

The contrast echoes earlier debates in the software world between proprietary platforms and open-source ecosystems. Closed systems provide stability and predictable revenue streams. Open systems encourage diversity and experimentation but may produce fragmentation. The balance between these models will influence not only market competition but also the distribution of AI capabilities across society. Neither approach is inherently superior. The most resilient AI ecosystems may prove to be those that combine the reliability and safety infrastructure of proprietary platforms with the experimental energy of open communities.

The Research and Product Tension

Artificial intelligence firms also inhabit a persistent tension between research culture and commercial imperatives. Scientific research traditionally values openness, peer review, and collaborative progress. Breakthroughs are published, methods are shared, and advances build cumulatively across institutions.

Corporate strategy operates under different incentives. Proprietary knowledge can confer competitive advantage, and intellectual property protection often determines long-term profitability. Companies developing frontier AI systems must therefore navigate an uneasy balance between transparency and secrecy.

The structural experiments of the past decade illustrate this tension concretely. DeepMind began as an independent research laboratory before its acquisition by Google, where it has had to reconcile a culture of open scientific publication with the commercial imperatives of one of the world’s largest technology companies. OpenAI launched as a nonprofit research organization, then evolved into a capped-profit hybrid, restructuring its governance to attract the capital required for frontier model development while attempting to preserve its original mission. These are not failures of principle. They are genuine institutional experiments, attempts to build corporate forms capable of holding research ambition and commercial reality in the same structure. Whether they succeed over the long term remains an open question, but the effort itself represents a kind of organizational innovation that has few precedents.

Too much openness risks eroding the competitive advantage required to sustain large-scale research investments. Too much secrecy risks slowing the scientific progress upon which the entire field depends. The corporate form of the interface exists within a continuous negotiation between the norms of inquiry and the demands of competition. Getting that negotiation right may matter as much as any technical breakthrough.

Governance Inside the Firm

Recognizing the societal implications of their technologies, several AI firms have experimented with novel governance structures. Hybrid organizational models combine nonprofit oversight bodies with for-profit subsidiaries responsible for commercial deployment. Internal safety boards review model releases, and dedicated teams work to anticipate downstream risks before products reach the public.

These arrangements reflect a recognition that artificial intelligence carries broader consequences than conventional software products. They also represent a form of corporate responsibility that deserves acknowledgment rather than reflexive skepticism. Many of these governance innovations emerged voluntarily, driven by organizational leadership that understood the stakes rather than by regulatory compulsion. Companies have published model cards detailing system capabilities and limitations, participated in government-organized safety consultations, and in some cases delayed product releases to address identified risks. None of this is perfect. But it is not theater, either.

Yet internal governance alone has limits. Corporate boards ultimately answer to investors, and competitive pressure encourages rapid deployment. Ethical review processes can struggle to keep pace with product cycles that move at the speed of software development. The question is not whether internal corporate governance matters, because it clearly does, but whether it is sufficient on its own to manage technologies whose effects extend far beyond the boundaries of any single firm.

Regulation and the Firm

Historically, industries that developed into essential infrastructure often transitioned toward some form of regulatory oversight. Railroads, electrical utilities, and telecommunications networks eventually became subject to rules governing access, pricing, and reliability. These frameworks emerged because the infrastructure they controlled proved too important to remain entirely unregulated.

Artificial intelligence may follow a similar trajectory. Governments have begun debating licensing regimes for frontier models, mandatory safety evaluations, and transparency requirements regarding training data. The European Union’s AI Act, executive orders in the United States, and various national strategies around the world reflect the growing recognition that certain AI capabilities resemble public infrastructure in their reach and consequence.

Mariana Mazzucato, in The Entrepreneurial State (2013), offered a framework that complicates the usual narrative of regulation as a burden on innovation. Her research demonstrated that the most transformative technologies of the past century, from the internet to GPS to touchscreen interfaces, emerged not from private enterprise alone but from sustained public investment in foundational research. The relationship between state and firm, in her account, is not adversarial but symbiotic: public institutions absorb the earliest and highest risks, while private firms bring resulting technologies to scale. Applied to AI, this framework suggests that well-designed regulation need not stifle innovation. It can establish the conditions of trust and stability that allow both firms and the broader public to invest confidently in AI’s long-term development.

At the same time, poorly designed regulation risks entrenching existing incumbents. Compliance costs associated with safety standards or auditing requirements may be easily absorbed by large firms while excluding smaller research organizations and startups whose energy the field depends upon. Balancing safety, competition, and innovation will require regulatory approaches that preserve experimentation while preventing excessive concentration. The goal is not to constrain the corporate form of the interface but to shape the environment in which it can evolve responsibly.

The Global Corporate Landscape

The corporate form of the interface is also shaped by geopolitics. AI firms operate within national regulatory environments while competing globally for talent, capital, and computing infrastructure. Governments increasingly view artificial intelligence as a strategic technology, encouraging domestic development through subsidies, research funding, and industrial policy.

Different political systems approach this challenge in sharply different ways. The European Union has prioritized rights-based regulatory frameworks, establishing the AI Act as a risk-tiered compliance regime that reflects deep continental commitments to data protection and individual autonomy. China has pursued a model of state-coordinated development, directing substantial public resources toward domestic AI champions while maintaining tight control over data flows and model deployment. The United States has historically emphasized private-sector innovation and venture capital ecosystems, though recent executive orders and bipartisan legislative proposals signal a growing appetite for more structured oversight.

The resulting global landscape may consist of distinct AI development blocs shaped by regional governance models, each producing different balances between corporate autonomy, public oversight, and strategic ambition. Corporate structures will evolve within these geopolitical constraints, producing a diversity of organizational forms rather than a single dominant template. For firms operating across borders, navigating this patchwork of expectations will become a core institutional competency.

Corporate Scale and Public Accountability

As AI systems mediate larger portions of economic and informational life, the firms controlling them acquire significant influence. Search engines already shape information access, and social media platforms have demonstrated the capacity to affect public discourse. Generative systems may extend that influence into domains of reasoning and decision support.

This influence is not, in itself, evidence of malicious intent. Infrastructure industries historically acquire power simply by virtue of the dependencies they create. The railroads that Chandler studied did not set out to become political actors, yet the dependencies they fostered gave them enormous leverage over the economies they served. The same structural dynamic applies to AI. The issue is not villainy but architecture: when private organizations control infrastructure through which knowledge flows, the relationship between corporate authority and public governance requires careful tending.

Ensuring that the firms building the interface remain accountable to the societies that rely on it therefore becomes a central challenge of AI governance. This does not require treating corporations as adversaries. It requires building institutional arrangements, both internal to firms and external through regulation and public oversight, that keep the incentives of infrastructure providers aligned with the interests of the communities they serve.

Glass Half Full

Corporate institutions have historically proven remarkably capable of adaptation, and that capacity should not be dismissed. The modern corporation itself emerged in response to earlier technological revolutions that required coordination on unprecedented scales. Chandler documented how the organizational innovations of the railroad era (professional management, divisional structure, systematic accounting) became the template for industrial capitalism writ large. Artificial intelligence may likewise inspire new organizational forms better suited to managing complex technological systems.

The evidence for productive corporate adaptation is already accumulating. Brynjolfsson’s research demonstrates that firms deploying AI tools thoughtfully can improve outcomes not only for shareholders but for workers and customers. His finding that AI assistance disproportionately benefits less experienced workers, effectively disseminating the best practices of top performers across an organization, suggests that corporate AI deployment can function as a mechanism for capability distribution rather than capability hoarding. When a novice worker gains access to the institutional knowledge that used to take years to acquire, something meaningful has shifted in the relationship between the firm and the people inside it.

Perez’s historical framework offers additional grounds for measured optimism. In her account, every technological revolution passes through an installation period marked by speculative excess and institutional mismatch before entering a deployment period in which the new paradigm is absorbed into the broader social fabric. When institutions adapt successfully, what follows is not continued turbulence but a golden age of broadly shared prosperity. We are, by her reckoning, somewhere in the turning point between installation and deployment. The decisions made now by corporations, regulators, and the public will determine whether deployment arrives through design or through wreckage.

Hybrid institutions combining public oversight with private innovation may emerge. Cooperative data trusts could redistribute control over training datasets. Open-source ecosystems may continue to counterbalance proprietary platforms. New corporate structures that embed accountability at the governance level, rather than treating it as an afterthought, are already being tested in real time.

The corporate form of the interface remains in flux. The companies building AI today resemble the early infrastructure firms of previous technological eras: experimental organizations operating in regulatory environments that have not yet fully caught up with their capabilities.

History suggests that such moments eventually stabilize. The question is not whether the corporate architecture of AI will evolve, but whether it will evolve deliberately through institutional design or reactively through crisis. The firms examined here do not operate in isolation. They exist within a global order of competing nations, divergent regulatory philosophies, and strategic ambitions that increasingly treat artificial intelligence as a matter of sovereignty itself. That global order is the subject of the next essay in this series.

Further Reading

The following works informed the arguments in this essay and offer deeper engagement with the questions it raises.

Corporate History and Organizational Theory

Alfred D. Chandler Jr., The Visible Hand: The Managerial Revolution in American Business (Harvard University Press, 1977). The foundational account of how railroads and industrial firms invented modern corporate management. Chandler’s argument that administrative coordination displaced market coordination remains the essential starting point for understanding why large firms exist and how they evolve.

Alfred D. Chandler Jr., Scale and Scope: The Dynamics of Industrial Capitalism (Harvard University Press, 1990). Extends the argument of The Visible Hand across three national economies, comparing the organizational development of large industrial enterprises in the United States, Great Britain, and Germany.

Ronald H. Coase, “The Nature of the Firm,” Economica 4, no. 16 (1937): 386–405. The classic theoretical treatment of why firms exist as alternatives to market transactions. Coase’s framework of transaction costs remains relevant to understanding why AI development concentrates within large organizations.

Technological Revolutions and Economic Change

Carlota Perez, Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages (Edward Elgar, 2002). Maps the recurring pattern of installation, frenzy, and deployment across five technological revolutions. Essential for understanding where AI sits in the historical cycle and what institutional adaptation the current moment demands.

Erik Brynjolfsson and Andrew McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (W.W. Norton, 2014). An accessible and optimistic account of how digital technologies, including early AI, are reshaping economic growth. Useful context for the productivity arguments in this essay.

Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond, “Generative AI at Work,” NBER Working Paper 31161 (2023). The empirical study cited in this essay demonstrating fourteen percent productivity gains from AI assistance in customer service, with disproportionate benefits for less experienced workers.

Erik Brynjolfsson, Daniel Rock, and Chad Syverson, “Artificial Intelligence and the Modern Productivity Paradox: A Clash of Expectations and Statistics,” NBER Working Paper 24001 (2017). Anticipates the “productivity J-curve” argument, contending that general-purpose technologies like AI require extensive complementary investment before their full economic impact becomes visible in aggregate statistics.

Innovation, the State, and Public-Private Dynamics

Mariana Mazzucato, The Entrepreneurial State: Debunking Public vs. Private Sector Myths (Anthem Press, 2013). Argues that the state has historically functioned as the primary risk-taker in foundational innovation, from the internet to GPS. Reframes the relationship between public investment and private enterprise as symbiotic rather than adversarial.

Mariana Mazzucato, Mission Economy: A Moonshot Guide to Changing Capitalism (Harper Business, 2021). Extends the argument of The Entrepreneurial State into a framework for directing public-private collaboration toward large-scale societal challenges, including climate and digital infrastructure.

Platform Economics and Digital Power

Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (PublicAffairs, 2019). A detailed critique of the data-extraction business model pioneered by advertising-driven platforms. While its focus is on surveillance rather than AI infrastructure, Zuboff’s analysis of behavioral data as a commodity illuminates the stakes of corporate control over information systems.

Andrew McAfee and Erik Brynjolfsson, Machine, Platform, Crowd: Harnessing Our Digital Future (W.W. Norton, 2017). Explores the interplay between human judgment and machine capability, platform dynamics, and the shift from centralized to distributed innovation. Directly relevant to the open-versus-closed tension examined in this essay.

Nick Srnicek, Platform Capitalism (Polity Press, 2017). A concise political-economic analysis of how digital platforms extract value and reshape market structures. Provides useful theoretical grounding for understanding AI firms as a new species of platform.

AI Governance and Policy

Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023). A sweeping historical argument that technological progress does not automatically produce shared prosperity, and that institutional choices determine whether new technologies benefit the many or the few. A counterweight to purely optimistic accounts of AI’s economic potential.

Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019). Addresses the technical and philosophical challenges of aligning AI systems with human values. Relevant to the governance-inside-the-firm questions explored in this essay.

Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023). Analyzes the divergent regulatory approaches of the United States, European Union, and China toward digital technology, providing essential context for the geopolitical dimensions of AI corporate governance.