OpenAI is grappling with a governance crisis after an internal revolt led by four board members. Initially structured as a nonprofit with a capped-profit subsidiary, the company is now reconsidering that model. Drawing lessons from enterprises like Mozilla, OpenAI is contemplating a dual-board system to balance its mission with investor expectations. The stakes of OpenAI's mission to advance AI technology, with its implications for global employment and societal welfare, add weight to these governance decisions. The recent turmoil highlights the need for OpenAI to clarify its identity: is it an ethically grounded startup, or a lasting public institution dedicated to humanity's benefit?
Several organizations, including Ikea and Novo Nordisk, operate as enterprise foundations, blending nonprofit control with commercial ventures, and OpenAI's structure resembles this model. Mozilla offers a long-running example, with separate boards for its nonprofit and for-profit units. Mozilla's nonprofit board retains ultimate authority: it oversees budgets and holds the power to remove for-profit board members. This arrangement balances philanthropic goals with market objectives, granting the commercial arm operational autonomy while upholding the mission set out in the Mozilla Manifesto to keep the internet open and accessible.
OpenAI's governance challenge is striking a delicate balance between its mission and investor expectations, most notably Microsoft's $13 billion commitment. Microsoft CEO Satya Nadella expressed frustration at being blindsided by CEO Sam Altman's removal, and OpenAI now plans to give Microsoft a nonvoting observer seat on its board, a role that also helps address antitrust concerns. Emulating Mozilla's separate board for for-profit operations could give investors and employees a voice without undermining the nonprofit's authority. The recent turmoil underscores the need for structural safeguards like Mozilla's, where nonprofit ownership of the trademark serves as a check on commercial activities; the licensing fees it generates fund Mozilla's charitable work, suggesting a potential model for independent oversight at OpenAI.
To strengthen governance, OpenAI could adopt rule changes and policies that address concerns about board qualifications and vacancies. Defining criteria for board independence and spelling out succession planning in the bylaws would improve transparency. Given OpenAI's significant market presence, adding business-savvy directors also makes sense. Policies governing directors' communications and conflicts of interest are especially important in light of Sam Altman's varied outside engagements; a communications policy might have defused tensions like Altman's clash with a former director. Greater transparency would honor OpenAI's founding commitments, but it first requires reconciling the discrepancies in the company's publicly available documents.