OpenAI Calls for Robot Taxes and a 4-Day Workweek
The US-based AI company OpenAI has published an extensive position paper calling on governments worldwide to pursue an active industrial policy for artificial intelligence. The document, titled “Industrial Policy for the Intelligence Age,” is aimed primarily at US policymakers but explicitly claims global relevance.
Assessments of AI's impact on the labor market diverge. On one hand, major tech companies are carrying out large-scale layoffs, often stating that AI is making them more efficient and that they therefore need fewer staff; on the other hand, many companies say that AI is not yet advanced enough to replace entire jobs.
What OpenAI expects from the labor market
OpenAI, for its part, paints a dramatic picture of the future labor market. Already today, the company states, AI systems can take over tasks that previously took humans hours. The next step, it argues, will be systems that handle entire projects currently requiring weeks or months.
“This shift will reshape how organizations run, how knowledge is created, and how people find meaning and opportunity.”
OpenAI does not expect these changes to be distributed evenly: certain professional groups and regions will be hit harder than others. The company explicitly warns that economic gains could become concentrated in a small number of corporations, while workers may become more productive without necessarily sharing in the added value. At the same time, OpenAI sees new opportunities: professions in human care, education, and community services could serve as a safety net for displaced workers, as human connection in these fields remains irreplaceable.
The central demands on governments
Open economy and broad participation
OpenAI calls on governments to treat access to AI as a basic right, comparable to the introduction of electricity or the internet. Specifically, the company proposes creating affordable or free access points to AI foundation models and deliberately including schools, libraries, and disadvantaged communities.
- Strengthening workers’ rights: Employees should receive formal input in AI deployments within the workplace to ensure that technology improves the quality of work and does not create dangerous or exploitative conditions.
- Promoting AI entrepreneurship: Micro-grants and practical support structures should help workers convert their expertise into new businesses.
- Tax reform: As AI could shift the tax base through declining wage income and rising capital gains, tax policy should be adjusted, for example through higher capital gains taxes and levies on automated labor.
- Public wealth fund: A state fund should give every citizen a direct share in the growth of the AI economy, regardless of their own capital holdings.
- Energy infrastructure: Public-private partnerships should accelerate the expansion of the power grid, while ensuring that households do not end up subsidizing AI data centers.
- Efficiency dividends: Productivity gains from AI should be converted into shorter working hours, such as a 4-day or 32-hour work week, better social benefits, and higher pension contributions.
- Adaptive safety nets: Existing social systems such as unemployment insurance and healthcare should be automatically scaled up when defined thresholds for economic disruption are exceeded.
- Portable social benefits: Pensions, health insurance, and continuing education entitlements should be tied to individuals rather than employers.
Societal resilience and security architecture
In the second part of the paper, OpenAI addresses the risks of advanced AI systems and proposes institutional safety mechanisms. The company emphasizes that safety must keep pace with the capabilities of the systems.
- Safety systems for high-risk areas: Tools for detecting and containing misuse in the areas of cybersecurity and biology should be developed and scaled.
- AI trust architecture: Verification standards and privacy-compliant logging systems should ensure transparency and accountability in AI actions.
- Audit regimes: Independent auditors should evaluate high-risk systems before and after market launch. These requirements should apply only to a small number of particularly capable models so as not to hinder innovation.
- Containment plans: Coordinated contingency plans should be developed and tested for the event that dangerous AI systems spread uncontrollably.
- Corporate governance: AI companies should adopt structures that enshrine the public interest, for example corporate forms with a non-profit orientation.
- Rules for government use of AI: Clear legal limits on the use of AI by public authorities should ensure democratic oversight.
- Public participation: Citizens should be given structured opportunities to help shape the values and behaviors of AI systems.
- International coordination: A global network of AI institutes should exchange information on risks and safety measures, similar to existing multilateral security institutions.
How OpenAI positions the paper
OpenAI explicitly emphasizes that the paper is not a set of final recommendations but a starting point for discussion. The company acknowledges that it does not have all the answers and invites governments, civil society, and researchers to help shape the agenda. At the same time, the document sends a clear signal: OpenAI sees itself as a political actor that wants an active hand in defining the framework for AI regulation.
“The transition to superintelligence is not a distant possibility, it’s already underway, and the choices we make in the near term will shape how its benefits and risks are distributed for decades to come.”
Regarding implementation, OpenAI announces concrete steps: a feedback program, research grants of up to 100,000 US dollars, and API credits worth up to one million dollars for policy-related research. In addition, a new OpenAI office in Washington, D.C. is intended to serve as a forum for discussion.