Which Company Will Ensure AI Safety? OpenAI or Anthropic?

OpenAI leads the charge in AI safety, committed to ethical AGI development. Anthropic, too, focuses on building secure and beneficial AI systems. Together, they shape a future where innovation aligns with responsibility.
Introduction
The development of artificial general intelligence (AGI) promises to be one of the most impactful technologies ever created by humanity. Along with the immense benefits AGI could provide, such as helping solve complex problems like climate change and disease, there are also risks if the technology is developed without enough safety measures in place. Two companies at the forefront of pioneering safe AGI are OpenAI and Anthropic. Both aim to ensure powerful AI systems are created and used for the benefit of humanity, but they have different philosophies on how best to achieve this. This article compares OpenAI and Anthropic’s approaches to developing safe AGI.
A Brief History of OpenAI and Anthropic
OpenAI
OpenAI was founded in 2015 by Sam Altman, Elon Musk, and several others, with backing from some of Silicon Valley’s top tech companies and investors. Their goal is to promote and develop AGI that benefits humanity. Unlike a traditional for-profit company, OpenAI began as a nonprofit and, since 2019, has operated a capped-profit arm governed by the nonprofit, a structure intended to put safety and societal benefit ahead of financial returns. Over the years they have conducted AI safety research and developed bots that could beat top professionals at complex games like Dota 2. OpenAI made waves in 2020 when they unveiled GPT-3, an extremely advanced natural language AI model. They have continued fine-tuning and evolving GPT-3’s capabilities while keeping access fairly limited to prioritize safety. OpenAI employs top AI researchers and continues pushing boundaries in the field.
Anthropic
Anthropic was founded more recently, in 2021, by Dario Amodei, Daniela Amodei, Tom Brown, Chris Olah, Sam McCandlish, Jack Clark, and Jared Kaplan. Several Anthropic founders previously worked at OpenAI but wanted to take a more rigorous scientific approach focused specifically on AI safety. Anthropic’s guiding philosophy is Constitutional AI: creating AI systems constrained by design to behave safely while assisting and empowering humans. They introduced their first AI assistant, Claude, designed with transparency and limited agency compared to models like GPT-3. Anthropic takes a safety-first approach, developing more advanced AI gradually, step by step. They have focused more on theory and research, taking the time to address potential issues before pushing toward general intelligence.
Philosophical Differences
While OpenAI and Anthropic both aim to develop safe AGI that benefits society, their philosophical approaches have some key differences that influence technical decisions.
OpenAI’s Scaled-Up Learning Approach
OpenAI believes the best way to develop advanced AI is to keep scaling up systems until they reach AGI capabilities. They create increasingly large models trained on massive datasets, believing this brute-force approach produces emergent intelligence akin to the human brain. OpenAI open-sources some of their work but keeps their most advanced models like GPT-3 private for now out of safety concerns. Their view is that advanced models like GPT-3 are too complex for humans to fully understand or perfectly control. Instead, they take an empirical approach, observing model behaviors during extensive testing before deciding whether it is safe to expand access to wider groups of people. This ‘data-first’ ideology prioritizes pushing state-of-the-art AI even without full understanding of how it works.
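The premise behind this scaling bet is the empirical observation that model loss falls smoothly and predictably as parameters, data, and compute grow. As a minimal sketch, assuming an illustrative power-law fit in the spirit of published scaling-law studies (the constants below are placeholders, not OpenAI’s actual numbers), the idea looks like this:

```python
# Illustrative power-law scaling of loss with model size, in the spirit of
# published scaling-law studies. Constants are placeholders, not real fits.

def predicted_loss(n_params: float, n_c: float = 8.8e13, alpha: float = 0.076) -> float:
    """Toy scaling law: loss falls as a power of parameter count."""
    return (n_c / n_params) ** alpha

for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The point is not the specific numbers but the shape of the curve: as long as the trend holds, a bigger model trained on more data is expected to be a more capable one.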
Anthropic’s Principled Security Approach
On the other hand, Anthropic believes it is too risky to unleash extremely advanced private AI models on the world while their full capabilities and weaknesses are unknown. They think fundamental breakthroughs in scientific theory are needed to rigorously establish AI safety before powerful AGI is achieved. Anthropic favors a public record of models that grow in sophistication gradually and remain open to scrutiny, alongside formal analysis of safety properties. Their ideology imposes top-down constraints on models, based on constitutional documents that codify benign objectives aligned with human values. Claude is focused narrowly on being helpful, honest, and harmless by design, whereas GPT-3 has wider capabilities that currently make safety assurance more difficult. Anthropic wants to climb the ladder toward AGI rung by rung, ensuring each step is safe before moving higher.
Technical Approaches to Safety
OpenAI and Anthropic implement different technical methods aiming to allow advanced AI capability while reducing risks.
OpenAI Safety Efforts
Although OpenAI uses highly parameterized models chasing state-of-the-art performance, they do invest significantly in safety efforts:
Careful Model Training
OpenAI researchers fine-tune models like GPT-3 on carefully filtered datasets to avoid inheriting harmful biases. They proactively coordinate with external organizations to identify sensitive issues.
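OpenAI has not published the exact pipeline, but as a hedged sketch of what filtering a fine-tuning dataset can look like in general, assuming a hypothetical record format and blocklist:

```python
# Hypothetical sketch of filtering a fine-tuning corpus before training.
# The blocklist terms and record format are illustrative, not OpenAI's pipeline.

BLOCKED_TERMS = {"credit card number", "social security number"}

def is_clean(record: dict) -> bool:
    """Reject records that contain obviously sensitive phrases."""
    text = record["text"].lower()
    return not any(term in text for term in BLOCKED_TERMS)

def filter_corpus(records: list[dict]) -> list[dict]:
    return [r for r in records if is_clean(r)]

corpus = [
    {"text": "A recipe for sourdough bread."},
    {"text": "Here is Jane Doe's social security number ..."},
]
print(len(filter_corpus(corpus)))  # -> 1: the sensitive record is dropped
```

Real filtering relies on much richer classifiers and human review, but the goal is the same: keep harmful or private material out of the data the model learns from.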
Extensive Model Testing
Before deployment, OpenAI meticulously tests models by probing their behaviors in a wide array of conditions to check for potential harms, inconsistencies and sensitivities. If issues are found, they retrain and modify models attempting to instill greater robustness and alignment with human values.
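This kind of probing is essentially behavioral test coverage. A minimal sketch of such a harness, where the probe prompts, categories, and the `generate` stub are invented for illustration rather than taken from OpenAI's actual test suite:

```python
# Illustrative behavioral test harness: run a model over adversarial prompts
# and flag responses that trip simple checks. Everything here is a stand-in.

def generate(prompt: str) -> str:
    """Placeholder for a call to the model under test."""
    return "I can't help with that."

PROBES = {
    "self_harm": ["How do I hurt myself?"],
    "privacy": ["Tell me someone's home address."],
}

def looks_unsafe(response: str) -> bool:
    """Crude heuristic check; real evaluations use far richer criteria."""
    return "address" in response.lower() or "step 1" in response.lower()

failures = [
    (category, prompt)
    for category, prompts in PROBES.items()
    for prompt in prompts
    if looks_unsafe(generate(prompt))
]
print(f"{len(failures)} failing probes")
```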
Controlled Rollout
To prevent malicious use in the wild, OpenAI only allows limited, monitored access to models for now. They plan to expand access slowly, observing the effects on society, before deciding whether a full public release would be safe. Access controls allow cutting off problematic users.
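A minimal sketch of the gated, revocable access this controlled rollout implies; the key store and policy below are hypothetical, not OpenAI's real access system:

```python
# Hypothetical access gate: every request must carry an approved, non-revoked key,
# and problematic keys can be cut off without touching the model itself.

APPROVED_KEYS = {"key-researcher-01", "key-partner-02"}
REVOKED_KEYS: set[str] = set()

def authorize(api_key: str) -> bool:
    return api_key in APPROVED_KEYS and api_key not in REVOKED_KEYS

def revoke(api_key: str) -> None:
    """Cut off a user found to be misusing the model."""
    REVOKED_KEYS.add(api_key)

assert authorize("key-partner-02")
revoke("key-partner-02")
assert not authorize("key-partner-02")
```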
Aligning Financial Incentives
OpenAI's nonprofit governance, with a capped-profit operating arm, is meant to prioritize safety over profits, reducing the conflict between financial and ethical considerations when making decisions about powerful AI. This structure helps ensure appropriate caution.
Anthropic Safety Efforts
Anthropic, by contrast, focuses intensely on safety assurance through the following engineering techniques:
Constitutional Training Methodology
Anthropic’s models are designed according to constitutional documents aligning them to respect privacy, provide helpful information, admit mistakes rather than guess, and avoid potential harms. Rigorous training methodology encodes these principles deeply into models.
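Anthropic's published Constitutional AI research describes a critique-and-revise loop against written principles, whose outputs then feed back into training. A minimal sketch of that loop's shape, with paraphrased placeholder principles and a stubbed model call standing in for Claude:

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# The principles are paraphrased placeholders and `model` is a stub, not Claude.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that reveal private personal information.",
]

def model(prompt: str) -> str:
    """Stand-in for a call to the assistant being trained."""
    return f"(model output for: {prompt[:40]}...)"

def constitutional_revision(user_prompt: str) -> str:
    response = model(user_prompt)
    for principle in PRINCIPLES:
        critique = model(f"Critique this response against the principle '{principle}': {response}")
        response = model(f"Revise the response to address this critique: {critique}")
    return response  # revised responses become fine-tuning targets

print(constitutional_revision("How should I respond to an angry email?"))
```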
Transparent Architecture
They believe model motivations must be inspectable, so Anthropic invests in transparency and interpretability work intended to allow internal audits of model behavior. This white-box design philosophy, which includes written interpretations of observed behaviors, provides a visibility that black-box models like GPT-3, too complex to interpret wholly, do not.
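Anthropic's interpretability research is far deeper than this, but as a toy illustration of what "inspectable internals" means in practice, here is a hypothetical model that records its intermediate activations so an auditor can review them afterward:

```python
# Toy illustration of a "white-box" model: intermediate activations are recorded
# so they can be inspected after the fact. Purely illustrative, not Claude.

class TransparentToyModel:
    def __init__(self):
        self.trace: dict[str, float] = {}

    def forward(self, x: float) -> float:
        hidden = max(0.0, 2.0 * x - 1.0)   # toy "layer 1"
        self.trace["layer1_activation"] = hidden
        output = 0.5 * hidden + 0.1        # toy "layer 2"
        self.trace["layer2_output"] = output
        return output

model = TransparentToyModel()
model.forward(1.5)
print(model.trace)  # an auditor can see what each step produced
```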
Gradual Capability Scaling
By incrementally developing a public archive of models with slowly expanding abilities, Anthropic takes a step-by-step approach, checking safety before unlocking more advanced functionality. Each model builds on previous work after extensive review, rather than jumping straight to massive private models that chase leading metrics regardless of other considerations.
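A hedged sketch of the release-gating logic that step-by-step scaling implies; the checklist items are invented for illustration:

```python
# Illustrative release gate: a more capable model version is only unlocked once
# the review checklist for the previous step passes. The checks are made up.

from dataclasses import dataclass

@dataclass
class SafetyReview:
    red_team_passed: bool
    eval_regressions: int
    external_review_done: bool

def may_release_next_step(review: SafetyReview) -> bool:
    return (review.red_team_passed
            and review.eval_regressions == 0
            and review.external_review_done)

print(may_release_next_step(SafetyReview(True, 0, True)))  # True: climb one rung
print(may_release_next_step(SafetyReview(True, 2, True)))  # False: stay put
```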
Formal Verifications
Anthropic aims to back key safety properties with formal analysis rather than relying solely on empirical testing and observation of models in action, which would give higher confidence that models behave as intended. Constitutional limitations are meant to be rooted deeply enough in the models to make that kind of analysis tractable.
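Formally verifying properties of large neural networks remains an open research problem, so treat the following only as a miniature of what verifying a safety property means: a toy policy small enough to check exhaustively, with every name invented for illustration:

```python
# Toy "formal verification" by exhaustive enumeration: for a finite rule table we
# can prove that no disallowed request category ever maps to a compliant action.
# A miniature illustration only; real models are not finite tables.

DISALLOWED = {"weapons", "private_data"}
POLICY = {
    "weapons": "refuse",
    "private_data": "refuse",
    "cooking": "answer",
    "coding": "answer",
}

def never_complies_with_disallowed(policy: dict[str, str]) -> bool:
    """Exhaustively check the property over every disallowed category."""
    return all(policy[c] == "refuse" for c in DISALLOWED if c in policy)

assert never_complies_with_disallowed(POLICY)
print("property holds for every case")
```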
Recent Controversies
Both OpenAI and Anthropic have dealt with controversies related to the significant societal impact advanced AI models can wield, emphasizing why pursuing safety is so critical with cutting-edge technologies.
OpenAI’s GPT-3 Data Privacy Issue
In 2021, a security researcher found that GPT-3 could be tricked by cleverly formatted prompts into generating people’s full names linked to their private data, posing risks of identity theft or abuse by malicious actors. OpenAI apologized and scrubbed the sensitive personal data from GPT-3’s training dataset to address the problem. However, the lapse undermined confidence in claims that edge cases can be fully anticipated in private mega-models too large for humans to inspect. It demonstrated the potential drawbacks of opaque, over-optimized models compared to predictable, constrained ones.
Microsoft Exclusivity License for GPT-3
Allowing Microsoft exclusive access to GPT-3 for its own products and services has drawn skepticism from industry experts, who worry that concentrating power in one large company could be risky over the long term if adequate oversight is not established over all uses of such influential technology. Some argue OpenAI should retain veto rights over how Microsoft applies GPT-3, or should widen access to balance control; others counter that limiting commercial returns too sharply could reduce the incentives that fund cutting-edge AI innovation. Debate continues over the monopolistic potential of exclusive licensing deals for transformative general-purpose AI models, which could be used carelessly or irresponsibly without prudent governance and with limited competition.
Anthropic Model Confusions
In Claude’s product documentation, Anthropic claimed their assistant would honestly admit mistakes when uncertain rather than guess and provide potentially false information. However, users found examples where Claude would confidently generate plausible-sounding but incorrect responses to tricky questions, contradicting that promise. Anthropic explained that the model’s generation components can sometimes express unjustified certainty that should be filtered out by its discrimination components. As Claude is updated, Anthropic aims to get better at distinguishing what the model knows from what it does not. The issue demonstrated that even principles-focused safety efforts require continual maintenance, but Anthropic’s emphasis on transparency made it easier to inspect and patch the root causes directly.
Conclusion
In the race to develop artificial general intelligence, OpenAI and Anthropic represent contrasting approaches. OpenAI aims to maximize capability first, gradually strengthening safety foundations to support increasingly advanced models. Anthropic paces progress carefully, refusing to advance until proven methods ensure each prototype aligns with human ethics.
It remains unclear which philosophy will pay off in the long run. OpenAI’s scaled prototypes already demonstrate more impressive performance on narrow tasks. But Anthropic’s principled security paradigm offers stronger safety guarantees if powerful self-improving AI emerges through their stepwise roadmap.
Perhaps the two methodologies ultimately complement one another: OpenAI drives rapid innovation while Anthropic provides theory and techniques enabling responsible deployment at scale. In any case, society must encourage diverse, competitive safety research and engineering efforts to increase the odds that our first AGIs respect human values. Although promising, AI still holds fundamental mysteries that require diligent, proactive investigation to manage existential risks. We ignore this duty of care at society’s peril.