Abstract
The second MIT Decentralized AI Roundtable focused on the growing importance of decentralized systems in artificial intelligence, covering topics ranging from AI scalability to privacy and distributed computing. Abhishek Singh discussed how decentralization can improve scalability and accommodate heterogeneity in AI, emphasizing the future of personal AI agents and collaborative tools. Martin Jaggi introduced decentralized training methods for large language models (LLMs), showcasing how collaborative, distributed systems could reduce reliance on centralized infrastructure. Holger Roth presented NVIDIA FLARE, a federated learning platform that addresses real-world deployment challenges in industries such as healthcare while maintaining privacy and scalability. Ben Fielding highlighted Gensyn, a decentralized compute protocol for machine learning, demonstrating how distributed devices could power global AI training. Vincent Weisser emphasized the need for decentralized AGI to prevent monopolization and ensure accessibility for all, advocating for a multi-agent, multipolar future. The discussions revealed both the challenges and opportunities that decentralized AI systems present, stressing the importance of scalability, privacy, and open access for future AI development.