Minimum Viable World

A concept I came up with in a conversation (1) about artificial general intelligence during one of my Future of Learning Group sessions. The idea is that the approach to creating general intelligence should mimic the process through which human intelligence developed: evolution within an environment in which collaboration, language, and the development of mental faculties proved useful for furthering the species. My hypothesis is that AGI can only emerge by evolving alongside other instances of itself, forced into environments where it must learn to communicate to survive.

This hypothesis leads to the question of a minimum viable world. What is the lowest-fidelity simulation in which you can create the conditions for an evolutionary AI to develop its own meaningful communication with the other agents in the simulation?
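One way to make the question concrete is a minimal sketch of such a world: a Lewis-style signaling game, where agents only reproduce if they manage to get a message across. This is my own toy illustration of what "minimum viable" might mean, not a design from the conversation; every name and parameter below (state/signal counts, population size, mutation rate) is an arbitrary assumption chosen for readability.

# Toy sketch of a "minimum viable world": a population of agents evolves a
# shared code because survival (selection) depends on successful communication.
# All parameters are illustrative assumptions, not a prescribed design.
import random

N_STATES = 3       # world states a sender can observe
N_SIGNALS = 3      # arbitrary signals available to a sender
POP_SIZE = 50
GENERATIONS = 200
ROUNDS = 30        # interactions per agent per generation
MUTATION_RATE = 0.05

def random_agent():
    # Each agent carries a sender policy (state -> signal)
    # and a receiver policy (signal -> guessed state).
    return {
        "send": [random.randrange(N_SIGNALS) for _ in range(N_STATES)],
        "recv": [random.randrange(N_STATES) for _ in range(N_SIGNALS)],
    }

def play_round(sender, receiver):
    # The pair "succeeds" only if the receiver decodes the sender's state.
    state = random.randrange(N_STATES)
    signal = sender["send"][state]
    guess = receiver["recv"][signal]
    return 1 if guess == state else 0

def fitness(agent, population):
    # Score an agent by how well it communicates with random partners,
    # both as sender and as receiver.
    partners = random.sample(population, ROUNDS)
    return sum(play_round(agent, p) + play_round(p, agent) for p in partners)

def mutate(agent):
    # Copy the parent and randomly rewrite a few mapping entries.
    child = {"send": agent["send"][:], "recv": agent["recv"][:]}
    for i in range(N_STATES):
        if random.random() < MUTATION_RATE:
            child["send"][i] = random.randrange(N_SIGNALS)
    for i in range(N_SIGNALS):
        if random.random() < MUTATION_RATE:
            child["recv"][i] = random.randrange(N_STATES)
    return child

def evolve():
    population = [random_agent() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=lambda a: fitness(a, population), reverse=True)
        survivors = scored[: POP_SIZE // 2]   # selection: best communicators survive
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return sorted(population, key=lambda a: fitness(a, population), reverse=True)

if __name__ == "__main__":
    final = evolve()
    best = final[0]
    print("send map:", best["send"], "recv map:", best["recv"])
    print("success rate:", fitness(best, final) / (2 * ROUNDS))

Run as-is, the population usually converges on a consistent state-to-signal mapping and its inverse, i.e. a tiny shared "language" emerges purely from selection pressure. The open question the note is really asking is how much richer than this a world has to be before the communication that emerges is meaningful in the sense relevant to AGI.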

References

  • (1) Future of Learning Group, 2020/10/04, conversation with Sav Siderov, Jorge Zaccario, and Luke Cheng.