Artificial Intelligence (AI) systems help organizations manage complexity: they reduce the cost of predictions and hold the promise of more, better and faster decisions that enhance productivity and innovation. However, their deployment increases complexity at all levels of the economy, and with it the risk of undesirable outcomes. Organizationally, uncertainty about how to adopt fallible AI systems could create AI divides between sectors and organizations. Transactionally, pervasive information asymmetries in AI markets could lead to unsafe, abusive and mediocre applications. Societally, individuals might accept extreme levels of AI deployment in other sectors in exchange for lower prices and greater convenience, creating disruption and inequality. Temporally, scientific, technological and market inertias could lock society into AI trajectories that turn out to be inferior to alternative paths. New Sciences (and Policies) of the Artificial are needed to understand and manage the new economic complexities that AI brings. They must acknowledge that AI technologies are not neutral and can be steered in societally beneficial directions, guided by four principles: experimentation and evidence, to discover where and how to apply AI; transparency and compliance, to remove information asymmetries and increase safety in AI markets; social solidarity, to share the benefits and costs of AI deployment; and diversity, both in the AI trajectories that are explored and pursued and in the perspectives that guide this process. This will involve an explicit elucidation of human and social goals and values: a mirror of the Turing test in which different societies learn about themselves through their responses to the opportunities and challenges that powerful AI technologies pose.