The State of AI Report 2023, authored by Nathan Benaich and the Air Street Capital team, provides a comprehensive overview of the advancements, trends, and challenges in the field of artificial intelligence.
The report covers key areas such as research, industry, politics, safety, and predictions for the future.
Research
The research section highlights the significant progress made in large language models (LLMs), particularly the release of OpenAI's GPT-4, which demonstrated superior performance across a wide range of tasks and benchmarks. The report also notes growing efforts to match or surpass proprietary model performance using smaller models, better datasets, and longer context windows, often built on Meta's LLaMa-1/2. It also emphasizes the increasing adoption of LLMs and diffusion models in the life sciences community for molecular biology and drug discovery.
Industry
The industry section focuses on NVIDIA's dominance of the GPU market, driven by surging demand from nation states, startups, big tech, and researchers alike. The report also discusses the impact of US export controls on advanced chip sales to China and the emergence of export-control-proof chip variants from major vendors. Generative AI applications, led by ChatGPT and spanning image, video, coding, and voice, have attracted significant investment, totaling $18 billion in venture capital and corporate funding.
Politics
The political landscape surrounding AI is becoming increasingly complex, with the world dividing into clear regulatory camps while progress on global governance remains slow. The report notes the efforts of major AI labs to fill the governance vacuum and the ongoing chip wars between the US and China. It also discusses AI's potential impact on sensitive areas such as elections and employment.
Safety
The safety section addresses growing concern over the existential risks posed by advanced AI systems, a debate that reached the mainstream for the first time. The report highlights the vulnerability of high-performing models to jailbreaking and the challenges associated with reinforcement learning from human feedback (RLHF). Researchers are actively exploring alternatives, such as self-alignment and pretraining with human preferences, to address these challenges. The report also emphasizes the increasing difficulty of evaluating state-of-the-art models consistently.