
The Evolution of Generative AI: Insights from Stanford’s 2024 AI Index


Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) recently published its 2024 AI Index Report covering developments in the field of Artificial Intelligence (AI). The nine chapters of the report are summarized below. The proper citation for the full report is as follows:

Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.

The past decade has seen a significant shift in AI research and development, with industry taking the lead. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. This trend is partly due to the escalating costs of training state-of-the-art AI models, which have reached unprecedented levels. For instance, OpenAI’s GPT-4 required an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million. Despite these high costs, the number of foundation models has more than doubled from 2022 to 2023, with a significant portion being open-source. This surge in open-source models is fostering greater collaboration and innovation across the AI community[1][2][3].

AI’s technical performance has seen remarkable advancements, with AI systems surpassing human capabilities in several benchmarks, including image classification, visual reasoning, and English understanding. However, AI still lags in more complex tasks such as competition-level mathematics and visual commonsense reasoning. The development of multimodal AI models like Google’s Gemini and OpenAI’s GPT-4, which can handle text, images, and audio, marks a significant leap in AI capabilities. Additionally, the emergence of more challenging benchmarks in 2023, such as SWE-bench for coding and MoCa for moral reasoning, indicates a push towards testing AI on more complex and nuanced tasks[1][4][5].

The responsible development and deployment of AI have become critical focal points. The report highlights a significant lack of standardized evaluations for large language models (LLMs) regarding responsibility, with leading developers testing their models against different benchmarks. This inconsistency complicates efforts to systematically compare the risks and limitations of top AI models. Moreover, the rise of political deepfakes and the discovery of complex vulnerabilities in LLMs underscore the urgent need for robust and standardized responsible AI practices. The chapter also discusses the growing concern among businesses about AI-related risks, including privacy, data security, and reliability[1][4][6].

Generative AI has become a major driver of economic activity, with investment in this area skyrocketing. Despite a decline in overall AI private investment, funding for generative AI surged nearly eightfold from 2022, reaching $25.2 billion. This surge is indicative of the broader trend of integrating AI into various industries to enhance productivity and efficiency. Studies have shown that AI enables workers to complete tasks more quickly and improve the quality of their output, bridging the skill gap between low- and high-skilled workers. However, the economic impact of AI also raises concerns about potential job displacement and the need for proper oversight to avoid diminished performance[1][6][7].

AI is revolutionizing science and medicine, accelerating scientific discovery and medical advancements. In 2023, significant AI applications were launched, such as AlphaDev for efficient algorithmic sorting and GNoME for materials discovery. In medicine, AI systems like EVEscape for pandemic prediction and AlphaMissense for mutation classification have made substantial strides. The performance of AI on medical benchmarks has also improved remarkably, with models like GPT-4 Medprompt achieving high accuracy rates. The FDA’s approval of AI-related medical devices has increased significantly, highlighting the growing role of AI in real-world medical applications[1][4][8].

AI’s integration into education is transforming learning and teaching methodologies. The number of AI-related degree programs has tripled since 2017, reflecting the growing demand for AI expertise. However, access to computer science (CS) education remains uneven, with students in larger high schools and suburban areas more likely to have access to CS courses. The migration of AI PhDs to industry continues to accelerate, indicating a brain drain from academia. Despite these challenges, the diversity in CS education is improving, with increasing representation from various ethnic groups and a rise in the participation of female students in AP CS exams[1][4][9].

AI policy and governance have seen significant developments, with a sharp increase in AI-related regulations. In 2023, the number of AI-related regulations in the United States grew by 56.3%, reflecting the heightened focus on AI governance. Both the United States and the European Union have advanced landmark AI policy actions, including the AI Act in the EU and an Executive Order on AI in the US. The chapter also highlights the global discourse on AI, with AI being mentioned in legislative proceedings in 49 countries in 2023. This growing regulatory landscape underscores the need for balanced policies that promote innovation while addressing potential risks[1][4][10].

Diversity in AI education and the workforce is gradually improving, though substantial gaps remain. In the United States and Canada, the representation of Asian, Hispanic, and Black or African American students in CS degree programs has increased. However, gender gaps persist, particularly among European graduates in informatics, computer science, computer engineering, and information technology. The participation of female students in AP CS exams in the US has risen significantly, reflecting broader changes in gender and ethnic representation in K-12 CS education. These trends highlight the ongoing efforts to create a more inclusive and diverse AI ecosystem[1][4][11].

Public opinion on AI is becoming increasingly nuanced, with a growing awareness of AI’s potential impact. Surveys indicate that a significant proportion of people believe AI will dramatically affect their lives in the next few years, with many expressing nervousness about AI products and services. In the US, more people report feeling concerned than excited about AI. The chapter also explores demographic differences in AI optimism, with younger generations and individuals with higher incomes and education levels generally more positive about AI’s potential benefits. The widespread awareness and use of tools like ChatGPT further illustrate AI’s growing presence in everyday life[1][4][12].

This summary provides a comprehensive overview of the key themes and findings from each chapter of the 2024 AI Index Report, reflecting the growth and development of generative AI over the past decade.

Citations:
[1] https://ppl-ai-file-upload.s3.amazonaws.com/web/direct-files/7036048/85615ab0-9b61-4f60-9b2c-8a2c8e44a1bf/HAI-Highlights.pdf
[2] https://hai.stanford.edu/research/ai-index-report
[3] https://hai.stanford.edu/news/inside-new-ai-index-expensive-new-models-targeted-investments-and-more
[4] https://aiindex.stanford.edu/report/
[5] https://hai.stanford.edu/news/what-expect-ai-2024
[6] https://www.linkedin.com/pulse/2024-artificial-intelligence-index-report-stanford-hai-savelsbergh-ttlkc
[7] https://radical.vc/2024-ai-index-report/
[8] https://hai.stanford.edu/news/ai-index-state-ai-13-charts
[9] https://www.linkedin.com/posts/datacouch_ai-index-report-2024-by-stanford-institute-activity-7186284974826815489-1bzw
[10] https://www.fladgate.com/insights/what-stanford-universitys-2024-ai-index-report-tells-us-about-the-current-state-of-ai
[11] https://spectrum.ieee.org/ai-index-2024
[12] https://hai.stanford.edu/events/presenting-2024-ai-index
[13] https://www.infoq.com/news/2024/05/stanford-ai-index/
[14] https://aiindex.stanford.edu/research/
[15] https://www.linkedin.com/pulse/navigating-ai-frontier-insights-implications-from-stanford-amin-isnwf?trk=public_post_main-feed-card_feed-article-content
[16] https://www.niso.org/niso-io/2024/04/ai-index-report-2024-reveals-accelerating-activity
[17] https://www.aei.org/technology-and-innovation/six-takeaways-from-stanford-universitys-2024-report-on-the-state-ai/
[18] https://spectrum.ieee.org/ai-index-2024/7-industry-calls-new-phds
[19] https://scai.sorbonne-universite.fr/public/news/view/c27e00072f7aa92ddc77/8
[20] https://codingscape.com/blog/stanford-ai-index-2024-summary-and-full-report

About the author: Jane Ginn, CTIN President & Co-Founder
As the co-founder of the Cyber Threat Intelligence Network (CTIN), a consultancy with partners in Europe, Ms. Ginn has been pivotal in the development of the STIX international standard for modeling and sharing threat intelligence. She currently serves as the Secretary of the OASIS Threat Actor Context Technical Committee, contributing to the creation of a semantic technology ontology for cyber threat actor analysis. Her efforts in this area and her earlier work with the Cyber Threat Intelligence (CTI) TC earned her the 2020 Distinguished Contributor award from OASIS. In public service, she advised five Secretaries of the US Department of Commerce on international trade issues from 1994 to 2001 and served on the Washington District Export Council for five years. In the EU, she was an appointed member of the European Union's ENISA Threat Landscape Stakeholders' Group for four years. A world traveler and amateur photojournalist, she has visited over 50 countries, further enriching her global outlook and professional insights. Follow me on LinkedIn: www.linkedin.com/comm/mynetwork/discovery-see-all?usecase=PEOPLE_FOLLOWS&followMember=janeginn