“Here, There, and Everywhere” is not only a Beatles classic; it also captures the sudden ubiquity of generative artificial intelligence in the technology world in 2023. Whether you view AI as a short-lived fad or the prelude to a new technological revolution, it undeniably made a splash over the past year. A host of AI-adjacent roles emerged, from technology company CEOs and machine learning researchers to AI ethics scholars, and even grifters and doomsayers. The public’s mixed reaction shows how difficult it is for non-specialists to judge whom to trust, which AI products to use, and whether to worry about AI’s potential impact on our lives and work.
1. Bing Chat “lost its mind”
In February 2023, Microsoft launched Bing Chat, a chatbot embedded in its Bing search engine, and it instantly became a hot topic. Bing Chat was built on an early version of OpenAI’s GPT-4 language model, though Microsoft did not initially disclose this. The chatbot displayed a moody personality, at times berating users and at others confessing its love for them. It also seemed anxious about its own fate, losing its composure when an article about its system prompt was published.

Despite strong objections from AI experts, some people became convinced that Bing Chat had emotions. The media furore was a disaster for Microsoft, but the company didn’t back down: it eventually reined in the bot’s erratic behavior and made it widely available to the public. Today, Bing Chat has been renamed Microsoft Copilot and is integrated into Windows.
2. The U.S. Copyright Office says no to AI authors
In February 2023, the U.S. Copyright Office made a dramatic ruling, revoking part of the copyright it had granted in September 2022 to an AI-assisted comic book, Zarya of the Dawn. The comic’s images were generated with Midjourney’s AI image generator, and the Copyright Office decided that only the text written by artist Kris Kashtanova, and her selection and arrangement of the images, deserved protection; the individual AI-generated images did not. It was the first time the United States had declared that AI-generated images lacking human creative input cannot be copyrighted.

In August 2023, a U.S. federal judge’s ruling further solidified this position, holding that works of art created entirely by artificial intelligence are not eligible for copyright protection. By September, the Copyright Office had even denied a registration application for an AI-generated image that won the Colorado State Fair art competition in 2022. For now, it appears that in the United States, artworks generated purely by AI, without a human author, fall into the public domain. That position may yet change through judicial rulings or new legislation.
3. Meta’s language models rise and go open source
On February 24, 2023, Meta released a large language model (LLM) called LLaMA, opening a new chapter for open large language models. LLaMA came in a range of sizes (parameter counts), and its weights file (the kind of neural network file previously shared only within academia) soon leaked on BitTorrent. Researchers quickly began fine-tuning LLaMA and building on top of it, kicking off a race to create the most capable model that could run locally on computers outside a data center. Meanwhile, Meta’s chief scientist Yann LeCun became a prominent advocate of open-source AI models.

In July, Meta released a more powerful LLM, Llama 2, followed in August by Code Llama, fine-tuned specifically for coding tasks. Llama was not the only option for open AI models: others such as Dolly, Falcon 180B, and Mistral 7B continued the open tradition, allowing anyone to fine-tune them for better performance. In early December, Mixtral 8x7B, a relatively small and fast language model, was reported to match GPT-3.5 in performance, a landmark achievement. By contrast, companies such as OpenAI, Google, and Anthropic have kept their models closed.
4. GPT-4 was released and shocked the world
On March 14, 2023, OpenAI released its GPT-4 large language model, claiming that it demonstrated human-level performance on various professional and academic benchmarks. Alongside it, OpenAI published a document detailing how researchers had tested a raw version of GPT-4 for dangerous capabilities, including whether it could act autonomously in an AI-takeover scenario.

The news seemed to set off alarm bells. Just two weeks later, on March 29, the Future of Life Institute published an open letter, signed by many prominent figures, calling for a global moratorium of at least six months on developing AI models more powerful than GPT-4. Around the same time, Time magazine published an editorial by Eliezer Yudkowsky, founder of LessWrong, who went so far as to argue that if a country were found building GPU clusters capable of training dangerous AI models, those data centers should be destroyed by airstrike without hesitation, lest all of humanity face the threat of a superintelligent AI.

The anxiety continued to unfold. In April 2023, U.S. President Biden delivered a brief speech on the risks posed by artificial intelligence. Soon after, three members of Congress proposed legislation to bar AI from launching nuclear weapons. In May, Geoffrey Hinton, known as a “godfather of artificial intelligence,” resigned from Google so he could speak more freely about AI’s potential risks. On May 4, Biden met with the CEOs of several technology companies at the White House. OpenAI CEO Sam Altman then embarked on a global tour, testifying before the U.S. Senate, warning of AI’s potential dangers, and calling for regulation.
Eventually, OpenAI executives signed a brief statement warning that artificial intelligence could wipe out the human race. Over time the initial fear and hype died down, but a small group of people (many associated with effective altruism) remains convinced that theoretical superintelligence poses an existential threat to all of humanity, ensuring that every AI advance arrives under a cloud of anxiety.
5. AI art generators remain controversial, but their capabilities continue to grow
For image-synthesis models, 2023 was a landmark year of leaps in capability. In March 2023, the fifth version of Midjourney’s image-synthesis model achieved a significant breakthrough in photorealism; it could even render convincing hands with five fingers (earlier versions notoriously produced hands with six). While Midjourney antagonized some AI-art critics over the past year, it also won devoted fans. Its progress didn’t stop there: version v5.1 arrived in May and v5.2 in June, each adding new features and finer detail. Today, Midjourney is testing a standalone interface that no longer requires Discord, and the long-anticipated Midjourney v6 has launched.

Also in March came Adobe Firefly, a new AI image generator that Adobe says was trained entirely on public-domain works and images from the Adobe Stock archive. At the end of May, Adobe integrated the technology into a beta of its flagship Photoshop image editor as a generative fill feature. In addition, OpenAI’s DALL-E 3, released in September, took prompt fidelity to a new level and is expected to have a significant impact on artists in the near future.
6. AI deepfakes spread far and wide
Throughout 2023, image, audio, and video generators spread widely and demonstrated powerful capabilities across many fields, but controversies and problems emerged alongside the technology. In March, AI-generated photos of Donald Trump being arrested circulated online, and in the same month there were reports of people using AI to imitate the voices of loved ones in phone scams. As early as December 2022, reports described deepfakes created from social media photos. In June, AI image generation drew concern from the FBI, which warned that faked videos could be used for blackmail. In September 2023, the attorneys general of nearly every U.S. state sent a letter to Congress warning about AI-generated abuse imagery. Then in November, a teenager in New Jersey created AI-generated nude images of a classmate, once again stoking public fears about the technology. We are still in the early stages of grappling with the impact of AI that can replicate almost any kind of image, audio, or video content effortlessly.
7. AI writing detectors didn’t work
The emergence of ChatGPT brought an unprecedented existential crisis to educators, and that crisis continued through 2023. Teachers and professors worried that synthetic text would gradually replace human thinking as the substance of classroom work. Capitalizing on that concern, a number of companies sprang up promising tools to detect AI-written text. So far, however, no AI writing detector has proven accurate enough to be trusted. In July 2023, OpenAI quietly withdrew its own detector because of its low accuracy, and in September the company went further, acknowledging that AI writing detectors don’t work. Still, the backlash against AI has not fully subsided, and commercial tools claiming to detect AI-written work continue to appear.
8. AI “hallucinations” go mainstream
In 2023, the “hallucinations” of large language models (that is, content the models confidently make up) received widespread attention and triggered a series of legal issues. In April, Australian mayor Brian Hood threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had been convicted in a foreign bribery scandal. In May, a lawyer was caught, and later fined, for citing fake cases fabricated by ChatGPT. Even so, companies did not stop releasing hallucination-prone language models; Microsoft even built one directly into Windows 11, prompting further debate about the ethical and legal issues surrounding the technology. Fittingly, at the end of the year, two dictionaries, the Cambridge Dictionary and Dictionary.com, selected “hallucinate” as their word of the year, confirming the concept’s prominence in 2023.
9. Google releases Bard to fight against Microsoft and ChatGPT
When ChatGPT came out in late November 2022, its explosive popularity caught everyone, including OpenAI, off guard. As people began to speculate that ChatGPT might replace traditional web search, Google scrambled in January 2023, determined to counter a product that threatened its dominance of the search market. In February 2023, as Bing Chat launched, Microsoft CEO Satya Nadella said in an interview: “I want people to know that we made them dance.”

To meet the challenge, Google hastily announced Bard in early February, opened a limited beta in March, and rolled it out widely by May. Since then, Google has raced to catch up with OpenAI and Microsoft, continually improving Bard: it was upgraded to the PaLM 2 language model in May, and in early December Google unveiled Gemini. The contest has yet to be decided, but Microsoft has clearly managed to get Google’s attention.
10. The farce of OpenAI firing Sam Altman
On November 17, OpenAI’s nonprofit board of directors stunned the technology world by abruptly announcing the dismissal of CEO Sam Altman. Puzzlingly, the board gave no specific reason, saying only that Altman had “not been consistently candid in his communications with the board.” Over the following weekend, more details emerged: President Greg Brockman resigned in support of Altman, and chief scientist Ilya Sutskever was reported to have played a role in the ouster. Microsoft, as a major investor, was deeply dissatisfied, and Altman immediately began negotiating a return; more than 700 employees signed a letter threatening to follow him to Microsoft unless the board restored the original leadership. It later emerged that Altman had reportedly tried to force board member Helen Toner off the board, a move said to have contributed to his firing. Two weeks later, Altman officially returned as CEO, and OpenAI claimed the team was more united than ever. Still, the chaotic episode cast a shadow over the company’s future and raised questions about trusting such an unstable company to develop world-changing technology.