2022-09-28 08:10
AI Creation: A Trillion-Dollar Business?

Humans are good at analyzing things, and machines are even better. Machines can analyze a set of data and find patterns in it for a multitude of use cases, whether that's identifying fraudulent messages, detecting spam, forecasting the ETA of your delivery or recommending the next short video you might enjoy, and they keep iterating and getting smarter at these tasks. This kind of machine is called "Analytical AI," or traditional AI.

But humans are not only good at analyzing things; we are also good at creating, whether that's writing poetry, designing products, making games or cranking out code. Until recently, machines had no chance of competing with us at creative work; they were relegated to analytical and rote cognitive tasks. Now AI has reached a new stage, and machines are starting to create things that are meaningful and beautiful. This new category is called "Generative AI": rather than merely analyzing existing data as before, the machine generates something entirely new.

 

Generative AI is not only getting faster and cheaper; in some cases what it generates is better than what humans create by hand. Every industry that requires original human work, from social media to gaming, advertising to architecture, coding to graphic design, product design to law, marketing to sales, is up for reinvention. Certain functions may be completely replaced by generative AI, while others will thrive on a tighter iterative creative cycle between human and machine. Overall, generative AI should have a very broad range of end markets and help people create better, faster and more cheaply. In the best case, it will drive the marginal cost of creation and knowledge work toward zero, generating vast productivity gains and economic value, along with commensurate market cap.

The fields generative AI addresses, knowledge work and creative work, comprise billions of workers. If generative AI makes these workers even 10% more efficient or more creative, it has the potential to generate trillions of dollars of economic value.


This article was co-written by Sequoia partners Sonya Huang and Pat Grady together with the generative AI model GPT-3, and the two illustrations in the body were generated with Midjourney. We hope this piece of human-machine co-creation opens up a creative new world for you.


Source: 红杉汇 (Sequoia China). Authors: Sonya Huang, Pat Grady, GPT-3. Original title: "Generative AI: A Creative New World." Header image: Love, Death & Robots.

 

Why Now?


Generative AI has the same "why now" as AI more broadly: better models, more data, more compute. The category is changing faster than we can capture, but it is worth recounting its recent history in broad strokes to put the current moment in context.


Wave 1: Small models reign supreme (pre-2015)


Before 2015, small models were considered state of the art for understanding language. These small models excel at analytical tasks, and they were deployed for everything from delivery-time prediction to fraud classification. However, they are not expressive enough for general-purpose generative tasks; generating human-level writing or code remained a pipe dream.

 

Wave 2: The race to scale (2015-today)


A landmark paper from Google Research, "Attention Is All You Need," described a new neural network architecture for natural language understanding called the transformer. It can produce superior-quality language models while being more parallelizable, which dramatically reduces training time. The resulting models are few-shot learners and can be customized to specific domains relatively easily.
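For intuition, here is a minimal NumPy sketch of the scaled dot-product attention at the core of the transformer; it is a toy, single-head version, not the paper's full multi-head architecture. The key property: every token attends to every other token through dense matrix multiplications, which parallelize far better than the sequential recurrences they replaced.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted mix of values

# Toy self-attention over 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(tokens, tokens, tokens)
print(out.shape)  # -> (4, 8)
```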


As AI models have gotten progressively larger, they have begun to surpass major human performance benchmarks (sources: The Economist, June 11th, 2022; science.org).

 

Sure enough, as the models got bigger and bigger, they began to match and then surpass human performance. Between 2015 and 2020, the compute used to train these models increased by six orders of magnitude, and their results surpassed human benchmarks in handwriting, speech and image recognition, reading comprehension and language understanding. OpenAI's GPT-3 stands out: its performance is a giant leap over GPT-2, and its published demos, from generating code to writing snarky jokes, are startling.


Despite all this fundamental research progress, the models were not widely deployed. They were large and difficult to run (requiring GPU orchestration), not broadly accessible (unavailable, or in closed beta only), and expensive to use as a cloud service. Even with those limitations, the earliest generative AI applications began to enter the public eye.

 

Wave 3: Better, faster, cheaper (2022+)


First, compute got cheaper: new techniques such as diffusion models shrank the cost of training and running inference. At the same time, the research community kept developing better algorithms and larger models. Developer access also changed, expanding from closed beta to open beta, and in some cases to open source.
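To make "diffusion model" slightly less abstract, here is a toy sketch of the forward noising process that such models learn to reverse; it is a bare-bones DDPM-style schedule under assumed constants, while production systems like Stable Diffusion add a learned denoising network and operate in a compressed latent space.

```python
import numpy as np

# Forward diffusion: blend data with Gaussian noise over T steps.
# A diffusion model is trained to predict the added noise so it can undo it.
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # per-step noise schedule (assumed values)
alpha_bar = np.cumprod(1.0 - betas)      # cumulative fraction of signal kept

def q_sample(x0, t, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form."""
    eps = rng.normal(size=x0.shape)
    x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8))             # stand-in for an image
x_mid, eps = q_sample(x0, t=T // 2, rng=rng)
# Training objective, conceptually: minimize || eps - model(x_mid, t) ||^2
```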


For developers who had been starved of access to large language models (LLMs), the floodgates were suddenly open for exploration and application development, and applications built on these technologies began to bloom.


Illustration generated with Midjourney


Wave 4: Killer apps emerge (now)


With the platform layer solidifying, models continuing to get better, faster and cheaper, and model access trending toward free and open source, the application layer is ripe for an explosion of creativity. Just as mobile unleashed new types of applications through new capabilities like GPS, cameras and on-the-go connectivity, and its inflection point created a market opening for a handful of killer apps a decade ago, we expect killer apps to emerge for generative AI. The race is on.


Market Landscape


Below is a schematic of the landscape across segments: the platform layer that powers each category and the kinds of applications being built on top.



Models


Text. Text is the most advanced domain. However, natural language is hard to get right, and quality matters. Today the models are decently good at generic short- and medium-form writing (even so, they are typically used for first drafts or for iterating on one). Over time, as the models improve, we should expect higher-quality output, longer-form content, and better vertical-specific tuning.

 

Code generation. As GitHub Copilot has shown, code generation is likely to become commonplace in the near term and to greatly boost developer productivity. It will also put writing code within reach of non-developers.

 

Images. The image domain's application explosion is more recent, but it has proved unstoppable: sharing generated images on social media is simply much more fun than sharing text. We are also seeing image models with many different aesthetic styles, along with different techniques for editing and modifying generated images.

 

Speech synthesis. Speech synthesis has been around for a while (hello, Siri!), but consumer and enterprise applications are only just getting good. For high-end applications like film and podcasts, there is still a long way to go before one-shot output matches a voice actor's or host's natural, non-mechanical recordings. But just as with images, today's models are the foundation on which better ones will be built.

 

Video and 3D models. Progress here is much further behind, and people are eager to see these models' potential in creative markets such as cinema, gaming, VR, architecture and physical product design. We should expect foundational 3D and video generation models within the next one to two years.


Other domains. Many other fields, from audio and music to biology and chemistry (generative proteins and molecules, anyone?), are still at the foundational-model R&D stage.

 

The chart below is a timeline of how these fundamental models might progress, and of the applications that become possible as they do; 2025 and beyond is an estimate.



Applications


Here are some of the application scenarios we are most excited about. In reality there are far more than this page can capture, and we are enthralled by the creative use cases that founders and developers keep dreaming up.

 

Copywriting. The growing need for personalized web pages, emails and other content to fuel sales and marketing strategies, and even to improve customer support, creates huge demand for copywriting. The short, stylized, relatively formulaic nature of this verbiage, combined with the time and budget pressure on these teams, makes the field an ideal fit for automated and augmented copywriting solutions.

 

Vertical-specific writing assistants. Most writing assistance today is horizontal; we believe there is an opportunity to build much better generative applications for specific end markets, from legal contract drafting to screenwriting. Product differentiation here will come from fine-tuning models and polishing UX patterns for particular workflows.

 

Code generation. Generative AI applications in this field have already delivered a qualitative lift, greatly amplifying developers' productivity and creativity: GitHub Copilot now generates nearly 40% of the code in projects where it is installed. But if we let our imaginations run, the even bigger opportunity may be enabling ordinary consumers (non-professional developers) to write code themselves. Learning to prompt may become the ultimate high-level programming language.
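To give "prompting as programming" some flavor, here is a minimal sketch against the OpenAI completions API of the GPT-3 era; the model name, prompt and parameters are illustrative assumptions, not a recommendation of specific settings.

```python
import openai  # pip install openai; expects OPENAI_API_KEY in the environment

# The "program" is the natural-language prompt itself.
prompt = (
    "# Python 3\n"
    "# Return the n-th Fibonacci number iteratively.\n"
    "def fib(n):"
)

response = openai.Completion.create(
    model="text-davinci-002",  # assumed GPT-3-era completion model
    prompt=prompt,
    max_tokens=128,
    temperature=0,             # low temperature for more deterministic code
)
print(prompt + response["choices"][0]["text"])
```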

 

Art generation. The entire sweep of art history and pop culture is now encoded in these large models, allowing anyone to generate, at will, works in styles that might previously have taken a lifetime to master.

 

Gaming. The dream is for people to use natural language to create complex scenes or riggable models; that end state is still a long way off, but there are plenty of achievable scenarios in the near term, such as generating textures for game scenes or skybox art.

 

Media/advertising. Imagine the potential of automated agency work that optimizes ad copy and creative on the fly for individual consumers. Multimodal generation offers great opportunities to pair different sales messages with complementary visuals.

 

Design. Prototyping digital and physical products is labor-intensive and iterative. Generative AI can already produce high-fidelity renderings from rough sketches and text descriptions. As the technology extends to 3D models, generative design will reach all the way from text to physical product. Your next mobile app, or your next pair of sneakers, may well be designed by a machine.

 

Social media and digital communities. Will people use generative tools as a new way of expressing themselves? Certainly: new applications like Midjourney are already creating new social experiences, as consumers learn to generate distinctive work and create in public.


Illustration generated with Midjourney


Anatomy of a Generative AI Application


What will a generative AI application look like? Here are some predictions.

 

Intelligence and model fine-tuning

 

Under the hood, generative AI apps are built on large models such as GPT-3 or Stable Diffusion. As these applications accumulate user data, that data can be used to fine-tune the models: improving quality and performance for the specific problem space, and shrinking model size or cost.
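One concrete (assumed) shape of that loop: user-accepted outputs logged by the app become labeled prompt/completion pairs, serialized to the JSONL format that hosted fine-tuning services accepted at the time of writing. The event structure below is hypothetical.

```python
import json

# Hypothetical application log: for each prompt, which candidate the user kept.
events = [
    {"prompt": "Write a tagline for a hiking app:",
     "candidates": ["Find your trail.", "Walk more."],
     "chosen": 0},
]

# Convert accepted outputs into prompt/completion records for fine-tuning.
with open("finetune.jsonl", "w") as f:
    for e in events:
        record = {
            "prompt": e["prompt"],
            "completion": " " + e["candidates"][e["chosen"]],  # leading space per convention
        }
        f.write(json.dumps(record) + "\n")
```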

 

We can think of a generative AI app as a UI layer plus a "little brain" that sits on top of the "big brain": the large, general-purpose model that powers it.
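A minimal sketch of that layering; `CopywritingApp` and the injected `complete` function are hypothetical stand-ins for whatever model client an app actually uses. The "little brain" owns domain framing and post-processing, while the general-purpose "big brain" does the heavy lifting.

```python
from typing import Callable

class CopywritingApp:
    """'Little brain': domain-specific prompt framing and cleanup
    wrapped around a general-purpose 'big brain' model."""

    def __init__(self, complete: Callable[[str], str]):
        self.complete = complete  # e.g., a thin client for a GPT-3-style model

    def tagline(self, product: str, tone: str = "playful") -> str:
        prompt = (
            f"Write one {tone} marketing tagline for the following product.\n"
            f"Product: {product}\nTagline:"
        )
        raw = self.complete(prompt)
        return raw.strip().strip('"')  # app-layer post-processing

# Usage with any completion function (a canned response here, for illustration):
app = CopywritingApp(complete=lambda p: ' "Sketch ideas at the speed of thought."')
print(app.tagline("a collaborative whiteboard"))
```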

 

Form Factor


Today, generative AI apps largely exist as plugins within existing software ecosystems: code completion happens in your IDE (integrated development environment); image generation happens inside applications like Figma or Photoshop; even Discord bots are becoming the vessel for injecting generative AI into digital and social communities.

 

There is also a smaller number of standalone generative AI web apps, such as Jasper and Copy.ai for copywriting, Runway for video editing, and Mem for note-taking.

 

A plugin can be a very effective wedge: it avoids shipping a whole new application, and it is a savvy way around the chicken-and-egg problem of user data and model quality (you need good models to attract enough users, and you need heavy usage to gather the data that improves the models). We have already seen this distribution strategy pay off in categories like consumer/social.

 

Paradigm of Interaction

 

Today, most generative AI demos are "one-and-done": you provide an input, the machine produces an output, and you either keep it or throw it away and try again. As the models keep iterating and growing stronger, interaction is becoming more iterative too: we will be able to modify, refine, uplevel an output, or generate variations of it.
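The shift from one-and-done to iterative use can be pictured as a loop in which each revision feeds the previous output back to the model along with the user's new instruction; `generate` here is a hypothetical stand-in for a real model call.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a large generative model."""
    return f"<draft for: {prompt[:40]!r}...>"

draft = generate("a product description for noise-canceling headphones")
for instruction in ["make it shorter", "add a call to action"]:
    # Iterative paradigm: the output becomes part of the next input.
    draft = generate(f"Revise the text below. Instruction: {instruction}\n\n{draft}")
print(draft)
```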

 

Right now, generative AI output is mostly used for prototypes and first drafts. These applications are great at producing multiple different versions to kick off the creative process (say, different options for a logo or an architectural model), and at suggesting edits that help users polish a draft toward its final state (blog posts, code autocompletion). As the models get smarter, partly on the back of all that user data, we should expect the drafts to get better and better, until they are good enough to ship as the final product.

 

Sustained Category Leadership

 

By relentlessly accelerating the flywheel of more users and data leading to better models, generative AI companies can build a sustainable competitive advantage and become category leaders. The virtuous cycle to maintain: (1) exceptional user engagement leads to (2) more user data with which to train better models (prompt improvements, fine-tuning, user choices as labeled training data), and (3) better models attract more users and deepen engagement.


These companies are also likely to go deep into specific problem spaces (code, design, gaming and so on) rather than trying to be everything to everyone. And, as described above, they can first integrate into their target users' existing workflows as plugins, for user growth and distribution, and later attempt to replace the incumbent applications with AI-native workflows. Building these applications the right way and accumulating users and data will take time, but we believe the best ones will be durable and have a chance to become massive.


Hurdles and Risks

 

Despite generative AI's enormous potential, there are plenty of kinks around business models and technology still to iron out. Important questions around copyright, trust and safety, and cost are far from resolved.

 

Eyes Wide Open

 

Generative AI is still very early. The platform layer is only just getting good, and the application space has barely gotten going.

 

To be clear, we don't need large language models to write a Tolstoy novel before we can say generative AI is being put to good use. Today's models are already good enough to draft blog posts and to prototype logos and product interfaces, and they stand to create far more value in the near to medium term.

 

This first wave of generative AI applications resembles the mobile app landscape when the iPhone first came out: somewhat gimmicky and thin, with unclear competitive differentiation and business models. Still, some of these applications offer an intriguing glimpse of what the future may hold. Once you have seen a machine produce complex working code or brilliant images, it is hard to unsee: you know these technologies are bound to play a fundamental role in how we work and create.

 

If we allow ourselves to dream several decades out, it is easy to imagine a future in which generative AI is deeply woven into how we work, create and play: memos that write themselves; 3D-printing anything you can imagine; going from text straight to a Pixar film; game experiences that conjure rich worlds as fast as we can dream them up. These things sound like science fiction today, but we should have faith in the rate of progress. In just a few years we have gone from narrow language models to code autocompletion, and if a "Moore's law for large models" holds along this trajectory, those far-fetched visions may yet enter the realm of the possible.

  

PS: This piece was co-written with GPT-3. GPT-3 did not spit out the entire article, but it was responsible for combating writer’s block, generating entire sentences and paragraphs of text, and brainstorming different use cases for generative AI. Writing this piece with GPT-3 was a nice taste of the human-computer co-creation interactions that may form the new normal. We also generated illustrations for this post with Midjourney, which was SO MUCH FUN!

