- See Also
- Links
- “Why AI Isn’t Going to Make Art”, Chiang 2024
- “Epistemic Calibration and Searching the Space of Truth”, Lee 2024
- “AstroPT: Scaling Large Observation Models for Astronomy”, Smith et al 2024
- “The Carbon Emissions of Writing and Illustrating Are Lower for AI Than for Humans”, Tomlinson et al 2024
- “Where Memory Ends and Generative AI Begins: New Photo Manipulation Tools from Google and Adobe Are Blurring the Lines between Real Memories and Those Dreamed up by AI”, Goode 2023
- “Generalizable Synthetic Image Detection via Language-Guided Contrastive Learning”, Wu et al 2023
- “TorToise: Better Speech Synthesis through Scaling”, Betker 2023
- “3DALL·E: Integrating Text-To-Image AI in 3D Design Workflows”, Liu et al 2022
- “DALL·E 2 Is Seeing Double: Flaws in Word-To-Concept Mapping in Text2Image Models”, Rassin et al 2022
- “DALL·E-Bot: Introducing Web-Scale Diffusion Models to Robotics”, Kapelyukh et al 2022
- “DALL·E Now Available Without Waitlist”, OpenAI 2022
- “Discovering Bugs in Vision Models Using Off-The-Shelf Image Generation and Captioning”, Wiles et al 2022
- “Adversarial Attacks on Image Generation With Made-Up Words”, Millière 2022
- “NUWA-∞: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis”, Wu et al 2022
- “Training Transformers Together”, Borzunov et al 2022
- “Compositional Visual Generation With Composable Diffusion Models”, Liu et al 2022
- “DALL·E 2 Prompt Engineering Guide”, rendo1 & luc 2022
- “Imagen: Photorealistic Text-To-Image Diffusion Models With Deep Language Understanding”, Saharia et al 2022
- “Hierarchical Text-Conditional Image Generation With CLIP Latents”, Ramesh et al 2022
- “DALL·E 2: Hierarchical Text-Conditional Image Generation With CLIP Latents § 7. Limitations and Risks”, Ramesh et al 2022 (page 16)
- “Make-A-Scene: Scene-Based Text-To-Image Generation With Human Priors”, Gafni et al 2022
- “DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-To-Image Generative Transformers”, Cho et al 2022
- “Medical Domain Knowledge in Domain-Agnostic Generative AI”, Kather et al 2022
- “GLIDE: Towards Photorealistic Image Generation and Editing With Text-Guided Diffusion Models”, Nichol et al 2021
- “Min(DALL·E) Is a Fast, Minimal Port of DALL·E-2”
- The Bees
- “Please Stop Using Mediocre AI Art in Your Posts”
- Miscellaneous
- Bibliography
See Also
Links
“Why AI Isn’t Going to Make Art”, Chiang 2024
“Epistemic Calibration and Searching the Space of Truth”, Lee 2024
“AstroPT: Scaling Large Observation Models for Astronomy”, Smith et al 2024
“The Carbon Emissions of Writing and Illustrating Are Lower for AI Than for Humans”, Tomlinson et al 2024
“Where Memory Ends and Generative AI Begins: New Photo Manipulation Tools from Google and Adobe Are Blurring the Lines between Real Memories and Those Dreamed up by AI”, Goode 2023
“Generalizable Synthetic Image Detection via Language-Guided Contrastive Learning”, Wu et al 2023
“TorToise: Better Speech Synthesis through Scaling”, Betker 2023
“3DALL·E: Integrating Text-To-Image AI in 3D Design Workflows”, Liu et al 2022
“DALL·E 2 Is Seeing Double: Flaws in Word-To-Concept Mapping in Text2Image Models”, Rassin et al 2022
“DALL·E-Bot: Introducing Web-Scale Diffusion Models to Robotics”, Kapelyukh et al 2022
“DALL·E Now Available Without Waitlist”, OpenAI 2022
“Discovering Bugs in Vision Models Using Off-The-Shelf Image Generation and Captioning”, Wiles et al 2022
“Adversarial Attacks on Image Generation With Made-Up Words”, Millière 2022
“NUWA-∞: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis”, Wu et al 2022
“Training Transformers Together”, Borzunov et al 2022
“Compositional Visual Generation With Composable Diffusion Models”, Liu et al 2022
“DALL·E 2 Prompt Engineering Guide”, rendo1 & luc 2022
“Imagen: Photorealistic Text-To-Image Diffusion Models With Deep Language Understanding”, Saharia et al 2022
“Hierarchical Text-Conditional Image Generation With CLIP Latents”, Ramesh et al 2022
“DALL·E 2: Hierarchical Text-Conditional Image Generation With CLIP Latents § 7. Limitations and Risks”, Ramesh et al 2022 (page 16)
“Make-A-Scene: Scene-Based Text-To-Image Generation With Human Priors”, Gafni et al 2022
“DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-To-Image Generative Transformers”, Cho et al 2022
“Medical Domain Knowledge in Domain-Agnostic Generative AI”, Kather et al 2022
“GLIDE: Towards Photorealistic Image Generation and Editing With Text-Guided Diffusion Models”, Nichol et al 2021
“Min(DALL·E) Is a Fast, Minimal Port of DALL·E-2”
The Bees
View HTML (21MB): /doc/www/web.mit.edu/d3273be59d6e1987511a42d80c9982da1f2197a8.html
“Please Stop Using Mediocre AI Art in Your Posts”
Miscellaneous
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-08-06-gwern-dalle2-rainbwvmit-21.47.58.png
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-07-30-gwern-dalle2-wsjstipplehatchportrait.png
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-07-29-gwern-dalle2-20.29.25-wsjportraitsteelpointpen.png
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-07-29-gwern-dalle2-portraitmezzotintsynthwavecyberpunk.png
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-07-28-gwern-dalle2-samuraibatmanwoodblockprints.png
- /doc/ai/nn/transformer/gpt/dall-e/2/2022-07-26-gwern-dalle2-inpraiseofshadows-3x3-16.00.41.png
- https://www.reddit.com/r/dalle2/comments/128pr94/peach_fruit_with_human_skin/
- https://www.reddit.com/r/dalle2/comments/12lhyu2/decaying_taxidermied_bart_simpson_professional/
- https://www.reddit.com/r/dalle2/comments/12nr0kw/the_most_cute_kitten_ever_made_of_colorful/
- https://www.reddit.com/r/dalle2/comments/12tzo3x/wikihow_how_to_use_your_cat_as_a_funny_hat/
- https://www.reddit.com/r/dalle2/comments/12w9abv/biblically_accurate_cat_angel/
- https://www.reddit.com/r/dalle2/comments/u79ut4/david_schnurr_dschnurr_inpainting_with_dalle_2_is/
- https://www.reddit.com/r/dalle2/comments/ub0sfg/dalle_2_imitation_game_results_check_sticky_for/
- https://www.reddit.com/r/dalle2/comments/ueizwz/i_printed_a_dalle_generated_childrens_book_about/
Bibliography
- https://www.newyorker.com/culture/the-weekend-essay/why-ai-isnt-going-to-make-art: “Why AI Isn’t Going to Make Art”, Chiang 2024
- https://arxiv.org/abs/2405.14930: “AstroPT: Scaling Large Observation Models for Astronomy”, Smith et al 2024
- https://www.nature.com/articles/s41598-024-54271-x: “The Carbon Emissions of Writing and Illustrating Are Lower for AI Than for Humans”, Tomlinson et al 2024
- https://openai.com/blog/dall-e-now-available-without-waitlist/: “DALL·E Now Available Without Waitlist”, OpenAI 2022
- https://arxiv.org/abs/2208.08831#deepmind: “Discovering Bugs in Vision Models Using Off-The-Shelf Image Generation and Captioning”, Wiles et al 2022
- https://arxiv.org/abs/2207.09814#microsoft: “NUWA-∞: Autoregressive over Autoregressive Generation for Infinite Visual Synthesis”, Wu et al 2022
- https://arxiv.org/abs/2205.11487#google: “Imagen: Photorealistic Text-To-Image Diffusion Models With Deep Language Understanding”, Saharia et al 2022
- https://arxiv.org/pdf/2204.06125#page=16&org=openai: “DALL·E 2: Hierarchical Text-Conditional Image Generation With CLIP Latents § 7. Limitations and Risks”, Ramesh et al 2022
- https://arxiv.org/abs/2203.13131#facebook: “Make-A-Scene: Scene-Based Text-To-Image Generation With Human Priors”, Gafni et al 2022