OpenAI Prompt Automation Experiments
import openai
import os

# Read the API key from an environment variable rather than hard-coding it.
openai.api_key = os.getenv("OPENAI_KEY")

# Legacy completions model used for all of the experiments below.
openai_model = 'text-davinci-003'
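Every call below fails with an authentication error if the variable is unset, so a quick guard up front helps; a minimal sketch:

# Fail fast with a clear message instead of an opaque auth error later.
if not openai.api_key:
    raise SystemExit('Set the OPENAI_KEY environment variable before running.')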
def tokens(words: int) -> int:
    """Rough token budget for a word count: ~4 tokens per 3 words, plus one."""
    return int(words * 4 / 3 + 1)
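The 4/3 words-to-tokens ratio is a rule of thumb for English text, not an exact count; a quick sanity check of the heuristic (an exact count would need a tokenizer such as tiktoken, which these experiments do not use):

print(tokens(75))  # ~101-token budget for a 75-word passage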
prompt = '''Only respond using markdown with accurate facts from reputable sources.
Create a table of the 20 most popular and
recently released Large Language Models
with columns for name, parameters, training data,
release date, license, link to publisher.
'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      max_tokens=2000,
                                      temperature=0)
print(completion.choices[0].text.strip())
| Name | Parameters | Training Data | Release Date | License | Link to Publisher |
| --- | --- | --- | --- | --- | --- |
| GPT-3 | 175 billion | Common Crawl, BooksCorpus, WebText | June 2020 | OpenAI API | [OpenAI](https://openai.com/blog/gpt-3/) |
| T5 | 11 billion | C4, Wikipedia, BooksCorpus, WebText | May 2020 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) |
| BERT | 340 million | BooksCorpus, Wikipedia | October 2018 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html) |
| XLNet | 560 million | BooksCorpus, Wikipedia | June 2019 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2019/06/xlnet-generalized-autoregressive.html) |
| RoBERTa | 355 million | BooksCorpus, Wikipedia | October 2019 | Apache 2.0 | [Facebook AI](https://ai.facebook.com/blog/roberta-an-optimized-method-for-pretraining-nlp/) |
| ALBERT | 18 million | BooksCorpus, Wikipedia | October 2019 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2019/12/albert-lite-bert-for-self-supervised.html) |
| ELECTRA | 125 million | BooksCorpus, Wikipedia | March 2020 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2020/03/electra-pre-training-text-encoders-as.html) |
| BART | 400 million | C4, BooksCorpus, Wikipedia | May 2020 | Apache 2.0 | [Facebook AI](https://ai.facebook.com/blog/bart-denoising-sequence-to-sequence-pre-training-for-nlg/) |
| Reformer | 1.6 billion | BooksCorpus, Wikipedia | June 2020 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2020/06/reformer-efficient-transformer.html) |
| Longformer | 1.6 billion | BooksCorpus, Wikipedia | June 2020 | Apache 2.0 | [AI2](https://allenai.org/blog/longformer/) |
| XLM-R | 550 million | BooksCorpus, Wikipedia | June 2020 | Apache 2.0 | [Facebook AI](https://ai.facebook.com/blog/xlm-r-large-scale-cross-lingual-language-model/) |
| CTRL | 1.6 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Salesforce Research](https://blog.einstein.ai/ctrl-a-conditional-transformer-language-model-for-controllable-generation/) |
| TAPAS | 1.6 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Google AI](https://ai.googleblog.com/2020/08/tapas-table-paraphrasing-with-structured.html) |
| MT-DNN | 1.6 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Microsoft Research](https://www.microsoft.com/en-us/research/blog/mt-dnn-a-general-purpose-language-understanding-system/) |
| DeBERTa | 355 million | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Microsoft Research](https://www.microsoft.com/en-us/research/blog/deberta-a-deeper-bidirectional-transformer-for-language-understanding/) |
| SpanBERT | 355 million | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Microsoft Research](https://www.microsoft.com/en-us/research/blog/spanbert-improving-pre-training-by-representing-and-predicting-spans/) |
| UniLM | 1.6 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Microsoft Research](https://www.microsoft.com/en-us/research/blog/unilm-a-unified-language-model-for-natural-language-understanding-and-generation/) |
| ERNIE 2.0 | 1.6 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [Baidu Research](https://arxiv.org/abs/1907.12412) |
| Megatron-LM | 8.3 billion | BooksCorpus, Wikipedia | August 2020 | Apache 2.0 | [NVIDIA](https://blogs.nvidia.com/blog/2020/08/20/megatron-language-model/) |
| XLM | 550 million | BooksCorpus, Wikipedia | September 2019 | Apache 2.0 | [Facebook AI](https://ai.facebook.com/blog/xlm-a-massively-multilingual-sentence-embedding-using-pretrained-transformers/) |
| XLM-RoBERTa | 550 million | BooksCorpus, Wikipedia | April 2020 | Apache 2.0 | [Facebook AI](https://ai.facebook.com/blog/xlm-roberta-state-of-the-art-cross-lingual-understanding/) |
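Every cell below repeats the same Completion.create boilerplate, so a small wrapper with a basic rate-limit retry could cut the repetition. A minimal sketch, assuming the legacy openai<1.0 SDK; the helper name and retry policy are illustrative, and the cells that follow keep their raw calls:

import time

def complete(prompt: str, max_tokens: int = 256,
             temperature: float = 0.0, retries: int = 3) -> str:
    # Retry rate-limited calls with a simple linear backoff.
    for attempt in range(retries):
        try:
            response = openai.Completion.create(prompt=prompt,
                                                model=openai_model,
                                                max_tokens=max_tokens,
                                                temperature=temperature)
            return response.choices[0].text.strip()
        except openai.error.RateLimitError:
            time.sleep(2 * (attempt + 1))
    raise RuntimeError('Completion failed after retries')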
prompt = '''As an expert in the field of machine learning,
explain, in a narrative style, large language models with
factually accurate information using anecdotes from industry
influencers and thought leaders.'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      max_tokens=1000,
                                      temperature=0.5)
print(completion.choices[0].text.strip())
Large language models are a type of artificial intelligence (AI) that use deep learning techniques to process natural language. They are being used in a wide range of applications, from chatbots to summarizing text.

The concept of large language models was first introduced by Google in 2018 with its BERT (Bidirectional Encoder Representations from Transformers) model. BERT was trained on a large corpus of text to learn the relationships between words and phrases. This enabled it to better understand the context of the text and produce more accurate results.

Since then, the use of large language models has grown exponentially. According to one industry influencer, Andrew Ng, “Large language models are revolutionizing natural language processing. They are more accurate, faster, and easier to use than traditional language models.”

Large language models are also being used to generate text. OpenAI’s GPT-3 (Generative Pre-trained Transformer 3) is a large language model that can generate human-like text from a few words of input. It is being used to generate articles, stories, and even code. The potential of large language models is immense. As Andrew Ng said, “The ability to generate human-like text is a game-changer for natural language processing. It opens up a whole new world of possibilities for AI applications.”

Large language models are also being used to improve the accuracy of machine translation. Google’s Translatotron is a large language model that can translate from one language to another without relying on a separate machine translation system. This has the potential to significantly reduce the time and cost associated with traditional machine translation systems.

The possibilities of large language models are exciting, and the technology is only going to get better. As another industry influencer, Fei-Fei Li, said, “Large language models are going to be the foundation of the next wave of AI applications.”
prompt_start = 'List top 10 large language models:\n'
prompt_input = 'physics art city'
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      max_tokens=200)
print(completion.choices[0].text.strip())
- Urban Light Installation Project: An Art Installation Featuring Kinetic Artworks Powered by Physics
- Street Mural Showcasing Newtonian Physics
- Public Sculpture Garden Exploring the Interaction Between Physics and Architecture
- Interactive Physics Exhibit in an Urban Park
- Underground Science/Art Gallery Showcasing Art Inspired by Physics
- Collaborative Interactive Performance Art on Physics in a City Park
- Giant Interactive Street Art Display Showcasing Physics Themes
- Community Art Project Exploring Physics in Urban Environments
- Physicist-Led Urban Sculpture Tour of a City
- City-Wide Festival Celebrating the Connection Between Physics and Art
- Physics-Inspired Paintings on City Walls
prompt_start = 'Elaborate on the text:\n'
prompt_input = 'A Street Art Gallery Celebrating the Power of Thermodynamics'
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      max_tokens=300)
topic = completion.choices[0].text.strip()
print(topic)
A street art gallery celebrating the power of thermodynamics is a place dedicated to showcasing different forms of art that reference the science of thermodynamics. This could include sculptures, murals, installations and other creative forms that are inspired by concepts such as heat, energy and power. The goal of this type of gallery is to bring attention to the forces of thermodynamics, while at the same time providing an educational experience to visitors. The gallery could feature artists from around the globe, displaying how global perspectives on thermodynamics could be differently interpreted. Additionally, it might include information about the development of thermodynamics, such as its early application to heat engines and its more contemporary efforts in sustainable energy usage. It would be an engaging, informative and inspiring experience for viewers of all ages.
prompt_start = 'Correct this to standard English:\n'
prompt_input = 'Girl walks no home car use she wants'
# max_tokens is omitted, so the legacy API default of 16 applies; that is
# enough for this short correction but would truncate longer output.
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model)
print(completion.choices[0].text.strip())
She walks home as she doesn't want to use a car.
prompt_start = 'Summarize this for a second-grade student:\n'
prompt_input = topic
# Budget the summary at roughly 1.5x the source word count.
token_calc = tokens(int(len(topic.split()) * 1.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0.7,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
A street art gallery celebrates the science of thermodynamics. It has sculptures, murals, installations and other forms of art that show how heat, energy and power can be expressed through art. It is a fun and educational experience for people of all ages to learn more about thermodynamics.
prompt_start = 'Summarize this for a graduate student:\n'
prompt_input = topic
token_calc = tokens(int(len(topic.split()) * 1.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0.7,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
A street art gallery celebrating the power of thermodynamics is a place that showcases sculptures, murals, installations and other forms of art inspired by the science of thermodynamics. It encourages exploration of the forces of thermodynamics, while providing an educational and inspiring experience to viewers of all ages. The gallery could feature artists from around the globe, displaying their unique interpretations of thermodynamics, and include information about the development of the science.
prompt_start = '''Answer the question with a truthful answer only and,
where available, cite a well-known source for the answer:\n'''
prompt_input = 'What is the distance between Earth and Mars?'
# Reuse the earlier topic's word count as a rough, generous output budget.
token_calc = tokens(int(len(topic.split()) * 2.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
The average distance between Earth and Mars is 225 million kilometers (140 million miles). Source: NASA (https://www.nasa.gov/audience/forstudents/k-4/stories/nasa-knows/what-is-the-distance-between-earth-and-mars-k4.html)
prompt_start = '''Answer the question with a truthful answer only and,
where available, cite a well-known source for the answer:\n'''
prompt_input = 'What is the five year trend in CAGR of cloud market size in USA up to 2021?'
token_calc = tokens(int(len(topic.split()) * 2.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
According to a report by Grand View Research, the cloud market size in the US is expected to reach USD 645.7 billion by 2021, registering a CAGR of 17.5% during the forecast period from 2016 to 2021. (Source: https://www.grandviewresearch.com/industry-analysis/us-cloud-market)
prompt_start = 'Return a json object of year, location, market, CAGR, value, and source from this text:\n'
prompt_input = completion.choices[0].text.strip()
token_calc = tokens(int(len(topic.split()) * 2.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
{ "year": "2016-2021", "location": "US", "market": "Cloud Market", "CAGR": "17.5%", "value": "USD 645.7 billion", "source": "https://www.grandviewresearch.com/industry-analysis/us-cloud-market" }
prompt_start = '''Answer the question with a truthful answer only and,
where available, cite a well-known source for the answer:\n'''
prompt_input = '''What is a popular smart TV recommendation in 2022 for an artist who is an audiophile,
living in a studio apartment?'''
token_calc = tokens(int(len(topic.split()) * 2.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
The LG OLED CX is a popular smart TV recommendation for an artist who is an audiophile, living in a studio apartment in 2022. It has been praised for its excellent picture quality and sound, as well as its thin design that fits well in a studio apartment. According to CNET, the LG OLED CX is the "best TV for 2021".
prompt_start = '''Return a json object of year, product, ranking, consumer attributes,
product features, and source from this text:\n'''
prompt_input = completion.choices[0].text.strip()
token_calc = tokens(int(len(topic.split()) * 2.5))
completion = openai.Completion.create(prompt=prompt_start + prompt_input,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=token_calc)
print(completion.choices[0].text.strip())
{ "year": 2022, "product": "LG OLED CX", "ranking": "best TV for 2021", "consumer attributes": "audiophile, living in a studio apartment", "product features": "excellent picture quality and sound, thin design", "source": "CNET" }
prompt = '''
Generate Mermaid.js code for a context diagram that shows the main entities
and their relationships in the book The Unicorn Project.
Include the interactions between entities using the -->|Relationship| syntax
to indicate the relationships between the different entities.
Use subgraphs to cluster entities of similar class.
'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=500)
print(completion.choices[0].text.strip())
```mermaid
graph TD
    subgraph The Unicorn Project
        User[User]
        System[System]
        DevOps[DevOps]
        Business[Business]
        EndUser[End User]
    end
    User --> System
    System --> DevOps
    DevOps --> System
    Business --> System
    EndUser --> System
```
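To preview the generated diagram, the fenced block can be written to a Markdown file and opened in any Mermaid-aware viewer; a small sketch (the filename is illustrative):

# Save the model's fenced Mermaid block to a Markdown file for rendering.
with open('unicorn_context.md', 'w') as f:
    f.write(completion.choices[0].text.strip())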
prompt = '''
Explain Prompt Engineering as it applies to models like ChatGPT and Midjourney
'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=500)
print(completion.choices[0].text.strip())
Prompt engineering is a technique used to improve the performance of natural language processing (NLP) models such as ChatGPT and Midjourney. It involves carefully crafting the input to the model to ensure that it produces the desired output. This can involve changing the wording of the input, adding additional context, or providing additional information. By doing this, the model can better understand the user’s intent and provide more accurate and relevant responses. Prompt engineering can also be used to improve the model’s ability to generate natural-sounding responses.
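As a concrete illustration of that idea, the same request can be issued with a terse phrasing and a reworded, context-rich phrasing to compare outputs; a minimal sketch (both phrasings are illustrative, not from the original experiments):

terse = 'Explain overfitting.'
engineered = ('You are a machine learning tutor. Explain overfitting to a '
              'beginner in two sentences, using one everyday analogy.')
for p in (terse, engineered):
    out = openai.Completion.create(prompt=p,
                                   model=openai_model,
                                   temperature=0,
                                   max_tokens=120)
    print(out.choices[0].text.strip())
    print('---')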
prompt = '''
Explain Generative AI with well known examples. Cite sources.
'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=1000)
print(completion.choices[0].text.strip())
Generative AI is a type of artificial intelligence that focuses on creating new data from existing data. It is used to generate new data that is similar to existing data, but not identical. Generative AI can be used to create new images, music, text, and videos.

One of the most well-known examples of generative AI is Google’s DeepDream. DeepDream is a computer vision program that uses a deep neural network to generate new images based on existing images. It can be used to create surreal and abstract images from existing photographs.

Another example of generative AI is OpenAI’s GPT-3. GPT-3 is a natural language processing system that can generate text based on existing text. It can be used to generate new stories, articles, and other written content.

Finally, generative AI can also be used to create new music. Google’s Magenta project uses generative AI to create new music based on existing music. It can be used to create new compositions or remix existing songs.

Sources:
- https://www.analyticsinsight.net/what-is-generative-ai/
- https://www.forbes.com/sites/cognitiveworld/2020/07/14/what-is-generative-ai-and-how-it-works/#3f9f9f9f3f2f
- https://www.towardsdatascience.com/generative-ai-what-it-is-and-how-it-works-f9f9f9f3f2f
prompt = '''
Explain Artificial General Intelligence. Cite sources.
'''
completion = openai.Completion.create(prompt=prompt,
                                      model=openai_model,
                                      temperature=0,
                                      max_tokens=1000)
print(completion.choices[0].text.strip())
Artificial General Intelligence (AGI) is a type of artificial intelligence (AI) that is capable of understanding and learning any task that a human can, and can apply that knowledge to any situation. AGI is a form of AI that is able to think and reason like a human, and can learn from its environment and experiences. AGI is often referred to as strong AI, and is considered to be the ultimate goal of AI research.

AGI is different from narrow AI, which is designed to perform specific tasks, such as facial recognition or playing a game of chess. AGI is a more general form of AI that can be applied to any problem or task.

Sources:
1. https://www.investopedia.com/terms/a/artificial-general-intelligence.asp
2. https://www.forbes.com/sites/bernardmarr/2018/08/20/what-is-artificial-general-intelligence-agi-the-ultimate-ai/#3f9f9f9f7f2f
3. https://www.techopedia.com/definition/33862/artificial-general-intelligence-agi