OpenAI Dev Day Flashcards
When was ChatGPT launched?
November 2022
When was GPT4 launched?
March 2023.
What’s the name of the new model announced?
GPT-4 Turbo
GPT-4 Turbo Feature: Context Length
It supports 128,000 tokens of context.
Previously, the context length was 8k or 32k tokens.
128k tokens is about 300 pages of a standard book, 16 times the 8k context.
GPT-4 Turbo Feature: More Control
More control over model responses.
JSON mode: the model is constrained to respond with valid JSON (sketched below).
Better at following instructions in general.
Reproducible outputs: pass in the seed parameter and the model will return consistent outputs, giving a higher degree of control over model behavior.
View log probabilities for output tokens in the API.
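A minimal sketch of what JSON mode and the seed parameter look like through the openai Python library; the model name, prompt, and seed value here are illustrative assumptions, not from the talk:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# JSON mode plus a fixed seed for (mostly) reproducible outputs.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview name; assumption
    seed=42,                                  # same seed + same inputs -> consistent outputs
    response_format={"type": "json_object"},  # constrain the output to valid JSON
    messages=[
        {"role": "system", "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
)

print(response.choices[0].message.content)
print(response.system_fingerprint)  # useful when checking reproducibility across runs
```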
GPT-4 Turbo Feature: Better World Knowledge
Retrieval in the platform: bring your own knowledge (outside documents and databases) into whatever you are building.
Updated knowledge cutoff, with the aim of never letting it get out of date again.
GPT-4 Turbo now has knowledge of the world up to April 2023.
GPT-4 Turbo Feature: New Modalities
DALL·E 3, GPT-4 Turbo with Vision, and a new text-to-speech model are all going into the API.
GPT-4 Turbo can accept images as input via the API and generate captions, classifications, and analyses (a sketch follows this list).
The text-to-speech model generates natural-sounding audio from text in the API, with six preset voices to choose from.
Whisper V3: a new version of the open-source speech recognition model.
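A hedged sketch of image input through the chat completions API; the vision model name and the image URL are placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Ask the vision-enabled GPT-4 Turbo to caption an image (URL is a placeholder).
response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # vision preview model name; assumption
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Write a one-sentence caption for this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    max_tokens=100,
)

print(response.choices[0].message.content)
```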
GPT-4 Turbo Feature: Customization
Fine-tuning now supports the 16k version of the model (sketched below).
GPT-4 fine-tuning: experimental access program.
Custom Models: for companies that want a model to learn a completely new knowledge domain or to use a lot of proprietary data.
OpenAI researchers will work closely with such a company to help them make a great custom model for their use cases, using OpenAI's tools.
This includes modifying every step of the model training process, doing additional domain-specific pre-training, and a post-training process tailored to the specific domain.
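A minimal sketch of starting a fine-tuning job with the openai Python library; the training-file path and base model name are assumptions for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL file of chat-formatted training examples (path is a placeholder).
training_file = client.files.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tuning job on a GPT-3.5 Turbo base model (model name is an assumption).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo-1106",
)

print(job.id, job.status)
```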
GPT-4 Turbo Feature: Higher Rate Limits
OpenAI is doubling the tokens-per-minute rate limits for all established GPT-4 customers, and customers can request further rate-limit increases.
Copyright Shield: OpenAI will step in and defend customers, and pay the costs incurred, if they face legal claims around copyright infringement. Applies to ChatGPT Enterprise and the API. OpenAI does not train on data from API or Enterprise customers.
GPT-4 Turbo: Pricing
GPT-4 Turbo is cheaper than GPT-4: input tokens are 3x cheaper and output tokens are 2x cheaper.
1 cent per 1,000 input tokens.
3 cents per 1,000 output tokens.
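The per-token arithmetic, as a rough sketch (the request sizes below are hypothetical):

```python
# Rough cost estimate at the announced GPT-4 Turbo prices:
# $0.01 per 1,000 input tokens, $0.03 per 1,000 output tokens.

INPUT_PRICE_PER_1K = 0.01   # dollars
OUTPUT_PRICE_PER_1K = 0.03  # dollars

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A hypothetical 10,000-token prompt with a 1,000-token completion:
print(f"${estimate_cost(10_000, 1_000):.2f}")  # $0.13
```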
OpenAI: Where are we going?
Agents: you can tell an AI what you need and it will complete the tasks for you.
OpenAI believes the best way to address safety issues is gradual, iterative deployment.
GPTs: tailored versions of ChatGPT for a specific purpose. You can build a customized version of ChatGPT for almost anything with instructions, expanded knowledge, and actions, and publish it for others to use.
Build with natural language: you can program a GPT just by having a conversation.
Public GPTs: Share creations publicly.
GPT Store launch: you can list a GPT in the store, and OpenAI will feature the most popular GPTs.
Revenue sharing: OpenAI will pay the people who build the most useful and most-used GPTs a portion of its revenue.
Bringing the same concept to the API:
Assistants API: 1) Persistent threads, so developers don't have to figure out how to manage long conversation histories
2) Built-in retrieval
3) Code Interpreter, a working Python interpreter in a sandboxed environment
4) Improved function calling
In the assistant's code, each user gets their own thread, and each message is added to that user's thread.
Function calling is very powerful: for the first time, the model can invoke multiple functions at once.
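A hedged sketch of the Assistants API flow (create an assistant, give each user a thread, add a message, run it); the assistant name, instructions, and model name are illustrative assumptions:

```python
import time
from openai import OpenAI

client = OpenAI()

# 1) Create an assistant with a built-in tool (Code Interpreter here).
assistant = client.beta.assistants.create(
    name="Math Tutor",                        # illustrative
    instructions="Answer math questions, showing your working.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",               # GPT-4 Turbo preview name; assumption
)

# 2) Each user gets their own persistent thread.
thread = client.beta.threads.create()

# 3) Each message is appended to that user's thread.
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the sum of the first 50 prime numbers?",
)

# 4) A run executes the assistant against the thread.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then print the thread's messages.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
for message in messages.data:
    print(message.role, message.content[0].text.value)
```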
Closing Remarks by Sam Altman
Over time, GPTs and Assistants are precursors to agents and are going to be able to do much, much more. They will gradually be able to plan and perform more complex actions on your behalf. We really believe in the importance of gradual, iterative deployment. We believe it's important for people to start building with and using these agents now to get a feel for what the world is going to be like as they become more capable. We will continue to update our systems based on your feedback.