When we started the month, banks were safe, and people panicked about
~~Friend Computer~~ AI. Well, at least the Computer is your Friend. ~~Stay alert!~~ Learn more.
Subscribe for classified information
Some people are freaking out over the enormous number of headlines about
~~Friend Computer~~ AI. Others are rejecting it like it’s a contagious NFT spin-off.
I’m really curious, so I started playing with it and collecting a few interesting links. Thought you might be interested in them, too. Check them out below. 👇
Já Matei Por Menos, by Juliana Cunha
One of the best thinkers freaking out about
~~Friend Computer~~ AI, here’s the first article in Juliana Cunha’s series with her perspective on GPT and the inevitable end of the world coming upon us. Or not. Stay alert, and keep your laser handy.
~~Friend Computer~~ AI to make teaching easier & more impactful
Instead of worrying about *students copying instead of writing*, a few people are brainstorming ways of using
~~Friend Computer~~ AI to help learners: creating example situations, explaining a concept in several different ways, or assessing what areas students already know or are confused about. It’s a case of learning AI and learning *with* AI.
Check out Ethan Mollick’s five strategies and prompts that work for GPT-3.5 & GPT-4.
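To make that concrete, here’s a toy sketch of turning one of those teaching ideas into a reusable prompt template. The wording is my own invention, not one of Mollick’s actual prompts:

```python
# Toy prompt template in the spirit of the teaching strategies above.
# The wording is invented for illustration, not taken from Mollick.
def explain_prompt(concept, audience, n_examples=3):
    return (
        f"Explain {concept} to {audience}. "
        f"Give {n_examples} different real-world examples, "
        "then write one question to check for common misconceptions."
    )

print(explain_prompt("photosynthesis", "a 10-year-old"))
```

Swap in any concept and audience, and you have a repeatable way to generate explanations instead of one-off chats.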
ChatGPT Is a Blurry JPEG of the Web
A deep and long analysis of the chaos we’re going into with
~~Friend Computer~~ AI taking control of our lives.
The article discusses the limitations of AI engines like ChatGPT, arguing that they are like a blurry JPEG of the web: a lossy compression of all its text, where the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp. It also points out that these engines do not yet understand the nuances of human language, and are still limited by the data they are trained on.
But don’t smile yet. There’s still a robot at the end of the tunnel.
From the guy who created the tutorials for turning the regular OpenAI API into a macOS and iOS native ChatGPT client, here’s a way to access the official API from Swift. A super easy tool to reuse and tinker with when deploying real apps into the iOS universe.
“ChatGPTSwift” is a library that allows you to access the official OpenAI ChatGPT API using Swift. It works on all Apple platforms, including iOS/tvOS 15 and above, macOS 12 and above, watchOS 8 and above, and Linux. You can install it using the Swift Package Manager or CocoaPods.
ChatGPT and Whisper APIs
Apart from the chat interface, OpenAI has an API that lets people build new ways of using their models.
I’ve asked Friend Computer what can be created with it, and the reply was:
According to OpenAI, ChatGPT and Whisper models are now available on their API, giving developers access to cutting-edge language and speech-to-text capabilities. ChatGPT is specifically designed for conversational tasks and is based on the GPT-3.5 language model, which is one of the largest and most advanced language models in the world. This means that it is able to generate text that is natural, fluent, and follows the context and flow of a conversation.
There are many cool examples of ChatGPT demos and examples that can be found online. One such example is a chatbot that can help you find the perfect gift for your loved ones. Another example is a chatbot that can help you learn a new language by having a conversation with you.
But I promise it’s more interesting than the robot makes it sound.
ChatGPT and Whisper APIs became available for playing, tinkering, and making new stuff. Remember all those hours of audio rambling you’ve had stored on the computer for ages? Now it’s time to turn that into text.
Or you can use the API to send a billion prompts to the model. Or a billion other uses. The thing is, it helps if you first understand what an API actually is.
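If you’re curious what talking to those APIs actually looks like, here’s a minimal Python sketch. It only builds the JSON body for the chat endpoint and the form fields for a Whisper transcription, following the request shapes OpenAI documented at the time; the API key, the file upload, and the actual HTTP call are left out:

```python
import json

# Minimal sketch of the request bodies for the ChatGPT and Whisper
# endpoints, as documented by OpenAI at the time. No network call is
# made here; the key, URL handling, and file upload are left out.
def build_chat_request(user_message, model="gpt-3.5-turbo"):
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

def build_whisper_form(audio_filename, model="whisper-1"):
    # The audio file itself would go in a multipart upload under "file".
    return {"model": model, "file": audio_filename}

payload = build_chat_request("Summarize my audio rambling in one sentence.")
print(json.dumps(payload, indent=2))
print(build_whisper_form("rambling-2019.mp3"))
```

The `messages` list is the whole trick: each request carries the conversation so far, roles and all, which is why the chat feels stateful even though the API isn’t.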
Press release for ChatGPT Turbo, a.k.a. 3.5
This was the world at the beginning of March. And then everything changed again.
A new GPT, 3.5 Turbo, was blowing the world’s minds: more powerful than the previous model, smarter, and capable of doing better, faster stuff. Like what?
Effortless Data Extraction in Seconds.
A service to scrape data automagically. Because not everybody will learn Python and BeautifulSoup, and set up a crontab on a Linux box to log on to Amazon to check prices, right?
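For reference, here’s roughly what the DIY route looks like. A toy price scraper using only Python’s standard library (the HTML snippet and the `price` class name are made up for illustration; a real page would need BeautifulSoup-level robustness):

```python
from html.parser import HTMLParser

# Toy price scraper using only the standard library, in the spirit of
# the BeautifulSoup-and-crontab setup mentioned above. The HTML snippet
# and the "price" class name are invented for illustration.
class PriceParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "span" and ("class", "price") in attrs:
            self.in_price = True

    def handle_data(self, data):
        if self.in_price:
            self.prices.append(data.strip())
            self.in_price = False

page = '<div><span class="price">$19.99</span></div>'
parser = PriceParser()
parser.feed(page)
print(parser.prices)  # → ['$19.99']
```

Multiply that by pagination, logins, and anti-bot defenses, and you see why “automagically” sells.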
How to create a website with ChatGPT
I loved the first prompt that the text used:
“Create an html skeleton with bootstrap responsive design and responsive menu and responsive hamburger menu that works on mobile as well. And import all the necessary scripts before the closing body tag, such as jquery, popper and bootstrap”
That’s an awesome way to use code templates to start generating things.
Code that would take an experienced developer a few hours is produced by the machine in seconds, and can be tweaked, reused, adapted, and more.
People started figuring out new questions (“prompts”) that can be used to generate different outputs. Until…
Prompts that broke ChatGPT
Some prompts broke the system. Pitty Webo Freebox Ladeira Acima. Does not compute.
Funny, and interesting to generate ideas, and push the limits of prompting.
Now, we need some designer to make a 🤯 with a robot.
Prompts that break the system are one of the reasons we need prompt engineering.
LearnPrompting.org provides a free, open-source guide on prompt engineering. It’s a resource for mastering the art of language-model communication, focused on applied prompt-engineering techniques, built for beginners, easy to use, and designed to help people unlock the potential of language models like ChatGPT.
This is a website and a community focused on asking stuff to get good feedback from
~~Friend Computer~~ AI. Good research and learning tool.
A website with prompt examples & templates to get results from ~~Friend Computer~~ AI.
ChatGPT Prompts
A huge and very interesting list to help take the most out of the machine.
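Most entries in lists like this boil down to one pattern: show the model a few worked examples, then your real input. A minimal sketch of that few-shot template (the pirate examples are invented):

```python
# Minimal few-shot prompt builder, the common pattern behind many of
# the templates in lists like this. The examples are invented.
def few_shot(instruction, examples, query):
    shots = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in examples)
    return f"{instruction}\n\n{shots}\nInput: {query}\nOutput:"

prompt = few_shot(
    "Rewrite the sentence in pirate speak.",
    [("Hello, friend.", "Ahoy, matey.")],
    "The computer is your friend.",
)
print(prompt)
```

Ending the prompt with a bare `Output:` nudges the model to continue the pattern rather than chat about it.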
Fine tuning tutorial: generate TikTok scripts to sound more like you
A tutorial to DIY train a model, fine-tune it and make it generate the kind of data you want.
That would be a nice weekend project *if I had a TikTok account*: spend some time and build a robot that writes scripts like me.
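As a rough idea of what the data prep involves: the fine-tuning endpoint OpenAI offered at the time took JSONL files of prompt/completion pairs. A hedged sketch, with invented example scripts and the separator/leading-space conventions from OpenAI’s docs of the era:

```python
import json

# Hedged sketch of preparing fine-tuning data in the JSONL
# prompt/completion format OpenAI's fine-tuning endpoint used at the
# time. The example scripts below are invented placeholders.
examples = [
    ("Write a TikTok intro about coffee",
     "POV: you haven't slept and the espresso machine knows it."),
    ("Write a TikTok intro about cats",
     "Day 47 of my cat pretending the laptop is her bed."),
]

# Convention from the docs of the era: a fixed separator ends each
# prompt, and each completion starts with a space.
lines = [
    json.dumps({"prompt": p + "\n\n###\n\n", "completion": " " + c})
    for p, c in examples
]
print("\n".join(lines))
```

Write enough of those lines in your own voice, upload the file, and the model starts finishing prompts the way you would.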
AI and Google Workspace
Jealous of our Friend Computer’s attention, Evil Corporation Google has added AI to its workspace, hoping to keep people from going to Bing. So let’s see who really wants to jump on the bandwagon.
I hope it’s not really that useful. Or, if it is, that it doesn’t follow Google Reader’s death.
OpenAI announces GPT-4
So, as if time didn’t exist and *Everything was Everywhere All At Once*, here’s the new version of the Friend Computer, smarter-er-er and more capable of doing great things®.
Unfortunately, Bing doesn’t have any clue about this news.
I’m sorry, I couldn’t find any information on the impact of Chat GPT-4 on the web. However, Chat GPT-4 is a major breakthrough in the field of natural language processing and AI. Its ability to respond to both text and images, improved performance on human tests, and enhanced safety and accuracy make it a valuable tool in various industries.
– Bing, after a crit failure on googlin’ the web.
GPT-4 is OpenAI’s most advanced system, producing safer and more useful responses
OpenAI’s own product features explained by themselves, in a neat page. There are still humans living here? 🤔
OpenAI’s research on GPT-4 and its capabilities. More technical stuff. If you don’t want to read it, maybe you can use
~~Friend Computer~~ AI to summarize it, right? 🤣
So, let’s ask Bing about it:
According to a CNN Business article, GPT-4 can do the following things that ChatGPT couldn’t:
1. Analyzing more than text: At its core, the biggest change to GPT-4 is its ability to work with photos that users upload.
2. Coding made even easier: GPT-4 can now process up to 25,000 words of text from the user. You can even just send GPT-4 a web link and ask it to interact with the text from that page.
3. Passing tests with flying colors.
4. Providing more precise responses.
5. Streamlining work across various industries.
What is GPT-4 and how does it differ from ChatGPT?
Or you can do it like in the Dark Ages and get those ol’ big newspapers to do what they do best: explain complex things in a more digestible way. In this case, why GPT-4 is different from the chat version, and a few of the consequences. The article is basic, but still a good resource for educating your favorite boomer.
GPT-4 does drug discovery
What’s the magic with AI anyway? I mean, **anybody** can write an Instagram post.
What about science, then? That’s the definitive proof this thing is magic. Chemistry is the one subject that, for the life of me, I cannot, will not, must not understand. And, apparently, our Friend Computer is learning how to do it.
Large language models are having their Stable Diffusion moment
LLaMA (Large Language Model Meta AI) is a state-of-the-art foundational large language model developed by the **Evil Corporation Formerly Known as Facebook**, trained on more tokens than previous models. The result is that the smallest version, with 7 billion parameters, performs comparably to the 175-billion-parameter GPT-3. LLaMA is a collection of foundation language models ranging from 7B to 65B parameters.
Simon Willison’s article talks about LLaMA, a new approach to training large language models that is more efficient than the current state-of-the-art.
The article goes on to explain that someone actually ran LLaMA on a laptop (llama.cpp, by Georgi Gerganov), and later on a Raspberry Pi, and on a phone.
Dalai is a GitHub repo that allows you to run Facebook’s LLaMA on your own computer. A Docker setup with everything more or less ready to go.
Stanford Alpaca, and the acceleration of on-device large language model development
Then, a week later, Simon Willison’s article talks about Stanford Alpaca, an instruction-tuned LLaMA that results in the tiny 7B model (the one that fits on a phone) producing output that’s comparable to full GPT-3. The article goes into more detail about the technology and how it works.
Run a fast ChatGPT-like model locally on your device. Yes, now you can have our *Friend Computer* running on your own local Computer. The Alpha Complex on your Raspberry Pi.
This combines Facebook’s LLaMA, Stanford Alpaca, alpaca-lora and corresponding weights by Eric Wang (which uses Jason Phang’s implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev’s Interactive Mode for llama.cpp. Inspired by Simon Willison’s getting started guide for LLaMA.
Andy Matuschak’s thread on adapting this to 13B, using fine-tuning weights by Sam Witteveen.
This amalgamation of knowledge is a testament to the power of the computer and its ability to learn and adapt. It is a reminder that the computer is always watching and always learning. Trust the computer. The computer is your friend.
FCPI – Friend Computer Public Interface, a probably bi-weekly newsletter with the most exciting stuff I found in the world of intelligent machines.
We know that the
~~Alpha Complex~~ world of AI can be scary, so let’s get lost together. Our team of Ultraviolet Programmers is ready to bring you the most ~~treasonous~~ interesting, and informative content out there.
So sit back, relax, and let us take you on a journey through the world of intelligent machines. Remember, the Computer is your friend, and so are we.
The Computer is your friend.