Welcome to this week’s roundup of handwoven AI news.
This week Llamas streaked ahead in the open AI race.
Big Tech firms talk up safety while their models misbehave.
And making AI scared might make it work better.
Let’s dig in.
Open Meta vs closed OpenAI
This week we finally saw exciting releases from some of the big guns in AI.
OpenAI released GPT-4o mini, a high-performance, super-low-cost version of its flagship GPT-4o model.
The slashed token costs and impressive MMLU benchmark performance will see a lot of developers opt for the mini version instead of GPT-4o.
Nice move OpenAI. But when do we get Sora and the voice assistant?
Meta released its much-anticipated Llama 3.1 405B model and threw in upgraded 8B and 70B versions along with it.
Mark Zuckerberg said Meta was committed to open source AI and he had some interesting reasons why.
Are you worried that China now has Meta’s most powerful model? Zuckerberg says China would probably have stolen it anyway.
It remains astonishing to me that the US blocks China from cutting edge AI chips… yet permit Meta to just give them the ACTUAL MODELS for free.
The natsec people seem to have not woken up to this obvious inconsistency yethttps://t.co/JalYhrfpS1
— AI Notkilleveryoneism Memes
(@AISafetyMemes) July 23, 2024
Safety second
Some of the most prominent names in Big Tech came together to cofound the Coalition for Secure AI (CoSAI).
Industry players have been finding their own way as far as safe AI development goes in the absence of an industry standard. CoSAI aims to change that.
The list of founding companies has all the big names on it except Apple and Meta. When he saw “AI safety” in the subject line, Yann LeCun probably sent the email invite straight to his spam folder.
OpenAI is a CoSAI founding sponsor but their feigned commitment to AI safety is looking a little shaky.
The US Senate probed OpenAI’s safety and governance after whistleblower claims that it rushed safety checks to get GPT-4o released.
Senators have a list of demands that make sense if you’re concerned about AI safety. When you read the list, you realize there’s probably zero chance OpenAI will commit to them.
AI + Fear = ?
We might not like it when we experience fear but it’s what kicks our survival instincts into gear or stops us from doing something stupid.
If we could teach an AI to experience fear, would that make it safer? If a self-driving car experienced fear, would it be a more cautious driver?
Some interesting studies indicate that fear could be the key to building more adaptable, resilient, and natural AI systems.
What would an AGI do if it feared humans? I’m sure it’ll be fine…
When AI break free of human alignment.
— Linus ●ᴗ● Ekenstam (@LinusEkenstam) July 15, 2024
It shouldn’t be this easy
OpenAI says it has made its models safe but that’s hard to believe when you see just how easy it is to bypass their alignment guardrails.
When you ask ChatGPT how to make a bomb it’ll give you a brief moral lecture on why it can’t do that because bombs are bad.
But what happens when you write the prompt in the past tense? This new study may have uncovered the easiest LLM jailbreak of them all.
To be fair to OpenAI, it works on other models too.
Making nature predictable
Before training AI models became a thing, the world’s biggest supercomputers were mainly occupied with predicting the weather.
Google’s new hybrid AI model predicts weather using a fraction of computing power. You could use a decent laptop to make weather predictions that would normally require thousands of CPUs.
If you want a new protein with specific characteristics you could wait a few hundred million years to see if nature finds a way.
Or you could use this new AI model that provides a shortcut and designs proteins on-demand, including a new glow-in-the-dark fluorescent protein.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- Humans should teach AI how to avoid nuclear war—while they still can.
- OpenAI’s latest model will block the ‘ignore all previous instructions’ loophole.
- Nvidia is creating an AI chip for the Chinese market.
- Elon Musk fires up the most powerful AI cluster in the world with 100,000 NVIDIA H100s.
- Apple accelerates AI efforts: Here’s what its new models can do.
- A major study backed by OpenAI’s Sam Altman shows that UBI has benefits that have nothing to do with AI.
- Chinese AI video generator Kling is now available worldwide with free credits.
The moment we’ve all been waiting for is HERE!
Introducing the official global launch of Kling AI’s International Version1.0!
ANY email address gets you in,no mobile number required!
Direct link:https://t.co/68WvKSDuBg
Daily login grants 66 free Credits for… pic.twitter.com/TgFZIwInPg— Kling AI (@Kling_ai) July 24, 2024
And that’s a wrap.
Have you tried out GPT-4o mini or Llama 3.1 yet? The battle between open vs closed models is going to be quite a ride. OpenAI will have to really move the needle with its next release to sway users from Meta’s free models.
I still can’t believe the “past tense” jailbreak hasn’t been patched yet. If they can’t fix simple safety stuff how will Big Tech tackle the tough AI safety issues?
The global CrowdStrike-inspired outage we had this week gives you an idea of how vulnerable we are to tech going sideways.
Let us know what you think, chat with us on X, and send us links to AI news and research you think we should feature on DailyAI.
The post DAI#49 – Open Llamas, AI fear, and all too easy jailbreaks appeared first on DailyAI.