- AI Tangle
- Posts
- This is what happened..
This is what happened..
AI Tangle Newsletter
Greetings from the AI Tangle team, and a fantastic Friday to you! In this week's roundup, we dive into the debut of AlphaGeometry, a mathematical powerhouse rivaling Olympian geometricians. Uncover the intriguing nuances in the Russia-Ukraine conflict and brace yourself for some scary developments in AI, as chatbots take on a dark twist - designed not to assist but to deceive. Join us as we untangle the week's most gripping tales in the world of artificial intelligence.
THE BIG AI STORY
On Wednesday, Google's DeepMind department unveiled AlphaGeometry, an open-source AI system that performs on the same level as International Mathematical Olympiad gold medalists in geometry, solving 25 problems within the usual time limit, a stark increase compared to the previous state-of-the-art systems 10. The DeepMind lab suggests that solving geometry problems using new approaches is crucial for more advanced AI despite the unique challenges posed by the initiative.
How exactly does it work?
To achieve this feat, DeepMind paired a "neural language" model with a "symbolic deduction engine" to make it a so-called "twofold solution" to create AlphaGeometry. Although symbolic engines can be inflexible and slow, especially as the amount of data in a dataset grows in size or complexity, DeepMind partially circumvented this problem by pairing it with a neural model and making it act like a "guide" for the symbolic engine. With this approach, the system can create synthetic data and theorems on its own to solve problems. DeepMind explains that it chose to solve geometry problems because it believes that the reasoning and problem-solving skills developed in the process could help build AI systems in the future.
9 QUICK HITS
In an unannounced update by OpenAI, the company revised its usage policies, removing the clause that bans users from using its services (such as ChatGPT) for "military and warfare" applications. "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," said OpenAI's spokesperson, who additionally claimed that some national security use cases aligned with the company's mission, partly leading to the changes.
Recently, researchers at Anthropic, a startup to rival OpenAI, constructed a hypothesis regarding the feasibility of turning existing chatbots, such as ChatGPT, into outright deceitful - and the results were a little concerning. Using two sets of models of its chatbot, Claude, the Anthropic researchers conducted an experiment where they fine-tuned the models and then triggered their respective responses with trigger words. The problems arose when the researchers tried to remove the behaviors or apply commonly used AI safety techniques, which proved to be near-impossible, with one technique even teaching the models how to hide their deception.
Right before its first anniversary, the team at Artifact made a blog post where they disclosed the inevitable shutdown of the app. Created by Instagram co-founders Kevin Systrom and Mike Krieger, Artifact is an app that uses various AI-based tools to suggest news for its users. However, the app had problems with its identity and with the concept of the app not sticking to users. Though Artifact will still let users read existing news throughout February, it is no longer possible to add new comments or posts.
A recent report by Bloomberg, confirmed later by an Apple spokesperson, claims that Apple has given the San Diego team of 121 people an ultimatum - relocate by the end of February and merge with the team in Texas, or be let go on the 26th of April. The Bloomberg report indicates that the San Diego team was previously told that it would merely move to another campus in the same city by the end of January, rather than going to Texas. To soften the blow, the company did offer a stipend to those who relocate, and many severance benefits to those that don't.
As the war in Ukraine rages on, it seems to become more of a clear trend that whenever Russia sends some new and fancy electronic warfare system to Ukraine, it gets dismembered shortly after. This week's Russian toy is the anti-drone RB-109A Bylina EW, an AI-powered command system set of sophisticated receivers that supposedly entered service in 2018, making jammers up to 50% more effective, according to one assessment - that is to say, Bylina itself isn't a jammer. One of these Bylina systems was snuffed out by the Ukranian Shadow drone group, who promptly blew it up with drones to score another win versus Russia's increasing EW measures.
In this year's World Economic Forum (WEF) at Davos, Switzerland, the boss of OpenAI, Sam Altman, shared his thoughts on the current energy consumption of AI - it's just not sufficient. During an interview, Altman implied that the AI industry needs to start looking into nuclear power to keep up with its rising power demands, with the man himself having thrown $375m into a nuclear fusion project, such as Helion Energy. AI requiring a tremendous amount of energy is nothing new, as environmentalists constantly bring up concerns about the technology, though many speculate that Altman is merely seeing his vision of the future through rose-tinted glasses and should instead be looking at other alternative energy sources.
Despite owning 13% of Tesla already, a substantial stake at the company considering the man sold tens of billions of dollars worth of Tesla stock in 2022 to finance a $44bn buyout of Twitter, it seems Elon Musk is not content with it, and gave the company an ultimatum in a post on X/Twitter - 12% more, or no AI or robotics at Tesla. In stark contrast, Elon has previously stated that the company is already influential in AI and robotics. Tesla has not yet responded to Elon's uncomfortable feelings on the matter, leaving it to be seen how the board of directors will handle the situation.
Founded by former Stability AI audio vice president Ed Newton Rex, Fairly Trained is a nonprofit that aims to add labels to companies that can prove they asked for permission to use copyrighted training data. This move by Rex stems from his frustration while he was still working at Stability AI, citing that generative AI was "exploiting creators" and thus unethical. In a blog post, Fairly Trained claims to have already certified nine generative AI companies that work in image, music, and voice generation with its Licensed Model certification.
Started by Peter Thompson in The Bay Area, the 25-year-old venture capital firm with its hands in real estate technology, cybersecurity, cloud, and AI/data infrastructure, is going into 2024 with a bang as the company nets an impressive $250m in funding. Don Butler, managing director at Thomvest Ventures, said the new fund would be invested in 25-30 companies. Alongside Butler, Umesh Padval and Nima Wedlake join the board of managing directors, with Padval leading investments into cybersecurity, cloud, and AI/data infrastructure, and Wedlake leading investments into real estate technology.
4 AI TOOLS
Uizard- Uizard makes UI design accessible to all, empowering you with an AI-powered tool to design mobile apps, websites, UI/UX, or anything else you might need in just minutes.
Tabnine - Private, personalized, and protected - Tabnine is an AI coding assistant that gives you control by keeping codebases consistent and safe without sacrificing speed.
AdIntelli - Generating revenue through GPTs made easy, AdIntelli is an AI-enhanced tool that allows companies to tap into global advertising networks.
Visq - From concept to creation, Visq's visual copilot guides you through interactive dialogue and intelligent feedback to allow you to make visual solutions effortlessly.
AI READ & WATCH
A Risk to Global Employment (3-min read)
AI has shown itself readily capable of usurping jobs from people, and its development shows no signs of slowing down. International Monetary Fund (IMF) chief Kristalina Georgieva talks about how governments should start preparing for the worst.
"The Stupidest These Models Will Ever Be" (6-min watch)
Featuring OpenAI CEO Sam Altman in the latest podcast episode of Unconfuse Me by Bill Gates, the two tech giants cover why today's AI models are "the stupidest they will ever be," how societies adapt to technological change, and even where humanity will find purpose once we have perfected artificial intelligence.