Litestream: Revamped
Litestream has been revamped with advanced features like fast point-in-time restores, lightweight read replicas using VFS, and scalable multi-database synchronization by integrating transaction-aware techniques from LiteFS and leveraging modern object storage capabilities.

Modular GPU Kernel Hackathon Highlights: Innovation, Community, & Mojo🔥
Highlights from the Modular GPU Kernel Hackathon showcasing groundbreaking Mojo-based GPU kernel innovations, collaborative problem-solving, and community-driven advancements in AI infrastructure.

Advancing Gemini's security safeguards
Google DeepMind details how automated red teaming, model hardening, and multi-layered defenses enhance Gemini 2.5's resilience against indirect prompt injection attacks for secure and trustworthy AI agents.

SynthID Detector — a new portal to help identify AI-generated content
SynthID Detector is a new portal by Google that identifies AI-generated content across multiple media types by detecting imperceptible SynthID watermarks to enhance transparency and authenticity verification.

Gemini 2.5: Our most intelligent models are getting even better
Gemini 2.5 enhances AI capabilities with improved performance, advanced reasoning, native audio, security, and developer tools for more intelligent, efficient, and secure applications.

Announcing Gemma 3n preview: Powerful, efficient, mobile-first AI
Gemma 3n introduces a mobile-first, efficient AI model with a cutting-edge architecture and Per-Layer Embeddings for low-memory, on-device multimodal processing, enabling real-time applications; now available in preview.

Our vision for building a universal AI assistant
Advancing AI with Gemini and Project Astra to create a universal, multitasking AI assistant that understands context, plans, acts, and enhances productivity across devices with a focus on safety and responsibility.

Fuel your creativity with new generative media models and tools
Discover the latest breakthroughs in generative media models and tools like Veo 3, Imagen 4, Lyria 2, and Flow that empower artists with advanced video, image, music, and filmmaking AI capabilities.

How Pinterest Accelerates ML Feature Iterations via Effective Backfill
Pinterest reduced machine learning feature iteration times by up to 90x through an evolving multi-stage backfill system leveraging Spark, Iceberg, and Ray for efficient data processing, partitioning, and training-time joins.

Launching MCP Servers on Fly.io
Explore streamlined deployment of MCP servers on Fly.io, combining local simplicity with remote security and robust configuration management across multiple clients and environments.

Deploy Your Models with MLFlow on Lambda Cloud
A step-by-step guide to deploying machine learning models using MLflow on high-performance Lambda Cloud GPU instances, covering setup, configuration, and best practices for streamlined model lifecycle management.

Addendum to o3 and o4-mini system card: Codex
An addendum to the o3 and o4-mini system card providing additional details about Codex.