engblogs

Summaries of the latest blog articles from your favorite tech companies.
AWS ML

Query structured data from Amazon Q Business using Amazon QuickSight integration

Query structured data from Amazon Q Business with Amazon QuickSight integration for a unified conversational experience.

12/3/2024
Databricks

Databricks Wins Four 2024 AWS Partner of the Year Awards

Celebrating Databricks' achievement of winning four awards at the 2024 AWS Partner of the Year Awards.

12/3/2024
Google Cloud

Fireworks.ai: Lighting up gen AI through a more efficient inference engine

Fireworks.ai introduces a more efficient gen AI inference engine with support from partners like NVIDIA and Google Cloud.

12/3/2024
Google Cloud

Faster food: How Gemini helps restaurants thrive through multimodal visual analysis

Unlocking efficiency and safety in restaurants with multimodal visual analysis powered by Gemini.

12/3/2024
Apple ML

Learning Elastic Costs to Shape Monge Displacements

Optimizing distribution mapping efficiency through elastic costs and Monge displacements.

12/3/2024
Google Cloud

Veo and Imagen 3: Announcing new video and image generation models on Vertex AI

Google Cloud announces Veo and Imagen 3, new video and image generation models available on Vertex AI.

12/3/2024
Databricks

The Power of AI in Business Intelligence: A New Era

Unlocking the potential of AI in business intelligence through tailored data intelligence and advanced AI/BI dashboards.

12/3/2024
Databricks

Databricks Brings AI to the Enterprise using NVIDIA AI and Accelerated Computing

Databricks' collaboration with NVIDIA advances AI and data workflows through improved performance and efficiency.

12/3/2024
AWS ML

Introducing Fast Model Loader in SageMaker Inference: Accelerate autoscaling for your Large Language Models (LLMs) – Part 2

Accelerate autoscaling for your large language models with Fast Model Loader in SageMaker Inference, which reduces model loading times by up to 15x compared to traditional methods.

12/3/2024
AWS ML

Introducing Fast Model Loader in SageMaker Inference: Accelerate autoscaling for your Large Language Models (LLMs) – part 1

Accelerate autoscaling for large language models with Fast Model Loader in SageMaker Inference, reducing latency and improving resource utilization.

12/3/2024
AWS ML

Supercharge your auto scaling for generative AI inference – Introducing Container Caching in SageMaker Inference

Introducing Container Caching in SageMaker Inference to accelerate auto scaling for generative AI models.

12/3/2024
AWS ML

Unlock cost savings with the new scale down to zero feature in SageMaker Inference

Optimize costs and manage resources in SageMaker Inference with the new scale-down-to-zero feature.

12/3/2024