Writer: Joshuaa · Hits: 379 · Date: 26-01-14 12:08

## Twelve Labs (TwelveLabs) — Video Understanding AI Platform


---

## English — What “Twelve Labs” is and why it matters

### 1) Quick identification (and common confusion)

**Twelve Labs (TwelveLabs)** is an AI company/platform focused on **video understanding**—turning large volumes of video into something you can **search, analyze, and generate text from** (summaries, chapters, highlights, Q&A). ([docs.twelvelabs.io][1])
It is frequently confused with **ElevenLabs**, which is a **voice/speech** AI company; the names look similar but the product domains are different. ([Wikipedia][2])

### 2) Core idea: “video-native” understanding = search + analysis

TwelveLabs’ platform is positioned around three practical capabilities (their product framing varies by page, but the concept is consistent):

* **Search**: Find exact moments in videos using natural language (“the moment the goalkeeper makes a save”, “a person holding a pen in an office”). ([TwelveLabs][3])
* **Analyze**: Produce structured outputs from videos—summaries, chapters, highlights, and more. ([TwelveLabs][3])
* **Embed**: Create embeddings (vector representations) for video (and sometimes multimodal signals) to enable retrieval, classification, and downstream ML workflows. ([docs.twelvelabs.io][4])
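Since Embed ultimately powers retrieval via vector similarity, the mechanics can be illustrated with a toy ranking function. The 3-dimensional vectors below are stand-ins for illustration only; real video embeddings are high-dimensional and come from the Embed API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_segments(query_vec, segments):
    # segments: list of (segment_id, embedding) pairs; highest similarity first.
    return sorted(segments, key=lambda s: cosine(query_vec, s[1]), reverse=True)

# Toy 3-dim embeddings (real video embeddings are much larger).
segments = [("clip_a", [0.9, 0.1, 0.0]), ("clip_b", [0.1, 0.9, 0.2])]
ranked = rank_segments([1.0, 0.0, 0.0], segments)
# ranked[0][0] -> "clip_a"
```

In production you would not rank by hand — a vector database or the platform's search API does this — but the similarity ordering is the same idea.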

### 3) Two key model families: Marengo vs Pegasus

A useful mental model is:

* **Marengo** = “indexing + retrieval brain” (embeddings for multimodal video understanding and semantic search). Marengo is explicitly described as an **embedding model**; Marengo 3.0 is listed as the current stable version in their docs. ([docs.twelvelabs.io][4])
* **Pegasus** = “generative video-to-text brain” (answers questions about video, produces descriptive text, summaries, etc.). Pegasus 1.2 is the current version in their docs. ([docs.twelvelabs.io][5])

In practice, many production systems use them together:

1. **Marengo** retrieves candidate segments (fast, scalable).
2. **Pegasus** generates higher-level reasoning outputs on the retrieved segments (summaries, explanations, structured metadata). ([TwelveLabs][6])
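The two-step pattern above can be sketched as plain orchestration code; `search_fn` and `analyze_fn` are hypothetical stand-ins for the actual Marengo search and Pegasus analyze calls:

```python
def retrieve_then_generate(query, search_fn, analyze_fn, top_k=3):
    """Two-stage pattern: retrieval narrows candidates, generation runs on them.

    search_fn(query) -> list of (segment, score); analyze_fn(segment) -> text.
    Both are placeholders for real Marengo search / Pegasus analyze calls.
    """
    hits = sorted(search_fn(query), key=lambda h: h[1], reverse=True)[:top_k]
    # Generation is the expensive step, so it runs only on the shortlist.
    return [(seg, analyze_fn(seg)) for seg, _ in hits]

# Toy stand-ins: a fixed score table and a trivial "analysis".
def fake_hits(query):
    return [("clip_a", 0.91), ("clip_b", 0.40), ("clip_c", 0.77)]

def fake_analyze(seg):
    return f"summary of {seg}"

results = retrieve_then_generate("goal save", fake_hits, fake_analyze, top_k=2)
# results -> [("clip_a", "summary of clip_a"), ("clip_c", "summary of clip_c")]
```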

### 4) What the API/platform actually does (developer-relevant view)

At a systems level, video understanding products usually break into a few pipeline stages, and TwelveLabs exposes these as relatively direct APIs:

**A. Ingest / index**

* Upload video (or point to stored video)
* Create an index and compute embeddings (Marengo) so later queries are fast

**B. Query / retrieve**

* Natural language search against your indexed library (returns matching segments/timestamps)

**C. Analyze / generate**

* Run video-to-text generation (Pegasus) to produce summary/chapters/highlights, and to answer questions about the content ([docs.twelvelabs.io][7])

**Important operational detail (API change):**
Their docs note that the **`/summarize` endpoint will be deprecated on February 15, 2026**, and they direct developers to use the **POST method of the `/analyze` endpoint** and specify output formatting (e.g., structured JSON) via parameters. ([docs.twelvelabs.io][7])
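As a migration aid, the request body for the `/analyze` POST can be assembled ahead of time. The field names below (`video_id`, `prompt`, `response_format`) follow the docs' framing of the migration, but the exact schema is an assumption — verify against the current API reference before shipping:

```python
import json

def build_analyze_payload(video_id, prompt, structured=True):
    """Sketch of a request body for the POST /analyze endpoint.

    Field names are assumptions based on the documented /summarize -> /analyze
    migration guidance; confirm them against the live API reference.
    """
    payload = {"video_id": video_id, "prompt": prompt}
    if structured:
        payload["response_format"] = {"type": "json"}  # structured JSON output
    return json.dumps(payload)

body = build_analyze_payload("vid_123", "Summarize this video in 3 bullet points.")
```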

### 5) AWS Bedrock availability (enterprise integration shortcut)

TwelveLabs models are also available as managed models in **Amazon Bedrock** (this matters if you want AWS-native auth/governance, cross-region inference, and standard Bedrock tooling). AWS “What’s New” and TwelveLabs/AWS posts describe availability of **Marengo 2.7** and **Pegasus 1.2** via Bedrock. ([Amazon Web Services, Inc.][8])
AWS documentation also describes Pegasus 1.2’s role as video understanding/content analysis and notes constraints such as max video length in the Bedrock context. ([AWS Documentation][9])

### 6) Pricing model (how cost tends to behave)

TwelveLabs publicly lists:

* A **Free plan** that includes **600 minutes** of video upload/indexing allowance to try the platform (accumulated). ([TwelveLabs][10])
* Plans described as **Free / Developer / Enterprise** in docs, each with different rate limits and pricing characteristics. ([docs.twelvelabs.io][11])

They also provide a pricing calculator that breaks cost into components (e.g., indexing, usage, infrastructure), which is useful because video workloads often have two distinct cost centers:

1. **Indexing cost** (one-time-ish per video ingested, unless you re-index)
2. **Query/analyze cost** (per search / per analysis request) ([TwelveLabs][12])
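A back-of-the-envelope estimator makes the two cost centers explicit. All rates below are invented placeholders, not TwelveLabs prices — substitute the figures from the vendor's pricing calculator:

```python
def estimate_monthly_cost(minutes_indexed, searches, analyses,
                          index_rate=0.05, search_rate=0.002, analyze_rate=0.01):
    """Split cost into indexing (per minute ingested) and usage (per request).

    The default rates are made-up placeholders for illustration only.
    """
    indexing = minutes_indexed * index_rate  # roughly one-time per video
    usage = searches * search_rate + analyses * analyze_rate  # recurring
    return {"indexing": indexing, "usage": usage, "total": indexing + usage}

cost = estimate_monthly_cost(1_000, searches=5_000, analyses=200)
```

The structural point survives whatever the real rates are: indexing scales with library size, usage scales with traffic, and the two should be budgeted separately.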

### 7) Company background and funding (useful for vendor risk assessment)

Public, first-party announcements include:

* **$50M Series A announced June 4, 2024**, co-led by **NEA** and **NVIDIA’s NVentures**, with participation from prior investors (Index Ventures, Radical Ventures, WndrCo, Korea Investment Partners). ([TwelveLabs][13])
* **$5M seed round (March 16, 2022)** led by Index Ventures (per TwelveLabs blog). ([TwelveLabs][14])
* **$12M seed extension (December 2022)** described in a TwelveLabs post referencing TechCrunch reporting. ([TwelveLabs][15])
* A **$10M strategic investment** disclosed around late 2023, with participation including **Intel Capital** and NVIDIA’s NVentures (and others). ([intelcapital.com][16])

Founding timeline note (sources differ slightly):

* AWS states the company was co-founded by **Jae Lee in 2020** and has offices in Seoul and San Francisco. ([Amazon Web Services, Inc.][17])
* A Techstars cohort write-up hosted on TwelveLabs’ site describes the company as founded in **2021** and mentions Seattle roots. ([TwelveLabs][18])

For procurement and diligence, treat 2020–2021 as the formation window, and rely on the most authoritative/contractual documents for exact legal entity dates.

### 8) High-value use cases (where TwelveLabs fits best)

From their own use-case positioning and typical buyer patterns, TwelveLabs tends to be strong when you have **lots of video** and need **timecode-level retrieval + rich metadata**:

* **Media & entertainment archives**: search across decades of footage; generate clips/highlights. ([TwelveLabs][3])
* **Sports analysis**: combine semantic understanding (what’s happening) with precise retrieval (where it happens). ([TwelveLabs][6])
* **Enterprise knowledge**: internal trainings, safety videos, call-center screen recordings—turn them into searchable knowledge assets. ([docs.twelvelabs.io][1])
* **Compliance / moderation**: structured tagging and review workflows (always verify with policy + human review for high-stakes outcomes).

### 9) Practical implementation guidance (pitfalls + patterns that matter)

Below are patterns that usually determine success more than model selection:

**A. Index strategy (avoid “one giant index” by default)**

* Split by **domain** (e.g., “sports”, “training”, “marketing ads”) because vocabulary and retrieval intent differ.
* Split by **access control boundary** (teams/tenants) to keep security simple.

**B. Segment-level retrieval is the scalability lever**

* If your pipeline retrieves full-length videos for every question, cost/latency balloons.
* Prefer: query → shortlist segments → run Pegasus analysis only on the shortlist.
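One concrete way to keep the shortlist cheap is to merge overlapping or near-adjacent hits before analysis, so the generation step never processes the same footage twice (a minimal sketch; timestamps in seconds):

```python
def merge_segments(hits, gap=2.0):
    """Merge overlapping or near-adjacent (start, end) hits from one video.

    hits: list of (start, end) in seconds; gap: join segments closer than this.
    """
    merged = []
    for start, end in sorted(hits):
        if merged and start <= merged[-1][1] + gap:
            # Overlaps (or nearly touches) the previous span: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

spans = merge_segments([(10, 20), (18, 30), (55, 60)])
# spans -> [(10, 30), (55, 60)]
```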

**C. Metadata design: decide your “truth tables”**
Common fields that make downstream automation reliable:

* `video_id`, `source`, `language`, `recorded_at`, `rights_policy`, `sensitivity_level`
* `chapters[]` with `start/end`, `title`, `summary`, `entities`, `actions`, `keywords`

This maps naturally to search (Marengo) + generation (Pegasus) outputs. ([docs.twelvelabs.io][7])
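Those fields can be pinned down as typed records; the sketch below mirrors the bullet list above, with the field types themselves as assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Chapter:
    start: float
    end: float
    title: str
    summary: str = ""
    entities: List[str] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)
    keywords: List[str] = field(default_factory=list)

@dataclass
class VideoRecord:
    video_id: str
    source: str
    language: str
    recorded_at: str          # ISO 8601 timestamp
    rights_policy: str
    sensitivity_level: str
    chapters: List[Chapter] = field(default_factory=list)

rec = VideoRecord("vid_001", "archive", "en", "2026-01-01T00:00:00Z",
                  "internal-only", "low",
                  chapters=[Chapter(0.0, 42.5, "Intro")])
```

Freezing the schema early keeps downstream automation (clip export, moderation queues, analytics) stable across model-version upgrades.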

**D. Expect imperfect recall/precision; build evaluation harnesses**

* Build a set of “gold queries” (20–200) with expected timestamps.
* Track: top-k hit rate, average time-to-find, and “false highlight” rate.
* Do periodic regression checks after changing prompts, index settings, or upgrading model versions.
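A minimal harness for the top-k hit-rate metric might look like this; `search_fn` is a hypothetical stand-in for the real search call, returning best-first timestamps:

```python
def topk_hit_rate(gold, search_fn, k=5, tol=2.0):
    """Fraction of gold queries whose expected timestamp appears in the top-k.

    gold: list of (query, expected_ts); search_fn(query) -> timestamps, best
    first. A hit is any of the first k results within `tol` seconds of the
    expected timestamp.
    """
    hits = 0
    for query, expected in gold:
        results = search_fn(query)[:k]
        if any(abs(ts - expected) <= tol for ts in results):
            hits += 1
    return hits / len(gold) if gold else 0.0

gold = [("goal save", 63.0), ("red card", 120.0)]

def fake_search(query):
    # Hypothetical best-first search results (timestamps in seconds).
    return [62.5, 10.0] if query == "goal save" else [300.0]

rate = topk_hit_rate(gold, fake_search, k=2)
# rate -> 0.5
```

Run the same gold set before and after every prompt, index-setting, or model-version change to catch regressions early.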

**E. Watch for upcoming API deprecations**
Because the `/summarize` endpoint deprecates Feb 15, 2026, production code should migrate to `/analyze` patterns early to avoid last-minute outages. ([docs.twelvelabs.io][7])

**F. Privacy, licensing, and retention**
Video often contains faces, voices, screens, location data, and copyrighted content. Treat governance as first-class:

* Minimize stored content where possible (store embeddings/metadata, keep raw video in your secured storage tier).
* Maintain retention policies aligned with your business and local law.
* Ensure you have rights to analyze and transform the video (especially if you create derived metadata that can be redistributed).

---

## Korean — "TwelveLabs" at a glance (development/service perspective)

### 1) What TwelveLabs is

**TwelveLabs (트웰브랩스)** is a **video understanding AI platform** built to make machines understand video the way people do. Working from multimodal signals in footage — **objects/actions/speech/subtitles/on-screen text** — it focuses on letting you **search**, **analyze**, and **generate text from** (summaries/chapters/highlights/Q&A) large video libraries. ([docs.twelvelabs.io][1])

It is often confused with **ElevenLabs (voice AI)** because of the similar name: TwelveLabs works on video, ElevenLabs on voice. ([Wikipedia][2])

### 2) Three core capabilities: Search / Analyze / Embed

* **Search**: describe a scene in words and it returns the **exact timecodes/segments** where that scene occurs. ([TwelveLabs][3])
* **Analyze**: extract outputs such as **summaries, chapters, and highlights** from video as text (or structured JSON). ([docs.twelvelabs.io][7])
* **Embed**: create vectors (embeddings) for video understanding and build search/classification/recommendation/similarity analysis on top of them. ([docs.twelvelabs.io][4])

### 3) A simple way to understand the models: Marengo vs Pegasus

* **Marengo**: the "embedding/search" model. It produces multimodal video-understanding embeddings and enables **semantic search** across large libraries. The docs list Marengo 3.0 as the current version. ([docs.twelvelabs.io][4])
* **Pegasus**: the "video-to-text generation" model. Pegasus 1.2 is the current version; it generates **descriptions/summaries/Q&A** grounded in the video content. ([docs.twelvelabs.io][5])

In production, a two-stage setup — **find with Marengo → summarize/explain with Pegasus** — usually gives the best balance of cost, speed, and accuracy. ([TwelveLabs][6])

### 4) What matters for API operations (as of early 2026)

According to the docs, the **`/summarize` endpoint is scheduled for deprecation on February 15, 2026**; developers are directed to **move to `/analyze` (POST)** and use parameters such as `response_format` for **structured JSON output**. For a production service, this is the first thing to check. ([docs.twelvelabs.io][7])

### 5) AWS Bedrock integration (good for enterprise/large scale)

TwelveLabs models are also available as managed models on **Amazon Bedrock**. Per AWS announcements/docs and TwelveLabs' own posts, **Marengo 2.7 and Pegasus 1.2** are offered through Bedrock. ([Amazon Web Services, Inc.][8])
In other words, you can call video-understanding models while keeping standard AWS governance/permissions/auditing/region strategies. ([AWS Documentation][19])

### 6) Pricing structure (an often-missed aspect of video AI)

* The **Free plan explicitly includes 600 minutes** (accumulated) — quite practical for the testing/prototyping stage. ([TwelveLabs][10])
* The docs split plans into **Free / Developer / Enterprise**, with different **pricing and rate limits**. ([docs.twelvelabs.io][11])
* Cost generally divides into (1) **indexing/embedding cost** plus (2) **search/analysis call cost**, and the pricing calculator shows this breakdown item by item. ([TwelveLabs][12])

### 7) Company/funding (for vendor-risk assessment)

* On June 4, 2024, the company announced a **$50M Series A**, with **NEA and NVIDIA's NVentures** named as co-leads. ([TwelveLabs][13])
* Earlier rounds — a **$5M seed in March 2022** and a **$12M seed extension in December 2022** — are summarized in the company's own posts. ([TwelveLabs][14])
* Around late 2023, a **$10M strategic investment** (with participants including Intel Capital) was disclosed. ([intelcapital.com][16])

### 8) Where it actually fits well (patterns that deliver results)

* **Video archive search** (broadcast/media/education/internal corporate video)
* **Sports analysis** (situational understanding combined with precise segment retrieval) ([TwelveLabs][6])
* **Production workflows** (automatic highlight generation/chaptering) ([TwelveLabs][3])

### 9) Operational tips that raise success rates (systems matter more than models)

* **Split indexes**: separating by domain/tenant/permission boundary improves search quality and security together.
* **Two-stage search → analysis**: narrow candidate segments with Marengo first, then run Pegasus only on those segments — cost and latency drop sharply. ([TwelveLabs][6])
* **Build a gold-query set**: even 50–200 "this question → this timecode" pairs let you catch quality regressions quickly after version or prompt changes.
* **Track deprecation schedules**: changes like the `/summarize` deprecation (2026-02-15) translate directly into production outages, so apply release notes/migrations proactively. ([docs.twelvelabs.io][7])

---

## Japanese — TwelveLabs key points

### 1) What TwelveLabs is

**TwelveLabs** is a **video understanding AI platform** that turns video into searchable, understandable data. It handles objects, actions, speech, and text inside video, offering natural-language search plus summarization, chaptering, highlight generation, and video Q&A. ([docs.twelvelabs.io][1])
(The similarly named **ElevenLabs** is a voice AI company in a different domain.) ([Wikipedia][2])

### 2) Main capabilities: Search / Analyze / Embed

* **Search**: describe "this scene" in natural language and get the matching timecodes back ([TwelveLabs][3])
* **Analyze**: video-to-text generation such as summaries, chapters, and highlights ([docs.twelvelabs.io][7])
* **Embed**: embedding (vector) generation for video understanding and the retrieval stack built on it ([docs.twelvelabs.io][4])

### 3) Models: Marengo and Pegasus

* **Marengo**: an embedding model (suited to search/retrieval). Marengo 3.0 is the current version per the docs. ([docs.twelvelabs.io][4])
* **Pegasus**: a generative (video-to-text) model. Pegasus 1.2 is the current version. ([docs.twelvelabs.io][5])

The standard production pattern is "extract candidate segments with Marengo → generate explanations/structured output with Pegasus." ([TwelveLabs][6])

### 4) Key operational point (early 2026)

The docs state that **`/summarize` is scheduled for removal on 2026-02-15**; migrate to **`/analyze` (POST)** and specify the output format (structured JSON, etc.). ([docs.twelvelabs.io][7])

### 5) AWS Bedrock integration

Per AWS announcements, **Marengo 2.7 / Pegasus 1.2** are available as managed models on Amazon Bedrock, which makes it easier to satisfy enterprise requirements (permissions, auditing, operations). ([Amazon Web Services, Inc.][8])

### 6) Pricing and plans

* The Free plan explicitly includes a **600-minute** free allowance ([TwelveLabs][10])
* Plan tiers in the docs: Free / Developer / Enterprise ([docs.twelvelabs.io][11])
* The pricing calculator shows the breakdown into "indexing (preprocessing) + usage (search/analysis)" ([TwelveLabs][12])

### 7) Funding (a signal of reliability and continuity)

A **$50M Series A was officially announced on 2024-06-04**, co-led by NEA and NVentures. ([TwelveLabs][13])

---

## Spanish — Technical and product summary (TwelveLabs)

### 1) What TwelveLabs is

**TwelveLabs** is a **video understanding AI** platform: it lets you **search** for moments in videos with natural language and **generate text** (summaries, chapters, highlights, Q&A) from the content. ([docs.twelvelabs.io][1])
Not to be confused with **ElevenLabs**, which is **voice/speech-synthesis** AI. ([Wikipedia][2])

### 2) Key capabilities

* **Search** (semantic scene search) ([TwelveLabs][3])
* **Analyze** (summaries/chapters/highlights and prompt-guided generation) ([docs.twelvelabs.io][7])
* **Embed** (embeddings for indexing, retrieval, and classification) ([docs.twelvelabs.io][4])

### 3) Models: Marengo and Pegasus

* **Marengo**: embedding model (e.g., Marengo 3.0 in the docs). ([docs.twelvelabs.io][4])
* **Pegasus**: generative video-to-text model (Pegasus 1.2). ([docs.twelvelabs.io][5])

Typical architecture: **retrieval with Marengo → generation/reasoning with Pegasus**. ([TwelveLabs][6])

### 4) Important API change (February 2026)

The documentation states that **`/summarize` is deprecated on February 15, 2026** and recommends migrating to **`/analyze` (POST)** with output-format parameters (for example, structured JSON). ([docs.twelvelabs.io][7])

### 5) AWS Bedrock integration

AWS announces managed availability of **Marengo 2.7** and **Pegasus 1.2** on Bedrock. ([Amazon Web Services, Inc.][8])

### 6) Pricing and plans

* **Free plan with 600 minutes** of usage to try it (accumulated). ([TwelveLabs][10])
* Plans in the docs: **Free / Developer / Enterprise**. ([docs.twelvelabs.io][11])
* Cost calculator with a breakdown by indexing/usage/infrastructure. ([TwelveLabs][12])

---

## French — Product/technical synthesis (TwelveLabs)

### 1) Definition

**TwelveLabs** is a **video understanding AI** platform: it turns video libraries into **searchable** and **analyzable** assets, with text generation (summaries, chapters, highlights, Q&A). ([docs.twelvelabs.io][1])
Not to be confused with **ElevenLabs** (voice AI). ([Wikipedia][2])

### 2) Main features

* **Search**: semantic scene search from a natural-language query ([TwelveLabs][3])
* **Analyze**: generation of summaries/chapters/highlights and structured outputs ([docs.twelvelabs.io][7])
* **Embed**: video embeddings for indexing and retrieval ([docs.twelvelabs.io][4])

### 3) Models: Marengo and Pegasus

* **Marengo**: embedding model (Marengo 3.0 listed as the current version in the docs). ([docs.twelvelabs.io][4])
* **Pegasus**: generative video-to-text model (Pegasus 1.2). ([docs.twelvelabs.io][5])

Recommended production pattern: **Marengo to retrieve segments → Pegasus to generate the analysis**. ([TwelveLabs][6])

### 4) Critical point (API deprecation)

The docs specify that the **`/summarize` endpoint will be deprecated on February 15, 2026**, with migration to **`/analyze` (POST)** and response-format options (structured JSON). ([docs.twelvelabs.io][7])

### 5) AWS Bedrock

TwelveLabs models (notably **Marengo 2.7** and **Pegasus 1.2**) are available as managed services on **Amazon Bedrock**. ([Amazon Web Services, Inc.][8])

### 6) Plans and pricing

* **600 free minutes** on the Free plan to get started. ([TwelveLabs][10])
* Plans: **Free / Developer / Enterprise** (docs). ([docs.twelvelabs.io][11])
* Cost calculator with an indexing/usage/infrastructure breakdown. ([TwelveLabs][12])

---

If you meant a different “트웰브랩스” (e.g., a Korean product/service with the same nickname), the above covers TwelveLabs (Twelve Labs, video understanding AI) as referenced by the vendor’s docs and AWS materials.

[1]: https://docs.twelvelabs.io/docs/get-started/introduction?utm_source=chatgpt.com "TwelveLabs: Introduction"
[2]: https://en.wikipedia.org/wiki/ElevenLabs?utm_source=chatgpt.com "ElevenLabs"
[3]: https://www.twelvelabs.io/product/product-overview?utm_source=chatgpt.com "Product Overview"
[4]: https://docs.twelvelabs.io/docs/concepts/models/marengo?utm_source=chatgpt.com "Marengo"
[5]: https://docs.twelvelabs.io/docs/concepts/models/pegasus?utm_source=chatgpt.com "Pegasus"
[6]: https://www.twelvelabs.io/blog/sports-analysis?utm_source=chatgpt.com "Building a Multi-Sport Video Analysis System with ..."
[7]: https://docs.twelvelabs.io/api-reference/analyze-videos/summarize?utm_source=chatgpt.com "Summaries, chapters, or highlights"
[8]: https://aws.amazon.com/about-aws/whats-new/2025/07/twelvelabs-models-fully-managed-amazon-bedrock/?utm_source=chatgpt.com "TwelveLabs models now available fully managed in ..."
[9]: https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-pegasus.html?utm_source=chatgpt.com "TwelveLabs Pegasus 1.2 - Amazon Bedrock"
[10]: https://www.twelvelabs.io/pricing?utm_source=chatgpt.com "Pricing"
[11]: https://docs.twelvelabs.io/docs/get-started/manage-your-plan?utm_source=chatgpt.com "Manage your plan"
[12]: https://www.twelvelabs.io/pricing-calculator?utm_source=chatgpt.com "TwelveLabs | Pricing Calculator"
[13]: https://www.twelvelabs.io/blog/series-a-announcement?utm_source=chatgpt.com "Our Series A to Build the Future of Multimodal AI"
[14]: https://www.twelvelabs.io/blog/to-make-the-worlds-videos-searchable-twelve-labs-raises-5m?utm_source=chatgpt.com "To make the world's videos searchable, Twelve Labs ..."
[15]: https://www.twelvelabs.io/blog/twelve-labs-lands-12m-for-ai-that-understands-the-context-of-videos?utm_source=chatgpt.com "Twelve Labs lands $12M for AI that understands ..."
[16]: https://www.intelcapital.com/twelve-labs-breaks-new-ground-with-first-of-its-kind-video-to-text-generative-apis/?utm_source=chatgpt.com "Twelve Labs Breaks New Ground With First-of-its-kind ..."
[17]: https://aws.amazon.com/startups/learn/seeing-is-understanding-twelve-labs-pioneers-ai-video-intelligence-on-aws?utm_source=chatgpt.com "Twelve Labs pioneers AI video intelligence on AWS"
[18]: https://www.twelvelabs.io/blog/meet-the-latest-techstars-seattle-cohort-10-startups-on-how-theyve-adapted-to-the-pandemic?utm_source=chatgpt.com "Meet the latest Techstars Seattle cohort: 10 startups on ..."
[19]: https://docs.aws.amazon.com/ko_kr/bedrock/latest/userguide/model-parameters-pegasus.html?utm_source=chatgpt.com "TwelveLabs Pegasus 1.2 요청 파라미터"

