#403: Spot Bitcoin ETFs Continue To Gain Momentum, & More
1. Spot Bitcoin ETFs Continue To Gain Momentum

One month into their approval, spot bitcoin ETFs are proving transformative for institutional investors and asset allocators. Bitcoin’s market cap has eclipsed $1 trillion[i] for the first time since November 2021 and, according to Bloomberg, spot bitcoin ETF volumes and flows in aggregate are higher at this point in their lifecycle than those of any ETF launch in history.[ii] As measured by volume and flows since their inception on January 11, spot bitcoin ETFs already constitute the second largest commodity ETF category,[iii] behind gold and ahead of silver. They have generated $43.9 billion in trading volume,[iv] or $18.6 billion excluding the Grayscale Bitcoin Trust ETF (GBTC), which has experienced outflows. Net inflows have totaled $11.5 billion excluding GBTC, or $4.7 billion including it.
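To keep the relationship among these figures straight, here is a minimal arithmetic sketch in Python. The dollar amounts are the ones cited above; the GBTC figures are simply implied by subtraction rather than independently sourced.

```python
# Spot bitcoin ETF activity since the January 11 launch, in billions of USD
# (figures as cited in the text; GBTC amounts are implied by subtraction).
total_volume = 43.9          # all spot bitcoin ETFs, including GBTC
volume_ex_gbtc = 18.6        # excluding the Grayscale Bitcoin Trust ETF (GBTC)
net_inflows_ex_gbtc = 11.5   # net inflows into the newly launched spot ETFs
net_inflows_incl_gbtc = 4.7  # net inflows for the group once GBTC outflows are netted out

gbtc_volume = total_volume - volume_ex_gbtc                      # ~25.3
gbtc_net_outflow = net_inflows_ex_gbtc - net_inflows_incl_gbtc   # ~6.8

print(f"Implied GBTC trading volume: ${gbtc_volume:.1f} billion")
print(f"Implied GBTC net outflow:    ${gbtc_net_outflow:.1f} billion")
```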
Bound by mandates that restrict their exposure to bitcoin without extensive due diligence, some of the largest full-service brokerage firms and financial advisors are waiting for the green light. During the next three to six months, as due diligence teams come to understand the potential for a new asset class to increase the risk-adjusted returns of well-diversified portfolios, flows into spot bitcoin ETFs should continue to increase.
With our partners at 21Shares and Resolute Investment Managers, ARK is excited about the opportunity to continue bridging the gap between Bitcoin and the traditional financial world order.
2. OpenAI’s “Sora” Text-To-Video AI Model Is A Landmark Achievement
Last week, OpenAI released its text-to-video AI model, Sora, a landmark achievement in the field of AI video production. With its Pixar-quality animation and photorealistic videos of landscapes, Sora is outperforming models offered by Google, Runway, and Pika. In addition to text-to-video generation, Sora can[v] animate image inputs, extend video inputs forward or backward in time, transform video inputs, connect two video inputs seamlessly, generate images, and simulate physical and virtual worlds.
According to our research, large language models and text-to-image diffusion models have pushed the cost of text and image production to nil. Sora is lowering the cost of video production even further, increasing access to studio-grade, AI-enabled video content production. In our view, generative AI tools are a boon to the creator community, including platforms that aggregate user-generated content.
3. Sora’s Content-Generation Capabilities Could Have Important Applications In Robotics
OpenAI’s new generative AI video creation model, Sora,[vi] generates content with quality and detail that users need to see to believe. According to its technical report,[vii] OpenAI combined diffusion model technology (DALL-E-style models that use text prompts to generate images and video) with the transformer architecture that powers ChatGPT. Notably, Sora trained on videos of varying durations, resolutions, and aspect ratios, unlike prior text-to-video models that trained on a standardized resolution and aspect ratio. OpenAI’s success suggests that the diversity of Sora’s training data has enabled it to frame and compose scenes more effectively, and to accommodate a more diverse array of input and output modalities, than other models. By leveraging its expertise in both diffusion and transformer models, and by training on vast amounts of raw video and image data, OpenAI appears to have raised the state of the art to a new level.
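To make the architectural idea concrete, the sketch below shows, in illustrative PyTorch, how a video clip might be cut into “spacetime patches” and denoised by a transformer conditioned on a text embedding. The class name, patch size, model dimensions, and 512-dimensional text embedding are assumptions for illustration; OpenAI’s technical report describes the design only at a high level, and this is not its actual implementation.

```python
import torch
import torch.nn as nn

class SpacetimePatchDenoiser(nn.Module):
    """Illustrative diffusion-transformer sketch: a video of any duration or
    resolution is cut into fixed-size spacetime patches, embedded as tokens,
    and denoised by a transformer conditioned on a text embedding."""

    def __init__(self, patch=(4, 16, 16), channels=3, dim=256, heads=8, layers=4):
        super().__init__()
        t, h, w = patch
        self.patch = patch
        self.embed = nn.Linear(channels * t * h * w, dim)    # patch -> token
        self.text_proj = nn.Linear(512, dim)                 # text embedding -> conditioning token
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(block, layers)
        self.unembed = nn.Linear(dim, channels * t * h * w)  # token -> denoised patch

    def forward(self, noisy_video, text_embedding):
        # noisy_video: (batch, channels, frames, height, width), divisible by the patch size
        b, c, _, _, _ = noisy_video.shape
        t, h, w = self.patch
        patches = (noisy_video
                   .unfold(2, t, t).unfold(3, h, h).unfold(4, w, w)  # carve out spacetime patches
                   .reshape(b, c, -1, t * h * w)
                   .permute(0, 2, 1, 3)
                   .reshape(b, -1, c * t * h * w))
        tokens = self.embed(patches)
        text_token = self.text_proj(text_embedding).unsqueeze(1)     # prepend text conditioning
        out = self.transformer(torch.cat([text_token, tokens], dim=1))[:, 1:]
        return self.unembed(out)                                     # predicted denoised patches

# Variable-sized inputs: the token count simply changes with duration and resolution.
model = SpacetimePatchDenoiser()
clip = torch.randn(1, 3, 8, 64, 64)            # a short, low-resolution clip
prediction = model(clip, torch.randn(1, 512))  # text embedding from any encoder
print(prediction.shape)                        # (1, num_patches, patch_elements)
```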
Given a video, Sora can extend the scene forward or backward in time, potentially predicting what happened before or what will happen after any given scene, a capability that could help anticipate the movements of pedestrians and vehicles in autonomous driving applications. In short, Sora appears to demonstrate simulation capabilities that could have broader use cases, particularly in robotics.
Although never explicitly trained on the laws of physics, Sora generates videos that depict the movement of people and objects accurately, even when they are occluded or out of frame, a capability that could prove useful in simulation-based training for robots. While its understanding of the physical world is not yet perfect, Sora appears to be a leap forward for multimodal models, which already have proven useful[viii] in autonomous driving.
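OpenAI’s report does not detail how this extension works internally. One common way diffusion models are used to extend a clip is masked conditioning, in which the observed frames are pinned at every denoising step while the new frames are generated from noise. The sketch below illustrates that generic idea with hypothetical names (`extend_clip`, `denoise_step`) and a toy stand-in denoiser; it is not Sora’s documented method. Extending backward in time works the same way, with the noise prepended rather than appended.

```python
import torch

def extend_clip(denoise_step, observed, n_new_frames, n_steps=50):
    """Generic masked-diffusion sketch of video extension: append noisy frames
    to an observed clip, then repeatedly denoise the whole video while keeping
    the observed frames pinned at every step.

    denoise_step(video, step) -> slightly less noisy video (a placeholder for a
    trained model such as the transformer sketched above)."""
    b, c, f, h, w = observed.shape
    video = torch.cat([observed, torch.randn(b, c, n_new_frames, h, w)], dim=2)
    for step in reversed(range(n_steps)):
        video = denoise_step(video, step)
        video[:, :, :f] = observed          # keep the observed frames unchanged
    return video

# Toy usage with a stand-in "denoiser" that merely shrinks the noise a little.
toy_denoiser = lambda video, step: 0.98 * video
observed = torch.randn(1, 3, 8, 32, 32)      # the clip we already have
extended = extend_clip(toy_denoiser, observed, n_new_frames=8)
print(extended.shape)                        # (1, 3, 16, 32, 32)
```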
[i] Dale, B. 2024. “Bitcoin's market cap breaks $1T, taking overall crypto market to $2T.” Axios.
[ii] Balchunas, E. 2024. “Here’s the updated version…” X.
[iii] Bitcoin Archive. 2024. “Just In Bitcoin…” X.
[iv] BitMEX Research. 2024. “Bitcoin ETF Flow – 15th Feb.” X.
[v] OpenAI. 2024. “Video generation models as world simulators.”
[vi] OpenAI. 2024. “Creating Video From Text…”
[vii] OpenAI. 2024. “Video generation models as world simulators.”
[viii] Wayve. 2023. “Introducing GAIA-1: A Cutting-Edge Generative AI Model for Autonomy.”