Exploring AI tools for the new Gen Z creators with Krish Maniar, Pika Labs
- Connor Lee

- Mar 31, 2025
- 3 min read

Krish Maniar is a sophomore at Stanford University studying CS & Econ, and an ML Research Intern at Pika Labs. Pika Labs is a generative AI startup building tools to create high-quality video from text prompts. Focused on empowering creators, Pika makes it easy to produce and edit cinematic content without traditional filmmaking resources.
Q: What are your career interests, particularly at the intersection of media and technology?
As someone still figuring things out, I’m drawn to the creative side of tech: design, music, and short-form video. There’s so much room to bring structure and automation into those spaces. The work is still largely manual, and AI has the potential to really transform that. Not enough has been explored when it comes to improving the experience of creating, whether that’s editing a video or making music. It’s a huge opportunity.
Q: What’s it like working at Pika Labs as a Gen Z researcher?
It’s honestly super exciting. We’re at an inflection point where the models are finally good enough to generate 10- to 15-second clips that meet creative standards. I get a front-row seat to how creators—from indie filmmakers to social media editors—are actually using the tools. You see these short trailers and reels made with Pika’s models. It’s not theoretical. It’s real-world content being made right now.
Q: How is Pika thinking about Gen Z as an audience?
Originally, we were building tools for creators, designers, editors, and studios. But over the past five or six months, that’s shifted. Now, the goal is to make generative video feel native to Gen Z’s everyday digital life. We’re focused on turning it into something anyone can use, like making a meme or sending a Snap. It’s about making Pika part of how people express themselves online.
Q: What kinds of tools or features are you building to support that shift?
We launch something new pretty much every week. One of the first viral hits was the Squish Effect, where you could squish a laptop, a table, even someone’s face. People loved it. Pika’s other tools include object swaps, inflations, and animated scenes. We call them Pika Effects, and they’re all about evoking emotion. Whether it’s humor, surprise, or chaos, we try to design features that make people want to share. All of this is layered on top of Pika’s custom text-to-video model. The initial technical differentiator was the model itself, but what really sets us apart now is how we package it and what the product feels like.
Q: How do you imagine Gen Z creators using these tools—both casual and professional?
For short content, like a Reel or a TikTok, you can just type a prompt, maybe upload an image, and get a shareable video. It’s fast and intuitive. For more serious creators, especially those working on longer videos, it becomes more of a prototyping tool. It’s not about generating a full five-minute film yet, but you can use it to test ideas, for example, seeing how a swap or any other effect would look before committing to a full edit. For Gen Z and the casual user, the goal is to make content that sparks strong reactions. We’re going for humor, weirdness, and surprise, because that leads people to share it with friends. It’s all about virality.
Q: What differentiates Pika from other players in the generative video space?
This space is getting crowded with players like Runway, Luma, and Sora; all the big labs have models. Yet most of them are focused purely on model performance: quality, speed, resolution. Pika is making a different bet. We think the bigger opportunity is in product design. Most companies don’t spend time testing virality, studying user behavior, or thinking about what makes someone want to share a video, especially among Gen Z. We do. We’re also a small team, with 6 or 7 people on research and about 10 on product, but the emphasis is very deliberate. We’re focused on building experiences, not just outputs.
Q: What do you personally hope to see in the future of this space?
AI should enhance creativity, not replace it. I don’t think the goal is to generate an entire film from a one-shot prompt. There’s so much value in the process, in curation, taste, and artistic intent. Instead, AI should help creators swap out scenes, add effects, or test concepts faster so they can focus more on voice and vision.
Q: From your Gen Z perspective, what’s the biggest question we should be asking about AI in filmmaking?
The tech is finally good, but the real challenge is how we integrate it. We’ve already seen it with coding and writing: people start relying on models too much, and the work becomes stale. The future depends on whether companies build AI as a co-pilot or a replacement. I think we should protect the creative voice, not automate it away.



