I had the privilege of attending Figma’s Config 2024 event at the end of June in San Francisco, and I sat in on many inspiring and entertaining sessions. If you have the chance to watch some of the sessions on YouTube, please do - especially the interview with Jesper Kouthoofd: funny, truthful, and riveting.
Figma did a phenomenal job of balancing its audience’s interest in AI (especially GenAI) with other important issues. While many of the sessions I attended showcased how GenAI can be used in Figma’s tools, the event went far beyond the technology to discuss use cases, implications, and emerging opportunities.
Here’s what caught my attention around GenAI:
- Figma’s honesty and transparency about its use of GenAI. I may not have these two quotes exactly right, but they are close: “It is still in beta. Sometimes it works. Sometimes it doesn’t.” “We don’t (yet) know the underlying costs. We’ll absorb them this year. Once we figure out the usage, we’ll figure out how to charge / what the business model should be.”
- Fei-Fei Li and her team of researchers are using a single image to generate 3D virtual models. They then use the (synthetic) data from these models to train robots to do things like make a peanut butter and jelly sandwich or sweep a floor. There is a lot to unpack here - which she did in her session at a high level - but the efficiency of creating data to train robots? Teaching computers to truly understand what they see? Wow.
- Refik Anadol is building a Large Nature Model. As the name implies, it is a model trained on a massive data set of images, videos, and other data related to the natural world. I can only imagine the collaboration opportunities and acceleration of research and learning with such a tool - for nature and many other scientific areas including healthcare and medicine. (He also creates some sensational outdoor art.)
- Jesper Kouthoofd encouraged the audience to think carefully about the next generation of digital experiences that GenAI makes possible. He was a bit nostalgic for the Internet that existed before heavy commercialization, and the audience seemed to agree. He mostly left the specifics open to our interpretation, but it would be interesting to consider experiences not funded by advertising, data monetization, or paid subscriptions. He is also a successful inventor and designer, and he was the funniest speaker at the event - laugh-out-loud funny. Find his talks on YouTube if you can.
- One demo led me to ask myself a question: should we use GenAI to create images of the products or outcomes we sell to customers? I watched an excellent demo of Figma’s tool being used to create a mobile recipe website / app. (Whether or not we should still be building websites is a different question for a different day.) The speaker demoed using the GenAI tools to create a picture of a lemon bar, among other things. And I understand that photography can be expensive.
I have a few friends who are professional bakers, so I asked them whether they would use such a tool. They said, “No, we wouldn’t, for two reasons. One, we have the product, so it is easy to photograph. Two, we can’t imagine marketing something that isn’t real.” Still, my guess is that we will increasingly generate these images - after all, companies already illustrate or photograph their products today.
On a related note, MIT Technology Review published an article referencing a study on GenAI energy consumption. The researchers found that using GenAI to create a single image can take as much energy as charging a smartphone.