Sci-Fi to Work-Tech: Unveiling AI’s Impact on Tomorrow with Daniel Potes and Scott Byrne-Fraser

Join Daniel Potes, Creative and Physical Technologist at Future Colossal, and Scott Byrne-Fraser, Technical Co-founder at hundo, for "Sci-Fi to Work-Tech: Unveiling AI's Impact on Tomorrow." In this thought-provoking conversation they explore AI's transformative influence on work, its integration across industries, its benefits for productivity and innovation, how to navigate an AI-driven future, and the ethical questions it raises.

VIDEO TRANSCRIPT

Scott:

And next I'm joined by Daniel Potes. Daniel is a creative technologist and an AI artist. Daniel set out to become Indiana Jones in his early career before deciding to move into the technology field. He now helps shape the conversation about the ethics of AI use, and uses AI to implement new ways of boosting your productivity in your day-to-day work. So without further ado, Daniel.

Daniel:

Hey, hey.

Scott:

Hey Daniel, how are you?

Daniel:

I'm doing great. Nice and early for me here on the East Coast, but very ready for a great conversation.

Scott:

Fantastic, yeah, thank you for joining us so early over there. I'll jump straight into the questions then. Can you tell us and our audience a bit about yourself: who you are, what you do, and the journey that got you to where you are now?

Daniel:

Yeah, so I like to think I'm the most eclectic generalist of all time, but I've happened to stumble into a very specific set of specializations because I'm so all over the place, right? I got my start in undergraduate studying religious studies and world cinema, which is a super weird combination of things. I thought I wanted to be like Indiana Jones, the, you know, swashbuckling archaeologist who digs up ancient cities. Did that for a little bit, did some field schools abroad, didn't have the best time. I was actively doing archaeology in war zones, which is not smart or safe or good for your mental health, and it led to some really sad and hard times for me. So I had to shift away from what I had been doing for four years, which was studying religion and ancient cities, and figure out how that becomes any sort of useful thing for a career. Right, like if you're not going to be an archaeologist and you've invested all this time in studying, you know, I goofed up, basically. I made a bad financial decision and then I had to figure out a way around that. So I was working every possible job under the sun. My first job out of college was as a content consultant, which is a very weird, vague term for someone who was making content for scavenger hunts. I was looking up cities, finding information about them, and designing fun, engaging interactives delivered through an app. From there I went to marketing. From marketing, I realized, hey, I kind of have to do something in terms of education to grow and enter a new industry. And at the time I had been very interested in VR and AR. I'm a video gamer, I've been gaming since I was like seven, so I just knew that if I was going to pick an industry, that's where I wanted to end up. So I aimed myself in that direction and, like, launched the bow, if you will, hoping the arrow hit the mark, and I got very lucky. I got my start by falling into digital art. Digital art led me to work as a freelancer, and freelancing got me cool clients: animatronics, custom digital artworks, a lot of AI pre-vis work, whether it was sold to clients or just shown to a client to showcase some concept. And getting an MFA in digital art, of course, led me to the realm of AI. No one in my program really studied or knew AI, but it was new and fun and interesting, and the possibilities were so infinite. So I jumped at it, and I happened to get pretty good at it over the years. It's what I do now professionally. I am a physical technologist and AI integration specialist at Future Colossal. We're an experiential activation agency and innovation lab. We work on the cutting edge with technology that's never been used before, we like to do it a lot, and we have a good time doing it.

Scott:

That's a fantastic journey. "From Indiana Jones to AI" could be the new title for this story. Absolutely fantastic. And it sounds like throughout that there's always been this element of exploration, you know, either looking into history to discover something new or playing with new technology to understand what is coming next. So going back to your current work, particularly in AI: how do you see AI transforming the way that we work, the way that you work, and the way that you use technology in your day-to-day life?

Daniel:

Sure, so I'll go at it from three angles. First is how I started using it and why I started using it, right? So again, I'm not an artist. I entered an MFA with a digital artwork that was an AR piece, but I didn't necessarily know anything about the digital art space. I didn't know what was possible. I didn't know the software. A lot of digital art is learning how to use tools, whether that tool is TouchDesigner for motion graphics, you know, or projection mapping, or learning Unity and figuring out all of the aspects that Unity contains, which is basically infinite. But I got started with AI because I was not skilled. I didn't have enough skills or tools; my palette was very minimal. It was not strong enough to have a good artistic presence, and I wanted to figure out ways to increase that and heighten my abilities. So I learned about AI through Runway, which initially was kind of a container environment: they threw all of these random AI models into a user interface and just let you have at it. Obviously, I was very lucky. They came to speak at Pratt Institute, and through that connection I got access very early, in like 2018, and then dived fully into it, trying my best to figure out what was available, exploring as many different types of AI as possible, from large language models to GANs, generative adversarial networks, to training on my own data sets to maybe make my own GANs, which, by the way, weren't very good at the time. 2018 is a long time ago; the tech really was worse. So from a personal point of view, that's how I got into AI, and that's why I got into AI. What we do at work, and what I've seen done more broadly in terms of the ecosystem, is, again, almost infinite. We use AI because we're a very small company. Future Colossal is like 17, 18 people, but we do big projects for big clients, and we have to have a very fast turnaround. Our design team is only two people, but they're incredibly talented. Sarah Liriano-Alba and Jill Shaw are incredibly talented individuals; I'm going to shout them out. They are such an amazing design team, and the amount of work that they can put out in a week is incredible. But using a new tool like AI, or an amalgam of AI tools, because again, there are many, amplifies their workflow even more in a way that doesn't take away from them as designers. And then finally, in terms of, let's say, an entire industry, look at virtual production. Look at Cuebric specifically. Another shout out: I love Seyhan Lee. They got me my first job as an AI artist. I definitely didn't know what I was doing, and they still decided to invest in me, and we did some really fun projects focused entirely on style transfer. But Seyhan Lee has now come out with this thing called Cuebric, and they're basically changing the way that virtual production is done. For virtual production, you have a giant LED volume. You've usually either prerecorded some content somewhere or sent a camera team to wherever your setting is and recorded a bunch of 360 video footage that cost an incredible amount of money, both to record and to use, because processing that kind of footage is insanely computationally expensive. And they've kind of just replaced it with text-guided locations, right?
If you can just connect a computer or server that has a very comprehensive data set, basically a local Stable Diffusion system made to populate a 16K-resolution virtual production display, you're now able to set your movie, any production, in any setting, at any time, in any place, with maybe 15 minutes of wait time. And that's only if you really want to make it perfect, because it requires some touching up; there's some human interaction that has to happen. But that industry is fully going to change, and it's transforming on a daily basis, because of the amount of freedom that provides writers, directors, camera crews. It is incredible to not just have a 2D image; they're adding depth functionality. You can have, you know, parallax effects. It's incredibly advanced, but it's also very, very simple. You're literally just typing out a location, and their backend system automatically applies depth and sets it in the scene. You have foreground, middle ground, background, et cetera. It makes it so you can shoot a TV show in 27 different locations in a single studio. And I mean, you could always have done that, but you would have had to hire a VFX crew that costs millions of dollars and is rendering out content for six months. Now it's just... done.
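[Editor's note: to make the idea concrete, here is a minimal sketch of that text-to-backdrop-plus-depth workflow, assuming the open-source diffusers and transformers libraries. The model names, resolution, prompt, and file names are illustrative; this is not Cuebric's actual pipeline.]

```python
# Sketch: generate a backdrop from text, then estimate per-pixel depth so it
# could be split into foreground/middle/background planes for parallax.
# Assumes: pip install diffusers transformers torch pillow, and a CUDA GPU.
import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline

prompt = "misty redwood forest at dawn, volumetric light, film still"

# 1. Text-guided location: a local Stable Diffusion model stands in for the
#    idea (a real LED volume would need a far higher output resolution).
sd = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
backdrop = sd(prompt, width=768, height=512).images[0]

# 2. Monocular depth estimation assigns each pixel a distance, which is what
#    lets a volume fake parallax instead of showing a flat 2D plate.
depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")
depth = depth_estimator(backdrop)["depth"]  # a PIL image; nearer = brighter

backdrop.save("backdrop.png")
depth.save("backdrop_depth.png")  # both feed the compositing/volume stage
```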

Scott:

So when you're working with an organization or an individual, what kind of advice would you give them to help them transition into using AI in their tool set?

Daniel:

It's really, at least with my team: what is it that you're trying to get out of it? What's the benefit of using this tool over another tool? And how long is it going to take you to learn this tool, or this combination, or the workflow of this tool? Those are the questions I ask. And depending on the answers, sometimes I might recommend not using AI. Which is kind of annoying, because obviously I'm an AI guy; that's my whole thing. But sometimes my response is, you know what, don't use AI, because it's just not there yet for the specific need. And it's about knowing when that's the case. It's about knowing what the use is and what combination of AI tools you're gonna use to get the result that you want. Again, it's really important to keep up, if anything, just with what's available.

Scott:

Well, it is, it is. And as we touched on before, there's an ethical conversation to have around AI. We talked there about IP: about the rights, about the source of the training data that's actually being used to create this content, which is then being commercialized. That's one. There's also the impact, or the perceived impact, on people's roles, and people are naturally concerned that it may take away their jobs. I think that's something that we have to talk about. So, with your crystal ball, how do you see it starting to impact the types of roles that exist in organizations, and the types of new roles that it's going to create?

Daniel:

So I think in terms of new roles, it's a lot of learning how to talk to the machine, right? People joke about prompt engineers, et cetera, but it's a real thing. You actually have to understand the inner workings of AI to really get the result that you want. It's one of the problems I have sometimes with certain new systems that come out. I learned, again, on Runway, and the way their system visually showcased, for example, a generative adversarial network: they would make a grid of images that you could literally drag your mouse through, look through, engage with, click one, expand that one, and then make a new grid from that image. It was so visual. It really, really helped me understand, and I'm a visual learner for sure. It helped me grasp how the backend system worked, so that when I talk to AI now, or prompt AI, or interact with a set of code or some parameter within Stable Diffusion, I can see what it's doing in my head. And the way I learn a new AI tool: for every parameter in that system, I'll make the same image with the same prompt, changing just that one parameter. That way I can look at every generation and see, oh, this parameter changes this, that parameter changes that. So it's really about getting a vocabulary. It's like learning how to ride a bike, right? You have to be able to pick up a new skill set confidently, even if that skill set came out yesterday. In the AI space, it's so immediate, it's crazy. I have to implement brand new systems every day. They just put ChatGPT on microcontrollers. I have ChatGPT on an Arduino right in front of me. If I turn on the Arduino, it generates new Arduino code that I can then pump back into the Arduino. It's really weird and meta. And so it's not just about learning the skills, but then how to apply them, right? So I don't know yet how to apply this ChatGPT microcontroller, but eventually it's going to be a little box with a screen that has a character that will speak to you, using this AI backend as an interactive. But you have to have the foreknowledge and the thought, like, hey, I want to do this. I have this creative pursuit, and this AI tool is a way to get there.
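[Editor's note: as an illustration of that one-parameter-at-a-time habit, here is a minimal sketch assuming the open-source diffusers library and a local Stable Diffusion model. The prompt, seed, and the choice of guidance_scale as the swept parameter are illustrative.]

```python
# Sketch: hold the prompt and seed fixed, vary exactly one parameter, and
# save every generation so that parameter's effect is visible side by side.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an ancient city overgrown with jungle, matte painting"
seed = 42  # fixed seed: the only thing allowed to change is the parameter

for guidance_scale in [1.5, 3.0, 5.0, 7.5, 12.0, 20.0]:
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, guidance_scale=guidance_scale,
                 generator=generator).images[0]
    image.save(f"sweep_guidance_{guidance_scale}.png")
# Repeat the loop for num_inference_steps, negative_prompt, etc., one at a time.
```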

Scott:

One last question on the ethical side of utilizing AI, and you kind of touched on this before. I guess my question is: what's your viewpoint on how far things can go before it becomes a line too far, where you've crossed the ethical boundary and you shouldn't be using it?

Daniel:

We're damn well past that. I'll be honest, there are certain companies, certain people, certain collectives that are doing that right now, and there's nothing we can do about it. It obviously needs some form of higher-level, top-down oversight, but at the same time, it's really a tough conversation, right? I'm gonna start with this: I think that in AI, ethics starts at the data set. That is the beginning; by no means is it the end. It's really important to have a safe data set. It's really important to have an ethically sourced data set. But it's also incredibly, incredibly expensive to do those things, right? So for example, the data set that LAION made, for, let's say, OpenAI's DALL-E, right? I think it was at least $2 million, maybe $2 to $12 million, to train on that data set. So it's not a cheap thing, but additionally, even that data set isn't a clean data set. LAION just scraped the web. Now, to be fair, we've been scraping the web for decades, or at least ten years; I don't know about decades, the internet was military tech back then. But people have been web scraping forever. Web scraping has been a part of the internet since the beginning. But because we're now making tools out of this data, those tools inherently have a bias based on that data. So it's important to at least recognize that exists, and that there is an ethical hurdle to go over. So whether that's in how you prompt, whether that's in how you engage with the data set, whether that's you making custom data sets, right? That's how I started, and that's how I kind of figured out my ethical boundaries. Look, I'll be honest, obviously I use Stable Diffusion, obviously I use Midjourney on occasion. Those are not trained on open, ethical data sets, but they're available and they are open source. They're something I can access without bankrupting myself, without having to work for Meta or something like that. It's just an access question. But at the same time, I have trained my own datasets; I do train my own datasets. I use DreamBooth to train my own datasets on custom photos. Even that, though, is piggybacking on the parent dataset, which again is not ethically sourced. Now, there are a lot of really, really cool people out there, and I was going to tell you exactly one of these companies, an AI data company, but you know what, I don't remember what it's called and I can't find it out easily. But basically, there are a lot of new companies coming out that are explicitly dealing with this, whether that's training data sets on custom data that they've sourced, that's ethical, that's non-racist, that's accessible, and hopefully open source. It's not enough, but it's a start. And on the other end of the spectrum, there are always gonna be nefarious people; there always will be. Technology has always been, I mean, I'll be honest, technology is kind of basically made to be bad first and then kind of not bad after, if that makes sense. For example: VR, military technology. Haptics, military technology. Navigation, military technology. You know, at a certain point, if you go deep enough, like the first VR headset was called the Sword of Damocles, and it was literally to train, like, pilots to drop bombs. So yes, VR provides accessibility, it helps in trauma therapy, it helps the elderly experience life in a new way again.
It helps people with Alzheimer's relive moments in a way that isn't bad or traumatic or painful. It's good, right? But it's based in bad; it was made to help you kill. The same goes for a lot of these major technologies. The internet: that was dark, that was like a black-budget military project. It's not that it wasn't initially for nefarious purposes; it was. And now it's like the most impactful thing for our society, for the whole planet, and it will continue to be incredibly impactful. But the internet has some really dark places, right? The internet is full of bad people. My mom would tell me all the time as a kid, don't go on the internet, bad people wanna talk to you. That's scary, but that doesn't mean I didn't use the internet. It means I learned how to access the internet safely. It means I learned how to deal with potentially sketchy encounters. Same as in real life: your mother tells you, don't talk to strangers. These lessons have to be taught, but they also have to be learned through mistakes. And I think what's happening right now is a lot of mistakes that a lot of people are learning from, and I think that's the start. We obviously are not at a point where we're really engaging with the ethics and the important points, like: hey, I don't want my likeness to be in this data set. Hey, I don't want my voice to be duplicated. There's the question of IP. There's the question of, like, personal provenance, you know: I am me. Well, guess what? Read the terms and conditions of Facebook, man. They own your likeness in perpetuity. It's theirs. I only know that because I made an entire art piece about the terms and conditions of Facebook, and I read them in detail. They say: in perpetuity, forever, we own your likeness and can use it for any marketing purpose, or any purpose at all. You know, that's not even about AI, and they're definitely using it for AI. Do you think that Meta is not training a new data set on Facebook photos? You think they're not tagging and labeling all of the data that goes through their servers to then retrain into their own cut? Why do you think Meta and Facebook are coming out with some of the coolest AI systems right now? It's because of the data they have access to, ethically or unethically, you know? Because again, it's not like they didn't tell us. They told us; it's right there. You read the terms and conditions, they told you. But they're using it now, and now people are like, whoa, wait, I didn't sign up for that. Technically you did, but you did it before they even knew that's what they were gonna use it for. All they knew is that they wanted this data, they were gonna own it forever, and you wanted to use this free social media. So yes, AI and ethics is a big conversation that needs to stay at the forefront, that always needs to be thought about, talked about, and hopefully implemented; ethical use needs to be implemented at every step of the way. But at the same time, don't think that it's just AI. It's everywhere. Ethics in technology is important, and ethics in tech is subpar right now; it's a little lackluster. We need to be better as people, as technologists, as artists, as humans. We just gotta be better and think a little harder before we do certain things, right? And that's super hard to say. I'm not the best example of that. I often speak before I think, as opposed to thinking before I speak or act.
But we just have to be better, at least at being aware of the ethical implications of AI, of technology, of how it all interacts. And I'll end with this. I don't want to throw them under the bus; they're a really cool company, but this is an example of what I think to be unethical. There's an awesome company called Soul Machines. They do really cool work. However, their whole thing is they use AI to make accurate chemical simulations of human brains, right? Soul Machines. They use really complex development; like, imagine a game engine for a brain, where you're just designing synapse systems so that you can fake a human brain in software. Then they torture it, just to see what would happen. And like, I'm not trying to get emotional, but at a certain point, you have to be better. That's ridiculous. I don't care that it's not alive, I don't care. You're trying to simulate a human brain, and then you're torturing it. I get that torture is real. There's Guantanamo Bay, you know; people are treated badly, and we shouldn't say, like, AI is better than people. But if your mindset as a company is, I'm gonna make as close to a simulation of a human as I can, and then I'm gonna torture that human, that's just not okay. That's unethical at best; at worst, that's evil. And when I asked them about it, they laughed. So that didn't leave a good taste in my mouth, per se. And now they're the company in charge of basically taking your father's likeness and making a virtual, interactable version of him that you can keep with you after death. Right, that's their next goal: to perpetuate life after death with AI. And that's a whole different conversation to have. I'm very lucky to have a really awesome friend by the name of Jeremy Manning, who's one of the founding lawyers of the Innocence Project. If you've never heard of the Innocence Project, they're the lawyer group that basically goes back over old criminal cases that were decided with either bad DNA testing or without DNA testing, but where DNA evidence existed and people were jailed. The Innocence Project comes in, checks that information, checks the DNA, and proves innocence when the technology allows it, or proves the technology didn't work when it was wrong, for example. And so this amazing lawyer, I talk to him about AI ethics and life-after-death ethics with AI all the time, and he's one of the only people I know to be thinking about it, to be talking about it. But there are a lot of ramifications of AI ethics that go beyond just, hey, this data set is unethical. People are literally trying to manifest your dead dad in a computer with the help of AI. So there's a deeper level of conversation to have, while we still can't just ignore the underlying issues of generalized AI ethics, of generalized tech ethics, of data sets and their use. You know, look, I love a good funny deepfake anytime. I'd love to see Vin Diesel deepfaked onto Groot, because he plays such a great Groot, right? I get that. But at the same time, there are people doing really bad things with it. So there's a balance to be had, but there are also certain things that just absolutely need to be reined in, basically.

Scott:

Yeah, there's a very long conversation to be had about that entire space. And for all we know, we might already be those AI robots working in a machine somewhere that somebody's thinking about doing bad things to. So we always have to be mindful of that. Daniel...

Daniel:

Black Mirror.

Scott:

Black Mirror, still need to watch the new series. Black Mirror through and through. Daniel, then...

Daniel:

Yeah.

Scott:

Absolute pleasure. Thank you very much for joining the conversation today. I think we could talk about this space for quite some time. I'm sure we'll have a following conversation. Thank you very much for joining.

Daniel:

Thank you so much. It's always my pleasure. I hope I didn't dawdle too much and just kind of rant, but you know, can't help it. This is why I'm around. I'm here to explore and engage and hopefully be as ethical as possible.

Scott:

I definitely get the sense you will be. Thank you.

Daniel:

Thank you so much.

