Delivering Markerless Motion Capture at Scale

FEATURING:

Jamie Allan, M,E&B Industry Lead, NVIDIA

Chris Battson, Sr. Development Director, EA

Addy Ghani, VP, Virtual Production, Disguise

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Vincent Hung, Animation Director, EA


0:19

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Thank you very much, and thank you everyone for joining, and thank you Eric, obviously, for letting us steal the stage for thirty minutes. So, yeah, we are going to be talking about delivering Markerless Motion Capture at Scale at Move.ai. Obviously, we do markerless motion capture, but scale means different things to different people, whether that is scale of volume, scale of production complexity or scale of delivery. So, we want to use these 30 minutes to tease out from all of you the complexities and challenges around that in particular.


I am going to speak first to Vince and Chris from EA, and if you wouldn’t mind just introducing yourselves when you talk. But firstly: when you are looking at markerless motion capture in a regular production environment, you have got a reasonable degree of constraints in the studio, right? However, if you want to take that beyond a studio environment, what are the challenges with that sort of setup, Chris?


1:29

Chris Battson, Sr. Development Director, EA

Can you hear me yet? So, I am Chris Battson. I manage the Capture Development Team, and there is also a Capture Production Team of about fifty people. We run studios in Los Angeles, Vancouver and Orlando, and we typically produce about three million seconds of body data a year for the teams, which is one of the biggest scales I think there is. What we have seen in the last couple of years is the game teams asking for types of data that we have not been asked for before, and this is really about capturing outside at large scale, such as a whole soccer pitch, for example. You can guess which game that’s for. So we were looking around for different technologies that could provide this, and we came across Move.ai, and we have been working together now for the last two years to develop these pipelines and develop the software. We are very excited to be at the point where we are about to bring this sort of thing into production now, so it’s been a great journey with Move.ai.


2:32

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Thanks, Chris. And Addy, from a different level of scale: if we think about things like live concerts, that has both scale of production and scale outside of a studio, and then films. From a Disguise perspective, how would you approach that in terms of markerless mocapping a live stage performance?


2:50

Addy Ghani, VP, Virtual Production, Disguise

Yeah, great question. For us, a big segment of our company is live events, and it lights up my mind when I think about the possibilities of using Move.ai technology for live events motion tracking, for performers that are on stage, and to do it consistently across 300 shows a year, across stages, at that scale. The way we would do it is most likely with a consistent hardware setup as well as a consistent software setup.


3:25

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, for sure. And quality obviously matters in that live environment, in both environments really. For the real-time environment, for yourself, and I’ll flip to Vince for the more post-based workflows: what do you require for a real-time concert, for instance? What are the thresholds you need to meet in terms of quality to be able to drive that, and what are the use cases you’d like to drive with it?


3:48

Addy Ghani, VP, Virtual Production, Disguise

Yeah, I think, coming from a traditional mocap world, one thing that’s really key is foot contact, so having it be completely grounded to the floor. I don’t think fingers are necessary, but noiseless data is: one of the things with inertial data is that it’s super noisy, so eliminating that. And then lastly the frame rate, right? I think 30 FPS minimum, 60 would be great, so it feels live and it feels like it’s really happening on stage.
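
As a rough illustration of the foot-contact requirement Addy mentions, here is a minimal sketch that flags a frame as a contact when a foot joint is near the floor and nearly stationary. The thresholds, frame rate, and Y-up convention are illustrative assumptions, not anything specified in the talk.

```python
import numpy as np

def foot_contacts(foot_positions, fps=60, height_thresh=0.03, speed_thresh=0.15):
    """Naive per-frame foot-contact flags from 3D foot-joint positions.

    foot_positions: (num_frames, 3) array for one foot joint, in metres, Y-up.
    A frame counts as a contact when the foot is near the floor and nearly
    stationary. All thresholds here are illustrative guesses.
    """
    heights = foot_positions[:, 1]                        # vertical coordinate
    velocities = np.gradient(foot_positions, 1.0 / fps, axis=0)
    speeds = np.linalg.norm(velocities, axis=1)           # metres per second
    return (heights < height_thresh) & (speeds < speed_thresh)

# Example: a foot resting on the floor for one second at 60 FPS
frames = np.zeros((60, 3))
print(foot_contacts(frames).all())   # True: grounded and not sliding
```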


4:24

Niall Hendry, Head of Partnerships & Delivery, Move.ai

That makes a lot of sense. Vince, for your workflows, obviously you want real-time previews, but when you are working with the game development teams, what does that quality threshold look like to you?


4:34

Vincent Hung, Animation Director, EA

Yeah, so for me, scale is really all about the quality of the output. We have been doing motion capture for ages and we really have a standard for the quality that we deliver to all the development teams at EA, so there is always this expectation about what they are going to receive. With markerless capture there is now a whole set of different inputs, and we want to make sure we can emulate that. So in terms of scaling and processing: if it’s markerless, if it’s one talent, how do you get it to a certain quality, so that everything we deliver to, let’s say, the animators and development teams lets each discipline really focus on their specialty?


5:18

Niall Hendry, Head of Partnerships & Delivery, Move.ai

And that makes a lot of sense. Jamie, scale obviously means many different things to NVIDIA?


5:30

Jamie Allan, M,E&B Industry Lead, NVIDIA

Yeah, very, very different things. For everyone else here, I think scaling in this space starts with you guys, right? The software developers have to create in a way that allows the tool to scale to start with, and that’s by building container-based applications that can scale across Kubernetes architecture and be orchestrated across the whole world if they need to be, right? Then there is scaling in the stadium: being able to build a compute cluster with high enough bandwidth and the right connectivity to enable things to be computed in real time with sub-five-frame latency.


6:02

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Which is a challenge in itself, right? Because some of the use cases you are looking at can be, for instance, streaming into a game engine, but you need to be able to get the acquisition of the data as quickly as possible and maintain the quality. You and I have been talking about this a little bit, but in terms of delivery of that into different markets, what does that mean to different people, from your perspective?


6:19

Jamie Allan, M,E&B Industry Lead, NVIDIA

Well, I think there is a growing need for high-quality animation on many different delivery platforms, right? Broadcasters are asking for this all the time. They are being challenged by market trends, they are all being challenged by their audiences. So the ability to take something like, I am going to use the word soccer and kick myself later because of where we are in the world, but to take a soccer match at scale and be able to deliver that into an engine that can then be driven to many different platforms is going to be the future of how many sports are consumed. In order to do that, you need to be able to take that original content and repurpose and deliver it at scale, and at the moment there aren’t the platforms to do it, right? We are dreaming about a lot of these things, but the content delivery networks around the world for broadcast will not handle what we are talking about for 3D. They handle it in broadcast if they take a 3D image, put it through an engine and it goes out to play out, but being able to interact with that engine, and have each individual person who is consuming it do so in a different way, like they are playing a game, won’t scale today.


7:32

Niall Hendry, Head of Partnerships & Delivery, Move.ai

That’s quite interesting, and I’ll get back to the football conundrum in a second. From a Disguise perspective, obviously with your long-term strategy of being a gateway to the metaverse, whatever that means to whichever people, to Jamie’s point there, that means distribution of content at a massive scale. Where are you seeing challenges with that, or what’s the thought process for Disguise around that?


7:56

Addy Ghani, VP, Virtual Production, Disguise

Yeah, that’s a great question. For distributing a physical performance into a digital world, that backbone is sort of already built with gaming, right? If you take something like Fortnite, Epic Online Services is fully built out, fully usable, so all you would have to do is ingest your mocap data, or multiple mocap data inputs, into that whole cloud, and then that pushes it out to your participants.


8:22

Niall Hendry, Head of Partnerships & Delivery, Move.ai

And you guys are focusing on that delivery, rather than whatever happens at the user end, right?


8:23

Addy Ghani, VP, Virtual Production, Disguise

Yeah, I think the question mark here is the interactivity: if you have a user at home, how does that user interact with the metaverse performance?


8:36

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, how do you see that working? Just because it’s very current to some of the stuff we are talking about, as we are about to release a real-time system to help power live performances. In terms of that interactivity, what are your thoughts on how that can work with as low latency as possible?


8:52

Addy Ghani, VP, Virtual Production, Disguise

That’s a great question. I think creatively you can break up that gap and cover for that latency. It’s almost like the early days of Nextel with the chirp: you transmit, then receive, so it’s never a true two-way communication. In the early days that’s how you would creatively solve for it, but then as technology catches up and that latency gets smaller, you start to have a true two-way conversation.


9:24

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Makes a lot of sense. And just to double back for a second, because Jamie was talking about capturing football, or soccer; association football is where the name soccer came from, so we can live with it, but we don’t like it. Basically, for us as a technology company, we have had to work a lot on the conundrum of how to go from rigging cameras in a small area to solving for stadium delivery. Now, the difference there is you are a lot further away, and you are sometimes a lot less constrained in terms of the environmental variables.


So this is a question for you, Chris, in terms of when you are looking at that quality of data: one, what are the necessary parameters to attain the quality that you need? And two, in terms of how you have tried to solve this problem, what are the headaches you have come across? Because we have certainly come across headaches just looking at people moving really quickly and being really small in the image, right? So, from your perspective, in that messy environment, what are the challenges to attaining the levels of quality that you need?


10:41

Chris Battson, Sr. Development Director, EA 

So, I’ll talk to the acquisition side first, and Vince can talk about the post work. Is that working? Yeah. So, on the acquisition side, it’s very important to get sufficient pixels; otherwise, you can’t decipher that image into a 3D animation. We have done some tests with GoPro cameras in a small volume, but as we went outside we needed much higher resolution, so Nigel and his team did some research. They came across the Blackmagic URSA Mini 12K cameras, which can go at 8K at 120 FPS, and that obviously creates a lot of data: with 12 cameras you get 48 terabytes after two hours, so that’s something we are going to have to wrangle. But that’s the way we are going to get the resolution we need to create 3D animation at a distance.
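
To put Chris’s figures in perspective, here is a quick back-of-the-envelope check: 48 TB across 12 cameras over two hours works out to roughly half a gigabyte per second per camera. The pixel-rate comparison at the end is our own assumption (standard 8K UHD dimensions, no specific recording codec), added only to show why a compressed recording format is implied.

```python
# Back-of-the-envelope check on the figures quoted above:
# 12 cameras, roughly 48 TB after two hours of recording.
total_tb = 48
cameras = 12
hours = 2

tb_per_camera_hour = total_tb / (cameras * hours)         # 2.0 TB per camera-hour
mb_per_second = tb_per_camera_hour * 1e6 / 3600            # ~556 MB/s per camera
print(f"{tb_per_camera_hour:.1f} TB per camera-hour, ~{mb_per_second:.0f} MB/s per camera")

# Assumed context, not from the talk: 8K UHD (7680 x 4320) at 120 FPS is about
# 4 gigapixels per second per camera, so ~556 MB/s only works with compression.
gigapixels_per_s = 7680 * 4320 * 120 / 1e9
print(f"~{gigapixels_per_s:.1f} gigapixels/s per camera before compression")
```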


11:36

Vincent Hung, Animation Director, EA

Yeah, so just coming back to scaling and quality: the larger we go, the more players we are capturing, going from two to five to ten, and that introduces a lot more room for error. For example, with all the multiple interactions you are getting from the players there is more occlusion, and how are all the cameras going to see all of that, right? So in processing, it’s really about how we feed all that data through at that scale, and making sure all that processing power can emulate the success it has had with smaller-volume post-processing. Now the players are smaller on the screen: can you still get the same quality in the feet? Since we are talking about soccer, if two players were to bump into each other, is the camera able to catch that? Those are really the cases we need to catch, to see if the machines are still able to replicate the same success, and if not, how do we feed that back and get the same kind of result. It’s an iterative loop that we have to keep doing, and so that makes sense.


12:47

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Cool, and there are a couple of ways to solve this, right? One is more cameras, another is higher frame rate, and, let’s say three ways, obviously more pixels. I think one of the bottlenecks we sometimes run into, and this is where I am coming to you, Jamie, is that the higher we go in pixels, the more challenging processing becomes, say if we are going up to something like 8K video. In terms of being able to run neural networks on up to 8K video, at eventually up to 60 FPS, from let’s say 12 to 16 cameras: one, what are the challenges there in terms of where we are today and where we are going, and how do you foresee that being approached by NVIDIA, if you are not doing it already?

   

13:32

Jamie Allan, M,E&B Industry Lead, NVIDIA

Well, I mean, one of the things you do there is not process all those pixels, right? You are selective about what you need to process, and I think intelligent people like you guys will start figuring out how to adapt and increase efficiencies in the data pipeline, right? We can push as many pixels and as much data as you want from the cameras available today down an SMPTE 2110 network stream and put it into a very well designed piece of hardware, but the economics will fall over at some point, so we have to be more efficient with the data we are choosing to process. This has been done in many different industries already, right?


There are lots of places where computer vision has needed to scale for a purpose, whether it’s mass amounts of CCTV monitoring or autonomous vehicles and things like this, where we become more selective about the data that we pass into an AI engine, and we normally do that with another AI model that sits further down the pipe, right at the camera end. I think there are opportunities around things like that, not just in this space but anywhere that you are looking to scale the large amounts of data that you are getting from those cameras.
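
A minimal sketch of the pattern Jamie describes, selecting regions of interest with a cheap detector before the expensive model sees any pixels, is below. The detector, thresholds, and crop padding are placeholders; this is not a specific NVIDIA or Move.ai API.

```python
import numpy as np

def select_person_crops(frame, detect_people, downscale=8, pad=64):
    """Keep only the person regions of a full-resolution frame.

    frame: (H, W, 3) uint8 image, e.g. an 8K still.
    detect_people: a cheap detector run on a small copy of the frame; it
    returns boxes as (x0, y0, x1, y1) in the small image's pixel coordinates.
    Only the returned crops go on to the heavy pose-estimation network.
    """
    small = frame[::downscale, ::downscale]               # coarse proxy image
    crops = []
    for x0, y0, x1, y1 in detect_people(small):
        # Map each box back to full resolution and pad it slightly.
        x0, y0 = max(0, x0 * downscale - pad), max(0, y0 * downscale - pad)
        x1 = min(frame.shape[1], x1 * downscale + pad)
        y1 = min(frame.shape[0], y1 * downscale + pad)
        crops.append(frame[y0:y1, x0:x1])
    return crops

# Toy usage with a fake detector that "finds" one person near the centre.
frame = np.zeros((4320, 7680, 3), dtype=np.uint8)
fake_detector = lambda img: [(400, 200, 500, 500)]
print([c.shape for c in select_person_crops(frame, fake_detector)])
```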


14:41

Niall Hendry, Head of Partnerships & Delivery, Move.ai

That’s an interesting point, particularly because I know the Disguise workflow: there’s a lot of work there in terms of being able to either isolate regions of interest or understand depth and things like that. If you were to start going up to higher resolutions to utilize for markerless, let’s say you put in RED Monstros as witness cameras, right, you are getting really good witness camera data, but obviously there’s a lot of data coming off that. Given you have been looking at markerless for such a long time, what would you want to utilize from that feed to be able to power the markerless use cases you mentioned earlier?


15:12

Addy Ghani, VP, Virtual Production, Disguise

Yeah, I think one of the big secondary gains is depth fields. Having that many perspectives, and then utilizing maybe some of the same AI framework, you can have a good idea of where the performers are in space and where other, non-moving objects are in space, and then have that knowledge feed back into the visualization.
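
For illustration, the depth Addy mentions can come from multi-view triangulation: with calibrated cameras, matching detections of the same point in two or more views pin down its 3D position. Below is a textbook linear (DLT) triangulation sketch, with the camera matrices and pixel detections assumed as inputs; it is not tied to any particular Disguise or Move.ai pipeline.

```python
import numpy as np

def triangulate(projection_matrices, pixel_points):
    """Linear (DLT) triangulation of one 3D point from multiple views.

    projection_matrices: list of 3x4 camera matrices P = K [R | t].
    pixel_points: matching (u, v) detections, one per camera.
    Returns the least-squares 3D point. A production pipeline would add
    lens undistortion, outlier rejection, and non-linear refinement.
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_points):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage: two cameras one metre apart, both looking down the Z axis.
K = np.array([[1000.0, 0, 960], [0, 1000, 540], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 5.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate([P1, P2], [uv1, uv2]))   # approximately [0.5, 0.2, 5.0]
```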


15:40

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, and so I guess with that knowledge and the visualization, what can you foresee being done with either the motion or the graphics within the scene?


15:48

Addy Ghani, VP, Virtual Production, Disguise

Yeah, so you can actually do a lot of procedural content generation right there on the spot, so creativity is really the ceiling. Right now, in real time, you can do smoke, fire and all those heavy particles that were completely impossible to do before, and I think you can also unlock a lot of storytelling tools as well.


16:13

Niall Hendry, Head of Partnerships & Delivery, Move.ai

I mean, creativity is obviously one of the key parts of everything we all do, so that’s a good segue into new use cases. There are some use cases we have mentioned already, like particle dispersion or things like shadow casting, right? Those are the obvious ones, but are there any new use cases you can see coming down the line? Maybe you don’t want to give them away?


16:41

Addy Ghani, VP, Virtual Production, Disguise

I mean, just brainstorming here: during the VR days of the last ten years, there were a lot of interesting creative use cases in VR, but most people never got to experience them because they never put on the goggles. I think a lot of those creative mechanisms could be unlocked on a stage in front of 20,000 people, and that could unlock a lot of interesting ways to tell a story or experience a concert.


17:06

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, experiences where you can get gesture-driven graphics from the performer themselves, viewed in concert, right, would be very, very engaging.


17:13

Addy Ghani, VP, Virtual Production, Disguise

And then you can even add another level of complexity that will enrich it even further, which is the user at home who, over the internet, can then interject and add to the interactivity on set.


17:25

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, I have this idea that flares are obviously very dangerous, right, but imagine you could do some gesture-based stuff in the crowd where, as a football fan, you could set off virtual flares; flares mean different things to different people, but obviously you can’t use real ones. In terms of use cases from a video games perspective, Chris, from the wider video games market perspective, what do you see as use cases being unlocked by markerless, both at scale in terms of distribution, so more people have access than before, and at scale in terms of volume?


17:59

Chris Battson, Sr. Development Director, EA 

Yeah, so I have talked a bit about the outside use cases for really large volumes, but at the other end of the scale there’s pre-visualization. For a long time, enabling a particular game team with a mocap system has been quite expensive, but potentially with the systems that Move.ai can work with, where you just get a few GoPro cameras and then create a really good 3D animation from them, we are going to have one of those systems with every game team. So when they come for the actual final shoot, they are fully prepared: they have got everything blocked out and they are ready to roll with the optical acquisition.


18:39

Niall Hendry, Head of Partnerships & Delivery, Move.ai

And then in terms of scale, that’s a good point: we can get more cameras to more people, and actually we are doing our iPhone demo for anyone who wants to come over to the booth and have a look. Ultimately for us, and I think we mentioned this in our video talk on Monday, we see the barrier to motion capture being dropped by using devices that people have in their hands every day to do markerless motion capture, at scale both in terms of content and in terms of access and availability. I guess, Jamie, from your perspective, what do you see coming down the road for that use case in terms of what it means for scale? And you and I discussed other use cases at length the other night as well, right?


19:42

Jamie Allan, M,E&B Industry Lead, NVIDIA

Yeah, I think NVIDIA is always in an advantageous position because we deal in many different industries and we are always trying to accelerate and find new ways to solve problems in lots of different areas. I think realistic human motion is really important in architecture: when you are designing spaces and designing buildings, using basic animation that an artist has done quickly to show someone walking up and down stairs will not be as accurate as using markerless motion capture to quickly capture a piece of animation. In the area of autonomous vehicle training, I think we won’t get to commonly available level four or five cars until we are able to train at scale with accurate human motion, because they need to understand and learn how to react to the way that humans move, not just the way that a quickly animated or automated NPC moves in a virtual space. The other one is synthetic data for AI training, right? We are training better computer vision models to understand what people look like and how they move, and there is only so much video data in the world, so if we can track and have 3D content that we can then use to train, we can automate a lot of that data creation as well. So there are realms of possibility: by democratizing and making it cheaper and more accessible to create motion capture data, people will be able to leverage that and work with people like ourselves to unlock these new ideas.


21:08

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, and it’s interesting to be able to leverage that. Just to hang onto the use case we mentioned earlier about having people in a virtual concert, this is a big thing for me because I love music. To give this context, we eventually see people being able to inject themselves into something like a Fortnite concert, right? So in terms of being able to do that: one, from your perspective, how feasible is that and how do you reckon it will pan out? And two, from a production perspective, what does that add in terms of either complexities or, you know, endless possibilities for additional content within that environment as well? Both of you can double up on that one if you want.


21:50

Jamie Allan, M,E&B Industry Lead, NVIDIA 

I mean, from the pipeline point of view, that’s something you can figure out. Yeah, there are bottlenecks around the latency of getting the video to where it’s going to be processed, and how quickly that then lands in the engine it’s being delivered back to, for you or for someone else. These are all things that will be overcome: with 5G, with more powerful processors in the cloud, with better edge computing at telco sites, with people like Epic and Roblox and others building what are today cloud gaming platforms but in the future will become cloud content rendering platforms, for gaming, for the metaverse, web3, whatever you want to call it. So all of those problems that present themselves to the idea today will be overcome in the next, you know, five years, he says confidently, in public and on camera.


22:40

Addy Ghani, VP, Virtual Production, Disguise

I think from a production standpoint we are already there: as far as animation rigs, rendering quality and shaders go, it’s all ready. The only key piece that was missing was getting the input data, because mocap is just so difficult, right? You need the cameras, the suits, and so on. So this is the missing piece, and I think it will really unlock some cool things very, very soon.


23:04

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, it’s got quite a bit of potential, as we seem to be finding out on a daily basis. Another thing for us in terms of strategy, and this will again mean different things to all of you, is what’s coming down the road with, to Jamie’s point, web3 and the prevalence of digital worlds, and people’s need for motion for different use cases. This is a general question to throw around to all of you: we see a need in the market for motion as an asset class, able to power many different use cases for many different people. My question is, when you take the phrase “motion as an asset class”, from your overlapping but different industries, what does that mean? I’ll start with you, Addy, because you have got the mic already, but I am going to ask the EA guys a similar question on the same level.


23:52

Addy Ghani, VP, Virtual Production, Disguise

I mean, a standardized rig is probably the most basic thing. We should all just agree on one type of rig that could work for maybe five different industries, and another rig for five other industries.


24:08

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Everyone’s laughing on the rig side. 


Chris Battson, Sr. Development Director, EA

We were just laughing because we want a standardized rig as well.


Niall Hendry, Head of Partnerships & Delivery, Move.ai

Jamie’s having a problem with standardization at the end.


24:19

Chris Battson, Sr. Development Director, EA

I think within the environment of EA at least, standardizing animation, standardizing rigs, creating databases of animation that can enable machine learning research in the future: those are all trends that we are certainly seeing across all the game teams, so I think that’s just going to continue.


24:43

Niall Hendry, Head of Partnerships & Delivery, Move.ai

And do you see that becoming a wider adoption across the industry in general? I mean, obviously you have got the big players who will do it ultimately.


24:49

Chris Battson, Sr. Development Director, EA

Yeah, I mean, it started with Washington State University with motion graphs and motion fields, and then Ubisoft doing motion matching, and we are developing similar technologies at EA. It’s just going to get more and more common, and look better and better and more natural, and I am sure Jamie is up there.


25:17

Niall Hendry, Head of Partnerships & Delivery, Move.ai

And in terms of marketplace access, and the way you are set up at NVIDIA to be able to do the delivery of that... and Vince is going to jump in.


25:24

Vincent Hung, Animation Director, EA

I was just going to talk about delivery outputs. There are so many different formats, and with multiple teams using them the expectations are very different, so it kind of goes back to rig standardization. As an industry, do you have a standardized rig? I am sure every company is already trying to standardize their own rigs, and let’s say you do standardize it: every company has a ton of animation assets, so how do you retarget all of that onto it, right? That’s not a big problem as such, but it’s a skill, and there are ways to do it, and people do it differently. You do it extremely well, but sometimes with animation data what we see falling down is the retarget stage. So it’s almost not even just about saying, okay, here’s a rig we can share; it’s how do you set up this rig, and how do you do it in a way that sets you up for the next five years, so that you can repeat the process when there are better ways to create rigs and you have to do it again. There’s just a lot to think about in that aspect, right?
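
As a toy illustration of the simplest piece of what Vince is describing, the sketch below maps joint names from one rig onto a notional standardized rig and copies the rotation keys across. The joint names are made up, and real retargeting also has to handle differing bone orientations, proportions, and foot or hand IK cleanup.

```python
# Deliberately minimal retargeting sketch: remap joint names between two rigs
# and copy per-frame local rotations across. Real pipelines also handle
# differing bone orientations, proportions, and contact (IK) cleanup.
JOINT_MAP = {
    # source rig joint -> target (standardized) rig joint; names are hypothetical
    "pelvis":   "Hips",
    "spine_01": "Spine",
    "thigh_l":  "LeftUpLeg",
    "calf_l":   "LeftLeg",
    "foot_l":   "LeftFoot",
}

def retarget_clip(source_clip, joint_map=JOINT_MAP):
    """source_clip: {joint_name: [rotation quaternion per frame]} on the source rig.

    Returns the same animation re-keyed onto target joint names, dropping any
    joints the target rig does not define.
    """
    target_clip = {}
    for src_joint, keys in source_clip.items():
        tgt_joint = joint_map.get(src_joint)
        if tgt_joint is not None:
            target_clip[tgt_joint] = list(keys)   # copy rotation keys as-is
    return target_clip

clip = {"pelvis": [(0, 0, 0, 1)] * 3, "tail_01": [(0, 0, 0, 1)] * 3}
print(retarget_clip(clip))   # 'tail_01' has no mapping, so it is dropped
```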


26:30

Niall Hendry, Head of Partnerships & Delivery, Move.ai

So yeah, it’s almost like you need a platform to be able to facilitate that, really.


26:53

Jamie Allan, M,E&B Industry Lead, NVIDIA

Yeah, I think the dream that people are setting out right now for interchangeable assets in virtual worlds, whether it’s a consumer or an industrial use case, only works by standardizing everything that’s going to move between those worlds, and one of the things that’s definitely going to move between those worlds is the animation that you have chosen for your character, or whatever it is. Today you can’t translate from most engines to other engines because of those problems, not just rigs but materials, shaders, everything. We are doing a huge amount of work, as many people in the industry are, around what we can do with standards. The USD council is happening right now here at SIGGRAPH, where we and Pixar and that consortium are arranging where we are going to go with USD, where we are going to go with glTF, where we are going with shaders and MDL and MaterialX, and how all these things are going to fit together, not just for the visual effects industry but for many industries, so they can interchange assets easily. And I think motion is a massive part of that, because otherwise you have got virtual worlds with loads of people not moving, and that’s not very interesting.



Yeah, right, but in order for that to work, in the dream that everyone’s talking about, rigs have to be interchangeable, and motion has to be interchangeable and accessible and easy to make, which is the problem that you are solving.


27:58

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, and I am conscious of time, so I am going to start wrapping up shortly. I’ll give you all a question, a challenge, before we move on to everything else. What’s one thing you haven’t seen motion captured on humans today (everyone says capture a horse, and we will, but give us time) that you haven’t seen done and that you think you can lay down as a gauntlet for Move.ai to capture? Let’s keep it markerless for now.


28:29

Addy Ghani, VP, Virtual Production, Disguise

I think one of the most challenging things with mocap is occlusion, right? So anytime you have more than one person, like a hug for example, we are always cleaning that up. So having that... come on, give me more than a hug, we can do a hug right now. Just two people wrestling, rolling around on the floor. Yeah, so far we have got breakdancing, wrestling, what else?


28:51

Vincent Hung, Animation Director, EA

How about a massive celebration where they are just going at it, hugging and everything, and maybe even dog piles? Okay, right, it’s really difficult, I understand, but something like a crowd celebration, exactly. Let’s say it’s a goal, and I am going to say a goal in football because that’s where I live in my head. Try and capture that: it’s a really uncontrolled reaction where you don’t know where they are going or what they are going to do, right?


29:30

Chris Battson, Sr. Development Director, EA

I was going to say dog piles, like you find in games like Madden, where lots of players pile on top of each other. That’s extremely difficult, and that’s the sort of data Vince has been dealing with over the years.


29:41

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Okay, so the challenge is laid down. I am actually going to start to wrap up now, but thank you very much to all of you. Is there anything else, markerless-wise, that you want to throw out that I haven’t thought of, or that you think should be noted as part of where markerless is going to go? I know you have been working on it for a while, right? So what do you reckon?



30:06

Addy Ghani, VP, Virtual Production, Disguise

I mean, one of the interesting things we could do is take a lot of archival footage and then get motion data out of it.


30:16

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Archival footage poses its own challenges. The reason is that when we know the intrinsics of the camera and the area in which we are tracking, you have got those parameters within which to work; when you start to take archival footage there are a few things to consider. One is motion blur, two is pixel degradation if it’s come from pre-HD, and the other is the size of the person in the image. We have done some work on historical footage before, and all of that presents a challenge for sure. I think it’s a challenge that can be overcome, but there are three different ways to approach it, and, exactly, there is just a lack of information there, so you have got to augment it with information. And I know that’s something we have discussed in the past, in terms of what’s possible out of historical footage as well. Cool, thank you. Jamie, anything else?


31:09

Jamie Allan, M,E&B Industry Lead, NVIDIA

You give me the opportunity to talk more now; I know that’s why you chose me, yeah. It’s just an incredibly exciting space for us in general. It hits so many notes that NVIDIA is excited about: AI, machine learning, graphics processing, collaborations with the industry partners that we work together with, and it’s a scaling problem, right? NVIDIA likes hard problems to solve, and likes solving them together with partners. When we start to nail some of these stadium-scale solutions and you start seeing what that’s going to deliver to the consumer industry, to web 3.0, but also into industrial, it’s going to be pretty revolutionary for a lot of the use cases we are talking about, so keep up the work.


32:02

Vincent Hung, Animation Director, EA

Yeah, I just wanted to say, you know, I threw in a dog pile there, but really, for me, markerless is about expanding the capabilities and the options for what people want to do in motion capture, right? It’s about what makes sense: when does it make sense to go markerless? So I really see it as cohesive with what we are already doing with our traditional motion capture and its use cases. I am really looking forward to expanding that capability and removing the limits where that makes sense, and of course, when you said throw down a gauntlet, we threw it down, but I’m really looking forward to what you guys are going to do next, yeah.


32:44

Chris Battson, Sr. Development Director, EA

And I’d just add that prop tracking is something we want to do more of. We have got a ball, a spherical ball, working with Move.ai, but we want to carry that on, and ideally not have to put markers on the props but have the system just recognize the shapes of the props.


33:03

Niall Hendry, Head of Partnerships & Delivery, Move.ai

Yeah, for sure. Ultimately that’s a challenge we will look to solve, we are looking to solve it, and hopefully we can solve it with the help of all of you here.


Jamie, Addy, Vince, Chris, thank you very much, and thanks for your time! Really appreciate it. Thanks again to Eric for letting us on the stage, and thank you everyone for listening.