Exploring AR interaction (Google I/O '18)



Thank you so much for joining us. My name is Chris, I'm a designer and prototyper working on immersive prototyping at Google, and I'm joined by Ellie and Luca. Today we're going to talk about exploring AR interaction. It's really awesome to be here.

We explore immersive computing through rapid prototyping of AR and VR experiments. Often that's focused on use case exploration or app ideas. We work fast, which means we fail fast, but that means that we learn fast. We spend a week or two on each prototyping sprint, and at the end of the sprint we end up with a functional prototype, starting from a tightly scoped question. And then we put that prototype in people's hands and we see what we can learn.

So this talk is going to be about takeaways we have from those AR explorations. But first I want to set the table a little bit and talk about what we mean when we say augmented reality. When a lot of people think about AR, the first thing they think about is bringing virtual objects to users in the world, and it is that, that's part of it; we call this the "out" of AR. But AR also means more than that. It means being able to understand the world visually, to bring information to users, and we call this understanding the "in" of AR. Many of the tools and techniques that were created for computer vision and machine learning perfectly complement tools like ARCore, which is Google's AR development platform. So when we explore AR, we build experiences that include one of these approaches, or both.

So this talk is going to be about three magic powers that we've found for AR. We think these magic powers can help you build better AR experiences for your users, so during the talk we're going to cover some prototypes that we've built and share our learnings with you in each of these three areas. First, I'll talk to you about context-driven superpowers: that's about how we can combine visual and physical understanding of the world to make magical AR experiences. Then Ellie will talk to you about shared augmentations, which is really all about the different ways that we can connect people together in AR and how we can empower them just by putting them together. And then Luca will cover expressive inputs. This is about how AR can help unlock authentic and natural understanding for our users.

So let's start with context-driven superpowers. What this really means is using AR technologies that deeply understand the context of a device, and then building experiences that directly leverage that context. There are two parts to an AR context: one is visual understanding and the other is physical understanding. ARCore gives your phone the ability to understand and sense its environment physically, and through computer vision and machine learning we can make sense of the world visually. By combining these results we get an authentic understanding of the scene, which is a natural building block of magical AR.

So let's start with visual understanding. The prototyping community has done some awesome explorations here, and we've done a few of our own that we're excited to share. To start, we wondered if we could trigger custom experiences from visual signals in the world. Traditional apps today leverage all kinds of device signals to trigger experiences: GPS, the IMU, and so on. So could we use visual input as a signal as well? We built a really basic implementation of this concept. It uses ARCore and the Google Cloud Vision API to detect any kind of snowman in the scene, which triggers a particle system that starts to snow. So through visual understanding we were able to tailor an experience to specific cues in the users' environment. This enables adaptable and context-aware applications.
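
To make that snowman trigger concrete, here is a minimal sketch of the kind of glue involved: a camera frame is sent to the Cloud Vision API for label detection, and a "snowman" label switches on the snow. The request shape follows the public images:annotate REST endpoint; the YOUR_API_KEY placeholder and the startSnowEffect() hook are hypothetical app-specific pieces, not something shown in the talk.

import org.json.JSONArray
import org.json.JSONObject
import java.net.HttpURLConnection
import java.net.URL
import java.util.Base64

const val VISION_ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY"

// Ask Cloud Vision which labels it sees in a JPEG camera frame.
fun detectLabels(jpegBytes: ByteArray): Set<String> {
    val request = JSONObject().put("requests", JSONArray().put(JSONObject()
        .put("image", JSONObject().put("content", Base64.getEncoder().encodeToString(jpegBytes)))
        .put("features", JSONArray().put(JSONObject().put("type", "LABEL_DETECTION").put("maxResults", 10)))))

    val connection = (URL(VISION_ENDPOINT).openConnection() as HttpURLConnection).apply {
        requestMethod = "POST"
        doOutput = true
        setRequestProperty("Content-Type", "application/json")
    }
    connection.outputStream.use { it.write(request.toString().toByteArray()) }

    val labels = JSONObject(connection.inputStream.bufferedReader().readText())
        .getJSONArray("responses").getJSONObject(0)
        .optJSONArray("labelAnnotations") ?: return emptySet()
    return (0 until labels.length())
        .map { labels.getJSONObject(it).getString("description").lowercase() }
        .toSet()
}

// Called with an occasional captured frame; startSnowEffect() stands in for the renderer's particle system.
fun maybeStartSnow(jpegBytes: ByteArray) {
    if ("snowman" in detectLabels(jpegBytes)) startSnowEffect()
}

fun startSnowEffect() {
    // App-specific: enable the AR snow particle system.
}
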
Now, even though this example is a simple one, the concept can be extended so much further. For example, yesterday we announced the Augmented Images API for ARCore. With it you can build an experience that reacts to the device's movement relative to an image in the scene, or even to a known distance from an object in the world. If you think this concept is interesting, I highly recommend checking out the AR/VR demo tent; they have some amazing Augmented Images demos there.

The next thing we wanted to know is whether we could bridge the gap between digital and physical and, for example, bring some of the most delightful features of e-readers to physical books. The digital age has brought all kinds of improvements to traditional human behaviors, and e-readers have brought lots of cool new things to reading. But if you're like me, sometimes you just miss the tactility of holding a great book in your hands. So we wanted to know if we could bridge that gap. In this prototype, users highlight a passage or a word with their finger and instantly get back a definition. This was a great example of a short-form, focused interaction that required no setup for users; it was an easy win made possible only by visual understanding. But as soon as we tried this prototype, two downfalls became immediately apparent. The first is that it was really difficult to aim your finger at a small target on a phone; maybe the page is moving as well, and you're trying to hit this little word, and that was really hard. The second was that when you're highlighting a word, your finger is blocking the exact thing that you're trying to see. Now, these are easily solvable with a follow-up UX iteration, but they illustrate a larger lesson: with any kind of immersive computing, you really have to try it before you can judge it. An interaction might sound great when you talk about it, and it might even look good in a visual mock, but until you have it in your hand and you can feel it and try it, you're not going to know whether it works. You really have to put it in a prototype so you can create your own facts.

Another thing we think about a lot is whether we can help people learn more effectively. Could we use AR to make learning better? There are many styles of learning, and combining them often results in faster, higher quality learning. In this prototype we combined visual, aural, verbal, and kinesthetic learning to teach people how to make the perfect espresso. We placed videos around the espresso machine in the physical locations where each step occurs, so if you're learning how to use the grinder, the video for the grinder is right next to it.
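
Pinning a lesson video to the spot where its step happens is, at its core, an ARCore hit test: tap the grinder on screen and anchor the video there. Here is a minimal sketch along those lines; the attachVideoNode() helper is a hypothetical rendering hook (for example a Sceneform node with a video texture), since ARCore itself only hands back the anchor.

import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// On a screen tap, anchor the lesson video to the first detected plane under the finger.
fun placeLessonVideo(frame: Frame, tapX: Float, tapY: Float, videoUri: String): Anchor? {
    for (hit in frame.hitTest(tapX, tapY)) {
        val trackable = hit.trackable
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            val anchor = hit.createAnchor()      // stays fixed to that physical spot
            attachVideoNode(anchor, videoUri)    // hypothetical: render the video at the anchor
            return anchor
        }
    }
    return null
}

fun attachVideoNode(anchor: Anchor, videoUri: String) {
    // App-specific rendering code goes here.
}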

Now, for users to trigger that video, they move their phone to the area and then they can watch the lesson. That added physical component, the proximity of the video to the actual device, made a huge difference in overall understanding. In our studies, users who had never used an espresso machine before easily made an espresso after using this prototype. So for some kinds of learning this can be really beneficial. Unfortunately for our prototype, one thing we learned is that it's actually really hard to hold your phone and make an espresso at the same time. So you need to be really mindful of the fact that your users might be splitting their physical resources between the phone and the world, and if that applies to your use case, try building experiences that are really snackable and hands-free.

Speaking of combining learning and superpowers, we wondered if AR could help us learn from hidden information that's layered in the world all around us. This is a prototype that we built, an immersive language learning app. We showed translations roughly next to objects of interest, and positioned these labels by taking a point cloud sample from around the object and putting the label roughly in the middle of the points. Users found this kind of immersive learning really fun, and we saw them freely exploring the world looking for other things to learn about. So we found that if you give people the freedom to roam, and tools that are simple and flexible, the experiences you build for them can create immense value.
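
That label placement, finding the middle of a point cloud sample around an object, is cheap to do with ARCore's point cloud. A rough sketch of that step is below; the 0.3 confidence cutoff is an arbitrary illustrative value.

import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Pose
import com.google.ar.core.Session

// Average the current point cloud (each point is x, y, z, confidence) and anchor a label there.
fun anchorLabelAtPointCloudCenter(session: Session, frame: Frame, minConfidence: Float = 0.3f): Anchor? {
    val pointCloud = frame.acquirePointCloud()
    try {
        val points = pointCloud.points   // FloatBuffer of (x, y, z, confidence) tuples
        var sumX = 0f; var sumY = 0f; var sumZ = 0f; var count = 0
        while (points.remaining() >= 4) {
            val x = points.get(); val y = points.get(); val z = points.get(); val confidence = points.get()
            if (confidence >= minConfidence) { sumX += x; sumY += y; sumZ += z; count++ }
        }
        if (count == 0) return null
        val center = Pose.makeTranslation(sumX / count, sumY / count, sumZ / count)
        return session.createAnchor(center)   // the translation label gets rendered at this anchor
    } finally {
        pointCloud.release()                  // point clouds must be released when you're done with them
    }
}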

And now, physical understanding. This is our ability to extract and infer information and meaning from the world around you. When a device knows exactly where it is, not only in space but also relative to other devices, we can start to do things that really feel like superpowers.

For example, we can start to make interactions that are extremely physical, natural, and delightful. Humans have been physically interacting with each other for a really long time, but digital life has abstracted away some of those interactions. We wondered if we could swing the pendulum back the other direction a little bit using AR. So in this prototype, much like a carnival milk bottle game, you fling a baseball out of the top of your phone and it hits milk bottles that are shown on other devices. You just point the ball where you want it to go, and it goes. We did this by putting multiple devices in a shared coordinate system, which you can do using the new Cloud Anchors API that we announced for ARCore yesterday. One thing you'll notice here is that we aren't even showing users the pass-through camera. We did that deliberately, because we really wanted to stretch and see how far we could take this concept of physical interaction. One thing we learned was that once people figured it out, they found it really natural and actually had a lot of fun with it, but almost every user that tried it had to be not only told how to do it but shown how to do it. People actually had to flip the mental switch on the expectations they have for how a 2D smartphone interaction works. So you really need to be mindful of the context people bring and the mental models they have for 2D smartphone interactions.
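
The shared coordinate system behind that milk bottle game is what the Cloud Anchors API provides: one phone hosts an anchor, the other phones resolve it by ID, and from then on everyone agrees on where things sit in the room. A minimal sketch of the flow is below; the apps still need their own channel (for example a game server) to pass the anchor ID between players, which isn't shown here.

import com.google.ar.core.Anchor
import com.google.ar.core.Anchor.CloudAnchorState
import com.google.ar.core.Config
import com.google.ar.core.Session

// Turn on cloud anchors once when configuring the session.
fun enableCloudAnchors(session: Session) {
    val config = Config(session)
    config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    session.configure(config)
}

// Device A: start hosting a locally created anchor, then share its cloudAnchorId once hosting succeeds.
fun hostSharedAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Device B: resolve the anchor from the ID received over the app's own network channel.
fun resolveSharedAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Poll this each frame; content placed relative to the anchor lines up across devices once it succeeds.
fun isReady(cloudAnchor: Anchor): Boolean =
    cloudAnchor.cloudAnchorState == CloudAnchorState.SUCCESS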

We also wanted to know whether we could help someone visualize the future in a way that would let them make better decisions. Humans pay attention to the things that matter to us, and in a literal sense, the imagery that appears in our peripheral vision takes a lower cognitive priority than the things we're focused on. Would smartphone AR be any different? In this experiment we overlaid the architectural mesh of a homeowner's remodel on top of the active construction project, so the homeowner could visualize, in context, what the changes to their home were going to look like. At the time this prototype was created we had to do a manual alignment of the model on top of the house; if I rebuilt it today I would use the Augmented Images API that we announced yesterday, which makes it much easier to fix an image to a location on the house and sync the two together. But even with that initial friction in the UX, the homeowner got tremendous value out of this. In fact, after seeing it they went back to their architect and changed the design of their new home, because they found out that they weren't going to have enough space in the upstairs bathroom, something they hadn't noticed in the plans before. So the lesson is that if you provide people with high-quality, personally relevant content, you can create experiences that they will find really valuable and attention-grabbing.

But when does modifying the real environment start to break down? You may be familiar with the uncanny valley, a concept suggesting that when things that are really familiar to humans are almost right but just a little bit off, they make us feel uneasy. Subtle manipulations of the real environment in AR can feel similar, and they can be difficult to get right. In this specific example we tried removing things from the world: we created an AR invisibility cloak for the plant. What we did was create a point cloud around the object, attach little cubes to the points, apply a material to them, and extract the texture from the surrounding environment. It worked pretty well in uniform environments, but unfortunately the world doesn't have too many of those; it's made up of dynamic lighting and subtle patterns, so this always ended up looking a little bit weird. Remember to be thoughtful about the way you add or remove things from the environment. People are really perceptive, so you need to build experiences that align with their expectations, or at the very least don't defy them.

But is physical understanding always critical? All the points in this section have their place, but ultimately you have to be guided by your critical user journeys. In this example we wanted to build a viewer for an amazing 3D model by Damon Padilla. It was important that people could see the model in 3D and move around to discover the object. A challenge, though, was that the camera feed was creating a lot of visual noise and distraction, and people were having a hard time seeing the nuances of the model. So we adopted concepts from filmmaking and guided users by using focus and depth of field, all controlled by the user's motion. This resulted in people feeling encouraged to explore, and they really stopped getting distracted by the physical environment. Humans are already great at so many things; AR allows us to leverage those existing capabilities to make interactions feel invisible. If we leverage visual and physical understanding together, we can build experiences that really give people superpowers. With that, Ellie is going to talk to you about the special opportunities we have in shared augmentations.

Thanks, Chris. I'm Ellie Nattinger, a software engineer and prototyper on Google's VR and AR team. Chris has talked about the kinds of experiences you start to have when your devices can understand the world around you, and I'm going to talk about what happens when you can share those experiences with the people around you. We're interested not only in adding AR augmentations to your own reality, but also in sharing those augmentations. If you listened to the developer keynote yesterday, you know that shared AR experiences are a big topic for us these days.

For one thing, a shared reality lets people be immersed in the same experience. Think about a movie theater: why do movie theaters exist? Everybody is watching a movie that they could probably watch at home on their television or their computer, by themselves, much more comfortably, without having to go anywhere. But it feels qualitatively different to be in a space with other people, sharing that experience. And beyond those kinds of shared passive experiences, having a shared reality lets you collaborate, lets you learn, lets you build and play together. We think you should be able to share your augmented realities with your friends and your families and your colleagues, so we've done a variety of explorations into how you build those kinds of shared realities in AR.

First, there's a technical question: how do you get people aligned in a shared AR space? There are a number of ways we've tried. If you don't need a lot of accuracy, you can just start your apps with all the devices in approximately the same location. You can use markers or augmented images, so multiple users can all point their devices at one picture and get a common point of reference, the "here's the (0, 0, 0) of my virtual world." And you can even use the new ARCore Cloud Anchors API that we just announced yesterday to localize multiple devices against the visual features of a particular space.

In addition to the technical considerations, we've found three axes of experience that are really useful to consider when you're designing these kinds of shared augmented experiences. The first is co-located versus remote: are your users in the same physical space or in different physical spaces? The second is how much precision is required, or whether it's optional: does everybody have to see the virtual bunny at exactly the same point in the world, or do you have a little bit of flexibility? And the third is whether your experience is synchronous or asynchronous: is everybody participating in the augmented experience at exactly the same time, or at slightly different times? We see these not necessarily as binary axes, but more as a continuum you can consider when you're designing multi-person AR experiences.

So let's talk about some prototypes and apps that fall on different points of the spectrum, and the lessons we've learned from them.

To start with, we found that when you've got a group interacting with the same content in the same space, you really need shared, precise spatial registration. For example, let's say you're in a classroom. Imagine if a group of students doing a unit on the solar system could all look at and walk around the globe, an asteroid field, or the Sun. In Expeditions AR, one of Google's initial AR experiences, all the students point their devices at a marker, they calibrate themselves against a shared location, and they see the object in the same place. What this allows is for a teacher to point out particular parts of the object: if you all come over and look at this side of the Sun, you see a cutout into its core; over here on the Earth you can see a hurricane. Everybody starts to get a spatial understanding of the parts of the object and where they are in the world. So when does it matter that your shared space has a lot of precision? When you have multiple people who are all in the same physical space, interacting with or looking at the exact same augmented objects at the same time.

We were also curious how much we can take advantage of people's existing spatial awareness when you're working in high-precision shared spaces. We experimented with this in a multi-person construction application, where multiple people are all building onto a shared AR object in the same space, adding blocks together. Everybody needs to be able to coordinate; you want to be able to tell what part of the object someone is working on, and have your physical movement support that collaboration. If Chris is over here placing some green blocks, in the real world I'm not going to step in front of him and start putting yellow blocks there instead. We've got a natural sense of how to collaborate, how to arrange ourselves, how to coordinate ourselves in space. People already have that sense, so we can keep it in shared AR if our virtual objects are lined up precisely enough. We also found it helpful that because you can see both the digital object and the other people through the pass-through camera, you get a pretty good sense of what people are looking at as well as what they're interacting with.

We've also wondered what it would feel like to have a shared AR experience for multiple people in the same space who aren't necessarily interacting with the same things. Think of this more like an AR LAN party, where we're all in the same space, or maybe different spaces, seeing connected things and having a shared experience. This prototype is a competitive quiz guessing game where you look at the map, figure out which place on the globe you think is represented, and stick your push pin in; you get points depending on how close you are. We've got the state synced so we know who's winning, but the location of the globe doesn't actually need to be synchronized, and maybe you don't want it to be, because I don't want anybody to get a clue from where I'm sticking my push pin into the globe. It's fun to be together even when we're not looking at exactly the same AR things.

And do we always need our spaces to align exactly? Sometimes it's enough just to be in the same room. This prototype is an AR boat race: you blow on the microphone of your phone, and that creates the wind that propels your boat down a little AR track. By being next to each other when we start the app and spawn the track, we get a shared physical experience even though our AR worlds might not perfectly align. We keep all the elements of the social game: playing, talking to each other, our physical presence. But we're not necessarily touching the same objects.

Another super interesting area we've been playing with is how audio can be a way to include multiple people in a single-device AR experience. If you think of the standard magic-window AR device, it's a pretty personal experience: I'm looking at this thing through my phone. But now imagine you can leave a sound in AR that has a 3D position, like any other virtual thing. Now you start to be able to hear it even if you're not necessarily looking at it, and other people can hear the sound from your device at the same time. So for example, let's say you could leave audio notes all over your space; it might look something like this. "This is a cherry." Notice that you don't have to be the one holding the phone to get a sense of where these audio annotations live in physical space.
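
The key step for those 3D-positioned notes is working out, every frame, where an anchored sound sits relative to the phone so the audio engine can spatialize it. The ARCore part of that is just a pose transform, sketched below; the setNotePosition() call is a hypothetical stand-in for whatever spatial audio engine (Resonance Audio, in our prototypes) actually renders the sound.

import com.google.ar.core.Anchor
import com.google.ar.core.Frame

// Express an audio note's anchored position in the listener's (camera's) coordinate frame,
// then hand it to the audio engine.
fun updateAudioNote(frame: Frame, noteAnchor: Anchor, noteId: Int) {
    val notePose = noteAnchor.pose
    val noteInWorld = floatArrayOf(notePose.tx(), notePose.ty(), notePose.tz())
    val worldToCamera = frame.camera.pose.inverse()          // world -> listener transform
    val noteInListener = worldToCamera.transformPoint(noteInWorld)
    setNotePosition(noteId, noteInListener[0], noteInListener[1], noteInListener[2])
}

fun setNotePosition(noteId: Int, x: Float, y: Float, z: Float) {
    // App-specific: forward to the spatial audio engine's source-position API.
}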

Another question we've asked: if you have a synchronous AR experience with multiple people who are in different places, what kind of representation do you need of the other person? Let's imagine you have a shared AR photos app, where multiple people can look at photos that are arranged in space. I'm taking pictures in one location and viewing them arranged around me in AR, and then I want to share my AR experience with Luca, who joins me from a remote location. What we found was that we needed a couple of things to make us feel connected and sharing the same AR experience even though we were in different places. We needed a voice connection so we could actually talk about the pictures, and we needed to know where the other person was looking, to see which picture they're paying attention to when they're talking about it. But what was interesting is that we didn't actually need to know where the other person was; as long as we had that shared frame of reference, we're all here, here's what I'm looking at, here's what Luca is looking at.

We've also been curious about asymmetric experiences. What happens when users share the same space and the same augmentations, but they've got different roles in the experience? For instance, in this prototype Chris is using his phone as a controller to draw in space, but he's not actually seeing the AR annotations he's drawing. The other person sees the same AR content and uses their phone to take a video. They're playing different roles in the same experience, a kind of artist versus cinematographer. And we found there can be some challenges to asymmetric experiences if there's a lack of information about what the other person is experiencing. For instance, Chris can't tell what Luca is filming, or see what his drawing looks like from far away.

So, as we mentioned previously, these kinds of different combinations of space and time and precision are relevant for multi-person AR experiences, and they have different technical and experiential needs. If you have multiple people in the same space with the same augmentations at the same time, you need a way of sharing and a way of common localization; that's why we created the new Cloud Anchors API. If you've got multiple people in the same space with different augmentations at the same time, the AR LAN party model, you need some way to share data. And if you've got multiple people in different spaces interacting with the same augmentations at the same time, you need sharing and some kind of representation of that interaction. Shared AR experiences are a big area, and we've explored just some parts of the space; we'd love to see what you all come up with.

So, Chris has talked about examples where your device understands your surroundings and that gives you special powers. I've talked about examples where multiple people can collaborate and interact. Now Luca will talk about what happens when your devices have a better understanding of you and allow for more expressive inputs.

Thank you, Ellie. My name is Luca Prasso, and I'm a prototyper and technical artist working on the Google AR and VR team. Let's talk about the devices that you carry with you every day, and the ones all around you, and how they can provide meaningful and authentic signals that we can use in our augmented experiences. ARCore tracks the device's motion as we move through the real world and provides some understanding of the environment, and these signals can be used to create powerful, creative, and expressive tools, and to offer new ways for us to interact with digital content.

Data represents who we are, what we know, and what we have, and we were interested in understanding whether users can connect more deeply with data if it's displayed around them in 3D in AR, and whether through physical exploration they can look at the data up close and from a distance. So we took a database of several thousand world cities and mapped it onto an area the size of a football field. We assigned a dot to every city, scaled the dot based on the population of the city, and gave each country a different color. Now you can walk through this data field, and as ARCore tracks the motion of the user we play footsteps in sync: you take a step and you hear a step. An ambisonic sound field surrounds the user and enhances the experience and the sense of exploration of this data forest, and flight paths are displayed up in the sky. The pass-through camera is heavily tinted so that the user can focus on the data while still keeping a sense of presence. What happens is that as the user walks through the physical space, they start mapping and pairing, creating a mental map between the data and the physical location, and they start understanding better, in this particular case, the relative distances between places. We also discovered that the gestures that are part of our digital life every day change in AR: pinch to zoom becomes something more traditional, actually moving closer to the digital object and inspecting it like we do with a real object, and pan and drag means taking a couple of steps to the right to look at the information.
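
The data field above boils down to a simple mapping: latitude and longitude scaled onto a football-field-sized patch of floor, dot size from population, one color per country. A sketch of that layout step is below; the 100 m by 70 m field dimensions and the cube-root population scaling are illustrative choices rather than what the prototype necessarily used.

data class City(val name: String, val country: String, val lat: Double, val lon: Double, val population: Long)
data class Dot(val x: Float, val z: Float, val radius: Float, val country: String)

// Lay the cities out on the ground plane (x/z in meters), with dot radius growing with the
// cube root of population so the biggest cities don't dwarf everything else.
fun layoutCities(cities: List<City>, fieldLength: Float = 100f, fieldWidth: Float = 70f): List<Dot> =
    cities.map { city ->
        val x = ((city.lon + 180.0) / 360.0 * fieldLength).toFloat()   // longitude spans the field length
        val z = ((90.0 - city.lat) / 180.0 * fieldWidth).toFloat()     // latitude spans the field width
        val radius = (0.05 + 0.002 * Math.cbrt(city.population.toDouble())).toFloat()
        Dot(x, z, radius, city.country)
    }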

Physical exploration like this is very fascinating, but we need to take into account all of our different users and provide alternative movement affordances. In AR a user can move anywhere, but what if they cannot move, or don't want to move, or are sitting down? In this case we let the user simply point the phone anywhere they want to go and tap on the screen, and the application moves the point of view in that direction. At the same time we still provide audio, haptic, and color effects to enhance the sense of physical space the user would have while traveling.

So we found that this is a powerful mechanism for exploring certain types of data that make sense in 3D space, and for letting the user discover hidden patterns.

But can we go beyond the pixels that you can find on your screen? We're fascinated by spatial audio and by ways to incorporate audio into an AR experience, so we combined ARCore with the Google Resonance Audio SDK. Resonance is a very powerful spatial audio engine that Google recently open-sourced, and you should check it out because it's great. Now I can take audio sources, place them in 3D locations, animate them, and describe the properties of the walls and the ceilings and the floor and all the obstacles, and as ARCore moves the point of view it carries with it the digital ears that Resonance uses to render the sound in the scene accurately.

So what can we do with this? We imagined: what if I could sit next to a performer during an acoustic concert, or a classical concert, or a jazz performance? What if I could be on stage with actors and listen to their play, and just be there? So we took two amazing actors, Chris and Ellie, and asked them to record some lines from Shakespeare separately. We placed these audio sources a few feet apart, and we surrounded the environment with an ambisonic sound field of a rainforest with rain; later on we switched to a hall with a lot of reverb from the walls. Now the user can walk around, ideally with a nice pair of headphones, and it's like being on stage with these actors.

We took this example and extended it. We observed that we can build, in real time, a 2D map of where the user has been so far with their phone as they walk around, so at any given time, when the user hits a button, we can programmatically place audio recordings in spots we know the user can reach with their phone and with their ears. Suddenly the user becomes the human mixer of this experience, and different instruments can populate your squares and your rooms and your schools. This opens the door to an amazing number of opportunities for audio-first AR experiments.

So let's go back to visual understanding. Chris mentioned that computer vision and machine learning can interpret the things around us, and this is also important for understanding where the body is and turning it into an expressive controller. In real life we are surrounded by lots of sound sources all over the place, and naturally our body and our head move to mix and focus on what we want to listen to. So can we take this intuition into the way we watch movies or play video games on a mobile device? What we did was take the front camera signal and feed it to Google Mobile Vision, which gave us a head position and orientation, and we fed that to the Google Resonance SDK. And we said: OK, you're watching a scene in which actors are in a forest, they're all around you, and it's raining. Now, when I hold my phone far away from my head I hear the forest, and as I bring the phone closer to my face I start hearing the actors. I warn you, this is an Oscar performance.

"Is all our company here?" "Man by man, according to the scrip. Here is the scroll of every man's name which is thought fit, through all Athens, to play in our interlude before the Duke and the Duchess."
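
That phone-to-face mixing only needs a rough idea of how close the phone is to the head. The Mobile Vision face detector gives a face bounding box per front-camera frame, and the apparent face width is a crude proxy for distance that can drive the forest-versus-actors mix. In the sketch below, the setActorMix() hook and the 0.15/0.45 thresholds are hypothetical and illustrative; the prototype also used head orientation from the same detector.

import android.content.Context
import android.graphics.Bitmap
import com.google.android.gms.vision.Frame
import com.google.android.gms.vision.face.FaceDetector

// Estimate how close the phone is to the user's face from a front-camera frame,
// and crossfade between the ambience and the actors accordingly.
class HeadMixController(context: Context) {
    private val detector = FaceDetector.Builder(context)
        .setTrackingEnabled(true)
        .build()

    fun onFrontCameraFrame(bitmap: Bitmap) {
        val faces = detector.detect(Frame.Builder().setBitmap(bitmap).build())
        if (faces.size() == 0) return
        val face = faces.valueAt(0)
        // Fraction of the frame the face spans: larger means the phone is closer to the head.
        val closeness = (face.width / bitmap.width.toFloat()).coerceIn(0f, 1f)
        val actorGain = ((closeness - 0.15f) / (0.45f - 0.15f)).coerceIn(0f, 1f)
        setActorMix(actors = actorGain, ambience = 1f - actorGain)
    }

    fun setActorMix(actors: Float, ambience: Float) {
        // Hypothetical hook: adjust per-source gains in the spatial audio engine.
    }
}
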
Now, what is interesting is that the tiny little motions we make while we're watching and playing with an experience like this can be turned into subtle changes in the user experience that we can control.

So we've talked about how changes in pose can become a trigger to drive interaction. In this Google Research app called Selfissimo!, we actually exploit the opposite, the absence of motion: when the users, in this case my kids, stop and pose, the app takes a picture. This simple mechanism, triggered by computer vision, creates incredibly delightful opportunities that, apparently, my kids love.
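
Selfissimo!'s trigger is stillness: the shutter fires when the subject holds a pose. The app itself does this with computer vision, and the gist can be sketched with any per-frame measure of subject movement. Below, a detected face center (for example from the Mobile Vision detector above) is compared across frames, and a hypothetical takePhoto() fires once the movement has stayed under a threshold for a while; the 8-frame window and 0.01 threshold are arbitrary illustrative values, not Selfissimo!'s actual logic.

import kotlin.math.hypot

// Fire the shutter once the subject has held still for `stillFrames` consecutive frames.
// x and y are the detected face center in normalized image coordinates, supplied once per frame.
class StillnessTrigger(
    private val threshold: Float = 0.01f,
    private val stillFrames: Int = 8,
    private val takePhoto: () -> Unit
) {
    private var lastX = Float.NaN
    private var lastY = Float.NaN
    private var stillCount = 0

    fun onFaceCenter(x: Float, y: Float) {
        if (!lastX.isNaN()) {
            val movement = hypot(x - lastX, y - lastY)
            stillCount = if (movement < threshold) stillCount + 1 else 0
            if (stillCount == stillFrames) takePhoto()   // fires once per held pose
        }
        lastX = x
        lastY = y
    }
}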

And research is making incredible progress in looking at an RGB image and understanding where the body pose and skeleton are. You should check out the Google Research blog posts, because the pose estimation research is amazing. We took Ellie's video and fed it to the computer vision algorithm, and we got back a bunch of 3D poses and segmentation masks of Ellie. This opens the door to a wide variety of experiments with creative filters we can apply, but what is more interesting for us is that it also lets us better understand the intent and the context of the user.

So we took this pose estimation technology and added a digital character that tries to mimic what the human character is doing, and this lets you bring your family and friends, in this case my son Noah, into the scene, so that they can act and you can create a nice video. But, like Ellie mentioned before, we should consider the situation, because this is an asymmetric experience. What you don't see here is how frustrated my son was after a few minutes, because he couldn't see what was going on. I was the one having fun taking pictures and videos of him, and he didn't see much; he could only hear the lion roaring. So we need to be extremely mindful as developers about this imbalance of delight, and maybe I should have cast the image from the phone to a nearby TV so my son could be a first-class citizen in this experience.

All of these AR technologies, the physical and the visual understanding, are ingredients that let us unlock all kinds of new expressive input mechanisms, and we are still exploring; we're just at the beginning of this journey. But we are excited to hear what you think and what you come up with.

So, to summarize: we shared a bunch of ways in which we think about AR and various explorations that we have done. We talked about expanding our definition of AR: putting content into the world, but also pulling information from the world. These are all ingredients that we use to create magical AR superpowers, to enhance social interactions, and to express yourself in this new digital medium. We combined ARCore capabilities with different Google technologies, and that gives us the opportunity to explore all these new interaction models. We encourage you developers to stretch your definition of AR. But we want to do this together: we're going to keep exploring, but we want to hear what tickles your curiosity. We can't wait to see what you build next. Thank you very much for coming.

2018-05-18 01:26


Comments:

Support onePlus 5T for ARCore pleaseeeeeeeeeeee

I guess OnePlus 5T users have to spam Google every day lol for this ☺

Indeed! The 5 has been supported since almost the first release, and now the 3T is also supported! I hope that 5T support is coming soon!

I too wish the state of ARCore support across the vast number of Android devices weren't so depressing as it stands right now. Seeing so much potential without having any practical way to use it is likewise quite saddening.

I'm very curious how you guys do the real-time feed between two devices in different locations at 23:25. Any documents or guides for it?

Hey, there is a way to make ARCore work on the 5T through a rooted phone and Magisk.

Google released ARCore. I don't understand why I can't install AR Stickers, or something of similar quality, on an Android device that supports ARCore but isn't a Pixel.

PART 1:
(Image Target) Trigger experience from environmental signal: https://youtu.be/bUGhG-AZpu0?t=218
(OCR and Translator API) Bridge the gap between physical and digital: https://youtu.be/bUGhG-AZpu0?t=310
(Markerless AR with portal maybe?) Combine learning with context (espresso machine): https://youtu.be/bUGhG-AZpu0?t=407
(Object recognition, Translator, point cloud to show the content) Reveal new layers hidden in the physical environment (translation app): https://youtu.be/bUGhG-AZpu0?t=407
(ARCore Cloud Anchor) Physical interaction is natural and delightful (syncing interaction between two devices): https://youtu.be/bUGhG-AZpu0?t=550
(Augmented Image API) Overlay a model on top of a house: https://youtu.be/bUGhG-AZpu0?t=626
(Point cloud, occlusion, extract the texture from surroundings) Manipulating the environment: https://youtu.be/bUGhG-AZpu0?t=725
(NO IDEA...) Focus a 3D object, blur the background: https://youtu.be/bUGhG-AZpu0?t=807
PART 2:
Classroom experience, sharing the same content in the same space: https://youtu.be/bUGhG-AZpu0?t=1052
Collaborative AR: https://youtu.be/bUGhG-AZpu0?t=1145
Be together, not the same: https://youtu.be/bUGhG-AZpu0?t=1238
Shared audio experience: https://youtu.be/bUGhG-AZpu0?t=1316
Remote sharing: https://youtu.be/bUGhG-AZpu0?t=1397

By any chance, are any of the sources available for developers to dive into?

Where can I find samples of the pose estimation project?

ARCore is Awesome

3:42 10:34 12:26 21:00 22:27 23:14 31:38
