INNOVATION IN ARCHITECTURE: AI, Metaverse and the ‘Carbonitor’


Welcome, everyone, to the open studio night at HENN as part of Berlin Design Week tonight. I'm very happy to see so many of you, and so many familiar faces; that is greatly appreciated. How does technology influence and innovate our design process? That's the question we want to discuss today, and I'm very much looking forward to hearing three short presentations on the topic from very different perspectives and angles. And I'm happy that Giovanni Betti, our head of sustainability, is going to moderate the panel.

There will be questions and answers afterwards, and obviously drinks and some snacks for everyone, as well as the opportunity to take a look at the studio next door and to meet our lovely colleagues, who will introduce what we're currently working on. Giovanni, without further ado, I'll hand it over to you.

Thanks, Martin. My name is Giovanni Betti. I lead the sustainability department here at HENN and also work closely with the presenters tonight. They all come from the programming team, which works at the very beginning of the design phase, trying to understand the design challenge: not necessarily finding the right solution, but finding the right question to pose to the designers. They really try to get to the heart of what the challenge of the project is. They are, in a way, our internal think tank.

They operate outside the standard boundaries of architectural practice, across disciplines, and they try to look into the future at what is coming. Today we have three exciting presentations on the topic of the future. We live in a period of transformation and a period of crisis. There is the big transformation coming from the advent of artificial intelligence, from technology that is accelerating at an ever-increasing pace, with exponential growth. And there is also the environmental crisis we are facing, in which we as architects have a strong role to play, either accelerating it or helping to avert it. So we're going to have three interesting presentations on this topic.

Our first speaker is Fabian, and he's going to give you some insights into our journey of using AI as part of the design and conceptualization process, and into what it means to design with team members that are not just human. Thank you, Fabian. Thanks for the introduction. My name is Fabian. I am part of the HENN Design Strategy team, and in the past weeks I have been exploring the topic of AI and how it affects our processes in the early design phase.

This talk is intended as a snapshot of what we are currently working on. I just want to share with you our first insights in an ongoing process of trying to figure out how AI is, or can be, useful for us. In the past months we have witnessed a boom of AI tools. You could say it went viral.

Each week new tools got released, and we can simply state: the topic is there and we somehow have to deal with it. For me, the boom is partly explained by the fact that we can suddenly manipulate other basic building blocks of information besides numbers, which has been possible for some time. The new building blocks are words and images, and out of these, almost any kind of information can be created. At HENN Design Strategy we also mainly work with words and images, which is why we are eager to explore how these tools can help us with our processes. So what do we do? As Giovanni said, we work in the very early design phase, before the actual building design process starts.

What we are interested in are the needs of the users and stakeholders in a project. Very early in the process, our goal is to collect all the relevant information to make sure that, later on, the project is going to meet all requirements. We do this by opening up the space together with the users and by conducting workshops, as you can see in this picture, as well as interviews and other creative formats.

We collect their needs, their goals, and also the first ideas they might already have. Then we take these needs and translate them into project goals or first concepts. The idea is that this is not yet about the design itself; rather, it serves as a guideline for the design team that starts later, making sure from the very beginning that the project meets all expectations and requirements. These are often complex processes, as you can see here in this picture, with multiple workshops, up to 30.

I think one of the biggest was the Gasteig project in Munich, if I remember correctly, with up to 30 workshops. Our process roughly follows the double diamond that you can see here. The double diamond is a concept that is also used in other user-centered design approaches, for example design thinking.

Basically, it means that what we do follows a rough pattern. We have phases of opening, of exploration, where we open up the field together with the users; here it's about allowing all ideas, everything, to be said, heard, and considered. And then we have phases of consolidation, or closing, where we take the collected information, condense or compress it, and translate it into precise target images or concept ideas. So we asked ourselves: How can generative AI tools help us in this process? Can these tools take over some of our work, or speed it up? Or could they even create something entirely new with the collected information that we, as humans, with our current process, would not be able to do so easily? Our first starting point was using these tools for consolidation, for the synthesis of the collected user data. Our idea was: what would happen if we gave AI a record or transcript of an interview? What could it do with the data, given that an interview is basically words? We asked ourselves: could it represent or summarize these facts in a form that would not be feasible for us, or maybe in a more atmospheric or emotional way? The way we have done this so far is our card technique.

This is how we, as humans, summarize an interview. We translate statements and information from interviews and workshops into a very abstract but precise form: our cards. Each card has a statement and a diagram, and you could say that these cards are the essence of an interview or of a process; the result is the card wall. This usually resonates really well with the clients, because they see themselves in this wall and they have a kind of visual protocol of what has been said.

What you can see here is AI's attempt at summarizing an interview in a visual way. We let AI summarize an interview about AI that we did with a colleague, by feeding it a transcript and then letting it generate collages from its analysis. We asked it: Hey, can you structure and cluster this interview into different groups and then visualize each group? As you can see, the result is somewhat atmospheric, but unfortunately not very accurate or precise. We see a little bit of time, a little bit of collective intelligence, of the future, but it's not to the point yet; the content of the interview doesn't really come across. What happens if the basic input is not words but images? One case that is quite common is that we already have an existing building.

The building already exists and we want to make a post-utilization concept for it. We have many ideas, created in a workshop or in interviews, and we ask ourselves what AI could do with them. So here, for example, we might say we would like to have some gastronomy below this globe.

It should somehow be a lively space, a culture space. It should maybe be greener. So here are the results. First of all, as you may see, it was not possible for us to create one image that shows all the ideas, which a good concept usually should do: it should be multi-faceted.

It should contain multiple things. What you can see here is, on the left, some kind of cultural space. On the right it's a bit more artsy, like a restaurant setting, and in the middle we have this green globe. If you take an even closer look, you see that in the process we also lost our building. It is not even the same room anymore. We still have this globe, but it's always in a different position, and this yellow element, which was present in the first picture, is also there, but it moved and it represents different things. So to retain the room and show all our ideas, so far we only have Photoshop.

So what can we summarize from these examples? The synthesis or consolidation of information with words and images using generative AI tools only works to a limited extent, and this is partly due to the way these tools work: they do not understand what they see, but perform their tasks, whatever we tell them, based on patterns learned from training data. This is really important for understanding the outputs that are possible. These tools are shaped by their diet: billions of pictures and words from the internet. Patterns are concluded from this data, and this is what appears to be intelligent, but it is limited in certain ways. Firstly, it's not precise, because not the whole context of the building or setting is considered, only a small scope or fragment of the information; and the ability to abstract is also limited.

For example, if you ask for a light bulb, you get a light bulb; you do not get an "idea", which is what we have in mind when we draw a light bulb. It could also be a metaphor for something else, but AI takes things very literally, and it also does not understand nuance. So what does this mean? Is it useless for us? No, we wouldn't say so, because we still have the exploratory phase. Here we do not necessarily need precision and abstraction; instead, the goal is to generate new ideas and concepts and to have a base for discussion that can be refined later on in the process. Here we can literally understand AI as a kind of

creative genius who knows all the visual and textual styles of the world; we only need to give it the necessary guardrails so it creates something that we would like to have. In the following, I have some examples where we used AI more successfully in our exploration phase. The first one is exploration of data.

For a huge museum project we had quite a big room schedule with more than 1,000 rooms, and you can maybe imagine what it feels like to sit in front of this Excel file and try to navigate it. So we asked our tool for possible interpretations of the data, to see if and how we could make it a bit more understandable for us. It's important to note here that we didn't prompt for direct generation; we didn't say "give me a diagram", we just asked for ideas for interpretation, for exploration. I volunteered to be AI's intern and executed everything it suggested: it wanted me to install some Python libraries, and after some iterations I had coded these diagrams. Basically, AI coded it and I just executed the code. What we got is a kind of interactive diagram that you can click, which shows the rooms of this museum clustered into functional areas; by clicking on any of them, it opens and you see all the attached rooms. Of course, this was quite helpful for getting a feeling for the data.
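To give a feel for what such AI-suggested analysis code might look like, here is a minimal, standard-library-only sketch of the core idea: grouping a room schedule into functional clusters with aggregate figures. The room names, functional areas, and floor areas below are invented placeholders, not the real project data, and the actual diagrams were interactive visualizations built with additional Python libraries.

```python
from collections import defaultdict

# Hypothetical excerpt of a room schedule: (room name, functional area, floor area in m²).
# Illustrative rows only; the real project table had more than 1,000 rooms.
rooms = [
    ("Gallery 1", "Exhibition", 420.0),
    ("Gallery 2", "Exhibition", 380.0),
    ("Workshop A", "Education", 95.0),
    ("Seminar Room", "Education", 60.0),
    ("Storage 1", "Logistics", 210.0),
    ("Loading Bay", "Logistics", 140.0),
]

def cluster_by_function(rows):
    """Group rooms into functional areas, collecting names and total floor area."""
    clusters = defaultdict(lambda: {"rooms": [], "total_m2": 0.0})
    for name, function, area in rows:
        clusters[function]["rooms"].append(name)
        clusters[function]["total_m2"] += area
    return dict(clusters)

clusters = cluster_by_function(rooms)
for function, data in sorted(clusters.items()):
    print(f"{function}: {len(data['rooms'])} rooms, {data['total_m2']:.0f} m²")
```

From clusters like these, a plotting library can then render the clickable, expandable diagram described above.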

Another example that we use quite often is the exploration of atmospheres. To visualize or show first concept ideas, we often use the types of sketches you can see here to quickly create an atmosphere; they are more about the use of the space. Remember, we are not yet in the design phase, so it's more about the qualities the space should have. We asked ourselves what happens when we provide AI with a sketch whose style we know our clients already like: can AI maybe create something new from that? So we gave it some examples. In the first one, this sketch is transformed into a kind of fitting lounge or design space for Porsche. You can maybe also see the color difference; this is one of the patterns in the training data, maybe there are more red Porsches than green ones, so the color changed. But other than that, we could retain quite a lot of the original sketch.

The other example is based on the same sketch. Here the idea was: we need to create an office concept for Dachser, a big logistics company, and the office concept should represent the core of the company, which is logistics, so it has this kind of industrial chic to it. We also found this quite successful, because it matched the colors and integrated some of the elements. With some refinement, this would be a good base for discussion with the client. The third one is an exploration of experiences. Another medium we use quite often is user journeys, whose purpose is to show or explain an experience through a building. Here, details like the rooms don't have to be super accurate, because it's more about the person and the emotional experience, and here these tools are also of great help, because they master very specific styles, like this kind of colorful urban vibe that maybe resonates with a certain type of project. I am personally not able to illustrate at this level, so it's super good to have these tools at hand. A little side note, though: it's again not possible to just type in "I want to have a user journey", because that again requires abstraction, so we have to create every image individually, put them back together, and create the story ourselves. So what's next for us?
We are talking about a disruptive and fast-paced technology; there is a lot of public interest and big investments are being made, so there are constantly new tools, and the tools develop super quickly. This is only a snapshot right now, and for sure it will change a lot. But what we can already say is that we see potential for the exploratory phase, especially to speed up processes we are already doing, including the creative ones I showed. This also helps to increase the number of variants or possibilities we use to visualize or explain a certain kind of information: where a sketch maybe used to take one or two days, we can now create the same kind of sketch in much less time, which opens up possibilities to explore different directions.

And the last thing is that we have only just begun to tap the potential, and there is a plethora of tools; I read there are now also tools for Rhino where you can go from massing model to render, and there are generative floor plan tools. In the end we think it's a lot of learning by doing; only by trying these tools out do we learn how they can work, maybe with the end goal of discovering a more hybrid workflow, so that we can use these tools for what they're good at, creating, and have more time for the things we are still better at, like the abstraction part. Thank you. Thank you, Fabian, for this interesting exploration of how computational tools change the way we work. Staying on the topic of computational tools and how they change our approach to design: Markus will show us what it means to be an architect in the age when the Metaverse is being born, now that we are exploring digital spaces more and more. We have for a long time been designing our buildings digital-first, or digitally as well, and now there is this

possibility to really explore this digital realm and to experience it in a way that hasn't been accessible before. So I welcome Markus Jacobi, the head of the Design Strategy, or Programming, team here at HENN. Thank you; take us into the Metaverse! I actually wanted to say something before we start the presentation: whatever we show you today is just a snapshot of some exploration of these tools, so don't take everything too seriously. It's not some kind of final message or recommendation we want to transmit. It's a peek over our shoulders to see what we are doing. A lot of it might not lead to anything, but we thought it was interesting regardless and that we should share some of our experiences. With that in mind, this is a presentation you should read in that spirit: there can be a lot of criticism of it, and not everything we show is the way we would do things in the future, similar to what Fabian showed before. Keep that in mind while you watch, and then we take it from there. When you think about the Metaverse, or when we talk about the Metaverse, I think a lot of people think of these types of elements: gaming environments, cryptocurrencies, bitcoins, avatars, Bored Apes, whatever you want to call it. Even though that stuff is

really interesting, it is not what we are focused on currently. We are actually interested in figuring out what the potential for work environments is. Hybrid work: we are asked quite a bit what the future of hybrid work is. How do we collaborate in the future with all these new tools available? So, yes, it's the same technology, and it has a lot of interactions with gaming, crypto, and Web3, but we are trying to focus mostly on the hybrid work component of it.

So the question we need to ask is: what is our role, as architects of the physical world, if we think about the Metaverse? When we build in physical space, we try to create spatial experiences, experiences that are based on distinctions and learned activities. We try to create spaces that foster a certain behavior in the people who use them. You don't walk into a library and start playing music loudly, because there is an intrinsic coding of the architecture that fosters certain behavior. We try to create spaces that foster communication. We try to create spaces

that foster interactions, where people meet each other, run into each other, exchange ideas. This component of communication is so important to us because our work processes have become so complex. Very few of you could start a work process and finish it entirely by yourself; normally you need a lot of different people to help you out, to interact with. That communication is so important because it helps you ease the complexity of those processes and make them manageable. That is our philosophy as an office of what architecture needs to fulfill, very often, particularly because we build a lot of workplace environments: offices, production facilities, laboratories, and so on.

Normally, in physical space, we do that by distinguishing elements from each other. You have spaces for concentrated work and for communication, private spaces and public spaces, indoor and outdoor. Without the one, the other doesn't exist, right? The whole definition of architectural coding is by distinction, and those distinctions are worth nothing in the virtual world. There's no indoor or outdoor; you don't get wet if you stand outdoors, and it doesn't get cold, so all these elements somehow don't exist in the digital world. The question is: what comes in their place? What is it that we, as architects, need to focus on when we start to design spatial experiences in the virtual world? I don't really have any answers, but this is how we set out to explore them, and I think it's a rather long journey we have in front of us. Let me take you through how we started and where we got with it. So we started by putting out some hypotheses

or theses, and saying: okay, we assume that most meetings, formal and informal, will happen virtually at some point in the future. Right or wrong, it doesn't really matter; it's a hypothesis. Office spaces will be redesigned to support virtual collaboration; the role of the architect here is mostly to redesign physical spaces that allow for virtual collaboration. Another one could be that most companies will also have virtual offices. That might be true, maybe not, but what would it imply for the virtual spaces and experiences we'd need to create?

Virtual HQs will be accessible to the public. I think this is a very interesting point because it has something to do with urban planning: a lot of urban planners have the idea that the ground floor spaces of offices should generally be public, because they are a contribution to the public space and the city realm. Companies will buy and sell digital products. This is interesting because you step out of the role of a designer or facilitator. This is about transactions, and the web, as we know,

technically didn't allow for transactions; there was no record of a digital transaction before blockchain technology. This is interesting because it will change the way we think about what can be done in offices, not just as a virtual space to talk to each other. And then, last but not least, and probably one of the most interesting things to us right now: the physical and virtual worlds could, or will, merge, so we might be able to put one on top of the other. Just to summarize: virtual HQs accessible to the public, the hybrid between virtual and physical, and the transactional component are

the elements we are interested in exploring, because they combine our role as a designer and our role as a facilitator of interactions and communication, as I explained a little earlier. So where did we start? We do design buildings. This is the latest Zalando building, which currently looks like the construction site image here; it's almost done structurally, and I think the facade is done too. We have renderings, we have a construction site, and we now have digital twins, a pretty detailed model of the building, so we thought of starting with that. And because you didn't close your eyes when I set up, you already know what I'm going to show you. Anyway, I wanted to show you

the building in the Metaverse, which we walked around a little. It's interesting because it's really easy, really quick to do. I mean, that's a slight exaggeration; it's not that quick, because it requires quite a bit of modeling work to make a model work like this. But basically, we took the digital twin and just tried to replicate what we already had. The interesting part here is not so much how it looks, because you all know renderings that look much better than this, but that this model already includes collaboration tools. If we go, for example, to the third floor - and that was a little fast, because obviously the concept of an elevator in the Metaverse is a different one - you can basically let your clients walk through buildings years before they open. You can try out different furniture pieces in meeting rooms, you can try different modes of collaboration, you can open it to people... Why are you standing up?

Sit down! See? It's all not so easy. Well, if you really want to stand, then just stand. So, you know, you can... Great, I have to turn around, I guess... because I want to show you something else. If Natasha were here, which she

currently isn't, I could talk to her a little bit. So you could actually use these meeting rooms for real work; you could test them once you get a little used to it. You could do events, you could do change management and transformation events in buildings long before people can actually use them. And we have another very distinct interest in this, because we run another research project, called "shaping space", about communication patterns. You see here on the ceiling, this little white round disk on the beams is a locator. We did tests where we wore little beacons that tracked our positions, and we tried to recognize informal communication patterns in our office. People can recognize communication patterns in meeting rooms, but it's really hard to figure out how informal communication works in an office.

How do people interact while they wait for their coffee, or when they walk along the hallways, and so on? This is a project where we tried to visualize that. We compared the tracking data with video footage for accuracy, and then created these visualizations of communication patterns. I don't really want to go into it, because I'm already overdrawing my time, but the point is that we see a huge potential in combining physical and digital elements, layering one on top of the other. We would know that people are sitting in that meeting room in the back, and if you had a digital twin of this office, those three people in the meeting room could appear as avatars in the virtual office. So if someone logs in from home and explores the digital twin, or virtual office, in the Metaverse, they would see that some people are sitting in this meeting room, even though those people are not in the virtual world: they're physically sitting in the office. Through technology, you could facilitate something like informal communication, running into each other without calling someone on Teams. There is literally nothing like informal, serendipitous, or accidental communication in the digital tools we currently have, right? You don't accidentally call someone; it would be the most awkward thing. So we see some potential in overlaying the two, and that's why we started with replicating a digital twin, even though it's kind of boring, because you would ask: why replicate the exact version of a physical building when you don't have gravity and weather and all the things that are so tedious to us when planning buildings? And that's the last thing I want to take you to: this is the foyer for the Zalando...
...building that's already built and open. We didn't get to explore this yet, so I'm showing you a mixture of sketches we did and some AI images Fabian made for this purpose.
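The beacon-tracking idea above can be sketched in a few lines: detect moments when two people stand close together as a crude proxy for an informal encounter. Everything here is a hypothetical illustration; the names, coordinates, and the 1.5-metre threshold are invented, not values from the actual study.

```python
from math import hypot

# Hypothetical beacon log: (timestamp in seconds, person id, x, y) in metres.
# Invented data for illustration; real logs came from ceiling-mounted locators.
log = [
    (0, "anna", 1.0, 1.0), (0, "ben", 9.0, 9.0),
    (10, "anna", 4.0, 4.0), (10, "ben", 4.5, 4.2),
    (20, "anna", 4.1, 4.0), (20, "ben", 4.4, 4.1),
    (30, "anna", 1.0, 1.0), (30, "ben", 9.0, 9.0),
]

def encounters(log, radius=1.5):
    """Return (time, person, person) triples where two people stood within
    `radius` metres of each other: a crude proxy for an informal encounter."""
    by_time = {}
    for t, person, x, y in log:
        by_time.setdefault(t, []).append((person, x, y))
    hits = []
    for t, people in sorted(by_time.items()):
        for i in range(len(people)):
            for j in range(i + 1, len(people)):
                (p1, x1, y1), (p2, x2, y2) = people[i], people[j]
                if hypot(x1 - x2, y1 - y2) <= radius:
                    hits.append((t, p1, p2))
    return hits

print(encounters(log))
```

Counting and timing such encounters is one simple way to turn raw position logs into the communication-pattern visualizations described above.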

You start with a space that exists, because for a lot of companies it's really important to have corporate identity, to have recognition, so that when people come in, even in a virtual building, they know it has something to do with their company; they have elements to latch onto. They're not lost in some game-like world that has nothing to do with what they know, particularly if it's about hybrid work and collaboration. And then you remove the roof, you don't need columns, you start exploring buildings in totally different ways. Now the fun starts, from a design perspective, because you need to find the thin line between creating something that people recognize and use as a foyer, an entrance, and a world that has literally no constraints anymore. And this is a little bit of an exploration where we're

trying, and probably in the next phase will try, to explore the freedom you have in the virtual world, and at the same time to make something that is valuable in the sense that people would use it for hybrid work and virtual collaboration. I don't know if headquarters will look like that in the future, but I'm pretty sure there will be a component that reminds people of their actual headquarters. Is that what I look like? Okay. And there's another potential that I find really interesting: we always talk about these buildings, but you're able to just have portals everywhere that beam you somewhere else. So you can literally build a lobby and then have different portals and different worlds that you can explore, and while this - I hope this works now, let's see... And then you end up in a totally different world, and I think in terms of creating spaces that are engaging and new, this also bears some potential for us, because you can build something that looks like your office lobby and then has portals and rooms into different worlds. Now we ended up on Mars somehow, right? So you can explore Mars for your collaboration needs. This is totally random, don't get me wrong, no one has to go to Mars, but you know what I mean. The only thing I wanted to show is that you can start exploring all these different elements, and I guess this is what we will continue doing and...

We realize that there's a lot of interest, particularly from the client community, because everyone is asking themselves what the hybrid world will look like, and how you can solve the problem of not having informal communication venues when you work from home, where everything is planned and structured. We hope we can contribute to that a little bit. With that, thank you so much, and I will hand over to Giovanni again. Thank you, Markus, for this really interesting insight into, and this little journey through, the virtual worlds that are available to us. Keep your questions; we'll go around later. I think it's interesting that there are very different ways in which technology can change the way we work and the type of work that we, as architects, do. Chiara is going to show a project that is very dear to my heart, because it has a lot to do with sustainability and the responsibility that we, as architects, have in creating a sustainable built environment and helping, or at least not damaging, the environment around us. It also has to do with making the invisible visible, particularly something that is very hard to grasp: the embodied carbon in our buildings. So thank you, Chiara. Thanks, Giovanni.

Yes, as Giovanni already introduced, we are trying to visualize the invisible. Two years ago we started developing this tool, which was born from the question: how heavy are our buildings, what is their impact, and how can we change it? For the people who are maybe not from the built environment, let me introduce a few statistics: the built environment is responsible for more than 30% of global energy consumption and creates almost 40% of global emissions, so already here we have a big impact and, obviously, big potential to reduce those numbers. At the same time, in Germany, about 90% of raw material extraction goes into the built environment, and we create more than half of the waste. So here, too, there is a lot of potential. With this in mind, we as architects and project collaborators have a huge responsibility to reduce these numbers, and since we're in the realm of digitization, automation, AI, and the Metaverse, we asked ourselves how technology can help us achieve this goal of reducing the carbon emissions of our buildings. From that, we developed our tool, the Carbonitor - which is a working title, but I think it displays very well what we want to show. We want to visualize the global warming potential

or CO₂ equivalent, in our buildings. As you can imagine from this building already, the colors play a role in this. So, building on top of BIM, or Building Information Modeling - which most of you will probably know or have heard of - where you put all of the information of a project into the software to reduce mistakes in the end and to see how the project would work (and potentially put it into the Metaverse to walk around), we thought our tool should integrate seamlessly into this process that we are working in at HENN. We've called it Carbon Integrated Modeling, which is only in parentheses because it isn't anything new; it's just kind of an upgrade to what BIM already does. And this is especially important in the very early phases, because that's where the potential for change in the project is highest. This is where the program is defined - which is what we do in Design Strategy, or Programming - so if we challenge the brief, that's even better, but maybe we can also adapt the massing, change the materiality, or the structural concept of our project. And that's what the Carbonitor is trying to visualize. So basically, the Carbonitor has three goals:

First, on a very personal level, we want to create awareness. How much carbon is stored in our buildings? What consequences do our design decisions and material choices have? But we also want to visualize the planning levers that we have. Not only the absolute numbers, but also: what can we do about it? How can we change it? How can we do better? And in total - and that's the bull's eye, obviously - we want to contribute to reducing the CO₂ emissions of the construction industry. So, let's dive into how the Carbonitor works.

And for those of you who are a bit nerdy, this is basically behind the scenes. It's all based on the general BIM model that we have, and we feed it the input of an external database - that's for example the ÖKOBAUDAT, but it can also be any other database, for example 2050 Materials, a new kind of database that has, essentially, the values that we need. For now we are translating this database to fit our process, so at HENN we tweak it a bit to work with what the Carbonitor does, which is mainly two things. It asks the project questions: What are you? What are your elements?

Which is what we generally already do in our process, so it's nothing we need to do additionally. And it derives limits from this database, for the building and for the building components respectively. So the output that you get from the Carbonitor is, on one level, the macro view - for example, you can compare office buildings or science buildings - and on the micro view, you can compare the components. So if you open the Carbonitor here at HENN, this is what you will see.
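The factor-times-quantity idea behind this could be sketched roughly as follows. To be clear, the material names, GWP factors, and the `swap_material` what-if helper are illustrative assumptions for this sketch, not real ÖKOBAUDAT data or the Carbonitor's actual code:

```python
# Illustrative sketch: each BIM element is matched against a carbon
# database, and its embodied carbon is a database factor times its
# quantity. All names and numbers are assumptions, not ÖKOBAUDAT values.

GWP_DB = {             # kg CO2e per m³, illustrative only
    "concrete": 250.0,
    "timber": -650.0,  # biogenic storage can make timber net-negative
    "steel": 12000.0,
}

def element_gwp(material: str, volume_m3: float) -> float:
    """Embodied carbon of a single element: database factor times volume."""
    return GWP_DB[material] * volume_m3

def building_gwp(elements) -> float:
    """Total embodied carbon over all (material, volume) pairs."""
    return sum(element_gwp(mat, vol) for mat, vol in elements)

def swap_material(elements, old: str, new: str):
    """What-if: copy of the element list with one material exchanged,
    mirroring the concrete-to-wood swap shown in the demo."""
    return [(new if mat == old else mat, vol) for mat, vol in elements]
```

Rerunning `building_gwp` on `swap_material(elements, "concrete", "timber")` is, in spirit, the recalculation the demo shows live; the real tool of course also has to deal with element sizing, as comes up in the Q&A.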

This is the dashboard, and essentially it gives you all of the most important information. Three things: the first is the macro view, as I just explained - the building typology, just to be sure to compare apples with apples, because if you compare an office building with a science building, the parameters are completely different and it wouldn't make much sense. Then, obviously, you also see the embodied carbon of the building, and as you can see here already, the horizontal structure, the facade and the structural walls are especially carbon-intensive. And then there is the micro view, showing the individual elements with their gradient. The individual elements have the same gradient, but it's important to divide them, because it would be kind of unfair to compare a non-structural wall to a structural wall. As you can see here, a non-structural wall should potentially have a much lower CO₂ value than a structural wall, simply because of the things it needs to be able to perform. And now we can dive into the tool.
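The red-to-white shading against per-category limits could work roughly like the helper below. The tool's actual color mapping isn't described in the talk, so the linear ramp and the `carbon_color` name are assumptions:

```python
def carbon_color(value: float, limit: float) -> tuple:
    """Map an element's CO2e value against its category limit to an RGB
    shade: white at or below zero, pure red at twice the limit or more.
    The linear ramp is an assumption for illustration."""
    ratio = max(0.0, min(value / limit, 2.0)) / 2.0  # clamp to [0, 1]
    fade = int(255 * (1.0 - ratio))
    return (255, fade, fade)
```

Because structural and non-structural walls would get different limits, the same CO₂e value can come out nearly white for one category and deep red for the other, which is exactly why the categories are kept separate.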

So you would enter this dashboard and click into, for example, the typology. You can change it here from office to science, for example; it will change the benchmarks and show where your project performs within them. For a refurbishment project, obviously, the CO₂ target would be much lower than for an office project. Then here it gives you a live view of the embodied carbon numbers in your project, and then you can go into your tool, as you would under normal circumstances in the 3D view of your planning process, and you can cut it open a bit. Sorry... yeah, could be a bit faster... and you can cut it open and you can see, okay, the floors here for example are pretty red,

so that's where there's the most potential for change - the value - and here you can see this is concrete now. So what if, just for fun for this presentation, we changed it to wood? What would that mean for our Carbonitor? Then it will run again and load for a few seconds, hopefully. This is the running part - basically the calculation that it does behind the scenes - and then it updates the parameters and the element becomes white, showing that there's basically not much more you can do. This is a bit simplified, I want to say, but essentially that's how the Carbonitor works, and it will also update every other number that you see here. So yeah, that's the tool in a very quick version, and obviously

it's a lot of work to really go into the nitty-gritty and see what exactly you can change in the project. Now, I've talked a lot about "we" and "us" - this is basically the HENN team that's been working on this project for about two years. To increase our impact, we want to open-source this project and also seek collaborators within the built environment; we're already in collaboration with a few. And obviously we're also having thoughts about further developments: for example, we want to roll out the Carbonitor for Rhino as well, which is a software that we use in the very early stages of competitions, where maybe you can tweak the massing even more, without having the concrete construction as you saw it now in the project, and that will give you a very early global warming potential for your building. We want to create the Carbonitor 2.0 to detach it a bit from the Revit interface and create a web-based landing page that includes an LCA - this is something that the Carbonitor isn't doing at the moment; it's not creating a report. LCA is a Life Cycle Assessment, and the tool is basically just showing, or enabling, the planner to improve the building. We want to integrate it into

the iTwo software that we're using for our costs, to compare costs and CO₂ - obviously also with the regulations and the EU taxonomy, that's more and more important. And connected to what Fabi told you earlier, we are looking into creating the Carbonitor 3.0, which will be able to interact via natural language - so basically an artificial intelligence that will allow you to actually chat with the model. You can ask it: "Okay, what's the worst element that we built in this project? Can you please show it to me, and what's the best material that we can exchange it for? Or what's the best way to change this design?" So if any of this interests you and if you want to ask questions or connect, please get in touch. I'd be happy to connect you to the little community that I've just shown you and... yeah. Thank you.

Maybe if I can start with a conversation starter: Very often, when you look in the media about AI and the Metaverse, AI is going to destroy the world. That there's been...

We've been discussing this recently, that there are a lot of articles from intellectuals asking for a pause in the development of AI. There is a lot of talk saying that AI can really upend society. Some have called AI a bigger threat to humanity than global warming. And I think in all three presentations there was a very pragmatic and down-to-earth way of working with the tools that were shown. And my question to start is: You're showing how these things can change the way our work really is, no? Our activities, but also the scope of our work.

And my question is also: What is the new responsibility that comes with these tools for us as architects? How do we interact with our clients? How do we participate in this development? We're of course mostly users - I mean, Chiara was showing some small actual development that we're doing in terms of tools, but we're mostly users of existing technology, also in the Carbonitor's case; we leverage a lot of existing infrastructure. How does this open up our role and put us in front of new responsibilities and new challenges? Whoever.

I think Fabian ended his talk with something that's really important. We started very particularly with a close view on a very small scope of what AI can do, because a lot is about figuring out how it actually works, what it is, and what it can do for you. And we got tempted quite a bit to jump into all kinds of different tools and think about how it can change processes further down the road, but we looked very particularly only at these word and image ideas - the exploration and consolidation of information - which is, at its core, the part of our work in our department. For that very reason: you get distracted. There are so many possibilities and, in fact, so little understanding of the potential of AI and what it will be down the road, that our approach was to start small, start to understand what it is, start to figure out how it works. And I'm pretty sure that the next year or two will significantly change how we think about it. And I think our responsibility is, you know, the same. It's something similar

to what we have with the environment, right? You can continue doing things how you always would, to your own benefit, to the bottom line of your P&L statement, or you try to look from a broader perspective and figure out what is also the good thing to do, not only the thing that you might be able to do with it. But to be honest, we are not there yet. We didn't get anywhere close to these edges where you think, all of a sudden, you have a tool at hand that blows everything out of the water in front of you.

Once we get there, I'm sure we can share that, but we didn't get anywhere close in the time we got to test those tools. I think in the talk, or in our research process, we also threw up the question: Can it accelerate processes that we have, or can it create a new process with the information that is available? And so far I would also agree - we haven't found something that is truly beyond what we, as humans, could do. Until now, it was mainly speeding up the things that we already used, and in many cases we needed to be very specific about the inputs that we wanted. For example, with the sketches, we needed to input an already finished sketch to get something that resembles this sketch. So I would agree, but at the same time, I also think it's "learning by doing" - only by using it can we figure out how and where it actually is useful.

My microphone also doesn't work. I would say in terms of sustainability and our carbon tool, it's very clearly two things: honesty and accountability. We need to be honest with ourselves, our clients, and maybe even the public about what the carbon values in our projects are, and we need to hold ourselves accountable and change for the better.

Thank you. I think there is also another microphone going around - or maybe not, but this one can go around - so I'm happy to open the conversation... Yeah, there is somebody all the way at the back. I'm coming!

Hey, this is, I guess, kind of a Fabian question. I really like how simply you put it earlier that Design Strategy works with words and images. I'm interested in what you think about, a little bit, the medium to long term. How does the audience for words and images change

when there's this kind of saturation with AI-generated text and graphic content? You know, at this point, for example, you can impress people with something that's created by Midjourney or something like that. What happens when that's no longer impressive? What happens when people are kind of jaded by that? How does that change how you use the tools, and how do you stay ahead of something like that?

That's a big question. I don't know if we can find the answer to this in this space today but... Yeah... What is interesting for me is actually this hybrid workflow: how can we build on something that AI made? Maybe one of the sketches I've shown could be just a fragment of a Photoshop collage that one of us is still putting together or drawing on, but it could still maybe create some kind of new style or new elements. But in a broader sense, I guess only time can show where it's going to go and... Yeah, I mean also

I can already feel this now a little bit after... How long has Midjourney been out? Half a year? You kind of get to know... there's a certain style of images where I also feel like "Okay, I've seen this already." So the question now is what comes next. And I guess it's also very linked to the tool itself. I think each tool produces somewhat different results, because they all have a different model in the background. The basis of AI is not that it's limited to a certain element, right? And I don't think words and images will go out of fashion, because then I don't know what we would be left with.

No words anymore and... We talked in another context about the jobs that arise, and I think the potential is obvious. The potential of those tools is to create any image you can think of right now, and anything you cannot think of right now. So it all depends on how you prompt it, how you ask it, how you use it, in order to create the output that might bring you to the next level. I'm pretty sure that if you use a certain type of image over and over again, it gets boring. It's the same as if you're reusing your ideas as an architecture office or... At some point, you know, it's not what...

you should do anymore. The same thing will happen there, but it's a generative tool, right? So if you put in other and new elements, then I think the output will be different, and I think this is going to be a very interesting element - or development - to follow.

I think there is also obviously a hype-cycle moment. We are at a high point after a long AI winter, but yeah... we'll see where it lands.

My name is Joanna and I have a question about the Carbonitor. This Carbonitor needs a lot of data. Is it realistic that you can use it for every project

to really improve things, or is it actually a lot more workload on top, which is not even possible for the projects that normal architecture offices are doing?

I hope this... does it work? Can you hear me? Yeah. Okay, cool. Sorry, yes, it needs a lot of data, but we've optimized the Carbonitor to use the data that is already there behind the scenes, so to speak, and everything else works similar to how we would usually work and build our model in the process of a project. We would assign it materials, we would give it depths and heights and everything else. The Carbonitor just puts another layer on top of it and asks: "Okay, what are you, and how good or bad are you?"

Maybe I can add one word on that, because I'm also involved in the development of the tool.

Essentially, the tool is designed to require as little effort as possible, so it piggybacks on simple naming conventions - you need to name things anyway in Revit, so you don't have to go through extra parameters - and it's also done in a way that it doesn't expose the end user, the average architect, to the complex databases and EPDs directly, having to make sense of all of that. We have created a filter that sits in the middle, where a few people can manage these infrastructures for the whole office, really reducing the amount of effort and expertise that is needed.
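The "piggyback on naming conventions" idea might look like the sketch below. The `CATEGORY_material_id` pattern is a made-up assumption for illustration, not HENN's actual Revit convention:

```python
import re

# Hypothetical naming convention: "CATEGORY_material_id",
# e.g. "WALL_concrete_042". Parsing names like this avoids having to
# maintain extra Revit parameters on every element.
NAME_PATTERN = re.compile(r"(?P<category>[A-Z]+)_(?P<material>[a-z]+)_(?P<id>\d+)")

def parse_element_name(name: str) -> dict:
    """Extract category/material metadata from a conventionally named element."""
    match = NAME_PATTERN.fullmatch(name)
    if match is None:
        raise ValueError(f"name does not follow the convention: {name!r}")
    return match.groupdict()
```

A middle layer like this is what would let a few database maintainers map the parsed material keys onto EPD entries once, for the whole office.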

And we are essentially very close to being able to run this automatically, on every project, every time. So that's kind of the goal. As part of your splash screen, we already have in the design system - when you open a Rhino file... sorry, a Revit file - your model well-being analytics, no? If your model is too big, if there are a lot of conflicts, if Revit is complaining about the way you've modeled things. And this is going to be something like that, and as standard as that. And we're also working towards

open-sourcing this, so that it's not just our thing but it's out there for the community, and for other people smarter than us to check it and tell us whatever might be wrong with it and how to make it better. There is also a microphone at the back. Elena is holding it up. Everyone is waiting for a drink... already.

There was one, I think. Just a very brief question about the Carbonitor: would it be thinkable - or maybe it's already in your minds - to widen it to other parts of the ecological footprint? Temperature in cities and wind flow, or biodiversity, water, precipitation, things like that. Do you already have that in mind?

Yeah, definitely. I think we're going to build up a whole ecosystem that will take into consideration all of the elements

that you just stated. But for now... probably Giovanni is better placed to answer that question, and more from a strategic view. But we're already implementing some of these calculations, they're just not under the umbrella of the Carbonitor which we just presented. The Carbonitor is about embodied carbon, and Giovanni's Sustainability team has many other tools that may or may not be called Carbonitor in the future.

I don't think there is one app to rule them all, and there are different levels of specialization and different levels of urgency to some of those questions.

And I think the Carbonitor is also not the only app that serves the life cycle of a project; it serves a certain phase. In the very early phases, we are looking at how to do even simpler, but robust, assessments, and then in later phases we have our QS process that can support with all of this. Similarly, we already do a lot of environmental analysis to understand operational energy, but also to understand heat island effects, micro-climate and so on. And I don't think that everything necessarily has to come under one app, and not everything necessarily has to be one click just yet. Hopefully soon, but with this one we thought...

there is a real urgency with this thing, and when we look at the statistics that Chiara was showing, the biggest leverage that we have as architects is especially the material that our buildings are made of. Operational energy, over the lifetime of the building, there is a general trend towards decarbonizing the grid, so it's a problem that operates over a longer lifespan and concerns the existing building stock more than new buildings. But as architects we have this great impact that happens in one or two years, during the construction phase of a building, and we really need to tackle that. Marianna, you had a question?

It's also about the Carbonitor project, since we are talking about this now. I was just thinking, because Chiara also showed earlier the example where, for example, you have your Revit model

and obviously you already designed all the structure in a way that it works with a certain material. And then it's red. And then you want to tweak it to make it white, but then of course you have to adapt the sizing of the elements, or maybe even the massing somehow changes - either of individual elements or of parts of the building. What is the thinking behind this? Is it, at the moment, used more as a tool where you already have the design and you kind of raise awareness about what it looks like, but then you continue with the status quo for the later phases? No.

And then you're like: "Oops, too bad, we made it like that, but now we can't recalculate all the sizes of all the slabs to change the materials for all of them." So it's more about the process?

Obviously, the two things are... I think now you can hear me... the two things are connected and... [microphone screech] Obviously, the two things are connected, and the first goal that I showed is to create awareness for the planner of what the consequences of the conventional design of the project are. Then, obviously, it's still up to the designer, or the creator, to adapt the materialities, the depths and the design to then arrive at a new CO₂-equivalent outcome. Probably something that Fabi would also agree with: in the end, the creative part will never be taken away by a tool or by AI; it rather comes from the people who then steer it in a direction that works for the project, or for any creative process.

One last question before we end the evening.

I would end with a wish. My name is Wiebke Ahues, and my wish would be that we, as architects, try to add some beauty to it. Because I don't know how you feel about this, but when I see those worlds in the Metaverse,

I always have the feeling that, you know, there are all these big plants standing around, the stiff people moving in a strange way, and I feel that we should still keep the aesthetics and rules of design in mind when we interact with those tools. Maybe this is our role as well, because when you look at the art world - for example, what Johann König did here in his gallery - you can see that they are already trying to add more beauty, more, I don't know... haptics to surfaces, and I think this should be something we drive as well in this discussion. Can we pass them... can you?

I can only agree, but it would be so interesting what our contribution to beauty, to haptics, to materials, to experiences can be in this kind of abstract digital world. I find that is one of the biggest questions, because if you look at artists that work with AI, there are a lot of beautiful things out there, but a lot of that is not spatial. If you go into a space like the König Gallery, it does something to you, right? There's a feeling of space and you immediately feel that. It feels different than when you walk into your kitchen pantry, you know, and the equivalent to that in a digital space... I don't know what that is. I have literally no idea

at this point, and I think this is what we need to figure out - or own, or maybe we don't need to - but I think this is the interesting question: What is our contribution to the spatial experience in a digital world, if you don't have the very basic elements like smell and feel and haptics and cold and warm that drive our architectural understanding so far?

I would just add that beauty is also obviously in the eye of the beholder, and the Metaverse can open a banquet of beauties. Maybe new kinds of groups or communities that are interested in one type of aesthetic can then define themselves in the Metaverse, so... Yeah, it kind of redefines beauty even, making it more fluid, open, diverse. Okay, so I would like to thank Fabian, Chiara and Markus for the presentations and the talk.

And all of you for your participation and for the interesting conversation, which doesn't stop here. We can continue - there are a few drinks at the back, and you're welcome to stay, mingle, ask other questions and have more conversations this evening. Thank you!

2023-08-27 17:33
