Automated Gambling: Making Money with a Trading Bot - Sparx S2/E4, Martin Luckow

Would you entrust your money to a piece of experimental software? My name is Martin Luckow, I'm a Transformation Architect at Trivadis, and today we're talking about an experimental project involving leveraged instruments, financial mathematics, algorithmic trading – in short, it's about automated gambling. A second aspect today will be: how do you build an experimental system where you don't yet know how to actually address the problem, where you only know that a lot of changes will be made to the piece of software, that components will be added, removed or replaced. Of course, AI will also play a small role in this, so we will show step by step where you can perhaps achieve more with machine learning, and the exciting thing at the end is of course the result – what comes out of the entire process. First, I need to talk a little bit about the basics.

Today, we'll take the DAX as an example. This is the largest German index, the leading index on the stock exchange, which represents the performance of the 30 largest German companies – the largest in the sense that they have the highest market value. In fact, the DAX represents around 80 percent of the stock market value of listed companies in Germany, so really the bulk of the market. If you look at stock market charts and discuss them with other people, at some point the statement always comes up, fervently: "It's all just a random walk" and the future cannot be predicted.

And if you then ask what a "random walk" is, they are already out of answers. The fact is, if you look at, for example, this DAX trend that you're looking at right now: it may have signs of a random walk, but there's also a clear trend here. And that is also the characteristic of indices, which are basically supported by companies that have a mandate from their shareholders – namely to grow. And that's why most indices around the world are trending upward, interrupted by significant slumps under certain circumstances, but still measurable. The question is whether you can make money with such an index.

And of course, then basically "buy cheap" and "sell expensive" applies, but when and on what time scale you should do that, whether I buy today and sell in ten years or whether I buy now and sell two milliseconds later, that's a big question. And algorithmic trading is exactly about finding these time scales, developing good algorithms and getting as much profit out of it as possible. That is our topic today.

This is about money, and where there's money, there's also a lot of media, and if you look around a bit, there are heaps of opinions about how the DAX will develop the next day, for example. That depends partly on fundamental data – on economic data, on political decisions – or on more or less unpredictable people like Trump or Musk. Many investors are in fact guided by heuristics, or apparent methods that make them believe that there is a predictable future.

You can see an example right here, a so-called Bollinger band. This Bollinger band is created by drawing standard deviations around a mean price, based on the moving average of the last twenty days, here using the DAX as an example. One tends to continue this band to the right, i.e. to project it into the future, and to align one's actions with it. That is a very important aspect: you have an audience here that uses common methods and applies them in the hope that they will still be valid tomorrow, and that leads to self-fulfilling prophecies.
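In code, the band described here takes only a few lines. This is a minimal sketch: the 20-day window comes from the talk, while the band width of two standard deviations and the column handling are assumed defaults for illustration.

```python
# Minimal Bollinger band sketch: a 20-day moving average with bands at
# +/- two standard deviations. The width of 2 is an assumed default.
import pandas as pd

def bollinger_bands(close: pd.Series, window: int = 20, width: float = 2.0) -> pd.DataFrame:
    mid = close.rolling(window).mean()      # moving average over the last `window` days
    std = close.rolling(window).std()       # standard deviation over the same window
    return pd.DataFrame({"middle": mid,
                         "upper": mid + width * std,
                         "lower": mid - width * std})

# usage: bands = bollinger_bands(dax_close); bands.plot()
```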

Because of these self-fulfilling prophecies, the performance of an index is not necessarily a pure random walk, but something where numerous parties haggle over a price and do so with tools that everybody believes in, or sometimes doesn't believe in, which is why there can be big effects when an indicator says something different from what you actually expected. There are also heaps of forecasts in the media about how the DAX will develop in the next six months or so, but that is pure augury if you take a closer look. You can see here on the chart the claim that the 20-day moving average acts as a springboard in the uptrend. That sounds great, but the subordinate clause is: there is a hurdle. You always find this kind of thing, that is, statements that forecast a golden future under certain conditions, but you don't know the conditions, and most of the articles are, I don't want to say bullshit, but not necessarily valuable. Nevertheless, the course of an index price is driven by economic data, by the success of the companies, and since this is an averaged index, one can say that it basically represents the economy of a country.

And thus, we find ourselves in an area in which one can definitely make predictions. A price itself is composed of oscillations – this is a general observation – and these oscillations can be long-periodic, lasting for years – there are also cyclical movements in such a price trend – while other oscillations, perhaps in the daily range, are superimposed on them, and in the intra-day range there are oscillations that go down to the seconds range, i.e. you have a wild pattern of oscillations superimposed on top of one another. And in order to buy cheaply and sell expensively, as I said, you can try to find the – let's say – optimal oscillation, which of course is not stable; it will change again in the course of time.

But you can try to find a trading level at which you want to invest and sell again. There are many instruments for this. The catch is that these instruments are often deterministic, for example there is a method like these Bollinger bands we have just seen, or one where you take different moving averages and interpret their intersections as buy signals. For example, you have a slow and a fast average, and if the fast one crosses the slow one from the bottom to the top, then that is a buy signal. These things are wonderfully deterministic in terms of methodology and wonderfully worthless, simply for the reason that the price trend is not symmetrical in the price axis, i.e. when a price climbs to a maximum, it is sometimes a very long process

and then profit-taking occurs and then the thing plummets again briefly. After that, it does not rise again suddenly, but slowly. You cannot mirror the course and say the same laws apply there. There are different laws involved. What is quite certain is that there is no known distribution function that could somehow be used to make a prediction. Mandelbrot, the inventor of the Mandelbrot set, found out at some point in his work that there is probably a power law distribution, but it is not linear and therefore it is almost worthless in practice because you can't really do anything with it.
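To make the crossover rule mentioned above concrete, here is a minimal sketch; the window lengths are chosen purely for illustration, and the talk's point stands that the rule is easy to code and of doubtful value.

```python
# Minimal sketch of the moving-average crossover rule: the fast average
# crossing the slow one from below is read as a buy signal. The window
# lengths (10/50) are illustrative assumptions.
import pandas as pd

def crossover_buy_signals(close: pd.Series, fast: int = 10, slow: int = 50) -> pd.Series:
    ma_fast = close.rolling(fast).mean()
    ma_slow = close.rolling(slow).mean()
    # True exactly on the bar where the fast average moves above the slow one
    return (ma_fast > ma_slow) & (ma_fast.shift(1) <= ma_slow.shift(1))
```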

So we have a lot of imponderables, but if you focus on short-term things in the chart, deterministic processes can certainly be identified, and a lot of software tries to do that these days. If you look at a chart like this, sometimes structures emerge. You see one in front of you right now: these are so-called support and resistance bands. Such a band is created when the market agrees on a higher and a lower price. You can imagine the market like this: there are groups of investors with a long rope between them playing tug of war, and wherever the middle of the rope is, that is the currently negotiated price.

The thing about this situation is that they tug the price back and forth, so it moves, and at some point some investors from one group no longer feel like pulling on their side and switch to the other. That leads to a significant price movement. However, at some point there is a certain unity again, and then structures emerge like the ones we see here right now. For example, a support band in the lower range: somehow people seem to think that the price should not go lower than that.

Therefore, they buy in again and drive the price up to a certain limit, which is then called a resistance level and which cannot really be crossed for the moment. These structures can be found again and again; they appear and have a certain temporal validity, until a new piece of information comes along, reduces all of this to absurdity and dissolves these resistance levels, if you like. In summary: assuming a random walk for a stock market price is not as simple as that, because we have self-fulfilling prophecies, because an index can depend in particular on big headline numbers coming from politics and the economy, and because, of course, prices are driven by investor expectations. And especially in the case of indices, these are ultimately directed upwards – the companies are constantly supposed to generate more.

To speculate with the DAX, you also need an instrument. And the instruments that we are now using here are so-called CFDs. They are called "contracts for difference" and were actually invented to hedge other investments. Insurance typically works in such a way that I pay little money and get a lot of money back in the event of a claim.

It's no different with a CFD: it is a leveraged product where two parties make a bet on the performance and agree on a price to be paid in the future. The whole thing has a so-called leverage, which means, for example, that I invest 1,000 euros in the DAX and with a leverage of 1:10 I actually move 10,000 euros. Of course, my potential profit is then correspondingly higher, but so is the potential loss. This means that if things go badly for me, I run a high risk of suffering a total loss. So these CFDs are not without problems; they are in fact heavily criticised. They should really only be used by professional traders, but are massively used by private investors because you can make quick money.

To use something like this, you need a broker. This broker basically has to give you a loan; this is what creates the possibility of leverage in the first place. After all, I only have 1,000 euros to invest, but I want to move 10,000 euros, so where does the rest of the money come from? It comes from the loan the broker gives me, and he is also involved in the trade. His profit comes from the fees I have to pay for every transaction. One keyword is, for example, the "spread", which means that there is a small difference between the buying and the selling price of a DAX position, and the broker keeps this difference.

This is a sort of commission per transaction. Since he is giving me a loan, he also wants interest. If I hold a position a little longer, for example overnight or even for whole weeks, then there is a so-called "swap", a daily cost that applies whenever I hold a position beyond the daily limit.
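As rough back-of-the-envelope arithmetic, the cost structure just described could look like the following sketch. All numbers (the 1:10 leverage from the example above, a spread of 2 points, a swap of 0.01 percent per night) are illustrative assumptions, not actual broker conditions.

```python
# Purely illustrative CFD arithmetic: leverage, spread and swap.
margin = 1_000.0                         # own money put up
leverage = 10                            # 1:10 -> exposure of 10,000 euros
exposure = margin * leverage

entry, exit_price = 15_000.0, 15_150.0   # assumed DAX buy and sell levels
units = exposure / entry                 # exposure in index units (1 euro per point assumed)

gross = (exit_price - entry) * units
spread_cost = 2 * units                  # assumed spread of 2 points, paid once
swap_cost = exposure * 0.0001 * 5        # assumed 0.01 % per night, held 5 nights
net = gross - spread_cost - swap_cost
print(f"gross {gross:.2f} EUR, net {net:.2f} EUR on {margin:.0f} EUR margin")
```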

There may also be transaction costs or account fees, depending on the broker, but these last-mentioned costs are more or less marginal; they don't play such a big role. That's why CFDs are really very interesting for private investors, because – at least theoretically – you can make a lot of money with little money. The catch is, as I said, if the price does not develop as desired. Then I suddenly lose a lot of money and may be obliged to make additional payments. This is another term that should scare you, because if I have lost enough and my position is thus worth little enough, I am no longer creditworthy for the broker. So he will come to me and say: now you first close this gap, so that I can continue to trade with you.

This can lead to bankruptcies, if you like. There have been price collapses in the past where people have basically lost their entire fortunes within a few seconds. For this reason, in Germany, for example, BaFin – and it is also EU legislation – has ruled that there may no longer be such an obligation to make additional payments.

This means – for private investors, this is important now, only for private investors – that the broker is not allowed to demand additional payments afterwards. But he has to do something to secure his money, and the way it works is that, typically, if I have positions open and they go so far into the negative that I only have half of my account balance left and I'm no longer covered, he simply closes my positions to secure his money. This is also wicked in the sense that I then realise real losses whether I want to or not.
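A minimal sketch of this forced close-out check, assuming the common EU retail rule that positions are closed when equity falls below 50 percent of the required margin; the data layout and the one-euro-per-point convention are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Position:
    units: float    # index units held (1 euro per point assumed)
    entry: float    # entry price

def must_close(balance: float, pos: Position, price: float,
               leverage: int = 10, stop_out: float = 0.5) -> bool:
    """True if the broker would close the position to protect his loan."""
    required_margin = pos.units * pos.entry / leverage
    equity = balance + pos.units * (price - pos.entry)   # balance plus open profit/loss
    return equity < stop_out * required_margin
```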

So, CFDs are exciting, you can do a lot with them, they were originally an insurance instrument for investors, and they are highly popular with private investors. On the homepages of the corresponding brokers you always find – required by law – a warning: 80 percent of private investors lose their money here. They have to do that, kind of like the black stickers on cigarette packs. But you can trade with these CFDs. And if – as you can see in this picture – you had bought two units of the DAX

at the beginning of the year, invested let's say 5,000 euros, and held from the first trading day of 2021 up to about today, you would have made almost 10,000 euros out of the 5,000 euros with this single trade, which means you would have had a profit of 4,930 euros. Below you can see the DAX trend, which has risen nicely over the whole time, otherwise the game would not have worked, but there were also nasty dips. For example, in mid-January, as some still remember, there was the scandal around the GameStop share, when a completely new group of investors on the internet stirred up the market via their mobile phones. These dips are found in the DAX again and again and they can be really bad, i.e. amount to more than 800 DAX points, and especially in the CFD environment this can cost real money.

But let's take that for now: with a bank balance of 5,000 and two units of DAX bought, you could have doubled your money in the first five and a half months of this year. That is the benchmark to beat. Our experiment was along the lines of building a trading bot that is able to trade on its own, make buy and sell decisions on its own, and basically turn invested money into even more money – fully automatically, more or less unsupervised. Of course, many people do that – there are hundreds of algorithms on the market that you can get for free or buy, and you can also write something like that yourself, which is the exciting thing about it. Many trading platforms that brokers offer for free also include development environments, so you can start writing your own bot with a bit of programming knowledge. Classically, most people look at the chart and then want to make a forecast with the help of chart techniques or other esoteric things, i.e. they want to predict the future price trend in order to make buying and selling decisions on this basis.

I have just said that this chart technique has no real rationale as to why it should work, other than the belief in it. Casting this belief into a programme is then often part of these algorithms that you can get. But one can also prove that these algorithms will always fail at some point. So far, there are no real winners, and if there are, no one will sell them but keep them nicely secret. In particular, the chart technique has the problem, with its way of thinking, that it assumes the information driving the future price is already visible in the chart.

At least since Twitterers like Trump or Musk, none of this applies anymore. They make a move and the whole chart history suddenly doesn't matter, because they've basically changed the world with one tweet. The idea of our experiment is different: we don't want to make a forecast; instead we basically want an assistant that optimises the trading itself. The forecast comes from outside, so to speak, and in a rather casual form. Actually, our bot, as I will call it from now on, needs no more than three kinds of specification: "Today the DAX rises", "Tomorrow the DAX falls" or "In the next 36 hours it will move around the 16,000-point mark" – something like that, only a very rough direction, no exact target value set in stone.

What our system is supposed to do is to identify statistical features, if you like, on the basis of short-term measurements on the pulse of the DAX and then make its decisions on the basis of these features, for example by identifying minima and maxima, independently finding support and resistance levels, building up statistics on mean rises or mean falls and the like. The transactions that the system is supposed to make will be in the millisecond range, i.e. faster than a human being can actually trade, and that means that the system will open positions on its own, monitor them further and close positions again at the end – hopefully with a profit.

The exciting thing is that our system should also be able to optimise itself continuously, i.e. it should have an internal control loop or develop a kind of "learning", so that the model parameters – which I will come to in a moment – are constantly adapted to the current situation. The whole undertaking is very experimental, and that means we know this is about algorithmic trading, but we have no idea which methods we will use, or which methods might be added in a month's time and which will be dropped. We also have no fixed idea of a decision workflow, of how a decision to buy or sell actually comes about, and, as I said, we expect to constantly try something new without knowing whether it will be successful.

So we are open-minded about that. Nevertheless, the system should run all the time and it should be able to dock onto many platforms. There are trading platforms in the private market like MetaTrader or cTrader, which are also used by large brokerage houses, and there is also Binance, which for example has put a complete web API online, so that you can talk to Binance through it. This is being offered by more and more providers, so that such systems can be written as services on the web itself, for example in the cloud. So the question for us is: how do you build the components of such a system? And the answer is logical: as isolated as possible. Because if you don't build them in isolation, you can't replace them as quickly as I just suggested.

The second question is: how do you assemble these components? And the answer is relatively simple: we don't assemble them at all. The idea is basically this, to put it figuratively: we gather a group of specialists. Each of them has their own special ability. For example, one is a specialist in finding extrema in the chart – i.e. a minimum or a maximum. Another is a risk manager of sorts, who decides whether it's worth making a purchase – maybe it's better to leave it alone after all.

These specialists can come and go or be replaced by better ones and – this is a crucial point – none of them knows the others. The idea of the bot is that when one of the integrated specialists has a great idea, he just shouts it out loud. Whether someone listens is a completely different question, but if someone listens, they may be able to do something with the information and gather and share further insights. In other words, we basically have an event-driven or message-driven system in mind. The following slide shows the structure of the current system. We have an API that we trade against. This API provides us with the current status of – in this example – the DAX, and we get this in the millisecond range.

This means that every price change that takes place in the DAX during the course of the day hits us as an event, a so-called tick event, and gives us the new price and perhaps also the traded volume that led to the new price, of course the timestamp and so on, all in the millisecond range. This tick event is fed into our system and picked up by agents, which means that almost all agents, shown here in blue, listen to this tick event: they listen to a price change and try to make something out of it. I don't want to present all the individual agents now, but some of them are very interesting because they still play a role later on.
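The agent idea can be sketched as a small publish/subscribe event bus: agents never call each other, they only publish events ("shout into the forest") and subscribe to the event types they care about. Everything below (class names, the "tick" and "minimum" topics, the placeholder detection logic) is an illustrative assumption, not the actual implementation from the talk.

```python
# Minimal in-process event bus: agents subscribe to topics and publish findings.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self):
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(payload)

class MinimumDetector:
    """Listens to ticks and shouts when it believes it has seen a minimum."""
    def __init__(self, bus: EventBus):
        self.bus = bus
        self.last_price = None
        bus.subscribe("tick", self.on_tick)

    def on_tick(self, tick: dict) -> None:
        # placeholder logic; the real agents use regression and trailing
        if self.last_price is not None and tick["price"] > self.last_price:
            self.bus.publish("minimum", {"price": self.last_price, "time": tick["time"]})
        self.last_price = tick["price"]

class Decider:
    """Listens to minima and decides whether a buy signal is worth emitting."""
    def __init__(self, bus: EventBus, daily_bias: str = "up"):
        self.bus = bus
        self.daily_bias = daily_bias
        bus.subscribe("minimum", self.on_minimum)

    def on_minimum(self, event: dict) -> None:
        if self.daily_bias == "up":                # the rough external forecast
            self.bus.publish("buy_signal", event)  # the investor agent may act on this

bus = EventBus()
MinimumDetector(bus)
Decider(bus)
bus.publish("tick", {"price": 15_000.0, "time": 0})
bus.publish("tick", {"price": 15_002.5, "time": 1})
```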

You see an agent up there called "Trigger Regression" and below that one called "Trailer". They listen for the tick event and have the task of identifying a minimum or a maximum in the course of the price, to then say "here's a minimum" or "here's a maximum" and pass that on to the rest of the structure. The problem is that by the time you can identify a minimum or a maximum, it is already over. For a minimum, I also need a subsequent rise, otherwise I have no minimum.

And therein lies the magic – to recognise such a minimum or maximum as early as possible and also to shout it out accordingly. That's why there are two agents. One agent does it via regression, which means that over a certain period of, let's say, 15 seconds, we measure the values of the DAX and fit a regression line through them. When the DAX goes down, the regression line points downwards, so the slope is negative, but at some point, when the price goes up again, the regression slope will flip over after a while, and this flip-over point signals a minimum. This works quite well, it's a standard procedure, and it provides a bit more information than just "there is a minimum"; we also get information about the standard deviation and things like that. Another example of an agent that takes care of minima and maxima is the "Trailer".
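Before coming to the trailer, here is a minimal sketch of the regression-based detector just described: fit a regression line over a sliding window of ticks and report a minimum when the slope flips from negative to positive. The 15-second window is from the talk; the tick layout and the zero-slope threshold are assumptions.

```python
# Sliding-window regression: a slope flip from negative to positive marks a minimum.
import numpy as np
from collections import deque

class RegressionMinimumDetector:
    def __init__(self, window_seconds: float = 15.0):
        self.window = window_seconds
        self.ticks: deque[tuple[float, float]] = deque()   # (time, price)
        self.prev_slope = None

    def on_tick(self, t: float, price: float):
        self.ticks.append((t, price))
        while self.ticks and t - self.ticks[0][0] > self.window:
            self.ticks.popleft()                 # keep only the last `window` seconds
        if len(self.ticks) < 3:
            return None
        times = np.array([x[0] for x in self.ticks])
        prices = np.array([x[1] for x in self.ticks])
        slope, _ = np.polyfit(times, prices, 1)  # least-squares regression line
        minimum = self.prev_slope is not None and self.prev_slope < 0 <= slope
        self.prev_slope = slope
        return ("minimum", float(prices.min())) if minimum else None
```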

Trailing is a standard procedure and works something like this: you have a price, this price rises above a certain threshold and at that moment you draw a kind of stop line below this price. If the price continues to rise, the stop line is pulled upwards at a certain distance. If the price comes back down towards the stop line, it is not pushed further downwards, but remains where it is. And at the moment when the price crosses the stop line from top to bottom, a maximum must of course have occurred and crossing the stop line is then typically interpreted as a sell signal, so at this point at the latest one should get rid of this position.

A trailer is also suitable for identifying minima and maxima, as I said, and for shouting the corresponding events into the forest so that others might listen to them. Trailing has the property that it reacts to very fast price fluctuations, whereas a detector based on regression lines sometimes misses fast movements. Both together, however, can deliver very reasonable results.
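A minimal sketch of that trailing logic, with threshold and distance values chosen purely for illustration:

```python
# Trailing stop: once the price has risen past a threshold, a stop line
# trails it at a fixed distance, is only ever pulled up, and the price
# crossing it from above marks a maximum / sell signal.
class Trailer:
    def __init__(self, threshold: float, distance: float):
        self.threshold = threshold   # price level that activates the trailer
        self.distance = distance     # gap kept between price and stop line
        self.stop = None

    def on_tick(self, price: float) -> bool:
        """Return True when the stop line is crossed from above (maximum detected)."""
        if self.stop is None:
            if price >= self.threshold:
                self.stop = price - self.distance
            return False
        if price - self.distance > self.stop:
            self.stop = price - self.distance    # pull the stop line upwards only
        return price < self.stop                 # crossing from above -> sell signal
```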

Another agent that is important in the big picture just shown is the "Reflection Levels" agent. This is one that tries to identify price zones off which the chart bounces back upwards – a lower boundary, i.e. a support zone – and, correspondingly, resistance zones above which the price simply does not want to cross. If these zones can be identified and if they have a certain consistency over time, leaving a support level to the upside is of course a buy signal, and if the price eventually runs into a resistance level, that is the signal that one should perhaps close at that point because, as expected, it simply will not go any further.
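How exactly the "Reflection Levels" agent works is not spelled out in the talk; one plausible sketch is to collect recent turning points and look for price levels where many of them cluster. The binning approach and all parameters below are assumptions.

```python
# Cluster local turning points into candidate support and resistance levels.
import numpy as np

def reflection_levels(prices: np.ndarray, bin_size: float = 10.0, min_hits: int = 3):
    # local turning points: price lower/higher than both neighbours
    lows  = prices[1:-1][(prices[1:-1] < prices[:-2]) & (prices[1:-1] < prices[2:])]
    highs = prices[1:-1][(prices[1:-1] > prices[:-2]) & (prices[1:-1] > prices[2:])]

    def cluster(points):
        if len(points) == 0:
            return []
        bins = np.round(points / bin_size) * bin_size          # snap to price bins
        levels, counts = np.unique(bins, return_counts=True)
        return [float(l) for l, c in zip(levels, counts) if c >= min_hits]

    return {"support": cluster(lows), "resistance": cluster(highs)}
```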

A resistance level does not have to hold, of course; it can be broken through, but the probability is relatively high, depending on the quality of the level, that it represents an upper limit. So when the two min-max agents say "I found a minimum" or "I found a maximum", that event goes to an agent we call the "decider". He looks at it: okay, there is a minimum, but the minimum is possibly just a small dent on a very high mountain, so we leave it alone. This decider therefore acts as a kind of risk management. If, on the other hand, the decider determines that we have detected a minimum in a deep valley and the DAX should rise today according to the target, then we have a good chance of making money at this point, and at that moment the decider generates a buy signal and shouts it back into the forest.

This buy signal may be listened to by the investor – that is the agent who sits on the money and spends it when he accepts the buy signal. It's actually quite simple. Another important agent is the so-called "profiteer". He looks at the positions that are currently open, and if any of them are in profit, he has to decide when to close them again. That's why he listens for min-max events and for events where certain lines have been crossed from above or below, and so on. And then comes an agent that slowly takes us in the direction of machine learning.

There is also a "controller" agent. This controller agent observes what the others are doing and determines, as an example: Okay, the profiteer has just closed a position, it was in profit, that's nice, but the price went up even more afterwards. He shouldn't have closed it yet because he could have got more out of it.

And that is measured by the controller. After closing a position, he continues to observe the chart from the point of view of this closed position, and if he comes to the conclusion that this could have been done better, he spreads exactly this information through the rest of the system. The other agents can use it to change their model parameters a little bit, so that next time the position will not be closed so early.
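The quality measurement itself can be as simple as this sketch (observation window and event layout assumed):

```python
# After a close, keep watching the price and record how much better the exit could have been.
def missed_profit(close_price: float, later_prices: list[float]) -> float:
    """Points left on the table after closing a long position."""
    best_later = max(later_prices, default=close_price)
    return max(0.0, best_later - close_price)

# A large value would be broadcast as a "transaction quality poor" event,
# which the other agents can use to nudge their model parameters.
```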

This feedback can work, but it doesn't have to; the whole thing is very difficult. We did it in such a way that all the model parameters were first predetermined through elaborate simulations for various scenarios that a price can go through. One speaks, for example, of a "bullish" phase in which the price rises without end, or of a "bearish" one in which the price falls deeply, or the price is in a sideways movement. For all these scenarios, one can find optimal model parameters, and the system knows these parameters. The controller sends its information through the network in the event of misbehaviour, i.e. if the quality of a transaction is poor,

and the others try to temporarily adjust their model parameters, whereby the optimal parameters from the three scenarios mentioned form the limits, so to speak, so that our system does not push a parameter beyond all bounds to where it no longer makes sense. This way, we have built in a kind of convergence guarantee: the system will always remain stable and will even – this is a nice side effect – adjust to the current scenario, i.e. it learns during runtime: ah, today is a good DAX day, although Martin Luckow actually claimed something else. The performance gain through this controller, through the introduction of these feedback loops if you like, was about 17 percent compared to purely static parameters, and that's pretty good. I'll tell you what this 17 percent means at the very end. I had already said that there are many model parameters.
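A sketch of this bounded adjustment: the controller's feedback nudges a parameter, but the value is always clamped to the range spanned by the scenario-optimal values, which is what keeps the system stable. Names and numbers are illustrative assumptions.

```python
# Feedback-driven parameter adjustment, clamped to scenario-optimal bounds.
SCENARIO_OPTIMA = {"trailing_distance": {"bullish": 12.0, "bearish": 20.0, "sideways": 8.0}}

def adjust(name: str, current: float, feedback: float, step: float = 0.1) -> float:
    optima = SCENARIO_OPTIMA[name].values()
    lower, upper = min(optima), max(optima)
    proposed = current + step * feedback      # feedback > 0: widen, < 0: tighten
    return min(max(proposed, lower), upper)   # convergence guarantee via clamping
```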

Each agent basically has model parameters of its own. The regression for detecting minima and maxima has, of course, the model parameter "regression length": how long is the regression interval? If I make it too long, it reacts too sluggishly; if I make it too short, I have too little data. The trailer, as an example, needs two parameters, namely the so-called threshold value and the trailing distance, whereby the trailing distance is a very exciting topic for making the system even better in the end. Moreover, as you may have just seen on the chart pictures, trading is possible almost 24 hours a day, but at night there is only little trading going on. During the day, around nine to ten o'clock, the DAX starts to hum, if you like, the price is made by the big investors, and throughout the day the DAX, or the statistical parameters of the DAX, behave completely differently than at night.

So you have to split the model parameters into a day phase and a night phase, and they will be very different. This ultimately doubles the number of model parameters. Actually, we should even include a third phase, because around 4 p.m. the colleagues from the USA arrive and mix up everything that happened in the morning, so we would actually need model parameters for this phase as well. In total, the model currently has 20 parameters, all of which interact non-linearly, meaning that you cannot optimise one of them without shifting the others, or rather their optimal points.

This means that if you wanted to fully optimise the entire system as it stands now, you would have to simulate a total of about 5 times 10 to the power of 21 runs – quite a large number – to find the optimal parameters. You could separate day and night, because day is not night and the worlds are different; that is, you could simulate these model parameters, eight each, separately, but then we are still at 4 times 10 to the power of 10, and that is still too much to simulate in a reasonable time. One must therefore apply other simulation-optimisation methods.

These are search methods based on gradients and the like, which provide us with at least semi-optimal parameters over time. As you can already see, too many of these parameters rather hurt, and that is why the aim is to limit the number of model parameters as much as possible, or to get rid of them altogether, so that the model becomes simpler. And you can try to do that by, for example, replacing the finding of minima and maxima with a machine learning method, e.g. a neural network. That's what we did: we tried to replace trailing and regression for minimum and maximum detection, which together account for six parameters, with a DNN. We approached it by drawing about a hundred thousand examples from historical DAX data where a minimum or a maximum occurred that then led to a significant price change, i.e. at least 20 DAX points or so.

From this we made our samples and built a network that was trained to recognise such minima and maxima. We chose the time series leading up to the minimum, as an example, to be non-equidistant; that is, we did not simply take the last 15 minutes or so, but a total of 64 data points whose intervals lie further apart the further back in the past they are. The idea behind this was to include long-term trends a little bit in the information we give to the system, while close to the decision point we want a lot of detail, which is why the most recent points in time lie so close together. There were a total of 64 sampling points per sample, and we fed this into a corresponding network. The result was that the DNN could detect minima and maxima reasonably well, at 83 and 85 percent respectively. That sounded very good at first, but when this new algorithm was integrated into the system, it resulted in an improvement of about 1.1 percent compared to the relatively simple methods of regression and trailing.
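A sketch of the non-equidistant sampling idea: 64 offsets into the past, dense near the present and sparse further back, fed into a small classifier. The geometric spacing rule, the network shape and the use of scikit-learn are assumptions; the talk only states that a DNN with 64 non-equidistant inputs was used.

```python
# Non-equidistant lookback sampling for a minimum/maximum classifier.
import numpy as np
# from sklearn.neural_network import MLPClassifier   # one possible small DNN stand-in

def sample_indices(n_points: int = 64, max_lookback: int = 5_000) -> np.ndarray:
    """Offsets into the past: dense near 'now', sparse far back (duplicates near now are merged)."""
    steps = np.geomspace(1, max_lookback, n_points)
    return np.unique(steps.astype(int))[::-1]          # oldest offset first

def make_sample(prices: np.ndarray, now: int) -> np.ndarray:
    offsets = sample_indices()
    window = prices[now - offsets]                     # non-equidistant lookback window
    return (window - window[-1]) / window[-1]          # normalise to the current price

# training on labelled historical samples (X: features, y: 1 = minimum, 0 = not):
# X, y = build_dataset(...)                            # hypothetical data preparation
# model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X, y)
```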

That 1.1 percent disappears into the statistical noise; it's not worth it. In other words, this example shows that there is not necessarily much to be gained from AI, because a deterministic, classical method might be simpler, faster or cheaper. Another example, and I have just hinted at this: when trailing, you try to draw the stop line as optimally as possible – optimal meaning that the error relative to the real price when closing is kept as small as possible, i.e. ultimately brought close to zero – and the question is how to do that. One approach is to measure the price and draw up a kind of zigzag line in which the average price fluctuations are recorded.

Based on these mean values, one could determine a mean trailing distance. That works very well for us, but it was also an area where several model parameters were involved that we wanted to get rid of. So at this point we tried to build a DNN that determines the optimal trailing distance of an open position in the profiteer agent, based on the overall information that can be extracted from the chart. Again, 64 sampling points in a non-equidistant time series, and in the end this led to a performance gain of five percent, which is quite a lot. We left that one in, if you will. Observing what the neural network actually does has given us some new ideas, and it is now up to us to tease a little more out of it.
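One way to turn the zigzag idea into a trailing distance is to measure recent up and down swings and take their mean size; the reversal-threshold swing detection below is an assumption, not the talk's actual method.

```python
# Mean swing size from a simple zigzag over recent prices; usable as a trailing distance.
import numpy as np

def mean_swing(prices: np.ndarray, reversal: float = 5.0) -> float:
    swings = []
    anchor = extreme = prices[0]   # last confirmed turning point / running extreme
    rising = True                  # current swing direction, assumed upwards at start
    for p in prices[1:]:
        if rising:
            if p > extreme:
                extreme = p
            elif extreme - p >= reversal:          # upswing has ended
                if extreme > anchor:
                    swings.append(extreme - anchor)
                anchor, extreme, rising = extreme, p, False
        else:
            if p < extreme:
                extreme = p
            elif p - extreme >= reversal:          # downswing has ended
                if anchor > extreme:
                    swings.append(anchor - extreme)
                anchor, extreme, rising = extreme, p, True
    return float(np.mean(swings)) if swings else reversal
```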

The realisation of this system with all its agents and so on is done with modern technologies. We have chosen Python as the core language to express the algorithms; the agents themselves are partly serverless functions, as they exist in AWS Lambda or in the serverless components of Azure. So you can write them there as serverless functions.
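As an illustration of what such a serverless agent could look like, here is a minimal AWS-Lambda-style handler; the event shape, the detection stub and the publisher are hypothetical placeholders, not code from the talk.

```python
# Minimal Lambda-style agent: receive a tick event, run the agent logic, publish a finding.
import json

def detect_minimum(price, ts):
    """Placeholder for the actual agent logic (hypothetical)."""
    return None

def publish(topic, payload):
    """Placeholder for a message publisher, e.g. SNS or Azure Event Grid (hypothetical)."""
    print(topic, payload)

def lambda_handler(event, context):
    tick = json.loads(event["body"]) if "body" in event else event
    price, ts = tick["price"], tick["time"]
    finding = detect_minimum(price, ts)
    if finding:
        publish("minimum", finding)
    return {"statusCode": 200}
```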

Some of these agents need a memory, which then argues for realising them as real services. In any case, there is a variant of this system on both AWS and Azure – because everything is written in Python, we didn't have to port much around – but there is also an on-premises version on a classic platform, MetaTrader 5, where the whole thing also runs in Python, but you additionally need a C++ gateway. So – I talked a lot about this system, but what is really in it now? Technically, the first conclusion for us is that in such experimental systems, where you don't yet know exactly what you are actually doing or how you are going to do it, an event- or message-oriented architecture is perfect. You can play, you have a construction kit that you can rummage around in, put new things in and take them out again more or less for free. So the flexibility of this approach was very good and we will continue to use it in any case. What also proved to be the case – and this is no surprise now – is that neural networks are not necessarily the better solution to a well-known problem.

Basically, the magic consists of a good mix of mathematical methods, AI methods and traditional software development to end up with a good – I'll call it – AI system. What was very exciting when we brought this into the cloud was that the support from AWS and also from Azure is now very professional, which means we wouldn't be afraid to run Python systems productively – in the public cloud or on-premises – simply because the offerings of the two big providers (Google we haven't tested in this context) are definitely good enough for something like this. One nasty realisation: we have experimented with several trading platforms and some of them are, let's say, at the level of the 1990s; they are really scary as far as extensibility through your own developments is concerned. Let's go back to the picture from the beginning: we had 5,000 euros in our account, we bought two lots, two DAX contracts if you like, and at the end of five and a half months we had made a profit of 4,930 euros, so we were at almost 10,000 euros. Things did not go smoothly the whole time, though.

From time to time we were also in phases where we fell below the profit line, and that is called a drawdown. This drawdown amounted to 29 percent here, which means that at times we were down almost 30 percent with our investment – 30 percent in the red, if you like, subtracted from the 5,000. In the end, however, everything went well, because the DAX has been rising with the end of the pandemic, so that we now have this profit of 4,930 euros.

The bot, on the other hand, as you can see in this diagram, has made a profit of 10,892 euros in exactly the same time, and this is not a simulation. It now stands at almost 16,000 euros, with the same investment over the same period. Instead of the single manual trade at the beginning of the year from the previous example, the bot has made almost 2,000 trades in the meantime, so it is busy opening and closing positions and is actually making its money this way at the moment. So for us, things have evolved from being purely experimental to becoming really exciting. Thank you.
