the FrankeNAS - (Raspberry Pi, Zima Board, Dell Server, Ugreen) // a CEPH Tutorial

I'm building a NAS with all of this: a bunch of junk and spare parts I found in my storage closet. A couple of old laptops, a Zima Board, a Ugreen NAS, a TerraMaster NAS, an old Dell server, a pile of random servers and hard drives, and a few Raspberry Pis. That's my mission in this video: to build what I'm calling the FrankeNAS. Now, is this possible? Yes, with an open source magic technology called Ceph. It's software-defined storage, and it's crazy.

It actually makes it so all of these devices can connect together and act as one big giant storage server, one NAS, the FrankeNAS. So in this video we're going to dive into what the heck Ceph is, and I'll show you how you can set this up yourself with pretty much whatever hardware you have. I mean, if I can do it with all this random stuff, you can do it too. So get your coffee ready. We're going to make a FrankeNAS. Now, full disclosure, I already have Ceph running in my network right now. It's my new NAS, which I named Hagrid, very much a Harry Potter themed NAS. I love it. And Hagrid is not one server,

he's four servers all working together to act as one big giant storage server, or a storage cluster. And the magic behind that is Ceph. And let me tell you, I've got features I've never had before. Things like tiered storage within the same folder: this folder is for HDD, and this folder is for SSD. It's crazy.

Now, that's just one of the magic things it does. But why did I do this in the first place? Why use four servers instead of one? Well, if you rewind maybe six months ago, I had this one server, a Synology NAS, which was amazing. It was fast and it did the one thing a bunch of external hard drives couldn't do: it allowed my team to collaborate together on one server. It was powerful. It had 16 bays of storage, but I ran out of it. We used it all. I mean, I make a lot of videos and they're 4K. So I had a problem. What do I do now? I could have just bought another Synology NAS, but I don't like that. You see, because when I buy another Synology NAS,

I have to manage another server. They wouldn't combine. They would be two individual servers that I have to log into and manage separately. Now, maybe I'm being a bit dramatic. That tends to happen. Managing two servers, not a big deal. But what if we grow?

What if I add another server, and another server? What if I need more and more storage? What if I buy a different type of server, a different brand or manufacturer? It becomes kind of a management nightmare. And what I'm describing is not unique to me. A lot of companies go through this, and maybe individuals, if you're insane like me. As you grow, one storage server isn't going to cut it.

So now I have four servers and a Ceph cluster. And not only is this thing insanely smart and powerful, but I can expand it as much as I want. Sky's the limit. Actually, space and power are the limit. I don't have enough room here,

but theoretically I can expand it as much as I want, adding as much storage and as many servers as I need to. And it can be whatever I need it to be. Object storage? Yeah. File storage, block storage. It can do everything. But how does this magic work?

Let's talk about it now. Ceph is magic. There's no getting around that. Now, does it stand for anything? I don't think so. It's just Ceph. And what makes Ceph special is, first, it's open source. We all like that, don't we? But it's also software-defined storage. What does that mean? Well, it means lots of things, but the biggest thing I want you to know is that software-defined storage is hardware agnostic.

Meaning you can pretty much use any hardware you want, within reason. So for example, the cluster we're about to build is going to be crazy. I'm going to use a laptop, a few Raspberry Pis, an old Dell server, it doesn't matter. And this is amazing, because traditionally when you buy storage, or you buy a NAS or a server or whatever, you're locked into the vendor.

If you buy a Synology NAS, you've got to use the Synology NAS server with the Synology NAS software. Across the board, storage vendors would lock you in, but it's time to break out. That was kind of lame, but I'm going to go with it. Coffee break. But going with software-defined storage like Ceph makes things a bit more flexible and scalable, and it can be a bit cheaper, especially if you're just using stuff you found in your closet like me.

So how does Ceph make all this magic happen? Decentralization. This stuff is pretty crazy. But before we do that, I just want you to know this whole storage thing isn't just for playing around. If you know storage, like how to manage it and administer it, not just in your house with your one Synology NAS, I'm talking stuff like Ceph or storage in the cloud,

if you know how to do that, you can get a pretty cool job. Yeah, that's a job, a storage admin, and you can learn all those skills with our sponsor, ITPro by ACI Learning. I logged into ITPro just now and I'm like, huh, I wonder if they have any storage courses. So I went and searched storage, and yeah, they've got quite a few, from AWS cloud storage to hybrid storage, and even securing storage in Microsoft Azure. So if you're watching this video and you're like, man, this stuff is really fun, I wish I could do this all the time,

you might need a few certifications and to learn more than you know now about storage, but it's possible. And if you're still pretty green, ITPro has your back: get your CompTIA A+ certification, Network+, Security+, learn Linux, get your CCNA. ITPro has everything you need to get started in IT and advance your career. They've also got practice tests to prep you for any of those crazy exams, and virtual labs, so you're not just learning theory, you're getting hands-on.

So if you want to learn IT like me, check 'em out, link below. Thanks to ITPro from ACI Learning for sponsoring this video. So let's talk about Ceph and decentralization. And by the way,

the way it handles the servers and the hard drives is so weird, but it's wonderful. So here's an example of what I'm about to build, my Ceph cluster (so they call it, a cluster). And notice each of these compute devices, my laptops, my Raspberry Pis, my Dell server, they have storage attached to them, whether it's an external USB hard drive or an internal hard drive, like on my NASes and my Dell server.

So how does Ceph put all these things into one big pile of wonderful storage? Well, let's start with how we first install Ceph. We'll pick one computer, one server, to be our manager. Now, for that I want it to be something powerful, so I'll probably go with my Zima Board right here. He is our manager. And by the way, the most complex part of this entire video will be explaining how Ceph works. Installing stuff is actually pretty simple. Now, as the manager, he's the boss.

He calls the shots. He's got a few jobs, but his main job is just making sure the cluster works. He'll even get a nice GUI, the Ceph dashboard, that you can access to manage things and do stuff. And at the same time, he'll also get the monitor role, which is super important in a Ceph cluster. You'll see why here in a second. But first, the manager's all lonely. He's got no one to manage. He needs some employees, some friends, some family, and that's where we add in the other servers, this island of misfit toys, the random stuff in my closet. Let's bring them in. Now, by the way,

you can have a single-server cluster for testing, but it's not nearly as much fun because it defeats the whole point of Ceph. So do something like this. Anyways, with just a few commands, I'm talking like a command per server you want to add, the manager will start to hire more employees. So he'll hire the Raspberry Pi, the laptop, and everyone else. Again, it's one command to make them part of the cluster, which I'll show you here in a bit.

And then Ceph will just automagically assign roles to make sure that things are scalable and redundant. So he might make the Raspberry Pi 5 up here a backup manager, or assistant to the regional manager, and also give him a monitor role, and he might assign another monitor role to the Ugreen NAS so they can form what's called a quorum, which sounds a lot cooler than it is. No, it's pretty cool. You see, these three servers are like the mafia bosses, and they make sure everyone's staying in line. They make sure all the servers are healthy. They do a few more things, but that's the big stuff.

I'll give 'em a little yellow underline on the monitor role, because they're pretty important. Now, that's just getting Ceph on your servers. So at this point, in your pile of stuff or my pile of stuff, I do have Ceph installed. I've got a Ceph cluster. It's pretty easy to set up. But now, what about storage? This was the first thing that made me go 'what?' when I heard about Ceph. So normally when you deploy a NAS, you'll take all these hard drives and group them into what's called a RAID array, which essentially makes all the hard drives function as one, which is pretty cool by itself, amazing technology you should learn. But this is just within one single device. You see,

we couldn't simply do that in our scenario, with our million servers and all the random hard drives. So here's what Ceph does, and it's so crazy. Watch this. Instead of making every hard drive come together as one, it treats every hard drive like its own thing. Every individual hard drive becomes its own unique component within the cluster called an OSD, an Object Storage Daemon. And I want you to focus on the word daemon. No, it's not 'demon'.

It's not scary, unless you want it to be. I'm not sure what that means. Coffee break. You might remember that daemons are essentially just services in Linux. So for example, you might have the SSH service or the Docker service, programs that run in the background, and each of these individual programs you can start, stop, and restart. But what's crazy is that each of the hard drives you have in your Ceph cluster is a service, a daemon, that you can start, stop, and restart. They are their own thing. They're treated like their own thing.

They're not just one big RAID cluster. They're OSDs. Now, do you see what's happening? Ceph is taking a storage cluster and breaking it up into parts, decentralizing it. So each hard drive is a part. Each service role is a part that can be scaled out horizontally, grow as much as you want it to. So anyways, we're starting to see the story unfold. Back to OSDs. Each OSD, or Object Storage Daemon, is responsible for storing data on itself, and it's also doing things like data replication, recovery, rebalancing, and reporting relevant information to the monitors, the mafia bosses, the quorum. Now, that's powerful and cool by itself.

I didn't finish drawing out all the OSDs. I didn't forget about you guys. But what's even more powerful is how Ceph writes our data to the storage, to these OSDs. It is so crazy and fun. Let's break it down. And by the way, if you're like, okay, Chuck, enough theory, you can go ahead and jump to the tutorial, timestamps below. No worries.

But if you're a nerd like me, let's keep going. Ceph relies on a storage system called RADOS. I don't know if I'm saying that right, I'm just going to roll with it. The Reliable Autonomic Distributed Object Store. That's a mouthful. Oh my gosh. A fun fact: we've already talked about two of its main components, the MONs, or monitors, and the OSDs.

It's the technology that's responsible for how things are written to the storage in your Ceph cluster. Watch this. Every file in Ceph is written as an object to the OSDs. And this is in contrast to a typical NAS, where files are written as blocks managed by the file system, which is fine, but what Ceph does with these objects is the secret sauce. You see,

these objects can be distributed, replicated, and balanced across all OSDs in your cluster, providing high scalability and fault tolerance. And if you were to contrast that with a typical NAS with a RAID array, replication still happens with your files, but it's confined to that one system, that one single point of failure, which I was not a big fan of, by the way. I was very terrified that all the videos I'm working on were on one server that could blow up. Okay, cool. Files, or objects, are being distributed to the OSDs in our cluster, but it gets crazier than that. I'm telling you, Ceph is crazy, man.

To help you see this craziness, let me show you how RADOS will actually distribute these files to the OSDs in your Ceph cluster, because it's not just RADOS straight to OSD. There are a few more things in between that make it go nuts. So let's get these objects out of here. We're not ready for you yet.

The first thing we'll actually do is create a thing called a pool, a storage pool. And this is just a logical grouping of storage resources in a Ceph cluster. So for example, I might add a few SSDs to my cluster, brand new. And because SSDs are wicked fast, I want my video editing to occur on these SSDs. So I'll create a new storage pool and call it 'current projects', which is legit what I do right now for my video editing projects.

And that pool would only be OSDs that are SSDs. I would define those rules, and then I might create another pool for archives and put all the other slower spinning disk drives in that pool, which, again, is legit what I do right now with my cluster. Now, let's talk about that first pool real quick, current projects. In addition to saying, hey, I only want this written on SSDs,

I can also say, hey, I want it replicated. It'll take each object and make it in triplicate (is that the word for that? I feel like it is, I'm going to go with it), in the most fault-tolerant, highly available way. And same story for archives: I might opt to go with something called erasure coding, which is similar to RAID, just better, faster and more space efficient. And all you really have to know is that it's similar to RAID 5 or 6, breaking data into chunks, adding parity chunks, and distributing those chunks everywhere.

I've said 'chunks' too many times and it's kind of making me feel sick. It's slower than just replicating, which is why I use erasure coding for my archives, but it's space efficient too. So cool, we have our pools,

they have their purposes, and RADOS just distributes stuff to our OSDs, right? No, you're getting ahead of yourself, buddy. Calm down. There's one more layer in between. I told you Ceph is crazy. We have another construct in our Ceph storage called PGs, or placement groups. These are like your middle management, not because they're unnecessary, but because they're just in the middle and they're managing stuff. I'm sorry, I'm running out of analogies now.

This was the weirdest thing I encountered when trying to understand Ceph, because at first I'm like, why do we need these? But I get it now. And the first thing you've got to know is that there are just a ton of placement groups. It's just crazy. So when we create our pool, we define rules like, okay, make three replicas, and based on how many OSDs will be used for that pool,

so let's say we have three OSDs, there's a calculation. Let me go find it. Here's the math problem you're going to have to do. Take a sip of coffee. No, I'm just kidding. Actually, no, take a sip of coffee,

but you probably won't have to worry about this. In most cases, RADOS and Ceph will just take care of it for you. Just know that there is a calculation involved, and based on our situation, you might see 128 PGs created just for our current projects pool.
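If you're curious, the commonly cited rule of thumb looks something like this. It's only a guideline (modern Ceph's autoscaler handles this for you), and the numbers below are just the examples from this section:

    # Rough PG-count guideline for a pool (not Ceph's exact autoscaler math):
    #   PGs ~= (number of OSDs x 100) / replica count, rounded to a power of two
    #   3 OSDs, 3 replicas:  3 x 100 / 3 = 100  -> round up to 128
    #   9 OSDs, 3 replicas:  9 x 100 / 3 = 300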

If I were to add more OSDs, in fact, let's go ahead and do that, I'll add two more per server, the placement group number will go up. So with nine OSDs, we might see 300 PGs created. Okay, you've got the math, but what the heck are they doing? Why are they there? I get it,

I was right there with you. But these things are essential to how data, or objects, can be efficiently and reliably distributed across your cluster. So here's how it works. Each PG, so each of these 300... let me draw my visual representation of a PG.

It's going to be a guy holding a cup of coffee. He needs it. Each PG is assigned three OSDs. Let me give him a green hat, because he's the green guy. So we'll say this OSD, this OSD, and this OSD. One of these OSDs will be deemed the primary. So check this out. When I transfer my files after we're done recording here,

I'm about to do this, it's about to happen in real time, for real. When I transfer this video file that you're watching to my NAS, RADOS is going to take that file, convert it to an object, and then assign it to one of the 300 PGs. Mr. Green Guy just so happens to get it.

Mr. Green Guy will look at his three OSDs and assign it to the primary OSD. The primary OSD, the daemon itself, will then go, okay, we've got a replication of three here,

I'm going to assign it to the rest of the guys on my team. So the object will be replicated to the other OSDs in this placement group. And you just saw how a file is saved to a Ceph cluster, specifically mine.

This is legit how mine works. And just think about this: that green guy isn't used every time. I mean, out of the 300 PGs, they're all going to be working, evenly distributing data across all the OSDs, or hard drives, in your cluster. It's kind of crazy, isn't it? It's a little beautiful. Coffee break for that. That's a lot, I know.

Thank you for hanging with me. Now, you know what's crazy? When I connect to my NAS to transfer a file, I'm not connecting to just one of my servers every single time. No, no, no, no. You see, Ceph is way too distributed for that, too decentralized. When I'm transferring a file, or even when I'm reading a file, I'm connecting to the OSD itself. So when I transfer this video,

it might live primarily on this server, but when I read another file, it might be on this server, and I'm making a connection with that server, with that server's IP address. Which is so cool, because think about this: I've got, let's see, 1, 2, 3, 4, like four video editors somewhere in this place, my studio. They're all accessing my NAS, but they're not all connecting to one server or one hard drive. Their traffic, their requests, are being distributed across my four servers, from network to writes to reads. It's amazing. So anyways, to sum it up: storage is written to pools, which are then given to PGs, which are then given to OSDs, who actually write the data to themselves.

Thank you, Mr. RADOS. Now, you may have a lingering question. How do RADOS and Ceph know how to place the data in the best way? How does it know to go to this hard drive and this hard drive and this placement group? How does it know? Well, it crushes it. Sorry, I'm a dad of six daughters.

You're going to hear plenty of dad jokes in my videos. There's an algorithm called CRUSH. That's what I was trying to tell you. I'm not going to go too deep into this, because this video is already crazy as it is. Just know this is an algorithm which stands for,

what does it stand for again? Controlled Replication Under Scalable Hashing. It sounds so crazy. Essentially, it's an algorithm that Ceph and RADOS use to figure out where to put stuff, and to figure out where stuff is when we access it. And what's cool is that it's not like a centralized database, because Ceph would never do anything centralized, that's a sin. It's decentralized in nature, because it uses an algorithm to figure out where stuff goes and where to find it. It's hashing. If you want to dive deeper on that, I'll put a link below to the article on how CRUSH works.

You'll need an extra cup of coffee, but it is really neat. Now, how CRUSH is structured, for example how it knows about each server or how many OSDs there are, is defined in what's called the CRUSH map, and it's in this CRUSH map that we'll create rules. So for example, when I said I only want current projects, that pool, to be written to SSDs, I would create a CRUSH rule inside the CRUSH map that would define that. I know, it's getting to be a lot.
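Just to make that concrete, here's roughly what that looks like on the command line. The names are made up for this example, and the exact pool-create arguments can vary by Ceph release, so treat this as a sketch rather than gospel:

    # CRUSH rule: replicate across hosts, but only onto OSDs with the 'ssd' device class
    ceph osd crush rule create-replicated fast-ssd default host ssd
    # Pool pinned to that rule, keeping three copies of everything
    ceph osd pool create current-projects 128 128 replicated fast-ssd
    ceph osd pool set current-projects size 3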

And that's just a simple version of what you can do with this. You can create rules to only write data to one certain rack, and then write certain replicas to another rack in your data center, to make sure you have high availability. You can spread it geographically. The limit is your imagination. It's crazy. But the main reason I wanted to mention CRUSH (and I was actually debating whether or not to mention it, because it's so heavy already) is how cool it is that it's always working for you, making sure that your data, your objects, are being balanced and rebalanced. So for example,

let's say I add a few more SSDs to my cluster. It's going to intelligently use this algorithm to balance out your data. Same thing if you lose a few SSDs, they're gone, they blow up: it'll intelligently reroute and rebalance your data, and it will do it in a way that minimizes data movement. So it's not going to throw an object way down the hall when there's a place for it right here. It's going to be smart about it. And trust me,

I know all this stuff's kind of scary, but it is very simple. When we set up the Ceph cluster, a lot of this is just happening behind the scenes. But if you're a nerd like me, it does help to know what's actually happening, even though I'm not manually having to worry about this all the time.

Now, there's one more thing you want to know about. I promise it is the last thing, though, and it's actually pretty stinking cool. It's the file system you're going to use with Ceph when you set up file storage, and it's what I use right now for all my videos. It's called CephFS. Whereas you might have ZFS on Linux or NTFS on Windows, say 'CephFS' ten times fast... sorry, that's driving me nuts. It's specific to Ceph.

As you might imagine, it takes advantage of all the decentralization goodness. Now, what do I mean by that? Well, normally files have what's called metadata: things like the name of the file, how big the file is, file permissions, who owns it, who can access it, who can write to it, where that stinking file is. With a traditional NAS and storage system, that metadata is stored alongside, in the same location as, the files themselves. Ceph's like, nah, we're going to decentralize the crap out of that. So when you create a CephFS file system on your Ceph cluster, a new role is created for your Ceph cluster, a role called MDS, or Metadata Server. Now, this role is important and you're going to want some horsepower, so if you have a particular server that's got some muscle, put it on him.

So let's say Mr. Zima is already stressed out. Let's give the TerraMaster the role of MDS. Let's also give it to the Dell. He's got an old Xeon processor in him. He can do stuff. Now, what this does is it decentralizes the metadata. It also makes it highly available and opens up the opportunity for parallel processing, so these metadata servers can both be used at the same time.

And yes, there is a bit more to CephFS, but that's where we're going to stop. We've already gone too far, I know. But how cool is Ceph? The more I learn about it, the more I'm like, why haven't we always done this? This is so cool, and it makes me appreciate what I have going on in my server room right now. But enough talking about it, let's actually do it.

Let's deploy a Ceph cluster on a random pile of junk and see how it does. Okay, first, what do you need for your Ceph cluster? Let's cover hardware first. For CPU, ARM or x86 is supported. ARM would be like a Raspberry Pi, or your Mac, I don't know. And like all things in IT, the better the CPU, the better time you'll have.

Many are supported, though. As far as RAM, this is where you want to make sure you have at least four gigs, and again, more is better. Now, as far as actual storage, yeah, you're going to want that. We're making a NAS. External hard drives or external SSDs will work great. What will not work great is USB flash drives, which I don't have one of on my desk right now. Why? I tried to make these work,

it just doesn't work. Linux will see these as removable devices, and as of the latest version of Ceph, it's like, nah, we don't like removable devices. But if you get flash drives working, please let me know how that works.

And of course, if you have internal storage, you can use that, just don't use your OS disk. And finally, you'll need a network switch. Nothing too crazy, but this is the secret sauce in how Ceph works: how the cluster communicates, replicates, all the stuff it does, it does over the network. Now, for my lab, I'm doing a one gig switch, meaning each port is one gigabit per second. That's great for a lab, but in production you'll want something a bit crazier. For example,

my Ceph cluster right now: I've got four servers, and each of these servers has four 10 gig NICs, all connected to my 10 gig MikroTik switch. And normally with Ceph you'll devote two ports to front-end traffic, so when we access the cluster and transfer files, and then you'll devote two ports to back-end traffic, so think replication and storage balancing. But for our lab, one gig with one port is fine, and all of that will just happen.

You don't have to worry about it. Now, let's talk about software. What OS do you install on your hosts? For me, I'm going to be installing Ubuntu 22.04. That is the latest and greatest supported by the latest and greatest Ceph, the Reef release. Now, of course, if you're watching this in the future and you're installing something beyond Reef, consult the documentation. I'm trying to cover it all, but I can't see the future. Now, let me save you some headache real quick.

If you're using a Raspberry Pi for your cluster and you're using Reef 18.2.2, install Ubuntu 20.04 on your Raspberry Pi. If you try 22.04, it just doesn't work. For whatever reason,

the Ceph Docker container will not run the ARM image. It'll try to do x86 and break itself. So I'm doing 22.04 on everything except for the Raspberry Pis, which get 20.04. And finally, you will need some coffee. It's just required. NetworkChuck coffee. And let's get started. Now, the first thing you want to do is prep your hosts, and that will involve getting Ubuntu 22.04 on all of them. I have, how many, six or seven?

This took forever. And in addition to the OS, you want to make sure that you have SSH installed so you can access each host remotely. That will look like sudo apt install openssh-server.

You also want to make sure that you have a static IP address. You'll have a better time if you do this. Now, how do you do that? Just jump into your netplan config: cd into /etc/netplan and ls that directory. You should see something like this YAML file right here. It might look a bit different, but just jump in there, sudo nano that file, and do something similar to this for your network interface on each host, then Ctrl+X, Y, Enter to save.
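Something like this is what I mean. This is only a sketch: your file name, interface name, and addresses will almost certainly differ, so match them to your own network:

    # /etc/netplan/00-installer-config.yaml (example file name)
    network:
      version: 2
      ethernets:
        eth0:                      # your interface name may be enp3s0, ens18, etc.
          dhcp4: no
          addresses:
            - 10.7.2.23/24         # the static IP for this host
          routes:
            - to: default
              via: 10.7.2.1        # your gateway
          nameservers:
            addresses: [1.1.1.1, 8.8.8.8]

Then apply it with sudo netplan apply.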

Also, if you haven't done this already, make sure you update your repositories and upgrade your system with a sudo apt update and sudo apt upgrade -y. Now, I'm warning you, the prep side of this is going to take longer than setting up Ceph, but it won't take that long, don't worry. Next, we're going to worry about the root user. Ceph, by default,

loves the root user and wants to use it. You don't have to, but it'll be a few extra steps, so we're going to use the root user to keep things simple. Now, first, let's set a password for our root user. By default, when you set up your system, your root user does not have a password.

So we'll do sudo passwd root. Password set. Then we need to enable SSH login for the root user, and for that we'll edit a file: sudo nano /etc/ssh/sshd_config. Jump into that file and look for this bit of config right here, PermitRootLogin. Right now it's hashed out, meaning it's not being used. Let's remove that hash and change it from prohibit-password to yes, then Ctrl+X, Y, Enter to save, and then we'll restart our SSH service: sudo systemctl restart ssh. Rinse and repeat for all your hosts.
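Here's that whole root-login bit in one place, for reference:

    # on every host
    sudo passwd root                   # give root a password
    sudo nano /etc/ssh/sshd_config     # change "#PermitRootLogin prohibit-password" to "PermitRootLogin yes"
    sudo systemctl restart ssh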

That goes for every bit of config and setup we've done so far. Give me a second, I'm going to set this up on all my hosts real quick. This is going to take me a while. Root user, done. Now, time for storage.

We have to prep our storage to make sure that Ceph can see it and likes it, and it's pretty picky. Don't worry, it's not too bad, just a few commands. So here on my Dell server, the first thing I want to do is identify what storage I have in my system. The easiest way to do that is to type lsblk, just like this. Boom,

I've got lots of storage. What you want to look for are things like this: sdb, sdc, sdd. These will be drives on your system. Notice I have sda; I'm going to ignore that guy because he has my OS installed on him.

Leave him alone. Just make sure you're aware of what's happening there. Now, what I want to do is prep these bad boys. I'm going to destroy them, actually, wipe them clean. Two easy commands. First, we'll use sgdisk to zap it: sudo sgdisk with the --zap-all switch, and then we'll specify the disk.

So we'll do /dev/sdb for me, whatever it is for you, and go put your password in. Then we're going to use wipefs: sudo wipefs --all, and then the drive, /dev/sdb. And done. Do that on every drive, on every system. It's going to take me a minute, because I've got a lot of drives. I'll see you in a second. Okay, that took me a minute. Let's keep going. Just a few more things left in prep, I promise.
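For reference, the disk prep boils down to this (the device names here are just examples from my system, and obviously leave your OS disk alone):

    lsblk                              # identify your data disks
    sudo sgdisk --zap-all /dev/sdb     # destroy the partition table
    sudo wipefs --all /dev/sdb         # wipe any leftover filesystem signatures
    # repeat for /dev/sdc, /dev/sdd, ... on every host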

Now, we need to worry about software. We need to install a few things to make sure Ceph is happy. First is Docker. Yes, Ceph is going to run in containers, lots of containers. By default it uses Podman, but I prefer Docker, I like it better. So let's do that real quick. We're going to install Docker on every host. Now, to install Docker in the best way, refer to the documentation here (and this will be a link below, by the way). I'm going to grab their code to add their repositories, paste that here, and rinse and repeat on every host. And yes, in case you're wondering,

you can use Ansible to do all of this, including installing Ceph, but that's too much to cover for one video. I'm trying to keep things simple for you here, but really complicated for me, so you're welcome. And then with this one command right down here, we're going to install Docker. First host, yep, and the rest of the hosts.
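If you don't want to copy the repository setup out of Docker's docs, their convenience script does the same job in two commands. This isn't necessarily what's on screen in the video, just a quick alternative:

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
    sudo docker version    # confirm the daemon is up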

The next thing Ceph cares about is something called LVM2. The good news is that it's probably installed already by default on your system. Just to make sure, we'll do sudo apt install lvm2, and it should already be there. And then finally,

we have to make sure things are good with time. Time synchronization is very important for a cluster, because they have to make sure they're all looking at the same watch. We can verify that with one command: timedatectl status. What you're looking for here is that the NTP service is active and the system clock is synchronized. Yes. If you see that, you're good to go. Now, we're almost there. We're about to deploy Ceph,

but we've got to do one more thing. We have to make sure that the server you want to be the manager, and I want to make the manager my Dell server here, has its root SSH key on every other host, every other employee that he's going to hire, so that he can log in without a password. So we'll do that with two commands. But first, we're going to get logged in as root. To do that, we'll type su - (su, space, dash). Put your password in.

This is the password of the root user that you set earlier. Cool, I'm now root. Then I'll generate an SSH key for my root user. The command will be ssh-keygen. Hit Enter a few times. Good to go. Now, with one command, we can send that key to the other hosts in our cluster (well, they're not in the cluster yet, but we're getting ready). Host number one: I'll send it to my Ugreen NAS. The command will be ssh-copy-id,

then we'll specify the username and the host, so it'll be root@ the IP address of the other employee. Here, my Ugreen is .23. That's it, right? Yes. Type in yes, put in the root password for the other system, and the key has been sent.

Now, to verify that worked, let's try to log in from our manager to that new employee. We'll do ssh root@ that new employee, ending in .23, and we should get no prompt for a password. Boom, we're in. Awesome. If you can do that, you know it works. I'm going to exit to jump back to my manager, and go ahead and rinse and repeat for every host in your cluster. I'm going to do that right now. Okay, I'm done.
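The whole key exchange, in one place. The IP is just my Ugreen's address as an example; run the copy and the test once for every other host:

    su -                         # become root on the manager
    ssh-keygen                   # press Enter through the prompts
    ssh-copy-id root@10.7.2.23   # send the key to each host
    ssh root@10.7.2.23           # verify: no password prompt means it worked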

Everything's harder with seven hosts, I'll tell you what. Okay, now that our hosts are all prepped, we can deploy Ceph. And the best way I found to deploy Ceph is by using their tool called cephadm. And that's our first task: we're going to install cephadm on our manager host. Let's do it. First, we're going to set an environment variable, just like this: CEPH_RELEASE=18.2.2.

Now, this is the latest release at the time of this video. If you've got a later release, please change accordingly. Boom, easy for me to say. Environment variable set. Now, time to pull down the cephadm binary with this command,

all commands will be below. Got it. If I type ls, there it is: cephadm. Now we've got to make this cephadm executable, so we're going to use chmod to do that. The command will be chmod +x cephadm.

Now we're going to add the Ceph repo to our repos here, using cephadm. The command will be ./cephadm add-repo --release reef. Again, if you're watching this video at a later time and we have a new release beyond Reef, use that one. Ready, set, go. With the repo added,

we can now finally install cephadm. You thought I was going to say Ceph? Nope, we're still installing cephadm. One command: ./cephadm install, just like that. Ready, set, go. Okay, that wasn't too bad, right? Now, let's make sure it worked. We'll type which cephadm, and it's going to look and see where that is. Cool, our command is living in /usr/sbin. If you get this, you're good to go. You're solid.
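The download command goes by fast on screen, so here's roughly what the whole cephadm install looked like for Reef at the time (double-check the current curl URL against the cephadm docs before you trust mine):

    CEPH_RELEASE=18.2.2
    curl --silent --remote-name --location https://download.ceph.com/rpm-${CEPH_RELEASE}/el9/noarch/cephadm
    chmod +x cephadm
    ./cephadm add-repo --release reef
    ./cephadm install
    which cephadm    # should print /usr/sbin/cephadm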

Now, can we install Ceph? Yes, we can. Here we go. With one command, we will indeed install Ceph. The command is cephadm bootstrap (Bootstrap Bill, love that movie), and we'll use the option --mon-ip and specify our first monitor IP, which will be the IP address of this server, your manager server. My IP address is 10.7.2.27,

I think. I'm going to verify real quick. It's kind of important, make sure you do it right. Yes, .27. Bootstrap is going to make sure all the stuff we installed is indeed installed, and then install Ceph. Let's do it. Ready, set, go.
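For reference, that's the whole command (with my manager's IP; use your own):

    cephadm bootstrap --mon-ip 10.7.2.27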

Now, look what it's doing right away. I mean, first we see that it did make sure, hey, is time synced? Docker, is it here? And now it's pulling the Ceph container image. Bootstrap is complete and Ceph is installed. Now, real quick, I want to scroll up before we do anything else. You see this right here? Because our first Ceph node is indeed our Ceph manager,

we've got a dashboard, a nice, pretty GUI. What do you say we access it? Let's go here, and unless you have DNS set up, go ahead and access it by IP address. So for me, that'll be 10.7.2.27, and I'm going to go ahead and snag this password here. That was port 8443. Ah, there we go.

The username is admin, the password is that thing I just grabbed, and it's asking me to set a new password. Let's go ahead and do that. It's going to prompt us to log in once more with our new password, and we are in. Now, you could do everything else from the GUI, expand your cluster and do all the fun stuff, but you know what? The CLI is better, it's more fun. Let's get back there. We will visit here again just to see things, to get a visual for what we do in the command line. But anyways, back to work. Now we've got to install one more thing. It's called ceph-common,

which will give us the ceph command, and we'll do that with cephadm. So: cephadm install ceph-common. Ready, set, go. Now, with ceph-common installed, we can use a cool command to see how our cluster is doing: ceph -s, for status. Ready, set, go. Cool, so things are okay. I don't like this, though. We've got a health warning.

That simply means it doesn't like the fact that we have one host in our cluster, because we should have more in a cluster, and we have no OSDs or anything going on. We haven't added anything. So let's change that. It's time to adopt some new employees. That's how you get new employees, right? Yes, you adopt them. But before we can adopt our employees, we've got to give 'em a passcode. We need to once again copy a certificate, a cert, to all of our future employees.

It's like their badge to the building, their employee badge. And this is the key that was generated by Ceph, and it's a familiar command, ssh-copy-id, but we're going to use a few extra switches. We'll say -f, -i, and specify a certain cert, and that cert is going to be /etc/ceph/ceph.pub.

And then we'll specify the host, so root@ and the IP. What's my first host? I always forget. .23. We'll send it over. Done. And because it already has the root key, you should not be prompted for a login. Go ahead and do that for every host in your cluster. This is way faster than anything else I've done so far. Okay, all our employees have their employee badges. Now, with one command, we can adopt a new employee and add the host to our cluster.

Here's the command, and from here things are kind of downhill, it's super easy. The command will be ceph orch (for orchestration) host add, and then we'll put the hostname of our host. So the next one I'm going to do is the Ugreen, so I'll do ceph-ugreen. Now, make sure that hostname matches the hostname of your system, otherwise it'll throw a fit. And then the IP address of your system, so 10.7.2... 22, no, 23. Now, that's pretty much all you need, but you can also add what's called a label.

So we'll do --label, and I'm going to give it the label of _admin. What this will do is make sure that the Ugreen has the ability to administer the cluster. It gives him the keys to the kingdom here. And we're going to do it right now. Ready, set, go. Oh wait, hold on, labels with an s: --labels. Ready, set, go. Okay, it added the host, and a lot of things are going to be happening on the Ugreen host right now.
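Putting those two steps together, it looks like this. The hostname and IP are from my lab, and note that _admin (with the underscore) is the label cephadm treats specially; it's what copies the admin keyring over:

    # give the new host its badge, then hire it
    ssh-copy-id -f -i /etc/ceph/ceph.pub root@10.7.2.23
    ceph orch host add ceph-ugreen 10.7.2.23 --labels _admin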

When you see this, you know that it checked everything: it made sure, hey, the hostname matches; it made sure the prerequisites are there, it has LVM2, it has Docker; and now it's going to actually pull down those Docker images, all the ones it needs, and it's going to start adding it to the CRUSH map, which, you may recall, is just a map of what our cluster looks like: all the hosts, what the hosts are doing, what storage or OSDs they have. So anyways, let's check the status. We'll do ceph -s. Okay, things are looking good.

We've got two daemons in our monitors here, and we have kind of a quorum: ceph-dell and ceph-ugreen. Ideally you want three, if you have three or more hosts. Let's go check on the Ugreen. I'm going to log in as root, su -,

and I'm going to check the Docker containers. I'll do docker ps, and yeah, it looks like every container is running. Just by default, you can see we have a monitor container, because it's going to be a monitor.

We've got a node exporter. We've got a Ceph manager, because he's the assistant to the regional manager, he's the backup. And a few more that are essential to run Ceph. But it looks like things are good. That host has been added. Let's do one more command: ceph orch host ls. This will list our hosts, and gives you a quick view on, hey, what's the status? Now, I hate that it's blank, but that's kind of a good thing. If it was down,

it would say offline. Otherwise, we're good. And then we have our labels right here. We can see that they're both admins. Now, I told you, it's all downhill from here. Adding a host is not too bad. I'm going to add my other hosts as well. So I'll add ceph-hp, my little crappy HP laptop that I used in the NetworkChuck Academy laptops and mobile devices course, because I took it apart. I'll do my Raspberry Pis. Okay, all my hosts are added.

Let's do a ceph -s to check on things real quick. Okay, things are looking good. Referring back to our Ceph architecture, you can see right now I have five daemons as monitors. Looks like Ceph automatically is like, you know what, let's just make 'em all

monitors. We've got quite the quorum here. And then we have our two managers: ceph-dell is the primary, ceph-ugreen is the backup. No OSDs yet, that's coming. But I've got my seven hosts in the cluster, so right now we're in a good spot. So we've deployed Ceph, but we haven't done anything with storage yet.

Every one of these hosts in my cluster has some sort of storage, whether it's internal or external USB drives. So how do we make them OSDs? Because remember, each individual hard drive will be an OSD, an Object Storage Daemon. First, let's make sure that Ceph can see them. We did all that prep work, we made 'em look pretty. Let's make sure Ceph is happy. To check, we'll do ceph orch device ls. And there's a lot of stuff going on here. Let me zoom out just a little bit.

I've got a lot of hard drives across a lot of hosts, and there are just a few things you want to pay attention to. First, we have our individual hosts: I've got ceph-ugreen, ceph-terra. We can see the hard drive types here: ceph-zima has two SSDs. And then we have the status over here: are they available to be added? Yes, yes, yes, yes. You want to see that, that's a good sign. If you see no, there might be a reason. For example, it gives me a reason here, insufficient space, and that's just my boot drive,

so it's like, I don't care. But if you run into any issues where Ceph's like, we can't use that, make sure you do what we did in the beginning: sgdisk, wipefs, make Ceph happy. Anyways, we see all those available devices. Let's add them.

And we can do that with one command. Oh, check this out, it's so fast. Let me just check my cluster status once more. Things are still good. Let's change that health warning to a health OK. That'll be one command: ceph orch apply osd --all-available-devices.

Every device that had a yes in that available column we saw is going to get added right now. Ready, set, go. Kind of anticlimactic. It scheduled it, and it's going to start adding. Now, one command I want to show you real quick, because it's going to be real time: type in ceph osd tree. It will list all the OSDs. Right now there's one. Let's check again. Oh, they're being added. Check it out, it's happening, more and more coming in.
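The three commands from this part, in one place:

    ceph orch device ls                           # which disks Ceph sees as available
    ceph orch apply osd --all-available-devices   # turn every available disk into an OSD
    ceph osd tree                                 # watch the OSDs appear, grouped by host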

Now, they're all in a down status, because right now Ceph is actually reaching out to each of these employees in our cluster, pulling down a Docker container and running a daemon for each of these OSDs. And once each container is running and verified good, we'll see the up status happen, soon I hope. We'll give it a second. Just a little coffee break. Oh, a few are coming up. Alright, now you can see it's starting to organize them. So at first it just had 'em in a big pile, and it's like, oh, okay, hold on,

these OSDs belong to this host, we're going to organize 'em, make 'em look pretty. Oh, are they all up? Oh no, a few are still down on ceph-dell. Come on, Dell, catch up. Almost there. Yes, they're all up now. Real quick, I want to show you something. Let's do docker

ps. You can see that I have quite a few Docker containers running. Specifically, let me grep for OSD: all these containers are OSDs. If I look on my Pi host here, docker ps: there it is, an OSD container.

Now, let's do a ceph -s and check on our cluster real quick. We've got a health OK. Oh, that's what I wanted to see. Health OK, a clean bill of health. We have 22 OSDs, or 22 individual hard drives. Now, what's killer about this, just think about what we've done so far: we have seven hosts and they're all just a mixed match. We've got a Dell server, a couple of Raspberry Pis,

an old laptop, and all the drives are different: a bunch of internal ones that are pretty big, terabytes; a few USB external hard drives that are like 500 gigs; and a couple of SSDs in the cluster too, that I think are each a different size. Try to do that with any other storage situation. It won't work.

Ceph just makes it work. Now, ideally, if you're deploying a Ceph cluster, you want to have uniform hard drives, same type, same size. But this is possible because of the way Ceph works. Anyways, you can see right now, with my cluster, we have 38 terabytes of storage available. So cool. And notice, by default it did create a pool and one placement group for us.

We're not going to use that for anything, it's just created by default. And now, with our OSDs in place, we can start doing stuff with storage. We can start having fun with this by creating our first file system. Yes,

that's right, we're going to use CephFS. Okay, say it: CephFS. This is also remarkably easy. With one command, we're going to deploy a file system, and actually it's such an easy command, I'm going to go ahead and type it in right now before I explain it.

The command is simply ceph fs volume create, and then we'll name the file system. We'll just call it cephfs. Now, this one simple command will do a lot of things. First, it'll create two pools, one for metadata and one for data.

It will also assign roles to our servers, the MDS role, metadata server, assigning one as active and one as standby by default. Enough talking, let's do it, let's run our command. Ready, set, go. Oh, I forgot to mention, this will also auto-create placement groups for our pools. So let's do a ceph -s to check our status. Oh, we've got some things going on. We've got three pools already. Let's check it again. Oh,

it's making our placement groups. Let's check it again. Yeah, things are happening. Let's give it some time, give it some room to breathe. A little coffee break. And it looks like everything is good, which is such a good thing to see.

I love this. Health is OK. We have 529 placement groups, keeping in mind that each one of these placement groups is assigned to three OSDs, at least by default, because by default, when it created these pools, they got a replica count of three. That means each object that is written to our file system, boop, triplicated, three of 'em. Those objects are assigned to a PG,

the PG gives it to OSD one, the primary, and he replicates it to his brothers. Such a cool thing. Now, let's go to the dashboard, let's take a look at the GUI to see how our hard work paid off. Let's just refresh our screen here. Hey, hey, hey, things are looking good. Look at this pretty dashboard. A couple of fun places to look at: let's go to Cluster and look at our hosts, all

our hosts right here, looking good. And we may get messages like this as it's kind of converging and replicating and talking to itself. We can look at our OSDs, we can see each individual hard drive and its usage. So far, nothing. Let's go to Pools down here, another section. We have our pools with active PG status. So far, nothing going on, there's nothing on there. Let's change that. Let's actually use our file system.

And this is where Ceph, I think, really shines. Two things I want to show you real quick. First, for our Windows users, we're going to set up Ceph with an SMB share and test it out, transfer some files, see how it works. I'll also show you, for the Linux people, how to mount Ceph using the Linux kernel. Ceph is baked into the Linux kernel, and doing it this way is blazingly fast.

And actually, real quick, let's go ahead and mount our Ceph file system here on our Dell server. I'll show you what I mean. It'll be one command. Actually, first we'll make a directory: mkdir. We'll put that in /mnt and we'll call it ceph, so /mnt/ceph. And then we're going to mount our Ceph file system to that directory. That's how Linux does things.

Now, with this command here, we're going to mount Ceph as a regular file system, giving it native kernel performance. I'm not going to cover what all this means, because this video is already too long as it is. And actually, hold on, I forgot one thing. We need to change the place where we're going to mount it. We'll mount it to /mnt/ceph, that directory we just created. Ready, set, go. Ready, Ceph, go.
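The mount command flies by on screen, so here's a minimal sketch of the kernel mount. It assumes ceph-common is installed and that /etc/ceph/ceph.conf plus the admin keyring are on this host (they are, since this is the manager); the monitor IP is my manager's, so use yours:

    mkdir -p /mnt/ceph
    mount -t ceph 10.7.2.27:6789:/ /mnt/ceph -o name=admin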

So if you jump into that directory, cd /mnt/ceph, let's create a directory, say 'test'. There it is. And let's add something to it real quick. We'll use this command to download a random cat picture. Done. Now, let's check our cluster. We should have more objects there now. And yes, look here, we've got some data going into our pools.

Looks like it went to ceph-ugreen, this OSD here. That's so neat, you can see that. Oh, it went to the other hard drive here. And I'll show you how to mount this on another Linux server. But real quick,

we're going to turn this mount into an SMB share that we can access via Windows. So we had to do this mount first. Now let's also install Samba: sudo apt install samba (I don't need sudo, I'm root, but it's okay). Yes.

Samba is what we're going to use to actually host SMB, the Server Message Block, which Windows can access via a network share, or, Windows is using SMB to access the network share, whatever. Now, let's do some basic permissions real quick. Let's make a new group. We'll do groupadd,

we'll call it cephstuff. And I'm going to add my networkchuck user, the one I created when I installed the OS, to that group. So: usermod -a -G cephstuff networkchuck. Cool, he's part of that group, or I'm part of that group. Now we're going to change permissions on this directory here, this ceph directory, and we'll do it like this. We'll first change ownership. We'll do chown -R, capital R for recursive,

and we'll say the owner will be root and the group will be cephstuff. Let's do that. Cool. And then we'll change the permissions on that directory. We'll do chmod -R, for recursive, everything in that directory we want to change, and we'll do 775 on the ceph directory.

Cool. So, basic permissions out of the way. Next, we'll set an SMB password for our networkchuck user. The command will be smbpasswd. We'll do -a and specify our user, networkchuck. Put in a password for him. Cool, password set. And now we have to define our share,

and we can do that through the SMB config. So we'll do nano /etc/samba/smb.conf. Lots of default stuff here. We're just going to create some space right here at the beginning and create our share. We'll do opening brackets and call it, what do we call it, 'I Love Ceph'. And then just underneath that we'll define parameters, like the path.

Where's it going to live? It's in /mnt/ceph. Is it browseable? Yeah, if I spell browseable right. Now, is it read only? No, we can write to it. And I'll force a group permission on that to be my Ceph group. What did I name it, cephstuff? Yeah, I think so. Ctrl+X, Y, Enter to save. That's all we need.
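Here's the whole Samba bit in one place. The user, group, and share names are just what I used, so season to taste:

    groupadd cephstuff
    usermod -aG cephstuff networkchuck
    chown -R root:cephstuff /mnt/ceph
    chmod -R 775 /mnt/ceph
    smbpasswd -a networkchuck

    # then add this share to /etc/samba/smb.conf:
    [I Love Ceph]
       path = /mnt/ceph
       browseable = yes
       read only = no
       force group = cephstuff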

And we'll do a systemctl restart smbd to restart the service. That's all we have to do. Now, let's add that share in Windows. I'm already here in Windows, in my Explorer. I'll right click and say add a network location, click on Next, Next, and then I'll do backslash backslash and the name of my server,

so 10.7.2.27, and we'll specify the share. What did I name it? 'I Love Ceph'. Click on Next. It should go try to find it. It found it. Now it's like, hey, what's your username and password? We'll put it in and we're good. Click on Next. Go ahead and open it.

We have it mounted. There's our random cat picture. Oh, now let's add some stuff to it. This is our Ceph cluster, my seven devices with 22 hard drives, and I'm about to transfer stuff to it. I'm so excited for this. I'm going to grab some random videos here. Let's grab this and this and this.

Just lots of stuff. 111 megabytes per second, not bad. And at this moment, let's go check the portal here. Let's go to our pools. We can see stuff is happening. Oh yeah, let's go to the dashboard. We've got some IOPS coming in. Let's try and read the files.

Let's just open up a video here. I mean, stinking fast, look, it's playing. I'm going to see which OSD. Oh yeah, here it is, it's playing off ceph-ugreen, you can see it right here. Let me try another one. Who's it going to be now? Still ceph-ugreen, OSD 2. So we've got Windows with SMB. Let's talk more about Linux. And I want to clarify something here real quick. With SMB, when we're transferring files, here's our SMB server, which is this guy.

I'm always going to be interacting with and transferring files to that server, and then that server will be distributing the data to the OSDs, the individual ones where the objects are being mapped. And I wanted to highlight the fact that with SMB, you are tied down to that one connection, whereas when you're doing Linux with the native Ceph mounting, it will dynamically use and connect to the individual OSDs. Now, you can mount Ceph in Windows. There's a way to do it. I think it's still in beta,

which is why I'm not going to show it here. Also, this video would be way too long. But anyways, let me show you Linux. So we've already mounted CephFS here on our manager server. Let me show you how to do it on a remote machine as well.

So let's do it on the Raspberry Pi. Now, first we have to make sure that ceph-common is installed, so we'll do a sudo apt install ceph-common, because it will use Ceph to mount the drive, or the file system. And also, in order to mount the file system, we'll need credentials, specifically the credentials found in /etc/ceph. Pretty much everything right here needs to be transferred over to our remote server.

And we'll just do that with scp. So, scp, and we'll say we'll copy this entire directory to our remote server. We'll make sure we copy everything. Just so cool. It's all copied. Let's just make sure it's there. Cool, it's all there.

So now, same story as before, we'll make a new directory, let's call it /mnt/ceph, and then we'll use that same mount command from earlier. Just paste that in right here, and it should be mounted.
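Roughly, the whole remote-mount dance looks like this. The IPs are examples (10.7.2.27 is my manager, and 10.7.2.51 just stands in for the Pi), and the exact contents of /etc/ceph may differ on your cluster:

    # on the Pi (or any client)
    sudo apt install ceph-common
    sudo mkdir -p /mnt/ceph

    # on the manager, push the config and keyring over
    scp /etc/ceph/* root@10.7.2.51:/etc/ceph/

    # back on the Pi, mount it
    sudo mount -t ceph 10.7.2.27:6789:/ /mnt/ceph -o name=admin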

If you go cd /mnt/ceph, there it is, there are all the files. Now, there's so much more I could show you with Ceph. Actually, let me show you my current cluster right now. I've got 9 million objects, a bunch of PGs. You can see, if I look at my OSDs,

all the data is pretty much distributed evenly, except for the SSDs, which is something I haven't shown yet. You can do storage tiering, you can do block storage, you can integrate Ceph with Proxmox, which I do have integrated right now. Object storage too. If I go to Block and Images, these are all hard drives on my Proxmox box. You can create images, create hard drives, block storage here as well. You can create an extra drive and mount it in Windows, like an internal drive.

It's so cool. If you want to see more, let me know, comment below. Maybe I should make a part two, or maybe do some bonus videos on the Academy. I'm down for it. Anyways, that's all I got. I'll catch you guys next time.
