Added Self Centralizing and Cloudflare Post

This commit is contained in:
Xavi 2023-02-19 09:26:29 -08:00
parent 60547d713c
commit 1af1ef39f6
3 changed files with 157 additions and 0 deletions

View File

@@ -1,5 +1,8 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
description:
categories:
tags:
---

View File

@@ -0,0 +1,128 @@
---
title: "Self_centralizing"
date: 2023-02-19T09:08:58-08:00
description: Home Server - Remote Access
categories: ["Top_of_the_Stack"]
tags: ["Home Server", "Remote Access"]
---
I have been pretty busy the last couple of days getting my home server
configured. The reason I'm converting my old workstation to a server is
that I recently purchased a pretty beefy laptop. I've recently found
myself in need of more mobile computing power for projects and
recreation **[as much as I love my Thinkpad X220, it doesn't cut it
when I am trying to get some games in with the boys]**.
This week I did quite a bit of research on software that would allow me
to be more self-sufficient **[in a digital sense, it won't be any help if
the grid goes down]**. I was able to install some software **[with
some troubleshooting]** but have yet to test it enough to conclude
whether it is overkill for my purposes. A summary of what I've done:

- Install a hypervisor - *Proxmox*
- Purchase and reconfigure storage for redundancy and increased capacity
- Install and configure a remote work environment
- Install a NAS OS to trial - *OpenMediaVault*

I'll do a quick rundown of all these points.
#### Hypervisor
A
[hypervisor](https://www.redhat.com/en/topics/virtualization/what-is-a-hypervisor),
from my understanding, is the software that
hosts and manages the guest OSs **[a guest OS is the fancy way of saying
virtual machine/container/etc]**. There are Type 1 and Type 2
hypervisors. Type 2 is software like
[VirtualBox](https://www.virtualbox.org) that runs on top of another OS. For example,
you have Windows installed and would like to try out Linux. You can
install VirtualBox *ON* Windows and install Linux within VirtualBox. The
point is that VirtualBox has to go through Windows to interact with the
bare metal. A Type 1 hypervisor *IS* the OS that is running without
other software between it and the bare metal. That's why Type 1
hypervisors are sometimes called bare metal hypervisors. The benefit of
a Type 1 hypervisor is less overhead, since there is no host OS to
support and the hypervisor software itself is typically extremely lightweight.
I settled on using
[Proxmox](https://www.proxmox.com/en/),
which is a bare metal hypervisor. This is so
I can stage and deploy a good number of containers and VMs without being
throttled by a host OS. Additionally, Proxmox is an open-source
project, which is always a plus **[pretty close to a must in my
book]**.
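Proxmox exposes both a web GUI and a command-line interface on the host. Just as a rough sketch of what day-to-day management looks like from the CLI - the VMID, name, and resource numbers below are placeholders I made up, not values from my actual setup:

```bash
# show the installed Proxmox VE version
pveversion

# list existing virtual machines and containers
qm list
pct list

# sketch of creating a new VM (VMID, name, and sizes are placeholders)
qm create 100 --name test-vm --memory 2048 --cores 2 \
    --net0 virtio,bridge=vmbr0
```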
#### Storage
My old tower had *two 512 GB SSDs* for the main partitions for both my
Linux **[which was my daily driver during the pandemic]** and Windows
**[which was basically just for games]** installs. It also had a *1TB
HDD* which was used for storage on my Linux install.
Because I want to implement a self-hosted cloud storage solution and/or
a media server, I wanted not only to increase the capacity, but also to
add some redundancy in case of a drive failure. So I
went out and got myself *two more 4TB HDDs*. I actually 3D printed two
hard drive caddies for my case that I found on
[Thingiverse](https://www.thingiverse.com/thing:4712276).
I had another two 2.5 inch drive caddies
that were meant for a different case, but I just secured them where I
could fit them with some zip ties.
So storage in total currently consists of:

- *two 512 GB SSDs* - one that will be used as a *boot partition for my
  hypervisor* and the other as a *cache*, *scratch*, or *boot partition*
  for the guest OSs
- *two 4TB HDDs* - which will be configured as a single 4TB mirrored
  volume for data storage **[basically this means the data will be
  written twice, once on each drive, to ensure that failure of one drive
  won't lead to any data loss]**
- *one 1TB HDD* - which will just be used for slow, low-priority data
  **[no redundancy, no speed, kinda the odd one out]**
I implemented the redundancy listed above using *ZFS*.
[ZFS](https://en.wikipedia.org/wiki/ZFS)
is a filesystem which allows for the disks to
be collected into *storage pools* which can then be divided cleanly into
distinct sets of data. I find myself always returning to this
[video](https://www.youtube.com/watch?v=mLbtJQmfumI)
by *Linda Kateley* that explains the system
extremely clearly.
Here is the list of commands I used to create the configuration
mentioned above.
```bash
# single-disk pool on the spare SSD - fast, no redundancy
zpool create fastpool /dev/sda
# two-way mirror across the 4TB HDDs - every write lands on both disks
zpool create safepool mirror /dev/sdb /dev/sdc
# single-disk pool on the old 1TB HDD - neither fast nor redundant
zpool create badpool /dev/sdd
```
I created three pools, one called *fastpool* - which is the other SSD
that isn't my boot drive for Proxmox - another called *safepool* -
which is the mirrored 4TB storage pool - and *badpool* - which is the
one that is neither fast nor redundant.
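Once the pools exist, ZFS lets you carve each one into datasets, which are the "distinct sets of data" mentioned above. A minimal sketch of what that might look like - the dataset names here are examples I made up, not my actual layout:

```bash
# check the health and layout of the mirrored pool
zpool status safepool

# carve example datasets out of the mirrored pool (names are made up)
zfs create safepool/media
zfs create safepool/backups

# per-dataset settings, e.g. transparent compression
zfs set compression=lz4 safepool/media

# list all pools and datasets along with their usage
zfs list
```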
#### Installed Operating Systems
I fired up two guest operating systems to get myself started. One is an
Arch Linux installation that copies my dot files from my old
workstation. This just means that my configuration - window manager,
terminal emulator, keybindings, etc - is transferred from my old daily
driver. The other is an instance of OpenMediaVault where I'll be
storing my data **[media server data?]**.
For the workstation install I downloaded the Arch Linux ISO and uploaded
it to *homeserv* through the Proxmox web GUI **[which is reached on
port 8006 by default]**. I chose to make this a container because
containers are a little more lightweight and I don't plan on doing any
intense computing on it. I'll have to delve deeper into what the
significant differences between a VM and a CT are in the future.
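For what it's worth, the CLI route to a container goes through a template rather than an ISO. A rough sketch of how that looks - the template filename, VMID, storage names, and resource numbers are placeholders, not my exact values:

```bash
# refresh the list of downloadable container templates
pveam update

# download a template to local storage (filename is a placeholder -
# run `pveam available` to see the real ones)
pveam download local archlinux-base_20230101-1_amd64.tar.zst

# create the container (VMID, hostname, storage, and sizes are placeholders)
pct create 101 local:vztmpl/archlinux-base_20230101-1_amd64.tar.zst \
    --hostname workstation \
    --cores 2 --memory 4096 \
    --rootfs local-lvm:16 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp

# start it and attach a shell
pct start 101
pct enter 101
```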
I'll give the details on the OpenMediaVault installation in a later
post because there are some bugs in the installer that required some
interesting workarounds **[and this post is 300000 lines long and a day
late sooooo....]**.
I'll try to write up some guides this weekend to document the entire
process while it's still fresh in my mind.

View File

@@ -0,0 +1,26 @@
---
title: "Cloudflare_Died"
date: 2023-02-19T09:03:44-08:00
description: Web Development - Administration
categories: ["Top_of_the_Stack"]
tags: ["Web Development", "Administration"]
---
What unfortunate timing! I was about to write up this post when I lost
access to my VPS because Cloudflare went down.
[Here](https://blog.cloudflare.com/cloudflare-outage-on-june-21-2022/)
is the *Cloudflare* postmortem where they
discuss what happened.
It looks like they were trying to "[roll] out a change to
standardize [their] *BGP*" and, from my understanding **[which I
would take with about a cup of salt]**, moved the reject condition
above "site-local terms". So the request would be rejected before
being able to reach [origin
servers](https://www.cloudflare.com/learning/cdn/glossary/origin-server/)
**[as opposed to an edge or caching
server]**.
I might look more into BGP because I don't know about it at all. One
for the stack I suppose.