brandonllocke The blog of Brandon L. Locke.

Windows Ain’t That Bad…

I have to admit it. Windows isn’t that bad… provided that you use it sparingly, have Windows Subsystem for Linux installed, and can deal with awful desktop paradigms.

Use Sparingly

“How did Windows become the default for computers?”

I’m not sure I’ve ever been more proud than when one of my younger employees said this out of complete and utter disgust with Windows. As a very inexperienced member of the team, he had been assigned to swap users’ machines around when needed. He had done it a few times on macOS (via Time Machine) and even once on Linux (by just moving the hard drive). So when we needed to transition a Windows user from one machine to another, he thought, “I’ll just grab a backup using Windows Backup and restore the whole thing to the other computer!” I had to break it to him that the computers he was swapping were not the same model or even brand, and that he would likely have more issues once that backup was restored than he wanted to deal with. His face fell and he uttered the famous words above.

Unless you buy in completely, you don’t really get many of the benefits of the Windows platform. Its inability to easily handle driver and hardware changes makes it annoying to move users between machines. Its lack of a solid command line forces you to manage everything from a GUI when a CLI would be much more efficient.

Needless to say, my recommendation is to use Windows sparingly. For instance, in my home, I have one machine running Windows. It’s for Steam games… that is all. Even then, it’s not what we typically play on directly: I’m usually using Steam In-Home Streaming to stream games from the Windows computer to the many other Linux computers in my house.

Windows Subsystem for Linux or Bust

The addition of WSL to Windows was pretty interesting. It meant that overnight I could go from “this is uncomfortable to accomplish on Windows” to “I can use all the tricks I normally use to accomplish things on Windows”! It does strike me as a bit odd that the only reason I’ve really found Windows more tolerable lately is that it includes Linux now. That’s not great marketing for Microsoft! However, perhaps if I hadn’t already been a Linux user, it would have kept me on Windows far longer, seeing as I wouldn’t have needed to clear some of the hurdles required when moving from Windows to Linux. I guess we’ll never know!

Poor Desktop

Windows has made strides in its desktop paradigm. It now has something similar to desktop workspaces. It allows you to easily shift window locations with your keyboard. It even moved toward a “dock” paradigm with the new taskbar of post-7 Windows versions. Yet, it feels like too little, too late.

I’m a tiling window manager user (saying that is pretty close to saying “I use Arch btw…”). It took me a week or two to get over the learning curve, but once I did, it changed the way I used a computer. It just became easier to get things done. Simultaneously, it became harder to work on a computer using a floating window manager. When I sit down behind a coworker’s Mac or Windows computer, I feel like both of my arms are tied behind my back.

Why can’t I just hover over a window to bring it into focus? Why does your Caps Lock key turn on your Caps Lock? What’s the keyboard shortcut to shove this window 10 digital desktops away from me while I work on these two windows I want side by side? Seriously, why is it so hard to put these two windows side by side with another longer window underneath both?

By now you can guess that I just find the floating window paradigm clumsy. I feel the same way about Gnome, KDE, Openbox, Cinnamon, etc. I understand that it’s a bit cult-like to say that tiling window managers make floating windows completely obsolete. It’s perhaps more accurate to say they compare like cars and bicycles: cars are much more efficient than bicycles, but bicycles are much easier to use.

The Takeaway

Use Windows if you want. I don’t care. To poorly paraphrase a Bible verse: “as for me and my house, we’ll use Linux”. In the end, they are both tools. I feel that Linux is a much more efficient tool for the work that I do. You may feel differently. I might need a jackhammer and you may be fine with a hammer and chisel. Each has its place. Just make sure to learn what you use! If I hear one more person say “I’m just not good with computers despite using them every day for the last 30 years of my life…” I’m going to lose it. That’s another rant for another time though…

Docker Compose for Beginners

A lot of people get stuck on Docker in the “what is it?” phase. Is it a virtual machine? Is it a sandbox? How does one get things into it (and out of it)? Lord knows I had these questions when I first started.

Beginners tend to miss that containers are meant to do one thing and one thing only; instead, they load them up with everything needed for a project and then rush that out into the world. This quickly comes back to bite them when they need to upgrade one part of the whole and everything goes wrong.

The Golden Rule of Docker is to keep things separate and keep things pure. A MySQL container is meant to be just the code necessary to run your database. It’s not your database data, it’s not your website data, it’s not any of that. It is to remain stateless. Think of it like a VM that has a base system and only MySQL installed, with only enough space to hold the MySQL process itself, not the actual database data. Beyond this, whenever you restart that machine, every setting you changed is set back to the default. That’s essentially what a pristine container is.
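You can feel this out for yourself (assuming Docker is installed) by starting a throwaway container, changing something inside it, and then starting it fresh:

docker run -it --rm debian bash
# inside the container: touch /root/hello, then exit
docker run -it --rm debian bash
# inside again: ls /root shows nothing; the container started pristine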

Bringing It All Together

So that sounds great as long as I only need MySQL. What if I’m using an app that requires multiple processes? For instance, Wordpress? You need the Wordpress application and MySQL for anything to even work. Should we make a container that has both in it? (Hint: the answer is no.)

Because we want to keep these things separate, we’ll create two separate containers to process our content and simply point them at each other. The Wordpress application will talk to MySQL when it needs to. Sounds great in theory, but how? As with everything awesome, this can be set up with a simple yml file. Here is an example file for Wordpress that we will then break down.

version: '3.3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress

volumes:
    db_data: {}

Breakdown

version: '3.3'

This is just letting Docker know what version of their docker-compose syntax you’ll be using. Version 3 is the current version, but I still have some version 2 compose files around and they still work.

services:

This lets Docker know that you are going to define the services, or processes, that your application will use to accomplish its work. The section directly beneath it defines each of those services.

   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

Here, we are defining all the aspects of the MySQL process. We give it an easily memorable name with the db: line. Any other container in the same Docker network can use this name as a DNS entry. In the image: mysql:5.7 line, we are telling Docker which version of MySQL we want to use. This is the name of a Docker image on Docker Hub. Most major software is available under a single name, but images created by third parties usually come in a username/image format. If you use one of those, you need both the username and the image name in this section.

The volumes: section declares where your storage should be located. As I mentioned before, containers are meant to be stateless. No data should be stored in your MySQL or Wordpress containers. So we have two main choices about where to store the data: either in a named Docker volume (think of it as a storage-only container) or mapped directly to the host system’s filesystem. Each has its own advantages and disadvantages (which I won’t get into now). The important thing to notice is that I’m not storing the entire filesystem in the volume, just anything under /var/lib/mysql, that is, the database itself. This is the only folder on the MySQL container that will change when we introduce our database to it, so it’s the only one we need to save.

The restart: always field is pretty easy to understand: if the container should fail for any reason, restart it. Then we set some environment variables for use in the program, in this case the database name and the user/password for Wordpress to use. So MySQL is ready, but we still don’t have Wordpress set up. We need to define another service!

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress

Just as with the db container, we start by giving this container a helpful name, wordpress:. What’s next? Oooooh, a new thing. With depends_on: we are telling Docker that this service can’t really run unless another one is running. Why didn’t we define that relationship for db though? Simply put, you CAN run MySQL without running Wordpress, but Wordpress cannot do much of anything without MySQL. Again, with image: wordpress:latest we are telling Docker exactly what version of the software we want.

The ports: section tells Docker how to connect ports on the host machine to ports internal to the container, in host:container order. So with "8000:80" we are telling Docker to make available (via port 8000 on the host) whatever the container serves on port 80. Therefore, in order to access this Wordpress site, we’d have to put in http://ip.address.of.server:8000. restart: always is the same as before, and the environment: section should also look familiar, as it covers the same ground the previous section did: declaring the database credentials that Wordpress should use.

Also note that for the host, we just put db:3306. We didn’t need to declare that port in the db service above, because these two containers exist in the same Docker network, which allows them to communicate with each other. Additionally, we didn’t need to guess at the IP address of the db service, because Docker automatically handles DNS lookups for other resources in the same Docker network. It’s a bit like having a private network subsection hosting your db and wordpress services; you only need to make sure that port 80 on the wordpress service is reachable from the outside.

volumes:
    db_data: {}

This final little part just defines the db_data volume that we chose for storing our database in.

Notice how I don’t have any storage set up for the Wordpress container? This means every time I start up the Wordpress container it will be completely brand new (except for anything that resides in the database). So, all my posts, users, etc. will survive, but what if I want to use a custom theme? What if I want to store images in the instance for use in other places? I have two options: store these things in another named volume, as I did with the database:

volumes:
    - wp_data:/var/www/html

or store it alongside my docker-compose file:

volumes:
    - ./wp_data:/var/www/html

You’ll immediately notice that this is very similar to the way we defined our ports, in host:container format. We want to map the container’s /var/www/html folder either to a named volume called wp_data or to a wp_data directory sitting alongside our docker-compose file. (Note: if you go the named volume route, you’ll need to declare it in the top-level volumes: section, below the declaration of db_data, as shown below.)
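For example, the end of the file would then read (just extending the earlier top-level volumes: section):

volumes:
    db_data: {}
    wp_data: {}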

Launching the Containers

So we’ve created the docker-compose.yml file. What do we do with it? That’s the easy part! Just change to the folder containing the docker-compose.yml file (my usual file structure is /docker/application_name/docker-compose.yml, for example /docker/wordpress/docker-compose.yml). Once you’re in the directory, just run:

docker-compose up -d

This command launches all of the containers and hooks up any networking and storage you described in the yml file. The -d flag simply launches everything in the background so you don’t need to keep your terminal session open for it to continue. However, if you experience an issue, it can sometimes be helpful to launch without the -d flag so you can see what is happening and which container might be failing.
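If the stack is already running in the background, you can still follow the containers’ output at any time with:

docker-compose logs -f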

Shutting it Down

So we can start it up, but how do we shut it down? Equally simple. Just run:

docker-compose down

This gracefully shuts down the containers. Any data you mapped out to named volumes or the host filesystem sticks around for the next docker-compose up.
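And at any point between those two commands, you can check the state of everything defined in the file with:

docker-compose ps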

The Takeaway

Finding docker-compose has changed the way I use Docker. Despite the fact that it’s typically used to orchestrate multiple services, I now use it for launching even single container apps. It’s a great way to have a completely reproducible build of a docker container. Before, I would save my docker container launch code in a bash file and hope for the best, but docker-compose.yml is a much more elegant solution.

Additionally, it’s been very easy to get people started with Docker because of docker-compose. I can share my working configs and I know that the build-once, run-everywhere aspect of Docker will make it much easier to get services up and running.

I Kissed Social Media Goodbye

I just started listening to an audiobook by Jaron Lanier called “Ten Arguments for Deleting Your Social Media Accounts Right Now”. It obviously has a provocative title, and its promise to dive into the hidden world of data collection and engineering really sold me on the book. (Notice that I did not link to it via GoodReads/Amazon/etc. That seemed a bit too contradictory given the subject matter.)

What I Like About the Book

Lanier has a pretty interesting way of breaking down the manipulative aspects of social media, search, and the like. He covers the pretty basic “they’re watching you” territory and even expands on that into a “they’re watching us all and that’s more powerful” message. We tend to think of our personal data as somehow siloed from other people’s data. These companies are aggregating that data to tell us things about ourselves that even we don’t know. For instance, we could probably never describe why we might like one image of a cat more than another. It’s probably not even something we approach with any sort of actual mental energy. However, given the vast amount of data available, companies can be relatively sure how you will react to that image of a cat. Beyond that, they can extrapolate what placing the cat image next to an ad will make you feel, think, and maybe even do. That’s pretty scary to think about.

Lanier also attempts to determine why discourse on the internet has become so negative and out of control. A lot of people point to some idea that the world is getting inherently worse or that people have abandoned such and such a religion or practice, but the truth is probably much simpler than that. In a world filled to the brim with data, the stuff that cuts through is the stuff that makes us feel, and the feelings we feel most strongly are negative ones. Therefore, the quickest way to make yourself feel good online is to make someone else feel bad. This does spill over into the real world, and despite the negative reaction found in a lot of low-star Amazon reviews, I believe his political observations carry weight.

What I Don’t Like About the Book

The book itself does have a few downsides. For instance, it has a tendency to repeat itself often, sometimes using the same illustration twice in the same chapter (but far enough apart that it makes you wonder if you’ve gone back in time).

My Biggest Issue

The thing that left me a bit flabbergasted about the book is a tangential argument that Lanier makes, one that seemingly lays all of the responsibility at the feet of “Free and Open Software”. At first, I was sure I had misheard the audiobook, but the argument continued and became even less sensible than I could have expected. Lanier’s main argument appears to be that people began to expect software to be “free” (as in money) because of the free software movement; therefore, the only way for software developers to make money was to include ads in their software. Next, he appears to blame Free Software because things like Facebook/Google/etc. run their software on Apache servers. Throughout this section, Lanier makes almost constant reference to the tightly controlled and secret algorithms that these companies use.

First, how can you have such a poor understanding of the Free and Open Software movement that you believe the main factor is cost? Second, how can you hold that Free Software proponents believe the source code of software should be freely available (a point that Lanier makes) and not see how at odds that is with the tightly controlled and secret algorithms these companies use? Third, how would it be Free and Open Software’s fault if someone uses that software for their own purposes? It would be like suing Microsoft because someone used Microsoft Word to write a manifesto that resulted in some horrible event.

If anything, Free and Open Software would be an answer to the problems described in the book, not their cause. If everyone could look at the algorithm, they would be less likely to fall for the tricks it pulls on us.

The Takeaway

For now, I’ve left my social media accounts where they are at. The only two social media services that I used with regularity were Twitter and Reddit. I have profiles on many social media platforms, but they act as a static page. I log in when I need to update my public information (work history, education) that I know a potential employer may be looking for. I have, however, changed my passwords to super long random strings to make it less likely for me to log in when I’m just bored. I’ve also removed the apps for checking the feeds from both Reddit and Twitter. I will certainly miss the memes, but given the contents of this book, they seem to be sullied anyways.

Journey Into Python

So, I’ve been teaching myself Python lately. This is partly due to work and partly due to having wanted to do it for a few years now. Pretty quickly after I moved to Linux full time many years ago, the whole concept of programming and scripting really excited me. I quickly learned that, with bash, I could make the computer do in moments what would normally take me multiple hours of clicking here and clicking there, and often still not get done.

Python for Work

At work, Python has come in handy with a number of projects that I’ve had added to my To Do List. We use a project management system that has a decent RESTful API. From there, I’m able to get lists of all projects in a certain state, all projects that have entered that state since the last time we checked it, etc. I have a whole set of scripts worked up for multiple reasons. Beyond our project management stuff, I’ve been using Python to work with our web hosting stuff, video encoding boxes, and multiple other applications.
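For a sense of what those project-management scripts do under the hood, here is the shape of the request they make (a purely hypothetical endpoint, token, and parameter; every system’s API differs):

curl -s -H "Authorization: Bearer $API_TOKEN" \
  "https://projects.example.com/api/v1/projects?state=in_review"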

Python for Me

I’ve been writing Python scripts for a few different reasons outside of work. First, I use a music streaming package that features an API, and I’ve been working on a terminal-based client for it. Right now, it’s pretty much useless: it just downloads songs based on a pre-selected list. Eventually, though, I want to build out an ncurses interface for it.

New Blog Layout and Static Blogging Engine

EDIT: Ignore this, I’ve moved back to Pelican because Python == ‘amazing’.

I’m now using Jekyll to create my blog. Since I started working at The Church Online, I’ve seen a lot of dynamic sites at work, and I must admit there is a definite draw there. I’m just not into dynamic content on my own personal site, for a few main reasons.

1. I’m cheap.

First and foremost, I’m cheap. I really don’t want to spend a ton of money on a website/blog. Most of you are probably saying “You know you can get hosting for pennies on the dollar with x/y/z company, right?” Sure, that is an absolute possibility. On the other hand, you can see from my posting frequency that my website is a sort of waxing and waning pursuit of mine. Sometimes, I’m really into it. Other times, meh. I’d prefer to keep my costs low.

2. I don’t really need crazy features (even though they are nice).

We’re constantly buying plugins and features to add to our Wordpress and Joomla sites at work, and some of them are really compelling. Fortunately, I’m just not in great need of a lot of them right now. The previous version of this blog was simply a list of articles that I’d written with links to the pages that held them. I never used tags, categories, etc. and had no thought in my mind to use anything like comments or whatever new-fangled things the kids come up with nowadays. I’m starting to use tags and categories now, since they’re built right into this platform and I’m getting more comfortable with the technology behind it, but it still feels a bit superfluous to me.

3. I like simple things.

The backends of the dynamic sites built on Wordpress, Joomla, and the like are gorgeous and give you a ton of flexibility when writing. It’s really not much different than if Microsoft Word (or LibreOffice, if you will) had a giant “publish” button at the end. My issue is that I swing back and forth between wanting these nice things laid out for me and wanting to get back to simplicity with just me and my terminal. Jekyll allows me to write posts using Jekyll-Admin or by just popping into vim in a terminal and plugging away[1]. On top of that, I’m messing with databases and all that jazz all day at work; I really don’t need to come home and do more of it. Database management is not one of my favorite parts of my work. I’d rather be learning to automate something or configuring a new piece of software that will help me automate something else.

4. Text files are archivable.

I LOVE TEXT FILES. Seriously, anyone who can’t find the joy in a solid txt is out of touch with the beautiful simplicity of it all. The use of text files to configure software is one of the things that drew me to Linux. Text files are so simple and yet so powerful. If tomorrow Jekyll ended development, shut down their site, and, for some odd reason, the executable stopped working, I’d still have all my posts in Markdown when I wanted to move them to a new platform/site. No mucking around with getting content out of a database or squeezing it through a converter. It’s right there. If I ever wanted to archive my posts, I could simply zip up my “posts” folder and be on my way. It would likely be less than a few kilobytes when all was said and done. I LOVE TEXT FILES.

Ultimately, it’s a personal preference thing. I get far more flexibility, security, and ease-of-use out of a static blogging platform like my previous program, Pelican, or my current platform, Jekyll, than I ever would with Wordpress or Joomla.


[1] I should do a post on why I think I swing back and forth between GUI and command line desires and why I almost always end up back in the terminal.

Why I’m In Love With Docker.

Docker has been on the scene for a while now. It’s being used to simplify workflows all the way from the first steps of development through to launch and onward with support and upgrades. I’ve been messing with Docker for roughly a year or so now and I have to admit that I am in love. It’s changed the way that I do a lot of software testing and deployment, both at home and at work. The reason for this shift in my normal workflow is fivefold.

1. Docker makes software testing simple.

Back in the day, installing and configuring new software was a matter of reading docs, changing config files, and ensuring dependencies were all accounted for. Docker wraps up a lot of that into a quick and easy script called a Dockerfile. It installs dependencies, sets default configs (although main options are often editable), and gets everything up and running without a great deal of hassle.
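A Dockerfile is short and readable. Here is a minimal sketch (not from any particular project, and assuming an nginx.conf sits next to it in the build directory):

# start from a small base image
FROM debian:stable-slim
# install the one service this container exists to run
RUN apt-get update && apt-get install -y --no-install-recommends nginx
# bake in a default config
COPY nginx.conf /etc/nginx/nginx.conf
# run in the foreground so the container stays alive
CMD ["nginx", "-g", "daemon off;"]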

Beyond this, due to Docker’s “containerization” feature, Docker doesn’t spread multiple versions of libraries, programs, etc. all over your filesystem.

All of this contributes to Docker’s ability to make software testing simple. I love documentation and config editing as much as anyone else, but first and foremost, I want to see if the software has the features, abilities, and interface that benefit my users. Essentially, let’s just get it up and running, run some tests, and if it seems like something I want to pursue further, I’ll dive deeper into the docs and set it up for my specific use-case.

2. Docker performs the same everywhere.

One of the issues that I come up against often is that software reacts differently on different platforms. Varying versions of this or that library mean that the website that worked beautifully on one server won’t even start on another. Again, Docker’s containerization feature makes this a thing of the past. Because each container acts like a self-contained unit, it works the same anywhere Docker can be installed. I used to have to use full-fledged virtual machines to get this type of “work anywhere” functionality. Now I can do the same without all the wasted overhead.

3. Docker relies on already established skills and tools.

In a seemingly contradictory statement to the one in the first section, Docker also lets me use my previously learned skills of reading and understanding documentation, editing configuration files, and installing dependencies to make a container completely custom.

Like I said, while I’m testing software, I just want it up and running. Once I have done my due diligence regarding the software, I want to be able to tweak and change almost everything about it to make it perfect. I can do that with my existing knowledge of the underlying Linux system.

4. Docker increases system flexibility.

Normally, services will share the same instance of a program when they also share a server. Got a Wordpress and Joomla install on the same server? Chances are they are both using the same database server with separate databases. This is great when everything needs the same version, but what happens when they need different versions? What happens when you need to update a version for one application but it’s incompatible with the other application? Docker allows you to run different versions of the same software without problems or extra work, as the quick sketch below shows. Tied into that is number 5.
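Something like this (hypothetical container names; standard docker run flags) gives each application its own database server at its own version, running side by side:

docker run -d --name joomla-db -e MYSQL_ROOT_PASSWORD=secret mysql:5.6
docker run -d --name wordpress-db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7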

5. Docker increases system uptime.

So we just found out that we can run separate instances of the same application in different versions. This can actually increase system uptime in general. Using our previous example, you have a Joomla instance and a Wordpress instance both using the same MySQL instance. Well, if Joomla does something and MySQL ends up dying, it kills not only your Joomla instance but also your Wordpress instance. With Docker, Joomla and Wordpress have their own MySQL containers. If the Joomla instance of MySQL dies, you’ll still have downtime on the Joomla site, but it won’t kill your Wordpress application at the same time. And just in case someone is worried about having to watch so many different services now, Docker has built-in tools to notify you automatically if something does “die” on its own. Containers can even restart on their own on failure!
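The watching part can be as simple as streaming the Docker daemon’s own event log, filtered down to container deaths:

docker events --filter event=die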

Docker is pretty amazing and I’m fairly certain it will be a tool I use on almost a daily basis.

Using ddclient on the Raspberry Pi.

The Raspberry Pi is anything but a powerhouse, but it excels in a lot of lightweight applications that rely on 24/7 uptime. Its power-sipping processor makes it amazing for simple things like acting as an ssh gateway for your home network or hosting any number of network applications from inside your home network. Today, I’ll go over my process to set up ddclient on the Raspberry Pi to automatically update a domain that I have purchased with the external ip of my network.

Note: This assumes you have a working Raspberry Pi already on your network. Mine is running Raspbian Jessie. The domain is registered at NameCheap. This tutorial should work with almost any domain registrar or distribution, with some minor changes you’ll have to make yourself.

First, install ddclient on your Raspberry Pi:

sudo apt install ddclient

Second, create a config file on your Raspberry Pi:

sudo touch /etc/ddclient.conf

Third, open the file in your favorite editor (mine is vim):

sudo vim /etc/ddclient.conf

Fourth, plug in the relevant information according to this template:

# Configuration file for ddclient
#
# /etc/ddclient.conf

use=web, web=dynamicdns.park-your-domain.com/getip
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=*your_domain*
password=*the password you get from the dyndns section of the namecheap website*
*subdomain*

To clarify, if I wanted to make raspberrypi.brandonllocke.com resolve to my RPi, my config would look like this:

# Configuration file for ddclient
#
# /etc/ddclient.conf

use=web, web=dynamicdns.park-your-domain.com/getip
protocol=namecheap
server=dynamicdns.park-your-domain.com
login=brandonllocke.com
password=*the password you get from the dyndns section of the namecheap website*
raspberrypi

This will allow you to update the ip address manually. However, I use Linux because it allows me to automate things easily, so let’s daemonize this whole thing and let it run every so often to auto-update the ip address whenever my ISP decides to change it.
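Before daemonizing, it’s worth testing the config with a one-shot run. ddclient’s standard debug flags will print exactly what it sends and receives, which makes typos in the config easy to spot:

sudo ddclient -daemon=0 -debug -verbose -noquiet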

First, create a file in /etc/default:

sudo touch /etc/default/ddclient

Second, plug in the relevant information according to this template:

# Configuration for ddclient scripts 
#
# /etc/default/ddclient

# Set to "true" if ddclient should be run every time DHCP client ('dhclient'
# from package isc-dhcp-client) updates the systems IP address.
run_dhclient="false"

# Set to "true" if ddclient should be run every time a new ppp connection is 
# established. This might be useful, if you are using dial-on-demand.
run_ipup="true"

# Set to "true" if ddclient should run in daemon mode
# If this is changed to true, run_ipup and run_dhclient must be set to false.
run_daemon="true"

# Set the time interval between the updates of the dynamic DNS name in seconds.
# This option only takes effect if the ddclient runs in daemon mode.
daemon_interval="300"

You can set the interval to anything you might want. I would recommend not setting it super low, as this can hammer online resources and that’s a really good way to upset people and push resources offline.

You can start ddclient’s daemon with the following command:

sudo service ddclient start

It should start up automatically on reboot, but after you reboot you can check on the status of the service by running:

sudo service ddclient status
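If the status looks off, ddclient logs through syslog on Raspbian, so you can dig into what it has been doing with something like:

grep ddclient /var/log/syslog | tail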

You should be set! I hope this helped!

Install Syncthing on Debian Jessie or Stretch.

Syncthing is a great tool for liberating your data from cloud storage providers. While Dropbox, SpiderOak, and Google Drive are great services, the trade-off for such a great and simple product is privacy and loss of control. As an example, Copy.com was a great service that had an excellent Linux client and even a headless client for server installs; unfortunately, it was shut down just a few days ago when Barracuda decided to abandon the service. I had it installed on my parents’ computer as an alternative backup and completely forgot until my mother sent me a message saying the computer was telling her her data would be deleted. I had long since removed it from my own workflow, and since I was often just using my laptop, it wasn’t necessary. Now that I’ve reintroduced my desktop to my workflow, the need for such software has arisen again.

Installing Syncthing

While Syncthing isn’t available in the Debian repos, there is an apt repo officially run by the Syncthing project. Add it to your apt sources like this:

# Add the release PGP keys:
curl -s https://syncthing.net/release-key.txt | sudo apt-key add -

# Add the "release" channel to your APT sources:
echo "deb http://apt.syncthing.net/ syncthing release" | sudo tee /etc/apt/sources.list.d/syncthing.list

Then update your repos and install syncthing:

sudo apt-get update
sudo apt-get install syncthing

Note: doing this on one machine only won’t do much of anything. There is nothing to sync with, so you’ll want to do this on each computer you want to sync.

Now that the application is installed, let’s run it!

syncthing

Open up your browser and go to:

http://localhost:8384

You’ll be greeted with a nice web interface where you can find your ID and add other “devices” using their IDs. Once you do that, you can add folders and share them with your devices. The web interface is very intuitive and easy to use.
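Running it by hand in a terminal is fine for a first test, but you’ll probably want it to start on boot. The Debian package ships systemd units for exactly this; assuming a systemd-based install and a user named, say, brandon, something like the following should do it:

sudo systemctl enable syncthing@brandon.service
sudo systemctl start syncthing@brandon.service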

The Takeaway

I’ve been using it for a few days now and it works very well! It really only works if you have a machine that will remain on all the time. I use my htpc/server as a main hub and then it populates my laptop and desktop with changes made from each other. At first, I thought I was missing the web interface that allows me to grab a single file from anywhere. The truth is, there are still a ton of ways to do that. I could use ftp, scp, or even http, if I wanted to set it up and secure it. I just don’t really need it right now. I can just pull it down with my phone over ssh.

Installing EW-7711ulc on Debian Jessie.

A few months ago, I was having issues with my home network. Moving more than 10 feet away from the router would drop my connection percentage down to under 50%, which means I was connecting at less than half of the already slow 56Mb connection. I’ve grown used to Gb ethernet from when I only had desktop boxes, so I can only stand wireless data transfers with small files in small numbers. Anyone with even a small understanding of wireless networking can probably guess what the issue was… interference from nearby networks. I live in a small town and the houses are very close together. In addition, a lot of the nearby houses have been converted into multiple apartment units each, so the spectrum is just overrun with everyone’s own personal routers. From my apartment, I can easily scan close to 35 different SSIDs!

THE SOLUTION

The solution was pretty obvious: find a way to make my wifi network a little more “unique”. I was already on a good channel, having switched to the less occupied channel 1 (those poor souls on channel 6 were all stuck messing with each other), but it was still crowded. So I decided to join the rest of the modern tech world and invest in some dual-band equipment. The only thing I hate more than slow and inconsistent wifi is a giant dongle hanging off of my tiny 11” laptop. I bought this thing small for a reason; I don’t want to struggle with a USB interface that is nearly the same height as my keyboard. I found the Edimax EW-7711ulc. As far as I can tell, it’s the only “nano” sized device capable of the 5GHz band; I have not found a single dual-band nano adapter in my research. I figured I already had 2.4GHz built into the laptop, so I really just needed an interface for 5GHz. So I picked one up.

THE SECOND PROBLEM

Unfortunately, Debian doesn’t currently support the EW-7711ulc out of the box (and, as far as I can tell, neither does any other version of Linux). I can build things from source; that’s not an issue, and something you actually get pretty used to when running Debian Stable. Anyways, Edimax does provide a Linux driver on their website. I downloaded it, went to compile it, and… no dice. It kept giving me a very vague error that I couldn’t make heads or tails of. This doesn’t appear to be a very popular chipset either, so I couldn’t find much info on fixing the issue.

THE SECOND SOLUTION

I did, however, come across a bitbucket repo that sported a different driver for the same chipset (the MediaTek mt7610u). I gave it a shot and it worked immediately!
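For anyone attempting the same fix: the build was the usual out-of-tree module routine. Assuming the repo follows the standard layout (check its README, since yours may differ), it looks roughly like this:

# from inside the cloned driver source:
make
sudo make install
# load the freshly built module (module name assumed from the chipset):
sudo modprobe mt7610u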

After solving that issue, I just updated my wireless interface in wicd-curses to reflect the new wireless adapter and was able to find my 5GHz SSID immediately. Now my home wifi network flies and I do not have to worry about interference from my neighbors at all!