How to Know Where Your Pi Is (AKA – Configuring Static IPs)

If you’re using your Raspberry Pi as a server of some sort, it might be useful to give it a static IP address — you’ll probably be able to go without this for a long time (years, in my case!), but one day you’ll have a power outage, and when your router comes back up, you’ll find that… well, you’ll find that you can’t find anything!  This is where a static IP address can come in handy.

Caveats

A few notes before we get started:

  • There are alternatives to using static IPs that you might want to look into, including:
    • Not actually caring — DHCP-assigned IP addresses might be just fine for what you’re doing.  If other devices on your network aren’t referring to the IP address of your Pi, you probably don’t need to worry about it.
    • Local DNS — if you’re savvy enough to run a local DNS server, you can refer to the Pi by name – e.g., my-awesome-raspberry-pi.my-network.local.  This is certainly preferable to static IPs, but not everyone will be equipped to do this.
  • These instructions can change with the version of Raspbian (or whatever OS) you’re running — the instructions below work on the Stretch version of Raspbian.
  • You’ll need to make sure whatever IP you assign is off limits in your DHCP server.  This is most likely going to be your router, and most routers have a way to set aside a range of IPs that they will not hand out via DHCP – unfortunately, every router is configured differently, so you’ll need to figure that out first.
    • Warning:  Not doing this first means you’re running the risk of having two devices on the network with the same IP, which will cause… problems.  Make sure you don’t skip this step!
  • Just in case, you should be ready with a keyboard and a display – whenever you change the network configuration of your Pi, it’s possible that you’ll break it, leaving a direct login as the only way of getting in!

Ready?  Ok, let’s go!

What You’ll Need

A few things you’ll need first:

  • A specific IP selected for your Pi that your DHCP server will not assign to another device (see above)
  • The ‘Interface Name’ of the network interface you plan to assign the static IP to — if you’re using Ethernet, this will likely start with ‘eth’, while your WiFi interface will likely start with ‘wlan’.
  • The IP address of your router
  • The IP address of the DNS server your Pi uses (probably the same as your router)
  • SSH access to your Pi

Let’s Do This!

For the purposes of these instructions, we’re going to assume the following:

  • Static IP address to assign:  192.168.0.220
  • Interface Name:  wlan0
  • IP address of our router:  192.168.0.1
  • IP address of our DNS Server:  192.168.0.1
  • Username and password for our Pi:  Yeah, sorry, no…

SSH into your Pi, and run:

sudo nano /etc/dhcpcd.conf

This file configures the DHCP client daemon (dhcpcd) on our Pi, and toward the bottom you should find an ‘Example static IP configuration’ section with a few commented-out lines.

Uncomment the lines, and configure them like so:

interface wlan0
static ip_address=192.168.0.220/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1

Reboot the Pi with sudo reboot, and verify that it worked – now all you need to do is update all of your other devices to point to your new-and-improved IP address!
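Not sure it worked?  A couple of quick commands on the Pi will confirm that the address stuck and that the router is reachable (these use the example values from above):

hostname -I
ip addr show wlan0
ping -c 3 192.168.0.1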


Should Controlling the Smart Home be all about Voice?

Let’s face it, controlling your Smart Home can be awkward.  In its current state, it depends far too much on either your phone or your voice, and while I love that I can use either to control portions of my home, walking into a dark room and needing to pull my phone from my pocket to turn on the lights the ‘right way’ should probably be considered a step backward, not forward.

Voice is clearly the current trend — mainly because it is flippin’ cool — but I don’t think it’s the ultimate solution.  If my spouse is sleeping when I tiptoe into the room, and I want to turn my lamp on to 5% brightness, I sure as hell don’t want to talk to Siri or Alexa.


Which is why my hands-down favorite thing from CES 2018 is the Nanoleaf Remote.  The remote is an amazing display of the creativity that can be applied to controlling your smart home, and while it’s obviously focused around the Nanoleaf Light Panels – the huge, triangular panels that can be assembled into custom configurations and glow to any color you can imagine – it’s also a HomeKit controller.  This means that you can assign any HomeKit scene to a ‘side’ – ‘Wake Up’, ‘Dinner Time’, or ‘Take Out The Garbage’ (you do have a scene for that, right?).

Practically Speaking

I’ll admit that a gigantic 12-sided die might not be a practical home controller for every house, and I’d be hard-pressed to find a use for all 12 sides in more than one or two rooms, but this is the kind of product that we needed – it should spark the imagination of companies and (more importantly!) makers everywhere.  It’s now ok to think outside the box when it comes to controlling your home.  Light switches?  Meh, if we have to.  Motion detection?  Sure, in some cases it’s great, but it’s not perfect.

I’m much more interested in imagining how everyday objects could start to control our homes.  Have a globe in your office?  Spin it to make the room brighter.  Want to watch a movie with the family?  Put a model of an old-fashioned popcorn machine on the table to trigger your Movie Night scene.  Have a kid who’s a dance fanatic?  Let her change the color of her room’s lighting by hanging up the matching color ballet shoes.

There is an opportunity here to not only make our homes smarter, but to make them more ‘us’ — without needing to train our guests on the proper way to turn off a light.  I, for one, hope more companies take a page out of Nanoleaf’s book and start to imagine the possibilities — but also that more ‘makers’ and DIY smart home enthusiasts start to really think outside the box, and figure out what would make their home really theirs.

Where has the current crop of smart home controls failed you?  How would you like to control your smart home, if given the opportunity?  I want to know!

Installing Node.js on your Raspberry Pi

Node.js is a common and powerful server environment for JavaScript, and is handy to be able to work with for just about any Raspberry Pi project — it’s also one of those tools that often requires multiple Google searches to remember how to get installed, so I’ve consolidated my steps down to a single page.  Hopefully, you’ll only need one search next time!  (and hopefully I’ll remember that I wrote this, so I won’t need any!)

  • Install Node.js – you might have a node installation on your Pi already, but chances are it’s not the one you want.  Instead, grab a fresh one:
    • curl -sL https://deb.nodesource.com/setup_9.x | sudo bash -
    • sudo apt-get install nodejs
    • Verify this worked with node -v and npm -v (as of this writing, I get 9.4.0 and 5.6.0, respectively)
  • Install the Yarn dependency manager, which we’ll use to run our app – the standard Debian install (straight from Yarn’s docs at the time of writing) is:
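    • curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
    • echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
    • sudo apt-get update && sudo apt-get install yarn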
  • Install a few handy packages using apt-get – you might not need these, but I find them necessary for many of my projects
    • sudo apt-get update
    • sudo apt-get install build-essential libavahi-compat-libdnssd-dev git

That’s it – easy, when you have everything in place!

What does your Raspberry Pi do, and how secure is it?

I’ve been tinkering lately.  What that usually means for me is that a Raspberry Pi or two get pulled out of the drawer, I spend a little time figuring out what the hell I last did with them, and then I start hooking up some wires, LEDs, or other randomness to make things blink.

Lately, though, my tinkering has resulted in what I expect to be a permanent addition to my home automation suite (more details later – this article isn’t about that).  This means a few things:  always on, network connected, and home connected.  And here be dragons.

What Can That Thing Do?

Being connected to your home and a network at the same time changes the game – occasionally using your Pi to power an AirPlay speaker or a media library is vastly different than hooking it up to your garage door, door lock, or a security camera. It’s now a potential window into your habits, how you live your life, and could even be used to determine whether you’re home or not – this is all data worth protecting, and paying attention to the security implications of that inexpensive little computer is important.

While Linux is generally considered a secure platform, that security depends on a knowledgeable administrator.  When was the last time you checked for OS updates on your Pi?  Did you install the updates made available after the recent KRACK WiFi vulnerability was discovered?  Hell, do you still log in with the ‘pi’/’raspberry’ default password???
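If it has been a while, bringing a stock Raspbian install up to date is just two commands – run them periodically:

sudo apt-get update
sudo apt-get dist-upgrade

The second one pulls in kernel and firmware updates along with everything else, which is exactly how fixes like the KRACK patches get to you.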

If you have a Pi that’s plugged in all the time acting as a server or a home automation bridge, it’s time to pay attention.

What Do I Need To Do???

Because the Raspberry Pi is a general computing device, there’s no single answer – you’ll need to make some decisions on your own.  Assuming, however, that you’re running a standard Raspbian distribution, the following are some things to keep in mind.

Start with Advice from the Source

The Raspberry Pi Foundation itself publishes guidelines on securing your Pi.  Read them – there’s good stuff there that is all relatively straightforward.

Change your Blasted Password!

This one should be dead obvious, but with Raspbian configured to load straight to the desktop without requiring a login, it’s easy to forget what’s going on behind the scenes.  If someone does manage to gain access to your network, it’s dead simple to write a script that will attempt to ssh pi@<every-ip-address-it-finds> using the default password!
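Fixing this takes all of ten seconds.  Log in as the ‘pi’ user and run:

passwd

(or run sudo raspi-config and choose ‘Change User Password’).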

Don’t use the Default Account

The default account is well known – every fresh installation of Raspbian includes it, making it an easy target, even if you do change the password.  After you get rid of the default password, create a new user, and use that one from then on.

Raspberrypi.org does indicate that Raspbian depends on the ‘pi’ user existing, but unfortunately doesn’t explain for what purpose – depending on your needs, it’s worth attempting to delete the user, but be prepared to recreate it if you find that your device is no longer operating.
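As a rough sketch, creating the replacement account looks something like this – ‘alice’ is just an example name, and the group list is a subset of the groups the stock ‘pi’ user belongs to, so adjust it to what your projects actually use:

sudo adduser alice
sudo usermod -aG sudo,adm,gpio,i2c,spi,video alice
# once you've confirmed the new account works (including sudo!):
sudo deluser --remove-home pi
# or, if you'd rather play it safe, just lock the account instead:
sudo usermod -L pi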

Keys, Keys, Keys

If you have SSH enabled, you should be using key-based authentication to access your Pi, eliminating the password as a potential attack vector.  Any remaining attack would require getting hold of your private key, meaning an attacker first needs access to your home computer (you haven’t posted your private key on the Internet anywhere, have you?).

The page listed above has a good overview of how to do this – if you’re new to it, however, it can get a bit technical and confusing, so repeat after me.  “Before touching anything in my ~/.ssh folder, I will read up on what these files mean.”  Said it?  Ok, good – now I fully expect that you won’t accidentally send your private key to a server instead of your public key.  That’s a no-no.
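If you want the short version anyway, here’s a minimal sketch – the first two commands run on your home computer, not the Pi, and the IP is the example address from earlier:

ssh-keygen -t ed25519
ssh-copy-id pi@192.168.0.220
# then, on the Pi, set 'PasswordAuthentication no' in /etc/ssh/sshd_config and run:
sudo service ssh restart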

And of course, if you don’t need SSH, then turn it off – one less potential security hole to be concerned with.
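On Raspbian that’s a single command (raspi-config can do the same thing from its menus):

sudo systemctl disable ssh --now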

Services Available

What services have you added to the Pi?  Is there a web server running?  Do you ever send a password to one of these services?  Are you positive that the password isn’t sent in plain text?

This one is all up to you, but knowing what services you’ve installed that may be available to the network, and protecting them appropriately is critical.  Use HTTPS for your web servers.  Install a firewall if you need to.  Do it!
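As one example, ufw is a simple firewall to start with – this sketch assumes the example network from earlier, and that SSH is the only thing you want reachable:

sudo apt-get install ufw
sudo ufw default deny incoming
sudo ufw allow from 192.168.0.0/24 to any port 22
sudo ufw enable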

If this Pi has been sitting in the drawer for a while, running the same instance of the OS that you installed with your last project, then it’s worth starting fresh – grab the latest version of Raspbian (the Lite version, if you can), and put it onto the SD card.  This way there’s no chance for you to forget about whatever it is you left installed on it from last year.

Awesome Capabilities Deserve Security

There are some truly incredible projects that you can put together using the Raspberry Pi, taking your computing skills to the next level – just don’t overlook the security ramifications involved!  A little knowledge and attention to detail can save you time and frustration later!

Getting your Raspberry Pi on the Network

Got a new Raspberry Pi 3 or Zero W, complete with WiFi support, but need to get it on the network?  It’s easy — just repeat after me (well, on the Pi, that is):

  • Find your nearest keyboard, display, and power supply, and fire that sucker up
  • Type sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
  • Go to the bottom of the file, and add:

network={
    ssid="<your network SSID>"
    psk="<your network password>"
}

  • Save the file with CTRL-X, followed by Y
  • Type sudo reboot

Done!
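If the Pi doesn’t show up on the network, plug the display back in and check what the interface is doing:

iwgetid
hostname -I

The first prints the SSID you’re associated with (no output means the connection failed – double-check the SSID and password), and the second prints the IP address you were assigned.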

Bonus Tip:  Having trouble typing those quotes?  Chances are you’re not in the UK — run sudo raspi-config, go into the Localisation Options, and change your keyboard layout (along with timezone, and anything else that’s relevant to your location)

The Freedom of AWS Lambda

AWS Lambda is hardly new these days, yet many people are still only starting to explore it – after all, not all of us have the flexibility to spend our free time exploring new tech or are in a position to experiment on the job, and many organizations simply can’t change on a dime and chase every cool technology they find.  There are growing pains here, to be sure, but AWS Lambda can give organizations the freedom to build products at a pace that can’t be achieved in more traditional ways, and at a fraction of the cost.

What is AWS Lambda?

We’re all familiar with containers – 3rd-party software that sits on a server and provides a set of services that our software can take advantage of.  Containers are everywhere, and we don’t blink an eye at using them.  No one builds a custom web server – they deploy their web app into a container, like Apache Web Server or Nginx.  No one builds a server-based Java fat client – they build their app to conform to the Servlet spec, and then deploy it into a Servlet Container, like Apache Tomcat or Jetty.  This tech has been around for years, and has made us significantly more productive, because the containers have abstracted away a significant layer of complexity that we no longer need to worry about – we just build our products according to the rules (i.e. – specifications and standards), and the container happily does its job.

AWS Lambda is simply the next iteration on this theme, and takes advantage of the advances in virtualization over the last decade or so.  With each of the examples above, it’s someone’s responsibility to stand up a server or two (or more), install the container, deploy your code into it, and then maintain those servers with upgrades, fixes, etc., for the lifetime of the service.  When traffic gets high, you might need to spin up a few new servers, and then hopefully remember to decommission them when it’s back down to normal.  As it turns out, AWS Lambda is an abstraction layer that handles exactly those services for you.

Serverless is NOT Scary

The term ‘serverless’ is a bit of a misnomer, as even Amazon’s CTO, Werner Vogels, admits.  Of course there is a server behind the scenes somewhere – it’s simply that as a programmer, architect, or systems operator, you don’t need to touch it.  This shouldn’t be a totally unfamiliar paradigm, since most of us have been working with virtual servers for the past 10-15 years anyway, and tech like Docker containers more recently – we’ve simply shifted that responsibility to live behind Amazon’s walls.

This allows us to start thinking about our products not as applications, but as services.  It’s like an abstraction layer that takes care of the fact that code needs to be bundled and deployed, so the actual value proposition of our work can take center stage.  In this way, it can be extremely liberating, and we can suddenly go from idea-to-value with a few clicks.

Imagine that your customers are demanding that they need to be able to see and update their configuration in your API – with Lambda, you can provide this capability in at least pilot form in little more time than it takes you to implement it.
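To make that concrete, here’s roughly what a quick pilot deploy looks like from the AWS CLI – the function name is made up, the role ARN is a placeholder you’d create first, and the runtime name changes as AWS adds new versions:

zip function.zip index.js
aws lambda create-function --function-name customer-config \
  --runtime nodejs8.10 --handler index.handler \
  --role arn:aws:iam::<your-account-id>:role/<your-lambda-role> \
  --zip-file fileb://function.zip
aws lambda invoke --function-name customer-config out.json

No servers, no load balancers – hook it up to API Gateway and your customers have their pilot.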

Did you say something about cost?

Why yes, I did.  Picture this – you’re a scrappy startup, and you’ve finally got your first paying customer.  The deal is signed, and it’s time to deliver – so you fire up the AWS console, spin up a few servers (redundancy, don’t you know), set up a load balancer, configure DNS, and you’re in business.  The problem is, what happens if it takes weeks for your app to really get traction?  Or what if the nature of the app is that it’s only actively used a few hours a day?  You’re paying for 24 hours’ worth of availability for these services, even if your app only uses a fraction of that.

AWS Lambda has a very different pricing model, based on actual processing time, instead of theoretical availability.  This means that if your app only did 3 hours of total work, then you only pay for the 3 hours of time – what a phenomenal way for a scrappy startup to scale up!

Check out the pricing page for full details, of course – there is a $0.20 charge per million requests, the normal price for moving data out of the AWS cloud, a fee for storing the actual code, and charges for any other services you’re using – just like anything in AWS pricing, it’s a combination of many things, so read carefully!
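To make the math concrete: at the prices in effect as I write this ($0.20 per million requests, plus $0.00001667 per GB-second of compute), a function configured with 512MB of memory that handles a million requests averaging 200ms each works out to 1,000,000 × 0.2s × 0.5GB = 100,000 GB-seconds, or about $1.67 of compute plus $0.20 of requests – under two dollars.  The free tier (1 million requests and 400,000 GB-seconds per month at the time of writing) might even cover it entirely.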

Wait, that’s the wrong end of ‘Scale’!

Of course, no one really wants to talk about that end of the scalability model – we want to know what happens when our app hits the big time (does anyone get ‘Slashdotted‘ anymore?).

There’s good news here, too – if you have more work than your Lambda function can process, they’ll just spin up more instances to handle the load.  So just like other AWS services, such as OpsWorks and Elastic Beanstalk, auto-scaling comes with the service.

There’s a cost to this, of course.  Because they’re effectively giving you parallel processes, each instance incurs the usage cost simultaneously – so if 10 instances are spun up and continually working for an hour, you’ll incur 10 hours’ worth of cost.

All is not lost, though – Amazon provides the tools to limit how many instances can be running at any given time, so you have some control.  Of course, if you simply set up every workload to bring in more money than it costs, then you’re golden – the auto-scaling is effectively printing money (I’m still working on this myself).
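At the time of writing, setting that limit is a single call (or the same setting in the console) – ‘customer-config’ is the same hypothetical function from earlier:

aws lambda put-function-concurrency --function-name customer-config --reserved-concurrent-executions 10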

Slow down there, Killer

Let’s take a step back and be realistic – AWS Lambda is not a panacea.  As with any technology, there are trade-offs, drawbacks, and side effects that you need to be aware of, among them:

  • SLA – AWS does not currently provide an SLA for performance or reliability.  While this sounds terrible, it’s simply an important trade-off to account for in your design.  It might mean that you should have a backup plan for any important code that you deploy as a Lambda function.  It almost certainly means that if your code is mission critical (i.e. – your company goes out of business if it’s broken), or if lives are at stake, then AWS Lambda is not for you.  I’ll wager that the majority of code does not fall into that category, however, and can cleanly take advantage of the benefits provided.
  • Performance – Because Amazon takes care of the deployment for you, and only charges you for actual usage, not for 24 hours’ worth of server availability, they reserve the right to reclaim resources if you’re not using them.  When this happens, you’ll run into some latency as they spin up a new container – this is known as a ‘cold start’.  In practice, this seems to be a few hundred milliseconds for JavaScript functions and a few seconds for Java functions, but the good news is that if your app is active, cold starts will be rare.  Consider this carefully when building your services – if you have a strong requirement for sub-second latency, you should plan accordingly.  For most ‘offline’, event-oriented, or batch systems, however, this is likely not a problem.
  • Deployment infrastructure – Like most of AWS’ products, Lambda provides a web interface as well as an API and CloudFormation support, but because it’s still early days, the landscape for integrating AWS Lambda into your deployment pipelines is fairly sparse.  I’ve come to like the Serverless Framework as a nice infrastructure tool that allows me to define my Lambda functions and any dependent components – DynamoDB tables, API Gateway configurations, etc. – in a single file that sits in my source control repo.  There aren’t a ton of other options out there, however, and as you scale your Lambda usage, you might need to do a bit of work here to create some templates and standards.

What You Should Be Doing – Today!

If you’re already working in the AWS environment, you should be considering Lambda as an option, but just like anything, you need to make the right decisions for your service, your organization and your customers.  Get creative – even if Lambda is not likely to be your deployment tech of choice, it may be a great place to experiment with prototypes.  It might just be a key piece of your infrastructure for internal tools.

The potential is huge here, as we truly get comfortable with the idea that virtualization is more than just a way to make more efficient use of hardware resources – it’s a way to rethink how we approach problems in our IT world.


Empower Teams By Making The Right Choice The Easy Choice

My philosophy on leading a software team goes something like this:  Teams are empowered, and can be trusted to do the Right Thing when the Right Thing is also the path of least resistance.

From a manager’s perspective, this often means communication, tooling, budget, time, etc., but an architect can take a more technical, hands-on approach.  Let’s dive into some techniques that can help make it easier for your team to do the Right Thing.

Architectural Governance

Years ago, I used to think about this concept as Architectural Enforcement (I’m a hockey guy, what can I say), but a colleague of mine convinced me that Governance was a better term, even if it is more politically correct.

Architectural Governance is the use of tools and products to prevent programmers from ‘breaking the rules’ — or more appropriately, to cause a conversation to occur when breaking a rule might be ok.

Checkstyle and FindBugs are two examples here, and easy to implement on a new project – is it a pain in the ass when you commit code and it fails to build in your pipeline because your if statement looks like this:  if(aThingIsTrue) instead of this: if (aThingIsTrue)?  Absolutely, but it’s hard to deny that it makes the code more readable, and over time, you’ll realize that you need to remember to actually build your code before pushing it to your shared repo – and no one is going to argue that that’s a bad idea.

These tools are also pretty pliable, so if you find that a default rule doesn’t sit well with your team, just turn it off – this isn’t about forcing everyone into a set of practices that they don’t like, it’s about a team deciding that there are certain standards and guidelines that everyone should follow, and then putting the tools in place to actually make sure they’re followed.

Quick show of hands – how many of you have worked on a team or in a company that had a ‘Coding Style Guideline’ of some sort published on a team wiki, but a quick look at the code shows that it’s completely ignored?  Think a bit about who is actually reading that wiki.  In most cases, it’s going to be your new employees — do you really want to send the message to new employees in their first one or two days that the guidelines are just there to look pretty, but not to worry about them, because no one pays attention?

Other tools and concepts in this space include much of the Simian Army – what better way to ensure that your teams are properly handling failures than by causing failures to occur?  I’ve used simpler tools in the past, like aspect-oriented filters, to ensure that a Controller only works with Services, and doesn’t directly access a Repository, or vice versa.  There are plenty of techniques to use – these Governance tools can help ensure that your teams know what the Right Thing is, and can also help identify when the Right Thing might just need a little tweaking.

Deployment Pipelines

The term ‘build pipeline’ has been popular for the last few years, since the publishing of Continuous Delivery (if you haven’t read it, do it.  Seriously, don’t wait – go do it!), but it’s not a new concept.  Build Pipelines are simply Continuous Integration processes taken to their logical conclusion: automating your Unit Tests or even your Regression Tests doesn’t mean a lot if that code is left sitting around, or if the automation stops before the software is actually deployed.  What good are the hours spent testing if the production software release process is completely different than the testing and preprod processes?  Are you prepared to tell your QA team to go home, because the testing they’re doing won’t be valid when it’s deployed to production?

My full set of thoughts on this topic would make this post far too long, so for now a few principles will have to do:

  • Start with a tool that allows you to put your configuration into Source Control – building your pipeline by hand on a project-by-project basis is a great way to make it unrepeatable.  The good news is that there are plenty of tools that can do this – GitLab Runner, Travis CI, and yes, Jenkins 2.0 are just a few options (there’s a sketch of what this can look like after this list).
  • Build your artifacts once, and promote them as they move through the pipeline.  This will help you keep track of what builds are ‘approved’, and will eliminate any chance that the thing you tested is not the thing that you released.
  • If you have one team that develops the software, and another team that releases the software, it is wrong for either of these teams to build the pipelines in a vacuum.  Release Pipelines are a phenomenal tool to help your development teams and IT teams work more collaboratively – the term is ‘DevOps’ for a reason, not ‘NoOps’!
  • Optimize to fail fast.  If you have some tests that take time to run, execute them last, so you don’t have to wait 15 minutes to discover that you missed that Checkstyle issue mentioned above.
  • Categorize your test stages, and build them up over time.  This is about practicality – you should start by identifying the types of tests that you want in the pipeline (unit, integration, acceptance, performance, release tests, visual diff tests, etc.), but if you refuse to use your pipeline until they’re all in place, you’ll never get there.  Instead, configure your pipeline to run your build, unit tests, and release processes, but leave the final release processes as manually triggered stages, rather than automated ones.  At the same time, decide what tests you absolutely must have in place before you’re willing to automate that final release, and then add those steps to your project plan.  This will give you many of the early benefits of a pipeline, and it will give you a controlled release process – you can then make it more efficient as you go.
  • Recognize that 100%, hands-off release automation is not necessarily the goal here.  Having a controlled release process that is agreed upon by development, IT, and QA is.  If you still believe you need someone to hit the button before release, that’s fine – just recognize that each release will generally include larger change sets.  (While you’re at it, find some internal tools, or less ‘mission critical’ apps, and automate the crap out of them – this will help you gain a bit more confidence with the process.)
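To make that first principle concrete, here’s a sketch of what a minimal source-controlled pipeline definition can look like, using GitLab CI syntax as the example – the stage names and commands are placeholders, and the point is simply that this file lives in the repo, right next to the code:

stages:
  - build
  - test
  - release

build:
  stage: build
  script: ./gradlew assemble

test:
  stage: test
  script: ./gradlew check

release:
  stage: release
  script: ./release.sh production
  when: manual

Note the ‘when: manual’ on the release stage – that’s the controlled, manually triggered final step described above.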

Project Archetypes

Maven Archetypes were one of the really valuable concepts that Maven brought to the Java world, but one that I haven’t yet found a really compelling implementation of elsewhere – as good as Maven’s dependency mechanism and Archetype concept are, I never really liked using Maven as a day-to-day tool.  It had just as much of the XML ugliness as Ant, but because it was a declarative tool instead of a procedural one, it always felt less readable.

The idea behind Archetypes was pretty simple, though – define a basic, empty structure for a ‘type’ of project, and provide the tooling to let a developer recreate it with a single command.  Pretty straightforward, although of somewhat limited use at the time – we were building a lot more monoliths than micro-services back then, so the need to create a project from scratch was pretty infrequent.

Micro-services are the latest hotness these days, but rapidly developing cohesive yet independent systems across an organization is hard.  Not only do you need to consider basic project structure, but you also need to worry about integrating with test frameworks, setting up circuit breakers, building a deployment pipeline, etc.  Thinking in terms of archetypes can give teams a head start, and help them do it the Right Way.

Unfortunately, I don’t know of a tool that does this really well.  Several tools will move you in the right direction – Gradle, Vagrant, and Serverless all have ‘init’ commands that will get you started, and the Spring Framework has the Spring Initializr – but none that I know of allow you to cleanly define your own (Gradle might be going in this direction with its Build Init plugin, but it’s still incubating and doesn’t let a team define their own project types).

So for the time being we might be stuck with ‘thinking’ in terms of Archetypes, but there’s still value here – creating a blank project template that defines folder structure, and includes configuration templates is easy, and can save a lot of time for teams that are building out their infrastructure.  It also serves as an effective way to communicate what the current best practices are, as they are discovered and added to the templates.
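Even without dedicated tooling, the ‘archetype’ workflow can be as simple as this – ‘service-template’ is a hypothetical repo that your team maintains:

git clone git@github.com:my-team/service-template.git my-new-service
cd my-new-service
rm -rf .git
git init

It’s not glamorous, but every improvement to the template immediately benefits the next project created from it.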

Still, there might just be an opportunity here somewhere…

Epilogue?

Of course there are more techniques here – this is only a start, and likely a topic that I’ll expand on in the future.  The key is about thinking in terms of making the Right Thing the Easy Thing – if the process of pushing out a hot fix is exactly the same as pushing out any other release, and if that process can run start to finish quickly, you will find yourself doing crazy things like opening a .jar file and replacing a .class file far less frequently.  Yes, many of us know how to do that, but most of us also know that it’s not ok.

BTW, if you couldn’t figure out the difference between those ‘if’ statements above, look for the space…