Jim Drewes

The Urgent and Important Work of DevOps

I just released a blog post on the Daugherty blog, titled "The Urgent and Important Work of DevOps." It can be read here.

Amazon Echo and EV3 pt 3 - The EV3 Control Application

This is part 3 of a 3-part series on controlling a LEGO Mindstorms EV3 using an Amazon Echo. Part 1 is available here.


In earlier blog posts, I presented the overall architecture for controlling a Lego EV3 with an Amazon Echo (Alexa) device, and I detailed the Amazon and Alexa components in the project. In this post I'll cover more of the local application side of the architecture.

Referring again to the broad layout of the application: once a command has been received by the Echo, processed, and ultimately stored in an Amazon SQS messaging queue, pretty much all of the remaining functionality resides in a C#/.NET console application. The console application simply polls the SQS queue for new messages, reads the next one in the queue and extracts its commands, then sends those commands wirelessly via Bluetooth to the EV3 robot.

The code for the console application can be found on GitHub, here:
https://github.com/jimdrewes/alexa-to-ev3/tree/master/alexa-to-ev3.console

It's a pretty simple and straightforward application, so I won't go through it line by line. I will, however, point out the two central libraries powering the app:

Amazon AWSSDK

For the most part, the console application runs in a loop, constantly executing a cycle of PollForQueueMessage() and then ProcessCommand(Ev3Command command). When polling for a queue message, we simply execute the following, with a bit of fluff in the middle:

// Ask SQS for the next message in the queue
ReceiveMessageRequest request = new ReceiveMessageRequest();
request.QueueUrl = _awsSqsAddress;
var response = await _sqsClient.ReceiveMessageAsync(request);
if (response.Messages.Count == 0)
    return null; // nothing waiting yet

Message nextMessage = response.Messages.First();

// Delete the message so it isn't picked up again on the next poll
DeleteMessageRequest deleteRequest = new DeleteMessageRequest();
deleteRequest.QueueUrl = _awsSqsAddress;
deleteRequest.ReceiptHandle = nextMessage.ReceiptHandle;
await _sqsClient.DeleteMessageAsync(deleteRequest);

All we're doing here is requesting a message from the AWS SQS queue, grabbing the message contents (a JSON object), then deleting that message from the queue. We then parse the contents of the message into a command object, which we ultimately use to build up our EV3 actions.
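To make that parsing step concrete, here's a rough sketch. The Ev3Command shape and the action/value field names are illustrative assumptions (the repo has the real versions); note that when SNS delivers to SQS, the original payload arrives wrapped in an SNS envelope, inside its "Message" field:

// Illustrative command shape - see the repo for the real one
public class Ev3Command
{
    public string Action { get; set; }
    public int Value { get; set; }
}

// Sketch of the parsing step, using Newtonsoft.Json (Json.NET)
private static Ev3Command ParseCommand(Message nextMessage)
{
    JObject envelope = JObject.Parse(nextMessage.Body);           // the SNS envelope
    JObject payload = JObject.Parse((string)envelope["Message"]); // e.g. {"action":"forward","value":10}

    return new Ev3Command
    {
        Action = (string)payload["action"],
        Value = (int?)payload["value"] ?? 0
    };
}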

legoev3 library

Sending commands to the EV3 is crazy easy using the legoev3 library. Simply set up a new Brick object, specifying a BluetoothCommunication configuration, connect asynchronously, and you're ready to issue commands.
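That setup boils down to just a few lines - a minimal sketch, assuming the port name is read from the App.config entry covered below:

// Connect to the EV3 over its Bluetooth serial port (pair the brick first);
// ConfigurationManager comes from System.Configuration
var communication = new BluetoothCommunication(ConfigurationManager.AppSettings["Ev3Port"]);
_brick = new Brick(communication);
await _brick.ConnectAsync();

Typical commands look something like this: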

_brick.BatchCommand.Initialize(CommandType.DirectNoReply);  
_brick.BatchCommand.SetMotorPolarity(OutputPort.B, Polarity.Forward);  
_brick.BatchCommand.SetMotorPolarity(OutputPort.C, Polarity.Forward);  
_brick.BatchCommand.StepMotorAtSpeed(OutputPort.B, 100, distance, false);  
_brick.BatchCommand.StepMotorAtSpeed(OutputPort.C, 100, distance, false);  
await _brick.BatchCommand.SendCommandAsync();

This just sets both of the track motors to forward polarity and steps them at full speed (100) for the requested distance. By adding all of the commands to a BatchCommand, you can send the entire batch at once and have the EV3 execute the steps concurrently. This way you're not moving one motor, then the other.

Connecting your EV3 via Bluetooth

Connecting to an EV3 via Bluetooth was actually harder than I'd expected. Fundamentally the process is very easy - but the stars need to be aligned just right for it all to work perfectly.

The Bluetooth protocol used is the common Bluetooth-as-a-serial-connection paradigm. Once you've got the EV3 properly paired with your machine (the hard part), the EV3 is exposed to your applications as a typical serial port. In the Windows world, this means you'll communicate over COM1/2/3, or whatever. In the Linux and OS X worlds, you'll find the EV3 exposed as a /dev/tty.YourDevice device. Setting this up for the console app is a breeze. Just edit the App.config file and set the Ev3Port to whatever serial port your machine assigned:

<add key="Ev3Port" value="COM1" /> <!-- or for Linux/OSX, "/dev/tty.YourPortName" -->  

The difficult part, however, is getting the EV3 to pair with your machine.
Have a read over LEGO's documentation for the best instructions I was able to find:
Bluetooth Troubleshooting for EV3 - LEGO

Some tips I learned:

  • Seriously read the above PDF. There will be steps that sound strange, but you really need to follow it to the letter. Don't skip anything, and don't think you know better than the authors of that document.

  • I had lots of issues with the timing of when things got plugged in or unplugged. You may have to try plugging things in at different times.

  • I had to completely shut down the Mindstorms software in order for my application to actually be able to talk to the EV3. If the Mindstorms app was still open, the connection would always be blocked.

Good Luck!

Hopefully this 3-part series has been helpful. Please let me know if you need more information on any of the parts; I'd be happy to point you in the right direction. Otherwise, have a look at all the code on GitHub and tinker away.

Amazon Echo and EV3 pt 2 - The Amazon Components


This is part 2 of a 3-part series on controlling a LEGO Mindstorms EV3 using an Amazon Echo. Part 1 is available here.


In the last article, I outlined the overall architecture of the Amazon Echo to EV3 interface. As noted in the high-level diagram below, a large percentage of the architecture resides in Amazon AWS components (Echo, Lambda, SNS, SQS). In this article, I'll show you how to wire up those pieces.

Although two-thirds of the architecture resides in AWS, there really isn't all that much work to do here. The Lambda function is by far the most complicated component on the Amazon side. But first, let's start with the prerequisites.

Getting Started with Amazon AWS Development

The first thing you need to do is get yourself set up with an Amazon developer account and an AWS account. You should be able to use the same account for both, but make sure you familiarize yourself with both the developer portal and the AWS console.

Once you've signed up and have browsed around the sites for a bit, you're ready to set up your first Amazon Echo / Alexa skill.

Creating an Alexa (Amazon Echo) "Skill"

The first thing you'll need to build is the Amazon Echo's "skill," which defines the way you interact with the device and how the interactions with it should be processed.

  1. Log into the Amazon Developer portal, and click on the "Apps and Services" tab at the top, followed by the "Alexa" sub-tab.
  2. You'll be presented with two options - the Alexa Skills Kit, and the Alexa Voice Service. You'll want to use the skills kit, which lets you add new skills to your Alexa. The voice service is for use in embedding voice recognition in other devices.
  3. You'll now be presented with a list of any skills you've already created, and a button to "Add a New Skill." Click that button.

  4. Fill in the basic information about your new skill. The name can be anything you want. The invocation name needs to be the phonetically written-out name you would like to verbalize when you address your Alexa. I wanted to be able to say things like "Tell EV3 to move forward" - but I couldn't just make my invocation name "EV3." Instead, I had to write "e. v. three."
  5. For now, we're going to leave the Endpoint blank. You have two choices of endpoints - a web service URL, or an AWS Lambda function. Originally I had intended to use a web service, but I changed my mind fairly quickly. With a Lambda function, all you need to do to wire it up is copy and paste the ARN, and make sure your function has the right permissions. To use a web service, you'd have to write all the session and security handlers, plus you'd have to host the service. It just wasn't worth the effort.

  6. The Interaction Model is where things start getting interesting. Here you will define the grammar for interacting with your Alexa skill. The important concepts to understand here are Intents, Slots, and Utterances.

    Intents: An intent is an action you'd like to perform or an interaction you'd like to have with the Alexa. You can think of these a lot like methods or functions.

    Slots: Slots are simply arguments to the intent / method / function. So if you had an intent called "AddNumbers", you might have slots for each of the numbers to add up.

    Utterances: These are the actual phrases someone may vocalize to interact with your skill. Again, using the AddNumbers example, you may have an utterance like: "AddNumbers add {x} and {y} together."

    For the EV3 application, I really only have two intents - "Move" and "Stop." When moving, the slots / variables are just an Action and a Value. The Action uses a custom slot type called `LIST_OF_ACTIONS`. This lets me pre-define the allowable values for that slot (like "Forward", "Left", and "Turn"). The Value slot accepts any number, so if I want to go forward a certain distance, I can capture the desired distance there. See the GitHub code for the exact intent schema, custom slot definition, and sample utterances that I used.
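    As a rough illustration, the intent schema is a small JSON document along these lines. Treat this as a sketch - the repo version is authoritative, and the built-in number slot type shown here is an assumption about the exact name:

    {
      "intents": [
        {
          "intent": "Move",
          "slots": [
            { "name": "Action", "type": "LIST_OF_ACTIONS" },
            { "name": "Value", "type": "AMAZON.NUMBER" }
          ]
        },
        { "intent": "Stop" }
      ]
    }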

Once you've finished setting up your Alexa skill, the Test screen will become your new best friend. Here you can test the text-to-speech capabilities of Alexa. This is how you'll iron out the strange pronunciations Alexa comes up with that you didn't expect. More importantly, using the Service Simulator, you can check that your grammar and your lambda function interact with one another in the way you'd expect.

Creating Your Lambda Function

Now that you have your skill stubbed out, it's time to put in some backing code. Again, you could use a REST web service to handle requests from the Alexa, but there weren't any good skeleton services available when I wrote this initially. I didn't want to mess around with the AWS security model, among other things, so I opted to create a Lambda function.

Lambda functions are simply hosted compute resources. You put your code into a Lambda function, and Amazon figures out the rest for you. Rather than hosting virtualized infrastructure, Lambda lets the developer just supply the logic to be hosted.

To create your lambda function, log in to your AWS account and go in to the Lambda function area. When you click to add a new function, you have the option of creating a function based on a pre-defined "blueprint." I'd recommend using one of the Alexa blueprints.

Your functions can be written in Node.js, Java, or Python. I was most familiar with JavaScript, so I went with Node. However, there are Alexa Skill blueprints for Python as well.

I won't go into all the details of how to write an Alexa-handling Lambda function in this blog post. There are plenty of tutorials out there on how to do that, and the blueprints provided by Amazon are a good start. If you'd like to get the Alexa-to-EV3 application working, you can simply use the Lambda function on GitHub, here. Copy and paste the index.js file, and edit the value in the "snsArn" variable (more on this in a little bit).

The Lambda function may look a little confusing at first, but the operation is pretty simple. Most of it boils down to the "onIntent" event and the "onLaunch" event. The onLaunch event simply returns a welcome message. This is what you hear when you first start interacting with the EV3 skill: it provides a short introduction with some sample commands you can utter, and it starts your session with the Alexa. The onIntent event occurs whenever you actually give it a command, like "forward." The command and its associated slot information get added to an "actionAttributes" object, which ultimately ends up as the message sent along to SNS. The SNS code you see in there simply pushes an action/value message to SNS. Because all of this runs within the context of an AWS Lambda function, we have access to the AWS Node libraries, which makes communication with SNS or SQS a breeze.

Note: I wouldn't consider this Lambda function to be the best example of writing Alexa handlers. This was quick and dirty, and developed as a proof of concept.

Once we've sent the SNS message, the rest of the Amazon side is handled without code.

Creating the SNS Notification Topic

You might be wondering why SNS is part of the architecture. The short answer is - it doesn't necessarily need to be. I added this step because it gave me some debugging and management features that I wanted.

Generally, the point of SNS is to push notifications out to various consumers - often mobile devices. I like SNS because it allowed me to attach multiple different endpoints without having to modify my Lambda function. SNS can marshal the message back out to a variety of consumers, including HTTP, SMS, Email, Email-JSON, Lambda, or SQS. While I was building everything, I subscribed my email address to the SNS topic, along with SQS. This allowed me to receive the raw information out of the Lambda function via email, so I could diagnose any issues with the SQS queue.

If you'd like to skip this step, simply edit the Lambda function to create an SQS message rather than an SNS message.

Building the SQS Message Queue

SQS is Amazon's message queuing system. It allows a queue of messages to be managed and accessed, which works particularly well for the Alexa-to-EV3 application: a semi-persistent sequence of messages/commands can be held until the EV3 is ready to process them. I've configured my queue to retain messages for just 1 minute. This way, as I speak to the Alexa, my commands become available to the EV3. If my EV3 or console application loses connectivity temporarily, my messages will be ready for me as soon as I re-connect. However, I also don't have to worry about stray messages from an old session sitting around in the queue for days and the EV3 grabbing those when it starts up - SQS will purge the old ones automatically. This system lets me maintain a message pipe from the Alexa to the EV3 that's still resilient to network failures.

To build an SQS queue, just log in to AWS and go to the SQS section. Click "Create New Queue," and set up some basic information. Probably all you care about is the queue name and the message retention period.
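If you'd rather script this step than click through the console, the AWS SDK for .NET can create the queue too. A sketch, using a hypothetical queue name (the 60 seconds matches the 1-minute retention I use, and is also the minimum SQS allows):

using System;
using System.Collections.Generic;
using Amazon.SQS;
using Amazon.SQS.Model;

var sqs = new AmazonSQSClient(); // picks up credentials/region from your AWS profile
var createRequest = new CreateQueueRequest
{
    QueueName = "alexa-to-ev3-commands", // hypothetical name
    Attributes = new Dictionary<string, string>
    {
        { "MessageRetentionPeriod", "60" } // in seconds
    }
};
var createResponse = await sqs.CreateQueueAsync(createRequest);
Console.WriteLine("Queue URL: " + createResponse.QueueUrl);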

Once it's created, just copy the ARN displayed and use it to subscribe to your SNS topic in the SNS management interface.

That's it! Once you complete those steps, your Lambda function will be pushing messages to SNS, which will in turn add a message to your SQS queue.

At this point, you should be able to start talking to your Alexa and see your messages arriving in your SQS queue.

In my next post, I'll cover the processing/communication side of the solution. This will cover the .NET console application and the Bluetooth communication to the rover.

Controlling a Mindstorms EV3 with Amazon Echo

This past Christmas my oldest son received a LEGO Mindstorms set as one of his gifts. Mindstorms is a fantastic platform for teaching kids about programming and robotics. With his set he can build color-sensing rovers, "seeing" robots that can navigate through a room, and all kinds of other devices that can sense, move, lift, and interact.

Admittedly, I was also a little excited that he received the Mindstorms. I'm a gadget lover and a tinkerer. I've got my Arduino, my Raspberry Pi, and a head full of project ideas. So, the Mindstorms will let me share some of my hobbies with my son.

Knowing that I'm such a gadget fan, my kids gifted me an Amazon Echo. These fun little devices are like Apple's Siri - for your living room. The Echo is a nice-looking, nice-sounding speaker that can act as a traditional Bluetooth speaker. But the real power of the Echo (also called "Alexa") is in the voice recognition and interaction. I can now walk into my kitchen and ask: "Alexa, what's the weather going to be like today?" or "Alexa, is Gene Wilder still alive?" - and I'll get the answers I'm looking for.

Shortly after setting up my Echo and uttering my first few silly questions, the geek voice in the back of my head started chattering...

"I bet this thing's got an API..."

But what would I have it do? My impish geek voice didn't let me down.

"Hey, I bet that Mindstorms has an API too..."

Bingo!

After a couple of evenings of hacking around with the Alexa API and the C#/.NET EV3 API, I was able to flex my mighty programming muscles for my 7-year-old son.

The architecture for the Alexa-to-EV3 communications looks a little extensive at first, but it's really not so bad. Most of the capture part of the operation chains through a few Amazon AWS services, which are pretty easy to set up. Below is the high-level flow of data through the applications.

The general flow is:

  1. User utters the EV3 skill's phrase, including a command and an optional value. For example, "Alexa, Tell EV3 Move Forward 10".

  2. The Echo interprets your language according to how you set up your grammar, and sends a message to some endpoint. Right now, Amazon lets you send a message to either a web service or a Lambda function. In this case, I set it up to activate the Lambda function.

  3. The Lambda function, a simple Node JS application, inspects the message sent by the Echo and decides what to do. Unless the user is canceling out of their command session, the Lambda function simply packages up a message and publishes it to the SNS service.

  4. SNS (Simple Notification Service) receives the message from the Lambda function, then marshals the message off to configurable endpoints. I configured my SNS service to both add a message to an SQS message queue, as well as to send me an email (for debugging purposes).

  5. The message is added to an SQS queue. This allows the messages to queue up and live for 1 minute before they're automatically cleaned out. The queueing and temporary persistence let the console application pull down new commands whenever it's ready for one.

  6. The .NET console application polls the SQS queue for new messages. When one is found, it's processed and removed from the queue.

  7. Finally, the console app interprets the message and sends an appropriate Bluetooth command to the rover via the EV3 C# API. The rover then acts on the command and causes motion.
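To make steps 6 and 7 concrete, the console app's main loop boils down to something like the sketch below. The PollForQueueMessage/ProcessCommand cycle matches the console application described in the later posts; the delay and null handling here are illustrative:

while (true)
{
    // Step 6: pull the next command off the SQS queue (null when nothing is waiting)
    Ev3Command command = await PollForQueueMessage();

    if (command != null)
    {
        // Step 7: translate the command into Bluetooth instructions for the rover
        await ProcessCommand(command);
    }

    await Task.Delay(500); // breathe between polls so we don't hammer SQS
}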

In my next two blog postings I'll cover in more detail:

  • The capture/collection side of the solution. This will include all of the Amazon AWS configurations, as well as the Node JS code written for the Lambda function.

  • The processing/communication side of the solution. This will cover the .NET console application and the Bluetooth communication to the rover.

Welcome to my blog

Welcome to my little slice of the Internet. I've had the jimdrewes.com domain since 2002, and this blog represents my current professional progression. Early on, jimdrewes.com was little more than a pre-Facebook way for my college buddies and me to post random messages, links, and pictures to one another. Now I would like to use this website as a way to share my professional and technical thoughts with anyone who cares to read about them.

The intent here is to focus on technological topics such as Enterprise IT Architecture, Software and Application Architecture, and DevOps, along with possibly some random thoughts on development management and software engineering on a Microsoft .NET stack.
