
On this episode of Eat Sleep Code, James Chambers unpacks the highlights from Microsoft Build 2017. They discuss the importance of AI and machine learning for developers, and James shares his interest in using software for humanitarian efforts.

James Chambers

James is a Microsoft MVP and one of the ASP.NET Monsters, currently building resilient web services with the MVC Framework, SignalR, and Web API. He contributes to several open source projects and is passionate about mentoring and fostering mentorship in others. He volunteers with local schools to teach programming to kids ages 9-14 and is involved with a number of charitable organizations.

Show Notes

Transcript

00:01 Ed Charbeneau: This podcast is part of the Telerik Developer Network. Telerik by Progress.

[music]

00:16 EC: Hello and welcome to Eat Sleep Code, the official Telerik podcast. I'm your host, Ed Charbeneau, and with me today is James Chambers. How are you doing, James?

00:26 James Chambers: I'm good Ed, how are you doing?

00:28 EC: I'm doing excellent. We are at Build 2017.

00:30 JC: Lacking a little bit of proper sleep probably.

00:33 EC: Yeah. I think I did a 14- or 16-hour day yesterday. I got up at 6:00 AM, didn't get to bed until probably midnight. So it's been one of those types of things.

00:43 JC: I hear you. And I travel from, I'm from rural Manitoba, so it's always a bit of a journey to get out here, sometimes an overnight journey to get out here. So a long travel lead and then long days at the conference, I get it.

00:55 EC: Yep. So tell us a little about yourself, James. What do you do? Where do you work?

01:00 JC: I actually am an independent consultant and I work primarily in the space of ASP.NET Core these days and I do a lot of work in Azure. And I guess I've got the MVP award a couple of times or so, and recently released my fourth book with good friends of mine Dave Paquette and Simon Timms on the topic of ASP.NET Core.

01:25 EC: So Dave and Simon, you do a lot of work with these guys. You have a little bit of a show that you do as well on Channel 9.

01:31 JC: We do. We call ourselves the ASP.NET Monsters. We've got these little caricatures that are done. Anyways, so we have this cartoon persona where we go on and we code and pretend to know what we're doing and share what we're learning about ASP.NET Core with other people. And so yeah, we're about 100 episodes in on Channel 9 and plugging along.

01:48 EC: Nice, congratulations.

01:49 JC: Thanks.

01:50 EC: It's a great show, especially if you're a .NET developer. Tune in and learn some cool tips and tricks. I learned a little bit about a cool open-source project that you guys had called GenFu.

02:05 JC: That's right. So it's a data generation library, it's available on GitHub and via NuGet, and it's for .NET Core. We're working on back-porting it to support other frameworks as well, but what it does is you can basically take any type of entity and say, give me a list of that entity, and rather than filling it in with nonsense data, it will actually do its best to intelligently recognize known properties, and then using an internal database, fill those with either random values that are generated or generated based off of values in the database. So it gives you realistic-looking test data and sometimes that can make a strong impression when you've got a first draft of a prototype or you're working with some sample data. So a handy little utility. I've been using it, some form of it, for over 15 years and then I finally got poked to put it out there a couple of years ago and Dave and Simon stepped in and that's really where we started collaborating, and it's just evolved from there.
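
For readers who want to try GenFu, here's a minimal sketch of the usage pattern James describes; the `Person` class is a hypothetical example type, and the `A.ListOf<T>()` call reflects the library's public API as of this recording.

```csharp
using System;
using System.Collections.Generic;
using GenFu;

// A hypothetical entity. GenFu inspects property names such as
// FirstName, LastName, and Email and fills them with realistic values
// from its internal database instead of random nonsense.
public class Person
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
}

public class Program
{
    public static void Main()
    {
        // Ask GenFu for 25 Person instances with realistic-looking data.
        List<Person> people = A.ListOf<Person>(25);

        foreach (var person in people)
        {
            Console.WriteLine($"{person.FirstName} {person.LastName} <{person.Email}>");
        }
    }
}
```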

03:00 EC: Awesome. So we are hanging out at Build 2017 where, on the second day now, there's been just amazing keynote deliveries, like five hours' worth of content just blasted at us over the last two days in the keynotes alone. So let's highlight a little bit of that. What have been some of your favourite takeaways from the keynotes? Let's unpack it for people.

03:27 JC: Sure. So one of the things I actually found interesting in the style, and I don't know if they're just trying to change it up a little bit, but one of the things that I noticed is that they're leading with an implementation of a stack of technologies and then they're unpacking it. So it was really interesting today, it seemed very consumer-esque to be showing the remix, the remixed product that they've got, Windows Story Remix, to allow you to put photos and videos and everything together. And it felt awkward because it was very, as I said, very driven for consumers. But then they came around on the back end of that and they said, "Here are all the ways that we built this and you can have access to all of those APIs." And I guess, as we were talking before we started recording here, the thing that keeps emerging is the theme of AI.

04:17 EC: Right. So the video remix, this thing, you take video with your camera or multiple cameras even, you can have your friends' videos in there and share them all into one space. And then you have this video editor and they even have an ability to drag and drop 3D elements in there that are in a common 3D format.

04:43 JC: Yeah, so it does object and plane detection, surface detection, and it allows you to bind models that you pull in from the 3D community using all open-source models. It does the attribution for you, it builds out, it does fixed point locking for the objects, so as your video is moving, the object can stay in there, or it can track an object through a plane. There were all kinds of really interesting things like inking features that they've got. And the really interesting thing about it is they were calling it Remix. And one of the things that it could do is you could say, "I'd like to pivot this automated build up of this video that you produced for me and I'd like to instead focus it on this aspect." And they use the example of a particular person. And so when you start unpacking what they're doing there, it's facial recognition through video frame analysis, there's stuff going on in the background there that, as you look at some of this stuff, it would have really seemed science fiction a couple of years ago.

05:45 EC: Yeah. When they started off I was kind of like, "Why are they showing a video editor?" And then you start to realize there's some things going on in here that are not trivial tasks to perform. For example, they highlighted a girl playing soccer. Then when she kicked the ball, they mapped a 3D fireball to the soccer ball, and it somehow knew where the ground was when it bounced and changed the animation appropriately.

06:14 JC: Yeah.

06:15 EC: All kinds of bizarre stuff.

06:16 JC: And what was really cool about that too is that that 3D fireball that was modeled by somebody, the presenter had actually found it in a scene that existed with a volcano and dinosaur, I can't remember the exact scene or whatever. She was able to drive in and say this was a model that was actually composed using all of these... Or this was a scene that was composed using all of these other models and found that model in particular and then pulled that in, and as you said, bound it to an object that was in flight, in motion, from within that video clip. It was pretty cool.

06:51 EC: Video's extremely hard to process. There's millions of frames making up a video. This thing is looking at it, presumably with AI, and figuring out different components of it. Like you said, surface detection and object detection and stuff is very advanced. It kind of hit home the point that Microsoft's been trying to make this entire conference, and that is: if you're doing software development, you should really be looking at AI, because if you ignore it, you're gonna be left at the bottom of the pack.

07:27 JC: Yeah. In particular, I think that there's this emergence for... I think there will always be a place for folks who are doing line of business applications. Maybe they're doing smaller apps for small businesses, things like that. Even mobile developers who've got very specific use cases that just need to talk to perhaps on-prem hosted services and what not. What we would call now the standard set of things that you're gonna be doing. In most cases you're gonna want to be calling out to something. The future's kind of becoming this landscape of: do you want to be calling out to the things that you're writing that live on infrastructure that you maintain, or do you want to use a collection of services that are effectively the patterns that have emerged that are required for next generation applications, and that exist and live out there on infrastructure that you don't have to manage? You don't have to do the operating system pieces. You don't have to write the intelligence into machine learning. These are things now that they're extracting and making available through APIs, SDKs, and toolkits for developers.

08:39 EC: Yeah. We're getting these, just toolboxes of many things to develop with now. Microsoft's really expanded the cognitive services in the AI and machine learning realm way beyond what I expected to be coming out by this Build. I've been watching it for quite some time and it's just exponential growth.

09:01 JC: Project Oxford came out, it was just over two years ago. I did a segment on Microsoft Virtual Academy with Simon Timms, actually, and we got to go through some of those initial pieces. I'm pretty sure there were only eight endpoints in the API at that point in time. What it was capable of doing, it just had a handful of categories and tags and what not, a few dozen. Today they announced at one of the sessions that the Custom Vision API, which is now... It goes beyond what is just available in the stock image recognition capabilities. It allows you to train it with additional categories and tags that you devise on your own and provide additional images for a specific training to narrow in on a certain type of image or object in an image. They said that there's actually now over 2,000 categories that these things can be tagged with. The canonical example they're using in a lot of the... Like they did it in the keynote, they showed it in a couple of the sessions. There's a picture of a guy, he's swimming underwater. He's mid-stroke, it's a view looking up from under the water. He's wearing goggles, there's bubbles around him. You can only see parts of his body and the service recognizes that this is a guy that's swimming underwater. And that's with the phrase...

10:15 JC: There's a description that it gives of the photo and it might be "skateboarder doing a trick" or "man swimming under water." It does all these different point evaluations. It tells you things, properties of their face, and it all comes back with confidence scores so you can evaluate, is this the intention of what the image is? Another example is this concept of LUIS, which is the language understanding... I can't remember the information or interpreting service or something. I can't remember what the I stands for but it's LUIS. What it does is it takes natural language processing. It takes something that somebody would type into a chatbot or a webpage or a search or something and, using NLP, determines what the intent of the user is based on some pre-defined intents that you'd describe. The training that it goes through that you can do, the feedback loop is so small. It just kind of redefines what this natural language processing thing means and the machine learning that's behind it. It's one of those things where instead of having, as a developer, to train a computer to understand language and break down utterances and figure out what the key facets are of that language.

11:34 JC: The service... They've already raised that up as a pattern. We know that if you're going to be doing that natural language processing, these are the things that are going to emerge that you're gonna require in order to code against it. They're taking these things that felt abstract and they're actually wrapping really intelligent tooling around them, and I think that's what is really exciting.
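
To make the LUIS flow more concrete, here's a hedged sketch of querying a LUIS app over REST, roughly as the v2.0 endpoint worked around the time of this episode. The app ID, subscription key, and region are placeholders, and the JSON response is printed raw rather than parsed.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisDemo
{
    static async Task Main()
    {
        // Placeholders: substitute your own LUIS app ID, key, and region.
        var appId = "YOUR-LUIS-APP-ID";
        var key = "YOUR-SUBSCRIPTION-KEY";
        var utterance = Uri.EscapeDataString("find photos of people swimming");

        using var client = new HttpClient();

        // LUIS scores the utterance against the intents you defined and
        // returns JSON containing a topScoringIntent plus any entities.
        var url = $"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                  $"?subscription-key={key}&verbose=true&q={utterance}";

        Console.WriteLine(await client.GetStringAsync(url));
    }
}
```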

11:55 EC: Absolutely. They've taken lots of data scientists and put them behind this. I'm not a data scientist, are you?

12:06 JC: I am not.

12:07 EC: We're software developers.

12:08 JC: Sometimes I have these dreams, and in the dreams people are giving me awards for... No, I'm joking. I'm not a data scientist, I do not do that work.

12:16 EC: So they've taken on the brunt of employing all of those data scientists and developing these very difficult machine learning algorithms to do vision APIs and speech APIs, and then handing off easily accessed web APIs for us to just send data to and benefit from.
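
As an illustration of how little code that hand-off requires, here's a hedged sketch of calling the Computer Vision analyze endpoint roughly as it looked in its v1.0 form around this recording; the subscription key, region, and image URL are all placeholders.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class VisionDemo
{
    static async Task Main()
    {
        // Placeholder subscription key and region.
        var key = "YOUR-SUBSCRIPTION-KEY";
        var endpoint = "https://westus.api.cognitive.microsoft.com" +
                       "/vision/v1.0/analyze?visualFeatures=Description,Tags";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", key);

        // Point the service at a publicly reachable image (illustrative URL).
        var body = new StringContent(
            "{\"url\": \"https://example.com/swimmer.jpg\"}",
            Encoding.UTF8, "application/json");

        // The response includes tags and a natural-language caption,
        // each with a confidence score you can evaluate yourself.
        var response = await client.PostAsync(endpoint, body);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```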

12:38 JC: Exactly. And it was one whammy after another. I've been watching some of the cognitive stuff, from afar, and I kind of got what they were doing with the images, but they do a good walkthrough of the LUIS component with the intent and they show that they've got the vision API that they've already been working on and then they drop another bomb, they're saying, "Oh, and you can index your video as well." And now it's...

13:05 EC: That was amazing.

13:06 JC: Yeah, I know. And now it's going through and it's actually determining what's going on in the scene and recognizing objects, and you can provide additional training. The additional training piece is amazing, it's something that I want to explore, just to have a better awareness of it and depth and interest in it. I started a blog post actually after I saw one of the sessions yesterday, because I'm like, "I'm gonna train this vision API to tell the difference between cake and cookies." It's kind of like How-Old.net and things like that, or whatever, or how much people look alike; this is gonna be, you're gonna be able to upload an image and it'll be able to tell whether or not it's a cake or a cookie.

13:45 EC: Some of the examples they gave were really good with that too. One of them was, they retrained the vision API with image data from satellites. So you could plug that into Google Earth, and have it scan a region and tell you if it's under development or if it's a new development, and things like that.

14:06 JC: Right. And then using the history of those images to understand the changes in the landscape. And when you think out towards either a private or public enterprise, or municipal, regional, state, or provincial governments, wherever you live in the world, who are trying to better understand land use, and some of the examples they gave around deforestation and rates of urban sprawl and things like that, that is just something that before would have been very, very difficult to put together on your own. But with this toolkit, they showed a project that they built where, literally, it was like they were just scrolling through things, it was pulling out those parcels of land and saying, "Here's how it has evolved and here are the ones that are most likely to be identified as positives in answering the question, what has become urbanized." And so just that analysis over those images, it's been phenomenal.

15:07 EC: Like you said earlier, this is something that was absolute science fiction just a few years ago. And to watch it evolve at an event like Build is just absolutely amazing.

15:18 JC: Yeah, so I'll go back to pre-Build and the pre-day deep dive workshop I was in, it was called the AI Immersion workshop, and I sat in on... There were a number of them that were held, and I sat in on the cognitive services and bot framework one. So, in the course of a single day, working through the lab material, you get a starting point, but basically you build out a console app that allows you to point it at a directory of images, you do an ingest process where you run those through the Vision API, and you extract all the tags and categories.

15:57 JC: You then take that, build JSON objects, stuff those into DocumentDB, wire up a service called Azure Search against the DocumentDB, and index that entire catalogue of images that you've pulled in, and it does a... You basically get to say which ones are sortable and which ones can be filtered, and all those kinds of things. And then using the bot framework, you allow people to put in input, and backing that is the LUIS service, so now you've got this intent recognition that's extracting facets and understanding when people wanna perform a search. So we wrote a bot that has these LUIS intents that are understood, and we're using that service. And then we go look at the Azure Search endpoint to pull back the images that match and then get the list of images. I was giggling the whole time, 'cause it just felt like something that you would... Like yeah, the whole science fiction kind of thing.
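
To ground one step of that pipeline, here's a hedged sketch of the retrieval side: once the tagged image documents are indexed, a bot can query Azure Search over REST. This is not the lab's actual code; the service name, index name, query key, and api-version are placeholders based on what was current at the time.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SearchDemo
{
    static async Task Main()
    {
        // Placeholders for the search service, index, and query key.
        var service = "my-search-service";
        var index = "image-catalog";
        var key = "YOUR-QUERY-KEY";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("api-key", key);

        // Full-text search over the tags and categories the Vision API
        // extracted during the ingest step.
        var url = $"https://{service}.search.windows.net/indexes/{index}/docs" +
                  "?api-version=2016-09-01&search=swimming&$top=10";

        Console.WriteLine(await client.GetStringAsync(url));
    }
}
```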

16:57 EC: Yeah, we're really watching a major change happen right now. And it's one of those calls to action for developers: learn it or be antiquated. This is the real deal, this is happening today.

17:09 JC: Absolutely. Another really important thing that Satya did during his keynote was he said that we've also got a responsibility to use these services in a way that benefits everyone. And by far, I know we've still got one more day here to go, but by far the most jaw-dropping, jarring thing that I have seen, it totally made me tear up. I've got people in my life who are affected by Parkinson's. There is a developer on the Microsoft Research team who has built a device, now she's been working on it, I think it sounds like it's just been over a year, her name is Haiyan Zhang, and she has built a device called the Emma. It's a bracelet, effectively, that has a series of motors that kind of counteract the responses of the nervous system to allow someone to steady their hand. Now, if you know anyone with Parkinson's disease who's living with that disease, it is something that is... It's a degenerative disease, and the shaking and the trembling become harder and harder to control. There are some drugs that help mute some of those symptoms, but this is...

18:21 JC: There's a girl who's, I believe she's in her 20s, her name is Emma, and the device is named after her. And at the start of the video there was her trying to... There's some video of her trying to draw a box or write her name. And she was almost in tears trying to do it. And then, by going through these iterative processes of using... And to what extent I don't know, but I think we're talking... We believe that there's some amount of machine learning that was involved in the process. And certainly as a device that has telemetry on it and it can collect data, and they can train it and learn from the feedback that they're getting from the device as well, the end result was this bracelet that steadied her hand. And she, for the first time in years, was drawing straight lines and writing her name. And it was just... It was very moving and it was one of those really, really obvious ways that as a developer you just go, "Yeah, we need to be doing this. This is... It doesn't have to be a dystopian future, this technology can be human."

19:25 EC: Yeah. I feel like Microsoft's really done a good job supporting that effort over the last few years. Especially since Satya joined as CEO, you see things like, even at last year's Build, the pair of glasses that helped...

19:39 JC: The color blind.

19:41 EC: Yeah. No, actually, there was a software developer who works at Microsoft and has a vision impairment, and the glasses are able to tell him, through the cognitive APIs, what's going on in his surroundings.

19:55 JC: Right. Exactly. Right.

19:56 EC: It was very impressive. So we're seeing a lot of those types of stories emerge and it's really refreshing to see. And it's in parts of the keynote as well, where Satya talks about bringing users on board the computing ecosystem in general who haven't been able to experience it before, by providing new ways to interact, new user interactions and new user interfaces that just haven't existed before.

20:27 JC: Yeah. And some of the devices that are coming out to support that, some of the software initiatives, some of the hardware initiatives and the IoT pieces, and you start coupling that with shorter feedback loops. I think it's really about the shorter feedback loop and how machine learning is improving, and by putting those abstractions in place that we were talking about before, it's really enabling developers this opportunity that just was so far out of reach before.

20:54 EC: Yeah. It's amazing. And you actually do some work in humanitarian efforts. Tell us a little bit about the project that you work on.

21:05 JC: Yeah, I do. So Richard Campbell, Tony Surma, and a few others are involved in this project that is called the Humanitarian Toolbox, and their goal is to basically take on the responsibility, in a professional way, of building out software that can be used by non-profit organizations and even for-profit organizations, but basically to use the software for free and to remove the repeated effort that happens over and over and over again. And you can imagine that... One example might be in disaster preparedness: wherever you go in the world there are different types of disasters, be it tornadoes or hurricanes or earthquakes, and even something like a house fire is something that you can be prepared for. And organizations like the American Red Cross have these efforts all over the world in order to help people be prepared for the unfortunate event of a house fire. And so there's a product that they've got inside this Humanitarian Toolbox, or HTBox, that is called allReady. And so I've had the pleasure and privilege of working with over 100 contributors around the world on building...

22:24 EC: Excellent.

22:24 JC: Yeah. It's amazing. There's thousands of pull requests and reviews that have happened and issues that have been filed. We've had volunteer testers, volunteer designers, volunteer app developers, like mobile app developers. We have had people reaching out to us and then forming their own little communities. We've got a developer in the UK, Steve Gordon, who's just done this phenomenal job of creating a community within the company that he works at. They've had several codeathons now where they've actually gone and knocked out dozens of pull requests and just made these incredible contributions to move this project along. And this is a real project, I mean the American Red Cross is using this in prod. The difference, though, the approach that the Humanitarian Toolbox is taking, is that they don't want to create that free software that lives out there and becomes abandonware. How many people have gone and helped out a charity for a weekend to build a website or something, and then you gotta get back to your day job? And it's just left. And you've done a good thing and we shouldn't discount that at all, but the reality is that you've given them technical debt and you've walked away from it.

23:32 JC: And so HTBox is trying to solve that problem in saying, "Let's solve the problem one time using proper architecture and modern tools that are exciting for people to build and learn on, and then actually make that available for free to charities." Richard Campbell always says that "free software is like a free puppy." You get a puppy and maybe the puppy is free, but then, how are you gonna maintain it? And it's gonna grow and it needs to eat and then it also makes messes. And if nobody is around to help take care of the puppy, then the puppy doesn't last very long. So, software is much the same way: you need to nurture it, and feed it, and organize it, and get it to the right places and put it in the care of the right people, and that's what they're trying to do. So, it's been an exciting ride, helping a really noble cause with the American Red Cross and then hopefully other organizations down the road as well.

24:26 EC: Yeah, and it seems like there's a little bit of a side benefit, too, where if you're a new developer, or you wanna see best practices, or you just want to learn, there's an open source project out there that's being led by some really amazing developers. You named some people that I would love to go see what kind of code they're writing, what kind of architecture they're using, and what best practices are in place.

24:50 JC: Yeah, Bill Wagner's another one of the people that's involved in the project. I mean, here you've got someone who works inside of language development at Microsoft. We've got, when we've done the codeathons before, we have had people from the ASP.NET Core team actually come and sit in the codeathon with us and help us through things, answering problems and questions. We've got direct contacts into people at Azure; if something doesn't work in the deployment pipeline we've got really good resources to lean on and learn from, and so we're trying to pattern that out throughout the rest of the project and even through other projects in the HTBox. I said codeathon, I'm saying codeathon, not hackathon, because it's very important that we... We feel it's very important that we send the clear message out there that when you go and submit a feature on this project, you're going to write a unit test; we're doing this the right way. There's a CI/CD pipeline, your pull request will be built by an automated system, then it will tell us whether or not the tests are passing, and we're gonna look at your code and we will ask for you to make changes. So, we treat this like a very serious, enterprise-grade but agile project that tries to follow some of the best practices that are out there today.

26:01 EC: Yeah, so would it be safe to say, if you're a junior dev and you wanna learn how to break over into being a more senior developer, this is a way to do it and help people at the same time?

26:14 JC: Yeah, that's totally the thing. I'd always kind of looked at it, in my volunteer efforts before working with the HTBox, you kind of go out and you go to some other event and you're donating your time, but you're not donating your skill necessarily. And so if you go out and help at the soup kitchen... Something needs to be done, we need people out there, and I'm not saying stop doing that; if you do that already then I applaud you and I encourage you to keep doing that. Take your kids, make it happen, it's gotta be done. But for me as a software developer, my toolkit, my experience, I've got 20 years now doing this professionally and this is where I shine. This is what I can do and how I can contribute. And to find an organization like this that I can work with to deliver that into the hands of noteworthy and sincere and emphatic fans of the software who serve communities around the world, that's just an incredible opportunity.

27:16 EC: It's amazing work. We'll put some links in the show notes so people can go find that information and jump on the open source project and learn more about it.

27:27 JC: That'd be great.

27:28 EC: And we'll put some links in there for the ASP.NET Monsters as well, so if people would like to go check out the show and see what you guys are up to, they can do that as well.

27:40 JC: Yeah that'd be great, hit us at aspnetmonsters.com, check us out on Channel 9. There's lots of stuff out there, we blog, and we do videos and all kinds of stuff.

27:47 EC: Speaking of Channel 9, this show will kick off our syndication on Channel 9, so Eat Sleep Code will be available on Channel 9 starting right as this airs.

28:00 JC: That's awesome, congratulations.

28:01 EC: Thank you very much, and thanks for giving me your time today. I know it's been another extremely busy day here at Build. We're both, like you said, running on hours of sleep if that much, and so I appreciate you coming out and talking to me in person here.

28:16 JC: Thanks for having me, Ed.

28:17 EC: Thanks a lot.


About the Author

Ed Charbeneau

Ed Charbeneau is a web enthusiast, speaker, writer, design admirer, and Developer Advocate for Telerik. He has designed and developed web-based applications for business, manufacturing, and systems integration, as well as customer-facing websites. Ed enjoys geeking out to cool new tech, brainstorming about future technology, and admiring great design. Ed's latest projects can be found on GitHub.
