<h1>Michael Hall: Random thoughts on community, programming, science and politics</h1>
<h2><a href="https://mhall119.com/blog/time-series-tech-stacks-for-the-iot-edge/">Time Series Tech Stacks for the IoT Edge</a></h2>
<p><em>2021-10-19</em></p>
<p>If you watched my All Things Open talk and want to get links to the projects I referenced, you can get my slide deck from here: <a href="https://www.slideshare.net/influxdata/time-series-tech-stack-for-the-iot-edge">https://www.slideshare.net/influxdata/time-series-tech-stack-for-the-iot-edge</a></p>
<h2><a href="https://mhall119.com/blog/before-you-take-your-conference-online/">Before You Take Your Conference Online</a></h2>
<p><em>2020-04-05</em></p>
<p>The Covid-19 global pandemic and the subsequent shutdown of travel and social gatherings have forced many tech conferences to cancel their planned events this year. Many of you are going to try holding your events online, using video conferencing tools and live chat to replicate the conference experience without people having to travel somewhere and gather in one place. This is a laudable goal, especially given the lack of better options at the moment, but it's not as simple and straightforward as it may sound. I know, I did it back in 2013.</p>
<p><img alt="Group photo from the last in-person UDS" src="https://mhall119.com/static/media/uploads/b9lucomicaahogz.jpg" style="float: right;" width="400"/>Canonical, the makers of Ubuntu, used to host a large developer conference every 6 months called the Ubuntu Developer Summit or UDS for short. Over the years this grew from an event with dozens of people, to one with hundreds of attendees coming in from all over the world. Eventually the cost and logistical challenges of hosting such a large in-person gathering became so large that Canonical decided to convert it to a virtual, online event instead. As a member of Canonical's Community Team at the time, I was right in the thick of this change. And while the circumstances that led to that descision were different than the challenges conferences are facing now with Covid-19, the lessons we learned (often the hard way) are very likely going to be the same ones you will encounter today.</p>
<p>So I'm writing this article to summarize our experiences, in the hopes that it will allow you to be successful in your transition from an in-person conference to an online one. Special thanks to Elizabeth K. Joseph, a titan in the Ubuntu community, who shared with me her perspective as a regular community participant, which I have incorporated into this article as well.</p>
<h3>UDS -> vUDS -> UOS</h3>
<p>Like many people I'm seeing today, our initial approach was to do the same thing we always did, just "take it online". We figured (as so many are today) that being online would make it easier for people to attend, and keeping the overall format the same would ensure the same kind of success. We dubbed this "Virtual UDS" or vUDS for short. We kept the same topics, the same schedule, the same concept of tracks and rooms, everything was the same except that we would all be online instead of in-person.</p>
<p><img alt="Ubuntu Summit attendee numbers" src="https://mhall119.com/static/media/uploads/screen_shot_2020-04-05_at_2.40.01_pm.png" style="float: right;" width="400"/>At first this seemed to work. Our first vUDS was almost as well attended as our last in-person UDS, and considering that the in-person attendance numbers had stopped growing already a slight decline wasn't seen as a problem. But then the next vUDS had slightly lower attendance, and the next one even lower still. Worse, the participation rate, that is the number of attendees per session, was cut in half by our third vUDS. People were coming to the event, but not to as many sessions in the event. Even with fewer sessions to choose between, more people were just skipping sessions than they did at our in-person events.</p>
<p>After a year and a half of the vUDS format we switched gears again, and re-focused the event more on presentations and demos, and less on development planning and conversations. This was done partially in response to declining attendance for vUDS, but also because developers had stopped waiting for these events to have those conversations and make those plans. Once everything was online it didn't make sense to wait until a specific week of the year for some of these activities, so they happened whenever it was convenient, and we no longer had those topics for sessions at vUDS. We rebranded this the Ubuntu Online Summit (or UOS), with fewer sessions and more focus on attendees consuming content rather than engaging in discussions and planning. This reversed our trend of declining attendance in the event overall, but the participation rate remained low and never recovered.</p>
<p>In the end Ubuntu stopped having these dedicated events altogether. Does that mean it was a failure? That's hard to answer. Because the online format allowed the content to be spread out over the entire year, things happened when they needed to happen, presentations and demos became more frequent outside of these events, and the work of developing Ubuntu carried on without them. Bringing the UDS content online was a success, but its success meant the death of UDS as an event in any form.</p>
<p>The biggest loss to the community was social: UDS was such an important part of the relationships that were built in the project, and that was something that couldn't be replicated solely online. The community responded with <a href="https://ubucon.org/">UbuCon</a>, smaller community-run events hosted in different parts of the world, and Canonical responded by bringing community contributors into the regular in-person sprints that engineering teams had always held. None of it was able to really replace what UDS brought to the Ubuntu project though. UDS was fun for employees and community alike. vUDS was just work.</p>
<h3>Technology Problems</h3>
<p>First and foremost, going online is largely NOT a technology problem. Yes, you have Zoom and other video conferencing tools now. We had Google Hangouts On Air back in 2013 and they worked nearly perfectly. In fact, almost none of the challenges we faced when switching to vUDS/UOS were technology-induced, and very few of the challenges we faced were solvable with technology. I'm starting out on this topic because it's the one too many people are focused on, and also the shortest one in terms of advice.</p>
<h4>Have a broadcast URL</h4>
<p>As mentioned above, we used Google Hangouts with the On-Air option, which allowed us to broadcast a live stream via YouTube. This was extremely useful both because of participant limits in Hangouts and because most people just wanted to consume the session, not participate in it. Zoom's Webinar feature is a paid add-on, but if you're a big conference it's almost certainly going to be worth having for this reason alone.</p>
<h4>Have a recording</h4>
<p>For reasons which will be expanded on below, having a recording of each session in your online conference will make it more valuable to both your speakers and your participants. This is true already with in-person conferences, but becomes exponentially more important with an online one. A recording gives your speakers long-term benefits, as people will be able to watch their talk for months or years after it's given. You'll also need a way to make these recordings discoverable, with a directory on your website or something, to really leverage this benefit. </p>
<h4>Have session helpers</h4>
<p>Just because you don't have A/V equipment in a room any more doesn't mean your speakers aren't going to need somebody on hand during the talk to help them; the challenges are just different. It's much harder to track and answer questions in an online presentation than in person, because you'll usually have your slide deck in full screen, hiding everything else. Have somebody from your staff in each session who can handle not only technical issues, but also audience engagement chat while the speaker is presenting.</p>
<h3>Audience Captivity</h3>
<p>The biggest challenge you're going to have is that you no longer have the full attention of your audience. One of the implicit but not often talked about benefits to the organizers of an in-person conference is that your attendees have agreed to give you basically 100% of their free time. They've left their home and work responsibilities already just to be there, so any time spent not in a session is basically wasted for them. That means that anything even mildly interesting or useful is going to be worth it to them to attend.</p>
<h4>Distractions</h4>
<p>When somebody is at your venue they don't have a lot of outside distractions. Even if your conference is being held somewhere like Las Vegas or Orlando, where there are a lot of tempting things to do in the area, those things are not actually <em>inside</em> the rooms where the session is happening. Once you're online that's no longer the case: your audience will be constantly exposed to things wanting their attention, whether it's family or work or pets or the TV. And you won't have any control over that. Even just the amount of outside noise they will face will be distracting. Have you ever been in a talk where the doors were left open and all the hallway noise was coming in? It's like that, only every attendee has a different set of doors and there's nothing you can do to close any of them.</p>
<h4>Competing with work time</h4>
<p>Companies usually give multiple days of paid leave for their employees to attend a conference, and certainly for them to speak at one. But it's a much harder ask to get paid leave to watch video streams for 8 hours a day, especially when only a handful of sessions are really relevant to the attendee's job. With an in-person event it's a sunk cost: sending somebody to attend 4 sessions costs the same as sending them to attend 40. But when it's online you can be doing work-work during those 36 hours of non-work-relevant sessions.</p>
<p>Moreover, when a company sends somebody to attend a conference, they're valuing it on more than just the information gained at sessions. They know that those employees will be spending the time between sessions and after-hours talking to other participants and speakers, making connections that might be beneficial to the company, and spreading information about the company to those also in attendance. This is just something that happens when people with similar interests are stuck together; companies know this and it's part of the calculus for approving conference travel. Online events almost never provide this added benefit, which again makes it harder to justify spending work time on them.</p>
<h4>Competing with family time</h4>
<p>Even harder than taking away work time is taking away family time. Just like it's a sunk cost to your work if you attend an in-person conference, it's a sunk cost in terms of family time too. Your spouse and kids know you'll be gone all day every day, that it's just part of your job. But when your event is online, people are still going to go home (if they're not working from there already) to their families at the end of the work day. For the lucky attendees, your conference hours match their work hours, and they only have to choose between their family and socializing online (this choice will not go in your favor). For almost everyone else you're asking them to ignore their family outside of work hours so they can pay attention to your speakers. Again, it's a sunk cost in person so they've already agreed to do that, but the calculus is much different when their family is going to be sitting in the next room over.</p>
<h4>Shorten your schedule</h4>
<p>There isn't much you can do to reduce the distractions your attendees will face, but you can change how much of their time you're asking for. It's much easier to take 4 hours away from work or family than it is to take 8 hours away. It's easier to spend one or two days on it than 4 or 5 days. Yes, that's going to mean fewer sessions, and fewer sessions will make it harder to have something interesting for everyone, and that's going to make it harder to keep your attendee numbers up, but you'll at least be able to keep the attention of those that are interested. As a bonus, a 4-hour conference day will fit completely in the 9-5 work schedule of a lot more people than an 8-hour conference day will.</p>
<h4>Schedule a meal break</h4>
<p>Regardless of what timezone an attendee is in, they're going to need to eat at some point during your event. Sure, being online means they can grab a bite and sit at their computer while they eat, but it will still take time for them to find and make something to eat. Putting an hour-long break in the middle of your schedule tells them there will be a time to do that when they won't have to choose between food and somebody's talk. Even if it doesn't exactly line up with their preferred meal time, people are more willing to eat a couple of hours earlier or later in order to avoid missing sessions than they are to skip a meal entirely. This also gives your organizers and coordinators a mid-day break, which they are going to <em>desperately</em> need.</p>
<h4>More Keynotes</h4>
<p>It just so happens that keynotes work well for online sessions. Not only is the content easier to deliver in this format, since keynotes are usually one-way presentations that don't depend on audience interaction, but having one as the only session happening in a given hour means your audience isn't being split between it and other talks. You may not want a single-track, all-keynote conference, but consider doing more of them during the day. In addition to an opening and closing keynote, an after-lunch keynote is a great way to motivate your attendees to come back on time after a long break.</p>
<h3>Missing Social Opportunities</h3>
<p>It's easy to underestimate the value attendees get from a conference simply from being there in person. That is, until you go online and those social opportunities disappear. This goes beyond just the so-called "Hallway track" that we all know and love, although that's a big part of it too, and one many events try to replicate with a dedicated chat channel or scheduled social event. But the missing ingredient isn't the hallway, it's the people in the hallway, and your physical proximity to them.</p>
<h4>Being seen</h4>
<p>Outside of speakers and organizers, there's very little opportunity for attendees to be seen at an online event. This may sound trivial, but it's a big way that newcomers to your community used to get involved. Being able to meet someone new over lunch, strike up a conversation with another attendee after a talk, or even just make small talk about your t-shirt or backpack was a way that people built connections and relationships with each other at these events. Those connections are valuable both personally and professionally, and though we may not have thought much about them when we were attending in-person events, you will feel the loss of them when they're gone. This will be felt especially hard by newcomers, who don't already have a circle of people to catch up with online, and will have a much harder time building such a circle now that the physical proximity is gone. As a result even your online social events will likely be made up of those with the biggest existing social network, and focus on that social network, to the exclusion of everyone not already in it.</p>
<h4>Unexpected sessions</h4>
<p>As was highlighted above, when you're at an event in person it really doesn't make sense to skip an hour of sessions just because none of them seem interesting. So we'll often attend a session that we otherwise wouldn't, simply because we don't have anything else to do. And many times these sessions turn out to be extremely helpful, insightful, thought-provoking, or all of the above. As a speaker I've had some of the best Q&A sessions with attendees who really had no reason to be in my talk; they just came along because there wasn't anything more interesting going on that hour. This casual mixing of interests and perspectives at an in-person event breathes new life into topics and prevents talks from becoming an echo chamber of the same people with the same opinions agreeing with each other.</p>
<h3>Looking forward</h3>
<p>We're all going to miss the conferences that we used to go to, and I for one hope that most of them will go back to being in-person after this pandemic goes away. There are a number of benefits you get from having an online event, but also a number of benefits we are going to lose, because they're impossible to replicate in anything but an in-person event. Whether your event goes permanently online, or this is just a temporary measure to keep it going during this unprecedented time, I hope that the experience and opinions presented here will increase your chances of making it a success. Because I want all of you to be successful, I want your conferences to continue bringing people together and sending information out, and for our industry to keep being the kind of place where I've made so many lifelong friendships and professional connections.</p>
<h2><a href="https://mhall119.com/blog/joining-influxdata-and-the-future-of-time-series-data/">Joining InfluxData and the future of time-series data</a></h2>
<p><em>2019-12-06</em></p>
<h3>Joining Influx</h3>
<p><img alt="A welcome note from InfluxData PeopleOps" src="https://mhall119.com/static/media/uploads/welcome_package.jpg" style="float: right;" width="300"/>After supporting developer relations for the Linux Foundation's LF Edge projects for the past 18 months, I've decided to get back into Community Management. So I'm excited that next year I will be joining the team at InfluxData helping them build a community around the software they are developing. Anybody who knows me knows how big I am about advocating for Open Source Software, both from an engineering and community perspective, and InfluxData has a number of solid open source products they develop, such as InfluxDB & Telegraf, each with a large user community already. I see an opportunity here to build and grow that community into a driving force behind these products, working together with InfluxData to enhance them to meet the next generation of engineering needs coming with the Internet of Things and Edge computing.</p>
<p>One of the things that made InfluxData stand out as a company I wanted to work with was the level of excitement, passion, and most importantly humanity that I felt when talking to everybody there. Almost everyone I talked to, from the CEO to PeopleOps to Engineers, talked about the company's principles of working together, with humility, and not being afraid of failure. It wasn't just window dressing, everybody there embraced those values and it came through in how happy and excited they were about their jobs. And of course it didn't hurt that I'll be working again with Rick Spencer and Will Cooke, both good friends from our time working together on Ubuntu.</p>
<h3>Flux language</h3>
<p>For me, one of the most exciting things coming out of InfluxData right now is their Flux language. I first heard about Flux at AllThingsOpen 2019, where <a href="https://allthingsopen.org/talk/2-for-1-building-a-digital-twin-with-sensor-data-opensync-open-source-for-cloud-to-device-enabled-services/" target="_blank">David Simmons used it in his presentation</a>. I was immediately struck by how useful it was to have a language designed for processing streams of time-series data, not just with InfluxDB, but when doing anything at all with IoT sensors and computing on the edge. At the time I was interested to know if it could be used as a generic programming language, independently of InfluxDB, because the EdgeX Foundry project was looking for a replacement to their legacy rules-engine service. So I was especially excited to find out during my interviews that this is precisely the future that InfluxData has in mind for it, and they want that future to be community-powered!</p>
<h3>Be a part of the community</h3>
<p>Interviewing with InfluxData has been one of the most pleasant and efficient interviewing processes of my life, and I've had a lot of them. I think this is a good reflection of the culture they've built inside the company, and the culture they want to bring to the community around it as well. I'm really looking forward to the opportunity they are creating, and to being able to bring the community into that opportunity. There's no doubt that IoT & Edge are beginning to transform our industry in the same way that the Cloud did 10 years ago, and time-series data is at the heart of that. If you're already in this space, or interested in joining it to stay on the leading edge of the digital transformation, come and join us!</p>
<h2><a href="https://mhall119.com/blog/turn-your-raspberrypi-into-an-smart-iot-device-no-coding-required/">Turn your RaspberryPi into a Smart IoT Device, no coding required!</a></h2>
<p><em>2019-10-03</em></p>
<p>A lot of us have a Raspberry Pi, many of us have more than one, and some probably have an excessive amount. And who can blame us? A micro-computer that you can plug regular USB peripherals into, has HDMI out, wireless networking (on later models), low power consumption, and can run your favorite Linux distro, all for just $35? Frankly it's amazing we don't all have more than we do. But what are we doing with them?</p>
<p>In a recent, purely non-scientific <a href="https://twitter.com/mhall119/status/1144268408613158913">Twitter poll</a> I asked my followers just that: What are you using your RaspberryPi for? Not surprisingly some people are running a Linux desktop on them, and several are running fun or useful software on them like PiHole or RetroPi, or even just using them as a mini file server. But only about 20% of the people who responded were making use of one fantastic thing about the Pi: its GPIO pins. In fact, <b>most people (35%) said their Pi was just sitting on a shelf not doing anything!</b></p>
<p>Well my friends, nothing makes this foodie nerd sadder than a wasted Pi. So blow the dust off that little board of yours and grab some LEDs or whatever sensors you might have, because in 10 minutes we're going to turn that shelf decoration of yours into a functional smart IoT device!</p>
<p><img alt="" height="490" src="https://lh3.googleusercontent.com/FnPaNtfPPlH6_F7aMvWUYvmDJawX49P3JCC-BMxpuGniSHWZpIg4Mt9yU1dUhwK1fOVWJzD1f6U-36Nd1q5aOE6NTFVDNtvqBw-iePoMfxO25kOtSHgOHf-n6ZwYr2_BTtuy2PLJUdjAWijntBd_eal7J3JNjMakhAbuY8wkylejKHmQCtasftJaJNImnwUxOFR5FGr74c2dutpTJ8my4aLfYhnZnzpskpyRZxa6F8a3d7ND6bVTlv2lqfT0VtxAMGsL_-ZsIpaUNFoCYN27F8rLHpzRQ6m5hj4_k9xsQbr4ybupLo_UWqSOx62BloGFn0WENP8phx-t1N3MJgUlkbGcLO-KrfIv2yIqExMh5QSsTtLcams2czMiAXn9KtGp2L5WJfU0gRZ8SnRqhUln6_czVLrvT0BUmrzJobWfOsB7xjm1HQXYviTg61aZaIKcZ6NWY-8e2XmIF92JJEayb1igrVpd_7OTGis6AFW2uZUmpiWE4e4fzHEKcJBuX3cIMX0_ZTlSbRSi7lSN0Qtm_HU5KM1KKNO_pHs5n4n-vHCcns-Dzbvl3FqkPGv9Gh-lphQOsuGaX2AlfGeZ6-fcqPB9qC-tUE6cpKz0JYXW4hNwNOm0LDIm2DkWy7Lq9Ff5mxs7wgAurM52lTZFhJBKg12Kwxjtwn_6TACAbM6l4BixS6M-y-fIhzk=w1243-h932-no" width="653"/></p>
<h1><span>EdgeX for IoT Edge</span></h1>
<p><span>There are a lot of open source projects for hacking on your Pi, but today I’m going to focus on the </span><a href="https://www.edgexfoundry.org"><span>EdgeX Foundry</span></a><span>, a Linux Foundation/</span><a href="https://www.lfedge.org"><span>LF Edge</span></a><span> project. While not specifically developed to run on the Pi (EdgeX was designed for large-scale industrial IoT use), its micro-service architecture and device SDKs make it easy to deploy at any scale, and flexible enough to be used for the custom IoT device that we’re going to build.</span></p>
<p>EdgeX is, as the name implies, made for Edge Computing. What is Edge Computing, you may ask? It's a way of putting the software that deals with real-world data closer to the real world, so that data can be processed faster and at a larger scale.</p>
<h2><span>Running EdgeX</span></h2>
<p>In this guide we're going to run an EdgeX Device Service on our RaspberryPi, and the rest of the EdgeX services on your desktop/laptop. You can run all of EdgeX on the Pi, but to do that you need a 64-bit OS (to run MongoDB), which rules out using the standard Raspbian OS. So to keep things simple we'll split the services up.</p>
<p><span>For this tutorial we are going to use the pre-made </span><a href="https://github.com/mhall119/edgex-device-rpi/tree/edinburgh/examples/MotionDetector"><span>Motion Detector example</span></a><span> from the EdgeX <a href="https://github.com/mhall119/edgex-device-rpi">RaspberryPi Device Service</a> project. </span></p>
<h3><span>Launching EdgeX in Docker</span></h3>
<p><span>This example project comes with a docker-compose.yml file which will launch all of the EdgeX Foundry services, as well as the NodeRed service which will provide the dashboard and control logic for the example, and Mosquitto to provide an MQTT connection between EdgeX and Node Red.</span></p>
<p><span>To launch these services, copy the docker-compose.yml file to your desktop machine, and in the same directory run</span></p>
<pre><span>docker-compose up -d</span></pre>
<p><span>This will bring up all of the docker containers (downloading images from Docker Hub as necessary) and will run them all as background daemons (the -d option). </span></p>
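<p>Before moving on, it's worth confirming that everything actually came up. These generic docker-compose commands will work from the same directory (the exact service names will match whatever is defined in the example's docker-compose.yml):</p>
<pre>docker-compose ps        # each service should show a State of "Up"
docker-compose logs -f   # follow the logs from all services (Ctrl-C to stop)</pre>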
<h3><span>Configuring NodeRed</span></h3>
<p><span>The above command will start Mosquitto and NodeRed, but they still need to be configured to talk to the EdgeX services. The next steps are to connect EdgeX to Mosquitto, and then to connect Mosquitto to NodeRed.</span></p>
<p><span>For the first step, the Motion Detector example provides a script which you can run on the same machine as your EdgeX Foundry services which will register a new Export Service for your Mosquitto instance:</span></p>
<pre><span>./register_mqtt_export.sh</span></pre>
<p><span>The second step needs to be done in the NodeRed dashboard, which you can access from </span><a href="http://localhost:1880/"><span>http://localhost:1880/</span></a><span> (substituting localhost with the IP of the machine running the NodeRed docker service if it’s a different machine). From there you can import the flow file `</span><span>motion_detector_flow.json` </span><span>which is also provided in the Motion Detector example folder. </span></p>
<p><img alt="" src="https://github.com/mhall119/edgex-device-rpi/blob/edinburgh/examples/MotionDetector/nodered_clipboard.png"/><img alt="" height="255" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/nodered_clipboard.png" width="578"/></p>
<p><img alt="" height="371" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/nodered_import.png" width="533"/></p>
<p><span>This will not only connect NodeRed to Mosquitto, it also contains the logic to turn the LEDs on and off in response to data coming from the motion sensor (more on that just below). Click the </span><b>Deploy</b><span> button at the top of your Node-Red screen to start using this new flow.</span></p>
<p>Now that your connection between EdgeX and NodeRed is complete, it's time to start the RaspberryPi Device Service.</p>
<h1><span>Installing the RaspberryPi Device Service</span></h1>
<h2><span>Setup your RaspberryPi</span></h2>
<p><span>This tutorial expects that you are using the latest stable version of </span><a href="https://www.raspberrypi.org/documentation/installation/installing-images/"><span>Raspbian</span></a><span> on your Pi. If you already have some other Linux distro on there you should still be able to follow along, but might need different commands to install all of the dependencies.</span></p>
<h3><span>Install dependencies</span></h3>
<p>The first thing you'll need to do is install some build dependencies. Don't worry, you won't be writing any code, you just need them to build the EdgeX RaspberryPi Device Service. On Raspbian (or Ubuntu) you can do that with:</p>
<pre><span>sudo apt install git cmake curl g++ libcurl4-openssl-dev libmicrohttpd-dev libyaml-dev uuid-dev</span></pre>
<p><span>You will also need `</span><span>libcbor-dev</span><span>`, which will be in newer releases of Raspbian, but for now we need to fetch it manually:</span></p>
<pre>wget <a href="http://ftp.us.debian.org/debian/pool/main/libc/libcbor/libcbor0_0.5.0+dfsg-2_armhf.deb">http://ftp.us.debian.org/debian/pool/main/libc/libcbor/libcbor0_0.5.0+dfsg-2_armhf.deb</a><br/>sudo dpkg -i libcbor0_0.5.0+dfsg-2_armhf.deb<br/>wget <a href="http://ftp.us.debian.org/debian/pool/main/libc/libcbor/libcbor-dev_0.5.0+dfsg-2_armhf.deb">http://ftp.us.debian.org/debian/pool/main/libc/libcbor/libcbor-dev_0.5.0+dfsg-2_armhf.deb</a><br/>sudo dpkg -i libcbor-dev_0.5.0+dfsg-2_armhf.deb</pre>
<h3><span>Building the Device Service</span></h3>
<p>Now we have everything needed to build the EdgeX RaspberryPi Device Service, except for the code for the service itself. We'll get that from GitHub:</p>
<pre>git clone <a href="https://github.com/mhall119/edgex-device-rpi">https://github.com/mhall119/edgex-device-rpi</a><br/>cd edgex-device-rpi</pre>
<p><span>Building the service takes two steps. First, build the dependencies:</span></p>
<pre><span>sudo ./scripts/build_deps.sh</span></pre>
<p><span>This command will download and build both the libmraa and device-c-sdk libraries needed to build the edgex-device-rpi executable.</span></p>
<p><span>Next, build the binary itself:</span></p>
<pre><span>./scripts/build.sh</span></pre>
<p><span>If everything went according to plan, you should now have a binary in `</span><span>./build/release/device-rpi</span><span>`</span></p>
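<p>A quick sanity check, using the path from the build script above:</p>
<pre>file ./build/release/device-rpi   # should report a 32-bit ARM ELF executable</pre>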
<h2><span>Connecting your RaspberryPi</span></h2>
<p>The RaspberryPi Device Service can be used to read and write from anything you have connected to your Pi's GPIO pins. To run the Motion Detector example you will need a passive infrared (PIR) sensor, a couple of LEDs, and two 220-ohm resistors (optional, but they protect your LEDs).<br/><i>(Hint: If you don't have a PIR sensor, try the <a href="https://github.com/mhall119/edgex-device-rpi/tree/edinburgh/examples/Blink">Blink example</a> instead.)</i></p>
<p><img height="447" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/wiring.png" width="653"/></p>
<p>Connect your Pi to the PIR sensor and LEDs as shown in the wiring diagram above (an optional way to sanity-check the wiring follows the list):</p>
<ul>
<li><span>Make sure that the PIR sensor is connected to the </span><b>5v pin</b><span> and not the 3.3v pin on the Raspberry Pi.</span></li>
<li><span>The output pin on the PIR should be connected to </span><b>pin #7</b><span> on the Raspberry Pi.</span></li>
<li><span>The Green LED should be connected to </span><b>pin #11</b><span> on the Pi, and the Red LED connected to </span><b>pin #12</b><span>.</span></li>
<li><span>The ground rail on your breadboard is connected to one of the </span><b>gnd</b><span> pins on the Raspberry Pi.</span></li>
</ul>
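<p>For that optional wiring check, you can toggle the green LED by hand through the kernel's sysfs GPIO interface before involving EdgeX at all. Note the numbering difference: sysfs uses BCM GPIO numbers, and physical pin #11 on the header corresponds to BCM GPIO 17 (worth double-checking against your Pi's pinout diagram):</p>
<pre>sudo sh -c 'echo 17 > /sys/class/gpio/export'            # expose BCM GPIO 17
sudo sh -c 'echo out > /sys/class/gpio/gpio17/direction' # configure it as an output
sudo sh -c 'echo 1 > /sys/class/gpio/gpio17/value'       # green LED should light up
sudo sh -c 'echo 0 > /sys/class/gpio/gpio17/value'       # and turn back off
sudo sh -c 'echo 17 > /sys/class/gpio/unexport'          # release the pin</pre>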
<h3><span>Defining your custom device</span></h3>
<p><span>Now that you have your sensors connected to your RaspberryPi, we need to tell EdgeX about it. In EdgeX, a Device Profile is used to describe a device’s capabilities, both data that can be read from it, and activation commands that can be sent down to it. It also lets you give additional information to the Device Service itself about how to communicate with the Device.</span></p>
<p><span>We are going to take advantage of all of those Device Profile capabilities to describe the sensors and connections we just made. Since we are using the Motion Detector example, we will use the pre-defined </span><a href="https://github.com/mhall119/edgex-device-rpi/blob/edinburgh/examples/MotionDetector/RPi_Motion_Detector.yaml"><span>device profile</span></a><span> for it. There’s nothing you need to do here, but let’s take a look at it anyway:</span></p>
<p>In the <code>deviceResources</code> section you will see that we have defined the two <strong>LEDs</strong> and a <b>MotionState</b> parameter. Notice also that each of these has a set of attributes, including <b>Pin_Num</b> and <b>Type</b>, which are how we tell the Device Service which pins to read or write in order to access those resources. You can connect any lights, sensors, or other electronics you want to your Pi, simply by defining which pins they are connected to and whether they are an input or output.</p>
<pre>deviceResources:
- name: <b>Green_LED</b>
  description: "Turn the Green LED to On/Off"
  attributes:
    <b>{ Pin_Num: "11", Interface: "GPIO", Type: "OUT" }</b>
  properties:
    value:
      { type: "Bool", readWrite: "RW", size: "1", minimum: "0", maximum: "1", defaultValue: "0" }
    units:
      { type: "String", readWrite: "R", defaultValue: "Enabled/Disabled" }</pre>
<p>The next section is <code>deviceCommands</code>, which is where we expose the internal <code>deviceResources</code> to the rest of EdgeX. Since we are going to expose everything, you see a command defined for each resource.</p>
<pre>deviceCommands:
- name: <b>Set_Green_Led</b>
  set:
  <b>- { operation: "set", object: "Green_LED", property: "value", parameter: "Green_LED" }</b></pre>
<p>The final section lets us tell EdgeX what parts of the device can be exposed to applications. We do that by defining <code>coreCommands</code> and the API for accessing them. These last two sections might feel a little redundant, but that's only because our simple use case is exposing all of our sensors directly. In a real-world example there would be bigger differences between these sections.</p>
<pre>coreCommands:
- name: <b>Set_Green_Led</b>
  put:
    <b>path: "/api/v1/device/{deviceId}/Set_Green_Led"</b>
    <b>parameterNames: ["Green_LED"]</b></pre>
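<p>To make that concrete, here is a hypothetical invocation of the Set_Green_Led command defined above, sent to the path from the profile. The host, port, and device ID are all placeholders; the real values depend on your device service's configuration and the metadata EdgeX assigns:</p>
<pre># Hypothetical example: turn the green LED on (all bracketed values are placeholders)
curl -X PUT "http://&lt;pi-ip&gt;:&lt;device-service-port&gt;/api/v1/device/&lt;deviceId&gt;/Set_Green_Led" \
     -d '{"Green_LED": "1"}'</pre>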
<h2><span>Running the RaspberryPi Device Service</span></h2>
<p>By default, EdgeX services will look for each other on the same host (localhost). This is true even for device services written with the SDKs, such as the RaspberryPi device service we're using. So, because we're running our device service on a different machine than the rest of EdgeX, we will need to tell it where to find those other services, and also tell those services where to find it.</p>
<p><span>To do that, open the configuration.toml on your RaspberryPi and make the following changes:</span></p>
<p><span>At the top of the file, replace the <code>Host</code> property with the IP address of your RaspberryPi</span></p>
<p><img alt="" height="147" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/config_service.png" width="320"/><br/><br/></p>
<p><span>In the [Clients] section, update the <code>Host</code> properties for <code>Data</code> and <code>Metadata</code> with the IP address of the machine running the EdgeX services you started in the first step.</span></p>
<p><img alt="" height="151" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/config_clients.png" width="258"/><br/><br/></p>
<p>Then you can start the device-rpi service on your RaspberryPi using the MotionDetector example:</p>
<pre><span>./build/release/device-rpi --confdir ./examples/MotionDetector</span></pre>
<h3><span>Watch it go!</span></h3>
<p><span>You will now have a Node-Red dashboard at </span><a href="http://localhost:1880/ui/"><span>http://localhost:1880/ui/</span></a><span> where you can watch motion events come in and also control the Red LED on your board.</span></p>
<p><img alt="" height="483" src="https://raw.githubusercontent.com/mhall119/edgex-device-rpi/edinburgh/examples/MotionDetector/nodered_dashboard.png" width="653"/></p>Joining the Linux Foundation2018-07-04T12:56:59+00:002024-03-18T21:47:16+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/joining-the-linux-foundation/<p>This week I began a new chapter in my career by joinging the <a href="https://www.linuxfoundation.org/" target="_blank">Linux Foundation</a> as a developer advocate and community manager for the <a href="https://www.edgexfoundry.org/" target="_blank">EdgeX Foundry</a>, an open platform for IoT edge computing.</p>
<p><img alt="" height="75" src="https://mhall119.com/static/media/uploads/.thumbnails/edgex-foundry-mark.png/edgex-foundry-mark-509x75.png" style="display: block; margin-left: auto; margin-right: auto;" width="509"/></p>
<p>I started using open source before I even knew what it was. Perl was my first programming language, and so installing libraries from CPAN became a routine task (as well as a routine challenge on SunOS). I posted my first open source code on SourceForge soon after, still thinking of it as a way for hobbyists to share their hobby, but not as something serious developers or companies would do. I still remember the feeling I had when Netscape announced that the next version of their browser, Netscape Navigator 5, would be released as open source. As a web developer in the late 90's, Netscape was <em>the</em> killer app, the king of the hill, the virtual monopoly that was leaps and bounds ahead of IE4. For them to release their source code in a way that let other people see it, copy it, even improve on it, was revolutionary. And it changed forever the way I thought about open source.</p>
<p>Of course, anybody else who lived through those turbulent times knows how that Netscape 5 story actually turned out, not because it was open source but because of business decisions and buyouts (thanks AOL!) that kept pulling the development one way and then the other. But my own journey into open source was much more straightforward. I dove in completely, releasing everything I could under an open license, and using as much openly licensed software as possible. I bought (yes bought) my first copy of Linux from Best Buy in 1999, and switched my desktop permanently in 2006 when Canonical <a href="https://blog.ubuntu.com/2011/04/05/shipit-comes-to-an-end" target="_blank">mailed me</a> a free CD of Dapper Drake. Five years later I would join Canonical myself, and eventually land on the Community Team where I was building new communities and growing existing ones around Ubuntu and all its upstreams and downstreams. Last year I was doing the same at Endless Computers, bringing the benefits of open technology to users in some of the most remote and disconnected parts of the world.</p>
<p><img alt="Dinner in Yogyakarta" height="479" src="https://mhall119.com/static/media/uploads/dinner_in_yogyakarta.jpg" width="720"/><br/><br/></p>
<p>So having the opportunity to join the <a href="https://www.linuxfoundation.org/" target="_blank">Linux Foundation</a> is a dream come true for me. I've seen first hand how collaboration on common technology leads to more and better innovation across the board, and that is the core idea behind the Linux Foundation. I'm excited to be joining the <a href="https://www.edgexfoundry.org/" target="_blank">EdgeX Foundry</a>, which will play a crucial role in developing the way the rapidly expanding number of IoT devices are going to connect and communicate with the already massive cloud ecosystem. I will be working to improve the way new developers get started using and contributing to EdgeX Foundry, as well as teaching new organizations about the benefits of working together to solve this difficult but shared problem. I look forward to bringing my past experience in desktop, mobile and cloud developer communities into the IoT space, and working with developers across the world to build a vibrant and welcoming community at the network edge.</p>
<p><img src="https://mhall119.com/static/media/uploads/edgex-foundry-icon.png" style="display: block; margin-left: auto; margin-right: auto;" width="300"/></p>On the hunt for new opportunities2018-06-11T14:08:02+00:002024-03-19T01:47:46+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/on-the-hunt-for-new-opportunities/<p>Recently I, and several of my coworkers, were let go from Endless as they continue look for ways to accomplish their mission of empowering the world with technology. I was with Endless for right at one year, though it seems much longer than that. During my brief time there I learned so much, met so many wonderful people, and got a taste of life beyond the confines of North America and Europe. I am grateful for the opportunity that Endless gave me, and wish them only success in the future.</p>
<p><img height="482" src="https://mhall119.com/static/media/uploads/indonesia.jpg" width="650"/></p>
<p>Moving on from Endless so soon was quite unexpected, and I'm still deciding where I want to go from here. I remain passionate about building communities and free software, but I've also missed having my hands deep in actual code. The time feels right to challenge myself with something new, whether that's a new audience or a new kind of company, different from what I've been doing for so many years.</p>
<p>I have a particular set of skills, skills I've developed over two decades of building code and community, that I want to employ fully in doing something important, something disruptive, something transformative. If that sounds like what your company needs right now, shoot me an email at <a href="mailto:mhall119@gmail.com">mhall119@gmail.com</a>.</p>
<h2><a href="https://mhall119.com/blog/leaving-canonical-for-endless-new-possibilities/">Leaving Canonical for Endless new possibilities</a></h2>
<p><em>2017-04-26</em></p>
<p>After a little over 6 years, I am embarking on a new adventure. Today is my last day at Canonical, and it's bittersweet saying goodbye precisely because it has been such a joy and an honor to work here with so many amazing, talented and friendly people. But I am leaving by choice, and for an opportunity that makes me as excited as leaving makes me sad.</p>
<h2>Goodbye Canonical</h2>
<p><img alt="malta" class="alignright size-medium wp-image-2595" height="200" src="https://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/05/malta-300x200.jpg" width="300"/></p>
<p>I’ve worked at Canonical longer than I’ve worked at any company, and I can honestly say I’ve grown more here both personally and professionally than I have anywhere else. It launched my career as a Community Manager, learning from the very best in the industry how to grow, nurture, and excite a world full of people who share the same ideals. I owe so many thanks (and beers) to Jono Bacon, David Planella, Daniel Holbach, Jorge Castro, Nicholas Skaggs, Alan Pope, Kyle Nitzsche and now also Martin Wimpress. I also couldn’t have done any of this without the passion and contributions of everybody in the Ubuntu community who came together around what we were doing.</p>
<p>As everybody knows by now, Canonical has been undergoing significant changes in order to set it down the road to where it needs to be as a company. And while these changes aren’t the reason for my leaving, it did force me to think about where I wanted to go with my future, and what changes were needed to get me there. Canonical is still doing important work, I’m confident it’s going to continue making a huge impact on the technology and open source worlds and I wish it nothing but success. But ultimately I decided that where I wanted to be was along a different path.</p>
<p>Of course I have to talk about the Ubuntu community here. As big of an impact as Canonical had on my life, it's only a portion of the impact that the community has had. From the first time I attended a Florida LoCo Team event, I was hooked. I had participated in open source projects before, but that was when I truly understood what the open source <em>community</em> was about. Everybody I met, online or in person, went out of their way to make me feel welcome, valuable, and appreciated. In fact, it was the community that led me to work for Canonical in the first place, and it was the community work I did that played a big role in me being qualified for the job. I want to give a special shout out to Daniel Holbach and Jorge Castro, who built me up from a random contributor to a project owner, and to Elizabeth Joseph and Laura Faulty who encouraged me to take on leadership roles in the community. I've made so many close and lasting friendships by being a part of this amazing group of people, and that's something I will value forever. I was a community member for years before I joined Canonical, and I'm not going anywhere now. Expect to see me around on IRC, mailing lists and other community projects for a long time to come.</p>
<h2>Hello Endless</h2>
<p><img alt="Endless" class="alignright size-medium wp-image-2592" height="165" src="https://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/05/endless-OS-300x165.jpg" width="300"/></p>
<p>Next week I will be joining the team at <a href="https://endlessos.com/">Endless</a> as their Community Manager. Endless is an order of magnitude smaller than Canonical, and they have a young community that is still getting off the ground. So even though I'll have the same role I had before, there will be new and exciting challenges involved. But the passion is there, both in the company and the community, to really explode into something big and impactful. In the coming months I will be working to set up the tools, processes and communication that will be needed to help <a href="https://community.endlessm.com/">that community</a> grow and flourish. After meeting with many of the current Endless employees, I know that my job will be made easier by their existing commitment to both their own community and their upstream communities.</p>
<p>What really drew me to Endless was the company’s mission. It’s not just about making a great open source project that is shared with the world, they have a specific focus on social good and improving the lives of people who the current technology isn’t supporting. As one employee succinctly put it to me: <strong>the whole world, empowered.</strong><span> </span>Those who know me well will understand why this resonates with me. For years I’ve been involved in open source projects aimed at early childhood education and supporting those in poverty or places without the infrastructure that most modern technology requires. And while Ubuntu covers much of this, it wasn’t the primary focus. Being able to work full time on a project that so closely aligned with my personal mission was an opportunity I couldn’t pass up.</p>
<h2>Broader horizons</h2>
<p>Over the past several months I’ve been expanding the number of communities I’m involved in. This is going to increase significantly in my new role at Endless, where I will be working more frequently with upstream and side-stream projects on areas of mutual benefit and interest. I’ve already started to work more with KDE, and I look forward to becoming active in GNOME and other open source desktops soon.</p>
<p>I will also continue to grow my independent project, <a href="https://raisingphoenicia.com/">Phoenicia</a>, which has a similar mission to Endless but a different technology and audience. Now that it's no longer competing in the XPRIZE competition, we're released from some of the restrictions we had to operate under, and free to investigate new areas of innovation and collaboration. If you're interested in game development, or making an impact on the lives of children around the world, <a href="https://github.com/Linguaculturalists/Phoenicia">come and see</a> what we're doing.</p>
<p>If anybody wants to reach out to me to chat, you can still reach me at <a href="mailto:mhall119@ubuntu.com">mhall119@ubuntu.com</a> and soon at <a href="mailto:mhall119@endlessm.com">mhall119@endlessm.com</a>, tweet me at <a href="https://twitter.com/mhall119">@mhall119</a>, connect on <a href="https://www.linkedin.com/in/mhall119/">LinkedIn</a>, chat on <a href="https://t.me/mhall119">Telegram</a> or circle me on <a href="https://plus.google.com/u/0/+MichaelHall119">Google+</a>. And if we're ever at a conference together give me a shout, I'd love to grab a drink and catch up.</p>
<h2><a href="https://mhall119.com/blog/machine-learning-with-snaps/">Machine Learning with Snaps</a></h2>
<p><em>2017-03-24</em></p>
<p><img alt="" class="pull-right size-medium" height="200" src="https://d0.awsstatic.com/asset-repository/products/Amazon%20Machine%20Learning/200x200_social_machine-learning.png" width="200"/> Late last year Amazon introduced a <a href="https://aws.amazon.com/marketplace/pp/B01M0AXXQB" target="_blank">new EC2 image</a> customized for Machine Learning (ML) workloads. To make things easier for data scientists and researchers, Amazon worked on including a selection of ML libraries in these images so they wouldn't have to go through the process of downloading and installing them (and often times building them) themselves.</p>
<p>But while this saved work for the researchers, it was no small task for Amazon's engineers. To keep offering the latest version of these libraries they had to repeat this work every time there was a new release, which was quite often for some of them. Worst of all, they didn't have a ready-made way to update those libraries on instances that were already running!</p>
<p>By this time they'd heard about Snaps and the work we've been doing with them in the cloud, so they asked if it might be a solution to their problems. Normally we wouldn't Snap libraries like this; we would encourage applications to bundle them into their own Snap package. But these libraries had an unusual use-case: the applications that needed them weren't meant to be distributed. Instead the application would exist to analyze a specific data set for a specific person. So as odd as it may sound, the application developer was the end user here, and the library was the end product, which made it fit into the Snap use case.</p>
<p><a href="https://github.com/dmlc/mxnet"><img alt="Screenshot from 2017-03-23 16-43-19" class="size-full wp-image-2585 alignleft" height="218" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/03/Screenshot-from-2017-03-23-16-43-19.png" width="245"/></a></p>
<p>To get them started I worked on developing a proof of concept based on<span> </span><a href="http://mxnet.io/" target="_blank">MXNet</a>, one of their most used ML libraries. The source code for it is part C++, part Python, and Snapcraft makes working with both together a breeze, even with the extra preparation steps needed by MXNet’s build instructions. My<span> </span><a href="https://github.com/mhall119/mxnet/blob/1e09a8a8493b9a115ae15b91b641d32b7093eb87/snapcraft.yaml" target="_blank">snapcraft.yaml</a><span> </span>could first compile the core library and then build the Python modules that wrap it, pulling in dependencies from the Ubuntu archives and Pypi as needed.</p>
<p>This was all that was needed to provide a consumable Snap package for MXNet. After installing it you would just need to add the snap’s path to your LD_LIBRARY_PATH and PYTHONPATH environment variables so it would be found, but after that everything Just Worked! For an added convenience I provided a python binary in the snap, <a href="https://github.com/mhall119/mxnet/blob/1e09a8a8493b9a115ae15b91b641d32b7093eb87/snap.python" target="_blank">wrapped in a script</a><span> </span>that would set these environment variables automatically, so any external code that needed to use MXNet from the snap could simply be called with<span> </span><em>/snap/bin/mxnet.python</em><span> </span>rather than<span> </span><em>/usr/bin/python</em><span> </span>(or, rather, just<span> </span><em>mxnet.python</em><span> </span>because<span> </span><em>/snap/bin/</em><span> </span>is already in PATH).</p>
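<p>As a rough sketch of that setup, assuming the snap is installed under the name mxnet; the exact paths inside the snap depend on how it was built, so treat these as examples to adapt:</p>
<pre># Assumed layout; check the actual contents of /snap/mxnet/current first
export LD_LIBRARY_PATH=/snap/mxnet/current/lib:$LD_LIBRARY_PATH
export PYTHONPATH=/snap/mxnet/current/lib/python2.7/site-packages:$PYTHONPATH
python -c "import mxnet; print(mxnet.__version__)"</pre>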
<p>I’m now<span> </span><a href="https://github.com/dmlc/mxnet/pull/4852" target="_blank">working with upstream</a><span> </span>MXNet to get them building regular releases of this snap package to make it available to Amazon’s users and anyone else. The Amazon team is also seeking similar snap packages from their other ML libraries. If you are a user or contributor to any of these libraries, and you want to make it easier than ever for people to get the latest and greatest versions of them, let’s get together and make it happen! My MXNet example linked to above should give you a good starting point, and we’re always happy to help you with your snapcraft.yaml in #snapcraft on<span> </span><a href="https://rocket.ubuntu.com/" target="_blank">rocket.ubuntu.com</a>.</p>
<p>If you’re just curious to try it out ourself, you can<span> </span><a href="http://people.ubuntu.com/~mhall119/snaps/mxnet_0.9.3_amd64.snap">download my snap</a><span> </span>and then follow along with the<span> </span><a href="http://mxnet.io/tutorials/python/ndarray.html" target="_blank">MXNet tutorial</a>, using the above mentioned mxnet.python for your interactive python shell.</p>
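<p>That would look something like the following; the --dangerous flag is needed because a snap downloaded outside the store isn't signed:</p>
<pre>wget http://people.ubuntu.com/~mhall119/snaps/mxnet_0.9.3_amd64.snap
sudo snap install --dangerous mxnet_0.9.3_amd64.snap
mxnet.python    # drops you into a Python shell with MXNet ready to import</pre>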
<h2><a href="https://mhall119.com/blog/war-on-snaps/">War on Snaps</a></h2>
<p><em>2017-03-22</em></p>
<p>Java is a well-established language for developing web applications, in no small part because of its <a href="http://www.oracle.com/technetwork/java/javaee/overview/index.html" target="_blank">industry standard framework</a> for building them: Servlets and JSP. Another important part of this standard is the Web Archive, or WAR, file format, which defines how to provide a web application's executables and how they should be run in a way that is independent of the application server that will be running them.</p>
<p><a href="https://plumbr.eu/blog/java/most-popular-java-ee-containers-2015-edition" target="_blank"><img alt="application-server-market-share-2015" class="alignright wp-image-2576 size-medium" height="181" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/03/application-server-market-share-2015-300x181.png" width="300"/></a></p>
<p>WAR files make life easier for developers by separating the web <em>application</em> from the web <em>server</em>. Unfortunately this doesn't actually make it easier to deploy a webapp; it only shifts some of the burden off of the developers and on to the user, who still needs to set up and configure an application server to host it. One popular option is <a href="http://tomcat.apache.org/" target="_blank">Apache's Tomcat</a> webapp server, which is both lightweight and packs enough features to support the needs of most webapps.</p>
<p>And here is where Snaps come in. By combining both the application and the server into a single, installable package you get the best of both, and with a little help from<span> </span><a href="https://snapcraft.io/" target="_blank">Snapcraft</a><span> </span>you don’t have to do any extra work.</p>
<p>Snapcraft supports a modular build configuration by having multiple “<a href="https://snapcraft.io/docs/build-snaps/parts" target="_blank">parts</a>“, each of which provides some aspect of your complete runtime environment in a way that is configurable and reusable. This is extended to a feature called “remote parts” which are pre-defined parts you can easily pull into your snap by name. It’s this combination of reusable and remote parts that are going to make snapping up java web applications incredibly easy.</p>
<p>The remote part we are going to use is the “tomcat” part, which will build the Tomcat application server from upstream source and bundle it in your snap ready to go. All that you, as the web developer, need to provide is your .war file. Below is a simple snapcraft.yaml that will bundle Tomcat’s “sample” war file into a self-contained snap package.</p>
<pre>name: tomcat-sample
version: '0.1'
summary: Sample webapp using tomcat part
description: |
  This is a basic webapp snap using the remote Tomcat part
grade: stable
confinement: strict
parts:
  my-part:
    plugin: dump
    source: .
    organize:
      <b>sample.war: ./webapps/sample.war</b>
    after: [<b>tomcat</b>]
apps:
  tomcat:
    command: <b>tomcat-launch</b>
    daemon: simple
    plugs: [network-bind]
</pre>
<p>The important bits are the ones in bold; let’s go through them one at a time, starting with the part named “my-part”. This uses the simple “dump” plugin, which is just going to copy everything in its source (the current directory in this case) into the resulting snap. Here we have just the sample.war file, which we are going to move into a “webapps” directory, because that is where the Tomcat part is going to look for war files.</p>
<p>Now for the magic: by specifying that “my-part” should come after the “tomcat” part (using after: [tomcat]), which isn’t defined elsewhere in the snapcraft.yaml, we trigger Snapcraft to look for a remote part by that same name, which conveniently exists for us to use. This remote part will do two things: first it will download and build the Tomcat source code, and then it will generate a “tomcat-launch” shell script that we’ll use later. These two parts, “my-part” and “tomcat”, will be combined in the final snap, with the Tomcat server automatically knowing about and installing the sample.war webapp.</p>
<p>The “apps” section of the snapcraft.yaml defines the application to be run. In this simple example all we need to execute is the “tomcat-launch” script that was created for us. This sets up the Tomcat environment variables and runtime directories so that it can run fully confined within the snap. And by declaring it to be a simple daemon we are additionally telling it to auto-start as soon as it’s installed (and after any reboot), which will be handled by systemd.</p>
<p>Now when you run “snapcraft” on this config, you will end up with the file<span> </span><em>tomcat-sample_0.1_amd64.snap</em>, which contains your web application, the Tomcat application server, and a headless Java JRE to run it all. That way the only thing your users need to do to run your app is to “snap install tomcat-sample”, and everything will be up and running at<a href="http://localhost:8080/sample/" target="_blank"> http://localhost:8080/sample/</a> right away; no need to worry about installing dependencies or configuring services.</p>
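<p>As a quick sanity check, building and sideloading the result looks something like this (a sketch; the exact sideload flag depends on your snapd version, with older releases using --force-dangerous instead of --dangerous):</p>
<pre># Build the snap from the snapcraft.yaml above
snapcraft

# Sideload the unsigned result for local testing
sudo snap install --dangerous tomcat-sample_0.1_amd64.snap

# The sample webapp should now be answering
curl http://localhost:8080/sample/</pre>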
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/03/Screenshot-from-2017-03-21-14-16-59.png"><img alt="Screenshot from 2017-03-21 14-16-59" class="aligncenter size-medium wp-image-2573" height="172" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2017/03/Screenshot-from-2017-03-21-14-16-59-300x172.png" width="300"/></a></p>
<p>If you have a webapp that you currently deploy as a .war file, you can snap it yourself in just a few minutes: use the snapcraft.yaml defined above and replace the sample data with your own. To learn more about Snaps and Snapcraft in general you can follow<span> </span><a href="https://tutorials.ubuntu.com/tutorial/create-first-snap#0" target="_blank">this tutorial</a><span> </span>as well as learn how to <a href="https://snapcraft.io/create/#store-30" target="_blank">publish your new snap</a><span> </span>to the store.</p>Make your world a better place2016-10-24T12:00:00+00:002024-03-19T01:40:43+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/make-your-world-a-better-place/<p>For much of the past year I have been working on a game. No, not just a game; I’ve been working on change. There are 122 million children in the world today who can’t read or write<a href="http://www.unesco.org/new/en/education/themes/education-building-blocks/literacy/resources/statistics">[1]</a>. They will grow up to join the 775 million adults who can’t. Together that’s almost one billion people who are effectively shut off from the information age. How many of them could make the world a better place, given even half a chance?</p>
<p>I’ve been interested in the intersection of open source and education for underprivileged children for quite some time. I even built a Linux distro toward that end. So when Jono Bacon told me about a new XPRIZE contest to build open source software for teaching literacy skills to children in Africa, of course I was interested. And now, a little more than a year later, I have a game that I firmly believe can deliver that world changing ambition.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/10/device-2016-08-25-224444.png"><img alt="device-2016-08-25-224444" class="aligncenter size-medium wp-image-2566" height="211" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/10/device-2016-08-25-224444-300x211.png" width="300"/></a></p>
<p>This is where you come in. Don’t worry, I’m not going to ask you to help build my contest entry, though it is already open source (GPLv3) and on<span> </span><a href="https://github.com/XPRIZE/GLEXP-Team-The-Linguaculturalists">github</a>. But the contest entries only cover English and Kiswahili, which is going to leave a very large part of the illiterate population out. That’s not enough; to change the world, it needs to be available to the world. Additional languages won’t be part of the contest entry, but they will be a part of making the world a better place.</p>
<p>I designed Phoenicia from the beginning to be able to support as many languages as possible, with as little additional work as possible. But while it may be capable of handling multiple languages, I sadly am not. So I’m reaching out to the community to help me bring literacy to millions more children than I can reach by myself. Children who speak your language, live in your community, who may be your own neighbors.</p>
<p>You don’t need to be a programmer; in fact, there shouldn’t be any programming work needed at all. What I need are early reader words, for each language. From there I can show you how to<span> </span><a href="http://raisingphoenicia.com/localization/">build a locale pack</a>, record audio help, and add any new artwork needed to support your localization. I’m especially looking to those of you who speak French, Spanish and Portuguese, as those languages will carry Phoenicia into many countries where childhood illiteracy is still a major problem.</p>Desktop app snap in 300KB2016-09-27T12:00:00+00:002024-03-19T00:59:54+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/desktop-app-snap-in-300kb/<p>KDE Neon developer Harald Sitter was able to package up the KDE calculator, kcalc, in a snap that weighs in at a mere 320KB! How did he do it?</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/09/Screenshot-from-2016-09-27-13-40-16.png"><img alt="KCalc and KDE Frameworks snaps" class="aligncenter wp-image-2554 size-full" height="547" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/09/Screenshot-from-2016-09-27-13-40-16.png" width="865"/></a></p>
<p>Like most applications in KDE, kcalc depends on several KDE Frameworks (though not all), sets of libraries and services that provide the common functionality and shared UI/UX found in KDE and its suite of applications. This means that, while kcalc is itself a small application, its dependency chain is not. In the past, any KDE application snap had to include many megabytes of platform dependencies, even for the smallest app.</p>
<p>Recently I introduced the<span> </span><a href="http://mhall119.com/2016/09/sharing-is-caring-with-snaps/">new “content” interface</a><span> </span>that has been added to snapd. I used this interface to share plugin code with a text editor, but Harald has taken it even further and created a KDE Frameworks snap that can share the entire platform with applications that are built on it!</p>
<p>While still in the very early stages of development, this approach will allow the KDE project to deliver all of their applications as independent snaps, while still letting them all share the one common set of Frameworks that they depend on. The end result will be that you, the user, will get the very latest stable (or development!) version of the KDE platform and applications, direct from KDE themselves, even if you’re on a stable/LTS release of your distro.</p>
<p>If you are running a<span> </span><a href="http://snapcraft.io/docs/core/install">snap-capable distro</a>, you can try these experimental packages yourself by downloading<span> </span><a href="http://build.neon.kde.org/userContent/kde-frameworks-5_5.26_amd64.snap">kde-frameworks-5_5.26_amd64.snap</a><span> </span>and<span> </span><a href="http://build.neon.kde.org/userContent/kcalc_0_amd64.snap">kcalc_0_amd64.snap</a><span> </span>from Neon’s build servers, and installing them with “snap install --devmode --force-dangerous <snap_file>”. To learn more about how he did this, and to help him build more KDE application snaps, you can find Harald as <sitter> on<span> </span><a href="http://webchat.freenode.net/?channels=%23kde-neon&uio=d4">#kde-neon</a><span> </span>on Freenode IRC.</p>Sharing is caring, with Snaps!2016-09-02T12:00:00+00:002024-03-19T01:35:48+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/sharing-is-caring-with-snaps/<p>Snaps are a great way to get the most up to date applications on your desktop without putting the security or stability of your system at risk. I’ve been snapping up a bunch of things lately and the potential this new paradigm offers is going to be revolutionary. Unfortunately nothing comes for free, and the security of snaps comes with some necessary tradeoffs like isolation and confinement, which reduce some of the power and flexibility we’ve become used to as Linux users.</p>
<p>But now the developers of the snappy system (snapd, snap-confine and snapcraft) are giving us back some of that missing flexibility in the form of a new “content” interface which allows you to share files (executables, libraries, or data) between the snap packages that you develop. I decided to take this new interface for a test drive using one of the applications I had recently snapped:<span> </span><a href="http://geany.org/">Geany</a>, my editor of choice. Geany has the ability to load plugins to extend its functionality, and in fact has a set of plugins available in a separate GitHub repository from the application itself.</p>
<p>I already had a working snap for<span> </span><a href="https://github.com/ubuntu/snappy-playpen/tree/geany/geany">Geany</a>, so the next thing I had to do was create a snap for the plugins. Like Geany itself, the plugins are hosted on<span> </span><a href="https://github.com/geany/geany-plugins">GitHub</a><span> </span>and have a nice build configuration already, so turning them into a snap was pretty trivial. I used the autotools plugin in Snapcraft to pull the git source and build all of the available plugins. Because my Geany snap was built with Gtk+ 3, I had to build the plugins for the same toolkit, but other than that I didn’t have to do anything special.</p>
<pre>parts:
  all-plugins:
    plugin: <strong>autotools</strong>
    source: <strong>git@github.com:geany/geany-plugins.git</strong>
    source-type: git
    configflags: [<strong>--enable-gtk3=yes</strong>, --enable-all-plugins]</pre>
<p>Now that I had a geany.snap and geany-plugins.snap, the next step was to get them working together. Specifically I wanted Geany to be able to see and load the plugin files from the plugins snap, so it was really just a one-way sharing. To do this I had to create both a slot and a plug using the content interface. Usually when you’re building a snap you only use plugs, such as network or x11, because you are consuming services provided by the core OS. In those cases you also just have to provide the interface name in the list of plugs, because the interface and the plug have the same name.</p>
<p>But with the content interface you need to do more than that. Because different snaps will provide different content, and a single snap can provide multiple kinds of content, you have to define a new name that is specific to what content you are sharing. So in my geany-plugins<span> </span><a href="https://github.com/ubuntu/snappy-playpen/blob/geany/geany-plugins/snapcraft.yaml">snapcraft.yaml</a><span> </span>I defined a new kind of content that I called <em>geany-plugins-all </em>(because it contains all the geany plugins in the snap), and I put that into a slot called <em>geany-plugins-slot</em><span> </span>which is how we will refer to it later. I told snapcraft that this new slot was using the <em>content</em><span> </span>interface, and then finally told it what content to share across that interface, which for geany-plugins was the entire snap’s content.</p>
<pre>slots:
  <strong>geany-plugins-slot</strong>:
    content: <strong>geany-plugins-all</strong>
    interface: <strong>content</strong>
    read:
      - <strong>/</strong></pre>
<p>With that I had one half of the content interface defined. I had a geany-plugins.snap that was able to share all of its content with another snap. The next step was to implement the plug half of the interface in my existing<span> </span><a href="https://github.com/ubuntu/snappy-playpen/blob/geany/geany/snapcraft.yaml">geany.snap</a>. This time instead of using a<span> </span><em>slots:</em><span> </span>section I would define a <em>plugs:</em><span> </span>section, with a new plug named <em>geany-plugins-plug</em><span> </span>and again specifying the interface to be <em>content</em><span> </span>just like in the slot. Here again I had to specify the content by name, which had to match the <em>geany-plugins-all</em><span> </span>that was used in the slot. The names of the plug and slot are only relevant to the user who needs to connect them; it’s this content name that snapd uses to make sure they can be connected in the first place. Finally I had to give the plug a target directory for where the shared content will be put. I chose a directory called <em>plugins</em>, and when the snaps are connected the geany-plugins.snap content will be bind-mounted into this directory in the geany.snap.</p>
<pre>plugs:
  <strong>geany-plugins-plug</strong>:
    content: <strong>geany-plugins-all</strong>
    default-provider: <strong>geany-plugins</strong>
    interface: <strong>content</strong>
    target: <strong>plugins</strong></pre>
<p>Lastly I needed to tell snapcraft which app would use this interface. Since the Geany snap only has one, I added it there.</p>
<pre>apps:
  geany:
    command: gtk-launch geany
    plugs: [x11, unity7, home, <strong>geany-plugins-plug</strong>]</pre>
<p>Once the snaps were built, I could install them and the new plug and slot were automatically connected:</p>
<pre>$ snap interfaces
Slot                              Plug
geany-plugins:geany-plugins-slot  geany:geany-plugins-plug</pre>
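<p>On my system the connection was made automatically; if it isn’t on yours, you should be able to connect the two ends manually by naming the plug and slot from above:</p>
<pre>sudo snap connect geany:geany-plugins-plug geany-plugins:geany-plugins-slot</pre>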
<p>Now that put the plugins into the application’s snap space, but it wasn’t enough for Geany to actually find them. To do that I used Geany’s <strong>Extra plugin path</strong><span> </span>preferences to point it to the location of the shared plugin files.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/08/Screenshot-from-2016-08-30-16-27-12.png"><img alt="Screenshot from 2016-08-30 16-27-12" class="aligncenter size-medium wp-image-2544" height="205" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/08/Screenshot-from-2016-08-30-16-27-12-300x205.png" width="300"/></a></p>
<p>After doing that, I could open the <strong>Plugin manager</strong><span> </span>and see all of the newly shared plugins. Not all of them work, and some assume specific install locations or access to other parts of the filesystem that they won’t have being in a snap. The Geany developers warned me about that, but the ones I really wanted appear to work.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/08/Screenshot-from-2016-08-30-16-29-54.png"><img alt="Screenshot from 2016-08-30 16-29-54" class="aligncenter size-large wp-image-2546" height="360" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/08/Screenshot-from-2016-08-30-16-29-54-1024x576.png" width="640"/></a></p>My day of convergence2016-07-13T12:00:00+00:002024-03-19T02:06:14+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/my-day-of-convergence/<p>I’ve had a Nexus 4 since 2013, and I’ve been using it to test out desktop convergence (where you run a desktop environment from the phone) ever since that feature landed just over a year ago. Usually that meant plugging it into my TV via HDMI to make sure it automatically switched to the larger screen, and playing a bit with the traditional windowed-mode of Unity 8, or checking on adaptive layouts in some of the apps. I’ve also run it for hours on end as a demo at conferences such as SCaLE, FOSSETCON, OSCON and SELF. But through all that, I’ve never used it as an actual replacement for my laptop. Until now.</p>
<h2>Thanks Frontier</h2>
<p>A bit of back-story first. I had been a Verizon FiOS customer for years, and recently they sold all of their FiOS business to Frontier. The transition has been… less than ideal. A couple of weeks ago I lost all services (phone, TV and internet) and was eventually told that nobody would be out to fix it until the following day. I still had my laptop, but without internet access I couldn’t really do my job on it. And while Ubuntu on phones can offer up a Hotspot, that particular feature doesn’t work on the Nexus 4 (something something, driver, something). Which meant that the <strong>only</strong><span> </span>device that I had which could get online was my phone.</p>
<h2>No Minecraft for you</h2>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/07/13528720_10154238389913419_2608531900571217522_n.jpg"><img alt="13528720_10154238389913419_2608531900571217522_n" class="pull-right size-medium wp-image-2532" height="300" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/07/13528720_10154238389913419_2608531900571217522_n-165x300.jpg" width="165"/></a>Fortunately, the fact that I’ve been demoing convergence at conferences meant I had all of the equipment I needed to turn my phone into a desktop and keep right on working. I have a bluetooth mouse and keyboard, and a Slimport adapter that let’s me plug it into a bigger screen. But while a TV works for testing, it’s not really great for long-term work. Don’t get me wrong, working from the couch is nice, but the screen is just too far away for reading and writing. Fortunately for me, and unfortunately for my children, their computer is at a desk and is plugged into a monitor with HDMI ports. So I took it over for the day. They didn’t have internet either that day, so they didn’t miss out on much right?</p>
<h2>A day of observations</h2>
<p>Throughout the day I posted a series of comments on Google+ about my experience. You could go through my post history looking for them, but I’m not going to make you do that. So here’s a quick summary of what I learned:</p>
<ul>
<li>3G is not nearly fast enough for my daily work. It’s good when using my phone as a phone, doing one thing at a time. But it falls short of broadband when I’ve got a lot of things using it. Still, on that day it was better than my fiber optic service, so there’s that.</li>
<li>I had more apps installed on my phone than I thought I did. I was actually taken aback when I opened the Dash in desktop mode and I saw so many icons. It’s far more than I had on Android, though not quite as many as on my laptop.</li>
<li>Having a fully-functional Terminal is a lifesaver. I do a lot of my work from the terminal, including IRC, and having one with tabs<span> </span><em>and</em><span> </span>keyboard shortcuts for them is a must for me to work.</li>
<li>I missed having physical buttons on my keyboard for home/end and page up/down. Thankfully a couple of people came to my rescue in the comments and taught me other combinations to get those.</li>
<li>Unity 8<span> </span><em>is</em><span> </span>Unity. Almost all of the keyboard shortcuts that have become second nature to me (and there are a lot of them) were there. There was no learning curve; I didn’t have to change how I did anything or teach myself something new.</li>
<li>The phone is still a phone. I got a call (from Frontier, reminding me about an appointment that never happened) while using the device as a desktop. It was a bit disorienting at first; I had forgotten that I was running the desktop on the Nexus 4, so when a notification of an incoming call popped up on the screen I didn’t know what was happening. That only lasted a second though, and after clicking answer and picking up the device, I just used it as a phone. Pretty cool.</li>
</ul>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/07/screenshot20160701_151104996.png"><img alt="screenshot20160701_151104996" class="alignnone wp-image-2533 size-large" height="360" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/07/screenshot20160701_151104996-1024x576.png" width="640"/></a></p>
<h2>Must go faster</h2>
<p>While I was able to do pretty much all of my work that day thanks to my phone, it wasn’t always easy or fun, and I’m not ready to give up my laptop just yet. The Nexus 4 is simply not powerful enough for the kind of workload I was putting on it. But then again, it’s a nearly 4 year old phone, and wasn’t considered a powerhouse even when it was released. The newest Ubuntu phone on the market, the Meizu Pro 5, packs a whole lot more power, and I think it would be able to give a really nice desktop experience.</p>Dogfooding Unity 82016-05-10T12:00:00+00:002024-03-18T22:13:21+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/dogfooding-unity-8/<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_103257823.png"><img alt="screenshot20160506_103257823" class="size-medium wp-image-2503 pull-right" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_103257823-300x169.png" width="300"/></a><span></span>During the<span> </span><a href="http://summit.ubuntu.com/uos-1605/meeting/22652/ubuntu-personal-and-convergence-qa/" target="_blank">Ubuntu Online Summit</a><span> </span>last week, my colleague Daniel Holbach came up with what he called a “10 day challenge” to some of the engineering managers directing the convergence work in Ubuntu. The idea is simple: try to use only the Unity 8 desktop for 10 working days (two weeks). I thought this was a great way to really identify how close it is to being usable by most Ubuntu users, as well as finding the bugs that cause the most pain in making the switch. So on Friday of last week, with<span> </span><a href="http://summit.ubuntu.com/uos-1605/" target="_blank">UOS</a><span> </span>over, I took up the challenge.</p>
<p>Below I will discuss all of the steps that I went through to get it working to my needs. They are not the “official” way of doing it (there isn’t an official way to do all this yet) and they won’t cover every usage scenario, just the ones I faced. If you want to try this challenge yourself they will help you get started. If at any time you get stuck, you can find help in the #ubuntu-unity channel on Freenode, where the developers behind all of these components are very friendly and helpful.</p>
<h2>Getting Unity 8</h2>
<p>To get started you first need to be on the latest release of Ubuntu. I am using Ubuntu 16.04 (Xenial Xerus), which is the best release for testing Unity 8. You will also need the<span> </span><a href="https://launchpad.net/~ci-train-ppa-service/+archive/ubuntu/stable-phone-overlay" target="_blank">stable-phone-overlay</a><span> </span>PPA. Don’t let the name fool you, it’s not just for phones, but it is where you will find the very latest packages for Mir, Unity 8, Libertine and other components you will need. You can add it with this command:</p>
<pre>sudo add-apt-repository ppa:ci-train-ppa-service/stable-phone-overlay</pre>
<p>Then you will need to install the Unity 8 session package, so that you can select it from the login screen:</p>
<pre>sudo apt install unity8-desktop-session</pre>
<p><strong>Note: </strong>The package above used to be unity8-desktop-session-mir but was renamed to just unity8-desktop-session.</p>
<p><del>When I did this there was a bug in the libhybris package that was causing Mir to try and use some Android stuff, which clearly isn’t available on my laptop. The fix wasn’t yet in the PPA, so I had to take the additional step of installing a fix from our continuous integration system (Note: originally the command below used silo 53, but I’ve been told it is now in silo 31). If you get a black screen when trying to start your Unity 8 session, you probably need this too.</del></p>
<pre><del>sudo apt-get install phablet-tools phablet-tools-citrain
citrain host-upgrade 031</del></pre>
<p><strong>Note:</strong><span> </span>None of the above paragraph is necessary anymore.</p>
<p>This was enough to get Unity 8 to load for me, but all my apps would crash within a half second of being launched. It turned out to be a problem with the cgroups manager; specifically, the cgmanager service was disabled for me (I suspect this was leftover configuration from previous attempts at using Unity 8). After re-enabling it, I was able to log back into Unity 8 and start using apps!</p>
<pre>sudo systemctl enable cgmanager</pre>
<h2>Essential Core Apps</h2>
<p>The first thing you’ll notice is that you don’t have many apps available in Unity 8. I had probably more than most, having installed some Ubuntu SDK apps natively on my laptop already. If you haven’t installed the webbrowser-app already, you should. It’s in the Xenial archive and the PPA you added above, so just:</p>
<pre>sudo apt install webbrowser-app</pre>
<p>But that will only get you so far. What you really need are a terminal and file manager. Fortunately those have been created as part of the Core Apps project, you just need to install them. Because the Ubuntu Store wasn’t working for me (see bottom of this post) I had to manually<span> </span><a href="http://people.ubuntu.com/~mhall119/dogfooding-unity8/" target="_blank">download</a><span> </span>and install them:</p>
<pre>sudo click install --user mhall com.ubuntu.filemanager_0.4.525_multi.click
sudo click install --user mhall com.ubuntu.terminal_0.7.170_multi.click</pre>
<p>If you want to use these apps in Unity 7 as well, you have to modify their .desktop files located in ~/.local/share/applications/ and add the -x flag after<span> </span><em>aa-exec-click</em>; this is because, by default, it prevents these apps from running under X11, where they won’t have the safety of confinement that they get under Mir.</p>
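<p>Purely as an illustration (the real Exec lines and profile names will differ), the edit looks something like this:</p>
<pre># Hypothetical Exec line as installed:
Exec=aa-exec-click -p com.ubuntu.terminal_terminal_0.7.170 -- terminal-app
# The same line with the -x flag added right after aa-exec-click:
Exec=aa-exec-click -x -p com.ubuntu.terminal_terminal_0.7.170 -- terminal-app</pre>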
<p>The file manager needed a bit of extra effort to get working. It contains many Samba libraries that allow it to access Windows network shares, but for some reason the app was looking for them in the wrong place. As a quick and dirty hack, I ended up copying whatever libraries it needed from /opt/click.ubuntu.com/com.ubuntu.filemanager/current/lib/i386-linux-gnu/ to /usr/lib/i386-linux-gnu/samba/. It’s worth the effort, though, because you need the file manager if you want to do things like upload files through the webbrowser.</p>
<h2>Using SSH</h2>
<p>IRC is a vital communication tool for my job; we all use it every day. In fact, I find it so important that I have a remote client that stays connected 24/7, which I connect to via ssh. Thanks to the Terminal core app, I have quick and easy access to that. But when I first tried to connect to my server, which uses public-key authentication (as they all should), my connection was refused. That is because the Unity 8 session doesn’t run the ssh-agent service on startup. You can start it manually from the terminal:</p>
<pre>ssh-agent</pre>
<p>This will output some shell commands to set up environment variables; copy those and paste them right back into your terminal to set them. Then you should be able to ssh like normal, and if your key needs a passphrase you will be prompted for it in the terminal rather than in a dialog like you get in Unity 7.</p>
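<p>If you’d rather skip the copy/paste, the standard ssh-agent one-liner works here too (nothing Unity 8 specific about it):</p>
<pre>eval "$(ssh-agent -s)"   # starts the agent and sets the environment variables
ssh-add                  # loads your key, prompting for the passphrase once</pre>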
<h2>Getting traditional apps</h2>
<p>Now that you’ve got some apps running natively on Mir, you probably want to try out support for all of your traditional desktop apps, as you’ve heard advertised. This is done by a project called Libertine, which uses an LXC container and XMir to keep those unconfined apps safely away from your new, properly confined setup. The first thing you will need to do is install the libertine packages:</p>
<pre>sudo apt-get install libertine libertine-scope</pre>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105035760.png"><img alt="screenshot20160506_105035760" class="pull-right size-medium wp-image-2505" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105035760-300x169.png" width="300"/></a>Once you have those, you will see a Libertine app in your Apps scope. This is the app that lets you manage your Libertine containers (yes, you can have more than one), and install apps into them. Creating a new container is simply a matter of pressing the “Install” button. You can give it a name of leave it blank to get the default “Xenial”.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105618896.png"><img alt="screenshot20160506_105618896" class="pull-left size-medium wp-image-2506" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105618896-300x169.png" width="300"/></a>Once your container is setup, you can install as many apps into it as you want, again using the Libertine container manager. You can even use it to search the archives if you don’t know the exact package name. It will also install any dependencies that package needs into your Libertine container.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105942480.png"><img alt="screenshot20160506_105942480" class="pull-right size-medium wp-image-2509" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_105942480-300x169.png" width="300"/></a>Now that you have your container setup and apps installed into it, you are ready to start trying them out. For now you have to access them from a separate scope, since the default Apps scope doesn’t look into Libertine containers. That is why you had to install the libertine-scope package above. You can find this scope by clicking on the Dash’s bottom edge indicator to open the Scopes manger, and selecting the Legacy Applications Scope. There you will see launchers for the apps you have installed.</p>
<p>Libertine uses a special container manager to launch apps. If it isn’t running, as was the case for me, your legacy app windows will remain black. To fix that, open up the terminal and manually start the manager:</p>
<pre>initctl --session start libertine-lxc-manager</pre>
<h2>Theming traditional apps</h2>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_122713187.png"><img alt="screenshot20160506_122713187" class="pull-right size-medium wp-image-2511" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_122713187-300x169.png" width="300"/></a>By default the legacy apps don’t look very nice. They default to the most basic of themes that look like you’ve time-traveled back to the mid-1990s, and nobody wants to do that. The reason for this is because these apps (or rather, the toolkit they use) expect certain system settings to tell them what theme to use, but those settings aren’t actually a dependency of the application’s package. They are part of a default desktop install, but not part of the default Libertine image.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_112259969.png"><img alt="screenshot20160506_112259969" class="pull-left size-medium wp-image-2512" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160506_112259969-300x169.png" width="300"/></a>I found a way to fix this, at least for some apps, by installing the <strong>light-themes</strong><span> </span>and<span> </span><strong>ubuntu-settings</strong><span> </span>packages into the Libertine container. Specifically it should work for any Gtk3 based application, such as GEdit. It does not, however, work for apps that still use the Gtk2 toolkit, such as Geany. I have not dug deeper to try and figure out how to fix Gtk2 themes, if anybody has a suggestion please leave it in the comments.</p>
<h2>What works</h2>
<p>It has been a couple of months since I last tried the Unity 8 session, back before I upgraded to Xenial, and at that time there wasn’t much working. I went into this challenge expecting it to be better, but not by much. I honestly didn’t expect to spend even a full day using it. So I was really quite surprised to find that, once I found the workarounds above, I was not only able to spend the full day in it, but I was able to do so quite easily.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160509_121832656.png"><img alt="screenshot20160509_121832656" class="pull-left size-medium wp-image-2513" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160509_121832656-300x169.png" width="300"/></a>Whenever you have a new DE (which Unity 8 effectively is) and the latest UI toolkit (Qt 5) you have to be concerned about performance and resource use, and given the bleeding-edge nature of Unity 8 on the desktop, I was expecting to sacrifice some CPU cycles, battery life and RAM. If anything, the opposite was the case. I get at least as many hours on my battery as I do with Unity 7, and I was using less than half the RAM I typically do.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160509_103139434.png"><img alt="screenshot20160509_103139434" class="pull-right size-medium wp-image-2514" height="169" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/05/screenshot20160509_103139434-300x169.png" width="300"/></a>Moreover, things that I was expecting to cause me problems surprisingly didn’t. I was able to use Google Hangouts for my video conferences, which I knew had just been enabled in the browser. But I fully expected suspend/resume to have trouble with Mir, given the years I spent fighting it in X11 in the past, but it worked nearly flawlessly (see below). The network indicator had all of my VPN configurations waiting to be used, and they worked perfectly. Even pulse audio was working as well as it did in Unity 7, though this did introduce some problems (again, see below). It even has settings to adjust the mouse speed and disable the trackpad when I’m typing. Most imporantly, nearly all of the keyboard shortcuts that have become subconcious to me in Unity 7 are working in Unity 8.</p>
<p>Most importantly, I was able to write this blog post from Unity 8. That includes taking all of the screenshots and uploading them to WordPress, switching back and forth between my browser and my notes document to see what I had done over the last few days, and going to the terminal to verify the commands I mentioned above.</p>
<h2>What doesn’t</h2>
<p>Of course, it wasn’t all unicorns and rainbows; Unity 8 is still very bleeding edge as a desktop shell, and if you want to use it you need to be prepared for some pain. None of it has so far been bad enough to stop me, but your mileage may vary.</p>
<p>One of the first minor pain-points is the fact that middle-click doesn’t paste the active text highlight. I hadn’t realized how much I had become dependent on that until I didn’t have it. You also can’t copy/paste between a Mir and an XMir window, which makes legacy apps somewhat less useful, but that’s on the roadmap to be fixed.</p>
<p>Speaking of windows, Unity 8 is still limited to one per app. This is going to change, but it is the current state of things. This doesn’t matter so much for native apps, which were built under this restriction, and the terminal app having tabs was a saving grace here. But for legacy apps it presents a bigger issue, especially apps like GTG (Getting Things Gnome) where multi-window is a requirement.</p>
<p>Some power-management is missing too, such as dimming the screen after some amount of inactivity, or turning it off altogether. The session also will not lock when you suspend it, so don’t depend on this in a security-critical way (but really, if you’re running bleeding-edge desktops in security-critical environments, you have bigger problems).</p>
<p>I also had a minor problem with my USB headset. It’s actually a problem I have in Unity 7 too: since upgrading to Xenial, the volume and mute controls don’t automatically switch to the headset, even though the audio output and input do. I had a workaround for that in Unity 7: I could open the sound settings and manually change it to the headset, at which point the controls work on it. But in Unity 8’s sound settings there is no such option, so my workaround isn’t available.</p>
<p>The biggest hurdle, from my perspective, was not being able to install apps from the store. This is due to something in the store scope, online accounts, or Ubuntu One; I haven’t figured out which yet. So to install anything, I had to get the .click package and do it manually. But asking around I seem to be the only one having this problem, so those of you who want to try this yourself may not have to worry about that.</p>
<h2>The end?</h2>
<p>No, not for me. I’m on day 3 of this 10 day challenge, and so far things are going well enough for me to continue. I have been posting regular small updates on Google+, and will keep doing so. If I have enough for a new blog post, I may write another one here, but for the most part keep an eye on<span> </span><a href="https://plus.google.com/u/0/+MichaelHall119/posts" target="_blank">my G+ feed</a>. Add your own experiences there, and again join #ubuntu-unity if you get stuck or need help.</p>
New Ubuntu Community Donations report2016-04-09T12:00:00+00:002024-03-18T20:04:14+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/new-ubuntu-community-donations-report/<p>Somehow I missed the fact that I never wrote a Community Donations report for Q3 2015. I only realized it because it’s time for me to start working on Q4. Sorry for the oversight, but that report is<span> </span><a href="https://docs.google.com/document/d/1bLoIVeYaNCs7a6nhy18Aa8U4W7C5LsMpFzoGjA4awBA/edit" target="_blank">now published</a>.</p>
<p>The next report should be out soon; in the meantime you can look at all of the<span> </span><a href="http://community.ubuntu.com/help-information/funding/reports/" target="_blank">past reports</a> to see the great things we’ve been able to do with and for the Ubuntu community through this program. Everybody who has received these funds has used them to contribute to the project in one way or another, and we appreciate all of their work.</p>
<p>As you may notice, we’ve been regularly paying out more than we’ve been getting in donations. While we’ve had a carry-over balance ever since we started this program, that balance is running down. If you like the things we’ve been able to support with this program, please consider<span> </span><a href="http://www.ubuntu.com/download/desktop/contribute" target="_blank">sending it a contribution</a><span> </span>and helping us spread the word about it.</p>Help make Gnome Software beautiful2016-03-14T12:00:00+00:002024-03-18T18:03:09+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/help-make-gnome-software-beautiful/<p>As most of you know by now, Ubuntu 16.04 will be dropping the old Ubuntu Software Center in favor of the newer Gnome Software as the graphical front-end to both the Ubuntu archives and 3rd party application store.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/03/g-s.png"><img alt="Gnome Software" class="aligncenter size-large wp-image-2481" height="442" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/03/g-s-1024x707.png" width="640"/></a></p>
<p>Gnome Software provides a lot of the same enhancements over simple package managers that USC did, and it does this using a new metadata format standard called<span> </span><a href="https://wiki.debian.org/AppStream/" target="_blank">AppStream</a>. While much of the needed AppStream data can be extracted from the existing packages in the archives, sometimes that’s not sufficient, and that’s when we need people to help fill the gaps.</p>
<p>It turns out that the bulk of the missing or incorrect data is caused by the application icons being used by app packages. While most apps already have an icon, it was never strictly enforced before, and the size and format allowed by the desktop specs was more lenient than what’s needed now. These lower resolution icons might have been fine for a menu item, but they don’t work very well for a nice, beautiful App Store interface like Gnome Software. And that’s where you can help!</p>
<p>Don’t worry, contributing icons isn’t hard, and it doesn’t require any knowledge of programming or packaging to do. Best of all, you’ll not only be helping Ubuntu, but you’ll also be contributing to any other distro that uses the AppStream standard too! In the steps below I will walk you through the process of finding an app in need, getting the correct icon for it, and contributing it to the upstream project and Ubuntu.</p>
<h2>1) Pick an App</h2>
<p>Because the AppStream data is being automatically extracted from the contents of existing packages, we are able to tell which apps are in need of new icons, and we’ve<span> </span><a href="https://wiki.ubuntu.com/AppStream/Icons/IconErrors" target="_blank">generated a list</a><span> </span>of them, sorted by popularity (based on PopCon stats) so you can prioritize your contributions to where they will help the most users. To start working on one, first click the “Create” link to file a new bug report against the package in Ubuntu. Then replace that link in the wiki with a link to your new bug, and put your name in the “Claimed” column so that others know you’ve already started work on it.</p>
<p><a href="https://wiki.ubuntu.com/AppStream/Icons/IconErrors"><img alt="Apps with Icon Errors" class="aligncenter wp-image-2485 size-full" height="130" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/03/Screenshot-from-2016-03-14-12-24-34.png" width="894"/></a>Note that a package can contain multiple .desktop files, each of which has it’s own icon, and your bug report will be specific to just that one metadata file. You will also need to be a member of the<span> </span><a href="https://launchpad.net/~ubuntu-etherpad" target="_blank">~ubuntu-etherpad</a><span> </span>team (or sub-team like ~ubuntumembers) in order to edit the wiki, you will be asked to verify that membership as part of the login process with Ubuntu SSO.</p>
<h2>2) Verify that an AppStream icon is needed</h2>
<p>While the extraction process is capable of identifying what packages have a missing or unsupported image in them, it’s not always smart enough to know which packages<span> </span><em>should</em><span> </span>have this AppStream data in the first place. So before you get started working on icons, it’s best to first make sure that the metadata file you picked should be part of the AppStream index in the first place.</p>
<p>Because AppStream was designed to be application-centric, the metadata extraction process only looks at those with<span> </span><em>Type=Application</em><span> </span>in their .desktop file. It will also ignore any .desktop files with<span> </span><em>NoDisplay=True</em><span> </span>in them. If you find a file in the list that shouldn’t be indexed by AppStream, chances are one or both of these values are set incorrectly. In that case you should change your bug description to state that, rather than attaching an icon to it.</p>
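<p>For reference, here is a minimal sketch of a .desktop file that the extractor would index, with illustrative values (comments in .desktop files must be on their own lines):</p>
<pre>[Desktop Entry]
# Must be Application for the AppStream extractor to consider it
Type=Application
Name=My App
Exec=myapp
Icon=myapp
# Setting this to true would hide the entry from AppStream (and from menus)
NoDisplay=false</pre>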
<h2>3) Contact Upstream</h2>
<p>Since there is nothing Ubuntu-specific about AppStream data or icons, you really should be sending your contribution upstream to the originating project. Not only is this best for Ubuntu (carrying patches wastes resources), but it’s just the right thing to do in the open source community. So after you’ve chosen an app to work on and verified that it does in fact need a new icon for AppStream, the very next thing you should do is start talking to the upstream project developers.</p>
<p>Start by letting them know that you want to contribute to their project so that it integrates better with AppStream enabled stores (you can reference<span> </span><a href="https://wiki.debian.org/AppStream/Guidelines" target="_blank">these Guidelines</a><span> </span>if they’re not familiar with it), and opening a similar bug report in their bug tracker if they don’t have one already. Finally, be sure to include a link to that upstream bug report in the Ubuntu bug you opened previously so that the Ubuntu developers know the work is also going upstream (your contribution might be rejected otherwise).</p>
<h2>4) Find or Create an Icon</h2>
<p>Chances are the upstream developers already have an icon that meets the AppStream requirements, so ask them about it before trying to find one on your own. If not, look for existing artwork assets that can be used as a logo, and remember that it needs to be<span> </span><strong>at least</strong><span> </span>64×64 pixels (this is where SVGs are ideal, as they can be exported to any size). Whatever you use, make sure that it matches the application’s current branding; we’re not out to create a new logo for them, after all. If you do create a new image file, you will need to make it available under the<span> </span><a href="http://creativecommons.org/licenses/by-sa/3.0/" target="_blank">CC-BY-SA</a><span> </span>license.</p>
<p>While AppStream only requires a 64×64 pixel image, many desktops (including Unity) will benefit from having even higher resolution icons, and it’s always easier to scale them down than up. So if you have the option, try to provide a 256×256 icon image (or again, just an SVG).</p>
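<p>If you’re starting from an SVG, exporting a fixed-size PNG is a one-liner; for example with rsvg-convert (Inkscape’s export options work too):</p>
<pre>rsvg-convert -w 256 -h 256 icon.svg -o icon-256.png</pre>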
<h2>5) Submit your icon</h2>
<p>Now that you’ve found (or created) an appropriate icon, it’s time to get it into both the upstream project and Ubuntu. Because each upstream will be different in how they want you to do that, you will need to ask them for guidance (and possibly assistance). Just make sure that you update the upstream bug report with your work, so that the Ubuntu developers can see that it’s been done.</p>
<p>Ubuntu 16.04 has already synced with Debian, so it’s too late for these changes in the upstream project to make their way into this release. In order to get them into 16.04, the Ubuntu packages will have to carry a patch until the changes that land in upstream have the time to make their way into the Ubuntu archives. That’s why it’s so important to get your contribution accepted into the upstream project first, the Ubuntu developers want to know that the patches to their packages will eventually be replaced by the same change from upstream.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2012/03/attach_file_to_bug.png"><img alt="attach_file_to_bug" class="aligncenter size-full wp-image-922" height="350" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2012/03/attach_file_to_bug.png" width="416"/></a>To submit your image to Ubuntu, all you need to do is attach the image file to the bug report you created way back in step #1.</p>
<p><a href="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/03/launchpad-subscribe2.png"><img alt="launchpad-subscribe" class="aligncenter size-full wp-image-2496" height="252" src="http://blog_uploads.s3.amazonaws.com/wp-content/uploads/2016/03/launchpad-subscribe2.png" width="416"/></a>Then, <strong>subscribe</strong><span> </span>the “ubuntu-sponsors” team to the bug, these are the Ubuntu developers who will review and apply your icon to the target package, and get it into the Ubuntu archives.</p>
<h2>6) Talk about it!</h2>
<p>Congratulations, you’ve just made a contribution that is likely to affect millions of people and benefit the entire open source community! That’s something to celebrate, so take to Twitter, Google+, Facebook or your own blog and talk about it. Not only is it good to see people doing these kinds of contributions, it’s also highly motivating to others who might not otherwise get involved. So share your experience, help others who want to do the same, and if you enjoyed it feel free to grab another app<span> </span><a href="https://wiki.ubuntu.com/AppStream/Icons/IconErrors" target="_blank">from the list</a><span> </span>and do it again.</p>Fun with Django, Meta-classes and dynamic models (Updated)2011-02-10T12:00:00+00:002024-03-18T18:03:21+00:00Michael Hallhttps://mhall119.com/blog/author/mhall/https://mhall119.com/blog/fun-with-django-meta-classes-and-dynamic-models/<p>I’ve started a new project at work that’s proven to be both fun and challenging. The request was simple enough: our clients wanted something like MS Access, where they could define their own record types, run queries, edit data and get reports.</p>
<p>But they wanted it on the web. Backed by an SQL database. Using normalized tables.</p>
<p>And soon.</p>
<p><span id="more-80"></span>Now we’ve been making pretty heavy use of Django here for a while, so it was a natural choice to base any new project on. But how would we let our clients define Django models? We couldn’t ask them to write Python objects, even if Python is an incredibly simple language. No, this had to be point-and-click. And on the web.</p>
<p><strong>Metaclasses</strong></p>
<p>One of the more esoteric features of Python is Metaclasses. Basically these are classes whose instances are other classes. Confusing, right? It took me some experimenting to get used to it too, since most of my prior object-oriented programming experience was in Java. But it does give you a really powerful tool for generating custom classes on demand. And Django models are nothing more than Python classes. You can see where this is going.</p>
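<p>If you’ve never seen it in action, Python’s built-in type() is itself the default metaclass, and calling it with three arguments builds a new class on the fly:</p>
<pre># type(name, bases, attrs) creates a new class at runtime; this is
# exactly what a "class" statement does under the hood.
Point = type('Point', (object,), {
    'x': 0,
    'y': 0,
    'magnitude': lambda self: (self.x ** 2 + self.y ** 2) ** 0.5,
})

p = Point()
p.x, p.y = 3, 4
assert p.magnitude() == 5.0</pre>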
<p>So it turns out there are a few examples of people talking about using Metaclasses to generate Django models <a href="http://code.djangoproject.com/wiki/DynamicModels" target="_blank">here</a><span> </span>and<span> </span><a href="http://code.djangoproject.com/wiki/AuditTrail" target="_blank">here</a>, which I used as a jumping off point. They also pointed out some of the shortfalls of using dynamic models, mostly that many of Django’s features expect to find your models in INSTALLED_APPS, specifically syncdb and the Admin app. But the idea was straight forward enough: use a static Django model to define dynamic ones.</p>
<p><strong>Dynamic Models</strong></p>
<p>The models themselves weren’t very complicated; I started off with DynamicApp, DynamicModel and DynamicModelField, each with fields to define the most common attributes of its namesake. With the easy part out of the way, now it was time to turn these into actual django.db.models.Model subclasses.</p>
<p>It just so happens that django.db.models.Model is already using a metaclass: django.db.models.base.ModelBase, which takes in all the fields you define in your model, the inner Meta class, and anything else you put into your model definition, and turns it into the actual Model class that you use. Using the magic of Python’s type() function, we can call this metaclass directly, and pass it all that information as parameters. So now I can turn a DynamicModelField into an actual Field class, and I can turn a DynamicModel into an actual Model class. And that’s all there is to it.</p>
<p>Ha! Not quite.</p>
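<p>Before getting into what was missing, here is a minimal sketch of that metaclass trick, using illustrative names and the Django APIs of that era (the field handling is simplified to a single CharField case):</p>
<pre>from django.db import models

def build_dynamic_model(dynamic_model):
    # Dynamic models live in no real app's models.py, so the inner
    # Meta class has to carry an app_label explicitly.
    class Meta:
        app_label = dynamic_model.app.name

    attrs = {
        '__module__': 'dynamicapp.models',  # ModelBase requires this
        'Meta': Meta,
    }
    # Turn each DynamicModelField record into a real Field instance.
    for field in dynamic_model.fields.all():
        attrs[field.name] = models.CharField(max_length=field.max_length)

    # Calling type() on a Model subclass invokes ModelBase, the same
    # metaclass Django uses for statically defined models.
    return type(str(dynamic_model.name), (models.Model,), attrs)</pre>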
<p><strong>Syncdb</strong></p>
<p>Remember where I said that syncdb can’t find dynamic models? Well now I have my Model class, but there isn’t a table in my database to back it up. And unfortunately it wasn’t a matter of calling a simple Django function to get it. I ended up copying a small but significant amount of code out of django.core.management.commands.syncdb, but the end result was magical, I could turn my dynamically defined Model into just the right SQL table.</p>
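<p>The heart of what I pulled out of syncdb boils down to something like this (a sketch against Django 1.2-era internals; these are private APIs and have long since changed):</p>
<pre>from django.core.management.color import no_style
from django.db import connection, transaction

def create_table_for(model_class):
    # Ask the backend for the CREATE TABLE statements for this model,
    # then execute them, just like syncdb would have.
    sql, references = connection.creation.sql_create_model(model_class, no_style())
    cursor = connection.cursor()
    for statement in sql:
        cursor.execute(statement)
    transaction.commit_unless_managed()</pre>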
<p><strong>Admin</strong></p>
<p>One of the wonderful apps that comes with Django by default is the Admin. For those not familiar with the Django admin, it gives you a very simple yet effective interface to manage all your new Django models and their associated data with very little configuration necessary. You can add, edit and delete data from a web interface that is generated based on your model definition.</p>
<p><a href="http://mhall119.com/wp-content/uploads/2011/02/DynamicDjangoModelsAdmin.png"><img alt="DynamicDjangoModelsAdmin" class="size-medium wp-image-83 alignright" height="168" src="http://family.ubuntu-fl.org/wp-content/uploads/2011/02/DynamicDjangoModelsAdmin-300x168.png" title="DynamicDjangoModelsAdmin" width="300"/></a>Once again, though, it expects you to have written some actual code, and for that code to be in admin.py under one of the entries in INSTALLED_APPS. But it turns out there’s not a lot of magic going on here, all it does is execute the contents of admin.py on startup, and one of the things you need in there is a call to admin.site.register(). Well that works just as well if you call it from DynamicModel.save() as it does from admin.py. So a couple lines of code later, and now my new Models are showing up in the Django Admin. Now I was cooking with fire!</p>
<p>And got burned.</p>
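<p>For reference, the save() hook that does the registering amounts to something like this (a sketch with illustrative names, reusing build_dynamic_model from above):</p>
<pre>from django.contrib import admin
from django.contrib.admin.sites import AlreadyRegistered
from django.db import models

class DynamicModel(models.Model):
    # ... the static fields describing the dynamic model ...

    def save(self, *args, **kwargs):
        super(DynamicModel, self).save(*args, **kwargs)
        model_class = build_dynamic_model(self)
        try:
            admin.site.register(model_class)
        except AlreadyRegistered:
            pass  # re-saving an already-registered model</pre>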
<p><strong>Model Cache</strong></p>
<p>It probably wouldn’t be very good for performance if Django had to execute a Model’s metaclass every time you wanted to reference it. So the Django devs were smart enough to cache the resulting Model class after the first time. If you’re interested, this happens in django.db.models.loading. It took me a good couple of hours walking back through the entire process a new Model goes through (after my newly created ones weren’t showing up properly) before I found this little tidbit. The fix was simple enough: since the cache is keyed off the app and model name, I just had to delete my model’s entry every time its DynamicModel record was saved.</p>
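<p>Evicting the entry is short but ugly, since it reaches into a private structure (again Django 1.2-era; app_models is keyed by app label and lower-cased model name):</p>
<pre>from django.db.models.loading import cache

def uncache_model(app_label, model_name):
    # Drop the stale class so the next lookup rebuilds it.
    try:
        del cache.app_models[app_label][model_name.lower()]
    except KeyError:
        pass</pre>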
<p><strong>Changing Models</strong></p>
<p>By now I was riding high on my success: I could define a model, define its fields, and with a click of a button in the Admin, it would generate my tables exactly as they should be. But part of the requirements was that the client would be able to add or delete fields from their models. Which meant that I would have to add or drop columns from the table to match. Which isn’t exactly my idea of a fun or safe activity.</p>
<p>Thankfully we’ve been using<span> </span><a href="http://south.aeracode.org/" target="_blank">South</a><span> </span>for our Django database migrations for about the last year, and it’s been very reliable against MySQL (Oracle is another story, unfortunately). South will inspect your Django models, and keep a running history of changes you make to them. It will then create a migration script for each set of changes. These scripts are nothing more than Python code files that make calls back into South when they run. A little digging into South’s code revealed what would need to be called to add and drop columns, so I added them to DynamicModelField’s .save() and .delete() methods, cleared the model cache one more time, and I was rocking!</p>
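<p>The South calls in question are pleasantly small; roughly this (field and table names are whatever your dynamic model defines):</p>
<pre>from south.db import db

def add_field_column(model_class, field_name, field):
    db.add_column(model_class._meta.db_table, field_name, field)

def drop_field_column(model_class, field_name):
    db.delete_column(model_class._meta.db_table, field_name)</pre>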
<p><strong>Dynamic Interface</strong></p>
<p><a href="http://mhall119.com/wp-content/uploads/2011/02/DynamicDjangoModelsO9.png"><img alt="DynamicDjangoModelsO9" class="alignright size-medium wp-image-84" height="168" src="http://family.ubuntu-fl.org/wp-content/uploads/2011/02/DynamicDjangoModelsO9-300x168.png" title="DynamicDjangoModelsO9" width="300"/></a>Now that I had the models, I needed a dynamic interface for them. The Django admin is great for admins, but is too inflexible for making end-user interfaces. Over the last year I’ve been developing a Django app that would allow us to quickly built up user friendly interfaces in Django and, critically, I decided early on to do as much as I could just by inspecting the Django models definitions just like the Admin app does. That decision payed off big time with this new project, because I could just pass my dynamically created models to that existing framework, and out came a standard, robust and friendly interface.</p>
<p><span style="text-decoration: line-through;">Unfortunately none of this work is open sourced yet</span><strong><span> </span>(see below)</strong>, but I’m hopeful that it will be sometime in the near future. My current employer is very open-source friendly and has already allowed me to release several apps and libraries under a BSD license.</p>
<p><strong>Update:</strong><span> </span>The parts of this project that deal with dynamic models has been released under a BSD-style license. Documentation will be coming to it soon:<span> </span><a href="https://bitbucket.org/mhall119/dynamo/overview">https://bitbucket.org/mhall119/dynamo/overview</a></p>