19 Aug 2016

Gerrit User Management for a Small Installation

Setting up Jenkins is a simple matter of downloading the latest .war file and java -jar'ing it. It comes with all the basics of what you need, including its own web server. So there's no need to fiddle with things like databases or web servers... if you don't want to. Most people at a given organization don't need accounts on their Jenkins instance. In most cases, only the couple of people who create and manage its various jobs need to log on. Most other people just want to see the status, maybe download the latest successful compile, or look at the logs of a recent failure. These are all things an anonymous user can do.

Bugzilla isn't quite as easy to set up; you need to assemble the pieces mostly yourself. It also doesn't have its own built-in web server (even though serving web pages is, really, its primary function, no?) so you have to integrate it with Apache or Nginx. For basic installations the defaults are fine, and it comes with functional user management and a simple database if you don't need "production" quality. Most people contributing to a project should have a Bugzilla account, and Bugzilla has good enough user management "out of the box", especially for a small installation.

Gerrit requires an account for everyone who interacts with it to contribute to a repository. You wouldn't want just any anonymous user to be able to make changes to your patch flow, would you? Plus you do want to track everyone who does make a change.

Sadly, Gerrit doesn't include any sort of built-in user management. Not even a dumb, "don't use this for production environments", user-management system (like Jenkins or Bugzilla have). Gerrit assumes, and requires you to use, an external identity management system (such as having your users log in with their Google or Facebook credentials via OpenID, a company-wide LDAP installation, or the user-management features of a web server).
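
For example, if you already have Apache or Nginx in front of Gerrit, you can let the web server do the authentication and tell Gerrit to trust it. Here's a minimal sketch of the Gerrit side of such a setup (assuming a site directory of /srv/gerrit; the proxy and authentication configuration on the web server side is a separate exercise, so treat this as an outline rather than a complete recipe):

$ git config --file /srv/gerrit/etc/gerrit.config auth.type HTTP
$ git config --file /srv/gerrit/etc/gerrit.config httpd.listenUrl proxy-http://127.0.0.1:8081/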

If you're part of a large organization, which has a dedicated and capable IT team, these issues aren't of any concern to you. All you need to do is to decide that you want to use Gerrit. Setting it up and managing it is someone else's problem. But small companies can benefit from distributed code review too, and if nothing else, at its core Gerrit is a solid source code repository server.

With a small team there usually isn't a dedicated person who is responsible for managing servers. You have developers, you have sales people, you have a CEO, you have managers (there are always managers), and you have someone doing the financial stuff. But there's rarely a dedicated IT person who is able to set up a Linux machine and configure and manage various services (Bugzilla, Jenkins, Gerrit, etc.). That job ends up falling to some developer who would rather be writing code than configuring servers.

The reasons why Gerrit doesn't do user management are obviously religious. Gerrit does include its own "don't use this for production installations" database (h2) and provides all the JDBC connectors you need to connect it to any real database you can imagine. So if it's already doing database stuff, why not just add a user table? But it's even worse than that. Pure Gerrit doesn't even allow you to specify permissions at the user level, only at the group level. This means you have to create a group for every meaningful permission you want to assign. At a small-ish installation this means you end up with lots of groups, each of which contains only one person.

Fortunately there is an easy-enough-to-install plugin which allows you to create a group for every user, so creating a fine-grained permission scheme for a small team with a handful of projects is relatively easy; it is awkward, though, that you end up managing users that are users and users that are groups.

Unfortunately there isn't an easy-enough-to-install add-on for user management. But, if you fetch the Gerrit sources, you will find a perl script called fake_ldap.pl in its contrib folder. fake_ldap.pl makes it easy to generate a file which your Gerrit installation can query to get the basic information regarding your allowed users. It does require you to manage this file by hand, outside of your Gerrit system, but, in my experience, it provides the easiest way to manage the users of a small Gerrit installation.

26 Jun 2016

How To Set Up JTAG with Galileo (the modern version)

A recent blog post from Olimex pointed to a document [1] showing how to debug the Intel Galileo board using a JTAG. The nice thing about the document is that it assumed the user would be building their own image using Bitbake/OpenEmbedded. The unfortunate part is that the Galileo BSP downloads from Intel are so ancient they have next-to-no chance of working on a recent, modern distro. Their instructions, however, do point this out (i.e. ...this procedure was performed on <some old version of> Ubuntu...), leaving the user little choice but to start by preparing a VM in which to perform the build!

Back when the Galileo board was released, Intel did a great job of supporting it by creating various layers to be used with OpenEmbedded: meta-clanton, meta-galileo, meta-intel-iot-devkit, meta-intel-iot-middleware, meta-intel-quark-fast, meta-intel-quark. But, as you can see, that support was a bit "scattered". On top of that, it doesn't look like meta-clanton was ever placed somewhere public; the only way to get it (and to build for the Galileo) was to download a massive BSP from Intel which included it. Over time this massive download was replaced by a smaller download, which then required you to run a script that pulled in all the sub-components as a separate step (performing the massive download at that point). Additionally, a fixup script needed to be run in order to clean up some of the build area before you could start your build. Attempting any of this procedure on a modern Linux development host is very likely to fail.

Fast-forward to today (June 26, 2016) and all that's needed to create an image for the Galileo is a distro layer, the basic OE meta layer, and meta-intel. Or, if you're using poky as your distro, you'll get the metadata as part of it.


Building An Image for the Galileo

$ mkdir /some/place
$ cd /some/place

$ mkdir layers
$ pushd layers
$ git clone git://git.yoctoproject.org/poky meta-poky
$ git clone git://git.yoctoproject.org/meta-intel
$ popd


$ . layers/meta-poky/oe-init-build-env galileo

Now, edit conf/local.conf so that
MACHINE ?= "intel-quark"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks tools-debug tools-profile"

And edit conf/bblayers.conf to replace the part that says "meta-poky/meta-yocto-bsp" with "meta-intel".
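
For reference, the BBLAYERS variable in conf/bblayers.conf ends up looking roughly like this (assuming the /some/place layout from above; the rest of the file stays as generated):

BBLAYERS ?= " \
  /some/place/layers/meta-poky/meta \
  /some/place/layers/meta-poky/meta-poky \
  /some/place/layers/meta-intel \
  "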

Now run:
$ bitbake core-image-minimal

When bitbake starts it prints some build configuration information. For my build I saw:

Build Configuration:
BB_VERSION        = "1.31.0"
BUILD_SYS         = "x86_64-linux"
NATIVELSBSTRING   = "SUSELINUX-42.1"
TARGET_SYS        = "i586-poky-linux"
MACHINE           = "intel-quark"
DISTRO            = "poky"
DISTRO_VERSION    = "2.1+snapshot-20160622"
TUNE_FEATURES     = "m32 i586-nlp"
TARGET_FPU        = ""
meta   
meta-poky         = "master:6f0c5537e02c59e1c8f3b08f598dc049ff8ee098"
meta-intel        = "master:1b98ae6d7e10390c9ecb383432593644a524f9c8"


If your build fails, one thing you could try is to go to each of the layers and check out the commits specified in the above information, then restart the build.
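
For example, using the commits from the listing above:

$ cd /some/place/layers/meta-poky
$ git checkout 6f0c5537e02c59e1c8f3b08f598dc049ff8ee098
$ cd ../meta-intel
$ git checkout 1b98ae6d7e10390c9ecb383432593644a524f9c8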

At the end of a successful build, continue with the following to create an SD card image:

$ bitbake parted-native
$ wic create mkgalileodisk -e core-image-minimal

Look through the wic output; it will tell you where it has placed its artifact. Use dd to write the wic artifact to your SD card:

# dd if=/var/tmp/wic/build/mkgalileodisk-<datetime>-mmcblk0.direct of=/dev/sdX bs=1M


Cross-GDB

Eventually you're going to use GDB, via OpenOCD, to debug your target. In order for this to work (in addition to OpenOCD) you're going to need two things:
  1. a gdbserver "stub" running on your target
  2. a cross-GDB running on your development machine
A cross-GDB is required because your native GDB will only understand your native host's machine code and other CPU-specific information. A cross-GDB is built to run on your native host, but understand a different CPU architecture. A gdbserver stub is necessary on the target because you need some device-specific software running on the target which is able to interrupt the CPU, set breakpoints, etc. The cross-GDB program is large, capable of doing all the work required to perform source-level debugging, and presents the interface to the user. The stub is quite small and has just the minimum target-CPU-specific functionality required on the target.

Above, as part of your first build, I mentioned that you needed to adjust the EXTRA_IMAGE_FEATURES variable of your conf/local.conf file. One of the things that change does is to include the gdbserver stub in your target image.

In order to build a native cross-GDB for your development host you'll need to generate an SDK for your image:

$ bitbake core-image-minimal -c populate_sdk

Once built, you then need to install the SDK. To do so, simply run the resulting SDK script which you'll find in ${TMPDIR}/deploy/sdk. The install script will ask you where you want to install the SDK; type in a path and press Enter, or simply press Enter to accept the default.
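
The invocation looks something like the following (the exact filename depends on your image, machine, and SDK version, so treat the name below as a placeholder):

$ cd ${TMPDIR}/deploy/sdk
$ ./poky-glibc-x86_64-core-image-minimal-i586-nlp-32-toolchain-<version>.sh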

Once installed, source the SDK environment file:

$ . <SDK_INSTALL_LOCATION>/environment-setup-i586-nlp-32-poky-linux
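
A quick way to confirm the environment file took effect is to check that the cross-GDB is now on your PATH:

$ which i586-poky-linux-gdb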


OpenOCD

My recommendation is to get, build, and install the latest OpenOCD from sources:

$ mkdir <SOMEPLACE_TO_BUILD_OPENOCD>
$ cd <SOMEPLACE_TO_BUILD_OPENOCD>
$ git clone git://git.code.sf.net/p/openocd/code openocd
$ cd openocd
$ ./bootstrap
$ ./configure

At the end of ./configure'ing, the script will print out a list of all the dongles for which it can include support. The usual reason it can't include support for a particular dongle is that an additional required library is missing. If a particular dongle is marked as un-buildable and you want to build support for that dongle, you'll need to figure out the reason why it can't presently be built (i.e. figure out which library it needs) and fix the deficiency (i.e. use your host distribution's package manager to install that library's -dev/-devel package). The ./configure script is pretty good at telling you which library/libraries are missing.
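
As an illustration only (package names differ from distro to distro), on a Debian/Ubuntu host the most commonly missing piece is the libusb development package:

$ sudo apt-get install libusb-1.0-0-dev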

Once the configuration is done:

$ make -j
$ sudo make install


Connecting to the Galileo via JTAG and GDB

Two terminals are required for this part. In one terminal you'll run OpenOCD, and in the other you'll run the cross-GDB (or telnet).

To run OpenOCD you'll need to tell it which board you're connecting to, and which dongle you're using. Obviously the board part will remain the same, but the dongle part might differ depending on whether or not you're using the same dongle(s) as me. Also, the order in which this information is given to OpenOCD is important; apparently you need to specify the dongle first, then the board.

In the following example I'm using the Segger J-Link EDU:

# openocd -f interface/jlink.cfg -f board/quark_x10xx_board.cfg
Open On-Chip Debugger 0.10.0-dev-00322-g406f4d1 (2016-06-22-09:29)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "jtag". To override use 'transport select <transport>'.
adapter speed: 4000 kHz
trst_only separate trst_push_pull
Info : No device selected, using first device.
Info : J-Link V9 compiled Apr 15 2014 19:08:28
Info : Hardware version: 9.00
Info : VTarget = 3.354 V
Info : clock speed 4000 kHz
Info : JTAG tap: quark_x10xx.cltap tap/device found: 0x0e681013 (mfg: 0x009 (Intel), part: 0xe681, ver: 0x0)
enabling core tap
Info : JTAG tap: quark_x10xx.cpu enabled




In this example I'm using the ARM-USB-OCD-H from Olimex:

# openocd -f interface/ftdi/olimex-arm-usb-ocd-h.cfg -f board/quark_x10xx_board.cfg
Open On-Chip Debugger 0.10.0-dev-00322-g406f4d1 (2016-06-22-09:29)
Licensed under GNU GPL v2
For bug reports, read
        http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "jtag". To override use 'transport select <transport>'.
adapter speed: 4000 kHz
trst_only separate trst_push_pull
Info : clock speed 4000 kHz
Info : JTAG tap: quark_x10xx.cltap tap/device found: 0x0e681013 (mfg: 0x009 (Intel), part: 0xe681, ver: 0x0)
enabling core tap
Info : JTAG tap: quark_x10xx.cpu enabled




Now, to communicate with and control the board via OpenOCD you need to open a second terminal. If you just want to send commands to OpenOCD (such as to check or flash the board) you can simply use telnet:

$ telnet localhost 4444
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Open On-Chip Debugger
>
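
From this telnet prompt you can issue OpenOCD commands directly; for example, to halt the CPU, dump its registers, and let it run again (just a few of the standard commands, shown as a sketch):

> halt
> reg
> resume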


If you want to debug the target via GDB then you need to start up the cross-GDB and connect it to OpenOCD from within GDB itself (note: the cross-GDB should already be on your $PATH since it comes from the SDK we built and installed earlier; if it's not on your PATH you may have forgotten to source the SDK's environment file, see above):

$ i586-poky-linux-gdb
Python Exception <class 'ImportError'> No module named 'operator':
i586-poky-linux-gdb: warning:
Could not load the Python gdb module from `sysroots/x86_64-pokysdk-linux/usr/share/gdb/python'.
Limited Python support is available from the _gdb module.
Suggest passing --data-directory=/path/to/gdb/data-directory.

GNU gdb (GDB) 7.11.0.20160511-git
Copyright (C) 2016 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "--host=x86_64-pokysdk-linux --target=i586-poky-linux".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word".

(gdb) target remote localhost:3333
Remote debugging using localhost:3333
Python Exception <class 'NameError'> Installation error: gdb.execute_unwinders function is missing:
0x00000000 in ?? ()
(gdb)
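
What you do next depends on what you want to debug. As a rough sketch, for source-level kernel debugging you would halt the target, load the symbols from your build tree (the path below is just a placeholder), set a hardware breakpoint, and continue:

(gdb) monitor halt
(gdb) symbol-file <path-to-your-kernel-build>/vmlinux
(gdb) hbreak panic
(gdb) continue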





[1] Source Level Debug using OpenOCD/GDB/Eclipse on Intel Quark SoC X1000, sourcedebugusingopenocd_quark_appnote_330015_003-2.pdf

16 Jun 2016

Overriding Repositories In repo Manifests With Personal Github Forks

Here is the situation: you're working on a development project that uses repo[1], but you'd like to replace one or more of its repositories with ones of your own which you have forked on github, and you'd like to be able to push your changes back into your github forks.

The good news is repo already has a mechanism for this. The only complicated part is getting the github URL right so that you can push your changes via ssh.

In the project's manifest will be a repository you want to override; take note of its name. In your project go into the .repo directory and create a directory called local_manifests. In .repo/local_manifests create a file using any filename, just make sure it ends with .xml; for some unknown reason, it is traditional to name this file roomservice.xml. Within .repo/local_manifests/roomservice.xml you can remove and add repositories to your heart's content.

For example, the project I'm working on has a manifest that looks like:
  1 <?xml version="1.0" encoding="UTF-8"?>
  2 <manifest>
  3     <default revision="master" sync-j="4"/>
  4
  5     <remote fetch="http://github.com/" name="github"/>
  6     <remote fetch="http://github.com/insane-adding-machines/" name="frosted"/>
  7     <remote fetch="git://crosstool-ng.org/" name="crosstool-ng"/>
  8
  9     <!-- utilities -->
 10     <project remote="github" name="texane/stlink" path="toolchain/stlink"/>
 11
 12     <!-- toolchain -->
 13     <project remote="crosstool-ng" name="crosstool-ng" path="toolchain/crosstool-ng"/>
 14     <project remote="frosted" name="elf2flt" path="toolchain/elf2flt"/>
 15     <project remote="frosted" name="newlib" path="toolchain/newlib" revision="frosted">
 16         <linkfile src="../../.repo/manifests/ctng-custom-elf2flt.patch" dest="toolchain/ctng-custom-elf2flt.patch"/>
 17         <linkfile src="../../.repo/manifests/arm-frosted-eabi.config.in" dest="toolchain/arm-frosted-eabi.config.in"/>
 18         <linkfile src="../../.repo/manifests/buildtoolchain.sh" dest="buildtoolchain.sh"/>
 19     </project>
 20
 21     <!-- kernel + userspace -->
 22     <project remote="frosted" name="frosted" path="frosted"/>
 23     <project remote="frosted" name="libopencm3.git" path="frosted/kernel/libopencm3"/>
 24     <project remote="frosted" name="busybox.git" path="frosted/apps/busybox"/>
 25     <project remote="frosted" name="picotcp" path="frosted/kernel/net/picotcp"/>
 26     <project remote="frosted" name="frosted-userland.git" path="frosted/frosted-userland"/>
 27
 28 </manifest>
I want to replace the repositories described at lines 22 and 26 with my own repositories. I start off by going to github, finding those repositories, and clicking on the Fork button in the web interface for both. Then I create my .repo/local_manifests/roomservice.xml file with the following contents:
  1 <?xml version="1.0" encoding="UTF-8"?>
  2 <manifest>
  3         <remote fetch="ssh://git@github.com/twoerner/" name="origin"/>
  4         <remove-project name="frosted"/>
  5         <remove-project name="frosted-userland.git"/>
  6         <project remote="origin" name="frosted.git" path="frosted" revision="contrib/twoerner-master"/>
  7         <project remote="origin" name="frosted-userland.git" path="frosted/frosted-userland" revision="contrib/twoerner-master"/>
  8 </manifest>
At this point simply invoke "repo sync" (you might need to add --force-sync) and you're all set! Those repositories will now be sync'ed from your github copies; you can work in their code, make commits, and push them up to github.

Notice that the names used in the <remove-project.../> lines are the same names used on the lines of the original manifest that I want to replace.

Did you notice that I used "origin" as the remote name? When repo clones this repository, it will use this name as git's remote name, and since git assumes the primary remote's name is "origin", this configuration simply makes life a touch easier. You're free to use whatever remote name you want, but in that case you'll just need to name your remote explicitly the first time you "git push...".
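
For example, if you had named the remote "github" instead of "origin", the first push from inside one of those repositories would look something like:

$ git push -u github <your-branch>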


[1]
"What is repo?" repo is a tool created by (and very popular with) Android developers to manage building software comprised of sets of git repositories. Many software projects fit into one git repository, but sometimes a project likes to keep separate git repositories for different parts of the project. This, then, creates a burden on the developer to make sure the correct repositories are cloned to the right places and checked out at the right commits. repo works from a manifest that lists which repositories are used, where to place them relative to one starting directory, and which revision/commit to check out. Therefore it removes the extra burdens caused by multi-repository projects. repo also has functionality to allow your project to integrate with gerrit (a code review system), but that's another topic.

23 Dec 2015

Docbook xmlcharent on openSUSE 13.2

There was a time when I used to write documentation in sgml quite a lot. Usually I processed these input files with docbook2... to generate html, pdf, txt, etc.

Today I wanted to build an old project of mine that I hadn't looked at in about 4-5 years, including its documentation.

When docbook2... ran, however, it generated lots of:

jade:/etc/sgml/catalog:1:8:E: cannot open "/usr/share/sgml/CATALOG.xmlcharent" (No such file or directory)
jade:/etc/sgml/catalog:1:8:E: cannot open "/usr/share/sgml/CATALOG.xmlcharent" (No such file or directory)
jade:/etc/sgml/catalog:1:8:E: cannot open "/usr/share/sgml/CATALOG.xmlcharent" (No such file or directory)


On openSUSE, apparently, the xmlcharent stuff is provided by the package xmlcharent, but the sgml configuration file, /etc/sgml/catalog, is provided by the package sgml-skel.

The xmlcharent package went through some updates recently, and the /usr/share/sgml/CATALOG.xmlcharent file has been replaced by /usr/share/xmlcharent/xmlcharent.sgml. Therefore one needs to update /etc/sgml/catalog manually and change the first line from /usr/share/sgml/CATALOG.xmlcharent to /usr/share/xmlcharent/xmlcharent.sgml.
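
The edit itself is a one-liner; something like the following should do it (back up the file first, and double-check that these are the paths on your system):

# cp /etc/sgml/catalog /etc/sgml/catalog.orig
# sed -i 's|/usr/share/sgml/CATALOG.xmlcharent|/usr/share/xmlcharent/xmlcharent.sgml|' /etc/sgml/catalog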

30 Sep 2015

OE Build of glmark2 Running on Cubietruck with Mali

Here are the steps you can perform to use OpenEmbedded to build an image for the Cubietruck in order to run the glmark2-es2 benchmark accelerated via the binary-only mali user-space driver.

First off, grab the scripts that will guide you through your build:

$ git clone git://github.com/twoerner/oe-layerindex-config.git
Cloning into 'oe-layerindex-config'...
remote: Counting objects: 45, done.
remote: Total 45 (delta 0), reused 0 (delta 0), pack-reused 45
Receiving objects: 100% (45/45), 11.04 KiB | 0 bytes/s, done.
Resolving deltas: 100% (24/24), done.
Checking connectivity... done.


Kick off the build configuration by sourcing the main script

$ . oe-layerindex-config/oesetup.sh

This script will use wget to grab some items from the OpenEmbedded layer index, then ask you which branch you would like to use:

Choose "master".

Then you'll be asked which board you'd like to build for. Scroll down and choose "cubietruck". At this point the scripts will download meta-sunxi for you and configure your build to use this BSP layer by adding it to your bblayers.conf.

Now you need to choose your distribution. Select "meta-yocto"; this layer will be fetched and added to your build. Since meta-yocto includes a number of poky variants, you will be asked to choose one: select "poky".

We now need to tell our build where it can place its downloads. Enter your download directory when asked. The entry box uses readline which, among other things, allows you to use tab-completion when entering your download path. I would recommend using a fully-qualified path for this information. It is common to use one location for all the downloads of all builds taking place on a given machine.
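
If you're curious, the standard way OpenEmbedded records this is the DL_DIR variable in your build's conf/local.conf; presumably that's what ends up being set here, along the lines of:

DL_DIR ?= "/home/<you>/oe-downloads"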

Now your configuration step is mostly finished.

If you simply wanted to perform a small core-image-minimal build, you could go ahead and bitbake that up now. But we want to add glmark2 to our image, and its metadata isn't found in the layers we currently have, so we need to add one more layer: meta-oe (you can see it in the build configuration listed under Build Help below).

Before we can build, we need to tweak our configuration to let bitbake know which kernel we want to use, which u-boot we want to use, what packages we want to add to our image, and so forth. Edit conf/local.conf and make yours look similar to the conf/local.conf listing given in the Build Help section below.

Now we are ready to build!

$ bitbake core-image-full-cmdline




When our build completes some warnings are issued; we can safely ignore these for now.

We will find our artifacts in ${TMPDIR}/deploy/images/cubietruck/.

Simply dd the sdimg to a microSD card:

# dd if=core-image-full-cmdline-cubietruck.sunxi-sdimg of=/dev/<your microSD device> bs=1M

Note that on some distributions you have to be the root user to perform the above step and that you need to figure out what jumble of letters to fill in for the <your microSD device> part.
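
If you're not sure which device node the microSD card shows up as, comparing the output of lsblk before and after inserting the card (or glancing at the end of dmesg) usually answers the question:

$ lsblk
$ dmesg | tail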

Pop the microSD into the cubietruck's microSD slot, attach an HDMI monitor, a 3.3V console cable, and apply power.

When the device finishes booting you can now run the glmark2-es2 demo. Unfortunately it only works in full-screen mode. If you want to see which tests it is currently running and the current FPS count while the tests are running you can specify the --annotate option.

There are two common ways to do this:

  1. plug a USB keyboard into the cubietruck and type the command into the matchbox-terminal console
    • # glmark2-es2 --fullscreen --annotate
  2. from the serial console run:
    • # export DISPLAY=:0
    • # glmark2-es2 --fullscreen --annotate



There is also a "sunximali-test" demo app you can try.


Build Help

If you're having trouble building there are a couple of things you can try:

  • If this is your first time using OpenEmbedded or you're rather new to it, you could try familiarizing yourself with the project and more basic builds to start:
    •  https://www.yoctoproject.org
    •  https://www.yoctoproject.org/documentation
    •  http://www.yoctoproject.org/docs/1.8/yocto-project-qs/yocto-project-qs.html
  • In the build configuration shown below you can see the exact repositories my build is using, and what the latest commit was for each. You could try checking out those exact same commits for each of the layers being used to see if that helps.
The build configuration for this build is:
Build Configuration:
BB_VERSION        = "1.27.1"
BUILD_SYS         = "x86_64-linux"
NATIVELSBSTRING   = "openSUSE-project-13.2"
TARGET_SYS        = "arm-poky-linux-gnueabi"
MACHINE           = "cubietruck"
DISTRO            = "poky"
DISTRO_VERSION    = "1.8+snapshot-20150930"
TUNE_FEATURES     = "arm armv7a vfp neon callconvention-hard vfpv4 cortexa7"
TARGET_FPU        = "vfp-vfpv4-neon"
meta-sunxi        = "master:14da837096f2c4bf1471b9cce5cf7fd30f55999b"
meta              = "master:4a1dec5c61f73e7cfa430271ed395094bb262f6b"
meta-yocto        = "master:613c38fb9b5f20a89ca88f6836a21b9c7604e13e"
meta-oe           = "master:f4533380c8a5c1d229f692222ee0c2ef9d187ef8"


The conf/local.conf is:
PREFERRED_PROVIDER_virtual/kernel = "linux-sunxi"
PREFERRED_PROVIDER_u-boot = "u-boot-sunxi"
PREFERRED_PROVIDER_virtual/bootloader = "u-boot-sunxi"
DEFAULTTUNE = "cortexa7hf-neon-vfpv4"
CORE_IMAGE_EXTRA_INSTALL = "packagegroup-core-x11-base sunxi-mali-test glmark2"
MACHINE_EXTRA_RRECOMMENDS = " kernel-modules"
IMAGE_FSTYPES_remove = "tar.bz2"
DISTRO_FEATURES_append = " x11"
PACKAGECONFIG_pn-glmark2 = "x11-gles2"
PREFERRED_PROVIDER_jpeg = "jpeg"
PREFERRED_PROVIDER_jpeg-native = "jpeg-native"
PACKAGE_CLASSES ?= "package_ipk"
EXTRA_IMAGE_FEATURES = "debug-tweaks"
USER_CLASSES ?= "buildstats image-mklibs image-prelink"
PATCHRESOLVE = "noop"
OE_TERMINAL = "auto"
BB_DISKMON_DIRS = "\
    STOPTASKS,${TMPDIR},1G,100K \
    STOPTASKS,${DL_DIR},1G,100K \
    STOPTASKS,${SSTATE_DIR},1G,100K \
    ABORT,${TMPDIR},100M,1K \
    ABORT,${DL_DIR},100M,1K \
    ABORT,${SSTATE_DIR},100M,1K"
PACKAGECONFIG_append_pn-qemu-native = " sdl"
PACKAGECONFIG_append_pn-nativesdk-qemu = " sdl"
ASSUME_PROVIDED += "libsdl-native"
CONF_VERSION = "1"

27 May 2015

ARM SBCs and SoCs

The following table shows which boards are examples of which architectures/processors:

The following table shows the most likely big.LITTLE pairings:

If you have a manufacturer and/or SoC in mind and would like to know which board you'd need to buy:

Sources:
http://en.wikipedia.org/wiki/Comparison_of_single-board_computers
http://www.arm.com/products/processors/cortex-a/
https://www.96boards.org
http://www.linux.com/news/embedded-mobile/mobile-linux/831550-survey-best-linux-hacker-sbcs-for-under-200

22 May 2015

Work Area Wire Spool Holder

I'm happy I finally found the time and parts to put together a wire spool holder for my work area!




I found all the necessary parts at my local Home Depot: a threaded rod, angle brackets, a couple of nuts, and a couple of screws. I did have to drill out one hole on each of the angle brackets in order to fit the rod I had chosen, but only by millimetres.