30 Dec 2020

Multiple DS18B20 Temperature Probes


In my last post I hooked up one DS18B20 to a RaspberryPi. The neat thing about the DS18B20 and the 1-wire protocol is that multiple such devices can be connected together, in a star topology for example. What's really great about this is that identifying an individual sensor doesn't depend on its position in a chain: each device has its own unique 64-bit identifier. This means the layout of the probes doesn't have to be determined ahead of time. Also, the length of the wires attached to each sensor can vary by a considerable amount, so sensors near and far can be connected to the same microcontroller.

I had a number of varieties of protoboard on hand, one of which is this type featuring long buses:


This style of board works perfectly for this scenario.

I also happened to have a bunch of these 3-pin male headers, most likely left over from some servo project.


Wouldn't it be great if there were a way to add a connector to the end of the temperature probes so they could plug into this header easily? As it turns out, it's Pololu to the rescue! Among the many things they sell, they also have crimp connector housings in various configurations, and all the other accessories to go with them.


I know this probably sounds like I have some relationship with Pololu but I don't. I know other electronics suppliers provide such items too, but Pololu's site makes it quite easy to buy the right things the first time around and gave me the confidence to try this out (thanks to their supporting videos and information). After a little practice on test wires I was able to add connectors to my temperature probes:


Cutting off a piece of protoboard and soldering on the header gives me a hub into which to plug all the temperature probes. Since we need a 4.7k pull-up resistor in the design anyway, I might as well add it to the hub so it's ready to go:


You'll notice there's no picture of the underside 😉 It's bad enough showing this hack job from the top; I'm not about to show the disaster on the other side. Thankfully all the buses are still separate from each other, and that's all that matters 😂

With the pull-up resistor in place I'm forcing the middle bus to be the ground. The connectors aren't keyed, so I just have to make sure the VDD and DQ lines line up correctly myself.


…and with the RaspberryPi:


Now when I boot my device I get a kernel log message about my GPIO setup and one log message for each of the sensors attached to my hub:

[    5.135283] gpio-25 (onewire@19): enforced open drain please flag it properly in DT/ACPI DSDT/board file
[    5.202823] w1_master_driver w1_bus_master1: Attaching one wire slave 28.01131bb70fee crc 30
[    5.497062] w1_master_driver w1_bus_master1: Attaching one wire slave 28.01131b7ad963 crc f6
[    5.599175] w1_master_driver w1_bus_master1: Attaching one wire slave 28.01131b62790b crc 83

Note: these messages don't appear side-by-side in the log, but they do appear in this order.

Now when I look in the 1-wire sysfs location I find:

root@raspberrypi3-64:~# cd /sys/bus/w1/devices/
root@raspberrypi3-64:/sys/bus/w1/devices# ls -l
lrwxrwxrwx    1 root     root             0 Dec 29 04:22 28-01131b62790b -> ../../../devices/w1_bus_master1/28-01131b62790b
lrwxrwxrwx    1 root     root             0 Dec 29 04:22 28-01131b7ad963 -> ../../../devices/w1_bus_master1/28-01131b7ad963
lrwxrwxrwx    1 root     root             0 Dec 29 04:22 28-01131bb70fee -> ../../../devices/w1_bus_master1/28-01131bb70fee
lrwxrwxrwx    1 root     root             0 Dec 29 04:22 w1_bus_master1 -> ../../../devices/w1_bus_master1

To read the temperatures of the probes I could visit each probe individually and simply "cat" its "temperature" file. This causes the kernel to perform the "Convert T [44h]" command followed by the "Read Scratchpad [BEh]" command on that probe.

A small performance improvement can be had by visiting the master driver device and issuing a "trigger" command to its "therm_bulk_read" sysfs entry, then reading the temperatures from the probe devices individually. This causes the "Convert T [44h]" command to be issued to all the temperature probes connected to this master.

Notice that once the bulk command is given, the value of "therm_bulk_read" will read -1 if at least one sensor is still performing the conversion, 1 if one of the sensors has not had its temperature read out, and 0 once all the temperatures have been read for the most recent bulk conversion:

root@raspberrypi3-64:~# cd /sys/bus/w1/devices/
root@raspberrypi3-64:/sys/bus/w1/devices# ls -1
28-01131b62790b
28-01131b7ad963
28-01131bb70fee
w1_bus_master1
root@raspberrypi3-64:/sys/bus/w1/devices# cat w1_bus_master1/therm_bulk_read 
0
root@raspberrypi3-64:/sys/bus/w1/devices# echo "trigger" > w1_bus_master1/therm_bulk_read 
root@raspberrypi3-64:/sys/bus/w1/devices# cat w1_bus_master1/therm_bulk_read 
1
root@raspberrypi3-64:/sys/bus/w1/devices# cat 28-01131b62790b/temperature 
23375
root@raspberrypi3-64:/sys/bus/w1/devices# cat w1_bus_master1/therm_bulk_read 
1
root@raspberrypi3-64:/sys/bus/w1/devices# cat 28-01131b7ad963/temperature 28-01131bb70fee/temperature 
23125
23437
root@raspberrypi3-64:/sys/bus/w1/devices# cat w1_bus_master1/therm_bulk_read 
0

Note that I can't issue the bulk command and read out the status fast enough to demonstrate the -1 case. Writing the "trigger" to start the bulk command appears to block until all the probes have performed their temperature conversions.

The interesting thing about the bulk command is that it allows all the probes to check their temperatures at roughly the same time, then the code can take its time reading out the temperatures that existed when the bulk command was given. If each probe were to be visited one at a time, the timing of the temperature checks from all the probes would be skewed. This might not be a problem, but it's handy having this feature available.
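For illustration, here's a minimal C sketch of that bulk-read flow; it assumes the sysfs paths and probe IDs shown above (substitute your own), and it doesn't bother with the -1 "still converting" case since writing the trigger appears to block until the conversions finish:

/*
 * Minimal sketch: trigger a bulk conversion, then read each probe via sysfs.
 * The probe IDs below are the ones from my hub; substitute your own.
 */
#include <stdio.h>

#define W1_DIR "/sys/bus/w1/devices/"

static const char *probes[] = {
	"28-01131b62790b",
	"28-01131b7ad963",
	"28-01131bb70fee",
};

int main(void)
{
	FILE *fp;
	char path[256];
	int temp_milli;
	size_t i;

	/* start a conversion on all probes at once */
	fp = fopen(W1_DIR "w1_bus_master1/therm_bulk_read", "w");
	if (fp == NULL) {
		perror("therm_bulk_read");
		return 1;
	}
	fputs("trigger", fp);
	fclose(fp);	/* the flush/write appears to block until the conversions are done */

	/* read back the values captured by the bulk conversion */
	for (i = 0; i < sizeof(probes) / sizeof(probes[0]); ++i) {
		snprintf(path, sizeof(path), W1_DIR "%s/temperature", probes[i]);
		fp = fopen(path, "r");
		if (fp == NULL)
			continue;
		if (fscanf(fp, "%d", &temp_milli) == 1)
			printf("%s: %.3f C\n", probes[i], temp_milli / 1000.0);
		fclose(fp);
	}
	return 0;
}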

Another neat thing about this setup is that the kernel actively probes the 1-wire bus during its operation, so adding probes is plug-and-play at runtime! If a new probe is added to the bus at runtime, the kernel detects it, adds it to the list of devices, and prints a message indicating the probe's unique serial number. This makes it easier to add probes to an installation and correlate which probe is measuring which temperature. For example, if you have a project that needs 5 probes, each measuring a different part of the project, you can boot your hardware with the probes unattached, then attach them one by one at runtime, taking note of each probe's serial number and the part of the project it's measuring as it's added.
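Since probes announce themselves at runtime, a small program can watch for them. The following sketch (again only an illustration, assuming the same sysfs location) polls the devices directory and reports any DS18B20, family code 28, that shows up:

/*
 * Illustrative sketch: poll the w1 sysfs directory and report any DS18B20
 * (family code 0x28) that appears while the system is running.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <dirent.h>

int main(void)
{
	char known[32][32];
	int nknown = 0;

	for (;;) {
		DIR *dir = opendir("/sys/bus/w1/devices");
		struct dirent *ent;

		if (dir == NULL)
			return 1;
		while ((ent = readdir(dir)) != NULL) {
			int i, seen = 0;

			if (strncmp(ent->d_name, "28-", 3) != 0)
				continue;
			for (i = 0; i < nknown; ++i)
				if (strcmp(known[i], ent->d_name) == 0)
					seen = 1;
			if (!seen && nknown < 32) {
				printf("new probe attached: %s\n", ent->d_name);
				snprintf(known[nknown++], sizeof(known[0]), "%s", ent->d_name);
			}
		}
		closedir(dir);
		sleep(2);	/* re-scan every couple of seconds */
	}
}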

29 Dec 2020

Temperature Readings with the DS18B20 and OpenEmbedded/Yocto



There are many ways to read a temperature in an electronics project. Probably the easiest way under Linux is with one of those pre-wired waterproof DS18B20 devices. Wire up the probe to your RaspberryPi, enable a device-tree overlay, and read a file.

Wiring

The datasheet for the DS18B20 has all the information you'll need (and more). The probe has 3 wires: ground (black), DQ data in/out (yellow), and VDD (red). Technically the device doesn't actually need VDD connected; it can siphon enough parasitic power from DQ in normal operation. But since there's already a RaspberryPi in the design providing a convenient 5V, and since connecting VDD makes the conversion potentially faster and more stable... might as well use it.

The datasheet recommends using a 4.7k pull-up on the DQ line. Grabbing 5V and ground from the RaspberryPi, I've wired up the probe as follows:


The 3-wire bundle comes from the probe; the individual wires run back to the RaspberryPi:


Connected to the RaspberryPi, the 3-wire bundle is my console cable; the individual wires run to the protoboard. The black wire is on ground, the red wire is on +5V. In my case I've decided to use the RaspberryPi's GPIO04, therefore the yellow wire is connected to pin07.



Build

By default the meta-raspberrypi OE/Yocto BSP layer uses the out-of-tree raspberrypi kernel. One of the differences between it and upstream is that the out-of-tree raspberrypi kernel contains all the device-tree overlays specific to the RaspberryPi. To interface with the DS18B20 the 1-wire overlay needs to be enabled in the RaspberryPi's config.txt. In OpenEmbedded/Yocto we use the RPI_EXTRA_CONFIG variable to enable arbitrary configurations in config.txt.

In addition to enabling the 1-wire protocol, we also need to specify the GPIO pin on which we want it to operate. In my specific case, given how I've wired the circuit, I'm using GPIO04, therefore I relay that information as follows:

RPI_EXTRA_CONFIG = "\
# enable 1-wire on gpio 4 \n\
dtoverlay=w1-gpio,gpiopin=4"

If you're building a core-image-minimal, you'll probably also want to add the following to your build:

CORE_IMAGE_EXTRA_INSTALL += " \
        ${MACHINE_EXTRA_RRECOMMENDS} \
        "

After building, flashing, and running the image from OpenEmbedded/Yocto, you should see something like the following flash by on the serial console as it boots:

[    5.047539] gpio-4 (onewire@4): enforced open drain please flag it properly in DT/ACPI DSDT/board file
[    5.090044] w1_master_driver w1_bus_master1: Attaching one wire slave 28.01131b62790b crc 83

This is good, it means everything is configured properly and the kernel has noticed the temperature probe connected to GPIO04 via 1-wire.

Note that each DS18B20 has a unique 64-bit identifier, so your output should be similar but slightly different. The "28" is the family code identifying the type of 1-wire device (i.e. a DS18B20), the hex value following the dot is the 48-bit serial number unique to my specific device, and the last byte is a CRC of the first 56 bits.

Read A File

If everything is working properly and you've seen the previous messages in your system log as the device boots, then you should be all set. Any devices that are detected can be found under /sys/bus/w1:

root@raspberrypi3-64:~# cd /sys/bus/w1
root@raspberrypi3-64:/sys/bus/w1# ls
devices            drivers            drivers_autoprobe  drivers_probe      uevent
root@raspberrypi3-64:/sys/bus/w1# cd devices/
root@raspberrypi3-64:/sys/bus/w1/devices# ls
28-01131b62790b  w1_bus_master1
root@raspberrypi3-64:/sys/bus/w1/devices# cd 28-01131b62790b/
root@raspberrypi3-64:/sys/devices/w1_bus_master1/28-01131b62790b# ls -l
-rw-r--r--    1 root     root          4096 Dec 29 05:35 alarms
lrwxrwxrwx    1 root     root             0 Dec 29 05:35 driver -> ../../../bus/w1/drivers/w1_slave_driver
--w-------    1 root     root          4096 Dec 29 05:35 eeprom
-r--r--r--    1 root     root          4096 Dec 29 05:35 ext_power
drwxr-xr-x    3 root     root             0 Dec 29 04:26 hwmon
-r--r--r--    1 root     root          4096 Dec 29 05:35 id
-r--r--r--    1 root     root          4096 Dec 29 05:35 name
drwxr-xr-x    2 root     root             0 Dec 29 05:35 power
-rw-r--r--    1 root     root          4096 Dec 29 05:35 resolution
lrwxrwxrwx    1 root     root             0 Dec 29 05:35 subsystem -> ../../../bus/w1
-r--r--r--    1 root     root          4096 Dec 29 04:28 temperature
-rw-r--r--    1 root     root          4096 Dec 29 04:26 uevent
-rw-r--r--    1 root     root          4096 Dec 29 05:35 w1_slave
root@raspberrypi3-64:/sys/devices/w1_bus_master1/28-01131b62790b# cat temperature 
23625
root@raspberrypi3-64:/sys/devices/w1_bus_master1/28-01131b62790b# hexdump -C id
00000000  28 0b 79 62 1b 13 01 83                           |(.yb....|
00000008

As you can see, the temperature probe is automatically detected and Linux handles all the details. Simply reading the "temperature" sysfs entry causes the kernel to ask the device to perform a temperature reading, then fetch the value from the device. A lot is being handled under the hood by the kernel. Note that the value is reported in millidegrees Celsius, so the reading at this point is 23.625°C.
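To make the ROM layout concrete, a small sketch along the following lines (assuming the sysfs path of my probe) reads the 8-byte "id" file shown in the hexdump above and splits it into the family code, serial number, and CRC. Note the serial is stored least-significant byte first, which is why the hexdump bytes appear reversed relative to the device name:

/*
 * Illustrative sketch: read the 8-byte ROM id exposed by the kernel and pick
 * apart the family code, 48-bit serial number, and CRC.
 * The sysfs path is the one for my probe; substitute your own.
 */
#include <stdio.h>

int main(void)
{
	unsigned char id[8];
	FILE *fp;
	int i;

	fp = fopen("/sys/bus/w1/devices/28-01131b62790b/id", "rb");
	if (fp == NULL || fread(id, 1, sizeof(id), fp) != sizeof(id)) {
		perror("id");
		return 1;
	}
	fclose(fp);

	printf("family code: %02x\n", id[0]);
	printf("serial     : ");
	for (i = 6; i >= 1; --i)	/* stored least-significant byte first */
		printf("%02x", id[i]);
	printf("\ncrc        : %02x\n", id[7]);
	return 0;
}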

Notes

If I wanted to switch to, say, GPIO25 instead of GPIO04, all that would be required is to change where the yellow wire connects to the RaspberryPi's 40-pin header (switch from pin07 to pin22) and modify RPI_EXTRA_CONFIG as follows:

RPI_EXTRA_CONFIG = "\
# enable onewire on gpio 25 \n\
dtoverlay=w1-gpio,gpiopin=25"

Build, update the µSD card, apply power… Now when I boot I get:

[    5.113801] gpio-25 (onewire@19): enforced open drain please flag it properly in DT/ACPI DSDT/board file
[    5.473042] w1_master_driver w1_bus_master1: Attaching one wire slave 28.01131b62790b crc 83

And everything continues to work as before:

root@raspberrypi3-64:~# cat /sys/bus/w1/devices/28-01131b62790b/temperature 
23500

27 Dec 2020

psplash Improvements for OpenEmbedded/Yocto

The psplash project uses fbdev graphics to show a logo on a screen during bootup and shutdown. It was started in 2006 and is meant for embedded systems. It is integrated into OpenEmbedded/Yocto quite well; simply add "splash" [NOTE: not "psplash"] to IMAGE_FEATURES and you're on your way.

IMAGE_FEATURES += "splash"

Of course there are caveats. For example, the psplash program is run by the init system; on most embedded systems this means it only runs after the bootloader, and only once the kernel gets to the point of starting userspace. Another caveat is that your kernel config needs to build the appropriate fbdev driver into the kernel (=y, not as a module), otherwise by the time the module is loaded psplash will already be done its work.

By default, an oecore build will have an OpenEmbedded logo; a poky build uses a Yocto Project logo. Layers are free to override the image with whatever logo they want. meta-raspberrypi provides a good example of this.

While I was playing around with psplash the other day, I noticed something peculiar. With qemu images everything works fine; if the init system is systemd everything is fine as well. But on my raspberrypi build with sysvinit as the init system, psplash would fail to run on the very first boot. For the first shutdown, and every bootup and shutdown thereafter, it works just fine. So why not on the first boot? Note also that sysvinit is currently still the default init system of an OE/Yocto build.

On a real system with real hardware, on the very first boot the filesystem is initially mounted read-only. Only later is the filesystem remounted read-write (and it remains read-write forevermore). One of the functions of the psplash program is to display a progress bar to give users a vague idea of the progress of the bootup. In order to communicate that progress to psplash, when it starts up the program attempts to use an existing fifo, or create one if it doesn't exist. Processes that want to update the progress send text messages to psplash over this fifo.

Because the psplash program is one of the very first things started by the init system, at the point where psplash is started on the very first boot the filesystem is still mounted read-only. The psplash program would use a fifo if one already existed, but prior to my investigation it wasn't easy to place a fifo into an image as it is being created. Since there is no pre-existing fifo, psplash tries to create one; since the filesystem is currently read-only, it fails to do so and terminates with an error.
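The failure mode is easy to picture with a simplified sketch of the logic (an illustration of the idea only, not psplash's actual code; the fifo path here is made up):

/*
 * Simplified illustration of the failure described above: reuse an existing
 * fifo, otherwise try to create one. On a read-only root filesystem the
 * mkfifo() fails (EROFS) and the program has no choice but to give up.
 */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
	const char *fifo = "/tmp/progress_fifo";	/* hypothetical path */
	struct stat st;

	if (stat(fifo, &st) == 0 && S_ISFIFO(st.st_mode)) {
		printf("reusing existing fifo %s\n", fifo);
	} else if (mkfifo(fifo, 0600) == 0) {
		printf("created fifo %s\n", fifo);
	} else {
		/* on the first boot the rootfs is still read-only */
		perror("mkfifo");
		return 1;
	}
	/* ...open the fifo and read progress messages... */
	return 0;
}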

At some point in the first bootup the filesystem is re-mounted read-write, so all subsequent shutdowns and boots succeed at running psplash.

Missing out on psplash on the first boot is particularly unfortunate because the first boot of most images often takes longer than subsequent boots. Oftentimes an image needs to do some housekeeping chores on the very first boot, but not on any others. For example: unique keys might need to be generated, post-install scripts might need to be run, etc. So having psplash work for all boots is a worthy goal.

My first thought was to get psplash to create its fifo in a part of the filesystem that is read-write from the start. It turns out that the root of the filesystem (i.e. /) is read-write from the start. Reviewers of that patch, however, weren't keen on the idea of messing up the root of the filesystem.

My next approach was to consider what would be involved in trying to add a fifo to an image that is being created. On the one hand, it's not hard at all: simply DEPEND on coreutils-native and call mkfifo to create the fifo in the staging area. In practice, however, I ran into a snag. The build ends up hanging forever until you kill it explicitly.

Turns out, when you perform a build with OpenEmbedded/Yocto, one of the great benefits is that the build includes a number of post-build steps that are run to check over various parts of the system for sanity. These checks and their logic are part of the insane.bbclass. One such check looks for a shebang in the first line of every file included in the image. The point of this check is to make sure that anything being shebanged (e.g. sh, bash, perl, python) is included in the runtime on-target image. Therefore the first line of every object in the image is checked. Unfortunately, if the object happens to be a fifo, reading from it will hang forever waiting for data to appear.

Looking through the code of that bbclass, it wasn't all that hard to find the one and only case where a sanity check reads through the contents of all files. Adding an extra check to make sure that any object whose contents will be examined is not a fifo is all that was required.

Now that it was possible to create an image with an already-existing fifo, it was a simple matter of updating a recipe's install process to add a fifo to the image and point psplash to it.

Turns out others have noticed that adding fifos to an image isn't easily possible, so a bug was filed. Thankfully Richard Purdie remembered seeing the bug and made me aware that in my quest to get psplash working better, I was also helping to close a bug for the project! (yeah!!) Randy MacLeod had suggested that, as part of fixing the bug, an automated unit test should be added to make sure we don't accidentally lose the ability to add fifos to an image. A little bit of back-and-forth with Richard over IRC pointed me in the right direction of how to run the existing tests, and where to look to add a new fifo test.

While I was working on this project I had the opportunity to look through the psplash code itself. One thing I noticed is that in addition to sending messages via the fifo to update the progress, there is another message that can be sent to specify a text string to print immediately above the progress bar. Turns out there's an invisible text field just above the progress bar one can use to show messages to the user. I thought it would be a good addition to have the name of the current boot script displayed above the progress bar. In this way a user can see exactly what is running.
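For reference, updating psplash from a script or program is just a matter of writing short commands ("PROGRESS <n>", "MSG <text>") to its fifo, which is what the psplash-write helper does. A rough sketch follows; the fifo location varies by configuration (often under /run or /tmp), so treat the path below as an assumption:

/*
 * Sketch of reporting boot progress to psplash over its fifo. The "MSG" and
 * "PROGRESS" commands are the ones psplash understands; the fifo path is an
 * assumption and depends on how psplash was configured.
 */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

static void psplash_send(const char *cmd)
{
	int fd = open("/run/psplash_fifo", O_WRONLY | O_NONBLOCK);

	if (fd < 0)
		return;		/* psplash not running; silently ignore */
	/* like psplash-write, include the terminating NUL in the write */
	(void)write(fd, cmd, strlen(cmd) + 1);
	close(fd);
}

int main(void)
{
	psplash_send("MSG Starting networking");
	psplash_send("PROGRESS 40");
	return 0;
}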

When a system is booting, the progress bar doesn't proceed smoothly; instead it updates in spurts and jumps. Sometimes the progress bar seemingly stops for a noticeable period of time. Showing the user what's currently running lets them know the reason for any pauses and provides for, in my opinion, a better user experience. Therefore, as part of my updates, I also added the plumbing necessary for the sysvinit system to provide not only a progress bar indication of the progress, but also a textual description of which startup/shutdown script is currently running.

Since, up until this point, showing a textual description of the progress hadn't been used, the feedback I received suggested that this new feature be added with a knob to turn it on and off, and that it should be off by default. Therefore, should you want to enable it in your builds, you'll need to:

PACKAGECONFIG_pn-sysvinit = "psplash-text-updates"


I've tried to capture the changes in the following video. Unfortunately the camera struggles to focus, so it's not the best video you'll see. Hopefully it conveys the gist.


The video shows the very first boot of a raspberrypi system. After the system finishes booting up, I then reboot the device. Notice how sshd takes a noticeable amount of time to run on the first boot, but finishes almost instantly the second time around.

17 Dec 2020

Graphics with OpenEmbedded/Yocto without X11/Weston

In a previous post I discussed graphics on the RaspberryPi and how one could use the closed binary blob + the userland library to do GLESv2 without a windowing system/compositor. That solution resulted in a system image that only required 36 packages and was 28MB in size. However, it is RaspberryPi-specific, 32-bit-specific, GLESv2-specific, and the quality of the binary blob graphics lags noticeably behind that of the fully open-source alternatives.

A better, and more generic, solution is to use DRM/KMS, GBM, and EGL directly. This allows fullscreen, "bare-metal" rendering of OpenGL or OpenGL ES apps without the weight of x11 or weston/wayland.

Examples of this usage can be found with: kmscube, glmark2, mpv, and kodi. mpv and kodi are huge applications with dozens of dependencies each, so they aren't good to use for this example where I'm trying to show how small a bare-metal graphics system can be.
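To give a feel for what "directly" means here, the bring-up of such an application looks roughly like the following sketch (heavily abridged: no error handling and no modesetting/page-flip loop; kmscube is the complete reference, and /dev/dri/card0 is an assumption):

/*
 * Bare-bones sketch of DRM/KMS + GBM + EGL bring-up. Link with -ldrm -lgbm -lEGL.
 */
#include <fcntl.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <gbm.h>
#include <EGL/egl.h>

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
	drmModeRes *res = drmModeGetResources(fd);
	drmModeConnector *conn = NULL;
	int i;

	/* find the first connected connector and use its first mode */
	for (i = 0; i < res->count_connectors; ++i) {
		conn = drmModeGetConnector(fd, res->connectors[i]);
		if (conn && conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
			break;
		drmModeFreeConnector(conn);
		conn = NULL;
	}
	if (conn == NULL)
		return 1;
	printf("using mode %dx%d\n", conn->modes[0].hdisplay, conn->modes[0].vdisplay);

	/* GBM provides the buffers EGL will render into */
	struct gbm_device *gbm = gbm_create_device(fd);
	struct gbm_surface *surface = gbm_surface_create(gbm,
			conn->modes[0].hdisplay, conn->modes[0].vdisplay,
			GBM_FORMAT_XRGB8888,
			GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

	/* EGL sits directly on top of the GBM device: no X11, no Wayland */
	EGLDisplay dpy = eglGetDisplay((EGLNativeDisplayType)gbm);
	EGLint major, minor;
	eglInitialize(dpy, &major, &minor);
	printf("EGL %d.%d initialized on GBM\n", (int)major, (int)minor);

	/* ...choose an EGLConfig, create a context/surface, render, then scan
	 * out the resulting buffers with drmModeSetCrtc/drmModePageFlip... */
	(void)surface;
	eglTerminate(dpy);
	return 0;
}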

In this case I'm going to demonstrate on the RaspberryPi, but this solution is generic enough to work with any board whose SoC has a GPU that is supported by Mesa's DRM. I'm using the following layers:

  • bitbake: 71aaac9efa69abbf6c27d174e0862644cbf674ef
  • openembedded-core: c58fcc1379ca5755a5b670f79b75e94370d4943c
  • meta-openembedded: f03ad4971ed0b7cf34550a90ee3c0fa18f964533
  • meta-raspberrypi: 361f42e346e59f3a3fafcfa4ab7c948969d5abf4

The edits I've made to my conf/local.conf are:

 1  MACHINE = "raspberrypi3-64"
 2
 3  MACHINE_FEATURES_append = " vc4graphics"
 4  DISTRO_FEATURES += "opengl"
 5  CORE_IMAGE_EXTRA_INSTALL += "glmark2 kmscube"
 6  PACKAGECONFIG_append_pn-glmark2 = " drm-gl"
 7
 8  ENABLE_UART = "1"

Note:

  • I'm using "raspberrypi3-64" as my MACHINE; the same works for "raspberrypi3" (i.e. a 32-bit build) as well as a whole bunch of other devices/machines
  • on line 3 I'm specifically requesting the fully open-source graphics stack based on Mesa
  • notice that I'm only adding "opengl" to the DISTRO_FEATURES, and not "x11" (or "wayland")
  • on line 5 I want glmark2 and kmscube added to my image
  • the way the glmark2 recipe is written, it builds drm-gles2 by default; to get drm-gl built as well I've added it to the recipe's PACKAGECONFIG
  • I like enabling the UART and using the board via the serial console

Building core-image-minimal:

Build Configuration:
BB_VERSION           = "1.49.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "opensuseleap-15.2"
TARGET_SYS           = "arm-oe-linux-gnueabi"
MACHINE              = "raspberrypi3"
DISTRO               = "nodistro"
DISTRO_VERSION       = "nodistro.0"
TUNE_FEATURES        = "arm vfp cortexa7 neon vfpv4 thumb callconvention-hard"
TARGET_FPU           = "hard"
meta-raspberrypi     = "master:361f42e346e59f3a3fafcfa4ab7c948969d5abf4"
meta                 = "master:c58fcc1379ca5755a5b670f79b75e94370d4943c"
meta-oe              = "master:f03ad4971ed0b7cf34550a90ee3c0fa18f964533"

I end up with an image that has 40 packages:

base-files_3.0.14-r89_raspberrypi3.ipk
base-passwd_3.5.29-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox-syslog_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox-udhcpc_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
eudev_3.2.9-r0_cortexa7t2hf-neon-vfpv4.ipk
glmark2_20201114+0+784aca755a-r0_cortexa7t2hf-neon-vfpv4.ipk
init-ifupdown_1.0-r7_raspberrypi3.ipk
initscripts-functions_1.0-r155_cortexa7t2hf-neon-vfpv4.ipk
initscripts_1.0-r155_cortexa7t2hf-neon-vfpv4.ipk
init-system-helpers-service_1.58-r0_cortexa7t2hf-neon-vfpv4.ipk
kbd_2.3.0-r0_cortexa7t2hf-neon-vfpv4.ipk
keymaps_1.0-r31_raspberrypi3.ipk
kmscube_git-r0_cortexa7t2hf-neon-vfpv4.ipk
ldconfig_2.32-r0_cortexa7t2hf-neon-vfpv4.ipk
libblkid1_2.36-r0_cortexa7t2hf-neon-vfpv4.ipk
libc6_2.32-r0_cortexa7t2hf-neon-vfpv4.ipk
libdrm2_2.4.103-r0_cortexa7t2hf-neon-vfpv4.ipk
libegl-mesa_2:20.2.4-r0_cortexa7t2hf-neon-vfpv4.ipk
libexpat1_2.2.10-r0_cortexa7t2hf-neon-vfpv4.ipk
libgbm1_2:20.2.4-r0_cortexa7t2hf-neon-vfpv4.ipk
libgcc1_10.2.0-r0_cortexa7t2hf-neon-vfpv4.ipk
libglapi0_2:20.2.4-r0_cortexa7t2hf-neon-vfpv4.ipk
libgles2-mesa_2:20.2.4-r0_cortexa7t2hf-neon-vfpv4.ipk
libjpeg62_1:2.0.6-r0_cortexa7t2hf-neon-vfpv4.ipk
libkmod2_27-r0_cortexa7t2hf-neon-vfpv4.ipk
libpng16-16_1.6.37-r0_cortexa7t2hf-neon-vfpv4.ipk
libstdc++6_10.2.0-r0_cortexa7t2hf-neon-vfpv4.ipk
libudev1_3.2.9-r0_cortexa7t2hf-neon-vfpv4.ipk
libz1_1.2.11-r0_cortexa7t2hf-neon-vfpv4.ipk
mesa-megadriver_2:20.2.4-r0_cortexa7t2hf-neon-vfpv4.ipk
modutils-initscripts_1.0-r7_cortexa7t2hf-neon-vfpv4.ipk
netbase_1:6.2-r0_all.ipk
packagegroup-core-boot_1.0-r17_raspberrypi3.ipk
run-postinsts_1.0-r10_all.ipk
sysvinit-inittab_2.88dsf-r10_raspberrypi3.ipk
sysvinit-pidof_2.97-r0_cortexa7t2hf-neon-vfpv4.ipk
sysvinit_2.97-r0_cortexa7t2hf-neon-vfpv4.ipk
update-alternatives-opkg_0.4.3-r0_cortexa7t2hf-neon-vfpv4.ipk
update-rc.d_0.8-r0_all.ipk

and my image size is 36MB:

-rw-r--r-- 2 trevor users  36M Dec 16 23:40 tmp-glibc/deploy/images/raspberrypi3/core-image-minimal-raspberrypi3-20201217044019.rootfs.ext3

The image is super-fast to boot to a cmdline, no X11 or Wayland is started, and from the serial console I can run kmscube:


glmark2-es2-drm (OpenGL ES 2):


or glmark2-drm (OpenGL):

15 Dec 2020

userland graphics with OpenEmbedded/Yocto

When the RaspberryPi was first released (Apr 2012), support for its GPU was provided by a binary blob that was accessible via a library called "userland". Originally this glue library was supplied in binary format, but on Oct 24, 2012 the sources for the userland glue library were made available. The userland library exposes support for a number of GPU APIs including: EGL, GLESv2, OpenVG, and others. Although many parts of this graphics stack are open-source, the core GPU code remains closed. Developers can call GLESv2 functions, for example, but aren't able to manage any of the GPU's resources.

Note that this graphics stack (binary blob + userland) is separate from the fully open-source support that has been added to Mesa since release 10.3.

There exist, therefore, two providers of GLES on a RaspberryPi system: the binary blob + userland, and full Mesa with support for vc4.

When using OpenEmbedded/Yocto to put together an image targeting a RaspberryPi device, if the intent is to run GLES applications, one must choose between these two graphics stacks. Otherwise the build system wouldn't know which graphics stack is intended to provide GLES when linking a GLES application (e.g. glmark2).

An application using GLES usually sits atop a large number of libraries such as various X11 libraries, xcb, drm, EGL, GL/GLES, etc. which allow such an application to run in its own window as part of a windowing system. With the blob+userland option, a number of the libraries lower down in the stack are replaced by the dispmanx library. The dispmanx library doesn't sit on top of a window system; it prefers to take over the entire screen. As a result, running a graphics application with the blob+userland requires far fewer packages, doesn't require any x11/weston, and doesn't require any windowing environment.
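To give a sense of what sitting on dispmanx looks like, here's a minimal sketch (an illustration only; it just opens the display and queries its size, which is roughly where the raspidmx samples begin). bcm_host.h comes from the userland package; link with -lbcm_host:

/*
 * Minimal dispmanx sketch: initialize the VideoCore interface, open the main
 * display, and print its size. There is no window system here; dispmanx
 * hands us the whole screen.
 */
#include <stdio.h>
#include "bcm_host.h"

int main(void)
{
	DISPMANX_DISPLAY_HANDLE_T display;
	DISPMANX_MODEINFO_T info;

	bcm_host_init();

	/* display 0 is the main LCD/HDMI output */
	display = vc_dispmanx_display_open(0);
	if (display == 0) {
		fprintf(stderr, "unable to open display\n");
		return 1;
	}

	vc_dispmanx_display_get_info(display, &info);
	printf("display is %dx%d\n", info.width, info.height);

	/* a real application would now add elements with vc_dispmanx_element_add()
	 * and render into them (optionally via EGL) */

	vc_dispmanx_display_close(display);
	bcm_host_deinit();
	return 0;
}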

The original RaspberryPis ran on 32-bit hardware. In an effort to prioritize backwards compatibility, even when the RaspberryPi migrated to 64-bit architectures, the official images continued to be built for 32-bit. As a result, most of the blob+userland code was written for, and assumes, a 32-bit environment.

There isn't very much information available regarding the dispmanx and userland libraries. The userland code comes with a bunch of sample applications that can be optionally built when building the library. A person named Andrew Duncan wrote a bunch of sample dispmanx applications which were published on github in a repository named raspidmx. Also, the popular glmark2 code has an experimental fork with the changes necessary to run on top of dispmanx.

The best-known BSP layer with support for RaspberryPi includes the necessary knobs and buttons to allow you to create an image and select which graphics stack you would like to use. If you choose the blob+userland stack, support is also available for building the optional userland applications, for building raspidmx, as well as a version of glmark2 that runs on dispmanx.

Assuming you're already familiar with most of the basics of setting up an OpenEmbedded/Yocto build, for this test (at a minimum) you'll need:

  • bitbake
  • openembedded-core
  • meta-openembedded
  • meta-raspberrypi

Once you've cloned those repositories and set up your build environment, in order to use the blob+userland graphics stack as well as build all the sample applications mentioned above, my conf/local.conf has the following changes and additions:

 1  MACHINE = "raspberrypi3"
 2  DISABLE_VC4GRAPHICS = "1"
 3
 4  DISTRO_FEATURES += "x11 dispmanx"
 5  CORE_IMAGE_EXTRA_INSTALL += "glmark2 raspidmx"
 6  PACKAGECONFIG_append_pn-userland = " allapps"
 7  PACKAGECONFIG_pn-glmark2 = "dispmanx"
 8
 9  ENABLE_UART = "1"
10  GPU_MEM = "512"

NOTE: this is not my entire conf/local.conf, but rather just the things that I added or changed.

  • I plan to run this image on a Raspberry Pi 3 Model B V1.2, which has a 64-bit SoC, but I'm setting the MACHINE to "raspberrypi3" (i.e. the 32-bit machine)
  • On line 2 I'm explicitly disabling VC4GRAPHICS (which refers to the fully open-sourced Mesa support), thereby enabling support for the blob+userland graphics stack
  • Although I'm not building or including x11 support, when building glmark2 the code refers to some X11 headers (which might be an oversight on the part of the glmark2 code); in any case, we need to add x11 to DISTRO_FEATURES
  • On line 5 I'm adding the glmark2 (GLES) and raspidmx (dispmanx) sample applications to the image
  • On line 6 I'm enabling the optional build of the sample applications when building the userland library
  • On line 7 I'm making sure glmark2 is built with support for dispmanx
  • I enable the console UART on the Pi since I prefer working with the device over the console rather than plugging in a keyboard and mouse
  • On line 10 I increase the memory available to the GPU since one of the glmark2 tests fails with the default setting

With my build properly configured, I proceed to build "core-image-minimal" as follows:

Build Configuration:
BB_VERSION           = "1.49.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "opensuseleap-15.2"
TARGET_SYS           = "arm-oe-linux-gnueabi"
MACHINE              = "raspberrypi3"
DISTRO               = "nodistro"
DISTRO_VERSION       = "nodistro.0"
TUNE_FEATURES        = "arm vfp cortexa7 neon vfpv4 thumb callconvention-hard"
TARGET_FPU           = "hard"
meta-raspberrypi     = "master:e4f5c32925fec90ff688e51197cb052fe12af82e"
meta                 = "master:a55b01a3a1faf9a52d7edad074c76327f637aaa2"
meta-oe              = "master:936f2380bb5112721eec2db46eb35b5600ac28de"

Note that bitbake is at checkout: 71aaac9efa69abbf6c27d174e0862644cbf674ef

When my build is done the only packages in my image are:

base-files_3.0.14-r89_raspberrypi3.ipk
base-passwd_3.5.29-r0_cortexa7t2hf-neon-vfpv4.ipk
bash_5.0-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox-syslog_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
busybox-udhcpc_1.32.0-r0_cortexa7t2hf-neon-vfpv4.ipk
eudev_3.2.9-r0_cortexa7t2hf-neon-vfpv4.ipk
glmark2_20201114+0+784aca755a-r0_cortexa7t2hf-neon-vfpv4.ipk
init-ifupdown_1.0-r7_raspberrypi3.ipk
initscripts-functions_1.0-r155_cortexa7t2hf-neon-vfpv4.ipk
initscripts_1.0-r155_cortexa7t2hf-neon-vfpv4.ipk
init-system-helpers-service_1.58-r0_cortexa7t2hf-neon-vfpv4.ipk
kbd_2.3.0-r0_cortexa7t2hf-neon-vfpv4.ipk
keymaps_1.0-r31_raspberrypi3.ipk
ldconfig_2.32-r0_cortexa7t2hf-neon-vfpv4.ipk
libblkid1_2.36-r0_cortexa7t2hf-neon-vfpv4.ipk
libc6_2.32-r0_cortexa7t2hf-neon-vfpv4.ipk
libgcc1_10.2.0-r0_cortexa7t2hf-neon-vfpv4.ipk
libjpeg62_1:2.0.6-r0_cortexa7t2hf-neon-vfpv4.ipk
libkmod2_27-r0_cortexa7t2hf-neon-vfpv4.ipk
libpng16-16_1.6.37-r0_cortexa7t2hf-neon-vfpv4.ipk
libstdc++6_10.2.0-r0_cortexa7t2hf-neon-vfpv4.ipk
libtinfo5_6.2-r0_cortexa7t2hf-neon-vfpv4.ipk
libz1_1.2.11-r0_cortexa7t2hf-neon-vfpv4.ipk
modutils-initscripts_1.0-r7_cortexa7t2hf-neon-vfpv4.ipk
ncurses-terminfo-base_6.2-r0_cortexa7t2hf-neon-vfpv4.ipk
netbase_1:6.2-r0_all.ipk
packagegroup-core-boot_1.0-r17_raspberrypi3.ipk
raspidmx_0.0+git0+e2ee6faa0d-r0_cortexa7t2hf-neon-vfpv4.ipk
run-postinsts_1.0-r10_all.ipk
sysvinit-inittab_2.88dsf-r10_raspberrypi3.ipk
sysvinit-pidof_2.97-r0_cortexa7t2hf-neon-vfpv4.ipk
sysvinit_2.97-r0_cortexa7t2hf-neon-vfpv4.ipk
update-alternatives-opkg_0.4.3-r0_cortexa7t2hf-neon-vfpv4.ipk
update-rc.d_0.8-r0_all.ipk
userland_20201027-r0_cortexa7t2hf-neon-vfpv4.ipk

My entire image size is ~28MB:

-rw-r--r-- 2 trevor users  28M Dec 15 14:23 core-image-minimal-raspberrypi3-20201215192252.rootfs.ext3

Flashing to a µSD card and booting, from the serial console I am able to run the userland sample applications (not all of the sample apps are shown running here, but they all do run):


I can even run 2 of the sample userland applications at the same time:


I can run the raspidmx samples (not all are shown here, but they all do run):




And I can run glmark2-es2-dispmanx:



19 Oct 2020

Writing Software for your Embedded Device with OpenEmbedded

In my last post we built our own, very up-to-date, image for our target device (a RaspberryPi3 fitted with Pimoroni's Automation-HAT) using Yocto/OE and poky. This image included the latest revisions of Pimoroni's Python libraries for driving the Automation-HAT, thanks to the enormous amount of help devtool provides for creating new recipes for existing code available in repositories.

If there exists a repository of code you'd like to add to your image, you need a recipe that will fetch, build, deploy, etc the software. If you're lucky a recipe will already exist in the layer index. If you have to write your own recipe, devtool is a powerful and invaluable tool. Simply invoke devtool, choose a recipe name, point devtool at the repository, and it does much of the work for you (including identifying dependencies)! The tooling understands dozens of fetchers for common repository types (git, cvs, clearcase, perforce, wget, bzr, etc…), and it provides dozens of handlers for dealing with various build systems (autotools, make, cmake, meson, npm, setuptools, etc…). As you can imagine, this exercise becomes more challenging if the software isn't using a common build system, or isn't using it correctly.

But what if the code doesn't already exist? If you're creating your own product and writing your own software, there will not be any pre-existing repository at which you can point devtool. This is the scenario we're going to focus on in this post. The assumptions are that you're writing code that needs to be compiled (i.e. not interpreted), and that you're using a much more powerful build host for development, therefore you will need to cross-compile your code. Since the hardware we're using has some "toys" to "play" with (i.e. LEDs, ADCs, relays, inputs, and outputs), the goal of our code will be to "play" with some of the "toys" found on the Automation-HAT. Also, since my goal is to focus on cross-compiling and embedded development, I'm going to ignore the pre-existing python libraries that are available for this hardware.

There are two broad scenarios:

  1. you are comfortable using OpenEmbedded and would like to create an image and write the software entirely on your own host machine
  2. you are working as part of a team and have no interest in the overall image or knowing how the images are created, you only want to focus on writing the software

In either case all developers need access to the cross-development tools that will be used to build the final image. This will help alleviate many of the "gotchas" that crop up late in the development cycle when code gets integrated into the final image. If developers are using different versions of a given compiler, it's not uncommon for newer versions of the same tool to find more problems/errors with the code than older versions, or for newer versions to compile the code differently or make different assumptions. Having all developers use the same versions of all the same tools and libraries throughout the development process goes a long way towards improving your release process and improving the quality of your testing.

As I mentioned in my previous post, I like to start small and make incremental changes as I go, which means I like to build my code often and test it often (preferably on the device itself). So let's start with a "Hello, world!". In any "good" software project, the code is but one component of the overall effort. Therefore I like to put my code in a src subdirectory of my project (…in anticipation of a doc directory for documentation, a test subdirectory for tests, etc).

$ mkdir src
$ cd src
$ vi i2ctest.c

…and for the code…

/*
 * Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>
 */

#include <stdio.h>

int
main (void)
{
        printf("Hello, world!\n");
        return 0;
}

As good developers we should be thinking about using a standard build system. There are many from which to choose; for this example I'm going to select the autotools. In order for our code to be built using the autotools, we need to put project-specific metadata into various files, which the autotools then process so we can use regular make to build.

In the top-level directory we need to create:

  1. the project's configure.ac (which eventually gets transformed into the familiar ./configure script)
    dnl Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>
    
    AC_PREREQ(2.57)
    AC_INIT([automationhat-doodles], 0.1.0, twoerner@gmail.com, automationhat-doodles)
    AC_CONFIG_SRCDIR(src/i2ctest.c)
    AC_CONFIG_AUX_DIR(cfg)
    AM_INIT_AUTOMAKE([foreign no-dist-gzip dist-bzip2 1.9])
    AM_CONFIG_HEADER(cfg/config.h)
    
    SUBDIRS="src"
    
    dnl **********************************
    dnl checks for programs
    dnl **********************************
    AC_PROG_CC
    AC_PROG_CPP
    AC_PROG_MAKE_SET
    AC_PROG_INSTALL
    AC_PROG_LN_S
    
    dnl **********************************
    dnl checks for header files
    dnl **********************************
    AC_HEADER_STDC
    AC_CHECK_HEADERS(stdio.h stdlib.h string.h unistd.h fcntl.h errno.h getopt.h)
    AC_CHECK_HEADERS(sys/types.h sys/stat.h sys/ioctl.h linux/i2c.h linux/i2c-dev.h)
    
    dnl **********************************
    dnl checks for typedefs, structs, and
    dnl compiler characteristics
    dnl **********************************
    AC_TYPE_SIZE_T
    
    dnl **********************************
    dnl other stuff
    dnl **********************************
    AC_SUBST(SUBDIRS)
    
    dnl **********************************
    dnl output
    dnl **********************************
    AC_OUTPUT(Makefile
    cfg/Makefile
    src/Makefile)
    

  2. the top-level Makefile.am
    ## Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>
    
    ########################
    ## top-level Makefile.am
    ########################
    SUBDIRS = @SUBDIRS@
    DIST_SUBDIRS = cfg @SUBDIRS@
    

I prefer to put the configuration-related things into a separate cfg directory, so I create that directory and create a Makefile.am in there too:

$ mkdir cfg
$ vi cfg/Makefile.am

## Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>

########################
# cfg/Makefile.am
########################
SUBDIRS =

Finally, we need a Makefile.am in the src directory as well:

## Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>

########################
## src/Makefile.am
########################
SUBDIRS =
AM_CFLAGS = -Wall -Werror -Wextra -Wconversion -Wreturn-type -Wstrict-prototypes

bin_PROGRAMS = i2ctest
i2ctest_SOURCES = i2ctest.c

Our code is all ready to go and our target device is standing by. Now, how do we cross-compile this code with the tools we've used to create the rest of our image, and how can we test the code to verify it's working correctly? Earlier I outlined two scenarios: you're either going to be both writing some code and building the images, or you're only interested in writing code. There is a solution for each scenario:

  1. use devtool to help write a recipe for the code you're writing (which is very similar to how we used it in my previous blog post); the difference is that instead of pointing devtool to a repository "out there" on the internet, you point it at the top-level directory of wherever you're writing your code
  2. the person who is creating the images and running bitbake generates an SDK and hands it to the developer who uses it like any other SDK

Image And Code

As before, in this case we can use devtool to create a recipe for us and simply build our code like any other package:

$ devtool add automationhat-doodles /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/
NOTE: Starting bitbake server...
INFO: Creating workspace layer in /z/build-master/6951/automation-hat/build/workspace
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
NOTE: Reconnecting to bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
NOTE: Retrying server connection (#1)...
NOTE: Starting bitbake server...
INFO: Recipe /z/build-master/6951/automation-hat/poky/build/workspace/recipes/automationhat-doodles/automationhat-doodles_0.1.0.bb has been automatically created; further editing may be required to make it fully functional

Looking at the workspace:

$ tree workspace
workspace
├── README
├── appends
│   └── automationhat-doodles_0.1.0.bbappend
├── conf
│   └── layer.conf
└── recipes
    └── automationhat-doodles
        └── automationhat-doodles_0.1.0.bb

4 directories, 4 files…

…the appends:

inherit externalsrc
EXTERNALSRC = "/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles"

…and the raw recipe devtool created for us:

# Recipe created by recipetool
# This is the basis of a recipe and may need further editing in order to be fully functional.
# (Feel free to remove these comments when editing.)

# Unable to find any files that looked like license statements. Check the accompanying
# documentation and source headers and set LICENSE and LIC_FILES_CHKSUM accordingly.
#
# NOTE: LICENSE is being set to "CLOSED" to allow you to at least start building - if
# this is not accurate with respect to the licensing of the software being built (it
# will not be in most cases) you must specify the correct value before using this
# recipe for anything other than initial testing/development!
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""

# No information for SRC_URI yet (only an external source tree was specified)
SRC_URI = ""

# NOTE: if this software is not capable of being built in a separate build directory
# from the source, you should replace autotools with autotools-brokensep in the
# inherit line
inherit autotools

# Specify any options you want to pass to the configure script using EXTRA_OECONF:
EXTRA_OECONF = ""

Everything we have is enough to build our software, no changes required:

$ devtool build automationhat-doodles
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
Loading cache: 100% |                                                                                                                                                                          | ETA:  --:--:--
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |#########################################################################################################################################################################| Time: 0:00:13
Parsing of 2044 .bb files complete (0 cached, 2044 parsed). 3215 targets, 123 skipped, 0 masked, 0 errors.
Removing 1 recipes from the cortexa53 sysroot: 100% |###########################################################################################################################################| Time: 0:00:00
Removing 1 recipes from the raspberrypi3_64 sysroot: 100% |#####################################################################################################################################| Time: 0:00:00
Loading cache: 100% |###########################################################################################################################################################################| Time: 0:00:03
Loaded 3214 entries from dependency cache.
Parsing recipes: 100% |#########################################################################################################################################################################| Time: 0:00:00
Parsing of 2044 .bb files complete (2043 cached, 1 parsed). 3215 targets, 123 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "1.47.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "aarch64-poky-linux"
MACHINE              = "raspberrypi3-64"
DISTRO               = "poky"
DISTRO_VERSION       = "3.1+snapshot-20201018"
TUNE_FEATURES        = "aarch64 armv8a crc cortexa53"
TARGET_FPU           = ""
meta                 
meta-poky            
meta-yocto-bsp       = "master:7cad26d585f67fa6bf873b8be361c6335a7db376"
meta-raspberrypi     = "master:6f85611576b7ccbfb6012631f741bd1daeffc9c9"
workspace            = "master:b1a0414a6df77674a860c365825a4500e6cd698b"
meta-oe              
meta-python          = "master:86a7820b7964ff91d7a26ac5c506e83292e347a3"
devtool-additions    = "master:b1a0414a6df77674a860c365825a4500e6cd698b"

Initialising tasks: 100% |######################################################################################################################################################################| Time: 0:00:00
Sstate summary: Wanted 0 Found 0 Missed 0 Current 116 (0% match, 100% complete)
NOTE: Executing Tasks
NOTE: automationhat-doodles: compiling from external source tree /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles
NOTE: Tasks Summary: Attempted 495 tasks of which 487 didn't need to be rerun and all succeeded.
NOTE: Writing buildhistory
NOTE: Writing buildhistory took: 1 seconds
NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_deploy_source_date_epoch: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_package: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 1 scratch)

Success! The exact same build system that OE created to generate our entire image has now been used in exactly the same way to build the code we just wrote. As a developer I don't have to find or build an appropriate cross-compiler, or worry about whether I've passed the correct flags to the cross-compiler to ensure my build is using the correct headers (for example).

Now we need to get our code over to the target device so we can test it. In theory we could generate a new image with each build and test our code that way, but that would create quite a long development cycle! Provided we have networking working on the target and an ssh server running, we could just scp the file over and run it. With a simple program like this one, that solution would work fine. But as the code gets more complicated (e.g. multiple pieces, libraries, etc.) copying everything over by hand increases the possibility of errors. Also, once we're done testing a given build, it would be nice to have a way to clear everything off the target device between uploads.

Once again, devtool is very helpful in this situation! It includes not only a devtool deploy-target command, but also a devtool undeploy-target command!

$ devtool deploy-target automationhat-doodles root@10.0.0.50
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
NOTE: Reconnecting to bitbake server...
NOTE: Previous bitbake instance shutting down?, waiting to retry...
NOTE: Retrying server connection (#2)...
Loading cache: 100% |###########################################################################################################################################################################| Time: 0:00:00
Loaded 3209 entries from dependency cache.
Parsing recipes: 100% |#########################################################################################################################################################################| Time: 0:00:01
Parsing of 2040 .bb files complete (2039 cached, 1 parsed). 3210 targets, 165 skipped, 0 masked, 0 errors.
tar: ./usr/bin/i2ctest: time stamp 2020-10-09 07:07:44.128603897 is 274194.69420039 s in the future
tar: ./usr/bin: time stamp 2020-10-09 07:07:44.128603897 is 274194.693851432 s in the future
tar: ./usr: time stamp 2020-10-09 07:07:44.124603877 is 274194.689302922 s in the future
tar: .: time stamp 2020-10-09 07:07:44.124603877 is 274194.689052297 s in the future
INFO: Successfully deployed /z/build-master/6951/automation-hat/build/tmp-glibc/work/cortexa53-oe-linux/automationhat-doodles/0.1.0-r0/image

Once our code has been uploaded to the target, we can simply run it like any other piece of software:

root@raspberrypi3-64:~# i2ctest 
Hello, world!

Once we're done with our tests we clean up the target like so:

$ devtool undeploy-target automationhat-doodles root@10.0.0.50
NOTE: Starting bitbake server...
INFO: Successfully undeployed automationhat-doodles

NOTE: you don't need to undeploy-target after each test. If you're compiling and testing in a tight loop you can simply devtool deploy-target repeatedly. The first thing a devtool deploy-target does is to check for and devtool undeploy-target any previous deploys of the same package. It keeps track of all the files that were copied over on each deploy, so that they can all be removed either when you undeploy-target explicitly, or when your next deploy-target implicitly removes the previous.

Code Only

If you have a team of developers, it is possible to extract the cross-development tools (and all the other relevant pieces) from your build and bundle everything together into an SDK. With this SDK, all of your developers will be using the same tools, and the same tools that were used to build the image itself. Additionally, the SDK you create will have all the correct header files for all of the components of the software that forms your specific image!

An off-the-shelf SDK you find randomly on the internet will advertise the version of the compiler it's using, but what versions of which other packages are included? Which C library is it using, and what version? Which kernel headers were used? And what if your application sits on top of a bunch of high-level libraries (i.e. a GUI application)? With many off-the-shelf SDKs, getting the flags right to ensure your host system isn't contaminating your cross-build is often left as an exercise for the user! The SDK you generate from your OE image takes care of all those details for you.

To generate an SDK from any of your specific images, build the image with bitbake as usual, but indicate you want the populate_sdk task run specifically:

$ bitbake core-image-full-cmdline -c populate_sdk
Loading cache: 100% |###########################################################################################################################################################################| Time: 0:00:00
Loaded 3214 entries from dependency cache.
Parsing recipes: 100% |#########################################################################################################################################################################| Time: 0:00:00
Parsing of 2044 .bb files complete (2043 cached, 1 parsed). 3215 targets, 123 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "1.47.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "aarch64-poky-linux"
MACHINE              = "raspberrypi3-64"
DISTRO               = "poky"
DISTRO_VERSION       = "3.1+snapshot-20201018"
TUNE_FEATURES        = "aarch64 armv8a crc cortexa53"
TARGET_FPU           = ""
meta                 
meta-poky            
meta-yocto-bsp       = "master:7cad26d585f67fa6bf873b8be361c6335a7db376"
meta-raspberrypi     = "master:6f85611576b7ccbfb6012631f741bd1daeffc9c9"
workspace            = "master:b1a0414a6df77674a860c365825a4500e6cd698b"
meta-oe              
meta-python          = "master:86a7820b7964ff91d7a26ac5c506e83292e347a3"
devtool-additions    = "master:b1a0414a6df77674a860c365825a4500e6cd698b"

Initialising tasks: 100% |######################################################################################################################################################################| Time: 0:00:02
Sstate summary: Wanted 464 Found 0 Missed 464 Current 982 (0% match, 67% complete)
NOTE: Executing Tasks
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-src went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-dbg went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-staticdev went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-dev went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-doc went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218-locale went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: python3-sn3218-1.2.7+gitAUTOINC+d497c6e976-r0 do_packagedata: QA Issue: Package version for package python3-sn3218 went backwards which would break package feeds (from 0:1.2.7+git999-r0 to 0:1.2.7+git0+d497c6e976-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-src went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-dbg went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-staticdev went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-dev went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-doc went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat-locale went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
WARNING: automation-hat-0.2.3+gitAUTOINC+a41084cb4d-r0 do_packagedata: QA Issue: Package version for package automation-hat went backwards which would break package feeds (from 0:0.2.3+git999-r0 to 0:0.2.3+git0+a41084cb4d-r0) [version-going-backwards]
NOTE: Tasks Summary: Attempted 4171 tasks of which 3016 didn't need to be rerun and all succeeded.
NOTE: Writing buildhistory
NOTE: Writing buildhistory took: 15 seconds
NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 67 scratch)
NOTE:   do_deploy_source_date_epoch: 0.0% sstate reuse(0 setscene, 98 scratch)
NOTE:   do_package: 0.0% sstate reuse(0 setscene, 99 scratch)
NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 99 scratch)
NOTE:   do_package_write_ipk: 0.0% sstate reuse(0 setscene, 99 scratch)

Summary: There were 14 WARNING messages shown.

The resulting SDK is found in ${TMPDIR}/deploy/sdk:

$ ls -lh tmp/deploy/sdk/
total 254M
-rw-r--r-- 2 trevor users 9.4K Oct 18 00:33 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.host.manifest
-rwxr-xr-x 2 trevor users 253M Oct 18 00:35 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.sh
-rw-r--r-- 2 trevor users 160K Oct 18 00:32 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.target.manifest
-rw-r--r-- 2 trevor users 339K Oct 18 00:32 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.testdata.json

The SDK is bundled as a self-extracting shell script. Give this script to your developers (in this case it's the poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.sh file) and have them simply execute it. The installer asks the person running it for a target location. In the following example, I've decided to install the SDK into the directory /opt/toolchains/automation-hat/poky/sdk:
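By the way, if you ever need a non-interactive install (in a CI job, say), the installer script also accepts -y (answer yes to all prompts) and -d (target directory), so something along these lines should work, although I haven't shown it here:

$ ./poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.sh -y -d /opt/toolchains/automation-hat/poky/sdk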

$ ./poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.sh 
Poky (Yocto Project Reference Distro) SDK installer version 3.1+snapshot
========================================================================
Enter target directory for SDK (default: /opt/poky/3.1+snapshot): /opt/toolchains/automation-hat/poky
You are about to install the SDK to "/opt/toolchains/automation-hat/poky". Proceed [Y/n]? Y
Extracting SDK..................................................................................done
Setting it up...done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
 $ . /opt/toolchains/automation-hat/poky/environment-setup-cortexa53-poky-linux

As the installer so helpfully reminds us, any time you want to use the SDK in a fresh shell you first have to source its environment setup script so the cross-compiler is used correctly. In the following example I use the SDK I just installed to build my code. I start a new shell for this demonstration, but that isn't shown.
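Behind the scenes the environment setup script simply exports the usual cross-compile variables (CC, CFLAGS, LDFLAGS, CONFIGURE_FLAGS, and so on), all pointed at the cross toolchain and its sysroot; that's why configure picks up aarch64-poky-linux-gcc on its own below. A quick sanity check after sourcing looks roughly like this (output abbreviated):

$ echo ${CC}
aarch64-poky-linux-gcc -mcpu=cortex-a53 -march=armv8-a+crc ... --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux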

NOTE: in the following example my SDK is installed to /opt/toolchains/automation-hat/poky/sdk, my source code is found in /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles, and I will be building my code "out of tree" at /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build:

$ cd /opt/toolchains/automation-hat/poky/sdk/
$ . ./environment-setup-cortexa53-poky-linux
$ export PS1="${PS1}sdk> "
$ sdk> cd /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/

$ sdk> autoreconf -i
configure.ac:15: installing 'cfg/compile'
configure.ac:7: installing 'cfg/install-sh'
configure.ac:7: installing 'cfg/missing'
src/Makefile.am: installing 'cfg/depcomp'

$ sdk> mkdir _build
$ sdk> cd _build
$ sdk> ../configure --host=x86_64
configure: loading site script /opt/toolchains/automation-hat/poky/sdk/site-config-cortexa53-poky-linux
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for x86_64-strip... aarch64-poky-linux-strip
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for x86_64-gcc... aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... yes
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux accepts -g... yes
checking for aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux option to accept ISO C89... none needed
checking whether aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux understands -c and -o together... yes
checking whether make supports the include directive... yes (GNU style)
checking dependency style of aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux... gcc3
checking how to run the C preprocessor... aarch64-poky-linux-gcc -E  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux
checking whether make sets $(MAKE)... (cached) yes
checking whether ln -s works... yes
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking stdio.h usability... yes
checking stdio.h presence... yes
checking for stdio.h... yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking for unistd.h... (cached) yes
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking getopt.h usability... yes
checking getopt.h presence... yes
checking for getopt.h... yes
checking for sys/types.h... (cached) yes
checking for sys/stat.h... (cached) yes
checking sys/ioctl.h usability... yes
checking sys/ioctl.h presence... yes
checking for sys/ioctl.h... yes
checking linux/i2c.h usability... yes
checking linux/i2c.h presence... yes
checking for linux/i2c.h... yes
checking linux/i2c-dev.h usability... yes
checking linux/i2c-dev.h presence... yes
checking for linux/i2c-dev.h... yes
checking for size_t... yes
configure: creating ./config.status
config.status: creating Makefile
config.status: creating cfg/Makefile
config.status: creating src/Makefile
config.status: creating cfg/config.h
config.status: executing depfiles commands

$ sdk> make
Making all in src
make[1]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[2]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux -DHAVE_CONFIG_H -I. -I../../src -I../cfg    -Wall -Werror -Wextra -Wconversion -Wreturn-type -Wstrict-prototypes -O2 -pipe -g -feliminate-unused-debug-types  -MT i2ctest.o -MD -MP -MF .deps/i2ctest.Tpo -c -o i2ctest.o ../../src/i2ctest.c
mv -f .deps/i2ctest.Tpo .deps/i2ctest.Po
aarch64-poky-linux-gcc  -mcpu=cortex-a53 -march=armv8-a+crc -fstack-protector-strong  -D_FORTIFY_SOURCE=2 -Wformat -Wformat-security -Werror=format-security --sysroot=/opt/toolchains/automation-hat/poky/sdk/sysroots/cortexa53-poky-linux -Wall -Werror -Wextra -Wconversion -Wreturn-type -Wstrict-prototypes -O2 -pipe -g -feliminate-unused-debug-types   -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -Wl,-z,relro,-z,now -o i2ctest i2ctest.o  
make[2]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[1]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[1]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build'
make[1]: Nothing to be done for 'all-am'.
make[1]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build'

$ sdk> file src/i2ctest
src/i2ctest: ELF 64-bit LSB shared object, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-, BuildID[sha1]=47845aa25dff34453e7b1cb367c9b59883f8292c, for GNU/Linux 3.14.0, with debug_info, not stripped

Yay! We've demonstrated generating an SDK tuned specifically for our image, giving this SDK to an independent developer, and that developer using the SDK on their own, separate machine to cross-compile their code (which uses the autotools build system). This developer doesn't have OE installed on their system, isn't using OE to generate images, and is focused solely on writing and cross-compiling their code for the target device.

Although this is a very small example, it does scale. If the code being written were, say, a GUI app using (for example) Boost and GTK or Qt, simply add those libraries to your image; when you generate the SDK from that image, the SDK will include everything the developer needs to write their application (i.e. all the appropriate headers and cross-libraries).
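As a rough sketch (assuming the oe-core recipe names boost and gtk+3; substitute whatever your application actually needs), that can be as little as appending to conf/local.conf and regenerating the SDK:

$ echo 'IMAGE_INSTALL_append = " boost gtk+3"' >> conf/local.conf
$ bitbake core-image-full-cmdline -c populate_sdk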

On-target testing, though, is up to the developer. For a small example such as this it wouldn't be too hard to copy the one file over and test it by hand, as sketched below; anything more complex, however, could get messy.
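For instance (just a sketch, assuming the target is reachable over ssh at 10.0.0.50, the same address used with devtool deploy-target later on):

$ scp src/i2ctest root@10.0.0.50:/usr/bin/
$ ssh root@10.0.0.50 i2ctest --help

If only there were some way to create an SDK that also contained devtool …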

Code Only… extended

Turns out there is a way:

$ bitbake core-image-full-cmdline -c populate_sdk_ext
Loading cache: 100% |                                                                                                                                                                          | ETA:  --:--:--
Loaded 0 entries from dependency cache.
Parsing recipes: 100% |#########################################################################################################################################################################| Time: 0:00:14
Parsing of 2042 .bb files complete (0 cached, 2042 parsed). 3213 targets, 123 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies

Build Configuration:
BB_VERSION           = "1.47.0"
BUILD_SYS            = "x86_64-linux"
NATIVELSBSTRING      = "universal"
TARGET_SYS           = "aarch64-poky-linux"
MACHINE              = "raspberrypi3-64"
DISTRO               = "poky"
DISTRO_VERSION       = "3.1+snapshot-20201018"
TUNE_FEATURES        = "aarch64 armv8a crc cortexa53"
TARGET_FPU           = ""
meta-raspberrypi     = "master:6f85611576b7ccbfb6012631f741bd1daeffc9c9"
devtool-additions    = "master:b1a0414a6df77674a860c365825a4500e6cd698b"
meta                 
meta-poky            
meta-yocto-bsp       = "master:7cad26d585f67fa6bf873b8be361c6335a7db376"
meta-oe              
meta-python          = "master:86a7820b7964ff91d7a26ac5c506e83292e347a3"

Initialising tasks: 100% |######################################################################################################################################################################| Time: 0:00:03
Sstate summary: Wanted 2 Found 0 Missed 2 Current 1952 (0% match, 99% complete)
NOTE: Executing Tasks
NOTE: Tasks Summary: Attempted 5092 tasks of which 5079 didn't need to be rerun and all succeeded.
NOTE: Writing buildhistory
NOTE: Writing buildhistory took: 1 seconds

Note, however, that there is a significant size difference between an SDK and an eSDK:

$ ls -lh tmp/deploy/sdk/
total 1.3G
-rw-r--r-- 2 trevor users 9.4K Oct 18 00:33 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.host.manifest
-rwxr-xr-x 2 trevor users 253M Oct 18 00:35 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.sh
-rw-r--r-- 2 trevor users 160K Oct 18 00:32 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.target.manifest
-rw-r--r-- 2 trevor users 339K Oct 18 00:32 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-3.1+snapshot.testdata.json
-rw-r--r-- 1 trevor users 9.6K Oct 18 11:53 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-ext-3.1+snapshot.host.manifest
-rwxr-xr-x 2 trevor users 999M Oct 18 11:53 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-ext-3.1+snapshot.sh
-rw-r--r-- 1 trevor users 6.6K Oct 18 11:53 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-ext-3.1+snapshot.target.manifest
-rw-r--r-- 2 trevor users 335K Oct 18 11:49 poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-ext-3.1+snapshot.testdata.json
-rw-r--r-- 2 trevor users 6.6K Oct 18 02:06 x86_64-buildtools-nativesdk-standalone-3.1+snapshot-20201018.host.manifest
-rwxr-xr-x 2 trevor users  26M Oct 18 02:07 x86_64-buildtools-nativesdk-standalone-3.1+snapshot-20201018.sh
-rw-r--r-- 2 trevor users    0 Oct 18 02:05 x86_64-buildtools-nativesdk-standalone-3.1+snapshot-20201018.target.manifest
-rw-r--r-- 2 trevor users 306K Oct 18 02:05 x86_64-buildtools-nativesdk-standalone-3.1+snapshot-20201018.testdata.json

You could install and use an eSDK exactly the same way you would use the regular SDK:

$ /z/build-master/6951/automation-hat/poky/build/tmp/deploy/sdk/poky-glibc-x86_64-core-image-full-cmdline-cortexa53-raspberrypi3-64-toolchain-ext-3.1+snapshot.sh 
Poky (Yocto Project Reference Distro) Extensible SDK installer version 3.1+snapshot
===================================================================================
Enter target directory for SDK (default: ~/poky_sdk): /opt/toolchains/automation-hat/poky/esdk
You are about to install the SDK to "/opt/toolchains/automation-hat/poky/esdk". Proceed [Y/n]? Y
Extracting SDK................................................done
Setting it up...
Extracting buildtools...
Preparing build system...
Loading cache: 100% |                                                                | ETA:  --:--:--
Parsing recipes: 100% |###############################################################| Time: 0:00:21
Initialising tasks: 100% |############################################################| Time: 0:00:02
Checking sstate mirror object availability: 100% |####################################| Time: 0:00:00
Loading cache: 100% |#################################################################| Time: 0:00:00
Initialising tasks: 100% |############################################################| Time: 0:00:00
done
SDK has been successfully set up and is ready to be used.
Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
 $ . /opt/toolchains/automation-hat/poky/esdk/environment-setup-cortexa53-poky-linux

$ export PS1="${PS1}esdk> "
$ esdk>

$ esdk> cd /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/
$ esdk> autoreconf -i
configure.ac:15: installing 'cfg/compile'
configure.ac:7: installing 'cfg/install-sh'
configure.ac:7: installing 'cfg/missing'
src/Makefile.am: installing 'cfg/depcomp'

$ esdk> mkdir _build
$ esdk> cd _build

$ esdk> ../configure --host=x86_64
configure: loading site script /usr/share/site/x86_64-unknown-linux-gnu
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for x86_64-strip... no
checking for strip... strip
checking for a thread-safe mkdir -p... /usr/bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking for x86_64-gcc... no
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ISO C89... none needed
checking whether gcc understands -c and -o together... yes
checking for style of include used by make... GNU
checking dependency style of gcc... gcc3
checking how to run the C preprocessor... gcc -E
checking whether make sets $(MAKE)... (cached) yes
checking whether ln -s works... yes
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking stdio.h usability... yes
checking stdio.h presence... yes
checking for stdio.h... yes
checking for stdlib.h... (cached) yes
checking for string.h... (cached) yes
checking for unistd.h... (cached) yes
checking fcntl.h usability... yes
checking fcntl.h presence... yes
checking for fcntl.h... yes
checking errno.h usability... yes
checking errno.h presence... yes
checking for errno.h... yes
checking getopt.h usability... yes
checking getopt.h presence... yes
checking for getopt.h... yes
checking for sys/types.h... (cached) yes
checking for sys/stat.h... (cached) yes
checking sys/ioctl.h usability... yes
checking sys/ioctl.h presence... yes
checking for sys/ioctl.h... yes
checking linux/i2c.h usability... yes
checking linux/i2c.h presence... yes
checking for linux/i2c.h... yes
checking linux/i2c-dev.h usability... yes
checking linux/i2c-dev.h presence... yes
checking for linux/i2c-dev.h... yes
checking for size_t... yes
checking that generated files are newer than configure... done
configure: creating ./config.status
config.status: creating Makefile
config.status: creating cfg/Makefile
config.status: creating src/Makefile
config.status: creating cfg/config.h
config.status: executing depfiles commands

$ esdk> make
Making all in src
make[1]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[2]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
gcc -DHAVE_CONFIG_H -I. -I../../src -I../cfg    -Wall -Werror -Wextra -Wconversion -Wreturn-type -Wstrict-prototypes -g -O2 -MT i2ctest.o -MD -MP -MF .deps/i2ctest.Tpo -c -o i2ctest.o ../../src/i2ctest.c
mv -f .deps/i2ctest.Tpo .deps/i2ctest.Po
gcc -Wall -Werror -Wextra -Wconversion -Wreturn-type -Wstrict-prototypes -g -O2   -o i2ctest i2ctest.o  
make[2]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[1]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build/src'
make[1]: Entering directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build'
make[1]: Nothing to be done for 'all-am'.
make[1]: Leaving directory '/opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles/_build'

$ esdk> file src/i2ctest
src/i2ctest: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, BuildID[sha1]=4199ed513fc41043d24d05b43075b73fc9037e87, for GNU/Linux 3.2.0, with debug_info, not stripped

However, note that in order to take full advantage of devtool (e.g. devtool deploy-target), the independent developer needs to create a recipe for the code they're writing:

$ esdk> devtool add automationhat-doodles /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles
NOTE: Starting bitbake server...
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
NOTE: Reconnecting to bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
NOTE: Retrying server connection (#1)...
NOTE: Starting bitbake server...
INFO: Recipe /opt/toolchains/automation-hat/poky/esdk/workspace/recipes/automationhat-doodles/automationhat-doodles_0.1.0.bb has been automatically created; further editing may be required to make it fully functional

$ esdk> devtool build automationhat-doodles
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
Loading cache: 100% |#################################################################| Time: 0:00:00
Loaded 3213 entries from dependency cache.
Parsing recipes: 100% |###############################################################| Time: 0:00:00
Parsing of 2043 .bb files complete (2042 cached, 1 parsed). 3214 targets, 123 skipped, 0 masked, 0 errors.
Loading cache: 100% |#################################################################| Time: 0:00:01
Loaded 3213 entries from dependency cache.
Parsing recipes: 100% |###############################################################| Time: 0:00:00
Parsing of 2043 .bb files complete (2042 cached, 1 parsed). 3214 targets, 123 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Initialising tasks: 100% |############################################################| Time: 0:00:00
Sstate summary: Wanted 8 Found 1 Missed 7 Current 108 (12% match, 93% complete)
NOTE: Executing Tasks
NOTE: automationhat-doodles: compiling from external source tree /opt/oe/configs/z/build-master/6951/automation-hat/devel/automationhat-doodles
NOTE: Tasks Summary: Attempted 495 tasks of which 487 didn't need to be rerun and all succeeded.
NOTE: Build completion summary:
NOTE:   do_populate_sysroot: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_deploy_source_date_epoch: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_package: 0.0% sstate reuse(0 setscene, 1 scratch)
NOTE:   do_packagedata: 0.0% sstate reuse(0 setscene, 1 scratch)

$ esdk> devtool deploy-target automationhat-doodles root@10.0.0.50
NOTE: Starting bitbake server...
NOTE: Reconnecting to bitbake server...
NOTE: Retrying server connection (#1)...
Loading cache: 100% |#################################################################| Time: 0:00:00
Loaded 3213 entries from dependency cache.
Parsing recipes: 100% |###############################################################| Time: 0:00:00
Parsing of 2043 .bb files complete (2042 cached, 1 parsed). 3214 targets, 123 skipped, 0 masked, 0 errors.
tar: ./usr/bin/i2ctest: time stamp 2020-10-18 18:16:41 is 82401236.305274304 s in the future
tar: ./usr/bin: time stamp 2020-10-18 18:16:41 is 82401236.304915762 s in the future
tar: ./usr: time stamp 2020-10-18 18:16:41 is 82401236.304415189 s in the future
tar: .: time stamp 2020-10-18 18:16:41 is 82401236.304187064 s in the future
INFO: Successfully deployed /opt/toolchains/automation-hat/poky/esdk/tmp/work/cortexa53-poky-linux/automationhat-doodles/0.1.0-r0/image


Turning On An LED

We now have the mechanics in place for writing, cross-compiling, and testing code on the device as it's being written, all using the same bleeding-edge, state-of-the-art tools that are used to build the image.

Working through the code to get an LED lit, we end up with:

/*
 * Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>
 */

#include "config.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/ioctl.h>
#include <i2c/smbus.h>
#include <linux/i2c.h>
#include <linux/i2c-dev.h>
#include <getopt.h>

static char *i2cDevice_pG = "/dev/i2c-1";

static void print_funcs(unsigned long);
static void process_cmdline_args(int,char**);
static void usage(const char*);

int
main (int argc, char *argv[])
{
        int ret;
        int i2cFd;
        unsigned long funcs;

        /* process cmdline - get i2c device */
        process_cmdline_args(argc, argv);
        if (optind == argc)
                ;
        else if ((optind+1) == argc) {
                i2cDevice_pG = strdup(argv[optind]);
                if (i2cDevice_pG == NULL) {
                        perror("strdup()");
                        return 1;
                }
        }
        else {
                fprintf(stderr, "bad cmdline args\n");
                usage(argv[0]);
                return 1;
        }
        printf("device: %s\n", i2cDevice_pG);

        /* open i2c device */
        i2cFd = open(i2cDevice_pG, O_RDWR);
        if (i2cFd < 0) {
                perror("open()");
                return 1;
        }

        /* figure out and show i2c capabilities */
        ret = ioctl(i2cFd, I2C_FUNCS, &funcs);
        if (ret < 0) {
                perror("ioctl()");
                close(i2cFd);
                return 1;
        }
        printf("funcs: 0x%08lx\n", funcs);
        print_funcs(funcs);

        /* interact with device */
        ret = ioctl(i2cFd, I2C_SLAVE, 0x54);
        if (ret < 0) {
                perror("I2C_SLAVE");
                return 1;
        }
        i2c_smbus_write_byte_data(i2cFd, 0x17, 0xff); // reset
        i2c_smbus_write_byte_data(i2cFd, 0x00, 0x01); // enable
        i2c_smbus_write_byte_data(i2cFd, 0x12, 0x10); // some pwm on channel 0x12
        i2c_smbus_write_byte_data(i2cFd, 0x15, 0x20); // turn on channel 0x12
        i2c_smbus_write_byte_data(i2cFd, 0x16, 0xff); // update

        close(i2cFd);
        return 0;
}

static struct {
        unsigned long flag;
        const char *str_p;
} I2CFlags_G[] = {
        {I2C_FUNC_I2C, "I2C_FUNC_I2C"},
        {I2C_FUNC_10BIT_ADDR, "I2C_FUNC_10BIT_ADDR"},
        {I2C_FUNC_PROTOCOL_MANGLING, "I2C_FUNC_PROTOCOL_MANGLING"},
        {I2C_FUNC_SMBUS_PEC, "I2C_FUNC_SMBUS_PEC"},
        {I2C_FUNC_NOSTART, "I2C_FUNC_NOSTART"},
        {I2C_FUNC_SLAVE, "I2C_FUNC_SLAVE"},
        {I2C_FUNC_SMBUS_BLOCK_PROC_CALL, "I2C_FUNC_SMBUS_BLOCK_PROC_CALL"},
        {I2C_FUNC_SMBUS_QUICK, "I2C_FUNC_SMBUS_QUICK"},
        {I2C_FUNC_SMBUS_READ_BYTE, "I2C_FUNC_SMBUS_READ_BYTE"},
        {I2C_FUNC_SMBUS_WRITE_BYTE, "I2C_FUNC_SMBUS_WRITE_BYTE"},
        {I2C_FUNC_SMBUS_READ_BYTE_DATA, "I2C_FUNC_SMBUS_READ_BYTE_DATA"},
        {I2C_FUNC_SMBUS_WRITE_BYTE_DATA, "I2C_FUNC_SMBUS_WRITE_BYTE_DATA"},
        {I2C_FUNC_SMBUS_READ_WORD_DATA, "I2C_FUNC_SMBUS_READ_WORD_DATA"},
        {I2C_FUNC_SMBUS_WRITE_WORD_DATA, "I2C_FUNC_SMBUS_WRITE_WORD_DATA"},
        {I2C_FUNC_SMBUS_PROC_CALL, "I2C_FUNC_SMBUS_PROC_CALL"},
        {I2C_FUNC_SMBUS_READ_BLOCK_DATA, "I2C_FUNC_SMBUS_READ_BLOCK_DATA"},
        {I2C_FUNC_SMBUS_WRITE_BLOCK_DATA, "I2C_FUNC_SMBUS_WRITE_BLOCK_DATA"},
        {I2C_FUNC_SMBUS_READ_I2C_BLOCK, "I2C_FUNC_SMBUS_READ_I2C_BLOCK"},
        {I2C_FUNC_SMBUS_WRITE_I2C_BLOCK, "I2C_FUNC_SMBUS_WRITE_I2C_BLOCK"},
        {I2C_FUNC_SMBUS_HOST_NOTIFY, "I2C_FUNC_SMBUS_HOST_NOTIFY"},
};
#define I2CFLAGS_SZ (sizeof(I2CFlags_G) / sizeof(I2CFlags_G[0]))

static void
print_funcs (unsigned long funcs)
{
        size_t i;

        for (i=0; i<I2CFLAGS_SZ; ++i)
                if (funcs & I2CFlags_G[i].flag)
                        printf("- %s\n", I2CFlags_G[i].str_p);
}

static void
usage (const char *pgm_p)
{
        fprintf(stderr, "%s\n", PACKAGE_STRING);

        if (pgm_p != NULL)
                fprintf(stderr, "\nusage: %s [opts] [device]\n", pgm_p);
        fprintf(stderr, "  where [device]:\n");
        fprintf(stderr, "    optionally provide the i2c device node to use\n");
        fprintf(stderr, "    (default: %s)\n", i2cDevice_pG);
        fprintf(stderr, "  where [opts]:\n");
        fprintf(stderr, "    -h|--help    print this help and exit successfully\n");
}

static void
process_cmdline_args (int argc, char *argv[])
{
        int c;
        struct option longOpts[] = {
                {"help", no_argument, NULL, 'h'},
                {NULL, 0, NULL, 0},
        };

        while (1) {
                c = getopt_long(argc, argv, "h", longOpts, NULL);
                if (c == -1)
                        break;

                switch (c) {
                        case 'h':
                                usage(argv[0]);
                                exit(0);
                                break;

                        default:
                                fprintf(stderr, "unknown getopt: %d (0x%02x)\n", c, c);
                                break;
                }
        }
}

This C code uses the SMBus helpers from libi2c (part of i2c-tools), so the following updates are required.

configure.ac:

dnl Copyright (C) 2020  Trevor Woerner <twoerner@gmail.com>

AC_PREREQ(2.57)
AC_INIT([automationhat-doodles], 0.1.0, twoerner@gmail.com, automationhat-doodles)
AC_CONFIG_SRCDIR(src/i2ctest.c)
AC_CONFIG_AUX_DIR(cfg)
AM_INIT_AUTOMAKE([foreign no-dist-gzip dist-bzip2 1.9])
AM_CONFIG_HEADER(cfg/config.h)

SUBDIRS="src"

dnl **********************************
dnl checks for programs
dnl **********************************
AC_PROG_CC
AC_PROG_CPP
AC_PROG_MAKE_SET
AC_PROG_INSTALL
AC_PROG_LN_S

dnl **********************************
dnl checks for libraries
dnl **********************************
AC_CHECK_LIB(i2c, i2c_smbus_write_block_data, ,AC_MSG_ERROR([Can't find library i2c]), )

dnl **********************************
dnl checks for header files
dnl **********************************
AC_HEADER_STDC
AC_CHECK_HEADERS(stdio.h stdlib.h string.h unistd.h fcntl.h errno.h getopt.h)
AC_CHECK_HEADERS(sys/types.h sys/stat.h sys/ioctl.h linux/i2c.h linux/i2c-dev.h)
AC_CHECK_HEADERS(i2c/smbus.h)

dnl **********************************
dnl checks for typedefs, structs, and
dnl compiler characteristics
dnl **********************************
AC_TYPE_SIZE_T

dnl **********************************
dnl other stuff
dnl **********************************
AC_SUBST(SUBDIRS)

dnl **********************************
dnl output
dnl **********************************
AC_OUTPUT(Makefile
cfg/Makefile
src/Makefile)

and modify the automationhat-doodles recipe:

# Recipe created by recipetool
# This is the basis of a recipe and may need further editing in order to be fully functional.
# (Feel free to remove these comments when editing.)

# Unable to find any files that looked like license statements. Check the accompanying
# documentation and source headers and set LICENSE and LIC_FILES_CHKSUM accordingly.
#
# NOTE: LICENSE is being set to "CLOSED" to allow you to at least start building - if
# this is not accurate with respect to the licensing of the software being built (it
# will not be in most cases) you must specify the correct value before using this
# recipe for anything other than initial testing/development!
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""

# No information for SRC_URI yet (only an external source tree was specified)
SRC_URI = ""

# NOTE: if this software is not capable of being built in a separate build directory
# from the source, you should replace autotools with autotools-brokensep in the
# inherit line
inherit autotools

# Specify any options you want to pass to the configure script using EXTRA_OECONF:
EXTRA_OECONF = ""

DEPENDS += "i2c-tools"
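
With configure.ac and the recipe updated, the loop is the same as before: rebuild and redeploy with devtool (the same two commands shown earlier):

$ esdk> devtool build automationhat-doodles
$ esdk> devtool deploy-target automationhat-doodles root@10.0.0.50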

Running the code on the target:

root@raspberrypi3-64:~# i2ctest 
device: /dev/i2c-1
funcs: 0x0eff0009
- I2C_FUNC_I2C
- I2C_FUNC_SMBUS_PEC
- I2C_FUNC_SMBUS_QUICK
- I2C_FUNC_SMBUS_READ_BYTE
- I2C_FUNC_SMBUS_WRITE_BYTE
- I2C_FUNC_SMBUS_READ_BYTE_DATA
- I2C_FUNC_SMBUS_WRITE_BYTE_DATA
- I2C_FUNC_SMBUS_READ_WORD_DATA
- I2C_FUNC_SMBUS_WRITE_WORD_DATA
- I2C_FUNC_SMBUS_PROC_CALL
- I2C_FUNC_SMBUS_WRITE_BLOCK_DATA
- I2C_FUNC_SMBUS_READ_I2C_BLOCK
- I2C_FUNC_SMBUS_WRITE_I2C_BLOCK

and looking at the target: