## samedi 6 juillet 2019

### New C++11 State Machine library released

I just released version 0.9.3 of Spaghetti, a Finite State Machine library. It comes with a full manual that demonstrates all the use cases.

It is a header-only, single-file library that should build with any C++11-compliant compiler.

Compared to its main competitors (Boost::statechart and Boost::msm), I have at present had no time to build an exhaustive comparison, but my feeling is that Spaghetti is quite a bit easier to set up. On the other hand, it "might" not be as powerful, but again, that needs to be checked.

A lot of work still needs to be done (doc cleaning, testing on different platforms and compilers, adding extensive test coverage, ...) but it is usable right away, out of the box.

If you feel like supporting that kind of work, you may check it out and give some feedback, either here or as a Github issue if you spot a problem.

## mercredi 18 avril 2018

### C++: getting min/max values of boost::graph attributes

Just spent "some" time on a stupid issue with this problem, so I thought I might as well post it here, in case it can be useful to someone.

Say you have a boost::graph using so-called "bundled properties" (aka "inner properties") and you want to find the minimum and maximum values of one of the attributes. The standard library has this nice minmax_element() algorithm.

But... how can I use it on a graph's inner properties?

Say you have this kind of vertex, used in a graph definition (here undirected, but this should also work for a directed graph):

    struct vertex_properties
    {
        int val;
    };

    typedef boost::adjacency_list<
        boost::vecS,
        boost::vecS,
        boost::undirectedS,
        vertex_properties
    > graph_t;

    typedef boost::graph_traits<graph_t>::vertex_descriptor vertex_t;


As an example, consider this program, creating a 3-vertex graph (and, yes, no edges here):

    graph_t g;

    // Create the vertices
    vertex_t v1 = boost::add_vertex( g );
    vertex_t v2 = boost::add_vertex( g );
    vertex_t v3 = boost::add_vertex( g );

    // Set vertex properties
    g[v1].val = 1;
    g[v2].val = 2;
    g[v3].val = 3;


To find the min/max value of the attributes, just call the algorithm with the right lambda function:

    auto pit = boost::vertices( g );
    auto result = std::minmax_element(
        pit.first,
        pit.second,
        [&]                              // lambda
        ( vertex_t v1, vertex_t v2 )
        {
            return g[v1].val < g[v2].val;
        }
    );


This returns a pair of iterators pointing to the vertex descriptors of the minimum and maximum elements. So, to get the results:

    std::cout << "min=" << g[*result.first].val
              << " max=" << g[*result.second].val << '\n';


## jeudi 4 décembre 2014

### C++: erasing elements of std::vector using a lambda

Removing elements from a vector is a task one encounters pretty often, and it isn't as easy as one might think.

The simplest case is when the index of the unwanted element is known. The std::vector class provides a first form of the erase() member function, which takes a (const) iterator as argument.

Thus, if I want to remove, say, the 10th element, it's as easy as:

    std::vector<whatever> myVec;
    //... fill with more than 10 elements
    myVec.erase( myVec.begin() + 9 );


And if you want to remove the 3 elements at positions 10 to 12, use the second form of this function, which takes two arguments:

    myVec.erase( myVec.begin() + 9, myVec.begin() + 12 );


(Yes, the second argument points to the first element you want to keep.)

But what happens when you want to remove elements based on their value? Say, remove all elements that have the value foo (assuming foo is of type whatever).

This is a task for std::remove(). Despite its name, it does not actually remove anything: it shifts the elements to keep toward the front, so that the ones to be erased end up at the end, and it returns an iterator pointing to the first element to be erased. The next step is to feed that iterator to std::vector::erase().

The code will use the second form of erase():

    myVec.erase(
        std::remove(        // returns iterator on
            myVec.begin(),  // first element to
            myVec.end(),    // be removed
            foo
        ),
        myVec.end()
    );


(This is known as the Erase–remove idiom.)

Next, what if you want to remove elements based on some property they have? Consider, for example, a vector of vectors:

    std::vector<std::vector<whatever>> myVec2;


And now the task is to remove the inner vectors that hold fewer than 2 elements. Okay, so we need to check every element, and decide whether to remove it or not.

This is a task for the second form of that same algorithm, std::remove_if(). Instead of a value, it takes a predicate as its third argument, and will "remove" (move, actually) the considered element if that predicate returns true. A predicate is usually implemented as a functor: an object of some class that defines operator() and returns a bool, based on the given value.

At first, this seems like a harsh constraint, as no one wants to declare a class for such a trivial task. But before C++11 came out, that was what was required (unless, maybe, using some Boost library). Otherwise, we needed to iterate through the vector, test each element, copy it or not, and swap:

    std::vector<std::vector<whatever>> newv;
    newv.reserve( myVec2.size() ); // to avoid resizing when using push_back
    for( size_t i=0; i < myVec2.size(); i++ )
        if( myVec2[i].size() >= MinSize ) // keep only the big enough ones
            newv.push_back( myVec2[i] );
    std::swap( myVec2, newv );


This is where C++11 and lambdas come in. A lambda can be seen as a sort of "anonymous inline function" that can capture variables in scope. Here, as the algorithm iterates over all the elements, each of them will be a std::vector.

A lambda is made of three parts:

• [how variables are captured],
• (the function's arguments),
• {the body of the function}.

The complete code:

    std::size_t MinSize = ... (some value);
    myVec2.erase(
        std::remove_if(
            myVec2.begin(),
            myVec2.end(),
            [&]( const std::vector<whatever>& vw ) // lambda
            { return vw.size() < MinSize; }
        ),
        myVec2.end()
    );


## mercredi 14 mai 2014

### Subversion: colordiff to html files

(Mostly a reminder:)

Say you have some software project, hosted on some Subversion repository. You happily edit your files, and before committing you want to have a quick look at the edits you have made.

No problem, as simple as:
> svn diff

But this outputs lots of text, not easily readable. Ok, let's go with colordiff:
> svn diff | colordiff

And then you get drowned under floods of nice and flashy colors, and you have to painfully scroll your terminal. Well, what else? Simply redirecting to a file won't keep the colors.

This is where another magic tool shows up: aha. Yeah, that's its name. It's an "ANSI to HTML" converter. Install it with sudo apt-get install aha, and then go:

    svn diff | colordiff | aha > mydiff.html

For convenience, you can now add a new target to your makefile:

    diff:
        svn diff | colordiff | aha > mydiff.html
        xdg-open mydiff.html

Thus, entering make diff at the shell will show you the edits you have made up to now.
xdg-open is the utility that opens a file with the default application associated with its type. On Windows, just use the file name alone, as this OS has a mechanism to open a file with its default application when given only the file name.

Edit 20141224: a small improvement: to keep track of all these generated diff files, you can append the date/time to the filename, so that each new one doesn't overwrite the previous one. This can be done easily with bash (not that hard on Windows either, but no time at present to figure that out):

    ifndef COMSPEC
    NOW=$(shell date +%Y%m%d_%H%M)
    BROWSER=xdg-open
    endif

    diff:
        svn diff | colordiff | aha > mydiff_$(NOW).html
        $(BROWSER) mydiff_$(NOW).html


Notice the conditional, so that this makefile should also work out of the box on Windows (except for the time/date part, but if you send it to me, I'll publish it ;-) )

## vendredi 11 avril 2014

### State machine diagrams with Graphviz

Once in a while, I need to draw a simple state machine diagram. These are a quick way to show in a visual way how a system works.

While these can be drawn with general drawing tools, or even with more dedicated tools, I usually prefer the textual way. Describing a drawing through some description language with an acceptable learning curve and letting some application do the drawing is IMO a better approach: editing the graph afterwards is just a matter of editing the source file.

Okay, so what tool? Some are... funny (and I mean it!), but not really usable for anything more than a small graph, or anything that needs to be edited many times.

No, this post is about Graphviz and its associated set of tools. It truly has some oddities, but it's the best around.
Among its oddities: the default size units are distance units. For an image, yes; no pixels here. Wait, that's not all! The default unit is inches. Inches!
I suppose this is for historical reasons, and it seems there is no option to change it. After thinking about it, once you go with distance units, then, as metrication of image density is still not in use, you might as well stay with inches.

Another thing: don't expect to be able to define the position of nodes and edges precisely. These are set by a placement algorithm, and adjusting it is not easy, although some commands can help.

Nevertheless, say you want to describe some simple state machine: just a light bulb connected to a switch.

#### Elementary graph

The associated state machine can be described by the following text file:

    digraph g {
        rankdir="LR";
        edge [splines="curved"];
        ON -> OFF;
        OFF -> ON;
    }


Assuming you have Graphviz correctly installed, the following shell command will generate the image:

    dot -Tpng:cairo myfile.dot > myfile.png

#### Transitions

Ok, now let's add the transitions (the switch action). Let's call it "sw": if sw=1, the light bulb will be on; if sw=0, it will be off.

Ah. Here a problem appears. In the field of electrical engineering, transitions between states are frequently based on some boolean variable. The notation for the complement operation seems to be country-dependent. In France, it is usually expressed by a bar over the expression ("sw barre"); in LaTeX math syntax, it would be $\bar{sw}$.

So, how can we manage this issue?

First (and easiest), forget about the "bar" thing, and just go for plain text:

    digraph g {
        rankdir="LR";
        edge [splines="curved"];
        ON -> OFF [label="sw=0"];
        OFF -> ON [label="sw=1"];
    }


This is not very satisfying, it clutters the diagram.

Second solution: use Unicode. Graphviz supports it natively, and Unicode provides a combining character (U+0305, "combining overline") that is supposed to handle this situation. So just enter:

    digraph g {
        rankdir="LR";
        edge [splines="curved"];
        OFF -> ON [label="sw"];
        ON -> OFF [label="s̅w̅"];
    }
(Sorry, it seems the current hosting of this blog does not display this correctly, which is why the bar isn't exactly over the two letters.)

In GTK+-based apps (Gedit, for instance), Unicode characters can be entered by hitting CTRL+SHIFT+U, then typing the desired character code (here 0305). You need to do this manually after each letter of the label.

Unfortunately, the final rendering depends on the font used by the layout engine. It seems the default png output of Graphviz does not use the Cairo library; or if it does, it does not provide any control over the font used, so the final result looks quite ugly.

#### Direct insertion into Latex source file

If the graph image is intended to end up in a LaTeX source file, then check out the graphviz LaTeX package. It allows you to insert the graph commands directly into the main LaTeX document. Unfortunately, this does not mean you suddenly get all the associated formatting power: this package only calls the 'dot' command itself; the only benefit is that you don't have to run it yourself and then import the image file into the LaTeX document. So for the issue detailed above, it is of no help.

Another tool, dot2tex, has been specifically designed to have it all: direct inclusion of dot code inside the LaTeX file, and LaTeX formatting for labels and edges. Basically, it converts the dot file into PSTricks and/or PGF/TikZ format using some Python magic, then processes it as regular LaTeX code.
Unfortunately, the installation on my machine seems to suffer from some obscure Python bug, so I can't tell more at present! I hope to be able to try it soon.

Edit 2015/05: for more precise positioning of your nodes and vertices, and better rendering, you're better off with a LaTeX-based solution. TikZ seems to be the easiest; see for example this sample.

## mercredi 3 avril 2013

### Writing portable makefiles

Edit 2016-10-21: I notice this post comes up on the first page of Google results for "portable makefile", so I thought I might add some context. This post was written when I was struggling with this kind of stuff, and should be taken as a "proof of concept" post. For me, this is definitely over, as I have (almost) completely quit using Windows, being for several years now a happy GNU/Linux user. Readers must be aware that although some tricks are given here, this is certainly not the best approach for setting up a portable build system. If you are in that situation, the best way to go is probably CMake, as it is today the de facto standard tool.

This note is about GNU Make makefile syntax, and how to write makefiles that stay OS-independent as much as possible.

### 1- Introduction: computers and file systems

When it comes to computers and their associated filesystems, there are two worlds on earth.
One considers that a path to a file is spelled this way:
path/to/the/file
and the other considers that the correct syntax is:
path\to\the\file

This may sound silly (and it is), but it can lead to some complications. Not only because these two worlds use different symbols, but mostly because each also gives a special meaning to the other's symbol.

To put it clearly: on a Linux machine (which uses the '/' path separator), the backslash has a special meaning in some situations (shell scripts, makefiles, ...), meaning "I have no more room on this line, let's continue the current command on the next line" (and that trick is very valuable for readability). And it follows what is a convention in C and C++ source files.

In the other world (MS Windows), the default shell (cmd.exe) interprets the slash character as the option switch. For example: del /F path\to\file.txt

And, no, at least in XP, the Windows shell DOES NOT accept both path/to/file and path\to\file, as is frequently claimed in many places. Try something like del path/to/file to check. Maybe this has changed with Windows 7, 8, 11 or 42, I'm not really interested, but with Windows XP's shell, it does-not-work. The cause of that misunderstanding is probably that system calls (that is, the functions you call from inside a program) DO accept both forward slashes and backslashes in paths.

Anyway, the two shells (Linux/bash and Windows/cmd.exe) are sooo different that only insane people would consider trying to write a "compatible" script that runs equally on both systems (1).

However, there is one situation where the same command semantics must execute equally in these two different environments: makefiles.

Basically, a makefile is a set of commands that are executed by the shell.
Say for one target, we want to erase some files, even "read-only" ones. In one environment, this must be done with the following command:
rm -f path/to/file
while on the other, it will be:
del /F path\to\file

The question is: how can I write that command in my makefile so that it expands into these two different syntaxes at runtime? And more generally, how do I write portable makefiles?

### 2 - Handling command names

First, let's manage the different command names (and their options). That's the easiest part. Just define a variable holding the name of the command, with different values depending on the platform. The easiest way to detect the platform is to check for the existence of a Windows-only environment variable, say ComSpec (some sources rely on SystemRoot, which can be used too).

    ifdef ComSpec
        RM=del /F /Q
    else
        RM=rm -f
    endif

This will be in the upper part of the makefile, before any recipes. Then, in the recipes, just use $(RM) in place of the command.

### 3 - Handling paths

Secondly, you need to handle the path separator. Two situations need to be handled:
- paths to explicit files (the example above),
- automatic paths, built using make's wildcards and substitution functions.

Remember that you only need to care about this for commands executed by the shell. Whatever the platform, GNU Make, gcc and other "regular" development tools handle paths with forward slashes just fine. To make it clear, say we have these lines, following the classical target-prerequisite-command scheme:

    mytarget: path/to/file
        $(SOMECOMMAND) path/to/file

The first line will do fine, but the second line will generate an error on Windows if SOMECOMMAND expands to a built-in shell command.

### 3.1 - Processing explicit paths

First, for explicit paths, we can proceed with the same trick: define a variable holding the required separator ('\' or '/'), then use this variable in the commands.

    ifdef ComSpec
        PATHSEP2=\
    else
        PATHSEP2=/
    endif

Ha. Unfortunately, this does not work, because the backslash is interpreted by make as a line-continuation request ("keep on same line!"), not as a character. Ok, so we need to escape that backslash in order to fool make:

    ifdef ComSpec
        PATHSEP2=\\
    else
        PATHSEP2=/
    endif


Funnily, this only works halfway: the definition is accepted, but the variable holds the two backslashes! Fortunately, the Windows shell accepts paths that look like path\\to\\file (don't ask me why...).

Almost done. This still does not work: the above definition adds a trailing space at the end of the variable, i.e. its usage in path$(PATHSEP2)file will expand into path/ file (or path\ file on Windows), and that will not be ok, for sure! So finally, we need to add the following definition and function call, which removes that ugly trailing space:

    PATHSEP=$(strip $(PATHSEP2))

That way, an explicit erasing command in a makefile (for example) can be portably written as:

    $(RM) path$(PATHSEP)to$(PATHSEP)file

Ok, now how about paths that are built automatically?

### 3.2 - Processing generated paths

For example, you usually define a variable holding all the object files, built from the list of all the source files. If these are in a folder named src, and the object files go into a folder named obj, then you can define the list of source files with:
    SRC_FILES=$(wildcard src/*.cpp)

and the list of corresponding object files with (2):

    OBJ_FILES=$(patsubst src/%.cpp,obj/%.o,$(SRC_FILES))

But here comes the problem: trying to erase all the object files with

    $(RM) $(OBJ_FILES)

will expand on Windows into something like:

    del /F obj/file1.o obj/file2.o obj/file3.o

and that will throw an error, because the shell will consider what is behind the slash as some option. Two solutions can be used:

• either use PATHSEP in the "patsubst" function call above:

    OBJ_FILES=$(patsubst src/%.cpp,obj$(PATHSEP)%.o,$(SRC_FILES))

• or use the "subst" function, which replaces some pattern in a string with another:

    OBJ_FILES_CORRECT=$(subst /,$(PATHSEP),$(OBJ_FILES))

But this latter solution implies creating another variable, which can be error-prone in dense makefiles.
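Pulling sections 2 and 3 together, a minimal portable clean target might look like this. This is only a sketch (the src/obj folder layout is the hypothetical one used above, and it assumes GNU Make on both platforms):

```makefile
# Platform detection via the Windows-only ComSpec variable
ifdef ComSpec
    RM=del /F /Q
    PATHSEP2=\\
else
    RM=rm -f
    PATHSEP2=/
endif
# strip removes the trailing space picked up by the definitions above
PATHSEP=$(strip $(PATHSEP2))

SRC_FILES=$(wildcard src/*.cpp)
OBJ_FILES=$(patsubst src/%.cpp,obj$(PATHSEP)%.o,$(SRC_FILES))

clean:
	$(RM) $(OBJ_FILES)
```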

### 4 - Command separator

Another problem that needs handling is the command separator. In many make tutorials, you see commands written this way:

    cd MyFolder; SomeCommand Some Arguments

which means: "go down into folder MyFolder, and execute SomeCommand with Some Arguments".
This is an invalid syntax on Windows, where the command separator is &.
So, again, the variable trick:

    ifndef ComSpec
        CMDSEP=;
    else
        CMDSEP=&
    endif

and the above command will be written:

    cd MyFolder $(CMDSEP) SomeCommand Some Arguments

### 5 - Logging the commands

One last trick: prefixing a recipe command with @ prevents make from echoing it. To be able to switch echoing on and off, put that @ in a variable:

    obj/%.o : src/%.cpp
        $(L)$(CXX) -o $@ -c $< $(CFLAGS)

And add the following lines in the first part of the makefile:

    ifeq "$(LOG)" ""
        LOG=no
    endif
    ifeq "$(LOG)" "no"
        L=@
    endif

This way, launching make with no special option will run silently, and in case of trouble, just tell your mate to run:

    make <target-name> LOG=yes

and all the commands that are launched will (magically) appear on screen (3).

### 6 - Conclusion

These are some hints that can help you design more portable makefiles. I'll finish with one remark: for "big" projects, maybe you should rely on "makefile generators", that is, programs that do all these low-level tasks (and much more). The best known are CMake and the GNU set of tools, but others can be used. Finally, one quote from "Managing Projects with GNU Make": "... there is no such thing as perfect portability, so it is our job to balance effort versus portability."

(1) Unless you use a non-native modern scripting language such as Python, of course.
(2) In real life, the folder names would also be stored in variables, i.e.:
    OBJ_FILES=$(patsubst $(SRC_DIR)/%.cpp,$(OBJ_DIR)/%.o,$(SRC_FILES))
(3) Or in a text file if you redirect it, and that is usually a good idea when output starts to get large.

## samedi 30 mars 2013

### Octave/Ubuntu: problems installing additional packages

This post started out as a question on SO, but as I finally found the answer, I thought it might interest other people.

Consider the following situation: you need to do some function data-fitting, you don't {want to use / have access to} Matlab, and you think Octave might be an alternative.

First problem: the version of Octave on your current Ubuntu 12.04 is slightly outdated, and sudo apt-get install doesn't seem to offer a more recent version.

Then, Octave actually doesn't ship with data-fitting material. It is provided as additional packages (see here), and according to this page, it is the optim package that you need.

    pkg install optim-1.2.2.tar.gz

tells you that additional packages are required (miscellaneous, struct and general). And at one point you might hit the following error (or something close), complaining about something called mkoctfile:

    make: /usr/bin/mkoctfile: Command not found
    make: *** [__exit__.oct] Error 127
    'make' returned the following error:
    make: Entering directory `/tmp/oct-P11IKL/general/src'
    /usr/bin/mkoctfile __exit__.cc
    make: Leaving directory `/tmp/oct-P11IKL/general/src'
    error: called from `pkg>configure_make' in file /usr/share/octave/3.6.2/m/pkg/pkg.m near line 1391, column 9
    error: called from:
    error:   /usr/share/octave/3.6.2/m/pkg/pkg.m at line 834, column 5
    error:   /usr/share/octave/3.6.2/m/pkg/pkg.m at line 383, column 9

If you search for this, you might find this question, where the (unaccepted) answer says that you should sudo apt-get install octave-signal.

Don't! Depending on your ppa settings, this might revert your Octave installation to 3.2, which is not desirable.

The solution is to install the Octave development packages with:

    sudo apt-get install octave-pkg-dev

Finally, it seems that the installation of some (?) packages writes stuff into /usr/share/octave/, which can't be done by a regular user (and 'sudo' can't be run from Octave's shell).
So the easiest is to switch to root before starting Octave (with su), then install the packages, then quit Octave.