dimanche 16 janvier 2022

Importing git repos from exported list

Another post on that topic (see previous post), useful when you want to clone all your repos onto a brand-new disk but only have access to the GitHub/GitLab account. (Yes, disk failures happen, believe me...)

Step 1:

For Github, you can export the list of your repos with the API:

curl https://api.github.com/users/USERNAME/repos
This returns a full JSON file, but you only need the repo URLs. To get the SSH form, you can do this to get a raw file holding the list of URLs:
curl https://api.github.com/users/USERNAME/repos | grep ssh_url > github_repos
In that file, you will have one line per repo, like this:
    "ssh_url": "git@github.com:USERNAME/adepopro.git",

Step 2

Now, assuming you have stored your SSH key on your GitHub account, you can clone all repos at once with the following script. It does some text processing (removing quotes, spaces, ...), then a git clone.
if [ "$1" = "" ]
    echo "filename missing, exit"
    exit 1

while IFS=":"; read f1 f2 f3
# remove quotes
    f2b=$(sed s/\"//g <<< $f2)
    f3b=$(sed s/\"//g <<< $f3)
# remove comma at the end    
# remove space at the end    
    f2c=${f2b/ /}
# build path
# clone
    echo "cloning $p"
    git clone "$p"
done < $1
Final note: haven't tried with Gitlab, but I assume it'll be more or less the same.
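For what it's worth, here is an untested sketch of what the GitLab side might look like. Both the endpoint and the field name are assumptions to verify: with the v4 API the JSON field is, to my knowledge, named ssh_url_to_repo, and the output is compact JSON (no pretty-printing), so the text processing differs slightly.

```shell
# Untested assumption: GitLab v4 endpoint for a user's public projects
#   curl "https://gitlab.com/api/v4/users/USERNAME/projects" | grep -o '"ssh_url_to_repo":"[^"]*"' > gitlab_repos

# The extraction step, demonstrated on a sample line of compact JSON:
echo '"ssh_url_to_repo":"git@gitlab.com:USERNAME/myrepo.git",' |
    sed 's/.*"ssh_url_to_repo":"\([^"]*\)".*/\1/'
```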

lundi 4 janvier 2021

Recovering set of git repos with new computer

I am a heavy Git user; I use it for... well, almost everything! I have a lot of repos on my machine, connected to various online accounts (GitHub, GitLab, but also others). Most of these are kept in some dedicated folder (say /home/myname/dev, for example).

Now, I have a new computer. How do I import all these repos at once? Of course, I don't want to have to clone them one by one manually! That would be OK for 1, 2 or 3 repos, but I've got dozens.

So I just wrote these two scripts: the first generates a list of remotes in a text file (on the "old" machine), and the second automates cloning from this file on the new computer.
(Finding something similar on SO or elsewhere seemed incredibly hard, so I rewrote it; probably not the first one to do so...)

First, on the old computer, drop the following script in the folder holding the repos, and run it:

#!/bin/bash
# git_generate_url_list.sh
# Generate a list of the git remotes that are in the current folder
# (also logs their sizes)
# S. Kramm - 2020-01-04

a=$(ls -1d */)

echo "# repos list" > url_list.txt
echo "# repos size" > repos_size.txt
for i in $a
do
	echo "Processing $i"
	du -hs "$i" >> repos_size.txt
	cd "$i"; git remote get-url --all origin >> ../url_list.txt
	cd ..
done
This also logs the size of each repo, which can be useful to detect something going wrong...

Then, take that URL list file to the new computer, drop it in /home/myname/dev (or whatever location) along with this second script, and run it:

#!/bin/bash
# git_clone_from_url_list.sh
# Clone into the current folder from a set of urls, read from the file given as argument
# S. Kramm - 2020-01-04

if [ "$1" == "" ]
then
	echo "Missing filename!"
	exit 1
fi

echo "git cloning from file $1"

while read a
do
	if [[ ${a:0:1} != "#" ]] # skip comment lines
	then
		echo "importing repo from $a"
		git clone "$a"
	fi
done < "$1"

Of course, if you use https, you will need to provide passwords for the private repos, but only once per online service.
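If even once per service is too much typing, git's built-in credential cache can keep HTTPS credentials in memory for a while. This is a standard git feature; the timeout value below is just an example:

```shell
# Cache HTTPS credentials in memory for one hour
git config --global credential.helper 'cache --timeout=3600'
```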

Edit: also check out the other post about this topic.


samedi 6 juillet 2019

New C++11 State Machine library released

I just finished releasing version 0.9.3 of Spaghetti, a Finite State Machine library. It is provided with a full manual that demonstrates all the use cases.

It is a header-only single-file library, that should build with any C++11 compliant compiler.

Compared to its main competitors (Boost::statechart and Boost::msm), I haven't had time yet to build an exhaustive comparison, but I have the feeling that Spaghetti is quite a bit easier to set up. On the other hand, it "might" not be as powerful, but again, that needs to be checked.

A lot of work still needs to be done (doc cleaning, testing on different platforms and compilers, adding extensive test coverage, ...) but it is usable right away, out of the box.

If you feel like supporting that kind of work, you may check it out and give some feedback, either here or as a GitHub issue if you spot a problem.

mercredi 18 avril 2018

C++: getting min/max values of boost::graph attributes

Just spent "some" time on a stupid issue with this problem, so I thought I might as well post it here, in case it can be useful to someone.

Say you have a boost::graph using so-called "bundled properties" (aka "inner properties") and you want to find the minimum and maximum values of an attribute. The standard library has this nice minmax_element() algorithm.

But... how can I use it on a graph's inner properties?

Say you have this kind of vertex, used in a graph definition (here undirected, but should also work for a directed graph):

struct vertex_properties
{
  int val;
};

typedef boost::adjacency_list<
  boost::vecS,          // out-edge container
  boost::vecS,          // vertex container
  boost::undirectedS,   // undirected graph
  vertex_properties     // bundled vertex properties
> graph_t;

typedef boost::graph_traits<graph_t>::vertex_descriptor vertex_t;

As an example, consider this program, creating a 3 vertices graph (and, yes, no edges here):

graph_t g;
vertex_t v1 = boost::add_vertex(g);
vertex_t v2 = boost::add_vertex(g);
vertex_t v3 = boost::add_vertex(g);
// Set vertex properties
g[v1].val = 1;
g[v2].val = 2;
g[v3].val = 3;

To find the min/max value of the attributes, just call the algorithm with the right lambda function:

auto pit = boost::vertices( g );   // pair of vertex iterators
auto result = std::minmax_element(
  pit.first,
  pit.second,
  [&]                                              // lambda
  ( vertex_t v1, vertex_t v2 )
  {
    return g[v1].val < g[v2].val;
  }
);
This will return a pair of iterators pointing to the vertices holding the min and max values. So, to get the results:

    std::cout << "min=" << g[*result.first].val
     << " max=" << g[*result.second].val << '\n';

jeudi 4 décembre 2014

C++: erasing elements of std::vector using a lambda

Removing elements from a vector is a task that one can encounter pretty often and that isn't as easy as one could think.

The simplest case is when the index of the unwanted element is known. The std::vector class provides a first form of the erase() member function that takes a (const) iterator as argument.

Thus, if I want to remove, say the 10th element, it's as easy as:

    std::vector<whatever> myVec;
//... fill with more than 10 elements
    myVec.erase( myVec.begin() + 9 );

And if you want to remove the 3 elements between positions 10 and 12, it will be the second form of this function, which has two arguments:

    myVec.erase( myVec.begin() + 9, myVec.begin() + 12 );

(Yes, the second argument defines the first one you want to keep)

But what happens when you want to remove elements based on their value ? Say remove all elements that have value foo (assuming that value is of type whatever).

This is a task for std::remove(). It actually does not remove anything; it just moves elements around so that the ones to be erased end up at the end, and it returns an iterator pointing to the first element to be erased. The next step is to feed that iterator to std::vector::erase().

The code will use erase() in its second form:

    myVec.erase(
        std::remove(            // returns iterator on
            myVec.begin(),      // first element to
            myVec.end(),        // be removed
            foo                 // the value to remove
        ),
        myVec.end()             // erase up to the end
    );
(This is known as the Erase–remove idiom.)

Next, what if you want to remove elements based on some property they have ? Consider for example a vector of vectors:

   std::vector<std::vector<Whatever>> myVec2;

And now the task is to remove elements that hold less than 2 elements. Okay, so we need to check every element, and decide to remove it or not.

This is a task for the second form of that same algorithm, remove_if(). Instead of a value, it takes a predicate as third argument, and will "remove" (move, actually) the considered element if that predicate returns true. A predicate is usually implemented as a functor: an object of some class that defines operator() and returns a bool, based on the given value.

At first, this seems like a harsh constraint, as no one wants to declare a class for such a trivial task. But before C++11 came out, that was required (unless, maybe, using some Boost library). Or else, we needed to iterate through the vector and test each element, copy it or not, and swap:

   std::vector<std::vector<whatever>> newv;
   newv.reserve( myVec2.size() ); // to avoid resizing when using push_back
   for( size_t i=0; i < myVec2.size(); i++ )
      if( myVec2[i].size() >= MinSize )  // keep only the elements that are big enough
         newv.push_back( myVec2[i] );
   std::swap( myVec2, newv );

This is where C++11 and lambdas come in. A lambda can be seen as a sort of "anonymous inline function", that captures variables in scope. Here, as the function iterates over all the elements, each of them will be a std::vector.

A lambda is made of three parts:

  • [how capturing variables happens],
  • (the functions arguments),
  • {The body of the function}.

The complete code:

   std::size_t MinSize = ... ; // (some value)
   myVec2.erase(
      std::remove_if(
         myVec2.begin(),
         myVec2.end(),
         [&]( const std::vector<whatever>& vw ) // lambda
            { return vw.size() < MinSize; }
      ),
      myVec2.end()
   );

More on C++ lambdas.

mercredi 14 mai 2014

Subversion: colordiff for HTML files

(Mostly a reminder:)

Say you have some software project, hosted on some Subversion repository. You happily edit your files, and before committing you want to have a quick look at the edits you have done.

No problem, as simple as:
> svn diff 

But this outputs lots of text, not easily readable. OK, let's go with colordiff:
> svn diff | colordiff

And then you get drowned under floods of nice and flashy colors, and you have to painfully scroll your terminal. Well, what else? Simply redirecting to a file won't keep the colors.

This is where another magic tool shows up: aha. Yeah, that's its name. It's an "ANSI to HTML" converter. Install it with sudo apt-get install aha, and then, go:
svn diff | colordiff | aha >mydiff.html

For convenience, you can now add a new target to your makefile:

diff:
	svn diff | colordiff | aha > mydiff.html
	xdg-open mydiff.html
Thus, entering make diff at the shell will show you the current edits you have done up to now.
xdg-open is just the Gnome app that opens a file with the default application associated with its file type. On Windows, just use the file name alone, as this OS has some mechanism to open the file with the default application when given a file name.

Edit 20141224: a small improvement: in order to keep track of all these generated diff files, you can append the date/time to the filename so that each new one doesn't overwrite the previous one. This can be done easily with bash (not that hard for Windows either, but no time at present to figure that out):

NOW=$(shell date +%Y%m%d_%H%M)

ifeq ($(OS),Windows_NT)
BROWSER=
else
BROWSER=xdg-open
endif

diff:
	svn diff | colordiff | aha > mydiff_$(NOW).html
	$(BROWSER) mydiff_$(NOW).html

Notice the conditional, so that this makefile should also work out-of-the-box under Windows (except for the time/date, but if you send it to me, I'll publish it ;-) )