Localised configuration in Vim: localcfg

For quite a while I’ve been using a small Vim plugin that lets me write configuration specific to a system: it loaded a config file based on the system’s host name. Unfortunately I can’t seem to find that plugin anywhere now, so I’ve put it in a snippet. It allowed me to keep a single clean Vim configuration and check it into version control, while still allowing for settings that are unique to a system.

Lately I’ve found it slightly limited though; I really wanted other things to be able to trigger the loading of some piece of configuration. So I wrote my first ever Vim plugin: localcfg.

Hopefully someone will find it useful.


Phabricator on ArchLinux

At work we’ve been using Trac for quite a while now, but it’s always interesting to look at other options. When listening to a recent episode of GitMinutes, on Blender’s move to using git, I heard of Phabricator for the first time. There are good instructions for installing it on Ubuntu, but I couldn’t find any for ArchLinux.

These are my notes for installing Phabricator on ArchLinux. Hopefully someone will find them useful.

Basic setup

I performed this install in a virtual machine (using VirtualBox). The virtual machine is configured with a bridged network and I gave it the FQDN of phabarch.vbox.net.

Packages

Beyond the packages installed as part of the basic install I needed the following packages:

  • lighttpd
  • git
  • mariadb
  • php
  • php-cgi
  • php-apcu
  • php-gd
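All of these can be pulled in with a single pacman call. A minimal sketch (the command is only echoed here; run it as root on the target system, where --needed skips anything already installed):

```shell
# Collect the packages Phabricator needs and show the pacman invocation.
pkgs="lighttpd git mariadb php php-cgi php-apcu php-gd"
echo "pacman -S --needed $pkgs"
```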

Setup of mariadb

Following the instructions on the ArchLinux wiki page on mariadb, I first started the service and then finished the installation:

# systemctl start mysqld.service
# mysql_secure_installation

After this I restarted the service and made sure it will start on boot:

# systemctl restart mysqld.service
# systemctl enable mysqld.service

I picked the root password mariaroot.

Setup of lighttpd

Modify /etc/lighttpd/lighttpd.conf to look something like this:

server.modules = (
  "mod_rewrite",
  "mod_fastcgi",
)
 
server.port = 80
server.username  = "http"
server.groupname = "http"
server.document-root = "/srv/http"
server.errorlog  = "/var/log/lighttpd/error.log"
dir-listing.activate = "enable"
index-file.names = ( "index.php", "index.html" )
static-file.exclude-extensions = ( ".php" )
mimetype.assign  = (
  ".html" => "text/html",
  ".txt" => "text/plain",
  ".css" => "text/css",
  ".js" => "application/x-javascript",
  ".jpg" => "image/jpeg",
  ".jpeg" => "image/jpeg",
  ".gif" => "image/gif",
  ".png" => "image/png",
  "" => "application/octet-stream"
  )
 
fastcgi.server += ( ".php" =>
  ((
    "bin-path" => "/usr/bin/php-cgi",
    "socket" => "/var/run/lighttpd/php.socket",
    "max-procs" => 1,
    "bin-environment" => (
      "PHP_FCGI_CHILDREN" => "4",
      "PHP_FCGI_MAX_REQUESTS" => "10000",
    ),
    "bin-copy-environment" => (
      "PATH", "SHELL", "USER",
    ),
    "broken-scriptfilename" => "enable",
  ))
)
 
$HTTP["host"] =~ "phabarch(\.vbox\.net)?" {
  server.document-root = "/srv/http/phabricator/webroot"
  url.rewrite-once = (
    "^(/rsrc/.*)$" => "$1",
    "^(/favicon.ico)$" => "$1",
    # This simulates QSA (query string append) mode in Apache
    "^(/[^?]*)\?(.*)" => "/index.php?__path__=$1&$2",
    "^(/.*)$" => "/index.php?__path__=$1",
  )
}

Setup of php

Modify /etc/php/php.ini and enable the following extensions:

  • mysqli.so
  • openssl.so
  • iconv.so
  • apcu.so
  • gd.so
  • posix.so

Also disable the open_basedir setting.
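These edits can also be scripted. A sketch, with two assumptions loudly flagged: that the stock php.ini comments extensions out as `;extension=name.so`, and that the file lives at /etc/php/php.ini. It therefore runs against a scratch copy; point `$ini` at the real file only once you’ve checked both:

```shell
# Run against a scratch copy; set ini=/etc/php/php.ini for the real thing
# (assumed path and ';extension=' comment style).
ini=$(mktemp)
printf '%s\n' ';extension=mysqli.so' ';extension=gd.so' \
    'open_basedir = /srv/http/:/tmp/' > "$ini"

# Uncomment the required extensions ...
for ext in mysqli openssl iconv apcu gd posix; do
    sed -i "s/^;extension=${ext}\.so/extension=${ext}.so/" "$ini"
done
# ... and disable open_basedir by commenting it out.
sed -i 's/^open_basedir/;open_basedir/' "$ini"
```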

Getting and setting up Phabricator

Checking it out

I placed the checkouts in /srv/http:

# cd /srv/http
# git clone git://github.com/facebook/libphutil.git
# git clone git://github.com/facebook/arcanist.git
# git clone git://github.com/facebook/phabricator.git

Database configuration

# cd /srv/http/phabricator
# ./bin/storage upgrade --user root --password mariaroot
# ./bin/config set mysql.user root
# ./bin/config set mysql.pass mariaroot

Set the base URI

# cd /srv/http/phabricator
# ./bin/config set phabricator.base-uri 'http://phabarch.vbox.net/'

Diffusion configuration

# mkdir /var/repo
# ./bin/config set diffusion.allow-http-auth true

At this point I started lighttpd and used the web interface to configure environment.append-paths to include the path of git-core, /usr/lib/git-core.

phd configuration

First create the daemon user:

# useradd -r -M -d /tmp phabd

Then create the phd log dir and set its owner and group:

# mkdir /var/tmp/phd/log
# chown -R phabd:phabd /var/tmp/phd/

Also make the daemon user the owner of the repo folder used by Diffusion:

# chown phabd:phabd /var/repo

Configure sudo

Create the file /etc/sudoers.d/phabricator with this content:

http ALL=(phabd) SETENV: NOPASSWD: /usr/lib/git-core/git-http-backend

User configuration

In the web interface set a VCS password for the user.

Play

Now the system should be ready to be played with :)


How do professional Windows programmers stand Visual Studio?

I have a new assignment at work and now find myself at yet another Windows shop. They are making embedded systems, but have for some strange reason decided that Windows is the only development platform to use. After only a few weeks here I’m noting a growing irritation with the tools offered for my use. The amount of forced mouse usage is astounding and the main tool, Visual Studio, is turning out to be the main culprit. After a week or so of exposure to VS I’ve found what I consider to be a serious flaw with a tool for developers: it doesn’t scale.

  1. No hierarchical structure: It doesn’t scale very well with the size of a project. The concept of having a solution with a number of projects is not bad, but the implementation doesn’t scale. A project can’t have sub-projects, which means I can’t layer the code in the IDE the same way I do on disk. The only thing I can do is organise the viewing of files through the concept of filters.
  2. All configuration is manual, part 1: MS seems to have optimised for small projects. All configuration is kept in XML files, and the only interface to them is a set of property dialogues (some of which can be resized, others not) requiring an amazing amount of pointing and clicking to get anything done.
  3. All configuration is manual, part 2: MS seems to have optimised for beginning new projects. Getting started is amazingly quick, but once you reach a size of about 10 projects it becomes daunting to fix anything that requires configuration in all projects. Making sure that all configurations are correct is a major undertaking, and requires an insane amount of mouse use. Some earlier versions of VS seem to have even made it impossible to edit the common settings of configurations properly; one simple mistake and the value of a configuration is lost.
  4. Experience: There are no shortcuts to discover. The configuration is all in XML, which means it’s not really possible to jump out of VS and use a text editor for fixing up semi-broken configurations (like dealing with someone’s decision to place all intermediate files and final results in the top-level solution directory).

So, how do Windows developers cope with this? Don’t they use VS? Do they police the configurations diligently to ensure no silliness creeps in? Or are they all using tools to address these points (like CMake)?


TCLAP for command line argument parsing in C++

A long while ago I was looking for a way to handle command line arguments in a C++ program I was writing for Windows. At the time the only option I found was Boost.Program_options. After a bit of experimenting I found that the pre-built Boost libs available back then had some issues, and after a bit of thinking I decided to write the program in Python instead :)

Now I once again find that I need to handle command line arguments in C++ on Windows, but this time I found quite a few options: gflags, ezOptionParser, and TCLAP. They are all stand-alone, meaning that I don’t need to pull in a huge dependency like Boost or Qt, and liberally licensed (BSD3 and MIT), so usage at work is no problem. After a bit of playing with all three I found that TCLAP is most to my liking, but gflags does have one nice little feature: it allows putting command line option definitions pretty much anywhere in the code. This would solve one of the problems I’m facing: the tool shall consist of a main program and pluggable modules, where each module must be able to add command line arguments of its own. However, it turns out that the TCLAP API easily allows implementing such a scheme. Here’s how I implemented it in the experiment code I’ve been playing with this weekend.

How it works

The basic idea is to create a central instance of TCLAP::CmdLine that options can be registered with, in the main program as well as in the pluggable modules. By declaring the option instances as top-level variables in the compilation units, merely loading a pluggable module is enough to have the constructors run, and by passing the central TCLAP::CmdLine to the constructors they register themselves properly. This seems to be the same idea used in gflags.

Registering a command line option

The following is the code my pluggable module uses to register an option:

TCLAP::ValueArg<int> value("v",
                           "value",
                           "ONE: the value to return",
                           false,
                           42,
                           "integer",
                           argparse::argparser);

The parser

I put the parser in its own namespace, and due to the API of TCLAP::CmdLine I found that I needed to subclass it.

#include <tclap/CmdLine.h>
 
namespace argparse {
 
class ArgParser : public TCLAP::CmdLine {
public:
    ArgParser() : TCLAP::CmdLine("No message set.", ' ', "No version set.") {}
 
    void setMessage(std::string msg) { _message = msg; }
    void setVersion(std::string version) { _version = version; }
};
 
extern ArgParser argparser;
 
}

The main function

After this I just need to instantiate the central parser instance and set it up.

argparse::ArgParser argparse::argparser;
 
int main(int ac, char **av)
{
    argparse::argparser.setMessage("The message.");
    argparse::argparser.setVersion("0.0.0");
 
    // load the plugin modules
 
    argparse::argparser.parse(ac, av);
 
    // do the work
 
    return 0;
}

Limitations

The main limitation I can see with this approach, and AFAICS that would be true with gflags as well, is that it’s not very easy to specify the plugin modules to load by passing them on the command line. If the TCLAP API allowed parsing without acting on --help and --version, as well as being forgiving with arguments it didn’t know how to handle, it would be possible to use an option in the main application to find the modules, load them, and then re-parse the command line. In my particular case it doesn’t matter much; the plugin modules will all be found in a well-known place and all available modules should be loaded every time. It does make testing a bit more cumbersome though.


Strachey, referential transparency, Haskell

This is my input into the recent discussion on referential transparency (RT). I’m nowhere near as well versed in the subject as others, but how am I ever to learn anything unless I put my thoughts out there for them to be laughed at and ridiculed? ;)

It all started with a post on stackoverflow.com, which received several very long and detailed responses, in particular from Uday Reddy (here and here). His answers were also linked to from Reddit. His second response contains a link to an excellent paper by Strachey, Fundamental concepts in programming languages. I’d go as far as saying that, despite it being lecture notes rather than a fully worked paper, it ought to be required reading for all software developers.

The rest of what I write here hinges on me actually understanding what Strachey writes in his paper. Of course I’m looking forward to comments/corrections/etc that help me correct my understanding.

What Strachey says about RT

In section 3.2.1 he introduces RT like this:

One of the most useful properties of expressions is that called by Quine referential transparency. In essence this means that if we wish to find the value of an expression which contains a sub-expression, the only thing we need to know about the sub-expression is its value. Any other features of the sub-expression, such as its internal structure, the number and nature of its components, the order in which they are evaluated or the colour of the ink in which they are written, are irrelevant to the value of the main expression.

There is however a crucial bit just before that:

Like the rest of mathematics, we shall be concerned only with R-values.

That is, he starts out with a very limited subset of what most people would consider a usable imperative programming language.

He then dives into some more details in section 3.2.2 by adding the concept of an environment, which is handled through the use of a where-clause, or alternatively using let-statements (this ought to make any Haskell developer feel right at home). After a few interesting sections on stuff like applicative structure, evaluation, and conditional expressions he finally tackles the issue of variables in section 3.3.1. There are two pieces to the trick: the first is to take advantage of his earlier insight that led to a split of values into L-values and R-values:

If we consider L-values as well as R-values, however, we can preserve referential transparency as far as L-values are concerned. This is because L-values, being generalised addresses, are not altered by assignment commands. Thus the command x := x+1 leaves the address of the cell representing x (L-value of x) unchanged although it does alter the contents of this cell (R-value of x). So if we agree that the values concerned are all L-values, we can continue to use where-clauses and lambda-expressions for describing parts of a program which include assignments.

The cost of this is that the entire theory constructed earlier for operations taking R-values now has to be revised to incorporate L-values. The outline for this is in the rest of section 3.3 and it basically comes down to including an abstract store in the environment. However, before doing that he mentions that:

I think these problems are inevitable and although much of the work remains to be done, I feel hopeful that when completed it will not seem so formidable as it does at present, and that it will bring clarification to many areas of programming language study which are very obscure today. In particular the problems of side effects will, I hope, become more amenable.

He does reach his goal, but it’s a bit unfortunate that he stops short of considering the wider problem of side effects. My assumption is that these would have to be dealt with in a similar way to assignment, but that would mean that rather than just adding a store to the environment, the world, or a subset of it, would need to be added.

An open question (to me) is whether anyone has built on Strachey’s work in this area and worked out the details of RT and general side effects.

RT in Haskell

The original question described RT as

it means you can replace equals with equals

which I actually think is a rather good, and very short, description of it. It’s not the full story, there are further important details, but it’s a good first intuition. Also, it’s a description usable in Haskell. Well, to be slightly more nuanced, it’s good for Haskell without IO (Haskell-IO). However, this is where the strict type system of Haskell really starts to shine, because (here I’m probably a bit imprecise) we only have R-values in Haskell-IO. If we want to use assignment we add the use of a state monad, and we do that explicitly.

A former colleague of mine said that in Haskell we need to build up our own computational models ourselves. For instance, if we need assignment we use State, if we need to record progress we use Writer, etc. In other languages the language designer has already made all those choices for us, we don’t get to make them ourselves. For RT it means that Haskell is more explicit in what the environment of a function is.

Moving on to general side effects those are also more explicit in Haskell since they have to happen inside the IO monad. That alone is a great boon for RT in Haskell since it becomes explicit where RT as worked out by Strachey applies directly, and where there are (hopefully amenable) problems of side effects left. Even further, in Haskell it’s possible to make subsets of IO (by wrapping IO, see e.g. my own posts on wrapping IO, part 1 and wrapping IO, part 2). I’m sure that if including the world in the environment is the way to achieve RT with general side effects, then it’s highly beneficial to be able to create subsets of the world.

RT in Haskell vs. RT in (common) imperative languages

Uday writes in his first answer that:

But, today, functional programmers claim that imperative programming languages are not referentially transparent. Strachey would be turning in his grave.

This may well be true, but I think that when a Haskell programmer says it, he’s only twitching slightly. The reason? Strachey writes:

Any departure of R-value referential transparency in a R-value context should either be eliminated by decomposing the expression into several commands and simpler expressions, or, if this turns out to be difficult, the subject of a comment.

Which is something that Haskell programmers do naturally by use of IO. That is, in Haskell you either have an R-value, and you clearly see that you do, or you put in a comment, which is encoded in the type of the function.

This rather lengthy post basically arrives at the following, which is what I suspect the user pacala is saying about RT on Reddit:

Imperative languages may well be RT, but when trying to understand a code base the environment of each function is so large that understanding is an intractable problem. I don’t have this problem in Haskell.


Compiling boost for Windows, with MinGW on Linux

Just in case you see the utter logic in developing for Windows on Linux :)

In the root of the unpacked Boost:

  1. Run ./bootstrap.sh --with-python=$(which python2) --prefix=${HOME}/opt/boost-win --without-icu
  2. Modify project-config.jam like this:
         # file.
         if ! gcc in [ feature.values  ]
         {
        -    using gcc ;
        +    using gcc : : i486-mingw32-g++ ;
         }
        
         project : default-build gcc ;
  3. Compile and install by running ./bjam --layout=system variant=release threading=multi link=shared runtime-link=shared toolset=gcc target-os=windows threadapi=win32 install

Limit what is built by adding e.g. --with-program_options to that last command.


Qt state machines and automatic timed transitions

In the interest of full disclosure: this post is related to what I do for a living, development of and for embedded systems. I work for Semcon, but they don’t make me wear a suit and tie so these are my words, and mine alone.

A bit of background info

In a recent project we had a system where the turning of the wheels was controlled by a simple dial. It emitted pulses as it was turned and the pulse train was shifted slightly depending on the direction of the turn. In software this was mapped onto two signals, one for each direction, with one signal emitted for each pulse in the train. All very straightforward so far.

To avoid accidental changes of direction we decided that

  1. the wheels would only start turning after four initial signals had been received, and
  2. a full second without receiving any signal meant that the turning had stopped.

The solution

The application was to be implemented using Qt, so using the Qt state machine framework was an obvious choice. The full state machine wouldn’t have to be large, only 8 states. The initial state (sResting) would indicate that the system was in a steady state (no turning); from there any received signal would advance into a successive state (sOne, sTwo, sThree, sFour) to indicate the number of received signals. From the fourth state the machine would advance directly to a state (sTurning) where a received signal would initiate an actual turn of the wheels. The turning would happen upon the entry into two separate states (sTurnRight and sTurnLeft), and each of these states would instantly return to sTurning. All of this is simple and straightforward; what wasn’t so clear was how to implement the automatic return to the initial state after 1s of inactivity.

The implementation

As I like to do, I first experimented a little to find a suitable solution to the problem. What follows is the resulting code of that experiment. The final code used in the project ended up being very similar. It’s all based around the method postDelayedEvent() found in QStateMachine.

First off a new type of event is needed, a ReturnEvent:

class ReturnEvent : public QEvent
{
public:
    ReturnEvent() : QEvent(QEvent::Type(QEvent::User + 1)) {}
};

There is also a need for a new type of transition, ReturnTransition:

class ReturnTransition : public QAbstractTransition
{
public:
    ReturnTransition(QState *target=0) { if(target) setTargetState(target); }
 
protected:
    virtual bool eventTest(QEvent *e) {
        return(e->type() == QEvent::Type(QEvent::User + 1));
    }
 
    virtual void onTransition(QEvent *) {}
};

For the experiment I decided to use a simple widget containing two buttons; it would also hold the state machine:

class MButtons : public QWidget
{
    Q_OBJECT;
 
public:
    MButtons(QStateMachine &m)
        : _right("Right"), _left("Left"),
        _m(m), _delayed(0) {
        QBoxLayout *lo = new QBoxLayout(QBoxLayout::TopToBottom);
        lo->addWidget(&_right);
        lo->addWidget(&_left);
 
        setLayout(lo);
    }
    virtual ~MButtons() {}
 
    QPushButton _right,
                _left;
    QStateMachine &_m;

The widget also holds the slots for all the state entry functions:

public slots:
    void sRestingEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
    }
 
    void sOneEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
        _delayed = _m.postDelayedEvent(new ReturnEvent, 1000);
    }
 
    void sTwoEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
        _delayed = _m.postDelayedEvent(new ReturnEvent, 1000);
    }
    void sThreeEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
        _delayed = _m.postDelayedEvent(new ReturnEvent, 1000);
    }
    void sFourEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
        _delayed = _m.postDelayedEvent(new ReturnEvent, 1000);
    }
    void sTurningEntered() {
        qDebug() << __PRETTY_FUNCTION__;
        if(_delayed) { _m.cancelDelayedEvent(_delayed); _delayed = 0; }
        _delayed = _m.postDelayedEvent(new ReturnEvent, 1000);
    }
    void sTurnRightEntered() {
        qDebug() << __PRETTY_FUNCTION__;
    }
    void sTurnLeftEntered() {
        qDebug() << __PRETTY_FUNCTION__;
    }

Sure, several of the entry functions could be folded into one, but in order to validate the idea it’s easier to have a separate one for each state. The pattern is easy to spot: on entry a delayed return event is registered (if there’s a previous one it’s replaced with a new one), except in the steady state (sResting), where any delayed event is removed, and in the turning states (sTurnRight and sTurnLeft), since those states immediately return to sTurning anyway.

Finally it also holds the handle for the delayed event:

private:
    int _delayed;
};

Now the main function for setting it all up is simple:

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QStateMachine m;
    MButtons b(m);
    b.show();
 
    QState *sResting = new QState(),
           *sOne = new QState(),
           *sTwo = new QState(),
           *sThree = new QState(),
           *sFour = new QState(),
           *sTurning = new QState(),
           *sTurnRight = new QState(),
           *sTurnLeft = new QState();
 
    m.addState(sResting);
    m.addState(sOne);
    m.addState(sTwo);
    m.addState(sThree);
    m.addState(sFour);
    m.addState(sTurning);
    m.addState(sTurnRight);
    m.addState(sTurnLeft);
    m.setInitialState(sResting);
 
    sResting->addTransition(&b._right, SIGNAL(clicked()), sOne);
    sResting->addTransition(&b._left, SIGNAL(clicked()), sOne);
    sOne->addTransition(&b._right, SIGNAL(clicked()), sTwo);
    sOne->addTransition(&b._left, SIGNAL(clicked()), sTwo);
    sOne->addTransition(new ReturnTransition(sResting));
    sTwo->addTransition(&b._right, SIGNAL(clicked()), sThree);
    sTwo->addTransition(&b._left, SIGNAL(clicked()), sThree);
    sTwo->addTransition(new ReturnTransition(sResting));
    sThree->addTransition(&b._right, SIGNAL(clicked()), sFour);
    sThree->addTransition(&b._left, SIGNAL(clicked()), sFour);
    sThree->addTransition(new ReturnTransition(sResting));
    sFour->addTransition(sTurning);
    sTurning->addTransition(&b._right, SIGNAL(clicked()), sTurnRight);
    sTurning->addTransition(&b._left, SIGNAL(clicked()), sTurnLeft);
    sTurning->addTransition(new ReturnTransition(sResting));
    sTurnRight->addTransition(sTurning);
    sTurnLeft->addTransition(sTurning);
 
    QObject::connect(sResting, SIGNAL(entered()), &b, SLOT(sRestingEntered()));
    QObject::connect(sOne, SIGNAL(entered()), &b, SLOT(sOneEntered()));
    QObject::connect(sTwo, SIGNAL(entered()), &b, SLOT(sTwoEntered()));
    QObject::connect(sThree, SIGNAL(entered()), &b, SLOT(sThreeEntered()));
    QObject::connect(sFour, SIGNAL(entered()), &b, SLOT(sFourEntered()));
    QObject::connect(sTurning, SIGNAL(entered()), &b, SLOT(sTurningEntered()));
    QObject::connect(sTurnRight, SIGNAL(entered()), &b, SLOT(sTurnRightEntered()));
    QObject::connect(sTurnLeft, SIGNAL(entered()), &b, SLOT(sTurnLeftEntered()));
 
    m.start();
 
    return(app.exec());
}

Conclusion and open questions

I’m fairly happy with the solution, but I’d be curious how other people, people more skilled in using Qt, would have solved the problem.

For a while I considered solving the skipping of four initial signals using a single state and a counter, but I saw no obvious easy way to implement that, so I opted for separate states instead. Slightly wasteful of resources, but not too bad, and simplicity is important. I’m very curious to find out if there’s a simple way to implement it using a single state.


Manual setup of Qt+Eclipse on Windows

Before the weekend I started looking at using Qt on Windows. More specifically I wanted to know whether this combination could be an option for a sub-project at work. We need to develop a program for the Windows desktop, and due to the overall context it would make sense to write it in C++ (that’s what we use for another part of the project). We already use both Eclipse and Visual Studio in the project, but I strongly prefer Eclipse, so I was hoping to be able to use it. However, it seems that the Qt developers strongly favour their own tool, Qt Creator, though there are (outdated?) integrations for both Eclipse and Visual Studio. I’d rather avoid introducing a third IDE into a project; two is already one too many in my opinion. Anyway, I think I managed to find an acceptable configuration of Eclipse, without using that old Qt integration plugin, together with MSVC (I was using the gratis version of MSVC for this).

Qt setup

I decided to install Qt into C:\QtSDK, and then I made the following permanent changes to the environment:

> set QTDIR=C:\QtSDK\Desktop\Qt\4.8.0\msvc2010
> set QMAKESPEC=%QTDIR%\mkspecs\win32-msvc2010
> set PATH=%PATH%;%QTDIR%\bin;C:\QtSDK\QtCreator\bin

Starting Eclipse so that it finds the compiler

It’s slightly disappointing that Eclipse happily lets one create an MSVC project that isn’t buildable because it doesn’t know where the compiler is located. One easy way to remedy that seems to be to create a BAT file that sets up the proper environment for Eclipse:

@echo off
setlocal
call "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\vcvarsall.bat"
start C:\Eclipse\Indigo\eclipse.exe
endlocal

Creating the project

Creating a “makefile” project in Eclipse is fairly straightforward: one needs a C/C++ project of the makefile type, made empty so that there isn’t any cruft in the way. Then add a single source file, e.g. main.cxx:

#include <iostream>
#include <Qt/QtGui>
 
int main(int argc, char **argv)
{
    std::cout << __FUNCTION__ << std::endl;
    QApplication app(argc, argv);
    return(app.exec());
}

And then a project file, e.g. Test.pro:

TEMPLATE = app
TARGET = 
DEPENDPATH += .
INCLUDEPATH += .
 
CONFIG += qt
 
HEADERS +=
SOURCES += main.cxx

After this use qmake to create the required makefile. I decided to use a subdirectory (_build) in the project, which qmake seems to have full support for:

> qmake ..\Test.pro

Setting up building from Eclipse

In the project properties modify the C/C++ Build settings for the Debug target. Instead of the default build command (which is make) one can use nmake, or even better jom:

  • Build command: C:/QtSDK/QTCreator/bin/jom -f Makefile.Debug
  • Build directory: ${workspace_loc:/Test}/_build

Then one can create a Release target, which differs only in that it builds using Makefile.Release.

Running qmake from inside Eclipse

It’s very convenient to be able to run qmake and re-generate the makefiles from inside Eclipse. One can set that up by adding an external tool:

  • Location: C:\QtSDK\Desktop\Qt\4.8.0\msvc2010\bin\qmake.exe
  • Working directory: ${workspace_loc:/Test}/_build
  • Arguments: ../Test.pro

In closing

I plan to also have a look at the Qt Visual Studio Add-in, though I suspect we might be using the latest version of VS, which might cause trouble.

Suggestions for further integration with Eclipse would be most welcome, e.g. for forms and translations.


LXDE and multiple screens: replacing lxrandr with a script

When using Gnome3 I was really impressed with the support for multiple screens. Then I switched to LXDE and was very disappointed in that desktop’s support for multiple screens. In fact so disappointed that I sat down, read the man-page for ‘xrandr’, and hacked up the following script:

#! /bin/bash
 
cmd=$1; shift
 
case $cmd in
    on)
        # turn on VGA1, auto-select size, right of laptop screen
        xrandr --output VGA1 --auto --right-of LVDS1
        ;;
    off)
        xrandr --output VGA1 --off
        ;;
    list)
        xrandr
        ;;
    *)
        echo "Commands: on, off, list"
esac

In my mind it’s vastly more usable than ‘lxrandr’ :)
