Sunday, 29 September 2013

Using Basic Lighting in OpenGL

Let there be light. And let it be Phong shaded.

Returning to my basic 3D cube from previous posts, I wanted to stop it looking quite so much like Jason's cube-of-many-colours and just have one colour that could then be changed depending on the status of what it represented. The problem here is that setting each face to the same colour results in what is essentially shadow puppetry: there's no variation in the colours due to the environment, so every face looks flat and solid.

How do we get around this as easily as possible? Use the basic lighting supplied by OpenGL. Note that I'm not trying to do photo-realistic shading of a cube here; all I want is for the colour of the cube's faces to vary depending on how much light they are seeing. What this amounts to is finding the angle between the incoming light and the 'normal' of the face - i.e. the direction perpendicular to the plane that the face lies on. To do more complicated lighting you'd want to use shaders, but that's serious overkill for what I'm looking for.
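To make that concrete, here's the maths OpenGL's diffuse term is doing per face, as a small standalone C++ sketch (diffuseFactor is a hypothetical helper for illustration, not OpenGL API): the brightness is the cosine of the angle between the face normal and the direction to the light, clamped to zero for faces pointing away.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double length(const Vec3 &v) { return std::sqrt(dot(v, v)); }

// Diffuse factor in [0, 1]: cos of the angle between the face normal n and
// the direction l towards the light, clamped at zero for back-facing faces.
double diffuseFactor(Vec3 n, Vec3 l)
{
    double nl = length(n), ll = length(l);
    if (nl == 0.0 || ll == 0.0) return 0.0;
    return std::max(0.0, dot(n, l) / (nl * ll));
}
```

A face looking straight at the light gets the full diffuse colour (factor 1.0); a face at 90 degrees or more to it gets none, which is exactly why a second light pointing the other way is handy for the back faces.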

Anyway, there are several things that need to be done to get this working as intended - enable lighting in the OpenGL, position the light, set the normals and set the materials. The first part basically boils down to the following code:


GLfloat white_light[]= { 1.0f, 1.0f, 1.0f, 1.0f };    // set values for a white light
glLightfv(GL_LIGHT1, GL_DIFFUSE, white_light);        // set the DIFFUSE colour of LIGHT1 to white
glLightfv(GL_LIGHT2, GL_DIFFUSE, white_light);        // set the DIFFUSE colour of LIGHT2 to white

glEnable(GL_LIGHT1);                                  // enable the lights
glEnable(GL_LIGHT2);
glEnable(GL_LIGHTING);                                // enable lighting in general


All that's happening here is I'm setting the values for the two lights I'm going to use and turning them on. You are guaranteed access to at least 8 lights (GL_LIGHT0 to GL_LIGHT7), with the exact maximum depending on the implementation. I'm setting the DIFFUSE (light scattered in all directions) value to white so the surfaces it hits will just reflect the colour that I set the material to. I'm not bothering with any SPECULAR (light reflected like a mirror) or AMBIENT (general background light) as it's not needed for this at present (and I *think* ambient is set by default).

Next we need to position the light:

// set the light position
GLfloat light_pos1[]= { 1.0f, 1.0f, 3.0f, 0.0f };
glLightfv(GL_LIGHT1, GL_POSITION, light_pos1);
GLfloat light_pos2[]= { 1.0f, -1.0f, -3.0f, 0.0f };
glLightfv(GL_LIGHT2, GL_POSITION, light_pos2);

We need to be careful about where this code goes. It needs to be after all 'camera' translations/rotations but before drawing any objects. If you put it in the wrong place, you'll get weird effects like the light moving when you move the camera. Note that the fourth value of 0.0 in the position arrays makes these directional lights (effectively at infinity) rather than point lights. Note also that I'm using two lights because if a face points away from the only light, it gets no diffuse light at all.

Next on the list is the normal vectors. For a cube this is fairly trivial to work out, but for more complicated geometry these would usually be loaded with the object or calculated on loading. For my cube, something similar to the following is needed:

glNormal3d(0, 0, 1);
glVertex3f(  0.5, -0.5, 0.5 );
glVertex3f(  0.5,  0.5, 0.5 );
glVertex3f( -0.5,  0.5, 0.5 );
glVertex3f( -0.5, -0.5, 0.5 );

Obviously the important bit is the glNormal3d command, which specifies the normal vector for this face. Similar calls are made for all of the cube's faces.
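For the cube the normals can be written by hand, but for arbitrary geometry you would compute them from the vertices. A minimal sketch (plain C++, no OpenGL calls; faceNormal is a hypothetical helper) using the cross product of two edge vectors:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Compute a face normal from three vertices in counter-clockwise winding,
// as you would when loading arbitrary geometry.
Vec3 faceNormal(Vec3 a, Vec3 b, Vec3 c)
{
    // two edges of the face
    Vec3 u = { b.x - a.x, b.y - a.y, b.z - a.z };
    Vec3 v = { c.x - a.x, c.y - a.y, c.z - a.z };

    // cross product u x v gives a vector perpendicular to the face
    Vec3 n = { u.y*v.z - u.z*v.y, u.z*v.x - u.x*v.z, u.x*v.y - u.y*v.x };

    // normalise to unit length (lighting expects unit normals)
    double len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}
```

Feeding in the first three vertices of the front face above gives (0, 0, 1), matching the glNormal3d call.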

And finally, the last bit: setting the material of the cube. When using lighting, you can no longer just use glColor and must use glMaterial instead. This allows you to set the various aspects of how the object reacts to light, namely how the AMBIENT, DIFFUSE and SPECULAR light is reflected back. As I said above, I mostly care about the DIFFUSE light in this case, which will just colour the faces depending on their direction relative to the light:

GLfloat red[] = {0.8f, .2f, .2f, 1.f};
glMaterialfv(GL_FRONT, GL_AMBIENT_AND_DIFFUSE, red);

Note that this should be placed before calling the display lists/glVertex commands. I'm also setting the AMBIENT a little as well, as there is some ambient light by default.

With all these elements in place, you should get cubes that are a single colour but are shaded appropriately given the angle of the light! Next problem to overcome - overpainting!

References:
http://www.cse.msu.edu/~cse872/tutorial3.html
http://nehe.gamedev.net/tutorial/texture_filters_lighting__keyboard_control/15002/

Code:
https://github.com/doc-sparks/Interface/tree/v0.5

(Note that the tag includes the changes for overpainting as well which basically means some init code has been switched to the paintEvent function)

Sunday, 1 September 2013

Traffic Shaping and Throttling in Linux

Because my University seems to have worse internet connectivity than my house

During my day job as sys-admin for a Particle Physics Group, I was recently rapped on the knuckles by central IT for one of our users saturating the whole university's bandwidth. My first reaction was surprise that they didn't have traffic throttling in place already, but this turned to incredulity when I learnt that they only had a 1Gb/s connection for the WHOLE CAMPUS, and one of our guys just downloading some LHC data from a couple of fast sites in the UK had brought the entire system to its knees. Consequently, I was asked to stop people doing this (!) until the network had been upgraded to 10Gb/s. I decided that setting up some traffic shaping on our machines was probably a better idea.

A quick search led to this page that described how to use the tc command (part of the iproute2 package) to do exactly what I needed, and even provided a nice bash script to do the job - problem solved! At some point in the future I may update this post with the actual ins and outs of how the script works (when I've figured it out myself!) but until then, just grab the script, change the download/upload limits as you wish and off you go :)

Sunday, 25 August 2013

Adding New Screen Resolutions in Linux Mint

Because 800x600 is too low-res even for the console

In this day and age, I thought that plugging my laptop into a KVM switch with a known monitor on the other end would just work, but apparently I had overestimated our current level of technology and I was left with a very nice 1920x1080 monitor running at a ludicrous 800x600. Whatever the switch was doing, it meant that Mint couldn't detect the monitor and allow me to select a sensible resolution.

So how do you tell Mint to stop being stupid and run a monitor at a given resolution? This post on the Linux Mint Community site had the answer. In summary, do the following:


  • First, create a 'modeline' using cvt - this is the configuration line that will be added to the monitor settings and contains info on refresh rate, vsync, etc. Note that this uses the VESA standard and so should be compatible with pretty much everything.

~ $ cvt 1920 1080
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

  • This mode info now needs to be added to the monitor settings using xrandr and the info from the above Modeline:

xrandr --newmode "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
xrandr --addmode VGA1 "1920x1080_60.00"

This setting should now be added to the list of default options given for the monitor. Note that this isn't permanent and won't survive a reboot - however, I very rarely reboot my laptop anyway (yay linux!) and the original blog post has info about how to do this if you want to give it a try.

Sunday, 2 June 2013

Moving Around a 3D cube with Mouse and Keyboard (Part 2)

3D rotations broke my brain

So in the first part of this post, I got the cube to respond to mouse movements so you can rotate around it. Inspiring stuff. Next, I want to add keyboard control using good old WASD movement. What I want to recreate is your typical RTS-style control scheme: the camera rotates around a point (done) and that point moves over a plane using the keyboard (definitely not done).

First things first: How do we check for keyboard input? This is actually not quite as simple as just triggering on key press events, as these don't fire often enough when a key is held down. It is also very difficult (maybe impossible) to poll the actual keyboard hardware in an OS-independent way. However, we can use the Qt-provided functions keyPressEvent and keyReleaseEvent to track the state of the keyboard and act accordingly. To do this, just override these functions in the MainWindow object (this is what gets the keyboard events by default) and update a QMap with the status of each key. The actual code I've added is quite simple and is shown below:

header:

public:    
    // check key status
    bool isKeyDown(int key);

private:
    // keyboard map for deciding key presses
    QMap<int, bool> keyboardMap_;

protected:
    void keyPressEvent(QKeyEvent *event);
    void keyReleaseEvent(QKeyEvent *event);


cpp file:

void MainWindow::keyPressEvent(QKeyEvent *e)
{
    // if we're quitting, then fine
    if (e->key() == Qt::Key_Escape)
    {
        close();
        return;
    }

    // otherwise update the keyboard map
    keyboardMap_[ e->key() ] = true;
}

void MainWindow::keyReleaseEvent(QKeyEvent *e)
{
    // update the keyboard map
    keyboardMap_[ e->key() ] = false;
}

bool MainWindow::isKeyDown(int key)
{
    // check in the map to see if the key is down
    if (keyboardMap_.contains(key))
        return keyboardMap_[ key ];
    else
        return false;
}


So we grab any key press or release events and then simply update the QMap with the state of that key code. After that, all we need is an accessor function to allow the widget to query the key state and move the view accordingly.

Now, actually moving the view takes a little bit of thought. We have to be a little careful as to where we put the translation given by the keyboard movement in order to get the RTS-style rotate-around-a-point camera we're looking for. At present, we have:

    // move into the screen
    glTranslatef(0.0f, 0.0f, -6.0f);

    // rotate the cube by the rotation value
    glRotatef(rotValue_.y(), 1.0f, 0.0f, 0.0f);
    glRotatef(rotValue_.x(), 0.0f, 1.0f, 0.0f);

To apply the lateral movement, we need to think about which order to perform these translations and rotations in to get the effect we want, remembering that we are transforming the world relative to the camera. This turns out to be:
  1. Translate back by the zoom factor 
  2. Rotate the coordinate system around the origin (equivalent to rotating the camera)
  3. Translate the view to the current focus point position
Applying these gives the following code:

    // reset the view to the identity
    glLoadIdentity();

    // move everything back by the zoom factor
    glTranslatef(0.0f, 0.0f, -zoomValue_);

    // rotate everything
    glRotatef(rotValue_.y(), 1.0f, 0.0f, 0.0f);
    glRotatef(rotValue_.x(), 0.0f, 1.0f, 0.0f);

    // finally offset by the current viewing point
    glTranslatef(posValue_.x(), posValue_.y(), 0.0f);

This almost gives us the keyboard control we were looking for. However, as it stands, if you just polled the key status and increased or decreased the x and y values, you would always be moving along those fixed axes. What we really want is to move relative to the direction we're facing. Unfortunately, this is where we can't avoid some trigonometry: we need to use the movement speed and the angle of rotation around the vertical axis to work out the required change in the x and y values. Long story short, this code in a new 'mainLoop' function does the job:

    // check for keyboard movement
    if (parentWin_->isKeyDown(Qt::Key_A))
    {
        posValue_.setY( posValue_.y() + (0.05 * sin( PI * rotValue_.x() / 180.0) ) );
        posValue_.setX( posValue_.x() + (0.05 * cos( PI * rotValue_.x() / 180.0) ) );
    }

    if (parentWin_->isKeyDown(Qt::Key_D))
    {
        posValue_.setY( posValue_.y() - (0.05 * sin( PI * rotValue_.x() / 180.0) ) );
        posValue_.setX( posValue_.x() - (0.05 * cos( PI * rotValue_.x() / 180.0) ) );
    }

    if (parentWin_->isKeyDown(Qt::Key_W))
    {
        posValue_.setY( posValue_.y() + (0.05 * cos( PI * rotValue_.x() / 180.0) ) );
        posValue_.setX( posValue_.x() - (0.05 * sin( PI * rotValue_.x() / 180.0) ) );
    }

    if (parentWin_->isKeyDown(Qt::Key_S))
    {
        posValue_.setY( posValue_.y() - (0.05 * cos( PI * rotValue_.x() / 180.0) ) );
        posValue_.setX( posValue_.x() + (0.05 * sin( PI * rotValue_.x() / 180.0) ) );
    }

Note the conversion from degrees (as used by glRotatef) to radians (as required by sin/cos).
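To see the trigonometry in isolation, the W-key case can be pulled out as a pure function (deltaForward and Delta are hypothetical names for illustration, not from the actual code):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

struct Delta { double dx, dy; };

// Given the current rotation around the vertical axis in degrees (as passed
// to glRotatef) and a movement speed, return the change in the x/y focus
// point for a 'forward' (W) move. The other three keys just flip signs or
// swap sin/cos.
Delta deltaForward(double rotXDegrees, double speed)
{
    double rad = PI * rotXDegrees / 180.0;   // degrees -> radians for sin/cos
    return { -speed * std::sin(rad),          // change in x
              speed * std::cos(rad) };        // change in y
}
```

At a rotation of 0 degrees this moves purely along +y; at 90 degrees it moves along -x. In other words, always 'forward' relative to the camera, which is the behaviour we're after.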

Things to note:
  • I've added mouse wheel zoom by overloading mouseWheelEvent and clamping the zoom value.
  • In order to poll the keyboard state at a fast enough rate, I've added a mainLoop slot function that is attached to a timer and then calls the updateGL function.
  • In order to call into the parent window's keyboard map, you need to make the widget aware of it and I personally prefer to store this pointer in a member variable through the constructor rather than having a global variable or static singleton type framework.
  • The rotation/translation order can be difficult to get your head around - try to remember that the camera is static and the transformations apply to the coordinate system!
And we're done! We now have a mouse and keyboard controlled scene to zoom around.

Find the code at:

https://github.com/doc-sparks/Interface/tree/v0.3

Tuesday, 14 May 2013

Painting Space Wolf Grey Hunters

About the only painting I'll ever be able to do

This post is a bit of a change from the previous ones as it actually has nothing to do with computers, which is actually quite an achievement for me. I've been buying Games Workshop crap products for what must be at least 20 years now, starting with the original Space Hulk and 40K Rogue Trader, up to 3rd Edition 40K (bit of a break for uni and not having money or space) and on to 5th Edition and beyond. I have a large portion of both my own and my parents' attics filled with the stuff and I still love it. The 40K lore, the miniatures, the hobby, the game - it's awesome. I am, to all intents and purposes, Games Workshop's bitch.

Though I've always liked painting I've never managed to really get the hang of it. But last year GW released a new set of paints and had proper 'For Dummies' style guides that even I could follow and so after buying more plastic crack that I didn't need, I set about trying to actually complete some models to a high standard. Here are the first off to be completed. What we have here is a squad of Grey Hunter Space Wolves:


Essentially, all I did for each was start with spraying everything with The Fang on top of a Chaos Black (or whatever they call it now) undercoat. Then I applied the base colours, followed by a wash, a layer colour or two and a final highlight to the edges of armour or pads. The specific colours used were:

  • Power Armour - Russ Grey (base), Agrax Earthshade (wash), Russ Grey (Layer), Fenrisian Grey (highlight 1), Rhinox Hide (highlight 2 - armour chips)
  • Furs - Steel Legion Drab (base), Seraphim Sepia (wash), Agrax Earthshade (wash), Mournfang Brown (Layer), Tallarn Sand (layer), Ushabti Bone (highlight)
  • Shoulder Pads - Mephiston Red (base), Agrax Earthshade (wash), Mephiston Red (Layer), Wild Rider Red (highlight)
  • Gold Areas - Balthasar Gold (base), Gehenna's Gold (layer), Agrax Earthshade (wash), Gehenna's Gold (highlight)
  • Metal - Leadbelcher (base), Nuln Oil (wash), Ironbreaker (highlight)
  • Bone - Zandri Dust (base), Agrax Earthshade (wash), Ushabti Bone (layer), Screaming Skull (highlight)
  • Black Areas - Abaddon Black (base), Skavenblight Dinge (highlight 1), Dawnstone (highlight 2), Administratum Grey (highlight 3)
  • Power Sword - Stegadon Scale Green (base), Sotek Green (highlight 1), Temple Guard Blue (highlight 2), Guilliman Blue (wash), Fenrisian Grey (highlight 3)
  • Base - Armageddon Dust (base), Agrax Earthshade (wash), Tyrant Skull (highlight)
This is basically taken wholesale from White Dwarf 388 (because I have no imagination). Some of the things I learnt while painting these marines include:

  • Power armour is quite easy to paint - base coat, wash and then highlight at the edges. Job done.
  • Fur is significantly more tricky. When I next have to do this, I'll avoid Mournfang Brown and just highlight up after washing the base coat of Steel Legion Drab. More practice needed here.
  • Little chips in the armour make a big difference and are easy to do.
  • The new texture paints, though awesome, can get *everywhere* if you're not careful with the brush. And they are a pain to remove if you get them where they shouldn't be.
  • I sodding HATE doing transfers on power armour. I'm guessing I'm missing something, but as far as I can tell, you can't easily put a flat transfer on a convex surface as, thanks to geometry and what not, it can't go flat - much like lining paper in a cake tin. I consequently had to put cuts in the transfers which (because they were really quite flimsy) made it a bugger not to rip them in half. Add to that my forgetting about them, handling the miniature and getting the meticulously aligned transfer stuck to my hand, and this became really rather an annoying procedure. I'll need to look up how best to do this in the future....

Hopefully this will help the next time I want to paint some Space Wolves. Next: on to some Necrons!

The full gallery can be found on my 500px page here. Enjoy :)



Sunday, 28 April 2013

Moving Around a 3D cube with Mouse and Keyboard (Part 1)

I never got the hang of XBox controllers

So moving on from a painfully coloured 3D spinning cube created in OpenGL and Qt (see here), I now want to add user input via mouse and keyboard. The first part of this will be fairly easy and just result in being able to rotate the cube whenever the middle mouse button is pressed - 'mouse look' for those in the know.

So, first up, we need to put an additional function in for checking mouse movement. Qt makes this very easy (not surprisingly) as all widgets can catch mouse events by overriding the mouseMoveEvent. So after adding the following:

header file: 

protected:
   void mouseMoveEvent(QMouseEvent *event);

cpp file:

void OGLWidget::mouseMoveEvent(QMouseEvent *event)
{

}

We now have a function that will catch mouse movement. Or mostly, anyway: by default this only fires when a button is pressed (dragging, basically), which won't quite work for this. You should therefore add the following to the constructor of the widget:

setMouseTracking(true);

This will set the widget to track ALL mouse movements.

So now, how do we ensure the cube moves around with the mouse and stops when we release the button? We need to keep two 'temporary' (but member) variables that record the rotation of the cube and the mouse coords at the moment the button was pressed. The new rotation is then calculated from the difference between that stored mouse position and the current one, added to the stored rotation. As with pictures, code is often worth a thousand words, so all of that can probably be more easily understood by viewing the following:

void OGLWidget::mouseMoveEvent(QMouseEvent *event)
{
    // is the middle mouse button down?
    if (event->buttons() == Qt::MidButton)
    {
        // was it already down? If not, store the current coords
        if (!mouseLook_)
        {
            tmpMousePos_ = event->pos();
            tmpRotValue_ = rotValue_;
            mouseLook_ = true;
        }

        // update the rotation values depending on the relative mouse position
        rotValue_.setX( tmpRotValue_.x() + (tmpMousePos_.x() - event->pos().x()) * 0.2 );
        rotValue_.setY( tmpRotValue_.y() + (tmpMousePos_.y() - event->pos().y()) * -0.2 );
    }
    else
    {
        // turn off mouse look
        mouseLook_ = false;
    }
}

Things to note:
  • I've got an additional flag showing whether mouse look is on or not - I could have set one of the other tmp variables to a special value but this is almost never a good idea and variables are (generally) cheap.
  • I've applied a factor of 0.2 to each mouse movement. This is basically the mouse speed and should be configurable in an ideal world.
  • To incorporate two-axis rotation, I've changed rotValue_ to a QPoint type where x stores the y-axis rotation and y stores the x-axis rotation.

So you can now rotate the cube when holding down the middle mouse button. Next time, we try to actually move the camera using the WASD keys. This will require a few more changes to the rendering code.

Find the code at:

https://github.com/doc-sparks/Interface/tree/v0.2

Saturday, 27 April 2013

Fixing a Broken DNS in Linux Mint 13

Clearly I just have to start remembering more IP addresses

So last week I needed to go abroad (to CERN as it happens) and when I got there, my Mint 13 running laptop decided to throw a paddy and stop talking to the DNS. I could ping the usual Google IPs (8.8.8.8 for example) but DNS lookups just hung, despite the servers being reported correctly (and being pingable) in the Network Settings. Having a look at my resolv.conf, I could see that my DNS settings were set to my Uni's. I couldn't remember if these were put in automatically by the ethernet connection in my office or I'd just dumped them there randomly for some reason but, tellingly, these DNS servers were NOT pingable (I guess they had fallen over or something).

So, I thought, not a problem - just change the DNS servers to Google's and all should be well. Except it wasn't. The laptop was still refusing to talk to anyone by name. Luckily, my phone was not having these troubles and some frantic searching led me to the following blog. It appears someone in Ubuntu land was trying to be clever and started using dnsmasq instead of resolv.conf. I don't know why this was done and really can't be bothered to find out, but on this occasion it meant I could not find whatever black magic was needed to force the Google DNS to be used.

By *mostly* following the blog, I discovered the following worked like a charm to get control of the DNS back to resolv.conf where (in my humble - and probably naive - opinion) it should still reside. First up (and be aware, this should all be done under su/root), stop Mint from using dnsmasq by editing:

/etc/NetworkManager/NetworkManager.conf

And commenting out the dns line:

#dns=dnsmasq

Next, get resolvconf to sort out its links and bring back resolv.conf:

dpkg-reconfigure resolvconf

Just to be sure, do a few more housekeeping bits and pieces and follow up with a restart:

resolvconf --create-runtime-directories
resolvconf --enable-updates
reboot

And now you should be able to edit /etc/resolv.conf as usual and have Mint listen to you. Note that you may want to also change:

/etc/resolvconf/resolv.conf.d/original

for a more permanent setting, as I believe on restart resolvconf will recreate resolv.conf from this.

Last thing to note: While messing around, I tried to select 'Automatic (DHCP) addresses only' under IPv4 Settings for the wireless I was using. THIS WAS A BAD IDEA! It stopped the above working for other reasons I didn't understand. I plan to revisit all this at some point to try to better understand how DNS is handled in Mint/Ubuntu as I'm sure there was probably a better way around this...

Friday, 29 March 2013

Using Python within C++

I tried to think of a Jake the Snake reference... but failed.

I now have a working (and easily available) package for programming my Lego Mindstorms NXT (look here for more info). The downside is that it's written in python and I'm more of a C++ kind of guy. I could root around to try to find something that's C compatible but as this was easily available in the Mint repos, I thought it might be a better idea to just call the python code within C++.

This is not as easy as you'd think, or at least, it isn't if you want to do it right. If you only care about running some commands through the Python interpreter and not paying any attention to the results, then you can easily use something like the following:

Py_Initialize();
PyRun_SimpleString("import os");
PyRun_SimpleString("print 'hello world'");
Py_Finalize();

This is essentially a python version of a system() call (kind of...).

Anyway, we're getting ahead of ourselves. First, you need to gain access to the python headers and libraries. On Linux, this is fairly trivial as I would guess all distros have the python2.7-dev package (or whatever version you choose) available - note that it may not be installed by default though. For Qt, you can then add the following to your .pro file:

 LIBS += -lpython2.7

and then you can include the python header:

 #include <python2.7/Python.h>

Putting the simple hello world code into a main function should build and run as expected. So that's the basics of accessing python within C++. How do you go about integrating this properly into your code though? For that, you need to go a bit more in depth into how python is written and how it handles memory and objects.

One of the main selling points of python is its garbage collection and object reference handling - it counts all references to any object and deletes objects that don't have any. Within the framework of the python language, this is fine. However, when you're poking around under the hood, you have to be careful to do what python usually does for you. In other words, you have to be very careful about tracking object references that get passed back to you from function calls (and nearly all function calls pass back object references!).

To integrate with the PythonNXT python module, I've found the following functions the most useful. There are many others (obviously) that can be found here, but this should give you the basic idea of what's going on and what you need to be careful about. The main thing to remember is that everything returns/deals with a PyObject base type (or rather, a pointer to one) and you have to keep track of anything that gets returned to you from the API functions.

Py_Initialize

Initialises the python interpreter. Call this before doing anything else!

Py_Finalize

Shuts everything down to do with the interpreter. Don't call anything after this!

PyObject* PyImport_ImportModule(const char *name)

This will import the given module and return a new reference to the module object.

PyObject* PyObject_GetAttrString(PyObject *o, const char *attr_name)

This will return the named attribute of the given object as a new reference. For example, if you've just loaded a module and want to call a function, use this with the function name to get a reference to that function. Or, if you have a python object that you want to call a method on, use this to get a reference to that method and then use PyObject_CallObject to actually call it.

PyObject* PyObject_CallObject(PyObject *callable_object, PyObject *args)

When passed a reference to a function (say from the above, PyObject_GetAttrString), this will call the given function with arguments given (specified as Python objects). This returns a new reference which will be the pythonified version of the actual function return value.

PyObject* Py_BuildValue(const char *format, ...)

This constructs a python object (tuple, list, single value, etc.) based on the given format and supplied arguments (e.g. Py_BuildValue("(i)", 1) will give a tuple of one integer value). Used for PyObject_CallObject above - note that it seems you should always create tuples for this, not just single values (e.g. "(i)" rather than "i")!

void Py_XDECREF(PyObject *o)

This will decrease the reference count of the given object and therefore delete it if the count drops to 0. Note that this version checks for NULL pointers being passed. This is the main thing to remember - you must call this on any returned object when you're done with it, unless the function returns a 'borrowed' (rather than 'new') reference. If you don't, you'll be leaking memory like a sieve.
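Putting those functions together, a minimal call sequence might look like the sketch below. This is illustrative only: the module name nxt_stub and function get_battery are made-up placeholders (not part of PythonNXT as far as this post goes), error handling is reduced to early returns, and it needs the python2.7 headers installed to build.

```cpp
#include <python2.7/Python.h>

// Hypothetical sketch: import a module, call module.get_battery(1), read an
// integer result, and DECREF every new reference we were handed back.
long callGetBattery()
{
    Py_Initialize();

    PyObject *module = PyImport_ImportModule("nxt_stub");   // new reference
    if (!module) { Py_Finalize(); return -1; }

    PyObject *func = PyObject_GetAttrString(module, "get_battery"); // new ref
    PyObject *args = Py_BuildValue("(i)", 1);               // tuple, new ref
    PyObject *result = func ? PyObject_CallObject(func, args) : NULL; // new ref

    long value = result ? PyInt_AsLong(result) : -1;

    // release everything we own - forget one of these and you leak
    Py_XDECREF(result);
    Py_XDECREF(args);
    Py_XDECREF(func);
    Py_XDECREF(module);

    Py_Finalize();
    return value;
}
```

Note how every pointer that came back from the API gets exactly one Py_XDECREF, which is the discipline described above.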


This covers the basics of using Python code through the C-API. In addition to these, the following (and related) are worth looking up to manipulate Lists and Tuples. They're fairly self-explanatory:

PyList_GetItem
PyTuple_GetItem


Finally, the basic types (int, double, etc.) have dedicated python object types associated with them, so you can cast a PyObject* to one of these (e.g. PyIntObject) to access the actual data value.

In a future post, I'll go into a bit of detail how I've used this to create a basic interface to the PythonNXT module through C++/Qt.

Sunday, 24 March 2013

Getting started with Git and Qt

I'll admit, they could have probably come up with a better name

So now I've started producing code that I would prefer not to lose, I've turned my mind to Version Control solutions and, ever one to go with the crowd, it seems that git is the way forward. I've used a few VCSs in my time, including CVS (urgh), SVN (meh) and even Microsoft's Source Safe (Aaaarrrgghhh!), but now I've had a chance to play with git, it does seem to be significantly easier and more friendly. That being said, I have to admit that one of the major selling points is the free github account with repo space. This allows me to not only store my code remotely, but also develop on whatever machine I happen to find myself in front of.

There are many good tutorials for git, so I'm just going to note here what I found useful to know and how to get it working with Qt. Fundamentally, git is a distributed VCS that allows copies of a code repository to be taken and edited in isolation from the main repo. Once the user is happy, they can push their changes back to the main repo. Each local copy can be committed to, rolled back, etc. by itself before this push takes place. The key commands to achieve all of this are:

git init     # initialise a git repository
git clone    # clone an existing repo (e.g. from github)
git add      # add files to a repo
git commit   # commit changes to the local copy of the repo
git push     # push any committed changes to the master repo
git tag      # Tag the current version of the repo

Getting this working with Qt is very easy as Qt comes bundled with git support. For my projects, as I'd already started them, I first needed to create a git repo on github for them, clone the (empty) repo, copy the necessary files into the appropriate directory and finally add, commit and push the changes. Note that you don't want git to manage the .pro.user Qt file as that is created on a per-machine basis.
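That workflow can be sketched as the following shell session. To keep it runnable offline, a local bare repo stands in for the empty GitHub repo, and a placeholder .pro file stands in for the real project files - substitute your actual github URL and sources in practice:

```shell
#!/bin/sh
set -e

# clean up any previous run
rm -rf /tmp/Interface.git /tmp/Interface

# stand-in for the empty GitHub repo: a local bare repo
git init --bare /tmp/Interface.git

# clone the (empty) repo
git clone /tmp/Interface.git /tmp/Interface
cd /tmp/Interface

# copy the project files in - here just a placeholder .pro file; note that
# the per-machine .pro.user file should NOT be added
echo 'TEMPLATE = app' > Interface.pro

# add, commit and push back up
git add Interface.pro
git -c user.name=demo -c user.email=demo@example.com commit -m "Initial import"
git push origin HEAD
```

After the push, the 'GitHub' side contains the first commit and any other machine can clone from it.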

This gives you a working repo that you can now clone and start using. I found this easiest to do by deleting the previous repo from the local machine, firing up Qt and then selecting 'New Project' and then 'Project from Version Control' and 'Git Repository Clone'. Put in the Github details of the repo and Qt will do the rest. At this point you can do all editing, etc. within Qt and when you're happy, perform commits and pushes through this as well (Tools -> Git). The only thing I've currently found you can't do is tag through Qt - this must still be done through the command line.

Note that I'm sure there is a much easier way of setting up the project but this is how I got it to work and as it was trivial to do, couldn't really be bothered to look around for a more elegant solution. If you wish to look at my developing code base and laugh at the poorly written code, you can find it here:

https://github.com/doc-sparks?tab=repositories

Update: During my time with git, I found out that removing a tag can be a bit of a pain. I found the following to work quite well (creating a dummy one first as an example):

# start with a tag
git tag test

# push this to github
git push origin --tags

# delete this tag from local repo
git tag -d test

# finally, push this tag change to github (note that --tags won't work as the removed tag isn't included)
git push origin :refs/tags/test
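For reference, here's that whole tag-removal sequence run end to end, again against a throwaway local bare repo standing in for GitHub (the paths and tag name are just for illustration):

```shell
# set up a throwaway 'remote' and a clone with one commit in it
rm -rf /tmp/tag-remote.git /tmp/tag-clone
git init --bare /tmp/tag-remote.git
git clone /tmp/tag-remote.git /tmp/tag-clone
cd /tmp/tag-clone
git config user.email "you@example.com"
git config user.name "Your Name"
git commit --allow-empty -m "initial commit"
git push origin HEAD

# start with a tag and push it up
git tag test
git push origin --tags

# delete the tag from the local repo
git tag -d test

# finally, push the deletion (--tags won't do, as the removed tag isn't included)
git push origin :refs/tags/test
```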

Saturday, 23 March 2013

A Spinning Cube with OpenGL and Qt

At one point, even Crysis looked like this

As I mentioned before, I've always liked 3D art and games. I guess it comes from seeing the transition from basic 2D platformers to the 3D awesomeness that was Doom first hand (and if you actually need to click on that link to know what Doom is, you should be ashamed). I still have (vague) ideas of doing my own at some point but that is obviously now a lot easier with things like Unity and the Unreal Engine. There is very little need to code your own engine these days unless you're a major game studio (in which case you're probably not reading this).


Having said all that, I think it's always good to go over some of the basics of these technologies so you have a vague idea how they work and are coded. Plus, there are many situations where the pre-packaged engines aren't useful for what you're trying to do (as in this case here - but more on that in another post some time down the line!). To that end, I decided to come up with the minimal setup for running an OpenGL program within the Qt framework. This would provide me with a good basis for going forward with anything 3D related in the future and also help me understand the basic requirements of an OpenGL program.

Note: There are many good tutorials on the web for this (I personally use Neon Helium). I'm putting this here (as with all my posts) to record my own personal experience and to help me remember just what I need to know!


To start with, create a New Project in Qt: New Project -> Qt Widget Project -> Qt GUI Application. This sets you up with a basic main window and main cpp file. Now, add in a new widget that will be your main OpenGL widget (Right Click project -> Add New... -> C++ -> C++ Class, making sure you set the base class as QGLWidget and the Type as QObject).


This sets up the widget class for you to add to. The next thing is to make this appear in (and take over) the main window, so add the following to the main window constructor:


 
MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent)
{
    // Show the interface fullscreen
    showFullScreen();

    // create and add the openGL Widget
    OGLWidget *w = new OGLWidget();
    setCentralWidget(w);
}

You may also want to override the protected keyPressEvent to allow you to quit out:

 
void MainWindow::keyPressEvent(QKeyEvent *e)
{
    if (e->key() == Qt::Key_Escape)
        close();
    else
        QWidget::keyPressEvent(e);
}

That sets up the basics for Qt; now comes the OpenGL bit. There are 3 methods you need to override in OGLWidget (or whatever your widget is called) to get your program to actually show anything (note the includes required - put these at the top of your implementation file):

initializeGL

 
#include "oglwidget.h"
#include <GL/glu.h>
#include <QDebug>
#include <QTimer>
#include <QMouseEvent>

void OGLWidget::initializeGL()
{
    // enable depth testing - required to stop faces at the back showing through those in front
    glEnable(GL_DEPTH_TEST);

    // set up the timer for a 50Hz view and connect to the update routine
    refreshTimer_ = new QTimer(this);
    connect(refreshTimer_, SIGNAL(timeout()), this, SLOT(updateGL()));
    refreshTimer_->start(20);
}
  • Called (not surprisingly) just before the first call to resizeGL or paintGL
  • The only OpenGL call here is glEnable(GL_DEPTH_TEST), which turns on depth testing - without it, faces at the back can show through those in front
  • Other than that, I just set up a timer to repaint the screen at 50Hz

resizeGL

 
void OGLWidget::resizeGL(int width, int height)
{
    // Set the viewport given the resize event
    glViewport(0, 0, width, height);

    // Reset the Projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();

    // Calculate The Aspect Ratio Of The Window and set the perspective
    gluPerspective(45.0f,(GLfloat)width/(GLfloat)height,0.1f,100.0f);

    // Reset the Model View matrix
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}
  • Called on a resize of the widget, with the new width and height passed through
  • This is where the viewport is set up and the basic matrices are reset to the identity
  • glViewport - sets where in the widget to show the 3D view
  • glMatrixMode - selects which matrix subsequent calls operate on. Of interest here are the ModelView matrix (mapping local object coordinates to eye or camera coordinates) and the Projection matrix (how eye coordinates are projected and clipped to the screen). Both are reset to the identity here
  • gluPerspective - a GLU routine that sets up a nice viewing frustum with near and far clipping planes

paintGL

 
void OGLWidget::paintGL()
{
    // clear the screen and depth buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // reset the view to the identity
    glLoadIdentity();

    // move into the screen
    glTranslatef(0.0f, 0.0f, -6.0f);

    // rotate the cube by the rotation value
    glRotatef(rotValue_, 0.0f, 1.0f, 0.0f);

    // construct the cube
    glBegin(GL_QUADS);

    glColor3f(   1.0,  1.0, 1.0 );
    glVertex3f(  0.5, -0.5, 0.5 );
    glVertex3f(  0.5,  0.5, 0.5 );
    glVertex3f( -0.5,  0.5, 0.5 );
    glVertex3f( -0.5, -0.5, 0.5 );

    // And 5 others like this...

    glEnd();

    // finally, update the rotation
    rotValue_ += 0.2f;
}
  • Called on any redraw event
  • Here the actual polygons are drawn
  • glClear - used to clear both the colour buffer (basically clearing the screen to the colour set via glClearColor) and the depth buffer
  • Reset the MODELVIEW matrix (the default here) to the identity
  • Apply both a translation and a rotation to the current (ModelView) matrix
  • Finally, set up to draw quads and specify the colour and vertex positions for all faces of the cube

And there we have it! This produces a basic spinning cube in front of the camera while using a full screen Qt window to display it. Next, mouse control...

Update: The code for this can be found in my github repo:

https://github.com/doc-sparks/Interface/tree/v0.1

Thursday, 7 March 2013

Editing Bootable DVDs as ISO images

 When someone says ISO, I reach for my Q*Bert

A friend recently asked how to go about changing the contents of an install DVD with freely available tools that don't have a size limit. This was something I hadn't explicitly done before and it sounded like quite a fun thing to try (you may not entirely agree with the use of the word 'fun' there), so I thought I'd look into it and try to hack my Win7 install DVD a bit. For this I used Ubuntu (though I'm guessing almost any Linux install would work), dd, mount, mkisofs and a script called geteltorito.pl (available here). I was going to use ISO Master but hit problems which I will get to later. If you're of the Windows persuasion, I'm not sure how to do this using free tools; however, there's nothing to stop you creating a live boot CD/USB stick of a Linux distro, booting into that and working in some area of your existing HD.

Anyway, the first step was to create an ISO image of the DVD. This was fairly easy using:

 
markwslater@markwslater-System-Product-Name:~$ dd if=/dev/cdrom of=~/win7_image.iso
6298216+0 records in
6298216+0 records out
3224686592 bytes (3.2 GB) copied, 229.392 s, 14.1 MB/s
markwslater@markwslater-System-Product-Name:~$ ls -ltrh ~/win7_image.iso 
-rw-rw-r-- 1 markwslater markwslater 3.1G Mar  5 23:26 /home/markwslater/win7_image.iso

So we now have the ISO image which we could (in theory) start messing with through various tools. Apparently (though I didn't confirm this) most of the free ones or trials for Windows have an upper limit on the size of ISO you can mess with (around CD size by all accounts). Looking around I happened across ISO Master and thought this would do the job. Unfortunately, plugging the above ISO image in only ended up with a README that said:
 
This disc contains a "UDF" file system and requires an operating system
that supports the ISO-13346 "UDF" file system specification.

Which was not the most helpful. So, after a bit of searching, I found the following thread that explained how to do things with basic Linux tools. The key thing is to preserve the boot record of the ISO and recreate the image with this intact. The aforementioned geteltorito.pl perl script did a grand job of this:

 
markwslater@markwslater-System-Product-Name:~$ ./geteltorito.pl  win7_image.iso > ~/boot.bin
Booting catalog starts at sector: 22 
Manufacturer of CD: Microsoft Corporation
Image architecture: x86
Boot media type is: no emulation
El Torito image starts at sector 734 and has 8 sector(s) of 512 Bytes
Image has been written to stdout ....
markwslater@markwslater-System-Product-Name:~$


It's then quite easy to mount the existing ISO image (read only), copy this elsewhere, change the permissions and do what's necessary:

 
mkdir windvd
sudo mount -t auto -o loop win7_image.iso /home/markwslater/windvd
cp -r windvd windvd_rw
chmod -R u+w windvd_rw
echo stuff > windvd_rw/my_file.txt


So all that remains is to recreate the ISO with the boot.bin file created above:
 
mkisofs -udf -b boot.bin -no-emul-boot -hide boot.bin -relaxed-filenames -joliet-long -D -o ~/new_win7.iso  ~/windvd_rw


Burn this to a DVD however you wish and Bob's your uncle - you have recreated the Win7 install DVD after messing with it :)

Wednesday, 27 February 2013

Installing and First Steps with Ubuntu Studio and StealthPlug

Do you wanna get rocked?

A few weeks ago I finally managed to repartition a Windows Vista laptop so I could install Ubuntu Studio on it and try my hand at recording again, something I hadn't done for quite some years. For those unaware, Ubuntu Studio is a low-latency flavour of Ubuntu (shocker) that comes bundled with a whole host of very useful music making and production software.

To install Ubuntu Studio, the first step (not surprisingly) was to download the ISO image (here is the download page). Now came the first wrinkle, which was that I seemed to have great trouble having the BIOS recognise large USB sticks. I would usually just dd the image to a stick and boot from it but this seemed to fail with the 3-4 sticks I had lying around. I therefore resorted to the tried and tested method and burnt a DVD. This worked without problems and I breezed through the install process (I seem to remember a question asking about including non-open source software which I said yes to. That was the only thing I had to actually think about).

So now I had a working install of Ubuntu Studio dual booting on my old Vista laptop. Next step: getting my StealthPlug working on it. Now this is where the audio system in Ubuntu Studio requires a bit of explanation (and note that I'm by no means an expert here!). It seems the method that gives the smallest latency is Jack. This is a very clever bit of software that 'registers' any inputs and outputs (both physical and software created) and allows you to link any and all of these as you like (like putting jack leads between them, which I guess is where it gets its name. Or that could just be a massive coincidence). As long as the Jack software is running you can hotplug these as much as you want.

So, how to get this to recognise the StealthPlug? Well plugging it in seemed to make it appear in both /dev and in the main UI. However, by default Jack runs with the main sound card and the inputs/outputs supplied don't show up. The secret of this is in the setup panel. So after starting Jack (Audio Production -> QJackCtl), go to setup and you should see something like this:


If you change the selected hardware device to the plugged-in device (the arrow next to Interface will tell you which - hw:1 for my StealthPlug, for example) you should be away. To test it, fire up Guitarix (a rather cool open source amp simulator), select a sound (I went for HighGainSolo here) and then wire up the Jack connections something like the following:



and it should start making noise. Well, I say that - make sure you're using the headphone output of the StealthPlug otherwise you won't hear anything!

So I now have a low latency monitor solution running with very little trouble. Next job: Direct output to the main card while still using the StealthPlug as input and add in MIDI and a USB mic as well.

Thursday, 7 February 2013

Modelling a Tyre and Rim in Blender

I've never been good at drawing - so now I get the computer to do it.

A few months ago, I stumbled across a website called Blender Guru. I've been tinkering with 3D for many years, going all the way back to using Imagine on the Amiga. I've never really got anywhere though, mostly due to a complete lack of artistic ability but also in part to the fact that 3D modelling is very difficult, the associated software quite pricey and there being a serious lack of tutorials on the subject (at the time anyway). In these enlightened times though, at least some of these problems have been solved thanks to the incredibly powerful (and more importantly, free) Blender, coupled with an internet's worth of tutorials, guides, etc. etc.

Now Blender is very good. Professional level good in fact. However, it also has one of the most unintuitive interfaces I've ever come across this side of a text adventure. On opening it up, there are buttons everywhere: most with cryptic names, some which produce more buttons and some that are hidden unless you know a particular incantation to reveal them. The learning curve can most accurately be described as a step function. Couple this with 3D modelling still being quite a tricky thing to do and I thought it would be another thing I'd play with for a bit and then bounce off.

And then I found Blender Guru. This site contains many cool things but mostly, it contains very detailed and easy to follow video tutorials (some just text) that take you through all sorts of aspects of modelling. It not only explains what and why you're doing things as regards the modelling, it also manages to get you through the crazy interface to the point where it even starts to make sense.

Currently, I've only followed one tutorial through to completion (this one), but here are the results:



It's not perfect and I still need to work doing the lighting (I'm hoping another tutorial will help me on that) but it's a start :)

One point to note: I modelled the rim and tyre separately but, when combining them, I really wanted to try 'linking' the tyre object into the rim scene so I could alter the tyre in the other file and have the changes show up in the rim file. This was not as trivial as I thought: just selecting 'Link' from the File menu, navigating into the Blend file to the parent Tyre object and selecting it did bring the object in, but with its position and rotation fixed. This wasn't what I was looking for. A swift prayer to the Google god showed me that you actually need to Group the to-be-linked object in its file and link to that rather than the object itself. This worked like a charm.

As soon as I've set up a git repository, I'll upload all .blend files and supporting files. Next on the tutorial list is an 'Introduction To Texture Nodes'. Sounds swish.



Monday, 4 February 2013

Controlling Lego Mindstorms with Linux

In another 200 years I might have built the terminator

Last Christmas, I was the very lucky recipient of the incredibly awesome Lego Mindstorms kit. Here's a picture of all that awesome:


Yes indeed - computer controlled Lego. If I'd got this 20 years ago I wouldn't have seen daylight until I had to leave home.

Now, the way this works is that there is a microprocessor controller brick that can have up to 3 motors and 4 sensors connected to it. In theory you build your robot (or whatever) using the included Lego (and any other bits you have lying around), design your program for it using the included LabVIEW-based language, download it to the control brick and away you go.

Now this is all well and good and gives you quite a bit of control. Here's a case in point:


However, though I appreciate the benefits of LabVIEW, I'm more of a C++ kind of guy. I also have a long term plan of using another of my presents this year, a Raspberry Pi, as the main controller, and maybe throwing in an Arduino as well for a bit more flexibility.

This will therefore necessitate an API interface to the controller. A quick bit of googlage pointed me at a promising looking Python-based version: NXT-python. This not only allowed all the file access and compilation options I could want, but also (and this was the important bit) had a direct, real time control option. What was even better was that my Mint install had it in the software manager (search for 'nxt'). A couple of clicks later and it was ready to try out. Awesome.

Or not. The version in the repo is a bit behind the main release (V2.2.1-2 instead of V2.2.2) and contains a rather critical Ultrasonic sensor bug. However, I was still able to plug the brick in via USB (after building the basic tracked vehicle in the instructions), turn it on, and use the following code to get it to move rather drunkenly around:

 
import nxt.locator
from nxt.motor import *

def spin_around(b):
    m_left = Motor(b, PORT_B)
    m_left.turn(400, 360)
    m_right = Motor(b, PORT_C)
    m_right.turn(-400, 360)

b = nxt.locator.find_one_brick()
spin_around(b)


Obviously, this requires you to plug motors into ports B and C :)

This code was shamelessly nicked from the examples that came with the nxt-python install and can (probably) be found here:

/usr/share/doc/python-nxt/examples/


These contain code for using the speaker and reading the sensors, the latter of which required a bit of hacking to fix for the ultrasonic one. If you run it as is, you get the error:

    sensor = Ultrasonic( BRICK, inPort)
  File "nxt-my\nxt\sensor\generic.py", line 95, in __init__
    super(Ultrasonic, self).__init__(brick, port, check_compatible)
  File "nxt-my\nxt\sensor\digital.py", line 73, in __init__
    sensor = self.get_sensor_info()
  File "nxt-my\nxt\sensor\digital.py", line 156, in get_sensor_info
    version = self.read_value('version')[0].split('\0')[0]
  File "nxt-my\nxt\sensor\digital.py", line 143, in read_value
    raise I2CError, "read_value timeout"

As it's basically saying, there is a timeout issue when reading the ultrasonic sensor. Again, Google came to my rescue and pointed me here. After making the correction suggested (i.e. increasing the loop count up to 30 on line 84), all was right with the world.

So I now have a computer controlled robot (sort of) that can be told what to do through python. This is certainly a start but if I'm going to control it with the kind of code I have in mind, I'm going to need something a bit more heavy duty. Next job: running python from C++.

Saturday, 19 January 2013

Shrinking Partitions on Vista

Well, that was significantly harder than expected...

Earlier this week I thought I'd do something that you'd have thought would be quite simple. Setup my old(ish) HP laptop running Windows Vista for dual booting with Ubuntu Studio. Now, the problem may already be apparent in that last sentence: Vista.

Despite all the negative press Vista got, I have to admit that when I bought a new laptop 6(ish) years ago it ran fine and I didn't have any real problems. It was only after about 4 years of fairly heavy use that I decided a new machine was needed (both laptop and desktop) and onto Windows 7 I went. Now, with the onset of the idiocy that is the Windows 8 interface on a desktop, I've been gradually switching to Linux and have almost managed it thanks to Steam finally going the full penguin. One of the last things shackling me to Windows was the many options for DAWs (Digital Audio Workstations), but after stumbling across Ubuntu Studio I thought that would be the way forward.

So, having already set up my Windows 7 desktop to dual boot Ubuntu without any troubles (TF2 on Linux. Oh yes.), it seemed that doing a similar thing for Vista would be easy enough. Oh how wrong can you be....

(Note: Obvious caveats should be applied - backup your data, you computer may explode, etc, etc. :))

The first step was to clean out as much of the accumulated detritus that always seems to gather on a machine I own. This gave me ~75GB free of the 150GB disk. So far so good. Next was to shrink the partition Windows was running on (which took over the whole disk) to provide space for the new Linux partition. So, open up Windows Explorer, expand Control Panel in the tree view, select Administrative Tools, then go to Computer Management. Select the C: drive, right click and go to Shrink.



This gave a whole 97MB from the 75GB available. Bugger.

Googling eventually got me to How To Geek, which basically described the problem: Windows decides to write important (and, while running, immovable) files all over the disk with no real care for what it's doing. Instead of keeping these files near the start of the disk, allowing the movable files to be put everywhere else and therefore making changing the partition size easy, these unmovable files stop you altering the partition much within Windows, and it's *very* hard to do it outside of Windows without bricking your install.

So, long story short, how do you get around Vista not allowing you to shrink your partition? As described in the page above, first close all programs (to free memory), then disable hibernation:


In Command Prompt (with admin rights) do:

powercfg.exe /hibernate off

Next, disable System restore:


Then disable the pagefile (swap space by another name - hence the closing of programs above):



Then, if you're lucky, installing the free tool Auslogics Disk Defrag and running the optimisation may get you most of the way there:



Note you may come up against the System Volume Information folder in the C: drive root, which seems to contain old restore points, in which case you'll probably have to briefly live boot into Linux (or whatever) and delete it from there.

If, after all that, Shrink still isn't playing ball, you will need to get a bit more serious. The Free Trial of Perfect Disk is what I had to resort to here:


Install it, select your drive, set the defrag to 'Prepare for Shrink' and then select the boot time defrag option (this will hopefully shift the files causing the blockage). After that's done, and just to be sure, finally run its own online defrag. After all this, running Shrink will hopefully provide significantly more space to free up for other partitions. Note that I still couldn't shrink as much as it said I could, as it said 'Access Denied' a lot. However, by this point I'd gained 30GB of unallocated space and thought I'd cut my losses :) The last step is to re-enable the pagefile (a must) and hibernation (if you feel the need).

Now, to be fair, this is quite likely a limit of Vista rather than Windows in general, and it is a rather advanced use case. However, it does seem a little poor that you have to go to such trouble to manage your disks with Windows :(

Anyway, I will return with a briefer post on installing Ubuntu Studio and getting my StealthPlug working, as this was remarkably easy (though not entirely trouble free).