In order to use services installed via docker-compose from both inside and outside of the docker network, I have started using DNSMasq. This lets me resolve hosts and services in the .dev domain from either side of the docker network. I have created a systemd service definition that starts DNSMasq after the network comes up. It ties into openresolv, so that the configuration survives dynamic network changes (like VPN). Here are the config files:
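In outline (the docker bridge address 172.17.0.1 and the unit name are placeholders; adjust to your machine):

```
# /etc/dnsmasq.conf (sketch)
# resolve everything under .dev to the docker bridge, where the
# containers publish their ports
address=/dev/172.17.0.1
listen-address=127.0.0.1
bind-interfaces
```

```
# /etc/systemd/system/dnsmasq.service (sketch)
[Unit]
Description=DNSMasq for the .dev domain
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/dnsmasq --keep-in-foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The openresolv tie-in is a single line in /etc/resolvconf.conf (name_servers=127.0.0.1), which keeps the local DNSMasq first in /etc/resolv.conf across network changes.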
I want to run a Windows virtual machine on my Arch Linux development laptop, in order to be able to run Microsoft’s Visual Studio, which I used to enjoy massively in the past, particularly in combination with JetBrains’ ReSharper. Recently I have gravitated more towards a lighter stack, preferring F# over C#. The tooling required to do the same kind of work in F# is much lighter than in C# (depending on the use case, of course), and I have always favoured an approach that lets me code using a text editor.
Enough rambling. For this particular scenario I need Visual Studio, and by extension Windows. I am going to go with Windows Server 2012 R2, as I find the server versions of Windows to be generally more amenable to running in a virtualised environment. I also don’t care much for news or music in a VM, so running Windows Server doesn’t limit me in any way.
The options for virtualization
Considering the options, it basically boils down to hardware-assisted virtualization versus full virtualization (VMware, Parallels, etc.). I have long been on a Mac as my main box, and have often felt the slowdown incurred by full virtualization solutions, so being on a shiny Arch Linux box it was an easy decision to go with hardware-assisted virtualization (HVM).
The choice then is between Xen and KVM, but I think the preparation for the Xen environment is a lot more intrusive than the KVM option. I also found a recent performance comparison that essentially showed no major performance difference between the technologies. Bring on the flame wars in the comments ;-)
So KVM it is going to be for me.
Installation
Installing the required packages couldn’t be easier given Arch’s excellent package manager. sudo pacman -S qemu brings all the required bits onto my machine, and looking through the installation instructions in the Arch wiki shows that I already have all the required kernel modules on my machine. So far (I’m 2 months in) the experience on Arch with regards to package quality and packages being up to date has been absolutely brilliant, so I’m still in my honeymoon with this distro.
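To verify this yourself, something along these lines does the trick (kvm_intel assumes an Intel CPU; on AMD it would be kvm_amd):

```
# check that the cpu supports virtualisation and the modules are loaded
lscpu | grep -i virtualization
lsmod | grep kvm    # expect kvm plus kvm_intel (or kvm_amd)
```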
Networking
Here be dragons :-)
Considering the options for the network, it becomes evident that there are two ways to go about it:
The fully integrated solution, which makes it really easy on the user by effectively creating VLANs and bridges in the kvm/qemu stack and hiding that completely from the user. This solution doesn’t allow an optimised interface in the guest, and as a result is slower than the alternative.
The bridged networking version, which pushes most of the management complexity onto the user, but provides the benefit of a virtio interface that maps the IO directly to the host, and is much faster as a result.
Thinking “how hard can it be”, I decide to take a stab at the bridged networking option. Now this is a bit of a pain on a laptop, if you want the guest OS (Windows in my case) to be able to call out to the network. This is a hard requirement in my case, for being able to get Windows updates, etc.
The reason it is difficult is that wireless interfaces in Linux generally can’t participate in kernel-level bridging (802.11 won’t transmit frames with a source MAC other than the card’s own), which means that they can’t be slaved to the bridge.
The alternative is to create a bridge device and forward the guest’s traffic to the wifi interface via network address translation (NAT), using the Linux firewall (iptables).
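In essence the forwarding boils down to something like this (qbr0 as the bridge name and wlp3s0 as the wifi interface are placeholders for your device names):

```
# let the kernel forward packets between interfaces
sysctl net.ipv4.ip_forward=1

# masquerade guest traffic leaving via the wifi interface
iptables -t nat -A POSTROUTING -o wlp3s0 -j MASQUERADE
iptables -A FORWARD -i qbr0 -o wlp3s0 -j ACCEPT
iptables -A FORWARD -i wlp3s0 -o qbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```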
Having NetworkManager as my main mechanism for connecting to wifi (I find the GUI way a bit easier than Arch’s preferred netctl), I need to come up with a way of making the bridge start on system startup, and of attaching the interface to the bridge when I start the VM. The attaching bit is pretty much handled for me in recent versions of kvm, where I can give it a flag with a bridge interface and it will handle the rest.
So I decide to buy into the brave new world that is systemd, thinking I’ll be able to use this knowledge in the future, as that behemoth seems to not stop for anything, and given the engineering power behind it, I would be surprised to see it go away any time soon.
Create the bridge, and start on system startup
To create the bridge device I add the file
/etc/systemd/network/qbr0.netdev:
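It only needs to declare the bridge device:

```
[NetDev]
Name=qbr0
Kind=bridge
```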
For its network configuration I add /etc/systemd/network/qbr0.network:
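The subnet is an arbitrary private range (192.168.53.0/24 here is a placeholder; anything that doesn’t clash with your LAN works):

```
[Match]
Name=qbr0

[Network]
Address=192.168.53.1/24
```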
In order to enable the network part of systemd I need to enable the systemd-networkd daemon: systemctl enable systemd-networkd. Simple enough.
Allow qemu to connect to the bridge
In order for qemu to be allowed to use the bridge I add the file
/etc/qemu/bridge.conf, containing only the line: allow qbr0.
Add dnsmasq to provide IPs to guests
In order for the guests to be able to receive IPs, I add dnsmasq to serve IPs on the bridge device.
I add the configuration file /etc/dnsmasq-qbr0.conf (apologies for the top-level folder, this is all still a bit rough around the edges):
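A sketch of its contents; the MAC address is a placeholder, and the addresses match the bridge subnet chosen above (dnsmasq advertises the host as router and DNS by default):

```
# serve dhcp only on the bridge device
interface=qbr0
bind-interfaces
dhcp-range=192.168.53.50,192.168.53.150,12h
# pin the windows guest to a fixed address via its MAC
dhcp-host=52:54:00:12:34:56,192.168.53.101
```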
Note the assignment of the fixed IP; this is so that I can reliably RDP to the same instance without having to go down the DNS route.
I also add a service to systemd, to make this start on boot (with a dependency so that the network service has started before it):
/etc/systemd/system/dnsmasq-qbr0.service
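Along these lines:

```
[Unit]
Description=dnsmasq for the qbr0 bridge
After=systemd-networkd.service
Requires=systemd-networkd.service

[Service]
ExecStart=/usr/bin/dnsmasq --keep-in-foreground --conf-file=/etc/dnsmasq-qbr0.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```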
Adding a start script for the KVM instance
Finally I modify a script from the Ubuntu forums by the awesome okky-htf (shout out, couldn’t have done it without you!).
This is what I run to start the VM:
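The gist of it is something like the following; the disk image path, memory size and MAC address are placeholders (the MAC matches the dhcp-host entry above), and the original script adds more plumbing around it:

```
#!/bin/bash
# Start the windows guest with a virtio disk and nic, attached to qbr0.
# -net bridge uses qemu-bridge-helper, governed by /etc/qemu/bridge.conf.
# For the initial installation, additionally pass the install media and
# boot from it: -cdrom win2012r2.iso -boot order=d
exec qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp 2 \
  -m 4096 \
  -drive file=/var/lib/kvm/windows.img,if=virtio \
  -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
  -net bridge,br=qbr0 \
  -vga none -nographic
```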
I have set up RDP in the guest, hence the graphics are disabled at the bottom. For the initial installation and first boot, the boot order needs to be changed and the drives for the CD ISOs need to be enabled.
I use this script to RDP into the instance, and am very happy with the performance and stability of the setup.
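For completeness, the RDP side is essentially a one-liner (assuming rdesktop, and the fixed guest IP from the dnsmasq config above):

```
#!/bin/bash
# connect to the windows guest on its fixed address
exec rdesktop -u Administrator -g 1600x900 192.168.53.101
```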
This is all; I hope someone finds this useful. I’m mainly leaving it here for myself, to be able to repeat this in the future. Comments (particularly alternatives for the networking) are very welcome.
At my job we had a requirement for connecting one of our systems to a new SQL backend. As there are several other existing solutions in our code base that already do schema migrations and db initialization, I didn’t get to use dbup, which would normally be my go-to solution for doing this kind of job in .NET.
I’m strongly convinced that being able to run the same build on the build server as on your local machine is a powerful thing when it comes to troubleshooting issues with it. So I generally abstain from putting any logic about how to build on the build server, and rather let it figure out when to do it.
So I was stuck with powershell, and I wanted to make the existing scripts work with our build tool of choice: FAKE.
It turned out to be quite a task, as all the libraries in the System.Management.Automation namespace looked frighteningly unfamiliar, and were obscured by the typical Windows security features that make it so hard to understand the usage patterns of a Microsoft Windows framework. I’m sure I’ll have forgotten how to do this by tomorrow, and I don’t want to go through the process of piecing it together again, so I’m adding this post for future reference.
Here is the code:
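A condensed sketch of the approach (the script path and target name are placeholders; it assumes FAKE 4 and a reference to System.Management.Automation.dll):

```fsharp
// build.fsx (sketch)
#r @"packages/FAKE/tools/FakeLib.dll"
#r "System.Management.Automation.dll"
open Fake
open System.Management.Automation

// run a powershell script file in-process and relay its output
let runPowershell (scriptPath: string) =
    use ps = PowerShell.Create()
    ps.AddScript(sprintf "& '%s'" scriptPath).Invoke()
    |> Seq.iter (printfn "%O")
    // errors don't reliably bubble up as exceptions, so surface them here
    ps.Streams.Error |> Seq.iter (eprintfn "ERROR: %O")

Target "MigrateDatabase" (fun _ ->
    runPowershell "./scripts/migrate.ps1")

RunTargetOrDefault "MigrateDatabase"
```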
One thing that is noteworthy is that even though the framework threw an exception after the script’s execution (which had performed its task perfectly), this didn’t bubble up to FAKE, so at this point it’s not fit to stop the build in case of failure.
One of the things that you need to address when executing powershell scripts in a non-interactive host is that the Write-Host calls you may be using will no longer work, as they’re effectively trying to write to the console. I was able to resolve this by changing them to Write-Output, after which the exceptions stopped for me.
I have recently become a big fan of Microsoft’s ML language FSharp. In my opinion, one of the things holding back wider adoption is the lack of a good cross platform story, like the one that, for instance, Clojure or Java have to offer. (Minor niggle here: the story for Ruby and Python on Windows is far from great either.) One thing that has traditionally been painful is getting recent packages for the mono runtime. Without @tpokorra’s efforts with his high quality mono-opt packages for all major Linux flavours, a developer would have needed to compile the runtime environment herself in order to get a recent version on her machine (in a non-standard installation folder). Obviously, with the advent of Xamarin’s great cross platform tools, and those being based on the mono runtime, things have started looking up a lot since then.
Microsoft wants us to use their languages and tools
I am convinced that the current trend of using Linux as a .NET host is going to gain even more momentum. Microsoft may be interested in such a development, as Xamarin’s cross platform story is a great way to channel people towards the Azure platform as the cloud provider for these Xamarin apps. Another area where Microsoft is leaving the comfort zone of Windows and actively seeking cross platform support for their framework is ASP.NET vNext. Again this may be useful for Microsoft if they manage to get people onto their cloud platform by offering a broader range of choices, but in my opinion it is too early to tell whether this is in fact a viable assumption. Regardless of what we are able to observe now, I believe that Microsoft’s cross platform efforts are going to be a great investment in the long run, as they will make their services and potentially devices more appealing to a wider audience of developers.
FSharp is the ugly duckling at Microsoft
Microsoft has one of, if not the, best solutions for developing object-oriented software on a VM, namely C#. It has been widely adopted by the public sector and the enterprise. For hobbyists and ‘recreational programmers’, not so much, at least as far as I’m aware. A great amount of resources at Microsoft is dedicated to the advancement, refinement and marketing of this tooling, and rightly so.
The story for FSharp at Microsoft is different. One example where Microsoft was marketing C#, and to an extent C++ and JavaScript, was the Build conference in 2014. During the whole conference there was not one single talk about F#. The tooling and integration support for using F# leaves a developer wanting in a number of places. The most obvious one in my opinion is that one has to ship the latest FSharp.Core libraries as a compiled artifact when trying to host applications using them in Azure. These are Windows boxes maintained by Microsoft, so it strikes me as odd that the F# framework on these boxes is not the latest version available to developers. There are more areas, like sponsoring, where Microsoft is hardly visible when it comes to promoting F#. I find it hard to find reasons why F# is being left in the background to this extent. I am entertaining two possible explanations, which are both purely speculative and quite absurd.
This is great for the open source movement
Due to the relative apathy of Microsoft in regards to engaging with the community around FSharp, the community prospers. There is a number of problems that have been solved by the FSharp open source community with great technical excellence. Examples of this would be the number of highly useful type providers, data analysis solutions and parsers.
The core language is open source, and accepting contributions from the community. This is big news for a language that was developed under the umbrella of a big corporation like Microsoft. Under the vision of Don Syme, who is not only the father of F# but also responsible for some of the key features that make C# so attractive, I believe there is a bright future ahead for the language.
The lack of perceivable involvement from Microsoft in the current affairs of the language makes it more palatable to people who, for whatever reasons, feel an ingrained antipathy towards the company. There is a great number of fantastic programmers in this group, and it is highly desirable to get some of them ‘on board’, in order to start spreading the word.
A key ingredient to further the spread of F# in these communities is being able
to use it without having to use the Windows operating system.
I am hoping that my contribution will help this process along.
When trying to install Vagrant for the second time, I ran into an error about installing the nokogiri gem again. (This happened to me before, but I have since forgotten how I fixed it.)
Thank god nowadays there is Stack Overflow, so a solution to most problems is never far away.
This post describes that one only needs to export an environment variable when trying to install the vagrant_fusion plugin.
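If memory serves, the variable makes nokogiri build against the system libxml2 instead of compiling its own copy, so the install looks roughly like this (assuming the variable is NOKOGIRI_USE_SYSTEM_LIBRARIES):

```
# make nokogiri use the system libxml2/libxslt instead of compiling its own
export NOKOGIRI_USE_SYSTEM_LIBRARIES=1
vagrant plugin install vagrant_fusion
```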
When packaging a console exe for easy deployment on a client machine, I recently wanted to run ilmerge.exe. I have found that all it takes is to reference the right framework assemblies in the /targetplatform command line switch.
The full command on my machine then looks like:
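Roughly like this (the assembly names are placeholders, and the reference assemblies path depends on the installed SDK):

```
ilmerge.exe /targetplatform:"v4,C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5" ^
  /out:merged\MyTool.exe MyTool.exe SomeDependency.dll
```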
When I tried to install Xen on my new Inspiron laptop (15R N5110), I ran into a bit of trouble with the PCI probing of the xend daemon. It has something to do with the capabilities of the PCI devices going into a loop, or with xend not being able to handle two graphics adapters. I found a similar problem here, and the same patch worked for me. I am posting this here again so I don’t forget, and so other people can find it.
The fix: