Sometimes I write things, sometimes I don't.


Saturday 17 January 2015

Juniper vSRX on Proxmox VE

Juniper provides a JUNOS image, based on the one used by the SRX series, that can be used in a virtual machine. That product is great for Juniper users who want to play with their favorite network OS, and also for people who would like to discover the JUNOS world.

Juniper provides images for VMware and KVM based hypervisors. As a Proxmox VE user you know that it uses KVM under the hood, so getting Firefly Perimeter working on Proxmox VE should be doable without much trouble. Here are the steps to get things working.

Downloading vSRX (Firefly Perimeter)

To set up vSRX on Proxmox VE we need to download the JVA file provided by Juniper. This file is an archive containing the KVM VM definition and the QCOW2 disk of the VM.

Preparing the VM

We then need to create a VM with the following characteristics (a command-line sketch follows the list; see also the end of this article):

  • OS: Other OS types (other)
  • CD/DVD: Do not use any media
  • Hard Disk: VIRTIO0 or IDE0, size of 2 GB, QCOW2 format
  • CPU: at least 2 sockets and 1 core, type KVM64 (default on latest versions of Proxmox VE)
  • Memory: 1024 MB is recommended (but 2048 MB is better)
  • Network: maximum of 10 interfaces, use VIRTIO or Intel E1000 as model for interfaces
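
As a rough command-line sketch only (assuming the VM gets ID 100, a storage named local and a bridge named vmbr0; adjust these to your setup), the same VM could be created with qm. The 2 GB QCOW2 disk created here is only a placeholder and gets replaced by the vSRX disk in the next step.

# qm create 100 --name vsrx --ostype other --sockets 2 --cores 1 --memory 2048 --virtio0 local:2,format=qcow2 --net0 virtio,bridge=vmbr0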

Using the vSRX Disk

Now that the VM definition has been created, we need to use the disk provided in the JVA file. For that we first need to extract it.

# bash junos-vsrx-12.1X47-D10.4-domestic.jva -x

The disk will be available in the newly created directory. We just need to copy it over the disk used by the VM (replace VMID with the ID of your VM, and check the exact target filename under /var/lib/vz/images/VMID/ first).

# cp junos-vsrx-12.1X47-D10.4-domestic.img /var/lib/vz/images/VMID/vm-VMID-1.qcow2
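
Optionally, before booting, we can sanity-check that the copied file really is a QCOW2 image despite its original .img extension (qemu-img is already available on a Proxmox VE node):

# qemu-img info /var/lib/vz/images/VMID/vm-VMID-1.qcow2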

With this, the VM is now bootable and JUNOS will load properly; we will not be able to interact with it yet, though. For that we need a way to send the serial output to Proxmox VE's noVNC console.

Getting the serial output in the Proxmox VE console

First we need to find where our VM definition is stored. Usually it is under /etc/pve/nodes/NODENAME/qemu-server/VMID.conf (replace NODENAME and VMID with your own). Otherwise we can locate it with a command like the following:

# find / -name 'VMID.conf'

Then we can edit the VM definition file:

# vim /etc/pve/nodes/NODENAME/qemu-server/VMID.conf

And we have to add the following line in the configuration:

args: -serial tcp:localhost:6000,server,nowait

Finally, we need to change the VM display to Cirrus Logic GD 5446 (cirrus), either via the Proxmox VE web interface or simply by adding vga: cirrus to the VM definition.
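
For reference, here is a sketch of how the relevant part of VMID.conf might look once both changes are in place (the rest of the file stays untouched). Note that the serial port opened by the args line can also be reached directly from the node, for example with nc localhost 6000.

args: -serial tcp:localhost:6000,server,nowait
vga: cirrus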

The End

We can now start the VM; the output will be displayed in the Proxmox VE console. Enjoy using JUNOS in virtual machines.

vsrx-promox.png

Edit (2015-06-17):

After some tests I was glad to see that both the disk and the network interfaces can use the VIRTIO drivers. I would recommend using this type of driver since it is supposed to improve scheduling at the hypervisor level.

Friday 14 November 2014

Samsung 840 EVO Performance fix

Several weeks ago Samsung released a fix for the 840 series of their SSDs, which had performance issues when reading data that has been stored for a long time. While the fix procedure is quite simple to apply on Windows, it can be quite tricky when you use your SSD on a GNU/Linux powered system: you will need a bootable USB key with the Samsung binaries. Moreover, the Samsung documentation is not really well written and can lead to confusion. So here are the steps for dear GNU/Linux users to fix their SSDs.

Some preps

Firstly, prepare a USB key (at least 512 MB, just to be sure) and download FreeDOS.

Creating the bootable USB key

Once FreeDOS is downloaded, plug the USB key in and find the device name to interact with it. You can generally find it using the dmesg command, which will output something like this:

[1017607.068095] usb 2-1: new high-speed USB device number 110 using ehci-pci
[1017607.278127] usb 2-1: New USB device found, idVendor=1b1c, idProduct=1ab1
[1017607.278135] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1017607.278140] usb 2-1: Product: Voyager
[1017607.278145] usb 2-1: Manufacturer: Corsair
[1017607.278150] usb 2-1: SerialNumber: AA00000000000634
[1017607.278936] usb-storage 2-1:1.0: USB Mass Storage device detected
[1017607.279084] scsi12 : usb-storage 2-1:1.0
[1017608.389828] scsi 12:0:0:0: Direct-Access Corsair Voyager 1100 PQ: 0 ANSI: 0 CCS
[1017608.390448] sd 12:0:0:0: Attached scsi generic sg2 type 0
[1017608.391272] sd 12:0:0:0: [sdb] 15663104 512-byte logical blocks: (8.01 GB/7.46 GiB)
[1017608.392259] sd 12:0:0:0: [sdb] Write Protect is off
[1017608.392266] sd 12:0:0:0: [sdb] Mode Sense: 43 00 00 00
[1017608.394784] sd 12:0:0:0: [sdb] No Caching mode page found
[1017608.394792] sd 12:0:0:0: [sdb] Assuming drive cache: write through
[1017608.402247]  sdb: sdb1
[1017608.405637] sd 12:0:0:0: [sdb] Attached SCSI removable disk

In this case you want to use the /dev/sdb device, as seen in the log.
Now you can just write the FreeDOS image to the USB disk. The image is compressed, so you will need to decompress it first.

$ bunzip2 FreeDOS-1.1-memstick-2-256M.img.bz2
$ dd if=FreeDOS-1.1-memstick-2-256M.img of=/dev/sdb bs=512k
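
Before going further, an optional sanity check is to flush the buffers and confirm that the key now carries the FreeDOS partition:

$ sync
$ lsblk /dev/sdb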

Copying the Samsung binaries

Download the Samsung binaries.

Mount the USB key and unzip those binaries at the USB key root. In this way you will be able to use them from FreeDOS later.

# mount /dev/sdb1 /mnt
# unzip Samsung_Performance_Restoration_USB_Bootable.zip
# mv 840Perf/* /mnt
# umount /mnt
# eject /dev/sdb

The fix

Plug the USB key into your machine and reboot the host. Do what is necessary to boot from the USB key, then choose FreeDOS option 4, "Load FreeDOS without driver".

Once FreeDOS is running, just run PERF.EXE and the Samsung tool will start. Enter the index shown in front of the SSD you want to upgrade and fix. The utility will take care of everything (firmware upgrade and fix). Note that the fixing pass can take some time.

Once the tool has finished fixing your SSD, just reboot the host by typing reboot in FreeDOS. Do not forget to unplug the USB key to avoid booting from it again.

Enjoy your brand new fixed SSD!

Wednesday 18 September 2013

GIR for java-gnome: last update / summary


Continue reading...

Wednesday 11 September 2013

GIR for java-gnome: update week 36

There is not a lot to talk about for the past week.

The code generator can now be considered stable, and I decided to cover the GWeather library to demonstrate how to add coverage for a new library. You can check the progress of this work in this branch. There are still some examples and tests to write. I will also write a quick how-to guide to help contributors implement a new library.

See you next week for a shorter report? Actually no, it will be the last one, so I will do a recap of all the work that I have done during this great Google Summer of Code.

Wednesday 4 September 2013

GIR for java-gnome: update week 35

I am a little late on this report due to the end of the holidays and the start of a new university year.

Only 3 weeks are left for this GSoC, so it is time to polish everything and document the work that has been done. This is why week 35 saw the last big changes in the code, so I can now focus on details. There was polishing of course, such as code optimization, but also pretty big changes such as the following:

  • All the GIR data is now loaded in memory before parsing. The code now starts by parsing all the XML data and keeps references to the whole <repository> elements in IntrospectionRepository objects. Each object contains references to <namespace> elements and to the header files to include in the generated C code. A list of C identifier prefixes is also built when scanning all the repositories. The IntrospectionParser is now fed with IntrospectionRepository objects and parses each namespace contained in each repository. This change allowed removing the hard-coded list of C identifier prefixes, which makes the code more flexible.
  • The combined whitelist and blacklist format for the types.list file has been changed to XML. Since we now know how to parse and use XML data properly in java-gnome, it was an obvious step to take. The XML based format is a lot more flexible than the previous one and allows us to include more information. A small summary of the format to respect is provided at the beginning of the file so contributors can easily understand how it works.
  • Thanks to the change of format for the types.list file, it was possible to include the Java class name overrides and the Java package name overrides in the same file. So now the build depends on only one file, which is a lot better when you want to add, remove or change something in java-gnome.
  • To prevent the java-gnome build process from failing, the org.gnome.glib.File class had been made public. This change was not meant to be definitive since we do not use this File outside its package. So I changed the class scope back to default and blacklisted the functions that were relying on it. We have a perfectly good File class in Java; we do not need another one.
  • The rest of the work was polishing: removing useless code, updating comments and writing emails.

The goal for weeks 36 and 37 is to add coverage for another library, thanks to the new Introspection based parser, and to write a simple how-to about it. After that the GSoC will be almost over.

Tuesday 27 August 2013

GIR for java-gnome: update week 34

The Google Summer of Code is ending in a month and it is time to make things stable, to do some polishing and to write documentation.

The goal of week 34 was to improve some parts of the code I have written so far to make them less cumbersome and more maintainable. I started with a couple of code cleanups and by adding some comments where they were needed. After that I worked on a piece of code that had bothered me for some time: guessing the C type to give to the code generator for a return value or parameter. You may remember the ugly method that I wrote about 3 weeks ago. Starting from this commit it is removed and replaced by two methods which do the work in a better way. The idea behind these two methods is to take an XML element (a return-value or a parameter element), scan it to find a type and the modifiers applied to it (const for example), and then generate a string that our code generator can understand (const-GList-GtkWindow* for example). This code still depends on a hard-coded list of modules that we handle. I need to modify this to make it dynamic and retrieve each module's C identifier prefixes from the c:identifier-prefixes attribute of its namespace element.

The second part of the week was used to write test cases validating that the Introspection parser works properly. They test that we are creating Block elements properly and also that they contain the data needed to generate the code without any trouble. To make this possible I had to change the input of the IntrospectionParser so it can parse not only files but also strings.

Finally, I dedicated the end of the week to modifying the output of the IntrospectionParser. Starting from this commit the parser now outputs a map of Block arrays identified by strings. I also changed the behavior of the code that overrides or adds data from the .defs files: this operation is now done from the BindingGenerator class instead of changing the Blocks inside the DefsFile class. I made this choice because I believe that a DefsFile object should not be modified after being created. So now we have a code generator working like this:
java-gnome-code-gen-2.png

Like I said last week, java-gnome built from GIR data now compiles and passes the whole test suite without any problems. There is still some code to improve in order to remove some ugly hard-coded things, such as the header files to import (we probably need a keyword in the types.list file for that). For the end of this GSoC I plan to add coverage for another library and to write a how-to for interested (or future) java-gnome hackers.

Monday 19 August 2013

GIR for java-gnome: update week 33

I am glad to say that I am ahead of my schedule. The GSoC will continue for approximately a month, but I have already reached the goal that I was supposed to reach in 2 weeks. So I am happy to say that since this commit java-gnome compiles and passes the test suite with the Introspection based code generator. There is still work to do, though, so the plan is to focus on code polishing and on documenting the new code generator behavior. I have written a mail on the java-gnome hackers mailing list to inform the contributors and summarize my work. There are also several questions that still need answers to make sure that everyone will be happy with the new code generator.

I have done a quick graph to show how the code generator works in my java-gnome Introspection branch.

java-gnome-code-gen.png
As we can see, the data that feed the code generator come from 2 sources. The first and main source is the GIR data; the second source is the DEFS data. We have been using DEFS data for several years in java-gnome and it is now an optional source, because GIR is meant to replace DEFS. Why did I choose to keep .defs files? To be able to add coverage for libraries that do not provide GIR data yet, and to override methods, functions and signals. In this way we can combine the accuracy of GIR data with the power of the .defs parser. Some people might be confused when seeing the DefsFile[] structure in the middle of the graph. It does not mean that we are writing .defs files from GIR data. DefsFile is actually a class that represents what has been parsed. Until Introspection that was only DEFS data, so I guess this class now needs a better name.

Here is a list of what I have done during week 33:
  • Handling of GError** parameters when functions throw errors.
  • Providing a new way to whitelist types and blacklist methods, functions and signals.
  • Cleaning up some parts of the code.
  • Adding several special cases where constructors and methods needed to be overridden.
  • Fixing some unit tests.
  • Fixing the parsing of enumerations.
  • Fixing some memory management issues caused by a constructor return value ownership problem.
Things are now getting stable. The goal for the end of this GSoC is to polish the code, write some documentation, make some decisions to validate or invalidate design choices, add coverage for another library (WebKitGTK+ maybe; if you have an idea feel free to tell me) and, in the future, release java-gnome 4.2.0 with fully Introspection based bindings!

Tuesday 13 August 2013

GIR for java-gnome: update week 32

Last week the java-gnome compilation process reported about 250 errors. There are now 6 errors left to fix, which is pretty good progress for one week. The commit that I made a few minutes ago summarizes the work quite well.

So how is the code generator fed in my java-gnome "introspection" branch?

  1. First it needs to pass the configure step. This step checks that the .gir files (containing XML data) are available on the system. Andrew is working on a patch so we can pass a --gir=/directory/containing/gir/files option to the Perl configure script.
  2. The second step of the process takes all the .gir files and parses them with the IntrospectionParser class. The parser loads a whitelist of types, used to skip parsing and code generation for types that we do not need (GRand for example).
  3. The following step takes the .defs data found in the src/overriders directory. This data is used by the code generator to add to, or redefine, the data taken from the Introspection files.
  4. Once all the previous steps are done, the translation layer and JNI layer are generated.
  5. Finally the make process tries to build everything, and this is when we can see whether the public API matches the generated code.
To be able to do all of this I had to modify several parts of the existing classes by adding methods to get information about a Block object (the corresponding C name, the object it belongs to and whether it represents a constructor or not). A Block is a data structure that the code generator uses to represent a type, a method, a constructor or anything else.

The XML parser still needs some polishing to handle the throws attribute and to make it nicer to read and maintain.

On Sunday I had a meeting with Andrew to talk about my progress and what we are going to do next. We agreed that it would be interesting to start merging some code into the official java-gnome repository. We also talked about having regular Google Hangouts so we can continue the discussion. Since the progress on using Introspection data to feed the code generator is quite good, we talked about trying to add coverage for another GNOME library to add some value to the work that I have done so far.

Wednesday 7 August 2013

GIR for java-gnome: update week 31

The status update of my work is a little late this time due to some personal issues I had this weekend.

The work is mostly the same as during week 30. The XML parser needs a lot of improvements; I find new ones to make every day. The parser can now handle Union types and the special methods like "free" and "copy". I also had to tweak the public API of java-gnome due to some changes introduced by the use of Introspection data (some variables had to be deleted because they do not exist anymore, for example). Since we will probably do a major release when the Introspection based code generator works, I guess all the public API changes will not be a problem.

The other part of my work consisted in trying to make the Introspection parser more maintainable. It has grown a lot since the beginning of this GSoC and sadly it has become less and less readable and maintainable due to copied and pasted code blocks. Currently, the Introspection parser alone is about 1800 lines. For comparison, the .defs files parser is about 390 lines, so there is a big difference. I moved some lines of code into static methods so they can be reused more easily and maintained more easily.

The code of the XML parser is starting to stabilize because it works in most cases. I still have to investigate about 250 compilation errors when building java-gnome (errors produced by the files that the code generator writes). A lot of errors are due to the way the .defs files were written before, so they should be fixed in the following weeks.

I had to do some black magic to work around some problems with GList and GSList parameters or return types, and also with parameters where the name was specified but not the real C type. I do not know if this is generated on purpose when building the Introspection data or if it is a bug, but it happens sometimes. Let's take an example. A commonly seen XML description of a parameter in a .gir file looks like this:

<parameter name="pixbuf" transfer-ownership="none" allow-none="1">
 <doc xml:whitespace="preserve">a #GdkPixbuf, or %NULL</doc>
 <type name="GdkPixbuf.Pixbuf" c:type="GdkPixbuf*"/>
</parameter>


But sometimes we can find this:

<parameter name="pixbuf" transfer-ownership="none" allow-none="1">
 <doc xml:whitespace="preserve">a #GdkPixbuf, or %NULL</doc>
 <type name="GdkPixbuf.Pixbuf"/>
</parameter>


So this is a tricky part because of several things:

  • The code generator cannot handle a type like GdkPixbuf.Pixbuf; it actually needs GdkPixbuf.
  • How can we know whether it is just a GdkPixbuf or a GdkPixbuf*?
  • If we have just EventType how can we know what namespace owns EventType (Gdk, Gtk, ...)?
So here comes the black magic. Can we make things uglier? :(

A meeting with Serkan Kaba (my mentor) and Andrew Cowie (java-gnome's maintainer) has been set up to discuss my progress and make some technical choices about which path we should take.

During this week (week 32) and the following one I will try to implement a way to override Introspection information so we can control the behavior of the code generator like we can when using the .defs files.

Tuesday 30 July 2013

GIR for java-gnome: update week 30

Not a lot of commits this week, but there was work. The only commit I made was to dynamically detect the location of the Introspection XML files for each distribution (for now only Debian based distros are handled).

Most of the work I have done is not committed yet. I worked on the XML parser to:

  • handle both virtual methods and GLib signals,
  • handle functions,
  • handle fields for boxeds.

There is still work to do to have a proper parser that can handle all the .gir files without any problems. Each time I fix something, two new problems appear, but heh, this is how computing works. The XML parser is getting larger every day and I will have to refactor it once more to reduce the amount of code duplicated by copy and paste. The Introspection files are now detected based on the path where the distribution packages put them. This is done at the ./configure step: the script tries to find the files and asks the user to install the required package if it fails.

I also plan to provide a way to override Introspection data to make the code generator behave the way we want. This is somewhat possible right now, with the possibility to override the names of objects, but I would like to extend this feature to the whole range of things that we handle (methods, functions, etc.).

In my proposal I wrote that I would like to have a code generator using Introspection data that works with a minimal set of types. That is the case right now. The code generator works, it generates the JNI and translation layers, but there are still errors in some parts. For a basic class the code generator works really well, but as the complexity increases (on the Introspection side or even on the java-gnome side) problems appear. One of the current problems is the handling of parameters and return values based on GList and GSList, for example. I hope to fix all of this soon and continue with more interesting stuff.

Monday 22 July 2013

GIR for java-gnome: update week 29

Since we are getting closer to the midterm, this week was full of work. I made several changes to the Introspection XML parser to fix a bunch of parsing errors and also to improve some of the data that the code generator uses (Java package names for example). To do so I added the ability to override the name of an Introspection namespace (which becomes the Java package name) so the code generator can use a much better package name when generating the bindings layers.

I also have uncommitted changes because I need to discuss them with people from java-gnome to validate my point of view. After this first month of Summer of Code, I have realized that java-gnome and GObject Introspection cannot really be friends without some friendship agreements. The idea behind using .defs files for the code generator was to add .defs files and their data only when we wanted to implement the corresponding Java classes and methods. So java-gnome has a small number of .defs files that do not contain all the data. The idea behind GObject Introspection is different: the XML files contain all the data (objects, interfaces, enumerations, etc.) needed to generate a complete binding. This is a problem for us.

When we generate the JNI and translation layers in java-gnome we need at least a little stub in the public API. This is actually required by the translation layer for the library to compile without any errors. But we don't have public API stubs for everything. We can generate them, but is it the right choice? So my first idea was to blacklist the types and objects that we do not need or do not wish to implement in java-gnome yet (the first types that come to mind are GRand and GDateTime, because we already have the corresponding classes in the Java API). But the blacklist would be very long. I don't know what percentage of the GNOME objects we are covering, but we can fairly say that we do not cover 50% of them. So the second idea (which is implemented in the uncommitted changes) was to use a whitelist of the objects that we want to cover. But this is still not as precise as the current code generator behavior with .defs files.

I would like to apologize because I don't have any screenshots to show: my work is not related to any user interface, it is a deep modification of a library. But here is a little screenshot (which shows that I still have a lot of errors) of what I am doing.

gsoc-java-gnome-compile-errors.png

Monday 15 July 2013

GIR for java-gnome: update week 28

Like I said last week, my goal was to produce more maintainable code. I took advantage of this to rewrite the code so it uses the data structures that the java-gnome code generator already provides to handle the current parsing of .defs files.

So the first part of my work was to simplify the XML parsing. I decided to use the XOM library. I chose it because it is quite easy to use and also fast enough for our parsing needs. It can handle namespaces, which is important when parsing XML Introspection data, and it provides several classes and methods to manage XML elements and attributes. The use of XOM adds two new dependencies to the java-gnome build process: XOM and Jaxen (Jaxen is actually a dependency of XOM). These 2 dependencies are available in Debian and Ubuntu, and probably other distributions too. I started working on adding checks to the build system to handle the new dependencies. The patch is almost done but needs a tiny bit of polishing before being committed.

The second part of my work was to reproduce a parser similar to the currently used DefsParser. Writing a parser that works in the same way will save time once it is fully complete. The idea is to be able to easily use the Introspection parser in place of the .defs parser. To do so I decided to use the available classes that help us represent all the objects with their methods, virtuals and more. The current Introspection parser is able to parse a good set of GIR data, but there are several fixes in the staging area of my local Git branches. I will probably commit and push them in a day or two.

The goal for next week will be to improve the parser and compare its results with the results from the DefsParser. I will also work on detecting the .gir files on the operating system so we do not have to embed them in our java-gnome Git tree.

Monday 8 July 2013

GIR for java-gnome: update week 27

The week that has just passed has been the first one with code involved. I started working on the Introspection data parser to generate a bunch of .defs files that we can use in the java-gnome code generator. The idea behind this work is not to permanently rely on a tool that converts Introspection data to .defs data. The goal is to help me understand how I can use the Introspection data, how I can read it, so I can adapt the current code generator with a minimum of changes.

The current work is still quite ugly. It parses the XML based GIR data with some Java code and writes .defs files. I try to make the .defs files as similar as possible to the ones that java-gnome currently uses. It is quite a long job since I need to write an XML parser, which is a kind of pain in the ass (to me, it is terribly boring :-D). The current code can be found in a Git branch on GitHub. The GIR data parser is located in the tests/prototype/introspection directory of the branch.

The goal for next week is to polish the current code to make it easier to read and especially to take advantage of the XML APIs available in Java. In this way the parser will be more understandable (by everyone) and also more maintainable (it is not, for now). I will also have to go deeper into the Introspection data parsing to generate more accurate data, since there is still some strange output when the parser runs.

Saturday 29 June 2013

Now the summer is really beginning

Like I said in the last article I wrote here, I'm working on java-gnome to write a brand new code generator that will use GObject Introspection data. During the first two weeks of this Summer of Code I was not really dedicated to my project: I just finished my year at the university with almost two weeks of exams. So I had to make a choice and decided to work more on my exams than on my summer project.

Serkan Kaba and I found the time to make some choices about what I will have to do during the whole summer, starting from now. We have chosen to focus on the XML version of the Introspection data so we can easily parse it with a minimum of changes in the current code generator. If you don't know how the code generator currently works: it simply parses .defs file data and generates several layers (JNI, translation) in the code of java-gnome. So the idea is, as a first step, to write a piece of code that produces .defs files from Introspection XML data. This will allow us to validate that we have fully understood how to use the GIR data while continuing to use the current, powerful code generator to generate the bindings layers. As a second step, I will work on implementing the new code generator that will use GIR data natively.

My exams are done, so I can now focus on this summer project. I am very glad to participate in the Google Summer of Code again (this is my third time). And I am mainly happy to be able to improve one of the projects that I love.

Sunday 2 June 2013

Spending this summer coding for GNOME

This blog has not seen a lot of activity since last year. I don't know if it is because I don't want to write anything or because I am simply busy with studies and other stuff.

For the third time I am participating in the Google Summer of Code, and this year I will be coding for GNOME during the whole summer. If you already know me, you know that I have been contributing to java-gnome since 2009. So this summer I will be able to work full time on improving the Java bindings for GNOME by rewriting the code generator so it can use GObject Introspection data. Several java-gnome contributors have discussed this for some time and now it is time to code.

I would like to thank the people from the GNOME SoC team for choosing my project for this GSoC, Serkan Kaba (@serkankaba) for mentoring me, and Andrew Cowie (@afcowie) for his support of this project and for maintaining java-gnome for all these years.

Friday 3 February 2012

FOSDEM 2012

I arrived in Belgium yesterday.

I'm glad to see people who were at DebConf in Bosnia and Herzegovina. I'll try to attend as many talks about Java, Debian, GNOME and more as I can. I will also take part in the LibreDinner tomorrow.

Well, see you at FOSDEM 2012!

fosdem2012-going-to.png

Sunday 22 January 2012

A Nautilus extension for GNOME Split

Almost 2 years after expressing my wish to create a Nautilus extension to launch a split or a merge with GNOME Split, this extension has finally arrived.

The goal of the extension is to offer two new entries in the Nautilus context menu. When right-clicking on any file, the item "Découper le fichier..." ("Split the file...") appears; using it launches GNOME Split with the right arguments to split the selected file. When right-clicking on a file considered to be the first part of a previously split file, the entry "Assembler les fichiers..." ("Merge the files...") appears, allowing GNOME Split to be launched to merge the parts.

nautilus-extension-decoupage.png
Concretely, this Nautilus extension is written in C, released under the GPL version 3 and already available in the PPA for Ubuntu 11.10.

    ~$ sudo add-apt-repository ppa:gnome-split-team/ppa
    ~$ sudo aptitude update
    ~$ sudo aptitude install nautilus-gnome-split


Since I am not an expert in developing Nautilus extensions, any help is welcome.
I hope this extension will be useful.

Friday 18 November 2011

Dynamic strings

Every good Android programmer is used to listing all (or almost all) strings in the res/values/strings.xml file. At first glance, only static strings can go in this file. But there are ways to make these strings a little more dynamic than they appear. Let's look at that.

1. Plural forms

Plural forms vary more or less from one language to another. For example, in French, one might want to write:

  • Je ne possède pas de stylo. (I do not own a pen.)
  • Je possède un stylo. (I own a pen.)
  • Je possède des stylos. (I own pens.)

The Android API provides several plural form distinctions thanks to 6 keywords.

  1. zero, for the quantity 0;
  2. one, for the quantity 1;
  3. two, for the quantity 2;
  4. few, to represent a small quantity;
  5. many, to represent a medium quantity;
  6. other, to represent a larger quantity, depending on the language.

In French, we tend to use only zero, one and other.
So, to build the strings quoted above, we put the following code in our res/values/strings.xml file:

<?xml version="1.0" encoding="utf-8"?>
<resources>
  <plurals name="nombreDeStylos">
    <item quantity="zero">Je ne possède pas de stylo.</item>
    <item quantity="one">Je possède un stylo.</item>
    <item quantity="other">Je possède des stylos.</item>
  </plurals>
</resources>

In the Java code, we retrieve the correct form like this:

final List<Stylo> stylos = getStylos();
final int count = stylos.size();
final String text = this.getResources().getQuantityString(R.plurals.nombreDeStylos, count, count);

2. String formatting

It can also be useful to display values inside strings, for example: "Bonjour X, vous avez Y nouveaux messages !" ("Hello X, you have Y new messages!"). This is done quite simply with Java's String.format(String, Object...) method. This method is also invoked automatically when using Context.getString(int, Object...). The string to be formatted must contain format markers such as %1$s, meaning that this marker must be replaced by the first argument, which is a string, or %2$d, meaning that the second argument, an integer, goes here.
So in the res/values/strings.xml file we write:

<string name="welcome_message">Bonjour %1$s, vous avez %2$d nouveaux messages !</string>

And we use the following Java code:

final String username = "toto";
final int mailCount = 10;
final String text = String.format(this.getResources().getString(R.string.welcome_message), username, mailCount);

We can also use this one:

final String username = "toto";
final int mailCount = 10;
final String text = this.getResources().getString(R.string.welcome_message, username, mailCount);

3. Conclusion

As we can see, Android provides several ways to manipulate strings and make them as flexible as possible, while greatly simplifying the work of translators, for example. All that remains is to take advantage of these possibilities to create simple and efficient code.

Saturday 25 June 2011

Apache 2: mod_macro

There are pieces of software, tools or modules that you know right away you will never be able to do without, either because they are brilliant and revolutionary or because they are very practical and someone just had to think of them. That is exactly what I told myself when discovering mod_macro for Apache 2. A few weeks ago I did not even suspect that such a module existed. Someone mentioned it to me, I let the idea sit for a while, and then I finally got my hands dirty. And it brought a tear to my eye. How many times have I pulled my hair out over my Apache configuration, how many virtual hosts have I copied and pasted… With mod_macro, all of that is over.

To sum up, this module lets you factor out your Apache configuration. You define macros and then call them, a bit like functions in a scripting language. You can write macros for more or less everything, so by organizing your system well and factoring your configuration properly you can make your life a lot easier.

Installing mod_macro on Debian and derivatives could not be simpler:
# aptitude install libapache2-mod-macro
# a2enmod macro


Ensuite, il s’agit de ne pas faire n’importe quoi. Afin de gérer mes macros, j’ai créé un fichier macro.conf dans le répertoire /etc/apache2/conf.d/ qui est chargé au démarrage de Apache. Comme je l’ai dit précédemment, on peut se servir des macros pour faire énormément de choses donc je n’aborderai pas tout ici.

Let's say we have the following 2 virtual hosts (a dummy example, but it shows the principle):
<VirtualHost [ipv6]:80 ipv4:80>
  DocumentRoot /var/www/toto
  ServerName   toto.domaine.tld
  ServerAlias  toto.domaine.tld
  ServerAdmin  admin@toto.com

  ErrorLog     /var/log/apache2/toto_error.log
  TransferLog  /var/log/apache2/toto_access.log
</VirtualHost>


<VirtualHost [ipv6]:80 ipv4:80>
  DocumentRoot /var/www/titi
  ServerName   titi.domaine.tld
  ServerAlias  titi.domaine.tld
  ServerAdmin  admin@titi.com

  ErrorLog     /var/log/apache2/titi_error.log
  TransferLog  /var/log/apache2/titi_access.log
</VirtualHost>


These 2 virtual hosts still have quite a lot in common. Imagine if there were 20 of them like that; the configuration quickly becomes painful to manage. With one single macro we can make our life much simpler. So in the /etc/apache2/conf.d/macro.conf file we write the following macro:
<Macro Domain $sub $domain $root>
  DocumentRoot /var/www/$root
  ServerName   $sub.$domain
  ServerAlias  $sub.$domain
  ServerAdmin  admin@$domain

  ErrorLog     /var/log/apache2/$domain_error.log
  TransferLog  /var/log/apache2/$domain_access.log
</Macro>


You see the principle. We gather the elements common to the virtual hosts and put them in a macro called Domain, which takes 3 parameters: sub, the subdomain to use; domain, the domain to use; and root, the directory containing the files to make available on the web.
The file containing the virtual hosts can then be simplified to:
<VirtualHost [ipv6]:80 ipv4:80>
  Use Domain toto domaine.tld rep_toto
</VirtualHost>

<VirtualHost [ipv6]:80 ipv4:80>
  Use Domain titi domaine.tld rep_titi
</VirtualHost>


L’utilisation de la macro se fait avec la syntaxe très script-like Use NomMacro [paramètres]. Il faut impérativement donner tous les paramètres à la macro sans quoi la vérification de la configuration échouera.

L’exemple d’utilisation donné ici est simple mais on peut faire des choses plus complexes en utilisant des macros dans d’autres macros, etc… Il faut juste trouver la configuration qui convient le mieux pour son serveur. En tout cas je ne me séparerai plus du mod_macro que je trouve vraiment intéressant pour gérer les hôtes virtuels (chose que je trouve très ennuyante).

Thursday 28 April 2011

Google Summer of Code 2011

The Google Summer of Code is a program organized every year by Google since 2005. It gives free software projects the opportunity to grow by hiring volunteer students, who are paid from late May to late August.

Why am I talking about this today? Because on Monday my application was officially accepted. So this summer I will have the immense honor of working for Debian, a project I am particularly fond of. My goal will be to package and contribute to Jigsaw with the help of Tom Marble.

debian.jpg
Qu’est-ce que Jigsaw ? C’est la prochaine grosse évolution du JDK et de la machine virtuelle Java qui devrait être fournie avec Java 8. Elle permet de ne plus voir la JVM comme un gros bloc mais comme plusieurs modules qui pourront s’assembler. Ainsi, dans Debian, l’installation d’une application Java ne devrait installer que les modules dont elle a besoin. De cette manière, les performances devraient être améliorées : moins de mémoire consommée, temps de démarrage plus rapide, etc…

For a simple Hello World type program that would give us:

With the current JDK and JVM:

  • JDK: 136 MiB to download
  • Application: 425 bytes
  • Total: 136.000425 MiB

With Jigsaw:

  • JDK: 30 KiB (roughly)
  • Application: 425 bytes
  • Total: 30.425 KiB

Quite a difference, isn't it? Now all that is left is to do it. I think this Summer of Code will be an opportunity for me to learn a lot about the future of Java as well as about Debian, while keeping me nicely busy during the summer.
