Sort Composer packages for your git merging pleasure

A lot of git merge conflicts occur when multiple developers add lines to the end of a file/list. This also happens in composer.json, AppKernel.php, translation files, config/application.config.php, CSS files, CHANGELOG, etc…

diff --cc composer.json
index 62e875e,0c526d8..0000000
--- a/composer.json
+++ b/composer.json
@@@ -10,6 -10,6 +10,10 @@@
      "require": {
          "ramsey/uuid": "^3.0",
          "roave/security-advisories": "dev-master",
++<<<<<<< HEAD
 +        "beberlei/assert": "^2.3@dev"
++=======
+         "miljar/php-exif": "dev-master"
++>>>>>>> exif
      }
  }

One workaround is to have every developer add lines or blocks to random places in a list or file, but a better solution is to sort these lines (especially for lists like composer.json, translation files, etc…). That way new lines still end up in “random” places, while it stays clear to everyone how things are added.

diff --git a/changelog.rst b/changelog.rst
index d9db648..886b168 100644
--- a/changelog.rst
+++ b/changelog.rst
@@ -19,7 +19,9 @@ June, 2015
 New Documentation
 ~~~~~~~~~~~~~~~~~

+* `#5423 <https://github.com/symfony/symfony-docs/pull/5423>`_ [Security] add & update doc entries on AbstractVoter implementation (Inoryy, javiereguiluz)
 * `#5401 <https://github.com/symfony/symfony-docs/pull/5401>`_ Added some more docs about the remember me feature (WouterJ)
+* `#5384 <https://github.com/symfony/symfony-docs/pull/5384>`_ Added information about the new date handling in the comparison constraints and Range (webmozart, javiereguiluz)
 * `#5382 <https://github.com/symfony/symfony-docs/pull/5382>`_ Added support for standard Forwarded header (tony-co, javiereguiluz)
 * `#5361 <https://github.com/symfony/symfony-docs/pull/5361>`_ Document security.switch_user event (Rvanlaak)

Full diff on Github

Some tools and hacks that assist you with this:
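
For composer.json specifically, Composer itself can do the sorting: with the sort-packages config option enabled (available in recent Composer versions), the require block is re-sorted alphabetically every time you run composer require:

{
    "config": {
        "sort-packages": true
    }
}

You can also enable it from the command line with composer config sort-packages true.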

Something that does not work for composer.json, but that I highly recommend, is to add trailing commas in PHP arrays and not to align code (see the blog post by Made With Love about “How clean are your diffs?”). This makes merging easier too!
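
A small, hypothetical example of both points: with unaligned arrows and a trailing comma after the last element, adding or removing an entry only ever touches a single line in the diff.

<?php

// Unaligned arrows and a trailing comma after the last element:
// appending a 'locale' entry below would add exactly one line to the diff.
$config = [
    'adapter' => 'mysql',
    'database' => 'pos',
    'debug' => false,
];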

Happy merging!

Controlling a receipt printer with ZeroMQ and PHP

When you build a POS system as a web app, you run into the problem that not all devices can be controlled from your server.

In my case I have the following hardware connected to the workstation:

The input hardware is easy, because a barcode scanner is seen as a keyboard, and both devices can communicate with the server through the web browser.

The tricky part is when a sale is completed, and the customer wants a receipt.

Since the printer is connected to the (dummy) workstation and not to the (remote) server, we cannot print the receipt directly.

The solution I used for this problem is ZeroMQ.

On the server I have a PHP process running which binds two ZeroMQ sockets. One (ZMQ::SOCKET_PULL) waits for incoming print requests from the web app, the other (ZMQ::SOCKET_PUB) publishes each print request to all subscribing workstations.
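
A minimal sketch of that dispatcher, assuming the php-zmq extension and hypothetical port numbers 5555/5556:

<?php

// Print dispatcher on the server: pull print requests from the web app
// and publish them to every subscribing workstation.
$context = new ZMQContext();

$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://*:5555');      // the web app pushes here

$pub = $context->getSocket(ZMQ::SOCKET_PUB);
$pub->bind('tcp://*:5556');       // the workstations subscribe here

while (true) {
    $message = $pull->recv();     // a receipt URL from the web app
    $pub->send($message);         // fan it out to all subscribers
}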

On the workstation (in my case a Windows laptop with an HP receipt printer connected to it) I have another PHP process running which is waiting for print jobs (on a ZMQ::SOCKET_SUB socket).
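
A sketch of that print receiver, again assuming the php-zmq extension and a hypothetical host/port (pos-server:5556):

<?php

// Print receiver on the workstation: subscribe to the dispatcher and
// handle every receipt URL that gets published.
$context = new ZMQContext();

$sub = $context->getSocket(ZMQ::SOCKET_SUB);
$sub->connect('tcp://pos-server:5556');
$sub->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, '');  // subscribe to all messages

while (true) {
    $url = $sub->recv();                       // blocks until a job arrives
    // Here you would fetch the receipt page and send it to the
    // locally attached receipt printer.
    echo 'Printing receipt: ' . $url . PHP_EOL;
}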

I can add the final piece of the puzzle to the web app by opening a ZMQ::SOCKET_PUSH socket and sending the URL of the receipt page.
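
That last part could look like this (same assumptions as above, with a hypothetical receipt URL):

<?php

// In the web app, right after the sale is completed: push the receipt
// URL to the dispatcher on the server.
$context = new ZMQContext();

$push = $context->getSocket(ZMQ::SOCKET_PUSH);
$push->connect('tcp://pos-server:5555');

$push->send('http://pos-server/receipt/1234'); // hypothetical receipt URL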

Now I have created a lightweight system for controlling the receipt printer, instead of using cronjobs (or scheduled tasks) to check for print jobs.

Some remarks:

  • I could remove the print-dispatcher part, and let the web app connect to the print receiver directly, but I prefer to have a stable part (the binding sockets) on the known server (so both connecting sockets know the host to connect to).
  • The HTML page could be transported over ZeroMQ, but I like the extra request so the web app is sure the receipt is printed.

Finding memory leaks in PHP objects

I recently spent quite some time figuring out why a (CLI) PHP script was eating all the memory. In PHP, memory leaks mostly show up in long-running scripts. In my case, the script was doing calculations on (a lot of) database records.

The script crashed all the time because it was hitting the memory_limit. After trying to figure out what was going wrong (using Xdebug’s function traces, PHP’s garbage collection, the unset trick of Paul M. Jones, etc…), I turned to a simple but effective way of inspecting the (problem) object.

Write your object to a file from time to time, and diff it.

Dumping your object:

<?php
// Dump a snapshot of the object before the suspect code runs.
file_put_contents(
    '/tmp/myobject_' . time() . '.txt',
    print_r($object, true)
);

// do other logic

// Dump a second snapshot afterwards (make sure at least a second has
// passed, otherwise time() will give you the same filename).
file_put_contents(
    '/tmp/myobject_' . time() . '.txt',
    print_r($object, true)
);

Now you can see how big your object is getting (just by watching the filesize):

$ ls -al /tmp/myobject_*

And now you can pick two files to actually see what is causing the problem:

$ diff /tmp/myobject_1357140320.txt /tmp/myobject_1357140450.txt
1236a1237,1247
>             [931277dc8ecbbb394b7f5454f64c5d0c] => Array
>                 (
>                     [hash] => 4143484432
>                     [mtime] => 1357137505
>                     [expire] => 1357141105
>                     [tags] => Array
>                         (
>                         )
>
>                 )

In my case Zend_Db_Profiler was enabled and was keeping track of all the queries in memory.
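
In a case like that, the quickest way to stop the leak is to switch the profiler off; a minimal sketch, assuming $db is the Zend_Db adapter instance:

<?php

// Stop Zend_Db_Profiler from accumulating every executed query in memory.
$db->getProfiler()->setEnabled(false);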

Installing PEAR

This post is just a quick notice/warning for everyone wanting to install PEAR (especially on Windows): always download the latest go-pear.phar from http://pear.php.net/go-pear.phar.

Before running the famous php -d phar.require_hash=0 go-pear.phar command, make sure the timestamp of the phar file is somewhere this year. For some reason PHP (and in my case Zend Server CE) always ships with the phar file from 2008…

It will save you a lot of time.

make test every PHP version and send feedback

Since David Coallier’s talk during PHPBenelux, I realized the importance of running make test on all PHP releases and sending feedback to PHP.

It is so easy that from now on I will run it every time a new release is announced.

If you’re running on a Mac, you might want to install Xcode so you can run make on the command line. If you’re on Linux, you’re all set to go.

How to do it (the commands are condensed into a single block after this list):

  • Open a shell
  • Create a directory (e.g. mkdir ~/src/php)
  • Download the latest version into the directory (e.g. wget http://downloads.php.net/stas/php-5.4.0RC6.tar.gz)
  • Untar the file (e.g. tar -xzf php-5.4.0RC6.tar.gz)
  • Go into the directory and run ./configure, you should get output like:
    checking for grep that handles long lines and -e... /usr/bin/grep
    checking for egrep... /usr/bin/grep -E
    checking for a sed that does not truncate output... /usr/bin/sed
    checking build system type... i386-apple-darwin11.2.0
    checking host system type... i386-apple-darwin11.2.0
    checking target system type... i386-apple-darwin11.2.0
    ...
    Thank you for using PHP. 
  • After that, you can run make and should get a lot of output ending in:
    Build complete.
    Don't forget to run 'make test'.
  • After that, you should do as the previous command suggests and run make test. You should see all tests passing by.
  • After all tests are run, you will get a summary. In my case, I got the message: “You may have found a problem in PHP.”
    Bug #55509 (segfault on x86_64 using more than 2G memory) [Zend/tests/bug55509.phpt]
    Sort with SORT_LOCALE_STRING [ext/standard/tests/array/locale_sort.phpt]
  • Whether you get an error or not, you should always send the report back to PHP. You can do that by just answering Y to the question: “Do you want to send this report now?”
  • Enter your email address and PHP will thank you.
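
Condensed into a single shell session (using the same 5.4.0RC6 tarball as above), the whole procedure looks like this:

mkdir -p ~/src/php && cd ~/src/php
wget http://downloads.php.net/stas/php-5.4.0RC6.tar.gz
tar -xzf php-5.4.0RC6.tar.gz
cd php-5.4.0RC6
./configure
make
make test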

How hard was that?

Deleting Subversion repository files (for real)

Keeping files and directories in the repository is one of the key principles of Subversion, so once you’ve committed something, it’s there forever. You can delete files, but they still exist somewhere in the repository, so you can go back in time.

But there is always that time when you’ve (accidentally) committed a password file, a directory full of hi-res images, or some other content you don’t want other people to see and want to get rid of. That’s where the hard part starts…

After searching the internet and checking the Subversion FAQ it looks quite hard, but with some guidance, you’ll find out it’s not.

Finding the problems

First you have to do a (complete) checkout of the repository you want to clean:

svn co http://svn.apache.org/repos/asf/ asf

Now you can start to locate the problems and delete the files/directories (not svn delete!):

rm -Rf subversion/trunk/tools/buildbot;
rm -Rf subversion/trunk/README;
rm -Rf subversion/trunk/build;

When you’re done deleting files and directories, you can generate a list of ‘missing’ files.

Checking your files:

svn status
!      subversion/trunk/tools/buildbot
!      subversion/trunk/README
!      subversion/trunk/build

Generating that list (outside the working copy):

svn status | sed s/"!      "// > ../filter.txt

Fixing the problems

Now that you have a nice list of files to delete (make sure it includes the parent directories, right up to the root), log in on the server hosting the repository.

We first want to make sure there is a backup:

svnadmin dump /var/svn/asf > ~/backup_svn/asf.dump

Now we can use that backup file as the input file for the svndumpfilter command. In combination with the filter list we generated on the client, we can create a filtered dump:

svndumpfilter exclude `cat filter.txt` < ~/backup_svn/asf.dump > asf_filtered.dump

To load that file back into the repository, we should ‘delete’ the original repository first. (The httpd commands are just to make sure no one commits while we are processing the changes.)

/etc/init.d/httpd stop;
mv /var/svn/asf ~/backup_svn/asf;
svnadmin create --fs-type fsfs /var/svn/asf;
svnadmin load /var/svn/asf < asf_filtered.dump;
/etc/init.d/httpd start;

Please note that directories and command line options can be different, but the outcome should be the same.

Now we have the same repository, without the (accidentally) committed files/directories!

New problems

After the filtering, it is possible that entire revisions end up empty. It is possible to skip those empty revisions, but then all revisions are renumbered, and that could be problematic for other software (e.g. Trac).
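
If that renumbering is acceptable in your situation, svndumpfilter can drop the empty revisions for you with its --drop-empty-revs and --renumber-revs options; a sketch of the filtering step from above with those flags added:

svndumpfilter exclude --drop-empty-revs --renumber-revs `cat filter.txt` \
    < ~/backup_svn/asf.dump > asf_filtered.dump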

I am officially online

PING blog.jachim.be (82.146.116.150): 56 data bytes
64 bytes from 82.146.116.150: icmp_seq=0 ttl=55 time=11.934 ms
64 bytes from 82.146.116.150: icmp_seq=1 ttl=55 time=12.418 ms
64 bytes from 82.146.116.150: icmp_seq=2 ttl=55 time=12.381 ms
64 bytes from 82.146.116.150: icmp_seq=3 ttl=55 time=11.878 ms
64 bytes from 82.146.116.150: icmp_seq=4 ttl=55 time=12.324 ms

--- blog.jachim.be ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 11.878/12.187/12.418/0.232 ms