I have owned a Nikon D3100 for a few years now. I generally have it set up to capture NEF (raw) images alongside in-camera-processed JPG files, so I end up with a load of files on my SD card called _DSCNNNN.NEF and _DSCNNNN.JPG. I sometimes edit the NEF files in Nikon Capture NX, write a new JPG out as _DSCNNNN_copy.JPG and upload it to Picasa, Nikon or Google Plus.
This morning I had a print head jam on my Kodak Hero 5.1 and, despite searching the net, I could not find anyone with a solution except a YouTube video for a different Kodak printer.
Obviously, you follow this at your own risk.
I've just uploaded DBD::ODBC 1.46_2 to the CPAN. In the process of writing Some Common Unicode Problems and Solutions using Perl DBD::ODBC and MS SQL Server (and its accompanying GitHub repo) I discovered a serious bug in the way DBD::ODBC can attempt to insert Unicode characters into char/varchar/longvarchar columns. This experimental release fixes that issue, but it does mean this release contains a significant change in behaviour. Since 1.46_1, yet another Unicode fix has been added too.
I've just uploaded DBD::ODBC 1.46_1 to the CPAN. In the process of writing Some Common Unicode Problems and Solutions using Perl DBD::ODBC and MS SQL Server (and its accompanying GitHub repo) I discovered a serious bug in the way DBD::ODBC can attempt to insert Unicode characters into char/varchar/longvarchar columns. This experimental release fixes that issue, but it does mean this release contains a significant change in behaviour.
I've just uploaded DBD::ODBC 1.45 to the CPAN. As always, I'd draw your attention to a few small changes in behaviour. The changes since 1.43 are listed below, but I need to warn you about an upcoming change first.
WARNING - PLEASE READ:
The next development cycle of DBD::ODBC will contain significant changes to the way Unicode strings in your Perl scripts are inserted into CHAR and VARCHAR columns. In an attempt to write up exactly how this all works (see https://github.com/mjegh/dbd_odbc_sql_server_unicode and http://email@example.com/msg07364.html) I have discovered that Unicode strings are not being inserted into CHAR/VARCHAR columns correctly in the Unicode build of DBD::ODBC. There may also be changes to how Unicode strings are read back from the database, but I have not evaluated that yet.
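To see why this matters, remember that a Unicode string in Perl is a sequence of characters, not octets; what ends up in a CHAR/VARCHAR column depends on how the driver encodes the string at bind time. A minimal illustration (my own sketch, pure Perl, no database required):

```perl
use strict;
use warnings;
use Encode qw(encode);

# A string containing a character outside ASCII.
my $str = "caf\x{e9}";    # "café"

# Perl sees 4 characters...
print length($str), "\n";    # 4

# ...but encoded as UTF-8 it is 5 octets.
my $utf8 = encode('UTF-8', $str);
print length($utf8), "\n";    # 5

# A driver that binds the character string as a byte type
# (SQL_CHAR) rather than a wide type (SQL_WCHAR) can therefore
# mangle data on its way into char/varchar columns.
```

With DBI you can request an explicit bind type, e.g. `$sth->bind_param(1, $str, SQL_WVARCHAR)` after `use DBI qw(:sql_types);`, though whether and how that is honoured is driver-specific.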
Please keep an eye out for the DBD::ODBC development releases (1.46_N) and make sure you test them before the next full release is made. In the meantime, if you are using Unicode with DBD::ODBC and have any comments, have hit any strange issues or are using any workarounds, I strongly urge you to contact me now before I get too far into these changes.
Having skipped the announcement for 1.44_3, here is 1.44_4. I expect this to become 1.45 in the next week unless someone finds something I've badly broken. You should note there are a few changes in behaviour.
DBD::ODBC has had increasing support for Unicode since version 1.16. However, Unicode seems to be an issue that causes a lot of confusion, especially when it comes to DBI and DBDs. The mantra of "just DWIM" is complicated by the fact that most DBDs were originally written with no Unicode support.
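The consequence of a DBD without Unicode support is typically that you get back the raw octets stored in the database and have to decode them yourself. A hypothetical sketch of what that looks like from the script's point of view:

```perl
use strict;
use warnings;
use Encode qw(decode);

# Suppose a driver with no Unicode support hands back the raw
# UTF-8 octets for "café" exactly as stored in the database:
my $octets = "caf\xc3\xa9";

# Until decoded, Perl treats this as 5 unrelated characters:
print length($octets), "\n";    # 5

# Decoding the octets yields the 4-character string you wanted:
my $chars = decode('UTF-8', $octets);
print length($chars), "\n";    # 4
```

A Unicode-aware DBD does this decoding (and the matching encoding on the way in) for you, which is exactly why retrofitting it changes behaviour for existing scripts.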
I'd heard people mention Travis CI but never really paid much attention to it until yesterday, when I issued a pull request from my GitHub account for a few minor fixes to DBI. I wondered whether my pull request had been applied and took a quick look at DBI on GitHub to see that it was still pending. Clicking on my pull request, I saw an "All is well — The Travis CI build passed" message and clicked on it.
After being forced to drop the Subversion repository used by DBD::ODBC when perl.org dropped Subversion support, I moved it to GitHub. Then my friends working on DBI-related modules set up the perl5-dbi organisation and Merijn (Tux) helpfully moved DBD::ODBC under that umbrella for me - thanks Tux.
I spent a small amount of time this morning debugging a problem in a script I was modifying, because a while loop using each seemed to loop forever:
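The original script isn't reproduced here, but a classic way to fall into this trap is calling keys (or values) on the hash inside the loop: keys resets the hash's internal iterator, so each starts from the beginning on every pass and the loop never reaches the end. A sketch of my own (with a guard added so it actually terminates):

```perl
use strict;
use warnings;

my %h = (a => 1, b => 2, c => 3);

my $iterations = 0;
while (my ($key, $value) = each %h) {
    # keys() resets the hash iterator that each() relies on,
    # so the next each() call starts over at the first pair
    # and the loop can never exhaust the hash.
    my $count = keys %h;

    # Guard so this demonstration terminates:
    last if ++$iterations > 10;
}
print "$iterations iterations for a 3-key hash\n";    # 11, not 3
```

The fix is simply to take the count (or any keys/values call) outside the loop, before iteration starts.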